diff --git "a/abs_29K_G/test_abstract_long_2405.03606v1.json" "b/abs_29K_G/test_abstract_long_2405.03606v1.json" new file mode 100644--- /dev/null +++ "b/abs_29K_G/test_abstract_long_2405.03606v1.json" @@ -0,0 +1,146 @@ +{ + "url": "http://arxiv.org/abs/2405.03606v1", + "title": "Strang Splitting for Parametric Inference in Second-order Stochastic Differential Equations", + "abstract": "We address parameter estimation in second-order stochastic differential\nequations (SDEs), prevalent in physics, biology, and ecology. Second-order SDE\nis converted to a first-order system by introducing an auxiliary velocity\nvariable raising two main challenges. First, the system is hypoelliptic since\nthe noise affects only the velocity, making the Euler-Maruyama estimator\nill-conditioned. To overcome that, we propose an estimator based on the Strang\nsplitting scheme. Second, since the velocity is rarely observed we adjust the\nestimator for partial observations. We present four estimators for complete and\npartial observations, using full likelihood or only velocity marginal\nlikelihood. These estimators are intuitive, easy to implement, and\ncomputationally fast, and we prove their consistency and asymptotic normality.\nOur analysis demonstrates that using full likelihood with complete observations\nreduces the asymptotic variance of the diffusion estimator. With partial\nobservations, the asymptotic variance increases due to information loss but\nremains unaffected by the likelihood choice. However, a numerical study on the\nKramers oscillator reveals that using marginal likelihood for partial\nobservations yields less biased estimators. We apply our approach to\npaleoclimate data from the Greenland ice core and fit it to the Kramers\noscillator model, capturing transitions between metastable states reflecting\nobserved climatic conditions during glacial eras.", + "authors": "Predrag Pilipovic, Adeline Samson, Susanne Ditlevsen", + "published": "2024-05-06", + "updated": "2024-05-06", + "primary_cat": "stat.ME", + "cats": [ + "stat.ME", + "math.ST", + "stat.TH" + ], + "label": "Original Paper", + "paper_cat": "Diffusion AND Model", + "gt": "We address parameter estimation in second-order stochastic differential\nequations (SDEs), prevalent in physics, biology, and ecology. Second-order SDE\nis converted to a first-order system by introducing an auxiliary velocity\nvariable raising two main challenges. First, the system is hypoelliptic since\nthe noise affects only the velocity, making the Euler-Maruyama estimator\nill-conditioned. To overcome that, we propose an estimator based on the Strang\nsplitting scheme. Second, since the velocity is rarely observed we adjust the\nestimator for partial observations. We present four estimators for complete and\npartial observations, using full likelihood or only velocity marginal\nlikelihood. These estimators are intuitive, easy to implement, and\ncomputationally fast, and we prove their consistency and asymptotic normality.\nOur analysis demonstrates that using full likelihood with complete observations\nreduces the asymptotic variance of the diffusion estimator. With partial\nobservations, the asymptotic variance increases due to information loss but\nremains unaffected by the likelihood choice. However, a numerical study on the\nKramers oscillator reveals that using marginal likelihood for partial\nobservations yields less biased estimators. 
We apply our approach to\npaleoclimate data from the Greenland ice core and fit it to the Kramers\noscillator model, capturing transitions between metastable states reflecting\nobserved climatic conditions during glacial eras.", + "main_content": "Introduction Second-order stochastic differential equations (SDEs) are an effective instrument for modeling complex systems showcasing both deterministic and stochastic dynamics, which incorporate the second derivative of a variable the acceleration. These models are extensively applied in many fields, including physics (Rosenblum and Pikovsky, 2003), molecular dynamics (Leimkuhler and Matthews, 2015), ecology (Johnson et al., 2008; Michelot and Blackwell, 2019), paleoclimate research (Ditlevsen et al., 2002), and neuroscience (Ziv et al., 1994; Jansen and Rit, 1995). arXiv:2405.03606v1 [stat.ME] 6 May 2024 \fStrang Splitting Parameter Estimator for Second-order SDEs A PREPRINT The general form of a second-order SDE in Langevin form is given as follows: \u00a8 Xt = F(Xt, \u02d9 Xt, \u03b2) + \u03a3\u03bet. (1) Here, Xt \u2208Rd denotes the variable of interest, the dot indicates derivative with respect to time t, drift F represents the deterministic force, and \u03bet is a white noise representing the system\u2019s random perturbations around the deterministic force. We assume that \u03a3 is constant, that is the noise is additive. The main goal of this study is to estimate parameters in second-order SDEs. We first reformulate the d-dimensional second-order SDE (1) into a 2d-dimensional SDE in It\u00f4\u2019s form. We define an auxiliary velocity variable, and express the second-order SDE in terms of its position Xt and velocity Vt: dXt = Vt dt, X0 = x0, dVt = F (Xt, Vt; \u03b2) dt + \u03a3 dWt, V0 = v0, (2) where Wt is a standard Wiener process. We refer to Xt and Vt as the smooth and rough coordinates, respectively. A specific example of model (2) is F(x, v) = \u2212c(x, v)v \u2212\u2207U(x), for some function c(\u00b7) and potential U(\u00b7). Then, model (2) is called a stochastic damping Hamiltonian system. This system describes the motion of a particle subjected to potential, dissipative, and random forces (Wu, 2001). An example of a stochastic damping Hamiltonian system is the Kramers oscillator introduced in Section 2.1. Let Yt = (X\u22a4 t , V\u22a4 t )\u22a4, e F(x, v; \u03b2) = (v\u22a4, F(x, v; \u03b2)\u22a4)\u22a4and e \u03a3 = (0\u22a4, \u03a3\u22a4)\u22a4. Then (2) is formulated as dYt = e F (Yt; \u03b2) dt + e \u03a3 dWt, Y0 = y0. (3) The notation e over an object indicates that it is associated with process Yt. Specifically, the object is of dimension 2d or 2d \u00d7 2d. When it exists, the unique solution of (3) is called a diffusion or diffusion process. System (3) is usually not fully observed since the velocity Vt is not observable. Thus, our primary objective is to estimate the underlying drift parameter \u03b2 and the diffusion parameter \u03a3, based on discrete observations of either Yt (referred to as complete observation case), or only Xt (referred to as partial observation case). Diffusion Yt is said to be hypoelliptic since the matrix e \u03a3e \u03a3\u22a4= \u00140 0 0 \u03a3\u03a3\u22a4 \u0015 (4) is not of full rank, while Yt admits a smooth density. Thus, (2) is a subclass of a larger class of hypoelliptic diffusions. Parametric estimation for hypoelliptic diffusions is an active area of research. Ditlevsen and S\u00f8rensen (2004) studied discretely observed integrated diffusion processes. 
They proposed to use prediction-based estimating functions, which are suitable for non-Markovian processes and which do not require access to the unobserved component. They proved consistency and asymptotic normality of the estimators for N \u2192\u221e, but without any requirements on the sampling interval h. Certain moment conditions are needed to obtain results for fixed h, which are often difficult to fulfill for nonlinear drift functions. The estimator was applied to paleoclimate data in Ditlevsen et al. (2002), similar to the data we analyze in Section 5. Gloter (2006) also focused on parametric estimation for discretely observed integrated diffusion processes, introducing a contrast function using the Euler-Maruyama discretization. He studied the asymptotic properties as the sampling interval h \u21920 and the sample size N \u2192\u221e, under the so-called rapidly increasing experimental design Nh \u2192\u221e and Nh2 \u21920. To address the ill-conditioned contrast from the Euler-Maruyama discretization, he suggested using only the rough equations of the SDE. He proposed to recover the unobserved integrated component through the finite difference approximation (Xtk+1 \u2212Xtk)/h. This approximation makes the estimator biased and requires a correction factor of 3/2 in one of the terms of the contrast function for partial observations. Consequently, the correction increases the asymptotic variance of the estimator of the diffusion parameter. Samson and Thieullen (2012) expanded the ideas of (Gloter, 2006) and proved the results of (Gloter, 2006) in more general models. Similar to (Gloter, 2006), their focus was on contrasts using the Euler-Maruyama discretization limited to only the rough equations. Pokern et al. (2009) proposed an It\u00f4-Taylor expansion, adding a noise term of order h3/2 to the smooth component in the numerical scheme. They argued against the use of finite differences for approximating unobserved components. Instead, he suggested using the It\u00f4-Taylor expansion leading to non-degenerate conditionally Gaussian approximations of the transition density and using Markov Chain Monte Carlo (MCMC) Gibbs samplers for conditionally imputing missing components based on the observations. They found out that this approach resulted in a biased estimator of the drift parameter of the rough component. 2 \fStrang Splitting Parameter Estimator for Second-order SDEs A PREPRINT Ditlevsen and Samson (2019) focused on both filtering and inference methods for complete and partial observations. They proposed a contrast estimator based on the strong order 1.5 scheme (Kloeden and Platen, 1992), which incorporates noise of order h3/2 into the smooth component, similar to (Pokern et al., 2009). Moreover, they retained terms of order h2 in the mean, which removed the bias in the drift parameters noted in (Pokern et al., 2009). They proved consistency and asymptotic normality under complete observations, with the standard rapidly increasing experimental design Nh \u2192\u221eand Nh2 \u21920. They adopted an unconventional approach by using two separate contrast functions, resulting in marginal asymptotic results rather than a joint central limit theorem. The model was limited to a scalar smooth component and a diagonal diffusion coefficient matrix for the rough component. Melnykova (2020) developed a contrast estimator using local linearization (LL) (Ozaki, 1985; Shoji and Ozaki, 1998; Ozaki et al., 2000) and compared it to the least-squares estimator. 
She employed local linearization of the drift function, providing a non-degenerate conditional Gaussian discretization scheme, enabling the construction of a contrast estimator that achieves asymptotic normality under the standard conditions Nh \u2192\u221eand Nh2 \u21920. She proved a joint central limit theorem, bypassing the need for two separate contrasts as in Ditlevsen and Samson (2019). The models in Ditlevsen and Samson (2019) and Melnykova (2020) allow for parameters in the smooth component of the drift, in contrast to models based on second-order differential equations. Recent work by Gloter and Yoshida (2020, 2021) introduced adaptive and non-adaptive methods in hypoelliptic diffusion models, proving asymptotic normality in the complete observation regime. In line with this work, we briefly review their non-adaptive estimator. It is based on a higher-order It\u00f4-Taylor expansion that introduces additional Gaussian noise onto the smooth coordinates, accompanied by an appropriate higher-order mean approximation of the rough coordinates. The resulting estimator was later termed the local Gaussian (LG), which should be differentiated from LL. The LG estimator can be viewed as an extension of the estimator proposed in Ditlevsen and Samson (2019), with fewer restrictions on the class of models. Gloter and Yoshida (2020, 2021) found that using the full SDE to create a contrast reduces the asymptotic variance of the estimator of the diffusion parameter compared to methods using only rough coordinates in the case of complete observations. The most recent contributions are Iguchi et al. (2023a,b); Iguchi and Beskos (2023), building on the foundation of the LG estimator and focusing on high-frequency regimes addressing limitations in earlier methods. Iguchi et al. (2023b) presented a new closed-form contrast estimator for hypoelliptic SDEs (denoted as Hypo-I) based on Edgeworth-type density expansion and Malliavin calculus that achieves asymptotic normality under the less restrictive condition of Nh3 \u21920. Iguchi et al. (2023a) focused on a highly degenerate class of SDEs (denoted as Hypo-II) where smooth coordinates split into further sub-groups and proposed estimators for both complete and partial observation settings. Iguchi and Beskos (2023) further refined the conditions for estimators asymptotic normality for both Hypo-I and Hypo-II under a weak design Nhp \u21920, for p \u22652. The existing methods are generally based on approximations with varying degrees of refinements to correct for possible nonlinearities. This implies that they quickly degrade for highly nonlinear models if the step size is increased. In particular, this is the case for Hamiltonian systems. Instead, we propose to use splitting schemes, more precisely the Strang splitting scheme. Splitting schemes are established techniques initially developed for solving ordinary differential equations (ODEs) and have proven to be effective also for SDEs (Ableidinger et al., 2017; Buckwar et al., 2022; Pilipovic et al., 2024). These schemes yield accurate results in many practical applications since they incorporate nonlinearities in their construction. This makes them particularly suitable for second-order SDEs, where they have been widely used. 
Early work in dissipative particle dynamics (Shardlow, 2003; Serrano et al., 2006), applications to molecular dynamics (Vanden-Eijnden and Ciccotti, 2006; Melchionna, 2007; Leimkuhler and Matthews, 2015) and studies on internal particles (Pavliotis et al., 2009) all highlight the scheme\u2019s versatility. Burrage et al. (2007), Bou-Rabee and Owhadi (2010), and Abdulle et al. (2015) focused on the long-run statistical properties such as invariant measures. Bou-Rabee (2017); Br\u00e9hier and Gouden\u00e8ge (2019) and Adams et al. (2022) used splitting schemes for stochastic partial differential equations (SPDEs). Despite the extensive use of splitting schemes in different areas, statistical applications have been lacking. We have recently proposed statistical estimators for elliptic SDEs (Pilipovic et al., 2024). The straightforward and intuitive schemes lead to robust, easy-to-implement estimators, offering an advantage over more numerically intensive and less user-friendly state-of-the-art methods. We use the Strang splitting scheme to approximate the transition density between two consecutive observations and derive the pseudo-likelihood function since the exact likelihood function is often unknown or intractable. Then, to estimate parameters, we employ maximum likelihood estimation (MLE). However, two specific statistical problems arise due to hypoellipticity and partial observations. First, hypoellipticity leads to degenerate Euler-Maruyama transition schemes, which can be addressed by constructing the pseudo-likelihood solely from the rough equations of the SDE, referred to as the rough likelihood hereafter. The 3 \fStrang Splitting Parameter Estimator for Second-order SDEs A PREPRINT Strang splitting technique enables the estimator to incorporate both smooth and rough components (referred to as the full likelihood). It is also possible to construct Strang splitting estimators using only the rough likelihood, raising the question of which estimator performs better. Our results are in line with Gloter and Yoshida (2020, 2021) in the complete observation setting, where we find that using the full likelihood reduces the asymptotic variance of the diffusion estimator. We found the same results in the simulation study for the LL estimator proposed by Melnykova (2020). Second, we suggest to treat the unobserved velocity by approximating it using finite difference methods. While Gloter (2006) and Samson and Thieullen (2012) exclusively use forward differences, we investigate also central and backward differences. The forward difference approach leads to a biased estimator unless it is corrected. One of the main contributions of this work is finding suitable corrections of the pseudo-likelihoods for different finite difference approximations such that the Strang estimators are asymptotically unbiased. This also ensures consistency of the diffusion parameter estimator, at the cost of increasing its asymptotic variance. When only partial observations are available, we explore the impact of using the full likelihood versus the rough likelihood and how different finite differentiation approximations influence the parametric inference. We find that the choice of likelihood does not affect the asymptotic variance of the estimator. However, our simulation study on the Kramers oscillator suggests that using the full likelihood in finite sample setups introduce more bias than using only the rough marginal likelihood, which is the opposite of the complete observation setting. 
Finally, we analyze a paleoclimate ice core dataset from Greenland using a second-order SDE. The main contributions of this paper are: 1. We extend the Strang splitting estimator of (Pilipovic et al., 2024) to hypoelliptic models given by second-order SDEs, including appropriate correction factors to obtain consistency. 2. When complete observations are available, we show that the asymptotic variance of the estimator of the diffusion parameter is smaller when maximizing the full likelihood. In contrast, for partial observations, we show that the asymptotic variance remains unchanged regardless of using the full or marginal likelihood of the rough coordinates. 3. We discuss the influence on the statistical properties of using the forward difference approximation for imputing the unobserved velocity variables compared to using the backward or the central difference. 4. We evaluate the performance of the estimators through a simulation study of a second-order SDE, the Kramers oscillator. Additionally, we show numerically in a finite sample study that the marginal likelihood for partial observations is more favorable than the full likelihood. 5. We fit the Kramers oscillator to a paleoclimate ice core dataset from Greenland and estimate the average time needed to pass between two metastable states. The structure of the paper is as follows. In Section 2, we introduce the class of SDE models, define hypoellipticity, introduce the Kramers oscillator, and explain the Strang splitting scheme and its associated estimators. The asymptotic properties of the estimator are established in Section 3. The theoretical results are illustrated in a simulation study on the Kramers Oscillator in Section 4. Section 5 illustrates our methodology on the Greenland ice core data, while the technical results and the proofs of the main theorems and properties are in Section 6 and Supplementary Material S1, respectively. Notation. We use capital bold letters for random vectors, vector-valued functions, and matrices, while lowercase bold letters denote deterministic vectors. \u2225\u00b7 \u2225denotes both the L2 vector norm in Rd. Superscript (i) on a vector denotes the i-th component, while on a matrix it denotes the i-th column. Double subscript ij on a matrix denotes the component in the i-th row and j-th column. The transpose is denoted by \u22a4. Operator Tr(\u00b7) returns the trace of a matrix and det(\u00b7) the determinant. Id denotes the d-dimensional identity matrix, while 0d\u00d7d is a d-dimensional zero square matrix. We denote by [ai]d i=1 a vector with coordinates ai, and by [bij]d i,j=1 a matrix with coordinates bij, for i, j = 1, . . . , d. For a real-valued function g : Rd \u2192R, \u2202x(i)g(x) denotes the partial derivative with respect to x(i) and \u22022 x(i)x(j)g(x) denotes the second partial derivative with respect to x(i) and x(j). The nabla operator \u2207x denotes the gradient vector of g with respect of x, that is, \u2207xg(x) = [\u2202x(i)g(x)]d i=1. H denotes the Hessian matrix of function g, Hg(x) = [\u2202x(i)x(j)g(x)]d i,j=1. For a vector-valued function F : Rd \u2192Rd, the differential operator Dx denotes the Jacobian matrix DxF(x) = [\u2202x(i)F (j)(x)]d i,j=1. Let R represent a vector (or a matrix) valued function defined on (0, 1) \u00d7 Rd (or (0, 1) \u00d7 Rd\u00d7d), such that, for some constant C, \u2225R(a, x)\u2225< aC(1 + \u2225x\u2225)C for all a, x. When denoted by R, it refers to a scalar function. For an open set A, the bar A indicates closure. 
We write P \u2212 \u2192for convergence in probability P. 4 \fStrang Splitting Parameter Estimator for Second-order SDEs A PREPRINT 2 Problem setup Let Y = (Yt)t\u22650 in (3) be defined on a complete probability space (\u2126, F, P\u03b8) with a complete right-continuous filtration F = (Ft)t\u22650, and let the d-dimensional Wiener process W = (Wt)t\u22650 be adapted to Ft. The probability measure P\u03b8 is parameterized by the parameter \u03b8 = (\u03b2, \u03a3). Rewrite equation (3) as follows: dYt = e A(\u03b2)(Yt \u2212e b(\u03b2)) dt + e N (Yt; \u03b2) dt + e \u03a3 dWt, (5) where e A(\u03b2) = \u0014 0d\u00d7d Id Ax(\u03b2) Av(\u03b2) \u0015 , e b(\u03b2) = \u0014 b(\u03b2) 0d \u0015 , e N(x, v; \u03b2) = \u0014 0d N(x, v; \u03b2) \u0015 . (6) Function F in (2) is thus split as F(x, v; \u03b2) = Ax(\u03b2)(x \u2212b(\u03b2)) + Av(\u03b2)v + N(x, v; \u03b2). Let \u0398\u03b2 \u00d7 \u0398\u03a3 = \u0398 denote the closure of the parameter space with \u0398\u03b2 and \u0398\u03a3 being two convex open bounded subsets of Rr and Rd\u00d7d, respectively. The function N : R2d \u00d7 \u0398\u03b2 \u2192Rd is assumed locally Lipschitz; functions Ax and Av are defined on \u0398\u03b2 and take values in Rd\u00d7d; and the parameter matrix \u03a3 takes values in Rd\u00d7d. The matrix \u03a3\u03a3\u22a4is assumed to be positive definite, shaping the variance of the rough coordinates. As any square root of \u03a3\u03a3\u22a4induces the same distribution, \u03a3 is identifiable only up to equivalence classes. Hence, estimation of the parameter \u03a3 means estimation of \u03a3\u03a3\u22a4. The drift function e F in (3) is divided into a linear part given by the matrix e A and a nonlinear part given by e N. The true value of the parameter is denoted by \u03b80 = (\u03b20, \u03a30), and we assume that \u03b80 \u2208\u0398. When referring to the true parameters, we write Ax,0, Av,0, b0, N0(x), F0(x) and \u03a3\u03a3\u22a4 0 instead of Ax(\u03b20), Av(\u03b20), b(\u03b20), N(x; \u03b20), F(x; \u03b20) and \u03a30\u03a3\u22a4 0 , respectively. We write Ax, Av, b, N(x), F(x), and \u03a3\u03a3\u22a4for any parameter \u03b8. 2.1 Example: The Kramers oscillator The abrupt temperature changes during the ice ages, known as the Dansgaard\u2013Oeschger (DO) events, are essential elements for understanding the climate (Dansgaard et al., 1993). These events occurred during the last glacial era spanning approximately the period from 115,000 to 12,000 years before present and are characterized by rapid warming phases followed by gradual cooling periods, revealing colder (stadial) and warmer (interstadial) climate states (Rasmussen et al., 2014). To analyze the DO events in Section 5, we propose a stochastic model of the escape dynamics in metastable systems, the Kramers oscillator (Kramers, 1940), originally formulated to model the escape rate of Brownian particles from potential wells. The escape rate is related to the mean first passage time \u2014 the time needed for a particle to exceed the potential\u2019s local maximum for the first time, starting at a neighboring local minimum. This rate depends on variables such as the damping coefficient, noise intensity, temperature, and specific potential features, including the barrier\u2019s height and curvature at the minima and maxima. We apply this framework to quantify the rate of climate transitions between stadial and interstadial periods. 
This provides an estimate on the probability distribution of the ocurrence of DO events, contributing to our understanding of the global climate system. Following Arnold and Imkeller (2000), we introduce the Kramers oscillator as the stochastic Duffing oscillator an example of a second-order SDE and a stochastic damping Hamiltonian system. The Duffing oscillator (Duffing, 1918) is a forced nonlinear oscillator, featuring a cubic stiffness term. The governing equation is given by: \u00a8 xt + \u03b7 \u02d9 xt + d dxU(xt) = f(t), where U(x) = \u2212ax2 2 + bx4 4 , with a, b > 0, \u03b7 \u22650. (7) The parameter \u03b7 in (7) indicates the damping level, a regulates the linear stiffness, and b determines the nonlinear component of the restoring force. In the special case where b = 0, the equation simplifies to a damped harmonic oscillator. Function f represents the driving force and is usually set to f(t) = \u03b7 cos(\u03c9t), which introduces deterministic chaos (Korsch and Jodl, 1999). When the driving force is f(t) = \u221a2\u03b7T\u03be(t), where \u03be(t) is white noise, equation (7) characterizes the stochastic movement of a particle within a bistable potential well, interpreting T > 0 as the temperature of a heat bath. Setting \u03c3 = \u221a2\u03b7T, equation (7) can be reformulated as an It\u00f4 SDE for variables Xt and Vt = \u02d9 Xt, expressed as: dXt = Vt dt, dVt = \u0012 \u2212\u03b7Vt \u2212d dxU(Xt) \u0013 dt + \u03c3 dWt, (8) 5 \fStrang Splitting Parameter Estimator for Second-order SDEs A PREPRINT where Wt denotes a standard Wiener process. The parameter set of SDE (8) is \u03b8 = {\u03b7, a, b, \u03c32}. The existence and uniqueness of the invariant measure \u03bd0(dx, dy) of (8) is proved in Theorem 3 in (Arnold and Imkeller, 2000). The invariant measure \u03bd0 is linked to the invariant density \u03c00 through \u03bd0(dx, dy) = \u03c00(x, v) dx dy. Here we write \u03c00(x, v) instead of \u03c0(x, v; \u03b80), and \u03c0(x, v) instead of \u03c0(x, v; \u03b8). The Fokker-Plank equation for \u03c0 is given by \u2212v \u2202 \u2202x\u03c0(x, v) + \u03b7\u03c0(x, v) + \u03b7v \u2202 \u2202v \u03c0(x, v) + d dxU(x) \u2202 \u2202v \u03c0(x, v) + \u03c32 2 \u22022 \u2202v2 \u03c0(x, v) = 0. (9) The invariant density that solves the Fokker-Plank equation is: \u03c0(x, v) = C exp \u0012 \u22122\u03b7 \u03c32 U(x) \u0013 exp \u0010 \u2212\u03b7 \u03c32 v2\u0011 , (10) where C is the normalizing constant. The marginal invariant probability of Vt is thus Gaussian with zero mean and variance \u03c32/(2\u03b7). The marginal invariant probability of Xt is bimodal driven by the potential U(x): \u03c0(x) = C exp \u0012 \u22122\u03b7 \u03c32 U(x) \u0013 . (11) At steady state, for a particle moving in any potential U(x) and driven by random Gaussian noise, the position x and velocity v are independent of each other. This is reflected by the decomposition of the joint density \u03c0(x, v) into \u03c0(x)\u03c0(v). Fokker-Plank equation (9) can also be used to derive the mean first passage time \u03c4 which is inversely related to Kramers\u2019 escape rate \u03ba (Kramers, 1940): \u03c4 = 1 \u03ba \u2248 2\u03c0 \u0012q 1 + \u03b72 4\u03c92 \u2212 \u03b7 2\u03c9 \u0013 \u2126 exp \u0012\u2206U T \u0013 , where xbarrier = 0 is the local maximum of U(x) and xwell = \u00b1 p a/b are the local minima, \u03c9 = p |U \u2032\u2032(xbarrier)| = \u221aa, \u2126= p U \u2032\u2032(xwell) = \u221a 2a, and \u2206U = U(xbarrier) \u2212U(xwell) = a2/4b, . 
The formula is derived assuming strong friction, or an over-damped system (\u03b7 \u226b\u03c9), and a small parameter T/\u2206U \u226a1, indicating sufficiently deep potential wells. For the potential defined in (7), the mean waiting time \u03c4 is then approximated by \u03c4 \u2248 \u221a 2\u03c0 q a + \u03b72 4 \u2212\u03b7 2 exp \u0012 a2\u03b7 2b\u03c32 \u0013 . (12) 2.2 Hypoellipticity The SDE (5) is said to be hypoelliptic if its quadratic diffusion matrix e \u03a3e \u03a3\u22a4is not of full rank, while its solutions admit a smooth transition density with respect to the Lebesgue measure. According to H\u00f6rmander\u2019s theorem (Nualart, 2006), this is fulfilled if the SDE in its Stratonovich form satisfies the weak H\u00f6rmander condition. Since \u03a3 does not depend on y, the It\u00f4 and Stratonovich forms coincide. We begin by recalling the concept of Lie brackets: for smooth vector fields f, g : R2d \u2192R2d, the i-th component of the Lie bracket, [f, g](i), is defined as [f, g](i) := D\u22a4 y g(i)(y)f(y) \u2212D\u22a4 y f (i)(y)g(y). We define the set H of vector fields by initially including e \u03a3(i), i = 1, 2, ..., 2d, and then recursively adding Lie brackets H \u2208H \u21d2[e F, H], [e \u03a3(1), H], . . . , [e \u03a3(2d), H] \u2208H. The weak H\u00f6rmander condition is met if the vectors in H span R2d at every point y \u2208R2d. The initial vectors span {(0, v) \u2208R2d | v \u2208Rd}, a d-dimensional subspace. We therefore need to verify the existence of some H \u2208H with a non-zero first element. The first iteration of the system yields [e F, e \u03a3(i)](1) = \u2212\u03a3(i), [e \u03a3(i), e \u03a3(j)](1) = 0, for i, j = 1, 2, ..., 2d. The first equation is non-zero, as are all subsequent iterations. Thus, the second-order SDE defined in (5) is always hypoelliptic. 6 \fStrang Splitting Parameter Estimator for Second-order SDEs A PREPRINT 2.3 Assumptions The following assumptions are a generalization of those presented in (Pilipovic et al., 2024). Let T > 0 be the length of the observed time interval. We assume that (5) has a unique strong solution Y = {Yt | t \u2208[0, T]}, adapted to F = {Ft | t \u2208[0, T]}, which follows from the following first two assumptions (Theorem 2 in Alyushina (1988), Theorem 1 in Krylov (1991), Theorem 3.5 in Mao (2007)). We need the last three assumptions to prove the properties of the estimators. (A1) Function N is twice continuously differentiable with respect to both y and \u03b8, i.e., N \u2208C2. Moreover, it is globally one-sided Lipschitz continuous with respect to y on R2d \u00d7 \u0398\u03b2. That is, there exists a constant C > 0 such that for all y1, y2 \u2208R2d, (y1 \u2212y2)\u22a4(N(y1; \u03b2) \u2212N(y2; \u03b2)) \u2264C\u2225y1 \u2212y2\u22252. (A2) Function N exhibits at most polynomial growth in y, uniformly in \u03b8. Specifically, there exist constants C > 0 and \u03c7 \u22651 such that for all y1, y2 \u2208R2d, \u2225N (y1; \u03b2) \u2212N (y2; \u03b2) \u22252 \u2264C \u00001 + \u2225y1\u22252\u03c7\u22122 + \u2225y2\u22252\u03c7\u22122\u0001 \u2225y1 \u2212y2\u22252. Additionally, its derivatives exhibit polynomial growth in y, uniformly in \u03b8. (A3) The solution Y to SDE (5) has invariant probability \u03bd0(dy). (A4) \u03a3\u03a3\u22a4is invertible on \u0398\u03a3. (A5) \u03b2 is identifiable, that is, if F(y, \u03b21) = F(y, \u03b22) for all y \u2208R2d, then \u03b21 = \u03b22. 
Assumption (A1) ensures finiteness of the moments of the solution X (Tretyakov and Zhang, 2013), i.e., E[ sup t\u2208[0,T ] \u2225Yt\u22252p] < C(1 + \u2225y0\u22252p), \u2200p \u22651. (13) Assumption (A3) is necessary for the ergodic theorem to ensure convergence in distribution. Assumption (A4) ensures that the model (5) is hypoelliptic. Assumption (A5) ensures the identifiability of the drift parameter. 2.4 Strang splitting scheme Consider the following splitting of (5): dY[1] t = e A(Y[1] t \u2212e b) dt + e \u03a3 dWt, Y[1] 0 = y0, (14) dY[2] t = e N(Y[2] t ) dt, Y[2] 0 = y0. (15) There are no assumptions on the choice of e A and e b, and thus the nonlinear function e N. Indeed, we show that the asymptotic results hold for any choice of e A and e b in both the complete and the partial observation settings. This extends the results in Pilipovic et al. (2024), where it is shown to hold in the elliptic complete observation case, as well. While asymptotic results are invariant to the choice of e A and e b, finite sample properties of the scheme and the corresponding estimators are very different, and it is important to choose the splitting wisely. Intuitively, when the process is close to a fixed point of the drift, the linear dynamics are dominating, whereas far from the fixed points, the nonlinearities might be dominating. If the drift has a fixed point y\u22c6, we therefore suggest setting e A = Dye F(y\u22c6) and e b = y\u22c6. This choice is confirmed in simulations (for more details see Pilipovic et al. (2024)). Solution of SDE (14) is an Ornstein\u2013Uhlenbeck (OU) process given by the following h-flow: Y[1] tk = \u03a6[1] h (Y[1] tk\u22121) = e \u00b5h(Y[1] tk\u22121; \u03b2) + e \u03b5h,k, (16) e \u00b5h(y; \u03b2) := e e Ah(y \u2212e b) + e b, (17) e \u2126h = Z h 0 e e A(h\u2212u) e \u03a3e \u03a3\u22a4e e A\u22a4(h\u2212u) du, (18) where e \u03b5h,k i.i.d \u223cN2d(0, e \u2126h) for k = 1, . . . , N. It is useful to rewrite e \u2126h in the following block matrix form, e \u2126h = \" \u2126[SS] h \u2126[SR] h \u2126[RS] h \u2126[RR] h # , (19) 7 \fStrang Splitting Parameter Estimator for Second-order SDEs A PREPRINT where S in the superscript stands for smooth and R stands for rough. The Schur complement of e \u2126h with respect to \u2126[RR] h and the determinant of e \u2126h are given by: \u2126[S|R] h := \u2126[SS] h \u2212\u2126[SR] h (\u2126[RR] h )\u22121\u2126[RS] h , det e \u2126h = det \u2126[RR] h det \u2126[S|R] h . Assumptions (A1)-(A2) ensure the existence and uniqueness of the solution of (15) (Theorem 1.2.17 in Humphries and Stuart (2002)). Thus, there exists a unique function e fh : R2d \u00d7 \u0398\u03b2 \u2192R2d, for h \u22650, such that Y[2] tk = \u03a6[2] h (Y[2] tk\u22121) = e fh(Y[2] tk\u22121; \u03b2). (20) For all \u03b2 \u2208\u0398\u03b2, the h-flow e fh fulfills the following semi-group properties: e f0(y; \u03b2) = y, e ft+s(y; \u03b2) = e ft( e fs(y; \u03b2); \u03b2), t, s \u22650. For y = (x\u22a4, v\u22a4)\u22a4, we have: e fh(x, v; \u03b2) = \u0014 x fh(x, v; \u03b2) \u0015 , (21) where fh(x, v; \u03b2) is the solution of the ODE with vector field N(x, v; \u03b2). We introduce another assumption needed to define the pseudo-likelihood based on the splitting scheme. (A6) Inverse function e f \u22121 h (y; \u03b2) is defined asymptotically for all y \u2208R2d and all \u03b2 \u2208\u0398\u03b2, when h \u21920. 
Then, the inverse of \u02dc fh can be decomposed as: e f \u22121 h (x, v; \u03b2) = \u0014 x f \u22c6\u22121 h (x, v; \u03b2) \u0015 , (22) where f \u22c6\u22121 h (x, v; \u03b2) is the rough part of the inverse of e f \u22121 h . It does not equal f \u22121 h since the inverse does not propagate through coordinates when fh depends on x. We are now ready to define the Strang splitting scheme for model (5). Definition 2.1 (Strang splitting) Let Assumptions (A1)-(A2) hold. The Strang approximation of the solution of (5) is given by: \u03a6[str] h (Y[str] tk\u22121) = (\u03a6[2] h/2 \u25e6\u03a6[1] h \u25e6\u03a6[2] h/2)(Y[str] tk\u22121) = e fh/2(e \u00b5h( e fh/2(Y[str] tk\u22121)) + e \u03b5h,k). (23) Remark 1 The order of composition in the splitting schemes is not unique. Changing the order in the Strang splitting leads to a sum of 2 independent random variables, one Gaussian and one non-Gaussian, whose likelihood is not trivial. Thus, we only use the splitting (23). 2.5 Strang splitting estimators In this section, we introduce four estimators, all based on the Strang splitting scheme. We distinguish between estimators based on complete observations (denoted by C when both X and V are observed) and partial observations (denoted by P when only X is observed). In applications, we typically only have access to partial observations, however, the full observation estimator is used as a building block for the partial observation case. Additionally, we distinguish the estimators based on the type of likelihood function employed. These are the full likelihood (denoted by F) and the marginal likelihood of the rough component (denoted by R). We furthermore use the conditional likelihood based on the smooth component given the rough part (denoted by S | R) to decompose the full likelihood. 2.5.1 Complete observations Assume we observe the complete sample Y0:tN := (Ytk)N k=1 from (5) at time steps 0 = t0 < t1 < ... < tN = T. For notational simplicity, we assume equidistant step size h = tk \u2212tk\u22121. Strang splitting scheme (23) is a nonlinear transformation of a Gaussian random variable e \u00b5h( e fh/2(Y[str] tk\u22121)) + e \u03b5h,k. We define: e Zk,k\u22121(\u03b2) := e f \u22121 h/2(Ytk; \u03b2) \u2212e \u00b5h( e fh/2(Ytk\u22121; \u03b2); \u03b2), (24) 8 \fStrang Splitting Parameter Estimator for Second-order SDEs A PREPRINT and apply change of variables to get: p(ytk | ytk\u22121) = pN (0,e \u2126h)(e zk,k\u22121 | ytk\u22121)| det Dy e f \u22121 h/2(ytk)|. Using \u2212log | det Dy e f \u22121 h/2 (y; \u03b2) | = log | det Dy e fh/2 (y; \u03b2) | and det Dy e fh/2 (y; \u03b2) = det Dvfh/2 (y; \u03b2), together with the Markov property of Y0:tN , we get the following objective function based on the full log-likelihood: L[CF](Y0:tN ; \u03b8) := N X k=1 \u0010 log det e \u2126h(\u03b8) + e Zk,k\u22121(\u03b2)\u22a4e \u2126h(\u03b8)\u22121e Zk,k\u22121(\u03b2) + 2 log | det Dvfh/2(Ytk; \u03b2)| \u0011 . (25) Now, split e Zk,k\u22121 from (24) into the smooth and rough parts e Zk,k\u22121 = ((Z[S] k,k\u22121)\u22a4, (Z[R] k,k\u22121)\u22a4)\u22a4defined as: Z[S] k,k\u22121(\u03b2) := [ e Z(i) k,k\u22121(\u03b2)]d i=1 = Xtk \u2212\u00b5[S] h ( e fh/2(Ytk\u22121; \u03b2); \u03b2), (26) Z[R] k,k\u22121(\u03b2) := [ e Z(i) k,k\u22121(\u03b2)]2d i=d+1 = f \u22c6\u22121 h/2 (Ytk; \u03b2) \u2212\u00b5[R] h ( e fh/2(Ytk\u22121; \u03b2); \u03b2), (27) where \u00b5[S] h (y; \u03b2) := [e \u00b5(i) h (y; \u03b2)]d i=1, \u00b5[R] h (y; \u03b2) := [e \u00b5(i) h (y; \u03b2)]2d i=d+1. 
(28) We also define the following sequence of vectors Z[S|R] k,k\u22121(\u03b2) := Z[S] k,k\u22121(\u03b2) \u2212\u2126[SR] h (\u2126[RR] h )\u22121Z[R] k,k\u22121(\u03b2). (29) The formula for jointly normal distributions yields: pN (0,e \u2126h)(e zk,k\u22121 | ytk\u22121) = pN (0,\u2126[RR] h )(z[R] k,k\u22121 | ytk\u22121) \u00b7 pN (\u2126[SR] h (\u2126[RR] h )\u22121z[R] k,k\u22121,\u2126[S|R] h )(z[S] k,k\u22121 | z[R] k,k\u22121, ytk\u22121). This leads to dividing the full log-likelihood L[CF] into a sum of the marginal log-likelihood L[CR](Y0:tN ; \u03b8) and the smooth-given-rough log-likelihood L[CS|R](Y0:tN ; \u03b8): L[CF](Y0:tN ; \u03b8) = L[CR](Y0:tN ; \u03b8) + L[CS|R](Y0:tN ; \u03b8), where L[CR] (Y0:tN ; \u03b8) := N X k=1 log det \u2126[RR] h (\u03b8) + Z[R] k,k\u22121 (\u03b2)\u22a4\u2126[RR] h (\u03b8)\u22121Z[R] k,k\u22121 (\u03b2) + 2 log \f \fdet Dvfh/2 (Ytk; \u03b2) \f \f ! , (30) L[CS|R] (Y0:tN ; \u03b8) := N X k=1 \u0010 log det \u2126[S|R] h (\u03b8) + Z[S|R] k,k\u22121(\u03b2)\u22a4\u2126[S|R] h (\u03b8)\u22121Z[S|R] k,k\u22121(\u03b2) \u0011 . (31) The terms containing the drift parameter in L[CR] in (30) are of order h1/2, as in the elliptic case, whereas the terms containing the drift parameter in L[CS|R] in (31) are of order h3/2. Consequently, under a rapidly increasing experimental design where Nh \u2192\u221eand Nh2 \u21920, the objective function (31) is degenerate for estimating the drift parameter. However, it contributes to the estimation of the diffusion parameter when the full objective function (25) is used. We show in later sections that employing (25) results in a lower asymptotic variance for the diffusion parameter making it more efficient in complete observation scenarios. The estimators based on complete observations are then defined as: \u02c6 \u03b8[obj] N := arg min \u03b8 L[obj] (Y0:tN ; \u03b8) , obj \u2208{[CF], [CR]}. (32) Although the full objective function is based on twice as many equations as the marginal likelihood, its implementation complexity, speed, and memory requirements are similar to the marginal objective function. Therefore, if the complete observations are available, we recommend using the objective function (25) based on the full likelihood. 9 \fStrang Splitting Parameter Estimator for Second-order SDEs A PREPRINT 2.5.2 Partial observations Assume we only observe the smooth coordinates X0:tN := (Xtk)N k=0. The observed process Xt alone is not a Markov process, although the complete process Yt is. To approximate Vtk, we define the backward difference process: \u2206hXtk := Xtk \u2212Xtk\u22121 h . (33) From SDE (2) it follows that \u2206hXtk = 1 h Z tk tk\u22121 Vt dt. (34) We propose to approximate Vtk using \u2206hXtk by any of the three approaches: 1. Backward difference approximation: Vtk \u2248\u2206hXtk; 2. Forward difference approximation: Vtk \u2248\u2206hXtk+1; 3. Central difference approximation: Vtk \u2248 \u2206hXtk +\u2206hXtk+1 2 . The forward difference approximation performs best in our simulation study, which is also the approximation method employed in Gloter (2006) and Samson and Thieullen (2012). In the field of numerical approximations of ODEs, backward and forward finite differences have the same order of convergence, whereas the central difference has a higher convergence rate. However, the diffusion parameter estimator based on the central difference (Xtk+1 \u2212Xtk\u22121)/2h is less suitable because this approximation skips a data point and thus increases the estimator\u2019s variance. 
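The three approximations are simple transformations of the observed positions. The R sketch below (the function and argument names are illustrative, not part of the paper's code) computes all three on an equidistant grid, leaving NA at time points where the respective difference is not defined.

```r
# Illustrative sketch: velocity approximations from discretely observed positions.
# x[i] stores X_{t_{i-1}} on an equidistant grid with step h, so diff(x)/h gives
# Delta_h X_{t_k} = (X_{t_k} - X_{t_{k-1}})/h for k = 1, ..., N.
velocity_approx <- function(x, h) {
  dx <- diff(x) / h
  n  <- length(x)
  list(
    backward = c(NA, dx),                                    # V_{t_k} ~ Delta_h X_{t_k}
    forward  = c(dx, NA),                                    # V_{t_k} ~ Delta_h X_{t_{k+1}}
    central  = c(NA, (x[3:n] - x[1:(n - 2)]) / (2 * h), NA)  # average of the two
  )
}
```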
For further discussion, see Remark 6. Thus, we focus exclusively on forward differences, following Gloter (2006); Samson and Thieullen (2012), and all proofs are done for this approximation. Similar results also hold for the backward difference, with some adjustments needed in the conditional moments due to filtration issues. We start by approximating e Z for the case of partial observations denoted by e Z: e Zk+1,k,k\u22121(\u03b2) := e f \u22121 h/2(Xtk, \u2206hXtk+1; \u03b2) \u2212e \u00b5h( e fh/2(Xtk\u22121, \u2206hXtk; \u03b2); \u03b2). (35) The smooth and rough parts of e Z are thus equal to: Z [S] k,k\u22121(\u03b2) := Xtk \u2212\u00b5[S] h ( e fh/2(Xtk\u22121, \u2206hXtk; \u03b2); \u03b2), (36) Z [R] k+1,k,k\u22121(\u03b2) := f \u22c6\u22121 h/2 (Xtk, \u2206hXtk+1; \u03b2) \u2212\u00b5[R] h ( e fh/2(Xtk\u22121, \u2206hXtk; \u03b2); \u03b2), (37) and Z [S|R] k+1,k,k\u22121(\u03b2) := Z [S] k,k\u22121(\u03b2) \u2212\u2126[SR] h (\u2126[RR] h )\u22121Z [R] k+1,k,k\u22121(\u03b2). (38) Compared to Z[R] k,k\u22121 in (27), Z [R] k+1,k,k\u22121 in (37) depends on three consecutive data points, with the additional point Xtk+1 entering through \u2206hXtk+1. Furthermore, Xtk enters both f \u22c6\u22121 h/2 and e \u00b5[R] h , rending them coupled. This coupling has a significant influence on later derivations of the estimator\u2019s asymptotic properties, in contrast to the elliptic case where the derivations simplify. While it might seem straightforward to incorporate e Z, Z [S] k,k\u22121 and Z [R] k,k\u22121 into the objective functions (25), (30) and (31), it introduces bias in the estimators of the diffusion parameters, as also discussed in (Gloter, 2006; Samson and Thieullen, 2012). The bias arises because Xtk enters in both f \u22c6\u22121 h/2 and e \u00b5[R] h , and the covariances of e Z, Z [S] k,k\u22121, and Z [R] k,k\u22121 differ from their complete observation counterparts. To eliminate this bias, Gloter (2006); Samson and Thieullen (2012) applied a correction of 2/3 multiplied to log det of the covariance term in the objective functions, which is log det \u03a3\u03a3\u22a4in the Euler-Maruyama discretization. We also need appropriate corrections to our objective functions (25), (30) and (31), however, caution is necessary because log det e \u2126h(\u03b8) depends on both drift and diffusion parameters. To counterbalance this, we also incorporate an adjustment to h in \u2126h. Moreover, we add the term 4 log | det Dvfh/2| to objective function (31) to obtain consistency of the drift estimator under partial observations. The detailed derivation of these correction factors will be elaborated in the following sections. 
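As a quick heuristic for the size of these corrections (it is not a substitute for the formal derivations referred to above): to leading order in h, the forward-difference increment behaves like a difference of integrated Brownian motions, so that
\[
\operatorname{Var}\!\left(\Delta_h X_{t_{k+1}} - \Delta_h X_{t_k} \,\middle|\, \mathcal{F}_{t_{k-1}}\right)
\approx \frac{1}{h^{2}} \operatorname{Var}\!\left(\int_{t_k}^{t_{k+1}} \Sigma\big(W_t - W_{t_{k-1}}\big)\, dt
- \int_{t_{k-1}}^{t_k} \Sigma\big(W_t - W_{t_{k-1}}\big)\, dt\right)
= \frac{2}{3}\, h\, \Sigma\Sigma^{\top},
\]
whereas a true velocity increment satisfies \(\operatorname{Var}(V_{t_{k+1}} - V_{t_k} \mid \mathcal{F}_{t_k}) \approx h\,\Sigma\Sigma^{\top}\). An uncorrected contrast evaluated at the forward differences therefore targets \((2/3)\,\Sigma\Sigma^{\top}\) instead of \(\Sigma\Sigma^{\top}\); rescaling the log-determinant terms, together with the adjusted h entering \(\Omega_h\) above, removes this asymptotic bias.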
10 \fStrang Splitting Parameter Estimator for Second-order SDEs A PREPRINT We thus propose the following objective functions: L[PF](X0:tN ; \u03b8) := 4 3(N \u22122) log det e \u21263h/4(\u03b8) (39) + N\u22121 X k=1 \u0010e Zk+1,k,k\u22121(\u03b2)\u22a4e \u2126h(\u03b8)\u22121e Zk+1,k,k\u22121(\u03b2) + 6 log | det Dvfh/2(Xtk, \u2206hXtk+1; \u03b2)| \u0011 , L[PR] (X0:tN ; \u03b8) := 2 3(N \u22122) log det \u2126[RR] 3h/2(\u03b8) (40) + N\u22121 X k=1 \u0010 Z [R] k+1,k,k\u22121 (\u03b2)\u22a4\u2126[RR] h (\u03b8)\u22121Z [R] k+1,k,k\u22121 (\u03b2) + 2 log \f \fdet Dvfh/2 \u0000Xtk, \u2206hXtk+1; \u03b2 \u0001\f \f \u0011 , L[PS|R] (X0:tN ; \u03b8) := 2(N \u22122) log det \u2126[S|R] h (\u03b8) (41) + N\u22121 X k=1 \u0010 Z [S|R] k+1,k,k\u22121(\u03b2)\u22a4\u2126[S|R] h (\u03b8)\u22121Z [S|R] k+1,k,k\u22121(\u03b2) + 4 log | det Dvfh/2(Xtk, \u2206hXtk+1; \u03b2)| \u0011 . (42) Remark 2 Due to the correction factors in the objective functions, we now have that L[PF](X0:tN ; \u03b8) \u0338= L[PR](X0:tN ; \u03b8) + L[PS|R](X0:tN ; \u03b8). (43) However, when expanding the objective functions (39)-(41) using Taylor series to the lowest necessary order in h, their approximations will satisfy equality in (43), as shown in Section 6. Remark 3 Adding the extra term 4 log | det Dvfh/2| in (41) is necessary to keep the consistency of the drift parameter. However, this term is not initially present in objective function (31), making this correction somehow artificial. This can potentially make the objective function further from the true log-likelihood. The estimators based on the partial sample are then defined as: \u02c6 \u03b8[obj] N := arg min \u03b8 L[obj] (X0:tN ; \u03b8) , obj \u2208{[PF], [PR]}. (44) In the partial observation case, the asymptotic variances of the diffusion estimators are identical whether using (39) or (40), in contrast to the complete observation scenario. This variance is shown to be 9/4 times higher than the variance of the estimator \u02c6 \u03b8[CF] N , and 9/8 times higher than that of the estimator based on the marginal likelihood \u02c6 \u03b8[CR] N . The numerical study in Section 4 shows that the estimator based on the marginal objective function (40) is less biased than the one based on the full objective function (39) in finite sample scenarios with partial observations. A potential reason for this is discussed in Remark 3. Therefore, we recommend using the objective function (40) for partial observations. 3 Main results This section states the two main results \u2013 consistency and asymptotic normality of all four proposed estimators. The key ideas for proofs are presented in Supplementary Materials S1. First, we state the consistency of the estimators in both complete and partial observation cases. Let L[obj] N be one of the objective functions (25), (30), (39) or (40) and b \u03b8[obj] N the corresponding estimator. Thus, obj \u2208{[CF], [CR], [PF], [PR]}. We use superscript [C\u00b7] to refer to any objective function in the complete observation case. Likewise, [\u00b7R] stands for an objective function based on the rough marginal likelihood either in the complete or the partial observation case. Theorem 3.1 (Consistency of the estimators) Assume (A1)-(A6), h \u21920, and Nh \u2192\u221e. Then under the complete observation or partial observation case, it holds: b \u03b2[obj] N P\u03b80 \u2212 \u2212 \u2192\u03b20, d \u03a3\u03a3 [obj] N P\u03b80 \u2212 \u2212 \u2192\u03a3\u03a3\u22a4 0 . 
11 \fStrang Splitting Parameter Estimator for Second-order SDEs A PREPRINT Remark 4 We split the full objective function (25) into the sum of the rough marginal likelihood (30) and the conditional smooth-given-rough likelihood (31). Even if (31) cannot identify the drift parameter \u03b2, it is an important intermediate step in understanding the full objective function (25). This can be seen in the proof of Theorem 3.1, where we first establish consistency of the diffusion estimator with a convergence rate of \u221a N, which is faster than \u221a Nh, the convergence rate of the drift estimators. Then, under complete observations, we show that 1 Nh(L[CR] N (\u03b2, \u03c30) \u2212L[CR] N (\u03b20, \u03c30)) P\u03b80 \u2212 \u2212 \u2212 \u2212 \u2212 \u2192 Nh\u2192\u221e h\u21920 Z (F0(y) \u2212F(y))\u22a4(\u03a3\u03a3\u22a4)\u22121(F0(y) \u2212F(y)) d\u03bd0(y). (45) The right-hand side of (45) is non-negative, with a unique zero for F = F0. Conversely, for objective function (31), it holds: 1 Nh(L[CS|R] N (\u03b2, \u03c3) \u2212L[CS|R] N (\u03b20, \u03c3)) P\u03b80 \u2212 \u2212 \u2212 \u2212 \u2212 \u2192 Nh\u2192\u221e h\u21920 0. (46) Hence, (46) does not have a unique minimum, making the drift parameter unidentifiable. Similar conclusions are drawn in the partial observation case. Now, we state the asymptotic normality of the estimator. First, we need some preliminaries. Let \u03c1 > 0 and B\u03c1 (\u03b80) = {\u03b8 \u2208\u0398 | \u2225\u03b8 \u2212\u03b80\u2225\u2264\u03c1} be a ball around \u03b80. Since \u03b80 \u2208\u0398, for sufficiently small \u03c1 > 0, B\u03c1(\u03b80) \u2208\u0398. For \u02c6 \u03b8[obj] N \u2208B\u03c1 (\u03b80), the mean value theorem yields: \u0012Z 1 0 HL[obj] N (\u03b80 + t(\u02c6 \u03b8[obj] N \u2212\u03b80)) dt \u0013 (\u02c6 \u03b8[obj] N \u2212\u03b80) = \u2212\u2207\u03b8L[obj] N (\u03b80) . (47) Define: C[obj] N (\u03b8) := \uf8ee \uf8ef \uf8f0 h 1 Nh\u22022 \u03b2(i1)\u03b2(i2)L[obj] N (\u03b8) ir i1,i2=1 h 1 N \u221a h\u22022 \u03b2(i)\u03c3(j)L[obj] N (\u03b8) ir,s i=1,j=1 h 1 N \u221a h\u22022 \u03c3(j)\u03b2(i)L[obj] N (\u03b8) ir,s i=1,j=1 h 1 N \u22022 \u03c3(j1)\u03c3(j2)L[obj] N (\u03b8) is j1,j2=1 \uf8f9 \uf8fa \uf8fb, (48) s[obj] N := \"\u221a Nh( \u02c6 \u03b2[obj] N \u2212\u03b20) \u221a N(\u02c6 \u03c3[obj] N \u2212\u03c30) # , \u03bb[obj] N := \uf8ee \uf8ef \uf8f0 \u2212 1 \u221a Nh \u2207\u03b2L[obj] N (\u03b80) \u22121 \u221a N \u2207\u03c3L[obj] N (\u03b80) \uf8f9 \uf8fa \uf8fb, (49) and D[obj] N := R 1 0 C[obj] N (\u03b80 + t(\u02c6 \u03b8[obj] N \u2212\u03b80)) dt. Then, (47) is equivalent to D[obj] N s[obj] N = \u03bb[obj] N . Let: [C\u03b2(\u03b80)]i1,i2 := Z (\u2202\u03b2(i1)F0(y))\u22a4(\u03a3\u03a3\u22a4 0 )\u22121(\u2202\u03b2(i2)F0(y)) d\u03bd0(y), 1 \u2264i1, i2 \u2264r, (50) [C\u03c3(\u03b80)]j1,j2 := Tr((\u2202\u03c3(j1)\u03a3\u03a3\u22a4 0 )(\u03a3\u03a3\u22a4 0 )\u22121(\u2202\u03c3(j2)\u03a3\u03a3\u22a4 0 )(\u03a3\u03a3\u22a4 0 )\u22121), 1 \u2264j1, j2 \u2264s. (51) Theorem 3.2 Let assumptions (A1)-(A6) hold, and let h \u21920, Nh \u2192\u221e, and Nh2 \u21920. 
Then under complete observations, it holds: \"\u221a Nh( \u02c6 \u03b2[CR] N \u2212\u03b20) \u221a N(\u02c6 \u03c3[CR] N \u2212\u03c30) # d \u2212 \u2192N \u0012 0, \u0014 C\u03b2(\u03b80)\u22121 0r\u00d7s 0s\u00d7r 2C\u03c3(\u03b80)\u22121 \u0015\u0013 , \"\u221a Nh( \u02c6 \u03b2[CF] N \u2212\u03b20) \u221a N(\u02c6 \u03c3[CF] N \u2212\u03c30) # d \u2212 \u2192N \u0012 0, \u0014 C\u03b2(\u03b80)\u22121 0r\u00d7s 0s\u00d7r C\u03c3(\u03b80)\u22121 \u0015\u0013 , under P\u03b80. If only partial observations are available and the unobserved coordinates are approximated using the forward or backward differences, then \"\u221a Nh( \u02c6 \u03b2[PR] N \u2212\u03b20) \u221a N(\u02c6 \u03c3[PR] N \u2212\u03c30) # d \u2212 \u2192N \u0012 0, \u0014 C\u03b2(\u03b80)\u22121 0r\u00d7s 0s\u00d7r 9 4C\u03c3(\u03b80)\u22121 \u0015\u0013 , \"\u221a Nh( \u02c6 \u03b2[PF] N \u2212\u03b20) \u221a N(\u02c6 \u03c3[PF] N \u2212\u03c30) # d \u2212 \u2192N \u0012 0, \u0014 C\u03b2(\u03b80)\u22121 0r\u00d7s 0s\u00d7r 9 4C\u03c3(\u03b80)\u22121 \u0015\u0013 , under P\u03b80. 12 \fStrang Splitting Parameter Estimator for Second-order SDEs A PREPRINT Here, we only outline the proof. According to Theorem 1 in Kessler (1997) or Theorem 1 in S\u00f8rensen and Uchida (2003), Lemmas 3.3 and 3.4 below are enough for establishing asymptotic normality of \u02c6 \u03b8N. For more details, see proof of Theorem 1 in S\u00f8rensen and Uchida (2003). Lemma 3.3 Let CN(\u03b80) be defined in (48). For h \u21920 and Nh \u2192\u221e, it holds: C[CR] N (\u03b80) P\u03b80 \u2212 \u2212 \u2192 \u0014 2C\u03b2(\u03b80) 0r\u00d7s 0s\u00d7r C\u03c3(\u03b80) \u0015 , C[PR] N (\u03b80) P\u03b80 \u2212 \u2212 \u2192 \u00142C\u03b2(\u03b80) 0r\u00d7s 0s\u00d7r 2 3C\u03c3(\u03b80) \u0015 , C[CF] N (\u03b80) P\u03b80 \u2212 \u2212 \u2192 \u0014 2C\u03b2(\u03b80) 0r\u00d7s 0s\u00d7r 2C\u03c3(\u03b80) \u0015 , C[PF] N (\u03b80) P\u03b80 \u2212 \u2212 \u2192 \u00142C\u03b2(\u03b80) 0r\u00d7s 0s\u00d7r 8 3C\u03c3(\u03b80) \u0015 . Moreover, let \u03c1N be a sequence such that \u03c1N \u21920, then in all cases it holds: sup \u2225\u03b8\u2225\u2264\u03c1N \u2225C[obj] N (\u03b80 + \u03b8) \u2212C[obj] N (\u03b80)\u2225 P\u03b80 \u2212 \u2212 \u21920. Lemma 3.4 Let \u03bbN be defined (49). For h \u21920, Nh \u2192\u221eand Nh2 \u21920, it holds: \u03bb[CR] N d \u2212 \u2192N \u0012 0, \u0014 4C\u03b2(\u03b80) 0r\u00d7s 0s\u00d7r 2C\u03c3(\u03b80) \u0015\u0013 , \u03bb[PR] N d \u2212 \u2192N \u0012 0, \u0014 4C\u03b2(\u03b80) 0r\u00d7s 0s\u00d7r C\u03c3(\u03b80) \u0015\u0013 , \u03bb[CF] N d \u2212 \u2192N \u0012 0, \u0014 4C\u03b2(\u03b80) 0r\u00d7s 0s\u00d7r 4C\u03c3(\u03b80) \u0015\u0013 , \u03bb[PF] N d \u2212 \u2192N \u0012 0, \u0014 4C\u03b2(\u03b80) 0r\u00d7s 0s\u00d7r 16C\u03c3(\u03b80) \u0015\u0013 , under P\u03b80. Now, the two previous lemmas suggest s[obj] N = (D[obj] n )\u22121\u03bb[obj] N d \u2212 \u2192C[obj] N (\u03b80)\u22121\u03bb[obj] N . The previous line is not completely formal, but it gives the intuition. For more details on formally deriving the result, see Section 7.4 in Pilipovic et al. (2024) or proof of Theorem 1 in S\u00f8rensen and Uchida (2003). 4 Simulation study This Section illustrates the simulation study of the Kramers oscillator (8), demonstrating the theoretical aspects and comparing our proposed estimators against estimators based on the EM and LL approximations. 
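Theorem 3.2, combined with the expression C_{\sigma^2}(\theta_0) = 1/\sigma_0^4 derived later in eq. (52), already fixes the theoretical yardstick for comparing the diffusion estimators with the EM and LL alternatives. A minimal R sketch of the implied asymptotic standard deviations follows; the numerical values correspond to the setting of Figure 1B) and are shown purely for illustration.

```r
# Asymptotic standard deviations of the sigma^2 estimators implied by Theorem 3.2,
# using C_{sigma^2}(theta_0) = 1/sigma_0^4 for the Kramers oscillator (cf. eq. (52)).
asym_sd_sigma2 <- function(sigma2_0, N) {
  factors <- c(CF = 1, CR = 2, PF = 9 / 4, PR = 9 / 4)  # multiples of C_sigma^{-1}
  sqrt(factors) * sigma2_0 / sqrt(N)
}
asym_sd_sigma2(sigma2_0 = 100, N = 5000)   # setting of Figure 1B)
#>       CF       CR       PF       PR
#> 1.414214 2.000000 2.121320 2.121320
```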
We chose to compare our proposed estimators to these two, because the EM estimator is routinely used in applications, and the LL estimator has shown to be one of the best state-of-the-art methods, see Pilipovic et al. (2024) for the elliptic case. The true parameters are set to \u03b70 = 6.5, a0 = 1, b0 = 0.6 and \u03c32 0 = 0.1. We outline the estimators specifically designed for the Kramers oscillator, explain the simulation procedure, describe the optimization implemented in the R programming language R Core Team (2022), and then present and interpret the results. 4.1 Estimators used in the study For the Kramers oscillator (8), the EM transition distribution is: \u0014 Xtk Vtk \u0015 | \u0014 Xtk\u22121 Vtk\u22121 \u0015 = \u0014 x v \u0015 \u223cN \u0012\u0014 x + hv v + h \u0000\u2212\u03b7v + ax \u2212bx3\u0001 \u0015 , \u0014 0 0 0 h\u03c32 \u0015\u0013 . The ill-conditioned variance of this discretization restricts us to an estimator that only uses the marginal likelihood of the rough coordinate. The estimator for complete observations directly follows from the Gaussian distribution. The estimator for partial observations is defined as (Samson and Thieullen, 2012): b \u03b8[PR] EM = arg min \u03b8 ( 2 3(N \u22123) log \u03c32 + 1 h\u03c32 N\u22122 X k=1 (\u2206hXtk+1 \u2212\u2206hXtk \u2212h(\u2212\u03b7\u2206hXtk\u22121 + aXtk\u22121 \u2212bX3 tk\u22121))2 ) . To our knowledge, the LL estimator has not previously been applied to partial observations. Given the similar theoretical and computational performance of the Strang and LL discretizations, we suggest (without formal proof) to adjust the LL objective functions with the same correction factors as used in the Strang approach. The numerical evidence indicates 13 \fStrang Splitting Parameter Estimator for Second-order SDEs A PREPRINT that the LL estimator has the same asymptotic properties as those proved for the Strang estimator. We omit the definition of the LL estimator due to its complexity (see Melnykova (2020); Pilipovic et al. (2024) and accompanying code). To define S estimators based on the Strang splitting scheme, we first split SDE (8) as follows: d \u0014 Xt Vt \u0015 = \u0014 0 1 \u22122a \u2212\u03b7 \u0015 | {z } A \u0014 Xt Vt \u0015 \u2212 \u0014 x\u22c6 \u00b1 0 \u0015 | {z } b ! dt + \u0014 0 aXt \u2212bX3 t + 2a(Xt \u2212x\u22c6 \u00b1) \u0015 | {z } N(Xt,Vt) dt + \u0014 0 \u03c3 \u0015 dWt, where x\u22c6 \u00b1 = \u00b1 p a/b are the two stable points of the dynamics. Since there are two stable points, we suggest splitting with x\u22c6 +, when Xt > 0, and x\u22c6 \u2212, when Xt < 0. This splitting follows the guidelines from (Pilipovic et al., 2024). Note that the nonlinear ODE driven by N(x, v) has a trivial solution where x is a constant. To obtain Strang estimators, we plug in the corresponding components in the objective functions (25), (30), (39) and (40). 4.2 Trajectory simulation We simulate a sample path using the EM discretization with a step size of hsim = 0.0001 to ensure good performance. To reduce discretization errors, we sub-sample from the path at wider intervals to get time step h = 0.1. The path has N = 5000 data points. We repeat the simulations to obtain 250 data sets. 4.3 Optimization in R For optimizing the objective functions, we proceed as in Pilipovic et al. (2024) using the R package torch (Falbel and Luraschi, 2022), which allows automatic differentiation. The optimization employs the resilient backpropagation algorithm, optim_rprop. 
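Before turning to the remaining optimization details, the splitting of Section 4.1 can be made concrete in a few lines of R. The sketch below implements one Strang step (23) for the Kramers oscillator; the helper names, the eigendecomposition-based matrix exponential, the trapezoidal quadrature for (18), and the small jitter added before the Cholesky factorization are illustrative choices, not the torch-based implementation used for the reported results.

```r
# Minimal sketch of one Strang step (23) for the Kramers oscillator (8), using the
# splitting of Section 4.1. Assumes the linear drift matrix is diagonalizable
# (true for the parameter values used in the simulation study).
kramers_strang_step <- function(x, v, h, eta, a, b, sigma) {
  xs <- sign(x) * sqrt(a / b)                 # stable point x*_+ or x*_- (sign of x)
  A  <- matrix(c(0, -2 * a, 1, -eta), 2, 2)   # linear part of the drift
  # half flow of the nonlinear ODE: x stays constant, v moves linearly in time
  fh <- function(y, t) c(y[1], y[2] + t * (a * y[1] - b * y[1]^3 + 2 * a * (y[1] - xs)))
  expmA <- function(t) {                      # e^{A t} via eigendecomposition (2 x 2)
    e <- eigen(A)
    Re(e$vectors %*% diag(exp(e$values * t)) %*% solve(e$vectors))
  }
  # covariance (18): Omega_h = int_0^h e^{A s} Q e^{A' s} ds, Q = diag(0, sigma^2),
  # approximated here by a simple trapezoidal rule
  Q <- diag(c(0, sigma^2))
  s <- seq(0, h, length.out = 101)
  G <- lapply(s, function(u) { E <- expmA(u); E %*% Q %*% t(E) })
  Omega <- Reduce(`+`, Map(function(g1, g2) (g1 + g2) / 2 * (s[2] - s[1]),
                           G[-length(G)], G[-1]))
  y   <- fh(c(x, v), h / 2)                             # Phi^[2]_{h/2}
  mu  <- drop(expmA(h) %*% (y - c(xs, 0))) + c(xs, 0)   # OU mean (17)
  eps <- drop(t(chol(Omega + 1e-12 * diag(2))) %*% rnorm(2))  # eps ~ N(0, Omega_h)
  fh(mu + eps, h / 2)                                   # Phi^[2]_{h/2} again
}

# Illustrative use with the true parameters of the simulation study:
set.seed(1)
y <- c(1.2, 0)   # start near the positive well
for (k in 1:10) y <- kramers_strang_step(y[1], y[2], h = 0.1,
                                         eta = 6.5, a = 1, b = 0.6, sigma = sqrt(0.1))
```

Iterating the step propagates (X, V) forward under the Strang scheme; the same building blocks, the half flows f_{h/2} and the moments given by (17)-(18), enter the objective functions of Section 2.5. This sketch is for illustration of the scheme only and is not the trajectory simulation procedure of Section 4.2, which uses a fine-step Euler-Maruyama discretization.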
We use the default hyperparameters and limit the number of optimization iterations to 2000. The convergence criterion is set to a precision of 10\u22125 for the difference between estimators in consecutive iterations. The initial parameter values are set to (\u22120.1, \u22120.1, 0.1, 0.1). 4.4 Results The results of the simulation study are presented in Figure 1. Figure 1A) presents the distributions of the normalized estimators in the complete and partial observation cases. The S and LL estimators exhibit nearly identical performance, particularly in the complete observation scenario. In contrast, the EM method displays significant underperformance and notable bias. The variances of the S and LL rough-likelihood estimators of \u03c32 are higher compared to those derived from the full likelihood, aligning with theoretical expectations. Interestingly, in the partial observation scenario, Figure 1A) reveals that estimators employing the full likelihood display greater finite sample bias compared to those based on the rough likelihood. Possible reasons for this bias are discussed in Remark 3. However, it is noteworthy that this bias is eliminated for smaller time steps, e.g. h = 0.0001 (not shown), thus confirming the theoretical asymptotic results. This observation suggests that the rough likelihood is preferable under partial observations due to its lower bias. Backward finite difference approximations of the velocity variables perform similarly to the forward differences and are therefore excluded from the figure for clarity. We closely examine the variances of the S estimators of \u03c32 in Figure 1B). The LL estimators are omitted due to their similarity to the S estimators, and because the computation times for the LL estimators are prohibitive. To align more closely with the asymptotic predictions, we opt for h = 0.02 and conduct 1000 simulations. Additionally, we set \u03c32 0 = 100 to test different noise levels. Atop each empirical distribution, we overlay theoretical normal densities that match the variances as per Theorem 3.2. The theoretical variance is derived from C\u03c32(\u03b80) in (51), which for the Kramers oscillator in (8) is: C\u03c32(\u03b80) = 1 \u03c34 0 . (52) Figure 1 illustrates that the lowest variance of the diffusion estimator is observed when using the full likelihood with complete observations. The second lowest variance is achieved using the rough likelihood with complete observations. The largest variance is observed in the partial observation case; however, it remains independent of whether the full or rough likelihood is used. Once again, we observe that using the full likelihood introduces additional finite sample bias. In Figure 1C), we compare running times calculated using the tictoc package in R. Running times are measured from the start of the optimization step until convergence. The figure depicts the median over 250 repetitions to mitigate the influence of outliers. The EM method is notably the fastest; however, the S estimators exhibit only slightly slower performance. The LL estimators are 10-100 times slower than the S estimators, depending on whether complete or partial observations are used and whether the full or rough likelihood is employed. 14 \fStrang Splitting Parameter Estimator for Second-order SDEs A PREPRINT Figure 1: Parameter estimates in a simulation study for the Kramers oscillator, eq. (8). The color code remains consistent across all three figures. 
A) Normalized distributions of parameter estimation errors (\u02c6 \u03b8N \u2212\u03b80) \u2298\u03b80 in both complete and partial observation cases, based on 250 simulated data sets with h = 0.1 and N = 5000. Each column corresponds to a different parameter, while the color indicates the type of estimator. Estimators are distinguished by superscripted objective functions (F for full and R for rough). B) Distribution of b \u03c32 N estimators based on 1000 simulations with h = 0.02 and N = 5000 across different observation settings (complete or partial) and likelihood choices (full or rough) using the Strang splitting scheme. The true value of \u03c32 is set to \u03c32 0 = 100. Theoretical normal densities are overlaid for comparison. Theoretical variances are calculated based on C\u03c32(\u03b80), eq. (52). C) Median computing time in seconds for one estimation of various estimators based on 250 simulations with h = 0.1 and N = 5000. Shaded color patterns represent times in the partial observation case, while no color pattern indicates times in the complete observation case. 15 \fStrang Splitting Parameter Estimator for Second-order SDEs A PREPRINT Figure 2: Ice core data from Greenland. Left: Trajectories over time (in kilo years) of the centered negative logarithm of the Ca2+ measurements (top) and forward difference approximations of its rate of change (bottom). The two vertical dark red lines represent the estimated stable equilibria of the double-well potential function. Green points denote upand down-crossings of level \u00b10.6, conditioned on having crossed the other level. Green vertical lines indicate empirical estimates of occupancy in either of the two metastable states. Right: Empirical densities (black) alongside estimated invariant densities with confidence intervals (dark red), prediction intervals (light red), and the empirical density of a simulated sample from the estimated model (blue). 5 Application to Greenland Ice Core Data During the last glacial period, significant climatic shifts known as Dansgaard-Oeschger (DO) events have been documented in paleoclimatic records (Dansgaard et al., 1993). Proxy data from Greenland ice cores, particularly stable water isotope composition (\u03b418O) and calcium ion concentrations (Ca2+), offer valuable insights into these past climate variations (Boers et al., 2017, 2018; Boers, 2018; Ditlevsen et al., 2002; Lohmann and Ditlevsen, 2019; Hassanibesheli et al., 2020). The \u03b418O ratio, reflecting the relative abundance of 18O and 16O isotopes in ice, serves as a proxy for paleotemperatures during snow deposition. Conversely, calcium ions, originating from dust deposition, exhibit a strong negative correlation with \u03b418O, with higher calcium ion levels indicating colder conditions. Here, we prioritize Ca2+ time series due to its finer temporal resolution. In Greenland ice core records, the DO events manifest as abrupt transitions from colder climates (stadials) to approximately 10 degrees warmer climates (interstadials) within a few decades. Although the waiting times between state switches last a couple of thousand years, their spacing exhibits significant variability. The underlying mechanisms driving these changes remain largely elusive, prompting discussions on whether they follow cyclic patterns, result from external forcing, or emerge from noise-induced processes (Boers, 2018; Ditlevsen et al., 2007). We aim to determine if the observed data can be explained by noise-induced transitions of the Kramers oscillator. 
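The data handling described in the following paragraphs amounts to a few lines of R. The sketch below assumes a data frame grip with a Ca2+ column ca and an age column age_kyr (both names hypothetical) and uses na.approx from the zoo package.

library(zoo)
x <- -log(grip$ca)                      # minus log Ca2+; larger values = warmer climate
x <- x - mean(x, na.rm = TRUE)          # center around zero
x <- na.approx(x)                       # interpolate the few missing values
keep <- grip$age_kyr >= 30 & grip$age_kyr <= 80
x <- x[keep]                            # T = 50 kyr, h = 0.02 kyr, N = 2500
v <- diff(x) / 0.02                     # forward-difference approximation of the velocity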
The measurements were conducted at the summit of the Greenland ice sheet as part of the Greenland Icecore Project (GRIP) (Anklin et al., 1993; Andersen et al., 2004). Originally, the data were sampled at 5 cm intervals, resulting in a non-equidistant time series due to ice compression at greater depths, where 5 cm of ice core spans longer time periods. For our analysis, we use a version of the data transformed into a uniformly spaced series through 20-year binning and averaging. This transformation simplifies the analysis and highlights significant climatic trends. The dataset is available in the supplementary material of (Rasmussen et al., 2014; Seierstad et al., 2014). 16 \fStrang Splitting Parameter Estimator for Second-order SDEs A PREPRINT To address the large amplitudes and negative correlation with temperature, we transform the data to minus the logarithm of Ca2+, where higher values of the transformed variable indicate warmer climates at the time of snow deposition. Additionally, we center the transformed measurements around zero. With the 20-year binning, to obtain one point per 20 years, we average across the bins, resulting in a time step of h = 0.02kyr (1kyr = 1000 years). Additionally, we addressed a few missing values using the na.approx function from the zoo package. Following the approach of Hassanibesheli et al. (2020), we analyze a subset of the data with a sufficiently good signal-to-noise ratio. Hassanibesheli et al. (2020) examined the data from 30 to 60kyr before present. Here, we extend the analysis to cover 30kyr to 80kyr, resulting in a time interval of T = 50kyr and a sample size of N = 2500. We approximate the velocity of the transformed Ca2+ by the forward difference method. The trajectories and empirical invariant distributions are illustrated in Figure 2. We fit the Kramers oscillator to the \u2212log Ca2+ time series and estimate parameters using the Strang estimator. Following Theorem 3.2, we compute C\u03b2(\u03b80) from (50). Applying the invariant density \u03c00(x, v) from (10), which decouples into \u03c00(x) (11) and a Gaussian zero-mean and \u03c32 0/(2\u03b70) variance, leads us to: C\u03b2(\u03b80) = \uf8ee \uf8ef \uf8ef \uf8f0 1 2\u03b70 0 0 0 1 \u03c32 0 R \u221e \u2212\u221ex2\u03c00(x) dx \u22121 \u03c32 0 R \u221e \u2212\u221ex4\u03c00(x) dx 0 \u22121 \u03c32 0 R \u221e \u2212\u221ex4\u03c00(x) dx 1 \u03c32 0 R \u221e \u2212\u221ex6\u03c00(x) dx \uf8f9 \uf8fa \uf8fa \uf8fb. (53) Thus, to obtain 95% confidence intervals (CI) for the estimated parameters, we plug b \u03b8N into (52) and (53). The estimators and confidence intervals are shown in Table 1. We also calculate the expected waiting time \u03c4, eq. (12), of crossing from one state to another, and its confidence interval using the Delta Method. Parameter Estimate 95% CI \u03b7 62.5 59.4 \u221265.6 a 296.7 293.6 \u2212299.8 b 219.1 156.4 \u2212281.7 \u03c32 9125 8589 \u22129662 \u03c4 3.97 3.00 \u22124.94 Table 1: Estimated parameters of the Kramers oscillator from Greenland ice core data. The model fit is assessed in the right panels of Figure 2. Here, we present the empirical distributions of the two coordinates along with the fitted theoretical invariant distribution and a 95% confidence interval. Additionally, a prediction interval for the distribution is provided by simulating 1000 datasets from the fitted model, matching the size of the empirical data. 
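In code, this band is obtained roughly as follows, where simulate_kramers is a hypothetical helper returning one simulated path of the observed coordinate from (8) at the estimated parameters, and the evaluation grid is chosen to cover the data range.

grid <- seq(-2, 2, length.out = 200)
dens <- replicate(1000, {
  x_sim <- simulate_kramers(theta_hat, N = 2500, h = 0.02)
  density(x_sim, from = min(grid), to = max(grid), n = length(grid))$y
})
band <- apply(dens, 1, quantile, probs = c(0.025, 0.975))   # pointwise 95% band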
We estimate the empirical distributions for each simulated dataset and construct a 95% prediction interval using the pointwise 2.5th and 97.5th percentiles of these estimates. A single example trace is included in blue. While the fitted distribution for \u2212log Ca2+ appears to fit well, even with this symmetric model, the velocity variables are not adequately captured. This discrepancy is likely due to the presence of extreme values in the data that are not effectively accounted for by additive Gaussian noise. Consequently, the model compensates by estimating a large variance. We estimate the waiting time between metastable states to be approximately 4000 years. However, this approximation relies on certain assumptions, namely 62.5 \u2248\u03b7 \u226b\u221aa \u224817.2 and 73 \u2248\u03c32/2\u03b7 \u226aa2/4b \u2248100. Thus, the accuracy of the approximation may not be highly accurate. Defining the current state of the process is not straightforward. One method involves identifying successive upand down-crossings of predefined thresholds within the smoothed data. However, the estimated occupancy time in each state depends on the level of smoothing applied and the distance of crossing thresholds from zero. Using a smoothing technique involving running averages within windows of 11 data points (equivalent to 220 years) and detecting downand up-crossings of levels \u00b10.6, we find an average occupancy time of 4058 years in stadial states and 3550 years in interstadial states. Nevertheless, the actual occupancy times exhibit significant variability, ranging from 60 to 6900 years, with the central 50% of values falling between 665 and 2115 years. This classification of states is depicted in green in Figure 2. Overall, the estimated mean occupancy time inferred from the Kramers oscillator appears reasonable. 6 Technical results In this Section, we present all the necessary technical properties that are used to derive the main results of the paper. 17 \fStrang Splitting Parameter Estimator for Second-order SDEs A PREPRINT We start by expanding e \u2126h and its block components \u2126[RR] h (\u03b8)\u22121, \u2126[S|R] h (\u03b8)\u22121, log det \u2126[RR] h (\u03b8), log det \u2126[S|R] h (\u03b8) and log | det Dfh/2 (y; \u03b2) | when h goes to zero. Then, we expand e Zk,k\u22121(\u03b2) and e Zk+1,k,k\u22121(\u03b2) around Ytk\u22121 when h goes to zero. The main tools used are It\u00f4\u2019s lemma, Taylor expansions, and Fubini\u2019s theorem. The final result is stated in Propositions 6.6 and 6.7. The approximations depend on the drift function F, the nonlinear part N, and some correlated sequences of Gaussian random variables. Finally, we obtain approximations of the objective functions (25), (30), (31) and (39) (41). Proofs of all the stated propositions and lemmas in this section are in Supplementary Material S1. 6.1 Covariance matrix e \u2126h The covariance matrix e \u2126h is approximated by: e \u2126h = Z h 0 e e A(h\u2212u) e \u03a3e \u03a3\u22a4e e A\u22a4(h\u2212u) du = he \u03a3e \u03a3\u22a4+ h2 2 ( e Ae \u03a3e \u03a3\u22a4+ e \u03a3e \u03a3\u22a4e A\u22a4) + h3 6 ( e A2 e \u03a3e \u03a3\u22a4+ 2 e Ae \u03a3e \u03a3\u22a4e A\u22a4+ e \u03a3e \u03a3\u22a4( e A2)\u22a4) + h4 24( e A3 e \u03a3e \u03a3\u22a4+ 3 e A2 e \u03a3e \u03a3\u22a4e A\u22a4+ 3 e Ae \u03a3e \u03a3\u22a4( e A2)\u22a4+ e \u03a3e \u03a3\u22a4( e A3)\u22a4) + R(h5, y0). (54) The following lemma approximates each block of e \u2126h up to the first two leading orders of h. 
The result follows directly from equations (4), (6), and (54). Lemma 6.1 The covariance matrix e \u2126h defined in (54)-(19) approximates block-wise as: \u2126[SS] h (\u03b8) = h3 3 \u03a3\u03a3\u22a4+ h4 8 (Av(\u03b2)\u03a3\u03a3\u22a4+ \u03a3\u03a3\u22a4Av(\u03b2)\u22a4) + R(h5, y0), \u2126[SR] h (\u03b8) = h2 2 \u03a3\u03a3\u22a4+ h3 6 (Av(\u03b2)\u03a3\u03a3\u22a4+ 2\u03a3\u03a3\u22a4Av(\u03b2)\u22a4) + R(h4, y0), \u2126[RS] h (\u03b8) = h2 2 \u03a3\u03a3\u22a4+ h3 6 (2Av(\u03b2)\u03a3\u03a3\u22a4+ \u03a3\u03a3\u22a4Av(\u03b2)\u22a4) + R(h4, y0), \u2126[RR] h (\u03b8) = h\u03a3\u03a3\u22a4+ h2 2 (Av(\u03b2)\u03a3\u03a3\u22a4+ \u03a3\u03a3\u22a4Av(\u03b2)\u22a4) + R(h3, y0). Building on Lemma 6.1, we calculate products, inverses, and logarithms of the components of e \u2126h in the following lemma. Lemma 6.2 For the covariance matrix e \u2126h defined in (54) it holds: (i) \u2126[RR] h (\u03b8)\u22121 = 1 h(\u03a3\u03a3\u22a4)\u22121 \u22121 2((\u03a3\u03a3\u22a4)\u22121Av(\u03b2) + Av(\u03b2)\u22a4(\u03a3\u03a3\u22a4)\u22121) + R(h, y0); (ii) \u2126[SR] h (\u03b8)\u2126[RR] h (\u03b8)\u22121 = h 2 I \u2212h2 12 (Av \u2212\u03a3\u03a3\u22a4Av(\u03b2)\u22a4(\u03a3\u03a3\u22a4)\u22121) + R(h3, y0); (iii) \u2126[SR] h (\u03b8)\u2126[RR] h (\u03b8)\u22121\u2126[RS] h (\u03b8) = h3 4 \u03a3\u03a3\u22a4+ h4 8 (Av(\u03b2)\u03a3\u03a3\u22a4+ \u03a3\u03a3\u22a4Av(\u03b2)\u22a4) + R(h5, y0); (iv) \u2126[S|R] h (\u03b8) = h3 12 \u03a3\u03a3\u22a4+ R(h5, y0); (v) log det \u2126[RR] h (\u03b8) = d log h + log det \u03a3\u03a3\u22a4+ h Tr Av(\u03b2) + R(h2, y0); (vi) log det \u2126[S|R] h (\u03b8) = 3d log h + log det \u03a3\u03a3\u22a4+ R(h2, y0); (vii) log det e \u2126h(\u03b8) = 4d log h + 2 log det \u03a3\u03a3\u22a4+ h Tr Av(\u03b2) + R(h2, y0). Remark 5 We adjusted the objective functions for partial observations using the term c log det \u2126[\u00b7] h/c, where c is a correction constant. This adjustment keeps the term h Tr Av(\u03b2) in (v)-(vii) constant, not affecting the asymptotic distribution of the drift parameter. There is no h4-term in \u2126[S|R] h (\u03b8) which simplifies the approximation of \u2126[S|R] h (\u03b8)\u22121 and log det \u2126[S|R] h (\u03b8). Consequently, this makes (41) a bad choice for estimating the drift parameter. 18 \fStrang Splitting Parameter Estimator for Second-order SDEs A PREPRINT 6.2 Nonlinear solution e fh We now state a useful proposition for the nonlinear solution e fh (Section 1.8 in (Hairer et al., 1993)). Proposition 6.3 Let Assumptions (A1), (A2) and (A6) hold. When h \u21920, the h-flow of (15) approximates as: e fh(y) = y + h e N(y) + h2 2 (Dy e N(y)) e N(y) + R(h3, y), (55) e f \u22121 h (y) = y \u2212h e N(y) + h2 2 (Dy e N(y)) e N(y) + R(h3, y). (56) Applying the previous proposition on (21) and (22), we get: fh(y) = v + hN(y) + h2 2 (DvN(y))N(y) + R(h3, y), (57) f \u22c6\u22121 h (y) = v \u2212hN(y) + h2 2 (DvN(y))N(y) + R(h3, y). (58) The following lemma approximates log | det Dfh/2 (y; \u03b2) | in the objective functions and connects it with Lemma 6.2. Lemma 6.4 Let e fh be the function defined in (21). It holds: 2 log | det Dfh/2 (Ytk; \u03b2) | = h Tr DvN(Ytk\u22121; \u03b2) + R(h3/2, Ytk\u22121), 2 log | det Dfh/2 \u0000Xtk, \u2206hXtk+1; \u03b2 \u0001 | = h Tr DvN(Ytk\u22121; \u03b2) + R(h3/2, Ytk\u22121). 
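The block expansions in Lemma 6.1 are easy to verify numerically for a test model; the sketch below takes d = 1, Ax = 0 and arbitrary values of Av and Sigma, and computes the exact covariance (18) by quadrature.

Av <- -0.7; Sig <- 0.4; h <- 1e-2
A  <- matrix(c(0, 0, 1, Av), 2, 2)
SS <- matrix(c(0, 0, 0, Sig^2), 2, 2)
u  <- seq(0, h, length.out = 2001)
Om <- Reduce(`+`, lapply(u, function(s)
  expm::expm(A * (h - s)) %*% SS %*% t(expm::expm(A * (h - s))))) * (u[2] - u[1])
c(Om[1, 1] / (h^3 / 3 * Sig^2),   # Omega[SS] leading term
  Om[1, 2] / (h^2 / 2 * Sig^2),   # Omega[SR] leading term
  Om[2, 2] / (h * Sig^2))         # Omega[RR] leading term; all ratios are close to 1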
An immediate consequence of the previous lemma and that DvF(y; \u03b2) = Av(\u03b2) + DvN(y; \u03b2) is log det \u2126[RR] h (\u03b8) + 2 log | det Dfh/2 (Ytk; \u03b2) | = log det h\u03a3\u03a3\u22a4+ h Tr DvF(Ytk\u22121; \u03b2) + R(h3/2, Ytk\u22121). The same equality holds when Ytk is approximated by (Xtk, \u2206hXtk+1). The following lemma expands function \u00b5h( e fh/2(y)) up to the highest necessary order of h. Lemma 6.5 For the functions e fh in (21) and e \u00b5h in (28), it holds \u00b5[S] h ( e fh/2(y)) = x + hv + h2 2 F(y) + R(h3, y), (59) \u00b5[R] h ( e fh/2(y)) = v + h(F(y) \u22121 2N(y)) + R(h2, y). (60) 6.3 Random variables e Zk,k\u22121 and e Zk+1,k,k\u22121 To approximate the random variables Z[S] k,k\u22121(\u03b2), Z[R] k,k\u22121(\u03b2), Z [S] k,k\u22121(\u03b2), and Z [R] k+1,k,k\u22121(\u03b2) around Ytk\u22121, we start by defining the following random sequences: \u03b7k\u22121 := 1 h1/2 Z tk tk\u22121 dWt, (61) \u03bek\u22121 := 1 h3/2 Z tk tk\u22121 (t \u2212tk\u22121) dWt, \u03be\u2032 k := 1 h3/2 Z tk+1 tk (tk+1 \u2212t) dWt, (62) \u03b6k\u22121 := 1 h5/2 Z tk tk\u22121 (t \u2212tk\u22121)2 dWt, \u03b6\u2032 k := 1 h5/2 Z tk+1 tk (tk+1 \u2212t)2 dWt. (63) The random variables (61)-(63) are Gaussian with mean zero. Moreover, at time tk they are Ftk+1 measurable and independent of Ftk. The following linear combinations of (61)-(63) appear in the expansions in the partial observation case: Uk,k\u22121 := \u03be\u2032 k + \u03bek\u22121, (64) Qk,k\u22121 := \u03b6\u2032 k + 2\u03b7k\u22121 \u2212\u03b6k\u22121. (65) 19 \fStrang Splitting Parameter Estimator for Second-order SDEs A PREPRINT It is not hard to check that \u03be\u2032 k + \u03b7k\u22121 \u2212\u03be\u2032 k\u22121 = Uk,k\u22121. This alternative representation of Uk,k\u22121 will be used later in proofs. The It\u00f4 isometry yields: E\u03b80[\u03b7k\u22121\u03b7\u22a4 k\u22121 | Ftk\u22121] = I, E\u03b80[\u03b7k\u22121\u03be\u22a4 k\u22121 | Ftk\u22121] = E\u03b80[\u03b7k\u22121\u03be\u2032\u22a4 k\u22121 | Ftk\u22121] = 1 2I, (66) E\u03b80[\u03bek\u22121\u03be\u2032\u22a4 k\u22121 | Ftk\u22121] = 1 6I, E\u03b80[\u03bek\u22121\u03be\u22a4 k\u22121 | Ftk\u22121] = E\u03b80[\u03be\u2032 k\u03be\u2032\u22a4 k | Ftk\u22121] = 1 3I, (67) E\u03b80[Uk,k\u22121U\u22a4 k,k\u22121 | Ftk\u22121] = 2 3I, E\u03b80[Uk,k\u22121(Uk,k\u22121 + 2\u03be\u2032 k\u22121)\u22a4| Ftk\u22121] = I. (68) The covariances of other combinations of the random variables (61)-(63) are not needed for the proofs. However, to derive asymptotic properties, we need some fourth moments calculated in Supplementary Materials S1. The following two propositions are the last building blocks for approximating the objective functions (30)-(31) and (40)-(41). 
Proposition 6.6 The random variables e Zk,k\u22121(\u03b2) in (24) and e Zk+1,k,k\u22121(\u03b2) in (35) are approximated by: Z[S] k,k\u22121(\u03b2) = h3/2\u03a30\u03be\u2032 k\u22121 + h2 2 (F0(Ytk\u22121) \u2212F(Ytk\u22121)) + h5/2 2 DvF0(Ytk\u22121)\u03a30\u03b6\u2032 k\u22121 + R(h3, Ytk\u22121), Z[R] k,k\u22121(\u03b2) = h1/2\u03a30\u03b7k\u22121 + h(F0(Ytk\u22121) \u2212F(Ytk\u22121)) \u2212h3/2 2 DvN(Ytk\u22121)\u03a30\u03b7k\u22121 + h3/2DvF0(Ytk\u22121)\u03a30\u03be\u2032 k\u22121 + R(h2, Ytk\u22121), Z [S] k,k\u22121(\u03b2) = \u2212h2 2 F(Ytk\u22121) \u2212h5/2 2 DvF(Ytk\u22121)\u03a30\u03be\u2032 k\u22121 + R(h3, Ytk\u22121), Z [R] k+1,k,k\u22121(\u03b2) = h1/2\u03a30Uk,k\u22121 + h(F0(Ytk\u22121) \u2212F(Ytk\u22121)) \u2212h3/2 2 DvN(Ytk\u22121)\u03a30Uk,k\u22121 \u2212h3/2DvF(Ytk\u22121)\u03a30\u03be\u2032 k\u22121 + h3/2 2 DvF0(Ytk\u22121)\u03a30Qk,k\u22121 + R(h2, Ytk\u22121). Remark 6 Proposition 6.6 yield E\u03b80[Z[R] k,k\u22121(\u03b2)Z[R] k,k\u22121(\u03b2)\u22a4| Ytk\u22121] = h\u03a3\u03a3\u22a4 0 + R(h2, Ytk\u22121) = \u2126[RR] h + R(h2, Ytk\u22121), E\u03b80[Z [R] k+1,k,k\u22121(\u03b2)Z [R] k+1,k,k\u22121(\u03b2)\u22a4| Ytk\u22121] = 2 3h\u03a3\u03a3\u22a4 0 + R(h2, Ytk\u22121) = 2 3\u2126[RR] h + R(h2, Ytk\u22121). Thus, the correction factor 2/3 in (40) compensates for the underestimation of the covariance of Z [R] k+1,k,k\u22121(\u03b2). Similarly, it can be shown that the same underestimation happens when using the backward difference. On the other hand, when using the central difference, it can be shown that E\u03b80[Z [R],central k+1,k,k\u22121(\u03b2)Z [R],central k+1,k,k\u22121(\u03b2)\u22a4| Ytk\u22121] = 5 12h\u03a3\u03a3\u22a4 0 + R(h2, Ytk\u22121), which is a larger deviation from \u2126[RR] h , yielding a larger correcting factor and larger asymptotic variance of the diffusion parameter estimator. Proposition 6.7 Let e Zk,k\u22121(\u03b2) and e Zk+1,k,k\u22121(\u03b2) be defined in (24) and (35), respectively. Then, Z[S|R] k,k\u22121(\u03b2) = \u2212h3/2 2 \u03a30(\u03b7k\u22121 \u22122\u03be\u2032 k\u22121) + h5/2 12 (Av \u2212\u03a3\u03a3\u22a4Av(\u03b2)\u22a4(\u03a3\u03a3\u22a4)\u22121)\u03a30\u03b7k\u22121 + h5/2 4 DvN(Ytk\u22121)\u03a30\u03b7k\u22121 \u2212h5/2 2 DvF0(Ytk\u22121)\u03a30(\u03be\u2032 k\u22121 \u2212\u03b6\u2032 k\u22121) + R(h3, Ytk\u22121), Z [S|R] k+1,k,k\u22121(\u03b2) = \u2212h3/2 2 \u03a30Uk,k\u22121 \u2212h2 2 F0(Ytk\u22121) + h5/2 12 (Av \u2212\u03a3\u03a3\u22a4Av(\u03b2)\u22a4(\u03a3\u03a3\u22a4)\u22121)\u03a30Uk,k\u22121 + h5/2 4 DvN(Ytk\u22121)\u03a30Uk,k\u22121 \u2212h5/2 4 DvF0(Ytk\u22121)\u03a30Qk,k\u22121 + R(h3, Ytk\u22121). 20 \fStrang Splitting Parameter Estimator for Second-order SDEs A PREPRINT 6.4 Objective functions Starting with the complete observation case, we approximate objective functions (30) and (31) up to order R(h3/2, Ytk\u22121) to prove the asymptotic properties of the estimators \u02c6 \u03b8[CR] N and \u02c6 \u03b8[CS|R] N . 
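Before stating these approximations, note that the moment identities (66)-(68), and in particular the factor 2/3 behind the correction discussed in Remark 6, can be checked by direct simulation of the stochastic integrals (61)-(62); a sketch for d = 1 using a fine Riemann-Ito discretization:

set.seed(1)
h <- 0.1; n <- 200; dt <- h / n; M <- 1e5
U2 <- replicate(M, {
  dW1 <- rnorm(n, sd = sqrt(dt))          # Wiener increments on [t_{k-1}, t_k]
  dW2 <- rnorm(n, sd = sqrt(dt))          # Wiener increments on [t_k, t_{k+1}]
  s   <- (seq_len(n) - 0.5) * dt
  xi  <- sum(s * dW1) / h^(3 / 2)         # xi_{k-1} in (62)
  xip <- sum((h - s) * dW2) / h^(3 / 2)   # xi'_k in (62)
  (xi + xip)^2                            # squared U_{k,k-1} from (64)
})
mean(U2)                                  # approximately 2/3, in line with (68)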
After omitting the terms of order R(h, Ytk\u22121) that do not depend on \u03b2, we obtain the following approximations: L[CR] N (Y0:tN ; \u03b8) = (N \u22121) log det \u03a3\u03a3\u22a4+ N X k=1 \u03b7\u22a4 k\u22121\u03a3\u22a4 0 (\u03a3\u03a3\u22a4)\u22121\u03a30\u03b7k\u22121 (69) + 2 \u221a h N X k=1 \u03b7\u22a4 k\u22121\u03a3\u22a4 0 (\u03a3\u03a3\u22a4)\u22121(F(Ytk\u22121; \u03b20) \u2212F(Ytk\u22121; \u03b2)) + h N X k=1 (F(Ytk\u22121; \u03b20) \u2212F(Ytk\u22121; \u03b2))\u22a4(\u03a3\u03a3\u22a4)\u22121(F(Ytk\u22121; \u03b20) \u2212F(Ytk\u22121; \u03b2)) \u2212h N X k=1 \u03b7\u22a4 k\u22121\u03a3\u22a4 0 DvF(Ytk\u22121; \u03b2)\u22a4(\u03a3\u03a3\u22a4)\u22121\u03a30\u03b7k\u22121 + h N X k=1 Tr DvF(Ytk; \u03b2), L[CS|R] N (Y0:tN ; \u03b8) = (N \u22121) log det \u03a3\u03a3\u22a4+ 3 N X k=1 (\u03b7k\u22121 \u22122\u03be\u2032 k\u22121)\u22a4\u03a3\u22a4 0 (\u03a3\u03a3\u22a4)\u22121\u03a30(\u03b7k\u22121 \u22122\u03be\u2032 k\u22121) (70) \u22123h N X k=1 (\u03b7k\u22121 \u22122\u03be\u2032 k\u22121)\u22a4\u03a3\u22a4 0 (\u03a3\u03a3\u22a4)\u22121DvN(Ytk\u22121; \u03b2)\u03a30\u03b7k\u22121 \u2212h N X k=1 (\u03b7k\u22121 \u22122\u03be\u2032 k\u22121)\u22a4\u03a3\u22a4 0 (\u03a3\u03a3\u22a4)\u22121(Av(\u03b2) \u2212\u03a3\u03a3\u22a4Av(\u03b2)\u22a4(\u03a3\u03a3\u22a4)\u22121)\u03a30\u03b7k\u22121 L[CF] N (Y0:tN ; \u03b8) = L[CR] N (Y0:tN ; \u03b8) + L[CS|R] N (Y0:tN ; \u03b8) . (71) The two last sums in (70) converge to zero because E\u03b80[(\u03b7k\u22121 \u22122\u03be\u2032 k\u22121)\u03b7\u22a4 k\u22121|Ftk\u22121] = 0. Moreover, (70) lacks the quadratic form of F(Ytk\u22121) \u2212F0(Ytk\u22121), that is crucial for the asymptotic variance of the drift estimator. This implies that the objective function L[CS|R] N is not suitable for estimating the drift parameter. Conversely, (70) provides a correct and consistent estimator of the diffusion parameter, indicating that the full objective function (the sum of L[CR] N and L[CS|R] N ) consistently estimates \u03b8. Similarly, the approximated objective functions in the partial observation case are: L[PR] N (Y0:tN ; \u03b8) = 2 3(N \u22122) log det \u03a3\u03a3\u22a4+ N\u22121 X k=1 U\u22a4 k,k\u22121\u03a3\u22a4 0 (\u03a3\u03a3\u22a4)\u22121\u03a30Uk,k\u22121 (72) + 2 \u221a h N X k=1 U\u22a4 k,k\u22121\u03a3\u22a4 0 (\u03a3\u03a3\u22a4)\u22121(F(Ytk\u22121; \u03b20) \u2212F(Ytk\u22121; \u03b2)) + h N\u22121 X k=1 (F(Ytk\u22121; \u03b20) \u2212F(Ytk\u22121; \u03b2))\u22a4(\u03a3\u03a3\u22a4)\u22121(F(Ytk\u22121; \u03b20) \u2212F(Ytk\u22121; \u03b2)) \u2212h N\u22121 X k=1 (Uk,k\u22121 + 2\u03be\u2032 k\u22121)\u22a4\u03a3\u22a4 0 DvF(Ytk\u22121; \u03b2)\u22a4(\u03a3\u03a3\u22a4)\u22121\u03a30Uk,k\u22121 + h N\u22121 X k=1 Tr DvF(Ytk; \u03b2), L[PS|R] N (Y0:tN ; \u03b8) = 2(N \u22122) log det \u03a3\u03a3\u22a4+ 3 N\u22121 X k=1 U\u22a4 k,k\u22121\u03a3\u22a4 0 (\u03a3\u03a3\u22a4)\u22121\u03a30Uk,k\u22121 (73) + 6 \u221a h N X k=1 U\u22a4 k,k\u22121\u03a3\u22a4 0 (\u03a3\u03a3\u22a4)\u22121F(Ytk\u22121; \u03b20) \u22123h N\u22121 X k=1 U\u22a4 k,k\u22121\u03a3\u22a4 0 DvN(Ytk\u22121; \u03b2)\u22a4(\u03a3\u03a3\u22a4)\u22121\u03a30Uk,k\u22121 + 2h N\u22121 X k=1 Tr DvN(Ytk; \u03b2), 21 \fStrang Splitting Parameter Estimator for Second-order SDEs A PREPRINT L[PF] N (Y0:tN ; \u03b8) = L[PR] N (Y0:tN ; \u03b8) + L[PS|R] N (Y0:tN ; \u03b8) . 
(74) This time, the term with Av(\u03b2) \u2212\u03a3\u03a3\u22a4Av(\u03b2)\u22a4(\u03a3\u03a3\u22a4)\u22121 vanishes because Tr(\u03a30Uk,k\u22121U\u22a4 k,k\u22121\u03a3\u22a4 0 (\u03a3\u03a3\u22a4)\u22121(Av(\u03b2) \u2212\u03a3\u03a3\u22a4Av(\u03b2)\u22a4(\u03a3\u03a3\u22a4)\u22121)) = 0 due to the symmetry of the matrices and the trace cyclic property. Even though the partial observation objective function L[PR] (X0:tN ; \u03b8) (40) depends only on X0:tN , we could approximate it with L[PR] N (Y0:tN ; \u03b8) (72). This is useful for proving the asymptotic normality of the estimator since its asymptotic distribution will depend on the invariant probability \u03bd0 defined for the solution Y. The absence of the quadratic form F(Ytk\u22121) \u2212F0(Ytk\u22121) in (73) indicates that L[PS|R] N is not suitable for estimating the drift parameter. Additionally, the penultimate term in (73) does not vanish, needing an additional correction term of 2h PN\u22121 k=1 Tr DvN(Ytk; \u03b2) for consistency. This correction is represented as 4 log | det Dvfh/2| in (41). Notably, this term is absent in the complete objective function (31), making this adjustment somewhat artificial and could potentially deviate further from the true log-likelihood. Consequently, the objective function based on the full likelihood (39) inherits this characteristic from (73), suggesting that in the partial observation scenario, using only the rough likelihood (72) may be more appropriate. 7", + "additional_graph_info": { + "graph": [ + [ + "Predrag Pilipovic", + "Susanne Ditlevsen" + ], + [ + "Susanne Ditlevsen", + "Massimiliano Tamborrino" + ], + [ + "Susanne Ditlevsen", + "Irene Tubikanec" + ] + ], + "node_feat": { + "Predrag Pilipovic": [ + { + "url": "http://arxiv.org/abs/2405.03606v1", + "title": "Strang Splitting for Parametric Inference in Second-order Stochastic Differential Equations", + "abstract": "We address parameter estimation in second-order stochastic differential\nequations (SDEs), prevalent in physics, biology, and ecology. Second-order SDE\nis converted to a first-order system by introducing an auxiliary velocity\nvariable raising two main challenges. First, the system is hypoelliptic since\nthe noise affects only the velocity, making the Euler-Maruyama estimator\nill-conditioned. To overcome that, we propose an estimator based on the Strang\nsplitting scheme. Second, since the velocity is rarely observed we adjust the\nestimator for partial observations. We present four estimators for complete and\npartial observations, using full likelihood or only velocity marginal\nlikelihood. These estimators are intuitive, easy to implement, and\ncomputationally fast, and we prove their consistency and asymptotic normality.\nOur analysis demonstrates that using full likelihood with complete observations\nreduces the asymptotic variance of the diffusion estimator. With partial\nobservations, the asymptotic variance increases due to information loss but\nremains unaffected by the likelihood choice. However, a numerical study on the\nKramers oscillator reveals that using marginal likelihood for partial\nobservations yields less biased estimators. 
We apply our approach to\npaleoclimate data from the Greenland ice core and fit it to the Kramers\noscillator model, capturing transitions between metastable states reflecting\nobserved climatic conditions during glacial eras.", + "authors": "Predrag Pilipovic, Adeline Samson, Susanne Ditlevsen", + "published": "2024-05-06", + "updated": "2024-05-06", + "primary_cat": "stat.ME", + "cats": [ + "stat.ME", + "math.ST", + "stat.TH" + ], + "main_content": "Introduction Second-order stochastic differential equations (SDEs) are an effective instrument for modeling complex systems showcasing both deterministic and stochastic dynamics, which incorporate the second derivative of a variable the acceleration. These models are extensively applied in many fields, including physics (Rosenblum and Pikovsky, 2003), molecular dynamics (Leimkuhler and Matthews, 2015), ecology (Johnson et al., 2008; Michelot and Blackwell, 2019), paleoclimate research (Ditlevsen et al., 2002), and neuroscience (Ziv et al., 1994; Jansen and Rit, 1995). arXiv:2405.03606v1 [stat.ME] 6 May 2024 \fStrang Splitting Parameter Estimator for Second-order SDEs A PREPRINT The general form of a second-order SDE in Langevin form is given as follows: \u00a8 Xt = F(Xt, \u02d9 Xt, \u03b2) + \u03a3\u03bet. (1) Here, Xt \u2208Rd denotes the variable of interest, the dot indicates derivative with respect to time t, drift F represents the deterministic force, and \u03bet is a white noise representing the system\u2019s random perturbations around the deterministic force. We assume that \u03a3 is constant, that is the noise is additive. The main goal of this study is to estimate parameters in second-order SDEs. We first reformulate the d-dimensional second-order SDE (1) into a 2d-dimensional SDE in It\u00f4\u2019s form. We define an auxiliary velocity variable, and express the second-order SDE in terms of its position Xt and velocity Vt: dXt = Vt dt, X0 = x0, dVt = F (Xt, Vt; \u03b2) dt + \u03a3 dWt, V0 = v0, (2) where Wt is a standard Wiener process. We refer to Xt and Vt as the smooth and rough coordinates, respectively. A specific example of model (2) is F(x, v) = \u2212c(x, v)v \u2212\u2207U(x), for some function c(\u00b7) and potential U(\u00b7). Then, model (2) is called a stochastic damping Hamiltonian system. This system describes the motion of a particle subjected to potential, dissipative, and random forces (Wu, 2001). An example of a stochastic damping Hamiltonian system is the Kramers oscillator introduced in Section 2.1. Let Yt = (X\u22a4 t , V\u22a4 t )\u22a4, e F(x, v; \u03b2) = (v\u22a4, F(x, v; \u03b2)\u22a4)\u22a4and e \u03a3 = (0\u22a4, \u03a3\u22a4)\u22a4. Then (2) is formulated as dYt = e F (Yt; \u03b2) dt + e \u03a3 dWt, Y0 = y0. (3) The notation e over an object indicates that it is associated with process Yt. Specifically, the object is of dimension 2d or 2d \u00d7 2d. When it exists, the unique solution of (3) is called a diffusion or diffusion process. System (3) is usually not fully observed since the velocity Vt is not observable. Thus, our primary objective is to estimate the underlying drift parameter \u03b2 and the diffusion parameter \u03a3, based on discrete observations of either Yt (referred to as complete observation case), or only Xt (referred to as partial observation case). Diffusion Yt is said to be hypoelliptic since the matrix e \u03a3e \u03a3\u22a4= \u00140 0 0 \u03a3\u03a3\u22a4 \u0015 (4) is not of full rank, while Yt admits a smooth density. 
Thus, (2) is a subclass of a larger class of hypoelliptic diffusions. Parametric estimation for hypoelliptic diffusions is an active area of research. Ditlevsen and S\u00f8rensen (2004) studied discretely observed integrated diffusion processes. They proposed to use prediction-based estimating functions, which are suitable for non-Markovian processes and which do not require access to the unobserved component. They proved consistency and asymptotic normality of the estimators for N \u2192\u221e, but without any requirements on the sampling interval h. Certain moment conditions are needed to obtain results for fixed h, which are often difficult to fulfill for nonlinear drift functions. The estimator was applied to paleoclimate data in Ditlevsen et al. (2002), similar to the data we analyze in Section 5. Gloter (2006) also focused on parametric estimation for discretely observed integrated diffusion processes, introducing a contrast function using the Euler-Maruyama discretization. He studied the asymptotic properties as the sampling interval h \u21920 and the sample size N \u2192\u221e, under the so-called rapidly increasing experimental design Nh \u2192\u221e and Nh2 \u21920. To address the ill-conditioned contrast from the Euler-Maruyama discretization, he suggested using only the rough equations of the SDE. He proposed to recover the unobserved integrated component through the finite difference approximation (Xtk+1 \u2212Xtk)/h. This approximation makes the estimator biased and requires a correction factor of 3/2 in one of the terms of the contrast function for partial observations. Consequently, the correction increases the asymptotic variance of the estimator of the diffusion parameter. Samson and Thieullen (2012) expanded the ideas of (Gloter, 2006) and proved the results of (Gloter, 2006) in more general models. Similar to (Gloter, 2006), their focus was on contrasts using the Euler-Maruyama discretization limited to only the rough equations. Pokern et al. (2009) proposed an It\u00f4-Taylor expansion, adding a noise term of order h3/2 to the smooth component in the numerical scheme. They argued against the use of finite differences for approximating unobserved components. Instead, he suggested using the It\u00f4-Taylor expansion leading to non-degenerate conditionally Gaussian approximations of the transition density and using Markov Chain Monte Carlo (MCMC) Gibbs samplers for conditionally imputing missing components based on the observations. They found out that this approach resulted in a biased estimator of the drift parameter of the rough component. 2 \fStrang Splitting Parameter Estimator for Second-order SDEs A PREPRINT Ditlevsen and Samson (2019) focused on both filtering and inference methods for complete and partial observations. They proposed a contrast estimator based on the strong order 1.5 scheme (Kloeden and Platen, 1992), which incorporates noise of order h3/2 into the smooth component, similar to (Pokern et al., 2009). Moreover, they retained terms of order h2 in the mean, which removed the bias in the drift parameters noted in (Pokern et al., 2009). They proved consistency and asymptotic normality under complete observations, with the standard rapidly increasing experimental design Nh \u2192\u221eand Nh2 \u21920. They adopted an unconventional approach by using two separate contrast functions, resulting in marginal asymptotic results rather than a joint central limit theorem. 
The model was limited to a scalar smooth component and a diagonal diffusion coefficient matrix for the rough component. Melnykova (2020) developed a contrast estimator using local linearization (LL) (Ozaki, 1985; Shoji and Ozaki, 1998; Ozaki et al., 2000) and compared it to the least-squares estimator. She employed local linearization of the drift function, providing a non-degenerate conditional Gaussian discretization scheme, enabling the construction of a contrast estimator that achieves asymptotic normality under the standard conditions Nh \u2192\u221eand Nh2 \u21920. She proved a joint central limit theorem, bypassing the need for two separate contrasts as in Ditlevsen and Samson (2019). The models in Ditlevsen and Samson (2019) and Melnykova (2020) allow for parameters in the smooth component of the drift, in contrast to models based on second-order differential equations. Recent work by Gloter and Yoshida (2020, 2021) introduced adaptive and non-adaptive methods in hypoelliptic diffusion models, proving asymptotic normality in the complete observation regime. In line with this work, we briefly review their non-adaptive estimator. It is based on a higher-order It\u00f4-Taylor expansion that introduces additional Gaussian noise onto the smooth coordinates, accompanied by an appropriate higher-order mean approximation of the rough coordinates. The resulting estimator was later termed the local Gaussian (LG), which should be differentiated from LL. The LG estimator can be viewed as an extension of the estimator proposed in Ditlevsen and Samson (2019), with fewer restrictions on the class of models. Gloter and Yoshida (2020, 2021) found that using the full SDE to create a contrast reduces the asymptotic variance of the estimator of the diffusion parameter compared to methods using only rough coordinates in the case of complete observations. The most recent contributions are Iguchi et al. (2023a,b); Iguchi and Beskos (2023), building on the foundation of the LG estimator and focusing on high-frequency regimes addressing limitations in earlier methods. Iguchi et al. (2023b) presented a new closed-form contrast estimator for hypoelliptic SDEs (denoted as Hypo-I) based on Edgeworth-type density expansion and Malliavin calculus that achieves asymptotic normality under the less restrictive condition of Nh3 \u21920. Iguchi et al. (2023a) focused on a highly degenerate class of SDEs (denoted as Hypo-II) where smooth coordinates split into further sub-groups and proposed estimators for both complete and partial observation settings. Iguchi and Beskos (2023) further refined the conditions for estimators asymptotic normality for both Hypo-I and Hypo-II under a weak design Nhp \u21920, for p \u22652. The existing methods are generally based on approximations with varying degrees of refinements to correct for possible nonlinearities. This implies that they quickly degrade for highly nonlinear models if the step size is increased. In particular, this is the case for Hamiltonian systems. Instead, we propose to use splitting schemes, more precisely the Strang splitting scheme. Splitting schemes are established techniques initially developed for solving ordinary differential equations (ODEs) and have proven to be effective also for SDEs (Ableidinger et al., 2017; Buckwar et al., 2022; Pilipovic et al., 2024). These schemes yield accurate results in many practical applications since they incorporate nonlinearities in their construction. 
This makes them particularly suitable for second-order SDEs, where they have been widely used. Early work in dissipative particle dynamics (Shardlow, 2003; Serrano et al., 2006), applications to molecular dynamics (Vanden-Eijnden and Ciccotti, 2006; Melchionna, 2007; Leimkuhler and Matthews, 2015) and studies on internal particles (Pavliotis et al., 2009) all highlight the scheme\u2019s versatility. Burrage et al. (2007), Bou-Rabee and Owhadi (2010), and Abdulle et al. (2015) focused on the long-run statistical properties such as invariant measures. Bou-Rabee (2017); Br\u00e9hier and Gouden\u00e8ge (2019) and Adams et al. (2022) used splitting schemes for stochastic partial differential equations (SPDEs). Despite the extensive use of splitting schemes in different areas, statistical applications have been lacking. We have recently proposed statistical estimators for elliptic SDEs (Pilipovic et al., 2024). The straightforward and intuitive schemes lead to robust, easy-to-implement estimators, offering an advantage over more numerically intensive and less user-friendly state-of-the-art methods. We use the Strang splitting scheme to approximate the transition density between two consecutive observations and derive the pseudo-likelihood function since the exact likelihood function is often unknown or intractable. Then, to estimate parameters, we employ maximum likelihood estimation (MLE). However, two specific statistical problems arise due to hypoellipticity and partial observations. First, hypoellipticity leads to degenerate Euler-Maruyama transition schemes, which can be addressed by constructing the pseudo-likelihood solely from the rough equations of the SDE, referred to as the rough likelihood hereafter. The 3 \fStrang Splitting Parameter Estimator for Second-order SDEs A PREPRINT Strang splitting technique enables the estimator to incorporate both smooth and rough components (referred to as the full likelihood). It is also possible to construct Strang splitting estimators using only the rough likelihood, raising the question of which estimator performs better. Our results are in line with Gloter and Yoshida (2020, 2021) in the complete observation setting, where we find that using the full likelihood reduces the asymptotic variance of the diffusion estimator. We found the same results in the simulation study for the LL estimator proposed by Melnykova (2020). Second, we suggest to treat the unobserved velocity by approximating it using finite difference methods. While Gloter (2006) and Samson and Thieullen (2012) exclusively use forward differences, we investigate also central and backward differences. The forward difference approach leads to a biased estimator unless it is corrected. One of the main contributions of this work is finding suitable corrections of the pseudo-likelihoods for different finite difference approximations such that the Strang estimators are asymptotically unbiased. This also ensures consistency of the diffusion parameter estimator, at the cost of increasing its asymptotic variance. When only partial observations are available, we explore the impact of using the full likelihood versus the rough likelihood and how different finite differentiation approximations influence the parametric inference. We find that the choice of likelihood does not affect the asymptotic variance of the estimator. 
However, our simulation study on the Kramers oscillator suggests that using the full likelihood in finite sample setups introduce more bias than using only the rough marginal likelihood, which is the opposite of the complete observation setting. Finally, we analyze a paleoclimate ice core dataset from Greenland using a second-order SDE. The main contributions of this paper are: 1. We extend the Strang splitting estimator of (Pilipovic et al., 2024) to hypoelliptic models given by second-order SDEs, including appropriate correction factors to obtain consistency. 2. When complete observations are available, we show that the asymptotic variance of the estimator of the diffusion parameter is smaller when maximizing the full likelihood. In contrast, for partial observations, we show that the asymptotic variance remains unchanged regardless of using the full or marginal likelihood of the rough coordinates. 3. We discuss the influence on the statistical properties of using the forward difference approximation for imputing the unobserved velocity variables compared to using the backward or the central difference. 4. We evaluate the performance of the estimators through a simulation study of a second-order SDE, the Kramers oscillator. Additionally, we show numerically in a finite sample study that the marginal likelihood for partial observations is more favorable than the full likelihood. 5. We fit the Kramers oscillator to a paleoclimate ice core dataset from Greenland and estimate the average time needed to pass between two metastable states. The structure of the paper is as follows. In Section 2, we introduce the class of SDE models, define hypoellipticity, introduce the Kramers oscillator, and explain the Strang splitting scheme and its associated estimators. The asymptotic properties of the estimator are established in Section 3. The theoretical results are illustrated in a simulation study on the Kramers Oscillator in Section 4. Section 5 illustrates our methodology on the Greenland ice core data, while the technical results and the proofs of the main theorems and properties are in Section 6 and Supplementary Material S1, respectively. Notation. We use capital bold letters for random vectors, vector-valued functions, and matrices, while lowercase bold letters denote deterministic vectors. \u2225\u00b7 \u2225denotes both the L2 vector norm in Rd. Superscript (i) on a vector denotes the i-th component, while on a matrix it denotes the i-th column. Double subscript ij on a matrix denotes the component in the i-th row and j-th column. The transpose is denoted by \u22a4. Operator Tr(\u00b7) returns the trace of a matrix and det(\u00b7) the determinant. Id denotes the d-dimensional identity matrix, while 0d\u00d7d is a d-dimensional zero square matrix. We denote by [ai]d i=1 a vector with coordinates ai, and by [bij]d i,j=1 a matrix with coordinates bij, for i, j = 1, . . . , d. For a real-valued function g : Rd \u2192R, \u2202x(i)g(x) denotes the partial derivative with respect to x(i) and \u22022 x(i)x(j)g(x) denotes the second partial derivative with respect to x(i) and x(j). The nabla operator \u2207x denotes the gradient vector of g with respect of x, that is, \u2207xg(x) = [\u2202x(i)g(x)]d i=1. H denotes the Hessian matrix of function g, Hg(x) = [\u2202x(i)x(j)g(x)]d i,j=1. For a vector-valued function F : Rd \u2192Rd, the differential operator Dx denotes the Jacobian matrix DxF(x) = [\u2202x(i)F (j)(x)]d i,j=1. 
Let R represent a vector (or a matrix) valued function defined on (0, 1) \u00d7 Rd (or (0, 1) \u00d7 Rd\u00d7d), such that, for some constant C, \u2225R(a, x)\u2225< aC(1 + \u2225x\u2225)C for all a, x. When denoted by R, it refers to a scalar function. For an open set A, the bar A indicates closure. We write P \u2212 \u2192for convergence in probability P. 4 \fStrang Splitting Parameter Estimator for Second-order SDEs A PREPRINT 2 Problem setup Let Y = (Yt)t\u22650 in (3) be defined on a complete probability space (\u2126, F, P\u03b8) with a complete right-continuous filtration F = (Ft)t\u22650, and let the d-dimensional Wiener process W = (Wt)t\u22650 be adapted to Ft. The probability measure P\u03b8 is parameterized by the parameter \u03b8 = (\u03b2, \u03a3). Rewrite equation (3) as follows: dYt = e A(\u03b2)(Yt \u2212e b(\u03b2)) dt + e N (Yt; \u03b2) dt + e \u03a3 dWt, (5) where e A(\u03b2) = \u0014 0d\u00d7d Id Ax(\u03b2) Av(\u03b2) \u0015 , e b(\u03b2) = \u0014 b(\u03b2) 0d \u0015 , e N(x, v; \u03b2) = \u0014 0d N(x, v; \u03b2) \u0015 . (6) Function F in (2) is thus split as F(x, v; \u03b2) = Ax(\u03b2)(x \u2212b(\u03b2)) + Av(\u03b2)v + N(x, v; \u03b2). Let \u0398\u03b2 \u00d7 \u0398\u03a3 = \u0398 denote the closure of the parameter space with \u0398\u03b2 and \u0398\u03a3 being two convex open bounded subsets of Rr and Rd\u00d7d, respectively. The function N : R2d \u00d7 \u0398\u03b2 \u2192Rd is assumed locally Lipschitz; functions Ax and Av are defined on \u0398\u03b2 and take values in Rd\u00d7d; and the parameter matrix \u03a3 takes values in Rd\u00d7d. The matrix \u03a3\u03a3\u22a4is assumed to be positive definite, shaping the variance of the rough coordinates. As any square root of \u03a3\u03a3\u22a4induces the same distribution, \u03a3 is identifiable only up to equivalence classes. Hence, estimation of the parameter \u03a3 means estimation of \u03a3\u03a3\u22a4. The drift function e F in (3) is divided into a linear part given by the matrix e A and a nonlinear part given by e N. The true value of the parameter is denoted by \u03b80 = (\u03b20, \u03a30), and we assume that \u03b80 \u2208\u0398. When referring to the true parameters, we write Ax,0, Av,0, b0, N0(x), F0(x) and \u03a3\u03a3\u22a4 0 instead of Ax(\u03b20), Av(\u03b20), b(\u03b20), N(x; \u03b20), F(x; \u03b20) and \u03a30\u03a3\u22a4 0 , respectively. We write Ax, Av, b, N(x), F(x), and \u03a3\u03a3\u22a4for any parameter \u03b8. 2.1 Example: The Kramers oscillator The abrupt temperature changes during the ice ages, known as the Dansgaard\u2013Oeschger (DO) events, are essential elements for understanding the climate (Dansgaard et al., 1993). These events occurred during the last glacial era spanning approximately the period from 115,000 to 12,000 years before present and are characterized by rapid warming phases followed by gradual cooling periods, revealing colder (stadial) and warmer (interstadial) climate states (Rasmussen et al., 2014). To analyze the DO events in Section 5, we propose a stochastic model of the escape dynamics in metastable systems, the Kramers oscillator (Kramers, 1940), originally formulated to model the escape rate of Brownian particles from potential wells. The escape rate is related to the mean first passage time \u2014 the time needed for a particle to exceed the potential\u2019s local maximum for the first time, starting at a neighboring local minimum. 
This rate depends on variables such as the damping coefficient, noise intensity, temperature, and specific potential features, including the barrier\u2019s height and curvature at the minima and maxima. We apply this framework to quantify the rate of climate transitions between stadial and interstadial periods. This provides an estimate on the probability distribution of the ocurrence of DO events, contributing to our understanding of the global climate system. Following Arnold and Imkeller (2000), we introduce the Kramers oscillator as the stochastic Duffing oscillator an example of a second-order SDE and a stochastic damping Hamiltonian system. The Duffing oscillator (Duffing, 1918) is a forced nonlinear oscillator, featuring a cubic stiffness term. The governing equation is given by: \u00a8 xt + \u03b7 \u02d9 xt + d dxU(xt) = f(t), where U(x) = \u2212ax2 2 + bx4 4 , with a, b > 0, \u03b7 \u22650. (7) The parameter \u03b7 in (7) indicates the damping level, a regulates the linear stiffness, and b determines the nonlinear component of the restoring force. In the special case where b = 0, the equation simplifies to a damped harmonic oscillator. Function f represents the driving force and is usually set to f(t) = \u03b7 cos(\u03c9t), which introduces deterministic chaos (Korsch and Jodl, 1999). When the driving force is f(t) = \u221a2\u03b7T\u03be(t), where \u03be(t) is white noise, equation (7) characterizes the stochastic movement of a particle within a bistable potential well, interpreting T > 0 as the temperature of a heat bath. Setting \u03c3 = \u221a2\u03b7T, equation (7) can be reformulated as an It\u00f4 SDE for variables Xt and Vt = \u02d9 Xt, expressed as: dXt = Vt dt, dVt = \u0012 \u2212\u03b7Vt \u2212d dxU(Xt) \u0013 dt + \u03c3 dWt, (8) 5 \fStrang Splitting Parameter Estimator for Second-order SDEs A PREPRINT where Wt denotes a standard Wiener process. The parameter set of SDE (8) is \u03b8 = {\u03b7, a, b, \u03c32}. The existence and uniqueness of the invariant measure \u03bd0(dx, dy) of (8) is proved in Theorem 3 in (Arnold and Imkeller, 2000). The invariant measure \u03bd0 is linked to the invariant density \u03c00 through \u03bd0(dx, dy) = \u03c00(x, v) dx dy. Here we write \u03c00(x, v) instead of \u03c0(x, v; \u03b80), and \u03c0(x, v) instead of \u03c0(x, v; \u03b8). The Fokker-Plank equation for \u03c0 is given by \u2212v \u2202 \u2202x\u03c0(x, v) + \u03b7\u03c0(x, v) + \u03b7v \u2202 \u2202v \u03c0(x, v) + d dxU(x) \u2202 \u2202v \u03c0(x, v) + \u03c32 2 \u22022 \u2202v2 \u03c0(x, v) = 0. (9) The invariant density that solves the Fokker-Plank equation is: \u03c0(x, v) = C exp \u0012 \u22122\u03b7 \u03c32 U(x) \u0013 exp \u0010 \u2212\u03b7 \u03c32 v2\u0011 , (10) where C is the normalizing constant. The marginal invariant probability of Vt is thus Gaussian with zero mean and variance \u03c32/(2\u03b7). The marginal invariant probability of Xt is bimodal driven by the potential U(x): \u03c0(x) = C exp \u0012 \u22122\u03b7 \u03c32 U(x) \u0013 . (11) At steady state, for a particle moving in any potential U(x) and driven by random Gaussian noise, the position x and velocity v are independent of each other. This is reflected by the decomposition of the joint density \u03c0(x, v) into \u03c0(x)\u03c0(v). 
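For reference, the two marginals (10)-(11) can be evaluated numerically in a few lines of R (a minimal sketch; the function names are ours and the normalizing constant is obtained by quadrature).

U    <- function(x, a, b) -a * x^2 / 2 + b * x^4 / 4
pi_x <- function(x, eta, a, b, sigma2) {
  f <- function(z) exp(-2 * eta / sigma2 * U(z, a, b))
  f(x) / integrate(f, -Inf, Inf)$value    # bimodal marginal (11)
}
pi_v <- function(v, eta, sigma2)          # Gaussian marginal of the velocity
  dnorm(v, mean = 0, sd = sqrt(sigma2 / (2 * eta)))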
Fokker-Plank equation (9) can also be used to derive the mean first passage time \u03c4 which is inversely related to Kramers\u2019 escape rate \u03ba (Kramers, 1940): \u03c4 = 1 \u03ba \u2248 2\u03c0 \u0012q 1 + \u03b72 4\u03c92 \u2212 \u03b7 2\u03c9 \u0013 \u2126 exp \u0012\u2206U T \u0013 , where xbarrier = 0 is the local maximum of U(x) and xwell = \u00b1 p a/b are the local minima, \u03c9 = p |U \u2032\u2032(xbarrier)| = \u221aa, \u2126= p U \u2032\u2032(xwell) = \u221a 2a, and \u2206U = U(xbarrier) \u2212U(xwell) = a2/4b, . The formula is derived assuming strong friction, or an over-damped system (\u03b7 \u226b\u03c9), and a small parameter T/\u2206U \u226a1, indicating sufficiently deep potential wells. For the potential defined in (7), the mean waiting time \u03c4 is then approximated by \u03c4 \u2248 \u221a 2\u03c0 q a + \u03b72 4 \u2212\u03b7 2 exp \u0012 a2\u03b7 2b\u03c32 \u0013 . (12) 2.2 Hypoellipticity The SDE (5) is said to be hypoelliptic if its quadratic diffusion matrix e \u03a3e \u03a3\u22a4is not of full rank, while its solutions admit a smooth transition density with respect to the Lebesgue measure. According to H\u00f6rmander\u2019s theorem (Nualart, 2006), this is fulfilled if the SDE in its Stratonovich form satisfies the weak H\u00f6rmander condition. Since \u03a3 does not depend on y, the It\u00f4 and Stratonovich forms coincide. We begin by recalling the concept of Lie brackets: for smooth vector fields f, g : R2d \u2192R2d, the i-th component of the Lie bracket, [f, g](i), is defined as [f, g](i) := D\u22a4 y g(i)(y)f(y) \u2212D\u22a4 y f (i)(y)g(y). We define the set H of vector fields by initially including e \u03a3(i), i = 1, 2, ..., 2d, and then recursively adding Lie brackets H \u2208H \u21d2[e F, H], [e \u03a3(1), H], . . . , [e \u03a3(2d), H] \u2208H. The weak H\u00f6rmander condition is met if the vectors in H span R2d at every point y \u2208R2d. The initial vectors span {(0, v) \u2208R2d | v \u2208Rd}, a d-dimensional subspace. We therefore need to verify the existence of some H \u2208H with a non-zero first element. The first iteration of the system yields [e F, e \u03a3(i)](1) = \u2212\u03a3(i), [e \u03a3(i), e \u03a3(j)](1) = 0, for i, j = 1, 2, ..., 2d. The first equation is non-zero, as are all subsequent iterations. Thus, the second-order SDE defined in (5) is always hypoelliptic. 6 \fStrang Splitting Parameter Estimator for Second-order SDEs A PREPRINT 2.3 Assumptions The following assumptions are a generalization of those presented in (Pilipovic et al., 2024). Let T > 0 be the length of the observed time interval. We assume that (5) has a unique strong solution Y = {Yt | t \u2208[0, T]}, adapted to F = {Ft | t \u2208[0, T]}, which follows from the following first two assumptions (Theorem 2 in Alyushina (1988), Theorem 1 in Krylov (1991), Theorem 3.5 in Mao (2007)). We need the last three assumptions to prove the properties of the estimators. (A1) Function N is twice continuously differentiable with respect to both y and \u03b8, i.e., N \u2208C2. Moreover, it is globally one-sided Lipschitz continuous with respect to y on R2d \u00d7 \u0398\u03b2. That is, there exists a constant C > 0 such that for all y1, y2 \u2208R2d, (y1 \u2212y2)\u22a4(N(y1; \u03b2) \u2212N(y2; \u03b2)) \u2264C\u2225y1 \u2212y2\u22252. (A2) Function N exhibits at most polynomial growth in y, uniformly in \u03b8. 
Specifically, there exist constants C > 0 and \u03c7 \u22651 such that for all y1, y2 \u2208R2d, \u2225N (y1; \u03b2) \u2212N (y2; \u03b2) \u22252 \u2264C \u00001 + \u2225y1\u22252\u03c7\u22122 + \u2225y2\u22252\u03c7\u22122\u0001 \u2225y1 \u2212y2\u22252. Additionally, its derivatives exhibit polynomial growth in y, uniformly in \u03b8. (A3) The solution Y to SDE (5) has invariant probability \u03bd0(dy). (A4) \u03a3\u03a3\u22a4is invertible on \u0398\u03a3. (A5) \u03b2 is identifiable, that is, if F(y, \u03b21) = F(y, \u03b22) for all y \u2208R2d, then \u03b21 = \u03b22. Assumption (A1) ensures finiteness of the moments of the solution X (Tretyakov and Zhang, 2013), i.e., E[ sup t\u2208[0,T ] \u2225Yt\u22252p] < C(1 + \u2225y0\u22252p), \u2200p \u22651. (13) Assumption (A3) is necessary for the ergodic theorem to ensure convergence in distribution. Assumption (A4) ensures that the model (5) is hypoelliptic. Assumption (A5) ensures the identifiability of the drift parameter. 2.4 Strang splitting scheme Consider the following splitting of (5): dY[1] t = e A(Y[1] t \u2212e b) dt + e \u03a3 dWt, Y[1] 0 = y0, (14) dY[2] t = e N(Y[2] t ) dt, Y[2] 0 = y0. (15) There are no assumptions on the choice of e A and e b, and thus the nonlinear function e N. Indeed, we show that the asymptotic results hold for any choice of e A and e b in both the complete and the partial observation settings. This extends the results in Pilipovic et al. (2024), where it is shown to hold in the elliptic complete observation case, as well. While asymptotic results are invariant to the choice of e A and e b, finite sample properties of the scheme and the corresponding estimators are very different, and it is important to choose the splitting wisely. Intuitively, when the process is close to a fixed point of the drift, the linear dynamics are dominating, whereas far from the fixed points, the nonlinearities might be dominating. If the drift has a fixed point y\u22c6, we therefore suggest setting e A = Dye F(y\u22c6) and e b = y\u22c6. This choice is confirmed in simulations (for more details see Pilipovic et al. (2024)). Solution of SDE (14) is an Ornstein\u2013Uhlenbeck (OU) process given by the following h-flow: Y[1] tk = \u03a6[1] h (Y[1] tk\u22121) = e \u00b5h(Y[1] tk\u22121; \u03b2) + e \u03b5h,k, (16) e \u00b5h(y; \u03b2) := e e Ah(y \u2212e b) + e b, (17) e \u2126h = Z h 0 e e A(h\u2212u) e \u03a3e \u03a3\u22a4e e A\u22a4(h\u2212u) du, (18) where e \u03b5h,k i.i.d \u223cN2d(0, e \u2126h) for k = 1, . . . , N. It is useful to rewrite e \u2126h in the following block matrix form, e \u2126h = \" \u2126[SS] h \u2126[SR] h \u2126[RS] h \u2126[RR] h # , (19) 7 \fStrang Splitting Parameter Estimator for Second-order SDEs A PREPRINT where S in the superscript stands for smooth and R stands for rough. The Schur complement of e \u2126h with respect to \u2126[RR] h and the determinant of e \u2126h are given by: \u2126[S|R] h := \u2126[SS] h \u2212\u2126[SR] h (\u2126[RR] h )\u22121\u2126[RS] h , det e \u2126h = det \u2126[RR] h det \u2126[S|R] h . Assumptions (A1)-(A2) ensure the existence and uniqueness of the solution of (15) (Theorem 1.2.17 in Humphries and Stuart (2002)). Thus, there exists a unique function e fh : R2d \u00d7 \u0398\u03b2 \u2192R2d, for h \u22650, such that Y[2] tk = \u03a6[2] h (Y[2] tk\u22121) = e fh(Y[2] tk\u22121; \u03b2). 
(20) For all \u03b2 \u2208\u0398\u03b2, the h-flow e fh fulfills the following semi-group properties: e f0(y; \u03b2) = y, e ft+s(y; \u03b2) = e ft( e fs(y; \u03b2); \u03b2), t, s \u22650. For y = (x\u22a4, v\u22a4)\u22a4, we have: e fh(x, v; \u03b2) = \u0014 x fh(x, v; \u03b2) \u0015 , (21) where fh(x, v; \u03b2) is the solution of the ODE with vector field N(x, v; \u03b2). We introduce another assumption needed to define the pseudo-likelihood based on the splitting scheme. (A6) Inverse function e f \u22121 h (y; \u03b2) is defined asymptotically for all y \u2208R2d and all \u03b2 \u2208\u0398\u03b2, when h \u21920. Then, the inverse of \u02dc fh can be decomposed as: e f \u22121 h (x, v; \u03b2) = \u0014 x f \u22c6\u22121 h (x, v; \u03b2) \u0015 , (22) where f \u22c6\u22121 h (x, v; \u03b2) is the rough part of the inverse of e f \u22121 h . It does not equal f \u22121 h since the inverse does not propagate through coordinates when fh depends on x. We are now ready to define the Strang splitting scheme for model (5). Definition 2.1 (Strang splitting) Let Assumptions (A1)-(A2) hold. The Strang approximation of the solution of (5) is given by: \u03a6[str] h (Y[str] tk\u22121) = (\u03a6[2] h/2 \u25e6\u03a6[1] h \u25e6\u03a6[2] h/2)(Y[str] tk\u22121) = e fh/2(e \u00b5h( e fh/2(Y[str] tk\u22121)) + e \u03b5h,k). (23) Remark 1 The order of composition in the splitting schemes is not unique. Changing the order in the Strang splitting leads to a sum of 2 independent random variables, one Gaussian and one non-Gaussian, whose likelihood is not trivial. Thus, we only use the splitting (23). 2.5 Strang splitting estimators In this section, we introduce four estimators, all based on the Strang splitting scheme. We distinguish between estimators based on complete observations (denoted by C when both X and V are observed) and partial observations (denoted by P when only X is observed). In applications, we typically only have access to partial observations, however, the full observation estimator is used as a building block for the partial observation case. Additionally, we distinguish the estimators based on the type of likelihood function employed. These are the full likelihood (denoted by F) and the marginal likelihood of the rough component (denoted by R). We furthermore use the conditional likelihood based on the smooth component given the rough part (denoted by S | R) to decompose the full likelihood. 2.5.1 Complete observations Assume we observe the complete sample Y0:tN := (Ytk)N k=1 from (5) at time steps 0 = t0 < t1 < ... < tN = T. For notational simplicity, we assume equidistant step size h = tk \u2212tk\u22121. Strang splitting scheme (23) is a nonlinear transformation of a Gaussian random variable e \u00b5h( e fh/2(Y[str] tk\u22121)) + e \u03b5h,k. We define: e Zk,k\u22121(\u03b2) := e f \u22121 h/2(Ytk; \u03b2) \u2212e \u00b5h( e fh/2(Ytk\u22121; \u03b2); \u03b2), (24) 8 \fStrang Splitting Parameter Estimator for Second-order SDEs A PREPRINT and apply change of variables to get: p(ytk | ytk\u22121) = pN (0,e \u2126h)(e zk,k\u22121 | ytk\u22121)| det Dy e f \u22121 h/2(ytk)|. 
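To illustrate the Gaussian ingredients entering (24), the following R sketch computes the Ornstein-Uhlenbeck mean map (16)-(17) and the covariance integral (18) numerically (assumptions: the expm package for matrix exponentials; the 2x2 matrices are an illustrative choice resembling a linearized second-order model with degenerate noise, not values taken from the paper):

library(expm)
A_tilde  <- matrix(c(0, -2, 1, -6.5), 2, 2)     # example linear part (column-major)
b_tilde  <- c(0, 0)
SS_tilde <- matrix(c(0, 0, 0, 0.1), 2, 2)       # noise acts only on the rough coordinate
h <- 0.1

mu_h <- function(y) as.vector(expm(A_tilde * h) %*% (y - b_tilde) + b_tilde)   # mean map (17)

## midpoint rule for the covariance integral (18)
omega_h <- function(n = 1000) {
  u <- (seq_len(n) - 0.5) * h / n
  Reduce(`+`, lapply(u, function(ui) {
    E <- expm(A_tilde * (h - ui))
    E %*% SS_tilde %*% t(E) * (h / n)
  }))
}
mu_h(c(0.5, 0))     # conditional mean of the linear h-flow started at (0.5, 0)
omega_h()           # full covariance matrix; its blocks are the entries of (19)

Combining these quantities with the nonlinear half-flow f_{h/2} and its inverse gives the residuals Z_{k,k-1} of (24), whose Gaussian density appears in the change of variables above.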
Using \u2212log | det Dy e f \u22121 h/2 (y; \u03b2) | = log | det Dy e fh/2 (y; \u03b2) | and det Dy e fh/2 (y; \u03b2) = det Dvfh/2 (y; \u03b2), together with the Markov property of Y0:tN , we get the following objective function based on the full log-likelihood: L[CF](Y0:tN ; \u03b8) := N X k=1 \u0010 log det e \u2126h(\u03b8) + e Zk,k\u22121(\u03b2)\u22a4e \u2126h(\u03b8)\u22121e Zk,k\u22121(\u03b2) + 2 log | det Dvfh/2(Ytk; \u03b2)| \u0011 . (25) Now, split e Zk,k\u22121 from (24) into the smooth and rough parts e Zk,k\u22121 = ((Z[S] k,k\u22121)\u22a4, (Z[R] k,k\u22121)\u22a4)\u22a4defined as: Z[S] k,k\u22121(\u03b2) := [ e Z(i) k,k\u22121(\u03b2)]d i=1 = Xtk \u2212\u00b5[S] h ( e fh/2(Ytk\u22121; \u03b2); \u03b2), (26) Z[R] k,k\u22121(\u03b2) := [ e Z(i) k,k\u22121(\u03b2)]2d i=d+1 = f \u22c6\u22121 h/2 (Ytk; \u03b2) \u2212\u00b5[R] h ( e fh/2(Ytk\u22121; \u03b2); \u03b2), (27) where \u00b5[S] h (y; \u03b2) := [e \u00b5(i) h (y; \u03b2)]d i=1, \u00b5[R] h (y; \u03b2) := [e \u00b5(i) h (y; \u03b2)]2d i=d+1. (28) We also define the following sequence of vectors Z[S|R] k,k\u22121(\u03b2) := Z[S] k,k\u22121(\u03b2) \u2212\u2126[SR] h (\u2126[RR] h )\u22121Z[R] k,k\u22121(\u03b2). (29) The formula for jointly normal distributions yields: pN (0,e \u2126h)(e zk,k\u22121 | ytk\u22121) = pN (0,\u2126[RR] h )(z[R] k,k\u22121 | ytk\u22121) \u00b7 pN (\u2126[SR] h (\u2126[RR] h )\u22121z[R] k,k\u22121,\u2126[S|R] h )(z[S] k,k\u22121 | z[R] k,k\u22121, ytk\u22121). This leads to dividing the full log-likelihood L[CF] into a sum of the marginal log-likelihood L[CR](Y0:tN ; \u03b8) and the smooth-given-rough log-likelihood L[CS|R](Y0:tN ; \u03b8): L[CF](Y0:tN ; \u03b8) = L[CR](Y0:tN ; \u03b8) + L[CS|R](Y0:tN ; \u03b8), where L[CR] (Y0:tN ; \u03b8) := N X k=1 log det \u2126[RR] h (\u03b8) + Z[R] k,k\u22121 (\u03b2)\u22a4\u2126[RR] h (\u03b8)\u22121Z[R] k,k\u22121 (\u03b2) + 2 log \f \fdet Dvfh/2 (Ytk; \u03b2) \f \f ! , (30) L[CS|R] (Y0:tN ; \u03b8) := N X k=1 \u0010 log det \u2126[S|R] h (\u03b8) + Z[S|R] k,k\u22121(\u03b2)\u22a4\u2126[S|R] h (\u03b8)\u22121Z[S|R] k,k\u22121(\u03b2) \u0011 . (31) The terms containing the drift parameter in L[CR] in (30) are of order h1/2, as in the elliptic case, whereas the terms containing the drift parameter in L[CS|R] in (31) are of order h3/2. Consequently, under a rapidly increasing experimental design where Nh \u2192\u221eand Nh2 \u21920, the objective function (31) is degenerate for estimating the drift parameter. However, it contributes to the estimation of the diffusion parameter when the full objective function (25) is used. We show in later sections that employing (25) results in a lower asymptotic variance for the diffusion parameter making it more efficient in complete observation scenarios. The estimators based on complete observations are then defined as: \u02c6 \u03b8[obj] N := arg min \u03b8 L[obj] (Y0:tN ; \u03b8) , obj \u2208{[CF], [CR]}. (32) Although the full objective function is based on twice as many equations as the marginal likelihood, its implementation complexity, speed, and memory requirements are similar to the marginal objective function. Therefore, if the complete observations are available, we recommend using the objective function (25) based on the full likelihood. 9 \fStrang Splitting Parameter Estimator for Second-order SDEs A PREPRINT 2.5.2 Partial observations Assume we only observe the smooth coordinates X0:tN := (Xtk)N k=0. The observed process Xt alone is not a Markov process, although the complete process Yt is. 
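For concreteness, the complete-observation rough objective (30) above can be evaluated for the Kramers oscillator of Section 2.1 as in the following sketch (assumptions: the expm package; the fixed-point splitting suggested in Section 2.4, which here gives A = [[0, 1], [-2a, -eta]], b = (x*, 0) with x* = ±sqrt(a/b), and a nonlinear part whose h-flow only shifts the velocity so that det Dv f_{h/2} = 1; an illustration, not the paper's implementation):

library(expm)
strang_CR_obj <- function(theta, X, V, h) {
  eta <- theta[1]; a <- theta[2]; b <- theta[3]; sigma2 <- theta[4]   # a, b > 0 assumed
  A <- matrix(c(0, -2 * a, 1, -eta), 2, 2)
  ## rough block of Omega_h in (18)-(19), via a midpoint rule
  n <- 50; u <- (seq_len(n) - 0.5) * h / n
  Om <- Reduce(`+`, lapply(u, function(ui) {
    E <- expm(A * (h - ui)); E %*% diag(c(0, sigma2)) %*% t(E) * (h / n)
  }))
  Om_RR <- Om[2, 2]
  eAh <- expm(A * h)
  Nv  <- function(x, xs) a * x - b * x^3 + 2 * a * (x - xs)   # velocity part of the nonlinearity
  val <- 0
  for (k in 2:length(X)) {
    xs <- sign(X[k - 1]) * sqrt(a / b)   # well closest to the previous point (one convention)
    ## half-step nonlinear flow, then the OU mean map exp(Ah)(y - b) + b
    y_half <- c(X[k - 1], V[k - 1] + (h / 2) * Nv(X[k - 1], xs))
    mu <- eAh %*% (y_half - c(xs, 0)) + c(xs, 0)
    ## rough part of the inverse half-flow at the new point, then the residual (27)
    Z_R <- (V[k] - (h / 2) * Nv(X[k], xs)) - mu[2]
    val <- val + log(Om_RR) + Z_R^2 / Om_RR    # the log|det Dv f_{h/2}| term vanishes here
  }
  val
}

Minimizing this function over theta (with optim, or with torch as described in Section 4.3) gives the estimator in (32) for obj = [CR]; adding the smooth-given-rough terms (31) in the same way yields the full objective (25).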
To approximate Vtk, we define the backward difference process: \u2206hXtk := Xtk \u2212Xtk\u22121 h . (33) From SDE (2) it follows that \u2206hXtk = 1 h Z tk tk\u22121 Vt dt. (34) We propose to approximate Vtk using \u2206hXtk by any of the three approaches: 1. Backward difference approximation: Vtk \u2248\u2206hXtk; 2. Forward difference approximation: Vtk \u2248\u2206hXtk+1; 3. Central difference approximation: Vtk \u2248 \u2206hXtk +\u2206hXtk+1 2 . The forward difference approximation performs best in our simulation study, which is also the approximation method employed in Gloter (2006) and Samson and Thieullen (2012). In the field of numerical approximations of ODEs, backward and forward finite differences have the same order of convergence, whereas the central difference has a higher convergence rate. However, the diffusion parameter estimator based on the central difference (Xtk+1 \u2212Xtk\u22121)/2h is less suitable because this approximation skips a data point and thus increases the estimator\u2019s variance. For further discussion, see Remark 6. Thus, we focus exclusively on forward differences, following Gloter (2006); Samson and Thieullen (2012), and all proofs are done for this approximation. Similar results also hold for the backward difference, with some adjustments needed in the conditional moments due to filtration issues. We start by approximating e Z for the case of partial observations denoted by e Z: e Zk+1,k,k\u22121(\u03b2) := e f \u22121 h/2(Xtk, \u2206hXtk+1; \u03b2) \u2212e \u00b5h( e fh/2(Xtk\u22121, \u2206hXtk; \u03b2); \u03b2). (35) The smooth and rough parts of e Z are thus equal to: Z [S] k,k\u22121(\u03b2) := Xtk \u2212\u00b5[S] h ( e fh/2(Xtk\u22121, \u2206hXtk; \u03b2); \u03b2), (36) Z [R] k+1,k,k\u22121(\u03b2) := f \u22c6\u22121 h/2 (Xtk, \u2206hXtk+1; \u03b2) \u2212\u00b5[R] h ( e fh/2(Xtk\u22121, \u2206hXtk; \u03b2); \u03b2), (37) and Z [S|R] k+1,k,k\u22121(\u03b2) := Z [S] k,k\u22121(\u03b2) \u2212\u2126[SR] h (\u2126[RR] h )\u22121Z [R] k+1,k,k\u22121(\u03b2). (38) Compared to Z[R] k,k\u22121 in (27), Z [R] k+1,k,k\u22121 in (37) depends on three consecutive data points, with the additional point Xtk+1 entering through \u2206hXtk+1. Furthermore, Xtk enters both f \u22c6\u22121 h/2 and e \u00b5[R] h , rending them coupled. This coupling has a significant influence on later derivations of the estimator\u2019s asymptotic properties, in contrast to the elliptic case where the derivations simplify. While it might seem straightforward to incorporate e Z, Z [S] k,k\u22121 and Z [R] k,k\u22121 into the objective functions (25), (30) and (31), it introduces bias in the estimators of the diffusion parameters, as also discussed in (Gloter, 2006; Samson and Thieullen, 2012). The bias arises because Xtk enters in both f \u22c6\u22121 h/2 and e \u00b5[R] h , and the covariances of e Z, Z [S] k,k\u22121, and Z [R] k,k\u22121 differ from their complete observation counterparts. To eliminate this bias, Gloter (2006); Samson and Thieullen (2012) applied a correction of 2/3 multiplied to log det of the covariance term in the objective functions, which is log det \u03a3\u03a3\u22a4in the Euler-Maruyama discretization. We also need appropriate corrections to our objective functions (25), (30) and (31), however, caution is necessary because log det e \u2126h(\u03b8) depends on both drift and diffusion parameters. To counterbalance this, we also incorporate an adjustment to h in \u2126h. 
Moreover, we add the term 4 log | det Dvfh/2| to objective function (31) to obtain consistency of the drift estimator under partial observations. The detailed derivation of these correction factors will be elaborated in the following sections. 10 \fStrang Splitting Parameter Estimator for Second-order SDEs A PREPRINT We thus propose the following objective functions: L[PF](X0:tN ; \u03b8) := 4 3(N \u22122) log det e \u21263h/4(\u03b8) (39) + N\u22121 X k=1 \u0010e Zk+1,k,k\u22121(\u03b2)\u22a4e \u2126h(\u03b8)\u22121e Zk+1,k,k\u22121(\u03b2) + 6 log | det Dvfh/2(Xtk, \u2206hXtk+1; \u03b2)| \u0011 , L[PR] (X0:tN ; \u03b8) := 2 3(N \u22122) log det \u2126[RR] 3h/2(\u03b8) (40) + N\u22121 X k=1 \u0010 Z [R] k+1,k,k\u22121 (\u03b2)\u22a4\u2126[RR] h (\u03b8)\u22121Z [R] k+1,k,k\u22121 (\u03b2) + 2 log \f \fdet Dvfh/2 \u0000Xtk, \u2206hXtk+1; \u03b2 \u0001\f \f \u0011 , L[PS|R] (X0:tN ; \u03b8) := 2(N \u22122) log det \u2126[S|R] h (\u03b8) (41) + N\u22121 X k=1 \u0010 Z [S|R] k+1,k,k\u22121(\u03b2)\u22a4\u2126[S|R] h (\u03b8)\u22121Z [S|R] k+1,k,k\u22121(\u03b2) + 4 log | det Dvfh/2(Xtk, \u2206hXtk+1; \u03b2)| \u0011 . (42) Remark 2 Due to the correction factors in the objective functions, we now have that L[PF](X0:tN ; \u03b8) \u0338= L[PR](X0:tN ; \u03b8) + L[PS|R](X0:tN ; \u03b8). (43) However, when expanding the objective functions (39)-(41) using Taylor series to the lowest necessary order in h, their approximations will satisfy equality in (43), as shown in Section 6. Remark 3 Adding the extra term 4 log | det Dvfh/2| in (41) is necessary to keep the consistency of the drift parameter. However, this term is not initially present in objective function (31), making this correction somehow artificial. This can potentially make the objective function further from the true log-likelihood. The estimators based on the partial sample are then defined as: \u02c6 \u03b8[obj] N := arg min \u03b8 L[obj] (X0:tN ; \u03b8) , obj \u2208{[PF], [PR]}. (44) In the partial observation case, the asymptotic variances of the diffusion estimators are identical whether using (39) or (40), in contrast to the complete observation scenario. This variance is shown to be 9/4 times higher than the variance of the estimator \u02c6 \u03b8[CF] N , and 9/8 times higher than that of the estimator based on the marginal likelihood \u02c6 \u03b8[CR] N . The numerical study in Section 4 shows that the estimator based on the marginal objective function (40) is less biased than the one based on the full objective function (39) in finite sample scenarios with partial observations. A potential reason for this is discussed in Remark 3. Therefore, we recommend using the objective function (40) for partial observations. 3 Main results This section states the two main results \u2013 consistency and asymptotic normality of all four proposed estimators. The key ideas for proofs are presented in Supplementary Materials S1. First, we state the consistency of the estimators in both complete and partial observation cases. Let L[obj] N be one of the objective functions (25), (30), (39) or (40) and b \u03b8[obj] N the corresponding estimator. Thus, obj \u2208{[CF], [CR], [PF], [PR]}. We use superscript [C\u00b7] to refer to any objective function in the complete observation case. Likewise, [\u00b7R] stands for an objective function based on the rough marginal likelihood either in the complete or the partial observation case. Theorem 3.1 (Consistency of the estimators) Assume (A1)-(A6), h \u21920, and Nh \u2192\u221e. 
Then under the complete observation or partial observation case, it holds: b \u03b2[obj] N P\u03b80 \u2212 \u2212 \u2192\u03b20, d \u03a3\u03a3 [obj] N P\u03b80 \u2212 \u2212 \u2192\u03a3\u03a3\u22a4 0 . 11 \fStrang Splitting Parameter Estimator for Second-order SDEs A PREPRINT Remark 4 We split the full objective function (25) into the sum of the rough marginal likelihood (30) and the conditional smooth-given-rough likelihood (31). Even if (31) cannot identify the drift parameter \u03b2, it is an important intermediate step in understanding the full objective function (25). This can be seen in the proof of Theorem 3.1, where we first establish consistency of the diffusion estimator with a convergence rate of \u221a N, which is faster than \u221a Nh, the convergence rate of the drift estimators. Then, under complete observations, we show that 1 Nh(L[CR] N (\u03b2, \u03c30) \u2212L[CR] N (\u03b20, \u03c30)) P\u03b80 \u2212 \u2212 \u2212 \u2212 \u2212 \u2192 Nh\u2192\u221e h\u21920 Z (F0(y) \u2212F(y))\u22a4(\u03a3\u03a3\u22a4)\u22121(F0(y) \u2212F(y)) d\u03bd0(y). (45) The right-hand side of (45) is non-negative, with a unique zero for F = F0. Conversely, for objective function (31), it holds: 1 Nh(L[CS|R] N (\u03b2, \u03c3) \u2212L[CS|R] N (\u03b20, \u03c3)) P\u03b80 \u2212 \u2212 \u2212 \u2212 \u2212 \u2192 Nh\u2192\u221e h\u21920 0. (46) Hence, (46) does not have a unique minimum, making the drift parameter unidentifiable. Similar conclusions are drawn in the partial observation case. Now, we state the asymptotic normality of the estimator. First, we need some preliminaries. Let \u03c1 > 0 and B\u03c1 (\u03b80) = {\u03b8 \u2208\u0398 | \u2225\u03b8 \u2212\u03b80\u2225\u2264\u03c1} be a ball around \u03b80. Since \u03b80 \u2208\u0398, for sufficiently small \u03c1 > 0, B\u03c1(\u03b80) \u2208\u0398. For \u02c6 \u03b8[obj] N \u2208B\u03c1 (\u03b80), the mean value theorem yields: \u0012Z 1 0 HL[obj] N (\u03b80 + t(\u02c6 \u03b8[obj] N \u2212\u03b80)) dt \u0013 (\u02c6 \u03b8[obj] N \u2212\u03b80) = \u2212\u2207\u03b8L[obj] N (\u03b80) . (47) Define: C[obj] N (\u03b8) := \uf8ee \uf8ef \uf8f0 h 1 Nh\u22022 \u03b2(i1)\u03b2(i2)L[obj] N (\u03b8) ir i1,i2=1 h 1 N \u221a h\u22022 \u03b2(i)\u03c3(j)L[obj] N (\u03b8) ir,s i=1,j=1 h 1 N \u221a h\u22022 \u03c3(j)\u03b2(i)L[obj] N (\u03b8) ir,s i=1,j=1 h 1 N \u22022 \u03c3(j1)\u03c3(j2)L[obj] N (\u03b8) is j1,j2=1 \uf8f9 \uf8fa \uf8fb, (48) s[obj] N := \"\u221a Nh( \u02c6 \u03b2[obj] N \u2212\u03b20) \u221a N(\u02c6 \u03c3[obj] N \u2212\u03c30) # , \u03bb[obj] N := \uf8ee \uf8ef \uf8f0 \u2212 1 \u221a Nh \u2207\u03b2L[obj] N (\u03b80) \u22121 \u221a N \u2207\u03c3L[obj] N (\u03b80) \uf8f9 \uf8fa \uf8fb, (49) and D[obj] N := R 1 0 C[obj] N (\u03b80 + t(\u02c6 \u03b8[obj] N \u2212\u03b80)) dt. Then, (47) is equivalent to D[obj] N s[obj] N = \u03bb[obj] N . Let: [C\u03b2(\u03b80)]i1,i2 := Z (\u2202\u03b2(i1)F0(y))\u22a4(\u03a3\u03a3\u22a4 0 )\u22121(\u2202\u03b2(i2)F0(y)) d\u03bd0(y), 1 \u2264i1, i2 \u2264r, (50) [C\u03c3(\u03b80)]j1,j2 := Tr((\u2202\u03c3(j1)\u03a3\u03a3\u22a4 0 )(\u03a3\u03a3\u22a4 0 )\u22121(\u2202\u03c3(j2)\u03a3\u03a3\u22a4 0 )(\u03a3\u03a3\u22a4 0 )\u22121), 1 \u2264j1, j2 \u2264s. (51) Theorem 3.2 Let assumptions (A1)-(A6) hold, and let h \u21920, Nh \u2192\u221e, and Nh2 \u21920. 
Then under complete observations, it holds: \"\u221a Nh( \u02c6 \u03b2[CR] N \u2212\u03b20) \u221a N(\u02c6 \u03c3[CR] N \u2212\u03c30) # d \u2212 \u2192N \u0012 0, \u0014 C\u03b2(\u03b80)\u22121 0r\u00d7s 0s\u00d7r 2C\u03c3(\u03b80)\u22121 \u0015\u0013 , \"\u221a Nh( \u02c6 \u03b2[CF] N \u2212\u03b20) \u221a N(\u02c6 \u03c3[CF] N \u2212\u03c30) # d \u2212 \u2192N \u0012 0, \u0014 C\u03b2(\u03b80)\u22121 0r\u00d7s 0s\u00d7r C\u03c3(\u03b80)\u22121 \u0015\u0013 , under P\u03b80. If only partial observations are available and the unobserved coordinates are approximated using the forward or backward differences, then \"\u221a Nh( \u02c6 \u03b2[PR] N \u2212\u03b20) \u221a N(\u02c6 \u03c3[PR] N \u2212\u03c30) # d \u2212 \u2192N \u0012 0, \u0014 C\u03b2(\u03b80)\u22121 0r\u00d7s 0s\u00d7r 9 4C\u03c3(\u03b80)\u22121 \u0015\u0013 , \"\u221a Nh( \u02c6 \u03b2[PF] N \u2212\u03b20) \u221a N(\u02c6 \u03c3[PF] N \u2212\u03c30) # d \u2212 \u2192N \u0012 0, \u0014 C\u03b2(\u03b80)\u22121 0r\u00d7s 0s\u00d7r 9 4C\u03c3(\u03b80)\u22121 \u0015\u0013 , under P\u03b80. 12 \fStrang Splitting Parameter Estimator for Second-order SDEs A PREPRINT Here, we only outline the proof. According to Theorem 1 in Kessler (1997) or Theorem 1 in S\u00f8rensen and Uchida (2003), Lemmas 3.3 and 3.4 below are enough for establishing asymptotic normality of \u02c6 \u03b8N. For more details, see proof of Theorem 1 in S\u00f8rensen and Uchida (2003). Lemma 3.3 Let CN(\u03b80) be defined in (48). For h \u21920 and Nh \u2192\u221e, it holds: C[CR] N (\u03b80) P\u03b80 \u2212 \u2212 \u2192 \u0014 2C\u03b2(\u03b80) 0r\u00d7s 0s\u00d7r C\u03c3(\u03b80) \u0015 , C[PR] N (\u03b80) P\u03b80 \u2212 \u2212 \u2192 \u00142C\u03b2(\u03b80) 0r\u00d7s 0s\u00d7r 2 3C\u03c3(\u03b80) \u0015 , C[CF] N (\u03b80) P\u03b80 \u2212 \u2212 \u2192 \u0014 2C\u03b2(\u03b80) 0r\u00d7s 0s\u00d7r 2C\u03c3(\u03b80) \u0015 , C[PF] N (\u03b80) P\u03b80 \u2212 \u2212 \u2192 \u00142C\u03b2(\u03b80) 0r\u00d7s 0s\u00d7r 8 3C\u03c3(\u03b80) \u0015 . Moreover, let \u03c1N be a sequence such that \u03c1N \u21920, then in all cases it holds: sup \u2225\u03b8\u2225\u2264\u03c1N \u2225C[obj] N (\u03b80 + \u03b8) \u2212C[obj] N (\u03b80)\u2225 P\u03b80 \u2212 \u2212 \u21920. Lemma 3.4 Let \u03bbN be defined (49). For h \u21920, Nh \u2192\u221eand Nh2 \u21920, it holds: \u03bb[CR] N d \u2212 \u2192N \u0012 0, \u0014 4C\u03b2(\u03b80) 0r\u00d7s 0s\u00d7r 2C\u03c3(\u03b80) \u0015\u0013 , \u03bb[PR] N d \u2212 \u2192N \u0012 0, \u0014 4C\u03b2(\u03b80) 0r\u00d7s 0s\u00d7r C\u03c3(\u03b80) \u0015\u0013 , \u03bb[CF] N d \u2212 \u2192N \u0012 0, \u0014 4C\u03b2(\u03b80) 0r\u00d7s 0s\u00d7r 4C\u03c3(\u03b80) \u0015\u0013 , \u03bb[PF] N d \u2212 \u2192N \u0012 0, \u0014 4C\u03b2(\u03b80) 0r\u00d7s 0s\u00d7r 16C\u03c3(\u03b80) \u0015\u0013 , under P\u03b80. Now, the two previous lemmas suggest s[obj] N = (D[obj] n )\u22121\u03bb[obj] N d \u2212 \u2192C[obj] N (\u03b80)\u22121\u03bb[obj] N . The previous line is not completely formal, but it gives the intuition. For more details on formally deriving the result, see Section 7.4 in Pilipovic et al. (2024) or proof of Theorem 1 in S\u00f8rensen and Uchida (2003). 4 Simulation study This Section illustrates the simulation study of the Kramers oscillator (8), demonstrating the theoretical aspects and comparing our proposed estimators against estimators based on the EM and LL approximations. 
We chose to compare our proposed estimators to these two, because the EM estimator is routinely used in applications, and the LL estimator has shown to be one of the best state-of-the-art methods, see Pilipovic et al. (2024) for the elliptic case. The true parameters are set to \u03b70 = 6.5, a0 = 1, b0 = 0.6 and \u03c32 0 = 0.1. We outline the estimators specifically designed for the Kramers oscillator, explain the simulation procedure, describe the optimization implemented in the R programming language R Core Team (2022), and then present and interpret the results. 4.1 Estimators used in the study For the Kramers oscillator (8), the EM transition distribution is: \u0014 Xtk Vtk \u0015 | \u0014 Xtk\u22121 Vtk\u22121 \u0015 = \u0014 x v \u0015 \u223cN \u0012\u0014 x + hv v + h \u0000\u2212\u03b7v + ax \u2212bx3\u0001 \u0015 , \u0014 0 0 0 h\u03c32 \u0015\u0013 . The ill-conditioned variance of this discretization restricts us to an estimator that only uses the marginal likelihood of the rough coordinate. The estimator for complete observations directly follows from the Gaussian distribution. The estimator for partial observations is defined as (Samson and Thieullen, 2012): b \u03b8[PR] EM = arg min \u03b8 ( 2 3(N \u22123) log \u03c32 + 1 h\u03c32 N\u22122 X k=1 (\u2206hXtk+1 \u2212\u2206hXtk \u2212h(\u2212\u03b7\u2206hXtk\u22121 + aXtk\u22121 \u2212bX3 tk\u22121))2 ) . To our knowledge, the LL estimator has not previously been applied to partial observations. Given the similar theoretical and computational performance of the Strang and LL discretizations, we suggest (without formal proof) to adjust the LL objective functions with the same correction factors as used in the Strang approach. The numerical evidence indicates 13 \fStrang Splitting Parameter Estimator for Second-order SDEs A PREPRINT that the LL estimator has the same asymptotic properties as those proved for the Strang estimator. We omit the definition of the LL estimator due to its complexity (see Melnykova (2020); Pilipovic et al. (2024) and accompanying code). To define S estimators based on the Strang splitting scheme, we first split SDE (8) as follows: d \u0014 Xt Vt \u0015 = \u0014 0 1 \u22122a \u2212\u03b7 \u0015 | {z } A \u0014 Xt Vt \u0015 \u2212 \u0014 x\u22c6 \u00b1 0 \u0015 | {z } b ! dt + \u0014 0 aXt \u2212bX3 t + 2a(Xt \u2212x\u22c6 \u00b1) \u0015 | {z } N(Xt,Vt) dt + \u0014 0 \u03c3 \u0015 dWt, where x\u22c6 \u00b1 = \u00b1 p a/b are the two stable points of the dynamics. Since there are two stable points, we suggest splitting with x\u22c6 +, when Xt > 0, and x\u22c6 \u2212, when Xt < 0. This splitting follows the guidelines from (Pilipovic et al., 2024). Note that the nonlinear ODE driven by N(x, v) has a trivial solution where x is a constant. To obtain Strang estimators, we plug in the corresponding components in the objective functions (25), (30), (39) and (40). 4.2 Trajectory simulation We simulate a sample path using the EM discretization with a step size of hsim = 0.0001 to ensure good performance. To reduce discretization errors, we sub-sample from the path at wider intervals to get time step h = 0.1. The path has N = 5000 data points. We repeat the simulations to obtain 250 data sets. 4.3 Optimization in R For optimizing the objective functions, we proceed as in Pilipovic et al. (2024) using the R package torch (Falbel and Luraschi, 2022), which allows automatic differentiation. The optimization employs the resilient backpropagation algorithm, optim_rprop. 
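The data-generating step of Section 4.2 can be sketched as follows (an illustrative implementation of the stated design, not the paper's code): Euler-Maruyama on a fine grid with h_sim = 1e-4, then keeping every 1000th point to obtain step size h = 0.1 and N = 5000 observations.

simulate_kramers <- function(N = 5000, h = 0.1, h_sim = 1e-4,
                             eta = 6.5, a = 1, b = 0.6, sigma2 = 0.1,
                             x0 = 1, v0 = 0) {
  thin  <- round(h / h_sim)
  sigma <- sqrt(sigma2)
  X <- numeric(N + 1); V <- numeric(N + 1)
  x <- x0; v <- v0; X[1] <- x; V[1] <- v
  for (i in seq_len(N * thin)) {            # plain loop, kept for transparency (a bit slow)
    x_new <- x + h_sim * v
    v_new <- v + h_sim * (-eta * v + a * x - b * x^3) + sigma * sqrt(h_sim) * rnorm(1)
    x <- x_new; v <- v_new
    if (i %% thin == 0) { X[i / thin + 1] <- x; V[i / thin + 1] <- v }
  }
  list(X = X, V = V, h = h)
}
set.seed(1)
path <- simulate_kramers()                  # one of the 250 replications

The subsampled pair (path$X, path$V) can then be passed to the complete-observation objective functions, while discarding path$V and forming forward differences mimics the partial-observation setting.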
We use the default hyperparameters and limit the number of optimization iterations to 2000. The convergence criterion is set to a precision of 10\u22125 for the difference between estimators in consecutive iterations. The initial parameter values are set to (\u22120.1, \u22120.1, 0.1, 0.1). 4.4 Results The results of the simulation study are presented in Figure 1. Figure 1A) presents the distributions of the normalized estimators in the complete and partial observation cases. The S and LL estimators exhibit nearly identical performance, particularly in the complete observation scenario. In contrast, the EM method displays significant underperformance and notable bias. The variances of the S and LL rough-likelihood estimators of \u03c32 are higher compared to those derived from the full likelihood, aligning with theoretical expectations. Interestingly, in the partial observation scenario, Figure 1A) reveals that estimators employing the full likelihood display greater finite sample bias compared to those based on the rough likelihood. Possible reasons for this bias are discussed in Remark 3. However, it is noteworthy that this bias is eliminated for smaller time steps, e.g. h = 0.0001 (not shown), thus confirming the theoretical asymptotic results. This observation suggests that the rough likelihood is preferable under partial observations due to its lower bias. Backward finite difference approximations of the velocity variables perform similarly to the forward differences and are therefore excluded from the figure for clarity. We closely examine the variances of the S estimators of \u03c32 in Figure 1B). The LL estimators are omitted due to their similarity to the S estimators, and because the computation times for the LL estimators are prohibitive. To align more closely with the asymptotic predictions, we opt for h = 0.02 and conduct 1000 simulations. Additionally, we set \u03c32 0 = 100 to test different noise levels. Atop each empirical distribution, we overlay theoretical normal densities that match the variances as per Theorem 3.2. The theoretical variance is derived from C\u03c32(\u03b80) in (51), which for the Kramers oscillator in (8) is: C\u03c32(\u03b80) = 1 \u03c34 0 . (52) Figure 1 illustrates that the lowest variance of the diffusion estimator is observed when using the full likelihood with complete observations. The second lowest variance is achieved using the rough likelihood with complete observations. The largest variance is observed in the partial observation case; however, it remains independent of whether the full or rough likelihood is used. Once again, we observe that using the full likelihood introduces additional finite sample bias. In Figure 1C), we compare running times calculated using the tictoc package in R. Running times are measured from the start of the optimization step until convergence. The figure depicts the median over 250 repetitions to mitigate the influence of outliers. The EM method is notably the fastest; however, the S estimators exhibit only slightly slower performance. The LL estimators are 10-100 times slower than the S estimators, depending on whether complete or partial observations are used and whether the full or rough likelihood is employed. 14 \fStrang Splitting Parameter Estimator for Second-order SDEs A PREPRINT Figure 1: Parameter estimates in a simulation study for the Kramers oscillator, eq. (8). The color code remains consistent across all three figures. 
A) Normalized distributions of parameter estimation errors (\u02c6 \u03b8N \u2212\u03b80) \u2298\u03b80 in both complete and partial observation cases, based on 250 simulated data sets with h = 0.1 and N = 5000. Each column corresponds to a different parameter, while the color indicates the type of estimator. Estimators are distinguished by superscripted objective functions (F for full and R for rough). B) Distribution of b \u03c32 N estimators based on 1000 simulations with h = 0.02 and N = 5000 across different observation settings (complete or partial) and likelihood choices (full or rough) using the Strang splitting scheme. The true value of \u03c32 is set to \u03c32 0 = 100. Theoretical normal densities are overlaid for comparison. Theoretical variances are calculated based on C\u03c32(\u03b80), eq. (52). C) Median computing time in seconds for one estimation of various estimators based on 250 simulations with h = 0.1 and N = 5000. Shaded color patterns represent times in the partial observation case, while no color pattern indicates times in the complete observation case. 15 \fStrang Splitting Parameter Estimator for Second-order SDEs A PREPRINT Figure 2: Ice core data from Greenland. Left: Trajectories over time (in kilo years) of the centered negative logarithm of the Ca2+ measurements (top) and forward difference approximations of its rate of change (bottom). The two vertical dark red lines represent the estimated stable equilibria of the double-well potential function. Green points denote upand down-crossings of level \u00b10.6, conditioned on having crossed the other level. Green vertical lines indicate empirical estimates of occupancy in either of the two metastable states. Right: Empirical densities (black) alongside estimated invariant densities with confidence intervals (dark red), prediction intervals (light red), and the empirical density of a simulated sample from the estimated model (blue). 5 Application to Greenland Ice Core Data During the last glacial period, significant climatic shifts known as Dansgaard-Oeschger (DO) events have been documented in paleoclimatic records (Dansgaard et al., 1993). Proxy data from Greenland ice cores, particularly stable water isotope composition (\u03b418O) and calcium ion concentrations (Ca2+), offer valuable insights into these past climate variations (Boers et al., 2017, 2018; Boers, 2018; Ditlevsen et al., 2002; Lohmann and Ditlevsen, 2019; Hassanibesheli et al., 2020). The \u03b418O ratio, reflecting the relative abundance of 18O and 16O isotopes in ice, serves as a proxy for paleotemperatures during snow deposition. Conversely, calcium ions, originating from dust deposition, exhibit a strong negative correlation with \u03b418O, with higher calcium ion levels indicating colder conditions. Here, we prioritize Ca2+ time series due to its finer temporal resolution. In Greenland ice core records, the DO events manifest as abrupt transitions from colder climates (stadials) to approximately 10 degrees warmer climates (interstadials) within a few decades. Although the waiting times between state switches last a couple of thousand years, their spacing exhibits significant variability. The underlying mechanisms driving these changes remain largely elusive, prompting discussions on whether they follow cyclic patterns, result from external forcing, or emerge from noise-induced processes (Boers, 2018; Ditlevsen et al., 2007). We aim to determine if the observed data can be explained by noise-induced transitions of the Kramers oscillator. 
The measurements were conducted at the summit of the Greenland ice sheet as part of the Greenland Icecore Project (GRIP) (Anklin et al., 1993; Andersen et al., 2004). Originally, the data were sampled at 5 cm intervals, resulting in a non-equidistant time series due to ice compression at greater depths, where 5 cm of ice core spans longer time periods. For our analysis, we use a version of the data transformed into a uniformly spaced series through 20-year binning and averaging. This transformation simplifies the analysis and highlights significant climatic trends. The dataset is available in the supplementary material of (Rasmussen et al., 2014; Seierstad et al., 2014). 16 \fStrang Splitting Parameter Estimator for Second-order SDEs A PREPRINT To address the large amplitudes and negative correlation with temperature, we transform the data to minus the logarithm of Ca2+, where higher values of the transformed variable indicate warmer climates at the time of snow deposition. Additionally, we center the transformed measurements around zero. With the 20-year binning, to obtain one point per 20 years, we average across the bins, resulting in a time step of h = 0.02kyr (1kyr = 1000 years). Additionally, we addressed a few missing values using the na.approx function from the zoo package. Following the approach of Hassanibesheli et al. (2020), we analyze a subset of the data with a sufficiently good signal-to-noise ratio. Hassanibesheli et al. (2020) examined the data from 30 to 60kyr before present. Here, we extend the analysis to cover 30kyr to 80kyr, resulting in a time interval of T = 50kyr and a sample size of N = 2500. We approximate the velocity of the transformed Ca2+ by the forward difference method. The trajectories and empirical invariant distributions are illustrated in Figure 2. We fit the Kramers oscillator to the \u2212log Ca2+ time series and estimate parameters using the Strang estimator. Following Theorem 3.2, we compute C\u03b2(\u03b80) from (50). Applying the invariant density \u03c00(x, v) from (10), which decouples into \u03c00(x) (11) and a Gaussian zero-mean and \u03c32 0/(2\u03b70) variance, leads us to: C\u03b2(\u03b80) = \uf8ee \uf8ef \uf8ef \uf8f0 1 2\u03b70 0 0 0 1 \u03c32 0 R \u221e \u2212\u221ex2\u03c00(x) dx \u22121 \u03c32 0 R \u221e \u2212\u221ex4\u03c00(x) dx 0 \u22121 \u03c32 0 R \u221e \u2212\u221ex4\u03c00(x) dx 1 \u03c32 0 R \u221e \u2212\u221ex6\u03c00(x) dx \uf8f9 \uf8fa \uf8fa \uf8fb. (53) Thus, to obtain 95% confidence intervals (CI) for the estimated parameters, we plug b \u03b8N into (52) and (53). The estimators and confidence intervals are shown in Table 1. We also calculate the expected waiting time \u03c4, eq. (12), of crossing from one state to another, and its confidence interval using the Delta Method. Parameter Estimate 95% CI \u03b7 62.5 59.4 \u221265.6 a 296.7 293.6 \u2212299.8 b 219.1 156.4 \u2212281.7 \u03c32 9125 8589 \u22129662 \u03c4 3.97 3.00 \u22124.94 Table 1: Estimated parameters of the Kramers oscillator from Greenland ice core data. The model fit is assessed in the right panels of Figure 2. Here, we present the empirical distributions of the two coordinates along with the fitted theoretical invariant distribution and a 95% confidence interval. Additionally, a prediction interval for the distribution is provided by simulating 1000 datasets from the fitted model, matching the size of the empirical data. 
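The preprocessing just described, the forward-difference velocity proxy, and the waiting-time formula (12) evaluated at the estimates in Table 1 can be sketched as follows (the input objects time_kyr and ca are hypothetical names for the 20-year binned record; assumptions: the zoo package; not the paper's code):

library(zoo)
prep_ice_core <- function(time_kyr, ca, from = 30, to = 80) {
  keep <- time_kyr >= from & time_kyr <= to     # 30-80 kyr window, T = 50 kyr
  x <- -log(ca[keep])                           # minus log Ca2+: larger values = warmer climate
  x <- na.approx(x, na.rm = FALSE)              # interpolate the few missing values
  x <- x - mean(x, na.rm = TRUE)                # center around zero
  h <- 0.02                                     # 20-year bins, in kyr
  v <- c(diff(x) / h, NA)                       # forward-difference proxy for the velocity
  data.frame(time = time_kyr[keep], x = x, v = v)
}

## Kramers' mean waiting time (12) at the point estimates of Table 1
kramers_tau <- function(eta, a, b, sigma2)
  sqrt(2) * pi / (sqrt(a + eta^2 / 4) - eta / 2) * exp(a^2 * eta / (2 * b * sigma2))
kramers_tau(eta = 62.5, a = 296.7, b = 219.1, sigma2 = 9125)   # about 3.97 kyr

The resulting position and velocity-proxy series are what the partial-observation Strang estimator is applied to.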
We estimate the empirical distributions for each simulated dataset and construct a 95% prediction interval using the pointwise 2.5th and 97.5th percentiles of these estimates. A single example trace is included in blue. While the fitted distribution for \u2212log Ca2+ appears to fit well, even with this symmetric model, the velocity variables are not adequately captured. This discrepancy is likely due to the presence of extreme values in the data that are not effectively accounted for by additive Gaussian noise. Consequently, the model compensates by estimating a large variance. We estimate the waiting time between metastable states to be approximately 4000 years. However, this approximation relies on certain assumptions, namely 62.5 \u2248\u03b7 \u226b\u221aa \u224817.2 and 73 \u2248\u03c32/2\u03b7 \u226aa2/4b \u2248100. Thus, the accuracy of the approximation may not be highly accurate. Defining the current state of the process is not straightforward. One method involves identifying successive upand down-crossings of predefined thresholds within the smoothed data. However, the estimated occupancy time in each state depends on the level of smoothing applied and the distance of crossing thresholds from zero. Using a smoothing technique involving running averages within windows of 11 data points (equivalent to 220 years) and detecting downand up-crossings of levels \u00b10.6, we find an average occupancy time of 4058 years in stadial states and 3550 years in interstadial states. Nevertheless, the actual occupancy times exhibit significant variability, ranging from 60 to 6900 years, with the central 50% of values falling between 665 and 2115 years. This classification of states is depicted in green in Figure 2. Overall, the estimated mean occupancy time inferred from the Kramers oscillator appears reasonable. 6 Technical results In this Section, we present all the necessary technical properties that are used to derive the main results of the paper. 17 \fStrang Splitting Parameter Estimator for Second-order SDEs A PREPRINT We start by expanding e \u2126h and its block components \u2126[RR] h (\u03b8)\u22121, \u2126[S|R] h (\u03b8)\u22121, log det \u2126[RR] h (\u03b8), log det \u2126[S|R] h (\u03b8) and log | det Dfh/2 (y; \u03b2) | when h goes to zero. Then, we expand e Zk,k\u22121(\u03b2) and e Zk+1,k,k\u22121(\u03b2) around Ytk\u22121 when h goes to zero. The main tools used are It\u00f4\u2019s lemma, Taylor expansions, and Fubini\u2019s theorem. The final result is stated in Propositions 6.6 and 6.7. The approximations depend on the drift function F, the nonlinear part N, and some correlated sequences of Gaussian random variables. Finally, we obtain approximations of the objective functions (25), (30), (31) and (39) (41). Proofs of all the stated propositions and lemmas in this section are in Supplementary Material S1. 6.1 Covariance matrix e \u2126h The covariance matrix e \u2126h is approximated by: e \u2126h = Z h 0 e e A(h\u2212u) e \u03a3e \u03a3\u22a4e e A\u22a4(h\u2212u) du = he \u03a3e \u03a3\u22a4+ h2 2 ( e Ae \u03a3e \u03a3\u22a4+ e \u03a3e \u03a3\u22a4e A\u22a4) + h3 6 ( e A2 e \u03a3e \u03a3\u22a4+ 2 e Ae \u03a3e \u03a3\u22a4e A\u22a4+ e \u03a3e \u03a3\u22a4( e A2)\u22a4) + h4 24( e A3 e \u03a3e \u03a3\u22a4+ 3 e A2 e \u03a3e \u03a3\u22a4e A\u22a4+ 3 e Ae \u03a3e \u03a3\u22a4( e A2)\u22a4+ e \u03a3e \u03a3\u22a4( e A3)\u22a4) + R(h5, y0). (54) The following lemma approximates each block of e \u2126h up to the first two leading orders of h. 
The result follows directly from equations (4), (6), and (54). Lemma 6.1 The covariance matrix e \u2126h defined in (54)-(19) approximates block-wise as: \u2126[SS] h (\u03b8) = h3 3 \u03a3\u03a3\u22a4+ h4 8 (Av(\u03b2)\u03a3\u03a3\u22a4+ \u03a3\u03a3\u22a4Av(\u03b2)\u22a4) + R(h5, y0), \u2126[SR] h (\u03b8) = h2 2 \u03a3\u03a3\u22a4+ h3 6 (Av(\u03b2)\u03a3\u03a3\u22a4+ 2\u03a3\u03a3\u22a4Av(\u03b2)\u22a4) + R(h4, y0), \u2126[RS] h (\u03b8) = h2 2 \u03a3\u03a3\u22a4+ h3 6 (2Av(\u03b2)\u03a3\u03a3\u22a4+ \u03a3\u03a3\u22a4Av(\u03b2)\u22a4) + R(h4, y0), \u2126[RR] h (\u03b8) = h\u03a3\u03a3\u22a4+ h2 2 (Av(\u03b2)\u03a3\u03a3\u22a4+ \u03a3\u03a3\u22a4Av(\u03b2)\u22a4) + R(h3, y0). Building on Lemma 6.1, we calculate products, inverses, and logarithms of the components of e \u2126h in the following lemma. Lemma 6.2 For the covariance matrix e \u2126h defined in (54) it holds: (i) \u2126[RR] h (\u03b8)\u22121 = 1 h(\u03a3\u03a3\u22a4)\u22121 \u22121 2((\u03a3\u03a3\u22a4)\u22121Av(\u03b2) + Av(\u03b2)\u22a4(\u03a3\u03a3\u22a4)\u22121) + R(h, y0); (ii) \u2126[SR] h (\u03b8)\u2126[RR] h (\u03b8)\u22121 = h 2 I \u2212h2 12 (Av \u2212\u03a3\u03a3\u22a4Av(\u03b2)\u22a4(\u03a3\u03a3\u22a4)\u22121) + R(h3, y0); (iii) \u2126[SR] h (\u03b8)\u2126[RR] h (\u03b8)\u22121\u2126[RS] h (\u03b8) = h3 4 \u03a3\u03a3\u22a4+ h4 8 (Av(\u03b2)\u03a3\u03a3\u22a4+ \u03a3\u03a3\u22a4Av(\u03b2)\u22a4) + R(h5, y0); (iv) \u2126[S|R] h (\u03b8) = h3 12 \u03a3\u03a3\u22a4+ R(h5, y0); (v) log det \u2126[RR] h (\u03b8) = d log h + log det \u03a3\u03a3\u22a4+ h Tr Av(\u03b2) + R(h2, y0); (vi) log det \u2126[S|R] h (\u03b8) = 3d log h + log det \u03a3\u03a3\u22a4+ R(h2, y0); (vii) log det e \u2126h(\u03b8) = 4d log h + 2 log det \u03a3\u03a3\u22a4+ h Tr Av(\u03b2) + R(h2, y0). Remark 5 We adjusted the objective functions for partial observations using the term c log det \u2126[\u00b7] h/c, where c is a correction constant. This adjustment keeps the term h Tr Av(\u03b2) in (v)-(vii) constant, not affecting the asymptotic distribution of the drift parameter. There is no h4-term in \u2126[S|R] h (\u03b8) which simplifies the approximation of \u2126[S|R] h (\u03b8)\u22121 and log det \u2126[S|R] h (\u03b8). Consequently, this makes (41) a bad choice for estimating the drift parameter. 18 \fStrang Splitting Parameter Estimator for Second-order SDEs A PREPRINT 6.2 Nonlinear solution e fh We now state a useful proposition for the nonlinear solution e fh (Section 1.8 in (Hairer et al., 1993)). Proposition 6.3 Let Assumptions (A1), (A2) and (A6) hold. When h \u21920, the h-flow of (15) approximates as: e fh(y) = y + h e N(y) + h2 2 (Dy e N(y)) e N(y) + R(h3, y), (55) e f \u22121 h (y) = y \u2212h e N(y) + h2 2 (Dy e N(y)) e N(y) + R(h3, y). (56) Applying the previous proposition on (21) and (22), we get: fh(y) = v + hN(y) + h2 2 (DvN(y))N(y) + R(h3, y), (57) f \u22c6\u22121 h (y) = v \u2212hN(y) + h2 2 (DvN(y))N(y) + R(h3, y). (58) The following lemma approximates log | det Dfh/2 (y; \u03b2) | in the objective functions and connects it with Lemma 6.2. Lemma 6.4 Let e fh be the function defined in (21). It holds: 2 log | det Dfh/2 (Ytk; \u03b2) | = h Tr DvN(Ytk\u22121; \u03b2) + R(h3/2, Ytk\u22121), 2 log | det Dfh/2 \u0000Xtk, \u2206hXtk+1; \u03b2 \u0001 | = h Tr DvN(Ytk\u22121; \u03b2) + R(h3/2, Ytk\u22121). 
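The leading orders in Lemma 6.1 can also be checked numerically with the same midpoint-rule integral used earlier (illustration only; the 2x2 matrices are the same assumed example as before, with d = 1 and sigma^2 = 0.1):

library(expm)
A_tilde  <- matrix(c(0, -2, 1, -6.5), 2, 2)
SS_tilde <- matrix(c(0, 0, 0, 0.1), 2, 2)
h <- 0.01
n <- 4000; u <- (seq_len(n) - 0.5) * h / n
Om <- Reduce(`+`, lapply(u, function(ui) {
  E <- expm(A_tilde * (h - ui)); E %*% SS_tilde %*% t(E) * (h / n)
}))
c(SS = Om[1, 1] / (h^3 / 3 * 0.1),   # ratios should approach 1 as h decreases,
  SR = Om[1, 2] / (h^2 / 2 * 0.1),   # up to the O(h) relative corrections involving Av
  RR = Om[2, 2] / (h * 0.1))         # that are stated in the lemma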
An immediate consequence of the previous lemma and that DvF(y; \u03b2) = Av(\u03b2) + DvN(y; \u03b2) is log det \u2126[RR] h (\u03b8) + 2 log | det Dfh/2 (Ytk; \u03b2) | = log det h\u03a3\u03a3\u22a4+ h Tr DvF(Ytk\u22121; \u03b2) + R(h3/2, Ytk\u22121). The same equality holds when Ytk is approximated by (Xtk, \u2206hXtk+1). The following lemma expands function \u00b5h( e fh/2(y)) up to the highest necessary order of h. Lemma 6.5 For the functions e fh in (21) and e \u00b5h in (28), it holds \u00b5[S] h ( e fh/2(y)) = x + hv + h2 2 F(y) + R(h3, y), (59) \u00b5[R] h ( e fh/2(y)) = v + h(F(y) \u22121 2N(y)) + R(h2, y). (60) 6.3 Random variables e Zk,k\u22121 and e Zk+1,k,k\u22121 To approximate the random variables Z[S] k,k\u22121(\u03b2), Z[R] k,k\u22121(\u03b2), Z [S] k,k\u22121(\u03b2), and Z [R] k+1,k,k\u22121(\u03b2) around Ytk\u22121, we start by defining the following random sequences: \u03b7k\u22121 := 1 h1/2 Z tk tk\u22121 dWt, (61) \u03bek\u22121 := 1 h3/2 Z tk tk\u22121 (t \u2212tk\u22121) dWt, \u03be\u2032 k := 1 h3/2 Z tk+1 tk (tk+1 \u2212t) dWt, (62) \u03b6k\u22121 := 1 h5/2 Z tk tk\u22121 (t \u2212tk\u22121)2 dWt, \u03b6\u2032 k := 1 h5/2 Z tk+1 tk (tk+1 \u2212t)2 dWt. (63) The random variables (61)-(63) are Gaussian with mean zero. Moreover, at time tk they are Ftk+1 measurable and independent of Ftk. The following linear combinations of (61)-(63) appear in the expansions in the partial observation case: Uk,k\u22121 := \u03be\u2032 k + \u03bek\u22121, (64) Qk,k\u22121 := \u03b6\u2032 k + 2\u03b7k\u22121 \u2212\u03b6k\u22121. (65) 19 \fStrang Splitting Parameter Estimator for Second-order SDEs A PREPRINT It is not hard to check that \u03be\u2032 k + \u03b7k\u22121 \u2212\u03be\u2032 k\u22121 = Uk,k\u22121. This alternative representation of Uk,k\u22121 will be used later in proofs. The It\u00f4 isometry yields: E\u03b80[\u03b7k\u22121\u03b7\u22a4 k\u22121 | Ftk\u22121] = I, E\u03b80[\u03b7k\u22121\u03be\u22a4 k\u22121 | Ftk\u22121] = E\u03b80[\u03b7k\u22121\u03be\u2032\u22a4 k\u22121 | Ftk\u22121] = 1 2I, (66) E\u03b80[\u03bek\u22121\u03be\u2032\u22a4 k\u22121 | Ftk\u22121] = 1 6I, E\u03b80[\u03bek\u22121\u03be\u22a4 k\u22121 | Ftk\u22121] = E\u03b80[\u03be\u2032 k\u03be\u2032\u22a4 k | Ftk\u22121] = 1 3I, (67) E\u03b80[Uk,k\u22121U\u22a4 k,k\u22121 | Ftk\u22121] = 2 3I, E\u03b80[Uk,k\u22121(Uk,k\u22121 + 2\u03be\u2032 k\u22121)\u22a4| Ftk\u22121] = I. (68) The covariances of other combinations of the random variables (61)-(63) are not needed for the proofs. However, to derive asymptotic properties, we need some fourth moments calculated in Supplementary Materials S1. The following two propositions are the last building blocks for approximating the objective functions (30)-(31) and (40)-(41). 
Proposition 6.6 The random variables e Zk,k\u22121(\u03b2) in (24) and e Zk+1,k,k\u22121(\u03b2) in (35) are approximated by: Z[S] k,k\u22121(\u03b2) = h3/2\u03a30\u03be\u2032 k\u22121 + h2 2 (F0(Ytk\u22121) \u2212F(Ytk\u22121)) + h5/2 2 DvF0(Ytk\u22121)\u03a30\u03b6\u2032 k\u22121 + R(h3, Ytk\u22121), Z[R] k,k\u22121(\u03b2) = h1/2\u03a30\u03b7k\u22121 + h(F0(Ytk\u22121) \u2212F(Ytk\u22121)) \u2212h3/2 2 DvN(Ytk\u22121)\u03a30\u03b7k\u22121 + h3/2DvF0(Ytk\u22121)\u03a30\u03be\u2032 k\u22121 + R(h2, Ytk\u22121), Z [S] k,k\u22121(\u03b2) = \u2212h2 2 F(Ytk\u22121) \u2212h5/2 2 DvF(Ytk\u22121)\u03a30\u03be\u2032 k\u22121 + R(h3, Ytk\u22121), Z [R] k+1,k,k\u22121(\u03b2) = h1/2\u03a30Uk,k\u22121 + h(F0(Ytk\u22121) \u2212F(Ytk\u22121)) \u2212h3/2 2 DvN(Ytk\u22121)\u03a30Uk,k\u22121 \u2212h3/2DvF(Ytk\u22121)\u03a30\u03be\u2032 k\u22121 + h3/2 2 DvF0(Ytk\u22121)\u03a30Qk,k\u22121 + R(h2, Ytk\u22121). Remark 6 Proposition 6.6 yield E\u03b80[Z[R] k,k\u22121(\u03b2)Z[R] k,k\u22121(\u03b2)\u22a4| Ytk\u22121] = h\u03a3\u03a3\u22a4 0 + R(h2, Ytk\u22121) = \u2126[RR] h + R(h2, Ytk\u22121), E\u03b80[Z [R] k+1,k,k\u22121(\u03b2)Z [R] k+1,k,k\u22121(\u03b2)\u22a4| Ytk\u22121] = 2 3h\u03a3\u03a3\u22a4 0 + R(h2, Ytk\u22121) = 2 3\u2126[RR] h + R(h2, Ytk\u22121). Thus, the correction factor 2/3 in (40) compensates for the underestimation of the covariance of Z [R] k+1,k,k\u22121(\u03b2). Similarly, it can be shown that the same underestimation happens when using the backward difference. On the other hand, when using the central difference, it can be shown that E\u03b80[Z [R],central k+1,k,k\u22121(\u03b2)Z [R],central k+1,k,k\u22121(\u03b2)\u22a4| Ytk\u22121] = 5 12h\u03a3\u03a3\u22a4 0 + R(h2, Ytk\u22121), which is a larger deviation from \u2126[RR] h , yielding a larger correcting factor and larger asymptotic variance of the diffusion parameter estimator. Proposition 6.7 Let e Zk,k\u22121(\u03b2) and e Zk+1,k,k\u22121(\u03b2) be defined in (24) and (35), respectively. Then, Z[S|R] k,k\u22121(\u03b2) = \u2212h3/2 2 \u03a30(\u03b7k\u22121 \u22122\u03be\u2032 k\u22121) + h5/2 12 (Av \u2212\u03a3\u03a3\u22a4Av(\u03b2)\u22a4(\u03a3\u03a3\u22a4)\u22121)\u03a30\u03b7k\u22121 + h5/2 4 DvN(Ytk\u22121)\u03a30\u03b7k\u22121 \u2212h5/2 2 DvF0(Ytk\u22121)\u03a30(\u03be\u2032 k\u22121 \u2212\u03b6\u2032 k\u22121) + R(h3, Ytk\u22121), Z [S|R] k+1,k,k\u22121(\u03b2) = \u2212h3/2 2 \u03a30Uk,k\u22121 \u2212h2 2 F0(Ytk\u22121) + h5/2 12 (Av \u2212\u03a3\u03a3\u22a4Av(\u03b2)\u22a4(\u03a3\u03a3\u22a4)\u22121)\u03a30Uk,k\u22121 + h5/2 4 DvN(Ytk\u22121)\u03a30Uk,k\u22121 \u2212h5/2 4 DvF0(Ytk\u22121)\u03a30Qk,k\u22121 + R(h3, Ytk\u22121). 20 \fStrang Splitting Parameter Estimator for Second-order SDEs A PREPRINT 6.4 Objective functions Starting with the complete observation case, we approximate objective functions (30) and (31) up to order R(h3/2, Ytk\u22121) to prove the asymptotic properties of the estimators \u02c6 \u03b8[CR] N and \u02c6 \u03b8[CS|R] N . 
After omitting the terms of order R(h, Ytk\u22121) that do not depend on \u03b2, we obtain the following approximations: L[CR] N (Y0:tN ; \u03b8) = (N \u22121) log det \u03a3\u03a3\u22a4+ N X k=1 \u03b7\u22a4 k\u22121\u03a3\u22a4 0 (\u03a3\u03a3\u22a4)\u22121\u03a30\u03b7k\u22121 (69) + 2 \u221a h N X k=1 \u03b7\u22a4 k\u22121\u03a3\u22a4 0 (\u03a3\u03a3\u22a4)\u22121(F(Ytk\u22121; \u03b20) \u2212F(Ytk\u22121; \u03b2)) + h N X k=1 (F(Ytk\u22121; \u03b20) \u2212F(Ytk\u22121; \u03b2))\u22a4(\u03a3\u03a3\u22a4)\u22121(F(Ytk\u22121; \u03b20) \u2212F(Ytk\u22121; \u03b2)) \u2212h N X k=1 \u03b7\u22a4 k\u22121\u03a3\u22a4 0 DvF(Ytk\u22121; \u03b2)\u22a4(\u03a3\u03a3\u22a4)\u22121\u03a30\u03b7k\u22121 + h N X k=1 Tr DvF(Ytk; \u03b2), L[CS|R] N (Y0:tN ; \u03b8) = (N \u22121) log det \u03a3\u03a3\u22a4+ 3 N X k=1 (\u03b7k\u22121 \u22122\u03be\u2032 k\u22121)\u22a4\u03a3\u22a4 0 (\u03a3\u03a3\u22a4)\u22121\u03a30(\u03b7k\u22121 \u22122\u03be\u2032 k\u22121) (70) \u22123h N X k=1 (\u03b7k\u22121 \u22122\u03be\u2032 k\u22121)\u22a4\u03a3\u22a4 0 (\u03a3\u03a3\u22a4)\u22121DvN(Ytk\u22121; \u03b2)\u03a30\u03b7k\u22121 \u2212h N X k=1 (\u03b7k\u22121 \u22122\u03be\u2032 k\u22121)\u22a4\u03a3\u22a4 0 (\u03a3\u03a3\u22a4)\u22121(Av(\u03b2) \u2212\u03a3\u03a3\u22a4Av(\u03b2)\u22a4(\u03a3\u03a3\u22a4)\u22121)\u03a30\u03b7k\u22121 L[CF] N (Y0:tN ; \u03b8) = L[CR] N (Y0:tN ; \u03b8) + L[CS|R] N (Y0:tN ; \u03b8) . (71) The two last sums in (70) converge to zero because E\u03b80[(\u03b7k\u22121 \u22122\u03be\u2032 k\u22121)\u03b7\u22a4 k\u22121|Ftk\u22121] = 0. Moreover, (70) lacks the quadratic form of F(Ytk\u22121) \u2212F0(Ytk\u22121), that is crucial for the asymptotic variance of the drift estimator. This implies that the objective function L[CS|R] N is not suitable for estimating the drift parameter. Conversely, (70) provides a correct and consistent estimator of the diffusion parameter, indicating that the full objective function (the sum of L[CR] N and L[CS|R] N ) consistently estimates \u03b8. Similarly, the approximated objective functions in the partial observation case are: L[PR] N (Y0:tN ; \u03b8) = 2 3(N \u22122) log det \u03a3\u03a3\u22a4+ N\u22121 X k=1 U\u22a4 k,k\u22121\u03a3\u22a4 0 (\u03a3\u03a3\u22a4)\u22121\u03a30Uk,k\u22121 (72) + 2 \u221a h N X k=1 U\u22a4 k,k\u22121\u03a3\u22a4 0 (\u03a3\u03a3\u22a4)\u22121(F(Ytk\u22121; \u03b20) \u2212F(Ytk\u22121; \u03b2)) + h N\u22121 X k=1 (F(Ytk\u22121; \u03b20) \u2212F(Ytk\u22121; \u03b2))\u22a4(\u03a3\u03a3\u22a4)\u22121(F(Ytk\u22121; \u03b20) \u2212F(Ytk\u22121; \u03b2)) \u2212h N\u22121 X k=1 (Uk,k\u22121 + 2\u03be\u2032 k\u22121)\u22a4\u03a3\u22a4 0 DvF(Ytk\u22121; \u03b2)\u22a4(\u03a3\u03a3\u22a4)\u22121\u03a30Uk,k\u22121 + h N\u22121 X k=1 Tr DvF(Ytk; \u03b2), L[PS|R] N (Y0:tN ; \u03b8) = 2(N \u22122) log det \u03a3\u03a3\u22a4+ 3 N\u22121 X k=1 U\u22a4 k,k\u22121\u03a3\u22a4 0 (\u03a3\u03a3\u22a4)\u22121\u03a30Uk,k\u22121 (73) + 6 \u221a h N X k=1 U\u22a4 k,k\u22121\u03a3\u22a4 0 (\u03a3\u03a3\u22a4)\u22121F(Ytk\u22121; \u03b20) \u22123h N\u22121 X k=1 U\u22a4 k,k\u22121\u03a3\u22a4 0 DvN(Ytk\u22121; \u03b2)\u22a4(\u03a3\u03a3\u22a4)\u22121\u03a30Uk,k\u22121 + 2h N\u22121 X k=1 Tr DvN(Ytk; \u03b2), 21 \fStrang Splitting Parameter Estimator for Second-order SDEs A PREPRINT L[PF] N (Y0:tN ; \u03b8) = L[PR] N (Y0:tN ; \u03b8) + L[PS|R] N (Y0:tN ; \u03b8) . 
(74) This time, the term with Av(\u03b2) \u2212\u03a3\u03a3\u22a4Av(\u03b2)\u22a4(\u03a3\u03a3\u22a4)\u22121 vanishes because Tr(\u03a30Uk,k\u22121U\u22a4 k,k\u22121\u03a3\u22a4 0 (\u03a3\u03a3\u22a4)\u22121(Av(\u03b2) \u2212\u03a3\u03a3\u22a4Av(\u03b2)\u22a4(\u03a3\u03a3\u22a4)\u22121)) = 0 due to the symmetry of the matrices and the trace cyclic property. Even though the partial observation objective function L[PR] (X0:tN ; \u03b8) (40) depends only on X0:tN , we could approximate it with L[PR] N (Y0:tN ; \u03b8) (72). This is useful for proving the asymptotic normality of the estimator since its asymptotic distribution will depend on the invariant probability \u03bd0 defined for the solution Y. The absence of the quadratic form F(Ytk\u22121) \u2212F0(Ytk\u22121) in (73) indicates that L[PS|R] N is not suitable for estimating the drift parameter. Additionally, the penultimate term in (73) does not vanish, needing an additional correction term of 2h PN\u22121 k=1 Tr DvN(Ytk; \u03b2) for consistency. This correction is represented as 4 log | det Dvfh/2| in (41). Notably, this term is absent in the complete objective function (31), making this adjustment somewhat artificial and could potentially deviate further from the true log-likelihood. Consequently, the objective function based on the full likelihood (39) inherits this characteristic from (73), suggesting that in the partial observation scenario, using only the rough likelihood (72) may be more appropriate. 7" + }, + { + "url": "http://arxiv.org/abs/2211.11884v2", + "title": "Parameter Estimation in Nonlinear Multivariate Stochastic Differential Equations Based on Splitting Schemes", + "abstract": "Surprisingly, general estimators for nonlinear continuous time models based\non stochastic differential equations are yet lacking. Most applications still\nuse the Euler-Maruyama discretization, despite many proofs of its bias. More\nsophisticated methods, such as Kessler's Gaussian approximation, Ozak's Local\nLinearization, A\\\"it-Sahalia's Hermite expansions, or MCMC methods, lack a\nstraightforward implementation, do not scale well with increasing model\ndimension or can be numerically unstable. We propose two efficient and\neasy-to-implement likelihood-based estimators based on the Lie-Trotter (LT) and\nthe Strang (S) splitting schemes. We prove that S has $L^p$ convergence rate of\norder 1, a property already known for LT. We show that the estimators are\nconsistent and asymptotically efficient under the less restrictive one-sided\nLipschitz assumption. A numerical study on the 3-dimensional stochastic Lorenz\nsystem complements our theoretical findings. The simulation shows that the S\nestimator performs the best when measured on precision and computational speed\ncompared to the state-of-the-art.", + "authors": "Predrag Pilipovic, Adeline Samson, Susanne Ditlevsen", + "published": "2022-11-21", + "updated": "2024-01-17", + "primary_cat": "stat.ME", + "cats": [ + "stat.ME", + "math.ST", + "stat.TH" + ], + "main_content": "Introduction Stochastic differential equations (SDEs) are popular models for physical, biological, and socio-economic processes. Some recent applications include tipping points in the climate (Ditlevsen and Ditlevsen, 2023), the spread of COVID-19 (Arnst et al., 2022; Kareem and Al-Azzawi, 2021), animal movements (Michelot et al., 2019, 2021) and cryptocurrency rates (Dipple et al., 2020). The advantage of SDEs is their ability to capture and quantify the randomness of the underlying dynamics. 
They are especially applicable when the dynamics are not entirely understood, and the unknown parts act as random. The following parametric form is common for an SDE model with additive noise: dXt = F (Xt; \u03b2) dt + \u03a3 dWt, X0 = x0. (1) arXiv:2211.11884v2 [stat.ME] 17 Jan 2024 \fSDE Parameter Estimation using Splitting Schemes A PREPRINT We want to estimate the underlying drift parameter \u03b2 and diffusion parameter \u03a3 based on discrete observations of Xt. The transition density is necessary for likelihood-based estimators and, thus, a closed-form solution to (1). However, the transition density is only available for a few SDEs, including the Ornstein-Uhlenbeck (OU) process, which has a linear drift function F. Extensive literature exists on MCMC methods for the nonlinear case (Fuchs, 2013; Chopin and Papaspiliopoulos, 2020) however, these are often computationally intensive and do not always converge to the correct values for complex models. Thus, we need a valid approximation of the transition density to perform likelihood-based statistical inference. The most straightforward discretization scheme is the Euler-Maruyama (EM) (Kloeden and Platen, 1992). Its main advantage is the easy-to-implement and intuitive Gaussian transition density. Both frequentist and Bayesian approaches extensively employ EM across theoretical and applied studies. However, the EM-based estimator has many disadvantages. First, it exhibits pronounced bias as the discretization step increases (see Florens-Zmirou (1989) for a theoretical study, or Gloaguen et al. (2018), Gu et al. (2020) for applied studies). Second, Hutzenthaler et al. (2011) showed that it is not mean-square convergent when the drift function F of (1) grows super-linearly. Consequently, we should avoid EM for models with polynomial drift. Third, it often fails to preserve important structural properties, such as hypoellipticity, geometric ergodicity, and amplitudes, frequencies, and phases of oscillatory processes (Buckwar et al., 2022). Some pioneering papers on likelihood-based SDE estimators are Dacunha-Castelle and Florens-Zmirou (1986); Dohnal (1987); Florens-Zmirou (1989); Genon-Catalot and Jacod (1993); Kessler (1997). The first two only estimate the diffusion parameter. Florens-Zmirou (1989) used EM to estimate both parameters and derived asymptotic properties. Genon-Catalot and Jacod (1993) generalized to higher dimensions, non-equidistant discretization step, and a generic form of the objective function, however only estimating the diffusion parameter. Kessler (1997) proposed an estimator (denoted K) approximating the unknown transition density with a Gaussian density using the true conditional mean and covariance, or approximations thereof using the infinitesimal generator. He proved consistency and asymptotic normality under the commonly used, but too restrictive, global Lipschitz assumption on the drift function F. A competitive likelihood-based approach relies on local linearization (LL), initially proposed by Ozaki (1985) and later extended by Ozaki (1992); Shoji and Ozaki (1998). They approximated the drift between two consecutive observations by a linear function. In the case of additive noise, this corresponds to an OU process with a known Gaussian transition density. Thus, the likelihood approximation is a product of Gaussian densities. Shoji (1998) proved that LL discretization is one-step consistent and Lp convergent with order 1.5. Shoji (2011), Jimenez et al. 
(2017) extended the theory of LL for SDEs with multiplicative noise. Simulation studies show the superiority of the LL estimator compared to other estimators (Shoji and Ozaki, 1998; Hurn et al., 2007; Gloaguen et al., 2018; Gu et al., 2020). Until recently, the implementation of the LL estimator was numerically ill-conditioned due to the possible singularity of the Jacobian matrix of the drift function F. However, Gu et al. (2020) proposed an efficient implementation that overcomes this. The main disadvantage of the LL method is its slow computational speed. A\u00eft-Sahalia (2002) proposed Hermite expansions (HE) to approximate the transition density, focusing on univariate time-homogeneous diffusions. This method, widely utilized in finance, was later extended to both reducible and irreducible multivariate diffusions (A\u00eft-Sahalia, 2008). Chang and Chen (2011) found conditions under which the HE estimator has the same asymptotic distribution as the exact maximum likelihood estimator (MLE). Choi (2013, 2015) further broadened the technique to time-inhomogeneous settings. Picchini and Ditlevsen (2011) used the method for multidimensional diffusions with random effects. When an SDE is irreducible, A\u00eft-Sahalia (2008) applied Kolmogorov\u2019s backward and forward equations to develop a small-time expansion of the diffusion probability densities. Yang et al. (2019) introduced a delta expansion method, using It\u00f4-Taylor expansions to derive analytical approximations of the transition densities of multivariate diffusions inspired by A\u00eft-Sahalia (2002). While A\u00eft-Sahalia\u2019s approach allows for a broad class of drift and diffusion functions, the implementation can be complex. To our knowledge, there have not been any applications to models with more than four dimensions. Furthermore, computing coefficients even up to order two can be challenging, while higher-order approximations are often necessary for non-linear models. Hurn et al. (2007) implemented HE up to third order in univariate cases, emphasizing the importance of symbolic computation tools like Mathematica or Maple. Their survey concluded that while LL is the best among discrete maximum likelihood estimators, HE is the preferred overall choice. They highlighted that the HE proposed by A\u00eft-Sahalia (2002) has the best trade-off between speed and accuracy, proving more feasible than LL in most financial applications. This finding aligns with the newer review study from L\u00f3pez-P\u00e9rez et al. (2021). However, LL\u2019s broad applicability contrasts with the limitations of Hermite expansions, particularly for high-dimensional multivariate models exceeding three dimensions. Apart from the above-mentioned general methods, there are some specific setups. S\u00f8rensen and Uchida (2003) investigated a small-diffusion estimator, Ditlevsen and S\u00f8rensen (2004); Gloter (2006) worked with integrated diffusion, and Uchida and Yoshida (2012) used adaptive maximum likelihood estimation. Bibby and S\u00f8rensen (1995) and Forman and S\u00f8rensen (2008) explored martingale estimation functions (EF) in one-dimensional diffusions, but they are difficult 2 \fSDE Parameter Estimation using Splitting Schemes A PREPRINT to extend to multidimensional SDEs. Ditlevsen and Samson (2019) used the 1.5 scheme to solve the problem of hypoellipticity when the diffusion matrix is not of full rank. More recently, contributions from Gloter and Yoshida (2020, 2021) have extended the research of Uchida and Yoshida (2012). 
Gloter and Yoshida (2020) introduced a non-adaptive approach and offered similar analytic asymptotic results as Ditlevsen and Samson (2019) without imposing strict limitations on the model class. Iguchi et al. (2022) proposed sampling schemes for elliptic and hypoelliptic models that often result in conditionally non-Gaussian integrals, distinguishing their approach from prior works. As the transition density of their new scheme is typically complex, Iguchi et al. (2022) created a closed-form density expansion using Malliavin calculus. They recommended a transition density scheme that retained second-order precision through prudent truncation of the expansion. This closed-form expansion aligns with the works of A\u00eft-Sahalia (2002, 2008) and Li (2013) on elliptic SDEs, although with a different approach. Iguchi et al. (2022) deliver asymptotic results with analytically available rates, beneficial for both elliptic and hypoelliptic models. Table 1 provides a comprehensive overview of estimator properties, finite sample performance, and required model assumptions for the most prominent state-of-the-art methods. While asymptotic properties might be similar in most cases, the finite sample properties are often different. The table also includes the Lie-Trotter (LT) and the Strang (S) splitting estimators, which we propose in this paper. The comparison encompasses four key characteristics: (1) Diffusion coefficient allowed in the model class, distinguishing between additive and general noise; (2) Asymptotic regime, the conditions needed to prove the asymptotic properties; (3) Implementation, assessing the complexity of implementation, dependence on model dimension and parameter optimization time; and (4) Finite sample properties, evaluating performance for fixed sample size N and discretization step size h. An essential aspect of any estimator is the practical execution in real-world applications. Although the previously mentioned research contributes significantly to the theoretical development and broadens our understanding of inference for SDEs, its practical implementations tend not to be user-friendly. Except for precomputed models, applications by non-specialists can be challenging. Our main contribution is proposing estimators that are intuitive, easy to implement, computationally efficient, and scalable with increasing dimensions. These characteristics make the estimators accessible to researchers in various applied sciences while maintaining desirable statistical properties. Moreover, these estimators remain competitive with the best state-of-the-art methods, particularly concerning estimation bias and variance. We propose to use the LT or the S splitting schemes for statistical inference. These numerical approximations were first suggested for ordinary differential equations (ODEs) (see for example, McLachlan and Quispel (2002); Blanes et al. (2009)), but their extension to SDEs is straightforward. A few studies have investigated numerical properties (Bensoussan et al., 1992; Ableidinger et al., 2017; Ableidinger and Buckwar, 2016; Buckwar et al., 2022). Barbu (1988) applied LT splitting on nonlinear optimal control problems, while Hopkins and Wong (1986) used it for nonlinear filtering. Bou-Rabee and Owhadi (2010); Abdulle et al. (2015) used LT splitting to investigate conditions for preserving the measure of the ergodic nonlinear Langevin equations. Recently, Br\u00e9hier et al. 
(2023) showed that LT splitting successfully preserved positivity for a class of nonlinear stochastic heat equations with multiplicative space-time white noise. Additional studies on the application of splitting schemes to SDEs include those by Misawa (2001); Milstein and Tretyakov (2003); Leimkuhler and Matthews (2015); Alamo and Sanz-Serna (2016); Br\u00e9hier and Gouden\u00e8ge (2019). Regarding statistical applications, to the best of our knowledge, only Buckwar et al. (2020); Ditlevsen et al. (2023) used splitting schemes for parametric inference in combination with Approximate Bayesian Computation, and Ditlevsen and Ditlevsen (2023) used it for prediction of a forthcoming collapse in the climate. This paper presents five main contributions: 1. We introduce two new efficient, easy-to-implement, and computationally fast estimators for multidimensional nonlinear SDEs. 2. We establish Lp convergence of the S splitting scheme. 3. We prove consistency and asymptotic normality of the new estimators under the less restrictive assumption of one-sided Lipschitz. This proof requires innovative approaches. 4. We demonstrate the estimators\u2019 performance in a stochastic version of the chaotic Lorenz system, in contrast to prior studies that primarily addressed the deterministic Lorenz system. 5. We compare the new estimators to three discrete maximum likelihood estimators from the literature in a simulation study, comparing the accuracy and computational speed. The rest of this paper is structured as follows. In Section 2 we introduce the SDE model class and define the splitting schemes and the estimators. In Section 3, we show that the S splitting has better one-step predictions than the LT, and we prove that the S splitting is Lp consistent with order 1.5 and Lp convergent with order 1. To the best of our knowledge, this is a new result. Sections 4 and 5 establish the estimator asymptotics under the less restrictive 3 \fSDE Parameter Estimation using Splitting Schemes A PREPRINT Estimator Noise type Asymptotic regime Computational time and implementation Finite sample properties EM General h \u21920, Nh \u2192\u221e, Nh2 \u21920 (Florens-Zmirou, 1989) Fastest optimization and implementation. Straightforward for any dimension. Earliest bias exhibition with increasing h. K up to order J General J fixed: h \u21920, Nh \u2192\u221e, Nhp \u21920, for any p \u2208N a (Kessler, 1997) Fast optimization. Straightforward for J \u22643. Unbiased if the exact mean is known. For larger h, a higher order of J is needed. Performance between EM and LL. EF General h fixed: N \u2192\u221e(Bibby and S\u00f8rensen, 1995) Fast optimization. Requires moments of the transition density. Mainly suitable for univariate models. Unbiased also for large h, but not efficient. Good performance. LL Additive (possible generalization) (Jimenez et al., 2017) h \u21920, Nh \u2192\u221e, Nh2 \u21920 (Ozaki, 1992) Slowest discrete ML approximations. (Hurn et al., 2007) Straightforward for any dimension. Best among all discrete ML approximations. (Hurn et al., 2007) HE up to order J General h fixed: N \u2192\u221e, J \u2192\u221e, Nh2J+2 \u2192 0, J \u22652 fixed: N \u2192\u221e, h \u21920, Nh3 \u2192 \u221e, Nh2J+1 \u21920 (Chang and Chen, 2011) Slower than LL in the univariate case. Implementation becomes significantly more complex in higher dimensions or for J \u22652. (Hurn et al., 2007) For larger h, a higher order of J is needed. Better than LL in the univariate case. 
(Hurn et al., 2007) LT (proposed) Additive (possible generalization) h \u21920, Nh \u2192\u221e, Nh2 \u21920 Slower than K, but notably faster than LL. Straightforward implementation for given nonlinear ODE solution. Scales well with the increasing dimension. Performance relative to EM varies based on splitting strategy and model. S (proposed) Additive (possible generalization) h \u21920, Nh \u2192\u221e, Nh2 \u21920 Slower than LT, but notably faster than LL. Straightforward implementation for given nonlinear ODE solution. Scales well with the increasing dimension. As good as LL. Table 1: Comparison of the proposed Lie-Trotter (LT) and Strang (S) splittings (in bold) with five state-of-the-art estimators: Euler-Maruyama (EM), Kessler (K), Estimating functions (EF), Local linearization (LL) and Hermite expansion (HE). The comparison focuses on four key characteristics: (1) Noise type additive or general, (2) Asymptotic regime \u2013 investigating conditions where asymptotic properties align with the exact MLE, (3) Computational time and implementation \u2013 evaluating implementation and parameter optimization costs; and (4) Finite sample properties \u2013 assessing performance under fixed N and h. The finite sample properties of the estimators are likely influenced by specific experiment designs. aWhile Kessler (1997) did not explicitly explore the scenario of a fixed h, it is a reasonable assumption that the asymptotic results will hold as N \u2192\u221eand J \u2192\u221e. 4 \fSDE Parameter Estimation using Splitting Schemes A PREPRINT one-sided global Lipschitz assumption. We illustrate in Section 6 the theoretical results in a simulation study on a model that is not globally Lipschitz, the 3-dimensional stochastic Lorenz systems. Since the objective functions based on pseudo-likelihoods are multivariate in both data and parameters, we use automatic differentiation (AD) to get faster and more reliable estimators. We compare the precision and speed of the EM, K, LL, LT, and S estimators. We show that the EM and LT estimators become biased before the others with increasing discretization step h and that the LL and S perform the best. However, S is much faster than LL because LL calculates a new covariance matrix for each combination of data points and parameter values. Notation. We use capital bold letters for random vectors, vector-valued functions, and matrices, while lowercase bold letters denote deterministic vectors. \u2225\u00b7 \u2225denotes both the L2 vector norm in Rd and the matrix norm induced by the L2 norm, defined as the square root of the largest eigenvalue. Superscript (i) on a vector denotes the i-th component, while on a matrix it denotes the i-th row. Double subscript ij on a matrix denotes the component in the i-th row and j-th column. If a matrix is a product of more matrices, square brackets with subscripts denote a component inside the matrix. The transpose is denoted by \u22a4. Operator Tr(\u00b7) returns the trace of a matrix and det(\u00b7) the determinant. Sometimes, we denote by [ai]d i=1 a vector with coordinates ai, and by [bij]d i,j=1 a matrix with coordinates bij, for i, j = 1, . . . , d. We denote with \u2202ig(x) the partial derivative of a generic function g : Rd \u2192R with respect to x(i) and \u22022 ijg(x) the second partial derivative. The nabla operator \u2207denotes the gradient vector of a function g, \u2207g(x) = [\u2202ig(x)]d i=1. 
The differential operator D denotes the Jacobian matrix DF(x) = [\u2202iF (j)(x)]d i,j=1, for a vector-valued function F : Rd \u2192Rd. H denotes the Hessian matrix of a real-valued function g, Hg(x) = [\u2202ijg(x)]d i,j=1. Let R represent a vector (or a matrix) valued function defined on (0, 1)\u00d7Rd, such that, for some constant C, \u2225R(a, x)\u2225< aC(1+\u2225x\u2225)C for all a, x. When denoted R, it is a scalar. The Kronecker delta function is denoted by \u03b4j i . For an open set A, the bar A indicates closure. We use \u03b8 = to indicate equality up to an additive constant that does not depend on \u03b8. We write P \u2212 \u2192, d \u2212 \u2192and P\u2212a.s. \u2212 \u2212 \u2212 \u2212 \u2192for convergence in probability, distribution, and almost surely, respectively. Id denotes the d-dimensional identity matrix, while 0d\u00d7d is a d-dimensional zero square matrix. For an event E \u2208F, we denote by \u22aeE the indicator function. 2 Problem setup Let X in (1) be defined on a complete probability space (\u2126, F, P\u03b8) with a complete right-continuous filtration (Ft)t\u22650, and let the d-dimensional Wiener process W = (Wt)t\u22650 be adapted to Ft. The probability measure P\u03b8 is parameterized by the parameter \u03b8 = (\u03b2, \u03a3). Rewrite equation (1) as follows: dXt = A(\u03b2)(Xt \u2212b(\u03b2)) dt + N (Xt; \u03b2) dt + \u03a3 dWt, X0 = x0, (2) such that F(x; \u03b2) = A(\u03b2)(x \u2212b(\u03b2)) + N (x; \u03b2). Let \u0398 = \u0398\u03b2 \u00d7 \u0398\u03a3 be the parameter space with \u0398\u03b2 and \u0398\u03a3 being two open convex bounded subsets of Rr and Rd\u00d7d, respectively. Functions F, N : Rd \u00d7 \u0398\u03b2 \u2192Rd are locally Lipschitz, and A, b are defined on \u0398\u03b2 and take values in Rd\u00d7d and Rd, respectively. Parameter matrix \u03a3 takes values in Rd\u00d7d. The matrix \u03a3\u03a3\u22a4is assumed to be positive definite and determines the variance of the process. Since any square root of \u03a3\u03a3\u22a4induces the same distribution, \u03a3 is only identifiable up to equivalence classes. Thus, instead of estimating \u03a3, we estimate \u03a3\u03a3\u22a4. The drift function F in (1) is split up into a linear part given by matrix A and vector b and a nonlinear part given by N. This decomposition is essential for defining the splitting schemes and the objective functions used for estimating \u03b8. We denote the true parameter value by \u03b80 = (\u03b20, \u03a30) and assume that \u03b80 \u2208\u0398. Sometimes we write A0, b0, N0(x) and \u03a3\u03a3\u22a4 0 instead of A(\u03b20), b(\u03b20), N(x; \u03b20) and \u03a30\u03a3\u22a4 0 , when referring to the true parameters. We write A, b, N(x) and \u03a3\u03a3\u22a4for any parameter \u03b8. Sometimes, we suppress the parameter to simplify notation. For example, E implicitly refers to E\u03b8. Remark The drift function F(x) can always be rewritten as A(x \u2212b) + N(x) for any A, b by setting N(x) = F(x) \u2212A(x \u2212b), including choosing A and b to be zero. In this case, the splitting proposed below will result in a Brownian motion (3) and a nonlinear ODE (4). Remark We assume additive noise, meaning that the diffusion matrix does not depend on the current state. While this assumption is natural in some applications, it can be restrictive in others. The proposed methodology could potentially be extended to reducible diffusions by applying the Lamperti transform to obtain a unit diffusion coefficient, as demonstrated by A\u00eft-Sahalia (2008). 
However, if the transform depends on the parameter, estimation is not straightforward. In this paper, we only consider additive noise. 5 \fSDE Parameter Estimation using Splitting Schemes A PREPRINT 2.1 Assumptions The main assumption is that (2) has a unique strong solution X= (Xt)t\u2208[0,T ], adapted to (Ft)t\u2208[0,T ], which follows from the following first two assumptions (Theorem 2 in Alyushina (1988), Theorem 1 in Krylov (1991), Theorem 3.5 in Mao (2007)). We need the last three assumptions to prove the properties of the estimators. (A1) Function N is twice continuously differentiable with respect to x and \u03b8, i.e., N \u2208C2. Additionally, it is one-sided globally Lipschitz continuous with respect to x on Rd \u00d7 \u0398\u03b2, i.e., there exists a constant C > 0 such that: (x \u2212y)\u22a4(N(x; \u03b2) \u2212N(y; \u03b2)) \u2264C\u2225x \u2212y\u22252, \u2200x, y \u2208Rd. (A2) Function N grows at most polynomially in x, uniformly in \u03b8, i.e., there exist constants C > 0 and \u03c7 \u22651 such that: \u2225N (x; \u03b2) \u2212N (y; \u03b2) \u22252 \u2264C \u00001 + \u2225x\u22252\u03c7\u22122 + \u2225y\u22252\u03c7\u22122\u0001 \u2225x \u2212y\u22252, \u2200x, y \u2208Rd. Additionally, its derivatives are of polynomial growth in x, uniformly in \u03b8. (A3) The solution X of SDE (1) has invariant probability \u03bd0(dx). (A4) \u03a3\u03a3\u22a4is invertible on \u0398\u03a3. (A5) Function F is identifiable in \u03b2, i.e., if F(x, \u03b21) = F(x, \u03b22) for all x \u2208Rd, then \u03b21 = \u03b22. Assumption (A3) is required for the ergodic theorem to ensure convergence in distribution. Assumption (A4) implies the ellipticity of model (1), which is not needed for the S estimator. On the contrary, the EM estimator breaks down in hypoelliptic models. We will treat the hypoelliptic case in a separate paper where the proofs are more involved. Assumption (A5) ensures the identifiability of the parameter. Assume a sample (Xtk)N k=0 \u2261X0:tN from (2) at time steps 0 = t0 < t1 < \u00b7 \u00b7 \u00b7 < tN = T, which we, for notational simplicity, assume equidistant with step size h = tk \u2212tk\u22121. 2.2 Moments Assumption (A1) ensures finiteness of the moments of the solution X (Tretyakov and Zhang, 2013), i.e., E[ sup t\u2208[0,T ] \u2225Xt\u22252p] < C(1 + \u2225x0\u22252p), \u2200p \u22651. Furthermore, we need the infinitesimal generator L of (1) defined on sufficiently smooth functions g : Rd \u00d7 \u0398 \u2192R given by: L\u03b80g (x; \u03b8) = F (x; \u03b20)\u22a4\u2207g (x; \u03b8) + 1 2 Tr(\u03a3\u03a3\u22a4 0 Hg(x; \u03b8)). The moments of SDE (1) are expanded using the following lemma (Lemma 1.10 in S\u00f8rensen (2012)). Lemma 2.1 Let Assumptions (A1)-(A2) hold. Let X be a solution of (1). Let g \u2208C(2l+2) be of polynomial growth and p \u22652. Then, E\u03b80[g(Xtk; \u03b8) | Ftk\u22121] = l X j=0 hj j! Lj \u03b80g(Xtk\u22121; \u03b8) + R(hl+1, Xtk\u22121). We need terms up to order R(h3, Xtk\u22121). After applying the generator L\u03b8 on g(x) = x(i), the previous Lemma yields: E[X(i) tk | Xtk\u22121 = x] = x(i) + hF (i)(x) + h2 2 (F(x)\u22a4\u2207F (i)(x) + 1 2 Tr(\u03a3\u03a3\u22a4HF (i)(x))) + R(h3, x). 2.3 Splitting Schemes Consider the following splitting of (2): dX[1] t = A(X[1] t \u2212b) dt + \u03a3 dWt, X[1] 0 = x0, (3) dX[2] t = N(X[2] t ) dt, X[2] 0 = x0. 
(4) 6 \fSDE Parameter Estimation using Splitting Schemes A PREPRINT The solution of equation (3) is an OU process given by the following h-flow: X[1] tk = \u03a6[1] h (X[1] tk\u22121) = eAhX[1] tk\u22121+(I \u2212eAh)b + \u03beh,k, (5) where \u03beh,k i.i.d \u223cNd(0, \u2126h) for k = 1, . . . , N (Vatiwutipong and Phewchean, 2019). The covariance matrix \u2126h and the conditional mean of the OU process (5) are provided by: \u2126h = Z h 0 eA(h\u2212u)\u03a3\u03a3\u22a4eA\u22a4(h\u2212u) du = h\u03a3\u03a3\u22a4+ h2 2 (A\u03a3\u03a3\u22a4+ \u03a3\u03a3\u22a4A\u22a4) + R(h, x0), (6) \u00b5h(x; \u03b2) := eA(\u03b2)hx + (I \u2212eA(\u03b2)h)b(\u03b2). (7) Assumptions (A1) and (A2) ensure the existence and uniqueness of the solution of (4) (Theorem 1.2.17 in Humphries and Stuart (2002)). Thus, there exists a unique function fh : Rd \u00d7 \u0398\u03b2 \u2192Rd, for h \u22650, such that: X[2] tk = \u03a6[2] h (X[2] tk\u22121) = fh(X[2] tk\u22121; \u03b2). (8) For all \u03b2 \u2208\u0398\u03b2, the time flow fh fulfills the following semi-group properties: f0(x; \u03b2) = x, ft+s(x; \u03b2) = ft(fs(x; \u03b2); \u03b2), t, s \u22650. (9) Remark Since only one-sided Lipschitz continuity is assumed, the solution to (4) might not exist for all h < 0 and all x0 \u2208Rd, implying that the inverse f \u22121 h might not exist. If it exists, then f \u22121 h = f\u2212h. For the S estimator, we need a well-defined inverse. This is not an issue when N is globally Lipschitz . We, therefore, introduce the following and last assumption. (A6) Function f \u22121 h (x; \u03b2) is defined asymptotically, for all x \u2208Rd, \u03b2 \u2208\u0398\u03b2, when h \u21920. Before defining the splitting schemes, we present a useful proposition for expending the nonlinear solution fh (Section 1.8 in (Hairer et al., 1993)). Proposition 2.2 Let Assumptions (A1)-(A2) hold. When h \u21920, the h-flow of (4) is fh(x) = x + hN(x) + h2 2 (DN(x)) N(x) + R(h3, x). Now, we introduce the two most common splitting approximations, which serve as the main building blocks for the proposed estimators. Definition 2.3 Let Assumptions (A1) and (A2) hold. The Lie-Trotter and Strang splitting approximations of the solution of (2) are given by: X[LT] tk := \u03a6[LT] h (X[LT] tk\u22121) = (\u03a6[1] h \u25e6\u03a6[2] h )(X[LT] tk\u22121) = \u00b5h(fh(X[LT] tk\u22121)) + \u03beh,k, (10) X[S] tk := \u03a6[S] h (X[S] tk\u22121) = (\u03a6[2] h/2 \u25e6\u03a6[1] h \u25e6\u03a6[2] h/2)(X[S] tk\u22121) = fh/2(\u00b5h(fh/2(X[S] tk\u22121)) + \u03beh,k). (11) Remark The order of composition in the splitting schemes is not unique. Changing the order in the S splitting leads to a sum of 2 independent random variables, one Gaussian and one non-Gaussian, whose likelihood is not trivial. Thus, we only use the splitting (11). The reversed order in the LT splitting can be treated the same way as the S splitting. Remark Splitting the drift F(x) into a linear and a nonlinear part is not unique. However, all theorems and properties, particularly consistency and asymptotic normality of the estimators, hold for any splitting choice. Yet, for fixed step size h and sample size N, certain splittings perform better than others. In this paper, we present two general and intuitive strategies. The first applies when the system has a fixed point; here, the linear part of the splitting is the linearization around the fixed point. The linear OU performs accurately near the fixed point, with the nonlinear part correcting for nonlinear deviations. 
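To make Definition 2.3 concrete, the following is a minimal R sketch (illustrative only, not the authors' implementation) of one Lie-Trotter step (10) and one Strang step (11). It assumes the user supplies the linear part (A, b, Sigma) and a function f_h(x, h) implementing the h-flow (8) of the nonlinear ODE (4); the helper names ou_moments, mu_h, lt_step, and strang_step are hypothetical, the matrix exponential is taken from the expm package, and Omega_h is replaced by its second-order expansion in (6) rather than the exact integral.

```r
# Minimal sketch of one Lie-Trotter (10) and one Strang (11) step.
# Assumptions: `f_h(x, h)` is a user-supplied h-flow of the nonlinear ODE (4);
# the expm package provides the matrix exponential; Omega_h is the
# second-order expansion from (6), not the exact integral.

ou_moments <- function(A, b, Sigma, h) {
  eAh   <- expm::expm(A * h)                              # e^{Ah}
  SS    <- Sigma %*% t(Sigma)
  Omega <- h * SS + (h^2 / 2) * (A %*% SS + SS %*% t(A))  # expansion in (6)
  list(eAh = eAh, Omega = Omega)
}

mu_h <- function(x, eAh, b) {                             # conditional mean (7)
  as.vector(eAh %*% x + (diag(nrow(eAh)) - eAh) %*% b)
}

lt_step <- function(x, A, b, Sigma, h, f_h) {             # scheme (10)
  m  <- ou_moments(A, b, Sigma, h)
  xi <- as.vector(t(chol(m$Omega)) %*% rnorm(length(x)))  # xi ~ N(0, Omega_h)
  mu_h(f_h(x, h), m$eAh, b) + xi
}

strang_step <- function(x, A, b, Sigma, h, f_h) {         # scheme (11)
  m  <- ou_moments(A, b, Sigma, h)
  xi <- as.vector(t(chol(m$Omega)) %*% rnorm(length(x)))
  f_h(mu_h(f_h(x, h / 2), m$eAh, b) + xi, h / 2)
}
```

A different splitting choice only changes the inputs (A, b, Sigma) and f_h passed to these one-step maps; applied recursively from x0 they generate approximate trajectories of (2).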
Simulations consistently show this approach to perform best. Another strategy is to linearize around the measured average value for each coordinate. An in-depth analysis of the splitting strategies for a specific example is provided in Section 2.5. Remark Overall trajectories of the S and LT splittings coincide up to the first h/2 and the last h/2 move of the flow \u03a6[2] h/2. Indeed, when applied k times, the S splitting can be written as: (\u03a6[S] h )k(x0) = (\u03a6[2] h/2 \u25e6(\u03a6[LT] h )k \u25e6\u03a6[2] \u2212h/2)(x0). Thus, it is natural that LT and S have the same order of Lp convergence. We prove this in Section 3. However, the LT and S trajectories differ in their output points (10) and (11). The S splitting outputs the middle points of the smooth steps of the deterministic flow (8), while the LT splitting outputs the stochastic increments in the rough steps. We conjecture that this is one of the reasons why the S splitting has superior statistical properties. 7 \fSDE Parameter Estimation using Splitting Schemes A PREPRINT 2.4 Estimators In this section, we first introduce two new estimators, LT and S, given a sample X0:tN . Subsequently, we provide a brief overview of the EM, K, and LL estimators, which will be compared in the simulation study. 2.4.1 Splitting estimators The LT scheme (10) follows a Gaussian distribution. Consequently, the objective function corresponds to (twice) the negative pseudo-log-likelihood: L[LT](X0:tN ; \u03b8) \u03b8 = N log(det \u2126h(\u03b8)) + N X k=1 (Xtk \u2212\u00b5h(fh(Xtk\u22121; \u03b2); \u03b2))\u22a4\u2126h(\u03b8)\u22121(Xtk \u2212\u00b5h(fh(Xtk\u22121; \u03b2); \u03b2)). (12) The S splitting (11) is a nonlinear transformation of the Gaussian random variable \u00b5h(fh/2(Xtk\u22121; \u03b2); \u03b2) + \u03beh,k. We first define: Ztk(\u03b2) := f \u22121 h/2(Xtk; \u03b2) \u2212\u00b5h(fh/2(Xtk; \u03b2); \u03b2). (13) Afterwards, we apply a change of variables to derive the following objective function: L[S](X0:tN ; \u03b8) \u03b8 = N log(det \u2126h(\u03b8)) + N X k=1 Ztk(\u03b2)\u22a4\u2126h(\u03b8)\u22121Ztk(\u03b2) \u22122 N X k=1 log | det Df \u22121 h/2(Xtk; \u03b2)|. (14) The last term is due to the nonlinear transformation and is an extra term that does not appear in commonly used pseudo-likelihoods. The inverse function f \u22121 h may not exist for all parameters in the search domain of the optimization algorithm. However, this problem it can often be solved numerically. When f \u22121 h is well defined, we use the identity \u2212log | det Df \u22121 h (x; \u03b2) | = log | det Dfh (x; \u03b2) | in (14) to increase the speed and numerical stability. Finally, we define the estimators as: b \u03b8[k] N := arg min \u03b8 L[k] (X0:tN ; \u03b8) , k \u2208{LT, S}. (15) 2.4.2 Euler-Maruyama The EM method uses first-order Taylor expansion of (1): X[EM] tk := X[EM] tk\u22121 + hF(X[EM] tk\u22121 ; \u03b2) + \u03be[EM] h,k , (16) where \u03be[EM] h,k i.i.d. \u223cNd(0, h\u03a3\u03a3\u22a4) for k = 1, . . . , N (Kloeden and Platen, 1992). The transition density p[EM](Xtk | Xtk\u22121; \u03b8) is Gaussian, so the pseudo-likelihood follows trivially. 2.4.3 Kessler The K estimator uses Gaussian transition densities p[K](Xtk | Xtk\u22121; \u03b8) with the true mean and covariance of the solution X (Kessler, 1997). When the moments are unknown, they are approximated using the infinitesimal generator (Lemma 2.1). 
We implement the estimator K based on the 2nd-order approximation: X[K] tk := X[K] tk\u22121 + hF(X[K] tk\u22121; \u03b2) + \u03be[K] h,k(X[K] tk\u22121) + h2 2 \u0010 DF(X[K] tk\u22121; \u03b2)F(X[K] tk\u22121; \u03b2) + 1 2[Tr(\u03a3\u03a3\u22a4HF (i)(X[K] tk\u22121; \u03b2))]d i=1 \u0011 , (17) where \u03be[K] h,k(X[K] tk\u22121) \u223cNd(0, \u2126[K] h,k(\u03b8)), and \u2126[K] h,k(\u03b8) = h\u03a3\u03a3\u22a4+ h2 2 (DF(X[K] tk\u22121; \u03b2)\u03a3\u03a3\u22a4+ \u03a3\u03a3\u22a4D\u22a4F(X[K] tk\u22121; \u03b2)). The covariance matrix is not constant which makes the algorithm slower for a larger sample size. 8 \fSDE Parameter Estimation using Splitting Schemes A PREPRINT 2.4.4 Ozaki\u2019s local linearization Ozaki\u2019s LL method approximates the drift of (1) between consecutive observations by a linear function (Jimenez et al., 1999). The LL method consists of the following steps: (1) Perform LL of the drift F in each time interval [t, t + h) by the It\u00f4-Taylor series; (2) Compute the analytic solution of the resulting linear SDE. The approximation becomes: X[LL] tk := X[LL] tk\u22121 + \u03a6[LL] h (X[LL] tk\u22121; \u03b8) + \u03be[LL] h,k (X[LL] tk\u22121), (18) where \u2126[LL] h,k (\u03b8) = R h 0 e DF(X[LL] tk\u22121;\u03b2)(h\u2212u)\u03a3\u03a3\u22a4e DF(X[LL] tk\u22121;\u03b2)\u22a4(h\u2212u) du and \u03be[LL] h,k (X[LL] tk\u22121) \u223cNd(0, \u2126[LL] h,k (\u03b8)). Moreover, \u03a6[LL] h (x; \u03b8) = Rh,0(DF(x; \u03b2))F(x; \u03b2) + (hRh,0(DF(x; \u03b2)) \u2212Rh,1(DF(x; \u03b2)))M(x; \u03b8), Rh,i(DF(x; \u03b2)) = Z h 0 exp(DF(x; \u03b2)u)ui du, i = 0, 1, M(x; \u03b8) = 1 2(Tr H1(x; \u03b8), Tr H2(x; \u03b8), ..., Tr Hd(x; \u03b8))\u22a4, Hk(x; \u03b8) = \u0014 [\u03a3\u03a3\u22a4]ij \u22022F (k) \u2202x(i)\u2202x(j) (x) \u0015d i,j=1 . Building on the approach by Gu et al. (2020), we can efficiently compute Rh,i and \u2126[LL] h,k (\u03b8) using the following procedure. To begin, let us define three block matrices: P1(x) = \u0014 0d\u00d7d Id 0d\u00d7d DF(x; \u03b2) \u0015 , P2(x) = \"\u2212DF(x; \u03b2) Id 0d\u00d7d 0d\u00d7d 0d\u00d7d Id 0d\u00d7d 0d\u00d7d 0d\u00d7d # , P3(x) = \u0014 DF(x; \u03b2) \u03a3\u03a3\u22a4 0d\u00d7d \u2212DF(x; \u03b2)\u22a4 \u0015 . (19) Then, we compute the matrix exponential of matrices hP1(x) and hP2(x): exp(hP1(x)) = \u0014 \u22c6 Rh,0(DF(x; \u03b2)) 0d\u00d7d \u22c6 \u0015 , exp(hP2(x)) = \" \u22c6 \u22c6 BRh,1(DF(x; \u03b2)) 0d\u00d7d \u22c6 \u22c6 0d\u00d7d 0d\u00d7d \u22c6 # . Starting with the first matrix, we derive Rh,0(DF(x; \u03b2)). Then, we compute Rh,1(DF(x; \u03b2)) using the formula Rh,1(DF(x; \u03b2)) = exp(hDF(x; \u03b2))BRh,1(DF(x; \u03b2)). The terms marked with \u22c6symbols can be disregarded. Finally, we obtain \u2126[LL] h,k (\u03b8) from the matrix exponential: exp(hP3(x)) = \u0014 B\u2126h,k(DF(x; \u03b2); \u03b8) C\u2126h,k(DF(x; \u03b2); \u03b8) 0d\u00d7d \u22c6 \u0015 , \u2126[LL] h,k (\u03b8) = C\u2126h,k(DF(x; \u03b2); \u03b8)B\u2126h,k(DF(x; \u03b2); \u03b8)\u22a4. Thus, we have a Gaussian density p[LL](Xtk | Xtk\u22121; \u03b8) and standard likelihood inference. Like in the case of K, the covariance matrix \u2126[LL] h,k (\u03b8) depends on the previous state X[LL] tk\u22121, which is a major downside since it is harder to implement and slower to run due to the computation of N \u22121 covariance matrices. Unlike K, LL does not use Taylor expansions of the approximated drift and covariance matrix, so the influence of the sample size N on computational times is much stronger. 
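To illustrate the block-matrix computation above, here is a minimal R sketch (an assumed implementation, not code from the paper or from Gu et al. (2020)) that obtains the integrated covariance from a single matrix exponential of a matrix structured like P3 in (19); the function name ou_covariance is hypothetical and the expm package is assumed. With the Jacobian DF(x; β) as input it gives Ω^[LL]_{h,k}(θ), and with the constant matrix A it gives Ω_h in (6), which is how the same trick is reused for the splitting estimators in Section 6.1.

```r
# Minimal sketch: integrated covariance via one matrix exponential.
# Assumption: the expm package provides the matrix exponential.
ou_covariance <- function(A, SS, h) {          # SS = Sigma %*% t(Sigma)
  d  <- nrow(A)
  P3 <- rbind(cbind(A, SS),
              cbind(matrix(0, d, d), -t(A)))   # block matrix as in (19)
  E  <- expm::expm(h * P3)
  B  <- E[1:d, 1:d, drop = FALSE]              # top-left block  = exp(hA)
  C  <- E[1:d, (d + 1):(2 * d), drop = FALSE]  # top-right block
  C %*% t(B)                                   # Omega = C B^T
}
```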
For details on the derivations of the previous formulas, see Gu et al. (2020).

2.5 An example: the stochastic Lorenz system

The Lorenz system is a 3D system introduced by Lorenz (1963) to model atmospheric convection. The original model is deterministic and exhibits deterministic chaos, meaning that tiny differences in initial conditions lead to unpredictable and widely diverging trajectories. The Lorenz system evolves around two strange attractors: the trajectories remain within some bounded region, while points that start in close proximity may eventually separate by arbitrary distances as time progresses (Hilborn and Hilborn, 2000). We add noise to include unmodelled forces and randomness in the Lorenz system. The stochastic Lorenz system is given by:

dXt = p(Yt - Xt) dt + σ1 dW(1)t,
dYt = (rXt - Yt - XtZt) dt + σ2 dW(2)t,    (20)
dZt = (XtYt - cZt) dt + σ3 dW(3)t.

Figure 1: An example trajectory of the stochastic Lorenz system (20) starting at (0, 1, 0) for N = 10000 and h = 0.005. The first row shows the evolution of the individual components X, Y, and Z. The second row shows the evolution of the component pairs (Y, Z), (X, Z) and (X, Y). Parameters are p = 10, r = 28, c = 8/3, σ1² = 1, σ2² = 2 and σ3² = 1.5.

The variables Xt, Yt, and Zt represent the convective intensity and the horizontal and vertical temperature differences, respectively. Parameters p, r, and c denote the Prandtl number, the Rayleigh number, and a geometric factor, respectively (Tabor, 1989). Lorenz (1963) used the values p = 10, r = 28 and c = 8/3, yielding chaotic behavior. The system does not fulfill the global or the one-sided Lipschitz condition because the drift is a second-order polynomial (Humphries and Stuart, 1994). However, it has a unique global solution and an invariant probability (Keller, 1996). Thus, Assumptions (A2)-(A5) hold, but (A1) does not. Even so, we show in Section 6 that the estimators work. Different approaches for estimating parameters in the Lorenz system have been proposed, mostly in the deterministic case. Zhuang et al. (2020) and Lazzús et al. (2016) used sophisticated optimization algorithms to achieve better precision. Dubois et al. (2020) and Ann et al. (2022) used deep neural networks in combination with other machine learning algorithms. Ozaki et al. (2000) used Kalman filtering based on LL on the stochastic Lorenz system. Figure 1 shows an example trajectory of the stochastic Lorenz system. The trajectory was generated by subsampling from an EM simulation, such that N = 10000 and h = 0.05, with parameter values p = 10, r = 28, c = 8/3, σ1² = 1, σ2² = 2 and σ3² = 1.5. Even if the trajectory had not been stochastic, the unpredictable jumps in the first row of Figure 1 would still be present due to the chaotic behavior. We suggest splitting SDE (20) by choosing the OU part (3) as the linearization around one of the two fixed points (x⋆, y⋆, z⋆) = (±√(c(r - 1)), ±√(c(r - 1)), r - 1). For simplicity, we exclude the fixed point (0, 0, 0) since X and Y spend little time around this point, see Figure 1. Specifically, we apply a mixture of two splittings, linearizing around (√(c(r - 1)), √(c(r - 1)), r - 1) when X > 0 and around (-√(c(r - 1)), -√(c(r - 1)), r - 1) when X < 0. We denote these estimators by LTmix and Smix.
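Before writing out the splitting, a small R sketch (purely illustrative, with hypothetical helper names lorenz_drift and mix_fixed_point) of the drift of (20) and of the selection rule behind the mixture strategy just described: the linearization point is chosen according to the sign of the current X-coordinate.

```r
# Drift of the stochastic Lorenz system (20) and the fixed point used for the
# "mix" splitting; both helpers are illustrative only.
lorenz_drift <- function(state, p, r, c) {
  x <- state[1]; y <- state[2]; z <- state[3]
  c(p * (y - x),
    r * x - y - x * z,
    x * y - c * z)
}

mix_fixed_point <- function(x, r, c) {
  s <- if (x >= 0) 1 else -1                  # attractor on the same side as X
  c(s * sqrt(c * (r - 1)), s * sqrt(c * (r - 1)), r - 1)
}
```

For p = 10, r = 28, c = 8/3 this returns (±√72, ±√72, 27), matching the two non-trivial fixed points above.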
The splitting is given by: Amix = \"\u2212p p 0 1 \u22121 \u2212x\u22c6 y\u22c6 x\u22c6 \u2212c # , bmix = \"x\u22c6 y\u22c6 z\u22c6 # , Nmix(x, y, z) = \" 0 \u2212(x \u2212x\u22c6)(z \u2212z\u22c6) (x \u2212x\u22c6)(y \u2212y\u22c6) # . 10 \fSDE Parameter Estimation using Splitting Schemes A PREPRINT The OU process is mean-reverting towards bmix = (x\u22c6, y\u22c6, z\u22c6). The nonlinear solution is fmix,h(x, y, z) = \" x (y \u2212y\u22c6) cos(h(x \u2212x\u22c6)) \u2212(z \u2212z\u22c6) sin(h(x \u2212x\u22c6)) + y\u22c6 (y \u2212y\u22c6) sin(h(x \u2212x\u22c6)) + (z \u2212z\u22c6) cos(h(x \u2212x\u22c6)) + z\u22c6 # . The solution is a composition of a 3D rotation and translation of (y, z) around the fixed point. The inverse always exists, and thus, Assumption (A6) holds. Moreover, det Df \u22121 mix,h(x, y, z) = 1. The mixing strategy does not increase the complexity of the implementation significantly, and it is straightforward to incorporate into the existing framework. Thus, this splitting strategy is convenient when the model has several fixed points. An alternative splitting linearizes around the average of the observations. Let (\u00b5x, \u00b5x, \u00b5z) be the average of the data, where we put \u00b5x = \u00b5y since the difference of their averages is small, around 10\u22123. We denote these estimators by LTavg and Savg.The splitting is given by: Aavg = \" \u2212p p 0 r \u2212\u00b5z \u22121 \u2212\u00b5x \u00b5x \u00b5x \u2212c # , bavg = \"\u00b5x \u00b5x \u00b5z # , Navg(x, y, z) = \uf8ee \uf8f0 0 \u2212(x \u2212\u00b5x)(z \u2212\u00b5z) + (r \u22121 \u2212\u00b5z)\u00b5x (x \u2212\u00b5x)(y \u2212\u00b5x) + \u00b52 x \u2212c\u00b5z \uf8f9 \uf8fb. The nonlinear solution is: favg,h(x, y, z) = \uf8ee \uf8ef \uf8f0 \u00b5x \u00b5x + c\u00b5z\u2212\u00b52 x x\u2212\u00b5x \u00b5z + \u00b5x(r\u22121\u2212\u00b5z) x\u2212\u00b5x \uf8f9 \uf8fa \uf8fb + \uf8ee \uf8ef \uf8f0 x \u2212\u00b5x (y \u2212\u00b5x \u2212c\u00b5z\u2212\u00b52 x x\u2212\u00b5x ) cos(h(x \u2212\u00b5x)) \u2212(z \u2212\u00b5z \u2212\u00b5x(r\u22121\u2212\u00b5z) x\u2212\u00b5x ) sin(h(x \u2212\u00b5x)) (y \u2212\u00b5x \u2212c\u00b5z\u2212\u00b52 x x\u2212\u00b5x ) sin(h(x \u2212\u00b5x)) + (z \u2212\u00b5z \u2212\u00b5x(r\u22121\u2212\u00b5z) x\u2212\u00b5x ) cos(h(x \u2212\u00b5x)) \uf8f9 \uf8fa \uf8fb, where we define favg,h(\u00b5x, y, z) = (\u00b5x, y + h\u00b5x(r \u22121 \u2212\u00b5z), z + h\u00b52 x \u2212c\u00b5z)\u22a4. Again, det Df \u22121 avg,h(x, y, z) = 1. 3 Order of one-step predictions and Lp convergence In this Section, we investigate Lp convergence of the splitting schemes and the order of the one-step predictions. Theorem 2.1 in Tretyakov and Zhang (2013) extends Milstein\u2019s fundamental theorem on Lp convergence for global Lipschitz coefficients (Milstein, 1988) to Assumptions (A1) and (A2). This theorem provides the theoretical underpinning for our approach, drawing on the key concepts of Lp consistency and boundedness of moments. Definition 3.1 (Lp consistency of a numerical scheme) The one-step approximation e \u03a6h of the solution X is Lp consistent, p \u22651, of order q2 \u22121/2\u22650, if for k = 1, . . . 
, N, and some q1 \u2265q2 + 1/2: \u2225E[Xtk \u2212e \u03a6h(Xtk\u22121) | Xtk\u22121 = x]\u2225= R(hq1, x), (E[\u2225Xtk \u2212e \u03a6h(Xtk\u22121)\u22252p | Xtk\u22121 = x]) 1 2p = R(hq2, x), Definition 3.2 (Bounded moments of a numerical scheme) A numerical approximation e X of the solution X has bounded moments, if for all p \u22651,there exists constant C > 0, such that, for k = 1, . . . , N: E[\u2225e Xtk\u22252p] \u2264C(1 + \u2225x0\u22252p). The following theorem (Theorem 2.1 in Tretyakov and Zhang (2013)) gives sufficient conditions for Lp convergence of a numerical scheme in a one-sided Lipschitz framework. Theorem 3.3 (Lp convergence of a numerical scheme) Let Assumptions (A1) and (A2) hold, and let e Xtk be a numerical approximation of the solution Xtk of (1) at time tk. If (1) The one-step approximation e Xtk = e \u03a6h( e Xtk\u22121) is Lp consistent of order q2 \u22121/2; and (2) e X has bounded moments, then, the numerical method e X is Lp convergent, p \u22651, of order q2 \u22121/2, i.e., for k = 1, . . . , N, it holds: (E[\u2225Xtk \u2212e Xtk\u22252p]) 1 2p = R(hq2\u22121/2, x0). 11 \fSDE Parameter Estimation using Splitting Schemes A PREPRINT 3.1 Lie-Trotter splitting We first show that the one-step LT approximation is of order R(h2, x0) in mean. The following proposition is proved in the Supplementary Material (Pilipovic et al., 2023) for scheme (10), as well as for the reversed order of composition. We demonstrate that the order of one-step prediction can not be improved unless the drift F is linear. Proposition 3.4 (One-step prediction of LT splitting) Let Assumptions (A1) and (A2) hold, let X be the solution to SDE (1) and let \u03a6[LT] h be the LT approximation (10). Then, for k = 1, . . . , N, it holds: \u2225E[Xtk \u2212\u03a6[LT] h (Xtk\u22121) | Xtk\u22121 = x]\u2225= R(h2, Xtk\u22121). Lp convergence of the LT splitting scheme is established in Theorem 2 in Buckwar et al. (2022), which we repeat here for convenience. Theorem 3.5 (Lp convergence of the LT splitting) Let Assumptions (A1) and (A2) hold, let X[LT] be the LT approximation defined in (10), and let X be the solution of (1). Then, there exists C \u22651 such that for all p \u22652, and k = 1, . . . , N, it holds: (E[\u2225Xtk \u2212X[LT] tk \u2225p]) 1 p = R(h, x0). Now, we investigate the same properties for the S splitting. 3.2 Strang splitting The following proposition states that the S splitting (11) has higher order one-step predictions than the LT splitting (10). The proof can be found in Supplementary Material (Pilipovic et al., 2023). Proposition 3.6 Let Assumptions (A1) and (A2) hold, let X be the solution to (1), and let \u03a6[S] h be the S splitting approximation (11). Then, for k = 1, . . . , N, it holds: \u2225E[Xtk \u2212\u03a6[S] h (Xtk\u22121) | Xtk\u22121 = x]\u2225= R(h3, Xtk\u22121). (21) Remark Even though LT and S have the same order of Lp convergence, the crucial difference is in the one-step prediction. The approximated transition density between two consecutive data points depends on the one-step approximation. Thus, the objective function based on pseudo-likelihood from the S splitting is more precise than the one from the LT. To prove Lp convergence of the S splitting scheme for (1) with one-sided Lipschitz drift, we follow the same procedure as in Buckwar et al. (2022). The proof of the following theorem is in Section 7.1. 
Theorem 3.7 (Lp convergence of S splitting) Let Assumptions (A1), (A2) and (A6) hold, let X[S] be the S splitting defined in (11), and let X be the solution of (1). Then, there exists C \u22651 such that for all p \u22652 and k = 1, . . . , N, it holds: (E[\u2225Xtk \u2212X[S] tk \u2225p]) 1 p = R(h, x0). Before we move to parameter estimation, we prove a useful corollary. Corollary 3.8 Let all assumptions from Theorem 3.7 hold. Then, (E[\u2225Ztk \u2212\u03beh,k\u2225p])1/p = R(h, x0). Proof From the definition of Ztk in (13), it is enough to prove that: (E[\u2225f \u22121 h/2(Xtk) \u2212\u00b5h(fh/2(Xtk\u22121)) \u2212\u03beh,k\u2225p])1/p = R(h, x0). From (11) we have that \u03beh,k = f \u22121 h/2(X[S] tk ) \u2212\u00b5h(fh/2(X[S] tk\u22121)). Then, E[\u2225f \u22121 h/2(Xtk) \u2212\u00b5h(fh/2(Xtk\u22121)) \u2212\u03beh,k\u2225p]1/p \u2264C(E[\u2225f \u22121 h/2(Xtk) \u2212f \u22121 h/2(X[S] tk )\u2225p] + E[\u2225fh/2(Xtk\u22121) \u2212fh/2(X[S] tk\u22121)\u2225p])1/p \u2264C(E[\u2225Xtk \u2212X[S] tk \u2225p] + E[\u2225Xtk\u22121 \u2212X[S] tk\u22121\u2225p])1/p + R(h, x0). We used Proposition 2.2, together with the fact that X and X[S] have finite moments and fh/2 and f \u22121 h/2 grow polynomially. The result follows from Lp convergence of the S splitting scheme, Theorem 3.7. 12 \fSDE Parameter Estimation using Splitting Schemes A PREPRINT 4 Auxiliary properties This paper centers around proving the properties of the S estimator. There are two reasons for this. First, most numerical properties in the literature are proved only for LT splitting because proofs for S splitting are more involved. Here, we establish both the numerical properties of the S splitting as well as the properties of the estimator. Second, the S splitting introduces a new pseudo-likelihood that differs from the standard Gaussian pseudo-likelihoods. Consequently, standard tools, like those proposed by Kessler (1997), do not directly apply. The asymptotic properties of the LT estimator are the same as for the S estimator. However, the following auxiliary properties will be stated and proved only for the S estimator. They can be reformulated for the LT estimator following the same logic. Before presenting the central results for the estimator, we establish the groundwork with two essential lemmas that rely on the model assumptions. Lemma 4.1 (Lemma 6 in Kessler (1997)) deals with the p-th moments of the solution of the SDE increments and also provides a moment bound of a polynomial map of the solution. The proof of this lemma, presented in Section 7.2, differs from that in Kessler (1997) due to our relaxation of the global Lipschitz assumption of the drift F. Instead, we use a one-sided Lipschitz condition for the drift function F in conjunction with the generalized Gr\u00f6nwall\u2019s inequality (Lemma 2.3 in Tian and Fan (2020) stated in Supplementary Material (Pilipovic et al., 2023)) to establish the result. Lemma 4.2 (Lemma 8 in Kessler (1997), Lemma 2 in S\u00f8rensen and Uchida (2003)) constitutes a central ergodic property that is essential for establishing the asymptotic behavior of the estimator. The proof when the drift F is one-sided Lipschitz is identical to the one presented in Kessler (1997), particularly when combined with Lemma 4.1. Lemma 4.1 Let Assumptions (A1) and (A2) hold. Let X be the solution of (1). For tk \u2265t \u2265tk\u22121, where h = tk \u2212tk\u22121 < 1, the following two statements hold. 
(1) For p \u22651, there exists Cp > 0 that depends on p, such that: E[\u2225Xt \u2212Xtk\u22121\u2225p | Ftk\u22121] \u2264Cp(t \u2212tk\u22121)p/2(1 + \u2225Xtk\u22121\u2225)Cp. (2) If g : Rd \u00d7 \u0398 \u2192R is of polynomial growth in x uniformly in \u03b8, then there exist constants C and Ct\u2212tk\u22121 that depends on t \u2212ttk\u22121, such that: E[|g(Xt; \u03b8)| | Ftk\u22121] \u2264Ct\u2212tk\u22121(1 + \u2225Xtk\u22121\u2225)C. Lemma 4.2 Let Assumptions (A1), (A2) and (A3) hold, and let X be the solution to (1). Let g : Rd \u00d7 \u0398 \u2192R be a differentiable function with respect to x and \u03b8 with derivative of polynomial growth in x, uniformly in \u03b8. If h \u21920 and Nh \u2192\u221e, then, 1 N N X k=1 g (Xtk, \u03b8) P\u03b80 \u2212 \u2212 \u2212 \u2212 \u2212 \u2192 Nh\u2192\u221e h\u21920 Z g (x, \u03b8) d\u03bd0(x), uniformly in \u03b8. Lastly, we state the moment bounds needed for the estimator asymptotics. The proof is in Supplementary Material (Pilipovic et al., 2023). Proposition 4.3 (Moment Bounds) Let Assumptions (A1), (A2) and (A6) hold. Let X be the solution of (1), and Ztk as defined in (13). Let g(x; \u03b2) be a generic function with derivatives of polynomial growth, and \u03b2 \u2208\u0398\u03b2. Then, for k = 1, . . . , N, the following moment bounds hold: (i) E\u03b80[Ztk(\u03b20) | Xtk\u22121 = x] = R(h3, Xtk\u22121) (ii) E\u03b80[Ztk(\u03b20)g(Xtk; \u03b2)\u22a4| Xtk = x] = h 2 (\u03a3\u03a3\u22a4 0 D\u22a4g(x; \u03b2) + Dg(x; \u03b2)\u03a3\u03a3\u22a4 0 ) + R(h2, Xtk\u22121); (iii) E\u03b80[Ztk(\u03b20)Ztk(\u03b20)\u22a4| Xtk\u22121 = x] = h\u03a3\u03a3\u22a4 0 + R(h2, Xtk\u22121). 5 Asymptotics The estimators \u02c6 \u03b8N are defined in (15). However, we do not need the full objective functions (12) and (14) to prove consistency and asymptotic normality. It is enough to approximate the covariance matrix \u2126h up to the second order by 13 \fSDE Parameter Estimation using Splitting Schemes A PREPRINT h\u03a3\u03a3\u22a4+ h2 2 (A\u03a3\u03a3\u22a4+ \u03a3\u03a3\u22a4A\u22a4) (see equation (6)). Indeed, after applying Taylor series on the inverse of \u2126h, we get: \u2126h(\u03b8)\u22121= 1 h(\u03a3\u03a3\u22a4)\u22121(I + h 2 (A(\u03b2) + \u03a3\u03a3\u22a4A(\u03b2)\u22a4(\u03a3\u03a3\u22a4)\u22121)\u22121) + R(h, x0) = 1 h(\u03a3\u03a3\u22a4)\u22121(I \u2212h 2 (A(\u03b2) + \u03a3\u03a3\u22a4A(\u03b2)\u22a4(\u03a3\u03a3\u22a4)\u22121) + R(h, x0) = 1 h(\u03a3\u03a3\u22a4)\u22121 \u22121 2((\u03a3\u03a3\u22a4)\u22121A(\u03b2) + A(\u03b2)\u22a4(\u03a3\u03a3\u22a4)\u22121) + R(h, x0). Similarly, we approximate the log-determinant as: log det \u2126h(\u03b8)= log det(h\u03a3\u03a3\u22a4+ h2 2 (A(\u03b2)\u03a3\u03a3\u22a4+ \u03a3\u03a3\u22a4A(\u03b2)\u22a4)) + R(h2, x0) \u03b8 = log det \u03a3\u03a3\u22a4+ log det(I + h 2 (A(\u03b2) + \u03a3\u03a3\u22a4A(\u03b2)\u22a4(\u03a3\u03a3\u22a4)\u22121)) + R(h2, x0) = log det \u03a3\u03a3\u22a4+ h 2 Tr(A(\u03b2) + \u03a3\u03a3\u22a4A(\u03b2)\u22a4(\u03a3\u03a3\u22a4)\u22121) + R(h2, x0) = log det \u03a3\u03a3\u22a4+ h Tr A(\u03b2) + R(h2, x0). 
Using the same approximation we obtain: 2 log | det Dfh/2 (x; \u03b2) |= 2 log | det(I + h 2 DN(x; \u03b2))| = 2 log |1 + h 2 Tr DN(x; \u03b2)| + R(h, x) = h Tr DN(x; \u03b2) + R(h2, x0) Subsequently, retaining terms up to order R(Nh2, x0) from objective functions (12) and (14), we establish the approximate objective functions: L[LT] N (\u03b8) := N log det \u03a3\u03a3\u22a4+Nh Tr A(\u03b2) + 1 h N X k=1 (Xtk \u2212\u00b5h(fh(Xtk\u22121; \u03b2); \u03b2))\u22a4(\u03a3\u03a3\u22a4)\u22121(Xtk \u2212\u00b5h(fh(Xtk\u22121; \u03b2); \u03b2)) (22) \u2212 N X k=1 (Xtk \u2212\u00b5h(fh(Xtk\u22121; \u03b2); \u03b2))\u22a4(\u03a3\u03a3\u22a4)\u22121A(\u03b2)(Xtk \u2212\u00b5h(fh(Xtk\u22121; \u03b2); \u03b2)) L[S] N (\u03b8) := N log det \u03a3\u03a3\u22a4+Nh Tr A(\u03b2) + 1 h N X k=1 Ztk(\u03b2)\u22a4(\u03a3\u03a3\u22a4)\u22121Ztk(\u03b2) \u2212 N X k=1 Ztk(\u03b2)\u22a4(\u03a3\u03a3\u22a4)\u22121A(\u03b2)Ztk(\u03b2) + h N X k=1 Tr DN(Xtk; \u03b2). (23) Unlike other likelihood-based methods, such as Kessler (1997), A\u00eft-Sahalia (2002, 2008), Choi (2013, 2015), Yang et al. (2019), our estimators do not involve expansions. The objective functions are formulated in simple terms without hyperparameters, such as the order of the expansions. Hence, our approach is robust and user-friendly, as we directly employ (12) and (14) without requiring approximations. However, we leverage the approximations (22) and (23) for the mathematical analysis and the proofs. 5.1 Consistency Now, we state the consistency of \u02c6 \u03b2N and d \u03a3\u03a3 \u22a4 N. The proof of Theorem 5.1 is in Section 7.3. Theorem 5.1 Let Assumptions (A1)-(A6) hold, X be the solution of (1), and b \u03b8N = ( b \u03b2N, d \u03a3\u03a3 \u22a4 N) be the estimator that minimizes one of objective functions (22) or (23). If h \u21920 and Nh \u2192\u221e, then, \u02c6 \u03b2N P\u03b80 \u2212 \u2212 \u2192\u03b20, d \u03a3\u03a3 \u22a4 N P\u03b80 \u2212 \u2212 \u2192\u03a3\u03a3\u22a4 0 . 14 \fSDE Parameter Estimation using Splitting Schemes A PREPRINT 5.2 Asymptotic normality In this Section, we state the asymptotic normality of the estimator. First, we need some preliminaries. Let \u03c1 > 0 and B\u03c1 (\u03b80) = {\u03b8 \u2208\u0398 | \u2225\u03b8 \u2212\u03b80\u2225\u2264\u03c1} be a ball around \u03b80. Since \u03b80 \u2208\u0398, for sufficiently small \u03c1 > 0, B\u03c1(\u03b80) \u2208\u0398. Let LN be one of the two objective functions (22) or (23). For \u02c6 \u03b8N \u2208B\u03c1 (\u03b80), the mean value theorem yields: \u0012Z 1 0 HLN (\u03b80 + t(\u02c6 \u03b8N \u2212\u03b80)) dt \u0013 (\u02c6 \u03b8N \u2212\u03b80) = \u2212\u2207LN (\u03b80) . (24) With \u03c2 := vech(\u03a3\u03a3\u22a4) = ([\u03a3\u03a3\u22a4]11, [\u03a3\u03a3\u22a4]12, [\u03a3\u03a3\u22a4]22, ..., [\u03a3\u03a3\u22a4]1d, ..., [\u03a3\u03a3\u22a4]dd), we half-vectorize \u03a3\u03a3\u22a4to avoid working with tensors when computing derivatives with respect to \u03a3\u03a3\u22a4. Since \u03a3\u03a3\u22a4is a symmetric d \u00d7 d matrix, \u03c2 is of dimension s = d(d + 1)/2. For a diagonal matrix, instead of a half-vectorization, we use \u03c2 := diag(\u03a3\u03a3\u22a4). 
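As a small illustration (assumed code, not part of the paper), the half-vectorization used above can be written in R by stacking the upper-triangular entries column by column, which reproduces the ordering ([ΣΣ⊤]11, [ΣΣ⊤]12, [ΣΣ⊤]22, ...):

```r
# Half-vectorization of a symmetric matrix; the result has length d * (d + 1) / 2.
vech <- function(M) M[upper.tri(M, diag = TRUE)]

SS <- matrix(c(1.0, 0.2, 0.2, 2.0), 2, 2)   # a symmetric 2 x 2 example
vech(SS)   # 1.0 0.2 2.0
diag(SS)   # 1.0 2.0, used instead when Sigma Sigma^T is diagonal
```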
Define: CN(\u03b8) := \" 1 Nh\u2202\u03b2\u03b2LN(\u03b8) 1 N \u221a h\u2202\u03b2\u03c2LN(\u03b8) 1 N \u221a h\u2202\u03b2\u03c2LN(\u03b8) 1 N \u2202\u03c2\u03c2LN(\u03b8) # , (25) sN := \"\u221a Nh( \u02c6 \u03b2N \u2212\u03b20) \u221a N(\u02c6 \u03c2N \u2212\u03c20) # , \u03bbN := \uf8ee \uf8ef \uf8f0 \u2212 1 \u221a Nh \u2202\u03b2LN(\u03b80) \u22121 \u221a N \u2202\u03c2LN(\u03b80) \uf8f9 \uf8fa \uf8fb, (26) and DN := R 1 0 CN(\u03b80 + t(\u02c6 \u03b8N \u2212\u03b80)) dt. Then, (24) is equivalent to DNsN = \u03bbN. Let: C(\u03b80) := \u0014 C\u03b2(\u03b80) 0r\u00d7s 0s\u00d7r C\u03c2(\u03b80) \u0015 , (27) where: [C\u03b2(\u03b80)]i1,i2 := Z (\u2202\u03b2i1 F0(x))\u22a4(\u03a3\u03a3\u22a4 0 )\u22121(\u2202\u03b2i2 F0(x)) d\u03bd0(x), 1 \u2264i1, i2 \u2264r, [C\u03c2(\u03b80)]j1,j2 := 1 2 Tr((\u2202\u03c2j1\u03a3\u03a3\u22a4 0 )(\u03a3\u03a3\u22a4 0 )\u22121(\u2202\u03c2j2\u03a3\u03a3\u22a4 0 )(\u03a3\u03a3\u22a4 0 )\u22121), 1 \u2264j1, j2 \u2264s. Now, we state the theorem for asymptotic normality, whose proof is in Section 7.4. Theorem 5.2 Let Assumptions (A1)-(A6) hold, X be the solution of (1), and b \u03b8N = ( b \u03b2N, b \u03c2N) be the estimator that minimizes one of the objective functions (22) or (23). If \u03b80 \u2208\u0398, C(\u03b80) is positive definite, h \u21920, Nh \u2192\u221e, and Nh2 \u21920, then, \u0014\u221a Nh( \u02c6 \u03b2N \u2212\u03b20) \u221a N(\u02c6 \u03c2N \u2212\u03c20) \u0015 d \u2212 \u2192N(0, C\u22121(\u03b80)), (28) under P\u03b80. The estimator of the diffusion parameter converges faster than the estimator of the drift parameter. Gobet (2002) showed that for a discretely sampled SDE model, the optimal convergence rates for the drift and diffusion parameters are 1/ \u221a Nh and 1/ \u221a N, respectively. Thus, our estimators reach optimal rates. Moreover, the estimators are asymptotically efficient since C is the Fisher information matrix for the corresponding continuous-time diffusion (see Kessler (1997), Gobet (2002)). Finally, since the asymptotic correlation is zero between the drift and diffusion estimators , they are asymptotically independent. 6 Simulation study This Section presents the simulation study of the Lorenz system, illustrating the theory and comparing the proposed estimators with other likelihood-based estimators from the literature. We briefly recall the estimators, describe the simulation process and the optimization in the programming language R (R Core Team, 2022), and present and analyze the results. 15 \fSDE Parameter Estimation using Splitting Schemes A PREPRINT 6.1 Estimators used in the study The EM transition distribution (16) for the Lorenz system (20) is: \"Xtk Ytk Ztk # | \"Xtk\u22121 Ytk\u22121 Ztk\u22121 # = \"x y z # \u223cN \uf8eb \uf8ed \" x + hp(y \u2212x) y + h(rx \u2212y \u2212xz) z + h(xy \u2212cz) # , \uf8ee \uf8f0 h\u03c32 1 0 0 0 h\u03c32 2 0 0 0 h\u03c32 3 \uf8f9 \uf8fb \uf8f6 \uf8f8. We do not write the closed-form distributions for the K (17) and LL (18) estimators, but we use the corresponding formulas to implement the likelihoods. We implement the two splitting strategies proposed in Section 2.5, leading to four estimators: LTmix, LTavg, Smix, and Savg. To further speed up computation time, we use the same trick for calculating \u2126h in (6) as the one suggested by Gu et al. (2020) for calculating \u2126[LL] h . Namely, for the splitting schemes, we adapt P3 from (19) accordingly: P3 = \u0014 A \u03a3\u03a3\u22a4 0d\u00d7d \u2212A\u22a4 \u0015 . 
Therefore, computing exp(hP3) circumvents the need for evaluating the integral in \u2126h (6), following the approach described in Section 2.4.4. 6.2 Trajectory simulation To simulate sample paths, we use the EM discretization with a step size of hsim = 0.0001, which is small enough for the EM discretization to perform well. Then, we sub-sample the trajectory to get a larger time step h, decreasing discretization errors. We perform M = 1000 Monte Carlo repetitions. 6.3 Optimization in R To optimize the objective functions we use the R package torch (Falbel and Luraschi, 2022), which uses AD instead of the traditional finite differentiation used in optim. The two main advantages of AD are precision and speed. Finite differentiation is subject to floating point precision errors and is slow in high dimensions (Baydin et al., 2017). Conversely, AD is exact and fast and thus used in numerous applications, such as MLE or training neural networks. We tried all available optimizers in the torch package and chose the resilient backpropagation algorithm optim_rprop based on Riedmiller and Braun (1992). It performed faster than the rest and was more precise in finding the global minimum. We used the default hyperparameters and set the optimization iterations to 200. We chose the precision of 10\u22125 between the updated and the parameters from the previous iteration as the convergence criteria. For starting values, we used (0.1, 0.1, 0.1, 0.1, 0.1, 0.1). All estimators converged after approximately 80 iterations. 6.4 Comparing criteria We compare seven estimators based on their precision and speed. For the precision, we compute the absolute relative error (ARE) for each component \u02c6 \u03b8(i) N of the estimator \u02c6 \u03b8N: ARE(\u02c6 \u03b8(i) N ) = 1 M M X r=1 |\u02c6 \u03b8(i) N,r \u2212\u03b8(i) 0,r| \u03b8(i) 0,r . For S and LL, we compare the distributions of \u02c6 \u03b8N \u2212\u03b80 to investigate the precision more closely. The running times are calculated using the tictoc package in R, measured from the start of the optimization step until the convergence criterion is met. To avoid the influence of running time outliers, we compute the median over M repetitions. 6.5 Results In Figure 2, AREs are shown as a function of the discretization step h. For a clearer comparison, we use the log scale on the y axis. While most estimators work well for a step size no greater than 0.01, only LL, Smix, and Savg perform well for h = 0.05. The LTavg is not competitive even for h = 0.005. The performance of LTmix varies, sometimes approaching the performance of K, while other times performing similarly to EM. Thus, LTmix is not a good choice for this specific model. The bias of EM starts to show for h = 0.01 escalating for h = 0.05. The largest bias appears in the 16 \fSDE Parameter Estimation using Splitting Schemes A PREPRINT Figure 2: Comparing the absolute relative error (ARE) as a function of increasing h for seven different estimators in the stochastic Lorenz system. The estimators are obtained for sample sizes of N = 10000. The y-axis is on a log scale. diffusion parameters, which is due to the poor approximation of \u2126EM h . K is less biased than EM except for p and r when h = 0.05. Note how some parameters are estimated better for larger h when N is fixed. This is due to a longer observation interval T = Nh, reflecting the \u221a Nh rate of convergence. Since Smix, Savg, and LL perform the best with large time steps, we zoom in on their distributions in Figure 3. 
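Returning briefly to the simulation design of Section 6.2, a minimal R sketch of the data-generating step for the stochastic Lorenz system (Euler-Maruyama with fine step hsim, then sub-sampling to the coarser observation step h) is given below; the parameter ordering theta = (p, r, c, sigma1, sigma2, sigma3) is only a convention for this sketch.

simulate_lorenz <- function(theta, x0, T_end, hsim) {
  p <- theta[1]; r <- theta[2]; cc <- theta[3]; sig <- theta[4:6]
  n <- round(T_end / hsim)
  X <- matrix(NA_real_, n + 1, 3)
  X[1, ] <- x0
  for (k in 1:n) {
    x <- X[k, ]
    drift <- c(p * (x[2] - x[1]),
               r * x[1] - x[2] - x[1] * x[3],
               x[1] * x[2] - cc * x[3])
    X[k + 1, ] <- x + hsim * drift + sqrt(hsim) * sig * rnorm(3)
  }
  X
}
# Sub-sample to the observation step h (assumed to be a multiple of hsim):
subsample <- function(X, h, hsim) X[seq(1, nrow(X), by = round(h / hsim)), ]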
To make the figure clearer, we removed some outliers for \u03c32 1 and \u03c32 2. This did not change the shape of the distributions, it only truncated the tails. The three estimators perform similarly, especially for small h. For h = 0.05, the drift parameters are underestimated by approximately 5 \u221210%, while the diffusion parameters are overestimated by up to 20%. Both S estimators exhibited superior performance compared to LL, except for the parameters p and \u03c32 1. While the LL and S estimators perform similarly in terms of precision, Figure 4 shows the superiority of the S estimators over LL in computational costs. The LL becomes increasingly computationally expensive for increasing N because it calculates N covariance matrices for each parameter value. The second slowest estimator is Smix, followed by LTmix, Savg, K, LTavg, and, finally, EM is the fastest. The speed of EM is almost constant in N. Additionally, it seems that the running times do not depend on h. Thus, we recommend using the S estimators, especially for large N. Figures 5 and 6 show that the theoretical results hold for the Smix and LTmix estimators. We compare how the distributions of \u02c6 \u03b8N \u2212\u03b80 change with sample size N and step size h. With increasing N, the variance decreases, whereas the mean does not change. For that, we need smaller h. To obtain negligible bias for LTmix, we need a step size smaller than h = 0.005 . However, Smix is practically unbiased up to h = 0.01. This shows that LT estimators might not be a good choice in practice, while S estimators are. The solid black lines in Figures 5 and 6 represent the theoretical asymptotic distributions for each parameter computed from (28). For the Lorenz system (20), the precision matrix (27) is given by: C(\u03b80) = diag Z (y \u2212x)2 \u03c32 1,0 d\u03bd0(x), Z x2 \u03c32 2,0 d\u03bd0(x), Z z2 \u03c32 3,0 d\u03bd0(x), 1 2\u03c34 1,0 , 1 2\u03c34 2,0 , 1 2\u03c34 3,0 ! . The integrals are approximated by taking the mean over all data points and all Monte Carlo repetitions. Some outliers of \u02c6 \u03c32 2 are removed from Figures 5 and 6 by truncating the tails. 17 \fSDE Parameter Estimation using Splitting Schemes A PREPRINT Figure 3: Comparison of the normalized distributions of (\u02c6 \u03b8N \u2212\u03b80) \u2298\u03b80 (where \u2298is the element-wise division) in the Lorenz system for the Smix, Savg, and LL estimators for N = 10000. Each column represents one parameter, and each row represents one value of the discretization step h. A black dot with a vertical bar in each violin plot represents the mean and the standard deviation. Figure 4: Running times as a function of N for different estimators of the Lorenz system. Each column shows one value of h. On the x-axis is the sample size N, and on the y-axis is the running time in seconds. 7 Proofs 7.1 Proof of Lp convergence of the splitting scheme The proof of Proposition 3.6 is in Supplementary Material (Pilipovic et al., 2023). Here, we present the proof of Lp convergence stated in Theorem 3.7. 18 \fSDE Parameter Estimation using Splitting Schemes A PREPRINT Figure 5: Comparing distributions of \u02c6 \u03b8N \u2212\u03b80 for the Smix estimator with theoretical asymptotic distributions (28) for each parameter (columns), for h = 0.01 and N \u2208{1000, 5000, 10000} (colors). The black lines correspond to the theoretical asymptotic distributions computed from data and true parameters for N = 10000 and h = 0.01. 
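For completeness, the theoretical asymptotic distributions shown as black lines in Figures 5 and 6 can be obtained along the following lines (a minimal R sketch): the integrals in the diagonal precision matrix above are replaced by empirical means over the simulated states X (an (N+1) x 3 matrix), and the resulting variances are scaled by Nh for the drift parameters and by N for the diffusion parameters, as prescribed by (28).

precision_diag <- function(X, sig0) {
  x <- X[, 1]; y <- X[, 2]; z <- X[, 3]
  c(mean((y - x)^2) / sig0[1]^2,
    mean(x^2) / sig0[2]^2,
    mean(z^2) / sig0[3]^2,
    1 / (2 * sig0[1]^4), 1 / (2 * sig0[2]^4), 1 / (2 * sig0[3]^4))
}
asympt_sd <- function(Cdiag, N, h) sqrt(1 / (Cdiag * c(rep(N * h, 3), rep(N, 3))))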
Figure 6: Comparing distributions of \u02c6 \u03b8N \u2212\u03b80 for the LTmix estimator with theoretical asymptotic distributions (28) for each parameter (columns), for h \u2208{0.005, 0.01} (rows) and N \u2208{1000, 5000, 10000} (colors). The black lines correspond to the theoretical asymptotic distributions computed from data and true parameters for N = 10000 and corresponding h. Proof of Theorem 3.7 We use Theorem 3.3 to prove Lp convergence. It is sufficient to prove the two conditions (1) and (2). To prove condition (1), we need to prove the following property: (E[\u2225Xtk \u2212\u03a6[S] h (Xtk\u22121)\u2225p | Xtk\u22121 = x]) 1 p = R(hq2, x), where q2 = 3/2. We start with \u2225Xtk \u2212\u03a6[S] h (Xtk\u22121)\u2225p = \u2225Xtk \u2212Xtk\u22121 \u2212hF(Xtk\u22121) \u2212\u03beh,k + R(h3/2, Xtk\u22121)\u2225p. For more details on the expansion of \u03a6[S] h , see Supplementary Material (Pilipovic et al., 2023). We approximate \u03beh,k = R tk tk\u22121 eA(tk\u2212s)\u03a3 dWs by: \u03beh,k = Z tk tk\u22121 (I + (tk\u2212s)A)\u03a3 dWs + R(h2, Xtk\u22121) = \u03a3(Wtk \u2212Wtk\u22121) + A\u03a3 Z tk tk\u22121 (tk \u2212s) d Ws + R(h2, Xtk\u22121). 19 \fSDE Parameter Estimation using Splitting Schemes A PREPRINT Using the fact that R tk tk\u22121 (tk \u2212s) d Ws \u223cN(0, h3 3 I), we deduce that \u03beh,k = \u03a3(Wtk \u2212Wtk\u22121) + R(h3/2, Xtk\u22121). Then, H\u00f6lder\u2019s inequality yields: \u2225Xtk \u2212Xtk\u22121 \u2212hF(Xtk\u22121) \u2212\u03a3(Wtk \u2212Wtk\u22121)\u2225p \u2264hp\u22121 Z tk tk\u22121 \u2225(F(Xs) \u2212F(Xtk\u22121))\u2225p ds. Assumption (A2), the integral norm inequality, Cauchy-Schwartz, and H\u00f6lder\u2019s inequalities, together with the mean value theorem yield: E[\u2225Xtk \u2212\u03a6[S] h (Xtk\u22121)\u2225p | Xtk\u22121 = x] \u2264C(E[hp\u22121 Z tk tk\u22121 \u2225F(Xs) \u2212F(Xtk\u22121)\u2225p ds | Xtk\u22121 = x]) \u2264C(hp\u22121 Z tk tk\u22121 E[\u2225Xs \u2212Xtk\u22121\u2225p\u2225 Z 1 0 DxF(Xs \u2212u(Xs \u2212Xtk\u22121)) du\u2225p | Xtk\u22121 = x] ds) \u2264C hp\u22121 Z tk tk\u22121 (E[\u2225Xs \u2212Xtk\u22121\u22252p | Xtk\u22121 = x]) 1 2 (E[\u2225 Z 1 0 DxF(Xs \u2212u(Xs \u2212Xtk\u22121)) du\u22252p | Xtk\u22121 = x]) 1 2 ds \u0013 \u2264C(hp\u22121 Z tk tk\u22121 h p 2 ds) = R(h3p/2, x). In the last line, we used Lemma 4.1. This proves condition (1) of Theorem 3.3. Now, we prove condition (2). We use (5) and (11) to write X[S] tk = fh/2(eAh(fh/2(X[S] tk\u22121) \u2212X[1] tk\u22121) + X[1] tk ). Define Rtk := eAh(fh/2(X[S] tk ) \u2212X[1] tk ), and use the associativity (9) to get Rtk = eAh(fh(Rtk\u22121 + X[1] tk ) \u2212X[1] tk ). The proof of the boundness of the moments of Rtk is the same as in Lemma 2 in Buckwar et al. (2022). Finally, we have X[S] tk = f \u22121 h/2(e\u2212AhRtk + X[1] tk ). Since f \u22121 h/2 grows polynomially and X[1] tk has finite moments, X[S] tk must have finite moments too. This concludes the proof. 7.2 Proof of Lemma 4.1 Proof of Lemma 4.1 We first prove (1). In the following, C1 and C2 denote constants. 
We use the triangular inequality and H\u00f6lder\u2019s inequality to obtain: \u2225Xt \u2212Xtk\u22121\u2225p \u22642p\u22121(\u2225 Z t tk\u22121 F(Xs; \u03b8) ds\u2225p + \u2225\u03a3(Wt \u2212Wtk\u22121)\u2225p) \u22642p\u22121(( Z t tk\u22121 C1(1 + \u2225Xs\u2225)C1 ds)p + \u2225\u03a3(Wt \u2212Wtk\u22121)\u2225p) \u22642p\u22121Cp 1( Z t tk\u22121 (1 + \u2225Xs \u2212Xtk\u22121\u2225+ \u2225Xtk\u22121\u2225)C1 ds)p + 2p\u22121\u2225\u03a3(Wt \u2212Wtk\u22121)\u2225p \u22642C1+2p\u22123Cp 1(t \u2212tk\u22121)p\u22121( Z t tk\u22121 \u2225Xs \u2212Xtk\u22121\u2225pC1 ds + (t \u2212tk\u22121)p(1 + \u2225Xtk\u22121\u2225)pC1) + 2p\u22121\u2225\u03a3(Wt \u2212Wtk\u22121)\u2225p. In the second inequality, we used the polynomial growth (A2) of F. Furthermore, for some constant C2 that depends on p, we have E[\u2225\u03a3(Wt \u2212Wtk\u22121)\u2225p | Ftk\u22121] = (t \u2212ttk\u22121)p/2C2(p). Then, for h < 1, there exists a constant Cp that depends on p, such that: Cp(t \u2212tk\u22121)2p\u22121(1 + \u2225Xtk\u22121\u2225)Cp + Cp(t \u2212ttk\u22121)p/2 \u2264Cp(t \u2212tk\u22121)p/2(1 + \u2225Xtk\u22121\u2225)Cp. The last inequality holds because the term of order p/2 is dominating when t \u2212tk\u22121 < 1. Denote m(t) = E[\u2225Xt \u2212 Xtk\u22121\u2225p | Ftk\u22121]. Then, we have: m(t) \u2264Cp(t \u2212tk\u22121)p/2(1 + \u2225Xtk\u22121\u2225)Cp + Cp Z t tk\u22121 mC1(s) ds. (29) 20 \fSDE Parameter Estimation using Splitting Schemes A PREPRINT Now, we apply the generalized Gr\u00f6nwall\u2019s inequality (Lemma 2.3 in Tian and Fan (2020), stated in Supplementary Material (Pilipovic et al., 2023)) on (29). Since we consider a super-linear growth, we can assume that there exist C1 > 1 and Cp > 0, such that: m(t) \u2264Cp(t \u2212tk\u22121)p/2(1 + \u2225Xtk\u22121\u2225)Cp + (\u03ba1\u2212C1(t) \u2212(C1 \u22121)2C1\u22121Cp(t \u2212tk\u22121)) 1 1\u2212C1 \u2264Cp(t \u2212tk\u22121)p/2(1 + \u2225Xtk\u22121\u2225)Cp + C\u03ba(t), (30) where \u03ba(t) = Cp(t \u2212tk\u22121)C1p/2+1(1 + \u2225Xtk\u22121\u2225)Cp. The bound C in inequality (30) makes sense, because the term: (1 \u2212(C1 \u22121)2C1\u22121Cp(t \u2212tk\u22121)\u03ba 1 1\u2212C1 (t)) 1 1\u2212C1 is positive by Lemma 2.3 from Tian and Fan (2020). Additionally, the same term reaches its maximum value of 1, for t = tk\u22121. The constant C in (30) includes some terms that depend on t \u2212tk\u22121. However, these terms will not change the dominating term of \u03ba(t) since h < 1. Finally, the terms in \u03ba(t) are of order p/2, thus for large enough constant Cp, it holds m(t) \u2264Cp(t \u2212tk\u22121)p/2(1 + \u2225Xtk\u22121\u2225)Cp. To prove (2), we use that g is of polynomial growth: E[|g(Xt; \u03b8)| | Ftk\u22121] \u2264C1E[(1 + \u2225Xtk\u22121\u2225+ \u2225Xt \u2212Xtk\u22121\u2225)C1 | Ftk\u22121] \u2264C2(1 + \u2225Xtk\u22121\u2225C1 + E[\u2225Xt \u2212Xtk\u22121\u2225C1 | Ftk\u22121]). Now, we apply the first part of the lemma, to get: E[|g(Xt; \u03b8)| | Ftk\u22121] \u2264C2(1 + \u2225Xtk\u22121\u2225C1 + C\u2032 t\u2212tk\u22121(1 + \u2225Xtk\u22121\u2225)C3) \u2264Ct\u2212tk\u22121(1 + \u2225Xtk\u22121\u2225)C. That concludes the proof. 7.3 Proof of consistency of the estimator The proof of consistency consists in studying the convergence of the objective function that defines the estimators. The objective function LN(\u03b2, \u03c2) (23) can be decomposed into sums of martingale triangular arrays. 
We thus first state a lemma that proves the convergence of each triangular array involved in the objective function. Then, we will focus on the proof of consistency. The proof of the Lemma is in Supplementary Material (Pilipovic et al., 2023). Lemma 7.1 Let Assumptions (A1)-(A6) hold, and X be the solution of (1). Let g, g1, g2 : Rd \u00d7 \u0398 \u00d7 \u0398 \u2192R be differentiable functions with respect to x and \u03b8, with derivatives of polynomial growth in x, uniformly in \u03b8. If h \u21920 and Nh \u2192\u221e, then: 1. 1 Nh N P k=1 Ztk(\u03b20)\u22a4(\u03a3\u03a3\u22a4)\u22121Ztk(\u03b20) P\u03b80 \u2212 \u2212 \u2212 \u2212 \u2212 \u2192 Nh\u2192\u221e h\u21920 Tr((\u03a3\u03a3\u22a4)\u22121\u03a3\u03a3\u22a4 0 ); 2. h N N P k=1 g(Xtk\u22121; \u03b20, \u03b2)\u22a4(\u03a3\u03a3\u22a4)\u22121g(Xtk\u22121; \u03b20, \u03b2) P\u03b80 \u2212 \u2212 \u2212 \u2212 \u2212 \u2192 Nh\u2192\u221e h\u21920 0; 3. 1 N N P k=1 Ztk(\u03b20)\u22a4(\u03a3\u03a3\u22a4)\u22121g(Xtk\u22121; \u03b20, \u03b2) P\u03b80 \u2212 \u2212 \u2212 \u2212 \u2212 \u2192 Nh\u2192\u221e h\u21920 0; 4. 1 Nh N P k=1 Ztk(\u03b20)\u22a4(\u03a3\u03a3\u22a4)\u22121g(Xtk\u22121; \u03b20, \u03b2) P\u03b80 \u2212 \u2212 \u2212 \u2212 \u2212 \u2192 Nh\u2192\u221e h\u21920 0; 5. 1 N N P k=1 Ztk(\u03b20)\u22a4(\u03a3\u03a3\u22a4)\u22121g(Xtk; \u03b20, \u03b2) P\u03b80 \u2212 \u2212 \u2212 \u2212 \u2212 \u2192 Nh\u2192\u221e h\u21920 0; 6. 1 Nh N P k=1 Ztk(\u03b20)\u22a4(\u03a3\u03a3\u22a4)\u22121g(Xtk; \u03b20, \u03b2) P\u03b80 \u2212 \u2212 \u2212 \u2212 \u2212 \u2192 Nh\u2192\u221e h\u21920 R Tr(Dg(x; \u03b20, \u03b2)\u03a3\u03a3\u22a4 0 (\u03a3\u03a3\u22a4)\u22121) d\u03bd0(x); 7. h N N P k=1 g1(Xtk\u22121; \u03b20, \u03b2)\u22a4(\u03a3\u03a3\u22a4)\u22121g2(Xtk; \u03b20, \u03b2) P\u03b80 \u2212 \u2212 \u2212 \u2212 \u2212 \u2192 Nh\u2192\u221e h\u21920 0, 21 \fSDE Parameter Estimation using Splitting Schemes A PREPRINT uniformly in \u03b8. Proof of Theorem 5.1 To establish consistency, we follow the proof of Theorem 1 in Kessler (1997) and study the limit of L[S] N (\u03b2, \u03c2) from (23), rescaled by the correct rate of convergence. More precisely, the consistency of the diffusion parameter is proved by studying the limit of 1 N L[S] N (\u03b2, \u03c2), while the consistency of the drift parameter is proved by studying the limit of 1 Nh(L[S] N (\u03b2, \u03c2) \u2212L[S] N (\u03b20, \u03c2)). We start with the consistency of the diffusion parameter \u03c2. We need to prove that: 1 N L[S] N (\u03b2, \u03c2) \u2192log(det(\u03a3\u03a3\u22a4)) + Tr((\u03a3\u03a3\u22a4)\u22121\u03a3\u03a3\u22a4 0 ) =: G1(\u03c2, \u03c20), (31) in P\u03b80, for Nh \u2192\u221e, h \u21920, uniformly in \u03b8. To study the limit, we first decompose 1 N L[S] N (\u03b2, \u03c2) as follows: 1 N L[S] N (\u03b2, \u03c2) = log det \u03a3\u03a3\u22a4+ T1 + T2 + T3 + 2(T4 + T5 + T6) + R(h, x0). (32) The terms T1, . . . 
, T6 are derived from the quadratic form in (23) by adding and subtracting the corresponding terms with \u03b20, followed by rearrangements, resulting in the following expressions: T1 := 1 Nh N X k=1 Ztk(\u03b20)\u22a4(\u03a3\u03a3\u22a4)\u22121Ztk(\u03b20), T2 := 1 Nh N X k=1 (f \u22121 h/2,k(\u03b2) \u2212f \u22121 h/2,k(\u03b20))\u22a4(\u03a3\u03a3\u22a4)\u22121(f \u22121 h/2,k(\u03b2) \u2212f \u22121 h/2,k(\u03b20)), T3 := 1 Nh N X k=1 (\u00b5h,k\u22121(\u03b20) \u2212\u00b5h,k\u22121(\u03b2))\u22a4(\u03a3\u03a3\u22a4)\u22121(\u00b5h,k\u22121(\u03b20) \u2212\u00b5h,k\u22121(\u03b2)), T4 := 1 Nh N X k=1 Ztk(\u03b20)\u22a4(\u03a3\u03a3\u22a4)\u22121(\u00b5h,k\u22121(\u03b20) \u2212\u00b5h,k\u22121(\u03b2)), T5 := 1 Nh N X k=1 (f \u22121 h/2,k(\u03b2) \u2212f \u22121 h/2,k(\u03b20))\u22a4(\u03a3\u03a3\u22a4)\u22121(\u00b5h,k\u22121(\u03b20) \u2212\u00b5h,k\u22121(\u03b2)), T6 := 1 Nh N X k=1 (f \u22121 h/2,k(\u03b2) \u2212f \u22121 h/2,k(\u03b20))\u22a4(\u03a3\u03a3\u22a4)\u22121Ztk(\u03b20). Previously, we defined f \u22121 h/2,k(\u03b2) := f \u22121 h/2(Xtk; \u03b2) and \u00b5h,k\u22121(\u03b2) := \u00b5h(fh/2(Xtk\u22121; \u03b2); \u03b2). These terms will also play a significant role in proving the asymptotic normality. The first term of (32) is a constant. Properties 1, 2, 3, 5, and 7 from Lemma 7.1 give the following limits T1 \u2192 Tr((\u03a3\u03a3\u22a4)\u22121\u03a3\u03a3\u22a4 0 ) and for l = 2, 3, ..., 6, Tl \u21920, uniformly in \u03b8. The convergence in probability is equivalent to the existence of a subsequence converging almost surely. Thus, the convergence in (31) is almost sure for a subsequence ( b \u03b2Nl, b \u03c2Nl). This implies: b \u03c2Nl P\u03b80\u2212a.s. \u2212 \u2212 \u2212 \u2212 \u2212 \u2212 \u2192 Nh\u2192\u221e h\u21920 \u03c2\u221e. The compactness of \u0398 implies that ( b \u03b2Nl, b \u03c2Nl) converges to a limit (\u03b2\u221e, \u03c2\u221e) almost surely. By continuity of the mapping \u03c2 7\u2192G1(\u03c2, \u03c20) we have 1 Nl L[S] Nl( \u02c6 \u03b2Nl, b \u03c2Nl) \u2192G1(\u03c2\u22a4 \u221e, \u03c20), in P\u03b80, for Nh \u2192\u221e, h \u21920, uniformly in \u03b8. By the definition of the estimator, G1(\u03c2\u221e, \u03c20) \u2264G1(\u03c20, \u03c20). We also have: G1(\u03c2\u221e, \u03c20) \u2265G1(\u03c20, \u03c20) \u21d4log(det(\u03a3\u03a3\u22a4 \u221e)) + Tr((\u03a3\u03a3\u22a4 \u221e)\u22121\u03a3\u03a3\u22a4 0 ) \u2265log(det(\u03a3\u03a3\u22a4 0 )) + Tr(Id) \u21d4Tr((\u03a3\u03a3\u22a4 \u221e)\u22121\u03a3\u03a3\u22a4 0 ) \u2212log(det((\u03a3\u03a3\u22a4 \u221e)\u22121\u03a3\u03a3\u22a4 0 )) \u2265d \u21d4 d X i=1 \u03bbi \u2212log d Y i=1 \u03bbi \u2265 d X i=1 1 \u21d4 d X i=1 (\u03bbi \u22121 \u2212log \u03bbi) \u22650, where \u03bbi are the eigenvalues of (\u03a3\u03a3\u22a4 \u221e)\u22121\u03a3\u03a3\u22a4 0 , which is a positive definite matrix. The last inequality follows since for any positive x, log x \u2264x \u22121. Thus, G1(\u03c2\u221e, \u03c20) = G1(\u03c20.\u03c20). Then, all the eigenvalues \u03bbi must be equal to 1, 22 \fSDE Parameter Estimation using Splitting Schemes A PREPRINT hence, \u03a3\u03a3\u22a4 \u221e= \u03a3\u03a3\u22a4 0 . We proved that a convergent subsequence of b \u03c2N tends to \u03c20 almost surely. From there, the consistency of the estimator of the diffusion coefficient follows. We now focus on the consistency of the drift parameter. 
The objective is to prove that the following limit in P\u03b80, for Nh \u2192\u221e, h \u21920, uniformly with respect to \u03b8: 1 Nh(L[S] N (\u03b2, \u03c2) \u2212L[S] N (\u03b20, \u03c2)) \u2192G2(\u03b20, \u03c20, \u03b2, \u03c2), (33) where: G2(\u03b20, \u03c20, \u03b2, \u03c2) := Z (F0(x) \u2212F(x))\u22a4(\u03a3\u03a3\u22a4)\u22121(F0(x) \u2212F(x)) d\u03bd0(x) + Z Tr(D(N0(x) \u2212N(x))(\u03a3\u03a3\u22a4 0 (\u03a3\u03a3\u22a4)\u22121 \u2212I)) d\u03bd0(x). To prove it, we decompose 1 Nh(L[S] N (\u03b2, \u03c2) \u2212L[S] N (\u03b20, \u03c2)) as follows: 1 Nh(L[S] N (\u03b2, \u03c2) \u2212L[S] N (\u03b20, \u03c2)) = Tr(A(\u03b2) \u2212A(\u03b20)) + 1 h(T2 + T3 + 2(T4 + T5 + T6)) + 1 Nh N X k=1 (Ztk(\u03b20)\u22a4(\u03a3\u03a3\u22a4)\u22121A(\u03b20)Ztk(\u03b20) \u2212Ztk(\u03b2)\u22a4(\u03a3\u03a3\u22a4)\u22121A(\u03b2)Ztk(\u03b2)) (34) + 1 N N X k=1 Tr D(N(Xtk; \u03b2) \u2212N(Xtk; \u03b20)) + R(h, x0). The term 1 Nh PN k=1(Ztk(\u03b20)\u22a4(\u03a3\u03a3\u22a4)\u22121A(\u03b20)Ztk(\u03b20) \u2212Ztk(\u03b2)\u22a4(\u03a3\u03a3\u22a4)\u22121A(\u03b2)Ztk(\u03b2)) converges to Tr(A(\u03b20) \u2212A(\u03b2)), which thus cancels out with the first term in (34). Lemma 4.2 provides the uniform convergence of 1 hT2 with respect to \u03b8: 1 hT2 = 1 4N N X k=1 (N0(Xtk) \u2212N(Xtk))\u22a4(\u03a3\u03a3\u22a4)\u22121(N0(Xtk) \u2212N(Xtk)) + R(h, x0) \u21921 4 Z (N0(x) \u2212N(x))\u22a4(\u03a3\u03a3\u22a4)\u22121(N0(x) \u2212N(x)) d\u03bd0(x). The limit of 1 hT3 computes analogously. To prove 1 hT4 \u21920, we use Lemma 9 in Genon-Catalot and Jacod (1993) and Property 4 from Lemma 7.1. Lemma 4.2 yields: 1 hT5 P\u03b80 \u2212 \u2212 \u2212 \u2212 \u2212 \u2192 Nh\u2192\u221e h\u21920 1 4 Z (N0(x) \u2212N(x))\u22a4(\u03a3\u03a3\u22a4)\u22121(N0(x) \u2212N(x)) d\u03bd0(x) +1 2 Z (A0(x\u2212b0) \u2212A(x\u2212b))\u22a4(\u03a3\u03a3\u22a4)\u22121(N0(x) \u2212N(x)) d\u03bd0(x). Finally, 1 hT6 \u21921 2 R Tr(D(N0(x) \u2212N(x))\u22a4\u03a3\u03a3\u22a4 0 (\u03a3\u03a3\u22a4)\u22121) d\u03bd0(x) uniformly in \u03b8, by Property 6 of Lemma 7.1. Lemma 4.2 gives: 1 N N X k=1 Tr D(N(Xtk) \u2212N0(Xtk)) P\u03b80 \u2212 \u2212 \u2212 \u2212 \u2212 \u2192 Nh\u2192\u221e h\u21920 Z Tr D(N(x) \u2212N0(x)) d\u03bd0(x), uniformly in \u03b8. This proves (33). Then, there exists a subsequence Nl such that ( b \u03b2Nl, b \u03c2Nl) converges to a limit (\u03b2\u221e, \u03c2\u221e), almost surely. By continuity of the mapping (\u03b2, \u03c2) 7\u2192G2(\u03b20, \u03c20, \u03b2, \u03c2), for Nlh \u2192\u221e, h \u21920, we have the following convergence in P\u03b80: 1 Nlh(L[S] Nl( b \u03b2Nl, b \u03c2Nl) \u2212L[S] Nl(\u03b20, b \u03c2Nl)) \u2192G2(\u03b20, \u03c20, \u03b2\u221e, \u03c2\u221e). Then, G2(\u03b20, \u03c20, \u03b2\u221e, \u03c2\u221e) \u22650 since \u03a3\u03a3\u22a4 \u221e= \u03a3\u03a3\u22a4 0 . On the other hand, by the definition of the estimator L[S] Nl( b \u03b2Nl, b \u03c2Nl) \u2212L[S] Nl(\u03b20, b \u03c2Nl) \u22640. Thus, the identifiability assumption (A5) concludes the proof for the S estimator. To prove the same statement for the LT estimator, the representation of the objective function (32) has to be adapted. In the LT case, this representation is straightforward. There is no extra logarithmic term and only three instead of six auxiliary T terms are used. This is due to the Gaussian transition density in the LT approximation. 
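As a quick numerical sanity check of the identifiability argument for the diffusion parameter used above, one can verify that the limiting criterion G1 in (31) is minimised at the true value; a scalar R sketch:

G1 <- function(sigma2, sigma2_0) log(sigma2) + sigma2_0 / sigma2
optimize(G1, interval = c(0.1, 10), sigma2_0 = 2)$minimum   # approximately 2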
23 \fSDE Parameter Estimation using Splitting Schemes A PREPRINT 7.4 Proof of asymptotic normality of the estimator Proof of Theorem 5.2 According to Theorem 1 in Kessler (1997) or Theorem 1 in S\u00f8rensen and Uchida (2003), Lemmas 7.2 and 7.3 below are enough for establishing the asymptotic normality of \u02c6 \u03b8N. Here, we only present the outline of the proof. For more details, see proof of Theorem 1 in S\u00f8rensen and Uchida (2003). Lemma 7.2 Let CN(\u03b80) and C(\u03b80) be as defined in (25) and (27), respectively. If h \u21920, Nh \u2192\u221e, and \u03c1N \u21920, then: CN(\u03b80) P\u03b80 \u2212 \u2212 \u21922C(\u03b80), sup \u2225\u03b8\u2225\u2264\u03c1N \u2225CN(\u03b80 + \u03b8) \u2212CN(\u03b80)\u2225 P\u03b80 \u2212 \u2212 \u21920. Lemma 7.3 Let \u03bbN be as defined (26). If h \u21920, Nh \u2192\u221eand Nh2 \u21920, then: \u03bbN d \u2212 \u2192N(0, 4C(\u03b80)), under P\u03b80. Lemma 7.2 states that CN(\u03b80) approaches 2C(\u03b80) as h \u21920 and Nh \u2192\u221e. Moreover, the difference between CN(\u03b80 + \u03b8) and CN(\u03b80) approaches zero when \u03b8 approaches \u03b80, within a distance specified by balls B\u03c1N (\u03b80), where \u03c1N \u21920. To ensure the asymptotic normality of \u02c6 \u03b8N, Lemma 7.2 is employed to restrict the term \u2225DN \u2212CN(\u03b80)\u2225 when \u02c6 \u03b8N \u2208\u0398 \u2229B\u03c1N (\u03b80) as follows: \u2225DN \u2212CN(\u03b80)\u2225\u22ae{ \u02c6 \u03b8N\u2208\u0398\u2229B\u03c1N (\u03b80)} \u2a7d sup \u03b8\u2208B\u03c1N (\u03b80) \u2225CN(\u03b8) \u2212CN(\u03b80)\u2225 P\u03b80 \u2212 \u2212 \u2212 \u2212 \u2212 \u2192 Nh\u2192\u221e h\u21920 0 Applying again Lemma 7.2 on the previous line, we get DN \u21922C(\u03b80) in P\u03b80, as h \u21920 and Nh \u2192\u221e. Lemma 7.3 establishes the convergence in distribution of \u03bbN to N(0, 4C(\u03b80)), under P\u03b80, as h \u21920 and Nh \u2192\u221e. This result provides the groundwork for the asymptotic normality of \u02c6 \u03b8N. Indeed, consider the set DN composed of instances where DN is invertible. The probability, under \u03b80, of DN occurring approaches 1, as h \u21920 and Nh \u2192\u221e. This implies that DN is almost surely invertible in this limit. Furthermore, we define EN as the intersection of {\u02c6 \u03b8N \u2208\u0398} and DN. Then, it can be shown that \u22aeEN \u21921 in P\u03b80 when h \u21920 and Nh \u2192\u221e. For EN := DN on EN, we have EN \u21922C(\u03b80) in P\u03b80 as h \u21920 and Nh \u2192\u221e. Given that sN\u22aeEN = E\u22121 N DNsN\u22aeEN = E\u22121 N \u03bbN\u22aeEN and according to Lemma 7.3, sN\u22aeEN \u2192N(0, C(\u03b80)\u22121) in distribution as h \u21920, Nh \u2192\u221eand Nh2 \u21920. In conclusion, under P\u03b80, as h \u21920, Nh \u2192\u221eand Nh2 \u21920, sN\u22aeEN is shown to converge in distribution to N(0, C(\u03b80)\u22121). The asymptotic normality for \u02c6 \u03b8N is, thus, confirmed due to the convergence of \u22aeEN \u21921. Proof of Lemma 7.2 To prove the first part of the lemma, we aim to represent CN(\u03b80) from the objective function (14). In doing so, we again employ the approximation (23), focusing solely on the terms that do not converge to zero as Nh \u2192\u221eand h \u21920. We start as in the approximation (34) and compute the corresponding derivatives to obtain the first block matrix of CN (25). 
We begin with \u2202\u03b2i1\u03b2i2 L[S] N (\u03b2, \u03c2): 1 Nh\u2202\u03b2i1\u03b2i2 L[S] N (\u03b2, \u03c2)= \u2202\u03b2i1\u03b2i2 Tr A(\u03b2) + 1 N N X k=1 \u2202\u03b2i1\u03b2i2 Tr DN(Xtk; \u03b2) + \u2202\u03b2i1\u03b2i2 1 h \u0010 T2(\u03b20, \u03b2, \u03c2) + T3(\u03b20, \u03b2, \u03c2) + 2(T4(\u03b20, \u03b2, \u03c2) + T5(\u03b20, \u03b2, \u03c2) + T6(\u03b20, \u03b2, \u03c2)) \u0011 \u22121 Nh N X k=1 \u2202\u03b2i1\u03b2i2 (Ztk(\u03b2)\u22a4(\u03a3\u03a3\u22a4)\u22121A(\u03b2)Ztk(\u03b2))) + R(h, x0). To determine the convergence of each of the previous terms, we use the definitions of the sums Tis and approximate each Ti using Proposition 2.2 and the Taylor expansion of the function \u00b5h. As we apply the derivatives \u2202\u03b2i1\u03b2i2, the order of h in each sum increases since terms of order R(1, x0) are constant with respect to \u03b2. Finally, when evaluating 1 Nh\u2202\u03b2i1\u03b2i2 L[S] N (\u03b2, \u03c2) at \u03b8 = \u03b80, numerous terms will cancel out due to differences of the type g(\u03b20; Xtk, Xtk\u22121) \u2212 24 \fSDE Parameter Estimation using Splitting Schemes A PREPRINT g(\u03b2; Xtk, Xtk\u22121). Using the results from Lemma 7.1 and the proof of Theorem 5.1, we get the following limits: \u2202\u03b2i1\u03b2i2 1 hT2(\u03b20, \u03b2, \u03c20) \f \f \f \u03b2=\u03b20 P\u03b80 \u2212 \u2212 \u21921 2 Z (\u2202\u03b2i1 N0(x))\u22a4(\u03a3\u03a3\u22a4 0 )\u22121\u2202\u03b2i2 N0(x) d\u03bd0(x), \u2202\u03b2i1\u03b2i2 1 hT3(\u03b20, \u03b2, \u03c20) \f \f \f \u03b2=\u03b20 P\u03b80 \u2212 \u2212 \u2192 1 2 Z (\u2202\u03b2i1 N0(x) + 2\u2202\u03b2i1 A0(x\u2212b0))\u22a4(\u03a3\u03a3\u22a4 0 )\u22121(\u2202\u03b2i2 N0(x) + 2\u2202\u03b2i2 A0(x\u2212b0)) d\u03bd0(x), \u2202\u03b2i1\u03b2i2 1 hT5(\u03b20, \u03b2, \u03c20) \f \f \f \u03b2=\u03b20 P\u03b80 \u2212 \u2212 \u21921 2 Z (\u2202\u03b2i1 F0(x))\u22a4(\u03a3\u03a3\u22a4 0 )\u22121\u2202\u03b2i2 N0(x) d\u03bd0(x) + 1 2 Z (\u2202\u03b2i2 A0(x\u2212b0))\u22a4(\u03a3\u03a3\u22a4 0 )\u22121\u2202\u03b2i1 N0(x) d\u03bd0(x), \u2202\u03b2i1\u03b2i2 1 hT6(\u03b20, \u03b2, \u03c20) \f \f \f \u03b2=\u03b20 P\u03b80 \u2212 \u2212 \u2192\u22121 2 Z Tr(D\u2202\u03b2i1\u03b2i2 N0(x)) d\u03bd0(x), for Nh \u2192\u221e, h \u21920. Since 1 hT4 \u21920, the partial derivatives go to zero too. From Lemma 4.2, for Nh \u2192\u221e, h \u21920, we have: 1 N N X k=1 \u2202\u03b2i1\u03b2i2 Tr DN(Xtk; \u03b2) P\u03b80 \u2212 \u2212 \u2192 Z Tr(D\u2202\u03b2i1\u03b2i2 N0(x)) d\u03bd0(x). Term 1 Nh PN k=1 \u2202\u03b2i1\u03b2i2 (Ztk(\u03b2)\u22a4(\u03a3\u03a3\u22a4)\u22121A(\u03b2)Ztk(\u03b2)), evaluated in \u03b8 = \u03b80, has only one term of order h: 1 Nh PN k=1 Ztk(\u03b20)\u22a4(\u03a3\u03a3\u22a4 0 )\u22121\u2202\u03b2i1\u03b2i2 A(\u03b20)Ztk(\u03b20), which converges to \u2202\u03b2i1\u03b2i2 Tr A(\u03b20) (Property 1 Lemma 7.1). Thus, 1 Nh\u2202\u03b2i1\u03b2i2 L[S] N (\u03b2, \u03c20)|\u03b2=\u03b20 \u21922 R (\u2202\u03b2i2 F0(x))\u22a4(\u03a3\u03a3\u22a4 0 )\u22121\u2202\u03b2i2 F0(x) d\u03bd0(x), in P\u03b80 for Nh \u2192\u221e, h \u21920. Now, we prove 1 N \u221a h\u2202\u03b2\u03c2L[S] N (\u03b2, \u03c2)|\u03b2=\u03b20,\u03c2=\u03c20 \u21920, in P\u03b80 for Nh \u2192\u221e, h \u21920. 
For a constant Ch, depending on h, l = 2, 3, ..., 6, and generic functions g, g1, the following term is at most of order R(h, x0): \u2202\u03b2iTl(\u03b2, \u03c2) = Ch N X k=1 (g(\u03b20; Xtk, Xtk\u22121) \u2212g(\u03b2; Xtk, Xtk\u22121))\u22a4(\u03a3\u03a3\u22a4)\u22121g1(\u03b2; Xtk, Xtk\u22121), Then, term \u2202\u03b2\u03c2L[S] N (\u03b2, \u03c2) still contains g(\u03b20; Xtk, Xtk\u22121) \u2212g(\u03b2; Xtk, Xtk\u22121) which is 0 for \u03b2 = \u03b20. Moreover, the term 1 N PN k=1 \u2202\u03b2\u03c2(Ztk(\u03b2)\u22a4(\u03a3\u03a3\u22a4)\u22121A(\u03b2)Ztk(\u03b2)) is at most of order R(h, x0). Thus, 1 N \u221a h\u2202\u03b2\u03c2L[S] N (\u03b2, \u03c2)|\u03b2=\u03b20,\u03c2=\u03c20 = 0. Finally, we compute 1 N \u2202\u03c2j1\u03c2j2 L[S] N (\u03b2, \u03c2). As before, it holds 1 N \u2202\u03c2j1\u03c2j2 Tl(\u03b2, \u03c2)|\u03b2=\u03b20,\u03c2=\u03c20 \u21920, for l = 2, 3, ..., 6. Similarly, we see that 1 N PN k=1 Ztk(\u03b20)\u22a4\u2202\u03c2j1\u03c2j2 (\u03a3\u03a3\u22a4)\u22121A(\u03b20)Ztk(\u03b20) is at most of order R(h, x0). So, we need to compute the following second derivatives \u2202\u03c2j1\u03c2j2 log(det \u03a3\u03a3\u22a4) and \u2202\u03c2j1\u03c2j2 1 Nh PN k=1 Ztk(\u03b20)\u22a4(\u03a3\u03a3\u22a4)\u22121Ztk(\u03b20). The first one yields: \u2202\u03c2j1\u03c2j2 log(det \u03a3\u03a3\u22a4) = Tr((\u03a3\u03a3\u22a4)\u22121\u2202\u03c2j1\u03c2j2 \u03a3\u03a3\u22a4) \u2212Tr((\u03a3\u03a3\u22a4)\u22121(\u2202\u03c2j1 \u03a3\u03a3\u22a4)(\u03a3\u03a3\u22a4)\u22121\u2202\u03c2j2 \u03a3\u03a3\u22a4). On the other hand, we have: \u2202\u03c2j1\u03c2j2 1 Nh N X k=1 Ztk(\u03b20)\u22a4(\u03a3\u03a3\u22a4)\u22121Ztk(\u03b20) = \u22121 Nh N X k=1 Tr(Ztk(\u03b20)Ztk(\u03b20)\u22a4(\u03a3\u03a3\u22a4)\u22121(\u2202\u03c2j1\u03c2j2 \u03a3\u03a3\u22a4)(\u03a3\u03a3\u22a4)\u22121) + 1 Nh N X k=1 Tr(Ztk(\u03b20)Ztk(\u03b20)\u22a4(\u03a3\u03a3\u22a4)\u22121(\u2202\u03c2j1 \u03a3\u03a3\u22a4)(\u03a3\u03a3\u22a4)\u22121(\u2202\u03c2j2 \u03a3\u03a3\u22a4)(\u03a3\u03a3\u22a4)\u22121) + 1 Nh N X k=1 Tr(Ztk(\u03b20)Ztk(\u03b20)\u22a4(\u03a3\u03a3\u22a4)\u22121(\u2202\u03c2j2 \u03a3\u03a3\u22a4)(\u03a3\u03a3\u22a4)\u22121(\u2202\u03c2j1 \u03a3\u03a3\u22a4)(\u03a3\u03a3\u22a4)\u22121). 25 \fSDE Parameter Estimation using Splitting Schemes A PREPRINT Then, from Property 1 of Lemma 7.1, we get: \u2202\u03c2j1\u03c2j2 1 Nh N X k=1 Ztk(\u03b20)\u22a4(\u03a3\u03a3\u22a4)\u22121Ztk(\u03b20) \f \f \f \u03c2=\u03c20 P\u03b80 \u2212 \u2212 \u2212 \u2212 \u2212 \u2192 Nh\u2192\u221e h\u21920 2 Tr((\u03a3\u03a3\u22a4 0 )\u22121(\u2202\u03c2j1 \u03a3\u03a3\u22a4 0 )(\u03a3\u03a3\u22a4 0 )\u22121\u2202\u03c2j2 \u03a3\u03a3\u22a4 0 ) \u2212Tr((\u03a3\u03a3\u22a4 0 )\u22121\u2202\u03c2j1\u03c2j2 \u03a3\u03a3\u22a4 0 ). Thus, 1 N \u2202\u03c2j1\u03c2j2 L[S] N (\u03b2, \u03c2)|\u03b2=\u03b20,\u03c2=\u03c20 \u2192Tr((\u03a3\u03a3\u22a4 0 )\u22121(\u2202\u03c2j1 \u03a3\u03a3\u22a4 0 )(\u03a3\u03a3\u22a4 0 )\u22121\u2202\u03c2j2 \u03a3\u03a3\u22a4 0 ). Since all the limits used in this proof are uniform in \u03b8, the first part of the lemma is proved. The second part is trivial, because all limits are continuous in \u03b8. Proof of Lemma 7.3 First, we compute the first derivatives. We start with: \u2202\u03b2iL[S] N (\u03b2, \u03c2) = \u22122 N X k=1 Tr(Dfh/2,k(\u03b2)Dx\u2202\u03b2if \u22121 h/2,k(\u03b2)) + 2 h N X k=1 (f \u22121 h/2,k(\u03b2) \u2212\u00b5h,k\u22121(\u03b2))\u22a4(\u03a3\u03a3\u22a4)\u22121(\u2202\u03b2if \u22121 h/2,k(\u03b2) \u2212\u2202\u03b2i\u00b5h,k\u22121(\u03b2)). 
The first derivative with respect to \u03c2 is: \u2202\u03c2jL[S] N (\u03b2, \u03c2) = N\u2202\u03c2j log det(\u03a3\u03a3\u22a4) + 1 h\u2202\u03c2j N X k=1 (f \u22121 h/2,k(\u03b2) \u2212\u00b5h,k\u22121(\u03b2))\u22a4(\u03a3\u03a3\u22a4)\u22121(f \u22121 h/2,k(\u03b2) \u2212\u00b5h,k\u22121(\u03b2)) = \u22121 h N X k=1 \u0012 Tr \u0010 (f \u22121 h/2,k(\u03b2) \u2212\u00b5h,k\u22121(\u03b2))(f \u22121 h/2,k(\u03b2) \u2212\u00b5h,k\u22121(\u03b2))\u22a4 (\u03a3\u03a3\u22a4)\u22121(\u2202\u03c2j\u03a3\u03a3\u22a4)(\u03a3\u03a3\u22a4)\u22121\u0011 + Tr((\u03a3\u03a3\u22a4)\u22121\u2202\u03c2j\u03a3\u03a3\u22a4) \u0013 . Define: \u03b7(i) N,k(\u03b8) := 2 \u221a Nh Tr(Dfh/2,k(\u03b2)Dx\u2202\u03b2if \u22121 h/2,k(\u03b2)) (35) \u2212 2 \u221a Nhh Ztk(\u03b2)\u22a4(\u03a3\u03a3\u22a4)\u22121\u2202\u03b2i(f \u22121 h/2,k(\u03b2) \u2212\u00b5h,k\u22121(\u03b2)) \u03b6(j) N,k(\u03b8) := 1 \u221a Nh Tr(Ztk(\u03b2)Ztk(\u03b2)\u22a4(\u03a3\u03a3\u22a4)\u22121(\u2202\u03c2j\u03a3\u03a3\u22a4)(\u03a3\u03a3\u22a4)\u22121) (36) \u2212 1 \u221a N Tr((\u03a3\u03a3\u22a4)\u22121\u2202\u03c2j\u03a3\u03a3\u22a4), and rewrite \u03bbN as \u03bbN = PN k=1[\u03b7(1) N,k(\u03b80), . . . , \u03b7(r) N,k(\u03b80), \u03b6(1) N,k(\u03b80), . . . , \u03b6(s) N,k(\u03b80)]\u22a4. Now, by Proposition 3.1 from Crimaldi and Pratelli (2005), it is sufficient to prove Lemma 7.4. For more details, see Supplementary Material (Pilipovic et al., 2023). Lemma 7.4 Let \u03b7(i) N,k(\u03b8) and \u03b6(j) N,k(\u03b8) be defined as in (35) and (36), respectively. If h \u21920, Nh \u2192\u221e, and Nh2 \u21920, then for and all i, i1, i2 = 1, 2, ..., r, and j, j1, j2 = 1, 2, ..., s, it holds: [(i)] 1. E\u03b80[sup1\u2264k\u2264N|\u03b7(i) N,k(\u03b80)|] \u2212 \u21920, and E\u03b80[sup1\u2264k\u2264N|\u03b6(j) N,k(\u03b80)|] \u2212 \u21920; 2. PN k=1E\u03b80[\u03b7(i) N,k(\u03b80) | Xtk\u22121] P\u03b80 \u2212 \u2212 \u21920, and PN k=1E\u03b80[\u03b6(j) N,k(\u03b80) | Xtk\u22121] P\u03b80 \u2212 \u2212 \u21920; 3. PN k=1 E\u03b80[\u03b7(i1) N,k(\u03b80) | Xtk\u22121]E\u03b80[\u03b7(i2) N,k(\u03b80) | Xtk\u22121] P\u03b80 \u2212 \u2212 \u21920; 26 \fSDE Parameter Estimation using Splitting Schemes A PREPRINT 4. PN k=1 E\u03b80[\u03b6(j1) N,k(\u03b80) | Xtk\u22121]E\u03b80[\u03b6(j2) N,k(\u03b80) | Xtk\u22121] P\u03b80 \u2212 \u2212 \u21920; 5. PN k=1 E\u03b80[\u03b7(i) N,k(\u03b80) | Xtk\u22121]E\u03b80[\u03b6(j) N,k(\u03b80) | Xtk\u22121] P\u03b80 \u2212 \u2212 \u21920; 6. PN k=1 E\u03b80[\u03b7(i1) N,k(\u03b80)\u03b7(i2) N,k(\u03b80) | Xtk\u22121] P\u03b80 \u2212 \u2212 \u21924[C\u03b2(\u03b80)]i1i2; 7. PN k=1 E\u03b80[\u03b6(j1) N,k(\u03b80)\u03b6(j2) N,k(\u03b80) | Xtk\u22121] P\u03b80 \u2212 \u2212 \u21924[C\u03c2(\u03b80)]j1j2; 8. PN k=1 E\u03b80[\u03b7(i) N,k(\u03b80)\u03b6(j) N,k(\u03b80) | Xtk\u22121] P\u03b80 \u2212 \u2212 \u21920; 9. PN k=1 E\u03b80[(\u03b7(i1) N,k(\u03b80)\u03b7(i2) N,k(\u03b80))2 | Xtk\u22121] P\u03b80 \u2212 \u2212 \u21920; 10. PN k=1 E\u03b80[(\u03b6(j1) N,k(\u03b80)\u03b6(j2) N,k(\u03b80))2 | Xtk\u22121] P\u03b80 \u2212 \u2212 \u21920; 11. PN k=1 E\u03b80[(\u03b7(i) N,k(\u03b80)\u03b6(j) N,k(\u03b80)2) | Xtk\u22121] P\u03b80 \u2212 \u2212 \u21920. The proof of the previous Lemma is technical and is shown in Supplementary Material (Pilipovic et al., 2023). 
8" + } + ], + "Susanne Ditlevsen": [ + { + "url": "http://arxiv.org/abs/2306.15787v2", + "title": "Network inference in a stochastic multi-population neural mass model via approximate Bayesian computation", + "abstract": "The aim of this article is to infer the connectivity structures of brain\nregions before and during epileptic seizure. Our contributions are fourfold.\nFirst, we propose a 6N-dimensional stochastic differential equation for\nmodelling the activity of N coupled populations of neurons in the brain. This\nmodel further develops the (single population) stochastic Jansen and Rit neural\nmass model, which describes human electroencephalography (EEG) rhythms, in\nparticular signals with epileptic activity. Second, we construct a reliable and\nefficient numerical scheme for the model simulation, extending a splitting\nprocedure proposed for one neural population. Third, we propose an adapted\nSequential Monte Carlo Approximate Bayesian Computation algorithm for\nsimulation-based inference of both the relevant real-valued model parameters as\nwell as the {0,1}-valued network parameters, the latter describing the coupling\ndirections among the N modelled neural populations. Fourth, after illustrating\nand validating the proposed statistical approach on different types of\nsimulated data, we apply it to a set of multi-channel EEG data recorded before\nand during an epileptic seizure. The real data experiments suggest, for\nexample, a larger activation in each neural population and a stronger\nconnectivity on the left brain hemisphere during seizure.", + "authors": "Susanne Ditlevsen, Massimiliano Tamborrino, Irene Tubikanec", + "published": "2023-06-27", + "updated": "2023-08-25", + "primary_cat": "stat.ME", + "cats": [ + "stat.ME", + "cs.NA", + "math.NA" + ], + "main_content": "Introduction Estimating the connectivity in a network of units is important in a vast variety of applications, ranging from biology over climate to finance, physics, sociology and other fields. This is a statistically challenging task, as these interacting units typically follow some complex underlying stochastic dynamics, which may be only partially observed. Moreover, the detection of directed connections and the distinction from spurious ones is particularly difficult. This article is motivated by neuroscience, where the study of neural activity and the underlying connectivity between brain regions is essential for understanding brain function and its implications in various neurological conditions. In particular, we are interested in inferring the connectivity structure of the brain before and during epileptic seizure, a critical area of study for understanding and managing epilepsy. Electroencephalography (EEG) is a widely used technique to measure and analyze brain activity, providing insights into the complex dynamics of the brain. Here, we aim at estimating the directed connections between N coupled neural populations whose activity corresponds to the simultaneous measurements from N electrodes in the EEG recordings, one per population. In this article, each neural population is modelled with a stochastic version of the Jansen and Rit neural mass model (JR-NMM) originally proposed in [18]. This model has shown useful for reproducing human EEG rhythms, including those associated with epileptic activity. 
The original JR-NMM is a 6-dimensional system of ordinary differential equations and describes the average activity of one neural population, corresponding to the measured activity of one electrode in EEG recordings. The model includes a term representing noisy extrinsic input from the neighborhood or more distant regions. As this term can be interpreted as a stochastic process, the solution of the dynamical system is a stochastic process as well, inheriting the analytical properties of this stochastic input function. Therefore, the model was reformulated as a stochastic differential equation (SDE) in [1], and proved to be geometrically ergodic, a property guaranteeing that the distribution of the solution converges to a unique limit distribution, exponentially fast and for any initial value. This has two important statistical implications: First, the choice of the (unknown) initial value is negligible, since its impact on the distribution of the process decreases exponentially fast. Second, quantities related to the distribution of the process can be estimated from a (sufficiently long) single path, instead of requiring many repeated path simulations. While the original JR-NMM [18] has been extended to model N coupled neural populations in [42], no such extension has been proposed for the SDE version [1] of this model. We fill in this gap by proposing a 6Ndimensional SDE model accounting for directed connections between N neural populations (first contribution). In contrast to [42], our model contains {0, 1}-valued coupling direction parameters, describing the underlying network structure that we aim to infer from EEG data. As neither the underlying transition density nor exact simulation schemes are available for the 6N-dimensional SDE model, a suitable numerical approximation is required. The commonly applied Euler-Maruyama discretisation is not suitable for this SDE, as it has been shown to fail in preserving crucial structural properties for the single population model, such as the dynamics of the modelled neural oscillations [1, 8]. The interested reader is also referred to [7, 10, 11, 20, 37], where similar issues of the Euler-Maruyama method applied to oscillatory SDE models are reported. Consequently, when embedded into a statistical inference procedure, this standard numerical scheme may yield wrong estimation results, make the inference algorithm computationally infeasible or lead to ill-conditioned estimation methods [8, 14, 29]. Therefore, to obtain a reliable and efficient numerical simulation method, we further develop the structure-preserving splitting procedure proposed in [1] for N = 1 population to our N-population SDE model (second contribution). In contrast to the Euler-Maruyama scheme, which is based on truncating a stochastic Taylor series, the idea behind the splitting approach is to divide the unsolvable equation into solvable subequations, and to compose their solutions in a proper way [5, 24]. The constructed splitting scheme for the stochastic N-population neural mass model is based on a Hamiltonian type re-formulation of the SDE and successfully handles multiple interacting components, allowing to accurately simulate the complex dynamics of coupled neural populations. As a next step, we aim to estimate the {0, 1}-valued coupling direction parameters between the N neural populations, as well as relevant real-valued model parameters of the 6N-dimensional 2 \fSDE from N simultaneously recorded EEG signals. 
This is particularly challenging, since this SDE falls into the class of hypoelliptic models [14, 21, 25, 29] and is only partially observed via N onedimensional linear functions of a subset of the 6N model components. These issues, combined with the lack of a tractable underlying likelihood, make this problem naturally suitable for likelihoodfree inference approaches. We focus on the simulation-based Approximate Bayesian Computation (ABC) method [34], which has become one of the leading tools for parameter estimation in complex mathematical models in the last decades. The idea behind the classical acceptance-rejection ABC algorithm is to: (a) sample a parameter candidate from a prior distribution; (b) simulate a dataset conditioned on this value; (c) keep the candidate as a sample from the desired approximate posterior if the distance between the summary statistics of the simulated and observed datasets is smaller than some threshold level [4, 23, 34]. This basic algorithm is computationally expensive, because parameter candidates are sampled from the prior distribution throughout, whose mass typically concentrates in regions \u201cfar away\u201d from those of the posterior distribution, leading to low acceptance rates and high computational costs. To tackle this, here we consider the Sequential Monte Carlo (SMC) ABC approach, which represents the state-of-the-art sampler within ABC. This sequential algorithm relies on targeted proposal samplers (constructed across iterations) that efficiently avoid improbable parameter regions, yielding intermediate approximate posterior distributions that move towards the desired posterior during several iterations [3, 13, 35]. In particular, we propose an adapted SMC-ABC algorithm to infer both the {0, 1}-valued network parameters and the real-valued model parameters (third contribution). The algorithm builds upon the work of [8] for Hamiltonian-type SDEs observed via univariate time series, further developing it at the following three levels. First, we rely on the numerical splitting scheme for synthetic data generation from the 6N-dimensional SDE. Second, we construct effective summary statistics by mapping the N-dimensional time series to their estimated N marginal densities and spectral densities, as well as to their estimated cross-spectral densities and cross-correlation functions to account for possible dependencies among the populations. Third and foremost, to handle the high-dimensional parameter space and the presence of {0, 1}-valued parameters, we consider two independent kernels for the SMC-ABC proposal, one Gaussian for the real-valued and one Bernoulli for the {0, 1}-valued parameters. By estimating the latter, we uncover the directed connectivity structure of the neural populations, enabling the identification of the intricate network connections within brain regions. After validating the performance of the proposed statistical approach on different types of simulated data, we apply it on real multi-channel EEG data measured before and during epileptic seizure (fourth contribution). We successfully obtain unimodal posteriors of a relatively large number of continuous model parameters and a clear network estimate during seizure. Comparing this network with that obtained based on the EEG recordings before seizure, we observe, e.g., a stronger connectivity on the left brain hemisphere during seizure. 
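For concreteness, a minimal R sketch of the basic acceptance-rejection ABC sampler described in steps (a)-(c) above is given below; sample_prior, simulate_model, summaries and distance are generic placeholders to be supplied by the user (in our setting, the simulator is the numerical splitting scheme of Section 3 and the summaries are the estimated densities, spectral densities, cross-spectral densities and cross-correlations mentioned above).

abc_rejection <- function(n_keep, eps, y_obs, sample_prior, simulate_model,
                          summaries, distance) {
  s_obs <- summaries(y_obs)
  kept  <- list()
  while (length(kept) < n_keep) {
    theta <- sample_prior()                       # (a) propose from the prior
    y_sim <- simulate_model(theta)                # (b) simulate a synthetic dataset
    if (distance(summaries(y_sim), s_obs) < eps)  # (c) keep if close to the data
      kept[[length(kept) + 1]] <- theta
  }
  do.call(rbind, kept)
}

In the SMC-ABC sampler used here, the prior draws in step (a) are, from the second iteration onwards, replaced by perturbed draws from the previous population, with a Gaussian kernel for the real-valued parameters and a Bernoulli kernel for the {0,1}-valued network parameters, and the threshold eps decreases across iterations.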
The proposed algorithm, along with the choice of its key ingredients, holds promise beyond the specific application, as it can be used for other coupled ergodic SDE models, provided that a reliable numerical simulation method can be derived. This flexibility broadens the scope of our work, offering a potential avenue for parameter and network estimation in various complex stochastic systems involving coupled units. The paper is organised as follows. In Section 2, we introduce the model. In Section 3, we construct the numerical splitting method. In Section 4, we detail the proposed adapted SMCABC algorithm for network inference. In Section 5, we illustrate its performance on simulated data. In Section 6, we apply our method to real EEG data with epileptic activity. Conclusions and discussion are reported in Section 7. Sample code is available at https://github.com/ IreneTubikanec/networkABC. 2 Multi-population stochastic Jansen and Rit neural mass model We first present the stochastic JR-NMM of one neural population [1]. We then extend it to a system of multiple coupled neural populations, following the strategies proposed in [19, 42]. Finally, inspired by [1], we derive the corresponding Hamiltonian type formulation of the multipopulation model. 3 \fNotation We denote by 0d the d-dimensional zero vector, by Od the d\u00d7d-dimensional zero matrix, by Id the d \u00d7 d-dimensional identity matrix, and by diag[a1, . . . , ad] a d \u00d7 d-dimensional diagonal matrix with diagonal entries a1, . . . , ad. The transpose is denoted by \u22a4and the Euclidean norm by \u2225\u00b7\u2225. We sometimes omit the time index of a stochastic process and use, e.g., (X(t))t\u2208[0,T ] and X interchangeably. 2.1 Modelling one neural population Let [0, T], with T > 0, be the time interval of interest. Let (\u2126, F, P) be a complete probability space with a complete and right-continuous filtration (F(t))t\u2208[0,T ], and let (Wi(t))t\u2208[0,T ], i = 4, 5, 6, be independent Wiener processes on (\u2126, F, P) and adapted to (F(t))t\u2208[0,T ]. Let Xi, i = 1, 2, 3, model the average postsynaptic potentials of three groups of neurons, i.e., the main neurons, excitatory interneurons and inhibitory interneurons, respectively. Define the sigmoid function sig: R \u2192[0, \u03bdmax], with \u03bdmax > 0, by sig(x) := \u03bdmax 1 + e\u03b3(v0\u2212x) , where x is a potential, \u03bdmax describes the maximum firing rate of the neural population, v0 \u2208R is the potential value for which 50% of the maximum firing rate is attained and \u03b3 > 0 is proportional to the slope of the sigmoid function at v0. The sigmoid function is used to transform the average membrane potential of a neural subpopulation into an average firing rate. The stochastic version of the JR-NMM [1] is given by dX1(t) = X4(t)dt dX2(t) = X5(t)dt dX3(t) = X6(t)dt dX4(t) = \u0002 Aa \u0000sig (X2(t) \u2212X3(t)) \u0001 \u22122aX4(t) \u2212a2X1(t) \u0003 dt + \u00af \u03f5dW4(t) dX5(t) = \u0002 Aa \u0000\u00b5 + C2sig \u0000C1X1(t) \u0001\u0001 \u22122aX5(t) \u2212a2X2(t) \u0003 dt + \u03c3dW5(t) dX6(t) = \u0002 BbC4sig (C3X1(t)) \u22122bX6(t) \u2212b2X3(t) \u0003 dt + \u02dc \u03f5dW6(t), (1) with F(0)-measurable initial value X0 = (X1(0), . . . , X6(0))\u22a4which is independent of W = (W4, W5, W6)\u22a4and satisfies E \u0002 \u2225X0\u22252\u0003 < \u221e. The meaning and typical values of the model parameters A, B, a, b, C1, C2, C3, C4, \u03bdmax, \u03b3 and v0 are reported in Table 1 (see also [1, 18, 42]). 
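A minimal R sketch of the sigmoid and of the drift of the single-population model (1), using the parameter names of Table 1, is given below; the default values are the standard ones of Table 1, with mu = 90 taken from the regular-EEG setting of Figure 1 as an illustrative input level.

sigm <- function(x, vmax = 5, v0 = 6, gam = 0.56) vmax / (1 + exp(gam * (v0 - x)))
jr_drift <- function(x, A = 3.25, B = 22, a = 100, b = 50, C = 135, mu = 90) {
  C1 <- C; C2 <- 0.8 * C; C3 <- 0.25 * C; C4 <- 0.25 * C
  c(x[4], x[5], x[6],
    A * a * sigm(x[2] - x[3])           - 2 * a * x[4] - a^2 * x[1],
    A * a * (mu + C2 * sigm(C1 * x[1])) - 2 * a * x[5] - a^2 * x[2],
    B * b * C4 * sigm(C3 * x[1])        - 2 * b * x[6] - b^2 * x[3])
}
# The noise in (1) enters only the last three components, with intensities (eps, sigma, eps).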
The parameters \u00b5 and \u03c3 scale the deterministic and stochastic input, respectively, coming from neighboring regions in the brain. Together with the Wiener process W5, they replace the stochastic input function of the original model. While usually \u03c3 \u226b1, weak noise acts on the components X4 and X6 with noise intensities \u00af \u03f5, \u02dc \u03f5 \u226a\u03c3. Throughout, we set \u03f5 = \u00af \u03f5 = \u02dc \u03f5 = 1 s\u22121, since their role is marginal. See [1] for further details. The stochastic JR-NMM (1) is an additive noise SDE with globally Lipschitz drift coefficient, and thus has a pathwise unique solution which is adapted to (F(t))t\u2208[0,T ] [2, 22]. Moreover, system (1) is hypoelliptic and geometrically ergodic [1]. The 6-dimensional solution X = (X1, X2, X3, X4, X5, X6)\u22a4is only partially observed through the difference of two of its coordinates Y (t) := X2(t) \u2212X3(t), t \u2208[0, T]. (2) The process Y = (Y (t))t\u2208[0,T ] is the observed output of the model and describes the average membrane potential of the main neurons as measured with EEG. An illustration of a simulated trace with parameters chosen to produce \u03b1-waves (neural oscillations in the 8\u201312 Hz frequency band) is provided in Figure 1, Panel A. The model can also produce more complex behaviour, such as brain signals occurring before and during epileptic seizures. This is achieved by increasing the excitation-inhibition-ratio A/B [42]. In Figure 1 Panel B, regular EEG activity is produced by using \u00b5 = 90, \u03c3 = 500, A = 3.25 and the standard values reported in Table 1. Increasing the excitation-inhibition-ratio by increasing A causes the model to generate sporadic and frequently occurring spikes (A = 3.5 and A = 3.6) and rhythmic discharge of spikes (A = 4.3), as observed in Panels C-E, respectively. 4 \fA. Alpha rythm B. Regular C. Sporadic D. Frequent E. Rhythmic 0.0 2.5 5.0 7.5 10.0 12.5 \u22125 0 5 10 15 \u22125 0 5 10 15 \u22125 0 5 10 15 \u22125 0 5 10 15 \u22125 0 5 10 15 t [s] Y [mV] Figure 1: Traces of one neural population with different types of activity. Simulated paths of the process Y in eq. (2) of model (1). Panel A: Trace showing \u03b1-rhythmic activity using the standard values of Table 1, A = 3.25, C = 134.263, \u00b5 = 202.547, and \u03c3 = 1859.211 (values taken from [8]). Panels B-E: Traces describing activity occurring during epileptic seizures using the standard values of Table 1, \u00b5 = 90, \u03c3 = 500 and different values of A. Panel B: Regular EEG, A = 3.25. Panel C: Sporadic spikes, A = 3.5. Panel D: Frequently occurring spikes, A = 3.6. Panel E: Rhythmic discharge of spikes, A = 4.3. The signals resemble experimental stereo EEG recordings (cf. Figure 3 in [42]). Table 1: Standard parameter values for the Jansen and Rit Neural Mass Model [1, 18, 42]. Parameter Meaning Standard value A Average excitatory synaptic gain 3.25 mV B Average inhibitory synaptic gain 22 mV a Membrane time constant of excitatory postsynaptic potential 100 s\u22121 b Membrane time constant of inhibitory postsynaptic potential 50 s\u22121 C Average number of synapses between the subpopulations 135 C1, C2 Avg. no. of synaptic contacts in the excitatory feedback loop C, 0.8 C C3, C4 Avg. no. 
of synaptic contacts in the inhibitory feedback loop 0.25 C, 0.25 C \u03bdmax Maximum firing rate (Maximum of the sigmoid function) 5 s\u22121 v0 Value for which 50% of the maximum firing rate is attained 6 mV \u03b3 Determines the slope of the sigmoid function at v0 0.56 mV\u22121 5 \f2.2 Modelling multiple coupled neural populations Here, we define the 6N-dimensional SDE describing a system of N > 1 neural populations, where the k-th population Xk = (Xk 1 , . . . , Xk 6 ), k = 1, . . . , N, satisfies (1) with suitable index k, except for the fifth equation, which is instead given by dXk 5 (t) = h Akak \u0010 \u00b5k+C2,ksig \u0000C1,kXk 1 (t) \u0001 + N X j=1,j\u0338=k \u03c1jkKjkXj 1(t) \u0011 \u22122akXk 5 (t)\u2212a2 kXk 2 (t) i dt+\u03c3kdW k 5 (t). In particular, the k-th population Xk receives inputs from populations j \u0338= k, j = 1, . . . , N, through its coordinate Xk 5 , where Kjk > 0 model the coupling strengths and \u03c1jk \u2208{0, 1} determine whether there is (\u03c1jk = 1) or not (\u03c1jk = 0) a directed coupling from the j-th to the k-th population, defining thus the functional network of relations among the neural populations. Since the main pyramidal cells are excitatory and have axons reaching other brain areas, the average action potential Xj 1(t) of population j \u0338= k provides excitatory input to population k, and thus Kjk > 0. Note that the coupling direction parameters \u03c1jk where not considered in the modelling approach discussed in [42]. The N-dimensional observed output is Y (t) := (Y 1(t), . . . , Y N(t))\u22a4= (X1 2(t) \u2212X1 3(t), . . . , XN 2 (t) \u2212XN 3 (t))\u22a4, t \u2208[0, T]. (3) Not only the excitation-inhibition-ratio A/B is relevant for epileptic behaviour, but also the coupling strengths and directions between neural groups play a crucial role [42]. This is illustrated in Figure 2, where simulated activity of N = 4 neural populations under different coupling regimes is shown. In the left panels, no coupling occurs, i.e., all \u03c1jk are set to zero. In the middle and right panels, there is a unidirectional (cascade) coupling structure (illustrated in Figure 3a), i.e, \u03c112 = \u03c123 = \u03c134 = 1, for different coupling strengths K. The activity of a passive site (no epileptic spikes occur without input from other populations) strongly depends on that of an active site (epileptic spikes occur without input from other populations). In Population 1, the excitation-inhibitionratio is increased by setting A1 = 3.6 to obtain spiking activity. In the remaining populations, the standard values of Table 1, \u00b5 = 90 and \u03c3 = 500 are used for each component. When there is no coupling among the populations, no activation of Populations 2\u20134 occurs (left panels). Introducing coupling and setting the coupling strength parameters to K12 = K23 = K34 = 300 (central panels) leads to a dependence of the activity of Populations 2\u20134 on that of Population 1. When the coupling is strong enough (K12 = K23 = K34 = 500, right panels), rhythmic synchronization occurs. A similar behaviour for two populations has also been observed for the original JR-NMM (cf. Figure 6 in [42]). 2.3 Formulation as a stochastic Hamiltonian type system The multi-population JR-NMM can be formulated as a damped stochastic Hamiltonian system with non-linear displacement, similarly to [1] for N = 1. This allows for a more compact formulation and constitutes the basis for the derivation of the numerical method in Section 3. 
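Before moving to the Hamiltonian formulation, the network objects of the cascade example in Figure 2 can be encoded as follows (a small R sketch): rho[j, k] = 1 represents a directed edge from population j to population k and K[j, k] the corresponding coupling strength.

N   <- 4
rho <- matrix(0, N, N); rho[1, 2] <- rho[2, 3] <- rho[3, 4] <- 1
K   <- matrix(0, N, N); K[1, 2] <- K[2, 3] <- K[3, 4] <- 500
# Coupling input entering the fifth equation of population k, given the vector
# x1 = (X_1^1, ..., X_1^N) of the current first coordinates of all populations:
coupling_input <- function(k, x1, rho, K) sum(rho[, k] * K[, k] * x1)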
First, let us decompose the k-th process as Xk := (Qk, P k)\u22a4with 3-dimensional components Qk = (Xk 1 , Xk 2 , Xk 3 )\u22a4and P k = (Xk 4 , Xk 5 , Xk 6 )\u22a4. Then, let us denote the 3-dimensional Wiener process in the k-th population by W k = (W k 4 , W k 5 , W k 6 )\u22a4and the corresponding 3\u00d73-dimensional diffusion matrix by \u03a3k = diag[\u03f5k, \u03c3k, \u03f5k]. The Hamiltonian type formulation of the k-th population is d \uf8eb \uf8edQk(t) P k(t) \uf8f6 \uf8f8= \uf8eb \uf8ed \u2207P Hk \u0000Qk(t), P k(t) \u0001 \u2212\u2207QHk \u0000Qk(t), P k(t) \u0001 \u22122\u0393kP k(t) + Gk(Q(t)) \uf8f6 \uf8f8dt + \uf8eb \uf8edO3 \u03a3k \uf8f6 \uf8f8dW k(t). (4) System (4) consists of a Hamiltonian part defined by the Hamiltonian function Hk : R6 \u2192R+ 0 given by Hk(Qk, P k) := 1 2 \u0010\r \rP k\r \r2 + \r \r\u0393kQk\r \r2\u0011 , 6 \fA. No connections B. Cascade network, K = 300 C. Cascade network, K = 500 Population 1 Population 2 Population 3 Population 4 0.0 2.5 5.0 7.5 10.0 0.0 2.5 5.0 7.5 10.0 0.0 2.5 5.0 7.5 10.0 0 5 10 15 0 5 10 15 0 5 10 15 0 5 10 15 t [s] Y [mV] Figure 2: Cascade network of four populations with one active population. Simulated paths of Y in eq. (3) of N = 4 neural populations. The standard values of Table 1, \u00b5 = 90 and \u03c3 = 500 are used, except for Population 1 where A1 = 3.6 to make it active. Panel A: \u03c1jk = 0, j, k = 1, . . . , 4, j \u0338= k. Panels B-C: \u03c112, \u03c123, \u03c134 = 1. The coupling strength parameters K12, K23 and K34 equal 300 (Panel B) and 500 (Panel C). with gradients \u2207P Hk \u0000Qk(t), P k(t) \u0001 = P k(t) and \u2207QHk \u0000Qk(t), P k(t) \u0001 = \u03932 kQk(t), a damping part determined by the 3 \u00d7 3-dimensional diagonal matrix \u0393k = diag[ak, ak, bk], and a non-linear displacement and coupling term Gk : R3N \u2192R3 given by Gk(Q(t)) = \uf8eb \uf8ec \uf8ec \uf8ec \uf8ec \uf8ed Akaksig \u0000Xk 2 (t) \u2212Xk 3 (t) \u0001 Akak \u0010 \u00b5k + C2,ksig \u0000C1,kXk 1 (t) \u0001 + N P j=1,j\u0338=k \u03c1jkKjkXj 1(t) \u0011 BkbkC4,ksig(C3,kXk 1 (t)) \uf8f6 \uf8f7 \uf8f7 \uf8f7 \uf8f7 \uf8f8 , where Q = (Q1, . . . , QN)\u22a4= (X1 1, X1 2, X1 3, . . . , XN 1 , XN 2 , XN 3 )\u22a4. A compact Hamiltonian type formulation for the N coupled populations is then obtained by the process X := (Q, P)\u22a4with Q as above, 3N-dimensional component P = (P 1, . . . , P N)\u22a4= (X1 4, X1 5, X1 6, . . . , XN 4 , XN 5 , XN 6 )\u22a4 and 3N-dimensional Wiener process W = (W 1, . . . , W N)\u22a4= (W 1 4 , W 1 5 , W 1 6 , . . . , W N 4 , W N 5 , W N 6 )\u22a4. In particular, the whole 6N-dimensional stochastic Hamiltonian system is given by d \uf8eb \uf8edQ(t) P(t) \uf8f6 \uf8f8= \uf8eb \uf8ed P(t) \u2212\u03932Q(t) \u22122\u0393P(t) + G(Q(t)) \uf8f6 \uf8f8dt + \uf8eb \uf8edO3N \u03a3 \uf8f6 \uf8f8dW(t), (5) where \u0393 = diag[a1, a1, b1, . . . , aN, aN, bN] and \u03a3 = diag[\u03f51, \u03c31, \u03f51, . . . , \u03f5N, \u03c3N, \u03f5N] are 3N \u00d7 3Ndimensional diagonal matrices, and G : R3N \u2192R3N is given by G(Q) = (G1(Q), . . . , GN(Q))\u22a4. Remark 2.1 Including the coupling terms into the displacement function G : R3N \u2192R3N enables closed-form expressions of all required components of the splitting framework discussed in Section 3 for arbitrary N. 7 \f3 Simulation of the multi-population JR-NMM: Splitting integrators Let 0 = t0 < . . . < tm = T be a partition of the time interval [0, T] with equidistant time steps \u2206= ti+1 \u2212ti for i = 0, . . . , m \u22121, m \u2208N. 
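The displacement and coupling map G is the only model-specific non-linearity entering the splitting scheme derived below. As an illustration (a minimal R sketch, not the authors' released code), G for N coupled populations, assuming the usual Jansen-Rit logistic form of the sigmoid, sig(v) = nu_max/(1 + exp(gamma(v0 - v))), and hypothetical argument names for the parameter vectors:

# Minimal sketch (not the authors' code): sigmoid and displacement map G(Q)
# for N coupled populations, assuming the logistic sigmoid with Table 1 values.
sig <- function(v, nu_max = 5, v0 = 6, gamma = 0.56) nu_max / (1 + exp(gamma * (v0 - v)))

# Q: vector (X1_1, X1_2, X1_3, ..., XN_1, XN_2, XN_3); rho: N x N 0/1 matrix;
# K: N x N coupling strengths; A, B, a, b, mu, C1..C4: length-N parameter vectors.
G <- function(Q, A, B, a, b, mu, C1, C2, C3, C4, rho, K) {
  N  <- length(Q) / 3
  X1 <- Q[seq(1, 3 * N, by = 3)]            # first coordinate of each population
  out <- numeric(3 * N)
  for (k in 1:N) {
    x1 <- Q[3 * (k - 1) + 1]; x2 <- Q[3 * (k - 1) + 2]; x3 <- Q[3 * (k - 1) + 3]
    coupling <- sum((rho[, k] * K[, k] * X1)[-k])   # sum over j != k of rho_jk K_jk X1^j
    out[3 * (k - 1) + 1] <- A[k] * a[k] * sig(x2 - x3)
    out[3 * (k - 1) + 2] <- A[k] * a[k] * (mu[k] + C2[k] * sig(C1[k] * x1) + coupling)
    out[3 * (k - 1) + 3] <- B[k] * b[k] * C4[k] * sig(C3[k] * x1)
  }
  out
}

Keeping the coupling inside G, as noted in Remark 2.1, is what allows the linear-stochastic subsystem of the splitting to remain population-wise decoupled.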
As the solution X(ti) of system (5) at the discrete time points ti cannot be simulated exactly, we derive reliable approximations e X(ti) of X(ti), extending the numerical splitting method proposed in [1] for single neural populations to the multi-population case. The splitting approach consists of the following three steps [6, 24]: (i) Split the equation for X(t) into d \u2208N exactly solvable subequations for X[l](t), l = 1, . . . , d; (ii) Derive the exact solutions (flows) of the d subequations over an increment of length \u2206, i.e., for initial value X[l](ti), the \u2206-flow is \u03c6[l] \u2206(X[l](ti)) = X[l](ti+1); (iii) Compose the d exact solutions in a suitable way. Prominent methods are the Lie-Trotter [40] and Strang [36] compositions, given by e XLT(ti+1) = \u0010 \u03c6[1] \u2206\u25e6. . . \u25e6\u03c6[d] \u2206 \u0011 \u0010 e XLT(ti) \u0011 , e XS(ti+1) = \u0010 \u03c6[1] \u2206/2 \u25e6. . . \u25e6\u03c6[d\u22121] \u2206/2 \u25e6\u03c6[d] \u2206\u25e6\u03c6[d\u22121] \u2206/2 \u25e6. . . \u25e6\u03c6[1] \u2206/2 \u0011 \u0010 e XS(ti) \u0011 , respectively. Note that the order of the compositions can be changed. Despite having the same strong order of convergence, Strang compositions have been shown to outperform Lie-Trotter methods, yielding a better approximation of the true solution and preserving the model dynamics for larger time steps [1, 7, 9, 10, 17, 28, 41]. This is possibly due to the symmetry of the Strang splitting [1], its smaller mean biases [41] and its higher-order one-step predictions [28]. We therefore apply the Strang splitting for system (5), generalizing the Ornstein-Uhlenbeck integrator presented in [1]. Their Wiener integrator can be extended analogously. Step (i): Choice of subequations We separate the non-linear term G(Q(t)) of system (5), and consider the subequations d \uf8eb \uf8edQ[1](t) P [1](t) \uf8f6 \uf8f8= \uf8eb \uf8ed P [1](t) \u2212\u03932Q[1](t) \u22122\u0393P [1](t) \uf8f6 \uf8f8dt + \uf8eb \uf8edO3N \u03a3 \uf8f6 \uf8f8dW(t), (6) d \uf8eb \uf8edQ[2](t) P [2](t) \uf8f6 \uf8f8= \uf8eb \uf8ed 03N G(Q[2](t)) \uf8f6 \uf8f8dt. (7) Step (ii): Derivation of exact solutions Write subequation (6) as dX[1](t) = FX[1](t)dt + \u03a30dW(t), where F = \uf8eb \uf8edO3N I3N \u2212\u03932 \u22122\u0393 \uf8f6 \uf8f8, \u03a30 = \uf8eb \uf8edO3N \u03a3 \uf8f6 \uf8f8. Let X[1](ti) = (Q[1](ti), P [1](ti))\u22a4denote the solution of system (6) at time ti. The exact solution at time ti+1 is then given by X[1](ti+1) = \u03c6[1] \u2206 \u0010 X[1](ti) \u0011 = eF \u2206X[1](ti) + \u03bei(\u2206), 8 \fwhere \u03bei(\u2206), i = 1, . . . , m, are independent 6N-dimensional Gaussian random vectors with zero mean E[\u03bei(\u2206)] = 06N and covariance matrix given by Cov(\u2206) = \u2206 Z 0 eF (\u2206\u2212s)\u03a30\u03a3\u22a4 0 \u0010 eF (\u2206\u2212s)\u0011\u22a4 ds. For general matrices F, the exponential matrix eF \u2206and the covariance matrix Cov(\u2206) may be costly to compute. 
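For the present model this cost is avoided: since Gamma and Sigma are diagonal, the closed-form blocks of exp(F Delta) and Cov(Delta) given in the next display can be evaluated componentwise. As an illustration (a minimal sketch, not the authors' code), one Strang step (8) for a single population (N = 1), with Gamma = diag[a, a, b], Sigma = diag[eps, sigma, eps], using the sigmoid sig() of the sketch above; the function name make_strang_step and its argument list are illustrative:

# Minimal sketch (not the authors' code) of one Strang step for N = 1,
# using the closed-form blocks of exp(F*Delta) and Cov(Delta) derived below.
make_strang_step <- function(Delta, a, b, A, B, C1, C2, C3, C4, mu, sigma, eps) {
  g <- c(a, a, b); s <- c(eps, sigma, eps); E <- exp(-g * Delta)
  theta  <- E * (1 + g * Delta);  kappa  <- E * Delta
  thetap <- -g^2 * E * Delta;     kappap <- E * (1 - g * Delta)
  eF  <- rbind(cbind(diag(theta),  diag(kappa)),
               cbind(diag(thetap), diag(kappap)))
  cQQ <- 0.25 * s^2 / g^3 * (1 + kappa * thetap - theta^2)
  cQP <- 0.50 * s^2 * kappa^2
  cPP <- 0.25 * s^2 / g   * (1 + kappa * thetap - kappap^2)
  Cov <- rbind(cbind(diag(cQQ), diag(cQP)),
               cbind(diag(cQP), diag(cPP)))
  Lchol <- t(chol(Cov))                         # Cov = Lchol %*% t(Lchol)
  G1 <- function(q) c(A * a * sig(q[2] - q[3]),
                      A * a * (mu + C2 * sig(C1 * q[1])),
                      B * b * C4 * sig(C3 * q[1]))
  half <- function(x) x + (Delta / 2) * c(0, 0, 0, G1(x[1:3]))
  function(x) {                                 # one step of Algorithm 1 below
    x <- half(x)                                          # phi^[2]_{Delta/2}
    x <- as.vector(eF %*% x) + as.vector(Lchol %*% rnorm(6))  # phi^[1]_{Delta}
    half(x)                                               # phi^[2]_{Delta/2}
  }
}

Iterating the returned one-step map from an initial value and recording X2 - X3 then gives a simulated output path Y as in eq. (2).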
However, due to the sparseness of F coming from the damped Hamiltonian term, both eF \u2206and Cov(\u2206) can be expressed in closed-form for arbitrary N as eF \u2206= \uf8eb \uf8ede\u2212\u0393\u2206(I3N + \u0393\u2206) e\u2212\u0393\u2206\u2206 \u2212\u03932e\u2212\u0393\u2206\u2206 e\u2212\u0393\u2206(I3N \u2212\u0393\u2206) \uf8f6 \uf8f8=: \uf8eb \uf8ed\u03d1(\u2206) \u03ba(\u2206) \u03d1\u2032(\u2206) \u03ba\u2032(\u2206) \uf8f6 \uf8f8 and Cov(\u2206) = \uf8eb \uf8ed 1 4\u0393\u22123\u03a32 \u0000I3N + \u03ba(\u2206)\u03d1\u2032(\u2206) \u2212\u03d12(\u2206) \u0001 1 2\u03a32\u03ba2(\u2206) 1 2\u03a32\u03ba2(\u2206) 1 4\u0393\u22121\u03a32 \u0000I3N + \u03ba(\u2206)\u03d1\u2032(\u2206) \u2212\u03ba\u20322(\u2206) \u0001 \uf8f6 \uf8f8. Denote the solution of the second subequation (7) at time ti by X[2](ti) = (Q[2](ti), P [2](ti))\u22a4. Since the first component of the right side of system (7) is zero and the second component only depends on Q, the exact solution at time ti+1 is given by X[2](ti+1) = \u03c6[2] \u2206 \u0010 X[2](ti) \u0011 = X[2](ti) + \u2206 \uf8eb \uf8ed 03N G(Q[2](ti)) \uf8f6 \uf8f8. Step (iii): Composition of exact solutions The Strang splitting integrator for (5) is then given by e XS(ti+1) = \u0010 \u03c6[2] \u2206/2 \u25e6\u03c6[1] \u2206\u25e6\u03c6[2] \u2206/2 \u0011 \u0010 e XS(ti) \u0011 . (8) Using (8), a path of system (5) can be simulated by applying Algorithm 1. An analogous analysis as in [1] yields that the derived splitting scheme is mean-square convergent of order 1 and preserves the qualitative behaviour of the solution of (5), e.g., amplitudes of oscillations, marginal invariant densities and spectral densities. Algorithm 1 Strang splitting scheme for the N-population stochastic JR-NMM Input: Initial value X0, step size \u2206, number of time steps m in [0, T] and model parameters Output: Approximated path of (X(t))t\u2208[0,T ] at discrete times ti = i\u2206, i = 0, . . . , m, tm = T. 1: Set e XS(t0) = X0 2: for i = 0 : (m \u22121) do 3: Set X[2] = e XS(ti) + \u2206 2 \uf8eb \uf8ed 03N G( e QS(ti)) \uf8f6 \uf8f8 4: Set X[1] = eF \u2206X[2] + \u03bei(\u2206) 5: Set e XS(ti+1) = X[1] + \u2206 2 \uf8eb \uf8ed03N G(Q[1]) \uf8f6 \uf8f8 6: end for 7: Return e XS(ti), i = 0, . . . , m. 9 \fRemark 3.1 The Strang splitting integrator e XS2(ti+1) = \u0010 \u03c6[1] \u2206/2 \u25e6\u03c6[2] \u2206\u25e6\u03c6[1] \u2206/2 \u0011 \u0010 e XS2(ti) \u0011 is another possible choice. However, this composition requires to evaluate the more costly stochastic subsystem twice at every iteration step, generating twice as many pseudo-random numbers. 4 Adjusted SMC-ABC for network inference In this section, we first revise the acceptance-rejection, reference-table acceptance-rejection and SMC-ABC schemes. Then, as the considered parameters of interest are both real and binary, we propose an adjusted SMC-ABC scheme, where the Gaussian proposal sampler within the \u201ccanonical\u201d SMC-ABC algorithm is modified to account for both continuous (real-valued) model parameters and discrete ({0, 1}-valued) network parameters. Finally, we describe all underlying key ingredients (numerical scheme for data generation, summary statistics, proposal samplers, etc.). The resulting algorithm further develops the spectral density-based and measure-preserving ABC method proposed in [8] for ergodic SDEs with a univariate observed output process to multidimensional SDEs with coupled ergodic components, partially observed via N simultaneously recorded univariate time series. 
On the one hand, we replace the initially proposed reference table acceptance-rejection ABC with the more computationally efficient adjusted SMC-ABC (cf. Section 4.2). On the other hand, we generalise the summaries and distances to cope with the N > 1 simultaneously observed coupled time series (cf. Section 4.3). 4.1 ABC schemes Let \u03b8 = (\u03b81, . . . , \u03b8d) denote the parameter vector to be inferred from the observed data y. Denoting by \u03c0(\u03b8) the prior and by L(y|\u03b8) the likelihood function, the posterior distribution \u03c0(\u03b8|y) satisfies \u03c0(\u03b8|y) \u221d\u03c0(\u03b8)L(y|\u03b8). In general, the likelihood function, and thus the true posterior, are not available for SDEs like (5). The idea of ABC is to replace the likelihood via a large amount of \u201csynthetic\u201d datasets simulated from the model, obtaining an approximate posterior \u03c0ABC(\u03b8|y) targeting the true posterior \u03c0(\u03b8|y). Several ABC algorithms have been proposed throughout the years, see [34] for an overview. Among all, the simplest is acceptance-rejection ABC [4, 23, 34], consisting of three steps: (a) Sample \u03b8 \u2032 from the prior \u03c0(\u03b8); (b) Conditioned on \u03b8 \u2032, simulate a synthetic dataset \u02dc y\u03b8\u2032 (here a path) from the observed output Y ; (c) Keep the sampled value \u03b8 \u2032 if the distance D(\u00b7, \u00b7) between a vector of summary statistics s(\u00b7) of the observed and simulated data is smaller than a threshold \u03b4 > 0, i.e., D(s(y), s(\u02dc y\u03b8\u2032)) < \u03b4. Steps (a)-(c) are then repeated until M draws are accepted, which typically happens after n \u226bM drawings. This leads to the approximate acceptance-rejection ABC posterior \u03c0(\u03b8|y) \u2248\u03c0\u03b4 ABC(\u03b8|s(y)) \u221d Z 1{D(s(y),s(\u02dc y\u03b8\u2032))<\u03b4}\u03c0(\u03b8)L(s(\u02dc y\u03b8\u2032)|\u03b8)ds. Instead of keeping only the samples whose distance is smaller than some apriori fixed threshold \u03b4, the reference table acceptance-rejection ABC scheme [8, 12] first produces a reference table {\u03b8j, Dj}, j = 1 . . . , n, and then selects the threshold level \u03b4 as the q-th percentile of the calculated distances Dj. This procedure has the computational advantage of fixing the number of drawings n in advance. Acceptance-rejection ABC and its variant are computationally inefficient by construction, as the proposals \u03b8\u2032 are sampled from the prior distribution throughout. To tackle this, here we consider SMC-ABC [3, 13, 35], a sequential algorithm using proposal samplers constructed based on the kept sampled values (called particles) at the previous iteration r = 1, . . . , rlast, yielding 10 \fsamplers defined on more likely parameter regions. This increases the acceptance probability of each particle, yielding intermediate approximate posterior distributions during several iterations which move closer and closer to the desired posterior. In particular, SMC-ABC works as follows: At iteration one, acceptance-rejection ABC is run, sampling from the prior distribution until M particles \u03981 = (\u03b8(1) 1 , . . . , \u03b8(M) 1 ) have been accepted, i.e., have yielded a distance smaller than an initial threshold \u03b41. Then, the initial weights are set to w1 = (w(1) 1 , . . . , w(M) 1 ) = (1/M, . . . , 1/M). 
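Before turning to the later iterations, the accept/reject loop of steps (a)-(c), in its reference-table form with the threshold chosen as a percentile of the simulated distances, can be sketched as follows (illustrative only, not the authors' code; rprior, simulate_Y, summaries and distance are placeholders for the ingredients described in Section 4.3):

# Minimal sketch (not the authors' code) of reference-table acceptance-rejection ABC.
# q is the acceptance proportion (e.g. 0.01 for the 1st percentile of the distances).
ref_table_abc <- function(n, q, s_obs, rprior, simulate_Y, summaries, distance) {
  theta <- vector("list", n); D <- numeric(n)
  for (j in 1:n) {
    theta[[j]] <- rprior()                            # step (a): draw from the prior
    y_sim      <- simulate_Y(theta[[j]])              # step (b): simulate synthetic data
    D[j]       <- distance(s_obs, summaries(y_sim))   # step (c): compare summaries
  }
  delta <- quantile(D, probs = q)                     # threshold from the reference table
  list(kept = theta[D < delta], delta = delta)
}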
At iteration r > 1, a particle \u03b8 is initially sampled from the set of kept candidates \u0398r\u22121 of the previous iteration with the corresponding weights wr\u22121 and then perturbed to a value \u03b8\u2217\u223cK(\u00b7|\u03b8), where K is a suitable perturbation kernel. Such kernel is also known as proposal sampler or important sampler, and is commonly assumed to be Gaussian [13, 16], even if other possibilities have been proposed [27]. Here, we consider the optimized multivariate normal sampler as proposed in [16]. In particular, for given j, l \u2208{1, . . . , M} at iteration r, the perturbation kernel is given by Kr \u0010 \u03b8(j) r \f \f \f\u03b8(l) r\u22121 \u0011 = (2\u03c0)\u2212d/2 \u0010 det \u02c6 \u03a3r \u0011\u22121/2 exp \u0012 \u22121 2 h \u03b8(j) r \u2212\u03b8(l) r\u22121 i\u22a4\u02c6 \u03a3\u22121 r h \u03b8(j) r \u2212\u03b8(l) r\u22121 i\u0013 , (9) where \u02c6 \u03a3r is twice the weighted empirical covariance matrix obtained from the previous population \u0398r\u22121 and \u02c6 \u03a3\u22121 r denotes its inverse. Synthetic data \u02dc y\u03b8\u2217are then simulated conditioned on the perturbed \u03b8\u2217, which is accepted if d(s(y), s(\u02dc y\u03b8\u2217)) < \u03b4r, with \u03b4r < \u03b4r\u22121. This is repeated until M particles \u0398r = (\u03b8(1) r , . . . , \u03b8(M) r ) have been accepted, after which the corresponding important weights ( \u02dc w(1) r , . . . , \u02dc w(M) r ) are computed as \u02dc w(j) r = \u03c0 \u0010 \u03b8(j) r \u0011 / M X l=1 w(l) r\u22121Kr \u0010 \u03b8(j) r \f \f \f\u03b8(l) r\u22121 \u0011 and then normalised via w(j) r = \u02dc w(j) r / M X l=1 \u02dc w(l) r . This procedure is then repeated over several iterations until a suitable stopping criterion is reached. The final SMC-ABC posterior is obtained by sampling from the particles kept at the last iteration rlast with probabilities given by the corresponding normalized weights. 4.2 Adjusted SMC-ABC algorithm for real and binary parameters Here, we are interested in estimating both continuous (real-valued) model parameters and discrete ({0, 1}-valued) network parameters. To tackle this, we adapt the classical SMC-ABC algorithm with Gaussian proposal samplers to the specific case of additional {0, 1}-valued parameters, noting that the scheme can be adjusted to other distributions with finite support as well. The method is reported in Algorithm 2 and abbreviated as nSMC-ABC, where the \u201cn\u201d indicates that it includes network estimation. We split the parameter vector \u03b8 in continuous and discrete parts as \u03b8 := (\u03b8c, \u03b8d), where \u03b8c contains continuous (real-valued) model parameters, while \u03b8d consists of the discrete ({0, 1}-valued) network parameters. We denote by cn and dn the dimensions of these two vectors, respectively, and their entries by \u03b8k c , k = 1, . . . , cn, and \u03b8k d, k = 1, . . . , dn. We write \u0398c,r and \u0398d,r to denote the M kept continuous (resp. discrete) particles at iteration r, where each particle \u03b8(j) r is represented as \u03b8(j) r = \u0000\u03b8(j) c,r, \u03b8(j) d,r \u0001 , j = 1, . . . , M. We denote by \u03c0c and \u03c0d the prior distributions of \u03b8c and \u03b8d, and by Kc r and Kd r the perturbation kernels applied at the r-th iteration, respectively. 11 \fAt iteration one, we run the canonical acceptance-rejection ABC, sampling from the prior (lines 2 \u221210 of Algorithm 2). 
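The continuous block of the adjusted algorithm reuses this machinery: at iterations r > 1, continuous particles are perturbed with the Gaussian kernel (9) and reweighted as above (lines 17-18 and 26 of Algorithm 2). A minimal sketch (not the authors' code), with a hand-coded Gaussian density so that it is self-contained; the function names are illustrative:

# Minimal sketch (not the authors' code): optimized Gaussian kernel of eq. (9)
# with twice the weighted empirical covariance, plus the importance weights.
dmvn_hand <- function(x, mu, Sigma) {
  d <- length(mu); z <- x - mu
  exp(-0.5 * drop(t(z) %*% solve(Sigma, z))) / sqrt((2 * pi)^d * det(Sigma))
}
perturb_cont <- function(th, Sigma_r)                       # line 18 of Algorithm 2
  th + drop(t(chol(Sigma_r)) %*% rnorm(length(th)))

# Theta_prev: M x d matrix of particles at iteration r-1, w_prev: their weights,
# prior_dens: prior density of the continuous parameters.
smc_weights <- function(Theta_new, Theta_prev, w_prev, prior_dens) {
  Sigma_r <- 2 * cov.wt(Theta_prev, wt = w_prev)$cov        # twice weighted covariance
  w <- apply(Theta_new, 1, function(th) {
    denom <- sum(w_prev * apply(Theta_prev, 1, function(old) dmvn_hand(th, old, Sigma_r)))
    prior_dens(th) / denom
  })
  w / sum(w)                                                # normalised weights
}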
At iteration r > 1, we draw a particle \u03b8(j) c,r for the continuous parameters, which we sample from the weighted set {\u0398c,r\u22121, wr\u22121} (line 17 of Algorithm 2). However, differently from the classical SMC-ABC sampler, each entry of a particle \u03b8(j) d,r for the binary parameters is drawn from a Bernoulli distribution with probability given by the sample mean derived from the previous population (line 19 of Algorithm 2). Then, instead of jointly perturbing \u03b8(j) r = (\u03b8(j) c,r, \u03b8(j) d,r) with a joint Gaussian proposal kernel Kr, as done in the classical SMC-ABC, here we perturb them independently with respect to some continuous and discrete perturbation kernels Kc r and Kd r (lines 18 and 20 of Algorithm 2). This independence assumption comes from the fact that there is no natural relation between the network and continuous parameters, and it turns out to be beneficial, as it reduces the computational cost required to perturb a particle. Algorithm 2 Adjusted SMC-ABC for network inference (nSMC-ABC) Input: Summaries s(y) of the observed data y, prior distributions \u03c0c and \u03c0d, perturbation kernels Kc r and Kd r, number of kept samples per iteration M, initial threshold \u03b41 Output: Samples from the nSMC-ABC posterior 1: Set r = 1 2: for j = 1 : M do 3: repeat 4: Sample \u03b8d from \u03c0d and \u03b8c from \u03c0c, and set \u03b8 = (\u03b8c, \u03b8d) 5: Conditioned on \u03b8, simulate a synthetic dataset \u02dc y\u03b8 from the observed output Y 6: Compute the summaries s(\u02dc y\u03b8) 7: Calculate the distance D = d \u0000s(y), s(\u02dc y\u03b8) \u0001 8: until D < \u03b41 9: Set \u03b8(j) d,1 = \u03b8d and \u03b8(j) c,1 = \u03b8c 10: end for 11: Initialize the weights by setting each entry of w1 = (w(1) 1 , . . . , w(M) 1 ) to 1/M 12: repeat 13: Set r = r + 1 14: Determine \u03b4r < \u03b4r\u22121 15: for j = 1 : M do 16: repeat 17: Sample \u03b8c from the weighted set {\u0398c,r\u22121, wr\u22121} 18: Perturb \u03b8c to obtain \u03b8\u2217 c from Kc r(\u00b7|\u03b8c) 19: Sample \u03b8k d, k = 1, . . . , dn, from Bernoulli(\u02c6 pk r), where \u02c6 pk r = 1 M M P l=1 \u03b8k,(l) d,r\u22121 20: Perturb \u03b8d = (\u03b81 d, . . . , \u03b8dn d ) to obtain \u03b8\u2217 d from Kd r(\u00b7|\u03b8d) 21: Conditioned on \u03b8\u2217= (\u03b8\u2217 c, \u03b8\u2217 d), simulate a dataset \u02dc y\u03b8\u2217from the observed output Y 22: Compute the summaries s(\u02dc y\u03b8\u2217) 23: Calculate the distance D = d \u0000s(y), s(\u02dc y\u03b8\u2217) \u0001 24: until D < \u03b4r 25: Set \u03b8(j) d,r = \u03b8\u2217 d and \u03b8(j) c,r = \u03b8\u2217 c 26: Set \u02dc w(j) r = \u03c0c \u0010 \u03b8(j) c,r \u0011 / M P l=1 w(l) r\u22121Kc r \u0010 \u03b8(j) c,r \f \f \f\u03b8(l) c,r\u22121 \u0011 27: end for 28: Normalise the weights w(j) r = \u02dc w(j) r / M P l=1 \u02dc w(l) r , for j = 1, . . . , M 29: until stopping criterion is reached 30: Return the final \u0398d,rlast and {\u0398c,rlast, wrlast}. Remark 4.1 The proposed nSMC-ABC Algorithm 2 is related to ABC for model selection as discussed in [39], in the sense that each possible network (obtained for a given combination of the 12 \fbinary parameters) may be interpreted as a model. Their algorithm samples real-valued parameter candidates conditioned on a given model and obtains the posterior distribution of the model based on the number of particles kept under each model. 
Thus, it is suitable for a low number of models, while in our case we would obtain a prohibitive number of possible models, namely 2N(N\u22121) (which equals, e.g., 4096 for N = 4). 4.3 Ingredients of the nSMC-ABC algorithm The accuracy and performance of any of the discussed ABC algorithms depend on various aspects, such as the numerical method used to simulate the data, the data summaries, the distances, the threshold values, the proposal kernels, etc. In the following, we describe the choice of these key ingredients. Choice of simulation method We apply the Strang splitting integrator (8) described in Algorithm 1 to simulate datasets from the observed output process Y (3) of SDE (5). Choice of summary statistics For the marginal behaviour of each population Y k, k = 1, . . . , N, we use the summary statistics proposed in [8]. In particular, taking advantage of the underlying geometric ergodicity, we map the N-dimensional time series y to the N estimated marginal invariant densities and spectral densities, the latter describing the dynamics in the frequency domain. Moreover, we include summaries capturing the interactions between populations, specifically the magnitude squared coherence (MSC), a statistic used to examine the similarity between two signals at various frequencies [38], and the cross-correlation as a function of lag. Let Rk denote the auto-correlation function of Y k and Rjk the cross-correlation function of Yj and Yk, that is Rk(\u03c4) = E[Y k(t)Y k(t + \u03c4)], k \u2208{1, . . . , N}, Rjk(\u03c4) = E[Y j(t)Y k(t + \u03c4)], j, k \u2208{1, . . . , N}, j \u0338= k, where \u03c4 denotes the time-lag. It holds that Rjk(\u03c4) = Rkj(\u2212\u03c4). The spectral density Sk of Y k is given by the Fourier transform of Rk(\u03c4), that is Sk(\u03bd) = F{Rk}(\u03bd) = \u221e Z \u2212\u221e Rk(\u03c4)e\u2212i2\u03c0\u03bd\u03c4 d\u03c4, k \u2208{1, . . . , N}, where \u03bd denotes the frequency. Similarly, the cross-spectral density Sjk of Y j and Y k is given by Sjk(\u03bd) = F{Rjk}(\u03bd) = \u221e Z \u2212\u221e Rjk(\u03c4)e\u2212i2\u03c0\u03bd\u03c4 d\u03c4. The MSC is defined as Zjk(\u03bd) := |Sjk(\u03bd)|2 Sj(\u03bd)Sk(\u03bd), j, k \u2208{1, . . . , N}, j \u0338= k, where | \u00b7 | denotes the magnitude. The summary functions are thus the marginal densities, denoted by fk, the marginal spectral densities Sk, k = 1, . . . , N, the symmetric MSCs Zjk, j, k = 1, . . . , N, k > j, and the nonsymmetric cross-correlations Rjk, j, k = 1, . . . , N, j \u0338= k. These functions are estimated from the available observations, see Section 4.4 for further details. Denoting the estimates by \u02c6 fk, \u02c6 Sk, \u02c6 Zjk and \u02c6 Rjk, respectively, the proposed summaries s(\u00b7) of a dataset y are defined as s(y) := n \u02c6 fk, \u02c6 Sk, \u02c6 Zjk, \u02c6 Rjk o . (10) 13 \fChoice of distance measure Following [8], as distances between the summary functions (10) of the observed and simulated datasets, we use the integrated absolute error (IAE) given by IAE(g1, g2) := Z R |g1(x) \u2212g2(x)| dx \u2208R+, approximated by rectangular integration. 
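As an illustration, the rectangular approximation of the IAE on a common, equidistant evaluation grid takes a few lines of R (a sketch, not the authors' code; the grid bounds and kernel density estimates in the commented example are illustrative):

# Minimal sketch (not the authors' code): integrated absolute error between two
# summary functions evaluated on the same equidistant grid (rectangular rule).
iae <- function(g1, g2, grid) sum(abs(g1 - g2)) * diff(grid)[1]

# Illustrative use with kernel density estimates on a common grid:
# f_obs <- density(y_obs, from = lo, to = hi, n = 512)
# f_sim <- density(y_sim, from = lo, to = hi, n = 512)
# iae(f_obs$y, f_sim$y, f_obs$x)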
The distance between the summaries of the observed dataset y and a simulated dataset \u02dc y\u03b8 is then defined as D(s(y), s(\u02dc y\u03b8)) := v1 1 N N X k=1 IAE( \u02c6 Sk, \u02dc Sk) + v2 1 N(N \u22121)/2 N X j=1,k>j IAE( \u02c6 Zjk, \u02dc Zjk) +v3 1 N(N \u22121) N X j,k=1,j\u0338=k IAE( \u02c6 Rjk, \u02dc Rjk) + v4 1 N N X k=1 IAE( \u02c6 fk, \u02dc fk), (11) where { \u02c6 fk, \u02c6 Sk, \u02c6 Zjk, \u02c6 Rjk} and { \u02dc fk, \u02dc Sk, \u02dc Zjk, \u02dc Rjk} denote the summaries of the observed and simulated datasets, respectively. The values vl \u22650, l = 1, 2, 3, 4, are weights guaranteeing that the different summary functions have a comparable impact on the distance measure (cf. Section 4.4). Choice of prior distributions We use independent uniform priors \u03b8k c \u223cU \u0000\u03b8k c,min, \u03b8k c,max \u0001 , k = 1, . . . , cn, for the continuous parameters and independent Bernoulli priors \u03b8k d \u223cBernoulli (p) , p = 1 2, k = 1, . . . , dn, for the discrete parameters. In particular, \u03c0c(\u03b8c) = cn Y k=1 \u03c0(\u03b8k c ) = cn Y k=1 1 \u03b8k c,max \u2212\u03b8k c,min , \u03c0d(\u03b8d) = dn Y k=1 \u03c0(\u03b8k d) = dn Y k=1 \u00121 2 \u0013\u03b8k d \u00121 2 \u00131\u2212\u03b8k d = 1 2dn . Choice of proposal samplers For the continuous parameters, we choose Kc r as in (9), i.e., the optimized Gaussian sampler [16]. A particle \u03b8c sampled at iteration r according to the normalized weights of wr\u22121 (line 17 of Algorithm 2) is thus perturbed to \u03b8\u2217 c, a realization of the multivariate normal distribution N(\u03b8c, \u02c6 \u03a3c,r), where \u02c6 \u03a3c,r is twice the weighted covariance matrix obtained from the previous population \u0398c,r\u22121 (line 18 of Algorithm 2). For the discrete parameters, a value \u03b8k d, k = 1, . . . , dn, sampled from a Bernoulli distribution at iteration r (line 19 of Algorithm 2) is either kept (with probability qstay) or perturbed to 1 \u2212\u03b8k d (line 20 of Algorithm 2). An explicit expression of the respective kernel Kd r is given by Kd r \u0010 \u03b8(j) d,r \f \f \f\u03b8(l) d,r\u22121 \u0011 = dn Y k=1 Kd,k r \u0010 \u03b8k,(j) d,r \f \f \f\u03b8k,(l) d,r\u22121 \u0011 = dn Y k=1 \u0010 pk,(l) r \u0011\u03b8k,(j) d,r \u0010 1 \u2212pk,(l) r \u00111\u2212\u03b8k,(j) d,r , (12) where pk,(l) r = ( qstay, if \u03b8k,(l) d,r\u22121 = 1 1 \u2212qstay, if \u03b8k,(l) d,r\u22121 = 0 . Throughout this work, the probability qstay is fixed across iterations (cf. Section 4.4). Alternatively, such probability could also vary, e.g., it may depend on some statistics of the previous population or the number of iterations. 14 \fChoice of threshold levels The initial threshold \u03b41 is obtained by a reference table acceptancerejection ABC pilot run. Under the given prior, we produce 104 distances and then choose \u03b41 as their median. For r > 1, the threshold \u03b4r is chosen as the median of the M distances computed at the previous iteration if the acceptance rate of particles at the previous iteration is larger than 1%. Otherwise \u03b4r is chosen as the 75th percentile of the M distances computed at iteration r \u22121. Choice of stopping criterion The algorithm is stopped after the acceptance rate of particles at a certain iteration has dropped below a prefixed threshold (cf. Section 4.4). 4.4 Implementation details The nSMC-ABC method is coded using the statistical software R [30], combined with the package Rcpp [15]. 
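Staying with the R implementation, the discrete part of the proposal (lines 19-20 of Algorithm 2) and the kernel density (12) can be sketched as follows (illustrative function names, not the authors' code; qstay is fixed to 0.9 as reported in Section 4.4):

# Minimal sketch (not the authors' code): Bernoulli resampling of the binary
# parameters from the previous population mean, followed by an independent flip.
propose_binary <- function(Theta_d_prev, q_stay = 0.9) {
  p_hat <- colMeans(Theta_d_prev)                       # line 19 of Algorithm 2
  theta <- rbinom(length(p_hat), size = 1, prob = p_hat)
  flip  <- rbinom(length(p_hat), size = 1, prob = 1 - q_stay)
  ifelse(flip == 1, 1 - theta, theta)                   # line 20 of Algorithm 2
}

# Kernel density of eq. (12) for a proposed binary vector given the vector it perturbs:
kd_density <- function(theta_new, theta_old, q_stay = 0.9) {
  p <- ifelse(theta_old == 1, q_stay, 1 - q_stay)
  prod(p^theta_new * (1 - p)^(1 - theta_new))
}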
Algorithm 1 is coded in C++ and then integrated into the R-Code for Algorithm 2. The for-loops of Algorithm 2 are parallelized using the packages doParallel and foreach. All experiments are run on a multiple core High-Performance-Cluster at the University of Klagenfurt. The summaries (10) are computed as follows: Estimates of the spectral densities and MSCs are obtained using smoothed periodogram estimators. We apply the R-function crossSpectrum from the package IRISSeismic to obtain the spectral densities \u02c6 Sk and the MSCs \u02c6 Zjk. Estimates of the densities \u02c6 fk and cross-correlation functions \u02c6 Rjk are obtained using the (Gaussian) kernel density estimator density and the R-function ccf from the stats-package, respectively. The weights vl, l = 1, 2, 3, 4, in the distance function (11) are obtained as follows: We set v1 = 1 and obtain v2, v3, v4 by dividing the average area below the spectral densities of the observed data by the average area below the MSCs, cross-correlation functions and densities of the observed data, respectively (the latter equals 1). The multivariate normal proposal sampler (9) is calculated using the R-package mvnfast, which provides computationally efficient tools (e.g., through C++ code) for the multivariate normal distribution. In particular, \u03b8\u2217 c \u223cN(\u03b8c, \u02c6 \u03a3c,r) is sampled using the R-function rmvn, and the multivariate normal density is computed with the R-function dmvn (lines 18 and 26 of Algorithm 2). The weighted covariance matrix \u02c6 \u03a3c,r is estimated with the R-function cov.wt from the stats-package. The probability qstay of the discrete perturbation kernel (12) is set to 0.9 throughout. The number M of kept particles per iteration and the threshold for the stopping criterion are set to 500 and 0.1%, respectively, in all experiments. Finally, sample code is provided at https://github.com/IreneTubikanec/networkABC 5 Network inference from simulated data In this section, we test the performance of the proposed nSMC-ABC algorithm on simulated datasets. All reference datasets are generated up to time T = 20 with time step 10\u22124, and then subsampled with observation time step \u2206= 2 \u00b7 10\u22123, yielding 104 discrete time observations of an N-dimensional process. 5.1 Parameter vector and prior distribution To reduce the number of continuous parameters in the model, we assume that the coupling strength between two populations decreases with increasing distance, where the distance is defined by the difference between their subindices. In particular, for j, k \u2208{1, . . . , N} with j \u0338= k, define Kjk := c|j\u2212k|\u22121L, where L > 0 is a coupling strength parameter and the parameter 0 \u226ac < 1 describes how fast the network coupling strength decreases with increasing distance between populations. As the model parameters Ak, k \u2208{1, . . . , N}, also play a central role in the (non-)activation of neural populations (see Section 2), we also aim to infer them. The parameters \u00b5k and \u03c3k are fixed to 90 and 500, respectively, and the remaining continuous parameters are fixed according to the 15 \fstandard values reported in Table 1. Hence, applying the nSMC-ABC Algorithm 2, the goal is to infer the (N + 2 + N(N \u22121))-dimensional parameter vector \u03b8 = (A1, . . . 
, AN, L, c | {z } \u03b8c , vec(P) | {z } \u03b8d ), where the discrete parameters \u03b8d = vec(P) are given by P = \uf8eb \uf8ec \uf8ec \uf8ec \uf8ec \uf8ec \uf8ec \uf8ec \uf8ec \uf8ec \uf8ec \uf8ed \u2212 \u03c112 . . . . . . \u03c11N \u03c121 \u2212 ... . . . . . . ... ... ... . . . . . . ... \u2212 \u03c1N\u22121N \u03c1N1 . . . . . . \u03c1NN\u22121 \u2212 \uf8f6 \uf8f7 \uf8f7 \uf8f7 \uf8f7 \uf8f7 \uf8f7 \uf8f7 \uf8f7 \uf8f7 \uf8f7 \uf8f8 , with \u03c1jk \u2208{0, 1}, j, k = 1, . . . , N, j \u0338= k. The prior distributions for \u03b8c are independent continuous uniforms with supports, Ak \u223cU(2, 4), k = 1, . . . , N, L \u223cU(100, 2000), c \u223cU(0.5, 1). The prior distributions for \u03b8d are independent Bernoulli distributions with equal probabilities, \u03c1jk \u223cBernoulli (p) , p = 1 2, j, k \u2208{1, . . . , N}, j \u0338= k. (13) 5.2 Estimation results We focus on N = 4 neural populations and consider three scenarios, i.e., a cascade structure (used to simulate the data in Figure 2), a partially and a fully connected network. The three settings are visualised in Figure 3. We apply Algorithm 2 to reference datasets (4 simultaneously observed univariate time series, each with 104 data points) generated under these scenarios to estimate \u03b8. Setting 1: Cascade structure The reference data y of the cascade scenario is simulated under \u03b8 = (A1, A2, A3, A4, L, vec(P)) = (3.6, 3.25, 3.25, 3.25, 700, vec(P)), (14) with \u03c112 = \u03c123 = \u03c134 = 1 and all other \u03c1jk = 0. The network is visualised in Figure 3a. It shows a similar behaviour as the data in the right column of Figure 2. In this scenario, the parameter c is not present and is excluded from \u03b8. Figure 4a shows the marginal posterior densities (blue lines) and the uniform prior densities (horizontal red lines) of the continuous parameters. The true parameter values used to generate the data are indicated by vertical solid green lines, and the dotted black lines are the weighted posterior means. They are given by ( \u02c6 A1, \u02c6 A2, \u02c6 A3, \u02c6 A4, \u02c6 L) = (3.615, 3.248, 3.252, 3.254, 705.6), closely resembling the true values in (14). In Figure 4b, we report the marginal posterior histograms of the discrete parameters, representing the coupling directions. The true values (vertical dashed green lines) coincide with the nSMC-ABC posterior modes. To summarize, Algorithm 2 yields unimodal and narrow marginal posterior densities of all relevant continuous model parameters, covering the true values. Moreover, the obtained nSMC-ABC marginal posteriors for the binary parameters are Bernoulli distributed with probabilities equal to the true parameters \u03c1jk, successfully recovering the entire network structure. 16 \fSetting: N = 4 neural populations 1 2 3 4 1 2 3 4 L L L 1 2 3 4 1 2 3 4 L L L L 1 2 3 4 1 2 3 4 L L L L cL side 4 af 9 (a) 1 2 3 4 L L L 1 2 3 4 1 2 3 4 L L L L cL side 4 af 9 (b) 1 2 3 4 L L L L L L cL cL cL cL c2L c2L side 6 af 9 (c) Figure 3: Network structures of N = 4 neural populations used in the simulations. (a) Cascade network structure with equal coupling strengths. (b) Partially connected network. (c) Fully connected network. Setting 2: Partially connected network The reference data y of the partially connected scenario (visualised in Figure 3b) is generated using \u03b8 = (A1, A2, A3, A4, L, c, vec(P)) = (3.6, 3.25, 3.25, 3.25, 700, 0.8, vec(P)), with \u03c112 = \u03c123 = \u03c134 = \u03c113 = \u03c132 = 1 and all other \u03c1jk = 0. 
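For concreteness, the coupling strengths implied by the parameterisation Kjk = c^(|j-k|-1) L, together with the adjacency of the partially connected setting, can be assembled as follows (a minimal sketch, not the authors' code):

# Minimal sketch (not the authors' code): coupling strength matrix of Section 5.1
# and the adjacency matrix rho of the partially connected scenario (Setting 2).
coupling_matrix <- function(N, L, c) {
  K <- outer(1:N, 1:N, function(j, k) c^(abs(j - k) - 1) * L)
  diag(K) <- NA                                   # no self-coupling
  K
}
K <- coupling_matrix(4, L = 700, c = 0.8)         # e.g. K[1, 3] = c * L = 560
rho_partial <- matrix(0, 4, 4)
rho_partial[cbind(c(1, 2, 3, 1, 3), c(2, 3, 4, 3, 2))] <- 1   # rho_12, rho_23, rho_34, rho_13, rho_32

In the cascade scenario only couplings with |j - k| = 1 are active, so c drops out, consistent with its exclusion from the parameter vector there.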
The parameter c is included since there is a connection from Population 1 to Population 3. Figure 5a shows the marginal posterior densities (blue lines) and the uniform prior densities (horizontal red lines) of the continuous parameters \u03b8k c , k = 1, . . . , 6. The true parameter values are shown by the vertical solid green lines, and the dotted black lines are the weighted nSMC-ABC posterior means, given by ( \u02c6 A1, \u02c6 A2, \u02c6 A3, \u02c6 A4, \u02c6 L, \u02c6 c) = (3.614, 3.222, 3.259, 3.203, 693.4, 0.726). As before, we obtain unimodal marginal posterior densities covering the true values for L and Ak, k = 1, . . . , 4. However, the inference for c is not satisfactory. This is probably due to the low information about c available in the data, as it only enters through the connection from Population 1 to Population 3. Figure 5b shows the nSMC-ABC marginal distributions of the discrete parameters \u03b8k d, k = 1, . . . , 12. As before, the posterior modes coincide with the true values of the \u03c1jk (vertical dashed green lines). Thus, the network structure is correctly identified. Setting 3: Fully connected network In the fully connected scenario (Figure 3c) the reference data y is obtained under \u03b8 = (A1, A2, A3, A4, L, c, vec(P)) = (3.25, 3.25, 3.25, 3.25, 700, 0.8, vec(P)), with \u03c1jk = 1 for all j, k. Figure 6a depicts the marginal posterior densities (blue lines) and the uniform prior densities (horizontal red lines) of the continuous parameters, the true parameter values (vertical green lines), and the weighted nSMC-ABC posterior means (dotted black lines), given by ( \u02c6 A1, \u02c6 A2, \u02c6 A3, \u02c6 A4, \u02c6 L, \u02c6 c) = (3.235, 3.247, 3.228, 3.246, 719.7, 0.791). 17 \fABC results A1 3.3 3.4 3.5 3.6 3.7 0 10 20 30 40 50 ABC results A2 3.0 3.1 3.2 3.3 3.4 3.5 0 5 10 15 20 ABC results A3 3.0 3.1 3.2 3.3 3.4 3.5 0 5 10 15 20 ABC results A4 3.0 3.1 3.2 3.3 3.4 3.5 0 5 10 15 20 ABC results L 500 600 700 800 900 1000 0.000 0.005 0.010 0.015 (a) ABC results \u03c112 0.0 0.5 1.0 ABC results \u03c113 ABC results \u03c114 ABC results \u03c121 ABC results \u03c123 ABC results \u03c124 ABC results \u03c131 0 1 0.0 0.5 1.0 ABC results \u03c132 0 1 ABC results \u03c134 0 1 ABC results \u03c141 0 1 ABC results \u03c142 0 1 ABC results \u03c143 0 1 (b) Figure 4: Cascade network. (a) nSMC-ABC marginal posterior densities (blue lines) compared to the prior densities (horizontal red lines) of the continuous parameters in the cascade scenario, see Figure 3a. The vertical green lines and the dotted black lines are the true parameter values and the weighted posterior means, respectively. (b) nSMC-ABC marginal posterior distributions of the network coupling parameters in the cascade scenario. The vertical dashed green lines are the true parameter values. 18 \fABC results A1 3.4 3.6 3.8 4.0 0 10 20 30 ABC results A2 2.5 3.0 3.5 4.0 0 2 4 ABC results A3 2.5 3.0 3.5 4.0 0 2 4 6 ABC results A4 2.5 3.0 3.5 4.0 0 2 4 6 ABC results L 500 750 1000 0.000 0.002 0.004 0.006 ABC results c 0.5 0.6 0.7 0.8 0.9 1.0 1.0 1.5 2.0 2.5 (a) ABC results \u03c112 0.0 0.5 1.0 ABC results \u03c113 ABC results \u03c114 ABC results \u03c121 ABC results \u03c123 ABC results \u03c124 ABC results \u03c131 0 1 0.0 0.5 1.0 ABC results \u03c132 0 1 ABC results \u03c134 0 1 ABC results \u03c141 0 1 ABC results \u03c142 0 1 ABC results \u03c143 0 1 (b) Figure 5: Partially connected network. 
(a) nSMC-ABC marginal posterior densities (blue lines) compared to the prior densities (horizontal red lines) of the continuous parameters in the partially connected scenario, see Figure 3b. The vertical green lines and the dotted black lines are the true parameter values and the weighted posterior means, respectively. (b) nSMC-ABC marginal posterior distributions of the network coupling parameters in the partially connected scenario. The vertical dashed green lines are the true parameter values.
Figure 6: Fully connected network. (a) nSMC-ABC marginal posterior densities (blue lines) compared to the prior densities (horizontal red lines) of the continuous parameters in the fully connected scenario, see Figure 3c. The vertical green lines and the dotted black lines are the true parameter values and the weighted posterior means, respectively. (b) nSMC-ABC marginal posterior distributions of the network coupling parameters in the fully connected scenario. The vertical dashed green lines are the true parameter values.
Figure 7: Acceptance rate and runtime as functions of the iterations r of Algorithm 2. (a) Simulated data, time measured in minutes. The vertical lines mark those iterations from which the correct network is inferred, i.e., the coupling parameters ρjk are correctly estimated via the posterior modes. The solid black, dotted red and dashed blue lines correspond to the cascade, partially connected and fully connected scenario, respectively. (b) EEG data with N = 4 channels, time measured in hours. The dotted red and solid black lines correspond to the before and during seizure periods, respectively. The vertical lines mark that iteration from which the network estimate does not change anymore.
As in the previous two scenarios, we obtain unimodal and narrow marginal nSMC-ABC posterior densities, covering the true values, for L and Ak, k = 1, . . . , 4. Compared to the partially connected case, we obtain slightly better inference for c. Similar to the cascade and partially connected scenarios, the network structure is correctly identified, see Figure 6b.
Acceptance rate, runtime and effective sample size In Figure 7a, we report the acceptance rate of particles and the runtime (in min) of Algorithm 2 as functions of the iterations r, for the cascade (solid black lines), partially connected (dotted red lines) and fully connected (dashed blue lines) network scenarios. In all settings, Algorithm 2 terminates after iteration 43, where the acceptance rate drops below the threshold of 0.1%. The slope of both the acceptance rate and the runtime curve is comparable in the three scenarios.
Remarkably, the correct network is inferred (i.e., the marginal posterior modes of the discrete parameters coincide with the true values) earlier than the algorithm terminates, i.e., from iteration 9, 20 and 13 for the cascade, partially connected and fully connected scenarios, respectively. These iteration numbers are indicated by the vertical lines. Note, however, that while the marginal posterior modes at these iterations are the same as at the final one, the marginal posterior distributions itself may still differ. Finally, we have also considered the \u201ceffective sample size\u201d (ESS), 1 \u2264ESS \u2264M, calculated at iteration r as 1/(PM l=1(w(l) r )2), as a measure of the effectiveness of the SMC sampler, that is, intuitively, how many of the M particles are \u201crelevant\u201d. The ESS of the three scenarios look similar, mainly oscillating between 350 and 450 (recall that M is set to 500 here), with overall median across the iterations equal to 411, 418 and 407, respectively (figure not reported). 6 Network inference from EEG data with epileptic activity After validating the proposed algorithm on simulated data, we now use it to estimate the connectivity structure from real EEG data. 6.1 Description of the data In [33], a study on 22 patients from the Children\u2019s Hospital Boston, who experience epileptic seizures, was presented. The aim of the study was to detect seizure periods in multiple hour EEG recordings of each individual. The data are available at https://www.physionet.org/content/ chbmit/1.0.0/. 21 \fFP1\u2212F7 FP1\u2212F3 FP2\u2212F4 FP2\u2212F8 2960 2980 3000 3020 3040 \u221220 \u221210 0 10 20 \u221220 \u221210 0 10 20 \u221220 \u221210 0 10 20 \u221220 \u221210 0 10 20 t [s] Figure 8: EEG recordings of a 11 year old female patient. The interval [2996, 3036]s (measurements to the right of the vertical dotted red lines) has been defined as a seizure in [33]. Here, we analyse recordings of an 11 year old female patient prior to and during seizure, from the edf-file chb01_03 in the above link. Following [26, 32], we consider the four channels FP1F7, FP1-F3, FP2-F4 and FP2-F8, where FP refers to the frontal lobes (the first two on the left hemisphere and the second two on the right) and F to a set of electrodes placed behind them. The electrode locations are according to the international 10\u201320 system for EEG measurements. The recordings are visualised in Figure 8, where the vertical dotted red lines separate the data into the period before and during seizure, lasting 40 seconds [s] each, with the period from 2996 to 3036s classified as a seizure in [33]. The data are sampled with a frequency of 256 Hz, corresponding to 10240 discrete time measurements during each of the two 40s periods. To put the recordings on the same scale as the model, we rescale the data by multiplying each data point with the factor 0.05. 6.2 Inference We now fit the stochastic multi-population JR-NMM to the data prior to and during seizure, aiming to infer the underlying network structure as well as relevant continuous model parameters in these two regimes via our proposed nSMC-ABC Algorithm 2. Parameter vector and prior distribution In the following, we identify the channels FP1-F7, FP1F3, FP2-F4 and FP2-F8 by Population 1, 2, 3, and 4, respectively. As the distance between channels on different hemispheres is larger than that on the same, we have that the distance between the channels FP1-F3 and FP2-F4 is larger than that between FP1-F7 and FP1-F3 or between FP2-F4 and FP2-F8. 
For this reason, we assume the following matrix of coupling strength parameters K = \uf8eb \uf8ec \uf8ec \uf8ec \uf8ec \uf8ec \uf8ec \uf8ed \u2212 K12 K13 K14 K21 \u2212 K23 K24 K31 K32 \u2212 K34 K41 K42 K43 \u2212 \uf8f6 \uf8f7 \uf8f7 \uf8f7 \uf8f7 \uf8f7 \uf8f7 \uf8f8 = \uf8eb \uf8ec \uf8ec \uf8ec \uf8ec \uf8ec \uf8ec \uf8ed \u2212 L c2L c3L L \u2212 cL c2L c2L cL \u2212 L c3L c2L L \u2212 \uf8f6 \uf8f7 \uf8f7 \uf8f7 \uf8f7 \uf8f7 \uf8f7 \uf8f8 , with unknown L and c. Moreover, we assume the activation parameters Ak, k = 1, . . . , 4, the noise intensity parameters \u03c3l := \u03c31 = \u03c32 (left hemisphere) and \u03c3r := \u03c33 = \u03c34 (right hemisphere) as well 22 \fas the input parameters \u00b5l := \u00b51 = \u00b52 (left hemisphere) and \u00b5r := \u00b53 = \u00b54 (right hemisphere) to be unknown. For the remaining parameters, we use the standard values reported in Table 1, except for setting b = 20 and C = 70 (values chosen based on pilot experiments). We thus aim to infer the (10 + 12)-dimensional parameter vector \u03b8 = (A1, A2, A3, A4, L, c, \u03c3l, \u03c3r, \u00b5l, \u00b5r, vec(P)). We choose independent continuous uniform prior distributions for \u03b8c with broad supports, Ak \u223cU(1, 30), k = 1, . . . , 4, L \u223cU(100, 5000), c \u223cU(0.5, 1), \u03c3l, \u03c3r \u223cU(100, 15000), \u00b5l, \u00b5r \u223cU(1, 300). The priors for \u03b8d are independent Bernoulli distributions with equal probabilities, see (13). Estimation results In Figure 9a, we report the marginal posterior densities (green lines: before seizure, blue lines: during seizure) and the uniform prior densities (horizontal red lines) of the continuous parameters \u03b8k c , k = 1, . . . , 10. All posteriors for the parameters during seizure are unimodal, showing a clear update of the priors. Comparing the individual population parameters Ak, k = 1, ..., 4, for the two periods, there is a larger activation in all 4 neural populations during seizure. Moreover, as expected, the posteriors of the noise intensity parameters \u03c3l and \u03c3r assume larger values in both brain hemispheres during seizure. While we obtain a clear estimate of the posterior for the coupling strength parameter L during seizure, this is not the case before seizure. However, the posterior suggests a coupling strength far away from zero also in this scenario. Figure 9b shows the marginal posterior histograms for the coupling direction parameters \u03b8k d, k = 1, . . . , 12 (top panels: before seizure, bottom panels: during seizure). The posteriors during the seizure period provide a clear connectivity structure, with the inferred network visualised in Figure 10b. The inferred network prior to seizure (Figure 10a) is similar, except for a missing connection from Population 2 (FP1-F3) to Population 1 (FP1-F7) on the left hemisphere, present during seizure. The inferred network also differs for \u03c142 and \u03c143, with a connection from Population 4 (FP2-F8) to 2 (FP1-F3) and to 3 (FP2-F4) present only before seizure. However, for these parameters the results are less evident (estimated probabilities for a connection smaller than 0.75 in the before seizure scenario). To further investigate the differences, if any, between the two hemispheres, we also apply our nSMC-ABC Algorithm 2 only to the two channels from the same hemisphere, i.e., left hemisphere (FP1-F7 and FP1-F3) and right hemisphere (FP2-F8 and FP2-F4), both before and during seizure. 
The dotted orange/dashed grey (left hemisphere) and dotted brown/dashed black (right hemisphere) lines in Figure 9a are the marginal nSMC-ABC posteriors of the corresponding parameters before/during seizure. The dashed orange (left hemisphere) and red (right hemisphere) vertical lines in Figure 9b indicate the inferred connections, estimated as posterior modes. The corresponding posterior probabilities that \u03c112, \u03c121, \u03c134, \u03c143 are equal to one are 1, 0.02, 0.11, 0.984 and 1, 1, 0.998, 1 before and during seizure, respectively. When only using data from the left hemisphere, the inferred connections (namely, a unidirectional connection from Population 1 to 2 before seizure, and bidirectional connection during seizure) agree with those obtained in the 4-population-case. During the seizure period, the coupling strength L for the left hemisphere is slightly larger than for N = 4 (grey versus blue lines in the top right panel of Figure 9a). Moreover, the coupling strengths of the left hemisphere seem to be larger than the right one, both before and during seizure. For the period prior to seizure, the inferred connections (\u02c6 \u03c134 = 0, \u02c6 \u03c143 = 1) from the two populations on the right hemisphere (i.e., Population 3, FP2-F4, and Population 4, FP2-F8) also agree with the 4-population-case. However, this is not the case during seizure, as the bidirectional connections detected now for N = 2 were not present when using data from all 4 channels, even if the coupling strength L and c assume smaller values than before (black vs blue lines in the top right panel and left bottom panel of Figure 9a). 23 \fABC results A1 2.5 5.0 7.5 10.0 0.0 0.5 1.0 ABC results A2 2.5 5.0 7.5 10.0 0.0 0.5 1.0 ABC results A3 2.5 5.0 7.5 10.0 0.0 0.2 0.4 0.6 ABC results A4 2.5 5.0 7.5 10.0 0.0 0.3 0.6 0.9 ABC results L 0 1000 2000 3000 4000 5000 0.000 0.001 0.002 0.003 ABC results c 0.5 0.6 0.7 0.8 0.9 1.0 0 1 2 3 ABC results \u03c3 (left) 3000 6000 9000 0.0000 0.0005 0.0010 0.0015 ABC results \u03c3 (right) 3000 6000 9000 0.0000 0.0005 0.0010 0.0015 ABC results \u03bc (left) 0 50 100 150 200 0.00 0.02 0.04 0.06 ABC results \u03bc (right) 0 50 100 150 200 0.00 0.02 0.04 0.06 (a) \u03c112 0.0 0.5 1.0 \u03c113 \u03c114 \u03c121 \u03c123 \u03c124 \u03c131 \u03c132 \u03c134 \u03c141 \u03c142 \u03c143 \u03c112 0 1 0.0 0.5 1.0 \u03c113 0 1 \u03c114 0 1 \u03c121 0 1 \u03c123 0 1 \u03c124 0 1 \u03c131 0 1 \u03c132 0 1 \u03c134 0 1 \u03c141 0 1 \u03c142 0 1 \u03c143 0 1 (b) Figure 9: Estimation results for EEG recordings. Three models are considered: Populations 1-4, modelling all four channels; Populations 1-2, modelling the left hemisphere; Populations 3-4, modelling the right hemisphere. (a) nSMC-ABC marginal posterior densities of the continuous parameters. Before seizure: solid green (N = 4), dotted orange (N = 2, left hemisphere), dotted brown (N = 2, right hemisphere). During seizure: solid blue (N = 4), dashed grey (N = 2, left hemisphere), dashed black (N = 2, right hemisphere). The solid red lines are the uniform prior distributions. (b) nSMC-ABC marginal posterior distributions of the discrete network parameters before seizure (top panels) and during seizure (bottom panels). The vertical dashed orange and red lines denote the estimated posterior modes for the reduced models with N = 2 populations of the left and right hemispheres, respectively. 
Figure 10: Inferred network from EEG recordings. Estimated network (N = 4, channels FP1-F7, FP1-F3, FP2-F4 and FP2-F8) for an 11 year old female patient before (a) and during (b) epileptic seizure. The dotted orange connections in (a) are estimated with a probability less than 0.75 (cf. top panels of Figure 9b).
Fitted summaries Figure 11 provides a comparison of the summary statistics (10) derived from the EEG recordings (solid black lines) and those obtained from datasets simulated under the 500 kept posterior samples of the 4-population model (grey regions). Median values of the summaries for the 2-population models of the left (dashed orange lines) and right (dashed red lines) hemispheres are also reported. The left and right panels of each subfigure correspond to the before and during seizure scenario, respectively. The match of the observed and simulated summaries is generally good, except for the MSCs, which are based on the cross-spectral densities. Moreover, the oscillations of the cross-correlation functions before seizure are not captured well, suggesting some lack of fit of the multi-population JR-NMM to the EEG data.
Acceptance rate and runtime In Figure 7b, we report the acceptance rate of particles and the runtime (in hours) of the 4-population experiments as functions of the iteration r of the nSMC-ABC Algorithm 2 before (dotted red lines) and during (solid black lines) seizure. The acceptance rate dropped below the final threshold of 0.1%, causing the algorithm to terminate, after 59 and 72 iterations, respectively. The vertical lines indicate iterations 44 and 51, from which the network estimates (visualised in Figure 10) do not change anymore.
7
While mechanisms of rapid processing are well documented in sensory systems, rhythm-generating motor circuits in the spinal cord are poorly understood. The activation leads to an intense synaptic bombardment of both excitatory and inhibitory input, and it is of interest to characterize such network activity, and to build models which can generate self-sustained oscillations. The aim of this paper is to present a microscopic model describing a large network of interacting neurons which can generate oscillations. The activity of each neuron is represented by a point process, namely, the successive times at which the neuron emits an \u2217Department of Mathematical Sciences, University of Copenhagen, Universitetsparken 5, DK-2100 Copenhagen \u2020Universit\u00b4 e de Cergy-Pontoise, AGM UMR-CNRS 8088, 2 avenue Adolphe Chauvin, F-95302 CergyPontoise Cedex 1 arXiv:1512.00265v3 [math.PR] 2 Oct 2016 \faction potential or a so-called spike. A realization of this point process is called a spike train. It is commonly admitted that the spiking intensity of a neuron, i.e., the in\ufb01nitesimal probability of emitting an action potential during the next time unit, depends on the past history of the neuron and it is a\ufb00ected by the activity of other neurons in the network. Neurons interact mostly through chemical synapses, where a spike of a pre-synaptic neuron leads to an increase if the synapse is excitatory, or a decrease if the synapse is inhibitory, of the membrane potential of the post-synaptic neuron, possibly after some delay. In neurophysiological terms this is called synaptic integration. When the membrane potential reaches a certain upper threshold, the neuron \ufb01res a spike. Thus, excitatory inputs from the neurons in the network increase the \ufb01ring intensity, and inhibitory inputs decrease it. Hawkes processes provide good models of this synaptic integration phenomenon by the structure of their intensity processes, see (1.1) below. We refer to Chevallier et al. (2015) [8], Chornoboy et al. (1988) [9], Hansen et al. (2015) [20] and to Reynaud-Bouret et al. (2014) [34] for the use of Hawkes processes in neuronal modeling. For an overview of point processes used as stochastic models for interacting neurons both in discrete and in continuous time and related issues, see also Galves and L\u00a8 ocherbach (2016) [16]. In this paper, we study oscillatory systems of interacting Hawkes processes representing the time occurrences of action potentials of neurons. The system consists of several large populations of neurons. Each population might represent a di\ufb00erent functional group of neurons, for example di\ufb00erent hierarchical layers in the visual cortex, such as V1 to V4, or the populations can be pools of excitatory and inhibitory neurons in a network. Each neuron is characterized by its spike train, and the whole system is described by multivariate counting processes ZN k,i(t), t \u22650. Here, ZN k,i(t) represents the number of spikes of the ith neuron belonging to the kth population, during the time interval [0, t]. The number of classes n is \ufb01xed, and each class k = 1, . . . , n consists of Nk neurons. The total number of neurons is therefore N = N1 + . . . + Nn. 
Under suitable assumptions, the sequence of counting processes (ZN k,i)1\u2264k\u2264n,1\u2264i\u2264Nk is characterized by its intensity processes (\u03bbN k,i(t)) de\ufb01ned through the relation P(ZN k,i has a jump in ]t, t + dt]|Ft) = \u03bbN k,i(t)dt, where Ft = \u03c3(ZN k,i(s), s \u2264t, 1 \u2264k \u2264n, 1 \u2264i \u2264Nk). We consider a mean-\ufb01eld framework where \u03bbN k,i(t) is given by \u03bbN k,i(t) = fk \uf8eb \uf8ed n X l=1 1 Nl X 1\u2264j\u2264Nl Z ]0,t[ hkl(t \u2212s)dZN l,j(s) \uf8f6 \uf8f8. (1.1) Here, fk : R \u2192R+ is the spiking rate function of population k, and {hkl : R+ \u2192R} is a family of synaptic weight functions modeling the in\ufb02uence of population l on population k. By integrating over ]0, t[ and not over ] \u2212\u221e, t[, we implicitly assume initial conditions of no spiking activity before time 0. Equation (1.1) has the typical form of the intensity of a multivariate nonlinear Hawkes process, going back to Hawkes (1971) [21] and Hawkes and Oakes (1974) [22]. We refer to Br\u00b4 emaud and Massouli\u00b4 e (1996) [6] for the stability properties of multivariate Hawkes processes, and to Delattre, Fournier and Ho\ufb00mann (2015) [13] and Chevallier (2015) [7] for the study of Hawkes processes in high dimensions. 2 \fThe structure of (1.1) is such that within each population, all neurons behave in a similar way, i.e., the intensity process \u03bbN k,i(t) depends only on the empirical measures of each population. Thus, neurons within a given population are exchangeable. Therefore, we deal with a multi-class system of populations interacting in a mean-\ufb01eld framework which is reminiscent of Graham (2008) [18] and Graham and Robert (2009) [19]. Our aim is to study the large population limit when N \u2192\u221eand to show that in this limit self-sustained periodic behavior emerges even though each single neuron does not follow periodic dynamics. The study follows a long tradition, see e.g. Scheutzow (1985) [31, 32] in the framework of nonlinear di\ufb00usion processes, or Dai Pra, Fischer and Regoli (2015) [12] and Collet, Dai Pra and Formentin (2015) [10]. Our paper continues these studies within the framework of in\ufb01nite memory point processes. The \ufb01rst important step is to establish propagation of chaos of the \ufb01nite system (ZN k,i(t))1\u2264k\u2264n,1\u2264i\u2264Nk as N \u2192\u221e, under the condition that for each class 1 \u2264k \u2264n, limN\u2192\u221eNk/N exists and is in ]0, 1[. 1.1 Propagation of chaos In Section 2.2, we study the limit behavior of the system (ZN k,i(t))1\u2264k\u2264n,1\u2264i\u2264Nk as N \u2192\u221e. We show in Theorem 1 that the system can be approximated by a system of inhomogeneous independent Poisson processes ( \u00af Z1(t), . . . , \u00af Zn(t)), where each \u00af Zk(t) has intensity fk n X l=1 Z t 0 hkl(t \u2212s)dE( \u00af Zl(s)) ! dt. Here, \u00af Zk(t) represents the number of spikes during [0, t] of a typical neuron belonging to population k in the limit system. This result is an extension of results obtained by [13] to the multi-class case. It follows that the system is multi-chaotic in the sense of [18]. The equivalence between the chaoticity of the system and a weak law of large numbers for the empirical measures, as proven in Theorem 1, is well-known (see for instance Sznitman (1991) [36]). This means that in the large population limit, within the same class, the neurons converge in law to independent and identically distributed copies of the same limit law. 
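Equation (1.1) can be simulated directly by Ogata-type thinning. The following Python sketch is not part of the paper; it additionally assumes that each f_k is bounded by a known constant f_sup[k], which supplies the thinning envelope (the paper itself only requires the Lipschitz condition of Assumption 1), and the rate functions, kernels and population sizes in the toy usage at the end are arbitrary placeholders.

```python
import numpy as np

def simulate_hawkes(T, N, f, f_sup, h, seed=0):
    """Ogata-type thinning sketch for the multi-class system with intensity (1.1).

    N     : population sizes [N_1, ..., N_n]
    f     : rate functions f_k; assumed bounded by f_sup[k] (the thinning envelope),
            which is stronger than the Lipschitz condition actually used in the paper
    h     : h[k][l] is the synaptic weight function h_{kl}, vectorised in its argument
    Returns spikes[k], a list of (spike time, neuron index) pairs for population k.
    """
    rng = np.random.default_rng(seed)
    n = len(N)
    spikes = [[] for _ in range(n)]
    lam_bar = sum(N[k] * f_sup[k] for k in range(n))        # global upper bound on the total rate
    t = 0.0
    while True:
        t += rng.exponential(1.0 / lam_bar)                 # candidate jump time
        if t > T:
            return spikes
        # class-k intensity at t-, identical for all neurons within class k, cf. (1.1)
        lam = np.zeros(n)
        for k in range(n):
            drive = 0.0
            for l in range(n):
                past = np.array([s for s, _ in spikes[l] if s < t])
                if past.size:
                    drive += h[k][l](t - past).sum() / N[l]
            lam[k] = f[k](drive)
        # accept the candidate as a spike of class k with probability N_k * lam_k / lam_bar
        u = rng.uniform(0.0, lam_bar)
        cum = np.cumsum(np.asarray(N, dtype=float) * lam)
        k = int(np.searchsorted(cum, u))
        if k < n:                                           # k == n means the candidate is rejected
            spikes[k].append((t, int(rng.integers(N[k]))))  # neurons are exchangeable within a class

# toy usage with two populations, bounded sigmoid rates and exponential kernels
f = [lambda x: 5.0 / (1.0 + np.exp(-x)), lambda x: 10.0 / (1.0 + np.exp(-x))]
h = [[lambda s: -np.exp(-s), lambda s: 2.0 * np.exp(-s)],
     [lambda s: np.exp(-2.0 * s), lambda s: np.zeros_like(s)]]
spk = simulate_hawkes(T=20.0, N=[50, 50], f=f, f_sup=[5.0, 10.0], h=h)
```

The cost per candidate grows with the accumulated spike history, since the whole convolution in (1.1) is re-evaluated; the Markovian cascade representation used later removes this burden for Erlang kernels.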
This property is usually called propagation of chaos in the literature. In particular, as pointed out in [18], we have asymptotic independence between the di\ufb00erent classes, and interactions between classes do only survive in law. In Section 3, still following ideas of [13], we state an associated central limit theorem in Theorem 2. The extension to nonlinear rate functions fk requires the use of matrixconvolution equations which go back to Crump (1970) [11] and Athreya and Murthy (1976) [1] and which are collected in Appendix, Section 6.1. 1.2 Oscillatory behavior of the limit system In Section 4 we present conditions under which the limit system possesses solutions which are periodic in law. To be more precise, the classes interact according to a cyclic feedback system and each class k is only in\ufb02uenced by class k + 1, where we identify n + 1 with 1. In this case mk t = E( \u00af Zk(t)), 1 \u2264k \u2264n, is solution of mk t = Z t 0 fk \u0012Z s 0 hkk+1(s \u2212u)dmk+1 u \u0013 ds. (1.2) 3 \fIf the memory kernels hkk+1 are given by Erlang kernels, as used e.g. in modeling the delay in the hemodynamics in nephrons, see [14, 35] and (4.17) below, then Theorem 3 characterizes situations in which the system (1.2) possesses attracting non-constant periodic orbits, that is, presents oscillatory behavior. This result goes back to deep theorems in dynamical systems, obtained by Mallet-Paret and Smith (1990) [30] and used in a di\ufb00erent context in Bena\u00a8 \u0131m and Hirsch (1999) [2], from where we learned about these results. In particular, the celebrated Poincar\u00b4 e-Bendixson theorem plays a crucial role. 1.3 Hawkes processes, associated piecewise deterministic Markov processes and longtime behavior of the approximating di\ufb00usion process Hawkes processes are truly in\ufb01nite memory processes and techniques from the theory of Markov processes are in general not applicable. However, in the special situation where the memory kernels are given by Erlang kernels, the intensity processes can be described in terms of an equivalent high dimensional system of piecewise deterministic Markov processes (PDMPs). Once we are back in the Markovian world, we can study the longtime behavior of the process, ergodicity, and so on. In Section 5, we obtain an approximating di\ufb00usion equation in (5.26) which is shown to be close in a weak sense to the original PDMP de\ufb01ning the Hawkes process (Theorem 4). Once we dispose of this small noise approximation, we then study the longtime behavior in the case of two populations, n = 2. In particular, we show to which extent the approximating di\ufb00usion presents the same oscillatory behavior as the limit system. This approximating di\ufb00usion is highly degenerate having Brownian noise present only in two of its coordinates. However, the very speci\ufb01c cascade structure of its drift vector implies that the weak H\u00a8 ormander condition holds on the whole state space, and as a consequence, the di\ufb00usion is strong Feller. A simple Lyapunov argument shows that the process comes back to a compact set in\ufb01nitely often, almost surely. Since the limit system possesses a non constant periodic orbit \u0393 which is asymptotically orbitally stable, it is well known that there exists a local Lyapunov function V (x) de\ufb01ned on a neighborhood of \u0393 such that V decreases along the trajectories of the limit system, describing the attraction of the limit system to \u0393 (see e.g. 
Yoshizawa (1966) [38] and Kloeden and Lorenz (1986) [28]). This Lyapunov function is shown also to be a Lyapunov function for the approximating di\ufb00usion, in particular, the di\ufb00usion is also attracted to \u0393, once it has entered the basin of attraction of \u0393. A control argument shows \ufb01nally that this happens in\ufb01nitely often almost surely (Theorem 5), in particular, for large enough N, the approximating di\ufb00usion also presents oscillations. We close our paper with some simulation studies. 2 Systems of interacting Hawkes processes, basic notation and large population limits Consider n populations, each composed by Nk neurons, k = 1, . . . , n. The total number of neurons in the system is N = N1 + . . . + Nn. The activity of each neuron is described by a counting process ZN k,i(t), 1 \u2264k \u2264n, 1 \u2264i \u2264Nk, t \u22650, recording the number of spikes of the ith neuron belonging to population k during the interval [0, t]. The sequence of 4 \fcounting processes (ZN k,i) is characterized by its intensity processes (\u03bbN k,i(t)) which are de\ufb01ned through the relation P(ZN k,i has a jump in ]t, t + dt]|Ft) = \u03bbN k,i(t)dt, 1 \u2264k \u2264n, 1 \u2264i \u2264Nk, where Ft = \u03c3(ZN k,i(s), s \u2264t, 1 \u2264k \u2264n, 1 \u2264i \u2264Nk) and \u03bbN k,i(t) are de\ufb01ned in (2.3) below. We consider a mean \ufb01eld framework where N \u2192\u221esuch that for each 1 \u2264k \u2264n, lim N\u2192\u221e Nk N = pk exists and is in ]0, 1[. The intensity processes will be of the form \u03bbN k,i(t) = fk \uf8eb \uf8ed n X l=1 1 Nl X 1\u2264j\u2264Nl Z ]0,t[ hkl(t \u2212s)dZN l,j(s) \uf8f6 \uf8f8, (2.3) where fk is the spiking rate function of population k and where the hkl are memory kernels. Assumption 1 (i) All fk belong to C1(R; R+). (ii) There exists a \ufb01nite constant L such that for every x and x\u2032 in R, for every 1 \u2264k \u2264n, |fk(x) \u2212fk(x\u2032)| \u2264L|x \u2212x\u2032|. (2.4) (iii) The functions hkl, 1 \u2264k, l \u2264n, belong to L2 loc(R+; R). 2.1 The setting We work on a \ufb01ltered probability space (\u2126, A, F) which we de\ufb01ne as follows. We write M for the canonical path space of simple point processes given by M := {m = (tn)n\u2208N : t1 > 0, tn \u2264tn+1, tn < tn+1 if tn < +\u221e, lim n\u2192+\u221etn = +\u221e}. For any m \u2208M, any n \u2208N, let Tn(m) = tn. We identify m \u2208M with the associated point measure \u00b5 = P n \u03b4Tn(m) and put Mt := \u03c3{\u00b5(A) : A \u2208B(R), A \u2282[0, t]}, M = M\u221e. Finally, we put (\u2126, A, F) := (M, M, (Mt)t\u22650)I where I = Sn k=1{(k, i), i \u22651}. We write (ZN k,i)1\u2264k\u2264n,1\u2264i\u2264Nk for the canonical multivariate point measure de\ufb01ned on the \ufb01nite dimensional subspace (M, M, (Mt)t\u22650)IN of \u2126, where IN = Sn k=1{(k, i), 1 \u2264i \u2264Nk}. De\ufb01nition 1 [compare to De\ufb01nition 1 of [13]]. A Hawkes process with parameters (fk, hkl, 1 \u2264k, l \u2264n) is a probability measure P on (\u2126, A, F) such that 1. P\u2212almost surely, for all (k, i) \u0338= (l, j), ZN k,i and ZN l,j never jump simultaneously, 2. for all (k, i) \u2208IN, the compensator of ZN k,i(t) is given by R t 0 \u03bbN k,i(s)ds de\ufb01ned in (2.3). Proposition 1 Under Assumption 1 there exists a path-wise unique Hawkes process (ZN k,i(t)(k,i)\u2208IN ) for all t \u22650. Proof The proof is analogous to the proof of Theorem 6 in [13]. 
\u2022 5 \f2.2 Mean-\ufb01eld limit and propagation of chaos The aim of the paper is to study the process (ZN k,i(t))(k,i)\u2208IN in the large population limit, i.e., as N \u2192\u221e. The convergence will be stated in terms of the empirical measures 1 Nk X 1\u2264i\u2264Nk \u03b4(ZN k,i(t))t\u22650, 1 \u2264k \u2264n, (2.5) taking values in the set P(D(R+, R+)) of probability measures on the space of c` adl` ag functions, D(R+, R+). We endow D(R+, R+) with the Skorokhod topology, and P(D(R+, R+)) with the weak convergence topology associated with the Skorokhod topology on D(R+, R+). Since we are dealing with multi-class systems, the classical notions of chaoticity and propagation of chaos have to be extended to this framework, see [18] for further details. We recall from [18] the following de\ufb01nition. Let P1, . . . , Pn \u2208P(D(R+, R+)). De\ufb01nition 2 The system (ZN k,i(t))(k,i)\u2208IN is called P1 \u2297. . .\u2297Pn\u2212multi-chaotic, if for any m \u22651, lim N\u2192\u221eL \u0000(ZN k,i), 1 \u2264k \u2264n, 1 \u2264i \u2264m \u0001 = P \u2297m 1 \u2297. . . \u2297P \u2297m n . In particular, Corollary 5.2 of [19] shows that in this case we have convergence in distribution 1 Nk X 1\u2264i\u2264Nk \u03b4(ZN k,i) L \u2192Pk, as N \u2192\u221e, for any 1 \u2264k \u2264n. The limit measure Pk has to be understood as the distribution of the limit process \u00af Zk(t), where the associated limit system is given by \u00af Zk(t) = Z t 0 Z R+ 1{z\u2264fk(Pn l=1 R s 0 hkl(s\u2212u)dE( \u00af Zl(u))}Nk(ds, dz), (2.6) 1 \u2264k \u2264n, where Nk are independent Poisson random measures (PRMs) on R+ \u00d7 R+ each having intensity measure dsdz. Introduce mt = (m1 t , . . . , mn t ) = (E( \u00af Z1(t), . . . , \u00af Zn(t))). Taking expectations in (2.6), it follows that mt is solution of mk t = Z t 0 fk n X l=1 Z s 0 hkl(s \u2212u)dml u ! ds, 1 \u2264k \u2264n. (2.7) Theorem 1 Under Assumption 1, there exists a path-wise unique solution to (2.6) such that t 7\u2192E(Pn k=1 \u00af Zk(t)) is locally bounded. Moreover, the system of processess (ZN k,i)(k,i)\u2208IN is P1 \u2297. . .\u2297Pn\u2212multi-chaotic, where Pk = L( \u00af Zk), 1 \u2264k \u2264n. In particular, for any i \u22651, ((ZN 1,i(t), . . . , ZN n,i(t))t\u22650) L \u2192(( \u00af Z1(t), . . . , \u00af Zn(t))t\u22650) as N \u2192\u221e(convergence in D(R+, Rn +), endowed with the Skorokhod topology). Remark 1 The above theorem shows that any \ufb01xed \ufb01nite sub-system is asymptotically independent with neurons of class k having the law of \u00af Zk. 6 \fThe proof of Theorem 1 is a direct adaptation of the proof of Theorem 8 in [13] to the multi-class case. Proof 1) Let ( \u00af Z1(t), . . . , \u00af Zn(t)) be any solution of (2.6) and consider the associated vector mt = (m1 t , . . . , mn t ) = (E( \u00af Z1(t), . . . , \u00af Zn(t))). Then mt is solution of (2.7), and an easy adaptation of Lemma 24 of [13] shows that this equation has a unique non-decreasing (in each coordinate) locally bounded solution, which is of class C1. 2) Well-posedness and uniqueness of a solution satisfying that t 7\u2192E(Pn k=1 \u00af Zk(t)) is locally bounded follow then as in [13], proof of Theorem 8. 3) Propagation of chaos: Let NN k,i(ds, dz), (k, i) \u2208IN, be i.i.d. PRMs having intensity dsdz on R+ \u00d7 R+. 
For each N \u22651, consider the Hawkes process (ZN k,i(t))(k,i)\u2208IN,t\u22650 given by ZN k,i(t) = Z t 0 Z \u221e 0 1{z\u2264\u03bbN k,i(s)}NN k,i(ds, dz) = Z t 0 Z \u221e 0 1n z\u2264fk \u0010Pn l=1 1 Nl P 1\u2264j\u2264Nl R s\u2212 0 hkl(s\u2212u)dZN l,j(u) \u0011oNN k,i(ds, dz). Indeed, ZN k,i de\ufb01ned in this way is a Hawkes process in the sense of De\ufb01nition 1, as follows from Proposition 3 of [13]. We now couple ZN k,i with the limit process (2.6) in the following way. Let mt be the unique solution of (2.7). Put \u00af ZN k,i(t) = Z t 0 Z \u221e 0 1{z\u2264fk( Pn l=1 R s 0 hkl(s\u2212u)dml u)}NN k,i(ds, dz), (2.8) where NN k,i is the PRM driving the dynamics of ZN k,i. Obviously, for all 1 \u2264i \u2264Nk, \u00af ZN k,i L = \u00af Zk. Moreover, the limit processes \u00af ZN k,i, 1 \u2264k \u2264n, 1 \u2264i \u2264Nk, are independent. Denote \u2206N k,i(t) = R t 0 |d[ \u00af ZN k,i(u) \u2212ZN k,i(u)]|, and \u03b4N k,i(t) = E(\u2206N k,i(t)). Notice that this last quantity does not depend on i \u2208{1, . . . , Nk}, due to the exchangeability of the neurons within one class. Then sup u\u2208[0,t] | \u00af ZN k,i(u)\u2212ZN k,i(u)| \u2264\u2206N k,i(t), whence E \" sup u\u2208[0,t] | \u00af ZN k,i(u) \u2212ZN k,i(u)| # \u2264\u03b4N k,1(t) := \u03b4N k (t). We start by controlling \u2206N k,i(t) which is given by \u2206N k,i(t) = Z t 0 Z \u221e 0 \f \f \f1{z\u2264fk( R s 0 Pn l=1 hkl(s\u2212u)dml u)} \u22121n z\u2264fk \u0010Pn l=1 1 Nl P 1\u2264j\u2264Nl R s\u2212 0 hkl(s\u2212u)dZN l,j(u) \u0011o \f \f \f \f NN k,i(ds, dz). Using the Lipschitz continuity of fk with Lipschitz constant L, 1 LE(\u2206N k,i(t)) \u2264 Z t 0 n X l=1 E \f \f \f \f \f \f 1 Nl X 1\u2264j\u2264Nl Z s\u2212 0 hkl(s \u2212u)(dml u \u2212d \u00af ZN l,j(u)) \f \f \f \f \f \f ds + Z t 0 n X l=1 E \f \f \f \f \f \f 1 Nl X 1\u2264j\u2264Nl Z s\u2212 0 hkl(s \u2212u)d[ \u00af ZN l,j(u) \u2212ZN l,j(u)] \f \f \f \f \f \f ds =: A + B, (2.9) 7 \fwhere A denotes the terms of the RHS of the \ufb01rst line, and B the terms within the second line. Now, using Lemma 22 of [13], B \u2264 Z t 0 E Z s\u2212 0 \" n X l=1 |hkl(s \u2212u)|d\u2206N l,1(u) # ds \u2264 Z t 0 \" n X l=1 |hkl(t \u2212u)|\u03b4N l (u) # du. To control A, let XN k,l,j(t) = R t\u2212 0 hkl(t \u2212u)d \u00af ZN l,j(u), for 1 \u2264j \u2264Nl. Then XN k,l,j(t), 1 \u2264j \u2264 Nl, are i.i.d. having mean R t 0 hkl(t \u2212u)dml u. Hence A \u2264 n X l=1 1 \u221aNl Z t 0 q V ar(XN k,l,1(s))ds. But XN k,l,1(s) = Z s\u2212 0 Z \u221e 0 1{z\u2264fl(Pn m=1 R u 0 hlm(u\u2212r)dmm r )}hkl(s \u2212u)NN l,1(du, dz), and thus, since the integrand is deterministic, XN k,l,1(s) \u2212E(XN k,l,1(s)) = Z s\u2212 0 Z \u221e 0 1{z\u2264fl(Pn m=1 R u 0 hlm(u\u2212r)dmm r )}hkl(s \u2212u) \u02dc NN l,1(du, dz), where \u02dc NN l,1(ds, dz) = NN l,1(ds, dz) \u2212dsdz is the compensated PRM. Recalling (2.7) we deduce that V ar(XN k,l,1(s)) = Z s 0 fl n X m=1 Z u 0 hlm(u \u2212r)dmm r ! h2 kl(s \u2212u)du = Z s 0 h2 kl(s \u2212u)dml u. Putting \u2225\u03b4N(t)\u22251 = Pn k=1 \u03b4N k (t), \u2225mt\u22251 = Pn k=1 mk t , \u2225h(t)\u22251 = Pn k,l=1 |hkl(t)|, we obtain 1 L\u2225\u03b4N(t)\u22251 \u2264 Z t 0 \u2225h(t \u2212u)\u22251 \u2225\u03b4N(u)\u22251du + n X k=1 1 \u221aNk ! Z t 0 \u0012Z s 0 \u2225h(s \u2212u)\u22252 1d\u2225mu\u22251 \u00131/2 ds. (2.10) It follows, as in Step 4 of the proof of Theorem 8 in [13], that sup t\u2264T \u2225\u03b4N(t)\u22251 \u2264CT n X k=1 1 \u221aNk ! . 
Consequently, for any \ufb01xed (k, i) \u2208IN, as N \u2192\u221e, E \" sup u\u2208[0,T] | \u00af ZN k,i(u) \u2212ZN k,i(u)| # \u2264CT n X k=1 1 \u221aNk ! \u21920. (2.11) The end of the proof is now standard, based on arguments developed in [18] and [19]. Recall that neurons within a given population are exchangeable. Then, by the proof of Theorem 5.1 in [19], in order to prove propagation of chaos, it is enough to show that for each \ufb01xed sequence \u2113k, 1 \u2264k \u2264n, ((ZN 1,1(t))t\u22650, . . . , (ZN 1,\u21131(t))t\u22650, . . . , (ZN n,1(t))t\u22650, . . . , (ZN n,\u2113n(t))t\u22650) 8 \fgoes in law to \u21131 independent copies of \u00af Z1, . . . , and \u2113n independent copies of \u00af Zn (convergence in D(R+, R\u21131+...+\u2113n + )). Since the topology of uniform convergence on compact time intervals is \ufb01ner than the Skorokhod topology, this follows clearly from (2.11), and thus the proof is \ufb01nished. \u2022 3 Central limit theorem A natural question to ask is to which extent the large time behavior of the limit system (m1 t , . . . , mn t ) predicts the large time behavior of the \ufb01nite size system, in particular in the case when the limit system presents oscillations (see Section 4 below). To answer this question, the present section states a central limit theorem where convergence of both N and t to in\ufb01nity is considered. All proofs can be found in Appendix. First we control the longtime behavior of the limit system represented by its integrated intensities (m1 t , . . . , mn t ). It is well-known that linear Hawkes processes, i.e., the case when the rate functions fk are linear, can be described in terms of classical Galton-Watson processes. In the non-linear case, a comparison with a Galton-Watson process is still possible if the rate functions are Lipschitz. In our case, the associated o\ufb00spring matrix is given by \u039b := (\u039bij)1\u2264i,j\u2264n, where \u039bij = L Z \u221e 0 |hij(t)|dt, 1 \u2264i, j \u2264n, (3.12) with L given in (2.4). De\ufb01ne the matrix H(t) = \u0010 L|hik(t)| \u0011 1\u2264i,k\u2264n, for any t \u22650, (3.13) such that \u039b = R \u221e 0 H(t)dt. Classically, one distinguishes the subcritical, the critical and the supercritical cases. Since we only need to bound the intensities, we concentrate on the subcritical and the supercritical case. The subcritical case is de\ufb01ned by the following property of the matrix \u039b. Assumption 2 The functions hkl, 1 \u2264k, l \u2264n, belong to L1(R+; R) \u2229L2(R+; R), and the largest eigenvalue \u00b51 of \u039b is strictly smaller than 1. We then obtain the following bound on the growth of mk t . Proposition 2 Grant Assumptions 1 and 2. Then there exists a constant \u03b10 such that mk t \u2264\u03b10t, for all 1 \u2264k \u2264n. Moreover, as in [13], Remark 9, in this case there exists a constant C such that E(sup s\u2264t |ZN k,i(s) \u2212\u00af ZN k,i(s)|) \u2264CtN\u22121/2, (3.14) for 1 \u2264k \u2264n, where \u00af ZN k,i is de\ufb01ned in (2.8). In the supercritical case, the control on the growth of mk t is more tricky. We need the following assumption. 9 \fAssumption 3 The functions hkl, 1 \u2264k, l \u2264n, belong to L1(R+; R) and there exist p \u22651 and a constant C, such that |hkl(t)| \u2264C(1 + tp) for all t \u22650, for any 1 \u2264k, l \u2264n. Moreover, the largest eigenvalue \u00b51 of \u039b in (3.12) is strictly larger than 1. Proposition 3 Grant Assumptions 1 and 3. 
Then for all 1 \u2264k \u2264n, mk t \u2264ce\u03b10t, where \u03b10 is unique such that R \u221e 0 e\u2212\u03b10tH(t)dt has largest eigenvalue \u2261+1. Here, H(t) is given in (3.13). Moreover, for 1 \u2264k \u2264n and for some constant C, E(sup s\u2264t |ZN k,i(s) \u2212\u00af ZN k,i(s)|) \u2264Ce\u03b10tN\u22121/2. (3.15) We obtain the following central limit theorem. It is an extension of Theorem 10 of [13] to the nonlinear case and several populations. Theorem 2 Grant Assumption 1 and either Assumption 2 or 3. Suppose moreover that for all 1 \u2264k \u2264n, lim inft\u2192\u221emk t /t \u2265\u03b1k for some \u03b1k > 0. We will consider limits as both N and t tend to in\ufb01nity, under the constraints that t/N \u21920 in the subcritical case and that e\u03b10tt\u22121N\u22121/2 \u21920 in the supercritical case, where \u03b10 is given in Proposition 3. 1. For any \ufb01xed i, we have that ZN k,i(t)/mk t tends to 1 in probability. More precisely, lim sup N,t\u2192\u221e (mk t )1/2E[|ZN k,i(t)/mk t \u22121|] \u2264C, for some constant C and for k = 1, . . . , n. 2. For any \ufb01xed \u21131, . . . , \u2113n, the vector \uf8eb \uf8ed ZN 1,i(t) \u2212m1 t p m1 t ! 1\u2264i\u2264\u21131 , . . . , ZN n,i(t) \u2212mn t p mn t ! 1\u2264i\u2264\u2113n \uf8f6 \uf8f8 tends in law to N(0, I\u21131+...+\u2113n) as (t, N) \u2192\u221e, under the constraint that lim N\u2192\u221eNk/N > 0 for all 1 \u2264k \u2264n. The proof of Theorem 2 and Propositions 2 and 3 can be found in Appendix. Remark 2 Since the rate functions are nonlinear, we only obtain the central limit theorem in the regime tN \u22121 \u21920 (subcritical case) or N\u22121/2t\u22121e\u03b10t \u21920 (supercritical case), contrarily to [13] who do not have any restriction in the subcricital case and who only impose N\u22121e\u03b10t \u21920 in the supercritical case. This is due to the fact on the one hand that we deal with the nonlinear case and on the other hand that we do not dispose of general asymptotical equivalents of t 7\u2192mk t . 10 \f4 Oscillations and associated dynamical systems in monotone cyclic feedback systems The aim of this section is to study the limit system (2.6) and (2.7) and describe situations in which oscillations will occur. Throughout this section we suppose that the information is transported through the system according to a monotone cyclic feedback system [30]. That it is monotone means that the rate functions fk are non-decreasing, and a cyclic feedback system means that each population k is only in\ufb02uenced by population k + 1, where we identify n + 1 with 1. Thus, hkl \u22610 for all k, l such that l \u0338= k + 1. The memory kernels hkk+1 describe how population k + 1 in\ufb02uences population k. From now on, we identify mn+1 with m1 and introduce the memory variables xk t = Z t 0 hkk+1(t \u2212s)dmk+1 s , for 1 \u2264k \u2264n. (4.16) We have mk t = R t 0 fk(xk s)ds. For speci\ufb01c choices of kernel functions the above system of memory variables can be developed into a system of di\ufb00erential equations without delay by increasing the dimension of the system, see (4.19) below. We call this a Markovian cascade of successive memory terms. It is obtained by using Erlang kernels, given by hkk+1(s) = cke\u2212\u03bdks s\u03b7k \u03b7k! , for k < n, and hn1(s) = cne\u2212\u03bdns s\u03b7n \u03b7n! , (4.17) where \u03b7k \u2208N0, ck \u2208{\u22121, 1} and \u03bdk > 0 are \ufb01xed constants. 
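Because the Erlang kernels (4.17) have explicit Laplace transforms, the offspring matrix of (3.12) and the exponent α0 appearing in Proposition 3 and Theorem 2 can be computed numerically for a cyclic system. The sketch below uses placeholder values of L, c_k, ν_k and η_k (not taken from the paper) and relies on the identity ∫_0^∞ e^{-αt} e^{-νt} t^η/η! dt = 1/(ν + α)^{η+1}.

```python
import numpy as np
from scipy.optimize import brentq

# Hedged illustration (placeholder parameter values) of the offspring matrix Lambda in (3.12)
# and of alpha0 from Proposition 3, for a 2-population cyclic system with Erlang kernels (4.17).
L = 2.0                                   # common Lipschitz constant of the f_k, cf. (2.4)
c1, c2 = -1.0, 1.0
nu1, nu2 = 1.0, 1.0
eta1, eta2 = 3, 2

def largest_eigenvalue(alpha):
    """Largest eigenvalue of int e^{-alpha t} H(t) dt, with H as in (3.13), cyclic case n = 2."""
    a12 = L * abs(c1) / (nu1 + alpha) ** (eta1 + 1)   # L * int e^{-alpha t} |h_{12}(t)| dt
    a21 = L * abs(c2) / (nu2 + alpha) ** (eta2 + 1)
    return np.sqrt(a12 * a21)             # eigenvalues of [[0, a12], [a21, 0]] are +/- sqrt(a12*a21)

mu1 = largest_eigenvalue(0.0)             # largest eigenvalue of Lambda itself
if mu1 < 1.0:
    print(f"mu1 = {mu1:.3f} < 1: subcritical (Assumption 2), m^k_t grows at most linearly")
else:
    alpha0 = brentq(lambda a: largest_eigenvalue(a) - 1.0, 0.0, 100.0)
    print(f"mu1 = {mu1:.3f} > 1: supercritical (Assumption 3), m^k_t <= c*exp({alpha0:.3f}*t)")
```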
Here, \u03b7k + 1 is the order of the delay, i.e., the number of di\ufb00erential equations needed for population k to obtain a system without delay terms. The delay of the in\ufb02uence of population k + 1 on population k is distributed and taking its maximum absolute value at \u03b7k/\u03bdk time units back in time, and the mean is (\u03b7k +1)/\u03bdk (if normalizing to a probability density). The higher the order of the delay, the more concentrated is the delay around its mean value, and in the limit of \u03b7k \u2192\u221ewhile keeping (\u03b7k + 1)/\u03bdk \ufb01xed, the delay converges to a discrete delay. The sign of ck indicates if the in\ufb02uence is inhibitory or excitatory. Observing that h\u2032 kk+1(t) = \u2212\u03bdkhkk+1(t) + ck t\u03b7k\u22121 (\u03b7k\u22121)!e\u2212\u03bdkt leads to the following auxiliary variables xk,l t = Z t 0 cke\u2212\u03bdk(t\u2212s) (t \u2212s)\u03b7k\u2212l (\u03b7k \u2212l)! dmk+1 s , 1 \u2264k \u2264n, 0 \u2264l \u2264\u03b7k, where we identify xk = xk,0. Then we can rewrite dxk,l t dt = \u2212\u03bdkxk,l t + xk,l+1 t , l < \u03b7k. (4.18) Iterating this argument, the following system of coupled di\ufb00erential equations is obtained. For all 1 \u2264k \u2264n, and where as usual n + 1 is identi\ufb01ed with 1, dxk,l t dt = \u2212\u03bdkxk,l t + xk,l+1 t , 0 \u2264l < \u03b7k, dxk,\u03b7k t dt = \u2212\u03bdkxk,\u03b7k t + ckfk+1(xk+1,0 t ), (4.19) with initial conditions xk,l 0 = 0. System (4.19) exhibits the structure of a monotone cyclic feedback system as considered e.g. in [30] or as (33) and (34) in [2]. If Qn k=1 ck > 0, then 11 \fthe system (4.19) is of total positive feedback, otherwise it is of negative feedback. We obtain the following simple \ufb01rst result. Proposition 4 Suppose that Qn k=1 ck < 0 and that f1, . . . , fn are non-decreasing. Then (4.19) admits a unique equilibrium x\u2217. Proof Any equilibrium x\u2217must satisfy (x\u2217)n,\u03b7n = cn \u03bdn f1 \u25e6 c1 \u03bd\u03b71+1 1 f2 \u25e6. . . \u25e6 cn\u22121 \u03bd\u03b7n\u22121+1 n\u22121 fn( 1 \u03bd\u03b7n n (x\u2217)n,\u03b7n). Since cn \u03bdn f1 \u25e6 c1 \u03bd\u03b71+1 1 f2 \u25e6. . . \u25e6 cn\u22121 \u03bd \u03b7n\u22121+1 n\u22121 fn( 1 \u03bd\u03b7n n \u00b7) is decreasing, there exists exactly one solution (x\u2217)n,\u03b7n in R. Once (x\u2217)n,\u03b7n is \ufb01xed, we obviously have (x\u2217)n,\u03b7n\u22121 = 1 \u03bdn (x\u2217)n,\u03b7n, and the values of the other coordinates of x\u2217follow in a similar way. \u2022 In special cases system (4.19) is necessarily attracted to a non-equilibrium periodic orbit. Let \u03ba := n + Pn k=1 \u03b7k be the dimension of (4.19). We introduce the following assumption. Assumption 4 Suppose that fk, 1 \u2264k \u2264n, are non-decreasing bounded analytic functions. Moreover, suppose that \u03c1 := Qn k=1 ckf\u2032 k((x\u2217)k,0) satis\ufb01es that \u03c1 < 0. Notice that under Assumption 4, the conditions of Proposition 4 are satis\ufb01ed, and thus (4.19) admits a unique equilibrium x\u2217under Assumption 4. The following theorem is based on Theorem 4.3 of [30] and generalizes the result obtained by Theorem 6.3 in [2]. Theorem 3 Grant Assumption 4. Consider all solutions \u03bb of (\u03bd1 + \u03bb)\u03b71+1 \u00b7 . . . \u00b7 (\u03bdn + \u03bb)\u03b7n+1 = \u03c1 (4.20) and suppose that there exist at least two solutions \u03bb of (4.20) such that Re (\u03bb) > 0. (4.21) (i) x\u2217is linearly unstable, and the system (4.19) possesses at least one, but no more than a \ufb01nite number of periodic orbits. 
At least one of them is orbitally asymptotically stable. (ii) Moreover, if \u03ba = 3, then there exists a globally attracting invariant surface \u03a3 such that x\u2217is a repellor for the \ufb02ow in \u03a3. Every solution of (4.19) will be attracted to a non constant periodic orbit. Proof Since all functions fk are bounded, the system (4.19) possesses a compact invariant set K. Rewriting (4.19) as \u02d9 x = F(x), where x = (x1,0, . . . , xn,\u03b7n)T , the characteristic polynomial P(\u03bb) of DF(x\u2217) is given by 12 \fP(\u03bb) = n Y k=1 (\u2212\u03bdk \u2212\u03bb)\u03b7k+1 \u2212(\u22121)\u03ba\u03c1 = (\u22121)\u03ba[ n Y k=1 (\u03bdk + \u03bb)\u03b7k+1 \u2212\u03c1]. By assumption, there exist at least two eigenvalues having strictly positive real part. Therefore x\u2217is unstable. Moreover, since \u03c1 < 0, det(\u2212DF(x\u2217)) > 0 which is condition (4.5) of [30]. Then, the last assertion of item (i) follows from Theorem 4.3 of [30]. To prove part (ii), we follow the proof of Theorem 6.3 in [2]. First, notice that DF(x\u2217) is given by the matrix \uf8eb \uf8ed \u2212d a 0 0 \u2212e b c 0 \u2212f \uf8f6 \uf8f8, where d, e, f > 0 and \u03c1 = abc < 0. Hence, either all a, b, c are negative or only one of them, say c, is negative. In the \ufb01rst case, \u2212DF(x\u2217) is a positive irreducible matrix, in the second case, the change of variables y1 = x1, y2 = \u2212x2, y3 = x3 leeds to a negative irreducible matrix. We therefore suppose without loss of generality that we are in the \ufb01rst case. Then the Perron-Frobenius theorem implies that DF(x\u2217) possesses a single largest eigenvalue which is strictly negative, and the eigenvector associated to it has all its components of the same sign. Moreover, the other two eigenvectors associated to the conjugate complex eigenvalues having positive real part do not have all components of the same sign. By Theorem 1.7 of Hirsch (1988) [23], there exists a globally attracting invariant surface \u03a3 such that every trajectory within the invariant set K is eventually attracted to \u03a3. By [2] the equilibrium x\u2217is a repellor for the \ufb02ow in K. Hence, the Poincar\u00b4 e-Bendixson theorem implies that each such trajectory will eventually converge to a non constant periodic orbit. \u2022 Remark 3 If \u03bd1 = . . . = \u03bdn = \u03bd, then for \u03ba \u22653 the following condition |\u03c1| > \u03bd\u03ba \u0000cos( \u03c0 \u03ba) \u0001\u03ba (4.22) implies (4.21). Indeed, the di\ufb00erent eigenvalues for 1 \u2264j \u2264\u03ba are given by \u03bbj = \u2212\u03bd \u2212|\u03c1| 1 \u03ba ei 2j\u03c0 \u03ba (\u03ba odd) ; \u03bbj = \u2212\u03bd + |\u03c1| 1 \u03ba ei (2j\u22121)\u03c0 \u03ba (\u03ba even). If \u03ba is odd, then there is exactly one real root, which is strictly negative, \u03bb\u03ba = \u2212\u03bd \u2212|\u03c1| 1 \u03ba . The rest are complex conjugate pairs with real part \u2212\u03bd \u2212|\u03c1| 1 \u03ba cos( 2j\u03c0 \u03ba ). The maximal value is \u2212\u03bd + |\u03c1| 1 \u03ba cos( \u03c0 \u03ba) for j = (\u03ba \u00b1 1)/2, such that (4.22) implies (4.21). If \u03ba is even, then all roots are complex conjugate pairs with real part \u2212\u03bd + |\u03c1| 1 \u03ba cos( (2j\u22121)\u03c0 \u03ba ). The maximal value is as before, now for j = 1, \u03ba, such that again (4.22) implies (4.21). 13 \fRemark 4 (Phase transition due to increasing memory) In some cases, increasing the order of the memory, i.e. 
the value of some of the exponents \u03b7k in (4.17) or equivalently the value of \u03ba, can lead to a phase transition within the system (4.19). At the phase transition point, a system which was stable can become unstable, and in certain cases, increasing the order even more might stabilize the system again. As an example, consider a family of n populations of neurons, where n > 1 is \ufb01xed, and such that \u03bdk = \u03bd for all 1 \u2264k \u2264n. If \u03ba = 2, the \ufb01xed point is stable since eigenvalues are \u03bbj = \u2212\u03bd \u00b1i p |\u03c1|, and only damped oscillations occur. We will assume \u03ba \u22653. First note that \u03c1 is bounded due to the Lipschitz condition on the rate functions fk. The right hand side of (4.22) goes to in\ufb01nity for \u03bd \u2192\u221efor all values of \u03ba, and thus, if \u03bd is large, the system will always be stable and not exhibit oscillations. For any \ufb01xed value of \u03bd > 1, it also goes to in\ufb01nity for \u03ba \u2192\u221e, such that a possible unstable system becomes stable for increasing \u03ba. This implies that for a discrete delay of any value the system will never exhibit oscillations, since a discrete delay is obtained for \u03b7k \u2192\u221e, keeping \u03b7k/\u03bdk constant. Now assume that \u03bd = 1. Then increasing \u03ba does not change the coordinates (x\u2217)k,0 of the equilibrium state x\u2217, so \u03c1 does not change. The right hand side of (4.22) decreases towards one, so if \u22128 < \u03c1 < \u22121, then there exists \u03ba0 > 3 minimal such that for all \u03ba \u2265\u03ba0, (4.22) is ful\ufb01lled, but |\u03c1| \u2264 \u0000\u03bd/ cos( \u03c0 \u03ba) \u0001\u03ba for \u03ba < \u03ba0. Then all models corresponding to \u03ba < \u03ba0 have x\u2217as attracting equilibrium point, but for \u03ba \u2265\u03ba0, the equilibrium x\u2217becomes unstable. As a corollary of the above Theorem, we show that one of the conditions needed to state the central limit theorem in Theorem 2 is satis\ufb01ed. Corollary 1 Suppose that n = 2 and that the conditions of Theorem 3 hold true. Then there exist \u03b11, \u03b12 > 0 such that lim inf t\u2192\u221e mk t t \u2265\u03b1k, k = 1, 2. Proof Since c1c2 < 0, any solution to (4.19) is eventually attracted to a non constant periodic orbit. Since mk t = R t 0 fk(xk,0 s )ds and since fk is non-decreasing and strictly positive, it follows that lim inft\u2192\u221e1 t mk t > 0. \u2022 5 Study of an approximating di\ufb00usion process and simulation study In this section we work with the cyclic feedback system of the last section. The aim is to study to which extent the behavior of the limit system is also observed within the \ufb01nite size system ZN k,i. 14 \f5.1 An associated system of piecewise deterministic Markov processes Introducing the family of adapted c` adl` ag processes (recall (4.16)) XN k (t) := 1 Nk+1 Nk+1 X j=1 Z ]0,t] hkk+1(t \u2212s)dZN k+1,j(s) = Z ]0,t] hkk+1(t \u2212s)d \u00af ZN k+1(s), (5.23) where \u00af ZN k+1(s) = 1 Nk+1 PNk+1 j=1 ZN k+1,j(s) and recalling (2.3), it is clear that the dynamics of the system is entirely determined by the dynamics of the processes XN k (t\u2212), t \u22650.1 In some sense, XN k describes the accumulated memory belonging to the directed edge pointing from population k + 1 to population k. Without assuming the memory kernels to be Erlang kernels, the system (XN k , 1 \u2264k \u2264n) is not Markovian. For general memory kernels, Hawkes processes are truly in\ufb01nite memory processes. 
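For a fixed time t, the accumulated memory (5.23) can always be evaluated directly from the simulated spike trains, at a cost that grows with the entire spike history. The following sketch (illustrative only, with an Erlang kernel as an example and hypothetical spike times) makes this explicit, and motivates the Markovian cascade derived next.

```python
import numpy as np
from math import factorial

def erlang_kernel(s, c=1.0, nu=1.0, eta=2):
    """Erlang kernel h(s) = c * exp(-nu*s) * s**eta / eta! from (4.17); zero for s <= 0."""
    s = np.asarray(s, dtype=float)
    out = np.zeros_like(s)
    pos = s > 0
    out[pos] = c * np.exp(-nu * s[pos]) * s[pos] ** eta / factorial(eta)
    return out

def memory_process(t, spike_times, N_next, kernel=erlang_kernel):
    """Direct evaluation of X^N_k(t) in (5.23): a kernel-weighted sum over *all* past spikes
    of population k+1, averaged over the N_{k+1} neurons.  The cost grows with the history."""
    s = np.asarray(spike_times, dtype=float)
    return kernel(t - s[s < t]).sum() / N_next

# hypothetical pooled spike times of population k+1, e.g. produced by the thinning sketch above
pooled_spikes = np.sort(np.random.default_rng(0).uniform(0.0, 10.0, size=200))
print(memory_process(10.0, pooled_spikes, N_next=20))
```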
When the kernels are Erlang, given by (4.17), taking formal derivatives in (5.23) with respect to time t and introducing for any k and 0 \u2264l \u2264\u03b7k XN k,l(t) := ck Z ]0,t] (t \u2212s)\u03b7k\u2212l (\u03b7k \u2212l)! e\u2212\u03bdk(t\u2212s)d \u00af ZN k+1(s), (5.24) we obtain the following system of stochastic di\ufb00erential equations which is a stochastic version of (4.19). \u001a dXN k,l(t) = [\u2212\u03bdkXN k,l(t) + XN k,l+1(t)]dt, 0 \u2264l < \u03b7k, dXN k,\u03b7k(t) = \u2212\u03bdkXN k,\u03b7k(t)dt + ckd \u00af ZN k+1(t). (5.25) Here, XN k is identi\ufb01ed with XN k,0, \u00af ZN k = 1 Nk PNk j=1 ZN k,j, and each ZN k,j jumps at rate fk(XN k,0(t\u2212)). We call the system (5.25) a cascade of memory terms. Thus, the dynamics of the Hawkes process (ZN k,i(t))(k,i)\u2208IN is entirely determined by the piecewise deterministic Markov process (PDMP) (XN k,l)(1\u2264k\u2264n,0\u2264l\u2264\u03b7k) of dimension \u03ba. 5.2 A di\ufb00usion approximation in the large population regime The process \u00af ZN k+1(t) appearing in the last equation of (5.25) jumps at a rate given by Nk+1fk+1(XN k+1,0(t\u2212)), having jumps of size 1 Nk+1 . Its variance is fk+1(XN k+1,0(t\u2212)) Nk+1 . Therefore, it is natural to consider the approximating di\ufb00usion process \uf8f1 \uf8f2 \uf8f3 dY N k,l(t) = [\u2212\u03bdkY N k,l(t) + Y N k,l+1(t)]dt, 0 \u2264l < \u03b7k, dY N k,\u03b7k(t) = \u2212\u03bdkY N k,\u03b7k(t)dt + ckfk+1(Y N k+1,0(t))dt + ck q fk+1(Y N k+1,0)(t) \u221a Nk+1 dBk+1(t), (5.26) where the Bl(t), 1 \u2264l \u2264n, are independent standard Brownian motions, approximating the jump noise of each population. Write AX for the in\ufb01nitesimal generator of the process (5.25) and AY for the corresponding generator of (5.26). Moreover, write P X t and P Y t for the associated Markovian semigroups. We denote generic elements of the state space R\u03ba of Y N by x = (x1, . . . , x\u03ba). Finally, for a function g de\ufb01ned on R\u03ba, we de\ufb01ne \u2225g\u2225r,\u221e:= r X k=0 X |\u03b1|=k \u2225\u2202\u03b1g\u2225\u221e. 1We have to take the left-continuous version, since intensities are predictable processes. 15 \fThen we obtain the following approximation result showing that Y N is a good small noise approximation of XN. Theorem 4 Suppose that all spiking rate functions fk belong to the space C5 b of bounded functions having bounded derivatives up to order 5. Then there exists a constant C depending only on f1, . . . , fn and the bounds on its derivatives such that for all \u03d5 \u2208C4 b (R\u03ba; R), \u2225P X t \u03d5 \u2212P Y t \u03d5\u2225\u221e\u2264Ct\u2225\u03d5\u22254,\u221e N2 . The proof is given in the Appendix. Theorem 4 is a \ufb01rst step towards convergence in law and shows that the di\ufb00usion process (5.26) is a good approximation of (5.25), as N \u2192\u221e. However, in the limit of N \u2192\u221e, (5.26) is not a di\ufb00usion anymore, since the di\ufb00usive term tends to zero. Both processes, XN and Y N, tend to the limit process described in section 2.2. This convergence is of rate 1 N , which is slower than the approximation proved in Theorem 4. 5.3 Oscillations of the approximating di\ufb00usion at \ufb01xed population size We now show to which extent the approximating di\ufb00usion process (5.26) imitates the oscillatory behavior of the limit system described in section 4. Consider two populations, n = 2, where the memory kernels are given by (4.17). 
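Between jumps, each memory block of (5.25) follows a linear ODE whose flow is explicit, and a spike of population k+1 only shifts the last coordinate of block k by c_k/N_{k+1}. The two helpers below are an illustrative sketch of these ingredients; combined with a thinning loop as in the sketch after (1.1), they give an event-by-event simulation of the Hawkes system whose per-event cost does not grow with the history.

```python
import numpy as np
from math import factorial

def flow(block, nu, s):
    """Exact deterministic flow of one memory block of (5.25) over an inter-jump interval s:
    dX_l/dt = -nu*X_l + X_{l+1} for l < eta and dX_eta/dt = -nu*X_eta, so that
    X_l(t+s) = exp(-nu*s) * sum_{m >= l} s**(m-l)/(m-l)! * X_m(t)."""
    block = np.asarray(block, dtype=float)
    dim = block.size
    out = np.array([sum(s ** (m - l) / factorial(m - l) * block[m]
                        for m in range(l, dim)) for l in range(dim)])
    return np.exp(-nu * s) * out

def spike_update(block, c, N_next):
    """A spike of population k+1 increases the empirical mean process by 1/N_{k+1}; by the
    last line of (5.25) this shifts only X_{k,eta_k}, by c_k / N_{k+1}."""
    block = np.asarray(block, dtype=float).copy()
    block[-1] += c / N_next
    return block

# example: decay a block with eta_k = 3 over 0.5 time units, then apply one incoming spike
print(spike_update(flow([0.2, 0.1, 0.0, 0.4], nu=1.0, s=0.5), c=-1.0, N_next=20))
```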
We denote by b(x) := \uf8eb \uf8ec \uf8ec \uf8ec \uf8ec \uf8ec \uf8ec \uf8ec \uf8ec \uf8ec \uf8ec \uf8ed \u2212\u03bd1x1 + x2 \u2212\u03bd1x2 + x3 . . . \u2212\u03bd1x\u03b71+1 + c1f2(x\u03b71+2) \u2212\u03bd2x\u03b71+2 + x\u03b71+3 . . . \u2212\u03bd2x\u03ba + c2f1(x1) \uf8f6 \uf8f7 \uf8f7 \uf8f7 \uf8f7 \uf8f7 \uf8f7 \uf8f7 \uf8f7 \uf8f7 \uf8f7 \uf8f8 (5.27) the drift vector of (5.26). Moreover, we introduce the \u03ba \u00d7 2\u2212di\ufb00usion matrix \u03c3(x) := \uf8eb \uf8ec \uf8ec \uf8ec \uf8ec \uf8ec \uf8ec \uf8ec \uf8ec \uf8ec \uf8ed 0 0 . . . . . . 0 c1 \u221ap2 p f2(x\u03b71+2) 0 0 . . . . . . c2 \u221ap1 p f1(x1) 0 \uf8f6 \uf8f7 \uf8f7 \uf8f7 \uf8f7 \uf8f7 \uf8f7 \uf8f7 \uf8f7 \uf8f7 \uf8f8 , (5.28) where p1 = N1/N, p2 = N2/N. Then we may rewrite (5.26) as dY N(t) = b(Y N(t))dt + 1 \u221a N \u03c3(Y N(t))dB(t), (5.29) with B(t) = (B1(t), B2(t))T . 16 \fThroughout this section, we assume the conditions of Theorem 3, in particular, suppose that c1c2 < 0. Moreover, suppose that f1 and f2 are smooth strictly positive nondecreasing functions. In this case, under condition (4.21), the associated limit system possesses a non constant periodic orbit which is asymptotically orbitally stable. We will now show that also the \ufb01nite size system (5.29) is attracted to this periodic orbit. The existence of a global Lyapunov function implies that there exists a compact set K such that process (5.29) visits K in\ufb01nitely often, almost surely. More precisely, recalling that AY denotes the in\ufb01nitesimal generator of (5.29), we have the following result. In order to simplify notation, in the next proposition, we write x = (x1,0, . . . , x1,\u03b71, x2,0, . . . , x2,\u03b72) for generic elements of R\u03ba. Proposition 5 Grant the conditions of Theorem 3. Let j(x) be a smoothed version of |x|, i.e., j(x) = |x| for all |x| \u22651, |j\u2032(x)| \u2264C, |j\u2032\u2032(x)| \u2264C, for some constant C, for all x. Put then G(x) := P2 k=1 P\u03b7k l=0 l+1 \u03bdl k j(xk,l). Then G is a Lyapunov-function for (5.29) in the sense that AY G(x) \u2264\u2212cG(x) + d, for some constants c, d > 0 depending on maxk \u2225fk\u2225\u221e. Proof The above statement follows from the fact that the drift part of AY G(x) is given by 2 X k=1 \u03b7k\u22121 X j=0 n \u2212\u03bdkxk,j + xk,j+1o \u2202G \u2202xk,j \u2212 2 X k=1 \u03bdkxk,\u03b7k \u2202G \u2202xk,\u03b7k + 2 X k=1 ckfk+1(xk+1,0) \u2202G \u2202xk,\u03b7k . But for |xk,j| \u22651, n \u2212\u03bdkxk,j + xk,j+1o \u2202G \u2202xk,j = n \u2212\u03bdkxk,j + xk,j+1o sign(xk,j)(j + 1) (\u03bdk)j \u2264\u2212(j + 1) \u03bdj\u22121 k |xk,j| + (j + 1) \u03bdj k |xk,j+1|. As a consequence, if |xk,j| \u22651 for all k, j, the contribution of the drift part can be upper bounded by \u2212 2 X k=1 \u03bdk|xk,0| \u2212 2 X k=1 \u03b7k X j=1 1 \u03bdj\u22121 k |xk,j| + 2 max(\u03b71+1 \u03bd\u03b71 1 \u2225f1\u2225\u221e, \u03b72+1 \u03bd\u03b72 2 \u2225f2\u2225\u221e) \u2264\u2212cG(x) + d, for some constants c, d > 0. On the other hand, the contributions coming from terms with |xk,j| \u22641 are bounded, and the contribution coming from the di\ufb00usion part of AY G is bounded as well. This \ufb01nishes the proof. \u2022 In particular, putting K = {G \u22642d/c}, it follows that AY G(x) \u2264\u2212c 2G(x) + d1K(x). (5.30) It is well-known (see e.g. Douc, Fort and Guillin (2009) [15]) that (5.30) implies that Ex(e c 2 \u03c4K) \u2264G(x), (5.31) 17 \fwhere \u03c4K = inf{t \u22650 : Y N(t) \u2208K}. 
Thus, the process comes back to the compact K in\ufb01nitely often, and excursions out of K have exponential moments. In particular, we can concentrate on the study of the trajectories inside K. We will now study the behavior of the trajectories of Y N inside the compact K. By Theorem 3, under condition (4.21), the limit system possesses a non constant periodic orbit which is asymptotically orbitally stable. Denote this orbit by \u0393 and let T be its periodicity. We suppose without loss of generality that \u0393 \u2282K. In the following, we will show that each time the process is inside K, it will also visit vicinities of the periodic orbit (in a sense that will be made precise in Theorem 5 below). We start with support properties. Fix \u03b5 > 0 and let S(\u03b5, \u0393) := {x : d(x, \u0393) < \u03b5} be a tube around this orbit. Denote by QY x the law of the solution (Y N(t), t \u22650) of (5.29), starting from Y N(0) = x. Fix t1 > 1 and let O = {\u03d5 \u2208C(R+; R\u03ba) : \u03d5(t) \u2208S(\u03b5, \u0393) \u22001 \u2264t \u2264t1}. Then we have the following \ufb01rst result concerning the support of QY x . Proposition 6 Under the conditions of Theorem 3 and supposing that f1 and f2 are strictly positive, the following holds true. For all x \u2208R\u03ba, we have that QY x (O) > 0. Hence, the di\ufb00usion Y N visits the tube around \u0393, starting from any initial point x \u2208R\u03ba, during the time interval [1, t1], with positive probability. The fact that we choose time intervals [1, t1] is not important, and we could equally work with any time interval [t0, t1], for any \ufb01xed t0 > 0, see the proof in Appendix 6.4. Note that the above proposition gives a statement concerning the support of the law of Y N for \ufb01xed N. Its proof relies on the support theorem for di\ufb00usions. The result does not say anything about the actual value of the probability of tubes around \u0393, nor about the precise N \u2192\u221e\u2212asymptotics : this is outside the scope of the present paper and will be the subject of a future work. Proposition 6 implies that there is strictly positive probability to visit tubes around \u0393, yet, we do not know that such visits arrive almost surely, within a \ufb01nite time horizon. In order to prove this, we will show that x 7\u2192QN x (O) is lower-bounded on compacts. Thus, we have to control the dependence on the starting con\ufb01guration x in the above proposition. Therefore, we show that x 7\u2192QY x (A) is continuous for Borel sets A \u2208B(C(R+; R\u03ba)) which are e.g. of the form O, i.e., we need to show that the process Y N is strong Feller. However, Y N is a degenerate di\ufb00usion process since Brownian noise only appears in the coordinates Y N 1,\u03b71 and Y N 2,\u03b72. Following Ishihara and Kunita (1974) [26], Lemma 5.1, we therefore show that the process satis\ufb01es the weak H\u00a8 ormander condition. Notice that due to the speci\ufb01c form of the di\ufb00usion matrix in equation (5.28), the It\u02c6 o and the Stratonovich forms are the same. Recall that for smooth vector \ufb01elds f(x) and g(x) : R\u03ba \u2192R\u03ba, the Lie bracket [f, g] is de\ufb01ned by [f, g]i = \u03ba X j=1 \u0012 fj \u2202gi \u2202xj \u2212gj \u2202fi \u2202xj \u0013 , i = 1, . . . , \u03ba. We write \u03c31, \u03c32 : R\u03ba \u2192R\u03ba for the two column vectors constituting the di\ufb00usion matrix \u03c3 of (5.28). 
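As a concrete illustration of the bracketing argument (made precise in Definition 3, Definition 4 and Proposition 7 below), the following symbolic sketch treats the smallest instance κ = 3, that is n = 2, η1 = 1, η2 = 0 and x = (Y_{1,0}, Y_{1,1}, Y_{2,0}), with generic smooth positive rates. The choice of this particular low-dimensional instance is ours, made only for illustration.

```python
import sympy as sp

# Symbolic check of the weak Hormander condition for the smallest instance of (5.29):
# n = 2, eta1 = 1, eta2 = 0, kappa = 3, state x = (Y_{1,0}, Y_{1,1}, Y_{2,0}).
x1, x2, x3 = sp.symbols('x1 x2 x3')
nu1, nu2, p1, p2 = sp.symbols('nu1 nu2 p1 p2', positive=True)
c1, c2 = sp.symbols('c1 c2')
f1, f2 = sp.Function('f1'), sp.Function('f2')
X = sp.Matrix([x1, x2, x3])

b = sp.Matrix([-nu1 * x1 + x2,
               -nu1 * x2 + c1 * f2(x3),
               -nu2 * x3 + c2 * f1(x1)])                      # drift (5.27)
s1 = sp.Matrix([0, (c1 / sp.sqrt(p2)) * sp.sqrt(f2(x3)), 0])  # first column of sigma in (5.28)
s2 = sp.Matrix([0, 0, (c2 / sp.sqrt(p1)) * sp.sqrt(f1(x1))])  # second column of sigma

def lie(f, g):
    """Lie bracket [f, g]_i = sum_j (f_j * d g_i/dx_j - g_j * d f_i/dx_j)."""
    return g.jacobian(X) * f - f.jacobian(X) * g

bracket = lie(b, s1)                       # a single bracket with the drift suffices here
span_matrix = sp.Matrix.hstack(s1, s2, bracket)
print(sp.simplify(span_matrix.det()))
# The determinant is proportional to f2(x3) * sqrt(f1(x1)) and hence never vanishes when
# f1, f2 > 0 and c1, c2 are +/-1: the three fields span R^3, i.e. (5.33) holds with M = 1,
# consistent with M = max(eta1, eta2) = 1 in Proposition 7.
```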
18 \fDe\ufb01nition 3 De\ufb01ne a set L of vector \ufb01elds by the \u2018initial condition\u2019 \u03c31, \u03c32 \u2208L and an arbitrary number of iteration steps L \u2208L = \u21d2[b, L], [\u03c31, L], [\u03c32, L] \u2208L . (5.32) For M \u2208N, de\ufb01ne the subset LM by the same initial condition and at most M iterations (5.32). Write L\u2217 M for the closure of LM under Lie brackets; \ufb01nally, write \u2206L\u2217 M := LA(LM) for the linear hull of L\u2217 M, i.e. the Lie algebra spanned by LM. De\ufb01nition 4 We say that a point z\u2217\u2208R\u03ba is of full weak H\u00a8 ormander dimension if there is some M \u2208N such that (dim \u2206L\u2217 M )(z\u2217) = \u03ba. (5.33) Due to the cascade structure of the drift vector b and since f1(\u00b7), f2(\u00b7) > 0 on R, it is straightforward to show that the weak H\u00a8 ormander dimension holds at all points x \u2208R\u03ba : Proposition 7 Suppose that f1 and f2 are smooth and strictly positive. Then for all x \u2208R\u03ba, (dim \u2206L\u2217 M )(x) = \u03ba, where M = max(\u03b71, \u03b72). Proof The proof is done by \ufb01rst calculating the Lie-bracket [\u03c31, b] and [\u03c32, b] and then successively bracketing with b. \u2022 Once the weak H\u00a8 ormander condition holds everywhere, it follows that the process is strongly Feller, see [26]. As a consequence, the following holds. Corollary 2 Let A \u2208B(C(R+; R\u03ba)) be of the type A = {\u03d5 : \u03d5(t) \u2208B \u2200t1 \u2264t \u2264t2}, for some B \u2208B(R\u03ba) and for t1 < t2. Then x 7\u2192QY x (A) is continuous. Proof Using the Markov property, we have QY x (A) = Z P Y t1 (x, dy)QY y ({\u03d5 : \u03d5(t) \u2208B \u22000 \u2264t \u2264t2 \u2212t1}) = P Y t1 \u03a6(x), where \u03a6(y) = QY y ({\u03d5 : \u03d5(t) \u2208B \u22000 \u2264t \u2264t2 \u2212t1}) is measurable and bounded. But P Y t1 \u03a6(x) is continuous in x, since Y N is strong Feller. \u2022 A direct consequence of the above result is the fact that K \u220bx 7\u2192QY x (O) is strictly lower bounded for any \ufb01xed t1 > 1. This fact enables us to conclude our discussion and to show that the di\ufb00usion approximation (5.29) will have the same type of oscillations as the limit system (m1 t , m2 t ). Let \u03c4\u0393(t1) := inf{t \u22650 : Y N(s) \u2208S(\u03b5, \u0393) \u2200t \u2264s \u2264t + t1}. 19 \fTheorem 5 Grant the assumptions of Theorem 3 and let \u0393 be a non constant periodic orbit of period T of the limit system which is asymptotically orbitally stable. Then for all \u03b5 > 0 and t1 > 1, there exist C, \u03bb > 0 such that Ex(e\u03bb\u03c4\u0393(t1)) \u2264CG(x). Moreover, lim sup t\u2192\u221e 1{Y N(s)\u2208S(\u03b5,\u0393) \u2200t\u2264s\u2264t+t1} = 1 Px\u2212almost surely, for all x \u2208R\u03ba. In particular, the above theorem implies that the process Y N visits the oscillatory region S(\u03b5, \u0393) during time intervals of length t1 in\ufb01nitely often. The choice of t1 in the above theorem is free. By choosing t1 larger than the period T of \u0393 (or even choosing t1 \u2265kT for some \ufb01xed k \u2208N), this implies that Y N will present oscillations in\ufb01nitely often almost surely. Proof By Proposition 6, for every x and every \ufb01xed t1 > 1, QY x (O) > 0. By continuity and since K is compact, this implies that inf x\u2208K QY x (O) > 0. Now, by (5.31), the process visits the compact set K in\ufb01nitely often, almost surely, with exponential moments for the successive visits of the process to K. 
The assertion then follows by the conditional version of the Borel-Cantelli lemma. • Remark 5 The above result shows that Y^N does actually visit tubes around the periodic orbit infinitely often, with waiting times that possess exponential moments. It is valid for a fixed population size N. The above result does not show that, starting from a vicinity of Γ, the diffusion stays there for a long time before being kicked out of the tube due to noise. Such a study is much more difficult; it is related to the large deviation properties of the process and will be the topic of a future work. 5.4 Simulation study In this section we will check by simulations how N influences the approximating diffusion, and compare the behavior of (4.19) and (5.26). We set n = 2 and c1 = −1, c2 = 1 such that population 1 is inhibitory and population 2 is excitatory, and thus ρ < 0, and it is a negative feedback system. We will use the following bounded, Lipschitz and strictly increasing rate functions: f1(x) = 10 e^x for x < log(20) and f1(x) = 400/(1 + 400 e^{−2x}) for x ≥ log(20); f2(x) = e^x for x < log(20) and f2(x) = 40/(1 + 400 e^{−2x}) for x ≥ log(20). In the first set of simulations, we put ν1 = ν2 = 1 and η1 = 3, η2 = 2 such that κ = 7. Then (x∗)^{1,l} = −2.424 for l = 0, 1, . . . , η1, and (x∗)^{2,l} = 0.885 for l = 0, 1, . . . , η2. This yields ρ = −2.15 and ν^κ/(cos(π/κ))^κ = 2.08, and thus, (4.22) is fulfilled. The period is approximately 2π/ω = 12.98, where ω = |ρ|^{1/κ} sin(π/κ). Finally, we put N1 = N2 = 20. Results are presented in Fig. 1. [Figure 1 graphic: four panels showing the memory variables x(t) of the limit system, the approximating diffusion Y(t), the intensity processes λ^N_k(t) and the cumulative intensities m^k_t against time.] Figure 1: Comparison of the limit system and the diffusion approximation through simulations. Left top panel: The limiting system (4.19). The black curves are x^{k,0} given in (4.16), and those that are felt by the system. The gray curves are the variables in the Markovian memory cascade. The dotted lines indicate the location of the critical point. The inset is a blow-up of the last part of the simulation. Parameters are c1 = −1, c2 = 1, ν1 = ν2 = 1, η1 = 3, η2 = 2. Left bottom panel: As above but for the approximating system (5.26), with N1 = N2 = 20. Right top panel: Intensity processes corresponding to the simulations on the left. Right bottom panel: The cumulative intensity processes. As predicted by the Central Limit Theorem 2, both approximations are getting worse as t increases. The cascade structure in the memory variables is clearly seen, and the noisy diffusion approximation follows the limit cycle. The periodic behavior is evident. To investigate how close the approximating diffusion follows the oscillatory behavior of the limit system, we simulated 20 repetitions of the noisy process for different values of N = 20, 100, 200, 1000 and for p1 = p2 = 1/2, and compared it to the limit system on a later time interval in Fig. 2. For small N, the system shifts phase randomly relative to the limiting system, but it maintains the oscillations. For larger N, the system follows the limiting system closely on a longer time horizon.
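The quantities reported above for the first simulation set, and the comparison of Figures 1 and 2, can be reproduced with a few lines of Python. The sketch below is ours, not the authors' code: it recomputes the equilibrium, ρ, the threshold in (4.22) and the approximate period, and then runs a plain Euler-Maruyama discretization of (5.26) next to an Euler scheme for the limit system (4.19); step size and time horizon are arbitrary choices.

```python
import numpy as np
from scipy.optimize import brentq

# Parameters of the first simulation set in Section 5.4
c1, c2 = -1.0, 1.0
nu1, nu2 = 1.0, 1.0
eta1, eta2 = 3, 2
kappa = 2 + eta1 + eta2          # = 7
N1, N2 = 20, 20
log20 = np.log(20.0)

def f1(x):  # bounded, Lipschitz, strictly increasing rate of the inhibitory population
    return 10.0 * np.exp(x) if x < log20 else 400.0 / (1.0 + 400.0 * np.exp(-2.0 * x))

def f2(x):  # rate of the excitatory population
    return np.exp(x) if x < log20 else 40.0 / (1.0 + 400.0 * np.exp(-2.0 * x))

# Equilibrium of (4.19): x^{1,0} = c1 f2(x^{2,0}) / nu1^(eta1+1), x^{2,0} = c2 f1(x^{1,0}) / nu2^(eta2+1)
xs1 = brentq(lambda x: c1 * f2(c2 * f1(x) / nu2 ** (eta2 + 1)) / nu1 ** (eta1 + 1) - x, -20.0, 20.0)
xs2 = c2 * f1(xs1) / nu2 ** (eta2 + 1)
# below log(20) each rate equals its own derivative, so rho = c1*c2*f1'(xs1)*f2'(xs2) simplifies
rho = c1 * c2 * f1(xs1) * f2(xs2)
threshold = nu1 ** kappa / np.cos(np.pi / kappa) ** kappa      # right-hand side of (4.22)
omega = abs(rho) ** (1.0 / kappa) * np.sin(np.pi / kappa)
print(xs1, xs2, rho, threshold, 2 * np.pi / omega)   # approx -2.424, 0.885, -2.15, 2.08, 12.98

def drift(x):
    """Drift of (4.19)/(5.27): two coupled Markovian memory cascades."""
    d = np.empty(kappa)
    d[0:eta1] = -nu1 * x[0:eta1] + x[1:eta1 + 1]
    d[eta1] = -nu1 * x[eta1] + c1 * f2(x[eta1 + 1])
    d[eta1 + 1:kappa - 1] = -nu2 * x[eta1 + 1:kappa - 1] + x[eta1 + 2:kappa]
    d[kappa - 1] = -nu2 * x[kappa - 1] + c2 * f1(x[0])
    return d

def simulate(T=100.0, dt=1e-2, stochastic=True, seed=1):
    """Euler(-Maruyama) sketch: the noisy version is (5.26), the noiseless one is (4.19)."""
    rng = np.random.default_rng(seed)
    x = np.zeros(kappa)                              # initial conditions x^{k,l}_0 = 0
    path = [x.copy()]
    for _ in range(int(T / dt)):
        dx = drift(x) * dt
        if stochastic:                               # noise enters only the last coordinate of each block
            dx[eta1] += c1 * np.sqrt(f2(x[eta1 + 1]) / N2) * rng.normal(0.0, np.sqrt(dt))
            dx[kappa - 1] += c2 * np.sqrt(f1(x[0]) / N1) * rng.normal(0.0, np.sqrt(dt))
        x = x + dx
        path.append(x.copy())
    return np.array(path)

limit_path = simulate(stochastic=False)              # limit system, as in the top left of Fig. 1
noisy_path = simulate(stochastic=True)               # approximating diffusion with N1 = N2 = 20
# columns 0 and eta1+1 are x^{1,0} and x^{2,0}, the memory variables actually felt by the rates
```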
To study phase transitions, we put ν = ν1 = ν2 = 0.8 and for η1 = η2 we varied κ = 4, 8, 12, 16, 20, 24. Condition (4.22) is only fulfilled for 6 ≤ κ ≤ 12. Results are in Fig. 3, left. For κ = 4 the condition is not fulfilled, and damped oscillations are seen. Then, increasing κ, a phase transition occurs, yielding sustained oscillations. A further phase transition occurs when κ becomes larger than 12. For values of κ between 12 and 16, damped oscillations happen after the initial large excursion, and when κ becomes even larger, the system converges to the steady state in a seemingly monotone manner. Note that for ν ≠ 1 the steady state depends on the order of the memory. Finally, we fix η1 = η2 = 3 and vary ν in Fig. 3, right. For low and high values of ν, no oscillations occur, but at νlow = 0.7169 a Hopf bifurcation occurs, and another at νhigh = 1.3982. Thus, the system oscillates in an interval ν ∈ (νlow, νhigh). In general, the interval will depend on the order of the memory. [Figure 2 graphic: four panels of trajectories Y(t) over the time interval [150, 200] for N1 = N2 = 10, 50, 100 and 500.] Figure 2: Diffusion approximation for increasing N, each panel contains 20 realizations. The cyan curve is the limit system. Parameters are as in Fig. 1. [Figure 3 graphic: left panel, trajectories x(t) for η1 = η2 = 1, 3, 5, 7, 9, 11; right panel, ρ and the threshold ν^κ/(cos(π/κ))^κ as functions of ν for κ = 8, η1 = η2 = 3, with the interval where oscillations occur shaded.] Figure 3: Phase transitions and Hopf bifurcations. Left: Phase transitions due to increasing order of the memory. Parameters are ν1 = ν2 = 0.8 and varying η1 = η2, so that κ = 2(1 + η1). Right: Check of condition (4.22) as a function of ν. In the cyan area, at least two eigenvalues of DF(x∗) have positive real part, and self-sustained oscillations occur in system (4.19). Hopf bifurcations happen at values νlow = 0.7169 and at νhigh = 1.3982. 6 Appendix In the proofs below, C denotes a generic constant, which might change value from equation to equation, and from line to line, even within the same equation. 6.1 Convolution equations in the matrix case The following is a version of Lemma 26 of [13] in the multidimensional case. It is based on old results of [1, 11] on systems of renewal equations. Lemma 1 (Corollary 3.1 of [11], see also Theorem 2.2 of [1]) Let H(s) be given by (3.13) such that Assumption 3 is fulfilled (supercritical case). Put Γ(t) = Σ_{n≥1} H^{∗n}(t). Then the following assertions hold. 1. There is a unique α0 such that ∫_0^∞ e^{−α0 t} H(t) dt has largest eigenvalue equal to +1. 2. Γ is locally bounded. Moreover, for some constant C, Γ_{ij}(t) ≤ C e^{α0 t}. (6.34) 3. For any pair of locally bounded functions u, h : R+ → R^n such that u = h + H ∗ u, it holds u = h + Γ ∗ h. Proof 1. Due to Assumption 3, there exist constants C and p such that |h_{kl}|(t) ≤ C(1 + t^p), for all 1 ≤ k, l ≤ n, for all t ≥ 0.
As a consequence, the matrix-Laplace transform LH(\u03b1) := Z \u221e 0 e\u2212\u03b1tH(t)dt is well-de\ufb01ned for all \u03b1 > 0. Being a primitive matrix, by the Perron-Frobenius theorem, it possesses a unique maximal eigenvalue \u03bbH(\u03b1) with an associated eigenvector composed of positive coordinates. By Assumption 3, \u03bbH(0) > 1 and \u03bbH(\u221e) = 0. This implies that there exists a unique \u03b10 such that \u03bbH(\u03b10) = 1. (This step demands some extra work, which is done in [1, 11]). 2. Let M(t) = e\u2212\u03b10tH(t). Then M \u2217M(t) = e\u2212\u03b10tH \u2217H(t), thus, P n\u22651 M\u2217n(t) = e\u2212\u03b10t\u0393(t). Since H(s) is at most of polynomial growth, then R \u221e 0 tM(t)dt < \u221ecomponentwise. This implies that all entries of the matrix H of (3.9) of [11] are \ufb01nite, so that limt\u2192\u221e P n\u22651(M\u2217n(t))i,j/t exists and is \ufb01nite, for all 1 \u2264i, j \u2264n, see page 431 of the proof of Theorem 3.1 of [11], see also Theorem 2.2 of [1]. It follows that there exists a constant C such that \u0393i,j(t) \u2264Ce\u03b10t, for all 1 \u2264i, j \u2264n. 3. Follows from Theorem 2.1 of [11]. \u2022 6.2 Proof of Theorem 2 We \ufb01rst prove Propositions 2 and 3. Proof of Proposition 2 We have \u03bbk t := dmk t dt = fk Z t 0 X l hkl(t \u2212s)dml s ! . Using the Lipschitz property of fk, we obtain \u03bbk t \u2264fk(0) + n X l=1 L Z t 0 |hkl(t \u2212s)|\u03bbl sds. (6.35) 23 \fUsing H(s) de\ufb01ned in (3.13), we can rewrite (6.35) as \u03bbt \u2264f(0) + H \u2217\u03bb(t), where f(0) = (f1(0), . . . , fn(0))T . De\ufb01ne \u0393l := P 1\u2264k\u2264l H\u2217k. Since for any two matrixvalued functions A, B, R \u221e 0 A \u2217B(t)dt = ( R \u221e 0 A(t)dt)( R \u221e 0 B(t)dt), provided the integrals are well-de\ufb01ned, we have Z \u221e 0 \u0393l(t)dt = X 1\u2264k\u2264l \u039bk, \u039b = Z \u221e 0 H(t)dt, having unique maximal eigenvalue P 1\u2264k\u2264l \u00b5k 1 \u2264 \u00b51 1\u2212\u00b51 . As a consequence, the renewal function \u0393(t) = P l\u22651 H\u2217l(t) is well-de\ufb01ned and locally bounded, and the maximal eigenvalue of R \u221e 0 \u0393(t)dt is given by \u00b51 1\u2212\u00b51 . Any solution a(t) of a = f(0)+H\u2217a is given by a(t) = f(0)+\u0393\u2217f(0) = f(0)+ R t 0 \u0393(s)f(0)ds. Therefore, \u03bb(t) \u2264f(0) + \u0012Z t 0 \u0393(s)ds \u0013 f(0) \u2264f(0) + \u0012Z \u221e 0 \u0393(s)ds \u0013 f(0), where the inequality has to be understood component-wise. Thus, \u03bb(t) is a bounded function of t. This implies the \ufb01rst result. By (2.9) and (2.10), writing \u03b4N(t) = (\u03b4N 1 (t), . . . , \u03b4N n (t))T , \u03b4N k (t) \u2264 (H \u2217\u03b4N)k(t) + C \u221a N Z t 0 X l [ Z s 0 h2 kl(s \u2212u)\u03bbl udu]1/2ds \u2264 (H \u2217\u03b4N)k(t) + Ct \u221a N , by the \ufb01rst step, since \u03bbl u are bounded, and since hkl by assumption are in L2(R+; R). As a consequence, \u03b4N k (t) \u2264Ct \u221a N + C \u221a N Z t 0 n X l=1 \u0393kl(t \u2212s)sds \u2264Ct \u221a N . Together with the proof of Theorem 1 this \ufb01nishes the proof. \u2022 Proof of Proposition 3 As in the proof of Proposition 2, we obtain (6.35) for 1 \u2264k \u2264n. Hence, using Lemma 1, \u03bbk t \u2264fk(0) + X l \u0012Z t 0 \u0393kl(s)ds \u0013 fl(0) \u2264fk(0) + Ce\u03b10t \u2264ce\u03b10t, where c = max1\u2264k\u2264n fk(0)+C. The second assertion follows then analogously to the proof given for Proposition 2. 
\u2022 The main ingredients for the proof of Theorem 2 are the bounds in Propositions 2 and 3, depending on the criticality. We now prove Theorem 2 in the subcritical case, which is an adaptation of the proof of Theorem 10 of [13] to the nonlinear case. 24 \fProof of Theorem 2, subcritical case. Put U N k,i(t) := ZN k,i(t) \u2212mk t , (k, i) \u2208IN, and introduce the martingales MN k,i(t) = Z t 0 Z \u221e 0 1{z\u2264fk( R s 0 hkl(s\u2212u)dml(u))} \u02dc NN k,i(ds, dz) = \u00af ZN k,i(t) \u2212mk t , where \u02dc NN k,i denotes the compensated PRM NN k,i(ds, dz) \u2212dsdz. Then U N k,i(t) = MN k,i(t) + RN k,i(t), (6.36) where RN k,i(t) = ZN k,i(t) \u2212\u00af ZN k,i(t). We have already shown in (3.14) that E(sup s\u2264t |RN k,i(s)|) \u2264CtN\u22121/2. In (6.36), we have that [MN k,i, MN l,j]t = 0 for (k, i) \u0338= (l, j), since the martingales almost surely never jump at the same time. Moreover, [MN k,i, MN k,i]t = \u00af ZN k,i(t). Finally, E[(MN k,i(t))2] = E( \u00af ZN k,i(t)) = mk t . We obtain (mk t )\u22121E|U N k,1(t)| \u2264 (mk t )\u22121E|MN k,1| + (mk t )\u22121CtN\u22121/2 \u2264 \u0010 mk t \u0011\u22121/2 + Ct mk t N1/2 . (6.37) Now we use that t/N \u21920 and lim inft\u2192\u221emk t /t = \u03b1k > 0 to deduce that limsup N,t\u2192\u221e (mk t )1/2E h |ZN k,1(t)/mk t \u22121| i = limsup N,t\u2192\u221e (mk t )1/2 \" E|U N k,1(t)| (mk t ) # \u22641, implying item 1 of the theorem. The proof \ufb01nishes in the lines of the proof of Theorem 10 of [13]. For \ufb01xed k and i = 1, . . . , \u2113k \u2264Nk, we write (mk t )1/2(ZN k,i(t)/mk t \u22121) = (mk t )\u22121/2MN k,i(t) + (mk t )\u22121/2RN k,i(t). But E((mk t )\u22121/2| \u00af RN k (t)|) \u2264CtN\u22121/2(mk t )\u22121/2 \u21920 as t, N \u2192\u221e. So we only have to prove that \u0010 ((m1 t )\u22121/2MN 1,i(t))1\u2264i\u2264\u21131, . . . , ((mn t )\u22121/2MN n,i(t))1\u2264i\u2264\u2113n \u0011 tends in law to N(0, I\u21131+...+\u21132) which follows as in [13], proof of Theorem 10. \u2022 Proof of Theorem 2, supercritical case. We use the same notation as in the proof of the subcritical case and obtain in a \ufb01rst step the following control for (6.37). (mk t )\u22121E|U N k,1(t)| \u2264 \u0010 mk t \u0011\u22121/2 + Ce\u03b10t mk t N1/2 . (6.38) 25 \fTherefore, for N, t \u2192\u221eunder the constraint that N\u22121/2t\u22121e\u03b10t \u21920, limsup N,t\u2192\u221e (mk t )1/2E h |ZN k,1(t)/mk t \u22121| i = limsup N,t\u2192\u221e (mk t )1/2 \" E|U N k,1(t)| (mk t ) # \u2264C, implying item 1 of the theorem. The second item of the theorem follows using the same arguments as in the subcritical case. \u2022 6.3 Proof of Theorem 4 The proof of Theorem 4 is based on the following steps. First, a standard calculus shows that we have an approximation result for the generators. Lemma 2 Grant the conditions of Theorem 4. Then there exists a constant C such that for all \u03d5 \u2208C3 b (R\u03ba, R), \u2225AX\u03d5 \u2212AY \u03d5\u2225\u221e\u2264C \u2225\u03d5\u22253,\u221e N2 . The proof of the above lemma is straightforward and therefore omitted. In a next step, we obtain, applying It\u02c6 o\u2019s formula with jumps twice, the following estimate. Lemma 3 Grant the conditions of Theorem 4. 
Then there exists a constant C such that for all \u03d5 \u2208C4 b (R\u03ba, R), for any \u03b4 > 0, \u2225P Y \u03b4 \u03d5 \u2212\u03d5 \u2212\u03b4AY \u03d5\u2225\u221e\u2264C\u03b42\u2225\u03d5\u22254,\u221e and \u2225P X \u03b4 \u03d5 \u2212\u03d5 \u2212\u03b4AX\u03d5\u2225\u221e\u2264C\u03b42\u2225\u03d5\u22252,\u221e. Again, the proof of the above lemma is straightforward and therefore omitted. Finally, we will use the following fact. Lemma 4 Grant the conditions of Theorem 4. Then there exists a constant C such that for all \u03d5 \u2208C4 b (R\u03ba, R), for any t > 0, \u2225P Y t \u03d5\u22254,\u221e\u2264C\u2225\u03d5\u22254,\u221e. Proof By Kunita (1990) [29], see also Ikeda-Watanabe (1989) [25], there exists a version \u03a6t(x) of the stochastic \ufb02ow associated to the SDE (5.26) such that Y N(t) = \u03a6t(x), where Y N(t) is the solution of (5.26) starting from Y N(0) = x. Under our assumptions, this \ufb02ow is a \ufb02ow of C4\u2212di\ufb00eomorphisms. Then we can write P Y t \u03d5(x) = E[\u03d5(\u03a6t(x)], and by dominated convergence, for any 1 \u2264k \u2264\u03ba, \u2202xkP Y t \u03d5(x) = E[P l \u2202xl\u03d5(\u03a6t(x)) \u2202\u03a6l \u2202xk (x)]. The assertion follows then by iterating this argument, using classical estimates on the derivatives \u2202\u03b1\u03a6t(x) obtained e.g. in [25]. \u2022 We are now able to \ufb01nish the proof of Theorem 4. 26 \fProof of Theorem 4 Fix \u03b4 > 0 and write tk = k\u03b4 \u2227t, for k \u22650. A standard trick yields \u2225P X t \u03d5 \u2212P Y t \u03d5\u2225\u221e\u2264 X k:k\u03b4\u2264t \u2225P X t\u2212tk+1\u2206\u03b4P Y tk \u03d5\u2225\u221e\u2264 X k:k\u03b4\u2264t \u2225\u2206\u03b4P Y tk \u03d5\u2225\u221e, (6.39) where we de\ufb01ne \u2206\u03b4\u03d5(x) = P X \u03b4 \u03d5(x) \u2212P Y \u03b4 \u03d5(x). Since \u03d5 \u2208C4 b , also P Y tk \u03d5 \u2208C4 b . By Lemma 3, \u2225P X \u03b4 \u03d5(x) \u2212P Y \u03b4 \u03d5(x)\u2225\u221e\u2264\u03b4\u2225AX\u03d5 \u2212AY \u03d5\u2225\u221e+ \u03b42\u2225\u03d5\u22254,\u221e. Using Lemma 2, we deduce that \u2225P X \u03b4 \u03d5(x) \u2212P Y \u03b4 \u03d5(x)\u2225\u221e\u2264[\u03b4C 1 N2 + \u03b42]\u2225\u03d5\u22254,\u221e. Together with Lemma 4 and (6.39), this yields \u2225P X t \u03d5 \u2212P Y t \u03d5\u2225\u221e\u2264C( 1 N2 + \u03b4) \u2225\u03d5\u22254,\u221e \uf8eb \uf8edX k:k\u03b4\u2264t \u03b4 \uf8f6 \uf8f8\u2264Ct 1 N2 \u2225\u03d5\u22254,\u221e, where the last inequality follows by choosing \u03b4 = 1 N2 and using that card{k : k\u03b4 \u2264t} \u2264t \u03b4. \u2022 6.4 Control theorem and proof of Proposition 6 We will use the control theorem which goes back to Strook and Varadhan (1972) [37], see also Millet and Sanz-Sole (1994) [33], theorem 3.5, in order to prove Proposition 6. For some time horizon T1 < \u221ewhich is arbitrary but \ufb01xed, write H for the CameronMartin space of measurable functions h : [0, T1] \u2192R2 having absolutely continuous components h\u2113(t) = R t 0 \u02d9 h\u2113(s)ds with R T1 0 [\u02d9 h\u2113]2(s)ds < \u221e, 1 \u2264\u2113\u22642. For x \u2208R\u03ba and h \u2208H, consider the deterministic system \u03d5 = \u03d5(N,h,x) solution to d\u03d5(t) = b(\u03d5(t))dt + 1 \u221a N \u03c3(\u03d5(t))\u02d9 h(t)dt, with \u03d5(0) = x, (6.40) on [0, T1]. Thus \u03d5 is a function [0, T1] \u2192R\u03ba. Using localization techniques as in H\u00a8 opfner, L\u00a8 ocherbach and Thieullen (2015) [24], Theorem 5, we obtain the following result. Proposition 8 Grant the assumptions of Theorem 3. 
Denote by Qt1 x the law of the solution (Y N(t))0\u2264t\u2264t1 of (5.26), starting from Y N(0) = x. Let \u03d5 = \u03d5(N,h,x) denote a solution to d\u03d5(t) = b(\u03d5(t)) dt + 1 \u221a N \u03c3(\u03d5(t)) \u02d9 h(t) dt , \u03d5(0) = x. Fix x \u2208K and h \u2208H such that \u03d5 = \u03d5(N,h,x) exists on some time interval [0, T1] for T1 > t1. Then \u0010 \u03d5(N,h,x)\u0011 |[0,t1] \u2208supp \u0000Qt1 x \u0001 . 27 \fWe now show how to use Proposition 8 in order to prove Proposition 6. Proof of Proposition 6 Fix x \u2208R\u03ba and t1 > 1. Recall that, to simplify notation, we write x = (x1, . . . , x\u03ba), where \u03ba = 2+\u03b71 +\u03b72 instead of x = (x1,0, . . . , x1,\u03b71, x2,0, . . . , x2,\u03b72). In particular, the coordinates x\u03b71+1 and x\u03ba correspond to the two coordinates which are driven by Brownian noise. Let \u0393(t), t \u2208[0, T], be a parametrization of the periodic orbit, with T the periodicity of \u0393. Now, we choose a C\u221e-function \u03b3 = (\u03b31, \u03b32) : R+ \u2192R2 satisfying \uf8f1 \uf8f2 \uf8f3 \u03b31(0) = x1 \u03b32(0) = x\u03b71+2 \u03b3(s) \u2261(\u03931(s), \u0393\u03b71+2(s)) on [1, \u221e). (6.41) We want to use \u03b3 as a smooth trajectory driving the two components \u03d51 corresponding to Y N 1,0 and \u03d5\u03b71+2 corresponding to Y N 2,0 from their initial position to a position on the periodic orbit, during a time period of length one. We now show that it is indeed possible to choose a control h such that \u03d51 = \u03b31 and \u03d5\u03b71+2 = \u03b32. Recall that the di\ufb00usion coe\ufb03cient \u03c3 is null on every coordinate except the coordinates \u03b71 + 1 and \u03ba. As a consequence, any choice of h does only allow to in\ufb02uence directly these two coordinates \u03b71+1 and \u03ba. However, above we have prescribed a trajectory \u03b3 to the two coordinates 1 and \u03b71+2. So we have to prove that such a choice of h is possible. Suppose for a moment that we have already found this control h. Then, by the structure of b, once \u03d51 and \u03d5\u03b71+2 are \ufb01xed, we necessarily have \u03d52(t) = d\u03d51(t) dt + \u03bd1\u03d51(t), . . . , \u03d5\u03b71+1(t) = d\u03d5\u03b71(t) dt + \u03bd1\u03d5\u03b71(t) (6.42) for the \ufb01rst population, and \u03d5\u03b71+3(t) = d\u03d5\u03b71+2(t) dt + \u03bd2\u03d5\u03b72+2(t), . . . , \u03d5\u03ba(t) = d\u03d5\u03ba\u22121(t) dt + \u03bd2\u03d5\u03ba\u22121(t). (6.43) In other words, once \u03d51 and \u03d5\u03b71+2 are \ufb01xed, all other coordinates are entirely determined as measurable functions of \u03b3. Moreover, by the structure of the equations given in (4.19), it is clear that for all t \u22651, \u03d5(t) = \u0393(t), i.e. the trajectory evolves on the orbit after time 1. We have to show that we can indeed \ufb01nd a function h which allows for the above choice of \u03d5. The control h is related to the coordinates \u03d5\u03b71+1 and \u03d5\u03ba through d\u03d5\u03b71+1(t) dt = \u2212\u03bd1\u03d5\u03b71+1(t) + c1f2(\u03d5\u03b71+2(t)) + c1 \u221ap2 p f2(\u03d5\u03b71+2(t))\u02d9 h1(t) and d\u03d5\u03ba(t) dt = \u2212\u03bd2\u03d5\u03ba(t) + c2f1(\u03d51(t)) + c2 \u221ap1 p f1(\u03d51(t))\u02d9 h2(t). In the above two formulas, all functions \u03d5 are known as measurable functions of the prescribed trajectory \u03b3, and \u02d9 h1(t) and \u02d9 h2(t) have to be chosen. But since f1 and f2 are strictly positive, there exists indeed h : [0, \u221e[\u2192R2 achieving this choice of \u03b3. 
It su\ufb03ces to choose \u02d9 h1(t) = d\u03d5\u03b71+1(t) dt + \u03bd1\u03d5\u03b71+1(t) \u2212c1f2(\u03d5\u03b71+2(t)) c1/\u221ap2 p f2(\u03d5\u03b71+2(t)) (6.44) 28 \fand \u02d9 h2(t) = d\u03d5\u03ba(t) dt + \u03bd1\u03d5\u03ba(t) \u2212c2f1(\u03d51(t)) c2/\u221ap1 p f1(\u03d51(t)) . (6.45) Since f1 and f2 are strictly positive (and even lower bounded on K), \u02d9 h(t) is well-de\ufb01ned. Now, notice that for all t \u22651, \u03d51(t) = \u03931(t) and \u03d5\u03b71+2(t) = \u0393\u03b71+2(t) are evolving on the periodic orbit. Hence by (6.42) and (6.43), necessarily \u03d5(t) = \u0393(t) for all t \u22651. In particular, \u02d9 h1(t) = \u02d9 h2(t) = 0 for all t > 1. Hence, we have constructed a control forcing the trajectory to be on the periodic orbit after a \ufb01xed time. By construction, S(\u03b5, \u03d5) := {\u03c8 \u2208C(R+; R\u03ba) : |\u03d5(t) \u2212\u03c8(t)| < \u03b5 \u22001 \u2264 t \u2264t1} \u2282O. Moreover, by Proposition 8, \u03d5|[0,t1] \u2208supp \u0000Qt1 x \u0001 , whence Qt1 x (S(\u03b5, \u03d5)) > 0 and thus a fortiori Qt1 x (O) > 0, implying the assertion for all t1 > 1. \u2022" + }, + { + "url": "http://arxiv.org/abs/1108.0073v1", + "title": "The Morris-Lecar neuron model embeds a leaky integrate-and-fire model", + "abstract": "We show that the stochastic Morris-Lecar neuron, in a neighborhood of its\nstable point, can be approximated by a two-dimensional Ornstein-Uhlenbeck (OU)\nmodulation of a constant circular motion. The associated radial OU process is\nan example of a leaky integrate-and-fire (LIF) model prior to firing. A new\nmodel constructed from a radial OU process together with a simple firing\nmechanism based on detailed Morris-Lecar firing statistics reproduces the\nMorris-Lecar Interspike Interval (ISI) distribution, and has the computational\nadvantages of a LIF. The result justifies the large amount of attention paid to\nthe LIF models.", + "authors": "Susanne Ditlevsen, Priscilla Greenwood", + "published": "2011-07-30", + "updated": "2011-07-30", + "primary_cat": "math.PR", + "cats": [ + "math.PR", + "q-bio.NC" + ], + "main_content": "Introduction Much effort has been made to create a realistic but still easily computed stochastic neuron model, primarily by combining subthreshold dynamics with \ufb01ring rules. The result has been a variety of, usually one dimensional, leaky integrate-and-\ufb01re (LIF) descriptions with a \ufb01xed membrane potential \ufb01ring threshold [4, 11, 18, 19], or with a rate of \ufb01ring depending more sensitively on membrane potential [15, 21]. These models are useful both for obtaining analytical results and for ease of simulation. By contrast, the two-dimensional stochastic Morris-Lecar (ML) neuron model, a simple cousin to the more detailed Hodgkin-Huxley (HH) model, describes the dynamics of \ufb01ring in a way more closely motivated by the biology. It has been better respected by biologists than the LIF class of models, but has received little attention owing to the dif\ufb01culty of mathematical analysis of this rather complicated stochastic dynamical system. In Section 4 of this paper we show that in fact a LIF model is embedded in the ML model as an integral part of it, closely approximating the subthreshold \ufb02uctuations of the ML dynamics. This result suggests that perhaps the \ufb01ring pattern of a stochastic ML can be recreated using the embedded LIF together with a ML stochastic \ufb01ring mechanism. 
We construct such a model in Section 5 and 6, and show in Section 7 that its Interspike Interval (ISI) distribution is similar to that of the ML. Our model, while of the type described in our \ufb01rst paragraph, combines the realism of the ML with the ease of analysis and computation of a one dimensional LIF-type model. The work invested in LIF models is further justi\ufb01ed by this new model. \u2217Postal address: Department of Mathematical Sciences, Universitetsparken 5, DK-2100 Copenhagen \u00d8, email: susanne@math.ku.dk \u2217\u2217Postal address: Mathematics Annex 1208 2329W East Mall, Vancouver, BC V6T 1Z4, Canada, email: pgreenw@math.la.asu.edu 1 arXiv:1108.0073v1 [math.PR] 30 Jul 2011 \f2 Ditlevsen and Greenwood Before we set up our stochastic ML model and write analytical details, let us have an informal look at how it works. The principal dynamics of the ML, in the central range of the input current, consist of a stable limit cycle (Fig. 1A) corresponding to \ufb01ring, which encloses a stable \ufb01xed point. In between there loops an unstable limit cycle. The path of the stochastic model has two quasi-stable patterns (Fig. 1B). One is succesive \ufb01rings, where the dynamics makes \u201clarge\u201d noisy circuits around the stable limit cycle, the other is membrane \ufb02uctuations between spikes, where the dynamics makes \u201csmall\u201d noisy circuits around the \ufb01xed point inside the unstable limit cycle. The system would continue forever in one of these two patterns were it not for the noise which causes switching from \ufb01ring to subthreshold \ufb02uctuations and back again at random times when the dynamics cross the unstable limit cycle. Our analysis will show that the dynamics between spikes, of random cycling inside the unstable limit cycle followed by crossing to the stable limit cycle outside it, can be identi\ufb01ed with the sample path behavior of a two-dimensional Ornstein-Uhlenbeck (OU) process times a rotation. A main ingredient in our result is the stochastic dynamical phenomenon that oscillations which damp to a \ufb01xed point in a deterministic system will be sustained by the stochasticity in a corresponding stochastic system. Damped oscillations in a two-dimensional system are signalled by a local linear structure de\ufb01ned by a matrix having a pair of conjugate complex eigenvalues with negative real part. A corresponding stochastic system will not damp, being prevented by the noise. Instead, a quasi-stationary stochastic process is set up, which cycles in a random pattern around the \ufb01xed point. Using recent results of [2] we are able to identify, approximately, this stochastic process which is part of the subthreshold dynamics of the ML. Up to a \ufb01xed linear transformation, the approximating process is the product of a steady fast rotation with a two-dimensional OU process. The identi\ufb01cation allows us to cement in place the correspondance, for a particular set of model parameters, a particular LIF model as the appropriate subthreshold phase between ML \ufb01rings. 2. The Morris-Lecar model There exists a large variety of modeling approaches to the generation of spike trains in neurons (see e.g. [6, 11, 14]). Most famous is the Hodgkin-Huxley (HH) model [13] consisting of four coupled differential equations, one for the membrane voltage, and three equations describing the gating variables that model the voltage-dependent sodium and potassium channels. 
A large amount of research effort is currently directed towards understanding how neural coding carries information through nervous systems. Basic to the subject is how single neurons transmit information. As in any modeling effort, we must ignore or summarize details and focus on what, we hope, are a few essential aspects. The ML model [20] has often been used as a good, qualitatively quite accurate, two-dimensional model of neuronal spiking. It is a conductance-based model like the HH model, introduced to explain the dynamics of the barnacle muscle \ufb01ber. The original ML model was three-dimensional, including a fast responding voltage-sensitive Ca2+ conductance, and a delayed voltage-dependent K+ conductance for recovery. To justify the two-dimensional version, one uses that the Ca2+ activation moves on a much faster time scale than the other variables, and can conveniently be treated as an instantaneous variable, by replacing it by its steady-state value given the other variables. The parameter values in our computations were chosen from [22, 23], and are given in Table 1 together with the interpretation of variables and parameters. The variable Vt represents the membrane potential of the neuron at time t, and Wt represents the normalized conductance of the K+ current. This is a variable between 0 and 1, and could be interpreted as the probability \fThe Morris-Lecar neuron model embeds a leaky integrate-and-\ufb01re model 3 TABLE 1: Variables and parameter values used in the Morris-Lecar model V (t) [mV] Membrane voltage W(t) [1] Normalized K+ conductance t [ms] Time V1 = -1.2 mV Scaling parameter V2 = 18 mV Scaling parameter V3 = 2 mV Scaling parameter V4 = 30 mV Scaling parameter gCa = 4.4 \u00b5S/cm2 Maximal conductance associated with Ca2+ current gK = 8 \u00b5S/cm2 Maximal conductance associated with K+ current gL = 2 \u00b5S/cm2 Conductance associated with leak current VCa = 120 mV Reversal potential for Ca2+ current VK = -84 mV Reversal potential for K+ current VL = -60 mV Reversal potential for leak current C = 20 \u00b5F/cm2 Membrane capacitance \u03c6 = 0.04 1/ms Rate scaling parameter I = 90 \u00b5A/cm2 Input current that a K+ ion channel is open at time t. The non-linear model equations are dVt = 1 C (\u2212gCam\u221e(Vt)(Vt \u2212VCa) \u2212gKWt(Vt \u2212VK) \u2212gL(Vt \u2212VL) + I) dt, (1) dWt = (\u03b1(Vt)(1 \u2212Wt) \u2212\u03b2(Vt)Wt) dt, (2) with the auxiliary functions given by m\u221e(v) = 1 2 \u0012 1 + tanh \u0012v \u2212V1 V2 \u0013\u0013 , (3) \u03b1(v) = 1 2\u03c6 cosh \u0012v \u2212V3 2V4 \u0013 \u0012 1 + tanh \u0012v \u2212V3 V4 \u0013\u0013 , (4) \u03b2(v) = 1 2\u03c6 cosh \u0012v \u2212V3 2V4 \u0013 \u0012 1 \u2212tanh \u0012v \u2212V3 V4 \u0013\u0013 . (5) Equation (1) describing the dynamics of Vt contains four terms, corresponding to Ca2+ current, K+ current, a general leak current, and the input current I. The functions \u03b1(\u00b7) and \u03b2(\u00b7) model the rates of opening and closing, respectively, of the K+ ion channels. The function m\u221e(\u00b7) represents the equilibrium value of the normalized Ca2+ conductance for a given value of the membrane potential. In Fig. 1A the phase-state of the model is plotted. The system has two stable attractors; a stable \ufb01xed point corresponding to quiescence of the neuron, and a stable limit cycle corresponding to repetitive \ufb01ring. 
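The two attractors just described can be reproduced qualitatively by integrating the deterministic system (1)-(5) with the parameter values of Table 1 in any standard ODE solver. The following Python sketch is one possible way to do this; the two initial conditions are rough guesses meant to start one trajectory inside and one outside the unstable limit cycle, and are not values taken from the paper.

import numpy as np
from scipy.integrate import solve_ivp

# parameter values of Table 1
C, I = 20.0, 90.0
gCa, gK, gL = 4.4, 8.0, 2.0
VCa, VK, VL = 120.0, -84.0, -60.0
V1, V2, V3, V4, phi = -1.2, 18.0, 2.0, 30.0, 0.04

m_inf = lambda v: 0.5 * (1.0 + np.tanh((v - V1) / V2))
alpha = lambda v: 0.5 * phi * np.cosh((v - V3) / (2 * V4)) * (1.0 + np.tanh((v - V3) / V4))
beta  = lambda v: 0.5 * phi * np.cosh((v - V3) / (2 * V4)) * (1.0 - np.tanh((v - V3) / V4))

def ml_rhs(t, y):
    v, w = y
    dv = (-gCa * m_inf(v) * (v - VCa) - gK * w * (v - VK) - gL * (v - VL) + I) / C
    dw = alpha(v) * (1.0 - w) - beta(v) * w
    return [dv, dw]

# one start inside, one outside the unstable limit cycle (initial points are guesses)
quiescent = solve_ivp(ml_rhs, (0.0, 1000.0), [-25.0, 0.14], max_step=0.1)
firing    = solve_ivp(ml_rhs, (0.0, 1000.0), [0.0, 0.3],   max_step=0.1)
# quiescent.y and firing.y contain the (V, W) trajectories; the first spirals into the
# fixed point, the second settles onto the stable limit cycle.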
In between the two attractors is an unstable limit cycle, which splits the state space into two parts from either of which the deterministic process cannot escape, once trapped there. 2.1. The stochastic Morris-Lecar model with channel noise It has long been known that the opening and closing of ion channels is an important part of neuron function. Channel activity is summarized, even in the comparatively detailed HH \f4 Ditlevsen and Greenwood A B \u221240 \u221220 0 20 40 0.1 0.2 0.3 0.4 0.5 membrane voltage Vt normalized conductance Wt G \u221240 \u221220 0 20 40 0.1 0.2 0.3 0.4 0.5 membrane voltage Vt normalized conductance Wt G FIGURE 1: Phase-state plots of the normalized conductance Wt against membrane voltage Vt. The full drawn magenta curve is a stable limit cycle, the dashed magenta curve is an unstable limit cycle, and the magenta point is a stable \ufb01xed point. Black curves are sample trajectories. Panel A: model without noise, (1)\u2013(5). If the process is started between the stable and the unstable limit cycle, or outside the stable limit cycle, the solution is seen to spiral out, respectively in, towards the stable limit cycle, corresponding to repetitive \ufb01ring of the neuron. If the process is started inside the unstable limit cycle, the solution spirals into the stable \ufb01xed point, corresponding to subthreshold \ufb02uctuations of the neuron. Note that three trajectories are plotted. Panel B: model with noise, (1), (3)\u2013(5) and (8), \u03c3\u2217= 0.05. Only one trajectory is plotted, and the solution is seen to switch between periods of \ufb01ring and quiescence. model, by potential dependent averages. However, it has become apparent that the stochastic nature of ion channels must be explicitly modeled if we are to capture essential features of neuron dynamics. Changes in the states of channels cannot be tracked explicitly because of their vast number. Hence, it is useful to model the role of ion channels as a stochastic process, Wt, the proportion of channels open at time t. We therefore add channel noise by changing the ordinary differential equation system (1) \u2013 (5), to a stochastic differential equation system, replacing the conductance equation (2) by dWt = (\u03b1(Vt)(1 \u2212Wt) \u2212\u03b2(Vt)Wt) dt + h(Vt, Wt)dBt, (6) where Bt is a standard Wiener process, and the function h(\u00b7) has to be chosen. The diffusion coef\ufb01cient h(\u00b7) in (6) should be based on the drift coef\ufb01cient which gives the rate of change of fraction of open ion channels due to random openings and closings. A natural choice of the function h(\u00b7), following the diffusion approximation of [16], would be the square root of the sum of the two rates in the drift coef\ufb01cient, times a factor 1/ \u221a N where N is the number of ion channels involved. However, this choice has the problem that it is not zero when all the channels are closed, and the resulting (6) would produce negative solutions with positive probability. To avoid this dif\ufb01culty, for \ufb01xed Vt we let Wt be a Jacobi diffusion. In fact, in the class of Pearson diffusions [9], i.e. one-dimensional diffusions with linear drift, and with h2(\u00b7) a polynomial of at most degree two, this is the only bounded diffusion. Living on (0, 1), it has the form dXt = \u2212\u03b8 (Xt \u2212\u00b5) dt + \u03b3 p 2\u03b8Xt(1 \u2212Xt)dBt (7) \fThe Morris-Lecar neuron model embeds a leaky integrate-and-\ufb01re model 5 where \u03b8 > 0 and \u00b5 \u2208(0, 1). 
It is named for the eigenfunctions of the generator, which are the Jacobi polynomials. It is ergodic provided that \u03b32 \u2264min(\u00b5, (1 \u2212\u00b5)), and its stationary distribution is the Beta distribution with shape parameters \u00b5/\u03b32 and (1\u2212\u00b5)/\u03b32. It has mean \u00b5 and variance \u03b32\u00b5(1 \u2212\u00b5)/(1 + \u03b32). In our case, because the diffusion coef\ufb01cient in (7) should be of the same order as the one given by the Kurtz approximation [16], \u03b3 is proportional to 1/ \u221a N. By equating the drift terms in (6) and (7), we have \u03b8 = \u03b1(Vt) + \u03b2(Vt) and \u00b5 = \u03b1(Vt)/ (\u03b1(Vt)+\u03b2(Vt)). So for \ufb01xed Vt, with h2(Vt, Wt) = \u03b322(\u03b1(Vt)+\u03b2(Vt))Wt(1\u2212Wt), where \u03b32 is constrained by \u03b32(\u03b1(Vt)+\u03b2(Vt)) \u2264min(\u03b1(Vt), \u03b2(Vt)), also (6) will stay bounded in (0, 1). Since \u03b1(Vt) and \u03b2(Vt) are strictly positive, we can put \u03b32 = (\u03c3\u2217)2\u03b1(Vt)\u03b2(Vt)/(\u03b1(Vt) + \u03b2(Vt))2, with \u03c3\u2217\u2208(0, 1], and specify the conductance equation (6) as dWt = (\u03b1(Vt)(1 \u2212Wt) \u2212\u03b2(Vt)Wt) dt + \u03c3\u2217 s 2 \u03b1(Vt)\u03b2(Vt) \u03b1(Vt) + \u03b2(Vt)Wt(1 \u2212Wt)dBt. (8) In the next Section we compute the equilibrium point (Veq, Weq) of the system (1)\u2013(5) for the chosen parameters. By equating the diffusion coef\ufb01cient as it would occur in the diffusion approximation of [16] with the one in (8) at (Veq, Weq) we will obtain \u03c3\u2217in terms of 1/ \u221a N, where N is the number of channels involved. It can be shown by a coupling argument that also for varying Vt will Wt given by (8) stay bounded in (0, 1), since Vt is bounded once it is started inside some interval [7]. In Fig. 2, the model de\ufb01ned by (1), (3)\u2013(5) and (8) is simulated for different values of \u03c3\u2217, where these can be thought of as corresponding to different total numbers of ion channels. 3. The linear approximation of the stochastic Morris-Lecar during quiescence To identify the process of subthreshold oscillations, i.e. the dynamics close to the stable \ufb01xed point between \ufb01rings, we analyze the linearized system around this point. Consider the system dVt = f(Vt, Wt)dt, dWt = g(Vt, Wt)dt + h(Vt, Wt)dBt, where the functions f(\u00b7), g(\u00b7) and h(\u00b7) are given by (1), (3)\u2013(5) and (8). For the chosen parameter values given in Table 1, the deterministic system, obtained for h(\u00b7) = 0, has a unique locally stable equilibrium point (Veq, Weq) given by Weq(Veq) = \u03b1(Veq) \u03b1(Veq) + \u03b2(Veq) = 1 2 \u0012 1 + tanh \u0012Veq \u2212V3 V4 \u0013\u0013 and Veq is the solution to the equation f(Veq, Weq(Veq)) = 0, which cannot be solved analytically, but can be found numerically. The input current value I = 90\u00b5A/cm2 is a typical value well inside the range of I where the deterministic dynamics has a stable limit point inside an unstable limit cycle as shown in Fig. 1A. The equilibrium point for I = 90\u00b5A/cm2 is (Veq, Weq) = (\u221226.6 mV , 0.129). 
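Numerically, the equilibrium is easily located by applying a one-dimensional root finder to the drift of V evaluated along the curve W = Weq(V). A minimal Python sketch (the bracketing interval is an ad hoc choice guided by Fig. 1):

import numpy as np
from scipy.optimize import brentq

C, I = 20.0, 90.0
gCa, gK, gL = 4.4, 8.0, 2.0
VCa, VK, VL = 120.0, -84.0, -60.0
V1, V2, V3, V4 = -1.2, 18.0, 2.0, 30.0

m_inf = lambda v: 0.5 * (1.0 + np.tanh((v - V1) / V2))
w_inf = lambda v: 0.5 * (1.0 + np.tanh((v - V3) / V4))   # = alpha(v)/(alpha(v)+beta(v))

def f_eq(v):
    # drift of V evaluated on the W-nullcline w = w_inf(v)
    return (-gCa * m_inf(v) * (v - VCa) - gK * w_inf(v) * (v - VK) - gL * (v - VL) + I) / C

V_eq = brentq(f_eq, -40.0, -10.0)   # bracketing interval chosen by inspection
W_eq = w_inf(V_eq)
print(V_eq, W_eq)                    # approximately (-26.6, 0.129)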
In terms of the centered variables X(1) t = Vt \u2212Veq , X(2) t = Wt \u2212Weq \f6 Ditlevsen and Greenwood time \u221230 \u221220 0 2000 4000 membrane voltage, V(t) 0.15 normalized conductance, W(t) time \u221250 0 0 2000 4000 membrane voltage, V(t) 0.2 0.4 normalized conductance, W(t) time \u221250 0 0 2000 4000 membrane voltage, V(t) 0.2 0.4 normalized conductance, W(t) time \u221250 0 0 2000 4000 membrane voltage, V(t) 0.2 0.4 normalized conductance, W(t) FIGURE 2: Time series plots (black curves) of the stochastic Morris-Lecar model for different noise levels started inside the unstable limit cycle, but not at the \ufb01xed point. Upper left: \u03c3\u2217= 0.02, upper right: \u03c3\u2217= 0.03, lower left: \u03c3\u2217= 0.05, lower right: \u03c3\u2217= 0.1. Note different scales, in the upper left panel there is no \ufb01ring. The magenta curves are the deterministic model, \u03c3\u2217= 0. the system becomes dX(1) t = f \u0010 (X(1) t + Veq), (X(2) t + Weq) \u0011 dt + 0 \u00b7 dB(1) t = f \u2217(X(1) t , X(2) t )dt, (9) dX(2) t = g \u0010 (X(1) t + Veq), (X(2) t + Weq) \u0011 dt + h \u0010 (X(1) t + Veq), (X(2) t + Weq) \u0011 dB(2) t = g\u2217(X(1) t , X(2) t )dt + h\u2217(X(1) t , X(2) t )dB(2) t . (10) We write Xt = (X(1) t X(2) t )T and Bt = (B(1) t B(2) t )T , where T denotes transposition. Note that B(1) t does not enter the dynamics, but is introduced to ease the matrix notation, as will be clear in the following. When the noise is small and the process Xt is started near the equilibrium point, x = (0, 0), we expect the dynamics to concentrate around the equilibrium point. A local approximation is obtained by linearizing (9)\u2013(10) around (0, 0). The diffusion \fThe Morris-Lecar neuron model embeds a leaky integrate-and-\ufb01re model 7 term is approximated by setting X(1) t = X(2) t = 0 in the diffusion coef\ufb01cients. The linearized system is dXt = MXtdt + GdBt, (11) where M = \u0012m11 m12 m21 m22 \u0013 = \u2202f \u2217 \u2202x1 \u2202f \u2217 \u2202x2 \u2202g\u2217 \u2202x1 \u2202g\u2217 \u2202x2 !\f \f \f \f \f (x1,x2)=(0,0) = \u0012 0.0258 \u221222.961 0.000335 \u22120.0446 \u0013 , using the parameter values in Table 1, and G = 0 0 0 \u03c3\u2217p 2(\u03b1(Veq) + \u03b2(Veq))(1 \u2212Weq)Weq ! = 0 0 0 \u03c3 ! , (12) where \u03c3 = 0.034\u03c3\u2217. By evaluating the diffusion approximation of [16] at (Veq, Weq) and equating to the above we obtain \u03c3\u2217= 1/ p Weq(1 \u2212Weq)N \u22483/ \u221a N. In the Appendix the matrix M is detailed. Solutions of (11) with G = 0 are given in terms of the eigenvalues of M which are complex conjugates and given by \u2212\u03bb \u00b1 \u03c9i = \u22120.0094 \u00b1 0.0803i where \u03bb = \u2212tr(M)/2, \u03c92 = |\u03bb2 \u2212det(M)| and i = \u221a\u22121. Thus, near the equilibrium point the solution of (11), with \u03c3 = 0, is Xt = C \u0012cos \u03c9t sin \u03c9t \u0013 e\u2212\u03bbt, (13) where C contains the initial conditions C = \u0012x0 (m12y0 + (m11 + \u03bb)x0)/\u03c9 y0 (m21x0 \u2212(m11 + \u03bb)y0)/\u03c9 \u0013 . In Fig. 3 the solution of the deterministic model, (1)\u2013(5) with \u03c3 = 0, is compared to the linear approximation (13). 4. Identi\ufb01cation of the stochastic process of quiescence In this Section we identify the stochastic process de\ufb01ned by the linearized system (11) in the limit of small \u03bb, i.e. under the condition \u03bb \u226a\u03c9. 
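The entries of M and the pair (λ, ω) are simple to reproduce numerically, for instance by finite differences of the drift at the equilibrium instead of the closed-form derivatives given in Appendix A. A Python sketch, using the rounded equilibrium values quoted above:

import numpy as np

C, I = 20.0, 90.0
gCa, gK, gL = 4.4, 8.0, 2.0
VCa, VK, VL = 120.0, -84.0, -60.0
V1, V2, V3, V4, phi = -1.2, 18.0, 2.0, 30.0, 0.04

m_inf = lambda v: 0.5 * (1.0 + np.tanh((v - V1) / V2))
alpha = lambda v: 0.5 * phi * np.cosh((v - V3) / (2 * V4)) * (1.0 + np.tanh((v - V3) / V4))
beta  = lambda v: 0.5 * phi * np.cosh((v - V3) / (2 * V4)) * (1.0 - np.tanh((v - V3) / V4))

def drift(v, w):
    dv = (-gCa * m_inf(v) * (v - VCa) - gK * w * (v - VK) - gL * (v - VL) + I) / C
    dw = alpha(v) * (1.0 - w) - beta(v) * w
    return np.array([dv, dw])

V_eq, W_eq = -26.6, 0.129            # equilibrium found numerically above
h = 1e-6
M = np.column_stack([(drift(V_eq + h, W_eq) - drift(V_eq - h, W_eq)) / (2 * h),
                     (drift(V_eq, W_eq + h) - drift(V_eq, W_eq - h)) / (2 * h)])
eig = np.linalg.eigvals(M)           # approximately -0.0094 +/- 0.0803i
lam = -M.trace() / 2
omega = np.sqrt(abs((M.trace() / 2) ** 2 - np.linalg.det(M)))
print(M, eig, lam, omega, lam / omega)

With the Table 1 values this gives λ/ω of roughly 0.12, consistent with the condition λ ≪ ω used in the sequel.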
The deterministic system (13) has decaying oscillations, whereas for the stochastic system (11), the noise will prevent the decay of the oscillations. Can we describe the resulting process speci\ufb01cally? The answer is that, after a linear change of variables, this process can be approximated in distribution by a \ufb01xed matrix times a deterministic circular motion modulated by an OU process. We follow the development in [2], where a \ufb01rst step is to transform the matrix M into a form which reveals the slow decay towards the equilibrium point and the fast oscillatory structure of the deterministic dynamics. Let Q be a 2 \u00d7 2 matrix such that Q\u22121MQ = \u0012\u2212\u03bb \u03c9 \u2212\u03c9 \u2212\u03bb \u0013 . = A. \f8 Ditlevsen and Greenwood Figure 3: The solution of the deterministic model (1)\u2013(5) with \u03c3 = 0 (black full drawn curves) is compared to the linear approximation (13) (cyan dashed curves). Upper panel: normalized conductance Wt (dimensionless). Lower panel: membrane potential Vt (mV). Time is measured in ms. time \u221230 \u221225 0 500 1000 membrane voltage, V(t) 0.12 0.14 normalized conductance, W(t) exact approx A possible choice for Q is Q = \u0012\u2212\u03c9 m11 + \u03bb 0 m21 \u0013 . Let \u02dc Xt = Q\u22121Xt, then d \u02dc Xt = A \u02dc Xtdt + CdBt (14) where C = Q\u22121G. A further change of variables moves the rotation to form part of the diffusion coef\ufb01cient of the linear stochastic system. We de\ufb01ne \u02dc \u02dc Xt = R\u03c9t \u02dc Xt where Rs = \u0012 cos s \u2212sin s sin s cos s \u0013 is the counterclockwise rotation of angle s. Then by Ito\u2019s formula d \u02dc \u02dc Xt = \u2212\u03bb \u02dc \u02dc Xtdt + R\u03c9tCdBt. (15) The in\ufb01nitesimal covariance matrix in (14) is B = CCT = Q\u22121GGT (Q\u22121)T = \u03c32 m2 21\u03c92 \u0012(m11 + \u03bb)2 \u03c9(m11 + \u03bb) \u03c9(m11 + \u03bb) \u03c92 \u0013 . Now de\ufb01ne \u03c4 2 = 1 2tr(B) = 1 2(B11 + B22) = \u2212\u03c32m12 2\u03c92m21 , (16) \fThe Morris-Lecar neuron model embeds a leaky integrate-and-\ufb01re model 9 where we have used that (m11 + \u03bb)2 + \u03c92 = \u2212m12m21. Finally, we rescale \u02dc \u02dc Xt so that we can compare with a standardized two-dimensional OU process. Let Ut = \u221a \u03bb \u03c4 \u02dc \u02dc Xt/\u03bb. Relation (15) becomes dUt = \u2212Utdt + 1 \u03c4 R\u03c9t/\u03bbCd \u02dc Bt (17) where \u02dc Bt = \u221a \u03bbBt/\u03bb is another standard two-dimensional Brownian motion. The following Theorem from [2] allows us to approximate the process Ut given by (17), by a two-dimensional OU process with independent coordinates. Theorem 4.1. For each \ufb01xed t\u2217> 0 and x \u2208I R2 the distribution of {Ut : 0 \u2264t \u2264t\u2217} given by (17) with U0 = x converges as \u03bb/\u03c9 \u21920 to the distribution of the standardized two-dimensional OU process {St : 0 \u2264t \u2264t\u2217} generated by dSt = \u2212Stdt + dBt with S0 = x. Here St follows a normal distribution, St \u223cN \u0000S0e\u2212t, 1 2(1 \u2212e\u22122t)I \u0001 , where I is the 2 \u00d7 2 identity matrix. The proof of this Theorem uses a martingale problem convergence argument and involves the notion of stochastic averaging, where fast oscillations integrate out revealing the remaining structure determined by slower oscillations. Another result of this type obtained by a different method, called multiscale analysis, is in [17]. Thus, the process Ut is approximated by St if \u03bb \u226a\u03c9. 
In our case \u03bb is one order of magnitude smaller than \u03c9. Putting together the transformations and the \ufb01nal approximation we have, in the sense of stochastic process distributions, Xt = Q \u02dc Xt = QR\u2212\u03c9t \u02dc \u02dc Xt = QR\u2212\u03c9t \u03c4 \u221a \u03bb U\u03bbt \u2248QR\u2212\u03c9t \u03c4 \u221a \u03bb S\u03bbt = \u03c4 \u221a \u03bb \u0012 \u2212\u03c9 m11 + \u03bb 0 m21 \u0013 \u0012 cos \u03c9t sin \u03c9t \u2212sin \u03c9t cos \u03c9t \u0013 S\u03bbt. (18) Let us denote by Xa t the stochastic process on the right hand side of (18), i.e. Xa t = \u03c4QR\u2212\u03c9t S\u03bbt/ \u221a \u03bb. (19) To get a sense of how closely the process Xa t approximates the dynamics of the ML process in a neighborhood of (Veq, Weq) we compare their power spectral densities, as well as that of the solution of the linearized system (11). The spectral density of Xa t and that ofXt satisfying (11) can be calculated explicitly using the power spectrum formula of [10] for linear diffusions of the form (11). In fact Xa t is such a diffusion: the effect of the stochastic averaging can be seen as replacing C from (14) by a multiple of the identity in the system (14), so the approximation to \u02dc X satis\ufb01es d \u02dc Xa t = A \u02dc Xa t dt + \u03c4dBt, where \u03c4 is given by (16). If we transform this equation by Xa t = Q \u02dc Xa t , we see that Xa t satis\ufb01es dXa t = MXa t dt + \u03c4QdBt. (20) \f10 Ditlevsen and Greenwood The spectral density of the \ufb01rst coordinate of Xa is S(f) = 1 2\u03c0 \u03c32m2 12 ((f 2 \u2212det(M))2 + (ftr(M))2) \u0000f 2 + det(M) \u0001 2\u03c92 , whereas the spectral density of the \ufb01rst coordinate of the linearized system, (11), is S(f) = 1 2\u03c0 \u03c32m2 12 (f 2 \u2212det(M))2 + (ftr(M))2 . In Fig. 4 the theoretical spectral densities for the two approximations are plotted, together with the estimated spectral density of the quiescent process from simulations of the stochastic ML model (1),(3)\u2013(5) and (8). The spectral density is estimated by averaging over at least 20 estimates from paths started at 0 of at least 450 ms of subthreshold \ufb02uctuations, and scaled to have the same maximum as the theoretical spectral density from (20). The averaging is done to reduce the large variance connected with spectral density estimation, avoiding any smoothing. Thus, the estimator is approximately unbiased, see also [8] where this approach is treated. The estimation is done for \u03c3\u2217= 0.03, 0.05 and 0.1. For higher noise, the lengths of subthreshold \ufb02uctuations between spikes are too short to reliably estimate the spectral density. Moreover, \u03c3\u2217= 0.1 corresponds to a number of ion channels N \u2248900, which can be considered a minimum acceptable number for the diffusion approximation to be relevant. The value \u03c3\u2217= 0.03 corresponds to N \u224810, 000. Remember that \u03c3 = 0.034\u03c3\u2217, see (12). The approximations are only acceptable for small noise, which is expected, since larger noise brings the process to areas further away from the \ufb01xed point, where non-linearities become increasingly important. 
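Sample paths of the approximating process Xa_t in (19) are straightforward to generate, because the standardized OU process can be updated exactly over each time step. The Python sketch below uses the numerical values of m11, m12, m21, λ and ω quoted in Section 3 and an arbitrary noise level; it is only meant to illustrate how (19) is assembled, not to reproduce the simulation scheme behind Fig. 4.

import numpy as np

# linearization quantities quoted in Section 3
m11, m12, m21 = 0.0258, -22.961, 0.000335
lam, omega = 0.0094, 0.0803
sigma_star = 0.05                         # illustrative noise level
sigma = 0.034 * sigma_star
tau = np.sqrt(-sigma**2 * m12 / (2 * omega**2 * m21))       # eq. (16)
Q = np.array([[-omega, m11 + lam], [0.0, m21]])

rng = np.random.default_rng(1)
dt, n = 0.1, 20000                        # 2000 ms of subthreshold dynamics
t = np.arange(n) * dt

# exact updates of the standardized 2-d OU process S, evaluated at times lam*t
S = np.zeros((n, 2))
a = np.exp(-lam * dt)
b = np.sqrt((1.0 - np.exp(-2.0 * lam * dt)) / 2.0)
for i in range(1, n):
    S[i] = a * S[i - 1] + b * rng.standard_normal(2)

def rot_minus(s):                         # the clockwise rotation R_{-s}
    return np.array([[np.cos(s), np.sin(s)], [-np.sin(s), np.cos(s)]])

Xa = np.array([tau / np.sqrt(lam) * Q @ rot_minus(omega * ti) @ S[i]
               for i, ti in enumerate(t)])
V_approx = -26.6 + Xa[:, 0]               # first coordinate approximates V_t
W_approx = 0.129 + Xa[:, 1]               # second coordinate approximates W_t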
frequency [1/ms] power 0 10 20 30 0.0 0.1 0.2 \u03c9 Morris Lecar approximation linearization frequency [1/ms] 0 20 40 60 80 100 0.0 0.1 0.2 \u03c9 Morris Lecar approximation linearization frequency [1/ms] 0 100 200 300 400 0.0 0.1 0.2 \u03c9 Morris Lecar approximation linearization FIGURE 4: Spectral density estimated from simulations between spikes of model (1), (3)\u2013(5), (8) (black solid line), theoretical spectral density of model (20) (cyan dashed line), and theoretical spectral density of model (11) (magenta dotted line). From left to right: \u03c3\u2217= 0.03, 0.05 and 0.1. 5. Reconstructing the stochastic ML \ufb01ring mechanism In this Section we construct a \ufb01ring mechanism matching that of the stochastic ML neuron. In Section 6 we will de\ufb01ne a new LIF-type process by combining this \ufb01ring mechanism with the radial OU process. This new model will, for small \u03c3, have an ISI distribution similar to that of the ML. \fThe Morris-Lecar neuron model embeds a leaky integrate-and-\ufb01re model 11 Firing in model (1), (3)\u2013(5) and (8) occurs when the stochastic dynamics shifts from a path circulating the stable equilibrium, modulated by an OU, to a noisy circuiting of the stable limit cycle. This shift happens, roughly, when the orbit passes from the inside to the outside of the unstable limit cycle. When the orbit comes close to the unstable limit cycle, it will follow this limit cycle for a short time, and then escape either to the inside, i.e. continue its subthreshold oscillations, or to the outside and a spike will occur. This understanding is not accurate enough to be implemented as a \ufb01ring scheme for the radial OU process (27), as we discuss further in Section 7. Hence, we embed the process Xa de\ufb01ned by (19) in the stochastic ML model by constructing a \ufb01ring mechanism mimicking that of the ML itself. It is clear that in the ML model, starting inside the unstable limit cycle, a spike will occur with increasing probability, the further away the process is from the \ufb01xed point. In order to construct a \ufb01ring mechanism matching that of ML, we will estimate, from simulations, the conditional probability that the ML \ufb01res, given that the trajectory of the ML crosses the line L = {(v, w) : v = Veq, w < Weq}. We computed estimates from simulated data using crossings of the line L as follows. For a given value of \u03c3\u2217and distance l from the \ufb01xed point, a short trajectory starting in (Veq, Weq \u2212l) was simulated from model (1), (3)\u2013(5) and (8), and it was registered whether \ufb01ring occurred in the \ufb01rst cycle of the stochastic path around (Veq, Weq). Firing was de\ufb01ned by the path crossing the line v = 0, which is well above the largest level inside the unstable limit cycle, see Fig. 1B. This was repeated 1000 times, and estimates of the conditional probability of spiking, \u02c6 p(l, \u03c3\u2217), were computed as the frequency of the trajectories where \ufb01ring occurred. The procedure was repeated for l = li = i\u03b4, i = 1, . . . , 25, where \u03b4 is the distance to the stable limit cycle divided by 20. In this way a grid of possible l values was covered, starting from l = 0 at the \ufb01xed point, where the probability of \ufb01ring is close to zero, to a point on L below the stable limit cycle, where the probability of \ufb01ring is close to one. The estimation was, furthermore, repeated for \u03c3\u2217= 0.01 to 0.08 in steps of 0.01. 
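A bare-bones version of this Monte Carlo experiment can be coded in a few lines. The Python sketch below uses an Euler-Maruyama discretization of (1), (3)-(5) and (8); the step size, the reduced number of replications (the paper uses 1000), the crude reflection keeping W inside (0,1), and the use of a fixed horizon of about one mean rotation period 2π/ω as a stand-in for "the first cycle" are simplifications made here and are not part of the original procedure.

import numpy as np

C, I = 20.0, 90.0
gCa, gK, gL = 4.4, 8.0, 2.0
VCa, VK, VL = 120.0, -84.0, -60.0
V1, V2, V3, V4, phi = -1.2, 18.0, 2.0, 30.0, 0.04
V_eq, W_eq = -26.6, 0.129
sigma_star = 0.05

m_inf = lambda v: 0.5 * (1.0 + np.tanh((v - V1) / V2))
al = lambda v: 0.5 * phi * np.cosh((v - V3) / (2 * V4)) * (1.0 + np.tanh((v - V3) / V4))
be = lambda v: 0.5 * phi * np.cosh((v - V3) / (2 * V4)) * (1.0 - np.tanh((v - V3) / V4))

def fires_within(l, horizon=78.0, dt=0.01, rng=np.random.default_rng()):
    """Euler-Maruyama path of (1),(3)-(5),(8) from (V_eq, W_eq - l); True if v crosses 0."""
    v, w = V_eq, W_eq - l
    for _ in range(int(horizon / dt)):
        a, b = al(v), be(v)
        dv = (-gCa * m_inf(v) * (v - VCa) - gK * w * (v - VK) - gL * (v - VL) + I) / C
        dw = a * (1.0 - w) - b * w
        diff = sigma_star * np.sqrt(max(2 * a * b / (a + b) * w * (1.0 - w), 0.0))
        v += dv * dt
        w += dw * dt + diff * np.sqrt(dt) * rng.standard_normal()
        w = min(max(w, 1e-12), 1.0 - 1e-12)   # crude reflection to keep w in (0,1)
        if v >= 0.0:
            return True
    return False

l_grid = np.linspace(0.001, 0.025, 25)
p_hat = [np.mean([fires_within(l) for _ in range(200)]) for l in l_grid]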
For each \ufb01xed \u03c3\u2217, the estimates of the conditional probability appear to depend in a sigmoidal way on the distance from the \ufb01xed point. We assumed the conditional \ufb01ring probability to be of the form p(l) = 1 1 + exp((\u03b1 \u2212l)/\u03b2). (21) The parameters \u03b1 and \u03b2 were estimated using non-linear regression of the 25 estimates of \u02c6 p(li; \u03c3\u2217) on l. In Fig. 5A these parametric estimates are plotted, as well as the individual nonparametric estimates \u02c6 p for \u03c3\u2217= 0.02, 0.05 and 0.08. We see that the family of estimates, \u02c6 p, \ufb01ts the hypothetic curve quite well for each value of \u03c3\u2217. Regression estimates are reported in Table 2. Note that \u03b1 is the distance along L from Weq at which the conditional probability of \ufb01ring equals one half. For all values of \u03c3\u2217, the estimate of \u03b1 is close to the distance along L between Weq and the unstable limit cycle, which equals 0.0172. In other words, the probability \u03c3\u2217 0.01 0.02 0.03 0.04 0.05 0.06 0.07 0.08 \u02c6 \u03b1 0.0174 0.0174 0.0169 0.0168 0.0171 0.0169 0.0167 0.0168 \u02c6 \u03b2 0.0006 0.0013 0.0020 0.0028 0.0033 0.0039 0.0047 0.0054 \u02c6 \u03b1 \u221a 2\u03bb/\u03c3 7.1022 3.5426 2.3012 1.7156 1.3922 1.1474 0.9739 0.8549 \u02c6 \u03b2 \u221a 2\u03bb/\u03c3 0.2590 0.2624 0.2759 0.2831 0.2718 0.2674 0.2738 0.2764 TABLE 2: Estimates of regression parameters for p(\u00b7) in the original space (\ufb01rst two rows), and in the transformed coordinates (last two rows). \f12 Ditlevsen and Greenwood G G G G G G G G G G G G G G G G G G G G G G G G G 0.005 0.015 0.025 0.0 0.2 0.4 0.6 0.8 1.0 distance from fixed point conditional probability of firing (A) 0 1 2 3 4 0.0 0.2 0.4 0.6 0.8 1.0 distance in transformed space conditional probability of firing (B) FIGURE 5: Conditional probability of spiking when crossing the line L = {(v, w) : v = Veq, w < Weq} for different values of \u03c3\u2217. (A) Original space. The circles, plus\u2019s and stars are individual nonparametric estimates obtained using \u03c3\u2217= 0.02, 0.05 and 0.08, respectively, with the \ufb01tted curves on top given by (21). The dashed line indicates where the unstable limit cycle crosses L, the full drawn line where the stable limit cycle crosses L. (B) The \ufb01tted curves in the transformed space for \u03c3\u2217= 0.02, 0.03, 0.04, 0.05, 0.06, 0.07 and 0.08 (right to left), as a function of the distance from the \ufb01xed point in the transformed coordinates. The crosses and boxed crosses indicate the crossing of the unstable and stable limit cycles of L, respectively, which depend on \u03c3 = 0.034\u03c3\u2217. of \ufb01ring, if the path starts at the intersection of L with the unstable limit cycle, is about 1/2. The parameter \u03b2 indicates the width of a band around \u03b1 where the conditional probability essentially changes. For instance, if l \u2208\u03b1 \u00b1 \u03b2 then p(l) \u2208(0.27, 0.73), if l \u2208\u03b1 \u00b1 2\u03b2 then p(l) \u2208(0.12, 0.88). As expected, the estimate of \u03b2 increases with increasing \u03c3\u2217, and for small noise the conditional probability approaches a step function since the process is mostly dominated by the drift. A step function would correspond to the \ufb01ring being represented by a \ufb01rst-passage time of a \ufb01xed threshold. 
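The nonlinear regression step amounts to a standard least-squares fit of the two-parameter logistic curve (21). In Python something like the following would do; the p̂ values below are synthetic placeholders fabricated from the Table 2 values for σ* = 0.05, and in practice one would plug in the Monte Carlo estimates obtained above.

import numpy as np
from scipy.optimize import curve_fit

def p_logistic(l, a, b):
    return 1.0 / (1.0 + np.exp((a - l) / b))

l_grid = np.linspace(0.001, 0.025, 25)
# placeholder "estimates": logistic curve with Table 2 parameters plus noise
p_hat = p_logistic(l_grid, 0.0171, 0.0033) \
        + 0.02 * np.random.default_rng(2).standard_normal(25)

(a_hat, b_hat), _ = curve_fit(p_logistic, l_grid, p_hat, p0=[0.017, 0.003])
print(a_hat, b_hat)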
Note though that \u02c6 \u03b2 is approximately proportional to \u03c3\u2217, and thus, as we said earlier and will see in the following, a \ufb01xed threshold at the crossing of the unstable limit cycle does not reproduce the desired spiking characteristics. In order to simplify the construction in Section 6 of a LIF model which, together with a \ufb01ring rule, behaves like the stochastic ML, we will change coordinates as follows. Observe that (19) can be written \u221a \u03bb \u03c4 Q\u22121Xa t = \u0012 cos \u03c9t sin \u03c9t \u2212sin \u03c9t cos \u03c9t \u0013 S\u03bbt, (22) so for \ufb01xed t, \u221a \u03bbQ\u22121Xa t /\u03c4 is the clockwise rotation by angle \u03c9t of the orthogonal pair (S(1) \u03bbt , S(2) \u03bbt ). We de\ufb01ne a transformation of the space (v, w) by centering at (Veq, Weq) and normalizing as in (22). Let \u0012\u02dc v \u02dc w \u0013 = \u221a \u03bb \u03c4 Q\u22121 \u0012 v \u2212Veq w \u2212Weq \u0013 (23) be the coordinates of the transformed space. In the new coordinates our process is simpli\ufb01ed \fThe Morris-Lecar neuron model embeds a leaky integrate-and-\ufb01re model 13 to a rotation modulated by a standard two-dimensional OU process with independent components. The transformation depends on \u03c3 = 0.034\u03c3\u2217, namely, the transformed unstable limit cycle becomes smaller with increasing noise, through the value of \u03c4 given in (16). This is exactly what is causing a higher \ufb01ring probability for larger \u03c3\u2217. The line L will in the transformed space be \u02dc L = \u221a \u03bb \u03c4 Q\u22121 \u00120 l \u0013 = \u221a \u03bb m21\u03c4 \u0012 m11+\u03bb \u03c9 1 \u0013 l for l \u22650. A distance l will thus transform to a distance r = ( \u221a 2\u03bb/\u03c3)l, and the conditional probability of \ufb01ring (21) transforms to p(r) = 1 1 + exp((\u03b1\u2217\u2212r)/\u03b2\u2217), (24) where \u03b1\u2217= \u03b1 \u221a 2\u03bb/\u03c3 and \u03b2\u2217= \u03b2 \u221a 2\u03bb/\u03c3. The \ufb01tted curves of (24) for \u03c3\u2217= 0.02 \u22120.08, as a function of the distance from the \ufb01xed point in the transformed coordinates are given in Fig. 5B, with indication of the crossings of the unstable and stable limit cycles, respectively, which now depend on \u03c3. Note that in the transformed space, the width of the band where the conditional probability is essentially different from 0 or 1 is nearly constant, see Table 2. From here on we use the coordinates de\ufb01ned by (23). 6. Construction of a leaky-integrate-and-\ufb01re model with ML \ufb01ring statistics The simpler stochastic LIF models sacri\ufb01ce realism for mathematical tractability [4, 11]. In these models, a neuron is characterized by a single stochastic differential equation describing the evolution of neuronal membrane potential depending on time, dXt = \u00b5(Xt)dt + \u03c3(Xt)dBt, X0 = x0, (25) where Xt corresponds to Vt in the ML model, together with a threshold \ufb01ring rule, T = inf{t > 0 : Xt \u2265S}. (26) In this Section we de\ufb01ne a LIF model which does not make this compromise, using the result of Section 4 and the \ufb01ring mechanism de\ufb01ned in Section 5. The distance of the approximate process \u221a \u03bbQ\u22121Xa t /\u03c4 of (22) from the point (0, 0) at time t is given by the modulus of the two-dimensional standardized OU process S\u03bbt. The modulus of S\u03bbt at time t is given by the process R\u03bbt = q (S(1) \u03bbt )2 + (S(2) \u03bbt )2, which is a standard radial OU process with two degrees of freedom. 
It has state space (0, \u221e), and solves the stochastic differential equation dR\u03bbt = \u0012 1 2R\u03bbt \u2212R\u03bbt \u0013 dt + dW\u03bbt, (27) see e.g. [3]. We de\ufb01ne a new LIF process by (27), and \ufb01ring mechanism derived from (24). After each \ufb01ring, we will reset the time to 0 and assume the process reset to 0, i.e. R0 = 0, \f14 Ditlevsen and Greenwood corresponding to S0 = (0, 0) and (V0, W0) = (Veq, Weq). By Ito\u2019s formula, the process Yu = R2 u satis\ufb01es the stochastic differential equation dYu = 2 (1 \u2212Yu) du + 2 p YudWu, (28) and is thus a square-root process, see e.g. [5], also called a Feller or a Cox-Ingersoll-Ross process. This process is ergodic, and its stationary distribution is the exponential distribution with mean one. It follows that the stationary distribution of Ru has density f(r) = 2re\u2212r2 on (0, \u221e), i.e. it follows a Rayleigh distribution. The transition density of Yu starting at y0 at time 0, is a non-central \u03c72-distribution with two degrees of freedom and non-centrality parameter \u03b4(u, y0) = 2y0e\u22122u/(1 \u2212e\u22122u). Then 2Yu/(1 \u2212e\u22122u) follows the standard non-central \u03c72distribution F\u03c72(2y/(1 \u2212e\u22122u), 2, \u03b4(u, y0)). It is particularly simple because of the integer degrees of freedom. Transforming to the radial OU we obtain the transition density of Ru starting at s at time 0 fu(r, s) = 2r 1 \u2212e\u22122u exp \u001a \u2212r2 + s2e\u22122u 1 \u2212e\u22122u \u001b I0 \u0012 rs sinh(u) \u0013 , (29) where I0(x) = 1 \u03c0 R \u03c0 0 ex cos \u03b8d\u03b8 is the modi\ufb01ed Bessel function of the \ufb01rst kind of index 0. Writing the two-dimensional process Su in polar coordinates, Ru and \u03b8u, where \u03b8u is the angle at time u to the positive part of the \ufb01rst coordinate, we \ufb01nd that the modulus and the angle are independent, and that \u03b8u is uniformly distributed on (0, 2\u03c0). This can e.g. be seen from the fact that S(1) u and S(2) u are independent normal with mean 0 and equal variances. Thus, for \ufb01xed u, S(2) u /S(1) u is standard Cauchy distributed and \u03b8u = arctan(S(2) u /S(1) u ) is U(0, 2\u03c0). Let T denote the \ufb01ring time random variable. We want to compute the density of the distribution of T, and for this we \ufb01nd it convenient to express this density in terms of the conditional hazard rate, \u03b1(t, r) = lim \u2206t\u21920 1 \u2206tP(t \u2264T < t + \u2206t | T \u2265t, R\u03bbt = r). This function is the density of the conditional probability, given the position on L is r at time t, of a spike occuring in the next small time interval, given that it has not yet occurred. From standard results from survival analysis, see e.g. [1], we obtain P(T > t | R\u03bbs, 0 \u2264s \u2264t) = exp \u0012 \u2212 Z t 0 \u03b1(R\u03bbs)ds \u0013 . The unconditional distribution is then given by P(T > t) = E \u0012 exp \u0012 \u2212 Z t 0 \u03b1(R\u03bbs)ds \u0013\u0013 (30) where E(\u00b7) denotes expectation with respect to the distribution of R. The density is thus g(t) = d dtP(T \u2264t) = E \u0012 \u03b1(R\u03bbt) exp \u0012 \u2212 Z t 0 \u03b1(R\u03bbs)ds \u0013\u0013 . (31) \fThe Morris-Lecar neuron model embeds a leaky integrate-and-\ufb01re model 15 The \ufb01ring is de\ufb01ned to be initiated from L, and on average the process crosses L every 2\u03c0/\u03c9 = 78.2 time units. 
Using (24), the estimated conditional probability of \ufb01ring given the position on L is r, which by de\ufb01nition does not depend on t, we estimate the hazard rate as \u03b1(t, r) = \u03b1(r) = \u03c9 2\u03c0 1 1 + exp((\u03b1\u2217\u2212r)/\u03b2\u2217). (32) Note that it is bounded. This is not realistic, since a very large value of r should cause immediate \ufb01ring. In [21] a \ufb01ring rule with unbounded hazard rate was proposed, and in [15] it was shown to \ufb01t well to experimental data. Therefore, we will also see how our model performs if we use in the \ufb01ring mechanism a hazard rate of the form \u03b1(t, r) = \u03b1(r) = exp((r \u2212\u03b1)/\u03b2) (33) for \u03b1, \u03b2 > 0. Like before, \u03b1 plays the role of a threshold, and \u03b2 gives the width of the threshold region. When \u03b2 \u21920, the \ufb01ring rule converges to a \ufb01xed threshold crossing. To estimate \u03b1 and \u03b2 in (33), we simulated 1000 spike times from the ML. The cumulative hazard A(t) = R t 0 \u03b1(t) was then estimated from the simulated spike times by the standard empirical Nelson-Aalen estimator. The theoretical cumulative hazard using (27) and (33) can be calculated as A(t) = E \u0012Z t 0 \u03b1(R\u03bbs)ds \u0013 = exp \u0012 \u2212\u03b1 \u03b2 \u0013 Z t 0 E \u0012 exp \u0012R\u03bbs \u03b2 \u0013\u0013 ds = \u221a\u03c0 exp \u0012 \u2212\u03b1 \u03b2 \u0013 Z t 0 \u0012 g(s) exp \u00121 4g(s)2 \u0013 \u03a6 (g(s)) + 1 \u0013 ds (34) where we have used the density f\u03bbs(r, 0) given in (29). Here, g(s) = \u221a 1 \u2212e\u22122\u03bbs/\u03b2, and \u03a6(\u00b7) is the standard normal cumulative distribution function. Then, \u03b1 and \u03b2 were estimated by the least square distance between (34) and the estimated cumulative hazard from the simulated spike times. For \u03c3\u2217= 0.05 the estimates were \u03b1 = 6.31 and \u03b2 = 0.76. The \ufb01nal model is dRu = \u0012 1 2Ru \u2212Ru \u0013 dt + dWu \u2212Ru\u2212\u00b5(Ru\u2212, du), (35) where \u00b5(Ru\u2212, du) is a Poisson measure with intensity \u03b1(Ru\u2212), and Ru\u2212denotes the left limit of Ru. Here, \u03b1(\u00b7) is either given by (32) or (33). The jump size is \u2212Ru\u2212, thus giving the reset to 0 at spike times. A reasonable alternative to the soft threshold \ufb01ring mechanism used here would be to use the \ufb01ring rule de\ufb01ned by a threshold as in the classical LIF models, equation (26). A natural choice of threshold would be where the LIF process reaches a level corresponding to the unstable limit cycle. In fact, according to our estimates in Fig. 5 and Table 2, the \ufb01ring probability of the ML at this threshold is around 1/2. However, the ISI distribution estimated from simulations using a hard threshold at the unstable limit cycle is shifted towards larger times, relative to the ML ISI distribution. This happens because the process might cycle many times inside the unstable limit cycle, so even if the probability of spiking in a single cycle is small, the total probability is not negligible. This is lost when only a hard threshold is considered. Instead we chose the threshold value such that the mean of the ML ISI distribution and the mean of the \f16 Ditlevsen and Greenwood LIF ISI distribution were the same. In [12], the mean of T from (26) with Xt = Rt started at R0 = 0 is given using a hypergeometric function, E(T) = S2 2 2F2 \u00001, 1; 2, 2; S2\u0001 . (36) The average of the 1000 ML \ufb01ring times for \u03c3\u2217= 0.05 was 447. 
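The equating below can be carried out with any root finder applied to (36); a minimal sketch with mpmath, where 447 is the simulated ML mean just quoted and the starting point 3.0 is an arbitrary choice:

from mpmath import hyper, findroot

def mean_T(S):
    # E(T) = S^2/2 * 2F2(1,1;2,2;S^2), eq. (36)
    return 0.5 * S**2 * hyper([1, 1], [2, 2], S**2)

S_hard = findroot(lambda S: mean_T(S) - 447.0, 3.0)
print(S_hard)    # close to 2.97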
Equating with (36) gives a value S = 2.97 for the hard threshold. Note that this is much smaller than the estimated \u03b1 from (33). 7. Comparison of \ufb01ring statistics One of the major issues in computational neuroscience is to determine the ISI distribution. We therefore simulated the ML model given by (1), (3)\u2013(5) and (8) until spiking, and thereafter reset to the \ufb01xed point. This was done 1000 times, and the time of the \ufb01ring was recorded. The ISI distribution from our approximate model is given by the density (31), or equivalently, from the survival function (30). Due to the law of large numbers and since we know the exact distribution of Ru, for \ufb01xed t we can numerically determine (31) up to any desired precision by choosing n and M large enough through the expression g(t) \u2248 1 M M X m=1 \u03b1 \u0010 R(m) \u03bbt \u0011 exp \uf8eb \uf8ed\u2212t n n X i=1 \u03b1 \u0010 R(m) i\u03bbt/n \u0011 + \u03b1 \u0010 R(m) (i\u22121)\u03bbt/n \u0011 2 \uf8f6 \uf8f8. (37) Here (R(m) 0 , . . . , R(m) i\u03bbt/n, . . . , R(m) \u03bbt ) are M realizations of Ri\u03bbt/n, i = 0, 1, . . . , n, and the integral has been approximated by the trapezoidal rule. The hazard rate is either given by (32) or (33). The results are illustrated in Figure 6 for \u03c3\u2217= 0.05, using M = 1000. The estimated ISI distributions from our approximate model with both \ufb01ring mechanisms compare well with the estimated ISI distribution of ML reset to 0 after \ufb01rings. On the contrary, the hard threshold does not reproduce the ISI distribution well, e.g. the right tail is too heavy. This is because the probability of \ufb01ring during low subthreshold activity is set to 0, whereas we have seen it is not. 8. Discussion A stochastic LIF model constructed with a radial OU process and \ufb01ring mechanism of either logistic or exponential type has been shown to mimic the ISI statistics of a ML neuron model. It captures subthreshold dynamics, not of the membrane potential alone, but of a combination of the membrane potential and ion channels. This construction will allow us to answer several questions about ML models, which have been accessible only for LIF models, even though the latter have less biological motivation. An example of such a question would be: Using ISI experimental data, the noise standard deviation \u03c3 can be estimated [18]. In principle, this should also be possible from our new LIF model, even though we use a soft threshold. This will give an estimate of N, the number of ion channels involved, through the relation (\u03c3\u2217)2 \u22489/N. A question we have not explored is: what is the best way to restart our new LIF model? In our simulations we restarted both our LIF and the ML at the \ufb01xed point of the ML. However, an uninterrupted stochastic ML produces continuous paths as in Fig. 1B. After \ufb01ring, which \fThe Morris-Lecar neuron model embeds a leaky integrate-and-\ufb01re model 17 ISI distribution 0 500 1000 1500 2000 0 0.001 0.002 Figure 6: Distribution of \ufb01ring times for \u03c3\u2217= 0.05. The histogram is based on 1000 simulated \ufb01ring times from the ML model, the vertical dotted line is the average. Curves are estimates of the probability density, equation (37). Black curve is estimated using (32), gray curve is estimated using (33), dashed curve is estimated using a \ufb01xed threshold, (26). means traversing the large stable limit cycle, possibly several times, they reenter a neighborhood of the \ufb01xed point from its edge. 
A further re\ufb01nement of our LIF model will be obtained by introducing a reentry mechanism, which mimics this aspect of the ML. Appendix A. Linearization matrix The expression for M in (11) is M = m11 \u2212gkWeq(Veq \u2212VK)/C 2VeqWeq\u03b2 (Veq) /V4 \u2212\u03b1 (Veq) ! , m11 = \u2212Veq C \u00122gCa(Veq \u2212VCa)\u03b1 (Veq) \u03b2 (Veq) V2(\u03b1 (Veq) + \u03b2 (Veq))2 + gCam\u221e(Veq) + gKWeq + gL \u0013 Acknowledgements S. Ditlevsen supported by the Danish Council for Independent Research | Natural Sciences. P. Greenwood supported by the Statistical and Applied Mathematical Sciences Institute, Research Triangle Park, N.C., and the Mathematical, Computational and Modeling Sciences Center at Arizona State University. The Villum Kann Rasmussen foundation supported a 4 months visiting professorship for P. Greenwood at University of Copenhagen." + } + ], + "Massimiliano Tamborrino": [ + { + "url": "http://arxiv.org/abs/1310.6933v2", + "title": "Weak convergence of marked point processes generated by crossings of multivariate jump processes. Applications to neural network modeling", + "abstract": "We consider the multivariate point process determined by the crossing times\nof the components of a multivariate jump process through a multivariate\nboundary, assuming to reset each component to an initial value after its\nboundary crossing. We prove that this point process converges weakly to the\npoint process determined by the crossing times of the limit process. This holds\nfor both diffusion and deterministic limit processes. The almost sure\nconvergence of the first passage times under the almost sure convergence of the\nprocesses is also proved. The particular case of a multivariate Stein process\nconverging to a multivariate Ornstein-Uhlenbeck process is discussed as a\nguideline for applying diffusion limits for jump processes. We apply our\ntheoretical findings to neural network modeling. The proposed model gives a\nmathematical foundation to the generalization of the class of Leaky\nIntegrate-and-Fire models for single neural dynamics to the case of a firing\nnetwork of neurons. This will help future study of dependent spike trains.", + "authors": "Massimiliano Tamborrino, Laura Sacerdote, Martin Jacobsen", + "published": "2013-10-25", + "updated": "2014-07-13", + "primary_cat": "math.PR", + "cats": [ + "math.PR" + ], + "main_content": "Introduction Limit theorems for weak convergence of probability measures and stochastic jump processes with frequent jumps of small amplitudes have been widely investigated in the literature, both for univariate and multivariate processes. Besides the pure mathematical interest, the main reason is that these theorems allow to switch from discontinuous to continuous processes, improving their mathematical tractability. Depending on the assumptions on the frequency and size of the \u2217Corresponding author. Phone: +45 35320785. Fax: +45 35320704. Email addresses: mt@math.ku.dk (M.Tamborrino), laura.sacerdote@unito.it (L.Sacerdote), martin@math.ku.dk (M. Jacobsen) Preprint submitted to Elsevier January 15, 2020 arXiv:1310.6933v2 [math.PR] 13 Jul 2014 \fjumps, the limit object can be either deterministic, obtained e.g. as solution of systems of ordinary/partial di\ufb00erential equations [1, 2, 3, 4], or stochastic [5, 6, 7, 8]. Limit theorems of the \ufb01rst type are usually called the \ufb02uid limit, thermodynamic limit or hydrodynamic limit, and give rise to what is called Kurtz approximation [2], see e.g. [9] for a review. 
In this paper we consider limit theorems of the second type, which we refer to as di\ufb00usion limits, since they yield di\ufb00usion processes. Some well known univariate examples are the Wiener, the Ornstein-Uhlenbeck (OU) and the Cox-Ingersoll-Ross (also known as square-root) processes, which can be obtained as di\ufb00usion limits of random walk [5], Stein [10] and branching processes [11], respectively. A special case of weak convergence of multivariate jump processes is considered in Section 2, as a guideline for applying the method proposed in [6], based on convergence of triplets of characteristics. In several applications, e.g. engineering [12], \ufb01nance [13, 14], neuroscience [15, 16], physics [17, 18] and reliability theory [19, 20], the stochastic process evolves in presence of a boundary, and it is of paramount interest to detect the so-called \ufb01rst-passage-time (FPT) of the process, i.e. the epoch when the process crosses a boundary level for the \ufb01rst time. A natural question arises: how does the FPT of the jump process relate to the FPT of its limit process? The answer is not trivial, since the FPT is not a continuous functional of the process and therefore the continuous mapping theorem cannot be applied. There exist di\ufb00erent techniques for proving the weak convergence of the FPTs of univariate processes, see e.g. [10, 21]. The extension of these results to multivariate processes requests to de\ufb01ne the behavior of the single component after its FPT. Throughout, we assume to reset it and then restart its dynamics. This choice is suggested by application in neuroscience and reliability theory, see e.g. [22, 23]. The collection of FPTs coming from di\ufb00erent components determine a multivariate point process, which we interpret as a univariate marked point process in Section 3. The primary aim of this paper is to show that the marked point process determined by the exit times of a multivariate jump process with reset converges weakly to the marked point process determined by the exit times of its limit process (cf. Section 4 and Section 5 for proofs). Interestingly this result does not depend on whether the limit process is obtained through a di\ufb00usion or a Kurtz approximation. Moreover, we also prove that the almost sure convergence of the processes guarantees the almost sure convergence of their passage times. The second aim of this paper is to provide a simple mathematical model to describe a neural network able to reproduce dependences between spike trains, i.e. collections of a short-lasting events (spikes) in which the electrical membrane potential of a cell rapidly rises and falls. The availability of such a model can be useful in neuroscience as a tool for the study of the neural code. Indeed it is commonly believed that the neural code is encoded in the \ufb01ring times of the neurons: dependences between spike trains correspond to the transmission of information from a neuron to others [24, 25]. Natural candidates as neural network models are generalization of univariate Leaky Integrate-and-Fire (LIF) models, which describe single neuron dynamics, see e.g. [16, 26]. These models 2 \fsacri\ufb01ce realism, e.g. they disregard the anatomy of the neuron, describing it as a single point, and the biophysical properties related with ion channels, for mathematical tractability [27, 28, 29]. Thought some criticisms have appeared [30], they are considered good descriptors of the neuron spiking activity [31, 32]. 
In Section 6 we interpret our processes and theorems in the framework of neural network modeling, extending the class of LIF models from univariate to multivariate. First, the weak convergence shown in Section 2 gives a neuronal foundation to the use of multivariate OU processes for modeling sub-threshold membrane potential dynamics of neural networks [33, 34], where dependences between neurons are determined by common synaptic inputs from the surrounding network. Second, the multivariate process with reset introduced in Section 3 de\ufb01nes the \ufb01ring mechanism for a neural network. Finally, the weak convergence of the univariate marked point process proved in Section 4 guarantees that the neural code is kept under the di\ufb00usion limit. The paper is concluded with a brief discussion and outlook on further developments and applications. 2. Weak convergence of multivariate Stein processes to multivariate Ornstein-Uhlnebeck As an example for proving the weak convergence of multivariate jump processes using the method proposed in [6], we show the convergence of a multivariate Stein to a multivariate OU. Mimicking the one-dimensional case [8, 10] we introduce a sequence of multivariate Stein processes (Xn)n\u22651, with Xn = {(X1;n, . . . , Xk;n)(t); t \u22650} originated in the starting position x0;n = (x01;n, . . . , x0k;n). For each 1 \u2264j \u2264k, n \u22651, the jth component of the multivariate Stein process, denoted by Xj;n(t), is de\ufb01ned by Xj;n(t) = x0j;n \u2212 Z t 0 Xj;n(s) \u03b8 ds + \u0002 anN + j;n(t) + bnN \u2212 j;n(t) \u0003 + X A\u2208A 1{j\u2208A} h anM + A;n(t) + bnM \u2212 A;n(t) i , (1) where 1A is the indicator function of the set A and A denotes the set of all subsets of {1, . . . , k} consisting of at least two elements. Here N + j;n (intensity \u03b1j;n), N \u2212 j;n (intensity \u03b2j;n), M + A;n (intensity \u03bbA;n) and M \u2212 A;n (intensity \u03c9A;n) for 1 \u2264j \u2264k, A \u2208A, are a sequence of independent Poisson processes. In particular, the processes N + j;n(t) and N \u2212 j;n(t) are typical of the jth component, while the processes M + A;n(t) and M + A;n(t) act on a set of components A \u2208A. Therefore, the dynamics of Xj;n are determined by two di\ufb00erent types of inputs. Moreover, an > 0 and bn < 0 denote the constant amplitudes of the inputs N + n , M + n and N \u2212 n , M \u2212 n , respectively. Remark 2.1. The process de\ufb01ned by (1) is an example of piecewise-deterministic Markov process or stochastic hybrid system, i.e. a process with deterministic behavior between jumps [35, 36]. 3 \fFor each A \u2208A, 1 \u2264j \u2264k and \u03b1j;n \u2192\u221e, \u03b2j;n \u2192\u221e, \u03bbA;n \u2192\u221e, \u03c9A;n \u2192\u221e, (2) an \u21920, bn \u21920, (3) we assume that the rates of the Poisson processes ful\ufb01ll \u00b5j;n = \u03b1j;nan + \u03b2j;nbn \u2192\u00b5j, \u00b5A;n = \u03bbA;nan + \u03c9A;nbn \u2192\u00b5A, (4) \u03c32 j;n = \u03b1j;na2 n + \u03b2j;nb2 n \u2192\u03c32 j , \u03c32 A;n = \u03bbA;na2 n + \u03c9A;nb2 n \u2192\u03c32 A, (5) as n \u2192\u221e. A possible parameter choice satisfying these conditions is an = \u2212bn = 1 n \u03b1j;n = (\u00b5j + \u03c32 j 2 n)n, \u03b2j;n = \u03c32 j 2 n2, 1 \u2264j \u2264k \u03bbA;n = (\u00b5A + \u03c32 A 2 n)n, \u03c9A;n = \u03c32 A 2 n2, A \u2208A. Remark 2.2. 
Jumps possess amplitudes decreasing to zero for n \u2192\u221ebut occur at an increasing frequency roughly inversely proportional to the square of the jump size, following the literature for univariate di\ufb00usion limits. Thus we are not in the \ufb02uid limit setting, where the frequency are roughly inversely proportional to the jump size and the noise term is proportional to 1/\u221an [1]. To prove the weak convergence of Xn, we \ufb01rst de\ufb01ne a new process Zn = {(Z1;n, . . . , Zk;n)(t); t \u22650}, with jth component given by Zj;n(t) = \u2212\u0393j;nt+ \u0002 anN + j;n(t) + bnN \u2212 j;n(t) \u0003 + X A\u2208A 1{j\u2208A} h anM + A;n(t) + bnM \u2212 A;n(t) i , with \u0393j;n = \u00b5j;n + X A\u2208A 1{j\u2208A}\u00b5A;n, 1 \u2264j \u2264k. The process Zn converges weakly to a Wiener process W = {(W1, . . . , Wn)(t); t \u22650}: Lemma 1. Under conditions (2), (3), (4), (5), Zn converges weakly to a multivariate Wiener process W with mean 0 and de\ufb01nite positive not diagonal covariance matrix \u03a8 with components \u03c8jl = 1{j=l}\u03c32 j + X A\u2208A 1{j,l\u2208A}\u03c32 A, 1 \u2264j, l \u2264k. (6) The proof of Lemma 1 is given in Appendix A. Note that Zj;n(t) is the martingale part of Xj;n(t), see (A.4). Thus martingale limit theorems can alternatively be used for proving Lemma 1, mimicking the proofs in [3, 4]. Finally, we show that Xn is a continuous functional of Zn, and it holds 4 \fTheorem 1. Let x0;n be a sequence in Rk converging to y0 = (y01, . . . , y0k). Then, the sequence of processes Xn de\ufb01ned by (1) with rates ful\ufb01lling (4), (5), under conditions (2), (3), converges weakly to the multivariate OU di\ufb00usion process Y given by Yj(t) = y0j + Z t 0 \u0014 \u2212Yj(s) \u03b8 + \u0393j \u0015 ds + Wj(t), 1 \u2264j \u2264k, (7) where \u0393j is de\ufb01ned by \u0393j = \u00b5j + X A\u2208A 1{j\u2208A}\u00b5A, 1 \u2264j \u2264k, (8) and W is a k-dimensional Wiener process with mean 0 and covariance matrix \u03a8 given by (6). The proof of Theorem 1 is given in Appendix A. Remark 2.3. If all \u03c32 j and \u03c32 A in (5) equal 0, Theorem 1 yields a deterministic (\ufb02uid) limit and results from [3] can be applied. Remark 2.4. Theorem 1 also holds when (x0;n)n\u22651 is a random sequence converging to a random vector y0. Remark 2.5. The obtained OU process can be rewritten as dY (t) = (\u2212CY (t) + D)dt + dW (t), (9) where C is a diagonal k \u00d7 k matrix, D is a k-dimensional vector and W is a multivariate Wiener process with de\ufb01nite positive non-diagonal covariance matrix \u03a8 representing correlated Gaussian noise. For simulation purposes, the di\ufb00usion part in (9) should be rewritten through the Cholesky decomposition. A modi\ufb01cation of the original Stein model can be obtained introducing direct interactions between the ith and jth components. The corresponding di\ufb00usion limit process veri\ufb01es (9) with C non-diagonal matrix. 3. The multivariate FPT problem: preliminaries Consider a sequence (Xn)n\u22651 of multivariate jump processes weakly converging to Y . Let B = (B1, . . . , Bk) be a k-dimensional vector of boundary values, where Bj is the boundary of the jth component of the process. We denote Tj;n the crossing time of the jth component of the jump process through the boundary Bj, with Bj > x0j;n. That is Tj;n = TBj(Xj;n) = inf{t > 0 : Xj;n(t) > Bj}. Moreover, we denote \u03c41;n the minimum of the FPTs of the multivariate jump process Xn, i.e. \u03c41;n = min (T1;n, . . . , Tk;n) , 5 \fand \u03b71;n \u2282{1, . . . 
, k} the discrete random variable specifying the set of jumping components at time \u03c41;n. We introduce the reset procedure as follows. Whenever a component j attains its boundary, it is instantaneously reset to r0j < Bj, and then it restarts, while the other components pursue their evolution till the attainment of their boundary. This procedure determines the new process X\u2217 n. We de\ufb01ne it by introducing a sequence \u0010 X(m) n \u0011 m\u22651 of multivariate jump processes de\ufb01ned on successive time windows, i.e. X(m) n is de\ufb01ned on the mth time window, for m = 1, 2, . . .. Conditionally on (X(1) n , . . . , X(m) n ), X(m+1) n obeys to the same stochastic di\ufb00erential equation as Xn, with random starting position determined by (X(1) n , . . . , X(m) n ). In particular, the \ufb01rst time window contains the process Xn up to \u03c41;n, which we denote by X(1) n . The second time window contains the process X(2) n whose components are originated in X(1) n (\u03c41;n), except for the crossing components \u03b71;n, which are set to their reset values. This second window lasts until when one of the component attains its boundary at time \u03c42;n. Successive time windows are analogously introduced, de\ufb01ning the corresponding processes. Similarly, we de\ufb01ne Tj and \u03c41 for the process Y , while \u03b71 \u2208{1, . . . , k} is de\ufb01ned as the discrete random variable specifying the jumping component at time \u03c41, since simultaneous jumps do not occur for Y . We de\ufb01ne the reset process Y \u2217by introducing a sequence \u0000Y (m)\u0001 m\u22651 of multivariate di\ufb00usion processes. Set Y (1) \u2261Y . Conditionally on \u0000Y (1), . . . , Y (m)\u0001 , Y (m+1) obeys to the same stochastic di\ufb00erential equation as Y , with random starting position determined by \u0000Y (1), . . . , Y (m)\u0001 and with the k-dimensional Brownian motion W independent of \u0000Y (1), . . . , Y (m)\u0001 , for m \u22651. Below we shall brie\ufb02y say that X(m+1) n (or Y (m+1)) is obtained by conditional independence and then specify the initial value x0;n (or y0). Now we formalize the recursive de\ufb01nition of X\u2217 n and Y \u2217on consecutive time windows. A schematic illustration of the involved variables is given in Fig. 1. Step m = 1. De\ufb01ne X\u2217 n(t) = Xn(t) on the interval [0, \u03c41;n[ and Y \u2217(t) = Y (t) on [0, \u03c41[, with resetting value X\u2217 n(0) = r0 = Y \u2217(0). De\ufb01ne X\u2217 j;n(\u03c41;n) = Xj;n(\u03c41;n) if j \u0338\u2208\u03b71;n or X\u2217 j;n(\u03c41;n) = r0j if j \u2208\u03b71;n. Similarly de\ufb01ne Y \u2217 j (\u03c41) = Yj(\u03c41) if j \u0338= \u03b71 or Y \u2217 j (\u03c41) = r0j if j = \u03b71. Step m = 2. For j \u2208\u03b71;n, obtain X(2) n by conditional independence from X(1) n , with initial value x0;n = X\u2217 n (\u03c41;n). Similarly, for \u03b71 = j, obtain Y (2) by conditional independence from Y (1), with initial value y0 = Y \u2217(\u03c41). Then, de\ufb01ne T (2) j;n, \u03c42;n, \u03b72;n from X(2) n and T (2) j , \u03c42, \u03b72 from Y (2), for m = 1. De\ufb01ne X\u2217 n(t) = X(2) n (t \u2212\u03c41;n) on the interval [\u03c41;n, \u03c41;n + \u03c42;n[ and Y \u2217(t) = Y (t \u2212\u03c41) on [\u03c41, \u03c41 + \u03c42[. Then de\ufb01ne X\u2217 j;n (\u03c41;n + \u03c42;n) = X(2) j;n(\u03c42;n) if j \u0338\u2208\u03b72;n or X\u2217 j;n (\u03c41;n + \u03c42;n) = r0j if j \u2208\u03b72;n. 
Similarly de\ufb01ne Y \u2217 j (\u03c41 + \u03c42) = Y (2) j (\u03c42) if j \u0338= \u03b72 or Y \u2217 j (\u03c41 + \u03c42) = r0j if j = \u03b72. Step m > 2. For j \u2208\u03b7m;n, obtain X(m) n by conditional independence from X(m\u22121), with initial value x0;n = X\u2217 n(Pm\u22121 l=1 \u03c4l;n). Similarly, for \u03b7m = j, obtain 6 \fG G G G G G G G 0 B \u03c41;n \u03c42;n \u03c43;n \u03c44;n \u03b7n= 2 1 2 1 G G G G X1n X2n FPT(X1n) FPT(X2n) X1n(\u03c4n) X2n(\u03c4n) r01 r02 * * * * * * Figure 1: Illustration of a bivariate jump process with reset X\u2217 n = (X\u2217 1;n, X\u2217 2;n). Whenever a component j reaches its boundary Bj, it is instantaneously reset to its resting value r0j < Bj. The process is de\ufb01ned in successive time windows determined by the FPTs of the process. Here \u03b7i;n denotes the set of jumping components at time \u03c4i;n, which is the FPT of X(i) n in the ith time window. Y (m) by conditional independence from Y (m\u22121), with initial value y0 = Y \u2217(Pm\u22121 l=1 \u03c4l). De\ufb01ne, T (m) j;n , \u03c4m;n, \u03b7m;n from X(m) n and T (m) j , \u03c4m, \u03b7m from Y (m) as above. De\ufb01ne X\u2217 n(t) = X(m) n (t \u2212Pm\u22121 l=1 \u03c4l;n) for t \u2208[Pm\u22121 l=1 \u03c4l;n, Pm l=1 \u03c4l;n[ and Y \u2217(t) = Y (m)(t \u2212Pm\u22121 l=1 \u03c4l) for t \u2208[Pm\u22121 l=1 \u03c4l, Pm l=1 \u03c4l[. Then de\ufb01ne X\u2217 j;n (Pm l=1 \u03c4l;n) = X(r) j;n(\u03c4m;n) if j \u0338\u2208\u03b7m;n or X\u2217 j;n (Pm l=1 \u03c4l;n) = r0j if j \u2208\u03b7m;n. Similarly de\ufb01ne Y \u2217 j (Pm l=1 \u03c4l) = Y (m) j (\u03c4m) if j \u0338= \u03b7m or Y \u2217 j (Pm l=1 \u03c4l) = r0j if j = \u03b7m. Besides the processes X\u2217 n and Y \u2217, we introduce a couple of marked processes as follows. Denote \u03c4n = (\u03c4i;n)i\u22651, \u03c4 = (\u03c4i)i\u22651, \u03b7n = (\u03b7i;n)i\u22651 and \u03b7 = (\u03b7i)i\u22651. Then (\u03c4n, \u03b7n) and (\u03c4, \u03b7) may be viewed as marked point processes describing the passage times of the processes X\u2217 n and Y \u2217, respectively. These marked processes are superposition of point processes generated by crossing times of the single components. 4. Main result on the convergence of the marked point process The processes X\u2217 n and Y \u2217are neither continuous nor di\ufb00usions. Hence the convergence of X\u2217 n to Y \u2217does not directly follow from the convergence of Xn to Y . Since the FPT is not a continuous function of the process, the convergence of the marked point process (\u03c4n, \u03b7n) to (\u03c4, \u03b7) has also to be proved. Proceed as follows. Consider the space Dk = D([0, \u221e[, Rk), i.e. the space of functions 7 \ff : [0, \u221e) \u2192Rk that are right continuous and have a left limit at each t \u22650, and the space C1 = C ([0, \u221e[ , R). For y\u25e6\u2208C1, de\ufb01ne the hitting time e TB (y\u25e6) = inf {t > 0 : y\u25e6(t) = B} , and introduce the sets H = n y\u25e6\u2208C1 : TB (y\u25e6) = e TB (y\u25e6) o , Hk = n y\u25e6\u2208Ck : TBj \u0000y\u25e6 j \u0001 = e TBj \u0000y\u25e6 j \u0001 for all 1 \u2264j \u2264k o . The hitting time e TB de\ufb01nes the \ufb01rst time when a process reaches B, while the FPT TB is de\ufb01ned as the \ufb01rst time when a process crosses B. Denote by \u201c\u2192in Dk\u201d the convergence of a sequence of functions in Dk and by \u201c\u2192\u201d the ordinary convergence of a sequence of real numbers. To prove the main theorem, we need the following lemmas, whose proof are given in Section 5. Lemma 2. 
Let x\u25e6 n belong to D1 for n \u22651, and y\u25e6\u2208H with y\u25e6(0) < B. If x\u25e6 n \u2192y\u25e6in D1, then TB (x\u25e6 n) \u2192TB (y\u25e6). Lemma 3. Let x\u25e6 n belong to Dk for n \u22651, y\u25e6\u2208Hk with y\u25e6(0) < B. If x\u25e6 n \u2192y\u25e6in Dk, then (\u03c4 \u25e6 1;n, x\u25e6 n(\u03c4 \u25e6 1;n), \u03b7\u25e6 1;n) \u2192(\u03c4 \u25e6 1 , y\u25e6(\u03c4 \u25e6 1 ), \u03b7\u25e6 1). (10) The weak convergence of the multivariate process with reset and of its marked point process corresponds to the weak convergence of the \ufb01nite dimensional distributions of (\u03c4n, X\u2217 n(\u03c4n), \u03b7n) to (\u03c4, Y \u2217(\u03c4), \u03b7), where \u03c4n = (\u03c4i;n)l i=1, X\u2217 n(\u03c4n) = (X\u2217 n(\u03c4i;n))l i=1 , \u03b7n = (\u03b7i;n)l i=1, \u03c4 = (\u03c4i)l i=1, Y \u2217(t\u03c4) = (Y \u2217(\u03c4i))l i=1 and \u03b7 = (\u03b7i)l i=1, for any l \u2208N. We have Theorem 2 (Main theorem). The \ufb01nite dimensional distributions of (\u03c4n, X\u2217 n(\u03c4n), \u03b7n) converge weakly to those of (\u03c4, Y \u2217(\u03c4), \u03b7). The proof of Theorem 2 (cf. Section 5) uses the Skorohod\u2019s representation theorem [7] to switch the weak convergence of processes to almost sure convergence (strong convergence) in any time window between two consecutive passage times, which makes it possible to exploit Lemmas 2 and 3. As a consequence, the strong convergence of the processes implies the strong convergence of their FPTs. Remark 4.1. Theorem 2 holds for any multivariate jump process weakly converging to a continuous process characterized by simultaneous hitting and crossing times for each component, i.e. \u02dc TBj = TBj. Examples are di\ufb00usion processes and continuous processes with positive derivative at the epoch of the hitting time. Remark 4.2. Both the weak convergence of X\u2217 n and of its marked point process also hold when the reset of the crossing component j is not instantaneous, but happens with a delay \u2206j > 0, for 1 \u2264j \u2264k. This can be proved mimicking the proof of Theorem 2. 8 \f5. Proof of the main results Proof of Lemma 2. For each s < TB (y\u25e6), supt\u2264s y\u25e6(t) < B and since x\u25e6 n \u2192y\u25e6 uniformly on [0, s], also supt\u2264s x\u25e6 n(t) < B for n su\ufb03ciently large. This implies lim inf n\u2192\u221eTB (x\u25e6 n) \u2265s for all s < TB (y\u25e6) \u21d2 lim inf n\u2192\u221eTB (x\u25e6 n) \u2265TB (y\u25e6) . Because y\u25e6\u2208H we can \ufb01nd a sequence tk such that tk \u2193TB (y\u25e6) = e TB (y\u25e6) (with y\u25e6\u0010 e TB (y\u25e6) \u0011 = B) and y\u25e6(tk) > B for all k. Since x\u25e6 n (tk) \u2192y\u25e6(tk) for all k, it follows that TB (x\u25e6 n) \u2264tk for n su\ufb03ciently large and therefore \u2200k, lim sup n\u2192\u221eTB (x\u25e6 n) \u2264tk \u21d2 lim sup n\u2192\u221eTB (x\u25e6 n) \u2264TB (y\u25e6) . Proof of Lemma 3. If \u03b7\u25e6 1 = j, then \u03b7\u25e6 1;n = j for n large enough, since marginally x\u25e6 i;n \u2192y\u25e6 i;n for each component 1 \u2264i \u2264k. By Lemma 2 and since y\u25e6 j (0) < Bj by assumption, it follows \u03c4 \u25e6 1;n = TBj(x\u25e6 j;n) \u2192TBj(y\u25e6 j ) = \u03c4 \u25e6 1 . (11) Moreover, it holds |x\u25e6 i;n(\u03c4 \u25e6 1;n) \u2212y\u25e6 i (\u03c4 \u25e6 1 )| \u2264|x\u25e6 i;n(\u03c4 \u25e6 1;n) \u2212y\u25e6 i (\u03c4 \u25e6 1;n)| + |y\u25e6 i (\u03c4 \u25e6 1;n) \u2212y\u25e6 i (\u03c4 \u25e6 1 )|, (12) which goes to zero when n \u2192\u221e, for any 1 \u2264i \u2264k. 
Indeed, for each s < \u03c4 \u25e6 1 , the convergence of x\u25e6 i;n to y\u25e6 i on a compact time interval [0, s] implies the uniform convergence of x\u25e6 i;n to y\u25e6 i on [0, s]. Thus x\u25e6 i;n(\u03c4 \u25e6 1;n) \u2192y\u25e6 i (\u03c4 \u25e6 1;n). From (11) and since y\u25e6 i is continuous, y\u25e6 i (\u03c4 \u25e6 1;n) \u2192y\u25e6 i (\u03c4 \u25e6 1 ) when n \u2192\u221efor the continuous mapping theorem. Using the product topology on Dk, we have that x\u25e6 n \u2192y\u25e6 in Dk if x\u25e6 j;n \u2192y\u25e6 j in D1, for each 1 \u2264j \u2264k [37], implying the lemma. Denote E d = F two random variables that are identically distributed. Then Proposition 1. If a multivariate jump process Xn converges weakly to Y , then there exist a probability space (\u2126, F, P ) and random elements \u0010 f Xn \u0011\u221e n=1 and e Y in the Polish space Dk, de\ufb01ned on (\u2126, F, P ) such that Xn d = f Xn, X d = e Y and f Xn \u2192e Y a.s. as n \u2192\u221e. Proof. From its de\ufb01nition, Xn belongs to Dk, which is a Polish space with the Skorohod topology [38]. Then, the proposition follows applying the Skorohod\u2019s representation theorem [7]. 9 \fProof of Theorem 2 (main result). Applying Theorem 1 and Proposition 1 in any time window between two consecutive passage times, there exist f X\u2217 n and e Y \u2217such that f X\u2217 n d = X\u2217 n, e Y \u2217 d = Y \u2217and f X\u2217 n \u2192e Y \u2217a.s.. De\ufb01ne e \u03b7j;n, e \u03c4j;n from f X\u2217 n and e \u03b7j, e \u03c4j from e Y \u2217as done in Section 3. Assume e \u03b7m = j and thus e \u03b7m;n = j for n su\ufb03ciently large, due to the strong convergence of the processes. If (e \u03c4n, f X\u2217 n(e \u03c4n), e \u03b7n) \u2192(e \u03c4, e Y \u2217(e \u03c4), e \u03b7) a.s. (13) holds, we would have e \u03c4m;n = TBj \u0010 e X\u2217 j;n \u0011 d = TBj \u0000X\u2217 j;n \u0001 = \u03c4m;n, e \u03c4m = TBj \u0010 e Y \u2217 j \u0011 d = TBj \u0000Y \u2217 j \u0001 = \u03c4m, since f X\u2217 n d = X\u2217 n and e Y \u2217d = Y \u2217, which would also imply f X\u2217 n(e \u03c4m;n) d = X\u2217 n(\u03c4m;n) and e Y \u2217(e \u03c4m) d = Y \u2217(\u03c4m), for any 1 \u2264m \u2264l and l \u2208N, and thus the theorem. To prove (13), we proceed recursively in each time window: Step m = 1. By de\ufb01nition, e Y \u2217behaves like a multivariate di\ufb00usion Y in [0, e \u03c41[. Since each one-dimensional di\ufb00usion component e Yj crosses the level Bj in\ufb01nitely often immediately after e TB(e Yj), it follows TBj(e Yj) = e TBj(e Yj), for 1 \u2264j \u2264 k and thus f Y \u2217\u2208Hk. Since also e Y \u2217(0) < B by assumption, we can apply Lemma 3 and obtain the convergence of the triplets (10) with notreset \ufb01ring components. This convergence also holds if we reset the \ufb01ring components: assume e \u03b71 = j and then e \u03b71;n = j for n large enough. Then e X\u2217 j;n(e \u03c41;n) = r0;e \u03b71;n = e Y \u2217 j (e \u03c41), (14) and thus f X\u2217 n(e \u03c41;n) \u2192e Y \u2217(e \u03c41), implying (13). Step m = 2. On [e \u03c41;n, e \u03c41;n + e \u03c42;n[, f X\u2217 n is obtained by conditionally independence from f X\u2217 n on [0, e \u03c41;n[, with initial value e x0;n = f X\u2217(\u03c41;n). Similarly, on [e \u03c41, e \u03c41 + e \u03c42[, e Y \u2217is obtained by conditionally independence from e Y \u2217on [e \u03c41, e \u03c41 +e \u03c42[, with initial value e y0 = e Y \u2217(e \u03c41). 
From step m = 1, f X\u2217(e \u03c41;n) \u2192e Y \u2217(e \u03c41), and since e Y \u2217(e \u03c41) < B and e Y \u2217\u2208Hk, we can apply Lemma 3. Then, (13) follows noting that (10) also holds if we reset the \ufb01ring components e \u03b72;n and e \u03b72, as done in (14). Step m > 2 It follows mimicking Step 2. 6. Application to neural network modeling Membrane potential dynamics of neurons are determined by the arrival of excitatory and inhibitory postsynaptic potentials (PSPs) inputs that increase or decrease the membrane voltage. Di\ufb00erent models account for di\ufb00erent levels of complexity in the description of membrane potential dynamics. In LIF models, 10 \fthe membrane potential of a single neuron evolves according to a stochastic di\ufb00erential equation, with a drift term modeling the neuronal (deterministic) dynamics, e.g. input signals, spontaneous membrane decay, and the noise term accounting for random dynamics of incoming inputs. The \ufb01rst LIF model was proposed by Stein [39] to model the \ufb01ring activity of single neurons which receive a very large number of inputs from separated sources, e.g. Purkinjie cells. The membrane potential evolution is given by (1) with k = 1 when Xn(t) is less than a \ufb01ring threshold B > x0;n, considered constant for simplicity. Each event of the excitatory process N + n (t) depolarizes the membrane potential by an > 0 and analogously the inhibition process N \u2212 n (t) produces a hyperpolarization of size bn < 0. The values an and bn represent the values of excitatory and inhibitory PSPs, respectively. Between events of input processes N + n and N \u2212 n , Xn decays exponentially to its resting potential x0;n with time constant \u03b8. The \ufb01ring mechanism was modeled as follows: a neuron releases a spike when its membrane potential attains the threshold value. Then the membrane potential is instantaneously reset to its starting value and the dynamics restarts. The intertime between two consecutive spikes, called interspike intervals (ISIs), are modeled as FPTs of the process through the boundary. Since the ISIs of the single neuron are independent and identically distributed, the underlying process is renewal. In the following subsections we extend the one-dimensional Stein model to the multivariate case to describe a neural network. We interpret all previous processes and theorems in the framework of neuroscience. 6.1. Multivariate Stein model When k > 1, (1) represents a multivariate generalization of the Stein model for the description of the sub-threshold membrane potential evolution of a network of k neurons like Purkinjie cells. The synaptic inputs impinging on neuron j are modeled by Nj;n, while MA;n models the synaptic inputs impinging on a cluster of neurons belonging to a set A. The presence of MA;n allows for simultaneous jumps for the corresponding set of neurons A and determines a dependence between their membrane potential evolutions. We call this kind of structure cluster dynamics and we limit our paper to this type of dependence between neurons. Note that (1) might be rewritten in a more compact way, summing the Poisson processes with the same jump amplitudes. However, we prefer to distinguish between N and M, to highlight their di\ufb00erent role in determining the dependence structure. To simplify the notation, we assume \u03b8 to be the same in all neurons. This is a common hypothesis since the resistance properties of the neuronal membrane are similar for di\ufb00erent neurons [40]. 
As for the univariate Stein, this proposed multivariate LIF model catches some physiological features of the neurons, namely the spontaneous decay of the membrane potential in absence of inputs and the e\ufb00ect of PSPs on the membrane potentials. 6.2. Multivariate OU to model sub-threshold dynamics of neural network To make the multivariate Stein model mathematically tractable, we perform a di\ufb00usion limit. Theorem 1 guarantees that a multivariate OU process (7) 11 \fcan be used to approximate a multivariate Stein when the frequency of PSPs increases and the contribution of the single postsynaptic potential becomes negligible with respect to the total input, i.e. for neural networks characterized by a large number of synapses. Being the di\ufb00usion limit of the multivariate Stein model, the OU inherits both its biological meaning and dependence structure. Indeed they have the same membrane time constant \u03b8, which is responsible for the exponential decay of the membrane potential. Moreover, the terms \u00b5\u00b7 and \u03c3\u00b7 of the OU are given by (4) and (5) respectively, and thus they incorporate both frequencies and amplitudes of the jumps of the Poisson processes underlying the multivariate Stein model. Finally, if some neurons j and l belong to the same cluster A, their dynamics are related. This dependence is caught by the term \u03c32 A in the component \u03c8jl of the covariance matrix \u03c8, which is not diagonal. This highlights the importance of having correlated noise in the model, and it represents a novelty in the framework of neural network models. Indeed, the dependence is commonly introduced in the drift term, motivated by direct interactions between neurons, while the noise components are independent, see e.g. [33, 34]. Here we ignore this last type of dependence to focus on cluster dynamics, but the proposed model can be further generalized introducing direct interactions between the ith and jth components, as noted in Remark 2.5. 6.3. Firing neural network model and convergence of the spike trains In Section 3 we introduce the necessary mathematical tools to extend the single neuron \ufb01ring mechanism to a network of k neurons. Consider the subthreshold membrane potential dynamics of a neural network described by a multivariate Stein model Xn. A neuron j, 1 \u2264j \u2264k releases a spike when the membrane potential attains its boundary level Bj. Whenever it \ufb01res, its membrane potential is instantaneously reset to its resting potential r0j < Bj and then its dynamics restart. Meanwhile, the other components are not reset but continue their evolutions. Since the inputs are modeled by stationary Poisson processes, the ISIs within each spike train are independent and identically distributed. Thus the single neuron \ufb01ring mechanism holds for each component, which is described as a one-dimensional renewal Stein model. The \ufb01ring neural network model is described by a multivariate process behaving as the multivariate Stein process Xn in each time window between two consecutive passage times. For this reason, we call this model, multivariate \ufb01ring Stein model and we denote it X\u2217 n. The ISIs of the components of the multivariate processes are neither independent nor identically distributed. We identify the spike epochs of the jth component of the Stein process, as the FPT of Xj,n through the boundary Bj. The set of spike trains of all neurons corresponds to a multivariate point process with events given by the spikes. 
An alternative way of considering the simultaneously recorded spike trains is to overlap them and mark each spike with the component which generates it. Thus, we obtain the univariate point process \u03c4n with marked events \u03b7n. The objects Y \u2217, \u03c4 and \u03b7 are similarly de\ufb01ned for the multivariate OU process Y , and we call Y \u2217multivariate \ufb01ring OU process. Hence the models X\u2217 n and Y \u2217describe the membrane potential 12 \fdynamics of a network of neurons with reset mechanism after a spike and thus are multivariate LIF models. Finally, Theorem 2 implies the convergence of the multivariate \ufb01ring processes X\u2217 n to Y \u2217and the convergence of the collection of marked spike train (\u03c4n, \u03b7n) to (\u03c4, \u03b7). This guarantees that the neural code encoded in the FPTs is not lost in the di\ufb00usion limit. 6.4. Discussion As application of our mathematical \ufb01ndings, we developed a LIF model able to catch dependence features between spike trains in a neural networks characterized by large number of inputs from surrounding sources. To make the model mathematically tractable, we introduced three assumptions: each neuron is identi\ufb01ed with a point; Poisson inputs in (1) are independent; a \ufb01ring neuron is instantaneously reset to its starting value. The \ufb01rst assumption characterizes univariate LIF models and has been recently assumed for twocompartmental neuronal model [41]. We are aware that the Hodgkin-Huxley (HH) model and its variants are more biologically realistic than LIF. Indeed the HH model is a deterministic, macroscopic model describing the coupled evolution of the neural membrane potential and the averaged gating dynamics of Sodium and Potassium ion channels through a system of non-linear ordinary di\ufb00erential equations [42]. However, a mathematical relationship between Morris-Lecar model, i.e. a simpli\ufb01ed version of HH model, and LIF models has been recently shown [43]. This gives a (further) biological support to the use of LIF models and allows to avoid mathematical di\ufb03culties and computationally expensive numerical implementations which are required for HH models. The second assumption grounds on the description of the activity of each synapsis through a point process and it is also common to HH models, for which ion channels are modeled by independent Markov jump processes [44]. Physiological observations suggest that the behavior of each synapsis is weakly correlated with that of the others. Thanks to Palm-Kintchine Theorem, the overall neuron\u2019s input is described by two Poisson processes, one for the global inhibition and the other for the global excitation [45]. The third assumption has been introduced to simplify the notation, but it is not restrictive. Remark 4.2 guarantees the convergence of the \ufb01ring process and of the spike times in presence of delayed resets. Thus a refractory period can be introduced after each spike, increasing the biological realism of the model. Indeed after a spike, there is a time interval, called absolute refractory period, during which the spiking neuron cannot \ufb01re (while the others can), even in presence of strong stimulation [40]. Having a multivariate LIF model for neural networks, several researches will be possible. First, one can simulate dependent spike trains from neural networks with known dependence structures. 
This allows to compare and test the reliability of di\ufb00erent existing statistical techniques for the detection of dependence structure between neurons, see e.g. [22, 46, 47]. Moreover, inspired by the techniques for the FPT problem of univariate LIF models, one can develop analytical, numerical and statistical methods for the multivariate OU (or other 13 \fdi\ufb00usion processes) and its FPT problem, see e.g. [22, 48]. Furthermore, more biologically realistic LIF models for neural networks can be considered. Indeed Theorem 2 can be applied to more general models such as Stein processes with direct interactions between neurons, Stein with reversal potential [49] or birth and death processes with reversal potential [50]. Finally, the application of our results in the neuroscience framework is not limited to the case of LIF models. Thanks to Remark 4.1, Theorem 2 can be applied to processes obtained through di\ufb00usion and \ufb02uid limits, i.e. both LIF and HH models. Since the HH model can be obtained as a \ufb02uid limit [3], once a proper reset and \ufb01ring mechanism is introduced, the convergence of the FPTs follows straightforwardly from our results. Acknowledgements This paper is the natural extension of the researches started and encouraged by Professor Luigi Ricciardi on stochastic Leaky Integrate-and-Fire models for the description of \ufb01ring activity of single neurons. This work is dedicated to his memory. The authors are grateful to Priscilla Greenwood for useful suggestions. L.S. was supported by University of Torino (local grant 2012) and by project AMALFI Advanced Methodologies for the AnaLysis and management of the Future Internet (Universit` a di Torino/Compagnia di San Paolo). The work is part of the Dynamical Systems Interdisciplinary Network, University of Copenhagen. Appendix A. Proofs of Section 2 To prove Lemma 1, we \ufb01rst need to provide the characteristic triplet of Zn, as suggested in [6]. The characteristic function of Zn(t), is: \u03c6Zn(t)(u) = E \uf8ee \uf8f0i exp \uf8f1 \uf8f2 \uf8f3 k X j=1 ujZj;n(t) \uf8fc \uf8fd \uf8fe \uf8f9 \uf8fb, (A.1) where u = (u1, . . . , uk) \u2208Rk. We can write: k X j=1 ujZj;n(t) = k X j=1 uj \u0002 \u2212\u0393j;nt + \u0000anN + j;n(t) + bnN \u2212 j;n(t) \u0001\u0003 + X A\u2208A GA \u0010 anM + A;n(t) + bnM \u2212 A;n(t) \u0011 , (A.2) where GA = P j\u2208A uj. Plugging (A.2) in (A.1) and since the processes in (A.2) are independent and Poisson distributed for each n, we get the characteristic function \u03c6Zn(t)(u) = exp{t\u03c1n(u)}, 14 \fwhere \u03c1n (u) = \u2212i k X j=1 uj\u0393j;n + k X j=1 \u03b1j;n \u0000eiujan \u22121 \u0001 + k X j=1 \u03b2j;n \u0000eiujbn \u22121 \u0001 + X A\u2208A \u03bbA;n \u0000eiGAan \u22121 \u0001 + X A\u2208A \u03c9A;n \u0000eiGAbn \u22121 \u0001 . In [6], convergence results are proved for \u03c1n(u) given by \u03c1n (u) = iu \u00b7 bn \u22121 2u \u00b7 cn \u00b7 u + Z Rk\\0 \u0000eiu\u00b7x \u22121 \u2212iu \u00b7 h (x) \u0001 \u03bdn (dx) , (see Corollary II.4.19 in [6]), where u\u00b7v = Pk j=1 ujvj and u\u00b7d\u00b7v = Pk j,l=1 ujdjlvl. The vector bn, the matrix cn and the L\u00b4 evy measure \u03bdn are known as characteristic triplet of the process. Here h : Rk \u2192Rk is an arbitrary truncation function that is the same for all n, is bounded with compact support and satis\ufb01es h (x) = x in a neighborhood of 0. In our case, the triplet is 1. 
\u03bdn: \ufb01nite measure concentrated on \ufb01nitely many points, \u03bdn ({x : xj = an}) = \u03b1j;n, (1 \u2264j \u2264k, \u0338= 0) ; \u03bdn ({x : xj = bn}) = \u03b2j;n, (1 \u2264j \u2264k, \u0338= 0) ; \u03bdn ({x : xj = an} for j \u2208A) = \u03bbA;n, (A \u2208A, \u0338= 0) ; \u03bdn ({x : xj = bn} for j \u2208A) = \u03c9A;n, (A \u2208A, \u0338= 0) . All the non-speci\ufb01ed xj are set to 0, i.e. {x : xj = an} = {x : xj = an, xl = 0 for l \u0338= j}. Since an \u21920 and bn \u21920 when n is su\ufb03ciently large, \u03bdn is concentrated on a \ufb01nite subset of the neighborhood of 0, where h (x) = x. Without loss of generality, we may therefore, and shall, assume that h (x) = x. 2. cn = 0. 3. bn = \u2212\u0393n + R h (x) \u03bdn (dx)=0. Indeed, using h (x) = x, we have bj;n = \u2212\u0393j;n + (\u03b1j;nan + \u03b2j;nbn) + X A\u2208A 1{j\u2208A} (\u03bbA;nan + \u03c9A;nbn) = 0. Having provided the triplet (bn, cn, \u03bdn), we can prove Lemma 1 as follows Proof of Lemma 1. Use Theorem VII.3.4 in [6]. In our case, the weak convergence of Zn to W follows if i. bn \u21920; ii. e cjl;n := R xjxl \u03bdn (dx) \u2192\u03c8jl for 1 \u2264j, l \u2264k; iii. R g d\u03bdn \u21920 for all g \u2208C1 \u0000Rk\u0001 ; 15 \fiv. Bn t = tbn and \u02dc Cn t = te cn converge uniformly to Bt and \u02dc Ct respectively, on any compact interval [0, t]. Here C1 \u0000Rk\u0001 is de\ufb01ned in VII.2.7 in [6]. Since Bn t = tbn, the uniform convergence is evident. Furthermore, e Cn t = te cn converges uniformly provided that condition [ii] holds. To prove [ii], we rewrite e cjl;n as follows e cjl;n = k X i=1 \u00001{i=l=j}\u03b1j;na2 n + 1{i=l=j}\u03b2j;nb2 n \u0001 + X A\u2208A 1{j,l\u2208A} \u0000\u03bbA;na2 n + \u03c9A;nb2 n \u0001 = 1{j=l}\u03c32 j;n + X A\u2208A 1{j,l\u2208A}\u03c32 A;n. (A.3) Then, e cjl;n \u2192\u03c8jl follows from the convergence assumptions (2), (3), (4), (5). Using Theorem VII.2.8 in [6], we may show [iv] considering g \u2208C3 \u0000Rk\u0001 , i.e. the space of bounded and continuous function g : Rk \u2192R such that g(x) = o \u0010 |x|2\u0011 as x \u21920. Here, |x| is the Euclidean norm. For g \u2208C3 \u0000Rk\u0001 and \u03b5 > 0, we have |g(x)| \u2264\u03b5 |x|2 for |x| su\ufb03ciently small. Then \f \f \f \f Z g d\u03bdn \f \f \f \f \u2264\u03b5 Z |x|2 d\u03bdn \u2192\u03b5 k X i=1 \u03c8ii by (A.3), and R g d\u03bdn \u21920 follows. Indeed, since W is continuous, the L\u00b4 evy measure \u03bd for W is the null measure. Proof of Theorem 1. The jth component of Xn can be rewritten in terms of the jth component of Zn as Xj;n(t) = x0j;n + Z t 0 \u0014 \u2212Xj;n(s) \u03b8 + \u0393j;n \u0015 ds + Zj;n(t), 1 \u2264j \u2264k. (A.4) Solving it, we get Xj;n(t) = x0j;ne\u2212t \u03b8 + Zj;n(t) \u22121 \u03b8 Z t 0 e\u2212(t\u2212s)/\u03b8Zj;n(s)ds, 1 \u2264j \u2264k. Hence, Xn is a continuous functional of both x0;n and Zn. Therefore, due to the continuous mapping theorem, the weak convergence of x0;n (for hypothesis) and Zn (from Lemma 1) implies the weak convergence of Xn. Moreover, (A.4) guarantees that the limit process of Xn is that de\ufb01ned by (7)." + } + ], + "Irene Tubikanec": [ + { + "url": "http://arxiv.org/abs/2003.10193v2", + "title": "Qualitative properties of numerical methods for the inhomogeneous geometric Brownian motion", + "abstract": "We provide a comparative analysis of qualitative features of different\nnumerical methods for the inhomogeneous geometric Brownian motion (IGBM). 
The\nconditional and asymptotic mean and variance of the IGBM are known and the\nprocess can be characterised according to Feller's boundary classification. We\ncompare the frequently used Euler-Maruyama and Milstein methods, two\nLie-Trotter and two Strang splitting schemes and two methods based on the\nordinary differential equation (ODE) approach, namely the classical Wong-Zakai\napproximation and the recently proposed log-ODE scheme. First, we prove that,\nin contrast to the Euler-Maruyama and Milstein schemes, the splitting and ODE\nschemes preserve the boundary properties of the process, independently of the\nchoice of the time discretisation step. Second, we derive closed-form\nexpressions for the conditional and asymptotic means and variances of all\nconsidered schemes and analyse the resulting biases. While the Euler-Maruyama\nand Milstein schemes are the only methods which may have an asymptotically\nunbiased mean, the splitting and ODE schemes perform better in terms of\nvariance preservation. The Strang schemes outperform the Lie-Trotter\nsplittings, and the log-ODE scheme the classical ODE method. The mean and\nvariance biases of the log-ODE scheme are very small for many relevant\nparameter settings. However, in some situations the two derived Strang\nsplittings may be a better alternative, one of them requiring considerably less\ncomputational effort than the log-ODE method. The proposed analysis may be\ncarried out in a similar fashion on other numerical methods and stochastic\ndifferential equations with comparable features.", + "authors": "Irene Tubikanec, Massimiliano Tamborrino, Petr Lansky, Evelyn Buckwar", + "published": "2020-03-23", + "updated": "2021-04-01", + "primary_cat": "math.NA", + "cats": [ + "math.NA", + "cs.NA", + "60H10, 60H35, 65C20, 65C30" + ], + "main_content": "Introduction The inhomogeneous geometric Brownian motion (IGBM), described by the It\u02c6 o stochastic di\ufb00erential equation (SDE) dY (t) = \u0012 \u22121 \u03c4 Y (t) + \u00b5 \u0013 dt + \u03c3Y (t)dW(t), t \u22650, Y (0) = Y0, is frequently applied in mathematical and computational \ufb01nance, neuroscience and other \ufb01elds. In particular, it is often used to describe price \ufb02uctuations in \ufb01nance [14, 62] or changes in the neuronal membrane voltage in neuroscience [21]. This process is also known as geometric Brownian motion (GBM) with a\ufb03ne drift [38], geometric Ornstein-Uhlenbeck (OU) process [29] or mean reverting GBM [53] in real option theory, as Brennan-Schwarz model [9, 16] in the interest rate literature, as GARCH model [5, 37] in stochastic volatility and energy markets, as Lognormal di\ufb00usion with exogenous factors [26] in growth analysis and forecasting or as reciprocal gamma di\ufb00usion in [36]. The IGBM is a multiplicative noise process, characterised by an inhomogeneous drift term, de\ufb01ned through \u00b5 \u2208R, and can be seen as an illustrative equation for this class of SDEs. In particular, it is a member of the Pearson di\ufb00usion class [23]. Di\ufb00erently from other well-known Pearson di\ufb00usions, such as the OU process [4, 34] and the square-root process [17, 19, 22, 34], the transition density of the IGBM does not have a practical closed-form expression [62] and an exact simulation method is not available. Hence, we need to rely on numerical methods that accurately reproduce the features of the process, making its analysis and investigation via simulations possible and reliable. 
A large part of the area of (stochastic) numerical analysis is devoted to convergence of numerical methods in a suitable sense. These are limit results for the time discretisation step going to zero over a \ufb01nite interval and, of course, numerical methods which do not converge should not be used. Nevertheless, in practice, a strictly positive time step is required. In consequence, the numerical method can be viewed as the solution of a discrete dynamical system, which may or may not have the same properties and behaviour as the solution of the original problem [27]. In the worst case, although the method converges, the discretisation step may alter the essential properties of the model, making the numerical method practically useless or very ine\ufb03cient. The purpose of this article is to analyse and compare di\ufb00erent numerical methods regarding their ability to preserve qualitative features of the IGBM for a \ufb01xed time discretisation step. In particular, we focus on methods based on the splitting and ordinary di\ufb00erential equation (ODE) approaches, and on their comparison with the commonly used Euler-Maruyama and Milstein schemes. The idea behind the splitting approach is to split the equation of interest into explicitly solvable subequations, and to apply a proper composition of the resulting exact solutions. A standard procedure is the Lie-Trotter composition [58], and a less commonly analysed method is the Strang approach [56]. We refer to [6, 7, 43] for an exhaustive discussion of splitting methods for broad classes of ODEs and to [1, 2, 8, 35, 45, 46, 47, 48, 54] for extensions to SDEs. Here, we derive two Lie-Trotter and two Strang splitting schemes for the IGBM. While the Lie-Trotter schemes coincide with the methods discussed in [47], the Strang schemes have not been considered before. The ODE approach [60, 61] is based on the idea of linking Stratonovich calculus with ODE tools. To construct higher-order schemes, this approach has been extended by de\ufb01ning the underlying ODE via a truncated exponential Lie series expansion, where iterated integrals of Brownian motion and time are approximated by their means, conditioned on the given increments of the Wiener process [15, 40]. Here, we consider the classical method [60], sometimes called piecewise linear method, and the scheme recently introduced by Foster et al. [24]. They proposed a pathwise polynomial approximation method of the Brownian motion, which was used to estimate third order iterated integrals of Brownian motion and time. Incorporating these results into the ODE approach yielded a new numerical method for the IGBM, extending the classical ODE method. Among the properties of the IGBM, we are interested in both its conditional and asymptotic features (mean, variance and stationary density) and its boundary behaviour. The conditional and asymptotic mean and variance of this process are explicitly known. Hence, our \ufb01rst goal is to analyse whether the numerical methods accurately reproduce them. In particular, we derive closed-form expressions for the conditional and asymptotic means and variances of the considered numerical methods. These quantities di\ufb00er from the true ones. For this reason, we compare 2 \fthe resulting explicit biases. Knowing them is particularly relevant because it allows for a direct control of the respective simulation accuracy through the time discretisation step. This may be particularly bene\ufb01cial, for example, in di\ufb00erent statistical inference tools. 
Other features we are interested in are the boundary properties of the IGBM. Depending on the parameter \u00b5, the IGBM possesses di\ufb00erent properties at the boundary zero, according to Feller\u2019s classi\ufb01cation [31]. Our second goal is to analyse whether the numerical methods preserve them. This is particularly important, since the nature of a boundary may force the process to change its behaviour near or at the boundary. While frequently applied numerical methods, such as the Euler-Maruyama, Milstein or higher-order It\u02c6 o-Taylor approximation schemes, may fail in meeting such conditions [3, 41, 47], we prove that the splitting and ODE schemes preserve them. While Feller\u2019s boundary classi\ufb01cation is a standard concept in the \ufb01eld of stochastic analysis, it is not so often adopted as a qualitative feature in the analysis of numerical methods. An exception constitutes the topic of positivity preservation, often studied in terms of the square-root process [3, 30, 41, 46] and the domain-invariance [25, 39, 49, 55]. For an investigation of these issues related to splitting methods, we refer to [39, 46]. For a discussion of Feller\u2019s classi\ufb01cation in the context of splitting schemes, we refer to [47], where the focus lies on proving convergence results and only Lie-Trotter compositions are considered. If the parameter \u00b5 = 0, the IGBM coincides with the well-known GBM [4, 42], which has often been used as a test equation in the \ufb01eld of stochastic linear stability analysis in the mean-square or almost sure sense [11, 28, 51]. This theory has been introduced by Mitsui and Saito [50, 51], based on stability theory in the sense of Lyapunov [32], and has been extended to systems of SDEs in [10, 12, 52, 57]. Since the standard setting of this approach requires a constant equilibrium solution for which both the drift and di\ufb00usion components become zero, it cannot be applied to the IGBM. Nevertheless, known results for the Euler-Maruyama and Milstein schemes applied to the GBM are covered by our study as a special case. Thus, the results presented in this article are also related to stochastic stability analysis for SDEs with inhomogeneous drift coe\ufb03cients. The paper is organised as follows. In Section 2, we introduce the IGBM and recall its properties. In Section 3, we provide a brief account of the splitting and ODE approaches, and introduce the considered numerical schemes for the IGBM. In Section 4, we provide closed-form expressions for the conditional and asymptotic means and variances of the investigated schemes, analyse the resulting biases and discuss the boundary preservation. In Section 5, we illustrate the theoretical results of Section 4 through a series of simulations. Moreover, we illustrate the strong (mean-square) convergence rates of the di\ufb00erent numerical methods and investigate their required computational e\ufb00orts. In addition, we analyse their ability to approximate the underlying stationary density and study their behaviour at the lower boundary. Conclusions are reported in Section 6. 2 The IGBM and its properties The IGBM is described by the It\u02c6 o SDE dY (t) = \u0012 \u22121 \u03c4 Y (t) + \u00b5 \u0013 | {z } :=F (Y (t)) dt + \u03c3Y (t) | {z } :=G(Y (t)) dW(t), t \u22650, Y (0) = Y0, (1) where \u03c4, \u03c3 > 0, \u00b5 \u2208R and W = (W(t))t\u22650 is a standard Wiener process de\ufb01ned on the probability space (\u2126, F, P) with a \ufb01ltration F = (F(t))t\u22650 generated by W. 
The initial value Y0 is either a deterministic non-negative constant or an F(0)-measurable non-negative random variable with \ufb01nite second moment. Since (1) is a linear and autonomous SDE, a unique strong solution process Y = (Y (t))t\u22650 exists [4, 42]. The solution of the homogeneous SDE (if \u00b5 = 0) corresponds to the well-known GBM. The solution of the inhomogeneous equation can be expressed in terms of the embedded GBM. In particular, applying the variation of constants formula [42] to (1) yields Y (t) = e\u2212( 1 \u03c4 + \u03c32 2 )t+\u03c3W (t) \u0012 Y0 + \u00b5 Z t 0 e( 1 \u03c4 + \u03c32 2 )s\u2212\u03c3W (s)ds \u0013 . (2) 3 \fConditional and asymptotic mean and variance Since Y0 has \ufb01nite second moment, the mean and variance of the strong solution process Y , conditioned on the initial value Y0, exist. They are explicitly known [5, 21, 62] and given by E[Y (t)|Y0] = Y0e\u22121 \u03c4 t + \u00b5\u03c4(1 \u2212e\u22121 \u03c4 t), (3) Var(Y (t)|Y0) = \uf8f1 \uf8f4 \uf8f4 \uf8f4 \uf8f4 \uf8f4 \uf8f4 \uf8f4 \uf8f4 \uf8f4 \uf8f4 \uf8f4 \uf8f2 \uf8f4 \uf8f4 \uf8f4 \uf8f4 \uf8f4 \uf8f4 \uf8f4 \uf8f4 \uf8f4 \uf8f4 \uf8f4 \uf8f3 e\u22121 \u03c4 t \u00002\u00b5[tY0 \u2212\u03c4Y0 \u2212t\u00b5\u03c4] + Y 2 0 \u0001 \u2212e\u22122 \u03c4 t(Y0 \u2212\u00b5\u03c4)2 + (\u00b5\u03c4)2, if \u03c32\u03c4 = 1, e\u22121 \u03c4 t (4\u00b5\u03c4[\u00b5\u03c4 \u2212Y0]) \u2212e\u22122 \u03c4 t (Y0 \u2212\u00b5\u03c4)2 +2\u00b52\u03c4t \u22123(\u00b5\u03c4)2 + 2\u00b5\u03c4Y0 + Y 2 0 , if \u03c32\u03c4 = 2, (\u00b5\u03c4)2\u03c32\u03c4 2\u2212\u03c32\u03c4 + 2\u03c4\u03c32 (Y0\u2212\u00b5\u03c4)\u00b5\u03c4 1\u2212\u03c32\u03c4 e\u22121 \u03c4 t \u2212e\u22122 \u03c4 t(Y0 \u2212\u00b5\u03c4)2 +e(\u03c32\u22122 \u03c4 )t h Y 2 0 \u22122Y0\u00b5\u03c4 1\u2212\u03c32\u03c4 + 2(\u00b5\u03c4)2 (2\u2212\u03c32\u03c4)(1\u2212\u03c32\u03c4) i , otherwise. (4) Since \u03c4 > 0, from (3), it follows that the asymptotic mean of Y exists. It is given by E[Y\u221e] := lim t\u2192\u221eE[Y (t)|Y0] = \u00b5\u03c4. (5) From (4), it follows that, under the condition \u03c32\u03c4 < 2, the asymptotic variance of Y exists. It is given by Var(Y\u221e) := lim t\u2192\u221eVar(Y (t)|Y0) = (\u00b5\u03c4)2 2 \u03c32\u03c4 \u22121. (6) Boundary properties Depending on the parameter \u00b5, the IGBM possesses di\ufb00erent properties at the boundary 0 according to Feller\u2019s boundary classi\ufb01cation [31]. In particular, if \u00b5 = 0 and Y0 > 0, the boundary 0 is unattainable and attracting, i.e., the process cannot reach 0 in \ufb01nite time, but is attracted to it as time tends to in\ufb01nity. In terms of linear stochastic stability analysis, this means that the equilibrium solution 0 is asymptotically almost sure stable, since P(limt\u2192\u221eY (t) = 0|Y0 > 0) = 1. In the case that \u00b5 = Y0 = 0, the process is absorbed at the boundary immediately. If \u00b5 > 0, then 0 is an entrance boundary, i.e., the process cannot reach the boundary in \ufb01nite time if Y0 > 0 or it immediately leaves 0 and stays above it if Y0 = 0. If \u00b5 < 0, the boundary is of exit type, i.e., the process can reach the boundary in \ufb01nite time and, as soon as it attains the boundary, it leaves [0, +\u221e) and cannot return into it. In many applications the process is stopped when it reaches an exit boundary, such that its state space is [0, +\u221e). 
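As an aside to the moment formulas above, the following sketch is a direct transcription of (3)-(6) into code, restricted to the general case $\sigma^2\tau \notin \{1, 2\}$ of (4); it makes no claim of numerical robustness near these special cases and the function name is an assumption introduced here.

```python
import numpy as np

def igbm_moments(t, y0, tau, mu, sigma):
    """Conditional mean/variance (3)-(4) and asymptotic values (5)-(6) of the IGBM."""
    s2t = sigma**2 * tau
    mean_t = y0 * np.exp(-t / tau) + mu * tau * (1.0 - np.exp(-t / tau))       # eq. (3)
    var_t = ((mu * tau)**2 * s2t / (2.0 - s2t)                                  # eq. (4), general case
             + 2.0 * tau * sigma**2 * (y0 - mu * tau) * mu * tau / (1.0 - s2t)
               * np.exp(-t / tau)
             - np.exp(-2.0 * t / tau) * (y0 - mu * tau)**2
             + np.exp((sigma**2 - 2.0 / tau) * t)
               * (y0**2 - 2.0 * y0 * mu * tau / (1.0 - s2t)
                  + 2.0 * (mu * tau)**2 / ((2.0 - s2t) * (1.0 - s2t))))
    mean_inf = mu * tau                                                         # eq. (5)
    var_inf = (mu * tau)**2 / (2.0 / s2t - 1.0) if s2t < 2 else np.inf          # eq. (6)
    return mean_t, var_t, mean_inf, var_inf
```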
Feller\u2019s boundary classi\ufb01cation is based on the idea of transforming the one-dimensional di\ufb00usion into a Wiener process, \ufb01rst by a change of space (through the scale density) and second by a change of time (through the speed density). The scale and speed densities are given by s(y) := e \u2212 y R y0 2F (z) G2(z) dz = s0e2\u00b5/\u03c32yy2/\u03c32\u03c4, s0 = e\u22122\u00b5/\u03c32y0y\u22122/\u03c32\u03c4 0 , m(y) := 1 G2(y)s(y) = m0y\u2212(2+2/\u03c32\u03c4)e\u22122\u00b5/\u03c32y, m0 = s\u22121 0 \u03c3\u22122, respectively, where y0 > 0 and F and G denote the drift and di\ufb00usion coe\ufb03cients de\ufb01ned in (1). Further, the scale function is de\ufb01ned by S[x0, x] := Z x x0 s(y) dy, where x0 > 0. For the IGBM, the nature of the boundary 0 is uniquely determined by the three quantities S(0, x] := lim x0\u21920 S[x0, x], \u03a30 := Z \u03f5 0 S(0, x]m(x) dx, N0 := Z \u03f5 0 S[x, \u03f5]m(x) dx, for an arbitrary \u03f5 > 0. If \u00b5 = 0, then S(0, x] < \u221e, \u03a30 = \u221eand N0 = \u221e. If \u00b5 > 0, then S(0, x] = \u221e, \u03a30 = \u221eand N0 < \u221e. If \u00b5 < 0, then S(0, x] < \u221e, \u03a30 < \u221eand N0 = \u221e. This implies the di\ufb00erent types of boundary behaviour explained above, see Table 6.2 in [31]. According to this classi\ufb01cation, we de\ufb01ne the following properties, which are satis\ufb01ed by the IGBM: 4 \f\u2022 Unattainable property: If \u00b5 \u22650, then P(Y (t) > 0 \u2200t \u22650|Y0 > 0) = 1. \u2022 Absorbing property: If \u00b5 = 0, then P(Y (t) = 0 \u2200t \u22650|Y0 = 0) = 1. \u2022 Entrance property: If \u00b5 > 0, then P(Y (t) > 0 \u2200t > 0|Y0 = 0) = 1. \u2022 Exit property: If \u00b5 < 0, then P(Y (t) < 0 \u2200t > s|Y (s) \u22640) = 1. 3 Numerical methods for the IGBM Consider a discretised time interval [0, tmax], tmax > 0, with equidistant time steps \u2206= ti \u2212ti\u22121, i = 1, ..., N, N \u2208N, t0 = 0 and tN = tmax. We denote by e Y (ti) a numerical realisation of the process Y at the discrete time points ti = i\u2206, where e Y (t0) := Y0. Moreover, we denote by \u03bei\u22121 := W(ti) \u2212W(ti\u22121) \u223cN(0, \u2206), i = 1, ..., N, the Wiener increments which are independent and identically distributed (iid) normal random variables with null mean and variance \u2206. In the following, we recall di\ufb00erent numerical methods used to generate values e Y (ti) of the IGBM. 3.1 It\u02c6 o-Taylor expansion approach The most popular approach to derive numerical methods for SDEs is to use appropriate truncations of the It\u02c6 o-Taylor series expansion [33, 45]. 3.1.1 Euler-Maruyama and Milstein schemes Two of the most well-known methods in this class are the Euler-Maruyama and the Milstein schemes. The Euler-Maruyama method yields trajectories of the IGBM through the iteration e Y E(ti) = e Y E(ti\u22121) + \u2206 \u0012 \u22121 \u03c4 e Y E(ti\u22121) + \u00b5 \u0013 + \u03c3 e Y E(ti\u22121)\u03bei\u22121. (7) This method is mean-square convergent of order 1/2. This rate can be increased by taking into account additional terms of the It\u02c6 o-Taylor expansion. In particular, the Milstein method yields trajectories of the IGBM via e Y M(ti) = e Y M(ti\u22121) + \u2206 \u0012 \u22121 \u03c4 e Y M(ti\u22121) + \u00b5 \u0013 + \u03c3 e Y M(ti\u22121) \u0010 \u03bei\u22121 + \u03c3 2 (\u03be2 i\u22121 \u2212\u2206) \u0011 , (8) and has a mean-square convergence rate of order 1. 3.2 Splitting approach The second approach we focus on is based on splitting methods [6, 27, 43, 46]. 
A brief account of their key ideas is provided in the following. Consider an It\u02c6 o SDE of the form dY (t) = F(Y (t))dt + G(Y (t))dW(t), t \u22650, Y (0) = Y0, (9) where the drift coe\ufb03cient and the di\ufb00usion component can be expressed as F(Y (t)) = d X l=1 F [l](Y (t)), G(Y (t)) = d X l=1 G[l](Y (t)), d \u2208N. Usually, there are several ways how to decompose the components F and G. The goal is to obtain subequations dY [l](t) = F [l](Y [l](t))dt + G[l](Y [l](t))dW(t), l \u2208{1, ..., d}, (10) which can be solved explicitly. Once the explicit solutions are derived, they need to be composed. Two common procedures for doing this are the Lie-Trotter [58] and the Strang [56] approach. 5 \fLet \u03d5[l] t (Y0) denote the exact \ufb02ows (solutions) of the subequations in (10) at time t and starting from Y0. Then, the Lie-Trotter composition of \ufb02ows e Y (ti) = \u0010 \u03d5[1] \u2206\u25e6... \u25e6\u03d5[d] \u2206 \u0011 (e Y (ti\u22121)) and the Strang approach e Y (ti) = \u0010 \u03d5[1] \u2206/2 \u25e6... \u25e6\u03d5[d\u22121] \u2206/2 \u25e6\u03d5[d] \u2206\u25e6\u03d5[d\u22121] \u2206/2 \u25e6... \u25e6\u03d5[1] \u2206/2 \u0011 (e Y (ti\u22121)) yield numerical methods for (9). The order of the evaluations of the exact \ufb02ows can be changed, yielding di\ufb00erent schemes within each approach. 3.2.1 Lie-Trotter and Strang schemes for the IGBM With the purpose of excluding the inhomogeneous part, relying thus on the underlying GBM, we split (1) into two simple subequations, namely dY [1](t) = \u22121 \u03c4 Y [1](t) | {z } F [1](Y [1](t)) dt + \u03c3Y [1](t) | {z } G[1](Y [1](t)) dW(t), (11) dY [2](t) = \u00b5 |{z} F [2] dt, G[2] \u22610. (12) The \ufb01rst equation, corresponding to the GBM, allows for an exact simulation of sample paths through Y [1](ti) = \u03d5[1] \u2206(Y [1](ti\u22121)) = Y [1](ti\u22121)e\u2212( 1 \u03c4 + \u03c32 2 )\u2206+\u03c3\u03bei\u22121, i = 1, . . . , N. (13) The second equation is a simple ODE with its explicit solution given by Y [2](ti) = \u03d5[2] \u2206(Y [2](ti\u22121)) = Y [2](ti\u22121) + \u00b5\u2206, i = 1, . . . , N. (14) The Lie-Trotter composition yields e Y L1(ti) := \u0010 \u03d5[1] \u2206\u25e6\u03d5[2] \u2206 \u0011 (e Y L1(ti\u22121)) = e\u2212( 1 \u03c4 + \u03c32 2 )\u2206+\u03c3\u03bei\u22121 \u0010 e Y L1(ti\u22121) + \u00b5\u2206 \u0011 , (15) e Y L2(ti) := \u0010 \u03d5[2] \u2206\u25e6\u03d5[1] \u2206 \u0011 (e Y L2(ti\u22121)) = e Y L2(ti\u22121)e\u2212( 1 \u03c4 + \u03c32 2 )\u2206+\u03c3\u03bei\u22121 + \u00b5\u2206, (16) and the Strang approach results in e Y S1(ti) := \u0010 \u03d5[2] \u2206/2 \u25e6\u03d5[1] \u2206\u25e6\u03d5[2] \u2206/2 \u0011 (e Y S1(ti\u22121)) = \u0012 e Y S1(ti\u22121) + \u00b5\u2206 2 \u0013 e\u2212( 1 \u03c4 + \u03c32 2 )\u2206+\u03c3\u03bei\u22121 + \u00b5\u2206 2 , (17) e Y S2(ti) := \u0010 \u03d5[1] \u2206/2 \u25e6\u03d5[2] \u2206\u25e6\u03d5[1] \u2206/2 \u0011 (e Y S2(ti\u22121)) (18) = e Y S2(ti\u22121)e\u2212( 1 \u03c4 + \u03c32 2 )\u2206+\u03c3(\u03d5i\u22121+\u03c8i\u22121) + \u00b5\u2206e\u2212( 1 \u03c4 + \u03c32 2 ) \u2206 2 +\u03c3\u03c8i\u22121, with iid random variables \u03d5i\u22121, \u03c8i\u22121 \u223cN(0, \u2206/2). The equations (15)-(18) de\ufb01ne four di\ufb00erent numerical solutions of (1). For a discussion of the mean-square convergence of the second LieTrotter method (16) we refer to [47], where a rate of order 1 has been proved. 
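To make the preceding schemes concrete, the following Python sketch (ours, not the authors' code; NumPy assumed, names ours) implements the one-step maps of the Euler-Maruyama (7), Milstein (8) and splitting (15)-(18) schemes. For illustration, all maps are driven by the same Wiener increment ξ = φ + ψ (the decomposition into half-step increments φ, ψ ~ N(0, ∆/2) is only needed by the second Strang scheme), so that the updates can be compared path by path:

import numpy as np

def one_step_all(y, mu, tau, sigma, dt, rng):
    # One step of the schemes (7), (8) and (15)-(18), driven by the same increment
    phi, psi = rng.normal(0.0, np.sqrt(dt / 2.0), size=2)   # half-step increments, N(0, dt/2)
    xi = phi + psi                                           # full-step increment, N(0, dt)
    a = 1.0 / tau + 0.5 * sigma**2
    gbm = np.exp(-a * dt + sigma * xi)                       # exact GBM flow (13)
    return {
        "E":  y + dt * (-y / tau + mu) + sigma * y * xi,                                 # (7)
        "M":  y + dt * (-y / tau + mu) + sigma * y * (xi + 0.5 * sigma * (xi**2 - dt)),  # (8)
        "L1": gbm * (y + mu * dt),                                                       # (15)
        "L2": y * gbm + mu * dt,                                                         # (16)
        "S1": (y + 0.5 * mu * dt) * gbm + 0.5 * mu * dt,                                 # (17)
        "S2": y * gbm + mu * dt * np.exp(-0.5 * a * dt + sigma * psi),                   # (18)
    }

rng = np.random.default_rng(0)
print(one_step_all(10.0, mu=1.0, tau=5.0, sigma=0.2, dt=0.1, rng=rng))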
It is expected that this result extends to the other three splitting schemes, a conjecture that we con\ufb01rm experimentally in Subsection 5.1. In particular, it has been observed that, in contrast to the deterministic case [27], the convergence rate of splitting schemes for SDEs cannot be increased by using Strang compositions, i.e., compositions based on fractional \u2206/2 steps [44]. 6 \f3.3 ODE approach An alternative approach to derive numerical solutions of SDEs is to solve properly derived ODEs, a methodology that we brie\ufb02y recall in the following. Consider the Stratonovich version of (9) given by dY (t) = \u00af F(Y (t))dt + G(Y (t)) \u25e6dW(t), t \u22650, Y (0) = Y0, (19) where \u00af F(y) = F(y) \u22121 2G(y)G\u2032(y), with G\u2032(y) denoting the derivative of G with respect to y. Then, given a \ufb01xed time step \u2206> 0 and a Wiener increment \u03bei\u22121, a numerical solution e Y (ti) of SDE (19) can be obtained by de\ufb01ning it as the solution at u = 1 of the ODE dz du = \u00af F(z)\u2206+ G(z)\u03bei\u22121, z0 = e Y (ti\u22121). (20) This method has been observed to have a mean-square convergence rate of order 1, see, e.g., [15, 24], and is called piecewise linear method, since it uses piecewise linear approximations of Brownian paths. Recently, Foster et al. [24] proposed an extended variant of this approach, using polynomial approximations of Brownian motion. This yielded numerical schemes for SDEs with mean-square order 1.5. In particular, a numerical solution e Y (ti) of SDE (19) can be obtained by de\ufb01ning it as the solution at u = 1 of the ODE dz du = \u00af F(z)\u2206+G(z)\u03bei\u22121+[G, \u00af F](z)\u2206\u03c1i\u22121+ \u0002 G, [G, \u00af F] \u0003 (z) \u00123 5\u2206\u03c12 i\u22121 + \u22062 30 \u0013 , z0 = e Y (ti\u22121), (21) where [\u00b7, \u00b7] denotes the standard Lie bracket of vector \ufb01elds, and the \u03c1i\u22121 := 1 \u2206 ti Z ti\u22121 \u0014 W(u) \u2212W(ti\u22121) \u2212u \u2212ti\u22121 \u2206 \u0010 W(ti) \u2212W(ti\u22121) \u0011\u0015 du are rescaled space-time L\u00b4 evy areas of the Wiener process over [ti\u22121, ti]. They are shown to have distribution \u03c1i\u22121 \u223cN(0, \u2206/12) and to be independent of the Wiener increments \u03bei\u22121. Following the notion in [24], we call this method log-ODE scheme, and we refer to [24] for further details. 3.3.1 Piecewise linear and log-ODE schemes for the IGBM To derive numerical schemes for the IGBM based on the ODE approach, consider the Stratonovich version of SDE (1) given by dY (t) = \u0012 \u2212 \u00101 \u03c4 + \u03c32 2 \u0011 Y (t) + \u00b5 \u0013 dt + \u03c3Y (t) \u25e6dW(t), t \u22650, Y (0) = Y0. Solving the corresponding ODE (20) yields the following piecewise linear scheme e Y Lin(ti) = e Y Lin(ti\u22121)e\u2212( 1 \u03c4 + \u03c32 2 )\u2206+\u03c3\u03bei\u22121 + \u00b5\u2206 e\u2212( 1 \u03c4 + \u03c32 2 )\u2206+\u03c3\u03bei\u22121 \u22121 \u2212( 1 \u03c4 + \u03c32 2 )\u2206+ \u03c3\u03bei\u22121 ! . (22) Noting that [G, \u00af F](y) = \u00af F \u2032(y)G(y) \u2212G\u2032(y) \u00af F(y) = \u2212\u00b5\u03c3, \u0002 G, [G, \u00af F] \u0003 (y) = \u00b5\u03c32, and solving the respective ODE (21) yields the following log-ODE scheme for the IGBM [24] e Y Log(ti) = e Y Log(ti\u22121)e\u2212( 1 \u03c4 + \u03c32 2 )\u2206+\u03c3\u03bei\u22121 +\u00b5\u2206 e\u2212( 1 \u03c4 + \u03c32 2 )\u2206+\u03c3\u03bei\u22121 \u22121 \u2212( 1 \u03c4 + \u03c32 2 )\u2206+ \u03c3\u03bei\u22121 ! 
\u0012 1 \u2212\u03c3\u03c1i\u22121 + \u03c32\u00103 5\u03c12 i\u22121 + \u2206 30 \u0011\u0013 . (23) 7 \fRemark 1. The numerical solutions (15)-(18) coincide with the discretised version of (2), where the integral is approximated using the left point rectangle rule, the right point rectangle rule, the trapezoidal rule and the midpoint rule, respectively. If \u00b5 = 0, the numerical solutions (15)-(18) and (22), (23) coincide with the exact simulation scheme (13) for the GBM. Notation 1. In the following, we use the abbreviations E, M, L1, L2, S1, S2, Lin and Log for the Euler-Maruyama (7), Milstein (8), \ufb01rst Lie-Trotter (15), second Lie-Trotter (16), \ufb01rst Strang (17), second Strang (18), piecewise linear (22) and log-ODE (23) methods, respectively. 4 Properties of the numerical methods for the IGBM We now examine the ability of the derived numerical methods to accurately preserve the properties of the process. In particular, we \ufb01rst provide closed-form expressions for their conditional and asymptotic means and variances and analyse the resulting biases. Then, we show that the four splitting and the two ODE schemes preserve the boundary properties of the IGBM, while the Euler-Maruyama and Milstein schemes do not. 4.1 Investigation of the conditional moments The numerical solutions de\ufb01ned by (7), (8), (15)-(18), (22) and (23) enable to express e Y (ti) in terms of the initial value Y0. Indeed, by performing back iteration, we obtain e Y E(ti) = Y0 i Y j=1 \u0012 1 \u2212\u2206 \u03c4 + \u03c3\u03bei\u2212j \u0013 + \u00b5\u2206 i\u22121 X k=1 k Y j=1 \u0012 1 \u2212\u2206 \u03c4 + \u03c3\u03bei\u2212j \u0013 + \u00b5\u2206, (24) e Y M(ti) = Y0 i Y j=1 \u0012 1 \u2212\u2206 \u03c4 + \u03c3\u03bei\u2212j + (\u03be2 i\u2212j \u2212\u2206)\u03c32 2 \u0013 +\u00b5\u2206 i\u22121 X k=1 k Y j=1 \u0012 1 \u2212\u2206 \u03c4 + \u03c3\u03bei\u2212j + (\u03be2 i\u2212j \u2212\u2206)\u03c32 2 \u0013 + \u00b5\u2206, (25) e Y L1(ti) = Y0e \u2212( 1 \u03c4 + \u03c32 2 )ti+\u03c3 i\u22121 P k=0 \u03bek + \u00b5\u2206 i X k=1 e \u2212( 1 \u03c4 + \u03c32 2 )tk+\u03c3 k P j=1 \u03bei\u2212j , (26) e Y L2(ti) = Y0e \u2212( 1 \u03c4 + \u03c32 2 )ti+\u03c3 i\u22121 P k=0 \u03bek + \u00b5\u2206 i\u22121 X k=0 e \u2212( 1 \u03c4 + \u03c32 2 )tk+\u03c3 k P j=1 \u03bei\u2212j , (27) e Y S1(ti) = \u0012 Y0 + \u00b5\u2206 2 \u0013 e \u2212( 1 \u03c4 + \u03c32 2 )ti+\u03c3 i\u22121 P k=0 \u03bek + \u00b5\u2206 i\u22121 X k=1 e \u2212( 1 \u03c4 + \u03c32 2 )tk+\u03c3 k P j=1 \u03bei\u2212j + \u00b5\u2206 2 , (28) e Y S2(ti) = Y0e \u2212( 1 \u03c4 + \u03c32 2 )ti+\u03c3 i\u22121 P k=0 \u03bek + \u00b5\u2206 i X k=1 e \u2212( 1 \u03c4 + \u03c32 2 )(k\u22121 2 )\u2206+\u03c3\u03c8i\u2212k+\u03c3 k\u22121 P j=1 \u03bei\u2212j , (29) e Y Lin(ti) = Y0e \u2212( 1 \u03c4 + \u03c32 2 )ti+\u03c3 i\u22121 P k=0 \u03bek + \u00b5\u2206 i\u22121 X k=0 e \u2212( 1 \u03c4 + \u03c32 2 )tk+\u03c3 k P j=1 \u03bei\u2212j e\u2212( 1 \u03c4 + \u03c32 2 )\u2206+\u03c3\u03bei\u22121\u2212k \u22121 \u2212( 1 \u03c4 + \u03c32 2 )\u2206+ \u03c3\u03bei\u22121\u2212k ! ,(30) e Y Log(ti) = Y0e \u2212( 1 \u03c4 + \u03c32 2 )ti+\u03c3 i\u22121 P k=0 \u03bek +\u00b5\u2206 i\u22121 X k=0 e \u2212( 1 \u03c4 + \u03c32 2 )tk+\u03c3 k P j=1 \u03bei\u2212j e\u2212( 1 \u03c4 + \u03c32 2 )\u2206+\u03c3\u03bei\u22121\u2212k \u22121 \u2212( 1 \u03c4 + \u03c32 2 )\u2206+ \u03c3\u03bei\u22121\u2212k ! 
\u0012 1 \u2212\u03c3\u03c1i\u22121\u2212k + \u03c32 \u00143 5\u03c12 i\u22121\u2212k + \u2206 30 \u0015\u0013 ,(31) where \u03bei := \u03d5i + \u03c8i in (29). These relations allow for an investigation of the conditional means E[e Y (ti)|Y0] and variances Var(e Y (ti)|Y0) of the numerical solutions. 8 \f4.1.1 Closed-form expressions for the conditional means and variances In Proposition 1, we provide closed-form expressions of the conditional mean and variance of a general random variable Zi that plays the role of a numerical solution e Y (ti) as in (24)-(31) for a \ufb01xed time ti. These expressions will allow for a straightforward derivation of the corresponding results for the numerical solutions of interest. Proposition 1. Consider the real-valued random variable Zi de\ufb01ned by Zi := Z0Wi + c1 I X k=0 WkHk+1 + c2, (32) where i \u2208N, I \u2208{i \u22121, i}, c1, c2 > 0, Z0 \u2208R, W0 \u2208{0, 1}, Wk := k Q j=1 Xj with Xj, j = 1, . . . , k, being iid with mean \u00b5x \u2208R and second moment r > 0. The Hk+1, k = 0, . . . , I, are iid with mean \u00b5h \u2208R and second moment rh > 0. Moreover, Wk and Hk+1 are independent and E[WlWkHk+1] = rk\u00b5l\u2212k x p, for k < l and p \u2208R. The mean of Zi conditioned on Z0 is given by E[Zi|Z0] = Z0\u00b5i x + c1\u00b5h I X k=1 \u00b5k x + c1W0\u00b5h + c2 (33) and the variance of Zi conditioned on Z0 is given by Var(Zi|Z0) = Z2 0(ri \u2212\u00b52i x ) + 2c1Z0 I X k=0 rk\u00b5i\u2212k x p \u2212\u00b5i+k x \u00b5h +c2 1 \" I X k=0 rkrh \u2212\u00b52k x \u00b52 h + 2 I X l=1 l\u22121 X k=0 \u00b5hrk\u00b5l\u2212k x p \u2212\u00b5l+k x \u00b52 h # . (34) The proof of Proposition 1 is given in Appendix A. Based on Proposition 1, we derive the conditional moments of the Euler-Maruyama, Milstein, splitting and ODE schemes. Corollary 1. Let e Y (ti) be the numerical solutions de\ufb01ned through (7), (8), (15)-(18), (22) and (23), respectively, at time ti = i\u2206. Their means and variances conditioned on the initial value Y0 are given by (33) and (34), respectively, with quantities \u00b5x, \u00b5h, r, rh, p, c1, c2, I, Z0 and W0 de\ufb01ned as reported in Table 1. The proof of Corollary 1 is given in Appendix B. Remark 2. To make the results of Proposition 1 and Corollary 1 more approachable, the conditional means of the considered numerical methods are listed in closed-form as follows E h e Y E(ti)|Y0 i = E h e Y M(ti)|Y0 i = Y0 \u0012 1 \u2212\u2206 \u03c4 \u0013i + \u00b5\u2206 1 \u2212 \u00001 \u2212\u2206 \u03c4 \u0001i \u2206/\u03c4 ! 
, E h e Y L1(ti)|Y0 i = Y0e\u22121 \u03c4 ti + \u00b5\u03c4 \u0010 1 \u2212e\u22121 \u03c4 ti\u0011 \u0012 \u2206/\u03c4 e\u2206/\u03c4 \u22121 \u0013 , E h e Y L2(ti)|Y0 i = Y0e\u22121 \u03c4 ti + \u00b5\u03c4 \u0010 1 \u2212e\u22121 \u03c4 ti\u0011 \u0012 \u2206/\u03c4 e\u2206/\u03c4 \u22121 \u0013 e\u2206/\u03c4, E h e Y S1(ti)|Y0 i = Y0e\u22121 \u03c4 ti + \u00b5\u03c4 \u0010 1 \u2212e\u22121 \u03c4 ti\u0011 \u0014\u0012 \u2206/\u03c4 e\u2206/\u03c4 \u22121 \u0013 e\u2206/\u03c4 \u2212\u2206 2\u03c4 \u0015 , E h e Y S2(ti)|Y0 i = Y0e\u22121 \u03c4 ti + \u00b5\u03c4 \u0010 1 \u2212e\u22121 \u03c4 ti\u0011 \u0012 \u2206/\u03c4 e\u2206/\u03c4 \u22121 \u0013 e\u2206/2\u03c4, 9 \fE h e Y Lin(ti)|Y0 i = Y0e\u22121 \u03c4 ti + \u00b5\u03c4 \u0010 1 \u2212e\u22121 \u03c4 ti\u0011 \u0012 \u2206/\u03c4 e\u2206/\u03c4 \u22121 \u0013 e\u2206/\u03c4L\u2206,\u03c4,\u03c3, E h e Y Log(ti)|Y0 i = Y0e\u22121 \u03c4 ti + \u00b5\u03c4 \u0010 1 \u2212e\u22121 \u03c4 ti\u0011 \u0012 \u2206/\u03c4 e\u2206/\u03c4 \u22121 \u0013 e\u2206/\u03c4L\u2206,\u03c4,\u03c3 \u0012 1 + \u03c32 \u2206 12 \u0013 , where L\u2206,\u03c4,\u03c3 is de\ufb01ned as L\u2206,\u03c4,\u03c3 := \u221a\u03c0 \u03c3 \u221a 2\u2206 exp \uf8eb \uf8ec \uf8ed \u2212 \u0010 1 \u03c4 + \u03c32 2 \u00112 \u2206 2\u03c32 \uf8f6 \uf8f7 \uf8f8 \uf8eb \uf8eder\ufb01 \uf8ee \uf8f0 \u0010 1 \u03c4 + \u03c32 2 \u0011 \u221a \u2206 \u03c3 \u221a 2 \uf8f9 \uf8fb+ er\ufb01 \uf8ee \uf8f0 \u0010 \u22121 \u03c4 + \u03c32 2 \u0011 \u221a \u2206 \u03c3 \u221a 2 \uf8f9 \uf8fb \uf8f6 \uf8f8, (35) with er\ufb01denoting the imaginary error function. The above expressions are obtained from (33) after calculating the geometric sums. Closed-form expression of the conditional variances can be obtained analogously. While the conditional means of the Euler-Maruyama and Milstein schemes are equal, their conditional variances are di\ufb00erent. This results from the fact that the Milstein scheme takes into account an additional term that is related only to the di\ufb00usion coe\ufb03cient of the SDE. Noting that \u00b5\u2206= \u00b5\u03c4\u2206/\u03c4, it can be observed that the conditional means of the Euler-Maruyama, Milstein and splitting methods depend on \u2206/\u03c4 and their conditional variances depend on \u2206/\u03c4 and \u2206\u03c32. Remarkably, only the conditional means of the ODE methods depend on \u03c3, while this is not the case for the true conditional mean (3). If \u00b5 = 0, the conditional means and variances of the splitting schemes (15)-(18) and ODE schemes (22), (23) coincide with the true quantities (3) and (4), respectively, at time ti. Remark 3. Having closed-form expressions for the conditional moments of the numerical solutions allows for a direct control of the simulation accuracy through the choice of the time step \u2206. Table 1: Quantities of interest for the numerical schemes entering in (33)-(34). 
Zi            µx           µh               r                              rh              p
Ỹ^E(ti)      1 − ∆/τ      1                σ²∆ + (1 − ∆/τ)²               1               1
Ỹ^M(ti)      1 − ∆/τ      1                σ²∆ + (1 − ∆/τ)² + (σ²∆)²/2    1               1
Ỹ^L1(ti)     e^{−∆/τ}     1                e^{σ²∆ − 2∆/τ}                 1               1
Ỹ^L2(ti)     e^{−∆/τ}     1                e^{σ²∆ − 2∆/τ}                 1               1
Ỹ^S1(ti)     e^{−∆/τ}     1                e^{σ²∆ − 2∆/τ}                 1               1
Ỹ^S2(ti)     e^{−∆/τ}     µx^{1/2}         e^{σ²∆ − 2∆/τ}                 r^{1/2}         rh µx^{−1/2}
Ỹ^Lin(ti)    e^{−∆/τ}     L∆,τ,σ (35)      e^{σ²∆ − 2∆/τ}                 L̄∆,τ,σ (69)    L̃∆,τ,σ (70)
Ỹ^Log(ti)    e^{−∆/τ}     K∆,τ,σ (71)      e^{σ²∆ − 2∆/τ}                 K̄∆,τ,σ (72)    K̃∆,τ,σ (73)

Zi            c1      c2        I        Z0            W0
Ỹ^E(ti)      µ∆      µ∆        i − 1    Y0            0
Ỹ^M(ti)      µ∆      µ∆        i − 1    Y0            0
Ỹ^L1(ti)     µ∆      0         i        Y0            0
Ỹ^L2(ti)     µ∆      0         i − 1    Y0            1
Ỹ^S1(ti)     µ∆      µ∆/2      i − 1    Y0 + µ∆/2     0
Ỹ^S2(ti)     µ∆      0         i − 1    Y0            1
Ỹ^Lin(ti)    µ∆      0         i − 1    Y0            1
Ỹ^Log(ti)    µ∆      0         i − 1    Y0            1

Figure 1: Relative conditional mean bias (36) (top left panel for E, M, L1, L2, S1, S2, Lin, Log and a zoom in the top right panel for E, M, S1, S2, Log) and conditional variance bias (37) (bottom left panel for E, M, L1, L2, S1, S2, Lin, Log and a zoom in the bottom right panel for L1, L2, S1, S2, Lin, Log) in percentage as a function of the time ti, for Y0 = 10, ∆ = 0.1, µ = 1, τ = 5 and σ = 0.2. In the top right panel also σ = 1 and σ = 1.5 are considered.

4.1.2 Conditional mean and variance biases

Corollary 1 implies that all methods yield conditional means and variances different from the true values. In the following, we study the relative mean and variance biases introduced by the schemes, defined by

rBias_{∆,ti,Y0}(E[Ỹ]) := (E[Ỹ(ti)|Y0] − E[Y(ti)|Y0]) / E[Y(ti)|Y0],   (36)
rBias_{∆,ti,Y0}(Var(Ỹ)) := (Var(Ỹ(ti)|Y0) − Var(Y(ti)|Y0)) / Var(Y(ti)|Y0),   (37)

for each considered numerical method. These biases depend on the time step ∆, the time ti, the initial condition Y0 and the parameters of the model. While the biases in the conditional means of the ODE methods depend on σ, those of the remaining methods are independent of σ. The biases in the conditional variance depend on all model parameters. In the top left panel of Figure 1, we report the relative mean bias (36) in percentage as a function of ti, for Y0 = 10, ∆ = 0.1, µ = 1, τ = 5 and σ = 0.2.
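Since Remark 2 gives the conditional means in closed form, these mean biases can be evaluated without any simulation. As an illustration, the following sketch (ours; NumPy assumed) computes (36) for the Euler-Maruyama/Milstein and splitting schemes; the ODE schemes are omitted here because their means also involve the quantity L∆,τ,σ in (35):

import numpy as np

def cond_means(ti, dt, y0, mu, tau):
    # Closed-form conditional means from Remark 2 (E/M and splitting schemes) and the true mean (3)
    i = int(round(ti / dt))
    r = dt / tau
    decay = np.exp(-ti / tau)
    true = y0 * decay + mu * tau * (1.0 - decay)
    em = y0 * (1.0 - r)**i + mu * dt * (1.0 - (1.0 - r)**i) / r
    base = mu * tau * (1.0 - decay) * r / np.expm1(r)
    return {"true": true,
            "E/M": em,
            "L1": y0 * decay + base,
            "L2": y0 * decay + base * np.exp(r),
            "S1": y0 * decay + mu * tau * (1.0 - decay) * (r / np.expm1(r) * np.exp(r) - r / 2.0),
            "S2": y0 * decay + base * np.exp(r / 2.0)}

m = cond_means(ti=15.0, dt=0.1, y0=10.0, mu=1.0, tau=5.0)
rbias = {k: 100.0 * (v - m["true"]) / m["true"] for k, v in m.items() if k != "true"}  # (36), in %
print(rbias)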
The relative mean biases (in absolute value) introduced by the Strang splitting schemes are significantly smaller than those of the Lie-Trotter splitting schemes and close to 0 for all ti under consideration, with the second Strang scheme performing slightly better than the first one (see the top right panel of Figure 1, where we provide a zoom). Moreover, the piecewise linear method performs better than the Lie-Trotter methods, but worse than the Strang schemes. For the chosen value of σ, the log-ODE method outperforms the Strang methods and produces a bias even closer to 0 for all times ti. However, this fact changes when σ is increased, as shown in the top right panel, where we also consider σ = 1 and σ = 1.5. In particular, due to the dependence of the mean of the ODE schemes on σ, they may perform worse than all other methods in terms of preserving the mean when σ increases. Furthermore, it can be observed that in the non-stationary initial part, the Strang and ODE methods clearly outperform the Euler-Maruyama and Milstein schemes. This changes with increasing time. In particular, the relative mean bias of the Euler-Maruyama and Milstein schemes approaches 0, suggesting an asymptotically unbiased mean (see Subsection 4.2).

In the bottom left panel of Figure 1, we report the conditional variance biases (37) in percentage as a function of ti for the same values of Y0, ∆, µ, τ and σ = 0.2. All four splitting schemes and both ODE schemes yield better approximations of the conditional variance than the Euler-Maruyama and Milstein schemes for all ti under consideration. The log-ODE method yields again a bias close to 0 from the beginning, outperforming all other methods. This is also the case when σ is increased (figures not shown). Except for ti very small, the Strang schemes outperform the piecewise linear method, and also yield biases close to 0 from the beginning. Moreover, the relative variance biases (in absolute value) of the Lie-Trotter splitting schemes decrease in time and seem to coincide asymptotically with that of the first Strang scheme (see Subsection 4.2), as can be observed in the bottom right panel of Figure 1. Similar results are obtained for other parameter values, time steps and initial conditions.

Figure 2: Relative conditional mean bias (36) (top left panel for E, M, L1, L2, S1, S2, Lin, Log and a zoom in the top right panel for S1, S2, Log) and conditional variance bias (37) (bottom left panel for E, M, L1, L2, S1, S2, Lin, Log and a zoom in the bottom right panel for S1, S2, Lin, Log) in percentage as a function of the initial value Y0, for ti = 2, ∆ = 0.1, µ = 1, τ = 5 and σ = 0.2. In the top right panel also σ = 1 and σ = 1.5 are considered.

In Figure 2, we report the relative biases of the conditional mean (36) (top panels) and variance (37) (bottom panels) in percentage as a function of the initial value Y0, for ti = 2 and the same parameters as before.
All methods introduce larger biases for very small values of Y0. This may be explained by the fact that reproducing the features of the process near a boundary, i.e., near 0, is more di\ufb03cult. For \u03c3 = 0.2, the log-ODE method outperforms the other methods, yielding relative biases close to 0 for any considered choice of the initial condition, not being strongly in\ufb02uenced by it. Similar to before, this changes when \u03c3 is increased, as illustrated in the top right panel where we also consider \u03c3 = 1 and \u03c3 = 1.5. The Strang methods (whose mean bias does not depend on \u03c3) do then introduce the smallest bias in the conditional mean. In general, the performance of the splitting and ODE schemes improves as Y0 increases, while the Euler-Maruyama and Milstein schemes perform worse for large values of Y0. This is in agreement with the fact that Y0 enters into the conditional means of the splitting and ODE schemes in the same way as in the true quantity, as evident when comparing the expressions reported in Remark 2 with the true conditional mean (3). In particular, the conditional mean biases (not the relative ones) of the splitting and ODE schemes do not depend on Y0, while those of the Euler-Maruyama and Milstein schemes do. Furthermore, the conditional variance biases introduced by the splitting and ODE schemes depend linearly on Y0, while those of the Euler-Maruyama and Milstein schemes depend quadratically on Y0. If Y0 is close to the asymptotic mean \u00b5\u03c4, here 5, the relative mean bias of the Euler-Maruyama and Milstein schemes is almost 0 (top left panel), in agreement with the fact that they have an asymptotically unbiased mean (see Subsection 4.2). 4.2 Investigation of the asymptotic moments We now investigate the asymptotic mean and variance of the numerical solutions, i.e., E[e Y\u221e] := lim i\u2192\u221eE[e Y (ti)|Y0], Var(e Y\u221e) := lim i\u2192\u221eVar(e Y (ti)|Y0), ti = i\u2206, comparing them with the true quantities (5) and (6), respectively. 4.2.1 Closed-form expressions for the asymptotic means and variances In Proposition 2, we provide closed-form expressions of the asymptotic mean and variance of the random variable Zi introduced in Proposition 1. As before, these relations allow for a straightforward derivation of the corresponding results for the numerical schemes of interest, including necessary conditions that guarantee the existence of the asymptotic quantities. Proposition 2. Let the random variable Zi be de\ufb01ned as in Proposition 1. If |\u00b5x| < 1, the asymptotic mean of Zi is given by E[Z\u221e] := lim i\u2192\u221eE[Zi|Z0] = c1\u00b5h \u00b5x 1 \u2212\u00b5x + c1W0\u00b5h + c2. (38) If, in addition, r \u2208(0, 1), the asymptotic variance of Zi is given by Var(Z\u221e) := lim i\u2192\u221eVar(Zi|Z0) = c2 1 \u0012rh(\u00b5x \u22121)2 + 2\u00b5h\u00b5xp(1 \u2212\u00b5x) \u2212(1 \u2212r)\u00b52 h (\u00b5x \u22121)2(1 \u2212r) \u0013 . (39) The proof of Proposition 2 is given in Appendix C. Based on Proposition 2, we derive the asymptotic moments of the considered numerical schemes. 13 \fCorollary 2. Let e Y (ti) be the numerical solutions de\ufb01ned through (7), (8), (15)-(18), (22) and (23), respectively. 
The asymptotic means and variances of the Euler-Maruyama and Milstein schemes are given by If \f \f \f \f1 \u2212\u2206 \u03c4 \f \f \f \f < 1, E[e Y E \u221e] = E[e Y M \u221e] = \u00b5\u03c4, (40) If \f \f \f \f1 \u2212\u2206 \u03c4 \f \f \f \f < 1 and \u2206< 2\u03c4 \u2212\u03c32\u03c4 2, Var(e Y E \u221e) = (\u00b5\u03c4)2 2 \u03c32\u03c4 \u22121 \u2212 \u2206 \u03c32\u03c4 2 , (41) If \f \f \f \f1 \u2212\u2206 \u03c4 \f \f \f \f < 1 and \u2206< 2\u03c4 \u2212\u03c32\u03c4 2 \u03c34\u03c4 2 2 + 1 , Var(e Y M \u221e) = (\u00b5\u03c4)2(1 + \u03c32\u2206 2 ) 2 \u03c32\u03c4 \u22121 \u2212 \u2206 \u03c32\u03c4 2 \u2212\u03c32\u2206 2 . (42) The asymptotic means and variances of the splitting schemes are given by E[e Y L1 \u221e] = \u00b5\u03c4 \u0012 \u2206/\u03c4 e\u2206/\u03c4 \u22121 \u0013 , (43) E[e Y L2 \u221e] = E[e Y L1 \u221e] + \u00b5\u2206= E[e Y L1 \u221e]e\u2206/\u03c4, (44) E[e Y S1 \u221e] = E[e Y L1 \u221e] + \u00b5\u2206 2 = 1 2E[e Y L1 \u221e](1 + e\u2206/\u03c4), (45) E[e Y S2 \u221e] = E[e Y L1 \u221e]e\u2206/2\u03c4, (46) If \u03c32\u03c4 < 2, Var(e Y L1 \u221e) = Var(e Y L2 \u221e) = Var(e Y S1 \u221e) = E[e Y L1 \u221e]2 e2\u2206/\u03c4(e\u2206\u03c32 \u22121) e2\u2206/\u03c4 \u2212e\u2206\u03c32 , (47) If \u03c32\u03c4 < 2, Var(e Y S2 \u221e) = E[e Y L1 \u221e]2 e\u2206/\u03c4(e\u2206\u03c32/2 \u22121)(e2\u2206/\u03c4 + e\u2206\u03c32/2) e2\u2206/\u03c4 \u2212e\u2206\u03c32 . (48) The asymptotic means and variances of the ODE schemes are given by E[e Y Lin \u221e] = E[e Y L1 \u221e]e\u2206/\u03c4L\u2206,\u03c4,\u03c3, (49) E[e Y Log \u221e] = E[e Y L1 \u221e]e\u2206/\u03c4L\u2206,\u03c4,\u03c3 \u0012 1 + \u03c32 \u2206 12 \u0013 , (50) If \u03c32\u03c4 < 2, Var(e Y Lin \u221e) = E[e Y L1 \u221e]2 e2\u2206/\u03c4 \u0010 2Le L(e\u2206/\u03c4 \u22121) + \u00af L(e\u2206/\u03c4 \u22121)2 + L2(e\u2206\u03c32 \u2212e2\u2206/\u03c4) \u0011 e2\u2206/\u03c4 \u2212e\u2206\u03c32 , (51) If \u03c32\u03c4 < 2, Var(e Y Log \u221e) = E[e Y L1 \u221e]2 e2\u2206/\u03c4 \u0010 2K e K(e\u2206/\u03c4 \u22121) + \u00af K(e\u2206/\u03c4 \u22121)2 + K2(e\u2206\u03c32 \u2212e2\u2206/\u03c4) \u0011 e2\u2206/\u03c4 \u2212e\u2206\u03c32 , (52) where L \u2261L\u2206,\u03c4,\u03c3, \u00af L \u2261\u00af L\u2206,\u03c4,\u03c3, e L \u2261e L\u2206,\u03c4,\u03c3, K \u2261K\u2206,\u03c4,\u03c3, \u00af K \u2261\u00af K\u2206,\u03c4,\u03c3, e K \u2261e K\u2206,\u03c4,\u03c3 are as in (35) and (69)-(73), respectively. Proof. The results and their required conditions follow directly from Proposition 2, using the corresponding values reported in Table 1 and simplifying the resulting expressions. Remarkably, the splitting and ODE methods do not require extra conditions for the existence of the asymptotic mean, but, between the two, only the splitting schemes have asymptotic means independent on \u03c3, as it is the case for the IGBM. Moreover, the condition guaranteeing the existence of the asymptotic variance of the splitting and ODE schemes is the same as that of the true process, i.e., \u03c32\u03c4 < 2. In contrast, the Euler-Maruyama and the Milstein schemes rely on extra conditions that do not depend on the features of the model. If |1\u2212\u2206/\u03c4| < 1, the Euler-Maruyama and the Milstein schemes have unbiased asymptotic means. Regarding the asymptotic variance, the condition for the Milstein scheme in (42) is more restrictive than that for the Euler-Maruyama method in (41), agreeing with similar results in the literature [11]. 
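As an illustration of the extra conditions required by the Itô-Taylor schemes, the following sketch (ours; plain Python, names ours) evaluates the asymptotic mean and variances (40)-(42) together with the step-size restrictions under which they exist:

def em_milstein_asymptotics(dt, mu, tau, sigma):
    # Asymptotic mean and variances of the Euler-Maruyama and Milstein schemes from
    # Corollary 2, (40)-(42), together with the step-size conditions they require
    mean_ok = abs(1.0 - dt / tau) < 1.0                                              # for (40)
    var_em_ok = mean_ok and dt < 2.0 * tau - sigma**2 * tau**2                       # for (41)
    var_mil_ok = mean_ok and dt < (2.0 * tau - sigma**2 * tau**2) / (sigma**4 * tau**2 / 2.0 + 1.0)  # for (42)
    mean_inf = mu * tau if mean_ok else float("nan")
    var_em = ((mu * tau)**2 / (2.0 / (sigma**2 * tau) - 1.0 - dt / (sigma**2 * tau**2))
              if var_em_ok else float("nan"))
    var_mil = ((mu * tau)**2 * (1.0 + sigma**2 * dt / 2.0)
               / (2.0 / (sigma**2 * tau) - 1.0 - dt / (sigma**2 * tau**2) - sigma**2 * dt / 2.0)
               if var_mil_ok else float("nan"))
    return mean_inf, var_em, var_mil

# True asymptotic values (5)-(6) for mu = 1, tau = 5, sigma = 0.2 are 5 and 25/9 ~ 2.78
print(em_milstein_asymptotics(0.5, 1.0, 5.0, 0.2))

For instance, with τ = 5 and σ = 0.2 the asymptotic variance (41) exists for ∆ < 9 and (42) for ∆ < 9/1.02 ≈ 8.8, whereas the splitting and ODE schemes only require the model condition σ²τ < 2.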
The asymptotic variances of the Lie-Trotter schemes and the first Strang scheme coincide, as previously hypothesised looking at Figure 1.

Figure 3: Relative asymptotic mean bias (53) (top left panel for E, M, L1, L2, S1, S2, Lin, Log and a zoom in the top right panel for E, M, S1, S2, Log) and asymptotic variance bias (54) (bottom left panel for E, M, L1, L2, S1, S2, Lin, Log and a zoom in the bottom right panel for L1, L2, S1, S2, Log) in percentage as a function of ∆, for τ = 5 and σ = 0.5. In the top right and bottom right panels also σ = 1, σ = 1.5 and σ = 0.55, σ = 0.6 are considered, respectively.

If µ = 0, the results for the Euler-Maruyama and Milstein methods in Corollary 2 are in agreement with those available in the linear stochastic stability literature for the GBM [28, 51]. In particular, the conditions required in (41) and (42) are the same as those guaranteeing their mean-square stability. On the contrary, Corollary 2 implies that the splitting (15)-(18) and ODE (22), (23) schemes are asymptotically first and second moment stable without needing extra conditions.

4.2.2 Asymptotic mean and variance biases

Corollary 2 implies that the derived schemes introduce asymptotic mean and variance biases. In the following, we analyse the resulting asymptotic relative biases

rBias_∆(E[Ỹ∞]) := (E[Ỹ∞] − E[Y∞]) / E[Y∞],   (53)
rBias_∆(Var(Ỹ∞)) := (Var(Ỹ∞) − Var(Y∞)) / Var(Y∞),   (54)

with respect to the true quantities (5) and (6), for each considered numerical method. These biases depend on the time step ∆ and on the model parameters. None of the relative asymptotic biases depends on µ. In particular, except for the ODE methods, the asymptotic mean biases depend only on the ratio ∆/τ, and the asymptotic variance biases depend on both ∆/τ and ∆σ². As expected, all biases vanish as ∆ → 0, provided that the conditions of Corollary 2 are satisfied.

Figure 4: Relative asymptotic mean bias (53) (top left panel for E, M, L1, L2, S1, S2, Lin, Log and a zoom in the top right panel for E, M, S1, S2, Log) and asymptotic variance bias (54) (bottom left panel for E, M, L1, L2, S1, S2, Lin, Log and a zoom in the bottom right panel for L1, L2, S1, S2, Log) in percentage as a function of τ, for ∆ = 0.1 and σ = 0.3.

In the top left panel of Figure 3, we report the relative biases of the asymptotic mean (53) in percentage as a function of the time step ∆, for τ = 5 and σ = 0.5.
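As a complement to Figure 3, the asymptotic biases of the splitting schemes can be evaluated directly from (43)-(48). The following sketch (ours, NumPy assumed) returns the relative biases (53)-(54); for ∆ = 0.5, µ = 1, τ = 5 and σ = 0.2 it gives mean biases of about −4.92% (L1), 5.08% (L2), 0.08% (S1) and −0.04% (S2), consistent with the theoretical values reported later in Table 3:

import numpy as np

def splitting_asymptotic_bias(dt, mu, tau, sigma):
    # Relative asymptotic biases (53)-(54) of the splitting schemes, from (43)-(48);
    # the asymptotic variances require sigma^2 * tau < 2
    mean_true = mu * tau                                              # (5)
    var_true = (mu * tau)**2 / (2.0 / (sigma**2 * tau) - 1.0)         # (6)
    r, s = dt / tau, dt * sigma**2
    m_l1 = mu * tau * r / np.expm1(r)                                 # (43)
    means = {"L1": m_l1,
             "L2": m_l1 * np.exp(r),                                  # (44)
             "S1": 0.5 * m_l1 * (1.0 + np.exp(r)),                    # (45)
             "S2": m_l1 * np.exp(r / 2.0)}                            # (46)
    var_lts1 = m_l1**2 * np.exp(2*r) * np.expm1(s) / (np.exp(2*r) - np.exp(s))        # (47)
    var_s2 = (m_l1**2 * np.exp(r) * np.expm1(s / 2.0) * (np.exp(2*r) + np.exp(s / 2.0))
              / (np.exp(2*r) - np.exp(s)))                                            # (48)
    variances = {"L1": var_lts1, "L2": var_lts1, "S1": var_lts1, "S2": var_s2}
    mean_bias = {k: (v - mean_true) / mean_true for k, v in means.items()}            # (53)
    var_bias = {k: (v - var_true) / var_true for k, v in variances.items()}           # (54)
    return mean_bias, var_bias

print(splitting_asymptotic_bias(dt=0.5, mu=1.0, tau=5.0, sigma=0.2))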
Only the asymptotic mean of the Euler-Maruyama and Milstein methods is unbiased. Moreover, independent of the choice of the model parameters and for any time step ∆ > 0, the Strang schemes yield significantly smaller asymptotic mean biases (in absolute value) than the Lie-Trotter schemes, in agreement with the results reported in the previous section. Moreover, the mean bias (in absolute value) of the second Strang scheme is slightly smaller than that of the first Strang scheme, as highlighted in the top right panel, where we provide a zoom. In addition, the log-ODE method introduces a smaller bias in the asymptotic mean than the piecewise linear method. This does not change when considering other values for τ and σ, see the top panels of Figure 4 and Figure 5, where we fix ∆ = 0.1 and consider (53) as a function of τ and σ, respectively. Furthermore, for small values of σ, the log-ODE method performs better than the Strang schemes in terms of preserving the asymptotic mean. However, this changes when σ is increased, see the top right panel of Figure 3 and the top panels of Figure 5.

Figure 5: Relative asymptotic mean bias (53) (top left panel for E, M, L1, L2, S1, S2, Lin, Log and a zoom in the top right panel for E, M, S1, S2, Lin, Log) and asymptotic variance bias (54) (bottom left panel for E, M, L1, L2, S1, S2, Lin, Log and a zoom in the bottom right panel for L1, L2, S1, S2, Lin, Log) in percentage as a function of σ, for τ = 5 and ∆ = 0.1.

In the bottom left panel of Figure 3, we report the relative biases of the asymptotic variance (54) in percentage as a function of the time step ∆, for τ = 5 and σ = 0.5 fulfilling the conditions of Corollary 2. Note that the Milstein scheme introduces a larger bias in the variance than the Euler-Maruyama method. Moreover, all splitting schemes yield significantly smaller asymptotic variance biases (in absolute value) than the Euler-Maruyama, Milstein and piecewise linear methods. The log-ODE method, however, outperforms the splitting methods. This fact holds true also for other values of τ and σ, see the bottom panels of Figure 4 and Figure 5, where we fix ∆ = 0.1 and plot (54) as a function of τ and σ, respectively. However, there exist combinations of τ and σ for which the condition σ²τ < 2 is satisfied and the first Strang and Lie-Trotter methods outperform the log-ODE method in terms of asymptotic variance preservation. This is illustrated in Figure 6, where we provide a heatmap of

ratioBias := |rBias_∆(Var(Ỹ^{L1,L2,S1}_∞))| / |rBias_∆(Var(Ỹ^{Log}_∞))|,   (55)

for different values of τ and σ.
In particular, the region within the white lines corresponds to combinations of \u03c4 and \u03c3 for which this ratio is smaller than 1, i.e., for which the \ufb01rst Strang and Lie-Trotter methods introduce a smaller relative asymptotic variance bias (in absolute value) than the log-ODE method. This can be also observed in the bottom right panel of Figure 3, where we compare the log-ODE and splitting methods for \u03c3 = 0.5, 0.55, 0.6, with \u03c3 = 0.55 within the region marked by the white lines of Figure 6. 17 \fFigure 6: Ratio (55) of the relative asymptotic variance biases of the L1, L2, S1 and Log schemes, for di\ufb00erent values of \u03c4 and \u03c3. For the region within the white lines this ratio is smaller than 1. Note also that all biases increase when \u03c4 is very small (cf. Figure 4). This is because all biases depend on the ratio \u2206/\u03c4, requiring a small value of the time step \u2206to keep this ratio constant when \u03c4 is small. Moreover, for small values of \u03c4 the process gets closer to the boundary, where its behaviour is more di\ufb03cult to preserve. The ODE methods, however, are less deterred by small values of \u03c4, possibly due to their dependence on L\u2206,\u03c4,\u03c3 (35), \u00af L\u2206,\u03c4,\u03c3 (69) and e L\u2206,\u03c4,\u03c3 (70). Moreover, the relative biases (in absolute value) of the splitting and ODE methods decrease for large values of \u03c4, while the relative variance biases (in absolute value) of the Euler-Maruyama and Milstein schemes initially decrease and then increase. Interestingly, while the relative asymptotic variance biases (in absolute value) of the EulerMaruyama, Milstein and ODE schemes increase in \u03c3 (bottom panels of Figure 5), that of the second Strang method decreases as \u03c3 increases (bottom right panel), and that of the \ufb01rst Strang and LieTrotter schemes \ufb01rst decreases, and then increases again. The latter scenario is related to the fact that there do exist parameter values for which the \ufb01rst Strang and Lie-Trotter methods introduce a smaller relative variance bias (in absolute value) than the log-ODE method, see Figure 6. In general, if both \u03c4 and \u03c3 are large, such that the stationary condition \u03c32\u03c4 < 2 is only met tightly, the splitting and log-ODE schemes perform well compared to the Euler-Maruyama, Milstein and piecewise linear method in terms of preserving the asymptotic variance. In particular, even though the Strang and log-ODE schemes perform slightly worse than the Euler-Maruyama and Milstein schemes in terms of the asymptotic mean, they clearly outperform them in terms of the asymptotic variance. For this reason, when, for example, analysing the asymptotic coe\ufb03cient of variation, i.e., CV(Y\u221e) := p Var(Y\u221e)/E[Y\u221e], which is a measure of dispersion that allows to simultaneously study the error impinging on both quantities, the Strang and log-ODE schemes are superior to all other schemes. Moreover, if the stationary condition \u03c32\u03c4 < 2 is only met tightly, the \ufb01rst Strang scheme outperforms the second one in terms of the CV. 4.3 Preservation of the boundary properties As discussed in Section 2, the boundary 0 of the IGBM may be of entrance, unattainable and attracting or exit type, depending on the parameter \u00b5. Corresponding properties motivated by this classi\ufb01cation have been introduced at the end of Section 2. 
A numerical scheme e Y (ti) is said to preserve these properties if the following discrete versions are ful\ufb01lled: \u2022 Discrete unattainable property: If \u00b5 \u22650, then P(e Y (ti) > 0|e Y (ti\u22121) > 0) = 1. \u2022 Discrete absorbing property: If \u00b5 = 0, then P(e Y (t1) = 0|Y0 = 0) = 1. \u2022 Discrete entrance property: If \u00b5 > 0, then P(e Y (t1) > 0|Y0 = 0) = 1. \u2022 Discrete exit property: If \u00b5 < 0, then P(e Y (ti) < 0|e Y (ti\u22121) \u22640) = 1. 18 \fIt is well known that the Euler-Maruyama and Milstein schemes may fail in meeting such conditions. For example, the Euler-Maruyama scheme (7) does not ful\ufb01ll the discrete unattainable property for any choice of \u2206, since \u03bei\u22121 assumes all values in R with a positive probability [30]. Moreover, the Milstein scheme (8) may not ful\ufb01ll this property either, if e Y (ti\u22121)(\u03c32 + 2 \u03c4 )\u22122\u00b5 > 0, unless the time discretisation step \u2206satis\ufb01es \u2206< 2yG\u2032(y) \u2212G(y) (G(y)G\u2032(y) \u22122F(y))G\u2032(y) = y \u03c32y + 2 \u03c4 y \u22122\u00b5, (56) where y := e Y (ti\u22121) and G\u2032(y) denotes the derivative of G with respect to y [30]. Thus, to guarantee positivity, the time step \u2206would need to be updated in every iteration step. Note that, the discrete absorbing and entrance properties are the only properties which are satis\ufb01ed by the Euler-Maruyama and Milstein schemes, for any time step \u2206. In contrast, the ODE methods and the derived splitting schemes preserve the di\ufb00erent boundary properties for any choice of time step \u2206> 0, as shown below. Moreover, their boundary behaviour depends only on the parameter \u00b5, as it is the case for the IGBM. Proposition 3. Let e Y (ti) be the splitting, piecewise linear and log-ODE schemes de\ufb01ned through (15)-(18), (22) and (23), respectively. They ful\ufb01ll the discrete unattainable, absorbing, entrance and exit properties for any choice of the time step \u2206. The discrete boundary properties can be veri\ufb01ed from (15)-(18), (22) and (23), using the corresponding assumptions on the parameter \u00b5 and the positivity of the exponential function. A detailed proof of Proposition 3 is given in Appendix D. 5 Simulation results We now illustrate the theoretical results introduced in the previous sections through a series of simulations. First, we represent graphically the mean-square convergence order of the di\ufb00erent numerical methods and discuss their required computational e\ufb00ort. Second, we focus on the conditional and asymptotic moments. Third, we compare the ability of the di\ufb00erent methods to estimate the stationary density of the process. Finally, we consider the boundary properties, and provide a further investigation of the behaviour of the numerical solutions at the boundary. 5.1 Mean-square convergence order and computational e\ufb00ort The mean-square convergence order of the di\ufb00erent numerical methods can be approximated via the root mean-squared error (RMSE) considered as a function of the time step \u2206. In particular, we de\ufb01ne RMSE(\u2206) := 1 n n X k=1 \f \f \fYk(tmax) \u2212e Yk(tmax) \f \f \f 2 !1/2 , (57) where Yk(tmax) and e Yk(tmax) denote the k-th realisation and approximation (obtained under a numerical method using the time step \u2206) of the process, respectively, at a \ufb01xed time tmax. In the left panel of Figure 7, we report the RMSEs of the di\ufb00erent schemes as a function of the time step \u2206and in log10 scale. 
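A direct way to estimate these rates is to evaluate (57) over a grid of step sizes with all approximations driven by the same Brownian paths. The following sketch (ours, NumPy assumed; not the paper's implementation) does this for the first Strang scheme, using a fine-step S1 path with ∆ = 2^{−10} as a proxy for the unavailable exact solution, and the parameters µ = 1, τ = 5, σ = 0.2, Y0 = 10 used elsewhere in the paper; note that the reference for Figure 7 is instead the log-ODE scheme, as described below:

import numpy as np

def s1_path_endpoint(y0, mu, tau, sigma, dt, increments):
    # Endpoint of the first Strang scheme (17) driven by the given Wiener increments
    a = 1.0 / tau + 0.5 * sigma**2
    y = y0
    for xi in increments:
        y = (y + 0.5 * mu * dt) * np.exp(-a * dt + sigma * xi) + 0.5 * mu * dt
    return y

def rmse_vs_dt(n_paths=200, t_max=5.0, y0=10.0, mu=1.0, tau=5.0, sigma=0.2, seed=0):
    # Monte Carlo estimate of RMSE(dt) in (57) for S1, coupling coarse and fine paths
    rng = np.random.default_rng(seed)
    dt_ref = 2.0**-10
    n_ref = int(round(t_max / dt_ref))
    errs = {2.0**-l: [] for l in range(0, 7)}
    for _ in range(n_paths):
        fine = rng.normal(0.0, np.sqrt(dt_ref), size=n_ref)
        y_ref = s1_path_endpoint(y0, mu, tau, sigma, dt_ref, fine)
        for dt in errs:
            m = int(round(dt / dt_ref))
            coarse = fine.reshape(-1, m).sum(axis=1)   # aggregate to coarse increments
            errs[dt].append((s1_path_endpoint(y0, mu, tau, sigma, dt, coarse) - y_ref)**2)
    return {dt: float(np.sqrt(np.mean(e))) for dt, e in errs.items()}

print(rmse_vs_dt())   # log10(RMSE) vs log10(dt) should show a slope close to 1 for S1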
We use the same parameter setting as in Figure 4.2 in [24], i.e., we fix n = 10^5, tmax = 5, Y0 = 0.06, µ = 0.004, τ = 10 and σ = 0.6. Since the IGBM is not known explicitly, the values Yk(tmax) are obtained under the log-ODE method, using the small time step ∆ = 2^{−10}. The approximated values Ỹk(tmax) are produced under the considered numerical methods and for different values of ∆, specifically ∆ = 2^{−l}, l = 0, ..., 8. Note that the Yk(tmax) and Ỹk(tmax) have to be computed with respect to the same Brownian paths, see [24] and its supporting code for how to deal with the rescaled space-time Lévy areas of a Brownian increment.

Figure 7: RMSE (57) in log10 scale for E, M, L1, L2, S1, S2, Lin, Log as a function of ∆, for n = 10^5, tmax = 5, and the same parameter values as used in Figure 4.2 in [24], i.e., Y0 = 0.06, µ = 0.004, τ = 10 and σ = 0.6.

Table 2: Number of operations, function evaluations and random numbers required per iteration, i.e., required to produce Ỹ(ti) given Ỹ(ti−1), ∆, τ, µ and σ.

Effort    +, −, ×, /    √·    exp(·)    N(0, 1)    Total
E         10            1     0         1          12
M         15            1     0         1          17
L1        12            1     1         1          15
L2        12            1     1         1          15
S1        14            1     1         1          17
S2        19            1     2         2          24
Lin       15            1     1         1          18
Log       26            2     1         2          31

As expected, we observe a mean-square convergence rate of order 3/2 for the log-ODE scheme, a rate of order 1 for the Milstein, piecewise linear and splitting methods, and a rate of order 1/2 for the Euler-Maruyama discretisation. The log-ODE method yields the smallest RMSEs, and the Euler-Maruyama method produces the largest error estimates. Among the order 1 methods we observe differences in their accuracies. The first Strang scheme yields the smallest RMSEs, with error estimates slightly smaller than those of the piecewise linear method. Moreover, the RMSEs of the two Lie-Trotter and second Strang methods are almost the same, the second Strang method performing slightly worse than the Lie-Trotter schemes. The Milstein method yields the largest error estimates in the considered class of order 1 methods.

These results should be considered in relation to the computational effort required by the different schemes to generate a path, see, e.g., [18]. We measure this effort by counting the number of operations, function evaluations and random numbers required per iteration, i.e., required to produce Ỹ(ti) given Ỹ(ti−1), ∆, τ, µ and σ. This is summarised in Table 2. The log-ODE method requires the largest computational effort, and the Euler-Maruyama method the smallest. While the effort required by the two Lie-Trotter schemes is the same, the effort of the second Strang method clearly exceeds that of the first. Moreover, while the second Strang scheme yields almost the same RMSEs as the Lie-Trotter schemes (cf. Figure 7), it requires a greater effort to produce these errors (cf. Table 2).
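The operation counts in Table 2 can be complemented by crude wall-clock timings. The sketch below (ours; plain Python with NumPy, timing a scalar Python loop rather than vectorised code, so the absolute numbers are only indicative) compares the cost per iteration of the Euler-Maruyama and first Strang steps:

import time
import numpy as np

def em_step(y, mu, tau, sigma, dt, rng):
    # Euler-Maruyama step (7)
    xi = rng.normal(0.0, np.sqrt(dt))
    return y + dt * (-y / tau + mu) + sigma * y * xi

def s1_step(y, mu, tau, sigma, dt, rng):
    # First Strang step (17)
    xi = rng.normal(0.0, np.sqrt(dt))
    return (y + 0.5 * mu * dt) * np.exp(-(1.0 / tau + 0.5 * sigma**2) * dt + sigma * xi) + 0.5 * mu * dt

def time_per_step(step_fn, n_iter=100_000, **params):
    # Crude wall-clock cost per iteration of a one-step map
    rng = np.random.default_rng(0)
    y = params.pop("y0")
    t0 = time.perf_counter()
    for _ in range(n_iter):
        y = step_fn(y, rng=rng, **params)
    return (time.perf_counter() - t0) / n_iter

for name, fn in [("E", em_step), ("S1", s1_step)]:
    print(name, time_per_step(fn, y0=10.0, mu=1.0, tau=5.0, sigma=0.2, dt=0.1))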
Figure 8: Theoretical conditional and asymptotic means and variances of the different numerical methods (lines) as functions of ∆, for µ = 1, τ = 5, σ = 0.2, Y0 = 10 and ti = 15, and corresponding values obtained via simulations (symbols), for ∆ = 0.25, 0.5, 0.75, 1. The true conditional and asymptotic mean and variance of the IGBM are represented by the grey horizontal lines.

5.2 Conditional and asymptotic moments

Here, we illustrate that the conditional and asymptotic means and variances obtained via numerical simulations are in agreement with the previously derived theoretical expressions. To do so, we define the sample mean m̂_{ti} and variance v̂_{ti} as follows

E[Y(ti)|Y0] ≈ E[Ỹ(ti)|Y0] ≈ m̂_{ti} := (1/n) Σ_{k=1}^{n} Ỹk(ti),   (58)
Var(Y(ti)|Y0) ≈ Var(Ỹ(ti)|Y0) ≈ v̂_{ti} := (1/(n−1)) Σ_{k=1}^{n} (Ỹk(ti) − m̂_{ti})²,   (59)

where Ỹk(ti) denotes the k-th simulated value of Y(ti) under each considered numerical method, respectively. We denote by RE(m̂_{ti}) and RE(v̂_{ti}) the relative biases (36) and (37), estimated replacing E[Ỹ(ti)|Y0] and Var(Ỹ(ti)|Y0) with the sample mean (58) and variance (59), respectively. To investigate the asymptotic case, we fix ti = 100 and denote by RE(m̂_{100}) and RE(v̂_{100}) the relative biases (53) and (54), estimated replacing E[Ỹ∞] and Var(Ỹ∞) with m̂_{100} and v̂_{100}, respectively.

In the top and bottom left panels of Figure 8, we fix ti = 15 and report the true conditional mean E[Y(15)|Y0] (3) and variance Var(Y(15)|Y0) (4) (grey horizontal lines), the theoretical conditional means E[Ỹ(15)|Y0] (33) and variances Var(Ỹ(15)|Y0) (34) of the numerical methods as a function of the time step ∆, and their estimated values (symbols) m̂_{15} (58) and v̂_{15} (59), derived for ∆ = 0.25, 0.5, 0.75, 1. We calculate the sample moments from n = 10^7 simulations of Y(15), for µ = 1, τ = 5, σ = 0.2 and Y0 = 10. In the middle and bottom right panels of Figure 8, we report the true asymptotic mean E[Y∞] (5) and variance Var(Y∞) (6) (grey horizontal lines), the theoretical asymptotic means E[Ỹ∞] (40), (43)-(46), (49), (50) and variances Var(Ỹ∞) (41), (42), (47), (48), (51), (52) as a function of the time step ∆, and their estimated values (symbols) m̂_{100} (58) and v̂_{100} (59), derived for ∆ = 0.25, 0.5, 0.75, 1. The corresponding relative biases RE(m̂_{15}), RE(v̂_{15}), RE(m̂_{100}) and RE(v̂_{100}), for ∆ = 0.5 and ∆ = 1, are reported in percentage in Table 3. The quantities obtained through numerical simulations are in agreement with the theoretical ones.
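For completeness, a minimal Monte Carlo sketch (ours, NumPy assumed) of the sample moments (58)-(59), here under the second Strang scheme (18) and with far fewer replications than the n = 10^7 used in the paper:

import numpy as np

def sample_moments_s2(n=10_000, ti=15.0, dt=0.25, y0=10.0, mu=1.0, tau=5.0, sigma=0.2, seed=0):
    # Sample conditional mean (58) and variance (59) at time ti under the S2 scheme (18)
    rng = np.random.default_rng(seed)
    a = 1.0 / tau + 0.5 * sigma**2
    steps = int(round(ti / dt))
    y = np.full(n, y0)
    for _ in range(steps):
        phi = rng.normal(0.0, np.sqrt(dt / 2.0), size=n)
        psi = rng.normal(0.0, np.sqrt(dt / 2.0), size=n)
        y = (y * np.exp(-a * dt + sigma * (phi + psi))
             + mu * dt * np.exp(-a * dt / 2.0 + sigma * psi))
    return y.mean(), y.var(ddof=1)

m_hat, v_hat = sample_moments_s2()
print(m_hat, v_hat)   # to be compared with the theoretical values from Corollary 1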
Moreover, we veri\ufb01ed that there is no noteworthy di\ufb00erence in the standard deviations of the estimated values across the di\ufb00erent numerical schemes. 5.3 Stationary density As a further illustration, we investigate the stationary distribution of the IGBM. Under the conditions \u03c32\u03c4 < 2 and \u00b5 > 0, the stationary distribution of Y exists and is an inverse gamma distribution [5, 21, 23, 62] with mean (5) and variance (6). The probability density function of the stationary distribution of Y , which we denote by fY\u221e, is given by fY\u221e(y; \u03b1, \u03b2) := \u03b2\u03b1 \u0393(\u03b1)y\u2212\u03b1\u22121e\u2212\u03b2/y, (60) where \u0393(\u00b7) denotes the gamma function, \u03b1 = 1 + 2/\u03c32\u03c4 and \u03b2 = 2\u00b5/\u03c32. In Figure 9, we report the true stationary density fY\u221e(60) (grey solid lines) and the densities \u02c6 fY\u221e, estimated from n = 107 simulated values of Y (100), for \u00b5 = 1, \u03c4 = 5, \u03c3 = 0.55 and Y0 = 10, using the di\ufb00erent schemes. The densities are calculated with a kernel density estimator, i.e., fY\u221e(y) \u2248\u02c6 fY\u221e(y) := 1 nh n X k=1 K y \u2212e Yk(100) h ! , where the bandwidth h is a smoothing parameter and K is a kernel function (here Gaussian). If \u2206= 0.5 (left panels), the Strang and ODE schemes (bottom left panel) accurately preserve the stationary density, while the other schemes (top left panel) yield estimates that deviate from the 22 \ftrue density. This discrepancy increases as \u2206increases (top right panel), while the Strang and ODE schemes (bottom right panel) still yield satisfactory estimates. To quantify the distance between the true and the estimated densities under the considered numerical schemes for di\ufb00erent time steps, we consider their Kullback-Leibler (KL) divergences given by KL := Z fY\u221e(y) log fY\u221e(y) \u02c6 fY\u221e(y) ! dy, (61) where the integral is approximated using trapezoidal integration. The results shown in Figure 9 are con\ufb01rmed by the KL divergences (61) reported in Table 3. In particular, the best performance is achieved by the log-ODE method, which yields a very accurate estimate of the stationary density, even for \u2206= 1, and even though for \u03c4 = 5 and \u03c3 = 0.55 the \ufb01rst Strang scheme introduces a smaller bias in the asymptotic variance (see Subsection 4.2.2). Moreover, for the chosen parameter setting, the Strang schemes yield slightly better estimates of the stationary density than the piecewise linear method, and the Lie-Trotter schemes outperform the Euler-Maruyama and Milstein methods. Table 3: Comparison of theoretical quantities with simulated values (n = 107). We report relative conditional mean and variance biases (36) and (37), asymptotic mean and variance biases (53) and (54) (in parentheses) and 1000 times the KL divergences (61) for \u2206= 0.5 and \u2206= 1. The parameters are Y0 = 10, \u00b5 = 1, \u03c4 = 5, \u03c3 = 0.2 (REs) and \u03c3 = 0.55 (KLs). 
∆ = 0.5    RE(m̂15) in %age     RE(v̂15) in %age     RE(m̂100) in %age    RE(v̂100) in %age    1000 · KL
E          −0.694 (−0.705)      4.425 (4.4)          −0.001 (0)           5.897 (5.882)        0.925
M          −0.694 (−0.705)      5.602 (5.586)        −0.001 (0)           7.092 (7.067)        1.124
L1         −4.44 (−4.45)        0.81 (0.786)         −4.917 (−4.917)      −0.191 (−0.216)      0.194
L2         4.611 (4.601)        −1.163 (−1.187)      5.083 (5.083)        −0.191 (−0.216)      0.445
S1         0.085 (0.075)        −0.18 (−0.205)       0.083 (0.083)        −0.191 (−0.216)      0.005
S2         −0.039 (−0.038)      0.228 (0.204)        −0.025 (−0.042)      0.281 (0.233)        0.009
Lin        −0.152 (−0.151)      −0.33 (−0.335)       −0.167 (−0.166)      −0.391 (−0.415)      0.045
Log        −0.008 (−0.0001)     −0.094 (−0.001)      0.01 (−0.0001)       0.07 (−0.001)        0.003

∆ = 1      RE(m̂15) in %age     RE(v̂15) in %age     RE(m̂100) in %age    RE(v̂100) in %age    1000 · KL
E          −1.384 (−1.391)      9.315 (9.296)        0.006 (0)            12.461 (12.5)        5.139
M          −1.383 (−1.391)      11.796 (11.807)      0.003 (0)            14.957 (15.038)      3.743
L1         −8.742 (−8.75)       1.176 (1.169)        −9.663 (−9.667)      −0.921 (−0.862)      0.639
L2         9.361 (9.353)        −2.762 (−2.769)      10.337 (10.333)      −0.921 (−0.862)      2.981
S1         0.31 (0.302)         −0.808 (−0.815)      0.337 (0.333)        −0.921 (−0.862)      0.069
S2         −0.129 (−0.151)      0.91 (0.814)         −0.164 (−0.166)      0.879 (0.928)        0.071
Lin        −0.286 (−0.301)      −0.745 (−0.802)      −0.329 (−0.332)      −1.05 (−0.992)       0.208
Log        0.003 (−0.0002)      −0.005 (−0.003)      0.009 (−0.0002)      0.019 (−0.004)       0.001

Figure 9: Comparison of the stationary density fY∞ (grey solid lines) and the estimated densities f̂Y∞ based on n = 10^7 simulations of Y(100), generated with the different numerical schemes, for ∆ = 0.5 (left panels) and ∆ = 1 (right panels). The underlying parameters are µ = 1, τ = 5, σ = 0.55 and Y0 = 10. The corresponding KL divergences (61) are reported in Table 3.

5.4 Boundary properties

An illustration of the preservation of the boundary properties by the splitting and ODE schemes is provided in Figure 10, where we report trajectories generated with the first Lie-Trotter, first Strang, piecewise linear and log-ODE schemes when the boundary 0 is of entrance (top panel), unattainable and attracting (middle panel) and exit (bottom panel) type. In particular, we use µ = 0.5, 0 and −0.5, respectively, τ = 5 and σ = 1.

5.5 Crossing probability

As a further illustration of the boundary behaviour, we investigate the probability that the process Y crosses the boundary 0 in a fixed time interval (0, tmax], with tmax > 0 and Y0 > 0. We define

T := inf{t > 0 : Y(t) ≤ 0}   (62)

as the first passage (hitting) time of Y through 0, and estimate the probability that T < tmax as follows

P(T < tmax) ≈ F̂_T(tmax) := (1/n) Σ_{k=1}^{n} 1{Tk < tmax},   (63)

where Tk denotes the first passage time of the k-th of n simulated trajectories. In Figure 11, we report the estimated crossing probabilities (63) under the different numerical schemes as a function of µ, for tmax = 0.5, τ = σ = 5, Y0 = 1 and the time steps ∆ = 0.01, 0.025 and 0.05, based on n = 10^6 trajectories; the boundary 0 is then of entrance, unattainable and attracting or exit type depending on whether µ > 0, µ = 0 or µ < 0, respectively. Note that the functions obtained under the splitting and ODE schemes lie close to each other, in spite of the large value of σ. When the boundary 0 is of entrance or unattainable and attracting type, it is known that P(T < tmax) = 0 for all values of tmax.
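A minimal sketch of this Monte Carlo estimator (ours, NumPy assumed), here under the first Strang scheme (17) and with fewer trajectories than the n = 10^6 used for Figure 11:

import numpy as np

def crossing_probability(mu, tau=5.0, sigma=5.0, y0=1.0, t_max=0.5, dt=0.01, n=10_000, seed=0):
    # Estimate (63) of P(T < t_max), with T the first passage time (62), under the S1 scheme (17)
    rng = np.random.default_rng(seed)
    a = 1.0 / tau + 0.5 * sigma**2
    steps = int(round(t_max / dt))
    y = np.full(n, y0)
    crossed = np.zeros(n, dtype=bool)
    for _ in range(steps):
        xi = rng.normal(0.0, np.sqrt(dt), size=n)
        y = (y + 0.5 * mu * dt) * np.exp(-a * dt + sigma * xi) + 0.5 * mu * dt
        crossed |= (y <= 0.0)
    return crossed.mean()

print(crossing_probability(mu=-0.5), crossing_probability(mu=0.5))

In line with Proposition 3, the S1 update stays strictly positive whenever µ ≥ 0 and the starting value is positive, so the estimated probability is exactly zero in the entrance and attracting cases, whereas it becomes positive for µ < 0.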
However, only the splitting and ODE schemes correctly preserve this property, i.e., they report an estimated crossing probability of zero whenever µ ≥ 0, while the Euler-Maruyama method drastically fails for all considered values of ∆ and the Milstein scheme only preserves it for small values of ∆ (left and middle panels). The latter is in agreement with condition (56). Consider, e.g., y = Y0 = 1. Then ∆ < 5/122 ≈ 0.0402 and ∆ < 5/127 ≈ 0.0394 is required in the entrance or unattainable and attracting case, respectively. In the exit scenario, the probabilities obtained from the Euler-Maruyama and Milstein schemes lie above those obtained from the splitting and ODE schemes. This suggests that the Euler-Maruyama and Milstein methods yield trajectories that exit from [0, +∞) faster than those generated from the other schemes. Similar results are obtained when studying these probabilities as a function of tmax for fixed µ. Moreover, independent of the type of boundary behaviour, the crossing probabilities obtained from the Strang splitting and log-ODE schemes seem not to vary significantly as ∆ increases (a few undetected crossings may occur). This suggests their reliability even for large time steps, while those obtained from the Euler-Maruyama and Milstein schemes change for different choices of ∆. The crossing probabilities derived under the Lie-Trotter and piecewise linear methods deviate slightly as ∆ is increased, the latter one performing a bit better.

Figure 11: Probability P(T < tmax) (63), estimated from n = 10^6 simulated trajectories under the different numerical schemes, as a function of µ, for different choices of the time step, namely ∆ = 0.01 (left panel), ∆ = 0.025 (middle panel) and ∆ = 0.05 (right panel), tmax = 0.5, τ = σ = 5 and Y0 = 1. The boundary 0 is of entrance, unattainable and attracting or exit type depending on whether µ > 0, µ = 0 (denoted by dashed grey vertical lines) or µ < 0, respectively.

6 Conclusions