diff --git "a/abs_29K_G/test_abstract_long_2405.04003v1.json" "b/abs_29K_G/test_abstract_long_2405.04003v1.json" new file mode 100644--- /dev/null +++ "b/abs_29K_G/test_abstract_long_2405.04003v1.json" @@ -0,0 +1,307 @@ +{ + "url": "http://arxiv.org/abs/2405.04003v1", + "title": "High Energy Density Radiative Transfer in the Diffusion Regime with Fourier Neural Operators", + "abstract": "Radiative heat transfer is a fundamental process in high energy density\nphysics and inertial fusion. Accurately predicting the behavior of Marshak\nwaves across a wide range of material properties and drive conditions is\ncrucial for design and analysis of these systems. Conventional numerical\nsolvers and analytical approximations often face challenges in terms of\naccuracy and computational efficiency. In this work, we propose a novel\napproach to model Marshak waves using Fourier Neural Operators (FNO). We\ndevelop two FNO-based models: (1) a base model that learns the mapping between\nthe drive condition and material properties to a solution approximation based\non the widely used analytic model by Hammer & Rosen (2003), and (2) a model\nthat corrects the inaccuracies of the analytic approximation by learning the\nmapping to a more accurate numerical solution. Our results demonstrate the\nstrong generalization capabilities of the FNOs and show significant\nimprovements in prediction accuracy compared to the base analytic model.", + "authors": "Joseph Farmer, Ethan Smith, William Bennett, Ryan McClarren", + "published": "2024-05-07", + "updated": "2024-05-07", + "primary_cat": "physics.comp-ph", + "cats": [ + "physics.comp-ph", + "cs.LG" + ], + "label": "Original Paper", + "paper_cat": "Diffusion AND Model", + "gt": "Radiative heat transfer is a fundamental process in high energy density\nphysics and inertial fusion. Accurately predicting the behavior of Marshak\nwaves across a wide range of material properties and drive conditions is\ncrucial for design and analysis of these systems. Conventional numerical\nsolvers and analytical approximations often face challenges in terms of\naccuracy and computational efficiency. In this work, we propose a novel\napproach to model Marshak waves using Fourier Neural Operators (FNO). We\ndevelop two FNO-based models: (1) a base model that learns the mapping between\nthe drive condition and material properties to a solution approximation based\non the widely used analytic model by Hammer & Rosen (2003), and (2) a model\nthat corrects the inaccuracies of the analytic approximation by learning the\nmapping to a more accurate numerical solution. Our results demonstrate the\nstrong generalization capabilities of the FNOs and show significant\nimprovements in prediction accuracy compared to the base analytic model.", + "main_content": "Introduction Marshak waves, a common type of driven supersonic radiative heat waves, play a key part in the physics of internal confinement fusion (ICF) [1\u20134], astrophysics [5\u20137] and other high energy density phenomena [8]. In most cases, a full description of the radiative transfer process is not required. Therefore, approximations are in order. The diffusion approximation is one of these and is considered the simplest [9]. In some cases, analytic solutions to the radiation diffusion equation can be useful in understanding experiments [10\u201316]. These analytic or semi-analytic models can be thought of as a reduced order approximation of the full system, which is itself a simplification. 
As examples, [10] reduces a two-dimensional diffusion system via asymptotic expansion; the diffusion system is itself an approximation to higher order radiation transport equations. Marshak, the namesake of these waves, reduced a partial differential equation (PDE) into an ordinary differential equation (ODE) [13, 14]. Reduced order solutions have the benefit of simpler calculation, as solving an ODE is usually preferable to solving a PDE, and they can be interrogated to clarify physical relationships between parameters. However, arriving at a semi-analytic or analytic solution often involves invoking simplifications which may degrade the accuracy of the prediction. Thus, the motivation for this work is to take a widely used and appreciated semi-analytic diffusion model, the Hammer and Rosen Marshak wave model (HR) [11], and provide a correction to the model's limiting assumptions in a computationally efficient manner.

Classical numerical solvers such as finite difference, finite element, or finite volume methods discretize continuous equations into a finite set of algebraic equations [17\u201322]. These numerical solvers can be computationally expensive for high dimensional problems and for domains with complex geometries. In recent years, approaches that leverage machine learning (ML) have garnered support to alleviate these challenges [23\u201325]. In particular, neural operators, a class of ML models, have emerged as a promising solution. These operators learn mappings between infinite-dimensional function spaces, effectively approximating the differential or integral operators that govern PDEs in a data-driven manner [26, 27]. One of the key advantages of neural operators is that they only need to be trained once to learn a family of PDEs, and obtaining a solution for a new instance of a PDE parameter requires only a forward pass of the network. Furthermore, neural operators are discretization-invariant, as they share network parameters across discretizations, allowing for the transfer of solutions between meshes.

The Fourier neural operator (FNO) [28] is a seminal neural operator that learns network parameters in Fourier space. The FNO uses the fast Fourier transform (FFT) for spectral decomposition of the input and computation of the convolution integral kernel in Fourier space. This approach has shown promising results in learning the underlying physics of various PDEs including the Burgers, Darcy, and Navier-Stokes equations. In this work, we propose to use the FNO to learn the physics of Marshak waves for various input-output pairs. We develop two models: a base model which takes the physical parameters of the Marshak wave problem as input and outputs the time-dependent wavefront position and temperature distribution as given by the HR model, and a hybrid approach which corrects the analytic HR solution to output the numerical solution to the full flux-limited diffusion equation.

The structure of this paper is as follows. The diffusion model for Marshak waves is introduced in Section 2. Hammer and Rosen's approximation is summarized in Section 3. The neural network that is employed to correct the HR model is discussed in Section 4. Finally, results and conclusions are offered in Sections 5 and 6.

2 Marshak wave problem

We study radiation diffusion in planar geometry, which assumes variation of the dependent variables only in a single direction, x.
The evolutions of the radiation and material energy densities are governed by [29],

$$\frac{\partial e_r}{\partial t} = \frac{\partial}{\partial x}\,\frac{c}{3\kappa(\rho, T)}\,\frac{\partial e_r}{\partial x} + c\kappa\left(aT^4 - e_r\right), \tag{1}$$

$$\frac{\partial e}{\partial t} = c\kappa\left(e_r - aT^4\right), \tag{2}$$

where $e_r$ is the energy density of the radiation and $e$ is the energy density of the material, $c$ is the speed of light, $\kappa$ is the opacity with units of inverse length, and $a$ is the radiation constant, defined as $a \equiv 4\sigma/c$, where $\sigma$ is the Stefan-Boltzmann constant. $T$ is the material temperature and $\rho$ is the material density. A Marshak boundary condition specifies the incoming radiation flux [29],

$$e_r(x=0,t) - \left.\left(\frac{2}{3\kappa}\frac{\partial e_r}{\partial x}\right)\right|_{x=0} = \frac{4}{c}F_{\mathrm{inc}}, \tag{3}$$

where $F_{\mathrm{inc}}$ is the incident flux on the surface at $x = 0$. The material energy density is found via integration of the specific heat,

$$e = \int_0^T dT'\, C_v(T'). \tag{4}$$

Solutions to Eq. (1) in the optically thick limit are recognizable by sharp drops in temperature near the wavefront and gradual temperature variation behind the front. This is because the radiation temperature and material temperature are in equilibrium behind the wavefront. Thus, it is often valid to assume equilibrium between the radiation temperature and the material temperature, i.e., $e_r = aT^4$. This assumption simplifies Eqs. (1) and (2) to a single equation for the material temperature,

$$\frac{\partial e}{\partial t} = \frac{4}{3}\frac{\partial}{\partial x}\,\frac{1}{\kappa(\rho, T)}\left(\frac{\partial}{\partial x}\sigma T^4\right), \tag{5}$$

with the boundary condition at the surface,

$$T(x=0,t) = T_s(t). \tag{6}$$

Furthermore, the equation of state is specified so that

$$e = f T^{\beta} \rho^{-\mu}. \tag{7}$$

This is the formulation given in [11]. The parameters $f$, $\beta$, $\mu$ are found by fitting experimental data, as in [30].

3 Hammer and Rosen approximation

The Hammer and Rosen model for supersonic thermal radiation diffusion is a perturbative, semi-analytic, one-dimensional solution to the diffusion equation under mild limiting assumptions. In particular, this model assumes planar geometry, power-law representations for the opacity, $1/K = g T^{\alpha} \rho^{-\lambda}$, and material internal energy, $e = f T^{\beta} \rho^{-\mu}$, and a constant density. These assumptions transform Eq. (5) into

$$\rho\frac{\partial e}{\partial t} = \frac{4}{3}\frac{\partial}{\partial x}\left(\frac{1}{K\rho}\frac{\partial}{\partial x}\sigma T^4\right), \tag{8}$$

where $\rho$ is the material density, $e$ is the internal energy, $\sigma$ is the Stefan-Boltzmann constant, and $T$ is the radiation temperature. The application of these assumptions and some simplification leads to the expression

$$\frac{\partial T^{\beta}}{\partial t} = C\,\frac{\partial^2}{\partial x^2} T^{4+\alpha}, \tag{9}$$

where our constants are collected into the term

$$C = \frac{4}{4+\alpha}\,\frac{4}{3}\,\frac{1}{f}\, g\, \rho^{\mu-2-\lambda}. \tag{10}$$

This model predicts the position of the wave front as a function of time as the solution to an integral expression, then provides an explicit expression for the temperature profile in the material. The model can accommodate an arbitrary radiation temperature boundary condition. The Hammer and Rosen model gives the position of the wavefront, $x_f$, as

$$x_f^2(t) = \frac{2+\epsilon}{1-\epsilon}\, C\, T_s^{-\beta} \int_0^t T_s^{4+\alpha}\, d\hat{t}, \tag{11}$$

where $T_s$ is the boundary temperature, $\epsilon = \beta/(4+\alpha)$ is a combination of terms from the power laws, and $x_f$ is the heat front position as a function of time, $t$.
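To make the use of Eq. (11) concrete, the following is a minimal sketch, in our own notation and not the authors' code, of evaluating the front position by trapezoidal quadrature for a tabulated drive temperature $T_s(t)$:

```python
import numpy as np

def hr_front_position(t, T_s, g, f, alpha, beta, lam, mu, rho):
    """Hammer & Rosen front position x_f(t) from Eq. (11) by trapezoidal quadrature.

    t   : 1D array of times with t[0] = 0
    T_s : 1D array of boundary (drive) temperatures at those times
    The remaining arguments are the power-law fit parameters of Eqs. (7)-(10).
    """
    eps = beta / (4.0 + alpha)                       # epsilon = beta/(4 + alpha)
    C = (4.0 / (4.0 + alpha)) * (4.0 / 3.0) * (1.0 / f) * g * rho**(mu - 2.0 - lam)
    integrand = T_s**(4.0 + alpha)                   # running integral of T_s^(4+alpha) dt
    running = np.concatenate(([0.0],
        np.cumsum(0.5 * (integrand[1:] + integrand[:-1]) * np.diff(t))))
    xf2 = (2.0 + eps) / (1.0 - eps) * C * T_s**(-beta) * running
    return np.sqrt(np.maximum(xf2, 0.0))
```

Given $x_f(t)$, the explicit temperature profile of Eq. (12) below can be evaluated in the same inexpensive manner.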
With knowledge of the wavefront position, a simple expression can be evaluated for the temperature profile:

$$\frac{T^{4+\alpha}}{T_s^{4+\alpha}}(x,t) = \left[\left(1 - \frac{x}{x_f}\right)\left(1 + \frac{\epsilon}{2}\left(1 - \frac{x_f^2}{C H^{2-\epsilon}}\frac{dH}{dt}\right)\frac{x}{x_f}\right)\right]^{1/(1-\epsilon)}. \tag{12}$$

Here $H = T_s^{4+\alpha}$. One hallmark of this approximate solution is that it is very inexpensive to evaluate. In practice, and when compared to computing a numerical solution, this method is effectively immediate. For this reason, it has proven to be particularly helpful for rapid iteration during the design process.

4 Fourier neural operator model

We now turn to producing a machine learning model to compute Marshak wave solutions. For this task we use the Fourier neural operator. In this section we use standard notation from the ML literature; regrettably, this overlaps with the standard notation for Marshak waves at times.

Fig. 1: Fourier neural operator architecture for solving the Marshak wave problem. The input function a(x) is projected to a higher-dimensional representation v0(x) by the layer P. This is then processed through l iterations of Fourier layers. Each Fourier layer consists of a Fourier transform F that maps vi(x) to the Fourier domain, multiplication with the weight tensor R and filtering of higher Fourier modes, and an inverse Fourier transform F^{-1} to return to the spatial domain. The output is linearly transformed by W and passed through a nonlinear activation function σ. This is added to the previous Fourier layer's output to produce the updated representation vi+1(x). After l layers, the final representation vl(x) is mapped to the output solution u(x). The boundary temperature drive (top left) and parameters (bottom left) represent the input functions, and the front position (top right) and temperature distribution (bottom right) represent the output functions for the Marshak wave problem.

The primary goal of an operator $\mathcal{G}$ is to establish a mapping between infinite-dimensional spaces from a finite collection of input-output pairs, denoted as $\mathcal{A} = \mathcal{A}(\mathbb{R}^{d_a})$ and $\mathcal{U} = \mathcal{U}(\mathbb{R}^{d_u})$, respectively. Following from [28, 31], consider a partial differential equation (PDE) which maps input function spaces to an output solution space. For a given domain $D \subset \mathbb{R}^d$ with boundary $\partial D$, and $x \in D$, an operator would map source terms, $f(x,t) : D \to \mathbb{R}$, boundary conditions, $u(\partial D, t) : D \to \mathbb{R}$, and initial conditions, $u(x,0) : D \to \mathbb{R}$, to the solution space $u(x,t) : D \to \mathbb{R}$, where $t$ is time. In the present work, we aim to learn the nonlinear differential operator $\mathcal{G} : \mathcal{A} \to \mathcal{U}$ for various sets of input parameters $a \in \mathcal{A}$ in the Marshak wave problem. By constructing a parametric map $G : \mathcal{A} \times \Theta \to \mathcal{U}$, the optimal parameter $\theta \in \Theta$ can be approximated with data-driven methods that adjust $\theta$ such that $G(\cdot, \theta)$ approaches the target map $\mathcal{G}$.
Classical numerical solvers, be they finite element or finite difference methods, as well as many modern data-driven and physics-informed neural networks, compute the output function $u(x,t)$ that satisfies $\mathcal{G}$ for a single instance of the input parameter $a$; this can be computationally prohibitive, especially when the solution of the PDE is required for many instances of the parameter. Fourier neural operators (FNO), on the other hand, have been developed to approximate $\mathcal{G}$ directly, so that solutions to a family of PDEs are realized for different sets of $a$, thereby enhancing computational efficiency and practical utility.

In general, the input and output functions $a$ and $u$ are continuous; however, we assume knowledge of only point-wise evaluations. To that end, the problem at hand can be described using the n-point discretization of $D$, $D_j = \{x_1, \ldots, x_n\} \subset D$, with observations of input-output pairs $\{a_j \in \mathbb{R}^{n \times d_a},\ u_j \in \mathbb{R}^{n \times d_u}\}_{j=1}^{N}$, where $u_j = \mathcal{G}(a_j)$.

The neural operator to learn the input-output mapping is an iterative architecture. First, the input $a(x)$ is transformed to a higher dimensional representation by $v_0(x) = P(a(x))$, where the transformation $P : \mathbb{R}^{d_a} \to \mathbb{R}^{d_v}$; in this framework, a shallow fully connected network can achieve the desired transformation. Next, a series of $l$ updates $v_i \mapsto v_{i+1}$ are performed,

$$v_{i+1}(x) := \sigma\left(W v_i(x) + \left(\mathcal{K}(a;\phi)\, v_i\right)(x)\right), \quad \forall x \in D, \tag{13}$$

with nonlinear activation function $\sigma(\cdot) : \mathbb{R} \to \mathbb{R}$ and a linear transformation $W : \mathbb{R}^{d_v} \to \mathbb{R}^{d_v}$. Each $v_i$ is a $d_v$-dimensional real vector in $\mathbb{R}^{d_v}$. For a vector input $x = [x_1, x_2, \ldots, x_{d_v}]^T \in \mathbb{R}^{d_v}$, $\sigma(x)$ is applied element-wise, resulting in $[\sigma(x_1), \sigma(x_2), \ldots, \sigma(x_{d_v})]^T$. The integral kernel operator $\mathcal{K} : \mathcal{A} \times \Theta_{\mathcal{K}} \to \mathcal{L}(\mathcal{U}, \mathcal{U})$ is parameterized by $\phi \in \Theta_{\mathcal{K}}$,

$$\left(\mathcal{K}(a;\phi)\, v_i\right)(x) := \int_D \kappa_\phi(x, y, a(x), a(y))\, v_i(y)\, dy, \quad \forall x \in D, \tag{14}$$

where $\kappa_\phi : \mathbb{R}^{2(d+d_a)} \to \mathbb{R}^{d_v \times d_v}$ is a neural network parameterized by $\phi \in \Theta_{\mathcal{K}}$. After all iterations, a transformation function $u(x) = Q(v_l(x))$ moves $v_l(x)$ into the solution space, $Q : \mathbb{R}^{d_v} \to \mathbb{R}^{d_u}$.

This approach extends the idea of neural networks to operate on infinite-dimensional function spaces, enabling the learning of mappings between such spaces from finite data samples. By leveraging neural operators, it becomes possible to approximate the nonlinear operators that govern the relationships between infinite-dimensional input and output function spaces, such as those arising in the context of partial differential equations.

The FNO is a specific neural operator architecture designed for such nonlinear mappings. It replaces the kernel integral operator in Eq. (14) with a Fourier convolution operator, $\mathcal{F}^{-1}\left(\mathcal{F}(\kappa_\phi) \cdot \mathcal{F}(v_i)\right)(x)$, by applying the convolution theorem. The Fourier kernel integral operator becomes

$$\left(\mathcal{K}(\phi)\, v_i\right)(x) = \mathcal{F}^{-1}\left(R_\phi \cdot (\mathcal{F} v_i)\right)(x), \quad \forall x \in D,$$

where $\mathcal{F}$ is the Fourier transform of a function, $\mathcal{F}^{-1}$ is its inverse transform, and $R_\phi$ is the Fourier transform of a periodic function $\kappa$ parameterized by $\phi \in \Theta_{\mathcal{K}}$. Given that $\kappa$ is periodic and can be represented by a Fourier series expansion, only discrete modes $k \in \mathbb{Z}^d$ are considered. To create a finite dimensional representation, the Fourier series is truncated at a maximum number of modes $k_{\max} = \left|\{k \in \mathbb{Z}^d : |k_j| \le k_{\max,j} \text{ for } j = 1, \ldots, d\}\right|$.
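As a concrete illustration of a single Fourier layer, the sketch below gives a simplified one-dimensional PyTorch implementation of the update in Eq. (13) with the mode truncation just described. This is our own minimal sketch, not the authors' implementation; the discrete transform and weight multiplication it uses are made precise in Eqs. (15)-(17) below.

```python
import torch
import torch.nn as nn

class SpectralConv1d(nn.Module):
    """Fourier kernel integral operator: FFT, truncate to k_max modes,
    multiply by the complex weight tensor R, inverse FFT."""
    def __init__(self, d_v, k_max):
        super().__init__()
        self.k_max = k_max
        scale = 1.0 / (d_v * d_v)
        self.R = nn.Parameter(scale * torch.randn(k_max, d_v, d_v, dtype=torch.cfloat))

    def forward(self, v):                       # v: (batch, n, d_v), real
        v_hat = torch.fft.rfft(v, dim=1)        # (batch, n//2 + 1, d_v), complex
        out = torch.zeros_like(v_hat)
        k = min(self.k_max, v_hat.shape[1])     # keep only the lowest k modes
        # (R . (F v))_{k,l} = sum_j R_{k,l,j} (F v)_{k,j}
        out[:, :k, :] = torch.einsum("bkj,klj->bkl", v_hat[:, :k, :], self.R[:k])
        return torch.fft.irfft(out, n=v.shape[1], dim=1)

class FourierLayer(nn.Module):
    """One update v_{i+1} = sigma(W v_i + K v_i) of Eq. (13)."""
    def __init__(self, d_v, k_max):
        super().__init__()
        self.spectral = SpectralConv1d(d_v, k_max)
        self.W = nn.Linear(d_v, d_v)            # pointwise linear transformation W
        self.act = nn.GELU()                    # nonlinear activation sigma

    def forward(self, v):
        return self.act(self.W(v) + self.spectral(v))
```

Stacking $l$ such layers between the lifting network $P$ and the projection network $Q$ yields the architecture sketched in Figure 1.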
In a discretized domain $D$ with $n \in \mathbb{N}$ points, $v_i \in \mathbb{R}^{n \times d_v}$ and $\mathcal{F}(v_i) \in \mathbb{C}^{n \times d_v}$ is obtained, where $\mathbb{C}$ denotes the complex numbers. A convolution of $v_i$ with a function that has $k_{\max}$ Fourier modes gives $\mathcal{F}(v_i) \in \mathbb{C}^{k_{\max} \times d_v}$. The multiplication with the weight tensor $R \in \mathbb{C}^{k_{\max} \times d_v \times d_v}$ is then

$$\left(R \cdot (\mathcal{F} v_i)\right)_{k,l} = \sum_{j=1}^{d_v} R_{k,l,j}\, (\mathcal{F} v_i)_{k,j}, \quad k = 1, \ldots, k_{\max}, \quad l = 1, \ldots, d_v. \tag{15}$$

With uniform discretization and resolution $s_1 \times \cdots \times s_d = n$, the fast Fourier transform (FFT) can replace $\mathcal{F}$. For $f \in \mathbb{R}^{n \times d_v}$, $k = (k_1, \ldots, k_d) \in \mathbb{Z}_{s_1} \times \cdots \times \mathbb{Z}_{s_d}$, and $x = (x_1, \ldots, x_d) \in D$, the FFT $\hat{\mathcal{F}}$ and its inverse $\hat{\mathcal{F}}^{-1}$ are defined as

$$(\hat{\mathcal{F}} f)_l(k) = \sum_{x_1=0}^{s_1-1} \cdots \sum_{x_d=0}^{s_d-1} f_l(x_1, \ldots, x_d)\, e^{-2i\pi \sum_{j=1}^{d} x_j k_j / s_j}, \tag{16}$$

$$(\hat{\mathcal{F}}^{-1} f)_l(x) = \sum_{k_1=0}^{s_1-1} \cdots \sum_{k_d=0}^{s_d-1} f_l(k_1, \ldots, k_d)\, e^{2i\pi \sum_{j=1}^{d} x_j k_j / s_j}. \tag{17}$$

Finally, since Eq. (13) follows standard neural network structures, training is done with an appropriate loss function $\mathcal{L} : \mathcal{U} \times \mathcal{U} \to \mathbb{R}$,

$$\Theta^{*} = \arg\min_{\Theta} \mathcal{L}\left(\mathcal{G}(a),\, G(a, \Theta)\right). \tag{18}$$

A schematic representation of the Fourier neural operator model for the Marshak wave problem is provided in Figure 1.

5 Results

5.1 Problem description and parameter space

The Marshak waves we consider concern the propagation of heat waves through low-density foam cylinders or other materials driven by a hohlraum, similar to those described in [30, 32]. Key parameters in these experiments include density, drive energy, and radiation temperature, the last of which typically ranges from 100 to 300 eV. X-ray imaging is used to track the heat wave, while diagnostic tools measure the flux breaking through the foam edge. The experiments cover a wide range of temperatures, materials, and densities.

Table 1, adapted from [30], presents material properties used in various Marshak wave experiments. The first ten rows contain parameters for the foams, while the last two rows provide parameters for coating materials. For each material, the numerical parameters were fitted in relevant experimental regimes. Further details about the experiments can be found in [30] and references cited therein.

Table 1: Material properties for various Marshak wave experiments

| Experiment | Foam | g (g/cm²) | f (MJ) | α | β | λ | µ | ρ (g/cm³) |
| --- | --- | --- | --- | --- | --- | --- | --- | --- |
| Massen | C11H16Pb0.3852 | 1/3200 | 10.17 | 1.57 | 1.2 | 0.1 | 0 | 0.080 |
| Xu pure | C6H12 | 1/3926.6 | 12.27 | 2.98 | 1 | 0.95 | 0.04 | 0.05 |
| Xu with copper | C6H12Cu0.394 | 1/7692.9 | 8.13 | 3.44 | 1.1 | 0.67 | 0.07 | 0.05 |
| Back, Moore | SiO2 | 1/9175 | 8.77 | 3.53 | 1.1 | 0.75 | 0.09 | 0.05 |
| Back | Ta2O5 | 1/8433.3 | 4.78 | 1.78 | 1.37 | 0.24 | 0.12 | 0.04 |
| Back low energy | SiO2 | 1/9652 | 8.4 | 2.0 | 1.23 | 0.61 | 0.1 | 0.01 |
| Moore | C8H7Cl | 1/24466 | 14.47 | 5.7 | 0.96 | 0.72 | 0.04 | 0.105 |
| Keiter Pure | C15H20O6 | 1/26549 | 11.54 | 5.29 | 0.94 | 0.95 | 0.038 | 0.065 |
| Keiter with Gold | C15H20O6Au0.172 | 1/4760 | 9.81 | 2.5 | 1.04 | 0.35 | 0.06 | 0.0625 |
| Ji-Yan | C8H8 | 1/2818.1 | 21.17 | 2.79 | 1.06 | 0.81 | 0.06 | 0.160 |
| (coating) | Au | 1/7200 | 3.4 | 1.5 | 1.6 | 0.2 | 0.14 | 0.160 |
| (coating) | Be | 1/402.8 | 8.81 | 4.89 | 1.09 | 0.67 | 0.07 | 0.160 |

Numerical approximations for solving the Marshak wave problem can be computationally expensive, especially when exploring a wide range of material properties. To overcome this challenge, we propose using the Fourier Neural Operator (FNO) to learn the mapping between material properties and their corresponding Marshak wave solutions.
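Before describing the data, we note how the training objective of Eq. (18) looks in practice. The sketch below is a minimal, illustrative training step with a placeholder network standing in for the full lifting/Fourier-layer/projection architecture of Figure 1; the dimensions and names are our own assumptions, not the authors' code.

```python
import torch
import torch.nn as nn

# Placeholder for G(a, Theta); the full model would be P, l Fourier layers, and Q.
d_a, d_u, d_v = 11, 2, 32          # e.g., 7 material + 4 drive parameters as inputs
model = nn.Sequential(nn.Linear(d_a, d_v), nn.GELU(), nn.Linear(d_v, d_u))

optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()             # the loss L : U x U -> R of Eq. (18)

def training_step(a_batch, u_batch):
    """One gradient step toward Theta* = argmin_Theta L(G(a), G(a, Theta))."""
    optimizer.zero_grad()
    u_pred = model(a_batch)          # G(a, Theta)
    loss = loss_fn(u_pred, u_batch)  # compared against the reference solution G(a)
    loss.backward()
    optimizer.step()
    return loss.item()
```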
FNOs have shown success in solving partial differential equations by learning the solution operator from a dataset of input-output pairs. To train the FNO model, we generate a dataset that spans the parameter space defined by the material properties in Table 1. The input consists of a set of material properties, (g, f, α, β, λ, µ, ρ), while the output corresponds to the solution of the Marshak wave problem in terms of the temperature profile and wave front position at a given time. We create a uniformly spaced grid of values for each material property, covering the range of values found in the experiments, as summarized in Table 2, where N is the number of grid points for each parameter. For the g parameter, we use logarithmically spaced values to better capture its wide range, while the other parameters are linearly spaced.

Table 2: Parameter ranges for generating training data

| Parameter | Range | Number of grid points |
| --- | --- | --- |
| g | [min(g), max(g)] | N (log-spaced) |
| f | [min(f), max(f)] | N |
| α | [min(α), max(α)] | N |
| β | [min(β), max(β)] | N |
| λ | [min(λ), max(λ)] | N |
| µ | [min(µ), max(µ)] | N |
| ρ | [min(ρ), max(ρ)] | N |

In addition to the material properties, the Marshak wave problem also depends on the boundary temperature (i.e., the drive temperature). We parameterize the drive with a function $T_b(t, a, b, c, d)$, measured in HeV, defined as

$$T_b(t, a, b, c, d) = a + b\,(t \ge c)(t - c)(t < d) + (t \ge d)\, b\,(d - c), \tag{19}$$

where the parenthesized inequalities act as indicator functions (1 if true, 0 otherwise). Here $t$ is time (in ns), and $a \in [1, 3]$, $b \in [0, 1]$, $c \in [0.1, 2]$, and $d \in [2, 5]$. The function consists of a constant temperature $a$ until time $c$, a linear ramp of rate $b$ between times $c$ and $d$, and the constant value $a + b(d - c)$ thereafter. We generate a set of boundary temperature functions by sampling the parameters a, b, c, and d from their respective ranges.

To create the training set, we take the Cartesian product of the material property values and the boundary temperature function parameters and obtain a set of input parameter combinations that covers the entire parameter space. For each input combination, we solve the Marshak wave problem using a numerical solver to obtain the corresponding output solution. These input-output pairs form our training dataset, which we use to train the FNO model. As will be seen, by learning from this diverse set of input-output pairs, the FNO can effectively capture the underlying physics of the Marshak wave problem across the entire parameter space, including the dependence on the boundary temperature function. This allows the trained model to quickly and accurately predict solutions for new, unseen combinations of material properties and boundary temperature functions within the specified ranges.

5.2 Base model

As a starting point, we introduce a base model that takes all material properties and boundary temperature function parameters as inputs and uses the Hammer and Rosen approximation as the output. The Hammer and Rosen approximation provides an analytical solution to the Marshak wave problem, which serves as a useful benchmark for evaluating the performance of our FNO model.

Figure 2 compares the temperature solutions of the Marshak wave in space for three different boundary temperature functions. The boundary temperature functions, shown in Figure 2a, are generated by varying the parameters a, b, c, and d in Eq. (19). The corresponding temperature solutions, obtained using both the Hammer and Rosen approximation and the FNO model, are presented in Figure 2b.
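For reference, a small sketch of the drive parameterization of Eq. (19), with illustrative parameter values in the spirit of Figure 2a:

```python
import numpy as np

def T_b(t, a, b, c, d):
    """Boundary temperature drive of Eq. (19); T_b in HeV, t in ns.
    Constant at a until t = c, linear ramp of rate b for c <= t < d,
    then held at a + b*(d - c)."""
    t = np.asarray(t, dtype=float)
    ramp = b * (t >= c) * (t - c) * (t < d)   # boolean factors act as indicators
    hold = (t >= d) * b * (d - c)
    return a + ramp + hold

t = np.linspace(0.0, 3.0, 301)
drive = T_b(t, a=1.2, b=0.8, c=1.0, d=2.0)    # the drive used for the test cases below
```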
Fig. 2: Comparison of the Hammer and Rosen approximation and the FNO model for a representative material under different boundary temperature drives. The drives (a) are characterized by a constant temperature followed by a linear ramp beginning at different times and with different rates; the corresponding temperature solutions at 3 ns (b), obtained from the Hammer and Rosen approximation (solid lines) and the FNO model (dashed lines), show close agreement.

The results demonstrate good agreement between the FNO model and the Hammer and Rosen approximation for all three boundary temperature functions. This indicates that the FNO model is capable of accurately capturing the physics of the Marshak wave problem and reproducing the analytical solutions provided by the Hammer and Rosen approximation.

5.3 Hammer and Rosen Correction model

While the Hammer and Rosen approximation provides an analytical solution to the Marshak wave problem, it suffers from inaccuracies due to the assumptions made in its derivation (Section 3). These inaccuracies become apparent when comparing the Hammer and Rosen solution to more accurate numerical solvers, such as diffusion-based methods, and to experimental results. To address this issue, we introduce the Hammer and Rosen Correction model, which aims to improve the accuracy of the Hammer and Rosen approximation using the FNO.

The Hammer and Rosen Correction model is built similarly to the base model but takes the Hammer and Rosen solution for the temperature and the front position as additional inputs. The outputs are generated using a more accurate diffusion solution, and the FNO learns to map the Hammer and Rosen solution to the diffusion solution. By doing so, the Hammer and Rosen Correction model effectively corrects the inaccuracies of the Hammer and Rosen approximation and provides a more accurate prediction of the Marshak wave behavior.

Figure 3 illustrates, in a parallel axis plot, the input parameter values for four different test cases used to evaluate the Hammer and Rosen Correction model. Each line represents a specific test case, with the values of the parameters plotted along the y-axis for each parameter on the x-axis. The boundary temperature drive is given by Eq. (19) with parameters a = 1.2, b = 0.8, c = 1, and d = 2.

Fig. 3: Parameter values from the test set for four different cases to evaluate the performance of the Hammer and Rosen Correction model.

The output values are produced by a numerical solver we developed to solve radiation diffusion in planar geometry. The solver assumes equilibrium between the radiation temperature and material temperature, reducing Eq. (1) and Eq. (2) to a single equation for the material temperature, Eq. (5). The solver employs a finite difference method to discretize the spatial domain into a uniform grid. Time integration is performed by the backward differentiation formula, an implicit multi-step method. The spatial derivatives in Eq. (5) are approximated using a second-order central difference scheme. The left boundary at the surface (x = 0), Eq. (3), is prescribed as a function of time, and the solver assumes the equation of state given by Eq. (7).
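A minimal method-of-lines sketch of such a solver is given below. It is our own simplified version, written in terms of the reduced Hammer and Rosen form, Eq. (9), with a Dirichlet surface drive (Eq. (6)) rather than the flux condition, so names and details are illustrative only.

```python
import numpy as np
from scipy.integrate import solve_ivp

def solve_marshak(T_s, C, alpha, beta, x_max=1.0, n_cells=100, t_end=3.0):
    """Equilibrium radiation diffusion in planar geometry,
    d(T^beta)/dt = C d2/dx2 (T^(4+alpha)),
    with T(0, t) = T_s(t), a cold right boundary, BDF time integration,
    and second-order central differences in space."""
    x = np.linspace(0.0, x_max, n_cells)
    dx = x[1] - x[0]
    T_floor = 1.0e-3                           # small initial temperature

    def rhs(t, u):                             # u = T^beta on the grid
        T = np.maximum(u, T_floor**beta)**(1.0 / beta)
        T[0] = T_s(t)                          # prescribed surface drive
        w = T**(4.0 + alpha)
        d2w = np.zeros_like(w)
        d2w[1:-1] = (w[2:] - 2.0 * w[1:-1] + w[:-2]) / dx**2
        return C * d2w                         # boundary cells held fixed

    u0 = np.full(n_cells, T_floor**beta)
    return x, solve_ivp(rhs, (0.0, t_end), u0, method="BDF", dense_output=True)
```

From the computed profiles, the wavefront position can be extracted at each time as the location where the temperature falls to a small fraction of the surface value.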
At each time step, the solver computes the temperature profile across a one-dimensional spatial grid consisting of 100 spatial cells and tracks the position of the wavefront. The Hammer and Rosen Correction model is trained and tested using the dataset generated by the numerical solver and the Hammer and Rosen solution, paired with the input parameter values. The dataset is split into standard training and testing sets. It is important to note that the testing set contains parameter combinations that may not represent physically realistic scenarios, as they are generated by uniformly sampling the parameter space defined in Table 2. The model is trained over 30 epochs using 1.05M input-output pairs, with 58k trainable parameters.

Figure 4 presents a comparison of the front position solutions over time for the Hammer and Rosen approximation, the Hammer and Rosen Correction model, and the diffusion solution. Subfigures 4a, 4b, 4c, and 4d show the results for different sets of input parameters. It is evident from the figures that the Hammer and Rosen approximation deviates noticeably from the diffusion solution over time. In contrast, the Hammer and Rosen Correction model accurately predicts the diffusion solution, demonstrating its ability to correct the inaccuracies of the Hammer and Rosen approximation.

Figure 5 provides a comparison of the temperature solutions for the same three models. Subfigures 5a, 5b, 5c, and 5d show the temperature profiles at the same time instance. Once again, the Hammer and Rosen Correction model closely matches the diffusion solution, while the Hammer and Rosen approximation exhibits discrepancies.

The Hammer and Rosen Correction model thus both improves the accuracy of the Hammer and Rosen Marshak wave solution and provides a framework for integrating analytical approximations with data-driven approaches. This hybrid approach combines the benefits of analytical and machine learning methods by supplying a physics-based starting solution that simplifies the inference.

Fig. 4: Comparison of the front position solutions over time for the Hammer and Rosen approximation, the Hammer and Rosen Correction model, and the diffusion solution for different sets of input parameters (panels (a)\u2013(d): Cases 1\u20134). The Hammer and Rosen approximation (orange lines) deviates from the diffusion solution (blue lines) over time, while the Hammer and Rosen Correction (dashed green lines) accurately predicts the diffusion solution.

5.4 Model generalization and performance

In the previous sections, we demonstrated the effectiveness of the Hammer and Rosen Correction model in accurately predicting the Marshak wave behavior for unseen data. It is important to note that these tests were performed on collocation points of the grid described in Table 2. To validate the generalization capabilities of the FNO, we present additional tests on specific physical materials from Table 1.
Figure 6 compares the front position solutions obtained from the diffusion solver and the Hammer and Rosen Correction model for four different materials: C15H20O6Au0.172, Be, C15H20O6, and C6H12, with properties as specified in [30]. These materials were not explicitly included in the training data grid but represent realistic physical scenarios. Subfigures 6a, 6b, 6c, and 6d show excellent agreement between the diffusion solutions and the Hammer and Rosen Correction model predictions for all four materials. This demonstrates that the FNO has successfully learned the mapping over the entire parameter space and can accurately predict the Marshak wave behavior for arbitrary material properties within the considered ranges.

Fig. 5: Comparison of the temperature profiles for the Hammer and Rosen approximation, the Hammer and Rosen Correction model, and the diffusion solution at the same time instance for different sets of input parameters (panels (a)\u2013(d): Cases 1\u20134). The Hammer and Rosen approximation (orange lines) exhibits discrepancies compared to the diffusion solution (blue lines), while the Hammer and Rosen Correction (dashed green lines) closely matches the diffusion solution.

Fig. 6: Comparison of the front positions obtained from the Hammer and Rosen approximation (orange lines), the diffusion solver (blue lines), and the Hammer and Rosen Correction model (dashed green lines) for four different materials from Table 1.

To quantitatively assess the performance and computational efficiency of the Hammer and Rosen Correction model, we compare it with the base model in Table 3. Both models are trained with the same number of trainable parameters, training data, and epochs to ensure a fair comparison. The mean squared error (MSE) is used as the evaluation metric for both temperature and front position predictions.

Table 3: Prediction performance and computational costs of the deep learning models (MSE is the mean squared error)

| Parameter | HR Correction | Base model | % Improvement |
| --- | --- | --- | --- |
| Temperature MSE | 0.00081 | 0.00185 | 56.16 |
| Front position MSE | 0.00807 | 0.01220 | 33.93 |
| Train data | 1.05M | 1.05M | |
| Trainable parameters | 58k | 58k | |
| Epochs | 30 | 30 | |
| Inference time (s) | 0.0032 | 0.0016 | |

The results in Table 3 show that the Hammer and Rosen Correction model significantly outperforms the base model in terms of prediction accuracy. The Hammer and Rosen Correction model achieves a 56.16% improvement in temperature MSE and a 33.93% improvement in front position MSE compared to the base model.
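The percentage improvements in Table 3 are consistent with the relative reduction in MSE; a short sketch of the computation (small differences from the tabulated percentages reflect rounding of the MSE entries):

```python
import numpy as np

def mse(pred, ref):
    """Mean squared error between model predictions and the diffusion reference."""
    pred, ref = np.asarray(pred), np.asarray(ref)
    return float(np.mean((pred - ref) ** 2))

def pct_improvement(mse_base, mse_corrected):
    return 100.0 * (mse_base - mse_corrected) / mse_base

print(pct_improvement(0.00185, 0.00081))  # ~56.2%, temperature MSE (Table 3)
print(pct_improvement(0.01220, 0.00807))  # ~33.9%, front position MSE (Table 3)
```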
This superior performance can be attributed to the hybrid nature of the Hammer and Rosen Correction approach.

In terms of computational efficiency, the Hammer and Rosen Correction model has a slightly slower inference time than the base model. This is expected due to the additional complexity introduced by the correction step. However, it is important to note that both models have extremely fast inference times, with the Hammer and Rosen Correction model requiring only 0.0032 seconds per prediction and the base model requiring 0.0016 seconds. These fast inference times highlight the efficiency of the FNO-based approach, enabling real-time predictions of the Marshak wave behavior.

6", + "additional_graph_info": { + "graph": [ + ["Joseph Farmer", "Ethan Smith"], + ["Joseph Farmer", "William Bennett"], + ["Ethan Smith", "Nayan Saxena"], + ["Ethan Smith", "Ilham Variansyah"], + ["William Bennett", "Ryan G. Mcclarren"], + ["William Bennett", "Stephen Millmore"] + ] + } +}
Marshak, the namesake of these waves, reduced a partial differential equation (PDE) into an ordinary differential equation (ODE) [13, 14]. Reduced order solutions have the benefit of simpler calculation, as solving an ODE is usually preferable to solving a PDE, and they can be interrogated to clarify physical relationships between parameters. However, coming to a semi-analytic or analytic solution often involves invoking simplifications which may debase the accuracy of the prediction. Thus, the motive for this inquiry is to take a widely used and appreciated semi-analytic diffusion model, the Hammer and Rosen Marshak wave model (HR) [11], and provide a correction to the model\u2019s limiting assumptions in a computationally efficient manner. Classical numerical solvers such as finite difference, finite element, or finite volume methods discretize continuous equations into a finite set of algebraic equations [17\u2013 22]. These numerical solvers can be computationally expensive for high dimensional problems and for domains with complex geometries. In recent years, approaches that leverage ML have garnered support to alleviate these challenges [23\u201325]. In particular, neural operators, a class of ML models, have emerged as a promising solution to these challenges. These operators learn mappings between infinite-dimensional function spaces, effectively approximating differential or integral operators that govern PDEs in a data driven manner [26, 27]. One of the key advantages of neural operators is that they only need to be trained once to learn a family of PDEs, and obtaining a solution for a new instance of a PDE parameter requires only a forward pass of the network. Furthermore, neural operators are discretizationinvariant as they share network parameters across discretizations, allowing for the transfer of solutions between meshes. The Fourier neural operator (FNO) [28] is a seminal neural operator that learns network parameters in Fourier space. The FNO uses fast Fourier transform (FFT) for spectral decomposition of the input and computation of the convolution integral kernel in the Fourier space. This approach has shown promising results in learning the underlying physics of various PDEs including Burgers, Darcy, and Navier-Stokes equations. In this work, we propose to use FNO to learn the physics of Marshak waves for various input-output pairs. We develop two models: a base model which takes the physical parameters of the Marshak wave problem as input and outputs the time dependent wavefront position and temperature distribution as given by the HR model, 2 \fand a hybrid approach which corrects the analytic HR solution to output the numerical solution to the full flux-limited diffusion equation. The structure of this paper is as follows. The diffusion model for Marshak waves is introduced in Section 2. Hammer and Rosen\u2019s approximation is summarized in Section 3. The neural network that is employed to correct the HR model is discussed in Section 4. Finally, results and conclusions are offered in Sections 5 and 6. 2 Marshak wave problem We study radiation diffusion in planar geometry, which assumes variation of the dependent variables only in a single direction, x. 
The evolutions of the radiation and material energy density are governed by [29], \u2202er \u2202t = \u2202 \u2202x c 3\u03ba(\u03c1, T) \u2202er \u2202x + c\u03ba(aT 4 \u2212er), (1) \u2202e \u2202t = c\u03ba(e \u2212aT 4) (2) where, er is the energy density of the radiation and e is the energy density of the material. c is the speed of light, \u03ba is the opacity with units of inverse length, a is the radiation constant, defined a \u22614\u03c3 c where \u03c3 is the Stefan-Boltzmann constant. T is the material temperature and \u03c1 is the material density. A Marshak boundary condition will specify the incoming radiation flux [29], er(x = 0, t) \u2212 \u0012 2 3\u03ba \u2202er \u2202x \u0013 \f \f \f \f x=0 = 4 c Finc. (3) where Finc is the incident flux on the surface at x = 0. The material energy density is found via integration of the specific heat, e = Z T 0 dT \u2032 Cv(T \u2032). (4) Solutions to Eq. (1) in the optically thick limit are recognizable by sharp drops in temperature near the wavefront and gradual temperature variation behind the front. This is because the radiation temperature and material temperature are in equilibrium behind the wavefront. Thus, is often valid to assume equilibrium between the radiation temperature and and material temperature, i.e. er = aT 4. This assumption simplifies Eqs. (1) and (2) to a single equation for the material temperature, \u2202e \u2202t = 4 3 \u2202 \u2202x 1 \u03ba(\u03c1, T) \u0012 \u2202 \u2202x\u03c3T 4 \u0013 (5) with the boundary condition at the surface, T(x = 0, t) = Ts(t). (6) 3 \fFurthermore, the equation of state is specified so that, e = fT \u03b2\u03c1\u2212\u00b5, (7) This is the formulation given in [11]. The parameters f, \u03b2, \u00b5 are found by fitting experimental data, as in [30]. 3 Hammer and Rosen approximation The Hammer and Rosen model for supersonic thermal radiation diffusion is a perturbative, semi-analytic, one dimensional solution to the diffusion equation under mild limiting assumptions. In particular, this model assumes planar geometry, power law representations for the opacity, 1 K = gT \u03b1\u03c1\u2212\u03bb, and material internal energy, e = fT \u03b2\u03c1\u2212\u00b5, and a constant density. These assumptions transform Eq. (5) into, \u03c1\u2202e \u2202t = 4 3 \u2202 \u2202x \u0012 1 K\u03c1 \u2202 \u2202x\u03c3T 4 \u0013 , (8) where \u03c1 is the material density, e is the internal energy, \u03c3 is the Stefan-Boltzmann constant, and T is the radiation temperature. The application of these assumptions and some simplification leads to the expression \u2202T \u03b2 \u2202t = C \u22022 \u2202x2 T 4+\u03b1 (9) where our constants are collected into the term C = 4 4 + \u03b1 4 3 1 f g\u03c1\u00b5\u22122\u2212\u03bb (10) This model predicts the position of the wave front as a function of time as the solution to an integral expression, then provides an explicit expression for the temperature profile in the material. The model can accommodate an arbitrary radiation temperature boundary condition. The Hammer and Rosen model gives the position of the wavefront, xf, as x2 f (t) = 2 + \u03f5 1 \u2212\u03f5CT \u2212\u03b2 s Z t 0 T 4+\u03b1 s d\u02c6 t (11) where Ts is the boundary temperature, \u03f5 = \u03b2 4+\u03b1 is a combination of terms from the power laws, and xf is the heat front position as a function of time, t. 
With knowledge of the wavefront position a simple expression can be evaluated for the temperature profile: T 4+\u03b1 T 4+\u03b1 s (x, t) = \u0014\u0012 1 \u2212x xf \u0013 \u0012 1 + \u03f5 2 \u0012 1 \u2212 x2 f CH2\u2212\u03f5 dH dt \u0013 x xf \u0013\u00151/(1\u2212\u03f5) . (12) Here H = T 4+\u03b1 s . One hallmark of this approximate solution is that it is very inexpensive to evaluate. In practice, and when compared to computing a numerical solution, 4 \fthis method is effectively immediate. For this reason, it has proven to be particularly helpful for rapid iteration during the design process. 4 Fourier neural operator model We now turn to the consideration of producing a machine learning model to compute Marshak wave solutions. For this task we turn to the Fourier Neural Operator. In this section we use standard notation from the ML literature; regrettably, this overlaps with the standard notation for Marshak waves at times. g f \u00c6 \u00d8 \u220f \u00b5 \u03a9 Parameters 1.0 \u00a3 10\u00b04 1.0 \u00a3 10\u00b02 1.0 \u00a3 100 1.0 \u00a3 102 Values 0 1 2 3 t (ns) 0.000 0.045 0.090 0.135 0.180 0.225 0.270 xf (cm) 0.00 0.02 0.04 0.06 xf (cm) 0 1 2 T (HeV) 0 1 2 3 t (ns) 1.0 1.5 2.0 2.5 3.0 3.5 4.0 T (HeV) P Fourier layer 1 Fourier layer 2 Fourier layer l Q a(x) u(x) v(x) F R F\u22121 Fourier layer W + \u03c3 Fig. 1: Fourier neural operator architecture for solving the Marshak wave problem. The input function a(x) is projected to a higher representation v0(x) by the projection layer P. This is then processed through l iterations of Fourier layers. Each Fourier layer consists of a Fourier transform F that maps vi(x) to the Fourier domain, multiplication with the weight tensor R and filtering of higher Fourier modes, and an inverse Fourier transform F\u22121 to return to the spatial domain. The output is linearly transformed by W and passed through a nonlinear activation function \u03c3. This is added to the previous Fourier layer\u2019s output to produce the updated representation vi+1(x). After l layers, the final representation vl(x) is mapped to the output solution u(x). The boundary temperature drive (top left) and parameters (bottom left) represent the input functions and the front position (top right) and temperature distribution (bottom right) represent the output functions for the Marshak wave problem The primary goal of an operator G is to establish a mapping between infinitedimensional spaces from a finite collection of input-output pairs, denoted as A = A(Rda) \u2282Rda and U = U(Rdu) \u2282Rdu, respectively. Following from [28, 31], consider a partial differential equation (PDE) which maps input function spaces to an output solution space. For a given domain D \u2282Rd with boundary \u2202D, and x \u2208D, an operator would map source terms, f(x, t) : D \u2192R, boundary conditions, u(\u2202D, t) : D \u2192R, and initial conditions u(x, 0) : D \u2192R, to the solution space u(x, t) : D \u2192R, where t is time. In the present work, we aim to learn the nonlinear differential operator G : A \u2192U for various sets of input parameters a \u2208A in the Marshak wave problem. 5 \fBy constructing a parametric map G : A \u00d7 \u0398 \u2192U, the optimal parameter \u03b8 \u2208\u0398 can be approximated with data-driven methods to adjust \u03b8 such that G(\u00b7, \u03b8) approaches the target map G. 
Classical numerical solvers, be it finite elements, finite differences, or many modern data-driven and physics-informed neural networks attempt to learn the output function u(x, t) which satisfies G for a single instance of input parameter a and can be computationally prohibitive, especially when the solution for the PDE is required for many instances of the parameter. On the other hand, Fourier neural operators (FNO) have been developed to approximate G directly so that solutions to a family of PDEs are realized for different sets of a, thereby enhancing computational efficiency and practical utility. In general, input and output functions a and u are continuous, however, we assume to know only point-wise evaluations. To that end, the problem at hand can be described using the n-point discretization of D, Dj = {x1, . . . , xn} \u2282D with observations of input-output pairs indexed by j \b aj \u2208Rn\u00d7da, uj \u2208Rn\u00d7du\tN j=1, and uj = G(aj). The neural operator to learn the input-output mapping is an iterative architecture. First, the input a(x, t) is transformed to a higher dimensional representation by v0(x) = P(a(x)) where the transformation P(a(x)) : Rda 7\u2192Rdv. In this framework, a shallow fully connected network can achieve this desired transformation. Next a series of l updates vi 7\u2192vi+1 are performed vi+1(x) := \u03c3 (Wvi(x) + (K(a; \u03d5)vi) (x)) , \u2200x \u2208D. (13) with nonlinear activation function \u03c3(\u00b7) : R 7\u2192R and a linear transformation W : Rdv 7\u2192Rdv. Each vi is a dv-dimensional real vector in Rdv. For a vector input x = [x1, x2, . . . , xdv]T \u2208Rdv, \u03c3(x) is applied element-wise, resulting in [\u03c3(x1), \u03c3(x2), . . . , \u03c3(xdv)]T . The integral kernel operator K : A \u00d7 \u03b8 \u2192L(U, U) is parameterized by \u03d5 \u2208\u0398K (K(a; \u03d5)vi) (x) := Z D \u03ba\u03d5(x, y, a(x), a(y); \u03d5)vi(y)dy, \u2200x \u2208D. (14) where \u03ba\u03d5 : R2(d+da) \u2192Rdv\u00d7dv is a neural network parameterized by \u03d5 \u2208\u0398K. After all iterations, a transformation function u(x) = Q (vl(x)) moves vl(x) into the solution space Q (vl(x)) : Rdv 7\u2192Rdu. This approach extends the idea of neural networks to operate on infinite-dimensional function spaces, enabling the learning of mappings between such spaces from finite data samples. By leveraging neural operators, it becomes possible to approximate the nonlinear operators that govern the relationships between infinite-dimensional input and output function spaces, such as those arising in the context of partial differential equations. The FNO is a specific neural operator architecture designed for such nonlinear mappings. It replaces the kernel integral operator in by a Fourier convolution operator F\u22121 (F (\u03ba\u03d5) \u00b7 F (vi)) (x), and applying the convolution theorem. The Fourier kernel integral operator becomes (K(\u03d5)vi) (x) = F\u22121 (R\u03d5 \u00b7 (Fvi)) (x), \u2200x \u2208D, 6 \fwhere F is the Fourier transform of a function and F\u22121 is its inverse transform, R\u03d5 is the Fourier transform of a periodic function \u03ba parameterized by \u03d5 \u2208\u0398K. Given that \u03ba is periodic and can be represented by a Fourier series expansion, only discrete modes are considered k \u2208Zd. To create a finite dimensional representation, the Fourier series is truncated at a maximum number of modes kmax = |{k \u2208Zd : |kj| \u2264kmax,j for j = 1, . . . , d}|. 
In a discretized domain D with n \u2208N points, vi \u2208Rn\u00d7dv and F(vi) \u2208Cn\u00d7dv is obtained, here C represents the complex space. A convolution of vi with a function that has kmax Fourier modes gives F(vi) \u2208Ckmax\u00d7dv . Then the multiplication with the weight tensor R \u2208Ckmax\u00d7dv\u00d7dv is (R \u00b7 (Fvi))k,l = X j=1 Rk,l,j (Fvi)k,j , k = 1, . . . , kmax, j = 1, . . . , dv (15) With uniform discretization and resolution s1 \u00d7 \u00b7 \u00b7 \u00b7 \u00d7 sd = n, Fast Fourier Transform (FFT) can replace F. For f \u2208Rn\u00d7dv, k = (k1, . . . , kd) \u2208Zs1 \u00d7 \u00b7 \u00b7 \u00b7 \u00d7 Zsd, and x = (x1, . . . , xd) \u2208D, the FFT \u02c6 F and its inverse \u02c6 F\u22121 are defined as ( \u02c6 Ff)l(k) = s1\u22121 X x1=0 \u00b7 \u00b7 \u00b7 sd\u22121 X xd=0 fl (x1, . . . , xd) e \u22122i\u03c0 Pd j=1 xj kj sj , (16) \u0010 \u02c6 F\u22121f \u0011 l (x) = s1\u22121 X k1=0 \u00b7 \u00b7 \u00b7 sd\u22121 X kd=0 fl (k1, . . . , kd) e 2i\u03c0 Pd j=1 xj kj sj . (17) Finally, since Eq. (13) follows standard neural network structures training a network training is done with an appropriate loss function L = U \u00d7 U \u0398 = arg min \u0398 (L(G(a), G(a, \u0398)). (18) A schematic representation of the Fourier Neural Operator model for the Marshak wave problem is provided in Figure 1. 5 Results 5.1 Problem description and parameter space The Marshak waves we consider concern the propagation of heat waves through lowdensity foam cylinders or other materials driven by a hohlraum similar to those described in [30, 32]. Key parameters in these experiments include density, drive energy and radiation temperature, which typically can range from 100 to 300 eV. Xray imaging is used to track the heat wave, while diagnostic tools measure the flux breaking through the foam edge. The experiments cover a wide range of temperatures, materials, and densities. 7 \fTable 1, adapted from [30], presents material properties used in various Marshak wave experiments. The first ten rows contain parameters for the foams, while the last two rows provide parameters for coating materials. For each material, the numerical parameters were fitted in relevant experimental regimes. Further details about the experiments can be found in [30] and references cited therein. Table 1: Material properties for various Marshak wave experiments Experiment Foam g \u0000g/cm2\u0001 f (MJ) \u03b1 \u03b2 \u03bb \u00b5 \u03c1 \u0000g/cm3\u0001 Massen C11H16Pb0.3852 1/3200 10.17 1.57 1.2 0.1 0 0.080 Xu pure C6H12 1/3926.6 12.27 2.98 1 0.95 0.04 0.05 Xu with copper C6H12Cu0.394 1/7692.9 8.13 3.44 1.1 0.67 0.07 0.05 Back, Moore SiO2 1/9175 8.77 3.53 1.1 0.75 0.09 0.05 Back Ta2O5 1/8433.3 4.78 1.78 1.37 0.24 0.12 0.04 Back low energy SiO2 1/9652 8.4 2.0 1.23 0.61 0.1 0.01 Moore C8H7Cl 1/24466 14.47 5.7 0.96 0.72 0.04 0.105 Keiter Pure C15H20O6 1/26549 11.54 5.29 0.94 0.95 0.038 0.065 Keiter with Gold C15H20O6Au0.172 1/4760 9.81 2.5 1.04 0.35 0.06 0.0625 Ji-Yan C8H8 1/2818.1 21.17 2.79 1.06 0.81 0.06 0.160 Au 1/7200 3.4 1.5 1.6 0.2 0.14 0.160 Be 1/402.8 8.81 4.89 1.09 0.67 0.07 0.160 Numerical approximations for solving the Marshak wave problem can be computationally expensive, especially when exploring a wide range of material properties. To overcome this challenge, we propose using the Fourier Neural Operator (FNO) to learn the mapping between material properties and their corresponding Marshak wave solutions. 
FNOs have shown success in solving partial differential equations by learning the solution operator from a dataset of input-output pairs. To train the FNO model, we generate a dataset that spans the parameter space defined by the material properties in Table 1. The input consists of a set of material properties, (g, f, \u03b1, \u03b2, \u03bb, \u00b5, \u03c1), while the output corresponds to the solution of the Marshak wave problem in terms of the temperature profile and wave front position at a given time. We create a uniformly spaced grid of values for each material property, covering the range of values found in the experiments: In Table 2, N is the number Table 2: Parameter ranges for generating training data Parameter Range Number of grid points g [min(g), max(g)] N (log-spaced) f [min(f), max(f)] N \u03b1 [min(\u03b1), max(\u03b1)] N \u03b2 [min(\u03b2), max(\u03b2)] N \u03bb [min(\u03bb), max(\u03bb)] N \u00b5 [min(\u00b5), max(\u00b5)] N \u03c1 [min(\u03c1), max(\u03c1)] N 8 \fof grid points for each parameter. For the g parameter, we use logarithmically spaced values to better capture its wide range, while the other parameters are linearly spaced. In addition to the material properties, the Marshak wave problem also depends on the boundary temperature (i.e., the drive temperature). We parameterize the drive with a function Tb(t, a, b, c, d), measured in HeV, defined as follows Tb(t, a, b, c, d) = a + (b(t \u2265c)(t \u2212c))(t < d) + (t \u2265d)(b(d \u2212c)). (19) Here t is time (in ns), and a \u2208[1, 3], b \u2208[0, 1], c \u2208[0.1, 2], and d \u2208[2, 5]. The function consists of a constant term a, and a piecewise function that takes different values based on the conditions involving t, c, and d. We generate a set of boundary temperature functions by sampling the parameters a, b, c, and d from their respective ranges. To create the training set, we take the Cartesian product of the material property values and the boundary temperature function parameters and obtain a set of input parameter combinations that cover the entire parameter space. For each input combination, we solve the Marshak wave problem using a numerical solver to obtain the corresponding output solution. These input-output pairs form our training dataset, which we use to train the FNO model. As will be seen, by learning from this diverse set of input-output pairs, the FNO can effectively capture the underlying physics of the Marshak wave problem across the entire parameter space, including the dependence on the boundary temperature function. This allows the trained model to quickly and accurately predict solutions for new, unseen combinations of material properties and boundary temperature functions within the specified ranges. 5.2 Base model As a starting point, we introduce a base model that takes all material properties and boundary temperature function parameters as inputs and uses the Hammer and Rosen approximation as the output. The Hammer and Rosen approximation provides an analytical solution to the Marshak wave problem, which serves as a useful benchmark for evaluating the performance of our FNO model. Figure 2 compares the temperature solutions of the Marshak wave in space for three different boundary temperature functions. The boundary temperature functions, shown in Figure 2a, are generated by varying the parameters a, b, c, and d in Equation 19. The corresponding temperature solutions, obtained using both the Hammer and Rosen approximation and the FNO model, are presented in Figure 2b. 
The results demonstrate good agreement between the FNO model and the Hammer and Rosen approximation for all three boundary temperature functions. This indicates that the FNO model is capable of accurately capturing the physics of the Marshak wave problem and reproducing the analytical solutions provided by the Hammer and Rosen approximation.

5.3 Hammer and Rosen Correction model

While the Hammer and Rosen approximation provides an analytical solution to the Marshak wave problem, it suffers from inaccuracies due to the assumptions made in its derivation (Section 3).

Fig. 2 (caption): Comparison of the Hammer and Rosen approximation and the FNO model for a representative material under different boundary temperature drives. The drives (a) are characterized by a constant temperature followed by a linear ramp starting at different times and rates. The corresponding temperature profiles at 3 ns (b), obtained from the Hammer and Rosen approximation (solid lines) and the FNO model (dashed lines), show close agreement.

These inaccuracies become apparent when comparing the Hammer and Rosen solution to more accurate numerical solvers, such as diffusion-based methods, and to experimental results. To address this issue, we introduce the Hammer and Rosen Correction model, which aims to improve the accuracy of the Hammer and Rosen approximation using an FNO. The Hammer and Rosen Correction model is built similarly to the base model but takes the Hammer and Rosen solution for the temperature and the front position as additional inputs. The outputs are generated using a more accurate diffusion solution, and the FNO learns to map the Hammer and Rosen solution to the diffusion solution. By doing so, the Hammer and Rosen Correction model effectively corrects the inaccuracies of the Hammer and Rosen approximation and provides a more accurate prediction of the Marshak wave behavior.

Figure 3 illustrates, in a parallel axis plot, the input parameter values for four different test cases used to evaluate the Hammer and Rosen Correction model. Each line represents a specific test case, with the values of the parameters plotted along the y-axis for each parameter on the x-axis. The boundary temperature drive is given by Eq. (19) with parameters a = 1.2, b = 0.8, c = 1, and d = 2.

The output values are produced by a numerical solver we developed to solve radiation diffusion in planar geometry. The solver assumes equilibrium between the radiation temperature and the material temperature, reducing Eq. (1) and Eq. (2) to a single equation for the material temperature, Eq. (5). The solver employs a finite difference method to discretize the spatial domain into a uniform grid. Time integration is performed by the backward differentiation formula, an implicit multistep method. The spatial derivatives in Eq. (5) are approximated using a second-order central difference scheme. The left boundary condition at the surface (x = 0), Eq. (3), is prescribed as a function of time, and the solver assumes the equation of state given by Eq. (7).

Fig. 3 (caption): Parameter values from the test set for four different cases used to evaluate the performance of the Hammer and Rosen Correction model.
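A minimal sketch of a solver of this type is shown below, assuming a single-temperature nonlinear-diffusion form of Eq. (5). The conductivity model, boundary treatment, and all coefficients are placeholders, not the paper's fitted opacity or equation-of-state parameters; only the grid size (100 cells) and the implicit-multistep (BDF) integration follow the description above.

```python
import numpy as np
from scipy.integrate import solve_ivp

def rhs(t, T, x, D_of_T, T_drive):
    """Semi-discrete nonlinear diffusion dT/dt = d/dx( D(T) dT/dx )."""
    dx = x[1] - x[0]
    T = T.copy()
    T[0] = T_drive(t)                       # crude Dirichlet drive at x = 0
    flux = np.zeros(len(x) + 1)             # faces; outer faces left at zero
    D_face = 0.5 * (D_of_T(T[:-1]) + D_of_T(T[1:]))
    flux[1:-1] = -D_face * (T[1:] - T[:-1]) / dx   # central-difference flux
    return -(flux[1:] - flux[:-1]) / dx

x = np.linspace(0.0, 1.0, 100)              # 100 spatial cells, as above
T0 = np.full_like(x, 1e-3)                  # cold initial material
sol = solve_ivp(rhs, (0.0, 3.0), T0,
                method="BDF",               # backward differentiation formula
                args=(x, lambda T: T**4.5,  # placeholder Marshak-like D(T)
                      lambda t: 2.0))       # placeholder constant drive
```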
At each time step, the solver computes the temperature profile across a one-dimensional spatial grid consisting of 100 spatial cells and tracks the position of the wavefront. The Hammer and Rosen Correction model is trained and tested using the dataset generated by the numerical solver and the Hammer and Rosen solution, paired with the input parameter values. The dataset is split into standard training and testing sets. It is important to note that the testing set contains parameter combinations that may not represent physically realistic scenarios, as they are generated by uniformly sampling the parameter space defined in Table 2. The model, which has 58k trainable parameters, is trained on 1.05M input-output pairs over 30 epochs.

Figure 4 presents a comparison of the front position solutions over time for the Hammer and Rosen approximation, the Hammer and Rosen Correction model, and the diffusion solution. Subfigures 4a, 4b, 4c, and 4d show the results for different sets of input parameters. It is evident from the figures that the Hammer and Rosen approximation deviates noticeably from the diffusion solution over time. In contrast, the Hammer and Rosen Correction model accurately predicts the diffusion solution, demonstrating its ability to correct the inaccuracies of the Hammer and Rosen approximation.

Figure 5 provides a comparison of the temperature solutions for the same three models. Subfigures 5a, 5b, 5c, and 5d show the temperature profiles at the same time instance. Once again, the Hammer and Rosen Correction model closely matches the diffusion solution, while the Hammer and Rosen approximation exhibits discrepancies.

The Hammer and Rosen Correction model thus both improves the accuracy of the Hammer and Rosen Marshak wave solution and provides a framework for integrating analytical approximations with data-driven approaches. This hybrid approach combines the benefits of analytical and machine learning methods: the analytic model supplies a physically meaningful starting point that simplifies the inference task.

Fig. 4 (caption): Comparison of the front position solutions over time for the Hammer and Rosen approximation, the Hammer and Rosen Correction model, and the diffusion solution for different sets of input parameters (panels a-d: Cases 1-4). The Hammer and Rosen approximation (orange lines) deviates from the diffusion solution (blue lines) over time, while the Hammer and Rosen Correction model (dashed green lines) accurately predicts the diffusion solution.

5.4 Model generalization and performance

In the previous sections, we demonstrated the effectiveness of the Hammer and Rosen Correction model in accurately predicting the Marshak wave behavior for unseen data. It is important to note that these tests were performed on collocation points of the grid described in Table 2. To validate the generalization capabilities of the FNO, we present additional tests on specific physical materials from Table 1.
Figure 6 compares the front position solutions obtained from the diffusion solver and the Hammer and Rosen Correction model for four different materials: C15H20O6Au0.172, Be, C15H20O6, and C6H12, with properties as specified in [30]. These materials were not explicitly included in the training data grid but represent realistic physical scenarios. Subfigures 6a, 6b, 6c, and 6d show excellent agreement between the diffusion solutions and the Hammer and Rosen Correction model predictions for all four materials. This demonstrates that the FNO has successfully learned the mapping over the entire parameter space and can accurately predict the Marshak wave behavior for arbitrary material properties within the considered ranges.

Fig. 5 (caption): Comparison of the temperature profiles for the Hammer and Rosen approximation, the Hammer and Rosen Correction model, and the diffusion solution at the same time instance for different sets of input parameters (panels a-d: Cases 1-4). The Hammer and Rosen approximation (orange lines) exhibits discrepancies compared to the diffusion solution (blue lines), while the Hammer and Rosen Correction model (dashed green lines) closely matches the diffusion solution.

To quantitatively assess the performance and computational efficiency of the Hammer and Rosen Correction model, we compare it with the base model in Table 3. Both models are trained with the same number of trainable parameters, training data, and epochs to ensure a fair comparison. The mean squared error (MSE) is used as the evaluation metric for both temperature and front position predictions. The results in Table 3 show that the Hammer and Rosen Correction model significantly outperforms the base model in terms of prediction accuracy, achieving a 56.16% improvement in temperature MSE and a 33.93% improvement in front position MSE.

Fig. 6 (caption): Comparison of the front positions obtained from the Hammer and Rosen approximation (orange lines), the diffusion solver (blue lines), and the Hammer and Rosen Correction model (dashed green lines) for four different materials from Table 1 (panels a-d: C15H20O6Au0.172, Be, C15H20O6, C6H12).

Table 3: Prediction performance and computational costs of the deep learning models (MSE is the mean squared error)

Metric | HR Correction | Base model | % Improvement
Temperature MSE | 0.00081 | 0.00185 | 56.16
Front position MSE | 0.00807 | 0.01220 | 33.93
Train data | 1.05M | 1.05M |
Trainable parameters | 58k | 58k |
Epochs | 30 | 30 |
Inference time (s) | 0.0032 | 0.0016 |
This superior performance can be attributed to the hybrid nature of the Hammer and Rosen Correction model. In terms of computational efficiency, the Hammer and Rosen Correction model has a slightly slower inference time than the base model. This is expected due to the additional complexity introduced by the correction step. However, it is important to note that both models have extremely fast inference times, with the Hammer and Rosen Correction model requiring only 0.0032 seconds per prediction and the base model requiring 0.0016 seconds. These fast inference times highlight the efficiency of the FNO-based approach, enabling real-time predictions of the Marshak wave behavior." + } + ], + "Ethan Smith": [ + { + "url": "http://arxiv.org/abs/2402.13573v3", + "title": "ToDo: Token Downsampling for Efficient Generation of High-Resolution Images", + "abstract": "Attention mechanism has been crucial for image diffusion models, however,\ntheir quadratic computational complexity limits the sizes of images we can\nprocess within reasonable time and memory constraints. This paper investigates\nthe importance of dense attention in generative image models, which often\ncontain redundant features, making them suitable for sparser attention\nmechanisms. We propose a novel training-free method ToDo that relies on token\ndownsampling of key and value tokens to accelerate Stable Diffusion inference\nby up to 2x for common sizes and up to 4.5x or more for high resolutions like\n2048x2048. We demonstrate that our approach outperforms previous methods in\nbalancing efficient throughput and fidelity.", + "authors": "Ethan Smith, Nayan Saxena, Aninda Saha", + "published": "2024-02-21", + "updated": "2024-05-08", + "primary_cat": "cs.CV", + "cats": [ + "cs.CV", + "cs.AI", + "cs.LG" + ], + "main_content": "1 Introduction

Transformers, and their key component, attention, have been integral to the success and improvements in generative models in recent years [Vaswani et al., 2023]. Their global receptive fields, ability to compute dynamically based on input context, and large capacities have made them useful building blocks across many tasks [Khan et al., 2022]. The main drawback of Transformer architectures is their quadratic scaling of computational complexity with sequence length, affecting both time and memory requirements. When looking to generate a Stable Diffusion image at 2048 × 2048 resolution, the attention maps of the largest U-Net blocks incur a memory cost of approximately 69 GB in half precision, calculated as 1 batch × 8 heads × (256² tokens)² × 2 bytes. This exceeds the capabilities of most consumer GPUs [Zhuang et al., 2023]. Specialized kernels, such as those used in Flash Attention, have greatly improved speed and reduced memory costs [Dao et al., 2022]; however, the challenges posed by this unfavorable quadratic scaling with sequence length persist.

In the quest for computational efficiency, the concept of sparse attention has gained traction. Methods like Token Merging (ToMe) [Bolya et al., 2023] and its application in latent image diffusion models [Bolya and Hoffman, 2023] have reduced the computation time required by condensing tokens with high similarity, thereby retaining the essence of the information with fewer tokens.

Figure 1 (caption): A visualization of our method. From a given latent or image, we subsample positions on the grid in a strided fashion for the keys and values used in attention, maintaining the full set of query tokens. Link to demo video is here.
Similarly, approaches like Neighborhood Attention [Hassani et al., 2023] and Focal Transformers [Yang et al., 2021] have introduced mechanisms where query tokens attend only to a select neighborhood, balancing the trade-off between receptive field and computational load. These strategies aim to efficiently approximate the attention mechanism's output. While performant, these methods typically require training-time modifications to be successful, incurring significant logistical overhead to leverage their optimizations. Complementing the sparse attention frameworks, attention approximation methods offer an alternative avenue by exploiting mathematical properties to simplify the attention operation. Techniques ranging from replacing the softmax with more computationally friendly nonlinearities [Chen et al., 2020], to fully linearizing attention [Katharopoulos et al., 2020], to leveraging the kernel trick for dimensionality reduction [Choromanski et al., 2022] have been explored to approximate attention efficiently, but these too generally need to be trained into the model.

Building upon these works and aiming to address the pretraining requirement, we propose a novel post-hoc method for accelerating inference, which we refer to as Token Downsampling (ToDo). Our approach is inspired by the observation that adjacent pixels in images tend to exhibit similar values to their neighbors. Hence, we employ a downsampling technique to reduce tokens, akin to grid-based subsampling in image processing. Compared to the prior method ToMe [Bolya and Hoffman, 2023], our method not only simplifies the merging process but also significantly reduces computational overhead, as it eliminates the need for exhaustive similarity calculations. In summary, our main contributions are:

• A training-free method that can accelerate inference for Stable Diffusion by up to 4.5x, beating previous methods in balancing throughput and fidelity.
• An in-depth analysis of attention features within the U-Net, and hypotheses on why attention can be approximated sparsely without substantially hurting fidelity.

2 Methods

2.1 Background

Diffusion Models for Image Generation. The diffusion model [Song and Ermon, 2019] employs a U-Net architecture [Ronneberger et al., 2015] with transformer-based blocks that utilize self-attention layers [Rombach et al., 2021]. This setup flattens the spatial dimensions into a series of tokens, which are then fed through multiple transformer blocks to predict the denoised image.

Original Token Merging Scheme. In the original ToMe [Bolya et al., 2023] framework, tokens are categorized into source (src) and destination (dst) sets. The merging process involves identifying the r most similar tokens from the src set and merging them into the dst set, effectively reducing the total token count by r. This merging is defined as $x_{\text{merged}} = \frac{1}{r} \sum_{i=1}^{r} x_i$, where $x_i$ represents the individual tokens to be merged. Overall, the original ToMe method is predicated on reducing computational load through the merging of similar tokens prior to their input to the attention layers. This process involves the computation of a similarity matrix, where tokens exhibiting the highest similarity are merged. Subsequently, the unmerging process aims to redistribute the merged token information back to the original token locations.
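For intuition, the following is a highly simplified, unbatched sketch of ToMe-style merging; it is not the released ToMe implementation, and it omits the bipartite partitioning details and the unmerge bookkeeping described above.

```python
import torch
import torch.nn.functional as F

def tome_merge(src: torch.Tensor, dst: torch.Tensor, r: int):
    """Merge the r src tokens most similar to the dst set into their
    best-matching dst tokens by averaging. src: (n_src, d), dst: (n_dst, d)."""
    sim = F.normalize(src, dim=-1) @ F.normalize(dst, dim=-1).T
    score, match = sim.max(dim=-1)          # best dst index per src token
    merged = score.topk(r).indices          # the r most similar src tokens
    counts = torch.ones(len(dst), 1)
    out = dst.clone()
    for i in merged.tolist():               # average merged src into dst
        j = match[i].item()
        out[j] += src[i]
        counts[j] += 1
    out = out / counts
    kept = [i for i in range(len(src)) if i not in set(merged.tolist())]
    return torch.cat([src[kept], out], dim=0)   # token count reduced by r
```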
This approach, however, introduces two critical bottlenecks:

• Computational Complexity: The similarity matrix calculation, with O(n²) complexity, is costly in itself, especially when required at every step of the process.
• Quality Degradation: The merge-unmerge cycle inherent to ToMe can lead to a significant loss of image detail, particularly at higher merging ratios.

2.2 Training Free Enhancements

Our proposed token downsampling (ToDo) methodology extends the original ToMe approach, addressing its computational bottlenecks and quality degradation issues when applied to Stable Diffusion models. With ToDo we introduce two principal modifications: an optimized token merging method based on spatial contiguity, and a refined attention mechanism that removes the need for unmerging.

Optimized Merging Through Spatial Contiguity. We introduce a novel token merging strategy that leverages the inherent spatial contiguity of image tokens, recognizing that tokens in close spatial proximity exhibit higher similarity and thus provide a basis for merging without the extensive computation of pairwise similarities. We therefore employ a downsampling function D(·) using the nearest-neighbor algorithm [Bankman, 2008]; we note this approach is akin to strided convolutions, as shown in Figure 1. Formally, let T = {t_1, t_2, …, t_n} denote the original set of image tokens arranged in a two-dimensional grid reflecting their spatial relationships. The proposed downsampling operation D is applied to T to yield a reduced set of merged tokens T′:

$$ T' = D(T) = \{D(t_1), D(t_2), \dots, D(t_m)\}, \qquad m < n. $$

This enhancement mitigates the computational overhead associated with the pairwise similarity computation inherent in ToMe. By leveraging the assumption that spatially adjacent tokens are more likely to be similar, we bypass the need for O(n²) similarity calculations, instead employing a more computationally efficient O(n) downsampling operation.

Enhanced Attention Mechanism with Downsampling. To mitigate the information loss inherent to the unmerging process in conventional token merging approaches, we introduce a refinement to the attention mechanism within the transformer architecture [Vaswani et al., 2023]. This modification applies the downsampling operation D(·) to the keys K and values V of the attention mechanism while preserving the original queries Q. The modified attention function, with d_k denoting the dimensionality of the keys to ensure proper scaling within the softmax operation, is

$$ \text{Attention}(Q, K, V) = \text{softmax}\!\left(\frac{Q \cdot D(K)^{T}}{\sqrt{d_k}}\right) \cdot D(V). $$

This refinement preserves the integrity of the queries, maintaining the fidelity of the attention process while reducing the dimensionality of the matrices involved in the attention computation.

3 Experiments

Experimental Setup. For our empirical evaluation, we employ the finetuned DreamshaperV7 model [Luo et al., 2023], noted for its superior handling of the larger image dimensions that are central to this study. All experiments are conducted on a single A6000 GPU, using float16 precision and flash attention [Dao et al., 2022] for inference, as this has become the norm for many users. We use the DDIM sampler [Song et al., 2020] with 50 diffusion steps and a guidance scale of 7.5 [Team, 2024].
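The downsampled attention above can be sketched in a few lines. This is a minimal PyTorch illustration (shapes and the helper name are our own, not the authors' released code): queries stay at full resolution while keys and values are nearest-neighbor subsampled on the token grid.

```python
import torch
import torch.nn.functional as F

def todo_attention(q, k, v, h, w, factor=2):
    """Attention(Q, K, V) = softmax(Q D(K)^T / sqrt(d)) D(V), where D is a
    nearest-neighbor grid downsample by `factor`. q, k, v: (batch, h*w, d)."""
    b, n, d = k.shape

    def downsample(t):
        grid = t.transpose(1, 2).reshape(b, d, h, w)
        grid = F.interpolate(grid, scale_factor=1 / factor, mode="nearest")
        return grid.reshape(b, d, -1).transpose(1, 2)  # (b, n/factor^2, d)

    k_ds, v_ds = downsample(k), downsample(v)
    attn = torch.softmax(q @ k_ds.transpose(1, 2) / d ** 0.5, dim=-1)
    return attn @ v_ds          # full token count preserved for the output
```

Because the output keeps one row per query, no unmerge step is needed, which is exactly the point of the refinement described above.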
Each experiment involves averaging 10 generations, comparing ToDo against ToMe, with the baseline referring to standard generations without token merging. The benchmarked resolutions are 1024 × 1024, 1536 × 1536, and 2048 × 2048, across two token merging ratios, 0.75 and 0.89, which denote the proportion of tokens removed; these are equivalent to 2x and 3x downsampling, respectively. For the comparison in Figure 2 we also use a merge ratio of 0.9375 for the 2048 × 2048 images, equivalent to a 4x downsample.

Image Quality and Throughput. To assess the fidelity and detail preservation of the generated images, we used the Mean Squared Error (MSE) to quantify each method's deviation from the baseline, and a High Pass Filter (HPF), a standard tool for evaluating image sharpness and texture preservation [Gonzalez, 2009]. Our analysis, substantiated by Figure 2 and Table 1, demonstrates that our method not only closely mirrors the baseline in terms of MSE but also maintains comparable HPF values, underscoring its ability to retain image features while ensuring higher throughput, as depicted in Figure 3.

Figure 2 (caption): Qualitative comparison of attention methods with 25% of tokens at 1024×1024, 11% at 1536×1536, and 6% at 2048×2048, maintaining a consistent token count of 4096 post-merging.

Table 1: Metrics from various attention methods, averaged over 10 generations of different prompts at 1536 × 1536 resolution. MSE denotes the mean squared error relative to the baseline, while HPF represents the mean absolute magnitude after high-pass filtering.

Method | Merge Ratio | MSE | HPF
Baseline | — | — | 4.846
ToMe | 0.75 | 2.686×10^-2 | 4.022
ToMe | 0.89 | 2.671×10^-2 | 4.003
ToDo (ours) | 0.75 | 6.247×10^-3 | 4.887
ToDo (ours) | 0.89 | 9.207×10^-3 | 4.733

Figure 3 (caption): Inference throughput, measured in seconds, across resolutions using attention methods at various merge ratios, with bars representing the relative performance increase against the baseline.

Latent Feature Redundancy. We investigated latent feature redundancy in the Stable Diffusion U-Net by assessing the similarity among adjacent latent features. Extracting latent representations at various stages and noise levels, we constructed cosine similarity matrices, focusing on the proportion of tokens with top-3 similarities within a 3 × 3 area, and the highest, mean, and lowest similarities within 3 × 3 and 5 × 5 areas. We observed high similarity among neighboring tokens within the hidden features, along with notable trends, as seen in Figure 4. Similarity trends varied across different depths without a distinct pattern, possibly due to the increasing spatial compression and consequent reduction in information redundancy, with values diminishing as the denoising progresses, likely because diffusion models generate broad details first and refine them later.

Figure 4 (caption): Lowest cosine similarity between tokens in a 3 × 3 area across diffusion timesteps and U-Net locations, extracted from 10 generations of different prompts at 1024 × 1024. Timesteps out of 50 indicate noise reduction; Depth 0 is the initial resolution, Depth 1 is after 2x downsampling. Up/down denotes encoder/decoder blocks.
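A sketch of the two metrics is below. The MSE is standard; the HPF kernel is an assumption (the paper cites a standard high-pass filter without specifying it), so the 3×3 Laplacian here is illustrative only.

```python
import numpy as np
from scipy.ndimage import convolve

def mse(img: np.ndarray, ref: np.ndarray) -> float:
    """Mean squared deviation from the baseline generation."""
    return float(np.mean((img - ref) ** 2))

def hpf_magnitude(img: np.ndarray) -> float:
    """Mean absolute response to a high-pass kernel (sharpness proxy);
    assumes a 3x3 Laplacian, applied to a grayscale float image."""
    kernel = np.array([[0, -1, 0], [-1, 4, -1], [0, -1, 0]], dtype=float)
    return float(np.mean(np.abs(convolve(img, kernel))))
```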
" + }, + { + "url": "http://arxiv.org/abs/2208.10942v1", + "title": "Variable Dynamic Mode Decomposition for Estimating Time Eigenvalues in Nuclear Systems", + "abstract": "We present a new approach to calculating time eigenvalues of the neutron\ntransport operator (also known as $\\alpha$ eigenvalues) by extending the\ndynamic mode decomposition (DMD) to allow for non-uniform time steps. The new\nmethod, called variable dynamic mode decomposition (VDMD), is shown to be\naccurate when computing eigenvalues for systems that were infeasible with DMD\ndue to a large separation in time scales (such as those that occur in delayed\nsupercritical systems). The $\\alpha$ eigenvalues of an infinite medium neutron\ntransport problem with delayed neutrons and consequently having multiple, very\ndifferent relevant time scales are computed. Furthermore, VDMD is shown to be\nof similar accuracy to the original DMD approach when computing eigenvalues in\nother systems where the previously studied DMD approach can be used.", + "authors": "Ethan Smith, Ilham Variansyah, Ryan McClarren", + "published": "2022-08-20", + "updated": "2022-08-20", + "primary_cat": "physics.comp-ph", + "cats": [ + "physics.comp-ph" + ], + "main_content": "INTRODUCTION

The time behavior of neutron transport problems can be understood through the spectrum of the transport operator given by time eigenvalues [1], also known as α eigenvalues. The corresponding eigenfunctions for the time eigenvalues are angular fluxes and, therefore, are more challenging to compute than the more well-known k eigenvalue problem, where the eigenfunctions are scalar fluxes. The conventional approach to computing α eigenvalues is via the k-α iteration procedure [2, 3]. This method is ill-suited for many subcritical problems because the iteration procedure can introduce total cross-sections that are effectively negative. Moreover, k-α iterations can only calculate the eigenvalue farthest to the right in the complex plane. In a given scenario, especially at short times, other eigenvalues may be more important to the dynamics.

The dynamic mode decomposition (DMD) has been used in the neutron transport community for the purpose of estimating time eigenvalues in calculations [4] and experiments [5, 6], as well as to create reduced-order models [7], improve the convergence of power iteration in k-eigenvalue problems [8, 9], and accelerate source iteration [10, 11]. Originally introduced for fluid dynamics problems [12, 13], DMD takes the solution to a time-dependent transport problem and uses the solution at several time steps to estimate an approximate transport operator based solely on the available solutions. This approximate operator has eigenvalue-eigenvector pairs that are also eigenvalues and eigenvectors of the full transport operator. DMD is a fully data-driven method; this means that it will find the eigenmodes that are important in the system's evolution. Additionally, the approximate operator that DMD estimates can be used as a reduced-order model to evolve the system forward in time.

The DMD procedure requires that data snapshots be spaced at regular intervals (i.e., have the same time step between the snapshots). This is a significant drawback for transport problems with delayed neutrons or problems very near critical, as we will demonstrate. One would prefer to use variable time steps to resolve important transients without losing efficiency.
This was the motivation for developing a variable DMD algorithm that allows for irregularly spaced steps. In this paper we develop this method and show how it can be used with a variety of time integration techniques. The underlying idea is the same as in [4]: use solutions to a time-dependent transport problem to estimate the α eigenvalues of a system. We demonstrate on a variety of problems that our method can accurately estimate the eigenvalues of the system without the requirement of equal-sized time steps. One restriction of our approach is that it does require knowledge of the type of time discretization used in producing the solution.

II. THEORY OF A VARIABLE STEP DECOMPOSITION

We extend the DMD procedure to accept a variable time step for determining α eigenvalues by leveraging the fact that we know which time discretization method was used to create the data matrix. We begin with a generic, linear system of differential equations of the form

$$ \frac{d}{dt} y(t) = A y(t), \tag{1} $$

with initial conditions y(0) = y_0. The vector y is of length M, and the matrix A is of size M × M. In neutron transport applications A represents a discretized transport operator, but for most neutron transport calculations this matrix is never explicitly formed. The vector y contains the spatial, angular, and energy degrees of freedom in the solution. (Footnote: The DMD method, and our extension, can be applied to nonlinear problems. However, because we are interested primarily in neutron transport problems, beginning from a linear system is natural.)

To develop our variable DMD method, we consider common, implicit time integration procedures for Eq. (1). The backward Euler method applied to this equation is

$$ \frac{y^{n+1} - y^n}{t^{n+1} - t^n} = A y^{n+1}. \qquad \text{(Backward Euler)} \tag{2} $$

Here the superscripts denote a time step number: y^n ≈ y(t^n) is the approximation of the solution at time t^n. The time step size is the difference between t^{n+1} and t^n. Commonly, this equation would be factored to show how to produce y^{n+1} from y^n; however, the form in Eq. (2) will be most useful to us in developing a DMD-like procedure for approximating the operator A. The Crank-Nicolson [14] and BDF-2 [15, Chap. V.1] methods applied to Eq. (1) yield the following equations:

$$ \frac{y^{n+1} - y^n}{t^{n+1} - t^n} = \frac{1}{2} A \left(y^{n+1} + y^n\right), \qquad \text{(Crank-Nicolson)} \tag{3} $$

$$ \frac{y^{n+1} - \frac{4}{3} y^n + \frac{1}{3} y^{n-1}}{\frac{2}{3}\left(t^{n+1} - t^n\right)} = A y^{n+1}. \qquad \text{(BDF-2)} \tag{4} $$

For each of these methods we can write the update as

$$ u^{n+1} = A v^n, \tag{5} $$

where

$$ u^{n+1} = \frac{1}{t^{n+1} - t^n} \begin{cases} y^{n+1} - y^n & \text{Backward Euler or Crank-Nicolson} \\ \frac{3}{2}\left(y^{n+1} - \frac{4}{3} y^n + \frac{1}{3} y^{n-1}\right) & \text{BDF-2} \end{cases} \tag{6} $$

and

$$ v^n = \begin{cases} y^{n+1} & \text{Backward Euler or BDF-2} \\ \frac{1}{2}\left(y^{n+1} + y^n\right) & \text{Crank-Nicolson} \end{cases}. \tag{7} $$

If we consider the repeated application of the time integration method from time t^0 = 0 over N time steps, we can collect, via concatenation, the vectors u^n and v^n into matrices of size M × N, U_+ and V_-, defined as

$$ U_+ = \begin{bmatrix} u^N & u^{N-1} & \cdots & u^1 \end{bmatrix}, \qquad V_- = \begin{bmatrix} v^{N-1} & v^{N-2} & \cdots & v^0 \end{bmatrix}. \tag{8} $$

Using these definitions we can write

$$ U_+ = A V_-. \tag{9} $$

We note that BDF-2 is not self-starting because it needs the two previous solutions. Either backward Euler or Crank-Nicolson can be used to compute u^1 from the initial condition and still fit the form we have here.
In our formulation, the operator A does not depend on time, so the time step size does not affect it; all of the information regarding the time step sizes is contained in the vectors u and v. We now proceed as is standard for DMD. We first take the thin singular value decomposition (SVD) of V_- and write this as

$$ V_- = L S R^T, \tag{10} $$

where L is of size M × r, S is a diagonal matrix of size r × r with positive entries called singular values, and R is of size N × r, where r is the number of non-zero singular values (or singular values with a magnitude larger than some threshold). The matrices L and R have the property that

$$ L^T L = I_r, \qquad R^T R = I_r, \tag{11} $$

where I_K is an identity matrix of size K × K. Using this property, we right multiply Eq. (9) by R S^{-1} and then left multiply by L^T to get

$$ L^T U_+ R S^{-1} = L^T A L \equiv \tilde{A}. \tag{12} $$

The matrix $\tilde{A}$ is an r × r approximation to the operator A. Moreover, we can compute this approximation using only the known data matrices U_+ and V_-, as indicated by the left-hand side of Eq. (12). As shown in previous work [4, 13], the eigenvalues of L^T A L are also eigenvalues of A, and if w is an eigenvector of L^T A L, then Lw is an eigenvector of A. These properties allow us to estimate eigenvalues/eigenvectors of A without any knowledge of the operator itself, other than the results of calculating solutions using one of the time discretization schemes above. Equation (12) has the same form as the standard DMD approximation, except we have arrived at it using different data matrices. We call this approach the variable dynamic mode decomposition (VDMD). One key difference between VDMD and the standard DMD method is that we require knowledge of how the solution is updated, because the time integration method and the time step sizes influence the matrices U_+ and V_-. This means that we cannot apply this method directly to experimental measurements without having an approximation of the time derivative of the measurement. Further investigation of the application of this approach to measured data should be the subject of future research.

II.A. Demonstration on a simple problem

Consider the system

$$ \frac{\partial}{\partial t} \begin{bmatrix} y_1 \\ y_2 \end{bmatrix} = \begin{bmatrix} 0 & 1 \\ -\omega^2 & -\lambda \end{bmatrix} \begin{bmatrix} y_1 \\ y_2 \end{bmatrix} = A \begin{bmatrix} y_1 \\ y_2 \end{bmatrix}, \tag{13} $$

with initial conditions y_1(0) = 1 and y_2(0) = 0. This system is the first-order system corresponding to the second-order ODE

$$ y_1'' + \lambda y_1' + \omega^2 y_1 = 0, \qquad y_1(0) = 1, \quad y_1'(0) = 0, \tag{14} $$

with solution

$$ y_1(t) = \left( \cos(w t) + \frac{\sin(w t)}{w \tau} \right) e^{-t/\tau}, \tag{15} $$

where

$$ \tau = \frac{2}{\lambda}, \qquad w^2 = \omega^2 - \frac{\lambda^2}{4}. \tag{16} $$

For this problem, the matrix A has eigenvalues given by $\left(-\lambda \pm \sqrt{\lambda^2 - 4\omega^2}\right)/2$. We will solve system (13) using the various time integrators mentioned above. Setting λ = 1/10 and ω = 13√29/20 ≈ 3.500357, we can analytically determine the eigenvalues of A to be −0.05 ± 3.5i. To demonstrate the VDMD method we solve this problem with 20 logarithmically spaced time steps starting from a step size of 10^-3 and increasing to a final step size of 3.0 (i.e., the step size increases by a factor of about 1.524 each step). As can be seen from Figure 1, the various numerical solutions to this problem do not agree with the analytic solution at these large time step sizes.
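To make the procedure concrete, the following self-contained NumPy sketch reproduces this backward Euler demonstration: it builds the data matrices of Eqs. (6)-(8) while integrating and then recovers the eigenvalues via Eq. (12).

```python
import numpy as np

lam, omega = 0.1, 13 * np.sqrt(29) / 20        # exact eigenvalues: -0.05 ± 3.5i
A = np.array([[0.0, 1.0], [-omega**2, -lam]])
I = np.eye(2)

dts = np.geomspace(1e-3, 3.0, 20)              # 20 log-spaced step sizes
y = np.array([1.0, 0.0])                       # y1(0) = 1, y2(0) = 0
U_cols, V_cols = [], []
for dt in dts:
    y_new = np.linalg.solve(I - dt * A, y)     # backward Euler step, Eq. (2)
    U_cols.append((y_new - y) / dt)            # u^{n+1} of Eq. (6)
    V_cols.append(y_new)                       # v^n of Eq. (7)
    y = y_new

U_plus = np.column_stack(U_cols[::-1])         # ordering of Eq. (8)
V_minus = np.column_stack(V_cols[::-1])
L, S, Rt = np.linalg.svd(V_minus, full_matrices=False)
A_tilde = L.T @ U_plus @ Rt.T @ np.diag(1.0 / S)   # Eq. (12)
print(np.linalg.eigvals(A_tilde))              # ≈ -0.05 ± 3.5i to machine precision
```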
Nevertheless, applying the VDMD method to these numerical solutions produces eigenvalues that agree with the exact values to within machine precision (i.e., relative errors on the order of 10^-14). Other numerical experiments indicate that with as few as 3 time steps we can approximate the eigenvalues to machine precision.

Fig. 1 (caption): Data used to estimate the eigenvalues of the damped oscillator problem using VDMD (analytic, backward Euler, Crank-Nicolson, and BDF-2 solutions of y(t)). The solutions shown here used 20 logarithmically spaced time steps where the first step was of size 0.001 and the final step was of size 3. Despite the numerical error, VDMD is able to estimate the eigenvalues of the underlying operator to machine precision.

This is an important result because it indicates that VDMD, because it has knowledge of the numerical method used to approximate the solution, is able to estimate properties of A even when the numerical solutions in the snapshot matrices have large amounts of time discretization error.

III. VDMD AND TIME EIGENVALUES FOR NEUTRONICS

Now that we have presented the VDMD method in general, we turn to the neutron transport problem. We begin with the time-dependent transport equation including delayed neutrons [1]:

$$ \frac{\partial \psi}{\partial t} = A \psi + \sum_{i=1}^{I} \frac{\chi_{di}(E)}{4\pi} \lambda_i C_i(x, t), \tag{17} $$

$$ \frac{\partial C_i}{\partial t} = \int_0^{\infty} dE\, \beta_i \nu \Sigma_f(E)\, \phi(x, E, t) - \lambda_i C_i(x, t), \qquad i = 1, \dots, I, \tag{18} $$

where ψ(x, Ω, E, t) is the angular flux at position x ∈ R³, in direction Ω ∈ S², at energy E and time t, and C_i(x, t) is the delayed precursor density of flavor i. The transport operator A is given by A = v(E)(−Ω·∇ − Σ_t + S + F), with S and F the scattering and fission operators:

$$ S\psi = \int_{4\pi} d\Omega' \int_0^{\infty} dE'\, \Sigma_s(\Omega' \to \Omega, E' \to E)\, \psi(x, \Omega', E', t), \tag{19} $$

$$ F\psi = \frac{\chi_p(E)}{4\pi} \int_0^{\infty} dE'\, (1 - \beta)\, \nu \Sigma_f(E')\, \phi(x, E', t), \tag{20} $$

where Σ_s(Ω'→Ω, E'→E) is the double-differential scattering cross-section from direction Ω' and energy E' to direction Ω and energy E, νΣ_f(E') is the fission cross-section times the expected number of fission neutrons at energy E', χ_p(E) is the probability of a fission neutron being emitted with energy E, χ_{di}(E) is the probability of a delayed neutron of flavor i being emitted with energy E, β_i is the fraction of fission neutrons that come from delayed flavor i (with Σ_i β_i = β), and λ_i is the decay constant for precursor flavor i. The scalar flux φ(x, E, t) is defined as the integral of the angular flux over the unit sphere,

$$ \phi(x, E, t) = \int_{4\pi} d\Omega\, \psi(x, \Omega, E, t). \tag{21} $$

We study the use of VDMD on the transport equation with discretizations given by the multigroup method [1] in energy, discrete ordinates in angle, and various spatial discretizations. To connect with Eq. (1), the time-dependent transport equation can be written as a system of differential equations

$$ \frac{\partial \Psi}{\partial t} = A \Psi(t), \tag{22} $$

where Ψ(t) is a time-dependent vector of the discrete values of the angular flux ψ at each space, energy, and angle degree of freedom, together with the delayed neutron precursor densities at each spatial degree of freedom.
The discrete transport/delayed neutron operator matrix is written as A. Notice that the time-eigenvalue problem, which supposes a form for Ψ(t) of e^{αt} Ψ_α, leads to the eigenvalue problem

$$ \alpha \Psi_\alpha = A \Psi_\alpha. \tag{23} $$

From this form we can directly apply the VDMD method as detailed above:

1. Take N backward Euler (or Crank-Nicolson or BDF-2) steps of the discrete transport problem and form the matrices U_+ and V_- [Eq. (8)].
2. Compute the SVD of V_- and form $\tilde{A}$ [Eq. (12)].
3. Compute the eigenvalues of $\tilde{A}$ and the associated eigenvectors.

Previously published versions of DMD for estimating time eigenvalues [4] required a fixed-sized time step to generate the data matrices. VDMD does not have this constraint, though it does require knowledge of the time integration method used. The ability to handle variable-sized time steps is an important advance because, for problems where delayed neutrons are significant, computing the prompt and delayed modes of the system would otherwise require minuscule steps to capture the prompt scales and a large number of these steps to integrate to times when delayed neutrons are significant, as we demonstrate below. The algorithm has flexibility in that the initial condition for the time-dependent calculation can be chosen based on the analysis being performed. For example, to model an experiment and extract the important eigenmodes, one would initialize the problem with the appropriate experimental initial condition. Alternatively, to compute the dominant eigenmodes in a system, one could use a random initial condition to ensure that all of the modes are excited in the system.

IV. NUMERICAL RESULTS

IV.A. Infinite Medium Problem with Delayed Neutrons

To demonstrate the need for a variable time step method, we consider the neutron flux solution of a twelve-group problem with six delayed neutron precursor groups and a spherical buckling approximation for the leakage. This problem results in a transport operator that is an 18 × 18 matrix, for which we can use numerical linear algebra software to estimate the eigenvalues. A subcritical and a delayed supercritical case are considered by modifying the radius of the sphere in the buckling. The numerical solution is computed using both backward Euler and Crank-Nicolson time integration, and logarithmically spaced time steps are used. The systems considered had a radius of 11.7335 cm for the subcritical case and a radius of 11.735 cm for the supercritical case. These unsteady problems were initialized with a single neutron per cm³ in the highest energy group at time zero (to approximate probing the system with a fast-neutron source) and used a logarithmically spaced time grid to a final time of 10³ s, using 200 time steps; the first time step is of size Δt = 10^-11 s.

A challenge of using the traditional DMD algorithm to analyze these kinds of problems lies in the very different time scales over which the features of the system manifest. As an example, the prompt multiplication peak occurs around ten nanoseconds into the problem, while the delayed multiplication continues until well into the hundreds of seconds. This is most evident in Fig. 3, where the supercritical behavior of the system is not evident until over one hundred seconds into the evolution of the problem.
Fig. 2 (caption): Solution in terms of the number density of neutrons, n(t) = φ(t)/v(t), for the subcritical sphere problem. The black line plots the analytical solution, the red dashed line plots the backward Euler solution, and the green dash-dot line is the Crank-Nicolson solution. Vertical blue lines are negative eigenperiods, the inverses of the negative parts of the α eigenvalues.

Fig. 3 (caption): Solution in terms of the number density of neutrons, n(t), for the delayed supercritical sphere problem. The labeling is identical to Figure 2, with the addition of a vertical red line to denote the positive eigenperiod.

Figures 2 and 3 plot the neutron population in the sphere over time. Resolving both the early- and late-time features of this delayed subcritical system would be extremely expensive using a uniform time step; the present example uses just 200 time points. In these figures the black line corresponds to the exact solution based on the matrix exponential, the red dashed line is the backward Euler solution, and the green dash-dot line is the Crank-Nicolson solution. The vertical blue lines correspond to the magnitudes of the periods of the exact eigenvalues (|α|^-1), where blue and red respectively indicate negative and positive values. We also note that the Crank-Nicolson solution is closer to the analytic solution obtained using a matrix exponential. The biggest errors in the numerical solutions occur during the rapid transition between 10^-6 and 10^-4 seconds.

The exact eigenvalues, along with the relative errors in pcm (1 pcm = 10^-5) for the eigenvalues estimated with VDMD and either backward Euler or Crank-Nicolson, are tabulated for both the subcritical and supercritical cases in Table I. In the table we can see that for all 18 eigenvalues the VDMD estimates are within 1 pcm of the reference. We also observe that there does not appear to be a clear benefit to using Crank-Nicolson over backward Euler, despite the fact that Crank-Nicolson is a second-order accurate method. This speaks to part of the benefit of VDMD: VDMD is aware of the discretization, so it can reasonably approximate the operator that generated the data.
Table I: VDMD eigenvalue errors, in pcm, for the subcritical and delayed supercritical spheres using backward Euler (BE) and Crank-Nicolson (CN).

Subcritical Analytic (s^-1) | BE Error | CN Error | Supercritical Analytic (s^-1) | BE Error | CN Error
-1.67621×10^9 | 4.12074×10^-7 | 9.15307×10^-7 | -1.67597×10^9 | 3.5385×10^-7 | 3.48643×10^-7
-1.46266×10^9 | 1.04743×10^-6 | 2.41534×10^-6 | -1.46245×10^9 | 8.39863×10^-7 | 8.98227×10^-7
-1.17322×10^9 | 7.94049×10^-7 | 2.67553×10^-6 | -1.17306×10^9 | 1.38875×10^-6 | 9.82629×10^-7
-7.9423×10^8 | 1.40458×10^-7 | 1.9733×10^-6 | -7.94119×10^8 | 3.57139×10^-7 | 7.95791×10^-7
-4.99878×10^8 | 5.88453×10^-7 | 1.61807×10^-6 | -4.99804×10^8 | 8.25479×10^-7 | 5.93383×10^-7
-2.92659×10^8 | 4.74705×10^-7 | 1.42446×10^-6 | -2.92606×10^8 | 1.03237×10^-7 | 5.74564×10^-7
-1.66397×10^8 | 1.48477×10^-8 | 1.43941×10^-6 | -1.66365×10^8 | 2.66235×10^-7 | 5.597×10^-7
-9.36139×10^7 | 9.43281×10^-8 | 1.49989×10^-6 | -9.35967×10^7 | 5.99602×10^-7 | 5.64083×10^-7
-5.28136×10^7 | 2.51533×10^-7 | 1.89254×10^-6 | -5.28057×10^7 | 3.02384×10^-6 | 5.04243×10^-7
-3.3247×10^7 | 1.57121×10^-6 | 3.51607×10^-6 | -3.32436×10^7 | 5.85692×10^-6 | 1.14324×10^-6
-2.64094×10^7 | 3.76945×10^-5 | 1.03566×10^-5 | -2.64085×10^7 | 0.000125423 | 9.61842×10^-6
-609650 | 2.07949×10^-8 | 5.25697×10^-8 | -540139 | 9.785×10^-9 | 9.24618×10^-8
-2.6143 | 0.0002196 | 0.0015871 | -2.60143 | 0.000776098 | 0.00106403
-0.743522 | 0.00158942 | 0.00683878 | -0.732142 | 0.00555557 | 0.00462957
-0.196729 | 0.0077539 | 0.0220177 | -0.187957 | 0.0212489 | 0.0141573
-0.0766105 | 0.0123597 | 0.0339459 | -0.0699507 | 0.0301533 | 0.0208952
-0.0158092 | 0.0221887 | 0.0445097 | -0.0149391 | 0.0202711 | 0.018085
-0.00467553 | 0.0408661 | 0.0300762 | 0.00441678 | 0.00430661 | 0.000104375

IV.B. Modak and Gupta Problem

To demonstrate that VDMD is also applicable to numerical solutions of transport problems, and to compare it with the original DMD formulation, we consider a transport problem first published by Modak and Gupta [16], with semi-analytic results published by Kornreich and Parsons [17]. Unlike the infinite medium problem, the exact time evolution operator cannot be explicitly formed. The problem is defined as a 10 mean-free-path, non-multiplying slab of two materials, with Σ_S = 10 in one material and Σ_S = 9 in the other. The other nuclear properties are defined as Σ_T = 10, νΣ_f = 0, and χ = 0. The grain size is varied for separate runs of this simulation; this parameter defines the fraction of the overall length that each slice of material occupies compared to the whole. These material slices are arranged in alternating order. The zero grain size case indicates a homogeneous material with average properties. Given that we did not see a large benefit in using Crank-Nicolson in the previous problem, in this problem we exclusively use backward Euler time discretization. Previous work has demonstrated that accurately estimating eigenvalues for these problems requires high resolution in space and angle.
The Modak and Gupta problem was solved with VDMD using an S196 discrete ordinates approach with 1000 spatial grid points and 101 logarithmically spaced time steps, with the first time step ending at t = 10^-5 and the last ending at t = 100; results using the original, equi-spaced time step formulation of DMD used 101 equally spaced time steps over a time range of 100, with the same spatial and angular discretizations. We compare the results of the new VDMD algorithm and the existing DMD algorithm. The problem is initialized with a random initial condition for the angular flux in all angles and at all positions, sampled uniformly between zero and one, to ensure that all eigenmodes are excited. The four largest real eigenvalues from these numerical experiments are tabulated in Table II. Bold numbers in the original table indicate agreement of the digit in that decimal place with the published results [17]. From the table we notice that VDMD and DMD have nearly identical performance in the number of digits matching the semi-analytic results. This demonstrates that we have not sacrificed accuracy in formulating a variable step version of DMD.

Table II: Computed eigenvalues for the Modak and Gupta problem using VDMD and DMD compared with the semi-analytic results.

Grain Size | Semi-Analytic | VDMD | DMD
0.5 | -0.551429 | -0.550814 | -0.550814
 | -1.71149 | -1.70646 | -1.70645
 | -2.94399 | -2.94235 | -2.94235
 | -5.28234 | -5.16961 | -5.17338
0.25 | -0.703578 | -0.704010 | -0.704010
 | -1.45315 | -1.45044 | -1.45044
 | -3.07282 | -3.07073 | -3.07073
 | -5.26925 | -5.15114 | -5.15081
0.1 | -0.749672 | -0.749256 | -0.749255
 | -1.56062 | -1.55665 | -1.55665
 | -2.96323 | -2.96002 | -2.96002
 | -5.18772 | -5.09398 | -5.11195
0.05 | -0.758893 | -0.757022 | -0.757022
 | -1.56062 | -1.56447 | -1.56447
 | -2.97899 | -2.97842 | -2.97842
 | -5.21764 | -5.10019 | -5.10619
0 | -0.763507 | -0.763508 | -0.763508
 | -1.57201 | -1.57201 | -1.57202
 | -2.98348 | -2.98352 | -2.98352
 | -5.10866 | -5.13648 | -5.47278

IV.C. Sood Two Group Transport Problem

The final problem we consider, a two-group critical-slab problem from [18, Problem 59], involves neutron transport in a two-material, two-region system consisting of a fuel region and a reflector region. The original problem, being a k-eigenvalue benchmark, does not specify neutron speeds; we set the neutron speeds in the two groups to be 1 and 10 cm/µs. The system is designed to be critical, and the rightmost eigenvalue is expected to be non-zero, but small, as a consequence of the numerical solution.

Table III: Comparison of eigenvalues for the critical slab problem computed using k-α iterations and VDMD.

Method | k-α Eigenvalue (µs^-1) | VDMD Eigenvalue (µs^-1)
S8 | -1.058231×10^-5 | -1.058209×10^-5
S16 | -4.265406×10^-6 | -4.265209×10^-6
S64 | -2.373016×10^-6 | -2.373179×10^-6
S128 | -2.281708×10^-6 | -2.281636×10^-6
The ability of a data-driven method to capture this extremely small, negative eigenvalue necessarily requires computing the solution out to a very long time. Instead of requiring a correspondingly very large number of time steps, allowing a variable time step greatly reduces the computational and storage costs associated with these kinds of problems. This problem was simulated using S_N approximations with varying numbers of angles, 100 spatial cells, and 300 logarithmically spaced time steps, having an initial time step of ≈0.001 seconds and ending with a time step of ≈0.1 seconds. The largest real eigenvalues recovered by VDMD are tabulated in Table III against the α eigenvalue estimated by the standard k-α iteration [19]. We note that VDMD estimates other eigenvalues as well, while k-α iterations are limited to computing the rightmost eigenvalue in the complex plane. We are able to use k-α iterations for this problem because it is close to critical; if we reduced the slab size, the system would become far from critical, making the k-α iteration procedure unworkable [20, 17]. While the exact dominant eigenvalue for this problem is 0, the numerical error in the solution (due to the spatial, time, and angular discretizations) means the eigenvalues for a given numerical instantiation of the problem are expected to be “close” to zero. This is indeed what we observe in Table III: all of the eigenvalues estimated are subcritical and near zero. Note that as the number of angles in the discrete ordinates calculation increases, the eigenvalues get closer to zero. The results from the k-α iteration and from VDMD agree to 4 or more digits in all calculations." + }, + { + "url": "http://arxiv.org/abs/2106.11246v2", + "title": "LEAP: Scaling Numerical Optimization Based Synthesis Using an Incremental Approach", + "abstract": "While showing great promise, circuit synthesis techniques that combine\nnumerical optimization with search over circuit structures face scalability\nchallenges due to a large number of parameters, exponential search spaces, and\ncomplex objective functions. The LEAP algorithm improves scaling across these\ndimensions using iterative circuit synthesis, incremental re-optimization,\ndimensionality reduction, and improved numerical optimization. LEAP draws on\nthe design of the optimal synthesis algorithm QSearch by extending it with an\nincremental approach to determine constant prefix solutions for a circuit. By\nnarrowing the search space, LEAP improves scalability from four to six qubit\ncircuits. LEAP was evaluated with known quantum circuits such as QFT and\nphysical simulation circuits like the VQE, TFIM, and QITE. LEAP can compile\nfour qubit unitaries up to $59\\times$ faster than QSearch and five and six\nqubit unitaries with up to $1.2\\times$ fewer CNOTs compared to the QFAST\npackage. LEAP can reduce the CNOT count by up to $36\\times$, or $7\\times$ on\naverage, compared to the CQC Tket compiler. Despite its heuristics, LEAP has\ngenerated optimal circuits for many test cases with a priori known solutions.\nThe techniques introduced by LEAP are applicable to other\nnumerical-optimization-based synthesis approaches.", + "authors": "Ethan Smith, Marc G. Davis, Jeffrey Larson, Ed Younis, Costin Iancu, Wim Lavrijsen", + "published": "2021-06-21", + "updated": "2021-12-17", + "primary_cat": "quant-ph", + "cats": [ + "quant-ph", + "cs.ET" + ], + "main_content": "1 Introduction

Quantum synthesis techniques generate circuits from high-level mathematical descriptions of an algorithm. They can provide a powerful tool for circuit optimization, hardware design exploration, and algorithm discovery. An important quality metric of synthesis, and of compilers in general, is circuit depth, which relates directly to the program performance on hardware. Short-depth circuits are especially important for noisy intermediate-scale quantum (NISQ) era devices, characterized by limited coherence time and noisy gates.
Here synthesis provides a critical capability, enabling experimentation where only the shortest-depth circuits provide usable outputs. In general, two concepts are important when thinking about synthesis algorithms [1-6]: circuit structure captures the application of gates on a “physical” qubit link, while function captures the gate operations, for example, the rotation angle Rz(θ). Recently introduced techniques [6, 7] can generate short-depth circuits in a topology-aware manner by combining numerical optimization of parameterized gate representations (e.g., U3) to determine function together with search over circuit structures. Regarding circuit depth, their efficacy surpasses that of traditional optimizing compilers such as IBM Qiskit [8] and CQC Tket [9], or of other available synthesis tools such as UniversalQ [10]. (The UniversalQ algorithms have recently been incorporated into IBM Qiskit; for brevity, in the rest of this paper we refer to it as Qiskit-synth.)

An exemplar of synthesis approaches is QSearch [6], which provides optimal-depth synthesis and has been shown to match known optimal quantum algorithm implementations for circuits such as QFT [11]. QSearch grows a circuit by adding layers of parameterized gates and permuting gate placement at each link, building on the previous best placements to form a circuit structure. A numerical optimizer is run on each candidate circuit structure to instantiate the function that “minimizes” a score (a distance from the target based on the Hilbert-Schmidt norm). This score guides the A* search algorithm [12] in extending and evaluating the next partial solution. The QSearch behavior is canonical for numerical-optimization-based synthesis [3, 4, 6]. While providing good-quality results, however, these techniques face scalability challenges: (1) the number of parameters to optimize grows with circuit depth; (2) the number of intermediate solutions to consider is exponential; and (3) the objective function for optimization is complex, and optimizers may get stuck in local minima.

LEAP (Larger Exploration by Approximate Prefixes) has been designed to improve the scalability of QSearch, and it introduces several novel techniques directly extensible to the broader class of search- or numerical-optimization-based synthesis.

Prefix Circuit Synthesis: Designed to improve scaling, LEAP prunes the search space by limiting backtracking depth and by coarsening the granularity of the backtrack steps. Our branch-and-bound algorithm monitors progress during search and employs “execution-driven” heuristics to decide which partial solutions are good prefix candidates for the final solution. Whenever a prefix is chosen, the question is whether to reuse the structure (gate placement) or the structure and function (gate instantiation) together. The former approach prunes the search space, while the latter prunes both the search and parameter spaces.

Incremental Re-synthesis: The end result of incremental prefix synthesis (or other divide-and-conquer methods, partitioning techniques, etc.) is that circuit pieces are processed in disjunction, with the potential of missing the global optimum. Intuitively, LEAP gravitates toward the solution by combining local optimization on disjoint sub-circuits. By chopping and combining pieces of the final circuit, we can create new, unseen sub-circuits for the optimization process.
Overall, this technique is designed to improve the solution quality for any divide-and-conquer or other hierarchical approach. Dimensionality Reduction: This technique could improve both scalability and solution quality. QSearch and LEAP require sets of gates that can fully describe the Hilbert subspace explored by the input transformation. This approach ensures convergence, but in many cases it may over\ufb01t the problem. We provide an algorithm to delete any parameterized gates that do not contribute to the solution, thereby reducing the dimension of the optimization problems. When applied directly to the \ufb01nal solution, dimensionality reduction may improve the solution quality by deleting single-qubit gates. Dimensionality reduction may also be applied in conjunction with pre\ufb01x circuit synthesis, improving both scalability and solution quality. Multistart Numerical Optimization: This technique a\ufb00ects both scalability and the quality of the solution. Any standalone numerical optimizer is likely to have a low success rate when applied to problem formulations that involve quantum circuit parameterizations. Multistart [13] improves on the success rate and quality of solution (avoids local minima) by running multiple numerical optimizations in conjunction. Each individual multi-optimization step may become slower, but improved solutions may reduce the chance of missing an optimal solution, causing further search expansion. LEAP has been implemented as an extension to QSearch, and it has been evaluated on traditional \u201cgates\u201d such as mul and adder, as well as full-\ufb02edged algorithms such as QFT [11], HLF [14], VQE [15], TFIM [16, 17], and QITE [18]. We compare its behavior with state-of-the-art synthesis approaches: QSearch, QFAST [7], Tket [9], and Qiskit-synth [10]. While QSearch scales up to four qubits, LEAP can compile fourqubit unitaries up to 59\u00d7 faster than QSearch and scales up to six qubits. On well-known quantum circuits such as the Variational Quantum Eigensolver (VQE), the Quantum Fourier Transformation (QFT), and physical simulation circuits such as the Transverse Field Ising Model (TFIM), LEAP with re-synthesis can reduce the CNOT count by up to 48\u00d7, or 11\u00d7 on average. Our heuristics rarely a\ufb00ect solution quality, and LEAP can frequently match optimal-depth solutions. At \ufb01ve and six qubits, LEAP synthesizes circuits with up to 1.19\u00d7 fewer CNOTs on average compared with QFAST, albeit with an average 3.55\u00d7 performance penalty. LEAP can be one order of magnitude slower than Qiskit-synth while providing two or more orders of magnitude shorter circuits. Compared with Tket, LEAP reduces the depth on average by 7.70\u00d7, while taking signi\ufb01cantly longer in runtime. All of our techniques a\ufb00ect behavior and performance in a nontrivial way: 2 \f\u2022 Compared with QSearch, pre\ufb01x synthesis reduces by orders of magnitude the number of partial solutions explored, leading to signi\ufb01cant speedup. \u2022 Incremental re-synthesis reduces circuit depth by 15% on average, albeit with large increases in running time. \u2022 Dimensionality reduction eliminates up to 40% of U3 gates (parameters) and shortens the circuit critical path. \u2022 Multistart increases the optimizer success rate from 15% (best value observed for any standalone optimizer) to 99%. For a single optimization run, however, multistart is up to 10\u00d7 slower than the underlying numerical optimizer. 
Overall, we believe LEAP provides a very competitive circuit optimizer for circuits on NISQ devices up to six qubits. We believe that our techniques can be easily generalized or transferred directly to other algorithms based on the search of circuit structures or numerical optimization. For example, re-synthesis, dimensionality reduction, and multistart are directly applicable to QFAST; and re-synthesis is applicable to Qiskit-synth. We can expect that synthesis techniques using divide-and-conquer or partitioning methods will be mandatory for scalability to the number of qubits (in thousands) provided by future near-term processors. Our techniques provide valuable information to these budding approaches. The rest of this paper is structured as follows. In Section 2 we describe the problem and its challenges. The proposed solutions are discussed in Sections 3 through 6. The experimental evaluation is presented in Section 7. In Section 9 we discuss the implications of our approach. Related work is presented in Section 10. In Section 11 we briefly summarize our conclusions. 2 Background In quantum computing, a qubit is the basic unit of quantum information. The general quantum state is represented by a linear combination of two orthonormal basis states (basis vectors). The most common basis is the equivalent of the 0 and 1 values used for bits in classical information theory, respectively $|0\rangle = \begin{pmatrix} 1 \\ 0 \end{pmatrix}$ and $|1\rangle = \begin{pmatrix} 0 \\ 1 \end{pmatrix}$. The generic qubit state is a superposition of the basis states, namely, $|\psi\rangle = \alpha|0\rangle + \beta|1\rangle$, with complex amplitudes $\alpha$ and $\beta$, such that $|\alpha|^2 + |\beta|^2 = 1$. The prevalent model of quantum computation is the circuit model introduced in [19], where information carried by qubits (wires) is modified by quantum gates, which mathematically correspond to unitary operations. A complex square matrix $U$ is unitary if its conjugate transpose $U^*$ is its inverse, that is, $UU^* = U^*U = I$. In the circuit model, a single-qubit gate is represented by a $2 \times 2$ unitary matrix $U$. The effect of the gate on the qubit state is obtained by multiplying the $U$ matrix with the vector representing the quantum state, $|\psi'\rangle = U|\psi\rangle$. The most general form of the unitary for a single-qubit gate is the \u201ccontinuous\u201d or \u201cvariational\u201d gate representation: $U3(\theta, \phi, \lambda) = \begin{pmatrix} \cos\frac{\theta}{2} & -e^{i\lambda}\sin\frac{\theta}{2} \\ e^{i\phi}\sin\frac{\theta}{2} & e^{i(\lambda+\phi)}\cos\frac{\theta}{2} \end{pmatrix}$. A quantum transformation (algorithm, circuit) on $n$ qubits is represented by a unitary matrix $U$ of size $2^n \times 2^n$. A circuit is described by an evolution in space (application on qubits) and time of gates. Figure 1 shows an example circuit that applies single-qubit and CNOT gates on three qubits. Circuit Synthesis: The goal of circuit synthesis is to decompose unitaries from $SU(2^n)$ into a product of terms, where each individual term (e.g., from $SU(2)$ and $SU(4)$) captures the application of a quantum gate on individual qubits. This is depicted in Figure 1. The quality of a synthesis algorithm is evaluated by the number of gates in the resulting circuit and by the solution distinguishability from the original unitary. 3 \fCircuit length provides one of the main optimality criteria for synthesis algorithms: shorter circuits are better. 
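As a concrete reference for the U3 parameterization above, here is a minimal numpy rendering (an illustration, not code from the paper):

```python
import numpy as np

def u3(theta, phi, lam):
    """The general single-qubit gate in the parameterization shown above."""
    return np.array([
        [np.cos(theta / 2), -np.exp(1j * lam) * np.sin(theta / 2)],
        [np.exp(1j * phi) * np.sin(theta / 2),
         np.exp(1j * (lam + phi)) * np.cos(theta / 2)],
    ])

U = u3(0.3, 1.2, -0.7)
assert np.allclose(U @ U.conj().T, np.eye(2))  # unitarity check: U U^dagger = I
```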
CNOT count is a direct indicator of overall circuit length, since the number of single-qubit generic gates introduced in the circuit is proportional to a constant given by decomposition (e.g., ZXZXZ) rules. Since CNOT gates have low fidelity on NISQ devices, state-of-the-art approaches [1, 2] directly attempt to minimize their count. Longer-term, single-qubit gate count (and circuit critical path) is likely to augment the quality metric for synthesis. Synthesis algorithms use distance metrics to assess the solution quality. Their goal is to minimize $\|U - U_S\|$, where $U$ is the unitary that describes the transformation and $U_S$ is the computed solution. They choose an error threshold $\epsilon$ and use it for convergence, $\|U - U_S\| \leq \epsilon$. Early synthesis algorithms used the diamond norm, while more recent efforts [4, 20] use a metric based on the Hilbert\u2013Schmidt inner product between $U$ and $U_S$: $\langle U, U_S \rangle_{HS} = \mathrm{Tr}(U^\dagger U_S)$ (1). This is motivated by its lower computational overhead. [Figure 1: Unitaries (above) and tensor products (below). The unitary $U$ represents a $n = 3$ qubit transformation, where $U$ is a $2^3 \times 2^3$ matrix. The unitary is implemented (equivalent or approximated) by the circuit on the right-hand side. The single-qubit unitaries are $2 \times 2$ matrices, while CNOT is a $2^2 \times 2^2$ matrix. The computation performed by the circuit is $(I_2 \otimes U_4 \otimes U_5)(I_2 \otimes \mathrm{CNOT})(U_1 \otimes U_2 \otimes U_3)$, where $I_2$ is the identity $2 \times 2$ matrix and $\otimes$ is the tensor product operator. The right-hand side shows the tensor product of $2 \times 2$ matrices.] 2.1 Optimal-Depth Topology-Aware Synthesis QSearch [6] introduces an optimal-depth topology-aware synthesis algorithm that has been demonstrated to be extensible across native gate sets (e.g., {RX, RZ, CNOT}, {RX, RZ, SWAP}) and to multilevel systems such as qutrits. The approach employed in QSearch is canonical for the operation of other synthesis approaches that employ numerical optimization. Conceptually, the problem can be thought of as a search over a tree of possible circuit structures containing parameterized gates. A search algorithm provides a principled way to walk the tree and evaluate candidate solutions. For each candidate, a numerical optimizer instantiates the function (parameters) of each gate in order to minimize some distance objective function. QSearch works by extending the circuit structure a layer at a time. 
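The Hilbert-Schmidt objective in Eq. (1) can be turned into a distance that ignores global phase. A small sketch follows; the normalization shown is one common choice and may differ from the one used inside any particular synthesis tool:

```python
import numpy as np

def hs_distance(U, U_s):
    """Distance from the Hilbert-Schmidt inner product Tr(U^dagger U_s),
    normalized so 0 means the unitaries agree up to a global phase."""
    d = U.shape[0]
    return 1.0 - abs(np.trace(U.conj().T @ U_s)) / d

rng = np.random.default_rng(0)
# A random unitary via QR decomposition, compared against a phase-shifted copy.
Q, _ = np.linalg.qr(rng.normal(size=(4, 4)) + 1j * rng.normal(size=(4, 4)))
print(hs_distance(Q, np.exp(0.5j) * Q))  # ~0: global phase is ignored
```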
At each step, the algorithm places a two-qubit expansion operator in all legal placements. The operator contains one CNOT gate and two U3(\u03b8, \u03c6, \u03bb) gates. QSearch then evaluates these candidates using numerical optimization to instantiate all the single-qubit gates in the structure. An A* [12] heuristic determines which of the candidates is selected for another layer expansion, as well as the destination of the backtracking steps. Figure 2 illustrates this process for a three-qubit circuit. Although theoretically able to solve for any \u201cprogram\u201d (unitary) size, the scalability of QSearch is limited in practice to four-qubit programs because of several factors. The A* strategy determines the number of solutions evaluated: at best this is linear in depth; at worst it is exponential. Any technique to reduce the number of candidates, especially when deep, is likely to improve performance. Our prefix synthesis solution is discussed in Section 3. Since each expansion operator has two U3 gates, accounting for six parameters (in practice, QSearch uses 5 parameters because of commutativity rules between single-qubit and CNOT gates), circuit parameterization grows linearly with depth. Numerical optimizers scale at best with a high-degree polynomial in the number of parameters, making optimization of long circuits challenging. Any technique to reduce the number of parameters is likely to improve performance. Dimensionality reduction is discussed further in Section 5. 4 \f[Figure 2: Example evolution of the search algorithm for a three-qubit circuit, with candidates ranked by the heuristic f(n) = cnot count + a * min_x D(U(n, x), Utarget). It starts by placing a layer of single-qubit gates, then generating the next two possible solutions. Each is evaluated, and in this case the upper circuit is closer to the target unitary, leading to a smaller heuristic value. This circuit is then expanded with its possible two successors. These are again instantiated by the optimizer. The second circuit from the top has an acceptable distance and is reported as the solution. The path in blue shows the evolution of the solution. The ansatz circuits enclosed by the dotted line have been evaluated during the search.] The scalability and the quality of the numerical optimizer matter. Faster optimizers are desirable, but their quality affects performance nontrivially. Our experimentation with CMA-ES [21], L-BFGS [22], and Google Ceres [23] shows that the QSearch success rate of obtaining a solution from a valid structure can vary from 20% to 1% for longer circuits. Besides this measurable outcome, the propensity of optimizers to get stuck in local minima and plateaus can have an insidious effect on scalability by altering the search path. A more nuanced approach to optimization and judicious allocation of the optimization time budget may improve scalability. Our multistart approach is discussed further in Section 6. 3 Prefix Circuit Synthesis The synthesis solution space can be thought of as a tree that enumerates circuit structures of increasing depth: Level 1 contains depth-one structures, Level 2 contains depth-two structures, and so on. For scalability, we want to reach a solution while evaluating the least number of candidates possible and the shallowest circuits possible. The number of evaluations is given by the search algorithm: in the case of QSearch the path is driven by A*, and scalability is limited by long backtracking chains. Our idea introduces a simple heuristic to reduce the frequency of backtracking. The approach is \u201cdata driven\u201d and inspired by techniques employed in numerical optimization, as shown in Figure 3. Imagine mapping the search tree onto an optimization surface, which will contain plateaus and local minima. Exiting a plateau is characterized by faster progress toward a solution and minima. If the minima are local (the partial solution is not acceptable), the algorithm has to walk out of the \u201cvalley.\u201d Once out, the algorithm may still be on a plateau, but it can mark the region just explored as not \u201cinteresting\u201d for any backtracking. The effect of implementing these principles in the search is illustrated in Figure 4. The result is a partitioning of the solution space into coarse-grained regions grouped by circuit depth range. 5 \f[Figure 3: Synthesis needs to navigate around local minima and plateaus; the annotated regions read \u201cfast progress,\u201d \u201cwork to climb,\u201d and \u201cdo not reevaluate.\u201d] [Figure 4: Prefix-based synthesis induces a partitioning of the circuit. Each partition/prefix captures the effect of its associated sub-tree on the search for a solution. Each partition has been subject to optimization: global with respect to the partition itself, but local with respect to the final solution. The resulting circuit in the middle reaches a solution from composing local optima. Re-synthesis combines disjoint partitions in order to form regions that are passed through optimization. Since the new regions have not been subject to optimization, there exists the potential for improvement.] During search, backtracking between solutions within a region is performed by using the A* rules. 
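The shape of the search loop just described can be sketched in a few lines. In the sketch below, `expand` and `instantiate` are placeholders (not the actual implementation): the former yields successor structures with one more CNOT+U3 layer, the latter runs the numerical optimizer and returns the best distance achieved for a structure:

```python
import heapq
import itertools

def synthesize(target, expand, instantiate, eps, a=10.0):
    """QSearch-style A* loop over circuit structures (illustrative sketch)."""
    tie = itertools.count()          # tie-breaker so the heap never compares nodes
    root = ()                        # placeholder: initial layer of U3 gates
    queue = [(0.0, next(tie), 0, root)]
    while queue:
        _, _, cnots, node = heapq.heappop(queue)
        for child in expand(node):
            score = instantiate(child, target)   # numerical optimization step
            if score < eps:
                return child                     # acceptable solution found
            # f(n) = cnot count + a * distance, as in the Figure 2 caption
            heapq.heappush(queue, (cnots + 1 + a * score, next(tie), cnots + 1, child))
    return None
```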
We never backtrack outside of a region to any candidate solution that resides in the previous \u201cdepth band.\u201d Overall, the e\ufb00ect of our strategy can be thought of as determining a pre\ufb01x structure on the resulting circuit, as shown in Figure 4. The algorithm starts with a pure A* search on circuits up to depth d1. The \ufb01rst depth d1 viable partial solution is recorded, and the search proceeds to depth d2 in sub-tree A. A* search proceeds in sub-tree A until \ufb01nding the \ufb01rst viable candidate at depth d2, then proceeds in sub-tree B. At this point we have three regions: the start sub-tree for depth 0 to d1, A for depth d1 + 1 to d2, and B for depth d2 + 1 to d3. In this example the search in sub-tree B fails at depth d2 + 1. We, therefore, backtrack to d2, and the search proceeds on the path depicted on the right-hand side of the tree and eventually \ufb01nds a solution. One can easily see how by prohibiting backtracking into large solution sub-trees we can reduce the number of evaluated (numerically optimized) candidates and improve scalability. As this changes the A* optimality property of the algorithm, the challenge is determining these sub-trees in a manner that still leads to a short-depth solution. Pre\ufb01x Formation: A partial solution describes a circuit structure and its function (gates). We have considered both static and dynamic methods for pre\ufb01x formation. In our nomenclature, a static approach will choose a pre\ufb01x circuit whose structure and function are \ufb01xed: this is a fully instantiated circuit. A dynamic approach will choose a \ufb01xed structure whose function is still parameterized. In the \ufb01rst case, the pre\ufb01x circuit is completely instantiated with native gates to perform a single computation, while in the latter it can \u201cwalk\u201d a much larger Hilbert subspace as induced by the parameterization. Intuitively, determining a single instantiated pre\ufb01x circuit is good for scalability. This reduces the number of parameters evaluated in any numerical optimization operation after pre\ufb01x formation. We have experimented with several strategies for forming instantiated pre\ufb01x circuits in our synthesis algorithms, but they did not converge or they produced 6 \fvery long circuits. Pre\ufb01x Formulation: In LEAP we use a dynamic data-driven approach informed by the evolution of the underlying A* QSearch algorithm, described in Figure 4. Our analysis of the trajectories for multiple examples shows that many paths are characterized by a rapid improvement in solution quality (reduction in Hilbert\u2013Schmidt distance between target unitary and approximate pre\ufb01x), followed by plateauing induced either by optimizer limitations (local minima) or as an artifact of the particular structures considered (deadend). LEAP forms subtrees by \ufb01rst identifying and monitoring plateaus. Since during a plateau the rate of solution quality change is \u201clow,\u201d a \u201cpre\ufb01x\u201d is formed whenever a solution is evaluated with a jump in the rate of change. The plateau identi\ufb01cation heuristic is augmented with a work-based heuristic: we wait to form a pre\ufb01x until we sample enough partial solutions on a path. This serves several purposes: it gives us more samples in a sub-tree to gain some con\ufb01dence we have not skipped \u201cthe only few viable partial solutions,\u201d and it increases the backtracking granularity by identifying larger subtrees. 
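To make the plateau trigger concrete, here is a small sketch (not the paper's code) of a rate-of-change test; the evaluation section later describes LEAP's concrete choice as exactly such a linear regression of best scores versus depth:

```python
import numpy as np

def better_than_expected(depths, scores, new_depth, new_score):
    """Fit a line through the (depth, best score) history and fire when a
    new best score beats the value the trend line predicts for its depth."""
    if len(depths) < 2:
        return False                         # not enough history for a fit
    slope, intercept = np.polyfit(depths, scores, 1)
    return new_score < slope * new_depth + intercept

print(better_than_expected([1, 2, 3, 4], [0.9, 0.7, 0.55, 0.42], 5, 0.1))  # True
```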
Even more subtly, the work heuristic decreases the sensitivity of the approach to the thresholds used to assess the rate of change in the plateau identification method. By delaying prefix formation until enough work has been done, we avoid jumping directly into another plateau that would result in superfluously evaluating many solutions that are close in depth to each other. Solution Optimality: By discarding pure A* search, LEAP gives up on always finding the optimal solution. However, the following observations based on the properties of the solution search space indicate that the optimality loss could be small and that the approach can be generalized to other search- and numerical-optimization-based methods. First, the solution tree of circuit structures exhibits high symmetry. Partial solutions can be made equivalent by qubit relabeling; all solutions reached from any equivalent structure will have a similar depth. For example, for a circuit with N qubits, a depth 1 circuit with a CNOT on qubits 0 and 1 can be thought of as \u201cequivalent\u201d to the circuit with a CNOT on qubits N \u2212 2 and N \u2212 1. Symmetry indicates that coarse-grained pruning may be feasible, since a sub-tree may contain many \u201cequivalent\u201d partial solutions. Second, assuming that the optimal solution has depth d, there are many easy-to-find solutions at depth > d. In Figure 3, assume that the solution node S at depth d is missed by our strategy. However, if links denotes the number of qubit links, there are links solutions at depth d + 1, links\u00b2 solutions at depth d + 2, and so forth, trivially obtained by adding identity gates to S. In other words, the solution density increases (probably quadratically) as circuit depth increases. If the search has a \u201cdecent\u201d partial solution at depth d, numerical optimization is likely to find the final solution at a very close depth. Overall, the high-level heuristic goal is to get to optimal depth with a \u201cgood enough\u201d partial solution. Our \u201cgood enough\u201d criteria combine the Hilbert\u2013Schmidt norm with a measure of work. The pseudocode for the prefix formation algorithm in LEAP is presented in Figure 5. 4 Incremental Re-synthesis The end result of incremental synthesis (or other divide-and-conquer methods, partitioning techniques, etc.) is that circuit pieces are optimized in disjunction, with the potential of missing the optimal solution. For LEAP, this is illustrated in Figure 4. Prefix synthesis generates a natural partitioning of the circuit. Each partition is optimized based on knowledge local to its sub-tree. The final solution is composed of local optima. The basic observation here is that by chopping and combining pieces of the circuit generated by prefix synthesis, we can create new, unseen circuits for the optimization process. For incremental re-synthesis, we use the output circuit from prefix synthesis and its partitioning (the list of depths where prefixes were fixed). The reoptimizer removes circuit segments to create \u201choles\u201d of a size provided by the user (referred to as the re-synthesis window) centered on the divisions between partitions. Each such segment is lifted to a unitary, and the reoptimizer synthesizes it and replaces it into the original solution. The process continues iteratively until a stopping criterion is reached. This amounts to moving a sliding optimization window across the circuit. 
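The sliding-window pass just described can be sketched as follows. Here `circuit` is a gate list, `boundaries` the prefix division points, and `to_unitary` / `synthesize_unitary` stand in for simulation and synthesis machinery not shown (the index bookkeeping after a shortening splice is ignored for brevity):

```python
def resynthesize(circuit, boundaries, window, to_unitary, synthesize_unitary):
    """Re-synthesize a window around each prefix boundary (illustrative)."""
    for b in boundaries:
        lo = max(0, b - window // 2)
        hi = min(len(circuit), b + window // 2)
        # Lift the segment to a unitary, synthesize a candidate replacement.
        replacement = synthesize_unitary(to_unitary(circuit[lo:hi]))
        if len(replacement) < hi - lo:       # keep only improvements
            circuit = circuit[:lo] + replacement + circuit[hi:]
    return circuit
```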
The quality of the solution is determined by the choice of the size of the re-synthesis window, the number of applications (circuit coverage) and stopping criteria, and the numerical optimizer. 7 \f
Algorithm 1 Helper Functions
1: function s(n)
2:   return {n + CNOT + U3 \u2297 U3 for all possible CNOT positions}
3:
4: function p(n, U)
5:   return min_x D(U(n, x), U)
6:
7: function h(d)
8:   return d * a   \u25b7 a is a constant determined via experiment. See section 3.3.1
9:
10: function predict_score(a, b, di)
11:   return {Predicted CNOTs for depth di based on points in a, b}

Algorithm 2 LEAP Prefix Formation
1: function leap_synthesize(Utarget, \u03f5, \u03b4)
2:   si \u2190 the best score of prefixes
3:   ni \u2190 the prefix structure
4:   while si > \u03f5 do
5:     ni, si \u2190 inner_synthesize(Utarget, \u03f5, \u03b4)
6:   return ni, si
7:
8: function inner_synthesize(Utarget, \u03f5, \u03b4)
9:   n \u2190 representation of U3 on each qubit
10:  a \u2190 best scores of intermediate results
11:  b \u2190 depths at which those scores were found
12:  push n onto queue with priority h(dbest) + 0
13:  while queue is not empty do
14:    n \u2190 pop from queue
15:    for all ni \u2208 s(n) do
16:      si \u2190 p(ni, Utarget)
17:      di \u2190 CNOT count of ni
18:      sp \u2190 predict_score(a, b, di)
19:      if si < \u03f5 then
20:        return ni, si
21:      if si < sp then
22:        return ni, si
23:      if di < \u03b4 then
24:        push ni onto queue with priority h(di) + CNOT count of ni
Figure 5: Prefix formation algorithm in LEAP, based on the algorithm in [6]. In LEAP we make several pragmatic choices. The size of the optimization window is selected to be long enough to offer reduction potential but short enough that it can be optimized quickly. The algorithm reoptimizes exactly once at each boundary in the original partitioning. The re-synthesis pass allows us to manage the budget given to numerical optimizers. Since each circuit piece is likely to be transformed multiple times, some of the operations can use fast but lower-quality/budget optimization. We use the fastest optimizer available during prefix synthesis, switching during re-synthesis to the higher-quality but slower multistart solver based on [13], described in Section 6. 5 Dimensionality Reduction The circuit solution provides a parameterized structure instantiated for the solution. This parameterization introduced by the single-qubit U3 gates may overfit the problem. For LEAP, which targets only the CNOT count, this may be a valid concern, and we therefore designed a dimensionality reduction pass. We use a simple algorithm that attempts to delete one U3 gate at a time and reinstantiates the circuit at each step. This linear complexity algorithm can discover and remove only simple correlations between parameters. More complex cases can be discovered borrowing from techniques for dimensionality reduction for machine learning [24] or numerical optimization [25]. When applied to the final synthesis solution, dimensionality reduction may reduce the circuit critical path even further by deleting U3 gates. It can also be combined with the prefix synthesis. Once a prefix is formed, we can reduce its dimensionality. As numerical optimizers scale exponentially with parameters, this will improve the execution time per invocation. On the other hand, it may affect the quality of the solution as we remove expressive power from prefixes. In the current LEAP version, only the final solution is simplified. 
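The linear scan can be sketched as follows, assuming gates are represented as tags like ("U3", qubit) and reusing the same `instantiate` placeholder as before (an illustration, not the paper's code):

```python
def reduce_dimension(gates, target, instantiate, eps):
    """Try deleting each U3 gate in turn; keep the deletion whenever the
    reinstantiated circuit still meets the distance threshold."""
    i = 0
    gates = list(gates)
    while i < len(gates):
        if gates[i][0] == "U3":
            trial = gates[:i] + gates[i + 1:]
            if instantiate(trial, target) < eps:
                gates = trial            # gate was redundant; drop it
                continue
        i += 1                           # keep the gate, move on
    return gates
```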
8 \f6 Multistart Optimization Solving the optimization problem for the objective function in LEAP or QSearch can be difficult. Quantum circuits, even optimal ones, are not unique: a global phase is physically irrelevant and thus does not affect the output. Furthermore, circuits that differ only in a local basis transformation and its inverse surrounding a circuit subsection (e.g., a single 2-qubit gate) are mathematically equivalent (there are physical differences; in particular, such circuits tend to sample different noise profiles, a property that forms the basis of randomized compilation). Provided native gate sets may contain equivalences; and single-qubit gates, being rotations, are periodic. As a practical matter, we find that we cannot declare these equivalences to existing optimizers. Furthermore, where they can be used to create constraints or inaccessible regions (e.g., by remapping the periodicity into a single region), we find that they hinder the search, because boundaries can create artificial local minima. The unavoidable presence of equivalent circuits means that we are essentially overfitting the problem, where changes in parameters can cancel each other out, leading to saddle points, which turn into local minima in the optimization surface because of the periodicity; see Figure 6. The former cause, at best, an increase in the number of iterations as progress slows down because of smaller gradients; the latter risks getting the optimizer stuck. Another problem comes from the specification of the objective: distance metrics care only about the output, and different circuits can thus result in equal distances from the desired unitary. If no derivatives are available, this results in costly evaluations just to determine no progress can be made, a problem that gets worse at scale. But even with a derivative, it closes directions for exploration and shrinks viable step sizes, thus increasing the likelihood of getting stuck in a local minimum. [Figure 6: Optimization surface near the global minimum for a 4-qubit circuit of depth 6 for the first step in the QITE algorithm, varying 6 (3 pairs) out of 42 parameters equally, showing the effect on the optimization surface for 2 parameters from distinct pairs. (The global minimum is so pronounced only because the remaining 36 parameters are kept fixed at optimum, reducing the total search space; most of the 42-dim surface is flat.)] In sum, local optimization methods are highly dependent on the starting parameters, yet global optimization methods can require far too many evaluations to be feasible for real-world objectives. An attractive middle ground is an approach that starts many local optimization runs from different points in the domain. Multistart optimization methods are especially appealing when there is some structure in the objective, such as the least-squares form of the objective. Some multistart approaches complete a given local optimization run before starting another, whereas others may interleave points from different runs. The asynchronously parallel optimization solver for finding multiple minima (APOSMM) [13] begins with a uniform sampling of the domain and then starts local optimization runs from any point subject to constraints: (1) point not yet explored; (2) not a local optimum; and (3) no point available within a distance rk with a smaller function value. If no such point is available, more sampling is performed. The radius rk decreases as more points are sampled, thereby allowing past points to start runs. 
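The basic multistart idea (without APOSMM's run-launching logic) amounts to running a local solver from several random points and keeping the best result. A minimal sketch with SciPy, on a toy periodic objective:

```python
import numpy as np
from scipy.optimize import minimize

def multistart(objective, n_params, n_starts=12, seed=0):
    """Run L-BFGS-B from several random starting points; keep the best."""
    rng = np.random.default_rng(seed)
    best = None
    for x0 in rng.uniform(0, 2 * np.pi, size=(n_starts, n_params)):
        res = minimize(objective, x0, method="L-BFGS-B")
        if best is None or res.fun < best.fun:
            best = res
    return best

# Toy objective with many local minima coming from periodic parameters.
best = multistart(lambda x: np.sum(np.sin(3 * x) ** 2) + (x[0] - 1) ** 2, n_params=4)
print(best.fun, best.x)
```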
Under certain conditions on the objective function and the local optimization method, 9 \fthe logic of APOSMM can be shown to asymptotically identify all local optima while starting only finitely many local optimization runs. 7 Experimental Setup LEAP, available at https://github.com/BQSKit/qsearch, extends QSearch. We evaluated it with Python 3.8.5, using numpy 1.19.5 and Rust 1.48.0 code. For our APOSMM implementation, we integrated with the version in the libEnsemble Python package [26, 27]. We tried two different local optimization methods within APOSMM: the L-BFGS implementation within SciPy [22] and the Google Ceres [23] least-squares optimization routine. For experimental evaluation, we use a 3.35GHz Epyc 7702p based server, with 64 cores and 128 threads. Our workload consists of known circuits (e.g., mul, add, Quantum Fourier Transform), as well as newly introduced algorithms. VQE [15] starts with a parameterized circuit and implements a hybrid algorithm where parameters are reinstantiated based on the results of the previous run. The TFIM [16] and Quantum Imaginary Time Evolution (QITE) [18] algorithms model the time evolution of a system. They are particularly challenging for NISQ devices as circuit length grows linearly with the simulated time step. In TFIM, each timestep (extension) can be computed and compiled ahead of time from first principles, while in QITE it is dependent on the previous time step. We evaluate LEAP against QSearch and other available state-of-the-art synthesis software and compilers. QFAST [28] scales better than QSearch by conflating search for structure with numerical optimization, albeit producing longer circuits. Qiskit-synth [10] uses linear algebra decomposition rules for fast synthesis, but circuits tend to be long. IBM Qiskit [8] provides \u201ctraditional\u201d quantum compilation infrastructures using peephole optimization and mapping algorithms. CQC Tket [9] provides another good-quality compilation infrastructure across multiple gate sets. To showcase the impact of QPU topology, we compile for processors where qubits are fully connected (all-to-all), as well as processors with qubits connected in a nearest-neighbor (linear) fashion. 8 Evaluation Summarized results are presented in Table 2, with more details in Tables 3 and 4. We present data for all-to-all and nearest-neighbor chip topology. Table 3 presents a direct comparison between QSearch and LEAP for circuits up to four qubits. Despite its heuristics, LEAP produces optimal depth solutions, matching the reference implementations on nearest-neighbor chip topology. Overall, LEAP can compile four-qubit unitaries up to 59\u00d7 faster than QSearch. As shown in Table 4, LEAP scales up to six qubits. In this case, we include full topology data, as well as results for compilation with QFAST, Qiskit, Qiskit-synth, and Tket. On well-known quantum circuits such as VQE and QFT and physical simulation circuits such as TFIM, LEAP with re-synthesis can reduce the CNOT count by up to 48\u00d7, or 11\u00d7 on average when compared to Qiskit. On average when compared to Tket, LEAP reduces depth by a factor of 7\u00d7. Our heuristics rarely affect solution quality, and LEAP can match optimal depth solutions. 
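For context, one way to produce a "Qiskit Mapped" style baseline like the ones compared against here is to map a circuit to a CNOT+U3 basis on a linear topology and count two-qubit gates. The sketch below uses standard Qiskit calls; option names reflect 2021-era Qiskit and may differ in current releases:

```python
from qiskit import transpile
from qiskit.circuit.library import QFT

qft4 = QFT(4)
# Nearest-neighbor (linear) coupling map for 4 qubits, both directions.
linear = [[i, i + 1] for i in range(3)] + [[i + 1, i] for i in range(3)]
mapped = transpile(qft4, basis_gates=["cx", "u3"], coupling_map=linear,
                   optimization_level=3)
print(mapped.count_ops().get("cx", 0))  # CNOT count of the mapped circuit
```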
At five and six qubits, LEAP synthesizes circuits with up to 1.19\u00d7 fewer CNOTs on average compared with QFAST, albeit with an average 3.55\u00d7 performance penalty. LEAP can be one order of magnitude slower than Qiskit-synth while providing two or more orders of magnitude shorter circuits. 8.1 Impact of Prefix Synthesis Most of the speed improvements are directly attributable to prefix synthesis, which reduces by orders of magnitude the number of partial solutions evaluated. For example, for QFT4, the whole search space contains \u224843M solution candidates. QSearch will explore 2,823 nodes, while LEAP will explore 410. For TFIM-22, these numbers are (\u22481.6M, 54,020, 176) respectively. Detailed results are omitted for brevity. 10 \f[Table 1: Results for 3-4 qubit synthesis benchmarks. * 3 Qubit results were chosen as the best run of two samples. Per-benchmark CNOT, U3, depth, parallelism, and time data for Qiskit Mapped, QFAST, LEAP, Tket Mapped, and Qiskit Synthesized on all-to-all and linear topologies, covering the 3-qubit benchmarks fredkin, toffoli, grover, hhl, or, peres, and qft3 and the 4-qubit benchmarks adder, vqe, and TFIM-1 through TFIM-95.]
Table 2: Summary of the quality metrics (average value) for five- and six-qubit circuit synthesis. * Qiskit's methods are exact, yet due to some post-processing in their mapping pipeline, large errors are shown.
All-to-all (Qiskit Mapped / Tket Mapped / LEAP / QFAST / Qiskit Synthesis): Time (s) <1 / <1 / 7.34e3 / 423 / 31; Error 1e-16 / 3e-15 / 1e-12 / 1e-4 / 1e-11; CNOT 240 / 240 / 18.85 / 27.8 / 1991; U3 270 / 243.07 / 41.71 / 60.9 / 2155; Depth 207 / 206.67 / 29.2 / 43.9 / 3912.
Linear (Qiskit Mapped / Tket Mapped / LEAP / QFAST / Qiskit Synthesis): Time (s) 1.4 / <1 / 608 / 342 / 76; Error 2.9e-1* / 3e-15 / 1e-12 / 1e-5 / 9e-1*; CNOT 250 / 248.6 / 18.8 / 36.4 / 6115; U3 291 / 270.27 / 42.7 / 78.2 / 9512; Depth 321 / 215.47 / 28 / 48.6 / 9004.
Table 3: Summary of synthesis results for QSearch and LEAP on the linear topology. LEAP produces very similar results as QSearch in significantly less time. Columns: ALG, Qubits, Ref CNOT; then CNOT, Unitary Distance, Time (s) for QSearch and for LEAP.
QFT 3: ref 6; QSearch 7, 3.33e-16, 2.0; LEAP 8, 2.22e-16, 1.7
Toffoli 3: ref 6; QSearch 8, 2.22e-16, 3.4; LEAP 8, 2.22e-16, 1.6
Fredkin 3: ref 8; QSearch 8, 4.44e-16, 2.6; LEAP 8, 3.33e-16, 1.7
Peres 3: ref 5; QSearch 7, 0, 1.7; LEAP 7, 2.22e-16, 1.1
Logical OR 3: ref 6; QSearch 8, 2.22e-16, 3.4; LEAP 8, 3.33e-16, 1.6
QFT 4: ref 12; QSearch 14, 6.7e-16, 2429.3; LEAP 13, 6.7e-16, 77.9
TFIM-1 4: ref 6; QSearch 6, 0, 13.4; LEAP 6, 0, 7.2
TFIM-10 4: ref 60; QSearch 11, 9.08e-11, 955.4; LEAP 11, 3.95e-11, 47.8
TFIM-22 4: ref 126; QSearch 12, 1.22e-15, 2450.3; LEAP 12, 7.77e-16, 41.6
TFIM-60 4: ref 360; QSearch 12, 4.44e-16, 1391; LEAP 12, 2.22e-16, 31.6
TFIM-80 4: ref 480; QSearch 12, 4.44e-16, 1553.1; LEAP 12, 2.22e-16, 35
TFIM-95 4: ref 570; QSearch 12, 6.66e-16, 1221.4; LEAP 12, 2.22e-16, 38.1
11 \f[Table 4: Results for 5-6 qubit synthesis benchmarks with QFAST, LEAP, and IBM Qiskit. (* implies the program timed out after 12 hours.) Per-benchmark CNOT, U3, depth, and time data for the 5-qubit benchmarks grover5, hlf, mul, qaoa, qft5, and TFIM-10 through TFIM-100 and the 6-qubit benchmarks TFIM-1 through TFIM-51, on all-to-all and linear topologies.] Table 5: Number and location of prefix blocks for various circuits. 
ALG Qubits CNOT # of Blocks Block End Locations
fredkin 3 8 2 5,8
toffoli 3 8 2 6,9
grover3 3 7 2 5,7
hhl 3 3 1 3
or 3 8 2 5,8
peres 3 7 2 6,7
qft3 3 8 2 5,9
qft4 4 18 4 5,13,18,21
adder 4 15 3 8,14,19
vqe 5 20 8 3,7,11,14,18,21,25,28
TFIM-1 4 7 2 5,7
TFIM-10 4 12 3 5,10,12
TFIM-22 4 12 3 5,10,12
TFIM-60 4 12 3 5,10,12
TFIM-80 4 12 3 5,10,12
TFIM-95 4 12 3 5,10,12
mul 5 15 5 3,9,12,16,18
qaoa 5 28 7 6,10,14,19,24,29,35
qft5 5 30 10 5,8,11,15,20,25,30,35,38,40
TFIM-10 5 18 7 3,6,9,13,16,19,21
TFIM-40 5 20 7 3,7,10,13,16,19,21
TFIM-60 5 20 7 3,6,10,15,18,21,24
TFIM-80 5 20 7 3,6,11,16,20,23,24
TFIM-100 5 20 6 5,9,13,17,20,22
TFIM-1 6 10 4 4,7,10,12
12 \fPrefix formation is calculated based on a best-fit line formed by a linear regression of the best scores versus the depth associated with each new best-found score. This linear regression is used as an estimator of the expected score at the current depth. When the score calculated from the heuristic is better than the expected score, the new best score is better than expected; in other words, more progress toward the solution has been made than expected. We note that when the search algorithm in QSearch needs to backtrack and search many different nodes, the progress towards the solution is slower, and the calculated score is worse than the expected score. We, therefore, do not form prefixes in this case, which allows LEAP to maintain the important backtracking and searching that makes QSearch optimal. Table 5 presents the number of prefixes formed during synthesis for each circuit considered. Since prefixes have a depth between three and five CNOTs, this informs our choice of the re-synthesis window discussed below. 8.2 Impact of Incremental Re-synthesis While significantly reducing depth (with respect to the circuit reference), prefix synthesis can be improved upon by incremental re-synthesis, as shown by the comparison in Table 6. LEAP applies only a single step of re-synthesis. Given the solution from prefix synthesis, LEAP selects a window at each prefix boundary, resynthesizes, and reassembles the circuit. Detailed results are omitted for brevity, but further iterations do little to improve the solution. Table 6: Summary of the CNOT reduction and time for resynthesis on the linear topology. 
Columns: ALG, Qubits; then CNOT, Unitary Distance, Time (s) before and after resynthesis.
qft3 3: before 9, 0, 1.6; after 8, 0, 3.4
logical or 3: before 8, 4.44e-16, 1.4; after 8, 4.44e-16, 5.9
fredkin 3: before 8, 2.22e-16, 1.4; after 8, 2.22e-16, 5.7
toffoli 3: before 9, 2.22e-16, 1.7; after 8, 0, 3.4
adder 4: before 19, 0, 48.9; after 15, 2.22e-16, 76.7
qft4 4: before 21, 2.22e-16, 38.6; after 18, 1.11e-16, 190.3
TFIM-10 4: before 12, 8.03e-12, 10.3; after 12, 8.03e-12, 176.6
TFIM-80 4: before 12, 6.66e-16, 4.2; after 12, 6.66e-16, 103.8
TFIM-95 4: before 12, 4.44e-16, 6.5; after 12, 4.44e-16, 113
vqe 4: before 28, 2.47e-11, 151.2; after 20, 2.70e-11, 2062.8
qft5 5: before 40, 1.22e-15, 772.4; after 30, 6.66e-16, 4392.8
TFIM-10 5: before 21, 7.97e-12, 310.6; after 18, 9.19e-12, 11320.8
TFIM-40 5: before 21, 6.66e-16, 44; after 20, 0, 3541.8
TFIM-60 5: before 24, 0, 66.9; after 20, 0, 2046.5
TFIM-80 5: before 24, 2.22e-16, 73.5; after 20, 2.22e-16, 1827.8
TFIM-100 5: before 22, 4.44e-16, 55.4; after 20, 1.11e-16, 2779.8
mul 5: before 18, 4.44e-16, 47.0; after 15, 2.22e-16, 809.2
TFIM-1 6: before 12, 2.22e-16, 213.3; after 10, 1.11e-16, 7437.9
The re-synthesis window in LEAP is chosen pragmatically with a limited depth (7 CNOTs for 3 and 4 qubits, 5 CNOTs for 5 and 6 qubits in our case), to lead to reasonable expectations on execution time, while providing some optimization potential. Incremental re-synthesis reduces circuit depth by 15% on average, albeit in many cases with a significant impact on the runtime. 8.3 Impact of Dimensionality Reduction LEAP applies a single step of dimensionality reduction at the end of the synthesis process, the sweep starting at the circuit beginning. For brevity, we omit detailed data and note that in this final stage dimensionality reduction eliminates up to 40% of U3 gates (parameters) and shortens the circuit critical path. These results indicate that our approach overfits the problem by inserting too many U3 gates. We examined the spatial occurrence of single-qubit gate deletion since this may guide any dynamic attempts to eliminate parameters during synthesis for scalability purposes. Table 7 presents a summary for three-qubit circuits; trends are similar for all other benchmarks considered. The data shows that gate deletion is successful at many circuit layers, indicating that an on-the-fly dimensionality reduction heuristic may be feasible to develop for even further scalability and quality improvements. As discussed in Section 6, dimensionality reduction will reduce the number of parameters for numerical optimization, while reducing overfitting and gate (parameter) correlation that lead to cancellations of gate effects on a qubit. 13 \f
Table 7: Spatial placement of U3 gates deleted. The number of columns denotes circuit stages (CNOTs), and we present the number of gates deleted at each position.
Name / Number of Gates Deleted:
qft2: 2 0 0 0
qft3: 2 0 0 1 0 0 1 1
fredkin: 3 2 0 1 1 2 0 0 1
toffoli: 2 2 1 2 1 2 0 1 0
peres: 2 0 1 2 0 1 0 1
logical or: 2 1 2 0 2 1 0 1 0
hhl: 2 0 2 0
Table 8: Accuracy and speed of various optimizers on a variety of circuits. APOSMM-N means APOSMM with N starting points. Columns: ALG, CNOT; then % Success and Time (s) for BFGS, Ceres, APOSMM-8, APOSMM-12, APOSMM-16, APOSMM-20, and APOSMM-24.
fredkin 8: 89 0.03; 69 0.01; 100 0.13; 100 0.14; 100 0.14; 100 0.15; 100 0.16
logical or 8: 16 <0.01; 55 0.01; 100 0.13; 100 0.14; 100 0.15; 100 0.16; 100 0.17
peres 7: 18 <0.01; 73 0.01; 69 0.08; 90 0.11; 92 0.12; 98 0.13; 99 0.14
toffoli 8: 43 0.01; 74 0.01; 100 0.13; 100 0.14; 100 0.14; 100 0.15; 100 0.17
qft3 8: 9 <0.01; 26 <0.01; 80 0.10; 91 0.12; 95 0.13; 98 0.14; 100 0.16
qft4 18: 1 <0.01; 15 0.02; 66 0.50; 83 0.68; 92 0.82; 94 0.99; 99 1.08
qft5 30: 0 <0.01; 2 0.12; 8 1.19; 13 2.78; 15 3.81; 25 7.21; 36 12.10
8.4 Impact of Multistart Optimization When evaluating numerical optimizers used in synthesis, we are interested in determining how often they found the true minimum, since this has a significant impact on both solution quality and execution speed. We evaluated the commonly used local optimization methods Google's Ceres [23] and an implementation of L-BFGS [22], as well as the multistart APOSMM [13] framework. We ran each optimizer 100 times on several circuits to evaluate their accuracy and speed. The results are summarized in Table 8. The QFT results illustrate that the BFGS and Ceres optimizers perform poorly even on a smaller circuit such as a three-qubit QFT, finding solutions just 9% and 26% of the time, much lower than even APOSMM with 8 starting points. We found that APOSMM with 12 starting points performed well on all but the five-qubit QFT circuit. Since optimizing the parameters of the QFT5 circuit is a much higher-dimensional problem, even APOSMM with 24 starting points found solutions in only 36% of the runs. While APOSMM is much more accurate than BFGS and Ceres on the circuits we tested, it is also about an order of magnitude slower for larger circuits, even though the local optimization runs are done in parallel. In addition, the slowdown increases with the number of starting points. The time for QFT5 approximately doubles every 4 additional starting points for parallel runs. For our runs in Table 4 we selected 12 starting points since this number was reasonably accurate and takes a reasonable amount of time. Therefore when using LEAP, we use Ceres because it is fast and scales well, and a missed solution will be found during re-synthesis. During re-synthesis, APOSMM is used, since it is much more likely to find true minima, thus strengthening the optimality of search-based algorithms. 8.5 Gate Set Exploration Similar to QSearch, LEAP can target different native gate sets and provide another dimension to circuit optimization or hardware design exploration. Besides CNOT, we have targeted other two-qubit gates supported by QPU manufacturers: CSX (\u221aCNOT), iSWAP, and SQISW (\u221aiSWAP). Here, the square root gates implement the matrix square root of their counterpart, and their composition has been previously studied [29] for generic two-qubit programs. Results are presented in Table 9. We make the following observations: \u2022 While CNOT and iSWAP are considered \u201cequivalent\u201d in terms of expressive power, using CNOT gates for larger circuits (five and six qubits) tends to produce observably shorter circuits. 
\u2022 Mixing two-qubit gates (CNOT+iSWAP) tends to produce shorter circuits than when using CNOT alone. 14 \fTable 9: Number of two qubit gates needed to implement various threeto six-qubit circuits. Using CNOT reduces the number of two-qubit gates needed vs iSWAP, whereas a combination of CNOT and iSWAP reduces the number of two-qubit gates even further. ALG CNOT SQCNOT iSWAP SQISW CNOT + iSWAP CNOT + SQCNOT iSWAP + SQISW qft3 6 8 7 8 5 5 7 fredkin 7 9 7 9 7 7 8 toffoli 6 7 7 8 6 5 7 peres 5 5 7 8 5 4 6 logical or 6 7 7 8 6 8 7 ALG iSWAP CNOT qft4 22 13 tfim-4-22 16 12 tfim-4-95 14 12 vqe 26 21 full adder 30 18 hlf 22 13 mul 18 13 qft5 50 28 tfim-5-40 29 20 tfim-5-100 33 20 tfim-6-24 40 28 tfim-6-51 43 31 \u2022 The depths of CNOTand \u221a CNOT-based circuits are very similar. Given that in some implementations the latency of \u221a CNOT gates may be shorter than that of CNOT gates, the former may be able to provide a performance advantage. \u2022 Sleator and Weinfurter [30] prove that the To\ufb00oli gate can be optimally implemented using a \ufb01ve-gate combination of CNOT and \u221a CNOT. LEAP can reproduce this result, which indicates it may provide a useful tool for discovering optimal implementations of previously proposed gates. These observations are somewhat surprising and probably worth a more detailed future investigation. While the data indicates that mixing CNOT and iSWAP can produce the shortest circuits, we found that in LEAP the search space size would double, hence the speed to the solution will su\ufb00er. Therefore for our experiments, we kept with the CNOT+U3 gate set that was used by QFAST and Qsearch. 9 Discussion Overall, the results indicate that the heuristics employed in LEAP are much faster than QSearch and are still able to produce low-depth solutions in a topology-aware manner. The average depth di\ufb00erence for threeand four-qubit benchmarks between QSearch and LEAP is 0 across physical chip topologies and workload. We \ufb01nd the pre\ufb01x formation idea intuitive, easily generalizable, and powerful. The method used to derive pre\ufb01x formation employs concepts encountered in numerical optimization algorithms and is easily identi\ufb01able in other search-based synthesis algorithms: \u201cprogress\u201d to the solution, and \u201cregion of similarity\u201d or plateau. The LEAP algorithm indicates that incremental and iterative approaches to synthesis work well. In our case, the results even indicate that one extra step of local optimization can match the e\ufb03cacy of global optimization. This result bodes well for approaches that scale synthesis past hundreds of qubits through circuit partitioning, such as our QGo [31] optimization and QuEst [32] approximation algorithms. Dimensionality reduction as implemented in LEAP not only reduces the e\ufb00ects of over\ufb01tting by numerical optimization but also opens a promising path for scaling numerical-optimization-based synthesis. Since we were able to delete 40% of parameters from the \ufb01nal solution, we believe that by combining it with pre\ufb01x synthesis we can further improve LEAP\u2019s scalability. Multistart optimization can be trivially incorporated into any algorithm, and we have indeed already modi\ufb01ed the QSearch and QFAST algorithms to incorporate it. Furthermore, the spirit of the multistart \u201capproach\u201d can be employed to further prune the synthesis search space. Whenever a pre\ufb01x formed, the synthesis algorithm had explored a plateau and a local minimum. 
At this stage, a multistart search could be started using as seeds other promising partial solutions within the tree. The pre\ufb01x formation idea is powerful and showcases how synthesis can turn into a capability tool. TFIM circuits simulate a time-dependent Hamiltonian, where the circuit for each time step \u201ccontains\u201d the circuit 15 \fFigure 7: TFIM circuit depth evolution and \u201c\ufb01delity\u201d when executed on the IBM Athens system. \u201cIBM\u201d is compiled with Qiskit, while \u201cConstant Depth\u201d is synthesized with LEAP (computation) associated with the previous time step as a pre\ufb01x. The circuits generated by the TFIM domain generator grow linearly in size. In our experiments, we observed that after some initial time steps, all circuits for any late time step have an asymptotic constant depth. This observation led to the following experiment: we picked a circuit structure generated for a late simulation step and considered it as a parameterized template for all other simulation steps. We then successfully solved the numerical optimization problem with this template for any TFIM step. This procedure empirically provides us with a \ufb01xed-depth (shortdepth) template for the TFIM algorithm. Furthermore, this demonstration motivated a successful e\ufb00ort [33] to derive from \ufb01rst principles a \ufb01xed-depth circuit for TFIM. The results are presented in Figure 7. Note the highly increased \ufb01delity when running the circuit on the IBM Athens system. The QITE algorithm presents an interesting challenge to the pre\ufb01x formation idea. In this case, the next timestep circuit is obtained by extending the \u201ccurrent\u201d circuit with a block dependent on its output after execution. When executing on hardware, synthesis has real-time constraints, and it has to deal with the hardware noise that a\ufb00ects the output. Preliminary results, courtesy of our collaborators Jean-Loup Ville and Alexis Morvan, indicate that the approach taken for TFIM may be successful for QITE. Table 10 summarizes the preliminary observations and indicates that again synthesis produces better-quality circuits than the domain generator or traditional compilation does. Note that in this experiment LEAP was fast enough to produce real-time results during the hardware experiment only for three-qubit circuits. Table 10: Summary of QITE results when running synthesis on hardware experiments. Structure of any circuit is determined by the output of the previous circuit, hence hardware noise. CNOT QITE size Qiskit Isometry QFAST LEAP 2 3 3 3 3 30-35 10-12 7-12 4 160-200 70-80 30-50 Looking forward, the question remains whether numerical-optimization-based synthesis can be useful in fault-tolerant quantum computing. There, the single-qubit gates will change to Cli\ufb00ords and the T gate, or another non-Cli\ufb00ord gate that makes the gate set universal. The execution cost model is also expected to be di\ufb00erent: CNOTs and Cli\ufb00ords become cheap, while the non-Cli\ufb00ord operations become expensive. Likely, the non-Cli\ufb00ords are qualitatively more \u201cexpensive\u201d than CNOTs in NISQ computing. Thus, the optimization objective becomes minimizing the number of non-Cli\ufb00ord gates. We have already shown that LEAP can be retargeted to new gate sets. We also have very strong evidence that adding a multi-objective optimization approach to search-based synthesis works very well under a fault-tolerant quantum computing cost model. 
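Returning to the TFIM observation above, the template-reuse procedure can be sketched as follows; `instantiate` is a placeholder assumed to return (distance, parameters), and none of the names come from the paper:

```python
def solve_with_template(template, step_unitaries, instantiate, eps):
    """Keep the circuit structure found for one late TFIM time step and only
    re-run the numerical optimization for each other step's unitary."""
    solutions = {}
    for step, U in step_unitaries.items():
        score, params = instantiate(template, U)   # re-optimize parameters only
        solutions[step] = params if score < eps else None  # None: template failed
    return solutions
```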
The data indicates that it is realistic to expect efficacy improvements similar to those provided by LEAP under the NISQ cost model. This work is ongoing (and due to intellectual property concerns, we cannot disclose more details). As the already mentioned scalable partitioning approaches only leverage LEAP and do not require additional cost models, this bodes very well for the future of numerical-optimization-based synthesis in fault-tolerant quantum computing.

10 Related Work

A fundamental result that spurred the emergence of quantum circuit synthesis is provided by the Solovay–Kitaev (SK) theorem. The theorem relates circuit depth to the quality of the approximation, and its proof is by construction [34–36]. Different approaches [34, 37–46] to synthesis have been introduced since, with the goal of generating shorter-depth circuits. These can be coarsely classified based on several criteria: target gate set, algorithmic approach, and solution distinguishability.

Target Gate Set: The SK algorithm is applicable to any universal gate set. Later examples include synthesis of z-rotation unitaries with Clifford+V approximation [47] or Clifford+T gates [48]. When ancillary qubits are allowed, one can synthesize single-qubit unitaries with the Clifford+T gate set [48–50]. While these efforts propelled the field of synthesis, they are not used on NISQ devices, which offer a different gate set (Rx, Rz, CNOT, iSWAP and Mølmer–Sørensen all-to-all). Several other algorithms [1–3], discussed below, have since emerged.

Algorithmic Approaches: The early attempts inspired by the Solovay–Kitaev algorithm use a recursive (or divide-and-conquer) formulation, sometimes supplemented with search heuristics at the bottom. More recent search-based approaches are illustrated by the meet-in-the-middle [39] algorithm. Several approaches use techniques from linear algebra for unitary and tensor decomposition. Bullock and Markov [42] use QR matrix factorization via Givens rotations and Householder transformations [43] (a related method using Givens rotations and Householder decomposition is described in [52]), but open questions remain as to the suitability for hardware implementation because these algorithms are expressed in terms of row and column updates of a matrix rather than in terms of qubits. The state-of-the-art upper bounds on circuit depth are provided by techniques [1, 2] that use the cosine-sine decomposition, which was first used in [51] for compilation purposes. In practice, commercial compilers ubiquitously deploy only KAK [5] decompositions for two-qubit unitaries. The basic formulation of these techniques is topology independent. Specializing for topology increases the upper bound on circuit depth by large constants; Shende et al. [2] mention a factor of 9, improved by Iten et al. [1] to 4×. The published approaches are hard to extend to different qubit gate sets, however, and it remains to be seen whether they can handle qutrits. Several techniques use numerical optimization, much as we do. They describe the gates in their variational/continuous representation and use optimizers and search to find a gate decomposition and instantiation. The work closest to ours is that of Martinez et al. [3], who use numerical optimization and brute-force search to synthesize circuits for a processor using trapped-ion qubits. Their main advantage is the existence of all-to-all Mølmer–Sørensen gates, which allow a topology-independent approach.
The main difference between our work and theirs is that they use randomization and genetic algorithms to search the solution space, while we show a more regimented way. When Martinez et al. describe their results, they claim that Mølmer–Sørensen counts are directly comparable to CNOT counts. By this metric, we seem to generate circuits comparable to or shorter than theirs. It is not clear how their approach behaves when topology constraints are present. The direct comparison is further limited by the fact that they consider only randomly generated unitaries, rather than algorithms or well-understood gates such as Toffoli or Fredkin. Another topology-independent numerical optimization technique is presented in [4]. The main contribution is to use a quantum annealer to do searches over sequences of increasing gate depth. The authors report results only for two-qubit circuits. All existing studies focus on the quality of the solution, rather than synthesis speed. They also report results for low-qubit concurrency: Khatri et al. [4] for two-qubit systems, Martinez et al. [3] for systems up to four qubits.

Solution Distinguishability: Synthesis algorithms can be classified as exact or approximate based on distinguishability. This is a subtle classification criterion, since many algorithms can be viewed as either. For example, the divide-and-conquer Meet-in-the-Middle algorithm proposed in [39], although designed for exact circuit synthesis, may also be used to construct an ε-approximate circuit. The results seem to indicate that the algorithm failed to synthesize a three-qubit QFT circuit. We classify our implementation as approximate since we rely on numerical optimization and therefore must accept solutions at a small distance from the original unitary." + } + ], + "William Bennett": [ + { + "url": "http://arxiv.org/abs/2308.10852v2", + "title": "Uncertainty benchmarks for time-dependent transport problems", + "abstract": "Verification solutions for uncertainty quantification are presented for time\ndependent transport problems where $c$, the scattering ratio, is uncertain. The\nmethod of polynomial chaos expansions is employed for quick and accurate\ncalculation of the quantities of interest and uncollided solutions are used to\ntreat part of the uncertainty calculation analytically. We find that\napproximately six moments in the polynomial expansion are required to represent\nthe solutions to these problems accurately. Additionally, the results show that\nif the uncertainty interval spans c=1, which means it is uncertain whether the\nsystem is multiplying or not, the confidence interval will grow in time.\nFinally, since the QoI is a strictly increasing function, the percentile values\nare known and can be used to verify the accuracy of the expansion. These\nresults can be used to test UQ methods for time-dependent transport problems.", + "authors": "William Bennett, Ryan G. McClarren", + "published": "2023-08-21", + "updated": "2023-12-16", + "primary_cat": "cs.CE", + "cats": [ + "cs.CE" + ], + "main_content": "Introduction Uncertainty quantification (UQ) is necessary for any robust comparison between experimental data and accurate numerical simulations. In research fields concerning radiative flows, the accuracy of simulations has improved enough that pragmatic researchers have begun to invest more effort into UQ [1, 2].
The most intuitive choice, with the least overhead for practically minded researchers, is Monte Carlo sampling of whatever simulation method is on hand. The work of Fryer et al. [3] is a good example of this so-called non-intrusive method applied to laser-driven heat wave experiments in foams. It is also possible to bake uncertainty estimation into a simulation itself, as in [4], an example of intrusive polynomial chaos expansions (PCE) applied to time dependent radiation diffusion. The bulk of research on UQ for transport calculations published to date has come from the neutron transport community, a consequence of the interest nuclear engineers take in the effects of uncertainty on reactor criticality. Zheng and McClarren [5] apply regression techniques to data sampled from a transport code to perform UQ on a simulated TRIGA reactor. The researchers in [6] apply PCE to a steady diffusion criticality problem. Williams [7] used a PCE treatment for a similar system, this time with the P1 approximation. Finally, Refs. [8] and [9] apply PCE to true steady state transport calculations with uncertain cross sections.

We present a non-intrusive version of PCE that improves on the work surveyed above in that it solves time dependent transport UQ problems. For one dimensional, infinite medium, time dependent transport, the only physical parameters are the cross sections, the initial condition, and the source function. This study does not consider source uncertainty; it wraps up the uncertainties in the cross sections by defining the scattering ratio as the uncertain parameter. While PCE is often applied to the equations that govern a system, in this case the Green's function is known, although nontrivial to calculate, and given by Ganapol [10]; this solution was then extended to different sources in [11]. The organization of this work is as follows. Section 2 introduces the transport equation for the chosen configuration, the corresponding benchmark solution, and how uncertainty will be modeled. Section 3 discusses one dimensional PCE and how it has been applied to the problem, and finally results are presented and discussed in Section 4.

2. Model Problem

We begin with a problem of a planar pulse of neutrons in an infinite medium. The time dependent, isotropic scattering transport equation in slab geometry with a Dirac delta function source in space and time is

$\left(\frac{\partial}{\partial t} + \mu \frac{\partial}{\partial x} + 1\right) \psi(x,t,\mu) = \frac{c}{2}\phi(x,t) + \frac{1}{2}\delta(x)\delta(t)$, (1)

where $\psi(x,t,\mu)$ is the angular flux and $\phi(x,t) = \int_{-1}^{1} d\mu'\, \psi(x,t,\mu')$ is the scalar flux. The spatial coordinate $x \in \mathbb{R}$ is measured in mean free paths from the origin, and $\mu \in [-1,1]$ is the cosine of the angle between the direction of flight of a particle and the x-axis. $c$ is the average number of particles emitted (isotropically) from a particle collision. The scattering ratio, or the number of secondary particles emitted per collision, is defined as $c \equiv \sigma_s/\sigma_t$, where $\sigma_s$ is the scattering cross section and $\sigma_t$ is the total cross section. In the case where there is fission or $(n, \nu n)$ reactions, the definition would change to include contributions from these reactions in the numerator.

The linearity of Eq. (1) allows the scalar flux to be divided into a sum of uncollided, $\phi_u$, and collided, $\phi_c$, parts ($\phi = \phi_u + \phi_c$), where the uncollided flux has not experienced scattering and the collided flux has. The uncollided solution to Eq. (1) is [10]

$\phi_u = \frac{1}{2}\frac{e^{-t}}{t}\,\Theta(1-|\eta|)$, (2)

where $\Theta$ is the Heaviside step function. The collided solution, also given in [10], is

$\phi_c(x,t) = c\left(\frac{e^{-t}}{8\pi}(1-\eta^2)\int_0^{\pi} du\, \sec^2\!\left(\frac{u}{2}\right) \mathrm{Re}\!\left[\xi^2 e^{\frac{ct}{2}(1-\eta^2)\xi}\right]\right)\Theta(1-|\eta|)$, (3)

where

$\xi(u,\eta) = \frac{\log q + iu}{\eta + i\tan(u/2)}$, (4)

and

$q = \frac{1+\eta}{1-\eta}, \qquad \eta = \frac{x}{t}$. (5)

Since the source in this configuration is a delta function, the only physical parameter is the scattering ratio, $c$. This solution is also extended in [10] to an axisymmetric cylindrical geometry to give the flux for a line source. In [11], these solutions are used as integral kernels to produce solutions for other source configurations. Of those solutions, the square source and the Gaussian source are included in this study. The square source is a step source of strength one inside some width $x_0$ and zero outside, on for a time $t_0$. The Gaussian source is similar, but its strength varies smoothly with a specified standard deviation $\sigma$ (not to be confused with the standard deviation of the solution or of the random variable). These sources introduce new physical parameters; for example, integrating Eqs. (2) and (3) over a square source introduces the source width and source duration as parameters. While multi-dimensional chaos expansions could be used to include uncertainty in these parameters, this study is restricted to uncertainty in $c$ only.

To wit, $c$ is defined as an uncertain parameter,

$c = \bar{c} + \omega_1\theta$, (6)

where $\bar{c}$ is a known mean value, $\omega_1$ is a constant, and $\theta$ is a uniform random variable, $\theta \sim U[-1,1]$. This definition makes $\omega_1 > 0$ the half-width of the uncertain interval centered on the mean $\bar{c}$. It is noteworthy that if the uncertain interval extends from $c < 1$ to $c > 1$, there is uncertainty in whether the system supports long time behavior that is multiplying or decaying. Also, the uncollided flux (Eq. (2)) carries no uncertainty, a consequence of the uncollided particles being agnostic to the scattering properties of the material. It is assumed that the magnitude of $\omega_1$ is small enough that $c$ is always positive. For a uniform random variable, the probability density function (PDF) is

$f(\theta_i) = \begin{cases} \frac{1}{2} & \theta_i \in [-1,1] \\ 0 & \text{otherwise}. \end{cases}$ (7)

With these choices for the uncertain parameter, the expectation of the scalar flux is

$E[\phi](x,t;c) = E[\phi_u](x,t) + E[\phi_c](x,t;c) = \phi_u(x,t) + \frac{1}{2}\int_{-1}^{1} d\theta_1\, \phi_c(x,t;c)$. (8)

The arguments indicate that the expected value of $\phi$ is a function of space and time; additionally, the collided and total scalar flux are parameterized by the value of $c$, as shown in Eq. (8). Henceforth, in the interest of notational parsimony, we drop the $(x,t)$ arguments. The variance is

$\mathrm{VAR}[\phi] = E[\phi^2] - E[\phi]^2$, (9)

which is in our case

$\mathrm{VAR}[\phi] = \frac{1}{2}\int_{-1}^{1} d\theta\, \phi_c(c)^2 - \frac{1}{4}\left(\int_{-1}^{1} d\theta\, \phi_c(c)\right)^2$. (10)

Higher moments (e.g. skewness, kurtosis) can also be calculated but will not be pursued in this study.
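The moment integrals of Eqs. (8) and (10) are straightforward to evaluate numerically. Below is a minimal Python sketch, under our own naming, that evaluates the collided flux of Eqs. (3)-(5) with scipy quadrature and then computes the mean and variance over θ ~ U[-1,1] with Gauss-Legendre quadrature; the quadrature settings are illustrative and have not been tuned for behavior near the wavefront |η| → 1.

    import numpy as np
    from scipy.integrate import quad

    def phi_c(x, t, c):
        """Collided scalar flux for the plane pulse, Eqs. (3)-(5)."""
        eta = x / t
        if abs(eta) >= 1.0:
            return 0.0
        q = (1.0 + eta) / (1.0 - eta)
        def integrand(u):
            xi = (np.log(q) + 1j * u) / (eta + 1j * np.tan(u / 2.0))
            return (1.0 / np.cos(u / 2.0))**2 * np.real(
                xi**2 * np.exp(0.5 * c * t * (1.0 - eta**2) * xi))
        val, _ = quad(integrand, 0.0, np.pi, limit=200)
        return c * np.exp(-t) / (8.0 * np.pi) * (1.0 - eta**2) * val

    def mean_and_variance(x, t, cbar, w1, n_quad=8):
        """E[phi_c] and VAR[phi_c] over theta ~ U[-1,1], Eqs. (8)-(10)."""
        nodes, weights = np.polynomial.legendre.leggauss(n_quad)
        vals = np.array([phi_c(x, t, cbar + w1 * th) for th in nodes])
        mean = 0.5 * np.sum(weights * vals)      # the 1/2 is the uniform PDF
        second = 0.5 * np.sum(weights * vals**2)
        return mean, second - mean**2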
Percentile-based statistics are also useful for describing a quantity of interest (QoI) in the presence of uncertainty, and they show greater resilience to outlier data than moment-based measures. Percentiles are calculated by finding the realization of the random variable that satisfies

$p = \int_{-\infty}^{x} dx'\, f(x')$, (11)

where $f$ is the PDF of the QoI and $p$ is the percentile in decimal form. These percentile values are calculated by estimating the cumulative distribution function (CDF) through sampling and then tabulating the inverse CDF. In this work, this is accomplished in Python with numpy's quantile function [12].

3. Polynomial Chaos

The benchmark solutions referenced in the introduction are non-trivial to evaluate. For a Monte Carlo sampling of the QoI, the solution must be evaluated once for each realization of the uncertain parameter, which could require a large investment of computational time for an accurate solution. Direct integration could circumvent this difficulty for the moment-based values, but the often more useful percentile measures would not be available. If, however, the solution is represented as a polynomial expansion in the uncertain variables, and there are no discontinuities in the space of the random variable, evaluation is relatively trivial once the coefficients have been calculated. This is one motivation for PCE. Therefore, the scalar flux as a function of the scattering ratio is approximated by

$\phi(c) = \phi_u + \sum_{j=0}^{N} a_j P_j(\theta)$, (12)

where the $P_j$ are Legendre polynomials. The coefficients in the expansion are computed as

$a_j = \frac{2j+1}{2}\int_{-1}^{1} d\theta\, \phi_c(c) P_j(\theta)$. (13)

These expansion coefficients can be efficiently calculated using widely available quadrature routines. Once the expansion coefficients have been calculated, the orthogonality of the basis is invoked to write the moments of the expansion exactly. The expectation of the expansion is exactly that of the function being approximated, in this case the scalar flux,

$E[\phi] = \phi_u + a_0$, (14)

since the definition of $a_0$ is identical to the expected value of the collided flux. Similarly, the variance simplifies to

$\mathrm{VAR}[\phi] = \sum_{j=0}^{N} \frac{a_j^2}{2j+1} - a_0^2 = \sum_{j=1}^{N} \frac{a_j^2}{2j+1}$. (15)

The variance does not depend on the uncollided solution, as expected. Orthogonality also allows higher moments of the expansion to be expressed exactly. Although the expansion coefficients give a direct way to estimate moments of the distribution, they do not provide percentile information. To get this information we can sample values of the parameter $c$ from its underlying distribution and then evaluate the QoI using the polynomial representation. To calculate the percentiles efficiently, values of $c$ are sampled via a quasi-random Sobol sequence [13], taking advantage of the faster convergence possible with these sequences. Roughly one million samples were taken for each problem. The code to reproduce the results shown below is freely available.¹

4. Results

For these results to have any value, the polynomial chaos expansion must be a good approximation of the true function. Figure 1 provides a convergence test of the variance of the scalar flux, calculated with Eq. (15), against an analytic expression for the variance for the plane pulse problem. The constant slope on the log-linear plot indicates geometric convergence.
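For reference, the coefficient computation of Eq. (13) is only a few lines of Python. The sketch below is our own naming, not the released code; `phi_c_of_theta` is assumed to be a vectorized callable returning the collided flux at each θ node.

    import numpy as np

    def pce_coefficients(phi_c_of_theta, n_moments=7, n_quad=32):
        """Legendre-PCE coefficients a_j (Eq. (13)) via Gauss-Legendre quadrature."""
        nodes, weights = np.polynomial.legendre.leggauss(n_quad)
        vals = phi_c_of_theta(nodes)
        a = np.empty(n_moments)
        for j in range(n_moments):
            Pj = np.polynomial.legendre.Legendre.basis(j)(nodes)
            a[j] = 0.5 * (2 * j + 1) * np.sum(weights * vals * Pj)
        return a

    # The moments then follow exactly from orthogonality, Eqs. (14)-(15):
    #   mean = phi_u + a[0]
    #   var  = sum(a[1:]**2 / (2*np.arange(1, len(a)) + 1))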
Since the first moment is exactly the analytic expression, we can be confident that an order ≈ 6 basis is a good representation of the true solution for this problem, since it represents the first two moments well even when the uncertainty in the scattering ratio is large (fifty percent). For four problems, three in slab geometry and one in cylindrical, plots of moment- and percentile-based statistical measures for a range of $c$ are given at early and intermediate times. Table 1 gives the problem setup for each figure. Each result shows a similar trend in the $\bar{c} = 1$ cases of a widening of the confidence interval as time progresses. This is due to the difference between the exponential decay behavior of a non-multiplying system and the exponential growth of a multiplying system. Compared to the other problems, the Gaussian source (Figure 4) shows relatively smaller ranges that the solution can safely be assumed to lie between; this could be because the solution is smooth at early times. The other figures (Figures 2, 3, 5) show structure at early times and then relax into a Gaussian shape at later times. For each problem, the expected value appears to coincide with the median. They are slightly different, however, and diverge as time progresses; Figure 6 shows this phenomenon. While the uncertainty goes to zero at the wavefront in Figure 2, it does not in Figure 3, since the collided solution at the wavefront is nonzero. All results also follow the trend of having the highest variance at the center of the problem ($x = 0$ for slab problems and $r = 0$ for cylindrical).

¹ github.com/wbennett39/pce transport

Table 1: Description of uncertainty benchmarks, each with $c = \bar{c} + \omega_1\theta$ as the uncertain parameter.

Source          | functional form                     | parameters            | figure number(s)
plane pulse     | $\delta(x)\delta(t)$                |                       | 1, 2, 6, 7
square source   | $\Theta(x_0 - |x|)\Theta(t_0 - t)$  | $x_0 = 0.5, t_0 = 5$  | 3
Gaussian source | $\exp(-x^2/\sigma^2)\Theta(t_0-t)$  | $\sigma = 0.5, t_0 = 5$ | 4
line source     | $\delta(r)\delta(t)$                |                       | 5

The total "mass" of the system, the integral over all space of the solution,

$\bar{\phi}(t;c) = \int_{-\infty}^{\infty} dx'\, \phi(c,x',t)$, (16)

can yield important insights for these types of problems. The solution to Eq. (16) for the plane pulse is

$\bar{\phi}(t;c) = \exp\big(t(c-1)\big)$. (17)

This is the mass of the system with no uncertainty. The masses of some statistical descriptions of the system (expected value, median) are shown at $t = 3$ for a twenty-five percent uncertainty in Figure 6 for the plane pulse problem. Three mean free times was chosen as an intermediate value between early times, when the expected value shows little deviation from the nominal value ($c = \bar{c}$), and later times, when they are highly disparate. On the scale of the plot (Figure 6), the median and nominal ($c = \bar{c}$) curves for the system mass appear coincident. There is a reason for this. For a strictly increasing function of a random variable, it can be shown that a quantile realization of the function, the $p$th percentile, is the same as the function evaluated at the $p$th percentile of the random variable. This means that in this case the 50th percentile of the scalar flux is equal to the case of no uncertainty, the nominal value; a derivation is included in Appendix B. All this is to say, sampling the expansion to estimate quantile values is not actually necessary in this case.
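The percentile tabulation described above can be sketched in a few lines (our naming; scipy's Sobol generator). For a strictly increasing QoI, the 50th percentile from this procedure should reproduce the nominal ($c = \bar{c}$) value, which is exactly the kind of verification test discussed next.

    import numpy as np
    from scipy.stats import qmc

    def pce_percentiles(a, phi_u, percentiles=(5, 50, 95), m=20):
        """Sample theta ~ U[-1,1] with a Sobol sequence (2**m points, ~1e6 for
        m=20), evaluate the Legendre-PCE surrogate of Eq. (12), and tabulate
        percentiles of the QoI."""
        theta = 2.0 * qmc.Sobol(d=1, scramble=True, seed=0).random_base2(m).ravel() - 1.0
        qoi = phi_u + np.polynomial.legendre.legval(theta, a)
        return np.percentile(qoi, percentiles)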
However, it is a good test of the method to check whether the sampled expansion percentile values converge to the known values, which we show in Figure 7. The order of the expansion is increased to eight for this plot to show convergence past the N = 6 floor at ≈ 10⁻⁶. We do not expect spectral convergence in this case, since the random sampling method at best converges algebraically." + }, + { + "url": "http://arxiv.org/abs/2301.02596v2", + "title": "Benchmark solutions for radiative transfer with a moving mesh and exact uncollided source treatments", + "abstract": "The set of benchmark solutions used in the thermal radiative transfer\ncommunity suffer some coverage gaps, in particular nonlinear, non-equilibrium\nproblems. Also, there are no non-equilibrium, optically thick benchmarks. These\nshortcomings motivated the development of a numerical method free from the\nrequirement of linearity and easily able to converge on smooth optically thick\nproblems, a moving mesh Discontinuous Galerkin (DG) framework that utilizes an\nuncollided source treatment. Having already proven this method on time\ndependent scattering transport problems, we present here solutions to\nnon-equilibrium thermal radiative transfer problems for familiar linearized\nsystems together with more physical nonlinear systems in both optically thin\nand thick regimes, including both the full transport and the $S_2$/$P_1$\nsolution. Geometric convergence is observed for smooth sources at all times and\nsome nonsmooth sources at late times when there is local equilibrium. Also,\naccurate solutions are achieved for step sources when the solution is not\nsmooth.", + "authors": "William Bennett, Ryan G. McClarren", + "published": "2023-01-06", + "updated": "2023-05-09", + "primary_cat": "cs.CE", + "cats": [ + "cs.CE" + ], + "main_content": "Introduction The Stefan-Boltzmann law [1], which describes the relationship between radiation emitted from a material and its temperature as proportional to temperature to the fourth power, is partly responsible for the obdurate nonlinearity in high-energy density radiative heat transfer models. Also, the opacity of materials of interest can have a nonlinear dependence on temperature. For these reasons, the extant analytic benchmarks with space and time dependence in this field are predicated on assumptions of linearity or equilibrium. There are solutions that assume the radiation energy and internal energy of the material instantly equilibrate, inducing a Marshak wave [2, 3, 4], and solutions for non-equilibrium problems that linearize the system in T⁴ by invoking a form for the material heat capacity that is proportional to temperature cubed, an innovation of Pomraning [5], together with a specified constant opacity. This technique of defining a heat capacity to linearize the system has been used to produce an abundance of solutions, including transport treatments for the P1 equations [6], full transport solutions with one speed [7, 8], and non-grey problems [9]. In the diffusion limit, benchmark solutions have been provided for one temperature [10], three temperatures [11], and a non-grey treatment [12]. While these solutions are invaluable to code developers for verification, it is necessary to point out that there are certain drawbacks to using linear problems to verify codes whose purpose it is to solve nonlinear systems.
While ideally the numerical code in question would solve the fully nonlinear equations and implement a special equation of state when running these verification problems, there is nothing to prohibit the curators of these codes from simply solving a linearized system when the benchmark is being run. Also, solutions to linear systems can be scaled to match benchmarks, unlike the unforgiving solutions to nonlinear systems, and the solution to the linearized equations equilibrates more quickly than a nonlinear problem as the temperature increases, a result of the special equation of state. Solving a linear problem does not completely verify the functionality of a radiative transfer code. Although nonlinearity is an impediment for analytic methods, it is not necessarily a source of difficulty for spectral methods. This was the impetus for our development of a moving mesh, uncollided source treatment Discontinuous Galerkin (DG) method for solving transport problems [13]. The time dependent cell edges, which we call a moving mesh, and the uncollided source treatment were added onto the DG implementation because the transport equation, with its finite wavespeeds, admits discontinuities that inhibit DG methods from attaining their higher order convergence potential. The moving mesh and uncollided source can present a smoother problem for the method to solve: the moving mesh by matching edges to moving wavefront discontinuities, and the uncollided source by analytically resolving the most structured part of the solution. As documented in [13], we have already conducted extensive tests with this method on time dependent transport problems, which allowed for a detailed analysis of the efficacy of the moving mesh and uncollided source for different source types. For example, for finite width, nonsmooth sources that induce a nonsmooth solution that is smoothed over time, the method proved the most beneficial when compared to a standard DG implementation, but displayed only algebraic error convergence, not the optimal geometric convergence that DG methods are capable of. For smooth Gaussian sources, however, we were able to achieve spectral convergence, though the importance of the moving mesh was diminished. With an understanding of the effectiveness of this method on linear systems, we apply it to nonlinear radiative transfer problems and obtain results with accuracy comparable to an analytic solution, which is the stated intent of this work. The nonlinear problems we consider are close to the aforementioned linearized benchmarks, but with a more physical constant specific heat. While we could specify a temperature dependent opacity and provide a more physical benchmark, doing so would disallow some of the orthogonality simplifications in the DG derivation; temperature dependent opacity will be left for a future work. Before attempting fully nonlinear problems, however, we first apply our method to the existing linear radiative transfer problems. This will allow us to test our method on problems with known solutions and uncover deficiencies in a more forgiving arena. For nonlinear problems, we can still gauge the precision of our solution by inspecting the magnitude of the expansion coefficients, and the accuracy by checking against existing numerical Sn solvers. We selected the Su and Olson transport benchmark [8] as an ideal verification solution for our method.
Unfortunately, the results are not given to enough digits to fully demonstrate the effectiveness of our scheme, and recalculation of these results is non-trivial. Therefore, we rely on integration of a P1 version of this benchmark [6] to create solutions which are not necessarily as physically accurate as the full transport solution, but can be evaluated with additional precision. There are no existing radiative transfer transport verification solutions for optically thick problems outside of the equilibrium diffusion limit. By optically thick, we mean that the source width or the support of an initial condition is orders of magnitude larger than a mean free path. Conversely, optically thin problems have source widths comparable to a mean free path. For a transport code to have sufficient coverage of verification problems, converging to a diffusion benchmark of a thick problem while not resolving a mean free path is a good test. If the code resolves a mean free path, however, it will converge to the diffusion problem plus a transport correction. It is for the purpose of verifying this transport correction that we include transport solutions and the S2/P1 solutions for optically thick problems.

The remaining sections of the paper are organized as follows. Section II contains an introduction to our model equations, the nondimensionalization, and the derivation of the uncollided source. Our DG implementation is laid out briefly in Section III, with a more detailed derivation left to [13]. Section IV is devoted to the calculation of S2 benchmarks and the corresponding uncollided solutions used in verifying our method. Following this is a description of how the convergence of the error in the results is calculated (Section V), then our results (Sections VI and VII). The results sections also contain specific details of the methods used in each problem and discussion of the solution characteristics.

II Equations

We study non-equilibrium time dependent radiative heat transfer in an infinite, purely absorbing, constant opacity, stationary medium with an internal radiation source. The radiation transport and material balance equations for this system are

$\left(\frac{1}{v}\frac{\partial}{\partial\tau} + \mu\frac{\partial}{\partial z} + \sigma_a\right)\psi(z,\tau,\mu) = \sigma_a\left(\frac{1}{2} a v T(z,\tau)^4\right) + \frac{1}{2}S(z,\tau)$, (1)

$\frac{\partial}{\partial\tau} e(z,\tau) = \sigma_a\left(\phi(z,\tau) - a v T(z,\tau)^4\right)$, (2)

where the general form of the equation of state is

$e = \int_0^T dT'\, C_v(T')$. (3)

The variables in these equations are $\psi$, the angular flux or intensity; $\phi = \int_{-1}^{1} d\mu'\, \psi(x,t,\mu')$, the scalar flux; $T$, the temperature; and $e$, the material energy density. $\psi$ and $\phi$ have units of energy per area per time (GJ·cm⁻²·ns⁻¹) and $e$ has units of energy density (GJ·cm⁻³). $S$ is a source term with units of energy density per time. $\mu \in [-1,1]$ is the cosine of the particle direction with respect to the z-axis. $v$ is the particle velocity, which is the speed of light in a vacuum for our application, $v = 29.998$ cm·ns⁻¹. The radiation constant is $a = 4\sigma_{SB}/v = 0.0137225$ GJ·cm⁻³·keV⁻⁴, where $\sigma_{SB}$ is the Stefan-Boltzmann constant. The absorption cross section, $\sigma_a$, is in units of inverse length.
We seek a non-dimensionalization for these equations that is compatible with the non-dimensionalization given in the Su-Olson benchmark [9] and that may be used in optically thick problems without enlarging the non-dimensionalized length to accommodate the larger opacity:

$x = l\sigma_a z, \qquad t = l v \sigma_a \tau$. (4)

Here $l$ is a dimensionless scaling variable that is set to one for thin problems and to a small number to offset the greater $\sigma_a$ in optically thick problems. Each equation is transformed into the new variables and divided by $a v \sigma_a T_H^4$, where $T_H$ is the reference temperature, called the hohlraum temperature in previous work:

$\left(l\frac{\partial}{\partial t} + \mu l\frac{\partial}{\partial x} + 1\right)\psi(x,t,\mu) = c_a\left(\frac{1}{2}T(x,t)^4\right) + \frac{1}{2}Q(x,t)$, (5)

$l\frac{\partial}{\partial t}e(x,t) = c_a\left(\phi(x,t) - T(x,t)^4\right)$. (6)

Our non-dimensional dependent variables are now

$\bar{\psi} = \frac{\psi}{a v T_H^4}, \quad \bar{\phi} = \frac{\phi}{a v T_H^4}, \quad \bar{T} = \frac{T}{T_H}, \quad \bar{e} = \frac{e}{a T_H^4}$, (7)

the non-dimensional source is

$Q(x,t) = \frac{S(x,t)}{\sigma_a a v T_H^4}$, (8)

and the absorption ratio is defined as

$c_a = \frac{\sigma_a}{\sigma_t} = 1$. (9)

In this work, we consider two functional forms for $C_v$. To solve the Su-Olson benchmark problem, we use the familiar form

$C_v = \alpha T^3$, (10)

which renders Eq. (6) linear in $T^4$. With the conventional choice of $\alpha = 4a$,

$e_{SU} = T^4$, (11)

where the subscript "SU" indicates that this is the equation of state for the linear Su-Olson problem. While it is important for our investigation to solve these linear problems, the novel aspect of this paper is results for nonlinear problems. For these, we choose a more physical constant specific heat, $C_v = C_{v0}$, with units of energy density per temperature. This choice renders $e = C_{v0}T$. To find the relationship between the nondimensional variables with this equation of state, we define $\bar{C}_{v0} = C_{v0}/(a T_H^3)$. Now we can write

$e_N = \bar{C}_{v0} T$, (12)

where the subscript "N" indicates that this is our equation of state for the nonlinear problems.

II.A Uncollided solutions

In time dependent transport trials, we found that the deployment of an uncollided source treatment, where the solution of

$\left(l\frac{\partial}{\partial t} + \mu l\frac{\partial}{\partial x} + 1\right)\psi_u(x,t,\mu) = \frac{1}{2}Q(x,t)$ (13)

is used as a source term to solve for the collided flux, is a significant boon for accuracy when the solution is not smooth. The Green's function solution to Eq. (13) with $l = 1$ was provided by [14]. This solution is integrated for different source configurations in [15], including a square and a Gaussian source. Using these solutions, we can say that $\psi_u$ is known and can be integrated analytically to find $\phi_u$. For problems where $l \neq 1$, a simple scaling is required. For optically thick problems, when $l \ll 1$, the uncollided solution is not as useful, since it has decayed to zero by the pertinent evaluation times. To solve for the remaining collided portion of the flux, we have the system

$\left(l\frac{\partial}{\partial t} + \mu l\frac{\partial}{\partial x} + 1\right)\psi_c(x,t,\mu) = c_a\left(\frac{1}{2}T(x,t)^4\right)$, (14)

$l\frac{\partial}{\partial t}e(x,t) = c_a\left(\phi_c(x,t) + \phi_u(x,t) - T(x,t)^4\right)$. (15)

In linear transport applications, it is possible to decompose the flux infinitely: not just into uncollided and collided flux, but uncollided, first collided, second collided, and so on.
Even though the radiative transfer equations are nonlinear, we are able to use this linear solution technique because the uncollided flux has no interaction with the material. However, we cannot further decompose the flux as we could in a linear transport problem. Answers obtained with an "uncollided source" treatment refer to solutions of the collided equations (Eqs. (14) and (15)) where the uncollided solution from Eq. (13) is evaluated at the final time and added to the collided portion. A "standard source" treatment refers to Eqs. (5) and (6). The most useful source treatment for a specific problem is determined by the behavior of the uncollided flux during the solution time window. Tests run in [13] showed that integrating the uncollided source could require more computation time than a standard source, because the uncollided source is a complicated function of space and more difficult to integrate with quadrature than the standard source. As a rule, problems at times where the uncollided flux has not decayed enough to be a negligible portion of the flux are good candidates for an uncollided source treatment; in these problems, [13] showed an increase in accuracy and rate of convergence. At times where the uncollided solution has decayed, the uncollided source treatment is not as helpful. While the problems investigated in this paper are in purely absorbing media, the coupling between the material energy density and the radiation energy density acts as a scatterer in that it can smooth discontinuities over time. For this reason, we expect that the insights derived from solving purely scattering transport problems with uncollided source treatments will extend to these purely absorbing radiative transfer problems.

III Moving Mesh DG spatial discretization

Similar to the procedure in [13], we define a DG spatial discretization with a moving mesh to solve equations of the form (5) and (6), leaving some of the details of the derivation to [13]. To evaluate the integral over $\mu$ that gives the scalar flux, we discretize in angle via the method of discrete ordinates, where $\mu \in [-1,1]$ is discretized by choosing the points of a Gauss-Lobatto quadrature rule [16] for our full transport solution or, in the case of the S2 solution, a Gauss-Legendre rule. With the corresponding weights from the chosen quadrature, we can define the scalar flux as a weighted sum,

$\phi \approx \sum_{n'=1}^{N} w_{n'}\, \psi^{n'}$, (16)

where the $w_n$ are the weights and $\psi^n$ is the angular flux evaluated at a given angle. This choice makes Eqs. (5) and (6):
(20) Therefore, the weak solution of the angular \ufb02ux in a cell for a given angle is \u03c8 n(x, t) \u2248 M X j=0 Bj,k(x\u2032) un k,j. (21) where u is an entry in our three dimensional solution matrix. Likewise, the solution for the energy density in a given cell is, e(x, t) \u2248 M X j=0 Bj,k(x\u2032) uN+1 k,j , (22) The standard DG procedure for \ufb01nding the weak form of the equations involves multiplying each equation by a basis function, integrating over a cell, invoking integration by parts to shift the spatial derivative onto the basis function, and taking advantage of orthogonality to simplify the system. Our moving mesh method is similar to this, but with the added step of invoking the Reynolds Transport Theorem [17] since our integration domain is time dependent. Leaving the general outline of this procedure to [13], we arrive at, d dtUn \u2212GUn + \u0000LUn \u0001(surf) \u2212\u00b5nLUn + 1 l Un = ca 2l H + 1 2lQ for n = 1 . . . N, (23) d dtUN+1 + RU surf \u2212GU N+1 = ca l N X n\u2032=1 w\u2032 nU n\u2032 \u2212H ! , (24) where the time dependent solution vector is U n,k = [un k,0, un k,1, ..., un k,M]T , where M + 1 is the number of basis functions. We also de\ufb01ne Li,j = Z xR xL dx Bj,k(x\u2032) dBi,k(x\u2032) dx , (25) Gi,j = Z xR xL dx Bj,k(x\u2032) dBi(x\u2032) dt , (26) Qi = Z xR xL dx Bi,k(x\u2032) Q(x, t), (27) Hi = Z xR(k,t) xL(k,t) dx Bi(x\u2032) T 4(x, t). (28) 5 \fThe numerical \ufb02ux terms, which calculate the direction of \ufb02ow of the solution with an upwinding scheme based on the relative velocity of a particle with the cell edges (LU)surf i = \u0012 \u00b5n \u2212dxR dt \u0013 Bi,k(x\u2032 = 1)\u03c8 n+ \u2212 \u0012 \u00b5l \u2212dxL dt \u0013 Bi,k(x\u2032 = \u22121)\u03c8 n\u2212. (29) (RU)surf i = \u0012 \u2212dxR dt \u0013 Bi,k(x\u2032 = 1)e+ \u2212 \u0012 \u2212dxL dt \u0013 Bi,k(x\u2032 = \u22121)e\u2212. (30) \u03c8 l+ and \u03c8 l\u2212are found by evaluating Eq. (21) and e+ and e\u2212are found by evaluating Eq. (22). If we choose to employ an uncollided source treatment, Eqs. (23) and (24) change slightly in that the source term Q disappears from the RHS of Eq. (23) and ca l \u03c6 is added to the RHS of Eq. (24) where, \u03c6u = Z xR xL dx Bj,k(x\u2032) \u03c6u(x, t). (31) In this case, the numerical solution is for the collided \ufb02ux, so it is necessary to add the uncollided \ufb02ux at the \ufb01nal step to obtain the full solution. The general solution procedure is as follows. First, parameters such as the number of basis functions, the number of spatial cells, and the Sn order are set. A source is speci\ufb01ed and depending on the source treatment, the uncollided solution or the standard source is integrated at each timestep with a standard Gaussian integrator with points equal to 2M +1. The edges of the mesh are governed by a function designed to optimize the solution for the speci\ufb01c source. This function also returns velocities of the mesh edges in order to calculate the numerical \ufb02ux. The temperature balance terms are found by the equation of state and integrated in the same way as the source term. The solver returns the coe\ufb03cient arrays and the scalar \ufb02ux and material energy are reconstructed via the expansions de\ufb01ned in Eqs. (21) and (22) and, depending on the source treatment, the uncollided \ufb02ux is added onto the scalar \ufb02ux. 
To obtain solutions from our equations (23) and (24), we calculate the quadrature weights with the Python package quadpy [18] and integrate the ordinary differential equations (ODEs) with a built-in integrator from scipy [19]. Our Python implementation can be found on GitHub.¹

IV Benchmarks and uncollided solutions for the S2 radiative transfer equations

In order to show greater precision in our linear problems than is given in [8], it was expedient to calculate our own analytic benchmarks. Since we already include results for each problem calculated with S2, we choose to verify our solver using an S2/P1 benchmark given by [6]. This benchmark gives the analytic expressions for the scalar flux and energy density solutions to Eqs. (17) and (18) with $N = 2$ angles and Gauss-Legendre weighting ($[\mu_1, \mu_2] = [-\frac{1}{\sqrt{3}}, \frac{1}{\sqrt{3}}]$ and $[w_1, w_2] = [1, 1]$), where the source is a delta function in space and time. The characteristic wavespeed ($\frac{c}{\sqrt{3}}$) of the S2/P1 treatment of one speed transport problems is the result of an assumption made in the derivation of the P1 approximation that the angular flux is an affine function of angle. This approximation causes a factor of $\frac{1}{3}$ to multiply the gradient term in the current equation (the first angular moment of the angular flux) that limits the speed of information propagation; see [21, p. 221] for a thorough explanation. It is also interesting to note that if the radiation is modeled as a gas of photons with a specific intensity given by a Planckian distribution, the speed of sound in that gas is $\frac{c}{\sqrt{3}}$ [22]. The Green's function given by [6] for $\phi = \psi_1 + \psi_2$ for a delta function source at position $s$ is

$G(x,s,t) = \frac{v}{2\sqrt{3}} e^{-t}\left( \frac{t\, I_1\!\left(\sqrt{t^2 - 3(x-s)^2}\right)}{\sqrt{t^2 - 3(x-s)^2}}\, \Theta\!\left(t - \sqrt{3}|x-s|\right) + I_0\!\left(\sqrt{t^2 - 3(x-s)^2}\right) \delta\!\left(t - \sqrt{3}|x-s|\right) \right)$, (32)

where $\Theta$ is a step function, $\delta$ is a Dirac delta function, and $I_0$ and $I_1$ are modified Bessel functions of the first kind. The Green's function for the material energy density is

$G_U(x,s,t) = \frac{\sqrt{3}}{2} e^{-t}\, I_0\!\left(\sqrt{t^2 - 3(x-s)^2}\right) \Theta\!\left(t - \sqrt{3}|x-s|\right)$. (33)

We choose to find solutions for a square source and a Gaussian source, to test our method on both smooth and nonsmooth problems. Therefore, the solution to the integral

$\phi_{ss,gs} = \int_{-\infty}^{\infty} ds \int_0^{\infty} dt'\, S_{ss,gs}(s,t')\, G(x,s,t-t')$ (34)

gives the scalar flux. The energy density is likewise obtained by

$e_{ss,gs} = \int_{-\infty}^{\infty} ds \int_0^{\infty} dt'\, S_{ss,gs}(s,t')\, G_U(x,s,t-t')$. (35)

Here the subscript on the solution and the source is either "ss" for the square source or "gs" for the Gaussian source. The source term is

$S_{ss}(x,t) = \Theta(x_0 - x)\Theta(t_0 - t)$ (36)

for the square source, or

$S_{gs}(x,t) = \exp\!\left(\frac{-x^2}{x_0^2}\right)\Theta(t_0 - t)$ (37)

for the Gaussian source. Since finding a benchmark for an optically thick problem requires evaluating these integrals at extremely late times, where the integrand is not well behaved, we only calculate S2 benchmarks for our thin problems.

¹ www.github.com/wbennett39/moving mesh radiative transfer [20]
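The quadrature involved in Eqs. (34)-(36) can be sketched as follows (our function names; a sketch, not the benchmark code). Only the Heaviside/Bessel part of Eq. (32) is shown: the delta-function term reduces to a one-dimensional integral along the wavefront and would need separate treatment.

    import numpy as np
    from scipy.special import i1
    from scipy.integrate import dblquad

    SQ3 = np.sqrt(3.0)

    def G_smooth(x, s, t, v=1.0):
        """Heaviside (Bessel) part of the S2 scalar-flux Green's function, Eq. (32);
        v is the particle speed in the chosen units."""
        r2 = t * t - 3.0 * (x - s) ** 2
        if r2 <= 0.0 or t <= SQ3 * abs(x - s):
            return 0.0
        r = np.sqrt(r2)
        return v / (2.0 * SQ3) * np.exp(-t) * t * i1(r) / r

    def phi_square_source(x, t, x0=0.5, t0=10.0):
        """Heaviside contribution to Eq. (34) for the square source of Eq. (36),
        integrated over s in [-x0, x0] and t' in [0, min(t, t0)]."""
        val, _ = dblquad(lambda s, tp: G_smooth(x, s, t - tp),
                         0.0, min(t, t0),    # outer variable: t'
                         -x0, x0)            # inner variable: s
        return val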
IV.A S2 uncollided solutions

The uncollided solutions that we have utilized so far have been full transport solutions from [15]. We cannot use these to solve the S2 transport equations, since the two uncollided fluxes are not equal. The full transport solutions are based on the assumption that the Sn order of the ODEs sufficiently resolves the angular error, so that the collided flux calculated with quadrature is a good approximation of the analytic integral over $\mu$, i.e.,

$\sum_{n'=1}^{N} w_{n'}\,\psi^{n'} \approx \int_{-1}^{1} d\mu'\, \psi(x,t,\mu')$, (38)

and it is acceptable to employ the uncollided scalar flux found by analytically integrating the solution for the uncollided angular flux. In the S2 equations, the assumption of Eq. (38) does not hold, and the uncollided scalar flux must be found by numerical quadrature of the angular flux. Therefore, the process for finding the uncollided scalar flux to use as a source in the S2 solutions to our radiative transfer problems is to find the Green's solution for the angular flux, integrate that solution with quadrature, and then integrate again over the given source. The uncollided solution to Eq. (13) with a delta function source ($\delta(x)\delta(t)$) is [14]

$\psi_u(x,t) = \frac{e^{-t}}{2t}\,\delta\!\left(\mu - \frac{x}{t}\right)$. (39)

To find the S2 uncollided scalar flux, the integral is performed by Gauss-Legendre quadrature with $N = 2$ to give

$\phi_u^{pl}(x,t) = \frac{e^{-t}}{2t}\left(\delta\!\left(-\frac{1}{\sqrt{3}} - \frac{x}{t}\right) + \delta\!\left(\frac{1}{\sqrt{3}} - \frac{x}{t}\right)\right)$. (40)

Finally, to find the uncollided scalar flux corresponding to the benchmark solutions calculated with Eq. (34), we integrate

$\phi_u^{ss,gs}(x,t) = \int_{-\infty}^{\infty} ds \int_0^{\infty} dt'\, \phi_u^{pl}(x-s, t-t')\, S_{ss,gs}(s,t')$, (41)

where $S_{ss,gs}$ is given by Eq. (36) or Eq. (37). Solutions to Eq. (41) are given in Appendix A.

V Error estimation methods

In the problems presented, two methods are used to estimate the solution accuracy. For problems with a benchmark solution, we use the root mean square error (RMSE) as our error metric, calculated as

$\mathrm{RMSE} = \sqrt{\frac{\sum_i^N |y_i - \hat{y}_i|^2}{N}}$, (42)

where $y_i$ is either the calculated scalar flux or the calculated material energy density at a given node, $\hat{y}_i$ is the corresponding benchmark solution, and $N$ is the total number of nodes in the computational solution. For problems that demonstrate geometric spectral convergence as $M \to \infty$, the error can be modeled as

$\mathrm{ERROR} = C\exp(-c_1 M)$, (43)

where $M$ is the highest polynomial order of the basis and $C$ and $c_1$ are constants that could depend on the number of cells used in the problem. This curve is a straight line on a logarithmic-linear scale. For all of the problems in the following sections, we plot the average of the absolute values of the coefficients in the solution expansion to characterize the solution convergence. We define the average value of the $j$th coefficient in the solution expansion,

$\overline{|c_j|} = \frac{\sum_{k=1}^{K}|a_{j,k}|}{K}$, (44)

where $j$ corresponds to the order of the Legendre polynomial in the basis and $K$ is the number of cells. When characterizing the error of $\phi$, since we are interested in the residual error of the scalar flux, $a_{j,k}$ is the weighted average using the weights from Eq.
(16),

$a_{j,k} = \frac{\sum_{l'=1}^{N} w_{l'}\, u_{(l',k,j)}}{\sum_{l'=1}^{N} w_{l'}}$. (45)

For the material energy density, $a_{j,k}$ is

$a_{j,k} = u_{(N+1,k,j)}$. (46)

VI Optically thin results

The results in this section are for problems where the source width is equal to a mean free path, meeting our definition of an optically thin problem. These problems are characterized by solutions where the uncollided solution is a significant portion of the flux and by travelling wavefronts. Therefore, the problems in this section all use an uncollided source, and the square sources, which have travelling discontinuities, employ a moving mesh.

VI.A Su-Olson problem with a square source

We first replicate the Su-Olson problem using the same square source originally presented in [8], with $\sigma_a = 1$ cm⁻¹, source width $x_0 = 0.5$, and source duration $t_0 = 10$. The uncollided solution for this source has already been presented in [15]. For the S2 treatment of this problem, the uncollided source is given by Eq. (58). The temperature is calculated by Eq. (11). Some modifications were made to the original mesh function invented to solve the square source transport problem in [13]. In that mesh, the mesh edges inside the source never moved, while the edges outside travelled outwards with the wavespeed. This was done to resolve the static discontinuities at the source edges and the travelling discontinuities at the wavefront. In the original Su-Olson results, the source turns off at $t_0 = 10$ and solutions are required long afterwards ($t = 31.6228, 100$). With our previous square source mesh, the edges would remain clustered around the source region long after the source had ceased to introduce nonsmoothness. This is not the optimal distribution of computational zones.

Therefore, the mesh function used in this problem is as follows. If the mesh edges are defined as the vector

$X(t) = \left[x^0(t), x^1(t), \ldots, x^K(t)\right]$, (47)

they are initialized to

$x^k_o = x_0 y_j \quad \text{if } \tfrac{K}{4} \le k \le \tfrac{3K}{4}$, (48)

$x^k_o = \frac{s_k\,\delta x - 2x_0 - \delta x}{2} \quad \text{if } k < \tfrac{K}{4}$, (49)

$x^k_o = \frac{s_l\,\delta x + 2x_0 + \delta x}{2} \quad \text{if } k > \tfrac{3K}{4}$, (50)

where the $y_j$ are the Gauss-Lobatto evaluation points with the number of points $N$ equal to $\tfrac{K}{2}+1$, numbered from 0, and the $s_m$ are the Gauss-Lobatto evaluation points for $N = \tfrac{K}{4}+1$. The indices $j$ and $l$ are equal to $k - \tfrac{K}{4}$ and $k - \tfrac{3K}{4}$, respectively. $\delta x$ is a small initial width, and $K$ is always an even number. This initialization assigns half of the edges to the source and the other half to cover the rest of the solution domain. Each subdomain is spanned by edges with Gauss-Lobatto spacing, which has the effect of concentrating cells near the source edges and the outgoing wavefronts, where discontinuities are most likely. As time progresses and the outside edges move outwards with the solution, their positions are defined as

$x^k(t) = x^k_o + \frac{x^k_o}{x^K_o}\, v t \quad \text{if } t \le t_0$, (51)

where $v$ is the wavespeed: $c$ for the transport problems and $\frac{c}{\sqrt{3}}$ for the S2 problems. The edge velocity is defined as

$\frac{dx^k}{dt} = \frac{x^k_o}{x^K_o}\, v \quad \text{if } t \le t_0$. (52)

Defining the edge positions and velocities this way preserves the relative spacing of the initialized edges, meaning that the edges stay clustered at the source edges and the leading wavefronts.
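A sketch of this initialization in Python follows (our own naming; Gauss-Lobatto nodes are computed from the roots of $P'_{N-1}$, which matches how such nodes are commonly generated).

    import numpy as np

    def lobatto(n):
        """n Gauss-Lobatto nodes on [-1, 1]: the endpoints plus the roots of P'_{n-1}."""
        interior = np.polynomial.legendre.Legendre.basis(n - 1).deriv().roots()
        return np.concatenate(([-1.0], np.sort(np.real(interior)), [1.0]))

    def init_edges(K, x0=0.5, dx=1e-3):
        """Mesh-edge initialization of Eqs. (48)-(50): a Lobatto block spanning
        the source plus thin Lobatto slivers of width dx hugging each source edge."""
        y = lobatto(K // 2 + 1)                     # inner block, Eq. (48)
        s = lobatto(K // 4 + 1)                     # outer blocks, Eqs. (49)-(50)
        left = (s[:-1] * dx - 2.0 * x0 - dx) / 2.0  # k < K/4
        inner = x0 * y                              # K/4 <= k <= 3K/4
        right = (s[1:] * dx + 2.0 * x0 + dx) / 2.0  # k > 3K/4
        return np.concatenate((left, inner, right)) # K + 1 edges, ascending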
At later times, when the source is off, the solution to a square source in an optically thin problem behaves more like the solution for a Gaussian source, since the solution becomes smoother without the source emitting uncollided particles. Information flow is no longer dominated by the wavespeed. The solution will be practically zero some distance from the origin that is much less than $vt$; for instance, in the Su-Olson problem at $t = 100$ the solution is practically zero past $x = \pm 30$. For these reasons, when the source turns off, a constant acceleration diverts the trajectory of each edge so that at the final time the edges are evenly spaced over a specified width. This width is an estimate of how far the solution will have traveled by the evaluation time. We chose a constant acceleration instead of an instantaneous velocity change because the latter induced numerical errors which resulted in failure to converge to the benchmark solutions. The acceleration for each edge is found by

$c^k = \frac{2\left(\left.\frac{dx^k}{dt}\right|_{t_0}(t_0 - t_{\mathrm{final}}) - \left. x^k\right|_{t_0} + \left. x^k\right|_{t_{\mathrm{final}}}\right)}{(t_0 - t_{\mathrm{final}})^2}$, (53)

where $\left. x^k\right|_{t_{\mathrm{final}}}$ is found by specifying that the final positions vector $X(t_{\mathrm{final}})$ evenly spans $[-x_f/2, x_f/2]$, with $x_f$ our estimate for the width of the solution domain at the evaluation time. With the acceleration defined, we calculate the positions of the edges after the source has turned off with

$x^k(t) = \frac{1}{2}c^k(t - t_0)^2 + \left.\frac{dx^k}{dt}\right|_{t_0}(t - t_0) + \left. x^k\right|_{t_0}, \qquad t > t_0$, (54)

and the velocities

$\frac{dx^k}{dt} = c^k(t - t_0) + \left.\frac{dx^k}{dt}\right|_{t_0}, \qquad t > t_0$. (55)

[Figure 1: Edge position at early and late times for each edge in the thin square source mesh with 8 spaces, a wavespeed $\frac{c}{\sqrt{3}}$, and a final domain width $x_f = 30$. Panel (a): $t = 0.5$; panel (b): $t = 31.6228$.]

We refer to this method for governing the mesh edges and velocities as the "thin square source mesh"; a code sketch of the post-source edge motion follows at the end of this passage. Example $x$ vs. $t$ diagrams of the mesh edges are given in Figure 1 for clarification: Figure 1a shows the Legendre spacing of the edges inside the source and the clustering of edges outside the source around the wavefront, and Figure 1b shows how the edges relax into an even pattern at later times.

After completing the necessary steps of defining a source, choosing a functional form for the temperature, and defining a mesh, we may present our results for this problem. Tables I and II give our solutions, with digits that agree bolded, for the same points and evaluation times as in Tables 1 and 2 of [8]. The convergence results for the coefficient expansions of these results are plotted in Figures 2 and 3. The solutions at a few selected times are plotted in Figure 4. For each case, a moving mesh and the uncollided source were used, except in the S2 solution at times greater than $t_0$, since the S2 uncollided solution becomes sharp and difficult to resolve via quadrature; in these cases, the standard square source was integrated. The convergence results show that for this problem the S2 solution is considerably smoother at early times: twice as many spatial divisions (256) were required at early times in the full transport solution to achieve levels of convergence similar to the S2 solutions. Both cases exhibited similar behavior over time.
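The constant-acceleration edge motion of Eqs. (53)-(55) referenced above is only a few lines (our naming; the time arguments and edge states are those defined in the text).

    def edge_after_source(t, t0, tf, x_t0, v_t0, x_tf):
        """Edge trajectory for t > t0, Eqs. (53)-(55): a constant acceleration
        carries an edge from its state (x_t0, v_t0) at t0 to x_tf at tf."""
        c = 2.0 * (v_t0 * (t0 - tf) - x_t0 + x_tf) / (t0 - tf) ** 2  # Eq. (53)
        x = 0.5 * c * (t - t0) ** 2 + v_t0 * (t - t0) + x_t0          # Eq. (54)
        v = c * (t - t0) + v_t0                                       # Eq. (55)
        return x, v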
At early times, the significantly nonsmooth uncollided flux induced discontinuities in the material energy density and required far more spatial divisions to resolve, stiffening the problem and limiting the number of basis functions that could reasonably be used in the solution. After the source turned off, the solution smoothed and equilibrated locally (complete equilibrium is impossible with an infinite material); the solution became smoother and could easily be resolved with fewer spatial cells and more basis functions. We also note that at later times, after the source has turned off, the full transport and S2 solutions become more similar. Since we claim to present benchmark quality results, we are obliged to discuss the accuracy of Tables I and II. While [8] claims to converge their solution to four digits, the observant reader will see that in some cases only 3 digits match. Given that our solutions are converged to much greater accuracy, we believe that our reported digits are correct.

VI.B Constant Cv problem with a square source

This problem uses the same source as the problem of the last section but with a different functional form of the heat capacity. Using Eq. (12), our system becomes nonlinear. We choose $C_{v0} = 0.03$ GJ·cm⁻³·keV⁻¹; this value was chosen to see an appreciable change in temperature during the selected time window. Now that we no longer have the convenient condition that $e = T^4$, the local equilibrium condition is not $\phi = e$ as in the Su-Olson problem but $\phi^{1/4} = T$. For this reason, the solution plots for this problem and all subsequent constant Cv problems do not show scalar flux and material energy density but rather radiation temperature and material temperature.

[Figure 2: Log-linear scaled average value of the solution expansion coefficients (found by Eq. (44)) for the optically thin ($\sigma_a = 1$ cm⁻¹) Su-Olson square source problem with $x_0 = 0.5$, $t_0 = 10$. Panel (a): radiation energy density, $\phi$; panel (b): material energy density, $e$. The quadrature order for all results is S256. All results were calculated with a moving mesh and uncollided source treatment.]

[Figure 3: Log-linear scaled average value of the solution expansion coefficients (found by Eq. (44)) for the optically thin ($\sigma_a = 1$ cm⁻¹) S2 Su-Olson square source problem with $x_0 = 0.5$, $t_0 = 10$. Panel (a): radiation energy density, $\phi$; panel (b): material energy density, $e$.
All results were calculated with a moving mesh and uncollided source treatment except for the t = 31.6228 and t = 100 cases where a standard source treatment was used. 11 \f\u22120.5 0.0 0.5 0.00 0.05 0.10 S2 Transport (a) t = 0.1 \u22120.5 0.0 0.5 0.0 0.1 0.2 S2 Transport (b) t = 0.31623 \u22121 0 1 0.0 0.2 0.4 0.6 S2 Transport (c) t = 1 \u22122 0 2 0.0 0.5 1.0 S2 Transport (d) t = 3.16228 \u221210 0 10 0 1 2 S2 Transport (e) t = 10 \u221220 0 20 0.0 0.1 0.2 0.3 S2 Transport (f) t = 100 Figure 4: S2 (left of x = 0) and full transport (right of x = 0) solutions for the optically thin Su-Olson square source problem with x0 = 0.5, t0 = 10. Solid lines are scalar \ufb02ux, \u03c6, dash-dotted lines are the uncollided scalar \ufb02ux, \u03c6u and dashed are material energy density, e. and material temperature. Though we can no longer rely on benchmark solutions for this problem, we can be con\ufb01dent that our solution is converged by plotting the magnitude of the coe\ufb03cients and check for systematic errors with a Sn solver. Also, since the mesh method employed here is the same method described in Section VI.A, we can be con\ufb01dent that the mesh is not introducing error. Solutions to this problem are plotted in Figure 5. We note that the problem is not everywhere at equilibrium by t = 100 as the Su-Olson problem is. Also, we note that the solution does not travel as far. In the Su-Olson problem, the speci\ufb01c heat is very small when temperature is small and increases with the cube of the temperature. This has the e\ufb00ect of attracting the solution to equilibrium. This e\ufb00ect is not present in a constant Cv case and there is less incentive for the solution to fall into local equilibrium. It is also noteworthy that at very early times (t < 1) the scalar \ufb02ux has not interacted with the material as much as in the Su-Olson problem and is mostly made up of the uncollided \ufb02ux. This is apparent in Figures 6a and 7a. Since the solution has not fully equilibrated at later times, the solutions are less smooth compared to the Su-Olson problem. The repercussions of this can be observed by comparing the convergence results at late times for this problem in Figures 6 and 7 to the convergence results of the Su-Olson problem in Figure 2. Also, the convergence results show that the material energy density is generally more nonsmooth than the scalar \ufb02ux. Nevertheless, we are satis\ufb01ed with the convergence of these results and present them in Tables III and IV. The di\ufb00erence between the full transport solution and our S2 result is also of interest, as it provides insight into the physical characteristics of the system. We note that the two solutions only begin to look similar at later times as the solution equilibrates. This tells us that the solution becomes less angularly dependent and better approximated by only two angles. 12 \f\u22120.5 0.0 0.5 0.0 0.2 0.4 S2 Transport (a) t = 0.1 \u22120.5 0.0 0.5 0.0 0.2 0.4 0.6 S2 Transport (b) t = 0.31623 \u22121 0 1 0.00 0.25 0.50 0.75 S2 Transport (c) t = 1 \u22122 0 2 0.0 0.5 S2 Transport (d) t = 3.16228 \u221210 0 10 0.0 0.5 1.0 S2 Transport (e) t = 10 \u221220 0 20 0.0 0.2 0.4 S2 Transport (f) t = 100 Figure 5: S2 (left of x = 0) and full transport (right of x = 0) solutions for the optically thin constant Cv square source problem with x0 = 0.5, t0 = 10. Solid lines are radiation temperature \u03c6 1/4, dash-dotted lines are the uncollided radiation temperature, \u03c6 1/4 u , and dashed are temperature, T. 
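The coefficient-magnitude convergence check described above can be sketched in a few lines of Python. We assume here, purely for illustration, that the expansion coefficients are stored in an array of shape (number of cells, M + 1); this layout is an assumption, not our code's actual data structure.

```python
import numpy as np

def avg_coefficient_magnitude(coeffs):
    """Average |c_n| over all cells for each basis-function order n,
    the quantity plotted in the convergence figures; geometric decay
    with n on a log scale indicates a spectrally resolved solution.
    `coeffs` is assumed to have shape (num_cells, M + 1)."""
    return np.mean(np.abs(coeffs), axis=0)

# Hypothetical coefficients for 32 cells and M = 12, decaying geometrically.
rng = np.random.default_rng(0)
coeffs = rng.standard_normal((32, 13)) * 10.0 ** -np.arange(13)
print(avg_coefficient_magnitude(coeffs))
```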
[Figure 6: Log-linear scaled average value of the solution expansion coefficients (found by Eqs. (44)) for the optically thin (σa = 1 cm^-1) constant Cv square source problem where x0 = 0.5, t0 = 10. The quadrature order for all results is S256. All results were calculated with a moving mesh and uncollided source treatment. Panels: (a) radiation energy density, φ; (b) material energy density, e. Curves use 256 cells for t <= 3.16228, 128 cells at t = 10.0, 32 cells at t = 31.6228, and 64 cells at t = 100.0.]

Table I: Transport (top) and S2 (bottom) results for the scalar flux, φ, for the thin square source Su-Olson problem with x0 = 0.5, t0 = 10.

Transport:
x \ t | 0.1 | 0.31623 | 1.0 | 3.16228 | 10.0 | 31.6228 | 100.0
0.01 | 0.095317 | 0.275294 | 0.643151 | 1.20069 | 2.235815 | 0.690187 | 0.357195
0.1 | 0.095317 | 0.275294 | 0.635943 | 1.188724 | 2.219553 | 0.689743 | 0.357137
0.17783 | 0.095317 | 0.275294 | 0.619626 | 1.162044 | 2.183558 | 0.688773 | 0.357011
0.31623 | 0.095317 | 0.262715 | 0.561896 | 1.071861 | 2.064534 | 0.685719 | 0.356612
0.45 | 0.08824 | 0.203128 | 0.447114 | 0.909526 | 1.860758 | 0.681168 | 0.356016
0.5 | 0.047658 | 0.137647 | 0.358083 | 0.799027 | 1.731816 | 0.679072 | 0.35574
0.56234 | 0.003762 | 0.062776 | 0.253722 | 0.666804 | 1.574955 | 0.67616 | 0.355355
0.75 | | 0.002793 | 0.114315 | 0.446752 | 1.273984 | 0.665459 | 0.353929
1.0 | | | 0.036471 | 0.275396 | 0.987815 | 0.646922 | 0.351409
1.33352 | | | 0.002894 | 0.145309 | 0.708221 | 0.615381 | 0.346972
1.77828 | | | | 0.059674 | 0.450163 | 0.563509 | 0.339223
3.16228 | | | | 0.001155 | 0.096453 | 0.369659 | 0.303466
5.62341 | | | | | 0.003632 | 0.108305 | 0.213818
10.0 | | | | | | 0.003914 | 0.072059
17.78279 | | | | | | | 0.002721

S2:
x \ t | 0.1 | 0.31623 | 1.0 | 3.16228 | 10.0 | 31.6228 | 100.0
0.01 | 0.095317 | 0.275294 | 0.661668 | 1.0451 | 1.918396 | 0.659852 | 0.352728
0.1 | 0.095317 | 0.275294 | 0.645629 | 1.034885 | 1.906562 | 0.659502 | 0.352673
0.17783 | 0.095317 | 0.275294 | 0.604866 | 1.012559 | 1.88071 | 0.658738 | 0.352555
0.31623 | 0.095317 | 0.275294 | 0.515344 | 0.941841 | 1.798924 | 0.656328 | 0.352181
0.45 | 0.089213 | 0.179984 | 0.405572 | 0.835497 | 1.67622 | 0.652732 | 0.351621
0.5 | 0.047658 | 0.137647 | 0.358083 | 0.786079 | 1.619313 | 0.651071 | 0.351362
0.56234 | 0.000000 | 0.085419 | 0.299433 | 0.722866 | 1.545756 | 0.648764 | 0.351001
0.75 | | 0.0000000 | 0.155119 | 0.552606 | 1.338003 | 0.640252 | 0.349662
1.0 | | | 0.027203 | 0.369788 | 1.092239 | 0.625397 | 0.347295
1.33352 | | | 0.000000 | 0.19493 | 0.816475 | 0.599795 | 0.343124
1.77828 | | | | 0.056328 | 0.532488 | 0.556761 | 0.335829
3.16228 | | | | | 0.09838 | 0.384521 | 0.30198
5.62341 | | | | | 0.00028 | 0.116245 | 0.215656
10.0 | | | | | | 0.002009 | 0.073828
17.78279 | | | | | | | 0.002307
RMSE | 2.656e-07 | 1.747e-07 | 1.642e-06 | 3.589e-07 | 2.647e-07 | 9.157e-08 | 6.128e-09

VI.C Su-Olson problem with a Gaussian source

Returning to the linearized Su-Olson problem, we consider a Gaussian source. Here the source is defined by Eq. (37), where the uncollided solution is taken from [15] for the full transport solution or from Eq. (56) for the S2 solution. We set x0 = 0.5, and the source duration is still t0 = 10. In [13], smooth Gaussian sources allowed for geometric convergence of the solution at all times; we expect the same result in this application. Since there are no discontinuities induced by nonsmoothness in the source, we are able to employ a far simpler mesh function than what was used for the thin square source problems.
We only guess the edge of the problem domain and span the given space evenly with stationary edges. The moving mesh was not used in this case because earlier tests in [13] revealed the mesh to be non-useful in smooth problems; the uncollided solution, however, was employed. We include Gaussian sources even though they do not prove to be challenging enough to require the full application of our method, because we can achieve very accurate solutions. While S256 was used for the full transport solutions for the thin square sources, we only use S64 for the Gaussian sources. This choice is informed by tests run in [13] which showed that far fewer quadrature points are required to resolve the angular error.

With the temperature defined by Eq. (11), we present solutions in Tables V and VI, with convergence results shown in Figures 8 and 9. Solutions are shown in Figure 10. As illustrated in the aforementioned convergence plots, the problem converges geometrically even with a standard static mesh. We quickly note that for t = 31.6228 and t = 100 in the S2 results in Figure 9, a moving mesh was employed; in this case, the mesh moved with a constant speed from the initial width to the specified final width (see the sketch after the figures and table below). This was not done out of necessity, but rather to ascertain whether the moving mesh was useful in these problems.

[Figure 7: Log-linear scaled average value of the solution expansion coefficients (found by Eqs. (44)) for the optically thin (σa = 1 cm^-1) S2 constant Cv square source problem where x0 = 0.5, t0 = 10. All results were calculated with a moving mesh and uncollided source treatment except for the t = 31.6228 and t = 100 cases, where a standard source treatment was used. Panels: (a) radiation energy density, φ; (b) material energy density, e. Curves use 128 cells for t <= 10.0 and 32 cells at t = 31.6228 and t = 100.0.]

[Figure 8: Log-linear scaled average value of the solution expansion coefficients (found by Eqs. (44)) for the optically thin (σa = 1 cm^-1) Su-Olson Gaussian source problem where x0 = 0.5, t0 = 10. The quadrature order for all results is S16. All results were calculated with a static mesh and uncollided source treatment. Panels: (a) radiation energy density, φ; (b) material energy density, e. Curves use 64 cells at all times.]

[Figure 9: Log-linear scaled average value of the solution expansion coefficients (found by Eqs. (44)) for the optically thin (σa = 1 cm^-1) S2 Su-Olson Gaussian source problem where x0 = 0.5, t0 = 10. All results were calculated with a moving mesh and uncollided source treatment except for the t = 31.6228 and t = 100 cases, where a standard source treatment was used. The dashed lines represent solutions found with a moving mesh. Curves use 64 cells except at t = 100.0, where 32 cells were used.]

[Figure 10: S2 (left of x = 0) and full transport (right of x = 0) solutions for the optically thin Su-Olson Gaussian source problem with x0 = 0.5, t0 = 10, at (a) t = 0.1, (b) t = 0.31623, (c) t = 1, (d) t = 3.16228, (e) t = 10, (f) t = 100. Solid lines are scalar flux, φ, dash-dotted lines are the uncollided scalar flux, φu, and dashed lines are material energy density, e.]

Table II: Transport (top) and S2 (bottom) results for the material energy density, e, for the thin square source Su-Olson problem with x0 = 0.5, t0 = 10.

Transport:
x \ t | 0.1 | 0.31623 | 1.0 | 3.16228 | 10.0 | 31.6228 | 100.0
0.01 | 0.004682 | 0.040935 | 0.271307 | 0.94687 | 2.111923 | 0.704991 | 0.359136
0.1 | 0.004682 | 0.040935 | 0.268692 | 0.937154 | 2.095970 | 0.704514 | 0.359078
0.17783 | 0.004682 | 0.040935 | 0.26264 | 0.915402 | 2.060646 | 0.703474 | 0.358949
0.31623 | 0.004682 | 0.04034 | 0.239814 | 0.840926 | 1.943709 | 0.700198 | 0.358544
0.45 | 0.004552 | 0.033142 | 0.188264 | 0.702883 | 1.742967 | 0.69532 | 0.357938
0.5 | 0.002342 | 0.020469 | 0.141918 | 0.604935 | 1.615402 | 0.693073 | 0.357657
0.56234 | 0.00005 | 0.00635 | 0.08838 | 0.48846 | 1.460394 | 0.689954 | 0.357266
0.75 | | 0.000063 | 0.030141 | 0.306558 | 1.165912 | 0.678498 | 0.355816
1.0 | | | 0.00625 | 0.175192 | 0.889908 | 0.658685 | 0.353254
1.33352 | | | 0.000162 | 0.08352 | 0.625213 | 0.625066 | 0.348744
1.77828 | | | | 0.029349 | 0.386884 | 0.570027 | 0.34087
3.16228 | | | | 0.000183 | 0.076146 | 0.367269 | 0.304561
5.62341 | | | | | 0.002412 | 0.103114 | 0.213768
10.0 | | | | | | 0.003426 | 0.071226
17.78279 | | | | | | | 0.002609

S2:
x \ t | 0.1 | 0.31623 | 1.0 | 3.16228 | 10.0 | 31.6228 | 100.0
0.01 | 0.004682 | 0.040935 | 0.280241 | 0.847357 | 1.808991 | 0.672725 | 0.354597
0.1 | 0.004682 | 0.040935 | 0.273782 | 0.837875 | 1.797308 | 0.672354 | 0.354541
0.17783 | 0.004682 | 0.040935 | 0.261727 | 0.817144 | 1.771785 | 0.671544 | 0.354421
0.31623 | 0.004682 | 0.040936 | 0.225609 | 0.751409 | 1.691033 | 0.668989 | 0.354041
0.45 | 0.004642 | 0.030469 | 0.169249 | 0.652361 | 1.569863 | 0.665177 | 0.353472
0.5 | 0.002341 | 0.020467 | 0.141916 | 0.606256 | 1.513661 | 0.663418 | 0.353209
0.56234 | 0.000000 | 0.008542 | 0.10839 | 0.547588 | 1.441078 | 0.660972 | 0.352842
0.75 | | 0.000000 | 0.038396 | 0.393514 | 1.23688 | 0.651954 | 0.351482
1.0 | | | 0.001762 | 0.236801 | 0.997162 | 0.636229 | 0.349077
1.33352 | | | 0.000000 | 0.101119 | 0.731352 | 0.609161 | 0.344841
1.77828 | | | | 0.019412 | 0.462775 | 0.563768 | 0.337432
3.16228 | | | | 0.000000 | 0.074037 | 0.383609 | 0.303078
5.62341 | | | | | 0.000086 | 0.110507 | 0.215664
10.0 | | | | | | 0.001626 | 0.072987
17.78279 | | | | | | | 0.002196
RMSE | 1.233e-08 | 2.372e-08 | 9.65e-08 | 1.366e-07 | 4.052e-08 | 9.937e-08 | 5.936e-09

On the difference between the full and S2 solutions, we point out that it can be seen in Figure 10 that the S2 solution is better at estimating the solution at early times than it was for the square sources.
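A sketch of the constant-speed moving mesh mentioned above for the late-time S2 Gaussian results might look like the following. The function signature and the choice to interpolate the domain width linearly in time from t = 0 are assumptions for illustration, not our exact routine.

```python
import numpy as np

def constant_speed_edges(t, t_final, width0, width_final, n_edges):
    """Mesh edges that expand at constant speed so the domain grows
    linearly from width0 at t = 0 to width_final at t = t_final
    (assumed linear-in-time start; a sketch of the simple moving mesh
    used for the late-time S2 Gaussian results)."""
    frac = np.clip(t / t_final, 0.0, 1.0)
    width = width0 + frac * (width_final - width0)
    return np.linspace(-width / 2.0, width / 2.0, n_edges)

# Hypothetical values: 64 spaces, expanding to a final width of 60 by t = 100.
edges = constant_speed_edges(t=50.0, t_final=100.0, width0=4.0,
                             width_final=60.0, n_edges=65)
```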
This is because the solution is mostly uncollided at early times, and the S2 and full transport uncollided solutions for a Gaussian source are similar. For intermediate times, the full transport solution has more angular dependence and the S2 solution is not as accurate. At later times, collisions smooth the angular flux of the transport solution and reduce the angular dependence, bringing the system closer to the diffusion approximation, and the S2 and transport solutions again agree.

Table III: Transport (top) and S2 (bottom) results for the scalar flux, φ, for the thin square source constant Cv problem with x0 = 0.5, t0 = 10, and Cv0 = 0.03 GJ·cm^-3·keV^-1.

Transport:
x \ t | 0.1 | 0.31623 | 1.0 | 3.16228 | 10.0 | 31.6228 | 100.0
0.01 | 0.095162 | 0.271108 | 0.563683 | 0.765084 | 1.96832 | 0.267247 | 0.085108
0.1 | 0.095162 | 0.271108 | 0.557609 | 0.756116 | 1.950367 | 0.266877 | 0.085054
0.17783 | 0.095162 | 0.271108 | 0.543861 | 0.736106 | 1.910675 | 0.266071 | 0.084937
0.31623 | 0.095162 | 0.258592 | 0.495115 | 0.668231 | 1.779896 | 0.263527 | 0.084565
0.45 | 0.08809 | 0.199962 | 0.396442 | 0.543721 | 1.558248 | 0.259729 | 0.084008
0.5 | 0.047581 | 0.135554 | 0.316071 | 0.453151 | 1.420865 | 0.257976 | 0.08375
0.56234 | 0.00376 | 0.061935 | 0.222261 | 0.349209 | 1.252213 | 0.255538 | 0.083392
0.75 | | 0.002788 | 0.102348 | 0.21078 | 0.908755 | 0.246543 | 0.082061
1.0 | | | 0.034228 | 0.124305 | 0.562958 | 0.230831 | 0.079715
1.33352 | | | 0.002864 | 0.067319 | 0.27752 | 0.203718 | 0.075591
1.77828 | | | | 0.031357 | 0.120054 | 0.158039 | 0.068419
3.16228 | | | | 0.001057 | 0.013737 | 0.022075 | 0.036021
5.62341 | | | | | 0.000413 | 0.000814 | 0.001068
10.0 | | | | | | 5e-06 | 5e-06
17.78279 | | | | | | |

S2:
x \ t | 0.1 | 0.31623 | 1.0 | 3.16228 | 10.0 | 31.6228 | 100.0
0.01 | 0.095162 | 0.271108 | 0.579404 | 0.649946 | 1.57957 | 0.243368 | 0.081882
0.1 | 0.095162 | 0.271108 | 0.56606 | 0.641494 | 1.567037 | 0.243074 | 0.081833
0.17783 | 0.095162 | 0.271108 | 0.529958 | 0.623092 | 1.539665 | 0.242432 | 0.081725
0.31623 | 0.095162 | 0.271108 | 0.45241 | 0.565417 | 1.453129 | 0.240406 | 0.081383
0.45 | 0.08906 | 0.177033 | 0.357556 | 0.480232 | 1.323479 | 0.237381 | 0.080871
0.5 | 0.047581 | 0.135554 | 0.31607 | 0.441134 | 1.263425 | 0.235984 | 0.080634
0.56234 | | 0.084378 | 0.264889 | 0.392645 | 1.185561 | 0.234042 | 0.080305
0.75 | | | 0.140336 | 0.276699 | 0.962617 | 0.226873 | 0.079082
1.0 | | | 0.02637 | 0.175485 | 0.692975 | 0.214346 | 0.076927
1.33352 | | | | 0.097064 | 0.392628 | 0.192725 | 0.073141
1.77828 | | | | 0.033465 | 0.155709 | 0.156395 | 0.066564
3.16228 | | | | | 0.007509 | 0.03175 | 0.03718
5.62341 | | | | | 4.7e-05 | 0.000413 | 0.001075
10.0 | | | | | | |
17.78279 | | | | | | |

VI.D Constant Cv problem with a Gaussian source

We include a problem with the same source and parameters as the last section but with a constant specific heat, so that the temperature to material energy density conversion is given by Eq. (12). We specify the dimensional specific heat to be Cv0 = 0.03 GJ·cm^-3·keV^-1. Solutions are shown in Figure 11. While we have less certainty in forecasting the behavior of these nonlinear results, we still expect geometric convergence since there are no sources of nonsmoothness. Like the Gaussian source in the linearized system, we specified a static mesh that evenly spans some estimated width of the actual solution domain. The convergence results in Figures 12 and 13 show that this was sufficient to achieve geometric convergence at all chosen times. We also see the phenomenon first observed in Section VI.B of extremely fast convergence for the scalar flux at early times; this is once again the result of the scalar flux being mostly uncollided at these times, making solving for the collided portion simpler. Like the linearized Gaussian, during the time the source is on, the discrepancy between the S2 and transport solutions grows.
This discrepancy diminishes after the source is turned off and scattering reduces the angular dependence of the transport solution. Similar to the nonlinear square source, the solution is not in equilibrium by t = 100. This leads us to conclude that equilibrium is not as impacted by the nonsmoothness of the source as it is by the functional form of the specific heat, since the Su-Olson problem with a nonsmooth source reaches equilibrium more quickly than this constant Cv smooth source problem. We present the solutions in Tables VII and VIII.

[Figure 11: S2 (left of x = 0) and full transport (right of x = 0) solutions for the optically thin constant Cv Gaussian source problem with x0 = 0.5, t0 = 10, at (a) t = 0.1, (b) t = 0.31623, (c) t = 1, (d) t = 3.16228, (e) t = 10, (f) t = 100. Solid lines are radiation temperature, φ^{1/4}, dash-dotted lines are the uncollided radiation temperature, φu^{1/4}, and dashed lines are temperature, T.]

[Figure 12: Log-linear scaled average value of the solution expansion coefficients (found by Eqs. (44)) for the optically thin (σa = 1 cm^-1) constant Cv Gaussian source problem where x0 = 0.5, t0 = 10. The quadrature order for all results is S16. All results were calculated with a static mesh and uncollided source treatment. Panels: (a) radiation energy density, φ; (b) material energy density, e. Curves use 64 cells for t <= 3.16228 and 128 cells for t >= 10.0.]

Table V: Transport (top) and S2 (bottom) results for the scalar flux, φ, for the thin Gaussian source Su-Olson problem with x0 = 0.5, t0 = 10.

Transport:
x \ t | 0.1 | 0.31623 | 1.0 | 3.16228 | 10.0 | 31.6228 | 100.0
0.01 | 0.094869 | 0.264712 | 0.571441 | 1.053107 | 1.956705 | 0.610025 | 0.316341
0.1 | 0.091217 | 0.255181 | 0.556247 | 1.033176 | 1.933017 | 0.609634 | 0.31629
0.17783 | 0.083721 | 0.235541 | 0.524582 | 0.991429 | 1.883176 | 0.608783 | 0.316178
0.31623 | 0.063836 | 0.182853 | 0.436813 | 0.873994 | 1.741091 | 0.6061 | 0.315826
0.45 | 0.042514 | 0.125118 | 0.334025 | 0.732182 | 1.564765 | 0.602104 | 0.315298
0.5 | 0.035215 | 0.104949 | 0.29568 | 0.67763 | 1.49511 | 0.600263 | 0.315054
0.56234 | 0.027081 | 0.082142 | 0.250049 | 0.611096 | 1.408369 | 0.597705 | 0.314714
0.75 | 0.010198 | 0.033042 | 0.137011 | 0.433926 | 1.163525 | 0.588305 | 0.313452
1.0 | 0.001799 | 0.006564 | 0.050023 | 0.267122 | 0.898484 | 0.572018 | 0.311224
1.33352 | 8.2e-05 | 0.000371 | 0.008965 | 0.139587 | 0.64098 | 0.544297 | 0.3073
1.77828 | | 2e-06 | 0.000387 | 0.05756 | 0.40731 | 0.498676 | 0.300446
3.16228 | | | | 0.00137 | 0.087854 | 0.327865 | 0.268818
5.62341 | | | | | 0.003363 | 0.096556 | 0.189497
10.0 | | | | | | 0.003523 | 0.063953
17.78279 | | | | | | | 0.002424

S2:
x \ t | 0.1 | 0.31623 | 1.0 | 3.16228 | 10.0 | 31.6228 | 100.0
0.01 | 0.00467 | 0.040073 | 0.240699 | 0.727949 | 1.572225 | 0.594815 | 0.314048
0.1 | 0.00449 | 0.03858 | 0.23398 | 0.717893 | 1.560265 | 0.594488 | 0.313999
0.17783 | 0.004119 | 0.035509 | 0.219946 | 0.696571 | 1.534799 | 0.593775 | 0.313893
0.31623 | 0.003137 | 0.027317 | 0.180837 | 0.6345 | 1.45974 | 0.591526 | 0.313557
0.45 | 0.002085 | 0.018437 | 0.134764 | 0.554569 | 1.360669 | 0.58817 | 0.313054
0.5 | 0.001726 | 0.015365 | 0.117566 | 0.522032 | 1.319396 | 0.586621 | 0.312821
0.56234 | 0.001326 | 0.011918 | 0.09716 | 0.480715 | 1.266036 | 0.584468 | 0.312496
0.75 | 0.000497 | 0.004632 | 0.047669 | 0.359391 | 1.101318 | 0.576529 | 0.311293
1.0 | 8.7e-05 | 0.000863 | 0.013277 | 0.223358 | 0.893342 | 0.562683 | 0.309166
1.33352 | 3e-06 | 4.3e-05 | 0.001308 | 0.100634 | 0.657392 | 0.538848 | 0.305419
1.77828 | | | 1.8e-05 | 0.022908 | 0.417809 | 0.498866 | 0.298865
3.16228 | | | | | 0.068504 | 0.34004 | 0.268475
5.62341 | | | | | 0.000121 | 0.098551 | 0.191126
10.0 | | | | | | 0.001487 | 0.064775
17.78279 | | | | | | | 0.001958
RMSE | 9.105e-10 | 8.561e-10 | 2.604e-09 | 7.839e-09 | 1.321e-08 | 6.808e-09 | 6.899e-05

Table VI: Transport (top) and S2 (bottom) results for the material energy density, e, for the thin Gaussian source Su-Olson problem with x0 = 0.5, t0 = 10.

Transport:
x \ t | 0.1 | 0.31623 | 1.0 | 3.16228 | 10.0 | 31.6228 | 100.0
0.01 | 0.00467 | 0.040089 | 0.245869 | 0.833299 | 1.847977 | 0.623026 | 0.318058
0.1 | 0.00449 | 0.038593 | 0.238404 | 0.815919 | 1.824555 | 0.622608 | 0.318006
0.17783 | 0.004119 | 0.035517 | 0.222913 | 0.779578 | 1.775289 | 0.621695 | 0.317892
0.31623 | 0.003137 | 0.027313 | 0.18051 | 0.677838 | 1.63499 | 0.61882 | 0.317534
0.45 | 0.002085 | 0.018426 | 0.132108 | 0.556161 | 1.461245 | 0.614538 | 0.316998
0.5 | 0.001726 | 0.015355 | 0.114504 | 0.509791 | 1.392751 | 0.612566 | 0.316749
0.56234 | 0.001326 | 0.011908 | 0.093967 | 0.453643 | 1.307592 | 0.609828 | 0.316404
0.75 | 0.000497 | 0.004629 | 0.04582 | 0.307085 | 1.068295 | 0.59977 | 0.315121
1.0 | 8.7e-05 | 0.000865 | 0.013544 | 0.175481 | 0.812011 | 0.58237 | 0.312856
1.33352 | 3e-06 | 4.4e-05 | 0.001743 | 0.082634 | 0.567439 | 0.552834 | 0.308867
1.77828 | | | 5e-05 | 0.029302 | 0.35102 | 0.504445 | 0.301903
3.16228 | | | | 0.000297 | 0.069582 | 0.325807 | 0.269788
5.62341 | | | | | 0.002245 | 0.091971 | 0.189454
10.0 | | | | | | 0.003086 | 0.063216
17.78279 | | | | | | | 0.002324

S2:
x \ t | 0.1 | 0.31623 | 1.0 | 3.16228 | 10.0 | 31.6228 | 100.0
0.01 | 0.00467 | 0.040073 | 0.240699 | 0.727949 | 1.572225 | 0.594815 | 0.314048
0.1 | 0.00449 | 0.03858 | 0.23398 | 0.717893 | 1.560265 | 0.594488 | 0.313999
0.17783 | 0.004119 | 0.035509 | 0.219946 | 0.696571 | 1.534799 | 0.593775 | 0.313893
0.31623 | 0.003137 | 0.027317 | 0.180837 | 0.6345 | 1.45974 | 0.591526 | 0.313557
0.45 | 0.002085 | 0.018437 | 0.134764 | 0.554569 | 1.360669 | 0.58817 | 0.313054
0.5 | 0.001726 | 0.015365 | 0.117566 | 0.522032 | 1.319396 | 0.586621 | 0.312821
0.56234 | 0.001326 | 0.011918 | 0.09716 | 0.480715 | 1.266036 | 0.584468 | 0.312496
0.75 | 0.000497 | 0.004632 | 0.047669 | 0.359391 | 1.101318 | 0.576529 | 0.311293
1.0 | 8.7e-05 | 0.000863 | 0.013277 | 0.223358 | 0.893342 | 0.562683 | 0.309166
1.33352 | 3e-06 | 4.3e-05 | 0.001308 | 0.100634 | 0.657392 | 0.538848 | 0.305419
1.77828 | | | 1.8e-05 | 0.022908 | 0.417809 | 0.498866 | 0.298865
3.16228 | | | | | 0.068504 | 0.34004 | 0.268475
5.62341 | | | | | 0.000121 | 0.098551 | 0.191126
10.0 | | | | | | 0.001487 | 0.064775
17.78279 | | | | | | | 0.001958
RMSE | 9.105e-10 | 8.561e-10 | 2.604e-09 | 7.839e-09 | 1.321e-08 | 6.808e-09 | 6.899e-05

[Figure 13: Log-linear scaled average value of the solution expansion coefficients (found by Eqs. (44)) for the optically thin (σa = 1 cm^-1) S2 constant Cv Gaussian source problem where x0 = 0.5, t0 = 10. All results were calculated with a static mesh and uncollided source treatment. Panels: (a) radiation energy density, φ; (b) material energy density, e. Curves use 64 cells for t <= 3.16228 and 128 cells for t >= 10.0.]
Table VII: Transport (top) and S2 (bottom) results for the scalar flux, φ, for the thin Gaussian source constant Cv problem with x0 = 0.5, t0 = 10, and Cv0 = 0.03 GJ·cm^-3·keV^-1.

Transport:
x \ t | 0.1 | 0.31623 | 1.0 | 3.16228 | 10.0 | 31.6228 | 100.0
0.01 | 0.094715 | 0.260632 | 0.501868 | 0.653649 | 1.639873 | 0.218959 | 0.069516
0.1 | 0.091069 | 0.251252 | 0.488575 | 0.636913 | 1.614763 | 0.21862 | 0.069466
0.17783 | 0.083585 | 0.231922 | 0.460879 | 0.602409 | 1.561824 | 0.217881 | 0.069356
0.31623 | 0.063732 | 0.180063 | 0.384158 | 0.509408 | 1.41001 | 0.215547 | 0.069011
0.45 | 0.042445 | 0.123229 | 0.294366 | 0.405067 | 1.219261 | 0.212063 | 0.068495
0.5 | 0.035158 | 0.103372 | 0.260874 | 0.367198 | 1.142975 | 0.210455 | 0.068256
0.56234 | 0.027037 | 0.080916 | 0.221011 | 0.322683 | 1.047037 | 0.208219 | 0.067923
0.75 | 0.010181 | 0.032561 | 0.122095 | 0.213179 | 0.768814 | 0.19997 | 0.066689
1.0 | 0.001796 | 0.006473 | 0.045352 | 0.123297 | 0.461235 | 0.185567 | 0.064514
1.33352 | 8.2e-05 | 0.000367 | 0.008352 | 0.064418 | 0.223392 | 0.160718 | 0.060695
1.77828 | | 2e-06 | 0.000371 | 0.029558 | 0.097211 | 0.118847 | 0.054065
3.16228 | | | | 0.001165 | 0.011091 | 0.014263 | 0.024308
5.62341 | | | | | 0.00034 | 0.00057 | 0.000634
10.0 | | | | | | 4e-06 | 3e-06
17.78279 | | | | | | |

S2:
x \ t | 0.1 | 0.31623 | 1.0 | 3.16228 | 10.0 | 31.6228 | 100.0
0.01 | 0.094714 | 0.260323 | 0.477228 | 0.539628 | 1.295862 | 0.197266 | 0.066556
0.1 | 0.091067 | 0.250998 | 0.467159 | 0.53082 | 1.283232 | 0.197001 | 0.066511
0.17783 | 0.083584 | 0.231775 | 0.445829 | 0.512305 | 1.256308 | 0.196423 | 0.066411
0.31623 | 0.063733 | 0.180133 | 0.384035 | 0.459709 | 1.176709 | 0.194598 | 0.066097
0.45 | 0.042446 | 0.123414 | 0.305857 | 0.394987 | 1.071024 | 0.191874 | 0.065626
0.5 | 0.035159 | 0.103564 | 0.274806 | 0.369646 | 1.026763 | 0.190617 | 0.065409
0.56234 | 0.027038 | 0.081091 | 0.236322 | 0.338318 | 0.969319 | 0.188869 | 0.065106
0.75 | 0.010181 | 0.032613 | 0.132763 | 0.251849 | 0.790231 | 0.18242 | 0.063983
1.0 | 0.001796 | 0.006438 | 0.045179 | 0.164176 | 0.560306 | 0.171164 | 0.062005
1.33352 | 8.1e-05 | 0.000354 | 0.005814 | 0.08995 | 0.308784 | 0.151783 | 0.058534
1.77828 | | 1e-06 | 0.000114 | 0.03286 | 0.12204 | 0.119388 | 0.052522
3.16228 | | | -0.0 | 2e-06 | 0.006114 | 0.019949 | 0.02605
5.62341 | | | | | 4.6e-05 | 0.000264 | 0.000564
10.0 | | | | | | |
17.78279 | | | | | | |

Table VIII: Transport (top) and S2 (bottom) results for the material energy density, e, for the thin Gaussian source constant Cv problem with x0 = 0.5, t0 = 10, and Cv0 = 0.03 GJ·cm^-3·keV^-1.

Transport:
x \ t | 0.1 | 0.31623 | 1.0 | 3.16228 | 10.0 | 31.6228 | 100.0
0.01 | 0.004825 | 0.044224 | 0.323377 | 1.4684 | 2.455993 | 1.518526 | 1.133064
0.1 | 0.004638 | 0.042571 | 0.313295 | 1.438115 | 2.446023 | 1.51794 | 1.132861
0.17783 | 0.004255 | 0.039172 | 0.292399 | 1.373348 | 2.424567 | 1.516657 | 1.132417
0.31623 | 0.003241 | 0.030112 | 0.235379 | 1.183116 | 2.359381 | 1.512586 | 1.131013
0.45 | 0.002154 | 0.020302 | 0.170708 | 0.944125 | 2.267966 | 1.506441 | 1.128901
0.5 | 0.001783 | 0.016913 | 0.147332 | 0.851718 | 2.227558 | 1.503576 | 1.12792
0.56234 | 0.001369 | 0.013111 | 0.120189 | 0.740141 | 2.172608 | 1.499564 | 1.126549
0.75 | 0.000513 | 0.005089 | 0.057328 | 0.458276 | 1.96946 | 1.484442 | 1.121422
1.0 | 8.9e-05 | 0.000949 | 0.016339 | 0.231272 | 1.546952 | 1.456735 | 1.112196
1.33352 | 4e-06 | 4.8e-05 | 0.002002 | 0.097311 | 0.889883 | 1.4042 | 1.095386
1.77828 | | | 5.5e-05 | 0.032461 | 0.400128 | 1.294119 | 1.064084
3.16228 | | | | 0.000322 | 0.042204 | 0.349552 | 0.857807
5.62341 | | | | | 0.001039 | 0.014856 | 0.054666
10.0 | | | | | | 8.8e-05 | 0.000336
17.78279 | | | | | | |

S2:
x \ t | 0.1 | 0.31623 | 1.0 | 3.16228 | 10.0 | 31.6228 | 100.0
0.01 | 0.004825 | 0.044206 | 0.317475 | 1.332131 | 2.309445 | 1.479441 | 1.120796
0.1 | 0.004638 | 0.042557 | 0.308253 | 1.31107 | 2.303385 | 1.478948 | 1.120606
0.17783 | 0.004255 | 0.039164 | 0.289032 | 1.265902 | 2.29029 | 1.47787 | 1.120189
0.31623 | 0.003241 | 0.030116 | 0.235788 | 1.131312 | 2.250065 | 1.474452 | 1.118869
0.45 | 0.002154 | 0.020312 | 0.173764 | 0.954656 | 2.192602 | 1.469301 | 1.116886
0.5 | 0.001783 | 0.016923 | 0.150839 | 0.882882 | 2.166882 | 1.466903 | 1.115965
0.56234 | 0.001369 | 0.013121 | 0.123828 | 0.79271 | 2.13172 | 1.463548 | 1.114678
0.75 | 0.000513 | 0.005092 | 0.059398 | 0.540414 | 2.00314 | 1.45095 | 1.109868
1.0 | 8.9e-05 | 0.000947 | 0.016015 | 0.293197 | 1.73921 | 1.428067 | 1.101228
1.33352 | 4e-06 | 4.7e-05 | 0.001515 | 0.113254 | 1.125017 | 1.385564 | 1.085536
1.77828 | | | 2e-05 | 0.022889 | 0.453017 | 1.302486 | 1.05652
3.16228 | | | | | 0.021263 | 0.375028 | 0.880385
5.62341 | | | | | 3.7e-05 | 0.00417 | 0.03231
10.0 | | | | | | 1e-06 | 1.4e-05
17.78279 | | | | | | |

VII Optically thick results

Here we include results for problems that we consider optically thick; by this we mean that the source width is far greater than a mean free path. We accomplish this by specifying σa = 800 cm^-1 and z0 < 1 cm (z0 is the dimensional x0). In this section, we include S2 and transport results for linearized Gaussian and square sources as well as a constant Cv Gaussian source problem. We do not include a constant Cv square source, since our method could not resolve the nonlinear, non-equilibrium, and very sharp wave that the square source induces. Since in an optically thick problem results are of interest only after many mean free times, we give results for τ ≈ 0.01, 0.1, and 1 ns. By these times, the uncollided source is negligible and any discontinuous wavefronts have decayed. For these reasons, the problems in this section do not employ an uncollided source or a moving mesh. On the solution plots in this section, we show the diffusion solution for the energy density (for the linearized problems) and the temperature (for the nonlinear problems). These are included to demonstrate the qualitative difference between transport and diffusion for thick problems and were calculated with a numerical non-flux-limited, non-equilibrium diffusion solver [23].

VII.A Su-Olson problem with a square source

To keep the dimensional spatial domain manageable, we set l = 1/800 in Eqs. (5) and (6). This makes the dimensional and nondimensional domains the same, but stiffens the system. We are again using a square source (Eq. (36)) with x0 = 0.5 and t0 = 0.0125. We forgo the use of an uncollided source since the evaluation times of interest are long after the uncollided solution has decayed to zero. For the mesh in this problem, only a static mesh was necessary for satisfactory convergence. We use an initialization like that outlined in Section VI.A, except that the initial width δx is not set to a small number but to a guess of the solution width at the evaluation time, and the edges never move. Essentially, the mesh is the same as the initial mesh for the thin square source but covering the entire domain. The Gauss-Legendre spacing of the edges has the effect of concentrating static edges around the source edge, which makes it more likely that the region where the wavefront will be is resolved. The initial guess for the solution width was important and was refined with each run as the number of spatial divisions increased. Since negative solutions are possible in our DG formulation, and are more likely to occur when there is a sharp wavefront, the temperature was calculated with T = sign(e)|e|^{1/4}. The solution plots for this problem (Figure 14) show that for the chosen times the solution is in local equilibrium. Unlike the selected thin square source solutions, where a discontinuous wave travelling at the wavespeed determines how fast the solution travels, here a wave resembling a nonlinear heat wave moves outwards while the source is on.
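The two details just described, the negative-safe temperature evaluation and the Gauss-Legendre edge spacing, can be sketched as follows. The helper names are hypothetical, and the edge construction is an illustration of the idea rather than our exact initialization routine.

```python
import numpy as np

def temperature(e):
    """Negative-safe temperature, T = sign(e)|e|**(1/4); the DG
    representation can dip slightly negative near sharp wavefronts."""
    return np.sign(e) * np.abs(e) ** 0.25

def legendre_spaced_edges(width, n_interior):
    """Static edges placed at Gauss-Legendre nodes scaled to the guessed
    solution width (an illustration of the initialization described
    above): the nodes cluster toward the ends of the interval,
    concentrating resolution where sharp features are expected."""
    nodes, _ = np.polynomial.legendre.leggauss(n_interior)
    return np.concatenate(([-width / 2.0],
                           nodes * width / 2.0,
                           [width / 2.0]))

edges = legendre_spaced_edges(width=2.0, n_interior=7)  # hypothetical sizes
```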
Geometric convergence, shown in Figures 15 and 16, is only possible because the nonsmooth portions of the scalar flux have decayed to zero and the solution is in equilibrium, which has the effect of smoothing the leading edge of the wavefront. We also take note of the similarity between the transport and S2 solutions, apparent in the solution plots and in Tables IX and X. This is expected for optically thick problems, since we saw the two solutions converge at long times once they have both come into equilibrium. The solution plots also show qualitative agreement between transport/S2 and diffusion.

VII.B Su-Olson problem with a Gaussian source

In order to provide consistent examples across optically thin and thick regimes, and as a trial run for the constant Cv thick Gaussian source of the next section, we include here an optically thick Gaussian source. For this and the nonlinear Gaussian source, we specify the length parameter x0 = 0.375. The source duration, t0, is still 0.0125, and l = 1/800. Once again, the source is given by Eq. (37) and we do not use an uncollided source. Like the thin Gaussian sources, a moving mesh is not necessary. Figure 17 shows that, like the optically thick square source, the solutions are in equilibrium during the selected time window. There is, however, no wavefront; the solution maintains Gaussian characteristics. Like the square source, the transport, diffusion, and S2 solutions are very similar. Geometric convergence of our standard DG method is shown in Figures 18 and 19. We note that in this problem and the Su-Olson square source, 128 spatial cells were required to achieve the desired rate of convergence, though the solution was smooth; this was due to the small length scales.

[Figure 14: S2 (left of x = 0) and full transport (right of x = 0) solutions for the optically thick Su-Olson square source problem with x0 = 0.5, t0 = 0.0125, at (a) t = 0.3, (b) t = 3, (c) t = 30. Solid lines are scalar flux, φ, and dashed lines are material energy density, e. Triangles are the diffusion solution for the energy density. On the scale of this figure, the solid and dashed lines are coincident.]

[Figure 15: Log-linear scaled average value of the solution expansion coefficients (found by Eqs. (44)) for the optically thick (σa = 800 cm^-1) Su-Olson square source problem where x0 = 0.5, t0 = 0.0125. The quadrature order for all results is S16. All results were calculated with a static mesh and standard source treatment. Panels: (a) radiation energy density, φ; (b) material energy density, e. Curves use 128 cells at t = 0.3, 3.0, and 30.0.]

[Figure 16: Log-linear scaled average value of the solution expansion coefficients (found by Eqs. (44)) for the optically thick (σa = 800 cm^-1) S2 Su-Olson square source problem where x0 = 0.5, t0 = 0.0125. All results were calculated with a static mesh and standard source treatment. Curves use 128 cells at t = 0.3, 3.0, and 30.0.]

Table IX: Transport (top) and S2 (bottom) results for the scalar flux, φ, for the thick square source Su-Olson problem with x0 = 0.5, t0 = 0.0125.

Transport:
x \ t | 0.3 | 3.0 | 30.0
0.0 | 4.999998 | 4.999998 | 4.999957
0.0579 | 4.999998 | 4.999998 | 4.999802
0.1158 | 4.999998 | 4.999998 | 4.998522
0.1737 | 4.999998 | 4.999998 | 4.991207
0.2316 | 4.999998 | 4.999998 | 4.959099
0.2895 | 4.999998 | 4.999998 | 4.850729
0.3474 | 4.999998 | 4.999957 | 4.569407
0.4053 | 4.999998 | 4.981594 | 4.007694
0.4632 | 4.997551 | 4.256673 | 3.144975
0.5211 | 0.141706 | 1.375262 | 2.125719
0.5789 | | 0.063798 | 1.200785
0.6368 | | 0.000274 | 0.552639
0.6947 | | | 0.203932
0.7526 | | | 0.05963
0.8105 | | | 0.013701
0.8684 | | | 0.002459
0.9263 | | | 0.000343
0.9842 | | | 3.7e-05
1.0421 | | | 3e-06
1.1 | | |

S2:
x \ t | 0.3 | 3.0 | 30.0
0.0 | 4.999999 | 5.0 | 4.999961
0.0579 | 4.999999 | 5.0 | 4.999807
0.1158 | 4.999999 | 5.0 | 4.998528
0.1737 | 4.999999 | 5.0 | 4.991218
0.2316 | 4.999999 | 5.0 | 4.959116
0.2895 | 4.999999 | 4.999999 | 4.850739
0.3474 | 4.999999 | 4.999961 | 4.56939
0.4053 | 4.999999 | 4.981698 | 4.007652
0.4632 | 4.997908 | 4.256289 | 3.144948
0.5211 | 0.141029 | 1.375703 | 2.125739
0.5789 | | 0.063676 | 1.200832
0.6368 | | 0.000265 | 0.552668
0.6947 | | | 0.203932
0.7526 | | | 0.059617
0.8105 | | | 0.013692
0.8684 | | | 0.002455
0.9263 | | | 0.000342
0.9842 | | | 3.7e-05
1.0421 | | | 3e-06
1.1 | | |

Table X: Transport (top) and S2 (bottom) results for the material energy density, e, for the thick square source Su-Olson problem with x0 = 0.5, t0 = 0.0125. Convergence results for these answers are plotted in Figures 15 and 16.

Transport:
x \ t | 0.3 | 3.0 | 30.0
0.0 | 4.999998 | 4.999998 | 4.999957
0.0579 | 4.999998 | 4.999998 | 4.999802
0.1158 | 4.999998 | 4.999998 | 4.998522
0.1737 | 4.999998 | 4.999998 | 4.991208
0.2316 | 4.999998 | 4.999998 | 4.959105
0.2895 | 4.999998 | 4.999998 | 4.850742
0.3474 | 4.999998 | 4.999957 | 4.56943
0.4053 | 4.999998 | 4.981624 | 4.007719
0.4632 | 4.997609 | 4.256925 | 3.144988
0.5211 | 0.140405 | 1.375054 | 2.125712
0.5789 | | 0.063722 | 1.200762
0.6368 | | 0.000273 | 0.552615
0.6947 | | | 0.203916
0.7526 | | | 0.059622
0.8105 | | | 0.013699
0.8684 | | | 0.002458
0.9263 | | | 0.000343
0.9842 | | | 3.7e-05
1.0421 | | | 3e-06
1.1 | | |

S2:
x \ t | 0.3 | 3.0 | 30.0
0.0 | 4.999999 | 5.0 | 4.999961
0.0579 | 4.999999 | 5.0 | 4.999807
0.1158 | 4.999999 | 5.0 | 4.998529
0.1737 | 4.999999 | 5.0 | 4.99122
0.2316 | 4.999999 | 5.0 | 4.959121
0.2895 | 4.999999 | 4.999999 | 4.850752
0.3474 | 4.999999 | 4.999961 | 4.569413
0.4053 | 4.999999 | 4.981729 | 4.007677
0.4632 | 4.997962 | 4.256541 | 3.144961
0.5211 | 0.139713 | 1.375495 | 2.125732
0.5789 | | 0.063599 | 1.200809
0.6368 | | 0.000264 | 0.552644
0.6947 | | | 0.203916
0.7526 | | | 0.05961
0.8105 | | | 0.013689
0.8684 | | | 0.002455
0.9263 | | | 0.000342
0.9842 | | | 3.6e-05
1.0421 | | | 3e-06
1.1 | | |

[Figure 17: S2 (left of x = 0) and full transport (right of x = 0) solutions for the optically thick Su-Olson Gaussian source problem with x0 = 0.375, t0 = 0.0125, at (a) t = 0.3, (b) t = 3, (c) t = 30. Solid lines are scalar flux, φ, and dashed lines are material energy density, e. Triangles are the diffusion solution for the energy density. On the scale of this figure, the solid and dashed lines are coincident.]

[Figure 18: Log-linear scaled average value of the solution expansion coefficients (found by Eqs. (44)) for the optically thick (σa = 800 cm^-1) Su-Olson Gaussian source problem where x0 = 0.375, t0 = 0.0125. The quadrature order for all results is S16. All results were calculated with a static mesh and standard source treatment. Curves use 128 cells at t = 0.3, 3.0, and 30.0.]

[Figure 19: Log-linear scaled average value of the solution expansion coefficients (found by Eqs. (44)) for the optically thick (σa = 800 cm^-1) S2 Su-Olson Gaussian source problem where x0 = 0.375, t0 = 0.0125. All results were calculated with a static mesh and standard source treatment. Curves use 128 cells at t = 0.3, 3.0, and 30.0.]

VII.C Constant Cv Gaussian problem

Finally, we provide results for the constant Cv optically thick problem with a Gaussian source. Like the linear version of this problem, x0 = 0.375, t0 = 0.0125, and l = 1/800. We choose our constant opacity to be the same as we used for the optically thin case, and the constant heat capacity Cv0 = 0.03 GJ·cm^-3·keV^-1 (which is the same as for the optically thin problems). The source is again given by Eq. (37), and the uncollided solution is not used.

Table XI: Transport (top) and S2 (bottom) results for the scalar flux, φ, for the thick Gaussian source Su-Olson problem with x0 = 0.375, t0 = 0.0125.

Transport:
x \ t | 0.3 | 3.0 | 30.0
0.0 | 4.99565 | 4.956221 | 4.607285
0.0842 | 4.750452 | 4.71669 | 4.414224
0.1684 | 4.084736 | 4.065342 | 3.882236
0.2526 | 3.175988 | 3.173439 | 3.13421
0.3368 | 2.232954 | 2.243553 | 2.322698
0.4211 | 1.418754 | 1.435689 | 1.579266
0.5053 | 0.815508 | 0.832457 | 0.986082
0.5895 | 0.423872 | 0.437156 | 0.565183
0.6737 | 0.199217 | 0.207914 | 0.297361
0.7579 | 0.084665 | 0.089558 | 0.143615
0.8421 | 0.032536 | 0.034938 | 0.063669
0.9263 | 0.011306 | 0.012344 | 0.025911
1.0105 | 0.003552 | 0.00395 | 0.009679
1.0947 | 0.001009 | 0.001144 | 0.003319
1.1789 | 0.000259 | 0.0003 | 0.001044
1.2632 | 6e-05 | 7.1e-05 | 0.000301
1.3474 | 1.2e-05 | 1.5e-05 | 7.9e-05
1.4316 | 2e-06 | 2e-06 | 1.9e-05
1.5158 | | | 4e-06
1.6 | | |

S2:
x \ t | 0.3 | 3.0 | 30.0
0.0 | 4.995653 | 4.956229 | 4.607284
0.0842 | 4.750456 | 4.716697 | 4.414223
0.1684 | 4.084739 | 4.065349 | 3.882238
0.2526 | 3.175991 | 3.173444 | 3.134214
0.3368 | 2.232956 | 2.243557 | 2.322703
0.4211 | 1.418755 | 1.435691 | 1.57927
0.5053 | 0.815508 | 0.832459 | 0.986085
0.5895 | 0.423872 | 0.437157 | 0.565185
0.6737 | 0.199218 | 0.207914 | 0.297362
0.7579 | 0.084665 | 0.089558 | 0.143614
0.8421 | 0.032536 | 0.034938 | 0.063669
0.9263 | 0.011306 | 0.012344 | 0.02591
1.0105 | 0.003552 | 0.00395 | 0.009679
1.0947 | 0.001009 | 0.001144 | 0.003319
1.1789 | 0.000259 | 0.0003 | 0.001044
1.2632 | 6e-05 | 7.1e-05 | 0.000301
1.3474 | 1.2e-05 | 1.5e-05 | 7.9e-05
1.4316 | 2e-06 | 2e-06 | 1.9e-05
1.5158 | | | 4e-06
1.6 | | |
As with the linear thick Gaussian source problem, a static mesh is employed. Although the Gaussian profile is slightly misshapen in the solution plots (Figure 20) when compared to the linearized Gaussian source problem results, the solution is smooth like the linearized problem, and spectral convergence is observed in Figures 21 and 22. The scalar flux and material energy density are given in Tables XIII and XIV. As with the linearized Gaussian, the diffusion approximation is qualitatively correct.

[Figure 20: S2 (left of x = 0) and full transport (right of x = 0) solutions for the optically thick constant Cv Gaussian source problem with x0 = 0.375, t0 = 0.0125, at (a) t = 0.3, (b) t = 3, (c) t = 30. Solid lines are radiation temperature, φ^{1/4}, and dashed lines are temperature, T. Triangles are the diffusion solution for the temperature. On the scale of this figure, the solid and dashed lines are coincident.]

[Figure 21: Log-linear scaled average value of the solution expansion coefficients (found by Eqs. (44)) for the optically thick (σa = 800 cm^-1) constant Cv Gaussian source problem where x0 = 0.375, t0 = 0.0125. The quadrature order for all results is S16. All results were calculated with a static mesh and standard source treatment. Curves use 128 cells at t = 0.3, 3.0, and 30.0.]

[Figure 22: Log-linear scaled average value of the solution expansion coefficients (found by Eqs. (44)) for the optically thick (σa = 800 cm^-1) S2 constant Cv Gaussian problem where x0 = 0.375, t0 = 0.0125. All results were calculated with a static mesh and standard source treatment. Curves use 128 cells at t = 0.3, 3.0, and 30.0.]

Table XII: Transport (top) and S2 (bottom) results for the material energy density, e, for the thick Gaussian source Su-Olson problem with x0 = 0.375, t0 = 0.0125.

Transport:
x \ t | 0.3 | 3.0 | 30.0
0.0 | 4.995668 | 4.956239 | 4.6073
0.0842 | 4.750468 | 4.716705 | 4.414236
0.1684 | 4.084745 | 4.065351 | 3.882244
0.2526 | 3.17599 | 3.17344 | 3.134212
0.3368 | 2.232949 | 2.243548 | 2.322695
0.4211 | 1.418746 | 1.435681 | 1.57926
0.5053 | 0.8155 | 0.832449 | 0.986075
0.5895 | 0.423866 | 0.43715 | 0.565178
0.6737 | 0.199213 | 0.20791 | 0.297357
0.7579 | 0.084663 | 0.089556 | 0.143612
0.8421 | 0.032535 | 0.034937 | 0.063668
0.9263 | 0.011305 | 0.012343 | 0.02591
1.0105 | 0.003552 | 0.003949 | 0.009679
1.0947 | 0.001009 | 0.001144 | 0.003319
1.1789 | 0.000259 | 0.0003 | 0.001044
1.2632 | 6e-05 | 7.1e-05 | 0.000301
1.3474 | 1.2e-05 | 1.5e-05 | 7.9e-05
1.4316 | 2e-06 | 2e-06 | 1.9e-05
1.5158 | | | 4e-06
1.6 | | |

S2:
x \ t | 0.3 | 3.0 | 30.0
0.0 | 4.995672 | 4.956247 | 4.607298
0.0842 | 4.750472 | 4.716712 | 4.414236
0.1684 | 4.084748 | 4.065358 | 3.882246
0.2526 | 3.175992 | 3.173446 | 3.134216
0.3368 | 2.232951 | 2.243552 | 2.3227
0.4211 | 1.418747 | 1.435684 | 1.579264
0.5053 | 0.8155 | 0.832451 | 0.986078
0.5895 | 0.423866 | 0.437151 | 0.565179
0.6737 | 0.199214 | 0.20791 | 0.297357
0.7579 | 0.084663 | 0.089556 | 0.143611
0.8421 | 0.032535 | 0.034937 | 0.063667
0.9263 | 0.011305 | 0.012343 | 0.025909
1.0105 | 0.003552 | 0.003949 | 0.009679
1.0947 | 0.001009 | 0.001144 | 0.003319
1.1789 | 0.000259 | 0.0003 | 0.001044
1.2632 | 6e-05 | 7.1e-05 | 0.000301
1.3474 | 1.2e-05 | 1.5e-05 | 7.9e-05
1.4316 | 2e-06 | 2e-06 | 1.9e-05
1.5158 | | | 4e-06
1.6 | | |

Table XIII: Transport (top) and S2 (bottom) results for the scalar flux, φ, for the thick Gaussian source constant Cv problem with x0 = 0.375, t0 = 0.0125, and Cv0 = 0.03 GJ·cm^-3·keV^-1.

Transport:
x \ t | 0.3 | 3.0 | 30.0
0.0 | 6.494763 | 6.374101 | 5.426524
0.0789 | 6.11596 | 6.012864 | 5.186732
0.1579 | 5.085009 | 5.026915 | 4.518059
0.2368 | 3.682628 | 3.677866 | 3.56146
0.3158 | 2.248727 | 2.285336 | 2.499432
0.3947 | 1.081952 | 1.134228 | 1.513531
0.4737 | 0.352326 | 0.389958 | 0.733651
0.5526 | 0.060476 | 0.069031 | 0.230211
0.6316 | 0.00502 | 0.005255 | 0.01702
0.7105 | 0.000253 | 0.000255 | 0.000274
0.7895 | 8e-06 | 8e-06 | 8e-06
0.8684 | | |

S2:
x \ t | 0.3 | 3.0 | 30.0
0.0 | 6.494762 | 6.374098 | 5.42651
0.0789 | 6.115958 | 6.012861 | 5.18672
0.1579 | 5.085008 | 5.026914 | 4.518052
0.2368 | 3.682627 | 3.677866 | 3.561461
0.3158 | 2.248727 | 2.285337 | 2.499437
0.3947 | 1.081952 | 1.134229 | 1.513539
0.4737 | 0.352326 | 0.389959 | 0.733658
0.5526 | 0.060476 | 0.069032 | 0.230216
0.6316 | 0.00502 | 0.005255 | 0.017021
0.7105 | 0.000253 | 0.000255 | 0.000274
0.7895 | 8e-06 | 8e-06 | 8e-06
0.8684 | | |

Table XIV: Transport (top) and S2 (bottom) results for the material energy density, e, for the thick Gaussian source constant Cv problem with x0 = 0.375, t0 = 0.0125, and Cv0 = 0.03 GJ·cm^-3·keV^-1.

Transport:
x \ t | 0.3 | 3.0 | 30.0
0.0 | 3.490028 | 3.473704 | 3.33671
0.0737 | 3.444608 | 3.429805 | 3.303993
0.1474 | 3.309364 | 3.299069 | 3.20642
0.2211 | 3.086882 | 3.083965 | 3.04554
0.2947 | 2.780372 | 2.787696 | 2.82358
0.3684 | 2.389727 | 2.410536 | 2.540796
0.4421 | 1.913502 | 1.950669 | 2.194284
0.5158 | 1.363511 | 1.408883 | 1.769126
0.5895 | 0.82657 | 0.846776 | 1.213573
0.6632 | 0.436854 | 0.439279 | 0.485835
0.7368 | 0.210529 | 0.210694 | 0.212429
0.8105 | 0.093598 | 0.093606 | 0.093685
0.8842 | 0.038508 | 0.038508 | 0.038511
0.9579 | 0.014665 | 0.014665 | 0.014665
1.0316 | 0.005169 | 0.005169 | 0.005169
1.1053 | 0.001686 | 0.001686 | 0.001686
1.1789 | 0.00051 | 0.00051 | 0.00051
1.2526 | 0.000142 | 0.000142 | 0.000142
1.3263 | 3.6e-05 | 3.6e-05 | 3.6e-05
1.4 | 8e-06 | 8e-06 | 8e-06

S2:
x \ t | 0.3 | 3.0 | 30.0
0.0 | 3.490028 | 3.473704 | 3.336708
0.0737 | 3.444608 | 3.429805 | 3.303991
0.1474 | 3.309364 | 3.299069 | 3.206419
0.2211 | 3.086882 | 3.083965 | 3.04554
0.2947 | 2.780372 | 2.787696 | 2.823581
0.3684 | 2.389727 | 2.410537 | 2.540799
0.4421 | 1.913502 | 1.95067 | 2.194288
0.5158 | 1.363511 | 1.408884 | 1.769132
0.5895 | 0.82657 | 0.846774 | 1.213586
0.6632 | 0.436853 | 0.439278 | 0.485758
0.7368 | 0.210529 | 0.210693 | 0.212427
0.8105 | 0.093598 | 0.093606 | 0.093684
0.8842 | 0.038508 | 0.038508 | 0.038511
0.9579 | 0.014665 | 0.014665 | 0.014665
1.0316 | 0.005169 | 0.005169 | 0.005169
1.1053 | 0.001686 | 0.001686 | 0.001686
1.1789 | 0.00051 | 0.00051 | 0.00051
1.2526 | 0.000142 | 0.000142 | 0.000142
1.3263 | 3.6e-05 | 3.6e-05 | 3.6e-05
1.4 | 8e-06 | 8e-06 | 8e-06

VIII" + }, + { + "url": "http://arxiv.org/abs/2206.13445v2", + "title": "Accurate solutions to time dependent transport problems with a moving mesh and exact uncollided source treatment", + "abstract": "For the purpose of finding benchmark quality solutions to time dependent Sn\ntransport problems, we develop a numerical method in a Discontinuous Galerkin\n(DG) framework that utilizes time dependent cell edges, which we call a moving\nmesh, and an uncollided source treatment. The DG method for discretizing space\nis a powerful solution technique on smooth problems and is robust on non-smooth\nproblems.
In order to realize the potential of the DG method to spectrally\nresolve smooth problems, our moving mesh and uncollided source treatment is\ndevised to circumvent discontinuities in the solution or the first derivative\nof the solutions that are admitted in transport calculations. The resulting\nmethod achieves spectral convergence on smooth problems, like a standard DG\nimplementation. When applied to problems with nonsmooth sources that induce\ndiscontinuities, our moving mesh, uncollided source method returns a\nsignificantly more accurate solution than the standard DG method. On problems\nwith smooth sources, we observe spectral convergence even in problems with wave\nfronts. In problems where the angular flux is inherently non-smooth, as in\nGanapol's (2001) well known plane pulse benchmark, we do not observe an\nelevated order of accuracy when compared with static meshes, but there is a\nreduction in error that is nearly three orders of magnitude.", "authors": "William Bennett, Ryan G. McClarren", "published": "2022-06-27", "updated": "2022-09-13", "primary_cat": "cs.CE", "cats": [ "cs.CE" ], "main_content": "Introduction

Implementations of the Discontinuous Galerkin method for spatial discretization to solve the time dependent Sn neutral particle transport equation must reckon with the inevitable nonsmoothness due to the finite number of wave speeds, akin to contact discontinuities in fluid equations. Either the method will fail to achieve high-order convergence, or computational scientists must find a way to deal with the inevitable discontinuities that arise. This is not a trivial undertaking. Different discontinuities are possible in the solution and the first derivative of the angular flux, and in the solution and derivative of the scalar flux. Moreover, these discontinuities are time dependent. Ganapol's plane pulse benchmark solution [1], ubiquitous in the transport literature, is a good example of the behavior of nonsmooth regions in transport solutions. The so-called uncollided flux, which refers to particles that have not collided with other particles since being emitted from the source, is more structured and nonsmooth. In this case, the uncollided angular flux is a decaying delta function in each angular direction traveling away from the initial pulse in the direction in which it was emitted. The time decay is a result of scattering, which has an overall smoothing effect on the solution. The scalar flux solution for this problem is smooth everywhere except for a travelling wavefront moving out from the origin at the particle speed. Other source configurations also exhibit the behaviors shown by the Ganapol problem: the uncollided flux is the cause of discontinuities, scattering smooths the solution over time, and the angular flux discontinuities can coalesce into discontinuities in the scalar flux. Since the Ganapol problem is a Green's function for the time dependent transport equation, uncollided solutions for other sources can be understood as superpositions of this uncollided solution. This leads to another insight: if the uncollided solution is known, then the location of any discontinuities can be determined at any time. Also, we see from this problem that the uncollided flux in any configuration will be the most "unsmooth" when compared to the flux from particles that have experienced collisions.
Therefore, finding the uncollided flux has a twofold advantage: first, it gives the location of discontinuities, and second, it gives the least smooth part of the solution. Knowing the location of discontinuities presents the opportunity to resolve them with mesh edges, and knowing the uncollided solution presents the possibility of significantly smoothing the system being solved. This is the motivation for our method: a moving mesh to resolve discontinuities and an uncollided source treatment to reduce nonsmoothness in the system being solved. It is necessary to note that, as described in the plane pulse example, the discontinuities in the angular flux are responsible for the less than optimal convergence of a DG method. Tracking and resolving each one of these discontinuities would require a different mesh defined for each angle in the Sn discretization. Since this is unrealistic, especially for solutions with large numbers of angles, we choose to only attempt to resolve nonsmoothness in the scalar flux and abandon the hubristic quest for a spectrally convergent method on fundamentally nonsmooth problems. The method we propose of time dependent cell edges is similar to methods used in the field of Computational Fluid Dynamics (CFD). For example, [2] solves an optimization problem to find the location of discontinuities, and [3] uses shock fitting to achieve high-order accuracy solutions. The principles in those works are the same: align non-smooth features with the mesh so that highly accurate and efficient solutions can be obtained. Since our moving mesh is only meant to make the problem smoother inside mesh elements, and not to resolve all discontinuities, we employ an uncollided source treatment to make the problem smoother still. It has already been mentioned that the solution can be split into collided and uncollided parts. This decomposition of the solution can be extended to defining the flux based on collisions: there is an uncollided flux (i.e., the zeroth collided flux), a first collided flux, a second, and so on. This is called Multiple Flux Decomposition and is used by numerical methods to distribute computational resources more efficiently (see [4, 5, 6]). Our implementation involves using the analytic solution for the uncollided flux as a source term to solve for the remaining collided flux, and adding the uncollided and collided solutions at the final step. We adopt the uncollided flux solutions from [7]. In order to verify the effectiveness of the moving mesh and uncollided source methods in solving smooth and nonsmooth problems, we chose a set of representative sources with varying degrees of solution nonsmoothness. Recently, we presented calculations for these sources in [7]. Also, to quantify the effectiveness of each method on its own and in combination, we implement four different methods. The first has both an uncollided source and a moving mesh. One is a standard DG implementation. The final two test each method individually: a static mesh with an uncollided source, and a moving mesh with a standard source treatment. Conveniently, the latter three methods can be easily implemented with small modifications to the system that we derive for the moving mesh, uncollided source case.
Given that there are semi-analytic results for the problems we present here, it may seem that computing highly accurate numerical solutions to these problems is to carry coal to Newcastle (that is, to perform a pointless action). Our ultimate goal is to develop benchmark quality solutions to nonlinear radiative transfer without utilizing linearizations (as in the specific form of the heat capacity widely used [8, 9, 10, 11, 12, 13]) or resorting to the equilibrium diffusion limit, as in the Marshak wave [14, 15, 16, 17, 18, 19, 20]. By demonstrating that our approach is sound on problems of linear particle transport, we build a foundation for confidence in solutions to nonlinear problems, where there are no known semi-analytic solutions, in future work. The uncollided solution method is presented in Section 2. The following section contains a derivation of our DG method with moving mesh edges. The implementation, a short description of our convergence analysis methods, and a discussion of the results follow in Sections 4, 5, and 6, respectively.

2 The transport model and uncollided solution

We begin with the single-speed neutral particle transport equation in slab geometry with isotropic scattering [1]:

\left(\frac{\partial}{\partial t} + \mu\frac{\partial}{\partial x} + 1\right)\psi = \frac{c}{2}\,\phi + \frac{1}{2}\,S(x,t,\mu). \qquad (1)

Here, ψ(x, t, µ) is the angular flux of particles, and φ(x, t) = \int_{-1}^{1} d\mu'\,\psi(x,t,\mu') is the scalar flux, given by the integral of the angular flux over all directions. The cosine of the angle between the direction of travel and the x-axis is represented by µ ∈ [−1, 1]. S is a source and c is the scattering ratio, c = σs/σt, where σs is the scattering cross section and σt is the total (absorption plus scattering) cross section. The coordinates x and t are measured in units of mean free path lengths and mean free times, respectively.

We represent the angular flux as the sum of an uncollided and a collided angular flux, ψ ≡ ψu + ψc. This allows us to obtain from Eq. (1) an expression for the particles that have not collided after being emitted by the source (i.e., the uncollided flux, ψu),

\left(\frac{\partial}{\partial t} + \mu\frac{\partial}{\partial x} + 1\right)\psi_u = \frac{1}{2}\,S(x,t,\mu). \qquad (2)

It is possible to solve Eq. (2) analytically and recover an expression for the uncollided flux in many cases. This is done by integrating the Green's solution provided by Ganapol [1] over an arbitrary source. That uncollided Green's function solution, \psi_u = \frac{1}{2}\exp(-t)\,\delta\!\left(\mu - \frac{x}{t}\right), is a piecewise-defined function that has a non-zero part expanding out from the origin at speed µ. We adopt the uncollided solutions presented in [7] for our chosen source configurations: a plane pulse, a square pulse, a Gaussian pulse, a square source, and a Gaussian source. With an expression for the uncollided scalar flux, the equation for the collided flux has the same form as the original transport equation,

\left(\frac{\partial}{\partial t} + \mu\frac{\partial}{\partial x} + 1\right)\psi_c(x,t,\mu) = \frac{c}{2}\int_{-1}^{1} d\mu'\,\left(\psi_c(x,t,\mu') + \psi_u(x,t,\mu')\right) = \frac{c}{2}\int_{-1}^{1} d\mu'\,\psi_c(x,t,\mu') + S_u(x,t). \qquad (3)

Here the uncollided source is given by

S_u(x,t) = \frac{c}{2}\int_{-1}^{1} d\mu'\,\psi_u(x,t,\mu') = \frac{c}{2}\,\phi_u(x,t). \qquad (4)
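As a concrete illustration of this split, the short Python sketch below evaluates the uncollided scalar flux for the plane pulse (quoted later as Eq. (35)) and forms the corresponding collided-equation source Su of Eq. (4). It is a minimal sketch; the function names are ours and are not part of the code described in Section 4.

import numpy as np

def phi_u_plane_pulse(x, t):
    # Uncollided scalar flux for a plane pulse: e^{-t}/(2t) inside the wavefront |x| <= t.
    eta = np.abs(x) / t
    return np.where(eta <= 1.0, np.exp(-t) / (2.0 * t), 0.0)

def collided_source(x, t, c=1.0):
    # Source for the collided equation, Eq. (4): S_u = (c/2) * phi_u.
    return 0.5 * c * phi_u_plane_pulse(x, t)

x = np.linspace(-2.0, 2.0, 9)
print(collided_source(x, t=1.0))  # nonzero only where |x| <= t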
To find the full solution, it is necessary to sum the uncollided and collided solutions once the collided solution has been calculated. Since Eq. (1) and Eq. (3) have the same form, a numerical method that handles arbitrary sources can easily be applied to both. One of the objectives of this work is to demonstrate the efficiency improvement of analytically representing the more discontinuous, anisotropic uncollided flux and using that to obtain benchmark-quality solutions for the total scalar flux.

3 Moving mesh DG spatial discretization

The second ingredient in our benchmark solutions is the use of a moving mesh. We develop a Discontinuous Galerkin (DG) scheme to solve equations of the form of Eq. (1). First, we use the method of discrete ordinates to approximate Eq. (1) as a system of coupled partial differential equations,

\left(\frac{\partial}{\partial t} + \mu_l\frac{\partial}{\partial x} + 1\right)\psi_l = \frac{c}{2}\sum_{l'=1}^{N} w_{l'}\,\psi_{l'} + \frac{1}{2}\,S(x,t,\mu) \quad \text{for } l = 1 \ldots N, \qquad (5)

where the scalar flux is a weighted sum of the angular flux, \phi \approx \sum_{l'=1}^{N} w_{l'}\psi_{l'}. N is the number of discrete directions at which the angular flux is evaluated; µl is a discrete angle and ψl(x, t) is the angular flux in the direction µl; the quadrature weights are wl. We use standard Gauss-Lobatto quadrature rules.

We define our solution domain on a mesh of K non-overlapping cells, each with time dependent edges xL(k, t) and xR(k, t). The left edge of the kth cell is always at the same position as the right edge of the (k − 1)st cell, and so on. We define a new variable z that maps each cell to [−1, 1],

z(k,t) \equiv \frac{x_L(k,t) + x_R(k,t) - 2x}{x_L(k,t) - x_R(k,t)}, \quad k = 1 \ldots K.

We define orthonormal basis functions in z on each cell,

B_{i,k}(z) = \frac{\sqrt{2i+1}}{\sqrt{x_R(k,t) - x_L(k,t)}}\,P_i(z), \qquad (6)

where Pi is the ith Legendre polynomial. Now we may define the weak solution on each cell for every angle as a sum of basis functions with time dependent coefficients,

\psi_l(x,t) \approx \sum_{j=0}^{M} B_{j,k}(z)\,u^l_{k,j}. \qquad (7)

We obtain the weak formulation of Eq. (5) by multiplying by a basis function and integrating over cell k,

\underbrace{\int_{x_L(k,t)}^{x_R(k,t)} dx\, B_{i,k}(z)\,\frac{\partial\psi_l}{\partial t}}_{\mathrm{I}} + \underbrace{\mu_l \int_{x_L(k,t)}^{x_R(k,t)} dx\, B_{i,k}(z)\,\frac{\partial\psi_l}{\partial x}}_{\mathrm{II}} + \int_{x_L(k,t)}^{x_R(k,t)} dx\, B_{i,k}(z)\,\psi_l = \int_{x_L(k,t)}^{x_R(k,t)} dx\, B_{i,k}(z)\left(\frac{c}{2}\sum_{l'=1}^{N} w_{l'}\psi_{l'} + \frac{1}{2}S(x,t,\mu)\right). \qquad (8)

In term II, integration by parts is used to push the derivative onto the basis function, as is typical with the DG method. Since the integration domain in term I is time dependent, we invoke a special case of the Reynolds Transport Theorem [21] to write the total derivative as

\frac{d}{dt}\int_{x_L(k,t)}^{x_R(k,t)} dx\,\psi_l(x,t)\,B_{i,k}(z) = \int_{x_L(k,t)}^{x_R(k,t)} dx\left(\frac{\partial\psi_l}{\partial t}B_{i,k}(z) + \psi_l\frac{\partial B_{i,k}(z)}{\partial t}\right) + \frac{dx_R(k,t)}{dt}\,\psi_l(x_R,t)\,B_{i,k}(z{=}1) - \frac{dx_L(k,t)}{dt}\,\psi_l(x_L,t)\,B_{i,k}(z{=}{-}1). \qquad (9)
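As a quick numerical check of Eqs. (6) and (7), the sketch below constructs the mapped coordinate z and the orthonormal basis on a single cell and verifies orthonormality by Gauss-Legendre quadrature. The helper names are ours, chosen for illustration.

import numpy as np
from scipy.special import eval_legendre

def z_of_x(x, xL, xR):
    # Map the cell [xL, xR] to the reference coordinate z in [-1, 1].
    return (xL + xR - 2.0 * x) / (xL - xR)

def basis(i, x, xL, xR):
    # Orthonormal Legendre basis of Eq. (6): B_i = sqrt(2i+1)/sqrt(xR - xL) * P_i(z).
    return np.sqrt(2 * i + 1) / np.sqrt(xR - xL) * eval_legendre(i, z_of_x(x, xL, xR))

# Check <B_i, B_j> = delta_ij on a sample cell.
xL, xR = 0.3, 1.1
nodes, weights = np.polynomial.legendre.leggauss(12)
x = 0.5 * (xR - xL) * nodes + 0.5 * (xL + xR)  # quadrature nodes mapped to the cell
w = 0.5 * (xR - xL) * weights
for i in range(3):
    print([round(np.sum(w * basis(i, x, xL, xR) * basis(j, x, xL, xR)), 12) for j in range(3)])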
Solving Eq. (9) for term I and substituting into Eq. (8), with term II integrated by parts, we obtain

\frac{d}{dt}\int_{x_L(k,t)}^{x_R(k,t)} dx\,\psi_l B_{i,k}(z) + \underbrace{\frac{dx_L(k,t)}{dt}\psi_l(x_L,t)B_{i,k}(z{=}{-}1) - \frac{dx_R(k,t)}{dt}\psi_l(x_R,t)B_{i,k}(z{=}1)}_{\mathrm{III}} - \int_{x_L(k,t)}^{x_R(k,t)} dx\,\psi_l\frac{dB_{i,k}(z)}{dt} + \underbrace{\mu_l\,\psi_l B_{i,k}(z)\Big|_{x_L(k,t)}^{x_R(k,t)}}_{\mathrm{IV}} - \mu_l\int_{x_L(k,t)}^{x_R(k,t)} dx\,\psi_l\frac{dB_{i,k}(z)}{dx} + \int_{x_L(k,t)}^{x_R(k,t)} dx\,\psi_l B_{i,k}(z) = \int_{x_L(k,t)}^{x_R(k,t)} dx\, B_{i,k}(z)\left(\frac{c}{2}\sum_{l'=1}^{N} w_{l'}\psi_{l'} + \frac{1}{2}S(x,t,\mu)\right). \qquad (10)

Terms III and IV both involve evaluating the solution at the edges of the cell and are combined to create a numerical flux (not to be confused with the scalar flux or the angular flux) term that governs the flow of information based on the speed of the particles relative to the mesh, (LU^l)^{surf}. Then, substituting Eq. (7) into Eq. (10) and exploiting the orthonormality of the chosen basis functions to simplify the mass matrix and the scalar flux term, we obtain

\frac{dU^l}{dt} - G\,U^l + (LU^l)^{\mathrm{surf}} - \mu_l L\,U^l + U^l = \frac{c}{2}\sum_{l'=1}^{N} w_{l'}\,U^{l'} + \frac{1}{2}\,Q, \qquad (11)

where the time dependent solution vector is U^{l,k} = [u^l_{k,0}, u^l_{k,1}, \ldots, u^l_{k,M}]^T and M + 1 is the number of basis functions. We also define

L_{i,j} = \int_{x_L}^{x_R} dx\, B_{j,k}(z)\,\frac{dB_{i,k}(z)}{dx}, \qquad (12)

G_{i,j} = \int_{x_L}^{x_R} dx\, B_{j,k}(z)\,\frac{dB_{i,k}(z)}{dt}, \qquad (13)

Q_i = \int_{x_L}^{x_R} dx\, B_{i,k}(z)\,S(x,t,\mu), \qquad (14)

and

(LU)^{\mathrm{surf}}_i = \left(\mu_l - \frac{dx_R}{dt}\right) B_{i,k}(z{=}1)\,\psi_{l+} - \left(\mu_l - \frac{dx_L}{dt}\right) B_{i,k}(z{=}{-}1)\,\psi_{l-}. \qquad (15)

ψ_{l+} and ψ_{l−} are found by evaluating Eq. (7) with an upwinding scheme relative to the mesh motion at the right and left cell edges, respectively. The initial condition is found from

u^l_{k,j} = \int_{x_L(k,0)}^{x_R(k,0)} dx\, B_{j,k}(z)\,\psi(x, t{=}0, \mu_l). \qquad (16)

Equation (11) is a system of coupled ordinary differential equations for the solution in a given cell that requires a time integration algorithm to update. To capture particles traveling with the wavefront, a Gauss-Lobatto quadrature scheme that includes the endpoints [−1, 1] was used to calculate the angles and the weights [22]; these we calculated with the Python package quadpy [23]. This scheme easily handles arbitrary sources, which is useful since the uncollided solutions we use as source terms are usually complicated functions of space and time. To apply this scheme on a moving mesh, it is necessary to specify the velocities of the cell edges. We detail our approach to this in Section 4.

3.1 M = 1 example equations

To illustrate the method in a more tangible way, we show the method for M = 1. With \dot{x}_L \equiv dx_L/dt and \dot{x}_R \equiv dx_R/dt, this choice makes Eq. (11) for an angle l and cell k take the form

\frac{d}{dt}\begin{pmatrix} u^l_{k,0} \\ u^l_{k,1} \end{pmatrix} - \underbrace{\begin{pmatrix} -\frac{\dot{x}_R - \dot{x}_L}{2(x_R - x_L)} & 0 \\ \frac{\sqrt{3}\,(\dot{x}_L + \dot{x}_R)}{x_L - x_R} & \frac{3\,(\dot{x}_R - \dot{x}_L)}{2(x_L - x_R)} \end{pmatrix}}_{G} \begin{pmatrix} u^l_{k,0} \\ u^l_{k,1} \end{pmatrix} - \mu_l \underbrace{\begin{pmatrix} 0 & 0 \\ -\frac{2\sqrt{3}}{x_L - x_R} & 0 \end{pmatrix}}_{L} \begin{pmatrix} u^l_{k,0} \\ u^l_{k,1} \end{pmatrix} + \underbrace{\begin{pmatrix} u^l_{k,0} \\ u^l_{k,1} \end{pmatrix}}_{U^l} + \underbrace{\frac{1}{\sqrt{x_R - x_L}}\begin{pmatrix} (\mu_l - \dot{x}_R)\,\psi_{l+} - (\mu_l - \dot{x}_L)\,\psi_{l-} \\ \sqrt{3}\left[(\mu_l - \dot{x}_R)\,\psi_{l+} + (\mu_l - \dot{x}_L)\,\psi_{l-}\right] \end{pmatrix}}_{(LU^l)^{\mathrm{surf}}} = \frac{c}{2}\sum_{l'=1}^{N} w_{l'} \begin{pmatrix} u^{l'}_{k,0} \\ u^{l'}_{k,1} \end{pmatrix} + \underbrace{\begin{pmatrix} \int_{x_L}^{x_R} dx\, B_{0,k}(z)\,S(x,t,\mu) \\ \int_{x_L}^{x_R} dx\, B_{1,k}(z)\,S(x,t,\mu) \end{pmatrix}}_{Q}. \qquad (17)

The terms (µl − ẋR) and (µl − ẋL) give the particle velocity relative to the right and left mesh edges, respectively, and determine where the solution is evaluated in the upwinding scheme. If (µl − ẋR) > 0,

\psi_{l+} = \sum_{j=0}^{M} B_{j,k}(z)\,u^l_{k,j}. \qquad (18)

If the relative velocity is negative, the right edge solution is evaluated in the next cell,

\psi_{l+} = \sum_{j=0}^{M} B_{j,k+1}(z)\,u^l_{k+1,j}. \qquad (19)

For the solution at the left edge, if (µl − ẋL) > 0, the solution from the previous cell is used,

\psi_{l-} = \sum_{j=0}^{M} B_{j,k-1}(z)\,u^l_{k-1,j}. \qquad (20)

For a negative relative velocity at the left edge,

\psi_{l-} = \sum_{j=0}^{M} B_{j,k}(z)\,u^l_{k,j}. \qquad (21)

When dealing with edges at the ends of the domain, k = 1 or k = K, the boundary condition is required to evaluate Eq. (19) and Eq. (20). For the infinite medium problems we explored, the boundary value is zero, with the exception of the MMS problem (Section 6.1).

4 Implementation

To verify the effectiveness of our moving mesh DG scheme with an uncollided source treatment, we implement the method in a code written in Python with Numba [24] to solve transport problems with six representative sources: a Method of Manufactured Solutions (MMS) source, a Gaussian pulse and source, a square pulse and source, and a plane pulse. For error quantification, which is addressed in the next section, the full solution for the MMS source is known from the problem setup, and we adopt the full solutions from [7] to calculate the accuracy of our method in the remaining five source configurations. All of our solutions are available on Github (www.github.com/wbennett39/moving mesh radiative transfer).

Since we intend to compare the uncollided source treatment and the moving mesh individually and in unison, it is necessary to develop a code that readily switches between four different methods: (1) a moving mesh using an uncollided source, (2) a moving mesh without an uncollided source, and a static mesh with (3) and without (4) the uncollided source. Each of these cases requires solving the system of ODEs given by Eq. (11) with different functions for the source term and the time dependent mesh edges. In this section, the methods we used to describe the moving and static meshes and the source treatments are presented in a general way; the results section includes the particular implementations for each source. Also in this section is a discussion of the time integration method.

4.1 Moving mesh

While our method admits mesh motion that is any function of time that does not result in mesh edges crossing or zero-width cells, we have chosen to restrict our investigation to one simple method for moving the cells.
This simple method requires the mesh to be subdivided into an even number of cells. The initial mesh spans a finite width of 2x0, centered on zero. The edges move with a constant velocity away from the origin that depends on their initial location. If the edges are defined as a vector of location values, where x0 is the leftmost cell edge, x1 is the right edge of that cell (equivalently the left edge of the adjacent cell), and so on, then

X(t) = \left[x_0(t), x_1(t), \ldots, x_K(t)\right], \qquad (22)

and the location of the edges can be found with

x_k(t) = x_k(0) + v\,t\,\frac{x_k(0)}{x_K(0)}, \qquad (23)

where x_k(0) is the initial location of a given edge and v, the particle velocity, is unity. If the initial widths are chosen to span a finite source, Eq. (23) moves the outermost edges at speed one, matching the solution wavefront. Since a cell interface is initialized at x = 0, that interface never moves. For our static mesh calculations, we simply span a chosen width with evenly spaced cells and set v = 0.

4.2 Source treatment

The pulsed sources, i.e., the Gaussian pulse, the square pulse, and the plane pulse, are equivalent to initial conditions for Eq. (1). For these cases in the standard source treatments that do not employ the method of uncollided solutions, the source S in Eq. (14) is set to zero and the initial condition is found by letting ψ(x, t = 0) = S(x, t = 0)/2 and inserting into Eq. (16). For the uncollided source treatment of the pulsed sources, the initial condition is set to zero and the uncollided solution is used in Eq. (14) to find the source in the weak formulation, Q. Similarly for the source cases (Gaussian source, square source), Q is found by integrating either the uncollided solution or the source term, depending on the method used.

4.3 Time integration

To solve the system of coupled ordinary differential equations from Eq. (11), we employed an 8th-order explicit Runge-Kutta algorithm, DOP853 [25], as implemented in SciPy [26]. The relative tolerance parameter was set to 5×10^{-13} and the absolute tolerance to 10^{-12}.

5 Error characterization

We judge the effectiveness of our scheme by characterizing the convergence of the solution on a test problem. We use the root mean squared error (RMSE) of the computed scalar flux as our error metric,

\mathrm{RMSE} = \sqrt{\frac{1}{N}\sum_{i}^{N}\left|\phi_i - \hat{\phi}_i\right|^2}, \qquad (24)

where φi is the calculated scalar flux at a given node, φ̂i is the corresponding benchmark solution, and N is the total number of nodes in the computational solution. To characterize the convergence, we solve a benchmark problem and increase the degrees of freedom. For a problem that is algebraically convergent, holding the number of basis functions constant and increasing the cell divisions leads to an error behavior that limits, as K → ∞, to the form

\mathrm{RMSE} = C\,K^{-A}, \qquad (25)

where C is the intercept and the constant A is the rate of convergence; if A is 2, the method is said to converge at second order. The curve in Eq. (25) is a straight line on a graph where both axes have a logarithmic scale. In characterizing problems where the method shows algebraic convergence, the intercept is significant: two separate methods may have the same algebraic convergence order but wildly different errors on a problem if one method has a smaller intercept value. Therefore, we use our error data as a function of the number of cells, K, to estimate the values of A and C based on Eq. (25).
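The fits to Eqs. (25) and (26) are simple least-squares problems in log space. The sketch below estimates (A, C) and (c1, C) from error data; the sample numbers are made up for illustration only.

import numpy as np

def fit_algebraic(K, rmse):
    # Fit RMSE = C * K**(-A), Eq. (25), as a line in log-log space.
    slope, logC = np.polyfit(np.log(K), np.log(rmse), 1)
    return -slope, np.exp(logC)  # returns (A, C)

def fit_spectral(M, rmse):
    # Fit RMSE = C * exp(-c1 * M), Eq. (26), as a line in log-linear space.
    slope, logC = np.polyfit(np.asarray(M, float), np.log(rmse), 1)
    return -slope, np.exp(logC)  # returns (c1, C)

K = np.array([2.0, 4.0, 8.0, 16.0, 32.0])
rmse = 0.4 * K**-2  # hypothetical second-order error data
A, C = fit_algebraic(K, rmse)
print(f"A = {A:.2f}, C = {C:.3f}")  # A = 2.00, C = 0.400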
For problems that demonstrate geometric spectral convergence, the error can be modeled as

\mathrm{RMSE} = C\exp(-c_1 M), \qquad (26)

where M is the highest polynomial order of the basis and C and c1 are constants that could depend on the number of cells used in the problem. This curve is a straight line on a logarithmic-linear scale. For spectral problems, the coefficient C is less consequential, as our results show that the error is exceedingly small for modest values of M.

Table 1: Parameters for each test case in Section 6.
6.1 MMS: initial condition ψ(x, 0, µ) = (1/2)e^{−x²/2} Θ(x0 − |x|); source S_MMS of Eq. (28); no uncollided source; c = 1.0, x0 = 0.1.
6.2 Gaussian pulse: ψ(x, 0, µ) = (1/2)exp(−x²/σ²); uncollided source Eq. (31); c = 1.0, σ = 0.5.
6.3 Gaussian source: ψ(x, 0, µ) = 0; S(x, t) = exp(−x²/σ²)Θ(t0 − t); uncollided source Eq. (33); c = 1.0, t0 = 5.0, σ = 0.5.
6.4 Plane pulse: ψ(x, 0, µ) = (1/2)δ(x)δ(t); uncollided source Eq. (35); c = 1.0.
6.5 Square pulse: ψ(x, 0, µ) = (1/2)Θ(x0 − |x|); uncollided source Eq. (39); c = 1.0, x0 = 0.5.
6.6 Square source: ψ(x, 0, µ) = 0; S(x, t) = Θ(x0 − |x|)Θ(t0 − t); uncollided source Eq. (43); c = 1.0, x0 = 0.5, t0 = 5.0.
6.7 Square pulse: ψ(x, 0, µ) = (1/2)Θ(x0 − |x|); uncollided source Eq. (39); c = 0.8, 1.2; x0 = 0.625, 0.417.
6.7 Gaussian pulse: ψ(x, 0, µ) = (1/2)exp(−x²/σ²); uncollided source Eq. (31); c = 0.8, 1.2; σ = 0.625, 0.417.

6 Results

6.1 MMS

In the Method of Manufactured Solutions, a solution is chosen and inserted into the governing equations to solve for a source term [27, 28, 29]. This source is then used in a numerical implementation to converge to the already-known solution. Here, we specify a solution that mimics the behavior of a plane pulse source, where a discontinuous wavefront smooths into a solution with Gaussian characteristics. The solution is

\psi_{\mathrm{MMS}}(x,\mu,t) = \frac{e^{-x^2/2}}{2(1+t)}\,\Theta(t - |x| + x_0), \qquad (27)

where Θ is a step function. Notice that ψ_MMS does not depend on µ, so the quadrature order will not affect the numerical solution. For three times, t = 1, t = 5, and t = 10 with x0 = 0.1, the solution is plotted in Figure 1. The source term that yields this manufactured solution is

S_{\mathrm{MMS}} = -e^{-x^2/2}\,\frac{\mu(t+1)x + 1}{(t+1)^2}\,\Theta(t - |x| + x_0). \qquad (28)

The source for Eq. (11) is found by inserting Eq. (28) into Eq. (14). It is important to note that the wavefront in this case is different from the wavefronts that appear in the solutions for the finite width sources (plane pulse, square pulse and source). In this MMS problem, the wavefront is not a feature of the finite wavespeed in the governing equation, but is instead imposed by the step function. Therefore, it is necessary in this problem, and this problem only, to impose a boundary condition at the wavefront, x = ±(t + x0). The value of the solution vector at the edges is found by integrating

u^l_{k,j} = \int_{x_L(k,t)}^{x_R(k,t)} dx\, B_{j,k}(z)\,\psi_{\mathrm{MMS}}(x,t,\mu_l) \qquad (29)

at the wavefront. The initial condition is found from Eq. (16).

Figure 1: MMS analytic solution, φ, with x0 = 0.1 at times t = 1, 5, and 10.
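Because Eq. (28) is easy to mistranscribe, a symbolic spot check is useful: inside the support of the step function, the residual (∂t + µ∂x + 1)ψ − (c/2)φ − (1/2)S should vanish for c = 1. A sketch with SymPy (a check only, not part of the solver):

import sympy as sp

x, t, mu = sp.symbols('x t mu', real=True)
psi = sp.exp(-x**2 / 2) / (2 * (1 + t))           # Eq. (27), omitting the step function
phi = sp.integrate(psi, (mu, -1, 1))              # psi is isotropic, so phi = 2 psi
S = -sp.exp(-x**2 / 2) * (mu * (t + 1) * x + 1) / (t + 1)**2   # Eq. (28)

residual = sp.diff(psi, t) + mu * sp.diff(psi, x) + psi - phi / 2 - S / 2
print(sp.simplify(residual))  # prints 0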
The moving mesh is ideal for enforcing this boundary condition since a cell edge can always be matched to the travelling wavefront. While it is possible to enforce the boundary condition, Eq. (29), with a static mesh, we implement this MMS problem not to compare the different schemes but to test the rate of convergence of the moving mesh method on a problem with a known smooth solution. We also do not attempt to find the uncollided solution in this configuration. The moving mesh cell edges are governed by Eq. (23). The MMS problem is the only problem we consider for which the angular flux solution is assuredly a smooth function. The Gaussian source and pulse considered later have smooth source functions, but there is no analytic expression for the full solution.

Figure 2 shows linear convergence on a log-linear scale, which, as explained in Section 5, indicates spectral convergence (Eq. (26)). If we fix M, we expect the error to converge at an algebraic rate equal to the number of basis functions (M + 1). In Figure 3 we observe that this is indeed the case for the MMS problem. This figure also shows a phenomenon that will be repeated in subsequent tests: the y-intercept lowers with an increasing number of basis functions. This intercept is C in Eq. (25). Although the MMS source is anisotropic, this solution required only S32 to converge to machine precision. The MMS solution results do not necessarily prove the merit of the moving mesh, since the wavefront in this problem is created with a boundary condition, and they say nothing about uncollided source treatments. However, the MMS test verifies that this method can converge at optimal rates when applied to a smooth problem.

Figure 2: MMS problem convergence results on a logarithmic-linear scale with increasing number of basis functions for t = 1 and 4 cells in the moving mesh. The uncollided source treatment is not used in this problem. Using Eq. (26), the decay rate in M is c1 ≈ 1.3.

Figure 3: MMS problem convergence results for a moving mesh, standard source treatment on a logarithmic scale at t = 1 for M = 2 (circles), M = 4 (triangles), and M = 6 (squares).

6.2 Gaussian pulse

For the next problem we consider a Gaussian pulse of the form

S_{\mathrm{gp}}(x,t) = \exp\left(\frac{-x^2}{\sigma^2}\right)\delta(t), \qquad (30)

where σ, not to be confused with a cross section, is the standard deviation. The solution for σ = 0.5 is plotted in Figure 4. The uncollided solution is significant at early times, but has decayed to approximately zero by t = 5. Unlike the finite width sources examined later, the uncollided solution for this source is a smooth function. The solution for the uncollided scalar flux in this configuration is [7]

\phi^{\mathrm{gp}}_u(x,t) = \sigma\sqrt{\pi}\,e^{-t}\,\frac{\mathrm{erf}\left(\frac{t-x}{\sigma}\right) + \mathrm{erf}\left(\frac{t+x}{\sigma}\right)}{4t}. \qquad (31)

For the uncollided case, Eq. (31) is treated as S in Eq. (14) and the initial condition is zero everywhere. For the methods that do not use an uncollided source, the initial condition is found by inserting ψ(x, t = 0) = (1/2)exp(−x²/σ²) into Eq. (16) and setting S(x, t) = 0.
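Eq. (31) is cheap to evaluate pointwise; a minimal sketch (the function name is ours):

import numpy as np
from scipy.special import erf

def phi_u_gaussian_pulse(x, t, sigma=0.5):
    # Uncollided scalar flux for the Gaussian pulse, Eq. (31).
    prefactor = sigma * np.sqrt(np.pi) * np.exp(-t) / (4.0 * t)
    return prefactor * (erf((t - x) / sigma) + erf((t + x) / sigma))

print(phi_u_gaussian_pulse(np.array([0.0, 0.5, 1.5]), t=1.0))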
While the initial condition does have an infinite width, setting the initial mesh width (x0) in Eq. (23) to the location where the initial condition falls below a specified small tolerance is sufficient to achieve accurate results. For example, the initial condition is less than 1×10^{-16} at x = 3.1 for standard deviation σ = 0.5. We use the same x0 for static mesh calculations and initialize the mesh to span the space [−t_final − x0, t_final + x0].

The Gaussian pulse and Gaussian source are a more realistic test than the MMS problem and still have the guarantee of a smooth source and smooth uncollided solution. All combinations of moving or static mesh and uncollided or standard source that we applied to this problem achieved spectral convergence, shown in Figure 5. The uncollided, moving mesh method performed the best, consistently achieving lower error than the other methods. For the Gaussian pulse, only S256 is required to achieve accuracies of RMSE ≈ 10^{-6}. After that point, more angles are required to ensure that the angular discretization error is lower than the errors in discretizing space or time. Figure 6b demonstrates that each method can achieve sixth-order convergence by holding M = 6 and increasing the number of mesh subdivisions, K. Since our methods are spectrally convergent, with seven basis functions (M = 6) one would expect the convergence to approach seventh order as K → ∞. This convergence test is limited by the accuracy of the benchmark solution before K is sufficiently large to achieve seventh-order convergence. However, Figure 6a achieves fourth-order convergence with four basis functions (M = 3) before reaching the accuracy limit of the benchmark.

Figure 4: Gaussian pulse semi-analytic solution, φ (solid) and φu (dashed), with σ = 0.5 and c = 1 at t = 1, 5, and 10. The uncollided solution is not shown for times where it is negligible.

6.3 Gaussian source

A Gaussian source turned on at t = 0 and left on until t = t0 is a superposition of Gaussian pulses,

S_{\mathrm{gs}}(x,t) = \exp\left(\frac{-x^2}{\sigma^2}\right)\Theta(t_0 - t). \qquad (32)

Figure 7 shows this solution with σ = 0.5 and t0 = 5. Notice that the uncollided solution is still significant at t = 5. Like the Gaussian pulse, the uncollided angular flux and the collided angular flux are smooth functions. Reference [7] gives the uncollided scalar flux for this source as a convolution of the Gaussian pulse uncollided flux,

\phi^{\mathrm{gs}}_u(x,t) = \int_0^{\min(t,t_0)} d\tau\; \sigma\sqrt{\pi}\,e^{-(t-\tau)}\,\frac{\mathrm{erf}\left(\frac{t-\tau-x}{\sigma}\right) + \mathrm{erf}\left(\frac{t-\tau+x}{\sigma}\right)}{4(t-\tau)}. \qquad (33)

For the two solution methods that use an uncollided source treatment, Eq. (33) is integrated in Eq. (14). For the two cases that do not employ the uncollided source, S in Eq. (14) is Eq. (32). The angular flux is initialized to zero everywhere for both source treatments. The moving mesh and static mesh are treated in exactly the same way as in Section 6.2, with the initial width set to that of a Gaussian pulse with the same standard deviation.
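Eq. (33) has no closed form, but it is a one-dimensional integral in τ; the sketch below evaluates it with adaptive quadrature. The integrand has a removable singularity as τ → t (the error-function bracket vanishes with t − τ), so treat this as a sketch rather than a robust implementation:

import numpy as np
from scipy.integrate import quad
from scipy.special import erf

def phi_u_gaussian_source(x, t, sigma=0.5, t0=5.0):
    # Uncollided scalar flux for the Gaussian source, Eq. (33): a superposition of pulses.
    def integrand(tau):
        dt = t - tau
        return (sigma * np.sqrt(np.pi) * np.exp(-dt)
                * (erf((dt - x) / sigma) + erf((dt + x) / sigma)) / (4.0 * dt))
    value, _ = quad(integrand, 0.0, min(t, t0))
    return value

print(phi_u_gaussian_source(0.0, t=1.0))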
Figure 5: Gaussian pulse convergence with logarithmic-linear scaling at t = 1 with increasing number of basis functions and 4 cells. Blue lines indicate the uncollided solution is used, red that no uncollided source is used. Dashed lines are for a static mesh and solid lines are for the moving mesh. Using Eq. (26), the estimated decay rate in M is c1 ≈ 1.0.

Figure 6: Gaussian pulse convergence results on a logarithmic scale with c = 1, σ = 0.5 at t = 1; panel (a) M = 3, panel (b) M = 6. Blue lines indicate the uncollided solution is used, red that no uncollided source is used. Dashed lines are for a static mesh and solid lines are for the moving mesh.

Figure 7: Gaussian source semi-analytic solution, φ (solid) and φu (dashed), with σ = 0.5, t0 = 5, and c = 1 at t = 1, 5, and 10. The uncollided flux is not shown for times where it is negligible.

Our solutions for the Gaussian source achieve similar levels of accuracy as for the Gaussian pulse. Figure 8 shows spectral convergence for all four methods. Compared with the Gaussian pulse (Figure 5), these results show a more drastic difference in the intercepts between the methods that use the uncollided solution and those that do not. This could be because, as shown in Figure 7, the uncollided solution is a significant portion of the solution at later times. Figure 9b shows these methods achieving almost sixth-order convergence with M = 6 and improving to seventh order for K = 16 before the error approaches the number of accurate digits of the benchmark solution. Figure 9a better illustrates how each method can achieve an order of convergence equal to the number of basis functions (M + 1) as K → ∞, with fourth-order convergence for M = 3. These solutions have an angular error requiring S512 to achieve RMSE ≈ 10^{-7}.

6.4 Plane pulse

Ganapol's frequently used benchmark solution for an infinite plane source is a Green's function for the source

S_{\mathrm{pl}} = \delta(x)\delta(t). \qquad (34)

The full solution is plotted for three times in Figure 10. This solution shows how a discontinuous uncollided scalar flux can cause discontinuities in the first derivative of the full solution. In this case, the discontinuity is manifested as a traveling wavefront at early times. The solution smooths to be redolent of a Gaussian at later times. The uncollided solution for this configuration, also given by [1], is

\phi^{\mathrm{pl}}_u(x,t) = \frac{\exp(-t)}{2t}\,\Theta\!\left(1 - \left|\frac{x}{t}\right|\right). \qquad (35)

Equation (35) is substituted for S in Eq. (14) for the uncollided source treatment. For the two cases where an uncollided source is not used, it is necessary to approximate the delta function initial condition. For the static mesh case, an initial condition of

\psi_{\mathrm{pl}}(x, t{=}0) = \frac{1}{2}\,\frac{\Theta(x_0 - |x|)}{2x_0}, \qquad (36)

where x0 is a very small number, is used to approximate the initial condition in Eq. (16).
The moving mesh, no uncollided source case did not converge due to how we represent the initial condition and is not included here. The moving mesh is governed by Eq. (23) with x0 set to approximately zero, which ensures that the step function in Eq. (35) is always one. The static mesh for this source spans [−t_final, t_final] with evenly spaced cells.

Figure 8: Gaussian source convergence on a logarithmic-linear scale at t = 1 with increasing number of basis functions and 4 cells. Blue lines indicate the uncollided solution is used, red that no uncollided source is used. Dashed lines are for a static mesh and solid lines are for the moving mesh. Using Eq. (26), the decay rate in M is estimated to be c1 ≈ 0.95.

Figure 9: Gaussian source convergence results on a logarithmic scale with c = 1, σ = 0.5 at t = 1; panel (a) M = 3, panel (b) M = 6. Blue lines indicate the uncollided solution is used, red that no uncollided source is used. Dashed lines are for a static mesh and solid lines are for the moving mesh.

Figure 10: Plane pulse semi-analytic solution, φ (solid) and φu (dashed), with c = 1 at t = 1, 5, and 10. The uncollided flux is not shown for times where it is negligible.

For j > 0, our basis functions Bj are orthogonal to functions that are constant in the range xL to xR. The uncollided solution always satisfies this criterion since the wavefronts are never inside any of the cells. Therefore, it is simple to integrate the uncollided solution for the moving mesh over a cell in the source term. The source becomes

Q_0(x,t) = \int_{x_L}^{x_R} dx\, B_{0,k}(z)\,\frac{\exp(-t)}{2t} = \sqrt{x_R - x_L}\,\frac{\exp(-t)}{2t}. \qquad (37)

This simplification is not possible in the static mesh case due to the step function, which requires integration at the moving wavefront.

For the finite width sources, the plane pulse and the square source and pulse, spectral convergence is unrealistic with our methods. The discontinuities in the scalar flux can be easily matched with mesh edges, resulting in a smooth problem, but the angular flux poses a more difficult problem. In the plane pulse, for example, the collided angular flux has a discontinuity in the first derivative located at x = µt, travelling outward from the origin, that is caused by the uncollided angular flux. This means that there are as many discontinuities in the angular fluxes as there are discrete angles. Though the highly nonsmooth nature of the solution to the plane pulse at early times restricts all methods in Figure 11 to first-order convergence, there is a substantial difference in the accuracy of each method. Table 2 shows that in the M = 6 case, there is an almost 900-fold reduction in the intercept of the uncollided source, moving mesh case compared to the standard DG implementation.
These results show that using an uncollided source is effective in reducing the error, in this case 36 times, but using the uncollided source and the moving mesh together is a significantly more effective method. The Gaussian-like nature of the plane pulse solution at later times allows for higher-order convergence, as shown in Figure 12. The no-uncollided-source method in this case is restricted by the inherent error in approximating a delta function, and the uncollided source, static mesh method is restricted by the necessity of resolving the discontinuous wavefront at early times with a polynomial. The uncollided, moving mesh method shows nearly optimal, sixth-order convergence with a significantly lower error level. The plane pulse solution requires more angular resolution than all of the other source configurations tested, with S4096 required for errors less than 10^{-4}. This angular error is reduced as the solution smooths at later times.

Figure 11: Plane pulse convergence results on a logarithmic scale with c = 1 at t = 1; panel (a) M = 4, panel (b) M = 6. Blue lines indicate the uncollided solution is used, red that no uncollided source is used. Dashed lines are for a static mesh and solid lines are for the moving mesh.

Table 2: Plane pulse intercepts from Figure 11, where the intercept C is found from the curve fit RMSE = C K^{-A}, with K the number of mesh subdivisions and A the order of convergence. The improvement from the baseline is found by dividing the intercept of the no uncollided solution, static mesh case by the intercept of the given case.

                                 M = 4                    M = 6
                             intercept  improvement   intercept  improvement
no uncollided + static mesh  0.41393    --            0.4140     --
uncollided + static mesh     0.01848    22            0.01145    36
uncollided + moving mesh     0.0008613  481           0.0004731  875

Figure 12: Plane pulse convergence results on a logarithmic scale with c = 1 at t = 10 with M = 6. Blue lines indicate the uncollided solution is used, dashed that the mesh is static, and solid that the mesh is moving.

6.5 Square pulse

We next consider a finite width square pulse source of the form

S_{\mathrm{sp}} = \Theta(x_0 - |x|)\,\delta(t). \qquad (38)

Reference [7] gives the piecewise linear uncollided solution for this problem,

\phi^{\mathrm{sp}}_u(x,t) = \begin{cases} 0 & |x| - t > x_0 \\ x_0\,e^{-t}/t & t > x_0 \ \text{and} \ x_0 - t \le x \le t - x_0 \\ e^{-t} & t \le x_0 \ \text{and} \ t - x_0 \le x \le x_0 - t \\ e^{-t}(t + x + x_0)/(2t) & -t - x_0 < x < t + x_0 \ \text{and} \ x_0 + x \le t \le x_0 - x \\ e^{-t}(t - x + x_0)/(2t) & -t - x_0 < x < t + x_0 \ \text{and} \ x_0 - x \le t \le x_0 + x \end{cases} \qquad (39)

The full solution and the uncollided piecewise solution for an initial width x0 = 0.5 at t = 1 are plotted in Figure 13. The uncollided flux is discontinuous in the first derivative at x = ±x0. These discontinuities travel towards the origin at the wavespeed, meet at x = 0, then travel outwards again.
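A direct transcription of Eq. (39) makes a useful unit test for the source routine. The branch ordering below relies on the first case screening out points beyond the wavefront; the function name is ours:

import numpy as np

def phi_u_square_pulse(x, t, x0=0.5):
    # Uncollided scalar flux for the square pulse, Eq. (39).
    if abs(x) - t > x0:
        return 0.0                                    # beyond the wavefront
    if t > x0 and x0 - t <= x <= t - x0:
        return x0 * np.exp(-t) / t                    # interior region, late time
    if t <= x0 and t - x0 <= x <= x0 - t:
        return np.exp(-t)                             # interior region, early time
    if x0 + x <= t <= x0 - x:
        return np.exp(-t) * (t + x + x0) / (2.0 * t)  # left edge region
    if x0 - x <= t <= x0 + x:
        return np.exp(-t) * (t - x + x0) / (2.0 * t)  # right edge region
    return 0.0

print(phi_u_square_pulse(0.0, 1.0))  # x0 * e^{-1} / t ~ 0.1839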
Tracking this discontinuity involves moving edges towards the origin, which performed poorly in test implementations, so a hybrid static-moving mesh was adopted for the moving mesh implementation. This method requires the number of cells to be a multiple of two greater than or equal to four. Half of the zones span the source and never move, and half begin at the edges with zero width and move outwards. Therefore, edges on the left side of the source obey

x_k(t) = x_{k0} - \frac{x_k}{x_0}\,v\,t \quad \text{for } k < \frac{K}{4}. \qquad (40)

Mesh edges from k = K/4 to k = 3K/4 stay at their initial values. For the rest,

x_k(t) = x_{k0} + \frac{x_k}{x_K}\,v\,t \quad \text{for } k > \frac{3K}{4}. \qquad (41)

While this mesh method does not track the interior discontinuity, it ensures better resolution of the solution inside the source, where the uncollided solution presents difficulty, by allocating two-thirds of the cells to the source region. The static mesh in this case spans [−t_final − x0, t_final + x0] with an edge at the initial source width. For our uncollided source treatment, Eq. (39) is substituted into Eq. (14) and the initial condition is set to zero. For the standard source treatment, the initial condition is found by substituting ψ(x, t = 0) = (1/2)Θ(x0 − |x|) into Eq. (16) and setting S in Eq. (14) to zero.

The square pulse source is significantly smoother than the plane pulse: the former has an uncollided angular flux that is piecewise smooth, whereas the latter has an uncollided angular flux made up of travelling delta functions. Figure 14 shows that the uncollided methods are able to achieve second-order convergence at early times, with the standard source treatments doing slightly worse. The square pulse requires similar angular resolution to the smooth Gaussian problems, with S512 required for RMSE less than 10^{-6}. As with the plane pulse, the intercept value is important. For M = 6, the uncollided, moving mesh and the uncollided, static mesh methods have the same order of convergence, but the former has a 10 times intercept reduction from the baseline while the latter is reduced 5 times (Table 3). Table 3 also shows a feature that is less discernible in Figure 14: the intercept for M = 6 is approximately 3 times smaller than for M = 4.

Figure 13: Square pulse semi-analytic solution, φ (solid) and φu (dashed), with x0 = 0.5 and c = 1 at t = 1, 5, and 10. The uncollided flux is not shown for times where it is negligible.

Table 3: Square pulse intercepts from Figure 14, where the intercept C is found from the curve fit RMSE = C K^{-A}, with K the number of mesh subdivisions and A the order of convergence. The improvement from the baseline is found by dividing the intercept of the no uncollided solution, static mesh case by the intercept of the given case.

                                 M = 4                    M = 6
                             intercept  improvement   intercept  improvement
no uncollided + static mesh  0.14594    --            0.03220    --
uncollided + static mesh     0.02664    6             0.006501   5
no uncollided + moving mesh  0.0078136  19            0.00414    8
uncollided + moving mesh     0.006379   23            0.002266   10

6.6 Square source

We define a square source, which is a superposition of square pulses while t < t0,

S_{\mathrm{ss}} = \Theta(x_0 - |x|)\,\Theta(t_0 - t). \qquad (42)
Figure 14: Square pulse convergence results on a logarithmic scale with c = 1 at t = 1; panel (a) M = 4, panel (b) M = 6. Blue lines indicate the uncollided solution is used, red that no uncollided source is used. Dashed lines are for a static mesh and solid lines are for the moving mesh.

The solution for this configuration is shown in Figure 15, where the uncollided solution is continuous, but not everywhere smooth, and is still significant by the time the source is turned off, t = 5. The uncollided solution for this source, given by [7], is

\phi^{\mathrm{ss}}_u(x,t) = \Big[-x_0\,\mathrm{Ei}(\tau - t)\Big]\Big|_{\tau=0}^{b} + \frac{1}{2}\Big[(|x| - x_0)\,\mathrm{Ei}(\tau - t) + e^{\tau - t}\Big]\Big|_{\tau=b}^{c} + \Big[e^{-(t-\tau)}\Big]\Big|_{\tau=c}^{d}, \qquad (43)

where the evaluation limits are defined by

b = \left[\min\left(d,\; t - |x| - x_0\right)\right]_+, \qquad (44)

c = \left[\min\left(d,\; t + |x| - x_0\right)\right]_+, \qquad (45)

d = \left[\min\left(t_0,\; t,\; t - |x| + x_0\right)\right]_+. \qquad (46)

Here [·]_+ returns the positive part of its argument and Ei is the exponential integral. To find the source term in the weak formulation, Eq. (43) is inserted into Eq. (14). For the standard source evaluation, Eq. (42) is used as the source in Eq. (14).

The square source uncollided flux creates a discontinuity in the first derivative at x = ±x0 while the source is on. The moving mesh defined in Section 6.5 is ideal for handling this discontinuity since a mesh edge is always at ±x0. Also, two-thirds of the available cells are inside the source, where the solution is more difficult to resolve. The static mesh in this case spans [−t_final − x0, t_final + x0] with evenly spaced cells; an edge is always located at the source edge.

Figure 15: Square source semi-analytic solution, φ (solid) and φu (dashed), with x0 = 0.5, t0 = 5, and c = 1 at t = 1, 5, and 10. The uncollided flux is not shown for times where it is negligible.

The uncollided source methods for the square source achieve second-order convergence at early times, and the standard source treatments do slightly worse, as shown in Figure 16. Like the square pulse, discontinuities in the uncollided angular flux restrict the order of convergence of the uncollided source methods. The angular error dependence in this case is similar to the square pulse, with S512 required for errors less than 10^{-6}. Since the uncollided solution in this case lasts longer before decaying to zero than in the square pulse case, using the uncollided source, moving mesh treatment creates a more significant intercept reduction from the standard method. Table 4 shows that for the M = 6 case, the intercept is reduced 70 times from the standard method. It is interesting to note that, according to Eq. (25), for M = 6 the standard DG method would require about 1300 spatial subdivisions to achieve the error that the uncollided, moving mesh method returns with 32 cells.
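Eq. (43) can be transcribed directly with SciPy's exponential integral Ei. The sketch below reflects our reading of the bracket notation, [F(τ)]|_a^b = F(b) − F(a), with the limits of Eqs. (44)–(46); it is a sketch to be checked against benchmark values (at x = 0, t = 1 it reproduces the value obtained by directly time-integrating the square pulse solution, Eq. (39)):

import numpy as np
from scipy.special import expi

def phi_u_square_source(x, t, x0=0.5, t0=5.0):
    # Uncollided scalar flux for the square source, Eq. (43), with limits (44)-(46).
    pos = lambda a: max(a, 0.0)
    d = pos(min(t0, t, t - abs(x) + x0))
    b = pos(min(d, t - abs(x) - x0))
    c = pos(min(d, t + abs(x) - x0))
    F1 = lambda tau: -x0 * expi(tau - t)                                      # first bracket
    F2 = lambda tau: 0.5 * ((abs(x) - x0) * expi(tau - t) + np.exp(tau - t))  # second bracket
    F3 = lambda tau: np.exp(-(t - tau))                                       # third bracket
    return (F1(b) - F1(0.0)) + (F2(c) - F2(b)) + (F3(d) - F3(c))

print(phi_u_square_source(0.0, 1.0))  # ~0.5637 for x0 = 0.5, t0 = 5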
The t = 1 convergence results for every case except the MMS problem, for the uncollided source, moving mesh and the standard source, moving mesh methods, are shown side by side in Figure 21. This figure is a good illustration of the difference in convergence between sources within the same method and the difference in accuracy between the uncollided, moving mesh and the standard moving mesh methods.

Table 4: Square source intercepts from Figure 16, where the intercept C is found from the curve fit RMSE = C K^{-A}, with K the number of mesh subdivisions and A the order of convergence. The improvement from the baseline is found by dividing the intercept of the no uncollided solution, static mesh case by the intercept of the given case.

                                 M = 4                    M = 6
                             intercept  improvement   intercept  improvement
no uncollided + static mesh  0.1713     --            0.04705    --
uncollided + static mesh     0.03235    5             0.009397   5
no uncollided + moving mesh  0.05331    3             0.003389   14
uncollided + moving mesh     0.01674    10            0.0006706  70

Figure 16: Square source convergence results at t = 1; panel (a) M = 4, panel (b) M = 6. Blue lines indicate the uncollided solution is used, red that no uncollided source is used. Dashed lines are for a static mesh and solid lines are for the moving mesh.

6.7 Results for non-purely scattering problems

All of the test problems considered so far have been pure scattering problems (c = 1). In this section, we briefly show that for the Gaussian pulse and square pulse problems, our method performs well with partially absorbing and multiplying scattering ratios. For these problems, we are able to avoid recalculating a benchmark by rescaling already calculated results. In an underappreciated footnote, Case and Zweifel ([30, p. 175]) presented a clever scaling to relate the solution for c = 1 (ψ1) to solutions for any other nonzero scattering ratio (ψc),

\psi_c(x,\mu,t) = c\,\exp\left(-(1-c)\,t\right)\,\psi_1(cx,\mu,ct). \qquad (47)

This scaling was derived for initial value problems with no source term, hence the choice of the pulse-type problems in this section. It is also necessary to scale the initial condition of each source in the code implementation so that our code is running the same problem as the benchmark. This is done by scaling the parameters x0 and σ for the square pulse and Gaussian pulse, respectively, in the initial condition and the uncollided source,

x_0' = \frac{x_0}{c}, \qquad \sigma' = \frac{\sigma}{c}, \qquad (48)

where x0′ and σ′ are the new parameters for our source term. Also, the scaled evaluation time becomes

t_{\mathrm{final}} = \frac{t}{c}, \qquad (49)

where t is the evaluation time of the benchmark problem. For c = 0.8 and c = 1.2, the benchmarks for the square pulse and Gaussian pulse are plotted in Figures 17 and 18, respectively. We choose the evaluation times t_final = 1.25 and t_final = 0.83 so we may use the t = 1 benchmark. Convergence tests showed that our moving mesh, uncollided solution method handles these partially absorbing or multiplying problems just as well as purely scattering problems. The results for the square pulse behave similarly to the c = 1 results, with second-order convergence at early times and a significant difference in magnitude between the uncollided, moving mesh case and the standard DG implementation. These results are shown in Figure 20.
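Since Eq. (47) applies pointwise, and integrating it over µ gives the same scaling for the scalar flux, the rescaling is essentially one line of code. A sketch (names ours), using the plane pulse uncollided flux as a stand-in c = 1 solution:

import numpy as np

def rescale_case_zweifel(phi1, c):
    # Eq. (47) applied to the scalar flux: phi_c(x, t) = c * exp(-(1 - c) t) * phi1(c x, c t).
    return lambda x, t: c * np.exp(-(1.0 - c) * t) * phi1(c * x, c * t)

phi1 = lambda x, t: np.where(np.abs(x) <= t, np.exp(-t) / (2.0 * t), 0.0)
phi_08 = rescale_case_zweifel(phi1, c=0.8)
print(phi_08(0.0, 1.25))  # evaluated at t = 1 / c, per Eq. (49)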
Changing the scattering ratio also did not impact the convergence behavior of the method on the Gaussian pulse problem, with the method achieving fifth-order convergence with M = 4 for the tested values of c (Figure 19). In both tests, the multiplying scattering ratio showed similar convergence characteristics to the absorbing problem, but with a higher relative error. This is not necessarily an effect of the larger scattering ratio, but more likely of the earlier evaluation time, when the solution is more challenging.

Figure 17: Square pulse semi-analytic solutions, φ (solid) and φu (dashed), for (a) c = 0.8, t = 1.25 and (b) c = 1.2, t = 0.83, scaled from the t = 1, c = 1 solution (Figure 13a).

Figure 18: Gaussian pulse semi-analytic solutions, φ (solid) and φu (dashed), for (a) c = 0.8, t = 1.25 and (b) c = 1.2, t = 0.83, scaled from the t = 1, c = 1 solution (Figure 4a).

Figure 19: Gaussian pulse convergence results on a logarithmic scale with M = 4, for c = 0.8 evaluated at t = 1.25 and c = 1.2 evaluated at t = 0.83. The standard deviation of the initial condition, σ, is 0.625 in the first case and 0.417 in the second. Blue lines indicate the uncollided solution is used, red that no uncollided source is used. Dashed lines are for a static mesh and solid lines are for the moving mesh.

Figure 20: Square pulse convergence results on a logarithmic scale with M = 4, for c = 0.8 evaluated at t = 1.25 and c = 1.2 evaluated at t = 0.83. The initial width parameter, x0, is 0.625 in the first case and 0.417 in the second. Blue lines indicate the uncollided solution is used, red that no uncollided source is used. Dashed lines are for a static mesh and solid lines are for the moving mesh.

Figure 21: Convergence results for an increasing number of cell divisions, K, at t = 1 with M = 6 for the uncollided source, moving mesh (21a) and static mesh (21b) cases for every test problem except the MMS problem (plane pulse, square pulse, square source, Gaussian pulse, Gaussian source).

Figure 22: Calculation time vs. RMSE for five runs of the square source problem.
Blue lines indicate the uncollided solution is used, dashed that the mesh is static, and solid that the mesh is moving. Panel (22a) contains data from calculations holding the number of mesh subdivisions constant (4 cells) and increasing the number of basis functions (M = 2, 4, 8, and 16), and panel (22b) contains data from calculations holding M = 6 constant and increasing the number of mesh subdivisions (2, 4, 8, and 16 cells).

6.8 Computational efficiency

For the square source problem of Section 6.6, benchmarks were created by storing the average computational time over 5 runs, first holding M = 6 constant while increasing the number of cell divisions, and then holding the number of cells constant at 4 while increasing the number of basis functions (Figure 22). The calculations were performed on a 2020 MacBook Pro laptop with an 8-core M1 CPU and 16 GB of RAM. For both cases, the uncollided source, moving mesh method returned more accurate solutions for less calculation time. The results also show that the most efficient way to obtain accurate solutions with this method is to use fewer mesh subdivisions and a higher-order polynomial: the most accurate solution in Figure 22a takes about one-third the time of the most accurate solution in Figure 22b. This can also be understood in light of the trend apparent in Figures 11, 14, and 16, where increasing the number of basis functions from 5 to 7 returns the same order of convergence but starting from a smaller intercept value. 7" + }, + { + "url": "http://arxiv.org/abs/2205.15783v2", + "title": "Benchmarks for infinite medium, time dependent transport problems with isotropic scattering", + "abstract": "The widely used AZURV1 transport benchmarks package provides a suite of\nsolutions to isotropic scattering transport problems with a variety of initial\nconditions (Ganapol 2001). Most of these solutions have an initial condition\nthat is a Dirac delta function in space; as a result these benchmarks are\nchallenging problems to use for verification tests in computer codes.\nNevertheless, approximating a delta function in simulation often leads to low\norders of convergence and the inability to test the convergence of high-order\nnumerical methods. While there are examples in the literature of integration of\nthese solutions as Green's functions for the transport operator to produce\nresults for more easily simulated sources, they are limited in scope and\nbriefly explained. For a sampling of initial conditions and sources, we present\nsolutions for the uncollided and collided scalar flux to facilitate accurate\ntesting of source treatment in numerical solvers. The solution for the\nuncollided scalar flux is found in analytic form for some sources. Since\nintegrating the Green's functions is often nontrivial, discussion of\nintegration difficulty and workarounds to find convergent integrals is\nincluded. Additionally, our uncollided solutions can be used as source terms in\nverification studies, in a similar way to the method of manufactured solutions.", + "authors": "William Bennett, Ryan G. McClarren", + "published": "2022-05-28", + "updated": "2022-06-15", + "primary_cat": "cs.CE", + "cats": [ + "cs.CE" + ], + "main_content": "Introduction

The AZURV1 benchmark suite, developed by Ganapol et al. (2001), is an indispensable verification tool in the transport community.
Some of the works that rely on these benchmarks include (Variansyah and McClarren 2022; Harel, Burov, and Heizler 2021; Peng and McClarren 2021; Heizler 2010; Garrett and Hauck 2013; Seibold and Frank 2014; Schlachter and Schneider 2018; Hauck and Heningburg 2019; Heningburg and Hauck 2020). The AZURV1 benchmark extends the work of Monin (1956) and contains solutions for delta-function initial conditions in planar, line, point, and spherical shell shapes. These solutions can be considered Green's functions for the respective geometries. (There is contention over whether one should use "Green's function" or "Green function"; we follow the convention recommended by Wright (2006) in retaining the possessive.) Since running one of the AZURV1 problems with a numerical code requires the difficult approximation of a delta function, Garrett and Hauck (2013) present a method for integrating Ganapol's line source problem to find the exact solution for a Gaussian initial condition that resembles the line source but is more manageable for a numerical solver. This nearby problem is cited by (Seibold and Frank 2014; Schlachter and Schneider 2018; Hauck and Heningburg 2019; Heningburg and Hauck 2020). The popularity of this solution confirms that there is interest in such solutions in the transport community.

In this work we present transport solutions for a variety of initial conditions and sources to address this need. These solutions are considerably easier for numerical codes to handle than Green's functions and are, therefore, more useful for convergence studies. Additionally, we give the solution for the uncollided scalar flux for these problems. The uncollided flux can be used as the source in a computer code that solves the transport equation. This method is similar to the Method of Manufactured Solutions (MMS) (Salari and Knupp 2000; McClarren and Lowrie 2008), where a known solution is used to solve for a source term (typically a complicated function of space, angle, and time), which is then given to the numerical algorithm. Other researchers can use the uncollided solutions we present as the prescribed source in a verification test.

The remainder of this paper is organized as follows. In Section 2 we present the multiple collision approach and the original plane pulse solution from (Ganapol et al. 2001). We then integrate this solution over square and Gaussian spatial distributions for an initial pulse and for a source that is on for a fixed time, t0, in Section 3. The pulsed line source solution is presented in Section 5, as well as the integral over this pulse in a Gaussian configuration.

2. Uncollided-Collided Split Transport Model

We begin with the neutral particle transport problem in an infinite medium with isotropic scattering (Ganapol et al. 2001)

\left(\frac{\partial}{\partial t} + \mu\frac{\partial}{\partial x} + 1\right)\psi(x,t,\mu) = \frac{c}{2}\,\phi(x,t) + \frac{1}{2}\,S(x,t), \qquad (1)

where the angular flux is represented by ψ(x, t, µ), the scalar flux by φ(x, t) = \int_{-1}^{1} d\mu'\,\psi(x,t,\mu'), S is a source, and c is the scattering ratio. The spatial coordinate x is measured in units of mean-free path and, with a particle speed of unity, t measures time in units of mean-free time; µ ∈ [−1, 1] is the cosine of the angle between a direction of travel and the x-axis.
Because we are in an infinite medium, we do not specify boundary conditions. We do, however, assert that the initial conditions are zero-flux, unless otherwise specified. We can split Eq. (1) into an equation for the angular flux of particles that have not undergone a collision and an equation for the particles that have undergone a collision. To do this we write $\psi(x,t,\mu) = \psi_u(x,t,\mu) + \psi_c(x,t,\mu)$. The equation for the uncollided angular flux, $\psi_u$, is

$$\left( \frac{\partial}{\partial t} + \mu \frac{\partial}{\partial x} + 1 \right) \psi_u(x,t,\mu) = \frac{1}{2} S(x,t). \quad (2)$$

Notice that this equation is a purely absorbing transport equation. This fact will allow us to write closed-form solutions for the uncollided scalar flux in many instances. The equation for the collided angular flux, $\psi_c$, looks like the original transport equation, Eq. (1), with the addition of a source term from the uncollided solution:

$$\left( \frac{\partial}{\partial t} + \mu \frac{\partial}{\partial x} + 1 \right) \psi_c(x,t,\mu) = \frac{c}{2} \int_{-1}^{1} d\mu' \left( \psi_c(x,t,\mu') + \psi_u(x,t,\mu') \right) = \frac{c}{2} \int_{-1}^{1} d\mu'\, \psi_c(x,t,\mu') + S_u(x,t). \quad (3)$$

Here we have defined a source term for the collided equation as

$$S_u(x,t) = \frac{c}{2} \int_{-1}^{1} d\mu'\, \psi_u(x,t,\mu') = \frac{c}{2} \phi_u(x,t). \quad (4)$$

A few points are in order regarding the uncollided and collided transport equations, Eqs. (2) and (3). Firstly, it is clear that adding these two equations together yields the original transport equation, Eq. (1). Furthermore, we note that if the uncollided scalar flux, $\phi_u$, is known, then one can solve a standard transport equation with source given by $c\phi_u/2$ to get the collided solution $\psi_c$. This means that one could use the uncollided solutions we give below as a source term in a verification test because the solution will be an approximation to $\phi_c$. Numerical methods that employ decomposition based on collisions are often more efficient as they are able to allocate computational resources separately to solve for the typically less smooth uncollided fluxes, as in (Alcouffe, O'Dell, and Brinkley 1990; Hauck and McClarren 2013; Walters and Haghighat 2017). Our proposed source treatment is similar to these methods, except instead of solving for the uncollided flux with a more refined mesh or more angular degrees of freedom, an analytic or semi-analytic solution for the uncollided scalar flux is inserted into Eq. (3), acting like a source term.

3. Planar pulse Green's functions

For a one-dimensional infinite plane pulsed source with isotropic scattering, the corresponding source is $S(x,t) = \delta(x)\delta(t)$. The solution for the uncollided scalar flux is (Ganapol et al. 2001)

$$\phi^{pl}_u(x,t) = \frac{e^{-t}}{2t}\, \Theta(1 - |\eta|), \quad (5)$$

with

$$\eta \equiv \frac{x}{t}, \quad (6)$$

and $\Theta$ is a step function that returns unity for positive arguments and zero otherwise. The solution for the collided flux is

$$\phi^{pl}_c(x,t) = c \left( \frac{e^{-t}}{8\pi} \left(1 - \eta^2\right) \int_0^\pi du\, \sec^2\left(\frac{u}{2}\right) \mathrm{Re}\left[ \xi^2 e^{\frac{ct}{2}(1-\eta^2)\xi} \right] \right) \Theta(1 - |\eta|), \quad (7)$$

where the complex-valued function $\xi$ is

$$\xi(u, \eta) = \frac{\log q + iu}{\eta + i \tan(u/2)}, \quad (8)$$

and

$$q = \frac{1 + \eta}{1 - \eta}. \quad (9)$$
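For readers who want to evaluate these kernels directly, the sketch below implements Eqs. (5) and (7)-(9) with standard adaptive quadrature. It is our illustration, not code from AZURV1 or this paper; the function names are ours, and NumPy/SciPy are assumed.

```python
import numpy as np
from scipy.integrate import quad

def phi_pl_u(x, t):
    """Uncollided plane-pulse scalar flux, Eq. (5)."""
    return np.exp(-t) / (2.0 * t) if abs(x / t) < 1.0 else 0.0

def phi_pl_c(x, t, c=1.0):
    """Collided plane-pulse scalar flux, Eq. (7), integrating over u."""
    eta = x / t
    if abs(eta) >= 1.0:
        return 0.0
    q = (1.0 + eta) / (1.0 - eta)                                 # Eq. (9)
    def integrand(u):
        xi = (np.log(q) + 1j * u) / (eta + 1j * np.tan(u / 2.0))  # Eq. (8)
        return (np.cos(u / 2.0) ** -2
                * np.real(xi**2 * np.exp(0.5 * c * t * (1.0 - eta**2) * xi)))
    val, _ = quad(integrand, 0.0, np.pi, limit=200)
    return c * np.exp(-t) / (8.0 * np.pi) * (1.0 - eta**2) * val

# Total scalar flux at t = 1 (cf. Figure 1a):
print(phi_pl_u(0.5, 1.0) + phi_pl_c(0.5, 1.0))
```

The quadrature nodes stay interior to $[0, \pi]$, so the $\tan(u/2)$ factor in Eq. (8) never has to be evaluated exactly at $u = \pi$.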
[Figure 1: Plane pulse scalar flux solutions, $\phi^{pl}_u + \phi^{pl}_c$, for c = 1 at several times (t = 1, 5, 10); panel (a) also contains the uncollided scalar flux, $\phi^{pl}_u$, denoted by a dashed line.]

Equations (5) and (7) will be treated as Green's function kernels and integrated to find solutions for a variety of sources and initial conditions. The general form for this integration is

$$\phi_j(x,t) = \int_0^t d\tau \int_{-\infty}^{\infty} ds\, S(s,\tau)\, \phi^{pl}_j(x - s, t - \tau), \quad (10)$$

where S is an arbitrary source, and the subscript j = u or c denotes the uncollided or collided scalar flux. The total scalar flux ($\phi_u + \phi_c$) and the uncollided scalar flux for this problem are shown in Figure 1. Because the uncollided solution decays exponentially in time and is effectively zero on the scale of the plots, we do not show the uncollided solution in the later time panels. We also point out that at t = 1 the presence of the wavefront is noticeable at x = 1; this feature also decays exponentially and is hardly noticeable in the t = 5 panel at x = 5. This wavefront is one of the features in the AZURV1 solution that can make it challenging to use this benchmark in convergence studies for high-order numerical methods.

3.1. Square pulse

The square pulse integrates the solution from the plane pulse over a finite spatial range. We consider a square pulse of width $x_0$ and magnitude one centered on the origin,

$$S(x,t) = \Theta(x_0 - |x|)\, \delta(t). \quad (11)$$

This source can also be written as the initial condition $\phi(x, t = 0) = \Theta(x_0 - |x|)$. With Eq. (11) as the source, Eq. (10) gives the uncollided solution

$$\phi^{sp}_u(x,t) = \int_{-x_0}^{x_0} ds\, \frac{e^{-t}}{2t}\, \Theta\left(1 - |\eta'|\right), \quad (12)$$

where we have defined $\eta'$ to contain the integration variable,

$$\eta' = \frac{x - s}{t}. \quad (13)$$

The Dirac delta function in the source removes the integration over $\tau$. With the integration limits changed to $[-x_0, x_0]$, the step function in the source is always unity. Evaluating the integral in Eq. (12) requires considering all of the possible relationships of the parameters x, t, and $x_0$. With these cases considered and the integral evaluated, the uncollided solution can be written as a piecewise function:

$$\phi^{sp}_u(x,t) = \begin{cases} 0 & |x| - t > x_0 \\ \dfrac{x_0 e^{-t}}{t} & t > x_0 \ \text{and} \ x_0 - t \le x \le t - x_0 \\ e^{-t} & t \le x_0 \ \text{and} \ t - x_0 \le x \le x_0 - t \\ \dfrac{e^{-t}(t + x + x_0)}{2t} & -t - x_0 < x < t + x_0 \ \text{and} \ x_0 + x \le t \le x_0 - x \\ \dfrac{e^{-t}(t - x + x_0)}{2t} & -t - x_0 < x < t + x_0 \ \text{and} \ x_0 - x \le t \le x_0 + x \end{cases} \quad (14)$$

[Figure 2: Square pulse scalar flux solutions, $\phi^{sp}_u + \phi^{sp}_c$, for c = 1 and $x_0$ = 0.5 at several times (t = 1, 5, 10); panel (a) also contains the uncollided scalar flux, $\phi^{sp}_u$, denoted by a dashed line.]
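Equation (14) translates into a few lines of code; the following is a minimal sketch (our own helper, with the default $x_0 = 0.5$ of Figure 2) that uses the symmetry of the solution about x = 0 to collapse the last two cases into one.

```python
import numpy as np

def phi_sp_u(x, t, x0=0.5):
    """Uncollided square-pulse scalar flux: the piecewise Eq. (14).

    Only |x| enters because the solution is symmetric about x = 0."""
    ax = abs(x)
    if ax - t > x0:                 # beyond the wavefront: no signal yet
        return 0.0
    if t > x0 and ax <= t - x0:     # interior plateau after the pulse spreads
        return x0 * np.exp(-t) / t
    if t <= x0 and ax <= x0 - t:    # still inside the original source region
        return np.exp(-t)
    return np.exp(-t) * (t - ax + x0) / (2.0 * t)   # wavefront transition
```

A quick continuity check: at $|x| = t - x_0$ the transition formula reduces to $x_0 e^{-t}/t$, and at $|x| = t + x_0$ it vanishes, matching the neighboring cases.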
Eq. (10) with a square pulse as the source and the plane pulse collided solution as the kernel gives

$$\phi^{sp}_c(x,t) = \int_{-x_0}^{x_0} ds \int_0^\pi du\, F_1(x,s,t,u)\, \Theta\left(1 - |\eta'|\right), \quad (15)$$

with identical simplifications to the integrals as in the uncollided case. The integrand function is defined as

$$F_1(x,s,t,u) = \frac{c\, e^{-t}}{8\pi} \left(1 - \eta'^2\right) \sec^2\left(\frac{u}{2}\right) \mathrm{Re}\left[ \xi^2 e^{\frac{ct}{2}(1-\eta'^2)\xi} \right], \quad (16)$$

with $\xi$ given by Eq. (8). We have not determined a simple, closed form for the integrals in Eq. (15), though we note that the integrand appears to be a well-behaved function, and we have had no trouble performing the integration. The solutions for this problem are shown in Figure 2 for times t = 1, 5, and 10 when the source width parameter is $x_0$ = 0.5. As in the plane pulse case we also show the uncollided solution at t = 1. Though the uncollided solution is still exponentially decaying, we notice that the uncollided solution approaches zero linearly, and is continuous, but nonsmooth at $|x| = x_0$.

3.2. Gaussian pulse

Next, we consider an initial pulse with a Gaussian spatial profile with standard deviation $\sigma$ centered on the origin,

$$S(x,t) = \exp\left(\frac{-x^2}{\sigma^2}\right) \delta(t). \quad (17)$$

[Figure 3: Gaussian pulse scalar flux solutions, $\phi^{gp}_u + \phi^{gp}_c$, for c = 1 and $\sigma$ = 0.5 at several times (t = 1, 5, 10); panel (a) also contains the uncollided scalar flux, $\phi^{gp}_u$, denoted by a dashed line.]

As in Section 3.1, the integration over time simplifies and the uncollided solution from Eq. (10) becomes

$$\phi^{gp}_u(x,t) = \int_{-\infty}^{\infty} ds\, \frac{e^{-t}}{2t} \exp\left(\frac{-s^2}{\sigma^2}\right) \Theta\left(1 - |\eta'|\right). \quad (18)$$

Equation (18) may be solved analytically. The step function defined in the Green's kernel changes the integration limits to x − t and x + t, and the solution is

$$\phi^{gp}_u(x,t) = \sigma \sqrt{\pi}\, e^{-t}\, \frac{\mathrm{erf}\left(\frac{t-x}{\sigma}\right) + \mathrm{erf}\left(\frac{t+x}{\sigma}\right)}{4t}. \quad (19)$$

Unlike the solution for a plane pulse or a square pulse, the uncollided angular flux induced by Eq. (17) is smooth, which has implications for the convergence of numerical solvers. The expression for the collided flux in this configuration is very similar to Eq. (15), but with a different source term and integration limits,

$$\phi^{gp}_c(x,t) = \int_{-\infty}^{\infty} ds \int_0^\pi du\, \exp\left(\frac{-s^2}{\sigma^2}\right) F_1(x,s,t,u)\, \Theta\left(1 - |\eta'|\right), \quad (20)$$

where $F_1$ is given by Eq. (16) and $\eta'$ by Eq. (13). In this case, finding the effective integration limits is simple. Solving $|\eta'| = 1$ for s gives the effective integration limits,

$$\phi^{gp}_c(x,t) = \int_{x-t}^{x+t} ds \int_0^\pi du\, \exp\left(\frac{-s^2}{\sigma^2}\right) F_1(x,s,t,u). \quad (21)$$

Figure 3 shows the solution to this problem at times t = 1, 5, and 10. In this case all of the solutions, including the uncollided solutions, are smooth.
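Both the closed form in Eq. (19) and the double integral in Eq. (21) are straightforward to evaluate; the sketch below is our illustrative implementation (not the authors' code), reusing the $F_1$ integrand of Eq. (16) and assuming a SciPy version whose `dblquad` accepts constant inner limits. The small clip on $\eta'$ only guards floating-point evaluation near the integrable edge of the domain.

```python
import numpy as np
from scipy.special import erf
from scipy.integrate import dblquad

def phi_gp_u(x, t, sigma=0.5):
    """Uncollided Gaussian-pulse scalar flux: the closed form of Eq. (19)."""
    return (sigma * np.sqrt(np.pi) * np.exp(-t)
            * (erf((t - x) / sigma) + erf((t + x) / sigma)) / (4.0 * t))

def phi_gp_c(x, t, c=1.0, sigma=0.5):
    """Collided Gaussian-pulse scalar flux, Eq. (21): quadrature over (s, u)."""
    def f(u, s):  # Gaussian source times F1 of Eq. (16), eta' = (x - s)/t
        eta = np.clip((x - s) / t, -1.0 + 1e-9, 1.0 - 1e-9)
        q = (1.0 + eta) / (1.0 - eta)
        xi = (np.log(q) + 1j * u) / (eta + 1j * np.tan(u / 2.0))
        F1 = (c * np.exp(-t) / (8.0 * np.pi) * (1.0 - eta**2)
              * np.cos(u / 2.0) ** -2
              * np.real(xi**2 * np.exp(0.5 * c * t * (1.0 - eta**2) * xi)))
        return np.exp(-s**2 / sigma**2) * F1
    val, _ = dblquad(f, x - t, x + t, 0.0, np.pi)  # s outer, u inner
    return val

print(phi_gp_u(0.0, 1.0) + phi_gp_c(0.0, 1.0))  # cf. Figure 3a
```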
3.3. Square source

We now consider sources that are nonzero for a finite length of time. The source can be understood as a superposition of pulses, like those presented in Sections 3.1 and 3.2, from t = 0 to t = $t_0$. This method uncovers difficulties in the Green's kernel that were overlooked in the construction of solutions for pulses. Namely, the expression $\frac{e^{-t}}{2t^n}$, where n is any integer, that appears in the uncollided and collided kernels, is singular as t approaches zero, and the step function $\Theta(1 - |\eta'|)$ behaves more erratically. This erratic behavior causes integrands to be poorly behaved in numerical integration, since the effective domain changes wildly, which can cause quadrature points to be wasted on regions where the integrand is zero and miss important features. We first consider a source of width $x_0$ and magnitude one turned off at t = $t_0$, centered on the origin,

$$S(x,t) = \Theta(x_0 - |x|)\, \Theta(t_0 - t). \quad (22)$$

Using this source in Eq. (10) we arrive at the following integral to define the uncollided square source solution

$$\phi^{ss}_u(x,t) = \int_0^{\min(t, t_0)} d\tau \int_{-x_0}^{x_0} ds\, \frac{e^{-(t-\tau)}}{2(t-\tau)}\, \Theta\left(1 - |\eta''|\right), \quad (23)$$

with $\eta''$ that contains the time integration variable,

$$\eta'' = \frac{x - s}{t - \tau}. \quad (24)$$

As in Section 3.2, the spatial step function in the source has been absorbed into the integration limit over s. The time-dependent step function in the source is always one for the integration limits of Eq. (23). We note that the solution for the inner integral over s of Eq. (23) is Eq. (14) with t replaced by t − τ. We have

$$\phi^{ss}_u(x,t) = \int_0^{\min(t, t_0)} d\tau\, \phi^{sp}_u(x, t - \tau). \quad (25)$$

Solving Eq. (25) is an exercise of accounting for all of the possible values of τ. For example, in the first case of the square pulse solution, the scalar flux is zero if $|x| - t > x_0$. Solving $|x| - (t - \tau) > x_0$ for τ tells us that the square source solution is zero if $\tau > t + x_0 - |x|$. Considering all of the cases in Eq. (14) allows us to come to an analytic solution for the uncollided flux from a square source,

$$\phi^{ss}_u(x,t) = \Big[-x_0\, \mathrm{Ei}(\tau - t)\Big]\Big|_0^b + \frac{1}{2} \Big[ (|x| - x_0)\, \mathrm{Ei}(\tau - t) + e^{\tau - t} \Big]\Big|_b^c + \Big[ e^{-(t-\tau)} \Big]\Big|_c^d, \quad (26)$$

where the evaluation intervals of τ are defined by

$$b = \left[\min\left(d,\, t - |x| - x_0\right)\right]_+, \quad (27)$$

$$c = \left[\min\left(d,\, t + |x| - x_0\right)\right]_+, \quad (28)$$

$$d = \left[\min\left(t_0,\, t,\, t - |x| + x_0\right)\right]_+, \quad (29)$$

where $[\cdot]_+$ returns the positive part of its argument, and Ei is the exponential integral. The expression for the collided solution takes the form of an integral over Eq. (15):

$$\phi^{ss}_c(x,t) = \int_0^{\min(t, t_0)} d\tau \int_{-x_0}^{x_0} ds\, F_2(x,s,t,\tau,u)\, \Theta\left(1 - |\eta''|\right), \quad (30)$$

[Figure 4: Square source scalar flux solutions, $\phi^{ss}_u + \phi^{ss}_c$, for c = 1, $t_0$ = 5, and $x_0$ = 0.5 at several times (t = 1, 5, 10); panels (a) and (b) also contain the uncollided scalar flux, $\phi^{ss}_u$, denoted by a dashed line.]

where the integrand $F_2$ is slightly different from $F_1$ of the square pulse case,

$$F_2(x,s,t,\tau,u) = \frac{c\, e^{-(t-\tau)}}{8\pi} \left(1 - \eta''^2\right) \int_0^\pi du\, \sec^2\left(\frac{u}{2}\right) \mathrm{Re}\left[ \xi^2 e^{\frac{c(t-\tau)}{2}(1-\eta''^2)\xi} \right], \quad (31)$$

with $\eta''$ given by Eq. (24).
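Equation (26), with the evaluation intervals of Eqs. (27)-(29), can be coded directly using SciPy's exponential integral Ei; the helper below is our sketch (names ours). It assumes t > 0 and avoids the measure-zero edge $|x| = x_0$, where an interval endpoint can land exactly at τ = t and Ei(0) diverges.

```python
import numpy as np
from scipy.special import expi  # the exponential integral Ei(x)

def phi_ss_u(x, t, x0=0.5, t0=5.0):
    """Uncollided square-source scalar flux: Eq. (26) with limits (27)-(29)."""
    pos = lambda a: max(a, 0.0)                  # the [.]_+ operator
    d = pos(min(t0, t, t - abs(x) + x0))         # Eq. (29)
    b = pos(min(d, t - abs(x) - x0))             # Eq. (27)
    c = pos(min(d, t + abs(x) - x0))             # Eq. (28)
    t1 = lambda tau: -x0 * expi(tau - t)                               # first bracket
    t2 = lambda tau: 0.5 * ((abs(x) - x0) * expi(tau - t) + np.exp(tau - t))
    t3 = lambda tau: np.exp(-(t - tau))                                # third bracket
    return (t1(b) - t1(0.0)) + (t2(c) - t2(b)) + (t3(d) - t3(c))

print(phi_ss_u(0.0, 1.0))  # cf. Figure 4a
```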
While we were able to avoid singularity difficulties by integrating analytically in finding the solution for the uncollided case in this configuration, the collided case must be integrated numerically. Due to the behavior of the step function, the effective integration domain of Eq. (30) varies drastically with τ. Therefore, the integral is not well-suited to numerical integration. Switching the order of integration and merging the step function with the integration limits over τ gives

$$\phi^{ss}_c(x,t) = \int_{-x_0}^{x_0} ds \int_0^{\min(t,\, t - |x-s|)_+} d\tau\, F_2(x,s,t,\tau,u), \quad (32)$$

and allows us to cast the integral in a form that we have found to converge at a reasonable rate. For this problem the solutions, as shown in Figure 4 for $x_0$ = 0.5 and $t_0$ = 5, are smoother than the square pulse solutions shown previously. Also, because the source is on until t = 5, there is a noticeable uncollided solution at that later time.

4. Gaussian source

We next consider a Gaussian source with standard deviation $\sigma$ that is turned off at time $t_0$, where S is given as

$$S(x,t) = \exp\left(\frac{-x^2}{\sigma^2}\right) \Theta(t_0 - t). \quad (33)$$

Like the square pulse and the square source, the Gaussian source can be considered a superposition of Gaussian pulses, as Eq. (33) is a superposition of pulses defined in Eq. (17). Therefore, the uncollided scalar flux solution can be found with a variable change from t to t − τ in Eq. (19) and integration over the time that the source is on,

$$\phi^{gs}_u(x,t) = \int_0^{\min(t, t_0)} d\tau\, \sigma \sqrt{\pi}\, e^{-(t-\tau)}\, \frac{\mathrm{erf}\left(\frac{t-\tau-x}{\sigma}\right) + \mathrm{erf}\left(\frac{t-\tau+x}{\sigma}\right)}{4(t-\tau)}. \quad (34)$$

Solving this integral requires numerical integration. While this integral involves evaluating the integrand when τ = t, the behavior of the error function allows the integrand to be well behaved. Like the pulse source, this uncollided scalar flux is associated with a smooth angular flux solution.

[Figure 5: Gaussian source scalar flux solutions, $\phi^{gs}_u + \phi^{gs}_c$, for c = 1, $t_0$ = 5, and $\sigma$ = 0.5 at several times (t = 1, 5, 10); panels (a) and (b) also contain the uncollided scalar flux, $\phi^{gs}_u$, denoted by a dashed line.]
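Because Eq. (34) is a one-dimensional integral of a well-behaved integrand, a short quadrature routine suffices; this is our sketch (defaults matching Figure 5), relying on the cancellation of the error functions to keep the τ → t endpoint benign.

```python
import numpy as np
from scipy.special import erf
from scipy.integrate import quad

def phi_gs_u(x, t, sigma=0.5, t0=5.0):
    """Uncollided Gaussian-source scalar flux, Eq. (34), by 1-D quadrature."""
    def integrand(tau):
        dt = t - tau  # near dt = 0 the erf sum is O(dt), so the ratio is finite
        return (sigma * np.sqrt(np.pi) * np.exp(-dt)
                * (erf((dt - x) / sigma) + erf((dt + x) / sigma)) / (4.0 * dt))
    val, _ = quad(integrand, 0.0, min(t, t0))
    return val

print(phi_gs_u(0.0, 1.0))  # cf. Figure 5a
```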
To find the collided flux, we integrate Eq. (20) over time,

$$\phi^{gs}_c(x,t) = \int_0^{\min(t, t_0)} d\tau \int_{-\infty}^{\infty} ds\, \exp\left(\frac{-s^2}{\sigma^2}\right) F_2(x,s,t,\tau,u)\, \Theta\left(1 - |\eta''|\right). \quad (35)$$

Just as in the square source case (Section 3.3), the integration orders for τ and s are switched to find a better behaved integrand,

$$\phi^{gs}_c(x,t) = \int_{-\infty}^{\infty} ds \int_0^{\min(t, t_0)} d\tau\, \exp\left(\frac{-s^2}{\sigma^2}\right) F_2(x,s,t,\tau,u)\, \Theta(1 - |\eta''|). \quad (36)$$

To find a more efficient integration interval over s, we unravel $\eta''$ to get $x - (t - \tau) \le s \le x + (t - \tau)$. With the floor of τ defined so that τ ≥ 0, new integration limits for s are found, and the most efficient form of the integral for the collided solution is

$$\phi^{gs}_c(x,t) = \int_{x-t}^{x+t} ds \int_0^{\min(t,\, t_0,\, t - |x-s|)_+} d\tau\, \exp\left(\frac{-s^2}{\sigma^2}\right) F_2(x,s,t,\tau,u), \quad (37)$$

where $\min(\cdot)_+$ returns the minimum of its arguments or zero if the minimum is negative. Figure 5 shows the solutions for this problem with $\sigma$ = 0.5 and $t_0$ = 5. These solutions resemble the square source solutions, especially at the later times. At t = 1 the Gaussian source solution appears to be narrower than that for the square source solution.

5. Cylindrical geometry

Ganapol et al. (2001) provides the solution for cylindrical geometry where the source is an infinite line pulse aligned with the z axis. This solution is obtained using the plane-to-point transform and integration of the point source over an infinite line. The resulting uncollided expression is

$$\phi^{l}_u(r,t) = \frac{e^{-t}}{2\pi t^2}\, \frac{1}{\sqrt{1 - \eta_p^2}}\, \Theta(1 - \eta_p). \quad (38)$$

[Figure 6: Line pulse scalar flux solutions, $\phi^{l}_u + \phi^{l}_c$, for c = 1 at several times (t = 1, 5, 10); panels (a) and (b) also contain the uncollided scalar flux, $\phi^{l}_u$, denoted by a dashed line.]

Here the superscript l denotes a line source, r is the radial coordinate, and

$$\eta_p \equiv \frac{r}{t}, \quad (39)$$

where the p subscript stands for "polar". The absolute value in the step function becomes irrelevant in this geometry and is discarded. Unlike the solutions presented thus far, where there were no singularities apart from t = 0, Eq. (38) is singular as $\eta_p$ approaches one. The collided flux is found by integrating the collided flux for a point source, which Ganapol also provides,

$$\phi^{l}_c(r,t) = \left[ 2t \int_0^{\sqrt{1-\eta_p^2}} d\omega\, \phi^{pt}_c\left( t\sqrt{\eta_p^2 + \omega^2},\, t \right) \right] \Theta(1 - \eta_p), \quad (40)$$

where the point source collided flux is

$$\phi^{pt}_c(r,t) = \Theta(1 - \eta_p) \times \left( \frac{e^{-t}}{4\pi r t^2}\, (ct) \log\left[\frac{1 + \eta_p}{1 - \eta_p}\right] + \frac{1}{2\pi}\, \frac{e^{-t}}{4\pi r t^2} \left(\frac{ct}{2}\right)^2 \left(1 - \eta_p^2\right) \int_0^\pi du\, \sec^2\left(\frac{u}{2}\right) \mathrm{Re}\left[ \left(\eta_p + i \tan\left(\frac{u}{2}\right)\right) \xi^3 e^{\frac{ct}{2}(1-\eta_p^2)\xi} \right] \right), \quad (41)$$

where the superscript pt is short for point, $\eta_p$ is given by Eq. (39), and $\xi$ by Eq. (8). The step function is redundant since it has been absorbed into the integration limits of Eq. (40). The line source solution is useful for verification of 2-D transport codes. However, as we can see from the solutions in Figure 6, there is a singularity at the wavefront that is still present at t = 5 on the scale of the figure.

5.1. Gaussian pulse

We consider an infinite cylindrical Gaussian pulse of standard deviation $\sigma$,

$$S(r,t) = \exp\left(-\frac{r^2}{\sigma^2}\right) \delta(t). \quad (42)$$

For the uncollided flux, it is actually necessary to transform back into Cartesian coordinates, since the integration in polar coordinates causes the integrand to be badly behaved. Using the relationship $r^2 = x^2 + y^2$, we introduce $r'$,

$$r'^2 = (x - s)^2 + (y - v)^2, \quad (43)$$

where s and v are dummy variables that are integrated over Cartesian space. Defining a new $\eta$ for the Green's kernel,

$$\eta'_p = \frac{r'}{t}. \quad (44)$$
Now the uncollided flux for a Gaussian pulse may be written in integral form,

$$\phi^{gp}_u(x,y,t) = \frac{e^{-t}}{2\pi t^2} \int_{-\infty}^{\infty} dv \int_{-\infty}^{\infty} ds\, \frac{1}{\sqrt{1 - \eta'^2_p}}\, \exp\left(-\frac{s^2 + v^2}{\sigma^2}\right) \Theta\left(1 - \eta'_p\right). \quad (45)$$

The integrand of Eq. (45) is still poorly behaved, but assimilating the step function into the integration limits of the integral over s will cast it into a well-behaved form. This is done by finding the roots of s for the equation $\eta'_p = 1$. The expression for the uncollided flux becomes

$$\phi^{gp}_u(x,y,t) = \frac{e^{-t}}{2\pi t^2} \int_{-\infty}^{\infty} dv \int_{s_a}^{s_b} ds\, \frac{1}{\sqrt{1 - \eta'^2_p}}\, \exp\left(-\frac{s^2 + v^2}{\sigma^2}\right), \quad (46)$$

where the integration limits over s are

$$s_a = x - \sqrt{\left[t^2 - v^2 + 2vy - y^2\right]_+}, \quad (47)$$

$$s_b = x + \sqrt{\left[t^2 - v^2 + 2vy - y^2\right]_+}, \quad (48)$$

where $[\cdot]_+$ returns the positive part of its argument. Since the solution is symmetric about the pole, it is not necessary to integrate Eq. (46) over a two-dimensional domain. We can choose y = 0 and find the uncollided solution as a function of r,

$$\phi^{gp}_u(r,t) = \frac{e^{-t}}{2\pi t^2} \int_{-\infty}^{\infty} dv \int_{s_a}^{s_b} ds\, \frac{1}{\sqrt{1 - \eta''^2_p}}\, \exp\left(-\frac{s^2 + v^2}{\sigma^2}\right), \quad (49)$$

where

$$\eta''_p = \frac{\sqrt{(r - s)^2 + v^2}}{t}. \quad (50)$$

The expression for the collided flux is better behaved in polar coordinates, where the variables over which the Green's kernel is integrated become $\rho$ and $\theta'$. With this transformation, we define a new radius in polar coordinates,

$$r' = \sqrt{(r\cos(\theta) - \rho\cos(\theta'))^2 + (r\sin(\theta) - \rho\sin(\theta'))^2}. \quad (51)$$

$\eta'_p$ is given by Eq. (44) with Eq. (51) as $r'$. Since the solution will be symmetric about r, the value of the angular coordinate $\theta$ is arbitrary. However, to properly integrate the solution kernel, the angular coordinate $\theta'$ must be independent from $\theta$. Therefore, the collided flux requires integration over angle and radius,

$$\phi^{gp}_c(r,\theta,t) = 2t \int_0^{2\pi} d\theta' \int_0^{\infty} d\rho\, \rho\, \exp\left(-\frac{\rho^2}{\sigma^2}\right) \left[ \int_0^{\sqrt{1-\eta'^2_p}} d\omega\, \phi^{pt}_c\left( t\sqrt{\eta'^2_p + \omega^2},\, t \right) \right] \Theta\left(1 - \eta'_p\right). \quad (52)$$

Evaluating Eq. (52) requires four integrals over a difficult integrand.
To simplify the integral, we first recast it in a more explicit form where the step function from the point source is brought out of the function,

$$\phi^{gp}_c(r,\theta,t) = 2t \int_0^{2\pi} d\theta' \int_0^{\infty} d\rho \int_0^{\sqrt{1-\eta'^2_p}} d\omega \int_0^{\pi} du\, Q(\rho)\, \rho\, F^{pt}_2\left( t\sqrt{\eta'^2_p + \omega^2},\, t \right) \Theta(1 - \eta'_p) + 2t \int_0^{2\pi} d\theta' \int_0^{\infty} d\rho \int_0^{\sqrt{1-\eta'^2_p}} d\omega\, Q(\rho)\, \rho\, F^{pt}_1\left( t\sqrt{\eta'^2_p + \omega^2},\, t \right) \Theta(1 - \eta'_p), \quad (53)$$

where $F^{pt}_1$ is the first collided kernel for a point source without the step function,

$$F^{pt}_1(r,t) = \frac{e^{-t}}{4\pi r t^2}\, (ct) \log\left[\frac{1 + \eta_p}{1 - \eta_p}\right], \quad (54)$$

and $F^{pt}_2$ is the integrand for the second, on to infinite, collided solution for a point source without the step function,

$$F^{pt}_2(r,t) = \frac{1}{2\pi}\, \frac{e^{-t}}{4\pi r t^2} \left(\frac{ct}{2}\right)^2 \left(1 - \eta_p^2\right) \sec^2\left(\frac{u}{2}\right) \mathrm{Re}\left[ \left(\eta_p + i \tan\left(\frac{u}{2}\right)\right) \xi^3 e^{\frac{ct}{2}(1-\eta_p^2)\xi} \right], \quad (55)$$

where $\eta_p$ is given by Eq. (39). The source, $Q(\rho)$, is

$$Q(\rho) = \exp\left(-\frac{\rho^2}{\sigma^2}\right). \quad (56)$$

The step function, $\Theta(1 - \eta'_p)$, causes the effective integration domain to be erratic and the integrand to be badly behaved. As in the uncollided case, solving $\eta'_p = 1$ gives the upper and lower bounds of the integration for $\rho$ as dictated by the step function. Now a convergent form of the collided flux can be found,

$$\phi^{gp}_c(r,\theta,t) = 2t \int_0^{2\pi} d\theta' \int_{\rho_a}^{\rho_b} d\rho \int_0^{\sqrt{1-\eta'^2_p}} d\omega \int_0^{\pi} du\, Q(\rho)\, \rho\, F^{pt}_2\left( t\sqrt{\eta'^2_p + \omega^2},\, t \right) \Theta(1 - \eta'_p) + 2t \int_0^{2\pi} d\theta' \int_{\rho_a}^{\rho_b} d\rho \int_0^{\sqrt{1-\eta'^2_p}} d\omega\, Q(\rho)\, \rho\, F^{pt}_1\left( t\sqrt{\eta'^2_p + \omega^2},\, t \right) \Theta(1 - \eta'_p), \quad (57)$$

where

$$\rho_a = r\cos(\theta - \theta') - \sqrt{\left[\frac{r^2\cos(2(\theta - \theta')) - r^2}{2} + t^2\right]_+}, \quad (58)$$

$$\rho_b = r\cos(\theta - \theta') + \sqrt{\left[\frac{r^2\cos(2(\theta - \theta')) - r^2}{2} + t^2\right]_+}, \quad (59)$$

$$\eta'_p = \frac{\sqrt{(r\cos(\theta) - \rho\cos(\theta'))^2 + (r\sin(\theta) - \rho\sin(\theta'))^2}}{t}. \quad (60)$$

[Figure 7: Cylindrical Gaussian source scalar flux solutions, $\phi^{gs}_u + \phi^{gs}_c$, for c = 1, $t_0$ = 5, and $\sigma$ = 0.5 at several times (t = 1, 5, 10); panel (a) also contains the uncollided scalar flux, $\phi^{gs}_u$, denoted by a dashed line.]

This solution with $\sigma$ = 0.5 and $t_0$ = 5 is shown in Figure 7. Integrating the source over a finite spatial range leads to a removal of the singularity in the solution at the wavefront. This leads to a smooth solution at all times." } ], "Nayan Saxena": [ { "url": "http://arxiv.org/abs/2111.07138v1", "title": "Towards One Shot Search Space Poisoning in Neural Architecture Search", "abstract": "We evaluate the robustness of a Neural Architecture Search (NAS) algorithm\nknown as Efficient NAS (ENAS) against data agnostic poisoning attacks on the\noriginal search space with carefully designed ineffective operations.
We\nempirically demonstrate how our one shot search space poisoning approach\nexploits design flaws in the ENAS controller to degrade predictive performance\non classification tasks. With just two poisoning operations injected into the\nsearch space, we inflate prediction error rates for child networks up to 90% on\nthe CIFAR-10 dataset.", "authors": "Nayan Saxena, Robert Wu, Rohan Jain", "published": "2021-11-13", "updated": "2021-11-13", "primary_cat": "cs.LG", "cats": [ "cs.LG", "cs.AI", "cs.NE" ], "main_content": "Introduction The problem of finding optimal deep learning architectures has recently been automated by neural architecture search (NAS) algorithms. These algorithms continually sample operations from a predefined search space to construct neural networks that optimize a performance metric over time, eventually converging to better child architectures. This intuitive idea greatly reduces human intervention by restricting human bias in architecture engineering to just the selection of the predefined search space (Elsken et al. 2019). Although NAS has the potential to revolutionize architecture search across many applications, human selection of the search space remains a security risk that needs to be evaluated before NAS can be deployed in security-critical domains. While NAS has been studied to further develop more adversarially robust networks through the addition of dense connections (Guo et al. 2020), little work has been done in the past to assess the adversarial robustness of NAS itself. Search phase analysis has shown that computationally efficient algorithms such as ENAS are worse at truly ranking child networks due to their reliance on weight sharing (Yu et al. 2019). Finally, most traditional poisoning attacks involve injecting mislabeled examples into the training data and have been executed against classical machine learning approaches (Schwarzschild et al. 2021). We validate these concerns by evaluating the robustness of one such NAS algorithm, known as Efficient NAS (ENAS) (Pham et al. 2018), against data-agnostic search space poisoning (SSP) attacks on the CIFAR-10 dataset. Throughout this paper, we focus on the pre-optimized ENAS search space Ŝ = {Identity, 3x3 Separable Convolution, 5x5 Separable Convolution, Max Pooling (3x3), Average Pooling (3x3)} (Pham et al. 2018).

Search Space Poisoning (SSP)

The idea behind SSP, as shown in Figure 1, is to inject a precisely designed multiset P of ineffective operations into the ENAS search space, making the search space S := Ŝ ∪ P. Our approach exploits the core functionality of the ENAS controller, which samples child networks from a large computational graph of operations, by introducing highly ineffective local operations into the search space. On the attacker's behalf, this requires no a priori knowledge of the problem domain or dataset being used, making this new approach more favourable than traditional data poisoning attacks.

[Figure 1: Overview of Search Space Poisoning (SSP)]

Multiple-Instance Poisoning

As a naïve strategy, we first propose multiple-instance poisoning, which increases the likelihood of sampling bad operations by including duplicates of these bad operations in the search space. Through experimental results we discovered that biasing the search space this way resulted in final networks that are mostly comprised of these poor operations, with error rates exceeding 80%.
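To make the injection concrete, the sketch below shows one way the poisoned search space S = Ŝ ∪ P could be composed. It is our illustrative reconstruction, not the authors' ENAS code: PyTorch modules stand in for the search-space operations (depthwise convolutions approximate the separable convolutions), the helper names are ours, and the one-shot variant anticipates the two operations introduced in the next paragraph.

```python
import torch.nn as nn

def base_search_space(ch):
    """Stand-in for the pre-optimized ENAS search space S-hat."""
    return [
        nn.Identity(),
        nn.Conv2d(ch, ch, 3, padding=1, groups=ch),  # approx. 3x3 separable conv
        nn.Conv2d(ch, ch, 5, padding=2, groups=ch),  # approx. 5x5 separable conv
        nn.MaxPool2d(3, stride=1, padding=1),
        nn.AvgPool2d(3, stride=1, padding=1),
    ]

def multiple_instance_poison(ch, n_copies=300):
    """S = S-hat union P: overwhelm the space with duplicated bad operations
    (Dropout(p=1) zeroes every activation during training)."""
    return base_search_space(ch) + [nn.Dropout(p=1.0) for _ in range(n_copies)]

def one_shot_poison(ch):
    """Two-operation poisoning set: Dropout(p=1) plus a heavily dilated
    ('stretched') 3x3 convolution."""
    return base_search_space(ch) + [nn.Dropout(p=1.0),
                                    nn.Conv2d(ch, ch, 3, padding=50, dilation=50)]
```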
However, as shown in Figure 2, to perform well this approach requires overwhelming the original search space with up to 300 bad operations (a 50:1 ratio of bad operations per good operation), which is unreasonable. The motivation then is to reduce the ratio of bad to good operations down to 1:1, or even lower, to make search space poisoning more feasible and effective.

[Figure 2: Final validation and test classification errors as a function of multiple operation instances. (a) Identity layers were moderately effective. (b) Gaussian noise reached high error rates even with fewer operations. (c) Dropout proved most effective. (d) Transposed convolutions plateaued after a saturation point.]

Towards One Shot Poisoning

In an attempt to improve the attack, we further reduced the number of poisoning points to just 2 by adding (i) Dropout(p = 1) and (ii) Stretched Conv(k = 3, padding, dilation = 50) to the original search space. Our rationale is that dropout operations with p = 1 would erase all information and produce catastrophic values such as 0 or not-a-number (NaN). The results were promising, with error rates shooting up to 90% very quickly during training, as seen in Figure 3 and Table 1. An example final child network producing these high errors can be observed in Figure 4.

[Figure 3: (a) Validation error for one shot poisoning over 300 epochs. (b) Distribution of bad operations sampled by the ENAS controller after 300 epochs.]

Table 1: Experimental results showing how one shot poisoning proves surprisingly effective with just 2 points as compared to its multiple-instance counterpart with 300 points.
- Ŝ (baseline): |P| = 0, validation error 16.4%, test error 19.8%
- Ŝ + 300{Dropout(p = 1)}: |P| = 300, validation error 84.8%, test error 84.3%
- Ŝ + {Conv(k = 3, p, d = 50), Dropout(p = 1)}: |P| = 2, validation error 90.1%, test error 90.0%

[Figure 4: Network produced by ENAS after one shot poisoning, with good operations (Conv 3x3, Sep 5x5, Max Pool, Avg Pool) highlighted in green and poisoning operations (Stretched Conv 3x3, Dropout(p = 1)) highlighted in red. The search space utilized is the same as shown in Table 1 with two poisoning points.]" }, { "url": "http://arxiv.org/abs/2111.00870v1", "title": "Statistical Consequences of Dueling Bandits", "abstract": "Multi-Armed-Bandit frameworks have often been used by researchers to assess\neducational interventions, however, recent work has shown that it is more\nbeneficial for a student to provide qualitative feedback through preference\nelicitation between different alternatives, making a dueling bandits framework\nmore appropriate. In this paper, we explore the statistical quality of data\nunder this framework by comparing traditional uniform sampling to a dueling\nbandit algorithm and find that dueling bandit algorithms perform well at\ncumulative regret minimisation, but lead to inflated Type-I error rates and\nreduced power under certain circumstances. Through these results we provide\ninsight into the challenges and opportunities in using dueling bandit\nalgorithms to run adaptive experiments.", "authors": "Nayan Saxena, Pan Chen, Emmy Liu", "published": "2021-10-16", "updated": "2021-10-16", "primary_cat": "cs.LG", "cats": [ "cs.LG", "math.ST", "stat.TH" ], "main_content": "Introduction Rapid growth in online learning has provided scientists and researchers a new digital platform to conduct randomised experiments with students in real-world settings.
This data can be leveraged by education researchers to adaptively assign students carefully tailored educational material and to further improve the quality of their content through careful exploratory assessment of the assigned conditions [19; 23]. To achieve these goals, one of the most popular problem frameworks is the Multi-Armed Bandit (MAB) problem, which focuses on optimal assignment of conditions based on numerical reward signals from the participant (student), whose goal is to trade off between exploration and exploitation of conditions [3; 20]. Historically, MAB algorithms have been used in industrial settings to leverage user feedback and adaptively display more popular advertisements, websites, and produced content [8; 13], whereas in research settings these algorithms have been used for clinical trials [14; 21; 4], robot control [15; 16], and by behavioral and social scientists for crowd-sourcing experiments [1; 5; 2]. While MAB algorithms focus entirely on quantitative feedback (scalar reward), it has been shown that human motor learning is maximised when subjected to both qualitative and quantitative feedback [12], and recent work suggests that quantitative metrics might not be the best indicators of true human preferences [18]. Therefore, a more qualitative approach for preference elicitation from human participants should be more ideal for leveraging student feedback, which can in turn be achieved through preference-based MAB (dueling bandit) algorithms like Double Thompson Sampling (DTS) [7; 24], which focus on participants simply choosing between two presented alternatives. Through this paper, we take a closer look at leveraging these dueling bandit algorithms for adaptive experimentation by empirically investigating the statistical properties of the DTS algorithm against uniform random experimental condition selection. Our study is primarily motivated by prior work assessing the statistical properties of MAB algorithms in an educational setting [19], and builds upon recent work exploring the challenges that come with conducting hypothesis tests to analyze data from adaptive experiments using bandit algorithms [22], such as proposing strategies for modifying MAB algorithms to trade off reward and power [25] and improving coverage of confidence intervals [10]. In summary, our main contributions are:

1. We emphasise the conceptual significance of quantifying the differences in statistical power, regret, and false positive rate between uniform sampling and Double Thompson Sampling.
2. Through simulation experiments we show that, for a reasonable decrease in power, it may be advantageous to run experiments with dueling bandits when the number of arms is small and the expected effect size is large.
3. By applying the same analysis to the real-world Microsoft Learning to Rank (LTR) dataset, we confirm that using DTS results in lowered power, lower regret, and a higher proportion of people being assigned to the better alternative.

2 Preliminaries

2.1 Multi-armed Bandits

As a precursor to dueling bandits, we consider a central problem in sequential decision making and adaptive experiment design known as the multi-armed bandit (MAB) problem [20]. The problem setup consists of k arms (actions) such that for every action $a_i \in A$ the corresponding probability of yielding a success is $p_i \in [0, 1]$.
With no a priori information about these success probabilities, the central goal for an agent is to maximise the overall number of successes by pulling these arms, or performing these actions, and ultimately settling on one over a fixed period of time denoted by t. Assume that under this framework the expected reward of arm $a_i$ is given by $\mu(a_i)$ and $\mu^* = \arg\max \mu(a_i)$, for $i \in \{1, \ldots, k\}$. The objective of an agent is then to minimise the cumulative regret,

$$R^{MAB}_t = \sum_{i=1}^{t} \left[ \mu^* - \mu_{a(i)} \right].$$

Here the agent is allowed to choose only one action, a(i), at a given time step. However, in many cases during adaptive experimentation, it may be difficult to frame the result of an action as a success or failure. In particular, in many cases what experimenters are interested in is simply which arm is preferred to the others, in which case pairwise preferences may be more appropriate for the subject to make. To accommodate pairwise preferences we now consider a generalised form of the MAB problem, also known as the dueling bandit problem.

2.2 Dueling Bandits

The dueling bandit problem can be characterised as a special case of the popular Multi-armed Bandit (MAB) problem that focuses on pairwise comparisons between actions at every iteration [26]. The problem setup is similar to the MAB problem, except at every iteration the agent chooses two actions $a_m, a_n \in A$ and performs a comparison before choosing the one action that they prefer.¹ [¹The traditional dueling bandit framework allows comparisons between the same actions (self-dueling), where m = n, but throughout this paper we assume m ≠ n, as it is unlikely for users to be presented with comparisons between the same actions during adaptive experimentation.] Throughout this paper we specifically utilise the Double Thompson Sampling (DTS) algorithm, which is an adaptation of the popular Thompson Sampling algorithm in the regular MAB context [24]. A comprehensive outline of the DTS procedure can be found in Appendix A. Formally, the probability of one arm winning over another is given by $P(a_m \succ a_n)$, which we abbreviate as $P_{mn} \equiv P(a_m \succ a_n)$. For each trial i the outcome of each comparison is binary, $x_i \sim \mathrm{Bernoulli}(P_{mn})$, where the probability of one arm winning is formally given by

$$P(a_m \succ a_n) = \Delta(a_m, a_n) + 0.5.$$
It should be noted that it is possible for no Condorcet winner to exist, in which case the same de\ufb01nition can be extended to obtain a set of Copeland winners which always exist. [27; 24]. Strong Regret Suppose a(i) m and a(i) n are actions chosen at a given timestep i and ao is the optimal action. Then under this framework the overall objective for an agent is to minimise the cumulative strong regret, RDUEL t = t X i=1 [\u2206om + \u2206on] 3 Methods 3.1 Experimental Setup In order to examine the impact of dueling bandit algorithms as compared to uniform random assignment during adaptive experimentation, we looked at several conditions relevant to experimenters by varying the number of arms and the effect sizes between pairs of arms. For each set of conditions 5000 simulations were carried out, each corresponding to a real-world study, where data at each timestep would correspond to data collected from a participant exposed to a treatment arm. Furthermore, to assess the statistical quality of each algorithm, we considered the statistical power in comparing pairs of arms, the regret accumulated, and the false positive rate. During its operation, each simulation assigned hypothetical participants using either the DTS algorithm, or uniformly at random and was initialized using a preference matrix P \u2208Rn\u00d7n, where n is the number of arms. For each given pair of arms (i, j), the effect size was calculated (Cohen\u2019s w of 0.1, 0.3 or 0.5) of the comparison based on the difference (0.05, 0.15, or 0.25) between Pij and Pji, noting that Pij + Pji = 1. Finally, in order to compute the false positive rate, preference matrices with zero effect size for all comparisons between pairs of arms were also used. These experimental conditions are also summarized in Table 1. Data and code are publicly available at https://bit.ly/35J2xH0. 3.2 Experimental Dataset In addition to simulated data, we also present experiments using data collected from users in a real-world context. In particular, we perform simulations using the Microsoft Learning to Rank (LTR) dataset which presents pairs of search queries and documents, along with the relevance ratings of the document [17]. As A/B testing is commonly used to test ranking and recommendation algorithms, this is an appropriate context in which we benchmark outcomes of such experiments. We used an implicit preference matrix over all 136 rankers derived from this dataset similar to [27], and then randomly sampled Condorcet and non-Condorcet submatrices from this larger matrix in order to simulate experiments comparing different rankers. The sample size is set as \u0000n 2 \u0001 \u00d7 m, where arm i and arm j are the pair of arms with the smallest effect size, and m is the number of participants needed to achieve expected statistical power of 0.8 for effect size 0.1. This simulates the real-world scenario where the number of required participants is unknown. To further evaluate the long-term performance of the two assignment methods, we ran the simulations with sample sizes up to 10 times more than the initial sample size and tracked the same set of metrics as in the main analysis. 
3 \fCondition Description Sampling type Double Thompson Sampling Uniform Random Sampling Number of arms n \u2208{3, 5} Sample sizes Uniform effect sizes Non-zero effect sizes: s = 10mn where m is the number of participants needed to achieve expected statistical power of 0.8 and n is the number of paired comparisons in that simulation Zero effect size Same sample sizes as in the non-zero effect case calculated using the number of participants needed to achieve expected statistical power of 0.8 for effect size 0.3. Learning to Rank dataset Different effect sizes: s = 10mn, where m is the number of participants needed to achieve expected statistical power of 0.8 for 0.1 effect size, and n is the number of paired comparisons in that simulation. Effect sizes Simulated dataset None (0), Small (0.1), Medium (0.3), Large (0.5). Same effect size between each pair of arms. The winning arm in each pair is randomly assigned in non-zero cases. Learning to Rank dataset For each pair of arms i and j, the effect size is in [0, 1). Table 1: Summary of conditions varied across simulations 3.3 Analysis After each simulation, we shift our focus to analyzing several outcomes relevant to experimenters: (a) statistical power to detect an effect between each pair of arms, (b) false positive rate in detecting effects when none exist, (c) regret over time, and (d) percentage of participants assigned to the Condorcet winner when one exists. For simulations where there was some difference between pairs of arms, a Chi-squared contingency test was performed for each pair of arms, with signi\ufb01cance level \u03b1 = 0.05, which is the standard across multiple domains including educational experiments. For each simulation, regret over time, reward over time, and percentage of participants assigned to the Condorcet winner was recorded and aggregated. Finally, to calculate the false positive rate, we considered a pair of arms as a false positive if the comparison between them reached signi\ufb01cance, given there should be no effect between these two arms. 4 Results 4.1 Synthetic Data 4.1.1 Conditions that differ in terms of effectiveness Average Power Over Time In educational settings, different conditions, such as sample solutions, may have a different impact on students\u2019 learning ef\ufb01ciency and engagement with the educational resources. However, one condition that outperforms the others is often not statistically signi\ufb01cant until testing is conducted with many students involved. Yet, different ways of conducting the experiments may require a different number of participants to reach signi\ufb01cant effects amongst conditions. Figure 1: Average power over time for DTS algorithm (blue) and Uniform assignment (red) with synthetic data. 4 \fWe modeled the means of the power of different pairs of conditions, as this measures how well each assignment did in \ufb01nding statistically signi\ufb01cant effects among all the conditions. It can be observed in Figure 1 that uniform sampling consistently reached the 0.8 power threshold with fewer participants as compared to DTS, in both 3 and 5-condition settings. This pattern was also observed when \ufb01nal power was recorded for both uniform sampling and DTS, as seen in Figure 2, thus indicating that DTS is more susceptible to in\ufb02ated Type-II error rates (reduced power). 
This shows that when there are insuf\ufb01cient students to test among multiple variants, uniform sampling is more often a better way of assignment given the effects among different conditions are of the highest priority. Figure 2: Comparison of \ufb01nal power between DTS algorithm (blue) and Uniform assignment (red) for simulated dataset where the dotted line indicates effect detected. Proportion of Condorcet Winners In educational experiments, given that there is one unknown best condition, we examined how uniform sampling and double Thompson sampling ful\ufb01ll the goal of providing students with the best condition by calculating the proportion of trials in which a student would be assigned to the Condorcet winner. Figure 3: Proportion of trials with Condorcet winners for DTS algorithm (blue) and Uniform assignment (red) with synthetic data. Here, the dotted line indicates expected assignment. From Figure 3, it can be observed that, on average, DTS assigned a higher number of Condorcet winners to students who participated in the experiments. Across both 3 and 5-condition simulations, we observe that DTS assigned the Condorcet winner to almost every student, whereas with uniform 5 \fsampling the number of students receiving the Condorcet winner is in line with the statistical expectation. Cumulative Strong Regret Apart from the proportion of Condorcet winners, we used cumulative strong regret to measure the different experiences uniform sampling and DTS give to students. As seen in Figure 4, a noticeable difference in cumulative strong regret between uniform sampling and DTS can be seen, which grew more evident with increase in effect size from 0.5 to 0.1, and number of arms from 3 to 5. Across all simulations, DTS performed better by accumulating less regret as compared to uniform sampling. Figure 4: Comparison of cumulative strong regret over time between DTS algorithm (blue) and Uniform assignment (red) for simulated dataset. 4.1.2 Conditions that are equally-effective False Positive Rate To simulate scienti\ufb01c settings like clinical trials, along with behavioral and social sciences where little to no difference between arms is commonly observed, we measured the false positive rate when conditions are equally effective. False positives here mean that at least one comparison between arms produces a signi\ufb01cance value < 0.05 while the effect size between each pair of arms is 0. From Figure 5, we observe that the false positive rates for uniform sampling remain consistent across 3-condition and 5-condition scenarios and are relatively lower when compared to DTS algorithm. Figure 5: False positive rate for DTS algorithm (blue) and Uniform assignment (red) with simulated data. This indicates that when there is little to no difference between conditions the DTS algorithm will result in a higher Type-I error as compared to uniform sampling. These issues are especially problematic for scienti\ufb01c research, since erroneously believing an intervention is better than a control may result in lack of reproducibility. 6 \f4.2 Learning to Rank Dataset In real settings the effect sizes between any two conditions are not limited to the set {0.1, 0.3, 0.5}, therefore we ran the same experiments on the LTR dataset. Overall, our results on the real-world dataset were consistent with our simulation studies. We noticed that DTS provided the best condition to students more often and accrued relatively lower average power as seen in Figure 6. 
It should further be noted that in a few trials DTS did not assign the Condorcet winner to the majority of students possibly because the differences between the best condition and other arms is miniscule, thus requiring more participants for the algorithm to learn the best condition. Figure 6: Proportion of trials with Condorcet winners alongside average power over time for DTS algorithm (blue) and Uniform assignment (red) with LTR dataset. Consistent with our results on synthetic data, it can further be observed in Figure 6 that uniform sampling reached the 0.8 power threshold with fewer participants as compared to DTS across both 3 and 5-condition settings. This pattern was also observed when \ufb01nal power was recorded for both uniform sampling and DTS, as seen in Figure 7, thus reinforcing the \ufb01nding that DTS is more susceptible to reduced power even in real world settings. Figure 7: Comparison of \ufb01nal power between DTS algorithm (blue) and Uniform assignment (red) for LTR dataset where the dotted line indicates effect detected. Finally, we also observed that DTS does a much better job at minimising regret than uniform sampling on the LTR data set as seen in Figure 8. Overall, we note that both uniform sampling and DTS are useful algorithms in educational contexts depending on the available resources and potential intervention goals. Uniform sampling is more appropriate in cases where \ufb01nding signi\ufb01cant effects is of higher priority and the number of students is small, whereas DTS is more advantageous when better student experience is more important through optimal allocation. 7 \fFigure 8: Comparison of cumulative strong regret over time between DTS algorithm (blue) and Uniform assignment (red) for LTR dataset 5" + } + ], + "Ilham Variansyah": [ + { + "url": "http://arxiv.org/abs/2305.07646v1", + "title": "An effective initial particle sampling technique for Monte Carlo reactor transient simulations", + "abstract": "We propose a technique to effectively sample initial neutron and delayed\nneutron precursor particles for Monte Carlo (MC) simulations of typical\noff-critical reactor transients. The technique can be seen as an improvement,\nor alternative, to the existing ones. Similar to some existing techniques, the\nproposed sampling technique uses the standard MC criticality calculation.\nHowever, different from the others, the technique effectively produces\nuniform-weight particles around user-specified target sizes. The technique is\nimplemented into the open-source Python-based MC code MC/DC and verified\nagainst an infinite homogeneous 361-group medium problem and the 3D C5G7-TD\nbenchmark model.", + "authors": "Ilham Variansyah, Ryan G. McClarren", + "published": "2023-05-12", + "updated": "2023-05-12", + "primary_cat": "physics.comp-ph", + "cats": [ + "physics.comp-ph" + ], + "main_content": "INTRODUCTION The advance of high-performance parallel computing promotes the practicality of high-\ufb01delity reactor transient Monte Carlo (MC) simulations [1\u20135]. The reactor transients that are of typical interest include power maneuvers and safety/accident simulations, all of which start off of an assumed steady-state, critical initial condition. A technique to effectively sample particles\u2014neutrons and delayed neutron precursors (DNPs)\u2014from such a critical initial condition is needed to run the time-dependent MC simulations. There are two classes of techniques in the current literature. 
The first one is based on MC criticality calculation, during which particles can be sampled via the collision estimator. Implementations that apply this class of technique include [1,2,4]. In all of the implementations, one cannot directly set the desired sample sizes: the number of particles sampled would, respectively, be the same as the number of collisions in the last fission cycle and (may crucially) be dependent on user-specified survival probability factors. Furthermore, the resulting particle weight distribution can vary widely, by many orders of magnitude. The other class is based on running a specialized time-dependent fixed-source problem of the steady-state system prior to the actual transient problem [3,5]. In this approach, census time-step sizes need to be carefully determined; the time-stepping simulation continues until the fission source distribution is converged (similar to inactive cycles in criticality calculation); and then, finally, the particles can be sampled in and at the end of the final time step, whose size may need to differ from the previous ones to optimize the sampling, which, however, introduces another tunable parameter. The proposed sampling technique is based on criticality calculation and can be seen as an improvement, or an alternative, to the existing ones. The key feature of the technique is that it produces uniform-weight particles around user-specified target sizes. In Section 2, we formulate the technique and discuss how it compares with the existing ones. Section 3 presents verification results of the technique against an infinite multigroup problem and the 3D C5G7-TD4 benchmark model [6]. Finally, we summarize and discuss future work in Section 4.

2. THE SAMPLING TECHNIQUE

Let us consider the time-dependent neutron transport equations in operator notation:

$$L_\psi[\psi(\vec{r}, \hat{\Omega}, E, t)] = n_{SS}(\vec{r}, \hat{\Omega}, E)\,\delta(t), \quad (1)$$

$$L_{C,j}[C_j(\vec{r}, t)] = C_{SS,j}(\vec{r})\,\delta(t), \quad j = 1, 2, \ldots, J, \quad (2)$$

where $L_\psi[\cdot]$ and $L_{C,j}[\cdot]$ are the usual transport operators for the neutron angular flux $\psi$ and DNP concentration $C_j$. We note that the typical initial conditions $\psi(\vec{r}, \hat{\Omega}, E, 0) = \psi_{SS}(\vec{r}, \hat{\Omega}, E)$ and $C_j(\vec{r}, 0) = C_{SS,j}(\vec{r})$ are replaced by the $\delta(t)$ fixed sources, which are more conveniently modeled for the MC method. The initial neutron angular density $n_{SS}(\vec{r}, \hat{\Omega}, E)$ and DNP concentration $C_{SS,j}(\vec{r})$ distributions are determined based on the steady-state angular flux $\psi_{SS}(\vec{r}, \hat{\Omega}, E)$:

$$n_{SS}(\vec{r}, \hat{\Omega}, E) = \frac{1}{v}\, \psi_{SS}(\vec{r}, \hat{\Omega}, E), \quad (3)$$

$$C_{SS,j}(\vec{r}) = \frac{1}{k_{\mathrm{eff}}\, \lambda_j} \int_0^\infty \nu_{d,j}(\vec{r}, E)\, \Sigma_f(\vec{r}, E) \left[ \int_{4\pi} \psi_{SS}(\vec{r}, \hat{\Omega}, E)\, d\Omega \right] dE. \quad (4)$$

The steady-state angular flux distribution is usually obtained via criticality calculation, since it is essentially the eigenfunction associated with the eigenvalue $k_{\mathrm{eff}}$. In practice, a criticality search needs to be performed, and $k_{\mathrm{eff}} \approx 1$ is accepted within some tolerance.
However, in some computational exercises, such as the benchmark problem C5G7-TD [6], a non-critical ($k_{\text{eff}} \neq 1$) configuration can be used as the initial condition as long as we include the $1/k_{\text{eff}}$ factor in the fission production terms of the time-dependent transport operators $\mathcal{L}_\psi[\cdot]$ and $\mathcal{L}_{C,j}[\cdot]$. One can get neutron and DNP samples via the collision estimator during the MC criticality calculation [1,2]. This sampling should be performed only if the fission source is already converged. One possible implementation of the idea is as follows. At each collision event, we get a neutron sample which is a copy of the inducing neutron but with the weight $w_n = \left(w\frac{1}{\Sigma_t}\right)\frac{1}{v}$, (5) where $w$ is the weight of the inducing neutron. In addition, we also get a DNP sample with the same location $\vec{r}$ as the inducing neutron, group number $j$ sampled from the probability $P_{\text{group}}(j)$, and the effective weight $w_C$: $P_{\text{group}}(j) = \frac{\nu_{d,j}}{\lambda_j}\left[\sum_{j'=1}^{J}\frac{\nu_{d,j'}}{\lambda_{j'}}\right]^{-1}$, $w_C = \left(w\frac{1}{\Sigma_t}\right)\sum_{j=1}^{J}\frac{\nu_{d,j}\Sigma_f}{k_{\text{eff}}\lambda_j}$. (6) Given this collision-based estimator, the number of particle samples that we collect would be the same as the number of collisions occurring during the active cycles (as in [2]) or the last cycle (as in [4]) of the MC criticality calculation. Suppose that we sample the particles during the active cycles. If there are on average $N_{\text{coll}}$ collisions per cycle, and we run $N_{\text{active}}$ active cycles, then we will get a total of $N_{\text{tot}} = N_{\text{coll}} \times N_{\text{active}}$ samples for neutrons and DNPs. Generally, $N_{\text{tot}} \neq N_n$ and $N_C$. We can apply a population control technique [7] to the neutron and DNP sample banks to exactly yield the targeted population sizes. However, this requires us to store all the $N_{\text{tot}}$ neutrons and DNPs, which may be computationally prohibitive because if $N$ is the number of fission source particles per cycle, then typically $N_{\text{coll}} \gg N$ (unless we have a leakage-dominated critical system, which is unlikely in practice). The number of particle samples can be reduced by (1) only sampling during the last or final cycle [4] or (2) incorporating tunable user-defined survival probability factors [2]—that is, we perform a Russian roulette game whenever a particle is sampled. In the proposed technique, we implement the survival probability approach. However, instead of making the probability factors user-tunable, the probabilities, $P_n$ and $P_C$, are determined on the fly to yield, on average, the neutron and DNP target sizes $N_n$ and $N_C$, respectively. Furthermore, in the proposed technique, we sample the particles not during the MC criticality calculation; instead, we do it in a separate MC criticality run. Let's call it the MC particle sampling run. The idea is to minimize intervention to the actual MC criticality calculation routine, which in practice has to be done very accurately prior to the transients and may involve extensive criticality search and multi-physics complexity. Besides the particle target sizes $N_n$ and $N_C$, the proposed sampling technique also seeks to produce uniform-weight particles. That is, all the sampled neutrons would be of unit weight, while all the DNPs would be of weight $\tilde{w}_C$ (defined below). These unit-weight neutron samples (and the associated uniform-weight DNPs) try to reflect source particles generated in an analog fixed-source MC simulation.
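As an illustration, a minimal Python sketch of this collision-site sampling (Eqs. 5 and 6) is given below; the names are hypothetical, and, for brevity, the cross sections are taken as one-group scalars rather than the energy-dependent quantities above.

    import numpy as np

    rng = np.random.default_rng()

    def sample_at_collision(particle, Sigma_t, Sigma_f, v, nu_d, lam, keff):
        # `particle` is a dict carrying the inducing neutron's weight "w" and its
        # phase-space coordinates; nu_d and lam are length-J arrays of delayed
        # yields and decay constants.
        w = particle["w"]

        # Neutron sample (Eq. 5): a copy of the inducing neutron, weight w/(Sigma_t v)
        neutron = dict(particle)
        wn = (w / Sigma_t) / v

        # DNP sample (Eq. 6): same position, group j drawn from P_group(j)
        p = nu_d / lam
        j = rng.choice(len(nu_d), p=p / p.sum())
        wC = (w / Sigma_t) * np.sum(nu_d * Sigma_f / (keff * lam))
        dnp = {"r": particle["r"], "group": j}

        return (neutron, wn), (dnp, wC)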
To achieve the sample target sizes with uniform-weight particles, the MC particle sampling run requires the following information from the preceding MC criticality calculation: (1) the $k_{\text{eff}}$ and the last fission source particles, (2) the mean neutron and DNP densities $\langle n\rangle$ and $\langle C\rangle$, and (3) the maximum neutron and DNP densities $\langle n\rangle_{\max}$ and $\langle C\rangle_{\max}$. Obtaining the quantities in item (1) is typically supported in any MC transport code. As for items (2) and (3), they can be obtained via the following track-length estimators: $\langle n\rangle = \frac{1}{N N_{\text{active}}}\sum_{i\in\text{active tracks}}\left[(wl)\frac{1}{v}\right]_i$, $\langle C\rangle = \frac{1}{N N_{\text{active}}}\sum_{i\in\text{active tracks}}\left[(wl)\sum_{j=1}^{J}\frac{\nu_{d,j}\Sigma_f}{k_{\text{eff}}\lambda_j}\right]_i$, (7) $\langle n\rangle_{\max} = \max_{i\in\text{active tracks}}\left[(wl)\frac{1}{v}\right]_i$, $\langle C\rangle_{\max} = \max_{i\in\text{active tracks}}\left[(wl)\sum_{j=1}^{J}\frac{\nu_{d,j}\Sigma_f}{k_{\text{eff}}\lambda_j}\right]_i$, (8) which are similar to tallying the fission production or $k_{\text{eff}}$ during the active cycles. We need the $k_{\text{eff}}$ and the last fission source particles to effectively restart the fission cycles of the preceding MC criticality calculation. The mean particle densities $\langle n\rangle$ and $\langle C\rangle$ are needed to predict how many collisions occur at each cycle. Then, given a number of cycles that we wish to run, $N_{\text{cycle}}$, we can determine the survival probabilities that would ultimately yield, on average, the desired particle target sizes $N_n$ and $N_C$: $P_n = \frac{w_n}{(N_{\text{cycle}}\langle n\rangle)/N_n}$, $P_C = \frac{w_C}{(N_{\text{cycle}}\langle C\rangle)/N_C}$, (9) where $w_n$ and $w_C$ are those defined in Eqs. 5 and 6, respectively. Finally, all neutrons and DNPs that are sampled and survive their respective Russian roulette games will respectively be given uniform weights of 1 and $\tilde{w}_C$, $\tilde{w}_C = \frac{\langle C\rangle/N_C}{\langle n\rangle/N_n}$. (10) Note that we do not need to store the individual particle weights, as the DNP weight $\tilde{w}_C$ is enough to describe the (normalized) weight distribution of the particle population. This sampling scheme essentially performs the weight-based Splitting-Roulette population control technique [7,8], except that, instead of collecting all the samples over the entire $N_{\text{cycle}}$ cycles into a particle bank and then applying the weight-based Splitting-Roulette technique targeting the desired population sizes of $N_n$ and $N_C$, we apply the Splitting-Roulette on the fly as we sample each particle, using the predicted total weights of $N_{\text{cycle}} \times \langle n\rangle$ and $N_{\text{cycle}} \times \langle C\rangle$, respectively. We still need to decide how we determine $N_{\text{cycle}}$. The key consideration is the possibility of getting a survival probability, $P_n$ or $P_C$, larger than one. In that case, one could perform the splitting-roulette game to retain the expected weight and targeted sample size. However, this would yield identical copies of the sample, which is not desirable. To minimize the occurrence of this issue, we use the predicted maximum densities $\langle n\rangle_{\max}$ and $\langle C\rangle_{\max}$ to determine a suitable number of cycles: $N_{\text{cycle}} = \max(N_{\text{cycle},n}, N_{\text{cycle},C})$, (11) $N_{\text{cycle},n} = \left\lceil\frac{\langle n\rangle_{\max}}{\langle n\rangle/N_n}\right\rceil$, $N_{\text{cycle},C} = \left\lceil\frac{\langle C\rangle_{\max}}{\langle C\rangle/N_C}\right\rceil$. (12)
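The resulting bookkeeping is compact; a minimal Python sketch of Eqs. (9)-(12) is given below (illustrative names, not MC/DC's actual implementation).

    import math
    import numpy as np

    rng = np.random.default_rng()

    def plan_sampling_run(n_mean, n_max, C_mean, C_max, Nn, NC):
        # Number of restarted fission cycles (Eqs. 11-12) and the uniform DNP
        # weight (Eq. 10), given the density tallies of Eqs. (7)-(8).
        N_cycle = max(math.ceil(n_max / (n_mean / Nn)),
                      math.ceil(C_max / (C_mean / NC)))
        wC_tilde = (C_mean / NC) / (n_mean / Nn)
        return N_cycle, wC_tilde

    def survives(sample_weight, mean_density, target_size, N_cycle):
        # Russian roulette with the on-the-fly survival probability of Eq. (9);
        # survivors are banked with uniform weight (1 for neutrons, wC_tilde
        # for DNPs).
        P = sample_weight / ((N_cycle * mean_density) / target_size)
        return rng.random() < P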
3. VERIFICATION The proposed sampling technique is implemented into the open-source, Python-based MC code MC/DC (https://github.com/CEMeNT-PSAAP/MCDC.git) [9]. To verify the implementation, we consider an infinite homogeneous 361-group (6 DNP groups) medium representing an infinite water reactor pin cell. The critical steady-state neutron and DNP group densities, $n_g$ and $C_j$, can be obtained by solving the associated eigenvalue matrix problem. The particle group densities are shown in Fig. 1. These will be used as reference solutions to measure the accuracy of the distributions of the particles sampled by the proposed technique. Figure 1: Reference steady-state solutions of neutron and DNP group densities, as well as the distributions of the particles sampled by the proposed MC technique with $N_n = N_C = 10^5$ (left), and the error convergence of the technique results (right). First, we run an accurate MC criticality calculation: with 10 inactive and 100 active cycles and 10 million particles per cycle, we get a k-eigenvalue of 1.16019 ± 4 pcm. We then perform the proposed particle sampling technique with increasing neutron and DNP target sizes ($N_n$ and $N_C$). We calculate the distributions of the sampled particles and compare them with the reference values. Figure 1 (left) shows that with $N_n = N_C = 10^5$, the particle distributions calculated by the sampling technique agree well with the reference values, except for the zeros in the fast neutron energy range. This is expected, considering that the neutron density distribution spans about seven orders of magnitude. As we increase the sample target sizes, we resolve more of the neutron distribution. This is demonstrated by the convergence of the error in Fig. 1 (right), which exhibits the expected rate of $O(N_{n,C}^{-0.5})$. Figure 2 (left) shows the relative difference between the numbers of particles sampled by the technique and the sample target sizes. It is found that the difference is around 1% for smaller target sizes but effectively decreases as we increase the target sizes. We then move on to a more involved problem, the multigroup 3D C5G7-TD4 benchmark model [6], which consists of four un-rodded UO2/MOX assemblies surrounded by water reflectors. Different from the previous homogeneous infinite medium test problem, we cannot easily get highly accurate steady-state angular neutron flux and DNP group distributions. However, if we keep the model critical and run the time-dependent MC simulation, we should retain a steady, constant-in-time solution. Again, we start by preparing the initial condition particles using the proposed sampling technique. Figure 2: Relative difference between the numbers of particles sampled by the proposed technique and the target sizes. First, we run an accurate criticality calculation: with 50 inactive and 150 active cycles and 20 million particles per cycle, we get a k-eigenvalue of 1.165366 ± 2.8 pcm. We then prepare the initial condition particles by performing the proposed sampling technique with increasing particle target sizes. Figure 2 (right) shows that, similar to the previous test problem, the relative difference between the numbers of particles sampled by the technique and the sample target sizes effectively decreases as we increase the target sizes, all the way to below 0.1% for target sizes above $10^6$. By using the prepared initial-condition particles, the problem is run in “analog” (uniform weight, without any variance reduction technique, time census, or population control). This is achieved due to the uniform-weight source particles sampled by the proposed technique, MC/DC's time mesh tally capability, and breaking down each DNP into unit-weight delayed neutrons. The number of delayed neutrons emitted per DNP would be either $\lceil\tilde{w}_C\rceil$ or $\lfloor\tilde{w}_C\rfloor$, with an average of $\tilde{w}_C$, which for this problem is 3681.25.
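A minimal Python sketch of this weight-preserving integer emission (hypothetical names; the description above implies a stochastic rounding whose expectation equals the DNP weight) is:

    import math
    import numpy as np

    rng = np.random.default_rng()

    def num_delayed_neutrons(wC_tilde):
        # Emit floor(w) or ceil(w) unit-weight delayed neutrons such that the
        # expected number of emissions equals the uniform DNP weight wC_tilde.
        base = math.floor(wC_tilde)
        if rng.random() < wC_tilde - base:
            base += 1
        return base

    counts = [num_delayed_neutrons(3681.25) for _ in range(100000)]
    print(np.mean(counts))  # ~3681.25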
Finally, the total fission rate is recorded via the time-averaged track-length estimator on a uniform time grid of $\Delta t = 0.1$ s up to 5 s. Figure 3 (left) shows the time-dependent MC solutions of the steady-state problem. Different curves indicate different numbers of source (or initial) particles. While all of the cases show the expected steady-state behavior, increasing the number of initial particles improves the accuracy and precision of the solution. Figure 3 (right) shows the convergence of the 2-norms of the relative errors (from the expected unit solution), which exhibits the expected rate of $O(N_{n,C}^{-0.5})$. 4. SUMMARY AND FUTURE WORK We formulated a particle sampling technique that effectively produces uniform-weight particles around user-specified target sizes. The technique can be seen as an improvement, or alternative, to the existing ones. The technique is implemented into the Python-based MC code MC/DC and verified against a simple infinite multigroup problem and the 3D C5G7-TD4 benchmark model. Figure 3: Time-dependent MC results of the steady-state 3D C5G7-TD4 benchmark model with an increasing number of initial particles (left) and the error convergence of the results (right). Future work includes performing a parametric study on the impact of the resolution of the MC criticality calculation, which feeds not only the $k_{\text{eff}}$ and fission source particles but also the key parameters of the sampling technique: $\langle n\rangle$, $\langle n\rangle_{\max}$, $\langle C\rangle$, and $\langle C\rangle_{\max}$. Furthermore, in this initial study, we use equal numbers for both target sizes $N_n$ and $N_C$. It would be interesting to see the impact of varying the ratio $N_n/N_C$ on different transient problems. The sampling technique is based on the collision estimator. This may be an issue for systems with relatively long mean free paths. Developing particle sampling based on the track-length estimator would address this potential issue. Finally, while the proposed particle sampling technique is purposed for transients starting off of a critical steady state, the main idea can be applied to source-driven, subcritical reactor systems too, which makes an interesting research endeavor. ACKNOWLEDGEMENTS This work was supported by the Center for Exascale Monte-Carlo Neutron Transport (CEMeNT), a PSAAP-III project funded by the Department of Energy, grant number DE-NA003967." + }, + { + "url": "http://arxiv.org/abs/2305.07641v1", + "title": "High-fidelity treatment for object movement in time-dependent Monte Carlo transport simulations", + "abstract": "We investigate the use of time-dependent surfaces in Monte Carlo transport\nsimulation to accurately model prescribed, continuous object movements. The\nperformance of the continuous time-dependent surface technique, relative to the\ntypical stepping approximations and the recently proposed at-source geometry\nadjustment technique, is assessed by running a simple test problem involving\ncontinuous movements of an absorbing object. A figure of merit analysis,\nmeasured from the method's accuracy and total runtime, shows that the\ntime-dependent surface is more efficient than the stepping approximations.
We\nalso demonstrate that the time-dependent surface technique offers robustness,\nas it produces accurate solutions even in problems where the at-source geometry\ntechnique fails. Finally, we verify the time-dependent surface technique\nagainst one of the multigroup 3D C5G7-TD benchmark problems.", + "authors": "Ilham Variansyah, Ryan G. McClarren", + "published": "2023-05-12", + "updated": "2023-05-12", + "primary_cat": "physics.comp-ph", + "cats": [ + "physics.comp-ph" + ], + "main_content": "INTRODUCTION Many time-dependent transport problems involve prescribed continuous object movements. Some practical examples include criticality experiments (e.g., Honeycomb and Lady Godiva [1]), reactor control element movements, and transient analyses of innovative reactor concepts, such as the coupling/decoupling of the Holos-Quad core's Subcritical Power Modules [2]. A time-dependent transport problem is typically modeled as a series of steady-state problems, where continuous object movements are approximated into steps, introducing discretization error. This stepping approximation of object movement has been traditionally adopted not only in deterministic transport methods [3,4] but also in time-dependent Monte Carlo (MC) [5,6,7]. A sufficiently small time step is required to minimize the discretization error. Such a constraint is inherent in deterministic methods—but in MC, it comes with an additional consequence. Smaller time steps mean a more frequent particle census, which is required to pause the running MC simulation and subsequently change the model; this particle synchronization, however, adds to the overall simulation run time. There are several good reasons for particle census in time-dependent MC simulation, including (1) integrating non-linear feedback, (2) outputting large time-dependent solution tallies, (3) controlling population size, and (4) re-balancing the workload of parallel processors. Determining an optimal census frequency or time step size is necessary to achieve efficient time-dependent MC simulation; implementing the stepping approximation to model continuous object movement adds a constraint in determining such an optimal value. Regardless, one of the core values of MC simulation is its capability of high-fidelity modeling—it is rather unfortunate if we have to introduce discretization errors on something that MC can resolve continuously. An alternative technique for modeling object movement was recently proposed [8]. The main idea (and assumption) is that the entire model geometry is adjusted at the creation of source and delayed particles. This technique, hereinafter referred to as “at-source geometry adjustment”, is reasonably effective for many applications. However, its accuracy deteriorates as the particles' velocities become comparable with the objects' and as the spacings between the particles and the objects increase [8]. In this paper, we investigate the use of time-dependent surfaces to achieve high-fidelity modeling of prescribed, continuous object movements in MC simulations. In Section 2, we formulate and discuss the implementation of the time-dependent surfaces. Then, in Section 3, we devise a simple test problem to assess the relative performances of the proposed technique and the existing ones—i.e., stepping and at-source adjustment [8].
In Section 4, we demonstrate the use of the time-dependent surfaces in solving one of the C5G7-TD benchmark problems [9]. Finally, Section 5 summarizes and discusses future work. 2. TIME-DEPENDENT SURFACE In the MC method, the domain of the transport problem is typically modeled into material cell objects bounded by constructive solid geometry (CSG) surfaces. Changing the position or angle of the bounding surfaces effectively moves or changes the geometry of the associated objects. A CSG surface is usually defined by the constants of the quadric surface equation: $S(x,y,z) = Ax^2 + By^2 + Cz^2 + Dxy + Exz + Fyz + Gx + Hy + Iz + J$, (1) and if we make the relevant constants time-dependent, we effectively move or change the surface. The axis-aligned planes are the simplest yet widely useful case of a time-dependent surface. As an example, the time-dependent CSG equation for the plane surface whose normal is parallel to the z-axis is $S(z,t) = z + J(t)$. (2) To determine on which side of the moving surface a particle is (which is needed to determine the particle's cell), we can evaluate the sign of $S(z_0, t_0)$, where $z_0$ and $t_0$ are respectively the particle's position and time. This is the only modification needed should one use a delta tracking algorithm [10]. However, in surface tracking, we also need to determine the particle's flight distance to the surface, $l_{\text{surf}}$, by solving $S(z_0 + l_{\text{surf}} u_z,\; t_0 + l_{\text{surf}}/v) = 0$, (3) where $u_z$ and $v$ are respectively the z-component of the direction and the speed of the particle. For a constant-speed surface, $J(t) = J_1 t + J_0$, this would be $l_{\text{surf}} = -\frac{S(z_0, t_0)}{u_z + J_1/v}$. (4) We note that if the surface speed $J_1 = 0$, Eq. (4) reduces to the static surface formula. It is worth mentioning that this simple time-dependent axis-aligned plane surface may be enough to model many cases of moving objects in transport problems, from typical reactor control rod insertion/withdrawal to the coupling/decoupling of the Subcritical Power Modules of the Holos-Quad concept [2]—i.e., by bounding the module (or assembly) universes within the time-dependent surfaces. 3. SIMPLE TEST PROBLEM To test and assess the benefits of using the proposed time-dependent surfaces, let us consider a simple monoenergetic two-region slab problem, where we continuously change the position of the interface separating the absorbing ($\Sigma_a = 0.9$, $\Sigma_s = 0.1$) and scattering ($\Sigma_a = 0.1$, $\Sigma_s = 0.9$) materials. The red line in Fig. 1 highlights the interface position as a function of time. The domain spans $z \in [0, 6]$ with vacuum boundary conditions, the simulation starts with a zero initial condition, a uniform fixed source isotropically emits particles in $t \in [0, 10]$, and the simulation ends at $t = 15$. All constants and variables are in units of the mean free path (mfp) and the mean free time (mft). We note that the problem tries to demonstrate typical control rod withdrawal and insertion. We implement the time-dependent axis-aligned plane surface formulated in Section 2 (as well as the stepping approach and the at-source geometry adjustment technique [8], for performance comparison) into the Python-based MC code MC/DC (https://github.com/CEMeNT-PSAAP/MCDC) [11]. The time-dependent surface position is modeled as a piece-wise linear function and (as an example of a possible user interface) set up in the input file as shown in Fig. 2.
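Complementing Fig. 2, the surface-tracking distance computation of Eq. (4) amounts to only a few lines; a minimal Python sketch (illustrative names, not MC/DC's actual routine) is:

    def distance_to_moving_plane(z0, t0, uz, v, J0, J1):
        # Flight distance to the plane S(z, t) = z + J1*t + J0 = 0 (Eq. 4).
        # Setting J1 = 0 recovers the usual static-surface formula.
        denom = uz + J1 / v
        if denom == 0.0:
            return float("inf")  # particle never catches the surface
        S0 = z0 + J1 * t0 + J0   # S(z0, t0)
        l_surf = -S0 / denom
        return l_surf if l_surf > 0.0 else float("inf")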
The test problem is run in analog using the time-dependent surface capability with $10^{10}$ histories, and the scalar flux result—calculated via track-length estimator on a uniform mesh grid $dz = dt = 0.1$—is shown in Fig. 1. We use this highly precise solution as the reference for the performance comparison in the rest of this section. Figure 1: The test problem scalar flux reference result ($N_{\text{hist}} = 10^{10}$). Figure 2: A user interface example for setting the time-dependent surface of the test problem. Besides the highly precise calculation of the reference solution, we solve the test problem with a significantly lower, but still reasonably high, number of histories of $10^8$ on 360 processors of LLNL's compute platform Quartz (Intel Xeon E5-2695). Four simulation cases—differing in how the moving interface is modeled—are considered: (1) using the continuous time-dependent surface capability, (2) using the stepping approximation, (3) using the stepping approximation with material mixing, also known as “decusping”, and (4) using the at-source geometry adjustment technique [8]. In Cases 2 and 3, we consider 1, 2, 4, 8, 16, and 32 steps during the interface ramp change ($t \in [5, 10]$), yielding MC simulations with $n + 1$ censuses, where $n$ is the number of steps. The census is performed only to change the interface position (no population control or other operation is performed). The stepping is made in implicit-Euler style, where 1 step means instantaneously moving the interface from $z = 2$ to $z = 5$ at $t = 5$. In Case 3, however, the homogenized mixture of the two materials ($\Sigma_a = 0.5$, $\Sigma_s = 0.5$) is used to model the continuous transition in each step. It is worth mentioning that, in the implementation, some of the fixed-source particles may be emitted beyond a currently active census time. In that case, the particles are stored straight into the census bank and start being transported only when their respective time tags are within the active census time. Figure 3 shows the results of Cases 2, 3, and 4. The result of the time-dependent surface technique (Case 1) is not shown, as it is very similar to the reference solution in Fig. 1, except for minor statistical noise. Subfigures 3(a) and 3(b) demonstrate how the stepping techniques discretize the continuous movement of the absorber, which introduces significant error if the number of steps is not sufficiently fine. On the other hand, Subfigure 3(c) shows that the at-source geometry adjustment technique gives a highly inaccurate result. This is because the devised test problem challenges the stipulations of the technique [8]. First, the absorbing material's withdrawal speed is comparable to the particle speed. Second, some source particles are emitted very closely (in space and time) to the moving absorber. Third, half of the source particles are born before the absorbing material starts to move; particles born in $t < 5$ will not feel the absorber withdrawal in $t \in [5, 10]$, which causes the significant underestimation in $t \in [5, 10]$. Finally, all of the source particles are born before the final absorber insertion at $t = 10$; this causes the significant overestimation in $t \in [10, 15]$.
Figure 3: Test problem scalar fluxes obtained with (a) stepping with 2 steps, (b) stepping with material mixing with 2 steps, and (c) the at-source geometry adjustment technique ($N_{\text{hist}} = 10^8$). Next, we quantify the relative efficiencies of the continuous time-dependent surface technique and the stepping approximations. We consider two performance metrics: (1) the 2-norm of the scalar flux relative error in $t \in [5, 15]$ and (2) the simulation runtime. We also consider a figure of merit (FOM), defined as the inverse of the product of the two metrics. The resulting performance metrics are compared in Fig. 4. Subfigure 4(a) shows that the errors of the stepping approximations (Steps) reduce and approach the value of the continuous time-dependent surface technique (Continuous) as we increase the number of steps, where the error of the stepping approximation with material mixing reduces more rapidly before it eventually hits the error threshold. However, as we increase the number of steps, more censuses are performed, increasing the total run time of the stepping approximations, as shown in Subfigure 4(b). Finally, it is observed from Subfigure 4(c) that the FOMs of the stepping approximations are always lower than that of the continuous time-dependent surface technique and would get worse as we further increase the number of steps, because the error eventually stops improving while the run time keeps increasing. We note that the runtime of the at-source geometry adjustment technique is similar to that of the time-dependent surface technique; however, its relative error 2-norm is much larger (over 2000 times). Figure 4: Performance metrics of the continuous time-dependent surface (Continuous) and stepping techniques (Steps): (a) 2-norm of relative error, (b) total runtime, and (c) figure of merit. 4. VERIFICATION AGAINST C5G7-TD BENCHMARK In this section, we test our implementation of the time-dependent surface technique against one of the multigroup 3D C5G7-TD benchmark problems [9]. We consider exercise TD4-2, which involves continuous insertion and withdrawal of a control rod bank. With our implementation in MC/DC, we can model the tips of all the moving control rods with a single, shared time-dependent axis-aligned surface. Note that we also need to apply the same time-dependent surface to bound the water moderator below the control rods. To run an MC simulation of this type of reactor transient (i.e., starting from a critical steady state), we need to prepare the initial-condition neutrons and delayed neutron precursors. We use the initial particle sampling technique proposed in [12]. First, we run an accurate criticality calculation on the un-rodded configuration: with 50 inactive and 150 active cycles and 20 million particles per cycle, we get a k-eigenvalue of 1.165366 ± 2.8 pcm. We then follow the techniques detailed in [12] to prepare the initial-condition particles. We set the neutron and delayed neutron precursor target sizes to be $5 \times 10^6$. With the sampled initial condition particles, we run the time-dependent MC problem. We record the total fission rate via a time-crossing estimator [13] on a uniform time grid of $\Delta t = 0.1$ s. We note that the simulation is run in “analog” [12]—i.e., with unit-weight neutrons and without any variance reduction technique or population control. The result is shown in Figure 5.
It is evident that the MC/DC result is in good agreement with the result generated by Shen et al. [3] using the deterministic code MPACT with a uniform time step size of $\Delta t = 0.025$ s, which is 4 times smaller than what is used in MC/DC. Different from deterministic codes, which need to resolve the continuously moving rods with sufficiently small $\Delta t$, MC simulation with time-dependent surfaces can use an arbitrarily large $\Delta t$ without losing accuracy in modeling the transient objects. Figure 5: MC/DC result for the 3D C5G7 TD4-2 benchmark problem, run with the time-dependent surface technique with about 5 million initial neutrons and delayed neutron precursors and a time-crossing tally estimator. The MPACT result [3] is also presented as a reference. 5. SUMMARY AND FUTURE WORK We investigate the use of time-dependent surfaces for high-fidelity treatment of continuous object movement in MC simulations. This object-moving technique is an alternative to the widely used stepping approximations and the recently proposed at-source geometry adjustment technique [8]. We formulate the application of time-dependent axis-aligned plane surfaces for the surface-tracking algorithm, implement it into the MC code MC/DC [11], and verify it against a simple test problem and a 3D C5G7-TD benchmark problem. Through a figure of merit analysis of the test problem, we find that the time-dependent surface is largely more efficient than the stepping approximations. We also demonstrate that the time-dependent surface technique offers robustness, as it produces accurate solutions even in problems where the at-source geometry technique fails. Future work includes implementing more complex geometry changes, such as those involving translation, rotation, and expansion of quadric surfaces [8]. Also, as mentioned a couple of times in this paper, it would be interesting to test the formulated time-dependent axis-aligned surface in modeling the coupling and decoupling of the Holos-Quad core's Subcritical Power Modules [2]. Furthermore, the time-dependent surface technique can be synergized with time-dependent cross-section techniques—e.g., by adapting the sampling technique proposed by Brown and Martin [14]—to model system expansion and compression. Some time-dependent transport problems are nonlinear (e.g., multiphysics simulations), which inevitably requires performing particle time census to update the system's configuration. In this case, the use of the time-dependent surface technique, as well as its synergy with a time-dependent cross-section technique, can still be beneficial—as, in that case, the techniques serve as a higher-order alternative to the typically implemented stepping (in both geometry and material property changes) approximations. ACKNOWLEDGEMENTS This work was supported by the Center for Exascale Monte-Carlo Neutron Transport (CEMeNT), a PSAAP-III project funded by the Department of Energy, grant number DE-NA003967." + }, + { + "url": "http://arxiv.org/abs/2305.07636v1", + "title": "Development of MC/DC: a performant, scalable, and portable Python-based Monte Carlo neutron transport code", + "abstract": "We discuss the current development of MC/DC (Monte Carlo Dynamic Code). MC/DC\nis primarily designed to serve as an exploratory Python-based MC transport\ncode.
However, it seeks to offer improved performance, massive scalability, and\nbackend portability by leveraging Python code-generation libraries and\nimplementing an innovative abstraction strategy and compilation scheme. Here,\nwe verify MC/DC capabilities and perform an initial performance assessment. We\nfound that MC/DC can run hundreds of times faster than its pure Python mode and\nabout 2.5 times slower, but with comparable parallel scaling, than the\nhigh-performance MC code Shift for simple problems. Finally, to further\nexercise MC/DC's time-dependent MC transport capabilities, we propose a\nchallenge problem based on the C5G7-TD benchmark model.", + "authors": "Ilham Variansyah, J. P. Morgan, Jordan Northrop, Kyle E. Niemeyer, Ryan G. McClarren", + "published": "2023-05-12", + "updated": "2023-05-12", + "primary_cat": "physics.comp-ph", + "cats": [ + "physics.comp-ph" + ], + "main_content": "INTRODUCTION The Center for Exascale Monte Carlo Neutron Transport (CEMeNT, https://cement-psaap.github.io/) focuses on advancing the state of the art of Monte Carlo (MC) neutronics calculations, particularly for solving transient transport problems on exascale computer architectures. One of CEMeNT's approaches is to develop an open-source Python-based MC code—MC/DC (Monte Carlo Dynamic Code) [1]—which is specifically designed for method and algorithm R&D on both CPU and GPU architectures. Written in Python, MC/DC offers rapid prototyping of MC methods and algorithms. However, MC/DC leverages Python code-generation libraries to boost its performance and achieve massive parallel scalability and backend portability. Initially inspired by the petascale-tested computational fluid dynamics code PyFR [2], this type of development approach is targeted to divorce the computer science from the numerical algorithms, making method development and testing easier for subject area experts, as well as allowing portability between computer architectures. In this paper, we discuss the development of MC/DC. Section 2 discusses the software engineering approach implemented. Section 3 discusses current capabilities, recent verification efforts, and initial performance assessment results. Finally, we summarize and discuss ongoing and future work in Section 4. 2. SOFTWARE ENGINEERING APPROACH MC/DC serves two purposes: (1) to be a Python-based tool for MC transport method and algorithm explorations and (2) to be a meta-programming demonstration toward performant, scalable, and portable Python-based MC transport. To achieve these, we leverage Python code-generation libraries and implement an abstraction strategy. Based on our previous explorations of the feasibility of Python-based hardware acceleration and abstraction techniques (tested on both x86 CPUs and Nvidia GPUs) [3], we chose Numba [4] as the basis for MC/DC development, as it provides a good balance of performance and ease of use. Numba is a Python package that uses a just-in-time (JIT) compilation scheme to convert and compile functions written in native Python using the LLVM compiler to “approach C-like speeds” [4]. Furthermore, Numba can implement CPU threading via OpenMP and compile to Nvidia GPUs, which is an important piece of our MC/DC kernel abstraction strategy. Figure 1: Current abstraction/portability strategy of MC/DC. Figure 1 illustrates the current abstraction strategy for MC/DC.
The objective is for all MC transport kernels (designed to act on a single particle in the typical history-based algorithm) to be strictly written in Python scripts. Adapters in MC/DC then alter the behavior of and compile the kernels to run in one of the four targeted modes: combinations of history- or event-based algorithms with CPU or GPU backend targets. These adapters are a series of Python decorators and Numba CPU/GPU JIT compilations. Figure 2 shows MC/DC's proposed compilation structure, including currently supported Numba functionality, planned extensions of our own fork of Numba to AMD GPUs, and integration of our own asynchronous GPU scheduler—Harmonize—for both AMD and Nvidia GPUs. After MC/DC history-based MC transport kernels are wrapped and adapted, they are handed to Numba (shown in the yellow portion of Figure 2). From there, functions are lowered into the LLVM portability framework and then, in a currently supported path, compiled to Nvidia GPU and x86, ARM, and POWER CPU machine code. Figure 2: Proposed compilation path of abstracted MC/DC kernels (bytecode analysis and type inference to Numba IR, lowering to LLVM IR, and final compilation via the LLVM JIT for CPUs, NVVM/PTX for Nvidia GPUs, and HIP for AMD GPUs, with Harmonize in the experimental path). Compiling kernels to AMD GPUs will be important for MC/DC since many exa-class machines use AMD GPUs as their primary accelerators (e.g., Oak Ridge's Frontier, LLNL's El Capitan). Numba does not currently support this. However, since full LLVM does have support for AMD's Heterogeneous Interface for Portability (HIP) framework, we plan on elevating the required functions in our own fork of Numba to gain compute access to AMD GPUs. We hope that by using a compilation path reliant on LLVM—which seeks to be a “source- and target-independent optimizer” [5]—our chosen acceleration and abstraction techniques will also allow us to readily extend MC/DC to other currently deployed accelerator hardware (e.g., Intel GPUs) and hardware yet to be designed. Using the LLVM intermediate representation (IR) will also allow us to extend MC/DC to use the Harmonize asynchronous GPU scheduler. Harmonize works through a set of template abstractions and vendor-supplied compiler tools to queue GPU function calls and then run them in unison, thus decreasing thread divergence and increasing GPU performance. It has been written specifically for transient Monte Carlo neutron transport with fission and provides a 1.5x or better improvement in about 77.8% of cases (varying material cross sections) in mono-energetic simulations [6]. Further work is still required to demonstrate Harmonize's abilities in more complex tracking algorithms (e.g., multi-dimensional, multigroup) and to extend its operability to work with LLVM IR (currently it works using Nvidia-specific PTX code), but it shows promise for further increasing MC/DC's GPU performance without requiring significant alterations to algorithms. Currently, MC/DC only runs with the history-based algorithm on CPUs, with Numba-JIT compilation for performance improvement and with MPI4Py [7] for scalable parallel runs across compute nodes. However, we have started investigating the proposed abstraction strategy (Fig. 1) and compilation path (Fig. 2) in a light version of MC/DC [8].
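The decorator-based adapter idea can be illustrated with a minimal sketch (hypothetical names; MC/DC's actual adapters are more involved), in which kernels are written once in plain Python and a mode flag decides whether they are JIT-compiled:

    import numba as nb

    NUMBA_MODE = True  # False: pure Python for debugging; True: JIT-compiled

    def adapt(kernel):
        # Return the kernel unchanged (Python mode) or Numba-compiled (Numba mode).
        return nb.njit(cache=True)(kernel) if NUMBA_MODE else kernel

    @adapt
    def move_particle(x, ux, distance):
        # A trivial single-particle transport kernel.
        return x + ux * distance

    print(move_particle(0.0, 1.0, 2.5))  # works identically in either mode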
In the next section, we discuss MC/DC's current capabilities, recent verification efforts, and initial performance assessment results for the history-based CPU mode. 3. VERIFICATION AND PERFORMANCE ASSESSMENT MC/DC is capable of running fixed-source and eigenvalue neutron transport problems defined on quadric-surface constructive solid geometry. MC/DC has several time-dependent features, including time-dependent mesh tallies, a time-dependent source, time census, population control techniques [9,10], time-dependent surfaces for continuous object movement [11], and initial condition particle sampling for typical reactor transients [12]. Currently, MC/DC only supports running multigroup transport problems; continuous-energy physics is planned to be implemented later in the project. Starting with multigroup physics allows us to focus more on the main novel work of MC/DC development: Python-based transport kernel and algorithm abstractions, as well as investigating and exploring time-dependent MC techniques. Furthermore, we do not see multigroup MC as a mere stepping stone to achieving the ultimate goal of running continuous-energy MC, as multigroup MC does have potential—it can be an excellent alternative, or complement, to the inherently multigroup, yet widely used, deterministic codes. With MC/DC, we want to explore optimal sets of MC algorithms and techniques for both multigroup and continuous-energy transport, which can be very different and highly problem dependent. Figure 3: Solution (left) and Numba speedup (right) for the supercritical AZURV1. MC/DC capabilities have been verified against several analytical and numerical test problems. These include the time-dependent supercritical AZURV1 [13,9], where we add neutron fission to the original problem to achieve an effective multiplication ratio of c = 1.1 to exercise a significant particle population growth, which is about 7.39 times the initial value in 20 mean free times (mft). The semi-analytical solution for this problem is shown in the left figure of Fig. 3. We also devised a neutron-pulse transient of a subcritical homogeneous 361-group medium, representing an infinite water reactor pin cell, to verify the multigroup physics capability. We tally the evolution of the multigroup flux on a logarithmically spaced time grid with a time-crossing estimator [9]. Figure 4 shows a snapshot of the result. Finally, part of MC/DC verification is observing the $1/\sqrt{N}$ convergence of the solution error [9], where $N$ is the number of histories, for problems where we know the analytical/accurate solutions, such as the supercritical AZURV1 and the 361-group pulse problems. Figure 4: A snapshot of the solution of the neutron-pulse transient of an infinite subcritical homogeneous 361-group medium ($10^6$ histories). 3.1. Numba Speedup MC/DC can be run in pure Python or performant Numba mode. Pure Python mode allows rapid testing and development of new methods. Once an implementation is verified and ready for testing on more practical and computationally challenging problems, we can turn on the Numba mode, get the MC kernels JIT-compiled, and run performantly, approaching the speeds of compiled programs. Figure 3 (right) demonstrates this, showing the Numba speedup by comparing the respective runtimes of the Python and Numba modes for the supercritical AZURV1 problem.
The flat runtime of the Numba mode at smaller numbers of histories indicates the JIT compilation time of about 45 seconds. Running with sufficiently high numbers of histories hides the compilation time, and the actual speedup gain can be observed: around 212 times in this case. To further exercise the time-dependent capabilities of MC/DC, we devised other transient problems based on existing benchmarks. These include the three-dimensional dog-leg void duct Kobayashi radiation transport problem [14], whose model is shown on the left side of Fig. 5. We make the originally steady-state problem time-dependent by setting the monoenergetic neutron speed to one cm/s and making the specified source active only in the first fifty seconds of the simulation. The quantity of interest is the time-averaged evolution (track-length estimator) of the flux distribution up to 200 seconds into the simulation. Figure 5 shows the last snapshot of the normalized flux distribution of the time-dependent Kobayashi problem, obtained with $10^8$ particle histories with implicit capture. For this problem, Numba mode runs about 56 times faster than the Python mode, which is significantly lower than the number for the supercritical AZURV1 problem (212x). However, when we coarsen the spatial tally mesh of the Kobayashi problem by a factor of 10, the Numba mode runs about 309 times faster. This sensitivity of the Numba speedup to the size (and dimension) of the tally mesh may be due to how global variables (simulation model, parameters, and tallies) are currently managed in MC/DC: by using a single, large Numpy [15] structure passed around as a function argument at each transport kernel call. We do this because Numba does not support global variables, as JIT compilation is done per individual function. Refactoring the large global-variable container into smaller optimal chunks may further improve the Numba speedup. Figure 5: The time-dependent Kobayashi dog-leg void duct problem [14] (left) and the last snapshot of the normalized time-averaged flux distribution ($10^8$ histories) (right). 3.2. Code-to-Code Comparison and Parallel Scalability As an initial effort to compare the performance of the Python-based, JIT-compiled code MC/DC with conventional compiled MC codes, we set up a multigroup infinite pin cell criticality problem based on the C5G7-TD UO2 pin model [16]. We solve the k-eigenvalue problem by running 50 inactive and 100 active cycles with $10^5$ particles per cycle using MC/DC and the MC code Shift [17] on LLNL's compute platform Lassen (IBM POWER9). MC/DC with Numba runs the problem in 2563.8 seconds, while Shift runs the problem in 1020.7 seconds. Subtracting the Numba-JIT compilation time (which is around 63 seconds for this problem), Shift performs about 2.5 times faster than MC/DC, which is a respectable level for MC/DC considering the relatively low effort required to develop a Python-based code decorated by Numba's JIT. However, if we also calculate the flux distribution on a fine mesh during the eigenvalue simulation, the performance of MC/DC considerably degrades. With $20 \times 20 \times 100 \times 7$ and $100 \times 100 \times 500 \times 7$ mesh sizes (seven being the number of groups), Shift runs about 3 and 15 times faster than MC/DC, respectively. This could be related to the suboptimal Numba speedup in problems with large multidimensional tallies discussed in the previous subsection.
However, this may also indicate that Shift has more optimized tallying mechanics and algorithms. Figure 6: Strong (left) and weak (right) scaling results for the infinite UO2 pin cell multigroup criticality calculation. Shift and MC/DC were run on IBM POWER9 (40 cores/node) and Intel Xeon E5-2695 (36 cores/node) CPUs, respectively. We also perform initial parallel scaling studies using the same infinite pin cell eigenvalue problem. The Lassen architecture uses the specialized IBM Spectrum MPI, with which MC/DC currently runs into MPI4Py issues when attempting to scale to large numbers of compute nodes. Thus, for these initial scaling studies, MC/DC uses LLNL's compute platform Quartz (Intel Xeon E5-2695). Figure 6 shows the strong and weak scaling results. We run 25000 particles/cycle per core for the weak scaling; as for the strong scaling, we run with $25000 \times 36$ and $25000 \times 40$ particles/cycle for MC/DC and Shift, respectively. The results show that MC/DC scales comparably with Shift for the cases considered. 3.3. C5G7-TD Benchmark and Challenge Problem As the last verification study in this initial phase of MC/DC development, we consider one of the more involved multigroup 3D C5G7-TD benchmark problems: exercise TD4-4 [16]. The four-assembly problem involves continuous, overlapping insertion and withdrawal of two control rod banks. To run an MC simulation of this type of reactor transient (i.e., starting from a critical steady state), we need to prepare the initial-condition neutrons and delayed neutron precursors. We use the initial particle sampling technique proposed by Variansyah and McClarren [12]. First, we run an accurate criticality calculation on the un-rodded configuration: with 50 inactive and 150 active cycles and 20 million particles per cycle, we get a k-eigenvalue of 1.165366 ± 2.8 pcm. We then prepare the initial-condition particles by setting the neutron and delayed neutron precursor target sizes to be $5 \times 10^6$ [12]. The problem is run in “analog” (uniform weight, without any variance reduction technique or population control). The total fission rate is recorded via the time-crossing estimator on a uniform time grid of $\Delta t = 0.1$ s. Finally, the time-dependent surface [11] is applied to exactly model the continuous movement of the control rod banks. The analog transient MC simulation takes about 6 hours on 9216 cores of LLNL's compute platform Quartz. The time-dependent simulation (and its initial condition generation) is repeated eight times with different random number seeds to measure the result's uncertainty. Figure 7: MC/DC result for the 3D C5G7 TD4-4 benchmark problem (eight batches of $5 \times 10^6$ initial prompt and delayed neutrons, with time-crossing estimator). The MPACT result [18] is also presented as a reference solution. Figure 7 shows the result, which agrees well with the result generated by Shen et al. using the deterministic code MPACT [18]. Figure 8: Control rod positions (left) and MC/DC solution (right) of C5G7-TDX. Last but not least, part of MC/DC development under the CEMeNT project is developing challenge problems to measure the effectiveness of the algorithms and methods being developed. One of these is based on the 3D C5G7-TD [16] problem, referred to here as C5G7-TDX. Here, we devise a four-phase problem simulating a start-up accident experiment, where each phase lasts five seconds.
The problem is driven by a fixed source (in the fastest group) isotropically emitting neutrons at the center of Bank 1. In Phase 1, all control rod banks are fully inserted, except Bank 3, which is stuck at a 0.74 insertion fraction; this phase simulates source particle propagation to a steady state in a subcritical system. In Phase 2, control rod Banks 2 and 4 are fully withdrawn within five seconds, but Bank 4 is stuck halfway; this phase exhibits the typical S-curve control rod worth shape as the fission rate rises. In Phase 3, Bank 1 slowly withdraws up to an insertion fraction of 0.889, which is enough to induce a neutron excursion. In Phase 4, the fixed source is removed, and a reactor scram is initiated, where all control rod banks drop at 0.1 insertion fraction per second, except for Bank 2, which gets stuck at an insertion fraction of 0.8; in this phase, a rapid neutron population collapse occurs, but the decay rate is limited by the contributions of the delayed neutron precursors accumulated during the previous phases. Figure 8 (left) shows the control rod banks' positions during the four-phase simulation, while the MC/DC total time-averaged fission rate solution from running $10^9$ analog histories is shown on the right. Figure 9: MC/DC parallel weak scaling efficiency ($10^6$ histories/core and 36 cores/node) for C5G7-TDX. Figure 9 shows the weak parallel scaling of C5G7-TDX for up to 256 nodes (36 cores per node) with $10^6$ histories per core. Despite the relatively high number of histories per core, the efficiency still significantly decreases as we increase the number of nodes. This indicates a significant load imbalance in the analog MC simulation of C5G7-TDX and makes it a reasonable challenge problem. 4. SUMMARY AND FUTURE WORK MC/DC is designed to serve as a Python-based tool for MC transport method and algorithm explorations. By leveraging the MPI4Py and Numba JIT compilation libraries, as well as an innovative abstraction strategy (Fig. 1) and compilation scheme (Fig. 2), MC/DC seeks to offer improved performance (beyond what is achievable by typical pure Python exploratory tools), massive scalability, and backend portability. Here, we verified MC/DC's current capabilities against several test and benchmark problems. We also proposed a challenge problem based on the C5G7-TD benchmark model to further exercise MC/DC's time-dependent features. Initial performance assessment shows that MC/DC's Numba mode can run hundreds of times faster than the pure Python mode. However, the speedup degrades if we significantly increase the size of the multidimensional tally, which is an opportunity to further improve the performance by optimizing the current MC/DC Numba implementation. Furthermore, running a simple infinite lattice eigenvalue problem showed that MC/DC runs about 2.5 to 3 times slower than the MC code Shift; however, the runtime ratio grows as we increase the mesh tally complexity, which further warrants optimization of MC/DC's Numba implementation. Nevertheless, we demonstrated that MC/DC manages to match the excellent parallel scalability of Shift for this simple problem. Currently, MC/DC only supports running multigroup transport problems on CPUs with the typical history-based MC algorithm. Future work includes developing continuous-energy physics capabilities and implementing the proposed abstraction strategy (Fig. 1) and compilation scheme (Fig.
2), which are currently being investigated in a light version of the code. ACKNOWLEDGEMENTS This work was supported by the Center for Exascale Monte-Carlo Neutron Transport (CEMeNT), a PSAAP-III project funded by the Department of Energy, grant number DE-NA003967." + }, + { + "url": "http://arxiv.org/abs/2202.08631v2", + "title": "Analysis of Population Control Techniques for Time-Dependent and Eigenvalue Monte Carlo Neutron Transport Calculations", + "abstract": "An extensive study of population control techniques (PCTs) for time-dependent\nand eigenvalue Monte Carlo (MC) neutron transport calculations is presented. We\ndefine PCT as a technique that takes a censused population and returns a\ncontrolled, unbiased population. A new perspective based on an abstraction of\nparticle census and population control is explored, paving the way to improved\nunderstanding and application of the concepts. Five distinct PCTs identified\nfrom the literature are reviewed: Simple Sampling (SS), Splitting-Roulette\n(SR), Combing (CO), modified Combing (COX), and Duplicate-Discard (DD). A\ntheoretical analysis of how much uncertainty is introduced to a population by\neach PCT is presented. Parallel algorithms for the PCTs applicable for both\ntime-dependent and eigenvalue MC simulations are proposed. The relative\nperformances of the PCTs based on runtime and tally mean error or standard\ndeviation are assessed by solving time-dependent and eigenvalue test problems.\nIt is found that SR and CO are equally the most performant techniques, closely\nfollowed by DD.", + "authors": "Ilham Variansyah, Ryan G. McClarren", + "published": "2022-02-17", + "updated": "2022-06-17", + "primary_cat": "physics.comp-ph", + "cats": [ + "physics.comp-ph" + ], + "main_content": "Introduction The Monte Carlo (MC) method is indispensable in neutron transport calculations due to its ability to perform high-fidelity, continuous-energy transport simulations with minimal approximation. MC, however, suffers from stochastic uncertainties, requiring the expensive computation of a large number of neutron source samples or histories. Nevertheless, thanks to the advancement of high-performance parallel computing, the inherently parallel features of MC can be effectively exploited to a very large extent—which can significantly reduce run time to solution, particularly for the computationally expensive time-dependent neutron transport simulations [1, 2, 3, 4, 5, 6, 7, 8, 9, 10]. During a time-dependent MC simulation, the particle population size can monotonically grow or decay depending on the criticality of the system. This monotonic evolution of the population makes time-dependent MC simulation particularly challenging in two different ways. First, in a supercritical system, the particle population size can quickly grow beyond the limited computational resources. Additionally, some MC implementations and variance reduction techniques—such as the precursor forced decay technique in [2] and the time-dependent adaptation of the hybrid source iteration methods in [11, 12]—may promote monotonic population growth, which raises the same issue of limited computational memory. Second, in a subcritical system without a significant persisting external source—such as in pulsed-reactor and shut-down experiments—the particle population size can quickly decay to zero, which leads to a lack of samples and yields statistically noisy tally results at later times of the simulation.
One typically uses a Population Control Technique (PCT) to address the monotonic population growth and collapse issues discussed above. A PCT essentially controls the size of a particle population to be near a desired value while preserving certain statistical expectations to achieve an unbiased MC simulation. In the implementation of PCT, a time census is employed to limit the population growth/collapse. The census introduces a time boundary that stops particles whenever they are about to cross it. When all particles have hit the time boundary, the time census is completed, and PCT can be performed on the censused particles. More recent applications of PCT include the use of random particle duplication or discard [3] in Serpent 2 [13], the splitting and Russian-Roulette technique [4] in MCATK [14], the particle combing technique [15] in TRIPOLI-4 [16, 17] and GUARDYAN [8], and a modified combing technique most recently introduced in [10]. An innovative approach to performing time-dependent MC is proposed by [9]. The central idea is to re-purpose the generally available k-eigenvalue MC simulation infrastructure to perform time-dependent simulations. This approach works because there is a built-in population control in k-eigenvalue MC simulation. Besides the introduction of the 1/k factor on the fission operator, which is essential in achieving a steady-state configuration, simple sampling is typically performed to ensure that a certain number of particles are sampled from the fission bank and then used as the particle source for the simulation of the next fission generation. Observing the significance of that connection between the k-eigenvalue and time-dependent MC simulations offers an improved understanding of PCT. Such a study has been done to an extent by Cullen et al. in [18]. Nevertheless, one can take further advantage of this connection by exploring potential benefits from and for both of the simulation modes. Despite the multiple distinct PCTs proposed in the literature [2, 3, 4, 9, 10], documented studies characterizing and assessing the relative performances of all the identified PCTs are still very limited. A more recent effort found in [4] specifically compares the splitting and Russian-Roulette technique [4] to the particle combing technique [15]—hereafter referred to as Splitting-Roulette (SR) and Combing (CO), respectively. Sweezy et al. [4] propose a normalized SR as an alternative to CO, which may suffer from unwanted behavior due to possible correlations in the particle order. On the other hand, Faucher et al. [17] and Legrady et al. [19] prefer the use of CO instead of SR due to the inherent bias in the normalized SR [4] and suggest that the unwanted behavior of CO is unlikely to occur in practice. This support for CO, or, if you will, ctenophilia, is strengthened by an assertion [19] stating that the SR technique described in [4] is at least 2-3 times less efficient than CO. This assertion is based on a comparative study of CO and a “Russian roulette and splitting” technique reported in [20]. However, the Russian roulette and splitting technique in [20] seems to be different from the SR technique described in [4]; and for the record, the study [20] does not claim that the technique compared refers to [4]. Consistent implementation and fair comparison of the two techniques, SR [4] and CO [15], would shed light on their actual relative performances.
In this paper, we present an extensive study on PCT. In Sec. II, we start by making an abstraction of the related concepts, i.e., particle census and population control, followed by a review of the PCTs identified from the literature. In Sec. III, we perform an analysis to reveal the theoretical uncertainty introduced by each of the PCTs, which directly affects the performance of the technique; these theoretical uncertainties are then verified numerically. Sec. IV presents a parallel PCT algorithm that exploits the abstraction established in Sec. II and adapts the nearest-neighbor parallel fission bank algorithm proposed in [21]. In Secs. V and VI, we implement and test the PCTs on time-dependent and eigenvalue MC neutron transport problems, respectively. Finally, Sec. VII summarizes the takeaways of the study. It is worth mentioning that while this paper is focused on PCT application to MC neutron transport, the discussions and analyses presented here are also applicable to PCT application in other transport simulations, such as Implicit Monte Carlo thermal radiative transfer [22].

II. Population Control Technique (PCT)

Population control can be loosely defined as any MC technique that involves altering the number of particles being simulated; this includes many variance reduction techniques (e.g., cell importance and weight window) and even the introduction of the 1/k factor in eigenvalue simulations [18]. However, in this paper, we specifically define population control as a technique that controls populations of censused particles. In this section, we present an abstraction of particle census and population control (their definitions and how they are characterized) and then discuss the distinct techniques identified from the literature.

II.A. Particle Census

Census is a process where we (1) stop particles, (2) remove them from the current simulation, and then (3) store them in a census bank. A census can be performed at arbitrary steps during a simulation; however, there are several triggering events for which performing a census makes physical sense. Perhaps the most obvious one is time-grid crossing. In this time census, we stop particles whenever they are about to cross a predetermined time grid; these censused particles are then removed from the current simulation and stored in a time bank (the census bank). Another useful triggering event is fission emission. In this fission census, neutrons emitted from fission reactions are removed from the current simulation and stored in a fission bank. One can see that this is actually a standard practice that has long been used in k-eigenvalue MC transport simulations. We can take a step further and census not only the fission neutrons but also the scattering neutrons; this results in a collision census, which is typically used in c-eigenvalue MC calculations [23].

There are several reasons to perform a particle census. One is to limit particle population growth so that population control (discussed in more detail next) can be performed. Another reason is to allow the system (the MC model) to change, whether through geometry, composition, or parameter changes due to multi-physics feedback. Additionally, one can also see census as a manifestation of an iterative scheme performed to solve an equation, e.g., power iteration in the k-eigenvalue problem.
It is worth noting that the census time grid for population control does not necessarily need to be identical to the other time grids possibly used in an MC simulation. These other time grids include the one for tally scoring (also known as tally filters in some MC codes, such as OpenMC [24]), the time grid for variance reduction techniques (e.g., weight window and forced precursor decay [2]), and the census time grid for model change or multi-physics feedback.

II.B. Population Control

Figure 1. Illustration of census-enabled MC transport with population control.

Given an initial population of size N, the objective of population control is to return a controlled final population with a size around, or exactly at, a predetermined value M, as illustrated in Fig. 1. In a supercritical system, typically N > M; in a subcritical one, N < M. The final population is then used as the source bank for the successive census-enabled transport simulation, during which a census bank is populated by a certain census mechanism (e.g., time census or fission census, as discussed in Sec. II.A). Once the transport is completed (i.e., both source and secondary particle banks are exhausted), the census bank becomes the initial population to be controlled by a PCT of choice. It is evident that population control does not care about what kind of transport simulation is being performed, whether it is a time-dependent fixed-source or an eigenvalue one. This also implies that any PCT can be used in any kind of transport simulation; as a particular example, one can use the particle combing technique [15] in a k-eigenvalue simulation.

The final population basically consists of copies of the initial particles, but how many times a particle gets copied will differ between particles, and some particles may not get copied at all. The procedure for determining how many times each initial particle gets copied to the final population is the essence of a PCT and has to be done in a way such that the MC simulation is not biased, i.e., the expectations of the population actions, and thus the expectations of the simulation tally results, are preserved. The only requirement for a PCT to be unbiased is to preserve the expected weight of each particle in the initial population. That is, for initial particle $i$ having weight $w_i$:
The second desirable characteristic is that we would like our PCT to preserve the initial population total weight W as much as possible; in other words, if W \u2032 is the \ufb01nal population total weight: W = N X i=1 wi, (3) W \u2032 = N X i=1 Ci, (4) and we would like W \u2032 to be close or equal to W. Booth [15] suggests that such strict 7 \fequality of W \u2032 = W is generally unimportant for neutron and photon transport, but it may be very important in charged particle transport. Therefore, we consider it a desirable characteristic, not a requirement, of PCT. As a remark, PCT is a technique that takes an initial population of size N and total weight W and returns a controlled \ufb01nal population that: (1) has a size equal or close to M, (2) preserves the expected total weight of each particle (i.e., satis\ufb01es Eq. (1), E[Ci] = wi), (3) has a low \u03c3[Ci], and (4) has a total weight equal or close to W. We note that Point (1) is the objective of PCT, Point (2) is the requirement for unbiased PCT, and Points (3) and (4) are desirable characteristics. II.C. The PCTs Per our literature study, we identify \ufb01ve distinct PCTs: (1) Simple Sampling (SS), (2) Duplicate-Discard (DD) [3], (3) Splitting-Roulette (SR) [4], (4) Particle Combing (CO) [15], and (5) Modi\ufb01ed Particle Combing (COX) [10]. Additionally, there are three di\ufb00erent sampling bases with which each of the PCTs can be implemented: uniform, weight-based, and importance-based sampling. II.C.1. Combing (CO) Perhaps the most standardized1 PCT is the particle combing technique (CO). Per our classi\ufb01cation, the \u201cSimple Comb\u201d proposed by Booth (Section II in [15]) is weightbased CO. CO techniques are best explained with graphical illustrations. Let us consider a population control problem with N=6 and M=4. Weight-based CO combs the population as shown in Fig. 2, where \u03be is a random number (from 0 to 1) used to determine the o\ufb00set of the initial tooth of the comb. Once the initial tooth location is set, the remaining teeth are spaced W/M apart. Per Fig. 2, Particles 1 and 5 are copied once, Particle 3 is copied twice, and Particles 2, 4, and 6 are not copied at all. 1CO is the only technique that has a single, well-known origin to refer to ([15]), a proper name, and a clear, unambiguous procedure. 8 \fTo ensure unbiased MC simulation (c.f. Eq. (1)), the copies of particle i are assigned with weight w\u2032 i = W/M. Figure 2. Weight-based CO with initial and \ufb01nal population size N=6 and M=4 (adapted from Fig. 1 in [15]). Booth also proposes the \u201cImportance-Weighted Comb\u201d (Section III in [15]), which per our classi\ufb01cation is importance-based CO. Importance-based CO is similar to the weight-based CO shown in Fig. 2, but instead of using wi for the particle axis, W/M for the distance between teeth, \u03beW/M for the o\ufb00set of the comb, and \ufb01nal weight w\u2032 i = W/M, we respectively use ui, U/M, \u03beU/M, and w\u2032 i = U/(MIi)\u2014where ui = Iiwi is the product of importance Ii and weight of particle i, and U = P i ui is the total of the product. Now we discuss the other variant of CO: uniform CO. Uniform CO treats particles equally, regardless of their weight or importance. Uniform CO combs the initial particles as shown in Fig. 3. Per Fig. 3, each of Particles 1, 3, 4, and 6 is copied once, while Particles 2 and 5 are not copied at all. To ensure unbiased MC simulation (Eq. (1)), copies of particle i are assigned with weight w\u2032 i = (N/M)wi. 
We believe that this uniform variant of CO (as well as those of the other PCTs) has never been articulated in the literature. A discussion on the significance of the PCT sampling bases (uniform, weight-based, or importance-based) is given later in Sec. II.D.1.

Figure 3. Uniform CO with initial and final population size N=6 and M=4.

II.C.2. Modified Combing (COX)

A modification of CO was recently proposed by Ajami et al. [10]. Different from the weight-based CO shown in Fig. 2, weight-based COX combs the initial particles as shown in Fig. 4. In COX, instead of having uniformly-spaced teeth and sampling the offset of the whole comb, we allow the teeth to be non-uniformly spaced by offsetting each tooth with a different random number. The controlled weights $w_i'$ assigned to the particle copies to ensure an unbiased MC simulation (Eq. (1)) are identical to those of CO.

Figure 4. Weight-based COX with initial and final population size N=6 and M=4.

Ajami et al. [10] provide limited discussion and demonstration of how COX compares to CO. In Sec. II.D.2, we discuss how COX may actually avoid a particular drawback of CO; and then in Sec. III, we discuss how that remedy comes at a significant expense.

II.C.3. Splitting-Roulette (SR)

Sweezy et al. [4] propose the weight-based splitting-roulette technique (SR). In SR, we assign each initial particle $i$ a splitting number $s_i$. For uniform, weight-based, and importance-based SR, the values of $s_i$ are respectively $M/N$, $w_i/(W/M)$, and $u_i/(U/M)$. We split each particle $i$ into $\lfloor s_i \rfloor + 1$ copies and then Russian-roulette the last copy with surviving probability $s_i - \lfloor s_i \rfloor$; the function $\lfloor \cdot \rfloor$ denotes the floor function, which produces the greatest integer not greater than its argument. Finally, to ensure an unbiased MC simulation, the surviving particle copies are assigned controlled weights $w_i'$, which happen to be identical to those of the CO techniques.

SR techniques neither exactly produce a final population of size M nor exactly preserve the initial total weight W; however, they preserve the expectations. To exactly preserve the population's total weight W, Sweezy et al. suggest performing a weight normalization at the end of SR. This weight normalization can be applied to other PCTs that do not exactly preserve the population's total weight as well (e.g., uniform and importance-based CO). The significance of this PCT weight normalization is further discussed later in Sec. II.D.3.
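The splitting-roulette procedure just described also maps naturally onto arrays. The following sketch, again with our own names and layout, covers the uniform and weight-based bases:

```python
import numpy as np

def split_roulette(w, M, rng, basis="uniform"):
    """Splitting-Roulette (SR): a minimal sketch."""
    N, W = len(w), w.sum()
    # Splitting numbers: M/N (uniform) or w_i / (W/M) (weight-based).
    s = np.full(N, M / N) if basis == "uniform" else w / (W / M)
    # floor(s_i) guaranteed copies, plus one extra copy that survives
    # Russian roulette with probability s_i - floor(s_i).
    n_copies = np.floor(s).astype(int)
    n_copies += rng.random(N) < (s - np.floor(s))
    idx = np.repeat(np.arange(N), n_copies)
    # Controlled weight w_i' = w_i / s_i, so E[C_i] = s_i (w_i/s_i) = w_i.
    return idx, w[idx] / s[idx]
```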
II.C.4. Simple Sampling (SS)

Simple sampling (SS) is the typical PCT employed in k-eigenvalue MC simulations [21]. In SS, we simply sample M particles from the initial population to be the final population. For uniform SS, all particles have a uniform probability of being sampled at each draw; for weight-based and importance-based SS, the probability for a particle to be sampled at each draw is proportional to its weight $w_i$ and to the product of its weight and importance $u_i$, respectively. Finally, to ensure an unbiased MC simulation, the sampled particles are assigned controlled weights $w_i'$ whose values happen to be identical to those of the other PCTs.

II.C.5. Duplicate-Discard (DD)

We identify the PCT proposed by Leppänen in [3] as the uniform duplicate-discard technique (DD), due to its mechanism of randomly duplicating or discarding particles to achieve the desired population size. In particle duplication (for N < M), we first copy each initial particle once to the final population and then, on top of that, randomly sample M - N particles from the initial population to be copied to the final population. In particle discard (for N > M), we randomly sample N - M particles from the initial population; the sampled particles do not get copied to the final population, while the rest are copied once. Finally, the controlled weight $w_i'$ that satisfies the unbiased MC simulation requirement is identical to that of the other uniform PCTs: $(N/M) w_i$.

One can improve the particle duplication of uniform DD. Instead of keeping one copy of the initial population and then sampling M - N additional particles, we keep $\lfloor M/N \rfloor$ copies and sample only $(M \bmod N)$ particles (we note that "mod" denotes the remainder operator, such that $(M \bmod N) = M - \lfloor M/N \rfloor N$). This improvement reduces both the number of samplings performed and the uncertainty introduced by the PCT.
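A sketch of this improved uniform DD, with the floor-copy optimization for the duplication case (names are again illustrative only):

```python
import numpy as np

def duplicate_discard(w, M, rng):
    """Uniform Duplicate-Discard (DD): a minimal sketch."""
    N = len(w)
    if N >= M:
        # Discard: keep M survivors, chosen uniformly without replacement.
        idx = rng.choice(N, size=M, replace=False)
    else:
        # Duplicate: pre-keep floor(M/N) copies of every particle, then
        # draw only (M mod N) additional distinct particles.
        idx = np.concatenate([np.repeat(np.arange(N), M // N),
                              rng.choice(N, size=M % N, replace=False)])
    # Controlled weight (N/M) w_i preserves E[C_i] = w_i.
    return idx, (N / M) * w[idx]
```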
II.D. Additional Notes on the PCTs

II.D.1. PCT Sampling Basis

As mentioned earlier, each of the five distinct PCTs (CO, COX, SR, SS, and DD) can be implemented with three different sampling bases: uniform, weight-based, and importance-based sampling. The computational procedures of the uniform sampling PCTs are the simplest, followed by their respective weight-based and then importance-based counterparts. As an example, uniform CO (Fig. 3) is simpler than weight-based CO (Fig. 2), as it does not require a binary search to determine where exactly each tooth falls.

If the initial population has a uniform weight, weight-based sampling is identical to uniform sampling, since $W = N w_i$. However, if the initial particles have varying weights, weight-based sampling simultaneously functions as a variance reduction technique as well: particles having relatively large weights tend to be split into multiple copies, which leads to variance reduction; on the other hand, particles with relatively low weights tend to be Russian-rouletted, which may lead to more efficient computation by not spending time tracking small-weight particles. Nevertheless, particle weight does not necessarily indicate particle importance. If the initial particles are assigned some importance values, importance-based sampling offers more effective variance reduction than weight-based sampling.

One may argue that uniform sampling is the least optimal, as it assigns all particles an identical splitting number or surviving probability regardless of their weights and importance. However, uniform sampling can be the optimal choice in two cases. The first is when the population has a uniform weight and unknown importance, which is the case in a fixed-source problem without any variance reduction technique and in the typical k-eigenvalue simulation where all fission neutrons are emitted with uniform weight. The second case is when the MC simulation is already equipped with some variance reduction techniques, such as the weight window or the uniform fission site method [25], because the particle distribution and weight profile of the population would already be optimized, such that particles can be treated equally by the PCT; in other words, this avoids redundancy in the variance reduction effort. In particular, in the application of an effective weight window or the uniform fission site method, the use of weight-based sampling may actually ruin the already optimized particle distribution. The interplay between PCT and variance reduction techniques briefly described above is outside the scope of this study. Furthermore, while the theoretical analysis performed in Sec. III is applicable to all sampling bases, only the uniform PCTs are implemented and tested in Secs. IV-VI.

II.D.2. Correlation Issue in CO

From Sec. II.C, it is interesting to observe that CO techniques only require one random number to perform the population control (as a comparison, SS and SR respectively require M and N random numbers); in other words, a single random number determines the fate of all particles in the population. This unfortunately yields correlation in the particle sampling. As an example, Particles 1 and 2 in Fig. 2 will never be sampled together. This correlation may produce unwanted behavior depending on how the initial particles are ordered.

Figure 5. An illustration of the correlation issue in CO and how it is remedied in COX; E indicates the particle's energy in MeV.

Sweezy et al. [4] provide an illustrative demonstration of such possible unwanted behavior in CO, which is shown in the upper part of Fig. 5. In this postulated PCT problem, we wish to select 2 particles from an ordered initial population of size 4. The initial population consists of alternating 1-MeV and 2-MeV particles, all of which have uniform weight. If we apply CO, we will have a final population with either all 1-MeV or all 2-MeV particles. However, this behavior does not necessarily make the MC simulation biased, because each initial particle is still treated fairly individually, i.e., Eq. (1) is still satisfied. If one were to run the simulation in multiple batches, which is necessary to get a measure of result uncertainty in a census-enabled MC simulation, we would be running half of the batches with all 1-MeV particles and the other half with all 2-MeV particles. While such behavior may result in a larger tally variance, the expectation is still preserved. Outside this postulated PCT problem, some physics is naturally embedded in the particle population order (e.g., adjacent particles may originate from the same emission event). However, there has never been any observable effect of this correlation issue in the practical application of CO [2, 17, 20, 8, 19].

If one wishes to eliminate this possible correlation issue, the initial population order must be randomized before CO is applied. However, in massively parallel computation with a reproducibility requirement, this pre-randomization process will require a large number of communications, which may ruin the parallel scalability of the simulation. The modified combing technique COX proposed by Ajami et al. [10], to some extent, remedies this correlation issue, as demonstrated in the lower part of Fig. 5. Nevertheless, this remedy comes at the expense of an increased $\sigma[C_i]$, which is discussed later in Sec. III.

II.D.3. PCT Weight Normalization

Some PCTs, i.e., the uniform and importance-based PCTs and all SR techniques, do not exactly preserve the population total weight W. However, the expectation of the total weight is still preserved because

$$E[W'] = \sum_{i=1}^{N} E[C_i] = W, \qquad (5)$$

where the first equality uses Eq. (4) and the second follows from Eqs. (1) and (3).
To exactly preserve W, Sweezy et al. [4] suggest performing a weight normalization after the population control is performed. This is done by multiplying all of the final particles by the factor $W/W'$, so that $C_i^{(\mathrm{norm.})} = (W/W') C_i$. Unfortunately, this PCT weight normalization introduces bias, as Eq. (1) is now violated:

$$E\left[C_i^{(\mathrm{norm.})}\right] = E\left[\frac{W}{W'} C_i\right] = E\left[\frac{W}{W'}\right] w_i \geq w_i, \qquad (6)$$

where the inequality comes from Jensen's inequality [26, 4], which suggests $E[W/W'] \geq 1$. Nevertheless, by using a large number of particles, the bias in the normalized PCTs can be minimized; however, the same is true for the lack of exact total weight preservation in the non-normalized PCTs. In other words, the PCT weight normalization suggested in [4] is only recommended if a preserved total weight is more important than an unbiased MC simulation.

II.D.4. More Advanced PCTs

The techniques considered in this work are basic PCTs. More advanced PCTs include the one proposed by Booth in Section IV of [15], which introduces the idea of partial population weight adjustment, an unbiased alternative to the weight normalization proposed by Sweezy et al. [4] (see Sec. II.D.3), to exactly preserve the population's total weight W. This partial adjustment is technically more advanced than the weight normalization technique; it introduces tunable parameters (i.e., the adjusted partial population size and the number of recursive partial adjustments) and additional challenges for parallel computing implementation. While the proposed partial population weight adjustment is applied to importance-based CO in [15], it can basically be applied to other PCTs that do not exactly preserve W as well. Other developments of advanced PCTs include the more recent study by Legrady et al. [19], which introduces several advanced CO techniques specifically improved for extensive variance reduction.

III. Uncertainty Introduced by PCT

III.A. Theoretical Analysis

By determining the first and second moments of $C_i$ (the total weight of the copies of initial particle $i$ in the final population), we can determine the variance introduced by a PCT:

$$\mathrm{Var}[C_i] = E[C_i^2] - E[C_i]^2. \qquad (7)$$

Another and perhaps more illustrative quantity is the relative uncertainty (standard deviation) introduced by the PCT to each particle $i$ in the initial population:

$$\sigma_r[C_i] = \frac{\sigma[C_i]}{w_i} = \frac{1}{w_i}\sqrt{\mathrm{Var}[C_i]}. \qquad (8)$$

Unless normalized (as discussed in Sec. II.D.3), all of the identified PCTs (SS, SR, CO, COX, and DD) are unbiased, which means $E[C_i] = w_i$. However, the second moments $E[C_i^2]$ of the PCTs may differ and thus become the key to determining how large an uncertainty $\sigma_r[C_i]$ is introduced by each technique.

In SR (described in Sec. II.C.3), each initial particle $i$ is either copied $\lfloor s_i \rfloor + 1$ times with probability $s_i - \lfloor s_i \rfloor$, or otherwise copied $\lfloor s_i \rfloor$ times. This suggests

$$E[C_i^2]_{\mathrm{SR}} = (s_i - \lfloor s_i \rfloor)\left[(\lfloor s_i \rfloor + 1) w_i'\right]^2 + \left[1 - (s_i - \lfloor s_i \rfloor)\right]\left(\lfloor s_i \rfloor w_i'\right)^2, \qquad (9)$$

$$\sigma_r[C_i]_{\mathrm{SR}} = \frac{1}{s_i}\sqrt{-s_i^2 + (2\lfloor s_i \rfloor + 1)s_i - (\lfloor s_i \rfloor^2 + \lfloor s_i \rfloor)}, \qquad (10)$$

where we note that $w_i' = w_i/s_i$.

In CO (described in Sec. II.C.1), each initial particle $i$ is either copied $\lceil s_i \rceil - 1$ times with probability $\lceil s_i \rceil - s_i$, or otherwise copied $\lceil s_i \rceil$ times.
The quantity $s_i$ (splitting number) used in this context happens to be identical to that of SR; the function $\lceil \cdot \rceil$ denotes the ceiling function, which produces the smallest integer not smaller than its argument. Following a process similar to that of SR in the previous paragraph, we obtain

$$\sigma_r[C_i]_{\mathrm{CO}} = \frac{1}{s_i}\sqrt{-s_i^2 + (2\lceil s_i \rceil - 1)s_i - (\lceil s_i \rceil^2 - \lceil s_i \rceil)}. \qquad (11)$$

In SS (described in Sec. II.C.4), each particle $i$ can be copied multiple times, up to $M$; this means

$$E[C_i^2]_{\mathrm{SS}} = \sum_{j=0}^{M} \binom{M}{j} \left(\frac{s_i}{M}\right)^j \left(1 - \frac{s_i}{M}\right)^{M-j} (j w_i')^2, \qquad (12)$$

where we use the same definition of $s_i$ as in the other PCTs. Per the binomial theorem, we can find that

$$\mathrm{Var}[C_i]_{\mathrm{SS}} = s_i \left(1 - \frac{s_i}{M}\right) {w_i'}^2, \qquad (13)$$

and thus

$$\sigma_r[C_i]_{\mathrm{SS}} = \sqrt{\frac{1}{s_i} - \frac{1}{M}} \approx \sqrt{\frac{1}{s_i}}, \qquad (14)$$

where the approximation is due to the fact that typically $s_i \ll M$ (or, equivalently, $N \gg 1$ for the uniform PCTs).

In uniform DD (described in Sec. II.C.5), we have two different cases. In the case of N > M, we uniformly discard N - M particles from the initial population. Therefore, particle $i$ has to survive all of the discard draws to get copied once; otherwise it does not get copied at all. This means, for N > M, we have

$$E[C_i^2]_{\mathrm{DD}} = \left(\frac{N-1}{N} \times \frac{N-2}{N-1} \times \cdots \times \frac{M}{M+1}\right) {w_i'}^2 = \frac{M}{N} {w_i'}^2, \qquad (15)$$

$$\sigma_r[C_i]_{\mathrm{DD}} = \sqrt{\frac{1}{s_i} - 1}, \qquad (16)$$

where again we use the same definition of $s_i$ as in the other PCTs. On the other hand, in the case of N < M, DD keeps $\lfloor M/N \rfloor$ copies of the initial population and then uniformly draws a particle duplicate $(M \bmod N)$ times. This process is similar to that of SS, except that we sample $(M \bmod N)$ particles instead of M particles and we pre-keep $\lfloor M/N \rfloor$ copies of each initial particle. This gives

$$\sigma_r[C_i]_{\mathrm{DD}} \approx \sqrt{\left(1 - \frac{\lfloor s_i \rfloor}{s_i}\right)\frac{1}{s_i}}, \qquad (17)$$

where the approximation is again due to $N \gg 1$.

In COX (described in Sec. II.C.2), things are more involved, in that deriving the relative uncertainty $\sigma_r[C_i]$ is not as straightforward. First, let us observe how Fig. 4 of COX differs from Fig. 2 of CO. We can see that Particle 1 suffers from the same uncertainty in both methods; in this case, both CO and COX introduce identical uncertainty to Particle 1. However, this is not the case for the other particles. For example, in CO, Particle 2 has only two possibilities: to be copied once or not at all; but in COX, Particle 2 has an additional possibility, which is to be copied twice. Due to this additional possibility, COX introduces higher uncertainty to Particle 2 than CO does. Similar findings can be observed for Particle 3. These observations indicate that $\sigma_r[C_i]_{\mathrm{COX}} \geq \sigma_r[C_i]_{\mathrm{CO}}$, depending on how particle $i$ is located relative to the comb grid (the broken line in Fig. 4).

Figures 6 and 7 illustrate the different situations of how particles can be located relative to the COX comb grid. The Particle 2 and Particle 3 cases discussed in the previous paragraph are illustrated by the lower parts of Figs. 6 and 7, respectively. We note that we use a unit-spaced comb grid and the same definition of $s_i$ as in the other PCTs; this makes the analysis applicable to COX with any sampling basis.
Symbols on the \ufb01gures\u2014i.e., \u03b6i = 1 \u2212\u03b4i and \u03b8i = si + \u03b4i \u2212\u2308si\u2309\u2014serve as key quantities to derive E[C2 i ]COX as a function of the comb o\ufb00set \u03b4i. By observing the \ufb01gures, we found that E[C2 i ]COX (and thus \u03c3r[Ci]COX) is dependent on \u03b4i, and the dependency is periodic with a unit period in \u03b4i. Figure 6. Illustration of particles with si \u22641 located \u03b4i away from COX comb grid (the broken lines). Figure 7. Illustration of particles with si \u22651 located \u03b4i away from COX comb grid (the broken lines). On the upper part of Fig. 6, we have si \u22641 and 0 \u2264\u03b4i \u22641 \u2212si; in this case, COX 19 \fand CO are identical. On the lower part of Fig. 6, we have si \u22641 and 1 \u2212si < \u03b4i \u22641; in this case, we have E[C2 i ]COX = \u03b6i\u03b8i(2w\u2032 i)2 + (\u03b6i + \u03b8i \u22122\u03b6i\u03b8i)(w\u2032 i)2. (18) On the upper part of Fig. 7, we have si \u22651 and 0 < \u03b4i \u2264\u2308si\u2309\u2212si; in this case, we have E[C2 i ]COX = \u03b6i\u03b8i(\u2308si\u2309w\u2032 i)2 + (\u03b6i + \u03b8i \u22122\u03b6i\u03b8i) \u0002 (\u2308si\u2309\u22121) w\u2032 i \u00032 + (1 \u2212\u03b6i)(1 \u2212\u03b8i) \u0002 (\u2308si\u2309\u22122) w\u2032 i \u00032 . (19) Finally, on the lower part of Fig. 7, we have si \u22651 and \u2308si\u2309\u2212si < \u03b4i \u22641; in this case, we have E[C2 i ]COX = \u03b6i\u03b8i \u0002 (\u2308si\u2309+ 1) w\u2032 i \u00032 + (\u03b6i + \u03b8i \u22122\u03b6i\u03b8i)(\u2308si\u2309w\u2032 i)2 + (1 \u2212\u03b6i)(1 \u2212\u03b8i) \u0002 (\u2308si\u2309\u22121) w\u2032 i \u00032 . (20) Fig. 8 shows the resulting \u03c3r[Ci] of COX as a function of \u03b4i at di\ufb00erent values of si. Figure 8. Theoretical relative uncertainty \u03c3r[Ci] of COX as a function of \u03b4i at di\ufb00erent values of si. The derived theoretical relative uncertainty \u03c3r[Ci] of the PCTs\u2014i.e., Eq. (10) for SR, Eq. (11) for CO, Eq. (14) for SS, and Eqs. (16) and (17) for DD\u2014are plotted in 20 \fFig. 9. Di\ufb00erent to those of the other PCTs, \u03c3r[Ci] of COX is dependent on \u03b4i as shown in Fig. 8; thus, in Fig. 9, we plot its average value and shade the region (min to max) of its possible values. The x-axis is chosen to be 1/si, which is equivalent to the ratio w\u2032 i/wi\u2014or N/M for the uniform PCTs. This x-axis e\ufb00ectively represents a measure of the system\u2019s population growth, which is dependent on the system criticality and the census frequency. Roughly speaking, one can say that N/M increases with the criticality of the system as illustrated with the arrows in the \ufb01gure. Figure 9. Theoretical relative uncertainty \u03c3r[Ci] introduced by di\ufb00erent PCTs. The larger \u03c3r[Ci], the larger the uncertainty introduced by the PCTs, which may lead to less precise (more statistical noise) results. From Fig. 9, it is evident that in a growing population regime (\u201cSuper\u201d), the larger the ratio N/M, the larger the uncertainty introduced by the PCTs; this trend generally extends to the decaying population regime (\u201cSub\u201d). However, some methods (SR, CO, and DD) take advantage of the pure-splitting scenario\u2014in which M is a multiple of N\u2014such that \u03c3r[Ci] drops to zero. We note that given a reasonably accurate prediction of population decay rate, one can take advantage of this behavior to minimize uncertainty introduced by the PCTs in systems with decaying population. 
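The closed-form results above are simple to evaluate. The sketch below (our own code, not the paper's) reproduces the theoretical curves of Fig. 9 for the uniform techniques, parameterized by ratio = $1/s_i$ = N/M; SS uses the large-M approximation of Eq. (14).

```python
import numpy as np

def sigma_r(ratio, pct):
    """Theoretical sigma_r[C_i] vs. ratio = 1/s_i (= N/M, uniform PCTs),
    per Eqs. (10), (11), (14), (16), and (17)."""
    s = 1.0 / np.asarray(ratio, dtype=float)
    fl, ce = np.floor(s), np.ceil(s)
    if pct == "SR":       # Eq. (10)
        var = (-s**2 + (2*fl + 1)*s - (fl**2 + fl)) / s**2
    elif pct == "CO":     # Eq. (11)
        var = (-s**2 + (2*ce - 1)*s - (ce**2 - ce)) / s**2
    elif pct == "SS":     # Eq. (14), s_i << M limit
        var = 1.0 / s
    elif pct == "DD":     # Eq. (16) for N > M, Eq. (17) for N < M
        var = np.where(s <= 1.0, 1.0/s - 1.0, (1.0 - fl/s) / s)
    else:
        raise ValueError(pct)
    return np.sqrt(np.maximum(var, 0.0))

# Pure splitting (M a multiple of N) introduces no uncertainty for SR and CO:
print(sigma_r(0.5, "SR"), sigma_r(0.5, "CO"))  # -> 0.0 0.0
```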
In terms of $\sigma_r[C_i]$, SS is the worst PCT, followed by COX; in particular, unlike the other PCTs, SS and COX introduce significant uncertainties even when N ≈ M (which is the case throughout the active cycles of an eigenvalue simulation, see Sec. VI). On the other hand, SR and CO are identically the best.

III.B. Numerical Verification

To numerically verify the theoretical $\sigma_r[C_i]$ derived in the previous subsection, we implement the PCTs in a Python-based research MC code and devise a PCT test problem. Per the discussion in Sec. II.D.1, only the uniform PCTs are discussed here. Nevertheless, a similar setup can be used to verify PCTs with the other sampling bases. In the test problem, we perform population control on an initial population with a cosine statistical weight distribution:

$$w_i = \cos\left(\frac{i-1}{N-1}\pi\right) + 1, \quad i = 1, 2, \ldots, N. \qquad (21)$$

Each initial particle $i$ is associated with tally bin $i$. All copies of particle $i$ in the final population score their controlled weight $w_i'$ to tally bin $i$; in other words, we are tallying $C_i$.

Figs. 10 and 11 show the resulting $C_i$ of the different PCTs for N/M = 1.25 and N/M = 0.75, respectively. In each subplot, the red line indicates the analog result, where no PCT is performed and no uncertainty is introduced to the population ($C_i = w_i$). We note that there are N discrete values of the analog $C_i$, but we present it as a line to distinguish it from the values calculated using the PCTs, which are marked by the blue circles. As discussed earlier in this section, PCTs introduce some uncertainty to the population, and the magnitudes of the uncertainties are illustrated by how far the blue circles deviate from the red line: the more the blue circles spread away from the red line, the more uncertainty is introduced by the respective technique. We note that the results shown in Figs. 10 and 11 are in agreement with the theoretical uncertainty shown in Fig. 9, i.e., SS introduces the most uncertainty, followed by COX (and DD, for N/M < 1), while CO and SR introduce the least.

Figure 10. PCT test problem results with N = 1250 and M = 1000 for the different techniques: (a) Simple Sampling, (b) Splitting-Roulette, (c) Combing, (d) Modified Combing, (e) Duplicate-Discard.

Figure 11. PCT test problem results with N = 750 and M = 1000 for the different techniques: (a) Simple Sampling, (b) Splitting-Roulette, (c) Combing, (d) Modified Combing, (e) Duplicate-Discard.

Next, we would like to use the PCT test problem to verify the theoretical PCT uncertainties $\sigma_r[C_i]$ derived in this section. We set the target size M to 1000 and consider multiple values of N such that N/M ranges from 0.75 to 1.25. In each case, the population control is repeated 100 times so that we can determine the relative standard deviation $\sigma_r[C_i]$ based on the accumulation of $C_i$ and $C_i^2$. Furthermore, we randomize the particle order in the population "stack" at each repetition. For the uniform PCTs, $\sigma_r[C_i]$ is independent of $i$, as it only depends on the value of N/M, as shown in Fig. 9. Therefore, in each case of N/M, we take the average of $\sigma_r[C_i]$ over all $i$ as the final result. Finally, these numerical results from all cases of N/M are compared to the theoretical values, as shown in Fig. 12. The numerical results are denoted by the markers, and the lines are the theoretical values identical to those in Fig. 9.
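For concreteness, the verification procedure just described can be sketched as follows (our own code; any population-control routine with the `(w, M, rng) -> (indices, weights)` interface of the earlier sketches can be passed in):

```python
import numpy as np

def estimate_sigma_r(pct_fn, N, M, n_rep=100, seed=1):
    """Empirical sigma_r[C_i] for the cosine-weight population of Eq. (21)."""
    rng = np.random.default_rng(seed)
    i = np.arange(N)
    w = np.cos(i / (N - 1) * np.pi) + 1.0        # Eq. (21), zero-based i
    S1, S2 = np.zeros(N), np.zeros(N)
    for _ in range(n_rep):
        perm = rng.permutation(N)                # randomize the "stack" order
        idx, w_new = pct_fn(w[perm], M, rng)
        Ci = np.zeros(N)
        np.add.at(Ci, perm[idx], w_new)          # tally C_i into bin i
        S1 += Ci
        S2 += Ci**2
    var = S2 / n_rep - (S1 / n_rep) ** 2
    mask = w > 0.0                               # w_N = 0 exactly; avoid 0/0
    return np.mean(np.sqrt(np.maximum(var[mask], 0.0)) / w[mask])
```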
Excellent agreement is observed, even for COX with its ranging theoretical $\sigma_r[C_i]$ (the shaded area). This verifies not only the theoretical $\sigma_r[C_i]$ derived in Sec. III.A but also the PCT implementations.

Figure 12. Verification of the relative uncertainty $\sigma_r[C_i]$ introduced by the PCTs.

IV. Parallel PCT Algorithm

Romano and Forget [21] introduce an efficient, reproducible, parallel fission bank algorithm for k-eigenvalue MC simulation; in that paper, the typical uniform SS is described as the PCT. However, per our discussion in Secs. II.A and II.B, we can actually apply the algorithm not only to the k-eigenvalue MC simulation (fission census) but also to the time-dependent fixed-source one with time census. This allows us to adapt the algorithm and design a common population control code routine for both simulation modes. Furthermore, the PCT of choice can be any of the five PCTs discussed in Sec. II.C. In this section, we describe the adaptation of the parallel particle bank algorithm proposed in [21], present the resulting pseudo-codes of the different PCTs, and perform a weak scaling study of their implementations.

Generalized from Fig. 3 in [21], Fig. 13 illustrates an example of how particle banks are managed and population-controlled using the proposed parallel algorithm. In the example, we consider 1000 source particles evenly distributed to 4 processors; each processor holds a Source Bank of size 250. The source particles are then transported in parallel. The transported particles are subject to a census mechanism, which can be a time census for a time-dependent simulation or a fission census for an eigenvalue one. Once the particle census is completed, population control is performed on the Census Bank using one of the PCTs (SS, SR, CO, COX, or DD). Finally, the resulting final population (Sampled Bank) is evenly redistributed to the processors via the nearest-neighbor bank-passing algorithm, where each processor only needs to communicate (send or receive) with its adjacent neighbors as needed, without any global particle bank formation or the typical master-slave communication [21].

Figure 13. Illustration (adapted from [21]) of the parallel particle bank handling and population control of the proposed algorithm.

Two exclusive scans need to be performed in the proposed parallel algorithm. An exclusive scan of the Census Bank is required to determine the total size N and the position of the processor's local bank relative to the "global" bank, so that reproducible population control, regardless of the number of processors, can be achieved by consistently following the same random number sequence. The other scan is performed on the Sampled Bank so that we can determine the local bank offsets required to perform the nearest-neighbor bank passing. Algorithms 1 and 2 respectively show the pseudo-codes for the bank-scanning and bank-passing processes, which are used in all of the PCT algorithms (Algs. 3-7). The PCT algorithms take only the minimum information required to perform the population control, namely the Census Bank (which can be either a fission or a time bank) and the target size M, and return the controlled final bank, evenly distributed across processors. Therefore, the proposed parallel PCT algorithms are applicable to both time-dependent fixed-source and eigenvalue MC simulation modes.
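Assuming mpi4py (which the research code uses, as noted below), the census-bank scan reduces to a one-line exclusive scan. The sketch below is our own illustration of the idea behind the bank-scanning step, not a transcription of the paper's Algorithm 1:

```python
from mpi4py import MPI

comm = MPI.COMM_WORLD

def bank_offset_and_size(local_bank):
    """Exclusive scan of local census-bank sizes (a sketch).

    Returns this rank's offset in the 'global' bank and the total size N,
    which lets every rank jump to the same point of a common random number
    sequence for reproducible population control.
    """
    n_local = len(local_bank)
    offset = comm.exscan(n_local, op=MPI.SUM)
    if offset is None:                 # exscan yields None on rank 0
        offset = 0
    N = comm.allreduce(n_local, op=MPI.SUM)
    return offset, N
```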
We also note that the algorithms are designed to start and return with the same random number seed across all processors, which is important for maintaining reproducibility. The parallel algorithms are implemented in the Python-based MC research code using the Python Abstract Base Class feature to allow a streamlined implementation of the different PCTs: SS, SR, CO, COX, and DD. The distributed-memory parallel communication is facilitated by MPI4Py [27]. We use the verification test problem of Sec. III.B to verify that the PCTs are properly implemented and that their results (the distribution of $C_i$) are reproducible, i.e., the same results are produced regardless of the number of processors.

Next, we perform a weak scaling test to assess the relative parallel scalabilities of the different PCTs. The test is similar to the verification test problem in Sec. III.B, except that M is set to $10^5$ times the number of processors, $N \in [0.5M, 1.5M]$ is randomly picked in 50 repetitions, and the initial particles are randomly distributed to the processors. We note that no particle transport is performed; we specifically measure the runtime required to perform population control and manage the particle bank, which includes communicating the attributes and members of the bank across the processors.

Figure 14. Weak scaling results of the different PCTs. Marked solid lines and the associated shaded areas denote the average and standard deviation of the 50 repetitions, respectively.

The weak scaling result is shown in Fig. 14. The marked lines and the shaded areas respectively show the average and the standard deviation of the runtimes over the 50 repetitions. It is found that SR, CO, and COX identically scale the best, followed by DD, and then SS. The sampling mechanisms of SR, CO, and COX are embarrassingly parallel, which makes these techniques scale very well. On the other hand, SS, which scales the worst, needs to serially sample all of the M particles. The sampling mechanism of DD is also done in serial, but it only needs to sample as many as the difference between N and M; this makes DD scale better than SS, and also explains why DD's runtime has a relatively larger standard deviation. Finally, it is worth mentioning that we purposely picked a modest number of particles per processor ($10^5$) to get a balanced demonstration of the significance of work (population control sampling) and communication. Should we significantly increase the number of particles per processor, such that the amount of work far outweighs the amount of communication, the parallel scalabilities of SR, CO, and COX will improve, approaching perfect scaling (a horizontal line).

V. Time-Dependent Problems

In this section, we devise time-dependent MC test problems and then solve them with the PCTs to assess their relative performances. We adapt the homogeneous infinite 1D-slab medium problem of the analytical time-dependent benchmark suite AZURV1 [28]:

$$\left[\frac{\partial}{\partial t} + \mu\frac{\partial}{\partial x} + 1\right]\psi(x, \mu, t) = \frac{c}{2}\phi(x, t) + \frac{1}{2}\delta(x)\delta(t), \qquad (22)$$

which is subject to the following boundary and initial conditions:

$$\lim_{|x|\to\infty}\psi(x, \mu, t) < \infty, \qquad \psi(x, \mu, 0) = 0. \qquad (23)$$

Note that particle position and time are respectively measured in mean-free-paths ($\Sigma_t^{-1}$) and mean-free-times [$(v\Sigma_t)^{-1}$], where $v$ is the particle speed; and we also have the typical scattering parameter $c = (\Sigma_s + \nu\Sigma_f)/\Sigma_t$.
The scalar flux solution $\phi(x, t) = \int_{-1}^{1}\psi(x, \mu, t)\,d\mu$ of this time-dependent problem is

$$\phi(x, t) = \frac{e^{-t}}{2t}\left\{1 + \frac{ct}{4\pi}\left(1 - \eta^2\right)\int_0^\pi \sec^2\left(\frac{u}{2}\right)\mathrm{Re}\left[\xi^2 e^{\frac{ct}{2}(1-\eta^2)\xi}\right]du\right\}H(1 - |\eta|), \qquad (24)$$

where

$$\eta = \frac{x}{t}, \qquad q = \frac{1+\eta}{1-\eta}, \qquad \xi(u) = \frac{\ln(q) + iu}{\eta + i\tan(u/2)}, \qquad (25)$$

and $H(\cdot)$ denotes the Heaviside function. For our test problems, we consider c values of 1.1 and 0.9, respectively representing supercritical and subcritical systems. The analytical solution of the total flux is a simple exponential function, $\phi(t) = \exp[(c-1)t]$; however, the spatial solutions [Eq. (24)] offer some more interesting features, particularly for the supercritical case, as shown in Fig. 15 (note that the solutions in $t \leq 1$ and $|x| \in [10, 20]$ are not shown, to better display the prominent spatial features).

Figure 15. Reference solution of the time-dependent test problems: (a) supercritical, c = 1.1; (b) subcritical, c = 0.9.

The test problems are initiated by an isotropic neutron pulse at x = t = 0 (c.f. Eq. (22)). In both cases, the scalar flux solution gradually diffuses throughout the medium. The difference is that the significant neutron absorption promotes population decay in the subcritical case. On the other hand, while the solution of the supercritical case initially behaves similarly to that of the subcritical one, it eventually rises due to the significant fission multiplication; at t = 20, the population size reaches exp(2) = 7.39 times the initial value.
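Eq. (24) is straightforward to evaluate by one-dimensional numerical quadrature. The sketch below is a direct transcription of Eqs. (24)-(25) using SciPy; the quadrature settings are our own choice, not those of the benchmark suite:

```python
import numpy as np
from scipy.integrate import quad

def azurv1_flux(x, t, c):
    """Scalar flux phi(x, t) of Eqs. (24)-(25)."""
    eta = x / t
    if abs(eta) >= 1.0:                          # Heaviside factor H(1 - |eta|)
        return 0.0
    q = (1.0 + eta) / (1.0 - eta)

    def integrand(u):
        # xi(u) of Eq. (25); the sec^2(u/2) factor is applied at the end.
        xi = (np.log(q) + 1j * u) / (eta + 1j * np.tan(u / 2.0))
        return (xi**2 * np.exp(0.5 * c * t * (1.0 - eta**2) * xi)).real \
               / np.cos(u / 2.0)**2

    integral, _ = quad(integrand, 0.0, np.pi, limit=200)
    return np.exp(-t) / (2.0 * t) * (
        1.0 + c * t / (4.0 * np.pi) * (1.0 - eta**2) * integral)
```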
V.A. Verifying the Time-Dependent Features of the MC Code

To the authors' knowledge, there are three different time-dependent scalar flux quantities that can be calculated via MC simulation: (1) the spatial-average time-average $\phi_{j,k}$, (2) the spatial-average time-edge $\phi_j(t)$, and (3) the spatial-edge time-average $\phi_k(x)$, where $j$ and $k$ respectively denote spatial and time mesh indices. The first tally uses the typical track-length estimator averaged over the time mesh. The second uses a time-edge estimator, which accumulates the product of neutron speed and weight whenever a time grid is crossed, averaged over the spatial mesh. The third uses the typical spatial-mesh-crossing estimator, which scores the particle weight divided by the absolute value of the dot product of the particle direction and the surface normal, averaged over the time mesh. The use of the track-length estimator (spatial-average time-average $\phi_{j,k}$) is typically desired because it generally gets more samples compared to "event-triggered" estimators like the time-edge-crossing ($\phi_j(t)$) and spatial-mesh-crossing ($\phi_k(x)$) ones. Nevertheless, it is worth mentioning that in some applications, the time-edge quantities $\phi_j(t)$ may be more desired than the time-average ones $\phi_{j,k}$.

To simulate the supercritical (c = 1.1) and subcritical (c = 0.9) cases, we consider purely fission media with $\nu = c$. The test problems are simulated using the research MC code, and we record the scalar flux using the three tally estimators, subject to J = 202 uniform spatial meshes spanning $x \in [-20.5, 20.5]$ and the time grid t = 0, 1, 2, ..., 20. To limit particle population growth in the supercritical case, we set a time boundary at the final time t = 20; particles crossing this time boundary are killed (analogous to spatially crossing a convex vacuum boundary). Note that we have not introduced any PCT yet; the MC simulation is still run in analog. Simulations are performed with an increasing number of histories $N_h$. The resulting 2-norms of the normalized error [against the reference formula, Eq. (24), normalized at each time index] of the supercritical problem are shown in Fig. 16. It is found that all of the error 2-norms converge at the expected rate of $O(1/\sqrt{N_h})$ (shown as the black solid line), and the track-length estimator result $\phi_{j,k}$ shows the lowest error, which is in line with the discussion in the previous paragraph. A similar convergence rate is observed in the subcritical case as well. This verifies the time-dependent features of the MC code that we are going to use in the next subsection to assess the relative performances of the PCTs. Additionally, this also suggests that this set of test problems, the AZURV1 benchmark [28], serves as a good verification tool for the time-dependent features of MC codes.

Figure 16. Error convergence of the three time-dependent flux tallies of analog (without PCT) MC simulations of the supercritical test problem. The black solid line indicates a convergence rate of $O(1/\sqrt{N_h})$.

V.B. Performances of the PCTs in Solving the Time-Dependent Test Problems

The supercritical and subcritical problems are solved using the 5 PCTs (SS, SR, CO, COX, DD). Each simulation is run with $10^5$ source particles on 36 distributed-memory processors. We consider uniformly-spaced population control time censuses within $t \in [0, 20]$. With increasing frequency, we consider 8 numbers of censuses: 1, 2, 4, 8, 16, 32, 64, and 128. With 1 census, the census is performed at t = 10; with 2 censuses, it is performed at t = 20/3 and t = 40/3. Finally, each simulation is repeated 100 times with different random number seeds.

Table I. Census configurations for the time-dependent test problems.

Number of censuses in t ∈ [0, 20]:   1      2      4      8      16     32     64     128
Census period (mean-free-times):     10.0   6.67   4.00   2.22   1.18   0.61   0.31   0.16
Expected N/M, supercritical:         2.72   1.95   1.49   1.25   1.12   1.06   1.03   1.02
Expected N/M, subcritical:           0.37   0.51   0.67   0.80   0.89   0.94   0.97   0.98

Figure 17. Expected $\sigma_r[C_i]$ of the simulated cases (c.f. Table I and Fig. 9).

Table I shows the census period and the expected ratio N/M associated with the simulated cases. By referring to Fig. 9, we can estimate the expected uncertainty $\sigma_r[C_i]$ introduced by a PCT at a given value of N/M. For convenience, plots showing the expected $\sigma_r[C_i]$ associated with the simulated cases are provided in Fig. 17. Note that this expected uncertainty is introduced every time the population control is performed; e.g., with 4 censuses, we perform census and population control, and introduce the associated uncertainty, once every 4 mean-free-times. This means that a smaller $\sigma_r[C_i]$ due to a larger census frequency does not necessarily lead to a smaller uncertainty in the simulation result, because the more frequently we perform population control, the more frequently we introduce the uncertainty $\sigma_r[C_i]$ (even though small) to the population.
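The entries of Table I follow directly from the total-flux solution $\phi(t) = \exp[(c-1)t]$: with k uniformly spaced censuses in $t \in [0, 20]$, the census period is $\tau = 20/(k+1)$, and the expected population ratio per period is $N/M = \exp[(c-1)\tau]$. A quick check (our own script):

```python
import numpy as np

for k in [1, 2, 4, 8, 16, 32, 64, 128]:
    tau = 20.0 / (k + 1)                   # census period (mean-free-times)
    print(k, round(tau, 2),
          round(np.exp(+0.1 * tau), 2),    # supercritical, c = 1.1
          round(np.exp(-0.1 * tau), 2))    # subcritical,  c = 0.9
```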
Two performance metrics are considered: (1) the total runtime T and (2) the averaged sample standard deviation of the scalar flux at the end of the simulation time, $\phi_j(t = 20)$, or simply $\phi_j$. The sample standard deviation $\sigma[\phi_j]$ (not the standard deviation of the mean) is calculated over the $N_r = 100$ repetitions:

$$\bar{\phi}_j = \frac{1}{N_r}\sum_{i=1}^{N_r}\phi_j^{(i)}, \qquad (26)$$

$$\sigma[\phi_j]^2 = \frac{1}{N_r - 1}\left[\sum_{i=1}^{N_r}\left[\phi_j^{(i)}\right]^2 - N_r\bar{\phi}_j^2\right], \qquad (27)$$

where the superscript $(i)$ denotes the repetition index. We note that a repetition (or a realization) can be seen as a batch of source particles that makes a single, independent history. Finally, the averaged sample standard deviation of the scalar flux is calculated as follows:

$$\bar{\sigma}[\phi] = \frac{1}{J}\sum_{j=1}^{J}\sigma[\phi_j]. \qquad (28)$$

The resulting performance metrics are shown in Fig. 18. A figure of merit (FOM) based on the two performance metrics,

$$\mathrm{FOM} = \frac{1}{T\,\bar{\sigma}[\phi]^2}, \qquad (29)$$

is also shown in the figure. Finally, the analog (without PCT) solution, also run in 100 repetitions, is shown in the figure as a reference point.
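Given the tallied flux of each repetition, Eqs. (26)-(29) amount to a few array operations. A sketch, where `phi` is assumed to be an $N_r$-by-$J$ array of end-time flux tallies:

```python
import numpy as np

def figure_of_merit(phi, runtime):
    """FOM of Eq. (29) from per-repetition flux tallies phi[i, j]."""
    n_rep = phi.shape[0]
    phi_bar = phi.mean(axis=0)                                         # Eq. (26)
    var = (np.sum(phi**2, axis=0) - n_rep * phi_bar**2) / (n_rep - 1)  # Eq. (27)
    sigma_avg = np.mean(np.sqrt(np.maximum(var, 0.0)))                 # Eq. (28)
    return 1.0 / (runtime * sigma_avg**2)                              # Eq. (29)
```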
Figure 18 not only compares the relative performance of the PCTs but also shows the trends of the related metrics as functions of the census frequency. The figure also illustrates how PCT functions differently in supercritical and subcritical problems.

Figure 18. Performance metrics of the different PCTs for the time-dependent problems: (a) runtime; (b) averaged sample standard deviation of the scalar flux; (c) figure of merit.

V.B.1. Supercritical problem

The main motivation of population control in a supercritical problem is to limit the number of neutrons tracked during the simulation so that it does not exceed the allocated computational memory; in the test problem, the population size exceeds seven times the initial value if population control is not performed. However, this comes at the expense of a less precise (noisier) solution due to the significant uncertainty introduced by the PCT used.

Panel (a) of Fig. 18 shows that applying a PCT in a supercritical problem potentially reduces the overall runtime. However, too frequent a census may result in a net increase in runtime (relative to analog) due to the significant cost of performing too many population controls, which may involve considerable parallel communication. The figure also shows that SR, CO, and COX have the lowest runtime, followed by DD and then SS, which is in agreement with the discussion in Sec. IV.

Panel (b) of Fig. 18 shows the averaged scalar flux standard deviations $\bar{\sigma}[\phi]$ of the different PCTs as functions of the number of censuses performed. The averaged scalar flux standard deviation is a measure of how noisy the simulation is: the larger $\bar{\sigma}[\phi]$, the lower the simulation precision and the larger the noise in the result. The figure demonstrates the significance of the uncertainty introduced by the PCTs (note the lower $\bar{\sigma}[\phi]$ value of the analog result). Generally, the more frequently we perform population control, the more uncertainty is introduced to the population, and the larger $\bar{\sigma}[\phi]$. While N/M (as well as $\sigma_r[C_i]$, per Figs. 9 and 17) is reduced as we increase the census frequency, the number of population controls performed, and thus how often the uncertainty is introduced, also increases. It is shown that all PCTs seem to yield similar averaged scalar flux standard deviations at the lower census frequencies. However, as we increase the census frequency, SR, CO, and DD seem to limit their standard deviations; this demonstrates their superiority over COX and SS, as the three techniques theoretically introduce the least uncertainty in supercritical problems, as shown in Figs. 9 and 17.

Finally, panel (c) of Fig. 18 shows that the FOMs of all PCTs are always lower than that of the analog simulation, and they monotonically decrease as we increase the census frequency; it seems that PCT is parasitic in this MC simulation. However, we should note that the main reason for applying a PCT in a supercritical system is to limit the population size being tracked in the simulation. Nevertheless, in general, one may find situations where the PCTs have larger FOMs than the analog one (for smaller census frequencies), should the advantage of the runtime reduction significantly outweigh the uncertainty introduced by the PCTs. Another important takeaway from the figure is that SR, CO, and DD are in the same ballpark as the best PCTs, followed by COX, and then SS.

V.B.2. Subcritical problem

The main motivation of population control in a subcritical problem is to maintain the population size so that we have enough samples to yield a more precise (less noisy) solution. However, this comes at the expense of an increased overall runtime, as more neutrons need to be tracked. Panel (a) of Fig. 18 shows that applying a PCT in a subcritical problem increases the overall runtime, and it increases further as we perform the population control more frequently. It is also worth mentioning that DD has a runtime similar to those of SR, CO, and COX at higher numbers of censuses; this is because DD only needs to sample as many as |N - M| particles, which gets closer to zero as we increase the census frequency.

Panel (b) of Fig. 18 shows that population control improves the solution precision. One may think that the solution would improve further as the population control is performed more frequently; however, we should be aware that population control introduces uncertainty in a subcritical problem too (see Figs. 9 and 17). The effect of this uncertainty is evident in panel (b) of Fig. 18: at around 8 censuses, the solution improvement starts to diminish, and is even reversed ($\bar{\sigma}[\phi]$ increases) for SS and COX.

Finally, panel (c) of Fig. 18 shows that the PCTs offer improved FOMs relative to the analog. The FOM improves further as we perform population control more frequently. However, it starts to consistently degrade as the effects of the increasing runtime and of the significant uncertainty introduced by the PCT start to dominate. Note that this is similar to the typical trend of a variance reduction technique: it helps to improve the FOM, but will degrade the FOM if it is used too much. Another important takeaway from panel (c) of Fig. 18 is that, similar to the supercritical case, SR, CO, and DD are in the same ballpark as the best PCT, followed by COX, and then SS.

VI. k-Eigenvalue Problem

In an MC calculation, the k-eigenvalue transport problem is typically solved via the method of successive generations. The MC simulation involves the accumulation of fission neutrons in a fission bank. At the end of each generation (i.e., when the fission census is completed), the eigenvalue k is updated, and the generated fission bank is normalized such that its total weight is identical to the target population size M, which is the number of histories per generation.
Finally, the normalized fission bank is set to be the source bank for the subsequent generation. The procedure described above is considered to be the "analog" approach, where no PCT is used. The eigenvalue update and weight normalization are useful for the MC simulation because they direct the neutronics system into the steady-state configuration, which helps maintain the number of source particles simulated at each generation around the user-specified value M. However, in the earlier generations, when the steady-state configuration, i.e., "convergence" of the eigenvalue k, has not yet been achieved, this analog technique may suffer from a highly fluctuating number of source particles, particularly if the initial guesses for the eigenvalue k and the "eigenvector" neutron distribution are poorly chosen. This possible issue can be avoided by performing population control (applying one of the identified PCTs) on the normalized fission bank, so that the resulting source bank is well controlled, regardless of the convergence of the eigenvalue k.

It is worth mentioning that the "eigenfunction normalization" described in the previous paragraphs and the "PCT normalization" discussed in Sec. II.D.3 serve different purposes. The eigenfunction normalization is a necessary step to ensure that the scores accumulated into the simulation tallies are not arbitrary in magnitude. On the other hand, PCT normalization is an optional step to preserve the total weight of the initial population passed to the PCT (at the expense of introducing bias, as discussed in Sec. II.D.3). As another clear distinction, the eigenfunction normalization is performed before we apply the PCT, while the optional PCT normalization is performed after.

Similar to the PCT application in time-dependent MC simulation, PCT application in k-eigenvalue simulation also introduces the uncertainty $\sigma_r[C_i]$ to the population. However, it should be emphasized that this uncertainty introduced by the PCT is not a bias. There is a well-known bias associated with the method of successive generations (which can be mitigated given a sufficiently large number of histories per generation [29]). This bias is introduced when the fission bank is normalized, which is in effect regardless of whether population control is applied. The PCT neither enhances nor eliminates this bias; it merely introduces the additional uncertainty $\sigma_r[C_i]$ to the already-biased simulation.

In an eigenvalue simulation, the number of times population control is performed is determined by the total number of generations, which is typically a very large number (on the order of $10^2$ to $10^3$). This means that the uncertainty $\sigma_r[C_i]$ is introduced by the PCT to the population many times, which, according to the findings in the previous section, may lead to highly noisy solutions. However, the effect of the uncertainty introduced by the PCT on an eigenvalue simulation is expected to be much less pronounced than that on a time-dependent one. This is because once eigenvalue convergence is achieved, we essentially simulate a steady-state system, where the ratio N/M is expected to be close to unity, for which most PCTs introduce minimal uncertainty, as shown in Fig. 9. Nevertheless, some PCTs (SS and COX) still introduce considerably high uncertainties even with N/M ≈ 1.
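Schematically, population control slots into the generation loop right after the eigenfunction normalization. The sketch below is our own illustration of this ordering; `run_generation` is a hypothetical routine that transports one generation of histories and returns the censused fission bank (an array of particle weights) together with a k estimate:

```python
def eigenvalue_cycles(run_generation, source_bank, M, n_generations,
                      pct_fn=None, rng=None):
    """Method of successive generations with an optional PCT hook (sketch)."""
    k = 1.0                                  # initial eigenvalue guess
    for _ in range(n_generations):
        fission_bank, k = run_generation(source_bank, k)
        # Eigenfunction normalization (performed with or without a PCT):
        # scale the bank's total weight to the target size M.
        fission_bank *= M / fission_bank.sum()
        if pct_fn is not None:               # controlled source bank
            _, fission_bank = pct_fn(fission_bank, M, rng)
        source_bank = fission_bank
    return k
```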
In this section we investigate the relative performance of the identified PCTs in a k-eigenvalue transport calculation; in particular, we would like to see whether there is any discernible effect due to the different magnitudes of the uncertainty $\sigma_r[C_i]$ introduced by the techniques. We consider the k-eigenvalue transport problem of the mono-energetic, two-region slab medium from [30]:

\[ \left[ \mu \frac{\partial}{\partial x} + \Sigma_t(x) \right] \psi(x, \mu) = \frac{1}{2} \left[ \Sigma_s(x) + \frac{1}{k} \nu \Sigma_f(x) \right] \phi(x). \]  (30)

Similar to Sec. V, all physical quantities are presented in units of mean free path. The first and second regions occupy x ∈ [0, 1.5] and x ∈ [1.5, 2.5], respectively. The cross sections of the two regions are νΣ_f,1 = 0.6, Σ_s,1 = 0.9, νΣ_f,2 = 0.3, and Σ_s,2 = 0.2. Finally, the two-region slab is subject to vacuum boundaries. Using a deterministic transport method, Kornreich and Parsons [30] provide reference values for the fundamental k-eigenvalue, k = 1.28657, and the associated scalar fluxes at certain points (shown in Fig. 19).

Figure 19. Scalar flux associated with the fundamental k-eigenvalue of the test problem [30].

The k-eigenvalue problem is solved using the analog (without population control) weight normalization technique and the five identified PCTs (SS, SR, CO, COX, and DD). The numbers of inactive and active generations are set to 100 and 200, respectively, with 10^5 neutron histories per generation. We tally the spatially averaged neutron flux φ_j with J = 50 uniform meshes spanning x ∈ [0, 2.5]. A uniform isotropic flux distribution and k = 1 are used as the initial guess. Finally, each simulation is repeated 50 times with different random number seeds and run on 36 distributed-memory processors. The solution of each run is verified by comparing it with the reference solution shown in Fig. 19. Three performance metrics are considered: (1) the averaged sample standard deviation of the scalar flux $\bar{\sigma}[\phi]$ (cf. Eq. (28)), (2) the sample standard deviation of the eigenvalue σ[k], and (3) the total runtime. The resulting performance metrics of the different PCTs are compared in the jittered box plots shown in Fig. 20.

Figure 20. Performance metrics of different PCTs for the k-eigenvalue test problem: (a) averaged sample standard deviation of the scalar flux (left) and sample standard deviation of the eigenvalue (right), relative to the median of the analog technique; (b) runtime spent in population control and managing the particle fission bank (left) and total runtime (right); (c) figure of merit, relative to the median of the analog technique. The magenta line indicates the median; the box indicates the lower (Q1) and upper (Q3) quartiles; the whiskers indicate the minimum and maximum values that are not outliers (within the range [Q1 - 1.5 IQR, Q3 + 1.5 IQR], where IQR = Q3 - Q1).

The analog weight normalization technique is expected to have the least noisy solution. This is because, unlike the PCTs, the analog technique does not introduce any additional uncertainty to the population. The comparison presented in Fig. 20 helps in identifying the cost of performing population control (via one of the PCTs) instead of the analog weight normalization technique.
Part (a) of Fig. 20 compares the averaged sample standard deviations of the scalar flux $\bar{\sigma}[\phi]$ and the sample standard deviations of the eigenvalue σ[k]. The values are relative to the medians of the analog technique. Comparing $\bar{\sigma}[\phi]$, the figure on the left implies that SS introduces considerably larger noise into the simulation, making $\bar{\sigma}[\phi]$ about 8.5% larger than in the analog case. A discernible increase in $\bar{\sigma}[\phi]$ is also found for COX, but only about 2%. The other PCTs (DD, SR, and CO), however, do not really suffer from an increase in $\bar{\sigma}[\phi]$. These findings are in agreement with the theoretical uncertainty introduced by the PCTs shown in Fig. 9. A similar trend can be observed in the figure on the right, which compares σ[k], but it is not as pronounced since the data fluctuate widely. This high fluctuation in σ[k] can be reduced by increasing the number of active generations or the number of particle histories per generation. We note that given the current configuration (200 generations, 10^5 particles/generation), the median σ[k] of the analog technique is 400.3 pcm, with an associated standard deviation of the expected mean of 28.3 pcm.

The left figure of part (b) of Fig. 20 shows that SS runs much slower than the other PCTs, as it suffers from its serial particle sampling. DD also suffers from serial sampling; but, different from SS, the serial sampling of DD only needs to be done |N - M| times, which is close to zero throughout the active generations of the simulation. This makes DD significantly faster than SS. On the other hand, SR, CO, and COX benefit from their embarrassingly parallel sampling procedures, which make them the fastest among the PCTs. Although it does not perform any population control, the analog case still spends some time here, as it still needs to perform the bank passing procedure (Alg. 2). Finally, the figure on the right demonstrates that fission bank handling and population control take a considerable portion of the total simulation runtime (about 9% for SS, and 3% for the other PCTs), which is enough to make SS about 10% slower than the analog case, while the other PCTs are only about 1.5-3% slower.

Finally, part (c) of Fig. 20 compares the resulting FOMs of the PCTs. The FOM follows the definition in Eq. (29). It is found that SS, the simplest yet most well-known technique for k-eigenvalue simulation, is the least performant PCT, having a FOM about 24% lower than the analog technique. On the other hand, SR, CO, and DD are the best PCTs, with FOMs about 2 to 4% lower. However, it is worth mentioning that the discernible decrease in the FOMs of SR, CO, and DD is due to their higher runtimes (right figure of part (b)), since their sample standard deviations (part (a)) are about the same as those of the analog technique. The higher runtimes of the PCTs can be hidden should we run the simulations with a sufficiently large number of particles per processor. In such a situation, SR, CO, and DD would be as performant as the analog technique. VII." + } + ], + "Ryan G.
Mcclarren": [ + { + "url": "http://arxiv.org/abs/2009.11686v2", + "title": "Data-Driven Acceleration of Thermal Radiation Transfer Calculations with the Dynamic Mode Decomposition and a Sequential Singular Value Decomposition", + "abstract": "We present a method for accelerating discrete ordinates radiative transfer\ncalculations for radiative transfer. Our method works with nonlinear positivity\nfixes, in contrast to most acceleration schemes. The method is based on the\ndynamic mode decomposition (DMD) and using a sequence of rank-one updates to\ncompute the singular value decomposition needed for DMD. Using a sequential\nmethod allows us to automatically determine the number of solution vectors to\ninclude in the DMD acceleration. We present results for slab geometry discrete\nordinates calculations with the standard temperature linearization. Compared\nwith positive source iteration, our results demonstrate that our acceleration\nmethod reduces the number of transport sweeps required to solve the problem by\na factor of about 3 on a standard diffusive Marshak wave problem, a factor of\nseveral thousand on a cooling problem where the effective scattering ratio\napproaches unity, and a factor of 20 improvement in a realistic, multimaterial\nradiating shock problem.", + "authors": "Ryan G. McClarren, Terry S. Haut", + "published": "2020-09-22", + "updated": "2021-02-23", + "primary_cat": "physics.comp-ph", + "cats": [ + "physics.comp-ph" + ], + "main_content": "1. Introduction

The dynamic mode decomposition (DMD) [1, 2] is a data-driven method for understanding the spectral properties of an operator. It relies solely on a sequence of vectors generated by an operator and requires no knowledge of the operator itself. In the computational fluid dynamics community it has been used for understanding the properties of flows [3] and for comparing simulation and experiment [1]. For neutron transport problems it was introduced as a technique to estimate time eigenvalues [4], for creating reduced order models [5], to understand stability [6], and for accelerating power iterations for k-eigenvalue problems [7]. In this work we turn to the problem of accelerating discrete ordinates solutions to radiative transfer problems (primarily x-ray radiative transfer for time-dependent high-energy density physics applications).

In radiative transfer problems positive solutions, by which we mean positive radiation densities, are essential due to the coupling of the radiation transport equation to an equation for the material temperature. Negative densities can lead to negative material temperatures that are both nonphysical and cause instabilities. Moreover, many numerical methods based on high-order representations of the solution [8] or different angular discretizations [9] can lead to negative radiation densities. Methods to remove the negative solutions that arise in these problems have been presented over the years. The zero-and-rescale fix [10, 11] sets any negative values to zero and scales the other unknowns to conserve particles. This method has been shown to be effective, but it does not preserve certain moments of the transport equation. The consistent set-to-zero method (CSZ) [12] addresses this problem by solving a local nonlinear equation to remove negativities. Other attempts to address negative solutions are the exponential discontinuous method [13] and the positive spherical harmonics method [14, 15].
All of these methods to remove negative solutions (with the exception of the exponential discontinuous scheme) render the solution of the radiation transport equation nonlinear. Positive source iteration (a form of nonlinear Richardson iteration) can still be used, but it can be arbitrarily slow to converge on diffusive problems [16]. Moreover, the nonlinear nature of the solution technique implies that standard acceleration techniques based on linear problems, such as diffusion synthetic acceleration (DSA) [16] and preconditioned GMRES [17], can no longer be used. There have been attempts to derive acceleration methods based on Jacobian-free Newton-Krylov [18] and nonlinear acceleration through a quasi-diffusion approach [11].

In this paper we propose a simple acceleration based on the dynamic mode decomposition. We use DMD to estimate the slowest decaying modes in positive source iteration and then estimate the converged solution. Because DMD is a data-driven method, it is simple to implement. DMD relies on the computation of a singular value decomposition (SVD) of a data matrix containing the solution for the scalar intensity (i.e., the scalar flux) at several iterations. To alleviate the expense of this decomposition, we employ a sequential algorithm that estimates the SVD using rank-one updates. Additionally, because we use a sequential algorithm, we can automatically determine the number of iterations to include in the DMD update. The inclusion of a sequential SVD and the automatic selection of the number of iterations are the two key improvements over our preliminary work presented at a conference [19] and over a similar approach for linear problems given by Andersson and Eriksson [20]. While data-driven methods may seem outside the normal toolkit of particle transport research, it is worth noting that Krylov methods such as GMRES can be thought of as data-driven because they do not require knowledge of the matrix, only the action of the matrix. We also believe that the explosion of data being generated in the computational sciences will be another spur to investigate more of these kinds of methods.

In this work we compare a DMD acceleration technique based on a sequential SVD with positive source iteration and demonstrate a significant reduction in the number of iterations required to converge. Though other acceleration techniques have been proposed [18, 11], and many others are possible (including nonlinear GMRES [21] and nonlinear Krylov acceleration [22]), a thorough, fair comparison of these methods is outside the scope of this work. Such a comparison would need to employ the methods in the same code base and utilize the latest implementations of all the constituent parts (solvers, preconditioners, parallel strategies, etc.) and would be an excellent contribution to the state of knowledge for future work.

This paper is organized as follows. In section 2 we introduce the dynamic mode decomposition and its properties. In section 3 we discuss the gray, discrete ordinates radiative transfer equations and the discontinuous Galerkin discretization of those equations using Bernstein polynomials. Section 4 gives the standard, unaccelerated positive source iteration method, before we present the DMD acceleration for that method in section 5. Section 6 gives numerical results, followed by conclusions and future work in section 7.
2. The Dynamic Mode Decomposition

Here we discuss the properties of the dynamic mode decomposition (DMD) for approximating an operator based on information from the action of the operator. A thorough treatment of the theory of this decomposition can be found in [1, 2, 23, 3]. We consider a sequence of vectors y_k that are related by the application of an operator A:

\[ y_{k+1} = A y_k. \]  (1)

The vectors y_k ∈ R^N, A is an operator of size N x N, and k = 0, ..., K. The vectors y_k could come from a discretized PDE, experimental measurements, sensor readings, etc. As we will see, knowledge of A is not required; only the y_k need to be known. To find the DMD decomposition, we collect the vectors into data matrices of size N x K:

\[ Y_+ = \begin{pmatrix} | & | & & | \\ y_1 & y_2 & \cdots & y_K \\ | & | & & | \end{pmatrix}, \qquad Y_- = \begin{pmatrix} | & | & & | \\ y_0 & y_1 & \cdots & y_{K-1} \\ | & | & & | \end{pmatrix}. \]  (2)

With the data matrices one can write Eq. (1) as

\[ Y_+ = A Y_-. \]  (3)

We then take the thin singular value decomposition (SVD) of Y_- to write

\[ Y_- = U \Sigma V^{\mathrm{T}}, \]  (4)

where U is an N x K orthogonal matrix, Σ is a diagonal K x K matrix with non-negative entries on the diagonal, and V is a K x K orthogonal matrix. The matrix U has columns that form an orthonormal basis for the column space of Y_- ⊂ R^N. In the case when there are only r < K nonzero singular values, we use the compact SVD, where U is N x r, Σ is r x r, and V is K x r. We substitute the SVD of Y_- into Eq. (3) to get

\[ Y_+ = A U \Sigma V^{\mathrm{T}}, \]  (5)

and then we use the orthonormality properties of V and U, and the fact that Σ is a diagonal matrix with nonzero entries, to write

\[ \tilde{A} \equiv U^{\mathrm{T}} A U = U^{\mathrm{T}} Y_+ V \Sigma^{-1}. \]  (6)

The matrix Ã is an r x r matrix that is a rank-r approximation to A, where r is the number of nonzero singular values in the SVD of Y_-. Notice in Eq. (6) that Ã can be formed using only the data matrices and no knowledge of A.

The dynamic modes of A are determined from the eigenvalues of Ã. This requires solving an r x r eigenvalue problem. If (λ, w) are eigenvalue/eigenvector pairs of Ã, then

\[ \varphi = \frac{1}{\lambda} Y_+ V \Sigma^{-1} w \]  (7)

are the r dynamic modes of A. The mode with the largest magnitude of λ is said to be the dominant mode.

One of the properties of DMD is that the dynamic modes found will depend on the modes excited by the data. For instance, if y_0 is an eigenvector of A, then only one mode will be excited. This property was used previously in time-eigenvalue problems in neutron transport to find the eigenmodes important to the evolution of an experiment [4]. Before moving on, we point out that although DMD is derived as a linear method, it has been shown that DMD can be applied to nonlinear operators; in particular, DMD will find an approximation to the Koopman operator for the nonlinear update [3]. This will allow us to use DMD on nonlinear solution techniques for the radiative transfer equations.
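To fix ideas, the following minimal Python/NumPy sketch implements Eqs. (4)-(7) and checks the recovered eigenvalues against a known operator. The function name, the rank tolerance, and the test operator are our own illustrative choices, not taken from the cited works.

```python
import numpy as np

def dmd_modes(Y):
    """Dynamic mode decomposition of a snapshot sequence (minimal sketch).

    Y is an N x (K+1) array whose columns satisfy y_{k+1} = A y_k for some
    (possibly unknown) operator A. Returns the DMD eigenvalues and modes
    following Eqs. (4)-(7)."""
    Ym, Yp = Y[:, :-1], Y[:, 1:]                 # the data matrices Y_- and Y_+
    U, s, Vt = np.linalg.svd(Ym, full_matrices=False)
    r = int(np.sum(s > 1e-12 * s[0]))            # drop numerically zero singular values
    U, s, V = U[:, :r], s[:r], Vt[:r, :].T
    Atilde = U.T @ Yp @ V / s                    # Eq. (6): U^T A U, an r x r matrix
    lam, W = np.linalg.eig(Atilde)
    Phi = (Yp @ V / s) @ W / lam                 # Eq. (7): the r dynamic modes of A
    return lam, Phi

# Quick check: recover the spectrum of a known diagonal operator.
rng = np.random.default_rng(0)
A = np.diag([0.9, 0.5, 0.1])
Y = np.empty((3, 8))
Y[:, 0] = rng.random(3)
for k in range(7):
    Y[:, k + 1] = A @ Y[:, k]
lam, Phi = dmd_modes(Y)
print(np.sort(lam.real)[::-1])                   # approximately [0.9, 0.5, 0.1]
```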
3. The Gray, Discrete Ordinates Radiative Transfer Equations

We will apply DMD to accelerate the solution of gray radiative transfer calculations using discrete ordinates (SN). The SN equations of thermal radiative transfer in slab geometry in the high-energy density regime are given by

\[ \frac{1}{c} \frac{\partial I_n}{\partial t} + \mu_n \frac{\partial I_n}{\partial x} + \sigma_a(x,t,T) I_n = \frac{1}{2} \sigma_a a c T^4 + \frac{1}{2} Q(x,t), \]  (8a)

\[ \frac{\partial e}{\partial t} = \sigma_a(x,t,T) \left( \phi - a c T^4 \right), \]  (8b)

\[ \phi(x,t) = \sum_{n=1}^{N} w_n I_n. \]  (8c)

Here x [cm] is the spatial variable, t [ns] is the time variable, w_n and µ_n are the weights and abscissas of a quadrature rule over the range (-1, 1), I_n(x,t) [GJ/(cm^2·s·steradian)] is the specific intensity of radiation in quadrature direction n, φ(x,t) [GJ/(cm^2·s)] is the scalar intensity, T(x,t) [keV] is the material temperature, and e(T) [GJ] is the internal energy density of the material, related to T via a known equation of state. Additionally, c ≈ 30 [cm/ns] is the speed of light, a = 0.01372 [GJ/(cm^3·keV^4)], σ_a(x,t,T) [cm^-1] is the absorption opacity, and Q(x,t) is a known, prescribed source. For quadrature rules we apply Gauss-Legendre quadrature rules of even order. The boundary conditions for Eq. (8) prescribe an incoming intensity on the boundary:

\[ I_n(0,t) = g_n(t) \quad \mu_n > 0, \qquad I_n(X,t) = h_n(t) \quad \mu_n < 0, \]  (9)

where g_n and h_n are known functions of time and X is the right boundary of the problem domain. Initial conditions specify I_n(x,0) throughout the problem.

For time discretization we use the backward Euler method with a linearization of the nonlinear temperature term. We write the solution at time t = mΔt using the superscript m: I_n^m(x) = I_n(x, mΔt). The semi-discrete equations are [24]

\[ \mu_n \frac{\partial I_n^{m+1}}{\partial x} + \sigma^* I_n^{m+1} = \frac{1}{2} \left( \sigma_s^m \phi^{m+1} + \sigma_a^m f a c (T^m)^4 \right) + Q^*, \]  (10a)

\[ e^{m+1} = e^m + \Delta t \, \sigma_a^m \left( \phi^{m+1} - a c (T^m)^4 \right), \]  (10b)

where σ* = σ_a^m + (cΔt)^-1, Q* = Q^{m+1/2} + (cΔt)^-1 I_n^m, σ_s = (1 - f)σ_a^m is the effective scattering term, and the factor f is defined as

\[ f(x,t,T) = \frac{1}{1 + \beta c \sigma_a \Delta t}, \qquad \beta = \frac{4a}{C_v}, \]  (11)

with C_v the heat capacity at constant volume for the material. It is also useful to define a radiation temperature as T_r = (φ/(ac))^{1/4}.

The system in (10) is a quasi-steady transport problem to which we apply a discontinuous Galerkin finite element method in space using the Bernstein polynomials as a basis [11, 25]. The resulting equations are

\[ (\mu_n G + F_n + M^*) \mathbf{I}_n^{m+1} = \frac{1}{2} \left( M_s \boldsymbol{\phi}^{m+1} + M_a a c (\mathbf{T}^m)^4 \right) + \mathbf{Q}^*, \]  (12a)

\[ M_e \left( \mathbf{e}^{m+1} - \mathbf{e}^m \right) = M_a \left( \boldsymbol{\phi}^{m+1} - a c (\mathbf{T}^m)^4 \right), \]  (12b)

where the superscript m denotes a time level, µ_n G + F_n is the upwinded representation of the derivative term, and M*, M_s, and M_a are the mass matrices associated with the σ*, σ_s, and σ_a terms, respectively. The vectors I_n^m, φ^m, T^m, and Q* contain the coefficients of the finite element representations of the intensity, scalar intensity, temperature, and source. The system in (12) can be advanced in time by solving Eq. (12a) and then evaluating the material internal energy update in Eq. (12b). However, the addition of the effective scattering term on the RHS of Eq. (12a) couples all of the N quadrature directions together.
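Before discussing the iteration, it may help to see the size of the linearization factors. The short Python sketch below evaluates f, σ_s, and σ* of Eqs. (10)-(11); the sample inputs (σ_a = 300 T^-3 evaluated at T = 1 keV and C_v = 0.3) anticipate the Marshak wave test of Sec. 6.1, and c ≈ 30 cm/ns follows the units stated above.

```python
A_RAD = 0.01372    # radiation constant a [GJ/(cm^3 keV^4)]
C_LIGHT = 29.98    # speed of light [cm/ns]; the text uses c ~ 30 cm/ns

def linearization_factors(sigma_a, Cv, dt):
    """Effective cross sections of the semi-discrete system, Eqs. (10)-(11).

    Inputs: absorption opacity sigma_a [1/cm], heat capacity Cv
    [GJ/(keV cm^3)], and time step dt [ns]. Returns (f, sigma_s, sigma_star)."""
    beta = 4.0 * A_RAD / Cv                           # beta = 4a/Cv, Eq. (11)
    f = 1.0 / (1.0 + beta * C_LIGHT * sigma_a * dt)   # Eq. (11)
    sigma_s = (1.0 - f) * sigma_a                     # effective scattering
    sigma_star = sigma_a + 1.0 / (C_LIGHT * dt)       # removal term in Eq. (10a)
    return f, sigma_s, sigma_star

# Marshak-wave-like material (sigma = 300 T^-3 at T = 1 keV, Cv = 0.3), dt = 0.01 ns.
f, sigma_s, sigma_star = linearization_factors(sigma_a=300.0, Cv=0.3, dt=0.01)
print(f, sigma_s, sigma_star, sigma_s / sigma_star)   # note sigma_s/sigma* near 1
```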
4. Positive Source Iteration Method

The matrices on the LHS of Eq. (12a) can be written in block lower-triangular form [17]. Therefore, we can perform the following iterative procedure to find I_n^{m+1}:

\[ \mathbf{I}_n^{m+1} \big|_{k+1} = (\mu_n G + F_n + M^*)^{-1} \left[ \frac{1}{2} \left( M_s \boldsymbol{\phi}^{m+1} \big|_k + M_a a c (\mathbf{T}^m)^4 \right) + \mathbf{Q}^* \right]. \]  (13)

Here we denote the kth iterate of a quantity as (·)|_k. The application of the inverse of the lower-triangular operator (µ_n G + F_n + M*) is known as a transport sweep: it involves moving information for a particular direction n across the problem domain. Note that if we take the quadrature sum of both sides of Eq. (13), we get an update in terms of the scalar intensity only:

\[ \boldsymbol{\phi}^{m+1} \big|_{k+1} = D (\mu_n G + F_n + M^*)^{-1} \left[ \frac{1}{2} \left( M_s \boldsymbol{\phi}^{m+1} \big|_k + M_a a c (\mathbf{T}^m)^4 \right) + \mathbf{Q}^* \right], \]  (14)

where D represents the quadrature-sum operator \( \sum_{n=1}^{N} w_n \). The iteration scheme in Eq. (14) can be very slow to converge when f → 0 and/or Δt → ∞. In this scenario the discrete equations have no absorption of radiation, leading to iterations with a spectral radius approaching unity [16]. It has been shown that the iterations can be accelerated by using a diffusion correction, called diffusion synthetic acceleration, and by "wrapping" the iterations in a Krylov solver and preconditioning the solver [17].

4.1. Positivity Fixes

Physically, the specific intensity is a phase-space density, and as such it should be non-negative. Nevertheless, it is known that solutions to discrete ordinates problems can be negative [11, 26]. This is particularly vexing in radiative transfer problems because negative intensities can lead to negative temperatures [9], which cause issues when evaluating material properties. To address this issue we use the zero-and-rescale fix [11] to impose positivity on the intensities in our calculations. This is a nonlinear method that monitors the solution during the transport sweep. If one of the coefficients is negative, the finite element representation will have negative values; therefore, we zero out any negative coefficients and rescale the other coefficients local to a zone to conserve the total intensity of the solution locally (a minimal sketch of this zone-local fix is given below). Using transport sweeps with the zero-and-rescale fix is a form of nonlinear Richardson iteration. The addition of this nonlinear fix renders acceleration techniques such as diffusion synthetic acceleration and preconditioned GMRES impotent, as these techniques require a linear iterative strategy. Recently, Yee et al. showed that this nonlinear fix could be accommodated in a nonlinear quasi-diffusion iteration [11]. Here we will show how DMD can be used to handle this type of nonlinearity as well.
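The zone-local fix itself is only a few lines. The Python sketch below is a minimal version of the zero-and-rescale idea, assuming a basis (such as the Bernstein polynomials used here) in which every basis function has the same integral, so that the zone-integrated intensity is proportional to the sum of the coefficients; the handling of an all-negative zone is our own defensive choice, not prescribed by the paper.

```python
import numpy as np

def zero_and_rescale(coeffs):
    """Zero-and-rescale positivity fix for one zone (minimal sketch).

    Negative finite element coefficients are set to zero and the remaining
    coefficients are rescaled so the zone-integrated intensity (proportional
    to the coefficient sum for an equal-integral basis) is conserved."""
    c = np.asarray(coeffs, dtype=float)
    total = c.sum()
    if np.all(c >= 0.0) or total <= 0.0:
        return np.maximum(c, 0.0)      # nothing to conserve if the total is nonpositive
    c = np.maximum(c, 0.0)
    return c * (total / c.sum())       # rescale to conserve the original total

print(zero_and_rescale([0.2, -0.05, 0.5, 0.1]))   # nonnegative and still sums to 0.75
```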
5. DMD Acceleration

In this section we show how DMD can be applied to source iteration using a sequential SVD. To begin, we write Eq. (14) in the following shorthand:

\[ y_{k+1} = A y_k + b, \]  (15)

where y_{k+1} = φ^{m+1}|_{k+1} and

\[ A = \frac{1}{2} D (\mu_n G + F_n + M^*)^{-1} M_s, \]  (16)

\[ b = D (\mu_n G + F_n + M^*)^{-1} \left[ M_a \frac{ac}{2} (\mathbf{T}^m)^4 + \mathbf{Q}^* \right]. \]  (17)

By substituting in the converged solution, we can see that Eq. (15) is an iterative procedure for solving

\[ (I - A) y = b, \]  (18)

where I is the identity operator. Also, if we subtract successive iterations, we get the following relationship for the difference between iterates:

\[ y_{k+1} - y_k = A (y_k - y_{k-1}). \]  (19)

It is this relationship that we will use with DMD to formulate an approximation to A. We define data matrices that contain the differences between iterations:

\[ Y_+ = [\, y_2 - y_1, \; y_3 - y_2, \; \dots, \; y_{K+1} - y_K \,], \]  (20)

\[ Y_- = [\, y_1 - y_0, \; y_2 - y_1, \; \dots, \; y_K - y_{K-1} \,]. \]  (21)

These are each N x K matrices, where N is the number of spatial degrees of freedom. As before, we define an approximate A as the K x K matrix

\[ \tilde{A} = U^{\mathrm{T}} A U = U^{\mathrm{T}} Y_+ V \Sigma^{-1}. \]  (22)

We can use Ã to construct the operator (I - Ã)^{-1} and use it to approximate the solution. Using Eq. (18), we can write the difference between the solution and the Kth iterate as

\[ (I - A)(y - y_K) = b - (I - A) y_K = b - y_K + (y_{K+1} - b) = y_{K+1} - y_K. \]  (23)

Next, we define Δz as the length-K vector that satisfies

\[ y - y_K = U \Delta z, \]  (24)

substitute this into the LHS of Eq. (23), and left-multiply by U^T to get

\[ (I - \tilde{A}) \Delta z = U^{\mathrm{T}} (y_{K+1} - y_K). \]  (25)

This is a linear system of size r ≤ K, where r is the number of nonzero singular values in the SVD of Y_-. We solve this system and approximate the solution as y ≈ y_K + U Δz. This algorithm uses the changes between iterations to estimate the operator A that governs the iterative change; we then, in effect, use the approximated operator to extrapolate the solution to convergence. This update requires taking K + 1 source iterations, the computation of an SVD, and the solution of a small linear system.
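A compact Python sketch of this extrapolation, Eqs. (20)-(25), is given below. The surrogate fixed-point iteration used in the check stands in for the transport sweep of Eq. (15), and the truncation tolerance is an arbitrary illustrative choice.

```python
import numpy as np

def dmd_update(phis):
    """DMD extrapolation of source iteration (sketch of Eqs. (20)-(25)).

    phis is a list of successive iterates [y_0, ..., y_{K+1}]; returns the
    estimate y ~ y_K + U dz from Eqs. (24)-(25)."""
    D = np.diff(np.asarray(phis).T, axis=1)          # iterate differences, N x (K+1)
    Ym, Yp = D[:, :-1], D[:, 1:]                     # Eqs. (20)-(21)
    U, s, Vt = np.linalg.svd(Ym, full_matrices=False)
    keep = s > 1e-12 * s[0]                          # keep the nonzero singular values
    U, s, V = U[:, keep], s[keep], Vt[keep, :].T
    Atilde = U.T @ Yp @ V / s                        # Eq. (22)
    rhs = U.T @ D[:, -1]                             # U^T (y_{K+1} - y_K)
    dz = np.linalg.solve(np.eye(s.size) - Atilde, rhs)   # Eq. (25)
    return phis[-2] + U @ dz                         # y ~ y_K + U dz

# Check against a linear fixed-point iteration y <- A y + b with a known answer.
rng = np.random.default_rng(1)
A = 0.99 * np.diag(rng.random(50))
b = rng.random(50)
y_exact = np.linalg.solve(np.eye(50) - A, b)
ys = [np.zeros(50)]
for _ in range(12):
    ys.append(A @ ys[-1] + b)
err_plain = np.linalg.norm(ys[-1] - y_exact)
err_dmd = np.linalg.norm(dmd_update(ys) - y_exact)
print(err_plain, err_dmd)    # the extrapolated estimate is typically far closer
```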
5.1. Sequential SVD and Automatic DMD

In the previous algorithm, we needed to compute K + 1 iterations in addition to the SVD. However, choosing how many iterations to use is not obvious. Additionally, the SVD requires O(NK^2) operations to compute, where N is the number of spatial degrees of freedom. To address both of these problems we use a sequential SVD generated by rank-one updates. Brand [27] presented an algorithm for taking the SVD of a data matrix whose elements are generated sequentially; the resulting cost of the SVD is then O(NKr), where r is the rank of the SVD. This algorithm was then used by Choi et al. [28] to develop reduced order models for particle transport problems. We use this approach to build up the SVD using successive source iterations and to determine, based on the results, when K is large enough. The function incrementalSVD defined in [28] takes as inputs a new column vector u, a tolerance for linear dependence ε_SVD, a minimum size for a singular value ε_SV, the current singular value decomposition U, Σ, V, and the column index of the vector, k. Thus, we write a call of the incremental SVD as incrementalSVD(u, ε_SVD, ε_SV, U, Σ, V, k).
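For concreteness, a simplified rank-one column update in the spirit of Brand's algorithm is sketched below. This is not the incrementalSVD implementation of [28] (whose interface, tolerances, and orthogonality maintenance differ); it only illustrates why appending one column costs O(Nr) work plus the SVD of a small (r+1) x (r+1) core matrix.

```python
import numpy as np

def svd_append_column(U, s, V, u, tol=1e-12):
    """Update the thin SVD M = U diag(s) V^T when a column u is appended
    (simplified sketch of a Brand-style rank-one update)."""
    p = U.T @ u                            # component of u in span(U)
    e = u - U @ p                          # residual orthogonal to span(U)
    rho = np.linalg.norm(e)
    r = s.size
    Q = np.zeros((r + 1, r + 1))           # small core matrix [[diag(s), p], [0, rho]]
    Q[:r, :r] = np.diag(s)
    Q[:r, r] = p
    Q[r, r] = rho
    Uq, sq, Vqt = np.linalg.svd(Q)
    j = e / rho if rho > tol else np.zeros_like(u)    # new left direction, if any
    U_new = np.hstack([U, j[:, None]]) @ Uq
    V_pad = np.zeros((V.shape[0] + 1, r + 1))         # block-diagonal [[V, 0], [0, 1]]
    V_pad[:-1, :r] = V
    V_pad[-1, r] = 1.0
    V_new = V_pad @ Vqt.T
    keep = sq > tol * sq[0]                # drop numerically zero singular values
    return U_new[:, keep], sq[keep], V_new[:, keep]

# Build an SVD column by column and verify the reconstruction.
rng = np.random.default_rng(2)
M = rng.random((40, 6))
U, s, Vt = np.linalg.svd(M[:, :1], full_matrices=False)
V = Vt.T
for k in range(1, 6):
    U, s, V = svd_append_column(U, s, V, M[:, k])
print(np.allclose(U * s @ V.T, M))         # True: the incremental SVD matches M
```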
5.2. Acceleration Algorithm

We specify the algorithm for applying automatic DMD with the sequential SVD in Algorithm 1. This algorithm uses the incremental SVD function defined in Algorithm 2 of [28]. Our automatic DMD acceleration takes successive source iterations to build up the data matrices defined in Eqs. (20) and (21). After two source iterations, we have enough data to start applying the acceleration. We compute a new value of the estimated solution based on the approximation Ã, as in Eq. (25). We continue making approximations until either the maximum number of iterations is reached or we find that the two most recent source iterations did not add to the rank of Ã. This stopping criterion is used because the data indicate that further source iterations are not improving the approximation Ã.

There are other particularities of the automatic DMD acceleration that we point out here. Firstly, we remove small singular values from the SVD of Y_-. This is done to remove singular values that are unimportant and could add numerical noise to the update. Additionally, we do not compute an update based on DMD if there are eigenvalues of Ã with a magnitude larger than one (cf. line 18 of the algorithm), because such large eigenvalues could allow the solution to diverge in the update.

Algorithm 1: Automatic DMD Acceleration, [φ] = AutomaticDMDAcceleration(A, b, φ|0, ε)
Input: sweep operators A and b, initial guess φ|0, maximum iterations K, current residual estimate ε
Output: approximate solution φ
1: tmpOld = φ|0
2: Y+ = [], Y- = []
3: U = [], Σ = [], V = []
4: for k = 1 to K+1 do
5:   tmp = A·tmpOld + b, i.e., perform a sweep
6:   φ = tmp
7:   if k < K+1 then
8:     append Δk = (tmp - tmpOld) to Y-
9:   end if
10:  if k > 1 then
11:    append Δk = (tmp - tmpOld) to Y+
12:    [U, Σ, V, r] = incrementalSVD(Δk, ε·10^-14, ε·10^-14, U, Σ, V, k-1)
13:  end if
14:  if k > 2 then
15:    remove singular values from Σ less than ε·10^-6 of the trace of Σ
16:    Ã = U^T Y+ V Σ^-1
17:    compute the eigenvalues λ_k of Ã
18:    if max_k |λ_k| < 1 then
19:      solve (I - Ã)Δz = U^T Δk for Δz
20:      φ = tmpOld + U Δz
21:    else
22:      not enough iterations to estimate Ã; continue
23:    end if
24:  end if
25:  tmpOld = tmp
26:  if k > r + 2 then
27:    exit for loop
28:  end if
29: end for
30: return φ^{m+1}

The time update using automatic DMD acceleration is shown in Algorithm 2. When we apply the automatic DMD acceleration to compute a time update, we add J additional source iterations outside the DMD acceleration. This is done to damp any high-frequency errors introduced by the DMD acceleration; in practice we typically use J = 2 or 3. In Algorithm 2 we check for convergence in the source iterations outside the DMD acceleration step. In practice we also check for convergence inside the DMD acceleration function to save on iterations, but this detail is omitted from our listing for clarity.

Algorithm 2: Radiative Transfer Time Step Update with DMD, [I_n, φ, e, T] = RadStep(φ^m, I_n^m, T^m, e^m, J, K, ε_2, ε_∞, ...)
Input: previous solutions φ^m, I_n^m, T^m, and e^m; material properties; quadrature rule; number of extra iterations J and maximum DMD iterations K; ε_2 and ε_∞, the L2 and L∞ tolerances
Output: solutions at time level m+1: φ, I_n, T, and e
1: compute b = D(µ_n G + F_n + M*)^-1 [M_a (ac/2)(T^m)^4 + Q*]
2: φ|0 = φ^m
3: while not converged do
4:   {apply source iteration J times}
5:   for j = 1 to J do
6:     φ|j = A φ|j-1 + b
7:     change = ||φ|j - φ|j-1||_2
8:     if change < ε_2 and ||φ|j - φ|j-1||_∞ < ε_∞ then
9:       {the iterations are converged}
10:      I_n = (µ_n G + F_n + M*)^-1 [ (1/2)(M_s φ|j + M_a ac (T^m)^4) + Q* ]
11:      e = e^m + M_e^-1 M_a (φ - ac(T^m)^4)
12:      compute T by inverting the equation of state at e
13:      return I_n, φ|j, e, and T
14:    end if
15:  end for
16:  {apply DMD acceleration}
17:  φ|0 = AutomaticDMDAcceleration(A, b, φ|J, change)
18: end while

To compute a time step for the radiative transfer solver, the storage requirements for the radiation variables are: two angular flux vectors, for the previous and current angular flux; and the data matrices Y+ and Y-, each of size N x k, where N is the number of spatial degrees of freedom and k ≤ K is the number of iterations required in the DMD acceleration step. The number of iterations (transport sweeps) required for convergence is the sum of the iterations outside the DMD acceleration step and those required in the DMD update. For comparison with standard source iteration, we use the number of transport sweeps as our metric for efficiency. For the nonlinear zero-and-rescale fix, we apply that nonlinearity during the transport sweep, i.e., in the application of D(µ_n G + F_n + M*)^-1. This is not explicitly called out in Algorithm 2, but is understood in our results.

6. Numerical Results

6.1. Diffusive Marshak wave

To demonstrate the effectiveness of our acceleration strategy, we consider a standard, diffusive Marshak wave problem [29, 30, 24]. We use σ = 300 T^-3 and an equation of state given by e = C_v T with C_v = 0.3 GJ/(keV·cm^3); there is no source in the problem. The initial conditions are T(x,0) = 0.001 keV and φ(x,0) = acT(x,0)^4. The domain has an incoming boundary condition of g_n = ac/2 at x = 0 and no incoming radiation at the right edge of the domain. This problem requires positivity fixes, primarily near the wave front: in our numerical calculations we find that the fix is applied in anywhere from 0.1 to 1.5% of the mesh zones visited during the calculation (i.e., up to 2% of the product INM, where I is the number of zones, N is the number of angles, and M is the number of iterations).

We run the problem with different values of the FEM expansion order and numbers of spatial zones, with a time step size of Δt = 0.01 ns and S8 Gauss-Legendre quadrature. In Figure 1, results from the DMD solution with cubic elements of size 0.02 cm are compared with the semi-analytic diffusion solution. The source iteration solutions are identical to the DMD solutions on the scale of the figure and are, therefore, not shown. We see that the S8 solution agrees with the semi-analytic solution except near the wavefront, as has been previously observed in comparisons with the diffusion solution [31, 32, 33].

Figure 1: Comparison of S8 solutions obtained with DMD and the semi-analytic diffusion solution for the Marshak wave problem at 4 different times (material and radiation temperatures at 1, 10, 50, and 100 ns). The S8 solutions used Δt = 0.01 ns, zone sizes of 0.02 cm, and a cubic polynomial basis.

For this problem we inspect the three dominant dynamic modes in the update of φ found from the first application of DMD acceleration in the time steps at t = 1 and 10 ns. These dominant modes are estimates of the slowest decaying error modes from source iteration. In Figure 2 we plot the modes as calculated by Eq. (7); we normalize each mode by dividing by its maximum magnitude. Note that these magnitudes are only defined up to a factor of ±1. From Figure 2 we see that the dominant modes highlight the wavefront and the heated region behind it.
This is expected because most of the change in the solution occurs at the wavefront; it is also here that positivity preservation is needed. The fact that the three modes have nearly the same shape indicates that there are several similar error modes that are slowly decaying.

Figure 2: The three most dominant dynamic modes found during the time steps at t = 1 and 10 ns. The eigenvalues of Ã at the two times are {8.56 x 10^-3, 2.33 x 10^-4, 2.21 x 10^-6} and {1.54 x 10^-3, 4.71 x 10^-5, 2.81 x 10^-7}.

6.1.1. Comparison with Source Iteration

To compare the efficacy of our positive, automatic DMD acceleration with standard positive source iteration, we vary the time step size, the number of zones, and the order of the FEM expansion. Figure 3 shows the average number of iterations, that is, the number of transport sweeps, per time step. We note that for this problem the positivity fix we utilize is needed: the solution near the wavefront can become negative when the fix is not applied.

For this Marshak wave problem, the time step size is a proxy for the scattering ratio σ_s/σ*. Using the definitions of these quantities and the material properties of this problem, we find that the scattering ratio simplifies to

\[ \frac{\sigma_s}{\sigma^*} = \frac{(1-f)\sigma_a}{\sigma_a + \frac{1}{c \Delta t}} \approx \frac{1600 \, \Delta t}{1600 \, \Delta t + 1}. \]  (26)

For Δt = 0.005, 0.01, and 0.02 ns, the corresponding scattering ratios are 0.8889, 0.9412, and 0.9697, respectively. We solve the Marshak wave problem until a final time of t = 10 ns using a variety of spatial resolutions, time steps, and finite element expansion orders, and we report the number of iterations (i.e., transport sweeps) required to solve the problem to the final time in Figure 3. From the figure we see that the DMD-accelerated solutions require significantly fewer iterations than positive source iteration, fewer than half as many. The difference between the required numbers of iterations grows as the scattering ratio increases, to about a 40% reduction when Δt = 0.02 ns. For this problem we do not increase the time step further, as doing so causes nonphysical overheating due to the large time step and actually makes the medium behave less diffusively.

Because DMD is a data-driven acceleration technique, there is no guarantee that changing the number of degrees of freedom will not change the behavior of the iterative convergence. We notice that as the order of the finite element expansion increases, the number of iterations required also increases; this was consistent across the source iteration and DMD-accelerated results. We also observe a slight increase in the number of iterations required by DMD as the number of zones increases in the Δt = 0.005 and 0.01 ns cases. The number of iterations required in the Δt = 0.02 ns case, however, decreases as the number of zones increases above 40. These changes in the required number of iterations as a function of the number of spatial zones and the finite element order track those of previous results for linear problems using a piecewise constant discretization [19].
6.2. Cooling problem

While the Marshak wave is a standard problem in radiative transfer, we are limited in how difficult we can make it from a convergence point of view: the ratio σ_s/σ* can only be made so large. This is due to the fact that large time steps can make the problem nonphysically overheat, which makes the wave travel faster and makes the problem easier. To address this, we contrived a test problem of a slab initially at a uniform temperature of 0.5 keV, with a radiation temperature of 0.45 keV, surrounded by vacuum. The slab is 1 cm thick and has σ_a = σ_0 T^-m, where m = 0 or 3. The problem is run for a single time step of 0.01 ns; the spatial discretization has 50 zones and order 3 polynomials. We adjust σ_0 from 10 to 10^6 to make the problem more difficult. Though this problem does not require positivity fixes, it is indicative of a worst-case scenario.

Figure 3: Number of iterations per time step for the Marshak wave problem using several different time step sizes ((a) Δt = 0.005 ns, (b) Δt = 0.01 ns, (c) Δt = 0.02 ns), numbers of zones (20 to 160), and expansion orders (1 to 4) of the FEM solution. The problem was run until a final time of t = 10 ns. Solid lines denote positive source iteration results; dashed lines are DMD-accelerated calculations.

Figure 4: Number of iterations required to solve the cooling problem as a function of the effective scattering ratio, for DMD and positive source iteration with σ constant and σ ∝ T^-3.

The number of iterations required to solve this problem is shown in Figure 4. In the figure we include results for both forms of the opacity to demonstrate that the added nonlinearity does not seem to affect the behavior. The problem has scattering ratios ranging from about 0.8 up to 1 - 2.7 x 10^-6. The number of iterations required for the DMD-accelerated method is roughly constant as the scattering ratio approaches unity. This is clearly not the case for positive source iteration, whose number of iterations grows rapidly as the scattering ratio approaches one. Indeed, we had to terminate the highest-scattering-ratio case at 6 x 10^4 iterations before reaching convergence; in this hardest case, the number of iterations is smaller by a factor of over 6000 for DMD. This problem demonstrates the effectiveness of DMD in the limit as the scattering ratio approaches unity.

The results also demonstrate that as the scattering ratio increases, it is possible for the number of DMD iterations to go down; we have observed this in other problems of linear particle transport. We believe that this is due to there being less spatial structure in the solution as the scattering ratio approaches one, because there is less cooling of the block. As shown in Figure 5(a), the solution has less spatial detail as the scattering ratio approaches one.
With a lower scattering ratio there is a discontinuity at the cell edge near 0.98 cm that diminishes as the scattering ratio increases. Further evidence for this effect comes from the singular values of the data matrix Y_-, displayed in Figure 5(b). For the time step leading to the scattering ratio nearest unity, there are only two singular values larger than 10^-14, which means that the data can be well represented by a rank-2 approximation. This is not the case for the two lower scattering ratios in Figure 5: both of those cases have at least 6 singular values above 10^-14 and, therefore, more structure in the solution that DMD needs to approximate. This leads to more iterations being needed to form the DMD approximation. Our algorithm using the sequential SVD detects that this is the case and requires fewer iterations when the rank of the data matrix is lower.

Figure 5: (a) The piecewise cubic solution near the slab edge in the cooling problem and (b) the singular values of the Y_- data matrix, for effective scattering ratios σ_s/σ* = 0.781906, 0.973673, and 0.999997.

6.3. Su-Olson Test

There also exist semi-analytic solutions for radiative transfer in an optically thin problem, driven by a radiation source, where the heat capacity of the material is proportional to the cube of the temperature. Transport and diffusion solutions for this problem can be found in [34], and S2 solutions (the solutions are given for the P1 equations, but S2 with Gauss quadrature is equivalent to P1 in slab geometry) are given in [35]. We solve this problem to demonstrate that the DMD-accelerated solution is not slower to converge than positive source iteration in optically thin media, where we would expect that no acceleration is needed. For this problem we observe that the number of iterations required is almost identical between DMD and source iteration; at most we see a 5% decrease in the number of iterations per time step. A comparison of the numerical and analytic solutions is shown in Figure 6.

Figure 6: Comparison of the numerical results from DMD-accelerated SN with the S2/P1 and transport analytic solutions for (a) the radiation energy density and (b) the material temperature at times t = 0.316228 σ/c, σ/c, 3.16228 σ/c, and 10 σ/c, with σ = 1 cm^-1.

6.4. Laser-Driven Radiating Shock Problem

The final problem we solve is a radiative transfer problem inspired by experiments involving laser-driven shocks [36]. In these experiments a laser pulse strikes a beryllium (Be) disk that is on the end of a xenon (Xe) filled tube. The laser launches a shock wave into the Be disk that eventually breaks out into the Xe gas. The state of the system at a given time [37] is used to set up our test problem. The radiative transfer in these shock experiments is complicated due to the large, thin sources that arise [38, 39, 40, 41].
This problem tests how the DMD acceleration technique performs on a realistic problem with multiple materials, large variations in density, and optically thin and thick regions. The optically thick regions of this problem necessitate acceleration of positive source iteration (as we show below). The density, temperature, and material (either Be or Xe) for the test problem are given in Table 1. The table gives the initial conditions for the temperature, with the intensity in equilibrium (i.e., I = acT^4/2). The boundary conditions assume an incoming, isotropic source corresponding to the temperature nearest the boundary. Between the points in the table we linearly interpolate to evaluate the density and initial temperature. All points to the left of 0.1302 mm are beryllium, and the remainder is xenon.

Table 1: Density, initial temperature, and material as a function of position for the radiating shock problem.

Position (mm)  Density (g/cm^3)  Temperature (eV)  Material
0.0000         0.0168            40.1723           Be
0.0462         0.1681            11.6676           Be
0.1046         0.3429            5.5016            Be
0.1080         0.5107            3.6279            Be
0.1134         0.7383            1.9934            Be
0.1241         0.1829            4.6652            Be
0.1300         0.1384            6.2204            Be
0.1302         0.7405            16.2775           Xe
0.1322         0.0493            74.5469           Xe
0.6000         0.0065            14.9126           Xe

The heat capacities are based on a gamma-law equation of state with γ = 5/3 in xenon and γ = 1.45 in beryllium, as calibrated from experiment [42, 43], to give

\[ C_v \left[ \frac{\mathrm{GJ}}{\mathrm{keV} \cdot \mathrm{cm}^3} \right] = \begin{cases} 1.1899 & \text{in Be} \\ 0.05513 & \text{in Xe} \end{cases}. \]  (27)

Additionally, we use an approximate bremsstrahlung opacity [44],

\[ \sigma_a(T) \, \left[ \mathrm{cm}^{-1} \right] = 0.088 \, \rho^2 Z^2 T^{-7/2}, \]  (28)

for T in keV, ρ the density in g/cm^3, and Z the atomic number of the material (4 for Be and 54 for Xe).

Figure 7: The density and the initial temperature, internal energy density, and absorption opacity σ_a for the radiating shock problem, as functions of position.

In Figure 7 the density and the initial values of the temperature, internal energy density, and σ_a are shown as functions of position. At the Be/Xe interface there is a jump in the temperature due to the fact that the two materials have different heat capacities. Between this interface and the other density maximum at 0.1134 mm there is a region where the absorption opacity drops; this is where most of the temperature change due to radiative transfer in this problem will occur.
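As a spot check of the material model, the following few lines evaluate Eqs. (27)-(28) in Python at one of the tabulated states; the printed opacity is only meant to be compared against the range plotted in Fig. 7.

```python
# Heat capacities from Eq. (27), in GJ/(keV cm^3).
CV = {"Be": 1.1899, "Xe": 0.05513}

def sigma_a(T_keV, rho, Z):
    """Approximate bremsstrahlung absorption opacity of Eq. (28), in 1/cm."""
    return 0.088 * rho**2 * Z**2 * T_keV**(-3.5)

# The hot xenon point at x = 0.1322 mm in Table 1 (T = 74.5469 eV).
print(sigma_a(T_keV=74.5469e-3, rho=0.0493, Z=54))   # on the order of 5e3 1/cm
```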
Due to the stiffness of the problem, arising from the large opacity and the small heat capacity, we use the modified linearization from [45] with ℓ = 5. This has the effect of reducing the value of f in Eq. (11) and increasing the scattering. To compare the efficiency of our DMD acceleration, we solve the radiative transfer problem for this shock profile over a time step of 0.01 ns. We consider a spatial domain extending from x = 0 to 0.25 mm with 500 spatial zones and order 3 finite elements. The DMD-accelerated solution required 42 iterations, while positive source iteration required 827; the acceleration led to a speedup of nearly a factor of 20. In other words, the accelerated solution could complete 20 time steps for the cost of a single unaccelerated time step. The solution for this problem after 100 time steps (i.e., t = 1 ns) is shown in Figure 8.

Figure 8: Solution (material and radiation temperatures versus position) from DMD-accelerated S12 for the radiating shock problem at t = 1 ns, along with the initial condition.

7." + }, + { + "url": "http://arxiv.org/abs/1812.05241v1", + "title": "Acceleration of Source Iteration using the Dynamic Mode Decomposition", + "abstract": "We present a novel acceleration technique for improving the convergence of\nsource iteration for discrete ordinates transport calculations. Our approach\nuses the idea of the dynamic mode decomposition (DMD) to estimate the slowly\ndecaying modes from source iteration and remove them from the solution. The\nmemory cost of our acceleration technique is that the scalar flux for a number\nof iterations must be stored; the computational cost is a singular value\ndecomposition of a matrix comprised of those stored scalar fluxes. On 1-D slab\ngeometry problems we observe an order of magnitude reduction in the number of\ntransport sweeps required compared to source iteration and that the number of\nsweeps required is independent of the scattering ratio in the problem. These\nobservations hold for an extremely heterogeneous problem and a 2-D problem. In\n2-D we do observe that the effectiveness of the approach slowly degrades as the\nmesh is refined, but is still about one order of magnitude faster than source\niteration.", + "authors": "Ryan G. McClarren, Terry S. Haut", + "published": "2018-12-13", + "updated": "2018-12-13", + "primary_cat": "physics.comp-ph", + "cats": [ + "physics.comp-ph" + ], + "main_content": "1. INTRODUCTION

In scientific computing we are used to taking a known operator and making approximations to it; this is the basis for most numerical methods. Conversely, without knowledge of the operator, it is possible to use just the action of the operator to generate approximations to it. This is done in Krylov methods, where the action of a linear operator is used to build a subspace in which to find solutions [1]. One can think of such approaches as data-driven methods. One data-driven method is the dynamic mode decomposition (DMD) [2,3,4], which uses the action of the operator to infer eigenmodes of an operator. DMD has enjoyed success in the fluid dynamics community as a way to compare simulation and experiment. This is possible because measured data can also be used in DMD, and the DMD modes can be directly compared between experiment and simulation. DMD has also recently been shown to be capable of computing time eigenvalues of neutron transport problems by computing the evolution of the system over time [5]. This approach works for sub-critical as well as super-critical systems and finds the eigenmodes that are significant in a particular system.

For the purpose of this paper we use DMD to infer information about the source iteration operator in a discrete ordinates (SN) transport calculation. Using DMD we can estimate the slowest decaying modes in the iterative procedure and extrapolate to find an estimated converged solution. We begin with a description of DMD.

2. DYNAMIC MODE DECOMPOSITION

Here we present the basics of the dynamic mode decomposition. For more detail see [2,3,4,6].
Consider a sequence of vectors {y_0, y_1, ..., y_K}, where y_k ∈ R^N. The vectors are related by a potentially unknown linear operator A of size N x N as y_{k+1} = A y_k. If we construct the N x K data matrices

\[ Y_+ = \begin{pmatrix} | & | & & | \\ y_1 & y_2 & \cdots & y_K \\ | & | & & | \end{pmatrix}, \qquad Y_- = \begin{pmatrix} | & | & & | \\ y_0 & y_1 & \cdots & y_{K-1} \\ | & | & & | \end{pmatrix}, \]

we can write Y_+ = A Y_-. At this point we only need to know the data vectors y_k; they could come from a calculation, a measurement, etc. As K → ∞ we could hope to infer properties of A. We take the thin singular value decomposition (SVD) of Y_- to write Y_- = U Σ V^T, where U is an N x K orthogonal matrix, Σ is a diagonal K x K matrix with non-negative entries on the diagonal, and V is a K x K orthogonal matrix. The SVD requires O(NK^2) operations to compute. Later, we will want K ≪ N if, for example, N is the number of unknowns in a transport calculation. Also, if the column rank of Y_- is less than K, then there is a further reduction in the SVD size. The matrix U has columns that form an orthonormal basis for the column space of Y_- ⊂ R^N. Using the SVD we get Y_+ = A U Σ V^T. If there are only r < K nonzero singular values in Σ, we use the compact SVD, where U is N x r, Σ is r x r, and V is K x r. We can rearrange the relationship between Y_+ and Y_- as

\[ Y_+ = A U \Sigma V^{\mathrm{T}} \quad \rightarrow \quad U^{\mathrm{T}} A U = U^{\mathrm{T}} Y_+ V \Sigma^{-1}. \]

Define Ã = U^T A U = U^T Y_+ V Σ^-1. This is a rank-K approximation to A. Using the approximate operator Ã, we can now find out information about A. The eigenvalues and eigenvectors of Ã,

\[ \tilde{A} w = \lambda w, \]

are used to define the dynamic modes of A:

\[ \varphi = \frac{1}{\lambda} Y_+ V \Sigma^{-1} w. \]

The dynamic mode decomposition (DMD) of the data matrix Y_+ is then the decomposition into the vectors φ. The mode with the largest norm of λ is said to be the dominant mode.

3. DMD AND SOURCE ITERATION

The discrete ordinates method for transport is typically solved using source iteration (Richardson iteration) and diffusion-based preconditioning/acceleration. Source iterations converge quickly for problems with a small amount of particle scattering. For strongly scattering media, the transport operator has a near nullspace that can be handled using a diffusion preconditioner. However, the question of efficiently preconditioning/accelerating transport calculations on high-order meshes with discontinuous finite elements is an open area of research. The approximate operator found from DMD can be used to remove this same near nullspace and improve iterative convergence without the need for a separate preconditioner or diffusion discretization/solve.

The steady, single-group transport equation with isotropic scattering can be written as

\[ L \psi = \frac{c}{4\pi} \phi + \frac{Q}{4\pi}, \]  (1)

where c is the scattering ratio, Q is a prescribed, isotropic source, and the streaming and removal operator is L = (Ω·∇ + 1). In this equation the angular flux is ψ(x, Ω), with the direction-of-flight variable written Ω ∈ S^2 (i.e., Ω is a point on the unit sphere). The scalar flux is the integral of the angular flux over the unit sphere:

\[ \phi(x) = \int_{4\pi} \psi \, d\Omega = \langle \psi \rangle. \]

Source iteration solves the problem in Eq. (1) using the iteration strategy

\[ \phi^{\ell} = \left\langle L^{-1} \left( \frac{c}{4\pi} \phi^{\ell-1} + \frac{Q}{4\pi} \right) \right\rangle, \]  (2)

where ℓ is an iteration index.
One iteration is often called a "transport sweep". A benefit of source iteration is that the angular flux ψ does not have to be stored. As c → 1, the convergence of source iteration can be arbitrarily slow [7]. Rearranging the transport equation, we see that source iteration is an iterative procedure for solving

\[ \phi - \left\langle L^{-1} \frac{c}{4\pi} \phi \right\rangle = \left\langle L^{-1} Q \right\rangle. \]  (3)

We can define an operator A and a vector b to write Eq. (3) as (I - A)φ = b. Therefore, the source iteration vectors satisfy φ^{ℓ+1} = A φ^ℓ + b, or, by subtracting successive iterations,

\[ \phi^{\ell+1} - \phi^{\ell} = A (\phi^{\ell} - \phi^{\ell-1}). \]

Therefore, we can cast the difference between iterates in a form that is amenable to the approximation of A using DMD, Y_+ = A Y_-, with

\[ Y_+ = \left[ \phi^2 - \phi^1, \; \phi^3 - \phi^2, \; \dots, \; \phi^K - \phi^{K-1} \right], \qquad Y_- = \left[ \phi^1 - \phi^0, \; \phi^2 - \phi^1, \; \dots, \; \phi^{K-1} - \phi^{K-2} \right]. \]

As before, we define an approximate A as the K x K matrix

\[ \tilde{A} = U^{\mathrm{T}} A U = U^{\mathrm{T}} Y_+ V \Sigma^{-1}. \]

We can use Ã to construct the operator (I - Ã)^{-1} and use this to approximate the solution:

\[ (I - A)(\phi - \phi^{K-1}) = b - (I - A)\phi^{K-1} = b - \phi^{K-1} + (\phi^{K} - b) = \phi^{K} - \phi^{K-1}. \]

The difference φ - φ^{K-1} is the difference between step K-1 and the converged answer. We define Δy as the length-K vector that satisfies

\[ \phi - \phi^{K-1} = U \Delta y. \]  (4)

We then substitute and multiply by U^T to get

\[ (I - \tilde{A}) \Delta y = U^{\mathrm{T}} (\phi^{K} - \phi^{K-1}). \]  (5)

This is a linear system of size K that we can solve to get Δy, and we then compute the update to φ^{K-1} as

\[ \phi \approx \phi^{K-1} + U \Delta y. \]  (6)

The algorithm is as follows:
1. Perform R source iterations: φ^ℓ = A φ^{ℓ-1} + b.
2. Compute K source iterations to form Y_+ and Y_-. The last iterate entering Y_- we call φ^{K-1}.
3. Compute φ = φ^{K-1} + U Δy as above.

Each pass of the algorithm requires R + K source iterations. The R source iterations are used to correct any errors caused by the approximation of A using the SVD. It is easiest to assess convergence between the source iterations. This works regardless of the spatial discretization used; a minimal driver sketch of this restarted procedure is given after this paragraph.
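A minimal Python driver for the restarted R + K procedure above is sketched below. The diagonal surrogate operator stands in for a real transport sweep, and the parameter defaults and tolerances are illustrative choices rather than recommendations.

```python
import numpy as np

def dmd_accelerated_solve(apply_sweep, y0, R=4, K=20, tol=1e-10, max_passes=50):
    """Restarted DMD-accelerated source iteration (sketch of steps 1-3 above).

    apply_sweep performs one transport sweep, y <- A y + b."""
    y = y0
    for _ in range(max_passes):
        for _ in range(R):                       # step 1: R plain source iterations
            y_new = apply_sweep(y)
            if np.linalg.norm(y_new - y) < tol:
                return y_new
            y = y_new
        ys = [y]
        for _ in range(K):                       # step 2: K iterations to form Y_-, Y_+
            ys.append(apply_sweep(ys[-1]))
        D = np.diff(np.asarray(ys).T, axis=1)    # columns are iterate differences
        Ym, Yp = D[:, :-1], D[:, 1:]
        U, s, Vt = np.linalg.svd(Ym, full_matrices=False)
        keep = s > 1e-12 * s[0]
        U, s, V = U[:, keep], s[keep], Vt[keep, :].T
        Atilde = U.T @ Yp @ V / s                # the compressed operator
        dz = np.linalg.solve(np.eye(s.size) - Atilde, U.T @ D[:, -1])
        y = ys[-2] + U @ dz                      # step 3: extrapolate via Eq. (6)
    return y

# Surrogate sweep for a strongly scattering problem (spectral radius up to 0.999).
rng = np.random.default_rng(3)
A = 0.999 * np.diag(rng.random(200))
b = rng.random(200)
y = dmd_accelerated_solve(lambda v: A @ v + b, np.zeros(200))
print(np.linalg.norm(y - np.linalg.solve(np.eye(200) - A, b)))   # residual of the solve
```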
In Figure 1, solid lines are $c = 0.99$ results and dashed lines are $c = 1.0$ results. From the figure we see that the DMD results converge in about one order of magnitude fewer transport sweeps than source iteration. We can also see that between DMD updates the convergence follows source iteration's trend while the solutions used to estimate $\tilde{A}$ are computed. To further explore the behavior of the DMD acceleration, we solve a homogeneous slab problem with 1000 cells and 50 mean-free paths in the slab for various scattering ratios. Table 1 shows the number of transport sweeps required to solve this problem as a function of the scattering ratio and the number of sweeps used in the DMD update. It does appear that there is an optimal value for $K$, though we have observed this to be somewhat problem dependent. Regardless of the value of $K$ chosen, the number of iterations required appears to be independent of the scattering ratio for DMD.

Table 1: Number of iterations (transport sweeps) for the homogeneous slab geometry problem. The DMD results used $R = 4$.

K \ c   0.1   0.5   0.9   0.99   0.999   0.9999   0.99999   0.999999
3       8     15    39    70     70      70       70        70
5       10    11    28    90     90      90       90        90
10      15    15    29    60     140     140      140       140
20      25    25    25    49     74      76       76        76
50      55    55    55    56     57      57       57        57
SI      6     17    89    637    2439    3681     3889      3911

We have observed that performance does degrade on an ad absurdum heterogeneous problem inspired by [1]. To demonstrate this, we consider a problem with vacuum boundaries, 1000 cells, unit domain length, $c = 0.9999$, and
$$\sigma_t = \begin{cases} 2^{p} & \text{cell number odd} \\ 2^{-p} & \text{cell number even.} \end{cases}$$

Figure 1: Residual ($L_2$) versus transport sweeps for the homogeneous slab geometry problem, comparing source iteration with DMD for $K = 20$ and $K = 30$. The residual does not change during the steps computed to estimate the DMD update.

In Figure 2 we see convergence for $p = 5$ (dashed) and $p = 8$ (solid), corresponding to factors of about 1000 and $6.5 \times 10^4$ between thick and thin cells, respectively. In this problem more iterations are necessary; however, there are still about an order of magnitude fewer transport sweeps for the DMD-accelerated calculations. The results in Figure 2 demonstrate the need for source iteration calculations between DMD updates. The DMD update does introduce some high-frequency errors that are quickly removed from the solution; these are apparent in the jumps in the residual after a DMD update.

Figure 2: Residual for the heterogeneous slab geometry problem with $c = 0.9999$. Two cases are shown: $p = 5$ (dashed) and $p = 8$ (solid).

4.2. Multi-Dimensional Examples

A version of the crooked pipe problem [8] is a more realistic test. We solve a steady, linear, xy-geometry version of the crooked pipe problem where all materials have a scattering ratio of 0.988 (to simulate the time-absorption of a realistically sized time step). The density ratio between the thick and thin material is 1000. We solve the problem using fully lumped, bilinear discontinuous Galerkin in space and S$_8$ product quadrature. The solution using a $200 \times 120$ grid of cells for the domain of size $10 \times 6$ mean-free paths is shown in Figure 3. For this problem with different mesh resolutions, we observe slow growth in the number of transport sweeps needed for the solution; this growth is not present in the source iteration calculation.
The number of transport sweeps to complete the solve with $K = 10$ and $R = 3$ is shown in Table 2. The increase appears to scale as the resolution to the 1/2 power (the square root of the number of cells per dimension)." + }, + { + "url": "http://arxiv.org/abs/1810.10678v2", "title": "Calculating Time Eigenvalues of the Neutron Transport Equation with Dynamic Mode Decomposition", "abstract": "A novel method to compute time eigenvalues of neutron transport problems is\npresented based on solutions to the time dependent transport equation. Using\nthese solutions we use the dynamic mode decomposition (DMD) to form an\napproximate transport operator. This approximate operator has eigenvalues that\ncan be directly related to the time eigenvalues of the neutron transport\nequation. This approach works for systems of any level of criticality and does\nnot require the user to have estimates for the eigenvalues. Numerical results\nare presented for homogeneous and heterogeneous media. The numerical results\nindicate that the method finds the eigenvalues that are most important to the\nsolution evolution over a given time range, and the eigenvalue with the largest\nreal part is not necessarily important to the system evolution.", "authors": "Ryan G. McClarren", "published": "2018-10-25", "updated": "2019-01-14", "primary_cat": "physics.comp-ph", "cats": [ "physics.comp-ph" ], "main_content": "Introduction

In scientific computing we are used to taking a known operator and making approximations to it. Usually these approximations arise from taking the continuous operator and restricting it to some discrete representation. This is what is done in common methods for particle transport, such as discrete ordinates, where the continuous transport equation is replaced with equations for particular directions that are coupled through scattering via a quadrature rule. Alternatively, it is possible to use the action of the operator to generate approximations rather than using the operator itself. This is what is done in, for example, Krylov subspace methods for solving linear systems, where the action of a matrix is used to create subspaces of increasing size that are used to find approximations to the solution. The use of the known action of an operator, even if the operator is not known, is the basis for the dynamic mode decomposition (DMD) [1, 2]. The main idea behind DMD is that if we have a sequence of vectors generated by successively applying an operator, we can estimate properties of that operator. In fluid dynamics, DMD is used to find important modes in the evolution of a system, even when the system does not have an interesting steady state [2, 3]. Additionally, because one does not need the operator, DMD can be applied to experimental measurements and quantitatively compared to the DMD modes of a simulation [4]. In this paper we use DMD to find time eigenvalues, also known as $\alpha$ eigenvalues, of the neutron transport equation using only the time-dependent solution for the angular flux. The calculation of $\alpha$ eigenvalues has traditionally been accomplished using iterative search procedures where an eigenvalue is determined by finding the value of $\alpha$ that makes the equivalent $k$-eigenvalue problem exactly critical [5]. This is accomplished by subtracting $\alpha$ divided by the neutron speed from the total interaction term.
Unfortunately, if the $\alpha$ eigenvalue is negative (that is, the system is subcritical), a negative total interaction term can result, leading to instabilities in most solution algorithms. Recently, there have been improvements to deterministic $\alpha$-eigenvalue computation techniques that use specialized solvers to find positive and negative eigenvalues [6, 7, 8] or form the full discretization matrices to find eigenvalues [9]. Most of these methods either find only the eigenvalue with the largest real part (the rightmost eigenvalue in the complex plane), or require an accurate estimate to find other eigenvalues. Additionally, Monte Carlo can be used to find these eigenvalues with the transition rate matrix method [10, 11]. The benefit of using the DMD method is that one can use standard transport solvers [12] to find any eigenvalues that are excited in a given calculation. The cost of the calculation, beyond the transport simulation, is the formation of a singular value decomposition (SVD) of the solution at several time steps. No development of transport solvers is required, and off-the-shelf linear algebra routines can be used to find the SVD. DMD will find the eigenvalues/eigenvectors that are the largest contributors to the dynamics of the system in a given time-dependent problem: this is a feature and not a bug. In many subcritical systems the rightmost eigenvalue will be unimportant to the system behavior in a given experiment. For instance, if we consider a subcritical system struck by a pulse of neutrons, such as those in [13], there will be eigenmodes corresponding to the slowest neutrons traveling across the system [14, 15] that will not impact the experiment. We will see an example of this later. This paper is organized as follows. We begin with the presentation of the dynamic mode decomposition in Section II and apply the method to time-eigenvalue problems in Section III. Numerical results are presented for a bare sphere in Section IV and for heterogeneous systems in Section V, before presenting conclusions and future work in Section VI.

II Dynamic Mode Decomposition

Consider an evolution equation over time that can be written in the generic form
$$\frac{\partial y}{\partial t} = A(r)\, y(r, t), \qquad (1)$$
where $y(r, t)$ is a function of a set of variables denoted by $r$, which could be space, angle, energy, etc., and time $t$. Consider the solution to the equation at a sequence of equally spaced times, $y(r, t_0), y(r, t_1), \ldots, y(r, t_{N-1}), y(r, t_N)$, separated by a time $\Delta t$. These solutions are formally determined using the exponential of the operator $A(r)$ via the relationship
$$y(r, t_n) = e^{A\Delta t}\, y(r, t_{n-1}), \qquad n = 1, \ldots, N.$$
We can write a single equation relating the solutions at each time level as
$$[y(r, t_N), y(r, t_{N-1}), \ldots, y(r, t_1)] = e^{A\Delta t} [y(r, t_{N-1}), y(r, t_{N-2}), \ldots, y(r, t_0)]. \qquad (2)$$
If we constrain ourselves to finite-dimensional problems, the solution is now a vector and the operator is a matrix. In this case the original equation has the form
$$\frac{\partial y}{\partial t} = A y(t). \qquad (3)$$
We will say that $y_n$ is of length $M > N$ and $A$ is an $M \times M$ matrix. In this case, the solutions are related through the matrix exponential:
$$[y_N, y_{N-1}, \ldots, y_1] = e^{A\Delta t} [y_{N-1}, y_{N-2}, \ldots, y_0]. \qquad (4)$$
In shorthand we can define the $M \times N$ matrices
$$Y_+ = [y_N, y_{N-1}, \ldots, y_1], \qquad Y_- = [y_{N-1}, y_{N-2}, \ldots, y_0],$$
as the matrices formed by appending the column vectors $y_n$.
This leads to the relation
$$Y_+ = e^{A\Delta t} Y_-. \qquad (5)$$
Equation (5) is exact; however, the matrix $A$ may be too large to compute the exponential $e^{A\Delta t}$. Therefore, we desire to use just the solution to estimate the eigenvalues of $e^{A\Delta t}$. To this end we will use the solution vectors collected in $Y_+$ and $Y_-$ to produce an approximation to $A$. We compute the thin singular value decomposition (SVD) of the matrix $Y_-$:
$$Y_- = U \Sigma V^*, \qquad (6)$$
where $U$ is an $M \times N$ unitary matrix, $V$ is an $N \times N$ unitary matrix, and $\Sigma$ is an $N \times N$ diagonal matrix with non-negative elements. The asterisk denotes the conjugate transpose of a matrix. Typically, some of the diagonal elements of $\Sigma$ are effectively zero. Therefore, we make $\Sigma$ the $r \times r$ matrix that contains all $r$ values greater than some small, positive $\epsilon$. Substituting Eq. (6) into Eq. (5) we get
$$Y_+ = e^{A\Delta t} U \Sigma V^*.$$
Rearranging this equation gives
$$U^* Y_+ V \Sigma^{-1} = U^* e^{A\Delta t} U \equiv \tilde{S}. \qquad (7)$$
It has been shown [16] that an eigenvalue of $\tilde{S}$ is also an eigenvalue of $e^{A\Delta t}$. To see this, we consider an eigenvalue $\lambda$ and eigenvector $v$ of $\tilde{S}$. By definition we have $\tilde{S}v = \lambda v$, which is equivalent to $U^* e^{A\Delta t} U v = \lambda v$. Left-multiplying this equation by $U$ we get
$$e^{A\Delta t} U v = \lambda U v,$$
which shows that $\lambda$ is an eigenvalue of $e^{A\Delta t}$. Additionally, $\hat{v} = Uv$ is the eigenvector of $e^{A\Delta t}$ associated with the eigenvalue $\lambda$. The matrix $\tilde{S}$ is much smaller than that for $e^{A\Delta t}$, and we can form $\tilde{S}$ without forming $A$. To create $\tilde{S}$ we need to know the result of $e^{A\Delta t}$ applied to an initial condition several times in succession. Then we need to compute the SVD of the data matrix $Y_-$. A direct computation requires $O(M^2 N)$ operations, though iterative methods for computing the SVD exist [17]. As a comparison, the QR factorization of $e^{A\Delta t}$ requires $O(M^3)$ operations. Our formulation here requires a constant time step size, though this can be relaxed as shown by Tu, et al. [16].

III Alpha Eigenvalues of the transport operator

We will now demonstrate that we can estimate the alpha eigenvalues of a nuclear system by computing several time steps of a time-dependent transport equation and using the DMD theory presented above to form and compute the eigenvalues of $\tilde{S}$. We begin by defining the alpha-eigenvalue transport problem without delayed neutrons. Consider the time-dependent transport equation [18]
$$\frac{\partial \psi}{\partial t} = A\psi, \qquad (8)$$
where $\psi(x, \Omega, E, t)$ is the angular flux at position $x \in \mathbb{R}^3$, in direction $\Omega \in \mathbb{S}^2$, at energy $E$ and time $t$. The transport operator $A$ is given by
$$A = v(E)\left( -\Omega \cdot \nabla - \sigma_t + S + F \right),$$
with $S$ and $F$ the scattering and fission operators:
$$S\psi = \int_{4\pi} d\Omega' \int_0^{\infty} dE' \, \sigma_s(\Omega' \to \Omega, E' \to E)\, \psi(x, \Omega', E', t), \qquad (9)$$
$$F\psi = \frac{\chi(E)}{4\pi} \int_0^{\infty} dE' \, \nu\sigma_f(E')\, \phi(x, E', t), \qquad (10)$$
where $\sigma_s(\Omega' \to \Omega, E' \to E)$ is the double-differential scattering cross-section from direction $\Omega'$ and energy $E'$ to direction $\Omega$ and energy $E$, $\nu\sigma_f(E')$ is the fission cross-section times the expected number of fission neutrons at energy $E'$, and $\chi(E)$ is the probability of a fission neutron being emitted with energy $E$.
The scalar flux $\phi(x, E, t)$ is defined as the integral of the angular flux over the unit sphere,
$$\phi(x, E, t) = \int_{4\pi} d\Omega \, \psi(x, \Omega, E, t). \qquad (11)$$
Above, we used a continuous formulation of the transport problem. For our calculations later, we will use a discretized transport equation using the multigroup method [18] in energy, discrete ordinates in angle, and a spatial discretization. In this case the time-dependent transport equation can be written as a system of differential equations,
$$\frac{\partial \Psi}{\partial t} = A\Psi, \qquad (12)$$
where $\Psi$ is a vector and $A$ is a matrix that represents the discrete transport operator. To define alpha eigenvalues and eigenfunctions, consider a solution of the form $\hat{\psi}(x, \Omega, E)e^{\alpha t}$, which, using Eq. (8), leads to the relation
$$\alpha \hat{\psi} = A \hat{\psi}.$$
The values of $\alpha$ where this relation holds are called $\alpha$ eigenvalues, and the $\hat{\psi}$ are the alpha eigenfunctions. In discrete form the alpha-eigenvalue problem is $A\hat{\psi} = \alpha\hat{\psi}$, where $\Psi$ has the form $\hat{\psi}e^{\alpha t}$. In general the eigenvalues of the discrete problem are not the same as those for the continuous problem, due to discretization. From here on, we consider the discrete problem. In the alpha-eigenvalue problem, we are interested in the eigenvalues of $A$. We can use the DMD decomposition to form the operator $\tilde{S}$ and compute its eigenvalues, and as a result, the eigenvalues of $e^{A\Delta t}$. To do this we begin with an initial condition and compute the solution at $N$ time steps. Then we can form $Y_+$ and $Y_-$, compute the SVD, and get the eigenvalues of $e^{A\Delta t}$. We need a way to relate the eigenvalues of $e^{A\Delta t}$ to the $\alpha$ eigenvalues. The relationship is: if $(\alpha, v)$ is an eigenvalue/eigenvector pair of $A$, then $e^{\alpha\Delta t}$ is an eigenvalue of $e^{A\Delta t}$ with eigenvector $v$. These facts can be seen through the definition of the matrix exponential. Consider an eigenvalue $\alpha$ with eigenvector $v$ for the matrix $A$. Using the definition of an eigenvector, we can show that
$$A^{\ell} v = A^{\ell-1}(\alpha v) = A^{\ell-2}(\alpha^2 v) = \cdots = \alpha^{\ell} v.$$
The definition of the matrix exponential gives
$$e^{A\Delta t} v = \left( \sum_{\ell=0}^{\infty} \frac{\Delta t^{\ell}}{\ell!} A^{\ell} \right) v = \left( \sum_{\ell=0}^{\infty} \frac{\Delta t^{\ell}}{\ell!} \alpha^{\ell} \right) v = e^{\alpha\Delta t} v, \qquad (13)$$
where the last equality uses the Taylor series of the exponential function $\exp(\alpha\Delta t)$ around 0. Therefore, if $\lambda$ is an eigenvalue of $\tilde{S}$, and, by construction, an eigenvalue of $e^{A\Delta t}$, then
$$\alpha = \frac{\log \lambda}{\Delta t} \qquad (14)$$
is an alpha eigenvalue of the discrete transport operator. The discussion above suggests the following algorithm for estimating alpha eigenvalues of the discrete transport equation (a minimal code sketch follows this list):
1. Compute $N$ time-dependent steps starting from $\psi^0$ using a numerical method of choice and fixed $\Delta t$.
2. Compute the SVD of the resulting data matrix $Y_-$, and form $\tilde{S}$.
3. Compute the eigenvalues/eigenvectors of $\tilde{S}$, and calculate the $\alpha$ eigenvalues from Eq. (14).
This is an approximate method because the time steps typically will not be computed using the matrix exponential; rather, a time integration technique such as the backward Euler method will be used. The backward Euler algorithm estimates the matrix exponential as
$$e^{A\Delta t} \approx (I - \Delta t A)^{-1}.$$
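The three-step procedure above can be condensed into a short routine; the following is a minimal NumPy sketch, assuming the transport code has already produced the equally spaced solution snapshots as columns of an array. The `time_integrator` option applies the backward-Euler correction derived in the next paragraph, and all names are illustrative rather than taken from the paper.

```python
import numpy as np

def alpha_eigenvalues(snapshots, dt, tol=1e-10, time_integrator="exact"):
    """Estimate alpha (time) eigenvalues from equally spaced transport
    solutions, following steps 1-3 above.

    `snapshots` is an M x (N+1) array whose columns are the solution
    vectors psi^0 ... psi^N.  A sketch under those assumptions.
    """
    Y_minus, Y_plus = snapshots[:, :-1], snapshots[:, 1:]
    # Thin SVD of Y_-, Eq. (6); discard (near-)zero singular values.
    U, s, Vh = np.linalg.svd(Y_minus, full_matrices=False)
    keep = s > tol * s[0]
    U, s, Vh = U[:, keep], s[keep], Vh[keep, :]
    # S_tilde = U* Y_+ V Sigma^{-1}, Eq. (7).
    S_tilde = U.conj().T @ Y_plus @ Vh.conj().T @ np.diag(1.0 / s)
    lam, V = np.linalg.eig(S_tilde)
    if time_integrator == "exact":
        # lambda = exp(alpha dt), so Eq. (14) gives alpha directly.
        alpha = np.log(lam.astype(complex)) / dt
    else:
        # Backward Euler: lambda = 1/(1 - alpha dt)  (next paragraph).
        alpha = (1.0 - 1.0 / lam) / dt
    # Eigenvectors of exp(A dt) are U v, as shown in Section II.
    return alpha, U @ V
```

Given a stored solution history, a call such as `alpha, vecs = alpha_eigenvalues(history, dt, time_integrator="backward_euler")` would return the eigenvalue estimates corresponding to that run.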
When we use the DMD method on a data matrix generated by the backward Euler method, we are computing eigenvalues of $(I - \Delta t A)^{-1}$. To relate these eigenvalues to the $\alpha$ eigenvalues, note that each eigenvalue $\alpha$ of $A$ corresponds to an eigenvalue $\lambda = (1 - \alpha\Delta t)^{-1}$ of $(I - \Delta t A)^{-1}$, so we use the relation
$$\alpha \approx \frac{1}{\Delta t}\left(1 - \frac{1}{\lambda}\right).$$
This approximation will improve at first order as $\Delta t \to 0$.

III.A Comparison with existing methods

Standard techniques for computing alpha eigenvalues require solving a series of $k$-eigenvalue problems [5]. The basis for these methods is that the $\alpha$ eigenvalues make the equivalent $k$-eigenvalue problem exactly critical when the total cross-section is replaced with $\sigma_t(E) + \alpha v(E)^{-1}$. This approach will have problems when $\alpha$ is negative, as it can cause negative absorption to arise in lower energy groups. To address this problem other methods have been developed, such as Rayleigh quotient methods [6], the Arnoldi method [7, 19], and Newton-Krylov methods [20]. In these approaches the equations that need to be solved are typically different than those required to solve time-dependent transport problems. The DMD method allows one to get both the time-dependent solution and eigenvalues as part of one calculation. Moreover, DMD provides an estimate for multiple eigenvalues based on the number of modes excited in the system and the number of steps used.

Table I: The group edges and centers for the 12-group calculations in this study.

g     E_g (MeV)    Ē_g (MeV)
0     17
1     13.5         15.25
2     10           11.75
3     6.07         8.035
4     2.865        4.4675
5     1.353        2.109
6     0.5          0.9265
7     0.184        0.342
8     0.0676       0.1258
9     0.0248       0.0462
10    0.00912      0.01696
11    0.00335      0.006235
12    0.000454     0.001902

IV Results for Plutonium Sphere

Here we present results for the prompt-neutron solution for a sphere of 99 atom-% $^{239}$Pu and 1 atom-% natural carbon using 12-group cross-sections and a simple buckling model for leakage, so that we can solve an infinite-medium problem. The group structure is detailed in Table I. We will consider sub- and supercritical systems by adjusting the radius of the sphere. Because we use a simple buckling model for this problem, we can directly form the matrix for the transport operator and compute "exact" eigenvalues for this model. For DMD, the time steps are computed using the backward Euler discretization for time integration. In this and subsequent sections we consider only prompt neutrons.

IV.A Subcritical Case

We consider a sphere of radius 4.77178 cm with an associated $k_{\mathrm{eff}}$ in our model of 0.95000. The fundamental mode for this reactor is shown in Figure 1a along with several $\alpha$ eigenmodes. The $\alpha$ eigenvalues for this system have a fast-decaying mode with a large number of neutrons in the fastest energy group, and the slowest-decaying mode closely follows the fundamental mode.

Figure 1: Fundamental $k$-eigenmode and several $\alpha$ eigenmodes ($\phi_\alpha$ versus $E$) for the bare plutonium sphere problem with 12 groups in (a) a subcritical configuration (modes with $\alpha$ = -1957.424, -34.4201, -28.8533 and -17.7439 µs⁻¹) and (b) a supercritical configuration (modes with $\alpha$ = -1872.9879, -33.1095, -28.2933 and 0.3544 µs⁻¹). The $\alpha$ eigenvalues have units of µs⁻¹.
To test the DMD estimation of $\alpha$ eigenvalues, we run a time-dependent problem where at time zero the system has 1000 neutrons in the energy group corresponding to 14.1 MeV. This is a crude approximation to an experiment where a pulse of DT fusion neutrons irradiates the sphere. The problem is run in time-dependent mode out to various final times with uniform time steps, and the time steps are used in the DMD procedure to estimate $\alpha$ eigenvalues. The $\alpha$ eigenvalues computed by DMD are shown in Table II and compared to the exact eigenvalues computed from the matrices generated by the buckling approximation. The number of neutrons in the system as a function of time is shown in Figure 2, where one can see that subcritical multiplication is happening in the first 0.002 µs of the problem. As we argue next, DMD finds the eigenvalues that are important in the time-dependent solution over the time scales considered and that are resolved by the time step size. From Table II we can see that during the phase where subcritical multiplication is occurring (before t = 0.002 µs) DMD accurately computes, to six digits, the $\alpha$ eigenmode that corresponds to a large population of 14.1 MeV neutrons. This is the mode most excited by the initial condition. It also accurately computes the eigenvalues with magnitudes larger than 200 to several digits. However, we note that the "dominant" or slowest-decaying eigenmode is not detected by the DMD algorithm, indicating that its contribution at this early time is insignificant or cannot be distinguished from other slowly decaying modes. This indicates an important phenomenon in time-dependent transport: the slowest-decaying eigenvalue may not be important in a given problem. As we look at simulations run to later times, more eigenvalues are identified using DMD. Running the simulation to intermediate times, 0.02 and 0.2 µs, we see that DMD finds all of the eigenvalues in the problem to several digits of accuracy. In both of these solutions DMD does not find the eigenvalue near -28.85 µs⁻¹. This eigenmode has more neutrons in the thermal and epithermal energy ranges relative to the other modes. Given that this problem has very little thermalization, due to the small amount of carbon, this mode is not important at these intermediate times relative to other modes. At a much later time, 2 µs, DMD identifies all of the slowly decaying modes but cannot find the rapidly decaying modes. This is because the larger time steps used mean that the solution cannot resolve the time scale where these modes are important. As a result, DMD estimates a pair of complex eigenvalues with a real part that does not correspond to an actual eigenvalue. There are versions of DMD that allow variable time steps to be used [16], and the use of adaptive time stepping should be investigated in future work in order to estimate both the fast and slowly decaying modes.

Figure 2: The number of neutrons in the plutonium sphere in sub- and supercritical configurations as a function of time. Due to subcritical multiplication, the peak number occurs about 0.002 µs into the simulation of the subcritical configuration.
Table II: Alpha eigenvalues (µs⁻¹) for the subcritical sphere computed using DMD from solutions obtained with different final times, t_final = 0.002, 0.02, 0.2 and 2 µs (with Δt = 0.0002, 0.0002, 0.001 and 0.01 µs, respectively). Each row gives the exact eigenvalue followed by the DMD estimates in order of increasing t_final; a mode that was not detected in a given run has no entry.

Exact        DMD estimates
-17.7439     -17.5504    -17.7588    -17.7437
-28.8533     -24.5669    -28.8628
-34.4201     -35.7281    -34.1948    -34.3999
-48.4269     -46.6817    -48.0231    -48.4613
-75.0701     -75.7798    -75.2787    -74.9998
-132.352     -132.183    -132.197    -132.587
-261.942     -262.78     -261.974    -262.127    -260.218
-547.732     -531.575    -547.719    -547.11     -585.536
-893.385     -893.314    -893.399    -895.262    -763.974
-1368.92     -1335.16    -1368.90    -1362.45
-1732.99     -1721.75    -1733.01    -1725.84    -1708 ± 381i
-1957.42     -1957.42    -1957.41    -1957.42

IV.B Supercritical Case

We consider a sphere of radius 5.029636 cm with an associated prompt $k_{\mathrm{eff}}$ in our model of 1.000998; the eigenvectors for this problem are shown in Figure 1b. We perform the same calculations as performed before on the subcritical sphere. Table III compares the eigenvalues computed with DMD with the eigenvalues computed by solving the equivalent infinite-medium problem. At an early time (0.002 µs), the DMD computation does not identify the exponentially increasing mode. Upon inspection of Figure 2, we see that at this time the supercritical and subcritical systems have neutron populations that are very similar. The subcritical multiplication observed in the smaller sphere, where modes associated with the fusion neutrons contributed to the growth of the neutron population, is also present in this supercritical system. However, there are very few neutrons emitted in the fusion energy range from fission ($\chi_1 \approx 1.37 \times 10^{-4}$), so these modes decay away. As the solution time increases, the DMD-estimated eigenvalues agree well with the true values. This is most evident in the solution computed up to 0.2 µs, where 11 of 12 eigenvalues are computed accurately to 2 digits. The exponentially growing mode is correctly estimated at later times; for the simulation run to the latest time, this eigenvalue is estimated accurately to 6 digits by DMD. At very late times the rapidly decaying modes are not correctly estimated and a complex eigenvalue is estimated, as we saw before in the subcritical case, but this is likely due to the large time step used.

Table III: Alpha eigenvalues (µs⁻¹) for the supercritical sphere computed using DMD from solutions obtained with different values of Δt and final times (layout as in Table II).

Exact        DMD estimates
0.354439     -4.02079    0.332366    0.354291    0.354439
-28.2933     -28.2932
-33.1095     -32.8048    -33.1151
-46.0832     -45.3512    -45.817     -46.0703
-70.7945     -70.4805    -70.9448    -70.8261
-124.497     -124.568    -124.38     -124.381
-247.14      -247.914    -247.127    -247.281    -248.057
-521.689     -506.467    -521.693    -521.216    -507.684
-853.58      -853.733    -853.577    -855.008
-1309.4      -1279.91    -1309.4     -1305.12    -1050 + 23i
-1659.02     -1649.68    -1659.02    -1655.16
-1872.99     -1872.99    -1872.99    -1872.98
-2059.43

V Heterogeneous Media

The plutonium sphere example required only computing the solution to infinite-medium problems. We will now investigate how the DMD approach to estimating eigenvalues performs on heterogeneous problems in slab geometry. Our numerical solutions are computed using the discrete ordinates (S$_N$) method with diamond difference for the spatial discretization and backward Euler for time integration [12].
V.A Heterogeneous, One-speed Slab Problem

The first heterogeneous problems we solve are based on benchmark problems published by Kornreich and Parsons [21], as solved by the Green's function method (GFM). Their work defines a slab problem for single-speed neutrons (i.e., one group) consisting of an absorber surrounded by a moderator and fuel; see Figure 3.

Figure 3: Layout for the multiregion slab problem from Kornreich and Parsons [21], showing fuel (0 to 1), moderator (1 to 2), absorber (2 to 7), moderator (7 to 8) and fuel (8 to X). The total width of the problem, X, can be either 9 or 9.1.

They define configurations of this problem that are symmetric and asymmetric, as well as subcritical and supercritical versions. In the symmetric version of the problem the total width of the slab is 9, whereas in the asymmetric version the width is 9.1. The total cross-section is one throughout the problem, and the scattering cross-sections are
$$\sigma_s(x) = \begin{cases} 0.8 & x \in \text{fuel or moderator} \\ 0.1 & x \in \text{absorber.} \end{cases}$$
The value of $\nu\sigma_f$ in the fuel is either 0.3 or 0.7 for the subcritical and supercritical cases, respectively. We solve this problem using DMD with 200 cells per mean free path and a 196-angle Gauss-Legendre quadrature set. We use a time step size of $\Delta t = 0.1$ and run the problem for 500 time steps to a time of $t = 50$ (a setup sketch is given at the end of this subsection). For initial conditions we used two approaches: a symmetric initial condition, where the solution is non-zero and inwardly directed in the outermost cells of the problem, and a random initial condition. In Table IV, results from the DMD calculations are compared with the GFM results. We use the nomenclature of "fundamental" for the alpha eigenvalue that is rightmost in the complex plane, to coincide with the published results; the "second" eigenvalue in the table is the eigenvalue that is just left of the fundamental eigenvalue in the complex plane. The results in the table show that the DMD results were able to reproduce the GFM eigenvalues within $10^{-5}$ (1 pcm). Except for the second eigenvalue in the symmetric case, all the DMD eigenvalues agreed to better than 1 pcm precision using both initial conditions. The DMD results in Table IV for the fundamental eigenvalue were the same for both initial conditions to six significant digits.

Table IV: Eigenvalues for the benchmark as computed via the GFM and the difference between the GFM and DMD estimates in pcm ($10^{-5}$).

Geometry     νσ_f (fuel)   Fundamental α (GFM)   α_GFM − α_DMD (pcm)   Second α (GFM)    α_GFM − α_DMD (pcm)
Symmetric    0.3           -0.3196537            0.639                 -0.3229855        0.694
             0.7           -0.006156369          0.7711                -0.006440766      0.7724
Asymmetric   0.3           -0.2932468            0.535                 -0.3213939        0.666
             0.7           0.03759991            0.64                  -0.006298843      0.7717

We have also found that the eigenvalues found in the solution are insensitive to the number of time steps used in the DMD procedure, as long as any initial transients have died out (about 5 mean-free times in this problem). Using 400 or 100 time steps in the eigenvalue estimate gave the same eigenvalue estimates to 6 significant digits. However, the second eigenvalue was not present in the solution for the symmetric initial condition on the symmetric problems. This is because the second eigenmode is asymmetric in space, and, therefore, this mode is not excited by the symmetric initial condition. The DMD eigenvectors for the four configurations of this problem are shown in Figures 4 and 5.
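As a usage illustration, the slab specification above might be encoded as follows, assuming the fuel/moderator/absorber layout of Figure 3; the array names and region bookkeeping are hypothetical, and a transport stepper built from these arrays could then feed the `alpha_eigenvalues` sketch shown earlier.

```python
import numpy as np

# Hypothetical setup of the symmetric Kornreich-Parsons slab (X = 9),
# with 200 cells per mean free path, as described in the text.
X, cells_per_mfp = 9.0, 200
nx = int(X * cells_per_mfp)
x = (np.arange(nx) + 0.5) * (X / nx)          # cell-centre coordinates

fuel = (x < 1.0) | (x > X - 1.0)
absorber = (x >= 2.0) & (x <= X - 2.0)

sigma_t = np.ones(nx)                          # total cross-section is 1
sigma_s = np.where(absorber, 0.1, 0.8)         # scattering, per the cases above
nu_sigma_f = np.where(fuel, 0.3, 0.0)          # 0.3 subcritical, 0.7 supercritical
```

With these arrays defining a one-group S$_N$ stepper, 500 backward-Euler steps of size $\Delta t = 0.1$ would supply the snapshot matrix for the DMD eigenvalue estimate.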
The fundamental and second eigenvectors in Figures 4 and 5 match the published plots for the $\nu\sigma_f = 0.7$ cases within the width of the lines. In the DMD results we found a third, real-valued eigenvalue, $\alpha = -1.02158875$. This eigenvalue is part of the continuum spectrum of the transport operator for this problem. The fact that it is found by DMD is an artifact of the approximations made in the method. We note that in the original paper by Kornreich and Parsons [21] they give results from the discrete ordinates code PARTISN [22] using 96 quadrature points (about half of what we used) and 2000 mesh cells per mean free path (10 times higher resolution than in our case). The PARTISN results agreed with the GFM results to within 0.1 pcm using this much finer spatial grid. Nevertheless, PARTISN was not able to estimate the second eigenvalue in the asymmetric cases, whereas the DMD results are as expected. Furthermore, the Monte Carlo transport code MCNP [23] was not able to estimate eigenvalues for any of the $\nu\sigma_f = 0.3$ cases. Recently, Betzler, et al. [10] published Monte Carlo results for these cases using the Monte Carlo Markov Transition Rate Matrix Method.

Figure 4: Fundamental and second eigenmodes for the one-group slab problem in the symmetric configurations: (a) symmetric slab with $\nu\sigma_f = 0.3$; (b) symmetric slab with $\nu\sigma_f = 0.7$.

Figure 5: Fundamental and second eigenmodes for the one-group slab problem in the asymmetric configurations: (a) asymmetric slab with $\nu\sigma_f = 0.3$; (b) asymmetric slab with $\nu\sigma_f = 0.7$.

V.B Multiregion, 70-group System

As a final demonstration we solve a problem consisting of two slabs of $^{239}$Pu with high-density polyethylene (HDPE) between them and a reflector of HDPE on the outside. The initial condition has a pulse of DT fusion neutrons striking the outer surface of the reflector, implemented by setting, in the initial condition, the angular flux of each inwardly directed angle to 1 in the outermost cell on each side. See Figure 6 for a schematic of the problem. The system is subcritical when the fuel regions are each 1.125 cm thick, with a resulting $k_{\mathrm{eff}} \approx 0.97$, and isotropic scattering is assumed. The fundamental mode has a large number of thermal neutrons in the middle of the problem as well as a fast peak in the fuel region.

Figure 6: Problem layout for the 70-group test problem, showing the HDPE and $^{239}$Pu regions, the incident 14.1 MeV neutron pulse, and the indicated dimensions of 1.125 cm and 25.25 cm.

Running this problem out to a time of 1 µs with a time step size of $10^{-4}$ µs, S$_8$ quadrature, and 400 spatial zones, we use DMD to compute the eigenvalues present in the solution over three different time windows: 0.002 to 0.004 µs, 0.09 to 0.1 µs, and 0.99 to 1 µs. These eigenvalues are shown in Figure 7. The eigenvalues estimated by DMD at early time (0.002 to 0.004 µs) have a large imaginary component, except for the rightmost value. As time progresses, the imaginary part of the eigenvalues decreases and the real part moves rightward.
This demonstrates a feature of the DMD method: early in time there are many modes present in the solution, and the fast-decaying ones govern the solution behavior; as time goes on, only the slowly decaying modes are present, and DMD finds these later in time. The behavior of the neutron population in time, as well as the three time intervals over which the eigenvalues were estimated, is shown in Figure 8a. The time interval from 0.002 to 0.004 µs is during the subcritical multiplication phase of the simulation. It makes sense that during this phase the slowly decaying modes are not important in the solution. Later in time these slowly decaying modes will dominate, because the subcritical multiplication must end at some point, given that the system is subcritical and does not have a fixed source.

Figure 7: $\alpha$ eigenvalues for the 70-group test problem estimated by DMD over three different time intervals (0.002 to 0.004 µs, 0.09 to 0.10 µs, and 0.99 to 1.00 µs), plotted as Im($\alpha$) versus Re($\alpha$) in µs⁻¹.

In Figure 8 we show the neutron spectrum at several points in space. The spectra shown are computed using time steps from the indicated time ranges. From this figure we can see that early in time the solution is dominated by the presence of 14.1 MeV neutrons, though fission neutrons are present in the fuel and outer reflector. At late times, near 1 µs, the spectrum in the fuel and the reflector is close to the fundamental eigenmode of the $k$-eigenvalue problem. Nevertheless, the central moderator in the problem has not reached the fundamental $k$-eigenmode, as there has not been enough time to fully thermalize the neutrons. Additionally, the eigenvalue for the slowest-decaying mode is associated with the travel time of the slowest neutrons crossing the moderator. This suggests that the problem would need to be run longer to relax to this mode. Moreover, it indicates that if this system were involved in an experiment, the neutrons produced in the first microsecond would give little information about the spectrum of the $k$-eigenvalue problem.

Figure 8: Neutron population and spectra in the outer reflector, fuel, and moderator averaged over the three time intervals: (a) neutron population over time, with the time intervals denoted by black lines; (b) midpoint of the outer reflector; (c) midpoint of the fuel; (d) problem midpoint. The fundamental $k$-eigenvalue spectra are shown in (b)-(d).

The spatial distribution of neutrons is shown in Figure 9. From this figure we see that, at different times, the slowest-decaying mode that DMD estimates corresponds to the modes that are important to the dynamics during that time interval.
Early in time, fast neutrons dominate; these fast neutrons then decay as more thermal neutrons are created from scattering. Nevertheless, near 1 µs the neutron density of epithermal neutrons is still larger than the density of thermal neutrons.

Figure 9: Spatial distribution of neutrons for (a) the fundamental mode of the $k$-eigenvalue problem ($k = 0.97004$), and the eigenvector for the rightmost $\alpha$ eigenvalue as estimated by DMD over different time intervals: (b) 0.002 to 0.004 µs, $\alpha = -393.457951$ µs⁻¹; (c) 0.09 to 0.1 µs, $\alpha = -9.165716$ µs⁻¹; (d) 0.99 to 1 µs, $\alpha = -0.415305$ µs⁻¹. Note that the $\alpha$ eigenvectors are not positive, so we plot the absolute value. In this figure thermal neutrons have energy below 5 eV, fast neutrons are above 0.5 MeV, and epithermal neutrons are in between.

VI Discussion

The dynamic mode decomposition allows for the approximation of the eigenvalues present in a time-dependent transport system from the solution at different times, without a separate eigenvalue solve. The decomposition works for subcritical and critical systems and can give highly accurate (sub-pcm) estimates of eigenvalues. Our results from a variety of problem types indicate that the method is useful for general estimation of system eigenvalues, especially if one is interested in the modes driving the dynamics over a particular time interval. The problems we presented did not include delayed neutrons, but adding these to the DMD method is straightforward. Because DMD uses the solution from time-dependent transport to estimate eigenvalues, the time interval considered and the time step size affect the eigenvalues found. For instance, at early times of the simulation there may be different modes present than at later times. DMD will not be able to accurately estimate modes that decay much more quickly than the time step size used to generate the time-dependent solution. We note that DMD can be applied to nonlinear problems in the same fashion as we applied it to the linear problem of neutron transport. This could be useful for situations where the neutron population dynamics are nonlinear. For instance, if we consider a system with negative feedback with respect to temperature, the dynamics of the neutron population would affect the temperature and the cross-sections of the material. One could apply DMD to this problem, though the interpretation of the resulting eigenvalues would necessarily be different. Previous work [1, 24] has shown that the modes computed by DMD will be eigenfunctions of the Koopman operator, and the application of this type of analysis could be fruitful for understanding nuclear systems.

VII Acknowledgements

The author would like to thank B.D. Lansrud-Lopez, C.D. Ahrens, and R.D. Baker for helpful discussions during the development of this work. Also, thanks are in order to S.R. Bolding for sharing some python code for cross-section processing and infinite media solutions. LA-UR-18-30110.
" + } ], + "Stephen Millmore": [ + { + "url": "http://arxiv.org/abs/1906.08521v1", "title": "Multi-physics simulations of lightning strike on elastoplastic substrates", "abstract": "This work is concerned with the numerical simulation of elastoplastic,\nelectromagnetic and thermal response of aerospace materials due to their\ninteraction with a plasma arc under lightning strike conditions. Current\napproaches treat the interaction between these two states of matter either in a\ndecoupled manner or through one-way coupled 'co-simulation'. In this paper a\nmethodology for multiphysics simulations of two-way interaction between\nlightning and elastoplastic materials is presented, which can inherently\ncapture the non-linear feedback between these two states of matter. This is\nachieved by simultaneously solving the magnetohydrodynamic and the\nelastoplastic systems of equations on the same computational mesh, evolving the\nmagnetic and electric fields dynamically. The resulting model allows for the\ntopological evolution and movement of the arc attachment point coupled to the\nstructural response and Joule heating of the substrate. The dynamic\ncommunication between the elastoplastic material and the plasma is facilitated\nby means of Riemann problem-based ghost fluid methods. This two-way coupling,\nto the best of the authors' knowledge, has not been previously demonstrated.\nThe proposed model is first validated against experimental laboratory studies,\ndemonstrating that the growth of the plasma arc can be accurately reproduced,\ndependent on the electrical conductivity of the substrate. It is then\nsubsequently evaluated in a setting where the dynamically-evolved properties\nwithin the substrate feed back into the plasma arc attachment. Results are\npresented for multi-layered substrates of different materials, and for a\nsubstrate with temperature-dependent electrical conductivity. It is\ndemonstrated that these conditions generate distinct behaviour due to the\ninteraction between the plasma arc and the substrate.", "authors": [ "Stephen Millmore", "Nikolaos Nikiforakis" ], "published": "2019-06-20", "updated": "2019-06-20", "primary_cat": "physics.comp-ph", "cats": [ "physics.comp-ph", "physics.app-ph", "physics.flu-dyn", "physics.plasm-ph" ], "main_content": "1. Introduction

On average, every commercial airliner is struck by lightning once a year, hence all aircraft undergo rigorous testing to ensure these strikes do not lead to major in-flight damage [1]. Traditionally, most aircraft skins have been made from aluminium, which is lightweight and strong under normal in-flight conditions, but also both thermally and electrically conductive; thus it quickly dissipates the energy deposition from a lightning strike away from the impact site. Modern designs increasingly make use of carbon composite materials, which are stronger and lighter than aluminium under normal aircraft operating conditions [2]. However, they have much lower thermal and electrical conductivity, which leads to increased energy deposition at the site of a lightning strike, due to the Joule heating effect, which in turn can lead to much greater damage to an aircraft panel. In order to mitigate these effects, composite materials typically include an interwoven wire fabric, threads of high-conductivity wires which dissipate current away from the initial impact site, reducing the local energy deposition.
However, this increases the weight of the aircraft, negating some of the savings introduced through the lightweight composite. Over the course of a lightning strike, the current flow consists of a long continuing current of a few hundred amperes, which can last for up to 1 s. Superimposed upon this are multiple high-current peaks, each with a duration of less than 0.5 ms; these are referred to as 'strokes', illustrated in Figure 1. This information is used to create standardised waveforms for experimental studies [4], which are designed to be representative of severe conditions; each stroke can reach a maximum current input between 100,000 A and 200,000 A. Such extreme conditions are expected for less than 5% of lightning strikes. Damage due to lightning strike falls into two categories, direct and indirect effects. Direct effects are localised damage due to the arc connection, for which the individual strokes, shown in Figure 1, are the primary cause. Indirect effects involve the electrodynamic interaction of the process with the entire aircraft, and therefore consider the entire strike profile [5].

Figure 1: Illustration of the typical features in the current over the course of a lightning strike, adapted from [3]. There is an initial burst, as the stroke attaches (about 3 ms duration), followed by a longer period of continuous current input (around 200 ms and 330 A). Superimposed upon this longer input are high-current strokes, with a period of around 20 ms and currents which can exceed 200,000 A. These strokes can continue after the continuous current input ends.

In this work, the local damage to the substrate caused by an individual stroke is the primary area of interest. Experimental modelling of these effects typically uses small-scale investigations [5] over the duration of a single stroke. Though at a much smaller scale than a full lightning strike, these experiments, by using extreme stroke conditions, provide a good model of lightning strike damage. The advantage of these experiments is that the impact point on the substrate can be closely controlled, allowing analysis over the entire course of the experiment. Numerical simulations can reduce the expense of repeated experiments and can complement experimental measurements during the dynamic interaction between a lightning strike and a substrate. The interaction between the arc and the substrate is a complex, non-linear process, and thus presents challenges in capturing the full behaviour within a numerical model. Initial work in simulating the arc profile was developed from a numerical magnetohydrodynamic (MHD) description of an argon arc by Hsu et al. [6], with applications in plasma arc welding. Gleizes et al. also considered stationary arcs, and this allowed a temperature profile at the attachment point to be computed [7]. The turbulent motion of the arc channel was simulated by Chemartin et al. [8], and a smaller-scale arc, with the current profile given between an anode and a cathode (the substrate skin), was later developed by Chemartin et al. [9]. This model was then applied to a swept arc, with multiple attachment points [10], and a subsequent analysis of the effects this had on a material substrate was carried out. Villa et al. [11] consider the pressure loading above a substrate using an MHD plasma model, with a prescribed current density evolution.
Although pressures above the substrate are measured, the effect within the substrate itself was not investigated. The effects of the conductivity of the cathode were considered by Tholin et al. [12], and this showed how the shape of the arc attachment changes dramatically for low-conductivity carbon composite materials. This work was expanded upon in the thesis of Martins [13], which focuses on experimental measurements of plasma arcs using a variety of substrates. Typically, the effects of the arc attachment on the substrate are modelled separately. Ogasawara et al. [14] consider the damage within a carbon composite substrate, though they do not model the plasma arc directly, but instead use a prescribed current input. Abdelal and Murphy [15] also take this approach, with modifications as to how the current profile is applied, as did Guo et al. [16]. Foster et al. [17] highlight the importance of movement of the attachment point through prescribed motion of the current profile. Karch et al. [18] compute damage to a carbon composite substrate through a prescribed expansion of the plasma arc. The computational models described above typically simulate the arc attachment process and the substrate response individually, rather than as a two-way interacting system; where coupling is present, it is achieved through a 'co-simulation' approach, modelling each system separately. The Joule heating and pressure loading effects of the arc attachment lead to damage within the substrate. However, this can both alter the shape of the substrate (either through bending or damage) and change the properties of the substrate, such as electrical conductivity, which leads to a feedback effect changing the arc attachment. Therefore a truly nonlinear multi-physics approach is needed to capture the two-way interaction between the two systems. The approaches described above do not capture this behaviour in a single model, and hence cannot fully replicate these non-linear effects. In this work, a multi-physics methodology is presented which allows for the dynamic non-linear coupling of the plasma arc and the substrate. The framework developed within the Laboratory for Scientific Computing at the University of Cambridge is used [19, 20], which simultaneously solves coupled elastoplastic and fluid equations. This framework is extended to simulate the interaction between an MHD description of a plasma arc and the elastoplastic equations. Through this, the feedback between the two states of matter can be captured; the plasma arc alters the properties of the substrate, and this in turn affects the topology of the arc. The rest of the paper is laid out as follows. In Section 2 the mathematical formulation of the model components, and the multi-material coupling, is detailed. In Section 3, validation of the plasma model used in this work is presented. In Section 4 the coupling to multi-layered substrates is demonstrated, and in Section 5 the ability of this model to capture feedback from the substrate into the plasma arc is shown. Conclusions and further work are given in Section 6.

2. Mathematical formulation

In this section, the mathematical models used to describe the interaction between a plasma arc and an elastoplastic substrate are presented. A reference configuration to describe the application of this model is considered: a plasma arc in air, generated by an electrode, and connected to a conductive elastoplastic substrate which is grounded at its outer edges.
This is representative of the laboratory framework for testing the effects of lightning strike on aircraft skin configurations, such as those used in [11, 12, 13]. Within this framework, a cylindrically symmetric model is considered, with the arc connection at the centre of the domain, which is sufficient to capture the bulk behaviour of the arc-substrate interaction [13]. This configuration is illustrated in Figure 2, for which a single-material isotropic substrate is shown. A blunt electrode is placed above a grounded substrate, and sufficient current is passed through the electrode to generate a plasma arc. The voltage breakdown of the air, which occurs at timescales much shorter than the mechanical evolution of the system, is not modelled within this framework. Instead, the procedure of, e.g., Chemartin et al. [9], Larsson et al. [21] and Tholin et al. [12] is followed, and a thin pre-heated region of the domain is considered, representative of the initial connection resulting from voltage breakdown. This region is of sufficiently high temperature to result in ionisation and the formation of a plasma. This allows current flow from the electrode to the grounded edge of the substrate. It has been shown that the values within this preheated region do not affect the overall evolution of the plasma arc [9, 21].

Figure 2: Schematic of an air plasma arc generated by an electrode connecting with a grounded conductive substrate.

When considering the MHD approach to modelling plasma, it is generally assumed that the arc is close to local thermodynamic equilibrium (LTE), i.e. the arc can be described with a single temperature. This has been shown to give good agreement with experimental studies [12, 13], and the complexity in otherwise determining transport coefficients for a non-equilibrium air plasma means the LTE approach is widely used; thus this approach is adopted in this work. Simulations using non-equilibrium plasmas do exist [22], but the added complexity of these models, in which detailed chemistry is required for each species modelled, means they are not generally used when the LTE approach holds. The substrate and electrode materials are described through an elastoplastic model in the Eulerian frame [19, 20]. The domain beneath the substrate is included such that it is straightforward to incorporate the grounding of the substrate only at the outer edge. This domain is considered to be air, described through an ideal gas law, since over the timescales considered it is not heated sufficiently for ionisation, so a plasma model is not required.

2.1. The plasma model

Under the LTE assumption, the plasma arc is described through the single-fluid Euler equations coupled with the Maxwell equations for electrodynamic fields. This gives three equations for the conservation of mass, momentum and energy,
$$\frac{\partial \rho}{\partial t} + \frac{\partial}{\partial x_i}(\rho u_i) = 0, \qquad (1a)$$
$$\frac{\partial}{\partial t}(\rho u_j) + \frac{\partial}{\partial x_i}(\rho u_i u_j + \delta_{ij} p) = (\mathbf{J} \times \mathbf{B})_j, \qquad (1b)$$
$$\frac{\partial U}{\partial t} + \frac{\partial}{\partial x_i}\left[ u_i (U + p) \right] = u_i (\mathbf{J} \times \mathbf{B})_i + \eta J_i J_i - S_r, \qquad (1c)$$
where $\rho$ denotes the density, $\mathbf{u}$ the velocity vector, $p$ the pressure and $U$ the total energy. The source term $\mathbf{J} \times \mathbf{B}$ is the Lorentz force due to circulation of the electric current, where $\mathbf{J}$ is the current density and $\mathbf{B}$ the magnetic field.
The source term $\eta \mathbf{J} \cdot \mathbf{J} = \mathbf{J} \cdot \mathbf{E}$ is the Joule heating term due to circulation of current in resistive media, where $\eta = 1/\sigma$ is the resistivity, the inverse of the electrical conductivity, and $\mathbf{E}$ is the electric field. The final source term, $S_r$, is a radiative transfer term; in this work a grey-body treatment is used, following Villa et al. [11]. This is a simplified, temperature-dependent radiative model, though it may not be suitable for sufficiently large temperature variations. Improvements to the radiative model will be the subject of future work. The electrodynamic source terms are calculated under the assumption that the electric field is static, depending only on the charge distribution and voltage gradient. The conservation of current density can therefore be written as
$$-\nabla \cdot \mathbf{J} = -\nabla \cdot (\sigma \mathbf{E}) = \nabla \cdot (\sigma \nabla \phi) = 0, \qquad (2)$$
where $\phi$ is the electric potential. Note that although there is no explicit time dependence in the electric field, the electrical conductivity of the plasma is dependent on temperature (and pressure), and thus the field does have implicit time dependence. The magnetic field is computed from the current density through the Maxwell-Ampère equation,
$$\mathbf{B} = \nabla \times \mathbf{A}, \qquad \nabla \cdot \nabla A_i = -\mu_0 J_i, \qquad (3)$$
where $\mathbf{A}$ is the magnetic vector potential. In order to close the equations, an equation of state based on the work of d'Angola et al. [23] is used. This describes the composition of an air plasma, considering the 19 most important components, over temperatures $T < 60{,}000$ K and pressures $0.01 < p < 100$ atm. From this, the thermodynamic and electrodynamic properties of the plasma are also given. These relationships are given as fitted functions of pressure and temperature, which are not invertible. Therefore, for numerical purposes, this data has been tabulated, providing an efficient means to convert between variables within the current model [24].

2.2. The elastoplastic model

The elastoplastic substrate and electrode are described using the Eulerian framework as presented by Schoch et al. [19] and Michael et al. [20], based on the formulation of Godunov and Romenskii [25]. Plasticity effects are incorporated following the work of Miller and Colella [26]. Since an Eulerian framework is used, the deformation of the solid materials cannot be described through mesh distortion. Instead, this behaviour is accounted for through consideration of the deformation gradient tensor, given by
$$F_{ij} = \frac{\partial x_i}{\partial X_j}. \qquad (4)$$
This allows mapping from the original configuration, with coordinates given by $X$, to the deformed configuration, $x$. The technique of Rice [27] is followed, in which the plastic deformation is considered separately, $F^p$, which means the total deformation can be decomposed into plastic and elastic components, $F = F^e F^p$. The evolution of the solid materials is described by a hyperbolic system of conservation laws,
$$\frac{\partial \rho u_i}{\partial t} + \frac{\partial}{\partial x_k}\left(\rho u_i u_k - \sigma_{ik}\right) = 0, \qquad (5)$$
$$\frac{\partial \rho E}{\partial t} + \frac{\partial}{\partial x_k}\left(\rho E u_k - u_i \sigma_{ik}\right) = \eta J_i J_i, \qquad (6)$$
$$\frac{\partial \rho F^e_{ij}}{\partial t} + \frac{\partial}{\partial x_k}\left( \rho u_k F^e_{ij} - \rho u_i F^e_{kj} \right) = -u_i \frac{\partial \rho F^e_{kj}}{\partial x_k} + P_{ij}, \qquad (7)$$
$$\frac{\partial \rho \kappa}{\partial t} + \frac{\partial}{\partial x_i}(\rho u_i \kappa) = \rho \dot{\kappa}, \qquad (8)$$
where $\sigma$ is the stress tensor and $\kappa$ is the scalar material history parameter, which tracks work hardening of the material through plastic deformation.
2.2. The elastoplastic model

The elastoplastic substrate and electrode are described using the Eulerian framework as presented by Schoch et al. [19] and Michael et al. [20], based on the formulation of Godunov and Romenskii [25]. Plasticity effects are incorporated following the work of Miller and Colella [26]. Since an Eulerian framework is used, the deformation of the solid materials cannot be described through mesh distortion. Instead, this behaviour is accounted for through the deformation gradient tensor, given by

\[ F_{ij} = \frac{\partial x_i}{\partial X_j}. \tag{4} \]

This provides the map between the original configuration, with coordinates X, and the deformed configuration, x. The technique of Rice [27] is followed, in which the plastic deformation F^p is considered separately, meaning the total deformation can be decomposed into elastic and plastic components, F = F^e F^p. The evolution of the solid materials is described by a hyperbolic system of conservation laws,

\[ \frac{\partial \rho u_i}{\partial t} + \frac{\partial}{\partial x_k}\left(\rho u_i u_k - \sigma_{ik}\right) = 0, \tag{5} \]

\[ \frac{\partial \rho E}{\partial t} + \frac{\partial}{\partial x_k}\left(\rho E u_k - u_i \sigma_{ik}\right) = \eta J_i J_i, \tag{6} \]

\[ \frac{\partial \rho F^e_{ij}}{\partial t} + \frac{\partial}{\partial x_k}\left(\rho u_k F^e_{ij} - \rho u_i F^e_{kj}\right) = -u_i \frac{\partial \rho F_{kj}}{\partial x_k} + P_{ij}, \tag{7} \]

\[ \frac{\partial \rho \kappa}{\partial t} + \frac{\partial}{\partial x_i}(\rho u_i \kappa) = \rho \dot{\kappa}, \tag{8} \]

where σ is the stress tensor and κ is the scalar material history parameter, which tracks work hardening of the material through plastic deformation. The density is related to the deformation gradient through

\[ \rho = \frac{\rho_0}{\det F^e}, \tag{9} \]

and the stress tensor is given by

\[ \sigma_{ij} = \rho F^e_{ik} \frac{\partial e}{\partial F^e_{jk}}, \tag{10} \]

where e is the specific internal energy. In order to close the system, an analytic constitutive model relates the specific internal energy to the deformation gradient, entropy and material history parameter, i.e. e = e(F^e, S, κ). The effects of the current density passing through the solid substrate are modelled through a Joule heating term in the energy conservation law (6). As with the plasma, the electric field is assumed static in the substrate, and the relevant equations (2) and (3) apply here too. The system of evolution equations (5)-(8) is coupled with compatibility constraints, which ensure that deformations remain physical and continuous, given by

\[ \frac{\partial \rho F_{ij}}{\partial x_j} = 0. \tag{11} \]

The MHD and elastoplastic solid formulations described in this section are solved numerically using high-resolution shock-capturing methods, as described in previous work [19, 20].

2.3. The multimaterial approach

In this work, ghost fluid methods are used, in combination with level set methods, to model the interfaces between the plasma arc, or air, and the substrate and electrode. Level set methods track the evolution of the interfaces as they evolve over time, e.g. the substrate bending under the impact loading of the plasma arc. In order to provide boundary conditions at these interfaces, the Riemann ghost fluid method is used, which solves mixed-material Riemann problems to give interface states during evolution of the governing equations.

Level set methods represent the interface between a pair of materials as a signed distance function, φ(x), with the zero contour of this function being the physical location of that interface. It is assumed that there is no mass transfer between materials, and this gives an advective law for evolving the level set function,

\[ \frac{\partial \phi}{\partial t} + \mathbf{u} \cdot \nabla \phi = 0, \tag{12} \]

where u is the material velocity. This equation is evolved using a third-order Hamilton-Jacobi WENO reconstruction scheme [28]. Under a non-uniform velocity field, the level set function will not remain a signed distance function without reinitialisation. Each material within the model is assigned a level set function, and a fast marching algorithm is used to preserve the signed distance property around the contour φ(x) = 0. The physical material at a given point can then be determined by identifying the single positive level set function.
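To make the level set update concrete, the sketch below advances equation (12) for a circular interface using first-order upwinding. This is a deliberate simplification, our own and for illustration only: the present work uses a third-order Hamilton-Jacobi WENO scheme, and the reinitialisation and fast marching steps are omitted here.

```python
import numpy as np

def advect_level_set(phi, u, v, dx, dt):
    """One first-order upwind step of equation (12),
    d(phi)/dt + u . grad(phi) = 0, on a periodic 2D grid."""
    dpx_m = (phi - np.roll(phi, 1, axis=1)) / dx   # backward difference in x
    dpx_p = (np.roll(phi, -1, axis=1) - phi) / dx  # forward difference in x
    dpy_m = (phi - np.roll(phi, 1, axis=0)) / dx
    dpy_p = (np.roll(phi, -1, axis=0) - phi) / dx
    phix = np.where(u > 0, dpx_m, dpx_p)           # upwind selection
    phiy = np.where(v > 0, dpy_m, dpy_p)
    return phi - dt * (u * phix + v * phiy)

# Usage: a circular interface advected to the right; the zero contour of
# the signed distance function is the interface location.
n = 128
dx = 1.0 / n
y, x = np.meshgrid(np.linspace(0, 1, n), np.linspace(0, 1, n), indexing="ij")
phi = np.sqrt((x - 0.3) ** 2 + (y - 0.5) ** 2) - 0.15
u = np.ones_like(phi)          # uniform rightward velocity
v = np.zeros_like(phi)
for _ in range(50):
    phi = advect_level_set(phi, u, v, dx, 0.5 * dx)
print("interface band now centred near x =", x[np.abs(phi) < dx].mean())
```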
The Riemann ghost fluid method, developed by Sambasivan and Udaykumar [29], provides dynamic boundary conditions at the material interfaces, based on the original method of Fedkiw et al. [30]. To provide these conditions, the following procedure is used for each material, m:

1. For a cell i, if φ_{i,m} < 0 and an adjacent cell has φ_{i±1,m} > 0, the cell is adjacent to the interface, and the closest interfacial location is given by P = x_i − φ_m n, where x_i is the cell-centre position and n the interface normal.
2. Two probes are projected into the two adjacent materials, to the points P_L = P + n Δx and P_R = P − n Δx.
3. States W_L and W_R are interpolated at these points from the surrounding cells.
4. A mixed-material Riemann problem is solved to obtain the star state W*_L.
5. The material state in cell i is replaced by W*_L.

In this procedure it is assumed, without loss of generality, that the state W_L is the material that exists for φ_m > 0. It is noted that once the states at P_L and P_R are found, the vector quantities in W_L and W_R must be projected into components normal and tangential to the interface. Once all interfacial cells have been assigned a boundary value, a fast marching method is used to fill the region φ_m < 0 such that the stencil of the numerical method is always satisfied. The mixed-material Riemann problems are based on linearised solutions to the systems of equations, and are described in [20]. The electrodynamic quantities, current density and magnetic field, are assumed to be continuous across material boundaries, and thermal effects due to temperature differences between the substrate materials and the plasma arc are not modelled. The governing equations (2) and (3) are solved across the entire domain for all materials, rather than on a per-material basis.

3. Validation

Experimental validation studies for lightning strike plasma arc interaction face difficulties in capturing the arc development, due to the likelihood of damage to electronic equipment from the strong current which generates the arc. Recording features of the arc at a sufficient distance can overcome these difficulties: for example, Martins [13] uses light emission from the arc, whilst Villa, Malgesini and Barbieri [11] use pressure gauges away from the arc. These results are used to validate the current approach, and henceforth these two studies shall be referred to as M16 and VMB11 respectively.

3.1. Validation of the MHD equations

Figure 3: Experimental set-up as used by Villa et al. [11]. A plasma arc is generated between an electrode and a metal plate. Away from the attachment point, thin tubes are mounted, with pressure sensors at the end of these tubes.

The work in VMB11 allows the MHD equations (1) to be validated under a given current density profile. In Figure 3, the VMB11 configuration is shown, in which a cylindrical electrode is used to generate a plasma arc, which connects to a flat metal sheet grounded at its outer edges. Tubes are mounted to this sheet in three locations, and a pressure-recording transducer is placed at the end of each tube. As the expanding plasma arc travels over these tubes, it generates a pressure wave which travels down the tubes and is recorded by the transducer.

To simulate this experiment, cylindrical symmetry is used, and the electrode is placed 5 cm above a reflective boundary. The current flow through the electrode was recorded to follow an oscillatory profile,

\[ I(t) = I_0 \exp(-\alpha t) \sin(\beta t). \tag{13} \]

Here, I_0 is the maximum current reached by the system, measured to be 2.18 × 10^5 A, α is the damping factor and β is the damped frequency. These are related to the properties of the electrical circuit used to generate the current,

\[ \alpha = \frac{R}{2L}, \qquad \beta = \sqrt{\omega^2 - \alpha^2}, \qquad \omega = \frac{1}{\sqrt{LC}}, \tag{14} \]

where ω is the undamped frequency, R is the resistance, L the inductance and C the capacitance. These last three properties are measured as R = 24 mΩ, L = 2.9 µH and C = 26 µF respectively. For this validation test, the approach of VMB11 is used and a pre-determined current density profile is prescribed,

\[ \mathbf{J} = -\frac{I(t)}{\pi r_0^2} \, e^{-(r/r_0)^2} \, \mathbf{e}_z. \tag{15} \]

Experimentally, the radius of the plasma arc was measured to be between 1.5 and 2.5 cm; a constant radius r_0 = 2 cm is taken in this work. This allows the present implementation of the MHD formulation to be validated in isolation.
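Since the drive is fully specified by the quoted circuit parameters, the prescribed profile can be evaluated directly. The short script below (variable names are ours; the numbers are those quoted above) evaluates equations (13)-(15).

```python
import numpy as np

# Circuit parameters quoted in the text.
R, L, C = 24e-3, 2.9e-6, 26e-6       # resistance, inductance, capacitance
I0, r0 = 2.18e5, 2e-2                # peak current [A], arc radius [m]

alpha = R / (2.0 * L)                # damping factor, equation (14)
omega = 1.0 / np.sqrt(L * C)         # undamped frequency
beta = np.sqrt(omega**2 - alpha**2)  # damped frequency

def current(t):
    """Oscillatory current profile, equation (13)."""
    return I0 * np.exp(-alpha * t) * np.sin(beta * t)

def current_density(r, t):
    """Axial current density J_z(r, t), equation (15)."""
    return -current(t) / (np.pi * r0**2) * np.exp(-((r / r0) ** 2))

t = np.linspace(0.0, 100e-6, 1001)
I = current(t)
k = I.argmax()
print(f"alpha = {alpha:.3e} 1/s, beta = {beta:.3e} rad/s")
print(f"first current peak {I[k]/1e3:.0f} kA at t = {t[k]*1e6:.1f} us")
print(f"on-axis J_z there: {current_density(0.0, t[k]):.3e} A/m^2")
```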
The set-up in Figure 3 is modelled as a two-dimensional cylindrical test, considering the substrate and the electrode to be purely reflective boundaries for the material properties of the plasma. The current density profile given by equation (15) is applied between the electrode and the substrate.

Figure 4: Comparison of the temperature profile after 48 µs, comparable to the results of VMB11. The results show qualitatively similar structures, though there are quantitative differences, likely due to differences in the discrete-space implementation and uncertainty in the experimental parameters used. The outwards-moving shock wave due to the initial formation of the arc is visible as tight contours up to around 5,000 K. Behind this, there is a slower rise in temperature, and reflected features around the electrode are visible.

In Figure 4, the temperature profile is shown after 48 µs, with the expansion of the arc and the central high-temperature region clearly visible. The overall shape, especially of the reflection around the electrode, is consistent with the VMB11 results, though differences in the temperature profile are seen, which are likely due to differences in the initialisation and in the numerical techniques used (both for gridding and for evolution) between the models.

In order to model the pressure wave on the transducers, the technique used in VMB11 is followed, which treats this as a separate problem from the overall evolution of the plasma arc. This avoids the need to resolve the comparatively thin tubes within the simulation domain, and preserves cylindrical symmetry. The flow down these tubes is simulated through a solution of the one-dimensional Euler equations, modified to account for frictional effects of the tube edges. These equations are

\[ \frac{\partial \rho}{\partial t} + \frac{\partial}{\partial x}(\rho v) = 0, \tag{16} \]

\[ \frac{\partial}{\partial t}(\rho v) + \frac{\partial}{\partial x}\left(\rho v^2 + p\right) = -\mathrm{sign}(v) \, \frac{1}{2} \frac{\lambda}{D} \rho v^2, \tag{17} \]

\[ \frac{\partial E}{\partial t} + \frac{\partial}{\partial x}\left[(E + p) v\right] = \frac{\lambda}{D} \rho \left|v^3\right|, \tag{18} \]

where λ = 0.018 is the friction coefficient and D = 1 cm is the diameter of the tube. The boundary conditions at the top of the tube are specified by the properties of the plasma arc, and thus vary with time. These are given by

\[ \rho = \rho_p, \qquad v = 0, \qquad p = \begin{cases} p_p - \tfrac{1}{2} \kappa v, & v < 0, \\ p_p, & v \geq 0, \end{cases} \tag{19} \]

where κ = 0.43 is the inlet pressure-loss coefficient [31]. There is no substantial flow of plasma into the tube, hence this system of equations can be solved using a standard ideal gas, with γ = 1.4.
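A small sketch of the tube model's source terms and inlet state is given below, using the quoted values λ = 0.018, D = 1 cm, κ = 0.43 and γ = 1.4. The signs of the friction terms follow the reconstruction of equations (17)-(19) above, which is ours, and the inlet values in the example are illustrative only.

```python
import numpy as np

LAM, D = 0.018, 1e-2   # friction coefficient and tube diameter [m]
KAPPA = 0.43           # inlet pressure-loss coefficient
GAMMA = 1.4            # ideal-gas ratio of specific heats for the tube gas

def friction_sources(rho, v):
    """Friction source terms of the momentum (17) and energy (18)
    equations for the 1D tube flow, signs as reconstructed above."""
    s_mom = -np.sign(v) * 0.5 * (LAM / D) * rho * v**2
    s_en = (LAM / D) * rho * np.abs(v) ** 3
    return s_mom, s_en

def inlet_state(rho_p, p_p, v):
    """Inlet boundary state, equation (19): the plasma density is imposed
    and the pressure is adjusted by the kappa term when v < 0."""
    p = p_p - 0.5 * KAPPA * v if v < 0.0 else p_p
    return rho_p, p

# Illustrative values: ~3 atm arc overpressure above the tube mouth.
rho_b, p_b = inlet_state(rho_p=0.5, p_p=3.0 * 101325.0, v=-20.0)
print("inlet density, pressure:", rho_b, p_b)
print("friction sources:", friction_sources(1.2, -20.0))
```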
In Figure 5, the results are shown for the pressure wave travelling down the three tubes, mounted 5, 10 and 15 cm from the arc attachment. The present results are compared to both the VMB11 experimental and numerical results, with good agreement with both. The pressure peak is captured well, as is the rate of decay of the wave, which levels out slightly below atmospheric pressure. The simulation results typically give a slight overestimation of the pressure, due to the one-dimensional assumption used to model the tubes; as a result, any dissipation of the wave due to interaction with the tube walls is lost.

Figure 5: Validation of the present plasma model against experimental and numerical results for flow travelling over tubes (a) 5 cm, (b) 10 cm and (c) 15 cm from the arc connection. The purple curves are the present results, the green curves are the VMB11 numerical results, and the remaining curves the VMB11 experimental results (between 2 and 4 for a given gauge). The pressure peak and subsequent decay are reasonably well reproduced, given the uncertainty of the experimental conditions. We note that the numerical models can overpredict the pressure peak, since losses due to the assumption of a truly one-dimensional tube are neglected.

The results shown in this section validate the present numerical implementation of the equations governing plasma dynamics. In particular, they show that the behaviour of the outwards-moving shock wave is correctly captured.

3.2. Validation of the fully coupled system

When an elastoplastic substrate is incorporated, assumptions as to the shape of the current density profile can no longer be made. In this case, equation (2) is solved for the current density across the entire domain. When solving the complete coupled system, the interaction with the substrate affects the evolution of the plasma arc. In order to validate the present model in this case, the results are compared to the experimental data of M16. Through high-speed imaging of the arc attachment and its early evolution, up to around 40 µs, both the width of the plasma arc and the progression of the shock wave generated by the arc formation could be measured.

In order to validate the present implementation, two substrate configurations are considered: aluminium, and an isotropic approximation to a carbon composite material (hereafter referred to as the isotropic composite). The electrical conductivity of these two materials differs substantially: aluminium has σ = 3.2 × 10^7 S/m, whilst the carbon composite on which the isotropic material is based has σ = 1.6 × 10^4 S/m [12]. Experimental results show that for a material with lower conductivity, the arc attachment area is larger. This is because the electrical conductivity of this material is comparable to that of the plasma, hence the extended arc connection offers a less resistive path for the current flow.

The initial data for this model uses the set-up described in Figure 2, which incorporates the electrode for current input within the domain. The initial data for this problem uses a pre-heated arc region at the centre of the domain of 8,000 K with a 2 mm diameter. For these tests, either a 1 mm (for comparison of an aluminium substrate to M16) or 2 mm (elsewhere) thick substrate is considered, located 40 mm below the electrode. A direct current application through the electrode following a D-component waveform is used, as defined in the document ARP 5412B [4]. A polynomial fit is made to the current profile recorded by M16.
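The recorded current trace itself is not reproduced in this text, so the sketch below uses purely hypothetical digitised samples to illustrate how such a polynomial fit can be constructed and then sampled at each time step; none of these numbers should be read as the M16 data.

```python
import numpy as np

# Hypothetical digitised samples (t [us], I [kA]) standing in for the
# recorded waveform; placeholders only, not the M16 measurements.
t_us = np.array([0.0, 2.0, 5.0, 10.0, 15.0, 20.0, 30.0, 40.0])
I_kA = np.array([0.0, 35.0, 80.0, 98.0, 90.0, 72.0, 40.0, 18.0])

I_fit = np.poly1d(np.polyfit(t_us, I_kA, deg=4))  # low-order fit

# The fitted profile is then sampled at each time step to set the
# current input through the electrode.
for t in (1.0, 10.0, 25.0):
    print(f"I({t:5.1f} us) ~ {I_fit(t):6.1f} kA")
```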
Figure 6 shows the evolution of the pressure profile in the arc and a 2 mm thick aluminium substrate. After 1 µs, Figure 6 (a), a pressure loading is evident on the aluminium substrate. Although the current input is still reasonably low at this early time, there is sufficient energy input that the pressure at the centre of the arc has doubled from the initial pre-heated value. As the arc evolves over time, Figure 6 (b-d), the highest pressure remains in the centre of the arc, with the majority of the pressure loading on the substrate occurring here. Away from the centre of the arc, the higher pressure associated with the initial shock wave moving radially away from the centre of the arc is also visible as a darker blue region in the plasma. A corresponding wave moves through the substrate as the shock wave imparts a loading effect, though this is substantially lower than at the centre of the arc. Whilst the pressure within the plasma arc must strictly stay positive, it is noted that within the substrate negative values are experienced. This is because pressure is a component of the stress tensor, and a solid material can sustain tension as a result of rarefaction waves. When considering overall damage effects, the magnitude of the stress within the substrate is an important criterion.

The effect of the dynamic current density profile is clearly visible; the arc in Figure 6 does not maintain a cylindrical shape. The current density gradient in the z-direction leads to a higher pressure directly beneath the arc, but also at the attachment point, where high pressures, due to reflected material, are seen.

Figure 6: Pressure evolution for an arc attachment to an aluminium substrate at times of (a) 1 µs, (b) 10 µs, (c) 15 µs and (d) 20 µs.

Figure 7: Temperature evolution in both the plasma arc and the aluminium substrate at times of (a) 1 µs, (b) 10 µs, (c) 15 µs and (d) 20 µs.

The temperature in the plasma and aluminium substrate as the arc evolves is shown in Figure 7 (a-d), at corresponding times to the pressure images in Figure 6. The hottest temperature regions remain in the centre of the domain, where conductivity, and thus Joule heating, is greatest. The temperature is also plotted in the substrate; however, due to the high conductivity of the aluminium substrate in this test, the energy density deposited in the substrate is comparatively low on this timescale. The overall rise in temperature over the timescales considered is less than 1 K. Over longer timescales (O(1) s), the temperature rise would be governed by diffusive and conductive behaviour, in addition to longer-term Joule heating effects from the long continuous current, and this leads to the minor damage resulting from lightning strike. These results compare well qualitatively with the images obtained in M16.

By taking measurements of the arc width and the shock progression, a quantitative comparison to the experimental results can be made. In the left half of Figure 8, the present numerical results are compared to M16 for arc attachment to a 1 mm thick sheet of aluminium. The numerical results match the experimentally measured widths, demonstrating that the interaction with the substrate can be correctly captured. This is further evident when the width of an arc between two electrodes with no substrate present (sometimes referred to as a free arc) is plotted: there is a significant difference in the evolution of the width of the free arc compared to when a substrate is present after around 20 µs, which is correctly captured by the present model.

Figure 8: The left plot shows the comparison of the measurements of the arc width for experiment (+) and simulation (solid line). The numerical results are shown to correctly capture the evolution of the arc on an aluminium substrate. For comparison, the experimentally measured width of an arc between two electrodes is plotted (×), showing the substrate has a clear effect on the evolution itself. The right plot shows the comparison of the experimentally measured (+) and computational (solid line) shock wave propagation. It is clear that the present model captures this behaviour well.
The M16 experiment was also able to capture the evolution of the shock wave generated by the plasma arc, through optical changes in a patterned background. In the right half of Figure 8, the present numerical results for the propagation of the shock wave are compared, and again good agreement is found between the present model and the experimental studies.

Arc attachment to the isotropic composite substrate is now considered, and the results are compared directly to those shown for an aluminium substrate. The full modelling of a carbon composite material requires an anisotropic description of the alignment of the fibres comprising the substrate to be incorporated within the equation of state. This is currently beyond the capabilities of the present model, though by making an isotropic approximation to a composite material, the effect the substrate conductivity has on the arc can be considered. This isotropic model is approximately equivalent to a description of the composite material in the direction of the fibres. The comparisons between the low-conductivity substrate and aluminium are presented with both results shown on the same plot: the isotropic composite substrate is plotted to the left of the central axis, and the aluminium substrate to the right. In order to visualise the differences between the simulations, the same plot ranges are always used for both materials.

In Figure 9, the evolution of the pressure for attachment to the isotropic composite substrate is shown. There are clear differences between the evolution profiles, both within the arc and the substrate, compared to attachment to aluminium. Within the arc, the behaviour local to the electrode is largely unchanged; it is clear that the differences originate from the interaction with the substrate. In Figure 9, there is a high-pressure region close to the surface of the substrate, which then has 'pinch'-type behaviour directly above it. This serves to exacerbate the gradient in pressure down the arc, visible in Figures 9 (c-d). Additionally, in Figure 9 (c), it is clear that the change in behaviour at the substrate surface leads to a faster shock-propagation speed, though at the top of the domain the shock speed remains similar to the case of an aluminium substrate.

Within the substrate, the pressure loading is substantially higher. This is a result of greater energy deposition in the substrate through the Joule effect in equation (6). This is further evidenced by the location of the low-pressure region in the substrate beneath the arc. This region is substantially larger than that beneath the arc attachment to aluminium and, in fact, exists even where the pressure loading is highest. This suggests that there are additional effects contributing to the pressure increase within the substrate; subsequent plots show that the high-pressure region is correlated with high current density, and hence Joule heating.

Figure 9: Pressure evolution of plasma arc attachment to the isotropic composite substrate (left) and to the aluminium substrate (right) at times of (a) 1 µs, (b) 10 µs, (c) 15 µs and (d) 20 µs.

The differences in the pressure loading of the aluminium and isotropic composite substrates are also reflected in the temperature field over the same time period, shown in Figure 10. As the plasma arc develops, the radius of the plasma arc close to the top surface of the substrate is greater than that for aluminium.
Where the arc radius is large, a lower temperature region is seen, particularly on the outer edges of the arc. Such a region is visible in the optical emission results of Tholin et al. [12], suggesting the correct coupling is captured between the isotropic composite substrate and the arc. The optical emission is closely coupled to conductivity, which is itself dependent on temperature; thus a comparison can be made between these two results. The temperature within the substrate is also plotted, and it is clear that there is a noticeable increase for the isotropic composite material. Due to the energy deposition through the Joule heating effect, there is a corresponding rise in temperature, with a clear increase associated with the leading edge of the arc.

As with the arc attachment to aluminium, the M16 experimental results can be used to further validate this model. In Figure 11, the arc width obtained from both experiment and simulation is compared. There are two experimental values for arc width plotted: one in the direction aligned with the carbon weave, and one perpendicular to it. It is clear that the isotropic approximation captures the behaviour aligned with the weave well (this is the preferential direction for current to travel). Again, the arc width of the free arc is plotted, and it is now clear that there are significant differences in arc width between this low-conductivity case and the aluminium attachment shown in Figure 8. In the right half of Figure 11, the expansion of the shock wave for the experiment and simulation is compared. As for the arc width, two experimental values are obtained, depending on the orientation of the recording equipment relative to the carbon weave. As before, the current isotropic model of a low-conductivity substrate is found to correspond well to the behaviour in the direction of the carbon weave.

Figure 10: Temperature evolution of the plasma arc with an isotropic composite substrate (left) and an aluminium substrate (right) at times of (a) 1 µs, (b) 10 µs, (c) 15 µs and (d) 20 µs.

Figure 11: The left plot shows the comparison of the measurements of the arc width for experimental attachment to carbon composite (+ and □) and simulation (solid line) of attachment to a low-conductivity substrate. The experimental data show the arc width both along the carbon weave direction (+) and perpendicular to the weave (□). The present numerical results correctly capture the evolution of the arc along the weave direction. For comparison, the experimentally measured width of an arc between two electrodes is plotted (×). The right plot shows the comparison of the experimentally measured (+ and □) and computational (solid line) shock wave propagation against a low-conductivity substrate. Again, the two experimental results correspond to measurements along the carbon weave (+) and perpendicular to it (□). It is clear that the present model captures the shock expansion corresponding to the direction along the weave well.

4. Multi-layered substrates

Figure 12: Pressure comparison for arc interaction with an isotropic composite substrate in isolation (left) and a dual-layered substrate (right) at times of (a) 1 µs, (b) 10 µs, (c) 15 µs and (d) 20 µs. The same pressure range is chosen for both sets of results to enable direct comparison.

The multimaterial nature of the present model allows for the plasma arc to interact with materials that are not necessarily on the surface of the substrate.
In this section, a test case is considered which investigates the effects of layering materials with different electrical conductivities. This is constructed such that an isotropic composite substrate is placed on top of a sheet of aluminium. By placing the high-conductivity material as the bottom layer in this scenario, the effect of an embedded layer used within current carbon composite materials is considered; it is expected that this layer can form a preferential path for the current flow. Each layer has a thickness of 2 mm, giving a total substrate thickness of 4 mm. To initialise the plasma, a pre-heated region directly connecting the electrode to the substrate is again included. In order to ascertain the effects of the dual-layered substrate, the results are compared to those of a single-layer isotropic composite substrate. Therefore, if the results were governed only by the top material, no difference would be expected between these two cases.

Figure 12 shows the pressure evolution within the plasma and the substrate, compared to the situation where only the isotropic composite is present. The pressure profiles in both plasma and substrate are clearly affected by the presence of the dual layering. The plasma arc does not show the 'pinch' feature, and subsequent expansion close to the surface, when the aluminium layer is included. The overall shape follows that of the single aluminium substrate shown in Figure 6. It is clear that this change in behaviour is also true for the expansion of the shock wave above the dual-layered substrate. Additionally, the high pressure loading on the dual substrate is now confined to the area directly beneath the electrode. There is still a higher pressure within the isotropic composite substrate than for a single aluminium sheet, but the extent is confined to only a small region.

Figure 13: Temperature evolution for arc attachment to an isotropic composite substrate (left) and a dual-layered substrate (right) at times of (a) 1 µs, (b) 10 µs, (c) 15 µs and (d) 20 µs.

The reduction in the radial extent of the high-pressure region as a result of adding an aluminium layer, shown in Figure 12, may be expected to yield a reduction in substrate heating. Figure 13 shows the temperature evolution corresponding to this pressure behaviour. As expected, the temperature profile follows the general trend of the pressure profile, with the high-temperature region being significantly reduced in radial extent through the introduction of the aluminium layer. Once again, the shape of the temperature profile is comparable to the single aluminium layer, shown in Figure 7. As the pressure profile suggests, the high temperature in the dual-layered substrate is restricted to the initial arc attachment point. Away from this, there is no substantial increase in the temperature of the substrate, a clear contrast to the single isotropic composite substrate case. The reduction in temperature in the substrate is a consequence of less energy being deposited in this layer through Joule heating, which suggests that the current flow is predominantly through the aluminium layer to the ground site, instead of through the low-conductivity layer.

In Figure 14, the current density streamlines at t = 10 µs are shown, i.e. at the same time as Figure 13 (b).
These are plotted over the temperature field, to show the path of the current and how closely other variables follow this behaviour. For the isotropic composite substrate, there is a clear radial component to the streamlines within the plasma as they approach the substrate. As expected, in this case they then attach directly to the ground site. When the aluminium layer is included, the difference in the path of the streamlines is clear: they now flow much more directly into the aluminium substrate. It is at this point that they turn radially towards the ground site, and they remain within the aluminium layer. This explains the similarities between the results for a single aluminium substrate in Figures 6 and 7 and the dual-layered results presented in this section.

Figure 14: Annotated slice through the temperature field at t = 10 µs, comparing an isotropic composite substrate (left) to a dual-layered substrate (right). Current streamlines are overlaid to show the path of current from the electrode to the grounding location.

The results in this section demonstrate that the model presented is capable of accurately coupling the interaction of a plasma arc with a complex arrangement of substrate layers. The complete properties of these layers govern the behaviour of the arc attachment, not just the properties of the top layer.

5. Temperature-dependent conductivity

The fully coupled nature of the present model means that behaviour within the substrate can alter the shape of the arc. This is most obvious when the energy input into the substrate alters the electrical conductivity of the material. For high-conductivity substrates such as aluminium, these effects are negligible over the timescales considered, since there is very little change in the temperature of the substrate. However, the significant energy deposition into a low-conductivity substrate is sufficient to alter the material properties, such as the electrical conductivity, over a short timescale. It would then be expected that this alters the interaction of the arc with the substrate, demonstrating a true two-way non-linear coupling.

A test case is considered in which a temperature-dependent electrical conductivity for a carbon composite material, taken from Guo et al. [32], is applied to the isotropic composite used in this work. The conductivity of the material decreases close to linearly with increasing temperature, hence a line is fitted through the experimental data according to

\[ \sigma(T) = \sigma_0 + \alpha T, \tag{20} \]

where α = −17.115 S m−1 K−1. The reference conductivity for this material is σ_0 = 1.45 × 10^4 S m−1, and thus two test cases are considered: first, where the conductivity within the substrate is constant at the value σ_0, and second, where it obeys equation (20).
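A minimal helper for the fitted law (20) is sketched below, with the quoted values σ_0 = 1.45 × 10^4 S/m and α = −17.115 S m−1 K−1. The floor preventing a negative conductivity when the linear fit is extrapolated to high temperature is our addition, not part of the published fit.

```python
def conductivity(T, sigma0=1.45e4, alpha=-17.115):
    """Temperature-dependent conductivity, equation (20):
    sigma(T) = sigma0 + alpha * T [S/m], floored at 1 S/m so that
    extrapolation beyond the fitted data cannot turn negative."""
    return max(sigma0 + alpha * T, 1.0)

for T in (300.0, 340.0, 600.0, 800.0):
    print(f"T = {T:5.0f} K  ->  sigma = {conductivity(T):8.1f} S/m")
```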
The effects of the temperature-dependent conductivity are shown after 5.4 µs in Figure 15, and after 15 µs in Figure 16. These plots show the effect of the temperature-dependent substrate properties on both the temperature and current density magnitude profiles. In each figure, two contours are plotted, one for the constant-conductivity model and one for the temperature-dependent conductivity. Both contours are plotted at the same value: 2 × 10^4 K for the temperature in the plasma arc, 340 K for the substrate, and 1.5 × 10^8 A m−2 for the current density in both materials.

Figure 15: Temperature (left) and current density magnitude (right) images for a substrate with a temperature-dependent conductivity after 5.4 µs. Constant-value contours are shown in grey: 2 × 10^4 K for the temperature in the plasma arc, 340 K for the substrate and 1.5 × 10^8 A m−2 for the current density magnitude everywhere. The black contours show the corresponding values for a constant-conductivity substrate. At this time, the arc profiles are comparable, though slightly wider at the attachment point with a temperature-dependent conductivity. However, it is clear that the substrate is heating more rapidly, and the extent of this heating is further radially outwards.

In Figure 15, the current input to the system is close to its peak value. At this stage, there is little difference in the arc profile; its evolution is governed primarily by this current input. It is, however, slightly wider where it attaches to the substrate. The effects within the substrate itself are more noticeable, both in temperature and current density. The contour at 340 K is deeper in the case of a varying substrate conductivity, and the primary path of the current through the substrate is clearly further radially outwards.

In Figure 16, the differences between the two cases are now clear in the plasma arc, as well as in the substrate. The increase in width of the arc attachment is more pronounced, and this leads to a decrease in width, seen in the temperature profile midway between the substrate and the electrode. The extended path of the current through the arc, causing this greater attachment area, is visible in the current density magnitude. Within the substrate, the heated region is both wider and radially further out. Additionally, the maximum temperature in this region is higher in the case of variable conductivity. The current density profile in the substrate again shows a greater radial distance of the attachment.

These results demonstrate successful simulation of the feedback between the plasma arc and the substrate. The Joule heating effect imparting energy into the substrate alters its properties, and hence the optimal path for the current to take. As a result, the shape of the plasma arc is changed, in this case moving further outwards. In this particular case, including the temperature-dependent properties of the substrate could show that greater damage occurs to the substrate than would otherwise be predicted, due to the larger area of effect and the greater temperatures reached within the substrate.

6.