diff --git "a/abs_29K_G/test_abstract_long_2405.04483v1.json" "b/abs_29K_G/test_abstract_long_2405.04483v1.json" new file mode 100644--- /dev/null +++ "b/abs_29K_G/test_abstract_long_2405.04483v1.json" @@ -0,0 +1,608 @@ +{ + "url": "http://arxiv.org/abs/2405.04483v1", + "title": "CloudDiff: Super-resolution ensemble retrieval of cloud properties for all day using the generative diffusion model", + "abstract": "Clouds play a crucial role in the Earth's water and energy cycles,\nunderscoring the importance of high spatiotemporal resolution data on cloud\nphase and properties for accurate numerical modeling and weather prediction.\nCurrently, Moderate Resolution Imaging Spectroradiometer (MODIS) provides cloud\nproducts with a spatial resolution of 1 km. However, these products suffer from\na lengthy revisit cycle. This study develops a generative diffusion model\n(donated as CloudDiff) for super-resolution retrieval of high spatiotemporal\ncloud phase and properties, applicable both day and night. Leveraging 2 km\nspatial resolution Himawari-8 Advanced Himawari Imager (AHI) thermal infrared\n(TIR) radiances and viewing geometry as condition, alongside daytime MODIS\nproducts as targets, the model can generate cloud phase (CLP), cloud top height\n(CTH), cloud optical thickness (COT), and cloud effective radius (CER) at 1 km\nspatial resolution and 10-minute temporal resolution. The conditional diffusion\nmodel can generate sharper images and capture finer local features than\ndeterministic super-resolution approaches. It draws multiple samples based on\nthe underlying probability distribution, enabling retrieval uncertainty\nassessment. Evaluations show agreement between cloud phase and properties\nderived from the CloudDiff and MODIS cloud products. The ensemble mean is found\nto enhance retrieval accuracy and credibility, outperforming the deterministic\nmodel.", + "authors": "Haixia Xiao, Feng Zhang, Lingxiao Wang, Wenwen Li, Bin Guo, Jun Li", + "published": "2024-05-07", + "updated": "2024-05-07", + "primary_cat": "physics.ao-ph", + "cats": [ + "physics.ao-ph" + ], + "label": "Original Paper", + "paper_cat": "Diffusion AND Model", + "gt": "Clouds play a crucial role in the Earth's water and energy cycles,\nunderscoring the importance of high spatiotemporal resolution data on cloud\nphase and properties for accurate numerical modeling and weather prediction.\nCurrently, Moderate Resolution Imaging Spectroradiometer (MODIS) provides cloud\nproducts with a spatial resolution of 1 km. However, these products suffer from\na lengthy revisit cycle. This study develops a generative diffusion model\n(donated as CloudDiff) for super-resolution retrieval of high spatiotemporal\ncloud phase and properties, applicable both day and night. Leveraging 2 km\nspatial resolution Himawari-8 Advanced Himawari Imager (AHI) thermal infrared\n(TIR) radiances and viewing geometry as condition, alongside daytime MODIS\nproducts as targets, the model can generate cloud phase (CLP), cloud top height\n(CTH), cloud optical thickness (COT), and cloud effective radius (CER) at 1 km\nspatial resolution and 10-minute temporal resolution. The conditional diffusion\nmodel can generate sharper images and capture finer local features than\ndeterministic super-resolution approaches. It draws multiple samples based on\nthe underlying probability distribution, enabling retrieval uncertainty\nassessment. Evaluations show agreement between cloud phase and properties\nderived from the CloudDiff and MODIS cloud products. 
The ensemble mean is found\nto enhance retrieval accuracy and credibility, outperforming the deterministic\nmodel.",
+    "main_content": "Introduction Clouds are critical in the Earth's water and energy budgets (Li et al., 2005). Their influence on the radiation budget can induce either heating or cooling of the planet, contingent upon the radiative characteristics of the cloud and its altitude (Stephens et al., 1981, 1990). The significance of clouds is further underscored by variables such as cloud optical thickness (COT), cloud effective radius (CER), cloud top height (CTH), and cloud phase (CLP). These parameters profoundly impact the Earth's net radiation balance due to their distinct scattering and absorption characteristics (Fauchez et al., 2018a; Min et al., 2020; Wang et al., 2016a). Achieving an accurate representation of these optical properties remains a formidable challenge, primarily because the microscale physical processes within clouds are difficult to explicitly simulate in global numerical models (Baran, 2012; Ceppi et al., 2017; Waliser et al., 2009). Consequently, there is an urgent need to obtain cloud phase and properties with high spatial and temporal resolution. Such detailed cloud data are indispensable for a deeper understanding of atmospheric physical processes, the enhancement of data assimilation techniques, and the improvement of weather forecasting accuracy (Muskatel et al., 2021). The retrieval of cloud properties has been conducted for several decades. Since the 1970s, airborne measurements have been employed to retrieve COT and CER, resulting in numerous successful experimental studies (Finger et al., 2015; King, 1987; Krisna et al., 2018; Platnick et al., 1995; Twomey and Cocks, 1989). However, these campaigns incur high costs, and the temporal and spatial coverage of field observations is limited. With the advancement of satellite remote sensing technology, particularly passive sensors (geostationary and polar-orbiting satellites), researchers have increasingly utilized data from visible and near-infrared bands to retrieve cloud properties. This approach enables the characterization of cloud properties at various spatial and temporal resolutions (King et al., 1992; Menzel et al., 2008; Platnick et al., 2003; Tang et al., 2017; Zhang and Platnick, 2011; Zhuge et al., 2020), owing to the wide observational coverage provided by passive sensors. The basic physical principle behind this method is that the cloud radiances measured by the nonabsorptive channels in the visible or near-infrared wavelengths are influenced by COT, while those captured by water-absorption channels in the shortwave infrared wavelength are sensitive to the CER (Nauss and Kokhanovsky, 2011). These retrieval methods, which rely on solar radiation, are effective only for daytime scenes. They are not applicable to nighttime scenes and exhibit higher uncertainties in high-latitude regions and optically thin cloud scenes (Wang et al., 2016b). Thermal infrared (TIR) retrieval algorithms, utilizing the split-window technique (Parol et al., 1991; Toshiro, 1985), offer valuable capabilities for both daytime and nighttime scene analysis. This technique retrieves COT and CER from the brightness temperature differences between two distinct channels in the infrared atmospheric windows, where gaseous absorption is minimal. 
Additionally, the optimal estimation methodology (Rodgers, 2000) has been implemented for the Atmospheric Infrared Sounder V6 (AIRS) and Advanced Microwave Sounding Unit (AMSU), utilizing infrared spectral data to successfully retrieve the physical and optical properties of clouds (Kahn et al., 2014, 2015). However, due to significant absorption by cloud particles in the infrared spectrum, these traditional IR-based algorithms primarily excel in retrieving optically thin cloud properties, while facing challenges in scenarios involving opaque, thick clouds (Wang et al., 2016a). Consequently, an alternative approach is necessary to provide a more comprehensive solution. Data-driven deep learning methods, renowned for their proficiency in capturing the spatial variations of image features with fast computation, have been extensively applied in cloud identification and property retrieval (Tong et al., 2023; Zhao et al., 2023). For example, Wang et al. (2022) developed a convolutional neural network (CNN) model for continuous cloud identification and retrieval of cloud properties (i.e., COT, CER, and CTH) throughout the diurnal cycle for the Moderate Resolution Imaging Spectroradiometer (MODIS), leveraging daytime MODIS TIR radiances alongside satellite viewing zenith angles (VZA). Additionally, employing a transfer-learning-based UNet model and MODIS/Himawari-8 cloud products, Li et al. (2023) successfully estimated the CER, COT, and CTH from Himawari-8 TIR measurements, and results showed that the model enhanced performance for optically thick clouds. 
Diffusion models have proven successful in various research domains, such as computer vision for image generation and synthesis (Croitoru, 2023), precipitation nowcasting (Nai et al., 2024), estimating unresolved geophysical processes (Pan et al., 2023), and Earth system model downscaling (Hess et al., 2024), showcasing their effectiveness in handling complex systems. The primary objective of this study is to develop a diffusion model for super-resolution retrieval of cloud optical properties and cloud phase at high spatiotemporal resolution throughout the diurnal cycle using a geostationary satellite. Leveraging the TIR channels of the Himawari-8 satellite and employing MODIS cloud products as ground truth, we have developed a generative diffusion model capable of cloud identification and retrieval of COT, CER, and CTH, characterized by high precision and enhanced spatiotemporal resolution. The efficacy of this model is evaluated against standard MODIS cloud product measurements, focusing particularly on its generalization capabilities and uncertainty, analyzed across typhoon case studies and extended datasets. The data, methodology, and experimental details are outlined in Section 2. The performance outcomes of the model are thoroughly examined in Section 3. Lastly, Section 4 offers conclusions and discussions. 2. Data and methods 2.1. Data 2.1.1. Himawari-8 AHI Satellite Data Himawari-8, launched in October 2014, is the geostationary satellite sensor system operated by the Japan Meteorological Agency (JMA). It represents the latest iteration in the Multifunctional Transport Satellite (MTSAT) series. The Advanced Himawari Imager (AHI) sensor onboard Himawari-8 captures full-disk images every 10 minutes across 16 spectral bands from visible to infrared wavelengths, with spatial resolutions ranging from 500 m to 2 km and temporal resolutions between 2.5 and 10 minutes, covering regions from East Asia to Australia. The TIR measurements are sensitive to optically thin clouds and are continuously obtained throughout the diurnal cycle, independent of solar geometry (Fauchez et al., 2018a). In this study, TIR radiances from Himawari-8 AHI are utilized to estimate cloud properties during both daytime and nighttime. Additionally, the VZA is employed to construct the retrieval model. Table 1 summarizes the TIR measurements (6.95–13.30 µm) and VZA of Himawari-8 AHI used. 2.1.2. MODIS data With the launch of NASA's Terra satellite in 1999, followed by Aqua in 2002, MODIS has emerged as one of the most indispensable satellite remote sensing platforms for Earth science research. It measures reflected solar and emitted thermal radiation across 36 spectral channels (0.42–14.24 µm), offering unique spectral and spatial capabilities for retrieving cloud properties (Platnick et al., 2016). The Terra-MODIS (MOD06) and Aqua-MODIS (MYD06) products, which have a spatial resolution of 1 km, are accessible through the Atmosphere Archive and Distribution System website (https://ladsweb.modaps.eosdis.nasa.gov/). These products include cloud top properties (e.g., CTH, CLP for both day and night) and cloud optical and microphysical properties (e.g., COT, CER, daytime only). Over the years, the MODIS cloud products have demonstrated consistently high accuracy and reliable performance (King et al., 2003; Platnick et al., 2015). 
In this study, the daytime MODIS cloud optical and physical properties (CTH, COT, CER, and CLP) from the Level-2 cloud product (MYD06 L2 and MOD06 L2) are utilized as ground truth to develop the super-resolution retrieval model. Table 1: The Himawari-8 AHI data used for cloud parameter super-resolution retrieval. Columns: Band Number, Bandwidth (µm), Central Wavelength (µm). Band 9: 6.89–7.01, 6.95; Band 10: 7.26–7.43, 7.35; Band 11: 8.44–8.76, 8.60; Band 12: 9.54–9.72, 9.63; Band 13: 10.3–10.6, 10.45; Band 14: 11.1–11.3, 11.20; Band 15: 12.2–12.5, 12.35; Band 16: 13.20–13.40, 13.30; VZA: –, –. All entries share a spatial resolution of 2 km and a temporal resolution of 10 minutes. 2.1.3. Data preprocessing As described above, the TIR measurements (6.95 µm, 7.35 µm, 8.60 µm, 9.63 µm, 10.45 µm, 11.20 µm, 12.35 µm, and 13.30 µm) along with the VZA of the Himawari-8 AHI serve as the inputs for the model, while the MODIS Level-2 CLP, CTH, COT, and CER data are used as the targets for training the model. To optimize the model during training and enhance its accuracy, we normalized the inputs and targets. By employing min-max normalization, we scaled the input and output variables to fall within the range of 0 to 1. To cover as wide a range of the Earth's surface and viewing geometries as possible, and to accommodate seasonal variations, we collected data from January 2016 to October 2017. Specifically, data from January 2016 to May 2017 were utilized for model training, data from June to August 20, 2017 for model validation, and data from August 21, 2017 to October 2017 served as the test set. Owing to the differing spatiotemporal resolutions of the Himawari-8 AHI and MODIS cloud products, we performed spatiotemporal matching of the data. In this process, we selected data from both MODIS and Himawari-8 for the same regions and times, with the cloud product grid points being twice that of the TIR observations. To alleviate memory and computational demands and to accelerate the selection process for the model, we cropped the cloud products in the training, validation, and test sets to a size of 256×256 km, while the input TIR observations were sized at 128×128 km. Ultimately, our training set comprised 76,247 samples, with the validation and test sets containing 9,530 and 9,532 samples, respectively. 2.2. Method The diffusion model is a state-of-the-art deep learning technique that employs probabilistic denoising processes to develop generative models (Bishop, 2024). The model typically operates on the principle of simulating a gradual process of denoising, effectively reconstructing data points from a noise-like distribution. This process is modeled as a reverse Markov chain, where a data sample is initially transformed into noise through a sequence of diffusion steps and then reconstructed back into a clean sample through learned reverse transitions. In a classical set-up, the model involves iteratively applying a series of conditional Gaussian distributions, beginning from a distribution of noise $p(z_T)$ and progressively denoising it to retrieve the original data distribution $p(x_0)$. This can be succinctly represented as, $p(x_0) = \int \cdots \int p(x_0|x_1)\, p(x_1|x_2) \cdots p(x_{T-1}|z_T)\, p(z_T)\, \mathrm{d}x_1 \cdots \mathrm{d}x_{T-1}\, \mathrm{d}z_T$. (1) In each iteration, the model utilizes the noisy data from the previous step as input, subsequently refining it to a greater degree of accuracy in accordance with the data's original state. 
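To make this iteration concrete, the following is a minimal sketch of the DDPM-style ancestral-sampling loop described above; the noise predictor `eps_model` and the schedule `betas` are illustrative assumptions, not the authors' released code.

```python
import torch

@torch.no_grad()
def sample(eps_model, betas, shape):
    """Sketch of the reverse (denoising) iteration behind Eq. (1).

    eps_model: trained network predicting the noise contained in x_t at step t.
    betas:     variance schedule beta_1..beta_T, shape (T,).
    shape:     shape of the generated sample, e.g. (1, 4, 256, 256).
    """
    alphas = 1.0 - betas
    alpha_bars = torch.cumprod(alphas, dim=0)
    x = torch.randn(shape)  # start from pure Gaussian noise z_T
    for t in reversed(range(len(betas))):
        eps = eps_model(x, torch.tensor([t]))
        # posterior mean of x_{t-1} given x_t (Ho et al., 2020)
        x = (x - betas[t] / torch.sqrt(1.0 - alpha_bars[t]) * eps) / torch.sqrt(alphas[t])
        if t > 0:  # add noise at every step except the last
            x = x + torch.sqrt(betas[t]) * torch.randn_like(x)
    return x
```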
The denoising path is learned from training data, thereby enabling the model to effectively generate or reconstruct high-quality data samples. 2.2.1. Conditional diffusion model In our study, the TIR measurements and the VZA are denoted by y, the condition variable. The target variables, the cloud products, are represented by x. The objective is to approximate the conditional distribution of x given y, using a significantly large dataset of paired samples $(x_i, y_i)$. The conditional diffusion model incorporates conditioning variables into the generative process (Batzolis, 2021), allowing the model to generate data conditioned on specific information. Mathematically, this can be represented as the transition from a noise distribution $p(z_T)$ to the data distribution $p(x_0)$ conditioned on a variable y, described by, $p(x_0|y) = \int p(x_0|z_T, y)\, p(z_T|y)\, \mathrm{d}z_T$, (2) where $z_T$ represents the latent variables at the final timestep, and the model iteratively refines these variables through the conditioning on y, enhancing its ability to target specific data generation tasks. As Figure 1 shows, the conditional diffusion model can produce cloud products given the conditions of the TIR and VZA variables, making it particularly useful in scenarios where the output needs to be tailored to specific environments. In this framework, for any given y, the algorithm outputs samples of x from $x \sim p(x_0|y)$, where p is a learned distribution that does not adhere to any predefined probability distribution form. The forward process has the same scheme as Denoising Diffusion Probabilistic Models (DDPMs) (Ho et al., 2020), but in the reverse process we embed the conditional variables into the UNet for modelling the conditional probability distributions (Nai et al., 2024). Figure 1: The CloudDiff for super-resolution cloud identification and properties retrieval. The generated samples x are cloud products, and the conditions y include the TIR and VZA variables. In the forward process, the data $x_0$ undergoes a series of transformations, gradually adding noise over discrete time steps T until it is converted into pure Gaussian noise $x_T \equiv z_T$. The noise addition at each timestep t is defined by a variance schedule $\beta_t$, and can be described by the following stochastic update equation, $x_t = \sqrt{1-\beta_t}\, x_{t-1} + \sqrt{\beta_t}\, \epsilon, \quad \epsilon \sim \mathcal{N}(0, I)$, (3) where $\epsilon$ represents Gaussian noise. The reverse process, where the model learns to reconstruct the original data from noise, is explicitly conditioned on y. At each step, the model estimates the original data $x_{t-1}$ from the current noisy data $x_t$ using a neural network parameterized by $\theta$. This network predicts the mean $\mu_\theta(x_t, t, y)$ of the distribution for $x_{t-1}$, typically modeled as, $x_{t-1} = \mu_\theta(x_t, t, y) + \sigma_t \epsilon, \quad \epsilon \sim \mathcal{N}(0, I)$, (4) where $\sigma_t$ is a predetermined noise level (Ho et al., 2020). The objective of training this conditional diffusion model is to minimise the difference between the estimated $x_{t-1}$ and its actual value. This effectively allows the model to learn the reverse of the forward diffusion process. 
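As an illustration of this training procedure, one gradient step might look as follows; the closed-form noising below follows from composing Eq. (3) over t steps, it anticipates the noise-prediction loss defined next in Eq. (5), and all names (`eps_model`, `betas`) are assumptions rather than the paper's code.

```python
import torch
import torch.nn.functional as F

def train_step(eps_model, optimizer, x0, y, betas):
    """One conditional training step (sketch).

    x0: clean MODIS cloud-product patch; y: AHI TIR radiances + VZA condition.
    """
    alpha_bars = torch.cumprod(1.0 - betas, dim=0)
    t = torch.randint(0, len(betas), (x0.shape[0],))  # random timestep per sample
    a_bar = alpha_bars[t].view(-1, 1, 1, 1)
    eps = torch.randn_like(x0)
    # composing Eq. (3) over t steps: x_t = sqrt(a_bar) x_0 + sqrt(1 - a_bar) eps
    xt = torch.sqrt(a_bar) * x0 + torch.sqrt(1.0 - a_bar) * eps
    loss = F.mse_loss(eps_model(xt, t, y), eps)  # noise-prediction objective
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```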
The loss function originates from the Fisher divergence (Song and Ermon, 2019; Song et al., 2021; Nai et al., 2024), but is equivalently used as a variant of the mean squared error between the predicted and actual noise, conditioned on y, $L(\theta) = \mathbb{E}_{x_0, \epsilon, y}\left[ \|\epsilon - \epsilon_\theta(x_t, t, y)\|^2 \right]$, (5) where $\epsilon_\theta$ represents the output of the UNet as the prediction of the noise used to generate $x_t$ from $x_{t-1}$. To improve the representation ability, we have introduced multi-head attention modules into the UNet architecture (Vaswani et al., 2017). After training, the conditional diffusion model (hereafter, CloudDiff) is capable of generating multiple samples simultaneously. In our tests, we generate 30 samples per evaluation instance. These samples are reminiscent of the ensemble members used in numerical weather prediction's dynamical models, which employ large numbers of members for ensemble predictions (Li et al., 2024). Furthermore, we conduct comparative analyses between the CloudDiff and established deterministic data-driven methods. For this purpose, the study uses a supervised learning approach with a UNet architecture (Trebing et al., 2021), referred to as the deterministic model, as the benchmark. This method is specifically applied to the tasks of super-resolution retrieval of cloud properties and cloud identification, serving as a baseline for performance comparison. 2.2.2. Performance evaluation The CloudDiff serves as a super-resolution approach that requires an appropriate evaluation scheme. Although intuitive, sample-by-sample comparisons cannot fully demonstrate the effectiveness of the super-resolution technique. To obtain a comprehensive performance evaluation, we collect MODIS labels for assessing the quality of the generated cloud products. Consequently, we employ the Mean Absolute Error (MAE) and Root Mean Square Error (RMSE) as metrics, allowing for a quantitative assessment of the model's performance in enhancing spatial resolution. These metrics, commonly used in cloud property retrieval (Wang et al., 2022; Zhao et al., 2023), are defined as follows, $\mathrm{MAE} = \frac{1}{N N_p} \sum_{i=1}^{N} \sum_{j=1}^{N_p} |x_{i,j} - \hat{x}_{i,j}|$, (6) $\mathrm{RMSE} = \sqrt{\frac{1}{N N_p} \sum_{i=1}^{N} \sum_{j=1}^{N_p} (x_{i,j} - \hat{x}_{i,j})^2}$, (7) where N represents the number of samples, $x_{i,j}$ denotes the values from the MODIS cloud products, and $\hat{x}_{i,j}$ represents the super-resolution retrieved cloud products. $N_p$ indicates the number of pixels for each sample, and j labels the index of the pixels. It should be noted that a more accurate super-resolution model will have a smaller RMSE and MAE. 3. Results 3.1. Case study We begin our study with a case analysis focusing on Typhoon Hato (No. 1713) over the offshore areas of China to evaluate the performance of the CloudDiff and comprehend its uncertainty. Typhoon Hato developed in the northwest Pacific Ocean at 06:00 UTC on August 20, 2017, and progressively intensified. By 01:00 UTC on August 23, it had escalated to a severe typhoon, peaking at Category 16 with maximum sustained winds of 52 m/s. It made landfall near Zhuhai City, Guangdong Province, China, around 04:50 UTC on August 23 as a severe typhoon, causing substantial devastation in southern China. On that day, the Terra satellite passed over the coastal Zhuhai area around 02:50 UTC; thus, our analysis primarily focused on evaluating the retrieved COT, CER, CTH, and CLP at this specific time. 
The analysis covered the typhoon area between 19.78°N–22.32°N and 111.68°E–114.22°E, corresponding to a grid size of 256×256. Figure 2 presents the various cloud properties generated by the CloudDiff across 30 samples, together with the grid points where the MODIS cloud properties were not captured by the samples. Since all 30 CLP samples indicated ice clouds within the study area, CLP results are not displayed. It is observed that the cloud properties generated by different samples vary slightly but generally reflect the typhoon's morphology accurately. Despite variations in COT values among the samples and differing degrees of overestimation and underestimation in the typhoon's cloud wall, they accurately estimated the optical thickness at the typhoon eye. Notably, underestimation occurred for COT values over 90 at about 16.03% of the grid points, and overestimation at 1.67% of the grid points, while COT values below 60 were well retrieved. Regarding CER, some samples did not accurately represent the CER, generally overestimating (9.68%, mainly around the typhoon eye) and underestimating (12.49%, mainly in the typhoon's cloud wall). Additionally, samples underestimated CTH to various extents, particularly on the west and southwest sides of the typhoon eye, with a total underestimation of 30.41% in CTH and a mere 0.63% overestimation. Figure 2: Cloud property retrieval in the Typhoon Hato region centering around 21.8°N, 113.8°E at 0250 UTC on August 23, 2017, conducted using the CloudDiff. The columns represent samples and grid points where MODIS cloud properties are not captured by samples. Underestimation and overestimation are indicated by black squares and green 'x' marks, respectively. The background is colored based on the MOD06 cloud products. To evaluate the performance and uncertainty of the CloudDiff, we compared the cloud properties with those from the deterministic model (Fig. 3). The results show that individual samples produce sharper images with more local detail of COT, CER, and CTH than the ensemble mean (which appears blurrier). The deterministic model's results are blurrier than the ensemble mean and also lack detail. Regarding COT, compared to the MODIS cloud products, the sample underestimated the COT in the typhoon eye region and overestimated areas with COT <90. The ensemble mean (the mean values of 30 samples) also overestimated the extent of COT <90 but reported lower values than the single sample, somewhat correcting the single sample's underestimation of COT in the typhoon eye region. The standard deviation of the 30 samples, which can denote the retrieval uncertainty, indicates large errors in the estimates of COT in the typhoon's cloud wall, mainly because most samples overestimated the COT in this area (see Fig. 2). The deterministic model not only overestimated the extent of COT >90 (with lower internal values) but also underestimated the optical thickness on the western side of the typhoon eye. Both the single sample and the ensemble mean, as well as the deterministic model, inaccurately retrieved areas with CER >35 µm and overestimated the CER in the typhoon eye area. However, the CloudDiff exhibited smaller biases in CER retrievals compared to the deterministic model, with standard deviations mostly below 6 µm across most regions, indicating small uncertainty. Regarding CTH, the CloudDiff exhibits minimal uncertainty, with standard deviations generally below 1 km across most regions. 
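For reference, the per-pixel ensemble mean and standard deviation used throughout this comparison follow directly from stacking the generated members; a minimal sketch is given below, where the array name `samples` is illustrative rather than from the paper's code.

```python
import numpy as np

def ensemble_stats(samples: np.ndarray):
    """Per-pixel ensemble mean and spread from stacked CloudDiff members.

    samples: array of shape (30, 256, 256), one retrieval (e.g., COT) per member.
    The mean serves as the best single estimate; the std denotes retrieval uncertainty.
    """
    return samples.mean(axis=0), samples.std(axis=0)
```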
Compared to MODIS, the ensemble mean more accurately represented CTH in the southern part of the typhoon eye than individual samples, but it underestimated areas with CTH greater than 16 km and the CTH in the typhoon eye. The deterministic model also underestimated CTH greater than 16 km and the CTH in the typhoon eye. Additionally, the deterministic model underestimated CTH at the image edges. Figure 3: MOD06 cloud products and retrieved cloud properties in the Typhoon Hato region at 0250 UTC on August 23, 2017. The columns are the MOD06 cloud products, a single sample, the ensemble mean, the deterministic model, and the standard deviation (std). Moreover, both the ensemble mean and the deterministic model accurately retrieved CLP (not shown), consistent with the MODIS cloud classification results. Overall, the super-resolution cloud property retrievals based on the CloudDiff proved superior to those from the deterministic model, providing sharper and more localized details of 1 km cloud properties during the typhoon event. Using the 30 samples generated by the CloudDiff, we computed probability estimates for various thresholds of the cloud property estimates as well as cloud phase probability results (Fig. 4), which the deterministic model cannot provide. Based on the thresholds provided by the International Satellite Cloud Climatology Project (ISCCP) for the COT and CTH associated with cloud types, we computed probability estimates for COT (Fig. 4b,c,d) and CTH (Fig. 4j,k,l) at similar thresholds to those in ISCCP. The results indicate that the probability estimates from the CloudDiff are close to the MODIS data, with probabilities exceeding 80% for COT in the (3.6, 23] range and over 90% for CTH above 6.4 km. Following the ISCCP cloud classifications, the predominant cloud types in the typhoon eye and its southwestern sea regions are cirrostratus, while other areas feature deep convection clouds. For CER, thresholds of 20 µm and 40 µm were selected for probability estimation (Fig. 4f,g,h), revealing that the CloudDiff's CER estimates primarily fall within the (20, 40] range, with very low probabilities for CER in the (0, 20] and CER >40 µm ranges. In comparison to MODIS, the CloudDiff tends to overestimate CER in the typhoon eye and underestimate CER over the western land areas of the typhoon eye. Furthermore, the CloudDiff's probability estimates for clouds classified as ice clouds in the study area exceed 99% (not shown), aligning well with MODIS. Overall, through probabilistic estimation, we can better ascertain the range of cloud property values and cloud phase, evaluate the uncertainty in cloud property retrieval and identification, and enhance the accuracy of super-resolution retrievals. 
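Such threshold probabilities are simply per-pixel ensemble fractions; a minimal sketch, assuming a (members, height, width) sample stack and the ISCCP-style intervals used above (all variable names are illustrative):

```python
import numpy as np

def interval_probability(samples: np.ndarray, low: float, high: float = np.inf):
    """Percent of ensemble members whose retrieval falls in (low, high] per pixel."""
    in_interval = (samples > low) & (samples <= high)
    return 100.0 * in_interval.mean(axis=0)

# e.g., probability that COT falls in the cirrostratus-like ISCCP interval (3.6, 23]
# p_cot_cs = interval_probability(cot_samples, 3.6, 23.0)
```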
Figure 4: The probability estimates for cloud properties in the Typhoon Hato region at 0250 UTC on August 23, 2017. (a), (e), and (i) show the MODIS COT, CER, and CTH, respectively. (b-d) present the probability estimates of COT within the threshold ranges (0, 3.6], (3.6, 23], and COT >23. (f-h) display the probability estimates of CER for the ranges (0, 20], (20, 40], and CER >40 µm. (j-l) show the probability estimates of CTH for the ranges (0, 3.2], (3.2, 6.4], and CTH >6.4 km. 3.2. Overall evaluation We evaluated the overall performances of the models using data from the test set. We employed the MAE and RMSE metrics to evaluate the cloud properties. A comparative analysis was conducted to investigate how the number of samples affects the super-resolution retrieval performance. This analysis included ensemble means computed from 1 to 30 samples. Additionally, we compared these results with those from the deterministic model. Figure 5 illustrates the MAE and RMSE comparisons between the MODIS cloud products and the super-resolution retrieval results. Figure 5: The performance evaluation of cloud properties. Skill metrics were calculated between the CloudDiff/deterministic model and the MODIS cloud products; panels (a-c) show the MAE and panels (d-f) the RMSE for COT, CER (µm), and CTH (km). Different sizes of circles represent ensemble sizes ranging from 1 to 30, while pentagrams indicate the deterministic model. For COT, CER, and CTH, the results indicate significantly higher MAE and RMSE values when the ensemble size is 1. As the ensemble size increases beyond five, both the MAE and RMSE of the ensemble mean gradually decrease. An interesting observation is that the improvement in super-resolution retrieval capability from 20 to 30 samples is relatively minor, suggesting that approximately 20 samples are sufficient to capture most of the high-resolution details and adequately cover the uncertainty space in the retrieval process. 
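This ensemble-size analysis amounts to scoring the mean of the first n members against MODIS with Eqs. (6)-(7); a minimal sketch, assuming illustrative arrays `samples` of shape (30, N, H, W) and `truth` of shape (N, H, W):

```python
import numpy as np

def mae(truth: np.ndarray, pred: np.ndarray) -> float:
    return float(np.abs(truth - pred).mean())            # Eq. (6)

def rmse(truth: np.ndarray, pred: np.ndarray) -> float:
    return float(np.sqrt(((truth - pred) ** 2).mean()))  # Eq. (7)

# score the ensemble mean as a function of ensemble size
# for n in (1, 5, 10, 20, 30):
#     pred = samples[:n].mean(axis=0)
#     print(n, mae(truth, pred), rmse(truth, pred))
```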
The MAE and RMSE values of the deterministic model retrieval approach those of the ensemble mean when the ensemble size is 5, and are notably higher than those observed with an ensemble size of 30. Specifically, for COT at an ensemble size of 30, the ensemble mean MAE for all clouds (water and ice) is 6.62, with an RMSE of 12.51, compared to the deterministic model results, which have an MAE of 7.45 and an RMSE of 13.48. For water clouds alone, the MAE is 6.97 and the RMSE is 12.68, with ice clouds showing slightly better performance (MAE = 6.23, RMSE = 12.32). For CER, the ensemble mean MAE for all clouds at an ensemble size of 30 is 5.87 µm, with an RMSE of 8.93 µm. Water clouds exhibit a lower MAE of 4.47 µm and RMSE of 6.62 µm, whereas ice clouds have a higher MAE of 7.48 µm and RMSE of 10.98 µm. Similarly, for CTH at the same ensemble size, the ensemble mean MAE for all clouds is 1.18 km, with an RMSE of 2.15 km. The MAE for water clouds is 0.91 km and the RMSE is 1.72 km, with ice clouds performing worse (MAE = 1.61 km, RMSE = 2.68 km). Figure 6: Confusion matrices of the CLP products between MODIS and (a) the CloudDiff and (b) the deterministic model. 'OA' is the overall accuracy. In addition, the cloud identification results were assessed. Here, we primarily compared the performance of the deterministic model with the ensemble mean results of 30 samples. The validation results demonstrate the model's capability to accurately identify true targets from the MODIS data. Figure 6 presents the CLP identification results for the ensemble mean of the CloudDiff (Fig. 6a) and the deterministic model (Fig. 6b), which categorize the targets primarily into clear sky, water clouds, and ice clouds. The CloudDiff achieves an overall accuracy (OA) of 85.89%. Specifically, it shows a retrieval accuracy for clear sky and ice clouds of 89% and 88%, respectively, and 85% for water clouds. In contrast, the deterministic model exhibits a retrieval accuracy of 87% for both clear sky and ice clouds, but a lower accuracy of 83% for water clouds, with an OA of 84.52%, which is marginally lower than that of the CloudDiff. Overall, the ensemble mean of the CloudDiff demonstrates superior performance in identifying clear sky, water clouds, and ice clouds compared to the deterministic model. In summary, the CloudDiff enables the efficient generation of realistic samples that are faithful to a broad range of resolved retrieval schemes and sufficiently diverse to cover most plausible outcomes. 4.",
+    "additional_graph_info": {
+        "graph": [
+            [
+                "Haixia Xiao",
+                "Lingxiao Wang"
+            ],
+            [
+                "Haixia Xiao",
+                "Wenwen Li"
+            ],
+            [
+                "Haixia Xiao",
+                "Bin Guo"
+            ],
+            [
+                "Lingxiao Wang",
+                "Zhuoran Yang"
+            ],
+            [
+                "Wenwen Li",
+                "Sizhe Wang"
+            ],
+            [
+                "Bin Guo",
+                "Jian Song"
+            ],
+            [
+                "Bin Guo",
+                "Duong H. Phong"
+            ],
+            [
+                "Bin Guo",
Warner" + ] + ], + "node_feat": { + "Haixia Xiao": [ + { + "url": "http://arxiv.org/abs/2405.04483v1", + "title": "CloudDiff: Super-resolution ensemble retrieval of cloud properties for all day using the generative diffusion model", + "abstract": "Clouds play a crucial role in the Earth's water and energy cycles,\nunderscoring the importance of high spatiotemporal resolution data on cloud\nphase and properties for accurate numerical modeling and weather prediction.\nCurrently, Moderate Resolution Imaging Spectroradiometer (MODIS) provides cloud\nproducts with a spatial resolution of 1 km. However, these products suffer from\na lengthy revisit cycle. This study develops a generative diffusion model\n(donated as CloudDiff) for super-resolution retrieval of high spatiotemporal\ncloud phase and properties, applicable both day and night. Leveraging 2 km\nspatial resolution Himawari-8 Advanced Himawari Imager (AHI) thermal infrared\n(TIR) radiances and viewing geometry as condition, alongside daytime MODIS\nproducts as targets, the model can generate cloud phase (CLP), cloud top height\n(CTH), cloud optical thickness (COT), and cloud effective radius (CER) at 1 km\nspatial resolution and 10-minute temporal resolution. The conditional diffusion\nmodel can generate sharper images and capture finer local features than\ndeterministic super-resolution approaches. It draws multiple samples based on\nthe underlying probability distribution, enabling retrieval uncertainty\nassessment. Evaluations show agreement between cloud phase and properties\nderived from the CloudDiff and MODIS cloud products. The ensemble mean is found\nto enhance retrieval accuracy and credibility, outperforming the deterministic\nmodel.", + "authors": "Haixia Xiao, Feng Zhang, Lingxiao Wang, Wenwen Li, Bin Guo, Jun Li", + "published": "2024-05-07", + "updated": "2024-05-07", + "primary_cat": "physics.ao-ph", + "cats": [ + "physics.ao-ph" + ], + "main_content": "Introduction Clouds are critical in the Earth\u2019s water and energy budgets (Li et al., 2005). Their influence on the radiation budget can induce either heating or cooling of the planet, contingent upon the radiative characteristics of the cloud and its altitude (Stephens et al., 1981, 1990). The significance of clouds is further underscored by variables such as cloud optical thickness (COT), cloud effective radius (CER), cloud top height (CTH), and cloud phase (CLP). These parameters profoundly impact the Earth\u2019s net radiation balance due to their distinct scattering and absorption characteristics (Fauchez et al., 2018a; Min et al., 2020; Wang et al., 2016a). Achieving an accurate representation of these optical properties remains a formidable challenge, primarily because the microscale physical processes within clouds are difficult to explicitly simulate in global numerical models (Baran, 2012; Ceppi et al., 2017; Waliser et al., 2009). Consequently, there is an urgent need to obtain cloud phase and properties with high spatial and temporal resolution. Such detailed cloud data are indispensable for a deeper understanding of atmospheric physical processes, the enhancement of data assimilation techniques, and the improvement of weather forecasting accuracy (Muskatel et al., 2021). The retrieval of cloud properties has been conducted for several decades. 
Since the 1970s, airborne measurements have been employed to retrieve COT and CER, resulting in numerous successful experimental studies (Finger et al., 2015; King, 1987; Krisna et al., 2018; Platnick et al., 1995; Twomey and Cocks, 1989). However, these campaigns incur high costs, and the temporal and spatial coverage of field observations is limited. With the advancement of satellite remote sensing technology, particularly passive sensors (geostationary and polar-orbiting satellites), researchers have increasingly utilized data from visible and near-infrared bands to retrieve cloud properties. This approach enables the characterization of cloud properties at various spatial and temporal resolutions (King et al., 1992; Menzel et al., 2008; Platnick et al., 2003; Tang et al., 2017; Zhang and Platnick, 2011; Zhuge et al., 2020), owing to the wide observational coverage provided by passive sensors. The basic physical principle behind this method is that the cloud radiances measured by the nonabsorptive channels in the visible or near-infrared wavelengths are influenced by COT, while those captured by water-absorption channels in the shortwave infrared wavelength are sensitive to the CER (Nauss and Kokhanovsky, 2011). These retrieval methods, which rely on solar radiation, are effective only for daytime scenes. However, they are not applicable to nighttime scenes and exhibit higher uncertainties in high-latitude regions and optically thin cloud scenes (Wang et al., 2016b). Thermal Infrared (TIR) retrieval algorithm, utilizing the split-window technique (Parol et al., 1991; Toshiro, 1985), offer valuable capabilities for both daytime and nighttime scene analysis. This technique retrieves COT and CER from the brightness temperature differences between two distinct channels in the infrared atmospheric windows, where gaseous absorption is minimal. Additionally, the optimal estimation methodology (Rodgers, 2000) has been implemented for the Atmospheric Infrared 2 \fSounder V6 (AIRS) and Advanced Microwave Sounding Unit (AMSU), utilizing infrared spectral data to successfully retrieve the physical and optical properties of clouds (Kahn et al., 2014, 2015). However, due to significant absorption by cloud particles in the infrared spectrum, these traditional IR-based algorithms primarily excel in retrieving optically thin cloud properties, while facing challenges in scenarios involving opaque, thick clouds (Wang et al., 2016a). Consequently, an alternative approach is necessary to provide a more comprehensive solution. The data-driven deep learning method, renowned for their proficiency in capturing the spatial variations of image features with fast computation, have been extensively applied in the cloud identification and properties retrieval (Tong et al., 2023; Zhao et al., 2023). For example, Wang et al. (2022) developed a convolutional neural network (CNN) model for the continuous cloud identification and retrieval of cloud properties (i.e., COT, CER, and CTH) throughout the diurnal cycle for the Moderate Resolution Imaging Spectroradiometer (MODIS), leveraging utilizing daytime MODIS TIR radiances alongside satellite viewing zenith angles (VZA). Additionally, employing a transfer-learning-based UNet model and MODIS/Himawari-8 cloud products, Li et al. (2023) successfully estimated the CER, COT, and CTH from Himawari-8 TIR measurements, and results showed that the model enhanced performance for optically thick clouds. 
Previous research has relied on either polar-orbiting (e.g., MODIS) or geostationary (e.g., Himawari-8 Advanced Himawari Imager) satellite sensors for cloud property estimation. While polar-orbiting satellites offer high-resolution cloud products (1 km resolution), they suffer from a lengthy revisit cycle, impacting temporal resolution. Conversely, geostationary satellites provide frequent revisits, offering high temporal resolution and continuous cloud observation (Meng et al., 2024). However, their spatial resolution is lower compared to polar-orbiting satellites. Hence, combining data from both types of satellites to achieve high spatiotemporal resolution in cloud phase and properties is a promising direction to explore. For high-impact weather events such as severe convective storms, tropical and extratropical cyclones, the underlying dynamical and thermodynamic mechanisms are complex, leading to significant uncertainties in retrieving their cloud properties. Unfortunately, current CNN/UNet retrieval methods primarily focus on deterministic modeling, which often neglects the inherent uncertainties within the data. Diffusion models, a novel category of likelihood-based models recently highlighted for generating high-quality images (Sohl-Dickstein et al., 2015; Song and Ermon, 2019), offer desirable characteristics such as distribution coverage (Ho et al., 2020). Unlike deterministic retrieval methods, diffusion models derive probability distribution functions and can generate a large number of samples (Ho et al., 2020; Ling et al., 2024; Bishop, 2024), while guaranteeing that the retrieval distribution encapsulates all plausible outcomes, thus allowing for estimating the probability density and its score. Diffusion models have proven successful in various research domains, such as computer vision for image generation and synthesis (Croitoru, 2023), precipitation nowcasting (Nai 3 \fet al., 2024), estimating the unresolved geophysical processes (Pan et al., 2023), and earth system model downscaling (Hess et al., 2024), showcasing their effectiveness in handling complex systems. The primary objective of this study is to develop a diffusion model aimed at superresolution high spatiotemporal resolution cloud optical properties and cloud phase retrieval throughout the diurnal cycle using a geostationary satellite. Leveraging the TIR channels of the Himawari-8 satellite and employing MODIS cloud products as ground truth, we have developed a generative diffusion model capable of cloud identification and retrieval of COT, CER, and CTH, characterized by high precision and enhanced spatiotemporal resolution. The efficacy of this model is evaluated against standard MODIS cloud product measurements, focusing particularly on its generalization capabilities and the uncertainty, analyzed across typhoon case studies and extended datasets. The data, methodology, and experimental details are outlined in Section 2. The performance outcomes of the model are thoroughly examined in Section 3. Lastly, Section 4 offers conclusions and discussions. 2. Data and methods 2.1. Data 2.1.1. Himawari-8 AHI Satellite Data Himawari-8, launched in October 2014, is the geostationary satellite sensor system operated by the Japan Meteorological Agency (JMA). It represents the latest iteration in the Multifunctional Transport Satellite (MTSAT) series. 
The Advanced Himawari Imager (AHI) sensor onboard Himawari-8 captures full disk images every 10 minutes across 16 spectral bands from visible to infrared wavelengths, with spatial resolutions ranging from 500 m to 2 km and temporal resolutions between 2.5 and 10 minutes, covering regions from East Asia to Australia. The TIR measurements are sensitive to optically thin clouds and are continuously obtained throughout the diurnal cycle, independent of solar geometry (Fauchez et al., 2018a). In this study, TIR radiations from Himawari-8 AHI are utilized to estimate cloud properties during both daytime and nighttime. Additionally, the VZA are employed to construct the retrieval model. Table 1 summarizes the used TIR measurements (6.95\u201313.30 \u00b5m) and VZA of Himawari-8 AHI. 2.1.2. MODIS data With the launch of NASA\u2019s Terra satellite in 1999, followed by Aqua in 2002, MODIS has emerged as one of the most indispensable satellite remote sensing platforms for Earth science research. It measures reflected solar and emitted thermal radiation across 36 spectral channels (0.42\u201314.24 \u00b5m), offering unique spectral and spatial capabilities for retrieving cloud properties (Platnick et al., 2016). The Terra-MODIS (MOD06) and Aqua-MODIS (MYD06) products, which have a spatial resolution of 1 km, are accessible through the Atmosphere Archive and Distribution System website 4 \f(https://ladsweb.modaps.eosdis.nasa.gov/). These products include cloud top properties (e.g., CTH, CLP for both day and night) and cloud optical and microphysical properties (e.g., COT, CER, daytime only). Over the years, the MODIS cloud products have demonstrated consistent high accuracy and reliable performance (King et al., 2003; Platnick et al., 2015). In this study, the daytime MODIS cloud optical and physical properties (CTH, COT, CER, and CLP) from the Level-2 cloud product (MYD06 L2 and MOD06 L2) are utilized as ground truth to develop the super-resolution retrieval model. Table 1: The Himawari-8 AHI data used for cloud parameter super-resolution retrieval. Band Number Bandwidth (\u00b5m) Central Wavelength (\u00b5m) Spatial resolution (km) Spatial resolution (minute) 9 6.89\u20137.01 6.95 10 7.26\u20137.43 7.35 11 8.44\u20138.76 8.6 12 9.54\u20139.72 9.63 2 10 13 10.3\u201310.6 10.45 14 11.1\u201311.3 11.20 15 12.2\u201312.5 12.35 16 13.20\u201313.40 13.30 VZA \u2013 \u2013 2.1.3. Data preprocessing As described above, the TIR measurements (6.95 \u00b5m, 7.35 \u00b5m, 8.60 \u00b5m, 9.60 \u00b5m, 10.45 \u00b5m, 11.20 \u00b5m, 12.35 \u00b5m, and 13.30 \u00b5m) along with the VZA of the Himawari-8 AHI serve as the inputs for the model, while the MODIS level-2 CLP, CTH, COT, and CER data are used as the targets for training the model. To optimize the model during training and enhance its accuracy, we normalized the inputs and targets. By employing min-max normalization, we scaled the input and output variables to fall within the range of 0 to 1. To cover as wide a range of the Earth\u2019s surface and viewing geometries as possible, and to accommodate seasonal variations, we collected data from January 2016 to October 2017. Specifically, data from January 2016 to May 2017 was utilized for model training, data from June to August 20, 2017 for model validation, and data from August 21, 2017, to October 2017 served as the test set. Owing to the differing spatiotemporal resolutions of the Himawari-8 AHI and MODIS cloud products, we performed spatiotemporal matching of the data. 
In this process, we selected data from both MODIS and Himawari-8 for the same regions and times, with the cloud product grid points being twice that of the TIR observations. To alleviate memory and computational demands and to accelerate the selection process for the model, 5 \fwe cropped the cloud products in the training, validation, and test sets to a size of 256\u00d7256 km, while the input TIR observations were sized at 128\u00d7128 km. Ultimately, our training set comprised 76,247 samples, with the validation and test sets containing 9,530 and 9,532 samples, respectively. 2.2. Method The diffusion model is a state-of-the-art deep learning technique that employs probabilistic denoising processes to develop generative models (Bishop, 2024). The model typically operates on the principle of simulating a gradual process of denoising, effectively reconstructing data points from a noise-like distribution. This process is modeled as a reverse Markov chain, where a data sample is initially transformed into noise through a sequence of diffusion steps and then reconstructed back into a clean sample through learned reverse transitions. In a classical set-up, the model involves iteratively applying a series of conditional Gaussian distributions, beginning from a distribution of noise p(zT) and progressively denoising it to retrieve the original data distribution p(x0). This can be succinctly represented as, p(x0) = Z \u00b7 \u00b7 \u00b7 Z p(x0|x1)p(x1|x2) \u00b7 \u00b7 \u00b7 p(xT\u22121|zT)p(zT) dx1 \u00b7 \u00b7 \u00b7 dxT\u22121dzT. (1) In each iteration, the model utilizes the noisy data from the previous step as input, subsequently refining it to a greater degree of accuracy in accordance with the data\u2019s original state. The denoising path is learned from training data, thereby enabling the model to effectively generate or reconstruct high-quality data samples. 2.2.1. Conditional diffusion model In our study, these TIR measurements and VZA variable are denoted by y which is the condition variable. The target variables, cloud products, are represented by x. The objective is to approximate the conditional distribution of x given y, using a significantly large dataset of paired samples (xi, yi). The conditional diffusion model incorporates conditioning variables into the generative process (Batzolis, 2021), allowing the model to generate data conditioned on specific information. Mathematically, this can be represented as the transition from a noise distribution p(zT) to the data distribution p(x0) conditioned on a variable y, described by, p(x0|y) = Z p(x0|zT, y)p(zT|y) dzT, (2) where, zT represents the latent variables at the final timestep, and the model iteratively refines these variables through the conditioning on y, enhancing its ability to target specific data generation tasks. As Figure 1 shows, the conditional diffusion model enables to produce cloud products given the conditions of TIR and VZA variables, making it particularly useful in scenarios where the output needs to be tailored to specific environments. In this framework, for any given y, the algorithm 6 \foutputs samples of x from x \u223cp(x0|y), where p is a learned distribution that does not adhere to any predefined probability distribution form. The forward process has the same scheme as the Denoising Diffusion Probabilistic Models(DDPMs) (Ho et al., 2020), but in the reverse process we embed the conditional variables into the UNet for modelling the conditional probability distributions (Nai et al., 2024). 
\ud835\udc650 \ud835\udc651 \ud835\udc652 ... \ud835\udc65\ud835\udc47 \ud835\udc65\ud835\udc47 \ud835\udc650 Forward Diffusion Process Reverse Diffusion Process ... \ud835\udc65\ud835\udc47\u22121 UNet UNet Condition Figure 1: The CloudDiff for super-resolution cloud identification and properties retrieval. The generated samples x are cloud products, and the conditions y includes TIR and VZA variables. In the forward process, the data x0 undergoes a series of transformations, gradually adding noise over discrete time steps T until it is converted into pure Gaussian noise xT \u2261zT. The noise addition at each timestep t is defined by a variance schedule \u03b2t, and can be described by the following stochastic differential equation, xt = p 1 \u2212\u03b2txt\u22121 + p \u03b2t\u03f5, \u03f5 \u223cN(0, I), (3) where \u03f5 represents Gaussian noise. The reverse process, where the model learns to reconstruct the original data from noise, is explicitly conditioned on y. At each step, the model estimates the original data xt\u22121 from the current noisy data xt using a neural network parameterized by {\u03b8}. This network predicts the mean \u00b5\u03b8(xt, t, y) of the distribution for xt\u22121, typically modeled as, xt\u22121 = \u00b5\u03b8(xt, t, y) + \u03c3t\u03f5, \u03f5 \u223cN(0, I), (4) where \u03c3t is a predetermined noise level (Ho et al., 2020). 7 \fThe objective of training this conditional diffusion model is to minimise the difference between the estimated xt\u22121 and its actual value. This effectively allows the model to learn the reverse of the forward diffusion process. The loss function is originally from the Fisher divergence (Song and Ermon, 2019; Song et al., 2021; Nai et al., 2024), but equivalently used as a variant of the mean squared error between the predicted and actual previous timestep values, conditioned on y, L(\u03b8) = Ex0,\u03f5,y \u0002 \u2225\u03f5 \u2212\u03f5\u03b8(xt, t, y)\u22252\u0003 , (5) where \u03f5\u03b8 represents the outputs of the UNet as the predictions of the noise used to generate xt from xt\u22121. To improve the representation ability, we have introduced the multi-head attention modules into the UNet architecture (Vaswani et al., 2017). After training, the conditional diffusion model (hereafter, CloudDiff) is capable of generating multiple samples simultaneously. In our tests, we generate 30 samples per evaluation instance. These samples are reminiscent of the ensemble members used in numerical weather prediction\u2019s dynamical models, which employ large numbers of members for ensemble predictions (Li et al., 2024). Furthermore, we conduct comparative analyses between the CloudDiff and established deterministic data-driven methods. For this purpose, the study uses a supervised learning approach with a UNet architecture (Trebing et al., 2021), referred to as the deterministic model, as the benchmark. This method is specifically applied to the tasks of super-resolution retrieval of cloud properties and cloud identification, serving as a baseline for performance comparison. 2.2.2. Performance evaluation The CloudDiff serves as a super-resolution approach that requires an appropriate evaluation scheme. Although intuitive, sample-by-sample comparisons cannot fully demonstrate the effectiveness of the super-resolution technique. To obtain a comprehensive performance evaluation, we collect MODIS labels for assessing the quality of the generated cloud products. 
For quantitative assessment of the model's performance in enhancing spatial resolution, we employ the Mean Absolute Error (MAE) and the Root Mean Squared Error (RMSE) as metrics. These metrics, commonly used in cloud properties retrieval (Wang et al., 2022; Zhao et al., 2023), are defined as follows, $\mathrm{MAE} = \frac{1}{N N_p} \sum_{i=1}^{N} \sum_{j=1}^{N_p} |x_{i,j} - \hat{x}_{i,j}|$, (6) $\mathrm{RMSE} = \sqrt{\frac{1}{N N_p} \sum_{i=1}^{N} \sum_{j=1}^{N_p} (x_{i,j} - \hat{x}_{i,j})^2}$, (7) where $N$ represents the number of samples, $x_i$ denotes the values from the MODIS cloud products, and $\hat{x}_i$ represents the super-resolution retrieved cloud products. $N_p$ indicates the number of pixels in each sample, and $j$ labels the index of the pixels. It should be noted that a more accurate super-resolution model will have smaller RMSE and MAE values. 3. Results 3.1. Case study We begin our study with a case analysis focusing on Typhoon Hato (No. 1713) over the offshore areas of China to evaluate the performance of the CloudDiff and comprehend its uncertainty. Typhoon Hato developed in the northwest Pacific Ocean at 06:00 UTC on August 20, 2017, and progressively intensified. By 01:00 UTC on August 23, it had escalated to a severe typhoon, peaking at Category 16 with maximum sustained winds of 52 m/s. It made landfall near Zhuhai City, Guangdong Province, China, around 04:50 UTC on August 23 as a severe typhoon, causing substantial devastation in southern China. On that day, the Terra satellite passed over the coastal Zhuhai area around 02:50 UTC; thus, our analysis primarily focused on evaluating the retrieved COT, CER, CTH, and CLP at this specific time. The analysis covered the typhoon area between 19.78°N–22.32°N and 111.68°E–114.22°E, corresponding to a grid size of 256×256. Figure 2 presents the various cloud properties generated by the CloudDiff across 30 samples, together with grid points where the MODIS cloud properties were not captured by the samples. Since all 30 CLP samples indicated ice clouds within the study area, CLP results are not displayed. The cloud properties generated by different samples vary slightly but generally reflect the typhoon's morphology accurately. Despite variations in COT values among the samples and differing degrees of overestimation and underestimation in the typhoon's cloud wall, they accurately estimated the optical thickness at the typhoon eye. Notably, underestimation occurred for COT values over 90 at about 16.03% of the grid points, and overestimation at 1.67% of the grid points, while COT values below 60 were well retrieved. Regarding CER, some samples did not accurately represent the CER, generally overestimating (9.68%, mainly around the typhoon eye) and underestimating (12.49%, mainly in the typhoon's cloud wall). Additionally, samples underestimated CTH to various extents, particularly on the west and southwest sides of the typhoon eye, with a total underestimation of 30.41% in CTH and a mere 0.63% overestimation. To evaluate the performance and uncertainty of the CloudDiff, we compared the cloud properties with those from the deterministic model (Fig. 3). The results show that an individual sample produces sharper images and more local details of COT, CER, and CTH than the ensemble mean (which appears blurrier). The deterministic model's results are blurrier than the ensemble mean and also lack detail; the ensemble statistics used in this comparison are sketched below.
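A minimal numpy sketch of these ensemble statistics and the metrics of Eqs. (6)-(7), computed over a stack of generated samples (array shapes are illustrative assumptions):

```python
import numpy as np

def ensemble_stats(samples):
    """samples: (n_members, H, W) stack of generated retrievals for one
    scene, e.g. 30 CloudDiff draws of COT. Returns the ensemble mean and
    the per-pixel standard deviation (the uncertainty map)."""
    return samples.mean(axis=0), samples.std(axis=0)

def mae_rmse(pred, target):
    """Eqs. (6)-(7) over all samples and pixels: pred, target of shape (N, Np)."""
    err = pred - target
    return np.abs(err).mean(), np.sqrt((err ** 2).mean())
```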
Regarding COT, compared to the MODIS cloud products, the sample underestimated the COT in the typhoon eye region and overestimated areas with COT < 90. The ensemble mean (the mean of the 30 samples) also overestimated the extent of COT < 90 but reported lower values than the single sample, somewhat correcting the single sample's underestimation of COT in the typhoon eye region.

Figure 2: Cloud properties (COT, CER, CTH) retrieved in the Typhoon Hato region centering around 21.8°N, 113.8°E at 0250 UTC on August 23, 2017, using the CloudDiff. The columns represent samples and grid points where MODIS cloud properties are not captured by the samples; underestimation and overestimation are indicated by black squares and green 'x', respectively. The background is colored based on the MOD06 cloud products.

The standard deviation of the 30 samples, which can denote the retrieval uncertainty, indicates a large error in the estimates of COT in the typhoon's cloud wall, mainly because most samples overestimated the COT in this area (see Fig. 2). The deterministic model not only overestimated the extent of COT > 90 (with lower internal values) but also underestimated the optical thickness on the western side of the typhoon eye. The single sample and the ensemble mean, as well as the deterministic model, all inaccurately retrieved areas with CER > 35 µm and overestimated the CER in the typhoon eye area. However, the CloudDiff exhibited smaller biases in CER retrievals than the deterministic model, with standard deviations mostly below 6 µm across most regions, indicating small uncertainty. Regarding CTH, the CloudDiff exhibits minimal uncertainty, with standard deviations generally below 1 km across most regions. Compared to MODIS, the ensemble mean represented CTH in the southern part of the typhoon eye more accurately than individual samples, but it underestimated areas with CTH greater than 16 km and the CTH in the typhoon eye. The deterministic model also underestimated CTH greater than 16 km and the CTH in the typhoon eye, and additionally underestimated CTH at the image edges.

Figure 3: MOD06 cloud products and retrieved cloud properties (COT, CER, CTH) in the Typhoon Hato region at 0250 UTC on August 23, 2017. The columns are the MOD06 cloud products, a sample, the ensemble mean, the deterministic model, and the standard deviation (std).

Moreover, both the ensemble mean and the deterministic model accurately retrieved CLP (not shown), consistent with the MODIS cloud classification results. Overall, the super-resolution cloud properties retrieval based on the CloudDiff proved superior to that from the deterministic model, providing sharper and more localized details of the 1 km cloud properties during the typhoon event. Using the 30 samples generated by the CloudDiff, we computed probability estimates for various thresholds of the cloud properties and cloud phase probability results (Fig. 4), which the deterministic model cannot provide.
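Such threshold probabilities are simple ensemble frequencies; a sketch (the threshold values follow the ISCCP-style bins discussed next):

```python
import numpy as np

def threshold_probability(samples, low, high):
    """Fraction of ensemble members falling in (low, high] per pixel.

    samples: (n_members, H, W), e.g. 30 CloudDiff draws of COT.
    Returns a probability map with values in [0, 1]."""
    in_bin = (samples > low) & (samples <= high)
    return in_bin.mean(axis=0)

# e.g. probability that COT lies in the ISCCP bin (3.6, 23]:
# p_map = threshold_probability(cot_samples, 3.6, 23.0)
```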
Based on the thresholds provided by the International Satellite Cloud Climatology Project (ISCCP) for COT and CTH associated with cloud types, we computed probability estimates for COT (Fig. 4b-d) and CTH (Fig. 4j-l) at thresholds similar to those of ISCCP. The results indicate that the probability estimates from the CloudDiff are consistent with the MODIS data, with probabilities generally exceeding 80%, and with the probabilities for CTH in the 3.2-6.4 km range exceeding 90%. Following the ISCCP cloud classifications, the predominant cloud types in the typhoon eye and its southwestern sea regions are cirrostratus, while other areas feature deep convection clouds. For CER, thresholds of 20 µm and 40 µm were selected for probability estimation (Fig. 4f-h), revealing that the CloudDiff's CER estimates primarily fall within the (20, 40] range, with very low probabilities for CER in the (0, 20] and CER > 40 µm ranges. In comparison to MODIS, the CloudDiff tends to overestimate CER in the typhoon eye and underestimate CER over the western land areas of the typhoon eye. Furthermore, the CloudDiff's probability estimates for clouds classified as ice clouds in the study area exceed 99% (not shown), aligning well with MODIS. Overall, through probabilistic estimation, we can better ascertain the range of cloud property values and cloud phase, evaluate the uncertainty in cloud property retrieval and identification, and enhance the accuracy of super-resolution retrievals.

Figure 4: The probability estimates for cloud properties in the Typhoon Hato region at 0250 UTC on August 23, 2017. (a, e, i) show the MODIS COT, CER, and CTH; (b-d) present the probability estimates of COT within the threshold ranges (0, 3.6], (3.6, 23], and COT > 23; (f-h) display the probability estimates of CER for the ranges (0, 20], (20, 40], and CER > 40 µm; (j-l) show the probability estimates of CTH for the ranges (0, 3.2], (3.2, 6.4], and CTH > 6.4 km.

3.2. Overall evaluation We evaluated the overall performance of the models using data from the test set, employing the MAE and RMSE metrics for the cloud properties. A comparative analysis was conducted to investigate how the number of samples affects the super-resolution retrieval performance. This analysis included ensemble means computed with 1 to 30 samples.
Additionally, we compared these results with those from the deterministic model. Figure 5 illustrates the MAE and RMSE comparisons between the MODIS cloud products and the super-resolution retrieval results.

Figure 5: The performance evaluation of cloud properties: MAE and RMSE of COT, CER (µm), and CTH (km), for all clouds, water clouds, and ice clouds. Skill metrics were calculated between the CloudDiff/deterministic model and the MODIS cloud products. Different sizes of circles represent ensemble sizes ranging from 1 to 30, while pentagrams indicate the deterministic model.

For COT, CER, and CTH, the results indicate significantly higher MAE and RMSE values when the ensemble size is 1. As the ensemble size increases beyond five, both the MAE and RMSE of the ensemble mean gradually decrease. An interesting observation is that the improvement in super-resolution retrieval capability from 20 to 30 samples is relatively minor, suggesting that approximately 20 samples are sufficient to capture most of the high-resolution details and adequately cover the uncertainty space in the retrieval process. The MAE and RMSE values of the deterministic model approach those of an ensemble of size 5, and are notably higher than those observed with an ensemble size of 30. Specifically, for COT at an ensemble size of 30, the ensemble mean MAE for all clouds (water and ice) is 6.62, with an RMSE of 12.51, compared to the deterministic model results, which have an MAE of 7.45 and an RMSE of 13.48. For water clouds alone, the MAE is 6.97 and the RMSE is 12.68, with ice clouds showing slightly better performance (MAE = 6.23, RMSE = 12.32). For CER, the ensemble mean MAE for all clouds at an ensemble size of 30 is 5.87 µm, with an RMSE of 8.93 µm. Water clouds exhibit a lower MAE of 4.47 µm and RMSE of 6.62 µm, whereas ice clouds have a higher MAE of 7.48 µm and RMSE of 10.98 µm. Similarly, for CTH at the same ensemble size, the ensemble mean MAE for all clouds is 1.18 km, with an RMSE of 2.15 km. The MAE for water clouds is 0.91 km and the RMSE is 1.72 km, with ice clouds performing worse (MAE = 1.61 km, RMSE = 2.68 km).

Figure 6: Confusion matrices of CLP products between MODIS and (a) the CloudDiff and (b) the deterministic model; 'OA' is the overall accuracy. For the CloudDiff (OA = 85.89%), the rows (MODIS: clear, water, ice) read 0.89/0.10/0.02, 0.10/0.85/0.05, and 0.02/0.10/0.88; for the deterministic model (OA = 84.52%), they read 0.87/0.11/0.02, 0.11/0.83/0.06, and 0.03/0.11/0.87.

In addition, the cloud identification results were assessed. Here, we primarily compared the performance of the deterministic model with the ensemble mean results of the 30 samples. The validation results demonstrate the model's capability to accurately identify true targets from the MODIS data. Figure 6 presents the CLP identification results for the ensemble mean of the CloudDiff (Fig. 6a) and the deterministic model (Fig. 6b), which categorize the targets primarily into clear sky, water clouds, and ice clouds. The CloudDiff achieves an overall accuracy (OA) of 85.89%. Specifically, it shows a retrieval accuracy for clear sky and ice clouds of 89% and 88%, respectively, and 85% for water clouds.
In contrast, the deterministic model exhibits a retrieval accuracy of 87% for both clear sky and ice clouds, but a lower accuracy of 83% for water clouds, with an OA of 84.52%, which is marginally lower than that of the CloudDiff. Overall, the ensemble mean of the CloudDiff demonstrates superior performance in identifying clear sky, water clouds, and ice clouds compared to the deterministic model. In summary, the CloudDiff enables the efficient generation of realistic samples that are faithful to the resolved retrievals and sufficiently diverse to cover most plausible outcomes." } ], "Lingxiao Wang": [ { "url": "http://arxiv.org/abs/2311.03578v1", "title": "Generative Diffusion Models for Lattice Field Theory", "abstract": "This study delves into the connection between machine learning and lattice\nfield theory by linking generative diffusion models (DMs) with stochastic\nquantization, from a stochastic differential equation perspective. We show that\nDMs can be conceptualized by reversing a stochastic process driven by the\nLangevin equation, which then produces samples from an initial distribution to\napproximate the target distribution. In a toy model, we highlight the\ncapability of DMs to learn effective actions. Furthermore, we demonstrate its\nfeasibility to act as a global sampler for generating configurations in the\ntwo-dimensional $\\phi^4$ quantum lattice field theory.", "authors": "Lingxiao Wang, Gert Aarts, Kai Zhou", "published": "2023-11-06", "updated": "2023-11-06", "primary_cat": "hep-lat", "cats": [ "hep-lat", "cs.LG" ], "main_content": "Introduction In lattice field theory, physical observables are obtained by approximating path integrals via summing over field configurations, traditionally done through Monte Carlo methods. However, these methods can be computationally expensive. A promising alternative lies in employing generative models, a machine learning framework, to create new configurations following the target physical distribution, thus potentially enhancing the efficiency of Monte Carlo simulations [1, 2]. Generative models fall into two main categories based on likelihood estimation methods. Implicit Maximum Likelihood Estimation (MLE) employs, for instance, Generative Adversarial Networks (GANs) to generate new configurations through a min-max game, shown effective in lattice simulations [3, 4]. Explicit MLE uses explicit probability descriptions, e.g., autoregressive models [5] and flow-based models [6-11], improving efficiency in lattice simulations without needing prepared training data. Despite being plagued by model collapse and scalability issues [12-14], flow-based models have developed rapidly and yielded achievements [2]. Recently, Diffusion Models (DMs), a new implicit MLE class but with an explicit probability description, have shown promise in generating high-quality images via stochastic processes [15], hinting at potential applications also in high-energy physics [16, 17]. This work explores the potential of DMs in generating lattice field configurations, and their intrinsic connection with stochastic quantization (SQ) [18-20] from a stochastic differential equation (SDE) perspective.
We demonstrate the efficiency and accuracy of learning effective actions with DMs in a toy model, and verify the feasibility of generating configurations in a two-dimensional $\phi^4$ lattice field theory. Our findings suggest that DMs can serve as a significant tool to address computational challenges in lattice simulations, encouraging further explorations in this direction. Related Work Langevin dynamics has been utilized in Bayesian learning as a stochastic gradient optimization [21], which introduces stochasticity into the parameter updates, thereby avoiding collapses into local minima. Recent related work introduces stochasticity into the hybrid Monte Carlo algorithm [22] and explores the correspondence between the exact renormalization group (ERG) and DMs based upon the heat equation [23]. From a flow-based model perspective, Ref. [24] designed a continuous-time normalizing flow with a velocity field inferred from the probability current of a time-dependent density that interpolates between the prior and target densities stochastically. 2 Stochastic Differential Equation In DMs, a denoising model reconstructs original data from its noisy version obtained through a diffusion process. The predetermined process of adding noise, also called the forward process, smoothens the data's probability distribution by introducing noise. The denoising model learns the inverse process, namely eliminating this noise. Once trained, it generates samples following the data distribution through a reverse diffusion process, starting with random noise and iteratively "cleaning" it until a convergent sample resembling the target data distribution is achieved.

[Figure 1 sketch: the forward diffusion process $\phi_0 \to \phi_1 \to \cdots \to \phi_T$, governed by $d\phi = f(\phi, t)dt + g(t)dw$, and the reverse denoising process $\phi_T \to \cdots \to \phi_0$, governed by $d\phi = [f(\phi, t) - g(t)^2 \nabla_\phi \log p_t(\phi)]dt + g(t)d\bar{w}$, with a UNet and the condition at each reverse step.] Figure 1: A sketch of the forward diffusion process (upper arrows) and the reverse denoising process (bottom arrows). The two stochastic processes are described by two stochastic differential equations. The target distribution is typically unknown but learnt from the initial data.

The forward process mentioned above can be introduced into the propagation of the field, $\phi_i$, via the following Markov chain, $\phi_i = \phi_{i-1} + f_{i-1} + g_{i-1} z_{i-1}$, with a total of $N$ steps ($i = 1, \cdots, N$) and random noise $z_i \sim \mathcal{N}(0, I)$. Both the random noise $z_i$ and the drift force $f_i$ have the same dimensionality as the field $\phi_i$. Given a time interval $T \equiv N\,dt$ and $N \to \infty$ ($dt \to 0$), the above forward process converges to its continuous-time limit, which follows an Ito SDE, $d\phi = f(\phi, t)dt + g(t)dw$, where $t \in [0, T]$, $w$ is the standard Wiener process, i.e., Brownian motion, $f(\phi, t)$ is the drift term, and $g(t)$ is the scalar diffusion coefficient. The forward diffusion process $\phi(t)$ can be modeled as the solution of such a generic SDE. As Figure 1 demonstrates, starting from a sample taken from the prior distribution $p_T$ and reversing the above diffusion process enables obtaining a sample from the data distribution $p_0$.
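For concreteness, a minimal Euler-Maruyama discretization of the forward SDE is sketched below; the variance-expanding choice $f \equiv 0$ used later in the text is adopted here as an assumption:

```python
import numpy as np

def forward_diffuse(phi0, g, n_steps, T=1.0, rng=np.random.default_rng()):
    """Euler-Maruyama integration of d(phi) = g(t) dw (zero drift,
    variance-expanding picture). phi0: initial field configuration
    (any array shape); g: callable t -> diffusion coefficient g(t)."""
    dt = T / n_steps
    phi = phi0.copy()
    for i in range(n_steps):
        t = i * dt
        phi += g(t) * np.sqrt(dt) * rng.standard_normal(phi.shape)
    return phi  # approximately distributed as the prior p_T
```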
Importantly, the reverse process is a diffusion process evolving backward in time, which is expressed by the following reverse SDE [25], $d\phi = [f(\phi, t) - g^2(t)\nabla_\phi \log p_t(\phi)]dt + g(t)d\bar{w}$, where the reverse time is $\bar{t} \equiv T - t$, $p_t(\phi)$ is the probability distribution at time-step $t$, $\bar{w}$ is a Wiener process in the reverse time direction, and $dt$ represents an infinitesimal negative time step. This reverse SDE can be solved once we know the drift term and diffusion coefficient of the forward SDE, and in particular $\nabla_\phi \log p_t(\phi)$ for each $t \in [0, T]$. The reverse SDEs of DMs are mathematically related to Langevin dynamics. For a concise implementation, we choose the variance-expanding picture of DMs, i.e., setting $f(\phi, t) \equiv 0$, $g(t) \equiv g_\tau$. Its Langevin equation (labeled by a new reverse time $\tau$) now reads, $\frac{d\phi}{d\tau} = g_\tau^2 \nabla_\phi \log p_\tau(\phi) + g_\tau \bar{\eta}(\tau)$, (1) where the noise term satisfies $\langle \bar{\eta}(\tau) \rangle = 0$ and $\langle \bar{\eta}(\tau)\bar{\eta}(\tau') \rangle = 2\bar{\alpha}\,\delta(\tau - \tau')$, with $\bar{\alpha}$ being the diffusion constant. Solving the reverse SDE (1) to depict denoising is difficult due to the intractable "time-dependent" drift term. A U-Net neural network is therefore used to parameterize the score function, $s_\theta(\phi, \tau)$, which estimates the drift term, $-\nabla_\phi \log p_\tau(\phi)$, in Eq. (1). The U-Net accepts time and a trajectory configuration as inputs and produces an output of the same size as the input. More details about the architecture of the U-Net can be found in Ref. [26]. 2.1 Stochastic Quantization In field theory, as an alternative quantization scheme, one can introduce SQ for real actions [18, 19], or complex Langevin dynamics for complex actions [27, 19]. Starting from a generic Euclidean path integral, $Z = \int \mathcal{D}\phi\, e^{-S_E}$, SQ introduces a fictitious time $\tau$ for the field $\phi$, whose evolution is described by Langevin dynamics, $\frac{\partial \phi(x, \tau)}{\partial \tau} = -\frac{\delta S_E[\phi]}{\delta \phi(x, \tau)} + \eta(x, \tau)$, (2) where the noise term satisfies $\langle \eta(x, \tau) \rangle = 0$ and $\langle \eta(x, \tau)\eta(x', \tau') \rangle = 2\alpha\,\delta(x - x')\delta(\tau - \tau')$, with $\alpha$ being the diffusion constant. In the long-time limit, for real actions the system reaches an equilibrium state $P_{\mathrm{eq}}(\phi) \propto \exp(-S_E(\phi)/\alpha)$, which follows from properties of the associated Fokker-Planck Hamiltonian [19]. For complex actions, there are additional criteria to consider [28]. Comparing Eqs. (1) and (2), one notices the presence of $g_\tau^2$, which rescales both the drift term and the variance of the noise, and is known as a kernel [19]. Its effect can be absorbed by rescaling time with $g^2(\tau)$, or equivalently absorbing it in the time step, $g_\tau^2 \Delta\tau$. One can then identify the drift term in Eq. (1) with the gradient of an effective DM action $S_{\mathrm{DM}}$, using $\nabla_\phi S_{\mathrm{DM}}(\phi, \tau) \equiv -\nabla_\phi \log p_\tau(\phi) \approx s_\theta(\phi, \tau)$. In the $\tau \to T$ limit, the distribution $p_{\tau=T}(\phi) \to P[\phi, T] \propto \exp(-S_{\mathrm{DM}}/\bar{\alpha})$. Upon identifying $\bar{\alpha}$ and $\alpha$, this implies that the equilibrium state from the SQ perspective can be obtained by denoising a naive distribution using the DM prescription.
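A minimal sketch of how Eq. (1) would be integrated numerically with a learned score is given below. Here `score_fn(phi, tau)` is an assumed callable approximating $\nabla_\phi \log p_\tau(\phi)$; with the sign convention above, one would pass the negated U-Net output $-s_\theta$:

```python
import numpy as np

def reverse_langevin(phi_T, score_fn, g, n_steps, T=1.0,
                     rng=np.random.default_rng()):
    """Discretization of Eq. (1): d(phi)/d(tau) = g^2 grad log p + g eta.

    phi_T: sample from the prior (pure noise); score_fn(phi, tau)
    approximates grad_phi log p_tau(phi); g: callable tau -> g_tau."""
    dtau = T / n_steps
    phi = phi_T.copy()
    for i in range(n_steps):
        tau = i * dtau
        drift = g(tau) ** 2 * score_fn(phi, tau)
        noise = g(tau) * np.sqrt(dtau) * rng.standard_normal(phi.shape)
        phi += drift * dtau + noise
    return phi  # approaches a draw from exp(-S_E / alpha) as tau -> T
```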
Concurrently, sampling from a DM is equivalent to optimizing a stochastic trajectory to approach the equilibrium state in Euclidean quantum field theory, $P_{\mathrm{eq}}[\phi] \propto \exp(-S_E/\alpha)$. This is demonstrated in the following section. 3 Numerical Results Toy Model. To demonstrate the capacity for learning effective DM actions, $S_{\mathrm{DM}}(\phi, \tau)$, defined via the relation $\nabla_\phi S_{\mathrm{DM}}(\phi, \tau) = -\nabla_\phi \log p_\tau(\phi) \approx s_\theta(\phi, \tau)$, we introduce an oversimplified 0+0-dimensional field theory, i.e., a toy model with only one degree of freedom, with the action and drift term, $S(\phi) = \frac{\mu^2}{2}\phi^2 + \frac{g}{4!}\phi^4, \quad f(\phi) = -\frac{\partial S(\phi)}{\partial \phi} = -\mu^2\phi - \frac{g}{3!}\phi^3$, (3) with parameters $\mu^2$ and $g$. We prepared 5120 configurations as training datasets in two setups: $\mu_1^2 = 1.0$, $g_1 = 0.4$ (single-well action), and $\mu_2^2 = -1.0$, $g_2 = 0.4$ (double-well action). A one-to-one neural network with time-embedding is implemented to represent the score function $s_\theta(\phi, \tau)$. After 500 epochs of training, the learned effective action $S_{\mathrm{DM}}(\phi, \tau) = \int^\phi \hat{s}_\theta(\phi', \tau)\,d\phi'$ is seen to approximate the action $S(\phi)$ in the upper panel of Fig. 2; it approaches the physical action as $\tau$ increases. We have added to $S_{\mathrm{DM}}$ a constant $\Delta S_0$, which is the difference between $\min[S(\phi)]$ and $\min[S_{\mathrm{DM}}(\phi, \tau)]$. Generally, the learned effective actions are accurate approximations in both the single- and double-well cases, around $\phi \sim 0$. In the bottom panel of Fig. 2, samples generated from the trained DM are compared with samples from the underlying theory. In this case, we utilized an Apple M2 Pro with 16GB of RAM and PyTorch for model training, achieving a total training time of 35 seconds over 500 epochs.

Figure 2: (Upper panel) The flow of the effective action, $S_{\mathrm{DM}}(\phi, \tau)$, for various values of time $0 \le \tau \le T = 1$ during the stochastic process, learned by the diffusion model as a function of $\phi$ for both the single-well (left column) and double-well (right column) actions, using the relation $\nabla_\phi S_{\mathrm{DM}}(\phi, \tau) = -\nabla_\phi \log p_\tau(\phi) \approx s_\theta(\phi, \tau)$. (Bottom panel) The first 1024 training samples (blue histograms) and 1024 generated samples (orange histograms) for both the single-well (left column) and double-well (right column) actions.

Scalar Lattice Field Theory. We consider a real scalar field in $d$ Euclidean dimensions with the dimensionless action, $S_E = \sum_x \left[ -2\kappa \sum_{\nu=1}^{d} \phi(x)\phi(x + \hat{\nu}) + (1 - 2\lambda)\phi^2(x) + \lambda\phi^4(x) \right]$, (4) where $\kappa$ is the hopping parameter, and $\lambda$ denotes the dimensionless coupling constant describing field interactions. Both parameters are positive; more details can be found in Ref. [29].

Figure 3: Generation of four independent configurations from a well-trained diffusion model in the broken phase. Each row in the figure represents a different sample, and each column represents a different time point ($\tau \in [0, 0.25, 0.5, 0.75, T = 1]$) during the denoising process.

In the broken phase ($\kappa = 0.5$, $\lambda = 0.022$), field configurations behave like large clusters. We demonstrate this clustering behavior of field configurations in a $d = 2$ dimensional case, in which it can be successfully captured by the well-trained DM.
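A small numpy sketch of evaluating the dimensionless action in Eq. (4) on a periodic 2D lattice (a direct transcription under the stated conventions, not the authors' code):

```python
import numpy as np

def phi4_action(phi, kappa, lam):
    """Dimensionless action of Eq. (4) for a field phi with periodic
    boundary conditions; np.roll implements the shift x -> x + nu-hat."""
    hopping = sum(phi * np.roll(phi, -1, axis=nu) for nu in range(phi.ndim))
    return np.sum(-2.0 * kappa * hopping
                  + (1.0 - 2.0 * lam) * phi ** 2
                  + lam * phi ** 4)

# e.g. the broken-phase parameters used in the text:
# S = phi4_action(phi, kappa=0.5, lam=0.022)
```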
Figure 3 visualizes the denoising process. The first column represents noise samples randomly drawn from the prior normal distribution, while the fifth column represents the generated samples obtained by denoising. Training set-ups and more quantitative evaluations, in both the broken phase and the symmetric phase, can be found in Ref. [26]." }, { "url": "http://arxiv.org/abs/2301.03142v1", "title": "Exploration in Model-based Reinforcement Learning with Randomized Reward", "abstract": "Model-based Reinforcement Learning (MBRL) has been widely adapted due to its\nsample efficiency. However, existing worst-case regret analysis typically\nrequires optimistic planning, which is not realistic in general. In contrast,\nmotivated by the theory, empirical study utilizes ensemble of models, which\nachieve state-of-the-art performance on various testing environments. Such\ndeviation between theory and empirical study leads us to question whether\nrandomized model ensemble guarantee optimism, and hence the optimal worst-case\nregret? This paper partially answers such question from the perspective of\nreward randomization, a scarcely explored direction of exploration with MBRL.\nWe show that under the kernelized linear regulator (KNR) model, reward\nrandomization guarantees a partial optimism, which further yields a\nnear-optimal worst-case regret in terms of the number of interactions. We\nfurther extend our theory to generalized function approximation and identified\nconditions for reward randomization to attain provably efficient exploration.\nCorrespondingly, we propose concrete examples of efficient reward\nrandomization. To the best of our knowledge, our analysis establishes the first\nworst-case regret analysis on randomized MBRL with function approximation.", "authors": "Lingxiao Wang, Ping Li", "published": "2023-01-09", "updated": "2023-01-09", "primary_cat": "stat.ML", "cats": [ "stat.ML", "cs.LG" ], "main_content": "Introduction Reinforcement learning (RL) (Sutton and Barto, 2018) aims to learn the optimal policy by iteratively interacting with the environment. Model-based reinforcement learning (MBRL) (Osband and Roy, 2014; Luo et al., 2019; Ha and Schmidhuber, 2018; Sun et al., 2019; Kaiser et al., 2020; Ayoub et al., 2020; Kakade et al., 2020) achieves such a goal by fitting the environment from the observations and obtaining the policy from the fitted environment. Incorporated with deep learning, MBRL has achieved tremendous success in real-world tasks, including video games (Ha and Schmidhuber, 2018; Kaiser et al., 2020) and control tasks (Watter et al., 2015; Williams et al., 2015; Chua et al., 2018; Hafner et al., 2019; Song and Sun, 2021). A key factor in the success of MBRL is sample efficiency. In terms of theoretical analysis, such sample efficiency is characterized by the regret analysis of MBRL. Previous analysis suggests that when incorporated with exploration strategies, MBRL enjoys a near-optimal $\tilde{O}(\sqrt{T})$ regret (Jaksch et al., 2010; Ayoub et al., 2020; Kakade et al., 2020), where $T$ is the total number of interactions with the environment. However, previous provably efficient exploration typically utilizes optimistic planning (Jaksch et al., 2010; Luo et al., 2019; Ayoub et al., 2020; Kakade et al., 2020).
Such an exploration strategy requires (i) identifying a confidence set of models $\mathcal{D}$, which captures the uncertainty in model estimation, and then (ii) conducting optimistic planning by searching for the maximal policy among all possible models within $\mathcal{D}$. The key to the success of optimistic planning is the optimism in the face of uncertainty principle (Jaksch et al., 2010). Intuitively, optimistic planning encourages the agent to explore less visited areas, hence improving the sample efficiency of the corresponding RL algorithm. While step (i) is realizable with ensemble techniques, step (ii) is in general impossible to implement, as it requires solving an optimization problem over a possibly continuous space of models $\mathcal{D}$. As an alternative, previous empirical studies (Chua et al., 2018; Pathak et al., 2017, 2019) typically borrow the idea from optimistic planning and from the study of Thompson sampling (TS) based algorithms (Osband and Roy, 2014). A common empirical approach is to utilize model ensembles to capture the uncertainty of model estimation. Such ensembles are further utilized in planning through TS (Chua et al., 2018) or bonus construction (Pathak et al., 2017, 2019). Unlike optimistic planning, such approaches typically do not have a worst-case regret guarantee. Nevertheless, they attain state-of-the-art performance in various testing environments. Such a deviation between theory and practice motivates us to pose the following question: does a randomized model ensemble guarantee optimism, and hence the optimal worst-case regret? In this paper, we provide a partial solution to the above question from reward randomization, a relatively less studied method for exploration in MBRL. We initiate our analysis under the kernelized nonlinear regulator (KNR) transition model (Mania et al., 2022; Kakade et al., 2020; Song and Sun, 2021) and known reward functions. We propose PlanEx, which conducts exploration by iteratively planning with the fitted transition model and a randomized reward function. We further show that PlanEx attains the near-optimal $\tilde{O}(\sqrt{T})$ regret. A key observation of reward randomization is a notion of partial optimism (Russo, 2019; Zanette et al., 2020), which ensures that a sufficient amount of interactions is devoted to exploration under the optimism principle. Motivated by the analysis under the KNR transition model, we extend PlanEx to general function approximation with calibrated models (Curi et al., 2020; Kidambi et al., 2021) and propose a generic design principle for reward randomization. We further propose concrete examples of valid reward randomization and demonstrate the effectiveness of reward randomization theoretically. We highlight that the proposed reward randomization method can be easily implemented based on model ensembles. In addition, the reward randomization is highly modular and can be incorporated with various SOTA baselines. Contribution. Our work provides a partial solution to the question we raised. Specifically, we investigate reward randomization and propose PlanEx. Our contributions are as follows. • We propose PlanEx, a novel exploration algorithm for MBRL with a worst-case regret guarantee that is realizable with general function parameterizations. • We show that PlanEx has near-optimal worst-case regret under the KNR dynamics. To the best of our knowledge, our analysis establishes the first worst-case regret analysis on randomized MBRL with function approximation. Related Work.
Our work is closely related to the regret analysis of MBRL and the online control problem (Osband and Roy, 2014; Luo et al., 2019; Sun et al., 2019; Lu and Roy, 2019; Ayoub et al., 2020; Kakade et al., 2020; Curi et al., 2020; Agarwal et al., 2020b; Song and Sun, 2021). Ayoub et al. (2020) propose the value-target regression (VTR) algorithm, which focuses on the aspects of the transition model that are relevant to RL. Agarwal et al. (2020b) propose FLAMBE, a provably efficient MBRL algorithm under the linear MDP setting (Jin et al., 2020; Yang and Wang, 2019). Kakade et al. (2020) propose LC3, an online control algorithm under the KNR dynamics (Mania et al., 2022) that attains the optimal worst-case regret. Both Ayoub et al. (2020) and Kakade et al. (2020) utilize optimistic planning for exploration, which is in general intractable. In contrast, we utilize reward randomization for exploration, which also attains the optimal worst-case regret and is highly tractable. To attain tractable optimistic planning, Curi et al. (2020) design HUCRL, which introduces an extra state deviation variable in the optimization for planning. In contrast, planning with randomized reward does not introduce an extra variable in the optimization. Luo et al. (2019) optimize a lower bound of the value functions, which avoids explicit uncertainty quantification. Recent works also utilize reward bonuses to attain optimistic planning (Kidambi et al., 2021; Song and Sun, 2021). Song and Sun (2021) propose PC-MLP, which constructs a bonus by estimating the policy cover (Agarwal et al., 2020a) and is computationally tractable. As a comparison, PC-MLP requires extra sampling to estimate the covariate matrix of the policy cover. As a consequence, PC-MLP does not attain the optimal $\tilde{O}(\sqrt{T})$ regret. In contrast, PlanEx does not require extra sampling to construct the bonus and can achieve the $\tilde{O}(\sqrt{T})$ regret. In addition, to attain a tractable realization, PC-MLP utilizes different features in model fitting and bonus construction, which is inconsistent with the theoretical analysis. In contrast, the implementation of PlanEx is consistent with the theoretical analysis under the calibrated model assumption. Previous works also study efficient model-free RL exploration algorithms with function approximation; see, e.g., Jiang et al. (2017); Jin et al. (2020); Du et al. (2020); Wang et al. (2020); Cai et al. (2020); Agarwal et al. (2020a); Modi et al. (2021) and references therein for this line of research. Our analysis is inspired by the recent progress in worst-case regret analysis of randomized RL algorithms (Russo, 2019; Pacchiano et al., 2020; Zanette et al., 2020; Ishfaq et al., 2021). Our optimism analysis is inspired by Russo (2019) and Zanette et al. (2020). Russo (2019) proposes the first worst-case regret analysis of the randomized least-squares value iteration (RLSVI) algorithm under the tabular setting. Zanette et al. (2020) extend the analysis of RLSVI to truncated linear function approximation under the general state space. Ishfaq et al. (2021) analyze randomized Q-learning with both linear function approximation and general function approximation. In contrast, we focus on the randomized MBRL algorithm. Pacchiano et al. (2020) analyze the worst-case regret of MBRL with both reward and transition randomization. We remark that both Pacchiano et al. (2020) and Ishfaq et al.
(2021) require drawing multiple samples for each state-action pair in planning and further maximizing over all randomized reward functions in planning. In contrast, we only require one sample at each time step, and do not need further maximization. In addition, Pacchiano et al. (2020) focus on the tabular setting with finite state and action spaces, whereas we consider the generic setting with function approximation. 2. Background 2.1. Reinforcement Learning In this paper, we model the environment by an episodic MDP $(\mathcal{S}, \mathcal{A}, H, \{r_h\}_{h\in[H]}, P)$. Here $\mathcal{S}$ and $\mathcal{A}$ are the state and action spaces, $H$ is the length of episodes, $r_h : \mathcal{S} \times \mathcal{A} \to [0, 1]$ is the bounded reward function for $h \in [H]$, and $P$ is the transition kernel, which defines the transition probability $s_{h+1} \sim P(\cdot \mid s_h, a_h)$ for all $h \in [H]$ and $(s_h, a_h) \in \mathcal{S} \times \mathcal{A}$. Interaction Procedure. An agent with a set of policies $\{\pi_h\}_{h\in[H]}$ interacts with such an environment as follows. The agent starts from a fixed initial state $s_1 \in \mathcal{S}$. Iteratively, upon reaching the state $s_h \in \mathcal{S}$, the agent takes the action $a_h = \pi_h(s_h)$. The agent then receives the reward $r_h(s_h, a_h)$. The environment transits into the next state $s_{h+1}$ according to the probability $P(\cdot \mid s_h, a_h)$. The process ends when the agent reaches the state $s_{H+1}$. To describe the expected cumulative reward, for each policy $\pi = \{\pi_h\}_{h\in[H]}$, we introduce the action-value functions $\{Q^\pi_h\}_{h\in[H]}$ defined as follows, $Q^\pi_h(s_h, a_h; \{r_h\}_{h\in[H]}, P) = \sum_{\tau=h}^{H} \mathbb{E}[r_\tau(s_\tau, a_\tau) \mid s_h, a_h, \pi], \ \forall h \in [H],\ (s_h, a_h) \in \mathcal{S} \times \mathcal{A}$, (1) where $a_\tau = \pi_\tau(s_\tau)$ and $s_{\tau+1} \sim P(\cdot \mid s_\tau, a_\tau)$ for all $\tau = h, \ldots, H$. Similarly, we define the value functions $\{V^\pi_h\}_{h\in[H]}$ as follows, $V^\pi_h(s_h; \{r_h\}_{h\in[H]}, P) = \sum_{\tau=h}^{H} \mathbb{E}[r_\tau(s_\tau, a_\tau) \mid s_h, \pi], \ \forall h \in [H],\ s_h \in \mathcal{S}$. (2) We define the optimal policy $\pi^* = \{\pi^*_h\}_{h\in[H]}$ as the maximizer of the following optimization problem, $\pi^* = \mathrm{argmax}_\pi\ V^\pi_1(s_1; \{r_h\}_{h\in[H]}, P)$. (3) Correspondingly, we denote by $V^*$ and $Q^*$ the value and action-value functions corresponding to the optimal policy $\pi^*$. The goal of reinforcement learning (RL) is to sequentially select the policies $\pi^k = \{\pi^k_h\}_{h\in[H]}$ based on the previous experiences, aiming to maximize the expected cumulative reward collected by the agent in the interaction process. Equivalently, the goal is to minimize the following regret, $R(K) = \sum_{k=1}^{K} V^*(s_1) - V^{\pi^k}(s_1)$, (4) where $K$ is the total number of interactions and $s_1$ is the fixed initial state. Intuitively, the regret $R(K)$ describes the deviation between the policies executed in the interaction process and the optimal policy. 2.2. The Online Nonlinear Control Problem We consider the online nonlinear control problem with the following transition dynamics, $s_{h+1} = f(s_h, a_h) + \epsilon$, where $\epsilon \sim \mathcal{N}(0, \sigma^2 \cdot I)$, for all $h \in [H]$ and $(s_h, a_h) \in \mathcal{S} \times \mathcal{A}$. Here the function $f : \mathcal{S} \times \mathcal{A} \to \mathcal{S}$ belongs to a Reproducing Kernel Hilbert Space (RKHS) with a known kernel, and the noise $\epsilon$ is independent across transitions. Such a transition is also known as the Kernelized Nonlinear Regulator (KNR) in previous studies (Kakade et al., 2020; Song and Sun, 2021).
In this work, we follow Mania et al. (2022); Kakade et al. (2020); Song and Sun (2021) and consider a primal version of such transition dynamics as the underlying transition dynamics for the RL problem, which is defined as follows, $s_{h+1} = f(s_h, a_h; W^*) + \epsilon$, where $\epsilon \sim \mathcal{N}(0, \sigma^2 \cdot I)$, $f(s_h, a_h; W^*) = W^*\phi(s_h, a_h)$, $\forall h \in [H],\ (s_h, a_h) \in \mathcal{S} \times \mathcal{A}$. (5) Here $\phi : \mathcal{S} \times \mathcal{A} \to \mathbb{R}^{d_\phi}$ is a known feature embedding. Meanwhile, the state space $\mathcal{S} \subseteq \mathbb{R}^{d_S}$ is a subset of the Euclidean space with dimension $d_S$, and $W^* \in \mathbb{R}^{d_S \times d_\phi}$ is the unknown true parameter of the KNR transition dynamics. Correspondingly, in the sequel, we denote by $Q^\pi(\cdot, \cdot; \{r_h\}_{h\in[H]}, W)$ and $V^\pi(\cdot; \{r_h\}_{h\in[H]}, W)$ the value functions of the policy $\pi$ under the reward functions $\{r_h\}_{h\in[H]}$ and the transition dynamics defined by the matrix $W \in \mathbb{R}^{d_S \times d_\phi}$. For the simplicity of our analysis, we fix the following scaling of features and parameters. Assumption 1 (Normalized Model). We assume that $\|\phi(s, a)\|_2 \le 1/\sqrt{H}$ for all $(s, a) \in \mathcal{S} \times \mathcal{A}$. Correspondingly, we assume that $\|W^*\|_2 = O(\sqrt{H})$, where $W^*$ is the true parameter of the KNR transition dynamics defined in (5). Similar normalization assumptions also arise in Mania et al. (2022). We remark that the scaling assumptions in Assumption 1 only affect the rate of $H$ in the regret, and are imposed for the simplicity of our analysis. 2.3. Model-based RL for Unknown Transition Dynamics In model-based RL, the agent optimizes the policy by iteratively fitting the transition dynamics based on the data collected, and conducting optimal planning on the fitted transition dynamics. For each iteration $k$, model-based RL consists of the following steps (a code sketch of the dynamics rollout in (5) is given at the end of this subsection). • (i) Model Fitting. In this step, the agent updates the parameter $W^k$ of the transition dynamics based on the replay buffer $\mathcal{D}^k$. • (ii) Planning. In this step, the agent conducts optimal planning based on the fitted parameter $W^k$ of the transition dynamics. By planning with the fitted model, the agent updates the policy $\pi^k$. • (iii) Interaction. In this step, the agent interacts with the environment with the policy $\pi^k$ and collects a trajectory $\iota^k = (s^k_1, a^k_1, \ldots, s^k_H, a^k_H, s^k_{H+1})$. The agent then updates the replay buffer by $\mathcal{D}^{k+1} = \mathcal{D}^k \cup \iota^k$. In the sequel, we raise the following assumption, which assumes that we have access to a planning oracle to handle the planning in step (ii). Assumption 2 (Planning Oracle). We assume that we have access to the oracle $\mathrm{Plan}(\cdot, \cdot, \cdot)$, which returns the optimal policy $\pi = \mathrm{Plan}(s_1, \{r_h\}_{h\in[H]}, W)$ for any input reward functions $\{r_h\}_{h\in[H]}$ and parameter $W$ of the transition dynamics. Remark 3 (Remark on Sample Complexity). In practice, the planning on the fitted environment is typically handled by deep RL algorithms (Pathak et al., 2017; Luo et al., 2019; Pathak et al., 2019; Song and Sun, 2021) or model predictive control (Williams et al., 2015; Chua et al., 2018; Kakade et al., 2020). We remark that since such planning is conducted on the fitted environment, solving the planning problem does not raise concerns about the sample complexity of the solvers. In contrast, such a sample complexity concern arises when interacting with the real environment in step (iii).
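Before moving to the algorithm, a minimal simulation sketch of the KNR dynamics in (5) for reference; the feature map `phi` is an assumed callable, and this is illustrative rather than the paper's code:

```python
import numpy as np

def knr_rollout(W, phi, policy, s1, H, sigma, rng=np.random.default_rng()):
    """Roll out one episode of the KNR dynamics (5):
    s_{h+1} = W phi(s_h, a_h) + eps, with eps ~ N(0, sigma^2 I)."""
    traj, s = [], s1
    for h in range(H):
        a = policy(s, h)
        s_next = W @ phi(s, a) + sigma * rng.standard_normal(W.shape[0])
        traj.append((s, a, s_next))
        s = s_next
    return traj
```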
We remark that the goal of exploration is to obtain a near-optimal policy with as few rounds of interaction $K$ as possible. When measured with the regret $R(K)$ defined in (4), the goal of exploration is to design algorithms that attain a regret $R(K)$ that grows as slowly as possible in terms of $K$. 3. Exploration with Randomized Reward In this section, we propose PlanEx, a provably efficient and realizable algorithm for the RL problem with KNR dynamics. In the sequel, we describe the procedure of each step in the $k$-th iteration of PlanEx. (i) Model Fitting. Given the dataset $\mathcal{D}^k = \{(s^\tau_h, a^\tau_h, s^\tau_{h+1})\}_{(h,\tau)\in[H]\times[k-1]}$, we fit the transition parameter $W^k$ by minimizing the prediction error of $s_{h+1}$ given $(s_h, a_h)$. Specifically, we minimize the following least-squares loss, $W^k \leftarrow \mathrm{argmin}_{W \in \mathbb{R}^{d_S \times d_\phi}} \sum_{h=1}^{H} \sum_{\tau=1}^{k-1} \|s^\tau_{h+1} - W\phi(s^\tau_h, a^\tau_h)\|_2^2 + \lambda \cdot \|W\|_F^2$, (6) where we denote by $\|\cdot\|_F$ the matrix Frobenius norm. The optimization in (6) has the following explicit-form solution, $W^k \leftarrow \big(\sum_{h=1}^{H} \sum_{\tau=1}^{k-1} s^\tau_{h+1}\phi(s^\tau_h, a^\tau_h)^\top\big)\Lambda_k^{-1}$, $\Lambda_k = \sum_{h=1}^{H} \sum_{\tau=1}^{k-1} \phi(s^\tau_h, a^\tau_h)\phi(s^\tau_h, a^\tau_h)^\top + \lambda I$. (7) (ii) Planning. In the planning stage, we aim to derive a policy $\pi^k$ that interacts with the environment. There are two objectives that we aim to achieve in deriving the policy $\pi^k$. (a) Firstly, the policy $\pi^k$ should properly exploit our knowledge about the environment to optimize the cumulative reward. (b) Secondly, the policy $\pi^k$ should also incorporate our uncertainty about the environment and conduct exploration of unexplored critical events. To properly balance between (a) and (b), we need to quantify our uncertainty about the environment. Such uncertainty quantification can be done by the matrix $\Lambda_k$ defined in (7). Specifically, it is known that the matrix $\Lambda_k$ defines the following confidence region, $\mathcal{G}_k = \{W \in \mathbb{R}^{d_S \times d_\phi} : \|(W - W^k)\Lambda_k^{1/2}\|_2^2 \le \beta_k\}$. For a properly set $\beta_k$, it is known that $W^* \in \mathcal{G}_k$ with high probability. Under such an observation, previous attempts (Jaksch et al., 2010; Kakade et al., 2020) attain the balance between exploitation and exploration by finding the maximizer $\pi$ of $V^\pi(s_1; \{r_h\}_{h\in[H]}, W)$ over $W \in \mathcal{G}_k$, which, however, is computationally intractable. To propose a computationally tractable alternative, previous empirical approaches utilize ensemble models to estimate the epistemic uncertainty in fitting the model with finite observations. In this work, we investigate the approach of directly incorporating the uncertainty into the reward functions. To this end, we introduce the following perturbed reward function, $r^k_{h,\xi}(s_h, a_h) = \{r_h(s_h, a_h) + \phi(s_h, a_h)^\top \xi^k_h\}^+, \ \forall h \in [H],\ (s_h, a_h) \in \mathcal{S} \times \mathcal{A}$, (8) where the noise vectors $\{\xi^k_h\}_{h\in[H]}$ are sampled independently from the Gaussian distribution $\xi^k_h \sim \mathcal{N}(0, \sigma_k^2 \cdot \Lambda_k^{-1})$. Intuitively, such noise has larger variance in regions that are less explored by the agent, and smaller variance in regions that are well explored. In addition, we clip the reward to ensure that it stays positive.
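A minimal sketch of the perturbation in (8), drawing $\xi$ from $\mathcal{N}(0, \sigma_k^2 \Lambda_k^{-1})$ via a Cholesky factor; this is illustrative, assuming `Lambda_k` is the regularized covariate matrix from (7):

```python
import numpy as np

def perturbed_reward_fns(Lambda_k, sigma_k, H, d_phi,
                         rng=np.random.default_rng()):
    """Sample xi_h ~ N(0, sigma_k^2 Lambda_k^{-1}) for h = 1..H and return
    the perturbed reward of Eq. (8): {r + phi(s,a)^T xi_h}^+."""
    # xi = sigma_k * L^{-T} z has covariance sigma_k^2 Lambda_k^{-1},
    # where Lambda_k = L L^T is the Cholesky factorization.
    L = np.linalg.cholesky(Lambda_k)
    xis = [sigma_k * np.linalg.solve(L.T, rng.standard_normal(d_phi))
           for _ in range(H)]

    def r_xi(h, r_val, phi_sa):
        return max(r_val + phi_sa @ xis[h], 0.0)  # the {.}^+ clipping
    return r_xi
```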
Upon perturbing the reward, we update the policy by planning based on the estimated transition and the perturbed reward as follows, $\pi^k = \{\pi^k_h\}_{h\in[H]} = \mathrm{Plan}(s_1, \{r^k_{h,\xi}\}_{h\in[H]}, W^k)$. (9) We remark that the reward perturbation in (8) is conducted before planning. The perturbed reward defined in (8) is fixed throughout the planning stage. We summarize PlanEx in Algorithm 1.

Algorithm 1 Planning with Randomized Reward
Require: Dataset $\mathcal{D}$, rewards $\{r_h\}_{h\in[H]}$.
1: Initialization: Set $\Lambda_1 = \lambda \cdot I$.
2: for $k = 1, 2, \ldots, K$ do
3: Generate a set of independent noise vectors $\xi^k_h \sim \mathcal{N}(0, \sigma_k^2 \cdot \Lambda_k^{-1})$ for all $h \in [H]$.
4: Obtain the perturbed rewards $r^k_{h,\xi}(s_h, a_h) = \{r_h(s_h, a_h) + \phi(s_h, a_h)^\top \xi^k_h\}^+$ for all $(s_h, a_h) \in \mathcal{S} \times \mathcal{A}$, $h \in [H]$.
5: Obtain the policy $\pi^k$ by calling the planning oracle, $\pi^k = \mathrm{Plan}(s_1, \{r^k_{h,\xi}\}_{h\in[H]}, W^k)$.
6: Execute $\pi^k$ to sample a trajectory $\tau^k = \{s^k_1, a^k_1, s^k_2, \ldots, s^k_H, a^k_H, s^k_{H+1}\}$.
7: Update the dataset $\mathcal{D}^k \leftarrow \mathcal{D}^{k-1} \cup \tau^k$.
8: Update the model and covariate matrix, $W^{k+1} \leftarrow \mathrm{argmin}_{W \in \mathbb{R}^{d_S \times d_\phi}} \sum_{h=1}^{H} \sum_{\tau=0}^{k} \|s^\tau_{h+1} - W\phi(s^\tau_h, a^\tau_h)\|_2^2 + \lambda \cdot \|W\|_2^2$, $\Lambda_{k+1} \leftarrow \Lambda_k + \sum_{h=1}^{H} \phi(s^k_h, a^k_h)\phi(s^k_h, a^k_h)^\top$.
9: end for

4. Theoretical Analysis In this section, we analyze PlanEx in Algorithm 1. Our key observations are: • the reward perturbation in PlanEx leads to optimistic planning for at least a constant proportion of the interactions, and • such partial optimism guaranteed by PlanEx is sufficient for exploration. 4.1. Partial Optimism In the sequel, we show that PlanEx enjoys partial optimism. Specifically, the following lemma holds. Lemma 4 (Partial Optimism). Under the good event $W^* \in \mathcal{G}_k = \{W \in \mathbb{R}^{d_S \times d_\phi} : \|(W - W^k)\Lambda_k^{1/2}\|_2^2 \le \beta_k\}$, for a properly selected $\sigma_k$, it holds with probability at least $\Phi(-1)$ that $V^*_1(s_1; \{r_h\}_{h\in[H]}, W^*) - V^{\pi^k}_1(s_1; \{r^k_{h,\xi}\}_{h\in[H]}, W^k) \le 0$, (10) where $\Phi(\cdot)$ is the cumulative distribution function of the standard Gaussian distribution. Proof. See §A.2 for a detailed proof. Lemma 4 ensures that at least a $\Phi(-1)$ fraction of the value function estimations in PlanEx overestimate the optimal value function $V^*_1(\cdot; \{r_h\}_{h\in[H]}, W^*)$ that we wish to obtain. As a consequence, the randomized reward in PlanEx guarantees that at least a $\Phi(-1)$ fraction of the trajectories contribute to exploration under the optimism principle. Intuitively, such optimism holds since, (i) on the one hand, the randomized Gaussian perturbation ensures that the perturbed reward has a sufficiently large probability of being larger than the true reward, and (ii) on the other hand, the good event $\mathcal{G}_k$ ensures that the value functions estimated under the true model $(\{r_h\}_{h\in[H]}, W^*)$ do not deviate too much from the value functions estimated under the current model $(\{r_h\}_{h\in[H]}, W^k)$ without perturbation. 4.2. Regret Analysis We highlight that the optimism guarantee in Lemma 4 alone does not guarantee optimal regret. To conduct reasonable exploration, in addition to optimism, we need to ensure that the overestimation induced by the perturbed reward does not deviate too far from the value functions under the true reward.
In our work, we ensure such a deviation guarantee by properly incorporating the uncertainty into the transition dynamics. More specifically, recall that we define the reward perturbation as follows, $r^k_{h,\xi}(s_h, a_h) = \{r_h(s_h, a_h) + \phi(s_h, a_h)^\top \xi^k_h\}^+, \ \forall h \in [H],\ (s_h, a_h) \in \mathcal{S} \times \mathcal{A}$, where the noise vectors $\{\xi^k_h\}_{h\in[H]}$ are sampled independently from the Gaussian distribution $\xi^k_h \sim \mathcal{N}(0, \sigma_k^2 \cdot \Lambda_k^{-1})$. Such a perturbation introduces the noise $\xi^k_h$, whose variance scales with the model uncertainty $\Lambda_k$. Such reward perturbation ensures that, with high probability, the bias in value estimation under the perturbed reward scales with the error in the transition model estimation in (7). Thus, as long as we have a reasonable model estimation, such as that minimizing the least-squares error in (7), the overestimation induced by the perturbed reward is small. Specifically, the following theorem guarantees that PlanEx has an optimal regret in $K$. Theorem 5. Let $\lambda = 1$ and $\sigma_k^2 = H^3 \cdot \beta_k / \sigma^2$ with $\beta_k$ specified in Appendix A.1. Under Assumptions 1 and 2, it holds for $K > 1/\Phi(-1)$ that $\mathbb{E}[R(K)] = O\big((d_S + d_\phi)^{3/2} \cdot H^{7/2} \cdot \log^2(K) \cdot \sqrt{K}\big)$, where the expectation is taken with respect to the randomized reward perturbation and trajectory sampling in PlanEx. Proof. See §A.4 for a detailed proof. We remark that the rate in Theorem 5 is information-theoretically optimal in the number of interactions $K$ with the environment (Jiang et al., 2017). We remark that compared with optimal planning approaches such as LC3 (Kakade et al., 2020), PlanEx suffers from extra dependencies on $H$, $d_\phi$, and $d_S$, which arise due to the random perturbation of rewards. In addition, we highlight that, compared with PC-MLP (Song and Sun, 2021), our algorithm attains the optimal $O(\sqrt{K})$ dependency with respect to $K$. Such stronger sample efficiency arises as PlanEx does not require extra sampling to compute the policy cover matrix, which is required by PC-MLP. Exploration with Model Uncertainty. We remark that high-probability optimism based on Thompson sampling typically arises in the analysis of randomized value iterations for RL (Russo, 2019; Zanette et al., 2020). In contrast, our work utilizes such an idea for model-based exploration. To understand such a counterpart in model-based exploration, we highlight that for both model-based and model-free exploration, designing provable exploration hinges on incorporating the model uncertainty into the value functions and the corresponding policy. In value-based approaches such as LSVI-UCB (Jin et al., 2020), such model uncertainty is estimated via regression of the target value functions on $s_{h+1}$ with respect to $(s_h, a_h)$, and is incorporated into the value functions as a bonus. In model-based approaches such as UCRL and its variants (Jaksch et al., 2010; Kakade et al., 2020), such model uncertainty is characterized by a confidence region of transition dynamics, and is incorporated into the value functions via optimistic planning. In addition, for algorithms that utilize a policy cover (Song and Sun, 2021; Agarwal et al., 2020a), such model uncertainty is obtained by aggregating the visitation trajectories of the current policies. Our work instantiates such an idea by directly perturbing the reward functions based on the model uncertainty, which serves as a primitive view of all these exploration algorithms.
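Pulling the pieces together, a compact sketch of one PlanEx iteration (Algorithm 1); `plan` stands for the abstract planning oracle of Assumption 2 and, like `env` and `phi`, is an assumed callable:

```python
import numpy as np

def planex_iteration(W, Lambda, A, buffer, phi, plan, env,
                     sigma_k, H, rng=np.random.default_rng()):
    """One iteration of Algorithm 1. A accumulates sum s_{h+1} phi^T and
    Lambda the regularized covariate matrix (initialized to lambda * I)."""
    d_phi = Lambda.shape[0]
    # Steps 3-4: randomized reward with xi_h ~ N(0, sigma_k^2 Lambda^{-1})
    L = np.linalg.cholesky(Lambda)
    xis = [sigma_k * np.linalg.solve(L.T, rng.standard_normal(d_phi))
           for _ in range(H)]
    r_xi = lambda h, r, f: max(r + f @ xis[h], 0.0)
    # Step 5: plan on the fitted model with the perturbed reward
    policy = plan(env.s1, r_xi, W)
    # Step 6-7: interact with the environment and store the trajectory
    traj = env.rollout(policy, H)
    buffer.extend(traj)
    # Step 8: ridge-regression update of W and the covariate matrix, Eq. (7)
    for (s, a, s_next) in traj:
        f = phi(s, a)
        A += np.outer(s_next, f)
        Lambda += np.outer(f, f)
    W = A @ np.linalg.inv(Lambda)
    return W, Lambda, A, policy
```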
5. A Generalization with General Function Approximation A key observation from the design of PlanEx is that sufficient exploration is guaranteed as long as at least a fixed proportion of iterations is dedicated to exploration with optimism. To further validate such an observation, we generalize PlanEx to general function approximation in the sequel. We summarize the algorithm in Algorithm 2. To conduct our analysis, we assume that the estimation of the transition dynamics is sufficiently accurate and satisfies the following calibrated model assumption.

Algorithm 2 Planning with Randomized Reward
Require: Rewards $\{r_h\}_{h\in[H]}$.
1: Initialization: Initialize the buffer $\mathcal{D}^0$ as an empty set. Initialize the transition dynamics $P^1$.
2: for $k = 1, 2, \ldots, K$ do
3: Generate the randomized reward $\{r^k_{h,\xi}\}_{h\in[H]}$.
4: Obtain the policy $\pi^k$ by calling the planning oracle, $\pi^k = \mathrm{Plan}(s_1, \{r^k_{h,\xi}\}_{h\in[H]}, P^k)$.
5: Execute $\pi^k$ to sample a trajectory $\tau^k = \{s^k_1, a^k_1, s^k_2, \ldots, s^k_H, a^k_H, s^k_{H+1}\}$.
6: Update the dataset $\mathcal{D}^k \leftarrow \mathcal{D}^{k-1} \cup \tau^k$.
7: Update the transition dynamics $P^{k+1}$ based on the dataset $\mathcal{D}^k$.
8: end for

Assumption 6 (Calibrated Model). Let $P^k$ be the transition dynamics estimated in the $k$-th iteration. For all $\delta > 0$ and $k \in [K]$, it holds with probability at least $1 - \delta$ that $\|P(\cdot \mid s_h, a_h) - P^k(\cdot \mid s_h, a_h)\|_1 \le \beta(\delta) \cdot \iota_k(s_h, a_h)$ for all $k \in [K]$ and $(s_h, a_h) \in \mathcal{S} \times \mathcal{A}$. Meanwhile, it holds that $\iota_k \le 1$ for all $k \in [K]$. Here the parameter $\beta(\delta)$ characterizes the variance in concentration, which typically scales with $\log(1/\delta)$. A similar assumption also arises in the analysis under general function approximation (Curi et al., 2020; Kidambi et al., 2021). In addition, we remark that such an assumption generalizes various commonly adopted parametric models, including the linear MDP model (Jin et al., 2020) and the KNR model (Kakade et al., 2020) we adopted in previous sections. Correspondingly, we propose the following complexity metric for the RL problems, $I_K = \max_{\{\mathcal{D}^k\}_{k\in[K]}} \sum_{k=1}^{K} \sum_{h=1}^{H} \iota_k^2(s^k_h, a^k_h)$. (11) Here the maximization is taken over all possible datasets $\{\mathcal{D}^k\}_{k\in[K]}$ collected by an online learning algorithm with $|\mathcal{D}^k| = H$ for all $k \in [K]$. We remark that a similar complexity metric also arises in the analysis of model-based RL with general function approximation (Curi et al., 2020; Kakade et al., 2020; Kidambi et al., 2021). We cast the following conditions on the reward randomization that ensure sufficient exploration. Condition 7 (Optimism). It holds for the randomized reward functions $\{r^k_{h,\xi}\}_{h\in[H]}$ that $\sum_{h=1}^{H} \big(r^k_{h,\xi}(s_h, a_h) - r_h(s_h, a_h)\big) \ge H \cdot \beta(\delta) \cdot \sum_{h=1}^{H} \iota_k(s_h, a_h)$, uniformly for all trajectories $\{(s_h, a_h)\}_{h\in[H]}$, with probability at least $p_0$. Condition 8 (Concentration of Rewards). It holds for all $\delta' > 0$ that $|r^k_{h,\xi} - r_h| \le C_r(\delta') \cdot \iota_k$ with probability at least $1 - \delta'$ for all $(k, h) \in [K] \times [H]$, where $\iota_k$ is defined in Assumption 6. Intuition Behind the Reward Randomization Conditions. We remark that Conditions 7 and 8 are the key factors for the success of the randomized reward in PlanEx. On the one hand, Condition 7 ensures that a constant $p_0$ proportion of the evaluations result in optimistic value functions.
Intuition Behind the Reward Randomization Conditions. Conditions 7 and 8 are the key factors behind the success of the randomized reward in PlanEx. On the one hand, Condition 7 ensures that a constant proportion $p_0$ of the evaluations results in optimistic value functions, and such optimistic value estimation allows for exploration under the optimism principle. On the other hand, the concentration condition in Condition 8 ensures that, with high probability, the value function estimated under the randomized reward does not deviate too much from that evaluated under the true reward. The following theorem upper bounds the regret of Algorithm 2 under Assumption 6.

Theorem 9 (Regret Bound) Under Assumption 6, for a randomized reward that satisfies Conditions 7 and 8, the regret of Algorithm 2 is bounded as follows,
$$\mathbb{E}\big[R(K)\big] = O\Big(\mathrm{Poly}\big(C_r(1/K), \beta(1/K), H\big) \cdot I_K \cdot \sqrt{K}\Big).$$

Proof See §B for the detailed proof.

We remark that for commonly used model parameterizations such as the linear MDP and KNR, the parameter $\beta(1/K)$ typically scales with $\log(K)$. Meanwhile, for a properly designed reward randomization scheme, the term $C_r(1/K)$ also scales with $\log(K)$. Thus, Theorem 9 shows that Algorithm 2 has a regret bound that scales with $\widetilde{O}(I_K \cdot \sqrt{K})$, which matches previous regret bounds for exploration in model-based RL.

5.1. Design of Randomized Reward

We remark that in practice, the model uncertainty $\{\iota_k\}_{k\in[K]}$ defined in Assumption 6 can be estimated based on the disagreement of ensemble models. Thus, to instantiate Algorithm 2, it remains to design a proper reward randomization scheme that satisfies Conditions 7 and 8. In what follows, we present examples of such randomized rewards.

Example 1 (Gaussian Perturbation) Let $r^k_{h,\xi}(s^k_h, a^k_h) = r(s^k_h, a^k_h) + \xi^k_h$, where $\{\xi^k_h\}_{(k,h)\in[K]\times[H]}$ are sampled independently from the Gaussian distribution $N(0, \sigma_k \cdot \iota^2_k(s^k_h, a^k_h))$. Under regularity conditions specified in §B.2, the randomized reward $\{r^k_{h,\xi}\}_{(k,h)\in[K]\times[H]}$ satisfies Conditions 7 and 8.

Example 2 (Bernoulli Perturbation) Let $r^k_{h,\xi}(s^k_h, a^k_h) = r(s^k_h, a^k_h) + \xi^k_h \cdot \sigma'_k \iota_k(s^k_h, a^k_h)$, where $\xi^k_h = 1$ with probability $1/2$ and $\xi^k_h = -1$ with probability $1/2$. For the parameters $\{\sigma'_k\}_{k\in[K]}$ specified in §B.2, the randomized reward $\{r^k_{h,\xi}\}_{(k,h)\in[K]\times[H]}$ satisfies Conditions 7 and 8.

A Comparison with Bonus-based Approaches. Our proposed randomized reward is closely related to the reward bonus for model-based RL, which arises in recent progress on exploration in model-based RL (Kidambi et al., 2021; Song and Sun, 2021). Such bonus-based approaches typically estimate the model uncertainty $\{\iota_k\}_{k\in[K]}$ and then design a reward bonus that incorporates the uncertainty estimate. Indeed, one may view the randomized rewards in Examples 1 and 2 as a randomized generalization of such a reward bonus. Specifically, both the randomized reward and the reward bonus accomplish exploration via the optimism principle. For the randomized reward, exploration is guaranteed by the partial optimism that we investigated in §4.1. In contrast, for the reward bonus, exploration is guaranteed by directly enforcing optimism with the bonus. Given the connections between the reward bonus and the randomized reward, one may be prompted to ask: why use a randomized reward? We highlight that adding a bonus is, in fact, a pessimistic approach aimed at reducing the worst-case regret. Such an approach enforces a bonus that deviates from the true reward, introducing a bias in value estimation to achieve minimal worst-case regret.
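To make Examples 1 and 2 concrete, both perturbation schemes are one-liners once the per-pair uncertainty $\iota_k(s, a)$ is available; as noted above, ensemble disagreement is one practical estimator for it. In the sketch below, the ensemble-disagreement proxy and the scale parameters are illustrative choices, not the specifications of §B.2.

```python
import numpy as np

def uncertainty_from_ensemble(models, s, a):
    # Proxy for iota_k(s, a): std. of next-state predictions across an
    # ensemble of learned dynamics models (a common heuristic, not the
    # paper's formal definition).
    preds = np.stack([m(s, a) for m in models])
    return float(preds.std(axis=0).mean())

def gaussian_perturbation(r, iota, sigma_k, rng):
    # Example 1: r + xi with xi ~ N(0, sigma_k * iota^2).
    return r + rng.normal(0.0, np.sqrt(sigma_k) * iota)

def bernoulli_perturbation(r, iota, sigma_k_prime, rng):
    # Example 2: r + xi * sigma'_k * iota, xi in {+1, -1} w.p. 1/2 each.
    xi = rng.choice([-1.0, 1.0])
    return r + xi * sigma_k_prime * iota
```

Note that both samplers are mean-centered at the true reward, which is exactly the property contrasted against bonus-based approaches in the discussion that follows.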
In comparison, randomized reward allows the perturbation to be centered around the true reward, and yields a milder deviation from the true reward. In addition, as shown in our work, such perturbation also has a worst-case regret guarantee at e O( \u221a K) order. \fEXPLORATION IN MODEL-BASED RL" + }, + { + "url": "http://arxiv.org/abs/2205.13476v2", + "title": "Embed to Control Partially Observed Systems: Representation Learning with Provable Sample Efficiency", + "abstract": "Reinforcement learning in partially observed Markov decision processes\n(POMDPs) faces two challenges. (i) It often takes the full history to predict\nthe future, which induces a sample complexity that scales exponentially with\nthe horizon. (ii) The observation and state spaces are often continuous, which\ninduces a sample complexity that scales exponentially with the extrinsic\ndimension. Addressing such challenges requires learning a minimal but\nsufficient representation of the observation and state histories by exploiting\nthe structure of the POMDP.\n To this end, we propose a reinforcement learning algorithm named Embed to\nControl (ETC), which learns the representation at two levels while optimizing\nthe policy.~(i) For each step, ETC learns to represent the state with a\nlow-dimensional feature, which factorizes the transition kernel. (ii) Across\nmultiple steps, ETC learns to represent the full history with a low-dimensional\nembedding, which assembles the per-step feature. We integrate (i) and (ii) in a\nunified framework that allows a variety of estimators (including maximum\nlikelihood estimators and generative adversarial networks). For a class of\nPOMDPs with a low-rank structure in the transition kernel, ETC attains an\n$O(1/\\epsilon^2)$ sample complexity that scales polynomially with the horizon\nand the intrinsic dimension (that is, the rank). Here $\\epsilon$ is the\noptimality gap. To our best knowledge, ETC is the first sample-efficient\nalgorithm that bridges representation learning and policy optimization in\nPOMDPs with infinite observation and state spaces.", + "authors": "Lingxiao Wang, Qi Cai, Zhuoran Yang, Zhaoran Wang", + "published": "2022-05-26", + "updated": "2024-04-01", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "cs.AI", + "cs.SY", + "eess.SY", + "stat.ML" + ], + "main_content": "Introduction Deep reinforcement learning demonstrates signi\ufb01cant empirical successes in Markov decision processes (MDPs) with large state spaces (Mnih et al., 2013, 2015; Silver et al., 2016, 2017). Such empirical successes are attributed to the integration of representation learning into reinforcement learning. In other words, mapping the state to a low-dimensional feature enables model/value learning and optimal control in a sample-e\ufb03cient manner. Meanwhile, it becomes more theoretically understood that the low-dimensional feature is the key to sample e\ufb03ciency in the linear setting (Cai et al., 2020; Jin et al., 2020b; Ayoub et al., 2020; Agarwal et al., 2020; Modi et al., 2021; Uehara et al., 2021). \u2217Northwestern University; lingxiaowang2022@u.northwestern.edu \u2020Northwestern University; qicai2022@u.northwestern.edu \u2021Yale University; zhuoran.yang@yale.edu \u00a7Northwestern University; zhaoranwang@gmail.com 1 \fIn contrast, partially observed Markov decision processes (POMDPs) with large observation and state spaces remain signi\ufb01cantly more challenging. 
Due to a lack of the Markov property, the lowdimensional feature of the observation at each step is insu\ufb03cient for the prediction and control of the future (Sondik, 1971; Papadimitriou and Tsitsiklis, 1987; Coates et al., 2008; Azizzadenesheli et al., 2016; Guo et al., 2016). Instead, it is necessary to obtain a low-dimensional embedding of the history, which assembles the low-dimensional features across multiple steps (Hefny et al., 2015; Sun et al., 2016). In practice, learning such features and embeddings requires various heuristics, e.g., recurrent neural network architectures and auxiliary tasks (Hausknecht and Stone, 2015; Li et al., 2015; Mirowski et al., 2016; Girin et al., 2020). In theory, the best results are restricted to the tabular setting (Azizzadenesheli et al., 2016; Guo et al., 2016; Jin et al., 2020a; Liu et al., 2022), which does not involve representation learning. To this end, we identify a class of POMDPs with a low-rank structure on the state transition kernel (but not on the observation emission kernel), which allows prediction and control in a sample-e\ufb03cient manner. More speci\ufb01cally, the transition admits a low-rank factorization into two unknown features, whose dimension is the rank. On top of the low-rank transition, we de\ufb01ne a Bellman operator, which performs a forward update for any \ufb01nite-length trajectory. The Bellman operator allows us to further factorize the history across multiple steps to obtain its embedding, which assembles the per-step feature. By integrating the two levels of representation learning, that is, (i) feature learning at each step and (ii) embedding learning across multiple steps, we propose a sample-e\ufb03cient algorithm, namely Embed to Control (ETC), for POMDPs with in\ufb01nite observation and state spaces. The key to ETC is balancing exploitation and exploration along the representation learning process. To this end, we construct a con\ufb01dence set of embeddings upon identifying and estimating the Bellman operator, which further allows e\ufb03cient exploration via optimistic planning. It is worth mentioning that such a uni\ufb01ed framework allows a variety of estimators (including maximum likelihood estimators and generative adversarial networks). We analyze the sample e\ufb03ciency of ETC under the future and past su\ufb03ciency assumptions. In particular, such assumptions ensure that the future and past observations are su\ufb03cient for identifying the belief state, which captures the information-theoretic di\ufb03culty of POMDPs. We prove that ETC attains an O(1/\u01eb2) sample complexity that scales polynomially with the horizon and the dimension of the feature (that is, the rank of the transition). Here \u01eb is the optimality gap. The polynomial dependency on the horizon is attributed to embedding learning across multiple steps, while polynomial dependency on the dimension is attributed to feature learning at each step, which is the key to bypassing the in\ufb01nite sizes of the observation and state spaces. Contributions. In summary, our contribution is threefold. \u2022 We identify a class of POMDPs with the low-rank transition, which allows representation learning and reinforcement learning in a sample-e\ufb03cient manner. \u2022 We propose ETC, a principled approach integrating embedding and control in the low-rank POMDP. \u2022 We establish the sample e\ufb03ciency of ETC in the low-rank POMDP with in\ufb01nite observation and state spaces. Related Work. 
Our work follows the previous studies of POMDPs. In general, solving a POMDP is intractable from both the computational and the statistical perspectives (Papadimitriou and Tsitsiklis, 2 \f1987; Vlassis et al., 2012; Azizzadenesheli et al., 2016; Guo et al., 2016; Jin et al., 2020a). Given such computational and statistical barriers, previous works attempt to identify tractable POMDPs. In particular, Azizzadenesheli et al. (2016); Guo et al. (2016); Jin et al. (2020a); Liu et al. (2022) consider the tabular POMDPs with (left) invertible emission matrices. Efroni et al. (2022) considers the POMDPs where the state is fully determined by the most recent observations of a \ufb01xed length. Cayci et al. (2022) analyze POMDPs where a \ufb01nite internal state can approximately determine the state. In contrast, we analyze POMDPs with the low-rank transition and allow the state and observation spaces to be arbitrarily large. Meanwhile, our analysis hinges on the future and past su\ufb03ciency assumptions, which only require that the density of the state is identi\ufb01ed by that of the future and past observations, respectively. In recent work, Cai et al. (2022) also utilizes the low-rank property in the transition. Nevertheless, Cai et al. (2022) assumes that the feature representation of state-action pairs is known, thus relieving the agent from feature learning. In contrast, we aim to recover the e\ufb03cient state-action representation for planning. In terms of the necessity of exploration, Azizzadenesheli et al. (2016); Guo et al. (2016) analyze POMDPs where an arbitrary policy can conduct e\ufb03cient exploration. Similarly, Cayci et al. (2022) consider POMDPs with a \ufb01nite concentrability coe\ufb03cient (Munos, 2003; Chen and Jiang, 2019), where the visitation density of an arbitrary policy is close to that of the optimal policy. In contrast, Jin et al. (2020a); Efroni et al. (2022); Cai et al. (2022) consider POMDPs where strategic exploration is necessary. In our work, we follow Jin et al. (2020a); Efroni et al. (2022); Cai et al. (2022) and design strategic exploration to attain sample e\ufb03ciency in solving the POMDPs. To learn a su\ufb03cient embedding for control, we utilize the low-rank transition of POMDPs. Our idea is motivated by the previous analysis of low-rank MDPs (Cai et al., 2020; Jin et al., 2020b; Ayoub et al., 2020; Agarwal et al., 2020; Modi et al., 2021; Uehara et al., 2021). In particular, the state transition of a low-rank MDP aligns with that in our low-rank POMDP model. Nevertheless, we remark that such states are observable in a low-rank MDP but are unobservable in POMDPs with the low-rank transition. Such unobservability makes solving a low-rank POMDP much more challenging than solving a low-rank MDP. Notation. We denote by Rd + the space of d-dimensional vectors with nonnegative entries. We denote by Lp(X) the Lp space of functions de\ufb01ned on X. We denote by \u2206(d) the space of ddimensional probability density arrays. We denote by [H] = {1, . . . , H} the index set of size H. For a linear operator M mapping from an Lp space to an Lq space, we denote by \u2225M\u2225p7\u2192q the operator norm of M. For a vector x \u2208Rd, we denote by [x]i the i-th entry of x. 
2 Partially Observable Markov Decision Process We de\ufb01ne a partially observable Markov decision process (POMDP) by the following tuple, M = (S, A, O, {Ph}h\u2208[H], {Oh}h\u2208[H], r, H, \u00b51), where H is the length of an episode, \u00b51 is the initial distribution of state s1, and S, A, O are the state, action, and observation spaces, respectively. Here Ph(\u00b7 | \u00b7, \u00b7) is the transition kernel, Oh(\u00b7 | \u00b7) is the emission kernel, and r(\u00b7) is the reward function. In each episode, the agent with the policy \u03c0 = {\u03c0h}h\u2208[H] interact with the environment as follows. The environment select an initial state s1 drawn from the distribution \u00b51. In the h-th step, the agent receives the reward r(oh) and the observation oh drawn from the observation density Oh(\u00b7 | sh), and makes the decision ah = \u03c0h(\u03c4 h 1 ) according to the policy \u03c0h, where \u03c4 h 1 = {o1, a1, . . . , ah\u22121, oh} is the interaction history. 3 \fThe environment then transits into the next state sh+1 drawn from the transition distribution Ph(\u00b7 | sh, ah). The procedure ends until the environment transits into the state sH+1. In the sequel, we assume that the action space A is \ufb01nite with capacity |A| = A. Meanwhile, we highlight that the observation and state spaces O and S are possibly in\ufb01nite. Value Functions and Learning Objective. For a given policy \u03c0 = {\u03c0h}h\u2208[H], we de\ufb01ne the following value function that captures the expected cumulative rewards from interactions, V \u03c0 = E\u03c0 \u0014 H X h=1 r(oh) \u0015 . (2.1) Here we denote by E\u03c0 the expectation taken with respect to the policy \u03c0. Our goal is to derive a policy that maximizes the cumulative rewards. In particular, we aim to derive the \u01eb-suboptimal policy \u03c0 such that V \u03c0\u2217\u2212V \u03c0 \u2264\u01eb, based on minimal interactions with the environment, where \u03c0\u2217= argmax\u03c0 V \u03c0 is the optimal policy. Notations of POMDP. In the sequel, we introduce notations of the POMDP to simplify the discussion. We de\ufb01ne ah+k\u22121 h = {ah, ah+1, . . . , ah+k\u22121}, oh+k h = {oh, oh+1, . . . , oh+k} as the sequences of actions and observations, respectively. Correspondingly, we write r(oH 1 ) = PH h=1 r(oh) as the cumulative rewards for the observation sequence oH 1 . Meanwhile, we denote by \u03c4 h+k h the sequence of interactions from the h-th step to the (h + k)-th step, namely, \u03c4 h+k h = {oh, ah, . . . , oh+k\u22121, ah+k\u22121, oh+k} = {ah+k\u22121 h , oh+k h }. Similarly, we denote by \u03c4 h+k h the sequence of interactions from the h-th step to the (h + k)-th step that includes the latest action ah+k, namely, \u03c4 h+k h = {oh, ah, . . . , oh+k, ah+k} = {ah+k h , oh+k h }. In addition, with a slight abuse of notation, we de\ufb01ne P\u03c0(\u03c4 h+k h ) = P\u03c0(oh, . . . , oh+k | ah, . . . , ah+k\u22121) = P\u03c0(oh+k h | ah+k\u22121 h ), P\u03c0(\u03c4 h+k h | sh) = P\u03c0(oh, . . . , oh+k | sh, ah, . . . , ah+k\u22121) = P\u03c0(oh+k h | sh, ah+k\u22121 h ). Extended POMDP. To simplify the discussion and notations in our work, we introduce an extension of the POMDP, which allows us to access steps h smaller than zero and larger than the length H of an episode. In particular, the interaction of an agent with the extended POMDP starts with a dummy initial state s1\u2212\u2113for some \u2113> 0. 
During the interactions, all the dummy action and observation sequences \u03c4 0 1\u2212\u2113= {o1\u2212\u2113, a1\u2212\u2113, . . . , o0, a0} leads to the same initial state distribution \u00b51 that de\ufb01nes the POMDP. Moreover, the agent is allowed to interact with the environment for k steps after observing the \ufb01nal observation oH of an episode. Nevertheless, the agent only collects the reward r(oh) at steps h \u2208[H], which leads to the same learning objective as the POMDP. In addition, we denote by [H]+ = {1\u2212\u2113, . . . , H + k} the set of steps in the extended POMDP. In the sequel, we do not distinguish between a POMDP and an extended POMDP for the simplicity of presentation. 4 \f3 A Su\ufb03cient Embedding for Prediction and Control The key of solving a POMDP is the practice of inference, which recovers the density or linear functionals of density (e.g., the value functions) of future observation given the interaction history. To this end, previous approaches (Shani et al., 2013) typically maintain a belief, namely, a conditional density P(sh = \u00b7 | \u03c4 h 1 ) of the current state given the interaction history. The typical inference procedure \ufb01rst conducts \ufb01ltering, namely, calculating the belief at (h+1)-th step given the belief at h-th step. Upon collecting the belief, the density of future observation is obtained via prediction, which acquires the distribution of future observations based on the distribution of state sh+1. In the case that maintaining a belief or conducting the prediction is intractable, previous approaches establish predictive states (Hefny et al., 2015; Sun et al., 2016), which is an embedding that is su\ufb03cient for inferring the density of future observations given the interaction history. Such approaches typically recover the \ufb01ltering of predictive representations by solving moment equations. In particular, Hefny et al. (2015); Sun et al. (2016) establishes such moment equations based on structural assumptions on the \ufb01ltering of such predictive states. Similarly, Anandkumar et al. (2012); Jin et al. (2020a) establishes a sequence of observation operators and recovers the trajectory density via such observation operators. Motivated by the previous work, we aim to construct a embedding that are both learn-able and su\ufb03cient for control. A su\ufb03cient embedding for control is the density of the trajectory, namely, \u03a6(\u03c4 H 1 ) = P(\u03c4 H 1 ). (3.1) Such an embedding is su\ufb03cient as it allows us to estimate the cumulative rewards function V \u03c0 of an arbitrary given policy \u03c0. Nevertheless, estimating such an embedding is challenging when the length H of an episode and the observation space O are large. To this end, we exploit the low-rank structure in the state transition of POMDPs. 3.1 Low-Rank POMDP Assumption 3.1 (Low-Rank POMDP). We assume that the transition kernel Ph takes the following low-rank form for all h \u2208[H]+, Ph(sh+1 | sh, ah) = \u03c8\u2217 h(sh+1)\u22a4\u03c6\u2217 h(sh, ah), where \u03c8\u2217 h : S 7\u2192Rd +, \u03c6\u2217 h : S \u00d7 A 7\u2192\u2206(d) are unknown features. Here recall that we denote by [H]+ = {1\u2212\u2113, . . . , H+k} the set of steps in the extended POMDP. Note that our low-rank POMDP assumption does not specify the form of emission kernels. In contrast, we only require the transition kernels of states to be linear in unknown features. Function Approximation. We highlight that the features in Assumption 3.1 are unknown to us. 
Correspondingly, we assume that we have access to a parameter space \u0398 that allows us to \ufb01t such features as follows. 5 \fDe\ufb01nition 3.2 (Function Approximation). We de\ufb01ne the following function approximation space F\u0398 = {F\u0398 h }h\u2208[H] corresponding to the parameter space \u0398, F\u0398 h = \b (\u03c8\u03b8 h, \u03c6\u03b8 h, O\u03b8 h) : \u03b8 \u2208\u0398 \t , \u2200h \u2208[H]+. Here, O\u03b8 h : S \u00d7 O 7\u2192R+ is an emission kernel and \u03c8\u03b8 h : S 7\u2192Rd +, \u03c6\u03b8 h : S 7\u2192\u2206(d) are features for all h \u2208[H]+ and \u03b8 \u2208\u0398. In addition, it holds that \u03c8\u03b8(\u00b7)\u22a4\u03c6\u03b8(sh, ah) de\ufb01nes a probability over sh+1 \u2208S for all h \u2208[H]+ and (sh, ah) \u2208S \u00d7 A. Here we denote by \u03c8\u03b8 h, \u03c6\u03b8 h, O\u03b8 h a parameterization of features and emission kernels. In practice, one typically utilizes linear or neural netowrk parameterization for the features and emission kernels. In the sequel, we write P\u03b8 and P\u03b8,\u03c0 as the probability densities corresponding to the transition dynamics de\ufb01ned by {\u03c8\u03b8 h, \u03c6\u03b8 h, O\u03b8 h}h\u2208[H] and policy \u03c0, respectively. We impose the following realizability assumption to ensure that the true model belongs to the parameterized function space F\u0398. Assumption 3.3 (Realizable Parameterization). We assume that there exists a parameter \u03b8\u2217\u2208\u0398, such that \u03c8\u03b8\u2217 h = \u03c8\u2217 h, \u03c6\u03b8\u2217 h = \u03c6\u2217 h, and O\u03b8\u2217 h = Oh for all h \u2208[H]. We de\ufb01ne the following forward emission operator as a generalization of the emission kernel. De\ufb01nition 3.4 (Forward Emission Operator). We de\ufb01ne the following forward emission operator U\u03b8 h : L1(S) 7\u2192L1(Ak \u00d7 Ok+1) for all h \u2208[H], (U\u03b8 hf)(\u03c4 h+k h ) = Z S P\u03b8(\u03c4 h+k h | sh) \u00b7 f(sh)dsh, \u2200f \u2208L1(S), \u2200\u03c4 h+k h \u2208Ak \u00d7 Ok+1. (3.2) Here recall that we denote by \u03c4 h+k h = {ah+k\u22121 h , oh+k h } \u2208Ak \u00d7 Ok+1 the trajectory of interactions. In addition, recall that we de\ufb01ne P\u03b8(\u03c4 k h | sh) = P\u03b8(oh+k h | sh, ah+k\u22121 h ) for notational simplicity. We remark that when applying to a belief or a density over state sh, the forward emission operator returns the density of trajectory \u03c4 h+k h of k steps ahead of the h-th step. Bottleneck Factor Interpretation of Low-Rank Transition. Recall that in Assumption 3.1, the feature \u03c6\u2217 h maps from the state-action pair (sh, ah) \u2208S \u00d7A to a d-dimensional simplex in \u2206(d). Equivalently, one can consider the low-rank transition as a latent variable model, where the next state sh+1 is generated by \ufb01rst generating a bottleneck factor qh \u223c\u03c6\u2217(sh, ah) and then generating the next state sh+1 by [\u03c8\u2217(\u00b7)]qh. In other words, the probability array \u03c6\u2217(sh, ah) \u2208\u2206(d) induces a transition dynamics from the state-action pair (sh, ah) to the bottleneck factor qh \u2208[d] as follows, Ph(qh | sh, ah) = \u0002 \u03c6\u2217 h(sh, ah) \u0003 qh, \u2200qh \u2208[d]. Correspondingly, we write Ph(sh+1 | qh) = [\u03c8\u2217 h(sh+1)]qh the transition probability from the bottleneck factor qh \u2208[d] to the state sh+1 \u2208S. See Figure 1 for an illustration of the data generating process with the bottleneck factors. Understanding Bottleneck Factor. 
Utilizing the low-rank structure of the state transition requires us to understand the bottleneck factors $\{q_h\}_{h\in[H]}$ defined by the low-rank transition. We highlight that the bottleneck factor $q_h$ is a compressed and sufficient factor for inference. In particular, the bottleneck factor $q_h$ determines the distribution of the next state $s_{h+1}$ through the feature $\psi^*_h(s_{h+1} = \cdot) = P(s_{h+1} = \cdot \mid q_h = \cdot)$.

[Figure 1: The directed acyclic graph (DAG) of a POMDP with low-rank transition. Here $\{s_h, s_{h+1}\}$, $\{o_h, o_{h+1}\}$, $a_h$, $r_h$ are the states, observations, action, and reward, respectively. We denote by $q_h$ the bottleneck factor induced by the low-rank transition, which depends on the state-action pair $(s_h, a_h)$ and determines the density of the next state $s_{h+1}$. In the DAG, observable and unobservable variables are represented by shaded and unshaded nodes, respectively. The dashed node and arrows mark the latent factor $q_h$ and its corresponding transitions, to differentiate the bottleneck factor from the state of the POMDP.]

Such a property motivates us to obtain our desired embedding by decomposing the trajectory density based on the feature set $\{\psi^*_h\}_{h\in[H]_+}$. To achieve such a decomposition, we first introduce the following sufficiency condition for all the parameterized features $\psi^\theta_h$ with $\theta \in \Theta$.

Assumption 3.5 (Future Sufficiency). We define the mapping $g^\theta_h : \mathcal{A}^k \times \mathcal{O}^{k+1} \mapsto \mathbb{R}^d$ for all parameters $\theta \in \Theta$ and $h \in [H]$ as follows,
$$g^\theta_h = \big[\mathbb{U}^\theta_h[\psi^\theta_{h-1}]_1, \ldots, \mathbb{U}^\theta_h[\psi^\theta_{h-1}]_d\big]^\top,$$
where $[\psi^\theta_{h-1}]_i$ denotes the $i$-th entry of the mapping $\psi^\theta_{h-1}$ for all $i \in [d]$. We assume for some $k > 0$ that the matrix
$$M^\theta_h = \int_{\mathcal{A}^k\times\mathcal{O}^{k+1}} g^\theta_h(\tau^{h+k}_h)\, g^\theta_h(\tau^{h+k}_h)^\top \,\mathrm{d}\tau^{h+k}_h \in \mathbb{R}^{d\times d}$$
is invertible. We denote by $M^{\theta,\dagger}_h$ the inverse of $M^\theta_h$ for all parameters $\theta \in \Theta$ and $h \in [H]$.

Intuitively, the future sufficiency condition in Assumption 3.5 guarantees that the density of the future trajectory $\tau^{h+k}_h$ captures the information of the bottleneck variable $q_{h-1}$, which further captures the belief at the $h$-th step. To see this, we have the following lemma.

Lemma 3.6 (Pseudo-Inverse of Forward Emission). We define the linear operator $\mathbb{U}^{\theta,\dagger}_h : L_1(\mathcal{A}^k \times \mathcal{O}^{k+1}) \mapsto L_1(\mathcal{S})$ for all $\theta \in \Theta$ and $h \in [H]$ as follows,
$$(\mathbb{U}^{\theta,\dagger}_h f)(s_h) = \int_{\mathcal{A}^k\times\mathcal{O}^{k+1}} \psi^\theta_{h-1}(s_h)^\top M^{\theta,\dagger}_h g^\theta_h(\tau^{h+k}_h) \cdot f(\tau^{h+k}_h)\,\mathrm{d}\tau^{h+k}_h, \qquad (3.3)$$
where $f \in L_1(\mathcal{A}^k \times \mathcal{O}^{k+1})$ is the input of the linear operator $\mathbb{U}^{\theta,\dagger}_h$ and $g^\theta_h$ is the mapping defined in Assumption 3.5. Under Assumptions 3.1 and 3.5, it holds for all $h \in [H]$, $\theta \in \Theta$, and $\pi \in \Pi$ that $\mathbb{U}^{\theta,\dagger}_h \mathbb{U}^\theta_h(P^{\theta,\pi}_h) = P^{\theta,\pi}_h$. Here $P^{\theta,\pi}_h \in L_1(\mathcal{S})$ maps each state $s_h \in \mathcal{S}$ to the probability $P^{\theta,\pi}_h(s_h)$ of visiting the state $s_h$ at the $h$-th step when following the policy $\pi$ and the model defined by the parameter $\theta$.

Proof. See §A.1 for a detailed proof.
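As an aside, the bottleneck-factor view of the low-rank transition $P_h(s_{h+1} \mid s_h, a_h) = \psi^*_h(s_{h+1})^\top \phi^*_h(s_h, a_h)$ translates directly into a two-stage sampler. The sketch below assumes a finite state space for illustration, so that $\psi$ becomes a $d \times |\mathcal{S}|$ array of conditionals; `phi_sa` and `psi` are stand-in tabular features, not the paper's unknown true features.

```python
import numpy as np

def sample_next_state(phi_sa, psi, rng):
    """Two-stage sampling from a low-rank transition.

    phi_sa : phi(s_h, a_h), a distribution over the d bottleneck factors
    psi    : array of shape (d, n_states); row q is P(s_{h+1} = . | q_h = q)
    """
    d = len(phi_sa)
    q = rng.choice(d, p=phi_sa)                   # q_h ~ phi*(s_h, a_h)
    s_next = rng.choice(psi.shape[1], p=psi[q])   # s_{h+1} ~ [psi*(.)]_{q_h}
    return q, s_next

rng = np.random.default_rng(0)
phi_sa = np.array([0.2, 0.8])                     # d = 2 bottleneck factors
psi = np.array([[0.7, 0.3, 0.0],                  # 3 states
                [0.1, 0.1, 0.8]])
print(sample_next_state(phi_sa, psi, rng))
```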
By Lemma 3.6, the forward emission operator U\u03b8 h de\ufb01ned in De\ufb01nition 3.4 has a pseudo-inverse U\u03b8,\u2020 h under the future su\ufb03ciency condition in Assumption 3.5. Thus, one can identify the belief state by inverting the conditional density of the trajectory \u03c4 h+k h given the interaction history \u03c4 h 1 . More importantly, such invertibility further allows us to decompose the desired embedding \u03a6(\u03c4 H 1 ) in (3.1) across steps, which we introduce in the sequel. 3.2 Multi-Step Embedding Decomposition via Bellman Operator To accomplish the multi-step decomposition of embedding, we \ufb01rst de\ufb01ne the Bellman operator as follows. De\ufb01nition 3.7 (Bellman Operator). We de\ufb01ne the Bellman operators B\u03b8 h(ah, oh) : L1(Ak \u00d7 Ok+1) 7\u2192L1(Ak \u00d7 Ok+1) for all (ah, oh) \u2208A \u00d7 O and h \u2208[H] as follows, \u0000B\u03b8 h(ah, oh)f \u0001 (\u03c4 h+k+1 h+1 ) = Z S P\u03b8(\u03c4 h+k+1 h | sh) \u00b7 (U\u03b8,\u2020 h f)(sh)dsh, \u2200\u03c4 h+k+1 h+1 \u2208Ak \u00d7 Ok+1. Here recall that we denote by \u03c4 h+k+1 h = {oh+k+1 h , ah+k h } and P\u03b8(\u03c4 h+k+1 h | sh) = P\u03b8(oh+k+1 h | sh, ah+k+1 h ) for notational simplicity. We call B\u03b8 h(ah, oh) in De\ufb01nition 3.7 a Bellman operator as it performs a temporal transition from the density of trajectory \u03c4 h+k h to the density of trajectory \u03c4 h+k+1 h+1 and the observation oh, given that one take action ah at the h-th step. More speci\ufb01cally, Assumption 3.5 guarantees that the density of trajectory \u03c4 h+k h identi\ufb01es the density of sh in the h-th step. The Bellman operator then performs the transition from the density of sh to the density of the trajectory \u03c4 h+k+1 h+1 and observation oh given the action ah. The following Lemma shows that our desired embedding \u03a6(\u03c4 H 1 ) can be decomposed into products of the Bellman operators de\ufb01ned in De\ufb01nition 3.7. Lemma 3.8 (Embedding Decomposition). Under Assumptions 3.1 and 3.5, it holds for all the parameter \u03b8 \u2208\u0398 that P\u03b8(\u03c4 H 1 ) = 1 Ak \u00b7 Z Ak\u00d7Ok+1 \u0002 B\u03b8 H(oH, aH) . . . B\u03b8 1(o1, a1)b\u03b8 1 \u0003 (\u03c4 H+k+1 H+1 )d\u03c4 H+k+1 H+1 . Here recall that we denote by \u03c4 H+k+1 H+1 = {aH+k H+1, oH+k+1 H+1 } the dummy future trajectory. Meanwhile, we de\ufb01ne the following initial trajectory density, b\u03b8 1(\u03c4 k 1 ) = U\u03b8 1\u00b51 = P\u03b8(\u03c4 k 1 ), \u2200\u03c4 k 1 \u2208Ak \u00d7 Ok+1. 8 \fProof. See \u00a7A.3 for a detailed proof. By Lemma 3.8, we can obtain the desired representation \u03a6(\u03c4 H 1 ) = P(\u03c4 H 1 ) based on the product of the Bellman operators. It now remains to estimate the Bellman operators across each step. In the sequel, we introduce an identity that allows us to recover the Bellman operators based on observations. Estimating Bellman Operator. In the sequel, we introduce the following notation to simplify our discussion, zh = \u03c4 h+k h = {oh, ah, . . . , ah+k\u22121, oh+k} \u2208Ak \u00d7 Ok+1, (3.4) wh\u22121 = \u03c4 h\u22121 h\u2212\u2113= {oh\u2212\u2113, ah\u2212\u2113, . . . , oh\u22121, ah\u22121} \u2208A\u2113\u00d7 O\u2113. (3.5) We \ufb01rst de\ufb01ne two density mappings that induce the identity of the Bellman Operator. We de\ufb01ne the density mapping X\u03b8,\u03c0 h : A\u2113\u00d7 O\u21137\u2192L1(Ak \u00d7 Ok+1) as follows, X\u03b8,\u03c0 h (wh\u22121) = P\u03b8,\u03c0(wh\u22121, zh = \u00b7), \u2200wh\u22121 \u2208A\u2113\u00d7 O\u2113. 
(3.6) Intuitively, the density mapping X\u03b8,\u03c0 h maps from an input trajectory wh\u22121 to the density of zh, which represents the density of k-steps interactions following the input trajectory wh\u22121. Similarly, we de\ufb01ne the density mapping Y\u03b8,\u03c0 h : A\u2113+1 \u00d7 O\u2113+1 7\u2192L1(Ak \u00d7 Ok+1) as follows, Y\u03b8,\u03c0 h (wh\u22121, ah, oh) = P\u03b8,\u03c0(wh\u22121, ah, oh, zh+1 = \u00b7), \u2200(wh\u22121, ah, oh) \u2208A\u2113+1 \u00d7 O\u2113+1 (3.7) Based on the two density mappings de\ufb01ned in (3.6) and (3.7), respectively, we have the following identity for all h \u2208[H] and \u03b8 \u2208\u0398, B\u03b8 h(ah, oh)X\u03b8,\u03c0 h (wh\u22121) = Y\u03b8,\u03c0 h (wh\u22121, ah, oh), \u2200wh\u22121 \u2208A\u2113+1 \u00d7 O\u2113+1. (3.8) See \u00a7A.2 for the proof of (3.8). We highlight that the identity in (3.8) allows us to estimate the Bellman operator B\u03b8\u2217 h (ah, oh) under the true parameter \u03b8\u2217\u2208\u0398. In particular, both X\u03b8\u2217,\u03c0 h and Y\u03b8\u2217,\u03c0 h are density mappings involving the observations and actions, and can be estimated based on observable variables from the POMDP. Upon \ufb01tting such density mappings, we can recover the Bellman operator B\u03b8\u2217 h (ah, oh) by solving the identity in (3.8). An Overview of Embedding Learning. We now summarize the learning procedure of the embedding. First, we estimate the density mappings de\ufb01ned in (3.6) and (3.7) under the true parameter \u03b8\u2217based on interaction history. Second, we estimate the Bellman operators {B\u03b8\u2217 h (ah, oh)}h\u2208[H] based on the identity in (3.8) and the estimated density mappings in the \ufb01rst step. Finally, we recover the embedding \u03a6(\u03c4 H 1 ) by assembling the Bellman operators according to Lemma 3.8. 4 Algorithm Description of ETC In the sequel, we decribe the procedure of ETC. In summary, ETC iteratively (i) interacts with the environment to collect observations, (ii) \ufb01ts the density mappings de\ufb01ned in (3.6) and (3.7), respectively, by observations, (iii) identi\ufb01es a con\ufb01dence set of parameters by \ufb01tting the Bellman equations according to (3.8), and (iv) conducts optimistic planning based on the \ufb01tted embeddings and the associated the con\ufb01dence set. 9 \fTo conduct ETC, we \ufb01rst initialize a sequence of datasets indexed by the step h \u2208[H] and the action sequences ah+k h\u2212\u2113\u2208Ak+\u2113+1, D0 h(ah+k h\u2212\u2113) = \u2205. Meanwhile, we initialize a policy \u03c00 \u2208\u03a0, where \u03a0 is the class of all deterministic policies. In the sequel, we introduce the update procedure of ETC in the t-th iterate. 4.1 Data Collection We \ufb01rst introduce the data collecting process of an agent with the policy \u03c0t\u22121 in the t-th iterate. For each of the step h \u2208[H] and the action sequence ah+k h\u2212\u2113\u2208Ak+\u2113+1, the agent \ufb01rst execute the policy \u03c0t\u22121 until the (h \u2212\u2113)-th step, and collects a sequence of actions and observations as follows, tah\u2212\u2113\u22121 1\u2212\u2113 = \bta1\u2212\u2113, . . . , tah\u2212\u2113\u22121 \t , toh\u2212\u2113 1\u2212\u2113= \bto1\u2212\u2113, . . . , toh\u2212\u2113 \t . Here we use the superscript t to denote the observations and actions acquired in the t-th iterate. 
Correspondingly, we denote by t\u03c4 h\u22121 h\u2212\u2113= {tah\u2212\u2113\u22121 1\u2212\u2113 , toh\u2212\u2113 1\u2212\u2113} the interaction history from the (h \u2212\u2113)-th step to the (h \u22121)-th step. Then, the agent execute ah+k h\u2212\u2113regardless of the observations and collect the following observation sequence, toh+k+1 h\u2212\u2113+1 = \btoh\u2212\u2113+1, . . . , toh+k+1 \t . Finally, we store the observation sequence toh+k+1 h\u2212\u2113 generated by \ufb01xing the action sequence ah+k h\u2212\u2113 into a dataset indexed by such action sequence, namely, Dt h(ah+k h\u2212\u2113) \u2190Dt\u22121 h (ah+k h\u2212\u2113) \u222a \btoh+k+1 h\u2212\u2113 \t . 4.2 Density Estimation Upon collecting the data, we follow the embedding learning procedure and \ufb01t the density mappings for the estimation of Bellman operator. In practice, various approaches are available in \ufb01tting the density by observations, including the maximum likelihood estimation (MLE), the generative adversial approaches, and the reproducing kernel Hilbert space (RKHS) density estimation. In what follows, we unify such density estimation approaches by a density estimation oracle. Assumption 4.1 (Density Estimation Oracle). We assume that we have access to a density estimation oracle E(\u00b7). Moreover, for all \u03b4 > 0 and dataset D drawn from the density p of size n following a martingale process, we assume that \u2225E(D) \u2212p\u22251 \u2264C \u00b7 p wE \u00b7 log(1/\u03b4)/n with probability at least 1 \u2212\u03b4. Here C > 0 is an absolute constant and wE is a parameter that depends on the density estimation oracle E(\u00b7). We highlight that such convergence property can be achieved by various density estimations. In particular, when the function approximation space P of E(\u00b7) is \ufb01nite, Assumption 4.1 holds for the maximum likelihood estimation (MLE) and the generative adversial approach with wE = log |P| (Geer et al., 2000; Zhang, 2006; Agarwal et al., 2020). Meanwhile, wE scales with the entropy 10 \fintegral of P endowed with the Hellinger distance if P is in\ufb01nite (Geer et al., 2000; Zhang, 2006). In addition, Assumption 4.1 holds for the RKHS density estimation (Gretton et al., 2005; Smola et al., 2007; Cai et al., 2022) with wE = poly(d), where d is rank of the low-rank transition (Cai et al., 2022). We now \ufb01t the density mappings based on the density estimation oracle. For each step h \u2208[H] and action sequence ah+k h\u2212\u2113\u2208Ak+\u2113+1, we \ufb01rst \ufb01t the density of trajectory as follows, b Pt h(\u00b7 | ah+k h\u2212\u2113) = E \u0000Dt h(ah+k h\u2212\u2113) \u0001 , where the dataset Dt h is updated based on the data collection procedure described in \u00a74.1. Meanwhile, we de\ufb01ne the following density mappings for the estimation of Bellman operators, \u0002b Xt h(\u03c4 h\u22121 h\u2212\u2113) \u0003 (\u03c4 h+k h ) = b Pt h(\u03c4 h+k h\u2212\u2113), (4.1) \u0002b Yt h(\u03c4 h h\u2212\u2113) \u0003 (\u03c4 h+k+1 h+1 ) = b Pt h(\u03c4 h+k+1 h\u2212\u2113 ). (4.2) Here recall that we de\ufb01ne the trajectories \u03c4 h h\u2212\u2113= {ah h\u2212\u2113, oh h\u2212\u2113} and \u03c4 h+k+1 h\u2212\u2113 = {ah+k h\u2212\u2113, oh+k+1 h\u2212\u2113 }. Meanwhile, we write b Pt h(\u03c4 h+k+1 h\u2212\u2113 ) = b Pt(oh+k+1 h\u2212\u2113 | ah+k h\u2212\u2113) for notational simplicity. 
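For concreteness, the density estimation oracle $\mathbb{E}(\cdot)$ of Assumption 4.1 admits a very simple instantiation when the observations are tabular: the empirical (count-based) MLE over observation sequences, sketched below. Nothing about this sketch is specific to the paper beyond the oracle interface; for a finite support, the empirical distribution attains an $L_1$ rate of the form required by Assumption 4.1, with $w_{\mathbb{E}}$ scaling with the support size.

```python
from collections import Counter

def density_oracle(dataset):
    """Empirical MLE density over a dataset of observation tuples.

    Returns p_hat with p_hat(o_seq) = count(o_seq) / |dataset|, i.e.,
    the maximum likelihood estimate over the probability simplex.
    """
    counts = Counter(tuple(seq) for seq in dataset)
    n = sum(counts.values())
    return lambda o_seq: counts.get(tuple(o_seq), 0) / n

p_hat = density_oracle([(0, 1), (0, 1), (1, 1), (0, 0)])
print(p_hat((0, 1)))  # 0.5
```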
We remark that the density mappings b Xt h and b Yt h are estimations of the density mappings de\ufb01ned in (3.6) and (3.7), respectively, under the true parameter \u03b8\u2217and the mixing policy that collects the sample. We then estimate the Bellman operators by minimizing the following objective, Lt h(\u03b8) = sup ah h\u2212\u2113\u2208A\u2113+1 Z O\u2113+1 \u2225B\u03b8 h(ah, oh)b Xt h(\u03c4 h\u22121 h\u2212\u2113) \u2212b Yt h(\u03c4 h h\u2212\u2113)\u22251doh h\u2212\u2113. (4.3) We remark that the objective de\ufb01ned in (4.3) is motivated by the identity in (3.8). In what follows, we introduce an exploration procedure based on the objective de\ufb01ned in (4.3). In addition, we acquire the estimation of initial trajectory density b bt 1(\u03c4 k 1 ) = b Pt 1(\u03c4 k 1 ) by marginalizing the dummy past trajectory \u03c4 0 1\u2212\u2113of b Pt 1. 4.3 Optimistic Planning We remark that the objective de\ufb01ned in (4.3) encapsulates the uncertainty in the estimation of the corresponding Bellman operator B\u03b8 h(ah, oh). In particular, a smaller objective Lt h(\u03b8) yields a higher con\ufb01dence that \u03b8 is close to the true parameter \u03b8\u2217. Thus, we de\ufb01ne the following con\ufb01dence set of parameters, Ct = n \u03b8 \u2208\u0398 : max \b \u2225b\u03b8 1 \u2212b bt 1\u22251, Lt h(\u03b8) \t \u2264\u03b2t \u00b7 p 1/t, \u2200h \u2208[H] o , (4.4) where \u03b2t is the tuning parameter in the t-th iterate. Meanwhile, for each parameter \u03b8 \u2208\u0398, we can estimate the embedding \u03a6\u03b8(\u03c4 H 1 ) = P\u03b8(\u03c4 H 1 ) based on the Bellman operators {B\u03b8 h}h\u2208[H] and Lemma 3.8. Such embedding further allows us to evaluate a policy as follows, V \u03c0(\u03b8) = Z OH r(oH 1 ) \u00b7 P\u03b8\u0000oH 1 | (a\u03c0)H 1 \u0001 doH 1 = Z OH r(oH 1 ) \u00b7 \u03a6\u03b8\u0000oH 1 , (a\u03c0)H 1 \u0001 doH 1 , 11 \fwhere we de\ufb01ne V \u03c0(\u03b8) as the cumulative rewards of \u03c0 in the POMDP induced by the parameter \u03b8 \u2208\u0398. Meanwhile, we de\ufb01ne (a\u03c0)H 1 = (a\u03c0 1, . . . , a\u03c0 H), where the actions a\u03c0 h are the action taken by the deterministic policy \u03c0 in the h-th step given the observations. To conduct optimistic planning, we seek for the policy that maximizes the return among all parameters \u03b8 \u2208Ct and the corresponding features. The update of policy takes the following form, \u03c0t \u2190argmax \u03c0\u2208\u03a0 max \u03b8\u2208Ct V \u03c0(\u03b8), where we denote by \u03a0 the set of all deterministic policies. We summarize ETC in Algorithm 1. Algorithm 1 Embed to Control Require: Number of iterates T. A set of tuning parameters {\u03b2t}t\u2208[T]. 1: Initialization: Set \u03c00 as a deterministic policy. Set the dataset D0 h(ah+k h\u2212\u2113) as an empty set for all (h, ah+k h\u2212\u2113) \u2208[H] \u00d7 Ak+\u2113+1. 2: for t \u2208[T] do 3: for (h, ah+k h\u2212\u2113) \u2208[H] \u00d7 Ak+\u2113+1 do 4: Start a new episode from the (1 \u2212\u2113)-th step. 5: Execute policy \u03c0t\u22121 until the (h \u2212\u2113)-th step and receive the observations toh\u2212\u2113 1\u2212\u2113. 6: Execute the action sequence ah+k h\u2212\u2113regardless of the observations and receive the observations toh+k+1 h\u2212\u2113+1. 7: Update the dataset Dt h(ah+k h\u2212\u2113) \u2190Dt\u22121 h (ah+k h\u2212\u2113) \u222a \btoh+k+1 h\u2212\u2113 \t . 8: end for 9: Estimate the density of trajectory b Pt h(\u00b7 | ah+k h\u2212\u2113) \u2190E \u0000Dt(ah+k h\u2212\u2113) \u0001 for all h \u2208[H]. 
10: Update the density mappings b Xt h and b Yt h as follows, b Xt h(wh\u22121) = b Pt h(wh\u22121, zh = \u00b7), b Yt h(wh\u22121, ah, oh) = b Pt h(wh\u22121, ah, oh, zh+1 = \u00b7). 11: Update the initial density estimation b bt 1(\u03c4 H 1 ) \u2190b Pt(\u03c4 H 1 ). 12: Update the con\ufb01dence set Ct by (4.4). 13: Update the policy \u03c0t \u2190argmax\u03c0\u2208\u03a0 max\u03b8\u2208Ct V \u03c0(\u03b8). 14: end for 15: Output: policy set {\u03c0t}t\u2208[T]. 5 Analysis In what follows, we present the sample complexity analysis of ETC presented in Algorithm 1. Our analysis hinges on the following assumptions. Assumption 5.1 (Bounded Pseudo-Inverse). We assume that \u2225U\u03b8,\u2020 h \u222517\u21921 \u2264\u03bd for all \u03b8 \u2208\u0398 and h \u2208[H], where \u03bd > 0 is an absolute constant. We remark that the upper bound of the pseudo-inverse in Assumption 5.1 quanti\ufb01es the fundamental di\ufb03culty of solving the POMDP. In particular, the pseudo-inverse of forward emission 12 \frecovers the state density at the h-th step based on the trajectory \u03c4 h+k h from the h-th step to the (h + k)-th step. Thus, the upper bound \u03bd on such pseudo-inverse operator characterizes how ill-conditioned the belief recovery task is based on the trajectory \u03c4 h+k h . In what follows, we impose a similar past su\ufb03ciency assumption. Assumption 5.2 (Past Su\ufb03ciency). We de\ufb01ne for all h \u2208[H] the following reverse emission operator F\u03b8,\u03c0 h : Rd 7\u2192L1(O\u2113\u00d7 A\u2113) for all h \u2208[H], \u03c0 \u2208\u03a0, and \u03b8 \u2208\u0398, (F\u03b8,\u03c0 h v)(\u03c4 h\u22121 h\u2212\u2113) = X qh\u22121\u2208[d] [v]qh\u22121 \u00b7 P\u03b8,\u03c0(oh\u22121 h\u2212\u2113| qh\u22121, ah\u22121 h\u2212\u2113), \u2200v \u2208Rd, where (\u03c4 h\u22121 h\u2212\u2113) \u2208A\u2113\u00d7 O\u2113. We assume for some \u2113> 0 that the operator F\u03b8,\u03c0 h is left invertible for all h \u2208[H], \u03c0 \u2208\u03a0, and \u03b8 \u2208\u0398. We denote by F\u03b8,\u03c0,\u2020 h the left inverse of F\u03b8,\u03c0 h . We assume further that \u2225F\u03b8,\u03c0,\u2020 h \u222517\u21921 \u2264\u03b3 for all h \u2208[H], \u03c0 \u2208\u03a0, and \u03b8 \u2208\u0398, where \u03b3 > 0 is an absolute constant. We remark that the left inverse F\u03b8,\u03c0,\u2020 h of reverse emission operator F\u03b8,\u03c0 h recovers the density of the bottleneck factor qh\u22121 based on the density of trajectory \u03c4 h\u22121 h\u2212\u2113from the (h \u2212\u2113)-th step to the (h \u22121)-th step. Intuitively, the past su\ufb03ciency assumption in Assumption 5.2 guarantees that the density of trajectory \u03c4 h\u22121 h\u2212\u2113from the past captures su\ufb03cient information of the bottleneck factor qh\u22121, which further determines the state distribution at the h-th step. Thus, similar to the upper bound \u03bd in Assumption 5.1, the upper bound \u03b3 in Assumption 5.2 characterizes how ill-conditioned the belief recovery task is based on the trajectory \u03c4 h\u22121 h\u2212\u2113generated by the policy \u03c0. In what follows, we analyze the mixture policy \u03c0T of the policy set {\u03c0t}t\u2208[T] returned by ETC in Algorithmn 1. In particular, the mixture policy \u03c0T is executed by \ufb01rst sampling a policy \u03c0 uniformly from the policy set {\u03c0t}t\u2208[T] in the beginning of an episode, and then executing \u03c0 throughout the episode. Theorem 5.3. 
Let \u03c0T be the mixture policy of the policy set {\u03c0t}t\u2208[T] returned by Algorithm 1. Let \u03b2t = (\u03bd + 1) \u00b7 A2k \u00b7 p wE \u00b7 (k + \u2113) \u00b7 log(H \u00b7 A \u00b7 T) for all t \u2208[T] and T = O \u0000\u03b32 \u00b7 \u03bd4 \u00b7 d2 \u00b7 w2 E \u00b7 H2 \u00b7 A2(2k+\u2113) \u00b7 (k + \u2113) \u00b7 log(H \u00b7 A/\u01eb)/\u01eb2\u0001 . Under Assumptions 3.1, 3.5, 4.1, 5.1, and 5.2, it holds with probability at least 1 \u2212\u03b4 that \u03c0T is \u01eb-suboptimal. Proof. See \u00a7B.3 for a detailed proof. In Theorem 5.3, we \ufb01x the lengths of future and past trajectories k and \u2113, respectively, such that Assumptions 3.5 and 5.2 holds. Theorem 5.3 shows that the mixture policy \u03c0T of the policy set {\u03c0t}t\u2208[T] returned by ETC is \u01eb-suboptimal if the number of iterations T scales with O(1/\u01eb2). We remark that such a dependency regarding \u01eb is information-therotically optimal for reinforcement learning in MDPs (Ayoub et al., 2020; Agarwal et al., 2020; Modi et al., 2021; Uehara et al., 2021), which is a special case of POMDPs. In addition, the sample complexity T depends polynomially on the length of horizon H, number of actions A, the dimension d of the low-rank transition in Assumption 3.1, and the upper bounds \u03bd and \u03b3 in Assumptions 5.1 and 5.2, respectively. We highlight that the sample complexity depends on the observation and state spaces only through the dimension d of the low-rank transition, extending the previous sample e\ufb03ciency analysis of tabular 13 \fPOMDPs (Azizzadenesheli et al., 2016; Jin et al., 2020a). In addition, the sample complexity depends on the upper bounds of the operator norms \u03bd and \u03b3 in Assumptions 5.1 and 5.2, respectively, which quantify the fundamental di\ufb03culty of solving the POMDP. See \u00a7D for the analysis under the tabular POMDP setting. 6" + }, + { + "url": "http://arxiv.org/abs/2112.06206v1", + "title": "Automatic differentiation approach for reconstructing spectral functions with neural networks", + "abstract": "Reconstructing spectral functions from Euclidean Green's functions is an\nimportant inverse problem in physics. The prior knowledge for specific physical\nsystems routinely offers essential regularization schemes for solving the\nill-posed problem approximately. Aiming at this point, we propose an automatic\ndifferentiation framework as a generic tool for the reconstruction from\nobservable data. We represent the spectra by neural networks and set chi-square\nas loss function to optimize the parameters with backward automatic\ndifferentiation unsupervisedly. In the training process, there is no explicit\nphysical prior embedding into neural networks except the positive-definite\nform. The reconstruction accuracy is assessed through Kullback-Leibler(KL)\ndivergence and mean square error(MSE) at multiple noise levels. It should be\nnoted that the automatic differential framework and the freedom of introducing\nregularization are inherent advantages of the present approach and may lead to\nimprovements of solving inverse problem in the future.", + "authors": "Lingxiao Wang, Shuzhe Shi, Kai Zhou", + "published": "2021-12-12", + "updated": "2021-12-12", + "primary_cat": "physics.comp-ph", + "cats": [ + "physics.comp-ph", + "cs.LG", + "hep-lat" + ], + "main_content": "Introduction The numerical solution to inverse problems is a vital area of research in many domains of science. 
In physics, especially quantum many-body physics, it\u2019s necessary to perform an analytic continuation of function from \ufb01nite observations which however is ill-posed [1\u20133]. It is encountered for example, in Euclidean Quantum Field Theory (QFT) when one aims at rebuilding spectral functions based on some discrete data points along the Euclidean axis. More speci\ufb01cally, the inverse problem occurs when we take a non-perturbative Monte Carlo simulations (e.g., lattice QCD) and try to bridge the correlator data points with physical spectra [2, 3]. The knowledge of spectra will be further applied in transport process and non-equilibrium phenomena in heavy ion collisions [2, 4]. Moreover, the inverse problem of rebuilding spectral function is not unique to strong interaction many-body systems, but have similar counterparts in quantum liquid and superconductivity [1]. Fourth Workshop on Machine Learning and the Physical Sciences (NeurIPS 2021). arXiv:2112.06206v1 [physics.comp-ph] 12 Dec 2021 \fRelated works In past two decades, the most common approach in such reconstruction project is Bayesian inference which is a classical statistical learning method. It comprises the extra prior knowledge from the physical domain about the spectral function, so as to regularize the inversion task [2, 5, 6]. For example, as two of axioms, the scale invariance and proper constant form prior are both embedded into the Bayesian approach with Shannon-Jaynes entropy, which is termed as maximum entropy method (MEM) [4, 6]. Besides, recent several studies have investigated reconstructing spectral functions through a neural network [7\u201311]. In a supervised learning framework, the prior knowledge is encoded in amounts of training data and the inverse transformation is explicitly unfolded through a training process [7\u20139]. To alleviate the dependence of redundant training data, there are also studies adopting the radial basis functions and Gaussian process [11, 12]. 2 Problem statement Our inverse problem set-up is based on a Fredholm equation of the \ufb01rst kind, which takes the following form, g(t) = K \u25e6f := Z b a K(t, s)f(s)ds, (1) and the problem is to reconstruct the function f(s) given the continuous kernel function K(t, s) and the function g(t) . In physical systems, g(t) is always available in a discrete form. When dealing with a \ufb01nite set of data points with \ufb01nite uncertainty, the inverse transform becomes ill-conditioned or degenerated [9, 13]. In other words, the formal inversion is not a stable operation. K\u00e4llen\u2013Lehmann(KL) spectral representation Henceforth in the paper, we discuss the uniqueness and accuracy of the spectral function by building the K\u00e4llen\u2013Lehmann(KL) representation [14], D(p) = Z \u221e 0 K(p, \u03c9)\u03c1(\u03c9)d\u03c9 \u2261 Z \u221e 0 \u03c9 \u03c1(\u03c9) \u03c92 + p2 d\u03c9 \u03c0 . (2) Mock spectral functions are constructed using a superposed collection of Breit-Wigner peaks based on a parametrization obtained directly from one-loop perturbative quantum \ufb01eld theory [3, 7]. Each individual Breit-Wigner spectral function is given by, \u03c1(BW)(\u03c9) = 4A\u0393\u03c9 (M 2 + \u03932 \u2212\u03c92)2 + 4\u03932\u03c92 , (3) here M denotes the mass of the corresponding state, \u0393 is its width and A amounts to a positive normalization constant. The multi-peak structure is built by combining different single peak modules together. 
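The mock-data pipeline in Eqs. (2)-(3) is easy to reproduce: build $\rho(\omega)$ from Breit-Wigner peaks, then integrate against the Källen-Lehmann kernel on a frequency grid. A minimal sketch follows; the grid sizes are illustrative choices, while the peak parameters match the single-peak example discussed later in the text.

```python
import numpy as np

def breit_wigner(omega, A, Gamma, M):
    # Eq. (3): a single Breit-Wigner peak.
    return 4 * A * Gamma * omega / ((M**2 + Gamma**2 - omega**2)**2
                                    + 4 * Gamma**2 * omega**2)

def correlator(p, omega, rho):
    # Eq. (2): D(p) = (1/pi) * int d(omega) omega * rho / (omega^2 + p^2),
    # discretized with the trapezoidal rule on the omega grid.
    return np.trapz(omega * rho / (omega**2 + p**2), omega) / np.pi

omega = np.linspace(1e-3, 20.0, 500)
rho = breit_wigner(omega, A=1.0, Gamma=0.3, M=2.0)       # single peak
p_grid = np.linspace(0.0, 20.0, 25)
D = np.array([correlator(p, omega, rho) for p in p_grid])
# Gaussian-type mock noise, as in the numerical experiments below.
D_noisy = D + np.random.default_rng(0).normal(0.0, 1e-4, D.shape)
```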
3 Methods

In this section, we present the vectorized formalism of our methodology, which can be easily implemented with differentiable programming in PyTorch or other frameworks. For simplicity, we take a single observation point of $D(p)$ as an example; one can directly extend it to multiple data points by summing over them when calculating the gradients.

Spectral representations We develop three architectures with different levels of non-local correlation among the $\rho_{\omega_i}$ to represent the spectral functions with artificial neural networks (ANNs). The first form is List; it is equivalent to setting $L = 1$ without a bias node, and the differentiable variables are $\vec{\rho} = [\rho_1, \rho_2, \cdots, \rho_{N_\omega}]$, as shown in Figure 1, panel a). If one approximates the integration over frequencies $\omega_i$ by a summation over $N_\omega$ points at a fixed frequency interval $d\omega$, the problem fits this vectorized framework. The second representation, named NN, uses an $L$-layer neural network to represent the spectral function $\rho(\omega)$ with a constant input node $a_0 = C$ and multiple output nodes $a_L = [\rho_1, \rho_2, \cdots, \rho_{N_\omega}]$. The width of the $l$-th layer is $n_l$, and the correlation among the discrete outputs is contained in a concealed form. The last approach sets the input node to $a_0 = \omega_i$ and a single output node to $a_L = \rho_i$. It is termed the point-to-point neural network (NN-P2P), in which the continuity of the function $\rho(\omega)$ acts as a regularization defined on the input and output domains.

[Figure 1: Automatic differentiation framework to reconstruct spectra from observations, with $D(p) \equiv \int_0^\infty K(p,\omega)\rho(\omega)d\omega$ and loss $\chi^2 = \sum_{i=1}^{N_p} (D_i - D(p_i))^2/\delta D_i$. Different architectures for representing the spectrum with neural networks: a) List, a 1-layer network equivalent to the list of spectrum values; b) NN, an $L$-layer network whose output is the list of spectrum values; c) NN-P2P, an $L$-layer network with pairwise end-to-end nodes $(\omega_i, \rho_i)$.]

Automatic differentiation The output of the above representations is $\vec{\rho} = [\rho_1, \rho_2, \cdots, \rho_{N_\omega}]$, from which we calculate the correlator as $D(p) = \sum_i^{N_\omega} \vec{\rho} \odot K(p, \vec{\omega})$ with $\vec{\omega} = [\omega_1, \omega_2, \cdots, \omega_{N_\omega}]$, where '$\odot$' denotes the element-wise product. After the forward pass, we obtain both $\vec{\rho}$ and the loss $\mathcal{L} = \chi^2 = \sum_i^{N_p} (D_i - D(p_i))^2 / D(p_i)$, where $D_i$ is the observed data at $p_i$ with $N_p$ points. To optimize the representation parameters $\{\theta\}$ with the loss function, we implement backward propagation (BP). The gradient for layer $l$ is $\frac{\partial \mathcal{L}}{\partial \theta^{[l]}} = \Delta^{[l]}$, and the input to backward propagation is
$$\Delta^{[L]} = \frac{\partial \mathcal{L}}{\partial D(p)} K(p, \vec{\omega}). \qquad (4)$$
Iterating in the backward direction, the gradients $\Delta^{[l]} = \theta^{[l+1]\top} \Delta^{[l+1]} \odot \sigma'(Z^{[l]})$ are used to optimize the parameters $\{\theta\}$, where '$\top$' denotes the transpose, $\theta^{[l]}$ is the weight matrix at layer $l$, $Z^{[l]}$ is the output of layer $l$, and $\sigma(\cdot)$ is the corresponding activation function.

Optimization strategy The components of the framework are differentiable and therefore amenable to gradient descent.
Owing to the ease of adding regularizers to neural network representations, the optimization uses the Adam algorithm [15] with weight decay (weight decay is equivalent to L2 regularization in stochastic gradient descent (SGD) when re-scaled by the learning rate [16]). During training, we follow an annealing strategy: we set a tight regularization at the beginning and loosen it repeatedly over the first 20000 epochs. The weight decay rate is set to $10^{-4}$ and the learning rate to $10^{-3}$ in all cases. The smoothness regularization contributed to the loss function is written as $\lambda_s \sum_{i=1}^{N_\omega} (\rho_i - \rho_{i-1})^2$, with a tight initial value $\lambda_s = 10^{-2}$. Besides the intrinsic regularization of the neural network itself, the only physical prior we enforce in the framework is the positive-definiteness of hadron spectral functions, introduced through a Softplus activation function at the last layer, $\mathrm{Softplus}(x) = \log(1 + e^x)$. It should be mentioned that the biases induced by gradient descent-type optimizers are not avoided in our framework, but this could be improved by embedding ensemble learning strategies.

4 Numerical Results

[Figure 2: Performance of rebuilding correlation functions in two samples (single peak and double peak). The inset figures show the absolute errors between the observed and rebuilt correlation functions.]

In this section, to demonstrate the performance of our framework, we prepare two profiles of spectral functions from Eq. 3. In Figure 2, the left correlation function comes from a single-peak spectrum with $A = 1$, $\Gamma = 0.3$, $M = 2.0$, and the right-hand side from a double-peak profile with $A_1 = 1$, $A_2 = 1.5$, $\Gamma_1 = 0.4$, $\Gamma_2 = 0.5$, $M_1 = 2.5$, $M_2 = 5.0$. The three representations, marked by green, blue, and red dots, show high consistency with the observed correlators. Moreover, to imitate real-world observable data, we add Gaussian-type noise to the mock data as $\tilde{D}(p_i) = D(p_i) + \mathrm{noise}$ with $\mathrm{noise} = N(0, \epsilon)$. The reconstruction absolute error reaches the $10^{-6}$ magnitude in all representations in the case with $\mathrm{noise} = 10^{-6}$. The corresponding rebuilt spectral functions are listed in Figure 3.

[Figure 3: The predicted spectral functions from NN, List, and NN-P2P. From left to right, Gaussian noise with $\epsilon = 0.001$, $0.0001$, and $0.00001$ is added to the correlation data.]

The three representations are marked by green, blue, and red lines in Figure 3. They all show remarkable reconstruction performance for the single-peak case at noise levels above $0.0001$, although List and NN exhibit oscillations around the zero point under different noise backgrounds. The spectrum rebuilt by NN-P2P does not oscillate even with noise smaller than $0.0001$, which is especially important for tasks such as extracting transport coefficients from real-world lattice calculation data. Although the List representation shows intense oscillations in the double-peak data, it successfully unfolds the two-peak information from the correlators even with noise $\epsilon = 0.001$.
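The training procedure described in the Methods section maps almost one-to-one onto a short PyTorch loop. The sketch below implements the NN-P2P representation with a Softplus output, the $\chi^2$ loss, the smoothness penalty $\lambda_s \sum_i (\rho_i - \rho_{i-1})^2$, and Adam with weight decay; the network width and the kernel discretization are illustrative choices, and the annealing schedule is omitted for brevity.

```python
import torch

omega = torch.linspace(1e-3, 20.0, 500).unsqueeze(1)   # frequency grid
p = torch.linspace(0.0, 20.0, 25)
# K(p, w) = w / (w^2 + p^2) / pi on the grid, shape (N_p, N_omega)
kernel = omega.squeeze(1) / (omega.squeeze(1)**2 + p.unsqueeze(1)**2) / torch.pi
d_omega = omega[1, 0] - omega[0, 0]

# NN-P2P: omega_i -> rho_i; the final Softplus enforces rho >= 0.
net = torch.nn.Sequential(
    torch.nn.Linear(1, 64), torch.nn.Softplus(),
    torch.nn.Linear(64, 64), torch.nn.Softplus(),
    torch.nn.Linear(64, 1), torch.nn.Softplus(),
)
opt = torch.optim.Adam(net.parameters(), lr=1e-3, weight_decay=1e-4)

def train_step(D_obs, lam_s=1e-2):
    rho = net(omega).squeeze(1)              # forward: rho(omega_i)
    D_pred = kernel @ rho * d_omega          # D(p) = sum_i K(p, w_i) rho_i dw
    chi2 = ((D_obs - D_pred)**2 / D_obs.clamp_min(1e-12)).sum()
    smooth = lam_s * ((rho[1:] - rho[:-1])**2).sum()
    loss = chi2 + smooth
    opt.zero_grad()
    loss.backward()                          # backward propagation, Eq. (4)
    opt.step()
    return loss.item()
```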
4 Numerical Results In this section, to demonstrate the performance of our framework, we prepare two profiles of spectral functions from Eq. 3. Figure 2: Performance of rebuilding the correlation functions for the two samples (single peak, left; double peak, right). The insets show the absolute errors between the observed and rebuilt correlation functions. In Figure 2, the left correlation function comes from a single-peak spectrum with $A = 1$, $\Gamma = 0.3$, $M = 2.0$, and the right one from a double-peak profile with $A_1 = 1$, $A_2 = 1.5$, $\Gamma_1 = 0.4$, $\Gamma_2 = 0.5$, $M_1 = 2.5$, $M_2 = 5.0$. The three representations, marked by green, blue, and red dots, are all highly consistent with the observed correlators. Moreover, to imitate real-world observational data, we add Gaussian noise to the mock data as $\tilde{D}(p_i) = D(p_i) + \mathrm{noise}$ with $\mathrm{noise} = N(0, \epsilon)$. The reconstruction absolute error reaches the $10^{-6}$ magnitude for all representations in the case with $\mathrm{noise} = 10^{-6}$. The corresponding rebuilt spectral functions are shown in Figure 3. Figure 3: The predicted spectral functions from NN, List, and NN-P2P. From left to right, Gaussian noise with $\epsilon = 0.001$, $0.0001$, and $0.00001$ is added to the correlation data. The three representations are marked by green, blue, and red lines in Figure 3. They all show remarkable reconstruction performance for the single-peak case at noise levels above 0.0001. Among them, List and NN exhibit oscillations around the zero point under the different noise backgrounds, whereas the spectrum rebuilt by NN-P2P does not oscillate even when the noise is smaller than 0.0001. This is especially important for the task of extracting transport coefficients from real-world lattice calculation data. Although the List representation shows intense oscillations for the double-peak data, it successfully unfolds the two-peak information from the correlators even with noise $\epsilon = 0.001$. To assess the reconstruction performance quantitatively, we introduce the MSE and the Kullback–Leibler (KL) divergence between the rebuilt spectrum $q$ and the ground truth $p$,
$$D_{\mathrm{KL}}(P \,\|\, Q) = \int_0^\infty p(\omega) \log\big(p(\omega)/q(\omega)\big)\, d\omega.$$
Across noise levels spanning several orders of magnitude, NN-P2P performs consistently compared with the other two representations. Although it misses the second peak that may appear in the bimodal case, the calculation of moments of different order from the spectral function is not disturbed. Figure 4: Evaluation of the reconstruction with the KL divergence and the MSE on the mock data sets. Red triangles mark NN-P2P, blue circles the NN representation, and green cubes the List. Intuitively speaking, in the NN-P2P representation there is a series of first-order differentiable modules between the input node $\omega$ and the output node $\rho$, in which the continuity of the function $\rho(\omega)$ is naturally preserved. This gives NN-P2P better performance in the single-peak case but makes it fail to reconstruct the double peaks, which requires a looser restriction.
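A short sketch of these two metrics on the discretized frequency grid is given below, assuming both spectra are arrays on the grid from the earlier snippets; normalizing $p$ and $q$ to unit integral before the KL computation is an assumption made here for numerical well-definedness.

```python
import numpy as np

def mse(p, q):
    # Mean squared error between ground-truth and rebuilt spectra on the grid.
    return np.mean((p - q) ** 2)

def kl_divergence(p, q, d_omega, eps=1e-12):
    # Discretized D_KL(P||Q) = int p(w) log(p(w)/q(w)) dw.
    # Spectra are normalized to unit integral; eps guards against log(0).
    p = p / (p.sum() * d_omega)
    q = q / (q.sum() * d_omega)
    return np.sum(p * np.log((p + eps) / (q + eps))) * d_omega
```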
" + }, + { + "url": "http://arxiv.org/abs/2012.00082v1", + "title": "Machine learning spatio-temporal epidemiological model to evaluate Germany-county-level COVID-19 risk", + "abstract": "As the COVID-19 pandemic continues to ravage the world, it is of critical\nsignificance to provide timely, multi-level risk predictions of COVID-19. To\nimplement this and to evaluate public health policies, we develop a\nmachine-learning-assisted framework that extracts epidemic dynamics from\ninfection data and contains a county-level spatiotemporal epidemiological\nmodel combining a spatial Cellular Automaton (CA) with a temporal\nSusceptible-Undiagnosed-Infected-Removed (SUIR) model. Compared with existing\ntime-domain risk prediction models, the proposed CA-SUIR model shows the\nmulti-level risk of each county to the government and the coronavirus\ntransmission patterns under different policies. This new toolbox is first\napplied to the projection of the multi-level COVID-19 prevalence over the 412\nLandkreise (counties) in Germany, including t-day-ahead risk forecasts and the\nrisk assessment of the travel restriction policy. As a practical illustration,\nwe predict the situation at Christmas, where the worst-case fatalities are\n34.5 thousand, while effective policies could contain them below 21 thousand.\nSuch an intervenable evaluation system could help decide on economic\nrestarting and public health policy making in a pandemic.", + "authors": "Lingxiao Wang, Tian Xu, Till Hannes Stoecker, Horst Stoecker, Yin Jiang, Kai Zhou", + "published": "2020-11-30", + "updated": "2020-11-30", + "primary_cat": "physics.soc-ph", + "cats": [ + "physics.soc-ph", + "cs.LG", + "physics.med-ph" + ], + "main_content": "Introduction Public health security is the cornerstone of economic and social stability. CoronaVirus Disease 2019 (COVID-19), an infectious disease caused by the Severe Acute Respiratory Syndrome Coronavirus 2 (SARS-CoV-2), became a global pandemic that spread across the world from the beginning of 2020. Since the middle of March, the number of infected cases in Germany has grown rapidly. Although severe travel restrictions eased the situation in the summer, the number of infected cases has been increasing again with a more critical trend since mid-September, commonly referred to as the second wave of the pandemic worldwide. By November 25, 2020, the pandemic had caused 983588 confirmed cases and 15160 fatalities in Germany. Many states, e.g., Berlin, Bavaria, and Hessen, face the dilemma of balancing public health and the economy while curbing the spread of COVID-19. The medical resources of Germany (ICU beds and doctors) have not reached their ceiling, but the current phase of the pandemic is not encouraging. Being a highly infectious disease, COVID-19 is expected to continue spreading, or even worsen, in the coming winter, causing a higher number of infections and mortality in the next few months. With no perfect medical treatment nor enough vaccines available until now, the implementation of relatively effective public health policies such as social distancing and travel restrictions becomes the primary measure of the states to mitigate the spread of the epidemic, which has proven to be an effective containment in East Asia, e.g., China [1] and South Korea, and also in Europe at an early stage [2]. In this worsening health crisis, a dire need emerges to establish a multi-level (nation-state-county) COVID-19 risk prediction model that can guide the evaluation and improvement of the public health policies of governments (states) at multiple levels. Such an intervenable risk evaluation model benefits policymakers in decision-making on restarting the economy and on public health policies at different scales during epidemics. The multi-level risk information can help governments improve public restriction strategies such as cross-state travel restrictions, home office rules, severe home isolation regulations, and the allocation of vaccination resources. Residents who are concerned about the present situation can also benefit from the multi-level risk information, to make effective use of medical resources and to help implement or follow the public health policies. Evaluating the risk of the ongoing COVID-19 pandemic across geographically dependent administrative divisions is challenging because of the substantial heterogeneity rooted in economic composition, industrial diversity, traffic conditions, and political views. Nonetheless, it is inevitably an urgent demand for governments at all levels. Although many studies have been devoted to the dynamics of epidemics based on mathematical models [3, 4] (see also the reviews1 [5, 6, 7]), they mainly focus on time-domain analysis and ignore the influence of geographic distribution and population migration [2], which palpably presents as cross-county transfers on a daily scale. A model that can predict the spatio-temporal evolution of COVID-19 cases is urgently needed to assess the local risk in multi-level regions. Besides works that concern spatial distribution or local evolution based on historical statistical data [8, 9, 10], several recent studies share similar goals in different countries [11, 12, 13, 14], addressing one of the most urgent needs across the world. From a practical perspective, a reasonable and efficient risk prediction model needs to cover all Landkreise2 comprehensively, since people are highly mobile and connected by highways, railways, and airways in Germany. Conceivably, a county-level risk evaluation is informative and valuable for the public and local governments. 1Since the number of COVID-19 papers is too large to list them all, the following references are highly relevant to the topic discussed in this paper. 2The Landkreis is the primary administrative subdivision above the city level in Germany; it can be rendered as 'county' in English.
Machine learning (ML), a branch of artificial intelligence, efficiently integrates statistical and inference algorithms and thus offers the opportunity to uncover the hidden structure of the evolution in complex coronavirus data and to describe it with a finite set of dynamical parameters. Therefore, the combined use of ML algorithms and a spatio-temporal epidemiological model for risk prediction in the context of the COVID-19 pandemic is worth exploring. Although state-of-the-art ML techniques have been deployed in myriad areas of the COVID-19 pandemic [15, 16, 17, 18], the highest concern for governments and residents is how fast COVID-19 diffuses. Available ML models that focus on this show promising implications at the individual or national scale, but they are still impeded by the paucity of validation information and strict privacy rules in Germany, and thus lack the capability of predicting multi-scale evolution. In this study, we aim to establish a new evaluation paradigm by introducing a machine-learning-assisted spatio-temporal epidemiological model. The dynamical model of the epidemic contains a Cellular Automaton (CA) [19, 20] and a modified Susceptible-Undiagnosed-Infected-Removed (SUIR) model [13, 21]. The SUIR model is more inclusive than the previous Susceptible-Infectious-Recovered (SIR)-type models since it considers the undetected or asymptomatic populations [22, 23] hidden by the lack of detection. As an important extension of the SIR model, we divide the infectious population into two compartments: one is the undiagnosed, who behave as normal susceptible individuals but are possibly infectious and not detected in time; the other is the infected (diagnosed), who are confirmed by detection and have been admitted to hospital or isolated at home. In this new temporal model, we constrain the mobility of the people in the susceptible and undiagnosed populations; understanding the latter helps address the underestimation of infection cases in the public databases. As a matter of fact, the movement of undiagnosed people across the counties contributes to the deteriorating epidemic situation; we therefore introduce the CA model to capture the influence of population mobility. Assisted by ML to extract the dynamical parameters, this new evaluation system can provide the public and governments in different regions with effective warnings, which should be considered when adjusting the restriction rules. The article is organized as follows. In Section 2, we propose a full framework for COVID-19 risk prediction in Germany, introducing the spatio-temporal CA-SUIR model; machine learning techniques are applied to extract the dynamical parameters from the reported data of the Robert Koch-Institut (RKI). Section 3 reproduces the evolution of the infection maps from 25 February to 28 March 2020 and extracts the infectious parameters from the RKI data of September 2020 with a machine trained on data sets from CA-SUIR model simulations; the well-trained machine predicts county-level infection maps from September to November 2020, which match the reported situation remarkably well.
With regard to the upcoming Christmas holiday, different public health policies are evaluated at the end of that section, comparing the entire, partial, and no lock-down strategies. Some concluding remarks are presented in the last section. 2. Machine Learning Spatio-temporal Epidemiological Model Figure 1: A framework to evaluate and predict the COVID-19 risk in Germany. The left panel is the geographic distribution of the infection cases in Germany, named the real map, at the nation level. The central panel is the topologically equivalent map of the infection cases, in which the main cities are mapped onto the sites of a square lattice at the state level. The right panel is the microscopic dynamics on each site, depicted spatially by a Cellular Automaton and evolved temporally through the SUIR model at the county level. The colors of the characters in SUIR correspond to the subsequent Fig. 3. In this section, we propose a spatio-temporal model to predict the multi-level COVID-19 risk in Germany, named the CA-SUIR model in the following. Machine learning techniques are introduced to learn the dynamical parameters of the CA-SUIR model from a 'movie' of infection maps, which is subsequently utilized to evaluate the COVID-19 risk from the RKI infection maps in Germany in a timely manner. The full flow chart is assembled in Fig. 1 to present the multi-level spatio-temporal simulation of the epidemic, where, as a first step, the real maps are projected onto topologically equivalent maps as explained in Appendix A. The cumulative infected cases in the real profiles3 are reassembled into a topologically equivalent map with 25 × 25 sites for Germany, which contains 412 counties (see details in Appendix A). The microstates of each county, i.e., its numbers of infected cases, are driven simultaneously by the SUIR dynamics and the CA in the model, which together construct the topological infection map. With simulated data from the CA-SUIR model, the Conv2LSTM neural networks [24], consisting of Time-Distributed Convolutional Neural Networks (CNNs) and Recurrent Neural Networks (RNNs), are designed to learn to extract the dynamical parameters of the CA-SUIR model. Subsequently, we transfer the well-trained neural networks to make timely predictions and risk evaluations on the RKI infection maps. 2.1. Deep learning dynamical parameters Figure 2: Machine learning the dynamical parameters of the CA-SUIR model from the evolution of infection maps. Time-Distributed Convolutional Neural Network (CNN) layers and Long Short-Term Memory (LSTM) units are combined to extract the information encoded in the input short movie (one image per frame). The undiagnosed population, infectious rate, removed rate, diagnosed rate, and transfer rate are decoded from the Conv2LSTM neural networks. CNN and Long Short-Term Memory (LSTM) models have been widely adopted and show remarkable performance in many areas, such as computer vision and natural language processing. 3See the map at COVID-19 pandemic in Germany.
While the CNN model performs efficient pattern recognition with convolutional filters on image-like tensor data, the LSTM model is capable of capturing dependencies along the time direction between sequential tallies. In this work, we combine the CNN and LSTM models into Conv2LSTM for analyzing the infection dynamics. The Conv2LSTM neural networks we built are demonstrated in Fig. 2, in which Time-Distributed Convolutional Neural Network (CNN) layers and LSTM layers are combined to decode the information encoded in the short movie constructed from the infection maps. The details of the structure are arranged in Appendix C. The time-distributed CNNs extract features (with the temporal structure preserved) from sequential image inputs and then embed them into the following LSTM layers, which can efficiently distill the evolution rules from the short movies. 2.2. Temporal SUIR model We propose a modified Susceptible-Undiagnosed-Infected-Removed (SUIR) model, demonstrated in Fig. 3, where the infectious rate λ controls the rate of spread, i.e., the probability of transmitting the disease between a susceptible and an infectious individual. The diagnosed rate σ dictates the probability of latent infectious individuals becoming confirmed by testing. The removed rate γ comprises the rates of mortality and recovery from the infectious compartment. Beyond the classical definition of the SIR model, and considering the special features of COVID-19, the modified SUIR model accounts for a considerable population that may be infected without any symptoms, or that simply has no access to testing (with probability p); this population is collected in a new compartment, the Undiagnosed (U). In contrast to the Infected, who are usually isolated in medical facilities or quarantined at home, the Undiagnosed population retains normal mobility, which leads to high infection rates; their infectivity may decrease after d days [25], but there is no clear evidence that they acquire lasting immunity, which means immunity can be lost. Figure 3: The SUIR diagram shows how individuals are infected and transferred among the compartments in a closed area. In a closed population with no births or deaths per day, the modified SUIR model becomes
$$\frac{dS}{dt} = -\frac{S}{N}\lambda U + \frac{\sigma'}{\text{days}} U, \quad (1)$$
$$\frac{dU}{dt} = \frac{S}{N}\lambda U - \sigma U - \frac{\sigma'}{\text{days}} U, \quad (2)$$
$$\frac{dI}{dt} = \sigma U - \gamma I, \quad (3)$$
$$\frac{dR}{dt} = \gamma I, \quad (4)$$
where $N = S + U + I + R$ is the total population. The redefined parameters in Fig. 3 are $\sigma \equiv \sigma p$ and $\sigma' \equiv \sigma(1 - p)$. The detailed explanation of the parameters is given in Appendix B.
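A minimal sketch of the temporal SUIR update, integrating Eqs. (1)-(4) with a simple forward-Euler scheme, is given below; the parameter values are placeholders rather than fitted values, and `days` denotes the d-day window after which the undiagnosed lose infectivity, following the notation above.

```python
def suir_step(S, U, I, R, lam, sigma, sigma_prime, gamma, days, dt=1.0):
    """One forward-Euler step of the modified SUIR model, Eqs. (1)-(4)."""
    N = S + U + I + R
    dS = -S / N * lam * U + sigma_prime / days * U
    dU = S / N * lam * U - sigma * U - sigma_prime / days * U
    dI = sigma * U - gamma * I
    dR = gamma * I
    return S + dS * dt, U + dU * dt, I + dI * dt, R + dR * dt

# Illustrative run on one site; these parameter values are placeholders.
state = (83e6, 700.0, 100.0, 0.0)          # (S, U, I, R)
for _ in range(30):                         # 30 daily steps
    state = suir_step(*state, lam=0.4, sigma=0.13, sigma_prime=0.05,
                      gamma=0.6, days=14)
```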
2.3. Spatio-temporal CA-SUIR model We construct a cellular automaton model to track the spatial evolution of the epidemic in a closed system (a nation with a fixed boundary, as shown in Fig. 1, which matches the extreme restriction adopted by the German government from March 16 to May). The underlying structure is an L × L cell grid, where the system size L is taken from the topological map. The color of a cell characterizes the population of the minimum geographical unit, which is the county in our case. As an instructive example, setting the cell size to a typical city area, such as $10^2 \sim 10^3\,\mathrm{km}^2$, approximately simulates the epidemic at the corresponding resolution. The resolution could naturally be increased to the community level, but this requires more detailed data that are laborious to access; the following discussion therefore focuses on the county level and the corresponding resolution. The model adopts the Moore neighborhood, and cells update their states through a transition tensor $P^i_{m,n}(t)$ of shape 4 × L × L, which gives the probability of transfer from city (m, n) in direction i at time step t. As shown in Fig. 4, $P^i(t)$ consists of 4 matrices $(P^u(t), P^d(t), P^l(t), P^r(t))$ of the same lattice size, in which different colors label the transfer directions. In the following simulation, we adopt the mean-field approximation [26], which leads to the uniform transition probability $P^u(t) = P^d(t) = P^l(t) = P^r(t) = \epsilon$. The transfer rate ε describes the average movement of the residents on each site. To mimic the closed boundary induced by border control, a transfer rate ε = 0 has been used for the primary epidemic stage [27]. Figure 4: Transition matrices in the Cellular Automaton model, $P^i = (p^i_{m,n})_{L \times L}$ with $i = l, r, u, d$; the size of the grid is L = 25 in our case. The evolution of the CA-SUIR model proceeds as follows (see the sketch after this list):
Step 1. Initialization. Set the position (x, y) of the infection outbreak and generate the population distribution S(t) on the L × L lattice. At time step t = 0, the epidemic starts and the population begins to migrate.
Step 2. SUIR dynamics. At time step t, evolve the SUIR model t′ times on each site.
Step 3. Movement. The mobile parts of the population (S and U) on site (m, n) migrate to the neighboring sites with transition matrices $P^k(t, m, n)$ at time step t + 1. Update all sites synchronously.
Step 4. Update. Return to Step 2 until time step T is reached.
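The sketch below illustrates one movement update (Step 3) under the mean-field approximation, where a fraction ε of the mobile compartments S and U on each site moves equally to its four neighbors; the closed-boundary handling (returning border flux to its source site) is an implementation assumption chosen to conserve the total population.

```python
import numpy as np

def move(X, eps):
    """One mean-field movement step (Step 3) for a mobile compartment X (S or U).

    A fraction eps of the population on each site leaves, split equally among
    the four neighboring sites; flux that would cross the closed border stays
    on its source site, so the total population is conserved.
    """
    share = X * eps / 4.0                 # flux toward each of the 4 directions
    out = X - 4.0 * share                 # residents who stay
    inflow = np.zeros_like(X)
    inflow[1:, :] += share[:-1, :]        # flux moving down
    inflow[:-1, :] += share[1:, :]        # flux moving up
    inflow[:, 1:] += share[:, :-1]        # flux moving right
    inflow[:, :-1] += share[:, 1:]        # flux moving left
    # Closed boundary: return the outgoing border flux to its source site.
    inflow[0, :] += share[0, :]
    inflow[-1, :] += share[-1, :]
    inflow[:, 0] += share[:, 0]
    inflow[:, -1] += share[:, -1]
    return out + inflow

# Example: L = 25 grid, transfer rate eps = 0.08 as used for March.
L = 25
S = np.full((L, L), 1e5)
S = move(S, eps=0.08)
assert abs(S.sum() - L * L * 1e5) < 1e-3   # population conserved
```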
Conceivably, by introducing more practical factors from clinical diagnosis or management, the SUIR model can be predictably improved to match with the record data, which will be discussed in our further works but is not the focus in the current one. According to the initial infection cases distribution and the locations of main cities of Germany, we set the initial undiagnosed population distribution U0(m, n) to be proportional to the population distribution of the cities N(m, n). In the simulations for March we choose the transfer rate \u03f5 = 0.08 as a constant, the population of transfer on site is proportional to the rate in each time-step. With the \ufb01tted parameters from above SUIR model, we generate the simulated evolution of the cumulative infection cases map in Fig.6, which demonstrates the high consistency with the RKI data map (named as real pro\ufb01les, see the full evolution sample in Appendix. A). (a) (b) 0 20 2\u00d7102 2\u00d7103 Figure 6: Simulated and data evolution of the infection distribution in Germany. The infection maps are evaluated through CA-SUIR model, from the same initial distribution at 25 Feb.2020. The color bar is set in an uniform orange color scale, the darkest color is 2000 cases with a logarithmic re-scale. The horizontal and vertical coordinates correspond to the real geographical latitude and longitude, which is consistent in the following \ufb01gures. To prepare training data for Conv2LSTM, we generate 30-day simulation maps as short movies by associating the SUIR model parameters a 10% uncertainty range. The total number of the data set is 7 \f10000, which contains 9000 infection movies as training data-set and the rest 1000 as testing data-set. The 30-day movie contains 30 frames, which is fed to the machine with continuous 7-day segments as slip windows [30]. The structure of the Conv2LSTM neural networks are exhibited in Appendix. C. With the short movies to be the input, the Conv2LSTM is trained to predict the involved epidemiological model simulation parameters. The learning curve is shown in Appendix. C. The four sub-\ufb01gures in Fig. 7 demonstrate the comparison between the prediction from the trained Conv2LSTM and ground-truth for testing data, they are the transfer rate \u03f5, the infectious rate \u03bb, the diagnosed rate \u03c3 and undiagnosed population U respectively. The well-trained machine extracts parameters from the RKI infection map of March 2020 as \u03bb\u2032 = 0.26, \u03c3 = 0.09, U0 = 1352, \u03f5 = 0.068. The \ufb01rst and last parameters are numerically matched with the \ufb01tting results, and the diagnosed rate and initial undiagnosed population maintain the consistency with \ufb01tting parameters in the relationship \u03c3U0 (see Appendix. B). (a) 0 0.2 0.4 0.6 0.8 1 Truth 0 0.2 0.4 0.6 0.8 1 Prediction Transfer Rate (b) 0 0.2 0.4 0.6 0.8 1 Truth 0 0.2 0.4 0.6 0.8 1 Prediction Infectious Rate (c) 0 0.2 0.4 0.6 0.8 1 Truth 0 0.2 0.4 0.6 0.8 1 Prediction Diagnosed Rate (d) 0 0.05 0.1 0.15 0.2 Truth 0 0.05 0.1 0.15 0.2 Prediction Undiagnosed Population U Figure 7: The performance of Conv2LSTM neural networks to extract dynamical parameters from infection maps. (a) The transfer rate \u03f5 is tested on the testing data-set with the coe\ufb03cient of determination R2 = 0.996. (b) The infectious rate \u03bb is tested on the testing data-set with the coe\ufb03cient of determination R2 = 0.998. (c) The diagnosed rate \u03c3 is tested on the testing data-set with the coe\ufb03cient of determination R2 = 0.987. 
The four panels of Fig. 7 compare the predictions of the trained Conv2LSTM with the ground truth on the test data: the transfer rate ε, the infectious rate λ, the diagnosed rate σ, and the undiagnosed population U, respectively. The well-trained machine extracts the parameters from the RKI infection maps of March 2020 as λ′ = 0.26, σ = 0.09, U0 = 1352, ε = 0.068. The first and last parameters match the fitting results numerically, and the diagnosed rate and the initial undiagnosed population remain consistent with the fitted parameters through the product σU0 (see Appendix B). Figure 7: Performance of the Conv2LSTM neural networks in extracting the dynamical parameters from infection maps, on the test data set: (a) transfer rate ε, with coefficient of determination R² = 0.996; (b) infectious rate λ, with R² = 0.998; (c) diagnosed rate σ, with R² = 0.987; (d) undiagnosed population U, with R² = 0.993. 3.2. Machine Learning-assisted Risk Prediction To assess the COVID-19 risk in the realistic situation in a timely manner, we deploy the well-trained neural network on the current cases via transfer learning. Transfer learning is a machine learning method where a model developed for one task is reused as the starting point for a model on a second task. In our case, transferring the pre-trained models to the ongoing second-wave COVID-19 epidemic in Germany, we extract the dynamical parameters of the spatio-temporal pandemic from the 30-day RKI infection maps beginning on September 8, 2020, which yields λ′ = 0.21, σ = 0.167, U0 = 7426, ε = 0.09. Fig. 8 shows the agreement between the simulated and RKI infection maps from September to November 2020 in Germany. The cumulative infection maps shown in Fig. 8a are generated from the SUIR model with the machine-learned parameters over 3 months. The maps shown in Fig. 8b are projected from the RKI data, presenting the cumulative infection cases in real profiles until the end of November. Figure 8: Comparison of simulated and RKI infection maps from September to November 2020 in Germany. The color bar uses a uniform red scale; the darkest color is 20000 cases on a logarithmic scale. (a) Cumulative infection cases from the dynamical simulation with machine-learned parameters at 2020-09-25, 2020-10-25, and 2020-11-25. (b) Cumulative infection cases in real profiles at the same dates. The top 5 biggest cities in Germany, Berlin (13°23E, 52°31N), Hamburg (10°00E, 53°33N), München (11°35E, 48°03N), Köln (6°58E, 50°56N), and Frankfurt (8°41E, 50°07N), show a worse situation than the other sites. The slight mismatch in the November map is due to the soft lockdown policy actually in force from 2 November; such functional regulations contain the pandemic moderately, as examined in the following section under different policies. 3.3. Public Policies Evaluation

Table 1: Public policies for different spots. The table lists specific rules leading to three scenarios: a) unlocked, b) soft-locked, c) locked; the last is the well-known lock-down policy, the most powerful restriction on human mobility.

| Spot        | Unlocked                   | Soft-locked                     | Locked         |
| Office      | Office work on schedule    | Home office for large groups    | Home office    |
| Campus      | Open                       | Close                           | Close          |
| School      | Open                       | Close                           | Close          |
| Supermarket | Open                       | Open with distance constraints  | Close          |
| Restaurant  | Customer number constraint | Only take-out                   | Close          |
| Travel      | Free movement              | Long-distance travel prohibited | Home isolation |

Three different degrees of restriction rules, named the unlocked, soft-locked, and locked strategies, are evaluated in our COVID-19 evaluation framework. Examples of the corresponding policies are listed in Table 1, covering some representative government rules at different spots.
Such policy evaluation is of crucial concern for both governments and residents, who endure the social-economic pressure of the pandemic [11, 31, 32, 33]. The hot spots include, but are not limited to, offices, campuses, schools, restaurants, and cross-county travel. It should be mentioned that although we introduce heterogeneous rules for the different spots, their comprehensive influence is averaged at the county level, i.e., the effect of the policies is coarse-grained to the county level. This could reasonably be improved by collecting more precise data below the county level, e.g., at the individual or community level [34]. Figure 9: Predicted cumulative infection maps at Christmas under the three different public policies. The color bar uses a uniform red scale; the darkest color is 20000 cases on a logarithmic scale. The dynamical parameters are extracted from the RKI infection maps by the well-trained machine. (a) Prediction based on the existing public policies. (b) Soft lockdown, whose relatively loose restrictions on some hot spots partially alleviate the diffusion of the epidemic. (c) Locked: the strictest rules prohibit any form of mobility, yielding the fewest infections. The differences among the scenarios in the simulation are the transfer rate and the susceptible population: the strictest locked rules have transfer rate ε = 0, the soft-locked scenario has half the transfer, ε = 0.045, and the rule with no additional restriction keeps ε = 0.09 as the default. Meanwhile, the containment policies decrease the infectious population and thereby effectively deplete the susceptible population [1]; the susceptible population is reduced, reaching 1/2 and 1/8 for the locked and soft-locked rules, respectively, where the 1/8 comes from a rough estimate of the necessary time spent outside. These three strategies are implemented from the same time point, 25 November 2020, and their 30-day simulation results are shown in Fig. 9. In our transfer-learning-assisted prediction, 2.3 million people will be infected before Christmas in the worst situation, with a daily increase of approximately 80 thousand; estimated with a mortality rate of ∼1.5% [35], the total mortality will reach approximately 34.5 thousand, consistent in order of magnitude with the prediction of the Institute for Health Metrics and Evaluation (IHME)5. Under the soft-locked policy, the number of infections reduces to 1.4 million, with roughly 20 thousand daily new cases. Under the strictest rules, which fully restrict mobility, the number of infection cases will be 1.07 million and the daily increase 4 thousand. 5http://www.healthdata.org/covid
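A sketch of how these three scenarios can be encoded on top of the earlier SUIR and movement sketches is shown below; the transfer rates and susceptible fractions follow the values quoted in the text, while the loop structure and function names are assumptions.

```python
# Scenario parameters quoted above: (transfer rate eps, susceptible fraction kept).
SCENARIOS = {
    "unlocked":    {"eps": 0.09,  "s_frac": 1.0},
    "soft-locked": {"eps": 0.045, "s_frac": 1.0 / 8.0},
    "locked":      {"eps": 0.0,   "s_frac": 1.0 / 2.0},
}

def run_scenario(S, U, I, R, name, horizon=30, **suir_kw):
    """30-day CA-SUIR rollout under one policy; reuses suir_step and move."""
    cfg = SCENARIOS[name]
    S = S * cfg["s_frac"]                     # policy-depleted susceptibles
    for _ in range(horizon):
        S, U, I, R = suir_step(S, U, I, R, **suir_kw)    # temporal update per site
        S, U = move(S, cfg["eps"]), move(U, cfg["eps"])  # spatial transfer of S, U
    return S, U, I, R
```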
4. Discussion In this paper, we construct a multi-level spatio-temporal epidemiological model that combines a spatial Cellular Automaton (CA) with a temporal hybrid Susceptible-Undiagnosed-Infected-Removed (SUIR) model, accounting for human mobility across the counties. This new toolbox enables the projection of the state- and county-level COVID-19 prevalence over the 412 Landkreise in Germany, including t-day-ahead risk forecasts and the risk evaluation of public policies. Thanks to the modular design, replacing the modules with other, more elaborate spatial or temporal models conveniently yields a more accurate estimation of the COVID-19 risk. With the help of machine learning methods, we extract the dynamical parameters directly from the infection maps, which are topologically equivalent to the real geographic maps. After training the Conv2LSTM neural networks, composed of time-distributed CNNs and an LSTM, on a data set generated from the CA-SUIR model, we transfer the well-trained neural networks to the data collected from the open database of the RKI. The CA-SUIR model reproduces the evolution of the infection maps in March and September, acknowledged as the first and second waves of the COVID-19 pandemic. Even though the prediction in the current paper focuses on the infection dynamics, it is feasible to derive recovery and mortality rates from the model, provided accurate data sources are available. The simulations describe and predict the data properly in the initial 30-day phase, while showing a tendency toward a faster smoothing-out of the pronounced local fine structure and persistent hot spots compared with the data. This can be understood from the observation that the infection cases concentrate in the big cities, such as Berlin, Hamburg, München, Köln, and Frankfurt, more than at the other sites in the simulation, reflecting that high population density enhances the diffusion of COVID-19. The prediction could be evidently improved by introducing richer interactions across and within the counties. Practical improvements, e.g., replacing the geographic lattice with traffic networks [5, 34, 36, 37, 38] or introducing higher-resolution lattices beyond the county level, which also implies higher-order interactions among the residents [12, 20, 34, 39], can be implemented in future works. With the help of deep learning, the machine learns dynamical parameters matching the data of March, which are subsequently transferred to the prediction of the second wave. Starting in September, the simulated maps are remarkably consistent with the data. The slight uncertainty in extracting the parameters for the September dynamics, which might be rooted in the different development trends of the first and second waves, could be avoided by training the neural networks on a more heterogeneous data set; we will present this improvement in future papers. Compared with the prevailing COVID-19 risk prediction models, the machine-learning-assisted CA-SUIR model here shows that county-level risk evaluations can be displayed directly to governments and residents. The predicted effects of the different public health policies suggest that the transmission modes of the coronavirus can be shaped by efficient non-pharmaceutical interventions. Besides, the 7-day windows give robust long-term predictions, which is advantageous for timely assessment of the current situation. Another valuable application of our evaluation system is to improve the allocation of medical resources, which are routinely unequal across counties, some of which suffer from an epidemic far exceeding their medical capacity. It is of essential significance to implement such an intervenable risk evaluation model for decision-making on restarting the economy and on public health policies in the COVID-19 pandemic. 5. Acknowledgments The authors thank Esteban Vargas and Maria Babarossa for useful discussions and comments.
This work is supported by the BMBF through the ErUM-Data funding and the Samson AG AI grant (L.W., K.Z.), by the National Natural Science Foundation of China, Grant No. 11875002 (T.X., Y.J.), and by the Zhuobai Program of Beihang University (Y.J.). The authors also thank NVIDIA Corporation for the donation of NVIDIA GPUs. A. Infection Maps In order to simulate the population evolution on a lattice, the topological mapping is adopted in our CA computation. We first embed the map in a rectangular area whose length and width are chosen as the maximum extents of the map of Germany, as in Fig. 10 (left). The area is then segmented into uniform sites of L × L, and each site is labelled by its row and column numbers as $B_{mn}$, where $1 \le m, n \le L$. If a county occupies N sites, every population compartment (susceptible, undiagnosed, infected, and removed) is assigned equally to each of these sites; and if more than one county, or parts of counties, fall on a site, the site's population is set to the sum over all these counties or county parts. Because geographic distance is not important in our simulation, which focuses on the population distributions, each site is drawn as a square when plotting the distribution in Fig. 10 (right). In the current simulation, we confine the whole population to Germany and thus set the out-of-Germany transfer probability at the border to zero. Figure 10: Mapping from the real infection map of Germany onto the topologically equivalent lattice. The left map is divided into 25 × 25 sites, matching the size of the right part. The geometric shape of the square lattice is unrelated to the real geographic topography; a site on the lattice depicts a typical county-level unit of the real map. Figure 11: Comparison of simulated and RKI infection maps during March 2020 in Germany. The color bar uses a uniform orange scale; the darkest color is 2000 cases on a logarithmic scale. (a) Cumulative infection cases from the dynamical simulation with the fitted parameters at 2020-03-05, 2020-03-15, and 2020-03-25. (b) Cumulative infection cases in real profiles at the same dates.
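A sketch of this county-to-lattice aggregation is shown below; the input format (longitude, latitude, count) and the simplified point assignment, rather than the equal split of a county over multiple sites described above, are assumptions for illustration.

```python
import numpy as np

def to_lattice(counties, L=25):
    """Aggregate county-level counts onto an L x L topological lattice.

    `counties` is an iterable of (lon, lat, count) tuples; the bounding box of
    all coordinates defines the rectangle that is segmented into L x L sites.
    """
    lons = np.array([c[0] for c in counties])
    lats = np.array([c[1] for c in counties])
    grid = np.zeros((L, L))
    # Map each county's coordinates to a site index B_{mn}.
    m = np.clip(((lats - lats.min()) / (lats.ptp() + 1e-9) * L).astype(int), 0, L - 1)
    n = np.clip(((lons - lons.min()) / (lons.ptp() + 1e-9) * L).astype(int), 0, L - 1)
    for mi, ni, (_, _, count) in zip(m, n, counties):
        grid[mi, ni] += count                 # sum counties sharing a site
    return grid

# Example with two illustrative entries (lon, lat, cumulative cases).
grid = to_lattice([(13.40, 52.52, 2000), (11.58, 48.14, 1500)])
```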
B. SUIR model parameters explanation In this part, we discuss the fitting details, based on a mathematical simplification of the SUIR model. In the limit of a large susceptible population, and ignoring the recovery and mortality cases because of their uncertainty at the early stage, only two equations are relevant,
$$\frac{dU}{dt} = \Big(\lambda - \sigma - \frac{\sigma'}{\text{days}}\Big) U, \quad (5)$$
$$\frac{d(I + R)}{dt} = \sigma U. \quad (6)$$
The solutions are clearly derived as
$$U(t) = U_0 \exp\Big[\Big(\lambda - \sigma - \frac{\sigma'}{\text{days}}\Big) t\Big], \quad (7)$$
$$I + R = I_0 + R_0 - \frac{\sigma U_0}{\lambda - \sigma - \sigma'/\text{days}} + \frac{\sigma U(t)}{\lambda - \sigma - \sigma'/\text{days}}. \quad (8)$$
The fit of $I + R$ is now straightforward with $I + R = a^* + b^* \exp[c^* t]$. This means we cannot obtain all the independent parameters in the solution, but the following redefinitions are helpful,
$$a^* = I_0 + R_0 - \frac{\sigma U_0}{\lambda - \sigma - \sigma'/\text{days}}, \quad (9)$$
$$b^* = \frac{\sigma U_0}{\lambda - \sigma - \sigma'/\text{days}}, \quad (10)$$
$$c^* = \lambda - \sigma - \frac{\sigma'}{\text{days}}, \quad (11)$$
or, equivalently, only the following combinations can be obtained,
$$I_0 + R_0 = a^* + b^*, \quad (12)$$
$$\sigma U_0 = b^* c^*, \quad (13)$$
$$\lambda - \sigma - \frac{\sigma'}{\text{days}} = c^*. \quad (14)$$
Since only the product $\sigma U_0$ appears, it is difficult to determine σ and $U_0$ individually; this introduces an inevitable uncertainty in the two parameters, while the confidence in the multiplicative combination is preserved. C. Conv2LSTM Neural Networks Figure 12: The Conv2LSTM neural networks. The nuts and bolts of the network are: convolution layers within a TimeDistributed wrapper, from Convolution 2D (64, kernel size 3x3, 'ReLU', 'same' padding) to Convolution 2D (64, kernel size 3x3, strides 2, 'ReLU', 'same' padding) to Convolution 2D (64, kernel size 3x3, 'ReLU', 'same' padding) to Convolution 2D (64, kernel size 3x3, 'ReLU', 'same' padding) and Flatten; for the LSTM part shown in Fig. 12, 256 LSTM cells with 'sequence=False' and activation 'tanh' are used; the dense layer contains 256 neurons with ReLU activation, and a sigmoid before the output fits the normalized targets. Figure 13: Learning curve of the Conv2LSTM neural networks. The red line is the training curve, on which the Mean-Square Error (MSE) loss decreases with increasing epochs; the blue line, evaluated on the validation data set, shows similar behavior. The inset shows the R square increasing with training, indicating that the correlation between the network predictions and the truth becomes close.
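Based on the layer-by-layer description in Figure 12, the following is a sketch of the Conv2LSTM in Keras; the input shape follows the 7-day windows of 25×25 maps used above, and the 4-dimensional sigmoid output corresponds to the normalized targets, while any detail not stated in the caption (e.g., the optimizer) is an assumption.

```python
from tensorflow import keras
from tensorflow.keras import layers

def build_conv2lstm(window=7, L=25, n_targets=4):
    model = keras.Sequential([
        keras.Input(shape=(window, L, L, 1)),
        # Time-distributed convolutional feature extractor (Figure 12).
        layers.TimeDistributed(layers.Conv2D(64, 3, activation="relu", padding="same")),
        layers.TimeDistributed(layers.Conv2D(64, 3, strides=2, activation="relu", padding="same")),
        layers.TimeDistributed(layers.Conv2D(64, 3, activation="relu", padding="same")),
        layers.TimeDistributed(layers.Conv2D(64, 3, activation="relu", padding="same")),
        layers.TimeDistributed(layers.Flatten()),
        # LSTM over the 7-day sequence; only the final state is kept.
        layers.LSTM(256, activation="tanh", return_sequences=False),
        layers.Dense(256, activation="relu"),
        layers.Dense(n_targets, activation="sigmoid"),  # normalized targets
    ])
    # Optimizer is an assumption; the paper reports an MSE loss.
    model.compile(optimizer="adam", loss="mse")
    return model

model = build_conv2lstm()
model.summary()
```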
" + }, + { + "url": "http://arxiv.org/abs/2006.13182v1", + "title": "On the Global Optimality of Model-Agnostic Meta-Learning", + "abstract": "Model-agnostic meta-learning (MAML) formulates meta-learning as a bilevel\noptimization problem, where the inner level solves each subtask based on a\nshared prior, while the outer level searches for the optimal shared prior by\noptimizing its aggregated performance over all the subtasks. Despite its\nempirical success, MAML remains less understood in theory, especially in terms\nof its global optimality, due to the nonconvexity of the meta-objective (the\nouter-level objective). To bridge such a gap between theory and practice, we\ncharacterize the optimality gap of the stationary points attained by MAML for\nboth reinforcement learning and supervised learning, where the inner-level and\nouter-level problems are solved via first-order optimization methods. In\nparticular, our characterization connects the optimality gap of such stationary\npoints with (i) the functional geometry of inner-level objectives and (ii) the\nrepresentation power of function approximators, including linear models and\nneural networks. To the best of our knowledge, our analysis establishes the\nglobal optimality of MAML with nonconvex meta-objectives for the first time.", + "authors": "Lingxiao Wang, Qi Cai, Zhuoran Yang, Zhaoran Wang", + "published": "2020-06-23", + "updated": "2020-06-23", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "stat.ML" + ], + "main_content": "Introduction Meta-learning aims to find a prior that efficiently adapts to a new subtask based on past subtasks. One of the most popular meta-learning methods, model-agnostic meta-learning (MAML) (Finn et al., 2017a), is based on bilevel optimization, where the inner level solves each subtask based on a shared prior, while the outer level optimizes the aggregated performance of the shared prior over all the subtasks. In particular, MAML associates the solution to each subtask with the shared prior through one step of gradient descent based on the subtask data. Due to its model-agnostic property, MAML is widely adopted in reinforcement learning (Finn et al., 2017a,b; Xu et al., 2018; Nagabandi et al., 2018; Gupta et al., 2018; Yu et al., 2018; Mendonca et al., 2019) and supervised learning (Finn et al., 2017a; Li et al., 2017; Finn et al., 2018; Rakelly et al., 2018; Yoon et al., 2018). Despite its popularity in empirical studies, MAML is scarcely explored theoretically. In terms of the global optimality of MAML, Finn et al. (2019) show that the meta-objective is strongly convex assuming that the inner-level objective is strongly convex (in its finite-dimensional parameter). However, such an assumption fails to hold for neural function approximators, which leads to a gap between theory and practice. For nonconvex meta-objectives, Fallah et al. (2019) characterize the convergence of MAML to a stationary point under certain regularity conditions. Meanwhile, Rajeswaran et al. (2019) propose a variant of MAML that utilizes implicit gradients, which is also guaranteed to converge to a stationary point. However, the global optimality of such stationary points remains unclear. On the other hand, Pentina and Lampert (2014); Amit and Meir (2017) establish PAC-Bayes bounds for the generalization error of two variants of MAML. However, such generalization guarantees only apply to the global optima of the two meta-objectives rather than their stationary points. In this work, we characterize the global optimality of the ε-stationary points attained by MAML for both reinforcement learning (RL) and supervised learning (SL). For meta-RL, we study a variant of MAML, which associates the solution to each subtask with the shared prior, namely $\pi_\theta$, through one step of proximal policy optimization (PPO) (Schulman et al., 2015, 2017) in the inner level of optimization. In the outer level of optimization, we maximize the expected total reward associated with the shared prior aggregated over all the subtasks. We prove that the ε-stationary point attained by such an algorithm is (approximately) globally optimal given that the function approximator has sufficient representation power.
For example, for the linear function approximator $\pi_\theta(s, a) \propto \exp(\phi(s, a)^\top \theta)$, the optimality gap of the ε-stationary point is characterized by the representation power of the linear class $\{\phi(\cdot, \cdot)^\top v : v \in B\}$, where B is the parameter space (specified later). The core of our analysis is the functional one-point monotonicity (Facchinei and Pang, 2007) of the expected total reward J(π) with respect to the policy π (Liu et al., 2019) for each subtask. Based on a similar notion of functional geometry in the inner level of optimization, we establish similar results on the optimality gap of meta-SL. Moreover, our analysis of both meta-RL and meta-SL allows for neural function approximators. More specifically, we prove that the optimality gap of the attained ε-stationary points is characterized by the representation power of the corresponding classes of overparameterized two-layer neural networks. Challenge. We highlight that the bilevel structure of MAML makes the analysis of its global optimality challenging. In the simple case where the inner-level objective is strongly convex and smooth, Finn et al. (2019) show that the meta-objective is also strongly convex assuming that the stepsize of inner-level optimization is sufficiently small.
• In practice, however, both the inner-level objective and the meta-objective can be nonconvex, which leads to a gap between theory and practice. For example, the inner-level objective of meta-RL is nonconvex even in the (infinite-dimensional) functional space of policies.
• Even assuming that the inner-level objective is convex in the (infinite-dimensional) functional space, nonlinear function approximators, such as neural networks, can make the inner-level objective nonconvex in the finite-dimensional space of parameters.
• Furthermore, even for linear function approximators, the bilevel structure of MAML can make the meta-objective nonconvex in the finite-dimensional space of parameters, especially when the stepsize of inner-level optimization is large.
In this work, we tackle all these challenges by analyzing the global optimality of both meta-RL and meta-SL for both linear and neural function approximators. Contribution. Our contribution is three-fold. First, we propose a meta-RL algorithm and characterize the optimality gap of the ε-stationary point attained by such an algorithm for linear function approximators. Second, under an assumption on the functional convexity of the inner-level objective, we characterize the optimality gap of the ε-stationary point attained by meta-SL. Finally, we extend our optimality analysis for linear function approximators to handle overparameterized two-layer neural networks. To the best of our knowledge, our analysis establishes the global optimality of MAML with nonconvex meta-objectives for the first time. Related Work. Meta-learning is studied by various communities (Evgeniou and Pontil, 2004; Thrun and Pratt, 2012; Pentina and Lampert, 2014; Amit and Meir, 2017; Nichol et al., 2018; Nichol and Schulman, 2018; Khodak et al., 2019). See Pan and Yang (2009); Weiss et al. (2016) for surveys of meta-learning and Taylor and Stone (2009) for a survey of meta-RL. Our work focuses on the model-agnostic formulation of meta-learning (MAML) proposed by Finn et al. (2017a). In contrast to existing empirical studies, the theoretical analysis of MAML is relatively scarce.
Fallah et al. (2019) establish the convergence of three variants of MAML for nonconvex meta-objectives. Rajeswaran et al. (2019) propose a variant of MAML that utilizes implicit gradients of the inner level of optimization and establish the convergence of such an algorithm. This line of work characterizes the convergence of MAML to the stationary points of the corresponding meta-objectives. Our work is complementary to this line of work in the sense that we characterize the global optimality of the stationary points attained by MAML. Meanwhile, Finn et al. (2019) propose an online algorithm for MAML with regret guarantees, which rely on the strong convexity of the meta-objectives. In contrast, our work tackles nonconvex meta-objectives, which allows for neural function approximators, and characterizes the global optimality of MAML. Mendonca et al. (2019) propose a meta-policy search method and characterize the global optimality for solving the subtasks under the assumption that the meta-objective is (approximately) globally optimal. Our work is complementary to their work in the sense that we characterize the global optimality of MAML in terms of optimizing the meta-objective. See also the concurrent work (Wang et al., 2020). There is a large body of literature that studies the training and generalization of overparameterized neural networks for SL (Daniely, 2017; Jacot et al., 2018; Wu et al., 2018; Allen-Zhu et al., 2018a,b; Du et al., 2018a,b; Zou et al., 2018; Chizat and Bach, 2018; Li and Liang, 2018; Cao and Gu, 2019a,b; Arora et al., 2019; Lee et al., 2019; Bai and Lee, 2019). See Fan et al. (2019) for a survey. In comparison, we study MAML with overparameterized neural networks for both RL and SL. The bilevel structure of MAML makes our analysis significantly more challenging than that of RL and SL. Notation. We denote by $[n] = \{1, 2, \ldots, n\}$ the index set. Also, we denote by $x = ([x]_1^\top, \ldots, [x]_m^\top)^\top \in \mathbb{R}^{md}$ a vector in $\mathbb{R}^{md}$, where $[x]_k \in \mathbb{R}^d$ is the k-th block of x for $k \in [m]$. For a real-valued function f defined on X, we denote by $\|f(\cdot)\|_{p,\nu} = \{\int_X f^p(x)\,d\nu(x)\}^{1/p}$ the $L^p(\nu)$-norm of f, where ν is a measure on X. We write $\|f(\cdot)\|_{2,\nu} = \|f(\cdot)\|_\nu$ for notational simplicity and $\|f\|_{p,\nu} = \|f(\cdot)\|_{p,\nu}$ when the variable is clear from the context. For a vector $\phi \in \mathbb{R}^n$, we denote by $\|\phi\|_2$ the $\ell_2$-norm of φ. 2 Background In this section, we briefly introduce reinforcement learning and meta-learning. 2.1 Reinforcement Learning We define a Markov decision process (MDP) by a tuple (S, A, P, r, γ, ζ), where S and A are the state and action spaces, respectively, P is the Markov kernel, r is the reward function, which is possibly stochastic, γ ∈ (0, 1) is the discount factor, and ζ is the initial state distribution over S. In the sequel, we assume that A is finite. An agent interacts with the environment as follows. At each step t, the agent observes the state $s_t$ of the environment, takes the action $a_t$, and receives the reward $r(s_t, a_t)$. The environment then transits into the next state according to the distribution $P(\cdot \mid s_t, a_t)$ over S. We define a policy π as a mapping from S to distributions over A. Specifically, $\pi(a \mid s)$ gives the probability of taking the action a at the state s.
Given a policy π, we define for all (s, a) ∈ S × A the corresponding state- and action-value functions $V^\pi$ and $Q^\pi$ as follows,
$$V^\pi(s) = (1 - \gamma) \cdot \mathbb{E}\Big[\sum_{t=0}^\infty \gamma^t \cdot r(s_t, a_t) \,\Big|\, s_0 = s\Big], \quad (2.1)$$
$$Q^\pi(s, a) = (1 - \gamma) \cdot \mathbb{E}\Big[\sum_{t=0}^\infty \gamma^t \cdot r(s_t, a_t) \,\Big|\, s_0 = s, a_0 = a\Big], \quad (2.2)$$
where $s_{t+1} \sim P(\cdot \mid s_t, a_t)$ and $a_t \sim \pi(\cdot \mid s_t)$ for all $t \ge 0$. Correspondingly, the advantage function $A^\pi$ is defined as follows,
$$A^\pi(s, a) = Q^\pi(s, a) - V^\pi(s), \quad \forall (s, a) \in S \times A. \quad (2.3)$$
A policy π induces a state visitation measure $\nu_\pi$ on S, which takes the form of
$$\nu_\pi(s) = (1 - \gamma) \cdot \sum_{t=0}^\infty \gamma^t \cdot P(s_t = s), \quad (2.4)$$
where $s_0 \sim \zeta$, $s_{t+1} \sim P(\cdot \mid s_t, a_t)$, and $a_t \sim \pi(\cdot \mid s_t)$ for all $t \ge 0$. Correspondingly, we define the state-action visitation measure by $\sigma_\pi(s, a) = \pi(a \mid s) \cdot \nu_\pi(s)$ for all $(s, a) \in S \times A$, which is a probability distribution over $S \times A$. The goal of reinforcement learning is to find the optimal policy $\pi^*$ that maximizes the expected total reward J(π), which is defined as
$$J(\pi) = \mathbb{E}_{s \sim \zeta}\big[V^\pi(s)\big] = \mathbb{E}_{(s,a) \sim \sigma_\pi}\big[r(s, a)\big]. \quad (2.5)$$
When S is continuous, maximizing J(π) over all possible π is computationally intractable. A common alternative is to parameterize the policy by $\pi_\theta$ with the parameter θ ∈ Θ, where Θ is the parameter space, and maximize $J(\pi_\theta)$ over θ ∈ Θ. 2.2 Meta-Learning In meta-learning, the meta-learner is given a sample of learning subtasks $\{T_i\}_{i \in [n]}$ drawn independently from the task distribution ι and a set of parameterized algorithms $\mathcal{A} = \{A_\theta : \theta \in \Theta\}$, where Θ is the parameter space. Specifically, given θ, the algorithm $A_\theta \in \mathcal{A}$ maps from a learning subtask T to its desired outcome. For example, an algorithm that solves reinforcement learning subtasks maps from an MDP T = (S, A, P, r, γ, ζ) to a policy π, aiming at maximizing the expected total reward J(π) defined in (2.5). As another example, given a hypothesis class H, a distribution D over Z, which is the space of data points, and a loss function ℓ: H × Z → ℝ, a supervised learning subtask aims at minimizing the risk $\mathbb{E}_{z \sim D}[\ell(h, z)]$ over h ∈ H. We denote the supervised learning subtask T by the tuple (D, ℓ, H). Similarly, an algorithm that solves supervised learning subtasks maps from T = (D, ℓ, H) to a hypothesis h ∈ H, aiming at minimizing the risk $R(h) = \mathbb{E}_{z \sim D}[\ell(h, z)]$ over h ∈ H. In what follows, we denote by $H_T$ the objective of a learning subtask T: if T is a reinforcement learning subtask, we have $H_T = J$, and if T is a supervised learning subtask, we have $H_T = R$. The goal of the meta-learner is to find $\theta^* \in \Theta$ that optimizes the population version of the meta-objective L(θ), which is defined as
$$L(\theta) = \mathbb{E}_{T \sim \iota}\big[H_T\big(A_\theta(T)\big)\big]. \quad (2.6)$$
To approximately optimize L defined in (2.6) based on the sample $\{T_i\}_{i \in [n]}$ of subtasks, the meta-learner optimizes the following empirical version of the meta-objective,
$$L(\theta) = \frac{1}{n} \sum_{i=1}^n H_{T_i}\big(A_\theta(T_i)\big). \quad (2.7)$$
The algorithm $A_{\theta^*}$ corresponding to the global optimum $\theta^*$ of (2.7) incorporates the past experience through the observed learning subtasks $\{T_i\}_{i \in [n]}$, and therefore facilitates the learning of a new subtask (Pentina and Lampert, 2014; Finn et al., 2017a; Amit and Meir, 2017; Yoon et al., 2018). As an example, in model-agnostic meta-learning (MAML) (Finn et al., 2017a) for supervised learning, the hypothesis class H is parameterized by $h_\theta$ with θ ∈ Θ, and the algorithm $A_\theta$ performs one step of gradient descent with θ ∈ Θ as the starting point. In this setting, MAML aims to find the globally optimal starting point $\theta^*$ by minimizing the following meta-objective by gradient descent,
$$L(\theta) = \frac{1}{n} \sum_{i=1}^n R_i\big(h_{\theta - \eta \cdot \nabla_\theta R_i(h_\theta)}\big),$$
where η is the learning rate of $A_\theta$ and $R_i(h) = \mathbb{E}_{z \sim D_i}[\ell(h, z)]$ is the risk of the supervised learning subtask $T_i = (D_i, \ell, H)$. Similarly, in MAML for reinforcement learning, the algorithm $A_\theta$ performs, e.g., one step of policy gradient with θ as the starting point. We call $\pi_\theta$ the main effect in the sequel. MAML aims to find the globally optimal main effect $\pi_{\theta^*}$ by maximizing the following meta-objective by gradient ascent,
$$L(\theta) = \frac{1}{n} \sum_{i=1}^n J_i\big(\pi_{\theta + \eta \cdot \nabla_\theta J_i(\pi_\theta)}\big),$$
where η is the learning rate of $A_\theta$ and $J_i$ is the expected total reward of the reinforcement learning subtask $T_i = (S, A, P_i, r_i, \gamma_i, \zeta_i)$.
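As an illustration of this bilevel structure, the following is a minimal PyTorch sketch of the supervised MAML meta-objective, in which the inner step is one explicit gradient-descent update on each subtask's risk; the model, loss, and data containers are placeholders, not the construction analyzed in this paper.

```python
import torch

def maml_meta_loss(model_params, tasks, loss_fn, forward, eta=0.01):
    """Empirical MAML meta-objective: (1/n) * sum_i R_i(h_{theta - eta * grad R_i(h_theta)}).

    `tasks` is a list of (x, y) batches, one per subtask; `forward(params, x)`
    evaluates the hypothesis h_theta; all names here are illustrative.
    """
    meta_loss = 0.0
    for x, y in tasks:
        # Inner level: one gradient step on the subtask risk, starting from theta.
        risk = loss_fn(forward(model_params, x), y)
        grads = torch.autograd.grad(risk, model_params, create_graph=True)
        adapted = [p - eta * g for p, g in zip(model_params, grads)]
        # Outer level: evaluate the subtask risk at the adapted parameters.
        meta_loss = meta_loss + loss_fn(forward(adapted, x), y)
    return meta_loss / len(tasks)
```

Note that `create_graph=True` keeps the inner gradient step differentiable, so the outer-level gradient flows through the adaptation, which is exactly what distinguishes MAML from joint training.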
We parameterize the main effect $\pi_\theta$ as the following energy-based policy (Haarnoja et al., 2017),
$$\pi_\theta(a \,|\, s) = \frac{\exp\big(1/\tau \cdot \phi(s, a)^\top \theta\big)}{\sum_{a' \in \mathcal{A}} \exp\big(1/\tau \cdot \phi(s, a')^\top \theta\big)}, \quad \forall (s, a) \in \mathcal{S} \times \mathcal{A}, \quad (3.2)$$
where $\phi : \mathcal{S} \times \mathcal{A} \mapsto \mathbb{R}^d$ is the feature mapping, $\theta \in \mathbb{R}^d$ is the parameter, $\phi(\cdot, \cdot)^\top \theta$ is the energy function, and $\tau$ is the temperature parameter. The maximizer $\pi_{i,\theta} = A_\theta(\mathcal{S}, \mathcal{A}, P_i, r_i, \gamma_i, \zeta_i)$ defined in (3.1) then takes the following form (Liu et al., 2019, Proposition 3.1),
$$\pi_{i,\theta}(\cdot \,|\, s) \propto \exp\big(1/\tau \cdot \phi(s, \cdot)^\top \theta + \eta \cdot Q^{\pi_\theta}_i(s, \cdot)\big), \quad \forall s \in \mathcal{S}. \quad (3.3)$$
The goal of meta-RL is to find the globally optimal main effect $\pi_\theta$ by maximizing the following meta-objective,
$$L(\theta) = \frac{1}{n} \cdot \sum_{i=1}^{n} J_i(\pi_{i,\theta}), \quad \text{where } \pi_{i,\theta} = A_\theta(\mathcal{S}, \mathcal{A}, P_i, r_i, \gamma_i, \zeta_i). \quad (3.4)$$
Here $J_i$ is the expected total reward defined in (2.5) corresponding to the MDP $(\mathcal{S}, \mathcal{A}, P_i, r_i, \gamma_i, \zeta_i)$ for all $i \in [n]$. To maximize $L(\theta)$, we use gradient ascent, which iteratively updates $\theta$ as follows,
$$\theta_{\ell+1} \leftarrow \theta_\ell + \alpha_\ell \cdot \nabla_\theta L(\theta_\ell), \quad \text{for } \ell = 0, 1, \ldots, T - 1, \quad (3.5)$$
where $\nabla_\theta L(\theta_\ell)$ is the gradient of the meta-objective at $\theta_\ell$, $\alpha_\ell$ is the learning rate at the $\ell$-th iteration, and $T$ is the number of iterations. It remains to calculate the gradient $\nabla_\theta L(\theta)$. To this end, we first define the state-action visitation measures induced by the main effect $\pi_\theta$, and then calculate $\nabla_\theta L(\theta)$ in closed form based on such state-action visitation measures.

Definition 3.1 (Visitation Measures of Main Effect). For all $i \in [n]$, given the MDP $(\mathcal{S}, \mathcal{A}, P_i, r_i, \gamma_i, \zeta_i)$ and the main effect $\pi_\theta$, we denote by $\sigma_{i,\pi_\theta}$ the state-action visitation measure induced by the main effect $\pi_\theta$. We further define the state-action visitation measure $\sigma^{(s,a)}_{i,\pi_\theta}$ initialized at $(s, a) \in \mathcal{S} \times \mathcal{A}$ as follows,
$$\sigma^{(s,a)}_{i,\pi_\theta}(s', a') = (1 - \gamma_i) \cdot \sum_{t=0}^{\infty} \gamma_i^t \cdot \mathbb{P}(s_t = s', a_t = a'), \quad \forall (s', a') \in \mathcal{S} \times \mathcal{A}, \quad (3.6)$$
where $s_0 \sim P_i(\cdot \,|\, s, a)$, $s_{t+1} \sim P_i(\cdot \,|\, s_t, a_t)$, and $a_t \sim \pi_\theta(\cdot \,|\, s_t)$ for all $t \geq 0$. In other words, given the transition kernel $P_i$ and the discount factor $\gamma_i$, $\sigma^{(s,a)}_{i,\pi_\theta}$ is the state-action visitation measure induced by the main effect $\pi_\theta$ where the initial state distribution is given by $s_0 \sim P_i(\cdot \,|\, s, a)$.

Based on the policy gradient theorem (Sutton and Barto, 2018), the following proposition calculates the gradient of the meta-objective $L$ defined in (3.4) with respect to the parameter $\theta$ of the main effect $\pi_\theta$.

Proposition 3.2 (Gradient of Meta-Objective).
It holds for all $\theta \in \mathbb{R}^d$ that
$$\nabla_\theta L(\theta) = \frac{1}{n} \cdot \sum_{i=1}^{n} \mathbb{E}_{(s, a) \sim \sigma_{\pi_{i,\theta}}}\big[h_{i,\theta}(s, a) \cdot A^{\pi_{i,\theta}}_i(s, a)\big], \quad (3.7)$$
where the auxiliary function $h_{i,\theta}$ takes the form of
$$h_{i,\theta}(s, a) = 1/\tau \cdot \phi(s, a) + \eta \cdot \gamma_i/\tau \cdot \mathbb{E}_{(s', a') \sim \sigma^{(s,a)}_{i,\pi_\theta}}\big[\phi(s', a') \cdot A^{\pi_\theta}_i(s', a')\big]. \quad (3.8)$$
Here $A^{\pi_{i,\theta}}_i$ and $A^{\pi_\theta}_i$ are the advantage functions of the policy $\pi_{i,\theta}$ and the main effect $\pi_\theta$, respectively, both corresponding to the MDP $(\mathcal{S}, \mathcal{A}, P_i, r_i, \gamma_i, \zeta_i)$. Also, $\sigma^{(s,a)}_{i,\pi_\theta}$ is the state-action visitation measure induced by the main effect $\pi_\theta$ defined in Definition 3.1, and $\sigma_{\pi_{i,\theta}}$ is the state-action visitation measure induced by the policy $\pi_{i,\theta}$, both corresponding to the MDP $(\mathcal{S}, \mathcal{A}, P_i, r_i, \gamma_i, \zeta_i)$.

Proof. See §C.1 for a detailed proof.

In the sequel, we assume without loss of generality that the action-value function $Q^\pi$ is available once we obtain the policy $\pi$, and that the expectations over the state-action visitation measures in (3.7) and (3.8) of Proposition 3.2 are available once we obtain the policies $\{\pi_{i,\theta}\}_{i \in [n]}$ and the main effect $\pi_\theta$. We summarize meta-RL in Algorithm 1. In practice, we can estimate the action-value functions by temporal difference learning (Sutton, 1988) and the expectations over the visitation measures by Monte Carlo sampling (Konda, 2002).

Algorithm 1 Meta-RL
Require: MDPs $\{(\mathcal{S}, \mathcal{A}, P_i, r_i, \gamma_i, \zeta_i)\}_{i \in [n]}$ sampled from the task distribution $\iota$, feature mapping $\phi$, number of iterations $T$, learning rates $\{\alpha_\ell\}_{\ell \in [T]}$, temperature parameter $\tau$, tuning parameter $\eta$, initial parameter $\theta_0$.
1: for $\ell = 0, \ldots, T - 1$ do
2:   for $i \in [n]$ do
3:     Obtain the action-value function $Q^{\pi_{\theta_\ell}}_i$ and the advantage function $A^{\pi_{\theta_\ell}}_i$ corresponding to the MDP $(\mathcal{S}, \mathcal{A}, P_i, r_i, \gamma_i, \zeta_i)$ and the main effect $\pi_{\theta_\ell}$.
4:     Update the policy $\pi_{i,\theta_\ell}(\cdot \,|\, s) \propto \exp\big(1/\tau \cdot \phi(s, \cdot)^\top \theta_\ell + \eta \cdot Q^{\pi_{\theta_\ell}}_i(s, \cdot)\big)$.
5:     Obtain the advantage function $A^{\pi_{i,\theta_\ell}}_i$ corresponding to the MDP $(\mathcal{S}, \mathcal{A}, P_i, r_i, \gamma_i, \zeta_i)$.
6:     Compute the auxiliary function $h_{i,\theta_\ell}(s, a) \leftarrow 1/\tau \cdot \phi(s, a) + \gamma_i \cdot \eta/\tau \cdot \mathbb{E}_{(s', a') \sim \sigma^{(s,a)}_{i,\pi_{\theta_\ell}}}\big[\phi(s', a') \cdot A^{\pi_{\theta_\ell}}_i(s', a')\big]$.
7:   end for
8:   Compute the gradient of the meta-objective $\nabla_\theta L(\theta_\ell) \leftarrow \frac{1}{n} \cdot \sum_{i=1}^{n} \mathbb{E}_{(s, a) \sim \sigma_{\pi_{i,\theta_\ell}}}\big[h_{i,\theta_\ell}(s, a) \cdot A^{\pi_{i,\theta_\ell}}_i(s, a)\big]$.
9:   Update the parameter of the main effect $\theta_{\ell+1} \leftarrow \theta_\ell + \alpha_\ell \cdot \nabla_\theta L(\theta_\ell)$.
10:  Update the main effect $\pi_{\theta_{\ell+1}}(\cdot \,|\, s) \propto \exp\big(1/\tau \cdot \phi(s, \cdot)^\top \theta_{\ell+1}\big)$.
11: end for
12: Output: $\theta_T$ and $\pi_{\theta_T}$.
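The closed-form update in Line 4 is easy to exercise numerically. Below is a minimal numpy sketch (the toy MDP, one-hot features, and hyperparameters are our own assumptions) of Lines 4 and 10 of Algorithm 1, computing $Q^{\pi_\theta}_i$ exactly by solving the Bellman equation, in line with the availability assumption above; the estimation of $h_{i,\theta}$ and of the meta-gradient is omitted:

```python
import numpy as np

# Sketch of the policy updates in Algorithm 1 (Lines 4 and 10) on a toy
# 2-state/2-action MDP. Q is computed exactly, matching the assumption that
# Q^{pi_theta} is available once pi_theta is given.
P = np.array([[[0.9, 0.1], [0.2, 0.8]],
              [[0.5, 0.5], [0.1, 0.9]]])          # P[s, a, s']
r = np.array([[0.0, 0.5], [1.0, 0.2]])            # r[s, a]
gamma, tau, eta = 0.9, 1.0, 0.5
phi = np.eye(4).reshape(2, 2, 4)                  # phi[s, a] in R^4 (one-hot)

def softmax_policy(theta):
    """Energy-based policy (3.2): pi(a|s) proportional to exp(phi^T theta / tau)."""
    logits = phi @ theta / tau                    # shape (2, 2)
    z = np.exp(logits - logits.max(axis=1, keepdims=True))
    return z / z.sum(axis=1, keepdims=True)

def q_function(pi):
    """Exact Q under the (1 - gamma)-normalized convention of (2.1)-(2.2)."""
    P_pi = np.einsum('sa,sax->sx', pi, P)
    r_pi = np.einsum('sa,sa->s', pi, r)
    V = np.linalg.solve(np.eye(2) - gamma * P_pi, (1 - gamma) * r_pi)
    return (1 - gamma) * r + gamma * P @ V        # Q[s, a]

theta = np.zeros(4)
Q = q_function(softmax_policy(theta))
# Line 4: closed-form PPO-variant maximizer (3.3).
logits = phi @ theta / tau + eta * Q
pi_i = np.exp(logits) / np.exp(logits).sum(axis=1, keepdims=True)
print("pi_{i,theta}(a|s):\n", np.round(pi_i, 3))
```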
3.2 Theoretical Results
In this section, we analyze the global optimality of the $\epsilon$-stationary point attained by meta-RL (Algorithm 1). In the sequel, we assume that the reward functions $\{r_i\}_{i \in [n]}$ are upper bounded by an absolute constant $Q_{\max} > 0$ in absolute value. It then follows from (2.1) and (2.2) that $|V^\pi_i(s)|$ and $|Q^\pi_i(s, a)|$ are upper bounded by $Q_{\max}$ for all $i \in [n]$ and $(s, a) \in \mathcal{S} \times \mathcal{A}$. Here we define $V^\pi_i$ and $Q^\pi_i$ as the state- and action-value functions of the policy $\pi$, respectively, corresponding to the MDP $(\mathcal{S}, \mathcal{A}, P_i, r_i, \gamma_i, \zeta_i)$. To analyze the global optimality of meta-RL, we define the following meta-visitation measures induced by the main effect $\pi_\theta$.

Definition 3.3 (Meta-Visitation Measures). We define the joint meta-visitation measure $\rho_{i,\pi_\theta}$ induced by the main effect $\pi_\theta$ and the policy $\pi_{i,\theta}$ as follows,
$$\rho_{i,\pi_\theta}(s', a', s, a) = \sigma^{(s,a)}_{i,\pi_\theta}(s', a') \cdot \sigma_{\pi_{i,\theta}}(s, a), \quad \forall (s', a', s, a) \in \mathcal{S} \times \mathcal{A} \times \mathcal{S} \times \mathcal{A}. \quad (3.9)$$
We further define the meta-visitation measure $\varsigma_{i,\pi_\theta}$ as the marginal distribution of the joint meta-visitation measure $\rho_{i,\pi_\theta}$ over $(s', a')$, that is,
$$\varsigma_{i,\pi_\theta}(s', a') = \mathbb{E}_{(s, a) \sim \sigma_{\pi_{i,\theta}}}\big[\sigma^{(s,a)}_{i,\pi_\theta}(s', a')\big], \quad \forall (s', a') \in \mathcal{S} \times \mathcal{A}. \quad (3.10)$$
In addition, we define the mixed meta-visitation measure $\varrho_{\pi_\theta}$ over all the subtasks as follows,
$$\varrho_{\pi_\theta}(s', a') = \frac{1}{n} \cdot \sum_{i=1}^{n} \varsigma_{i,\pi_\theta}(s', a'), \quad \forall (s', a') \in \mathcal{S} \times \mathcal{A}. \quad (3.11)$$
In other words, the meta-visitation measure $\varsigma_{i,\pi_\theta}$ is the state-action visitation measure induced by $\pi_\theta$ given the transition kernel $P_i$, the discount factor $\gamma_i$, and the initial state distribution $s_0 \sim \mathbb{E}_{(s, a) \sim \sigma_{\pi_{i,\theta}}}[P_i(\cdot \,|\, s, a)]$. In what follows, we impose an assumption on the meta-visitation measures defined in Definition 3.3.

Assumption 3.4 (Regularity Condition on Meta-Visitation Measures). We assume for all $\theta \in \mathbb{R}^d$ and $i \in [n]$ that
$$\mathbb{E}_{(s', a') \sim \varrho_{\pi_\theta}}\Big[\big(\mathrm{d}\sigma_{\pi_{i,\theta}}/\mathrm{d}\varrho_{\pi_\theta}(s', a')\big)^2\Big] \leq C_0^2, \quad (3.12)$$
$$\mathbb{E}_{(s', a') \sim \varrho_{\pi_\theta}}\Big[\big(\mathrm{d}\varsigma_{i,\pi_\theta}/\mathrm{d}\varrho_{\pi_\theta}(s', a')\big)^2\Big] \leq C_0^2, \quad (3.13)$$
where $C_0 > 0$ is an absolute constant. Here $\varsigma_{i,\pi_\theta}$ and $\varrho_{\pi_\theta}$ are the meta-visitation measure and the mixed meta-visitation measure induced by the main effect $\pi_\theta$, which are defined in (3.10) and (3.11) of Definition 3.3, respectively. Meanwhile, $\sigma_{\pi_{i,\theta}}$ is the state-action visitation measure induced by the policy $\pi_{i,\theta}$, which is defined in (2.4). Here $\mathrm{d}\sigma_{\pi_{i,\theta}}/\mathrm{d}\varrho_{\pi_\theta}$ and $\mathrm{d}\varsigma_{i,\pi_\theta}/\mathrm{d}\varrho_{\pi_\theta}$ are the Radon-Nikodym derivatives.

According to (3.11) of Definition 3.3, the upper bound in (3.12) of Assumption 3.4 holds if the $L_2(\varrho_{\pi_\theta})$-norm of $\mathrm{d}\sigma_{\pi_{i,\theta}}/\mathrm{d}\varsigma_{j,\pi_\theta}$ is upper bounded by $C_0$ for all $i, j \in [n]$. For $i = j$, note that $\pi_{i,\theta}$ is obtained by one step of PPO with $\pi_\theta$ as the starting point. Thus, for a sufficiently small tuning parameter $\eta$ in (3.3), $\pi_{i,\theta}$ is close to $\pi_\theta$. Hence, the assumption that $\mathrm{d}\sigma_{\pi_{i,\theta}}/\mathrm{d}\varsigma_{j,\pi_\theta}$ has an upper bounded $L_2(\varrho_{\pi_\theta})$-norm for all $i = j$ is a mild regularity condition.
For $i \neq j$, to ensure the upper bound on the $L_2(\varrho_{\pi_\theta})$-norms of $\mathrm{d}\sigma_{\pi_{i,\theta}}/\mathrm{d}\varsigma_{j,\pi_\theta}$ in (3.12), Assumption 3.4 requires the task distribution $\iota$ to generate similar MDPs so that the meta-visitation measures $\{\varsigma_{i,\pi_\theta}\}_{i \in [n]}$ are similar across all the subtasks indexed by $i \in [n]$. Similarly, to ensure the upper bound in (3.13), Assumption 3.4 also requires that the meta-visitation measures $\{\varsigma_{i,\pi_\theta}\}_{i \in [n]}$ are similar across all the subtasks indexed by $i \in [n]$.

The following theorem characterizes the optimality gap of the $\epsilon$-stationary point attained by meta-RL (Algorithm 1). Let $\theta^*$ be a global maximizer of the meta-objective $L(\theta)$ defined in (3.4). For all $(s', a') \in \mathcal{S} \times \mathcal{A}$ and $\omega \in \mathbb{R}^d$, we define
$$f_\omega(s', a') = \bigg(\sum_{i=1}^{n} \frac{A^{\pi_{i,\omega}}_i(s', a')}{1 - \gamma_i} \cdot \frac{\mathrm{d}\sigma_{\pi_{i,\theta^*}}}{\mathrm{d}\varrho_{\pi_\omega}}(s', a')\bigg) \bigg/ \bigg(\sum_{i=1}^{n} g_{i,\omega}(s', a') \cdot \frac{\mathrm{d}\varsigma_{i,\pi_\omega}}{\mathrm{d}\varrho_{\pi_\omega}}(s', a')\bigg), \quad (3.14)$$
where we define $g_{i,\omega}$ as follows,
$$g_{i,\omega}(s', a') = 1/\tau \cdot A^{\pi_{i,\omega}}_i(s', a') \cdot \big(\mathrm{d}\sigma_{\pi_{i,\omega}}/\mathrm{d}\varsigma_{i,\pi_\omega}\big)(s', a') + \gamma_i \cdot \eta/\tau \cdot G_{i,\pi_\omega}(s', a') \cdot A^{\pi_\omega}_i(s', a').$$
Here $\tau$ is the temperature parameter in (3.2), $\eta$ is the tuning parameter defined in (3.1), $A^{\pi_{i,\omega}}_i$ and $A^{\pi_\omega}_i$ are the advantage functions of the policy $\pi_{i,\omega}$ and the main effect $\pi_\omega$, respectively, corresponding to the MDP $(\mathcal{S}, \mathcal{A}, P_i, r_i, \gamma_i, \zeta_i)$, and $G_{i,\pi_\omega}$ is defined as follows,
$$G_{i,\pi_\omega}(s', a') = \mathbb{E}_{(s', a', s, a) \sim \rho_{i,\pi_\omega}}\big[A^{\pi_{i,\omega}}_i(s, a) \,\big|\, s', a'\big], \quad (3.15)$$
where $\rho_{i,\pi_\omega}$ is the joint meta-visitation measure defined in (3.9) of Definition 3.3.

Theorem 3.5 (Optimality Gap of $\epsilon$-Stationary Point). Under Assumption 3.4, for all $R > 0$, $\omega \in \mathbb{R}^d$, and $\epsilon > 0$ such that
$$\nabla_\omega L(\omega)^\top v \leq \epsilon, \quad \forall v \in \mathcal{B} = \{\theta \in \mathbb{R}^d : \|\theta\|_2 \leq 1\},$$
we have
$$L(\theta^*) - L(\omega) \leq R \cdot \epsilon + 2C_0 \cdot Q_{\max}/\tau \cdot (1 + 2Q_{\max} \cdot \gamma \cdot \eta) \cdot \inf_{v \in \mathcal{B}_R} \big\|f_\omega(\cdot, \cdot) - \phi(\cdot, \cdot)^\top v\big\|_{\varrho_{\pi_\omega}}, \quad (3.16)$$
where $\mathcal{B}_R = \{\theta \in \mathbb{R}^d : \|\theta\|_2 \leq R\}$, $\gamma = (\sum_{i=1}^{n} \gamma_i)/n$, $C_0$ is defined in Assumption 3.4, $\tau$ is the temperature parameter in (3.2), $\eta$ is the tuning parameter defined in (3.1), and $Q_{\max}$ is the upper bound of the reward functions $\{r_i\}_{i \in [n]}$ in absolute value.

Proof. See §B.1 for a detailed proof.

By Theorem 3.5, the global optimality of the $\epsilon$-stationary point $\omega$ hinges on the representation power of the linear class $\{\phi(\cdot)^\top \theta : \theta \in \mathcal{B}_R\}$. More specifically, if the function $f_\omega$ defined in (3.14) is well approximated by $\phi(\cdot)^\top \theta$ for a parameter $\theta \in \mathcal{B}_R$, then $\omega$ is approximately globally optimal.

4 Meta-Supervised Learning
In this section, we present the analysis of meta-supervised learning (meta-SL). We first define the detailed problem setup of meta-SL and present a meta-SL algorithm.
We then characterize the global optimality of the stationary point attained by such an algorithm.

4.1 Problem Setup and Algorithm
In meta-SL, the meta-learner observes a sample of supervised learning subtasks $\{(\mathcal{D}_i, \ell, \mathcal{H})\}_{i \in [n]}$ drawn independently from a task distribution $\iota$. Specifically, each subtask $(\mathcal{D}_i, \ell, \mathcal{H})$ consists of a distribution $\mathcal{D}_i$ over $\mathcal{X} \times \mathcal{Y}$, where $\mathcal{Y} \subseteq \mathbb{R}$, a loss function $\ell : \mathcal{H} \times \mathcal{X} \times \mathcal{Y} \mapsto \mathbb{R}$, and a hypothesis class $\mathcal{H}$. Each hypothesis $h \in \mathcal{H}$ is a mapping from $\mathcal{X}$ to $\mathcal{Y}$. The goal of the supervised learning subtask $(\mathcal{D}_i, \ell, \mathcal{H})$ is to obtain the following hypothesis,
$$h^*_i = \operatorname*{argmin}_{h \in \mathcal{H}} R_i(h) = \operatorname*{argmin}_{h \in \mathcal{H}} \mathbb{E}_{z \sim \mathcal{D}_i}\big[\ell(h, z)\big], \quad (4.1)$$
where $R_i(h) = \mathbb{E}_{z \sim \mathcal{D}_i}[\ell(h, z)]$ is the risk of $h \in \mathcal{H}$. To approximately attain the minimizer defined in (4.1), we parameterize the hypothesis class $\mathcal{H}$ by $\mathcal{H}_\theta$ with a feature mapping $\phi : \mathcal{X} \mapsto \mathbb{R}^d$ as follows,
$$\mathcal{H}_\theta = \big\{h_\theta(\cdot) = \phi(\cdot)^\top \theta : \theta \in \mathbb{R}^d\big\}, \quad (4.2)$$
and minimize $R_i(h_\theta)$ over $\theta \in \mathbb{R}^d$. We set the algorithm $A_\theta$ in (2.7), which solves $(\mathcal{D}_i, \ell, \mathcal{H})$, to be one step of gradient descent with the starting point $\theta$, that is,
$$A_\theta(\mathcal{D}_i, \ell, \mathcal{H}) = h_{\theta - \eta \cdot \nabla_\theta R_i(h_\theta)}. \quad (4.3)$$
Here $\eta$ is the learning rate of $A_\theta$. The goal of meta-SL is to minimize the following meta-objective,
$$L(\theta) = \frac{1}{n} \cdot \sum_{i=1}^{n} R_i(h_{\theta_i}), \quad \text{where } h_{\theta_i} = A_\theta(\mathcal{D}_i, \ell, \mathcal{H}). \quad (4.4)$$
To minimize $L(\theta)$ defined in (4.4), we adopt gradient descent, which iteratively updates $\theta_\ell$ as follows,
$$\theta_{\ell+1} \leftarrow \theta_\ell - \alpha_\ell \cdot \nabla_\theta L(\theta_\ell), \quad \text{for } \ell = 0, 1, \ldots, T - 1. \quad (4.5)$$
Here $\nabla_\theta L(\theta_\ell)$ is the gradient of the meta-objective at $\theta_\ell$, $\alpha_\ell$ is the learning rate at the $\ell$-th iteration, and $T$ is the number of iterations. Fallah et al. (2019) show that the update defined in (4.5) converges to an $\epsilon$-stationary point of the meta-objective $L$ under a smoothness assumption on $L$. In what follows, we characterize the optimality gap of such an $\epsilon$-stationary point. We first introduce the Fréchet differentiability of the risk $R_i$ in (4.1).

Definition 4.1 (Fréchet Differentiability). Let $\mathcal{H}$ be a Banach space with the norm $\|\cdot\|_\mathcal{H}$. A functional $R : \mathcal{H} \mapsto \mathbb{R}$ is Fréchet differentiable at $h \in \mathcal{H}$ if it holds for a bounded linear operator $A : \mathcal{H} \mapsto \mathbb{R}$ that
$$\lim_{h_1 \in \mathcal{H},\, \|h_1\|_\mathcal{H} \to 0} \big|R(h + h_1) - R(h) - A(h_1)\big| \big/ \|h_1\|_\mathcal{H} = 0. \quad (4.6)$$
We define $A$ as the Fréchet derivative of $R$ at $h \in \mathcal{H}$, and write
$$D_h R(\cdot) = A(\cdot). \quad (4.7)$$
In what follows, we assume that the hypothesis class $\mathcal{H}$ with the $L_2(\rho)$-inner product is a Hilbert space, where $\rho$ is a distribution over $\mathcal{X}$. Thus, following from the definition of the Fréchet derivative in Definition 4.1 and the Riesz representation theorem (Rudin, 2006), it holds for an $a_h \in \mathcal{H}$ that
$$D_h R(\cdot) = A(\cdot) = \langle \cdot, a_h \rangle_\mathcal{H}, \quad (4.8)$$
where we denote by $\langle f, g \rangle_\mathcal{H} = \int_\mathcal{X} f(x) \cdot g(x) \,\mathrm{d}\rho$ the $L_2(\rho)$-inner product. In what follows, we write
$$(\delta R / \delta h)(x) = a_h(x), \quad \forall x \in \mathcal{X},\, h \in \mathcal{H}. \quad (4.9)$$
We refer to §A for an example of the Fréchet derivative defined in (4.9).
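For intuition, the following worked computation (our own sketch of the kind of example deferred to §A; it assumes the marginal of $\mathcal{D}$ over $\mathcal{X}$ equals $\rho$) gives the Fréchet derivative (4.9) for the squared-loss risk:

```latex
% Sketch: Fréchet derivative of the squared-loss risk, under the assumption
% that the marginal of D over X equals rho, so H sits inside L_2(rho).
\[
R(h) = \mathbb{E}_{(x, y) \sim \mathcal{D}}\big[(h(x) - y)^2\big], \qquad
R(h + h_1) - R(h)
= \mathbb{E}\big[2 (h(x) - y) \cdot h_1(x)\big] + \mathbb{E}\big[h_1(x)^2\big].
\]
% The quadratic remainder is o(\|h_1\|_H), so the bounded linear part yields
\[
(\delta R / \delta h)(x)
= 2 \cdot \mathbb{E}_{y \sim \mathcal{D}(\cdot \mid x)}\big[h(x) - y\big]
= 2\big(h(x) - \mathbb{E}[y \mid x]\big),
\]
% which vanishes at the risk minimizer h^*(x) = E[y | x].
```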
We assume that $\mathcal{H}$ contains the parameterized hypothesis class $\mathcal{H}_\theta$ defined in (4.2), and impose the following assumption on the convexity and the Fréchet differentiability of the risk $R_i$ in (4.1).

Assumption 4.2 (Convex and Differentiable Risk). We assume for all $i \in [n]$ that the risk $R_i$ defined in (4.1) is convex and Fréchet differentiable on $\mathcal{H}$.

Assumption 4.2 is a mild regularity condition on the risk $R_i$, which holds for the risks induced by commonly used loss functions, such as the squared loss and the cross-entropy loss. Specifically, the convexity of $R_i$ holds if the loss function $\ell(h, z)$ is convex in $h \in \mathcal{H}$ for all $z \in \mathcal{Z}$ (Rockafellar, 1968). The following proposition holds under Assumption 4.2.

Proposition 4.3 (Convex and Differentiable Risk (Ekeland and Temam, 1999)). Under Assumption 4.2, it holds for all $i \in [n]$ that
$$R_i(h_1) \geq R_i(h_2) + \langle \delta R_i / \delta h_2, h_1 - h_2 \rangle_\mathcal{H}, \quad \forall h_1, h_2 \in \mathcal{H}.$$
Proof. See Ekeland and Temam (1999) for a detailed proof.

We highlight that the convexity of the risks over $h \in \mathcal{H}$ does not imply the convexity of the meta-objective defined in (4.4). In contrast, Proposition 4.3 characterizes the functional geometry of the risk $R_i$ in the Hilbert space $\mathcal{H}$ for all $i \in [n]$, which allows us to analyze the global optimality of meta-SL in the sequel.

4.2 Theoretical Results
In this section, we characterize the global optimality of the $\epsilon$-stationary point attained by meta-SL defined in (4.5). Let $\theta^*$ be a global minimizer of the meta-objective $L(\theta)$ defined in (4.4), and $\omega$ be the $\epsilon$-stationary point attained by meta-SL such that
$$\nabla_\omega L(\omega)^\top v \leq \epsilon, \quad \forall v \in \mathcal{B} = \{\theta \in \mathbb{R}^d : \|\theta\|_2 \leq 1\}. \quad (4.10)$$
Our goal is to upper bound the optimality gap $L(\omega) - L(\theta^*)$. To this end, we first define the mixed distribution $\mathcal{M}$ over all the distributions $\{\mathcal{D}_i\}_{i \in [n]}$ as follows,
$$\mathcal{M}(x, y) = \frac{1}{n} \cdot \sum_{i=1}^{n} \mathcal{D}_i(x, y), \quad \forall (x, y) \in \mathcal{X} \times \mathcal{Y}. \quad (4.11)$$
To simplify the notation, we write $\omega_i$ and $\theta^*_i$ for the parameters that correspond to the outputs of the algorithms $A_\omega(\mathcal{D}_i, \ell, \mathcal{H})$ and $A_{\theta^*}(\mathcal{D}_i, \ell, \mathcal{H})$, respectively. More specifically, according to (4.3), we have
$$\omega_i = \omega - \eta \cdot \nabla_\omega R_i(h_\omega), \qquad \theta^*_i = \theta^* - \eta \cdot \nabla_{\theta^*} R_i(h_{\theta^*}), \quad \forall i \in [n], \quad (4.12)$$
where $\eta$ is the learning rate of the algorithms $A_\omega(\mathcal{D}_i, \ell, \mathcal{H})$ and $A_{\theta^*}(\mathcal{D}_i, \ell, \mathcal{H})$. The following theorem characterizes the optimality gap of the $\epsilon$-stationary point $\omega$ attained by meta-SL. We define for all $(x, y, x') \in \mathcal{X} \times \mathcal{Y} \times \mathcal{X}$ that
$$w(x, y, x') = \frac{1}{n} \cdot \sum_{i=1}^{n} (\delta R_i / \delta h_{\omega_i})(x') \cdot (\mathrm{d}\mathcal{D}_i / \mathrm{d}\mathcal{M})(x, y), \quad (4.13)$$
$$u(x, y, x') = \bigg(\frac{1}{n} \cdot \sum_{i=1}^{n} (\delta R_i / \delta h_{\omega_i})(x') \cdot \big(h_{\omega_i}(x') - h_{\theta^*_i}(x')\big)\bigg) \bigg/ w(x, y, x'), \quad (4.14)$$
$$\phi_{\ell,\omega}(x, y, x') = \Big(I_d - \eta \cdot \nabla^2_\omega \ell\big(\phi(x)^\top \omega, (x, y)\big)\Big) \phi(x'), \quad (4.15)$$
where $\mathrm{d}\mathcal{D}_i / \mathrm{d}\mathcal{M}$ is the Radon-Nikodym derivative and $\delta R_i / \delta h_{\omega_i}$ is the Fréchet derivative defined in (4.9).

Theorem 4.4 (Optimality Gap of $\epsilon$-Stationary Point). Let $\theta^*$ be a global minimizer of $L(\theta)$.
Also, let $\omega$ be the $\epsilon$-stationary point defined in (4.10). Let $\ell(h_\theta(x), (x, y))$ be twice differentiable with respect to all $\theta \in \mathbb{R}^d$ and $(x, y) \in \mathcal{X} \times \mathcal{Y}$. Under Assumption 4.2, it holds for all $R > 0$ that
$$L(\omega) - L(\theta^*) \leq \underbrace{R \cdot \epsilon}_{\text{(i)}} + \underbrace{\|w\|_{\mathcal{M} \cdot \rho}}_{\text{(ii)}} \cdot \underbrace{\inf_{v \in \mathcal{B}_R} \|u(\cdot) - \phi_{\ell,\omega}(\cdot)^\top v\|_{\mathcal{M} \cdot \rho}}_{\text{(iii)}}, \quad (4.16)$$
where we define $\mathcal{B}_R = \{\theta \in \mathbb{R}^d : \|\theta\|_2 \leq R\}$ as the ball with radius $R$ and
$$\|w\|_{\mathcal{M} \cdot \rho} = \Big(\int w^2(x, y, x') \,\mathrm{d}\mathcal{M}(x, y) \,\mathrm{d}\rho(x')\Big)^{1/2}$$
as the $L_2(\mathcal{M} \cdot \rho)$-norm of $w$.

Proof. See §B.2 for a detailed proof.

By Theorem 4.4, the optimality gap of the $\epsilon$-stationary point $\omega$ hinges on the three terms on the right-hand side of (4.16). Here term (i) characterizes the deviation of the $\epsilon$-stationary point $\omega$ from a stationary point. Term (ii) characterizes the difficulty of all the subtasks sampled from the task distribution $\iota$. Specifically, given the $\epsilon$-stationary point $\omega$, if the output $h_{\omega_i}$ of $A_\omega(\mathcal{D}_i, \ell, \mathcal{H})$ well approximates the minimizer of the risk $R_i$ in (4.1), then the Fréchet derivative $\delta R_i / \delta h_{\omega_i}$ defined in (4.9) is close to zero. Meanwhile, the Radon-Nikodym derivative $\mathrm{d}\mathcal{D}_i / \mathrm{d}\mathcal{M}$ characterizes the deviation of the distribution $\mathcal{D}_i$ from the mixed distribution $\mathcal{M}$ defined in (4.11), which is upper bounded if $\mathcal{D}_i$ is close to $\mathcal{M}$. Thus, term (ii) is upper bounded if $h_{\omega_i}$ well approximates the minimizer of $R_i$ and $\mathcal{D}_i$ is close to $\mathcal{M}$ for all $i \in [n]$. Term (iii) characterizes the representation power of the feature mapping $\phi_{\ell,\omega}$ defined in (4.15). Specifically, if the function $u$ defined in (4.14) of Theorem 4.4 is well approximated by $\phi_{\ell,\omega}(\cdot)^\top v$ for some $v \in \mathcal{B}_R$, then term (iii) is small. In conclusion, if the subtasks generated by the task distribution $\iota$ are sufficiently regular so that term (ii) is upper bounded, and the linear class $\{\phi_{\ell,\omega}(\cdot)^\top v : v \in \mathcal{B}_R\}$ has sufficient representation power, then $\omega$ is approximately globally optimal. See §A for a corollary of Theorem 4.4 when it is adapted to the squared loss.

5 Neural Network Parameterization
In this section, we present the global optimality analysis of meta-RL and meta-SL with the overparameterized two-layer neural network parameterization, namely neural meta-RL and neural meta-SL, respectively. Specifically, for both neural meta-RL and neural meta-SL, we show that the global optimality of the attained $\epsilon$-stationary points hinges on the representation power of the corresponding classes of overparameterized two-layer neural networks.

5.1 Neural Network
We first introduce the neural network parameterization. For $x \in \mathbb{R}^d$, $b = (b_1, \ldots, b_m)^\top \in \mathbb{R}^m$, and $W = ([W]_1^\top, \ldots, [W]_m^\top)^\top \in \mathbb{R}^{md}$, we define
$$f(x; b, W) = \frac{1}{\sqrt{m}} \cdot \sum_{r=1}^{m} b_r \cdot \sigma\big([W]_r^\top x\big), \quad (5.1)$$
where $\sigma(x) = x \cdot \mathbb{1}\{x > 0\}$ is the rectified linear unit (ReLU). We set $m$ to be divisible by two and initialize the parameter $W$ with $[W_{\mathrm{init}}]_r = [W_{\mathrm{init}}]_{r+m/2} \sim N(0, I_d / d)$ for $r \in [m/2]$. Meanwhile, we initialize $b_r = 1$ and $b_{r+m/2} = -1$ for all $r \in [m/2]$.
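Before proceeding, here is a short numpy sketch (our own illustration) of the network (5.1) under the paired initialization just described; it checks numerically that $f(x; W_{\mathrm{init}}) = 0$ and previews the identity $f(x; W) = \phi_W(x)^\top W$ used below:

```python
import numpy as np

# Minimal sketch (illustrative, not the paper's code) of the two-layer ReLU
# network (5.1) with the paired symmetric initialization described above.
rng = np.random.default_rng(2)
d, m = 4, 64                                   # m divisible by two

W_half = rng.normal(scale=1.0 / np.sqrt(d), size=(m // 2, d))
W_init = np.vstack([W_half, W_half])           # [W]_r = [W]_{r+m/2}
b = np.concatenate([np.ones(m // 2), -np.ones(m // 2)])

def f(x, W):
    """f(x; W) = (1/sqrt(m)) * sum_r b_r * relu(W_r^T x), with b fixed."""
    return (b * np.maximum(W @ x, 0.0)).sum() / np.sqrt(m)

def feature(x, W):
    """phi_W(x): the almost-everywhere gradient of f with respect to W."""
    act = (W @ x > 0).astype(float)            # ReLU activation pattern
    return (b * act)[:, None] * x[None, :] / np.sqrt(m)

x = rng.normal(size=d)
print("f(x; W_init) =", f(x, W_init))          # exactly 0 by the paired init
# ReLU is positively homogeneous, so f(x; W) = <phi_W(x), W> exactly.
W_new = W_init + 0.1 * rng.normal(size=(m, d))
print(np.isclose(f(x, W_new), (feature(x, W_new) * W_new).sum()))
```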
Such initialization (Bai and Lee, 2019) is almost equivalent to the independent and identical initialization of $[W_{\mathrm{init}}]_r$ for $r \in [m]$ in our analysis, and ensures that $f(x; W_{\mathrm{init}}) = 0$ for all $x \in \mathcal{X}$. In what follows, we fix $b_r$ for all $r \in [m]$ and only optimize over $W$. We write $f(x; W) = f(x; b, W)$ in the sequel for notational simplicity. Note that $f(x; W)$ is almost everywhere differentiable with respect to $W$, and it holds for all $x \in \mathbb{R}^d$ and $W \in \mathbb{R}^{md}$ that $\nabla_W f(x; W) = ([\nabla_W f(x; W)]_1^\top, \ldots, [\nabla_W f(x; W)]_m^\top)^\top \in \mathbb{R}^{md}$, where
$$\big[\nabla_W f(x; W)\big]_r = \big[\phi_W(x)\big]_r = \frac{b_r}{\sqrt{m}} \cdot x \cdot \mathbb{1}\big\{[W]_r^\top x > 0\big\}, \quad \forall r \in [m]. \quad (5.2)$$
Here we define the feature mapping as $\phi_W(x) = ([\phi_W(x)]_1^\top, \ldots, [\phi_W(x)]_m^\top)^\top$ for all $x \in \mathbb{R}^d$ and $W \in \mathbb{R}^{md}$. It then follows from the definition of $f(x; W)$ in (5.1) that $f(x; W) = \phi_W(x)^\top W$. In the sequel, we denote by $\mathbb{E}_{\mathrm{init}}$ the expectation with respect to the random initialization of the neural network.

5.2 Neural Meta-RL
In this section, we analyze the global optimality of the $\epsilon$-stationary point attained by meta-RL when the main effect $\pi_\theta$ is parameterized by the neural network defined in (5.1). Without loss of generality, we assume that $\mathcal{S} \times \mathcal{A} \subseteq \mathbb{R}^d$ and $\|(s, a)\|_2 \leq 1$ for all $(s, a) \in \mathcal{S} \times \mathcal{A}$. Similar to (3.2), we parameterize the main effect $\pi_\theta$ as follows,
$$\pi_\theta(a \,|\, s) = \frac{\exp\big\{1/\tau \cdot f\big((s, a); \theta\big)\big\}}{\sum_{a' \in \mathcal{A}} \exp\big\{1/\tau \cdot f\big((s, a'); \theta\big)\big\}}, \quad \forall (s, a) \in \mathcal{S} \times \mathcal{A}, \quad (5.3)$$
where $f(\cdot; \theta)$ is the neural network defined in (5.1) with $W = \theta$ for all $\theta \in \mathbb{R}^{md}$. Correspondingly, given the MDP $(\mathcal{S}, \mathcal{A}, P_i, r_i, \gamma_i, \zeta_i)$, the maximizer $\pi_{i,\theta}$ defined in (3.1) takes the form of
$$\pi_{i,\theta}(\cdot \,|\, s) \propto \exp\Big(1/\tau \cdot f\big((s, \cdot); \theta\big) + \eta \cdot Q^{\pi_\theta}_i(s, \cdot)\Big), \quad \forall s \in \mathcal{S}, \quad (5.4)$$
where $Q^{\pi_\theta}_i$ is the action-value function of $\pi_\theta$ corresponding to the MDP $(\mathcal{S}, \mathcal{A}, P_i, r_i, \gamma_i, \zeta_i)$. Neural meta-RL maximizes the following meta-objective via gradient ascent with $W_{\mathrm{init}}$ as the starting point,
$$L(\theta) = \frac{1}{n} \cdot \sum_{i=1}^{n} J_i(\pi_{i,\theta}), \quad (5.5)$$
where $\pi_{i,\theta}$ is defined in (5.4), and $J_i(\pi_{i,\theta})$ is the expected total reward of $\pi_{i,\theta}$ corresponding to the MDP $(\mathcal{S}, \mathcal{A}, P_i, r_i, \gamma_i, \zeta_i)$. In what follows, we analyze the global optimality of the $\epsilon$-stationary point $\omega$ of the meta-objective $L$ attained by neural meta-RL. Specifically, we define $\omega$ as follows,
$$\nabla_\omega L(\omega)^\top (v - \omega) \leq \epsilon, \quad \forall v \in \mathcal{B}_{\mathrm{init}} = \{\theta \in \mathbb{R}^{md} : \|\theta - W_{\mathrm{init}}\|_2 \leq R_T\}. \quad (5.6)$$
Here $W_{\mathrm{init}}$ is the initial parameter, and the radius $R_T$ is the maximum trajectory length of $T$ gradient ascent steps. We impose the following regularity condition on the mixed meta-visitation measure $\varrho_{\pi_\theta}$ defined in (3.11) of Definition 3.3.

Assumption 5.1 (Regularity Condition on $\varrho_{\pi_\theta}$). We assume for all $\theta \in \mathbb{R}^{md}$ that
$$\mathbb{E}_{(s, a) \sim \varrho_{\pi_\theta}}\Big[\mathbb{1}\big\{|y^\top (s, a)| \leq u\big\}\Big] \leq c \cdot u / \|y\|_2, \quad \forall y \in \mathbb{R}^d,\, u > 0,$$
where $c > 0$ is an absolute constant.
Assumption 5.1 is imposed to rule out the corner case where $\varrho_{\pi_\theta}$ has a point mass at a specific state-action pair $(s, a) \in \mathcal{S} \times \mathcal{A}$. Similar assumptions arise in the analysis of RL with neural network parameterization (Cai et al., 2019; Liu et al., 2019). The following corollary characterizes the optimality gap of the $\epsilon$-stationary point defined in (5.6). Let $\theta^*$ be a global maximizer of the meta-objective $L(\theta)$ defined in (5.5). We define
$$c_\omega(s, a) = f\big((s, a); \omega\big) + f_\omega(s, a), \quad \forall (s, a) \in \mathcal{S} \times \mathcal{A}, \quad (5.7)$$
where $f(\cdot; \omega)$ is the neural network defined in (5.1) with $W = \omega$ and $f_\omega$ is defined in (3.14).

Corollary 5.2 (Optimality Gap of $\epsilon$-Stationary Point). Under Assumptions 3.4 and 5.1, for the $\epsilon$-stationary point $\omega$ defined in (5.6), we have
$$\mathbb{E}_{\mathrm{init}}\big[L(\theta^*) - L(\omega)\big] \leq \underbrace{\epsilon}_{\text{(i)}} + \underbrace{C \cdot \mathbb{E}_{\mathrm{init}}\Big[\inf_{v \in \mathcal{B}_{\mathrm{init}}} \big\|c_\omega(\cdot, \cdot) - f\big((\cdot, \cdot); v\big)\big\|_{\varrho_{\pi_\omega}}\Big]}_{\text{(ii)}} + \underbrace{O(R_T^{3/2} \cdot m^{-1/4})}_{\text{(iii)}}, \quad (5.8)$$
where $C = 2C_0 \cdot Q_{\max}/\tau \cdot (1 + 2Q_{\max} \cdot \gamma \cdot \eta)$, $\gamma = (\sum_{i=1}^{n} \gamma_i)/n$, $C_0$ is defined in Assumption 3.4, and $\mathcal{B}_{\mathrm{init}}$ is the parameter space defined in (5.6).

Proof. See §B.4 for a detailed proof.

By Corollary 5.2, the global optimality of the $\epsilon$-stationary point $\omega$ is upper bounded by the three terms on the right-hand side of (5.8). Here term (i) characterizes the deviation of $\omega$ from a stationary point. Term (ii) characterizes the representation power of neural networks. Specifically, if the function $c_\omega$ defined in (5.7) is well approximated by the neural network defined in (5.1) with a parameter from the parameter space $\mathcal{B}_{\mathrm{init}}$, then term (ii) is small. Term (iii) is the linearization error of the neural networks, which characterizes the deviation of a neural network from its first-order Taylor expansion at the initial parameter $W_{\mathrm{init}}$. Such an error is small for a sufficiently large width $m$, that is, if the neural network is overparameterized. In conclusion, if the class of overparameterized two-layer neural networks with the parameter space $\mathcal{B}_{\mathrm{init}}$ has sufficient representation power, then the $\epsilon$-stationary point $\omega$ attained by neural meta-RL is approximately globally optimal.

5.3 Neural Meta-SL
In this section, we analyze the global optimality of the $\epsilon$-stationary point attained by neural meta-SL associated with the squared loss, where we parameterize the hypothesis $h_\theta(\cdot) = f(\cdot; \theta)$ by the neural network defined in (5.1). Specifically, neural meta-SL minimizes the meta-objective $L$ defined in (4.4) via the gradient descent defined in (4.5) with $W_{\mathrm{init}}$ as the starting point. We analyze the global optimality of the $\epsilon$-stationary point $\omega$ attained by neural meta-SL, which is defined as follows,
$$\nabla_\omega L(\omega)^\top (\omega - v) \leq \epsilon, \quad \forall v \in \mathcal{B}_{\mathrm{init}} = \{\theta \in \mathbb{R}^{md} : \|\theta - W_{\mathrm{init}}\|_2 \leq R_T\}. \quad (5.9)$$
Here $R_T$ is the maximum trajectory length of $T$ gradient descent steps. In what follows, we set $\mathcal{X} = \{x \in \mathbb{R}^d : \|x\|_2 \leq 1\}$. Similar to Assumption 5.1, we impose the following regularity condition on the distribution $\rho$ that defines the Hilbert space in (4.8).

Assumption 5.3 (Regularity Condition on $\rho$).
We assume for an absolute constant $c > 0$ that
$$\mathbb{E}_{x \sim \rho}\Big[\mathbb{1}\big\{|x^\top y| \leq u\big\}\Big] \leq c \cdot u / \|y\|_2, \quad \forall y \in \mathbb{R}^d,\, u > 0.$$
Such an assumption holds if the probability density function of $\rho$ is upper bounded by an absolute constant. Under Assumption 5.3, the following corollary characterizes the optimality gap of the $\epsilon$-stationary point $\omega$ defined in (5.9). We define
$$K_{\omega,\eta} = \mathbb{E}_{x \sim \rho}\big[I_{md} - 2\eta \cdot \phi_\omega(x)\phi_\omega(x)^\top\big], \qquad \mathcal{B}_0 = K_{\omega,\eta}(\omega - \mathcal{B}_{\mathrm{init}}) + W_{\mathrm{init}}, \quad (5.10)$$
$$u(x) = f(x; W_{\mathrm{init}}) + \bigg(\sum_{i=1}^{n} (\delta R_i / \delta h_{\omega_i})(x) \cdot \big(h_{\omega_i}(x) - h_{\theta^*_i}(x)\big)\bigg) \bigg/ \bigg(\sum_{i=1}^{n} (\delta R_i / \delta h_{\omega_i})(x)\bigg), \quad (5.11)$$
where $f(\cdot; W_{\mathrm{init}})$ and $\phi_\omega$ are the neural network and the feature mapping defined in (5.1) with $W = W_{\mathrm{init}}$ and in (5.2) with $W = \omega$, respectively, $\mathcal{B}_{\mathrm{init}}$ is the parameter space defined in (5.9), $W_{\mathrm{init}}$ is the initial parameter, and $\omega_i$, $\theta^*_i$ are the parameters defined in (4.12). We further define the average risk $R$ as follows,
$$R = \frac{1}{n} \cdot \sum_{i=1}^{n} R_i^{1/2}(h_{\omega_i}) = \frac{1}{n} \cdot \sum_{i=1}^{n} \Big\{\mathbb{E}_{(x, y) \sim \mathcal{D}_i}\big[\big(y - h_{\omega_i}(x)\big)^2\big]\Big\}^{1/2}. \quad (5.12)$$

Corollary 5.4 (Optimality Gap of $\epsilon$-Stationary Point). We denote by $\bar{\mathcal{D}}_i$ the marginal distribution of $\mathcal{D}_i$ over $\mathcal{X}$. Let $\bar{\mathcal{D}}_i = \rho$ for all $i \in [n]$ and $|y| \leq Y_{\max}$ for all $y \in \mathcal{Y}$. Under Assumptions 4.2 and 5.3, for the squared loss $\ell(h, (x, y)) = (h(x) - y)^2$ and $\omega$ defined in (5.9), we have
$$\mathbb{E}_{\mathrm{init}}\big[L(\omega) - L(\theta^*)\big] \leq \epsilon + \mathbb{E}_{\mathrm{init}}\Big[2R \cdot \inf_{v \in \mathcal{B}_0} \|u(\cdot) - f(\cdot; v)\|_\rho\Big] + O(G_T^{3/2} \cdot m^{-1/4}), \quad (5.13)$$
where $G_T = (1 + \eta) \cdot R_T + \eta \cdot Y_{\max}$, $R_T$ is the maximum trajectory length in (5.9), and $\eta$ is the learning rate of $A_\omega$ in (4.12).

Proof. See §B.5 for a detailed proof.

Similar to Corollary 5.2, by Corollary 5.4, if the function $u$ defined in (5.11) is well approximated by an overparameterized two-layer neural network with a parameter from the parameter space $\mathcal{B}_0$ defined in (5.10), and the average risk $R$ defined in (5.12) is upper bounded, then the $\epsilon$-stationary point $\omega$ attained by neural meta-SL is approximately globally optimal." + }, + { + "url": "http://arxiv.org/abs/2006.12311v1", + "title": "Provably Efficient Causal Reinforcement Learning with Confounded Observational Data", + "abstract": "Empowered by expressive function approximators such as neural networks, deep\nreinforcement learning (DRL) achieves tremendous empirical successes. However,\nlearning expressive function approximators requires collecting a large dataset\n(interventional data) by interacting with the environment. Such a lack of\nsample efficiency prohibits the application of DRL to critical scenarios, e.g.,\nautonomous driving and personalized medicine, since trial and error in the\nonline setting is often unsafe and even unethical. In this paper, we study how\nto incorporate the dataset (observational data) collected offline, which is\noften abundantly available in practice, to improve the sample efficiency in the\nonline setting. 
To incorporate the possibly confounded observational data, we\npropose the deconfounded optimistic value iteration (DOVI) algorithm, which\nincorporates the confounded observational data in a provably efficient manner.\nMore specifically, DOVI explicitly adjusts for the confounding bias in the\nobservational data, where the confounders are partially observed or unobserved.\nIn both cases, such adjustments allow us to construct the bonus based on a\nnotion of information gain, which takes into account the amount of information\nacquired from the offline setting. In particular, we prove that the regret of\nDOVI is smaller than the optimal regret achievable in the pure online setting\nby a multiplicative factor, which decreases towards zero when the confounded\nobservational data are more informative upon the adjustments. Our algorithm and\nanalysis serve as a step towards causal reinforcement learning.", + "authors": "Lingxiao Wang, Zhuoran Yang, Zhaoran Wang", + "published": "2020-06-22", + "updated": "2020-06-22", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "stat.ML" + ], + "main_content": "Introduction
In reinforcement learning (RL) (Sutton and Barto, 2018), an agent maximizes its expected total reward by sequentially interacting with the environment. Empowered by the breakthrough in neural networks, which serve as expressive function approximators, deep reinforcement learning (DRL) achieves significant empirical successes in various scenarios, e.g., game playing (Silver et al., 2016, 2017), robotics (Kober et al., 2013), and natural language processing (Li et al., 2016). Learning an expressive function approximator necessitates collecting a large dataset. Specifically, in the online setting, it requires the agent to interact with the environment for a large number of steps. For example, to learn a human-level policy for playing Atari games, the agent has to interact with a simulator for more than $10^8$ steps (Hessel et al., 2018). However, in most scenarios, we do not have access to a simulator that allows for trial and error without any cost. Meanwhile, in critical scenarios, e.g., autonomous driving and personalized medicine, trial and error in the real world is unsafe and even unethical. As a result, it remains challenging to apply DRL to more scenarios. To bypass such a barrier, we study how to incorporate the dataset collected offline, namely the observational data, to improve the sample efficiency of RL in the online setting (Levine et al., 2020). In contrast to the interventional data, which are collected online in possibly expensive ways, the observational data are often abundantly available in various scenarios. For example, in autonomous driving, we have access to a large number of trajectories generated by the drivers. As another example, in personalized medicine, we have access to a large number of electronic health records generated by the doctors. However, to incorporate the observational data in a provably efficient way, we have to address two challenges.
• The observational data are possibly confounded. Specifically, there often exist unobserved random variables, namely confounders, that causally affect the agent and the environment at the same time. In particular, the policy used to generate the observational data, namely the behavior policy, possibly depends on the confounders. Meanwhile, the confounders possibly affect the received rewards and the transition dynamics.
In the example of autonomous driving (de Haan et al., 2019; Li et al., 2020), the driver may react, e.g., by pulling the brake (policy), based on a traffic situation, e.g., icy roads (confounder), that is not captured by the sensor. Meanwhile, icy roads may lead to car accidents (reward/transition). Also, in the example of personalized medicine (Murphy, 2003; Chakraborty and Murphy, 2014), the doctor may treat the patient, e.g., by prescribing a medicine (policy), based on a clinical finding, e.g., appetite loss (confounder), that is not reflected in the record. Meanwhile, appetite loss may lead to weight loss (reward/transition). Such a confounding issue makes the observational data uninformative and even misleading for identifying and estimating the causal effect of a policy, which is crucial for decision making in the online setting. In the example of autonomous driving, it is unclear from the observational data whether pulling the brake causes car accidents. Also, in the example of personalized medicine, it is unclear from the observational data whether taking the medicine causes weight loss.
• Even without the confounding issue, it remains unclear how the observational data may facilitate exploration in the online setting, which is the key to the sample efficiency of RL. At the core of exploration is uncertainty quantification. Specifically, quantifying the uncertainty that remains given the dataset collected up to the current step, including the observational data and the interventional data, allows us to construct a bonus. When incorporated into the reward, such a bonus encourages the agent to explore the less visited state-action pairs that have more uncertainty. In particular, constructing such a bonus requires quantifying the amount of information carried over by the observational data from the offline setting, which also plays a key role in characterizing the regret, especially how much the observational data may facilitate reducing the regret. Uncertainty quantification becomes even more challenging when the observational data are confounded. Specifically, as the behavior policy depends on the confounders, which are unobserved, there is a mismatch between the data generating processes in the offline setting and the online setting. As a result, it remains challenging to quantify how much information carried over from the offline setting is useful for the online setting, as the observational data are uninformative and even misleading due to the confounding issue.
Contribution. To study causal reinforcement learning, we propose a class of Markov decision processes (MDPs), namely confounded MDPs, which captures the data generating processes in both the offline setting and the online setting as well as their mismatch due to the confounding issue. In particular, we study two tractable cases of confounded MDPs in the episodic setting with linear function approximation (Yang and Wang, 2019a,b; Jin et al., 2019; Cai et al., 2019).
• In the first case, the confounders are partially observed in the observational data. Assuming that an observed subset of the confounders satisfies the backdoor criterion (Pearl, 2009), we propose the deconfounded optimistic value iteration (DOVI) algorithm. Specifically, DOVI explicitly corrects for the confounding bias in the observational data using the backdoor adjustment.
• In the second case, the confounders are unobserved in the observational data.
Assuming that there exists an observed set of intermediate states that satisfies the frontdoor criterion (Pearl, 2009), we propose an extension of DOVI, namely DOVI+, which explicitly corrects for the confounding bias in the observational data using the composition of two backdoor adjustments. In both cases, the adjustments allow DOVI and DOVI+ to incorporate the observational data into the interventional data while bypassing the confounding issue. It further enables estimating the causal effect of a policy on the received rewards and the transition dynamics with an enlarged effective sample size. Moreover, such adjustments allow us to construct the bonus based on a notion of information gain, which takes into account the amount of information carried over from the offline setting. In particular, we prove that DOVI and DOVI+ attain the $\Delta_H \cdot \sqrt{d^3 H^3 T}$-regret up to logarithmic factors, where $d$ is the dimension of features, $H$ is the length of each episode, and $T = HK$ is the number of steps taken in the online setting, where $K$ is the number of episodes. Here the multiplicative factor $\Delta_H > 0$ depends on $d$, $H$, and a notion of information gain that quantifies the amount of information obtained from the interventional data additionally when given the properly adjusted observational data. When the observational data are unavailable or uninformative upon the adjustments, $\Delta_H$ is a logarithmic factor. Correspondingly, DOVI and DOVI+ attain the optimal $\sqrt{T}$-regret achievable in the pure online setting (Yang and Wang, 2019a,b; Jin et al., 2019; Cai et al., 2019). When the observational data are sufficiently informative upon the adjustments, $\Delta_H$ decreases towards zero as the effective sample size of the observational data increases, which quantifies how much the observational data may facilitate exploration in the online setting.
Related Work. Our work is based on the study of RL in the pure online setting, which focuses on attaining the optimal regret. See, e.g., Auer and Ortner (2007); Jaksch et al. (2010); Osband et al. (2014); Azar et al. (2017); Yang and Wang (2019a,b); Jin et al. (2019) and the references therein. In contrast, we study a class of confounded MDPs, which captures a combination of the online setting and the offline setting. Our work is related to the study of causal bandit (Lattimore et al., 2016). The goal of causal bandit is to obtain the optimal intervention in the online setting where the data generating process is described by a causal diagram. Lu et al. (2019) propose the causal upper confidence bound (CUCB) and causal Thompson sampling (C-TS) algorithms, which attain the $\sqrt{T}$-regret. Sen et al. (2017) propose an algorithm based on importance sampling in policy evaluation. In the pure offline setting, Kallus and Zhou (2018a,b) propose algorithms for contextual bandit with confounders in the observational data. Their algorithms are based on the analysis of sensitivity (Manski, 1990; Tan, 2006; Balke and Pearl, 2013; Zhang and Bareinboim, 2017), which characterizes the worst-case difference between the causal effect and the conditional density obtained from the confounded observational data. In a combination of the online setting and the offline setting, Forney et al. (2017) study multi-armed bandit with both the interventional data and the confounded observational data. In contrast to this line of work, we study causal RL in a combination of the online setting and the offline setting.
Causal RL is more challenging than causal bandit, which corresponds to $H = 1$, as it involves the transition dynamics, which makes exploration more difficult. Our work is related to the study of causal RL considered in various settings. Zhang and Bareinboim (2019) propose a model-based RL algorithm that solves dynamic treatment regimes (DTR), which involve a combination of the online setting and the offline setting. Their algorithm hinges on the analysis of sensitivity (Manski, 1990; Tan, 2006; Balke and Pearl, 2013; Zhang and Bareinboim, 2017), which constructs a set of feasible models of the transition dynamics based on the confounded observational data. Correspondingly, their algorithm achieves exploration by choosing an optimistic model of the transition dynamics from such a feasible set. In contrast, we propose a model-free RL algorithm, which achieves exploration through the bonus based on a notion of information gain. It is worth mentioning that the assumption of Zhang and Bareinboim (2019) is weaker than ours as theirs does not allow for identifying the causal effect. As a result of partial identification, the regret of their algorithm is the same as the regret in the pure online setting as $T \to +\infty$. In contrast, the regret of our algorithm is smaller than the regret in the pure online setting by a multiplicative factor for all $T$. Lu et al. (2018) propose a model-based RL algorithm in a combination of the online setting and the offline setting. Their algorithm uses a variational autoencoder (VAE) for estimating a structural causal model (SCM) based on the confounded observational data. In particular, their algorithm utilizes the actor-critic algorithm to obtain the optimal policy in such an SCM. However, the regret of their algorithm remains unclear. Buesing et al. (2018) propose a model-based RL algorithm in the pure online setting that learns the optimal policy in a partially observable Markov decision process (POMDP). The regret of their algorithm also remains unclear.

2 Confounded Reinforcement Learning
Structural Causal Model. We denote a structural causal model (SCM) (Pearl, 2009) by a tuple $(A, B, F, P)$. Here $A$ is the set of exogenous (unobserved) variables, $B$ is the set of endogenous (observed) variables, $F$ is the set of structural functions capturing the causal relations, which determines an endogenous variable $v \in B$ based on the other exogenous and endogenous variables, and $P$ is the distribution of all the exogenous variables. We say that a pair of variables $Y$ and $Z$ are confounded by a variable $W$ if they are both caused by $W$. An intervention on a set of endogenous variables $X \subseteq B$ assigns a value $x$ to $X$ regardless of the other exogenous and endogenous variables as well as the structural functions. We denote by $\mathrm{do}(X = x)$ the intervention on $X$ and write $\mathrm{do}(x)$ if it is clear from the context. Similarly, a stochastic intervention (Muñoz and van der Laan, 2012; Díaz and Hejazi, 2019) on a set of endogenous variables $X \subseteq B$ assigns a distribution $p$ to $X$ regardless of the other exogenous and endogenous variables as well as the structural functions. We denote by $\mathrm{do}(X \sim p)$ the stochastic intervention on $X$.
Confounded Markov Decision Process. To characterize a Markov decision process (MDP) in the offline setting with observational data, which are possibly confounded, we introduce an SCM, where the endogenous variables are the states $\{s_h\}_{h \in [H]}$, actions $\{a_h\}_{h \in [H]}$, and rewards $\{r_h\}_{h \in [H]}$.
Let $\{w_h\}_{h \in [H]}$ be the confounders. In §3, we assume that the confounders are partially observed, while in §4, we assume that they are unobserved, both in the offline setting. The set of structural functions $F$ consists of the transition of states $s_{h+1} \sim P_h(\cdot \,|\, s_h, a_h, w_h)$, the transition of confounders $w_h \sim \widetilde{P}_h(\cdot \,|\, s_h)$, the behavior policy $a_h \sim \nu_h(\cdot \,|\, s_h, w_h)$, which depends on the confounder $w_h$, and the reward function $r_h(s_h, a_h, w_h)$. See Figure 1 for the causal diagram that describes such an SCM.

[Figure 1: Causal diagrams of the $h$-th step of the confounded MDP (a) in the offline setting and (b) in the online setting, respectively. Here $a_h$ and $s_{h+1}$ are confounded by $w_h$ in addition to $s_h$.]

We denote such a confounded MDP by the tuple $(\mathcal{S}, \mathcal{A}, \mathcal{W}, H, P, r)$, where $H$ is the length of an episode, $\mathcal{S}$, $\mathcal{A}$, and $\mathcal{W}$ are the spaces of states, actions, and confounders, respectively, $r = \{r_h\}_{h \in [H]}$ is the set of reward functions, and $P = \{P_h, \widetilde{P}_h\}_{h \in [H]}$ is the set of transition kernels. In the sequel, we assume without loss of generality that $r_h$ takes value in $[0, 1]$ for all $h \in [H]$. In the online setting that allows for intervention, we assume that the confounders $\{w_h\}_{h \in [H]}$ are unobserved. A policy $\pi = \{\pi_h\}_{h \in [H]}$ induces the stochastic intervention $\mathrm{do}(a_1 \sim \pi_1(\cdot \,|\, s_1), \ldots, a_H \sim \pi_H(\cdot \,|\, s_H))$, which does not depend on the confounders. In particular, an agent interacts with the environment as follows. At the beginning of the $k$-th episode, the environment arbitrarily selects an initial state $s^k_1$ and the agent selects a policy $\pi^k = \{\pi^k_h\}_{h \in [H]}$. At the $h$-th step of the $k$-th episode, the agent observes the state $s^k_h$ and takes the action $a^k_h \sim \pi^k_h(\cdot \,|\, s^k_h)$. The environment randomly selects the confounder $w^k_h \sim \widetilde{P}_h(\cdot \,|\, s^k_h)$, which is unobserved, and the agent receives the reward $r^k_h = r_h(s^k_h, a^k_h, w^k_h)$. The environment then transits into the next state $s^k_{h+1} \sim P_h(\cdot \,|\, s^k_h, a^k_h, w^k_h)$. For a policy $\pi = \{\pi_h\}_{h \in [H]}$, which does not depend on the confounders $\{w_h\}_{h \in [H]}$, we define the value function $V^\pi = \{V^\pi_h\}_{h \in [H]}$ as follows,
$$V^\pi_h(s) = \mathbb{E}\bigg[\sum_{j=h}^{H} r_j(s_j, a_j, w_j) \,\bigg|\, s_h = s,\ s_{j+1} \sim P_j(\cdot \,|\, s_j, a_j, w_j),\ w_j \sim \widetilde{P}_j(\cdot \,|\, s_j),\ \mathrm{do}\big(a_j \sim \pi_j(\cdot \,|\, s_j)\big)\bigg] = \mathbb{E}_\pi\bigg[\sum_{j=h}^{H} r_j(s_j, a_j, w_j) \,\bigg|\, s_h = s\bigg], \quad \forall h \in [H], \quad (2.1)$$
where we denote by $\mathbb{E}_\pi$ the expectation with respect to the confounders $\{w_j\}_{j=h}^{H}$ and the trajectory $\{(s_j, a_j)\}_{j=h}^{H}$, starting from the state $s_h = s$ and following the policy $\pi$. Correspondingly, we define the action-value function $Q^\pi = \{Q^\pi_h\}_{h \in [H]}$ as follows,
$$Q^\pi_h(s, a) = \mathbb{E}_\pi\bigg[\sum_{j=h}^{H} r_j(s_j, a_j, w_j) \,\bigg|\, s_h = s,\ \mathrm{do}(a_h = a)\bigg], \quad \forall h \in [H]. \quad (2.2)$$
We assess the performance of an algorithm using the regret against the globally optimal policy $\pi^* = \{\pi^*_h\}_{h \in [H]}$ in hindsight after $K$ episodes, which is defined as follows,
$$\mathrm{Regret}(T) = \max_\pi \sum_{k=1}^{K} \big(V^\pi_1(s^k_1) - V^{\pi^k}_1(s^k_1)\big) = \sum_{k=1}^{K} \big(V^{\pi^*}_1(s^k_1) - V^{\pi^k}_1(s^k_1)\big). \quad (2.3)$$
Here $T = HK$ is the total number of steps.
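To fix ideas, the following toy simulator (all kernels and the policy are our own assumptions) runs one episode of the online protocol above, where the confounder $w_h$ is drawn but never revealed to the agent; averaging such returns over many episodes estimates $V^\pi_1$ in (2.1):

```python
import numpy as np

# Toy simulator of one online episode of the confounded MDP protocol above,
# with binary s, a, w and H = 3; every kernel below is an assumption.
rng = np.random.default_rng(5)
H = 3
P_w = lambda s: np.array([0.6, 0.4]) if s == 0 else np.array([0.2, 0.8])
P_s = lambda s, a, w: np.array([0.7, 0.3]) if (a + w) % 2 == 0 else np.array([0.1, 0.9])
r_fn = lambda s, a, w: 0.5 * float(s == 1) + 0.5 * float(w == a)  # in [0, 1]
pi = lambda s: np.array([0.5, 0.5])            # interventional policy: ignores w

s, total = 0, 0.0
for h in range(H):
    w = rng.choice(2, p=P_w(s))                # confounder, unobserved online
    a = rng.choice(2, p=pi(s))                 # do(a_h ~ pi_h(.|s_h))
    total += r_fn(s, a, w)
    s = rng.choice(2, p=P_s(s, a, w))
print("episode return (one draw of sum_h r_h):", total)
```

In the offline setting, the same simulator would instead draw $a \sim \nu_h(\cdot \,|\, s, w)$, which is exactly where the confounding enters.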
Our goal is to design an algorithm that minimizes the regret defined in (2.3), where $\pi^*$ does not depend on the confounders $\{w_h\}_{h \in [H]}$. In the online setting that allows for intervention, it is well understood how to minimize such a regret (Jaksch et al., 2010; Azar et al., 2017; Jin et al., 2018, 2019). However, it remains unclear how to efficiently utilize the observational data obtained in the offline setting, which are possibly confounded. In real-world applications, e.g., autonomous driving and personalized medicine, such observational data are often abundant, whereas intervention in the online setting is often restricted.
Why is Incorporating Confounded Observational Data Challenging? Straightforwardly incorporating the confounded observational data into an online algorithm possibly leads to an undesirable regret due to the mismatch between the online and offline data generating processes. In particular, due to the existence of the confounders $\{w_h\}_{h \in [H]}$, which are partially observed (§3) or unobserved (§4), the conditional probability $P(s_{h+1} \,|\, s_h, a_h)$ in the offline setting is different from the causal effect $P(s_{h+1} \,|\, s_h, \mathrm{do}(a_h))$ in the online setting (Peters et al., 2017). More specifically, it holds that
$$P(s_{h+1} \,|\, s_h, a_h) = \frac{\mathbb{E}_{w_h \sim \widetilde{P}_h(\cdot \,|\, s_h)}\big[P_h(s_{h+1} \,|\, s_h, a_h, w_h) \cdot \nu_h(a_h \,|\, s_h, w_h)\big]}{\mathbb{E}_{w_h \sim \widetilde{P}_h(\cdot \,|\, s_h)}\big[\nu_h(a_h \,|\, s_h, w_h)\big]},$$
$$P\big(s_{h+1} \,\big|\, s_h, \mathrm{do}(a_h)\big) = \mathbb{E}_{w_h \sim \widetilde{P}_h(\cdot \,|\, s_h)}\big[P_h(s_{h+1} \,|\, s_h, a_h, w_h)\big].$$
In other words, without proper covariate adjustments (Pearl, 2009), the confounded observational data may not be informative for estimating the transition dynamics and the associated action-value function in the online setting. To this end, we propose an algorithm that incorporates the confounded observational data in a provably efficient manner. Moreover, our analysis quantifies the amount of information carried over by the confounded observational data from the offline setting and to what extent it helps reduce the regret in the online setting. In what follows, we discuss the connection between confounded MDP and other extensions of MDP and SCM.
• Dynamic Treatment Regimes (DTR). In a DTR (Zhang and Bareinboim, 2019), all the states $\{s_h\}_{h \in [H]}$ are confounded by a common confounder $w$, whereas in a confounded MDP, each state $s_h$ depends on an individual confounder $w_{h-1}$, which further depends on the previous state $s_{h-1}$. If $w_{h-1}$ does not depend on $s_{h-1}$, the confounded MDP reduces to a DTR by summarizing the confounders into $w = (w_1, \ldots, w_H)$.
• Contextual MDP (CMDP). A confounded MDP is similar to a CMDP (Hallak et al., 2015) if we cast the confounders $\{w_h\}_{h \in [H]}$ as the context therein. In a CMDP, which focuses on the online setting, the context is fixed throughout an episode, whereas in a confounded MDP, the confounders $\{w_h\}_{h \in [H]}$ vary across the $H$ steps. Moreover, in a CMDP, the goal is to minimize the regret against the globally optimal policy that depends on the context, which is a stronger benchmark than $\pi^*$ in (2.3), since $\pi^*$ does not depend on the confounders $\{w_h\}_{h \in [H]}$.
• Partially Observable MDP (POMDP). A confounded MDP is a simplified POMDP (Tennenholtz et al., 2019) if we cast the confounders $\{w_h\}_{h \in [H]}$ as the hidden states therein (assuming that the confounders are unobserved in the offline setting as in §4).
A POMDP is more challenging to solve, since marginalizing over the hidden states does not yield an MDP, which is the case in a confounded MDP.

3 Algorithm and Theory for Partially Observed Confounder
In this section, we propose the Deconfounded Optimistic Value Iteration (DOVI) algorithm. DOVI handles the case where the confounders are unobserved in the online setting but are partially observed in the offline setting. We then characterize the regret of DOVI.

3.1 Algorithm
Backdoor Adjustment. In the online setting that allows for intervention, the causal effect of $a_h$ on $s_{h+1}$ given $s_h$, that is, $P(s_{h+1} \,|\, s_h, \mathrm{do}(a_h))$, plays a key role in the estimation of the action-value function. Meanwhile, the confounded observational data may not allow us to identify the causal effect $P(s_{h+1} \,|\, s_h, \mathrm{do}(a_h))$ if the confounder $w_h$ is unobserved. However, if the confounder $w_h$ is partially observed in the offline setting, the observed subset $u_h$ of $w_h$ allows us to identify the causal effect $P(s_{h+1} \,|\, s_h, \mathrm{do}(a_h))$, as long as $u_h$ satisfies the following backdoor criterion.

Assumption 3.1 (Backdoor Criterion (Pearl, 2009; Peters et al., 2017)). In the SCM defined in §2 and its induced directed acyclic graph (DAG), for all $h \in [H]$, there exists an observed subset $u_h$ of $w_h$ that satisfies the backdoor criterion, that is,
• the elements of $u_h$ are not the descendants of $a_h$, and
• conditioning on $s_h$, the elements of $u_h$ d-separate every path between $a_h$ and $s_{h+1}$ that has an incoming arrow into $a_h$.

See Figure 2 for an example that satisfies the backdoor criterion. In particular, we identify the causal effect $P(s_{h+1} \,|\, s_h, \mathrm{do}(a_h))$ as follows.

Proposition 3.2 (Backdoor Adjustment (Pearl, 2009)). Under Assumption 3.1, it holds for all $h \in [H]$ that
$$P\big(s_{h+1} \,\big|\, s_h, \mathrm{do}(a_h)\big) = \mathbb{E}_{u_h \sim P(\cdot \,|\, s_h)}\big[P(s_{h+1} \,|\, s_h, a_h, u_h)\big],$$
$$\mathbb{E}\big[r_h(s_h, a_h, w_h) \,\big|\, s_h, \mathrm{do}(a_h)\big] = \mathbb{E}_{u_h \sim P(\cdot \,|\, s_h)}\Big[\mathbb{E}\big[r_h(s_h, a_h, w_h) \,\big|\, s_h, a_h, u_h\big]\Big].$$
Here $(s_{h+1}, s_h, a_h, u_h)$ follows the SCM defined in §2, which generates the confounded observational data.
Proof. See Pearl (2009) for a detailed proof.

[Figure 2: An illustration of the backdoor criterion. The causal diagram corresponds to the $h$-th step of the confounded MDP conditioning on $s_h$. Here $w_h = \{w_{1,h}, w_{2,h}, w_{3,h}, u_{1,h}, u_{2,h}\}$ is the confounder and the subset $u_h = \{u_{1,h}, u_{2,h}\}$ satisfies the backdoor criterion.]

With a slight abuse of notation, we write $P(s_{h+1} \,|\, s_h, a_h, u_h)$ as $P_h(s_{h+1} \,|\, s_h, a_h, u_h)$ and $P(u_h \,|\, s_h)$ as $\widetilde{P}_h(u_h \,|\, s_h)$, since they are induced by the SCM defined in §2. In the sequel, we denote by $\mathcal{U}$ the space of the observed subset $u_h$ and write $r_h = r_h(s_h, a_h, w_h)$ for notational simplicity.
Backdoor-Adjusted Bellman Equation. We now formulate the Bellman equation for the confounded MDP. It holds for all $(s_h, a_h) \in \mathcal{S} \times \mathcal{A}$ that
$$Q^\pi_h(s_h, a_h) = \mathbb{E}_\pi\bigg[\sum_{j=h}^{H} r_j(s_j, a_j, w_j) \,\bigg|\, s_h, \mathrm{do}(a_h)\bigg] = \mathbb{E}\big[r_h \,\big|\, s_h, \mathrm{do}(a_h)\big] + \mathbb{E}_{s_{h+1}}\big[V^\pi_{h+1}(s_{h+1})\big],$$
where $\mathbb{E}_{s_{h+1}}$ denotes the expectation with respect to $s_{h+1} \sim P(\cdot \,|\, s_h, \mathrm{do}(a_h))$. Here $\mathbb{E}[r_h \,|\, s_h, \mathrm{do}(a_h)]$ and $P(\cdot \,|\, s_h, \mathrm{do}(a_h))$ are characterized in Proposition 3.2.
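To see the adjustment in numbers, here is a minimal numpy sketch (a one-step toy with a fixed $s_h$; the kernels and behavior policy are our own assumptions) contrasting the confounded conditional $P(s_{h+1} \,|\, s_h, a_h)$ with the backdoor-adjusted causal effect of Proposition 3.2:

```python
import numpy as np

# Toy numeric check of Proposition 3.2: a single step with binary u, a, s'
# at a fixed state s. All kernels below are arbitrary assumptions.
p_u = np.array([0.3, 0.7])                     # \tilde P_h(u | s)
nu = np.array([[0.9, 0.1],                     # behavior policy nu(a | s, u)
               [0.2, 0.8]])                    # rows: u, cols: a
P = np.array([[[0.8, 0.2], [0.5, 0.5]],        # P(s' | s, a, u); axes (u, a, s')
              [[0.3, 0.7], [0.1, 0.9]]])

a = 0
# Causal effect via the backdoor adjustment: sum_u P(u|s) P(s'|s,a,u).
p_do = np.einsum('u,us->s', p_u, P[:, a, :])
# Confounded conditional from observational data:
# P(s'|s,a) = sum_u P(u|s) nu(a|s,u) P(s'|s,a,u) / sum_u P(u|s) nu(a|s,u).
p_cond = np.einsum('u,u,us->s', p_u, nu[:, a], P[:, a, :]) / (p_u @ nu[:, a])
print("P(s'|s,do(a)) =", np.round(p_do, 3))
print("P(s'|s,a)     =", np.round(p_cond, 3))
```

With these numbers the two distributions differ ($[0.629, 0.371]$ versus $[0.45, 0.55]$), which is exactly the confounding bias that DOVI corrects for.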
In the sequel, we de\ufb01ne the following transition operator and counterfactual reward function, (PhV )(sh, ah) = Esh+1\u223cP(\u00b7 | sh,do(ah)) \u0002 V (sh+1) \u0003 , \u2200V : S 7\u2192R, (sh, ah) \u2208S \u00d7 A, (3.1) Rh(sh, ah) = E \u0002 rh \f \f sh, do(ah) \u0003 , \u2200(sh, ah) \u2208S \u00d7 A. (3.2) We have the following Bellman equation, Q\u03c0 h(sh, ah) = Rh(sh, ah) + (PhV \u03c0 h+1)(sh, ah), \u2200h \u2208[H], (sh, ah) \u2208S \u00d7 A. (3.3) 8 \fCorrespondingly, the Bellman optimality equation takes the following form, Q\u2217 h(sh, ah) = Rh(sh, ah) + (PhV \u2217 h+1)(sh, ah), V \u2217 h (sh) = max ah\u2208A Q\u2217 h(sh, ah), (3.4) which holds for all h \u2208[H] and (sh, ah) \u2208S \u00d7 A. Such a Bellman optimality equation allows us to adapt the least-squares value iteration (LSVI) algorithm (Bradtke and Barto, 1996; Jaksch et al., 2010; Osband et al., 2014; Azar et al., 2017; Jin et al., 2019). Linear Function Approximation. We focus on the following setting with linear transition kernels and reward functions (Yang and Wang, 2019a,b; Jin et al., 2019; Cai et al., 2019), which corresponds to a linear SCM (Peters et al., 2017). Assumption 3.3 (Linear Confounded MDP). We assume that Ph(sh+1 | sh, ah, uh) = \u27e8\u03c6h(sh, ah, uh), \u00b5h(sh+1)\u27e9, \u2200h \u2208[H], (sh+1, sh, ah) \u2208S \u00d7 S \u00d7 A, where \u03c6h(\u00b7, \u00b7, \u00b7) and \u00b5h(\u00b7) = (\u00b51,h(\u00b7), . . . , \u00b5d,h(\u00b7))\u22a4are Rd-valued functions. We assume that Pd i=1 \u2225\u00b5i,h\u22252 1 \u2264 d and \u2225\u03c6h(sh, ah, uh)\u22252 \u22641 for all h \u2208[H] and (sh, ah, uh) \u2208S \u00d7 A \u00d7 U. Meanwhile, we assume that E[rh | sh, ah, uh] = \u03c6h(sh, ah, uh)\u22a4\u03b8h, \u2200h \u2208[H], (sh, ah, uh) \u2208S \u00d7 A \u00d7 U, (3.5) where \u03b8h \u2208Rd and \u2225\u03b8h\u22252 \u2264 \u221a d for all h \u2208[H]. Such a linear setting generalizes the tabular setting where S, A, and U are \ufb01nite. Proposition 3.4. We de\ufb01ne the backdoor-adjusted feature as follows, \u03c8h(sh, ah) = Euh\u223ce Ph(\u00b7 | sh) \u0002 \u03c6h(sh, ah, uh) \u0003 , \u2200h \u2208[H], (sh, ah) \u2208S \u00d7 A. (3.6) Under Assumption 3.1, it holds that P(sh+1 | sh, do(ah)) = \u27e8\u03c8h(sh, ah), \u00b5h(sh+1)\u27e9, \u2200h \u2208[H], (sh+1, sh, ah) \u2208S \u00d7 S \u00d7 A. Moreover, the action-value functions Q\u03c0 h and Q\u2217 h are linear in the backdoor-adjusted feature \u03c8h for all \u03c0. Proof. See \u00a7A.1 for a detailed proof. Such an observation allows us to estimate the action-value function based on the backdooradjusted features {\u03c8h}h\u2208[H] in the online setting. See \u00a75 for a detailed discussion. In the sequel, we assume that either the density of { e Ph(\u00b7 | sh)}h\u2208[H] is known or the backdoor-adjusted feature {\u03c8h}h\u2208[H] is know. In the sequel, we introduce the DOVI algorithm (Algorithm 1). Each iteration of DOVI consists of two components, namely point estimation, where we estimate Q\u2217 h based on the confounded observational data and the interventional data, and uncertainty quanti\ufb01cation, where we construct the upper con\ufb01dence bound (UCB) of the point estimator. 9 \fAlgorithm 1 Deconfounded Optimistic Value Iteration (DOVI) for Confounded MDP Require: Observational data {(si h, ai h, ui h, ri h)}i\u2208[n],h\u2208[H], tuning parameters \u03bb, \u03b2 > 0, backdooradjusted feature {\u03c8h}h\u2208[H], which is de\ufb01ned in (3.6). 
1: Initialization: Set {Q0 h, V 0 h }h\u2208[H] as zero functions and V k H+1 as a zero function for k \u2208[K]. 2: for k = 1, . . . , K do 3: for h = H, . . . , 1 do 4: Set \u03c9k h \u2190argmin\u03c9\u2208Rd Pk\u22121 \u03c4=1(r\u03c4 h + V \u03c4 h+1(s\u03c4 h+1) \u2212\u03c9\u22a4\u03c8h(s\u03c4 h, a\u03c4 h))2 + \u03bb\u2225\u03c9\u22252 2 + Lk h(\u03c9), where Lk h is de\ufb01ned in (3.8). 5: Set Qk h(\u00b7, \u00b7) \u2190min{\u03c8h(\u00b7, \u00b7)\u22a4\u03c9k h + \u0393k h(\u00b7, \u00b7), H \u2212h}, where \u0393k h is de\ufb01ned in (3.12). 6: Set \u03c0k h(\u00b7 | sh) \u2190argmaxah\u2208A Qk h(sh, ah) for all sh \u2208S. 7: Set V k h (\u00b7) \u2190\u27e8\u03c0k h(\u00b7 | \u00b7), Qk h(\u00b7, \u00b7)\u27e9A. 8: end for 9: Obtain sk 1 from the environment. 10: for h = 1, . . . , H do 11: Take ak h \u223c\u03c0k h(\u00b7 | sk h). Obtain rk h = rh(sk h, ak h, uk h) and sk h+1. 12: end for 13: end for Point Estimation. To solve the Bellman optimality equation in (3.4), we minimize the empirical mean-squared Bellman error as follows at each step, \u03c9k h \u2190argmin \u03c9\u2208Rd k\u22121 X \u03c4=1 \u0000r\u03c4 h + V \u03c4 h+1(s\u03c4 h+1) \u2212\u03c9\u22a4\u03c8h(s\u03c4 h, a\u03c4 h) \u00012 + \u03bb\u2225\u03c9\u22252 2 + Lk h(\u03c9), h = H, . . . , 1, (3.7) where we set V k H+1 = 0 for all k \u2208[K] and V \u03c4 h+1 is de\ufb01ned in Line 7 of Algorithm 1 for all (\u03c4, h) \u2208[K] \u00d7 [H \u22121]. Here k is the index of episode, \u03bb > 0 is a tuning parameter, and Lk h is a regularizer, which is constructed based on the confounded observational data. More speci\ufb01cally, we de\ufb01ne Lk h(\u03c9) = n X i=1 \u0000ri h + V k h+1(si h+1) \u2212\u03c9\u22a4\u03c6h(si h, ai h, ui h) \u00012, \u2200(k, h) \u2208[K] \u00d7 [H], (3.8) which corresponds to the least-squares loss for regressing ri h + V k h+1(si h+1) against \u03c6h(si h, ai h, ui h) for all i \u2208[n]. Here {(si h, ai h, ui h, ri h)}(i,h)\u2208[n]\u00d7[H] are the confounded observational data, where ui h \u223ce Ph(\u00b7 | si h), si h+1 \u223cPh(\u00b7 | si h, ai h, ui h), and ai h \u223c\u03bdh(\u00b7 | si h, wi h) with \u03bd = {\u03bdh}h\u2208[H] being the behavior policy. Here recall that, with a slight abuse of notation, we write P(sh+1 | sh, ah, uh) as Ph(sh+1 | sh, ah, uh) and P(uh | sh) as e Ph(uh | sh), since they are induced by the SCM de\ufb01ned in \u00a72. The update in (3.7) takes the following explicit form, \u03c9k h \u2190(\u039bk h)\u22121 \u0012 k\u22121 X \u03c4=1 \u03c8h(s\u03c4 h, a\u03c4 h) \u00b7 \u0000V k h+1(s\u03c4 h+1) + r\u03c4 h \u0001 + n X i=1 \u03c6h(si h, ai h, ui h) \u00b7 \u0000V k h+1(si h+1) + ri h \u0001\u0013 , (3.9) 10 \fwhere \u039bk h = k\u22121 X \u03c4=1 \u03c8h(s\u03c4 h, a\u03c4 h)\u03c8h(s\u03c4 h, a\u03c4 h)\u22a4+ n X i=1 \u03c6h(si h, ai h, ui h)\u03c6h(si h, ai h, ui h)\u22a4+ \u03bbI. (3.10) Uncertainty Quanti\ufb01cation. We now construct the UCB \u0393k h(\u00b7, \u00b7) of the point estimator \u03c8h(\u00b7, \u00b7)\u22a4\u03c9k h obtained from (3.9), which encourages the exploration of the less visited state-action pairs. To this end, we employ the following notion of information gain to motivate the UCB, \u0393k h(sk h, ak h) \u221dH(\u03c9k h | \u03bek\u22121) \u2212H \u0000\u03c9k h | \u03bek\u22121 \u222a{(sk h, ak h)} \u0001 , (3.11) where H(\u03c9k h | \u03bek\u22121) is the di\ufb00erential entropy of the random variable \u03c9k h given the data \u03bek\u22121. 
In particular, \u03bek\u22121 = {(s\u03c4 h, a\u03c4 h, r\u03c4 h)}(\u03c4,h)\u2208[k\u22121]\u00d7[H] \u222a{(si h, ai h, ui h, ri h)}(i,h)\u2208[n]\u00d7[H] consists of the confounded observational data and the interventional data up to the (k \u22121)-th episode. However, it is challenging to characterize the distribution of \u03c9k h. To this end, we consider a Bayesian counterpart of the confounded MDP, where the prior of \u03c9k h is N(0, \u03bbI) and the residual of the regression problem in (3.7) is N(0, 1). In such a \u201cparallel\u201d confounded MDP, the posterior of \u03c9k h follows N(\u00b5k,h, (\u039bk h)\u22121), where \u039bk h is de\ufb01ned in (3.10) and \u00b5k,h coincides with the right-hand side of (3.9). Moreover, it holds for all (sk h, ak h) \u2208S \u00d7 A that H(\u03c9k h | \u03bek\u22121) = 1/2 \u00b7 log det \u0000(2\u03c0e)d \u00b7 (\u039bk h)\u22121\u0001 , H \u0000\u03c9k h \f \f \u03bek\u22121 \u222a{(sk h, ak h)} \u0001 = 1/2 \u00b7 log det \u0010 (2\u03c0e)d \u00b7 \u0000\u039bk h + \u03c8h(sk h, ak h)\u03c8h(sk h, ak h)\u22a4\u0001\u22121\u0011 . Correspondingly, we employ the following UCB, which instantiates (3.11), that is, \u0393k h(sk h, ak h) = \u03b2 \u00b7 \u0010 log det \u0000\u039bk h + \u03c8h(sk h, ak h)\u03c8h(sk h, ak h)\u22a4\u0001 \u2212log det(\u039bk h) \u00111/2 (3.12) for all (sk h, ak h) \u2208S \u00d7 A. Here \u03b2 > 0 is a tuning parameter. We highlight that, although the information gain in (3.11) relies on the \u201cparallel\u201d confounded MDP, the UCB in (3.12), which is used in Line 5 of Algorithm 1, does not rely on the Bayesian perspective. Also, our analysis establishes the frequentist regret. Regularization with Observational Data: A Bayesian Perspective. In the \u201cparallel\u201d confounded MDP, it holds that \u03c9k h \u223cN(0, \u03bbI), \u03c9k h | \u03be0 \u223cN \u0000\u00b51,h, (\u039b1 h)\u22121\u0001 , \u03c9k h | \u03bek\u22121 \u223cN \u0000\u00b5k,h, (\u039bk h)\u22121\u0001 , where \u00b5k,h coincides with the right-hand side of (3.9) and \u00b51,h is de\ufb01ned by setting k = 1 in \u00b5k,h. Here \u03be0 = {(si h, ai h, ui h, ri h)}(i,h)\u2208[n]\u00d7[H] are the confounded observational data. Hence, the regularizer Lk h in (3.8) corresponds to using \u03c9k h | \u03be0 as the prior for the Bayesian regression problem given only the interventional data \u03bek\u22121 \\ \u03be0 = {(s\u03c4 h, a\u03c4 h, r\u03c4 h)}(\u03c4,h)\u2208[k\u22121]\u00d7[H]. 3.2 Theory The following theorem characterizes the regret of DOVI, which is de\ufb01ned in (2.3). 11 \fTheorem 3.5 (Regret of DOVI). Let \u03b2 = CdH p log(d(T + nH)/\u03b6) and \u03bb = 1, where C > 0 and \u03b6 \u2208(0, 1] are absolute constants. Under Assumptions 3.1 and 3.3, it holds with probability at least 1 \u22125\u03b6/2 that Regret(T) \u2264C\u2032 \u00b7 \u2206H \u00b7 \u221a d3H3T \u00b7 q log \u0000d(T + nH)/\u03b6 \u0001 , (3.13) where C\u2032 > 0 is an absolute constant and \u2206H = 1 \u221a dH2 H X h=1 \u0000log det(\u039bK+1 h ) \u2212log det(\u039b1 h) \u00011/2. (3.14) Proof. See \u00a7A.3 for a detailed proof. Note that \u039bK+1 h \u2aaf(n + K + \u03bb)I and \u039b1 h \u2ab0\u03bbI for all h \u2208[H]. Hence, it holds that \u2206H = O( p log(n + K + 1)) in the worst case. Thus, the regret of DOVI is O( \u221a d3H3T) up to logarithmic factors, which is optimal in the total number of steps T if we only consider the online setting. 
However, \u2206H is possibly much smaller than O( p log(n + K + 1)), depending on the amount of information carried over by the confounded observational data from the o\ufb04ine setting, which is quanti\ufb01ed in the following. Interpretation of \u2206H: An Information-Theoretic Perspective. Let \u03c9\u2217 h be the parameter of the globally optimal action-value function Q\u2217 h, which corresponds to \u03c0\u2217in (2.3). Recall that we denote by \u03be0 and \u03beK the confounded observational data {(si h, ai h, ui h, ri h)}(i,h)\u2208[n]\u00d7[H] and the union {(si h, ai h, ui h, ri h)}(i,h)\u2208[n]\u00d7[H] \u222a{(sk h, ak h, rk h)}(k,h)\u2208[K]\u00d7[H] of the confounded observational data and the interventional data up to the K-th episode, respectively. We consider the aforementioned Bayesian counterpart of the confounded MDP, where the prior of \u03c9\u2217 h is also N(0, \u03bbI). In such a \u201cparallel\u201d confounded MDP, we have \u03c9\u2217 h \u223cN(0, \u03bbI), \u03c9\u2217 h | \u03be0 \u223cN \u0000\u00b5\u2217 1,h, (\u039b1 h)\u22121\u0001 , \u03c9\u2217 h | \u03beK \u223cN \u0000\u00b5\u2217 K,h, (\u039bK+1 h )\u22121\u0001 , (3.15) where \u00b5\u2217 1,h = (\u039b1 h)\u22121 n X i=1 \u03c6h(si h, ai h, ui h) \u00b7 \u0000V \u2217 h+1(si h+1) + ri h \u0001 , \u00b5\u2217 K,h = (\u039bK+1 h )\u22121 \u0012 \u039b1 h\u00b5\u2217 1,h + K X \u03c4=1 \u03c8h(s\u03c4 h, a\u03c4 h) \u00b7 \u0000V \u2217 h+1(s\u03c4 h+1) + r\u03c4 h \u0001\u0013 . It then holds for the right-hand side of (3.14) that 1/2 \u00b7 log det(\u039bK+1 h ) \u22121/2 \u00b7 log det(\u039b1 h) = H(\u03c9\u2217 h | \u03be0) \u2212H(\u03c9\u2217 h | \u03beK). (3.16) The left-hand side of (3.16) characterizes the information gain of intervention in the online setting given the confounded observational data in the o\ufb04ine setting. In other words, if the confounded observational data are su\ufb03ciently informative upon the backdoor adjustment, then \u2206H is small, which implies that the regret is small. More speci\ufb01cally, the matrices (\u039b1 h)\u22121 and (\u039bK+1 h )\u22121 de\ufb01ned in (3.10) characterize the ellipsoidal con\ufb01dence sets given \u03be0 and \u03beK, respectively. If the confounded observational data are su\ufb03ciently informative upon the backdoor adjustment, 12 \f\u039bK+1 h is close to \u039b1 h. To illustrate, let {\u03c8h(s\u03c4 h, a\u03c4 h)}(\u03c4,h)\u2208[K]\u00d7[H] and {\u03c6h(si h, ai h, ui h)}(i,h)\u2208[n]\u00d7[H] be sampled uniformly at random from the canonical basis {e\u2113}\u2113\u2208[d] of Rd. It then holds that \u039bK+1 h \u2248(K + n)I/d + \u03bbI and \u039b1 h \u2248nI/d + \u03bbI. Hence, for \u03bb = 1 and su\ufb03ciently large n and K, we have \u2206H = O( p log(1 + K/(n + d))) = O( p K/(n + d)). For example, for n = \u2126(K2), it holds that \u2206H = O(n\u22121/2), which implies that the regret of DOVI is O(n\u22121/2 \u00b7 \u221a d3H3T). In other words, if the confounded observational data are su\ufb03ciently informative upon the backdoor adjustment, the regret of DOVI can be arbitrarily small given a su\ufb03ciently large sample size n of the confounded observational data, which is often the case in practice (Murphy, 2003; Chakraborty and Murphy, 2014; de Haan et al., 2019; Li et al., 2020; Levine et al., 2020). 4 Algorithm and Theory for Unobserved Confounder In this section, we extend DOVI to handle the case where the confounders are unobserved in both the online setting and the o\ufb04ine setting. 
We then characterize the regret of such an extension of DOVI, namely DOVI+. In comparison with DOVI, DOVI+ additionally incorporates an intermediate state at each step, which extends the length of each episode from $H$ to $2H$. 4.1 Algorithm Frontdoor Adjustment. Since the confounders $\{w_h\}_{h \in [H]}$ are unobserved in the offline setting, the confounded observational data $\{(s^i_h, a^i_h, r^i_h)\}_{(i,h) \in [n] \times [H]}$ are insufficient for the identification of the causal effect $P(s_{h+1} \mid s_h, \mathrm{do}(a_h))$ (Pearl, 2009; Peters et al., 2017). However, such a causal effect is identifiable if we observe the intermediate states $\{m_h\}_{h \in [H]}$ that satisfy the following frontdoor criterion. Assumption 4.1 (Frontdoor Criterion (Pearl, 2009; Peters et al., 2017)). In the SCM defined in \u00a72, for all $h \in [H]$, there additionally exists an observed intermediate state $m_h$ that satisfies the frontdoor criterion, that is, \u2022 $m_h$ intercepts every directed path from $a_h$ to $s_{h+1}$, \u2022 conditioning on $s_h$, no path between $a_h$ and $m_h$ has an incoming arrow into $a_h$, and \u2022 conditioning on $s_h$, $a_h$ d-separates every path between $m_h$ and $s_{h+1}$ that has an incoming arrow into $m_h$. See Figure 3 for the causal diagram that describes such an SCM and Figure 4 for an example that satisfies the frontdoor criterion. Intuitively, Assumption 4.1 ensures that, conditioning on $s_h$, (i) the intermediate state $m_h$ is caused by the action $a_h$ and the causal effect of the action $a_h$ on the next state $s_{h+1}$ is summarized by $m_h$, while (ii) the action $a_h$ and the intermediate state $m_h$ are not confounded. In the sequel, we denote by $M$ the space of intermediate states and by $\breve{P}_h(\cdot \mid \cdot, \cdot)$ the transition kernel that determines $m_h$ given $s_h$ and $a_h$. The causal effect $P(s_{h+1} \mid s_h, \mathrm{do}(a_h))$ is identified as follows. Proposition 4.2 (Frontdoor Adjustment (Pearl, 2009)). Under Assumption 4.1, it holds that $P(s_{h+1} \mid s_h, \mathrm{do}(a_h)) = \mathbb{E}_{m_h, a'_h}[P(s_{h+1} \mid s_h, a'_h, m_h)]$, where the expectation $\mathbb{E}_{m_h, a'_h}$ is taken with respect to $m_h \sim \breve{P}_h(\cdot \mid s_h, a_h)$ and $a'_h \sim \mathbb{E}_{w_h \sim \widetilde{P}_h(\cdot \mid s_h)}[\nu_h(\cdot \mid s_h, w_h)]$. Here $(s_{h+1}, s_h, a_h, m_h)$ follows the SCM defined in \u00a72 with the intermediate states $\{m_h\}_{h \in [H]}$ in the offline setting. [Figure 3: Causal diagrams of the $h$-th step of the confounded MDP with the intermediate state (a) in the offline setting and (b) in the online setting, respectively.] [Figure 4: An illustration of the frontdoor criterion. The causal diagram corresponds to the $h$-th step of the confounded MDP conditioning on $s_h$. Here $w_h = \{w_{1,h}, w_{2,h}, w_{3,h}\}$ is the confounder and the intermediate state $m_h$ satisfies the frontdoor criterion.] Frontdoor-Adjusted Bellman Equation. In the sequel, we assume without loss of generality that the reward $r_h$ is deterministic and only depends on the state $s_h$ and the action $a_h$. In parallel to (3.3), we have $Q^\pi_h(s_h, a_h) = r_h(s_h, a_h) + \mathbb{E}_{s_{h+1}}[V^\pi_{h+1}(s_{h+1})]$, (4.1) where the expectation $\mathbb{E}_{s_{h+1}}$ is taken with respect to $s_{h+1} \sim P(\cdot \mid s_h, \mathrm{do}(a_h))$.
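Analogously, Proposition 4.2 can be sanity-checked numerically. In the sketch below (again a hypothetical SCM with illustrative numbers, not taken from the paper), the confounder $w$ drives both the action and the next state but is never used by the estimator; the frontdoor formula recovers the causal effect from $(a, m, s')$ samples alone.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 1_000_000

# Hypothetical SCM, conditioning on a fixed s (numbers are illustrative):
# w ~ Bern(0.5); a ~ nu(.|s,w); m ~ P(.|s,a); s' ~ P(.|s,m,w).
w = rng.binomial(1, 0.5, size=n)                  # never observed below
a = rng.binomial(1, np.where(w == 1, 0.2, 0.8))
m = rng.binomial(1, np.where(a == 1, 0.9, 0.1))
s_next = rng.binomial(1, 0.2 + 0.3 * m + 0.4 * w)

# Ground truth: do(a=1) fixes P(m|s,a=1) but leaves w at its prior Bern(0.5).
truth = 0.9 * (0.2 + 0.3 + 0.4 * 0.5) + 0.1 * (0.2 + 0.4 * 0.5)

# Frontdoor adjustment (Proposition 4.2), using only (a, m, s'):
# P(s'|s,do(a)) = sum_m P(m|s,a) * sum_{a'} P(a'|s) * P(s'|s,a',m).
effect = 0.0
for mm in (0, 1):
    p_m = (m[a == 1] == mm).mean()
    inner = sum((a == aa).mean() * s_next[(a == aa) & (m == mm)].mean()
                for aa in (0, 1))
    effect += p_m * inner

print(f"frontdoor estimate ~ {effect:.3f} (ground truth {truth:.3f})")  # ~0.67
```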
We define the following transition operators, $(\mathbb{P}_{h+1/2} V)(s_h, m_h) = \mathbb{E}_{s_{h+1} \sim P(\cdot \mid s_h, \mathrm{do}(m_h))}[V(s_{h+1})]$ for all $V : S \mapsto \mathbb{R}$ and $(s_h, m_h) \in S \times M$, and $(\mathbb{P}_h \widetilde{V})(s_h, a_h) = \mathbb{E}_{m_h \sim P(\cdot \mid s_h, \mathrm{do}(a_h))}[\widetilde{V}(s_h, m_h)]$ for all $\widetilde{V} : S \times M \mapsto \mathbb{R}$ and $(s_h, a_h) \in S \times A$. We highlight that, under Assumption 4.1, the causal effect $P(m_h \mid s_h, \mathrm{do}(a_h))$ coincides with the conditional probability $P(m_h \mid s_h, a_h)$, since $a_h$ and $m_h$ are not confounded given $s_h$. In the sequel, we define the value function at the intermediate state by $V^\pi_{h+1/2}(s_h, m_h) = (\mathbb{P}_{h+1/2} V^\pi_{h+1})(s_h, m_h)$. We have the following Bellman equation, $Q^\pi_h(s_h, a_h) = r_h(s_h, a_h) + (\mathbb{P}_h(\mathbb{P}_{h+1/2} V^\pi_{h+1}))(s_h, a_h) = r_h(s_h, a_h) + (\mathbb{P}_h V^\pi_{h+1/2})(s_h, a_h)$. (4.2) Correspondingly, the Bellman optimality equation takes the following form, $Q^*_h(s_h, a_h) = r_h(s_h, a_h) + (\mathbb{P}_h V^*_{h+1/2})(s_h, a_h)$, $V^*_{h+1/2}(s_h, m_h) = (\mathbb{P}_{h+1/2} V^*_{h+1})(s_h, m_h)$, $V^*_h(s_h) = \max_{a_h \in A} Q^*_h(s_h, a_h)$. (4.3) Linear Function Approximation. In parallel to Assumption 3.3, we focus on the following setting with linear transition kernels and reward functions (Yang and Wang, 2019a,b; Jin et al., 2019; Cai et al., 2019), which corresponds to a linear SCM (Peters et al., 2017). Assumption 4.3 (Linear Confounded MDP). We assume that $P_h(s_{h+1} \mid s_h, m_h, w_h) = \langle \rho_h(s_h, m_h, w_h), \mu_h(s_{h+1}) \rangle$ for all $h \in [H]$ and $(s_h, m_h, w_h) \in S \times M \times W$, and $\breve{P}_h(m_h \mid s_h, a_h) = \langle \gamma_h(s_h, a_h), \breve{\mu}_h(m_h) \rangle$ for all $h \in [H]$ and $(m_h, s_h, a_h) \in M \times S \times A$, where $\rho_h(\cdot, \cdot, \cdot)$, $\gamma_h(\cdot, \cdot)$, $\mu_h(\cdot) = (\mu_{1,h}(\cdot), \ldots, \mu_{d,h}(\cdot))^\top$, and $\breve{\mu}_h(\cdot) = (\breve{\mu}_{1,h}(\cdot), \ldots, \breve{\mu}_{d,h}(\cdot))^\top$ are $\mathbb{R}^d$-valued functions. We assume that $\|\rho_h(s_h, m_h, w_h)\|_2 \leq 1$, $\|\gamma_h(s_h, a_h)\|_2 \leq 1$, $\sum_{i=1}^d \|\mu_{i,h}\|_1^2 \leq d$, and $\sum_{i=1}^d \|\breve{\mu}_{i,h}\|_1^2 \leq d$ for all $h \in [H]$ and $(s_h, a_h, m_h, w_h) \in S \times A \times M \times W$. Meanwhile, we assume that $r_h(s_h, a_h) = \gamma_h(s_h, a_h)^\top \theta_h$ for all $h \in [H]$, where $\theta_h \in \mathbb{R}^d$ and $\|\theta_h\|_2 \leq \sqrt{d}$ for all $h \in [H]$. Proposition 4.4. We define $\widetilde{\nu}_h(a_h \mid s_h) = \mathbb{E}_{w_h \sim \widetilde{P}_h(\cdot \mid s_h)}[\nu_h(a_h \mid s_h, w_h)]$, where $\nu = \{\nu_h\}_{h \in [H]}$ is the behavior policy. With a slight abuse of notation, we define the frontdoor-adjusted feature as follows, $\phi_h(s_h, a_h, m_h) = \mathbb{E}_{w_h \sim \widetilde{P}_h(\cdot \mid s_h)}[\rho_h(s_h, m_h, w_h) \cdot \nu_h(a_h \mid s_h, w_h)] / \widetilde{\nu}_h(a_h \mid s_h)$ for all $h \in [H]$. (4.4) Under Assumption 4.3, it holds that $P(s_{h+1} \mid s_h, a_h, m_h) = \langle \phi_h(s_h, a_h, m_h), \mu_h(s_{h+1}) \rangle$. (4.5) Proof. See \u00a7A.2 for a detailed proof. DOVI+: Update of $V^k_{h+1/2}$. With a slight abuse of notation, we define the following feature, $\psi_h(s_h, m_h) = \mathbb{E}_{w_h \sim \widetilde{P}_h(\cdot \mid s_h)}[\rho_h(s_h, m_h, w_h)]$. (4.6) Conditioning on the state $s_h$, the confounder $w_h$ satisfies the backdoor criterion for identifying the causal effect $P(s_{h+1} \mid s_h, \mathrm{do}(m_h))$, although it is unobserved. In the sequel, we assume that either the density of $\{\widetilde{P}_h(\cdot \mid s_h)\}_{h \in [H]}$ is known to us or the features $\{\phi_h\}_{h \in [H]}$ and $\{\psi_h\}_{h \in [H]}$ are known to us.
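When only the density of $\{\widetilde{P}_h(\cdot \mid s_h)\}_{h \in [H]}$ is known, the features in (4.4) and (4.6) can be approximated by Monte Carlo. The sketch below assumes hypothetical choices of $\rho_h$, $\nu_h$, and $\widetilde{P}_h$ (none of which come from the paper); the self-normalized importance ratio implements the division by $\widetilde{\nu}_h$ in (4.4).

```python
import numpy as np

rng = np.random.default_rng(2)
d = 8

def rho(s, m, w):
    """Hypothetical feature rho_h(s, m, w), normalized so ||rho||_2 <= 1."""
    z = np.cos(0.7 * s + 1.3 * m * np.arange(1, d + 1) + 0.5 * w)
    return z / np.linalg.norm(z)

def nu(a, s, w):
    """Hypothetical behavior policy nu_h(a | s, w) over two actions."""
    p1 = 1.0 / (1.0 + np.exp(-(s + w)))
    return p1 if a == 1 else 1.0 - p1

def features(s, a, m, n_mc=20_000):
    """Monte Carlo estimates of psi_h(s, m) in (4.6) and phi_h(s, a, m) in
    (4.4), sampling w ~ P~_h(.|s), assumed here to be N(s, 1)."""
    w = rng.normal(loc=s, scale=1.0, size=n_mc)
    R = np.stack([rho(s, m, wi) for wi in w])           # (n_mc, d)
    weights = np.array([nu(a, s, wi) for wi in w])      # nu_h(a | s, w_i)
    psi = R.mean(axis=0)                                # eq. (4.6)
    phi = (R * weights[:, None]).mean(axis=0) / weights.mean()  # eq. (4.4)
    return psi, phi

psi, phi = features(s=0.3, a=1, m=0.5)
print(np.round(psi, 3))
print(np.round(phi, 3))
```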
Following from (4.6), Proposition 3.2, and Assumption 4.3, it holds for all h \u2208[H] and (sh+1, sh, mh) \u2208S \u00d7 S \u00d7 M that P \u0000sh+1 \f \f sh, do(mh) \u0001 = \u27e8\u03c8h(sh, mh), \u00b5h(sh+1)\u27e9. (4.7) 15 \fAlgorithm 2 DOVI+ for Confounded MDP. Require: Observational data {(si h, ai h, mi h, ri h)}i\u2208[n],h\u2208[H], tuning parameters \u03bb, \u03b2 > 0, features {\u03c6h}h\u2208[H] and {\u03c8h}h\u2208[H], which are de\ufb01ned in (4.4) and (4.6), respectively. 1: Initialization: Set {Q0 h, V 0 h+1/2, V 0 h }h\u2208[H] as zero functions and V k H+1 as a zero function for k \u2208[K]. 2: for k = 1, . . . , K do 3: for h = H, . . . , 1 do 4: Update V k h+1/2: 5: Set \u03c9k 1,h \u2190argmin\u03c9\u2208Rd Pk\u22121 \u03c4=1(V \u03c4 h+1(s\u03c4 h+1) \u2212\u03c9\u22a4\u03c8h(s\u03c4 h, m\u03c4 h))2 + \u03bb\u2225\u03c9\u22252 2 + Lk 1,h(\u03c9), where Lk 1,h is de\ufb01ned in (4.9). 6: Set V k h+1/2(sh, mh) \u2190min{\u03c8h(sh, mh)\u22a4\u03c9k 1,h + \u0393k h+1/2(sh, mh), H \u2212h} for all (sh, mh) \u2208 S \u00d7 M, where \u0393k h+1/2 is de\ufb01ned in (4.12). 7: Update Qk h: 8: Set \u03c9k 2,h \u2190argmin\u03c9\u2208Rd Pk\u22121 \u03c4=1(rk h+V k h+1/2(s\u03c4 h, m\u03c4 h)\u2212\u03c9\u22a4\u03b3h(s\u03c4 h, a\u03c4 h))2+\u03bb\u2225\u03c9\u22252 2+Lk 2,h(\u03c9), where Lk 2,h is de\ufb01ned in (4.14). 9: Set Qk h(sh, ah) \u2190min{\u03b3h(sh, ah)\u22a4\u03c9k 2,h + \u0393k h(sh, ah), H \u2212h} for all (sh, ah) \u2208S \u00d7 A, where \u0393k h is de\ufb01ned in (4.15). 10: Update \u03c0k h and V k h : 11: Set \u03c0k h(\u00b7 | sh) \u2190argmaxah\u2208A Qk h(sh, ah) for all sh \u2208S. 12: Set V k h (\u00b7) \u2190\u27e8\u03c0k h(\u00b7 | \u00b7), Qk h(\u00b7, \u00b7)\u27e9A. 13: end for 14: Obtain sk 1 from the environment. 15: for h = 1, . . . , H do 16: Take ak h \u223c\u03c0k h(\u00b7 | sk h). Obtain rk h = rh(sk h, ak h), mk h, and sk h+1. 17: end for 18: end for 16 \fHence, by the Bellman equation and the Bellman optimality equation in (4.2) and (4.3), respectively, the value functions at the intermediate state V \u03c0 h+1/2 and V \u2217 h+1/2 are linear in the feature \u03c8h for all \u03c0. To solve for V \u2217 h+1/2 in the Bellman optimality equation in (4.3), we minimize the following empirical mean-squared Bellman error as follows at each step, \u03c9k 1,h \u2190argmin \u03c9\u2208Rd k\u22121 X \u03c4=1 \u0000V \u03c4 h+1(s\u03c4 h+1) \u2212\u03c9\u22a4\u03c8h(s\u03c4 h, m\u03c4 h) \u00012 + \u03bb\u2225\u03c9\u22252 2 + Lk 1,h(\u03c9), h = H, . . . , 1, (4.8) where we set V k H+1 = 0 for all k \u2208[K] and V \u03c4 h+1 is de\ufb01ned in Line 12 of Algorithm 2 for all (\u03c4, h) \u2208[K] \u00d7 [H \u22121]. Here k is the index of episode, \u03bb > 0 is a tuning parameter, and Lk 1,h is a regularizer, which is constructed based on the confounded observational data. More speci\ufb01cally, we de\ufb01ne Lk 1,h(\u03c9) = n X i=1 \u0000V \u03c4 h+1(si h+1) \u2212\u03c9\u22a4\u03c6h(si h, ai h, mi h) \u00012, \u2200(k, h) \u2208[K] \u00d7 [H], (4.9) which corresponds to the least-squares loss for regressing V \u03c4 h+1(si h+1) against \u03c6h(si h, ai h, mi h) for all i \u2208[n]. Here {(si h, ai h, mi h, ri h)}(i,h)\u2208[n]\u00d7[H] are the confounded observational data, where si h+1 \u223c Ph(\u00b7 | si h, ai h, wi h), mi h \u223c\u02d8 Ph(\u00b7 | si h, ai h), and ai h \u223c\u03bdh(\u00b7 | si h, wi h) with \u03bd = {\u03bdh}h\u2208[H] being the behavior policy. 
The update in (4.8) takes the following explicit form, \u03c9k 1,h \u2190(\u039bk 1,h)\u22121 \u0012 k\u22121 X \u03c4=1 \u03c8h(s\u03c4 h, m\u03c4 h) \u00b7 V k h+1(s\u03c4 h+1) + n X i=1 \u03c6h(si h, ai h, mi h) \u00b7 V k h+1(si h+1) \u0013 , (4.10) where \u039bk 1,h = k\u22121 X \u03c4=1 \u03c8h(s\u03c4 h, m\u03c4 h)\u03c8h(s\u03c4 h, m\u03c4 h)\u22a4+ n X i=1 \u03c6h(si h, ai h, mi h)\u03c6h(si h, ai h, mi h)\u22a4+ \u03bbI. (4.11) Meanwhile, we employ the following UCB of \u03c8h(sk h, mk h)\u22a4\u03c9k 1,h for all (sk h, mk h) \u2208S \u00d7 M, \u0393k h+1/2(sk h, mk h) = \u03b2 \u00b7 \u0010 log det \u0000\u039bk 1,h + \u03c8h(sk h, mk h)\u03c8h(sk h, mk h)\u22a4\u0001 \u2212log det(\u039bk 1,h) \u00111/2 . (4.12) The update of V k h+1/2 is de\ufb01ned in Line 6 of Algorithm 2. DOVI+: Update of Qk h. Upon obtaining V k h+1/2, we solve for Qk h by minimizing the following empirical mean-squared Bellman error as follows at each step, \u03c9k 2,h \u2190argmin \u03c9\u2208Rd k\u22121 X \u03c4=1 \u0000rk h + V k h+1/2(s\u03c4 h, m\u03c4 h) \u2212\u03c9\u22a4\u03b3h(s\u03c4 h, a\u03c4 h) \u00012 + \u03bb\u2225\u03c9\u22252 2 + Lk 2,h(\u03c9), h = H, . . . , 1. (4.13) Here Lk 2,h is a regularizer, which is de\ufb01ned as follows, Lk 2,h(\u03c9) = n X i=1 \u0000ri h + V k h+1/2(si h, mi h) \u2212\u03c9\u22a4\u03b3h(si h, ai h) \u00012, \u2200(k, h) \u2208[K] \u00d7 [H]. (4.14) 17 \fThe update in (4.13) takes the following explicit form, \u03c9k 2,h \u2190(\u039bk 2,h)\u22121 \u0012 k\u22121 X \u03c4=1 \u03b3h(s\u03c4 h, a\u03c4 h) \u00b7 \u0000V k h+1/2(s\u03c4 h, m\u03c4 h) + r\u03c4 h \u0001 + n X i=1 \u03b3h(si h, ai h) \u00b7 \u0000V k h+1/2(si h, mi h) + ri h \u0001\u0013 , where \u039bk 2,h = k\u22121 X \u03c4=1 \u03b3h(s\u03c4 h, a\u03c4 h)\u03b3h(s\u03c4 h, a\u03c4 h)\u22a4+ n X i=1 \u03b3h(si h, ai h)\u03b3h(si h, ai h)\u22a4+ \u03bbI. We employ the following UCB of \u03b3h(sk h, ak h)\u22a4\u03c9k 2,h for all (sk h, ak h) \u2208S \u00d7 A, \u0393k h(sk h, ak h) = \u03b2 \u00b7 \u0010 log det \u0000\u039bk 2,h + \u03b3h(sk h, ak h)\u03b3h(sk h, ak h)\u22a4\u0001 \u2212log det(\u039bk 2,h) \u00111/2 . (4.15) The update of Qk h is de\ufb01ned in Line 9 of Algorithm 2. 4.2 Theory In parallel to Theorem 3.5, the following theorem characterizes the regret of DOVI+, which is de\ufb01ned in (2.3) Theorem 4.5 (Regret of DOVI+). Let \u03b2 = CdH p log(d(T + nH)/\u03b6) and \u03bb = 1, where C > 0 and \u03b6 \u2208(0, 1] are absolute constants. Under Assumptions 4.1 and 4.3, it holds with probability at least 1 \u22125\u03b6 that Regret(T) \u2264C\u2032 \u00b7 (\u22061,H + \u22062,H) \u00b7 \u221a d3H3T \u00b7 q log \u0000d(T + nH)/\u03b6 \u0001 , where C\u2032 > 0 is an absolute constant and \u22061,H = 1 \u221a dH2 H X h=1 \u0000log det(\u039bK+1 1,h ) \u2212log det(\u039b1 1,h) \u00011/2, \u22062,H = 1 \u221a dH2 H X h=1 \u0000log det(\u039bK+1 2,h ) \u2212log det(\u039b1 2,h) \u00011/2. Proof. See \u00a7A.4 for a detailed proof. See the discussion of Theorem 3.5 in \u00a73, where \u2206H corresponds to \u22061,H and \u22062,H in Theorem 4.5. In particular, \u22061,H and \u22062,H admit the same information-theoretic interpretation. 5 Mechanism of Utilizing Confounded Observational Data In this section, we discuss the mechanism of incorporating the confounded observational data. 
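Before turning to the mechanism, the following sketch assembles one round of the two-stage DOVI+ update in code: the pooled ridge solutions mirror (4.10)-(4.11) and their second-stage analogue, and the bonuses (4.12) and (4.15) are computed in the equivalent elliptical form $\beta \sqrt{\log(1 + x^\top \Lambda^{-1} x)}$ given by the matrix determinant lemma. All shapes and data below are placeholders, not quantities from the paper.

```python
import numpy as np

def ridge_stats(X_on, y_on, X_off, y_off, lam=1.0):
    """Shared pattern of (4.10)-(4.11): ridge regression whose sufficient
    statistics pool online features with offline (observational) features."""
    d = X_on.shape[1]
    Lam = X_on.T @ X_on + X_off.T @ X_off + lam * np.eye(d)
    omega = np.linalg.solve(Lam, X_on.T @ y_on + X_off.T @ y_off)
    return omega, Lam

def bonus(Lam, x, beta):
    # log det(Lam + x x^T) - log det(Lam) = log(1 + x^T Lam^{-1} x).
    return beta * np.sqrt(np.log1p(x @ np.linalg.solve(Lam, x)))

def dovi_plus_step(Psi_on, v_on, Phi_off, v_off,     # stage 1: V_{h+1/2}
                   Gam_on, q_on, Gam_off, q_off,     # stage 2: Q_h
                   lam=1.0, beta=1.0, cap=1.0):
    """cap stands in for the truncation level H - h in Algorithm 2."""
    w1, Lam1 = ridge_stats(Psi_on, v_on, Phi_off, v_off, lam)
    w2, Lam2 = ridge_stats(Gam_on, q_on, Gam_off, q_off, lam)
    v_half = lambda psi: min(psi @ w1 + bonus(Lam1, psi, beta), cap)
    q_fun = lambda gam: min(gam @ w2 + bonus(Lam2, gam, beta), cap)
    return v_half, q_fun

# Placeholder data: k-1 online and n offline transitions, d features.
rng = np.random.default_rng(3)
k, n, d = 40, 400, 6
mk = lambda r: (rng.normal(size=(r, d)) / np.sqrt(d), rng.normal(size=r))
(P1, v1), (P2, v2), (P3, v3), (P4, v4) = mk(k), mk(n), mk(k), mk(n)
v_half, q_fun = dovi_plus_step(P1, v1, P2, v2, P3, v3, P4, v4, cap=5.0)
print(q_fun(rng.normal(size=d) / np.sqrt(d)))
```

In a full implementation, the stage-2 targets would be $r^\tau_h + V^k_{h+1/2}(s^\tau_h, m^\tau_h)$ rather than random placeholders, with `v_half` evaluated on the stage-1 features.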
18 \f5.1 Partially Observed Confounder Corresponding to Line 4 of Algorithm 1, DOVI e\ufb00ectively estimates the causal e\ufb00ect P(\u00b7 | sh, do(ah)) using \u03c8h(sh, ah)\u22a4(\u039bk h)\u22121 \u0012k\u22121 X \u03c4=1 \u03c8h(s\u03c4 h, a\u03c4 h) \u00b7 \u03b4s\u03c4 h+1(\u00b7) + n X i=1 \u03c6h(si h, ai h, ui h) \u00b7 \u03b4si h+1(\u00b7) \u0013 , (5.1) where we denote by \u03b4s(\u00b7) the Dirac measure at s. To see why it works, let the tuning parameter \u03bb be su\ufb03ciently small. By the de\ufb01nition of \u039bk h in (3.10), we have P \u0000\u00b7 \f \f sh, do(ah) \u0001 = \u27e8\u03c8h(sh, ah), \u00b5h(\u00b7)\u27e9 \u2248\u03c8h(sh, ah)\u22a4(\u039bk h)\u22121 \u0012k\u22121 X \u03c4=1 \u03c8h(s\u03c4 h, a\u03c4 h) \u00b7 \u27e8\u03c8h(s\u03c4 h, a\u03c4 h), \u00b5h(\u00b7)\u27e9 + n X i=1 \u03c6h(si h, ai h, ui h) \u00b7 \u27e8\u03c6h(si h, ai h, ui h), \u00b5h(\u00b7)\u27e9 \u0013 . (5.2) Meanwhile, Assumption 3.3 and Proposition 3.4 imply P \u0000\u00b7 \f \f sh, do(ah) \u0001 = \u27e8\u03c8h(sh, ah), \u00b5h(\u00b7)\u27e9, Ph(\u00b7 | sh, ah, uh) = \u27e8\u03c6h(sh, ah, uh), \u00b5h(\u00b7)\u27e9, which rely on the backdoor adjustment. Since s\u03c4 h+1 and si h+1 in (5.1) are sampled following P(\u00b7 | s\u03c4 h, do(a\u03c4 h)) and Ph(\u00b7 | si h, ai h, ui h), respectively, (5.1) approximates the right-hand side of (5.2) as its empirical version. As k, n \u2192+\u221e, (5.1) converges to the right-hand side of (5.2) as well as the causal e\ufb00ect P(\u00b7 | sh, do(ah)). 5.2 Unobserved Confounder If the confounders {wh}h\u2208[H] are unobserved in the o\ufb04ine setting, the backdoor adjustment in \u00a73 is not applicable. Alternatively, the intermediate states {mh}h\u2208[H] allow us to estimate the causal e\ufb00ect without observing the confounders. The key is that the frontdoor criterion in Assumption 4.1 implies P \u0000sh+1 \f \f sh, do(ah) \u0001 = Z M P \u0000sh+1 \f \f sh, do(mh) \u0001 \u00b7 P \u0000mh \f \f sh, do(ah) \u0001 dmh. (5.3) It remains to estimate P(sh+1 | sh, do(mh)) and P(mh | sh, do(ah)) on the right-hand side of (5.3). Since ah and mh are not confounded given sh, the causal e\ufb00ect P(mh | sh, do(ah)) coincides with the conditional distribution P(mh | sh, ah), which can be estimated based on the observational data. To estimate the causal e\ufb00ect P(sh+1 | sh, do(mh)), we utilize the backdoor adjustment in Proposition 3.2 with uh replaced by ah, which is enabled by Assumption 4.1. More speci\ufb01cally, it holds that P \u0000sh+1 \f \f sh, do(mh) \u0001 = Ea\u2032 h\u223cP(\u00b7 | sh) \u0002 Ph(sh+1 \f \f sh, a\u2032 h, mh) \u0003 . (5.4) Correspondingly, we construct the value function at the intermediate state Vh+1/2 and adapt the value iteration following the Bellman optimality equation in (4.3). To estimate the value 19 \ffunctions {V k h+1/2}h\u2208[H] based on the confounded observational data, we utilize the adjustment in (5.4). Corresponding to Line 5 of Algorithm 2, DOVI+ e\ufb00ectively estimates the causal e\ufb00ect P(\u00b7 | sh, do(mh)) using \u03c8h(sh, mh)\u22a4(\u039bk 1,h)\u22121 \u0012k\u22121 X \u03c4=1 \u03c8h(s\u03c4 h, m\u03c4 h) \u00b7 \u03b4s\u03c4 h+1(\u00b7) + n X i=1 \u03c6h(si h, ai h, mi h) \u00b7 \u03b4si h+1(\u00b7) \u0013 , (5.5) To see why it works, let the tuning parameter \u03bb be su\ufb03ciently small. 
By the de\ufb01nition of \u039bk 1,h in (4.11), we have P \u0000\u00b7 \f \f sh, do(mh) \u0001 = \u27e8\u03c8h(sh, mh), \u00b5h(\u00b7)\u27e9 \u2248\u03c8h(sh, mh)\u22a4(\u039bk 1,h)\u22121 \u0012k\u22121 X \u03c4=1 \u03c8h(s\u03c4 h, m\u03c4 h) \u00b7 \u27e8\u03c8h(s\u03c4 h, m\u03c4 h), \u00b5h(\u00b7)\u27e9 + n X i=1 \u03c6h(si h, ai h, mi h) \u00b7 \u27e8\u03c6h(si h, ai h, mi h), \u00b5h(\u00b7)\u27e9 \u0013 . (5.6) Meanwhile, Assumption 4.3 and Proposition 4.4 imply P \u0000\u00b7 \f \f sh, do(mh) \u0001 = \u27e8\u03c8h(sh, mh), \u00b5h(\u00b7)\u27e9, P(\u00b7 | sh, ah, mh) = \u27e8\u03c6h(sh, ah, mh), \u00b5h(\u00b7)\u27e9. Since s\u03c4 h+1 and si h+1 in (5.6) are sampled following P(\u00b7 | s\u03c4 h, do(m\u03c4 h)) and P(\u00b7 | si h, ai h, mi h), respectively, (5.5) approximates the right-hand side of (5.6) as its empirical version. As k, n \u2192+\u221e, (5.5) converges to the right-hand side of (5.6) as well as the causal e\ufb00ect P(\u00b7 | sh, do(mh))." + }, + { + "url": "http://arxiv.org/abs/2006.11917v1", + "title": "Breaking the Curse of Many Agents: Provable Mean Embedding Q-Iteration for Mean-Field Reinforcement Learning", + "abstract": "Multi-agent reinforcement learning (MARL) achieves significant empirical\nsuccesses. However, MARL suffers from the curse of many agents. In this paper,\nwe exploit the symmetry of agents in MARL. In the most generic form, we study a\nmean-field MARL problem. Such a mean-field MARL is defined on mean-field\nstates, which are distributions that are supported on continuous space. Based\non the mean embedding of the distributions, we propose MF-FQI algorithm that\nsolves the mean-field MARL and establishes a non-asymptotic analysis for MF-FQI\nalgorithm. We highlight that MF-FQI algorithm enjoys a \"blessing of many\nagents\" property in the sense that a larger number of observed agents improves\nthe performance of MF-FQI algorithm.", + "authors": "Lingxiao Wang, Zhuoran Yang, Zhaoran Wang", + "published": "2020-06-21", + "updated": "2020-06-21", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "stat.ML" + ], + "main_content": "Introduction Reinforcement learning (RL) (Sutton and Barto, 2018) searches for the optimal policy for sequential decision making through interacting with environments and learning from experiences. Multi-agent reinforcement learning (MARL) (Bu et al., 2008) generalizes RL to multi-agent systems. For competitive tasks such as zero-sum game and general-sum game, various MARL algorithms (Littman, 1994; Hu and Wellman, 2003; Wang and Sandholm, 2003) are proposed in search for the Nash equilibrium (Nash, 1951). Meanwhile, for cooperative tasks, MARL searches for the optimal policy that maximizes the social welfare (Ng, 1975), i.e., the expected total reward obtained by all agents (Tan, 1993; Panait and Luke, 2005; Wang and Sandholm, 2003; Claus and Boutilier, 1998; Lauer and Riedmiller, 2000; D\u02c7 zeroski et al., 2001; Guestrin et al., 2002; Kar et al., 2013; Zambaldi et al., 2018). Combined with the breakthrough in deep learning, MARL achieves signi\ufb01cant empirical successes in both settings, e.g., autonomous driving (Shalev-Shwartz et al., 2016), Go (Silver et al., 2016, 2017), esports (Vinyals et al., 2019; OpenAI, 2018), and robotics (Yang and Gu, 2004). 
Despite its empirical successes, MARL remains challenging in the \u201cmany-agent\u201d setting, as the capacity of state-action space grows exponentially in the number of agents, which hinders the learning of value function and policy due to the curse of dimensionality. Such a challenge is named as the \u201ccurse of many agents\u201d. One way to break such a curse is through mean-\ufb01eld approximation, which exploits the symmetry of homogeneous agents and summarizes them as a population. In the most general form, such a population is represented by a distribution over the state space of individual agents, while the reward and transition \u2217Northwestern University; lwang@u.northwestern.edu \u2020Princeton University; zy6@princeton.edu \u2021Northwestern University; zhaoranwang@gmail.com 1 \fare parametrized as functionals of distributions (Acciaio et al., 2018). Although mean-\ufb01eld MARL demonstrates remarkable e\ufb03ciency in applications such as large-scale \ufb02eet management (Lin et al., 2018) and ridesharing order dispaching (Li et al., 2019), its theoretical analysis remains scarce. In particular, despite signi\ufb01cant progress (Jiachen et al., 2017; Yang et al., 2018; Jiang and Lu, 2018; Guo et al., 2019), we still lack a principled model-free algorithm that allows individual agents to have continuous states, which requires approximating nonlinear functionals of in\ufb01nite-dimensional mean-\ufb01eld states, e.g., value function and policy. In this paper, we study mean-\ufb01eld MARL in the collaborative setting, where the mean-\ufb01eld states are distributions over a continuous space S. Here S denotes the state space of individual agents. In particular, we consider the setting with a centralized controller, which has a \ufb01nite action space A. Such a setting is extensively studied in the analysis of societal-scale systems (Gu\u00b4 eant et al., 2011; Gomes et al., 2015; Moll et al., 2019). As a simpli\ufb01ed example, the central bank or the central government decides whether to raise the interest rate or reduce the \ufb01scal budget, respectively, both with the goal of maximizing social welfare. In such an example, the action space only contains two actions. However, the action taken by the centralized controller a\ufb00ects the dynamics of billions of individuals. Such a setting can be viewed as centralized mean-\ufb01eld control (Huang et al., 2012; Carmona et al., 2013; Fornasier and Solombrino, 2014) with an in\ufb01nite number of homogeneous agents, which faces two challenges: (i) learning the value function and policy is intractable as they are functionals of distributions, which are in\ufb01nite dimensional as S is continuous, and (ii) the mean-\ufb01eld state is only accessible through the observation of a \ufb01nite number of agents, which only provides partial information. To tackle these challenges, we resort to the mean embedding of mean-\ufb01eld states into a reproducing kernel Hilbert space (RKHS) (Gretton et al., 2007; Smola et al., 2007; Sriperumbudur et al., 2010), which allows us to parametrize value functions as nonlinear functionals over the RKHS. Based on value function approximation, we propose the mean-\ufb01eld \ufb01tted Q-iteration algorithm (MF-FQI), which provably attains the optimal value function at a linear rate of convergence. 
In particular, we show that MF-FQI breaks the curse of many agents in the sense that its computational complexity only scales linearly in the number of observed agents, and moreover, the statistical accuracy enjoys a \u201cblessing of many agents\u201d, that is, a larger number of observed agents improves the statistical accuracy. Moreover, we characterize the phase transition in the statistical accuracy in terms of the batch size in \ufb01tted Q-iteration and the number of observed agents. Our Contribution. Our contribution is three-fold: (i) We propose the \ufb01rst model-free mean-\ufb01eld MARL algorithm, namely MF-FQI, that allows for continuous support with provable guarantees. (ii) We prove that MF-FQI breaks the curse of many agents by establishing its nonasymptotic computational and statistical rates of convergence. (iii) We motivate a principled framework for exploiting the invariance in MARL, e.g., exchangeability, via mean embedding. Related Works. Our work is related to mean-\ufb01eld games and mean-\ufb01eld control. The study of mean-\ufb01eld games focuses on the search of the Nash equilibrium (Huang et al., 2003; Lasry and Lions, 2006a,b, 2007; Huang et al., 2007; Gu\u00b4 eant et al., 2011; Carmona and Delarue, 2018), whereas the goal of mean-\ufb01eld control is to optimally control a McKean-Vlasov process (Buckdahn et al., 2009, 2011; Andersson and Djehiche, 2011; Meyer-Brandis et al., 2012; Bensoussan et al., 2013; Carmona et al., 2015; Acciaio et al., 2018). Most of these works focus on the continuous-time setting and require the knowledge of the transition model. In contrast, we consider the model-free and discrete-time setting. Our work falls in the study of mean-\ufb01eld MARL, which generalizes \ufb01nite-agent MARL by incorporating the notion of mean \ufb01elds. Previous works investigate mean-\ufb01eld MARL in both cooperative and competitive settings (Jiachen et al., 2017; Yang et al., 2018; Jiang and Lu, 2018; Guo et al., 2019). In Jiachen et al. (2017), a similar setting of mean-\ufb01eld MARL is studied, where the mean-\ufb01eld states are supported on a discrete space and the transition is linear in the state and action. In contrast, our work is model-free and allows for continuous support. In Yang et al. (2018); Guo et al. (2019), mean-\ufb01eld MARL algorithms are proposed in the competitive setting, which have provable guarantees when the support is discrete. 2 \fIn comparison, we consider the cooperative setting with continuous support and establish nonasymptotic guarantees. Our work exploits the exchangeability of agents via mean embedding. See e.g., Smola et al. (2007); Fukumizu et al. (2008); Gretton et al. (2009); Sriperumbudur et al. (2010); Gretton et al. (2012); Tolstikhin et al. (2017) and references therein for the study of mean embedding. Our work is closely related to various statistical models that exploit invariance, such as set kernels (Haussler, 1999; G\u00a8 artner et al., 2002; Kondor and Jebara, 2003) and deep sets (Zaheer et al., 2017). We refer to Bloem-Reddy and Teh (2019) for a detailed survey on learning with invariance. Notations. For a topological space X, we denote by CB(X) the set of bounded and continuous real functions on X. We denote by M(X) the space of all the probability distributions supported on X. For x \u2208X, we denote by \u03b4x \u2208M(X) the point mass at x. For a real-valued function f de\ufb01ned on X, we denote by \u2225f\u2225p,\u03bd the Lp(\u03bd) norm for p \u22651, where \u03bd \u2208M(X). 
We write \u2225f\u2225\u03bd = \u2225f\u22252,\u03bd for notational simplicity. 2 Mean-Field MARL via Mean Embedding In this section, we \ufb01rst motivate mean-\ufb01eld MARL by an example of N-player control with invariance. We then introduce the problem setup of mean-\ufb01eld MARL and the mean embedding of distributions. Finally, we propose the MF-FQI algorithm, which solves mean-\ufb01eld MARL based on the mean embedding. 2.1 Exchangeability in MARL We \ufb01rst consider an N-player control problem with a centralized controller in discrete time. Such a setting is extensively studied in the analysis of societal-scale systems (Gu\u00b4 eant et al., 2011; Gomes et al., 2015; Moll et al., 2019), such as the example of central bank or central government in \u00a71. At each time step t, the central controller takes an action at \u2208A based on the current joint state st = (s1,t, . . . , sN,t), where si,t \u2208S is the state of the i-th agent at time t. The immediate reward rt follows a distribution that depends on the current state st \u2208SN and action at \u2208A. The transition of the joint state follows a distribution, which is determined by the current state st \u2208SN and action at \u2208A. In summary, it holds that rt \u223cr(st, at), St+1 \u223cP(\u00b7 | st, at). (2.1) The process de\ufb01ned by the tuple (S, A, P, r) is a Markov decision process (MDP). We de\ufb01ne a policy \u03c0 : SN 7\u2192M(A) as a mapping that maps a joint state s \u2208SN to a probability distribution \u03c0(\u00b7 | s) over A. We de\ufb01ne the value function corresponding to the policy \u03c0 as V \u03c0(s) = E \u0014 \u221e X t=0 \u03b3t \u00b7 r(St, At) \f \f \f \f S0 = s \u0015 , (2.2) where At \u223c\u03c0(\u00b7|St), and St+1 \u223cP(\u00b7 | St, At), and \u03b3 \u2208(0, 1) is the discount factor. Similarly, we de\ufb01ne the action-value function corresponding to the policy \u03c0 as follows, Q\u03c0(s, a) = E \u0014 \u221e X t=0 \u03b3t \u00b7 r(St, At) \f \f \f \f S0 = s, A0 = a \u0015 , (2.3) where At \u223c\u03c0(\u00b7 | St), and St+1 \u223cP(\u00b7 | St, At). We de\ufb01ne the Bellman operator T \u03c0 as follows, T \u03c0Q(s, a) = E \u0002 r(s, a) + \u03b3 \u00b7 Q(S\u2032, A\u2032) \u0003 , (2.4) 3 \fwhere S\u2032 \u223cP(\u00b7 | s, a) and A\u2032 \u223c\u03c0(\u00b7 | s). Our goal is to \ufb01nd the optimal policy that maximizes the expected total reward as follows, Q\u2217(s, a) = sup \u03c0 Q\u03c0(s, a), \u2200(s, a) \u2208S \u00d7 A. (2.5) We denote by \u03c0\u2217the optimal solution of (2.5). It can be shown that Q\u2217= Q\u03c0\u2217, and the following Bellman optimality equation holds, Q\u2217(s, a) = T Q\u2217(s, a) = E h r(s, a) + \u03b3 \u00b7 max a\u2208A Q\u2217(S\u2032, a) i , (2.6) where S\u2032 \u223cP(\u00b7 | s, a). Here we call T the Bellman optimality operator. Curse of Many Agents: The learning of the optimal action-value function Q\u2217under the N-player setting su\ufb00ers from the curse of many agents. More speci\ufb01cally, as N increases, the capacity of the joint state space SN grows exponentially in N and incurs intractability in the learning of the action-value function Q. To address such a curse, we exploit the exchangeability of the MDP in (2.1). More speci\ufb01cally, we assume that the MDP is exchangeable in the sense that r(st, at) d = r \u0000\u03c3(st), at \u0001 , P(st+1 | st, at) = P \u0000\u03c3(st+1) \f \f \u03c3(st), at \u0001 , (2.7) which holds for any st, st+1 \u2208SN, at \u2208A, and \u03c3 \u2208SN. 
Here \u03c3 is a block-wise permutation of the vector st \u2208SN, and SN is the permutation group of order N. Under the exchangeability de\ufb01ned in (2.7), the following proposition shows that the optimal policy is invariant to permutations of the joint state. Proposition 2.1 (Invariance of Q\u2217). If (2.7) holds, then it holds for any \u03c3 \u2208SN, s \u2208SN, and a \u2208A that Q\u2217(s, a) = Q\u2217\u0000\u03c3(s), a \u0001 . (2.8) Moreover, it holds for any \u03c3 \u2208Sn, s \u2208SN, and a \u2208A that \u03c0\u2217(a | s) d = \u03c0\u2217(a | \u03c3(s)). Proof. See \u00a7B.1 for a detailed proof. Meanwhile, the following proposition proves that the action-value function Q\u03c0 is invariant to permutations of the joint states. Proposition 2.2 (Invariant Representation). Let \u03c0 be invariant to permutations such that \u03c0(\u00b7 | s) d = \u03c0(\u00b7 | \u03c3(s)). If (2.7) holds, then it holds for some g : M(S) \u00d7 A 7\u2192R that Q\u03c0(s, a) = g(Ms, a). (2.9) Here Ms(\u00b7) is the empirical measure supported on the set {si}i\u2208[N] corresponding to s = (s1, . . . , sN), which takes the form of Ms = 1 N N X i=1 \u03b4si, (2.10) where recall that \u03b4si is the point mass at si \u2208S for all i \u2208[N]. Proof. See \u00a7B.2 for a detailed proof. 4 \fBy Propositions 2.1 and 2.2, the optimal action-value function Q\u2217is related to the joint state through the empirical state distribution Ms de\ufb01ned in (2.10). When the number of agents N goes to in\ufb01nity, the empirical state distribution Ms converges to a limiting continuous distribution. To capture such a limiting dynamics of in\ufb01nite agents with exchangeability, we de\ufb01ne an MDP with M(S), the space of probability measures supported on S, as the mean-\ufb01eld state space as follows. Mean-Field MARL. We de\ufb01ne the discounted mean-\ufb01eld MDP by the tuple (M(S), A, P, r, \u03b3). Here A is the action space and M(S) is the mean-\ufb01eld state space, which is the space of all the distributions supported on S. Given a mean-\ufb01eld state ps \u2208M(S) and an action a \u2208A, the immediate reward follows the distribution r(a, ps), where r : A \u00d7 M(S) 7\u2192M(R). The Markov kernel P(\u00b7 | \u00b7) maps the action-state pair (a, ps) to a distribution on M(S), which is the distribution of the mean-\ufb01eld state after transition from the action-state pair (a, ps). For a policy \u03c0(\u00b7 | ps) that maps from M(S) to M(A), we de\ufb01ne the action-value function as Q\u03c0(a, ps) = E \u0014 \u221e X t=0 \u03b3t \u00b7 r(Ps,t, At) \f \f \f \f Ps,0 = ps, A0 = a \u0015 , (2.11) where At \u223c\u03c0(\u00b7 | Ps,t) and Ps,t+1 \u223cP(\u00b7 | a, ps,t). Correspondingly, we de\ufb01ne the Bellman evaluation operator T \u03c0 as follows, T \u03c0Q(a, ps) = E \u0002 r(a, ps) + \u03b3 \u00b7 Q(A\u2032, Ps\u2032) \u0003 , (2.12) where Ps\u2032 \u223cP(\u00b7 | a, ps) and A\u2032 \u223c\u03c0(\u00b7 | a, ps). We de\ufb01ne the optimal action-value function as Q\u2217= sup\u03c0 Q\u03c0. The Bellman optimality equation then takes the form of Q\u2217(a, ps) = T Q\u2217(a, ps) = E h r(a, ps) + \u03b3 \u00b7 max a\u2208A Q\u2217(a, P \u2032 s) i , (2.13) where Ps\u2032 \u223cP(\u00b7 | a, ps). We write \u2126= S \u00d7 A and de\ufb01ne the space of state-action con\ufb01gurations f M(\u2126) as follows, f M(\u2126) = \b \u03c9a,ps = \u03b4a \u00d7 ps : a \u2208A, ps \u2208M(S) \t . 
(2.14) Here we denote by \u03b4a \u2208M(A) the point mass at action a \u2208A and by \u03b4a \u00d7 ps the product measure on \u2126= S \u00d7 A induced by \u03b4a and ps. See \u00a7A for the de\ufb01nition of a topological structure that allows us to de\ufb01ne distributions on f M(\u2126). Note that the transition kernel P(\u00b7|a, ps) equivalently de\ufb01nes a Markov kernel from f M(\u2126) to M(S). With a slight abuse of notations, we denote by P(\u00b7|\u03c9a,ps) such a Markov kernel and do not distinguish between them. Similarly, we denote by r(\u03c9) and Q(\u03c9) the immediate reward and action-value function de\ufb01ned on the state-action con\ufb01guration \u03c9 \u2208f M(\u2126), respectively. We assume that the action set A is \ufb01nite, and the immediate reward is upper bounded by a positive absolute constant Rmax. It then holds that the action-value functions are upper bounded by Qmax = Rmax/(1 \u2212\u03b3). In a multi-agent environment with in\ufb01nite homogeneous agents and continuous state space S, we cannot access the mean-\ufb01eld state ps directly. Instead, we assume that we observe the states of N agents that follows the mean-\ufb01eld state ps. In what follows, we construct an algorithm that solves the mean-\ufb01eld MARL via such a \ufb01nite observation for each mean-\ufb01eld state. In the sequel, we denote by b Q\u03bb \u03ba and \u03c0\u03ba the outputs of MF-FQI. We highlight that the mean-\ufb01eld MARL setting faces two challenges: (i) learning the value function and policy is intractable as they are functionals of distributions, which are in\ufb01nite dimensional as S is continuous, and (ii) the mean-\ufb01eld state is only accessible through the observation of a \ufb01nite number of agents, which only provides partial information. In what follows, we tackle these challenges via mean embedding. 5 \f2.2 Mean Embedding To learn the optimal action-value function Q\u2217de\ufb01ned on f M(\u2126), which is a space of distributions, we introduce mean embedding, which embeds the space of distributions to a reproducing kernel Hilbert space (RKHS). We denote by H(k) the RKHS with reproducing kernel k : \u2126\u00d7 \u21267\u2192R. For any state-action con\ufb01guration \u03c9 \u2208f M(\u2126), the mean embedding \u00b5\u03c9(\u00b7) of \u03c9 into the RKHS H(k) is de\ufb01ned as follows (Gretton et al., 2007; Smola et al., 2007; Sriperumbudur et al., 2010), \u00b5\u03c9(x) = Z \u2126 k(x, t) d\u03c9(t) \u2208H(k). (2.15) Let X = {\u00b5\u03c9 : \u03c9 \u2208f M(\u2126)} \u2286H(k). To tackle challenge (i), we introduce another reproducing kernel K : X \u00d7 X 7\u2192R. Such a kernel generates an RKHS H(K) that include functions de\ufb01ned on X. Our idea is then to approximate Q\u2217using functions in H(K). Note that upon a proper selection of kernel K(\u00b7, \u00b7), the corresponding RKHS H(K) captures a rich family of functions de\ufb01ned on X. As an example, for universal kernels such as the radial basis function (RBF) kernel, the corresponding RKHS is dense in C(X). To further regulate the behavior of mean embedding and RKHS, we introduce the following regularity conditions on kernels k(\u00b7, \u00b7) and K(\u00b7, \u00b7). Assumption 2.3 (Regularity Condition of Kernels). 
We assume that the kernel k(\u00b7, \u00b7) and K(\u00b7, \u00b7) are continuous and bounded as follows, k(u, u) \u2264\u033a, \u2200u \u2208\u2126, K(\u00b5\u03c9, \u00b5\u03c9) \u2264\u03c2, \u2200\u03c9 \u2208f M(\u2126), (2.16) where \u033a and \u03c2 are positive absolute constants. We assume that k(\u00b7, \u00b7) is universal and the mean embeddings \u00b5\u03c9(\u00b7) are continuous for any \u03c9 \u2208f M(\u2126). We further assume that K(\u00b7, \u00b5\u03c9) is H\u00a8 older continuous such that for any x, y \u2208f M(\u2126), it holds that \u2225K(\u00b7, \u00b5x) \u2212K(\u00b7, \u00b5y)\u2225H(K) \u2264L \u00b7 \u2225\u00b5x \u2212\u00b5y\u2225h H(k), (2.17) where L and h are positive absolute constants. The assumption on the boundedness of the kernels in (2.16) is a standard assumption in the learning with kernel embedding (Caponnetto and De Vito, 2007; Muandet et al., 2012; Szab\u00b4 o et al., 2015; Lin et al., 2017). The universality assumption on k(\u00b7, \u00b7) ensures that each mean embedding uniquely characterizes a distribution (Gretton et al., 2007, 2012). The continuous assumption on the embeddings \u00b5\u03c9 is a mild regularity condition, which holds if the kernel k(\u00b7, \u00b7) is universal and the domain \u2126is compact (Sriperumbudur et al., 2010). Meanwhile, the H\u00a8 older continuity of K(\u00b7, \u00b7) in Assumption 2.3 is a mild regularity condition. Such an assumption holds for a rich family of commonly used reproducing kernels, such as the linear kernel K(\u00b5x, \u00b5y) = \u27e8\u00b5x, \u00b5y\u27e9H(k) and the RBF kernel K(\u00b5x, \u00b5y) = exp(\u2212\u2225\u00b5x \u2212\u00b5y\u22252 H(k)/\u03c32). We highlight that the H\u00a8 older continuity of K(\u00b7, \u00b7) allows for an approximation of mean embedding \u00b5p based on the empirical approximation b p of the distribution p \u2208f M(\u2126) with \ufb01nite observations, which further allows for an approximation of the action-value function with \ufb01nite observations, thus tackles the challenge (ii). For an empirical approximation of state-action con\ufb01guration \u03c9a,b ps = \u03b4a \u00d7 b ps, where b ps is the empirical distribution of the observed states {si}i\u2208N, the mean embedding takes the following form, \u00b5\u03c9a,b ps(\u00b7) = 1 N N X i=1 k \u0000\u00b7, (a, si) \u0001 , (2.18) which is invariant to the permutation of states {si}i\u2208[N]. Such an invariance is also exploited by the neuralnetwork based approach named deep sets (Zaheer et al., 2017). In what follows, we connect our mean 6 \fembedding approach to the invariant deep reinforcement learning under the framework of overparametrized two-layer neural networks (Zhang et al., 2016; Jacot et al., 2018; Neyshabur et al., 2018; Arora et al., 2019). Connection to Invariant Deep Reinforcement Learning. In what follows, we assume that (a, si) \u2208Rd and write xi = (a, si). We de\ufb01ne the feature mappings {\u03c6j(\u00b7)}j\u2208[m] and {\u03a6\u2113(\u00b7)}\u2113\u2208[M] as follows, \u03c6j(x) = 1 \u221am \u00b7 bj \u00b7 1{w\u22a4 j x > 0} \u00b7 x, \u03a6r(q) = 1 \u221a \u2113 \u00b7 b\u2032 r \u00b7 1{W \u2032 r \u22a4q > 0} \u00b7 q, (2.19) where bj, b\u2032 r \u223cUnif{\u22121, 1}, wj \u223cN(0, Id\u2113/(d\u2113)), and W \u2032 r \u223cN(0, I\u2113/\u2113). Correspondingly, we de\ufb01ne the kernels km(x, y) = Pm j=1 \u03c6j(x)\u22a4\u03c6j(y) and K\u2113(p, q) = P\u2113 r=1 \u03a6r(p)\u22a4\u03a6r(q). 
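Stepping back to the empirical embedding (2.18) for a moment, its permutation invariance can be checked directly: the RKHS distance $\|\mu_x - \mu_y\|_{H(k)}$ between two empirical embeddings expands into averages of Gram-matrix entries. The sketch below uses an RBF kernel $k$ (universal, as Assumption 2.3 requires); the agent states are synthetic and purely illustrative.

```python
import numpy as np

rng = np.random.default_rng(4)

def k_rbf(X, Y, sigma=1.0):
    """Gram matrix of the RBF kernel k on points x_i = (a, s_i)."""
    d2 = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / sigma**2)

def embed_dist2(X, Y):
    """||mu_X - mu_Y||^2 in H(k) for empirical embeddings (1/N) sum_i k(., x_i)."""
    return k_rbf(X, X).mean() + k_rbf(Y, Y).mean() - 2 * k_rbf(X, Y).mean()

N, a = 50, 1.0
S = rng.normal(size=(N, 2))                      # observed agent states
X = np.column_stack([np.full(N, a), S])          # x_i = (a, s_i)

print(embed_dist2(X, X[rng.permutation(N)]))     # 0: the agent order is lost
print(embed_dist2(X, X + 0.05 * rng.normal(size=X.shape)))  # small, nonzero
```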
Note that the mean embedding of a point mass \u03b4xi by the kernel km is an array \u00b5i = [\u03c61(xi)\u22a4, . . . , \u03c6m(xi)\u22a4]\u22a4\u2208 Rmd. For a mean embedding \u00b5 \u2208Rmd, we consider the parametrization of action-value functions Q(\u00b5) = e Q(D\u22a4\u00b5), where e Q \u2208H(K\u2113) and D = [D1, . . . , Dm] \u2208Rmd\u00d7\u2113with Dj \u2208Rd\u00d7\u2113(j \u2208[m]). Let \u00b5p be the mean embedding of the empirical measure supported on {xi}i\u2208[N]. It then holds for some {\u03b1r}r\u2208[\u2113] \u2286Rd that Q(\u00b5p) = \u2113 X r=1 \u03b1\u22a4 r \u03a6r(D\u22a4\u00b5p) = 1 \u221a \u2113 \u2113 X r=1 b\u2032 r \u00b7 1{W \u2032 r \u22a4\u03c1 > 0} \u00b7 \u03b1\u22a4 r (D\u22a4\u00b5p) = f(D\u22a4\u00b5p). (2.20) Note that if \u03b1r is su\ufb03ciently close to W \u2032 r, then Q(\u00b5p) is close to e f(D\u22a4\u00b5p), where e f(\u03c1) = 1 \u221a \u2113 \u2113 X r=1 b\u2032 r \u00b7 1{\u03b1r\u22a4\u03c1 > 0} \u00b7 \u03b1\u22a4 r (\u03c1), \u2200\u03c1 \u2208R\u2113. (2.21) Here e f(\u00b7) is a two-layer neural network with parameters b\u2032 r, \u03b1r (r \u2208[\u2113]) and ReLU activation function. Meanwhile, it holds that D\u22a4\u00b5p = 1 N N X i=1 1 \u221am m X j=1 bj \u00b7 1{wj \u22a4xi > 0} \u00b7 D\u22a4 j xi = 1 N N X i=1 \u03c8(xi). (2.22) Similarly, if Dj is close to wj, then \u03c8(xi) is close to e \u03c8(xi) de\ufb01ned as follows, e \u03c8(xi) = 1 \u221am m X j=1 bj \u00b7 1{Dj \u22a4xi > 0} \u00b7 D\u22a4 j xi, \u2200xi \u2208Rd. (2.23) Here e \u03c6(xi) is a two-layer neural network with input xi, parameters b\u2032 j (j \u2208[m]) and D, and ReLU activation function. Note that for the functions f and \u03c8 de\ufb01ned in (2.20) and (2.22) to approximately take the form of neural networks, we requires the parameters \u03b1 and D to be su\ufb03ciently close to W and w, respectively. Such a requirement is formally characterized by the study of overparametrized neural networks. More speci\ufb01cally, if the widths m and \u2113of neural networks are su\ufb03ciently large (which depends on the deviation of parameters \u03b1 and D from their respective initializations W and w), then the functions f and \u03c8 well approximates the neural networks e f and e \u03c8 de\ufb01ned in (2.21) and (2.23), respectively. In conclusion, under the mean embedding with the feature mappings de\ufb01ned in (2.19), the parameterization of action-value function takes the form of Q(\u00b5p) = f(1/N \u00b7 PN i=1 \u03c8(xi)), where f and \u03c8 are approximations of two-layer neural networks e f and e \u03c8, respectively. Hence, the action-value function Q(\u00b5p) approximately takes the form of deep sets (Zaheer et al., 2017) with {xi}i\u2208[N] as the set input. 2.3 Mean-Field Fitted Q-Iteration In what follows, we establish a value-based algorithm that solves mean-\ufb01eld MARL problem in \u00a72.1 based on \ufb01tted Q-iteration (Ernst et al., 2005). More speci\ufb01cally, we propose an algorithm that learns the optimal action-value function Q\u2217by the sample {\u03b4ai \u00d7 pi,s}i\u2208[n] that follows a sampling distribution \u03bd over 7 \fthe space of state-action con\ufb01gurations f M(\u2126). For each state-action con\ufb01guration \u03b4ai \u00d7 pi,s, the mean\ufb01eld state pi,s \u2208M(S) is available to us through the observed states {si,j}j\u2208[N], which are sampled independently from the mean-\ufb01eld state pi,s. 
We further observe the immediate reward ri and the states {s\u2032 i,j}j\u2208[N] that are independently sampled from the mean-\ufb01eld state pi,s\u2032. Here pi,s\u2032 \u223cP(\u00b7 | ai, pi,s) is the mean-\ufb01eld state after transition from the state-action con\ufb01guration (ai, pi,s). Given the batch of data {({si,j}j\u2208[N], ai, ri, {s\u2032 i,j}j\u2208[N])}i\u2208[n], the mean-\ufb01eld \ufb01tted Q-iteration (MF-FQI) sequentially computes b yi,k = ri + \u03b3 \u00b7 max a\u2208A b Q\u03bb k(\u00b5\u03c9a,b pi,s\u2032 ) (2.24) at the k-th iteration. Here \u00b5\u03c9a,b pi,s\u2032 is the mean embedding of the distribution \u03c9a,b pi,s\u2032 = \u03b4a \u00d7 b pi,s\u2032, b pi,s\u2032 is the empirical distribution supported on the set {s\u2032 i,j}j\u2208[N], and Q\u03bb k is the approximation of the optimal actionvalue function at the k-th iteration of MF-FQI. Upon computing {b yi,k}i\u2208[n] according to (2.24), MF-FQI then updates the approximation of the optimal action-value function in the RKHS H(K) by solving the following optimization problem, Q\u03bb k+1 = argmin f\u2208H(K) 1 n n X i=1 \u0000f(\u00b5b \u03c9i) \u2212b yi,k \u00012 + \u03bb \u00b7 \u2225f\u22252 H(K), b Q\u03bb k+1 = min{Q\u03bb k+1, Qmax}, (2.25) where b \u03c9i = \u03b4a \u00d7 b pi,s and b pi,s is the empirical distribution supported on the set {si,j}j\u2208[N]. We summarize MF-FQI de\ufb01ned by (2.24) and (2.25) in Algorithm 1. We highlight that MF-FQI has a linear computational complexity in terms of the number of observed agents N. Therefore, MF-FQI is computationally tractable even for a large number of observed agents N. Algorithm 1 Mean-Field Fitted Q-Iteration (MF-FQI) 1: Input: Batch of data {({si,j}j\u2208[N], ai, ri, {s\u2032 i,j}j\u2208[N])}i\u2208[n], reproducing kernels k(\u00b7, \u00b7) and K(\u00b7, \u00b7), number of iterations \u03ba, parameter \u03bb, initial action-value function b Q\u03bb 0. 2: For all i \u2208[n], compute mean embeddings \u00b5b \u03c9i and \u00b5a,b pi,s\u2032 for all a \u2208A as follows, \u00b5b \u03c9i(\u00b7) = 1 N N X j=1 k \u0000\u00b7, (ai, si,j) \u0001 , \u00b5a,b pi,s\u2032 (\u00b7) = 1 N N X j=1 k \u0000\u00b7, (a, s\u2032 i,j) \u0001 , \u2200a \u2208A. 3: for k = 0, 1, . . ., \u03ba \u22121 do 4: Compute b yi,k = ri + \u03b3 \u00b7 maxa\u2208A b Q\u03bb k(\u00b5a,b pi,s\u2032 ) for all i \u2208[n]. 5: Update the action-value function b Q\u03bb k+1 by Q\u03bb k+1 = argmin f\u2208H(K) 1 n n X i=1 \u0000f(\u00b5b \u03c9i) \u2212b yi,k \u00012 + \u03bb \u00b7 \u2225f\u22252 H(K), b Q\u03bb k+1 = min{Q\u03bb k+1, Qmax}. 6: end for 7: Output: An estimator b Q\u03bb \u03ba of Q\u2217, a greedy policy \u03c0\u03ba with respect to b Q\u03bb \u03ba. 8 \f3 Main Results In this section, we establish the theoretical guarantee of MF-FQI de\ufb01ned in Algorithm 1. In the sequel, we denote by b Q\u03bb \u03ba and \u03c0\u03ba the outputs of MF-FQI. Our goal is to establish an upper bound for \u2225Q\u2217\u2212Q\u03c0\u03ba\u22251,\u00b5, where \u00b5 is the measurement distribution over f M(\u2126). We \ufb01rst introduce the de\ufb01nition of concentration coe\ufb03cients. For a policy \u03c01, we de\ufb01ne E\u03c01\u03bd as the distribution of \u039b1 = \u03b4A1 \u00d7 P1,s, where P1,s \u223cP(\u00b7 | \u03bd) and A1 \u223c\u03c01(\u00b7 | P1,s). Similarly, for policies {\u03c0i}i\u2208[\u2113], we recursively de\ufb01ne E\u03c0\u2113\u25e6E\u03c0\u2113\u22121 \u25e6. . 
.\u25e6E\u03c01\u03bd as the distribution of \u039b\u2113= \u03b4A\u2113\u00d7P\u2113,s, where P\u2113,s \u223cP(\u00b7 | E\u03c0\u2113\u22121 \u25e6 . . . \u25e6E\u03c01\u03bd) and A\u2113\u223c\u03c0\u2113(\u00b7 | P\u2113,s). In what follows, we de\ufb01ne the concentration coe\ufb03cients that measures the di\ufb00erence between the sampling distribution \u03bd and the measurement distribution \u00b5 on f M(\u2126). Assumption 3.1. (Concentration Coe\ufb03cients) Let \u03bd be the sampling distribution on f M(\u2126). Let \u00b5 be the measurement distribution on f M(\u2126). We assume that for any policies {\u03c0i}i\u2208[\u2113], the distribution E\u03c0\u2113\u25e6E\u03c0\u2113\u22121 \u25e6 . . . \u25e6E\u03c01\u03bd is absolutely continuous with respect to \u00b5. We de\ufb01ne the \u2113-th concentration coe\ufb03cients between \u03bd and \u00b5 as follows, \u03c6(\u2113; \u00b5, \u03bd) = sup \u03c01,...,\u03c0\u2113 E\u00b5 \u0014\u0012 dE\u03c0\u2113\u25e6E\u03c0\u2113\u22121 \u25e6. . . \u25e6E\u03c01\u03bd d\u00b5 \u00132\u0015!1/2 . (3.1) We assume that \u03c6(\u2113; \u00b5, \u03bd) < +\u221efor any \u2113\u2208N. We further assume that there exist a positive absolute constant \u03a6(\u00b5, \u03bd) such that \u221e X \u2113=1 \u03b3\u2113\u22121 \u00b7 \u2113\u00b7 \u03c6(\u2113; \u00b5, \u03bd) \u2264\u03a6(\u00b5, \u03bd)/(1 \u2212\u03b3)2. (3.2) Assumption 3.2 is a standard assumption in the theoretical analysis of reinforcement learning (Szepesv\u00b4 ari and Munos, 2005; Munos and Szepesv\u00b4 ari, 2008; Antos et al., 2008; Lazaric et al., 2016; Farahmand et al., 2010, 2016; Scherrer, 2013; Scherrer et al., 2015; Yang et al., 2019; Chen and Jiang, 2019). Under Assumption 3.1, the following theorem upper bounds the error \u2225Q\u2217\u2212Q\u03c0\u03ba\u22251,\u00b5 of MF-FQI. Proposition 3.2 (Error Propagation). Let { b Q\u03bb i }i\u2208[\u03ba] be the output of Algorithm 1. Let \u03c0\u03ba be the greedy policy corresponding to b Q\u03bb \u03ba. Under Assumption 3.1, it holds that \u2225Q\u2217\u2212Q\u03c0\u03ba\u22251,\u00b5 \u22642\u03b3 \u00b7 \u03a6(\u00b5, \u03bd) (1 \u2212\u03b3)2 \u00b7 max i\u2208[\u03ba] \u2225b Q\u03bb i \u2212T b Q\u03bb i\u22121\u2225\u03bd | {z } (a) + 4\u03b3\u03ba+1 \u00b7 Qmax 1 \u2212\u03b3 | {z } (b) . (3.3) Proof. See \u00a7B.3 for a detailed proof. Following from Theorem 3.2, the error of MF-FQI is upper bounded by the sum of the two terms on the right-hand side of (3.3). Here term (b) characterizes the algorithmic error that hinges on the number of iterations \u03ba. Meanwhile, term (a) characterizes the one-step approximation error that hinges on the approximation b Q\u03bb i of T b Q\u03bb i\u22121. In the sequel, we upper bound the one-step approximation error characterized by term (a). To this end, we \ufb01rst impose the following regularity condition on the Bellman optimality operator T and the RKHS H(K). Assumption 3.3 (Regularity Condition of T and H(K)). We de\ufb01ne the integral operator C as follows, Cf(x) = Z f M(\u2126) K(x, \u00b5\u03c9)f(\u00b5\u03c9)d\u03bd(\u03c9). (3.4) 9 \fWe assume that the eigenvalues {tn}n\u2208N of C is bounded such that \u03b1 \u2264nbtn \u2264\u03b2 for all n \u2208N, where \u03b1, \u03b2, and b > 1 are positive absolute constants. We further assume that for any output Q\u03bb \u2208H(K) of the regression problem de\ufb01ned in (2.25), it holds for some g \u2208H(K) that QH,T = C(c\u22121)/2g, \u2225g\u2225H(K) \u2264R. 
(3.5) Here R > 0 and c \u2208[1, 2] are absolute constants, and QH,T is de\ufb01ned as follows, QH,T = \u03a0H(K)(T b Q\u03bb) = argmin f\u2208H(K) \u2225f \u2212T b Q\u03bb\u2225\u03bd, b Q\u03bb = min{Q\u03bb, Qmax}, (3.6) where we denote by \u03a0H(K) the projection onto H(K) with respect to the norm \u2225\u00b7 \u2225\u03bd. Assumption 3.3 is a mild regularity assumption on the RKHS H(K) and the Bellman optimality operator T . Similar assumptions arises in the analysis of kernel ridge regression (Caponnetto and De Vito, 2007; Szab\u00b4 o et al., 2015; Lin et al., 2017). The parameters b and c in Assumption 3.3 de\ufb01ne a prior space P(b, c) in the context of kernel ridge regression (Caponnetto and De Vito, 2007). Intuitively, the parameter c controls the smoothness of QH,T de\ufb01ned in (3.6), and the parameter b controls the size of H(K). Under Assumption 3.3, the following theorem characterizes the one-step approximation error of MF-FQI de\ufb01ned in Algorithm 1. Theorem 3.4 (One-step Approximation Error). Let \u03b7, \u03c4 be two constants such that 0 < \u03b7 + \u03c4 < 1. Let C(\u03b7) = 32 log2(6/\u03b7). Under Assumptions 2.3 and 3.3, for N \u22652\u033a \u00b7 (1 + p log(|A| \u00b7 n/2\u03c4))2 \u00b7 (64L2\u03c22/\u03bb2)1/h, n \u2265 2C(\u03b7)\u03c2\u03b2b (b \u22121)\u03bb1+1/b , \u03bb \u2264\u2225C\u2225H(K), (3.7) it holds with probability at least 1 \u2212\u03b7 \u2212\u03c4 that \u2225b Q\u03bb k \u2212T b Q\u03bb k\u22121\u22252 \u03bd \u2264G1 + G2 + \u03c82 T , \u2200k \u2208[\u03ba], (3.8) where G1 = 8L2Q2 max \u00001 + p log(|A| \u00b7 n/2\u03c4) \u00012h \u00b7 (2\u033a)h \u03bb \u00b7 N h \u00b7 \u0010 1 + 5\u03c22 \u03bb2 \u0011 , G2 = C(\u03b7) \u00b7 R\u03bbc + \u03c22R \u03bb2\u2212cn2 + \u03c2R\u03bbc\u22121 4n + \u03c2M 2 \u03bbn2 + \u03a32\u03b2b (b \u22121)n\u03bb1/b ! , (3.9) and the term \u03c8T is de\ufb01ned as follows, \u03c8T = sup k\u2208[\u03ba] \u2225T b Q\u03bb k \u2212\u03a0H(K)(T Q\u03bb k)\u2225\u03bd. (3.10) Here M and \u03a3 are positive absolute constants, the parameters \u03c2, \u033a, h are de\ufb01ned in Assumption 2.3, and the parameters C, b, c are de\ufb01ned in Assumption 3.3, and \u03a0H(K) the projection onto H(K) with respect to the norm \u2225\u00b7 \u2225\u03bd. Proof. See \u00a7B.4 for a detailed proof. Following from Theorem 3.4, the one-step approximation error is upper bounded by the sum of the three terms, G1, G2, and \u03c8T , on the right-hand side of (3.8). Here the term G1 characterizes the error by approximating the mean-\ufb01eld state via N observed agents, the term G2 characterizes the error by estimating T Q with the batch of size n, and the term \u03c8T characterizes the error in of approximating T Q by functions from the RKHS H(K). If we further assume that T Q \u2208H(K) for all Q \u2208H(K) with Q \u2264Qmax, then term \u03c8T vanishes and Q\u2217\u2208H(K) is the unique \ufb01xed point of T in H(K). Combining the error propagation in Proposition 3.2 and the one-step approximation error in Theorem 3.4, we obtain the following theorem that upper bounds the error of MF-FQI de\ufb01ned in Algorithm 1. 10 \fTheorem 3.5 (Theoretical Guarantee of MF-FQI). Let \u03c0\u03ba be the output of MF-FQI and n = N a. 
Under the assumptions imposed by Proposition 3.2 and 3.4, for \u2225C\u2225H(K) \u2265max \b (a \u00b7 log N/N)h/(c+3), 1/N ab/(bc+1)\t , it holds with probability at least 1 \u2212\u03b7 \u2212\u03c4 that \u2225Q\u2217\u2212Q\u03c0\u03ba\u22251,\u00b5 \u22642\u03b3 \u00b7 \u03a6(\u00b5, \u03bd) (1 \u2212\u03b3)2 \u00b7 \u0012 C \u00b7 \u039e | {z } (i) + \u03c8T |{z} (ii) \u0013 + 4\u03b3\u03ba+1 \u00b7 Qmax 1 \u2212\u03b3 | {z } (iii) , (3.11) where C is a positive absolute constant, \u03c8T is de\ufb01ned in (3.10) of Theorem 3.4, and \u039e = max \u001a\u0010log(|A| \u00b7 N/\u03c4) N \u0011 hc 2(c+3) , 1/N abc 2(bc+1) \u001b . Here the integral operator C is de\ufb01ned in (3.4) of Assumption 3.3, the parameter h is de\ufb01ned in Assumption 2.3, and parameters b, c are de\ufb01ned in Assumption 3.3. Proof. See \u00a7B.5 for a detailed proof. By Theorem 3.5, the approximation error of the action-value function attained by MF-FQI is characterized by the three terms on the right-hand side of (3.11). Here term (i) characterizes the statistical error, which is small for a su\ufb03ciently large number of observed agents N and batch size n = N a. Term (ii) is the bias \u03c8T de\ufb01ned in in (3.10) of Theorem 3.4, which vanishes if the Bellman optimality operator T is closed in the RKHS H(K). Term (iii) characterizes the algorithmic error, which is small for a su\ufb03ciently large number of iterations \u03ba. In the sequel, we assume that T is closed in H(K) and thus \u03c8T = 0 for simplicity. Note that the algorithmic error characterized by term (ii) has a linear rate of convergence, which is negligible comparing with the statistical error characterized by term (i) if the iteration number is su\ufb03ciently large. More speci\ufb01cally, if it holds for some positive absolute constant C that \u03ba \u2265C \u00b7 max \u001a hc 2(c + 3) \u00b7 log(1/\u03b3) \u00b7 log \u0010 N log(|A| \u00b7 N) \u0011 , abc 2(bc + 1) \u00b7 log(1/\u03b3) \u00b7 log N \u001b , (3.12) then the dominating term on the right-hand side of (3.11) in Theorem 3.5 is term (i) that characterizes the statistical error. Phase Transition. Note that MF-FQI involves two stage of sampling, where the \ufb01rst stage samples n mean-\ufb01eld state from the sampling distribution \u03bd, and the second stage samples N states from each mean\ufb01eld state. In what follows, we discuss the connection between the performance of MF-FQI and the sample complexity of the two-stage sampling involved. More speci\ufb01cally, we discuss the phase transition in the statistical error of MF-FQI when a = log n/ log N transits from zero to in\ufb01nity. We categorize the phase transition into the following regimes in terms of a. 1. For a > h\u00b7(c+1/b)/(c+3), the rate of convergence of the statistical error takes the form of O((log(|A|\u00b7 N)/N)hc/(2c+6)). In this regime, increasing the number of observations N for each mean-\ufb01eld state improves the performance of MF-FQI, whereas increasing the batch size n of mean-\ufb01eld state cannot improve the performance of MF-FQI. 2. For 0 < a < h \u00b7 (c + 1/b)/(c + 3) , the rate of convergence of the statistical error takes the form of O(1/nbc/(2bc+2)). In this regime, increasing batch size n of mean-\ufb01eld state improves the performance of MF-FQI, whereas increasing the number of observations N for each mean-\ufb01eld state cannot improve the performance of MF-FQI. 
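To make the overall procedure concrete before concluding, the following is a minimal sketch of one MF-FQI iteration, i.e., the Bellman-target computation in (2.24) followed by the kernel ridge regression update in (2.25), with action-value functions truncated at $Q_{\max}$. The one-hot action encoding, the RBF kernel over embeddings, and all names are our own simplifying assumptions, not the paper's prescription.

```python
import numpy as np

# Illustrative sketch of one MF-FQI iteration (2.24)-(2.25): embed each
# state-action configuration, form Bellman targets, and fit the new
# action-value function by kernel ridge regression over embeddings.

def embed(action_id, states, n_actions, W, signs):
    """Mean embedding of delta_a x p_hat via random ReLU features; the
    action is one-hot encoded and concatenated with each observed state."""
    N, _ = states.shape
    a = np.zeros(n_actions); a[action_id] = 1.0
    X = np.hstack([np.tile(a, (N, 1)), states])
    m, d = W.shape
    feats = np.zeros((N, m * d))
    for j in range(m):
        active = (X @ W[j] > 0).astype(float)
        feats[:, j*d:(j+1)*d] = (signs[j] / np.sqrt(m)) * active[:, None] * X
    return feats.mean(axis=0)

def rbf_gram(A, B, sigma=1.0):
    sq = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-sq / (2 * sigma ** 2))

def mffqi_step(batch, Q_alpha, Q_centers, n_actions, W, signs,
               gamma=0.9, lam=1e-2, q_max=10.0):
    """One fitted-Q iteration. `batch` is a list of tuples
    (states, action_id, reward, next_states)."""
    def Q(mu):  # current estimate, truncated at Qmax as in (2.25)
        if Q_centers is None:
            return 0.0
        val = rbf_gram(mu[None, :], Q_centers) @ Q_alpha
        return float(np.clip(val, -q_max, q_max))

    mus, targets = [], []
    for states, a, r, next_states in batch:
        mus.append(embed(a, states, n_actions, W, signs))
        # Bellman target (2.24): maximize over actions at the next state.
        q_next = max(Q(embed(b, next_states, n_actions, W, signs))
                     for b in range(n_actions))
        targets.append(r + gamma * q_next)
    mus, targets = np.array(mus), np.array(targets)
    # Kernel ridge regression (2.25): alpha = (K + n*lam*I)^{-1} y.
    K = rbf_gram(mus, mus)
    alpha = np.linalg.solve(K + len(batch) * lam * np.eye(len(batch)), targets)
    return alpha, mus

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    m, d_s, n_actions = 32, 2, 3
    d = n_actions + d_s
    W = rng.normal(scale=1.0 / np.sqrt(d), size=(m, d))
    signs = rng.choice([-1.0, 1.0], size=m)
    batch = [(rng.normal(size=(20, d_s)), int(rng.integers(n_actions)),
              float(rng.normal()), rng.normal(size=(20, d_s))) for _ in range(16)]
    alpha, centers = mffqi_step(batch, None, None, n_actions, W, signs)
```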
In conclusion, under regularity conditions, MF-FQI approximately achieves the optimal policy for a sufficiently large number of iterations $\kappa$, batch size $n$, and number of observations $N$. We highlight that MF-FQI enjoys a "blessing of many agents" property: for a sufficiently large batch size $n$ of the mean-field state, a larger number $N$ of observed agents improves the learning of $Q^*$. 4 Limitation and Future Work In this work, we propose MF-FQI, which tackles the mean-field reinforcement learning problem with symmetric agents and a centralized controller. Such a setting is extensively studied in the analysis of societal-scale systems (Guéant et al., 2011; Gomes et al., 2015; Moll et al., 2019), such as the example of a central bank or central government in §1. MF-FQI tackles the "curse of many agents" via mean embedding of the mean-field state for the (inexact) policy evaluation step, which approximately calculates the action-value function for the greedy policy. Based on the action-value function, we obtain a greedy policy, which corresponds to the policy improvement step. Such an approach is intractable when the action space also suffers from the "curse of many agents", as Q-learning requires taking the maximum over the action space at each iteration, which can be combinatorially large if each agent takes its own action. However, the mean embedding technique is still applicable to the policy evaluation step even if each agent takes its own action, and it can be coupled with other policy optimization methods, such as policy gradient (Sutton and Barto, 2018) and proximal policy optimization (Schulman et al., 2015, 2017). By replacing the greedy policy improvement step with other policy optimization methods, we are able to tackle the "curse of many agents" of both the state space and the action space, which is left as our future research." + }, + { + "url": "http://arxiv.org/abs/1910.13659v3", + "title": "Efficient Privacy-Preserving Stochastic Nonconvex Optimization", + "abstract": "While many solutions for privacy-preserving convex empirical risk\nminimization (ERM) have been developed, privacy-preserving nonconvex ERM\nremains a challenge. We study nonconvex ERM, which takes the form of minimizing\na finite-sum of nonconvex loss functions over a training set. We propose a new\ndifferentially private stochastic gradient descent algorithm for nonconvex ERM\nthat achieves strong privacy guarantees efficiently, and provide a tight\nanalysis of its privacy and utility guarantees, as well as its gradient\ncomplexity. Our algorithm reduces gradient complexity while improves the best\nprevious utility guarantee given by Wang et al. (NeurIPS 2017). Our experiments\non benchmark nonconvex ERM problems demonstrate superior performance in terms\nof both training cost and utility gains compared with previous differentially\nprivate methods using the same privacy budgets.", + "authors": "Lingxiao Wang, Bargav Jayaraman, David Evans, Quanquan Gu", + "published": "2019-10-30", + "updated": "2023-02-02", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "cs.CR", + "math.OC", + "stat.ML" + ], + "main_content": "Introduction For many important domains such as health care and medical research, the datasets used to train machine learning models contain sensitive personal information.
There is a risk that models trained on this data can reveal private information about individual records in the training data (Fredrikson et al., 2014; Shokri et al., 2017; Carlini et al., 2019). This motivates research on privacy-preserving machine learning, much of which has focused on achieving differential privacy (Dwork et al., 2006), a rigorous definition of privacy that provides statistical data privacy for individual records. In the past decade, many differentially private machine learning algorithms for solving the empirical risk minimization (ERM) problem have been proposed (e.g., Chaudhuri et al., 2011; Kifer et al., 2012; Bassily et al., 2014; Zhang et al., 2017; Wang et al., 2017; Jayaraman et al., 2018; Wang and Gu, 2019, 2020). Almost all of these are for ERM with convex loss functions, but many important machine learning approaches, including deep learning, are formulated as ERM problems with nonconvex loss functions. Furthermore, these learning problems often require large training sets, necessitating the use of stochastic optimization algorithms such as stochastic gradient descent (SGD). Several recent studies have advanced the application of differential privacy in deep learning (Abadi et al., 2016; Papernot et al., 2016; McMahan et al., 2018; Bu et al., 2019). These studies prove that differential privacy is satisfied, but evaluate utility only experimentally. Only a few differentially private algorithms for solving nonconvex optimization problems have proven utility bounds (Zhang et al., 2017; Wang et al., 2017). For example, Wang et al. (2017) proposed a differentially private gradient descent (DP-GD) algorithm with both privacy and utility guarantees. However, each iteration of DP-GD requires computing the full gradient, which makes it too expensive for use on large training sets. Zhang et al. (2017) proposed a random round private stochastic gradient descent (RRPSGD) method that achieves the same privacy guarantee as DP-GD with reduced runtime complexity, but with slightly worse utility bounds. In this paper, we propose a differentially private Stochastic Recursive Momentum (DP-SRM) algorithm for nonconvex ERM. At the core of our algorithm is the stochastic recursive momentum technique (Cutkosky and Orabona, 2019), which consistently reduces the accumulated variance of the gradient estimator. Our approach is more scalable than stochastic variance reduced algorithms (Johnson and Zhang, 2013; Reddi et al., 2016a; Allen-Zhu and Hazan, 2016; Lei et al., 2017; Nguyen et al., 2017; Fang et al., 2018; Zhou et al., 2018), since it eliminates the periodic computation of the checkpoint gradient, which usually requires a giant batch size. Contributions. The main contributions of our paper are summarized as follows:
• We develop a new differentially private stochastic optimization algorithm for nonconvex ERM and provide a sharp analysis of the privacy guarantee using Rényi Differential Privacy (RDP) (Mironov, 2017).
• Our algorithm improves the best-known utility guarantee for nonconvex optimization, with lower computational complexity. The utility guarantee of our algorithm is $O\big((d \log(1/\delta))^{1/3}/(n\epsilon)^{2/3}\big)$, which is better than the best-known result $O\big((d \log(1/\delta))^{1/4}/(n\epsilon)^{1/2}\big)$ established in Wang et al. (2017).
The gradient complexity (i.e., the number of stochastic gradients calculated in total) of our algorithm is $O\big((n\epsilon)^2/(d \log(1/\delta))\big)$, which outperforms the best previous results (Zhang et al., 2017; Wang et al., 2017) when the problem dimension $d$ is large (see Table 1 for more details).
• We evaluate our proposed method on two nonconvex ERM techniques: nonconvex logistic regression and convolutional neural networks. We report on experiments on several benchmark datasets (Section 7), finding that our method not only produces models that are the closest to the non-private models in terms of model accuracy, but also reduces the computational cost.

Table 1: Comparison of different $(\epsilon, \delta)$-DP algorithms for nonconvex optimization. We report the utility bound in terms of $\mathbb{E}\|\nabla F(\theta_p)\|_2$, where $\theta_p$ is the output of the differentially private algorithm and the expectation $\mathbb{E}$ is taken over the randomness of the algorithm. We only present results in terms of $n, d, \epsilon, \delta$ and ignore other parameters for simplicity.

Algorithm | Utility | Gradient Complexity
RRPSGD (Zhang et al., 2017) | $O\big((d \log(n/\delta)\log(1/\delta))^{1/4}/(n\epsilon)^{1/2}\big)$ | $O(n^2)$
DP-GD (Wang et al., 2017) | $O\big((d \log(1/\delta))^{1/4}/(n\epsilon)^{1/2}\big)$ | $O\big(n^2\epsilon/(d \log(1/\delta))^{1/2}\big)$
DP-SRM (this paper) | $O\big((d \log(1/\delta))^{1/3}/(n\epsilon)^{2/3}\big)$ | $O\big((n\epsilon)^2/(d \log(1/\delta))\big)$

Notation. We use a curly symbol such as $\mathcal{B}$ to denote an index set. For a set $\mathcal{B}$, we use $|\mathcal{B}|$ to denote its cardinality. For a finite-sum function $F = \sum_{i=1}^n f_i/n$, we denote $F_{\mathcal{B}} = \sum_{i \in \mathcal{B}} f_i/|\mathcal{B}|$. For a $d$-dimensional vector $x \in \mathbb{R}^d$, we use $\|x\|_2$ to denote its $\ell_2$-norm. Given two sequences $\{a_n\}$ and $\{b_n\}$, if there exists a constant $0 < C < \infty$ such that $a_n \le C b_n$, we write $a_n = O(b_n)$; if there exist constants $0 < C_1, C_2 < \infty$ such that $C_1 b_n \le a_n \le C_2 b_n$, we write $a_n = \Theta(b_n)$. We use $n$ and $d$ to denote the number of training examples and the problem dimension, respectively. We also use the standard notation for $(\epsilon, \delta)$-DP, where $\epsilon$ is the privacy budget and $\delta$ is the failure probability.

2 Related Work

Over the past decade, many differentially private machine learning algorithms for convex ERM have been proposed. There are three main approaches to achieve differential privacy in such settings, including output perturbation (Wu et al., 2017; Zhang et al., 2017), objective perturbation (Chaudhuri et al., 2011; Kifer et al., 2012; Iyengar et al., 2019), and gradient perturbation (Bassily et al., 2014; Wang et al., 2017; Jayaraman et al., 2018). However, other than the methods using gradient perturbation, it is very hard to generalize these methods to nonconvex ERM because of the difficulty in computing the sensitivity for nonconvex ERM. Thus, most differentially private algorithms for nonconvex ERM, including our work, are based on gradient perturbation. The problem with gradient perturbation approaches is that their iterative nature quickly consumes any reasonable privacy budget. Hence, the main challenge is to develop algorithms for nonconvex ERM that can provide sufficient utility while maintaining privacy with high computational efficiency. Several recent works (Abadi et al., 2016; Papernot et al., 2016; Xie et al., 2018) studied deep learning with differential privacy.
Abadi et al. (2016) proposed a method called the moments accountant to keep track of the privacy cost of the stochastic gradient descent algorithm during the training process, which provides a strong privacy guarantee. Papernot et al. (2016) established the Private Aggregation of Teacher Ensembles (PATE) framework to improve the privacy guarantee of deep learning for classification tasks. Xie et al. (2018) and Yoon et al. (2019) investigated differentially private Generative Adversarial Nets (GANs) with different distance metrics. However, none of these works provide utility guarantees for their algorithms. Table 1 summarizes differentially private nonconvex optimization algorithms that provide utility guarantees for nonconvex ERM. The Random Round Private Stochastic Gradient Descent (RRPSGD) method developed by Zhang et al. (2017) is the first differentially private nonconvex optimization algorithm with a utility guarantee. This method performs perturbed SGD (adding Gaussian noise to the stochastic gradients) for a random number of iterations (Ghadimi and Lan, 2013). The gradient complexity of RRPSGD is $O(n^2)$, which makes it impractical for most settings. Zhang et al. (2017) showed that RRPSGD is able to find a stationary point in expectation with a diminishing error $O\big((d \log(n/\delta)\log(1/\delta))^{1/4}/(n\epsilon)^{1/2}\big)$. Their analysis of the privacy guarantee is based on the standard privacy-amplification by subsampling result and the strong composition theorem (Bassily et al., 2014). Although such an analysis can be easily adapted to the nonconvex setting with stochastic optimization algorithms, it results in a large bound on the variance of the added noise compared with relaxed definitions such as the moments accountant (Abadi et al., 2016) and Gaussian differential privacy (Dong et al., 2019). Wang et al. (2017) proposed the Differentially Private Gradient Descent (DP-GD) algorithm for nonconvex optimization. DP-GD has a comparable gradient complexity $O(n^2\epsilon/d^{1/2})$ and an improved utility guarantee $O\big((d \log(1/\delta))^{1/4}/(n\epsilon)^{1/2}\big)$ compared with that of RRPSGD. The reason DP-GD can achieve this factor of $O\big((\log(n/\delta))^{1/4}\big)$ improvement is that it uses the full gradient rather than the stochastic gradient. This makes DP-GD computationally very expensive or even intractable for large-scale machine learning problems (where $n$ is big). Recently, Wang et al. (2019a) also proposed a differentially private stochastic algorithm for nonconvex optimization. Their goal is to find local minima, while we aim to find a stationary point. In addition, their utility guarantee is asymptotic: it provides the desired utility guarantee only if an infinite number of iterations could be run. In contrast, our utility guarantee holds for a finite number of iterations.

3 Preliminaries

We consider the empirical risk minimization (ERM) problem: given a training set $S = \{(x_1, y_1), \ldots, (x_n, y_n)\}$ drawn from some unknown but fixed data distribution with $x_i \in \mathbb{R}^D$, $y_i \in \mathcal{Y} \subseteq \mathbb{R}$, we aim to find a solution $\hat{\theta} \in \mathbb{R}^d$ that minimizes the following empirical risk
$$F(\theta) := \frac{1}{n}\sum_{i=1}^n f_i(\theta), \qquad (3.1)$$
where $F(\theta)$ is the empirical risk function (i.e., the training loss), $f_i(\theta) = \ell(\theta; x_i, y_i)$ is the loss function defined on the $i$-th training example $(x_i, y_i)$, and $\theta \in \mathbb{R}^d$ is the model parameter we want to learn.
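As a quick illustration of the finite-sum objective (3.1) and the minibatch notation $F_{\mathcal{B}}$ from the Notation paragraph, the following sketch uses a squared loss as a stand-in for a generic per-example loss $\ell(\theta; x_i, y_i)$; the data and names are our own illustrative choices.

```python
import numpy as np

# Tiny illustration of the finite-sum ERM objective (3.1) and the
# minibatch gradient of F_B(theta) = (1/|B|) * sum_{i in B} f_i(theta).

def per_example_grad(theta, x, y):
    # Illustrative choice: l(theta; x, y) = 0.5 * (x^T theta - y)^2.
    return (x @ theta - y) * x

def full_grad(theta, X, Y):
    n = X.shape[0]
    return sum(per_example_grad(theta, X[i], Y[i]) for i in range(n)) / n

def minibatch_grad(theta, X, Y, idx):
    return sum(per_example_grad(theta, X[i], Y[i]) for i in idx) / len(idx)

rng = np.random.default_rng(0)
X, Y = rng.normal(size=(200, 5)), rng.normal(size=200)
theta = rng.normal(size=5)
# Uniform subsampling without replacement, as used throughout the paper.
batch = rng.choice(200, size=32, replace=False)
g_full, g_mini = full_grad(theta, X, Y), minibatch_grad(theta, X, Y, batch)
# g_mini is an unbiased estimate of g_full over the random draw of the batch.
```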
Here, we provide some definitions and lemmas that will be used in our theoretical analysis.

Definition 3.1. $\theta \in \mathbb{R}^d$ is a $\zeta$-approximate stationary point if $\|\nabla f(\theta)\|_2 \le \zeta$.

Definition 3.2. A function $f: \mathbb{R}^d \to \mathbb{R}$ is $G$-Lipschitz if for all $\theta_1, \theta_2 \in \mathbb{R}^d$, we have $|f(\theta_1) - f(\theta_2)| \le G\|\theta_1 - \theta_2\|_2$.

Definition 3.3. A function $f: \mathbb{R}^d \to \mathbb{R}$ has $L$-Lipschitz gradient if for all $\theta_1, \theta_2 \in \mathbb{R}^d$, we have $\|\nabla f(\theta_1) - \nabla f(\theta_2)\|_2 \le L\|\theta_1 - \theta_2\|_2$.

Differential privacy provides a formal notion of privacy, introduced by Dwork et al. (2006):

Definition 3.4 ($(\epsilon, \delta)$-DP (Dwork et al., 2006)). A randomized mechanism $M: \mathcal{S}^n \to \mathcal{R}$ satisfies $(\epsilon, \delta)$-differential privacy if for any two adjacent datasets $S, S' \in \mathcal{S}^n$ differing by one element, and any output subset $O \subseteq \mathcal{R}$, it holds that $\mathbb{P}[M(S) \in O] \le e^{\epsilon} \cdot \mathbb{P}[M(S') \in O] + \delta$.

To achieve $(\epsilon, \delta)$-DP for a given function $q: \mathcal{S}^n \to \mathcal{R}$, we can use the Gaussian mechanism (Dwork and Roth, 2014) $M = q(S) + u$, where $u$ is a Gaussian random vector whose variance is proportional to the $\ell_2$-sensitivity of the function $q$, $\Delta(q)$, which is defined as follows.

Definition 3.5 ($\ell_2$-sensitivity (Dwork and Roth, 2014)). For two adjacent datasets $S, S' \in \mathcal{S}^n$ differing by one element, the $\ell_2$-sensitivity $\Delta(q)$ of a function $q: \mathcal{S}^n \to \mathcal{R}$ is defined as $\Delta(q) = \sup_{S, S'} \|q(S) - q(S')\|_2$.

Rényi differential privacy. Although the notion of $(\epsilon, \delta)$-DP is widely used in output and objective perturbation methods, it suffers from loose composition and privacy-amplification by subsampling results, which makes it unsuitable for stochastic iterative learning algorithms. In this work, we make use of the notion of Rényi Differential Privacy (RDP) (Mironov, 2017), which is particularly useful when the dataset is accessed by a sequence of randomized mechanisms (Wang et al., 2019b).

Definition 3.6 (RDP (Mironov, 2017)). For $\alpha > 1$, $\rho > 0$, a randomized mechanism $M: \mathcal{S}^n \to \mathcal{R}$ satisfies $(\alpha, \rho)$-Rényi differential privacy, i.e., $(\alpha, \rho)$-RDP, if for all adjacent datasets $S, S' \in \mathcal{S}^n$ differing by one element, we have $D_\alpha\big(M(S) \,\|\, M(S')\big) := \log \mathbb{E}\big[(M(S)/M(S'))^{\alpha}\big]/(\alpha - 1) \le \rho$.

According to Definition 3.6, RDP measures the ratio of the probability distributions $M(S)$ and $M(S')$ by the $\alpha$-order Rényi divergence with $\alpha \in (1, \infty)$. As $\alpha \to \infty$, RDP reduces to $\epsilon$-DP. To further improve the privacy guarantee when using Gaussian mechanisms to satisfy RDP, we establish the following privacy-amplification by subsampling result, which is derived based on the result in Wang et al. (2019b).

Lemma 3.7. Given a function $q: \mathcal{S}^n \to \mathcal{R}$, the Gaussian mechanism $M = q(S) + u$, where $u \sim N(0, \sigma^2 I)$, satisfies $(\alpha, \alpha\Delta^2(q)/(2\sigma^2))$-RDP. In addition, if we apply the mechanism $M$ to a subset of samples using uniform sampling without replacement with sampling rate $\tau$, then $M$ satisfies $(\alpha, 3.5\tau^2\Delta^2(q)\alpha/\sigma^2)$-RDP, given $\sigma'^2 = \sigma^2/\Delta^2(q) \ge 0.7$ and $\alpha \le 2\sigma^2 \log\big(1/(\tau\alpha(1 + \sigma'^2))\big)/3 + 1$.
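The following is a minimal sketch of the Gaussian mechanism and of the accounting implied by Lemma 3.7, combined with the RDP-to-DP conversion stated in Lemma 3.9 below. The mean query, its sensitivity bound, and all parameter values are our own illustrative assumptions.

```python
import numpy as np

# Minimal sketch of the Gaussian mechanism and of RDP accounting per
# Lemma 3.7 (subsampled Gaussian) and Lemma 3.9 (RDP -> (eps, delta)-DP).

def gaussian_mechanism(query_value, sigma, rng):
    """Release q(S) + u with u ~ N(0, sigma^2 I); by Lemma 3.7 this is
    (alpha, alpha * Delta(q)^2 / (2 sigma^2))-RDP for every alpha > 1."""
    return query_value + rng.normal(scale=sigma, size=np.shape(query_value))

def rdp_subsampled_gaussian(alpha, sensitivity, sigma, tau):
    """RDP of the uniformly subsampled Gaussian mechanism from Lemma 3.7:
    rho(alpha) = 3.5 * tau^2 * Delta(q)^2 * alpha / sigma^2
    (valid under the lemma's conditions on sigma'^2 and alpha)."""
    return 3.5 * (tau ** 2) * (sensitivity ** 2) * alpha / (sigma ** 2)

def rdp_to_dp(alpha, rho, delta):
    """Lemma 3.9: (alpha, rho)-RDP implies
    (rho + log(1/delta)/(alpha - 1), delta)-DP."""
    return rho + np.log(1.0 / delta) / (alpha - 1.0)

rng = np.random.default_rng(0)
data = rng.normal(size=(1000, 5))
n = data.shape[0]
# A mean query over rows bounded in l2-norm by 1 has sensitivity 2/n on
# neighboring datasets differing in one record (illustrative assumption).
sensitivity, sigma, tau, delta = 2.0 / n, 0.02, 0.05, 1e-5
private_mean = gaussian_mechanism(data.mean(axis=0), sigma, rng)
# RDP composes additively over T subsampled releases at a fixed order alpha;
# convert to (eps, delta)-DP by minimizing over a grid of Renyi orders.
T = 100
eps = min(rdp_to_dp(a, T * rdp_subsampled_gaussian(a, sensitivity, sigma, tau),
                    delta)
          for a in range(2, 128))
print(f"approximately ({eps:.3f}, {delta})-DP after {T} iterations")
```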
Remark 3.8 (Comparison with the moments accountant). Suppose $\Delta(q) = 1$. Lemma 3.7 suggests that to achieve $(\alpha, 3.5\tau^2\alpha/\sigma^2)$-RDP for the subsampled Gaussian mechanism, we require $\sigma^2 \ge 0.7$. The moments accountant based method (Abadi et al., 2016) achieves the asymptotic privacy guarantee of $\big(\alpha, \tau^2\alpha/((1-\tau)\sigma^2) + O(\tau^3\alpha^3/\sigma^3)\big)$-RDP when $\tau$ goes to zero, $\sigma^2 \ge 1$, and $\alpha \le \sigma^2 \log(1/(\tau\sigma))$. In contrast to the moments accountant, our result has a closed-form bound on the privacy guarantee and a relaxed requirement on $\sigma^2$. It is worth noting that some other works (Mironov et al., 2019; Zhu and Wang, 2019) also study privacy-amplification by subsampling results. However, they consider the Poisson subsampling approach, which is different from our uniform subsampling method.

Based on Lemma 3.7, we can establish a strong privacy guarantee for our method in terms of RDP, and then transfer it to $(\epsilon, \delta)$-DP using the following lemma.

Lemma 3.9 (Mironov (2017)). If a randomized mechanism $M: \mathcal{S}^n \to \mathcal{R}$ satisfies $(\alpha, \rho)$-RDP, then $M$ satisfies $(\rho + \log(1/\delta)/(\alpha - 1), \delta)$-DP for all $\delta \in (0, 1)$.

4 Algorithm

Our proposed algorithm for differentially private nonconvex ERM is illustrated in Algorithm 1.

Algorithm 1 Differentially Private Stochastic Recursive Momentum (DP-SRM)
Input: $\theta_0$, $T$, $G$, $L$, $\gamma$, $\beta$, $n_0$, privacy parameters $\epsilon, \delta$, accuracy $\zeta$ for the first-order stationary point
1: Uniformly sample $b_0$ examples without replacement, indexed by $\mathcal{B}_0$
2: Compute $v_0 = \nabla F_{\mathcal{B}_0}(\theta_0)$, where $\nabla F_{\mathcal{B}_0}(\theta_0) = \sum_{i \in \mathcal{B}_0} \nabla f_i(\theta_0)/b_0$; draw $u_0 \sim N(0, \sigma_0^2 I_d)$ with $\sigma_0^2 = 14TG^2\alpha/(\beta n^2 \epsilon)$, $\alpha = \log(1/\delta)/((1 - \beta)\epsilon) + 1$
3: Release the differentially private gradient estimator $v_p^0 = v_0 + u_0$
4: for $t = 0, 1, 2, \ldots, T-1$ do
5: $\quad\theta_{t+1} = \theta_t - \eta_t v_p^t$, where $\eta_t = \min\big\{\zeta/(n_0 L \|v_p^t\|_2),\, 1/(2 n_0 L)\big\}$
6: $\quad$Uniformly sample $b$ examples without replacement, indexed by $\mathcal{B}_{t+1}$
7: $\quad$Compute $v_{t+1} = \nabla F_{\mathcal{B}_{t+1}}(\theta_{t+1}) + (1 - \gamma)\big(v_p^t - \nabla F_{\mathcal{B}_{t+1}}(\theta_t)\big)$; draw $u_{t+1} \sim N(0, \sigma^2 I_d)$ with $\sigma^2 = 14T\big((1 - \gamma)\zeta/n_0 + \gamma G\big)^2\alpha/(\beta n^2 \epsilon)$, $\alpha = \log(1/\delta)/((1 - \beta)\epsilon) + 1$
8: $\quad$Release the differentially private gradient estimator $v_p^{t+1} = v_{t+1} + u_{t+1}$
9: end for
Output: $\tilde{\theta}$ chosen uniformly at random from $\{\theta_t\}_{t=0}^{T-1}$

The main idea is to construct the differentially private gradient estimator $v_p^t$ iteratively, based on the information obtained from the previous updates. We initialize $v_0$ to be the mini-batch stochastic gradient $\nabla F_{\mathcal{B}_0}(\theta_0)$ and inject Gaussian noise $u_0$ with covariance matrix $\sigma_0^2 I_d$ (lines 2, 3) to make it differentially private. Then, we recursively update $v_t$ (line 7) as $v_t = \nabla F_{\mathcal{B}_t}(\theta_t) + (1 - \gamma)\big(v_p^{t-1} - \nabla F_{\mathcal{B}_t}(\theta_{t-1})\big)$, where $\nabla F_{\mathcal{B}_t}(\theta_t)$ and $\nabla F_{\mathcal{B}_t}(\theta_{t-1})$ are mini-batch stochastic gradients, and $v_p^{t-1}$ is the private gradient estimator released at the last iteration. The momentum parameter $\gamma$ is used to control the decay rate of the prior information $v_p^{t-1} - \nabla F_{\mathcal{B}_t}(\theta_{t-1})$.
This is called stochastic recursive momentum (Cutkosky and Orabona, 2019), which can lead to fast convergence. After updating $v_t$, we again inject Gaussian noise $u_t$ with covariance matrix $\sigma^2 I_d$ (line 8) to provide differential privacy. The variances $\sigma_0^2, \sigma^2$ of the Gaussian random vectors are determined by our RDP-based analysis. We choose an adaptive step size (line 5) to bound the sensitivity of the gradient estimator $v_p^t$, which is the key to establishing the tight privacy and utility guarantees (Section 6) of our algorithm.

5 Main Theoretical Results

In this section, we establish formal privacy and utility guarantees for Algorithm 1.

Theorem 5.1. Suppose that each component function $f_i$ is $G$-Lipschitz and has $L$-Lipschitz gradient. Given the total number of iterations $T$, the momentum parameter $\gamma$, and the accuracy $\zeta$ for the first-order stationary point, for any $\delta > 0$ and privacy budget $\epsilon$, Algorithm 1 satisfies $(\epsilon, \delta)$-differential privacy with $\sigma_0^2 = 14TG^2\alpha/(\beta n^2\epsilon)$ and $\sigma^2 = 14T\big((1-\gamma)\zeta/n_0 + \gamma G\big)^2\alpha/(\beta n^2\epsilon)$ if we have $\alpha - 1 = \log(1/\delta)/((1-\beta)\epsilon) \le 2\sigma'^2\log\big(1/(\tau\alpha(1+\sigma'^2))\big)/3$ with $\beta \in (0, 1)$ and $\sigma'^2 = \min\big\{b^2\sigma^2/\big(4((1-\gamma)\zeta/n_0 + \gamma G)^2\big),\, b_0^2\sigma_0^2/(4G^2)\big\} \ge 0.7$, where $b_0$ and $b$ are batch sizes, and $\tau = \max\{b_0/n, b/n\}$.

Remark 5.2. According to Theorem 5.1, there exists a constraint on the parameter $\alpha$, which is due to the privacy-amplification by subsampling result in Lemma 3.7, and is similar to the constraint given by the moments accountant (Abadi et al., 2016) and other RDP-based analyses with subsampling approaches (Mironov et al., 2019; Zhu and Wang, 2019). Furthermore, as we mentioned in Remark 3.8, our result relaxes the requirement on the variance $\sigma'^2$ compared with the moments accountant based analysis.

Following the previous work (Bassily et al., 2019), we can get rid of the constraints in Theorem 5.1 by using a larger mini-batch size, as stated in the following corollary.

Corollary 5.3. Given the total number of iterations $T$, the momentum parameter $\gamma$, and the accuracy $\zeta$ for the first-order stationary point, under the same conditions of Theorem 5.1 on $f_i$, $\sigma_0^2$, $\sigma^2$, for any $\delta > 0$ and privacy budget $\epsilon$, Algorithm 1 satisfies $(\epsilon, \delta)$-differential privacy if we choose $b_0^2 = b^2 = n^2\epsilon/T$, $\beta = 1/2$, and $T$ larger than $O\big(\log^4(1/\delta)/\epsilon^3\big)$.

Theorem 5.1 and Corollary 5.3 require that each component function $f_i$ is $G$-Lipschitz and has $L$-Lipschitz gradient, which are used to derive the sensitivity of the underlying query function (i.e., the gradient estimator $v_t$ in Algorithm 1) and thus determine the variance of the Gaussian noise. The $G$-Lipschitz condition has been widely assumed in the literature of differential privacy (Abadi et al., 2016; Wang et al., 2017; Jayaraman et al., 2018; Bassily et al., 2019), and the $L$-Lipschitz gradient condition has also been made in several previous works (Zhang et al., 2017; Feldman et al., 2020). In practice, we can use the clipping technique (Abadi et al., 2016), sketched below, to ensure that at each iteration $\|\nabla f_i(\theta_t)\|_2 \le C_1$ and $\|\nabla f_i(\theta_t) - \nabla f_i(\theta_{t-1})\|_2 \le C_2$, where $C_1, C_2$ are some predefined constants.
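A minimal sketch of this clipping step, assuming per-example gradients are available as rows of an array; the threshold values and function names are our own illustrative choices.

```python
import numpy as np

# Minimal sketch of the clipping step enforcing the two sensitivity bounds:
# per-example gradients are clipped to norm C1, and per-example gradient
# differences between consecutive iterates are clipped to norm C2.

def clip_rows(G, threshold):
    """Scale each row g to g * min(1, threshold / ||g||_2)."""
    norms = np.linalg.norm(G, axis=1, keepdims=True)
    return G * np.minimum(1.0, threshold / np.maximum(norms, 1e-12))

def clipped_recursive_estimator(grads_t, grads_prev, v_prev_private,
                                gamma, C1=1.0, C2=0.01):
    """Clipped version of the update
    v_t = grad_batch(theta_t) + (1 - gamma)*(v_{t-1}^p - grad_batch(theta_{t-1})),
    rewritten as gamma * mean(grads) + (1-gamma) * (v_prev + mean(diffs)).
    The two clippings bound the l2-sensitivity of v_t by
    2*((1 - gamma)*C2 + gamma*C1)/b, with b the batch size."""
    diffs = clip_rows(grads_t - grads_prev, C2)  # ||g_i(t) - g_i(t-1)|| <= C2
    grads = clip_rows(grads_t, C1)               # ||g_i(t)|| <= C1
    return (gamma * grads.mean(axis=0)
            + (1.0 - gamma) * (v_prev_private + diffs.mean(axis=0)))
```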
As a result, we can guarantee that the sensitivity of vt is bounded by 2 \u0000(1 \u2212\u03b3)C2 + \u03b3C1 \u0001 /b (see (6.1)). Then, we can replace G and \u03b6/n0 with C1 and C2 in Algorithm 1 to establish the same privacy guarantee. The following theorem shows the utility guarantee and the gradient complexity, which is the total number of the stochastic gradients we need to estimate during the training process, of Algorithm 1. Theorem 5.4. Under the same conditions of Theorem 5.1 on fi, \u03c32, \u03c32 0, \u03c3\u20322, \u03b1, if we choose the number of iterations T = C1(n\u03f5LDF )4/3/ \u0000G8/3(d log(1/\u03b4)2/3\u0001 , where DF = F(\u03b80) \u2212F(\u03b8\u2217) and F(\u03b8\u2217) is a global minimum of F, the accuracy for the \ufb01rst-order stationary point \u03b6 = C2 \u0000GLDF d log(1/\u03b4) \u00011/3/(n\u03f5)2/3, batch sizes b0 = C3G3/(\u03b6LDF ), b = C4G/(n0\u03b6), n0 = LDF /G2, the momentum parameter \u03b32 = C5\u03b62/(n2 0G2) and n\u03f5 \u2265C6 max{G8 log2(1/\u03b4)/(LDF d)4, G2(d log(1/\u03b4))1/2/(LDF )}, then the output e \u03b8 of Algorithm 1 and satis\ufb01es the following E\u2225\u2207F(e \u03b8)\u22252 \u2264C7 \u0012p GLDF d log(1/\u03b4) n\u03f5 \u0013 2 3 , where {Ci}7 i=1 are absolute constants, and the expectation is taken over all the randomness of the algorithm, i.e., the random Gaussian noise and the subsample gradient. Since T = O \u0000(n\u03f5LDF )4/3/ \u0000G8/3(d log(1/\u03b4)2/3\u0001 , b0 = b = O \u0000G8/3(n\u03f5)2/3/(LDF )4/3(d log(1/\u03b4))1/3\u0001 , the total gradient complexity of Algorithm 1 is O \u0000(n\u03f5)2/(d log(1/\u03b4)) \u0001 . Remark 5.5 (Comparison with existing methods). According to Theorem 5.4, our method can achieve the following utility guarantee O \u0012\u0012p GLDF d log(1/\u03b4) n\u03f5 \u0013 2 3 \u0013 . This result is better than the best known result for differentially private nonconvex optimization method (Wang et al., 2017). Furthermore, their method is based on gradient descent, which is computationally very 7 \fexpensive in large-scale machine learning problems. Furthermore, the gradient complexity of our method is O \u0012 (n\u03f5)2 d log(1/\u03b4) \u0013 . This result is smaller than O(n2) gradient complexity provided by Zhang et al. (2017) and O \u0000n2\u03f5/(d log(1/\u03b4))1/2\u0001 gradient complexity provided by Wang et al. (2017) when d is large. Theorem 5.4 shows that our method only requires the computation of minibatch gradients with batch size at the order of O \u0000(n\u03f5)2/3/(d log(1/\u03b4)1/3\u0001 (ignoring the dependence on other parameters). Therefore, our method is more scalable than existing differentially private stochastic variance reduced algorithms, such as DP-SVRG (Wang et al., 2017) designed for convex optimization, which often require the periodic computation of the checkpoint gradient with a giant batch size (full batch in DP-SVRG). 6 Proof Outline of the Main Results In this section, we present the proof outline of the main results in Section 5. Our proof involves new techniques for the privacy and utility guarantees that are of general use for variance reduction-based algorithms. The detailed proof can be found in Section B in Appendix. 6.1 Privacy Guarantee According to Algorithm 1, the mechanism at t-th iteration is Mt, which is a composition of t Gaussian mechanisms: G0, . . . , Gt, where G0 = \u2207FB0(\u03b80) + u0 and Gt = \u2207FBt(\u03b8t) \u2212(1 \u2212\u03b3)\u2207FBt(\u03b8t\u22121) + ut. 
Therefore, we want to show that Mt is differentially private. For the given dataset S, we use S\u2032 to denote its neighboring dataset with one different example indexed by i\u2032 There are two main challenges in providing a tight privacy analysis. The \ufb01rst one is to deal with the subsampled mechanisms {Gi}T\u22121 i=0 . The second one is to control the sensitivity of Gt when t > 0. The \ufb01rst challenge can be addressed by our privacy-ampli\ufb01cation by subsampling result (Lemma 3.7), which gives us a tight closed-form bound on the privacy guarantee. We can overcome the second challenge by using an adaptive stepsize, enabling us to use a small amount of random noise to achieve differential privacy. According to Algorithm 1, Gt is the application of the following Gaussian mechanism e Gt to a subset of uniformly sampled examples, indexed by Bt e Gt = \u001a 1 b Pn i=1 \u2207fi(\u03b80) + u0, t = 0 1 b Pn i=1 \u0000\u2207fi(\u03b8t) \u2212\u03c6\u2207fi(\u03b8t\u22121) \u0001 + ut, t > 0, where \u03c6 = 1 \u2212\u03b3. For e q0 = Pn i=1 \u2207fi(\u03b80)/b0 in e G0, the sensitivity \u2206(e q0) is determined by \u2225e q0(S) \u2212e q0(S\u2032)\u22252 \u22641 b\u2225\u2207fi(\u03b80) \u2212\u2207fi\u2032(\u03b80)\u22252 \u22642G b0 , where the last inequality is due to G-Lipschitz of each component function. For e qt = Pn i=1 \u2207fi(\u03b8t)/b \u2212 8 \f(1 \u2212\u03b3) Pn i=1 \u2207fi(\u03b8t\u22121)/b in e Gt when t > 0, the sensitivity \u2206(e qt) = \u2225e qt(S) \u2212e qt(S\u2032)\u22252 is determined by 1 \u2212\u03b3 b \u2225\u2207fi(\u03b8t) \u2212\u2207fi(\u03b8t\u22121) + \u2207fi\u2032(\u03b8t) \u2212\u2207fi\u2032(\u03b8t\u22121)\u22252 + \u03b3 b \u2225\u2207fi(\u03b8t) \u2212\u2207fi\u2032(\u03b8t)\u22252. (6.1) Therefore, we have \u2225qt(S) \u2212qt(S\u2032)\u22252 \u22642L(1 \u2212\u03b3) b \u2225\u03b8t \u2212\u03b8t\u22121\u22252 + 2\u03b3G b = 2L(1 \u2212\u03b3) b \u03b7t\u22121\u2225vt\u22121 p \u22252 + 2\u03b3G b \u22642(1 \u2212\u03b3)\u03b6 n0b + 2\u03b3G b , where the \ufb01rst inequality is due to L-Lipschitz continuous gradient and G-Lipschitz of each component function. The last inequality comes from the adaptive stepsize \u03b7t = min \b \u03b6/(n0L\u2225vt p\u22252), 1/(2n0L) \t . Note that the proposed adaptive stepsize \u03b7t is the key to control the sensitivity of e qt. If we choose a \ufb01xed stepsize such as \u03b7t = 1/(2L), the sensitivity of e qt will be in the order of O(G2/b), which will lead to a much larger random noise to achieve differential privacy and thus deteriorate the utility of our method. According to Lemma 3.7, if the noise u0 and ut satisfy \u03c32 0 = 14T\u03b1G2/(\u03b2n2\u03f5) and \u03c32 = 14T\u03b1 \u0000(1 \u2212 \u03b3)\u03b6/n0 + \u03b3G \u00012/(\u03b2n2\u03f5), the Gaussian mechanism e Gt satis\ufb01es \u0000\u03b1, \u03b2\u03f5n2/ \u00007b2 0T \u0001\u0001 -RDP, and the privacyampli\ufb01cation by subsampling result shows that Gt satis\ufb01es (\u03b1, \u03b2\u03f5/T)-RDP. Therefore, by the composition rule of RDP Mironov (2017), after T \u2032 iterations, Algorithm 1 satis\ufb01es (\u03b1, \u03b2T \u2032\u03f5/T)-RDP. According to Lemma 3.9 and \u03b1 = log(1/\u03b4)/ \u0000(1 \u2212\u03b2)\u03f5 \u0001 + 1, we have that after T \u2032 iterations, Algorithm 1 satis\ufb01es (T \u2032\u03f5/T, \u03b4)-DP. 
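To summarize the mechanism analyzed in this section, the following is a self-contained sketch of the DP-SRM loop (Algorithm 1) on a toy least-squares problem. The noise variances follow the form derived above, but the data, the default parameter values, and all names are toy choices of ours rather than the tuned values from the theory.

```python
import numpy as np

# Illustrative end-to-end sketch of DP-SRM (Algorithm 1) on a toy
# least-squares objective; constants are toy choices.

def grad_batch(theta, X, Y, idx):
    Xb, Yb = X[idx], Y[idx]
    return Xb.T @ (Xb @ theta - Yb) / len(idx)

def dp_srm(X, Y, T=200, b0=64, b=32, gamma=0.1, zeta=0.1, n0=1.0, L=1.0,
           eps=1.0, delta=1e-5, beta=0.5, G=1.0, seed=0):
    rng = np.random.default_rng(seed)
    n, d = X.shape
    alpha = np.log(1.0 / delta) / ((1.0 - beta) * eps) + 1.0
    sigma0 = np.sqrt(14 * T * G**2 * alpha / (beta * n**2 * eps))
    sigma = np.sqrt(14 * T * ((1 - gamma) * zeta / n0 + gamma * G)**2
                    * alpha / (beta * n**2 * eps))
    theta, iterates = np.zeros(d), []
    # Lines 1-3: private initial estimator v0_p.
    idx0 = rng.choice(n, size=b0, replace=False)
    v_p = grad_batch(theta, X, Y, idx0) + rng.normal(scale=sigma0, size=d)
    for _ in range(T):
        iterates.append(theta.copy())
        # Line 5: adaptive step size bounds the per-step movement.
        eta = min(zeta / (n0 * L * np.linalg.norm(v_p) + 1e-12),
                  1 / (2 * n0 * L))
        theta_next = theta - eta * v_p
        # Lines 6-8: recursive momentum estimator plus fresh Gaussian noise.
        idx = rng.choice(n, size=b, replace=False)
        v = grad_batch(theta_next, X, Y, idx) \
            + (1 - gamma) * (v_p - grad_batch(theta, X, Y, idx))
        v_p = v + rng.normal(scale=sigma, size=d)
        theta = theta_next
    # Output: an iterate chosen uniformly at random.
    return iterates[rng.integers(len(iterates))]

rng = np.random.default_rng(1)
X = rng.normal(size=(2000, 10))
Y = X @ rng.normal(size=10) + 0.1 * rng.normal(size=2000)
theta_private = dp_srm(X, Y)
```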
6.2 Utility Guarantee According to the de\ufb01nition of e \u03b8, we have E\u2225\u2207F(e \u03b8)\u22252 = 1 T T\u22121 X t=0 E\u2225\u2207F(\u03b8t)\u22252 \u22641 T T\u22121 X t=0 E \r \rvt p \r \r 2 + 1 T T\u22121 X t=0 E \r \r\u2207F(\u03b8t) \u2212vt p \r \r 2, where the expectation is taken over all the randomness of the algorithm. The key challenge in establishing a tight utility guarantee is to derive tight upper bounds for PT\u22121 t=0 E \r \rvt p \r \r 2/T and PT\u22121 t=0 E \r \r\u2207F(\u03b8t)\u2212vt p \r \r 2/T when we have adaptive stepsize \u03b7t and the random noise ut in vt p. First of all, by taking into account the adaptive stepsize \u03b7t, we can upper bound the term PT\u22121 t=0 E \r \rvt p \r \r 2/T as follows 4n0LDF T\u03b6 + 1 T\u03b6 T\u22121 X t=0 E \r \r\u2207F(\u03b8t) \u2212vt p \r \r2 2 + 2\u03b6, where DF = F(\u03b80) \u2212F(\u03b8\u2217). Furthermore, we can obtain the upper bound for PT\u22121 t=0 E \r \rvt p \u2212\u2207F(\u03b8t) \r \r2 2/T 9 \fas follows 2(1 \u2212\u03b3)2\u03b62 n2 0\u03b3b + 2\u03b3G2 b + G2 T\u03b3b0 + Td\u03c32 + d\u03c32 0 T\u03b3 , where the \ufb01rst term is determined by \u03b7t, and the last term is determined by the random noise ut in vt p. The last term in this bound is dominated by d\u03c32/\u03b3, which validates the necessity of using the adaptive stepsize to control the sensitivity of vt and thus enable a small \u03c32. Finally, combining these two new bounds and plugging the value of parameters in Theorem 5.4, we can obtain that E\u2225\u2207F(e \u03b8)\u22252 \u2264C1\u03b6 + C2 p GLDF d log(1/\u03b4) n\u03f5\u221a\u03b6 . By solving the smallest \u03b6, we can obtain \u03b6 = (GLDF d log(1/\u03b4))1/3/(n\u03f5C1/C2)2/3. Thus we have E\u2225\u2207F(e \u03b8)\u22252 \u2264C3\u03b6, where C1, C2, C3 are some constants. 7 Experiments This section presents results from experiments that evaluate our method\u2019s performance on different nonconvex ERM problems and different datasets. All experiments are implemented in Pytorch platform version 1.2.0 within Python 3.7.6. on a local machine which comes with Intel Xeon 4214 CPUs and NVIDIA GeForce RTX 2080Ti GPU (11G GPU RAM). 7.1 Nonconvex Logistic Regression We \ufb01rst consider the binary logistic regression problem with a nonconvex regularizer (Reddi et al., 2016b) min \u03b8\u2208Rd 1 n n X i=1 yi log \u03c6(x\u22a4 i \u03b8) + (1 \u2212yi) log \u0002 1 \u2212\u03c6(x\u22a4 i \u03b8) \u0003 + \u03bb d X i=1 \u03b82 j/(1 + \u03b82 j), where \u03c6(x) = 1/ \u00001 + exp(\u2212x) \u0001 is the sigmoid function, \u03b8j is the j-th coordinate of \u03b8, and \u03bb > 0 is the regularization parameter. We set \u03bb = 0.001 in this experiment. Here, we consider two commonly-used binary classi\ufb01cation benchmark datasets: a9a dataset, which contains 32561 training examples, 16281 test examples, 123 features, and ijcnn1 dataset with 49990 training examples, 91701 test examples, 22 features. Baseline methods. We compare our method (DP-SRM) with random round private stochastic gradient descent (RRPSGD) proposed by Zhang et al. (2017) , differentially private gradient descent (DP-GD) proposed by Wang et al. (2017), and differentially private adaptive gradient descent (DP-AGD) proposed by Lee and Kifer (2018). Gradient clipping and privacy tracking. We use the gradient clipping technique of Abadi et al. 
(2016) to ensure that at the $t$-th iteration of Algorithm 1, $\|\nabla f_i(\theta_t)\|_2$ and $\|\nabla f_i(\theta_t) - \nabla f_i(\theta_{t-1})\|_2$ are upper bounded by some predefined values $C_1$ and $C_2$, respectively. This ensures that the sensitivity of the gradient estimator $v_t$ is upper bounded by $2\big((1-\gamma)C_2 + \gamma C_1\big)/b$ (see (6.1)), which gives us the desired privacy protection. At each iteration, we add Gaussian noise with variance $\sigma^2$, keep track of the RDP according to Lemma 3.7, and transfer it to $(\epsilon, \delta)$-DP according to Lemma 3.9. Parameters. For all the algorithms, the step size is tuned around the theoretical values to give the fastest convergence using grid search. For our method, we tune the batch size $b$ by searching the grid $\{50, 100, 200\}$. We set $C_1 = 1$, $C_2 = 0.01$, and $\gamma = C_2$. We choose $\epsilon \in \{0.2, 0.5\}$ and $\delta = 10^{-5}$. Results. Due to the randomized nature of all the algorithms, the experimental results are obtained by averaging the results over 30 runs. Figures 1 and 2 show the objective function value and the gradient norm of different algorithms for privacy budgets $\epsilon \in \{0.2, 0.5\}$ on the a9a and ijcnn1 datasets, respectively. We also report the 95% confidence interval of these results. We can see from the plots that our DP-SRM algorithm outperforms the other three baseline algorithms in terms of the objective loss, gradient norm, and convergence rate by a large margin. Tables 2 and 3 summarize the test error of different algorithms as well as the CPU time (in seconds) of the training process. The results also corroborate the advantages of our method in terms of accuracy and efficiency.

Table 2: Comparison of different algorithms on the a9a dataset when $\epsilon \in \{0.2, 0.5\}$ and $\delta = 10^{-5}$. We use the STORM algorithm (Cutkosky and Orabona, 2019) as the non-private baseline.

Privacy Budget | Non-private Baseline | Method | Test Error | Data Passes | CPU time (s) | Gradient Norm
$\epsilon = 0.2$ | 0.3346 (0.007) | DP-GD | 0.4155 (0.0107) | 20 | 1.245 | 0.0953 (0.0212)
 | | DP-AGD | 0.3713 (0.0043) | 360 | 96.21 | 0.0437 (0.0020)
 | | RRPSGD | 0.4019 (0.0033) | 8 | 39.61 | 0.2175 (0.0116)
 | | DP-SRM | 0.3579 (0.0009) | 4 | 0.6007 | 0.0528 (0.0042)
$\epsilon = 0.5$ | 0.3346 (0.007) | DP-GD | 0.3859 (0.0057) | 20 | 1.261 | 0.0866 (0.0129)
 | | DP-AGD | 0.3627 (0.0038) | 365 | 95.45 | 0.0402 (0.0022)
 | | RRPSGD | 0.3861 (0.0028) | 10 | 52.32 | 0.1454 (0.0126)
 | | DP-SRM | 0.3506 (0.0011) | 5 | 0.7383 | 0.0502 (0.0061)
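Before turning to the ijcnn1 results in Table 3, here is a minimal sketch of the nonconvex-regularized logistic objective from Section 7.1 used in these experiments; the negative log-likelihood form, the numerical guard, and the function names are our own implementation choices.

```python
import numpy as np

# Minimal sketch of the Section 7.1 objective: logistic cross-entropy loss
# plus the nonconvex regularizer lambda * sum_j theta_j^2 / (1 + theta_j^2).

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def objective(theta, X, y, lam=0.001):  # lam = 0.001 as in the experiments
    p = sigmoid(X @ theta)
    eps = 1e-12  # numerical guard for the logarithms
    ce = -np.mean(y * np.log(p + eps) + (1 - y) * np.log(1 - p + eps))
    reg = lam * np.sum(theta**2 / (1.0 + theta**2))
    return ce + reg

def gradient(theta, X, y, lam=0.001):
    p = sigmoid(X @ theta)
    grad_ce = X.T @ (p - y) / len(y)
    # d/dt [t^2 / (1 + t^2)] = 2t / (1 + t^2)^2
    grad_reg = lam * 2.0 * theta / (1.0 + theta**2) ** 2
    return grad_ce + grad_reg
```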
Table 3: Comparison of different algorithms on the ijcnn1 dataset under different privacy budgets $\epsilon \in \{0.2, 0.5\}$ and $\delta = 10^{-5}$. The non-private baseline denotes the test error of the non-private STORM algorithm (Cutkosky and Orabona, 2019).

Privacy Budget | Non-private Baseline | Method | Test Error | Data Passes | CPU time (s) | Gradient Norm
$\epsilon = 0.2$ | 0.2096 (0.002) | DP-GD | 0.3160 (0.0120) | 20 | 0.5180 | 0.0184 (0.0024)
 | | DP-AGD | 0.2645 (0.0044) | 346 | 90.05 | 0.0133 (0.0018)
 | | RRPSGD | 0.3110 (0.0106) | 8 | 47.64 | 0.0175 (0.0023)
 | | DP-SRM | 0.2503 (0.0090) | 4 | 0.4748 | 0.0117 (0.0008)
$\epsilon = 0.5$ | 0.2096 (0.002) | DP-GD | 0.2717 (0.0081) | 20 | 0.4990 | 0.0171 (0.0024)
 | | DP-AGD | 0.2416 (0.0029) | 365 | 94.28 | 0.0397 (0.0025)
 | | RRPSGD | 0.3033 (0.0110) | 10 | 59.06 | 0.0160 (0.0018)
 | | DP-SRM | 0.2341 (0.0042) | 5 | 0.4368 | 0.0082 (0.0005)

[Figure 1: Results for nonconvex logistic regression on the a9a dataset. (a), (b) illustrate the objective loss versus the number of epochs; (c), (d) present the gradient norm versus the number of epochs, each under $\epsilon = 0.2$ and $\epsilon = 0.5$, comparing DP-GD, DP-AGD, GP-SGD, and DP-SRM.]

[Figure 2: Results for nonconvex logistic regression on the ijcnn1 dataset. (a), (b) show the objective loss versus the number of epochs; (c), (d) illustrate the gradient norm versus the number of epochs, each under $\epsilon = 0.2$ and $\epsilon = 0.5$, comparing DP-GD, DP-AGD, GP-SGD, and DP-SRM.]

7.2 Convolutional Neural Networks

We compare our algorithm with the differentially private stochastic gradient descent (DP-SGD) algorithm proposed by Abadi et al. (2016) on training convolutional neural networks for image classification on both the MNIST (LeCun et al., 1998) and CIFAR-10 (Krizhevsky and Hinton, 2009) datasets. Architecture for MNIST. For the MNIST dataset, we consider a 4-layer CNN (see https://github.com/facebookresearch/pytorch-dp), which can achieve 99% classification accuracy on the test dataset after training with SGD. Parameters for MNIST. We choose privacy budgets $\epsilon \in \{1.2, 3.0, 7.0\}$ and set $\delta = 10^{-5}$. To ensure the privacy guarantee (see (6.1)), we set the clipping parameter $C_1 = 1.5$ for the term $\|\nabla f_i(\theta_t)\|_2$. For the term $\|\nabla f_i(\theta_t) - \nabla f_i(\theta_{t-1})\|_2$, we choose the clipping parameter $C_2$ from the grid $\{0.01, 0.1, 0.3, 0.5, 0.7, 0.9, 0.99\}$. For both DP-SGD and DP-SRM, we tune the batch size $b$ by searching the grid $\{256, 512, 1024\}$ and the step size by $\{0.01, 0.05, 0.1, 0.25, 0.5\}$. For DP-SRM, we tune the batch size $b_0$ by $\{b, 2b, 4b\}$. In addition, we set the momentum parameter $\gamma = C_2$. Results for MNIST. Figure 3 illustrates the average test error and the corresponding 95% confidence interval of different methods versus the number of iterations as well as the training time (in seconds) under the privacy budgets $\epsilon = 1.2$ and $\epsilon = 3.0$ over 30 trials. We see similar results under the privacy budget $\epsilon = 7.0$, and thus defer them to Section A in the Appendix. The CNN trained by the non-private SGD can achieve 1% test error after 20 epochs.
Figure 3(a) and Figure 3(c) show that our proposed method can achieve 3.62% and 4.49% test errors when $\epsilon = 3.0$ and $\epsilon = 1.2$, which are better than DP-SGD with 3.81% and 5.33% test errors. Besides, our method converges faster than DP-SGD. Figure 3(a) and Figure 3(b) demonstrate that, compared with DP-SGD, our method only takes 0.3x the iterations and 0.4x the training time to achieve comparable performance under the privacy budget $\epsilon = 3.0$.

[Figure 3: Results on the MNIST dataset. (a), (b) depict the test error (versus steps and training time) under the privacy budget $\epsilon = 3.0$; (c), (d) illustrate the test error under the privacy budget $\epsilon = 1.2$, comparing DP-SGD and DP-SRM.]

[Figure 4: Results for CNN6 on the CIFAR-10 dataset. (a), (b) depict the test error (versus steps and training time) under the privacy budget $\epsilon = 2.0$; (c), (d) illustrate the test error under the privacy budget $\epsilon = 4.0$, comparing DP-SGD and DP-SRM.]

Architecture for CIFAR-10. We consider two convolutional neural networks for CIFAR-10. The first one is a five-layer CNN with two convolutional layers and three fully connected layers, which we call CNN5 (see https://pytorch.org/tutorials/beginner/blitz/cifar10_tutorial.html). For CNN5, we train it from scratch using our DP-SRM method and the DP-SGD method (Abadi et al., 2016) and compare their performance in terms of model accuracy, iteration numbers, and training time. For the second one, we consider a similar architecture as in Abadi et al. (2016), which has three convolutional layers with 32, 64, and 128 filters, respectively, and three fully connected layers; we denote it by CNN6. For CNN6, we follow the same experimental setting as in Abadi et al. (2016): we use the CIFAR-100 dataset as a public dataset and first train a network with the same architecture on this dataset as the pretrained model. Then, we initialize the convolutional layers of CNN6 using the convolutional layers of the pretrained model, and only train the fully connected layers of CNN6 on the CIFAR-10 dataset using the different private methods. Parameters for CNN6. We choose three different privacy budgets $\epsilon \in \{2.0, 4.0, 8.0\}$ and $\delta = 10^{-5}$. We set the clipping parameter $C_1 = 2$ for the term $\|\nabla f_i(\theta_t)\|_2$. For the term $\|\nabla f_i(\theta_t) - \nabla f_i(\theta_{t-1})\|_2$, we choose the clipping parameter $C_2$ by searching the grid $\{0.01, 0.05, 0.1, 0.3, 0.5, 0.7, 0.9, 0.95, 0.99\}$. For DP-SGD, we tune the step size by searching the grid $\{0.01, 0.02, 0.05, 0.1, 0.15, 0.2\}$ and the batch size by $\{64, 128, 256\}$. For DP-SRM, we tune the batch size $b$ by searching the grid $\{64, 128, 256\}$, the step size by $\{0.01, 0.02, 0.05, 0.1, 0.15, 0.2\}$, and $b_0$ by $\{b, 2b, 4b\}$. In addition, we set the momentum parameter $\gamma = C_2$.
Results for CNN6. Figure 4 presents the average test error and the corresponding 95% confidence interval of different methods versus the number of iterations as well as the training time (in seconds) over 30 trials. The CNN6 trained by the non-private SGD achieves 18.5% test error after 150 epochs. The results show that our proposed method can achieve 33.2% and 31.0% test errors given ϵ = 2.0 and ϵ = 4.0, which are comparable to the results of DP-SGD with 33.2% and 31.2% under the same privacy budgets. However, we can see from the plots that our method can significantly reduce the number of iterations and the training time. For example, when ϵ = 4.0, DP-SGD takes 1.3 × 10^4 iterations and 1115 seconds to achieve 31.2% test error. In sharp contrast, our method only takes 6.8 × 10^3 iterations and 643 seconds to achieve 31.0% test error. We can observe similar results for CNN5, which are presented in Section A in the Appendix." + }, + { + "url": "http://arxiv.org/abs/1909.06322v1", + "title": "A Knowledge Transfer Framework for Differentially Private Sparse Learning", + "abstract": "We study the problem of estimating high dimensional models with underlying\nsparse structures while preserving the privacy of each training example. We\ndevelop a differentially private high-dimensional sparse learning framework\nusing the idea of knowledge transfer. More specifically, we propose to distill\nthe knowledge from a \"teacher\" estimator trained on a private dataset, by\ncreating a new dataset from auxiliary features, and then train a differentially\nprivate \"student\" estimator using this new dataset. In addition, we establish\nthe linear convergence rate as well as the utility guarantee for our proposed\nmethod. For sparse linear regression and sparse logistic regression, our method\nachieves improved utility guarantees compared with the best known results\n(Kifer et al., 2012; Wang and Gu, 2019). We further demonstrate the superiority\nof our framework through both synthetic and real-world data experiments.", + "authors": "Lingxiao Wang, Quanquan Gu", + "published": "2019-09-13", + "updated": "2019-09-13", + "primary_cat": "stat.ML", + "cats": [ + "stat.ML", + "cs.CR", + "cs.LG" + ], + "main_content": "Introduction

In the Big Data era, sensitive data such as genomic data and purchase history data are ubiquitous, which necessitates learning algorithms that can protect the privacy of each individual data record. A rigorous and standard notion for privacy guarantees is differential privacy (Dwork et al., 2006). By adding random noise to the model parameters (output perturbation), to intermediate steps of the learning algorithm (gradient perturbation), or to the objective function of the learning algorithm (objective perturbation), differentially private algorithms ensure that the trained models can learn the statistical information of the population without leaking any information about the individuals. In the last decade, a surge of differentially private learning algorithms (Chaudhuri and Monteleoni, 2009; Chaudhuri et al., 2011; Kifer et al., 2012; Bassily et al., 2014; Talwar et al., 2015; Zhang et al., 2017; Wang et al., 2017, 2018; Jayaraman et al., 2018) for empirical risk minimization has been developed. However, most of these studies only consider the classical setting, where the problem dimension is fixed.
In the modern high-dimensional setting, where the problem dimension can increase with the number of observations, all these empirical risk minimization algorithms fail. A common and effective approach to address this issue is to assume the model has a certain structure, such as a sparse structure or a low-rank structure. In this paper, we consider high-dimensional models with sparse structure. Given a dataset S = {(x_i, y_i)}_{i=1}^n, where x_i ∈ R^d and y_i ∈ R are the input vector and response of the i-th example, our goal is to estimate the underlying sparse parameter vector θ* ∈ R^d, which has s* nonzero entries, by solving the following ℓ2-norm regularized optimization problem with a sparsity constraint

    min_{θ ∈ R^d} L̄_S(θ) := L_S(θ) + (λ/2)‖θ‖_2^2   subject to   ‖θ‖_0 ≤ s,   (1.1)

where L_S(θ) := n^{-1} Σ_{i=1}^n ℓ(θ; x_i, y_i) is the empirical loss on the training data, ℓ(θ; x_i, y_i) is the loss function defined on the training example (x_i, y_i), λ ≥ 0 is a regularization parameter, ‖θ‖_0 counts the number of nonzero entries in θ, and s controls the sparsity of θ. The reason we add an extra ℓ2 regularizer to (1.1) is to ensure the strong convexity of the objective function without making any assumption on the data.

In order to achieve differential privacy for sparse learning, a line of research (Kifer et al., 2012; Thakurta and Smith, 2013; Jain and Thakurta, 2014; Talwar et al., 2015; Wang and Gu, 2019) studied differentially private learning problems in the high-dimensional setting, where the problem dimension can be larger than the number of observations. For example, Jain and Thakurta (2014) provided a differentially private algorithm with a dimension-independent utility guarantee. However, their approach only considers the case where the underlying parameter lies in a simplex. For sparse linear regression, Kifer et al. (2012); Thakurta and Smith (2013) proposed a two-stage approach to ensure differential privacy. In detail, they first estimate the support set of the sparse model parameter vector using some differentially private model selection algorithm, and then estimate the parameter vector with its support restricted to the estimated subset using the objective perturbation approach (Chaudhuri and Monteleoni, 2009). Nevertheless, the support selection algorithm, such as the exponential mechanism, is computationally inefficient or even intractable in practice. Talwar et al. (2015) proposed a differentially private algorithm for sparse linear regression by combining the Frank-Wolfe method (Frank and Wolfe, 1956) and the exponential mechanism. Although their utility guarantee is worse than those of Kifer et al. (2012); Wang and Gu (2019), it does not depend on the restricted strong convexity (RSC) and smoothness (RSS) conditions (Negahban et al., 2009). Recently, Wang and Gu (2019) developed a differentially private iterative gradient hard thresholding (IGHT) (Jain et al., 2014; Yuan et al., 2014) based framework for sparse learning problems by injecting Gaussian noise into the intermediate gradients.
However, all the aforementioned methods either have unsatisfactory utility guarantees or are computationally inefficient. For example, the utility guarantees provided by Kifer et al. (2012); Thakurta and Smith (2013); Wang and Gu (2019) depend on the ℓ2-norm bound of the input vector, which can be on the order of O(√d) and grows as d increases in the worst case. While the utility guarantee of the algorithm proposed by Talwar et al. (2015) only depends on the ℓ∞-norm bound of the input vector, it has a worse utility guarantee and its convergence rate is sub-linear. Therefore, a natural question is whether we can achieve the best of both worlds: a strong utility guarantee and high computational efficiency. To this end, we propose to make use of the idea of knowledge distillation (Bucilu et al., 2006; Hinton et al., 2015), which is a knowledge transfer technique originally introduced as a means of model compression. The original motivation of knowledge distillation is to use a large and complex "teacher" model to train a small "student" model, while maintaining its accuracy.

[Figure 1 diagram omitted: flowchart connecting the private data, the "teacher" model, the auxiliary features with their privacy-preserving predictions, and the "student" model.]
Figure 1: Illustration of the proposed framework: (1) a "teacher" estimator is trained using the private dataset; (2) a new privacy-preserving dataset is generated using the auxiliary features and their private predictions output by the "teacher" estimator; (3) a differentially private "student" estimator is trained using the newly generated dataset.

For the differentially private sparse learning problem, a similar idea can be applied here: we can use a non-private "teacher" model to train a differentially private "student" model, while preserving the sparse information of the "teacher" model. We notice that several knowledge transfer approaches have been recently investigated in the differentially private classification problem (Hamm et al., 2016; Papernot et al., 2016; Bassily et al., 2018; Yoon et al., 2018). Nevertheless, the application of knowledge distillation to the generic differentially private high-dimensional sparse learning problem is new and has never been studied before.

In this paper, we propose a knowledge transfer framework for solving the high-dimensional sparse learning problem on a private dataset, which is illustrated in Figure 1. Our proposed algorithm is not only very efficient but also has improved utility guarantees compared with the state-of-the-art methods. More specifically, we first train a non-private "teacher" model using IGHT on the private dataset. Based on this "teacher" model, we then construct a privacy-preserving dataset using some auxiliary inputs, which are drawn from some given distributions or public datasets. Finally, by training a "student" model using IGHT again on the newly generated dataset, we can obtain a differentially private sparse estimator. Table 1 summarizes the detailed comparisons of different methods for sparse linear regression, and we summarize the contributions of our work as follows:

• Our proposed differentially private framework can be applied to any smooth loss function, which covers a broad family of sparse learning problems.
In particular, we showcase the application of our framework to sparse linear regression and sparse logistic regression.

• We prove a better utility guarantee and establish a linear convergence rate for our proposed method. For example, for sparse linear regression, our method achieves an O(K^2 s*^2 √(log d)/(nϵ)) utility guarantee, where K is the ℓ∞-norm bound of the input vectors and ϵ is the privacy budget. Compared with the best known utility bound O(K̃^2 s*^2 log d/(n^2 ϵ^2)) (Kifer et al., 2012; Wang and Gu, 2019), where K̃ is the ℓ2-norm bound of the input vectors, our utility guarantee is better by a factor of O(K̃^2 √(log d)/(K^2 nϵ)). Considering that K̃ can be √d times larger than K, the improvement factor can be as large as O(d √(log d)/(nϵ)). A similar improvement is achieved for sparse logistic regression.

• With the extra sparse eigenvalue condition (Bickel et al., 2009) on the private data, our method can achieve an O(K^2 s*^3 log d/(n^2 ϵ^2)) utility guarantee for sparse linear regression. It is better than the best known result (Kifer et al., 2012; Wang and Gu, 2019) of O(K̃^2 s*^2 log d/(n^2 ϵ^2)) by a factor of O(K̃^2/(K^2 s*)), which can be as large as O(d/s*). A similar improvement is also achieved for sparse logistic regression.

Table 1: Comparison of different algorithms for sparse linear regression in the setting of (ϵ, δ)-DP. We report the utility bound achieved by the privacy-preserving mechanisms and ignore the log(1/δ) term. Note that nϵ ≫ 1, x_i denotes the i-th input vector, and υ is the probability that the support selection procedure successfully recovers the true support.

| Algorithm | Data Assumption | Utility | Convergence Rate | Utility Assumption |
|---|---|---|---|---|
| Frank-Wolfe (Talwar et al., 2015) | max_{i∈[n]} ‖x_i‖_∞ ≤ 1 | O(log(nd)/(nϵ)^{2/3}) | Sub-linear | No |
| Two Stage (Kifer et al., 2012) | max_{i∈[n]} ‖x_i‖_2 ≤ K̃ | O(K̃^2 s*^2 log(2/υ)/(nϵ)^2) | NA | RSC/RSS |
| DP-IGHT (Wang and Gu, 2019) | max_{i∈[n]} ‖x_i‖_2 ≤ K̃ | O(K̃^2 s*^2 log d/(nϵ)^2) | Linear | RSC/RSS |
| DPSL-KT (λ > 0) | max_{i∈[n]} ‖x_i‖_∞ ≤ K | O(K^2 s*^2 √(log d)/(nϵ)) | Linear | No |
| DPSL-KT (λ = 0) | max_{i∈[n]} ‖x_i‖_∞ ≤ K | O(K^2 s*^3 log d/(nϵ)^2) | Linear | RSC/RSS |

Notation. For a d-dimensional vector x = [x_1, ..., x_d]^⊤, we use ‖x‖_2 = (Σ_{i=1}^d |x_i|^2)^{1/2} to denote its ℓ2-norm, and ‖x‖_∞ = max_i |x_i| to denote its ℓ∞-norm. We let supp(x) be the index set of nonzero entries of x, and supp(x, s) be the index set of the top s entries of x in terms of magnitude. We use S^n to denote the input space with n examples, and R, R′ to denote the output spaces. Given two sequences {a_n}, {b_n}, if there exists a constant 0 < C < ∞ such that a_n ≤ C b_n, we write a_n = O(b_n), and we use Õ(·) to hide logarithmic factors. We use I_d ∈ R^{d×d} to denote the identity matrix. Throughout the paper, we use ℓ_i(·) as shorthand for ℓ(·; x_i, y_i), and θ_min to denote the minimizer of problem (1.1).
1.1 Additional Related Work

To further enhance the privacy guarantee for training data, a fresh line of research has emerged (Hamm et al., 2016; Papernot et al., 2016; Bassily et al., 2018; Yoon et al., 2018) that studies knowledge transfer techniques for the differentially private classification problem. More specifically, these methods propose to first train an ensemble of "teacher" models on disjoint subsets of the private dataset, and then train a "student" model based on the private aggregation of the ensemble. However, their approaches only work for the classification task and cannot be directly applied to general sparse learning problems. Moreover, their sub-sample-and-aggregate framework may not be suitable for the high-dimensional sparse learning problem, since each "teacher" model is trained on a subset of the private dataset, which makes the "large d, small n" scenario even worse. In contrast to their sub-sample-and-aggregate based knowledge transfer approach, we propose to use the distillation based method (Bucilu et al., 2006; Hinton et al., 2015), which is more suitable for the high-dimensional sparse learning problem.

2 Preliminaries

In this section, we introduce some background and preliminaries on optimization and differential privacy. We first lay out the formal definitions of strongly convex and smooth functions.

Definition 2.1. A function f : R^d → R is λ-strongly convex if, for any θ_1, θ_2 ∈ R^d, f(θ_1) − f(θ_2) − ⟨∇f(θ_2), θ_1 − θ_2⟩ ≥ (λ/2)‖θ_1 − θ_2‖_2^2.

Definition 2.2. A function f : R^d → R is β̄-smooth if, for any θ_1, θ_2 ∈ R^d, f(θ_1) − f(θ_2) − ⟨∇f(θ_2), θ_1 − θ_2⟩ ≤ (β̄/2)‖θ_1 − θ_2‖_2^2.

Next we present the definition of a sub-Gaussian distribution (Vershynin, 2010).

Definition 2.3. We say X ∈ R^d is a sub-Gaussian random vector with parameter α > 0 if (E|u^⊤X|^p)^{1/p} ≤ α√p for all p ≥ 1 and every unit vector u with ‖u‖_2 = 1.

We also provide the definition of differential privacy.

Definition 2.4 ((Dwork et al., 2006)). A randomized mechanism M : S^n → R satisfies (ϵ, δ)-differential privacy if for any two adjacent datasets S, S′ ∈ S^n differing by one example, and any output subset O ⊆ R, it holds that P[M(S) ∈ O] ≤ e^ϵ · P[M(S′) ∈ O] + δ, where δ ∈ [0, 1).

Now we introduce the Gaussian mechanism (Dwork et al., 2014) to achieve (ϵ, δ)-DP. We start with the definition of ℓ2-sensitivity, which is used to control the variance of the noise in the Gaussian mechanism.

Definition 2.5 ((Dwork et al., 2014)). For two adjacent datasets S, S′ ∈ S^n differing by one example, the ℓ2-sensitivity Δ_2(q) of a function q : S^n → R^d is defined as Δ_2(q) = sup_{S,S′} ‖q(S) − q(S′)‖_2.

Given the ℓ2-sensitivity, we can ensure differential privacy using the Gaussian mechanism.

Lemma 2.6 ((Dwork et al., 2014)). Given a function q : S^n → R^d, the Gaussian mechanism M = q(S) + u, where u ∼ N(0, σ^2 I_d), satisfies (ϵ, δ)-DP for some δ > 0 if σ = √(2 log(1.25/δ)) · Δ_2(q)/ϵ.
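As a quick illustration of Lemma 2.6, the following sketch privatizes a vector-valued query by adding Gaussian noise calibrated to its ℓ2-sensitivity; the function names and the example query are our own illustrative choices, not part of the paper.

```python
import numpy as np

def gaussian_mechanism(query_value, l2_sensitivity, eps, delta, rng):
    """Release query_value with (eps, delta)-DP via the Gaussian mechanism of
    Lemma 2.6: sigma = sqrt(2 * log(1.25 / delta)) * Delta_2(q) / eps."""
    sigma = np.sqrt(2.0 * np.log(1.25 / delta)) * l2_sensitivity / eps
    return query_value + rng.normal(0.0, sigma, size=np.shape(query_value))

# Example: privatize the mean of n vectors whose l2-norms are at most 1.
# Replacing one example changes the mean by at most 2/n in l2-norm.
rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 20))
X /= np.maximum(1.0, np.linalg.norm(X, axis=1, keepdims=True))  # enforce the norm bound
private_mean = gaussian_mechanism(X.mean(axis=0), l2_sensitivity=2.0 / len(X),
                                  eps=0.5, delta=1e-5, rng=rng)
```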
The next lemma illustrates that (ϵ, δ)-DP has the post-processing property, i.e., the composition of a data-independent mapping f with an (ϵ, δ)-DP mechanism M also satisfies (ϵ, δ)-DP.

Lemma 2.7 ((Dwork et al., 2014)). Consider a randomized mechanism M : S^n → R that is (ϵ, δ)-DP. Let f : R → R′ be an arbitrary randomized mapping. Then f(M) : S^n → R′ is (ϵ, δ)-DP.

3 The Proposed Algorithm

In this section, we present our differentially private sparse learning framework, which is illustrated in Algorithm 1. Note that Algorithm 1 calls the IGHT algorithm (Yuan et al., 2014; Jain et al., 2014), shown in Algorithm 2. IGHT enjoys a linear convergence rate and is widely used for sparse learning. Note that for the sparsity constraint, i.e., ‖θ‖_0 ≤ s, the hard thresholding operator H_s(θ) is defined as follows: [H_s(θ)]_i = θ_i if i ∈ supp(θ, s) and [H_s(θ)]_i = 0 otherwise, for i ∈ [d]. It preserves the largest s entries of θ in magnitude. Equipped with IGHT, our framework also has a linear convergence rate for solving high-dimensional sparsity-constrained problems.

Algorithm 1 Differentially Private Sparse Learning via Knowledge Transfer (DPSL-KT)
Input: loss function L̄_S, distribution D̃, IGHT parameters s, η_1, η_2, T_1, T_2, function f, θ_0, σ
1: θ̂ = IGHT(θ_0, L̄_S, s, η_1, T_1)
2: Generate the training set S_p = {(x̃_i, y_i^p)}_{i=1}^m, where y_i^p = ⟨θ̂, x̃_i⟩ + ξ_i, x̃_i ∼ D̃, ξ_i ∼ N(0, σ^2)
3: Construct the new task L̃(θ) = (1/2m) Σ_{i=1}^m (y_i^p − ⟨θ, x̃_i⟩)^2
4: θ_p = IGHT(θ_0, L̃, s, η_2, T_2)
Output: θ_p

Algorithm 2 Iterative Gradient Hard Thresholding (IGHT)
Input: loss function L_S, parameters s, η, T, θ_0
1: for t = 1, 2, 3, . . . , T do
2:   θ_t = H_s(θ_{t−1} − η ∇L_S(θ_{t−1}))
3: end for
Output: θ_T

There are two key ingredients in our framework: (1) an efficient problem solver, i.e., the iterative gradient hard thresholding (IGHT) algorithm (Yuan et al., 2014; Jain et al., 2014), and (2) the knowledge transfer procedure. In detail, we first solve the optimization problem (1.1) using IGHT, as demonstrated in Algorithm 2, to get a non-private "teacher" estimator θ̂. The next step is the knowledge transfer procedure: we draw synthetic features {x̃_i}_{i=1}^m from a given distribution D̃, and output the corresponding privacy-preserving responses {y_i^p}_{i=1}^m using the Gaussian mechanism: y_i^p = ⟨θ̂, x̃_i⟩ + ξ_i, where ξ_i is the Gaussian noise that protects the private information contained in θ̂. Finally, by solving a new sparsity-constrained learning problem L̃ using the privacy-preserving synthetic dataset S_p = {(x̃_i, y_i^p)}_{i=1}^m, we can get a differentially private "student" estimator θ_p. Our proposed knowledge transfer framework can achieve both strong privacy and utility guarantees. Intuitively speaking, the newly constructed learning problem reduces the utilization of the privacy budget, since we only require the generated responses to preserve the privacy of the original training samples, which in turn leads to a strong privacy guarantee. In addition, this new learning problem contains the knowledge of the "teacher" estimator, which preserves the sparsity information of the underlying parameter (see the sketch below for a concrete instance).
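To make Algorithms 1 and 2 concrete, here is a minimal NumPy sketch specialized to the least-squares case; the uniform synthetic distribution, the zero initialization, and the function names are illustrative assumptions, and σ must be calibrated as in Theorem 4.3 below for the output to be (ϵ, δ)-DP.

```python
import numpy as np

def hard_threshold(theta, s):
    """H_s: keep the s largest-magnitude entries of theta, zero out the rest."""
    out = np.zeros_like(theta)
    top = np.argsort(np.abs(theta))[-s:]
    out[top] = theta[top]
    return out

def ight(theta0, grad, s, eta, T):
    """Algorithm 2: iterative gradient hard thresholding."""
    theta = theta0
    for _ in range(T):
        theta = hard_threshold(theta - eta * grad(theta), s)
    return theta

def dpsl_kt(X, y, lam, s, eta1, eta2, T1, T2, m, sigma, rng):
    """Algorithm 1 specialized to least-squares losses (illustrative sketch)."""
    n, d = X.shape
    # Step 1: non-private "teacher" via IGHT on the ridge-penalized private loss.
    grad_priv = lambda th: X.T @ (X @ th - y) / n + lam * th
    teacher = ight(np.zeros(d), grad_priv, s, eta1, T1)
    # Step 2: synthetic features and privacy-preserving responses (Gaussian mechanism).
    X_syn = rng.uniform(-1.0, 1.0, size=(m, d))
    y_syn = X_syn @ teacher + rng.normal(0.0, sigma, size=m)
    # Steps 3-4: differentially private "student" via IGHT on the synthetic task.
    grad_syn = lambda th: X_syn.T @ (X_syn @ th - y_syn) / m
    return ight(np.zeros(d), grad_syn, s, eta2, T2)
```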
As a result of this knowledge transfer, the "student" estimator can also have a strong utility guarantee.

4 Main Results

In this section, we present the privacy and utility guarantees for Algorithm 1. We start with two conditions, which will be used in the results for generic models. Later, when we apply our results to specific models, these conditions will be verified explicitly. The first condition concerns the upper bound on the gradient of the function L_S, which will be used to characterize the statistical error of generic sparse models.

Condition 4.1. For a given sample size n and tolerance parameter ζ ∈ (0, 1), let ε(n, ζ) be the smallest scalar such that, with probability at least 1 − ζ, we have ‖∇L_S(θ*)‖_∞ ≤ ε(n, ζ).

To derive the utility guarantee, we also need the sparse eigenvalue condition (Zhang, 2010) on the function L_S, which directly implies the restricted strong convexity and smoothness properties (Negahban et al., 2009; Loh and Wainwright, 2013) of L_S.

Condition 4.2. The empirical loss L_S on the training data satisfies the sparse eigenvalue condition if, for all θ, there exist positive numbers µ and β such that

    µ = inf_v { v^⊤ ∇^2 L_S(θ) v | ‖v‖_0 ≤ s, ‖v‖_2 = 1 },
    β = sup_v { v^⊤ ∇^2 L_S(θ) v | ‖v‖_0 ≤ s, ‖v‖_2 = 1 }.

4.1 Results for Generic Models

We first present the privacy guarantee of Algorithm 1 in the setting of (ϵ, δ)-DP.

Theorem 4.3. Suppose the loss function on each training example satisfies ‖∇ℓ_i(θ_min)‖_∞ ≤ γ, D̃ is a sub-Gaussian distribution with parameter α̃ whose covariance matrix satisfies ‖Σ̃‖_2 ≤ β̃, and m ≥ C_1 α̃ s log d for some absolute constant C_1. Given a privacy budget ϵ and a constant δ ∈ (0, 1), the output θ_p of Algorithm 1 satisfies (ϵ, δ)-DP if σ^2 = 8 m β̃ s γ^2 log(2.5/δ)/(n^2 ϵ^2 λ^2).

Remark 4.4. Theorem 4.3 suggests that, in order to ensure the privacy guarantee, the only condition on the private data is the ℓ∞-norm bound on the gradient of the loss function on each training example. This is in contrast to the ℓ2-norm bound required by many previous works (Kifer et al., 2012; Talwar et al., 2015; Wang and Gu, 2019) for sparse learning problems. We remark that the ℓ∞-norm bound is a milder condition than the ℓ2-norm bound, and it gives a better utility guarantee that depends only on the ℓ∞-norm of the input data vectors instead of their ℓ2-norm.

Next, we provide the linear convergence rate and the utility guarantee of Algorithm 1.

Theorem 4.5. Suppose that the loss function L̄_S is β̄-smooth and L_S satisfies Condition 4.1 with parameter ε(n, ζ). Under the same conditions of Theorem 4.3 on ℓ_i, D̃, and σ^2, there exist constants {C_i}_{i=1}^8 such that if n = m ≥ C_1 α̃ s log d, s ≥ C_2 κ^2 s* with κ = β̄/λ, and the step sizes are η_1 = C_3 λ/β̄^2 and η_2 = C_4/β̃, then θ_p converges to θ* at a linear rate.
In addition, if we choose λ^2 = C_5 γ √(s* log d log(1/δ))/(nϵ), then for large enough T_1, T_2, with probability at least 1 − ζ − C_6/d, the output θ_p of Algorithm 1 satisfies

    ‖θ_p − θ*‖_2^2 ≤ C_7 (s*/β̄^2) ε(n, ζ)^2 + C_8 (1/β̄^2 + α̃^2/β̃) · γ √(s*^3 log d log(1/δ))/(nϵ).

Remark 4.6. The utility bound of our method consists of two terms: the first term denotes the statistical error of generic sparse models, while the second corresponds to the error introduced by the Gaussian mechanism and is the dominating term. Therefore, the utility bound is of order O(γ √(s*^3 log d log(1/δ))/(nϵ)), which depends on the true sparsity s* rather than the problem dimension d, and is therefore desirable for sparse learning.

The following corollary shows that if L_S further satisfies Condition 4.2, our method can achieve an improved utility guarantee.

Corollary 4.7. Suppose that L_S satisfies Condition 4.2 with parameters µ, β. Under the same conditions of Theorem 4.5 on L_S, ℓ_i, and D̃, the output θ_p of Algorithm 1 satisfies (ϵ, δ)-DP if we set λ = 0 and σ^2 = 8 m β̃ s γ^2 log(2.5/δ)/(n^2 ϵ^2 µ^2). In addition, there exist constants {C_i}_{i=1}^7 such that if n = m ≥ C_1 α̃ s log d, s ≥ C_2 κ^2 s* with κ = β/µ, and the step sizes are η_1 = C_3 µ/β^2 and η_2 = C_4/β̃, then for large enough T_1, T_2, with probability at least 1 − ζ − C_5/d, the output θ_p of Algorithm 1 satisfies

    ‖θ_p − θ*‖_2^2 ≤ C_6 (s*/β^2) ε(n, ζ)^2 + C_7 α̃^2 γ^2 s*^2 log d log(1/δ)/(β̃ µ^2 n^2 ϵ^2).

Remark 4.8. Corollary 4.7 shows that if the training loss on the private data satisfies the sparse eigenvalue condition, Algorithm 1 can achieve an Õ(γ^2 s*^2/(nϵ)^2) utility guarantee by setting λ = 0 and the variance σ^2 accordingly. This improves the Õ(γ s*^{3/2}/(nϵ)) utility guarantee obtained without the sparse eigenvalue condition in Theorem 4.5 by a factor of Õ(nϵ/(γ√s*)). Note that the sparse eigenvalue condition has been verified for many sparse models (Negahban et al., 2009), including sparse linear regression and sparse logistic regression.

4.2 Results for Specific Models

In this subsection, we demonstrate the results of our framework for specific models. Note that the privacy guarantee has been established in Theorem 4.3, so we only present the utility guarantees.

4.2.1 Sparse linear regression

We consider the following linear regression problem in the high-dimensional regime (Tibshirani, 1996): y = Xθ* + ξ, where y ∈ R^n is the response vector, X ∈ R^{n×d} denotes the design matrix, ξ ∈ R^n is a noise vector, and θ* ∈ R^d with ‖θ*‖_0 ≤ s* is the underlying sparse coefficient vector that we want to recover. In order to estimate the sparse vector θ*, we consider the following sparsity-constrained estimation problem, which has been studied in many previous works (Zhang, 2011; Foucart and Rauhut, 2013; Yuan et al., 2014; Jain et al., 2014; Chen and Gu, 2016):

    min_{θ ∈ R^d} (1/2n)‖Xθ − y‖_2^2 + (λ/2)‖θ‖_2^2   subject to   ‖θ‖_0 ≤ s.   (4.1)
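For reference, the gradient that IGHT uses for (4.1) has the closed form below; it matches the grad_priv step in the earlier dpsl_kt sketch (the function name is our own).

```python
import numpy as np

def grad_objective_4_1(theta, X, y, lam):
    """Gradient of (4.1): d/dtheta [ (1/2n)||X theta - y||_2^2 + (lam/2)||theta||_2^2 ]
    = X^T (X theta - y) / n + lam * theta."""
    return X.T @ (X @ theta - y) / X.shape[0] + lam * theta
```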
The utility guarantee of Algorithm 1 for solving (4.1) is implied by Theorem 4.5. Here we only need to verify Condition 4.1 for the sparse linear regression model. Specifically, we can show that ∇L_S(θ*) = X^⊤ξ/n, and we can prove (see Lemma B.3 in the Appendix) that ‖∇L_S(θ*)‖_∞ ≤ C_1 ν √(log d/n) holds with probability at least 1 − exp(−C_2 n), where C_1, C_2 are absolute constants. Therefore, we can take ζ = exp(−C_2 n) and ε(n, ζ) = C_1 ν √(log d/n). By substituting these quantities into Theorem 4.5, we obtain the following corollary.

Corollary 4.9. Suppose that each row of the design matrix satisfies max_{i∈[n]} ‖x_i‖_∞ ≤ K, and the noise vector satisfies ξ ∼ N(0, ν^2 I_n). Under the same conditions of Theorem 4.5 on D̃, σ^2, η_1, η_2, and s, there exist constants {C_i}_{i=1}^5 such that if m = n ≥ C_1 s log d and λ^2 = C_2 K^2 s* √(log d log(1/δ))/(nϵ), then with probability at least 1 − C_3/d, the output θ_p of Algorithm 1 satisfies

    ‖θ_p − θ*‖_2^2 ≤ C_4 ν^2 K^2 s* log d/n + C_5 α̃^2 K^2 s*^2 √(log d log(1/δ))/(β̃ nϵ).

Remark 4.10. Corollary 4.9 suggests that an O(s* log d/n + K^2 s*^2 √(log d log(1/δ))/(nϵ)) utility guarantee can be achieved by our algorithm. The term O(s* log d/n) denotes the statistical error for sparse vector estimation, which matches the minimax lower bound (Raskutti et al., 2011). The term Õ(K^2 s*^2/(nϵ)), corresponding to the error introduced by the privacy-preserving mechanism, is the dominating term. Compared with the best-known result (Kifer et al., 2012; Wang and Gu, 2019) of Õ(K̃^2 s*^2/(n^2 ϵ^2)), where ‖x_i‖_2 ≤ K̃ for all i ∈ [n], our utility guarantee does not require the sparse eigenvalue condition and is better than their results by a factor of Õ(K̃^2/(K^2 nϵ)). Since K̃ ≤ √d K in the worst case, the improvement factor can be as large as Õ(d/(nϵ)). Compared with the Õ(1/(nϵ)^{2/3}) utility guarantee obtained by Talwar et al. (2015), our method improves their result by a factor of Õ((nϵ)^{1/3}/(K s*)^2), which demonstrates the advantage of our framework.

Next, we present the theoretical guarantees of our method under the extra sparse eigenvalue condition for sparse linear regression.

Corollary 4.11. Suppose that each row x_i of the design matrix satisfies x_i ∼ N(0, Σ) and max_{i∈[n]} ‖x_i‖_∞ ≤ K, and the noise vector satisfies ξ ∼ N(0, ν^2 I_n). For given ϵ, δ, under the same conditions of Corollary 4.7 on D̃, σ^2, λ, η_1, η_2, and s, there exist constants {C_i}_{i=1}^4 such that if m = n ≥ C_1 s log d, the output of Algorithm 1 satisfies (ϵ, δ)-DP. In addition, with probability at least 1 − C_2/d, we have

    ‖θ_p − θ*‖_2^2 ≤ C_3 ν^2 K^2 s* log d/n + C_4 α̃^2 K^2 s*^3 log d log(1/δ)/(β̃ n^2 ϵ^2).

Remark 4.12. According to Corollary 4.11, the output of Algorithm 1 satisfies (ϵ, δ)-DP with the utility guarantee Õ(K^2 s*^3/(n^2 ϵ^2)), which improves the result in Corollary 4.9 by a factor of Õ(nϵ/s*).

4.2.2 Sparse logistic regression

For high-dimensional logistic regression, we assume the label of each example follows an i.i.d.
Bernoulli distribution conditioned on the input vector:

    P(y = 1 | x, θ*) = exp(θ*^⊤x − log(1 + exp(θ*^⊤x))),

where x ∈ R^d is the input vector and θ* ∈ R^d with ‖θ*‖_0 ≤ s* is the sparse parameter vector we would like to estimate. Given observations {(x_i, y_i)}_{i=1}^n, we consider the following maximum likelihood estimation problem with sparsity constraints (Yuan et al., 2014; Chen and Gu, 2016):

    min_{θ ∈ R^d} −(1/n) Σ_{i=1}^n [ y_i θ^⊤x_i − log(1 + exp(θ^⊤x_i)) ] + (λ/2)‖θ‖_2^2   subject to   ‖θ‖_0 ≤ s.   (4.2)

The utility guarantee of Algorithm 1 for solving (4.2) is shown in the following corollary.

Corollary 4.13. Under the same conditions of Corollary 4.9 on x_i, D̃, σ^2, η_1, η_2, and s, there exist constants {C_i}_{i=1}^5 such that if m = n ≥ C_1 s log d and λ^2 = C_2 K √(s* log d log(1/δ))/(nϵ), then with probability at least 1 − C_3/d, the output θ_p of Algorithm 1 satisfies

    ‖θ_p − θ*‖_2^2 ≤ C_4 K^2 s* log d/n + C_5 α̃^2 K √(s*^3 log d log(1/δ))/(β̃ nϵ).

Remark 4.14. Corollary 4.13 suggests that an O(s* log d/n + K √(s*^3 log d log(1/δ))/(nϵ)) utility guarantee can be obtained by our algorithm for sparse logistic regression. The term Õ(K s*^{3/2}/(nϵ)) caused by the Gaussian mechanism is the dominating term and does not depend on the sparse eigenvalue condition; it is better than the best-known result (Wang and Gu, 2019) of Õ(K̃^2 s*^2/(n^2 ϵ^2)) by a factor of Õ(K̃^2 s*^{1/2}/(K nϵ)). The improvement factor can be as large as Õ(dK/(nϵ)), since K̃ ≤ √d K.

If we have the extra sparse eigenvalue condition, our method can achieve an improved utility guarantee for sparse logistic regression, as follows.

Corollary 4.15. Suppose that each row x_i of the design matrix satisfies x_i ∼ N(0, Σ) and max_{i∈[n]} ‖x_i‖_∞ ≤ K. For given ϵ, δ, under the same conditions of Corollary 4.7 on D̃, σ^2, λ, η_1, η_2, and s, there exist constants {C_i}_{i=1}^4 such that if m = n ≥ C_1 s log d, the output of Algorithm 1 satisfies (ϵ, δ)-DP. In addition, with probability at least 1 − C_2/d, we have the following utility guarantee for θ_p:

    ‖θ_p − θ*‖_2^2 ≤ C_3 K^2 s* log d/n + C_4 α̃^2 K^2 s*^2 log d log(1/δ)/(β̃ n^2 ϵ^2).

Remark 4.16. Corollary 4.15 shows that our method can obtain an improved utility guarantee of Õ(K^2 s*^2/(nϵ)^2) for sparse logistic regression under the extra sparse eigenvalue assumption.

5 Numerical Experiments

In this section, we present experimental results for our proposed algorithm on both synthetic and real datasets. For sparse linear regression, we compare our framework with the Two stage (Kifer et al., 2012), Frank-Wolfe (Talwar et al., 2015), and DP-IGHT (Wang and Gu, 2019) algorithms. For sparse logistic regression, we compare our framework with the DP-IGHT (Wang and Gu, 2019) algorithm. For all of our experiments, we choose the parameters of the different methods according to the requirements of their theoretical guarantees. More specifically, in the synthetic data experiments, we assume s* is known for all the methods.
In the real data experiments, s* is unknown, and neither our method nor the competing methods has knowledge of s*, so we simply choose a sufficiently large s as a surrogate for s*. Given s, for the parameter λ in our method, according to Theorem 4.5, we choose λ from a sequence of values c_1 √(s log d log(1/δ))/(nϵ), where c_1 ∈ {10^{-6}, 10^{-5}, . . . , 10^1}, by cross-validation. For the competing methods, given s, we choose the iteration number of Frank-Wolfe from a sequence of values c_2 s, where c_2 ∈ {0.5, 0.6, . . . , 1.5}, and the regularization parameter in the objective function of Two Stage from a sequence of values c_3 s/ϵ, where c_3 ∈ {10^{-3}, 10^{-2}, . . . , 10^2}, by cross-validation. For DP-IGHT, we choose its step size from the grid {1/2^0, 1/2^1, . . . , 1/2^6} by cross-validation. For the non-private baseline, we use the non-private IGHT (Yuan et al., 2014).

5.1 Numerical Simulations

In this subsection, we investigate our framework on synthetic datasets for sparse linear and logistic regression. In both problems, we generate the design matrix X ∈ R^{n×d} such that each entry is drawn i.i.d. from a uniform distribution U(−1, 1), and the underlying sparse vector θ* has s* nonzero entries that are randomly generated. In addition, we consider the following two settings: (i) n = 800, d = 1000, s* = 10; (ii) n = 4000, d = 5000, s* = 50. We choose D̃ to be the uniform distribution U(−1, 1), which implies β̃ = 1/3.

[Figure 2 plots omitted: four panels of reconstruction error versus privacy budget.]
Figure 2: Numerical results for sparse linear and logistic regression. (a), (b) Reconstruction error versus privacy budget for sparse linear regression; (c), (d) reconstruction error versus privacy budget for sparse logistic regression.

Sparse linear regression. For sparse linear regression, the observations are generated according to the linear regression model y = Xθ* + ξ, where the noise vector is ξ ∼ N(0, ν^2 I) with ν^2 = 0.1. In our experiments, we set δ = 0.01 and vary the privacy budget ϵ from 0.8 to 5. Note that, due to the hardness of the problem itself, we choose relatively large privacy budgets compared with the low-dimensional problem to ensure meaningful results. Figures 2(a) and 2(b) illustrate the estimation error ‖θ̂ − θ*‖_2/‖θ*‖_2 of the different methods averaged over 10 trials. The results show that the estimation error of our method is close to the non-private baseline, and is significantly better than the other private baselines.
Table 2: Comparison of different algorithms for various privacy budgets ϵ with δ = 10^{-5}, in terms of MSE (mean ± std) on E2006-TFIDF.

| Method | ϵ = 0.8 | ϵ = 1.5 | ϵ = 2.5 | ϵ = 3.5 | ϵ = 4.5 |
|---|---|---|---|---|---|
| IGHT | 0.8541 | 0.8541 | 0.8541 | 0.8541 | 0.8541 |
| Frank-Wolfe | 4.471 (0.239) | 2.004 (0.155) | 1.535 (0.140) | 1.206 (0.095) | 1.099 (0.082) |
| Two stage | 4.022 (0.159) | 1.803 (0.141) | 1.326 (0.093) | 1.107 (0.103) | 1.053 (0.069) |
| DP-IGHT | 3.731 (0.207) | 1.687 (0.126) | 1.304 (0.035) | 1.067 (0.051) | 0.968 (0.062) |
| DPSL-KT | 1.227 (0.110) | 1.178 (0.056) | 1.065 (0.054) | 0.971 (0.031) | 0.952 (0.010) |

Table 3: Comparison of different algorithms for various privacy budgets ϵ with δ = 10^{-5}, in terms of test error (mean ± std) on RCV1.

| Method | ϵ = 2 | ϵ = 4 | ϵ = 6 | ϵ = 8 |
|---|---|---|---|---|
| IGHT | 0.0645 | 0.0645 | 0.0645 | 0.0645 |
| Frank-Wolfe | 0.1381 (0.0045) | 0.1134 (0.0041) | 0.0978 (0.0032) | 0.0882 (0.0033) |
| Two stage | 0.1272 (0.0044) | 0.1061 (0.0038) | 0.0949 (0.0035) | 0.0866 (0.0031) |
| DP-IGHT | 0.1179 (0.0035) | 0.1026 (0.0036) | 0.0922 (0.0032) | 0.0824 (0.0029) |
| DPSL-KT | 0.1105 (0.0038) | 0.0974 (0.0035) | 0.0885 (0.0029) | 0.0787 (0.0031) |

Even when we have a small privacy budget (i.e., ϵ = 0.8), our method can still recover the underlying sparse vector with a reasonably small estimation error, while the others fail.

Sparse logistic regression. For sparse logistic regression, each label is generated from the logistic distribution P(y = 1) = 1/(1 + exp(x_i^⊤θ*)). In this problem, we vary the privacy budget ϵ from 2 to 10 and set δ = 0.01. We present the estimation error versus the privacy budget ϵ for the different methods in Figures 2(c) and 2(d). The results show that our method can output accurate estimators when we have a relatively large privacy budget, and it consistently outperforms the private baseline.

5.2 Real Data Experiments

For the real data experiments, we use the E2006-TFIDF dataset (Kogan et al., 2009) and the RCV1 dataset (Lewis et al., 2004) for the evaluation of sparse linear regression and sparse logistic regression, respectively.

E2006-TFIDF data. For the sparse linear regression problem, we use the E2006-TFIDF dataset, which consists of financial risk data from thousands of U.S. companies. In detail, it contains 16087 training examples and 3308 testing examples, and we randomly sample 25000 features for this experiment. In order to validate our proposed framework, we randomly divide the original dataset into two datasets: a private dataset and a public dataset. The private dataset contains 8044 training examples, and we assume that this dataset contains the sensitive information that we want to protect. The public dataset contains 8043 training examples. We set s = 2000, δ = 10^{-5}, and ϵ ∈ [0.8, 5]. We estimate β̃ by the sample covariance matrix. Table 2 reports the mean square error (MSE) on the test data for the different methods and various privacy budgets over 10 trials. The results show that the performance of our algorithm is close to the non-private baseline even for small privacy budgets, and is much better than that of existing methods.

RCV1 data. For sparse logistic regression, we use the Reuters Corpus Volume I (RCV1) dataset for
In addition, we randomly choose 10000 test examples and 20000 features, and set s = 500, \u03b4 = 10\u22125, \u03f5 \u2208[2, 8]. We estimate e \u03b2 by the sample covariance matrix. We compare all algorithms in terms of their classi\ufb01cation error on the test set over 10 replications, which is summarized in Table 3. Evidently our algorithm achieves the lowest test error among all private algorithms on RCV1 dataset, which demonstrates the superiority of our algorithm. 6" + }, + { + "url": "http://arxiv.org/abs/1909.01150v3", + "title": "Neural Policy Gradient Methods: Global Optimality and Rates of Convergence", + "abstract": "Policy gradient methods with actor-critic schemes demonstrate tremendous\nempirical successes, especially when the actors and critics are parameterized\nby neural networks. However, it remains less clear whether such \"neural\" policy\ngradient methods converge to globally optimal policies and whether they even\nconverge at all. We answer both the questions affirmatively in the\noverparameterized regime. In detail, we prove that neural natural policy\ngradient converges to a globally optimal policy at a sublinear rate. Also, we\nshow that neural vanilla policy gradient converges sublinearly to a stationary\npoint. Meanwhile, by relating the suboptimality of the stationary points to the\nrepresentation power of neural actor and critic classes, we prove the global\noptimality of all stationary points under mild regularity conditions.\nParticularly, we show that a key to the global optimality and convergence is\nthe \"compatibility\" between the actor and critic, which is ensured by sharing\nneural architectures and random initializations across the actor and critic. To\nthe best of our knowledge, our analysis establishes the first global optimality\nand convergence guarantees for neural policy gradient methods.", + "authors": "Lingxiao Wang, Qi Cai, Zhuoran Yang, Zhaoran Wang", + "published": "2019-08-29", + "updated": "2019-11-12", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "math.OC", + "stat.ML" + ], + "main_content": "Introduction In reinforcement learning (Sutton and Barto, 2018), an agent aims to maximize its expected total reward by taking a sequence of actions according to a policy in a stochastic environment, which is modeled as a Markov decision process (MDP) (Puterman, 2014). To obtain the optimal policy, policy gradient methods (Williams, 1992; Baxter and Bartlett, 2000; Sutton et al., 2000) directly maximize the expected total reward via gradient-based optimization. As policy gradient methods are easily implementable and readily integrable with advanced optimization techniques such as variance reduction (Johnson and Zhang, 2013; Papini et al., 2018) and distributed optimization (Mnih et al., 2016; Espeholt et al., 2018), they enjoy wide popularity among practitioners. In particular, when the policy (actor) and action-value function (critic) are parameterized by neural networks, policy gradient methods achieve signi\ufb01cant empirical successes in challenging applications, such as playing Go (Silver et al., 2016, 2017), real-time strategy gaming (Vinyals et al., 2019), robot manipulation (Peters and Schaal, 2006; Duan et al., 2016), and natural language processing (Wang et al., 2018). See Li (2017) for a detailed survey. In stark contrast to the tremendous empirical successes, policy gradient methods remain much less well understood in terms of theory, especially when they involve neural networks. 
More specifically, most existing work analyzes the REINFORCE algorithm (Williams, 1992; Sutton et al., 2000), which estimates the policy gradient via Monte Carlo sampling. Based on the recent progress in nonconvex optimization, Papini et al. (2018); Shen et al. (2019); Xu et al. (2019a); Karimi et al. (2019); Zhang et al. (2019) establish the rate of convergence of REINFORCE to a first- or second-order stationary point. However, the global optimality of the attained stationary point remains unclear. A more commonly used class of policy gradient methods is equipped with the actor-critic scheme (Konda and Tsitsiklis, 2000), which alternatingly estimates the action-value function in the policy gradient via a policy evaluation step (critic update), and performs a policy improvement step using the estimated policy gradient (actor update). The global optimality and rate of convergence of such a class are even more challenging to analyze than those of REINFORCE. In particular, the policy evaluation step itself may converge to an undesirable stationary point or even diverge (Tsitsiklis and Van Roy, 1997), especially when it involves both a nonlinear action-value function approximator, such as a neural network, and temporal-difference updates (Sutton, 1988). As a result, the estimated policy gradient may be biased, which possibly leads to divergence. Even if the algorithm converges to a stationary point, due to the nonconvexity of the expected total reward with respect to the policy as well as its parameter, the global optimality of such a stationary point remains unclear. The only exception is the linear-quadratic regulator (LQR) setting (Fazel et al., 2018; Malik et al., 2018; Tu and Recht, 2018; Yang et al., 2019a; Bu et al., 2019), which is, however, more restrictive than the general MDP setting that possibly involves neural networks.

To bridge the gap between practice and theory, we analyze neural policy gradient methods equipped with actor-critic schemes, where the actors and critics are represented by overparameterized two-layer neural networks. In detail, we study two settings, where the policy improvement steps are based on vanilla policy gradient and natural policy gradient, respectively. In both settings, the policy evaluation steps are based on the TD(0) algorithm (Sutton, 1988) with independent sampling. In the first setting, we prove that neural vanilla policy gradient converges to a stationary point of the expected total reward at a 1/√T-rate in the expected squared norm of the policy gradient, where T is the number of policy improvement steps. Meanwhile, through a geometric characterization that relates the suboptimality of the stationary points to the representation power of the neural networks parameterizing the actor and critic, we establish the global optimality of all stationary points under mild regularity conditions. In the second setting, through the lens of Kullback-Leibler (KL) divergence regularization, we prove that neural natural policy gradient converges to a globally optimal policy at a 1/√T-rate in the expected total reward. In particular, a key to such global optimality and convergence guarantees is a notion of compatibility between the actor and critic, which connects the accuracy of policy evaluation steps with the efficacy of policy improvement steps.
We show that such a notion of compatibility is ensured by using shared neural architectures and random initializations for both the actor and critic, which is often used as a practical heuristic (Mnih et al., 2016). To the best of our knowledge, our analysis gives the first global optimality and convergence guarantees for neural policy gradient methods, which corroborate their significant empirical successes.

Related Work. In contrast to the huge body of empirical literature on policy gradient methods, theoretical results on their convergence remain relatively scarce. In particular, Sutton et al. (2000) and Kakade (2002) analyze vanilla policy gradient (REINFORCE) and natural policy gradient with compatible action-value function approximators, respectively, which are further extended by Konda and Tsitsiklis (2000); Peters and Schaal (2008); Castro and Meir (2010) to incorporate actor-critic schemes. Most of this line of work only establishes asymptotic convergence based on stochastic approximation techniques (Kushner and Yin, 2003; Borkar, 2009) and requires the actor and critic to be parameterized by linear functions. Another line of work (Papini et al., 2018; Xu et al., 2019a,b; Shen et al., 2019; Karimi et al., 2019; Zhang et al., 2019) builds on the recent progress in nonconvex optimization to establish the nonasymptotic rates of convergence of REINFORCE (Williams, 1992; Baxter and Bartlett, 2000; Sutton et al., 2000) and its variants, but only to first- or second-order stationary points, which, however, lack global optimality guarantees. Moreover, when actor-critic schemes are involved, due to the error of policy evaluation steps and its impact on policy improvement steps, the nonasymptotic rates of convergence of policy gradient methods, even to first- or second-order stationary points, remain rather open.

Compared with the convergence of policy gradient methods, their global optimality is even less explored in terms of theory. Fazel et al. (2018); Malik et al. (2018); Tu and Recht (2018); Yang et al. (2019a); Bu et al. (2019) prove that policy gradient methods converge to globally optimal policies in the LQR setting, which is more restrictive. In very recent work, Bhandari and Russo (2019) establish the global optimality of vanilla policy gradient (REINFORCE) in the general MDP setting. However, they require the policy class to be convex, which restricts its applicability to the tabular and LQR settings. In independent work, Agarwal et al. (2019) prove that vanilla policy gradient and natural policy gradient converge to globally optimal policies at 1/√T-rates in the tabular and linear settings. In the tabular setting, their rate of convergence of vanilla policy gradient depends on the size of the state space. In contrast, we focus on the nonlinear setting with the actor-critic scheme, where the actor and critic are parameterized by neural networks. It is worth mentioning that when such neural networks have linear activation functions, our analysis also covers the linear setting, which is, however, not our focus. In addition, Liu et al. (2019) analyze the proximal policy optimization (PPO) and trust region policy optimization (TRPO) algorithms (Schulman et al., 2015, 2017), where the actors and critics are parameterized by neural networks, and establish their 1/√T-rates of convergence to globally optimal policies.
However, they require solving a subproblem of policy improvement in the functional space using multiple stochastic gradient steps in the parameter space, whereas vanilla policy gradient and natural policy gradient only require a single stochastic (natural) gradient step in the parameter space, which makes the analysis even more challenging.

There is also an emerging body of literature that analyzes the training and generalization error of deep supervised learning with overparameterized neural networks (Daniely, 2017; Jacot et al., 2018; Wu et al., 2018; Allen-Zhu et al., 2018a,b; Du et al., 2018a,b; Zou et al., 2018; Chizat and Bach, 2018; Li and Liang, 2018; Cao and Gu, 2019a,b; Arora et al., 2019; Lee et al., 2019), especially when they are trained using stochastic gradient. See Fan et al. (2019) for a detailed survey. In comparison, our focus is on deep reinforcement learning with policy gradient methods. In particular, the policy evaluation steps are based on the TD(0) algorithm, which uses stochastic semigradients (Sutton, 1988) rather than stochastic gradients. Moreover, the interplay between the actor and critic makes our analysis even more challenging than that of deep supervised learning.

Notation. For a distribution µ on Ω and p > 0, we define ‖f(·)‖_{µ,p} = (∫_Ω |f|^p dµ)^{1/p} as the L_p(µ)-norm of f. We define ‖f(·)‖_{µ,∞} = inf{C ≥ 0 : |f(x)| ≤ C for µ-almost every x} as the L_∞(µ)-norm of f. We write ‖f‖_{µ,p} for notational simplicity when the variable of f is clear from the context. We further denote by ‖·‖_µ the L_2(µ)-norm for notational simplicity. For a vector φ ∈ R^n and p > 0, we denote by ‖φ‖_p the ℓ_p-norm of φ. We denote by x = ([x]_1^⊤, . . . , [x]_m^⊤)^⊤ a vector in R^{md}, where [x]_i ∈ R^d is the i-th block of x for i ∈ [m].

2 Background

In this section, we introduce the background of reinforcement learning and policy gradient methods.

Reinforcement Learning. A discounted Markov decision process (MDP) is defined by the tuple (S, A, P, ζ, r, γ). Here S and A are the state and action spaces, respectively. Meanwhile, P is the Markov transition kernel and r is the reward function, which is possibly stochastic. Specifically, when taking action a ∈ A at state s ∈ S, the agent receives reward r(s, a) and the environment transitions into a new state according to the transition probability P(· | s, a). Meanwhile, ζ is the distribution of the initial state S_0 ∈ S and γ ∈ (0, 1) is the discount factor. In addition, the policy π(a | s) gives the probability of taking action a at state s. We denote the state- and action-value functions associated with π by V^π : S → R and Q^π : S × A → R, which are defined respectively as

    V^π(s) = (1 − γ) · E[ Σ_{t=0}^∞ γ^t · r(S_t, A_t) | S_0 = s ],   ∀s ∈ S,   (2.1)
    Q^π(s, a) = (1 − γ) · E[ Σ_{t=0}^∞ γ^t · r(S_t, A_t) | S_0 = s, A_0 = a ],   ∀(s, a) ∈ S × A,   (2.2)

where A_t ∼ π(· | S_t) and S_{t+1} ∼ P(· | S_t, A_t) for all t ≥ 0. Also, we define the advantage function of policy π as the difference between Q^π and V^π, i.e., A^π(s, a) = Q^π(s, a) − V^π(s) for all (s, a) ∈ S × A.
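To illustrate the normalization in (2.1), the following sketch estimates V^π(s) by averaging (1 − γ)-scaled discounted returns over truncated rollouts; the toy two-state MDP and the step/policy interfaces are our own illustrative assumptions.

```python
import numpy as np

def mc_value_estimate(step, policy, s0, gamma, n_rollouts=200, horizon=100, seed=0):
    """Monte Carlo estimate of (2.1): V^pi(s0) = (1-gamma) E[sum_t gamma^t r(S_t, A_t)]."""
    rng = np.random.default_rng(seed)
    returns = []
    for _ in range(n_rollouts):
        s, total, disc = s0, 0.0, 1.0
        for _ in range(horizon):   # truncate the infinite sum; gamma^horizon is negligible
            a = policy(s, rng)      # sample A_t ~ pi(. | S_t)
            s, r = step(s, a, rng)  # sample S_{t+1} ~ P(. | S_t, A_t), observe reward
            total += disc * r
            disc *= gamma
        returns.append((1.0 - gamma) * total)
    return float(np.mean(returns))

# Toy two-state MDP: action 0 stays (reward 0); action 1 flips the state (reward = state).
step = lambda s, a, rng: ((1 - s, float(s)) if a == 1 else (s, 0.0))
policy = lambda s, rng: rng.integers(2)  # uniform random policy
print(mc_value_estimate(step, policy, s0=1, gamma=0.9))
```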
By the definitions in (2.1) and (2.2), V^π and Q^π are related via V^π(s) = E_π[Q^π(s, a)] = ⟨Q^π(s, ·), π(· | s)⟩, where ⟨·, ·⟩ is the inner product in R^{|A|}. Here we write E_{a∼π(· | s)}[Q^π(s, a)] as E_π[Q^π(s, a)] for notational simplicity. Note that policy π together with the transition kernel P induces a Markov chain over state space S. We denote by ϱ_π the stationary state distribution of the Markov chain induced by π. We further define ς_π(s, a) = π(a | s) · ϱ_π(s) as the stationary state-action distribution for all (s, a) ∈ S × A. Meanwhile, policy π induces a state visitation measure over S and a state-action visitation measure over S × A, which are denoted by ν_π and σ_π, respectively. Specifically, for all (s, a) ∈ S × A, we define

    ν_π(s) = (1 − γ) · Σ_{t=0}^∞ γ^t · P(S_t = s),
    σ_π(s, a) = (1 − γ) · Σ_{t=0}^∞ γ^t · P(S_t = s, A_t = a),   (2.3)

where S_0 ∼ ζ(·), A_t ∼ π(· | S_t), and S_{t+1} ∼ P(· | S_t, A_t) for all t ≥ 0. By definition, we have σ_π(·, ·) = π(· | ·) · ν_π(·). We define the expected total reward function J(π) by

    J(π) = (1 − γ) · E[ Σ_{t=0}^∞ γ^t · r(S_t, A_t) ] = E_ζ[V^π(s)] = E_{σ_π}[r(s, a)],   ∀π,   (2.4)

where we write E_{σ_π}[r(s, a)] = E_{(s,a)∼σ_π(·,·)}[r(s, a)] for notational simplicity. The goal of reinforcement learning is to find the optimal policy that maximizes J(π), which is denoted by π*. When the state space S is large, a popular approach is to find the maximizer of J(π) over a class of parameterized policies {π_θ : θ ∈ B}, where θ ∈ B is the parameter and B is the parameter space. In this case, we obtain the optimization problem max_{θ∈B} J(π_θ).

Policy Gradient Methods. Policy gradient methods maximize J(π_θ) using ∇_θJ(π_θ). These methods are based on the policy gradient theorem (Sutton and Barto, 2018), which states that

    ∇_θJ(π_θ) = E_{σ_{π_θ}}[ Q^{π_θ}(s, a) · ∇_θ log π_θ(a | s) ],   (2.5)

where σ_{π_θ} is the state-action visitation measure defined in (2.3). Based on (2.5), (vanilla) policy gradient maximizes the expected total reward via gradient ascent. Specifically, we generate a sequence of policy parameters {θ_i}_{i≥1} via

    θ_{i+1} ← θ_i + η · ∇_θJ(π_{θ_i}),   (2.6)

where η > 0 is the learning rate. Meanwhile, natural policy gradient (Kakade, 2002) utilizes natural gradient ascent (Amari, 1998), which is invariant to the parameterization of policies. Specifically, let F(θ) be the Fisher information matrix corresponding to policy π_θ, which is given by

    F(θ) = E_{σ_{π_θ}}[ ∇_θ log π_θ(a | s) (∇_θ log π_θ(a | s))^⊤ ].   (2.7)
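Given samples (s, a) ∼ σ_{π_θ} together with action-value estimates, (2.5) and (2.7) can be estimated by Monte Carlo averages, as in the hedged sketch below; the ridge term added before inverting the Fisher estimate is an illustrative stabilization choice, not part of the method, and the second function applies the Fisher-preconditioned step formalized next.

```python
import numpy as np

def pg_and_fisher_estimates(score_grads, q_values):
    """Monte Carlo estimates of (2.5) and (2.7) from N sampled pairs (s, a):
    score_grads[i] = grad_theta log pi_theta(a_i | s_i), q_values[i] ~ Q^pi(s_i, a_i)."""
    g = np.mean(q_values[:, None] * score_grads, axis=0)   # estimate of (2.5)
    F = score_grads.T @ score_grads / len(score_grads)     # estimate of (2.7)
    return g, F

def natural_pg_step(theta, score_grads, q_values, eta, ridge=1e-3):
    """One Fisher-preconditioned ascent step: theta <- theta + eta * F^{-1} g,
    with a small ridge term added for numerical stability (our own choice)."""
    g, F = pg_and_fisher_estimates(score_grads, q_values)
    return theta + eta * np.linalg.solve(F + ridge * np.eye(len(theta)), g)
```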
At each iteration, natural policy gradient performs
$$\theta_{i+1} \leftarrow \theta_i + \eta \cdot \bigl(F(\theta_i)\bigr)^{-1} \cdot \nabla_\theta J(\pi_{\theta_i}), \tag{2.8}$$
where $(F(\theta_i))^{-1}$ is the inverse of $F(\theta_i)$ and $\eta$ is the learning rate. In practice, both $Q^{\pi_\theta}$ in (2.5) and $F(\theta)$ in (2.7) remain to be estimated, which yields approximations of the policy improvement steps in (2.6) and (2.8).

3 Neural Policy Gradient Methods

In this section, we represent $\pi_\theta$ by a two-layer neural network and study neural policy gradient methods, which estimate the policy gradient and natural policy gradient using the actor-critic scheme (Konda and Tsitsiklis, 2000).

3.1 Overparameterized Neural Policy

We now introduce the parameterization of policies. For notational simplicity, we assume that $\mathcal{S} \times \mathcal{A} \subseteq \mathbb{R}^d$ with $d \geq 2$. Without loss of generality, we further assume that $\|(s, a)\|_2 = 1$ for all $(s, a) \in \mathcal{S} \times \mathcal{A}$. A two-layer neural network $f((s, a); W, b)$ with input $(s, a)$ and width $m$ takes the form of
$$f\bigl((s, a); W, b\bigr) = \frac{1}{\sqrt{m}} \sum_{r=1}^m b_r \cdot \mathrm{ReLU}\bigl((s, a)^\top [W]_r\bigr), \quad \forall (s, a) \in \mathcal{S} \times \mathcal{A}. \tag{3.1}$$
Here $\mathrm{ReLU} : \mathbb{R} \to \mathbb{R}$ is the rectified linear unit (ReLU) activation function, which is defined as $\mathrm{ReLU}(u) = \mathbb{1}\{u > 0\} \cdot u$. Also, $\{b_r\}_{r \in [m]}$ and $W = ([W]_1^\top, \ldots, [W]_m^\top)^\top \in \mathbb{R}^{md}$ in (3.1) are the parameters. When training the two-layer neural network, we initialize the parameters via $[W_{\mathrm{init}}]_r \sim N(0, I_d/d)$ and $b_r \sim \mathrm{Unif}(\{-1, 1\})$ for all $r \in [m]$. Note that the ReLU activation function satisfies $\mathrm{ReLU}(c \cdot u) = c \cdot \mathrm{ReLU}(u)$ for all $c > 0$ and $u \in \mathbb{R}$. Hence, without loss of generality, we keep $b_r$ fixed at the initial parameter throughout training and only update $W$ in the sequel. See, e.g., Allen-Zhu et al. (2018b) for a detailed argument. For notational simplicity, we write $f((s, a); W, b)$ as $f((s, a); W)$ hereafter. Using the two-layer neural network in (3.1), we define
$$\pi_\theta(a \mid s) = \frac{\exp\bigl[\tau \cdot f\bigl((s, a); \theta\bigr)\bigr]}{\sum_{a' \in \mathcal{A}} \exp\bigl[\tau \cdot f\bigl((s, a'); \theta\bigr)\bigr]}, \quad \forall (s, a) \in \mathcal{S} \times \mathcal{A}, \tag{3.2}$$
where $f((\cdot, \cdot); \theta)$ is defined in (3.1) with $\theta \in \mathbb{R}^{md}$ playing the role of $W$. Note that $\pi_\theta$ defined in (3.2) takes the form of an energy-based policy (Haarnoja et al., 2017). With a slight abuse of terminology, we call $\tau$ the temperature parameter, which corresponds to the inverse temperature, and $f((\cdot, \cdot); \theta)$ the energy function in the sequel. In the sequel, we investigate policy gradient methods for the class of neural policies defined in (3.2). We define the feature mapping $\phi_\theta = ([\phi_\theta]_1^\top, \ldots, [\phi_\theta]_m^\top)^\top : \mathbb{R}^d \to \mathbb{R}^{md}$ of a two-layer neural network $f((\cdot, \cdot); \theta)$ as
$$[\phi_\theta]_r(s, a) = \frac{b_r}{\sqrt{m}} \cdot \mathbb{1}\bigl\{(s, a)^\top [\theta]_r > 0\bigr\} \cdot (s, a), \quad \forall (s, a) \in \mathcal{S} \times \mathcal{A}, \ \forall r \in [m]. \tag{3.3}$$
By (3.1), it holds that $f((\cdot, \cdot); \theta) = \phi_\theta(\cdot, \cdot)^\top \theta$. Meanwhile, $f((\cdot, \cdot); \theta)$ is almost everywhere differentiable with respect to $\theta$, and it holds that $\nabla_\theta f((\cdot, \cdot); \theta) = \phi_\theta(\cdot, \cdot)$.
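As a concrete illustration of (3.1)-(3.3), the following sketch implements the two-layer network with symmetric initialization, the feature mapping, and the energy-based softmax policy. The class layout and the discrete-action interface are assumptions of this sketch, not the paper's code.

```python
import numpy as np

class TwoLayerPolicy:
    """Minimal sketch of the two-layer ReLU network (3.1) and the
    energy-based policy (3.2); names mirror the paper's notation."""

    def __init__(self, m, d, tau=1.0, seed=0):
        rng = np.random.default_rng(seed)
        self.b = rng.choice([-1.0, 1.0], size=m)            # b_r ~ Unif({-1, 1}), kept fixed
        self.W = rng.normal(0.0, 1.0 / np.sqrt(d), (m, d))  # [W_init]_r ~ N(0, I_d/d)
        self.m, self.tau = m, tau

    def f(self, x):
        # x is the unit-norm embedding of (s, a) in R^d
        return (self.b @ np.maximum(self.W @ x, 0.0)) / np.sqrt(self.m)

    def feature(self, x):
        # phi_theta(s, a) in (3.3), flattened to R^{md}
        active = ((self.W @ x) > 0).astype(float)
        return ((self.b * active)[:, None] * x[None, :] / np.sqrt(self.m)).ravel()

    def prob(self, embeddings):
        # softmax over the energies of all actions at a fixed state, as in (3.2)
        e = self.tau * np.array([self.f(x) for x in embeddings])
        e -= e.max()  # numerical stabilization
        p = np.exp(e)
        return p / p.sum()
```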
In the following proposition, we calculate the closed forms of the policy gradient $\nabla_\theta J(\pi_\theta)$ and the Fisher information matrix $F(\theta)$ for $\pi_\theta$ defined in (3.2).

Proposition 3.1 (Policy Gradient and Fisher Information Matrix). For $\pi_\theta$ defined in (3.2), we have
$$\nabla_\theta J(\pi_\theta) = \tau \cdot \mathbb{E}_{\sigma_{\pi_\theta}}\Bigl[Q^{\pi_\theta}(s, a) \cdot \Bigl(\phi_\theta(s, a) - \mathbb{E}_{\pi_\theta}\bigl[\phi_\theta(s, a')\bigr]\Bigr)\Bigr], \tag{3.4}$$
$$F(\theta) = \tau^2 \cdot \mathbb{E}_{\sigma_{\pi_\theta}}\Bigl[\Bigl(\phi_\theta(s, a) - \mathbb{E}_{\pi_\theta}\bigl[\phi_\theta(s, a')\bigr]\Bigr)\Bigl(\phi_\theta(s, a) - \mathbb{E}_{\pi_\theta}\bigl[\phi_\theta(s, a')\bigr]\Bigr)^\top\Bigr], \tag{3.5}$$
where $\phi_\theta(\cdot, \cdot)$ is the feature mapping defined in (3.3), $\tau$ is the temperature parameter, and $\sigma_{\pi_\theta}$ is the state-action visitation measure defined in (2.3). Here we write $\mathbb{E}_{\pi_\theta}[\phi_\theta(s, a')] = \mathbb{E}_{a' \sim \pi_\theta(\cdot \mid s)}[\phi_\theta(s, a')]$ for notational simplicity.

Proof. See §D.1 for a detailed proof.

Since the action-value function $Q^{\pi_\theta}$ in (3.4) is unknown, to obtain the policy gradient, we use another two-layer neural network to track the action-value function of policy $\pi_\theta$. Specifically, we use a two-layer neural network $Q_\omega(\cdot, \cdot) = f((\cdot, \cdot); \omega)$ defined in (3.1) to represent the action-value function $Q^{\pi_\theta}$, where $\omega$ plays the same role as $W$ in (3.1). Such an approach is known as the actor-critic scheme (Konda and Tsitsiklis, 2000). We call $\pi_\theta$ and $Q_\omega$ the actor and critic, respectively.

Shared Initialization and Compatible Function Approximation. Sutton et al. (2000) introduce the notion of compatible function approximations. Specifically, the action-value function $Q_\omega$ is compatible with $\pi_\theta$ if we have $\nabla_\omega A_\omega(s, a) = \nabla_\theta \log \pi_\theta(a \mid s)$ for all $(s, a) \in \mathcal{S} \times \mathcal{A}$, where $A_\omega(s, a) = Q_\omega(s, a) - \langle Q_\omega(s, \cdot), \pi_\theta(\cdot \mid s)\rangle$ is the advantage function corresponding to $Q_\omega$. Compatible function approximations enable us to construct unbiased estimators of the policy gradient, which are essential for the optimality and convergence of policy gradient methods (Konda and Tsitsiklis, 2000; Sutton et al., 2000; Kakade, 2002; Peters and Schaal, 2008; Wagner, 2011, 2013). To approximately obtain compatible function approximations when both the actor and critic are represented by neural networks, we use a shared architecture between the action-value function $Q_\omega$ and the energy function of $\pi_\theta$, and initialize $Q_\omega$ and $\pi_\theta$ with the same parameter $W_{\mathrm{init}}$, where $[W_{\mathrm{init}}]_r \sim N(0, I_d/d)$ for all $r \in [m]$. We show that in the overparameterized regime where $m$ is large, the shared architecture and random initialization ensure $Q_\omega$ to be approximately compatible with $\pi_\theta$ in the following sense.
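The compatibility discussion rests on the score identity $\nabla_\theta \log \pi_\theta(a \mid s) = \tau \cdot (\phi_\theta(s, a) - \mathbb{E}_{\pi_\theta}[\phi_\theta(s, a')])$, which underlies (3.4). A toy finite-difference check of this identity; all dimensions, seeds, and random draws below are illustrative assumptions:

```python
import numpy as np

# Check: grad_theta log pi_theta(a|s) = tau * (phi_theta(s,a) - E_pi[phi_theta(s,a')]).
rng = np.random.default_rng(1)
m, d, tau = 4, 3, 2.0
b = rng.choice([-1.0, 1.0], size=m)
theta = rng.normal(0.0, 1.0 / np.sqrt(d), (m, d))
xs = [v / np.linalg.norm(v) for v in rng.normal(size=(2, d))]  # embeddings of (s,a0), (s,a1)

def f(th, x):
    return (b @ np.maximum(th @ x, 0.0)) / np.sqrt(m)

def log_pi(th, k):  # log pi(a_k | s) under the energy-based policy (3.2)
    e = np.array([tau * f(th, x) for x in xs])
    return e[k] - np.log(np.exp(e).sum())

def phi(th, x):  # feature mapping (3.3)
    return ((b * ((th @ x) > 0))[:, None] * x[None, :] / np.sqrt(m)).ravel()

p = np.exp([tau * f(theta, x) for x in xs]); p /= p.sum()
analytic = tau * (phi(theta, xs[0]) - sum(pk * phi(theta, x) for pk, x in zip(p, xs)))
eps, g = 1e-6, np.zeros(m * d)
for i in range(m * d):  # central finite differences, coordinate by coordinate
    e_i = np.zeros(m * d); e_i[i] = eps
    g[i] = (log_pi(theta + e_i.reshape(m, d), 0)
            - log_pi(theta - e_i.reshape(m, d), 0)) / (2 * eps)
print(np.allclose(g, analytic, atol=1e-5))  # expect True away from ReLU kinks
```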
We define $\phi_0 = ([\phi_0]_1^\top, \ldots, [\phi_0]_m^\top)^\top : \mathbb{R}^d \to \mathbb{R}^{md}$ as the centered feature mapping corresponding to the initialization, which takes the form of
$$[\phi_0]_r(s, a) = \frac{b_r}{\sqrt{m}} \cdot \mathbb{1}\bigl\{(s, a)^\top [W_{\mathrm{init}}]_r > 0\bigr\} \cdot (s, a) - \mathbb{E}_{\pi_\theta}\Bigl[\frac{b_r}{\sqrt{m}} \cdot \mathbb{1}\bigl\{(s, a')^\top [W_{\mathrm{init}}]_r > 0\bigr\} \cdot (s, a')\Bigr], \quad \forall (s, a) \in \mathcal{S} \times \mathcal{A}, \tag{3.6}$$
where $W_{\mathrm{init}}$ is the initialization shared by both the actor and critic, and we omit the dependency on $\theta$ for notational simplicity. Similarly, we define for all $(s, a) \in \mathcal{S} \times \mathcal{A}$ the following centered feature mappings,
$$\bar\phi_\theta(s, a) = \phi_\theta(s, a) - \mathbb{E}_{\pi_\theta}\bigl[\phi_\theta(s, a')\bigr], \quad \bar\phi_\omega(s, a) = \phi_\omega(s, a) - \mathbb{E}_{\pi_\theta}\bigl[\phi_\omega(s, a')\bigr]. \tag{3.7}$$
Here $\phi_\theta(s, a)$ and $\phi_\omega(s, a)$ are the feature mappings defined in (3.3), which correspond to $\theta$ and $\omega$, respectively. By (3.1), we have
$$A_\omega(s, a) = Q_\omega(s, a) - \mathbb{E}_{\pi_\theta}\bigl[Q_\omega(s, a')\bigr] = \bar\phi_\omega(s, a)^\top \omega, \quad \nabla_\theta \log \pi_\theta(a \mid s) = \tau \cdot \bar\phi_\theta(s, a), \tag{3.8}$$
which holds almost everywhere for $\theta \in \mathbb{R}^{md}$. As shown in Corollary A.3 in §A, when the width $m$ is sufficiently large, in policy gradient methods, both $\bar\phi_\theta$ and $\bar\phi_\omega$ are well approximated by $\phi_0$ defined in (3.6). Therefore, by (3.8), we conclude that in the overparameterized regime with shared architecture and random initialization, $Q_\omega$ is approximately compatible with $\pi_\theta$.

3.2 Neural Policy Gradient Methods

Now we present neural policy gradient and neural natural policy gradient. Following the actor-critic scheme, they generate a sequence of policies $\{\pi_{\theta_i}\}_{i \in [T+1]}$ and action-value functions $\{Q_{\omega_i}\}_{i \in [T]}$.

3.2.1 Actor Update

As introduced in §2, we aim to solve the optimization problem $\max_{\theta \in \mathcal{B}} J(\pi_\theta)$ iteratively via gradient-based methods, where $\mathcal{B}$ is the parameter space. We set $\mathcal{B} = \{\alpha \in \mathbb{R}^{md} : \|\alpha - W_{\mathrm{init}}\|_2 \leq R\}$, where $R > 1$ and $W_{\mathrm{init}}$ is the initial parameter defined in §3.1. For all $i \in [T]$, let $\theta_i$ be the policy parameter at the $i$-th iteration. For notational simplicity, in the sequel, we denote by $\sigma_i$ and $\varsigma_i$ the state-action visitation measure $\sigma_{\pi_{\theta_i}}$ and the stationary state-action distribution $\varsigma_{\pi_{\theta_i}}$, respectively, which are defined in §2. Similarly, we write $\nu_i = \nu_{\pi_{\theta_i}}$ and $\varrho_i = \varrho_{\pi_{\theta_i}}$. To update $\theta_i$, we set
$$\theta_{i+1} \leftarrow \Pi_{\mathcal{B}}\bigl(\theta_i + \eta \cdot G(\theta_i) \cdot \hat\nabla_\theta J(\pi_{\theta_i})\bigr), \tag{3.9}$$
where we define $\Pi_{\mathcal{B}} : \mathbb{R}^{md} \to \mathcal{B}$ as the projection operator onto the parameter space $\mathcal{B} \subseteq \mathbb{R}^{md}$. Here $G(\theta_i) \in \mathbb{R}^{md \times md}$ is a matrix specific to each algorithm. Specifically, we have $G(\theta_i) = I_{md}$ for policy gradient and $G(\theta_i) = (F(\theta_i))^{-1}$ for natural policy gradient, where $F(\theta_i)$ is the Fisher information matrix in (3.5). Meanwhile, $\eta$ is the learning rate and $\hat\nabla_\theta J(\pi_{\theta_i})$ is an estimator of $\nabla_\theta J(\pi_{\theta_i})$, which takes the form of
$$\hat\nabla_\theta J(\pi_{\theta_i}) = \frac{1}{B} \sum_{\ell=1}^{B} Q_{\omega_i}(s_\ell, a_\ell) \cdot \nabla_\theta \log \pi_{\theta_i}(a_\ell \mid s_\ell). \tag{3.10}$$
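A sketch of the projected actor update (3.9) with $G = I_{md}$ and the estimator (3.10); the sampled batch and the critic/score oracles are placeholders supplied by the surrounding code, not objects from the paper:

```python
import numpy as np

def project_to_ball(alpha, w_init, R):
    """Projection Pi_B onto B = {alpha : ||alpha - w_init||_2 <= R}."""
    diff = alpha - w_init
    n = np.linalg.norm(diff)
    return alpha if n <= R else w_init + (R / n) * diff

def actor_update(theta, w_init, R, eta, batch, q_fn, score_fn):
    """One projected policy-gradient step, (3.9) with G = I, using (3.10).

    `batch` holds (s, a) samples from the visitation measure, `q_fn` is the
    critic, and `score_fn` returns grad_theta log pi_theta(a | s).
    """
    grad_hat = np.mean([q_fn(s, a) * score_fn(s, a) for (s, a) in batch], axis=0)
    return project_to_ball(theta + eta * grad_hat, w_init, R)
```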
Here $\tau_i$ is the temperature parameter of $\pi_{\theta_i}$, $\{(s_\ell, a_\ell)\}_{\ell \in [B]}$ is sampled from the state-action visitation measure $\sigma_i$ corresponding to the current policy $\pi_{\theta_i}$, and $B > 0$ is the batch size. Also, $Q_{\omega_i}$ is the critic obtained by Algorithm 2. Here we omit the dependency of $\hat\nabla_\theta J(\pi_{\theta_i})$ on $\omega_i$ for notational simplicity.

Sampling From Visitation Measure. Recall that the policy gradient $\nabla_\theta J(\pi_\theta)$ in (3.4) involves an expectation taken over the state-action visitation measure $\sigma_{\pi_\theta}$. Thus, to obtain an unbiased estimator of the policy gradient, we need to sample from the visitation measure $\sigma_{\pi_\theta}$. To achieve such a goal, we introduce an artificial MDP $(\mathcal{S}, \mathcal{A}, \tilde P, \zeta, r, \gamma)$. Such an MDP only differs from the original MDP in the Markov transition kernel $\tilde P$, which is defined as
$$\tilde P(s' \mid s, a) = \gamma \cdot P(s' \mid s, a) + (1 - \gamma) \cdot \zeta(s'), \quad \forall (s, a, s') \in \mathcal{S} \times \mathcal{A} \times \mathcal{S}.$$
Here $P$ is the Markov transition kernel of the original MDP. That is, at each state transition of the artificial MDP, the next state is sampled from the initial state distribution $\zeta$ with probability $1 - \gamma$. In other words, at each state transition, we restart the original MDP with probability $1 - \gamma$. As shown in Konda (2002), the stationary state distribution of the induced Markov chain is exactly the state visitation measure $\nu_{\pi_\theta}$. Therefore, when we sample a trajectory $\{(S_t, A_t)\}_{t \geq 0}$, where $S_0 \sim \zeta(\cdot)$, $A_t \sim \pi(\cdot \mid S_t)$, and $S_{t+1} \sim \tilde P(\cdot \mid S_t, A_t)$ for all $t \geq 0$, the marginal distribution of $(S_t, A_t)$ converges to the state-action visitation measure $\sigma_{\pi_\theta}$.

Inverting Fisher Information Matrix. Recall that $G(\theta_i)$ is the inverse of the Fisher information matrix used in natural policy gradient. In the overparameterized regime, inverting an estimator $\hat F(\theta_i)$ of $F(\theta_i)$ can be infeasible as $\hat F(\theta_i)$ is a high-dimensional matrix, which is possibly not invertible. To resolve this issue, we estimate the natural policy gradient $G(\theta_i) \cdot \nabla_\theta J(\pi_{\theta_i})$ by solving
$$\min_{\alpha \in \mathcal{B}} \|\hat F(\theta_i) \cdot \alpha - \tau_i \cdot \hat\nabla_\theta J(\pi_{\theta_i})\|_2, \tag{3.11}$$
where $\hat\nabla_\theta J(\pi_{\theta_i})$ is defined in (3.10), $\tau_i$ is the temperature parameter in $\pi_{\theta_i}$, and $\mathcal{B}$ is the parameter space. Meanwhile, $\hat F(\theta_i)$ is an unbiased estimator of $F(\theta_i)$ based on $\{(s_\ell, a_\ell)\}_{\ell \in [B]}$ sampled from $\sigma_i$, which is defined as
$$\hat F(\theta_i) = \frac{\tau_i^2}{B} \sum_{\ell=1}^{B} \Bigl(\phi_{\theta_i}(s_\ell, a_\ell) - \mathbb{E}_{\pi_{\theta_i}}\bigl[\phi_{\theta_i}(s_\ell, a'_\ell)\bigr]\Bigr)\Bigl(\phi_{\theta_i}(s_\ell, a_\ell) - \mathbb{E}_{\pi_{\theta_i}}\bigl[\phi_{\theta_i}(s_\ell, a'_\ell)\bigr]\Bigr)^\top, \tag{3.12}$$
where $a'_\ell \sim \pi_{\theta_i}(\cdot \mid s_\ell)$ and $\phi_{\theta_i}$ is defined in (3.3) with $\theta = \theta_i$. The actor update of neural natural policy gradient takes the form of
$$\tau_{i+1} \leftarrow \tau_i + \eta, \quad \tau_{i+1} \cdot \theta_{i+1} \leftarrow \tau_i \cdot \theta_i + \eta \cdot \mathop{\mathrm{argmin}}_{\alpha \in \mathcal{B}} \|\hat F(\theta_i) \cdot \alpha - \tau_i \cdot \hat\nabla_\theta J(\pi_{\theta_i})\|_2, \tag{3.13}$$
where we use an arbitrary minimizer of (3.11) if it is not unique.
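Sampling from the visitation measure therefore reduces to running the artificial MDP with geometric restarts. A minimal sketch under an assumed generic simulator interface (here `env_step(s, a)` returns only the next state, an interface assumption of this sketch):

```python
import numpy as np

def sample_visitation(env_reset, env_step, policy, gamma, n_samples, rng=None):
    """Draw (s, a) pairs whose marginal converges to sigma_{pi_theta}, by
    restarting from the initial distribution with probability 1 - gamma
    at every transition (the kernel P~ above)."""
    rng = rng or np.random.default_rng()
    samples, s = [], env_reset()
    for _ in range(n_samples):
        a = policy(s)
        samples.append((s, a))
        if rng.random() < 1.0 - gamma:
            s = env_reset()        # restart with probability 1 - gamma
        else:
            s = env_step(s, a)     # otherwise follow the original kernel P
    return samples
```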
Note that we also update the temperature parameter by $\tau_{i+1} \leftarrow \tau_i + \eta$, which ensures $\theta_{i+1} \in \mathcal{B}$. It is worth mentioning that, up to minor modifications, our analysis allows for approximately solving (3.11), which is the common practice of approximate second-order optimization (Martens and Grosse, 2015; Wu et al., 2017). To summarize, at the $i$-th iteration, neural policy gradient obtains $\theta_{i+1}$ via projected gradient ascent using $\hat\nabla_\theta J(\pi_{\theta_i})$ defined in (3.10). Meanwhile, neural natural policy gradient solves (3.11) and obtains $\theta_{i+1}$ according to (3.13).

3.2.2 Critic Update

To obtain $\hat\nabla_\theta J(\pi_\theta)$, it remains to obtain the critic $Q_{\omega_i}$ in (3.10). For any policy $\pi$, the action-value function $Q^\pi$ is the unique solution to the Bellman equation $Q = \mathcal{T}^\pi Q$ (Sutton and Barto, 2018). Here $\mathcal{T}^\pi$ is the Bellman operator that takes the form of
$$\mathcal{T}^\pi Q(s, a) = \mathbb{E}\bigl[(1 - \gamma) \cdot r(s, a) + \gamma \cdot Q(s', a')\bigr], \quad \forall (s, a) \in \mathcal{S} \times \mathcal{A},$$
where $s' \sim P(\cdot \mid s, a)$ and $a' \sim \pi(\cdot \mid s')$. Correspondingly, we aim to solve the following optimization problem
$$\omega_i \leftarrow \mathop{\mathrm{argmin}}_{\omega \in \mathcal{B}} \mathbb{E}_{\varsigma_i}\Bigl[\bigl(Q_\omega(s, a) - \mathcal{T}^{\pi_{\theta_i}} Q_\omega(s, a)\bigr)^2\Bigr], \tag{3.14}$$
where $\varsigma_i$ and $\mathcal{T}^{\pi_{\theta_i}}$ are the stationary state-action distribution and the Bellman operator associated with $\pi_{\theta_i}$, respectively, and $\mathcal{B}$ is the parameter space. We adopt neural temporal-difference learning (TD) studied in Cai et al. (2019), which solves the optimization problem in (3.14) via stochastic semigradient descent (Sutton, 1988). Specifically, an iteration of neural TD takes the form of
$$\omega(t + 1/2) \leftarrow \omega(t) - \eta_{\mathrm{TD}} \cdot \bigl(Q_{\omega(t)}(s, a) - (1 - \gamma) \cdot r(s, a) - \gamma \cdot Q_{\omega(t)}(s', a')\bigr) \cdot \nabla_\omega Q_{\omega(t)}(s, a), \tag{3.15}$$
$$\omega(t + 1) \leftarrow \mathop{\mathrm{argmin}}_{\alpha \in \mathcal{B}} \|\alpha - \omega(t + 1/2)\|_2, \tag{3.16}$$
where $(s, a) \sim \varsigma_i(\cdot)$, $s' \sim P(\cdot \mid s, a)$, $a' \sim \pi(\cdot \mid s')$, and $\eta_{\mathrm{TD}}$ is the learning rate of neural TD. Here (3.15) is the stochastic semigradient step, and (3.16) projects the parameter obtained by (3.15) back to the parameter space $\mathcal{B}$. Meanwhile, the state-action pairs in (3.15) are sampled from the stationary state-action distribution $\varsigma_i$, which is achieved by sampling from the Markov chain induced by $\pi_{\theta_i}$ until it mixes. See Algorithm 2 in §B for details.
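A sketch of one neural TD iteration (3.15)-(3.16); the transition tuple and the critic oracles are placeholders supplied by Algorithm 2's sampling routine, not library functions:

```python
import numpy as np

def neural_td_step(omega, w_init, R, transition, q_fn, grad_q_fn, gamma, eta_td):
    """One neural TD(0) step: semigradient on the Bellman residual (3.15),
    then projection onto B (3.16).

    `transition = (s, a, r, s2, a2)` is assumed to satisfy
    (s, a) ~ stationary distribution, s2 ~ P(.|s, a), a2 ~ pi(.|s2);
    `q_fn(omega, s, a)` and `grad_q_fn(omega, s, a)` are the two-layer
    critic and its parameter gradient.
    """
    s, a, r, s2, a2 = transition
    delta = q_fn(omega, s, a) - (1.0 - gamma) * r - gamma * q_fn(omega, s2, a2)
    # Semigradient: the bootstrapped target is not differentiated through.
    omega_half = omega - eta_td * delta * grad_q_fn(omega, s, a)
    diff = omega_half - w_init
    n = np.linalg.norm(diff)
    return omega_half if n <= R else w_init + (R / n) * diff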
Finally, combining the actor updates and the critic update described in (3.9), (3.13), and (3.14), respectively, we obtain neural policy gradient and neural natural policy gradient, which are described in Algorithm 1.

Algorithm 1 Neural Policy Gradient Methods
Require: Number of iterations $T$, number of TD iterations $T_{\mathrm{TD}}$, learning rate $\eta$, learning rate $\eta_{\mathrm{TD}}$ of neural TD, temperature parameters $\{\tau_i\}_{i \in [T+1]}$, batch size $B$.
1: Initialization: Initialize $b_r \sim \mathrm{Unif}(\{-1, 1\})$ and $[W_{\mathrm{init}}]_r \sim N(0, I_d/d)$ for all $r \in [m]$. Set $\mathcal{B} \leftarrow \{\alpha \in \mathbb{R}^{md} : \|\alpha - W_{\mathrm{init}}\|_2 \leq R\}$ and $\theta_1 \leftarrow W_{\mathrm{init}}$.
2: for $i \in [T]$ do
3: Update $\omega_i$ using Algorithm 2 with $\pi_{\theta_i}$ as the input, $\omega(0) \leftarrow W_{\mathrm{init}}$ and $\{b_r\}_{r \in [m]}$ as the initialization, $T_{\mathrm{TD}}$ as the number of iterations, and $\eta_{\mathrm{TD}}$ as the learning rate.
4: Sample $\{(s_\ell, a_\ell)\}_{\ell \in [B]}$ from the visitation measure $\sigma_i$, and estimate $\hat\nabla_\theta J(\pi_{\theta_i})$ and $\hat F(\theta_i)$ using (3.10) and (3.12), respectively.
5: If using policy gradient, update $\theta_{i+1}$ by $\theta_{i+1} \leftarrow \Pi_{\mathcal{B}}(\theta_i + \eta \cdot \hat\nabla_\theta J(\pi_{\theta_i}))$. If using natural policy gradient, update $\theta_{i+1}$ and $\tau_{i+1}$ by $\tau_{i+1} \leftarrow \tau_i + \eta$ and $\tau_{i+1} \cdot \theta_{i+1} \leftarrow \tau_i \cdot \theta_i + \eta \cdot \mathrm{argmin}_{\alpha \in \mathcal{B}} \|\hat F(\theta_i) \cdot \alpha - \tau_i \cdot \hat\nabla_\theta J(\pi_{\theta_i})\|_2$.
6: end for
7: Output: $\{\pi_{\theta_i}\}_{i \in [T+1]}$.
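For orientation, the control flow of Algorithm 1 can be written schematically as follows. Every helper name below is a placeholder for the routines sketched earlier in this section, not a library call, so this is a skeleton rather than a runnable implementation.

```python
def neural_policy_gradient(T, T_td, eta, eta_td, batch_size, natural=False):
    """Schematic skeleton of Algorithm 1; all helpers are placeholders."""
    w_init, b = init_network()                       # shared initialization (Line 1)
    theta = w_init.copy()
    tau = 0.0 if natural else 1.0                    # the paper fixes tau_i = 1 for vanilla PG
    policies = []
    for i in range(T):
        omega = neural_td(theta, tau, w_init, b, T_td, eta_td)   # critic update (Line 3)
        batch = sample_visitation(theta, tau, batch_size)        # (s, a) ~ sigma_i (Line 4)
        grad_hat = estimate_grad(batch, omega, theta, tau)       # (3.10)
        if not natural:
            theta = project_to_ball(theta + eta * grad_hat, w_init)  # (3.9), G = I
        else:
            F_hat = estimate_fisher(batch, theta, tau)           # (3.12)
            delta = solve_npg(F_hat, tau * grad_hat)             # minimizer of (3.11)
            new_tau = tau + eta
            theta = (tau * theta + eta * delta) / new_tau        # (3.13)
            tau = new_tau
        policies.append((theta.copy(), tau))
    return policies
```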
4 Main Results

In this section, we establish the global optimality and convergence for neural policy gradient methods. Hereafter, we assume that the absolute value of the reward function $r$ is upper bounded by an absolute constant $Q_{\max} > 0$. As a result, we obtain from (2.1) and (2.2) that $|V^\pi(s)| \leq Q_{\max}$, $|Q^\pi(s, a)| \leq Q_{\max}$, and $|A^\pi(s, a)| \leq 2Q_{\max}$ for all $\pi$ and $(s, a) \in \mathcal{S} \times \mathcal{A}$. In §4.1, we show that neural policy gradient converges to a stationary point of $J(\pi_\theta)$ with respect to $\theta$ at a sublinear rate. We further characterize the geometry of $J(\pi_\theta)$ and establish the global optimality of the obtained stationary point. Meanwhile, in §4.2, we prove that neural natural policy gradient converges to the global optimum of $J(\pi_\theta)$ at a sublinear rate.

4.1 Neural Policy Gradient

In the sequel, we study the convergence of neural policy gradient, i.e., Algorithm 1 with (3.9) as the actor update, where $G(\theta) = I_{md}$. In what follows, we lay out a regularity condition on the action-value function $Q^\pi$.

Assumption 4.1 (Action-Value Function Class). We define
$$\mathcal{F}_{R,\infty} = \Bigl\{f(s, a) = f_0(s, a) + \int \mathbb{1}\bigl\{w^\top (s, a) > 0\bigr\} \cdot (s, a)^\top \iota(w) \,\mathrm{d}\mu(w) : \|\iota(w)\|_\infty \leq R/\sqrt{d}\Bigr\},$$
where $\mu : \mathbb{R}^d \to \mathbb{R}$ is the density function of the Gaussian distribution $N(0, I_d/d)$, $f_0(\cdot, \cdot) = f((\cdot, \cdot); W_{\mathrm{init}})$ is the two-layer neural network corresponding to the initial parameter $W_{\mathrm{init}}$, and $\iota : \mathbb{R}^d \to \mathbb{R}^d$ together with $f_0$ parameterizes the elements of $\mathcal{F}_{R,\infty}$. We assume that $Q^\pi \in \mathcal{F}_{R,\infty}$ for all $\pi$.

Assumption 4.1 is a mild regularity condition on $Q^\pi$, as $\mathcal{F}_{R,\infty}$ captures a sufficiently general family of functions, which constitute a subset of the reproducing kernel Hilbert space (RKHS) induced by the random feature $\mathbb{1}\{w^\top (s, a) > 0\} \cdot (s, a)$ with $w \sim N(0, I_d/d)$ (Rahimi and Recht, 2008, 2009), up to the shift of $f_0$. Similar assumptions are imposed in the analysis of batch reinforcement learning in RKHS (Farahmand et al., 2016). In what follows, we lay out a regularity condition on the state visitation measure $\nu_\pi$ and the stationary state distribution $\varrho_\pi$.

Assumption 4.2 (Regularity Condition on $\nu_\pi$ and $\varrho_\pi$). Let $\pi$ and $\tilde\pi$ be two arbitrary policies. We assume that there exists an absolute constant $c > 0$ such that
$$\mathbb{E}_{\tilde\pi \cdot \nu_\pi}\bigl[\mathbb{1}\bigl\{|y^\top (s, a)| \leq u\bigr\}\bigr] \leq c \cdot u/\|y\|_2, \quad \mathbb{E}_{\tilde\pi \cdot \varrho_\pi}\bigl[\mathbb{1}\bigl\{|y^\top (s, a)| \leq u\bigr\}\bigr] \leq c \cdot u/\|y\|_2, \quad \forall y \in \mathbb{R}^d, \ \forall u > 0.$$
Here the expectations are taken over the joint distributions $\tilde\pi(\cdot \mid \cdot) \cdot \nu_\pi(\cdot)$ and $\tilde\pi(\cdot \mid \cdot) \cdot \varrho_\pi(\cdot)$ over $\mathcal{S} \times \mathcal{A}$, respectively.

Assumption 4.2 essentially imposes a regularity condition on the Markov transition kernel $P$ of the MDP, as $P$ determines $\nu_\pi$ and $\varrho_\pi$ for all $\pi$. Such a regularity condition holds if both $\nu_\pi$ and $\varrho_\pi$ have upper-bounded density functions for all $\pi$. After introducing these regularity conditions, we present the following proposition adapted from Cai et al. (2019), which characterizes the convergence of neural TD for the critic update.

Proposition 4.3 (Convergence of Critic Update). We set $\eta_{\mathrm{TD}} = \min\{(1 - \gamma)/8, 1/\sqrt{T_{\mathrm{TD}}}\}$ in Algorithm 1. Let $Q_{\omega_i}$ be the output of the $i$-th critic update in Line 3 of Algorithm 1, which is an estimator of $Q^{\pi_{\theta_i}}$ obtained by Algorithm 2 with $T_{\mathrm{TD}}$ iterations. Under Assumptions 4.1 and 4.2, it holds for $T_{\mathrm{TD}} = \Omega(m)$ that
$$\mathbb{E}_{\mathrm{init}}\bigl[\|Q_{\omega_i} - Q^{\pi_{\theta_i}}\|_{\varsigma_i}^2\bigr] = O(R^3 \cdot m^{-1/2} + R^{5/2} \cdot m^{-1/4}), \tag{4.1}$$
where $\varsigma_i$ is the stationary state-action distribution corresponding to $\pi_{\theta_i}$. Here the expectation is taken over the random initialization.

Proof. See §B.1 for a detailed proof.

Cai et al. (2019) show that the error of the critic update consists of two parts, namely the approximation error of two-layer neural networks and the algorithmic error of neural TD. The former decays as the width $m$ grows, while the latter decays as the number of neural TD iterations $T_{\mathrm{TD}}$ in Algorithm 2 grows. By setting $T_{\mathrm{TD}} = \Omega(m)$, the algorithmic error in (4.1) of Proposition 4.3 is dominated by the approximation error. In contrast with Cai et al. (2019), we obtain a more refined convergence characterization under the more restrictive assumption that $Q^\pi \in \mathcal{F}_{R,\infty}$. Specifically, such a restriction allows us to obtain the upper bound of the mean squared error in (4.1) of Proposition 4.3. It now remains to establish the convergence of the actor update, which involves the estimator $\hat\nabla_\theta J(\pi_{\theta_i})$ of the policy gradient $\nabla_\theta J(\pi_{\theta_i})$ based on $\{(s_\ell, a_\ell)\}_{\ell \in [B]}$. We introduce the following regularity condition on the variance of $\hat\nabla_\theta J(\pi_{\theta_i})$.

Assumption 4.4 (Variance Upper Bound). Recall that $\sigma_i$ is the state-action visitation measure corresponding to $\pi_{\theta_i}$ for all $i \in [T]$. Let $\xi_i = \hat\nabla_\theta J(\pi_{\theta_i}) - \mathbb{E}[\hat\nabla_\theta J(\pi_{\theta_i})]$, where $\hat\nabla_\theta J(\pi_{\theta_i})$ is defined in (3.10). We assume that there exists an absolute constant $\sigma_\xi > 0$ such that $\mathbb{E}[\|\xi_i\|_2^2] \leq \tau_i^2 \cdot \sigma_\xi^2/B$ for all $i \in [T]$. Here the expectations are taken over $\sigma_i$ given $\theta_i$ and $\omega_i$.

Assumption 4.4 is a mild regularity condition. Such a regularity condition holds if the Markov chain that generates $\{(s_\ell, a_\ell)\}_{\ell \in [B]}$ mixes sufficiently fast and $Q_{\omega_i}(s, a)$ with $(s, a) \sim \sigma_i$ has upper bounded second moments for all $i \in [T]$. Zhang et al. (2019) verify that, under certain regularity conditions, similar unbiased policy gradient estimators have almost surely upper bounded norms, which implies Assumption 4.4. Similar regularity conditions are also imposed in the analysis of policy gradient methods by Xu et al. (2019a,b).
In what follows, we impose a regularity condition on the discrepancy between the state-action visitation measure and the stationary state-action distribution corresponding to the same policy.

Assumption 4.5 (Regularity Condition on $\sigma_i$ and $\varsigma_i$). We assume that there exists an absolute constant $\kappa > 0$ such that
$$\biggl\{\mathbb{E}_{\varsigma_i}\Bigl[\Bigl(\frac{\mathrm{d}\sigma_i}{\mathrm{d}\varsigma_i}(s, a)\Bigr)^2\Bigr]\biggr\}^{1/2} \leq \kappa, \quad \forall i \in [T]. \tag{4.2}$$
Here $\mathrm{d}\sigma_i/\mathrm{d}\varsigma_i$ is the Radon-Nikodym derivative of $\sigma_i$ with respect to $\varsigma_i$.

We highlight that if the MDP is initialized at the stationary distribution $\varsigma_i$, the state-action visitation measure $\sigma_i$ is the same as $\varsigma_i$. Meanwhile, if the induced Markov state-action chain mixes sufficiently fast, such an assumption also holds. A similar regularity condition is imposed by Scherrer (2013), which assumes that the $L_\infty$-norm of $\mathrm{d}\sigma_i/\mathrm{d}\varsigma_i$ is upper bounded, whereas we only assume that its $L_2$-norm is upper bounded. Meanwhile, we impose the following regularity condition on the smoothness of the expected total reward $J(\pi_\theta)$ with respect to $\theta$.

Assumption 4.6 (Lipschitz Continuous Policy Gradient). We assume that $\nabla_\theta J(\pi_\theta)$ is $L$-Lipschitz continuous with respect to $\theta$, where $L > 0$ is an absolute constant.

Such an assumption holds when the transition probability $P(\cdot \mid s, a)$ and the reward function $r$ are both Lipschitz continuous with respect to their inputs (Pirotta et al., 2015). Also, Karimi et al. (2019); Zhang et al. (2019); Xu et al. (2019b); Agarwal et al. (2019) verify the Lipschitz continuity of the policy gradient under certain regularity conditions. Note that we restrict $\theta$ to the parameter space $\mathcal{B}$. Here we call $\hat\theta \in \mathcal{B}$ a stationary point of $J(\pi_\theta)$ if it holds for all $\theta \in \mathcal{B}$ that $\nabla_\theta J(\pi_{\hat\theta})^\top (\theta - \hat\theta) \leq 0$. We now show that the sequence $\{\theta_i\}_{i \in [T+1]}$ generated by neural policy gradient converges to a stationary point at a sublinear rate.

Theorem 4.7 (Convergence to Stationary Point). We set $\tau_i = 1$, $\eta = 1/\sqrt{T}$, $\eta_{\mathrm{TD}} = \min\{(1 - \gamma)/8, 1/\sqrt{T_{\mathrm{TD}}}\}$, $T_{\mathrm{TD}} = \Omega(m)$, and $\mathcal{B} = \{\alpha : \|\alpha - W_{\mathrm{init}}\|_2 \leq R\}$ in Algorithm 1, where the actor update is given in (3.9) with $G(\theta) = I_{md}$. For all $i \in [T]$, we define
$$\rho_i = \eta^{-1} \cdot \Bigl[\Pi_{\mathcal{B}}\bigl(\theta_i + \eta \cdot \nabla_\theta J(\pi_{\theta_i})\bigr) - \theta_i\Bigr] \in \mathbb{R}^{md}, \tag{4.3}$$
where $\Pi_{\mathcal{B}} : \mathbb{R}^{md} \to \mathcal{B}$ is the projection operator onto $\mathcal{B} \subseteq \mathbb{R}^{md}$. Under the assumptions of Proposition 4.3 and Assumptions 4.4-4.6, for $T \geq 4L^2$ we have
$$\min_{i \in [T]} \mathbb{E}\bigl[\|\rho_i\|_2^2\bigr] \leq \frac{8}{\sqrt{T}} \cdot \mathbb{E}\bigl[J(\pi_{\theta_{T+1}}) - J(\pi_{\theta_1})\bigr] + 8\sigma_\xi^2/B + \varepsilon_Q(T),$$
where $\kappa$ is defined in (4.2) of Assumption 4.5 and $\varepsilon_Q(T) = \kappa \cdot O(R^{5/2} \cdot m^{-1/4} \cdot T^{1/2} + R^{9/4} \cdot m^{-1/8} \cdot T^{1/2})$. Here the expectations are taken over all the randomness.

Proof. See §5.1 for a detailed proof.

By Theorem 4.7 with $m = \Omega(T^8 \cdot R^{18})$ and $B = \Omega(\sqrt{T})$, we obtain $\min_{i \in [T]} \mathbb{E}[\|\rho_i\|_2^2] = O(1/\sqrt{T})$. Therefore, when the two-layer neural networks are sufficiently wide and the batch size $B$ is sufficiently large, neural policy gradient achieves a $1/\sqrt{T}$-rate of convergence. Moreover, $\rho_i$ defined in (4.3) is known as the gradient mapping at $\theta_i$ (Nesterov, 2018).
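Since the projection $\Pi_{\mathcal{B}}$ has a closed form on the ball $\mathcal{B}$, the gradient mapping (4.3) is directly computable. A minimal sketch:

```python
import numpy as np

def gradient_mapping(theta, grad_J, eta, w_init, R):
    """The gradient mapping rho_i of (4.3):
    rho = (Pi_B(theta + eta * grad_J) - theta) / eta.
    Its norm measures how far theta is from stationarity; rho = 0
    exactly at a stationary point of the projected problem."""
    step = theta + eta * grad_J
    diff = step - w_init
    n = np.linalg.norm(diff)
    projected = step if n <= R else w_init + (R / n) * diff
    return (projected - theta) / eta
```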
It is known that $\hat\theta \in \mathcal{B}$ is a stationary point if and only if the gradient mapping at $\hat\theta$ is a zero vector. Therefore, (a subsequence of) $\{\theta_i\}_{i \in [T+1]}$ converges to a stationary point $\hat\theta \in \mathcal{B}$ as $\min_{i \in [T]} \mathbb{E}[\|\rho_i\|_2^2]$ converges to zero. In other words, neural policy gradient converges to a stationary point at a $1/\sqrt{T}$-rate. Also, we remark that the projection operator in the actor update is adopted only for the purpose of simplicity, which can be removed with more refined analysis. Moreover, the projection-free version of neural policy gradient converges to a stationary point at a similar sublinear rate. See §C for details. We now characterize the global optimality of the obtained stationary point $\hat\theta$. To this end, we compare the expected total reward of $\pi_{\hat\theta}$ with that of the global optimum $\pi^*$ of $J(\pi)$.

Theorem 4.8 (Global Optimality of Stationary Point). Let $\hat\theta \in \mathcal{B}$ be a stationary point of $J(\pi_\theta)$. It holds that
$$(1 - \gamma) \cdot \bigl(J(\pi^*) - J(\pi_{\hat\theta})\bigr) \leq 2Q_{\max} \cdot \inf_{\theta \in \mathcal{B}} \|u_{\hat\theta}(\cdot, \cdot) - \phi_{\hat\theta}(\cdot, \cdot)^\top \theta\|_{\sigma_{\pi_{\hat\theta}}},$$
where $Q_{\max}$ is the upper bound of $|r|$ and $u_{\hat\theta} : \mathcal{S} \times \mathcal{A} \to \mathbb{R}$ is defined as
$$u_{\hat\theta}(s, a) = \frac{\mathrm{d}\sigma_{\pi^*}}{\mathrm{d}\sigma_{\pi_{\hat\theta}}}(s, a) - \frac{\mathrm{d}\nu_{\pi^*}}{\mathrm{d}\nu_{\pi_{\hat\theta}}}(s) + \phi_{\hat\theta}(s, a)^\top \hat\theta, \quad \forall (s, a) \in \mathcal{S} \times \mathcal{A}. \tag{4.4}$$
Here $\mathrm{d}\sigma_{\pi^*}/\mathrm{d}\sigma_{\pi_{\hat\theta}}$ and $\mathrm{d}\nu_{\pi^*}/\mathrm{d}\nu_{\pi_{\hat\theta}}$ are the Radon-Nikodym derivatives, and $\|\cdot\|_{\sigma_{\pi_{\hat\theta}}}$ is the $L_2(\sigma_{\pi_{\hat\theta}})$-norm.

Proof. See §5.2 for a detailed proof.

To understand Theorem 4.8, we highlight that for $\theta, \hat\theta \in \mathcal{B}$, the function $\phi_{\hat\theta}(\cdot, \cdot)^\top \theta$ is well approximated by the overparameterized two-layer neural network $f((\cdot, \cdot); \theta)$. See Corollary A.4 for details. Therefore, the global optimality of $\pi_{\hat\theta}$ depends on the error of approximating $u_{\hat\theta}$ with an overparameterized two-layer neural network. Specifically, if $u_{\hat\theta}$ is well approximated by an overparameterized two-layer neural network, then $\pi_{\hat\theta}$ is nearly as optimal as $\pi^*$. In the following theorem, we formally establish a sufficient condition for any stationary point $\hat\theta$ to be globally optimal.

Theorem 4.9 (Global Optimality of Stationary Point). Let $\hat\theta \in \mathcal{B}$ be a stationary point of $J(\pi_\theta)$. We assume that $u_{\hat\theta} \in \mathcal{F}_{R,\infty}$ in Theorem 4.8. Under Assumption 4.2, it holds that
$$(1 - \gamma) \cdot \mathbb{E}_{\mathrm{init}}\bigl[J(\pi^*) - J(\pi_{\hat\theta})\bigr] = O(R^{3/2} \cdot m^{-1/4}).$$
More generally, without assuming $u_{\hat\theta} \in \mathcal{F}_{R,\infty}$ in Theorem 4.8, under Assumption 4.2, it holds that
$$(1 - \gamma) \cdot \mathbb{E}_{\mathrm{init}}\bigl[J(\pi^*) - J(\pi_{\hat\theta})\bigr] = O(R^{3/2} \cdot m^{-1/4}) + \mathbb{E}_{\mathrm{init}}\bigl[\|\Pi_{\mathcal{F}_{R,\infty}} u_{\hat\theta} - u_{\hat\theta}\|_{\sigma_{\pi_{\hat\theta}}}\bigr].$$
Here the expectations are taken over the random initialization, and $\Pi_{\mathcal{F}_{R,\infty}}$ is the projection operator onto $\mathcal{F}_{R,\infty}$ with respect to the $L_2(\sigma_{\pi_{\hat\theta}})$-norm.

Proof. See §D.2 for a detailed proof.

By Theorem 4.9, a stationary point $\hat\theta$ is globally optimal if $u_{\hat\theta} \in \mathcal{F}_{R,\infty}$ and $m \to \infty$.
Moreover, following from the definition of $\rho_i$ in (4.3) of Theorem 4.7, we obtain that
$$\nabla_\theta J(\pi_{\theta_i})^\top (\theta - \theta_i) \leq (2R + 2\eta \cdot Q_{\max}) \cdot \|\rho_i\|_2, \quad \forall \theta \in \mathcal{B}. \tag{4.5}$$
See §D.3 for a detailed proof of (4.5). Since $\|\rho_i\|_2 = 0$ implies that $\theta_i$ is a stationary point, the right-hand side of (4.5) quantifies the deviation of $\theta_i$ from a stationary point $\hat\theta$. Following similar analysis to §5.2 and §D.2, if $u_{\theta_i} \in \mathcal{F}_{R,\infty}$ for all $i \in [T]$, we obtain that
$$(1 - \gamma) \cdot \min_{i \in [T]} \mathbb{E}\bigl[J(\pi^*) - J(\pi_{\theta_i})\bigr] = O(R^{3/2} \cdot m^{-1/4}) + (2R + 2\eta \cdot Q_{\max}) \cdot \min_{i \in [T]} \mathbb{E}\bigl[\|\rho_i\|_2\bigr].$$
Thus, by invoking Theorem 4.7, it holds for sufficiently large $m$ and $B$ that the expected total reward $J(\pi_{\theta_i})$ converges to the global optimum $J(\pi^*)$ at a $1/T^{1/4}$-rate. A similar rate of convergence holds for the projection-free version of neural policy gradient. See §C.2 for details.

4.2 Neural Natural Policy Gradient

In the sequel, we study the convergence of neural natural policy gradient. As shown in Algorithm 1, neural natural policy gradient uses neural TD for policy evaluation and updates the actor using (3.13), where $\theta_i$ and $\tau_i$ in (3.2) are both updated. To analyze the critic update, we impose Assumptions 4.1 and 4.2, which guarantee that Proposition 4.3 holds. Meanwhile, to analyze the actor update, we impose the following regularity conditions. In parallel to Assumption 4.4, we lay out the following regularity condition on the variance of the estimators of the policy gradient and the Fisher information matrix.

Assumption 4.10 (Variance Upper Bound). Let $\mathcal{B} = \{\alpha \in \mathbb{R}^{md} : \|\alpha - W_{\mathrm{init}}\|_2 \leq R\}$, where $W_{\mathrm{init}}$ is the initial parameter. We define
$$\delta_i = (\tau_{i+1} \cdot \theta_{i+1} - \tau_i \cdot \theta_i)/\eta = \mathop{\mathrm{argmin}}_{\alpha \in \mathcal{B}} \|\hat F(\theta_i) \cdot \alpha - \tau_i \cdot \hat\nabla_\theta J(\pi_{\theta_i})\|_2, \quad \forall i \in [T],$$
where $\hat\nabla_\theta J(\pi_{\theta_i})$ and $\hat F(\theta_i)$ are defined in (3.10) and (3.12), respectively. With slight abuse of notation, for all $i \in [T]$, we define the function $\xi_i : \mathbb{R}^{md} \to \mathbb{R}^{md}$ as
$$\xi_i(\alpha) = \hat F(\theta_i) \cdot \alpha - \tau_i \cdot \hat\nabla_\theta J(\pi_{\theta_i}) - \mathbb{E}\bigl[\hat F(\theta_i) \cdot \alpha - \tau_i \cdot \hat\nabla_\theta J(\pi_{\theta_i})\bigr].$$
We assume that there exists an absolute constant $\sigma_\xi > 0$ such that
$$\mathbb{E}\bigl[\|\xi_i(\delta_i)\|_2^2\bigr] \leq \tau_i^4 \cdot \sigma_\xi^2/B, \quad \mathbb{E}\bigl[\|\xi_i(\omega_i)\|_2^2\bigr] \leq \tau_i^4 \cdot \sigma_\xi^2/B, \quad \forall i \in [T].$$
Here the expectations are taken over $\sigma_i$ given $\theta_i$ and $\omega_i$.

Next, we lay out a regularity condition on the visitation measures $\sigma_i$, $\nu_i$ and the stationary distributions $\varsigma_i$, $\varrho_i$, respectively.

Assumption 4.11 (Upper Bounded Concentrability Coefficient). We denote by $\nu^*$ and $\sigma^*$ the state and state-action visitation measures corresponding to the global optimum $\pi^*$.
For all $i \in [T]$, we define the concentrability coefficients $\varphi_i$, $\psi_i$, $\varphi'_i$, and $\psi'_i$ as
$$\varphi_i = \Bigl\{\mathbb{E}_{\sigma_i}\bigl[(\mathrm{d}\sigma^*/\mathrm{d}\sigma_i)^2\bigr]\Bigr\}^{1/2}, \quad \psi_i = \Bigl\{\mathbb{E}_{\nu_i}\bigl[(\mathrm{d}\nu^*/\mathrm{d}\nu_i)^2\bigr]\Bigr\}^{1/2}, \quad \varphi'_i = \Bigl\{\mathbb{E}_{\varsigma_i}\bigl[(\mathrm{d}\sigma^*/\mathrm{d}\varsigma_i)^2\bigr]\Bigr\}^{1/2}, \quad \psi'_i = \Bigl\{\mathbb{E}_{\varrho_i}\bigl[(\mathrm{d}\nu^*/\mathrm{d}\varrho_i)^2\bigr]\Bigr\}^{1/2}, \tag{4.6}$$
where $\mathrm{d}\sigma^*/\mathrm{d}\sigma_i$, $\mathrm{d}\nu^*/\mathrm{d}\nu_i$, $\mathrm{d}\sigma^*/\mathrm{d}\varsigma_i$, and $\mathrm{d}\nu^*/\mathrm{d}\varrho_i$ are the Radon-Nikodym derivatives. We assume that the concentrability coefficients defined in (4.6) are uniformly upper bounded by an absolute constant $c_0 > 0$.

The regularity condition on upper bounded concentrability coefficients is commonly imposed in the reinforcement learning literature and is standard for theoretical analysis (Szepesvári and Munos, 2005; Munos and Szepesvári, 2008; Antos et al., 2008; Lazaric et al., 2016; Farahmand et al., 2010, 2016; Scherrer, 2013; Scherrer et al., 2015; Yang et al., 2019b; Chen and Jiang, 2019). Finally, we introduce the following regularity condition on the initial parameter $W_{\mathrm{init}}$ in Algorithm 1.

Assumption 4.12 (Upper Bounded Moment at Random Initialization). Let $\phi_0(s, a) \in \mathbb{R}^{md}$ be the feature mapping defined in (3.3) with $\theta = W_{\mathrm{init}}$. We assume that there exists an absolute constant $M > 0$ such that
$$\mathbb{E}_{\mathrm{init}}\Bigl[\sup_{(s,a) \in \mathcal{S} \times \mathcal{A}} \bigl|f\bigl((s, a); W_{\mathrm{init}}\bigr)\bigr|^2\Bigr] = \mathbb{E}_{\mathrm{init}}\Bigl[\sup_{(s,a) \in \mathcal{S} \times \mathcal{A}} |\phi_0(s, a)^\top W_{\mathrm{init}}|^2\Bigr] \leq M^2.$$
Here the expectations are taken over the random initialization.

Note that as $m \to \infty$, the two-layer neural network $\phi_0(s, a)^\top W_{\mathrm{init}}$ converges to a Gaussian process indexed by $(s, a)$ (Lee et al., 2018), which lies in a compact subset of $\mathbb{R}^d$. It is known that, under certain regularity conditions, the maximum of a Gaussian process over a compact index set is a sub-Gaussian random variable (van Handel, 2014). Therefore, the regularity condition that $\max_{(s,a)} |\phi_0(s, a)^\top W_{\mathrm{init}}|$ has a finite second moment is mild. We now establish the global optimality and rate of convergence of neural natural policy gradient.

Theorem 4.13 (Global Optimality and Convergence). We set $\eta = 1/\sqrt{T}$, $\eta_{\mathrm{TD}} = \min\{(1 - \gamma)/8, 1/\sqrt{T_{\mathrm{TD}}}\}$, $T_{\mathrm{TD}} = \Omega(m)$, $\tau_i = (i - 1) \cdot \eta$, and $\mathcal{B} = \{\alpha : \|\alpha - W_{\mathrm{init}}\|_2 \leq R\}$ in Algorithm 1, where the actor update is given in (3.13). Under the assumptions of Proposition 4.3 and Assumptions 4.10-4.12, we have
$$\min_{i \in [T]} \mathbb{E}\bigl[J(\pi^*) - J(\pi_{\theta_i})\bigr] \leq \frac{\log|\mathcal{A}| + 9R^2 + M^2}{(1 - \gamma) \cdot \sqrt{T}} + \frac{1}{(1 - \gamma) \cdot T} \cdot \sum_{i=1}^T \bar\epsilon_i(T). \tag{4.7}$$
Here $M$ is defined in Assumption 4.12 and $\bar\epsilon_i(T)$ satisfies
$$\bar\epsilon_i(T) = \underbrace{\sqrt{8c_0} \cdot R^{1/2} \cdot (\sigma_\xi^2/B)^{1/4}}_{\textrm{(a)}} + \underbrace{O\bigl((\tau_{i+1} \cdot T^{1/2} + 1) \cdot R^{3/2} \cdot m^{-1/4} + R^{5/4} \cdot m^{-1/8}\bigr)}_{\textrm{(b)}} + \underbrace{\epsilon_{Q,i}}_{\textrm{(c)}}, \tag{4.8}$$
where $c_0$ is defined in Assumption 4.11 and $\epsilon_{Q,i} = c_0 \cdot O(R^{3/2} \cdot m^{-1/4} + R^{5/4} \cdot m^{-1/8})$. Here the expectation is taken over all the randomness.

Proof. See §5.3 for a detailed proof.

As shown in (4.7) of Theorem 4.13, the optimality gap $\min_{i \in [T]} \mathbb{E}[J(\pi^*) - J(\pi_{\theta_i})]$ is upper bounded by two terms.
Intuitively, the first $O(1/\sqrt{T})$ term characterizes the convergence of neural natural policy gradient as $m, B \to \infty$. Meanwhile, the second term aggregates the errors incurred by both the actor update and the critic update due to finite $m$ and $B$. Specifically, in (4.8) of Theorem 4.13, (a) corresponds to the estimation error of $\hat F(\theta)$ and $\hat\nabla_\theta J(\pi_\theta)$ due to the finite batch size $B$, which vanishes as $B \to \infty$. Also, (b) corresponds to the incompatibility between the parameterizations of the actor and critic. As introduced in §3.1, we use the shared architecture and random initialization to ensure approximately compatible function approximations. In particular, (b) vanishes as $m \to \infty$. Meanwhile, (c) corresponds to the policy evaluation error, i.e., the error of approximating $Q^{\pi_{\theta_i}}$ using $Q_{\omega_i}$. As shown in Proposition 4.3, such an error is sufficiently small when both $m$ and $T_{\mathrm{TD}}$ are sufficiently large. To conclude, when $m$, $B$, and $T_{\mathrm{TD}}$ are sufficiently large, the expected total reward of (a subsequence of) $\{\pi_{\theta_i}\}_{i \in [T+1]}$ obtained from neural natural policy gradient converges to the global optimum $J(\pi^*)$ at a $1/\sqrt{T}$-rate. Formally, we have the following corollary.

Corollary 4.14 (Global Optimality and Convergence). Under the same assumptions as Theorem 4.13, it holds for $m = \Omega(R^{10} \cdot T^6)$ and $B = \Omega(R^2 \cdot T^2 \cdot \sigma_\xi^2)$ that
$$\min_{i \in [T]} \mathbb{E}\bigl[J(\pi^*) - J(\pi_{\theta_i})\bigr] = O\Bigl(\frac{\log|\mathcal{A}|}{(1 - \gamma) \cdot \sqrt{T}}\Bigr).$$
Here the expectation is taken over all the randomness.

Proof. See §D.4 for a detailed proof.

Corollary 4.14 establishes both the global optimality and rate of convergence of neural natural policy gradient. Combining Theorem 4.7 and Corollary 4.14, we conclude that when we use overparameterized two-layer neural networks, both neural policy gradient and neural natural policy gradient converge at $1/\sqrt{T}$-rates. In comparison, when $m$ and $B$ are sufficiently large, neural policy gradient is only shown to converge to a stationary point under the additional regularity condition that $\nabla_\theta J(\pi_\theta)$ is Lipschitz continuous (Assumption 4.6). Moreover, by Theorem 4.8, the global optimality of such a stationary point hinges on the representation power of the overparameterized two-layer neural network. In contrast, neural natural policy gradient is shown to attain the global optimum when both $m$ and $B$ are sufficiently large, without additional regularity conditions such as Assumption 4.6, which reveals the benefit of incorporating more sophisticated optimization techniques into reinforcement learning. A similar phenomenon is observed in the LQR setting (Fazel et al., 2018; Malik et al., 2018; Tu and Recht, 2018), where natural policy gradient enjoys an improved rate of convergence. In recent work, Liu et al. (2019) study the global optimality and rates of convergence of neural proximal policy optimization (PPO) and trust region policy optimization (TRPO) (Schulman et al., 2015, 2017). Although Liu et al. (2019) establish a similar $1/\sqrt{T}$-rate of convergence to the global optimum, neural PPO is different from neural natural policy gradient, as it requires solving a subproblem of policy improvement in the functional space by fitting an overparameterized two-layer neural network using multiple stochastic gradient steps in the parameter space.
In contrast, neural natural policy gradient only requires a single stochastic natural gradient step in the parameter space, which makes the analysis even more challenging.

5 Proof of Main Results

In this section, we present the proof of Theorems 4.7, 4.8, and 4.13. Our proof utilizes the following lemma, which establishes the one-point convexity of $J(\pi)$ at the global optimum $\pi^*$. Such a lemma is adapted from Kakade and Langford (2002).

Lemma 5.1 (Performance Difference (Kakade and Langford, 2002)). It holds for all $\pi$ that
$$J(\pi^*) - J(\pi) = (1 - \gamma)^{-1} \cdot \mathbb{E}_{\nu^*}\bigl[\langle Q^\pi(s, \cdot), \pi^*(\cdot \mid s) - \pi(\cdot \mid s)\rangle\bigr],$$
where $\nu^*$ is the state visitation measure corresponding to $\pi^*$.

Proof. Following from Lemma F.1, which is Lemma 6.1 in Kakade and Langford (2002), it holds for all $\pi$ that
$$J(\pi^*) - J(\pi) = (1 - \gamma)^{-1} \cdot \mathbb{E}_{\sigma^*}\bigl[A^\pi(s, a)\bigr], \tag{5.1}$$
where $\sigma^*$ is the state-action visitation measure corresponding to $\pi^*$, and $A^\pi$ is the advantage function associated with $\pi$. By definition, we have $\sigma^*(\cdot, \cdot) = \pi^*(\cdot \mid \cdot) \cdot \nu^*(\cdot)$. Meanwhile, it holds for all $s \in \mathcal{S}$ that
$$\mathbb{E}_{\pi^*}\bigl[A^\pi(s, a)\bigr] = \mathbb{E}_{\pi^*}\bigl[Q^\pi(s, a)\bigr] - V^\pi(s) = \langle Q^\pi(s, \cdot), \pi^*(\cdot \mid s)\rangle - \langle Q^\pi(s, \cdot), \pi(\cdot \mid s)\rangle = \langle Q^\pi(s, \cdot), \pi^*(\cdot \mid s) - \pi(\cdot \mid s)\rangle. \tag{5.2}$$
Combining (5.1) and (5.2), we conclude that $J(\pi^*) - J(\pi) = (1 - \gamma)^{-1} \cdot \mathbb{E}_{\nu^*}[\langle Q^\pi(s, \cdot), \pi^*(\cdot \mid s) - \pi(\cdot \mid s)\rangle]$, which concludes the proof of Lemma 5.1.

5.1 Proof of Theorem 4.7

Proof. We first lower bound the difference between the expected total rewards of $\pi_{\theta_{i+1}}$ and $\pi_{\theta_i}$. By Assumption 4.6, $\nabla_\theta J(\pi_\theta)$ is $L$-Lipschitz continuous. Thus, it holds that
$$J(\pi_{\theta_{i+1}}) - J(\pi_{\theta_i}) \geq \eta \cdot \nabla_\theta J(\pi_{\theta_i})^\top \delta_i - L/2 \cdot \|\theta_{i+1} - \theta_i\|_2^2, \tag{5.3}$$
where $\delta_i = (\theta_{i+1} - \theta_i)/\eta$. Recall that $\xi_i = \hat\nabla_\theta J(\pi_{\theta_i}) - \mathbb{E}[\hat\nabla_\theta J(\pi_{\theta_i})]$, where the expectation is taken over $\sigma_i$ given $\theta_i$ and $\omega_i$. It holds that
$$\nabla_\theta J(\pi_{\theta_i})^\top \delta_i = \Bigl(\nabla_\theta J(\pi_{\theta_i}) - \mathbb{E}\bigl[\hat\nabla_\theta J(\pi_{\theta_i})\bigr]\Bigr)^\top \delta_i - \xi_i^\top \delta_i + \hat\nabla_\theta J(\pi_{\theta_i})^\top \delta_i. \tag{5.4}$$
On the right-hand side of (5.4), the first term represents the error of estimating $\nabla_\theta J(\pi_{\theta_i})$ using $\mathbb{E}[\hat\nabla_\theta J(\pi_{\theta_i})] = \mathbb{E}_{\sigma_i}[\nabla_\theta \log \pi_{\theta_i}(a \mid s) \cdot Q_{\omega_i}(s, a)]$, the second term is related to the variance of the estimator $\hat\nabla_\theta J(\pi_{\theta_i})$ of the policy gradient $\nabla_\theta J(\pi_{\theta_i})$, and the last term relates the increment $\delta_i$ of the actor update to $\hat\nabla_\theta J(\pi_{\theta_i})$. In the following lemma, we establish a lower bound of the first term.
Lemma 5.2. It holds that
$$\Bigl|\Bigl(\nabla_\theta J(\pi_{\theta_i}) - \mathbb{E}\bigl[\hat\nabla_\theta J(\pi_{\theta_i})\bigr]\Bigr)^\top \delta_i\Bigr| \leq 4\kappa \cdot R/\eta \cdot \|Q^{\pi_{\theta_i}} - Q_{\omega_i}\|_{\varsigma_i},$$
where $\hat\nabla_\theta J(\pi_{\theta_i})$ is defined in (3.10), $\varsigma_i$ is the stationary state-action distribution, and $\kappa$ is the absolute constant defined in Assumption 4.5. Here the expectation is taken over $\sigma_i$ given $\theta_i$ and $\omega_i$.

Proof. See §D.5 for a detailed proof.

For the second term on the right-hand side of (5.4), we have
$$-\xi_i^\top \delta_i \geq -\|\xi_i\|_2^2/2 - \|\delta_i\|_2^2/2. \tag{5.5}$$
Now it remains to lower bound the third term on the right-hand side of (5.4). For notational simplicity, we define
$$e_i = \theta_{i+1} - \bigl(\theta_i + \eta \cdot \hat\nabla_\theta J(\pi_{\theta_i})\bigr) = \Pi_{\mathcal{B}}\bigl(\theta_i + \eta \cdot \hat\nabla_\theta J(\pi_{\theta_i})\bigr) - \bigl(\theta_i + \eta \cdot \hat\nabla_\theta J(\pi_{\theta_i})\bigr),$$
where $\Pi_{\mathcal{B}}$ is the projection operator onto $\mathcal{B}$. It then holds that
$$e_i^\top \Bigl[\Pi_{\mathcal{B}}\bigl(\theta_i + \eta \cdot \hat\nabla_\theta J(\pi_{\theta_i})\bigr) - x\Bigr] = e_i^\top (\theta_{i+1} - x) \leq 0, \quad \forall x \in \mathcal{B}. \tag{5.6}$$
Specifically, setting $x = \theta_i$ in (5.6), we obtain that $e_i^\top \delta_i \leq 0$, which implies
$$\hat\nabla_\theta J(\pi_{\theta_i})^\top \delta_i = (\delta_i - e_i/\eta)^\top \delta_i \geq \|\delta_i\|_2^2. \tag{5.7}$$
By plugging Lemma 5.2, (5.5), and (5.7) into (5.4), we obtain that
$$\nabla_\theta J(\pi_{\theta_i})^\top \delta_i \geq -4\kappa \cdot R/\eta \cdot \|Q^{\pi_{\theta_i}} - Q_{\omega_i}\|_{\varsigma_i} + \|\delta_i\|_2^2/2 - \|\xi_i\|_2^2/2. \tag{5.8}$$
Thus, by plugging (5.8) and the definition that $\delta_i = (\theta_{i+1} - \theta_i)/\eta$ into (5.3), we obtain for all $i \in [T]$ that
$$(1 - L \cdot \eta) \cdot \mathbb{E}\bigl[\|\delta_i\|_2^2/2\bigr] \leq \eta^{-1} \cdot \mathbb{E}\bigl[J(\pi_{\theta_{i+1}}) - J(\pi_{\theta_i})\bigr] + 4\kappa \cdot R/\eta \cdot \mathbb{E}\bigl[\|Q^{\pi_{\theta_i}} - Q_{\omega_i}\|_{\varsigma_i}\bigr] + \mathbb{E}\bigl[\|\xi_i\|_2^2/2\bigr], \tag{5.9}$$
where the expectations are taken over all the randomness. Now we turn to characterize $\|\rho_i - \delta_i\|_2$. By the definition of $\rho_i$ in (4.3), we have
$$\|\rho_i - \delta_i\|_2 = \eta^{-1} \cdot \bigl\|\Pi_{\mathcal{B}}\bigl(\theta_i + \eta \cdot \nabla_\theta J(\pi_{\theta_i})\bigr) - \theta_i - \bigl(\Pi_{\mathcal{B}}\bigl(\theta_i + \eta \cdot \hat\nabla_\theta J(\pi_{\theta_i})\bigr) - \theta_i\bigr)\bigr\|_2 = \eta^{-1} \cdot \bigl\|\Pi_{\mathcal{B}}\bigl(\theta_i + \eta \cdot \nabla_\theta J(\pi_{\theta_i})\bigr) - \Pi_{\mathcal{B}}\bigl(\theta_i + \eta \cdot \hat\nabla_\theta J(\pi_{\theta_i})\bigr)\bigr\|_2 \leq \|\nabla_\theta J(\pi_{\theta_i}) - \hat\nabla_\theta J(\pi_{\theta_i})\|_2, \tag{5.10}$$
where the last inequality uses the non-expansiveness of the projection. The following lemma further upper bounds the right-hand side of (5.10).

Lemma 5.3. It holds for all $i \in [T]$ that
$$\mathbb{E}\bigl[\|\nabla_\theta J(\pi_{\theta_i}) - \hat\nabla_\theta J(\pi_{\theta_i})\|_2^2\bigr] \leq 2\mathbb{E}\bigl[\|\xi_i\|_2^2\bigr] + 8\kappa^2 \cdot \mathbb{E}\bigl[\|Q^{\pi_{\theta_i}} - Q_{\omega_i}\|_{\varsigma_i}^2\bigr].$$
Here the expectations are taken over all the randomness.

Proof. See §D.6 for a detailed proof.

Recall that we set $\eta = 1/\sqrt{T}$.
Upon telescoping (5.9), it holds for $T \geq 4L^2$ that
$$\min_{i \in [T]} \mathbb{E}\bigl[\|\rho_i\|_2^2\bigr] \leq \frac{1}{T} \sum_{i=1}^T \mathbb{E}\bigl[\|\rho_i\|_2^2\bigr] \leq \frac{1}{T} \sum_{i=1}^T \Bigl(2\mathbb{E}\bigl[\|\delta_i\|_2^2\bigr] + 2\mathbb{E}\bigl[\|\rho_i - \delta_i\|_2^2\bigr]\Bigr) \leq \frac{1}{T} \sum_{i=1}^T \Bigl(4(1 - L \cdot \eta) \cdot \mathbb{E}\bigl[\|\delta_i\|_2^2\bigr] + 2\mathbb{E}\bigl[\|\rho_i - \delta_i\|_2^2\bigr]\Bigr) \leq \frac{8}{\sqrt{T}} \cdot \mathbb{E}\bigl[J(\pi_{\theta_{T+1}}) - J(\pi_{\theta_1})\bigr] + \frac{8}{T} \sum_{i=1}^T \mathbb{E}\bigl[\|\xi_i\|_2^2\bigr] + \varepsilon_Q(T), \tag{5.11}$$
where the third inequality follows from the fact that $1 - L \cdot \eta \geq 1/2$, while the fourth inequality follows from (5.9), (5.10), and Lemma 5.3. Here the expectations are taken over all the randomness, and $\varepsilon_Q(T)$ is defined as
$$\varepsilon_Q(T) = 32\kappa \cdot R/\sqrt{T} \cdot \sum_{i=1}^T \mathbb{E}\bigl[\|Q^{\pi_{\theta_i}} - Q_{\omega_i}\|_{\varsigma_i}\bigr] + 16\kappa^2/T \cdot \sum_{i=1}^T \mathbb{E}\bigl[\|Q^{\pi_{\theta_i}} - Q_{\omega_i}\|_{\varsigma_i}^2\bigr].$$
By Proposition 4.3 and Assumption 4.4, it holds for all $i \in [T]$ that
$$\mathbb{E}\bigl[\|Q^{\pi_{\theta_i}} - Q_{\omega_i}\|_{\varsigma_i}^2\bigr] = O(R^3 \cdot m^{-1/2} + R^{5/2} \cdot m^{-1/4}), \quad \mathbb{E}\bigl[\|\xi_i\|_2^2\bigr] \leq \sigma_\xi^2/B. \tag{5.12}$$
By plugging (5.12) into (5.11), we conclude that
$$\min_{i \in [T]} \mathbb{E}\bigl[\|\rho_i\|_2^2\bigr] \leq \frac{8}{\sqrt{T}} \cdot \mathbb{E}\bigl[J(\pi_{\theta_{T+1}}) - J(\pi_{\theta_1})\bigr] + 8\sigma_\xi^2/B + \varepsilon_Q(T),$$
where $\varepsilon_Q(T) = \kappa \cdot O(R^{5/2} \cdot m^{-1/4} \cdot T^{1/2} + R^{9/4} \cdot m^{-1/8} \cdot T^{1/2})$. Thus, we complete the proof of Theorem 4.7.

5.2 Proof of Theorem 4.8

Proof. Since $\hat\theta$ is a stationary point of $J(\pi_\theta)$, it holds that
$$\nabla_\theta J(\pi_{\hat\theta})^\top (\theta - \hat\theta) \leq 0, \quad \forall \theta \in \mathcal{B}. \tag{5.13}$$
Therefore, by Proposition 3.1, we obtain from (5.13) that
$$\nabla_\theta J(\pi_{\hat\theta})^\top (\theta - \hat\theta) = \mathbb{E}_{\sigma_{\pi_{\hat\theta}}}\bigl[\bar\phi_{\hat\theta}(s, a)^\top (\theta - \hat\theta) \cdot Q^{\pi_{\hat\theta}}(s, a)\bigr] \leq 0, \quad \forall \theta \in \mathcal{B}. \tag{5.14}$$
Here $\phi_{\hat\theta}$ and $\bar\phi_{\hat\theta}$ are defined in (3.3) and (3.7) with $\theta = \hat\theta$, respectively. Note that
$$\mathbb{E}_{\sigma_{\pi_{\hat\theta}}}\bigl[\bar\phi_{\hat\theta}(s, a)^\top (\theta - \hat\theta) \cdot V^{\pi_{\hat\theta}}(s)\bigr] = \mathbb{E}_{\nu_{\pi_{\hat\theta}}}\Bigl[\mathbb{E}_{\pi_{\hat\theta}}\bigl[\bar\phi_{\hat\theta}(s, a)\bigr]^\top (\theta - \hat\theta) \cdot V^{\pi_{\hat\theta}}(s)\Bigr] = 0,$$
$$\mathbb{E}_{\sigma_{\pi_{\hat\theta}}}\Bigl[\mathbb{E}_{\pi_{\hat\theta}}\bigl[\phi_{\hat\theta}(s, a')^\top (\theta - \hat\theta)\bigr] \cdot A^{\pi_{\hat\theta}}(s, a)\Bigr] = \mathbb{E}_{\nu_{\pi_{\hat\theta}}}\Bigl[\mathbb{E}_{\pi_{\hat\theta}}\bigl[\phi_{\hat\theta}(s, a')^\top (\theta - \hat\theta)\bigr] \cdot \mathbb{E}_{\pi_{\hat\theta}}\bigl[A^{\pi_{\hat\theta}}(s, a)\bigr]\Bigr] = 0,$$
which hold since $\mathbb{E}_{\pi_{\hat\theta}}[\bar\phi_{\hat\theta}(s, a)] = \mathbb{E}_{\pi_{\hat\theta}}[A^{\pi_{\hat\theta}}(s, a)] = 0$ for all $s \in \mathcal{S}$.
Thus, by (5.14), we have
$$\mathbb{E}_{\sigma_{\pi_{\hat\theta}}}\bigl[\bar\phi_{\hat\theta}(s, a)^\top (\theta - \hat\theta) \cdot Q^{\pi_{\hat\theta}}(s, a)\bigr] = \mathbb{E}_{\sigma_{\pi_{\hat\theta}}}\bigl[\phi_{\hat\theta}(s, a)^\top (\theta - \hat\theta) \cdot A^{\pi_{\hat\theta}}(s, a)\bigr] - \mathbb{E}_{\sigma_{\pi_{\hat\theta}}}\Bigl[\mathbb{E}_{\pi_{\hat\theta}}\bigl[\phi_{\hat\theta}(s, a')^\top (\theta - \hat\theta)\bigr] \cdot A^{\pi_{\hat\theta}}(s, a)\Bigr] + \mathbb{E}_{\sigma_{\pi_{\hat\theta}}}\bigl[\bar\phi_{\hat\theta}(s, a)^\top (\theta - \hat\theta) \cdot V^{\pi_{\hat\theta}}(s)\bigr] = \mathbb{E}_{\sigma_{\pi_{\hat\theta}}}\bigl[\phi_{\hat\theta}(s, a)^\top (\theta - \hat\theta) \cdot A^{\pi_{\hat\theta}}(s, a)\bigr] \leq 0, \quad \forall \theta \in \mathcal{B}. \tag{5.15}$$
Meanwhile, by Lemma 5.1 we have
$$(1 - \gamma) \cdot \bigl(J(\pi^*) - J(\pi_{\hat\theta})\bigr) = \mathbb{E}_{\nu^*}\bigl[\langle A^{\pi_{\hat\theta}}(s, \cdot), \pi^*(\cdot \mid s) - \pi_{\hat\theta}(\cdot \mid s)\rangle\bigr]. \tag{5.16}$$
In what follows, we write $\Delta\theta = \theta - \hat\theta$. Combining (5.15) and (5.16), we obtain that
$$(1 - \gamma) \cdot \bigl(J(\pi^*) - J(\pi_{\hat\theta})\bigr) \leq \mathbb{E}_{\nu^*}\bigl[\langle A^{\pi_{\hat\theta}}(s, \cdot), \pi^*(\cdot \mid s) - \pi_{\hat\theta}(\cdot \mid s)\rangle\bigr] - \mathbb{E}_{\sigma_{\pi_{\hat\theta}}}\bigl[\phi_{\hat\theta}(s, a)^\top \Delta\theta \cdot A^{\pi_{\hat\theta}}(s, a)\bigr] = \mathbb{E}_{\nu^*}\bigl[\langle A^{\pi_{\hat\theta}}(s, \cdot), \pi^*(\cdot \mid s) - \pi_{\hat\theta}(\cdot \mid s)\rangle\bigr] - \mathbb{E}_{\nu_{\pi_{\hat\theta}}}\bigl[\langle A^{\pi_{\hat\theta}}(s, \cdot), \phi_{\hat\theta}(s, \cdot)^\top \Delta\theta \cdot \pi_{\hat\theta}(\cdot \mid s)\rangle\bigr], \tag{5.17}$$
where we use the fact that $\sigma_{\pi_{\hat\theta}}(\cdot, \cdot) = \pi_{\hat\theta}(\cdot \mid \cdot) \cdot \nu_{\pi_{\hat\theta}}(\cdot)$. It remains to upper bound the right-hand side of (5.17). By calculation, it holds for all $(s, a) \in \mathcal{S} \times \mathcal{A}$ that
$$\bigl(\pi^*(a \mid s) - \pi_{\hat\theta}(a \mid s)\bigr)\,\mathrm{d}\nu^*(s) - \phi_{\hat\theta}(s, a)^\top \Delta\theta \cdot \pi_{\hat\theta}(a \mid s)\,\mathrm{d}\nu_{\hat\theta}(s) = \Bigl(\frac{\pi^*(a \mid s) - \pi_{\hat\theta}(a \mid s)}{\pi_{\hat\theta}(a \mid s)} \cdot \frac{\mathrm{d}\nu^*}{\mathrm{d}\nu_{\hat\theta}}(s) - \phi_{\hat\theta}(s, a)^\top \Delta\theta\Bigr) \cdot \pi_{\hat\theta}(a \mid s)\,\mathrm{d}\nu_{\pi_{\hat\theta}}(s) = \bigl(u_{\hat\theta}(s, a) - \phi_{\hat\theta}(s, a)^\top \theta\bigr)\,\mathrm{d}\sigma_{\pi_{\hat\theta}}(s, a), \tag{5.18}$$
where $u_{\hat\theta}$ is defined as
$$u_{\hat\theta}(s, a) = \frac{\mathrm{d}\sigma_{\pi^*}}{\mathrm{d}\sigma_{\pi_{\hat\theta}}}(s, a) - \frac{\mathrm{d}\nu_{\pi^*}}{\mathrm{d}\nu_{\pi_{\hat\theta}}}(s) + \phi_{\hat\theta}(s, a)^\top \hat\theta, \quad \forall (s, a) \in \mathcal{S} \times \mathcal{A}.$$
Here $\mathrm{d}\sigma_{\pi^*}/\mathrm{d}\sigma_{\pi_{\hat\theta}}$ and $\mathrm{d}\nu_{\pi^*}/\mathrm{d}\nu_{\pi_{\hat\theta}}$ are the Radon-Nikodym derivatives.
By plugging (5.18) into (5.17), we obtain that
$$(1 - \gamma) \cdot \bigl(J(\pi^*) - J(\pi_{\hat\theta})\bigr) \leq \mathbb{E}_{\nu^*}\bigl[\langle A^{\pi_{\hat\theta}}(s, \cdot), \pi^*(\cdot \mid s) - \pi_{\hat\theta}(\cdot \mid s)\rangle\bigr] - \mathbb{E}_{\nu_{\pi_{\hat\theta}}}\bigl[\langle A^{\pi_{\hat\theta}}(s, \cdot), \phi_{\hat\theta}(s, \cdot)^\top \Delta\theta \cdot \pi_{\hat\theta}(\cdot \mid s)\rangle\bigr] = \int_{\mathcal{S}} \sum_{a \in \mathcal{A}} A^{\pi_{\hat\theta}}(s, a) \cdot \Bigl(\bigl(\pi^*(a \mid s) - \pi_{\hat\theta}(a \mid s)\bigr)\,\mathrm{d}\nu^*(s) - \phi_{\hat\theta}(s, a)^\top \Delta\theta \cdot \pi_{\hat\theta}(a \mid s)\,\mathrm{d}\nu_{\hat\theta}(s)\Bigr) = \int_{\mathcal{S} \times \mathcal{A}} A^{\pi_{\hat\theta}}(s, a) \cdot \bigl(u_{\hat\theta}(s, a) - \phi_{\hat\theta}(s, a)^\top \theta\bigr)\,\mathrm{d}\sigma_{\pi_{\hat\theta}}(s, a) \leq \|A^{\pi_{\hat\theta}}(\cdot, \cdot)\|_{\sigma_{\pi_{\hat\theta}}} \cdot \|u_{\hat\theta}(\cdot, \cdot) - \phi_{\hat\theta}(\cdot, \cdot)^\top \theta\|_{\sigma_{\pi_{\hat\theta}}}, \tag{5.19}$$
where the second equality follows from (5.18) and the last inequality is from the Cauchy-Schwarz inequality. Note that $|A^{\pi_{\hat\theta}}(s, a)| \leq 2Q_{\max}$ for all $(s, a) \in \mathcal{S} \times \mathcal{A}$. Therefore, it follows from (5.19) that
$$(1 - \gamma) \cdot \bigl(J(\pi^*) - J(\pi_{\hat\theta})\bigr) \leq 2Q_{\max} \cdot \|u_{\hat\theta}(\cdot, \cdot) - \phi_{\hat\theta}(\cdot, \cdot)^\top \theta\|_{\sigma_{\pi_{\hat\theta}}}, \quad \forall \theta \in \mathcal{B}. \tag{5.20}$$
Finally, by taking the infimum of the right-hand side of (5.20) with respect to $\theta \in \mathcal{B}$, we obtain that
$$(1 - \gamma) \cdot \bigl(J(\pi^*) - J(\pi_{\hat\theta})\bigr) \leq 2Q_{\max} \cdot \inf_{\theta \in \mathcal{B}} \|u_{\hat\theta}(\cdot, \cdot) - \phi_{\hat\theta}(\cdot, \cdot)^\top \theta\|_{\sigma_{\pi_{\hat\theta}}},$$
which concludes the proof of Theorem 4.8.

5.3 Proof of Theorem 4.13

Proof. For notational simplicity, we write $\pi_i = \pi_{\theta_i}$ hereafter. In the following lemma, we characterize the performance difference $J(\pi^*) - J(\pi_i)$ based on Lemma 5.1.

Lemma 5.4. It holds that
$$(1 - \gamma) \cdot \eta \cdot \bigl(J(\pi^*) - J(\pi_i)\bigr) = \mathbb{E}_{\nu^*}\Bigl[D_{\mathrm{KL}}\bigl(\pi^*(\cdot \mid s) \,\|\, \pi_i(\cdot \mid s)\bigr) - D_{\mathrm{KL}}\bigl(\pi^*(\cdot \mid s) \,\|\, \pi_{i+1}(\cdot \mid s)\bigr) - D_{\mathrm{KL}}\bigl(\pi_{i+1}(\cdot \mid s) \,\|\, \pi_i(\cdot \mid s)\bigr)\Bigr] - H_i,$$
where $H_i$ is defined as
$$H_i = \underbrace{\mathbb{E}_{\nu^*}\Bigl[\bigl\langle \log\bigl(\pi_{i+1}(\cdot \mid s)/\pi_i(\cdot \mid s)\bigr) - \eta \cdot Q_{\omega_i}(s, \cdot), \,\pi^*(\cdot \mid s) - \pi_i(\cdot \mid s)\bigr\rangle\Bigr]}_{\textrm{(i)}} + \underbrace{\eta \cdot \mathbb{E}_{\nu^*}\bigl[\langle Q_{\omega_i}(s, \cdot) - Q^{\pi_i}(s, \cdot), \,\pi^*(\cdot \mid s) - \pi_i(\cdot \mid s)\rangle\bigr]}_{\textrm{(ii)}} + \underbrace{\mathbb{E}_{\nu^*}\Bigl[\bigl\langle \log\bigl(\pi_i(\cdot \mid s)/\pi_{i+1}(\cdot \mid s)\bigr), \,\pi_{i+1}(\cdot \mid s) - \pi_i(\cdot \mid s)\bigr\rangle\Bigr]}_{\textrm{(iii)}}. \tag{5.21}$$

Proof. See §D.7 for a detailed proof.

Here $H_i$ defined in (5.21) of Lemma 5.4 consists of three terms. Specifically, (i) is related to the error of estimating the natural policy gradient using (3.11). Also, (ii) is related to the error of estimating $Q^{\pi_i}$ using $Q_{\omega_i}$. Meanwhile, (iii) is the remainder term. We upper bound these three terms in §D.8. Combining these upper bounds, we obtain the following lemma.

Lemma 5.5.
Under Assumptions 4.2 and 4.12, we have E \u0014 |Hi| \u2212E\u03bd\u2217 h DKL \u0000\u03c0i+1(\u00b7 | s) \r \r\u03c0i(\u00b7 | s) \u0001i\u0015 \u2264\u03b72 \u00b7 (9R2 + M2) + \u03b7 \u00b7 (\u03d5\u2032 i + \u03c8\u2032 i) \u00b7 \u03b5Q,i + \u03b5i. Here the expectation is taken over all the randomness. Meanwhile, \u03d5\u2032 i and \u03c8\u2032 i are the concentrability coe\ufb03cients de\ufb01ned in (4.6) of Assumption 4.11, \u03b5Q,i is de\ufb01ned as \u03b5Q,i = E[\u2225Q\u03c0i \u2212Q\u03c9i\u2225\u03c2i], M is the absolute constant de\ufb01ned in Assumption 4.12, and \u03b5i is de\ufb01ned as \u03b5i = \u221a 2 \u00b7 R1/2 \u00b7 \u03b7 \u00b7 (\u03d5i + \u03c8i) \u00b7 \u03c4 \u22121 i \u00b7 n E \u0002 \u2225\u03bei(\u03b4i)\u22252 \u0003 + E \u0002 \u2225\u03bei(\u03c9i)\u22252 \u0003o1/2 (5.22) + O \u0000(\u03c4i+1 + \u03b7) \u00b7 R3/2 \u00b7 m\u22121/4 + \u03b7 \u00b7 R5/4 \u00b7 m\u22121/8\u0001 . Here \u03bei(\u03b4i) and \u03bei(\u03c9i) are de\ufb01ned in Assumption 4.10, where \u03b4i = \u03b7\u22121 \u00b7 (\u03c4i+1 \u00b7 \u03b8i+1 \u2212\u03c4i \u00b7 \u03b8i), while \u03d5i and \u03c8i are the concentrability coe\ufb03cients de\ufb01ned in (4.6) of Assumption 4.11. Proof. See \u00a7D.8 for a detailed proof. By Lemmas 5.4 and 5.5, we obtain that (1 \u2212\u03b3) \u00b7 E \u0002 J(\u03c0\u2217) \u2212J(\u03c0i) \u0003 \u2264\u03b7\u22121 \u00b7 E \u0014 E\u03bd\u2217 h DKL \u0000\u03c0\u2217(\u00b7 | s)\u2225\u03c0i(\u00b7 | s) \u0001 (5.23) \u2212DKL \u0000\u03c0\u2217(\u00b7 | s)\u2225\u03c0i+1(\u00b7 | s) \u0001i\u0015 + \u03b7 \u00b7 (9R2 + M2) + \u03b7\u22121 \u00b7 \u03b5i + (\u03d5\u2032 i + \u03c8\u2032 i) \u00b7 \u03b5Q,i, 28 \fwhere \u03b5Q,i is de\ufb01ned as \u03b5Q,i = E[\u2225Q\u03c0i \u2212Q\u03c9i\u2225\u03c2i], M is the absolute constant de\ufb01ned in Assumption 4.12, \u03b5i is de\ufb01ned in (5.22) of Lemma 5.5, and the expectations are taken over all the randomness. Recall that we set \u03b7 = 1/ \u221a T. Upon telescoping (5.23), we obtain that (1 \u2212\u03b3) \u00b7 min i\u2208[T] E \u0002 J(\u03c0\u2217) \u2212J(\u03c0i) \u0003 \u22641 \u2212\u03b3 T \u00b7 T X i=1 E \u0002 J(\u03c0\u2217) \u2212J(\u03c0i) \u0003 (5.24) \u2264 1 \u221a T \u00b7 \u0012 E \u0014 E\u03bd\u2217 h DKL \u0000\u03c0\u2217(\u00b7 | s) \r \r\u03c01(\u00b7 | s) \u0001i\u0015 + 9R2 + M2 \u0013 + 1 T \u00b7 T X i=1 \u0000\u221a T \u00b7 \u03b5i + (\u03d5\u2032 i + \u03c8\u2032 i) \u00b7 \u03b5Q,i \u0001 , where the expectations are taken over all the randomness and the last inequality follows from the fact that DKL \u0000\u03c0\u2217(\u00b7 | s) \r \r\u03c0T+1(\u00b7 | s) \u0001 \u22650, \u2200s \u2208S, \u2200\u03b8T+1 \u2208Rmd. In what follows, we upper bound the right-hand side of (5.24). Note that we set \u03c41 = 0. By the parameterization of policy in (3.2), it then holds that \u03c01(\u00b7 | s) is uniform over A for all s \u2208S and \u03b81 \u2208Rmd. Therefore, we obtain that DKL \u0000\u03c0\u2217(\u00b7 | s)\u2225\u03c01(\u00b7 | s) \u0001 \u2264log |A|, \u2200s \u2208S, \u2200\u03b81 \u2208Rmd. (5.25) Meanwhile, by Assumption 4.10, we have E \u0002 \u2225\u03bei(\u03b4i)\u22252 \u0003 \u2264 n E h E\u03c3i \u0002 \u2225\u03bei(\u03b4i)\u22252 2 \u0003io1/2 \u2264\u03c4 2 i \u00b7 \u03c3\u03be \u00b7 B\u22121/2, where the expectation E\u03c3i[\u2225\u03bei(\u03b4i)\u22252 2] is taken over \u03c3i given \u03b8i and \u03c9i, while the other expectations are taken over all the randomness. A similar upper bound holds for E[\u2225\u03bei(\u03c9i)\u22252]. 
Therefore, by plugging the upper bounds of \(\mathbb{E}[\|\xi_i(\delta_i)\|_2]\) and \(\mathbb{E}[\|\xi_i(\omega_i)\|_2]\) into \(\varepsilon_i\) defined in (5.22) of Lemma 5.5, we obtain from Assumption 4.11 that
\[
\sqrt{T}\cdot\varepsilon_i \le 2\sqrt{2}\,c_0\cdot R^{1/2}\cdot\sigma_\xi^{1/2}\cdot B^{-1/4} + O\bigl((\tau_{i+1}\cdot T^{1/2}+1)\cdot R^{3/2}\cdot m^{-1/4} + R^{5/4}\cdot m^{-1/8}\bigr).
\tag{5.26}
\]
Also, combining Assumption 4.11 and Proposition 4.3, it holds that
\[
(\phi'_i+\psi'_i)\cdot\varepsilon_{Q,i} \le 2c_0\cdot\mathbb{E}\bigl[\|Q^{\pi_i}-Q_{\omega_i}\|_{\varsigma_i}\bigr] = c_0\cdot O\bigl(R^{3/2}\cdot m^{-1/4} + R^{5/4}\cdot m^{-1/8}\bigr).
\tag{5.27}
\]
Finally, by plugging (5.25), (5.26), and (5.27) into (5.24) and setting \(\bar\epsilon_i(T) = \sqrt{T}\cdot\varepsilon_i + (\phi'_i+\psi'_i)\cdot\varepsilon_{Q,i}\), we complete the proof of Theorem 4.13." + }, + { + "url": "http://arxiv.org/abs/1902.06541v2", + "title": "Escape dynamics based on bounded rationality", + "abstract": "Bounded rationality plays a vital role in the collective behavior of the\nevacuation process, and investigating human behavior in such extreme\nsituations is a continuing concern within social psychology. In this paper, we\nconstruct a cellular automaton (CA) model for the escape dynamics and\nintroduce bounded rational behavior induced by heterogeneous information.\nNon-trivial behavior emerges in the replicator dynamics method with a mean\nfield approximation, where people's perception of the distribution of\npopulation and velocity is reduced to an average value in a certain direction.\nAnalyzing the escape efficiency shows that, under the premise of rationality,\nthe bounded rational strategies can achieve higher performance. Interestingly,\na quantifiable meta-stable state appears in the escape process, and the escape\ntime is power-law dependent on the system size.", + "authors": "Lingxiao Wang, Yin Jiang", + "published": "2019-02-18", + "updated": "2019-05-03", + "primary_cat": "physics.soc-ph", + "cats": [ + "physics.soc-ph", + "nlin.CG" + ], + "main_content": "1. Introduction

Public security is the cornerstone of national and social stability. In addition to the direct loss of life and property caused by natural disasters, crowd congestion in emergencies often leads to disaster (e.g., clogging and stampedes[1, 2, 3]). It is therefore important to understand collective behavior patterns in emergencies. Recent models and experiments show that the characteristics of group movement emerge as short intermittent bursts[4, 5, 6]. When the desire to escape danger exceeds the tendency to avoid collisions, people's behavior pattern changes from order to disorder. However, our understanding of this transition is still limited. Although research on escape dynamics in emergencies has a long history[7, 8], it was not until the 1990s that the dynamics of collective behavior attracted wide attention[3, 9]. Subsequently, game theory, decision theory, communication models, and queuing models have also been comprehensively applied[10]. However, due to the lack of individual self-organization, the predictions of many of these results deviate from the actual situation.
Some works start from hydrodynamic models to study the collective behavior of the population[11, 2]; this approach successfully explained group behavior during the pilgrimage to Mecca, and non-trivial macroscopic patterns can also be obtained in these hydrodynamic models[6, 12]. However, macroscopic models are sometimes coarse and miss individual-level information, while microscopic models, represented by the social force model, the cellular automata model, and the magnetic field force model[9, 1, 13, 14, 15], can give a more complete description of the escape dynamics. Simulating collective behavior in emergencies has become more convenient with increasing computing capability. Simulation results, such as the "faster-is-slower" phenomenon, can also be verified by experiments on different groups (e.g., people, vehicles, ants, sheep, and microbial populations)[4, 16, 17, 18, 19]. With the development of sensor technology and the improvement of microchip computing power, it has become possible to collect richer data and to use more realistic methods to simulate the escape process[20, 21, 22, 23, 24]. This is of practical significance because disasters may happen in different complex environments[25]. It is natural to introduce game theory into evacuation models, since it can include the interactions between people (and the environment) self-consistently[26, 27]. Embedded game actions occur mainly in conflicts, where people decide to advance or to avoid others according to a pay-off matrix[28, 29, 30, 31, 32]. However, handling conflicts is only one part of escape dynamics; how human beings make decisions induced by specific circumstances is also an essential part[33]. In this paper, we introduce replicator dynamics into the evacuation model, which allows people's response to the external environment to be included in the decision-making process. Besides, some research focuses on behavior itself, since escape dynamics provides an extreme case in which to investigate collective behavior[21, 15, 34, 35]. Diverse and fascinating collective behaviors occur in both virtual and real space[36, 37]: social networks, financial networks, and social norms are virtual social connections that naturally incubate collective behavior; as for real space, collective modes are common in urban dynamics, traffic flow, and pedestrian dynamics. Therefore, escape dynamics provides us with an extreme scene in which human instinct dominates[38, 23], giving us a chance to effectively study human behavior itself without complex social relations. Based on this point, this work introduces bounded rationality[39, 40] from behavioral economics into the escape dynamics through the replicator dynamics method[41, 42]. A cellular automaton model is used to model the escape dynamics within a closed boundary, and the influence of boundedly rational behavior strategies on collective behavior is investigated using a mean field approximation[43]. Escape efficiency is affected by the environment and by heterogeneous information processing. We also analyze the group and individual escape times, giving a possible picture for understanding the connection between collective behavior and individual action.

2. Methods

2.1. Cellular Automaton Model

We construct a cellular automaton model for simulating pedestrian flow in a two-dimensional system. The underlying structure is an L × L cell grid, where L is the system size.
Each cell can be empty, occupied by exactly one pedestrian, or occupied by a wall. Setting the cell size to 0.5 m × 0.5 m makes this an instructive setting for simulating escape when a disaster happens. The model adopts the Moore neighborhood, and pedestrians update their positions according to the transition matrices T(i, t),
\[
T(i,t) = \begin{pmatrix} P_{1,1}(i,t) & P_{1,2}(i,t) & P_{1,3}(i,t) \\ P_{2,1}(i,t) & P_{2,2}(i,t) & P_{2,3}(i,t) \\ P_{3,1}(i,t) & P_{3,2}(i,t) & P_{3,3}(i,t) \end{pmatrix},
\tag{1}
\]
where \(P_{m,n}(i,t)\) is the possibility that pedestrian i moves from its position \((x(i,t), y(i,t))\) at time t to the corresponding position at the next time step. The neighbors' directions are labeled by \((m,n)\), with \(m, n = 1, 2, 3\). At every time step, a pedestrian can choose to move to a new position or to stop. Once the location of the exit is chosen, the synchronously updated cellular automaton can imitate the escape process[13, 44].

2.2. Heterogeneous Information and Bounded Rationality

Bounded rationality is formalized with such major variables as incomplete information, information processing, and a non-traditional policy-maker target function[40]. Heterogeneous information could be the reason why people show irrationality[45, 38, 20, 41]. The extreme situation of escaping from a disaster constrains people's behavior: only intuition or social habits remain, with no long-term trade-offs. Replicator dynamics modeling[41, 42] can link the different behaviors, whether practical or spiritual, during the escape process. The transition possibility P(i, t) derives from the following definition:
\[
P_{m,n}(i,t) = \frac{B_{m,n}(i,t)\,R_{m,n}(i,t)}{\sum_{m,n} B_{m,n}(i,t)\,R_{m,n}(i,t)},
\tag{2}
\]
where \(R(i,t)\) and \(B(i,t)\) denote the weights from the rational and the bounded-rational parts, respectively. The components of the matrices are defined as
\[
R_{m,n}(i,t) = O_{m,n}(i,t)\,E_{m,n}(i,t), \qquad
O_{m,n}(i,t) = \begin{cases} 1 & \text{empty} \\ \epsilon & \text{occupied} \end{cases}, \qquad
E_{m,n}(i,t) = \begin{cases} \alpha & \text{exit} \\ \epsilon & \text{nothing} \end{cases},
\tag{3}
\]
which means that if the position \((m,n)\) around individual i at time t is empty, then \(O_{m,n}(i,t) = 1\); otherwise it takes the value \(\epsilon\). Likewise, \(E_{m,n}(i,t) = \alpha\) only holds when the exit direction is indicated by \((m,n)\); otherwise it takes the value \(\epsilon\). Here \(\epsilon\) is the minimum value that the calculation accuracy can reach. The parameter \(\alpha\) represents the attraction of the exit to persons who want to escape, or the importance of the information about the exit position. The definition of the bounded-rational part \(B_{m,n}\) relies on the heterogeneous information from the crowd. Inspired by the transport models of statistical physics, escape dynamics needs more information, namely the persons' position and velocity distributions, the basic variables of transport theory. Considering that full information cannot easily be obtained by individuals, the mean-field approximation can provide a global perception for the people on the move, as follows:
\[
B_{m,n}(i,t) = \begin{cases} 1 & \text{rational} \\ n_{m,n}(i,t) & \text{crowd} \\ v_{m,n}(i,t) & \text{follower} \end{cases}.
\tag{4}
\]
The rational strategy means that the transition possibility is decided only by R(i, t), that is, by the neighbor occupancy states and the direction of the exit: the objective environment. The crowd strategy defines \(n_{m,n}(i,t) = \sum_{(m,n)} N(i,t) / \sum_{\mathrm{All}} N(i,t)\), where \(N(i,t)\) is the population distribution at time t. This definition gives the proportion of individuals in the \((m,n)\) orientation as a mean-field approximation, and people will be attracted to the direction with higher density. We use it to mimic "crowd" behavior for individuals, which also means people can potentially access more population-density information.
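To make Eqs. (2)-(4) concrete, the following is a minimal Python sketch of how the weights could be evaluated for one pedestrian; it is an illustration under our own assumptions (the grid encoding with 0 = empty, 1 = pedestrian, 2 = wall, the epsilon value, and the helper names are not from the paper):

import numpy as np

EPS = 1e-12   # assumed tiny value standing in for epsilon in Eq. (3)

def transition_matrix(grid, pos, exit_dir, B=None, alpha=10.0):
    """Transition possibilities P_{m,n} of Eq. (2) for the pedestrian at pos.
    grid: 2D int array (0 = empty, 1 = pedestrian, 2 = wall), assumed encoding;
    exit_dir: neighbor offset (dm, dn) pointing toward the exit;
    B: optional 3x3 bounded-rational weights; defaults to the rational strategy."""
    R = np.empty((3, 3))
    for m in range(3):
        for n in range(3):
            dm, dn = m - 1, n - 1
            x, y = pos[0] + dm, pos[1] + dn
            # O_{m,n} = 1 if the Moore neighbor is empty (staying put is allowed)
            O = 1.0 if (dm, dn) == (0, 0) or grid[x, y] == 0 else EPS
            # E_{m,n} = alpha in the exit direction, epsilon otherwise
            E = alpha if (dm, dn) == exit_dir else EPS
            R[m, n] = O * E                    # Eq. (3)
    if B is None:
        B = np.ones((3, 3))                    # rational strategy in Eq. (4)
    return B * R / np.sum(B * R)               # Eq. (2), normalized weights

With the crowd strategy, B would instead hold the directional population fractions n_{m,n}; with the follower strategy, the directional velocity fractions v_{m,n} defined next.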
As for the follower strategy, \(v_{m,n}(i,t) = \sum_j \mathbf{1}[\vec{v}(j,t) = (m,n)] / \sum_{\mathrm{All}} N(i,t)\), where \(\vec{v}(j,t)\) is the velocity distribution at time t. The proportion of individuals moving in the \((m,n)\) orientation is extracted, and people follow others according to this weight, which transfers more potential velocity information to people. The latter two strategies show that an individual can process heterogeneous information: the population and velocity of all persons. The population \(n(x,y,t)\) and velocity \(v(x,y,t)\) of the crowd are, in reality, global continuous distributions that affect human behavior indirectly, since people can gather and process information from the environment[20, 45]. In this work, the distributions are discrete and individuals process them as a background; that is what the definitions above mean. People's perception of the distribution is reduced to the average value in a certain direction, a mean background field, just as statistical physics does in a many-body system.

2.3. Evolution Rules

The escape rules of the model are given as follows (a minimal code sketch of one update step is shown after the list):

Step 1. Initialization. Set the position of the exit (x, y) and generate the population distribution N(i, t) on the L × L lattice. At time t = 0, the disaster occurs and individuals begin to move.

Step 2. Evolution. At time step t, individual i moves to its next position at time step t + 1 according to the transition matrix T(i, t). All individuals are updated synchronously, and conflicts are handled by comparing transition possibilities.

Handling conflicts. A conflict occurs when two or more persons want to move into the same position; it is handled by comparing their transition possibilities P_{m,n}(i, t), which reflect their willingness to move. For example, suppose individuals j and k both want to move into position (x, y), with possibilities P_{m,n}(j, t) and P_{m',n'}(k, t), respectively. If P_{m,n}(j, t) > P_{m',n'}(k, t), then individual j moves successfully and k stays where it was, and vice versa. For equal cases, one is randomly selected. This rule extends easily to the situation of many people.

Step 3. Escape. An individual whose destination at the next time step is the exit escapes successfully: remove it from the lattice and reduce the population as N(t + 1) = N(t) - 1. If N(t) = 0, the escape stops.

Step 4. Update. Update the transition matrices according to the above strategies and turn to Step 2. The t-th time step of the escape is finished.
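The sketch below implements the synchronous update of Steps 2-3, including the conflict rule; the data layout and the tie-breaking details are our own simplifying assumptions:

import numpy as np

def escape_step(grid, pedestrians, prob_fn, exit_cells, rng):
    """One synchronous CA time step.
    pedestrians: dict id -> (x, y); prob_fn(grid, pos) returns the 3x3 P of Eq. (2);
    exit_cells: set of lattice positions that count as the exit."""
    proposals = {}                                   # target cell -> candidate movers
    for pid, (x, y) in list(pedestrians.items()):
        P = prob_fn(grid, (x, y))
        k = rng.choice(9, p=P.ravel())               # sample a move from T(i, t)
        m, n = divmod(k, 3)
        tgt = (x + m - 1, y + n - 1)
        proposals.setdefault(tgt, []).append((P[m, n], pid, (x, y)))
    for tgt, cands in proposals.items():
        best = max(c[0] for c in cands)              # highest willingness wins
        winners = [c for c in cands if c[0] == best]
        _, pid, src = winners[rng.integers(len(winners))]   # random tie-break
        if tgt in exit_cells:                        # Step 3: escape and remove
            grid[src] = 0
            del pedestrians[pid]
        elif grid[tgt] == 0 and tgt != src:          # move only into an empty cell
            grid[src], grid[tgt] = 0, 1
            pedestrians[pid] = tgt
    return grid, pedestrians

Calling escape_step repeatedly until the pedestrian dictionary is empty reproduces the loop of Steps 2-4.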
3. Results and Discussions

3.1. Dynamics Simulation

Figure 1: Evolution sample. Initial population ratio ρ0 ≈ 0.3, lattice size L = 20, exit size 2, and parameter α = 10 with the rational strategy.

First, we build an escape-dynamics simulation frame based on the escape rules, into which the three different strategies for processing heterogeneous information are added. As an example, Fig. 1 depicts a typical escape process: at the beginning, individuals are randomly distributed on the L × L lattice with initial population ratio ρ0; they then move toward the exit direction according to the parameter α, which encodes how important the exit information is for them; by time step t = 150, people have escaped from the disaster area. In Fig. 2, we show the arch-like blocking found in other simulations and experiments[4, 16, 18, 19], which indicates that the escape-dynamics model captures the key features of the flock-clogging problem.

Figure 2: Arch-like blocking. Initial population ratio ρ0 ≈ 0.3, lattice size L = 50, exit size 2, and parameter α = 10, at time step t = 80 with the rational strategy.

Fig. 3 shows the escape efficiency over the time steps t ∈ [0, 100], with lattice size L = 30 and exit size 2. Over time, all three behavior strategies drive groups to flee the disaster area. Figs. 3a to 3c have different ρ0, and the high-initial-population cases escape faster than the lower ones; this is because the low-initial-population case has more empty space, which reduces the escape chances. As for the three behavior strategies, they have similar escape efficiency in the higher-ρ0 case, where the population decrease is almost the same over 100 time steps. The three behavior strategies show relatively large differences in low-density situations: over the same period, the crowd strategy let more people escape, the follower strategy came second, and the rational strategy was relatively the worst. These results illustrate that the "bounded rationality" strategies can refine information from the environment, which makes people perform better in the low-density case. Figs. 3c and 3d have different α, and the high-α case escapes faster than the lower one. This makes sense because α represents the importance of the exit; it could be the intensity of exit signs or how clearly people know the exit information. In Fig. 3d, with exit-information parameter α = 1, people random-walk at times, and the crowd strategy shows a more powerful capability to drive people to the exit.

3.2. Collective Behavior and Meta-stable State

To investigate the factors affecting collective behavior more quantitatively, we define the escape ratio p, the corresponding group escape time t_p, and the individual escape time t_i. Here t_p is the time taken to evacuate a fraction p of the population, and \(t_i = \sum_{n=1}^{N(t_p)} t_n / (N(0)\times p)\) is the average time, where t_n is the time taken to evacuate the n-th person. Figs. 4 and 5 show the escape times for the crowd strategy at different parameters (α, p), with lattice size L = 30, initial population ratio ρ0 = 0.5, and exit size 2. The parameter α plays a key role in the escape dynamics: as Fig. 4 shows, the escape time decays exponentially with this exit parameter. In the small-α region, increasing α pushes people to escape faster; in the large-α region, the effect of further increase is limited. The results reflect that there is an effective range for the relative importance of exit information in the dynamical process.
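Given the per-person escape steps t_n recorded by the simulation, the two observables defined above reduce to a few lines; this is a sketch, and the sorted-array input is our assumption:

import numpy as np

def escape_times(t_n, n0, p):
    """Group time t_p and individual time t_i at escape ratio p.
    t_n: ascending array of per-person escape time steps; n0 = N(0)."""
    k = int(np.ceil(n0 * p))            # number of people that must have escaped
    t_p = t_n[k - 1]                    # time to evacuate a fraction p
    t_i = t_n[:k].sum() / (n0 * p)      # t_i = sum_{n=1}^{N(t_p)} t_n / (N(0) p)
    return t_p, t_i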
Figure 3: Escape efficiency under the three strategies (rational, crowd, follower) over t ∈ [0, 100]: (a) ρ0 ≈ 0.12, α = 10; (b) ρ0 ≈ 0.51, α = 10; (c) ρ0 ≈ 0.3, α = 10; (d) ρ0 ≈ 0.3, α = 1.

The other statistical result is shown in Fig. 5, where the group escape time t_p increases nearly exponentially with the escape ratio and the individual escape time t_i increases nearly linearly. This implies that the time taken to rescue more people rises sharply, which reveals the conflict of interest between the individual and the group. The above results are independent of initial conditions, as every case was run 20 times with different random initial population distributions. This nonlinear dependence inspired us to study the influence of system size on the escape time. The results are exhibited in Fig. 6; the other system parameters are initial population ratio ρ0 = 0.5, parameter α = 10, p = 0.9, and the same exit size as before. Both the group and individual escape times satisfy ln(t_{p/i})/ln(L) ≈ 1.8, an exponent deviating slightly from that of the system area L². This can be treated as a signal of critical self-organizing behavior in the escape dynamics[36, 35]. At the same time, we notice that the arch-like blocking is a corresponding phenomenon during the critical process. It can be understood as a meta-stable state, since the empty lattice is the stable final state. We extract the edge curve by parabola fitting (y = a_2(t)x² + a_1(t)) from the escape maps (e.g., Fig. 7, a sample at L = 40, α = 10, ρ0 = 0.3), and this meta-stable state emerges in the mid-time term (t ∈ [20, 120] for this sample).

Figure 4: The escape time decreases with α at p = 0.9. Figure 5: The escape time increases with p at α = 10. (Both figures plot the group time t_p and the individual time t_i, against the parameter α and the escape ratio p, respectively.)

Here the pixel positions of the corresponding evolution patterns are represented by the coordinate axes (x, y), and the position of the exit is (0, 200). As Fig. 7 shows, the edge of the population pushes forward and becomes narrower. It can be described by the two fitting parameters, a_1(t) = 0.1512t + 27.53 and a_2(t) = 3.986 × 10⁻⁴t + 0.04354, in this sample. As time goes on, the opening of the parabola becomes narrower until the meta-stable state disappears. Even though the parabola fitting is not very accurate, it still reflects such a meta-stable propulsion process.
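Both fits reported here, the size scaling t ∝ L^k and the parabolic edge y = a_2(t)x² + a_1(t), are plain least-squares fits; a brief sketch follows, with the measured arrays assumed as inputs:

import numpy as np

def fit_power_law(L, t):
    """Fit t = c * L^k by linear regression in log-log space."""
    k, logc = np.polyfit(np.log(L), np.log(t), 1)
    return np.exp(logc), k

def fit_edge_parabola(x, y):
    """Fit the population edge y = a2*x^2 + a1 (no linear term, by symmetry)."""
    a2, a1 = np.polyfit(x**2, y, 1)     # regress y on x^2
    return a1, a2

# e.g., c, k = fit_power_law(np.array([10, 20, 30, 40, 50]), t_measured)
# the paper's fits correspond to t_p ≈ 0.664 L^1.794 and t_i ≈ 0.239 L^1.834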
Figure 6: The scaling of escape time with system size, for L = 10 to 50. The fits give t_p = 0.6642 L^{1.794} for the group time and t_i = 0.2385 L^{1.834} for the individual time.

Figure 7: The parabola fitting for the population edge at t = 25, 50, 75, and 100, in pixel coordinates (x, y).

4." + } + ], + "Wenwen Li": [ + { + "url": "http://arxiv.org/abs/2401.08787v1", + "title": "Segment Anything Model Can Not Segment Anything: Assessing AI Foundation Model's Generalizability in Permafrost Mapping", + "abstract": "This paper assesses trending AI foundation models, especially emerging\ncomputer vision foundation models and their performance in natural landscape\nfeature segmentation. While the term foundation model has quickly garnered\ninterest from the geospatial domain, its definition remains vague. Hence, this\npaper will first introduce AI foundation models and their defining\ncharacteristics. Built upon the tremendous success achieved by Large Language\nModels (LLMs) as the foundation models for language tasks, this paper discusses\nthe challenges of building foundation models for geospatial artificial\nintelligence (GeoAI) vision tasks. To evaluate the performance of large AI\nvision models, especially Meta's Segment Anything Model (SAM), we implemented\ndifferent instance segmentation pipelines that minimize the changes to SAM to\nleverage its power as a foundation model. A series of prompt strategies was\ndeveloped to test SAM's performance regarding its theoretical upper bound of\npredictive accuracy, zero-shot performance, and domain adaptability through\nfine-tuning. The analysis used two permafrost feature datasets, ice-wedge\npolygons and retrogressive thaw slumps because (1) these landform features are\nmore challenging to segment than manmade features due to their complicated\nformation mechanisms, diverse forms, and vague boundaries; (2) their presence\nand changes are important indicators for Arctic warming and climate change. The\nresults show that although promising, SAM still has room for improvement to\nsupport AI-augmented terrain mapping. The spatial and domain generalizability\nof this finding is further validated using a more general dataset EuroCrop for\nagricultural field mapping. Finally, we discuss future research directions that\nstrengthen SAM's applicability in challenging geospatial domains.", + "authors": "Wenwen Li, Chia-Yu Hsu, Sizhe Wang, Yezhou Yang, Hyunho Lee, Anna Liljedahl, Chandi Witharana, Yili Yang, Brendan M. Rogers, Samantha T. Arundel, Matthew B. Jones, Kenton McHenry, Patricia Solis", + "published": "2024-01-16", + "updated": "2024-01-16", + "primary_cat": "cs.CV", + "cats": [ + "cs.CV" + ], + "main_content": "1. Introduction

Recent advances in large language models (LLM), such as OpenAI's GPT (Generative Pre-Trained Transformer), have brought a new wave of important changes to AI. LLMs are trained on millions of web documents (as natural language text) to predict the next or missing words in a sentence, which can be accomplished in a self-supervised manner (Devlin et al. 2019).
These models adopt new architectures and cutting-edge learning strategies (e.g., transformer and multi-head attention; Vaswani et al. 2017) and are capable of extracting language structures and patterns, enabling them to handle multiple new tasks, such as question answering and code debugging, for which they were not originally trained. As a result, LLMs have gained a dramatic capability in generalization and language understanding, and they can serve as the foundation for a wide variety of downstream tasks. This is why the term "foundation model" was coined in 2019. What is an AI foundation model? According to Bommasani et al. (2021), foundation models are trained with massive data to derive an unprecedented ability for knowledge representation, which can further be adapted to solve multiple problems across a diverse set of domains. The technology that enables foundation models to train on data at scale is self-supervised learning (Devlin et al. 2019). In traditional machine learning (ML), feature engineering is required, and this manual process limits ML's ability to handle big data. The deep learning paradigm in AI enabled automated feature extraction and representation during the model training process; to achieve good model generalizability, however, a large human-annotated dataset was needed. With the emergence of self-supervised learning (for example, contrastive learning and self-distillation), AI models can finally go beyond the limitation of labeled training data and conduct at-scale learning to obtain a more powerful representation from rich, unlabeled big data. A second commonly viewed characteristic of foundation models is that they are large models, consisting of high-depth neural networks with millions to hundreds of millions of parameters and smart learning mechanisms (such as attention and gating structures) to capture long-range, multiplicative interactions within the raw data. A third and perhaps the defining characteristic of foundation models is their ability for domain adaptation, which concerns a model's ability to conduct pre-training on one data distribution and adapt it to solve a different domain problem through very little extra training effort. When domain adaptation can be achieved with no requirement for training on new datasets, it is called zero-shot learning. When a few samples need to be offered to the model to adapt it to the new task, it is called few-shot learning. Because the goal of foundation models is to substantially alleviate the labeling cost of supervised learning, their domain adaptation ability has become key to evaluating a model's generalizability and knowledge transferability. Embracing foundation models for environmental and social science research has garnered significant interest from the geospatial community. As an emerging concept and exciting new development, foundation models offer the prospect of reducing model development time for individual researchers and of gaining the capability needed to analyze the ever-increasing amount of geospatial data. However, in comparison to the research progress in constructing foundation models for natural language processing (NLP), the advance of vision foundation models is still catching up in speed. This is because of the different learning goals between language and vision tasks. Many NLP tasks can be formulated as text-to-text processing, which involves taking natural language as input and providing a natural language response as output (Brown et al. 2020).
When the training data reach a certain size, such LLMs (e.g., ChatGPT) are able to gain incredible generalizability and can be adapted to a diverse set of NLP tasks even without fine-tuning. The success of ChatGPT provides evidence for the power of such LLMs. In contrast, not all computer vision models can be designed as generative models. The goal of vision tasks varies depending on the required granularity of image analysis, be it visual question answering, image reconstruction, image-level classification, object detection, or instance segmentation. Due to this variability, it is difficult to derive a generalized model for vision tasks, and the emerging vision foundation models therefore remain task-specific. In this research, we aim to assess the strengths and weaknesses of emerging vision foundation models, especially the Segment Anything Model, in their capacity to support GeoAI vision tasks for permafrost mapping. This model was chosen as it is the first publicly released large vision model for image segmentation and one of the few that are open-sourced and allow for model adaptation using geo-domain data. Two challenging permafrost datasets, ice wedge polygons and retrogressive thaw slumps, were chosen for evaluating SAM's performance in zero-shot prediction and knowledge-embedded learning, as well as its prediction accuracy with model integration and fine-tuning. The results are compared with those from a cutting-edge model, MViTv2 (Li et al. 2022c), based on supervised learning and a multi-scale, transformer-based architecture. The rest of the paper is organized as follows: Section 2 reviews the growing list of vision foundation models, their model architectures, and the vision tasks they support. Section 3 describes the datasets and a series of strategies we developed to evaluate SAM's instance segmentation performance, as well as the results. Section 4 summarizes our findings and discusses the strengths and limitations of SAM for AI-augmented permafrost mapping. The extension of this workflow to other geospatial problems, such as agricultural field mapping, is also discussed. Section 5 concludes the paper and proposes future research directions.

2. AI foundation models for GeoAI vision tasks: Recent progress

As Li and Hsu (2022) pointed out, the evolution of GeoAI may be seen in three phases: (I) import, to bring AI technology into geography and apply it to a domain problem; (II) adaptation, to develop problem-specific strategies to improve general AI models to achieve better performance in various geospatial tasks; and (III) export, the integration of geospatial principles and knowledge back into AI to develop innovative models that help better solve both geospatial and aspatial problems. Exciting progress has been made in the past few years, and the field is moving quickly beyond Phase I toward Phases II and III. For adopting a new technology such as foundation models, we still need to go through these three phases, and the study of GeoAI foundation models is clearly at the initial, exploratory phase (Phase I). Excitingly, there has already been preliminary research toward developing foundation models in the field of remote sensing image analysis. For example, Cha et al. (2023) reported a billion-scale foundation model tailored from the Vision Transformer (ViT) based model. It is constructed with 2.4 billion parameters and pre-trained on the MillionAID dataset (Long et al. 2021).
MillionAID is annotated for image scene classification, with each training image assigned a scene label, such as dry land, oil field, or sports land. Next, the model is further fine-tuned to support two downstream tasks: rotated object detection and semantic segmentation. The results show that the proposed large model performs better overall than other models, such as RetinaNet (Lin et al. 2017) and Masked Auto Encoding (MAE; Wang et al. 2022), on the two remote sensing image processing tasks. It also confirms that when a model is trained on a larger, easier-to-retrieve dataset (MillionAID contains 1 million image scenes) and fine-tuned on other, smaller datasets, its performance can be improved, as the model becomes more "knowledgeable" by learning from bigger data.

Table 1. Comparison of data and model operation space for foundation models and (pre-)foundation models. NLP: natural language processing; CV: computer vision; M: million, B: billion, T: trillion.

Organization | Model | Data size | Model parameter space | Released | Task | Open source?
Google | BERT | BooksCorpus (800M words) and English Wikipedia (2,500M words) | 110M (base), 340M (large) | 2018 | NLP | ✓
OpenAI | GPT-3 | 570 GB text data from various sources | 175B | 2020 | NLP | -
Google | GLaM | 1.6T tokens | 1.2T | 2021 | NLP | -
DeepMind | Gopher | 300B tokens | 280B | 2021 | NLP | -
Google | PaLM | 780B tokens | 540B | 2022 | NLP | -
OpenAI | GPT-3.5 | 300B tokens | 175B | 2022 | NLP | -
Meta | LLaMA | 1.4T tokens | 65B | 2023 | NLP | ✓
OpenAI | GPT-4 | N/A | 1T | 2023 | NLP | -
OpenAI | CLIP | 400M image-text pairs | 150M | 2021 | NLP & CV | ✓
OpenAI | DALL·E | 250M text-image pairs | 12B | 2021 | NLP & CV | -
Microsoft | Florence | 900M image-text pairs | 893M | 2021 | NLP & CV | -
Meta | SAM | 11M images with 1B masks | 636M | 2023 | CV | ✓
Cha et al. 2023 | ViT G12x4 | 1M images | 2.4B | 2023 | CV | -
IBM-NASA | Prithvi | 30-meter, 6-band satellite imagery over the continental US | 100M | 2023 | CV | ✓

In the computer vision field, several big technology organizations, such as Meta, OpenAI, and Microsoft, have put effort into developing vision foundation models (Table 1). In 2021, OpenAI released CLIP (Contrastive Language-Image Pre-training; Radford et al. 2021) as a transferable visual model. The model learned feature representations by matching pairs of text (captions) and images during the pre-training phase, comparing the similarity between the encoded text and image information. This way, if an image is described as "a picture of a dog," the model will learn the representation of "dog" from the images through natural language supervision. Based on this information, CLIP can achieve zero-shot prediction of image scene classes by finding the textual category with the highest similarity to the image's content. CLIP can be adapted to support traditional vision tasks, such as image classification and video action recognition. The model's performance was tested on over 30 vision tasks, and the results demonstrated CLIP's comparable zero-shot performance on large benchmark datasets, such as ImageNet and ObjectNet. However, CLIP's performance was less satisfying when analyzing images not included in its pre-training datasets. Hence, limitations in the size and distribution of the training data will directly affect performance on downstream tasks. Furthermore, CLIP cannot conduct fine-granularity tasks, such as object detection and instance segmentation. Microsoft has developed a computer vision foundation model named Florence, which is also trained on a large set of image-text pairs (900 million) collected from the Internet.
Florence aims to achieve zero-shot transfer learning by expanding the representation from coarse to fine granularity, from static images to dynamic scenes, and from RGB images to images with multiple modalities and channels. This way, Florence can be adapted to more vision tasks than CLIP, including object detection. Florence's pre-training model uses two-tower pipelines, including a CLIP language encoder and a Swin Transformer-based image encoder, to encode the text and image data. It uses a contrastive loss function that classifies all image and text description pairs that can be mapped to a unique caption as positive samples and the rest as negative samples. Florence also adopts advanced learning strategies, such as a dynamic head, to achieve better adaptation capability for downstream tasks. The model was tested on over 30 datasets on multiple image analysis tasks, including image classification and object detection. The results show that Florence achieves better zero-shot transfer learning than other large visual models, including CLIP. On object detection, it also achieves state-of-the-art performance compared to other cutting-edge models. However, note that object detection with Florence is not zero-shot learning, and the model needs to be fine-tuned for optimal performance. Although promising, Florence is not open-sourced, rendering it difficult to access. Moreover, the model was trained on 512 NVIDIA A100 GPUs for several days, making training and retraining the model an expensive computational process. In April 2023, Meta AI released the Segment Anything Model (SAM), which, for the first time, has the power to perform zero-shot image segmentation for more challenging vision tasks. Although AI models such as CLIP and Florence enable vision tasks by associating images with their descriptive text to capture image-scene-level semantics, their ability to identify instance-level semantics within the scene remains weak. SAM enables this capability through a powerful image encoder-decoder-based architecture. A large transformer model is used as the image encoder, and at the decoding phase, the model requires a prompt from the user input to generate the object mask (or object boundary). As shown in Figure 1(a), the input prompt can be a point, a box, or some text to indicate targets of interest. SAM's pre-training went through three stages: (1) supervised training on a small set of images with manually annotated instance masks; (2) semi-automated training by incorporating annotated masks that are ambiguous for SAM; and (3) fully automated training on the entire 11-million-image dataset. SAM's advantage is its ability to represent features by learning from a vast number of images, and it can also help accelerate object annotation for different segmentation tasks through user-machine interaction. Soon after its release, several integrated models (Jiang 2023) were developed to support downstream tasks by incorporating SAM's object segmentation ability. For example, SAM is connected with CLIP to predict each segmented mask's (object's) category, such as water. Figure 1 presents the SAM+CLIP integrated model architecture. SAM's outputs (segmented masks) are fed to CLIP's input, and by providing CLIP a text prompt (e.g., water), CLIP will output all the masks that belong to "water" through text-image similarity matching. This way, instance segmentation can be achieved.

Figure 1. Architecture of SAM (left of the dashed line) and CLIP (right of the dashed line) and their combined workflow for instance segmentation.
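As a rough illustration of this workflow, the publicly released segment-anything and CLIP packages can be chained as shown below. This is a minimal sketch of the idea, not the authors' released code; the checkpoint path, file names, prompt wording, and acceptance threshold are our assumptions:

import numpy as np, torch, clip
from PIL import Image
from segment_anything import sam_model_registry, SamAutomaticMaskGenerator

device = "cuda" if torch.cuda.is_available() else "cpu"
sam = sam_model_registry["vit_h"](checkpoint="sam_vit_h.pth").to(device)
mask_generator = SamAutomaticMaskGenerator(sam)       # class-agnostic masks
clip_model, preprocess = clip.load("ViT-B/32", device=device)
text = clip.tokenize(["ice wedge polygon", "background"]).to(device)

image = np.array(Image.open("tile.png").convert("RGB"))
keep = []
for m in mask_generator.generate(image):              # SAM: segment everything
    crop = image.copy()
    crop[~m["segmentation"]] = 0                      # black out non-mask pixels
    x = preprocess(Image.fromarray(crop)).unsqueeze(0).to(device)
    with torch.no_grad():
        logits, _ = clip_model(x, text)               # CLIP: text-image matching
        probs = logits.softmax(dim=-1)[0]
    if probs[0] > 0.5:                                # assumed acceptance rule
        keep.append(m)                                # mask labeled as the class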
Based on the above analysis, and considering model maturity, capability, and open-sourceness, in this paper we select the SAM model to assess its performance in segmenting natural features. Different from other works that apply SAM in remote sensing, our instance segmentation pipelines retain SAM's entire architecture instead of using some of its submodules, such as the feature extraction backbones, in the development of the instance segmentation pipeline. This design maximizes the adoption of SAM as a foundation model, emphasizing easy reuse and requiring minimal additional effort in model development. In the next section, we describe the datasets used, our experimental design, and the various prompt strategies developed to comprehensively assess SAM's instance segmentation performance.

3. Data and experiments

Kirillov et al. (2023) evaluated SAM's performance on multiple vision tasks, including edge detection, object proposal generation, and instance segmentation, all in a zero-shot manner. The results show that, even though SAM was not trained for edge detection, it is capable of generating reasonable edge maps on the benchmark dataset BSDS500 (Martin et al. 2001). This provides significant evidence for SAM's task adaptation and transfer learning abilities. For mid-level vision tasks, such as object proposal generation, SAM has exhibited remarkable zero-shot performance in segmenting medium and large objects, outperforming a strong benchmark model, ViTDet-H (Li et al. 2022a), based on supervised learning. This experiment was conducted on the challenging vision dataset LVIS (Large Vocabulary Instance Segmentation; Gupta et al. 2019). Regarding high-level vision tasks, although SAM's zero-shot segmentation accuracy is lower than that of the supervised model ViTDet-H, the gap is small (8.82% on the benchmark COCO dataset [Lin et al. 2014] and 4.08% on the LVIS dataset). Furthermore, its mask quality was rated higher than that of other supervised models in human studies.

3.1. Datasets

While these studies demonstrate SAM's satisfying performance on general computer vision benchmark datasets, its domain adaptation capability for segmenting natural landscape features remains unknown. To address this question, we have selected two natural feature datasets, ice-wedge polygons (IWP) and retrogressive thaw slumps (RTS), for the assessment. The first dataset is the AI-ready Ice Wedge Polygon (IWP) dataset for instance-level segmentation. IWPs are ubiquitous ground surface features found in permafrost-affected landscapes, specifically in regions with ice-rich permafrost. The type of IWP changes when the upper section of an ice wedge melts, indicating the rate of Arctic warming (Liljedahl et al. 2016). Our training dataset covers the most prevalent types of polygonal landscapes found in tundra regions, such as sedge, tussock, and barren. Very high-resolution satellite imagery at 0.5 m resolution from commercial Maxar sensors is used as the training imagery (Witharana et al. 2020). A total of 867 image tiles with sizes between 226 × 226 and 507 × 507 pixels were processed, and 34,931 IWP were manually annotated within the study area (Li et al. 2022b). The second dataset is the AI-ready Retrogressive Thaw Slumps (RTS) dataset, also curated for instance-level segmentation.
Thaw slumps are active permafrost features that develop rapidly when ice-rich permafrost thaws. They are a type of landslide formed when ground ice begins to melt, causing the ground to become unstable and the soil to move, especially on steep slopes. This dataset contains 855 image tiles using 4 m Maxar imagery as the base map. A total of 965 RTS were annotated in the study area, covering the Yamal and Gydan peninsulas in Siberia and six other pan-Arctic sites in Canada and northeastern Siberia (Yang et al. 2023). These two features are used to evaluate SAM's image segmentation capabilities for two reasons. First, natural feature datasets are generally more challenging to detect than humanmade features, which are common targets in general AI computer vision tasks and many remote sensing image analysis tasks (Li and Hsu 2022). Their forms are driven by underlying complex geospatial processes, leading to large variations across geographical locations, scales, and landscapes. Because research data (e.g., high-resolution satellite imagery) are often managed in databases and are not as readily available as images (e.g., street views) containing humanmade features, their inclusion in existing large pre-trained models can be very limited. For this reason, they also become an ideal dataset for testing the domain adaptation capabilities of general-purpose foundation models for GeoAI vision tasks. Second, both IWP and RTS are important permafrost features, whose changes provide a strong linkage to Arctic warming and climate change (Nitze et al. 2022). Therefore, AI-augmented permafrost mapping is becoming increasingly important to provide scientific insights into the pace of permafrost thaw and to support global change research, monitoring, and policy (Liljedahl et al. 2016, Meredith et al. 2019, Natali et al. 2022).

3.2. Experimental setup

We designed a series of experiments to evaluate SAM's potential usage for natural feature segmentation, particularly for important Arctic permafrost features. In an instance segmentation task, a model needs to provide not only the mask indicating the exact boundary of an object but also a prediction of the object's class. Object localization, object segmentation, and object class prediction are the three factors that affect an instance segmentation model's performance (Hsu and Li 2021). Because SAM's goal is to segment "anything," its output contains only the masks of all objects, without their classes. Therefore, SAM, on its own, cannot perform instance segmentation; it must always be combined with other models to create an instance segmentation workflow. The first approach is to use SAM to segment objects of any class within an image scene and then connect it with a mask classifier to filter the objects of interest belonging to a specific class, such as IWP. The SAM and CLIP integrated pipeline shown in Figure 1 belongs to this category. No training is required in the entire pipeline, except for providing a text prompt, for example, "ice wedge polygon," to CLIP. Hence, this process is also known as zero-shot learning (Strategy 1 in Table 2). The second approach is to provide SAM with prior knowledge, for example, the location (a point or a bounding box) of objects of interest as a prompt, and then ask SAM to generate the segmented masks of objects of that class. This is a surrogate of instance segmentation; Strategies 2 to 4 in Table 2 are such examples.
Strategy 2 is to feed the ground truth BBOX to SAM and ask SAM to segment the object within the given region. This strategy provides the strongest knowledge among all strategies. Strategy 3 involves feeding SAM with the ground truth point locations of the features of interest and asking the model to segment the object near each point. Strategy 4 involves training an object detector through supervised learning on the training datasets and using its predicted BBOX to feed the SAM model for instance segmentation. As Strategies 2 to 4 all require the use of training data in the segmentation pipeline, they are not considered zero-shot learning. Since SAM provides code for the model, the pipelines implementing Strategies 2 to 4 can be fine-tuned using domain-specific datasets.

Table 2. Strategies for enabling the instance segmentation capability of SAM.

No. | Strategy | Prior knowledge embedded? | Zero-shot? | Fine-tunable?
1 | Feed SAM with regular points (32 × 32) | - | ✓ | N/A
2 | Feed SAM with ground truth bounding box (BBOX) | ✓ (strongest) | - | ✓
3 | Feed SAM with point locations of the features of interest | ✓ | - | ✓
4 | Feed SAM with object-detector-predicted BBOX | ✓ | - | ✓

Table 3. Zero-shot performance of the integrated SAM and CLIP model for instance segmentation. mAP: mean average precision. The value 50 is the threshold used to determine matches between predicted and ground truth masks: when the IoU (intersection over union) of the two is at or above 50%, the predicted mask is considered a true positive.

Model | Dataset | Prompt for CLIP | mAP50 (SAM) | mAP50 (SAM + CLIP)
SAM + CLIP | IWP | "ice-wedge polygon" | 0.073 | 0.117
SAM + CLIP | IWP | "ice wedge" | 0.073 | 0.104
SAM + CLIP | RTS | "thaw slump" | 0.003 | 0.028
SAM + CLIP | RTS | "retrogressive thaw slump" | 0.003 | 0.007

3.2.1. Zero-shot instance segmentation (Strategy 1)

In this experiment, we investigate the zero-shot capability of SAM to locate and segment natural landscape features. Since SAM generates only masks without any category information, it is incapable of instance segmentation on its own. To address this, we combine SAM with CLIP to perform instance segmentation tasks and use the IWP and RTS datasets to evaluate their performance. CLIP is also a zero-shot model, which evaluates the correlation between a given text prompt and an input image. Taking advantage of this, we can use CLIP to predict the missing category information of each mask produced by SAM. The CLIP model used in this experiment is ViT-B/32, released by OpenAI in 2021 (Radford et al. 2021); more specifics of the model can be found in Table 1. We used a regular 32 by 32 grid of points evenly distributed across the image as the prompt for SAM. The output of SAM includes masks generated for all segmented objects. These masks are then used to clip the original image, creating a new image that solely contains the specific object, with the area outside the mask blacked out. We then send this image to CLIP along with the text prompt to identify the masks belonging to the given object class. This SAM+CLIP integrated modeling approach (Figure 1) implements zero-shot instance segmentation. We evaluated the model on both the IWP and RTS datasets using different text prompts for CLIP. As shown in Table 3, the best zero-shot prediction accuracy (measured by mAP50) for the IWP dataset was 0.117, achieved when using "ice wedge polygon" as the prompt for CLIP. Using other text prompts, such as "ice wedge," resulted in lower mAP values. As indicated by the mAP values, the results are poor.
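For reference, the mAP50 criterion used in Tables 3 and 4 counts a predicted mask as a true positive when its IoU with a ground truth mask is at least 0.5. A bare-bones sketch of that matching step follows (greedy matching; the full mAP additionally averages precision over confidence-ranked predictions):

import numpy as np

def iou(a, b):
    """IoU of two boolean masks."""
    inter = np.logical_and(a, b).sum()
    union = np.logical_or(a, b).sum()
    return inter / union if union else 0.0

def match_at_50(pred_masks, gt_masks, thr=0.5):
    """Greedy TP/FP/FN counting at IoU >= thr (a simplified mAP50 ingredient).
    pred_masks are assumed sorted by descending confidence."""
    used, tp = set(), 0
    for p in pred_masks:
        best, best_j = 0.0, None
        for j, g in enumerate(gt_masks):
            if j not in used:
                v = iou(p, g)
                if v > best:
                    best, best_j = v, j
        if best >= thr:
            used.add(best_j)
            tp += 1
    return tp, len(pred_masks) - tp, len(gt_masks) - tp   # TP, FP, FN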
The model's predictions for the RTS dataset were even worse, with a maximum prediction accuracy score of only 0.028. When we examined the results visually (Figure 2), we discovered that RTS is a more challenging feature to segment than IWP due to its complex and less-defined shape. Since the large SAM/CLIP models do not have enough knowledge of RTS, they tend to have a low success rate. Figure 2 shows two examples (RTS1 and RTS2) of partially segmented RTS features using the SAM+CLIP model, and for many of the testing images, no predictions were made at all. Second, since SAM segments all objects within an image scene, it can generate false positives, such as the lower-right portion of IWP1 and the two very large regions segmented in IWP2. These false positives can be filtered out by CLIP (last column in Figure 2). However, CLIP's recall rate cannot be improved beyond SAM's output: although CLIP can filter out irrelevant masks, it may also filter out true positives. To evaluate the impact of CLIP on SAM's performance, we calculated the mAP50 for SAM alone and for the integrated SAM+CLIP model. As shown in Table 3, adding CLIP improved the model's overall performance compared to using SAM alone. This suggests that CLIP did not negatively affect SAM's performance, and the low overall score is due to SAM's limited segmentation capabilities for natural landscape features.

Figure 2. Results of zero-shot learning with the integrated SAM+CLIP model (columns: original image, ground truth, SAM, and SAM + CLIP; rows: IWP1, IWP2, RTS1, RTS2). The last column displays the final result, and the second-to-last column presents the intermediate results from SAM, which are used as input for CLIP. Image source: Maxar.com.

3.2.2. Knowledge-embedded instance segmentation with SAM (Strategies 2 to 4)

Our second set of experiments evaluates SAM's instance segmentation capability using highly accurate and less accurate location prompts. The more accurate location prompts include the ground truth BBOX and the object's center point. The ground truth BBOX provides highly accurate location information about the target objects; thus the embedded knowledge in these experiments (Strategies 2a and 2b in Table 4) is also the strongest. Masks are not used in these strategies, as they would provide the answers to SAM. It is important to note that ground truth information (such as a ground truth BBOX) should be used only during the training/fine-tuning phase, and the model should not receive any kind of ground truth information during the testing phase, whether for a supervised learning model or a foundation model. Since Strategies 2 and 3 (Table 4) require such ground-truth information to be provided (in the form of a prompt) to SAM during the testing phase, experiments using these strategies indicate only SAM's theoretical upper-bound segmentation performance and cannot be used in real-world scenarios. In contrast, Strategy 4, which utilizes an object detector to provide BBOX information to SAM for further segmentation, is a practical approach. It involves training the object detector on domain-specific datasets and using it during the testing phase to predict the BBOX of the objects of interest before feeding this information to SAM for object segmentation. Because SAM provides the source code for training models, the weights can be further fine-tuned using domain datasets to achieve better results. Our main focus in this experiment was to evaluate the impact of fine-tuning on SAM's performance in segmenting natural features (Strategies 2b, 3b, and 4b in Table 4). SAM comprises three primary components: an image encoder, a prompt encoder, and a mask-decoding transformer, all of which are transformer-based architectures. In our experiment, we froze the model parameters in both the image and prompt encoders and focused our attention on fine-tuning the mask-decoding transformer, to reduce computation cost and increase result sensitivity. To optimize the model's performance, we used the Dice loss, which measures the overlap between the predicted segmentation masks and the ground truth masks. By comparing the results before and after fine-tuning, we can obtain a more comprehensive understanding of SAM's domain adaptation ability for out-of-distribution datasets.
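A condensed sketch of this setup, freezing the two encoders and updating only the mask decoder with a Dice loss, is given below; the data loader, batching details, and hyperparameters are placeholders, not the exact training configuration used in this study:

import torch
from segment_anything import sam_model_registry

sam = sam_model_registry["vit_h"](checkpoint="sam_vit_h.pth").cuda()
for p in sam.image_encoder.parameters():     # freeze the image encoder
    p.requires_grad = False
for p in sam.prompt_encoder.parameters():    # freeze the prompt encoder
    p.requires_grad = False
opt = torch.optim.Adam(sam.mask_decoder.parameters(), lr=1e-5)

def dice_loss(pred, target, eps=1.0):
    """Dice loss on predicted mask logits vs. binary ground truth masks."""
    pred = torch.sigmoid(pred).flatten(1)
    target = target.flatten(1)
    inter = (pred * target).sum(1)
    return 1 - ((2 * inter + eps) / (pred.sum(1) + target.sum(1) + eps)).mean()

# training-loop sketch: the loader is assumed to yield precomputed image
# embeddings, encoded box prompts, and ground-truth masks at the decoder's
# output resolution
for image_emb, sparse_emb, dense_emb, gt in loader:
    low_res_logits, _ = sam.mask_decoder(
        image_embeddings=image_emb,
        image_pe=sam.prompt_encoder.get_dense_pe(),
        sparse_prompt_embeddings=sparse_emb,
        dense_prompt_embeddings=dense_emb,
        multimask_output=False)
    loss = dice_loss(low_res_logits, gt)
    opt.zero_grad(); loss.backward(); opt.step()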
Table 4. Strategies used to enable SAM's instance segmentation capability through embedded knowledge, and the model's predictive results. Each strategy number corresponds to the definitions provided in Table 2.

Model | Learning type | Strategy no. | Prompt | mAP50 (IWP) | mAP50 (RTS)
SAM without fine-tuning | Ground truth knowledge plus zero-shot | 2(a) | Ground truth BBOX | 0.844 | 0.804
SAM without fine-tuning | Ground truth knowledge plus zero-shot | 3(a) | Ground truth point | 0.233 | 0.085
SAM without fine-tuning | Supervised learning | 4(a) | Object-detector-predicted BBOX | 0.521 | 0.290
SAM with fine-tuning | Ground truth knowledge plus fine-tuning | 2(b) | Ground truth BBOX | 0.989 | 0.926
SAM with fine-tuning | Ground truth knowledge plus fine-tuning | 3(b) | Ground truth point | 0.609 | 0.298
SAM with fine-tuning | Supervised learning | 4(b) | Object-detector-predicted BBOX | 0.595 | 0.303
Benchmark segmentation model (MViTv2; Li et al. 2022c) | Supervised learning | 0 | N/A | 0.605 | 0.354

Table 4 shows SAM's strong segmentation performance when provided with the ground truth BBOX, achieving mAP50 scores of 0.844 for IWP and 0.804 for RTS (Strategy 2a). However, when given a ground truth point, the accuracy drops significantly, to 0.233 for IWP and 0.085 for RTS (Strategy 3a). Interestingly, fine-tuning SAM on the domain datasets substantially improves predictive accuracy, with detection accuracy reaching 0.609 for IWP and 0.298 for RTS (Strategy 3b). Despite its poor zero-shot performance on new datasets, SAM demonstrates strong domain adaptation capabilities through fine-tuning (Table 3). This highlights SAM's generalizability in learning and extracting common feature representations from large image datasets: proper fine-tuning can significantly enhance its performance on new datasets. However, it is important to note that feeding the model with ground truth information during testing (Strategies 2 and 3) is not practical in real-world scenarios. Strategy 4, which involves training an object detector and feeding SAM with the predicted BBOX at the testing phase, is a practical solution. Here we selected the Mask region-based convolutional neural network (Mask R-CNN) architecture as our primary object detector because of its dual capability of generating both object BBOX and masks. This allows us to use its predicted BBOX as the input for SAM and also to use Mask R-CNN's instance segmentation results as a baseline to evaluate the segmentation quality of SAM. Specifically, we adopted MViTv2 (Li et al. 2022c), a Mask R-CNN type of model that achieves cutting-edge instance segmentation performance, in our study. This model uses a multi-scale vision transformer (MViT) as the feature extraction backbone to replace the traditional CNN-based backbone (e.g., ResNet-50) in a Mask R-CNN model.
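The second stage of Strategy 4 maps directly onto SAM's promptable interface: each detector-predicted box is passed to SamPredictor, which returns the mask within that region. A minimal sketch is given below; the upstream detector call is a hypothetical placeholder:

import numpy as np
from segment_anything import sam_model_registry, SamPredictor

sam = sam_model_registry["vit_h"](checkpoint="sam_vit_h.pth").cuda()
predictor = SamPredictor(sam)

def segment_with_boxes(image, boxes):
    """image: HxWx3 uint8 RGB array; boxes: predicted [x0, y0, x1, y1] boxes."""
    predictor.set_image(image)                 # run the image encoder once
    masks = []
    for box in boxes:
        m, scores, _ = predictor.predict(
            box=np.asarray(box), multimask_output=False)
        masks.append(m[0])                     # one mask per prompted box
    return masks

# boxes = mvitv2_detector(image)   # hypothetical upstream object detector
# masks = segment_with_boxes(image, boxes)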
The MViT achieves cutting-edge instance segmentation performance by taking advantage of both the transformer models in capturing long-range data dependencies and the classic CNN models in hierarchical feature learning and strong information flow enabled by residual connections (Li and Hsu 2020). In our experiments, the MViTv2, which has 103M parameters, was adopted as it was tested to yield the best performance on our training datasets. The MViTv2 result also offers a practical upper bound in segmentation accuracy for IWP and RTS. Results in Table 4 indicate that before fine-tuning, Strategy 4a (feeding SAM with object-detector-predicted BBOX) performs better than feeding SAM with ground truth points (Strategy 3a), but worse than when feeding SAM with ground truth BBOX (Strategy 2a). However, SAM\u2019s performance is lower on both datasets when comparing the mAPs of SAM using Strategy 4 to the benchmark segmentation model (MViTv2). After fine-tuning, SAM\u2019s predictive accuracy improves from 0.521 to 0.595 (mAP50) for IWP segmentation, which is close to the benchmark model\u2019s mAP of 0.605 (Strategy 0). However, for the RTS dataset, SAM\u2019s performance does not significantly improve after fine-tuning (0.303 vs. 0.290 for Strategy 4), and a gap remains compared to the benchmark segmentation model\u2019s result of 0.354. This gap can be attributed to several factors. First, the RTS dataset is more challenging than the IWP dataset, making the learning of representative features crucial for effective segmentation. The MViT model incorporates innovative strategies to integrate the advantages of CNN, originally designed for image analysis, into transformer models primarily designed for processing sequential data. Consequently, it can capture stronger image feature representations than regular transformer or CNN-based models. Second, our fine-tuning focuses on the decoder part, as fine-tuning the encoder part is very computationally expensive and not practically feasible. These experiments show that with and without providing ground-truth BBOX (Strategy 2 vs. Strategy 4), there can be over a 20% performance gap in the instance segmentation results. Under model fine-tuning and when SAM is fed with ground truth BBOX (Strategy 2b), the model achieves the best segmentation results, closest to the ground truth masks. However, in practice, because SAM does not have the ability to perform instance segmentation on its own, it needs to rely on another object detector\u2019s predicted results to achieve the goal. Consequently, its performance is limited by the upstream object detector. Therefore, the results for Strategy 4 for both datasets using SAM remain lower than those of the benchmark model MViTv2 (Strategy 0). Additionally, MViTv2\u2019s performance can come very close to or be better than the pipeline implementing Strategy 3b, where SAM is provided a ground-truth point and fine-tuned using the domain datasets. It is important to mention that Strategies 2 and 3 are not applicable in real-world application scenarios. Hence, the results for Strategy 4 and Strategy 0 have more practical significance. Figure 3 displays segmentation results for the same images shown in Figure 2. These are the results from the fine-tuned SAM and the benchmark segmentation model (MViTv2). There are several interesting observations from Figure 3. First, when SAM is fed with ground truth points (Strategy 3b), the resulting masks tend to be smaller than when SAM is fed a ground truth BBOX (Strategy 2b).
This is likely because when the input prompt is smaller, SAM\u2019s predictions tend to favor labeling smaller masks with higher confidence. Second, some of SAM\u2019s segmentation results exhibit holes (IWP1, Strategy 2b, and Strategy 4b), whereas the MViTv2 results (Strategy 0) do not display any holes. This is likely because SAM is trained on datasets containing holes, whereas MViTv2 is trained exclusively on domain datasets that do not include holes. Third, the results of Strategy 4b (feeding SAM with MViTv2-predicted BBOX) and Strategy 0 (MViTv2) are similar since they both segment masks based on the same BBOX information. Consequently, when MViTv2 fails to make certain predictions (e.g., RTS1), SAM (Strategy 4b) will also be unable to produce the corresponding prediction at the respective location.

[Figure 3. Results from knowledge-embedded learning with SAM (columns: ground truth (GT); GT BBOX + SAM, Strategy 2(b); GT point + SAM, Strategy 3(b); object detector predicted BBOX + SAM, Strategy 4(b); instance segmentation model MViTv2, Strategy 0; rows: IWP1, IWP2, RTS1, RTS2). The results are those after fine-tuning. The images are the same as those in Figure 2. Image source: Maxar.com.]

4. Discussions 4.1. Summary analysis Based on the above analysis, we can see that when directly adopting SAM with no prior knowledge provided (as referenced in the experiment in Section 3.2.1), its segmentation capability for out-of-distribution natural features is not good because these features are likely not considered in the model\u2019s pre-training processes. Consequently, SAM\u2019s zero-shot learning performance for AI-augmented terrain mapping remains poor. Experiments in Section 3.2.2 show that when providing SAM with strong knowledge (the ground truth BBOX), it can segment the objects inside the target area very well. While ground-truth BBOX cannot be used in a fully automated model prediction scenario, it can be applied in interactive AI-augmented mapping with human annotators, accelerating the process of data labeling and training data preparation. Some tools have already been developed to integrate SAM into the crowdsourced mapping procedure, such as DS-Annotate (Avery 2022), representing a major step forward in AI and foundation-model-assisted mapping. When SAM is used as a downstream model to an object detector that trains on the domain dataset and feeds SAM with its predicted BBOX, the model shows much better performance than using it for zero-shot instance segmentation. This is a strategy that can be used in practical applications. However, its performance may be limited by the upstream object detector which feeds it with the predicted BBOX. Comparing SAM\u2019s performance on these natural feature datasets and general-purpose computer vision datasets such as COCO and LVIS (discussed at the beginning of Section 3), we found that the margins between SAM and a supervised learning model are considerably larger when SAM is adapted for segmenting natural features. Specifically, the difference is nearly 14% (0.521 for SAM vs. 0.605 for MViTv2 in Table 4) for the IWP dataset and 22% (0.290 for SAM vs. 0.354 for MViTv2 in Table 4) for the RTS dataset. This suggests a focused area for improving SAM\u2019s domain adaptability in processing geospatial data, especially the challenging permafrost features. One positive aspect of SAM is that its performance can be improved after fine-tuning.
This is likely due to SAM\u2019s strategy of pre-training on a huge number of images, giving it the generalizability to extract common characteristics of diverse objects, even though it might not have learned the intrinsic representation of natural features. Compared to other emerging geospatial vision foundation models, such as IBM\u2019s Prithvi, which requires 6-band input (Jakubik et al. 2023), SAM\u2019s input is the most commonly seen RGB bands; hence it has potentially higher adaptability to diverse datasets. 4.2. Spatial and domain generalizability test To further validate our findings, additional experiments were conducted using SAM for agricultural field mapping. The EuroCrops dataset is employed in this analysis. EuroCrops (Schneider et al. 2023) is the largest harmonized open crop dataset across the European Union; it consists of 944 image scenes captured at the beginning of the growing period in April 2019. The image mosaic is created from two cloud-free Top of Atmosphere (TOA) Sentinel-2 images, each with a spatial resolution of 10 meters. The study area encompasses central Denmark, characterized by dominant agricultural land use and relatively flat terrains. Each image scene measures 128 by 128 pixels in size. 80% of randomly selected samples are used for training, with the remaining 20% reserved for testing. The same set of experiments was conducted on SAM using this dataset. Table 5 illustrates the zero-shot performance by SAM and the combined SAM + CLIP model. It is evident that using SAM alone results in a relatively low model performance. As depicted in Figure 4, SAM may mistakenly segment two crop fields as one large field or fail to identify certain crop fields. Upon incorporating CLIP with SAM and utilizing different prompts for semantic filtering, an increased prediction accuracy is observed, especially when \u201cCrop land\u201d is used as a prompt. The last column in Figure 4 showcases results using this approach. Notably, CLIP helps to remove some irrelevant urban regions (as seen in image AGR2), contributing to the enhancement of the final results. In contrast, when the prompt \u201cAgricultural field\u201d is utilized, the model yields slightly lower accuracy than when using SAM alone (Table 5). This suggests that the choice of prompt text is a factor influencing the CLIP model\u2019s results.

Table 5. Zero-shot performance of the integrated SAM and CLIP model for instance segmentation of agricultural fields. The same experimental settings were used as in Table 3.

| Model | Dataset | Prompt for CLIP | mAP50 (SAM) | mAP50 (SAM + CLIP) |
|---|---|---|---|---|
| SAM + CLIP | EuroCrops | Crop land | 0.118 | 0.161 |
| | | Agricultural field | | 0.109 |

Table 6. Strategies used to enable SAM\u2019s instance segmentation capability for agricultural field mapping through embedded knowledge, and model predictive results. Each strategy number corresponds to the definitions provided in Table 2.

| Model | Learning type | Strategy No. | Prompt | mAP50 (EuroCrops) |
|---|---|---|---|---|
| SAM without fine-tuning | Ground truth knowledge plus zero-shot | 2(a) | Ground truth BBOX | 0.907 |
| | | 3(a) | Ground truth point | 0.249 |
| | Supervised learning | 4(a) | Object detector predicted BBOX | 0.685 |
| SAM with fine-tuning | Ground truth knowledge plus fine-tuning | 2(b) | Ground truth BBOX | 0.922 |
| | | 3(b) | Ground truth point | 0.661 |
| | Supervised learning | 4(b) | Object detector predicted BBOX | 0.694 |
| Benchmark segmentation model (MViTv2; Li et al. 2022c) | Supervised learning | 0 | N/A | 0.717 |
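The zero-shot SAM + CLIP pipeline evaluated in Table 5 can be sketched with the open-source segment-anything and CLIP packages, as below. The checkpoint path, the prompt pair, and the keep_threshold value are illustrative assumptions, not settings reported here; the sketch only shows how class-agnostic SAM masks can be semantically filtered by CLIP.

```python
# Minimal sketch of SAM proposing masks and CLIP filtering them by text prompt.
# Assumptions: checkpoint file name, prompt wording, and the 0.5 threshold.
import numpy as np
import torch
import clip
from PIL import Image
from segment_anything import sam_model_registry, SamAutomaticMaskGenerator

device = "cuda" if torch.cuda.is_available() else "cpu"
sam = sam_model_registry["vit_h"](checkpoint="sam_vit_h.pth").to(device)
mask_generator = SamAutomaticMaskGenerator(sam)
clip_model, preprocess = clip.load("ViT-B/32", device=device)
text = clip.tokenize(["crop land", "background"]).to(device)

def filter_masks(image_rgb: np.ndarray, keep_threshold: float = 0.5):
    """Generate class-agnostic SAM masks, keep the ones CLIP scores as 'crop land'."""
    kept = []
    for m in mask_generator.generate(image_rgb):
        x, y, w, h = [int(v) for v in m["bbox"]]          # XYWH box of the mask
        crop = Image.fromarray(image_rgb[y:y + h, x:x + w])
        with torch.no_grad():
            logits, _ = clip_model(preprocess(crop).unsqueeze(0).to(device), text)
            probs = logits.softmax(dim=-1).squeeze(0)     # P(class | crop)
        if probs[0].item() > keep_threshold:              # index 0 = "crop land"
            kept.append(m)
    return kept
```

The division of labor mirrors the discussion above: SAM contributes the masks (and bounds the recall), while CLIP only decides which masks to keep, so a poorly chosen prompt can lower precision without ever recovering masks SAM missed.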
Experiments were also conducted to assess SAM\u2019s performance on additional learning strategies, and the statistical results are shown in Table 6. When SAM is provided with the ground truth BBOX of the agricultural lands, its segmentation accuracy can reach up to 0.907 with zero-shot learning and 0.922 after model fine-tuning. This result is much better than that obtained using a ground-truth point as the input prompt. When used in real-world scenarios and provided with another model\u2019s predicted BBOX as input, SAM\u2019s prediction accuracy (0.694) can get very close to using a cutting-edge supervised learning model MViTv2 (0.717). Figure 5 illustrates some example results for visual inspection. Overall, the findings on this more general dataset are consistent with what was found in the two permafrost datasets. This further verifies our findings about the strengths and limitations of using SAM in geospatial applications." }, { "url": "http://arxiv.org/abs/2309.14500v4", "title": "Assessment of a new GeoAI foundation model for flood inundation mapping", "abstract": "Vision foundation models are a new frontier in Geospatial Artificial\nIntelligence (GeoAI), an interdisciplinary research area that applies and\nextends AI for geospatial problem solving and geographic knowledge discovery,\nbecause of their potential to enable powerful image analysis by learning and\nextracting important image features from vast amounts of geospatial data. This\npaper evaluates the performance of the first-of-its-kind geospatial foundation\nmodel, IBM-NASA's Prithvi, to support a crucial geospatial analysis task: flood\ninundation mapping. This model is compared with convolutional neural network\nand vision transformer-based architectures in terms of mapping accuracy for\nflooded areas. A benchmark dataset, Sen1Floods11, is used in the experiments,\nand the models' predictability, generalizability, and transferability are\nevaluated based on both a test dataset and a dataset that is completely unseen\nby the model. Results show the good transferability of the Prithvi model,\nhighlighting its performance advantages in segmenting flooded areas in\npreviously unseen regions. The findings also indicate areas for improvement for\nthe Prithvi model in terms of adopting multi-scale representation learning,\ndeveloping more end-to-end pipelines for high-level image analysis tasks, and\noffering more flexibility in terms of input data bands.", "authors": "Wenwen Li, Hyunho Lee, Sizhe Wang, Chia-Yu Hsu, Samantha T. Arundel", "published": "2023-09-25", "updated": "2023-11-03", "primary_cat": "cs.CV", "cats": [ "cs.CV" ], "main_content": "INTRODUCTION Floods are one of the most devastating natural disasters, affecting over 20% of the world\u2019s population [32]. Recent climate change and extreme weather events have led to an increase in both the frequency and intensity of flooding, resulting in significant economic losses and loss of life [25]. To effectively conduct risk assessments and disaster management, flood inundation mapping\u2014an effort to accurately delineate the extent of flooding\u2014has become a vital tool. Flood maps are not only used by insurance companies to assess flood risks but are also integral to early warning systems, aiding residents in preparing for and evacuating from potential flooding events [27].
Additionally, flood maps assist engineers and city planners in designing resilient infrastructure to mitigate flood-related impacts [21]. Traditional methods for flood inundation mapping rely on a combination of field surveys, aerial photography, topographic maps, hydrological models, and historical data. While providing very valuable information for risk assessment and mitigation planning, these methods are often costly and the data are time-consuming to collect. In recent years, advanced remote sensing platforms, such as Sentinel and Planet, are generating finer-resolution satellite imagery with high revisit frequency and multiple spectral bands, and have become important data sources for flood mapping. In addition to the advances in data, the rapid development of GeoAI technology [20], especially deep learning, also offers a more informed and automated approach to analyze big satellite data [13\u201315, 23, 24, 35]. Through data-driven analysis, deep learning models have demonstrated the capability to recognize distinctive flood patterns and subtle flood boundaries, even in regions under complex and rapidly changing conditions. In GeoAI, flood inundation mapping can be formulated as a semantic segmentation problem, which aims to provide per-pixel classification of flood and non-flood areas. Popular deep learning methods that can tackle this problem include U-Net, a convolutional neural network (CNN) architecture that uses a CNN-based encoder for feature extraction and a decoder for the reconstruction of image features at the original resolution by considering spatial contextual information [33]. This encoder-decoder based architecture in U-Net has been proven very effective in image segmentation, and it has become the foundation for many semantic segmentation models, such as ResUNet [10] and SegNet [3]. Another family of convolutional semantic segmentation models includes the DeepLab family developed by Google [7]. The DeepLab family uses dilated convolutions to enable multi-scale feature aggregation, and the model has found many applications in geospatial problems. However, for segmenting floods, which occur less frequently than many other mapped phenomena, the benchmark training datasets are relatively small in size. As a result, DeepLab did not show a performance as high as expected on such applications due to its model complexity and the need for more training data [38]. Recently, the transformer-based architecture has shown even stronger performance than CNN-based models due to its capability in capturing data dependencies over long ranges and its ability to reduce inductive bias [22]. Transformer was originally designed for natural language processing, and then adopted in many image analysis and vision tasks through the development of vision transformers (ViT). For dense (per-pixel) prediction tasks such as semantic segmentation, ViT needs to be extended with a segmentation head. Commonly used models of this category include Mask2Former [8], Swin transformer [28], and Segformer [37]. Like U-Net, these models often adopt an encoder-decoder based architecture, but with different designs in the attention mechanism and mask segmentation. Segformer, for example, consists of a hierarchical transformer encoder and a lightweight decoder based on a multilayer perceptron.
This model uses a smaller patch size than regular ViT-based models to achieve precise segmentation. Its efficient architecture also yields fast processing speed while achieving competitive segmentation results on multiple benchmark datasets [34]. A new wave of AI research is the development of large foundation models [4]. They are models trained on huge amounts of data, often leveraging self-supervised learning, to achieve unprecedented generalizability for downstream tasks. In the computer vision domain, notable progress has been made. For instance, Meta released the Segment Anything Model (SAM) [18] in April 2023 to achieve zero-shot instance segmentation with prompts. This model was trained on 11 million images and 1 billion masks, which were created by human annotation, and then semi-automated and fully-automated learning. Although SAM provides fine-grained, instance-level segmentation, it cannot be easily adopted for semantic segmentation due to the differences in the goals of these tasks and the training data used. In the GeoAI domain, a billion-scale foundation model was developed [6]. This model is built with 2.4 billion parameters, its backbone is also a transformer-based architecture, and it is pre-trained on a remote sensing benchmark dataset, the MillionAID [29]. However, the model is not open-sourced; therefore, its adaptability in different domain tasks is difficult to assess. In August 2023, IBM and NASA jointly released a new geospatial foundation model (GFM), Prithvi. This model was trained using self-supervised learning on Harmonized Landsat and Sentinel 2 (HLS) data within the continental United States. Because both datasets are satellite imagery capturing important spectral signatures of the Earth surface, the foundation model learned from them tends to gain more geospatial domain knowledge than models trained on other natural images. Hence, it has high potential to better support geospatial tasks. This paper aims to assess IBM and NASA\u2019s new foundation model in support of flood inundation mapping. We compare the model with commonly used CNN and transformer-based semantic segmentation architectures, especially U-Net and Segformer, in their predictability, generalizability, and transferability. The goal is to better understand the strengths and weaknesses of this new solution and how it can be best leveraged to support disaster assessment related to flood. The rest of the paper is organized as follows. Section 2 will introduce Prithvi\u2019s architecture, along with the other two comparative models. Section 3 describes the experimental settings, including dataset, training parameters, and evaluation metrics. Section 4 presents the results and analysis. Section 5 concludes the work and proposes future research directions. 2 MODELS 2.1 IBM-NASA\u2019s geospatial foundation model Prithvi IBM-NASA\u2019s GFM Prithvi [16] adopts a temporal ViT trained on 30-m HLS data covering the continental United States. The model pre-training is based on a Masked AutoEncoder (MAE) [12] strategy, which asks the model to learn and predict a masked patch from the original image through self-supervised learning.
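As a rough illustration of the MAE pre-training idea just described, the sketch below hides a random subset of patch tokens and trains on reconstructing only the hidden ones. The encoder/decoder stubs, tensor shapes, and the 75% mask ratio are assumptions for illustration, not Prithvi's exact configuration.

```python
# Sketch of one MAE training step: encode only visible patches, reconstruct
# the hidden ones, and compute loss on the hidden patches only (assumed setup).
import torch

def mae_step(encoder, decoder, patches: torch.Tensor, mask_ratio: float = 0.75):
    """patches: (batch, num_patches, patch_dim) token embeddings."""
    b, n, d = patches.shape
    n_keep = int(n * (1 - mask_ratio))
    perm = torch.rand(b, n).argsort(dim=1)             # random patch order per sample
    keep, hide = perm[:, :n_keep], perm[:, n_keep:]
    visible = torch.gather(patches, 1, keep.unsqueeze(-1).expand(-1, -1, d))
    latent = encoder(visible)                          # encode visible patches only
    recon = decoder(latent, hide)                      # decoder predicts hidden patches
    target = torch.gather(patches, 1, hide.unsqueeze(-1).expand(-1, -1, d))
    return ((recon - target) ** 2).mean()              # MSE on masked patches only
```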
Different from the general ViT model, which is trained on single-time-slice images, the Prithvi model (Figure 1, encoder part) can take in time-series satellite images with an additional time dimension, and the patch embedding process is customized such that both spatial and positioning information as well as the temporal information are captured in the resulting embeddings. After that, the embeddings are sent to the transformer encoder for capturing inherent spatial and temporal dependencies within the datasets, creating a set of features similar to the feature map obtained from a CNN-based backbone. It is important to note that the original decoder of the model used for MAE-based pre-training might not be suitable for various downstream tasks. To better support image analysis tasks, a task-specific head, such as a segmentation head, needs to be added to build the analytical pipeline. Figure 1 presents the integrated semantic segmentation pipeline for flood inundation mapping.

[Figure 1: The architecture of the geospatial foundation model Prithvi, tailored for semantic segmentation.]

The encoder is a pre-trained module of the model and it works as described above. The decoder is responsible for pixel-wise image segmentation and it works as follows. First, the encoded vectors will be deserialized to create two-dimensional (2D)-shaped feature maps. To improve the segmentation precision, the feature map, with reduced size as compared to the original image, will go through an upsampling process consisting of four 2D transposed convolutional layers. Finally, a 2D convolutional layer is added to the end of the decoder to make the final predictions. In the flood mapping case, there are three classes to predict: flood, non-flood, and no data. An important feature of the Prithvi model is that its input requires six bands, rather than the typical three RGB (red, green, blue) bands used in other models. These six bands include: red, green, blue, narrow NIR (Near InfraRed), SWIR1 (Short-Wave Infrared) and SWIR2. SWIR1 and SWIR2 differ in their wavelengths, and they are used for discriminating moisture content (SWIR1) and improved moisture content (SWIR2) of soil and vegetation [31]. 2.2 Comparative models for flood inundation mapping 2.2.1 U-Net. The U-Net model is selected for performance comparison with Prithvi because it is a classic model for semantic segmentation [33] and one of the most popular models adopted for flood inundation mapping [19].

[Figure 2: Architecture of U-Net. H: height. W: width. C: channel.]

U-Net is a convolutional neural network-based architecture, consisting of two parts: an encoder and a decoder (Figure 2). The encoder provides a contracting path, where important image features are extracted hierarchically through convolution. The size of the feature map also gradually decreases by using downsampling to reduce spatial resolution and thereby the overall computational cost. The U-Net encoder used in our experiments contains four downsampling steps, accomplished by two convolutional layers for feature extraction and one max-pooling layer for downsampling. The output of the encoder is fed into the decoder part, which contains an expanding path to recover the resolution of the feature maps to the original image resolution. This is achieved through four upsampling modules (consisting of an interpolation upsampling layer and one convolution layer) to reconstruct spatial details. In addition to its classic encoder-decoder-based architecture, another key design feature of U-Net is the inclusion of skip connections.
These connections link the decoder outputs with corresponding features generated during the encoding phase, further enhancing precision in segmentation. 2.2.2 Segformer. Segformer [37] also adopts an encoder-decoder architecture (Figure 3) but, different from U-Net, it uses a transformer-based feature extraction backbone.

[Figure 3: Model architecture of Segformer (adapted from [37]). H: height. W: width. C: channel.]

In the encoder stage, Segformer introduces hierarchical feature representation to generate CNN-like multi-level features. These feature maps retain both high-level features at low resolution and low-level features at high resolution. Segformer enhances computational efficiency through Efficient Self-Attention in its transformer blocks (Figure 3). It is a modified self-attention mechanism that scales down the number of input sequences, effectively alleviating the primary computational bottleneck within the encoder part. The model also incorporates a data-driven positional encoding through Mix-FFN (Feed Forward Network), which uses a 3-by-3 convolutional layer within the FFN to encode positional information. This adaptation mitigates the performance drop caused by positional encoding when the test image resolution changes during inference. Segformer also partitions the images into overlapping patches to preserve the data continuity and local contexts. These patches are merged together before being sent to the next transformer blocks. The Segformer decoder, which benefits from the large effective receptive field (ERF) of the transformer encoder, fuses these multi-level features using a lightweight All-MLP (multi-layer perceptron) module. This module exclusively consists of MLP components for generating the segmentation mask, thereby substantially reducing computational demands compared to other models, such as SEgmentation TRansformer (SETR) [39], that utilize multiple convolution layers in their decoders. 3 DATA AND EXPERIMENTS 3.1 Data Sen1Floods11 [5] is utilized to conduct the experiments. Sen1Floods11 is a georeferenced dataset for flood inundation mapping. This dataset contains 446 pairs of Sentinel-1 and Sentinel-2 satellite imagery, which captured 11 flood events from 11 countries (Bolivia, Ghana, India, Nigeria, Pakistan, Paraguay, Somalia, Spain, Sri Lanka, United States, and Vietnam) between 2016 and 2019, with corresponding human expert labeled data. Each satellite image maintains a spatial resolution of 10m with an image size of 512 \u00d7 512 pixels. In particular, the Sentinel-2 satellite imagery includes 13 spectral bands at varying resolutions, and all bands are linearly interpolated to 10m resolution to align spatially. Data from 6 Sentinel-2 bands (red, green, blue, narrow NIR, SWIR1, SWIR2) are used as the input to run all three models, as they match exactly with the required input of Prithvi. 3.2 Experimental setup Our experiments evaluate the GFM (Prithvi)\u2019s performance when applied to downstream geospatial tasks, especially flood inundation mapping using semantic segmentation, compared to a transformer-based backbone not specifically designed for geospatial data (Segformer using a pretrained model), and a traditional encoder-decoder architecture built from scratch (U-Net). The experiments were conducted based on the MMSegmentation framework [9].
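For orientation, the shared training settings reported next (AdamW, poly learning-rate schedule, batch size 4, 7:3 class weighting) could be expressed as an MMSegmentation-style config fragment such as the sketch below. Field names follow older (0.x) MMSegmentation conventions and are assumptions; exact keys vary across framework versions.

```python
# Sketch of the shared settings from Table 1 in MMSegmentation 0.x config style.
# Which class index is "flood" is an assumption here; see the in-line comment.
optimizer = dict(type='AdamW', lr=6e-5, weight_decay=0.01)
lr_config = dict(policy='poly', power=1.0, min_lr=0.0, by_epoch=False)
data = dict(samples_per_gpu=4)  # batch size of 4
model = dict(
    decode_head=dict(
        loss_decode=dict(
            type='CrossEntropyLoss',
            # 7:3 weighting of flood vs. non-flood pixels (order assumed)
            class_weight=[0.3, 0.7])))
crop_size = (244, 244)
```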
For comparison, the experimental setups followed the GFM\u2019s configuration for the flood inundation mapping use case provided by Hugging Face (https://huggingface.co/ibm-nasa-geospatial/Prithvi-100M-sen1floods11, accessed on 31 August 2023). One exception is the choice of the optimizer. The original Prithvi model uses Adam as the optimizer, but we found through our experiments that AdamW [30], which uses a different strategy for weight decay, offers better segmentation performance for Prithvi. Hence, AdamW is applied to all three comparative models. Additionally, regarding the learning rate, we referred to the previous studies of U-Net for flood inundation mapping [5, 19]. As U-Net has a simpler architecture than Segformer and Prithvi, using a slightly larger learning rate helps the model to converge faster. The detailed experimental settings are described in Table 1. \u201cPoly\u201d in Table 1 refers to the learning rate scheduler that linearly reduces the learning rate from the initial value to zero as the training advances.

Table 1: The parameters of the deep learning models for the experiment.

| Parameters | U-Net | Segformer | Prithvi |
|---|---|---|---|
| Optimizer | AdamW | AdamW | AdamW |
| Learning rate | 5e-4 | 6e-5 | 6e-5 |
| Weight decay | 0.01 | 0.01 | 0.01 |
| Batch size | 4 | 4 | 4 |
| Learning rate scheduler | Poly | Poly | Poly |
| Class weight (flood:non-flood) | 7:3 | 7:3 | 7:3 |
| Loss function | Cross Entropy | Cross Entropy | Cross Entropy |
| Cropped image size | 244 | 244 | 244 |
| Datasets for pretraining | N/A | ADE20K [40] | HLS |

In terms of architecture, these models exhibit both shared and distinct features. First, both the Prithvi and the Segformer models adopt transformer-based architectures, which are known to be powerful but also require a huge amount of data to achieve outstanding performance. Hence, pre-training is often necessary to ensure their performance when being fine-tuned for downstream tasks. In comparison, U-Net is a classic CNN-based model that involves less computation, and therefore it can often be trained from scratch. Hence, Table 1 shows the pre-training datasets only for Prithvi and Segformer. For Segformer, we adopted the model pretrained on a benchmark semantic segmentation dataset, ADE20K [40], and Prithvi is pre-trained on NASA\u2019s HLS dataset. Second, both U-Net and Segformer introduce multi-scale learning in their model design, ensuring a good model performance and high-precision segmentation. Third, U-Net and Segformer are trained based on supervised learning, whereas the pre-training phase of Prithvi uses self-supervised learning and only adopts supervised learning for the fine-tuning part. In the experiments, we utilized two types of datasets for performance evaluation. The first consists of test data, which include images from the same geographic region (e.g., country) as the training data but differ from the training images. The second dataset comprises entirely \u2018unseen\u2019 data from Bolivia, which was not part of the training, validation, or test datasets. For both datasets, we conducted 10 runs of each model and recorded the performance metrics for each run. These metrics include Intersection over Union (IoU) for each class, mean IoU (mIoU) across all classes, Accuracy (Acc) for each class, and Mean Accuracy (mAcc) across all classes. Equations 1\u20134 provide their definitions, which have been adopted from the MMSegmentation framework (TP: True Positive; FN: False Negative; FP: False Positive).
It is important to note that the accuracy formula in MMSegmentation is designed for multi-class segmentation, equivalent to the recall formula used in binary segmentation. The final result is presented as the average of each metric obtained from the 10 experiments.

$\mathrm{IoU} = \frac{TP}{TP + FN + FP}$ (1)

$\mathrm{mIoU} = \frac{\mathrm{IoU}_{\mathrm{flood}} + \mathrm{IoU}_{\mathrm{non\text{-}flood}}}{2}$ (2)

$\mathrm{Acc} = \frac{TP}{TP + FN}$ (3)

$\mathrm{mAcc} = \frac{\mathrm{Acc}_{\mathrm{flood}} + \mathrm{Acc}_{\mathrm{non\text{-}flood}}}{2}$ (4)

4 RESULTS AND ANALYSIS Table 2 lists the experimental results from the Prithvi model, the Segformer model, and the U-Net model. Two Segformer models (B0 and B5) with different model sizes were run with the results reported. B0 is the smallest Segformer model and B5 is the largest Segformer model. Their trainable parameters can be found in Table 2. The experimental results show that the U-Net model obtains the best overall segmentation performance in all measures. This is attributed to U-Net\u2019s unique encoder-decoder architecture and its ability to consider multi-scale features to reconstruct spatial details for high-precision segmentation. The performance of Prithvi and the Segformer models is quite similar in terms of IoU, with a 1% difference in mIoU and nearly a 3% difference in IoU when segmenting flood pixels, compared to U-Net. The prediction accuracy (mAcc) of Segformer-B5 is slightly higher than that of Prithvi (Table 2), and the two models have no obvious performance advantage over the Segformer-B0 model. In segmenting flood pixels, Segformer-B5 and Prithvi obtained slightly better accuracy (1.2-1.5% in Avg. Acc) than the other two models. This indicates these large models can capture diverse spectral characteristics of flood pixels. When we compare the model size with regard to the number of training parameters, Segformer-B0 only contains 3.7M parameters (Table 2), which is much smaller than the other models, and it still achieves comparable results. This has largely benefited from its lightweight model design and the use of hierarchical feature fusion in its learning process.

Table 2: Performance evaluation results on test data. Avg.: Average. Acc: Accuracy. mIoU: mean Intersection over Union. M: million.

| Model | Avg. mIoU (%) | Avg. IoU (%), Flood | Avg. IoU (%), Non-flood | Avg. mAcc (%) | Avg. Acc (%), Flood | Avg. Acc (%), Non-flood | Trainable parameters |
|---|---|---|---|---|---|---|---|
| IBM-NASA\u2019s Prithvi | 89.59 | 81.98 | 97.21 | 94.35 | 90.12 | 98.58 | 100M |
| Segformer-B0 | 89.36 | 81.57 | 97.14 | 94.20 | 89.84 | 98.55 | 3.7M |
| Segformer-B5 | 89.54 | 81.89 | 97.18 | 94.44 | 90.35 | 98.52 | 82M |
| U-Net | 90.80 | 84.03 | 97.57 | 94.80 | 90.74 | 98.86 | 29M |

Table 3: Performance evaluation results on the unseen, Bolivia data.

| Model | Avg. mIoU (%) | Avg. IoU (%), Flood | Avg. IoU (%), Non-flood | Avg. mAcc (%) | Avg. Acc (%), Flood | Avg. Acc (%), Non-flood | Trainable parameters |
|---|---|---|---|---|---|---|---|
| IBM-NASA\u2019s Prithvi | 86.02 | 76.62 | 95.43 | 90.38 | 82.12 | 98.65 | 100M |
| Segformer-B0 | 83.05 | 71.46 | 94.65 | 86.95 | 74.77 | 99.14 | 3.7M |
| Segformer-B5 | 85.59 | 75.82 | 95.36 | 89.58 | 80.26 | 98.91 | 82M |
| U-Net | 82.54 | 70.57 | 94.52 | 86.45 | 73.73 | 99.18 | 29M |

However, the models\u2019 performance for unseen regions is quite different from that in the test regions. Table 3\u2019s results show that Prithvi yields the highest mIoU and prediction accuracy (mAcc) among all comparative models. The second best model is Segformer-B5, with a difference of less than 1% compared to Prithvi. Segformer-B0 follows, while U-Net ranks lowest in terms of performance, with a nearly 4% gap in both mIoU and mAcc compared to Prithvi. This result indicates that large models, such as Prithvi and Segformer-B5, have a greater capability to learn important feature representations, thus possessing stronger domain adaptability and transferability. The geospatial foundation model Prithvi, pre-trained on remote sensing imagery, captures more domain-specific geospatial knowledge, making it particularly suitable for geospatial tasks. Smaller models, such as U-Net, work well in the test data, meaning that the model learns and fits very well with the flood datasets; however, this near \u201cperfect\u201d fitting on a given dataset also limits its ability to adapt to other datasets and tasks. To further examine the characteristics of each model, we visualized the prediction results in Figure 4. For the prediction results on the test data (rows (a) and (b)), which contain narrow flood inundation areas (e.g., rivers, channels), the Prithvi model did not perform as well as the other models. This is because the U-Net and Segformer architectures enable multi-level feature extraction, whereas Prithvi only utilizes a single-level ViT-based architecture as the feature extraction backbone. On the other hand, in the Bolivia data, Prithvi demonstrates consistently higher performance than U-Net. The bottom row (c) of Figure 4 shows the segmentation results in the unseen region. It can be seen that Prithvi predicted better results than the other models, especially in the highlighted image regions (red boxes). This is attributed to the model\u2019s pretraining on huge amounts of data to achieve good generalizability and, therefore, transferability compared to models learned from and (over)fitted on a small dataset." }, { "url": "http://arxiv.org/abs/2306.05341v1", "title": "Real-time GeoAI for High-resolution Mapping and Segmentation of Arctic Permafrost Features", "abstract": "This paper introduces a real-time GeoAI workflow for large-scale image\nanalysis and the segmentation of Arctic permafrost features at a\nfine-granularity. Very high-resolution (0.5m) commercial imagery is used in\nthis analysis. To achieve real-time prediction, our workflow employs a\nlightweight, deep learning-based instance segmentation model, SparseInst, which\nintroduces and uses Instance Activation Maps to accurately locate the position\nof objects within the image scene.
Experimental results show that the model can\nachieve better accuracy of prediction at a much faster inference speed than the\npopular Mask-RCNN model.", "authors": "Wenwen Li, Chia-Yu Hsu, Sizhe Wang, Chandi Witharana, Anna Liljedahl", "published": "2023-06-08", "updated": "2023-06-08", "primary_cat": "cs.CV", "cats": [ "cs.CV" ], "main_content": "INTRODUCTION Polar regions are one of Earth\u2019s remaining frontiers that play a vital role in global climate, ecosystems, and economy. Global warming over the past century is driving dramatic change in the Arctic ecosystem, endangering its natural environment, infrastructure, and the life of the indigenous population. Permafrost, ground that remains below 0\u00b0C for at least two consecutive summers, is at the center of this change. Covering nearly 1/4 of the land in the northern hemisphere, thawing permafrost is causing significant local and regional impacts on the Arctic community. As the ice-rich frozen ground thaws, land subsides, causing severe damage to buildings, roads, pipelines, and industrial infrastructure [9]. Permafrost degradation also increases rates of coastal erosion, wildfires, and flooding, which may further accelerate the thawing process and make the Arctic ecosystem even more vulnerable to climate change [6]. At a global scale, the thawing of Arctic permafrost will result in the release of an immense amount of carbon dioxide and methane, exacerbating the greenhouse effect and global warming through complex feedback mechanisms [17]. To improve our understanding of permafrost dynamics and its linkages to other Arctic ecosystem components in the midst of rapid Arctic change, it is critically important to have spatial data readily available that provide fine-granularity mapping of permafrost features, their extent, distribution, and longitudinal changes. Achieving this goal requires new approaches that can perform automated mining from Arctic big data. It is exciting that the Arctic community has started to embrace GeoAI [11, 12] and big data to support Arctic research, from predicting Arctic sea ice concentration [1], to finding marine mammals on ice [15], creating Arctic land cover maps [18], and automated mapping of permafrost features [2]. Pioneering research in performing automated characterization of Arctic permafrost features has also been reported in the literature. A GeoAI-based Mapping Application for Permafrost Land Environment (MAPLE) is being developed to integrate Big Imagery, GeoAI, and High-Performance Computing (HPC) to achieve classification of permafrost features, in particular, ice-wedge polygons (IWP) [19]. The delineation of IWPs is achieved using a popular instance segmentation model, Mask R-CNN [7]. Huang et al. [10] applied a semantic segmentation model U-Net for mapping retrogressive thaw slumps, another important feature type of Arctic permafrost for understanding permafrost thaw and Arctic warming. While these deep learning models, such as Mask R-CNN, result in satisfying performance in terms of prediction accuracy, they can hardly achieve real-time processing because the algorithms often require placement of a large number of candidate bounding boxes and complex post-processing to remove redundant information.
To reduce computational cost and perform efficient permafrost mapping at the pan-Arctic scale (which covers over 5 million km2 of tundra region), it is necessary to develop and apply new models that can achieve high-accuracy and real-time prediction. This paper aims to achieve this goal by integrating a novel real-time instance segmentation model, SparseInst [5], in our automated permafrost feature mapping pipeline. The next section describes the methodological workflow in detail.

[Figure 1: Real-time GeoAI workflow for Arctic permafrost segmentation and mapping. FC: Fully Connected layer.]

2 METHOD Figure 1 demonstrates the workflow of real-time GeoAI for Arctic permafrost mapping. We adopt a novel instance segmentation model, SparseInst, into the workflow, which contains three major components: a feature extractor, an instance context encoder, and an Instance Activation Map (IAM)-based decoder. The feature extractor is responsible for extracting multi-scale features from the input. The encoder will process the extracted features and fuse them into single-level features with multi-scale representations. The encoded features are then processed by the decoder to generate IAMs for instance classification and segmentation. Each component is designed with a lightweight architecture and low computational complexity in mind to achieve fast inference speed. 2.1 Feature Extractor The feature extractor adopted in this work is ResNet-50 [8]. Among various deep neural network (DNN) architectures, ResNet-50 enjoys a good trade-off between accuracy and model complexity and can thus support real-time applications [3]. ResNet extracts representative features for objects of different types using a deep residual network. After a series of convolutional operations, multi-scale feature maps can be generated, among which high-resolution maps are better at small-object segmentation and low-resolution feature maps can better support segmentation of large objects. To accurately segment objects of varying sizes, hierarchical feature maps at multiple scales and resolutions are passed to the encoder (see Figure 1). 2.2 Instance Context Encoder The main purpose of the encoder is to generate a single feature map containing multi-scale representations. Conventional approaches use multi-scale features with multi-level predictions [13] for segmenting objects at different scales [20]. However, this will increase the overall processing time of the model, making it less efficient and less favorable for real-time applications. Recent real-time instance segmentation models [4, 16] fuse multi-scale information into a single feature map to reduce both prediction and post-processing time. SparseInst utilizes a similar idea, and it fuses three feature maps obtained from different convolution stages. The fusion first follows the feature pyramid network (FPN) [14] to use a top-down pathway for building semantic-rich features. To further enhance the scale information, the last feature map (C3) also undergoes a pyramid pooling operation [23] to increase the global contextual information without increasing the size of the feature maps. Next, all feature maps are upsampled to the same resolution and concatenated together to generate feature maps at a single resolution but with multi-scale representations. The output is then sent to the decoder for classification and segmentation. 2.3 IAM-based Decoder The function of the decoder is to take the fused feature map from the encoder as input to generate $N$ predictions.
Each prediction contains a triple of outputs, including an objectness score and a kernel. The objectness score refers to the probability of an object belonging to a certain class, and the kernel is a low-dimensional representation of location information for that object. This instance-level prediction is achieved through the generation of Instance Activation Maps (IAMs), which are capable of highlighting important image areas. Different from conventional approaches which use dense anchors to detect and segment objects, SparseInst trains the decoder to create IAMs, which have a one-to-one mapping with the objects to segment. This design helps the decoder to achieve real-time performance as it avoids the time-consuming post-processing of some models, such as Mask R-CNN, which need to select from thousands of anchors to predict the most accurate mask and to perform matching between predicted masks and the ground-truth. Once the predictions are generated, they are sent to perform bipartite matching to associate each ground-truth object with its most similar prediction; then the difference between the prediction and the ground-truth is encoded into the loss function. As the model is being trained, it learns to generate more accurate IAMs and thus more accurate predictions, lowering the loss until the model fully converges. 3 EXPERIMENTS AND RESULTS 3.1 Data To assess the performance of the models, we created an AI-ready dataset containing 867 image tiles and a total of 34,931 ice-wedge polygons (IWPs). The dataset covers dominant tundra vegetation types in the polygonal landscapes, including sedge, tussock, and barren tundra. Very high resolution (0.5 m) remote sensing imagery acquired by Maxar sensors is used for annotation and model training. The average image size is \u223c226 \u00d7 226, with the largest image size 507 \u00d7 507. Each image has a label indicating the image size and coordinates of the IWPs. The labeled images are divided into three sets: training (70%), validation (15%), and testing (15%). The maximum number of IWPs per image is 447. This statistic is critical in determining the maximum number of detections per image, as it is an important hyperparameter to set in the segmentation model. It also affects both accuracy and speed and provides a trade-off between them (Section 3.3). 3.2 Model Training and Results In this work, we compare SparseInst with one of the most popular instance segmentation models, Mask R-CNN [7]. Both models are built upon Detectron2 [21], a module of the PyTorch deep learning framework which provides state-of-the-art segmentation algorithms. The training is conducted on four NVIDIA A5000 GPUs. The batch size is 16 and the maximum number of iterations is 20,000. The maximum number of detections per image $N$ is set to 500. Table 1 shows the performance comparison between Mask R-CNN (default setting) and SparseInst. The evaluation metric for model inference speed is frames per second (FPS) and for accuracy, average precision (AP) [22] is used.

Table 1: Comparisons with Mask R-CNN [7] for mask AP and speed on the IWP dataset. Inference speeds of all models are tested with a single NVIDIA A5000 GPU.

| Model | FPS | AP50 | AP$_S$ | AP$_M$ | AP$_L$ |
|---|---|---|---|---|---|
| Mask R-CNN | 27.01 | 52.86 | 33.28 | 60.03 | 64.39 |
| SparseInst | 45.61 | 53.97 | 31.70 | 60.78 | 68.10 |

As the results show, SparseInst demonstrates better performance in terms of both speed and accuracy than Mask R-CNN.
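The Detectron2-based training settings just described (batch size 16, 20,000 iterations, and a 500-detection cap per image) could be expressed roughly as follows. This is a sketch using Detectron2's standard config keys; SparseInst ships its own project-specific configs, so how its $N$ is actually set there is an assumption here.

```python
# Sketch of the reported training settings in Detectron2's config system.
from detectron2.config import get_cfg

cfg = get_cfg()
cfg.SOLVER.IMS_PER_BATCH = 16          # batch size used in the experiments
cfg.SOLVER.MAX_ITER = 20000            # maximum number of training iterations
cfg.TEST.DETECTIONS_PER_IMAGE = 500    # per-image detection cap (the role of N)
```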
We also separate IWPs into three groups by their areas: small (area < 200 pixels), medium (area between 200 and 450 pixels), and large (area > 450 pixels). Table 1 also shows the average precision (AP) in each group. SparseInst performs slightly worse than Mask R-CNN on small IWP segmentation, but it works better at segmenting medium- to large-size IWPs. Overall, SparseInst yields better detection accuracy than Mask R-CNN. Speed-wise, the model runs nearly twice as fast as Mask R-CNN, achieving real-time performance (model\u2019s inference speed at 30 FPS or above). 3.3 Precision vs. Speed Figure 2 shows the precision and speed trade-off of the SparseInst model and its comparison with Mask R-CNN. We used the default setting of Mask R-CNN to conduct training and testing as it achieves better performance than other experimental settings. Differently, SparseInst requires a predefined $N$ to determine the maximum number of masks and predictions per image. This hyperparameter not only affects the model\u2019s prediction accuracy but also its speed. A larger $N$ will slow down the process of bipartite matching during training and increase model complexity in the decoder part, therefore negatively affecting the model\u2019s efficiency during both training and testing. Here, we tested the model performance at different settings of $N$ (at 100, 300 and 500, respectively). It can be seen that as $N$ decreases, the model\u2019s prediction speed increases (x axis) but its predictive power (y axis) decreases (from 54% at $N$=500 to 51% at $N$=100). For Mask R-CNN, while its prediction accuracy is quite high, the speed is below the threshold of models that can be considered real-time. It is noteworthy that at both $N$=500 and $N$=300, SparseInst achieves better prediction accuracy than Mask R-CNN. This result verifies the importance of carefully setting values of hyperparameters according to data characteristics to achieve satisfying model performance.

[Figure 2: Speed and accuracy trade-off.]

3.4 Prediction Results Figure 3 illustrates segmentation results for two sample images.

[Figure 3: Comparison between ground-truth (a and c) and model segmentation results (b and d). Red arrow: missing prediction; yellow arrow: incorrect prediction.]

Figures 3a and 3c provide the ground-truth labels of the IWPs. The ice-wedge polygons in these two images belong to two distinctive types of IWPs: low-centered (3a, 3b) and high-centered (3c, 3d). A preliminary analysis has also shown that when separating these feature types, thus making the segmentation task more challenging, the performance advantage of SparseInst over Mask R-CNN becomes even more dominant. This reflects the robustness of the SparseInst model in performing high-accuracy and real-time IWP segmentation. Figures 3b and 3d present the model prediction results for the two images to their left (3a and 3c). It can be seen that for smaller objects, although the predicted area is quite close to the ground-truth, the boundary line itself is not as smooth as the human labels (3b). This issue does not exist in segmentation results for large objects (3d). The model did miss predictions for a few IWPs when there are no clear boundaries around them (red arrows in 3b and 3d).
There are also incorrect predictions (yellow arrows in 3d); this is likely due to the semantically different concepts that expert annotators and the machine consider. Interestingly, the model can predict labels for some partial IWPs near the border where they are not labeled by experts." }, { "url": "http://arxiv.org/abs/1812.01496v1", "title": "Sturm: Sparse Tubal-Regularized Multilinear Regression for fMRI", "abstract": "While functional magnetic resonance imaging (fMRI) is important for\nhealthcare/neuroscience applications, it is challenging to classify or\ninterpret due to its multi-dimensional structure, high dimensionality, and\nsmall number of samples available. Recent sparse multilinear regression methods\nbased on tensor are emerging as promising solutions for fMRI, yet existing\nworks rely on unfolding/folding operations and a tensor rank relaxation with\nlimited tightness. The newly proposed tensor singular value decomposition\n(t-SVD) sheds light on new directions. In this work, we study t-SVD for sparse\nmultilinear regression and propose a Sparse tubal-regularized multilinear\nregression (Sturm) method for fMRI. Specifically, the Sturm model performs\nmultilinear regression with two regularization terms: a tubal tensor nuclear\nnorm based on t-SVD and a standard L1 norm. We further derive the algorithm\nunder the alternating direction method of multipliers framework. We perform\nexperiments on four classification problems, including both resting-state fMRI\nfor disease diagnosis and task-based fMRI for neural decoding. The results show\nthe superior performance of Sturm in classifying fMRI using just a small number\nof voxels.", "authors": "Wenwen Li, Jian Lou, Shuo Zhou, Haiping Lu", "published": "2018-12-04", "updated": "2018-12-04", "primary_cat": "cs.CV", "cats": [ "cs.CV" ], "main_content": "Introduction Brain diseases affect millions of people worldwide and impose significant challenges to healthcare systems. Functional magnetic resonance imaging (fMRI) is a key medical imaging technique for diagnosis, monitoring and treatment of brain diseases. Beyond healthcare, fMRI is also an indispensable tool in neuroscience studies (Faro & Mohamed, 2010).

[Figure 1. A 3D fMRI volume has three directions: sagittal (x), coronal (y) and axial (z). The original scans are taken with slices oriented in parallel to the axial plane (as shown in the diagonal). We aim to classify fMRI with only a small number of voxels for easy interpretation by proposing a sparse multilinear regression model with tubal tensor nuclear norm regularization. This figure is best viewed on screen.]

fMRI records the Blood Oxygenation Level Dependent (BOLD) signal caused by changes in blood flow (Ogawa et al., 1990), as depicted in Fig. 1. An fMRI scan is a four-dimensional (4-D) sequence composed of magnetic resonance imaging (MRI) volumes sampled every few seconds, with over 100,000 voxels in each MRI volume. It is often represented as a three-dimensional (3-D) volume, with the statistics of each voxel summarized along the time axis. Figure 1 visualizes example values of such statistics, showing very different characteristics from natural images. Unlike natural images, fMRI data is expensive to obtain.
Thus, the number of fMRI samples in a study is typically limited to dozens. This makes fMRI challenging to analyze, particularly in its full form (i.e., the whole brain). Moreover, in healthcare and neuroscience, prediction accuracy is not the only concern. It is also important to interpret the learned features to domain experts such as clinicians or neuroscientists. This makes sparse learning models (Ryali et al., 2010; Simon et al., 2013; Rao et al., 2013; Hastie et al., 2015) attractive because they can reveal direct dependency of a response on a small portion of input features. Therefore, tensor-based, sparse multilinear regression methods are emerging recently, where tensor refers to a multidimensional array. For simplicity, we consider only third-order (i.e., 3-D) tensors in this paper. Sparse multilinear regression models relate a predictor/feature tensor with a univariate response via a coefficient tensor, generalizing Lasso-based models (Tibshirani, 1996; Hastie et al., 2015) to tensor data. Regularization that promotes sparsity and low rankness is also generalized to the coefficient tensor. For example, the regularized multilinear regression and selection (Remurs) model (Song & Lu, 2017) incorporates a sparse regularization term, via an \u21131 norm, and a Tucker rank-minimization term, via a summation of the nuclear norms (SNN) of unfolded matrices. There are also CANDECOMP/PARAFAC (CP) rank-based methods (Tan et al., 2012; He et al., 2018). For example, the fast stagewise unit-rank tensor factorization (SURF) (He et al., 2018) enforces the low rankness via the CP decomposition model (Hitchcock, 1927; Harshman, 1970; Carroll & Chang, 1970), with the rank to be incrementally estimated via residual computation rather than direct minimization. It also proposes a divide-and-conquer strategy to improve efficiency and scalability with a greedy algorithm (Cormen et al., 2009). Although Remurs minimizes the Tucker rank directly, it involves unfolding/folding operations that transform a tensor to/from a matrix, which break some tensor structure, and SNN is not a tight convex relaxation of its target rank (Romera-Paredes & Pontil, 2013). A new Tubal Tensor Nuclear Norm (TNN) (Zhang et al., 2014; Zhang & Aeron, 2017; Lu et al., 2016; 2018a) is recently proposed based on the tubal rank, which originates from the tensor singular value decomposition (t-SVD) (Kilmer et al., 2013). In tensor recovery problems, TNN guarantees the exact recovery under certain conditions. Improved performance has been observed on image/video data on tensor completion/recovery and robust tensor principal component analysis (PCA) (Zhang & Aeron, 2017; Lu et al., 2016; 2018a). In addition, TNN does not require unfolding/folding of tensors in its optimization. In this work, we study sparse multilinear regression under the t-SVD framework for fMRI classification. The success of TNN is limited to unsupervised learning settings such as completion/recovery and robust PCA. To our knowledge, TNN has not been studied in a supervised setting yet, such as multilinear regression, where the problem is not recovery given samples of a tensor but prediction of a response with a set of training tensor samples. Moreover, the targeted fMRI classification tasks have the additional challenge of small sample size (relative to the feature dimension).
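The kind of estimator this motivates can be sketched as a sparse, TNN-regularized least-squares objective. The display below is a hedged illustration based on the two-regularizer design described above (a tubal tensor nuclear norm plus a standard \u21131 norm), with an assumed squared loss and penalty weights; it is not necessarily the paper\u2019s exact formulation.

```latex
% Sketch: squared loss plus TNN and l1 penalties on the coefficient tensor W,
% where X_n is the n-th predictor tensor and y_n its scalar response.
\min_{\mathcal{W}}\;
  \frac{1}{2N} \sum_{n=1}^{N}
    \bigl( y_n - \langle \mathcal{X}_n, \mathcal{W} \rangle \bigr)^2
  + \lambda_{\mathrm{tnn}} \|\mathcal{W}\|_{\mathrm{TNN}}
  + \lambda_{1} \|\mathcal{W}\|_{1}
```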
Our contributions are twofold. Firstly, we propose a Sparse tubal-regularized multilinear regression (Sturm) method that incorporates TNN regularization. Specifically, we formulate the Sturm model by incorporating both a TNN regularization and a sparsity regularization on the coefficient tensor. We solve the resulting Sturm problem using the alternating direction method of multipliers (ADMM) framework (Boyd et al., 2011). The TNN-based formulation allows efficient parameter updates in the Fourier domain, which is highly parallelizable. Our second contribution is that we evaluate Sturm and related methods on both resting-state and task-based fMRI classification problems, instead of only one of them as in previous works (Zhou et al., 2013b; 2014; Shi et al., 2014; Zhou et al., 2017; He et al., 2017). We use public datasets with identifiable subsets for repeatability, and examine both the classification accuracy and sparsity. The results show Sturm outperforming other state-of-the-art methods on the whole. 2. Related Work Tensor decomposition and rank. There are three popular tensor decomposition methods and associated tensor rank definitions: 1) The CP decomposition of a tensor is written as the summation of $R$ rank-one tensors (Hitchcock, 1927; Harshman, 1970; Carroll & Chang, 1970), where $R$ is the CP rank. 2) For a 3-D tensor, the Tucker decomposition decomposes it into a core tensor and three factor matrices (De Lathauwer et al., 2000), and the Tucker rank is a 3-tuple consisting of the rank of each mode-$n$ unfolded matrix. 3) The t-SVD views a 3-D tensor as a matrix of tubes (mode-3 vectors) oriented along the third dimension and decomposes it as circular convolutions of three tensors (Braman, 2010; Kilmer et al., 2013; Kilmer & Martin, 2011; Gleich et al., 2013). This leads to a new tensor tubal rank, defined as the number of non-zero singular tubes. Please refer to (Zhang & Aeron, 2017; Zhang et al., 2014; Yuan & Zhang, 2016; Lu et al., 2016; 2018a; Mu et al., 2014) for more detailed discussion, e.g., on their pros and cons. Low-rank tensor completion/recovery. Tensor completion/recovery takes a tensor with missing/noisy entries as the input and aims to recover/complete those entries. The low-rank assumption makes solving such problems feasible, often with provable theoretical guarantees under mild conditions. The three types of tensor decomposition have their corresponding tensor completion/recovery approaches that minimize the respective tensor ranks. Direct minimization of tensor rank is NP-hard (Hillar & Lim, 2013). Therefore, the CP rank, Tucker rank, and tubal rank are relaxed to the CP-based nuclear norm (Shi et al., 2017), the sum of matrix nuclear norms (Liu et al., 2013), and the tubal tensor nuclear norm, respectively. Sparse and low-rank multilinear regression. Sparse and low-rank constraints have also been applied to multilinear regression problems. Multilinear regression models (Signoretto et al., 2010; Su et al., 2012; Guo et al., 2012; Zhou et al., 2013a) relate a predictor/feature tensor $\mathcal{X} \in \mathbb{R}^{I_1 \times I_2 \times I_3}$ with a univariate response $y$ via a coefficient tensor $\mathcal{W} \in \mathbb{R}^{I_1 \times I_2 \times I_3}$.
The Remurs model (Song & Lu, 2017) introduces regularization with a sparsity constraint, via the $\ell_1$ norm, and a low-rank constraint via the Tucker rank-based SNN. Instead of minimizing the rank, the fast stagewise unit-rank tensor factorization (SURF) (He et al., 2018) is an efficient and scalable method that imposes a low CP rank constraint by setting a maximum rank and increasing the rank estimate from 1 to the maximum, stopping upon zero residual. Tensor methods on fMRI. Due to the high dimensionality of fMRI data, regions of interest (ROIs) are typically used rather than all voxels in the original 3-D spatial domain (Chen et al., 2015; He et al., 2018). ROI analysis requires strong prior knowledge to determine the regions. In contrast, whole-brain fMRI analysis (Poldrack et al., 2013) is more data-driven. Thus, tensor-based machine learning methods (Cichocki, 2013; Cichocki et al., 2009) have been developed for fMRI, with promising results reported (Acar et al., 2017; Zhou et al., 2013a; Song et al., 2015; He et al., 2017; Song & Lu, 2017; Ozdemir et al., 2017; Barnathan et al., 2011). However, in these works, the learning methods are only evaluated on either resting-state fMRI for disease diagnosis (Zhou et al., 2017) or task-based fMRI for neural decoding (He et al., 2017; Chen et al., 2015; Song & Lu, 2017), but not both. 3. Tubal Tensor Nuclear Norm Notations. Table 1 summarizes important notations used in this paper.
Table 1. Important notations.
$a$ | lowercase letter denotes a scalar
$\mathbf{a}$ | bold lowercase letter denotes a vector
$\mathbf{A}$ | bold uppercase letter denotes a matrix
$\mathcal{A}$ | calligraphic uppercase letter denotes a tensor
$\mathcal{A} * \mathcal{B}$ | t-product between tensors $\mathcal{A}$ and $\mathcal{B}$
$\mathcal{A}^\top$ | tensor conjugate transpose of $\mathcal{A}$
$\mathcal{A}^{(i_3)}$ | the $i_3$th mode-3 (frontal) slice of $\mathcal{A}$
$\mathcal{A}_F$ | the discrete Fourier transform of $\mathcal{A}$
We use lowercase, bold lowercase, bold uppercase, and calligraphic uppercase letters to denote scalars, vectors, matrices, and tensors, respectively. We denote indices by lowercase letters spanning the range from 1 to the uppercase letter of the index, e.g., $m = 1, \dots, M$. A third-order tensor $\mathcal{A} \in \mathbb{R}^{I_1 \times I_2 \times I_3}$ is addressed by three indices $\{i_n\}$, $n = 1, 2, 3$. Each $i_n$ usually addresses the $n$th mode of $\mathcal{A}$, although this convention may not be strictly followed when the context is clear. The $i_3$th mode-3 slice, a.k.a. the frontal slice, of $\mathcal{A}$ is denoted as $\mathcal{A}^{(i_3)}$, a matrix obtained by fixing the mode-3 index $i_3$, i.e., $\mathcal{A}^{(i_3)} = \mathcal{A}(:,:,i_3)$. The $(i_1, i_2)$th tube of $\mathcal{A}$, denoted as $\mathring{a}_{i_1 i_2}$, is a mode-3 vector obtained by fixing the first two mode indices, i.e., $\mathcal{A}(i_1, i_2, :)$. Figure 2. Illustration of the t-SVD $\mathcal{A} = \mathcal{U} * \mathcal{S} * \mathcal{V}^\top$, assuming $I_1 > I_2$. t-SVD and tubal rank. We first review the t-SVD framework following the definitions in (Kilmer et al., 2013). The t-product (tensor-tensor product) between tensors $\mathcal{A} \in \mathbb{R}^{I_1 \times I_2 \times I_3}$ and $\mathcal{B} \in \mathbb{R}^{I_2 \times J_4 \times I_3}$ is defined as $\mathcal{A} * \mathcal{B} = \mathcal{C} \in \mathbb{R}^{I_1 \times J_4 \times I_3}$. The $(i_1, j_4)$th tube $\mathring{c}_{i_1 j_4}$ of $\mathcal{C}$ is computed as $\mathring{c}_{i_1 j_4} = \mathcal{C}(i_1, j_4, :) = \sum_{i_2=1}^{I_2} \mathcal{A}(i_1, i_2, :) * \mathcal{B}(i_2, j_4, :)$ (1), where $*$ denotes the circular convolution (Rabiner & Gold, 1975) between two tubes (vectors) of the same size, and the respective t-product between tensors.
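To make Eq. (1) concrete, here is a minimal NumPy sketch of the t-product (the helper name tprod is our own, not from the paper): circular convolution along mode 3 becomes an element-wise product in the Fourier domain, so each frontal slice of $\mathcal{C}_F$ is an ordinary matrix product of the corresponding slices of $\mathcal{A}_F$ and $\mathcal{B}_F$.

import numpy as np

def tprod(A, B):
    # t-product C = A * B of A (I1 x I2 x I3) and B (I2 x J4 x I3).
    AF = np.fft.fft(A, axis=2)
    BF = np.fft.fft(B, axis=2)
    CF = np.einsum('ikn,kjn->ijn', AF, BF)    # slice-wise matrix products
    return np.real(np.fft.ifft(CF, axis=2))   # real for real inputs

# tiny check of one tube against the definition in Eq. (1):
A = np.random.randn(3, 4, 5); B = np.random.randn(4, 2, 5)
C = tprod(A, B)
c_direct = sum(np.real(np.fft.ifft(np.fft.fft(A[0, k, :]) * np.fft.fft(B[k, 1, :])))
               for k in range(4))
assert np.allclose(C[0, 1, :], c_direct)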
The tensor conjugate transpose of $\mathcal{A} \in \mathbb{R}^{I_1 \times I_2 \times I_3}$ is denoted as $\mathcal{A}^\top \in \mathbb{R}^{I_2 \times I_1 \times I_3}$, obtained by conjugate transposing each of the frontal slices and then reversing the order of the transposed frontal slices $\mathcal{A}(:,:,i_3)$. A tensor $\mathcal{I} \in \mathbb{R}^{I \times I \times I_3}$ is an identity tensor if its first mode-3 slice $\mathcal{I}^{(1)}$ is an $I \times I$ identity matrix and all the remaining mode-3 slices, i.e., $\mathcal{I}^{(i_3)}$ for $i_3 = 2, \dots, I_3$, are zero matrices. An orthogonal tensor is a tensor $\mathcal{Q} \in \mathbb{R}^{I \times I \times I_3}$ that satisfies the condition $\mathcal{Q}^\top * \mathcal{Q} = \mathcal{Q} * \mathcal{Q}^\top = \mathcal{I}$ (2), where $\mathcal{I} \in \mathbb{R}^{I \times I \times I_3}$ is an identity tensor and $*$ is the t-product defined above. If all of $\mathcal{A}$'s mode-3 slices $\mathcal{A}^{(i_3)}$, $i_3 = 1, \dots, I_3$, are diagonal matrices, it is called an f-diagonal tensor. Based on these definitions, the t-SVD of $\mathcal{A} \in \mathbb{R}^{I_1 \times I_2 \times I_3}$ is defined as $\mathcal{A} = \mathcal{U} * \mathcal{S} * \mathcal{V}^\top$ (3), where $\mathcal{U} \in \mathbb{R}^{I_1 \times I_1 \times I_3}$ and $\mathcal{V} \in \mathbb{R}^{I_2 \times I_2 \times I_3}$ are orthogonal tensors, and $\mathcal{S} \in \mathbb{R}^{I_1 \times I_2 \times I_3}$ is an f-diagonal tensor. This t-SVD definition leads to a new tensor rank, the tubal rank, defined as the number of nonzero singular tubes of $\mathcal{S}$, i.e., $\#\{i_2 : \mathcal{S}(i_2, i_2, :) \neq 0\}$, assuming $I_1 \geq I_2$. Figure 2 is an illustration of the t-SVD. t-SVD via Fourier transform. The t-SVD can be computed via the discrete Fourier transform (DFT) for better efficiency. We denote the Fourier-transformed tensor of $\mathcal{A}$ as $\mathcal{A}_F$, obtained via the fast Fourier transform (FFT) along mode 3, i.e., $\mathcal{A}_F = \mathrm{fft}(\mathcal{A}, [\,], 3)$. The connection between the t-SVD and the DFT is detailed in (Kilmer et al., 2013). Tubal tensor nuclear norm. TNN is a convex relaxation of the tubal rank, as an average tubal multi-rank within the unit tensor spectral norm ball (Lu et al., 2018a). It can be defined via the DFT, similar to the t-SVD computation above. The TNN of $\mathcal{A} \in \mathbb{R}^{I_1 \times I_2 \times I_3}$ is defined as $\|\mathcal{A}\|_{TNN} = \frac{1}{I_3} \sum_{i_3=1}^{I_3} \|\mathbf{A}_F^{(i_3)}\|_*$ (4), where $\|\cdot\|_*$ denotes the matrix nuclear norm. Please refer to (Lu et al., 2018a) for a detailed derivation and complete theoretical analysis, e.g., on the tightness of the relaxation. Note that when $I_3 = 1$, the above definitions for tensors are equivalent to their counterparts for matrices.
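As a check on Eq. (4), a minimal NumPy sketch (the helper name tnn is our own) that evaluates the TNN by averaging the nuclear norms of the Fourier-domain frontal slices:

import numpy as np

def tnn(A):
    # Eq. (4): mean nuclear norm of the frontal slices of the mode-3 FFT of A.
    AF = np.fft.fft(A, axis=2)
    return sum(np.linalg.norm(AF[:, :, k], 'nuc') for k in range(A.shape[2])) / A.shape[2]

# For I3 = 1 the TNN reduces to the matrix nuclear norm, as noted above:
M = np.random.randn(5, 4)
assert np.isclose(tnn(M[:, :, None]), np.linalg.norm(M, 'nuc'))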
4. Sparse Multilinear Regression with Tubal Rank Regularization Tubal rank-based TNN has been shown to be superior to Tucker rank-based SNN in tensor completion/recovery (Zhang & Aeron, 2017) and tensor robust PCA (Lu et al., 2016), which are all unsupervised learning settings. To our knowledge, there is no study of TNN in a supervised learning setting yet. In this work, we explore supervised learning with TNN and study whether TNN can improve supervised learning, e.g., multilinear regression. In the following, we propose the Sturm model and derive the Sturm algorithm under the ADMM framework. 4.1. The Sturm model We incorporate TNN in the multilinear regression problem, which trains a model from $M$ pairs of feature tensors and their response labels $(\mathcal{X}_m \in \mathbb{R}^{I_1 \times I_2 \times I_3}, y_m)$, $m = 1, \dots, M$, to relate them via a coefficient tensor $\mathcal{W} \in \mathbb{R}^{I_1 \times I_2 \times I_3}$. This can be achieved by minimizing a loss function, typically with certain regularization: $\min_{\mathcal{W}} \frac{1}{M} \sum_{m=1}^{M} L(\langle \mathcal{X}_m, \mathcal{W} \rangle, y_m) + \lambda \Omega(\mathcal{W})$ (5), where $L(\cdot)$ is a loss function, $\Omega(\cdot)$ is a regularization function, $\lambda$ is a balancing hyperparameter, and $\langle \mathcal{X}, \mathcal{W} \rangle$ denotes the inner product (a.k.a. the scalar product) of two tensors of the same size, defined as $\langle \mathcal{X}, \mathcal{W} \rangle := \sum_{i_1} \sum_{i_2} \sum_{i_3} \mathcal{X}(i_1, i_2, i_3) \cdot \mathcal{W}(i_1, i_2, i_3)$ (6). The Remurs model (Song & Lu, 2017) uses a conventional least-squares loss function and assumes $\mathcal{W}$ to be both sparse and low rank. The sparsity of $\mathcal{W}$ is regularized by an $\ell_1$ norm and the rank by an SNN norm. However, the SNN requires unfolding $\mathcal{W}$ into matrices, which is susceptible to losing some higher-order structural information. Moreover, it has been pointed out in (Romera-Paredes & Pontil, 2013) that SNN is not a tight convex relaxation of its target rank. This motivates us to propose a Sparse tubal-regularized multilinear regression (Sturm) model, which replaces the SNN in Remurs with TNN. This leads to the following objective function: $\min_{\mathcal{W}} \frac{1}{2} \sum_{m=1}^{M} (y_m - \langle \mathcal{X}_m, \mathcal{W} \rangle)^2 + \tau \|\mathcal{W}\|_{TNN} + \gamma \|\mathcal{W}\|_1$ (7), where $\tau$ and $\gamma$ are hyperparameters, and $\|\mathcal{W}\|_1$ is the $\ell_1$ norm of the tensor $\mathcal{W}$, defined as $\|\mathcal{W}\|_1 = \sum_{i_1} \sum_{i_2} \sum_{i_3} |\mathcal{W}(i_1, i_2, i_3)|$ (8), which is equivalent to the $\ell_1$ norm of its vectorized representation $\mathbf{w}$. Here, the TNN regularization term $\|\mathcal{W}\|_{TNN}$ enforces low tubal rank in $\mathcal{W}$. The trade-off between $\tau$ and $\gamma$, as well as the degenerate versions, follow the analysis for Remurs (Song & Lu, 2017).
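A direct transcription of the Sturm objective in Eq. (7), reusing the tnn helper sketched above (the function name sturm_objective is our own):

def sturm_objective(W, Xs, ys, tau, gamma):
    # Eq. (7): squared loss + tau * TNN + gamma * l1; <X, W> is the
    # element-wise inner product of Eq. (6). Assumes numpy as np and tnn().
    loss = 0.5 * sum((y - np.sum(X * W)) ** 2 for X, y in zip(Xs, ys))
    return loss + tau * tnn(W) + gamma * np.sum(np.abs(W))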
4.2. The Sturm algorithm via ADMM ADMM (Boyd et al., 2011) is a standard solver for Problem (7). Thus, we derive an ADMM algorithm to optimize the Sturm objective function. We begin by introducing two auxiliary variables, $\mathcal{A}$ and $\mathcal{B}$, to disentangle the TNN and the $\ell_1$-norm regularization: $\min_{\mathcal{W}} \frac{1}{2} \sum_{m=1}^{M} (y_m - \langle \mathcal{X}_m, \mathcal{A} \rangle)^2 + \tau \|\mathcal{B}\|_{TNN} + \gamma \|\mathcal{W}\|_1$ s.t. $\mathcal{A} = \mathcal{W}$ and $\mathcal{B} = \mathcal{W}$ (9). Then, we introduce two Lagrangian dual variables, $\mathcal{P}$ (for $\mathcal{A}$) and $\mathcal{Q}$ (for $\mathcal{B}$). With a Lagrangian constant $\rho$, the augmented Lagrangian becomes $L_\rho(\mathcal{A}, \mathcal{B}, \mathcal{W}, \mathcal{P}, \mathcal{Q}) = \frac{1}{2} \sum_{m=1}^{M} (y_m - \langle \mathcal{X}_m, \mathcal{A} \rangle)^2 + \tau \|\mathcal{B}\|_{TNN} + \gamma \|\mathcal{W}\|_1 + \langle \mathcal{P}, \mathcal{A} - \mathcal{W} \rangle + \frac{\rho}{2} \|\mathcal{A} - \mathcal{W}\|_F^2 + \langle \mathcal{Q}, \mathcal{B} - \mathcal{W} \rangle + \frac{\rho}{2} \|\mathcal{B} - \mathcal{W}\|_F^2$ (10). We further introduce two scaled dual variables, $\mathcal{P}' = \frac{1}{\rho} \mathcal{P}$ and $\mathcal{Q}' = \frac{1}{\rho} \mathcal{Q}$, only for notational convenience. Next, we derive the update from iteration $k$ to $k+1$ by taking an alternating strategy, i.e., minimizing over one variable with all other variables fixed. Updating $\mathcal{A}^{k+1}$: $\mathcal{A}^{k+1} = \arg\min_{\mathcal{A}} L_\rho(\mathcal{A}, \mathcal{B}^k, \mathcal{W}^k, \mathcal{P}'^k, \mathcal{Q}'^k) = \arg\min_{\mathcal{A}} \frac{1}{2} \sum_{m=1}^{M} (y_m - \langle \mathcal{X}_m, \mathcal{A} \rangle)^2 + \frac{\rho}{2} \|\mathcal{A} - \mathcal{W}^k + \mathcal{P}'^k\|_F^2$ (11). This can be rewritten as a linear-quadratic objective function by vectorizing all the tensors. Specifically, let $\mathbf{a} = \mathrm{vec}(\mathcal{A})$, $\mathbf{w}^k = \mathrm{vec}(\mathcal{W}^k)$, $\mathbf{p}'^k = \mathrm{vec}(\mathcal{P}'^k)$, $\mathbf{y} = [y_1 \cdots y_M]^\top$, $\mathbf{x}_m = \mathrm{vec}(\mathcal{X}_m)$, and $\mathbf{X} = [\mathbf{x}_1 \cdots \mathbf{x}_M]^\top$. Then we get an equivalent objective function with the following solution: $\mathbf{a}^{k+1} = (\mathbf{X}^\top \mathbf{X} + \rho \mathbf{I})^{-1} (\mathbf{X}^\top \mathbf{y} + \rho (\mathbf{w}^k - \mathbf{p}'^k))$ (12), where $\mathbf{I}$ is an identity matrix. Note that this does not break/lose any structure because Eq. (11) and Eq. (12) are equivalent. $\mathcal{A}^{k+1}$ is obtained by folding (reshaping) $\mathbf{a}^{k+1}$ into a third-order tensor, denoted as $\mathcal{A}^{k+1} = \mathrm{tensor3}(\mathbf{a}^{k+1})$. Here, for a fixed $\rho$, we can avoid a high per-iteration cost of updating $\mathbf{a}^{k+1}$ by pre-computing a Cholesky decomposition of $(\mathbf{X}^\top \mathbf{X} + \rho \mathbf{I})$, which does not change over iterations. Updating $\mathcal{B}^{k+1}$: $\mathcal{B}^{k+1} = \arg\min_{\mathcal{B}} L_\rho(\mathcal{A}^{k+1}, \mathcal{B}, \mathcal{W}^k, \mathcal{P}'^k, \mathcal{Q}'^k) = \arg\min_{\mathcal{B}} \tau \|\mathcal{B}\|_{TNN} + \frac{\rho}{2} \|\mathcal{B} - \mathcal{W}^k + \mathcal{Q}'^k\|_F^2 = \mathrm{prox}_{\frac{\tau}{\rho}\|\cdot\|_{TNN}}(\mathcal{W}^k - \mathcal{Q}'^k)$ (13). This means that $\mathcal{B}^{k+1}$ can be solved by passing the parameter $\frac{\tau}{\rho}$ to the proximal operator of the TNN (Zhang et al., 2014; Zhang & Aeron, 2017). The proximal operator for the TNN at a tensor $\mathcal{T}$ with parameter $\mu$ is denoted by $\mathrm{prox}_{\mu\|\cdot\|_{TNN}}(\mathcal{T})$ and defined as $\mathrm{prox}_{\mu\|\cdot\|_{TNN}}(\mathcal{T}) := \arg\min_{\mathcal{W}} \mu \|\mathcal{W}\|_{TNN} + \frac{1}{2} \|\mathcal{W} - \mathcal{T}\|_F^2$ (14), where $\|\cdot\|_F$ is the Frobenius norm, defined as $\|\mathcal{T}\|_F = \sqrt{\langle \mathcal{T}, \mathcal{T} \rangle}$ using Eq. (6). The proximal operator for TNN can be computed more efficiently in the Fourier domain, as in Algorithm 1, where $(\mathbf{s} - \mu)_+ = \max\{\mathbf{s} - \mu, 0\}$ (applied element-wise in Step 4) and $\mathrm{diag}(\mathbf{a})$ denotes a diagonal matrix whose diagonal elements are from $\mathbf{a}$.
Algorithm 1: Proximal operator for TNN, $\mathrm{prox}_{\mu\|\cdot\|_{TNN}}(\mathcal{T})$. Require: $\mathcal{T} \in \mathbb{R}^{I_1 \times I_2 \times I_3}$, $\mu$.
1: $\mathcal{T}_F = \mathrm{fft}(\mathcal{T}, [\,], 3)$;
2: for $i_3 = 1, 2, \dots, I_3$ do
3: $[\mathbf{U}, \mathrm{diag}(\mathbf{s}), \mathbf{V}] = \mathrm{svd}(\mathbf{T}_F^{(i_3)})$;
4: $\mathbf{Z}_F^{(i_3)} = \mathbf{U}\,\mathrm{diag}((\mathbf{s} - \mu)_+)\,\mathbf{V}^\top$;
5: end for
6: $\mathrm{prox}_{\mu\|\cdot\|_{TNN}}(\mathcal{T}) = \mathrm{ifft}(\mathcal{Z}_F, [\,], 3)$.
Updating $\mathcal{W}^{k+1}$: $\mathcal{W}^{k+1} = \arg\min_{\mathcal{W}} L_\rho(\mathcal{A}^{k+1}, \mathcal{B}^{k+1}, \mathcal{W}, \mathcal{P}'^k, \mathcal{Q}'^k) = \mathrm{prox}_{\frac{\gamma}{2\rho}\|\cdot\|_1}\big(\frac{\mathcal{A}^{k+1} + \mathcal{P}'^k + \mathcal{B}^{k+1} + \mathcal{Q}'^k}{2}\big)$ (15). It can be solved by calling the proximal operator of the $\ell_1$ norm with parameter $\frac{\gamma}{2\rho}$, which is simply element-wise soft-thresholding, i.e., $\mathrm{prox}_{\mu\|\cdot\|_1}(\mathcal{T}) = (\mathcal{T} - \mu)_+$ (16). Updating $\mathcal{P}^{k+1}$ and $\mathcal{Q}^{k+1}$: the updates of $\mathcal{P}$ and $\mathcal{Q}$ are simply dual ascent steps: $\mathcal{P}'^{k+1} = \mathcal{P}'^k + \mathcal{A}^{k+1} - \mathcal{W}^{k+1}$ (17) and $\mathcal{Q}'^{k+1} = \mathcal{Q}'^k + \mathcal{B}^{k+1} - \mathcal{W}^{k+1}$ (18).
Algorithm 2: ADMM for Sturm. Require: $(\mathcal{X}_m, y_m)$ for $m = 1, \dots, M$, $\tau$, and $\gamma$.
1: Initialize $\mathcal{A}^0, \mathcal{B}^0, \mathcal{W}^0, \mathcal{P}'^0, \mathcal{Q}'^0$ to all-zero tensors and set $\rho$ and $K$.
2: for $k = 1, \dots, K$ do
3: Update $\mathcal{A}^{k+1}$ by Eq. (12);
4: Update $\mathcal{B}^{k+1}$ by Algorithm 1 as $\mathrm{prox}_{\frac{\tau}{\rho}\|\cdot\|_{TNN}}(\mathcal{W}^k - \mathcal{Q}'^k)$;
5: Update $\mathcal{W}^{k+1}$ by Eq. (16) as $\mathrm{prox}_{\frac{\gamma}{2\rho}\|\cdot\|_1}\big(\frac{\mathcal{A}^{k+1} + \mathcal{P}'^k + \mathcal{B}^{k+1} + \mathcal{Q}'^k}{2}\big)$;
6: $\mathcal{P}'^{k+1} = \mathcal{P}'^k + \mathcal{A}^{k+1} - \mathcal{W}^{k+1}$;
7: $\mathcal{Q}'^{k+1} = \mathcal{Q}'^k + \mathcal{B}^{k+1} - \mathcal{W}^{k+1}$;
8: end for. Ensure: $\mathcal{W}^K$.
The complete procedure is summarized in Algorithm 2. The code will be made publicly available via GitHub. 4.3. Computational complexity Finally, we analyze the per-iteration computational complexity of Algorithm 2. Let $I = I_1 I_2 I_3$. Step 3 takes $O(IM + \min\{M^2, I^2\})$. Step 4 takes $O(\min\{I_1, I_2\}\,I)$ for the singular value thresholding in the Fourier domain, plus $O(I \log I_3)$ for the fft and ifft. Step 5 takes $O(I)$ because the proximal operation for the $\ell_1$ norm is element-wise. Steps 6 and 7 take $O(I)$. As a result, in a high-dimensional (or small-sample) setting where $I \gg M$, the per-iteration complexity is $O(I(\log I_3 + M))$.
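Putting Algorithms 1 and 2 together, a minimal NumPy sketch of one plausible implementation (the function names prox_tnn and sturm_admm are our own; for brevity it uses an explicit matrix inverse and the standard signed soft-thresholding, whereas the paper caches a Cholesky factorization and writes the threshold as in Eq. (16)):

import numpy as np

def prox_tnn(T, mu):
    # Algorithm 1: singular-value thresholding of each Fourier-domain slice.
    TF = np.fft.fft(T, axis=2)
    ZF = np.empty_like(TF)
    for k in range(T.shape[2]):
        U, s, Vh = np.linalg.svd(TF[:, :, k], full_matrices=False)
        ZF[:, :, k] = (U * np.maximum(s - mu, 0.0)) @ Vh
    return np.real(np.fft.ifft(ZF, axis=2))

def sturm_admm(Xs, y, tau, gamma, rho=1.0, K=100):
    # Algorithm 2: updates (12), (13), (15), (17), (18).
    shape = Xs[0].shape
    X = np.stack([x.ravel() for x in Xs])                    # M x I design matrix
    H = np.linalg.inv(X.T @ X + rho * np.eye(X.shape[1]))    # cached across iterations
    W = A = B = P = Q = np.zeros(shape)
    for _ in range(K):
        a = H @ (X.T @ y + rho * (W - P).ravel())            # Eq. (12)
        A = a.reshape(shape)
        B = prox_tnn(W - Q, tau / rho)                       # Eq. (13)
        T = (A + P + B + Q) / 2.0                            # Eq. (15), mu = gamma/(2 rho)
        W = np.sign(T) * np.maximum(np.abs(T) - gamma / (2 * rho), 0.0)
        P = P + A - W                                        # Eq. (17)
        Q = Q + B - W                                        # Eq. (18)
    return W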
5. Experiments We evaluate our Sturm algorithm on four binary classification problems with six datasets from three public fMRI repositories. While known works typically focus on either resting-state or task-based fMRI only, and rarely both, here we study both types. We test the efficacy of the Sturm approach against five state-of-the-art algorithms and three additional variations, in terms of both classification accuracy and sparsity. 5.1. Classification problems and datasets Resting-state fMRI for disease diagnosis. Resting-state fMRI is scanned when the subject is not doing anything, i.e., at rest. It is commonly used for brain disease diagnosis, i.e., classifying clinical populations. In this paper, we consider only the binary classification of patients (positive) and healthy control subjects (negative). Task-based fMRI for neural decoding. Task-based fMRI is scanned when the subject is performing certain tasks, such as viewing pictures or reading sentences (Wang et al., 2003). It is commonly used for studies decoding brain cognitive states, or neural decoding. The objective is to classify (decode) the tasks performed by subjects using the fMRI information. In this paper, we consider only binary classification of two different tasks. Chosen datasets. We study four fMRI classification problems on six datasets from three public repositories, with the key information summarized in Table 2. Two are disease diagnosis problems on resting-state fMRI, and the other two are neural decoding problems on task-based fMRI, as described below.
Table 2. Summary of the four classification problems and respective datasets. # denotes the number of volumes (samples). Pos. and Neg. are short for the positive and negative classes, respectively. For the diagnosis problems on ABIDE/ADHD-200, patients and healthy subjects are considered as the positive and negative classes, respectively. The two neural decoding problems are formed by using six OpenfMRI datasets, listed as Pos. vs Neg.
Classification Problem | # Pos. | # Neg. | Input data size $I_1 \times I_2 \times I_3$
Resting 1 - ABIDE (NYU & UM) | 101 | 131 | 61 × 73 × 61
Resting 2 - ADHD-200 (NYU) | 118 | 98 | 53 × 64 × 46
Task 1 - Balloon vs Mixed | 32 | 32 | 91 × 109 × 91
Task 2 - Simon vs Flanker | 42 | 52 | 91 × 109 × 91
• Resting 1 - ABIDE (NYU & UM): The Autism Brain Imaging Data Exchange (ABIDE) [1] (Craddock et al., 2013) consists of patients with autism spectrum disorder (ASD) and healthy control subjects. We chose the largest two subsets, contributed by New York University (NYU) and the University of Michigan (UM). The fMRI data has been preprocessed by the pipeline of the Configurable Pipeline for the Analysis of Connectomes (CPAC). Quality control was performed by selecting the functional images with quality 'OK' reported in the phenotype data. • Resting 2 - ADHD-200 (NYU): We chose the NYU subset from the Attention Deficit Hyperactivity Disorder 200 (ADHD-200) dataset [2] (Bellec et al., 2017), with ADHD patients and healthy controls. The raw data is preprocessed by the pipeline of the Neuroimaging Analysis Kit (NIAK). [1] http://fcon_1000.projects.nitrc.org/indi/abide [2] http://neurobureau.projects.nitrc.org/ADHD200/Data.html • Task 1 - Balloon vs Mixed gamble: We chose two gamble-related datasets from the OpenfMRI repository [3] (Poldrack et al., 2013) project to form a classification problem.
They are 1) the Balloon analog risk-taking task (BART) and 2) the Mixed gambles task. • Task 2 - Simon vs Flanker: We chose another two recognition- and response-related tasks from OpenfMRI for binary classification. They are 1) the Simon task and 2) the Flanker task. Resting-state fMRI preprocessing. The raw resting-state brain fMRI data is 4-D. We follow typical approaches to reduce the 4-D data to 3-D by taking either the average (He et al., 2017) or the amplitude (Yu-Feng et al., 2007) of the low-frequency fluctuation of voxel values along the time dimension. We perform experiments on both and report the best results. Task-based fMRI preprocessing. Following (Poldrack et al., 2013), we re-implemented a preprocessing pipeline to process the OpenfMRI data to obtain the 3D statistical parametric maps (SPMs) for each brain condition with a standard template. We used the same criteria as in (Poldrack et al., 2013) to select one contrast (one specific brain condition over experimental conditions) per task for classification. The tubal mode of fMRI. Figure 1 illustrates how the fMRI scan is obtained along the axial direction, which is mode 3 in the tensor representation. Each image along the diagonal is a mode-3 (frontal) slice. Therefore, it is a natural choice to consider mode 3 as the tubal mode to apply Sturm. [3] Data used in this paper are available at OpenfMRI: https://legacy.openfmri.org, now known as OpenNeuro: https://openneuro.org. 5.2. Algorithms We evaluate Sturm and Sturm + SVM (support vector machine) against the following five algorithms and three additional algorithms formed via combination with SVM. • SVM: We chose linear SVM for both speed and prediction accuracy. (We studied both the linear and Gaussian RBF kernel SVM and found the linear one performs better on the whole.) • Lasso: a linear regression method with $\ell_1$ norm regularization. • Elastic Net (ENet): a linear regression method with $\ell_1$ and $\ell_2$ regularization. • Remurs (Song & Lu, 2017): a multilinear regression model with $\ell_1$ norm and Tucker rank-based SNN regularization. • Multi-way Multi-level Kernel Modeling (MMK) (He et al., 2017): a kernelized CP tensor factorization method to learn nonlinear features from tensors. Gaussian RBF kernel MMK is used with a pre-computed kernel SVM. SVM, Lasso, and ENet take vectorized fMRI data as input, while Remurs and MMK directly take 3-D fMRI tensors as input. Lasso, ENet, Remurs, and Sturm can also be used for (embedded) feature selection. Therefore, we can add an SVM after each of them to obtain Lasso + SVM, ENet + SVM, Remurs + SVM and Sturm + SVM. The code for Sturm is built on the software library from (Lu, 2016; Lu et al., 2018b). Remurs, Lasso, and ENet are implemented with the SLEP package (Liu et al., 2009). MMK code is kindly provided by the first author of (He et al., 2017). 5.3. Algorithm and evaluation settings Model hyperparameter tuning. Default settings are used for all existing algorithms. For Sturm, we follow the Remurs default setting (Song & Lu, 2017) to set $\rho$ to 1 and use the same set $\{10^{-3}, 5 \times 10^{-3}, 10^{-2}, \dots, 5 \times 10^{2}, 10^{3}\}$ for $\tau$ and $\gamma$, while scaling the first term in Eq. (7) by a factor $\alpha = \sqrt{\max(I_1, I_2) \times I_3}$ to better balance the scales of the loss function and regularization terms (Lu et al., 2016; 2018a).
Image resizing. To improve computational efficiency and to reduce the small-sample-size problem (and overfitting), the input 3-D tensors are further resized into three different sizes with a factor $\beta$, chosen from $\{0.3, 0.5, 0.7\}$. Feature selection. In Lasso + SVM, ENet + SVM, Remurs + SVM, and Sturm + SVM, we rank the selected features by their associated absolute values in $\mathcal{W}$ in descending order and feed the top $\eta\%$ of the features to the SVM. We study five values of $\eta$: $\{1, 5, 10, 50, 100\}$. Evaluation metric and method. The classification accuracy is our primary evaluation metric, and we also examine the sparsity of the obtained solution for all algorithms except SVM and MMK. For a particular binary classification problem, we perform ten-fold cross validation and report the mean and standard deviation of the classification accuracy and sparsity over ten runs. For each of the ten (test) folds, we perform an inner nine-fold cross validation using the remaining nine folds to determine the $\tau$ and $\gamma$ (jointly for ENet, Remurs, and Sturm on a $13 \times 13$ grid), $\beta$, and $\eta$ above that give the highest classification accuracy, with the corresponding sparsity recorded. The sparsity is calculated as the ratio of the number of zeros in the output coefficient tensor $\mathcal{W}$ to its size $I_1 \times I_2 \times I_3$. In general, higher sparsity implies better interpretability (Hastie et al., 2015). 5.4. Results and discussion Table 3 reports the classification accuracy for all algorithms except MMK. This is because all MMK results are below 60% on all four problems, possibly due to the default settings of the CP rank and SVM kernel. (We have tried a few alternative settings without improvement, though further tuning may still lead to better results.) Table 4 presents the respective sparsity values except for SVM, which uses all features, so its sparsity is zero. In both tables, the best results are highlighted in bold, with the second best ones underlined. Performance on Resting 1 & 2. For these two resting-state problems, Sturm + SVM has the highest accuracy of 65.45%, and Lasso is the second best with 2.45% lower accuracy. In terms of sparsity, Sturm + SVM and Sturm are the top two. Specifically, on Resting 1, Remurs + SVM and Sturm + SVM are the top two algorithms with almost identical accuracy of 64.67% and 64.66%, respectively. Moreover, Sturm + SVM also has the highest sparsity of 0.87 and Sturm has the second-highest sparsity of 0.86. For Resting 2, Sturm + SVM has outperformed all other algorithms on both accuracy (66.24%) and sparsity (0.99). Performance on Task 1 & 2. For these two task-based problems, Sturm has outperformed all other algorithms in accuracy, with 89.10% on Task 1 and 86.89% on Task 2. Lasso is again the second best in accuracy. ENet and ENet + SVM have the best sparsity of 0.96, while their accuracy values are only 81.87% and 74.21%, respectively. Sturm + SVM has a significant drop in accuracy compared with Sturm alone, and Lasso + SVM, ENet + SVM and Remurs + SVM all have lower accuracy compared to the versions without SVM.
Table 3. Classification accuracy (mean ± standard deviation in %). Resting 1 and Resting 2 denote the two disease diagnosis problems on ABIDE (NYU & UM) and ADHD-200, respectively. Task 1 and Task 2 denote the two neural decoding problems on OpenfMRI datasets for Balloon vs Mixed gamble and Simon vs Flanker, respectively. The best accuracy among all of the compared algorithms for each column is highlighted in bold and the second best is underlined (in the original).
Method | Resting 1 | Resting 2 | Task 1 | Task 2 | Avg. Resting | Avg. Task | Avg. All
SVM | 60.78 ± 0.09 | 63.97 ± 0.09 | 87.38 ± 0.12 | 82.56 ± 0.17 | 62.38 | 84.97 | 73.67
Lasso | 61.16 ± 0.08 | 64.84 ± 0.11 | 87.38 ± 0.12 | 85.22 ± 0.07 | 63.00 | 86.30 | 74.65
ENet | 61.21 ± 0.10 | 64.38 ± 0.10 | 81.19 ± 0.15 | 82.56 ± 0.17 | 62.80 | 81.87 | 72.34
Remurs | 60.72 ± 0.08 | 62.13 ± 0.09 | 87.14 ± 0.13 | 84.67 ± 0.15 | 61.43 | 85.90 | 73.67
Sturm | 62.05 ± 0.11 | 63.47 ± 0.07 | 89.10 ± 0.09 | 86.89 ± 0.16 | 62.76 | 88.00 | 75.38
Lasso + SVM | 63.37 ± 0.08 | 62.56 ± 0.09 | 74.05 ± 0.20 | 72.11 ± 0.16 | 62.97 | 73.08 | 68.02
ENet + SVM | 64.20 ± 0.07 | 61.61 ± 0.08 | 76.43 ± 0.14 | 72.00 ± 0.14 | 62.91 | 74.21 | 68.56
Remurs + SVM | 64.67 ± 0.10 | 60.23 ± 0.10 | 81.19 ± 0.12 | 83.56 ± 0.19 | 62.45 | 82.37 | 72.41
Sturm + SVM | 64.66 ± 0.12 | 66.24 ± 0.06 | 78.10 ± 0.22 | 82.44 ± 0.16 | 65.45 | 80.27 | 72.86
Table 4. Sparsity (mean ± standard deviation) for the respective results in Table 3, with the best and second best highlighted (in the original).
Method | Resting 1 | Resting 2 | Task 1 | Task 2 | Avg. Resting | Avg. Task | Avg. All
Lasso | 0.52 ± 0.09 | 0.23 ± 0.32 | 0.74 ± 0.12 | 0.73 ± 0.01 | 0.38 | 0.73 | 0.55
ENet | 0.60 ± 0.01 | 0.01 ± 0.01 | 0.96 ± 0.05 | 0.95 ± 0.03 | 0.31 | 0.96 | 0.63
Remurs | 0.69 ± 0.03 | 0.73 ± 0.17 | 0.81 ± 0.08 | 0.81 ± 0.07 | 0.71 | 0.81 | 0.76
Sturm | 0.86 ± 0.18 | 0.86 ± 0.24 | 0.72 ± 0.24 | 0.60 ± 0.15 | 0.86 | 0.66 | 0.76
Lasso + SVM | 0.57 ± 0.05 | 0.19 ± 0.40 | 0.77 ± 0.10 | 0.75 ± 0.06 | 0.38 | 0.76 | 0.57
ENet + SVM | 0.58 ± 0.09 | 0.02 ± 0.01 | 0.96 ± 0.04 | 0.95 ± 0.04 | 0.30 | 0.96 | 0.63
Remurs + SVM | 0.70 ± 0.13 | 0.74 ± 0.17 | 0.80 ± 0.04 | 0.79 ± 0.13 | 0.72 | 0.79 | 0.76
Sturm + SVM | 0.87 ± 0.07 | 0.99 ± 0.01 | 0.85 ± 0.14 | 0.56 ± 0.11 | 0.93 | 0.71 | 0.82
Summary. There are four key observations on the whole: • Sturm has the best overall accuracy of 75.38%. Sturm has outperformed Remurs in accuracy on all four classification problems. The only difference between Sturm and Remurs is replacing SNN with TNN. Therefore, this superiority indicates that tubal rank-based TNN is superior to Tucker rank-based SNN in the supervised, regression setting. • The reported sparsity corresponds to the best solution determined via the nine-fold cross validation. Lasso and Lasso + SVM have the lowest, i.e., poorest, sparsity. ENet and ENet + SVM also have much lower sparsity than Remurs/Sturm and their + SVM versions, more than 0.10 (10%) lower. On the other hand, ENet and ENet + SVM have the highest sparsity on task-based fMRI while producing solutions with close to zero sparsity on Resting 2, showing high variation. • Performing SVM after the four regression methods can improve the classification accuracy (though not always) on resting-state fMRI, while it degrades their classification performance on task-based fMRI in all cases. In particular, Lasso + SVM and Sturm + SVM on Task 1, and Lasso + SVM and ENet + SVM on Task 2, drop more than 10% in accuracy. • Disease diagnosis on resting-state fMRI is significantly more challenging than neural decoding on task-based fMRI.
Although it should be noted from Table 2 that the number of samples differs between the resting-state and task-based fMRI problems, engaged brain activities are generally easier to classify than resting ones, and the difference is consistent with results reported in the literature. 5.5. Analysis Hyperparameter sensitivity. Figure 3 illustrates the sensitivity of the classification performance of Sturm to its two hyperparameters, $\tau$ and $\gamma$, for two problems: Resting 2 and Task 2. In general, the lower right, i.e., large $\tau$ and small $\gamma$ values, shows higher accuracy. This implies that the tubal rank regularization helps more in improving the accuracy than the sparsity regularization. However, Fig. 3a shows poorer smoothness than Fig. 3b, another indication of resting-state fMRI being more challenging than task-based fMRI. This makes hyperparameter tuning more difficult for resting-state fMRI, (partly) causing the poorer classification performance compared with task-based fMRI. Convergence analysis. Figure 4 shows the convergence of $\mathcal{W}$ and of the Sturm objective function value in (7) on Resting 2 and Task 2. It can be seen that $\mathcal{W}$ converges quickly in both cases, though the objective function converges at a slower rate." + } + ], + "Bin Guo": [ + { + "url": "http://arxiv.org/abs/2401.17366v1", + "title": "Inscribing geodesic circles on the face of the superstratum", + "abstract": "We use families of circular null geodesics as probes of a family of\nmicrostate geometries, known as $(1,0,n)$ superstrata. These geometries carry a\nleft-moving momentum wave and the behavior of some of the geodesic probes is\nvery sensitive to this background wave. The left-moving geodesics behave like\nBPS particles and so can be placed in circular orbits anywhere in the geometry\nand actually \"float\" at fixed radius and angle in the three-dimensional \"capped\nBTZ\" geometry. The right-moving geodesics behave like non-BPS particles. We\nshow that they provide a simple geometric characterization of the black-hole\nbound: when the momentum charge of the geometry is below this bound, such\ngeodesics can be placed anywhere, but exceeding the bound, even by a small\namount, means these geodesics are restricted to the deep interior of the\ngeometry. We also show that for left-moving string probes, the tidal forces\nremain comparable with those of global AdS$_3$. Nevertheless, for some of these\nprobes, the \"bumps\" in the geometry induce an oscillatory mass term and we\ndiscuss how this can lead to chaotic scrambling of the state of the string.", + "authors": "Bin Guo, Shaun D. Hampton, Nicholas P. Warner", + "published": "2024-01-30", + "updated": "2024-01-30", + "primary_cat": "hep-th", + "cats": [ + "hep-th" + ], + "main_content": "Contents: 1 Introduction; 2 Probing Microstate geometries; 2.1 The geometries and their null geodesics; 2.1.1 The six-dimensional metric; 2.1.2 Geodesic motion; 2.1.3 The string metric; 2.2 The probes; 2.2.1 Null geodesic deviation and the Penrose limit; 2.2.2 Conformal transformations of the null geodesic deviation; 3 Circular orbits; 3.1 Simplifying the geodesic motion;
3.2 The AdS3 limit; 3.3 Circular orbits in the superstratum; 3.3.1 The "BPS geodesics"; 3.3.2 Bounding the counter-rotating geodesics; 3.3.3 Holography and the bound on geodesics; 3.3.4 Orbits in the cap, at infinity and in between; 4 Geodesic deviation, tidal forces and resonances; 4.1 The scale of the tidal forces; 4.2 Stringy excitations; 4.3 Fluctuating tidal forces; 5 Conclusions. 1 Introduction Geodesics are one of the simplest and most fundamental probes of geometries. Moreover, because of the geometric optics approximation, they also provide information about possible solutions of the wave equation. The equations of geodesic deviation then provide a deeper insight into the tidal stresses experienced by probes and the scattering of particles and waves moving through geometries. In string theory, we can go one step further and examine the dynamics of strings as they move through diverse backgrounds, and one of the simplest first steps in this direction is to examine the Penrose [1,2], or Penrose-Güven limit [3], of such probes. A string, or a particle, traveling at ultra-relativistic speeds only samples the immediate vicinity of the center-of-mass trajectory, and this trajectory is well approximated by a null geodesic. One can take a pencil of null geodesics around the original geodesic and, at leading non-trivial order, the metric in this pencil becomes that of a plane wave. Güven generalized this to other families of background fields [3]. A classical string can be solved and quantized in light-cone gauge in such a plane-wave background [4], and the world-sheet dynamics reduces to that of a free field in which the background fields create a time-dependent mass matrix for the string excitations. Such stringy analyses of supergravity backgrounds became something of an industry 20 years ago and, amongst other things, led to an invaluable streamlining of the procedure of taking the Penrose limit and putting it in the Brinkmann form, in which the string is readily solved. A review of the early technology can be found in [5, 6] and its streamlined version is discussed in [7-9]. This simplified version relates the mass matrix felt by the string to an analogue of the geodesic deviation matrix for null geodesics. In recent years there has been something of a resurgence in the use of geodesic deviation and stringy probes in the microstate geometry programme. Microstate geometries closely approximate their black-hole counterparts until one is at the horizon scale: they are smooth and horizon-free, and cap off smoothly just above where the horizon would be in the black-hole solution. The throats of microstate geometries differ infinitesimally from those of black holes through multipole moments created by the cap geometry.
Despite these tiny differences, it was shown in [10\u201312] that the ultra-relativistic speeds of infalling probes can magnify the multipole moments in such a manner that the tidal stresses reach the string scale before the probe even reaches the cap. This led to the study of stringy probes [13, 14], where it was shown that these tidal forces would excite strings significantly above their ground states, taking energy out of the center of mass motion. The result was \u201ctidal trapping:\u201d even massless infalling string probes would be excited and become trapped and scrambled into the microstate geometry. Microstate geometries thus exhibited another of the defining features of a black hole. There are two motivations behind this paper: to examine classes of geodesics that may give rise to trapping and scrambling, and to look at the dynamics of strings that follow some of these geodesics. Indeed, it was suggested several years ago that long-term trapping of time-like geodesics could lead to instabilities of microstate geometries to the formation of black holes or black rings [15, 16]. However, it has now been shown that the instability arising from longterm trapping only occurs at sub-Planckian wavelengths [17,18], and is therefore an intrinsically stringy phenomenon. Moreover, as suggested in [11], it seems that a coherent expression of such an instability will simply cause a microstate geometry to evolve along its vast moduli space. The time-like geodesics that exhibit this extreme long-term trapping are those that limit to a particular closed null geodesic associated with an evanescent ergosphere. Indeed, this null geodesic lies at the heart of the microstate geometry. From a holographic perspective, this null geodesic is the original locus of the supertube upon which momentum states have been loaded thereby giving rise to three-charge black-hole microstates for which the corresponding black hole has a macroscopic horizon. From the gravitational perspective, such a closed null geodesic might lead to concerns that there could be CTC\u2019s. However, microstate geometries are stably causal and the closed null geodesic reflects the fact that a stationary observer at infinity (for asymptotically flat microstate geometries) becomes arbitrarily highly boosted when continued to the core of the microstate geometry [19]. It is this feature that leads to the long-term trapping as seen from infinity. There is thus a lot interesting physics associated with the neighborhood of the closed null geodesic. 3 \fThere are also orbits in the heart of microstate geometries that intuition suggests should be loci for strong scrambling. The microstate geometries known as superstrata can be characterized as a capped BTZ geometry, K, with a deformed 3-sphere, S3, fibered over it. For future reference, we use coordinates (t, r, \u03c8) on K, where \u03c8 is the circle around the BTZ throat. The cap is approximately \u201cthe bowl\u201d at the bottom of a global AdS3 and the \u03c8-circle pinches off smoothly at the center of the cap. The throat is where the circumference of the \u03c8-circle stabilizes to a size determined by the momentum charge, QP , and the geometry is indeed very close to that of the BTZ solution. There are, of course, deviations that appear as multipole moments in the BTZ throat and in the AdS-like cap. The one place where the geometry shows significant distortion is in the \u201ctransition zone\u201d between the cap and the BTZ throat. 
This is where the momentum wave of the microstate geometry \u201csettles\u201d at some finite, but small value of r determined by the angular momentum of the solution. Below the transition zone (at lower values of r), the geometry is like the AdS of the two-charge D1-D5 system while above the transition zone, the geometry feels the momentum charge, QP , and becomes the lower end of a BTZ throat. Ultra-relativistic infalling probes already feel strong tidal forces from the multipoles in the BTZ throat, but the tidal forces peak at the transition zone where the probe encounters the momentum wave sourcing the geometry. The transition zone is also very \u201ccorrugated.\u201d Superstrata are sourced by momentum excitations, and these give rise to fields that oscillate around the \u03c8-circle of the BTZ throat and on the sphere, S3. In the Einstein frame, and for solutions that are asymptotic to AdS3, the oscillations can be somewhat suppressed (coiffured) and even reduced to their RMS values for single modes, but they are strongly present in the string metric and for asymptotically flat solutions. One would expect that a string orbiting at the speed of light around the \u03c8-circle, and on the S3, near the transition zone would encounter these bumps as if they were a \u201cnull shockwave.\u201d Indeed, this intuition is reinforced by the corresponding results arising from black holes and black rings. In the early days of black ring construction, it was shown how one could seemingly put a varying charge density around the horizon of a black ring [20]. It was subsequently shown [21] that the simplest such solutions are unphysical because the horizon could not be smooth: an infalling particle would spiral around such a varying charge density and experience it as a null shock wave. One could soften the impact by coiffuring [22] but there was still a significant bump. The correspondence between black holes and microstate geometries suggests that something similar should arise around the transition zone in the microstate geometry. Certainly, infalling probes encounter huge tidal forces at the transition zone, but as we will see (at least for the superstratum considered in this paper), strings orbiting at the speed of light avoid the very bumpy ride that one might have expected. This happens because of some very simple physics. We take the momentum wave sourcing the microstate geometry to be left-moving. This biases the geometry. We find that, for simple microstate geometries, we can put a co-rotating closed geodesic \u201corbit\u201d (or, more precisely, \u201cfloat\u201d) at any fixed spatial position, (r, \u03c8), in the capped BTZ geometry. However, the same is not true for geodesics that are counter-rotating (relative to the background momentum wave). For very small background momentum charge, QP , a counter-rotating geodesic can still be put anywhere, but there is a critical value of QP , above which the counter-rotating geodesic cannot be placed above a finite radius, rmax: the counterrotating circular geodesics thus becomes bound to the geometry. We show that this critical value 4 \fof QP is precisely the boundary of the black-hole regime. Thus, through its bound-state structure, the microstate geometry knows exactly where the black-hole bound lies. As QP increases above the bound, the value of rmax decreases rapidly so that, even for very modest values of QP , the counter-rotating circular geodesics are confined to the AdS-like cap. 
This means that, for any value of QP in the black-hole regime, such geodesics have limited use as probes because they cannot explore the bumps associated with the transition zone. On the other hand, the co-rotating geodesics can be placed anywhere. However, these geodesics replicate the behavior of BPS particles and become truly co-moving, or floating, with respect to the bumps in the space-time, (t, r, ψ), directions. Such particles thus sit still in the space-time corrugations rather than bounce around over them. There is therefore no tidal enhancement associated with motion across the corrugations in the space-time. On the other hand, some of the co-rotating geodesics have non-trivial orbits on the sphere, S3, and so can experience non-trivial bumps on this part of the geometry. We will see that the scale of the tidal tensor, while fluctuating within the six-dimensional geometry, remains at a scale set by the AdS curvature. The tidal stresses can still become large compared to the string scale, but this requires the stringy probe to have a string-scale center-of-mass energy. As we will discuss, such strings remain within the probe approximation, and the corrugations in the metric mean that strings develop a periodic, or oscillatory, mass term. Such a mass term can create exponentially growing excitations of string modes and thus lead to the sort of chaotic scrambling one expects in a black hole. In Section 2 we discuss the geometries and null geodesics of the superstrata that we are going to probe. In Section 3 we focus on circular null geodesics, that is, null geodesics that make closed orbits on the deformed S3 and run around the ψ-circle at fixed r in the space-time, K. We are seeking to probe both the long-term trapping region of the geometry and the bumpiest part of the transition zone, and so we fix the orbit on the S3 at $\theta = \frac{\pi}{2}$, which is the appropriate value for the supertube locus and the value that maximizes the bump functions. From the perspective of the capped BTZ space-time, K, the angular momentum on the S3 represents a Kaluza-Klein mass for the particle. We classify the co-rotating and counter-rotating circular geodesics in this space-time, and show that the trapping of counter-rotating geodesics happens at the black-hole bound. We discuss tidal forces in Section 4 and show how they remain small unless the probe energy approaches the string scale. We also show how the corrugations of the superstratum can result in oscillatory mass terms for some of the string probes, and describe how tidal resonances can lead to exponentially growing string excitations with Lyapunov exponents that depend on the energy of the string, the amplitude of the mass-term oscillation, and the proximity to a resonance. This means that the scrambling will be chaotic. Indeed, in a rather different context, it has already been shown how probes bouncing around the cores of microstate geometries can result in chaotic behavior [23]. Section 5 contains our final remarks. 2 Probing Microstate geometries 2.1 The geometries and their null geodesics We are going to use the simplest work-horse of the microstate geometry program: the (1, 0, n) superstratum [24-29, 13, 14]. 2.1.1 The six-dimensional metric The six-dimensional part of the metric is most canonically written as a deformed S3 fibration over a three-dimensional base manifold, K, that is asymptotic to AdS3.
In the Einstein frame one has: $ds_6^2 = \sqrt{Q_1 Q_5}\,\big(\widehat{ds}_3^2 + \widetilde{ds}_3^2\big)$ (2.1), where $\widehat{ds}_3^2 = \Lambda\Big[\frac{dr^2}{r^2+a^2} + \frac{r^2(r^2+a^2)}{a^4}\,d\psi^2 - \frac{1}{A^4 G}\Big(d\tau + \frac{A^2 r^2}{a^2}\,d\psi\Big)^2\Big]$ (2.2), and $\widetilde{ds}_3^2 = \Lambda\,d\theta^2 + \frac{\sin^2\theta}{\Lambda}\Big(d\phi_1 - \frac{1}{A^2}\,d\tau\Big)^2 + \frac{G\cos^2\theta}{\Lambda}\Big(d\phi_2 + \frac{1}{A^2 G}\big(d\tau - (1 + (A^2-1)F)\,d\psi\big)\Big)^2$ (2.3). The functions $F$, $G$ and $\Lambda$ are "bump functions:" $F \equiv 1 - \frac{r^{2n}}{(r^2+a^2)^n}$, $G \equiv 1 - \frac{a^2 b^2}{2a^2+b^2}\,\frac{r^{2n}}{(r^2+a^2)^{n+1}}$, $\Lambda \equiv \sqrt{1 - (1-G(r))\sin^2\theta}$ (2.4). The "red-shift" parameter, $A$, is defined by: $A \equiv \sqrt{1 + \frac{b^2}{2a^2}}$ (2.5). The $\psi$-coordinate is compactified on a unit circle: $\psi \equiv \psi + 2\pi$ (2.6), and the time coordinate, $\tau$, is dimensionless. To make contact with the more standard formulation, one introduces a scale, $R_y$, and the usual double null, $(u,v)$, and space-time, $(t,y)$, coordinates: $u = \frac{1}{\sqrt{2}}(t-y)$, $v = \frac{1}{\sqrt{2}}(t+y) \equiv \frac{R_y}{\sqrt{2}}\,\psi$, $t = R_y\,\tau$ (2.7). The scale becomes the radius of the $y$-circle: $y \equiv y + 2\pi R_y$ (2.8). [Footnote 1: There is a crucial typographical error in the coefficient of $d\tau^2$ in $\widehat{ds}_3^2$ in [13] that is corrected in [14]. The computations in [13] used the correct metric.] Regularity of the solution requires $Q_1 Q_5 = \big(a^2 + \frac{b^2}{2}\big)\,R_y^2$ (2.9). This leaves five remaining parameters: the D1 and D5 brane charges, $Q_1$, $Q_5$, two real parameters $a$ and $b$, and the integer, $n \geq 0$, appearing in the bump functions. The last three parameters determine the angular momenta and the momentum charges of the solution: $J_L = J_R = \frac{R_y}{2}\,a^2$, $Q_P = \frac{1}{2}\,n\,b^2$ (2.10). The supergravity solution is also supported by fluxes, and one can find the precise expressions, and all the other relevant details, in earlier work, like [25-28]. However, to construct the string metric we will need the explicit forms of the electrostatic potentials: $Z_1 = \frac{Q_1}{\Sigma}\big(1 + (1-G(r))\sin^2\theta\,\cos(2n\psi + 2\phi_1)\big)$, $Z_2 = \frac{Q_5}{\Sigma}$, $Z_4 = \frac{R_y}{\Sigma}\,\sqrt{(2a^2+b^2)(1-G(r))}\,\sin\theta\,\cos(n\psi + \phi_1)$ (2.11). Indeed, one then has the canonical relationship with the warp factor, $\Lambda$: $\Lambda \equiv \frac{\Sigma}{\sqrt{Q_1 Q_5}}\,\sqrt{Z_1 Z_2 - Z_4^2}$ (2.12). It is also very convenient to define a conformally related, six-dimensional metric by: $d\tilde{s}_6^2 \equiv \frac{1}{\sqrt{Q_1 Q_5}\,\Lambda}\,ds_6^2 = \Big(\frac{dr^2}{r^2+a^2} + \frac{r^2(r^2+a^2)}{a^4}\,d\psi^2 - \frac{1}{A^4 G}\Big(d\tau + \frac{A^2 r^2}{a^2}\,d\psi\Big)^2\Big) + d\theta^2 + \frac{\sin^2\theta}{\Lambda^2}\Big(d\phi_1 - \frac{1}{A^2}\,d\tau\Big)^2 + \frac{G\cos^2\theta}{\Lambda^2}\Big(d\phi_2 + \frac{1}{A^2 G}\big(d\tau - (1 + (A^2-1)F)\,d\psi\big)\Big)^2$ (2.13). Indeed, we will work mainly with this metric. Note that we have made it dimensionless by scaling out $\sqrt{Q_1 Q_5}$. For $b = 0$, this metric reduces to global AdS$_3$ with unit radius.
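Since the radial profiles in (2.4) and (2.5) drive everything that follows, here is a short NumPy sketch of them (the helper names are our own); it also locates the peak of $1 - G$, which sits at $r = a\sqrt{n}$ and marks the transition zone:

import numpy as np

def bump_F(r, a, n):
    # F in Eq. (2.4): equals 1 in the cap (r -> 0) and tends to 0 at infinity.
    return 1.0 - r**(2 * n) / (r**2 + a**2)**n

def bump_G(r, a, b, n):
    # G in Eq. (2.4): the momentum wave "settles" where 1 - G peaks.
    return 1.0 - (a**2 * b**2 / (2 * a**2 + b**2)) * r**(2 * n) / (r**2 + a**2)**(n + 1)

def Lambda(r, theta, a, b, n):
    # Warp factor in Eq. (2.4); equals 1 at the poles of the sphere.
    return np.sqrt(1.0 - (1.0 - bump_G(r, a, b, n)) * np.sin(theta)**2)

A = lambda a, b: np.sqrt(1.0 + b**2 / (2.0 * a**2))     # red-shift parameter, Eq. (2.5)

# 1 - G is maximal at r = a*sqrt(n), the transition zone:
r = np.linspace(0.01, 20.0, 4000)
r_peak = r[np.argmax(1.0 - bump_G(r, a=1.0, b=4.0, n=10))]
assert abs(r_peak - 1.0 * np.sqrt(10)) < 0.05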
2.1.2 Geodesic motion The six-dimensional metric (2.1) is independent of $(\tau, \psi, \phi_1, \phi_2)$, which means that the corresponding momenta are conserved: $L_1 = K_{(1)M}\,\frac{dz^M}{d\lambda}$, $L_2 = K_{(2)M}\,\frac{dz^M}{d\lambda}$, $L_3 = K_{(3)M}\,\frac{dz^M}{d\lambda}$, $E = K_{(4)M}\,\frac{dz^M}{d\lambda}$ (2.14), where the $K_{(I)}$ are the Killing vectors: $K_{(1)} = \frac{\partial}{\partial\phi_1}$, $K_{(2)} = \frac{\partial}{\partial\phi_2}$, $K_{(3)} = \frac{\partial}{\partial\psi}$ and $K_{(4)} = \frac{\partial}{\partial\tau}$. In addition, there is the standard quadratic conserved quantity coming from the metric: $\varepsilon \equiv g_{MN}\,\frac{dz^M}{d\lambda}\,\frac{dz^N}{d\lambda}$ (2.15). It was also shown in [25] that there is a conformal Killing tensor: $\Xi \equiv \xi_{MN}\,\frac{dz^M}{d\lambda}\,\frac{dz^N}{d\lambda} \equiv Q_1 Q_5\,\Lambda^2\,\Big(\frac{d\theta}{d\lambda}\Big)^2 + \frac{L_1^2}{\sin^2\theta} + \frac{L_2^2}{\cos^2\theta}$ (2.16), which, for any geodesic, satisfies: $\frac{d}{d\lambda}\,\Xi = R_y\,\Big(\frac{d\theta}{d\lambda}\Big)\Big(\frac{\partial\Lambda}{\partial\theta}\Big)\Big(g_{MN}\,\frac{dz^M}{d\lambda}\,\frac{dz^N}{d\lambda}\Big)$ (2.17). For null geodesics we therefore have six conserved quantities, given by (2.14), (2.15) and (2.16). Moreover, null geodesics are independent of conformal transformations, and so we could equally well use the conformally related metric (2.13). Indeed, henceforth, and unless otherwise stated, we will work with (2.13). Again, note that this metric is scaled so that it is asymptotic to AdS$_3$ of unit radius at infinity. In the metric (2.13), and for null geodesics, the Killing tensor gives the conserved quantity: $\widetilde{\Xi} \equiv \tilde{\xi}_{MN}\,\frac{dz^M}{d\lambda}\,\frac{dz^N}{d\lambda} = \Big(\frac{d\theta}{d\lambda}\Big)^2 + \frac{L_1^2}{\sin^2\theta} + \frac{L_2^2}{\cos^2\theta} = m^2$ (2.18), and the metric conservation law can then be written in the form: $\frac{1}{r^2+a^2}\Big(\frac{dr}{d\lambda}\Big)^2 + \frac{a^4}{r^2(r^2+a^2)}\Big(L_3 - \frac{r^2}{a^2}\,(L_1 + A^2 E) + \frac{1}{A^2 G(r)}\Big(1 + \frac{b^2}{2a^2}\,F(r) + A^2\,\frac{r^2}{a^2}\Big)L_2\Big)^2 - \frac{1}{G(r)}\Big(L_2 - G(r)\,(L_1 + A^2 E)\Big)^2 - \big(1 - G(r)\big)\Big(L_1^2 - \frac{L_2^2}{G(r)}\Big) = -m^2$ (2.19). Recall that affine parameters are not conformally invariant, and one should therefore note that $\lambda$ is affine for the metric (2.13). Remarkably, the dynamics of null geodesics are fully separated between the (deformed) sphere directions and the (deformed) AdS$_3$ directions. In particular, the $\theta$ and $r$ dynamics are completely independent. Moreover, the dynamics in the deformed sphere directions are identical to those of the undeformed, round $S^3$. We have introduced the parameter, $m$, as the energy of the motion on the sphere, and from the perspective of the three-dimensional space-time, $K$, the $S^3$-energy, $m$, becomes the "Kaluza-Klein" mass of the particle. As is evident from (2.19), the dynamics of these particles are complicated. 2.1.3 The string metric Since we are going to consider string probes as well as geodesics, we will need the full ten-dimensional string metric. This may be found in [30, Appendix E.7] and [26]: $ds_{10}^2 = \frac{\sqrt{Z_1 Z_2}}{\sqrt{Z_1 Z_2 - Z_4^2}}\,ds_6^2 + \sqrt{\frac{Z_1}{Z_2}}\,d\hat{s}_4^2 = \Pi\,\Big(\sqrt{Q_1 Q_5}\,d\tilde{s}_6^2 + \sqrt{\frac{Q_1}{Q_5}}\,d\hat{s}_4^2\Big)$ (2.20). [Footnote 2: There is a typographical error in [31]: the warp factor in front of the six-dimensional metric should be $\sqrt{\alpha}$ and not $(\sqrt{\alpha})^{-1}$.] Here $d\hat{s}_4^2$ is the flat metric on $T^4$ and $\Pi \equiv \sqrt{\frac{Q_5}{Q_1}\,\frac{Z_1}{Z_2}} = \sqrt{1 + (1-G(r))\sin^2\theta\,\cos(2n\psi + 2\phi_1)}$ (2.21). Again, because null geodesics are conformally invariant, we can ignore the factors of $\Pi$ in (2.20) and work with the much simpler dimensionless metric: $d\tilde{s}_6^2 + \frac{1}{Q_5}\,d\hat{s}_4^2$ (2.22). However, it will be important to keep track of affine parameters. Suppose $\lambda$ is the affine parameter for (2.22), and is the affine parameter inherent in (2.18) and (2.19), and let $\mu$ be the affine parameter in the string metric (2.20); then we have: $\frac{d\mu}{d\lambda} = \Pi = \big(1 + (1-G(r))\sin^2\theta\,\cos(2n\psi + 2\phi_1)\big)^{\frac{1}{2}}$ (2.23).
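Returning briefly to the sphere dynamics, the separation in (2.18) means the $\theta$-motion can be integrated on its own. A minimal SciPy sketch (our own naming, with arbitrary illustrative values of $L_1$, $L_2$ and $m$) that integrates the second-order equation obtained by differentiating (2.18) once and checks that $\widetilde{\Xi}$ stays equal to $m^2$ along the flow:

import numpy as np
from scipy.integrate import solve_ivp

L1, L2, m = 0.3, 0.4, 1.0   # sphere angular momenta and the "Kaluza-Klein" mass

def rhs(lam, y):
    # theta'' from differentiating Eq. (2.18) with respect to lambda.
    theta, p = y
    return [p, L1**2 * np.cos(theta) / np.sin(theta)**3
               - L2**2 * np.sin(theta) / np.cos(theta)**3]

theta0 = np.pi / 3
p0 = np.sqrt(m**2 - L1**2 / np.sin(theta0)**2 - L2**2 / np.cos(theta0)**2)
sol = solve_ivp(rhs, (0.0, 20.0), [theta0, p0], dense_output=True,
                rtol=1e-10, atol=1e-12)

theta, p = sol.y
Xi = p**2 + L1**2 / np.sin(theta)**2 + L2**2 / np.cos(theta)**2
assert np.allclose(Xi, m**2, atol=1e-6)   # conserved, as Eq. (2.18) requires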
It is also useful to recall how the curvature tensor behaves under conformal transformations of a metric. Suppose one has $d\hat{s}^2 \equiv \hat{g}_{\mu\nu}\,dx^\mu dx^\nu = \Omega^2\,g_{\mu\nu}\,dx^\mu dx^\nu \equiv \Omega^2\,ds^2$ (2.24); then the curvature, $\hat{R}^\rho{}_{\sigma\mu\nu}$, of $\hat{g}_{\mu\nu}$ is given by: $\hat{R}^\rho{}_{\sigma\mu\nu} = R^\rho{}_{\sigma\mu\nu} - \big[\delta^\rho_\mu\,(\nabla_\sigma V_\nu - V_\sigma V_\nu) - \delta^\rho_\nu\,(\nabla_\sigma V_\mu - V_\sigma V_\mu) - g_{\sigma\mu}\,(\nabla^\rho V_\nu - V^\rho V_\nu) + g_{\sigma\nu}\,(\nabla^\rho V_\mu - V^\rho V_\mu) + (V^\lambda V_\lambda)\,(\delta^\rho_\mu\,g_{\sigma\nu} - \delta^\rho_\nu\,g_{\sigma\mu})\big]$ (2.25), where $R^\rho{}_{\sigma\mu\nu}$ is the curvature of $g_{\mu\nu}$, all the index raising and lowering on the right-hand side is done in the metric $g_{\mu\nu}$, and where $V_\mu \equiv \nabla_\mu \log(\Omega)$ (2.26). 2.2 The probes We are going to look at Penrose limits of the metric in the neighborhood of ultra-relativistic particles. We therefore start from null geodesics, which, from the discussion above, may be thought of as a particular class of massive particles on $K$. 2.2.1 Null geodesic deviation and the Penrose limit To make contact with ultra-relativistic string probes, we will need the Brinkmann form of the Penrose limit, in which the ten-dimensional metric is reduced to the form of a plane-fronted gravitational wave: $ds^2 = 2\,du\,dv + \big(A_{ab}(u)\,x^a x^b\big)\,du^2 + \delta_{ab}\,dx^a dx^b$ (2.27).
Just as in the geodesic deviation equations, positive eigenvalues of $A_{ab}$ represent instabilities: exponentially growing deviations of geodesics, or negative masses for string excitations. One way to drive instabilities in the bosonic string is if these negative masses become large and overcome the string tension, as they do with infalling geodesics [13,14]. As we will discuss in Section 4.3, a periodic mass term can also drive unstable "resonances" in the string.

2.2.2 Conformal transformations of the null geodesic deviation

We consider two metrics, $\hat g_{\mu\nu}$ and $g_{\mu\nu}$, related by a conformal transformation as in (2.24). The starting point will be a null geodesic $x^\mu$, with affine parameter $\mu$ in the metric $\hat g_{\mu\nu}$ and affine parameter $\lambda$ in $g_{\mu\nu}$. One then has
$\frac{d\mu}{d\lambda} = \Omega^2\,.$ (2.32)
We introduce frames $\hat E^\pm, \hat E^a$ and $E^\pm, E^a$ satisfying (2.29) for these two metrics along the null geodesic. We take the inverse frames $\hat E_+$ and $E_+$ to be tangent to the geodesic:
$\hat E^\mu_+ = \frac{dx^\mu}{d\mu} = \Omega^{-2}\,\frac{dx^\mu}{d\lambda} = \Omega^{-2}\,E^\mu_+\,.$ (2.33)
It follows that $\hat E^- = \Omega^2 E^-$. As stipulated in Section 2.2.1, we will take the frames $\hat E^A$ to be parallel transported along the null geodesic; however this is not a conformally invariant notion. Indeed, if $\hat W^\mu$ is parallel transported along the geodesic in $\hat g_{\mu\nu}$, this becomes
$0 = \frac{dx^\rho}{d\mu}\,\hat\nabla_\rho \hat W^\mu = \Omega^{-1}\,\frac{dx^\rho}{d\lambda}\Big[\nabla_\rho W^\mu - g^{\mu\sigma}(\nabla_\sigma\log\Omega)\,W_\rho + (\nabla_\sigma\log\Omega)\,W^\sigma\,\delta^\mu_\rho\Big]\,,$ (2.34)
where $W^\mu = \Omega\,\hat W^\mu$. Parallel transport is only conformally covariant for vectors that satisfy:
$W^\rho\,\nabla_\rho\log\Omega = W_\rho\,\frac{dx^\rho}{d\lambda} = 0\,,\qquad W^\mu = \Omega\,\hat W^\mu\,.$ (2.35)
In particular, we note that this is true for the T$^4$ directions in (2.20) and (2.22). Let $\hat E^\alpha = \Omega E^\alpha$ denote the parallel transported frames in these directions. Using (2.25) and the orthogonality conditions in (2.35), it follows that along the T$^4$ directions:
$A_{\alpha\beta} = -\hat R_{\alpha\mu\beta\nu}\,\frac{dx^\mu}{d\mu}\frac{dx^\nu}{d\mu} = \delta_{\alpha\beta}\,\Omega^{-4}\Big[\nabla_\mu\nabla_\nu\log\Omega - (\nabla_\mu\log\Omega)(\nabla_\nu\log\Omega)\Big]\frac{dx^\mu}{d\lambda}\frac{dx^\nu}{d\lambda} = -\delta_{\alpha\beta}\,\Omega^{-3}\,\frac{d^2}{d\lambda^2}\left(\frac{1}{\Omega}\right).$ (2.36)
To arrive at this result one must use the geodesic equation and remember that the covariant derivatives in (2.36) are those of (2.22), and thus the geodesic equation is most simply applied in (2.22) using the corresponding affine parameter, $\lambda$. Making the change of affine parameter from $\mu$ to $\lambda$ on the left-hand side of (2.31) yields the equation:
$\frac{d^2}{d\mu^2}Z^\alpha = \Omega^{-4}\left[\frac{d^2 Z^\alpha}{d\lambda^2} - \frac{2}{\Omega}\left(\frac{d\Omega}{d\lambda}\right)\frac{dZ^\alpha}{d\lambda}\right] = -\Omega^{-3}\left[\frac{d^2}{d\lambda^2}\left(\frac{1}{\Omega}\right)\right]Z^\alpha\,.$ (2.37)
Define
$Z^\alpha = \Omega\,\tilde Z^\alpha\,,$ (2.38)
and the geodesic deviation equation along the torus becomes:
$\frac{d^2}{d\lambda^2}\tilde Z^\alpha = 0\,.$ (2.39)
In other words, the deviation along the T$^4$ is entirely determined by the conformal rescaling. The other directions are much more complicated and so we will ultimately resort to a more qualitative discussion.
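As a consistency check of (2.37)-(2.39), one can verify symbolically that $Z^\alpha = \Omega\tilde Z^\alpha$ with $\tilde Z^\alpha$ linear in $\lambda$ solves the T$^4$ deviation equation for an arbitrary conformal factor. The following is a minimal sketch of our own (not from the paper), using sympy:

```python
import sympy as sp

# lam: affine parameter of the metric (2.22); Omega: arbitrary conformal factor
lam, c1, c2 = sp.symbols('lambda c1 c2')
Omega = sp.Function('Omega', positive=True)(lam)

# Z = Omega * Ztilde with Ztilde linear in lam, i.e. d^2 Ztilde / dlam^2 = 0, cf. (2.38)-(2.39)
Ztilde = c1 + c2*lam
Z = Omega*Ztilde

# d/dmu = Omega^{-2} d/dlam, since dmu/dlam = Omega^2, cf. (2.32)
d2Z_dmu2 = Omega**-2 * sp.diff(Omega**-2 * sp.diff(Z, lam), lam)

# right-hand side of (2.37): -Omega^{-3} (d^2/dlam^2)(1/Omega) Z
rhs = -Omega**-3 * sp.diff(1/Omega, lam, 2) * Z

print(sp.simplify(d2Z_dmu2 - rhs))   # -> 0, confirming (2.37) is solved
```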
3 Circular orbits

We now focus on the purely circular orbits in the (1,0,n) microstate geometries. For a round sphere there is obviously no distinction between any of the great circles, but for the superstratum, the most interesting locus lies at the peak of the bump functions and warp factors, and so we consider the circles at $\theta = \frac{\pi}{2}$. This locus is also that of the closed null geodesic of the underlying supertube.

3.1 Simplifying the geodesic motion

From (2.18), it is evident that if we want to set $\theta = \frac{\pi}{2}$, we must take $L_2 = 0$. The geodesics then only move in the $\phi_1$ direction on the sphere and (2.18) yields $L_1 = m$. It is also natural to parametrize the $\psi$-motion in relation to the $\phi_1$ motion, or the Kaluza-Klein mass $m$. Because of the mixing of $\tau$ and $\phi_1$, the energy $E$ only appears in the combination $L_1 + A^2 E$. We will assume $m \neq 0$, and take:
$L_1 = m\,,\qquad L_2 = 0\,,\qquad L_3 = -\gamma\,m\,,\qquad \hat E = 1 + \frac{A^2 E}{m}\,,$ (3.1)
for some choice of $L_1$, $\hat E$ and $\gamma$. The radial motion can now be characterized through an effective potential:
$\left(\frac{dr}{d\lambda}\right)^2 + m^2\,V(r;\hat E,\gamma) = 0\,,$ (3.2)
where
$V(r;\hat E,\gamma) = \frac{1}{r^2}\big(\hat E\,r^2 + \gamma\,a^2\big)^2 + (r^2+a^2)\,G(r)\,\big(1-\hat E^2\big)\,.$ (3.3)
From (3.2) it is evident that we can absorb $m$ into a rescaling of the affine parameter, $\lambda$. However, as one can see from (3.1), the sign of $m$ plays a role in the form of the geodesic motion and so we will retain $m$, allowing $m = \pm 1$. For circular geodesics one must have $V(r;\hat E,\gamma) = 0$ and $\frac{d}{dr}V(r;\hat E,\gamma) = 0$, which give the following two constraints:
$\frac{1}{r^2}\big(\hat E\,r^2 + \gamma\,a^2\big)^2 + (r^2+a^2)\,G(r)\,\big(1-\hat E^2\big) = 0\,,$ (3.4)
$\frac{2}{r^3}\big(\hat E^2 r^4 - \gamma^2 a^4\big) + \big(1-\hat E^2\big)\big(2\,r\,G(r) + (r^2+a^2)\,G'(r)\big) = 0\,.$ (3.5)
These two equations determine $\hat E$ and $\gamma$ in terms of the constant value of $r$ for circular orbits. By taking linear combinations of (3.4) and (3.5), one can obtain a linear equation for $\gamma$. If one then substitutes this back into either (3.4) or (3.5), one obtains a quadratic equation for $\hat E^2$:
$\big(\hat E^2 - 1\big)\Big[\big(P(r)^2 - 16\,r^2(r^2+a^2)\,G(r)\big)\hat E^2 - P(r)^2\Big] = 0\,,$ (3.6)
where
$P(r) \equiv r\,(r^2+a^2)\,G'(r) + 2\,(2r^2+a^2)\,G(r)\,,$ (3.7)
and $\gamma$ is given by
$\gamma = -\frac{1}{4\,a^2\,\hat E}\Big[4\,r^2\,\hat E^2 - \big(\hat E^2 - 1\big)P(r)\Big]\,.$ (3.8)
The velocities are then given by:
$\frac{dr}{d\lambda} \equiv 0\,,\qquad \frac{d\tau}{d\lambda} = \frac{m}{A^2}\left[\frac{(\hat E\,r^2 + \gamma\,a^2)}{(r^2+a^2)} - \hat E\,G(r)\right],\qquad \frac{d\psi}{d\lambda} = -m\,a^2\,\frac{(\hat E\,r^2 + \gamma\,a^2)}{r^2(r^2+a^2)}\,,$
$\frac{d\theta}{d\lambda} \equiv 0\,,\qquad \frac{d\phi_1}{d\lambda} = m\left[\frac{(\hat E\,r^2 + \gamma\,a^2)}{(r^2+a^2)} + (1-\hat E)\,G(r)\right],\qquad \frac{d\phi_2}{d\lambda} = -\frac{m\,\gamma\,a^2}{r^2}\,.$ (3.9)
Given the velocity combinations that appear in (2.3), it is worth noting that the boosted angular velocity around the S$^3$ follows the profile function, $G(r)$:
$\frac{d\phi_1'}{d\lambda} \equiv \frac{d\phi_1}{d\lambda} - \frac{1}{A^2}\frac{d\tau}{d\lambda} = m\,G(r)\,.$ (3.10)
Combining the fact that there are four roots for $\hat E$, and the fact that we can absorb the magnitude of $m$ into $\lambda$, so that $m = \pm 1$, there are, in principle, eight choices for circular geodesics.
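To make the constraint structure concrete, here is a small numerical sketch of our own (not the paper's code) that solves (3.6)-(3.8) for the two branches of $\hat E^2$ and the corresponding $\gamma$, given a profile function $G(r)$ as input. In the AdS$_3$ limit, $G \equiv 1$, and the output reproduces the solutions $(\hat E, \gamma) = (1, -r^2/a^2)$ and $(1 + 2r^2/a^2,\; r^2/a^2)$ quoted in (3.12) below:

```python
import numpy as np

def circular_orbit_data(r, a=1.0, G=lambda r: 1.0, dG=lambda r: 0.0):
    """Solve (3.6)-(3.8) for circular orbits, given a profile G(r) and its derivative.

    Branch 1 is E^2 = 1; branch 2 solves the square bracket of (3.6),
    E^2 = P^2 / (P^2 - 16 r^2 (r^2+a^2) G).  gamma then follows from (3.8).
    """
    P = r*(r**2 + a**2)*dG(r) + 2*(2*r**2 + a**2)*G(r)        # (3.7)
    Q = P**2 - 16*r**2*(r**2 + a**2)*G(r)                     # must be > 0, cf. (3.17)
    branches = []
    for E2 in (1.0, P**2 / Q):
        E = np.sqrt(E2)                                       # take the + root here
        gamma = -(4*r**2*E2 - (E2 - 1.0)*P) / (4*a**2*E)      # (3.8)
        branches.append((E, gamma))
    return branches

# AdS3 limit, G == 1: reproduces (E, gamma) = (1, -r^2/a^2) and (1 + 2 r^2/a^2, r^2/a^2)
r = 2.0
for E, gamma in circular_orbit_data(r):
    print(f"E = {E:.4f}, gamma = {gamma:.4f}")
# -> E = 1.0000, gamma = -4.0000  and  E = 9.0000, gamma = 4.0000
```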
The metric and geodesic equations have an obvious symmetry under $\tau \to -\tau$, $\psi \to -\psi$, and we want to restrict to future-directed geodesics ($d\tau > 0$); thus there are only four physically interesting distinct solutions. The choices are related to the orientations of the rotations in the $\psi$ and $\phi_1$ directions. This is easiest to illustrate by going back to the pure AdS$_3$ solution.

3.2 The AdS3 limit

The pure AdS$_3$ solution is obtained by setting $b = 0$, and hence $G(r) \equiv 1$. It is useful to note that in this limit, the metric (2.13) becomes:
$d\tilde s^2_6 = \frac{dr^2}{r^2+a^2} + \frac{r^2}{a^2}(d\psi - d\tau)^2 - \frac{r^2+a^2}{a^2}\,d\tau^2 + d\theta^2 + \sin^2\theta\,(d\phi_1 - d\tau)^2 + \cos^2\theta\,\big(d\phi_2 - (d\psi - d\tau)\big)^2$
$\quad = \frac{dr^2}{r^2+a^2} + \frac{r^2}{a^2}\,d\psi'^2 - \frac{r^2+a^2}{a^2}\,d\tau^2 + d\theta^2 + \sin^2\theta\,d\phi_1'^2 + \cos^2\theta\,d\phi_2'^2\,,$ (3.11)
where $\psi' = \psi - \tau$, $\phi_1' = \phi_1 - \tau$ and $\phi_2' = \phi_2 - (\psi - \tau)$. The four solutions to the constraints are then:
$(\hat E, \gamma) = \pm\Big(1\,,\,-\frac{r^2}{a^2}\Big)\,,\qquad (\hat E, \gamma) = \pm\Big(1 + \frac{2r^2}{a^2}\,,\,\frac{r^2}{a^2}\Big)\,,$ (3.12)
and the corresponding velocities are:
$\Big(\frac{d\tau}{d\lambda}, \frac{d\psi}{d\lambda}, \frac{d\phi_1}{d\lambda}, \frac{d\phi_2}{d\lambda}\Big) = -m(1,0,0,-1)\,,\;\; m(1,0,2,-1)\,,\;\; -m(1,2,0,1)\,,\;\; m(1,2,2,1)\,,$ (3.13)
which may be written as:
$\Big(\frac{d\tau}{d\lambda}, \frac{d\psi'}{d\lambda}, \frac{d\phi_1'}{d\lambda}, \frac{d\phi_2'}{d\lambda}\Big) = -m(1,-1,-1,0)\,,\;\; m(1,-1,1,0)\,,\;\; -m(1,1,-1,0)\,,\;\; m(1,1,1,0)\,.$ (3.14)
We will always choose the sign of $m$ so that $\frac{d\tau}{d\lambda} > 0$. The four solutions then correspond to the four choices of the direction of rotations in the $\psi'$ and $\phi_1'$ directions. The values of $\frac{d\phi_2}{d\lambda}$ in (3.13) are simply an artifact of setting $L_2 = 0$ and the mixing terms in (3.11).

3.3 Circular orbits in the superstratum

3.3.1 The "BPS geodesics"

One set of superstratum orbits is extremely simple, and closely resembles the co-rotating AdS$_3$ geodesics:
$(\hat E, \gamma) = \pm\Big(1\,,\,-\frac{r^2}{a^2}\Big)\,.$ (3.15)
The corresponding velocities are:
$\Big(\frac{d\tau}{d\lambda}, \frac{d\psi}{d\lambda}, \frac{d\phi_1}{d\lambda}, \frac{d\phi_2}{d\lambda}\Big) = -m\big(A^2 G(r), 0, 0, -1\big)\,,\;\; m\big(A^2 G(r), 0, 2G(r), -1\big)\,.$ (3.16)
Note that both these sets of geodesics have $\frac{d\psi}{d\lambda} \equiv 0$, and so, from the perspective of the three-dimensional space-time, $\mathcal{K}$, these geodesics are also co-rotating. There is also no restriction on the value of $r$: these geodesics are smooth and well-defined for all values of $r$. They are thus like BPS objects in that they can be placed anywhere within the superstratum. Indeed, from the perspective of the manifold, $\mathcal{K}$, they are floating branes [32] in that they sit happily at fixed $(r,\psi)$, feeling no force. We will therefore refer to these orbits as "BPS geodesics." We note that the first family of these geodesics has $\frac{d\phi_1}{d\lambda} = 0$ and is therefore also co-rotating, or floating, on the S$^3$. As we will see, the counter-rotating geodesics have very different properties and are "very non-BPS" objects.
3.3.2 Bounding the counter-rotating geodesics

The counter-rotating geodesics come from the values of $\hat E$ that lead to the vanishing of the expression in the square bracket of (3.6). To have non-trivial solutions one must have
$Q(r) \equiv \big(P(r)^2 - 16\,r^2(r^2+a^2)\,G(r)\big) > 0\,.$ (3.17)
This is true for $b = 0$ and for small values of $b$. However, as we will see, there is a critical value, $b_{\rm crit}$, of $b$, above which $Q(r)$ can vanish, and become negative. Indeed, we will show that if $b > b_{\rm crit}$ then counter-rotating circular geodesics can only exist at values of $r$ less than a maximal value, $r_{\rm max}$, that depends on $b$. This result follows from the following identity:
$Q(r) = 4a^4\left(1 + \frac{b}{\sqrt{2a^2+b^2}}\,x^n\big(\sqrt{n+1}+\sqrt{n}\,x\big)\right)\left(1 + \frac{b}{\sqrt{2a^2+b^2}}\,x^n\big(\sqrt{n+1}-\sqrt{n}\,x\big)\right)$
$\qquad\times\left(1 - \frac{b}{\sqrt{2a^2+b^2}}\,x^n\big(\sqrt{n+1}+\sqrt{n}\,x\big)\right)\left(1 - \frac{b}{\sqrt{2a^2+b^2}}\,x^n\big(\sqrt{n+1}-\sqrt{n}\,x\big)\right),$ (3.18)
where
$x \equiv \frac{r}{\sqrt{r^2+a^2}}\,.$ (3.19)
Since $0 \le x < 1$, it follows that all the terms in the parentheses in (3.18) are strictly positive, except, possibly, for the third parenthesis:
$q(r) \equiv \left(1 - \frac{b}{\sqrt{2a^2+b^2}}\,x^n\big(\sqrt{n+1}+\sqrt{n}\,x\big)\right).$ (3.20)
Define $b_{\rm crit}$ by the vanishing of $q(r)$ as $r \to \infty$. One then finds that
$b_{\rm crit} = a\,\sqrt{\sqrt{1+\tfrac{1}{n}}-1}\,.$ (3.21)
For $b > b_{\rm crit}$, $q(r)$ will go negative for $r$ sufficiently large. Define $r_{\rm max}$ by:
$q(r_{\rm max}) = 0\,,$ (3.22)
which has no solutions for $b < b_{\rm crit}$ and a single real solution for $b > b_{\rm crit}$. It follows that counter-rotating circular geodesics can be located at any value of $r$ when $b \le b_{\rm crit}$, while for $b > b_{\rm crit}$ they can only be located at $r < r_{\rm max}$. It is also very interesting to note that (i) $b_{\rm crit}$ is very small, and (ii) the value of $r_{\rm max}$ drops very rapidly from infinity to the cap as $b$ increases above $b_{\rm crit}$. To be more specific, $b_{\rm crit}$ decreases monotonically with $n$, starting at $a\sqrt{\sqrt{2}-1} \approx 0.6436\,a$ and, at large $n$, it is asymptotic to $\frac{a}{\sqrt{2n}}$. The transition zone between the cap and the BTZ throat is approximately at the turning point of $G(r)$, or at $r = a\sqrt{n}$. For $r_{\rm max}$ to drop to this value of $r$ one must have $q(a\sqrt{n}) = 0$, which means
$b = b_{\rm cap} \equiv a\,\sqrt{\frac{2\,(n+1)^{n+1}}{(2n+1)^2\,n^n - (n+1)^{n+1}}}\,.$ (3.23)
Thus the counter-rotating orbits are restricted to the cap for $b > b_{\rm cap}$. The expression for $b_{\rm cap}$ is also a monotonically decreasing function of $n$, starting at $2a\sqrt{\tfrac{2}{5}} \approx 1.2649\,a$ and, at large $n$, it is asymptotic to $a\sqrt{\tfrac{e}{2n}} \approx 1.1658\,\tfrac{a}{\sqrt{n}}$. We have shown some illustrative examples of the behavior of $r_{\rm max}$ as a function of $b$ in Fig. 1. The rapid confinement of counter-rotating geodesics to the cap of the microstate geometry suggests a critical transition to a non-BPS bound state. We now show that there is indeed a clear physical interpretation of the behavior we have found here.

3.3.3 Holography and the bound on geodesics

It is valuable to recall the holographic dictionary for the superstrata that we are using here [24,26]. These solutions are dual to the D1-D5 state:
$\big(|{+}{+}\rangle_1\big)^{N_1}\Big(\big(L_{-1} - J^3_{-1}\big)^n|00\rangle_{k=1}\Big)^{N_2}\,,$ (3.24)

[Figure 1: Plots of $r_{\rm max}$ as a function of $b$ for counter-rotating circular geodesics with $a = 1$, with $n = 4$ (on the left) and $n = 25$ (on the right).
The horizontal dashed line marks the transition at $r = \sqrt{n}\,a$ between the BTZ throat and the cap. The vertical dashed line is at $b = b_{\rm crit}$. Up until $b = b_{\rm crit}$ there is no limit on the radius of the counter-rotating geodesics, but as $b$ increases above $b_{\rm crit}$, the maximum possible radius for circular counter-rotating geodesics, $r_{\rm max}$, drops precipitously towards the end of the BTZ throat and the start of the cap. When scaled appropriately, the features of both curves are almost identical.]

with the constraint $N_1 + N_2 = N \equiv n_1 n_5$, where $n_1$ and $n_5$ are the numbers of underlying D1 and D5 branes. The quantized angular momenta, $j_L$, $j_R$, and momentum, $n_P$, of this state are given by:
$j_L = j_R = \tfrac{1}{2}\,N_1\,,\qquad n_P = n\,N_2\,.$ (3.25)
These are related to the supergravity parameters by:
$j_L = j_R = \tfrac{1}{2}\,\mathcal{N}\,a^2\,,\qquad n_P = \tfrac{1}{2}\,\mathcal{N}\,n\,b^2\,,$ (3.26)
where $\mathcal{N} \equiv n_1 n_5 R_y^2/(Q_1 Q_5)$. The constraint $N_1 + N_2 = n_1 n_5$ then translates directly into the regularity condition (2.9). For these states one can also write the regularity condition as:
$j + \frac{n_P}{2n} = \frac{1}{2}\,n_1 n_5\,,$ (3.27)
where $j \equiv j_L = j_R$. As noted in [24], rotating D1-D5-P black holes with regular horizons exist when $n_1 n_5 n_P - j^2 > 0$ and this cosmic censorship bound defines the "black-hole regime" for these parameters. This bound translates into the supergravity parameters as:
$0 < n_1 n_5 n_P - j^2 = \frac{1}{2}\,\mathcal{N}^2\left(\frac{Q_1 Q_5}{R_y^2}\,n\,b^2 - \frac{1}{2}\,a^4\right) = \frac{1}{4}\,\mathcal{N}^2\,n\left(\Big(\frac{b^2}{a^2}+1\Big)^2 - \Big(1+\frac{1}{n}\Big)\right),$ (3.28)
where we have used the regularity condition (2.9). It follows that the black-hole bound is simply:
$\frac{b^2}{a^2} > \sqrt{1+\frac{1}{n}} - 1 \equiv \frac{b_{\rm crit}^2}{a^2}\,,$ (3.29)
where $b_{\rm crit}$ was introduced in (3.21). Thus the critical value of $b$ that determines the onset of trapping of counter-rotating circular geodesics is nothing other than the black-hole bound. This is particularly interesting because, as $b$ increases, the overall superstratum geometry slowly changes from global AdS$_3$, at $b = 0$, to some capped BTZ geometry with a progressively longer throat. There is no sudden and obvious transition in the geometry as one crosses the black-hole bound. However, we see that the counter-rotating geodesics know exactly where this bound lies and provide an "order parameter" for this transition. We conclude this section by looking at some representative examples of the geodesics.

3.3.4 Orbits in the cap, at infinity and in between

For $n > 1$, the metric in the cap [Footnote 3: For $n = 1$, the bump functions do not quite die off fast enough as $r \to 0$ and so there is some deformation of the cap metric.] is that of AdS$_3$. If one drops all the $r^{2n}$ terms in (2.13) one obtains:
$d\tilde s^2_6 = \frac{dr^2}{r^2+a^2} + \frac{r^2}{a^2}\left(d\psi - \frac{d\tau}{A^2}\right)^2 - \frac{(r^2+a^2)}{a^2}\left(\frac{d\tau}{A^2}\right)^2 + d\theta^2 + \sin^2\theta\left(d\phi_1 - \frac{d\tau}{A^2}\right)^2 + \cos^2\theta\left(d\phi_2 + \Big(\frac{d\tau}{A^2} - d\psi\Big)\right)^2$
$\quad = \frac{dr^2}{r^2+a^2} + \frac{r^2}{a^2}\,d\psi'^2 - \frac{(r^2+a^2)}{a^2}\,d\tau'^2 + d\theta^2 + \sin^2\theta\,d\phi_1'^2 + \cos^2\theta\,d\phi_2'^2\,,$ (3.30)
where $\tau' = \frac{\tau}{A^2}$, $\psi' = \psi - \tau'$, $\phi_1' = \phi_1 - \tau'$ and $\phi_2' = \phi_2 - (\psi - \tau')$. The only difference between this and the $b \to 0$ limit is the factor of $A^2$ rescaling $\tau$, which represents the redshift between the top and bottom of the superstratum throat. The four solutions to the constraints are then exactly as in (3.12) and these lead to orbital motion given by (3.13) and (3.14) except that $\frac{d\tau}{d\lambda}$ must be replaced by $\frac{d\tau'}{d\lambda}$.
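The bounds (3.20)-(3.23) are explicit enough to evaluate directly. The following sketch (our own illustration, not the paper's code) computes $b_{\rm crit}$ and $b_{\rm cap}$ and finds $r_{\rm max}$ by root-finding on $q(r)$, reproducing, for example, $b_{\rm crit} \approx 0.4741$ and $r_{\rm max}(b{=}10) \approx 0.79$ for $a = 1$, $n = 2$, as quoted in the captions of Figs. 3 and 4:

```python
import numpy as np
from scipy.optimize import brentq

def b_crit(n, a=1.0):
    return a*np.sqrt(np.sqrt(1.0 + 1.0/n) - 1.0)                                 # (3.21)

def b_cap(n, a=1.0):
    return a*np.sqrt(2.0*(n+1)**(n+1) / ((2*n+1)**2 * n**n - (n+1)**(n+1)))      # (3.23)

def q_of_r(r, b, n, a=1.0):
    x = r/np.sqrt(r**2 + a**2)                                                    # (3.19)
    return 1.0 - b/np.sqrt(2*a**2 + b**2) * x**n * (np.sqrt(n+1) + np.sqrt(n)*x)  # (3.20)

def r_max(b, n, a=1.0):
    """Single real root of q(r) = 0, existing only for b > b_crit, cf. (3.22)."""
    if b <= b_crit(n, a):
        return np.inf
    return brentq(lambda r: q_of_r(r, b, n, a), 1e-9, 1e6)

n = 2
print(b_crit(n))          # ~0.4741
print(b_cap(n))           # b above which counter-rotating orbits sit in the cap
print(r_max(10.0, n))     # ~0.79 for b = 10
```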
At infinity, the metric is asymptotically AdS$_3$ and $G(r) \to 1$ as $r \to \infty$. The co-rotating circular geodesics with velocities (3.16) simply limit to those of AdS$_3$, (3.13), up to the re-scaling $\tau' = \frac{\tau}{A^2}$. The story is somewhat different for the counter-rotating geodesics. Expanding the solutions to the constraints around infinity gives:
$(\hat E, \gamma) = \pm\,\frac{r^2}{\sqrt{n\,(b_{\rm crit}^2 - b^2)(b_{\rm crit}^2 + b^2 + 2)}}\,\big(2A^2\,,\,-1\big)\,,$ (3.31)
and the corresponding velocities are:
$\Big(\frac{d\tau}{d\lambda}, \frac{d\psi}{d\lambda}, \frac{d\phi_1}{d\lambda}, \frac{d\phi_2}{d\lambda}\Big) = m\,(0,0,1,0) \mp \frac{m\,a^2}{\sqrt{n\,(b_{\rm crit}^2 - b^2)(b_{\rm crit}^2 + b^2 + 2)}}\,\big(A^2, 2A^2, 1, 1\big)\,.$ (3.32)
These match the AdS$_3$ results for counter-rotating geodesics, (3.12) and (3.13), for $b = 0$. These solutions also become unphysical for $b > b_{\rm crit}$. Finally, we catalog some representative examples of the velocities of particles on circular geodesics of various radii. Fig. 2 shows the non-trivial velocities of a simple, but generic, "co-rotating" geodesic. These geodesics exist for all values of $r$. Fig. 3 shows a generic "counter-rotating" geodesic for a value of $b < b_{\rm crit}$. Again these exist for all values of $r$.

[Figure 2: The geodesic velocities, $\big(\frac{d\tau}{d\lambda}, \frac{d\phi_1}{d\lambda}\big)$, for the second set of co-rotating geodesics in (3.16), with $a = 1$, $b = 10$, $n = 2$. (We do not plot $\big(\frac{d\psi}{d\lambda}, \frac{d\phi_2}{d\lambda}\big)$ because these velocities are constant.) Note that the velocities are finite and well-defined for $0 \le r < \infty$. These velocities follow the profile of $G(r)$, whose minimum is at $r = \sqrt{2} \approx 1.4142$.]

Fig. 4 shows a generic "counter-rotating" geodesic for a value of $b > b_{\rm crit}$. These geodesics can only exist for a limited range of values of $r < r_{\rm max}$, and the velocities all diverge as $r \to r_{\rm max}$. For the example depicted, we have $b \gg b_{\rm crit}$, and circular counter-rotating geodesics can only exist in the cap region.

4 Geodesic deviation, tidal forces and resonances

4.1 The scale of the tidal forces

To bound the size of the tidal forces, we define
$M_{MN} = -R_{MPNQ}\,\frac{dx^P}{du}\frac{dx^Q}{du} \equiv -R_{MuNu}\,,$ (4.1)
and note that
$M_{MN}\,\frac{dx^N}{du} = 0\,.$ (4.2)
It follows that
$A \equiv \sqrt{A_{ab}\,A^{ab}} = \sqrt{M_{MN}\,M^{MN}}\,.$ (4.3)
The latter expression is simpler to compute because one does not need the parallel-transported frames. Since we are primarily interested in string probes, we will focus on the ten-dimensional string metric (2.20). It is also convenient to factor out the scale, $\sqrt{Q_1 Q_5}$, from the metric and work with the dimensionless metric:
$d\tilde s^2_{10} \equiv \frac{1}{\sqrt{Q_1 Q_5}}\,ds^2_{10} = \Pi\left(d\tilde s^2_6 + \frac{1}{Q_5}\,d\hat s^2_4\right).$ (4.4)

[Figure 3: Geodesic velocities for a counter-rotating geodesic with $a = 1$, $b = \frac{2}{5}$, $n = 2$. These geodesics exist for all values of $r$ because $b < b_{\rm crit} = \big(\sqrt{\tfrac{3}{2}} - 1\big)^{\frac{1}{2}} \approx 0.4741$.]
Indeed, since the T$^4$ directions are completely orthogonal to the six-dimensional metric, we can factor out the T$^4$ and simply compute and examine $A$ for the six-dimensional metric:
$\Pi\,d\tilde s^2_6\,.$ (4.5)
We will also simplify the angular dependence by defining:
$\chi \equiv 2\,n\,\psi + 2\,\phi_1\,.$ (4.6)
As noted in the discussion of (3.23), when $b$ is only slightly greater than $a$, the counter-rotating geodesics are localized in the cap, where the geometry is fairly smooth and close to that of global AdS$_3$. We therefore focus on the co-rotating "BPS geodesics." There are two families of such geodesics, corresponding to the two possible choices of sign in (3.15) and the two sets of velocities in (3.16). Amusingly enough, the first family of geodesics is "too BPS;" it has $\frac{d\psi}{d\lambda} = \frac{d\phi_1}{d\lambda} = 0$ and so has $\frac{d\chi}{d\lambda} = 0$. It simply does not traverse the bumps in the metric at all! Indeed, one finds that for these geodesics, $A \equiv 2m^2$ for all values of $r$, $a$ and $b$. This is precisely the value for global AdS and so these geodesics do not sense any deviation from AdS created by the momentum charge and momentum wave in the superstratum. The second set of BPS geodesics results in non-trivial tidal forces because they have non-trivial motion in the $\phi_1$-direction. One can explicitly simplify $A$ to obtain:
$A = m^2\,\big(1 - (1-G(r))\cos\chi\big)^{-2}\Big(2 + (1-G(r))^2\big(1 + 9\,G(r)^2\big) + 4\,(1-G(r))\big(1 + 2\,G(r)^2\big)\cos\chi + (1-G(r))^3\big(1 + G(r)\big)\cos 2\chi\Big)\,.$ (4.7)

[Figure 4: Geodesic velocities for a counter-rotating geodesic with $a = 1$, $b = 10$, $n = 2$. Here $b_{\rm crit} = \big(\sqrt{\tfrac{3}{2}}-1\big)^{\frac{1}{2}} \approx 0.4741$ and for $b = 10$ one has $r_{\rm max} \approx 0.7941$. The geodesic velocities diverge as the orbit is pushed out to $r_{\rm max}$.]

To get a sense of the structure of this function we have plotted its features in Figs. 5, 6 and 7. In general, the peak value of $A$ increases with $b$, but rapidly saturates at large $b$ (see Fig. 5) at a value that is not much larger (less than 50% higher) than the AdS tidal force. The second plot in Fig. 5 shows that if one increases $n$, the peak tidal force actually decreases and the peak flattens and moves to larger values of $r$. For $n > 20$, the tidal forces are very close to their AdS values. The tidal forces, and hence $A$, have non-trivial dependence on $\chi$. Fig. 6 shows that the fluctuations are largest for $n = 1$, and their amplitudes decrease with $n$. The amplitude of the fluctuations in $A$ is of the same order as the AdS value of $A$. Our a priori expectations were that the amplitudes of the tidal forces would be large and increase with $n$. We anticipated the tidal forces depicted in the first diagram in Fig. 7: an extremely bumpy function near the transition zone, and we thought that a null geodesic would bounce across this at ultra-relativistic speeds creating the large tidal forces. Ironically, the non-BPS geodesics cannot enter this bumpy region, and one of the BPS geodesics does not traverse the bumps at all, while the other only traverses the bumps on the S$^3$. Thus the only effect of $n$ on the tidal forces comes from the amplitude of $G(r)$, and this decreases with $n$.
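Since (4.7) depends on the geometry only through the profile value $G(r)$, it is easy to explore numerically. A minimal sketch of our own (the profile $G$ is left as an input) that evaluates $A/m^2$ and confirms the global-AdS value $A = 2m^2$ at $G = 1$:

```python
import numpy as np

def tidal_A_over_m2(G, chi):
    """Magnitude A/m^2 of (4.7) for the second family of BPS geodesics,
    as a function of the profile value G = G(r) and the angle chi = 2 n psi + 2 phi_1."""
    g1 = 1.0 - G
    num = (2.0 + g1**2*(1.0 + 9.0*G**2)
           + 4.0*g1*(1.0 + 2.0*G**2)*np.cos(chi)
           + g1**3*(1.0 + G)*np.cos(2.0*chi))
    return num / (1.0 - g1*np.cos(chi))**2

# Global AdS3 (b = 0 means G == 1): A = 2 m^2 for every chi
assert np.allclose(tidal_A_over_m2(1.0, np.linspace(0, 2*np.pi, 7)), 2.0)

# Oscillation of A around the AdS value for a sample G < 1 along the orbit
print(tidal_A_over_m2(0.6, np.array([0.0, np.pi/2, np.pi])))
```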
While the first plot in Fig. 7 depicts the tidal bumps in the spatial $(r,\psi)$ directions of the geometry of $\mathcal{K}$, it is Fig. 6 and the second plot in Fig. 7 that best represent the tidal forces, as a result of orbiting in $\phi_1$ and not $\psi$. Thus, unlike infalling geodesics, the geometry and the motion of the probe do not amplify the tidal forces. These forces fluctuate by amounts of $O(1)$ in units set by the AdS radius. We now consider what this can do to stringy probes.

[Figure 5: Plots of the magnitude, $A$, of the tidal force in the dimensionless metric (4.4). The plot on the left shows $A$ as a function of $r$, for $a = 1$, $n = 1$, $\chi = \frac{\pi}{4}$, for $b = 1, 5, 10, 20, 500$. The higher curves correspond to higher values of $b$, and the curve does not change significantly for $b \ge 10$. The curve on the right shows $A$ as a function of $r$, for $a = 1$, $b = 20$, $\chi = \frac{\pi}{4}$, for $n = 1, 2, 10, 25$. As $n$ increases, the curve flattens and the peak moves to the right. The horizontal dashed line shows $A$ for the AdS metric with $b = 0$.]

[Figure 6: Plots of the magnitude, $A$, of the tidal force in the dimensionless metric (4.4) as a function of $\chi$. These plots are taken near the peak amplitude of $A$, with $r = \sqrt{n}\,a$, for $a = 1$, $b = 20$, for $n = 1, 2, 5, 25$. The higher amplitudes correspond to smaller values of $n$. The horizontal dashed line shows $A$ for the AdS metric with $b = 0$.]

4.2 Stringy excitations

The string world-sheet action in the pp-wave limit, (2.27), becomes
$S = \int d^2\sigma\,\Big[2\,(\partial_\alpha u)(\partial^\alpha v) + \partial_\alpha z^i\,\partial^\alpha z^j\,\delta_{ij} + A_{ab}(u)\,z^a z^b\,(\partial_\alpha u)(\partial^\alpha u) + \dots\Big]\,,$ (4.8)
where $\dots$ represent additional terms arising from the NS $B_{\mu\nu}$ field and the fermionic fields. Since we are looking for a qualitative understanding, we will omit these terms in the following discussion. In the light-cone gauge one takes:
$u = \alpha' E\,\sigma^0\,,$ (4.9)
and then the $k$-th oscillator mode satisfies:
$\partial^2_{\sigma^0} z^a + k^2 z^a + (\alpha' E)^2\,A^a{}_b(u)\,z^b = 0\,.$ (4.10)

[Figure 7: The magnitude of $A$ in the BTZ throat and cap as a function of $(x_1, x_2) = r(\cos\psi, \sin\psi)$, on the left, and as a function of $(x_1, x_2) = r(\cos\phi_1, \sin\phi_1)$, on the right. This geometry has $a = 1$, $n = 4$ and $b = 10$. Since $n = 4$, there are thus four times as many bumps in the function on the left as there are in the function on the right. The center shows the smooth AdS cap, and the folds reveal the bumps at the junction zone between the throat and the cap.]

Generally, the matrix $A^a{}_b$ is non-diagonal and thus generates a rotation of $z^i$. As depicted in Fig. 6, the matrix exhibits oscillatory behavior, and this will be reflected in its eigenvalues. The actual behavior is a rather complicated function, but to give a broad-brush description of string excitations, we focus on a single eigenvalue, $\hat A$, and approximate its form by:
$\hat A \sim \alpha_0 + \alpha_1\,n^{-s}\cos(\alpha_2\,\alpha' E\,\sigma^0)\,,$ (4.11)
where, based on Fig. 6 and the velocities depicted in Fig. 2 for the second family of BPS geodesics, the parameters $\alpha_0$, $\alpha_1$ and $\alpha_2$ are numbers of order unity, and $s > 0$ is chosen to match the diminishing amplitude of the eigenvalues with $n$. The primary contribution to $\alpha_0$ comes from the constant curvature of the unperturbed AdS metric.
To arrive at the oscillatory term, recall that the plot in Fig. 6 is in terms of $\chi$, defined in (4.6), and the velocities are all of $O(1)$ in their dependence on $\phi_1$, or $u = \alpha' E\,\sigma^0$. Thus we consider the following mode equation:
$\Big[\partial^2_{\sigma^0} + k^2 + \alpha_0\,(\alpha' E)^2 + \alpha_1\,(\alpha' E)^2\,n^{-s}\cos(\alpha_2\,\alpha' E\,\sigma^0)\Big]z = 0\,.$ (4.12)
It is also important to recall that in analyzing the tidal forces we have factored the length scale, $(Q_1 Q_5)^{\frac{1}{4}}$, out of the metric and are working with the dimensionless metrics (2.13) and (2.22). The tidal tensor has dimensions of $L^{-2}$, and so $\alpha_0$ and $\alpha_1$ are actually numbers of order unity times $(Q_1 Q_5)^{-\frac{1}{2}}$, and $\alpha_2$ is a number of order unity times $(Q_1 Q_5)^{-\frac{1}{4}}$. The differential operator in (4.12) is then dimensionless because $[\alpha'] \sim L^2$ and the center of mass energy, $E$, has dimensions $[E] \sim L^{-1}$. It is also useful to recall (see, for example, [13]) that one has:
$Q_1 Q_5 = \frac{(2\pi)^4\,g_s^2\,\alpha'^4}{V_4}\,n_1 n_5\,,$ (4.13)
where $n_1$ and $n_5$ are the quantized charges and $V_4$ is the volume of the four-dimensional compactification torus. The dynamics is therefore entirely controlled by the value of
$\nu \equiv \frac{\alpha' E}{(Q_1 Q_5)^{\frac{1}{4}}} \equiv \frac{V_4^{\frac{1}{4}}\,E}{2\pi\,g_s^{\frac{1}{2}}\,(n_1 n_5)^{\frac{1}{4}}}\,.$ (4.14)
For $\nu \ll 1$, the amplitude of the perturbations is extremely small and the frequency of the perturbation, $\alpha' E$, is much less than the lowest mode of the string, and so the string will remain in its ground state. Stringy excitations will only become pronounced for $\nu \gtrsim 1$. In this range the energy of the probe is large but it remains within the probe approximation. Indeed, from the perspective of asymptotically-flat superstrata, a significant gravitational back-reaction will arise when the energy, $E$, is comparable with the mass of the background, which would mean:
$E \sim G_5^{-1}\,(Q_1 + Q_5) \sim \frac{8\,V_4\,R_y}{g_s^2\,\alpha'^4}\,(Q_1 + Q_5)\,.$ (4.15)
From the AdS/CFT perspective, a heavy operator has $E \sim c \sim Q_1 Q_5$. Thus, however one slices it,
$E \sim \alpha'^{-1}\,(Q_1 Q_5)^{\frac{1}{4}} \sim \frac{2\pi\,g_s^{\frac{1}{2}}}{V_4^{\frac{1}{4}}}\,(n_1 n_5)^{\frac{1}{4}}$ (4.16)
remains well within the bounds of the probe approximation. On the other hand, this energy is a large multiple of the Kaluza-Klein energy, and so such a particle will have many channels in which it can interact with the background. Nevertheless, it is interesting and instructive to examine how such a string probe can become excited as a result of its motion through the background metric of the superstratum. Indeed, one can easily see how such interactions can grow exponentially and could therefore lead to chaotic excitations of the string.

4.3 Fluctuating tidal forces

Motion in periodic potentials has a long history in physics, most particularly with Bloch wave solutions to the Schrödinger equation. The string dynamics involves a second-order linear differential equation with a periodic "potential" term and represents a classic situation in Floquet theory. One can always write the equations as a first-order system:
$\frac{d}{dt}X(t) = A(t)\,X(t)\,,$ (4.17)
where $X(t)$ is an $n$-dimensional vector and $A(t+T) = A(t)$ is an $n\times n$ matrix with period $T$. The solutions to this system have the form
$X(t) = e^{i\mu t}\,P(t)\,,$ (4.18)
where $P(t+T) = P(t)$ is a periodic vector function and $\mu \in \mathbb{C}$ is a complex number. There are $n$ independent such solutions to this system with generically $n$ different values, $\mu_j$, for $\mu$.
The $\mu_j$ measure the failure of periodicity and are known as the characteristic, or Floquet, exponents. [Footnote 4: We have added an extra factor of $i$ in the exponent in (4.18) so as to make the conventions consistent with the standard conventions for Mathieu functions.] The imaginary part of $\mu_j$ is the Lyapunov exponent of the solution, and if it is negative, the solution grows exponentially with time. In this way, periodic tidal forces can lead to chaotic scrambling of strings.

[Figure 8: The left-hand plot shows the regions of stability (white) and instability (red) in the $(q,a)$ parameter space of the Mathieu equation. The instability regions extend all the way to the $q = 0$ axis, meeting it at $(q = 0, a = j^2)$ for $j \in \mathbb{Z}^+$. (The apparent gaps in the plot as one approaches $q = 0$ are artifacts of the numerical resolution.) The plots on the right show the unstable Lyapunov exponents for $q = \frac{1}{2}$ and $a \approx 4$ and $a \approx 16$. As $a$ increases, the instability bands get narrower and the exponents get smaller.]

It is entirely possible that the solution to the system has vanishing exponents, or simply ${\rm Im}(\mu_j) \ge 0$, so that the solutions do not grow in time. This, of course, depends on the details of $A(t)$. The tidal tensors we are considering are already very complicated and the tidal tensors for more general superstrata are going to be even more complex. Therefore, rather than making an exhaustive search for instabilities in the system we have here, we will make some general observations about a much simpler system whose features resemble the tidal system of interest. The Mathieu equation
$\frac{d^2}{dt^2}Y(t) + \big(a - 2\,q\cos(2t)\big)Y(t) = 0\,,$ (4.19)
is well studied and understood, and has a potential that is qualitatively similar to those depicted in Fig. 6. The equation, and hence the solution space, is invariant under $t \to -t$, which means that any imaginary exponent signals an instability. It is also worth noting that the sign of $q$ can be flipped by sending $t \to t + \frac{\pi}{2}$ and so stability issues are invariant under $q \to -q$. For $q = 0$ the equation is that of the simple harmonic oscillator and the frequency is $\sqrt{a}$. The exponent, $\mu$, for the Mathieu equation is a well-studied function of $(q,a)$ and so it is straightforward to map out the regions of stability and instability in this parameter space. We have plotted these in Fig. 8. At large $|q|$ the solutions are largely unstable, with very small islands of stability for specific, narrow ranges of $a$. For $q = 0$, the solutions are harmonic and thus stable, or marginally stable. The instability regions are curvilinear triangles, each with an apex at $(q = 0, a = j^2)$ for $j \in \mathbb{Z}^+$. The regions of instability reach all the way down to $q = 0$; however, for larger values of $a$ the instability zone for small $q$ is extremely narrow and the Lyapunov exponent is very small. In Fig. 8 we have also plotted the imaginary parts of the exponent in the regions $a \approx 4$ and $a \approx 16$ with $q = \frac{1}{2}$. These figures illustrate how narrow the instability region is, and how small the Lyapunov exponent becomes. One of the interesting features of these Floquet instabilities is that they are far more extensive and powerful than the much simpler phenomenon of resonant forcing.
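The stability map of Fig. 8 can be reproduced with a few lines of numerics: integrate (4.19) over one period $T = \pi$ to build the monodromy matrix, whose eigenvalues are $e^{i\mu T}$, so the growth rate follows from the largest eigenvalue modulus. A minimal sketch of our own (not the paper's code):

```python
import numpy as np
from scipy.integrate import solve_ivp

def floquet_growth_rate(a, q, T=np.pi):
    """Largest Lyapunov (growth) exponent of Y'' + (a - 2q cos 2t) Y = 0.

    Integrate two independent solutions over one period to get the
    monodromy matrix M; its eigenvalues are e^{i mu T}, so the growth rate
    is log|lambda_max| / T.  Instability <=> |tr M| > 2.
    """
    rhs = lambda t, y: [y[1], -(a - 2.0*q*np.cos(2.0*t))*y[0]]
    cols = []
    for y0 in ([1.0, 0.0], [0.0, 1.0]):
        sol = solve_ivp(rhs, (0.0, T), y0, rtol=1e-10, atol=1e-12)
        cols.append(sol.y[:, -1])
    monodromy = np.array(cols).T
    return np.log(np.max(np.abs(np.linalg.eigvals(monodromy)))) / T

# Instability tongue near a = 1 (apex at (q=0, a=j^2), j=1) vs. a stable band
print(floquet_growth_rate(a=1.0, q=0.5))   # > 0: exponential growth
print(floquet_growth_rate(a=2.0, q=0.5))   # ~ 0: (marginally) stable
```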
The fact that the potential function is periodic generates instabilities that grow exponentially in a region around every integer multiple of the fundamental frequency of the underlying potential, and the larger the value of $q$, the wider the instability range in $a$. Thus, in principle, even a low-energy string probe encountering small bumps in the geometry can excite an exponentially growing string-scale resonance in the probe. This requires fine tuning and the growth will be slow, but it will be non-zero and exponential. For higher-energy probes, the resonance range and exponents will be larger. When $q$ becomes comparable with $a$, the instability regions are dominant." + }, + { + "url": "http://arxiv.org/abs/2307.14255v1", + "title": "Bootstrapping multi-wound twist effects in symmetric orbifold CFTs", + "abstract": "We investigate the effects of the twist-2 operator in 2D symmetric orbifold\nCFTs. The twist operator can join together a twist-$M$ state and a twist-$N$\nstate, creating a twist-$(M+N)$ state. This process involves three effects:\npair creation, propagation, and contraction. We study these effects by using a\nBogoliubov ansatz and conformal symmetry. In this multi-wound scenario, pair\ncreation no longer decouples from propagation, in contrast to the previous\nstudy where $M=N=1$. We derive equations for these effects, which organize\nthemselves into recursion relations and constraints. Using the recursion\nrelations, we can determine the infinite number of coefficients in the effects\nthrough a finite number of inputs. Moreover, the number of required inputs can\nbe further reduced by applying constraints.", + "authors": "Bin Guo, Shaun D. Hampton", + "published": "2023-07-26", + "updated": "2023-07-26", + "primary_cat": "hep-th", + "cats": [ + "hep-th" + ], + "main_content": "Contents: 1 Introduction; 2 Symmetric orbifold CFT for one free boson; 3 Effects of a twist operator; 4 Bogoliubov Transformation; 5 Bootstrapping effects of the twist operator in the multiwound sector (5.1 Relations using $L_0$; 5.2 Relations using $L_{-1}$; 5.3 Relations using $L_1$: 5.3.1 Pair creation $\gamma$, 5.3.2 Propagation $f$; 5.4 Relations using $L_{n>0}$); 6 Summary of Results (6.1 Recursion relations; 6.2 Constraints); 7 Results from Covering Map; 8 Examples (8.1 $M = 2$, $N = 1$; 8.2 $M = 3$, $N = 1$; 8.3 $M = N = 2$); 9 Discussion.

1 Introduction

Symmetric orbifold CFTs have proven to be a useful class of theories in studying AdS$_3$/CFT$_2$ [1-10]. Among these, the symmetric orbifold CFT with $\mathcal{N} = (4,4)$ supersymmetry has played a prominent role in understanding the D1D5 system, both in the context of the free theory and beyond. Recent studies found that the free D1D5 CFT is dual to tensionless string theory on AdS$_3$ [11-17].
The free theory also provides a microscopic count of black hole microstates from a field theory perspective [2,3]. In certain cases, these microstates are explicitly known [18-20], corresponding to specific states in the CFT. Perturbing away from the free point has also been studied to address questions at the gravity point such as the lifting of energies [21-34], tidal scrambling [35,36], thermalization [37,38], and the dynamical evolution of black hole microstates [39], etc. There has also been work exploring symmetric orbifolds of minimal models [40-42]. Symmetric orbifold CFTs have the target space $\mathcal{M}^N/S_N$, where $\mathcal{M}$ represents the target space of the 'seed' CFT, $N$ denotes the number of seed CFT copies, and $S_N$ is the permutation group. Due to the orbifold, there exist twist-$n$ operators $\sigma_n$, around which $n$ copies of the seed CFT permute into each other. The traditional way to compute correlation functions involving twist operators is to map the 2-d base space to the corresponding covering space, as developed in [43-46]. However, as the number of twist operators or copies involved increases, finding the covering map becomes increasingly difficult as it requires solving higher-order polynomial equations. Recently a new method was developed without using the covering space [47,48]. It uses conformal symmetry and the nature of the twist operator as a Bogoliubov transformation. Specifically, the effect of the twist operator $\sigma_2$ joining two singly wound (untwisted) copies was computed. Some early attempts in this direction can be found in [49-51]. For other related works that compute correlation functions of twist operators using conformal symmetry, see e.g. [7,52-55]. The potential generalization of this method to correlation functions where a covering map is not applicable makes it interesting. Furthermore, the perturbation of the CFT away from the free point involves the twist operator $\sigma_2$. Computing the effect of this operator is also crucial for understanding the physics at the gravity point. In this work, we generalize this method to compute the effect of the twist operator $\sigma_2$ on a twist-$M$ state and a twist-$N$ state, creating a twist-$(M+N)$ state. This process involves $M+N$ copies of the seed CFT, which provides a valuable test of the method when more copies are involved. There are three effects of the twist operator $\sigma_2$: pair creation, propagation, and contraction. The main challenge in this work lies in the fact that pair creation is no longer independently determined from propagation; instead, they are coupled together algebraically. This complication arises from the fact that when $L_{-1}$ acts on a twisted vacuum, it does not yield zero as it does when acting on an untwisted vacuum. To solve this coupled system, we organize the equations into recursion relations and constraints. Using the recursion relations, we can determine the infinite number of coefficients in the effects through a finite number of inputs. The number of required inputs can be further reduced by applying constraints. The plan of this paper is as follows. In section 2, we review the symmetric orbifold CFT of a single free boson. In section 3, we review the effects of the twist operator. In section 4, we discuss the Bogoliubov transformation. In section 5, we derive relations for the effects of the twist operator by using conformal generators and the Bogoliubov ansatz.
In section 6, we summarize our results, organizing the relevant equations into recursion relations and constraints. In section 7, we briefly recall the results of the twist effects derived using covering maps for the multi-wound scenario considered in this paper. In section 8, we provide specific examples for some values of $M$ and $N$. In section 9, we discuss our results and future directions.

2 Symmetric orbifold CFT for one free boson

Symmetric orbifold CFTs are obtained by orbifolding $N$ copies of a seed CFT with target space $\mathcal{M}$ by the permutation group $S_N$, which yields the target space
$\mathcal{M}^N/S_N\,.$ (2.1)
In this paper, we study a seed CFT of one free boson, where $\mathcal{M} = \mathbb{R}$ and $c = 1$. Extension to theories containing multiple free bosons and fermions can be obtained similarly [48]. This theory contains untwisted sectors and twisted sectors. Consider a single bosonic field $X(w)$ living on a cylinder defined by the coordinate
$w = \tau + i\sigma\,,\qquad -\infty < \tau < \infty\,,\quad 0 \le \sigma < 2\pi\,.$ (2.2)
Consider $N$ such copies of this boson:
$X^{(i)}(w)\,,\qquad 1 \le i \le N\,.$ (2.3)
Pick $k$ of the $N$ copies and define the $k$ twisted sector by the action of a twist operator of order $k$, $\sigma_k$. What this action does is change the boundary conditions of the bosonic fields $X^{(i)}$ as one takes $\sigma \to \sigma + 2\pi$ as follows:
$X^{(i)}(\sigma + 2\pi) = X^{(i+1)}(\sigma)\,,\quad 1 \le i \le k-1\,;\qquad X^{(k)}(\sigma + 2\pi) = X^{(1)}(\sigma)\,.$ (2.4)
This defines the $k$ twisted sector in the orbifold theory. The remaining copies are in the untwisted sector and are defined by
$X^{(j)}(\sigma + 2\pi) = X^{(j)}(\sigma)\,,\qquad k+1 \le j \le N\,.$ (2.5)
Oscillator modes can be defined in the singly wound (untwisted) sector for copy $i$, at time $\tau$, as
$\alpha^{(i)}_m = \frac{1}{2\pi}\int_{\sigma=0}^{2\pi} dw\, e^{mw}\,\partial X^{(i)}(w)\,,\qquad m \in \mathbb{Z}\,.$ (2.6)
The commutation relations are
$[\alpha^{(i)}_m, \alpha^{(j)}_n] = m\,\delta^{ij}\,\delta_{m+n,0}\,.$ (2.7)
The bosonic modes obey the reality condition
$\alpha^{(i)\dagger}_m = \alpha^{(i)}_{-m}$ (2.8)
and the bosonic vacuum is defined by the condition
$\alpha^{(i)}_m|0\rangle^{(i)} = 0\,,\qquad m \ge 0\,.$ (2.9)
The theory also contains Virasoro generators $L_n$ with $n \in \mathbb{Z}$, which can be written as
$L_n = \frac{1}{2}\sum_i\sum_m \alpha^{(i)}_m\,\alpha^{(i)}_{n-m}\,,\qquad m, n \in \mathbb{Z}\,.$ (2.10)
Their commutation relations with bosonic modes of copy $i$ are given by
$[L_n, \alpha^{(i)}_m] = -m\,\alpha^{(i)}_{m+n}\,.$ (2.11)
Oscillator modes can also be defined for a $k$-wound (twist-$k$) bosonic field at constant time $\tau$ as follows:
$\alpha_m = \frac{1}{2\pi}\int_{\sigma=0}^{2\pi k} dw\, e^{mw}\,\partial X(w)\,,\qquad m = \frac{p}{k}\,,\; p \in \mathbb{Z}\,.$ (2.12)
In the $k$ twisted sector the vacuum is defined by
$\alpha_m|0_k\rangle = 0\,,\qquad m \ge 0\,,$ (2.13)
where the $k$ twisted vacuum $|0_k\rangle$ can be created by the twist-$k$ operator $\sigma_k$ in the following manner:
$|0_k\rangle \equiv \sigma_k(w \to -\infty)\prod_{i=1}^{k}|0\rangle^{(i)}\,.$ (2.14)
We will also use the term '$k$ wound' to denote 'twist-$k$'. The commutation relations for the modes are given by
$[\alpha_m, \alpha_n] = k\,m\,\delta_{m+n,0}\,.$ (2.15)
The Virasoro generators in the $k$ twisted sector take the form
$L_n = \frac{1}{2k}\sum_m \alpha_m\,\alpha_{n-m}\,,\qquad m, n = \frac{p}{k}\,,\; p \in \mathbb{Z}\,.$ (2.16)
The commutation relation between the Virasoro generators and a single bosonic mode is still given by
$[L_n, \alpha_m] = -m\,\alpha_{m+n}\,.$ (2.17)
We note that this relation has the same form as (2.11).
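As a quick orientation for the twisted-sector data just defined, here is a trivial check of our own enumerating the fractional mode levels $p/k$ of (2.12), together with the twist dimension $h(\sigma_k) = \frac{1}{24}\big(k - \frac{1}{k}\big)$ quoted in (2.18) just below:

```python
from fractions import Fraction

def h_twist(k: int) -> Fraction:
    """Dimension of the twist-k operator, h(sigma_k) = (k - 1/k)/24, cf. (2.18)."""
    return Fraction(k*k - 1, 24*k)

for k in (2, 3, 4, 5):
    modes = [Fraction(p, k) for p in range(1, k+1)]   # fractional levels p/k, cf. (2.12)
    print(f"k={k}: h={h_twist(k)}  first modes={modes}")

assert h_twist(2) == Fraction(1, 16)   # matches h in (3.2)
```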
The dimension of the twist-$k$ operator is given by
$h(\sigma_k) = \frac{1}{24}\left(k - \frac{1}{k}\right).$ (2.18)

3 Effects of a twist operator

In this section we discuss the different effects of the twist operator on the bosonic modes. In this paper we only consider the twist-2 operator $\sigma_2$ and we thus define
$\sigma \equiv \sigma_2\,,$ (3.1)
which has dimension
$h \equiv h_2 = \frac{1}{16}\,.$ (3.2)
In addition, we consider processes where an $M$ wound copy and an $N$ wound copy in the initial state are twisted into an $M+N$ wound copy. As such, let's consider a state of the following form:
$\alpha^{(i_1)}_{-\frac{m_1}{k_1}}\,\alpha^{(i_2)}_{-\frac{m_2}{k_2}}\cdots\alpha^{(i_n)}_{-\frac{m_n}{k_n}}\,|0_M\rangle^{(1)}|0_N\rangle^{(2)}\,,$ (3.3)
where the $m_j$ are positive integers and $k_j = M$ for $i_j = 1$, and $k_j = N$ for $i_j = 2$. Now, consider the action of a single twist on this state:
$\sigma(w)\,\alpha^{(i_1)}_{-\frac{m_1}{k_1}}\,\alpha^{(i_2)}_{-\frac{m_2}{k_2}}\cdots\alpha^{(i_n)}_{-\frac{m_n}{k_n}}\,|0_M\rangle^{(1)}|0_N\rangle^{(2)}\,.$ (3.4)
This will produce a final state in the $M+N$ wound sector. We would like to study the effects which characterize this state. These effects come in three types.

(i) Contraction: The first effect is called contraction, where any two modes, $\alpha^{(i)}_{-\frac{r}{k_i}}\,\alpha^{(j)}_{-\frac{s}{k_j}}$, in the initial state (before the twist) contract together. We denote this by
$C[\alpha^{(i)}_{-\frac{r}{k_i}}, \alpha^{(j)}_{-\frac{s}{k_j}}] \equiv C^{ij}_{\frac{r}{k_i},\frac{s}{k_j}}\,.$ (3.5)
We consider all possible pairs of contractions. If the modes don't contract then we must consider another process where each mode moves across the twist to form a linear combination of modes in the final state. We consider this in step (ii).

(ii) Propagation: When a mode passes through the twist it produces a linear combination of modes in the final state. We call this propagation, and the process is characterized by the functions
$f^{(i)}_{\frac{r}{k_i},\frac{p}{M+N}}\,,\qquad i = 1,2\,,\quad k_1 = M\,,\; k_2 = N\,,$ (3.6)
where the first index labels the initial energy and the second index labels the final energy. We note that if the final mode numbers correspond to integer energies and they don't equal the initial energy, then the propagation can be argued to vanish [47,50]:
$f^{(i)}_{\frac{r}{k_i},\frac{p}{M+N}} = 0\,,\qquad \frac{r}{k_i} \neq \frac{p}{M+N} \in \mathbb{Z}\,.$ (3.7)

(iii) Pair creation: The third effect comes when the vacuum itself is twisted together. We write this effect as the following:
$|\chi\rangle \equiv \sigma(w)|0_M\rangle^{(1)}|0_N\rangle^{(2)} = A\,\exp\Big(\sum_{p,q>0}\gamma_{\frac{p}{M+N},\frac{q}{M+N}}\,\alpha_{-\frac{p}{M+N}}\,\alpha_{-\frac{q}{M+N}}\Big)|0_{M+N}\rangle\,,$ (3.8)
where
$\gamma_{\frac{p}{M+N},\frac{q}{M+N}} \neq 0\,,\qquad \frac{p}{M+N}, \frac{q}{M+N} \notin \mathbb{Z}\,.$ (3.9)
$A$ is a function that captures the contributions of the vacuum correlations of just the twist operators themselves. We will not need to determine this function explicitly as it will be divided out in our computations. To be more precise and illustrative about these effects, let's consider a scenario involving only a single mode in the initial state. As there are no other modes present, this mode cannot contract with them and thus must propagate through the twist operator. Additionally, there is also the pair creation effect. The final result is
$\sigma(w)\,\alpha^{(i)}_{-\frac{r}{k_i}}\,|0_M\rangle^{(1)}|0_N\rangle^{(2)} = \sum_{p>0} f^{(i)}_{\frac{r}{k_i},\frac{p}{M+N}}\,\alpha_{-\frac{p}{M+N}}\,|\chi\rangle\,.$ (3.10)
Let's consider two modes in the initial state.
The state produced by the twist operator is given by
$\sigma(w)\,\alpha^{(i)}_{-\frac{r}{k_i}}\,\alpha^{(j)}_{-\frac{s}{k_j}}\,|0_M\rangle^{(1)}|0_N\rangle^{(2)} = \Big(C^{ij}_{\frac{r}{k_i},\frac{s}{k_j}} + \sum_{p>0} f^{(i)}_{\frac{r}{k_i},\frac{p}{M+N}}\,\alpha_{-\frac{p}{M+N}}\sum_{q>0} f^{(j)}_{\frac{s}{k_j},\frac{q}{M+N}}\,\alpha_{-\frac{q}{M+N}}\Big)|\chi\rangle\,.$ (3.11)
The first term arises from the contraction of the two initial modes. The second term comes from the propagation of the two modes through the twist operator. The state $|\chi\rangle$ captures the pair creation effect. In the next section, we explain the motivation behind the ansatz for these effects.

4 Bogoliubov Transformation

Here we explain the Bogoliubov transformation and how it motivates the ansatz for the effects of the twist. We use a simple example. Consider a set of creation and annihilation operators $\hat a, \hat a^\dagger$ of some quantum theory. The vacuum of this theory is defined by
$\hat a|0\rangle_a = 0\,.$ (4.1)
Now consider a change of basis in which the original operators are written in terms of a new set $\hat b, \hat b^\dagger$ with a new vacuum state defined by
$\hat b|0\rangle_b = 0\,.$ (4.2)
We can write the operators $\hat a, \hat a^\dagger$ as a linear combination of the operators $\hat b, \hat b^\dagger$ as follows:
$\hat a = \alpha\,\hat b + \beta\,\hat b^\dagger\,,\qquad \hat a^\dagger = \alpha^*\,\hat b^\dagger + \beta^*\,\hat b\,,$ (4.3)
where $|\alpha|^2 - |\beta|^2 = 1$ to make the commutation relations canonical:
$[\hat a, \hat a^\dagger] = 1\,,\qquad [\hat b, \hat b^\dagger] = 1\,.$ (4.4)
We can now see that $\hat b$ does not annihilate the $a$ vacuum, i.e.
$\hat a|0\rangle_a = 0 \;\Longrightarrow\; (\alpha\,\hat b + \beta\,\hat b^\dagger)|0\rangle_a = 0\,.$ (4.5)
This equation is solved if
$|0\rangle_a = e^{\frac{1}{2}\gamma\,\hat b^\dagger\hat b^\dagger}|0\rangle_b\,,$ (4.6)
where
$\gamma = -\frac{\beta}{\alpha}\,.$ (4.7)
This shows that the vacuum in terms of one set of modes can be an excited state with respect to another set of modes. Let's consider an example to further demonstrate our motivation. Consider two excitations on the '$a$' vacuum:
$\hat a^\dagger\hat a^\dagger|0\rangle_a\,.$ (4.8)
Rewriting this in terms of $\hat b, \hat b^\dagger$ we have
$\hat a^\dagger\hat a^\dagger|0\rangle_a = (\alpha^*\hat b^\dagger + \beta^*\hat b)(\alpha^*\hat b^\dagger + \beta^*\hat b)|0\rangle_a\,.$ (4.9)
Using commutation relations and (4.6), this can be rearranged as
$\hat a^\dagger\hat a^\dagger|0\rangle_a = \Big((\alpha^* + \beta^*\gamma)\,\hat b^\dagger\,(\alpha^* + \beta^*\gamma)\,\hat b^\dagger + \beta^*\alpha^*\Big)e^{\frac{1}{2}\gamma\,\hat b^\dagger\hat b^\dagger}|0\rangle_b\,.$ (4.10)
We can see the similarity between this and our ansatz for the effects of the twist operator, where here the following effects can be identified:

(i) Contraction is identified with the term
$C[\hat a^\dagger\hat a^\dagger] = \beta^*\alpha^*\,.$ (4.11)

(ii) Propagation is given by the term
$\hat a^\dagger \to (\alpha^* + \beta^*\gamma)\,\hat b^\dagger\,.$ (4.12)

(iii) Pair creation:
$|0\rangle_a = e^{\frac{1}{2}\gamma\,\hat b^\dagger\hat b^\dagger}|0\rangle_b\,.$ (4.13)

These coefficients are not independent but are related through (4.7). We will call these three rules, along with the constraint (4.7), the 'normal' Bogoliubov ansatz.
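Equations (4.5)-(4.7) can be verified numerically in a truncated Fock space. A minimal sketch of our own (the truncation size D and the squeezing parameter theta are arbitrary choices): build $\hat b$ as a finite matrix, form $\hat a = \alpha\hat b + \beta\hat b^\dagger$, and check that it annihilates the squeezed state $e^{\frac{1}{2}\gamma\hat b^\dagger\hat b^\dagger}|0\rangle_b$ with $\gamma = -\beta/\alpha$, up to truncation error near the top of the tower:

```python
import numpy as np
from scipy.linalg import expm

D = 60                                           # Fock-space truncation
b = np.diag(np.sqrt(np.arange(1, D)), k=1)       # annihilation operator
bd = b.T                                         # creation operator (real matrix)

# Bogoliubov coefficients with |alpha|^2 - |beta|^2 = 1, cf. (4.3)-(4.4)
theta = 0.3
alpha, beta = np.cosh(theta), np.sinh(theta)
gamma = -beta/alpha                              # (4.7)

vac_b = np.zeros(D); vac_b[0] = 1.0
vac_a = expm(0.5*gamma*(bd @ bd)) @ vac_b        # (4.6), the squeezed 'a' vacuum
vac_a /= np.linalg.norm(vac_a)

a = alpha*b + beta*bd                            # (4.3)
print(np.linalg.norm((a @ vac_a)[:D//2]))        # ~ 0 away from the truncation edge
```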
In [50], the effect of the twist operator was studied using a Bogoliubov transformation along with a constraint similar to the one in (4.7), where the coefficients $\alpha, \beta, \gamma$ were promoted to infinite dimensional matrices $\alpha_{ij}, \beta_{ij}, \gamma_{ij}$. In this method the relation (4.7) becomes a matrix multiplication which requires the inversion of infinite dimensional matrices. In this paper, following [47,48], we use a 'weak' Bogoliubov ansatz in which we use the ansatz outlined in section 3, without the matrix version of the constraint (4.7). Instead, we use conformal symmetry to compute the relevant quantities. In the next section, using conformal generators, we find a set of expressions for pair creation and propagation.

5 Bootstrapping effects of the twist operator in the multiwound sector

In this section, we use conformal symmetry and the Bogoliubov ansatz to derive equations for computing the effects of the twist operator in the multiwound scenario. The results are summarized in section 6.

5.1 Relations using $L_0$

Let us begin with relations that yield the $w$ dependence of the pair creation $\gamma$ and propagation $f$. To determine the $w$ dependence for $\gamma$, we start with the following equation:
$0 = \sigma(w)\,(L_0 - h_M - h_N)\,|0_M\rangle^{(1)}|0_N\rangle^{(2)}\,,$ (5.1)
where we have used the fact that
$L_0|0_P\rangle = h_P|0_P\rangle\,,$ (5.2)
where $h_P$ is defined in (2.18), and the twisted vacuum is defined in (2.14). By commuting $L_0$ through $\sigma(w)$, we obtain the relation
$0 = \big((L_0 - h_M - h_N)\,\sigma(w) - [L_0, \sigma(w)]\big)|0_M\rangle^{(1)}|0_N\rangle^{(2)}\,.$ (5.3)
Let us compute the commutator in the second term. Since similar commutators appear frequently later on, we will derive the commutator for a general $L_n$:
$[L_n, \sigma(w)] = \frac{1}{2\pi i}\oint dw'\,e^{nw'}\,T(w')\,\sigma(w) = e^{nw}\,\frac{1}{2\pi i}\oint dw'\,e^{n(w'-w)}\,T(w')\,\sigma(w)$
$\quad = e^{nw}\,\frac{1}{2\pi i}\oint dw'\,\big(1 + n(w'-w) + \dots\big)\,T(w')\,\sigma(w) = e^{nw}\big(L^{(w)}_{-1} + n\,L^{(w)}_0\big)\sigma(w) = e^{nw}\,(\partial + n\,h)\,\sigma(w)\,.$ (5.4)
Here, the modes with the superscript $(w)$, i.e., $L^{(w)}_n$, refer to those centered around $w$. We have used the fact that $\sigma(w)$ is a primary of dimension $h = \frac{1}{16}$, satisfying the following conditions:
$L^{(w)}_{-1}\sigma(w) = \partial\sigma(w)\,,\qquad L^{(w)}_0\,\sigma(w) = h\,\sigma(w)\,,\qquad L^{(w)}_{n>0}\,\sigma(w) = 0\,.$ (5.5)
Inserting this into (5.3) and using the ansatz (3.8) gives
$0 = (L_0 - \partial - h_M - h_N)\,\sigma(w)|0_M\rangle^{(1)}|0_N\rangle^{(2)} = (L_0 - \partial - h_M - h_N)\,A\exp\Big(\sum_{p,q>0}\gamma_{\frac{p}{M+N},\frac{q}{M+N}}\,\alpha_{-\frac{p}{M+N}}\alpha_{-\frac{q}{M+N}}\Big)|0_{M+N}\rangle\,.$ (5.6)
Expanding the exponential and looking at terms without any modes, we find the relation
$\partial A = (h_{M+N} - h_M - h_N)\,A\,.$ (5.7)
Considering terms with two modes $\alpha_{-\frac{p}{M+N}}\alpha_{-\frac{q}{M+N}}$, we obtain
$0 = \Big(\frac{p+q}{M+N} + h_{M+N} - \partial - h_M - h_N\Big)A\,\gamma_{\frac{p}{M+N},\frac{q}{M+N}}\,.$ (5.8)
By inserting the relation (5.7), we find the following relation:
$\Big(\frac{p+q}{M+N} - \partial\Big)\gamma_{\frac{p}{M+N},\frac{q}{M+N}} = 0\,.$ (5.9)
This indicates that the $w$ dependence of $\gamma$ is given by
$\gamma_{\frac{p}{M+N},\frac{q}{M+N}} \propto e^{\frac{p+q}{M+N}\,w}\,.$ (5.10)
Next, let us derive the $w$ dependence of the propagation $f$ using $L_0$. We first look at copy 1; the results are easily extendable to copy 2. We start with the relation
$0 = \sigma(w)\Big(L_0 - \big(h_M + h_N + \tfrac{r}{M}\big)\Big)\alpha^{(1)}_{-\frac{r}{M}}\,|0_M\rangle^{(1)}|0_N\rangle^{(2)}\,,$ (5.11)
where we have used the commutation relation (2.17).
We can commute $L_0$ through the twist and then use (5.4) to obtain the relation
$0 = \Big(L_0 - \partial - \big(h_M + h_N + \tfrac{r}{M}\big)\Big)\sigma(w)\,\alpha^{(1)}_{-\frac{r}{M}}\,|0_M\rangle^{(1)}|0_N\rangle^{(2)}\,.$ (5.12)
Using the ansatz (3.10), we find
$0 = \Big(L_0 - \partial - \big(h_M + h_N + \tfrac{r}{M}\big)\Big)\sum_{p'>0} f^{(1)}_{\frac{r}{M},\frac{p'}{M+N}}\,\alpha_{-\frac{p'}{M+N}}\,A\exp\Big(\sum_{p,q>0}\gamma_{\frac{p}{M+N},\frac{q}{M+N}}\,\alpha_{-\frac{p}{M+N}}\alpha_{-\frac{q}{M+N}}\Big)|0_{M+N}\rangle\,.$ (5.13)
Looking at terms with one mode $\alpha_{-\frac{p}{M+N}}$, we obtain
$0 = \Big(\big(h_{M+N} + \tfrac{p}{M+N}\big) - \partial - \big(h_M + h_N + \tfrac{r}{M}\big)\Big)A\,f^{(1)}_{\frac{r}{M},\frac{p}{M+N}}\,.$ (5.14)
Inserting the expression (5.7) to eliminate $A$ yields
$\Big(\frac{p}{M+N} - \partial - \frac{r}{M}\Big)f^{(1)}_{\frac{r}{M},\frac{p}{M+N}} = 0\,.$ (5.15)
Hence, we find that the $w$ dependence of the propagation $f^{(1)}$ is given by
$f^{(1)}_{\frac{r}{M},\frac{p}{M+N}} \propto e^{(\frac{p}{M+N} - \frac{r}{M})\,w}\,.$ (5.16)
We see that the exponential factor is the difference between the initial and final energies. Similarly, we can derive $f^{(2)}$ by interchanging $M$ and $N$ in the expression for $f^{(1)}$. Thus, we have
$f^{(2)}_{\frac{r}{N},\frac{p}{M+N}} \propto e^{(\frac{p}{M+N} - \frac{r}{N})\,w}\,.$ (5.17)

5.2 Relations using $L_{-1}$

Now let us derive relations using the conformal generator $L_{-1}$. We can write it as a sum of modes acting on each copy:
$L_{-1} = \frac{1}{2M}\sum_r \alpha^{(1)}_{-\frac{r}{M}}\,\alpha^{(1)}_{-1+\frac{r}{M}} + \frac{1}{2N}\sum_r \alpha^{(2)}_{-\frac{r}{N}}\,\alpha^{(2)}_{-1+\frac{r}{N}}\,.$ (5.18)
It is worth noting that $L_{-1}$ acting on the multiwound vacuum $|0_M\rangle^{(1)}|0_N\rangle^{(2)}$ is not zero, except in the case where $M = N = 1$. This leads to nonlinear recursion relations that involve both the pair creation $\gamma$ and the propagation $f$. In contrast, when $M = N = 1$ (as studied in [47,48]), the linear recursion relations only involve the pair creation $\gamma$ and can be solved more easily. Using (5.18), we can write the following equation:
$0 = \sigma(w)\Big(L_{-1} - \frac{1}{2M}\sum_{r=1}^{M-1}\alpha^{(1)}_{-\frac{r}{M}}\,\alpha^{(1)}_{-1+\frac{r}{M}} - \frac{1}{2N}\sum_{r=1}^{N-1}\alpha^{(2)}_{-\frac{r}{N}}\,\alpha^{(2)}_{-1+\frac{r}{N}}\Big)|0_M\rangle^{(1)}|0_N\rangle^{(2)}\,.$ (5.19)
By commuting $L_{-1}$ through the twist and using (5.4), we obtain
$0 = \Big(L_{-1}\,\sigma(w) - e^{-w}(\partial - h)\,\sigma(w) - \sigma(w)\,\frac{1}{2M}\sum_{r=1}^{M-1}\alpha^{(1)}_{-\frac{r}{M}}\alpha^{(1)}_{-1+\frac{r}{M}} - \sigma(w)\,\frac{1}{2N}\sum_{r=1}^{N-1}\alpha^{(2)}_{-\frac{r}{N}}\alpha^{(2)}_{-1+\frac{r}{N}}\Big)|0_M\rangle^{(1)}|0_N\rangle^{(2)}\,.$ (5.20)
Using (3.11), we obtain
$0 = \Big(L_{-1} - e^{-w}(\partial - h) - \frac{1}{2M}\sum_{r=1}^{M-1} C^{11}_{\frac{r}{M},1-\frac{r}{M}} - \frac{1}{2N}\sum_{r=1}^{N-1} C^{22}_{\frac{r}{N},1-\frac{r}{N}}$
$\qquad - \frac{1}{2M}\sum_{r=1}^{M-1}\sum_{p>0} f^{(1)}_{\frac{r}{M},\frac{p}{M+N}}\,\alpha_{-\frac{p}{M+N}}\sum_{q>0} f^{(1)}_{1-\frac{r}{M},\frac{q}{M+N}}\,\alpha_{-\frac{q}{M+N}} - \frac{1}{2N}\sum_{r=1}^{N-1}\sum_{p>0} f^{(2)}_{\frac{r}{N},\frac{p}{M+N}}\,\alpha_{-\frac{p}{M+N}}\sum_{q>0} f^{(2)}_{1-\frac{r}{N},\frac{q}{M+N}}\,\alpha_{-\frac{q}{M+N}}\Big)$
$\qquad \times A\exp\Big(\sum_{p,q>0}\gamma_{\frac{p}{M+N},\frac{q}{M+N}}\,\alpha_{-\frac{p}{M+N}}\alpha_{-\frac{q}{M+N}}\Big)|0_{M+N}\rangle\,.$ (5.21)
Looking at terms without any modes, we find the relation
$0 = e^{-w}(\partial - h)A + \frac{1}{2M}\sum_{r=1}^{M-1} C^{11}_{\frac{r}{M},1-\frac{r}{M}} + \frac{1}{2N}\sum_{r=1}^{N-1} C^{22}_{\frac{r}{N},1-\frac{r}{N}}\,.$ (5.22)
Using (5.7) we find the following constraint on the sum of contraction terms:
$\frac{1}{2M}\sum_{r=1}^{M-1} C^{11}_{\frac{r}{M},1-\frac{r}{M}} + \frac{1}{2N}\sum_{r=1}^{N-1} C^{22}_{\frac{r}{N},1-\frac{r}{N}} = e^{-w}\big(h - h_{M+N} + h_M + h_N\big)\,.$ (5.23)
Although we don't compute the individual values of $C^{ij}$ in this paper, we will use the above relation to simplify our computations for the quantities $\gamma$ and $f^{(i)}$.
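The right-hand side of the constraint (5.23) is a fixed number once $(M,N)$ is chosen, as is that of the analogous pair-creation sum rule (5.32) derived in the next subsection. A trivial evaluation of our own, for the examples of section 8, using $h_k = \frac{1}{24}(k - \frac{1}{k})$ and $h = \frac{1}{16}$:

```python
from fractions import Fraction

h = Fraction(1, 16)                      # dimension of sigma_2, cf. (3.2)
hk = lambda k: Fraction(k*k - 1, 24*k)   # twist-k dimension, cf. (2.18)

for M, N in [(2, 1), (3, 1), (2, 2)]:
    lhs_sum = h - hk(M + N) + hk(M) + hk(N)   # coefficient of e^{-w} in (5.23)
    pair_sum = h + hk(M + N) - hk(M) - hk(N)  # coefficient of e^{w} in (5.32)
    print(f"M={M}, N={N}:  (5.23) -> {lhs_sum} e^-w,  (5.32) -> {pair_sum} e^w")
```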
Now, looking at terms with two modes $\alpha_{-\frac{p}{M+N}}\alpha_{-\frac{q}{M+N}}$ in (5.21), we obtain the relation

$$A\Big(\gamma_{\frac{p}{M+N}-1,\frac{q}{M+N}}\,[L_{-1},\alpha_{-\frac{p}{M+N}+1}]\,\alpha_{-\frac{q}{M+N}} + \gamma_{\frac{p}{M+N},\frac{q}{M+N}-1}\,\alpha_{-\frac{p}{M+N}}\,[L_{-1},\alpha_{-\frac{q}{M+N}+1}]\Big) + \Big(-e^{-w}(\partial-h)A\,\gamma_{\frac{p}{M+N},\frac{q}{M+N}} - A\frac{1}{2M}\sum_{r=1}^{M-1} f^{(1)}_{\frac{r}{M},\frac{p}{M+N}}f^{(1)}_{1-\frac{r}{M},\frac{q}{M+N}} - A\frac{1}{2N}\sum_{r=1}^{N-1} f^{(2)}_{\frac{r}{N},\frac{p}{M+N}}f^{(2)}_{1-\frac{r}{N},\frac{q}{M+N}} - A\frac{1}{2M}\sum_{r=1}^{M-1} C^{11}_{\frac{r}{M},1-\frac{r}{M}}\,\gamma_{\frac{p}{M+N},\frac{q}{M+N}} - A\frac{1}{2N}\sum_{r=1}^{N-1} C^{22}_{\frac{r}{N},1-\frac{r}{N}}\,\gamma_{\frac{p}{M+N},\frac{q}{M+N}} + A\frac{1}{2(M+N)}\,\delta_{\frac{p}{M+N}+\frac{q}{M+N},1}\Big)\,\alpha_{-\frac{p}{M+N}}\alpha_{-\frac{q}{M+N}} = 0 \qquad (5.24)$$

Performing the commutators using (2.17) and (5.22) yields the relation

$$0 = \Big(\frac{p}{M+N}-1\Big)\gamma_{\frac{p}{M+N}-1,\frac{q}{M+N}} + \Big(\frac{q}{M+N}-1\Big)\gamma_{\frac{p}{M+N},\frac{q}{M+N}-1} - e^{-w}\partial\gamma_{\frac{p}{M+N},\frac{q}{M+N}} - \frac{1}{2M}\sum_{r=1}^{M-1} f^{(1)}_{\frac{r}{M},\frac{p}{M+N}}f^{(1)}_{1-\frac{r}{M},\frac{q}{M+N}} - \frac{1}{2N}\sum_{r=1}^{N-1} f^{(2)}_{\frac{r}{N},\frac{p}{M+N}}f^{(2)}_{1-\frac{r}{N},\frac{q}{M+N}} + \frac{1}{2(M+N)}\,\delta_{\frac{p}{M+N}+\frac{q}{M+N},1} \qquad (5.25)$$

Using the $w$ dependence of $\gamma$ (5.10), we find

$$\gamma_{\frac{p}{M+N},\frac{q}{M+N}} = e^{w}\frac{M+N}{p+q}\Big(\Big(\frac{p}{M+N}-1\Big)\gamma_{\frac{p}{M+N}-1,\frac{q}{M+N}} + \Big(\frac{q}{M+N}-1\Big)\gamma_{\frac{p}{M+N},\frac{q}{M+N}-1} - \frac{1}{2M}\sum_{r=1}^{M-1} f^{(1)}_{\frac{r}{M},\frac{p}{M+N}}f^{(1)}_{1-\frac{r}{M},\frac{q}{M+N}} - \frac{1}{2N}\sum_{r=1}^{N-1} f^{(2)}_{\frac{r}{N},\frac{p}{M+N}}f^{(2)}_{1-\frac{r}{N},\frac{q}{M+N}} + \frac{1}{2(M+N)}\,\delta_{\frac{p}{M+N}+\frac{q}{M+N},1}\Big) \qquad (5.26)$$

5.3 Relations using $L_1$

In this section, we will derive relations using the conformal generator $L_1$. We will first consider the pair creation $\gamma$ and then the propagation $f$.

5.3.1 Pair creation $\gamma$

We start with

$$0 = \sigma(w)\,L_1\,|0_M\rangle^{(1)}|0_N\rangle^{(2)} \qquad (5.27)$$

where we have used the fact that $L_1$ acting on the vacuum gives zero.
Commuting $L_1$ through the twist and using (5.4) for $n = 1$, we have

$$0 = \big(L_1\sigma(w) - [L_1,\sigma(w)]\big)\,|0_M\rangle^{(1)}|0_N\rangle^{(2)} = \big(L_1 - e^{w}(\partial + h)\big)\,\sigma(w)\,|0_M\rangle^{(1)}|0_N\rangle^{(2)} \qquad (5.28)$$

Inserting our ansatz (3.8) gives

$$0 = \big(L_1 - e^{w}(\partial + h)\big)\, A\exp\Big(\sum_{p,q>0}\gamma_{\frac{p}{M+N},\frac{q}{M+N}}\,\alpha_{-\frac{p}{M+N}}\alpha_{-\frac{q}{M+N}}\Big)|0_{M+N}\rangle \qquad (5.29)$$

Looking at terms without any modes, we obtain the relation

$$0 = A\sum_{p=1}^{M+N-1}\gamma_{\frac{p}{M+N},1-\frac{p}{M+N}}\,\big[[L_1,\alpha_{-\frac{p}{M+N}}],\alpha_{-1+\frac{p}{M+N}}\big] - e^{w}(\partial + h)A \qquad (5.30)$$

which gives

$$A\sum_{p=1}^{M+N-1}\gamma_{\frac{p}{M+N},1-\frac{p}{M+N}}\, p\Big(1-\frac{p}{M+N}\Big) = e^{w}(\partial + h)A \qquad (5.31)$$

Inserting (5.7) to eliminate $A$ gives

$$\sum_{p=1}^{M+N-1}\gamma_{\frac{p}{M+N},1-\frac{p}{M+N}}\, p\Big(1-\frac{p}{M+N}\Big) = e^{w}\big(h + h_{M+N} - h_M - h_N\big) \qquad (5.32)$$

Next, looking at terms with two modes $\alpha_{-\frac{p}{M+N}}\alpha_{-\frac{q}{M+N}}$ in (5.29), we obtain

$$0 = A\gamma_{1+\frac{p}{M+N},\frac{q}{M+N}}\,[L_1,\alpha_{-1-\frac{p}{M+N}}]\,\alpha_{-\frac{q}{M+N}} + A\gamma_{\frac{p}{M+N},1+\frac{q}{M+N}}\,\alpha_{-\frac{p}{M+N}}\,[L_1,\alpha_{-1-\frac{q}{M+N}}] + A\sum_{p'=1}^{M+N-1}\gamma_{\frac{p}{M+N},\frac{q}{M+N}}\gamma_{\frac{p'}{M+N},1-\frac{p'}{M+N}}\big[[L_1,\alpha_{-\frac{p'}{M+N}}],\alpha_{-1+\frac{p'}{M+N}}\big]\,\alpha_{-\frac{p}{M+N}}\alpha_{-\frac{q}{M+N}} + 2A\sum_{p'=1}^{M+N-1}\gamma_{\frac{p}{M+N},\frac{p'}{M+N}}\gamma_{\frac{q}{M+N},1-\frac{p'}{M+N}}\big[[L_1,\alpha_{-\frac{p'}{M+N}}],\alpha_{-1+\frac{p'}{M+N}}\big]\,\alpha_{-\frac{p}{M+N}}\alpha_{-\frac{q}{M+N}} - e^{w}(\partial + h)A\,\gamma_{\frac{p}{M+N},\frac{q}{M+N}}\,\alpha_{-\frac{p}{M+N}}\alpha_{-\frac{q}{M+N}} \qquad (5.33)$$

Performing the commutators (2.17) yields

$$0 = A\gamma_{1+\frac{p}{M+N},\frac{q}{M+N}}\Big(1+\frac{p}{M+N}\Big) + A\gamma_{\frac{p}{M+N},1+\frac{q}{M+N}}\Big(1+\frac{q}{M+N}\Big) + A\sum_{p'=1}^{M+N-1}\gamma_{\frac{p}{M+N},\frac{q}{M+N}}\gamma_{\frac{p'}{M+N},1-\frac{p'}{M+N}}\,p'\Big(1-\frac{p'}{M+N}\Big) + 2A\sum_{p'=1}^{M+N-1}\gamma_{\frac{p}{M+N},\frac{p'}{M+N}}\gamma_{\frac{q}{M+N},1-\frac{p'}{M+N}}\,p'\Big(1-\frac{p'}{M+N}\Big) - e^{w}(\partial + h)A\,\gamma_{\frac{p}{M+N},\frac{q}{M+N}} \qquad (5.34)$$

Using (5.31) to eliminate $A$, we get the relation

$$e^{w}\partial\gamma_{\frac{p}{M+N},\frac{q}{M+N}} = \gamma_{1+\frac{p}{M+N},\frac{q}{M+N}}\Big(1+\frac{p}{M+N}\Big) + \gamma_{\frac{p}{M+N},1+\frac{q}{M+N}}\Big(1+\frac{q}{M+N}\Big) + 2\sum_{p'=1}^{M+N-1}\gamma_{\frac{p}{M+N},\frac{p'}{M+N}}\gamma_{\frac{q}{M+N},1-\frac{p'}{M+N}}\,p'\Big(1-\frac{p'}{M+N}\Big) \qquad (5.35)$$

Inserting the $w$ dependence of $\gamma$ given in (5.10), we obtain the relation

$$\gamma_{\frac{p}{M+N},\frac{q}{M+N}} = e^{-w}\frac{M+N}{p+q}\Big(\gamma_{1+\frac{p}{M+N},\frac{q}{M+N}}\Big(1+\frac{p}{M+N}\Big) + \gamma_{\frac{p}{M+N},1+\frac{q}{M+N}}\Big(1+\frac{q}{M+N}\Big) + 2\sum_{p'=1}^{M+N-1}\gamma_{\frac{p}{M+N},\frac{p'}{M+N}}\gamma_{\frac{q}{M+N},1-\frac{p'}{M+N}}\,p'\Big(1-\frac{p'}{M+N}\Big)\Big) \qquad (5.36)$$

5.3.2 Propagation $f$

Now let us derive relations for the propagation $f$ using $L_1$. We first compute the relations for $f^{(1)}$ and then note that the relations for $f^{(2)}$ can be obtained in a similar way.
We start with the following equation

$$0 = \sigma(w)\,L_1\,\alpha_{-\frac{r}{M}}|0_M\rangle^{(1)}|0_N\rangle^{(2)}, \qquad 1 \le r \le M-1$$
$$= \big(L_1 - e^{w}(\partial + h)\big)\,\sigma(w)\,\alpha_{-\frac{r}{M}}|0_M\rangle^{(1)}|0_N\rangle^{(2)}$$
$$= \big(L_1 - e^{w}(\partial + h)\big)\sum_{p'>0} f^{(1)}_{\frac{r}{M},\frac{p'}{M+N}}\,\alpha_{-\frac{p'}{M+N}}\, A\exp\Big(\sum_{p,q>0}\gamma_{\frac{p}{M+N},\frac{q}{M+N}}\,\alpha_{-\frac{p}{M+N}}\alpha_{-\frac{q}{M+N}}\Big)|0_{M+N}\rangle \qquad (5.37)$$

Looking at terms with a single mode $\alpha_{-\frac{p}{M+N}}$, we have

$$0 = A\, f^{(1)}_{\frac{r}{M},1+\frac{p}{M+N}}\,[L_1,\alpha_{-1-\frac{p}{M+N}}] + A\sum_{q=1}^{M+N-1}\Big(2 f^{(1)}_{\frac{r}{M},\frac{q}{M+N}}\gamma_{1-\frac{q}{M+N},\frac{p}{M+N}} + f^{(1)}_{\frac{r}{M},\frac{p}{M+N}}\gamma_{\frac{q}{M+N},1-\frac{q}{M+N}}\Big)\big[[L_1,\alpha_{-\frac{q}{M+N}}],\alpha_{-1+\frac{q}{M+N}}\big]\,\alpha_{-\frac{p}{M+N}} - e^{w}(\partial + h)A\, f^{(1)}_{\frac{r}{M},\frac{p}{M+N}}\,\alpha_{-\frac{p}{M+N}} \qquad (5.38)$$

Performing the commutators (2.17) gives

$$0 = A\, f^{(1)}_{\frac{r}{M},1+\frac{p}{M+N}}\Big(1+\frac{p}{M+N}\Big) + A\sum_{q=1}^{M+N-1}\Big(2 f^{(1)}_{\frac{r}{M},\frac{q}{M+N}}\gamma_{1-\frac{q}{M+N},\frac{p}{M+N}} + f^{(1)}_{\frac{r}{M},\frac{p}{M+N}}\gamma_{\frac{q}{M+N},1-\frac{q}{M+N}}\Big)\, q\Big(1-\frac{q}{M+N}\Big) - e^{w}(\partial + h)A\, f^{(1)}_{\frac{r}{M},\frac{p}{M+N}} \qquad (5.39)$$

Inserting the expression (5.31) to eliminate $A$ gives

$$f^{(1)}_{\frac{r}{M},1+\frac{p}{M+N}}\Big(1+\frac{p}{M+N}\Big) = e^{w}\partial f^{(1)}_{\frac{r}{M},\frac{p}{M+N}} - 2\sum_{q=1}^{M+N-1} f^{(1)}_{\frac{r}{M},\frac{q}{M+N}}\gamma_{1-\frac{q}{M+N},\frac{p}{M+N}}\, q\Big(1-\frac{q}{M+N}\Big) \qquad (5.40)$$

Using the relation (5.16), which provides the $w$ dependence of $f$, we find the relation

$$f^{(1)}_{\frac{r}{M},1+\frac{p}{M+N}}\Big(1+\frac{p}{M+N}\Big) = e^{w}\Big(\frac{p}{M+N}-\frac{r}{M}\Big) f^{(1)}_{\frac{r}{M},\frac{p}{M+N}} - 2\sum_{q=1}^{M+N-1} f^{(1)}_{\frac{r}{M},\frac{q}{M+N}}\gamma_{1-\frac{q}{M+N},\frac{p}{M+N}}\, q\Big(1-\frac{q}{M+N}\Big) \qquad (5.41)$$

Similarly, $f^{(2)}$ can be obtained by taking $M \leftrightarrow N$:

$$f^{(2)}_{\frac{r}{N},1+\frac{p}{M+N}}\Big(1+\frac{p}{M+N}\Big) = e^{w}\Big(\frac{p}{M+N}-\frac{r}{N}\Big) f^{(2)}_{\frac{r}{N},\frac{p}{M+N}} - 2\sum_{q=1}^{M+N-1} f^{(2)}_{\frac{r}{N},\frac{q}{M+N}}\gamma_{1-\frac{q}{M+N},\frac{p}{M+N}}\, q\Big(1-\frac{q}{M+N}\Big) \qquad (5.42)$$

5.4 Relations using $L_{n>0}$

Here we use the generators $L_n$, for $n > 0$, to obtain additional relations for $\gamma$. We start with the following relation

$$0 = \sigma(w)\,L_n\,|0_M\rangle^{(1)}|0_N\rangle^{(2)}, \qquad n > 0 \qquad (5.43)$$

Acting with $L_n$ to the left and using (5.4), we find

$$0 = \big(L_n\,\sigma(w) - [L_n,\sigma(w)]\big)\,|0_M\rangle^{(1)}|0_N\rangle^{(2)} = \big(L_n - e^{nw}(\partial + nh)\big)\, A\exp\Big(\sum_{p,q>0}\gamma_{\frac{p}{M+N},\frac{q}{M+N}}\,\alpha_{-\frac{p}{M+N}}\alpha_{-\frac{q}{M+N}}\Big)|0_{M+N}\rangle \qquad (5.44)$$

Looking at terms without any modes, we find the relation

$$0 = A\sum_{p=1}^{n(M+N)-1}\gamma_{\frac{p}{M+N},n-\frac{p}{M+N}}\,\big[[L_n,\alpha_{-\frac{p}{M+N}}],\alpha_{-n+\frac{p}{M+N}}\big] - e^{nw}(\partial + nh)A = A\sum_{p=1}^{n(M+N)-1}\gamma_{\frac{p}{M+N},n-\frac{p}{M+N}}\, p\Big(n-\frac{p}{M+N}\Big) - e^{nw}\big(nh + h_{M+N} - h_M - h_N\big)A \qquad (5.45)$$

which gives the relation

$$\sum_{p=1}^{n(M+N)-1}\gamma_{\frac{p}{M+N},n-\frac{p}{M+N}}\, p\Big(n-\frac{p}{M+N}\Big) = e^{nw}\big(nh + h_{M+N} - h_M - h_N\big) \qquad (5.46)$$

We note that for $n = 1$, relation (5.46) reduces to (5.32). So far we have used the Virasoro generators $L_{-1}$, $L_0$, $L_1$, $L_{n>0}$ to find a set of relations that need to be solved. We summarize these relations in the next section.

6 Summary of Results

In this section, we collect all the relevant equations and classify them into two categories: recursion relations and constraints.
First, notice that the $w$ dependence is given by the relations derived from the generator $L_0$, namely (5.10), (5.16), and (5.17):

$$\gamma_{\frac{p}{M+N},\frac{q}{M+N}} \propto e^{\frac{p+q}{M+N}w}, \qquad f^{(1)}_{\frac{r}{M},\frac{p}{M+N}} \propto e^{(\frac{p}{M+N}-\frac{r}{M})w}, \qquad f^{(2)}_{\frac{r}{N},\frac{p}{M+N}} \propto e^{(\frac{p}{M+N}-\frac{r}{N})w} \qquad (6.1)$$

6.1 Recursion relations

The following relations are categorized as recursion relations: the relation derived from the generator $L_{-1}$ from terms with two modes (5.26),

$$\gamma_{\frac{p}{M+N},\frac{q}{M+N}} = e^{w}\frac{M+N}{p+q}\Big(\big(\tfrac{p}{M+N}-1\big)\gamma_{\frac{p}{M+N}-1,\frac{q}{M+N}} + \big(\tfrac{q}{M+N}-1\big)\gamma_{\frac{p}{M+N},\frac{q}{M+N}-1} - \frac{1}{2M}\sum_{r=1}^{M-1} f^{(1)}_{\frac{r}{M},\frac{p}{M+N}}f^{(1)}_{1-\frac{r}{M},\frac{q}{M+N}} - \frac{1}{2N}\sum_{r=1}^{N-1} f^{(2)}_{\frac{r}{N},\frac{p}{M+N}}f^{(2)}_{1-\frac{r}{N},\frac{q}{M+N}} + \frac{1}{2(M+N)}\,\delta_{\frac{p}{M+N}+\frac{q}{M+N},1}\Big) \qquad (6.2)$$

and the relations derived from the generator $L_1$ for the propagation, $f^{(1)}$ in (5.41) and $f^{(2)}$ in (5.42):

$$f^{(1)}_{\frac{r}{M},1+\frac{p}{M+N}}\big(1+\tfrac{p}{M+N}\big) = e^{w}\big(\tfrac{p}{M+N}-\tfrac{r}{M}\big)\, f^{(1)}_{\frac{r}{M},\frac{p}{M+N}} - 2\sum_{q=1}^{M+N-1} f^{(1)}_{\frac{r}{M},\frac{q}{M+N}}\gamma_{1-\frac{q}{M+N},\frac{p}{M+N}}\, q\big(1-\tfrac{q}{M+N}\big) \qquad (6.3)$$

and

$$f^{(2)}_{\frac{r}{N},1+\frac{p}{M+N}}\big(1+\tfrac{p}{M+N}\big) = e^{w}\big(\tfrac{p}{M+N}-\tfrac{r}{N}\big)\, f^{(2)}_{\frac{r}{N},\frac{p}{M+N}} - 2\sum_{q=1}^{M+N-1} f^{(2)}_{\frac{r}{N},\frac{q}{M+N}}\gamma_{1-\frac{q}{M+N},\frac{p}{M+N}}\, q\big(1-\tfrac{q}{M+N}\big) \qquad (6.4)$$

Given all the values of $f^{(1)}_{\frac{r}{M}<1,\frac{p}{M+N}<1}$ and $f^{(2)}_{\frac{r}{N}<1,\frac{p}{M+N}<1}$, we can determine the values of $\gamma_{\frac{p}{M+N},\frac{q}{M+N}}$, $f^{(1)}_{\frac{r}{M}<1,\frac{p}{M+N}}$, and $f^{(2)}_{\frac{r}{N}<1,\frac{p}{M+N}}$ by the following steps (a schematic implementation of this alternating schedule is sketched at the end of this section):

1) Use relation (6.2) to determine all $\gamma_{\frac{p}{M+N},\frac{q}{M+N}}$ where $\frac{p}{M+N},\frac{q}{M+N} < 1$.
2) Use relations (6.3) and (6.4) to determine all $f^{(1)}_{\frac{r}{M}<1,\,1<\frac{p}{M+N}<2}$ and $f^{(2)}_{\frac{r}{N}<1,\,1<\frac{p}{M+N}<2}$.
3) Use relation (6.2) to determine all $\gamma_{\frac{p}{M+N},\frac{q}{M+N}}$ where $\frac{p}{M+N},\frac{q}{M+N} < 2$.
4) Use relations (6.3) and (6.4) to determine all $f^{(1)}_{\frac{r}{M}<1,\,2<\frac{p}{M+N}<3}$ and $f^{(2)}_{\frac{r}{N}<1,\,2<\frac{p}{M+N}<3}$.
...

Therefore, using these recursion relations, the infinite number of coefficients for the pair creation can be determined by a finite number of inputs,

$$f^{(1)}_{\frac{r}{M}<1,\frac{p}{M+N}<1},\ f^{(2)}_{\frac{r}{N}<1,\frac{p}{M+N}<1} \;\Longrightarrow\; \gamma_{\frac{p}{M+N},\frac{q}{M+N}} \qquad (6.5)$$

with the number given by

$$(M-1)(M+N-1) + (N-1)(M+N-1) = (M+N-2)(M+N-1) \qquad (6.6)$$

Next, we write down constraint equations that can be used to reduce the number of inputs.

6.2 Constraints

There are also constraints that help us to reduce the number of required inputs or even determine them. The first type of constraint is given by the relations derived from the generator $L_{n>0}$ for $\gamma$ (5.46), which come from terms without any modes:

$$\sum_{p=1}^{n(M+N)-1}\gamma_{\frac{p}{M+N},n-\frac{p}{M+N}}\, p\big(n-\tfrac{p}{M+N}\big) = e^{nw}\big(nh + h_{M+N} - h_M - h_N\big) \qquad (6.7)$$

The second type of constraint is given by (5.36), which comes from the generator $L_1$ for $\gamma$ from terms with two modes:

$$\gamma_{\frac{p}{M+N},\frac{q}{M+N}} = e^{-w}\frac{M+N}{p+q}\Big(\gamma_{1+\frac{p}{M+N},\frac{q}{M+N}}\big(1+\tfrac{p}{M+N}\big) + \gamma_{\frac{p}{M+N},1+\frac{q}{M+N}}\big(1+\tfrac{q}{M+N}\big) + 2\sum_{p'=1}^{M+N-1}\gamma_{\frac{p}{M+N},\frac{p'}{M+N}}\gamma_{\frac{q}{M+N},1-\frac{p'}{M+N}}\,p'\big(1-\tfrac{p'}{M+N}\big)\Big) \qquad (6.8)$$

Unlike the recursion relations, which determine higher energy $\gamma$ and $f$ in terms of lower energy quantities, these constraints provide relations between $\gamma$ at the same energy, expressed in terms of $\gamma$ at lower energy.
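To make the bookkeeping of steps 1)-4) concrete, the following Python sketch (our own illustration, not code from the paper) runs the alternating schedule at $w = 0$, given the finite inputs of (6.5); the $w$ dependence can be restored afterwards via (6.1). The function names are ours, and defaulting boundary values that fall outside the input set (e.g. integer final energies) to zero is a simplifying assumption made purely to keep the sketch short.

```python
def solve_recursion(M, N, f1_in, f2_in, n_levels=3):
    """Alternating schedule of steps 1)-4) at w = 0.

    f1_in[(r, p)] holds the finite inputs f^(1)_{r/M, p/(M+N)} for r < M, p < M+N,
    and similarly f2_in; gamma[(p, q)] stands for gamma_{p/(M+N), q/(M+N)}.
    """
    K = M + N
    gamma, f1, f2 = {}, dict(f1_in), dict(f2_in)

    def gam(p, q):                                   # relation (6.2)
        val = 0.0
        if p > K:                                    # gamma with shifted-down index
            val += (p / K - 1) * gamma[(p - K, q)]
        if q > K:
            val += (q / K - 1) * gamma[(p, q - K)]
        val -= sum(f1.get((r, p), 0) * f1.get((M - r, q), 0) for r in range(1, M)) / (2 * M)
        val -= sum(f2.get((r, p), 0) * f2.get((N - r, q), 0) for r in range(1, N)) / (2 * N)
        if p + q == K:                               # the delta term
            val += 1 / (2 * K)
        return val * K / (p + q)

    def prop(f, r, p, W):                            # relations (6.3)/(6.4); W = M or N
        val = (p / K - r / W) * f[(r, p)]
        val -= 2 * sum(f[(r, q)] * gamma[(K - q, p)] * q * (1 - q / K) for q in range(1, K))
        return val / (1 + p / K)

    for lev in range(1, n_levels + 1):
        for p in range(1, lev * K):                  # steps 1), 3), ...: new gammas
            for q in range(1, lev * K):
                if (p, q) not in gamma:
                    gamma[(p, q)] = gam(p, q)
        for r in range(1, M):                        # steps 2), 4), ...: new f^(1)
            for p in range((lev - 1) * K + 1, lev * K):
                f1[(r, p + K)] = prop(f1, r, p, M)
        for r in range(1, N):                        # ... and new f^(2)
            for p in range((lev - 1) * K + 1, lev * K):
                f2[(r, p + K)] = prop(f2, r, p, N)
    return gamma, f1, f2
```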
For example, (6.8) gives constraints for $\gamma_{1+\frac{p}{M+N},\frac{q}{M+N}}$ and $\gamma_{\frac{p}{M+N},1+\frac{q}{M+N}}$ in terms of the lower energy $\gamma_{\frac{p}{M+N},\frac{q}{M+N}}$ and $\sum_{p'=1}^{M+N-1}\gamma_{\frac{p}{M+N},\frac{p'}{M+N}}\gamma_{\frac{q}{M+N},1-\frac{p'}{M+N}}$. Notice that as we include higher energy constraints (e.g., larger $n$ in (6.7) and larger $p$ and $q$ in (6.8)), the number of constraints increases. At a critical energy bound, naively, it would seem that we would have more constraints than inputs (6.5), providing the possibility of determining all required inputs without having to insert any at the beginning. It is clear that not all constraints are independent. As we will show with explicit examples, it seems that the constraints coming from (6.8) are all trivial and thus do not help in solving for the relevant coefficients. They only serve as an additional check that the correct equations are being used. It appears that the constraints (6.7) are nontrivial. Using the constraints, the number of inputs (6.6) can be reduced to

$$N_{\text{inputs}} = (M+N-2)(M+N-1) - N_{\text{constr}} \qquad (6.9)$$

In section 8, we will provide explicit examples for $(M,N) = (2,1), (3,1), (2,2)$. We will show that indeed some of the constraints are nontrivial and effectively reduce the number of required inputs.

7 Results from Covering Map

Here we record relevant results for pair creation $\gamma$ and propagation $f$, which were obtained using the covering map method [56-60]. In [59], the coefficients $\gamma$ and $f$ were computed in the D1D5 CFT with four free bosons and four free fermions, considering the same twist configuration as studied in this paper: a single $\sigma(w)$ twists together an initial copy of winding $M$ and an initial copy of winding $N$ into a copy of winding $M+N$. In our case, although we only consider a single real boson, we can use the results from [59] because all the bosons are free in both cases. However, an appropriate rescaling factor is needed since the bosons used in [59] are complex bosons that consist of two real bosons each. Therefore, we have

$$\gamma_{\frac{p}{M+N},\frac{q}{M+N}} = \frac{1}{2}\gamma^{D1D5}_{\frac{p}{M+N},\frac{q}{M+N}} = -\frac{1}{2}\frac{a^{p+q}}{\pi^2}\sin\Big[\frac{\pi N p}{M+N}\Big]\sin\Big[\frac{\pi N q}{M+N}\Big]\frac{MN}{(M+N)^2}\frac{1}{p+q}\frac{\Gamma[\frac{Mp}{M+N}]\Gamma[\frac{Np}{M+N}]}{\Gamma[p]}\frac{\Gamma[\frac{Mq}{M+N}]\Gamma[\frac{Nq}{M+N}]}{\Gamma[q]} \qquad (7.1)$$

As for the $f^{(i)}$, they remain unchanged since they compute the transition from one boson in the initial state to one boson in the final state. The rescaling factors cancel out and there is no overall rescaling factor left. The expressions are given by

$$f^{(1)}_{\frac{r}{M},\frac{p}{M+N}} = f^{(1),D1D5}_{\frac{r}{M},\frac{p}{M+N}} = (-1)^r\,\frac{\sin\big(\frac{\pi Mp}{M+N}\big)}{\pi(M+N)}\,\frac{(-a)^{p-\frac{(M+N)r}{M}}}{\frac{p}{M+N}-\frac{r}{M}}\,\frac{\Gamma[\frac{(M+N)r}{M}]}{\Gamma[r]\,\Gamma[\frac{Nr}{M}]}\,\frac{\Gamma[\frac{Mp}{M+N}]\,\Gamma[\frac{Np}{M+N}]}{\Gamma[p]}, \qquad \frac{r}{M}\neq\frac{p}{M+N}$$
$$f^{(1)}_{\frac{r}{M},\frac{p}{M+N}} = \frac{M}{M+N}, \qquad \frac{r}{M} = \frac{p}{M+N} \qquad (7.2)$$

and

$$f^{(2)}_{\frac{r}{N},\frac{p}{M+N}} = f^{(2),D1D5}_{\frac{r}{N},\frac{p}{M+N}} = (-1)^r\,\frac{\sin\big(\frac{\pi Np}{M+N}\big)}{\pi(M+N)}\,\frac{(-a)^{p-\frac{(M+N)r}{N}}}{\frac{p}{M+N}-\frac{r}{N}}\,\frac{\Gamma[\frac{(M+N)r}{N}]}{\Gamma[r]\,\Gamma[\frac{Mr}{N}]}\,\frac{\Gamma[\frac{Mp}{M+N}]\,\Gamma[\frac{Np}{M+N}]}{\Gamma[p]}, \qquad \frac{r}{N}\neq\frac{p}{M+N}$$
$$f^{(2)}_{\frac{r}{N},\frac{p}{M+N}} = \frac{N}{M+N}, \qquad \frac{r}{N} = \frac{p}{M+N} \qquad (7.3)$$

where

$$a = e^{-i\pi\frac{N}{M+N}}\Big(\frac{e^{w}}{M^M N^N}\Big)^{\frac{1}{M+N}}(M+N) \qquad (7.4)$$

Here, $w$ is the location of $\sigma(w)$. In the next section, we provide several examples where we solve the recursion relations and constraints derived in the previous sections for specific values of the initial windings $M$ and $N$.
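For numerical reference, (7.1), (7.2) and (7.4) can be transcribed directly into Python as below (an illustrative script of ours; `f^(2)` follows by exchanging M and N). Complex powers use Python's principal branch, which need not coincide with the branch conventions of [59], so phases should be checked against the section 8 examples before relying on them.

```python
import math, cmath

def _a(M, N, w=0.0):
    """The prefactor a of (7.4)."""
    K = M + N
    return cmath.exp(-1j * math.pi * N / K) * (math.exp(w) / (M**M * N**N))**(1 / K) * K

def gamma_cov(p, q, M, N, w=0.0):
    """Pair creation gamma_{p/(M+N), q/(M+N)} from the covering-map result (7.1)."""
    K, G, a = M + N, math.gamma, _a(M, N, w)
    return (-0.5 * a**(p + q) / math.pi**2
            * math.sin(math.pi * N * p / K) * math.sin(math.pi * N * q / K)
            * M * N / K**2 / (p + q)
            * G(M * p / K) * G(N * p / K) / G(p)
            * G(M * q / K) * G(N * q / K) / G(q))

def f1_cov(r, p, M, N, w=0.0):
    """Propagation f^(1)_{r/M, p/(M+N)} from (7.2)."""
    K, G, a = M + N, math.gamma, _a(M, N, w)
    if r * K == p * M:                      # equal initial and final energy
        return M / K
    return ((-1)**r * math.sin(math.pi * M * p / K) / (math.pi * K)
            * (-a)**(p - K * r / M) / (p / K - r / M)
            * G(K * r / M) / (G(r) * G(N * r / M))
            * G(M * p / K) * G(N * p / K) / G(p))
```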
We will show that the resulting pair creation $\gamma$ and propagation $f$ are the same as the results in this section.

8 Examples

In this section, we provide examples of solving the recursion relations and constraints for $(M,N) = (2,1), (3,1), (2,2)$. Notice that the constraints (6.7) and (6.8) include only the pair creation $\gamma$. However, it remains unclear which constraints are trivial and which are nontrivial. To explore this, we consider some explicit examples and restrict ourselves to constraints that contain only the following $\gamma$:

$$\gamma_{\frac{p}{M+N}<2,\,\frac{q}{M+N}<2}, \qquad \frac{p}{M+N}+\frac{q}{M+N} \le 2 \qquad (8.1)$$

Therefore we only need to consider the constraint (6.7) for $n = 1, 2$, as well as the lowest constraints in (6.8) with $\frac{p}{M+N}+\frac{q}{M+N} \le 1$. To determine these $\gamma$ from the recursion relations, we will also need to find

$$f^{(i)}_{\frac{r}{M}<1,\,\frac{p}{M+N}<2}, \qquad i = 1, 2 \qquad (8.2)$$

Notice that since we are considering only some of the lowest constraints, not all powers of the constraints have been utilized. Nevertheless, by explicitly solving these examples, we can gain some insight into which constraints are nontrivial, leading us to a better understanding of the underlying structures. In the case of $(M,N) = (2,1)$, we find that there is only one nontrivial constraint, $N_{\text{constr}} = 1$, coming from the first type (6.7) with $n = 1$. For $(M,N) = (3,1), (2,2)$, there are two nontrivial constraints, $N_{\text{constr}} = 2$, coming from the first type (6.7) with $n = 1, 2$.

8.1 M = 2, N = 1

In this case, the $w$ dependence is given by

$$\gamma_{\frac{p}{3},\frac{q}{3}} \propto e^{\frac{p+q}{3}w}, \qquad f^{(1)}_{\frac{r}{2},\frac{p}{3}} \propto e^{(\frac{p}{3}-\frac{r}{2})w}, \qquad f^{(2)}_{r,\frac{p}{3}} \propto e^{(\frac{p}{3}-r)w} \qquad (8.3)$$

The recursion relations (6.2) and (6.3) are given by

$$\gamma_{\frac{p}{3},\frac{q}{3}} = e^{w}\frac{3}{p+q}\Big(\big(\tfrac{p}{3}-1\big)\gamma_{\frac{p}{3}-1,\frac{q}{3}} + \big(\tfrac{q}{3}-1\big)\gamma_{\frac{p}{3},\frac{q}{3}-1} - \tfrac{1}{4}f^{(1)}_{\frac{1}{2},\frac{p}{3}}f^{(1)}_{\frac{1}{2},\frac{q}{3}} + \tfrac{1}{6}\delta_{\frac{p}{3}+\frac{q}{3},1}\Big) \qquad (8.4)$$

and

$$f^{(1)}_{\frac{r}{2},1+\frac{p}{3}}\big(1+\tfrac{p}{3}\big) = e^{w}\big(\tfrac{p}{3}-\tfrac{r}{2}\big)f^{(1)}_{\frac{r}{2},\frac{p}{3}} - \tfrac{4}{3}f^{(1)}_{\frac{r}{2},\frac{1}{3}}\gamma_{\frac{2}{3},\frac{p}{3}} - \tfrac{4}{3}f^{(1)}_{\frac{r}{2},\frac{2}{3}}\gamma_{\frac{1}{3},\frac{p}{3}} \qquad (8.5)$$

We note that for $N = 1$ we do not get any $f^{(2)}$ terms. The constraints (6.7) with $n = 1, 2$ and (6.8) become

$$\gamma_{\frac{1}{3},\frac{2}{3}} = \frac{1}{12}e^{w} \qquad (8.6)$$

$$\sum_{p=1}^{5}\gamma_{\frac{p}{3},2-\frac{p}{3}}\,p\big(2-\tfrac{p}{3}\big) = \frac{25}{144}e^{2w} \qquad (8.7)$$

$$\gamma_{\frac{p}{3},\frac{q}{3}} = e^{-w}\frac{3}{p+q}\Big(\gamma_{1+\frac{p}{3},\frac{q}{3}}\big(1+\tfrac{p}{3}\big) + \gamma_{\frac{p}{3},1+\frac{q}{3}}\big(1+\tfrac{q}{3}\big) + \tfrac{4}{3}\gamma_{\frac{p}{3},\frac{1}{3}}\gamma_{\frac{2}{3},\frac{q}{3}} + \tfrac{4}{3}\gamma_{\frac{p}{3},\frac{2}{3}}\gamma_{\frac{1}{3},\frac{q}{3}}\Big) \qquad (8.8)$$

Explicit relations

The set of coefficients we consider is

$$\gamma_{\frac{m}{3},\frac{n}{3}},\ \tfrac{m}{3}+\tfrac{n}{3}\le 2 \qquad\text{and}\qquad f^{(1)}_{\frac{r}{2},\frac{m}{3}},\ \tfrac{r}{2}<1,\ \tfrac{m}{3}<2 \qquad (8.9)$$

which are the following 10 variables:

$$\gamma_{\frac{1}{3},\frac{1}{3}},\ \gamma_{\frac{1}{3},\frac{2}{3}},\ \gamma_{\frac{2}{3},\frac{2}{3}},\ \gamma_{\frac{1}{3},\frac{4}{3}},\ \gamma_{\frac{1}{3},\frac{5}{3}},\ \gamma_{\frac{2}{3},\frac{4}{3}},\ f^{(1)}_{\frac{1}{2},\frac{1}{3}},\ f^{(1)}_{\frac{1}{2},\frac{2}{3}},\ f^{(1)}_{\frac{1}{2},\frac{4}{3}},\ f^{(1)}_{\frac{1}{2},\frac{5}{3}} \qquad (8.10)$$

At each step in section 6.1, we keep equations containing only the above variables. It is sufficient to proceed up to step 3) in the recursion relations. In the following analysis, we will set $w = 0$; we can reintroduce the $w$ dependence using (8.3).
The recursion relations are

step 1)
$$\gamma_{\frac{1}{3},\frac{1}{3}} = -\tfrac{3}{8}\big(f^{(1)}_{\frac{1}{2},\frac{1}{3}}\big)^2 \qquad (8.11)$$
$$\gamma_{\frac{1}{3},\frac{2}{3}} = \tfrac{1}{6} - \tfrac{1}{4}f^{(1)}_{\frac{1}{2},\frac{1}{3}}f^{(1)}_{\frac{1}{2},\frac{2}{3}} \qquad (8.12)$$
$$\gamma_{\frac{2}{3},\frac{2}{3}} = -\tfrac{3}{16}\big(f^{(1)}_{\frac{1}{2},\frac{2}{3}}\big)^2 \qquad (8.13)$$

step 2)
$$\tfrac{4}{3}f^{(1)}_{\frac{1}{2},\frac{4}{3}} = -\tfrac{1}{6}f^{(1)}_{\frac{1}{2},\frac{1}{3}} - \tfrac{4}{3}\gamma_{\frac{1}{3},\frac{1}{3}}f^{(1)}_{\frac{1}{2},\frac{2}{3}} - \tfrac{4}{3}\gamma_{\frac{1}{3},\frac{2}{3}}f^{(1)}_{\frac{1}{2},\frac{1}{3}} \qquad (8.14)$$
$$\tfrac{5}{3}f^{(1)}_{\frac{1}{2},\frac{5}{3}} = \tfrac{1}{6}f^{(1)}_{\frac{1}{2},\frac{2}{3}} - \tfrac{4}{3}\gamma_{\frac{1}{3},\frac{2}{3}}f^{(1)}_{\frac{1}{2},\frac{2}{3}} - \tfrac{4}{3}\gamma_{\frac{2}{3},\frac{2}{3}}f^{(1)}_{\frac{1}{2},\frac{1}{3}} \qquad (8.15)$$

step 3)
$$\gamma_{\frac{1}{3},\frac{4}{3}} = \tfrac{3}{5}\Big(\tfrac{1}{3}\gamma_{\frac{1}{3},\frac{1}{3}} - \tfrac{1}{4}f^{(1)}_{\frac{1}{2},\frac{1}{3}}f^{(1)}_{\frac{1}{2},\frac{4}{3}}\Big) \qquad (8.16)$$
$$\gamma_{\frac{1}{3},\frac{5}{3}} = \tfrac{1}{2}\Big(\tfrac{2}{3}\gamma_{\frac{1}{3},\frac{2}{3}} - \tfrac{1}{4}f^{(1)}_{\frac{1}{2},\frac{1}{3}}f^{(1)}_{\frac{1}{2},\frac{5}{3}}\Big) \qquad (8.17)$$
$$\gamma_{\frac{2}{3},\frac{4}{3}} = \tfrac{1}{2}\Big(\tfrac{1}{3}\gamma_{\frac{1}{3},\frac{2}{3}} - \tfrac{1}{4}f^{(1)}_{\frac{1}{2},\frac{2}{3}}f^{(1)}_{\frac{1}{2},\frac{4}{3}}\Big) \qquad (8.18)$$

The constraints are

$$\tfrac{1}{9} = \tfrac{4}{3}\gamma_{\frac{1}{3},\frac{2}{3}} \qquad (8.19)$$
$$\tfrac{25}{144} = \tfrac{16}{3}\gamma_{\frac{2}{3},\frac{4}{3}} + \tfrac{10}{3}\gamma_{\frac{1}{3},\frac{5}{3}} \qquad (8.20)$$
$$\gamma_{\frac{1}{3},\frac{1}{3}} = \tfrac{3}{2}\Big(\tfrac{8}{3}\gamma_{\frac{1}{3},\frac{1}{3}}\gamma_{\frac{1}{3},\frac{2}{3}} + \tfrac{8}{3}\gamma_{\frac{1}{3},\frac{4}{3}}\Big) \qquad (8.21)$$
$$\gamma_{\frac{1}{3},\frac{2}{3}} = \tfrac{4}{3}\gamma^2_{\frac{1}{3},\frac{2}{3}} + \tfrac{4}{3}\gamma_{\frac{1}{3},\frac{1}{3}}\gamma_{\frac{2}{3},\frac{2}{3}} + \tfrac{4}{3}\gamma_{\frac{2}{3},\frac{4}{3}} + \tfrac{5}{3}\gamma_{\frac{1}{3},\frac{5}{3}} \qquad (8.22)$$

By using the recursion relations, all 10 variables can be determined using just 2 variables, as indicated in (6.5):

$$f^{(1)}_{\frac{1}{2},\frac{1}{3}}, \qquad f^{(1)}_{\frac{1}{2},\frac{2}{3}} \qquad (8.23)$$

We have 4 constraints to determine them. To show this more clearly, we first make the following definitions:

$$f^{(1)}_{\frac{1}{2},\frac{1}{3}} \equiv x, \qquad f^{(1)}_{\frac{1}{2},\frac{2}{3}} \equiv y \qquad (8.24)$$

Step 1. Inserting these definitions into the equations of step 1, (8.11)-(8.13), yields

$$\gamma_{\frac{1}{3},\frac{1}{3}} = -\tfrac{3x^2}{8} \qquad (8.25)$$
$$\gamma_{\frac{1}{3},\frac{2}{3}} = \tfrac{1}{6} - \tfrac{xy}{4} \qquad (8.26)$$
$$\gamma_{\frac{2}{3},\frac{2}{3}} = -\tfrac{3y^2}{16} \qquad (8.27)$$

Step 2. Inserting the definitions (8.24) and the relations (8.25)-(8.27) into the step 2 relations (8.14), (8.15) gives

$$f^{(1)}_{\frac{1}{2},\frac{4}{3}} = \tfrac{1}{24}x(15xy-7) \qquad (8.28)$$
$$f^{(1)}_{\frac{1}{2},\frac{5}{3}} = \tfrac{1}{60}y(21xy-2) \qquad (8.29)$$

Step 3. Inserting the definitions (8.24) and the relations from step 1, (8.25)-(8.27), and step 2, (8.28), (8.29), into the step 3 relations (8.16)-(8.18) yields

$$\gamma_{\frac{1}{3},\frac{4}{3}} = -\tfrac{1}{32}x^2(3xy+1) \qquad (8.30)$$
$$\gamma_{\frac{1}{3},\frac{5}{3}} = \tfrac{1}{18} - \tfrac{1}{480}xy(21xy+38) \qquad (8.31)$$
$$\gamma_{\frac{2}{3},\frac{4}{3}} = \tfrac{1}{576}\big(16 - 3xy(15xy+1)\big) \qquad (8.32)$$

Constraints. Inserting the expressions (8.25)-(8.32) into the constraint equations (8.19)-(8.22), we find that they give one nontrivial relation, coming from (8.19):

$$1 = 3xy \qquad (8.33)$$

The other constraints turn out to be trivial.

Solutions. Inserting the constraint (8.33) into (8.25)-(8.32), we obtain all 10 variables in terms of a single variable, $x = f^{(1)}_{\frac{1}{2},\frac{1}{3}}$:

$$\gamma_{\frac{1}{3},\frac{2}{3}} = \tfrac{1}{12}, \qquad \gamma_{\frac{1}{3},\frac{5}{3}} = \tfrac{7}{288}, \qquad \gamma_{\frac{2}{3},\frac{4}{3}} = \tfrac{5}{288}$$
$$\gamma_{\frac{1}{3},\frac{1}{3}} = -\frac{3\big(f^{(1)}_{\frac{1}{2},\frac{1}{3}}\big)^2}{8}, \qquad \gamma_{\frac{1}{3},\frac{4}{3}} = -\frac{\big(f^{(1)}_{\frac{1}{2},\frac{1}{3}}\big)^2}{16}, \qquad \gamma_{\frac{2}{3},\frac{2}{3}} = -\frac{1}{48\big(f^{(1)}_{\frac{1}{2},\frac{1}{3}}\big)^2},$$
$$f^{(1)}_{\frac{1}{2},\frac{2}{3}} = \frac{1}{3 f^{(1)}_{\frac{1}{2},\frac{1}{3}}}, \qquad f^{(1)}_{\frac{1}{2},\frac{4}{3}} = -\frac{f^{(1)}_{\frac{1}{2},\frac{1}{3}}}{12}, \qquad f^{(1)}_{\frac{1}{2},\frac{5}{3}} = \frac{1}{36 f^{(1)}_{\frac{1}{2},\frac{1}{3}}} \qquad (8.34)$$

We notice, from the first row, that the $\gamma$'s whose indices obey the relation $\frac{p}{3}+\frac{q}{3}\in\mathbb{Z}$ are completely solved without the need for extra input. These values are in agreement with (7.1).
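The reduction just described is mechanical enough to verify by computer algebra. The following sympy snippet (our own check, not part of the paper) builds the variables from (8.25)-(8.32) and confirms the claims above: (8.21) and (8.22) simplify to identities, (8.19) yields the single relation $3xy = 1$, and (8.20) is then automatically satisfied.

```python
import sympy as sp

x, y = sp.symbols('x y')   # x = f^(1)_{1/2,1/3}, y = f^(1)_{1/2,2/3}

# The ten variables in terms of x, y via the recursion, eqs. (8.25)-(8.32), at w = 0
g13 = -sp.Rational(3, 8) * x**2                                # gamma_{1/3,1/3}
g23 = sp.Rational(1, 6) - x * y / 4                            # gamma_{1/3,2/3}
g33 = -sp.Rational(3, 16) * y**2                               # gamma_{2/3,2/3}
g14 = -x**2 * (3 * x * y + 1) / 32                             # gamma_{1/3,4/3}
g15 = sp.Rational(1, 18) - x * y * (21 * x * y + 38) / 480     # gamma_{1/3,5/3}
g24 = (16 - 3 * x * y * (15 * x * y + 1)) / 576                # gamma_{2/3,4/3}

# Constraints (8.19)-(8.22), each written as an expression that must vanish
c19 = sp.Rational(4, 3) * g23 - sp.Rational(1, 9)
c20 = sp.Rational(16, 3) * g24 + sp.Rational(10, 3) * g15 - sp.Rational(25, 144)
c21 = sp.Rational(3, 2) * (sp.Rational(8, 3) * g13 * g23 + sp.Rational(8, 3) * g14) - g13
c22 = (sp.Rational(4, 3) * g23**2 + sp.Rational(4, 3) * g13 * g33
       + sp.Rational(4, 3) * g24 + sp.Rational(5, 3) * g15 - g23)

print(sp.simplify(c21), sp.simplify(c22))   # 0 0 : identically trivial
print(sp.factor(c19))                        # proportional to (3*x*y - 1): eq. (8.33)
print(sp.simplify(c20.subs(y, 1 / (3 * x))))  # 0 : (8.20) adds nothing beyond (8.19)
```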
In order to solve for the remaining $\gamma$'s and $f$'s we need a single input which, using (7.2), we write as

$$f^{(1)}_{\frac{1}{2},\frac{1}{3}} = -\frac{(-4)^{2/3}}{\sqrt{3}} \qquad (8.35)$$

Inserting this into the second and third rows of (8.34), the remaining values of $\gamma$ and $f$ are given by

$$\gamma_{\frac{1}{3},\frac{1}{3}} = \frac{(-\tfrac{1}{2})^{1/3}}{4}, \qquad \gamma_{\frac{1}{3},\frac{4}{3}} = \frac{(-\tfrac{1}{2})^{1/3}}{24}, \qquad \gamma_{\frac{2}{3},\frac{2}{3}} = -\frac{(-\tfrac{1}{2})^{2/3}}{16}$$
$$f^{(1)}_{\frac{1}{2},\frac{2}{3}} = \frac{(-\tfrac{1}{2})^{1/3}}{\sqrt{3}}, \qquad f^{(1)}_{\frac{1}{2},\frac{4}{3}} = \frac{(-\tfrac{1}{2})^{2/3}}{6\sqrt{3}}, \qquad f^{(1)}_{\frac{1}{2},\frac{5}{3}} = \frac{(-\tfrac{1}{2})^{1/3}}{12\sqrt{3}} \qquad (8.36)$$

which also agree with (7.1) and (7.2). This shows that the equations derived from the conformal generators are indeed correct. We note that the number of inputs is 1, in agreement with equation (6.9) for $(M,N) = (2,1)$ and $N_{\text{constr}} = 1$. Next, we look at the case $(M,N) = (3,1)$ with low energy constraints.

8.2 M = 3, N = 1

In this case, the $w$ dependence is given by

$$\gamma_{\frac{p}{4},\frac{q}{4}} \propto e^{\frac{p+q}{4}w}, \qquad f^{(1)}_{\frac{r}{3},\frac{p}{4}} \propto e^{(\frac{p}{4}-\frac{r}{3})w}, \qquad f^{(2)}_{r,\frac{p}{4}} \propto e^{(\frac{p}{4}-r)w} \qquad (8.37)$$

The recursion relations (6.2), (6.3), and (6.4) are given by

$$\gamma_{\frac{p}{4},\frac{q}{4}} = e^{w}\frac{4}{p+q}\Big(\big(\tfrac{p}{4}-1\big)\gamma_{\frac{p}{4}-1,\frac{q}{4}} + \big(\tfrac{q}{4}-1\big)\gamma_{\frac{p}{4},\frac{q}{4}-1} - \tfrac{1}{6}\big(f^{(1)}_{\frac{1}{3},\frac{p}{4}}f^{(1)}_{\frac{2}{3},\frac{q}{4}} + f^{(1)}_{\frac{2}{3},\frac{p}{4}}f^{(1)}_{\frac{1}{3},\frac{q}{4}}\big) + \tfrac{1}{8}\delta_{\frac{p}{4}+\frac{q}{4},1}\Big) \qquad (8.38)$$

and

$$f^{(1)}_{\frac{r}{3},1+\frac{p}{4}}\big(1+\tfrac{p}{4}\big) = e^{w}\big(\tfrac{p}{4}-\tfrac{r}{3}\big)f^{(1)}_{\frac{r}{3},\frac{p}{4}} - \tfrac{3}{2}f^{(1)}_{\frac{r}{3},\frac{1}{4}}\gamma_{\frac{3}{4},\frac{p}{4}} - 2 f^{(1)}_{\frac{r}{3},\frac{1}{2}}\gamma_{\frac{1}{2},\frac{p}{4}} - \tfrac{3}{2}f^{(1)}_{\frac{r}{3},\frac{3}{4}}\gamma_{\frac{1}{4},\frac{p}{4}} \qquad (8.39)$$

We note that for $N = 1$ we do not get any $f^{(2)}$ terms. The constraints (6.7) with $n = 1, 2$ and (6.8) become

$$\tfrac{3}{2}\gamma_{\frac{1}{4},\frac{3}{4}} + \gamma_{\frac{1}{2},\frac{1}{2}} = \tfrac{31}{288}e^{w} \qquad (8.40)$$
$$\sum_{p=1}^{7}\gamma_{\frac{p}{4},2-\frac{p}{4}}\,p\big(2-\tfrac{p}{4}\big) = \tfrac{49}{288}e^{2w} \qquad (8.41)$$
$$\gamma_{\frac{p}{4},\frac{q}{4}} = e^{-w}\frac{4}{p+q}\Big(\gamma_{1+\frac{p}{4},\frac{q}{4}}\big(1+\tfrac{p}{4}\big) + \gamma_{\frac{p}{4},1+\frac{q}{4}}\big(1+\tfrac{q}{4}\big) + \tfrac{3}{2}\gamma_{\frac{p}{4},\frac{1}{4}}\gamma_{\frac{3}{4},\frac{q}{4}} + 2\gamma_{\frac{p}{4},\frac{1}{2}}\gamma_{\frac{1}{2},\frac{q}{4}} + \tfrac{3}{2}\gamma_{\frac{p}{4},\frac{3}{4}}\gamma_{\frac{1}{4},\frac{q}{4}}\Big) \qquad (8.42)$$

Explicit relations

The set of coefficients we consider is

$$\gamma_{\frac{m}{4},\frac{n}{4}},\ \tfrac{m}{4}+\tfrac{n}{4}\le 2 \qquad\text{and}\qquad f^{(1)}_{\frac{r}{3},\frac{m}{4}},\ \tfrac{r}{3}<1,\ \tfrac{m}{4}<2 \qquad (8.43)$$

which are the following 24 variables:

$$\gamma_{\frac{1}{4},\frac{1}{4}},\ \gamma_{\frac{1}{4},\frac{1}{2}},\ \gamma_{\frac{1}{4},\frac{3}{4}},\ \gamma_{\frac{1}{4},\frac{5}{4}},\ \gamma_{\frac{1}{4},\frac{3}{2}},\ \gamma_{\frac{1}{4},\frac{7}{4}},\ \gamma_{\frac{1}{2},\frac{1}{2}},\ \gamma_{\frac{1}{2},\frac{3}{4}},\ \gamma_{\frac{1}{2},\frac{5}{4}},\ \gamma_{\frac{1}{2},\frac{3}{2}},\ \gamma_{\frac{3}{4},\frac{3}{4}},\ \gamma_{\frac{3}{4},\frac{5}{4}},$$
$$f^{(1)}_{\frac{1}{3},\frac{1}{4}},\ f^{(1)}_{\frac{1}{3},\frac{1}{2}},\ f^{(1)}_{\frac{1}{3},\frac{3}{4}},\ f^{(1)}_{\frac{1}{3},\frac{5}{4}},\ f^{(1)}_{\frac{1}{3},\frac{3}{2}},\ f^{(1)}_{\frac{1}{3},\frac{7}{4}},\ f^{(1)}_{\frac{2}{3},\frac{1}{4}},\ f^{(1)}_{\frac{2}{3},\frac{1}{2}},\ f^{(1)}_{\frac{2}{3},\frac{3}{4}},\ f^{(1)}_{\frac{2}{3},\frac{5}{4}},\ f^{(1)}_{\frac{2}{3},\frac{3}{2}},\ f^{(1)}_{\frac{2}{3},\frac{7}{4}} \qquad (8.44)$$

At each step in section 6.1, we keep equations containing only the above variables. It is sufficient to proceed up to step 3) in the recursion relations. In the following analysis, we will set $w = 0$, and we can reintroduce the $w$ dependence using (8.37).
The recursion relations are

step 1)
$$\gamma_{\frac{1}{4},\frac{1}{4}} = -\tfrac{2}{3}f^{(1)}_{\frac{1}{3},\frac{1}{4}}f^{(1)}_{\frac{2}{3},\frac{1}{4}} \qquad (8.45)$$
$$\gamma_{\frac{1}{4},\frac{1}{2}} = -\tfrac{2}{9}\Big(f^{(1)}_{\frac{1}{3},\frac{1}{2}}f^{(1)}_{\frac{2}{3},\frac{1}{4}} + f^{(1)}_{\frac{1}{3},\frac{1}{4}}f^{(1)}_{\frac{2}{3},\frac{1}{2}}\Big) \qquad (8.46)$$
$$\gamma_{\frac{1}{4},\frac{3}{4}} = \tfrac{1}{6}\Big(-f^{(1)}_{\frac{1}{3},\frac{3}{4}}f^{(1)}_{\frac{2}{3},\frac{1}{4}} - f^{(1)}_{\frac{1}{3},\frac{1}{4}}f^{(1)}_{\frac{2}{3},\frac{3}{4}}\Big) + \tfrac{1}{8} \qquad (8.47)$$
$$\gamma_{\frac{1}{2},\frac{1}{2}} = \tfrac{1}{8} - \tfrac{1}{3}f^{(1)}_{\frac{1}{3},\frac{1}{2}}f^{(1)}_{\frac{2}{3},\frac{1}{2}} \qquad (8.48)$$
$$\gamma_{\frac{1}{2},\frac{3}{4}} = \tfrac{2}{15}\Big(-f^{(1)}_{\frac{1}{3},\frac{3}{4}}f^{(1)}_{\frac{2}{3},\frac{1}{2}} - f^{(1)}_{\frac{1}{3},\frac{1}{2}}f^{(1)}_{\frac{2}{3},\frac{3}{4}}\Big) \qquad (8.49)$$
$$\gamma_{\frac{3}{4},\frac{3}{4}} = -\tfrac{2}{9}f^{(1)}_{\frac{1}{3},\frac{3}{4}}f^{(1)}_{\frac{2}{3},\frac{3}{4}} \qquad (8.50)$$

step 2)
$$\tfrac{5}{4}f^{(1)}_{\frac{1}{3},\frac{5}{4}} = -\tfrac{1}{12}f^{(1)}_{\frac{1}{3},\frac{1}{4}} - \tfrac{3}{2}f^{(1)}_{\frac{1}{3},\frac{3}{4}}\gamma_{\frac{1}{4},\frac{1}{4}} - 2f^{(1)}_{\frac{1}{3},\frac{1}{2}}\gamma_{\frac{1}{4},\frac{1}{2}} - \tfrac{3}{2}f^{(1)}_{\frac{1}{3},\frac{1}{4}}\gamma_{\frac{1}{4},\frac{3}{4}} \qquad (8.51)$$
$$\tfrac{5}{4}f^{(1)}_{\frac{2}{3},\frac{5}{4}} = -\tfrac{5}{12}f^{(1)}_{\frac{2}{3},\frac{1}{4}} - \tfrac{3}{2}f^{(1)}_{\frac{2}{3},\frac{3}{4}}\gamma_{\frac{1}{4},\frac{1}{4}} - 2f^{(1)}_{\frac{2}{3},\frac{1}{2}}\gamma_{\frac{1}{4},\frac{1}{2}} - \tfrac{3}{2}f^{(1)}_{\frac{2}{3},\frac{1}{4}}\gamma_{\frac{1}{4},\frac{3}{4}} \qquad (8.52)$$
$$\tfrac{3}{2}f^{(1)}_{\frac{1}{3},\frac{3}{2}} = \tfrac{1}{6}f^{(1)}_{\frac{1}{3},\frac{1}{2}} - \tfrac{3}{2}f^{(1)}_{\frac{1}{3},\frac{3}{4}}\gamma_{\frac{1}{4},\frac{1}{2}} - 2f^{(1)}_{\frac{1}{3},\frac{1}{2}}\gamma_{\frac{1}{2},\frac{1}{2}} - \tfrac{3}{2}f^{(1)}_{\frac{1}{3},\frac{1}{4}}\gamma_{\frac{1}{2},\frac{3}{4}} \qquad (8.53)$$
$$\tfrac{3}{2}f^{(1)}_{\frac{2}{3},\frac{3}{2}} = -\tfrac{1}{6}f^{(1)}_{\frac{2}{3},\frac{1}{2}} - \tfrac{3}{2}f^{(1)}_{\frac{2}{3},\frac{3}{4}}\gamma_{\frac{1}{4},\frac{1}{2}} - 2f^{(1)}_{\frac{2}{3},\frac{1}{2}}\gamma_{\frac{1}{2},\frac{1}{2}} - \tfrac{3}{2}f^{(1)}_{\frac{2}{3},\frac{1}{4}}\gamma_{\frac{1}{2},\frac{3}{4}} \qquad (8.54)$$
$$\tfrac{7}{4}f^{(1)}_{\frac{1}{3},\frac{7}{4}} = \tfrac{5}{12}f^{(1)}_{\frac{1}{3},\frac{3}{4}} - \tfrac{3}{2}f^{(1)}_{\frac{1}{3},\frac{3}{4}}\gamma_{\frac{1}{4},\frac{3}{4}} - 2f^{(1)}_{\frac{1}{3},\frac{1}{2}}\gamma_{\frac{1}{2},\frac{3}{4}} - \tfrac{3}{2}f^{(1)}_{\frac{1}{3},\frac{1}{4}}\gamma_{\frac{3}{4},\frac{3}{4}} \qquad (8.55)$$
$$\tfrac{7}{4}f^{(1)}_{\frac{2}{3},\frac{7}{4}} = \tfrac{1}{12}f^{(1)}_{\frac{2}{3},\frac{3}{4}} - \tfrac{3}{2}f^{(1)}_{\frac{2}{3},\frac{3}{4}}\gamma_{\frac{1}{4},\frac{3}{4}} - 2f^{(1)}_{\frac{2}{3},\frac{1}{2}}\gamma_{\frac{1}{2},\frac{3}{4}} - \tfrac{3}{2}f^{(1)}_{\frac{2}{3},\frac{1}{4}}\gamma_{\frac{3}{4},\frac{3}{4}} \qquad (8.56)$$

step 3)
$$\gamma_{\frac{1}{4},\frac{5}{4}} = \tfrac{2}{3}\Big(-\tfrac{1}{6}\big(f^{(1)}_{\frac{1}{3},\frac{5}{4}}f^{(1)}_{\frac{2}{3},\frac{1}{4}} + f^{(1)}_{\frac{1}{3},\frac{1}{4}}f^{(1)}_{\frac{2}{3},\frac{5}{4}}\big) + \tfrac{1}{4}\gamma_{\frac{1}{4},\frac{1}{4}}\Big) \qquad (8.57)$$
$$\gamma_{\frac{1}{4},\frac{3}{2}} = \tfrac{4}{7}\Big(-\tfrac{1}{6}\big(f^{(1)}_{\frac{1}{3},\frac{3}{2}}f^{(1)}_{\frac{2}{3},\frac{1}{4}} + f^{(1)}_{\frac{1}{3},\frac{1}{4}}f^{(1)}_{\frac{2}{3},\frac{3}{2}}\big) + \tfrac{1}{2}\gamma_{\frac{1}{4},\frac{1}{2}}\Big) \qquad (8.58)$$
$$\gamma_{\frac{1}{4},\frac{7}{4}} = \tfrac{1}{2}\Big(-\tfrac{1}{6}\big(f^{(1)}_{\frac{1}{3},\frac{7}{4}}f^{(1)}_{\frac{2}{3},\frac{1}{4}} + f^{(1)}_{\frac{1}{3},\frac{1}{4}}f^{(1)}_{\frac{2}{3},\frac{7}{4}}\big) + \tfrac{3}{4}\gamma_{\frac{1}{4},\frac{3}{4}}\Big) \qquad (8.59)$$
$$\gamma_{\frac{1}{2},\frac{5}{4}} = \tfrac{4}{7}\Big(-\tfrac{1}{6}\big(f^{(1)}_{\frac{1}{3},\frac{5}{4}}f^{(1)}_{\frac{2}{3},\frac{1}{2}} + f^{(1)}_{\frac{1}{3},\frac{1}{2}}f^{(1)}_{\frac{2}{3},\frac{5}{4}}\big) + \tfrac{1}{4}\gamma_{\frac{1}{4},\frac{1}{2}}\Big) \qquad (8.60)$$
$$\gamma_{\frac{1}{2},\frac{3}{2}} = \tfrac{1}{2}\Big(-\tfrac{1}{6}\big(f^{(1)}_{\frac{1}{3},\frac{3}{2}}f^{(1)}_{\frac{2}{3},\frac{1}{2}} + f^{(1)}_{\frac{1}{3},\frac{1}{2}}f^{(1)}_{\frac{2}{3},\frac{3}{2}}\big) + \tfrac{1}{2}\gamma_{\frac{1}{2},\frac{1}{2}}\Big) \qquad (8.61)$$
$$\gamma_{\frac{3}{4},\frac{5}{4}} = \tfrac{1}{2}\Big(-\tfrac{1}{6}\big(f^{(1)}_{\frac{1}{3},\frac{5}{4}}f^{(1)}_{\frac{2}{3},\frac{3}{4}} + f^{(1)}_{\frac{1}{3},\frac{3}{4}}f^{(1)}_{\frac{2}{3},\frac{5}{4}}\big) + \tfrac{1}{4}\gamma_{\frac{1}{4},\frac{3}{4}}\Big) \qquad (8.62)$$

The constraints are

$$\tfrac{31}{288} = \gamma_{\frac{1}{2},\frac{1}{2}} + \tfrac{3}{2}\gamma_{\frac{1}{4},\frac{3}{4}} \qquad (8.63)$$
$$\tfrac{49}{288} = \tfrac{15}{2}\gamma_{\frac{3}{4},\frac{5}{4}} + 6\gamma_{\frac{1}{2},\frac{3}{2}} + \tfrac{7}{2}\gamma_{\frac{1}{4},\frac{7}{4}} \qquad (8.64)$$
$$\gamma_{\frac{1}{4},\frac{1}{4}} = 2\Big(2\gamma^2_{\frac{1}{4},\frac{1}{2}} + 3\gamma_{\frac{1}{4},\frac{1}{4}}\gamma_{\frac{1}{4},\frac{3}{4}} + \tfrac{5}{2}\gamma_{\frac{1}{4},\frac{5}{4}}\Big) \qquad (8.65)$$
$$\gamma_{\frac{1}{4},\frac{1}{2}} = \tfrac{4}{3}\Big(2\gamma_{\frac{1}{4},\frac{1}{2}}\gamma_{\frac{1}{2},\frac{1}{2}} + \tfrac{3}{2}\gamma_{\frac{1}{4},\frac{1}{2}}\gamma_{\frac{1}{4},\frac{3}{4}} + \tfrac{3}{2}\gamma_{\frac{1}{4},\frac{1}{4}}\gamma_{\frac{1}{2},\frac{3}{4}} + \tfrac{5}{4}\gamma_{\frac{1}{2},\frac{5}{4}} + \tfrac{3}{2}\gamma_{\frac{1}{4},\frac{3}{2}}\Big) \qquad (8.66)$$
$$\gamma_{\frac{1}{2},\frac{1}{2}} = 2\gamma^2_{\frac{1}{2},\frac{1}{2}} + 3\gamma_{\frac{1}{4},\frac{1}{2}}\gamma_{\frac{1}{2},\frac{3}{4}} + 3\gamma_{\frac{1}{2},\frac{3}{2}} \qquad (8.67)$$
$$\gamma_{\frac{1}{4},\frac{3}{4}} = \tfrac{3}{2}\gamma^2_{\frac{1}{4},\frac{3}{4}} + 2\gamma_{\frac{1}{4},\frac{1}{2}}\gamma_{\frac{1}{2},\frac{3}{4}} + \tfrac{3}{2}\gamma_{\frac{1}{4},\frac{1}{4}}\gamma_{\frac{3}{4},\frac{3}{4}} + \tfrac{5}{4}\gamma_{\frac{3}{4},\frac{5}{4}} + \tfrac{7}{4}\gamma_{\frac{1}{4},\frac{7}{4}} \qquad (8.68)$$

By using the recursion relations, all 24 variables can be determined using only 6 variables, as indicated in (6.5):

$$f^{(1)}_{\frac{1}{3},\frac{1}{4}},\ f^{(1)}_{\frac{1}{3},\frac{1}{2}},\ f^{(1)}_{\frac{1}{3},\frac{3}{4}},\ f^{(1)}_{\frac{2}{3},\frac{1}{4}},\ f^{(1)}_{\frac{2}{3},\frac{1}{2}},\ f^{(1)}_{\frac{2}{3},\frac{3}{4}} \qquad (8.69)$$

Inserting the recursion relations (8.45)-(8.62) into the 6 constraints, we find that 4 of them turn out to be
trivial, while the 2 constraints (8.63) and (8.64) remain nontrivial. These nontrivial constraints are

$$\tfrac{31}{288} = -\tfrac{1}{4}f^{(1)}_{\frac{1}{3},\frac{3}{4}}f^{(1)}_{\frac{2}{3},\frac{1}{4}} - \tfrac{1}{3}f^{(1)}_{\frac{1}{3},\frac{1}{2}}f^{(1)}_{\frac{2}{3},\frac{1}{2}} - \tfrac{1}{4}f^{(1)}_{\frac{1}{3},\frac{1}{4}}f^{(1)}_{\frac{2}{3},\frac{3}{4}} + \tfrac{5}{16} \qquad (8.70)$$

$$\tfrac{49}{288} = -\tfrac{1}{6}\big(f^{(1)}_{\frac{1}{3},\frac{3}{4}}\big)^2\big(f^{(1)}_{\frac{2}{3},\frac{1}{4}}\big)^2 - \tfrac{4}{9}\big(f^{(1)}_{\frac{1}{3},\frac{1}{2}}\big)^2\big(f^{(1)}_{\frac{2}{3},\frac{1}{2}}\big)^2 - \tfrac{1}{6}\big(f^{(1)}_{\frac{1}{3},\frac{1}{4}}\big)^2\big(f^{(1)}_{\frac{2}{3},\frac{3}{4}}\big)^2 + \tfrac{1}{6}f^{(1)}_{\frac{1}{3},\frac{1}{2}}f^{(1)}_{\frac{2}{3},\frac{1}{2}} + \tfrac{3}{2}\Big(\tfrac{1}{8} - \tfrac{1}{3}f^{(1)}_{\frac{1}{3},\frac{1}{2}}f^{(1)}_{\frac{2}{3},\frac{1}{2}}\Big) + f^{(1)}_{\frac{2}{3},\frac{3}{4}}\Big(-\tfrac{4}{9}f^{(1)}_{\frac{2}{3},\frac{1}{4}}\big(f^{(1)}_{\frac{1}{3},\frac{1}{2}}\big)^2 - \tfrac{4}{9}f^{(1)}_{\frac{1}{3},\frac{1}{4}}f^{(1)}_{\frac{2}{3},\frac{1}{2}}f^{(1)}_{\frac{1}{3},\frac{1}{2}} - \tfrac{2}{9}f^{(1)}_{\frac{1}{3},\frac{1}{4}}\Big) + f^{(1)}_{\frac{1}{3},\frac{3}{4}}\Big(-\tfrac{4}{9}f^{(1)}_{\frac{1}{3},\frac{1}{4}}\big(f^{(1)}_{\frac{2}{3},\frac{1}{2}}\big)^2 - \tfrac{4}{9}f^{(1)}_{\frac{1}{3},\frac{1}{2}}f^{(1)}_{\frac{2}{3},\frac{1}{4}}f^{(1)}_{\frac{2}{3},\frac{1}{2}} - \tfrac{1}{9}f^{(1)}_{\frac{2}{3},\frac{1}{4}}\Big) - \tfrac{13}{9}f^{(1)}_{\frac{1}{3},\frac{3}{4}}f^{(1)}_{\frac{2}{3},\frac{3}{4}}f^{(1)}_{\frac{1}{3},\frac{1}{4}}f^{(1)}_{\frac{2}{3},\frac{1}{4}} + \tfrac{9}{32} \qquad (8.71)$$

We can solve these constraints in terms of only four variables,

$$f^{(1)}_{\frac{1}{3},\frac{1}{4}},\ f^{(1)}_{\frac{1}{3},\frac{1}{2}},\ f^{(1)}_{\frac{2}{3},\frac{1}{4}},\ f^{(1)}_{\frac{2}{3},\frac{1}{2}} \qquad (8.72)$$

For brevity, write $x_1 \equiv f^{(1)}_{\frac{1}{3},\frac{1}{4}}$, $x_2 \equiv f^{(1)}_{\frac{1}{3},\frac{1}{2}}$, $y_1 \equiv f^{(1)}_{\frac{2}{3},\frac{1}{4}}$, $y_2 \equiv f^{(1)}_{\frac{2}{3},\frac{1}{2}}$, and let the common radicand be

$$D \equiv 64512\,x_1^2 x_2^2 y_1^2 y_2^2 - 34560\,x_1 x_2^3 y_1^3 y_2 + 23832\,x_1 x_2^2 y_1^3 + 5184\,x_2^4 y_1^4 - 34560\,x_1^3 x_2 y_1 y_2^3 - 57840\,x_1^2 x_2 y_1^2 y_2 + 18648\,x_1^3 y_1 y_2^2 + 5184\,x_1^4 y_2^4 + 16234\,x_1^2 y_1^2$$

Then the solutions read

$$f^{(1)}_{\frac{1}{3},\frac{3}{4}} = \frac{1}{720\,x_1 y_1^2}\Big(2\sqrt{D} - 144\,x_2^2 y_1^2 - 480\,x_1 x_2 y_1 y_2 + 144\,x_1^2 y_2^2 + 259\,x_1 y_1\Big) \qquad (8.73)$$

$$f^{(1)}_{\frac{2}{3},\frac{3}{4}} = \frac{1}{720\,x_1^2 y_1}\Big(-2\sqrt{D} + 144\,x_2^2 y_1^2 - 480\,x_1 x_2 y_1 y_2 - 144\,x_1^2 y_2^2 + 331
\,x_1 y_1\Big) \qquad (8.74)$$

Using these and the recursion relations, all 24 variables can be determined completely in terms of the 4 variables (8.72). The values for these inputs are given by (7.2), which we write below:

$$f^{(1)}_{\frac{1}{3},\frac{1}{4}} = \frac{3^{1/4}}{\sqrt{2}\,(-1+i)^{1/3}}, \qquad f^{(1)}_{\frac{1}{3},\frac{1}{2}} = \frac{(-1+i)^{2/3}}{2\sqrt{3}}, \qquad f^{(1)}_{\frac{2}{3},\frac{1}{4}} = \frac{\big(\tfrac{1}{4}-\tfrac{i}{4}\big)\,3^{1/4}}{2^{5/6}}, \qquad f^{(1)}_{\frac{2}{3},\frac{1}{2}} = -\frac{5i}{4\cdot 2^{1/3}\sqrt{3}} \qquad (8.75)$$

Inserting the values (8.75) into the system of equations (8.73), (8.74), and (8.45)-(8.62), we obtain the following values for the remaining variables:

$$\gamma_{\frac{1}{4},\frac{1}{4}} = \frac{i}{4\sqrt{3}}, \quad \gamma_{\frac{1}{4},\frac{1}{2}} = \frac{(-\tfrac{1}{3})^{1/4}}{9}, \quad \gamma_{\frac{1}{4},\frac{3}{4}} = \frac{5}{144}, \quad \gamma_{\frac{1}{4},\frac{5}{4}} = \frac{77i}{2592\sqrt{3}}, \quad \gamma_{\frac{1}{4},\frac{3}{2}} = \frac{2(-\tfrac{1}{3})^{1/4}}{81}, \quad \gamma_{\frac{1}{4},\frac{7}{4}} = \frac{221}{20736},$$
$$\gamma_{\frac{1}{2},\frac{1}{2}} = \frac{1}{18}, \quad \gamma_{\frac{1}{2},\frac{5}{4}} = \frac{11(-\tfrac{1}{3})^{1/4}}{648}, \quad \gamma_{\frac{3}{4},\frac{5}{4}} = \frac{385}{62208}, \quad \gamma_{\frac{1}{2},\frac{3}{4}} = -\frac{(-\tfrac{1}{3})^{3/4}}{18}, \quad \gamma_{\frac{3}{4},\frac{3}{4}} = -\frac{25i}{1296\sqrt{3}}, \quad \gamma_{\frac{1}{2},\frac{3}{2}} = \frac{7}{486},$$
$$f^{(1)}_{\frac{1}{3},\frac{3}{4}} = \frac{(-\tfrac{1}{3})^{1/4}}{6\cdot 2^{2/3}}, \quad f^{(1)}_{\frac{2}{3},\frac{3}{4}} = \frac{25(-\tfrac{1}{3})^{1/4}}{24\cdot 2^{1/3}}, \quad f^{(1)}_{\frac{1}{3},\frac{5}{4}} = \frac{7(-\tfrac{1}{3})^{3/4}}{72\cdot 2^{2/3}}, \quad f^{(1)}_{\frac{2}{3},\frac{5}{4}} = \frac{55(-\tfrac{1}{3})^{3/4}}{288\cdot 2^{1/3}},$$
$$f^{(1)}_{\frac{1}{3},\frac{3}{2}} = \frac{i\,2^{1/3}}{27\sqrt{3}}, \quad f^{(1)}_{\frac{2}{3},\frac{3}{2}} = \frac{7i}{54\cdot 2^{1/3}\sqrt{3}}, \quad f^{(1)}_{\frac{1}{3},\frac{7}{4}} = \frac{13(-\tfrac{1}{3})^{1/4}}{432\cdot 2^{2/3}}, \quad f^{(1)}_{\frac{2}{3},\frac{7}{4}} = \frac{85(-\tfrac{1}{3})^{1/4}}{1728\cdot 2^{1/3}} \qquad (8.76)$$

which agree with the known results from (7.1) and (7.2), again demonstrating that the bootstrap method generates the correct equations. Note that the number of inputs is 4, in agreement with equation (6.9) for $(M,N) = (3,1)$ and $N_{\text{constr}} = 2$.

8.3 M = N = 2

In this case, the $w$ dependence is given by

$$\gamma_{\frac{p}{4},\frac{q}{4}} \propto e^{\frac{p+q}{4}w}, \qquad f^{(1)}_{\frac{r}{2},\frac{p}{4}} \propto e^{(\frac{p}{4}-\frac{r}{2})w}, \qquad f^{(2)}_{\frac{r}{2},\frac{p}{4}} \propto e^{(\frac{p}{4}-\frac{r}{2})w} \qquad (8.77)$$

The recursion relations (6.2), (6.3) and (6.4) are given by

$$\gamma_{\frac{p}{4},\frac{q}{4}} = e^{w}\frac{4}{p+q}\Big(\big(\tfrac{p}{4}-1\big)\gamma_{\frac{p}{4}-1,\frac{q}{4}} + \big(\tfrac{q}{4}-1\big)\gamma_{\frac{p}{4},\frac{q}{4}-1} - \tfrac{1}{4}f^{(1)}_{\frac{1}{2},\frac{p}{4}}f^{(1)}_{\frac{1}{2},\frac{q}{4}} - \tfrac{1}{4}f^{(2)}_{\frac{1}{2},\frac{p}{4}}f^{(2)}_{\frac{1}{2},\frac{q}{4}} + \tfrac{1}{8}\delta_{\frac{p}{4}+\frac{q}{4},1}\Big) \qquad (8.78)$$

and

$$f^{(i)}_{\frac{r}{2},\frac{p}{4}+1}\big(\tfrac{p}{4}+1\big) = e^{w}\big(\tfrac{p}{4}-\tfrac{r}{2}\big)f^{(i)}_{\frac{r}{2},\frac{p}{4}} - \tfrac{3}{2}f^{(i)}_{\frac{r}{2},\frac{3}{4}}\gamma_{\frac{1}{4},\frac{p}{4}} - 2 f^{(i)}_{\frac{r}{2},\frac{1}{2}}\gamma_{\frac{1}{2},\frac{p}{4}} - \tfrac{3}{2}f^{(i)}_{\frac{r}{2},\frac{1}{4}}\gamma_{\frac{3}{4},\frac{p}{4}} \qquad (8.79)$$

where $i = 1, 2$. The constraints (6.7) with $n = 1, 2$ and (6.8) become

$$\gamma_{\frac{1}{2},\frac{1}{2}} + \tfrac{3}{2}\gamma_{\frac{3}{4},\frac{1}{4}} = \tfrac{3}{32}e^{w} \qquad (8.80)$$
$$\sum_{p=1}^{7}\gamma_{\frac{p}{4},2-\frac{p}{4}}\,p\big(2-\tfrac{p}{4}\big) = \tfrac{5}{32}e^{2w} \qquad (8.81)$$
$$\gamma_{\frac{p}{4},\frac{q}{4}} = e^{-w}\frac{4}{p+q}\Big(\big(\tfrac{p}{4}+1\big)\gamma_{\frac{p}{4}+1,\frac{q}{4}} + \big(\tfrac{q}{4}+1\big)\gamma_{\frac{p}{4},\frac{q}{4}+1} + \tfrac{3}{2}\gamma_{\frac{p}{4},\frac{1}{4}}\gamma_{\frac{3}{4},\frac{q}{4}} + 2\gamma_{\frac{p}{4},\frac{1}{2}}\gamma_{\frac{1}{2},\frac{q}{4}} + \tfrac{3}{2}\gamma_{\frac{p}{4},\frac{3}{4}}\gamma_{\frac{1}{4},\frac{q}{4}}\Big) \qquad (8.82)$$

Explicit relations

The set of coefficients we consider is

$$\gamma_{\frac{m}{4},\frac{n}{4}},\ \tfrac{m}{4}+\tfrac{n}{4}\le 2 \qquad\text{and}\qquad f^{(i)}_{\frac{r}{2},\frac{m}{4}},\ \tfrac{r}{2}<1,\ \tfrac{m}{4}<2,\ i=1,2 \qquad (8.83)$$

which are the following 24 variables:

$$\gamma_{\frac{1}{4},\frac{1}{4}},\ \gamma_{\frac{1}{4},\frac{1}{2}},\ \gamma_{\frac{1}{4},\frac{3}{4}},\ \gamma_{\frac{1}{4},\frac{5}{4}},\ \gamma_{\frac{1}{4},\frac{3}{2}},\ \gamma_{\frac{1}{4},\frac{7}{4}},\ \gamma_{\frac{1}{2},\frac{1}{2}},\ \gamma_{\frac{1}{2},\frac{3}{4}},\ \gamma_{\frac{1}{2},\frac{5}{4}},\ \gamma_{\frac{1}{2},\frac{3}{2}},\ \gamma_{\frac{3}{4},\frac{3}{4}},\ \gamma_{\frac{3}{4},\frac{5}{4}},$$
$$f^{(1)}_{\frac{1}{2},\frac{1}{4}},\ f^{(1)}_{\frac{1}{2},\frac{1}{2}},\ f^{(1)}_{\frac{1}{2},\frac{3}{4}},\ f^{(1)}_{\frac{1}{2},\frac{5}{4}},\ f^{(1)}_{\frac{1}{2},\frac{3}{2}},\ f^{(1)}_{\frac{1}{2},\frac{7}{4}},\ f^{(2)}_{\frac{1}{2},\frac{1}{4}},\ f^{(2)}_{\frac{1}{2},\frac{1}{2}},\ f^{(2)}_{\frac{1}{2},\frac{3}{4}},\ f^{(2)}_{\frac{1}{2},\frac{5}{4}},\ f^{(2)}_{\frac{1}{2},\frac{3}{2}},\ f^{(2)}_{\frac{1}{2},\frac{7}{4}} \qquad (8.84)$$

At each step in section 6.1, we keep equations containing only the above variables. It is sufficient to proceed up to step 3) in the recursion relations.
In the following analysis, we will again set $w = 0$; we can reintroduce the $w$ dependence using (8.77). The recursion relations are

step 1)
$$\gamma_{\frac{1}{4},\frac{1}{4}} = -2\Big(\tfrac{1}{4}\big(f^{(1)}_{\frac{1}{2},\frac{1}{4}}\big)^2 + \tfrac{1}{4}\big(f^{(2)}_{\frac{1}{2},\frac{1}{4}}\big)^2\Big) \qquad (8.85)$$
$$\gamma_{\frac{1}{4},\frac{1}{2}} = -\tfrac{4}{3}\Big(\tfrac{1}{4}f^{(1)}_{\frac{1}{2},\frac{1}{4}}f^{(1)}_{\frac{1}{2},\frac{1}{2}} + \tfrac{1}{4}f^{(2)}_{\frac{1}{2},\frac{1}{4}}f^{(2)}_{\frac{1}{2},\frac{1}{2}}\Big) \qquad (8.86)$$
$$\gamma_{\frac{1}{4},\frac{3}{4}} = -\tfrac{1}{4}f^{(1)}_{\frac{1}{2},\frac{1}{4}}f^{(1)}_{\frac{1}{2},\frac{3}{4}} - \tfrac{1}{4}f^{(2)}_{\frac{1}{2},\frac{1}{4}}f^{(2)}_{\frac{1}{2},\frac{3}{4}} + \tfrac{1}{8} \qquad (8.87)$$
$$\gamma_{\frac{1}{2},\frac{1}{2}} = -\tfrac{1}{4}\big(f^{(1)}_{\frac{1}{2},\frac{1}{2}}\big)^2 - \tfrac{1}{4}\big(f^{(2)}_{\frac{1}{2},\frac{1}{2}}\big)^2 + \tfrac{1}{8} \qquad (8.88)$$
$$\gamma_{\frac{1}{2},\frac{3}{4}} = \tfrac{4}{5}\Big(-\tfrac{1}{4}f^{(1)}_{\frac{1}{2},\frac{1}{2}}f^{(1)}_{\frac{1}{2},\frac{3}{4}} - \tfrac{1}{4}f^{(2)}_{\frac{1}{2},\frac{1}{2}}f^{(2)}_{\frac{1}{2},\frac{3}{4}}\Big) \qquad (8.89)$$
$$\gamma_{\frac{3}{4},\frac{3}{4}} = \tfrac{2}{3}\Big(-\tfrac{1}{4}\big(f^{(1)}_{\frac{1}{2},\frac{3}{4}}\big)^2 - \tfrac{1}{4}\big(f^{(2)}_{\frac{1}{2},\frac{3}{4}}\big)^2\Big) \qquad (8.90)$$

step 2)
$$\tfrac{5}{4}f^{(1)}_{\frac{1}{2},\frac{5}{4}} = -\tfrac{1}{4}f^{(1)}_{\frac{1}{2},\frac{1}{4}} - \tfrac{3}{2}f^{(1)}_{\frac{1}{2},\frac{3}{4}}\gamma_{\frac{1}{4},\frac{1}{4}} - 2f^{(1)}_{\frac{1}{2},\frac{1}{2}}\gamma_{\frac{1}{4},\frac{1}{2}} - \tfrac{3}{2}f^{(1)}_{\frac{1}{2},\frac{1}{4}}\gamma_{\frac{1}{4},\frac{3}{4}} \qquad (8.91)$$
$$\tfrac{3}{2}f^{(1)}_{\frac{1}{2},\frac{3}{2}} = -\tfrac{3}{2}f^{(1)}_{\frac{1}{2},\frac{3}{4}}\gamma_{\frac{1}{4},\frac{1}{2}} - 2f^{(1)}_{\frac{1}{2},\frac{1}{2}}\gamma_{\frac{1}{2},\frac{1}{2}} - \tfrac{3}{2}f^{(1)}_{\frac{1}{2},\frac{1}{4}}\gamma_{\frac{1}{2},\frac{3}{4}} \qquad (8.92)$$
$$\tfrac{7}{4}f^{(1)}_{\frac{1}{2},\frac{7}{4}} = \tfrac{1}{4}f^{(1)}_{\frac{1}{2},\frac{3}{4}} - \tfrac{3}{2}f^{(1)}_{\frac{1}{2},\frac{3}{4}}\gamma_{\frac{1}{4},\frac{3}{4}} - 2f^{(1)}_{\frac{1}{2},\frac{1}{2}}\gamma_{\frac{1}{2},\frac{3}{4}} - \tfrac{3}{2}f^{(1)}_{\frac{1}{2},\frac{1}{4}}\gamma_{\frac{3}{4},\frac{3}{4}} \qquad (8.93)$$
$$\tfrac{5}{4}f^{(2)}_{\frac{1}{2},\frac{5}{4}} = -\tfrac{1}{4}f^{(2)}_{\frac{1}{2},\frac{1}{4}} - \tfrac{3}{2}f^{(2)}_{\frac{1}{2},\frac{3}{4}}\gamma_{\frac{1}{4},\frac{1}{4}} - 2f^{(2)}_{\frac{1}{2},\frac{1}{2}}\gamma_{\frac{1}{4},\frac{1}{2}} - \tfrac{3}{2}f^{(2)}_{\frac{1}{2},\frac{1}{4}}\gamma_{\frac{1}{4},\frac{3}{4}} \qquad (8.94)$$
$$\tfrac{3}{2}f^{(2)}_{\frac{1}{2},\frac{3}{2}} = -\tfrac{3}{2}f^{(2)}_{\frac{1}{2},\frac{3}{4}}\gamma_{\frac{1}{4},\frac{1}{2}} - 2f^{(2)}_{\frac{1}{2},\frac{1}{2}}\gamma_{\frac{1}{2},\frac{1}{2}} - \tfrac{3}{2}f^{(2)}_{\frac{1}{2},\frac{1}{4}}\gamma_{\frac{1}{2},\frac{3}{4}} \qquad (8.95)$$
$$\tfrac{7}{4}f^{(2)}_{\frac{1}{2},\frac{7}{4}} = \tfrac{1}{4}f^{(2)}_{\frac{1}{2},\frac{3}{4}} - \tfrac{3}{2}f^{(2)}_{\frac{1}{2},\frac{3}{4}}\gamma_{\frac{1}{4},\frac{3}{4}} - 2f^{(2)}_{\frac{1}{2},\frac{1}{2}}\gamma_{\frac{1}{2},\frac{3}{4}} - \tfrac{3}{2}f^{(2)}_{\frac{1}{2},\frac{1}{4}}\gamma_{\frac{3}{4},\frac{3}{4}} \qquad (8.96)$$

step 3)
$$\gamma_{\frac{1}{4},\frac{5}{4}} = \tfrac{2}{3}\Big(-\tfrac{1}{4}f^{(1)}_{\frac{1}{2},\frac{1}{4}}f^{(1)}_{\frac{1}{2},\frac{5}{4}} - \tfrac{1}{4}f^{(2)}_{\frac{1}{2},\frac{1}{4}}f^{(2)}_{\frac{1}{2},\frac{5}{4}} + \tfrac{1}{4}\gamma_{\frac{1}{4},\frac{1}{4}}\Big) \qquad (8.97)$$
$$\gamma_{\frac{1}{4},\frac{3}{2}} = \tfrac{4}{7}\Big(-\tfrac{1}{4}f^{(1)}_{\frac{1}{2},\frac{1}{4}}f^{(1)}_{\frac{1}{2},\frac{3}{2}} - \tfrac{1}{4}f^{(2)}_{\frac{1}{2},\frac{1}{4}}f^{(2)}_{\frac{1}{2},\frac{3}{2}} + \tfrac{1}{2}\gamma_{\frac{1}{4},\frac{1}{2}}\Big) \qquad (8.98)$$
$$\gamma_{\frac{1}{4},\frac{7}{4}} = \tfrac{1}{2}\Big(-\tfrac{1}{4}f^{(1)}_{\frac{1}{2},\frac{1}{4}}f^{(1)}_{\frac{1}{2},\frac{7}{4}} - \tfrac{1}{4}f^{(2)}_{\frac{1}{2},\frac{1}{4}}f^{(2)}_{\frac{1}{2},\frac{7}{4}} + \tfrac{3}{4}\gamma_{\frac{1}{4},\frac{3}{4}}\Big) \qquad (8.99)$$
$$\gamma_{\frac{1}{2},\frac{5}{4}} = \tfrac{4}{7}\Big(-\tfrac{1}{4}f^{(1)}_{\frac{1}{2},\frac{1}{2}}f^{(1)}_{\frac{1}{2},\frac{5}{4}} - \tfrac{1}{4}f^{(2)}_{\frac{1}{2},\frac{1}{2}}f^{(2)}_{\frac{1}{2},\frac{5}{4}} + \tfrac{1}{4}\gamma_{\frac{1}{4},\frac{1}{2}}\Big) \qquad (8.100)$$
$$\gamma_{\frac{1}{2},\frac{3}{2}} = \tfrac{1}{2}\Big(-\tfrac{1}{4}f^{(1)}_{\frac{1}{2},\frac{1}{2}}f^{(1)}_{\frac{1}{2},\frac{3}{2}} - \tfrac{1}{4}f^{(2)}_{\frac{1}{2},\frac{1}{2}}f^{(2)}_{\frac{1}{2},\frac{3}{2}} + \tfrac{1}{2}\gamma_{\frac{1}{2},\frac{1}{2}}\Big) \qquad (8.101)$$
$$\gamma_{\frac{3}{4},\frac{5}{4}} = \tfrac{1}{2}\Big(-\tfrac{1}{4}f^{(1)}_{\frac{1}{2},\frac{3}{4}}f^{(1)}_{\frac{1}{2},\frac{5}{4}} - \tfrac{1}{4}f^{(2)}_{\frac{1}{2},\frac{3}{4}}f^{(2)}_{\frac{1}{2},\frac{5}{4}} + \tfrac{1}{4}\gamma_{\frac{1}{4},\frac{3}{4}}\Big) \qquad (8.102)$$

The constraints are

$$\tfrac{3}{32} = \gamma_{\frac{1}{2},\frac{1}{2}} + \tfrac{3}{2}\gamma_{\frac{1}{4},\frac{3}{4}} \qquad (8.103)$$
$$\tfrac{5}{32} = \tfrac{15}{2}\gamma_{\frac{3}{4},\frac{5}{4}} + 6\gamma_{\frac{1}{2},\frac{3}{2}} + \tfrac{7}{2}\gamma_{\frac{1}{4},\frac{7}{4}} \qquad (8.104)$$
$$\gamma_{\frac{1}{4},\frac{1}{4}} = 2\Big(2\gamma^2_{\frac{1}{4},\frac{1}{2}} + 3\gamma_{\frac{1}{4},\frac{1}{4}}\gamma_{\frac{1}{4},\frac{3}{4}} + \tfrac{5}{2}\gamma_{\frac{1}{4},\frac{5}{4}}\Big) \qquad (8.105)$$
$$\gamma_{\frac{1}{4},\frac{1}{2}} = \tfrac{4}{3}\Big(2\gamma_{\frac{1}{4},\frac{1}{2}}\gamma_{\frac{1}{2},\frac{1}{2}} + \tfrac{3}{2}\gamma_{\frac{1}{4},\frac{1}{2}}\gamma_{\frac{1}{4},\frac{3}{4}} + \tfrac{3}{2}\gamma_{\frac{1}{4},\frac{1}{4}}\gamma_{\frac{1}{2},\frac{3}{4}} + \tfrac{5}{4}\gamma_{\frac{1}{2},\frac{5}{4}} + \tfrac{3}{2}\gamma_{\frac{1}{4},\frac{3}{2}}\Big) \qquad (8.106)$$
$$\gamma_{\frac{1}{2},\frac{1}{2}} = 2\gamma^2_{\frac{1}{2},\frac{1}{2}} + 3\gamma_{\frac{1}{4},\frac{1}{2}}\gamma_{\frac{1}{2},\frac{3}{4}} + 3\gamma_{\frac{1}{2},\frac{3}{2}} \qquad (8.107)$$
$$\gamma_{\frac{1}{4},\frac{3}{4}} = \tfrac{3}{2}\gamma^2_{\frac{1}{4},\frac{3}{4}} + 2\gamma_{\frac{1}{4},\frac{1}{2}}\gamma_{\frac{1}{2},\frac{3}{4}} + \tfrac{3}{2}\gamma_{\frac{1}{4},\frac{1}{4}}\gamma_{\frac{3}{4},\frac{3}{4}} + \tfrac{5}{4}\gamma_{\frac{3}{4},\frac{5}{4}} + \tfrac{7}{4}\gamma_{\frac{1}{4},\frac{7}{4}} \qquad (8.108)$$

By using the recursion relations, all 24 variables can be determined using only 6 variables: $f^{(1)}_{\frac{1}{2},\frac{1}{4}}$, $f^{(1)}_{\frac{1}{2},\frac{1}{2}}$, $f^{(1)}_{\frac{1}{2},\frac{3}{4}}$, $f^{(2)}_{\frac{1}{2},\frac{1}{4}}$, $f^{(2)}_{\frac{1}{2},\frac{1}{2}}$,
$f^{(2)}_{\frac{1}{2},\frac{3}{4}}$ \qquad (8.109)

We have 6 constraints to determine them. Inserting the recursion relations (8.85)-(8.102) into these constraints, we again find that 4 of the constraints are trivial and 2 of them, (8.103), (8.104), are not. For brevity, write $u_1 \equiv f^{(1)}_{\frac{1}{2},\frac{1}{4}}$, $u_2 \equiv f^{(1)}_{\frac{1}{2},\frac{1}{2}}$, $u_3 \equiv f^{(1)}_{\frac{1}{2},\frac{3}{4}}$ and $v_1, v_2, v_3$ for the corresponding $f^{(2)}$'s. The nontrivial constraints are

$$\tfrac{3}{2} = -4u_2^2 - 6u_1 u_3 - 4v_2^2 - 6v_1 v_3 + 5 \qquad (8.110)$$

$$5 = -4u_3^2\big(8u_1^2 + 5v_1^2\big) - 8u_3\big(4u_2 v_1 v_2 + 3u_1 v_1 v_3 + 4u_1 u_2^2 + u_1\big) - 4v_3^2\big(5u_1^2 + 8v_1^2\big) - 8\big(2u_2^2 v_2^2 + u_2^4 + u_2^2 + v_2^4\big) - 8v_3\big(4u_1 u_2 v_2 + 4v_1 v_2^2 + v_1\big) - 8v_2^2 + 15 \qquad (8.111)$$

We can solve these two constraints in terms of only the four variables

$$u_1 = f^{(1)}_{\frac{1}{2},\frac{1}{4}},\ u_2 = f^{(1)}_{\frac{1}{2},\frac{1}{2}},\ v_1 = f^{(2)}_{\frac{1}{2},\frac{1}{4}},\ v_2 = f^{(2)}_{\frac{1}{2},\frac{1}{2}} \qquad (8.112)$$

It is convenient to define the common radicand

$$E \equiv -2192\,u_2^2 v_2^2 u_1^2 v_1^4 - 232\,u_2^2 v_2^2 u_1^4 v_1^2 - 232\,u_2^2 v_2^2 v_1^6 + 384\,u_2^3 v_2 u_1^3 v_1^3 + 1536\,u_2^3 v_2 u_1 v_1^5 - 232\,u_2^4 u_1^2 v_1^4 + 580\,u_2^2 u_1^2 v_1^4 - 20\,u_2^4 u_1^4 v_1^2 + 80\,u_2^2 u_1^4 v_1^2 - 500\,u_2^4 v_1^6 + 500\,u_2^2 v_1^6 + 1536\,u_2 v_2^3 u_1^3 v_1^3 + 384\,u_2 v_2^3 u_1 v_1^5 - 840\,u_2 v_2 u_1^3 v_1^3 - 840\,u_2 v_2 u_1 v_1^5 - 232\,v_2^4 u_1^2 v_1^4 + 580\,v_2^2 u_1^2 v_1^4 - 500\,v_2^4 u_1^4 v_1^2 + 500\,v_2^2 u_1^4 v_1^2 - 20\,v_2^4 v_1^6 + 80\,v_2^2 v_1^6 - 250\,u_1^2 v_1^4 - 125\,u_1^4 v_1^2
 - 125\,v_1^6$$

Then the solutions read

$$f^{(1)}_{\frac{1}{2},\frac{3}{4}} = \frac{1}{60\,(u_1^2+v_1^2)^2}\Big(-2\sqrt{2}\,\sqrt{E} - 40\,u_2^2 u_1^3 - 88\,u_2^2 u_1 v_1^2 + 48\,u_2 v_2 u_1^2 v_1 - 48\,u_2 v_2 v_1^3 - 40\,v_2^2 u_1^3 + 8\,v_2^2 u_1 v_1^2 + 35\,u_1^3 + 35\,u_1 v_1^2\Big) \qquad (8.113)$$

and

$$f^{(2)}_{\frac{1}{2},\frac{3}{4}} = \frac{1}{60\,v_1\,(u_1^2+v_1^2)^2}\Big(2\sqrt{2}\,u_1\sqrt{E} + 8\,u_2^2 u_1^2 v_1^2 - 40\,u_2^2 v_1^4 - 48\,u_2 v_2 u_1^3 v_1 + 48\,u_2 v_2 u_1 v_1^3 - 88
\,v_2^2 u_1^2 v_1^2 - 40\,v_2^2 v_1^4 + 35\,u_1^2 v_1^2 + 35\,v_1^4\Big) \qquad (8.114)$$

Using these and the recursion relations (8.85)-(8.102), all 24 variables can be determined in terms of only the 4 variables (8.112). Using (7.2) and (7.3), the values of these four variables are

$$f^{(1)}_{\frac{1}{2},\frac{1}{4}} = -\frac{i}{2}, \qquad f^{(2)}_{\frac{1}{2},\frac{1}{4}} = \frac{i}{2}, \qquad f^{(1)}_{\frac{1}{2},\frac{1}{2}} = \frac{1}{2}, \qquad f^{(2)}_{\frac{1}{2},\frac{1}{2}} = \frac{1}{2} \qquad (8.115)$$

Therefore, we find the following values for the remaining variables:

$$\gamma_{\frac{1}{4},\frac{1}{4}} = \tfrac{1}{4},\ \gamma_{\frac{1}{4},\frac{1}{2}} = 0,\ \gamma_{\frac{1}{4},\frac{3}{4}} = \tfrac{1}{16},\ \gamma_{\frac{1}{2},\frac{1}{2}} = 0,\ \gamma_{\frac{1}{2},\frac{3}{4}} = 0,\ \gamma_{\frac{1}{2},\frac{5}{4}} = 0,\ \gamma_{\frac{3}{4},\frac{3}{4}} = \tfrac{1}{48},\ \gamma_{\frac{1}{4},\frac{5}{4}} = \tfrac{1}{32},\ \gamma_{\frac{1}{4},\frac{3}{2}} = 0,\ \gamma_{\frac{1}{4},\frac{7}{4}} = \tfrac{5}{256},\ \gamma_{\frac{1}{2},\frac{3}{2}} = 0,\ \gamma_{\frac{3}{4},\frac{5}{4}} = \tfrac{3}{256},$$
$$f^{(1)}_{\frac{1}{2},\frac{3}{4}} = \tfrac{i}{4},\ f^{(1)}_{\frac{1}{2},\frac{5}{4}} = \tfrac{i}{16},\ f^{(1)}_{\frac{1}{2},\frac{3}{2}} = 0,\ f^{(1)}_{\frac{1}{2},\frac{7}{4}} = \tfrac{i}{32},\ f^{(2)}_{\frac{1}{2},\frac{3}{4}} = -\tfrac{i}{4},\ f^{(2)}_{\frac{1}{2},\frac{5}{4}} = -\tfrac{i}{16},\ f^{(2)}_{\frac{1}{2},\frac{3}{2}} = 0,\ f^{(2)}_{\frac{1}{2},\frac{7}{4}} = -\tfrac{i}{32} \qquad (8.116)$$

These values agree with the values that one obtains directly from (7.1), (7.2) and (7.3). We note that the number of inputs is 4, in agreement with equation (6.9) for $(M,N) = (2,2)$ and $N_{\text{constr}} = 2$. In the above scenario, we note also that quantities whose final state energies are multiples of $1/2$ vanish (except for $f^{(i)}_{\frac{1}{2},\frac{1}{2}}$). This is because both initial copies have winding 2 and the final copy has winding 4, suggesting that all mode numbers, both initial and final, can be rescaled by a factor of 2. After this rescaling, the initial copies have winding 1 and the final copy has winding 2. This case was already studied in [56, 57]. As shown in (3.7) and (3.9), the $\gamma$'s were argued to vanish when the mode numbers were integers, and the $f$'s were argued to vanish when the final mode numbers were integers (unless the final energy was equal to the initial energy).

9 Discussion

In this paper we extended the techniques developed in [47, 48] to bootstrap the effects of the twist operator $\sigma_2$ for multiwound initial states. The major difference between this scenario and those in [47, 48] is that the coefficients of pair creation and propagation are coupled together in the equations. This arises from the fact that $L_{-1}$ acting on singly wound initial states gives zero, whereas $L_{-1}$ acting on multiwound initial states does not, owing to the presence of fractional modes in the multiwound sectors. This significantly increased the work required to find solutions, compared with the case which considered only singly wound copies in the initial state. We investigated the scenario beginning with initial states containing two copies, where copy 1 had winding $M$ and copy 2 had winding $N$. We looked at the effects produced by twisting together initial states in these winding sectors into final states living on a copy of winding $M+N$. Using a set of Virasoro generators $L_{-1}$, $L_0$, $L_1$, $L_{n>1}$, we derived general relations for the effects of the twist operator. Because we were considering multiwound initial states, the nonvanishing of $L_{-1}$ on the multiwound vacua gave rise to relations which coupled together pair creation, $\gamma$, with propagation, $f$. To solve these coupled equations systematically, we organized them into recursion relations and constraints in the following way.
We first used the relation derived from $L_0$ to determine the dependence on the coordinate $w$. Next, we organized the $L_{-1}$ relation for pair creation and the $L_1$ relation for propagation as recursion relations. By using these two classes of equations, we can effectively determine the infinite number of coefficients associated with pair creation and propagation using only a finite number of inputs. These inputs are the low-energy propagations $f^{(1)}_{\frac{r}{M}<1,\frac{p}{M+N}<1}$ and $f^{(2)}_{\frac{r}{N}<1,\frac{p}{M+N}<1}$, which have initial and final energies less than 1. In addition, there are two types of constraints to consider. First, we have the constraint (6.7), derived from $L_{n>0}$ for pair creation from terms without any modes. Second, we have the relations (6.8), derived from $L_1$ for pair creation from terms with two modes. Each type of constraint is infinite in number, while only a finite number of inputs need to be solved for. Due to the complexity of these constraints, it was unclear whether we could use them to determine all the inputs. To investigate this, we studied explicit examples for $(M,N) = (2,1), (3,1), (2,2)$ and considered low energy constraints of the aforementioned two types. We found that the second type of constraint was always trivial. For the first type, the $L_1$ constraint was always nontrivial, while the $L_2$ constraint was only nontrivial for the cases $(M,N) = (3,1), (2,2)$. We used these constraints to reduce the number of inputs. It remains unclear whether the two types of constraints are sufficient to determine all the inputs, given that we have only considered low-energy constraints within each type. Based on the explicit examples, we conjecture that as $M$ and $N$ increase, more constraints of the first type will become nontrivial, and that the second type of constraint will always be trivial for any $M$ and $N$. Even though not all the inputs can be determined, finding the infinite number of coefficients using only a finite number of inputs is still a nontrivial step. There are potential directions for improving the constraints. One possibility is to continue considering constraints of the same type but at higher energies, which we have not explored in the examples investigated here. As we reach higher energies, there may be more nontrivial constraints. Another approach is to consider additional types of constraints. For instance, we could study terms involving four or more modes in (5.27), and likewise for other $L_n$. We could also introduce more initial modes, such as including two initial modes in (5.27). This approach would further couple the contraction with pair creation and propagation. Although it would introduce more coefficients to solve for, it would also generate more constraints. It is also possible that one may need to find relations using fractional Virasoro generators [61, 62]. In this case, one would need to know how to move these generators through the twist operator to obtain relations in the same manner we use in this paper. These approaches have the potential to determine all the required inputs. Finally, an overall primary goal to which these computations contribute is to compute the effects when an arbitrary number of twist operators are inserted. This is because in certain scenarios, particularly in the D1D5 CFT, these twist operators play a fundamental role in marginal deformations away from free theories towards supergravity theories.
By understanding these twist effects, one can better understand how physical processes studied at the free point actually map to the supergravity point. While the current work only considers a single twist, we believe these methods, along with those of [47, 48], in an appropriate setting, can be combined and adapted to understand the effects produced by multiple twists. We plan to return to this in future work.

Acknowledgements

The work of B.G. and S.D.H. is supported by ERC Grant 787320 QBH Structure." + } + ], + "Zhuoran Yang": [ + { + "url": "http://arxiv.org/abs/2012.11554v2", + "title": "Variational Transport: A Convergent Particle-Based Algorithm for Distributional Optimization", + "abstract": "We consider the optimization problem of minimizing a functional defined over a family of probability distributions, where the objective functional is assumed to possess a variational form. Such a distributional optimization problem arises widely in machine learning and statistics, with Monte-Carlo sampling, variational inference, policy optimization, and generative adversarial network as examples. For this problem, we propose a novel particle-based algorithm, dubbed as variational transport, which approximately performs Wasserstein gradient descent over the manifold of probability distributions via iteratively pushing a set of particles. Specifically, we prove that moving along the geodesic in the direction of functional gradient with respect to the second-order Wasserstein distance is equivalent to applying a pushforward mapping to a probability distribution, which can be approximated accurately by pushing a set of particles. Specifically, in each iteration of variational transport, we first solve the variational problem associated with the objective functional using the particles, whose solution yields the Wasserstein gradient direction. Then we update the current distribution by pushing each particle along the direction specified by such a solution. By characterizing both the statistical error incurred in estimating the Wasserstein gradient and the progress of the optimization algorithm, we prove that when the objective function satisfies a functional version of the Polyak-Łojasiewicz (PL) (Polyak, 1963) and smoothness conditions, variational transport converges linearly to the global minimum of the objective functional up to a certain statistical error, which decays to zero sublinearly as the number of particles goes to infinity.", + "authors": "Zhuoran Yang, Yufeng Zhang, Yongxin Chen, Zhaoran Wang", + "published": "2020-12-21", + "updated": "2024-04-01", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "math.OC", + "math.ST", + "stat.ML", + "stat.TH" + ], + "main_content": "Introduction

We study a class of optimization problems over nonparametric probability distributions, dubbed as distributional optimization, where the goal is to minimize a functional of a probability distribution. Specifically, a distributional optimization problem is given by $\min_{p\in\mathcal{P}_2(\mathcal{X})} F(p)$, where $\mathcal{P}_2(\mathcal{X})$ is the set of all probability densities supported on $\mathcal{X}\subseteq\mathbb{R}^d$ with finite second-order moments, and $F$ is
the objective functional of interest. Many machine learning problems fall into such a category. For instance, in Bayesian inference (Gelman et al., 2013), the probability distribution describes the belief based on the observations and the functional of interest is the Kullback-Leibler (KL) divergence. In distributionally robust optimization (DRO) (Rahimian and Mehrotra, 2019), the inner problem optimizes a linear functional to find the worst-case data distribution. Besides, in unsupervised learning models such as the generative adversarial network (Goodfellow et al., 2014), the objective functional captures the proximity between the generative model and the target distribution, whereas the policy optimization problem in reinforcement learning (Sutton and Barto, 2018) seeks a distribution over the state-action space that achieves the highest expected reward. All of these instances have been intensively studied separately, with algorithms proposed in parallel. Distributional optimization belongs to the general family of infinite-dimensional optimization problems (Ekeland and Turnbull, 1983; Burachik and Jeyakumar, 2005; Fattorini et al., 1999) and hence inherits many of the challenges therein. In particular, without further assumptions, it requires an infinite number of parameters to fully represent the optimization variable, namely the distribution $p$. As a result, despite the broad range of applications of distributional optimization, such infinite-dimensionality makes it significantly more challenging to solve than finite-dimensional optimization. One straightforward approach is to parameterize $p$ by a finite-dimensional parameter $\theta$ as $p_\theta$ and reduce the problem to solving $\min_{\theta\in\Theta} F(p_\theta)$, where $\Theta$ is the parameter space. Such a method is common practice in variational inference (Gershman and Blei, 2012; Kingma and Welling, 2019), policy optimization (Sutton et al., 2000; Schulman et al., 2015; Haarnoja et al., 2018), and GANs (Goodfellow et al., 2014; Arjovsky et al., 2017), and has achieved tremendous empirical successes. However, this approach suffers from the following three drawbacks. First, the validity of this approach hinges on the fact that the parameterized model $\{p_\theta : \theta\in\Theta\}$ has sufficient representation power such that it well approximates the global minimizer of $F$ over $\mathcal{P}_2(\mathcal{X})$. Otherwise, the finite-dimensional parameterization introduces a considerable approximation bias, such that the global minimizer of $\min_{\theta\in\Theta} F(p_\theta)$ is suboptimal for $F$. Second, the new objective function $F(p_\theta)$ is generally nonconvex in $\theta$, which oftentimes results in undesirable empirical performance and a lack of theoretical guarantees. Finally, in many applications of distributional optimization, instead of finding the optimal probability distribution itself, the goal is to draw samples from it, which is, e.g., the case for Bayesian inference with the goal of sampling from the posterior. In this case, there is a tension between optimization and sampling. Specifically, finite-dimensional parametrizations of probability distributions are commonly implemented via either the Boltzmann distribution or the reparameterization trick (Kingma and Welling, 2013). In the former, $p_\theta$ is written as $p_\theta(\cdot) = \exp[-f_\theta(\cdot)]/Z_\theta$ for some energy function $f_\theta$, where $Z_\theta = \int_{x\in\mathcal{X}} \exp[-f_\theta(x)]\,dx$ is the normalization factor.
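The contrast developed in the next paragraph can be made concrete with a small illustrative snippet of ours (the energy function $f_\theta$ and the map $g_\theta$ below are hypothetical choices, not from the paper): with the reparameterization trick sampling is trivial but the density is implicit, while in the Boltzmann form the density is explicit up to $Z_\theta$ but sampling requires e.g. rejection sampling or MCMC.

```python
import numpy as np

rng = np.random.default_rng(0)

# Reparameterization trick: push noise through a differentiable map g_theta;
# sampling is one line, but the induced density p_theta is only implicit.
theta = np.array([1.0, 0.5])                 # hypothetical (mu, sigma)
z = rng.standard_normal(1000)
reparam_samples = theta[0] + theta[1] * z    # g_theta(z) = mu + sigma * z

# Boltzmann form: p_theta(x) proportional to exp(-f_theta(x)) is explicit,
# but sampling needs extra machinery; here a crude rejection sampler.
f_theta = lambda x: (x - 1.0) ** 2           # hypothetical energy, f >= 0
xs = rng.uniform(-4.0, 6.0, size=20000)      # uniform proposal on a box
accept = rng.uniform(size=xs.size) < np.exp(-f_theta(xs))   # exp(-f) <= 1
boltzmann_samples = xs[accept]
```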
Although the density $p_\theta$ has a closed form, which makes the optimization problem convenient, due to the integral in $Z_\theta$ it is often challenging to draw samples from $p_\theta$. In contrast, when using the reparameterization trick, sampling from $p_\theta$ is conveniently achieved by sending a random noise to a differentiable mapping with parameter $\theta$. Nevertheless, since the density $p_\theta$ is implicitly defined, optimizing $F(p_\theta)$ can be challenging, especially when the closed form of $p_\theta$ is required. To alleviate these obstacles, we propose to directly solve the infinite-dimensional distributional optimization problem, utilizing the additional constraint that the decision variable $p$ is a probability distribution. In particular, utilizing the optimal transport (OT) framework (Otto, 2001; Villani, 2003, 2008), it is shown that $P_2(X)$ is a geodesic space where the second-order Wasserstein distance plays the role of the geodesic distance. From such a perspective, in many applications of distributional optimization, the objective functional $F$ is a geodesically convex functional on $P_2(X)$. Thus, we propose to directly optimize the functional $F$ on $P_2(X)$ by functional gradient descent with respect to the geodesic distance (Zhang and Sra, 2016), which constructs a sequence of iterates $\{p_t\}_{t\ge 1}$ in $P_2(X)$ satisfying $p_{t+1} \leftarrow \mathrm{Exp}_{p_t}[-\alpha_t \cdot \operatorname{grad} F(p_t)]$, (1.1) where $\alpha_t$ is the stepsize, $\operatorname{grad} F$ is the functional gradient with respect to the Wasserstein distance, also known as the Wasserstein gradient, and $\mathrm{Exp}_p$ denotes the exponential mapping on $P_2(X)$, which specifies how to move along any given direction on $P_2(X)$. To implement the update in (1.1), we need to (i) obtain the Wasserstein gradient $\operatorname{grad} F(p_t)$ and (ii) calculate the exponential mapping. Moreover, to make it useful for applications, it is desirable to (iii) be able to draw samples from each $p_t$ and (iv) allow a sufficiently general class of objective functionals. To achieve these goals, we focus on a class of functionals that admit a variational form $F(p) = \sup_{f\in\mathcal{F}}\{\mathbb{E}_p[f(X)] - F^*(f)\}$, where $\mathcal{F}$ is a function class on $X$ and $F^*\colon \mathcal{F} \to \mathbb{R}$ is a strongly convex functional. Such a variational form generalizes beyond convex functionals, where $F^*$ corresponds to the convex conjugate when $F$ is convex. A key feature of the variational objective is that it enables us to implement the Wasserstein gradient update in (1.1) using data sampled from $p_t$. In particular, the Wasserstein gradient $\operatorname{grad} F(p_t)$ can be calculated from the solution $f^*_{p_t}$ to the maximization problem for obtaining $F(p_t)$, which can be estimated using data. Meanwhile, as we will show in §3, the exponential mapping in (1.1) is equivalent to a pushforward mapping induced by $f^*_{p_t}$, which can be efficiently obtained once $f^*_{p_t}$ is estimated. Moreover, to efficiently sample from $p_t$, we approximate it using the empirical measure of a set of particles, which leads to our proposed algorithm, named variational transport, as it utilizes the optimal transport framework and a variational representation of the objective. Specifically, variational transport maintains a set of particles and outputs their empirical measure as the solution to the distributional optimization problem.
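As a concrete numerical illustration of the variational form above (a minimal sketch of ours, not taken from the paper), consider a finite support and the quadratic choice $F^*(f) = \frac{1}{2}\mathbb{E}_{X\sim q}[f(X)^2]$: the inner supremum is attained at $f^* = p/q$, and the resulting value $F(p) = \frac{1}{2}\sum_x p(x)^2/q(x)$ can be checked directly.

import numpy as np

# Minimal sketch (ours): verify F(p) = sup_f { E_p[f] - F*(f) } on a finite
# support, with the quadratic choice F*(f) = 0.5 * E_q[f^2].  The supremum
# is attained at f* = p/q, giving F(p) = 0.5 * sum(p^2 / q).
rng = np.random.default_rng(0)
n = 5
p = rng.random(n); p /= p.sum()          # target distribution
q = rng.random(n) + 0.1; q /= q.sum()    # reference distribution

def objective(f):
    return np.dot(f, p) - 0.5 * np.dot(f**2, q)

f_star = p / q                            # maximizer of the inner problem
closed_form = 0.5 * np.sum(p**2 / q)
assert np.isclose(objective(f_star), closed_form)

# Any perturbation of f* can only decrease the objective.
for _ in range(100):
    f = f_star + 0.5 * rng.standard_normal(n)
    assert objective(f) <= closed_form + 1e-12
print('variational form verified:', closed_form)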
In each iteration, variational transport approximates the update in (1.1) by first solving the dual maximization problem associated with the variational form of the objective and then using the obtained solution to specify a direction to push each particle. The variational transport algorithm can be viewed as a forward discretization of the Wasserstein gradient flow (Santambrogio, 2017) with particle approximation and gradient estimation. Compared with existing methods, variational transport features a unified algorithmic framework that enjoys the following advantages. First, by considering functionals with a variational form, the algorithm can be applied to a broad class of objective functionals. Second, the functional optimization problem associated with the variational representation of $F$ can be solved by any supervised learning method, such as deep learning (LeCun et al., 2015; Goodfellow et al., 2016; Fan et al., 2019) and kernel methods (Friedman et al., 2001; Shawe-Taylor et al., 2004), which offers additional flexibility in algorithm design. Finally, by considering nonparametric probability distributions, variational transport does not suffer from the approximation bias incurred by finite-dimensional parameterization of the probability distribution, and the particle approximation enables convenient sampling from the obtained probability measure. To showcase these advantages, we consider an instantiation of variational transport where the objective functional $F$ satisfies the Polyak-Łojasiewicz (PL) condition (Polyak, 1963) with respect to the Wasserstein distance and the variational problem associated with $F$ is solved via kernel methods. In this case, we prove that variational transport generates a sequence of probability distributions that converges linearly to a global minimizer of $F$ up to some statistical error. Here the statistical error is incurred in estimating the Wasserstein gradient by solving the dual maximization problem using functions in a reproducing kernel Hilbert space (RKHS) with finite data, and it converges sublinearly to zero as the number of particles goes to infinity. Therefore, in this scenario, variational transport provably enjoys both computational efficiency and global optimality. Our Contribution. Our contribution is twofold. First, utilizing the optimal transport framework and the variational form of the objective functional, we propose a novel variational transport algorithmic framework for solving the distributional optimization problem via particle approximation. In each iteration, variational transport first solves the variational problem associated with the objective to obtain an estimator of the Wasserstein gradient and then approximately implements Wasserstein gradient descent by pushing the particles. Second, when the Wasserstein gradient is approximated using RKHS functions and the objective functional satisfies the PL condition, we prove that the sequence of probability distributions constructed by variational transport converges linearly to the global minimum of the objective functional, up to a statistical error that converges to zero as the number of particles goes to infinity. To the best of our knowledge, this appears to be the first particle-based algorithm for general distributional optimization problems with both global convergence and global optimality guarantees. Related Works.
There is a large body of literature on manifold optimization, where the goal is to minimize a functional defined on a Riemannian manifold. See, e.g., Udriste (1994); Ferreira and Oliveira (2002); Absil et al. (2009); Ring and Wirth (2012); Bonnabel (2013); Zhang and Sra (2016); Zhang et al. (2016); Liu et al. (2017); Agarwal et al. (2018); Zhang et al. (2018); Tripuraneni et al. (2018); Boumal et al. (2018); Bécigneul and Ganea (2018); Zhang and Sra (2018); Sato et al. (2019); Zhou et al. (2019); Weber and Sra (2019) and the references therein. Also see the recent reviews (Ferreira et al., 2020; Hosseini and Sra, 2020) for a summary. These works all focus on finite-dimensional manifolds, where each point in the feasible set has a neighborhood that is homeomorphic to the Euclidean space. In contrast, the feasible set of distributional optimization is the Wasserstein space on a subset $X$ of $\mathbb{R}^d$, which is an infinite-dimensional manifold. As a result, unlike finite-dimensional manifold optimization, on the Wasserstein space both the Riemannian gradient of the objective functional and the exponential mapping cannot be easily obtained, which makes it infeasible to directly apply manifold optimization methods. Moreover, our work is closely related to the vast literature on Bayesian inference. Our work is particularly related to the line of research on gradient-based MCMC, a family of particle-based sampling algorithms that approximate a diffusion process whose stationary distribution is the target distribution. The finite-time convergence of gradient-based MCMC has been extensively studied. See, e.g., Welling and Teh (2011); Chen et al. (2014); Ma et al. (2015); Chen et al. (2015); Dubey et al. (2016); Vollmer et al. (2016); Chen et al. (2016); Dalalyan (2017); Chen et al. (2017); Raginsky et al. (2017); Brosse et al. (2018); Xu et al. (2018); Cheng and Bartlett (2018); Chatterji et al. (2018); Wibisono (2018); Bernton (2018); Dalalyan and Karagulyan (2019); Baker et al. (2019); Ma et al. (2019a,b); Mou et al. (2019); Vempala and Wibisono (2019); Salim et al. (2019); Durmus et al. (2019); Wibisono (2019) and the references therein. Among these works, our work is more related to Wibisono (2018); Bernton (2018); Ma et al. (2019a,b); Cheng and Bartlett (2018); Vempala and Wibisono (2019); Wibisono (2019); Salim et al. (2019), which establish the finite-time convergence of gradient-based MCMC methods in terms of the KL divergence. These works utilize the property that the diffusion process associated with Langevin dynamics in $X$ corresponds to the Wasserstein gradient flow of the KL divergence in $P_2(X)$ (Jordan et al., 1998), and the methods proposed in these works apply various time-discretization techniques. Besides, Frogner and Poggio (2020) recently apply the Wasserstein flow of the KL divergence to the inference of diffusion processes. In addition to gradient-based MCMC, variational transport also shares similarity with Stein variational gradient descent (SVGD) (Liu and Wang, 2016), which is a more recent particle-based algorithm for Bayesian inference. Variants of SVGD have been subsequently proposed. See, e.g., Detommaso et al. (2018); Han and Liu (2018); Chen et al. (2018); Liu et al. (2019); Gong et al. (2019); Wang et al. (2019); Zhang et al. (2020); Ye et al. (2020) and the references therein.
Departing from MCMC, where independent stochastic particles are used, SVGD leverages interacting deterministic particles to approximate the probability measure of interest. In the mean-field limit, where the number of particles goes to infinity, it can be viewed as the gradient flow of the KL divergence with respect to a modified Wasserstein metric (Liu, 2017). Utilizing the gradient flow interpretation, the convergence of SVGD has been established in the mean-field limit (Liu, 2017; Duncan et al., 2019; Korba et al., 2020; Chewi et al., 2020). Meanwhile, it is worth noting that Chen et al. (2018); Liu et al. (2019) build the connection between MCMC and SVGD through the lens of the Wasserstein gradient flow of the KL divergence. Compared to MCMC and SVGD, variational transport approximates Wasserstein gradient descent using particles, which corresponds to a forward discretization of the Wasserstein gradient flow. Moreover, utilizing the variational representation of the objective functional, variational transport can be applied to functionals beyond the family of f-divergences. Furthermore, our work is also related to Arbel et al. (2019), which studies the convergence of the Wasserstein gradient flow of the maximum mean discrepancy (MMD) and its discretization. Since MMD is an integral probability metric and thus admits a variational representation, variational transport can also be utilized to minimize MMD. Moreover, when restricted to MMD minimization, our algorithm is similar to the sample-based approximation method proposed in Arbel et al. (2019). Besides, Futami et al. (2019) propose a particle-based algorithm that minimizes MMD using the Frank-Wolfe method. Another related work on Bayesian inference is Dai et al. (2016), which proposes a distributional optimization algorithm named particle mirror descent (PMD). Specifically, PMD performs infinite-dimensional mirror descent on the probability density function using functional gradients of the KL divergence. In contrast to our work, their functional gradient is with respect to the functional $\ell_1$–$\ell_\infty$ structure, whereas variational transport utilizes the Wasserstein geometry. Moreover, for computational tractability, PMD maintains a set of particles and directly estimates the density function via kernel density estimation in each iteration. In comparison, variational transport does not require the density functions of the iterates. Instead, we use the empirical distribution of the particles to approximate the probability measure, and the iterates are updated by pushing the particles in directions specified by the solution to a variational problem. Finally, there exists a body of literature on general distributional optimization problems. Gaivoronski (1986); Molchanov and Zuyev (2001, 2002, 2004); Pieper and Walter (2019) study the Frank-Wolfe and steepest descent algorithms on the space of distribution measures. The methods proposed in these works utilize functional gradients that might be inefficient to compute in machine learning problems. Besides, motivated by the idea of representing probability measures using their moments (Lasserre, 2010), for distributional optimization with a linear objective functional that is given by a polynomial function on $X$, Lasserre (2001, 2009, 2008); Henrion et al. (2009); Jasour et al. (2018) propose convex relaxation methods through the sum-of-squares techniques (Parrilo, 2000; Lasserre, 2010).
These approaches require solving large-scale semidefinite programming problems to the global optima and thus are computationally challenging. A more related work is Chu et al. (2019), which casts various machine learning algorithms as functional gradient methods for distributional optimization, where the gradient is not necessarily with respect to the Wasserstein distance. In comparison, we study a similar distributional optimization framework that comprises these interesting machine learning problems. Moreover, utilizing the Wasserstein gradient, we propose a novel algorithm that provably finds the global optimum with computational efficiency, which complements the results in Chu et al. (2019). Notation. Throughout this paper, we let $\mathbb{R}^d$, $\mathbb{Z}^d$, $\mathbb{T}^d$ denote the d-dimensional Euclidean space, integer lattice, and torus, respectively. For $X$ being a compact subset of $\mathbb{R}^d$, let $P(X)$ and $P_2(X)$ denote the set of probability distributions over $X$ and the set of probability density functions on $X$ with finite second-order moments, respectively. Here the density functions are with respect to the Lebesgue measure on $X$. For any $p \in P_2(X)$, we identify the density function with the probability measure $p(x)\,dx$ that is induced by $p$. For any $p, q \in P(X)$, let $\mathrm{KL}(p, q)$ denote the Kullback-Leibler divergence between $p$ and $q$. For any mapping $T\colon X \to X$ and any $\mu \in P(X)$, let $T_\sharp\mu$ denote the pushforward measure induced by $T$. Meanwhile, let $\mathrm{Trace}$, $\mathrm{div}$, and $\langle\cdot,\cdot\rangle$ denote the trace operator, divergence operator, and inner product on $\mathbb{R}^d$, respectively. For $f\colon X \to \mathbb{R}$ being a differentiable function, we let $\nabla f$ and $\nabla^2 f$ denote the gradient and Hessian of $f$, respectively. Furthermore, let $\mathcal{F}$ denote a class of functions on $X$ and let $L\colon \mathcal{F} \to \mathbb{R}$ be a functional on $\mathcal{F}$. We use $DL$, $D^2L$, and $D^3L$ to denote the first, second, and third order Fréchet derivatives of $L$, respectively. Besides, throughout this work, we let $\mathcal{H}$ denote an RKHS defined on $X$, and let $\langle\cdot,\cdot\rangle_{\mathcal{H}}$ and $\|\cdot\|_{\mathcal{H}}$ denote the inner product on $\mathcal{H}$ and the RKHS norm, respectively. Finally, for any Riemannian manifold $M$ with a Riemannian metric $g$, we let $T_pM$ denote the tangent space at a point $p \in M$ and let $\langle\cdot,\cdot\rangle_p$ denote the inner product on $T_pM$ induced by $g$. For any functional $F\colon M \to \mathbb{R}$, we let $\operatorname{grad} F$ denote the functional gradient of $F$ with respect to the Riemannian metric $g$. 2 Background To study optimization problems on the space of probability measures, we first introduce the background knowledge of the Riemannian manifold and the Wasserstein space. In addition, to analyze the statistical estimation problem that arises in estimating the Wasserstein gradient, we introduce the reproducing kernel Hilbert space. 2.1 Metric Space and Riemannian Manifold A metric space $(X, \|\cdot\|)$ consists of a set $X$ and a distance function $\|\cdot\|$ (Burago et al., 2001). Given a continuous curve $\gamma\colon [0,1] \to X$, the length of $\gamma$ is defined as $L(\gamma) = \sup \sum_{i=1}^n \|\gamma(t_{i-1}) - \gamma(t_i)\|$, where the supremum is taken over $n \ge 1$ and all partitions $0 = t_0 < t_1 < \ldots < t_n = 1$ of $[0,1]$. It then holds for any curve $\gamma$ that $L(\gamma) \ge \|\gamma(0) - \gamma(1)\|$. If there exists a constant $v \ge 0$ such that $\|\gamma(t_1) - \gamma(t_2)\| = v \cdot |t_1 - t_2|$ for any $t_1, t_2 \in [0,1]$, the curve $\gamma$ is called a geodesic.
In this case, for any $0 \le t_1 < t_2 \le 1$, the length of $\gamma$ restricted to $[t_1, t_2]$ is equal to $\|\gamma(t_1) - \gamma(t_2)\|$. Thus, a geodesic is locally a distance minimizer everywhere. Moreover, $(X, \|\cdot\|)$ is called a geodesic space if any two points $x, y \in X$ are connected by a geodesic $\gamma$ such that $\gamma(0) = x$ and $\gamma(1) = y$. A d-dimensional differential manifold $M$ is a topological space that is locally homeomorphic to the Euclidean space $\mathbb{R}^d$ with a globally defined differential structure (Chern et al., 1999). A tangent vector at $x \in M$ is an equivalence class of differentiable curves going through $x$ with a prescribed velocity vector at $x$. The tangent space at $x$, denoted by $T_xM$, consists of all tangent vectors at $x$. To compare two tangent vectors in a meaningful way, we consider a Riemannian manifold $(M, g)$, which is a smooth manifold equipped with an inner product $g_x$ on the tangent space $T_xM$ for any $x \in M$ (do Carmo, 1992; Petersen et al., 2006). The inner product of any two tangent vectors $u_1, u_2 \in T_xM$ is defined as $\langle u_1, u_2\rangle_x = g_x(u_1, u_2)$. Besides, we call $g$ the Riemannian metric. On a Riemannian manifold, the length of a smooth curve $\gamma\colon [0,1] \to M$ is defined as $L(\gamma) = \int_0^1 \sqrt{\langle \gamma'(t), \gamma'(t)\rangle_{\gamma(t)}}\, dt$. (2.1) The distance between any two points $x, y \in M$ is defined as $\|x - y\| = \inf_\gamma L(\gamma)$, where the infimum is taken over all smooth curves such that $\gamma(0) = x$ and $\gamma(1) = y$. Equipped with such a distance, $(M, g)$ is a metric space and is further a geodesic space if it is complete and connected (Burago et al., 2001, Hopf–Rinow theorem). Hereafter, we assume $(M, g)$ is a geodesic space. Another important concept is the exponential map, which specifies how to move a point along a tangent vector. Specifically, the exponential mapping at $x$, denoted by $\mathrm{Exp}_x\colon T_xM \to M$, sends any tangent vector $u \in T_xM$ to $y = \gamma_u(1) \in M$, where $\gamma_u\colon [0,1] \to M$ is the unique geodesic determined by $\gamma_u(0) = x$ and $\gamma_u'(0) = u$. Moreover, since $\gamma_u$ is the unique geodesic connecting $x$ and $y$, the exponential mapping is invertible and we have $u = \mathrm{Exp}_x^{-1}(y)$. The distance between $x$ and $y$ satisfies $\|x - y\| = [\langle \mathrm{Exp}_x^{-1}(y), \mathrm{Exp}_x^{-1}(y)\rangle_x]^{1/2}$, which is also called the geodesic distance. Note that tangent vectors of two different points lie in distinct tangent spaces. Thus, to compare these tangent vectors, for any two points $x, y \in M$, we define the parallel transport $\Gamma_x^y\colon T_xM \to T_yM$, which specifies how a tangent vector of $x$ is uniquely identified with an element in $T_yM$. Moreover,
With the notions of the gradient and the geodesic distance \u2225\u00b7 \u2225, we are able to de\ufb01ne convex functionals on M. Speci\ufb01cally, a functional f is called geodesically \u00b5-strongly convex if f(y) \u2265f(x) + grad f(x), Exp\u22121 x (y) \u000b x + \u00b5/2 \u00b7 \u2225x \u2212y\u22252, (2.2) and we say f is a geodesically convex functional when \u00b5 = 0. 2.2 Wasserstein Space Let P(X) denote the set of all Borel probability measures on the measurable space (X, B(X)), where X is a d-dimensional Riemannian manifold without boundary and B(X) is the Borel \u03c3algebra on X. For instance, X can be a convex compact region in Rd with zero \ufb02ux or periodic boundary conditions. For any \u00b5, \u03bd \u2208P(X), we let \u03a0(\u00b5, \u03bd) denote the set of all couplings of \u00b5 and \u03bd, i.e., \u03a0(\u00b5, \u03bd) consists of all probability measures on X \u00d7 X whose two marginal distributions are equal to \u00b5 and \u03bd, respectively. Kantorovich\u2019s formulation of the optimal transport problem aims to \ufb01nd a coupling \u03c0 of \u00b5 and \u03bd such that R X\u00d7X \u2225x \u2212y\u22252 d\u03c0(x, y) is minimized, where \u2225\u00b7 \u2225is the geodesic distance on X. It is shown that there exists a unique minimizer which is called the optimal transport plan (Villani, 2008). Moreover, the minimal value W2(\u00b5, \u03bd) = \u0014 inf \u03c0\u2208\u03a0(\u00b5,\u03bd) Z X\u00d7X \u2225x \u2212y\u22252 d\u03c0(x, y) \u00151/2 (2.3) de\ufb01nes a distance on P(X), which is known as the second-order Wasserstein distance. To study the optimization problem on the space of probability measures, in the following, we focus on a subset of probability distributions P2(X), which is de\ufb01ned as P2(X) = n p: X \u2192[0, \u221e): Z X \u2225x \u2212x0\u22252 \u00b7 p(x) dx < \u221e, Z X p(x) dx = 1 o , (2.4) where x0 is any \ufb01xed point in X and we let dx denote the Lebesgue measure on X. Then P2(X) consists of probability density functions on X with \ufb01nite second-order moments. It is known that (P2(X), W2) is an in\ufb01nite-dimensional geodesic space (Villani, 2008), which is called the Wasserstein space. Speci\ufb01cally, any curve on P2(X) can be written as \u03c1: [0, 1] \u00d7 X \u2192[0, \u221e), where \u03c1(t, \u00b7) \u2208 P2(X) for all t \u2208[0, 1]. A tangent vector at p \u2208P2(X) can be written as \u2202\u03c1/\u2202t for some curve \u03c1 such that \u03c1(0, \u00b7) = p(\u00b7). Besides, under certain regularity conditions, the elliptical equation \u2212div(\u03c1\u00b7 \u2207u) = \u2202\u03c1/\u2202t admits a unique solution u: X \u2192R (Denny, 2010; Gilbarg and Trudinger, 2015), where div is the divergence operator on X. Notice that X is a d-dimentional Riemannian manifold. Here we let \u2207u denote the Riemannian gradient of function u, grad u to simplify the notation, which is introduced in \u00a72.1. For any x \u2208X, \u2207u(x) \u2208TxX, which can be identi\ufb01ed as a vector 8 \fin Rd. Thus, the tangent vector \u2202\u03c1/\u2202t is uniquely identi\ufb01ed with the vector-valued mapping \u2207u. Moreover, such a construction endows the in\ufb01nite-dimensional manifold P2(X) with a Riemannian metric (Otto, 2001; Villani, 2003). Speci\ufb01cally, for any s1, s2 \u2208TpP2(X), let u1, u2 : X \u2192R be the solutions to elliptic equations \u2212div(p \u00b7 \u2207u1) = s1 and \u2212div(p \u00b7 \u2207u2) = s2, respectively. 
The Riemannian metric, which is an inner product structure, between $s_1$ and $s_2$ is defined as $\langle s_1, s_2\rangle_p = \int_X \langle \nabla u_1(x), \nabla u_2(x)\rangle_x \cdot p(x)\, dx$, (2.5) where $\langle \nabla u_1(x), \nabla u_2(x)\rangle_x$ is the standard inner product in $T_xX$, which is homeomorphic to the inner product of $\mathbb{R}^d$. Equipped with such a Riemannian metric, $(P_2(X), W_2)$ is a geodesic space and the Wasserstein distance defined in (2.3) can be written as $W_2(\mu, \nu) = [\inf_{\gamma\colon [0,1]\to P_2(X)} \int_0^1 \langle \gamma'(t), \gamma'(t)\rangle_{\gamma(t)}\, dt]^{1/2}$, (2.6) where the infimum is taken over all curves on $P_2(X)$ such that $\gamma(0) = \mu$ and $\gamma(1) = \nu$, and $\langle\cdot,\cdot\rangle_{\gamma(t)}$ in (2.6) is the inner product on the Riemannian manifold defined in (2.5). The infimum in (2.6) is attained by the geodesic joining $\mu$ and $\nu$. Here we slightly abuse the notation by letting $\mu$ and $\nu$ denote the probability measures as well as their densities. In the geodesic space $(P_2(X), W_2)$, we similarly define the exponential mapping and the parallel transport. Furthermore, thanks to the Riemannian metric, we can define the gradient and convexity for functionals on $P_2(X)$ in the same way as in §2.1, with the geodesic distance $\|\cdot\|$ in (2.2) being the Wasserstein distance $W_2$. In the sequel, for any functional $F\colon P_2(X) \to \mathbb{R}$, we let $\operatorname{grad} F$ denote the gradient of $F$ with respect to $W_2$. Such a Riemannian gradient is also known as the Wasserstein gradient. 2.3 Reproducing Kernel Hilbert Space Another important tool we need in the following analysis is the reproducing kernel Hilbert space (RKHS). Let $(\mathcal{H}, \langle\cdot,\cdot\rangle_{\mathcal{H}})$ be a Hilbert space, where $\mathcal{H}$ is a function class on $X$ and $\langle\cdot,\cdot\rangle_{\mathcal{H}}\colon \mathcal{H}\times\mathcal{H} \to \mathbb{R}$ is the inner product structure on $\mathcal{H}$. This Hilbert space is an RKHS if and only if there exists a feature mapping $K\colon X \to \mathcal{H}$ such that $f(x) = \langle f, K_x\rangle_{\mathcal{H}}$ for all $f \in \mathcal{H}$ and all $x \in X$, where $K$ maps each point $x \in X$ to $K_x \in \mathcal{H}$. Moreover, this enables us to define a kernel function $K\colon X \times X \to \mathbb{R}$ by letting $K(x, y) = \langle K_x, K_y\rangle_{\mathcal{H}}$ for any $x, y \in X$, and the feature mapping can be written as $K_x = K(x, \cdot)$. In addition, an equivalent definition of an RKHS is as follows. A Hilbert space $(\mathcal{H}, \langle\cdot,\cdot\rangle_{\mathcal{H}})$ is an RKHS if and only if for any $x \in X$, there exists a constant $M_x > 0$ such that $|f(x)| \le M_x \cdot \|f\|_{\mathcal{H}}$ for all $f \in \mathcal{H}$. This definition of an RKHS does not require an explicit form of the kernel function $K(\cdot,\cdot)$ and is hence relatively easier to verify in practice. Given an RKHS $\mathcal{H}$ with a bounded kernel $K(\cdot,\cdot)$, let us recall some basic results on the connection between $\mathcal{H}$ and $L^2_\nu(X)$, which is the space of square-integrable functions with respect to a measure $\nu \in P(X)$. In what follows, we assume that $\nu$ has a positive density function. As shown in Theorem 4.27 in Steinwart and Christmann (2008), the embedding operator $I\colon \mathcal{H} \to L^2_\nu(X)$ is well-defined, Hilbert-Schmidt, and bounded, and its operator norm satisfies $\|I\|_{\mathrm{op}} = [\int_X K(x, x)\, d\nu(x)]^{1/2} < \infty$. (2.7) For notational simplicity, we write $[f]_\nu \in L^2_\nu(X)$ for the image of any $f \in \mathcal{H}$ under $I$. Moreover, the adjoint operator $S = I^*\colon L^2_\nu(X) \to \mathcal{H}$ satisfies $Sf = \int_X K_x \cdot f(x)\, d\nu(x)$ for any $f \in L^2_\nu(X)$.
In other words, for any $x \in X$ and any $f \in L^2_\nu(X)$, we have $(Sf)(x) = \langle Sf, K_x\rangle_{\mathcal{H}} = \int_X \langle K_x, K_{x'}\rangle_{\mathcal{H}} \cdot f(x')\, d\nu(x') = \int_X K(x, x') \cdot f(x')\, d\nu(x')$. We further define the composition operators $C = S \circ I$ and $T = I \circ S$, which by definition are self-adjoint and positive semi-definite integral operators on $\mathcal{H}$ and $L^2_\nu(X)$, respectively. Moreover, by definition, for any $f \in \mathcal{H}$ and any $h \in L^2_\nu(X)$, we have $Cf = \int_X K_x \cdot [f]_\nu(x)\, d\nu(x)$ and $Th(x) = \int_X K(x, x') \cdot h(x')\, d\nu(x')$. (2.8) The operator $T\colon L^2_\nu(X) \to L^2_\nu(X)$ is also known as the integral operator induced by the kernel $K$. By definition, for any $f, g \in \mathcal{H}$, it holds that $\langle Cf, g\rangle_{\mathcal{H}} = \int_X \langle K_x, g\rangle_{\mathcal{H}} \cdot [f]_\nu(x)\, d\nu(x) = \langle [f]_\nu, [g]_\nu\rangle_\nu$, (2.9) where $\langle\cdot,\cdot\rangle_\nu$ is the inner product of $L^2_\nu(X)$, and $[f]_\nu$ and $[g]_\nu$ are the images of $f$ and $g$ under $I$, respectively. Note that $I$ is injective, since $\nu$ has a positive density function. By Mercer's Theorem (Steinwart and Scovel, 2012), when the embedding $I$ is injective, the integral operator $T$ has countable and positive eigenvalues $\{\mu_i\}_{i\ge 1}$ and corresponding eigenfunctions $\{\psi_i\}_{i\ge 1}$, which form an orthogonal system of $L^2_\nu(X)$. Moreover, the RKHS $\mathcal{H}$ can be written as a subset of $L^2_\nu(X)$ as $\mathcal{H} = \{f \in L^2_\nu(X): \sum_{i=1}^\infty \langle f, \psi_i\rangle_\nu^2 / \mu_i < \infty\}$, which is equipped with an RKHS inner product $\langle\cdot,\cdot\rangle_{\mathcal{H}}$ defined by $\langle f, g\rangle_{\mathcal{H}} = \sum_{i=1}^\infty 1/\mu_i \cdot \langle f, \psi_i\rangle_\nu \cdot \langle g, \psi_i\rangle_\nu$. By such a construction, the scaled eigenfunctions $\{\sqrt{\mu_i}\psi_i\}_{i\ge 1}$ form an orthogonal system of the RKHS $\mathcal{H}$, and the feature mapping $K_x \in \mathcal{H}$ is given by $K_x = \sum_{i=1}^\infty \mu_i \psi_i \cdot \psi_i(x)$ for any $x \in X$. Thus, the integral operator $C$ admits the following spectral decomposition: $Cf = \int_X K_x \cdot f(x)\, d\nu(x) = \sum_{i=1}^\infty \mu_i \langle f, \psi_i\rangle_\nu \cdot \psi_i = \sum_{i=1}^\infty \mu_i \langle f, \sqrt{\mu_i}\psi_i\rangle_{\mathcal{H}} \cdot \sqrt{\mu_i}\psi_i$ for all $f \in \mathcal{H}$. Finally, the spaces $\mathcal{H}$ and $L^2_\nu(X)$ are unified via the notion of the α-power space, which is defined by $\mathcal{H}^\alpha := \{\sum_{i=1}^\infty u_i \mu_i^{\alpha/2} \psi_i : \sum_{i=1}^\infty u_i^2 < \infty\} = \{f \in L^2_\nu(X): \sum_{i=1}^\infty \langle f, \psi_i\rangle_\nu^2 / \mu_i^\alpha < \infty\}$ for $\alpha \ge 0$, (2.10) and is equipped with an inner product $\langle f, g\rangle_{\mathcal{H}^\alpha} = \sum_{i=1}^\infty \mu_i^{-\alpha} \cdot \langle f, \psi_i\rangle_\nu \cdot \langle g, \psi_i\rangle_\nu$. By definition, $\mathcal{H}^\alpha$ is a Hilbert space for any $\alpha \ge 0$. Two special cases are $\alpha = 0$ and $\alpha = 1$, where $\mathcal{H}^\alpha$ corresponds to $L^2_\nu(X)$ and $\mathcal{H}$, respectively. Intuitively, as $\alpha$ increases from zero, the α-power spaces specify a cascade of more and more restricted function classes. 3 The Variational Transport Algorithm Let $X$ be a compact d-dimensional Riemannian manifold without boundary and let $(P_2(X), W_2)$ be the Wasserstein space on $X$, where $P_2(X) \subseteq P(X)$ is defined in (2.4) and $W_2$ is the Wasserstein distance defined in (2.3). Note that each element in $P_2(X)$ is a probability density over $X$.
For intuitive understanding, $X$ can be regarded as a compact subset of the Euclidean space $\mathbb{R}^d$ with a periodic boundary condition, e.g., the torus $\mathbb{T}^d$. Besides, to differentiate between functionals on $X$ and on $P_2(X)$, we refer to functionals on $X$ as functions hereafter. Let $F\colon P_2(X) \to \mathbb{R}$ be a differentiable functional on $P_2(X)$ with a variational form, $F(p) = \sup_{f\in\mathcal{F}} \{\int_X f(x) \cdot p(x)\, dx - F^*(f)\}$, (3.1) where $\mathcal{F}$ is a class of square-integrable functions on $X$ with respect to the Lebesgue measure, and $F^*\colon \mathcal{F} \to \mathbb{R}$ is a strongly convex functional. Such a variational representation generalizes convex conjugacy to $P_2(X)$. In the sequel, we consider the following distributional optimization problem, $\mathrm{minimize}_{p\in P_2(X)}\ F(p)$ (3.2) over the Wasserstein space $P_2(X)$. Such an optimization problem incorporates many prominent examples in statistics and machine learning, which are listed as follows. 3.1 Examples of Distributional Optimization Example 3.1 (Nonconvex Optimization). Let $g \in \mathcal{F}$ be a possibly nonconvex function on $X$. For the problem of minimizing $g$ over $X$, we consider the following variational reformulation known as the lifting trick, $\min_{x\in X} g(x) \Longrightarrow \min_{p\in P_2(X)} F_g(p) = \int_X g(x) \cdot p(x)\, dx$, (3.3) which bridges a d-dimensional, possibly nonconvex optimization problem over $X$ and an infinite-dimensional linear optimization problem over $P_2(X)$. Moreover, when the set of global minima of $g$ is discrete, for any global minimum $x^\star$ of $g$, the Dirac δ-measure $\delta_{x^\star}$ at $x^\star$ is a global optimum of the distributional optimization problem in (3.3). To see that the linear functional $F_g$ admits a variational form as in (3.1), we define $F_g^*\colon \mathcal{F} \to \mathbb{R}$ by letting $F_g^*(g) = 0$ and $F_g^*(f) = +\infty$ for any $f \neq g$. Then it holds that $F_g(p) = \sup_{f\in\mathcal{F}} \{\int_X f(x) \cdot p(x)\, dx - F_g^*(f)\} = \int_X g(x) \cdot p(x)\, dx$, which implies that the lifted problem in (3.3) is a degenerate example of the distributional optimization problem in (3.1) and (3.2). Moreover, following the principle of maximum entropy (Guiasu and Shenitzer, 1985; Shore and Johnson, 1980), we can further incorporate a Kullback-Leibler (KL) divergence regularizer into (3.3). Specifically, letting $p_0 \in P_2(X)$ be a prior distribution, for any $p \in P_2(X)$, $\mathrm{KL}(p, p_0)$ can be written in a variational form (Nguyen et al., 2010) as $\mathrm{KL}(p, p_0) = \sup_{h\in\mathcal{F}} \{1 + \mathbb{E}_{X\sim p}[h(X)] - \mathbb{E}_{X\sim p_0}\{\exp[h(X)]\}\}$, (3.4) where the optimal solution is given by $h^\star(x) = \log[p(x)/p_0(x)]$. Then we consider a regularized distributional optimization problem $\mathrm{minimize}_{p\in P_2(X)}\ F(p) = \int_X g(x) \cdot p(x)\, dx + \tau \cdot \mathrm{KL}(p, p_0) = \sup_{f\in\mathcal{F}} \{\int_X f(x) \cdot p(x)\, dx - \int_X \tau \cdot \exp\{[f(x) - g(x)]/\tau\} \cdot p_0(x)\, dx + \tau\}$, (3.5) where in the second equality we utilize (3.4) and let $f = g + \tau \cdot h$. Thus, (3.5) also falls into the framework of (3.1) and (3.2) with $F^*(f) = \tau \cdot \mathbb{E}_{X\sim p_0}(\exp\{[f(X) - g(X)]/\tau\}) - \tau$. Furthermore, the Gibbs variational principle (Petz et al., 1989) implies that the global optimum of (3.5) is given by $p^\star_\tau(x) \propto \exp[-g(x)/\tau] \cdot p_0(x)$ for all $x \in X$. (3.6) Assuming that $p_0$ is an uninformative and improper prior, the posterior $p^\star_\tau$ in (3.6) is a Gibbs measure that concentrates on the global optima of $g$ as $\tau \to 0$.
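As a quick numerical check of the variational form (3.4) (our sketch, not from the paper), on a finite support one can verify that plugging in $h^\star = \log(p/p_0)$ attains the KL divergence exactly, while any other $h$ yields a smaller value:

import numpy as np

# Minimal numerical sketch (ours) of the variational form (3.4):
# KL(p, p0) = sup_h { 1 + E_p[h] - E_{p0}[exp(h)] }, maximized at
# h* = log(p / p0), checked here on a finite support.
rng = np.random.default_rng(1)
n = 6
p = rng.random(n); p /= p.sum()
p0 = rng.random(n) + 0.1; p0 /= p0.sum()

kl = np.sum(p * np.log(p / p0))
def lower_bound(h):
    return 1.0 + np.dot(p, h) - np.dot(p0, np.exp(h))

h_star = np.log(p / p0)
assert np.isclose(lower_bound(h_star), kl)
for _ in range(100):                      # any other h gives a lower value
    assert lower_bound(h_star + 0.3 * rng.standard_normal(n)) <= kl + 1e-12
print('KL =', kl)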
Example 3.2 (Distributionally Robust Optimization (DRO)). Let $\Theta$ be a compact subset of $\mathbb{R}^{\bar d}$ and let $\ell(\cdot;\cdot)\colon X \times \Theta \to \mathbb{R}$ be a bivariate function, where $\bar d$ is a fixed integer. We consider the following DRO problem (Rahimian and Mehrotra, 2019), $\min_{\theta\in\Theta} \max_{p\in\mathcal{M}} \int_X \ell(x;\theta) \cdot p(x)\, dx$, (3.7) where $\mathcal{M} \subseteq P_2(X)$ is called the ambiguity set. One commonly used ambiguity set is the level set of the KL divergence, i.e., $\mathcal{M} := \{p : \mathrm{KL}(p, p_0) \le \epsilon\}$ for some $\epsilon > 0$, where $p_0$ is a given probability measure, namely the nominal distribution. In the context of supervised learning, $\ell$ in (3.7) is the loss function and the goal of DRO is to minimize the population risk, which is the expected loss under an adversarial perturbation of the data generating distribution $p_0$ within the ambiguity set $\mathcal{M}$. When $\mathcal{M}$ is specified by the level set of the KL divergence, for any fixed $\theta$, using Lagrangian duality, we can transform the inner problem in (3.7) into a KL divergence regularized distributional optimization problem as in (3.5) with $g$ replaced by $\ell(\cdot;\theta)$. As a result, the inner problem of DRO can be formulated as an instance of the distributional optimization problem given by (3.1) and (3.2). Example 3.3 (Variational Inference). In Bayesian statistics (Gelman et al., 2013), it is critical to estimate the posterior distribution of the latent variable $Z \in \mathcal{Z}$ given data $X$. The density of the posterior distribution is given by $p(z\,|\,x) = p(x\,|\,z) \cdot p_0(z) / \int_{\mathcal{Z}} p(x\,|\,z) \cdot p_0(z)\, dz$, (3.8) where $x$ is the observed data, $p(x\,|\,z)$ is the likelihood function, and $p_0 \in P(\mathcal{Z})$ is the prior distribution of the latent variable. Here $p(x\,|\,z)$ and $p_0$ are given in closed form. In many statistical models, the computation of the integral in (3.8) is intractable. To circumvent such intractability, variational inference turns to minimizing the KL divergence between a variational posterior $p$ and the true posterior $p(z\,|\,x)$ in (3.8) (Wainwright and Jordan, 2008; Blei et al., 2017), yielding the following distributional optimization problem, $p^\star(z) = \mathrm{argmin}_{p\in P_2(\mathcal{Z})}\ \mathrm{KL}[p(z), p(z\,|\,x)] = \mathrm{argmax}_{p\in P_2(\mathcal{Z})} \{-\int_{\mathcal{Z}} \log[p(z) / (p(x\,|\,z) \cdot p_0(z))] \cdot p(z)\, dz\}$. (3.9) By the variational form of the KL divergence in (3.4), we observe that such a problem belongs to the framework given by (3.1) and (3.2). Meanwhile, the right-hand side of (3.9) is known as the evidence lower bound, which avoids the integral in (3.8). The common practice of variational inference is to parameterize $p$ by a finite-dimensional parameter and optimize such a parameter in (3.9), which, however, potentially leads to a bias in approximating the true posterior $p(z\,|\,x)$. As is shown subsequently, our proposed framework avoids such bias by representing $p$ with particles. Example 3.4 (Markov-Chain Monte-Carlo (MCMC)). Following the previous example, consider sampling from the posterior distribution $p(z\,|\,x)$ via MCMC. One example is the (discrete-time) Langevin MCMC algorithm (Gilks et al., 1995; Brooks et al., 2011; Welling and Teh, 2011), which constructs a Markov chain $\{z_k\}_{k\ge 0}$ given by $z_{k+1} \leftarrow z_k + \gamma_k \cdot \nabla \log[p(x\,|\,z_k) \cdot p_0(z_k)] + \sqrt{2\gamma_k} \cdot \epsilon_k$, where $\gamma_k > 0$ is the stepsize and $\{\epsilon_k\}_{k\ge 0}$ are independently sampled from $N(0, I_d)$.
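As a minimal numerical sketch of this update (ours, not from the paper), the following snippet runs the Langevin iteration with a toy double-well energy $U$ standing in for the negative log-posterior $-\log[p(x\,|\,z)\cdot p_0(z)]$:

import numpy as np

# Minimal sketch (ours) of the discrete-time Langevin update in Example
# 3.4, targeting an unnormalized density pi(z) proportional to exp(-U(z)).
# Here U(z) = (z^2 - 1)^2 is an assumed toy energy, not the paper's model.
rng = np.random.default_rng(2)

def grad_U(z):                 # gradient of U(z) = (z^2 - 1)^2
    return 4.0 * z * (z**2 - 1.0)

gamma = 1e-3                   # stepsize gamma_k (kept constant here)
z = rng.standard_normal(2000)  # run 2000 chains in parallel
for _ in range(5000):
    z = z - gamma * grad_U(z) + np.sqrt(2.0 * gamma) * rng.standard_normal(z.shape)

# The chains should settle around the two modes z = +1 and z = -1 of exp(-U).
print('fraction near the modes:', np.mean(np.abs(np.abs(z) - 1.0) < 0.5))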
Such a Markov chain iteratively attains the limiting probability measure $p^\star$ in (3.9) while avoiding the intractable integral in (3.8). See, e.g., Cheng et al. (2017); Cheng and Bartlett (2018); Xu et al. (2018); Durmus et al. (2019) and the references therein for the analysis of the Langevin MCMC algorithm. Besides, it is shown that (discrete-time) Langevin MCMC can be viewed as (a discretization of) the Wasserstein gradient flow of $\mathrm{KL}[p(z), p(z\,|\,x)]$ over $P_2(\mathcal{Z})$ (Jordan et al., 1998; Wibisono, 2018). In other words, posterior sampling with Langevin MCMC can be posed as a distributional optimization method. Furthermore, in addition to the KL divergence, $F(p)$ in (3.1) also incorporates other f-divergences (Csiszár, 1967). For instance, the $\chi^2$-divergence between $p$ and $q$ can be written as $\chi^2(p, q) = \sup_{f\in\mathcal{F}} \{2\mathbb{E}_{X\sim p}[f(X)] - \mathbb{E}_{X\sim q}[f^2(X)] - 1\}$. As a result, when applied to posterior sampling, our proposed framework generalizes Langevin MCMC beyond the minimization of the KL divergence and further allows more general objective functionals such as f-divergences. Example 3.5 (Contextual Bandit). We consider the following interaction protocol between an agent and the environment. At the t-th round, the agent observes a context $c_t \in \mathcal{C}$, which can be chosen adversarially. Then the agent takes an action $a_t \in \mathcal{A}$ and receives the reward $r_t \in \mathbb{R}$, which is a random variable with mean $R(c_t, a_t)$, where $R\colon \mathcal{C} \times \mathcal{A} \to \mathbb{R}$ is known as the reward function. Here $\mathcal{C}$ and $\mathcal{A}$ are the context and action spaces, respectively, and $R$ is unknown. The agent aims to maximize its expected total reward. To this end, at the (t+1)-th round, the agent estimates the reward function $R$ within a function class $\mathcal{F} := \{f_z : z \in \mathcal{Z}\}$ based on the observed evidence $\mathcal{E}_t := \{(c_{t'}, a_{t'}, r_{t'})\}_{t'=1}^{t}$, where $z$ denotes the parameter of $f_z$. The two most common algorithms for the contextual bandit, namely exponential weighting for exploration and exploitation with experts (Exp4) (Farias and Megiddo, 2005) and Thompson sampling (TS) (Agrawal and Goyal, 2012, 2013), both maintain a probability measure over $\mathcal{F}$, which is updated by $p_{t+1}(z) \leftarrow \mathrm{argmin}_{p\in P_2(\mathcal{Z})} \{\int_{\mathcal{Z}} G_t(z) \cdot p(z)\, dz + \frac{1}{\gamma_t} \cdot \mathrm{KL}(p, p_t)\} \propto \exp[-\gamma_t \cdot G_t(z)] \cdot p_t(z)$, (3.10) where $G_t$ is a function specified by the algorithm and $\gamma_t > 0$ is the stepsize. In particular, Exp4 sets $G_t(z)$ as the estimated reward of the greedy policy with respect to $f_z$, while TS sets $G_t(z)$ to be the log-likelihood of the observed evidence $\mathcal{E}_t$ so that $p_{t+1}(z)$ is the posterior. We observe that the KL divergence regularized distributional optimization problem in (3.10) falls into our proposed framework. Moreover, when $\mathcal{Z}$ is complicated, existing Exp4-based algorithms mostly rely on the discretization of $\mathcal{Z}$ to maintain $p(z)$ as a probability vector (Auer et al., 2002). In contrast, as we will see subsequently, our proposed algorithm bypasses the discretization of $\mathcal{Z}$ in Exp4, and meanwhile eases the computational overhead of TS. Example 3.6 (Policy Optimization). In reinforcement learning (Sutton and Barto, 2018), the model is commonly formulated as a Markov decision process $(\mathcal{S}, \mathcal{A}, R, P, \gamma)$ with a state space $\mathcal{S}$, an action space $\mathcal{A}$, a reward function $R$, a Markov transition kernel $P$, and a discount factor $\gamma \in (0, 1)$. A policy $\pi$ maps each state $s \in \mathcal{S}$ to a distribution $\pi(\cdot\,|\,s)$ over the action space $\mathcal{A}$.
Any policy $\pi$ induces a visitation measure $\rho_\pi$ over $\mathcal{S} \times \mathcal{A}$, which is defined as $\rho_\pi(s, a) = \sum_{t=0}^{\infty} \gamma^t \cdot P(s_t = s, a_t = a)$ for all $(s, a) \in \mathcal{S} \times \mathcal{A}$. (3.11) Here $s_t$ and $a_t$ are the t-th state and action, respectively, and the trajectory $\{s_t, a_t\}_{t\ge 0}$ is generated following policy $\pi$. Let $\rho_0 \in P(\mathcal{S} \times \mathcal{A})$ be a fixed distribution. We consider the regularized policy optimization problem (Ziebart, 2010; Schulman et al., 2017) $\mathrm{maximize}_\pi\ \{\mathbb{E}_{(s,a)\sim\rho_\pi}[R(s, a)] - \lambda \cdot \mathrm{KL}(\rho_\pi, \rho_0)\}$, where $\lambda > 0$ is the regularization parameter that balances between exploration and exploitation. This problem can be viewed as a distributional optimization problem with an extra constraint that the distribution of interest is a visitation measure induced by a policy. Example 3.7 (Generative Adversarial Network (GAN)). The goal of a GAN (Goodfellow et al., 2014) is to learn a generative model $p$ that is close to a target distribution $q$, where $p$ is defined by transforming a low-dimensional noise via a neural network. Since the objective in (3.1) includes f-divergences as special cases, our distributional optimization problem incorporates the family of f-GANs (Nowozin et al., 2016) by letting the function class $\mathcal{F}$ in (3.1) be the class of discriminators. Besides, by letting $\mathcal{F}$ be the family of 1-Lipschitz continuous functions defined on $X$, the first-order Wasserstein distance between $p$ and $q$ can be written as $W_1(p, q) = \sup_{f\in\mathcal{F}} \{\mathbb{E}_{X\sim p}[f(X)] - \mathbb{E}_{X\sim q}[f(X)]\}$, (3.12) which is of the same variational form as in (3.1). Thus, the distributional optimization in (3.2) includes the Wasserstein GAN (Arjovsky et al., 2017) as a special case. Moreover, by letting $\mathcal{F}$ in (3.12) be various function classes, we further recover other integral probability metrics (Müller, 1997) between probability distributions, which induce a variety of GANs (Mroueh and Sercu, 2017; Li et al., 2017; Mroueh et al., 2018; Uppal et al., 2019). 3.2 Variational Transport In what follows, we introduce the variational transport algorithm for the distributional optimization problem specified in (3.1) and (3.2). Recall that $X$ is a compact Riemannian manifold without a boundary and $(P_2(X), W_2)$ defined in (2.4) is a geodesic space equipped with a Riemannian metric. To simplify the presentation, in the sequel we specialize to the case where $X$ is a compact subset of $\mathbb{R}^d$ with a periodic boundary condition. For instance, $X$ can be the torus $\mathbb{T}^d$, which can be viewed as the d-dimensional hypercube $[0, 1)^d$ where the opposite $(d-1)$-cells are identified. We specialize to such a structure only for rigorous theoretical analysis, which also appears in other works involving the Wasserstein space (Gräf and Hielscher, 2015). Our results can be readily generalized to a general $X$ with extra technical care. We provide a brief introduction to the Wasserstein space defined on $\mathbb{T}^d$ in §B.1. Moreover, note that the objective functional $F(p)$ in (3.1) admits a variational form as the supremum over a function class $\mathcal{F}$. Since maximization over an unrestricted function class is computationally intractable, for the variational problem in (3.1) to be well-posed, in the sequel we consider $\mathcal{F}$ to be a subset of the square-integrable functions such that maximization over $\mathcal{F}$ can be solved numerically, e.g., a class of deep neural networks or an RKHS.
Motivated by first-order methods in the Euclidean space, we propose a first-order iterative optimization algorithm over $P_2(X)$. Specifically, suppose that $p \in P_2(X)$ is the current iterate of our algorithm; we would like to update $p$ in the direction of the Wasserstein gradient $\operatorname{grad} F(p)$. Moreover, to ensure that $p$ lies in $P_2(X)$, we update $p$ by moving in the direction of $\operatorname{grad} F(p)$ along the geodesic. Thus, in the ideal case, the Wasserstein gradient update is given by $p \leftarrow \mathrm{Exp}_p[-\alpha \cdot \operatorname{grad} F(p)]$, (3.13) where $\mathrm{Exp}_p$ is the exponential mapping at $p$ and $\alpha > 0$ is the stepsize. To obtain an implementable algorithm, it remains to characterize the Wasserstein gradient $\operatorname{grad} F(p)$ and the exponential mapping $\mathrm{Exp}_p$ at any $p \in P_2(X)$. The following proposition establishes the connection between the Wasserstein gradient and the functional derivative of a differentiable functional on $P_2(X)$. Proposition 3.8 (Functional Gradient). We assume $F\colon P_2(X) \to \mathbb{R}$ is a differentiable functional with its functional derivative with respect to the $\ell_2$-structure denoted by $\delta F(p)/\delta p$. Then, it holds that $\operatorname{grad} F(p) = -\mathrm{div}[p \cdot \nabla(\delta F/\delta p)]$, (3.14) where $\mathrm{div}$ is the divergence operator on $X$. Furthermore, for the functional $F$ defined in (3.1), we assume that $F^*$ is strongly convex on $\mathcal{F}$ and that the supremum is attained at $f^*_p \in \mathcal{F}$ for a fixed $p \in P_2(X)$, i.e., $F(p) = \mathbb{E}_{X\sim p}[f^*_p(X)] - F^*(f^*_p)$. Then, we have $\delta F/\delta p = f^*_p$ and $\operatorname{grad} F(p) = -\mathrm{div}(p \cdot \nabla f^*_p)$. Proof. See §D.1 for a detailed proof. By Proposition 3.8, to obtain $\operatorname{grad} F(p)$, we need to first solve the variational problem in (3.1) to obtain $f^*_p \in \mathcal{F}$ and then compute the divergence in (3.14). However, even if we are given $f^*_p$, (3.14) further requires the analytic form of the density function $p$, which is oftentimes challenging to obtain. Furthermore, in addition to the challenge of computing $\operatorname{grad} F(p)$, to implement the Wasserstein gradient update in (3.13), we also need to characterize the exponential mapping $\mathrm{Exp}_p$. Fortunately, in the following proposition, we show that the exponential mapping along any tangent vector in $T_p P_2(X)$ is equivalent to a pushforward mapping of $p$, which enables us to perform the Wasserstein gradient update via pushing particles. Proposition 3.9 (Pushing Particles as Exponential Mapping). For any $p \in P_2(X)$ and any $s \in T_p P_2(X)$, suppose the elliptic equation $-\mathrm{div}(p \cdot \nabla u) = s$ admits a unique solution $u\colon X \to \mathbb{R}$ that is twice continuously differentiable. We further assume that $\nabla u\colon X \to \mathbb{R}^d$ is H-Lipschitz continuous. Then for any $t \in [0, 1/H)$, we have $[\mathrm{Exp}_X(t \cdot \nabla u)]_\sharp p = \mathrm{Exp}_p(t \cdot s)$, (3.15) where $\mathrm{Exp}_X$ denotes the exponential mapping on $X$ and $\mathrm{Exp}_p$ is the exponential mapping on $P_2(X)$ at the point $p$. Moreover, for any $x \in X$, the density function of $[\mathrm{Exp}_X(t \cdot \nabla u)]_\sharp p$ at $x$ is given by $\{[\mathrm{Exp}_X(t \cdot \nabla u)]_\sharp p\}(x) = p(y) \cdot |\frac{d}{dy} \mathrm{Exp}_y[t \cdot \nabla u(y)]|^{-1}$, where $x = \mathrm{Exp}_y[t \cdot \nabla u(y)]$ and $|\frac{d}{dy} \mathrm{Exp}_y[t \cdot \nabla u(y)]|$ is the determinant of the Jacobian. Proof. See §D.2 for a detailed proof. Note that the elliptic equation in Proposition 3.9 holds in the weak sense.
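To make the density formula in Proposition 3.9 concrete, here is a minimal one-dimensional numerical check (our sketch, with $X = \mathbb{R}$ so that $\mathrm{Exp}_y(v) = y + v$; the choice $\nabla u(y) = \tanh(y)$ is an assumption made so that $t \cdot H < 1$):

import numpy as np

# Minimal 1-D sketch (ours) of the pushforward density formula in
# Proposition 3.9 with X = R: for T(y) = y + t * u'(y), the density of
# T#p at x = T(y) is p(y) / |T'(y)|.  We check the formula against a
# histogram of pushed samples, taking p = N(0, 1) and u'(y) = tanh(y).
rng = np.random.default_rng(4)
t = 0.5
y = rng.standard_normal(200_000)
x = y + t * np.tanh(y)                    # pushed particles; T is invertible
                                          # since t * sup|tanh'| = 0.5 < 1
grid_y = np.linspace(-3, 3, 7)
grid_x = grid_y + t * np.tanh(grid_y)
p_y = np.exp(-grid_y**2 / 2) / np.sqrt(2 * np.pi)
jac = 1.0 + t / np.cosh(grid_y) ** 2      # T'(y) = 1 + t * tanh'(y) > 0
density_formula = p_y / jac

hist, edges = np.histogram(x, bins=200, range=(-4, 4), density=True)
centers = 0.5 * (edges[:-1] + edges[1:])
empirical = np.interp(grid_x, centers, hist)
print(np.max(np.abs(empirical - density_formula)))  # small, up to MC error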
Combining Propositions 3.8 and 3.9, if $\nabla f^*_p$ is H-Lipschitz, it holds for any $t \in [0, 1/H)$ that $\mathrm{Exp}_p[-t \cdot \operatorname{grad} F(p)] = [\mathrm{Exp}_X(-t \cdot \nabla f^*_p)]_\sharp p$, (3.16) which reduces to $(\mathrm{id} - t \cdot \nabla f^*_p)_\sharp p$ when $X = \mathbb{R}^d$. Here $\mathrm{id}\colon \mathbb{R}^d \to \mathbb{R}^d$ is the identity mapping. Thus, when we have access to $f^*_p$, (3.16) enables us to perform the Wasserstein gradient update approximately via pushing particles. Specifically, assume that the probability measure $p$ can be approximated by an empirical measure $\tilde p$ of $N$ particles $\{x_i\}_{i\in[N]}$, that is, $p \approx \tilde p = \frac{1}{N}\sum_{i=1}^N \delta_{x_i}$, (3.17) where $\delta_x$ is the Dirac δ-measure at $x \in X$. For each $i \in [N]$, we define $y_i = \mathrm{Exp}_{x_i}[-t \cdot \nabla f^*_p(x_i)]$. Then we have $[\mathrm{Exp}_X(-t \cdot \nabla f^*_p)]_\sharp \tilde p = N^{-1}\sum_{i=1}^N \delta_{y_i}$ by direct computation. By (3.16) and (3.17), $\mathrm{Exp}_p[-t \cdot \operatorname{grad} F(p)]$ can be approximated by the empirical measure of $\{y_i\}_{i\in[N]}$. In addition, since $f^*_p$ is unknown, we estimate $f^*_p$ via the following maximization problem, $\mathrm{maximize}_{f\in\mathcal{F}}\ \int_X f(x)\, d\tilde p(x) - F^*(f) = \frac{1}{N}\sum_{i=1}^N f(x_i) - F^*(f)$, (3.18) where we replace the probability distribution $p$ in (3.1) by the empirical measure $\tilde p$ in (3.17). When $N$ is sufficiently large, since $\tilde p$ is close to $p$, we expect that the solution to (3.18), denoted by $\tilde f^*_p \in \mathcal{F}$, is close to $f^*_p$. This enables us to approximate the Wasserstein gradient update $\mathrm{Exp}_p[-\alpha \cdot \operatorname{grad} F(p)]$ by $[\mathrm{Exp}_X(-\alpha \cdot \nabla\tilde f^*_p)]_\sharp \tilde p$, which is the empirical measure of $\{\mathrm{Exp}_{x_i}[-\alpha \cdot \nabla\tilde f^*_p(x_i)]\}_{i\in[N]}$. Therefore, we obtain the variational transport algorithm for solving the desired distributional optimization problem in (3.2), whose details are presented in Algorithm 1. Algorithm 1 Variational Transport Algorithm for Distributional Optimization. 1: Input: functional $F\colon P_2(X) \to \mathbb{R}$ defined in (3.1), initial point $p_0 \in P_2(X)$, number of particles $N$, number of iterations $K$, and stepsizes $\{\alpha_k\}_{k=0}^K$. 2: Initialize the $N$ particles $\{x_i\}_{i\in[N]}$ by drawing $N$ i.i.d. observations from $p_0$. 3: For $k = 0, 1, 2, \ldots, K$ do: 4: compute $\tilde f^*_k \in \mathcal{F}$ via $\tilde f^*_k \leftarrow \mathrm{argmax}_{f\in\mathcal{F}} \{\frac{1}{N}\sum_{i=1}^N f(x_i) - F^*(f)\}$. (3.19) 5: Push the particles by letting $x_i \leftarrow \mathrm{Exp}_{x_i}[-\alpha_k \cdot \nabla\tilde f^*_k(x_i)]$ for all $i \in [N]$. 6: End for. 7: Output: the empirical measure $N^{-1}\sum_{i\in[N]} \delta_{x_i}$. In this algorithm, we maintain $N$ particles $\{x_i\}_{i\in[N]}$ and output their empirical measure as the solution to (3.2). These particles are updated in Line 5 of Algorithm 1. In the k-th iteration, the update is equivalent to updating the empirical measure $\tilde p = N^{-1}\sum_{i\in[N]} \delta_{x_i}$ by the pushforward measure $[\mathrm{Exp}_X(-\alpha_k \cdot \nabla\tilde f^*_k)]_\sharp \tilde p$, which approximates the Wasserstein gradient update in (3.13) with stepsize $\alpha_k$. Here $\tilde f^*_k$ is obtained in (3.19) by solving an empirical maximization problem over a function class $\mathcal{F}$, which can be chosen to be an RKHS or the class of deep neural networks in practice. After obtaining $\tilde f^*_k$, we push each particle $x_i$ along the direction $-\nabla\tilde f^*_k(x_i)$ with stepsize $\alpha_k$, which leads to $[\mathrm{Exp}_X(-\alpha_k \cdot \nabla\tilde f^*_k)]_\sharp \tilde p$ and completes one iteration of the algorithm.
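The following is a minimal Python sketch (ours) of Algorithm 1 on $X = \mathbb{R}^d$, where $\mathrm{Exp}_x(v) = x + v$; the inner maximization (3.19) is abstracted as a user-supplied routine and is instantiated here with the degenerate linear functional of Example 3.1, for which $\tilde f^*_k = g$ in closed form:

import numpy as np

# Minimal sketch (ours) of Algorithm 1 with X = R^d.  `solve_inner` must
# return (an estimate of) the gradient of f*_k evaluated at the particles;
# supervised-learning or RKHS solvers can be plugged in here.
def variational_transport(x, solve_inner, steps, alpha):
    '''x: (N, d) array of particles drawn i.i.d. from p0.'''
    for _ in range(steps):
        grad_f = solve_inner(x)          # step 4: estimate grad of f*_k
        x = x - alpha * grad_f           # step 5: push every particle
    return x                             # empirical measure of the output

# Degenerate instance from Example 3.1: F(p) = E_p[g] has f*_p = g for
# every p, so the inner solver is exact and each particle simply performs
# gradient descent on g.
def solve_inner_linear(x):               # grad g for g(x) = 0.5 * ||x||^2
    return x

rng = np.random.default_rng(5)
particles = rng.standard_normal((1000, 2))
out = variational_transport(particles, solve_inner_linear, steps=200, alpha=0.05)
print('particles concentrate near argmin g:', np.abs(out).max())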
To gain some intuition about variational transport, as a concrete example, consider the regularized nonconvex optimization problem introduced in Example 3.1 with $X = \mathbb{R}^d$. Let $g\colon \mathbb{R}^d \to \mathbb{R}$ be a differentiable function on $\mathbb{R}^d$. Consider the distributional optimization where the objective functional is defined as $F(p) = \int_X g(x) \cdot p(x)\, dx + \tau \cdot \mathrm{Ent}(p) = \int_X g(x) \cdot p(x)\, dx + \tau \cdot \int_X p(x) \cdot \log p(x)\, dx$. Here $\tau > 0$ is the regularization parameter and $\mathrm{Ent}$ is the entropy functional. For such an objective functional $F$, the Wasserstein gradient is given by $\operatorname{grad} F(p) = -\mathrm{div}[p \cdot (\nabla g + \tau \cdot \nabla\log p)]$. Then, supposing the $N$ particles $\{x_i\}_{i\in[N]}$ are sampled from the density $p$, variational transport proposes to push each particle $x_i$ by letting $x_i \leftarrow x_i - \alpha_k \cdot [\nabla g(x_i) + \tau \cdot \nabla\log p(x_i)]$, (3.20) which can be viewed as a regularized gradient descent step for the objective function $g$ at $x_i$. When $\tau = 0$, the update in (3.20) reduces to a standard gradient descent step. However, due to the nonconvexity of $g$, gradient descent might be trapped in a stationary point of $g$. By introducing the entropy regularization, variational transport aims to find the Gibbs distribution $p^*_\tau(x) \propto \exp[-g(x)/\tau]$, which is the global minimum of $F$ and is supported on the global minima of $g$ as $\tau$ goes to zero. Meanwhile, we remark that our variational transport algorithm is deterministic in essence: the only randomness comes from the initialization of the particles. Moreover, this algorithm can be viewed as Wasserstein gradient descent with biased gradient estimates. Specifically, to characterize the bias, we define a sequence of transportation maps $\{T_k\colon X \to X\}_{k=0}^{K+1}$ as follows, $T_0 = \mathrm{id}$, $T_{k+1} = [\mathrm{Exp}_X(-\alpha_k \cdot \nabla\tilde f^*_k)] \circ T_k$ for all $k \in \{0, \ldots, K\}$, (3.21) where $\tilde f^*_k$ is obtained in (3.19) of Algorithm 1. For each $k \ge 1$, we define $\hat p_k = (T_k)_\sharp p_0$. Note that the population version of the optimization problem in (3.19) of Algorithm 1 and its solution are given by $\hat f^*_k \leftarrow \mathrm{argmax}_{f\in\mathcal{F}} \{\int_X f(x) \cdot \hat p_k(x)\, dx - F^*(f)\}$. (3.22) Thus, $\{\hat p_k\}_{k\ge 0}$ are the iterates constructed by the ideal version of Algorithm 1 where the number of particles goes to infinity. For such an ideal sequence, at each $\hat p_k$, by Proposition 3.8, the desired Wasserstein gradient direction is $\operatorname{grad} F(\hat p_k) = -\mathrm{div}(\hat p_k \cdot \nabla\hat f^*_k)$. However, with only finite data, we can only obtain an estimator $\tilde f^*_k$ of $\hat f^*_k$ by solving (3.19). Hence, the estimated Wasserstein gradient at $\hat p_k$ is $-\mathrm{div}(\hat p_k \cdot \nabla\tilde f^*_k)$, whose error is denoted by $\delta_k = -\mathrm{div}[\hat p_k \cdot (\nabla\tilde f^*_k - \nabla\hat f^*_k)]$. Note that $\delta_k$ is a tangent vector at $\hat p_k$, i.e., $\delta_k \in T_{\hat p_k} P_2(X)$. Using the Riemannian metric on $P_2(X)$, we define $\varepsilon_k = \langle \delta_k, \delta_k\rangle_{\hat p_k} = \int_X \|\nabla\tilde f^*_k(x) - \nabla\hat f^*_k(x)\|_2^2 \cdot \hat p_k(x)\, dx$. (3.23) Therefore, Algorithm 1 can be viewed as a biased Wasserstein gradient algorithm on $P_2(X)$, where the squared norm of the bias term in the k-th iteration is $\varepsilon_k$, which decays to zero as $N$ goes to infinity.
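As a numerical companion to the update (3.20) above (our sketch, not the paper's solver), the snippet below replaces the score $\nabla\log p$, which is not directly available from particles, with a Gaussian kernel density estimate; the double-well objective $g$ is an assumed toy example:

import numpy as np

# Minimal 1-D sketch (ours) of the entropy-regularized update (3.20),
# substituting a kernel density estimate for grad log p.
rng = np.random.default_rng(6)

def grad_g(x):                           # g(x) = (x^2 - 1)^2, two global minima
    return 4.0 * x * (x**2 - 1.0)

def kde_score(x, h=0.2):                 # grad log p_hat at each particle
    diff = x[:, None] - x[None, :]       # (N, N) pairwise differences
    w = np.exp(-diff**2 / (2 * h**2))
    return -(diff * w).sum(axis=1) / (h**2 * w.sum(axis=1))

tau, alpha = 0.05, 1e-2
x = rng.standard_normal(500)
for _ in range(2000):
    x = x - alpha * (grad_g(x) + tau * kde_score(x))

# Particles should spread around the two minima x = +1 and x = -1 of g,
# roughly following the Gibbs measure exp(-g/tau).
print('mean |x|:', np.abs(x).mean())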
The algorithm explicitly constructs a sequence of transportation maps $\{T_k\}_{k=0}^{K+1}$ and implicitly a sequence of probability distributions $\{\hat p_k\}_{k=1}^{K+1}$, which is approximated by a sequence of empirical distributions of $N$ particles. Furthermore, for computational efficiency, in practical implementations of the variational transport algorithm we can push the particles without solving (3.19) completely. For example, when we employ a deep neural network $f_\theta\colon X \to \mathbb{R}$ to solve (3.19), where $\theta$ denotes the network weights, we can replace (3.19) by updating $\theta$ with a few gradient ascent updates. Then, the particle $x_i$ is updated by $\mathrm{Exp}_{x_i}[-\alpha_k \cdot \nabla f_\theta(x_i)]$. Moreover, when $N$ is very large, we can also randomly sample a subset of the $N$ particles to update $f_\theta$ via mini-batch stochastic gradient ascent. 4 Theoretical Results In this section, we provide a theoretical analysis of the variational transport algorithm. For mathematical rigor, in the sequel we let $\mathcal{F}$ in (3.19) be an RKHS $(\mathcal{H}, \langle\cdot,\cdot\rangle_{\mathcal{H}})$ defined on $X$ with a reproducing kernel $K(\cdot,\cdot)$. In this case, the distributional optimization problem specified in (3.1) and (3.2) can be written as $\mathrm{minimize}_{p\in P_2(X)}\ F(p)$, $F(p) = \sup_{f\in\mathcal{H}} \{\int_X f(x) \cdot p(x)\, dx - F^*(f)\}$. (4.1) In §4.1, we probe the statistical error incurred in solving the inner maximization problem in (4.1), which provides an upper bound for the bias of the Wasserstein gradient estimates constructed by variational transport. Then in §4.2, we establish the rate of convergence of variational transport for a class of objective functionals satisfying the Polyak-Łojasiewicz (PL) condition (Polyak, 1963) with respect to the Wasserstein distance. 4.1 Statistical Analysis To characterize the bias of the Wasserstein gradient estimates, in this subsection we provide a statistical analysis of the inner maximization problem in (4.1). Without loss of generality, it suffices to consider the following nonparametric estimation problem. Let $P$ be a probability measure over $X$ and $f^*$ be the minimizer of the population risk function over the RKHS $\mathcal{H}$, i.e., $f^* = \mathrm{argmin}_{f\in\mathcal{H}} L(f)$, where $L(f) = -\mathbb{E}_{X\sim P}[f(X)] + F^*(f)$. (4.2) Our goal is to estimate $f^*$ in (4.2) based on $n$ i.i.d. observations $X_1, \ldots, X_n$ sampled from $P$. A natural estimator of $f^*$ is the minimizer of the empirical risk with an RKHS norm regularizer, which is defined as $f_{n,\lambda} = \mathrm{argmin}_{f\in\mathcal{H}} \{L_n(f) + \lambda/2 \cdot \|f\|_{\mathcal{H}}^2\}$, where $L_n(f) = -\frac{1}{n}\sum_{i=1}^n f(X_i) + F^*(f)$. (4.3) Here $\lambda > 0$ is the regularization parameter and $L_n$ is known as the empirical risk function. The population problem defined in (4.2) reduces to (3.22) when we set the density of $P$ to be $\hat p_k$ and the observations $\{X_i\}_{i\in[n]}$ to be the particles $\{x_i\}_{i\in[N]}$ sampled from $\hat p_k$. Then, the estimator $f_{n,\lambda}$ in (4.3) is a regularized version of $\tilde f^*_k$ defined in (3.19). In the sequel, we upper bound the statistical error $f_{n,\lambda} - f^*$ in terms of the RKHS norm, which is further used to obtain an upper bound on $\varepsilon_k$ defined in (3.23). Before presenting the theoretical results, we first introduce a few technical assumptions. Let $\nu$ denote the Lebesgue measure on $X$. As introduced in §2.3, $\mathcal{H}$ can be viewed as a subset of $L^2_\nu(X)$. The following assumption specifies a regularity condition for the functional $F^*$ in (4.2).
Assumption 4.1. We assume that the functional $F^* \colon \mathcal{H} \to \mathbb{R}$ is smooth enough to be Fréchet differentiable up to the third order. For $j \in \{1, 2, 3\}$, we denote by $D^j F^*$ the $j$-th order Fréchet derivative of $F^*$ in the RKHS norm. In addition, we assume that there exists a constant $\kappa > 0$ such that, for any $f, f' \in \mathcal{H}$, we have

$F^*(f) - F^*(f') - \langle DF^*(f'), f - f' \rangle_{\mathcal{H}} \geq \frac{\kappa}{2} \int_X \bigl|f(x) - f'(x)\bigr|^2\,dx. \qquad (4.4)$

Assumption 4.1 is analogous to the strong convexity condition in parametric statistical problems. It lower bounds the Hessian of the population risk $L$ defined in (4.2), which enables us to characterize the distance between any function and the global minimizer $f^*$ based on the population risk. It is worth noting that the right-hand side of (4.4) is an integral with respect to the Lebesgue measure.

We next introduce a regularity condition for the global minimizer of the population risk $L$.

Assumption 4.2. There exists a constant $\beta \in (1, 2)$ such that $f^*$ defined in (4.2) belongs to $\mathcal{H}^\beta$, the $\beta$-power space of order $\beta$ defined in (2.10).

Assumption 4.2 requires that the global minimizer $f^*$ of the population risk $L$ belongs to a higher-order power space of the RKHS $\mathcal{H}$. In other words, $f^*$ has greater smoothness than a general function in $\mathcal{H}$. This assumption is made solely for technical reasons and is adopted from Fischer and Steinwart (2017); it is used to control the bias caused by the RKHS norm regularization. Before we lay out our next assumption, we define

$\hat f_\lambda = f^* - \lambda \cdot \bigl(D^2F^*(f^*) + \lambda \cdot \mathrm{id}\bigr)^{-1} f^* = \bigl[\bigl(D^2F^*(f^*) + \lambda \cdot \mathrm{id}\bigr)^{-1} \circ D^2F^*(f^*)\bigr]\, f^*, \qquad (4.5)$

which is the solution to the linearized equation

$DL_\lambda(f^*) + \bigl[D^2 L_\lambda(f^*)\bigr](f - f^*) = 0. \qquad (4.6)$

Here $L_\lambda(f) = L(f) + \lambda/2 \cdot \|f\|_{\mathcal{H}}^2$ is the RKHS-norm-regularized population risk, $\mathrm{id} \colon \mathcal{H} \to \mathcal{H}$ is the identity mapping on $\mathcal{H}$, and $D^2F^* \colon \mathcal{H} \to \mathcal{H}$ is the second-order Fréchet derivative of $F^*$ with respect to the RKHS norm. Note that we have $DL_\lambda(f^*) = \lambda \cdot f^*$ since $DL(f^*) = 0$. Besides, note that the left-hand side of (4.6) is the first-order expansion of $DL_\lambda(f)$ at $f^*$. Thus, $\hat f_\lambda$ defined in (4.5) can be viewed as an approximation of the minimizer of $L_\lambda$. Since $L_\lambda$ is the population version of the objective in (4.3), it is reasonable to expect that $f_{n,\lambda}$ is close to $\hat f_\lambda$ when the sample size is sufficiently large. It remains to characterize the difference between $\hat f_\lambda$ and $f^*$. In the following assumption, we postulate that $\hat f_\lambda$ converges to $f^*$ when the regularization parameter $\lambda$ goes to zero.

Assumption 4.3. We define $\Theta_{\lambda,3} > 0$ as

$\Theta_{\lambda,3} = \sup_{\varphi, \psi \in \mathcal{H}}\ \sup_{u, v \in \mathcal{H}}\ \bigl\| \bigl(D^2F^*(\varphi) + \lambda \cdot \mathrm{id}\bigr)^{-1} \circ \bigl(D^3F^*(\psi)\,u\,v\bigr) \bigr\|_{\mathcal{H}} \Big/ \bigl(\|u\|_{\mathcal{H}} \cdot \|v\|_{\mathcal{H}}\bigr). \qquad (4.7)$

Then, we assume that $\hat f_\lambda$ defined in (4.5) satisfies $\|\hat f_\lambda - f^*\|_{\mathcal{H}} \to 0$ and $\Theta_{\lambda,3} \cdot \|\hat f_\lambda - f^*\|_{\mathcal{H}} \to 0$ as $\lambda$ goes to zero.
Compared with Assumptions 4.1 and 4.2, Assumption 4.3 is much more technical and obscure. As we will see in the proof details, $\Theta_{\lambda,3}$ naturally appears when bounding the difference between $\hat f_\lambda$ and the minimizer of $L_\lambda$. Roughly speaking, $\hat f_\lambda$ and $\Theta_{\lambda,3}$ together characterize the bias induced by the regularization term $\lambda/2 \cdot \|f\|_{\mathcal{H}}^2$ in (4.3). Hence, intuitively, Assumption 4.3 states that the effect of regularization shrinks to zero as $\lambda$ goes to zero and that the system is stable under small perturbations. A similar assumption also appears in Cox and O'Sullivan (1990) for the analysis of regularized maximum likelihood estimation in RKHS. In addition to the above assumptions, we impose the following regularity condition on the RKHS kernel $K$.

Assumption 4.4. Recall that the kernel function $K$ of $\mathcal{H}$ satisfies $K(x, y) = \langle K(x, \cdot), K(y, \cdot) \rangle_{\mathcal{H}}$ for any $x, y \in X$. For any $i, j \in \{1, \ldots, d\}$, we denote by $\partial_j K(x, \cdot)$ and $\partial^2_{ij} K(x, \cdot)$ the derivatives $\partial K(x, \cdot)/\partial x_j$ and $\partial^2 K(x, \cdot)/(\partial x_i \partial x_j)$, respectively. We assume that there exist absolute constants $C_{K,1}$, $C_{K,2}$, and $C_{K,3}$ such that

$\sup_{x \in X} \|K(x, \cdot)\|_{\mathcal{H}} \leq C_{K,1}, \qquad \sup_{x \in X} \|\partial_j K(x, \cdot)\|_{\mathcal{H}} \leq C_{K,2}\ \ \forall j \in \{1, \ldots, d\}, \qquad \sup_{x \in X} \|\partial^2_{ij} K(x, \cdot)\|_{\mathcal{H}} \leq C_{K,3}\ \ \forall i, j \in \{1, \ldots, d\}.$

Assumption 4.4 characterizes the regularity of $\mathcal{H}$. Specifically, it postulates that the kernel $K$ and its derivatives are upper bounded, which is satisfied by many popular kernels, including the Gaussian RBF kernel and the Sobolev kernel (Smale and Zhou, 2003; Rosasco et al., 2013; Yang et al., 2016). Moreover, Assumption 4.4 implies that the embedding mapping from the RKHS $\mathcal{H}$ to the space $L^\infty_\nu(X)$ is continuous. Indeed, for any $f \in \mathcal{H}$ and any $x \in X$, it holds that

$f(x) = \langle K(x, \cdot), f \rangle_{\mathcal{H}} \leq \|K(x, \cdot)\|_{\mathcal{H}} \cdot \|f\|_{\mathcal{H}} = \sqrt{K(x, x)} \cdot \|f\|_{\mathcal{H}} \leq C_{K,1} \cdot \|f\|_{\mathcal{H}},$

which further implies that $\|f\|_{L^\infty_\nu} \leq C_{K,1} \cdot \|f\|_{\mathcal{H}}$ for all $f \in \mathcal{H}$. With Assumptions 4.1-4.4 in place, we are ready to present the following theorem, which characterizes the statistical error of estimating $f^*$ by $f_{n,\lambda}$.

Theorem 4.5. Under Assumptions 4.1-4.4, set the regularization parameter $\lambda$ in (4.3) to

$\lambda = O\bigl[C_{K,1}^{1-\alpha_\beta} \cdot \kappa^{\alpha_\beta} \cdot \|f^*\|_{\mathcal{H}^\beta}^{\alpha_\beta - 1} \cdot (\log n / n)^{(1-\alpha_\beta)/2}\bigr],$

where $\alpha_\beta = (\beta-1)/(\beta+1)$ and $O(\cdot)$ omits absolute constants that do not depend on $\kappa$, $C_{K,1}$, or $\|f^*\|_{\mathcal{H}^\beta}$. Then, with probability at least $1 - n^{-2}$, we have

$\|f_{n,\lambda} - f^*\|_{\mathcal{H}} = O\bigl[C_{K,1}^{\alpha_\beta} \cdot \kappa^{-\alpha_\beta} \cdot \|f^*\|_{\mathcal{H}^\beta}^{1-\alpha_\beta} \cdot (\log n / n)^{\alpha_\beta/2}\bigr],$

$\int_X \bigl\|\nabla f_{n,\lambda}(x) - \nabla f^*(x)\bigr\|_2^2\,dP(x) = O\bigl[C_{K,2}^2 \cdot d \cdot C_{K,1}^{2\alpha_\beta} \cdot \kappa^{-2\alpha_\beta} \cdot \|f^*\|_{\mathcal{H}^\beta}^{2-2\alpha_\beta} \cdot (\log n / n)^{\alpha_\beta}\bigr]. \qquad (4.8)$

Proof. See §C.1 for a detailed proof.
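As a purely illustrative companion to the estimator in (4.3): for most $F^*$ the minimization has no closed form, but under the toy choice $F^*(f) = \frac{1}{2}\|f\|_{\mathcal{H}}^2$ (an assumption made here only so the first-order condition is solvable by hand, not a choice appearing in this paper), the representer theorem gives $f_{n,\lambda} = \frac{1}{n(1+\lambda)}\sum_{i=1}^n K(X_i, \cdot)$. The sketch below evaluates this estimator with a Gaussian kernel.

```python
import numpy as np

def gaussian_kernel(a, b, h=1.0):
    """K(a, b) = exp(-||a - b||^2 / (2 h^2)), broadcast over leading axes."""
    return np.exp(-np.sum((a - b) ** 2, axis=-1) / (2 * h**2))

def fit_f_n_lambda(X, lam):
    """f_{n,lambda} under the toy choice F*(f) = ||f||_H^2 / 2: setting the
    Frechet derivative of -(1/n) sum_i f(X_i) + ((1 + lam)/2) ||f||_H^2 to zero
    yields f = (1 / (n (1 + lam))) sum_i K(X_i, .)."""
    n = X.shape[0]
    coef = 1.0 / (n * (1.0 + lam))
    return lambda z: coef * gaussian_kernel(z[:, None, :], X[None, :, :]).sum(axis=1)

X = np.random.default_rng(1).normal(size=(300, 2))   # n i.i.d. draws from P
f_hat = fit_f_n_lambda(X, lam=0.05)
print(f_hat(np.zeros((1, 2))))                        # estimate at the origin
```

The point of the sketch is only the shape of the computation, a sample average of kernel sections scaled by the regularization, not the statistical guarantees of Theorem 4.5, which concern general $F^*$ under the stated assumptions.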
As shown in Theorem 4.5, when the regularization parameter $\lambda$ is properly chosen, the statistical error measured by $\|f_{n,\lambda} - f^*\|_{\mathcal{H}}$ converges to zero with high probability as the sample size $n$ goes to infinity. Moreover, when regarding $\kappa$ and $\|f^*\|_{\mathcal{H}^\beta}$ as constants, we have $\|f_{n,\lambda} - f^*\|_{\mathcal{H}} = \tilde O(n^{-\alpha_\beta/2})$, where $\tilde O(\cdot)$ omits absolute constants and logarithmic factors. This implies that the statistical error converges to zero at a sublinear $n^{-\alpha_\beta/2}$ rate. Here $\alpha_\beta$ increases with $\beta$, which quantifies the smoothness of $f^*$. In other words, when the target function $f^*$ is smoother, we obtain a faster statistical rate of convergence. Furthermore, we remark that our statistical rate is with respect to the RKHS norm, which is more challenging to handle than $\|f_{n,\lambda} - f^*\|_{L^2_P}$. Under the regularity condition that the kernel satisfies Assumption 4.4, the RKHS-norm statistical rate further implies an upper bound on the estimation error of the gradient function $\nabla f^*$. This result is critical for establishing the biases of the Wasserstein gradient estimates constructed by variational transport. Finally, as shown in §C.1, by integrating the tail probabilities, we can also establish in-expectation statistical rates similar to the high-probability bounds given in (4.8).

4.2 Optimization Theory

In what follows, we establish the rate of convergence for the variational transport algorithm where $\mathcal{F}$ is set to be an RKHS $(\mathcal{H}, \langle \cdot, \cdot \rangle_{\mathcal{H}})$ with kernel $K(\cdot, \cdot)$. As introduced in §3, variational transport creates a sequence of probability measures via Wasserstein gradient descent with biased Wasserstein gradient estimates. Specifically, starting from $\tilde p_0 = p_0$, we define

$\tilde p_{k+1} = \bigl[\mathrm{Exp}_X(-\alpha_k \cdot \nabla \tilde f^*_k)\bigr]_\sharp\, \tilde p_k, \qquad (4.9)$

where $p_0$ is the input of the algorithm and $\tilde f^*_k$ is defined similarly as in (3.19), with an RKHS norm regularization:

$\tilde f^*_k \leftarrow \mathrm{argmax}_{f \in \mathcal{H}} \Bigl\{ \frac{1}{N} \sum_{i=1}^N f(x_i) - F^*(f) - \lambda \cdot \|f\|_{\mathcal{H}}^2 \Bigr\}. \qquad (4.10)$

Here $\lambda > 0$ is the regularization parameter, and $\{x_i\}_{i \in [N]}$ are i.i.d. observations drawn from distribution $\tilde p_k$. We remark that sampling i.i.d. observations and utilizing an RKHS in (4.10) are both artifacts adopted only for theoretical analysis. We present the details of such a modified algorithm in Algorithm 2 in §A. Without these modifications, Algorithm 2 reduces to the general method proposed in Algorithm 1, a deterministic particle-based algorithm, which is more advisable for practical implementations. Such a modified version of variational transport can also be viewed as a Wasserstein gradient descent method for minimizing the functional $F$ in (4.1). Here the bias incurred in the estimation of the Wasserstein gradient stems from the statistical error of $\tilde f^*_k$. Specifically, by Propositions 3.8 and 3.9, at each $\tilde p_k$, the desired Wasserstein gradient descent updates $\tilde p_k$ via

$\mathrm{Exp}_{\tilde p_k}\bigl[-\alpha_k \cdot \mathrm{grad}\,F(\tilde p_k)\bigr] = \bigl[\mathrm{Exp}_X(-\alpha_k \cdot \nabla f^*_k)\bigr]_\sharp\, \tilde p_k, \qquad (4.11)$

where $f^*_k$ is the solution to the maximization problem in (4.1) with $p$ replaced by $\tilde p_k$, and $\alpha_k$ is the stepsize.
Comparing (4.9) with (4.11), it is evident that the estimation error $\nabla \tilde f^*_k(x) - \nabla f^*_k(x)$ contributes to the bias, which can be handled by the statistical analysis presented in §4.1. Our goal is to show that $\{\tilde p_k\}_{k \geq 0}$ defined in (4.9) converges to the minimizer of $F$. We first present a regularity condition on $F$.

Assumption 4.6. We assume that $F$ in (4.1) is gradient dominated in the sense that

$\mu \cdot \bigl[F(p) - \inf_{p \in \mathcal{P}_2(X)} F(p)\bigr] \leq \langle \mathrm{grad}\,F(p), \mathrm{grad}\,F(p) \rangle_p \qquad (4.12)$

for any $p \in \mathcal{P}_2(X)$, where $\mu > 0$ is an absolute constant. Here $\langle \cdot, \cdot \rangle_p$ is the inner product on $T_p\mathcal{P}_2(X)$, which is defined in (2.5). In addition, we assume that there exists an absolute constant $L$ such that $F$ is $L$-smooth with respect to the Wasserstein distance. Specifically, for any $p, q \in \mathcal{P}_2(X)$, we have

$F(q) \leq F(p) + \langle \mathrm{grad}\,F(p), \mathrm{Exp}_p^{-1}(q) \rangle_p + L/2 \cdot W_2^2(p, q), \qquad (4.13)$

where $\mathrm{Exp}_p^{-1}(q) \in T_p\mathcal{P}_2(X)$ is the tangent vector that specifies the geodesic connecting $p$ and $q$.

Assumption 4.6 imposes gradient dominance and smoothness conditions on $F$. Specifically, (4.12) and (4.13) extend the classical Polyak–Łojasiewicz (PL) condition (Polyak, 1963) and the smoothness condition, respectively, to distributional optimization. This assumption holds for the KL divergence $F(\cdot) = \mathrm{KL}(\cdot, \bar q)$, where $\bar q \in \mathcal{P}_2(X)$ is a fixed distribution. For such an objective functional, we have $\mathrm{grad}\,F(p) = -\mathrm{div}[p \cdot \nabla \log(p/\bar q)]$ and (4.12) reduces to

$\mu \cdot \mathrm{KL}(p, \bar q) \leq \int_X \bigl\|\nabla \log[p(x)/\bar q(x)]\bigr\|_2^2 \cdot p(x)\,dx = \mathbb{E}_p\bigl[\|\nabla \log(p/\bar q)\|_2^2\bigr] = I(p, \bar q), \qquad (4.14)$

where $I(p, \bar q)$ is known as the relative Fisher information between $p$ and $\bar q$. Inequality (4.14) corresponds to the logarithmic Sobolev inequality (Gross, 1975; Otto and Villani, 2000; Cryan et al., 2019), which holds when $\bar q$ is the density of a large class of distributions, e.g., Gaussian and log-concave distributions. In addition, the smoothness condition in (4.13) is a natural extension of the Euclidean smoothness condition to $\mathcal{P}_2(X)$, where the Euclidean distance is replaced by $W_2$.

The following assumption characterizes the regularity of the solution to the inner optimization problem associated with the variational representation of the objective functional.

Assumption 4.7. For any $p \in \mathcal{P}_2(X)$, let $f^*_p \in \mathcal{H}$ denote the solution to the variational problem in (4.1) used to define $F$. We assume that there exist a constant $R > 0$ and $\beta \in (1, 2)$ such that $f^*_p$ belongs to $\{f \in \mathcal{H}^\beta \mid \|f\|_{\mathcal{H}^\beta} \leq R\}$ for all $p \in \mathcal{P}_2(X)$, where $\mathcal{H}^\beta$ is the $\beta$-power space.

This assumption postulates that the target function $f^*_p$ belongs to an RKHS-norm ball of the $\beta$-power space $\mathcal{H}^\beta$. Such an assumption, similar to Assumption 4.2 in §4.1, ensures that the variational problem defining $F$ is well-conditioned in the sense that its solution lies in the $\beta$-power space with uniformly bounded RKHS norm in $\mathcal{H}^\beta$. We are now ready to present our main result.

Theorem 4.8 (Convergence of Variational Transport). Let $\tilde p_k$ be the iterates defined in (4.9). Suppose that Assumptions 4.1, 4.3, 4.4, 4.6, and 4.7 hold, where the constants $\mu$ and $L$ in Assumption 4.6 satisfy $0 < \mu \leq L$.
In variational transport, we set the stepsize $\alpha_t$ to a constant $\alpha$ satisfying

$\alpha \in \bigl(0,\ \min\{1/(2L),\ 1/H_0\}\bigr], \qquad (4.15)$

where we define $H_0 = 2d \cdot C_{K,3} \cdot \mu_{\max}^{(\beta-1)/2} \cdot R$. Here $\mu_{\max} = \max_{i \geq 1} \mu_i$ is the largest eigenvalue of the integral operator $T$ of the RKHS. Moreover, we set the number of particles $N$ to be sufficiently large such that $N > K$, where $K$ is the number of iterations. Then, with probability at least $1 - N^{-1}$, it holds for any $k \leq K$ that

$F(\tilde p_k) - \inf_{p \in \mathcal{P}_2(X)} F(p) \leq \rho^k \cdot \bigl[F(\tilde p_0) - \inf_{p \in \mathcal{P}_2(X)} F(p)\bigr] + (1 - \rho)^{-1} \cdot \mathrm{Err}, \qquad (4.16)$

where we define $\rho = 1 - \alpha \cdot \mu/2$ and

$\mathrm{Err} = O\bigl[\alpha \cdot C_{K,2}^2 \cdot d \cdot C_{K,1}^{2\alpha_\beta} \cdot \kappa^{-2\alpha_\beta} \cdot R^{2-2\alpha_\beta} \cdot (\log N / N)^{\alpha_\beta}\bigr]. \qquad (4.17)$

Here $\alpha_\beta = (\beta-1)/(\beta+1)$ and $O(\cdot)$ omits absolute constants.

Proof. See §C.2 for a detailed proof.

Theorem 4.8 proves that, with high probability, variational transport converges linearly to the global minimum of the objective functional $F$ up to an error term $\mathrm{Err}$ defined in (4.17). In particular, (4.16) shows that the suboptimality of $\tilde p_k$ is bounded by the sum of two terms, which correspond to the computational and statistical errors, respectively. Specifically, the first term converges to zero at a linear rate as $k$ increases, which characterizes the convergence rate of the population version of Wasserstein gradient descent. Meanwhile, the second term is independent of $k$ and characterizes the statistical error incurred in estimating the Wasserstein gradient direction using $N$ particles. When $N$ and $k$ are sufficiently large, the right-hand side of (4.16) is dominated by the statistical error $(1-\rho)^{-1} \cdot \mathrm{Err}$, which decays to zero as $N$ goes to infinity. In other words, as the number of particles and the number of iterations both go to infinity, variational transport finds the global minimum of $F$.

Furthermore, the constant stepsize $\alpha$ in (4.15) is determined by (i) the smoothness parameter $L$ of the objective functional and (ii) $H_0 = 2d \cdot C_{K,3} \cdot \mu_{\max}^{(\beta-1)/2} \cdot R$. Specifically, the requirement that $\alpha \leq 1/(2L)$ is standard in the optimization literature and guarantees that each step of Wasserstein gradient descent decreases the objective functional sufficiently, whereas $H_0$ serves as an upper bound on the Lipschitz constant of $\nabla \tilde f^*_k$, which also enforces an upper bound on the stepsize due to Proposition 3.9. Under the assumptions made in Theorem 4.8, both $L$ and $H_0$ are regarded as constants. As a result, with a constant stepsize $\alpha$, $\rho$ is a constant in $(0, 1)$, which further implies that variational transport converges linearly up to a certain statistical error. Finally, neglecting constants such as $d$, $\{C_{K,i}\}_{i \in [3]}$, $\kappa$, $R$, $\mu$, and $L$, when both $N$ and $k$ are sufficiently large, variational transport attains an $\tilde O(N^{-\alpha_\beta})$ error, which reflects the statistical error incurred in solving the inner variational problem in (4.1). Here $\tilde O(\cdot)$ hides absolute constants and logarithmic terms. The parameter $\alpha_\beta$ increases with $\beta$: when the target functions under Assumption 4.7 belong to a smoother RKHS class, variational transport attains a smaller statistical error.
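To tie the pieces together, here is a hedged end-to-end sketch of the RKHS-regularized variant analyzed above (the updates (4.9)-(4.10)), again using the toy choice $F^*(f) = \frac{1}{2}\|f\|_{\mathcal{H}}^2$ from the earlier snippet so that (4.10) has the closed form $\tilde f^*_k = \frac{1}{N(1+2\lambda)}\sum_i K(x_i, \cdot)$; in general one would fit $\tilde f^*_k$ numerically, and all concrete values below (kernel, bandwidth, stepsize) are illustrative.

```python
import numpy as np

def grad_f_tilde(x_eval, X, lam, h=1.0):
    """Gradient of the closed-form solution of (4.10) under F*(f) = ||f||_H^2 / 2:
    f(z) = (1 / (N (1 + 2 lam))) sum_i K(x_i, z), with a Gaussian kernel of
    bandwidth h, so grad f(z) = coef * sum_i K(x_i, z) (x_i - z) / h^2."""
    diff = X[None, :, :] - x_eval[:, None, :]            # (M, N, d), x_i - z
    k = np.exp(-np.sum(diff**2, axis=-1) / (2 * h**2))   # (M, N)
    coef = 1.0 / (X.shape[0] * (1.0 + 2.0 * lam))
    return coef * (k[:, :, None] * diff).sum(axis=1) / h**2

rng = np.random.default_rng(2)
particles = rng.normal(size=(400, 2))
for k in range(50):
    # Update (4.9) with the Euclidean exponential map Exp_x(v) = x + v.
    particles = particles - 0.5 * grad_f_tilde(particles, particles, lam=0.1)
```

The only intent is to show the loop structure, a sample-based fit of $\tilde f^*_k$ followed by a pushforward of every particle along $-\nabla \tilde f^*_k$; the convergence guarantee of Theorem 4.8 of course concerns general $F^*$ under Assumptions 4.1-4.7.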
5" + }, + { + "url": "http://arxiv.org/abs/2011.04622v2", + "title": "On Function Approximation in Reinforcement Learning: Optimism in the Face of Large State Spaces", + "abstract": "The classical theory of reinforcement learning (RL) has focused on tabular\nand linear representations of value functions. Further progress hinges on\ncombining RL with modern function approximators such as kernel functions and\ndeep neural networks, and indeed there have been many empirical successes that\nhave exploited such combinations in large-scale applications. There are\nprofound challenges, however, in developing a theory to support this\nenterprise, most notably the need to take into consideration the\nexploration-exploitation tradeoff at the core of RL in conjunction with the\ncomputational and statistical tradeoffs that arise in modern\nfunction-approximation-based learning systems. We approach these challenges by\nstudying an optimistic modification of the least-squares value iteration\nalgorithm, in the context of the action-value function\n represented by a kernel function or an overparameterized neural network. We\nestablish both polynomial runtime complexity and polynomial sample complexity\nfor this algorithm, without additional assumptions on the data-generating\nmodel. In particular, we prove that the algorithm incurs an\n$\\tilde{\\mathcal{O}}(\\delta_{\\mathcal{F}} H^2 \\sqrt{T})$ regret, where\n$\\delta_{\\mathcal{F}}$ characterizes the intrinsic complexity of the function\nclass $\\mathcal{F}$, $H$ is the length of each episode, and $T$ is the total\nnumber of episodes. Our regret bounds are independent of the number of states,\na result which exhibits clearly the benefit of function approximation in RL.", + "authors": "Zhuoran Yang, Chi Jin, Zhaoran Wang, Mengdi Wang, Michael I. Jordan", + "published": "2020-11-09", + "updated": "2020-12-29", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "cs.AI", + "math.OC", + "math.ST", + "stat.ML", + "stat.TH" + ], + "main_content": "Introduction Reinforcement learning (RL) algorithms combined with modern function approximators such as kernel functions and deep neural networks have produced empirical successes in a variety of application problems (e.g., Duan et al., 2016; Silver et al., 2016, 2017; Wang et al., 2018; Vinyals et al., 2019). However, theory has lagged, and when these powerful function approximators are employed, there is little theoretical guidance regarding the design of RL algorithms that are e\ufb03cient computationally or statistically, or regarding whether they even converge. In particular, function approximation blends statistical estimation issues with dynamical optimization issues, resulting in the need to balance the bias-variance tradeo\ufb00s that arise in statistical estimation with the exploration-exploitation \u2217Princeton University. Email:{zy6,chij,mendiw}@princeton.edu \u2020Northwestern University. Email: zhaoranwang@gmail.com \u2021University of California, Berkeley. Email: jordan@cs.berkeley.edu 1 \ftradeo\ufb00s that are inherent in RL. Accordingly, full theoretical treatments are mostly restricted to the tabular setting, where both the state and action spaces are discrete and the value function can be represented as a table (see, e.g., Jaksch et al., 2010; Osband et al., 2014; Azar et al., 2017; Jin et al., 2018; Osband et al., 2018; Russo, 2019), and there is a disconnect between theory and the most compelling applications. 
Provably efficient exploration in the function approximation setting has been addressed only recently, with most of the existing work considering (generalized) linear models (Yang and Wang, 2019b,a; Jin et al., 2019; Cai et al., 2019a; Zanette et al., 2020a; Wang et al., 2019). These algorithms and their analyses stem from classical upper confidence bound (UCB) or Thompson sampling methods for linear contextual bandits (Bubeck and Cesa-Bianchi, 2012; Lattimore and Szepesvári, 2019), and it seems difficult to extend them beyond the linear setting. Unfortunately, the linear assumption is rather rigid and rarely satisfied in practice; moreover, when such a model is misspecified, sublinear regret guarantees can vanish. There has been some recent work that has presented sample-efficient algorithms with general function approximation. However, these methods are either computationally intractable (Krishnamurthy et al., 2016; Jiang et al., 2017; Dann et al., 2018; Dong et al., 2019) or hinge on strong assumptions on the transition model (Wen and Van Roy, 2017; Du et al., 2019b). Thus, the following question remains open: Can we design RL algorithms that incorporate powerful nonlinear function approximators such as neural networks or kernel functions and provably achieve both computational and statistical efficiency? In this work, we provide an affirmative answer to this question. Focusing on the setting of an episodic Markov decision process (MDP) where the value function is represented by either a kernel function or an overparameterized neural network, we propose an RL algorithm with polynomial runtime complexity and sample complexity, without imposing any additional assumptions on the data-generating model. Our algorithm is relatively simple: it is an optimistic modification of the least-squares value iteration algorithm (LSVI) (Bradtke and Barto, 1996), a classical batch RL algorithm, to which we add a UCB bonus term to each iterate. Specifically, when using a kernel function, each LSVI update becomes a kernel ridge regression, and the bonus term is derived from the one proposed for kernelized contextual bandits (Srinivas et al., 2009; Valko et al., 2013; Chowdhury and Gopalan, 2017). For the neural network setting, motivated by the NeuralUCB algorithm for contextual bandits (Zhou et al., 2019), we construct a UCB bonus from the tangent features of the neural network and we perform the LSVI updates via projected gradient descent. In both of these settings, the usage of the UCB bonus ensures that the value functions constructed by the algorithm are always optimistic in the sense that they serve as uniform upper bounds of the optimal value function. Furthermore, for both the kernel and neural settings, we prove that the proposed algorithm incurs an $\tilde{\mathcal{O}}(\delta_{\mathcal{F}} H^2 \sqrt{T})$ regret, where $H$ is the length of each episode, $T$ is the total number of episodes, and $\delta_{\mathcal{F}}$ quantifies the intrinsic complexity of the function class $\mathcal{F}$. Specifically, as we will show in §4, $\delta_{\mathcal{F}}$ is determined by the interplay between the $\ell_\infty$-covering number of the function class used to represent the value function and the effective dimension of the function class $\mathcal{F}$. (See Table 1 for a summary.) A key feature of our regret bounds is that they depend on the complexity of the state space only through $\delta_{\mathcal{F}}$ and thus allow the number of states to be very large or even divergent.
This clearly exhibits the benefit of function approximation by tying it directly to sample efficiency. To the best of our knowledge, this is the first provably efficient framework for reinforcement learning with kernel and neural network function approximations.

function class $\mathcal{F}$ | regret bound
general RKHS $\mathcal{H}$ | $H^2 \cdot \sqrt{d_{\mathrm{eff}} \cdot [d_{\mathrm{eff}} + \log N_\infty(\epsilon^*)] \cdot T}$
$\gamma$-finite spectrum | $H^2 \cdot \sqrt{\gamma^3 T \cdot \log(\gamma T H)}$
$\gamma$-exponential decay | $H^2 \cdot \sqrt{(\log T)^{3/\gamma} \cdot T \cdot \log(TH)}$
$\gamma$-polynomial decay | $H^2 \cdot T^{\kappa^* + \xi^* + 1/2} \cdot [\log(TH)]^{3/2}$
overparameterized neural network | $H^2 \cdot \sqrt{d_{\mathrm{eff}} \cdot [d_{\mathrm{eff}} + \log N_\infty(\epsilon^*)] \cdot T} + \mathrm{poly}(T, H) \cdot m^{-1/12}$

Table 1: Summary of the main results. Here $H$ is the length of each episode, $T$ is the number of episodes in total, and $2m$ is the number of neurons of the overparameterized networks in the neural setting. For an RKHS $\mathcal{H}$ in general, $d_{\mathrm{eff}}$ denotes the effective dimension of $\mathcal{H}$ and $N_\infty(\epsilon^*)$ is the $\ell_\infty$-covering number of the value function class, where $\epsilon^* = H/T$. Note that to obtain concrete bounds, we apply the general result to RKHSs with various eigenvalue decay conditions. Here $\gamma$ is a positive integer in the case of a $\gamma$-finite spectrum and a positive number in the subsequent two cases. In addition, $\kappa^*$ and $\xi^*$, which are defined in (4.9), are constants that depend on $d$, the input dimension, and the parameter $\gamma$. Finally, in the last case we present the regret bound for the neural setting in general, where $d_{\mathrm{eff}}$ is the effective dimension of the neural tangent kernel (NTK) induced by the overparameterized neural network with $2m$ neurons and $\mathrm{poly}(T, H)$ is a polynomial in $T$ and $H$. Such a general regret bound can be expressed concretely as a function of the spectrum of the NTK.

Related Work. There is a vast literature on establishing provably efficient RL methods in the absence of a generative model or an explorative behavioral policy. Much of this literature has focused on the tabular setting; see Jaksch et al. (2010); Osband et al. (2014); Azar et al. (2017); Dann et al. (2017); Strehl et al. (2006); Jin et al. (2018); Russo (2019) and the references therein. In particular, Azar et al. (2017) and Jin et al. (2018) prove that an RL algorithm necessarily incurs an $\Omega(\sqrt{SAT})$ regret in the tabular setting, where $S$ and $A$ are the cardinalities of the state and action spaces, respectively. Thus, algorithms designed for the tabular setting cannot be directly applied to the function approximation setting, where the number of effective states is large. A recent literature has accordingly focused on the function approximation setting, specifically the (generalized) linear setting where the value function (or the transition model) can be represented using a linear transform of a known feature mapping (Yang and Wang, 2019a,b; Jin et al., 2019; Cai et al., 2019a; Zanette et al., 2020a; Wang et al., 2019; Ayoub et al., 2020; Zhou et al., 2020; Kakade et al., 2020). Among these papers, our work is most closely related to Jin et al. (2019). In particular, in our kernel setting, when the kernel function has a finite rank, both our LSVI algorithm and the corresponding regret bound reduce to those established in Jin et al. (2019). However, the sample complexity and regret bounds in Jin et al.
(2019) diverge when the dimension of the feature mapping goes to infinity and thus cannot be directly applied to the kernel setting. Also closely related to our work is Wang et al. (2020), which studies a similar optimistic LSVI algorithm for general function approximation. This work focuses on value function classes with bounded eluder dimension (Russo and Van Roy, 2013; Osband and Van Roy, 2014). It is unclear whether this formulation can be extended to the kernel or neural network settings. Yang and Wang (2019b) studies a kernelized MDP model where the transition model can be directly estimated. Under a slightly more general model, Ayoub et al. (2020) recently propose an optimistic model-based algorithm via value-targeted regression, where the model class is the set of functions with bounded eluder dimension. In other recent work, Kakade et al. (2020) studies a nonlinear control formulation in which the transition dynamics belongs to a known RKHS and can be directly estimated from the data. Our work differs from this work in that we impose an explicit assumption on the transition model and our proposed algorithm is model-free. Other authors who have presented regret bounds and sample complexities beyond the linear setting include Krishnamurthy et al. (2016); Jiang et al. (2017); Dann et al. (2018); Dong et al. (2019). These algorithms generally involve either high computational costs or require possibly restrictive assumptions on the transition model (Wen and Van Roy, 2013, 2017; Du et al., 2019b). Our work is also related to the literature on contextual bandits with either kernel function classes (Srinivas et al., 2009; Krause and Ong, 2011; Srinivas et al., 2012; Valko et al., 2013; Chowdhury and Gopalan, 2017; Durand et al., 2018) or neural network function classes (Zhou et al., 2019). Our construction of a bonus function for the RL setting has been adopted from this previous work. However, while contextual bandits can be viewed formally as special cases of our episodic MDP formulation with the episode length equal to one, the temporal dependence in the MDP setting raises significant challenges. In particular, the covering number $N_\infty(\epsilon^*)$ in Table 1 arises as a consequence of the fundamental challenge of performing temporally extended exploration in RL. Finally, our analysis of the optimistic LSVI algorithm is related to recent work on optimization and generalization in overparameterized neural networks within the framework of the neural tangent kernel (Jacot et al., 2018). See also Daniely (2017); Jacot et al. (2018); Wu et al. (2018); Du et al. (2018a,b); Allen-Zhu et al. (2018b,a); Zou et al. (2018); Chizat and Bach (2018); Li and Liang (2018); Arora et al. (2019); Cao and Gu (2019a,b); Lee et al. (2019). This literature focuses principally on supervised learning, however; in the RL setting we need an additional bonus term in the least-squares problem and thus require a novel analysis.

2 Background

In this section, we provide essential background on reinforcement learning, reproducing kernel Hilbert spaces (RKHSs), and overparameterized neural networks.

2.1 Episodic Markov Decision Processes

We focus on episodic MDPs, denoted MDP$(\mathcal{S}, \mathcal{A}, H, \mathcal{P}, r)$, where $\mathcal{S}$ and $\mathcal{A}$ are the state and action spaces, respectively, the integer $H > 0$ is the length of each episode, and $\mathcal{P} = \{\mathbb{P}_h\}_{h \in [H]}$ and $r = \{r_h\}_{h \in [H]}$ are the Markov transition kernels and the reward functions, respectively, where we let $[n]$ denote the set $\{1, \ldots, n\}$ for integers $n \geq 1$.
We assume that $\mathcal{S}$ is a measurable space of possibly infinite cardinality while $\mathcal{A}$ is a finite set. Finally, for each $h \in [H]$, $\mathbb{P}_h(\cdot \mid x, a)$ denotes the probability transition kernel when action $a$ is taken at state $x \in \mathcal{S}$ at timestep $h \in [H]$, and $r_h \colon \mathcal{S} \times \mathcal{A} \to [0, 1]$ is the reward function at step $h$, which is assumed to be deterministic for simplicity.

A policy $\pi$ of an agent is a set of $H$ functions $\pi = \{\pi_h\}_{h \in [H]}$ such that each $\pi_h(\cdot \mid x)$ is a probability distribution over $\mathcal{A}$. Here $\pi_h(a \mid x)$ is the probability of the agent taking action $a$ at state $x$ at the $h$-th step of the episode. The agent interacts with the environment as follows. For any $t \geq 1$, at the beginning of the $t$-th episode, the agent determines a policy $\pi^t = \{\pi^t_h\}_{h \in [H]}$ while an initial state $x^t_1$ is picked arbitrarily by the environment. Then, at each step $h \in [H]$, the agent observes the state $x^t_h \in \mathcal{S}$, picks an action $a^t_h \sim \pi^t_h(\cdot \mid x^t_h)$, and receives a reward $r_h(x^t_h, a^t_h)$. The environment then transitions into a new state $x^t_{h+1}$ drawn from the probability measure $\mathbb{P}_h(\cdot \mid x^t_h, a^t_h)$. The episode terminates when the $H$-th step is reached, and $r_H(x^t_H, a^t_H)$ is thus the final reward that the agent receives.

The performance of the agent is captured by the value function. For any policy $\pi$ and $h \in [H]$, we define the value function $V^\pi_h \colon \mathcal{S} \to \mathbb{R}$ as

$V^\pi_h(x) = \mathbb{E}_\pi\Bigl[\sum_{h'=h}^H r_{h'}(x_{h'}, a_{h'}) \,\Big|\, x_h = x\Bigr], \quad \forall x \in \mathcal{S},\ h \in [H],$

where $\mathbb{E}_\pi[\cdot]$ denotes the expectation with respect to the randomness of the trajectory $\{(x_h, a_h)\}_{h=1}^H$ obtained by following the policy $\pi$. We also define the action-value function $Q^\pi_h \colon \mathcal{S} \times \mathcal{A} \to \mathbb{R}$ as follows:

$Q^\pi_h(x, a) = \mathbb{E}_\pi\Bigl[\sum_{h'=h}^H r_{h'}(x_{h'}, a_{h'}) \,\Big|\, x_h = x,\ a_h = a\Bigr].$

Moreover, let $\pi^\star$ denote the optimal policy, which by definition yields the optimal value function $V^\star_h(x) = \sup_\pi V^\pi_h(x)$ for all $x \in \mathcal{S}$ and $h \in [H]$. To simplify the notation, we write $(\mathbb{P}_h V)(x, a) := \mathbb{E}_{x' \sim \mathbb{P}_h(\cdot \mid x, a)}[V(x')]$ for any measurable function $V \colon \mathcal{S} \to [0, H]$. Using this notation, the Bellman equation associated with a policy $\pi$ becomes

$Q^\pi_h(x, a) = (r_h + \mathbb{P}_h V^\pi_{h+1})(x, a), \qquad V^\pi_h(x) = \langle Q^\pi_h(x, \cdot), \pi_h(\cdot \mid x) \rangle_{\mathcal{A}}, \qquad V^\pi_{H+1}(x) = 0. \qquad (2.1)$

Here we let $\langle \cdot, \cdot \rangle_{\mathcal{A}}$ denote the inner product over $\mathcal{A}$. Similarly, the Bellman optimality equation is given by

$Q^\star_h(x, a) = (r_h + \mathbb{P}_h V^\star_{h+1})(x, a), \qquad V^\star_h(x) = \max_{a \in \mathcal{A}} Q^\star_h(x, a), \qquad V^\star_{H+1}(x) = 0. \qquad (2.2)$

Thus, the optimal policy $\pi^\star$ is the greedy policy with respect to $\{Q^\star_h\}_{h \in [H]}$. Moreover, we define the Bellman optimality operator $\mathbb{T}^\star_h$ by letting $(\mathbb{T}^\star_h Q)(x, a) = r_h(x, a) + (\mathbb{P}_h V)(x, a)$ for all $Q \colon \mathcal{S} \times \mathcal{A} \to \mathbb{R}$, where $V(x) = \max_{a \in \mathcal{A}} Q(x, a)$. By definition, the Bellman optimality equation in (2.2) is equivalent to $Q^\star_h = \mathbb{T}^\star_h Q^\star_{h+1}$ for all $h \in [H]$. The goal of the agent is to learn the optimal policy $\pi^\star$. For any policy $\pi$, the difference between $V^\pi_1$ and $V^\star_1$ quantifies the sub-optimality of $\pi$.
Accordingly, for a fixed integer $T > 0$, after playing for $T$ episodes, the total (expected) regret (Bubeck and Cesa-Bianchi, 2012) of the agent is defined as

$\mathrm{Regret}(T) = \sum_{t=1}^T \bigl[V^\star_1(x^t_1) - V^{\pi^t}_1(x^t_1)\bigr],$

where $\pi^t$ is the policy executed in the $t$-th episode and $x^t_1$ is the initial state.

2.2 Reproducing Kernel Hilbert Space

We will be considering the use of a reproducing kernel Hilbert space (RKHS) as the function space used to represent the optimal value functions $Q^\star_h$. To this end, hereafter, to simplify the notation, we let $z = (x, a)$ denote a state-action pair and write $\mathcal{Z} = \mathcal{S} \times \mathcal{A}$. Without loss of generality, we regard $\mathcal{Z}$ as a compact subset of $\mathbb{R}^d$, where the dimension $d$ is assumed fixed. This can be achieved if there exists an embedding $\psi_{\mathrm{embed}} \colon \mathcal{Z} \to \mathbb{R}^d$ that serves as a preprocessing of the input $(x, a)$. Let $\mathcal{H}$ be an RKHS defined on $\mathcal{Z}$ with kernel function $K \colon \mathcal{Z} \times \mathcal{Z} \to \mathbb{R}$, which contains a family of functions defined on $\mathcal{Z}$. Let $\langle \cdot, \cdot \rangle_{\mathcal{H}} \colon \mathcal{H} \times \mathcal{H} \to \mathbb{R}$ and $\|\cdot\|_{\mathcal{H}} \colon \mathcal{H} \to \mathbb{R}$ denote the inner product and RKHS norm on $\mathcal{H}$, respectively. Since $\mathcal{H}$ is an RKHS, there exists a feature mapping $\phi \colon \mathcal{Z} \to \mathcal{H}$ such that $f(z) = \langle f(\cdot), \phi(z) \rangle_{\mathcal{H}}$ for all $f \in \mathcal{H}$ and all $z \in \mathcal{Z}$. Moreover, for any $x, y \in \mathcal{Z}$, we have $K(x, y) = \langle \phi(x), \phi(y) \rangle_{\mathcal{H}}$. In this work, we assume that the kernel function $K$ is uniformly bounded in the sense that $\sup_{z \in \mathcal{Z}} K(z, z) < \infty$. Without loss of generality, we assume that $\sup_{z \in \mathcal{Z}} K(z, z) \leq 1$, which implies that $\|\phi(z)\|_{\mathcal{H}} \leq 1$ for all $z \in \mathcal{Z}$.

Let $L^2(\mathcal{Z})$ be the space of square-integrable functions on $\mathcal{Z}$ with respect to the Lebesgue measure and let $\langle \cdot, \cdot \rangle_{L^2}$ be the inner product on $L^2(\mathcal{Z})$. The kernel function $K$ induces an integral operator $T_K \colon L^2(\mathcal{Z}) \to L^2(\mathcal{Z})$ defined as

$T_K f(z) = \int_{\mathcal{Z}} K(z, z') \cdot f(z')\,dz', \quad \forall f \in L^2(\mathcal{Z}). \qquad (2.3)$

By Mercer's theorem (Steinwart and Christmann, 2008), the integral operator $T_K$ has countably many positive eigenvalues $\{\sigma_i\}_{i \geq 1}$, and the corresponding eigenfunctions $\{\psi_i\}_{i \geq 1}$ form an orthonormal basis of $L^2(\mathcal{Z})$. Moreover, the kernel function admits the spectral expansion

$K(z, z') = \sum_{i=1}^\infty \sigma_i \cdot \psi_i(z) \cdot \psi_i(z'). \qquad (2.4)$

The RKHS $\mathcal{H}$ can thus be written as a subset of $L^2(\mathcal{Z})$:

$\mathcal{H} = \Bigl\{ f \in L^2(\mathcal{Z}) \colon \sum_{i=1}^\infty \frac{\langle f, \psi_i \rangle_{L^2}^2}{\sigma_i} < \infty \Bigr\},$

and the inner product of $\mathcal{H}$ can be written as $\langle f, g \rangle_{\mathcal{H}} = \sum_{i=1}^\infty \sigma_i^{-1} \cdot \langle f, \psi_i \rangle_{L^2} \cdot \langle g, \psi_i \rangle_{L^2}$ for all $f, g \in \mathcal{H}$. By this construction, the scaled eigenfunctions $\{\sqrt{\sigma_i}\,\psi_i\}_{i \geq 1}$ form an orthonormal basis of the RKHS $\mathcal{H}$, and the feature mapping $\phi(z) \in \mathcal{H}$ can be written as $\phi(z) = \sum_{i=1}^\infty \sigma_i \psi_i(z) \cdot \psi_i$ for any $z \in \mathcal{Z}$.

2.3 Overparameterized Neural Networks

We also study a formulation in which value functions are approximated by neural networks. In this section we define the class of overparameterized neural networks that will be used in our representation. Recall that we denote $\mathcal{Z} = \mathcal{S} \times \mathcal{A}$ and view it as a subset of $\mathbb{R}^d$. For neural networks, we further regard $\mathcal{Z}$ as a subset of the unit sphere in $\mathbb{R}^d$; that is, $\|z\|_2 = 1$ for all $z = (x, a) \in \mathcal{Z}$.
A two-layer neural network $f(\cdot; b, W) \colon \mathcal{Z} \to \mathbb{R}$ with $2m$ neurons and weights $(b, W)$ is defined as

$f(z; b, W) = \frac{1}{\sqrt{2m}} \sum_{j=1}^{2m} b_j \cdot \mathrm{act}(W_j^\top z), \quad \forall z \in \mathcal{Z}. \qquad (2.5)$

Here $\mathrm{act} \colon \mathbb{R} \to \mathbb{R}$ is the activation function, $b_j \in \mathbb{R}$ and $W_j \in \mathbb{R}^d$ for all $j \in [2m]$, and $b = (b_1, \ldots, b_{2m})^\top \in \mathbb{R}^{2m}$ and $W = (W_1, \ldots, W_{2m}) \in \mathbb{R}^{2dm}$. During training, we initialize $(b, W)$ via a symmetric initialization scheme (Gao et al., 2019; Bai and Lee, 2019) as follows. For any $j \in [m]$, we set $b_j \sim \mathrm{Unif}(\{-1, 1\})$ and $W_j \sim N(0, I_d/d)$ independently, where $I_d$ is the identity matrix in $\mathbb{R}^d$. For any $j \in \{m+1, \ldots, 2m\}$, we set $b_j = -b_{j-m}$ and $W_j = W_{j-m}$. We note that such an initialization implies that the initial neural network is the zero function, a choice made only to simplify the theoretical analysis. Moreover, for ease of presentation, during training we fix $b$ at its initial value and only optimize over $W$. Finally, we write $f(z; b, W)$ as $f(z; W)$ to simplify the notation.

We assume that the neural network is overparameterized in the sense that the width $2m$ is much larger than the number of episodes $T$. Overparameterization has been shown to be pivotal for neural network training in both theory and practice (Neyshabur and Li, 2019; Allen-Zhu et al., 2018a; Arora et al., 2019). Under such a regime, the dynamics of the training process can be captured by the framework of the neural tangent kernel (NTK) (Jacot et al., 2018). Specifically, let $\varphi(\cdot; W) \colon \mathcal{Z} \to \mathbb{R}^{2md}$ be the gradient of $f(\cdot; W)$ with respect to $W$, which is given by

$\varphi(z; W) = \nabla_W f(z; W) = \bigl(\nabla_{W_1} f(z; W), \ldots, \nabla_{W_{2m}} f(z; W)\bigr), \quad \forall z \in \mathcal{Z}. \qquad (2.6)$

Let $W^{(0)}$ be the initial value of $W$. Conditioning on the realization of $W^{(0)}$, define a kernel $K_m \colon \mathcal{Z} \times \mathcal{Z} \to \mathbb{R}$ as

$K_m(z, z') = \bigl\langle \varphi(z; W^{(0)}), \varphi(z'; W^{(0)}) \bigr\rangle, \quad \forall (z, z') \in \mathcal{Z} \times \mathcal{Z}. \qquad (2.7)$

When $m$ is sufficiently large, for all $W$ in a neighborhood of $W^{(0)}$, it can be shown that $f(\cdot; W)$ is close to its linearization at $W^{(0)}$,

$f(\cdot; W) \approx \hat f(\cdot; W) = f(\cdot; W^{(0)}) + \bigl\langle \varphi(\cdot; W^{(0)}), W - W^{(0)} \bigr\rangle = \bigl\langle \varphi(\cdot; W^{(0)}), W - W^{(0)} \bigr\rangle. \qquad (2.8)$

The linearized function $\hat f(\cdot; W)$ belongs to an RKHS with kernel $K_m$. Moreover, as $m$ goes to infinity, due to the random initialization, $K_m$ converges to a kernel $K_{\mathrm{ntk}} \colon \mathcal{Z} \times \mathcal{Z} \to \mathbb{R}$, referred to as the neural tangent kernel (NTK), which is given by

$K_{\mathrm{ntk}}(z, z') = \mathbb{E}\bigl[\mathrm{act}'(w^\top z) \cdot \mathrm{act}'(w^\top z') \cdot z^\top z'\bigr], \quad (z, z') \in \mathcal{Z} \times \mathcal{Z}, \qquad (2.9)$

where $\mathrm{act}'$ is the derivative of the activation function, and the expectation in (2.9) is taken with respect to $w \sim N(0, I_d/d)$.

3 Optimistic Least-Squares Value Iteration Algorithms

In this section, we introduce optimistic least-squares value iteration algorithms based on the representations of action-value functions discussed in the previous section. The value iteration algorithm (Puterman, 2014; Sutton and Barto, 2018), which computes $\{Q^\star_h\}_{h \in [H]}$ by applying the Bellman equation in (2.2) recursively, is one of the classical methods in reinforcement learning; a tabular sketch of this recursion follows below.
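The following is a minimal sketch of that classical recursion in the tabular special case (a hypothetical finite MDP with S states, A actions, and time-homogeneous transitions P and rewards r; these names are illustrative, not the paper's notation for its function-approximation setting):

```python
import numpy as np

def tabular_value_iteration(P, r, H):
    """Finite-horizon value iteration via the Bellman optimality backup (2.2).
    P: (S, A, S) transition probabilities; r: (S, A) rewards in [0, 1]; H: horizon.
    Returns Q[h] of shape (S, A) for h = 0, ..., H-1 (0-indexed steps)."""
    S, A, _ = P.shape
    Q = np.zeros((H + 1, S, A))                 # Q[H] = 0 by convention
    for h in range(H - 1, -1, -1):
        V_next = Q[h + 1].max(axis=1)           # V_{h+1}(x') = max_a Q_{h+1}(x', a)
        Q[h] = r + P @ V_next                   # (T*_h Q_{h+1})(x, a) = r_h + P_h V_{h+1}
    return Q[:H]

# Tiny random MDP as a smoke test.
rng = np.random.default_rng(0)
P = rng.dirichlet(np.ones(4), size=(4, 2))      # S = 4, A = 2
r = rng.uniform(size=(4, 2))
Q = tabular_value_iteration(P, r, H=5)
greedy_action = Q[0].argmax(axis=1)             # pi* is greedy w.r.t. Q*
```

The point of the remainder of this section is that this table-based backup is unavailable when $\mathbb{P}_h$ is unknown and $\mathcal{S}$ is large, which is what the regression-based updates below address.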
In more detail, the algorithm forms a sequence of action-value functions $\{Q_h\}_{h \in [H]}$ via the following recursion:

$Q_h(x, a) \leftarrow (\mathbb{T}^\star_h Q_{h+1})(x, a) = (r_h + \mathbb{P}_h V_{h+1})(x, a),$
$V_{h+1}(x) \leftarrow \max_{a' \in \mathcal{A}} Q_{h+1}(x, a'), \quad \forall (x, a) \in \mathcal{S} \times \mathcal{A},\ \forall h \in [H], \qquad (3.1)$

where $Q_{H+1}$ is set to be the zero function. To turn this generic formulation into a practical solution to real-world RL problems, it is necessary to address two main problems: (i) the transition kernel $\mathbb{P}_h$ is unknown, and (ii) we can neither iterate over all state-action pairs nor store a table of size $|\mathcal{S} \times \mathcal{A}|$ when the number of states is large. To address these challenges, the least-squares value iteration algorithm (LSVI) (Bradtke and Barto, 1996; Osband et al., 2014) implements an approximate update to (3.1) by solving a least-squares regression problem based on historical data consisting of the trajectories generated by the learner in previous episodes. Specifically, at the onset of the $t$-th episode, we have observed $t-1$ transition tuples $\{(x^\tau_h, a^\tau_h, x^\tau_{h+1})\}_{\tau \in [t-1]}$, and LSVI proposes to estimate $Q^\star_h$ by replacing (3.1) with the following least-squares regression problem:

$\hat Q^t_h \leftarrow \mathrm{minimize}_{f \in \mathcal{F}} \Bigl\{ \sum_{\tau=1}^{t-1} \bigl[r_h(x^\tau_h, a^\tau_h) + V^t_{h+1}(x^\tau_{h+1}) - f(x^\tau_h, a^\tau_h)\bigr]^2 + \mathrm{pen}(f) \Bigr\}, \qquad (3.2)$

where $\mathcal{F}$ is a function class and $\mathrm{pen}(f)$ is a regularization term. Although classical RL methods did not focus on formal methods for exploration, in our setting it is critical to incorporate such methods. Following the principle of optimism in the face of uncertainty (Lattimore and Szepesvári, 2019), we define a bonus function $b^t_h \colon \mathcal{Z} \to \mathbb{R}$ and write

$Q^t_h(\cdot, \cdot) = \min\bigl\{\hat Q^t_h(\cdot, \cdot) + \beta \cdot b^t_h(\cdot, \cdot),\ H - h + 1\bigr\}^+, \qquad V^t_h(\cdot) = \max_{a \in \mathcal{A}} Q^t_h(\cdot, a), \qquad (3.3)$

where $\beta > 0$ is a parameter and $\min\{\cdot, H - h + 1\}^+$ denotes a truncation to the interval $[0, H - h + 1]$. Here we truncate the value function $Q^t_h$ to $[0, H - h + 1]$ as each reward function is bounded in $[0, 1]$. In the $t$-th episode, we then let $\pi^t$ be the greedy policy with respect to $\{Q^t_h\}_{h \in [H]}$ and execute $\pi^t$. Combining (3.2) and (3.3) yields our optimistic least-squares value iteration algorithm, whose details are given in Algorithm 1.

Algorithm 1 Optimistic Least-Squares Value Iteration with Function Approximation
1: Input: Function class $\mathcal{F}$, penalty function $\mathrm{pen}(\cdot)$, and parameter $\beta$.
2: for episode $t = 1, \ldots, T$ do
3:   Receive the initial state $x^t_1$.
4:   Set $V^t_{H+1}$ as the zero function.
5:   for step $h = H, \ldots, 1$ do
6:     Obtain $Q^t_h$ and $V^t_h$ according to (3.2) and (3.3).
7:   end for
8:   for step $h = 1, \ldots, H$ do
9:     Take action $a^t_h \leftarrow \mathrm{argmax}_{a \in \mathcal{A}} Q^t_h(x^t_h, a)$.
10:    Observe the reward $r_h(x^t_h, a^t_h)$ and the next state $x^t_{h+1}$.
11:   end for
12: end for

We note that both the bonus function $b^t_h$ in (3.3) and the penalty function in (3.2) depend on the choice of function class $\mathcal{F}$, and the optimistic LSVI in Algorithm 1 is only implementable when $\mathcal{F}$ is specified. For instance, when $\mathcal{F}$ consists of linear functions $\theta^\top \phi(z)$, where $\phi \colon \mathcal{Z} \to \mathbb{R}^d$ is a known feature mapping and $\theta \in \mathbb{R}^d$ is the parameter, we choose the ridge penalty $\|\theta\|_2^2$ in (3.2) and define $b^t_h(z)$ as $[\phi(z)^\top A^t_h \phi(z)]^{1/2}$ for some invertible matrix $A^t_h$. In this case, Algorithm 1 recovers the LSVI-UCB algorithm studied in Jin et al.
(2019), which further reduces to the tabular UCBVI algorithm (Azar et al., 2017) when $\phi$ is the canonical basis. In the rest of this section, we instantiate the optimistic LSVI framework in two ways, by setting $\mathcal{F}$ to be an RKHS and the class of overparameterized neural networks.

3.1 The Kernel Setting

In the following, we consider the case where the function class $\mathcal{F}$ is an RKHS $\mathcal{H}$ with kernel $K$. In this case, by setting $\mathrm{pen}(f)$ to be the ridge penalty, (3.2) reduces to a kernel ridge regression problem. Moreover, we define $b^t_h$ in (3.3) as the UCB bonus function that also appears in kernelized contextual bandit algorithms (Srinivas et al., 2009; Valko et al., 2013; Chowdhury and Gopalan, 2017; Durand et al., 2018; Yang and Wang, 2019b; Sessa et al., 2019; Calandriello et al., 2019). With these two modifications, we define the Kernel Optimistic Least-Squares Value Iteration (KOVI) algorithm, which is summarized in Algorithm 2. Specifically, for each $t \in [T]$, at the beginning of the $t$-th episode, we first obtain value functions $\{Q^t_h\}_{h \in [H]}$ by solving a sequence of kernel ridge regressions with the data obtained from the previous $t-1$ episodes. In particular, letting $Q^t_{H+1}$ be the zero function, for any $h \in [H]$, we replace (3.2) by the kernel ridge regression

$\hat Q^t_h \leftarrow \mathrm{minimize}_{f \in \mathcal{H}}\ \sum_{\tau=1}^{t-1} \bigl[r_h(x^\tau_h, a^\tau_h) + V^t_{h+1}(x^\tau_{h+1}) - f(x^\tau_h, a^\tau_h)\bigr]^2 + \lambda \cdot \|f\|_{\mathcal{H}}^2, \qquad (3.4)$

where $\lambda > 0$ is the regularization parameter. We then obtain $Q^t_h$ and $V^t_h$ as in (3.3), where the bonus function $b^t_h$ will be specified later. We have

$Q^t_h(s, a) = \min\bigl\{\hat Q^t_h(s, a) + \beta \cdot b^t_h(s, a),\ H - h + 1\bigr\}^+, \qquad V^t_h(s) = \max_a Q^t_h(s, a), \qquad (3.5)$

where $\beta > 0$ is a parameter. The solution to (3.4) can be written in closed form as follows. We define the response vector $y^t_h \in \mathbb{R}^{t-1}$ by letting its $\tau$-th entry be

$[y^t_h]_\tau = r_h(x^\tau_h, a^\tau_h) + V^t_{h+1}(x^\tau_{h+1}), \quad \forall \tau \in [t-1]. \qquad (3.6)$

Recall that we denote $z = (x, a)$ and $\mathcal{Z} = \mathcal{S} \times \mathcal{A}$. Based on the kernel function $K$ of the RKHS, we define the Gram matrix $K^t_h \in \mathbb{R}^{(t-1) \times (t-1)}$ and the function $k^t_h \colon \mathcal{Z} \to \mathbb{R}^{t-1}$ respectively as

$K^t_h = \bigl[K(z^\tau_h, z^{\tau'}_h)\bigr]_{\tau, \tau' \in [t-1]} \in \mathbb{R}^{(t-1) \times (t-1)}, \qquad k^t_h(z) = \bigl[K(z^1_h, z), \ldots, K(z^{t-1}_h, z)\bigr]^\top \in \mathbb{R}^{t-1}. \qquad (3.7)$

Then $\hat Q^t_h$ in (3.4) can be written as $\hat Q^t_h(z) = k^t_h(z)^\top \alpha^t_h$, where we define $\alpha^t_h = (K^t_h + \lambda \cdot I)^{-1} y^t_h$. Using $K^t_h$ and $k^t_h$ defined in (3.7), the bonus function is defined as

$b^t_h(x, a) = \lambda^{-1/2} \cdot \bigl[K(z, z) - k^t_h(z)^\top (K^t_h + \lambda I)^{-1} k^t_h(z)\bigr]^{1/2}, \qquad (3.8)$

which can be interpreted as the posterior variance of a Gaussian process regression (Rasmussen, 2003). This bonus term reduces to the UCB bonus used for linear bandits (Bubeck and Cesa-Bianchi, 2012; Lattimore and Szepesvári, 2019) when the feature mapping $\phi$ of the RKHS is finite-dimensional. Moreover, in this case, KOVI reduces to the LSVI-UCB algorithm proposed in Jin et al. (2019) for linear value functions. Furthermore, since both $\alpha^t_h$ and $b^t_h$ admit closed-form expressions, the runtime complexity of KOVI is a polynomial in $H$ and $T$.
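A minimal sketch of the closed-form computations in (3.6)-(3.8), for a single step $h$ within one episode; the Gaussian kernel and the stored arrays Z (past state-action pairs) and y (regression targets) are illustrative stand-ins for the quantities KOVI maintains:

```python
import numpy as np

def kernel(a, b, h=1.0):
    """Gaussian kernel K(a, b); inputs broadcast over leading axes."""
    return np.exp(-np.sum((a - b) ** 2, axis=-1) / (2 * h**2))

def kovi_q_and_bonus(Z, y, z_query, lam=1.0, beta=1.0, cap=1.0):
    """Kernel ridge estimate (3.4) and UCB bonus (3.8) at query points.
    Z: (t-1, d) past state-action pairs; y: (t-1,) targets from (3.6);
    z_query: (M, d). Returns the truncated optimistic value Q^t_h as in (3.5)."""
    G = kernel(Z[:, None, :], Z[None, :, :])             # Gram matrix K^t_h
    A = G + lam * np.eye(Z.shape[0])
    alpha = np.linalg.solve(A, y)                        # alpha^t_h = (K + lam I)^{-1} y
    k = kernel(Z[None, :, :], z_query[:, None, :])       # k^t_h(z), shape (M, t-1)
    q_hat = k @ alpha                                    # hat Q^t_h(z)
    var = kernel(z_query, z_query) - np.einsum('mi,mi->m', k, np.linalg.solve(A, k.T).T)
    bonus = np.sqrt(np.maximum(var, 0.0) / lam)          # b^t_h(z)
    return np.clip(q_hat + beta * bonus, 0.0, cap)       # min{., H - h + 1}^+

Z = np.random.default_rng(3).normal(size=(50, 4))
y = np.random.default_rng(4).uniform(size=50)
print(kovi_q_and_bonus(Z, y, Z[:5], lam=1.0, beta=0.5, cap=3.0))
```

The clipping of the variance at zero only guards against round-off; mathematically $K(z, z) - k^\top (K + \lambda I)^{-1} k \geq 0$ always holds.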
We refer to the bonus defined in (3.8) as a UCB bonus because, when combined with $Q^t_h$ as defined in (3.5), it serves as an upper bound of $Q^\star_h$ for all state-action pairs. In more detail, intuitively the target function of the kernel ridge regression in (3.4) is $\mathbb{T}^\star_h Q^t_{h+1}$. Due, however, to having limited data, the solution $\hat Q^t_h$ has some estimation error, as quantified by $b^t_h$. When $\beta$ is properly chosen, the bonus term captures the uncertainty of estimation, yielding $Q^t_h \geq \mathbb{T}^\star_h Q^t_{h+1}$ elementwise. Notice that $Q^t_{H+1} = Q^\star_{H+1} = 0$. The Bellman equation $Q^\star_h = \mathbb{T}^\star_h Q^\star_{h+1}$ then directly implies that $Q^t_h$ is an elementwise upper bound of $Q^\star_h$ for all $h \in [H]$. Our algorithm is thus called "optimistic value iteration", as the policy $\pi^t$ is greedy with respect to $\{Q^t_h\}_{h \in [H]}$, which is an upper bound on the optimal value function. In other words, compared with a standard value iteration algorithm, we always over-estimate the value function. Such an optimistic approach is pivotal for the RL agent to perform efficient, temporally extended exploration.

3.2 The Neural Setting

An alternative is to estimate the value functions for an RL agent using overparameterized neural networks. In particular, we estimate each $Q^\star_h$ using a neural network as given in (2.5), again using a symmetric initialization scheme (Gao et al., 2019; Bai and Lee, 2019). We assume, for simplicity, that all the neural networks share the same initial weights, denoted by $(b^{(0)}, W^{(0)})$. In addition, we fix $b = b^{(0)}$ in (2.5) and only update the value of $W \in \mathbb{R}^{2md}$.

Algorithm 2 Kernelized Optimistic Least-Squares Value Iteration (KOVI)
1: Input: Parameters $\lambda$ and $\beta$.
2: for episode $t = 1, \ldots, T$ do
3:   Receive the initial state $x^t_1$.
4:   Set $V^t_{H+1}$ as the zero function.
5:   for step $h = H, \ldots, 1$ do
6:     Compute the response $y^t_h \in \mathbb{R}^{t-1}$, the Gram matrix $K^t_h \in \mathbb{R}^{(t-1) \times (t-1)}$, and the function $k^t_h$ as in (3.6) and (3.7), respectively.
7:     Compute
8:       $\alpha^t_h = (K^t_h + \lambda \cdot I)^{-1} y^t_h$,
9:       $b^t_h(\cdot, \cdot) = \lambda^{-1/2} \cdot \bigl[K(\cdot, \cdot; \cdot, \cdot) - k^t_h(\cdot, \cdot)^\top (K^t_h + \lambda I)^{-1} k^t_h(\cdot, \cdot)\bigr]^{1/2}$.
10:    Obtain value functions $Q^t_h(\cdot, \cdot) \leftarrow \min\{k^t_h(\cdot, \cdot)^\top \alpha^t_h + \beta \cdot b^t_h(\cdot, \cdot),\ H - h + 1\}^+$ and $V^t_h(\cdot) = \max_a Q^t_h(\cdot, a)$.
11:   end for
12:   for step $h = 1, \ldots, H$ do
13:     Take action $a^t_h \leftarrow \mathrm{argmax}_{a \in \mathcal{A}} Q^t_h(x^t_h, a)$.
14:     Observe the reward $r_h(x^t_h, a^t_h)$ and the next state $x^t_{h+1}$.
15:   end for
16: end for

In the neural network setting, we replace the least-squares regression in (3.2) by a nonlinear ridge regression. That is, for any $(t, h) \in [T] \times [H]$, we define the loss function $L^t_h \colon \mathbb{R}^{2md} \to \mathbb{R}$ as

$L^t_h(W) = \sum_{\tau=1}^{t-1} \bigl[r_h(x^\tau_h, a^\tau_h) + V^t_{h+1}(x^\tau_{h+1}) - f(x^\tau_h, a^\tau_h; W)\bigr]^2 + \lambda \cdot \bigl\|W - W^{(0)}\bigr\|_2^2, \qquad (3.9)$

where $\lambda > 0$ is the regularization parameter. We then define $\hat Q^t_h$ as

$\hat Q^t_h(\cdot, \cdot) = f\bigl(\cdot, \cdot; \hat W^t_h\bigr), \qquad \text{where } \hat W^t_h = \mathrm{argmin}_{W \in \mathbb{R}^{2md}} L^t_h(W). \qquad (3.10)$

Here we assume that there is an optimization oracle that returns the global minimizer of the loss function $L^t_h$.
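As a rough sketch of how such an oracle is realized in practice, the snippet below minimizes a loss of the form (3.9) by plain gradient descent on a symmetric-initialized two-layer ReLU network; the width, step size, and iteration count are arbitrary illustrative choices, and the tangent feature $\varphi(z; W) = \nabla_W f(z; W)$ computed along the way is the ingredient used by the bonus (3.12) defined next.

```python
import numpy as np

def init_symmetric(m, d, rng):
    """Symmetric initialization: b_j in {-1, 1}, W_j ~ N(0, I/d),
    mirrored so that f(.; W^(0)) is identically zero."""
    b = rng.choice([-1.0, 1.0], size=m)
    W = rng.normal(scale=1.0 / np.sqrt(d), size=(m, d))
    return np.concatenate([b, -b]), np.vstack([W, W])

def f_and_feature(z, b, W):
    """Network value (2.5) with ReLU activation and tangent feature (2.6).
    z: (n, d). Returns f(z): (n,) and phi(z; W): (n, 2m, d)."""
    pre = z @ W.T                                     # (n, 2m)
    act, dact = np.maximum(pre, 0.0), (pre > 0).astype(float)
    scale = 1.0 / np.sqrt(W.shape[0])                 # 1 / sqrt(2m)
    fz = scale * act @ b
    phi = scale * (b * dact)[:, :, None] * z[:, None, :]
    return fz, phi

rng = np.random.default_rng(0)
b, W0 = init_symmetric(m=256, d=4, rng=rng)
Z = rng.normal(size=(64, 4)); Z /= np.linalg.norm(Z, axis=1, keepdims=True)
y = rng.uniform(size=64)                              # targets r + V_{h+1} as in (3.9)
W, lam, lr = W0.copy(), 1.0, 0.05
for _ in range(200):                                  # gradient descent as the "oracle"
    fz, phi = f_and_feature(Z, b, W)
    grad = ((fz - y)[:, None, None] * phi).sum(0) + lam * (W - W0)
    W -= lr * grad / len(Z)
```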
It has been shown in a large body of literature that, when $m$ is sufficiently large, with random initialization, simple optimization methods such as gradient descent provably find the global minimizer of the empirical loss function at a linear rate of convergence (Du et al., 2018b,a; Arora et al., 2019). Thus, such an optimization oracle can be realized by gradient descent with a sufficiently large number of iterations, and the computational cost of realizing such an oracle is a polynomial in $m$, $T$, and $H$. It remains to construct the bonus function $b^t_h$. Recall that we define $\varphi(\cdot; W) = \nabla_W f(\cdot; W)$ in (2.6). We define a matrix $\Lambda^t_h \in \mathbb{R}^{2md \times 2md}$ as

$\Lambda^t_h = \lambda \cdot I_{2md} + \sum_{\tau=1}^{t-1} \varphi\bigl(x^\tau_h, a^\tau_h; \hat W^{\tau+1}_h\bigr)\, \varphi\bigl(x^\tau_h, a^\tau_h; \hat W^{\tau+1}_h\bigr)^\top, \qquad (3.11)$

which can be computed recursively by letting $\Lambda^1_h = \lambda \cdot I_{2md}$ and

$\Lambda^t_h = \Lambda^{t-1}_h + \varphi\bigl(x^{t-1}_h, a^{t-1}_h; \hat W^t_h\bigr)\, \varphi\bigl(x^{t-1}_h, a^{t-1}_h; \hat W^t_h\bigr)^\top, \quad \forall t \geq 2.$

Then the bonus function $b^t_h$ is defined as

$b^t_h(x, a) = \bigl[\varphi\bigl(x, a; \hat W^t_h\bigr)^\top (\Lambda^t_h)^{-1} \varphi\bigl(x, a; \hat W^t_h\bigr)\bigr]^{1/2}, \quad \forall (x, a) \in \mathcal{S} \times \mathcal{A}. \qquad (3.12)$

Finally, we obtain the value functions $Q^t_h$ and $V^t_h$ via (3.5), with $\hat Q^t_h$ and $b^t_h$ defined in (3.10) and (3.12), respectively. Letting $\pi^t$ be the greedy policy with respect to $\{Q^t_h\}_{h \in [H]}$, we have defined our Neural Optimistic Least-Squares Value Iteration (NOVI) algorithm, the pseudocode for which is presented in Algorithm 3 in §A. Since $b^t_h$ in (3.12) admits a closed-form expression and $\hat W^t_h$ defined in (3.10) can be obtained in polynomial time, the runtime complexity of NOVI is a polynomial in $m$, $T$, and $H$.

An intuition for the bonus term in (3.12) can be obtained via the connection between overparameterized neural networks and neural tangent kernels. Specifically, when $m$ is sufficiently large, it can be shown that each $\hat W^t_h$ is not far from the initial value $W^{(0)}$. When this is the case, if we replace the neural tangent features $\{\varphi(\cdot; \hat W^\tau_h)\}_{\tau \in [T]}$ in (3.11) and (3.12) by $\varphi(\cdot; W^{(0)})$, then $b^t_h$ recovers the UCB bonus in linear contextual bandits and linear MDPs with the feature mapping $\varphi(\cdot; W^{(0)})$ (Abbasi-Yadkori et al., 2011; Jin et al., 2019; Wang et al., 2019). Moreover, when $m$ goes to infinity, we obtain the UCB bonus defined in (3.8) for the RKHS setting with the kernel $K_{\mathrm{ntk}}$. Accordingly, when working with overparameterized neural networks, the value functions $\{Q^t_h\}_{h \in [H]}$ are approximately elementwise upper bounds for the optimal value functions, and we again obtain optimism-based exploration, at least approximately.

4 Main Results

In this section, we prove that both KOVI and NOVI achieve an $\tilde{\mathcal{O}}(\delta_{\mathcal{F}} H^2 \sqrt{T})$ regret, where $\delta_{\mathcal{F}}$ characterizes the intrinsic complexity of the function class $\mathcal{F}$ used to approximate $\{Q^\star_h\}_{h \in [H]}$. We first consider the kernel setting and then turn to the neural network setting.

4.1 Regret of KOVI

Before presenting the theory, we first lay out a structural assumption for the kernel setting, which postulates that the Bellman operator maps any bounded value function to a bounded RKHS-norm ball.

Assumption 4.1. Let $R_Q > 0$ be a fixed constant. We define $\mathcal{Q}^\star = \{f \in \mathcal{H} \colon \|f\|_{\mathcal{H}} \leq R_Q H\}$.
We assume that for any $h \in [H]$ and any $Q \colon \mathcal{S} \times \mathcal{A} \to [0, H]$, we have $\mathbb{T}^\star_h Q \in \mathcal{Q}^\star$.

Since $Q^\star_h$ is bounded in $[0, H]$ for all $h \in [H]$, Assumption 4.1 ensures that the optimal value functions are contained in the RKHS-norm ball $\mathcal{Q}^\star$. Thus, there is no approximation bias when using functions in $\mathcal{H}$ to approximate $\{Q^\star_h\}_{h \in [H]}$. A sufficient condition for Assumption 4.1 to hold is that

$r_h(\cdot, \cdot),\ \mathbb{P}_h(x' \mid \cdot, \cdot) \in \{f \in \mathcal{H} \colon \|f\|_{\mathcal{H}} \leq 1\}, \quad \forall h \in [H],\ \forall x' \in \mathcal{S}. \qquad (4.1)$

That is, both the reward function and the Markov transition kernel can be represented by functions in the unit ball of $\mathcal{H}$. When (4.1) holds, for any $V \colon \mathcal{S} \to [0, H]$, it holds that $r_h + \mathbb{P}_h V \in \mathcal{H}$ with its RKHS norm bounded by $H + 1$. Hence, Assumption 4.1 holds with $R_Q = 2$. Similar assumptions are made in Yang and Wang (2019a,b); Jin et al. (2019); Zanette et al. (2020a,b); Wang et al. (2019) for (generalized) linear functions. See also Du et al. (2019a); Van Roy and Dong (2019); Lattimore and Szepesvari (2019) for a discussion of the necessity of such an assumption. It is shown in Du et al. (2019a) that only assuming $\{Q^\star_h\}_{h \in [H]} \subseteq \mathcal{Q}^\star$ is not sufficient for achieving a regret that is polynomial in $H$. Thus, we further assume that $\mathcal{Q}^\star$ contains the image of the Bellman operator.

Given this assumption, the complexity of $\mathcal{H}$ plays an important role in the performance of KOVI. To characterize the intrinsic complexity of $\mathcal{F}$, we consider the maximal information gain (Srinivas et al., 2009):

$\Gamma_K(T, \lambda) = \sup_{\mathcal{D} \subseteq \mathcal{Z}} \bigl\{ 1/2 \cdot \mathrm{logdet}(I + K_{\mathcal{D}}/\lambda) \bigr\}, \qquad (4.2)$

where the supremum is taken over all $\mathcal{D} \subseteq \mathcal{Z}$ with $|\mathcal{D}| \leq T$. Here $K_{\mathcal{D}}$ is the Gram matrix defined as in (3.7), i.e., based on $\mathcal{D}$; $\lambda > 0$ is a parameter; and the subscript $K$ in $\Gamma_K$ indicates the kernel $K$. The magnitude of $\Gamma_K(T, \lambda)$ depends on how fast the eigenvalues of $\mathcal{H}$ decay to zero, and it can be viewed as a proxy for the dimension of $\mathcal{H}$ when $\mathcal{H}$ is infinite-dimensional. In the special case where $\mathcal{H}$ has finite rank, we have $\Gamma_K(T, \lambda) = \mathcal{O}(\gamma \cdot \log T)$, where $\gamma$ is the rank of $\mathcal{H}$. Furthermore, for any $h \in [H]$, note that each $Q^t_h$ constructed by KOVI takes the following form:

$Q(z) = \min\bigl\{Q_0(z) + \beta \cdot \lambda^{-1/2} \bigl[K(z, z) - k_{\mathcal{D}}(z)^\top (K_{\mathcal{D}} + \lambda I)^{-1} k_{\mathcal{D}}(z)\bigr]^{1/2},\ H - h + 1\bigr\}^+, \qquad (4.3)$

where $Q_0 \in \mathcal{H}$, an analog of $\hat Q^t_h$ in (3.4), is the solution to a kernel ridge regression problem and $\mathcal{D} \subseteq \mathcal{Z}$ is a discrete subset of $\mathcal{Z}$ containing no more than $T$ state-action pairs. Moreover, $K_{\mathcal{D}}$ and $k_{\mathcal{D}}$ are defined analogously to (3.7) based on the data in $\mathcal{D}$. Then, for any $R, B > 0$, we define a function class $\mathcal{Q}_{\mathrm{ucb}}(h, R, B)$ as

$\mathcal{Q}_{\mathrm{ucb}}(h, R, B) = \bigl\{Q \colon Q \text{ takes the form of (4.3) with } \|Q_0\|_{\mathcal{H}} \leq R,\ \beta \in [0, B],\ |\mathcal{D}| \leq T\bigr\}. \qquad (4.4)$

As we will show in Lemma C.1, we have $\|\hat Q^t_h\|_{\mathcal{H}} \leq R_T$ for all $(t, h) \in [T] \times [H]$, where $R_T = 2H\sqrt{\Gamma_K(T, \lambda)}$. Thus, when $B$ exceeds the value $\beta$ in (3.5), each $Q^t_h$ is contained in $\mathcal{Q}_{\mathrm{ucb}}(h, R_T, B)$. Moreover, since $r_h + \mathbb{P}_h V^t_{h+1} = \mathbb{T}^\star_h Q^t_{h+1}$ is the population ground truth of the ridge regression in (3.4), the complexity of $\mathcal{Q}_{\mathrm{ucb}}(h+1, R_T, B)$ naturally appears when quantifying the uncertainty of $\hat Q^t_h$.
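The information gain in (4.2) is easy to probe numerically: for a fixed design $\mathcal{D}$ (here random points on the sphere, an illustrative choice rather than the supremum over all designs, so this only lower-bounds $\Gamma_K$), one can watch $\frac{1}{2}\mathrm{logdet}(I + K_{\mathcal{D}}/\lambda)$ grow slowly, roughly logarithmically in $T$, for a smooth kernel:

```python
import numpy as np

def information_gain(D, kernel, lam=1.0):
    """1/2 * logdet(I + K_D / lam) for a fixed design D, cf. (4.2)."""
    K = kernel(D[:, None, :], D[None, :, :])
    sign, logdet = np.linalg.slogdet(np.eye(len(D)) + K / lam)
    return 0.5 * logdet

rbf = lambda a, b: np.exp(-np.sum((a - b) ** 2, axis=-1) / 2.0)
rng = np.random.default_rng(0)
for T in [50, 100, 200, 400, 800]:
    D = rng.normal(size=(T, 3))
    D /= np.linalg.norm(D, axis=1, keepdims=True)      # points on the unit sphere
    print(T, round(information_gain(D, rbf), 2))       # grows slowly in T
```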
For any $\epsilon > 0$, let $N_\infty(\epsilon; h, B)$ be the $\epsilon$-covering number of $\mathcal{Q}_{\mathrm{ucb}}(h, R_T, B)$ with respect to the $\ell_\infty$-norm on $\mathcal{Z}$. It characterizes the complexity of the value functions constructed by KOVI and is determined by the spectral structure of $\mathcal{H}$. We are now ready to present a regret bound for KOVI.

Theorem 4.2. Assume that there exists $B_T > 0$ satisfying
$$8 \cdot \Gamma_K(T, 1 + 1/T) + 8 \cdot \log N_\infty(\epsilon^*; h, B_T) + 16 \cdot \log(2TH) + 22 + 2R_Q^2 \leq (B_T/H)^2 \qquad (4.5)$$
for all $h \in [H]$, where $\epsilon^* = H/T$. We set $\lambda = 1 + 1/T$ and $\beta = B_T$ in Algorithm 2. Then, under Assumption 4.1, with probability at least $1 - (T^2H^2)^{-1}$, we have
$$\mathrm{Regret}(T) \leq 5\beta H \cdot \sqrt{T \cdot \Gamma_K(T, \lambda)}. \qquad (4.6)$$

As shown in (4.6), the regret can be written as $O(H^2 \cdot \delta_{\mathcal{H}} \cdot \sqrt{T})$, where $\delta_{\mathcal{H}} = B_T/H \cdot \sqrt{\Gamma_K(T, \lambda)}$ reflects the complexity of $\mathcal{H}$ and $B_T$ satisfies (4.5). Specifically, $\delta_{\mathcal{H}}$ involves (i) the $\ell_\infty$-covering number $N_\infty(\epsilon^*, h, B_T)$ of $\mathcal{Q}_{\mathrm{ucb}}(h, R_T, B_T)$, and (ii) the effective dimension $\Gamma_K(T, \lambda)$. Moreover, neglecting the constant and logarithmic terms in (4.5), it suffices to choose $B_T$ satisfying
$$B_T/H \asymp \sqrt{\Gamma_K(T, \lambda)} + \max_{h \in [H]} \sqrt{\log N_\infty(\epsilon^*, h, B_T)},$$
which reduces the regret bound in (4.6) to the following:
$$\mathrm{Regret}(T) = \widetilde{O}\Big( H^2 \cdot \underbrace{\big[ \Gamma_K(T, \lambda) + \max_{h \in [H]} \sqrt{\Gamma_K(T, \lambda) \cdot \log N_\infty(\epsilon^*, h, B_T)} \big]}_{\delta_{\mathcal{H}}:\ \text{complexity of function class } \mathcal{H}} \cdot \sqrt{T} \Big). \qquad (4.7)$$

To obtain some intuition for (4.7), consider the tabular case where $\mathcal{Q}^\star$ consists of all measurable functions defined on $\mathcal{S} \times \mathcal{A}$ with range $[0, H]$. In this case, the value function class $\mathcal{Q}_{\mathrm{ucb}}(h, R_T, B_T)$ can be set to $\mathcal{Q}^\star$, whose log-covering number satisfies $\log N_\infty(\epsilon^*, h, B_T) \leq |\mathcal{S} \times \mathcal{A}| \cdot \log T$. It can be shown that the effective dimension is also $O(|\mathcal{S} \times \mathcal{A}| \cdot \log T)$. Thus, ignoring logarithmic terms, Theorem 4.2 implies that, by choosing $\beta \asymp H \cdot |\mathcal{S} \times \mathcal{A}|$, optimistic least-squares value iteration achieves an $\widetilde{O}(H^2 \cdot |\mathcal{S} \times \mathcal{A}| \cdot \sqrt{T})$ regret.

Furthermore, we remark that the regret bound in (4.6) holds in general for any RKHS. The result hinges on (i) Assumption 4.1, which postulates that the RKHS-norm ball $\{f \in \mathcal{H} : \|f\|_{\mathcal{H}} \leq R_Q H\}$ contains the image of the Bellman operator, and (ii) the requirement that the inequality in (4.5) admits a solution $B_T$, which is set to be $\beta$ in Algorithm 2. Here we set $\beta$ sufficiently large so as to dominate the uncertainty of $\widehat{Q}_h^t$, whereas to quantify such uncertainty, we utilize uniform concentration over the value function class $\mathcal{Q}_{\mathrm{ucb}}(h + 1, R_T, \beta)$, whose complexity metric, the $\ell_\infty$-covering number, in turn depends on $\beta$. This intricate desideratum leads to (4.5), which determines $\beta$ implicitly. It is worth noting that the uniform concentration is unnecessary when $H = 1$. In that case, it suffices to choose $\beta = \widetilde{O}(\sqrt{\Gamma_K(T, \lambda)})$, and KOVI incurs an $\widetilde{O}(\Gamma_K(T, \lambda) \cdot \sqrt{T})$ regret, which matches the regret bounds of UCB algorithms for kernelized contextual bandits in Srinivas et al. (2009) and Chowdhury and Gopalan (2017). Here $\widetilde{O}(\cdot)$ omits logarithmic terms. Thus, the covering number in (4.7) is specific to MDPs and arises due to the temporal dependence within an episode.
To obtain a concrete regret bound from (4.6), it remains to further characterize $\Gamma_K(T, \lambda)$ and $\log N_\infty(\epsilon^*, h, B_T)$ using characteristics of $\mathcal{H}$. To this end, we specify the eigenvalue decay property of $\mathcal{H}$.

Assumption 4.3 (Eigenvalue Decay of $\mathcal{H}$). Recall that the integral operator $T_K$ defined in (2.3) has eigenvalues $\{\sigma_j\}_{j \geq 1}$ and eigenfunctions $\{\psi_j\}_{j \geq 1}$. We assume that $\{\sigma_j\}_{j \geq 1}$ satisfies one of the following three eigenvalue decay conditions for some constant $\gamma > 0$: (i) $\gamma$-finite spectrum: $\sigma_j = 0$ for all $j > \gamma$, where $\gamma$ is a positive integer. (ii) $\gamma$-exponential decay: there exist absolute constants $C_1$ and $C_2$ such that $\sigma_j \leq C_1 \cdot \exp(-C_2 \cdot j^\gamma)$ for all $j \geq 1$. (iii) $\gamma$-polynomial decay: there exists a constant $C_1$ such that $\sigma_j \leq C_1 \cdot j^{-\gamma}$ for all $j \geq 1$, where $\gamma > 1$. For cases (ii) and (iii), we further assume that there exist constants $\tau \in [0, 1/2)$ and $C_\psi > 0$ such that $\sup_{z \in \mathcal{Z}} \sigma_j^\tau \cdot |\psi_j(z)| \leq C_\psi$ for all $j \geq 1$.

Case (i) implies that $\mathcal{H}$ is a $\gamma$-dimensional RKHS. When this is the case, under Assumption 4.1, there exists a feature mapping $\varphi : \mathcal{Z} \to \mathbb{R}^\gamma$ such that, for any $V : \mathcal{S} \to [0, H]$, $r_h + P_h V$ is a linear function of $\varphi$. Such a property is satisfied by the linear MDP model studied in Yang and Wang (2019a), Yang and Wang (2019b), Jin et al. (2019), and Zanette et al. (2020a). Moreover, when $\mathcal{H}$ satisfies case (i), KOVI reduces to the LSVI-UCB algorithm studied in Jin et al. (2019). In addition, cases (ii) and (iii) postulate that the eigenvalues of $T_K$ decay exponentially and polynomially fast, respectively, where $\gamma$ is a constant that might depend on the input dimension $d$, which is assumed fixed throughout this paper. For example, the squared exponential kernel belongs to case (ii) with $\gamma = 1/d$, whereas the Matérn kernel with parameter $\nu > 2$ belongs to case (iii) with $\gamma = 1 + 2\nu/d$ (Srinivas et al., 2009). Moreover, we assume that there exists $\tau \in [0, 1/2)$ such that $\sigma_j^\tau \cdot \|\psi_j\|_\infty$ is universally bounded. Since $K(z, z) \leq 1$, this condition is automatically satisfied for $\tau = 1/2$; here, however, we require $\tau < 1/2$, which holds when the magnitudes of the eigenfunctions do not grow too fast compared to the decay of the eigenvalues. Such a condition is significantly weaker than assuming $\|\psi_j\|_\infty$ is universally bounded, an assumption commonly made in the literature on nonparametric statistics (Lafferty and Lebanon, 2005; Shang et al., 2013; Zhang et al., 2015; Lu et al., 2016; Yang et al., 2017). It can be shown that the squared exponential kernel on the unit sphere in $\mathbb{R}^d$ satisfies this condition for any $\tau > 0$ (see Appendix §B.3), and the Matérn kernel on $[0, 1]$ satisfies it with $\tau = 0$ (Yang et al., 2017). We refer to Mendelson et al. (2010) for a more detailed discussion.
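Before stating the bounds, a small numerical illustration may help separate the three regimes. The sketch below plugs synthetic eigenvalue sequences into the common heuristic proxy $\Gamma_K(T, \lambda) \approx 1/2 \sum_j \log(1 + T\sigma_j/\lambda)$; the proxy itself, the constants, and the truncation at 5000 eigenvalues are illustrative assumptions, not statements from the analysis.

import numpy as np

def gain_proxy(sigma, T, lam=1.0):
    """Heuristic proxy 0.5 * sum_j log(1 + T * sigma_j / lambda)
    for the maximal information gain of a kernel with eigenvalues sigma."""
    return 0.5 * np.sum(np.log1p(T * sigma / lam))

j = np.arange(1, 5001).astype(float)
spectra = {
    "finite (gamma = 10)":     np.where(j <= 10, 1.0 / j, 0.0),
    "exponential (gamma = 1)": np.exp(-j),
    "polynomial (gamma = 2)":  j ** -2.0,
}
for T in (100, 10_000):
    for name, sigma in spectra.items():
        print(f"T={T:6d}  {name:24s} {gain_proxy(sigma, T):8.1f}")
# Finite spectra grow like gamma * log T, exponential decay like polylog T,
# and polynomial decay grows polynomially in T, mirroring Assumption 4.3.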
We now present regret bounds for the three eigenvalue decay conditions.

Corollary 4.4. Under Assumptions 4.1 and 4.3, we set $\lambda = 1 + 1/T$ and $\beta = B_T$ in Algorithm 2, where $B_T$ is defined as
$$B_T = \begin{cases} C_b \cdot \gamma H \cdot \sqrt{\log(\gamma \cdot TH)} & \gamma\text{-finite spectrum}, \\ C_b \cdot H \sqrt{\log(TH)} \cdot (\log T)^{1/\gamma} & \gamma\text{-exponential decay}, \\ C_b \cdot H \log(TH) \cdot T^{\kappa^*} & \gamma\text{-polynomial decay}. \end{cases} \qquad (4.8)$$
Here $C_b$ is an absolute constant that does not depend on $T$ or $H$, and $\kappa^*$ is given by
$$\kappa^* = \max\Big\{ \xi^*,\ \frac{2d + \gamma + 1}{(d + \gamma) \cdot [\gamma(1 - 2\tau) - 1]},\ \frac{2}{\gamma(1 - 2\tau) - 3} \Big\}, \qquad \xi^* = \frac{d + 1}{2(\gamma + d)}. \qquad (4.9)$$
For the third case, we further assume that $\gamma$ is sufficiently large such that $\kappa^* + \xi^* < 1/2$. Then, there exists an absolute constant $C_r$ such that, with probability at least $1 - (T^2H^2)^{-1}$, we have
$$\mathrm{Regret}(T) \leq \begin{cases} C_r \cdot H^2 \cdot \sqrt{\gamma^3 T \cdot \log(\gamma TH)} & \gamma\text{-finite spectrum}, \\ C_r \cdot H^2 \cdot \sqrt{(\log T)^{3/\gamma} \cdot T \cdot \log(TH)} & \gamma\text{-exponential decay}, \\ C_r \cdot H^2 \cdot T^{\kappa^* + \xi^* + 1/2} \cdot [\log(TH)]^{3/2} & \gamma\text{-polynomial decay}. \end{cases} \qquad (4.10)$$

Corollary 4.4 asserts that, when $\beta$ is chosen properly according to the eigenvalue decay property of $\mathcal{H}$, KOVI incurs a sublinear regret under all three cases specified in Assumption 4.3. Note that the linear MDP (Jin et al., 2019) satisfies the $\gamma$-finite spectrum condition, and KOVI recovers the LSVI-UCB algorithm studied in Jin et al. (2019) when restricted to this setting. Moreover, our $\widetilde{O}(H^2 \cdot \sqrt{\gamma^3 T})$ bound also matches the regret bound in Jin et al. (2019). In addition, under the $\gamma$-exponential eigenvalue decay condition, as we will show in Appendix §D, the log-covering number and the effective dimension are bounded by $(\log T)^{1 + 2/\gamma}$ and $(\log T)^{1 + 1/\gamma}$, respectively. Plugging these facts into (4.7), we obtain the sublinear regret in (4.10). As a concrete example, for the squared exponential kernel we obtain an $\widetilde{O}(H^2 \cdot (\log T)^{1 + 1.5d} \cdot \sqrt{T})$ regret, where $d$ is the input dimension. This regret is $(\log T)^{d/2}$ worse than that in Srinivas et al. (2009) for kernel contextual bandits, which is due to bounding the log-covering number. Furthermore, for the case of $\gamma$-polynomial decay, since the eigenvalues decay to zero rather slowly, we fail to obtain a $\sqrt{T}$-regret and only obtain the sublinear regret in (4.10). As we will show in the proof, the log-covering number and the effective dimension are $\widetilde{O}(T^{2\kappa^*})$ and $\widetilde{O}(T^{2\xi^*})$, respectively, which, combined with (4.7), yield the regret bound in (4.10). As a concrete example, consider the Matérn kernel with parameter $\nu > 0$, where we have $\gamma = 1 + 2\nu/d$ and $\tau = 0$. In this case, when $\nu$ is sufficiently large such that $T^{2\xi^* - 1/2} = o(1)$, we have
$$\xi^* = \frac{d(d + 1)}{2[2\nu + d(d + 1)]}, \qquad \kappa^* = \max\Big\{ \xi^*,\ \frac{3}{d} - 1,\ \frac{2}{d} - 1 \Big\},$$
which implies that KOVI achieves an $\widetilde{O}(H^2 \cdot T^{2\xi^* + 1/2})$ regret when $d$ is large. Such a regret bound matches that in Srinivas et al. (2009) for the bandit setting. See Appendix §B.1 for details.
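The Matérn exponents above are easy to evaluate numerically. The following sketch transcribes the displayed formulas for $\gamma$, $\xi^*$, and $\kappa^*$ and prints the resulting regret exponent $\kappa^* + \xi^* + 1/2$; the parameter choices are arbitrary, and the transcription follows this section's rendering of the formulas.

def exponents(d, nu):
    """gamma, xi*, kappa*, and the regret exponent kappa* + xi* + 1/2
    for the Matern example (tau = 0), transcribed from the display above."""
    gamma = 1.0 + 2.0 * nu / d
    xi = d * (d + 1) / (2.0 * (2.0 * nu + d * (d + 1)))
    kappa = max(xi, 3.0 / d - 1.0, 2.0 / d - 1.0)
    return gamma, xi, kappa, kappa + xi + 0.5

for d, nu in [(4, 100.0), (4, 1000.0), (8, 1000.0)]:
    g, xi, kappa, expo = exponents(d, nu)
    print(f"d={d} nu={nu}: gamma={g:.1f} xi*={xi:.4f} "
          f"kappa*={kappa:.4f} regret exponent={expo:.4f}")
# The exponent tends to 1/2, i.e., a near sqrt(T) regret, as nu grows.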
Furthermore, similarly to the discussion in Section 3.1 of Jin et al. (2018), the regret bound in (4.10) directly translates into an upper bound on the sample complexity as follows. When the initial state is fixed for all episodes, for any fixed $\epsilon > 0$, with at least a constant probability, KOVI returns a policy $\pi$ satisfying $V_1^\star(x_1) - V_1^\pi(x_1) \leq \epsilon$ using $O(H^4 B_T^2 \cdot \Gamma_K(T, \lambda)/\epsilon^2)$ samples. Specifically, for the three cases considered in Assumption 4.3, this sample complexity guarantee reduces to
$$\widetilde{O}\big(H^4 \cdot \gamma^3/\epsilon^2\big), \qquad \widetilde{O}\big(H^4 \cdot (\log T)^{2 + 3/\gamma}/\epsilon^2\big), \qquad \widetilde{O}\big(H^4 \cdot T^{2(\kappa^* + \xi^*)}/\epsilon^2\big),$$
respectively. Moreover, similarly to Jin et al. (2019), our analysis can be extended to the misspecified setting, where $\inf_{f \in \mathcal{Q}^\star} \|f - \mathbb{T}_h^\star Q\|_\infty \leq \mathrm{err}_{\mathrm{mis}}$ for all $Q : \mathcal{Z} \to [0, H]$. Here $\mathrm{err}_{\mathrm{mis}}$ is the model misspecification error. Under this setting, KOVI suffers an extra $\mathrm{err}_{\mathrm{mis}} \cdot TH$ regret. The analysis for the misspecified setting is similar to that for the neural network setting, which is presented in §4.2.

4.2 Regret of NOVI

In this section, we establish the regret of NOVI. Throughout this subsection, we let $\mathcal{H}$ denote the RKHS whose kernel function is $K_{\mathrm{ntk}}$, defined in (2.9). Recall that we regard $\mathcal{Z} = \mathcal{S} \times \mathcal{A}$ as a subset of the unit sphere $\mathbb{S}^{d-1} = \{z \in \mathbb{R}^d : \|z\|_2 = 1\}$. Let $(b^{(0)}, W^{(0)})$ be the initial value of the network weights obtained via the symmetric initialization scheme introduced in §2.3. Conditioning on the randomness of the initialization, we define a finite-rank kernel $K_m : \mathcal{Z} \times \mathcal{Z} \to \mathbb{R}$ by letting
$$K_m(z, z') = \big\langle \nabla_W f(z; b^{(0)}, W^{(0)}),\ \nabla_W f(z'; b^{(0)}, W^{(0)}) \big\rangle.$$
Notice that the rank of $K_m$ is $md$, where $m$ is much larger than $T$ and $H$ and is allowed to increase to infinity. Moreover, with a slight abuse of notation, we define
$$\mathcal{Q}^\star = \Big\{ f_\alpha(z) = \int_{\mathbb{R}^d} \mathrm{act}'(w^\top z) \cdot z^\top \alpha(w)\, \mathrm{d}p_0(w) :\ \alpha : \mathbb{R}^d \to \mathbb{R}^d,\ \|\alpha\|_{2,\infty} \leq R_Q H/\sqrt{d} \Big\}, \qquad (4.11)$$
where $R_Q$ is a positive number, $p_0$ is the density of $N(0, I_d/d)$, and we define $\|\alpha\|_{2,\infty} = \sup_w \|\alpha(w)\|_2$. That is, $\mathcal{Q}^\star$ consists of functions that can be expressed as an infinite number of random features. As shown in Lemma C.1 of Gao et al. (2019), $\mathcal{Q}^\star$ is a dense subset of the RKHS $\mathcal{H}$. Thus, when $R_Q$ is sufficiently large, $\mathcal{Q}^\star$ in (4.11) is an expressive function class. We impose the following condition on $\mathcal{Q}^\star$.

Assumption 4.5. We assume that for any $h \in [H]$ and any $Q : \mathcal{S} \times \mathcal{A} \to [0, H]$, we have $\mathbb{T}_h^\star Q \in \mathcal{Q}^\star$.

Assumption 4.5 is in the same vein as Assumption 4.1. Here we focus on $\mathcal{Q}^\star$ instead of an RKHS-norm ball of the NTK solely for technical convenience. Since functions of the form in (4.11) are dense in $\mathcal{H}$, Assumptions 4.5 and 4.1 are very similar. To characterize the value function class associated with NOVI, for any discrete set $\mathcal{D} \subseteq \mathcal{Z}$, in a manner akin to (3.11), we define
$$\Lambda_{\mathcal{D}} = \lambda \cdot I_{2md} + \sum_{z \in \mathcal{D}} \phi(z; W^{(0)})\, \phi(z; W^{(0)})^\top,$$
where $\phi(\cdot\,; W^{(0)})$ is the neural tangent feature defined in (2.6). With a slight abuse of notation, for any $R, B > 0$, we let $\mathcal{Q}_{\mathrm{ucb}}(h, R, B)$ denote the class of functions of the following form:
$$Q(z) = \min\big\{ \langle \phi(z; W^{(0)}), W \rangle + \beta \cdot \big[ \phi(z; W^{(0)})^\top (\Lambda_{\mathcal{D}})^{-1} \phi(z; W^{(0)}) \big]^{1/2},\ H - h + 1 \big\}^+, \qquad (4.12)$$
where $W \in \mathbb{R}^{2md}$ satisfies $\|W\|_2 \leq R$, $\beta \in [0, B]$, and $\mathcal{D}$ has cardinality no more than $T$.
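The kernel $K_m$ and the feature $\phi(\cdot\,; W^{(0)})$ entering $\Lambda_{\mathcal{D}}$ above are just inner products of network gradients, which can be formed explicitly for a small two-layer network. The sketch below assumes the architecture $f(z; W) = m^{-1/2} \sum_j b_j\, \mathrm{act}(w_j^\top z)$ with fixed output weights $b_j$ and a tanh activation, so the gradient with respect to each $w_j$ has the closed form $m^{-1/2} b_j\, \mathrm{act}'(w_j^\top z)\, z$; the width, scaling, and choice of activation are illustrative assumptions rather than the paper's exact architecture.

import numpy as np

m, d = 256, 5
rng = np.random.default_rng(0)
W0 = rng.normal(size=(m, d)) / np.sqrt(d)      # initial weights W^(0)
b = rng.choice([-1.0, 1.0], size=m)            # fixed output weights

def ntk_features(z):
    """Flattened neural tangent feature phi(z; W^(0))."""
    act_prime = 1.0 - np.tanh(W0 @ z) ** 2     # derivative of tanh
    return ((b * act_prime)[:, None] * z[None, :]).ravel() / np.sqrt(m)

def K_m(z1, z2):
    """Empirical NTK: inner product of gradient features."""
    return float(ntk_features(z1) @ ntk_features(z2))

z1, z2 = rng.normal(size=d), rng.normal(size=d)
z1, z2 = z1 / np.linalg.norm(z1), z2 / np.linalg.norm(z2)  # Z lies on the sphere
print(K_m(z1, z2), K_m(z1, z1))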
Intuitively, when both $R$ and $B$ are sufficiently large, $\mathcal{Q}_{\mathrm{ucb}}(h, R, B)$ contains the counterpart of the neural-network-based value function $Q_h^t$ that is based on neural tangent features. Moreover, when $m$ is sufficiently large, it is expected that $Q_h^t$ is well approximated by functions in $\mathcal{Q}_{\mathrm{ucb}}(h, R, B)$, with an approximation error that decays with $m$. It is worth noting that the class of linear functions of $\phi(\cdot\,; W^{(0)})$ forms an RKHS with the kernel $K_m$ in (2.7). Any function $g$ in this class can be written as $g(\cdot) = \langle \phi(\cdot\,; W^{(0)}), W_g \rangle$ for some $W_g \in \mathbb{R}^{2md}$, and its RKHS norm is given by $\|W_g\|_2$. Thus, $\mathcal{Q}_{\mathrm{ucb}}(h, R, B)$ defined above coincides with the counterpart defined in (4.4) with the kernel function being $K_m$. We set $R_T = H\sqrt{2T/\lambda}$ and let $N_\infty(\epsilon; h, B)$ denote the $\epsilon$-covering number of $\mathcal{Q}_{\mathrm{ucb}}(h, R_T, B)$ with respect to the $\ell_\infty$-norm on $\mathcal{Z}$. We can now present a general regret bound for NOVI.

Theorem 4.6. Suppose Assumption 4.5 holds, and assume that $m$ is sufficiently large such that $m = \Omega(T^{13} H^{14} \cdot (\log m)^3)$. In Algorithm 3, we let $\lambda$ be a sufficiently large constant and let $\beta = B_T$, which satisfies the inequality
$$16 \cdot \Gamma_{K_m}(T, \lambda) + 16 \cdot \log N_\infty(\epsilon^*, h + 1, B_T) + 32 \cdot \log(2TH) + 4R_Q^2 \cdot (1 + \lambda/d) \leq (B_T/H)^2 \qquad (4.13)$$
for all $h \in [H]$. Here $\epsilon^* = H/T$ and $\Gamma_{K_m}(T, \lambda)$ is the maximal information gain defined for the kernel $K_m$. In addition, for the neural network in (2.5), we assume that the activation function $\mathrm{act}$ is $C_{\mathrm{act}}$-smooth, i.e., its derivative $\mathrm{act}'$ is $C_{\mathrm{act}}$-Lipschitz, and that $m$ is sufficiently large such that
$$m = \Omega\big( \beta^{12} \cdot T^{13} \cdot H^{14} \cdot (\log m)^3 \big). \qquad (4.14)$$
Then, with probability at least $1 - (T^2H^2)^{-1}$, we have
$$\mathrm{Regret}(T) \leq 5\beta H \cdot \sqrt{T \cdot \Gamma_{K_m}(T, \lambda)} + 10\beta TH \cdot \iota, \qquad (4.15)$$
where we define $\iota = T^{7/12} \cdot H^{1/6} \cdot m^{-1/12} \cdot (\log m)^{1/4}$.

This theorem shows that, when $m$ is sufficiently large, NOVI enjoys a regret bound similar to that of KOVI. Specifically, the choice of $\beta$ in (4.13) is similar to that in (4.5) for the kernel $K_m$. Here we set $\lambda$ to be an absolute constant, as $\sup_z K_m(z, z) \leq 1$ no longer holds. In addition, here we assume that $\mathrm{act}'$ is $C_{\mathrm{act}}$-Lipschitz on $\mathbb{R}$, which can be relaxed to assuming only that $\mathrm{act}'$ is Lipschitz continuous on a bounded interval of $\mathbb{R}$ that contains $w^\top z$ with high probability, where $w$ is drawn from the initial distribution of $W_j$, $j \in [m]$. Comparing (4.6) and (4.15), we observe that, when $m$ is sufficiently large, NOVI can be viewed as a misspecified version of KOVI for the RKHS with kernel $K_m$, where the model misspecification error is $\mathrm{err}_{\mathrm{mis}} = 10\beta \cdot \iota$. Specifically, the first term in (4.15) is the same as that in (4.6), where the choice of $\beta$ and $\Gamma_{K_m}(T, \lambda)$ reflects the intrinsic complexity of $K_m$. The second term is equal to $\mathrm{err}_{\mathrm{mis}} \cdot TH$, which arises from the approximation of neural-network-based value functions by functions in $\mathcal{Q}_{\mathrm{ucb}}(h, R_T, B_T)$, which are constructed using the neural tangent feature $\phi(\cdot\,; W^{(0)})$. Moreover, when $\beta$ is bounded by a polynomial of $TH$, to make $\mathrm{err}_{\mathrm{mis}} \cdot TH$ negligible it suffices to let $m$ be a polynomial of $TH$. That is, when the network width is a polynomial of the total number of steps, NOVI achieves the same performance as KOVI.
Neglecting the constants and logarithmic terms in (4.13), we can simplify the regret bound in (4.15) as follows:
$$\mathrm{Regret}(T) = \widetilde{O}\Big( H^2 \cdot \big[ \Gamma_{K_m}(T, \lambda) + \max_{h \in [H]} \sqrt{\Gamma_{K_m}(T, \lambda) \cdot \log N_\infty(\epsilon^*, h, B_T)} \big] \cdot \sqrt{T} + \mathrm{err}_{\mathrm{mis}} \cdot T \Big),$$
which depends on the intrinsic complexity of $K_m$ through both the effective dimension $\Gamma_{K_m}(T, \lambda)$ and the log-covering number $\log N_\infty(\epsilon^*, h, B_T)$. To obtain a more concrete regret bound, in the following we impose an assumption on the spectral structure of $K_m$.

Assumption 4.7 (Eigenvalue Decay of the Empirical NTK). Conditioning on the randomness of $(b^{(0)}, W^{(0)})$, let $K_m$ be the kernel induced by the neural tangent feature $\phi(\cdot\,; W^{(0)})$. Let $T_{K_m}$ be the integral operator induced by $K_m$ and the Lebesgue measure on $\mathcal{Z}$, and let $\{\sigma_j\}_{j \geq 1}$ and $\{\psi_j\}_{j \geq 1}$ be its eigenvalues and eigenfunctions, respectively. We assume that $\{\sigma_j\}_{j \geq 1}$ and $\{\psi_j\}_{j \geq 1}$ satisfy one of the three decay conditions specified in Assumption 4.3, and that the constants $C_1$, $C_2$, $C_\psi$, $\gamma$, and $\tau$ do not depend on $m$.

Here we essentially assume that $K_m$ satisfies Assumption 4.3. Since $K_m$ depends on the initial network weights, which are random, this assumption is better understood in a limiting sense. Specifically, as $m$ goes to infinity, $K_m$ converges to $K_{\mathrm{ntk}}$, which is determined by both the activation function and the distribution of the initial network weights. Thus, if the RKHS with kernel $K_{\mathrm{ntk}}$ satisfies Assumption 4.3, it is reasonable to expect that such a condition also holds for $K_m$ when $m$ is sufficiently large. Due to space limitations, we defer concrete examples of $K_{\mathrm{ntk}}$ satisfying Assumption 4.3 to Appendix §B.3. We now characterize the performance of NOVI for each case separately.

Corollary 4.8. Under Assumptions 4.5 and 4.7, we assume that the activation function is $C_{\mathrm{act}}$-smooth and that the number of neurons of the neural network satisfies (4.14). Besides, in Algorithm 3 we let $\lambda$ be a sufficiently large constant and set $\beta = B_T$ as in (4.8). Then there exists an absolute constant $C_r$ such that, with probability at least $1 - (T^2H^2)^{-1}$, we have
$$\mathrm{Regret}(T) \leq \begin{cases} C_r \cdot H^2 \cdot \sqrt{\gamma^3 T \cdot \log(\gamma TH)} + 10\beta TH \cdot \iota & \gamma\text{-finite spectrum}, \\ C_r \cdot H^2 \cdot \sqrt{(\log T)^{3/\gamma} \cdot T \cdot \log(TH)} + 10\beta TH \cdot \iota & \gamma\text{-exponential decay}, \\ C_r \cdot H^2 \cdot T^{\kappa^* + \xi^* + 1/2} \cdot [\log(TH)]^{3/2} + 10\beta TH \cdot \iota & \gamma\text{-polynomial decay}, \end{cases} \qquad (4.16)$$
where we define $\iota = T^{7/12} \cdot H^{1/6} \cdot m^{-1/12} \cdot (\log m)^{1/4}$.

Corollary 4.8 is parallel to Corollary 4.4, with an additional misspecification error $10\beta TH \cdot \iota$. It remains to see whether there exist concrete neural networks that induce NTKs satisfying each eigenvalue decay condition. As we will show in Appendix §B.3, a neural network with a quadratic activation function induces an NTK with a finite spectrum, while the sine activation function and polynomials of ReLU activation functions induce NTKs that satisfy the exponential and polynomial eigenvalue decay conditions, respectively. Corollary 4.8 can be directly applied to these concrete examples to obtain sublinear regret bounds.

5 Proofs of the Main Results

In this section, we provide the proofs of Theorems 4.2 and 4.6.
The proofs of the supporting lemmas and auxiliary results are deferred to the appendix.

5.1 Proof of Theorem 4.2

Proof. For simplicity of presentation, we define the temporal-difference (TD) error as
$$\delta_h^t(x, a) = r_h(x, a) + (P_h V_{h+1}^t)(x, a) - Q_h^t(x, a), \qquad \forall (x, a) \in \mathcal{S} \times \mathcal{A}. \qquad (5.1)$$
Here $\delta_h^t$ is a function on $\mathcal{S} \times \mathcal{A}$ for all $h \in [H]$ and $t \in [T]$. Note that $V_h^t(\cdot) = \max_{a \in \mathcal{A}} Q_h^t(\cdot, a)$. Intuitively, $\{\delta_h^t\}_{h \in [H]}$ quantifies how far $\{Q_h^t\}_{h \in [H]}$ are from satisfying the Bellman optimality equation in (2.2). Next, recall that $\pi^t$ is the policy executed in the $t$-th episode, which generates a trajectory $\{(x_h^t, a_h^t)\}_{h \in [H]}$. For any $h \in [H]$ and $t \in [T]$, we further define $\zeta_{t,h}^1, \zeta_{t,h}^2 \in \mathbb{R}$ as
$$\zeta_{t,h}^1 = \big[ V_h^t(x_h^t) - V_h^{\pi^t}(x_h^t) \big] - \big[ Q_h^t(x_h^t, a_h^t) - Q_h^{\pi^t}(x_h^t, a_h^t) \big], \qquad (5.2)$$
$$\zeta_{t,h}^2 = \big[ (P_h V_{h+1}^t)(x_h^t, a_h^t) - (P_h V_{h+1}^{\pi^t})(x_h^t, a_h^t) \big] - \big[ V_{h+1}^t(x_{h+1}^t) - V_{h+1}^{\pi^t}(x_{h+1}^t) \big]. \qquad (5.3)$$
By definition, $\zeta_{t,h}^1$ and $\zeta_{t,h}^2$ capture two sources of randomness: the randomness of choosing an action $a_h^t \sim \pi_h^t(\cdot \mid x_h^t)$ and that of drawing the next state $x_{h+1}^t$ from $P_h(\cdot \mid x_h^t, a_h^t)$, respectively. As we will see in Appendix §C.3, $\{\zeta_{t,h}^1, \zeta_{t,h}^2\}$ form a bounded martingale difference sequence with respect to a properly chosen filtration, which enables us to bound their total sum via the Azuma-Hoeffding inequality (Azuma, 1967). To establish an upper bound on the regret, the following lemma first decomposes the regret into three parts using the notation defined above. Similar regret decomposition results also appear in Cai et al. (2019a); Efroni et al. (2020).

Lemma 5.1 (Regret Decomposition). Let the temporal-difference error $\delta_h^t : \mathcal{S} \times \mathcal{A} \to \mathbb{R}$ be defined in (5.1) for all $(t, h) \in [T] \times [H]$. We can then write the regret as
$$\mathrm{Regret}(T) = \underbrace{\sum_{t=1}^T \sum_{h=1}^H \big[ \mathbb{E}_{\pi^\star}[\delta_h^t(x_h, a_h) \mid x_1 = x_1^t] - \delta_h^t(x_h^t, a_h^t) \big]}_{\text{(i)}} + \underbrace{\sum_{t=1}^T \sum_{h=1}^H (\zeta_{t,h}^1 + \zeta_{t,h}^2)}_{\text{(ii)}} + \underbrace{\sum_{t=1}^T \sum_{h=1}^H \mathbb{E}_{\pi^\star}\big[ \langle Q_h^t(x_h, \cdot),\ \pi_h^\star(\cdot \mid x_h) - \pi_h^t(\cdot \mid x_h) \rangle_{\mathcal{A}} \,\big|\, x_1 = x_1^t \big]}_{\text{(iii)}}, \qquad (5.4)$$
where $\zeta_{t,h}^1$ and $\zeta_{t,h}^2$ are defined in (5.2) and (5.3), respectively.

Proof. See Appendix §C.1 for a detailed proof.

Returning to the main proof, notice that $\pi_h^t$ is the greedy policy with respect to $Q_h^t$ for all $(t, h) \in [T] \times [H]$. We have
$$\langle Q_h^t(x_h, \cdot),\ \pi_h^\star(\cdot \mid x_h) - \pi_h^t(\cdot \mid x_h) \rangle_{\mathcal{A}} = \langle Q_h^t(x_h, \cdot),\ \pi_h^\star(\cdot \mid x_h) \rangle_{\mathcal{A}} - \max_{a \in \mathcal{A}} Q_h^t(x_h, a) \leq 0$$
for all $x_h \in \mathcal{S}$. Thus, Term (iii) in (5.4) is non-positive. Then, by Lemma 5.1, we can upper bound the regret by
$$\mathrm{Regret}(T) \leq \underbrace{\sum_{t=1}^T \sum_{h=1}^H \big[ \mathbb{E}_{\pi^\star}[\delta_h^t(x_h, a_h) \mid x_1 = x_1^t] - \delta_h^t(x_h^t, a_h^t) \big]}_{\text{(i)}} + \underbrace{\sum_{t=1}^T \sum_{h=1}^H (\zeta_{t,h}^1 + \zeta_{t,h}^2)}_{\text{(ii)}}. \qquad (5.5)$$
For Term (i), since we do not observe trajectories from $\pi^\star$, which is unknown, it appears that $\mathbb{E}_{\pi^\star}[\delta_h^t(x_h, a_h) \mid x_1 = x_1^t]$ cannot be estimated.
Fortunately, however, by adding the bonus term in Algorithm 2, we ensure that the temporal-difference error $\delta_h^t$ is a non-positive function, as shown in the following lemma.

Lemma 5.2 (Optimism). Let $\lambda = 1 + 1/T$ and $\beta = B_T$ in Algorithm 2, where $B_T$ satisfies (4.5). Under Assumption 4.1, with probability at least $1 - (2T^2H^2)^{-1}$, the following holds for all $(t, h) \in [T] \times [H]$ and $(x, a) \in \mathcal{S} \times \mathcal{A}$:
$$-2\beta \cdot b_h^t(x, a) \leq \delta_h^t(x, a) \leq 0.$$

Proof. See Appendix §C.2 for a detailed proof.

Applying Lemma 5.2 to Term (i) in (5.5), we obtain that
$$\text{Term (i)} \leq \Big[ \sum_{t=1}^T \sum_{h=1}^H -\delta_h^t(x_h^t, a_h^t) \Big] \leq 2\beta \cdot \Big[ \sum_{t=1}^T \sum_{h=1}^H b_h^t(x_h^t, a_h^t) \Big] \qquad (5.6)$$
holds with probability at least $1 - (2T^2H^2)^{-1}$, where $\beta$ is equal to $B_T$ as specified in (4.5). Finally, it remains to bound the sum of bonus terms in (5.6). As we show in (C.17), using the feature representation of $\mathcal{H}$, we can write each $b_h^t(x_h^t, a_h^t)$ as
$$b_h^t(x_h^t, a_h^t) = \big[ \phi(x_h^t, a_h^t)^\top (\Lambda_h^t)^{-1} \phi(x_h^t, a_h^t) \big]^{1/2},$$
where $\Lambda_h^t = \lambda \cdot I_{\mathcal{H}} + \sum_{\tau=1}^{t-1} \phi(x_h^\tau, a_h^\tau) \phi(x_h^\tau, a_h^\tau)^\top$ is a self-adjoint and positive-definite operator on $\mathcal{H}$ and $I_{\mathcal{H}}$ is the identity mapping on $\mathcal{H}$. Thus, combining the Cauchy-Schwarz inequality and Lemma E.3, we have, for any $h \in [H]$, with probability at least $1 - (2T^2H^2)^{-1}$, the following:
$$\text{Term (i)} \leq 2\beta \cdot \sqrt{T} \cdot \sum_{h=1}^H \Big[ \sum_{t=1}^T \phi(x_h^t, a_h^t)^\top (\Lambda_h^t)^{-1} \phi(x_h^t, a_h^t) \Big]^{1/2} \leq 2\beta \cdot \sum_{h=1}^H \big[ 2T \cdot \operatorname{logdet}(I + K_h^T/\lambda) \big]^{1/2} \leq 4\beta H \cdot \sqrt{T \cdot \Gamma_K(T, \lambda)}, \qquad (5.7)$$
where $\Gamma_K(T, \lambda)$ is the maximal information gain defined in (4.2) with parameter $\lambda$. It remains to bound Term (ii) in (5.5), which is the purpose of the following lemma.

Lemma 5.3. For $\zeta_{t,h}^1$ and $\zeta_{t,h}^2$ defined in (5.2) and (5.3), respectively, and for any $\zeta \in (0, 1)$, with probability at least $1 - \zeta$ we have
$$\sum_{t=1}^T \sum_{h=1}^H (\zeta_{t,h}^1 + \zeta_{t,h}^2) \leq \sqrt{16TH^3 \cdot \log(2/\zeta)}.$$

Proof. See Appendix §C.3 for a detailed proof.

Setting $\zeta = (2T^2H^2)^{-1}$ in Lemma 5.3, we obtain that
$$\text{Term (ii)} = \sum_{t=1}^T \sum_{h=1}^H (\zeta_{t,h}^1 + \zeta_{t,h}^2) \leq \sqrt{16TH^3 \cdot \log(4T^2H^2)} = \sqrt{32TH^3 \cdot \log(2TH)} \qquad (5.8)$$
holds with probability at least $1 - (2T^2H^2)^{-1}$. Therefore, combining (5.5), (5.7), and (5.8), we conclude that, with probability at least $1 - (T^2H^2)^{-1}$, the regret is bounded by
$$\mathrm{Regret}(T) \leq 4\beta H \cdot \sqrt{T \cdot \Gamma_K(T, \lambda)} + \sqrt{32TH^3 \cdot \log(2TH)} \leq 5\beta H \cdot \sqrt{T \cdot \Gamma_K(T, \lambda)},$$
where the last inequality follows from the choice of $\beta = B_T$, which implies that $\beta \geq H \cdot \sqrt{16 \log(TH)} \geq \sqrt{32H \cdot \log(2TH)}$. This concludes the proof of Theorem 4.2.
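The key step in (5.7), bounding the summed squared bonuses by a log-determinant, can be checked numerically. The sketch below draws unit-norm random features, accumulates $\Lambda_t$, and compares $\sum_t \phi_t^\top \Lambda_t^{-1} \phi_t$ against $2 \operatorname{logdet}(I + K_T/\lambda)$; the feature distribution and dimensions are arbitrary choices for illustration.

import numpy as np

rng = np.random.default_rng(1)
T, p, lam = 200, 10, 1.0
X = rng.normal(size=(T, p))
Phi = X / np.linalg.norm(X, axis=1, keepdims=True)   # unit-norm features

Lambda = lam * np.eye(p)
lhs = 0.0
for phi in Phi:
    lhs += phi @ np.linalg.solve(Lambda, phi)        # phi_t^T Lambda_t^{-1} phi_t
    Lambda += np.outer(phi, phi)                     # Lambda_{t+1} update

rhs = 2.0 * np.linalg.slogdet(np.eye(T) + Phi @ Phi.T / lam)[1]
print(f"sum of squared bonuses = {lhs:.2f} <= 2 logdet bound = {rhs:.2f}")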
5.2 Proof of Theorem 4.6

Proof. The proof of Theorem 4.6 is similar to that of Theorem 4.2. Recall that we let $\mathcal{Z}$ denote $\mathcal{S} \times \mathcal{A}$ for simplicity, and that, for all $(t, h) \in [T] \times [H]$, we define the temporal-difference (TD) error $\delta_h^t : \mathcal{Z} \to \mathbb{R}$ in (5.1) and the random variables $\zeta_{t,h}^1$ and $\zeta_{t,h}^2$ in (5.2) and (5.3), respectively. Then, combining Lemma 5.1 and the fact that $\pi^t$ is the greedy policy with respect to $\{Q_h^t\}_{h \in [H]}$, we bound the regret by
$$\mathrm{Regret}(T) \leq \underbrace{\sum_{t=1}^T \sum_{h=1}^H \big[ \mathbb{E}_{\pi^\star}[\delta_h^t(x_h, a_h) \mid x_1 = x_1^t] - \delta_h^t(x_h^t, a_h^t) \big]}_{\text{(i)}} + \underbrace{\sum_{t=1}^T \sum_{h=1}^H (\zeta_{t,h}^1 + \zeta_{t,h}^2)}_{\text{(ii)}}. \qquad (5.9)$$
Here, Term (ii) is a sum of a martingale difference sequence. Setting $\zeta = (4T^2H^2)^{-1}$ in Lemma 5.3, with probability at least $1 - (4T^2H^2)^{-1}$ we have
$$\text{Term (ii)} = \sum_{t=1}^T \sum_{h=1}^H (\zeta_{t,h}^1 + \zeta_{t,h}^2) \leq \sqrt{16TH^3 \cdot \log(8T^2H^2)} \leq H \cdot \sqrt{32TH \log(2TH)}. \qquad (5.10)$$
It remains to bound Term (i) in (5.9). To this end, we aim to establish a counterpart of Lemma 5.2 for neural value functions, which shows that, by adding a bonus term $\beta \cdot b_h^t$, the TD error $\delta_h^t$ is approximately a non-positive function. This implies that bounding Term (i) in (5.9) reduces to controlling $\sum_{t=1}^T \sum_{h=1}^H b_h^t(x_h^t, a_h^t)$. Note that the bonus functions $b_h^t$ are constructed based on the neural tangent features $\phi(\cdot\,; \widehat{W}_h^t)$ and the matrix $\Lambda_h^t$. In order to relate $\sum_{t=1}^T \sum_{h=1}^H b_h^t(x_h^t, a_h^t)$ to the maximal information gain of the empirical NTK $K_m$, we define $\overline{\Lambda}_h^t$ and $\overline{b}_h^t$, by analogy with $\Lambda_h^t$ and $b_h^t$, as follows:
$$\overline{\Lambda}_h^t = \lambda \cdot I_{2md} + \sum_{\tau=1}^{t-1} \phi(x_h^\tau, a_h^\tau; W^{(0)})\, \phi(x_h^\tau, a_h^\tau; W^{(0)})^\top, \qquad \overline{b}_h^t(z) = \big[ \phi(z; W^{(0)})^\top (\overline{\Lambda}_h^t)^{-1} \phi(z; W^{(0)}) \big]^{1/2}.$$
In the following lemma, we bound the TD error $\delta_h^t$ using $\overline{b}_h^t$ and show that $b_h^t$ and $\overline{b}_h^t$ are close in the $\ell_\infty$-norm on $\mathcal{Z}$ when $m$ is sufficiently large.

Lemma 5.4 (Optimism). Let $\lambda$ be an absolute constant and let $\beta = B_T$ in Algorithm 3, where $B_T$ satisfies (4.13). Under the assumptions made in Theorem 4.6, with probability at least $1 - (2T^2H^2)^{-1} - m^{-2}$, it holds for all $(t, h) \in [T] \times [H]$ and $(x, a) \in \mathcal{S} \times \mathcal{A}$ that
$$-5\beta \cdot \iota - 2\beta \cdot \overline{b}_h^t(x, a) \leq \delta_h^t(x, a) \leq 5\beta \cdot \iota, \qquad \sup_{(x,a) \in \mathcal{Z}} \big| b_h^t(x, a) - \overline{b}_h^t(x, a) \big| \leq 2\iota, \qquad (5.11)$$
where we define $\iota = T^{7/12} \cdot H^{1/12} \cdot m^{-1/12} \cdot (\log m)^{1/4}$.

Proof. See Appendix §C.4 for a detailed proof.

Applying Lemma 5.4 to Term (i) in (5.9), we obtain that
$$\text{Term (i)} \leq \Big[ \sum_{t=1}^T \sum_{h=1}^H -\delta_h^t(x_h^t, a_h^t) \Big] + 5\beta TH \cdot \iota \leq 2\beta \cdot \Big[ \sum_{t=1}^T \sum_{h=1}^H \overline{b}_h^t(x_h^t, a_h^t) \Big] + 10\beta TH \cdot \iota \qquad (5.12)$$
holds with probability at least $1 - (2T^2H^2)^{-1} - m^{-2}$, where $\beta = B_T$. Moreover, combining the Cauchy-Schwarz inequality and Lemma E.3, we have
$$\sum_{t=1}^T \sum_{h=1}^H \overline{b}_h^t(x_h^t, a_h^t) \leq \sqrt{T} \cdot \sum_{h=1}^H \Big[ \sum_{t=1}^T \phi(x_h^t, a_h^t; W^{(0)})^\top (\overline{\Lambda}_h^t)^{-1} \phi(x_h^t, a_h^t; W^{(0)}) \Big]^{1/2} \leq 2H \cdot \sqrt{T \cdot \Gamma_{K_m}(T, \lambda)}, \qquad (5.13)$$
where $\Gamma_{K_m}(T, \lambda)$ is the maximal information gain defined in (4.2) for the kernel $K_m$. Notice that $(2T^2H^2)^{-1} + m^{-2} + (4T^2H^2)^{-1} \leq (T^2H^2)^{-1}$. Thus, combining (5.9), (5.10), (5.12), and (5.13), we obtain that
$$\mathrm{Regret}(T) \leq 4\beta H \cdot \sqrt{T \cdot \Gamma_{K_m}(T, \lambda)} + 10\beta TH \cdot \iota + H \cdot \sqrt{32TH \log(2TH)} \leq 5\beta H \cdot \sqrt{T \cdot \Gamma_{K_m}(T, \lambda)} + 10\beta TH \cdot \iota$$
holds with probability at least $1 - (T^2H^2)^{-1}$.
Here the last inequality follows from the fact that $\beta \geq H \cdot \sqrt{32 \log(TH)} \geq \sqrt{32H \log(2TH)}$. This concludes the proof of Theorem 4.6.

6
In the geospatial domain, progress in building spatial data infrastructures, such as the European Union's INSPIRE (Infrastructure for Spatial Information in the European Community) portal, has centered on creating data and metadata schemas to annotate data in the broad Earth and environmental science domains3. In the US, the National Science Foundation (NSF) has started a new program called Convergence Accelerator. One of its aims is to develop cutting-edge knowledge graph technologies for linking cross-domain data to build an open knowledge network that fosters convergence research. A key research advancement in spatial sciences is the creation of KnowWhereGraph4, a knowledge graph that connects environmental datasets related to natural disasters, agriculture, and soil properties to understand environmental impacts on society. Semantic enrichment services are provided to support semantic reasoning and question answering on top of its data store of over 12 billion facts5. As the domain data in various knowledge graphs proliferate in the research community, challenges also arise for non-developers as well as knowledge engineers to access and understand massive graph-ready datasets6,7,8. In particular, it has become increasingly challenging to enable open access to the right graph data in the right form for the right end users. Because human brains are especially good at visual inspection, providing an intuitive presentation of knowledge graphs is critical for understanding the graph, its entities, and the interconnections among them. Knowledge graph visualization, which enables interactive exploration and information filtering of graph data and structure, has become an important means to facilitate data consumption and sense making of big, linked data. Popular graph visualization tools, such as those provided as part of a triple store platform (e.g., GraphDB9 or AllegroGraph10), mainly support 2D graph visualization. In the graph, nodes represent the entities, and edges represent interrelationships between the entities. To allow real-time rendering, especially when handling large graph data, node expansion or clustering functions are provided such that the graph starts from a summarized view and can be gradually expanded according to end user interest. One major limitation of such solutions is the lack of geographical reference for the graph data, especially for data that are geospatial, such as hurricane tracks that have originated in the North Atlantic Ocean and moved westward to hit US coastal communities, or an interruption of a food supply chain caused by a wildfire event near the farmlands in Southern California. In existing visualization tools, even if the location information (which could be a point, line, or polygon) is encoded in the graph, it is treated as regular text, and there are very few tools that support visualizing this special graph information in a virtual geographical space, such as a map. In comparison, traditional GISs (Geographic Information Systems) provide powerful solutions for overlaying multiple geospatial datasets on a base map, and they often adopt flow maps to support graph data visualization11,12. In such systems, when presenting interrelationships among entities, each entity must be pinned to a fixed geographic location on the map, regardless of whether an entity possesses multiple geographical properties or is a moving target (e.g., hurricane events).
The relationships between the entities are visualized through straight or curved lines between two locations on a map. Although these map-based graph visualizations offer a geographical reference, the graph patterns, such as local clusters, are significantly altered because the strengths of the edges connecting the nodes are not considered. This limitation hinders the dynamic exploration of graph structure and patterns. To overcome it, this paper presents a new "Graph Above the Map" solution to enable interactive graph exploration from both the graph and map perspectives. A geographically constrained 3D force-directed graph visualization algorithm is developed to dynamically render the graph layout by considering both the interrelations among the graph nodes and their geolocations. This way, the location property and the graph properties can be visualized in a coherent web interface, supporting knowledge discovery within a geospatial context. The rest of the paper is organized as follows: Section 2 reviews existing techniques for graph visualization, Section 3 introduces our proposed algorithm in detail, Section 4 describes the data and experiments to quantitatively measure the graph and geographical presentation of linked data, Section 5 introduces the GeoGraphViz user interface, and Section 6 concludes the paper with a discussion of future research directions.

2 LITERATURE REVIEW

2.1 Map-based visualization

Flow maps are a prominent approach that uses map-based visualization to display linkages and relationships between two entities. A flow map is a combination of a 2D or 3D map with a flow diagram. For instance, the Sankey diagram13 is a type of flow diagram that illustrates a flow trend from a start node to an end node. In the flow diagram, the arrow on an edge shows the flow direction, and the width of the edge connecting each pair of nodes indicates the intensity of the connection. The edges can also be assigned different colors to show different types of connections. When the start and end nodes are associated with locations and are placed on a map, the diagram becomes a flow map. Relying on distinctive flow styles and node symbolization, a flow map can exhibit common and unique properties of different locations11. It can also intuitively demonstrate patterns of flow, be they local, regional, or global. Because of this intuitive manner of displaying flows and connections, flow map visualization has been widely adopted in applications such as mapping traffic patterns, migration routes, trade patterns, disease spread, and other geospatial data with linkage information14,15,16. To further improve the visual effect, Yang et al.17 proposed a 3D flow map that incorporates a height dimension on the arrows to display additional data attributes that 2D flow maps cannot. Ardissono et al.18 developed a novel model using colored shapes and interactive functions to assist entity filtering based on selected categories to better support map-based visualization. Because of their ability to present connections between points, flow maps can be leveraged to support graph visualization, particularly when location is the most prominent property of a graph node, such as a city.
However, because a flow map needs to pinpoint the graph nodes to fixed locations on the map, it suppresses the display of some graph patterns, such as clustering patterns of the graph nodes measured by the strengths of their interconnections rather than by the geographic proximity among the nodes. The flow map has also shown limitations in visualizing the spatiotemporal evolution of dynamic entities.

2.2 Graph visualization methods

Recently, several graph visualization tools have also been developed to visualize linked data in a knowledge graph19,20. WebVOWL is a web-based visualization tool for ontologies21. It can directly load ontologies (a logical representation of a knowledge graph) in an RDF (Resource Description Framework) format to visualize the interrelationships among the graph nodes. Different line styles and node colors are applied to differentiate entities, their properties, and relationships. Several semantic graph databases, such as GraphDB22, Neo4j23, and ArangoDB24, also provide visual graph interfaces along with data query support to allow visualization of graph data. Similar to WebVOWL, these graph databases provide 2D visual graphs to display nodes and their connections; some provide additional statistical functions to show counts of nodes and relationships belonging to different categories, while others support filtering operations to visualize a subgraph of interest. In ArangoDB, graph visualization is combined with timeline visualization for identifying events and relationships with a temporal pattern. These built-in visualization solutions draw information on a 2D visual graph, limiting multi-dimensional exploration of the graph. Besides these built-in tools for knowledge graph visualization, customized tools that support different graph exploration needs have also been developed. For instance, Heim et al.25,26 developed RelFinder to support the identification and visualization of node relationships. This is achieved by highlighting a path connecting two nodes of interest in a 2D graph. He et al.27 developed an interactive graph platform to support knowledge discovery on dietary supplements. The graph can be dragged and panned, and a node's visibility can be configured. Noel et al.28 developed CyGraph, a unified graph-based system for network security monitoring and improvement. CyGraph provides multiple functions to allow visual interactions with the graph, including node and edge property filtering and color configuration. Most interestingly, it can also show a clustered view of graph nodes that share the same property. Similar work that displays clustering patterns can be found in29. All these graph solutions are based on 2D visualization. There are also works integrating map visualization and graph visualization for linked data exploration. Liu et al.30 developed a web portal that uses multiple visualization methods, such as 2D graph visualization, streamgraph visualization, map visualization, and circular flow charts, to showcase patterns of research collaboration, popular research topics, and other information from publications. Regalia et al.31 developed a linked data browser called Phuzzy.link that uses hyperlinks to hop among connected graph nodes. If a graph node has geographic information, a map is displayed on the side to enrich the visualization32. This is a way to visualize graph data in a non-graph form.
A similar visualization portal was developed by Mai et al.33. In addition to visualizing graph data in a non-graph form, the authors employ narrative cartography to map geographic information with timestamps on the map to tell a story, such as one describing an expedition path. Recently, 3D graph visualization has become a trending technique. Open-source JavaScript (JS) libraries, such as ThreeJS34 and Gephi35, have become popular tools providing 3D rendering engines that use WebGL (Web Graphics Library) to visualize and explore graph data in 3D on the Web. Powered by ThreeJS, a force-directed 3D graph drawing mechanism can be implemented to place nodes in a visually 3D space to avoid node cluttering problems. The 3D force-directed graph supports multiple modes of graph interaction and visual effects. For instance, users can drag the graph nodes and place them in a different position on the screen; they can also rotate the entire graph to observe patterns from different viewpoints (e.g., to view the graph from the front, the top, and the back, as well as from any other 3D angle). When a user clicks on a node, information about it pops up, and the clicked node is enlarged or highlighted. The nodes and edges can also be colored according to the types of connections. Text, images, and customized geometries can be used to render the nodes. These 3D visualization libraries offer an efficient way to explore graph data interactively in a web-based environment. Existing visualization tools and libraries provide visual displays for users to interact with and explore data, relationships, and knowledge of interest within a graph. To allow real-time rendering of large graph data, node folding/expansion or clustering functions are often provided so that the visual graph starts from a summarized view and can be gradually expanded according to end user interest in certain subgraphs. However, in almost all these tools, location information is rendered as text (e.g., as an annotation indicating the placename). This limitation makes it difficult to examine the geospatial relationships among the graph nodes or to explore the spatial contexts of semantically relevant information. As a result, it is difficult to intuitively exhibit the knowledge connections between spatial information and semantic relationships among entities. To address the aforementioned limitations in both the more traditional map-based visualization (which lacks the flexibility to present semantic relationships of the graph nodes) and 2D graph visualization (which lacks an intuitive representation of location information and spatial context), we propose a new "Graph Above the Map" solution, the GeoGraphViz. In it, 3D web-based graph visualization is enabled in combination with a 2D map view to allow a more coherent spatial-semantic visual presentation of knowledge graph data. In particular, a geographically constrained 3D force-directed graph visualization algorithm is developed. The algorithm is described in detail in the next section.

3 GEOGRAPHVIZ VISUALIZATION ALGORITHM

The intuition behind our proposed algorithm is that current graph visualization strategies almost entirely visualize nodes and links based on their semantic relationships. The weights on the relationships are often ignored in existing solutions. As such, nodes are placed close to each other not because they are similar, but simply because there are linkages among them.
Force-directed graphs are capable of simulating the graph layout according to the strengths of connections among nodes. Therefore, they are well suited to visualizing graphs that carry quantitative weights (such as similarities) on the edges. However, such a graph can only be laid out based on one kind of force; the algorithm is not capable of visualizing a graph when multiple forces are present, such as the semantic similarity36 among graph nodes and the geographical force that a node receives based on its own geolocation properties. We argue that the semantic connections among the nodes and the geographical distribution patterns of the nodes are both important for inspecting graph patterns, especially when the graph nodes are geospatially related (e.g., each node represents a geographical entity, such as a natural feature)37,38. Hence, in this paper, we look to "blend in" different amounts of semantic and geographical forces to demonstrate multi-faceted characteristics of graph patterns in a single view and enable a new kind of geospatial knowledge discovery.

The GeoGraphViz visualization algorithm derives from and extends the force-directed graph placement algorithm39. A force-directed graph simulates the graph layout with two forces: an attractive and a repulsive force among graph nodes. The attractive force is used to place connected nodes visually close to each other; it exists when there is an edge connecting two nodes. The repulsive force exists universally among all node pairs in the graph, regardless of whether a connection is present, and it prevents the nodes from becoming too close to each other in the visual graph. Under the joint effect of attractive and repulsive forces, the positions of the graph nodes are dynamically adjusted until the graph reaches a stable state, in which all forces are at equilibrium. The resultant graph layout is capable of presenting node clustering patterns based on their interconnectivity. It also maintains sufficient distance among the nodes to ensure graph readability and interpretability. However, when the nodes contain location information, which is important for understanding the spatial context of the graph patterns, this algorithm no longer fulfills the requirement. Our algorithm addresses this limitation by adding a third force, which we call the "geo-force," to the original force-directed graph placement algorithm to support location-aware graph visualization. The geo-force is a type of attractive force from a specific geolocation to a graph node. The geolocation can be a point location unique to a geographic entity, such as a city; it could also be the location property of another named entity, such as an expert who is related to a location (e.g., through his/her affiliation). Integrating the three forces, namely the attractive force (f_A), the repulsive force (f_R), and the geo-force (f_G), the visualization algorithm can reveal not only the interrelationships among the graph nodes but also the geographic distribution of the nodes.
Mathematically, the three forces are computed as follows:
$$f_A(u, v) = \frac{\|C(u) - C(v)\|^2}{k} \cdot \frac{C(u) - C(v)}{\|C(u) - C(v)\|} = \frac{\|C(u) - C(v)\|\,(C(u) - C(v))}{k} \qquad (1)$$
$$f_R(u, v) = -\frac{k^2}{\|C(u) - C(v)\|} \cdot \frac{C(u) - C(v)}{\|C(u) - C(v)\|} = -\frac{k^2\,(C(u) - C(v))}{\|C(u) - C(v)\|^2} \qquad (2)$$
$$f_G(u) = \frac{K \cdot \|G(u) - C(u)\|^2}{k} \cdot \frac{G(u) - C(u)}{\|G(u) - C(u)\|} = \frac{K \cdot \|G(u) - C(u)\|\,(G(u) - C(u))}{k} \qquad (3)$$
where $u$ and $v$ are two nodes between which the forces apply. $C(\cdot)$ indicates the coordinate vector of a graph node in the virtual 3D space in which the graph is placed. $G(\cdot)$ refers to the geographical coordinate vector (e.g., latitude and longitude) representing the geolocation property of a graph node. Note that, before the calculation, the geographic coordinates need to be projected into the aforementioned virtual 3D space in which the graph layout is updated. $\|\cdot\|$ computes the magnitude of a vector. $f_A(u, v)$ and $f_R(u, v)$ respectively define the attractive and repulsive forces that $v$ receives from $u$. $k\,(> 0)$ is a parameter that balances the attractive and repulsive forces: the larger $k$ is, the smaller the attractive force and the stronger the repulsive force a node receives, resulting in a less localized layout. Parameter $K\,(\geq 0)$ controls the relative strength of the geo-force compared with the two other forces: the larger $K$ is, the closer a graph node is placed toward its actual geolocation on the map. To simulate the final graph layout, the location of each node is updated through an iterative process until the 3D graph reaches a stable state: a graph-level force equilibrium.

Below, we provide the pseudocode of our proposed algorithm (Algorithm 1). The input of the algorithm includes the graph $G$, an initial system temperature $T$, and a cooling parameter $\alpha$. The system temperature and cooling factor simulate the annealing process, in which a solid is heated so that all of its particles transfer into the liquid state, followed by a cooling process that slowly lowers the temperature until all particles reach a low-energy ground state40. These parameters are also often used in heuristic-based optimization algorithms to apply a more aggressive search/move (a high initial $T$) at the beginning of the process and careful fine-tuning (smaller steps, governed by $\alpha$) at a later stage to find a near-optimal solution. Here, the system temperature $T$ determines the maximal distance $d_T$ that a node can move at each iteration.

Algorithm 1: Geographically Constrained Force-Directed Graph Simulation
Input: Graph G = (V, E); initial temperature T; cooling parameter alpha
Output: Graph G with updated layout
for i <- 1 to n_iterations do
    // Step 0: initialize the net force F for each vertex
    foreach vertex v in V do F(v) <- 0
    // Step 1: accumulate the attractive force
    foreach edge e in E, with vertices u, v in V do
        F(v) <- F(v) + f_A(u, v);  F(u) <- F(u) + f_A(v, u)
    // Step 2: accumulate the repulsive force
    foreach vertex v in V do
        foreach vertex u in V do
            if u != v then F(v) <- F(v) + f_R(u, v)
    // Step 3: add the geo-force
    foreach vertex v in V do F(v) <- F(v) + f_G(v)
    // Step 4: update the coordinates of each vertex in the virtual 3D space
    d_T <- T
    foreach vertex v in V do
        C(v) <- C(v) + F(v)/||F(v)|| * min(d_T, ||F(v)||)
    // Step 5: cooling function (temperature decay)
    T <- (1 - alpha) * T
end

There are five main steps at each iteration. Steps 1-3 calculate the three individual forces that act on each node from all the other nodes according to Equations 1, 2, and 3. The net force jointly determined by the three forces is saved in $F(v)$ as a vector. Step 4 updates a node's position: the moving direction of the node is along the direction given by the net force $F(v)$, and its moving distance is co-decided by the strength of the net force $\|F(v)\|$ and the maximum moving distance $d_T$. The system temperature $T$ decreases (Step 5) as the process goes on so that, as with the annealing process, the graph layout is updated quickly at the beginning and more slowly later for fine-tuning, eventually reaching a stable state in which an optimal or near-optimal graph layout is found. A minimal runnable sketch of these force computations and the update loop follows.
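For concreteness, the following sketch implements Equations 1-3 and the loop of Algorithm 1 in Python; the NumPy data layout, the random initialization, and the parameter values are illustrative assumptions, not part of GeoGraphViz itself (which is implemented on the Web with ThreeJS).

import numpy as np

def simulate_layout(C, geo, edges, k=1.0, K=5.0, T=1.0, alpha=0.02, n_iter=300):
    """Geographically constrained force-directed layout (Algorithm 1).
    C:     (n, 3) initial node coordinates in the virtual 3D space.
    geo:   (n, 3) geolocations projected into the same space.
    edges: iterable of (u, v) node-index pairs."""
    eps = 1e-9                                     # guards divisions by zero
    C = C.astype(float).copy()
    n = len(C)
    for _ in range(n_iter):
        F = np.zeros_like(C)                       # Step 0: reset net forces
        for u, v in edges:                         # Step 1: attractive, Eq. (1)
            d = C[u] - C[v]
            F[v] += np.linalg.norm(d) * d / k      # pull v toward u
            F[u] -= np.linalg.norm(d) * d / k      # and u toward v
        for v in range(n):                         # Step 2: repulsive, Eq. (2)
            d = C - C[v]                           # vectors from v to every u
            dist2 = np.maximum((d ** 2).sum(axis=1), eps)
            F[v] -= (k ** 2) * (d / dist2[:, None]).sum(axis=0)
        g = geo - C                                # Step 3: geo-force, Eq. (3)
        F += K * np.linalg.norm(g, axis=1, keepdims=True) * g / k
        d_T = T                                    # Step 4: bounded move
        Fn = np.maximum(np.linalg.norm(F, axis=1, keepdims=True), eps)
        C += F / Fn * np.minimum(d_T, Fn)
        T *= 1.0 - alpha                           # Step 5: cooling
    return C

rng = np.random.default_rng(0)
nodes = rng.normal(size=(5, 3))
geoloc = rng.normal(size=(5, 3)) * 3.0
print(simulate_layout(nodes, geoloc, edges=[(0, 1), (1, 2), (3, 4)]).round(2))

Setting K=0 in this sketch recovers a plain force-directed layout, while a very large K pins every node to its projected geolocation, mirroring the experiment below.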
4 DATA AND EXPERIMENTS

4.1 Graph Data
To test our proposed 3D geographically constrained force-directed graph rendering algorithm, we use expert network data from Direct Relief, a non-profit organization dedicated to providing disaster relief and humanitarian aid. This expert network contains 41 worldwide experts with infectious-disease-related expertise (e.g., COVID-19) whom the Direct Relief staff works with to distribute medical and other supplies to help vulnerable and COVID-19-affected populations. Each expert has properties that include a name, research interests, an affiliation and its geolocation, and a public research profile. The experts are linked through a semantic similarity measure of their research interests, generating a similarity graph. The similarity scores between two experts take values in (0, 1] and are computed from a semantic analysis of the experts' most representative publications. Figure 1 shows the profile of an infectious disease expert (left) and her potential collaboration network with researchers sharing similar expertise (right).

FIGURE 1 An example expert node in the expert graph.

4.2 Force balancing and graph presentation
To investigate the effect of the added geo-force on the visual presentation of the 3D graph, we conducted an experiment measuring the graph layout changes when the forces are assigned different weights (parameter K in Equation 3, also known as the force balancing parameter). Here we propose two quantitative metrics to evaluate the degree to which the graph structure can be preserved and how well the spatial patterns can be revealed with an added geo-force. To measure the layout changes of a rendered graph, we adopt the edge length variation (ELV), which provides a normalized measure of the average edge length changes of a graph. This measure has been increasingly adopted in recent studies to determine the quality of a graph layout [41, 42]. The metric $M_{ELV}$ is defined as follows:

$M_{ELV} = \frac{l_v}{\sqrt{n_E - 1}}$   (4)

with

$l_v = \sqrt{\sum_{e \in E} \frac{(l_e - l_\mu)^2}{n_E \cdot l_\mu^2}}$   (5)

where E is the set of edges in the graph, $n_E$ is the number of edges, and $l_\mu$ is the mean length of all edges. The terms $\sqrt{n_E - 1}$ and $l_\mu^2$ are added for normalization purposes. The intuition behind our choice of this metric is that the locations of nodes and the lengths of edges tend to reach a near-static state in a 3D force-directed graph when the attractive and repulsive forces are balanced. After adding the geo-force, however, nodes that were originally clustered together may be stretched far apart if their geolocations are distant from each other, resulting in an increase in the ELV. We also introduce the mean locational offset (MLO) to measure the locational accuracy in the visual presentation of the graph nodes. The metric $M_{MLO}$ is defined as follows:

$M_{MLO} = \frac{\sum_{v \in V} \|C(v) - G(v)\|}{n_V \cdot d_{GC}}$   (6)

where V is the set of nodes in the graph, $n_V$ is the number of nodes, C(v) indicates the coordinate vector of a node in the final graph layout, and G(v) is the coordinate vector of the projected geolocation of the node. The offset is computed as the horizontal distance that a node moves in the 2D map plane. The distance between the North Pole and the South Pole on the map, $d_{GC}$, measured in the viewport coordinate system, is used to normalize the MLO. When every node is placed right above its exact geolocation, $M_{MLO} = 0$.

TABLE 1 Quantitative measures for graph structure change of the expert data under different force balancing parameters K
    Scenario    K        M_ELV     M_MLO
    (a)         0        0.0555    0.554
    (b)         5        0.0731    0.119
    (c)         10,000   0.0941    0.001

Table 1 presents the values of $M_{ELV}$ and $M_{MLO}$ at different settings of the force balancing parameter K. The resultant graph layouts visualized by GeoGraphViz are shown in Figure 2. To present the 3D graph more clearly through a 2D snapshot, less prominent nodes (nodes with fewer connections) are set to be invisible, but their interactions through the three forces are still considered. In practice, we recommend setting the parameter K between 3 and 20: graphs created with K in this range can better preserve the location representation of each node as well as important graph structures, such as local clusters. In this experiment, we use K = 5 to present a balanced graph layout.
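Both metrics reduce to a few lines of NumPy; the following sketch implements Equations 4-6 under our reading (function names are ours, and the horizontal offset is taken over the first two coordinates of the map plane).

    def edge_length_variation(coords, edges):
        # M_ELV, Eqs. (4)-(5): normalized variation of edge lengths
        lengths = np.array([np.linalg.norm(coords[u] - coords[v]) for u, v in edges])
        n_e, l_mu = len(lengths), lengths.mean()
        l_v = np.sqrt(((lengths - l_mu) ** 2).sum() / (n_e * l_mu ** 2))
        return l_v / np.sqrt(n_e - 1)

    def mean_locational_offset(coords, geo, d_gc):
        # M_MLO, Eq. (6): mean horizontal node-to-geolocation offset,
        # normalized by the pole-to-pole map distance d_gc
        offsets = [np.linalg.norm(coords[v][:2] - geo[v][:2]) for v in coords]
        return sum(offsets) / (len(offsets) * d_gc)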
Figure 2a shows the scenario in which the geo-force is excluded when drawing the graph. In this case, K = 0, and $M_{ELV}$ reaches its lower bound while $M_{MLO}$ reaches its upper bound for this test dataset, in which the experts are distributed very broadly with a strong international perspective. In the Figure 2a graph, we can clearly observe a local cluster near the center of the graph. However, it is almost impossible to correctly infer the geographic distribution pattern of these experts, especially for the two experts from India, if we consider only the information in the graph layer (white nodes and white links). Here, because GeoGraphViz implements the new "Graph Above the Map" strategy and uses green lines to connect the nodes with their geolocations on the map layer, the spatial context can still be captured. Figure 2c presents a scenario in which K is set to a very large number (Table 1), so that the geo-force dominates the graph layout simulation. The graph nodes in Figure 2c sit almost exactly on top of their actual geolocations, showing a clear geographic distribution pattern; hence $M_{MLO}$ is at a near-zero value. However, the graph structure varies significantly, with the local cluster present in Figure 2a almost completely gone. It is therefore very difficult to analyze the graph patterns through the graph's visual topological presentation. $M_{ELV}$ in this case reaches its upper bound due to the significant relocation of graph nodes. In comparison, Figure 2b presents a more balanced view of location-aware graph visualization (with K = 5). While the nodes move closer to reflect their geolocations, we can still observe a highly densely connected subgraph near the center of the view. This graph view can also better present the international perspective of the clustered nodes of experts with similar research expertise. Quantitatively, $M_{ELV}$ increases from its value at K = 0 but falls near the middle of its value range, meaning that nodes move, but not dramatically. We also find that $M_{MLO}$ is reduced substantially compared to when K = 0, meaning that the location offset is much smaller and the spatial context is better presented. Observing the value changes in the two metrics ($M_{ELV}$ and $M_{MLO}$) can also help reveal the joint effects of the attractive force, the repulsive force, and the geo-force on the graph layout. Ideally, if the nodes of each densely connected subgraph (cluster) in a graph are all located within a local geographical region, the algorithm is capable of generating a graph layout with both a clear graph pattern and a clear spatial pattern; in such a scenario, $M_{ELV}$ and $M_{MLO}$ will remain low for varying K. However, when a moderate K value (e.g., between 3 and 20) results in a large edge length variation (with $M_{ELV}$ close to its upper bound), adding the geo-force will significantly change the graph structure. For applications that focus on investigating graph patterns, the use of the geo-force is then not recommended. Instead, users can still rely on GeoGraphViz's unique "Graph Above the Map" feature (e.g., the green lines connecting graph nodes with their geolocations) to understand the spatial distribution patterns.
4.3 Algorithm generalizability test
To further test the generalizability of the proposed algorithm, we created a simulated dataset with distinctive graph and geographical distribution patterns and more nodes (>200) than the real dataset used in Section 4.2. The graph can be considered a large collaboration network of researchers with international perspectives. It can also be used to model semantic relationships for other types of data (e.g., publications, images, and commercial products) among which similarity can be measured. This simulated dataset contains three major clusters, with about the same number of nodes (~70) distributed to each cluster. The similarity values among each pair of nodes were randomly generated, following normal distributions. The existence of a linkage between two nodes, whether they are in the same cluster or not, is based on a predefined probability. More connections (edges) with stronger strengths (higher similarity values) are assigned to nodes within the same cluster, while the number of edges and the similarity values are lower for between-cluster nodes.

FIGURE 2 Graph layout of the expert data at the force balancing parameter K: (a) K = 0, (b) K = 5, and (c) K = 10,000.

The white nodes and white links in Figure 3a illustrate the clusters. This figure also shows the case when no geographical force is applied to the nodes; hence, the clusters are solely semantic. To assess the effect of location awareness in graph visualization, each node in the simulated dataset was assigned a geolocation. In general, the nodes in the three clusters are geographically distributed in the United States (US), Europe, and Asia, respectively. From west to east, we call the three main clusters observed in Figure 3a the US, the European, and the Asian cluster. However, outliers also exist. The green linkages in Figure 3a show nodes which "travel" a long distance. For instance, several nodes whose geolocations are in the US (the New York region) belong semantically to the European cluster; when there are no geographical constraints, these nodes are placed near the nodes to which they are akin. Similar cases include (1) a few nodes geographically located in Germany that semantically belong to the Asian cluster, and (2) three nodes in Norway, Sweden, and Thailand that moved westward to be close to the clusters to which they belong. Nodes with significant geographical movements are highlighted with green links in Figure 3a.
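A generator matching this description is easy to write; the sketch below is our own illustration, with cluster centers, edge probabilities, and similarity means chosen as plausible placeholders since the paper does not list exact values.

    import numpy as np

    rng = np.random.default_rng(0)

    def simulate_collaboration_graph(n_per_cluster=70, p_within=0.3, p_between=0.02):
        # Three geographically separated clusters: US, Europe, Asia (rough lon/lat centers)
        centers = [(-95.0, 39.0), (10.0, 50.0), (105.0, 33.0)]
        nodes, edges = [], []
        for c, (lon, lat) in enumerate(centers):
            for _ in range(n_per_cluster):
                nodes.append({"cluster": c, "geo": rng.normal([lon, lat], [8.0, 5.0])})
        for i in range(len(nodes)):
            for j in range(i + 1, len(nodes)):
                same = nodes[i]["cluster"] == nodes[j]["cluster"]
                if rng.random() < (p_within if same else p_between):
                    # normally distributed similarity clipped into (0, 1];
                    # within-cluster links are stronger on average
                    mu = 0.7 if same else 0.3
                    s = float(np.clip(rng.normal(mu, 0.1), 1e-3, 1.0))
                    edges.append((i, j, s))
        return nodes, edges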
FIGURE 3 Graph layout of the simulated dataset at the force balancing parameter K: (a) K = 0, (b) K = 5, and (c) K = 10,000.

The visualization results of the simulated graph under different force balancing parameters K are shown in Figure 3. While Figure 3a demonstrates the visualized graph without a geo-force, Figure 3c shows the case when a very strong geo-force is applied (K = 10,000) and has become the dominant force. In that case, all nodes are placed very close to their geolocations, so it is easy to observe the geographical properties of the graph nodes, but the graph's semantic structure can hardly be inspected. A visualization result that balances the two forces is shown in Figure 3b, which presents a clear spatial distribution as well as the semantic clustering pattern. In particular, the nodes from New York and Germany are dragged close to their actual clusters, but their geolocations are still recognizable under the given parameter settings. The statistics of the above three configurations, measured by $M_{ELV}$ and $M_{MLO}$, are listed in Table 2; they demonstrate a pattern similar to that in Table 1. When K increases from 0 to 5, the nodes' semantic distance ($M_{ELV}$) roughly doubles, but the geographical distance $M_{MLO}$ drops significantly, reaching a balanced view between graph and spatial patterns.

TABLE 2 Quantitative measures for graph structure change of the simulated data under different force balancing parameters K
    Scenario    K        M_ELV     M_MLO
    (a)         0        0.0241    0.315
    (b)         5        0.0445    0.0576
    (c)         10,000   0.0519    5.68e-5

4.4 Algorithm Efficiency Test
To assess system efficiency in visualizing graphs of increasing size, we simulated graph data with two types of patterns. In Type I graphs, the total number of edges is proportional to that of a complete, undirected graph (i.e., one in which any two nodes are connected by an edge). The number of edges $N_e$ in such graphs can be represented as

$N_e = \frac{p \cdot n \cdot (n-1)}{2}$   (7)

where n is the number of nodes in the graph. $N_e$ is thus proportional to $n^2$, controlled by a parameter $p \in (0, 1]$. Type II graphs share the characteristic that the total number of edges is proportional to the number of graph nodes. Hence, $N'_e$ can be represented as

$N'_e = \frac{c \cdot n}{2}, \quad (c \le n-1)$   (8)

where n is the number of nodes in the graph. In graphs with this pattern, the total number of edges $N'_e$ is proportional to the number of nodes, with the proportion controlled by a parameter $c \in [0, n-1]$. This strategy emulates real-world scenarios, such as social networks wherein each person maintains a certain number of (social) connections on average, regardless of the size of the network/graph. We further generated graphs of different sizes (with n = 100, 200, 400, 800, and 1600) and different graph density parameters (p = 0.05, p = 0.5, and c = 50) following the two graph types.
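The two graph families follow directly from Equations 7 and 8; a short sketch of how such test graphs can be instantiated (function names ours):

    import numpy as np

    rng = np.random.default_rng(0)

    def type_i_graph(n, p):
        # Type I, Eq. (7): keep each of the n(n-1)/2 possible edges with probability p,
        # so the expected number of edges is p * n * (n - 1) / 2
        return [(i, j) for i in range(n) for j in range(i + 1, n) if rng.random() < p]

    def type_ii_graph(n, c):
        # Type II, Eq. (8): about c * n / 2 edges in total, i.e. roughly c connections
        # per node on average, independent of the graph size
        m, edges = int(c * n / 2), set()
        while len(edges) < m:
            i, j = rng.integers(0, n, size=2)
            if i != j:
                edges.add((min(int(i), int(j)), max(int(i), int(j))))
        return sorted(edges)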
The graph visualization time, counted from loading the graph data, through simulating the positions of the nodes in the graph, to finishing rendering in the web browser, is reported in Figure 4. The X axis shows the number of nodes in the graph; the Y axis gives the graph visualization time (in seconds). In Type I graphs, the number of edges is proportional to the square of the number of nodes, and the larger the parameter p, the denser the graph. In Type II graphs, the number of edges is proportional to the number of nodes, and the larger the parameter c, the denser the graph. The dashed blue line shows the result of a graph simulation using the second strategy (Type II graphs) with c = 50. The solid green line with triangular markers indicates the visualization time for Type I graphs with p = 0.05. The solid orange line with square markers shows the results for Type I graphs with p = 0.5; it represents an edge case (rare in real-world scenarios) in which the graphs are very densely connected (the number of edges equals half of that of an undirected, complete graph). The green and blue lines show similar results: as the number of nodes increases, there is a linear relationship between visualization efficiency and the number of graph nodes. The orange line, in comparison, shows almost quadratic growth in the number of nodes. This result reflects the $O(n^2)$ time complexity of the algorithm presented in Section 3. As seen in Figure 4, the time efficiency of the GeoGraphViz algorithm is good and can serve near-real-time visualization purposes when the number of graph nodes is below 800: as shown by both the dashed blue and solid green lines, the visualization time stays under 3 seconds, and even for the orange line the visualization completes within 7 seconds. There are some lags, but considering the density of the graph, this visualization time is acceptable. When the number of graph nodes reaches a very high number, such as 1600, the lags become longer. In such cases, further optimization is required to improve both the visualization efficiency and the visual effect, as it will be difficult to examine the pattern of a very large graph with dense connections.

FIGURE 4 Efficiency of graph visualization as graph size increases.

5 GEOGRAPHVIZ USER INTERFACE
Figure 5 demonstrates the interface of our graph visualization tool (with the expert data). As seen, two layers are shown in the interface: (1) the 3D graph layer showing connections among disease experts (white lines connecting white nodes); and (2) a map layer connecting experts to their locations (i.e., the geolocations of their affiliations) using green lines from the white nodes to red dots. Together, these two layers demonstrate the clusters of disease experts who share similar research interests from the graph perspective. They can also reveal the spatial patterns of the potential collaboration network, be it local, regional, or international. Several supporting functions were developed to allow (1) clicking on a graph node to view the profile information of an expert (see the example in Figure 1), (2) filtering the graph to present subgraphs with different degrees of node connection (Figure 2), (3) turning the "KnowWhere" feature on and off to display purely a graph view or the "Graph Above the Map" view, and (4) a word-cloud view showing the collective expertise of the experts in a selected subgraph. Many graph features (e.g., connections among the graph nodes and connections across the graph and geolocations) can be set to invisible to allow examination of different facets of the graph. The graph can also be rotated, panned, and zoomed to improve the user experience. 6" + } + ], + "Jian Song": [ + { + "url": "http://arxiv.org/abs/2309.01907v1", + "title": "SyntheWorld: A Large-Scale Synthetic Dataset for Land Cover Mapping and Building Change Detection", + "abstract": "Synthetic datasets, recognized for their cost effectiveness, play a pivotal\nrole in advancing computer vision tasks and techniques. However, when it comes\nto remote sensing image processing, the creation of synthetic datasets becomes\nchallenging due to the demand for larger-scale and more diverse 3D models.
This\ncomplexity is compounded by the difficulties associated with real remote\nsensing datasets, including limited data acquisition and high annotation costs,\nwhich amplifies the need for high-quality synthetic alternatives. To address\nthis, we present SyntheWorld, a synthetic dataset unparalleled in quality,\ndiversity, and scale. It includes 40,000 images with submeter-level pixels and\nfine-grained land cover annotations of eight categories, and it also provides\n40,000 pairs of bitemporal image pairs with building change annotations for\nbuilding change detection task. We conduct experiments on multiple benchmark\nremote sensing datasets to verify the effectiveness of SyntheWorld and to\ninvestigate the conditions under which our synthetic data yield advantages. We\nwill release SyntheWorld to facilitate remote sensing image processing\nresearch.", "authors": "Jian Song, Hongruixuan Chen, Naoto Yokoya", "published": "2023-09-05", "updated": "2023-09-05", "primary_cat": "cs.CV", "cats": [ "cs.CV", "cs.AI", "cs.HC" ], "main_content": "1. Introduction

High-resolution remote sensing image processing is vital for urban planning, disaster response, and environmental monitoring. Although advances in deep neural networks and the emergence of various benchmark datasets have led to significant progress in these research areas, the unique aspects of remote sensing image processing tasks still present many challenges. First, acquiring large-scale datasets that compare with those in computer vision and natural language processing is difficult due to the sensitivity, privacy, and commercial considerations of remote sensing data; as a result, remote sensing datasets tend to be significantly smaller. Second, compared to fields like computer vision or natural language processing, remote sensing data annotation is both more costly and more time-intensive. For example, annotating a 1024 × 1024 image from a large land cover mapping dataset such as [48] usually takes more than two hours. Finally, variations in image capture conditions, such as sensor type, image acquisition season, and geographical location, introduce a severe domain shift problem in remote sensing image processing. Synthetic datasets, with their low-cost acquisition, high fidelity, and diversity, present a viable solution to these challenges. In the field of computer vision, numerous high-quality synthetic datasets [4, 13, 25, 32, 36, 47] have already emerged, primarily serving tasks such as semantic segmentation, depth estimation, optical flow estimation, and 3D reconstruction in street-view and indoor-view scenarios. However, high-quality synthetic datasets for remote sensing are scarce in comparison. The most important reason is that, as described in [21], in a virtual world constructed for street-view or indoor-view scenes, the distance between the sensor and the target location is relatively small (a few or tens of meters), with the main focus being on pedestrians, vehicles, road signs, or various furniture, resulting in a relatively small virtual world. In contrast, in remote sensing scenarios, sensors are often located tens of thousands of meters away from the target virtual world, making even a relatively small virtual world extend over several square kilometers while maintaining a multitude of diverse targets, such as thousands of trees in different poses and hundreds of buildings of different styles. This makes the construction of large-scale synthetic remote sensing datasets exceptionally challenging.
Upon a thorough survey of the available synthetic remote sensing datasets [3, 21, 31, 39, 50, 52], we discern that each of them has specific limitations. First, most existing works focus on a single task, such as building segmentation [21, 52] or object detection [39, 50]; there is a notable lack of effective synthetic datasets for critical tasks like multi-class land cover mapping and building change detection. Furthermore, these datasets exhibit limited diversity due to constraints associated with the size of the virtual world and the tools used: they either emulate real-world cities to create a limited number of virtual environments or use real remote sensing images as the background. Finally, when it comes to 3D models in the virtual world, existing methodologies consistently rely on predefined textures, layouts, and geometries, resulting in a restrictive range of styles for buildings, trees, and other land objects.

Figure 1. Examples of the SyntheWorld dataset (classes: rangeland, bareland, developed space, road, tree, water, agriculture land, building).

In this work, we use the freely available open-source 3D modeling software Blender [7], along with various plugins from the Blender community, GPT-4 [29], and the Stable Diffusion model [34], to develop a procedural modeling system specifically for generating high-resolution remote sensing datasets. We present SyntheWorld, the largest high-resolution remote sensing image dataset for land cover mapping and building change detection tasks. The main contributions of this work are:
• We introduce SyntheWorld, the first fully synthetic high-resolution remote sensing dataset, which integrates procedural 3D modeling techniques with Artificial Intelligence Generated Content (AIGC).
• We use SyntheWorld as the first synthetic dataset specifically designed to improve performance in two crucial tasks: multi-class land cover mapping and building change detection.
• We propose the Relative Distance Ratio (RDR), a new metric designed to quantify the conditions under which the synthetic dataset can drive performance improvements.
• Through comprehensive experiments on various remote sensing benchmark datasets, we demonstrate the utility and effectiveness of our dataset.

2. Related Works
2.1. Remote Sensing Image Processing Tasks
2.1.1 Land Cover Mapping
The discipline of land cover mapping is a crucial component of remote sensing image processing, where the goal is to categorize and depict physical features on the Earth's surface, such as grass, trees, water bodies, bareland, and buildings. This task resembles semantic segmentation in traditional computer vision. Although the introduction of benchmark datasets for real-world scenarios, such as DeepGlobe [11], LoveDA [46], and OpenEarthMap (OEM) [48], has driven significant advances in associated research, there is still a clear need for high-quality synthetic datasets, an area where the field of computer vision has made significant progress. Recognizing this gap, we were motivated to create SyntheWorld, a synthetic dataset crafted to improve performance on land cover mapping tasks.
2.1.2 Building Change Detection
The task of building change detection forms another crucial component of remote sensing image processing.
It involves the identification and localization of modifications in man-made structures, especially buildings, over time, achieved through the analysis of images of the same area captured at different intervals. It is an indispensable technique for assessing damage in scenarios such as earthquakes, hurricanes, or floods, and for monitoring urban development and expansion over time. Typical annotations for this task are binary masks, with networks trained to predict areas of building change based on input image pairs from two time points. While the emergence of benchmark real-world datasets such as WHU-CD [17], LEVIR-CD+ [5], and SECOND [51] has provided the field with valuable data resources, the lack of high-quality synthetic datasets has hindered the pace of related research.

2.2. Existing Synthetic Datasets
2.2.1 Street-view & Indoor-view
As mentioned above, the availability of large, high-quality synthetic datasets for street-view and indoor-view scenes has driven the development of related techniques in traditional computer vision. The MPI Sintel Dataset [4] is widely used for training and evaluating optical flow algorithms, capturing natural scenes and motions in a synthetic dataset derived from an animated film. SceneFlow [25], with more than 35,000 synthetic stereo video sequences, is designed for the evaluation of optical flow, disparity, and scene flow algorithms. SYNTHIA [36], a dataset composed of 9,400 multi-viewpoint frames from a virtual city, targets urban scene understanding tasks with its pixel-level semantic annotations. The GTA5 dataset [32], comprising 24,966 synthetic images from the perspective of a car in virtual cities, is tailored to the understanding of urban scenes, with pixel-level semantic annotations compatible with the Cityscapes dataset [9]. Synscapes [47], featuring 25,000 photorealistic street scenes, aims to improve the performance of computer vision models in outdoor scenes with its precise semantic labels. Finally, SceneNet [13], a diverse synthetic dataset of over 5 million indoor scenes with RGB-D images and semantic labels, is designed for indoor scene understanding tasks.
2.2.2 Overhead-view
The AICD dataset [3], one of the earliest overhead-view datasets, uses the Virtual Battle Station 2 game engine to simulate building alterations. Despite its 1,000 pairs of 800 × 600 RGB images with building change masks, its 500 change instances are limited compared to the tens of thousands found in real-world datasets. The GTA-V-SID dataset [52], extracted from the GTA-V game, covers a 100 km² area with 121 aerial RGB images of size 500 × 500; although useful for building segmentation tasks, its 1 m GSD limits performance on high-resolution remote sensing datasets. Synthinel-1 [21], the first high-resolution synthetic remote sensing dataset for building segmentation, is based on CityEngine and offers a variety of urban styles. The SyntCities dataset [31] targets disparity estimation in remote sensing images, featuring three virtual cities and 8,100 pairs of high-resolution images. RarePlanes [39], a semi-synthetic dataset for aircraft object detection, combines real WorldView-3 satellite imagery and 3D models.

3. Dataset Generation and Description
Constructing a virtual city manually is time-consuming. SyntheWorld therefore differs from existing overhead-view synthetic datasets by using procedural modeling.
Previous studies in computer graphics have explored procedural modeling for cities and buildings [19, 27, 28], but none have utilized these techniques for the creation of overhead-view datasets. We create our own procedural rules to create 3D geometries and apply textures derived from generative models, which minimizes labor costs and enriches diversity.

Figure 2. The essential components for building the SyntheWorld dataset: randomized parameter scripts (Road/River.py, Tree.py, Building.py, Grid_based.py, Terrain_based.py, Sensor.py, Sun.py) covering the layout, geometry, and sensor & sunlight settings, together with manual prompts, GPT-4, and Stable Diffusion for texture generation (rangeland, agriculture land, bareland, road, roof).

3.1. Generation Workflow
Layout. We adopt grid-based and terrain-based methods for the virtual world, as illustrated in Fig. 2. For the grid-based method, we randomly slice a grid of 0.25-0.36 km² into blocks of varying numbers and sizes, place different types of buildings and trees in each block, and use the boundaries between the blocks as our road system. This method mainly simulates the more regular city and suburban layouts and contributes to the production of 0.3-0.6 m GSD synthetic remote sensing images. For the terrain-based method, we use random noise textures to generate terrains such as mountains, plains, and oceans over areas of 1-2 km². Placing rivers, roads, buildings, and trees according to carefully designed rules based on Geometry Nodes in Blender [7], this method mimics the irregular layouts of developing regions and mainly contributes to the production of 0.6-1.0 m GSD synthetic remote sensing images.
Geometry. The geometry row in Fig. 2 demonstrates our approach to procedurally modeling trees and buildings. For buildings, we use random noise to cut out differently shaped grids on a flat plane, which we then extrude into 3D geometries following preset rules; users can control predefined parameters to generate an unlimited number of different geometric styles. We distribute predefined asset components (walls, roofs, windows, etc.) over the geometry and finally map the texture generated by AIGC onto the building. For trees, we use randomly shaped curves as trunks and distribute tree components of different styles along the curve following certain rules.
Texture. The last row in Fig. 2 shows examples of our process for generating the corresponding texture assets using AIGC.
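Figure 2 suggests that each scene is driven by a handful of randomized parameters. The sketch below shows what such a sampler could look like for the grid-based layout; the parameter names follow the figure, but every numeric range except the 0.25-0.36 km² area is an illustrative guess, not a documented SyntheWorld value.

    import random

    def sample_grid_based_scene():
        # Grid-based layout (Grid_based.py in Fig. 2): slice a 0.25-0.36 km^2 grid
        # into districts and populate each with buildings and trees
        return {
            "area_km2": random.uniform(0.25, 0.36),
            "district_num": random.randint(4, 12),      # illustrative range
            "district_size": random.randint(40, 120),   # illustrative, in meters
            "obj_density": random.uniform(0.2, 0.9),    # illustrative range
        }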
In terms of operational specifics, we first write a Stable Diffusion usage guide as a prompt to help GPT-4 understand its workings and prompt forms. We then provide high-quality prompts as examples and ask GPT-4 to generate differently themed prompts for different types of textures. In total, we generated around 140,000 seamless textures for the different geometries used to build SyntheWorld, far exceeding the number of textures used by existing overhead-view datasets. See the supplementary material for detailed prompts and generated images.

Table 1. Features and composition comparison among remote sensing synthetic datasets. LC: land cover mapping. BCD: building change detection. BS: building segmentation. OD: object detection. DE: disparity estimation.
    Dataset              GSD (m)         Task        # of images    Image size    Automatic labeling   Fully synthetic   Procedural modeling
    AICD [3]             −               BCD         1,000 pairs    800 × 600     √                    √                 ×
    GTA-V-SID [52]       1               BS          121            500 × 500     ×                    √                 ×
    Synthinel-1 [21]     0.3             BS          1,054          572 × 572     √                    ×                 ×
    RarePlanes [39]      0.31∼0.39       OD          50,000         512 × 512     √                    ×                 ×
    SyntCities [31]      0.1, 0.3, 1.0   DE          8,100 pairs    1024 × 1024   √                    ×                 ×
    SyntheWorld (Ours)   0.3∼0.6         BS/LC/BCD   30,000 pairs   512 × 512     √                    √                 √
                         0.6∼1.0                     10,000 pairs   1024 × 1024

3.2. Structure of Dataset
As shown in Tab. 1, SyntheWorld is a comprehensive image dataset consisting of 40,000 pairs of images. Of these, 30,000 pairs have a GSD ranging from 0.3 to 0.6 m, with each image of size 512 × 512; the remaining 10,000 pairs have a GSD of 0.6 to 1.0 m and a larger image size of 1024 × 1024. Each pair in the dataset contains a post-event image, which is utilized for the land cover mapping task. These post-event images are accompanied by semantic labels of eight categories, as shown in Fig. 1; the categories are consistent with those of the OEM [48] dataset. Correspondingly, the pre-event images are derived by introducing variability into each scene. This involves different textures, lighting parameters, and camera settings; additionally, there is a 10% to 50% chance that any given building in the scene is removed. The pre-event and post-event images of each pair are used together for the building change detection task, and the dataset accordingly comes with 40,000 binary change masks for this task. The off-nadir angle of all images ranges from −25° to 25° and follows a Gaussian distribution with mean 0° and variance 2.3°. Similarly, we simulate the sun's position during the day in most countries by adjusting the zenith (ranging between 25° and 35°) and elevation (ranging between 45° and 135°) parameters, guided by the documentation of the Pro Atom [8] addon from the Blender community; both parameters follow uniform distributions. The inclusion of various viewing angles and sun elevations enhances the robustness of SyntheWorld and ensures its applicability to a wide range of real-world datasets.
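The viewing and lighting distributions in Sec. 3.2 translate directly into a sampler like the following (our own sketch; we read "variance 2.3°" as the Gaussian variance, hence a standard deviation of sqrt(2.3), and clip to the stated ±25° range).

    import random

    def sample_sensor_and_sun(high_res=True):
        # off-nadir ~ N(0, 2.3) degrees, clipped to [-25, 25]
        off_nadir = max(-25.0, min(25.0, random.gauss(0.0, 2.3 ** 0.5)))
        return {
            "off_nadir_deg": off_nadir,
            "gsd_m": random.uniform(0.3, 0.6) if high_res else random.uniform(0.6, 1.0),
            "sun_zenith_deg": random.uniform(25.0, 35.0),       # uniform, per Sec. 3.2
            "sun_elevation_deg": random.uniform(45.0, 135.0),   # uniform, per Sec. 3.2
        }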
3.3. Comparison with Existing Synthetic Datasets
As depicted in Tab. 1, we provide a comparative analysis of SyntheWorld and existing synthetic remote sensing datasets in terms of their features and composition. The Task column lists the primary tasks reported in the corresponding dataset's literature. Regarding label generation, the GTA-V-SID dataset [52] consists of screenshots of the GTA-5 commercial video game, with buildings manually annotated; by contrast, the remaining datasets are capable of automatically generating annotations via the corresponding 3D software. In terms of complete synthesis, only SyntheWorld achieves this feat: the other datasets have adopted real remote sensing images to some extent, as textures or as part of the dataset, during their construction. Finally, in SyntheWorld most 3D models are generated using procedural modeling, while in the other synthetic datasets the geometric structure and texture of the models are either predefined or meticulously designed by 3D artists. This unique characteristic significantly enhances SyntheWorld's diversity.

4. Experiments
4.1. Real-world Benchmark Datasets
To validate the versatility and effectiveness of SyntheWorld, we performed experiments using several high-resolution remote sensing datasets from various real-world scenarios, of which we give an overview below. In the experiments in this section, we use "w/" to signify the utilization of the SyntheWorld dataset and "w/o" to indicate its non-use. For the building segmentation task, we relied on the OEM [48] and LoveDA [46] datasets, as well as the INRIA [24] and BANDON [30] datasets. The INRIA dataset, which targets building footprint segmentation, incorporates aerial images from ten cities in the United States and Europe at a resolution of 0.3 m. The BANDON dataset stands out for its significant off-nadir angles and focuses on urban areas with skyscrapers; it offers high-resolution (0.6 m) remote sensing images from Beijing and Shanghai. We turned to the OEM and LoveDA datasets again for the multi-class land cover mapping task. The OEM dataset, encompassing 97 regions across 44 countries worldwide, provides high-resolution images with detailed eight-class land cover annotations. The LoveDA dataset offers 0.3 m GSD remote sensing images from three diverse regions in China, labeled with seven land cover categories. For the building change detection task, we harnessed the WHU-CD [17], LEVIR-CD+ [5], and SECOND [51] datasets. The LEVIR-CD+ dataset consists of 987 image pairs, with 637 pairs in the training set and 348 pairs in the test set. SECOND, a semantic change detection dataset, collects 4,662 pairs of aerial images from various platforms and sensors across cities such as Hangzhou, Chengdu, and Shanghai. The WHU-CD dataset consists of two pairs of super-high-resolution (0.075 m) aerial images; we cropped the large training (21243 × 15354) and testing (11265 × 15354) images into non-overlapping 512 × 512 patches for our experiments.

4.2. Building Segmentation
To compare with existing overhead-view synthetic datasets, which mainly include semantic labels of buildings, we performed building segmentation experiments. We use the DeepLabv3+ [6] network equipped with a ResNet-50 [14] backbone.
We adopted the SGD optimizer [33] for all synthetic datasets, employing a learning rate of 1e-3, a weight decay of 5e-4, and a momentum of 0.9; for the OEM dataset, we opted for a higher learning rate of 1e-2. The results are presented in Tab. 2.

Table 2. mIoU (%) results of the building segmentation task using DeepLabv3+. * means only the building labels of the dataset are used.
    Train on \ Test on   OEM*    LoveDA*   INRIA   BANDON
    GTA-V-SID [52]       2.43    0.88      1.74    1.64
    Synthinel-1 [21]     35.37   14.13     39.89   28.19
    SyntCities [31]      23.61   21.39     30.39   30.01
    SyntheWorld          49.26   37.28     45.76   34.01
    OEM* [48]            80.48   55.35     75.61   64.19

The GTA-V-SID [52] dataset underperforms on the various high-resolution real-world datasets due to its small size and 1 m GSD. The model trained on the SyntheWorld dataset outperforms the other datasets on all four real-world datasets, especially on OEM and LoveDA. These two datasets include a considerable number of buildings in developing or developed areas; thus the performance of SyntheWorld far exceeds that of the other competitors on them. As the buildings in the INRIA [24] and BANDON [30] datasets are predominantly high-rises in urban areas or well-organized detached houses in suburban areas, the advantage of the SyntheWorld dataset is not as evident as on the other two datasets, but it still shows the best performance. Furthermore, the last row of Tab. 2 shows the performance of the model trained on the OEM dataset and tested on the other datasets: although SyntheWorld significantly outperforms the other synthetic datasets, a gap to the real-world dataset remains. In Fig. 3(a), we also visualize the features extracted by the well-trained ResNet-50 from all synthetic and real datasets using UMAP [26]; in feature space, SyntheWorld is closer to the real-world datasets than any existing synthetic dataset.

Figure 3. (a) 2D UMAP visualization of the synthetic datasets (GTA-V-SID, SyntCities, Synthinel-1, SyntheWorld) and the real datasets (BANDON, INRIA, LoveDA, OEM), using ResNet-50 pre-trained on the OEM dataset as the feature extractor; (b) colormap of density estimation for the SyntheWorld, OEM, and LoveDA datasets.

4.3. Land Cover Mapping
SyntheWorld is the first synthetic dataset that offers consistent annotations compatible with high-resolution real-world benchmarks. In this section, we primarily discuss the performance of SyntheWorld on the land cover mapping task.

4.3.1 Cross-dataset Experiments
To evaluate the enhancements brought about by using SyntheWorld, we adopted the mixed training strategy [32] often used with synthetic datasets, with a batch size of 8 comprising 7 real images and 1 synthetic image per batch. The model was trained using DeepLabv3+ with the SGD optimizer at an initial learning rate of 1e-2, with a weight decay of 5e-4 and a momentum of 0.9. All experiments were trained for 100 epochs on a Tesla A100 GPU. Specifically, we map the rangeland and developed-space classes in OEM and SyntheWorld to the background class in LoveDA to keep the classes consistent.
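The 7:1 mixed-batch strategy can be emulated with two standard PyTorch data loaders; the sketch below is one possible setup, where real_dataset and synthetic_dataset are placeholders and segmentation_models_pytorch is our choice of a library providing DeepLabv3+ with a ResNet-50 encoder (not necessarily the authors' implementation).

    import torch
    from torch.utils.data import DataLoader
    import segmentation_models_pytorch as smp

    model = smp.DeepLabV3Plus(encoder_name="resnet50", classes=8)
    optimizer = torch.optim.SGD(model.parameters(), lr=1e-2, weight_decay=5e-4, momentum=0.9)

    real_loader = DataLoader(real_dataset, batch_size=7, shuffle=True)        # placeholder dataset
    synth_loader = DataLoader(synthetic_dataset, batch_size=1, shuffle=True)  # placeholder dataset

    for (x_r, y_r), (x_s, y_s) in zip(real_loader, synth_loader):
        # each step sees 7 real + 1 synthetic image, i.e. an effective batch of 8
        x = torch.cat([x_r, x_s], dim=0)
        y = torch.cat([y_r, y_s], dim=0)
        loss = torch.nn.functional.cross_entropy(model(x), y)
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()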
Table 3. Land cover mapping mIoU (%) outcomes from intra-dataset and cross-dataset evaluations, utilizing the DeepLabv3+ model for all experiments. O→L denotes training on the OEM training set and testing on the LoveDA validation set, while L→O represents the converse.
    Datasets      w/o     w/
    OEM [48]      66.96   66.84
    LoveDA [46]   51.14   53.32
    O→L           35.28   34.83
    L→O           21.95   25.24

Tab. 3 outlines the results obtained by integrating training images from a real-world dataset with SyntheWorld, together with the results of cross-dataset tests using the SyntheWorld dataset. Incorporating SyntheWorld with the entire OEM [48] training set does not result in performance enhancements. Similarly, combining SyntheWorld with the OEM training set and subsequently testing on LoveDA [46] slightly reduces model efficacy. However, when we merge SyntheWorld with LoveDA and test on the same, the model's mIoU increases by 2.18 points. In addition, a 3.29-point improvement in mIoU is observed when testing on the OEM test set after integrating SyntheWorld with LoveDA. To investigate this phenomenon, we made density estimation maps for the three datasets, as displayed in Fig. 3(b). They reveal a notable overlap between SyntheWorld and OEM, with a lesser overlap in relation to LoveDA, while the coverage of the OEM dataset surpasses that of both LoveDA and SyntheWorld. This finding sheds light on the patterns observed in Tab. 3: the vast diversity of the OEM dataset effectively captures most of the data diversity inherent in SyntheWorld, so no performance enhancement results from integrating SyntheWorld. Nevertheless, the substantial overlap between SyntheWorld and OEM enables a performance boost when SyntheWorld is merged with LoveDA and tested on OEM. Conversely, the lesser overlap between SyntheWorld and LoveDA means that integrating SyntheWorld during OEM training does not lead to improvements on the LoveDA test set. Subsequently, we assessed performance when integrating SyntheWorld with varying proportions of the real-world datasets. Tab. 4 presents the findings: irrespective of whether the real-world dataset is OEM or LoveDA, the integration of SyntheWorld consistently enhances model performance when the quantity of training data is limited.

Table 4. mIoU (%) results from the DeepLabv3+ model, trained both with and without SyntheWorld, on two real-world land cover mapping datasets at various proportions of real image utilization.
                  1%               5%               10%
    Datasets      w/o     w/       w/o     w/       w/o     w/
    OEM [48]      40.9    45.01    52.21   54.0     58.40   59.31
    LoveDA [46]   34.59   36.75    42.38   44.58    45.27   48.12

4.3.2 Cross-domain Experiments
In order to examine the performance of SyntheWorld in out-of-domain test scenarios, we partition the OEM [48] dataset into seven distinct continents: Africa, Asia, Europe, Central America, North America, South America, and Oceania. Simultaneously, for the LoveDA [46] dataset, we conducted experiments using urban and rural areas as separate domains. We conducted experiments with various decoders and encoders; in this section, we show the results of one model. See the supplementary material for more results from different models, dataset divisions, and experimental setups.
Continent-wise experimental results. Fig. 4 displays the results of the cross-continent experiments on the OEM dataset using the U-Net [35] architecture with the EfficientNet-B4 [43] encoder. We observe that our SyntheWorld dataset can significantly enhance performance across most dataset pairs. We also show in Fig. 5 qualitative results for cases where synthetic data leads to a boost; more results can be found in the supplementary material. However, in some cases the synthetic dataset does not yield a substantial improvement and can even degrade model performance. It is crucial to investigate the reasons for such enhancement and impairment when using synthetic datasets; we therefore analyze these results further in Sec. 4.3.3.

Figure 4. Results of continent-wise in-domain and out-of-domain land cover mapping experiments on the OEM dataset. The x-axis represents the target domain and the y-axis the source domain; U-Net with the EfficientNet-B4 encoder is used for all experiments. (a) mIoU results without SyntheWorld; (b) mIoU results of mixed training with SyntheWorld; (c) changes in mIoU.

Figure 5. Qualitative results of the U-Net model on the continent-wise land cover mapping task.

Urban-Rural experimental results.
We conducted similar cross-domain experiments on the LoveDA dataset, which includes two domains, rural and urban. The results are illustrated in Tab. 5: the SyntheWorld dataset enhances model performance in both in-domain and out-of-domain tests.

Table 5. Land cover mapping results, measured in mIoU (%), from cross-domain experiments involving the urban and rural areas of the LoveDA dataset.
                  Test on Urban      Test on Rural
    Train on      w/o     w/         w/o     w/
    Urban         47.00   50.32      33.44   37.95
    Rural         36.86   38.17      48.64   51.66

4.3.3 Relative Distance Ratio
The cross-domain experiments discussed in Sec. 4.3.1 and Sec. 4.3.2 show that the SyntheWorld dataset does not always yield significant improvements, which highlights the need to understand the underlying causes. We introduce a metric, the Relative Distance Ratio (RDR), aiming to quantify the relationship between the source, target, and synthetic datasets and to clarify when synthetic data can bring improvements. For measuring the distance between datasets, various methods have been discussed in the literature [1, 15, 38]; the most commonly used measure of the distance between synthetic and real datasets is the FID score [15]. Here we adopt the Fréchet Distance (FD) as the measure of distance between datasets. Since the Inception model [42] pre-trained on ImageNet [12] is not suitable for remote sensing datasets, we use a ResNet-50 [14] pre-trained on the OEM [48] dataset. The formula to compute the FD between any dataset pair is as follows:

$\mathrm{FD}(x, y) = \|\mu_x - \mu_y\|^2 + \mathrm{Tr}\big(\Sigma_x + \Sigma_y - 2(\Sigma_x \Sigma_y)^{\frac{1}{2}}\big)$   (1)

where $\mu_x$ and $\mu_y$ denote the mean feature vectors of datasets x and y, respectively, and $\Sigma_x$ and $\Sigma_y$ represent the covariance matrices of the corresponding feature vectors. We denote the source domain dataset by S, the target domain dataset by T, SyntheWorld by G, and the FD between any two datasets by $\delta(\cdot, \cdot)$. The distance between the source domain dataset S and the target domain dataset T can then be expressed as

$\delta(f_S, f_T) = \mathrm{FD}(f_S, f_T)$   (2)

and, similarly, the distance between the target domain dataset T and the synthetic dataset G as

$\delta(f_T, f_G) = \mathrm{FD}(f_T, f_G)$   (3)

Here $f_S$, $f_T$, and $f_G$ are the features obtained by applying the ResNet-50 model pre-trained on the OEM dataset. Subsequently, we define the Relative Distance Ratio (RDR), denoted $\mathcal{R}(f_S, f_T, f_G)$, by the following formula:

$\mathcal{R}(f_S, f_T, f_G) = \frac{\delta(f_T, f_G)}{\delta(f_S, f_T)}$   (4)

Intuitively, a smaller $\mathcal{R}$ indicates a greater capacity of the model to integrate knowledge from the synthetic data and transfer it to the target domain.
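Equations 1-4 amount to a few lines of NumPy/SciPy. The sketch below (function names ours) assumes the features have already been extracted with the OEM-pre-trained ResNet-50, as n_samples × dim arrays.

    import numpy as np
    from scipy.linalg import sqrtm

    def frechet_distance(feats_x, feats_y):
        # Eq. (1): FD between two feature sets
        mu_x, mu_y = feats_x.mean(axis=0), feats_y.mean(axis=0)
        cov_x = np.cov(feats_x, rowvar=False)
        cov_y = np.cov(feats_y, rowvar=False)
        cov_mean = sqrtm(cov_x @ cov_y)
        if np.iscomplexobj(cov_mean):       # discard numerical noise from sqrtm
            cov_mean = cov_mean.real
        return float(((mu_x - mu_y) ** 2).sum() + np.trace(cov_x + cov_y - 2.0 * cov_mean))

    def relative_distance_ratio(f_s, f_t, f_g):
        # Eq. (4): RDR = delta(T, G) / delta(S, T)
        return frechet_distance(f_t, f_g) / frechet_distance(f_s, f_t)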
To validate this, we present a correlation scatter plot in Fig. 6, which reveals a negative correlation between $\mathcal{R}$ and the improvement in mIoU. This observation aligns with our initial conception in designing the RDR metric.

Figure 6. Scatter diagram showing the correlation between mIoU changes and the proposed Relative Distance Ratio (RDR).

Therefore, the proposed RDR metric effectively serves as a quantitative conditional criterion for employing synthetic data: when $\mathcal{R}$ is large, there is a risk in using synthetic data, and vice versa.

4.4. Building Change Detection
In this section, we demonstrate the effectiveness of SyntheWorld on the building change detection task. We employ four prevalent building change detection networks: FC-Siam-Diff [10], STANet-PAM [5], DTCDSCN [22], and ChangeFormer [2]. We adhere to a mixed training strategy with a 7:1 real-to-synthetic image ratio. For ChangeFormer and DTCDSCN we use the AdamW [23] optimizer with a learning rate of 1e-4; for the other two models we use the Adam optimizer with a learning rate of 1e-3. Each mixed training experiment is trained for 100 epochs on a Tesla A100 GPU.

Table 6. F1 scores resulting from the use or non-use of SyntheWorld across three building change detection benchmark datasets, assessed with three different models. * means the building-change labels of SECOND are used.
                     STANet-PAM       DTCDSCN          ChangeFormer
    Datasets         w/o     w/       w/o     w/       w/o     w/
    LEVIR-CD+ [5]    0.752   0.782    0.793   0.812    0.784   0.835
    SECOND* [51]     0.713   0.733    0.712   0.727    0.723   0.734
    WHU-CD [17]      0.707   0.802    0.769   0.862    0.783   0.836

Tab. 6 presents the F1 scores of the three models applied to the three real-world datasets. Evidently, for each real-world dataset and each model type, integrating the SyntheWorld dataset induces an improvement, most notably for the WHU-CD dataset, where it yields an increase of almost 10 F1 points with the STANet-PAM and DTCDSCN models. We also display in Fig. 7 qualitative results where using SyntheWorld leads to enhancements; more results can be found in the supplementary material.

Table 7. Comparison of F1 scores from the DTCDSCN model trained with and without SyntheWorld, applied to three real-world datasets at varying ratios of real image use.
                     1%               5%               10%
    Datasets         w/o     w/       w/o     w/       w/o     w/
    LEVIR-CD+ [5]    0.517   0.646    0.636   0.731    0.726   0.764
    SECOND* [51]     0.401   0.435    0.546   0.622    0.583   0.631
    WHU-CD [17]      0.242   0.312    0.433   0.638    0.510   0.705

Tab. 7 reveals the F1 scores of the DTCDSCN model trained with different proportions of the real-world training sets, with and without the incorporation of SyntheWorld. Across all real-world datasets, SyntheWorld invariably provides a substantial performance improvement when training data is scarce.

Table 8. Evaluation of generalizability across multiple building change detection datasets (F1 scores).
    Train on \ Test on     LEVIR-CD+   SECOND*   WHU-CD
    LEVIR-CD+ (Real)       0.751       0.180     0.614
    SECOND* (Real)         0.405       0.614     0.522
    WHU-CD (Real)          0.222       0.248     0.812
    AICD (Synthetic)       0.094       0.267     0.092
    SyntheWorld            0.419       0.386     0.457

Tab. 8 illustrates the generalizability of the SyntheWorld dataset with the FC-Siam-Diff model, in comparison with the three real datasets and the AICD synthetic dataset. The results show that training on the SyntheWorld dataset alone achieves acceptable results on all three datasets; in particular, compared to the AICD [3] dataset, ours has a significant performance and generalization advantage.

Figure 7. Qualitative results of the DTCDSCN model on the building change detection task for the three datasets (pre-event image, post-event image, reference map, with and without SyntheWorld; colors mark true/false positives and negatives).
5. Discussion and Societal Impacts
We introduced SyntheWorld, the most extensive synthetic remote sensing dataset, used for land cover mapping and building change detection. Its diversity, enhanced by procedural modeling and AIGC, sets it apart from other datasets, and comprehensive experiments validate SyntheWorld's utility and flexibility. Furthermore, we investigate scenarios where SyntheWorld does not enhance performance, proposing the RDR metric as an initial exploration of when SyntheWorld can deliver a lift. Notably, SyntheWorld still exhibits a significant gap compared to real datasets; this stems from some modeling rules mismatching real-world distributions, a challenge we aim to address in future work. Additional future work involves leveraging SyntheWorld to explore domain adaptation and generalization techniques in remote sensing.

Acknowledgements
This work was supported in part by JST FOREST Grant Number JPMJFR206S; Microsoft Research Asia; and the GSFS Challenging New Area Doctoral Research Grant (Project No. C2303)." + } + ], + "Duong H. Phong": [ + { + "url": "http://arxiv.org/abs/2304.02533v1", + "title": "Geometric flows from unified string theories", + "abstract": "A survey of new geometric flows motivated by string theories is provided.\nTheir settings can range from complex geometry to almost-complex geometry to\nsymplectic geometry. From the PDE viewpoint, many of them can be viewed as\nintermediate flows between the Ricci flow and the K\\"ahler-Ricci flow, albeit\noften coupled to flows of additional fields. In particular, a survey is given\nof joint works of the author with Tristan Collins, Teng Fei, Bin Guo, Sebastien\nPicard, and Xiangwen Zhang.", "authors": "Duong H. Phong", "published": "2023-04-05", "updated": "2023-04-05", "primary_cat": "math.DG", "cats": [ "math.DG", "math.AP" ], "main_content": "1 Introduction

The four known fundamental forces of nature are each described by either the Einstein equation or by the Yang-Mills equation, both of which have had a profound influence on the development of geometry and the theory of partial differential equations. A grand dream of theoretical physics is, however, a unified theory of all interactions, for which string theories, themselves unified with 11-dimensional supergravity since the mid 1990's into M Theory, remain at this moment the only viable candidate. Presumably M Theory will be governed by equations which unify in a suitable sense the Einstein equation and the Yang-Mills equation, and which will surely be of great importance for mathematics. The problem is that M Theory is still far from being well understood, and is even at this moment only known indirectly through its various limits. As a first step, we can still consider the equations for each of the limiting theories of M Theory, and in fact of their low-energy limits, which are 10d or 11d theories of point particles. Historically, this was actually done first, with the identification in 1985 by Candelas, Horowitz, Strominger, and Witten [10] of Calabi-Yau 3-folds as vacua for supersymmetric compactifications of the heterotic string to a 4-dimensional space-time. At the time, Calabi-Yau 3-folds appeared so rigid as to preclude searches for other solutions of the heterotic string or for the other string theories. But the situation has changed a lot since, especially with dualities and non-perturbative effects, and it is imperative to try and understand these other vacua.
There is now strong evidence that each of these theories leads to a new notion of special geometry, and is governed by new partial differential equations which are of considerable interest in their own right. Thus the above Calabi-Yau 3-folds should be interpreted as manifestations of SU(3) holonomy with respect to the Chern unitary connection, but this may be only one among many special geometries yet to be discovered.

[Footnote 1: Contribution to Surveys in Differential Geometry, Vol. 27 (2022), "Forty Years of Ricci Flow", edited by H.D. Cao, R. Hamilton, and S.T. Yau. Work supported in part by the National Science Foundation under grants DMS-18-55947 and DMS-22-03273.]

Remarkably, geometric flows turn out to be the most effective method to investigate these new equations. An early example is the G2 flow of Bryant [7, 8], which can be viewed as an odd-dimensional version of the Kähler-Ricci flow. More generally, it turns out that the equations for the limiting theories of M Theory all involve a cohomological constraint. In the absence of a $\partial\bar\partial$-Lemma, the most effective way of implementing a constraint is to start from an initial data satisfying it, and let the data evolve by a geometric flow preserving the constraint. In this paper, we shall provide a survey of the geometric flows which have appeared recently in unified string theories, focusing on the PDE aspects. The original formulation of these flows varies a great deal: some flows arise as flows of (2,2)-forms on a 3-dimensional complex manifold (§2 and §4), others as flows of real 3-forms on a symplectic 6-manifold (§3), others yet as parabolic reductions of 11d supergravity (§5), or as flows of spinors (§6). But they all induce flows of the Riemannian structure $g_{jk}$, and we can try and compare them from there. Typically they are then of the form

$\partial_t g_{jk} = -2 e^u R_{jk} + \cdots$   (1.1)

and involve couplings with the flows of other fields. Here $\cdots$ represents some lower order terms, which may also involve reparametrizations and/or gauge transformations. The appearance of the Ricci tensor is not surprising in a reparametrization-invariant flow of second order. Thus the distinctive features of the flow are rather to be found in the lower order terms and/or in the additional fields to which the metric $g_{jk}$ is coupled. It should also be kept in mind that, while cohomological constraints and any underlying special geometry may not be evident from the above flows of the metrics $g_{jk}$, they are there and of considerable importance, even though they are usually much weaker than in Kähler geometry. In this sense, the geometric flows of unified string theories can be viewed as intermediates between the Ricci flow and the Kähler-Ricci flow. Geometric flows from unified string theories have begun to be investigated systematically only a few years ago, and very little is as yet known about them. While we can build on the many powerful techniques and insights developed over the years for the study of classic flows such as the Ricci flow [63, 64] and the Yang-Mills flow [113, 26], the more recent flows invariably lead to new difficulties of their own. In all likelihood, they will require new methods which may well prove to be valuable in their own right for the theory of partial differential equations. There is much then to understand and discover, and this whole area should be very fertile for research. With this in mind, we have strived in this survey to present the new flows in as mathematically self-contained a way as possible, so that readers can begin studying them as pure PDE questions if they choose. The physics context for these equations is especially rich and complex, and we have to content ourselves with providing some references to the literature, which is immense.

2 The Type IIB flow

We begin with some general remarks, which apply not just to this section, but to the other sections as well. String theories and M Theory [2, 69, 116, 109] are theories of extended objects, but for our purposes, we only consider limits where they reduce to theories of point particles, in dimensions 10 and 11 respectively. These limits are supergravity theories, in the sense that they incorporate gravity and are supersymmetric, i.e., they admit a symmetry exchanging tensor fields ("bosons") and spinor fields ("fermions"). Gravity is described by a metric which is a symmetric rank-two tensor, and its supersymmetric partner is described by a field called the gravitino field, which is a one-form valued in the bundle of spinors. We are interested in supersymmetric vacua of these theories, that is, space-times whose fields satisfy the field equations, and whose gravitino field remains invariant under supersymmetry transformations. This last requirement reduces to the existence of a spinor field which is covariantly constant under a connection with possibly non-vanishing flux (see §7 for some more details). It is one of the key new requirements which set string theories and M Theory apart from previous gauge theories and gravity theories. Except for 11d supergravity, all the equations considered in the present survey can be traced back to the requirement of existence of a covariantly constant spinor. This implies in particular a reduced holonomy. For phenomenological reasons, it is desirable to compactify space-time to $M^{3,1} \times X$, where $M^{3,1}$ is Minkowski or a maximally symmetric 4-dimensional space-time, and X is an internal space of dimension either 6 or 7, depending on whether we compactify string theories or 11-dimensional supergravity. It is also desirable to preserve supersymmetry upon compactification. Thus our considerations can apply to the original space-time or, more often, to the internal space X. The equations for string theories have been extensively explored in the literature [54, 107, 52, 51, 53, 62, 14, 61], notably from the point of view of reduced holonomy and G-structures. In the case of a six-dimensional internal space X, we are in the case of SU(3) structures, for which there is a detailed classification of the possible fluxes [43, 16], and this has proved to be a powerful tool for a classification of possible supersymmetric vacua. Our goal is to identify the equations which exhibit the typical features of each of the string theories. For this, we rely on the work of Tseng and Yau [110, 111, 112], who have proposed some representative models. For the Type IIB string (compactified to a six-dimensional internal space X), the setting is then as follows. Let X be a compact 3-dimensional complex manifold, equipped with a nowhere vanishing holomorphic (3,0)-form $\Omega$.
There is much then to understand and discover, and this whole area should be very fertile for research. With this in mind, we have strived in this survey to present the new flows in as mathematically self-contained a way as possible, so that readers can begin studying them as pure PDE questions if they choose. The physics context for these equations is especially rich and complex, and we have to content ourselves with providing some references to the literature, which is immense.

2 The Type IIB flow

We begin with some general remarks, which apply not just to this section, but to the other sections as well. String theories and M Theory [2, 69, 116, 109] are theories of extended objects, but for our purposes, we only consider limits where they reduce to theories of point particles, in dimensions 10 and 11 respectively. These limits are supergravity theories, in the sense that they incorporate gravity and are supersymmetric, i.e., they admit a symmetry exchanging tensor fields ("bosons") and spinor fields ("fermions"). Gravity is described by a metric which is a symmetric rank-two tensor, and its supersymmetric partner is described by a field called the gravitino field, which is a one-form valued in the bundle of spinors. We are interested in supersymmetric vacua of these theories, that is, space-times whose fields satisfy the field equations, and whose gravitino field remains invariant under supersymmetry transformations. This last requirement reduces to the existence of a spinor field which is covariantly constant under a connection with possibly non-vanishing flux (see §6 for some more details). It is one of the key new requirements which set string theories and M Theory apart from previous gauge theories and gravity theories. Except for 11d supergravity, all the equations considered in the present survey can be traced back to the requirement of existence of a covariantly constant spinor. This implies in particular a reduced holonomy.

For phenomenological reasons, it is desirable to compactify space-time to $M^{3,1}\times X$, where $M^{3,1}$ is Minkowski or a maximally symmetric 4-dimensional space-time, and X is an internal space of dimension either 6 or 7, depending on whether we compactify string theories or 11-dimensional supergravity. It is also desirable to preserve supersymmetry upon compactification. Thus our considerations can apply to the original space-time or, more often, the internal space X. The equations for string theories have been extensively explored in the literature [54, 107, 52, 51, 53, 62, 14, 61], notably from the point of view of reduced holonomy and G structures. In the case of a six-dimensional internal space X, we are then in the case of SU(3) structures, for which there is a detailed classification of what the flux can be [43, 16], and this has proved to be a powerful tool for a classification of possible supersymmetric vacua.

Our goal is to identify the equations which exhibit the typical features of each of the string theories. For this, we rely on the work of Tseng and Yau [110, 111, 112], who have proposed some representative models. For the Type IIB string (compactified to a six-dimensional internal space X), the setting is then as follows. Let X be a compact 3-dimensional complex manifold, equipped with a nowhere vanishing holomorphic (3,0)-form Ω.
A supersymmetric compactification of the Type IIB string with an O5/D5 brane source can be described by a Hermitian metric ω satisfying the following system of equations
$$d\omega^2 = 0, \qquad i\partial\bar\partial\big(|\Omega|_\omega^{-2}\,\omega\big) = \rho_B \qquad (2.1)$$
where $|\Omega|_\omega$ is defined by $i\,\Omega\wedge\bar\Omega = |\Omega|_\omega^2\,\omega^3$, and $\rho_B$ is the Poincaré dual of a linear combination of holomorphic 2-cycles.

We observe that the equation $d\omega^2 = 0$ is reminiscent of a Kähler condition, but it is a lot weaker. Metrics satisfying this condition are sometimes called "balanced" in the mathematics literature [84]. In particular, there is no known analogue of the $\partial\bar\partial$-Lemma which can provide an effective parametrization of all forms ω with $\omega^2$ closed. In the absence of such a parametrization, we try and implement the balanced condition by introducing instead the following flow of (2,2)-forms
$$\partial_t(\omega^2) = i\partial\bar\partial\big(|\Omega|_\omega^{-2}\,\omega\big) - \rho_B \qquad (2.2)$$
with any initial data $\omega_0$ satisfying $d\omega_0^2 = 0$. The point is that the right-hand side of the flow is a closed form, so the closedness of the form $\omega^2$ is preserved for all time, and formally, if the flow exists for all time and converges, then its stationary points provide solutions to the full Type IIB system. It is shown in [88] that the leading terms in this equation satisfy Hamilton's conditions for the short-time existence of the flow for smooth data. We also observe that, in this form, the Type IIB flow admits a ready generalization to any compact complex manifold of dimension m,
$$\partial_t(\omega^{m-1}) = i\partial\bar\partial\big(|\Omega|_\omega^{-2}\,\omega^{m-2}\big) - \rho_B \qquad (2.3)$$
which may be of interest in its own right. Here $\rho_B$ is now the Poincaré dual of a linear combination of holomorphic (m-1)-cycles. It is also not difficult to rewrite this Type IIB flow as a flow of (1,1)-forms [89].

2.1 The case $\rho_B = 0$

We describe now some precise results about the above flow in the case of no source, $\rho_B = 0$. In this case, even for general complex dimension m, there is a remarkable change of variables that allows the dependence of the Type IIB flow on the form Ω to be completely eliminated from the flow, and to reside only in the initial data [32]. Indeed, set
$$\eta = |\Omega|_\omega\,\omega. \qquad (2.4)$$
Then it is shown in [32] that the Type IIB flow can be expressed in terms of η as
$$i^{-1}\partial_t\eta_{\bar kj} = \frac{1}{m-1}\Big(-R_{\bar kj} + \frac12\,T_{j\bar mp}\,\bar T_{\bar k}{}^{\bar mp}\Big) \qquad (2.5)$$
where $R_{\bar kj}$ is the Chern-Ricci tensor and T is the torsion tensor, for initial data $\eta_0$ satisfying the conformally balanced condition $d(|\Omega|_\eta\,\eta^{m-1}) = 0$. The above right hand side actually coincides with a specific flow within the family of flows introduced in [102] as natural generalizations in Hermitian geometry of the Kähler-Ricci flow. Remarkably, it is the specific flow identified by Ustinovskiy [114] as preserving the Griffiths- and Nakano-positivity of the tangent bundle. We note however that, in our context, it is essential that the initial data be conformally balanced.
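To spell out the constraint-preservation just described: along (2.2), both the closedness of $\omega^2$ and its de Rham class evolve in a completely transparent way (a one-line computation, using only that $\rho_B$ is closed and that $i\partial\bar\partial$-exact forms are d-exact):
$$\partial_t\,d(\omega^2) = d\Big(i\partial\bar\partial\big(|\Omega|_\omega^{-2}\omega\big) - \rho_B\Big) = 0, \qquad \frac{d}{dt}\,[\omega^2]_{dR} = -[\rho_B]_{dR},$$
so $d\omega^2 = 0$ propagates from the initial data, while the balanced class moves linearly in the direction of the source.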
The following Shi-type estimates were established in [34]:

Theorem 1 [34] Assume that the Type IIB flow exists on the time interval [0,T), and that
$$K_1^{-1}\,\omega(0) \le \omega(t) \le K_1\,\omega(0), \qquad |d\,\log|\Omega|\,| \le K_2. \qquad (2.6)$$
Then there exist a constant C and $0 < \alpha < 1$, depending on $K_1$, $K_2$, Ψ, and the initial data, but not on T, so that
$$\|g\|_{C^{2+\alpha,\,1+\frac{\alpha}{2}}(X\times[0,T))} \le C. \qquad (2.7)$$
In particular, the flow then extends across T.

Traditionally, the $C^3$ estimates for the Monge-Ampère equation and the Kähler-Ricci flow relied on a complex version of an identity going back to Calabi. Here, no potential for a Kähler metric is available, but we can rely instead on the method of [93], which gives estimates for the connection $\nabla h\,h^{-1}$, where h is the relative endomorphism between a reference metric and the evolving metric. Applications of the above theorem to the formation of singularities for the Type IIB flow can be found in [76].

The Type IIB flow should provide a powerful probe for the geometry of the underlying complex manifold X. As a first test, we can show that it is certainly capable of giving a new proof, with new estimates, of Yau's fundamental theorem [117] on the existence of Ricci-flat Kähler metrics on Kähler manifolds with $c_1(X) = 0$. Thus assume that $\hat\chi$ is a Kähler metric on X, and take as initial data for the Type IIB flow a metric ω(0) satisfying $|\Omega|_{\omega(0)}\,\omega(0)^{n-1} = \hat\chi^{n-1}$. Then it can be readily shown [91] that the Type IIB flow exists for all time t > 0, and as $t\to\infty$, the solution ω(t) converges smoothly to a Kähler, Ricci-flat metric $\omega_\infty$. The point in this case is that the (1,1)-form χ defined by
$$|\Omega|_{\omega(t)}\,\omega(t)^{n-1} = \chi(t)^{n-1} \qquad (2.8)$$
will remain Kähler along the flow. If we set $\chi(t) = \hat\chi + i\partial\bar\partial\varphi(t)$, we can rewrite the Type IIB flow as a parabolic Monge-Ampère flow in φ,
$$\partial_t\varphi = e^{-f}\,\frac{(\hat\chi + i\partial\bar\partial\varphi)^n}{\hat\chi^n}, \qquad \varphi(x,0) = 0 \qquad (2.9)$$
for a suitable given function f, subject to the plurisubharmonicity condition $\hat\chi + i\partial\bar\partial\varphi > 0$.

Note that, unlike the Kähler-Ricci flow or the inverse Monge-Ampère flow [17, 12], this flow is not concave in $D^2\varphi$. Because of this, we need a new way of obtaining $C^2$ estimates. It turns out that a new test function, different from the classical one used by Yau and Aubin for the $C^2$ estimates for the Monge-Ampère equation, is needed:
$$G(z,t) = \log\,\mathrm{Tr}\,h - A\Big(\varphi - \frac{1}{[\hat\chi^n]}\int_X\varphi\,\hat\chi^n\Big) + B\Big[\frac{(\hat\chi + i\partial\bar\partial\varphi)^n}{\hat\chi^n}\Big]^2.$$
The new additional term is the square of the Monge-Ampère determinant. It has proved to be useful in $C^2$ estimates for other non-linear parabolic flows as well [95, 100], and confirms that tools developed for the flows of unified string theories will also be of interest for other PDE's. It is not difficult to see that the only stationary points of the Type IIB flow with no source are Ricci-flat Kähler metrics. Thus the flow can help determine whether the conformally balanced manifold X is actually Kähler.
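Returning for a moment to (2.9): although the flow is not concave in $D^2\varphi$, its parabolicity is elementary to check. Linearizing at a potential φ with $\chi_\varphi := \hat\chi + i\partial\bar\partial\varphi > 0$, and using the standard variation formula $\delta\log\det\chi_\varphi = \chi_\varphi^{j\bar k}\,\partial_j\partial_{\bar k}(\delta\varphi)$, gives
$$\delta(\partial_t\varphi) = e^{-f}\,\frac{\chi_\varphi^n}{\hat\chi^n}\;\chi_\varphi^{j\bar k}\,\partial_j\partial_{\bar k}(\delta\varphi),$$
a second-order operator with positive-definite symbol as long as $\chi_\varphi > 0$; so short-time existence is not the issue, and the difficulty lies entirely in the a priori estimates.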
Regarding the question of whether X is Kähler, the following stability theorem due to Bedulli and Vezzoni [5] is of particular interest:

Theorem 2 [5] If (X, Ω, χ) is a Kähler manifold with χ a Kähler Ricci-flat metric, then for any ε > 0, there exists δ > 0 so that, if the initial data $\omega_0$ satisfies
$$\big|\,|\Omega|_{\omega_0}\,\omega_0^{n-1} - \chi^{n-1}\big| < \delta \qquad (2.10)$$
then the Type IIB flow exists for all time, $\big|\,|\Omega|_{\omega(t)}\,\omega(t)^{n-1} - \chi^{n-1}\big| < \epsilon$, and $|\Omega|_\omega\,\omega^{n-1}$ converges to $\omega_\infty^{n-1}$, with $\omega_\infty$ an astheno-Kähler metric. If $\omega_0$ is conformally balanced, then $\omega_\infty$ is Kähler Ricci-flat.

Note that this stability theorem allows in particular for more general initial data than conformally Kähler ones. Its proof is based on a general method of the authors [3, 4] for proving the stability of flows whose short-time existence follows from the Hamilton-Nash-Moser implicit function theorem.

2.2 The case with source $\rho_B$

We discuss briefly the case of non-vanishing sources $\rho_B$. If the source $\rho_B$ is a smooth closed (2,2)-form, then the short-time existence of the flow follows from the same arguments as in the case of no source, and Theorem 1 still applies. The more interesting case is for the source to have singularities. In general, the theory of geometric flows and canonical metrics is much less developed in the case with singularities than in the smooth case. For example, only very recently has the existence of Hermitian-Einstein metrics been established on normal Kähler spaces in the sense of Grauert, and this required new analytic tools [13]. In the case of the Type IIB flow, practically nothing is known in the case of singular $\rho_B$. If $\rho_B$ is given, we may naively try to consider the Type IIB flow with a source given by a regularization of $\rho_B$, and determine the effects of removing the regularization. But if $\rho_B$ has Dirac-type singularities, we may hope that such Dirac-type singularities would arise dynamically from the flow, when it does not admit smooth critical points. As we shall see in the discussion of the Type IIA flow with source $\rho_A$ below, there does not appear to be any other viable way for such a source to develop.

3 The Type IIA flow

Again, we focus on a representative case of supersymmetric compactification of the Type IIA string with brane sources, as proposed by Tseng and Yau [110, 111, 112]. Let X be this time a real 6-dimensional compact symplectic manifold, in the sense that it admits a closed, non-degenerate 2-form ω (but there may be no compatible complex structure, so it may not be a Kähler form). Given a real 3-form φ, let $J_\varphi$ be the almost-complex structure associated some 20 years ago to φ by Hitchin [68]. Then the Type IIA equation is the following system of equations for a real, positive, and primitive 3-form φ,
$$dd^\Lambda\big(|\varphi|^2\star\varphi\big) = \rho_A, \qquad d\varphi = 0. \qquad (3.1)$$
Here φ positive means that the quadratic form $g_\varphi(X,X) = \omega(X, J_\varphi X)$ is a metric, $\rho_A$ is the Poincaré dual of a linear combination of special Lagrangians, $d^\Lambda = d\Lambda - \Lambda d$ is the symplectic adjoint of d, and ⋆ is the Hodge star operator of the metric $g_\varphi$. As in the Type IIB flow, a seemingly innocuous condition, but in practice rather difficult to implement, is $d\varphi = 0$.
And just as in the previous case of the Type IIB flow, this suggests looking for solutions of the above Type IIA system as stationary points of the following flow
$$\partial_t\varphi = dd^\Lambda\big(|\varphi|^2\star\varphi\big) - \rho_A, \qquad (3.2)$$
with an initial value $\varphi_0$ which is closed and primitive. Since the right hand side is closed and primitive, formally the closedness and primitiveness of the initial condition will be preserved, and the stationary points of the flow are automatically solutions of the Type IIA system.

3.1 Type IIA geometry

Hitchin's construction of an almost-complex structure $J_\varphi$ from any non-degenerate 3-form φ on a 6-dimensional manifold M was a pointwise and purely algebraic construction. The above setting for the Type IIA string requires several specific additional properties: the form φ has to be closed, and the manifold M is also equipped with a symplectic structure, so it makes sense to require that φ be primitive. These few specific requirements actually lead to a very rich geometric structure, which is called Type IIA geometry, and whose key features are the following.

Set $\Omega_\varphi = \varphi + iJ_\varphi\varphi$. Then we have
$$D^{0,1}\Omega_\varphi = 0 \qquad (3.3)$$
so that $\Omega_\varphi$ is formally a holomorphic form with respect to the almost-complex structure $J_\varphi$. Next, the Nijenhuis tensor has only 6 independent components, and in this sense, the almost-complex structure is not far from integrable. But the most important property of Type IIA geometries is that they have SU(3) holonomy with respect to a specific connection, namely the projected Levi-Civita connection $\tilde D$ with respect to the almost-complex structure $J_\varphi$ and the metric
$$\tilde g_\varphi = |\varphi|^2\,g_\varphi. \qquad (3.4)$$
More specifically, recall that given an almost-complex structure J with Nijenhuis tensor N, any metric g(X,Y) satisfying the compatibility condition g(JU, JV) = g(U,V) determines a 2-form ω(U,V) = g(U, JV) and a line of unitary connections $D^t$, called the Gauduchon line [50], preserving J and g(X,Y), given explicitly by
$$D^t_i X^m = \nabla_i X^m + g^{mk}\big(-N_{ijk} - V_{ijk} + t\,U_{ijk}\big)X^j \qquad (3.5)$$
where ∇ is the Levi-Civita connection of the metric g(X,Y), and the tensors $U_{ijk}$ and $V_{ijk}$ are defined by
$$U^m{}_{bc} = \frac14\Big((d^c\omega)^m{}_{bc} + (d^c\omega)^m{}_{jk}J^j{}_b J^k{}_c\Big), \qquad V^m{}_{bc} = \frac14\Big((d^c\omega)^m{}_{bc} - (d^c\omega)^m{}_{jk}J^j{}_b J^k{}_c\Big). \qquad (3.6)$$
Here $d^c\omega = -Jd\omega$, given explicitly in components by $(d^c\omega)_{abc} = -J^k{}_c J^j{}_b J^i{}_a\big(\partial_i\omega_{jk} + \partial_j\omega_{ki} + \partial_k\omega_{ij}\big)$. Note that all three tensors N, U, V are anti-symmetric in the last two indices. When dω = 0, the Gauduchon line reduces to a single connection. But in general, several connections on it are noteworthy: t = 1 corresponds to the Chern connection; t = −1 is the Bismut connection, characterized by the property that its torsion is a 3-form; and t = 0 defines the so-called projected Levi-Civita connection.

We return now to our discussion of Type IIA geometries on a symplectic 6-dimensional manifold M. Their most basic property is given in the following theorem [35]:

Theorem 3 [35] Let $(J_\varphi, \varphi, \omega)$ be a Type IIA geometry, and let $\tilde g_\varphi$ be the rescaled metric defined by (3.4). Let $\tilde\omega = |\varphi|^2\omega$ be the corresponding rescaled 2-form.
Then we have
$$D^0\Big(\frac{\Omega_\varphi}{|\Omega_\varphi|}\Big) = 0 \qquad (3.7)$$
where $D^0$ is the projected Levi-Civita connection of the metric $\tilde g_\varphi$, with respect to the almost-complex structure $J_\varphi$. Thus Type IIA geometries have SU(3) holonomy, but with respect to the connection $D^0$.

We observe that the original symplectic form ω is closed, so the Gauduchon line with respect to the metric $g_\varphi$ reduces to a single connection. However, because of the rescaling, the form $\tilde\omega$ is not closed, and we do have a whole line of unitary connections with respect to the metric $\tilde g_\varphi$. The above theorem singles out the projected Levi-Civita connection as the natural connection for Type IIA geometries.

3.2 The case of $\rho_A = 0$

The case of no source is already geometrically interesting and presents many new analytic challenges. First, we have to establish the short-time existence of the flow:

Theorem 4 [35] The Type IIA flow with $\rho_A = 0$ always exists for at least a short time, for any initial data φ which is closed, positive, and primitive. The properties of closedness and primitiveness are preserved along the flow.

The new difficulty here is that, upon reparametrization by a vector field V to lift the degeneracy inherent to the weak parabolicity of the Type IIA flow, the symplectic form ω also evolves in time. Thus we obtain a coupled flow of (φ, ω) which remains only weakly parabolic. The remedy turns out to be to introduce the following regularized coupled flow of (φ, ω),
$$\partial_t\varphi = d\Lambda d\big(|\varphi|^2 J_\varphi\varphi\big) - B\,dJd\big(|\varphi|^2\,\Lambda(J_\varphi\varphi)\big) + d(\iota_V\varphi), \qquad \partial_t\omega = d(\iota_V\omega) \qquad (3.8)$$
for a fixed positive constant B. Applying Hamilton's Nash-Moser arguments, this flow can be shown to admit short-time existence. Furthermore, it can be shown to preserve the primitiveness of φ. But for primitive data, the regularization term with coefficient B vanishes, and thus the solution of the reparametrized, regularized flow satisfies the original Type IIA flow.

Further analysis of the Type IIA flow requires working out the corresponding flow of metrics:

Theorem 5 [35] Assume that φ flows by the Type IIA flow. There are two ways of expressing the flow of metrics induced by the Type IIA flow.

(a) Set $\check g_\varphi = |\varphi|^{-2} g_\varphi$ and $u = \log|\varphi|^2$. Then
$$\partial_t(\check g_\varphi)_{ij} = e^{\frac32 u}\Big\{-2\check R_{ij} + \frac32 u_i u_j - u_{Ji}u_{Jj} + 4u_k\big(N_{ikj} + N_{jki}\big) - 4N_{kpi}N_{pkj} + \frac12\big(|du|^2_{\check g} + |N|^2_{\check g}\big)\check g_{ij}\Big\}. \qquad (3.9)$$
Note that the volume form of $g_\varphi$ is known and equal to $\omega^3/3!$, so u is determined by $\check g_\varphi$, and the above flow is self-contained.

(b) Alternatively, we can consider the flow of the pair $(g_\varphi, u)$. If φ flows by the Type IIA flow, then the pair flows by
$$\partial_t g_{ij} = e^u\Big[-2R_{ij} + 2\nabla_i\nabla_j u - 4(N^2_-)_{ij} + u_i u_j - u_a u_b J^a{}_i J^b{}_j + 4u_p\big(N_i{}^p{}_j + N_j{}^p{}_i\big)\Big],$$
$$\partial_t u = e^u\Delta u + e^u\big(2|\nabla u|^2 + |N|^2\big). \qquad (3.10)$$

An important feature of the Type IIA flow which distinguishes it from many known flows of almost-complex structures, e.g. [78], is that the norm of the Nijenhuis tensor itself obeys a well-behaved flow along the Type IIA flow,
$$(\partial_t - e^u\Delta)|N|^2 = e^u\Big[-2|\nabla N|^2 + (\nabla^2 u)\star N^2 + Rm\star N^2 + N\star\nabla N\star(N + \nabla u) + |N|^4 + N^2\star\nabla u + N^2\star(\nabla u)^2\Big]. \qquad (3.11)$$
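A useful elementary consequence of (3.11) is worth noting, under the kind of bounds that appear in the Shi-type estimates below: every term on the right-hand side other than $-2|\nabla N|^2$ carries at least one factor of N, so after absorbing the terms linear in $\nabla N$ into the good gradient term by Cauchy-Schwarz, the maximum principle gives an inequality of the schematic form
$$\frac{d}{dt}\,\sup_X|N|^2 \;\le\; C\,\sup_X|N|^2\,\big(1 + \sup_X|N|^2\big),$$
with C depending on bounds for $e^u$, Rm, $\nabla^2 u$, and $\nabla u$. In particular, $|N|^2$ cannot blow up instantaneously, consistent with the fact that the stationary points are integrable.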
From here, we can establish Shi-type estimates which show that the singularities of the Type IIA flow can be traced to only a finite number of geometric quantities blowing up:

Theorem 6 [35] Assume that the Type IIA flow exists on some time interval [0,T) and that
$$|\log|\varphi|\,| + |Rm| \le A \qquad (3.12)$$
for some constant A. Then for any α, we have
$$|\nabla^\alpha\varphi| \le C(A, \alpha, T, \varphi(0)). \qquad (3.13)$$
In particular, if [0,T) is the maximum time interval of existence, we must have
$$\lim_{t\to T}\sup_X\big(|\log|\varphi|\,| + |Rm|\big) = \infty. \qquad (3.14)$$
Singularity models resulting from these estimates are considered in [77].

The stationary points of the Type IIA flow correspond to integrable complex structures, and the resulting metrics are Ricci-flat Kähler metrics. As shown in [34], the Type IIA flow is dynamically stable in the following sense:

Theorem 7 [34] Let $(\bar\varphi, \bar\omega)$ be a Ricci-flat Type IIA structure on a compact oriented 6-manifold M. Then there exists $\epsilon_0 > 0$ with the following property. For any Type IIA structure $(\varphi_0, \omega_0)$ satisfying
$$|\omega_0 - \bar\omega|_{W^{10,2}} + |\varphi_0 - \bar\varphi|_{W^{10,2}} < \epsilon_0, \qquad (3.15)$$
the Type IIA flow with initial value $(\varphi_0, \omega_0)$ exists for all time, and converges in $C^\infty$ to a Ricci-flat Type IIA structure $(\varphi_\infty, \omega_0)$.

As an immediate consequence, if a compact symplectic 6d manifold $(M, \bar\omega)$ admits a compatible integrable complex structure, then any sufficiently close symplectic form will also admit a compatible integrable complex structure.

More generally, it can be seen in explicit examples that the Type IIA flow will evolve towards almost-complex structures which are optimal in a suitable sense. First we consider the Type IIA flow on the nilmanifolds constructed by de Bartolomeis and Tomassini [24]. There the basic structure is a nilpotent Lie group; some natural ansätze for φ are preserved, and reduce the Type IIA flow to a system of ODE's which can be solved explicitly. We find in this way examples where the flow exists for all time, but the limit $\lim_{t\to\infty}J_t$ does not exist, although
$$\lim_{t\to\infty}|N|^2 = 0. \qquad (3.16)$$
Next, another instructive example is provided by the symplectic half-flat structures on the solvmanifold introduced by Tomassini and Vezzoni [108]. Some natural ansätze for φ are again preserved by the Type IIA flow, which reduces then to a system of ODE's which can be solved exactly. We find then the following interesting phenomena:

• For any initial data, the flow for φ develops singularities in finite time. However, the limits of $J_\varphi$ and $g_\varphi$ continue to exist.

• For certain initial data, φ is a self-expander, while J is stationary, and in fact a critical point of the energy functional of Blair-Ianus [6] and Le-Wang [78].

• For other initial data, as t approaches the maximum time of existence T, the limit $\lim_{t\to T}J$ exists, and is a harmonic almost-complex structure, and in particular a minimizer of $|N|^2$. See also [99].

3.3 The case with source $\rho_A$

We had mentioned before, in the context of the Type IIB flow, the comparative lack of development of the theory of geometric flows with singular data compared to the theory for smooth data.
The situation is worse for the Type IIA flow, because the source $\rho_A$ is required to satisfy a condition that depends on the form φ, namely that it be the Poincaré dual of a linear combination of special Lagrangians. The notion of Lagrangian depends only on the given symplectic structure, but the notion of special Lagrangian [65] depends on the metric, which is $g_\varphi$ and hence depends on φ. Thus, if we approach the Type IIA equation with branes by flow methods, we have to let both $\rho_A$ and the 3-form φ evolve. This would already be a challenging and interesting problem in itself. Alternatively, when $\rho_A$ has Dirac-type singularities, we can hope for a scenario where $\rho_A$ arises dynamically as the set of singularities, in the case where the flow with no source does not admit smooth stationary points. From this point of view, the problem becomes reminiscent of a free boundary problem. It would certainly be a very exciting development for both theories if methods from the theory of free boundary problems could be brought successfully to bear on the PDE's from string theory.

3.4 Related flows in symplectic geometry

For most of the geometric flows in geometric analysis which are weakly parabolic but not strictly parabolic, e.g. the Ricci flow, the Yang-Mills flow, spinor flows, the Type IIA flow, etc., we can still establish short-time existence for arbitrary data. However, there are some flows which pose a challenge even in this regard. Some notable examples are the following.

3.4.1 The Hitchin functional and the corresponding gradient flow

On any compact 6-dimensional manifold M, Hitchin introduced the functional on 3-forms
$$H(\varphi) = \frac12\int_M\varphi\wedge(J_\varphi\varphi)$$
and showed that $\delta H = 0$ is equivalent to $d(J_\varphi\varphi) = 0$, and hence to $J_\varphi$ being integrable. If M is equipped with a symplectic form and φ is primitive, then we get a compatible metric, and we can consider the gradient flow of H(φ). It turns out to be given by
$$\partial_t\varphi = dd^\dagger\varphi.$$
Recall that Bryant's Laplacian flow on 7-manifolds is given exactly by this formula, so the Hitchin gradient flow can be viewed as the 6-manifold version of Bryant's 7-manifold Laplacian flow. Remarkably, the Hitchin gradient flow can be cast in turn as a limiting version of the Type IIA flow
$$\partial_t\varphi = d\Lambda d(\star\varphi).$$

3.4.2 The dual Ricci flow

This flow was introduced by T. Fei and the author [33] as the symplectic dual of the Ricci flow on a complex manifold. It is a flow of positive and primitive 3-forms on a 6-dimensional symplectic manifold M, which can be worked out to be given by
$$\partial_t\varphi = d\Lambda d\big(\log|\varphi|\,\star\varphi\big),$$
which is similar to the Hitchin gradient flow, but this time with a factor $\log|\varphi|$ inserted.

3.4.3 The regularized gradient Hitchin flow

On the other hand, it is not hard to see that, for any ε > 0, the modified Type IIA flow defined by
$$\partial_t\varphi = d\Lambda d\big(|\varphi|^\epsilon\star\varphi\big)$$
does admit short-time existence, and it is tempting to expect that, from the possibly renormalized solutions $\varphi_\epsilon$ of these flows, we can extract a finite regularized limit, which can serve as a solution to either the gradient Hitchin flow or the dual Ricci flow.
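One heuristic reason why both limits are plausible from the same family is the elementary expansion $|\varphi|^\epsilon = e^{\epsilon\log|\varphi|} = 1 + \epsilon\log|\varphi| + O(\epsilon^2)$, which, inserted into the regularized flow, gives
$$d\Lambda d\big(|\varphi|^\epsilon\star\varphi\big) = d\Lambda d(\star\varphi) + \epsilon\,d\Lambda d\big(\log|\varphi|\,\star\varphi\big) + O(\epsilon^2):$$
at leading order one sees the gradient Hitchin flow, while the first-order correction in ε is exactly the right-hand side of the dual Ricci flow.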
Analytically, the situation bears some similarity to the Uhlenbeck-Yau proof [113] of the Donaldson-Uhlenbeck-Yau theorem on the equivalence between the existence of Hermitian-Einstein metrics and Mumford stability. There they prove that for small ε > 0, the regularized equation
$$\Lambda F_\epsilon - \mu I = -\epsilon\,\log h_\epsilon$$
always admits a solution. Here $H_\epsilon$ is the unknown metric on a holomorphic vector bundle $E\to(X, g_{\bar kj})$, $F_\epsilon = -\bar\partial(H_\epsilon^{-1}\partial H_\epsilon)$ is its curvature, and $h_\epsilon = H_0^{-1}H_\epsilon$ is the corresponding endomorphism, with $H_0$ a fixed reference metric. The issue then is whether $h_\epsilon$ is uniformly bounded as $\epsilon\to 0$, in which case one can let ε go to 0 and obtain a solution of the Hermitian-Einstein equation
$$\Lambda F - \mu I = 0,$$
or else, from any subsequence $h_\epsilon$ with $\mathrm{Tr}\,h_\epsilon\to\infty$, construct a destabilizing sheaf violating the Mumford stability condition.

In this light, it may be the case that a given initial condition may have to satisfy some stability condition for the flow to admit even short-time existence, and the underlying manifold may have to satisfy a stability condition for short-time existence for any initial data. Of course, these considerations are at the moment quite preliminary, and there is as yet no other strong supporting evidence for this scenario. Nevertheless, the above flows are geometrically very appealing, and this scenario would seem to be entirely new. So the uncertainty around the approach may be well worth the possible pay-off.

4 Flows from the Heterotic string

Shortly after the identification by Candelas, Horowitz, Strominger, and Witten [10] of Calabi-Yau 3-folds as supersymmetric vacua for the heterotic string, Hull [70] and Strominger [103] independently proposed a more general, still supersymmetric, solution allowing metrics with torsion, as long as a key anomaly cancellation is still implemented. Their equations can be described as follows.

4.1 The Hull-Strominger system

Let M be a compact complex 3-manifold, equipped with a nowhere vanishing holomorphic (3,0)-form Ω, and let $E\to M$ be a holomorphic vector bundle with $c_1(E) = 0$. Then the Hull-Strominger system is the following system of equations for a Hermitian metric $\omega = ig_{\bar kj}\,dz^j\wedge d\bar z^k$ on $T^{1,0}(M)$ and a Hermitian metric $H_{\bar\alpha\beta}$ on E,
$$i\partial\bar\partial\omega - \frac{\alpha'}{4}\,\mathrm{Tr}\big(Rm\wedge Rm - F\wedge F\big) = 0, \qquad d\big(\|\Omega\|_\omega\,\omega^2\big) = 0, \qquad (4.1)$$
$$F^{2,0} = F^{0,2} = 0, \qquad g^{j\bar k}F_{\bar kj} = 0. \qquad (4.2)$$
Here Rm and F are the curvatures of ω and H, viewed as 2-forms valued in $\mathrm{End}(T^{1,0}(M))$ and $\mathrm{End}(E)$ respectively. The second equation in the first line above was originally written as an equation on the torsion of the metric ω. Its reformulation as $d(\|\Omega\|_\omega\,\omega^2) = 0$ is an important insight of Li and Yau [80], which makes apparent its cohomological meaning. As mentioned earlier, in the mathematical literature, metrics ω on an m-dimensional manifold satisfying $d\omega^{m-1} = 0$ are said to be balanced [84], and we shall say that a metric ω is conformally balanced if it is balanced up to a conformal transformation. Note a distinguishing feature of the equations for the heterotic string, compared to the other string theories, namely the presence of a gauge field F and the Green-Schwarz anomaly cancellation requirement, as expressed in the first line of the above system.
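It may be worth recording the well-known cohomological consequence of this anomaly cancellation: since $i\partial\bar\partial\omega$ is exact, taking classes in the first equation of (4.1) forces
$$\big[\mathrm{Tr}(Rm\wedge Rm)\big] = \big[\mathrm{Tr}(F\wedge F)\big],$$
i.e., up to universal normalizations, the second Chern characters of $T^{1,0}(M)$ and E must agree (in Bott-Chern cohomology); this is the basic topological constraint on the pairs (M, E) which can admit solutions.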
When $\alpha' = 0$, this equation reduces to the Type IIB equation with no source. Calabi-Yau manifolds can be recognized as special solutions of the Hull-Strominger system with $E = T^{1,0}(M)$, $\omega = H$ Kähler, and $\mathrm{Ric}(\omega) = 0$. Solutions with singularities were given in [103], but it was Fu and Yau [44, 45] who found the first smooth non-perturbative, non-Kähler solutions, given by torus fibrations over K3 surfaces, and who brought to light the deep geometric structure of Hull-Strominger systems. Many other solutions have been found since [40, 46, 21]. More generally, it has been suggested by Yau that the Hull-Strominger systems should be viewed as defining canonical metrics for non-Kähler complex manifolds with $c_1(M) = 0$, and the first steps applying them towards Reid's fantasy have very recently been carried out in [20, 18].

4.2 Anomaly flows

As in the previous cases of the equations of the Type IIA and Type IIB strings, a major difficulty in solving the Hull-Strominger system in general is to implement the conformally balanced constraint. The same strategy of using a flow that preserves this constraint was advocated in [88, 89], and the following flow of pairs $(\omega, H_{\bar\alpha\beta})$ was introduced,
$$\partial_t\big(|\Omega|_\omega\,\omega^2\big) = i\partial\bar\partial\omega - \frac{\alpha'}{4}\,\mathrm{Tr}\big(Rm\wedge Rm - F\wedge F\big), \qquad H^{-1}\partial_t H = \frac{\omega^2\wedge F}{\omega^3} \qquad (4.3)$$
with initial data $(\omega_0, H_0)$ satisfying the conformally balanced condition $d(|\Omega|_{\omega_0}\,\omega_0^2) = 0$. The flow was called the Anomaly flow, as its stationary points satisfy the Green-Schwarz anomaly cancellation equation. For fixed ω, the flow in H is the well-known Donaldson heat flow, so the novelty in the Anomaly flow lies in the flow for ω, and in its coupling to the Donaldson heat flow. The short-time existence of the Anomaly flow can be established [88] using Hamilton's Nash-Moser method for weakly parabolic systems [63]. Using the Anomaly flow, we can recapture [87] the solutions of the Hull-Strominger system found by Fu and Yau [44, 45]. The indications are that it will be a powerful tool in the future, and its behavior on unimodular Lie groups and on nilmanifolds has been worked out in [92] and [98] respectively.

4.3 The Hull-Strominger-Ivanov system

Initially, it was believed that the system of equations proposed by Hull and Strominger would automatically imply the equations of motion, but it was subsequently shown by Ivanov and others [71, 38, 39] that this was not the case. Rather, for the field equations to be satisfied, the curvature Rm in the anomaly cancellation equation must be the curvature $R^\nabla$ of an SU(3) holomorphic instanton ∇ on the tangent bundle of M; it is not required to be the curvature of the Chern unitary connection defined by ω. We shall refer to the Hull-Strominger system, with the modifications suggested by Ivanov et al., as the Hull-Strominger-Ivanov, or HSI, system. A formulation of this system which may lead to existence and stability conditions for solutions has been proposed by Garcia-Fernandez and Molina [49], and a Futaki invariant was introduced in [48]. A formulation in terms of generalized geometry has been proposed in [47]. Analytically, the HSI system seems also considerably simpler than the original Hull-Strominger system.
Indeed, once we fix the holomorphic structure of the SU(3) instanton, its curvature $R^\nabla$ can be viewed as known, and the anomaly cancellation equation no longer contains a quadratic term in the Riemannian curvature Rm of ω. It is straightforward to adapt the geometric flows introduced earlier for the Hull-Strominger system to the HSI system, resulting in an even simpler flow.

5 Flows from 11-dimensional supergravity

As mentioned above, it is believed that string theories can themselves be unified into M Theory, a theory which has not been fully constructed as of today, and which is mainly understood through its various limits [2, 69, 109, 116]. One of these limits is 11-dimensional supergravity [23], whose tensor fields are actually very simple: they consist of a Lorentz metric $G_{MN}$ and a closed 4-form F = dA, and the action with all spinor fields set to 0 is given by
$$I(G_{MN}, F) = \int d^{11}x\,\sqrt{-G}\,\Big(R - \frac12|F|^2\Big) - \frac16\int A\wedge F\wedge F.$$
The field equations are given by the critical points of the action functional, and are given explicitly by
$$d\star F = \frac12\,F\wedge F, \qquad R_{MN} = \frac{1}{12}\,F_{MPQR}F_N{}^{PQR}.$$
The supersymmetric solutions are the solutions which admit a non-vanishing spinor ξ satisfying
$$D_M\xi \equiv \nabla_M\xi - \frac{1}{288}\,F_{ABCD}\big(\Gamma^{ABCD}{}_M + 8\,\Gamma^{ABC}\delta^D{}_M\big)\xi = 0, \qquad (5.1)$$
i.e. spinors which are covariantly constant with respect to the connection $D_M$, obtained by twisting the Levi-Civita connection by the flux F.

5.1 Early solutions

Some early solutions were found with the ansatz $M^{11} = M^4\times M^7$, where $M^4$ is a Lorentz 4-manifold and $M^7$ a Riemannian manifold, with metrics $g_4$ and $g_7$ respectively. Setting $F = c\,\mathrm{Vol}_4$, where $\mathrm{Vol}_4$ is the volume form on $M^4$, reduces the field equations to
$$(\mathrm{Ric}_4)_{ij} = -\frac{c^2}{3}(g_4)_{ij}, \qquad (\mathrm{Ric}_7)_{ij} = \frac{c^2}{6}(g_7)_{ij},$$
i.e., $M^4$ and $M^7$ are Einstein manifolds with negative and positive scalar curvatures respectively. These are the Freund-Rubin solutions [42], which include $AdS_4\times S^7$. More sophisticated solutions can be found with other ansätze for F, e.g. $F = c\,\mathrm{Vol}_4 + \psi$ for suitable ψ, leading to nearly G2 manifolds (see e.g. [29, 25, 96, 97] and many others).

For us, solutions of particular interest are obtained by setting $M^{11} = M^3\times M^8$, and
$$g_{11} = e^{2A}g_3 + g_8, \qquad F = \mathrm{Vol}_3\wedge df,$$
where $g_3$ is a Lorentz metric on $M^3$, $g_8$ is a Riemannian metric on $M^8$, and (A, f) are smooth functions on $M^8$. The now well-known solution of Duff-Stelle [27, 28] is then obtained by assuming the flatness of $g_3$, the conformal flatness of $g_8$, the radial dependence of A, f, and supersymmetry. In [30], a characterization is provided for when the data $(g_3, g_8, A, f)$ gives rise to a supersymmetric solution:

Theorem 8 [30] The data $(g_3, g_8, A, f)$ is a supersymmetric solution of the 11-dimensional supergravity equations if and only if
(a) $g_3$ is flat;
(b) $\bar g_8 := e^A g_8$ is a Ricci-flat metric admitting covariantly constant spinors with respect to the Levi-Civita connection;
(c) $e^{-3A}$ is a harmonic function on $M^8$ with respect to the metric $\bar g_8$;
(d) $df = \pm\,d(e^{3A})$.

Applying the classic results of S.Y. Cheng and P. Li [15] on lower bounds for Green's functions, together with new complete Kähler Ricci-flat metrics on $\mathbb{C}^4$ constructed by Szekelyhidi [104], Conlon-Rochon [22], and Li [81], we obtain in this manner complete supersymmetric multi-membrane solutions, including many not known before in the physics literature.
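As a simple illustration of items (b)-(d) (this is just the model case, not a new solution): take $\bar g_8$ to be the flat metric on $\mathbb{R}^8\setminus\{0\}$, for which the radial harmonic functions are spanned by 1 and $r^{-6}$, so that
$$e^{-3A} = 1 + \frac{k}{r^6}, \qquad f = \pm\,e^{3A} + \mathrm{const}, \qquad k > 0,$$
recovers the asymptotically flat membrane profile of Duff-Stelle, with the harmonic function singular at the membrane location.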
If we allow solutions of 11d supergravity which are not supersymmetric, then there are many more solutions. For example, it is shown in [30] that the Duff-Stelle solution can naturally be imbedded in a 5-dimensional family of solutions, in which it is the only supersymmetric solution.

5.2 Parabolic dimensional reductions of 11d supergravity

More general solutions of 11-dimensional supergravity will ultimately have to be found by solving partial differential equations. As a starting point, we shall consider the following flow of pairs $(G_{MN}, F)$,
$$\partial_t F = -\Box_G F - \frac{\sigma}{2}\,d\star(F\wedge F), \qquad \partial_t G_{MN} = -2R_{MN} + F^2_{MN} - \frac13|F|^2 G_{MN} \qquad (5.2)$$
where $\Box_G$ is the Hodge Laplacian on 4-forms with respect to $G_{MN}$, $F^2_{MN} = \frac{1}{3!}F_{MABC}F_N{}^{ABC}$, and we have introduced a parameter $\sigma = \pm 1$, depending on whether the metric $G_{MN}$ is respectively Lorentz or Euclidian. Clearly the solutions of the field equations of 11d supergravity must be stationary points of this flow. Conversely, the stationary points of the flow can readily be seen to satisfy the field equations if $M^{11}$ is Euclidian, compact, and $H^3(M^{11}, \mathbb{R}) = 0$. More generally, this remains the case if $M^{11}$ is assumed to have no non-trivial 3-forms which are both closed and co-closed.

We shall consider two situations in which the above system can be reduced to a parabolic system. The first, immediate situation arises by considering manifolds $M^{11}$ with Euclidian signature, and $\sigma = -1$. Even though 11-dimensional supergravity requires a Lorentz signature, it may be expected that this Riemannian version would prove to be of mathematical interest, just as in the case of the Euclidian Yang-Mills theory. In this case, the following theorem was established in [31]:

Theorem 9 [31] Consider the flow (5.2) of a pair $(G_{MN}, F)$, with $G_{MN}$ a Euclidian metric and F a closed 4-form on an 11d Riemannian manifold $M^{11}$, and $\sigma = -1$. Then we have
(a) The form F remains closed as long as the flow exists;
(b) The flow exists for at least a time interval $[0, T_0)$ with $T_0 > 0$;
(c) If $T < \infty$ is the maximum existence time of the flow, then
$$\limsup_{t\to T^-}\sup_{M^{11}}\big(|Rm| + |F|\big) = \infty. \qquad (5.3)$$

Next, we consider the case of $M^{11}$ of Lorentz signature. In this case, the flow (5.2) will be hyperbolic. However, we shall identify dimensional reductions of the flow where the Lorentz components are static, and the flow reduces to a parabolic flow. Thus we consider space-times and field configurations of the form
$$M^{11} = M^{1,p}\times M^{10-p}, \qquad G = e^{2A}g_{1,p} + g, \qquad F = d\mathrm{Vol}_{g_{1,p}}\wedge\beta + \Psi \qquad (5.4)$$
where β and Ψ are now respectively a (3-p)-form and a 4-form on $M^{10-p}$, both of which are assumed to be closed. The metric $g_{1,p}$ is a Lorentz and Einstein metric on $M^{1,p}$,
$$\mathrm{Ric}(g_{1,p}) = \lambda\,g_{1,p}, \qquad (5.5)$$
A is a scalar function on $M^{10-p}$, and the metric $g = g_{ij}$ is a Riemannian metric on $M^{10-p}$. The dimension p is allowed to take any value between 0 and 10, with β = 0 if $p\ge 4$. For static $g_{1,p}$, the geometry of $M^{11}$ is characterized by the 4-tuple $(g_{ij}, A, \beta, \Psi)$.
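Before writing the flow, it may help to note how this ansatz matches the membrane setting of §5.1; this is a consistency check, with our identification of the fields: for p = 2,
$$M^{11} = M^{1,2}\times M^8, \qquad \beta = df\in\Omega^1(M^8), \qquad \Psi = 0,$$
so that $F = \mathrm{Vol}_3\wedge df$ is recovered, with β indeed a closed (3-p) = 1-form.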
We shall consider the following flow of such 4-tuples:
$$\partial_t g_{ij} = -2R_{ij} + 2(p+1)\big(\nabla_i\nabla_j A + \nabla_i A\,\nabla_j A\big) - e^{-2(p+1)A}\beta^2_{ij} + \Psi^2_{ij} - \frac13\big(|\Psi|^2 - e^{-2(p+1)A}|\beta|^2\big)g_{ij}$$
$$\partial_t A = \Delta A + \frac14(p+1)|\nabla A|^2 - \frac13\,e^{-2(p+1)A}|\beta|^2 - \frac16|\Psi|^2 - \lambda\,e^{-2A}$$
$$\partial_t\beta = -\Box\beta + (-1)^{p+1}(p+1)\,d\star(dA\wedge\star\beta) - (-1)^p\,\frac{p+1}{2}\,e^{(p+1)A}\,dA\wedge\star(\Psi\wedge\Psi) - (-1)^p\,\frac12\,e^{(p+1)A}\,d\star(\Psi\wedge\Psi)$$
$$\partial_t\Psi = -\Box\Psi + (-1)^p(p+1)\,d\star(dA\wedge\Psi) - (p+1)\,e^{-(p+1)A}\,dA\wedge\star(\beta\wedge\Psi) - e^{-(p+1)A}\,d\star(\beta\wedge\Psi) \qquad (5.6)$$
where the norms, the Hodge star operator, and the covariant derivatives are all taken with respect to the metric $g_{ij}$ on $M^{10-p}$. The following theorem was established in [31]:

Theorem 10 [31] Let $M^{11}$ be a Lorentz 11-dimensional manifold, equipped with a metric $G_{MN}$ and a closed 4-form F given by the expression (5.4) in terms of the 4-tuple $(g_{ij}, A, \beta, \Psi)$ on the Riemannian manifold $M^{10-p}$.
(a) If the 4-tuple evolves by the above flow (5.6), then the pair $(G_{MN}, F)$ on $M^{11}$ evolves by the original flow (5.2) with σ = 1.
(b) The forms β, Ψ, and F remain closed along the flow, and the above ansatz is preserved;
(c) The flow exists at least for some time interval $[0, T_0)$ with $T_0 > 0$. If $T < \infty$ is the maximum time of existence of the flow, then
$$\limsup_{t\to T^-}\sup_{M^{10-p}}\big(|Rm| + |A| + |\beta| + |\Psi|\big) = \infty. \qquad (5.7)$$

Similar reductions should be possible for 5-dimensional supergravity. By construction, the stationary points of these flows will satisfy the supergravity field equations. In view of the above stated goal of finding spaces satisfying both supersymmetry and the field equations, it is natural to examine these flows on spaces which are at least known to admit a non-vanishing spinor. The same reductions of G structure that are anticipated to be helpful for the spinor flows of $(g_{ab}, \psi, H, \phi)$ described in §6 below should also be instrumental in the parabolic reduction flows of $(g_{ij}, A, \beta, \Psi)$ described above. It should be very instructive to have a comparison of the two types of flows under this assumption of existence of a non-vanishing spinor field.

6 Flows of spinor fields

While this is not apparent at first sight, all the equations for unified string theories which we have written down so far, except for 11d supergravity, originate from the requirement of (local) supersymmetry. Local supersymmetry can be viewed as an analogue of reparametrization invariance, whose transformations are generated by a spinor field instead of a vector field. Space-time is required to be invariant under such transformations, which reduces to the condition that the generating spinor field be a non-zero field which is covariantly constant with respect to a suitable connection, possibly with flux. Thus the common underlying thread in all the equations which we have written down so far, except for 11d supergravity, is the existence of such a covariantly constant spinor field; the equations themselves reflect the consequences of the existence of this spinor field for the tensor ("bosonic") fields of the theory, while not involving directly the spinor ("fermionic") fields themselves.
This suggests that an alternative, and complementary, approach to the equations of unified string theories would be to examine the conditions under which such a covariantly constant spinor field can exist, as they are necessary for supersymmetry. Such an approach, especially if it were based on geometric flows, may have its own advantages: we have already observed that, in the case of 11d supergravity, this is a natural approach. But perhaps more importantly, it is a fundamental aspect of unified string theories, which we have not yet addressed except in special cases in §3, that they are mutually related by dualities. Such dualities are highly non-trivial, and usually hard to detect in theories which each have their own formulation. It can be hoped that this task would become easier if we restrict to the space of supersymmetric theories, and more specifically to the moduli space of theories admitting a covariantly constant spinor. Such a moduli space and its Kuranishi deformation theory, especially with fluxes, may well be of considerable geometric interest in its own right. We observe that the existence of a covariantly constant spinor is well known in mathematics to be characteristic of reduced holonomy and special geometry (see e.g. Berger [9], Lichnerowicz [81], Joyce [73] and others). An approach using geometric flows had been initiated by Ammann, Weiss, and Witt [1], based on the gradient flow of the energy functional $I(g_{ij}, \psi) = \|\nabla\psi\|^2_{L^2}$, about which we shall say more below. What the physics of unified string theories has done is to infuse a new interest in such questions, with the key additional complication of the possible presence of flux.

6.1 Spinors, connections, and supersymmetry

We begin by recalling the basic set-up for spinor fields. Let M be a compact spin manifold of dimension n, and $S\to M$ the bundle of spinors on M. We always work in local trivializations of the spin bundle S which project onto local trivializations of the bundle of orthonormal frames, given by a choice of frame $\{e_a\} = \{e_a{}^j\}$ in a local coordinate system $\{x^j\}$, $1\le j\le n$. The space of spinors is then given pointwise by a vector space V, so that spinor fields are locally just functions ψ valued in V. A defining property of the vector space V is that it carries a representation of the Clifford algebra, namely the algebra generated by the Dirac matrices $\{\gamma^a\}$. We shall adopt the following convention for Dirac matrices,
$$\gamma^a\gamma^b + \gamma^b\gamma^a = 2\,\delta^{ab}. \qquad (6.1)$$
A basis for the endomorphisms of V is given by the anti-symmetrized products
$$\gamma^{b_1\cdots b_p} = \frac{1}{p!}\sum_\sigma(-1)^{|\sigma|}\,\gamma^{b_{\sigma(1)}}\cdots\gamma^{b_{\sigma(p)}} \qquad (6.2)$$
where $\sigma : (1,\cdots,p)\to(\sigma(1),\cdots,\sigma(p))$ runs over all permutations of $(1,\cdots,p)$, and $|\sigma|$ is its signature. The spin bundle S admits a natural connection ∇, namely the spin connection given by
$$\nabla_j\psi = \partial_j\psi - \frac14\,\omega_{jab}\,\gamma^{ab}\psi \qquad (6.3)$$
where $\omega_{jab}$ is the Levi-Civita connection form.

A key feature of all the limits of M Theory is that they automatically incorporate a supergravity theory, that is, a theory which incorporates gravitation and is supersymmetric. In particular, their spectrum includes the supersymmetric partner of the metric $g_{ij}$, which is a spinor-valued one-form $\chi_i = \{\chi_i{}^\alpha\}\in T^*(M)\otimes S$ called the gravitino field.
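For later reference, we recall the standard curvature identity for the spin connection (6.3), with the overall sign depending on curvature conventions:
$$[\nabla_i, \nabla_j]\,\psi = -\frac14\,R_{ijab}\,\gamma^{ab}\,\psi,$$
so that the existence of a spinor with $\nabla\psi = 0$ forces the algebraic constraint $R_{ijab}\gamma^{ab}\psi = 0$, which is the mechanism behind the reduction of holonomy mentioned above; connections with flux, as introduced next, modify this constraint by torsion terms.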
As mentioned above, the generator of supersymmetry transformations is a spinor field ψ, and in analogy with the generators of diffeomorphisms, which are vector fields $V^j$ and act on metrics by $\delta g_{ij} = \nabla_i V_j + \nabla_j V_i$, supersymmetry transformations act on the gravitino by
$$\delta\chi_i = \nabla_i\psi + \cdots. \qquad (6.4)$$
Here the right hand side is a connection applied to ψ, just as the right hand side for diffeomorphisms is a connection applied to $V^j$, and the dots $\cdots$ indicate contributions to this connection from the other fields in the theory. We shall generically refer to them as "flux" terms. Since the endomorphisms of V are generated by the Clifford algebra, it follows that the flux terms must be given by a polynomial in the Dirac matrices $\gamma^a$, whose coefficients depend on the other fields of the theory,
$$\sum_p\sum_{b_1,\cdots,b_p} H^{(p)}_{Mb_1\cdots b_p}\,\gamma^{b_1\cdots b_p}\,\psi. \qquad (6.5)$$
The cases of particular importance for M Theory are when the field $H^{(p)}_{MN_1\cdots N_p}$ is a (p+1)-form. The case of a 3-form is responsible for the torsion H in the equations for the Type I, Type II, and heterotic strings, and the case of a 4-form results in the field $F_4$ of 11-dimensional supergravity, both discussed earlier.

Thus, for the supergravity theories which we consider and for our purposes, the flux terms can be taken to be of a very specific form. Fix a positive integer p, and let H be a p-form. Then the covariant derivatives on spinors which we consider are of the form
$$\nabla^H_a\psi = \nabla_a\psi + (\lambda|H)_a\,\psi \qquad (6.6)$$
where for each vector $\lambda = (\lambda_1, \lambda_2)\in\mathbb{R}^2$, we have set
$$(\lambda|H)_a\,\psi = \lambda_1\,H_{ab_1\cdots b_{p-1}}\,\gamma^{b_1\cdots b_{p-1}}\psi + \lambda_2\,H_{b_1\cdots b_p}\,\gamma_a{}^{b_1\cdots b_p}\psi, \qquad (6.7)$$
which is a 1-form valued in the space $\mathrm{End}(S)$ of endomorphisms of spinors. Note that the two terms in $(\lambda|H)_a$ are either Hermitian or anti-Hermitian, depending on the parity of p, but not both. We set then $\nabla^H_a\psi = \nabla_a\psi + (\lambda|H)_a\psi$, and we are particularly interested in determining when a compact spin manifold M admits a nowhere vanishing spinor field ψ with $\nabla^H\psi = 0$.

As mentioned earlier, in the special case where H is assumed to be 0, one can consider the gradient flow of the functional
$$I(g_{ij}, \psi) = \int_X|\nabla\psi|^2\,\sqrt{g}\,dx \qquad (6.8)$$
subject to the constraint $|\psi|^2 = 1$. This flow was introduced by Ammann, Weiss, and Witt [1], who established its short-time existence. Shi-type estimates for it were subsequently established by He and Wang [69]. However, the possibility of a non-vanishing flux H leads to considerable new complications, not least due to the fact that connections with fluxes are usually not unitary, so that there is no obvious normalization of ψ that would prevent it from sliding to 0. In [19], T. Collins and the author proposed to address this problem by introducing a dynamic normalization condition
$$e^{-2\phi}|\psi|^2 = 1 \qquad (6.9)$$
and considering flows of the 4-tuple $(g_{ij}, \psi, H, \phi)$. While the arguments apply with some generality, they do depend on relations between the dimension n of the spin manifold M and the rank p of the flux H. In the following we shall assume that 3p = n+1, which corresponds to the cases of 5d and 11d supergravity.
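As a quick check on this numerology, using only the standard field content of these theories: 5d supergravity carries a Maxwell-type flux with p = 2, and 11d supergravity the 4-form flux with p = 4, and indeed
$$n = 5,\ p = 2:\quad 3p = 6 = n+1; \qquad n = 11,\ p = 4:\quad 3p = 12 = n+1.$$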
With this assumption, we consider the following flow of the 4-tuple $(g_{ij}, \psi, H, \phi)$,
$$\partial_t g_{ab} = \frac14\,e^{-2\phi}\,\mathrm{Re}\,Q^H_{\{ab\}} + \frac12\big\langle\nabla^H_{\{a}\psi, \nabla^H_{b\}}\psi\big\rangle - \frac14\,g_{ab}\,|\nabla^H\psi|^2$$
$$\partial_t H = -\Big(d^\dagger dH + d\big(d^\dagger H - c\star(H\wedge H)\big) - L(H)\Big)$$
$$\partial_t\phi = \sum_a e^{-2\phi}\big(\nabla_a - 2e^{-2\phi}\langle h_a\psi, \psi\rangle\big)\Big\{e^{2\phi}\big(\nabla_a\phi + e^{-2\phi}\langle h_a\psi, \psi\rangle\big)\Big\}$$
$$\partial_t\psi = -\sum_a(\nabla^H_a)^\dagger\nabla^H_a\psi + c(t)\,\psi \qquad (6.10)$$
The various quantities entering these formulas are defined as follows: brackets {ab} denote symmetrization with respect to a and b; $h_a = \frac12\big((\lambda|H)_a + (\lambda|H)_a^\dagger\big)$ is the Hermitian part of the operator $(\lambda|H)_a$; c is a given constant; L(H) is the p-form defined by
$$\langle L(H), \beta\rangle = \frac12\,\big\langle c\star(\beta\wedge H + H\wedge\beta),\ d^\dagger H - c\star(H\wedge H)\big\rangle;$$
and the coefficient c(t) and the tensor $Q^H_{ab}$ are defined by
$$c(t) = e^{-2\phi}|\nabla^H\psi|^2 - 2e^{-2\phi}\sum_a\langle h_a\psi, \psi\rangle\big(\nabla_a\phi + e^{-2\phi}\langle h_a\psi, \psi\rangle\big), \qquad Q^H_{ab} = \nabla_c\big\langle\gamma^{ca}\psi, \nabla^H_b\psi\big\rangle.$$

Theorem 11 [19] Consider the flow (6.10) of 4-tuples $(g_{ij}, \psi, H, \phi)$ on a spin manifold M of dimension n, where H is a p-form and 3p = n+1. Then it preserves the normalization condition (6.9). Furthermore,
(a) At any stationary point, the spinor field is covariantly constant with respect to the connection $\nabla^H$, and the flux H satisfies the equations
$$dH = 0, \qquad d^\dagger H = c\star(H\wedge H). \qquad (6.11)$$
(b) The flow (6.10) is weakly parabolic, and it always exists for some positive time interval, for any smooth initial data satisfying the normalization constraint (6.9).
(c) The following Shi-type estimates hold: assume that on the time interval $(0, \tau/A)$ we have the bounds
$$|\nabla\psi|^2,\ |\nabla^2\psi|,\ |H|^2,\ |\nabla H|,\ |Rm| \le A, \qquad \|\phi\|_{L^\infty}\le\hat A \qquad (6.12)$$
for some constant $\hat A$. Then for any integer $q\ge 0$, there is a constant $C_q$ depending only on q, n, $\hat A$, and an upper bound for τ, so that
$$|\nabla^q Rm| + |\nabla^{q+2}\psi| + |\nabla^{q+2}e^{2\phi}| + |\nabla^{q+1}H| \le \frac{C_q\,A}{t^{q/2}} \qquad (6.13)$$
uniformly on $(0, \tau/A)$.

We note that the above flows are suggested by the requirement that, at the stationary points, the flux H satisfy the equations arising from 5d and 11d supergravity theories. A key new observation is that the requirement that ψ be covariantly constant implies that φ must satisfy a specific equation of its own. The choice of the flow for φ in (6.10) is designed so that this equation is satisfied at the stationary points, and so that the whole system is weakly parabolic. It may also be instructive to indicate here an identity relating the form $Q^H_{\{p\ell\}}$ to the Ricci curvature $R_{p\ell}$, generalizing identities due to Ammann, Weiss, Witt [1] and He and Wang [69]:
$$\mathrm{Re}\big(Q^H_{\{p\ell\}}\big) = \frac12\,e^{2\phi}R_{p\ell} + \mathrm{Re}\,\langle\psi, \gamma_\ell\nabla_p\slashed{D}\psi\rangle - \frac12\,\mathrm{Hess}(e^{2\phi})_{p\ell} - \mathrm{Re}\,\langle\slashed{D}\psi, \gamma_\ell\nabla_p\psi\rangle + \nabla_a\,\mathrm{Re}\,\langle\gamma^{a\ell}\psi, (\lambda|H)_p\psi\rangle + 2\,\mathrm{Re}\,\langle\nabla_p\psi, \nabla_\ell\psi\rangle,$$
which relates the flow of the metric $g_{p\ell}$ to the Ricci flow.
The following theorem is more specifically dependent on viewing the spinor flow (6.10) as a coupled system:

Theorem 12 [19] Assume that we have the following estimates
$$|\nabla\psi|^2,\ |\nabla^2\psi|,\ |H|^2,\ |\nabla H|,\ |\nabla^2 H|^{\frac23} \le A, \qquad \|\phi\|_{L^\infty}\le\hat A \qquad (6.14)$$
on some time interval [0, τ). Then there exists a constant C, depending only on A, $\hat A$, n, an upper bound for τ, and
$$\kappa := \int_M|Rm(0)|^q\,dV_{g(0)} + \mathrm{Vol}(M, g(0)) \qquad (6.15)$$
for some $q > \frac{n}{2}$, so that, uniformly on [0, τ), we have
$$|Rm(t)|_{g(t)} \le C. \qquad (6.16)$$

Finally, we would like to mention a different interesting direction, namely parallel spinors on Lorentz manifolds. In particular, it has been shown by Murcia and Shahbazi [86, 87] that the existence of a global parallel spinor on a hyperbolic Lorentz 4-manifold induces a solution of a hyperbolic flow, called the "parallel spinor flow", on a Cauchy hypersurface, and vice versa.

7 Some further developments

We had restricted our discussion to geometric flows motivated by string theories, a distinctive feature of which has been some new underlying special geometry. But there has been considerable progress as well in recent years on flows with more established underlying notions of geometry. One case which has attracted increasing attention is that of G2 flows and flows of balanced metrics. The short-time existence and Shi-type estimates for Bryant's G2 Laplacian flow have been established in [8, 3, 75, 83, 37], solitons were introduced in [41], and the convergence in particular cases was established in [94], building partly on techniques newly introduced in the study of Anomaly flows. Another case is the Kähler-Ricci flow [11], especially on questions motivated by the Analytic Minimal Model Program [101]. New PDE techniques for $L^\infty$ estimates for complex Monge-Ampère equations [60, 57] have allowed new estimates on Green's functions which no longer depend on direct conditions on the Ricci curvature [58]. These in turn have helped answer some long-standing questions on diameter bounds and the Gromov-Hausdorff convergence of the Kähler-Ricci flow, in both the cases of finite-time and long-time solutions [59].

Looking forward, the fact that practically all the flows from unified string theories can be viewed as the Ricci flow coupled to other fields suggests that the development of methods to study such coupled flows should be very valuable. There appear to have been relatively few efforts in this direction, except for couplings to scalar fields and harmonic maps (see e.g. [85, 82, 72, 56] and references therein). On the other hand, a coupled system which has been of great importance in geometry and physics is the Hitchin system [67] on Riemann surfaces. There has been considerable progress recently on the singularities of the Vafa-Witten [115] and the Kapustin-Witten [74] equations, which can be viewed as natural generalizations of Hitchin systems to higher dimensions. See [106, 105]. Similar results for coupled Ricci flows could be truly instrumental for the flows considered here.

Acknowledgements

The author is very grateful to his collaborators, Tristan Collins, Teng Fei, Bin Guo, Sebastien Picard, and Xiangwen Zhang, for joint efforts on the works presented here.
He would like especially to thank Li-Sheng Tseng for describing to him his joint works with Shing-Tung Yau on supersymmetry and cohomology, which in\ufb02uenced him greatly. Parts of the material in this survey have also been presented in lectures at Harvard University, the University of California at Irvine, the University of Torino, the Laboratoire de Mathematiques at Orsay, Hamburg University, the University of Connecticut at Storrs, and at the conference on \u201cComplex Analysis and Dynamics\u201d at Portoroz, Slovenia." + }, + { + "url": "http://arxiv.org/abs/1906.03693v1", + "title": "Geometric Partial Differential Equations from Unified String Theories", + "abstract": "An informal introduction to some new geometric partial differential equations\nmotivated by string theories is provided. Some of these equations are also\ninteresting from the point of view of non-K\\\"ahler geometry and the theory of\nnon-linear partial differential equations. In particular, a survey is given of\njoint works of the author with Teng Fei, Bin Guo, Sebastien Picard, and\nXiangwen Zhang.", + "authors": "Duong H. Phong", + "published": "2019-06-09", + "updated": "2019-06-09", + "primary_cat": "math.AP", + "cats": [ + "math.AP", + "math.CV" + ], + "main_content": "Introduction The laws of nature at its most fundamental level have long been a source of inspiration for the theory of geometric partial di\ufb00erential equations. Each of the equations for the four known forces of nature, namely Maxwell\u2019s equations for electromagnetism, the Yang-Mills equations for the weak and the strong interactions, and Einstein\u2019s equation for gravity, has led to a deep and rich theory, with wide rami\ufb01cations in many branches of mathematics, including topology and geometry. In the mid 1980\u2019s, string theories burst upon the scene as the only consistent candidates for a uni\ufb01ed theory of all forces including gravity [46]. In their simplest form, the equations from string theories unexpectedly brought in complex structures [10], together with a completely new motivation for equations from K\u00a8 ahler geometry which had only been studied before as generalizations of the Uniformization Theorem [90]. This con\ufb02uence of complex geometry with string physics has contributed to some of the most exciting developments in mathematics and theoretical physics in the last few decades, notably mirror symmetry. However, for many reasons, including phenomenology, the moduli-space problem, and non-perturbative e\ufb00ects such as branes and duality [4, 55, 74], it has proven to be useful to consider solutions of string equations which are more general than the simplest one \ufb01rst considered in [10]. A frequent feature of these more general solutions is the incorporation of non-vanishing \ufb02uxes, which correspond on the geometric side to Hermitian connections with non-vanishing torsion. Thus physical considerations lead, perhaps surprisingly, to the realm of non-K\u00a8 ahler geometry. This is a wide open area in mathematics, still largely unexplored, and natural questions originating from theoretical physics should provide a very valuable guidance. 1Contribution to the proceedings of the ICCM 2018 conference in Taipei, Taiwan. Work supported in part by the National Science Foundation under grant DMS-12-66033 and by the Galileo Galilei Institute of Theoretical Physics. 
The equations required for the simplest solutions of string theories had actually already been solved by Yau [90] and by Donaldson [18] and Uhlenbeck-Yau [86] at the time they were proposed in [10]. But much less is known, even at the present time, about the more general equations referred to above. Even ignoring the motivation from physics or geometry, they are quite interesting from the point of view of the theory of non-linear partial differential equations. All this is well appreciated in selected circles, but it may not be as widely known as it should be. The purpose of this lecture is to provide an informal, and necessarily very incomplete, introduction to these equations. The focus will only be on some joint works of the speaker with Teng Fei, Bin Guo, Sebastien Picard, and Xiangwen Zhang, and only some particular equations from 11-dimensional supergravity, the heterotic string, and the Type II strings, also of interest in geometry, will be discussed.

2 Some Aspects of Unified String Theories

Of the four known forces of nature, electromagnetism and the weak and strong interactions are described by gauge fields, Abelian in the case of electromagnetism, and non-Abelian in the case of the other two forces. A gauge field is a connection $A$ on a vector bundle over space-time, the field strength is given by the curvature $F = dA + A\wedge A$, and the field equations are the Yang-Mills equation, coupled with the Bianchi identity:
$$ d^\dagger_A F = 0, \qquad d_A F = 0. \qquad (2.1) $$
Gravity is on the other hand described by general relativity, where the field is a metric $g_{ij}$, and the field equation in vacuum is given by Einstein's equation
$$ R_{ij} = 0, \qquad (2.2) $$
where $R_{ij}$ is the Ricci curvature of the metric $g_{ij}$. While many examples of solutions of the corresponding field equations have been obtained by exploiting a complex structure, the original equations are formulated for a real space-time, which is usually Lorentz, although a Wick rotation to Euclidean signature is of interest as well. Gauge theories are renormalizable while general relativity is not, and thus a quantum theory of gravity does not appear accessible by standard field theory methods. The unification of all the forces of nature, including gravity, into a single, consistent quantum theory is one of the grand dreams of theoretical physics. So far, practically all attempts with just field theories have run into serious difficulties, and since the mid 1980's, the only known viable candidates have been string theories, which are theories of one-dimensional extended objects. There are 5 string theories, known as the Type I string, the Heterotic string with either $SO(32)$ or $E_8\times E_8$ as gauge group, and the Type IIA and Type IIB strings. The Type I theory is a theory of open strings; the other theories are theories of closed strings. They all take place in a 10-dimensional space-time [46]. Since the mid 1990's, it has been realized that they are all related by dualities, and should be viewed as manifestations of a new quantum theory, called M theory, which is approximated at low energies by a classical field theory, namely 11-dimensional supergravity [51, 89, 80, 4, 55, 74]. All these theories incorporate supersymmetry, which is a symmetry pairing bosons (represented by tensor fields) with fermions (represented by spinor fields).
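Before turning to the individual theories, it may be convenient to recall how the vacuum equation (2.2) arises variationally; the following is the standard computation for the Einstein-Hilbert action in dimension $d > 2$, with boundary terms suppressed, included here only for orientation:
$$ \delta\int\sqrt{-G}\,R = \int\sqrt{-G}\,\Big(R_{MN} - \frac{1}{2}R\,G_{MN}\Big)\,\delta G^{MN} = 0 \quad\Longrightarrow\quad R_{MN} - \frac{1}{2}R\,G_{MN} = 0. $$
Tracing with $G^{MN}$ gives $\big(1 - \frac{d}{2}\big)R = 0$, hence $R = 0$ and therefore $R_{MN} = 0$, which is (2.2). The supergravity actions below modify this computation by the contributions of the flux terms, which produce the right hand sides appearing in Section 5.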
2.1 Supergravity theories

It is not possible to give here an adequate description of these theories, which are difficult and still far from being understood. Instead, we shall just focus on a few mathematical implications which play an important role in the geometric partial differential equations that we shall describe in the sequel. First, for our purposes, we do not have to deal with the extended objects themselves. Rather, in the low-energy limit, string theories and M Theory reduce to just field theories of point particles in a 10- or 11-dimensional space-time, and we shall just consider these. Since string theories automatically incorporate gravity and are supersymmetric, their low energy limits are supergravity theories, in the sense that they are field theories which incorporate gravity and are supersymmetric. The incorporation of gravity means that the field content always includes a metric $G_{MN}$, where $M, N$ are space-time indices. The requirement of supersymmetry on a higher-dimensional space-time is a severe constraint, and there are very few supergravity theories. In each case, it suffices to specify the bosonic fields; the fermionic fields and the action are then completely determined by supersymmetry. We can accordingly describe the field theory limits of string theories and M theory as follows [16, 4, 55, 74]:

• 11-dimensional supergravity: The bosonic fields are a metric $G = G_{MN}$ and a 4-form $F = dA_3$ on an 11-dimensional space-time. The action is
$$ I = \int d^{11}x\,\sqrt{-G}\,\Big(R - \frac{1}{2}|F_4|^2\Big) - \frac{1}{6}\int A_3\wedge F_4\wedge F_4 \qquad (2.3) $$
Here $F_4 = dA_3$ is the field strength of the potential $A_3$, and all fermionic fields have been set to 0, as we shall also do systematically below.

• The Type I and Heterotic strings: The bosonic fields are a metric $G_{MN}$, a 2-form $B_{MN}$, a scalar $\Phi$ (from the gravity multiplet) in 10-dimensional space-time, and a vector potential $A_M$ (from the vector multiplet), valued in $SO(32)$ in the case of the Type I string, or in either $SO(32)$ or $E_8\times E_8$ in the case of the Heterotic string. The action is
$$ I = \int d^{10}x\,\sqrt{-G}\,\Big(R - |\nabla\Phi|^2 - e^{-\Phi}|H|^2 - e^{-\Phi/2}\,\mathrm{Tr}[F^2]\Big) \qquad (2.4) $$
where $F$ is the curvature of $A$, and
$$ H = dB - \omega_{CS}(A) + \omega_{CS}(L), \qquad (2.5) $$
where $\omega_{CS}(A)$ is the gauge Chern-Simons form $\mathrm{Tr}\big(A\wedge dA - \frac{2}{3}A\wedge A\wedge A\big)$, and $\omega_{CS}(L)$ is the Lorentz Chern-Simons form.

• The Type IIA and Type IIB strings: the bosonic fields are again the gravity multiplet $(G_{MN}, B_{MN}, \Phi)$ in 10-dimensional space-time, supplemented by odd and even forms $C_{2k+1}$ and $C_{2k}$ respectively, together with self-duality constraints. Since we shall only discuss these theories tangentially in this paper, we omit the description of their actions.

2.2 Supersymmetry

We now say a few words about supersymmetry, which underlies much of what is discussed here. It is one of the main novel considerations beyond the equations suggested earlier by general relativity and gauge theories. Again, it is a deep concept, and we can only touch on a few of its mathematical consequences which motivate the equations we consider later. The key consequence of interest to us is that supersymmetry requires a spinor on space-time, which is covariantly constant with respect to a suitable connection which may have torsion. This arises roughly as follows. Since we are dealing with supergravity theories, the bosonic fields always include a metric $G_{MN}$ and, by supersymmetry, its partner $\chi_M$, a spinor-valued one-form called the gravitino field.
Supersymmetry transformations act on the gravitino field as follows:
$$ \delta\chi_M = D_M\xi \qquad (2.6) $$
where the infinitesimal generator $\xi$ is a spinor field and $D_M$ is a suitable connection on the spin bundle. Note that these transformations can be interpreted as the analogue of the infinitesimal variation of a metric $G_{MN}$ under diffeomorphisms generated by a vector field $V^M$, which is given by $\delta G_{MN} = \nabla_{\{M}V_{N\}}$. Supersymmetry of a field configuration requires that, under a supersymmetry transformation, its gravitino field is unchanged. Thus we need a spinor $\xi$ satisfying $D_M\xi = 0$. Now we examine the possibilities for connections on the spin bundle. For our purposes, a spinor $\psi$ is just a section of a spinor bundle, and a spinor bundle is a vector bundle carrying a representation of the Clifford algebra $\{\gamma^M\}$, with $\gamma^M\gamma^N + \gamma^N\gamma^M = 2G^{MN}$. The simplest connection on spinors is the spin connection
$$ \nabla_M\xi = \partial_M\xi + \frac{1}{2}\omega_{MJN}\gamma^J\gamma^N\xi, \qquad (2.7) $$
where $\omega_{MJN}$ is the Levi-Civita connection. But other connections $D_M$ are possible, and may actually be required by the desired symmetries of the final theory. They necessarily differ from the spin connection by a Clifford-algebra-valued one-form and, expanding in the basis generated by $\gamma$-matrices, they are given by
$$ D_M\xi = \nabla_M\xi + \sum_p\sum_{N_1,\cdots,N_p} H^{(p)}_{MN_1\cdots N_p}\,\gamma^{[N_1}\cdots\gamma^{N_p]}\xi \qquad (2.8) $$
Cases of particular importance for M theory are when the field $H^{(p)}_{MN_1\cdots N_p}$ is a $(p+1)$-form. The case of a 3-form is responsible for the torsion $H$ in the equations for the Type I, Type II, and Heterotic strings, and the case of a 4-form results in the field $F_4$ in 11-dimensional supergravity, both discussed earlier. In summary, we do find that supersymmetry of a configuration requires the existence of a spinor field $\xi$, covariantly constant with respect to a connection $D_M$ involving a torsion field $H$. We observe that the existence of a covariantly constant spinor is well known in mathematics to be characteristic of reduced holonomy and special geometry (see e.g. Berger [8], Lichnerowicz [59], Joyce [53] and others). What unified string theories have provided is supersymmetry as a motivation, as well as the necessity of considering other connections differing from the Levi-Civita connection by a torsion field. More on the consequences of supersymmetry can be found in e.g. [41, 42, 44, 49, 6] and references therein. For phenomenological reasons, it is desirable to compactify space-time to $M^{3,1}\times X$, where $M^{3,1}$ is Minkowski or a maximally symmetric 4-dimensional space-time, and $X$ is an internal space of dimension either 6 or 7, depending on whether we compactify string theories or 11-dimensional supergravity. It is also desirable to preserve supersymmetry upon compactification. The above considerations descend to similar considerations on the internal space $X$. In particular, the existence of covariantly constant spinor fields $\xi$ imposes additional structure on the internal space, such as e.g. a $G_2$ structure when $X$ is 7-dimensional or, when $X$ is 6-dimensional, a complex structure and a holomorphic top-form $\Omega$, constructed from bilinears in $\xi$ as
$$ J_M{}^N = i\,\xi^\dagger\gamma_M\gamma^N\xi, \qquad \Omega_{MNP} = \xi^\dagger\gamma_M\gamma_N\gamma_P\xi. \qquad (2.9) $$
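Before leaving this section, it may help to illustrate (2.8) in the lowest nontrivial case of a 3-form $H$, the one relevant to the Type I, Type II, and Heterotic strings; with a normalization that depends on conventions (the factor $1/8$ below is one common choice, not taken from the text), the twisted connection takes the form
$$ D_M\xi = \nabla_M\xi \pm \frac{1}{8}\,H_{MNP}\,\gamma^N\gamma^P\xi, $$
so that the torsion enters the supersymmetry condition $D_M\xi = 0$ on the same footing as the Levi-Civita part. The 4-form case can be compared with the explicit 11-dimensional connection $D_m$ written out in Section 5 below.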
In this way, through such covariantly constant spinors and their bilinears (2.9), we can trace back to supersymmetry the origin of the appearance of complex geometry in the equations of string theories.

3 Equations from the Heterotic String

We come now to a detailed study of the equations resulting from the heterotic string. Here the constraints discussed earlier for supersymmetric compactifications have been worked out independently by C. Hull [52] and A. Strominger [78], who proposed the following system of equations. Let $X$ be a 3-fold equipped with a holomorphic non-vanishing $(3,0)$-form $\Omega$ and a holomorphic vector bundle $E\to X$ with $c_1(E) = 0$. Then the Hull-Strominger system is the following system for a Hermitian metric $\omega$ on $X$, with curvature $\mathrm{Rm}\in\Lambda^{1,1}\otimes\mathrm{End}(T^{1,0}(X))$, and a Hermitian metric $H_{\bar\alpha\beta}$ on $E$, with curvature $F\in\Lambda^{1,1}\otimes\mathrm{End}(E)$:
$$ i\partial\bar\partial\omega - \frac{\alpha'}{4}\,\mathrm{Tr}(\mathrm{Rm}\wedge\mathrm{Rm} - F\wedge F) = 0, \qquad \omega^2\wedge F = 0, \qquad d(\|\Omega\|_\omega\,\omega^2) = 0. \qquad (3.1) $$
The last equation was written originally in terms of the torsion of $\omega$; the above reformulation is due to Li and Yau [57] and will play a very important role below. The Hull-Strominger system is a generalization of the solution found initially by Candelas, Horowitz, Strominger, and Witten [10], which is obtained by setting $E = T^{1,0}(X)$ (up to a direct sum of flat bundles) and $H_{\bar\alpha\beta} = \omega$. Then the first and third equations reduce to $i\partial\bar\partial\omega = 0$ and $d(\|\Omega\|_\omega\,\omega^2) = 0$, which imply that $\omega$ is Kähler and Ricci-flat. The second equation also reduces to the Ricci-flat condition for $H_{\bar\alpha\beta}$, and so the system is consistent and all equations are satisfied. The emergence of Kähler Ricci-flat metrics, now known as Calabi-Yau metrics following the solution by Yau [90] of the Calabi conjecture, is the key unexpected link with complex geometry which we had mentioned earlier. We note that the second equation is just the Hermitian-Einstein equation, which is the bundle analogue of the Kähler Ricci-flat equation solved by Donaldson [18] and Uhlenbeck-Yau [86].
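It may be worth sketching why the reduced equations above lead to Ricci-flatness; this is a minimal verification, using the normalization $i\Omega\wedge\bar\Omega = \|\Omega\|^2_\omega\,\omega^3$ for the norm of $\Omega$. If $\omega$ is Kähler, then
$$ 0 = d(\|\Omega\|_\omega\,\omega^2) = d\|\Omega\|_\omega\wedge\omega^2, $$
and since wedging 1-forms with $\omega^2$ is injective on a 3-fold, $\|\Omega\|_\omega$ is constant. Writing $i\Omega\wedge\bar\Omega$ locally as $|f|^2$ times a coordinate volume form, with $f$ holomorphic and nonvanishing, we get
$$ \mathrm{Ric}(\omega) = -i\partial\bar\partial\log\omega^3 = i\partial\bar\partial\log\|\Omega\|^2_\omega = 0. $$
That the pair of equations forces $\omega$ to be Kähler in the first place is the more delicate step; see the references cited in Section 3.5 below.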
3.1 The conformally balanced condition

The Hull-Strominger system is a coupled system for a metric $\omega$ on $X$ and a metric $H_{\bar\alpha\beta}$ on a vector bundle $E$. As a first step, we can consider a simplified situation where the metric $H_{\bar\alpha\beta}$ is known, and concentrate on the remaining two equations for $\omega$. One of them, namely the third equation, is a $(2,2)$ cohomological condition, while the other, namely the first equation, is a condition on the curvature of a suitable representative in the cohomology class. These are typical features of canonical metrics in Kähler geometry, which are representatives of a fixed $(1,1)$ cohomology class satisfying e.g. the condition of having constant scalar curvature (see e.g. [19], and for a survey [73]). From this point of view, the Hull-Strominger system can be viewed as providing a new notion of canonical metric in non-Kähler geometry. Conditions of the form of the third equation had been introduced before in the mathematics literature. A metric $\omega$ is said to be balanced in the sense of Michelsohn [63] if $\omega^2$ is closed. In this terminology, the third condition just says that the metric $\|\Omega\|^{1/2}_\omega\,\omega$ is balanced. The balanced condition is quite natural, and even has the advantage over the Kähler condition of being invariant under algebro-geometric modifications [2]. We shall say that $\omega$ itself is conformally balanced. However, while formally similar, a $(2,2)$-form cohomological condition is much less manageable than a Kähler condition. A big difference is the $\partial\bar\partial$-lemma of Kähler geometry. It says that a metric $\omega$ is in the same Kähler class as $\omega_0$ if and only if
$$ \omega = \omega_0 + i\partial\bar\partial\varphi \qquad (3.2) $$
where the potential $\varphi$ is unique up to a harmless additive constant. Any condition on the volume of $\omega$, or a related curvature condition, can then be easily rewritten as a partial differential equation in the potential $\varphi$. An example is the condition of vanishing Ricci curvature, which can be rewritten as a complex Monge-Ampère equation in $\varphi$, and which has provided a particularly efficient way of finding and studying Calabi-Yau metrics. By contrast, the absence of a $\partial\bar\partial$-lemma in non-Kähler geometry has been a major impediment to finding any efficient parametrization of balanced Hermitian metrics $\omega$ with $\omega^2$ in a given $(2,2)$-cohomology class, and hence to solving the Hull-Strominger system.

3.2 Anomaly flows as substitutes for the $\partial\bar\partial$-lemma

We describe now joint works with Teng Fei, Sebastien Picard, and Xiangwen Zhang, based on the central idea that the absence of a $\partial\bar\partial$-lemma for balanced metrics can be bypassed by using a geometric flow. This idea was first applied to the case of the Hull-Strominger system in [64, 65, 66, 70], and has been applied since more broadly to a range of other systems, including systems arising from the Type IIA and Type IIB strings [28, 29]. To be specific, consider the set-up for the Hull-Strominger system, that is, a compact 3-fold $X$ equipped with a nowhere vanishing $(3,0)$-form $\Omega$ and a holomorphic vector bundle $E\to X$. We assume that $c_1(E) = 0$ for notational simplicity. Let $\omega_0$ be a conformally balanced Hermitian metric on $X$, in the sense that $d(\|\Omega\|_{\omega_0}\,\omega_0^2) = 0$. Then the Anomaly flow is defined to be the following flow for a pair $(\omega(t), H_{\bar\alpha\beta}(t))$, where $\omega(t)$ is a Hermitian metric on $X$ and $H_{\bar\alpha\beta}(t)$ a Hermitian metric on $E$:
$$ \partial_t(\|\Omega\|_\omega\,\omega^2) = i\partial\bar\partial\omega - \frac{\alpha'}{4}\,\mathrm{Tr}(\mathrm{Rm}\wedge\mathrm{Rm} - F\wedge F), \qquad H^{-1}\partial_t H = \frac{\omega^2\wedge F}{\omega^3}. \qquad (3.3) $$
Clearly the stationary points satisfy two of the three equations in the Hull-Strominger system. But the key point is that the remaining equation, namely the conformally balanced condition for $\omega$, is automatically satisfied without having to appeal to a $\partial\bar\partial$-lemma. This is because the right hand side of the first equation in (3.3) is closed, by standard Bott-Chern theory, and thus the closedness of $\|\Omega\|_\omega\,\omega^2$ is preserved along the flow. The flow (3.3) admits many natural variants and generalizations, depending on the dimension, and on whether the well-known flow for the metric $H_{\bar\alpha\beta}$ can be decoupled.
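Before describing these variants, it may be worth making the preservation of the conformally balanced condition completely explicit; the following one-line computation, which simply spells out the Bott-Chern argument just cited, is the whole point:
$$ \partial_t\,d\big(\|\Omega\|_\omega\,\omega^2\big) = d\Big(i\partial\bar\partial\omega - \frac{\alpha'}{4}\,\mathrm{Tr}(\mathrm{Rm}\wedge\mathrm{Rm} - F\wedge F)\Big) = 0, $$
since $d\,i\partial\bar\partial\omega = 0$, while the Chern-Weil forms $\mathrm{Tr}(\mathrm{Rm}\wedge\mathrm{Rm})$ and $\mathrm{Tr}(F\wedge F)$ are closed by the Bianchi identity. Hence $d(\|\Omega\|_\omega\,\omega^2)$ vanishes for all $t$ if it vanishes at $t = 0$.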
For our purposes, on complex manifolds $X$ of dimension $m\ge 2$, the variants of (3.3) which we consider fit into the following general flows for the sole metric $\omega$:
$$ \partial_t(\|\Omega\|_\omega\,\omega^{m-1}) = i\partial\bar\partial\omega^{m-2} - \alpha'\Phi \qquad (3.4) $$
where $\Phi = \Phi(\omega, \mathrm{Rm}(\omega), t)$ is a closed $(m-1, m-1)$-form, and the initial data $\omega(0)$ is conformally balanced in the sense that $d(\|\Omega\|_{\omega(0)}\,\omega(0)^{m-1}) = 0$.

3.3 Formulation as a flow of metrics instead of (2,2) forms

While the Anomaly flow arises naturally as a flow of $(m-1, m-1)$-forms, its analysis requires an explicit and equivalent reformulation as a flow of $(1,1)$-forms. This was worked out in [65, 70], using the explicit formulas derived there for the Hodge $\star$ operator. The answer is the following, where we have taken $\Phi = 0$ for notational simplicity, the general case being obtainable in exactly the same way:

Theorem 1 Consider the Anomaly flow with a conformally balanced initial metric on a complex manifold $X$ of dimension $m$. Then the flow can also be expressed as
$$ \partial_t g_{\bar kj} = \frac{1}{(m-1)\|\Omega\|_\omega}\Big\{ -\tilde R_{\bar kj} - \frac{1}{2}T_{\bar kpq}\bar T_j{}^{pq} + T_{\bar kjs}\bar\tau^s + \tau^{\bar r}\bar T_{j\bar k\bar r} + \tau_j\bar\tau_{\bar k} + \frac{1}{2(m-2)}\big(|T|^2 - 2|\tau|^2\big)g_{\bar kj} \Big\} \qquad (3.5) $$
for all $m\ge 3$. Here $\omega = ig_{\bar kj}\,dz^j\wedge d\bar z^k$, $\tilde R_{\bar kj} = g^{p\bar q}R_{\bar qp\bar kj}$ is the Chern-Ricci tensor, $T = i\partial\omega = \frac{1}{2}T_{\bar kjm}\,dz^m\wedge dz^j\wedge d\bar z^k$ is the torsion tensor, and $\tau_\ell = (\Lambda T)_\ell = g^{j\bar k}T_{\bar kj\ell}$. If $|\alpha'\mathrm{Rm}(\omega(0))|$ is sufficiently small, the flow will exist at least for a short time.

We note that, explicitly,
$$ \tilde R_{\bar kj} = -g^{p\bar q}g_{\bar km}\,\partial_{\bar q}\big(g^{m\bar\ell}\partial_p g_{\bar\ell j}\big) = -\Delta g_{\bar kj} + \cdots $$
which shows that the flow is parabolic for $\alpha' = 0$. This parabolicity continues to hold for $\alpha'$ small enough. Note that the other Ricci tensor $R_{\bar kj} = -\partial_j\partial_{\bar k}\log\det g = -g^{p\bar q}\partial_j\partial_{\bar k}g_{\bar qp} + \cdots$ would not have been a good substitute for $\tilde R_{\bar kj}$, since it would not have implied parabolicity. When $m = 3$, the expression between brackets on the right hand side of (3.5) reduces to the simpler expression $-\tilde R_{\bar kj} + g^{s\bar r}g^{p\bar q}T_{\bar qsj}\bar T_{p\bar r\bar k}$, which was the form of the flow originally derived in [65]. That the two expressions actually coincide is a consequence of an identity for the torsion tensor found in [28].

The original formulation of the Anomaly flow, as well as the above formula for $\partial_t g_{\bar kj}$, requires the existence of the nowhere vanishing holomorphic form $\Omega$. Recently, in the case $\Phi = 0$, another formulation was found in which $\Omega$ appears only in the initial data; hence the flow can be generalized to manifolds where such a form $\Omega$ may not exist [28]:

Theorem 2 Let $\eta$ be the Hermitian metric defined by $\eta = \|\Omega\|_\omega\,\omega$.
If $\omega$ evolves by the Anomaly flow (3.4) with $\Phi = 0$ and $\omega(0)$ is conformally balanced in the sense that $d(\|\Omega\|_{\omega(0)}\,\omega(0)^{m-1}) = 0$, then the corresponding metrics $\eta$ evolve according to the following flow
$$ i^{-1}\partial_t\eta = -\frac{1}{m-1}\Big(\tilde R_{\bar kj}(\eta) + \frac{1}{2}T_{\bar kpq}(\eta)\,\bar T_j{}^{pq}(\eta)\Big), \qquad (3.6) $$
and its initial data satisfies $d(\|\Omega\|^2_{\eta(0)}\,\eta(0)^{m-1}) = 0$. In particular, rescaling time by $t\to(m-1)^{-1}t$, the Anomaly flow admits a natural extension to all dimensions.

This formulation allows a generalization of the Anomaly flow to arbitrary initial data, and not just conformally balanced ones. These generalizations fit into the family of generalizations of the Ricci flow introduced in [77]. The flow (3.6) actually coincides with the flow identified in [87] as a flow preserving the Griffiths positivity and the dual Nakano-positivity of the tangent bundle. Such properties hold for the Kähler-Ricci flow [3, 63], and are known to have many important consequences, see e.g. [50, 72].

3.4 The Anomaly flow and the Fu-Yau solution of the Hull-Strominger system

The Anomaly flow method appears to have a lot of potential. Here we shall discuss how it can be applied to give new proofs of two fundamental results in complex geometry, namely the Fu-Yau solution of the Hull-Strominger system, and Yau's solution of the Calabi conjecture. We begin with the Fu-Yau solution. Let $(Y, \hat\omega)$ be a Calabi-Yau surface, equipped with a nowhere vanishing holomorphic form $\Omega_Y$. Let $\omega_1, \omega_2\in H^2(Y, \mathbb{Z})$ satisfy $\omega_1\wedge\hat\omega = \omega_2\wedge\hat\omega = 0$. From this data, building on earlier ideas of Calabi and Eckmann [9], Goldstein and Prokushkin [44] constructed a toric fibration $\pi: X\to Y$, equipped with a $(1,0)$-form $\theta$ on $X$ satisfying $\partial\theta = 0$, $\bar\partial\theta = \pi^*(\omega_1 + i\omega_2)$. Furthermore, the form
$$ \Omega = \sqrt{3}\,\Omega_Y\wedge\theta \qquad (3.7) $$
is a holomorphic nowhere vanishing $(3,0)$-form on $X$, and for any scalar function $u$ on $Y$, the $(1,1)$-form
$$ \omega_u = \pi^*(e^u\hat\omega) + i\theta\wedge\bar\theta \qquad (3.8) $$
is a conformally balanced metric on $X$. In a breakthrough on the Hull-Strominger system, Fu and Yau [36, 37] found the first non-perturbative, non-Kähler solution by showing how the system would descend on these fibrations to a single scalar Monge-Ampère equation. They succeeded in solving this equation, even though it involved new and difficult gradient terms. Using the Anomaly flow, we find [66]:

Theorem 3 Consider the Anomaly flow on the fibration $X\to Y$ constructed above, with initial data $\omega(0) = \pi^*(M\hat\omega) + i\theta\wedge\bar\theta$, where $M$ is a positive constant. Then for a suitable bundle $\pi^*(E_Y)$ on $X$ and a suitable initial Hermitian metric $H_{\bar\alpha\beta}(0)$ (see below), the metrics $\omega(t)$ are of the form $\pi^*(e^u\hat\omega) + i\theta\wedge\bar\theta$. Assuming an integrability condition on the data (which is necessary), there exists $M_0 > 0$, so that for all $M\ge M_0$, the flow exists for all time, and converges to a metric $\omega_\infty$ with $(\omega_\infty, H_{\bar\alpha\beta})$ satisfying the Hull-Strominger system.
We note that, in general, a given elliptic equation admits an infinite number of parabolic flows with it as stationary point. But not all of these may have the right structural properties for long-time existence and convergence. The Anomaly flow was motivated by a completely different, geometric, consideration, namely to preserve the conformally balanced condition. The above theorem on the present case of toric fibrations suggests that it is also well-behaved from the analytic viewpoint. We describe some of the main ideas in the proof of Theorem 3. As in [36, 37], if we choose the bundle $E$ to be the pull-back $\pi^*(E_Y)$ of a bundle $E_Y$ on the base $Y$, then the system can be shown to descend to a system on the base $Y$, for an unknown metric of the form $\hat\omega_u = e^u\hat\omega$. We can take $H$ to be the pull-back of a metric on $E_Y$ which is Hermitian-Einstein with respect to $\hat\omega$, and hence with respect to $\hat\omega_u$ for any $u$. Then the Anomaly flow descends to the following flow on the base $Y$:
$$ \partial_t\hat\omega_u = -\frac{1}{2\|\Omega\|_{\hat\omega_u}}\Big( \frac{R(\hat\omega_u)}{2} - |T(\hat\omega_u)|^2 - \frac{\alpha'}{4}\,\frac{\sigma_2(i\,\mathrm{Ric}_{\hat\omega}(u))}{\hat\omega_u^2} + 2\alpha'\,\frac{i\partial\bar\partial(\|\Omega\|_{\hat\omega_u}\,\rho)}{\hat\omega_u^2} - \frac{2\mu}{\hat\omega_u^2} \Big)\,\hat\omega_u $$
where $\rho$ and $\mu$ are fixed and given forms which depend on the geometric data $(Y, \Omega_Y, \omega_1, \omega_2)$. To start, we assume that the initial data satisfies $|\alpha'\mathrm{Ric}_\omega|\ll 1$, so that the diffusion operator
$$ \Delta_F = F^{p\bar q}\nabla_p\nabla_{\bar q}, \qquad F^{p\bar q} = g^{p\bar q} + \alpha'\|\Omega\|^3_\omega\,\tilde\rho^{p\bar q} - \frac{\alpha'}{2}\big(R\,g^{p\bar q} - R^{p\bar q}\big) $$
is positive definite. The key and difficult step is to prove that this condition is preserved along the flow. This is done through the following careful estimates in terms of the scale $M$:

• Uniform equivalence of the metrics $\omega(t)$: this is equivalent to a uniform estimate for the conformal factor $u$ in $\hat\omega_u = e^u\hat\omega$, and is established by Moser iteration, exploiting the fact that the quantity $\int_X \|\Omega\|_\omega\,\omega^2$ is conserved along the flow. For our purposes, we shall need the following precise version in terms of $M$. Assume that the flow exists for $t\in[0, T)$ and starts with $\hat\omega(0) = M\hat\omega$. Then there exists $M_0$ so that, for $M\ge M_0$, we have
$$ \sup_{X\times[0,T)} e^u \le C_1 M, \qquad \sup_{X\times[0,T)} e^{-u} \le \frac{C_2}{M} \qquad (3.9) $$
where $C_1, C_2$ depend only on $(X, \hat\omega)$, $\mu$, $\rho$, and $\alpha'$.

• Estimates for the torsion: there exists $M_0$ with the following property. If the flow is started with $\omega(0) = M\hat\omega$ and $M\ge M_0$, and if
$$ |\alpha'\mathrm{Ric}_{\hat\omega}| \le 10^{-6} \qquad (3.10) $$
along the flow, then there exists a constant $C_3$ depending only on $Y, \hat\omega, \mu, \rho, \alpha'$ so that
$$ |T|^2 \le \frac{C_3}{M^{4/3}} \ll 1. \qquad (3.11) $$

• Estimates for the curvature: start the flow with $\omega(0) = M\hat\omega$. There exists $M_0 > 1$ such that, for every $M\ge M_0$, if $\|\Omega\|^2\le C_2^2/M^2$ and $|T|^2\le C_3/M^{4/3}$ along the flow, then $|\alpha'\,\mathrm{Ric}_{\hat\omega}|\le M^{-1/2}$.

• Higher order estimates: under similar conditions, we can establish estimates for the higher order derivatives of the torsion and the curvature.
An interesting new technical point is the usefulness of test functions of the form
$$ G = \big(|\alpha'\mathrm{Ric}_{\hat\omega}| + \tau_1\big)\,|\nabla\mathrm{Ric}_{\hat\omega}|^2 + \big(|T|^2 + \tau_2\big)\,|\nabla T|^2 $$

• Closing the loop of estimates: if we start with an initial data satisfying $|\alpha'\mathrm{Ric}_{\hat\omega}|\le 10^{-6}$, the estimates for the torsion imply that $|T|^2\le C_3 M^{-4/3}$ and hence $|\alpha'\mathrm{Ric}_{\hat\omega}|\le M^{-1/2}$. But this implies that for $M\gg 1$, the flow does not leave the region $|\alpha'\mathrm{Ric}_{\hat\omega}|\le 10^{-6}$, and hence all these estimates hold for all time. This implies the existence for all time of the flow, and an additional argument establishes its convergence in $C^\infty$.

We observe that, as anticipated, the techniques discovered in the solution of the Anomaly flow have proven to be useful for other partial differential equations as well [67, 68, 69]. Earlier techniques can be found in [47, 48].

3.5 The Anomaly flow and the Calabi conjecture

Next, we apply the Anomaly flow to obtain a new proof, with new estimates, of Yau's theorem on the existence of Ricci-flat Kähler metrics. For this it suffices to consider the special case with $\alpha' = 0$ in the flow (3.4). It is not difficult to show that conformally balanced metrics $\omega$ which also satisfy the stationary condition of this flow, namely $i\partial\bar\partial\omega = 0$, must be Kähler and Ricci-flat metrics [10, 61, 34, 70]. Thus it suffices to produce an initial data for which the flow (3.4) with $\alpha' = 0$ converges. In [70], we prove:

Theorem 4 Assume that the initial data $\omega(0)$ satisfies $\|\Omega\|_{\omega(0)}\,\omega(0)^{n-1} = \hat\chi^{n-1}$, where $\hat\chi$ is a Kähler metric. Then the flow exists for all time $t > 0$, and as $t\to\infty$, the solution $\omega(t)$ converges smoothly to a Kähler, Ricci-flat metric $\omega_\infty$. If we define the metric $\chi_\infty$ by
$$ \omega_\infty = \|\Omega\|^{-2/(n-2)}_{\chi_\infty}\,\chi_\infty, \qquad (3.12) $$
then $\chi_\infty$ is the unique Kähler Ricci-flat metric in the cohomology class $[\hat\chi]$, and $\|\Omega\|_{\chi_\infty}$ is an explicit constant.

In this case, the Anomaly flow is actually conformally equivalent to a flow of metrics $t\to\chi(t)$ in the Kähler class of $\hat\chi$. Indeed, given a solution $\omega(t)$ of the Anomaly flow, we can define the Hermitian metric $\chi(t)$ by
$$ \|\Omega\|_{\omega(t)}\,\omega^{m-1}(t) = \chi^{m-1}(t). \qquad (3.13) $$
Then the flow (3.4) with $\alpha' = 0$ can be rewritten as
$$ \partial_t\chi^{m-1}(t) = i\partial\bar\partial\big(\|\Omega\|^{-2}_{\chi(t)}\,\chi^{m-2}(t)\big) \qquad (3.14) $$
and hence, upon carrying out the differentiations and assuming that $\chi(t)$ is Kähler,
$$ (m-1)\,\partial_t\chi\wedge\chi^{m-2} = \big(i\partial\bar\partial\|\Omega\|^{-2}_{\chi(t)}\big)\wedge\chi^{m-2}. \qquad (3.15) $$
This last equation is satisfied if $\chi(t)$ flows according to
$$ (m-1)\,\partial_t\chi = i\partial\bar\partial\|\Omega\|^{-2}_{\chi(t)}. \qquad (3.16) $$
But this flow preserves the Kähler condition, and hence all the steps of the previous derivation are justified. In particular, by the uniqueness of solutions of parabolic equations, the solution $\omega(t)$ of (3.4) is indeed given by (3.13) with $\chi(t)$ Kähler.
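Before passing to the potential formulation, it may be worth identifying the stationary points of (3.16) directly; the following short verification is a sketch, using the standard identity $\mathrm{Ric}(\chi) = i\partial\bar\partial\log\|\Omega\|^2_\chi$ for the Chern-Ricci form recalled earlier. At a stationary point, $i\partial\bar\partial\|\Omega\|^{-2}_\chi = 0$; on a compact manifold this forces $\|\Omega\|_\chi$ to be constant, by the maximum principle applied to the pluriharmonic function $\|\Omega\|^{-2}_\chi$, and therefore
$$ \mathrm{Ric}(\chi) = i\partial\bar\partial\log\|\Omega\|^2_\chi = 0, $$
so the stationary points of (3.16) are precisely the Kähler Ricci-flat metrics, as asserted by Theorem 4.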
The flow (3.16) can be written explicitly in terms of potentials. Setting $\chi(t) = \hat\chi + i\partial\bar\partial\varphi(t) > 0$, we find
$$ \partial_t\varphi = e^{-f}\,\frac{(\hat\chi + i\partial\bar\partial\varphi)^n}{\hat\chi^n}, \qquad \varphi(z, 0) = 0 \qquad (3.17) $$
where the scalar function $f\in C^\infty(X, \mathbb{R})$ is defined by $(n-1)e^{-f} = \|\Omega\|^{-2}_{\hat\chi}$. We note that this flow is of Monge-Ampère type, but without the log as in the solution by Kähler-Ricci flow by Cao [11], and without the inverse power of the determinant as in the recent equation proposed by Cao and Keller [12] and Collins, Hisamoto, and Takahashi [14]. Because of this, we need a new way of obtaining $C^2$ estimates. It turns out that the test function
$$ G(z, t) = \log\mathrm{Tr}\,h - A\Big(\varphi - \frac{1}{[\hat\chi^n]}\int_X\varphi\,\hat\chi^n\Big) + B\Big[\frac{(\hat\chi + i\partial\bar\partial\varphi)^n}{\hat\chi^n}\Big]^2 $$
can do the job. We observe that it differs from the standard test function $\hat G(z, t) = \log\mathrm{Tr}\,h - A\varphi$ used in the study of Monge-Ampère equations (see e.g. [71] and references therein) by terms involving the square of the Monge-Ampère determinant. Thus the study of the Anomaly flow leads again to new tools that should be useful in the development of the theory of non-linear partial differential equations.

3.6 Further remarks

While the Anomaly flow has been very successful in the several cases studied so far, its general theory remains to be more fully developed. From the sole fact that it can be viewed as an extension of the Kähler-Ricci flow to the non-Kähler case, the general theory can be expected to be complicated. We restrict ourselves to a few remarks. It may be instructive to draw a closer comparison of the Anomaly flow with the Kähler-Ricci flow. In the Kähler-Ricci flow, the $(1,1)$-cohomology class is determined:
$$ [\omega(t)] = [\omega(0)] - t\,c_1(X). \qquad (3.18) $$
In the Anomaly flow, we have something similar:
$$ [\|\Omega\|_{\omega(t)}\,\omega(t)^2] = [\|\Omega\|_{\omega(0)}\,\omega(0)^2] - t\,\alpha'\big(c_2(X) - c_2(E)\big). \qquad (3.19) $$
However, the $(2,2)$ cohomology class $[\|\Omega\|_\omega\,\omega^2]$ provides much less information than the $(1,1)$-cohomology class $[\omega]$. For example, the volume is an invariant of a $(1,1)$ cohomology class, but not of a $(2,2)$-cohomology class. As an indirect consequence, the maximum time of the Anomaly flow is not determined by cohomology alone and depends on the initial data. In this delicate dependence on the initial data, the Anomaly flow is closer to the Ricci flow than to the Kähler-Ricci flow. The dependence of the long-time behavior of the Anomaly flow on the initial data can be seen explicitly in several examples worked out by Fei, Huang, and Picard [23, 28], including on hyperkähler fibrations over Riemann surfaces. It is shown there that the flow can either exist for all time or terminate in finite time. In the first case, the flow, suitably normalized, can collapse the fibers. But even in this very specific geometric setting, the behavior of the flow has not been worked out for general data. In particular, it has not been worked out for data close to the stationary points found earlier in [24], which are particularly interesting as an infinite family of topologically distinct solutions to the Hull-Strominger system.
That such a family of Calabi-Yau metrics is conjectured, but not yet known, to exist is an illustration of the comparative flexibility of the Hull-Strominger system. A priori, one of the reasons the Anomaly flow may terminate is if $\|\Omega\|_\omega$ goes to $0$ or $\infty$. But at least when $\alpha' = 0$, it has now been shown that $\|\Omega\|_\omega$ always remains bounded along the flow [28]. An essential tool is the monotonicity of dilaton functionals, which had also been considered in [38, 39, 40]. Finally, it would be helpful to work out many more examples. In this context, we note that many solutions of the Hull-Strominger system have been found in many geometric settings, which should be instructive to investigate; see e.g. [31, 32, 33, 38, 39, 40] and references therein.

4 Equations from the Type II A and Type II B strings

Next, we consider supersymmetric compactifications of the Type II B string and of the Type II A string with brane sources, as formulated by Tseng and Yau [83], building on earlier formulations of Grana-Minasian-Petrini-Tomasiello [45], Tomasiello [80], and others. In particular, (subspaces of) linearized solutions have been identified by Tseng and Yau with Bott-Chern and Aeppli cohomologies in the case of Type II B, and with their own symplectic cohomology in the case of Type II A, as well as interpolating cohomologies [84, 85]. Related boundary value problems have been recently studied by Tseng and Wang [82]. The main feature of these equations of concern to us is that they all involve cohomological conditions that are not known to be enforceable by any $\partial\bar\partial$-lemma. Here we shall just point out that the same idea of Anomaly flows can be applied. A more detailed study of the resulting flows will appear elsewhere [29].

4.1 Type II B strings

Let $X$ be a compact 3-dimensional complex manifold, equipped with a nowhere vanishing holomorphic 3-form $\Omega$. Let $\rho_B$ be the Poincaré dual of a linear combination of holomorphic 2-cycles. We look for a Hermitian metric satisfying the following system:
$$ d\omega^2 = 0, \qquad i\partial\bar\partial\big(\|\Omega\|^{-2}_\omega\,\omega\big) = \rho_B $$
where $\|\Omega\|_\omega$ is defined by $i\Omega\wedge\bar\Omega = \|\Omega\|^2_\omega\,\omega^3$. If we set $\eta = \|\Omega\|^{-2}_\omega\,\omega$, this system can be recast in a form similar to the Hull-Strominger system:
$$ d(\|\Omega\|_\eta\,\eta^2) = 0, \qquad i\partial\bar\partial\eta = \rho_B. $$
This system can be approached by the following Anomaly-type flow:
$$ \partial_t(\|\Omega\|_\eta\,\eta^2) = i\partial\bar\partial\eta - \rho_B, \qquad d(\|\Omega\|_{\eta_0}\,\eta_0^2) = 0. \qquad (4.1) $$
Because the right hand side is closed, the closedness of the initial condition is preserved, and the system is solved if the flow converges.

4.2 Type II A strings

Let $X$ be this time a real 6-dimensional symplectic manifold, in the sense that it admits a closed, non-degenerate 2-form $\omega$ (but there may be no compatible complex structure, so it may not be a Kähler form). The equations are now for a complex 3-form $\Omega$ with $\mathrm{Im}\,\Omega = \star\,\mathrm{Re}\,\Omega$, and
$$ d(\mathrm{Re}\,\Omega) = 0, \qquad dd^\Lambda\big(\star\,\|\Omega\|^2\,\mathrm{Re}\,\Omega\big) = \rho_A $$
where $\rho_A$ is the Poincaré dual of a linear combination of special Lagrangians. Here $d^\Lambda = d\Lambda - \Lambda d$ is the symplectic adjoint.
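Returning briefly to the Type II B case, the recasting via the substitution $\eta = \|\Omega\|^{-2}_\omega\,\omega$ is a short computation which may be worth recording (a sketch, using only the defining relation $i\Omega\wedge\bar\Omega = \|\Omega\|^2_\omega\,\omega^3$). Since $i\Omega\wedge\bar\Omega$ does not depend on the metric, $\|\Omega\|^2_\eta\,\eta^3 = \|\Omega\|^2_\omega\,\omega^3$; substituting $\eta^3 = \|\Omega\|^{-6}_\omega\,\omega^3$ gives $\|\Omega\|_\eta = \|\Omega\|^4_\omega$, and therefore
$$ \|\Omega\|_\eta\,\eta^2 = \|\Omega\|^4_\omega\cdot\|\Omega\|^{-4}_\omega\,\omega^2 = \omega^2. $$
Thus $d(\|\Omega\|_\eta\,\eta^2) = 0$ is exactly the balanced condition $d\omega^2 = 0$, while $i\partial\bar\partial\eta = \rho_B$ is the second equation of the original system.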
For the Type II A system, it is likewise natural to introduce an Anomaly-type flow,
$$ \partial_t(\mathrm{Re}\,\Omega) = dd^\Lambda\big(\star\,\|\Omega\|^2\,\mathrm{Re}\,\Omega\big) - \rho_A, \qquad d(\mathrm{Re}\,\Omega_0) = 0, \qquad (4.2) $$
whose stationary points would again solve the desired system.

5 Equations from 11-dimensional Supergravity

It does not appear that the mathematical study of the equations from 11-dimensional supergravity is as extensive as in the heterotic case. Nevertheless, we shall describe joint works with Teng Fei and Bin Guo on some exact solutions and their moduli, on criteria for the construction of supersymmetric compactifications, and on the formulation of some geometric flows which may provide an approach to more general solutions [26, 27]. Recall that the fields of 11-dimensional supergravity are an 11-dimensional Lorentz metric $G_{ij}$ and a 4-form $F = dA$, and the action is given by (2.3). The resulting field equations are
$$ d\star F = \frac{1}{2}F\wedge F, \qquad R_{ij} = \frac{1}{2}(F^2)_{ij} - \frac{1}{6}|F|^2 G_{ij}, $$
where the symmetric 2-tensor $F^2$ is defined by $(F^2)_{ij} = \frac{1}{6}F_{iklm}F_j{}^{klm}$. The supersymmetric solutions are the solutions which admit a spinor $\xi$ satisfying
$$ D_m\xi := \nabla_m\xi - \frac{1}{288}F_{abcd}\big(\Gamma^{abcd}{}_m + 8\,\Gamma^{abc}\delta^d{}_m\big)\xi = 0, $$
i.e. spinors which are covariantly constant with respect to the connection $D_m$, obtained by twisting the Levi-Civita connection with the flux $F$.

5.1 Early solutions

Some early solutions were found with the Ansatz $M^{11} = M^4\times M^7$, where $M^4$ is a Lorentz 4-manifold and $M^7$ a Riemannian manifold, with metrics $g_4$ and $g_7$ respectively. Setting $F = c\,\mathrm{Vol}_4$, where $\mathrm{Vol}_4$ is the volume form on $M^4$, reduces the field equations to
$$ (\mathrm{Ric}_4)_{ij} = -\frac{c^2}{3}(g_4)_{ij}, \qquad (\mathrm{Ric}_7)_{ij} = \frac{c^2}{6}(g_7)_{ij}, $$
i.e., $M^4$ and $M^7$ are Einstein manifolds with negative and positive scalar curvatures respectively. These are the Freund-Rubin solutions, which include $AdS_4\times S^7$ [35]. More sophisticated solutions can be found with other ansätze for $F$, e.g. $F = c\,\mathrm{Vol}_4 + \psi$ for suitable $\psi$, leading to nearly $G_2$ manifolds, as well as many others; see e.g. [22, 75, 76, 17, 20]. For us the special solution of particular interest, following Duff-Stelle [21], is obtained by setting
$$ M^{11} = M^3\times M^8, \qquad g_{11} = e^{2A}g_3 + g_8, \qquad F = \mathrm{Vol}_3\wedge df, $$
where $g_3$ is a Lorentz metric on $M^3$, $g_8$ is a Riemannian metric on $M^8$, and $(A, f)$ are smooth functions on $M^8$. The now well-known solution of Duff-Stelle is then obtained by assuming the flatness of $g_3$, the conformal flatness of $g_8$, the radial dependence of $A$ and $f$, and supersymmetry.

5.2 Other multimembrane solutions

We now discuss a way of finding solutions to 11-dimensional supergravity more systematically [26]. To begin with, we consider solutions given by warped products
$$ M^{11} = M^3\times M^8, \qquad g_{11} = e^{2A}g_3 + g_8, \qquad F = \mathrm{Vol}_3\wedge df, $$
as in the original work of Duff and Stelle. The first result is a complete characterization of such data giving rise to a supersymmetric solution:

Theorem 5 The data $(g_3, g_8, A, f)$ is a supersymmetric solution to the 11-dimensional supergravity equations if and only if
(a) $g_3$ is flat;
(b) $\bar g_8 := e^A g_8$ is a Ricci-flat metric admitting covariantly constant spinors with respect to the Levi-Civita connection;
(c) $e^{-3A}$ is a harmonic function on $(M^8, g_8)$ with respect to the metric $\bar g_8$;
(d) $df = \pm\,d(e^{3A})$.
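A minimal illustration of Theorem 5 may be useful; the following example, with $\bar g_8$ taken to be the flat metric, is meant only for orientation, and the specific profile below is an assumption of this sketch rather than a statement from the text. On $\mathbb{R}^8$, harmonic functions with isolated singularities decay like $|x|^{-6}$, so one may take
$$ e^{-3A(x)} = 1 + \sum_{i=1}^N \frac{Q_i}{|x - x_i|^6}, \qquad Q_i > 0, $$
which is harmonic away from the centers $x_i$; conditions (a)-(d) then determine $g_8 = e^{-A}\bar g_8$ and $f = \pm e^{3A}$ up to a constant, producing a multimembrane configuration. The single-center case $N = 1$ is the radial profile of the Duff-Stelle solution recalled above.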
Applying the classic results of S.Y. Cheng, P. Li, and S.T. Yau [13, 58] on lower bounds for Green's functions, as well as new constructions of complete Kähler Ricci-flat metrics on $\mathbb{C}^4$ by Szekelyhidi [79], Conlon-Rochon [15], and Li [59], we obtain in this manner the complete supersymmetric solution for this ansatz, which includes many solutions not available before in the physics literature. Next, we find indications that 11-dimensional supergravity may have some integrable structure. Indeed, we find that the solution found by Duff-Stelle can be imbedded into a five-parameter family of solutions, of which the only supersymmetric one is the Duff-Stelle solution itself. More precisely [26]:

Theorem 6 There is a 5-parameter family of solutions $(g_3, g_8, A, f)$ of the field equations of 11-dimensional supergravity. In fact, $g_3$ is Einstein and one of the parameters is its scalar curvature, a second parameter is a global scale parameter for $g_8$, and the three remaining parameters are constants of integration for the following 3rd order ODE, from which the rest of the Ansatz can be determined:
$$ \frac{d^3v}{dt^3} + 7v\,\frac{d^2v}{dt^2} + 14\Big(\frac{dv}{dt}\Big)^2 + 2\frac{dv}{dt}\big(17v^2 - 60\big) + 12\big(v^2 - 4\big)\big(v^2 - 6\big) = 0. \qquad (5.1) $$
Actually, many special solutions of this equation can be written down explicitly, and it may deserve further investigation from the pure ODE viewpoint.

5.3 Parabolic reductions of 11-dimensional supergravity

More general solutions of 11-dimensional supergravity will ultimately have to be found by solving partial differential equations. These equations will be hyperbolic, because of the Lorentz signature of $M^{11}$, but as a start, we can try to identify the subcases where the Lorentz components are known, and deal only with elliptic equations. Thus we consider space-times and field configurations of the form
$$ M^{11} = M^{1,p}\times M^{10-p}, \qquad G = e^{2A}g_{1,p} + g, \qquad F = d\mathrm{Vol}_g\wedge\beta + \Psi \qquad (5.2) $$
where $\beta$ and $\Psi$ are now respectively a 1-form and a 4-form on $M^{10-p}$, both closed [27]:

Theorem 7 There exist general parabolic flows of the configuration $(g(t), A(t), \beta(t), \Psi(t))$ (which can be written down explicitly) with the following properties:
(a) The forms $\beta$ and $\Psi$ remain closed along the flow, and the above ansatz is preserved;
(b) The corresponding configuration $(G, F)$ on $M^{11}$ evolves in time by the flow
$$ \partial_t G_{MN} = -2R_{MN} + F^2_{MN} - \frac{1}{3}|F|^2 G_{MN}, \qquad \partial_t F = -\square F - \frac{1}{2}d\star(F\wedge F), $$
whose stationary points are (assuming a cohomological condition) solutions of 11-dimensional supergravity;
(c) If $T < \infty$ is the maximum time of existence of the flow, then
$$ \limsup_{t\to T^-}\ \sup_{M^{10-p}}\big(|\mathrm{Rm}| + |A| + |\beta| + |\Psi|\big) = \infty. $$

The constraint of supersymmetry has not yet been enforced on these reductions, and this is an important avenue for further investigations. We note in this context the wealth of results for supersymmetric solutions of supergravity theories in dimensions 5 and 6 [41, 42, 44, 49, 6]. In a different vein, interesting related flows of balanced metrics or $G_2$ structures have been proposed in [5, 7, 54, 60, 30]. Yet another line of investigation is that of static solutions with symmetry, pioneered in [88] for Einstein's equations in 4 dimensions, and more recently in [1] for 5-dimensional minimal supergravity.

Acknowledgements

The author would like to thank the organizers of the ICCM 2018 in Taipei, Taiwan, for their invitation to speak there.
Part of this material was also presented at the Johns Hopkins conference in honor of B. Shi\ufb00man, at a Colloquium at the University of Connecticut, Storrs, and at the conference in honor of S.T. Yau at Harvard University in May 2019. The author would like to thank the Galileo Galilei Institute for Theoretical Physics and INFN for hospitality and partial support during the workshop \u201cString Theory from a worldsheet perspective\u201d where part of this paper was written. He would also like to thank Professor Rodolfo Russo and Professor Li-Sheng Tseng for very stimulating conversations and correspondence." + }, + { + "url": "http://arxiv.org/abs/1806.11235v2", + "title": "New curvature flows in complex geometry", + "abstract": "This is a survey of some of the recent developments on the geometric and\nanalytic aspects of the Anomaly flow. It is a flow of $(2,2)$-forms on a\n$3$-fold which was originally motivated by string theory and the need to\npreserve the conformally balanced property of a Hermitian metric in the absence\nof a $\\partial\\bar\\partial$-Lemma. It has revealed itself since to be a\nremarkable higher order extension of the Ricci flow. It has also led to several\nother curvature flows which may be interesting from the point of view of both\nnon-K\\\"ahler geometry and the theory of non-linear partial differential\nequations.", + "authors": "Duong H. Phong, Sebastien Picard, Xiangwen Zhang", + "published": "2018-06-29", + "updated": "2018-07-09", + "primary_cat": "math.DG", + "cats": [ + "math.DG", + "math.AP", + "math.CV" + ], + "main_content": "Introduction The search for canonical metrics is at the interface of several particularly active \ufb01elds in mathematics. The theory of non-linear partial di\ufb00erential equations certainly plays a major role, since a canonical metric is typically de\ufb01ned by a curvature condition, and hence is a non-linear system of second order equations in the metric. The curvature condition should optimize the metric in some sense, which suggests a variational principle, or a link with the \ufb01eld equation of a physics theory. The existence of a canonical metric, or of a minimizer for the variational problem, is usually tied to some deep geometric properties of the underlying space, such as stability in algebraic geometry. This interaction between the theory of partial di\ufb00erential equations, theoretical physics, and algebraic geometry has been particularly fertile for complex, and especially K\u00a8 ahler geometry. There, the complex and the Riemannian structures \ufb01t seamlessly, and the search for Hermitian-YangMills connections [29, 97], K\u00a8 ahler-Einstein metrics [7, 101, 22], and K\u00a8 ahler metrics with constant scalar curvature (see [30, 19] for recent advances and [84] for a survey) have motivated many exciting developments in the last 50 years. However, conditions such as Yang-Mills connections, or metrics of constant scalar or Ricci curvature are all linear in the curvature form. Recent advances in theoretical physics, especially string theory, suggest that the notion of canonical metrics should be widened to include conditions which involve its higher powers. This is in view of the Green-Schwarz anomaly cancellation mechanism [50], which involves the square of the curvature, and also of string-theoretic corrections to the Einstein-Hilbert and the Yang-Mills actions. 
Since the curvature form with respect to the Chern connection of the Hermitian metric is a $(1,1)$-form (valued in the space of endomorphisms of the tangent space or a vector bundle), equations involving its $p$-th power for $p > 1$ will be equations for $(p,p)$-forms. (Footnote: Contribution to the proceedings of the Conference celebrating "50 Years of the Journal of Differential Geometry", Harvard University, April 28-May 2, 2017. Work supported in part by the National Science Foundation Grants DMS-12-66033 and DMS-1605968, and the Simons Collaboration Grant-523313.) But the Kähler condition is a condition on $(1,1)$-forms, so we expect to have to replace it by conditions on $(p,p)$-forms, such as Michelsohn's balanced condition [68], which will usually be weaker. Because of these accumulated difficulties, systems involving powers of the curvature might seem inaccessible, but for the breakthrough by Fu and Yau [43, 44], who found some special solutions by PDE methods to one such system, namely the Hull-Strominger system [56, 57, 90]. The Fu-Yau solution also revealed that the system has a lot of structure and may perhaps be amenable to a more systematic approach.

If we have to implement a condition which is weaker than the Kähler condition, we face a very serious obstacle, namely the lack of a $\partial\bar\partial$-Lemma in non-Kähler geometry. Perhaps surprisingly, in some cases such as the Hull-Strominger system, this can be circumvented by introducing a suitable geometric flow, which is a flow of $(2,2)$-forms on a 3-dimensional complex manifold [75]. Even more surprising is that the resulting flow, called the Anomaly flow, turns out to be a generalization of the Ricci flow, albeit with corrections due to the metric not being Kähler and to the higher powers of the curvature tensor [78]. Besides its original motivation as a parabolic approach to solving the Hull-Strominger system, the Anomaly flow admits many variants and generalizations, which can potentially be useful in addressing some classic questions of non-Kähler geometry. While many extensions of the Ricci and related flows have been introduced over the years in the literature [8, 10, 11, 47, 61, 64, 65, 88, 89, 94], the ones arising from the Anomaly flow seem all new. In fact, while for many of the existing generalizations only short-time existence theorems and/or derivative estimates are available at the present time, the long-time behavior and convergence of the Anomaly-related flows can be established in a number of highly nontrivial cases. This can perhaps be attributed to their natural handling of the conformally balanced condition, and also argues for their interest from the point of view of the theory of non-linear partial differential equations. The purpose of the present paper is to present an introduction to the Anomaly and related flows, and to survey some of what is known. Their study has just begun, and many basic questions are as yet unanswered. We hope that the paper will motivate many researchers to help answer these questions.

2 The Reference Flow: the Kähler-Ricci flow

When discussing the new flows, it will be instructive to compare them to a reference flow in complex geometry, namely the Kähler-Ricci flow. We begin by recalling some basic facts about the Kähler-Ricci flow.
The literature on the subject is immense, and we shall only touch upon those aspects where a comparison with the new flows would be particularly helpful.

2.1 The Ricci flow

We start with the Ricci flow. Let $X$ be a Riemannian manifold. For simplicity, we shall assume throughout this paper that $X$ is compact, unless indicated specifically otherwise. The Ricci flow is the flow of metrics $t\to g_{ij}(t)$ given by
$$ \partial_t g_{ij}(t) = -2R_{ij}(t), \qquad g_{ij}(0) = g^0_{ij} \qquad (2.1) $$
where $g^0_{ij}$ is an initial metric, and $R_{ij}(t)$ is the Ricci curvature of $g_{ij}(t)$. In normal coordinates near any given point, the Ricci curvature is given by
$$ R_{ij} = \frac{1}{2}\big(-g^{pq}\partial_p\partial_q g_{ij} - g^{pq}\partial_i\partial_j g_{pq} + g^{pq}\partial_p\partial_j g_{iq} + g^{pq}\partial_i\partial_q g_{pj}\big). \qquad (2.2) $$
Using De Turck's trick [27], the last three terms can be removed by gauge fixing, and the first term $-g^{pq}\partial_p\partial_q g_{ij}$ is used to obtain a unique short-time solution to the Ricci flow. Initially developed by Hamilton, the early successes of the Ricci flow include the proof of its convergence to a space form for 3-folds with positive Ricci curvature [53] and, with appropriate surgeries, for 4-folds with positive curvature operator [54]. As everyone knows, these successes culminated in Perelman's spectacular proof of the Geometrization Conjecture [72, 73]. It has been a very active research direction ever since, and many deep and powerful results continue to be obtained.

2.2 The Kähler-Ricci flow

Let $X$ be now a compact Kähler manifold. The Kähler-Ricci flow is just the Ricci flow
$$ \partial_t g_{\bar kj} = -R_{\bar kj}(t) \qquad (2.3) $$
where $g_{\bar kj}(0)$ (and hence $g_{\bar kj}(t)$ for all $t$) is Kähler. We recall the notion of Chern unitary connection defined by a Hermitian metric $g_{\bar kj}$. To a Hermitian metric $g_{\bar kj}$, we can associate its form $\omega = ig_{\bar kj}\,dz^j\wedge d\bar z^k$. The metric is said to be Kähler if $d\omega = 0$. The Kähler class of $\omega$ is then the de Rham cohomology class $[\omega]$. Given a Hermitian metric $\omega$, the Chern unitary connection $\nabla$ is defined by the requirement that $\nabla_{\bar k}V^j = \partial_{\bar k}V^j$, together with unitarity, i.e., $\nabla_{\bar k}g_{\bar mj} = 0$. In general, the Chern unitary connection differs from the Levi-Civita connection, but the two are the same if $\omega$ is Kähler. For Kähler metrics, the Ricci curvature is given by the very simple formula
$$ R_{\bar kj} = -\partial_j\partial_{\bar k}\log\omega^n, \qquad (2.4) $$
where $n$ is the complex dimension of $X$. The corresponding Ricci form $\mathrm{Ric}(\omega)$ is defined by $\mathrm{Ric}(\omega) = iR_{\bar kj}\,dz^j\wedge d\bar z^k$, or equivalently,
$$ \mathrm{Ric}(\omega) = -i\partial\bar\partial\log\omega^n. \qquad (2.5) $$
Clearly $\mathrm{Ric}(\omega)$ is a closed form, and itself defines a cohomology class. Since for two different metrics $\omega$ and $\tilde\omega$ the ratio $\omega^n/\tilde\omega^n$ is a well-defined smooth function, $[\mathrm{Ric}(\omega)]$ is independent of $\omega$. It is called the first Chern class $c_1(X)$. More geometrically, let $K_X$ be the canonical bundle of $X$, that is, the bundle of $(n,0)$-forms on $X$. We can think of $\omega^n$ as a positive section of $K_X\otimes\bar K_X$, and hence as a metric on the anti-canonical bundle $K_X^{-1}$. The Ricci curvature defined by (2.4) can then be viewed as the curvature of the line bundle $K_X^{-1}$, and its de Rham cohomology class $c_1(X)$ can be identified as the first Chern class of $K_X^{-1}$.
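As a quick illustration of (2.4)-(2.5), consider the elementary example of $X = \mathbb{P}^1$ with the Fubini-Study metric (this computation is for orientation only and is not needed in the sequel):
$$ \omega_{FS} = \frac{i\,dz\wedge d\bar z}{(1+|z|^2)^2}, \qquad R_{\bar zz} = -\partial_z\partial_{\bar z}\log\frac{1}{(1+|z|^2)^2} = \frac{2}{(1+|z|^2)^2}, $$
so that $\mathrm{Ric}(\omega_{FS}) = 2\,\omega_{FS}$, and $c_1(\mathbb{P}^1) = 2[\omega_{FS}]$ is positive, as expected for the anti-canonical bundle of $\mathbb{P}^1$.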
2.3 Immediate advantages of the Kähler condition

Since $\partial_t\omega = -{\rm Ric}(\omega)$, the closedness of ${\rm Ric}(\omega)$ implies that $\omega$ evolves only by closed forms, and hence it remains closed if it is closed at time t = 0. In other words, the Kähler property is preserved. The resulting cohomological information accounts for some marked differences between the Kähler-Ricci flow and the general Ricci flow on Riemannian manifolds.

(a) In the Kähler case, the Kähler-Ricci flow implies $\partial_t[\omega(t)] = -c_1(X)$, and hence, even though $\omega(t)$ may still be difficult to find, we know at least its cohomology class
$$[\omega(t)] = [\omega(0)] - t\,c_1(X). \eqno(2.6)$$

(b) Thus the flow can only exist for t such that $[\omega(t)] > 0$, in the sense that $[\omega(t)]$ contains a positive representative. This suggests that the maximum time of existence for the flow should be given by
$$T = \sup\{t : [\omega(0)] - t\,c_1(X) > 0\} \eqno(2.7)$$
which is determined by the cohomology class of $[\omega(0)]$, and does not depend on $\omega(0)$ itself. This was confirmed by Tian and Z. Zhang [91].

(c) On a compact Kähler manifold, the $\partial\bar\partial$-Lemma says that if $\omega$ and $\tilde\omega$ are any two (1,1)-forms satisfying $[\tilde\omega] = [\omega]$, then we can write $\tilde\omega = \omega + i\partial\bar\partial\varphi$ for some smooth function $\varphi$, unique up to an additive constant, which is called the potential of $\tilde\omega$ with respect to $\omega$. In particular, a Kähler metric $\omega$ is determined by its cohomology class $[\omega]$ and a scalar function. We can apply the $\partial\bar\partial$-Lemma to reduce the Kähler-Ricci flow to a scalar flow as follows. First we choose a known, evolving (1,1)-form $\hat\omega(t)$ with $\partial_t\hat\omega(t) = i\partial\bar\partial\log\Omega = -c_1(X)$, $\hat\omega(0) = \omega(0)$, where $\Omega$ is a fixed smooth, strictly positive section of $K_X\otimes\bar K_X$. Then $[\hat\omega(t)] = [\omega(t)]$, and we can set
$$\omega(t) = \hat\omega(t) + i\partial\bar\partial\varphi(t) \eqno(2.8)$$
with $\varphi(t)$ now the unknown scalar function. The Kähler-Ricci flow is equivalent to
$$i\partial\bar\partial\,\partial_t\varphi(t) = i\partial\bar\partial\log\omega^n(t) - i\partial\bar\partial\log\Omega = i\partial\bar\partial\log\frac{\omega^n(t)}{\Omega}. \eqno(2.9)$$
Changing $\Omega$ by a constant multiple if necessary, this equation is equivalent to the following parabolic complex Monge-Ampère type equation for the scalar function $\varphi$,
$$\partial_t\varphi = \log\frac{(\hat\omega(t) + i\partial\bar\partial\varphi)^n}{\Omega}, \qquad \hat\omega(t) + i\partial\bar\partial\varphi > 0. \eqno(2.10)$$
This equivalence between a flow of metrics and a much simpler flow of scalar functions is arguably the most useful consequence of the Kähler property, and one which will be sorely missed when we consider the new curvature flows.
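It may help to see the scalar flow (2.10) in action in the simplest possible situation. The following rough numerical sketch (ours, not from the survey; the grid, time step, and the model volume form are all illustrative assumptions) runs (2.10) in complex dimension one on a flat torus, where $c_1 = 0$, $i\partial\bar\partial\varphi$ has coefficient $\Delta\varphi/4$, and the flow should converge to the metric with prescribed volume form:

    # Toy run of the parabolic Monge-Ampere flow (2.10) on a flat torus in
    # complex dimension one: ∂_t φ = log((1 + Δφ/4)/Ω̂), Ω̂ > 0 periodic.
    # The metric coefficient g = 1 + Δφ/4 keeps its mean (equal to 1) and
    # should approach a constant multiple of Ω̂.  (Ours, illustrative only.)
    import numpy as np

    N = 64
    x = np.linspace(0, 2 * np.pi, N, endpoint=False)
    X, Y = np.meshgrid(x, x, indexing='ij')
    Omega = np.exp(0.3 * np.cos(X) * np.sin(Y))      # a model volume form

    k = np.fft.fftfreq(N, d=1.0 / N)
    KX, KY = np.meshgrid(k, k, indexing='ij')
    symbol = -(KX**2 + KY**2)                        # spectral Laplacian symbol

    def lap(f):
        return np.real(np.fft.ifft2(symbol * np.fft.fft2(f)))

    phi, dt = np.zeros((N, N)), 1e-3
    for n in range(30000):
        g = 1.0 + 0.25 * lap(phi)                    # evolving metric coefficient
        phi += dt * np.log(g / Omega)                # explicit Euler step of (2.10)

    ratio = (1.0 + 0.25 * lap(phi)) / Omega
    print("oscillation of g / volume form:", ratio.max() - ratio.min())  # near 0

The printed oscillation tends to zero, reflecting convergence to the (here flat) fixed point.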
2.4 The Kähler-Ricci flow in analytic/algebraic geometry

Early on, the Kähler-Ricci flow provided a parabolic approach to the problem of finding Kähler-Einstein metrics, that is, Kähler metrics satisfying the condition
$${\rm Ric}(\omega) = \mu\,\omega \eqno(2.11)$$
for $\mu$ constant. This problem was solved using elliptic methods by Yau [101] for $\mu = 0$, and by Yau [101] and Aubin [7] for $\mu < 0$. Such metrics can also be viewed as stationary points of the (normalized) Kähler-Ricci flow, and an alternative proof using the Kähler-Ricci flow was given by H.D. Cao [17]. The case $\mu > 0$ was solved only relatively recently, in 2013, by Chen, Donaldson, and Sun [20, 21, 22]. Subsequent progress on the Kähler-Ricci flow approach in this case can be found in [92, 23] and the references therein. In the previous application, the initial assumption, which is necessary for the existence of Kähler-Einstein metrics, was that $c_1(X)$ was either zero or definite. More recently, in an important development, the existence of Kähler-Einstein metrics was also established by Wu and Yau [98, 99, 100], starting instead from an assumption of negative holomorphic sectional curvature. Wu and Yau used elliptic methods, but Nomura [70], Tong [93], and Huang et al. [55] have been able to recover and extend the results of Wu and Yau in several ways, using the Kähler-Ricci flow. The above applications of the Kähler-Ricci flow concern the cases when a canonical metric does exist, and the Kähler-Ricci flow converges to it. Several longer-range and presently ongoing programs are based on the expectation that the Kähler-Ricci flow can still help find a canonical metric, or a canonical complex structure, even when there is none on the original manifold. These include the analytic minimal model program of Song and Tian [87], with early contributions by Tsuji [96], which requires the continuation of the Kähler-Ricci flow beyond singularities, possibly on a different manifold; and flows on unstable manifolds, where one would have to address the phenomena of jumps to a different complex structure and moduli theory. A major difficulty in general is to identify the limiting complex manifold. So far only the case of $S^2$ with an arbitrary configuration of marked points, including semistable and unstable ones, has been fully worked out in [83]. The convergence in the stable case had been obtained in [67].

3 Motivation for the New Flows

While the study of the Kähler-Ricci flow remains a very active research area, with many hard and deep questions left open, a number of new problems arising from different areas seem to require new flows. Remarkably, while their origins are a priori unrelated, these new flows turn out to be in many ways just higher order corrections of the Ricci flow, in a natural setting which is somewhere intermediate between the Riemannian and the Kähler settings. We begin with an informal discussion of the motivation and qualitative features of these flows, leaving the precise formulation to the next section.

3.1 Anomaly cancellation and powers of the curvature

In string theory, a crucial requirement discovered by M. Green and J. Schwarz [50] for the cancellation of anomalies is the equation
$$dH = \frac{\alpha'}{4}\big({\rm Tr}(Rm\wedge Rm) - {\rm Tr}(F\wedge F)\big). \eqno(3.1)$$
Here space-time is a 10-dimensional manifold, and the fields in the theory include a metric $g_{ij}$, a Hermitian metric h on a vector bundle E, and a 3-form H. The expressions Rm and F are the curvatures of $\omega$ and h, viewed as 2-forms valued in the endomorphisms of the tangent space and of the bundle E, respectively. The coefficient $\alpha'$ is the string tension, which is a positive constant. Note that this is an equation of 4-forms involving the square of the curvature.
By contrast, more familiar equations such as the Kähler-Einstein equation or the Yang-Mills equation are linear in the curvature. The right hand side of the equation (3.1) arises rather from Pontryagin or Bott-Chern characteristic classes, which arise themselves from gauge and gravitational anomalies. To derive it from an action, one would need to expand in the string tension $\alpha'$, and keep both the leading terms and the next order terms of the expansion. As shown by Candelas, Horowitz, Strominger, and Witten [16], in compactifications of the heterotic string from 10-dimensional to 4-dimensional space-time, supersymmetry requires the existence of a covariantly constant spinor, from which an integrable complex structure with nowhere vanishing holomorphic (3,0)-form $\Omega$ can be constructed on the internal 6-dimensional manifold. Restricted to the internal space, the metric $g_{ij}$ can then be identified with a (1,1)-form $\omega$, and H becomes the torsion of $\omega$, $H = i(\bar\partial - \partial)\omega/2$. The bundle E turns out to be a holomorphic vector bundle, and the curvature F is that of the Chern unitary connection of h on E. In the equation (3.1), it is natural to view Rm as the curvature of the Chern unitary connection of $\omega$. However, it has been shown by Hull [57] that other unitary connections are also admissible. Altogether, the equation (3.1) is then an equation for (2,2)-forms, which implies that ${\rm Tr}(Rm\wedge Rm)$ is a (2,2)-form, although Rm itself may not be a (1,1)-form. We observe that, except in the special case when the right hand side of (3.1) vanishes identically (which can only happen if the vector bundles $T^{1,0}(X)$ and E have the same second Chern class), the equation (3.1) implies that the Hermitian metric $\omega$ is not Kähler. Thus, besides introducing an equation of (2,2)-forms and the square of the curvature tensor, the anomaly cancellation mechanism of Green and Schwarz also suggests widening the class of metrics under consideration to non-Kähler metrics.

3.2 Weakenings of the Kähler condition

Many weakenings of the Kähler condition on Hermitian metrics are known, and we shall actually discuss some of them in section 6 below. But for our purposes, the most important condition is the notion of balanced metric introduced by Michelsohn [68]. A metric $\omega$ on an n-dimensional complex manifold is said to be balanced if
$$d(\omega^{n-1}) = 0 \eqno(3.2)$$
and a manifold is said to be balanced if it admits a balanced metric. Depending on the circumstances, this condition may well prove preferable to the Kähler condition, because it is invariant under birational transformations, as was shown by Alessandrini and Bassanelli [2]. Perhaps unexpectedly, the notion of balanced metric arose relatively recently in a completely different context, namely the one discussed in the previous section, of compactifications of the heterotic string to 4-dimensional Minkowski space preserving supersymmetry. In the original set-up considered in [16], the compactification was achieved by a product of Minkowski space-time with the internal space. In the generalization considered by Hull [56, 57] and Strominger [90], the product is replaced by a warped product.
Supersymmetry still leads to the internal space being a Calabi-Yau 3-fold, with a holomorphic nowhere vanishing (3,0)-form $\Omega$, but the Hermitian metric $\omega$ must satisfy the condition
$$d(\|\Omega\|_\omega\,\omega^2) = 0 \eqno(3.3)$$
(as reformulated by Li and Yau [63]). This condition just means that the metric $\|\Omega\|^{1/2}_\omega\,\omega$ is balanced, and we shall simply refer to $\omega$ as being conformally balanced. For more recent developments on balanced metrics, see the survey paper [40] and the references therein.

3.3 Detection of instability

Even when the solution of an elliptic equation can already be found by a particular flow, under e.g. a stability condition, other flows can potentially be useful too. This is because, in the absence of a stability condition, they may fail to converge in different ways, which would then provide different ways of detecting instability. A well-known example is the method of continuity for the Kähler-Einstein problem on Fano manifolds. The failure of convergence is detected there by the emergence of a non-trivial multiplier ideal sheaf [69, 86]. By contrast, the behavior of the Kähler-Ricci flow in the unstable case is still obscure. On the other hand, a third flow, again with Kähler-Einstein metrics as stationary points, is the inverse Monge-Ampère flow recently introduced by Collins et al. [26]. This flow appears to provide more information than the Kähler-Ricci flow when the underlying manifold is an unstable toric Fano manifold, as it then produces an optimal destabilizing test configuration.

3.4 New Types of Partial Differential Equations

The appearance of powers of the curvature, in contrast with the equations of constant scalar or constant Ricci curvature with which we are more familiar, can be expected to result also in new types of PDEs that have not been encountered before. Two new difficulties will also have to be addressed:

(a) The first is the lack of a $\partial\bar\partial$-Lemma in non-Kähler geometry. So the equations will usually not reduce to scalar equations. In this respect, the non-Kähler flows that we shall encounter may be closer to the Ricci flow than to the Kähler-Ricci flow.

(b) But even when they happen to be reducible to a scalar equation, more often than not these equations will not be concave in the eigenvalues of the Hessian. This will prevent the use of classical and powerful PDE tools such as those of Caffarelli-Nirenberg-Spruck [12, 13] and the Evans-Krylov theorem, and other methods will have to be developed.

Thus the new geometric flows may be interesting in their own right from the point of view of the theory of non-linear partial differential equations.

4 The Anomaly Flow

We now formulate precisely the first of the new flows which appeared recently in complex geometry. This is the Anomaly flow introduced by the authors in [75], and given this name in recognition of the anomaly cancellation mechanism introduced in [50]. There are actually several versions of the Anomaly flow, and we begin with the simplest version. Let X be a compact complex 3-fold, equipped with a nowhere vanishing holomorphic (3,0)-form $\Omega$. Let $t\to\Phi(t)$ be a given path of closed (2,2)-forms, with $[\Phi(t)] = [\Phi(0)]$ for each t. Let $\omega_0$ be an initial metric which is conformally balanced, i.e., such that $\|\Omega\|_{\omega_0}\,\omega_0^2$ is a closed (2,2)-form.
Then the Anomaly flow is the flow of (2,2)-forms defined by
$$\partial_t\big(\|\Omega\|_\omega\,\omega^2\big) = i\partial\bar\partial\omega - \frac{\alpha'}{4}\big({\rm Tr}(Rm\wedge Rm) - \Phi\big), \eqno(4.1)$$
where $\|\Omega\| = (i\,\Omega\wedge\bar\Omega\,\omega^{-3})^{1/2}$ is the norm of $\Omega$ with respect to $\omega$. Here $\alpha'$ is the string tension introduced earlier in (3.1). Mathematically, the equation (4.1) is well-defined for any real number $\alpha'$, and we fix $\alpha'\in\mathbf{R}$ unless specified otherwise. The expression Rm is the curvature of the Chern unitary connection defined by the Hermitian metric $\omega$, viewed as a section of $\Lambda^{1,1}\otimes{\rm End}(T^{1,0}(X))$. We shall on occasion consider other connections too, and we shall say so explicitly when this is the case. Note that the stationary points of the flow are precisely given by the Green-Schwarz anomaly cancellation equation (3.1), and that the flow preserves the conformally balanced condition. Indeed, by Chern-Weil theory, its right hand side is a closed (2,2)-form, and hence $d(\|\Omega\|_\omega\,\omega^2) = 0$ at all times t, if $d(\|\Omega\|_\omega\,\omega^2) = 0$ at time t = 0.

4.1 The Hull-Strominger system

The main motivation for the Anomaly flow is to solve the following system of equations for supersymmetric compactifications of the heterotic string, proposed independently by Hull [56, 57] and Strominger [90], which generalizes an earlier proposal by Candelas, Horowitz, Strominger, and Witten [16]. Let X be a 3-fold equipped with a holomorphic nowhere vanishing (3,0)-form $\Omega$ and a holomorphic vector bundle $E\to X$ with $c_1(E) = 0$. Then we look for a Hermitian metric $\omega$ on X and a Hermitian metric h on E satisfying
$$i\partial\bar\partial\omega - \frac{\alpha'}{4}\big({\rm Tr}(Rm\wedge Rm) - {\rm Tr}(F\wedge F)\big) = 0$$
$$d(\|\Omega\|_\omega\,\omega^2) = 0$$
$$\omega^2\wedge F = 0 \eqno(4.2)$$
The third condition is the familiar Hermitian-Yang-Mills equation, so the novelty of the Hull-Strominger system resides essentially in the first two equations. Actually, the second equation in the above system was originally written in a different way in [56, 57] and [90]. The formulation given above, which brings to light the conformally balanced condition, is due to Li and Yau [63]. The solutions of the Hull-Strominger system can be viewed as generalizations of Ricci-flat Kähler metrics in the following sense. Assume that
$$\frac{\alpha'}{4}\big({\rm Tr}(Rm\wedge Rm) - {\rm Tr}(F\wedge F)\big) = 0. \eqno(4.3)$$
Then the first equation reduces to $i\partial\bar\partial\omega = 0$. Combined with the second equation $d(\|\Omega\|_\omega\,\omega^2) = 0$, it implies that $\omega$ is both Kähler and Ricci-flat (a detailed proof of this fact is given in section 6.1 below). One simple way of guaranteeing the condition (4.3), as suggested in [16], is to take $E = T^{1,0}(X)$ and $\omega = h$. If $\omega$ is Kähler, then the first equation in the Hull-Strominger system is satisfied. The third condition on F then coincides with the Ricci-flat condition on $\omega$. This implies that $\|\Omega\|_\omega$ is constant, and hence the second equation is a consequence of $\omega$ being Kähler. Thus Ricci-flat Kähler metrics provide a consistent solution to the Hull-Strominger system. An immediate difficulty when trying to solve the Hull-Strominger system in general is how to implement the second condition in the absence of an analogue of a $\partial\bar\partial$-Lemma.
While there are many ansätze that can produce a balanced metric starting from a given one, none of them is particularly special, and they all lead to very unwieldy expressions for their curvatures. Anomaly flows are designed to circumvent this difficulty, and to produce solutions of the Hull-Strominger system without appealing to any particular ansatz. The prototype of an Anomaly flow was introduced in (4.1). For the purpose of solving the Hull-Strominger system, we fix a conformally balanced metric $\omega(0)$ on X and a metric h(0) on E, and introduce the following coupled flow $t\to(\omega(t), h(t))$,
$$\partial_t\big(\|\Omega\|_\omega\,\omega^2\big) = i\partial\bar\partial\omega - \frac{\alpha'}{4}\big({\rm Tr}(Rm\wedge Rm) - {\rm Tr}(F\wedge F)\big)$$
$$h(t)^{-1}\partial_t h(t) = \frac{\omega^2\wedge F}{\omega^3} \eqno(4.4)$$
As noted before, the flow preserves the conformally balanced condition for $\omega$, and hence its stationary points, if they exist, are automatically balanced. Since they will manifestly also satisfy the first and third equations in the Hull-Strominger system, they satisfy the complete Hull-Strominger system. Thus we shall have circumvented the problem of the absence of a $\partial\bar\partial$-Lemma for conformally balanced metrics. We observe that, while the flow (4.4) is a coupled flow for $\omega(t)$ and h(t), there are cases when F can be viewed as known. For example, in the case of Calabi-Eckmann-Goldstein-Prokushkin fibrations over a K3 surface, the Hermitian-Yang-Mills condition on F does not change as $\omega$ evolves, and h can be taken as fixed, given by the Hermitian-Yang-Mills metric corresponding to the metric $\omega(0)$ on X. Setting $\Phi = {\rm Tr}(F\wedge F)$, we see that the flow (4.4) reduces to the flow (4.1).

4.2 Formulation as a flow of metrics instead of (2,2)-forms

While the formulation of the Anomaly flow as a flow of (2,2)-forms is ideally suited to the manifest preservation of the conformally balanced condition of Hermitian metrics, for an analytic study of the flow we need another, more conventional formulation as a flow of (1,1)-forms. This is provided by the following theorem [75, 78]:

Theorem 1 Consider the Anomaly flow (4.1) with a conformally balanced initial metric.
(a) The flow can also be expressed as
$$\partial_t g_{\bar kj} = \frac{1}{2\|\Omega\|_\omega}\Big\{-\tilde R_{\bar kj} + g^{s\bar r}g^{p\bar q}T_{\bar qsj}\bar T_{p\bar r\bar k} - \frac{\alpha'}{4}g^{s\bar r}\big(R_{[\bar ks}{}^\alpha{}_\beta R_{\bar rj]}{}^\beta{}_\alpha - \Phi_{\bar ks\bar rj}\big)\Big\} \eqno(4.5)$$
Here $\tilde R_{\bar kj} = g^{p\bar q}R_{\bar qp\bar kj}$ is the Chern-Ricci tensor, and $T = i\partial\omega = \frac{1}{2}T_{\bar kjm}\,dz^m\wedge dz^j\wedge d\bar z^k$ is the torsion tensor. The brackets [ , ] denote anti-symmetrization in each of the two sets of barred and unbarred indices.
(b) The flow is parabolic and exists at least for a short time when $\alpha' Rm$ satisfies the following positivity condition
$$|\xi|^2|A|^2 + \frac{\alpha'}{2}\,g^{q\bar b}g^{a\bar p}g^{s\bar r}g^{\alpha\bar\gamma}R_{[\bar rq}{}^\beta{}_\alpha\,\xi_{\bar p}\xi_{s]}\,A_{\bar\gamma\beta}A_{\bar ba} > 0 \eqno(4.6)$$
for all Hermitian tensors A and vectors $\xi$. In particular, it is parabolic and exists at least for a short time if $|\alpha' Rm(0)| < 1/2$.

The above formulas for the Anomaly flow should be viewed in the context of conformally balanced metrics.
In general, we define the curvature tensor of the Chern unitary connection of a Hermitian metric $g_{\bar kj}$ by
$$[\nabla_j, \nabla_{\bar k}]V^p = R_{\bar kj}{}^p{}_q\,V^q \eqno(4.7)$$
for any vector field $V^p$. Explicitly,
$$R_{\bar kj}{}^p{}_q = -\partial_{\bar k}\big(g^{p\bar m}\partial_j g_{\bar mq}\big). \eqno(4.8)$$
When $g_{\bar kj}$ is not Kähler, there are several candidates for the notion of Ricci curvature,
$$R_{\bar kj} = R_{\bar kj}{}^p{}_p, \qquad \tilde R_{\bar kj} = R^p{}_{p\bar kj}, \qquad R'_{\bar kj} = R_{\bar k}{}^p{}_{pj}, \qquad R''_{\bar kj} = R^p{}_{j\bar kp} \eqno(4.9)$$
with corresponding notions of scalar curvature $R$, $\tilde R$, $R'$ and $R''$. We always have $R' = R''$ and $R = \tilde R$. However, for conformally balanced metrics, we also have the following relations
$$R'_{\bar kj} = R''_{\bar kj} = \tfrac{1}{2}R_{\bar kj} = \tilde R_{\bar kj} - \nabla^m T_{\bar kjm}. \eqno(4.10)$$
For parabolicity purposes, it is crucial that it is the Chern-Ricci curvature $\tilde R_{\bar kj}$ that appears in the right hand side of (4.5), rather than the other notions of Ricci curvature. Indeed, explicitly,
$$\tilde R_{\bar kj} = -g^{p\bar q}g_{\bar km}\,\partial_{\bar q}\big(g^{m\bar\ell}\partial_p g_{\bar\ell j}\big) = -\Delta g_{\bar kj} + \cdots$$
which is the analogue of the equation (2.2) for the Ricci flow, and shows that the flow is parabolic. Note that the other Ricci tensor
$$R_{\bar kj} = -\partial_j\partial_{\bar k}\log\omega^n = -g^{p\bar q}\partial_j\partial_{\bar k}g_{\bar qp} + \cdots$$
has a natural interpretation in terms of the first Chern class of X, but would not have been appropriate in the right hand side of (4.5) for parabolicity purposes.
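These conventions can be made concrete in a toy example. The following sympy sketch (ours, not from [75, 78]; the diagonal metric below is chosen only for simplicity) implements (4.8) on $\mathbf{C}^2$ for the non-Kähler metric $g_{\bar 11} = 1$, $g_{\bar 22} = e^{z^1\bar z^1}$, and illustrates the remark above: the Ricci tensors $R_{\bar kj}$ and $\tilde R_{\bar kj}$ come out different, while the scalar curvatures R and $\tilde R$ agree:

    # Chern curvature R_{k̄j}^p_q = -∂_k̄(g^{pm̄} ∂_j g_{m̄q}) for a toy
    # non-Kähler diagonal metric on C^2 (ours, illustrative only).
    import sympy as sp

    z1, z2, z1b, z2b = sp.symbols('z1 z2 z1b z2b')   # Wirtinger: z, z̄ independent
    z, zb = [z1, z2], [z1b, z2b]

    G = sp.Matrix([[1, 0], [0, sp.exp(z1 * z1b)]])   # G[m,q] = g_{m̄q}
    H = G.inv()                                       # H[p,m] = g^{pm̄}

    def Rm(k, j, p, q):                               # formula (4.8)
        A = sum(H[p, m] * sp.diff(G[m, q], z[j]) for m in range(2))
        return sp.simplify(-sp.diff(A, zb[k]))

    Ric  = sp.Matrix(2, 2, lambda k, j: sum(Rm(k, j, p, p) for p in range(2)))
    Rict = sp.Matrix(2, 2, lambda k, j: sp.simplify(sum(
        H[p, q] * G[k, s] * Rm(q, p, s, j)
        for p in range(2) for q in range(2) for s in range(2))))

    scal  = sum(H[j, k] * Ric[k, j]  for j in range(2) for k in range(2))
    scalt = sum(H[j, k] * Rict[k, j] for j in range(2) for k in range(2))

    print(Ric)                        # Matrix([[-1, 0], [0, 0]])
    print(Rict)                       # Matrix([[0, 0], [0, -exp(z1*z1b)]])
    print(sp.simplify(scal - scalt))  # 0: the scalar curvatures R and R̃ agree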
4.3 Comparison with the Kähler-Ricci flow

With the formulation (4.5) of the Anomaly flow, it is now easier to compare it with the Kähler-Ricci flow. Perhaps surprisingly, since their original motivations are rather different, the two flows appear rather similar, with the Anomaly flow a higher order version of the Kähler-Ricci flow, with the additional complications of the factor $\|\Omega\|^{-1}_\omega$, of the torsion, and of the quadratic terms in the curvature tensor.

(a) In the Kähler-Ricci flow, the (1,1)-cohomology class is determined: $[\omega(t)] = [\omega(0)] - t\,c_1(X)$. In the Anomaly flow (4.1), we have rather
$$\big[\|\Omega\|_{\omega(t)}\,\omega^2(t)\big] = \big[\|\Omega\|_{\omega(0)}\,\omega^2(0)\big] - t\,\frac{\alpha'}{4}\big(c_2(X) - [\Phi(0)]\big)$$

(b) However, the (2,2)-cohomology class $[\|\Omega\|_\omega\,\omega^2]$ provides much less information than the (1,1)-cohomology class $[\omega]$. For example, the volume is an invariant of a (1,1)-cohomology class, but not of a (2,2)-cohomology class.

(c) As an indirect consequence, the maximum time of existence of the Anomaly flow is not determined by cohomology alone, and depends on the initial data. In this respect, the Anomaly flow is closer to the Ricci flow than to the Kähler-Ricci flow.

(d) The diffusion operator for the Anomaly flow is more complicated. It includes the Laplacian, as for the Kähler-Ricci flow, but also the Riemann curvature, e.g.
$$\delta R_{\bar kj}{}^\rho{}_\lambda \to \frac{1}{2\|\Omega\|}\Big(\Delta(\delta R_{\bar kj}{}^\rho{}_\lambda) + 2\alpha'\,g^{\rho\bar\mu}g^{s\bar r}R_{[\bar r\lambda}{}^\beta{}_\alpha\nabla_s\nabla_{\bar\mu]}\,\delta R_{\bar kj}{}^\alpha{}_\beta\Big) \eqno(4.11)$$
Another complication is that the torsion stays identically 0 along the Kähler-Ricci flow, but it evolves along the Anomaly flow. Explicitly, its evolution is given by
$$\partial_t T_{\bar pjq} = \frac{1}{2\|\Omega\|_\omega}\Big[\Delta T_{\bar pjq} - \alpha' g^{s\bar r}\nabla_j\big(R_{[\bar ps}{}^\alpha{}_\beta R_{\bar rq]}{}^\beta{}_\alpha - \Phi_{\bar ps\bar rq}\big) + \alpha' g^{s\bar r}\nabla_q\big(R_{[\bar ps}{}^\alpha{}_\beta R_{\bar rj]}{}^\beta{}_\alpha - \Phi_{\bar ps\bar rj}\big)\Big]$$
$$+ \frac{1}{2\|\Omega\|_\omega}\big(T^m{}_{jq}\Psi_{\bar pm} - T_j\Psi_{\bar pq} + T_q\Psi_{\bar pj} + \nabla_j(T\bar T)_{\bar pq} - \nabla_q(T\bar T)_{\bar pj}\big) - \frac{1}{2\|\Omega\|_\omega}\big(T^r{}_{q\lambda}R^\lambda{}_{r\bar pj} - T^r{}_{j\lambda}R^\lambda{}_{r\bar pq}\big) \eqno(4.12)$$
where $\Delta = g^{p\bar q}\nabla_p\nabla_{\bar q}$ is the Laplacian, and $\Psi$ denotes the right hand side of (4.1).

4.4 The Anomaly flow for the Hull-Strominger-Ivanov system

In the original formulation by Strominger [90] of the Hull-Strominger system (4.2), it was suggested that Rm be the curvature of the Chern unitary connection of $\omega$, as Rm would then be a (1,1)-form, which would be consistent with the first equation in (4.2) being an equation of (2,2)-forms. On the other hand, in [58], Hull argued that Rm can actually be the curvature of any connection, at the only cost of adding finite counterterms to the effective action. More recently, in [59], Ivanov argued that, in order to obtain the field equations of the heterotic string, the Hull-Strominger system ought to be supplemented by the condition that Rm be the curvature of an SU(3) instanton. By definition, an SU(3) instanton with respect to a metric $\omega$ is given by another Hermitian metric $\tilde\omega$ on X whose Chern unitary connection satisfies
$$\omega^2\wedge\tilde F = 0. \eqno(4.13)$$
Here $\tilde F$ is the curvature of the Chern unitary connection of $\tilde\omega$, and the right hand side is 0 because $c_1(X) = 0$. Thus a complete system of equations for a supersymmetric vacuum of the heterotic string would consist of three unknowns $(\omega, \tilde\omega, h)$ satisfying both (4.2) and (4.13). We shall refer to the resulting system for $(\omega, \tilde\omega, h)$ as the Hull-Strominger-Ivanov system. Kähler Ricci-flat metrics can still be viewed as special solutions of the Hull-Strominger-Ivanov system, in the sense that if $\omega$ is such a metric and we set $E = T^{1,0}(X)$, $h = \tilde\omega = \omega$, then we obtain a solution of this system. In [46], Garcia-Fernandez et al. introduced an elegant way of reformulating the Hull-Strominger-Ivanov system. For this, let $(X, \Omega)$ be as before, and let E be a holomorphic vector bundle over X. Let c be a bilinear form
$$c : \Lambda^{1,1}(E)\times\Lambda^{1,1}(E)\to\Lambda^{2,2}(X)\cap{\rm Ker}\,d. \eqno(4.14)$$
(For some applications, it may be necessary to impose the stronger condition that, in the right hand side, ${\rm Ker}\,d$ be replaced by the smaller space ${\rm Im}\,i\partial\bar\partial$.) For each Hermitian metric H on E, let $F_H$ be the curvature of the corresponding Chern unitary connection. Then the following system of equations for $(\omega, H)$,
$$i\partial\bar\partial\omega - \frac{\alpha'}{4}\,c(F_H, F_H) = 0$$
$$d(\|\Omega\|_\omega\,\omega^2) = 0$$
$$F_H^{0,2} = F_H^{2,0} = 0, \qquad \omega^2\wedge F_H = 0 \eqno(4.15)$$
is a generalization of the Hull-Strominger-Ivanov system. Indeed, if we let $E = T^{1,0}(X)\oplus E$, $H = (\tilde\omega, h)$, and
$$c\big((A_1, B_1), (A_2, B_2)\big) = {\rm Tr}(A_1\wedge A_2) - {\rm Tr}(B_1\wedge B_2) \eqno(4.16)$$
where $(A_i, B_i)$ is the corresponding decomposition of $\Lambda^{1,1}(E) = \Lambda^{1,1}(T^{1,0}(X))\oplus\Lambda^{1,1}(E)$, we get back the Hull-Strominger-Ivanov system.
In their paper [46], Garcia-Fernandez et al. also found a variational principle for the Hull-Strominger-Ivanov system, in other words, a space of $(\omega, H)$ on which a functional $M(\omega, H)$ can be defined, whose critical points are exactly the solutions of the system. We won't reproduce the formula for $M(\omega, H)$ here, but note only that a simple version in the special case with $\alpha' = 0$ and no H appears below in (6.12). It is straightforward to formulate a version of the Anomaly flow for the Hull-Strominger-Ivanov system. In view of the previous reformulation (4.15) of the system, it suffices to consider the following version $t\to(\omega(t), H(t))$ of the Anomaly flow,
$$\partial_t\big(\|\Omega\|_\omega\,\omega^2\big) = i\partial\bar\partial\omega - \frac{\alpha'}{4}\,c(F_H, F_H)$$
$$H^{-1}\partial_t H = \frac{\omega^2\wedge F_H}{\omega^3} \eqno(4.17)$$
for initial data $(\omega(0), H(0))$ with $\omega(0)$ conformally balanced. Clearly this system again preserves the conformally balanced property of $\omega(t)$, and its stationary points satisfy the Hull-Strominger-Ivanov system (4.15). In fact, this system is even simpler than the original Anomaly flow defined in (4.4), because it has no quadratic term in $Rm(\omega)$. There are quadratic terms in $F_H$, but these are simpler than $Rm(\omega)$. The parabolicity of the flow (4.17), and hence its short-time existence, is an immediate consequence of the results of [75] and [78], and more specifically of formulas of the type (4.5).

5 Illustrative Cases of the Anomaly Flow

The Anomaly flow is still largely unexplored, and the many additional difficulties, compared to the Kähler-Ricci flow, which were described in section 4.3, suggest that a lot of work will have to be done before we can achieve a level of understanding anywhere close to our present one for the Kähler-Ricci flow. However, many special cases of the original Anomaly flow (4.4) have now been worked out, each of which sheds a different light on the flow, and each of which may ultimately be useful in developing the general theory.

5.1 The flow on toric fibrations over Calabi-Yau surfaces

Let $(Y, \hat\omega)$ be a Calabi-Yau surface, equipped with a nowhere vanishing holomorphic (2,0)-form $\Omega_Y$. Let $\omega_1, \omega_2\in H^2(Y,\mathbf{Z})$ satisfy $\omega_1\wedge\hat\omega = \omega_2\wedge\hat\omega = 0$. From this data, building on ideas going back to Calabi and Eckmann [15], Goldstein and Prokushkin [48] showed how to construct a toric fibration $\pi: X\to Y$, equipped with a (1,0)-form $\theta$ on X satisfying $\partial\theta = 0$, $\bar\partial\theta = \pi^*(\omega_1 + i\omega_2)$. Furthermore, the form $\Omega = \sqrt{3}\,\Omega_Y\wedge\theta$ is a holomorphic nowhere vanishing (3,0)-form on X, and for any scalar function u on the base Y, the (1,1)-form
$$\omega_u = \pi^*(e^u\hat\omega) + i\theta\wedge\bar\theta \eqno(5.1)$$
is a conformally balanced metric on X. The first non-Kähler solution of the Hull-Strominger system was actually found by Fu and Yau [43, 44] in 2006, using these fibrations and elliptic partial differential equations methods. Besides the solutions themselves, the Fu-Yau work also revealed the rich and deep structure of the Hull-Strominger system. Since then many other non-Kähler solutions have been found, see e.g. [3, 4, 34, 36, 37, 38, 43, 41, 71].
In general, for a given elliptic system, there are infinitely many flows that admit it as their set of stationary points, but most of them will not behave well for large time. Usually one has the freedom of selecting a well-behaved flow. But in the present case, the Anomaly flow was dictated by the requirement that the balanced condition be preserved, and there does not appear to be any other flow that meets this requirement. Thus it is particularly important to determine whether the Anomaly flow is a viable tool for solving the Hull-Strominger system. This is confirmed by the following theorem in the case of toric fibrations [79]:

Theorem 2 Consider the Anomaly flow on the fibration $X\to Y$ constructed above, with initial data $\omega(0) = \pi^*(M\hat\omega) + i\theta\wedge\bar\theta$, where M is a positive constant. Fix h on the bundle $E\to Y$ satisfying the Hermitian-Yang-Mills equation $\hat\omega\wedge F = 0$. Then $\omega(t)$ is of the form $\pi^*(e^u\hat\omega) + i\theta\wedge\bar\theta$ and, assuming an integrability condition on the data (which is necessary), there exists $M_0 > 0$, so that for all $M\ge M_0$, the flow exists for all time, and converges to a metric $\omega_\infty$ with $(X, \pi^*E, \omega_\infty, \pi^*(h))$ satisfying the Hull-Strominger system.

The reason why the Anomaly flow on Calabi-Eckmann-Goldstein-Prokushkin fibrations is simpler than the general case is that it descends to a flow on the base Y, of the form
$$\partial_t\omega = -\frac{1}{2\|\Omega\|_\omega}\left[\frac{R}{2} - |T|^2 - \frac{\alpha'}{4}\,\sigma_2(i\,{\rm Ric}_\omega) + 2\alpha'\,\frac{i\partial\bar\partial(\|\Omega\|_\omega\,\rho)}{\omega^2} - 2\,\frac{\mu}{\omega^2}\right]\omega \eqno(5.2)$$
with an initial metric of the form $\omega(0) = M\hat\omega$. Here $\omega$ is now a Hermitian metric on the base Y, and $\sigma_2(\Phi) = \Phi\wedge\Phi\,\omega^{-2}$ is the usual determinant of a real (1,1)-form $\Phi$, relative to the metric $\omega$. The forms $\rho$ and $\mu$ are given smooth (1,1)- and (2,2)-forms, respectively, both determined explicitly by $\hat\omega$, F, and $\theta$. Since $\omega$ stays conformal to $\hat\omega$, we can set $\omega = e^u\hat\omega$, and rewrite this flow as a parabolic flow for u,
$$\partial_t u = \frac{1}{2}e^{-u}\left(\Delta_{\hat\omega}e^u + \alpha'\,\hat\sigma_2(i\partial\bar\partial u) - 2\alpha'\,\frac{i\partial\bar\partial(e^{-u}\rho)}{\hat\omega^2} + 2\,\frac{\mu}{\hat\omega^2}\right) \eqno(5.3)$$
where both the Laplacian $\Delta_{\hat\omega}$ and the determinant $\hat\sigma_2$ are taken with respect to the fixed metric $\hat\omega$. This flow is not concave in the Hessian of u, which makes it inaccessible to standard non-linear PDE techniques. In practice, we actually solve the problem by using the previous, Ricci-flow-like formulation. Some key ingredients of the proof are the following estimates. We illustrate only the features which are peculiar to the flow (5.2); the details can be found in the original paper [79].

5.1.1 A priori estimate for $\|\Omega\|_\omega$

The first estimate is a C⁰ estimate for $\|\Omega\|_\omega = e^{-u}$, or equivalently an upper and a lower bound for $e^u$, but one which has to be sharper than the usual C⁰ estimates. This is because it does not suffice to show that the upper and lower bounds are independent of time; they must also remain comparable to the size of the initial data.
Lemma 1 Assume that the flow (5.2) exists and is parabolic for $t\in[0,T)$, and starts from $u_0 = \log M$. Then there exists $M_0$ such that, for all $M\ge M_0$, we have
$$\sup_{X\times[0,T)}e^u \le C_1 M, \qquad \sup_{X\times[0,T)}e^{-u} \le C_2 M^{-1} \eqno(5.4)$$
with constants $C_1, C_2$ depending only on $(Y,\hat\omega)$, $\rho$, $\mu$, and $\alpha'$.

This estimate is proved by parabolic Moser iteration. The first step is to isolate from the flow a term comparable to the $L^2$ norm of the gradient of $e^{(k+1)u/2}$, for arbitrary k. For example, to establish the sup norm bound, we use
$$\frac{k}{2}\int_X e^{(k+1)u}\{\hat\omega + \alpha' e^{-2u}\rho\}\wedge i\partial u\wedge\bar\partial u + \frac{\partial}{\partial t}\,\frac{2}{k+1}\int_X e^{(k+1)u}\,\frac{\hat\omega^2}{2!} = -\frac{k}{2}\int_X e^{ku}\,i\partial u\wedge\bar\partial u\wedge\omega' + \int_X e^{ku}\mu - \alpha'\Big(1 - \frac{1}{1-k}\Big)\int_X e^{(k-1)u}\,i\partial\bar\partial\rho \eqno(5.5)$$
with a form $\omega'$ which is positive as long as the flow exists and is parabolic. To prove the lemma, we assume first that $\sup_X e^{-u(\cdot,t)} < \delta$ for all $t\in[0,T)$, where $\delta$ is a fixed small positive constant depending only on the data $(Y,\hat\omega)$, $\rho$, $\mu$, and $\alpha'$. We later show that the condition $\sup_X e^{-u(\cdot,t)} < \delta$ is preserved. The above estimate implies the bound
$$\frac{k}{2}\int_X e^{(k+1)u}|\nabla u|^2 + \frac{\partial}{\partial t}\,\frac{2}{k+1}\int_X e^{(k+1)u} \le \big(\|\mu\|_{L^\infty} + 2\|\alpha'\rho\|_{C^2}\big)\Big(\int_X e^{ku} + \int_X e^{(k-1)u}\Big). \eqno(5.6)$$
We can then apply the Sobolev inequality and iterate the bound. But the need for precise bounds in terms of M requires that we consider not just large time, but also small time. This is because some finite bound is automatic for small time from parabolicity, but this is not the case for the specific bound of the type stated in the lemma. For the lower bound, we need to use the upper bound to establish the preliminary integral estimate
$$\int_X e^{-u} \le 2\,C_0 M^{-1} \eqno(5.7)$$
again assuming to begin with that $\sup_X e^{-u(\cdot,t)} < \delta$ for all $t\in[0,T)$. Isolating the gradient, applying the Sobolev inequality, and iterating as before, starting from the above integral estimate, separately for the cases of small time and large time, we obtain
$$\sup_{X\times[0,T)}e^{-u} \le C_2 M^{-1} \eqno(5.8)$$
assuming that $\sup_X e^{-u(\cdot,t)} < \delta$ for all $t\in[0,T)$. The proof of Lemma 1 is then completed by the following argument. Start from $u(0) = \log M$ with M large enough so that the condition $\sup_X e^{-u(\cdot,0)} < \delta$ is satisfied. The estimates which have already been obtained imply that this condition is preserved as long as the flow exists, and that actually the better estimate (5.8) holds. Q.E.D.

5.1.2 A priori estimate for the torsion

The previous C⁰ estimate is the only one which relies on the formulation of the Anomaly flow as a scalar parabolic equation. From now on, the other estimates make use rather of the formulation (5.2) as a geometric flow of metrics.

Lemma 2 There exists $M_0 > 1$ with the following property. Let the flow (4.5) start from a constant function $u(0) = \log M$ with $M\ge M_0$. If
$$|\alpha'\,{\rm Ric}_\omega| \le 10^{-6} \eqno(5.9)$$
along the flow, then there exists $C_3 > 0$, depending only on $(Y,\hat\omega)$, $\rho$, $\mu$, and $\alpha'$, such that
$$|T|^2 \le C_3 M^{-1}. \eqno(5.10)$$

The proof is by the maximum principle applied to the function
$$G = \log|T|^2 - \Lambda\log\|\Omega\|_\omega \eqno(5.11)$$
with $\Lambda = 1 + 2^{-3}$.
Set
$$F^{p\bar q} = g^{p\bar q} + \alpha'\|\Omega\|_\omega^3\,\tilde\rho^{\,p\bar q} - \frac{\alpha'}{2}\big(R\,g^{p\bar q} - R^{p\bar q}\big) \eqno(5.12)$$
where the (1,1)-form $\tilde\rho^{\,p\bar q}$ is defined by picking out the coefficient of $u_{\bar qp}$ in the expansion
$$-\alpha'\,i\partial\bar\partial(e^{-u}\rho) = \big(\alpha' e^{-u}\tilde\rho^{\,p\bar q}u_{\bar qp} + \cdots\big)\,\frac{\hat\omega^2}{2!}. \eqno(5.13)$$
We can then show that, at a maximum point of G, we must have
$$0 \le F^{p\bar q}\partial_p\partial_{\bar q}G - \frac{1}{200}|T|^2 + C\,\|\Omega\|_\omega\Big(1 + \frac{\|\Omega\|_\omega^{1/2}}{|T|}\Big). \eqno(5.14)$$
If $|T| \le \|\Omega\|_\omega^{1/2}$, the desired estimate follows as a trivial consequence of Lemma 1. Otherwise, we obtain at a maximum point
$$|T|^2 \le 200\,C\,\|\Omega\|_\omega\Big(1 + \frac{\|\Omega\|_\omega^{1/2}}{|T|}\Big) \eqno(5.15)$$
which again implies the desired estimate.

5.1.3 A priori estimate for the curvature

Perhaps the most technically elaborate estimate is the following C² estimate:

Lemma 3 Start the flow with a constant function $u_0 = \log M$. There exists $M_0 > 1$ such that for every $M\ge M_0$, if
$$\|\Omega\|_\omega \le C_2 M, \qquad |T|^2 \le C_3 M^{-1} \eqno(5.16)$$
along the flow, then
$$|\alpha'\,{\rm Ric}_\omega| \le C_5 M^{-1/2} \eqno(5.17)$$
where $C_5$ depends only on $(Y,\hat\omega)$, $\rho$, $\mu$, and $\alpha'$. Here, $C_2$ and $C_3$ are the constants given in Lemmas 1 and 2, respectively.

This lemma is proved by considering the quantity
$$|\alpha'\,{\rm Ric}_\omega|^2 + \Theta|T|^2 \eqno(5.18)$$
where $\Theta$ is a suitable constant. This quantity starts from the value 0 at time t = 0. If we choose $\Theta$ to be
$$\Theta = \max\{4|\alpha'|^{-1},\ 8(\alpha')^2(C_4 + 1)\} \eqno(5.19)$$
then we can show that
$$\partial_t\big(|\alpha'\,{\rm Ric}_\omega|^2 + \Theta|T|^2\big) \le 0 \eqno(5.20)$$
the first time that it reaches the value $(2\Theta C_3 + 1)M^{-1}$. This implies that it never exceeds this value, and Lemma 3 is proved.

5.1.4 The long-time existence of the flow

A priori estimates for the derivatives of the curvature and of the torsion, in terms of suitable powers of M and of constants depending only on the geometric data, can be formulated and established in the same way, when $|\alpha'{\rm Ric}_\omega| \le \delta_1$ and $|T| \le \delta_2$ for some fixed small positive constants $\delta_1$ and $\delta_2$. Once the a priori estimates are available, the long-time existence of the flow can be established as follows. If we start from $u(0) = \log M$, we have $|\alpha'{\rm Ric}_\omega| = |T| = 0$ at time t = 0. So the set of times t for which $|\alpha'{\rm Ric}_\omega| \le \delta_1$, $|T| \le \delta_2$ on $X\times[0,t)$ is closed and not empty. By the a priori estimates, we obtain the even better estimates $|\alpha'{\rm Ric}_\omega| < C_5 M^{-1/2}$, $|T|^2 \le C_3 M^{-1}$, which show that it is also open. Thus we obtain estimates for all derivatives and all time, which implies the existence of the flow for all time.

5.1.5 The convergence of the flow

The preceding bounds imply the existence of a sequence of times for which the flow is convergent. To get the full convergence, we introduce
$$J(t) = \int_X v^2\,\frac{\hat\omega^2}{2!} \eqno(5.21)$$
with $v = \partial_t e^u$. Using the a priori estimates and the fact that v has average 0 at all times, we can show that
$$\frac{dJ(t)}{dt} \le -\eta\,J \eqno(5.22)$$
for some strictly positive constant $\eta$, if M is large enough. Thus v tends to 0 in $L^2$ norm, and it is not difficult, using the a priori estimates, to deduce that it then tends to 0 in $C^\infty$ norm.
The proof of Theorem 2 is complete.

5.2 The Anomaly Flow on hyperkähler fibrations over Riemann surfaces

Next we describe very recent work of T. Fei, Z. Huang, and S. Picard [34, 35]. Their set-up is certain hyperkähler fibrations over Riemann surfaces originating from works of Fei [31, 32, 33], which are themselves built on earlier constructions of Calabi [14] and Gray [49]. Let $\Sigma$ be a Riemann surface with a holomorphic map $\varphi: \Sigma\to\mathbf{CP}^1$ such that $\varphi^*\mathcal{O}(2) = K_\Sigma$. By pulling back sections of $\mathcal{O}(2)$, we obtain three holomorphic (1,0)-forms $\mu_1, \mu_2, \mu_3$. Let $\hat\omega$ be the metric proportional to $i\sum_{j=1}^3\mu_j\wedge\bar\mu_j$, normalized so that $\int_\Sigma\hat\omega = 1$. Next, take a flat 4-torus $(T^4, g)$, viewed as a hyperkähler manifold with complex structures I, J, K satisfying $I^2 = J^2 = K^2 = -1$ and $IJ = K$. Let $\omega_I, \omega_J, \omega_K$ be the corresponding Kähler forms. For any point $(\alpha,\beta,\gamma)\in S^2$, there is a compatible complex structure $\alpha I + \beta J + \gamma K$ on $T^4$. Let now $X = \Sigma\times T^4$, with the complex structure $j_\Sigma\oplus(\alpha I + \beta J + \gamma K)$, where we have written $\varphi = (\alpha,\beta,\gamma)$. Then X is a non-Kähler complex manifold with trivial canonical bundle, and nowhere vanishing holomorphic (3,0)-form given by
$$\Omega = \mu_1\wedge\omega_I + \mu_2\wedge\omega_J + \mu_3\wedge\omega_K$$
Furthermore, for any real function $f\in C^\infty(\Sigma)$, as shown in [31, 32], the (1,1)-form
$$\omega_f = e^{2f}\hat\omega + e^f\omega', \qquad \omega' = \alpha\omega_I + \beta\omega_J + \gamma\omega_K \eqno(5.23)$$
is a conformally balanced metric on X. Furthermore, $\|\Omega\|_{\omega_f} = e^{-2f}$ and
$$\|\Omega\|_{\omega_f}\,\omega_f^2 = 2\,{\rm vol}_{T^4} + 2e^f\,\hat\omega\wedge\omega'. \eqno(5.24)$$
The Anomaly flow on X preserves the above ansatz, and reduces to the following conformal flow on $\Sigma$ for a metric $\omega(t)$,
$$\partial_t e^f = \frac{1}{2}\Big[\hat g^{z\bar z}\partial_z\partial_{\bar z}\Big(e^f + \frac{\alpha'}{2}\kappa e^{-f}\Big) - \kappa\Big(e^f + \frac{\alpha'}{2}\kappa e^{-f}\Big)\Big] \eqno(5.25)$$
where $\kappa$ is the Gauss curvature of $\hat\omega$. Explicitly, $\kappa = -\varphi^*\omega_{FS}/\hat\omega$, and $\kappa$ is always < 0, except at the branch points of $\varphi$, where it vanishes. Despite the extent to which Riemann surfaces have been studied, the flow (5.25) does not seem to have appeared before. It may be useful for future investigations to have the following more intrinsic formulation of it,
$$\partial_t\omega = \frac{\omega}{|\mu|^2}\big(-R + |\nabla\log|\mu|^2|^2\big) + \frac{\alpha'}{2}\big(\Delta|\nabla\varphi|^2 - |\nabla\varphi|^4\big) \eqno(5.26)$$
where all norms and operators are taken with respect to the evolving metric $\omega(t)$. The following criterion for the long-time existence of the flow in this setting was proved in [35]:

Theorem 3 Assume that $\alpha' > 0$. Suppose that a solution to (5.25) exists on $[0,T)$. If
$$\sup_{\Sigma\times[0,T)}\|\Omega\|_{\omega_f} = \sup_{\Sigma\times[0,T)}e^{-2f} < \infty, \eqno(5.27)$$
then the flow can be extended to $[0, T+\epsilon)$ for some $\epsilon > 0$.

The proof is short and can be given here. Let the function u be defined by
$$u = e^f + \frac{\alpha'}{2}\kappa e^{-f}. \eqno(5.28)$$
Then u evolves by
$$\partial_t u = \frac{1}{2}\Big(1 - \frac{\alpha'}{2}\kappa e^{-2f}\Big)\big(\Delta_{\hat\omega}u - \kappa u\big). \eqno(5.29)$$
Since $\alpha' > 0$, $\kappa \le 0$, and $e^{-f} \le C$, this is a parabolic equation for u with bounded coefficients.
We may therefore apply the theorem of Krylov-Safonov [62] to obtain parabolic Hölder estimates for u. Since $2e^f = u + \sqrt{u^2 - 2\alpha'\kappa}$, we may deduce Hölder estimates for $e^f$ and $e^{-f}$. Applying parabolic Schauder theory and a bootstrap argument, we obtain higher order estimates for $e^f$ which allow us to extend the flow past T. Q.E.D.

However, it may happen that $e^{-f}\to\infty$ as $t\to T$ for some finite time T, and that the flow terminates in finite time. Indeed, integrating (5.25) gives
$$\frac{d}{dt}\int_\Sigma e^f\hat\omega = \frac{1}{2}\int_\Sigma(-\kappa)e^f\hat\omega - \frac{\alpha'}{4}\int_\Sigma\kappa^2 e^{-f}\hat\omega. \eqno(5.30)$$
Applying the Cauchy-Schwarz inequality,
$$\frac{d}{dt}\int_\Sigma e^f\hat\omega \le \frac{\|\kappa\|_{L^\infty(\Sigma)}}{2}\int_\Sigma e^f\hat\omega - \frac{\alpha'}{4}\Big(\int_\Sigma(-\kappa)\hat\omega\Big)^2\Big(\int_\Sigma e^f\hat\omega\Big)^{-1}. \eqno(5.31)$$
Since the genus is $g\ge 3$ for this construction, the Gauss-Bonnet theorem gives $\int_\Sigma(-\kappa)\hat\omega > 0$. By studying this ODE for $\int_\Sigma e^f\hat\omega$, we see that if $\int_\Sigma e^f\hat\omega$ is too small initially, then it will collapse to zero in finite time. This provides an example of the Anomaly flow on $[0,T)$ with $T < \infty$ where $\|\Omega\|_{\omega_f} = e^{-2f}\to\infty$ as $t\to T$.
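The dichotomy can be read off from a model ODE. The following small sketch (ours, not from [35]; the constants a and b are stand-ins for $\|\kappa\|_{L^\infty}/2$ and $\frac{\alpha'}{4}(\int_\Sigma(-\kappa)\hat\omega)^2$) integrates the borderline case $y' = ay - b/y$ of (5.31), whose exact solution satisfies $y(t)^2 = (y(0)^2 - b/a)e^{2at} + b/a$: y collapses to 0 in finite time precisely when $y(0)^2 < b/a$, and grows exponentially otherwise.

    # Model ODE for y(t) = ∫_Σ e^f ω̂ behind the collapse mechanism of (5.31).
    # (Ours, illustrative only; a, b > 0 are model constants.)
    import numpy as np

    a, b, dt = 0.5, 1.0, 1e-4      # threshold y* = sqrt(b/a) = sqrt(2)

    def run(y0):
        y, t = y0, 0.0
        while y > 1e-6 and t < 20.0:   # stop at collapse or at a fixed horizon
            y += dt * (a * y - b / y)  # explicit Euler step
            t += dt
        return t, y

    print(run(1.2))   # y0 < sqrt(2): collapses in finite time (t ~ 1.27)
    print(run(1.6))   # y0 > sqrt(2): survives to the horizon and keeps growing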
At the other end, it was proved in [35] that for initial data with $e^f$ large, the flow exists for all time, and collapses the fibers in the limit:

Theorem 4 Assume that $e^f\gg 1$ at the initial time. Then the flow exists for all time, and
$$\frac{\omega_f}{\int_X\|\Omega\|_{\omega_f}\,\frac{\omega_f^3}{3!}} \to p^*\big(q_1^2\,\hat\omega\big)$$
where $q_1$ is the first eigenfunction of the operator $-\Delta_{\hat\omega} - |\nabla\varphi|^2_{\hat\omega}$. Furthermore,
$$\Big(X,\ \frac{\omega_f}{\int_X\|\Omega\|_{\omega_f}\,\frac{\omega_f^3}{3!}}\Big) \to \big(\Sigma,\ q_1^2\,\hat\omega\big) \quad\hbox{in Gromov-Hausdorff topology.}$$

The two regions of initial data, $e^f$ small or large, leave out the particularly interesting region of intermediate $e^f$. It is actually in this intermediate region that Fei, Huang, and Picard [34] found infinitely many topologically distinct examples, which is remarkable, as the analogous statement has been conjectured for Calabi-Yau manifolds, but has not been proved as yet. When comparing the Anomaly flow to the Kähler-Ricci flow in section 4.3, we noted that the Anomaly flow preserves the (2,2) de Rham cohomology class $[\|\Omega\|_\omega\,\omega^2]$ (when $c_2(X) = c_2(E)$), but that this condition is much weaker than the preservation of the (1,1) Kähler class as in the case of the Kähler-Ricci flow. Nevertheless, it can give important information, as we shall now discuss in the present case. Taking the de Rham class of (5.24) yields
$$\frac{1}{2}\big[\|\Omega\|_{\omega_f}\,\omega_f^2\big] = [{\rm vol}_{T^4}] + [e^f\alpha\,\hat\omega]\,[\omega_I] + [e^f\beta\,\hat\omega]\,[\omega_J] + [e^f\gamma\,\hat\omega]\,[\omega_K]. \eqno(5.32)$$
Therefore $[\|\Omega\|_{\omega_f}\,\omega_f^2]\in H^4(X,\mathbf{R})$ is parametrized by the vector
$$V = \Big(\int_\Sigma e^f\alpha\,\hat\omega,\ \int_\Sigma e^f\beta\,\hat\omega,\ \int_\Sigma e^f\gamma\,\hat\omega\Big). \eqno(5.33)$$
Since the class $[\|\Omega\|_{\omega_f}\,\omega_f^2]$ is preserved along the Anomaly flow, the vector V is constant along the flow, and since $(\alpha,\beta,\gamma)\in S^2$, we have
$$\int_\Sigma e^f\hat\omega \ge |V|. \eqno(5.34)$$
This is a bound from below for the flow in terms of cohomological data. It gives more refined information about the finite time singularity obtained when $\int_\Sigma e^f\hat\omega\ll 1$. Since the ODE (5.31) forces $\int_\Sigma e^f\hat\omega\to 0$ in finite time, we see that the long-time existence criterion must fail first, and $e^{-2f} = \|\Omega\|_{\omega_f}\to\infty$ as $t\to T$ on a set of measure zero. Finally, we note that the flow admits an energy functional
$$I(u) = \int_\Sigma|\nabla u|^2_{\hat\omega}\,\hat\omega + \int_\Sigma\kappa u^2\,\hat\omega, \eqno(5.35)$$
which is monotone decreasing for arbitrary initial data. This functional was used in [35] to prove the convergence of the normalized solution in the regime $u\ge 0$. It is possible that the existence of solutions to the Hull-Strominger system may be tied to some stability condition; in this case, the functional I(u) may turn out to be useful. We note that [34] introduced a condition which they call the "hemisphere condition", which may also be related to some notion of stability.

5.3 The case of unimodular Lie groups

Next, we consider the Anomaly flow on unimodular Lie groups. This is a natural setting to consider, since any left-invariant metric will be conformally balanced. The left-invariance property also reduces the Anomaly flow to a system of ordinary differential equations. This allows us to tackle the first case when the Anomaly flow is a genuine system instead of just a scalar equation, and it is then easier to trace the effects of a non-zero parameter $\alpha'$. An interesting feature of the Hull-Strominger equations will emerge, namely that solutions may require a different connection, on the Gauduchon line of connections associated to a Hermitian metric, than the Chern unitary connection. This had been anticipated in the physics literature [5, 9, 37, 56], and was worked out explicitly by Fei and Yau [36] for stationary points. The only caveat is that we shall be dealing with non-compact settings, which may not behave exactly in the same way as the compact case. Consider left-invariant metrics on a 3-dimensional complex Lie group X. Let $\{e_a\}$ be a basis of left-invariant holomorphic vector fields on X, with structure constants $c^d{}_{ab}$,
$$[e_a, e_b] = \sum_d e_d\,c^d{}_{ab}$$
Then, denoting by $\{e^a\}$ the dual frame of (1,0)-forms, $\Omega = e^1\wedge e^2\wedge e^3$ is a holomorphic nowhere vanishing (3,0)-form on X, and if X is unimodular in the sense that $\sum_d c^d{}_{db} = 0$ for any b, then any metric $\omega = i g_{\bar ab}\,e^b\wedge\bar e^a$ is balanced [1]. The unimodular complex Lie groups in dimension 3 are $\mathbf{C}^3$, the Heisenberg group, the group of rigid motions of $\mathbf{R}^2$, and ${\rm SL}(2,\mathbf{C})$. Because $\omega$ is not necessarily Kähler, there are many natural unitary connections associated to it. The most familiar one is the Chern unitary connection, defined earlier. But other unitary connections $\nabla^{(\kappa)}$, forming the Gauduchon line, can be defined by
$$\nabla^{(\kappa)}_j V^k = \nabla_j V^k - \kappa\,T^k{}_{jm}V^m$$
Here $T = i\partial\omega$ is the torsion. The connections with $\kappa = 1$ and $\kappa = 1/2$ are known respectively as the Bismut connection and the Lichnerowicz connection. The following theorem was proved in [80]:

Theorem 5 Set $\tau = 2\kappa^2(2\kappa - 1)$, and assume that $\alpha'\tau > 0$. Then
(a) When $X = \mathbf{C}^3$, every metric is a stationary point, and hence the flow is stationary.
(b) When X is nilpotent, there is no stationary point, and hence the flow can never converge.
If the initial metric is diagonal, the metric remains diagonal along the flow, the lowest eigenvalue remains constant, while the other two eigenvalues tend to $\infty$ at a constant rate.
(c) When X is solvable, the stationary points of the flow are precisely the metrics with $g_{\bar 12} = g_{\bar 21} = 0$, $\alpha'\tau\,g^{3\bar 3} = 1$. The flow is asymptotically unstable near any stationary point. However, the condition $g_{\bar 12} = g_{\bar 21} = 0$ is preserved along the flow, and the flow with any initial metric satisfying this condition converges to a stationary point.
(d) When $X = {\rm SL}(2,\mathbf{C})$, there is a unique stationary point, given by the diagonal metric
$$g_{\bar ab} = \frac{\alpha'\tau}{2}\,\delta_{ab}$$
The linearization of the flow at the fixed point admits both positive and negative eigenvalues, and hence the flow is asymptotically unstable.

The details can be found in [80]. Here we shall stress only that underlying all this is the remarkable fact, worked out by Fei and Yau, that while Rm is not a (1,1)-form for general $\nabla^{(\kappa)}$ connections, ${\rm Tr}(Rm\wedge Rm)$ is a (2,2)-form. More precisely,
$${\rm Tr}(Rm\wedge Rm)_{\bar k\bar\ell ij} = \tau\,c^r{}_{k\ell}\,c^s{}_{rp}\,c^q{}_{ij}\,c^s{}_{qp}$$
We should also say that the equations appear complicated; for example, in the case of the group of rigid motions they are given by
$$\partial_t g_{\bar 11} = \frac{1}{2}\|\Omega\|\,g^{3\bar 3}g_{\bar 11}\Big(1 - \frac{\alpha'\tau}{4}g^{3\bar 3}\Big), \qquad \partial_t g_{\bar 22} = \frac{1}{2}\|\Omega\|\,g^{3\bar 3}g_{\bar 22}\Big(1 - \frac{\alpha'\tau}{4}g^{3\bar 3}\Big), \quad\hbox{etc.}$$
and in the case of ${\rm SL}(2,\mathbf{C})$ by
$$\partial_t\lambda_1 = \frac{(\lambda_1\lambda_2\lambda_3)^{1/2}}{2}\left\{\frac{\alpha'\tau}{4}\Big(\frac{2}{\lambda_1} + \frac{\lambda_1}{\lambda_2^2} + \frac{\lambda_1}{\lambda_3^2}\Big) - \frac{\lambda_2}{\lambda_3} - \frac{\lambda_3}{\lambda_2}\right\}$$
with similar equations for $\lambda_2$ and $\lambda_3$. But they reveal a lot of structure upon closer inspection, and this leads to the above detailed information on the phase diagrams of the corresponding dynamical systems.
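The ${\rm SL}(2,\mathbf{C})$ system is easy to explore numerically. The sketch below (ours, not from [80]; we assume the equations for $\lambda_2, \lambda_3$ are the cyclic permutations of the displayed one, as the text indicates) perturbs the fixed point $\lambda_i = \alpha'\tau/2$ and integrates the system; the perturbation grows, consistent with the asymptotic instability asserted in (d):

    # Eigenvalue ODEs of the Anomaly flow on SL(2,C) (ours, illustrative).
    # With ap_tau = α'τ = 2, the fixed point is λ_i = 1.
    import numpy as np

    ap_tau = 2.0

    def rhs(l):
        out = np.empty(3)
        for i in range(3):
            j, k = (i + 1) % 3, (i + 2) % 3
            out[i] = 0.5 * np.sqrt(l[0] * l[1] * l[2]) * (
                0.25 * ap_tau * (2.0 / l[i] + l[i] / l[j]**2 + l[i] / l[k]**2)
                - l[j] / l[k] - l[k] / l[j])
        return out

    lam = np.array([1.0 + 1e-3, 1.0 - 1e-3, 1.0])   # perturbed fixed point
    dt = 1e-3
    for n in range(8001):
        if n % 2000 == 0:
            print(n * dt, lam, np.abs(lam - 1.0).max())  # deviation grows
        lam = lam + dt * rhs(lam)                        # explicit Euler

One can check directly that the right hand side vanishes at $\lambda_1 = \lambda_2 = \lambda_3 = \alpha'\tau/2$, and that the perturbation chosen above lies along an unstable mode of the linearization.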
6 A Flow of Balanced Metrics with Kähler Fixed Points

The special case $\alpha' = 0$ of the Anomaly flow may actually be of independent geometric interest as well. It is given by $\partial_t(\|\Omega\|_\omega\,\omega^2) = i\partial\bar\partial\omega$ for 3-folds X, and a natural generalization to arbitrary dimension $n\ge 3$ can be defined by [82]
$$\partial_t\big(\|\Omega\|_\omega\,\omega^{n-1}\big) = i\partial\bar\partial\omega^{n-2}. \eqno(6.1)$$
Here X is assumed to be a compact complex manifold of dimension n which admits a nowhere vanishing holomorphic (n,0)-form $\Omega$, and the initial metric $\omega(0)$ is assumed to be conformally balanced, $d(\|\Omega\|_{\omega(0)}\,\omega^{n-1}(0)) = 0$.

6.1 Formulation as a flow of metrics

As in the case of 3 dimensions, the above flow of $(n-1,n-1)$-forms admits a useful reformulation as a flow of metrics, which was given in [82]:

Theorem 6 Consider the flow (6.1) with X, $\Omega$, and $\omega(0)$ as described above. Set $\omega = i g_{\bar kj}\,dz^j\wedge d\bar z^k$. Then the evolution of $g_{\bar kj}$ is given by
(a) When n = 3,
$$\partial_t g_{\bar kj} = \frac{1}{2\|\Omega\|_\omega}\Big[-\tilde R_{\bar kj} + g^{m\bar\ell}g^{s\bar r}T_{\bar rmj}\bar T_{s\bar\ell\bar k}\Big] \eqno(6.2)$$
(b) When $n\ge 4$,
$$\partial_t g_{\bar kj} = \frac{1}{(n-1)\|\Omega\|_\omega}\Big[-\tilde R_{\bar kj} + \frac{1}{2(n-2)}\big(|T|^2 - 2|\tau|^2\big)g_{\bar kj} - \frac{1}{2}g^{q\bar p}g^{s\bar r}T_{\bar kqs}\bar T_{j\bar p\bar r} + g^{s\bar r}\big(T_{\bar kjs}\bar T_{\bar r} + T_s\bar T_{j\bar k\bar r}\big) + T_j\bar T_{\bar k}\Big]. \eqno(6.3)$$

Recall that the torsion terms are defined by $T = i\partial\omega$, $T_\ell = g^{j\bar k}T_{\bar kj\ell}$, and we have introduced the (1,0)-form $\tau = T_\ell\,dz^\ell$. The reason that the flow (6.1) may be interesting is that its stationary points are Ricci-flat Kähler metrics. In fact, the stationary points of the flow satisfy the so-called astheno-Kähler condition $i\partial\bar\partial\omega^{n-2} = 0$ introduced in [60]. The fact that conformally balanced, astheno-Kähler metrics must be Kähler had been shown earlier by Matsuo-Takahashi [66] and Fino-Tomassini [39]. With the above formulas (6.2) and (6.3) for the flow (6.1), it is easy to give an independent proof of this fact. We give the proof when $n\ge 4$; the same argument works for n = 3 and is even simpler. At a stationary point, $\partial_t g_{\bar kj} = 0$, and taking the trace of the right hand side of (6.3) gives
$$0 = -\tilde R + \frac{n}{2(n-2)}\big(|T|^2 - 2|\tau|^2\big) - \frac{1}{2}|T|^2 + 3|\tau|^2$$
But for conformally balanced metrics, $\tilde R = R = \Delta\log\|\Omega\|^2_\omega$. Simplifying yields
$$(n-2)\,\Delta\log\|\Omega\|^2_\omega = |T|^2 + 2(n-3)|\tau|^2 \ge 0$$
Thus $\log\|\Omega\|^2_\omega$ is constant by the maximum principle, $|T|^2 = 0$, and $\omega$ is Kähler and Ricci-flat.

6.2 A convergence theorem

It is a well-known and still open question in complex geometry to determine when a balanced or conformally balanced complex manifold X is actually Kähler. The flow (6.1) appears particularly well-suited to address this question when $c_1(X) = 0$. In particular, from the above discussion, the Kähler property is a necessary condition for the convergence of the flow. At this moment, we can prove a partial converse [82]:

Theorem 7 Assume that the initial data $\omega(0)$ satisfies $\|\Omega\|_{\omega(0)}\,\omega(0)^{n-1} = \hat\chi^{n-1}$, where $\hat\chi$ is a Kähler metric. Then the flow exists for all time $t > 0$, and as $t\to\infty$, the solution $\omega(t)$ converges smoothly to a Kähler, Ricci-flat metric $\omega_\infty$ satisfying
$$\omega_\infty = \|\Omega\|^{-2/(n-2)}_{\chi_\infty}\,\chi_\infty$$
where $\chi_\infty$ is the unique Kähler Ricci-flat metric in the cohomology class $[\hat\chi]$, and $\|\Omega\|_{\chi_\infty}$ is an explicit constant.

It is instructive to examine the corresponding PDE in greater detail. The key starting point is that, in this special case, we can actually identify an evolution of (1,1)-cohomology classes. Define the smooth function $f\in C^\infty(X,\mathbf{R})$ by
$$e^{-f} = \frac{1}{n-1}\,\|\Omega\|^{-2}_{\hat\chi}$$
Let $t\to\varphi(t)$ be the following Monge-Ampère type flow
$$\partial_t\varphi = e^{-f}\,\frac{(\hat\chi + i\partial\bar\partial\varphi)^n}{\hat\chi^n}, \qquad \varphi(x,0) = 0,$$
subject to the plurisubharmonicity condition $\hat\chi + i\partial\bar\partial\varphi > 0$.
A solution $t \to \varphi(t)$ of this scalar flow recovers the Anomaly flow: $\omega(t)$, defined by
$$\|\Omega\|_{\omega(t)}\,\omega^{n-1}(t) = \chi^{n-1}(t) \quad \text{with} \quad \chi(t) = \hat\chi + i\partial\bar\partial\varphi(t) > 0, \tag{6.4}$$
is precisely the solution of the Anomaly flow. We note the similarity with the Kähler-Ricci flow and with the flow recently introduced by Collins et al [26], except that we have here the Monge-Ampère determinant itself, instead of its log or its inverse. In this respect, the Anomaly flow with $\alpha' = 0$ can be viewed more as the complex analogue of the inverse Gauss curvature flow, which for convex bodies can be expressed as
$$\partial_t u = \frac{\det(u\, g_{ij} + \nabla_i\nabla_j u)}{\det g_{ij}}, \qquad u(x,0) = u_0(x) > 0,$$
where $u$ is the support function $u: S^n\times[0,T)\to\mathbf{R}$ defined by $u(N,t) = \langle P, N\rangle$, where $P \in M_t$ is the point on $M_t$ with normal $N$, and $(S^n, g_{ij})$ is the standard sphere. But unlike the Kähler-Ricci flow, the equation is not concave. Thus many well-known methods from the theory of non-linear parabolic partial differential equations, including those recently developed in [85], do not apply. Rather, the equation has to be studied on its own. We describe now some of the main steps.

6.2.1 Estimates for $\partial_t\varphi$

Despite the difficulties we have noted, the flow has a very important property, which is that $\partial_t\varphi \equiv \dot\varphi$, as well as the oscillation of $\varphi$, can be controlled at all times. To see this, let $L$ be the linear elliptic second order operator defined by
$$L = e^{-f}\,\frac{(\hat\chi + i\partial\bar\partial\varphi)^n}{\hat\chi^n}\,\chi^{j\bar k}\,\hat\nabla_j\hat\nabla_{\bar k}, \tag{6.5}$$
where we have denoted by $\hat\nabla$ the connection with respect to the reference metric $\hat\chi$. Then differentiating the flow gives
$$(\partial_t - L)\,\dot\varphi = 0 \tag{6.6}$$
and hence, by the maximum principle,
$$\inf_X \dot\varphi(0) \leq \dot\varphi \leq \sup_X \dot\varphi(0), \tag{6.7}$$
which shows that $\dot\varphi$ is uniformly bounded for all times. Next, we can rewrite the flow as a Monge-Ampère equation with bounded right hand side,
$$(\hat\chi + i\partial\bar\partial\varphi)^n = e^f\,\dot\varphi\,\hat\chi^n. \tag{6.8}$$
Yau's $C^0$ estimate then gives a uniform bound on the oscillation $\sup_X\varphi - \inf_X\varphi$.

6.2.2 The $C^2$ estimate

The key estimate is the following $C^2$ estimate: let $\varphi$ be a solution of (6.1) on $X\times[0,T]$ satisfying the positivity condition (6.4). Then there exist constants $C, A > 0$ depending only on $X$, $\hat\chi$, and $f$ such that
$$\Delta_{\hat\chi}\varphi(x,t) \leq C\, e^{A(\tilde\varphi(x,t) - \inf_{X\times[0,T]}\tilde\varphi)} \tag{6.9}$$
for all $(x,t) \in X\times[0,T]$. Here $\tilde\varphi$ is defined by
$$\tilde\varphi(x,t) = \varphi(x,t) - \frac{1}{V}\int_X \varphi(\cdot,t)\,\hat\chi^n, \qquad V = \int_X \hat\chi^n. \tag{6.10}$$
The proof is by the maximum principle applied to the following test function
$$G(x,t) = \log \mathrm{Tr}\, h - A\tilde\varphi + \frac{B}{2}\Big[\frac{(\hat\chi + i\partial\bar\partial\varphi)^n}{\hat\chi^n}\Big]^2, \tag{6.11}$$
where $h^j{}_k = \hat\chi^{j\bar p}\chi_{\bar pk}$, and $A, B > 0$ are suitable constants. We note the extra term $B[(\hat\chi + i\partial\bar\partial\varphi)^n/\hat\chi^n]^2/2$, which does not appear in the standard test function used in the study of complex Monge-Ampère type equations or the Kähler-Ricci flow.
Indeed, this is the main innovation of this test function, and it is required in order to overcome the difficult terms caused by the lack of concavity of the flow, as well as the cross terms resulting from the conformal factor $e^{-f}$.

6.2.3 The $C^{2,\alpha}$ estimate

The lack of concavity is again a source of difficulties in obtaining $C^{2,\alpha}$ estimates, given the absence of an Evans-Krylov theorem. Here we adapt instead techniques for $C^{2,\alpha}$ estimates developed by B. Chow and D. Tsai [18, 95] (see also Andrews [6]) in the study of Gauss curvature type flows in convex geometry. This implies the long-time existence of the flow. Finally, the convergence of the flow can be established in several ways. One elegant way is to use the dilaton functional
$$M(\omega) = \int_X \|\Omega\|_\omega\,\omega^n \tag{6.12}$$
introduced by Garcia-Fernandez et al [46]. This functional can be shown to be monotone decreasing for the flow (6.1) with conformally Kähler initial data, with $dM(t)/dt \to 0$ as $t \to \infty$. This completes the proof of Theorem 7.

6.3 Singularities and divergence

Since the convergence of the flow (6.1) is intimately related to the question of whether the underlying conformally balanced manifold $X$ is actually Kähler, it would be very valuable to find geometric criteria for when it develops singularities and/or diverges. An elementary example is provided by the case of Calabi-Eckmann-Goldstein-Prokushkin fibrations with $\alpha' = 0$. If $\omega_1$ and $\omega_2$ are the two harmonic forms used in the construction of the toric fibration $\pi: X \to Y$, and $\omega_u$ is the metric (5.1), then $i\partial\bar\partial\omega_u = (\Delta_{\hat\omega} e^u + \|\omega_1\|^2_{\hat\omega} + \|\omega_2\|^2_{\hat\omega})\,\hat\omega^2/2$, so that the Anomaly flow $\partial_t(\|\Omega\|_{\omega_u}\omega_u^2) = i\partial\bar\partial\omega_u$ becomes
$$\partial_t e^u = \frac12\big(\Delta_{\hat\omega} e^u + \|\omega_1\|^2_{\hat\omega} + \|\omega_2\|^2_{\hat\omega}\big). \tag{6.13}$$
It follows immediately that
$$\partial_t\Big(\int_Y e^u\,\hat\omega^2\Big) = \int_Y \big(\|\omega_1\|^2_{\hat\omega} + \|\omega_2\|^2_{\hat\omega}\big)\,\frac{\hat\omega^2}{2} > 0. \tag{6.14}$$
Thus the flow cannot converge unless $\omega_1 = \omega_2 = 0$, which is exactly when the fibration $\pi$ becomes the trivial product $Y\times T$, which is indeed Kähler.

The study of the formation of singularities in a flow begins with the identification of which geometric quantity must blow up in order for the flow not to extend further. For the original Anomaly flow in dimension 3, the following theorem was established in [78]:

Theorem 8 Assume that $n = 3$ and we have a solution of the Anomaly flow (4.5) with $\alpha' = 0$ (or (6.2)) on the time interval $[0, \frac{1}{A}]$. Then for all non-negative integers $k$, there exists a constant $C_k$, depending on a uniform lower bound for $\|\Omega\|_\omega$, such that, if
$$|Rm|_\omega + |DT|_\omega + |T|^2_\omega \leq A, \qquad (z,t) \in X\times[0,\tfrac{1}{A}], \tag{6.15}$$
then
$$|D^k Rm(z,t)|_\omega + |D^{k+1} T(z,t)|_\omega \leq \frac{C_k A}{t^{k/2}}, \qquad (z,t) \in X\times(0,\tfrac{1}{A}]. \tag{6.16}$$
In particular, the flow will exist for all time, unless there is a time $T > 0$ and a sequence $(z_j, t_j)$ with $t_j \to T$ such that either $\|\Omega(z_j,t_j)\|_{\omega(t_j)} \to 0$, or
$$\big(|Rm|^2_\omega + |DT|^2_\omega + |T|^2_\omega\big)(z_j,t_j) \to \infty. \tag{6.17}$$
Indeed, the above theorem holds for the flow (6.2) in any dimension $n \geq 3$.
We expect this theorem to generalize to the flow (6.1) for arbitrary $n \geq 3$.

7 More Speculative Flows

We conclude with a brief discussion of some more speculative flows.

7.1 Flow of balanced metrics with source term

In §6, we investigated the flow (6.1) for conformally balanced metrics, which is just the higher dimensional generalization of the Anomaly flow (4.1) with zero slope parameter $\alpha'$. We can also consider a similar flow with source term
$$\partial_t(\|\Omega\|_\omega\,\omega^2) = i\partial\bar\partial\omega - \Upsilon, \tag{7.1}$$
where $\Upsilon$ is a given $i\partial\bar\partial$-exact $(2,2)$-form on the compact complex 3-fold $X$. Indeed, this flow provides a natural approach to a question raised by Garcia-Fernandez [45], namely, whether for any given real $i\partial\bar\partial$-exact $(2,2)$-form $\Upsilon$ on a compact conformally balanced 3-fold $(X, \Omega, \omega_0)$, there exists another conformally balanced metric $\omega$ with
$$i\partial\bar\partial\omega = \Upsilon \quad \text{and} \quad [\|\Omega\|_\omega\,\omega^2] = [\|\Omega\|_{\omega_0}\,\omega_0^2]\,?$$
This question is itself closely related to a conjecture on the solvability of the Hull-Strominger system, raised by Yau. Thus the long time behavior and convergence of the flow (7.1) is of particular interest.

7.2 The Anomaly flow in arbitrary dimension

The original motivation for the Hull-Strominger system, and hence for the Anomaly flow, was the compactification of the 10-dimensional heterotic string to 4-dimensional spacetimes, and thus the internal complex manifold $X$ must have complex dimension 3. The Green-Schwarz anomaly cancellation mechanism also made use specifically of the second Chern classes of $T^{1,0}(X)$ and of the bundle $E \to X$, which appear in (3.1). However, from the pure mathematical standpoint, the Anomaly flow seems to fit naturally into a family of flows which can be defined in arbitrary dimension $n$, given by
$$\partial_t(\|\Omega\|_\omega\,\omega^{n-1}) = i\partial\bar\partial\omega^{n-2} - \alpha'\big(c_{n-1}(Rm) - \Phi_{n-1}(t)\big), \tag{7.2}$$
where $c_{n-1}(Rm) = \mathrm{Tr}(Rm\wedge\cdots\wedge Rm)$ is the $(n-1)$-th Chern form of $T^{1,0}(X)$, and $\Phi_{n-1}(t)$ is a given flow of closed $(n-1,n-1)$-forms, for example $\Phi_{n-1}(t) = c_{n-1}(F(t))$, where $F(t)$ is the curvature of a flow of Hermitian metrics $H(t)$. Just as the Hermitian-Yang-Mills equations originally appeared in dimension 4 but proved to be mathematically important in arbitrary dimensions, we expect that these flows will be interesting for $n > 3$ as well. We note that other generalizations of the Hull-Strominger system to arbitrary dimensions have been proposed earlier in [43, 44]. Restricted to Calabi-Eckmann-Goldstein-Prokushkin fibrations, they lead to the following interesting $n$-dimensional generalizations of the equation studied by Fu and Yau [43, 44], called the Fu-Yau equation,
$$i\partial\bar\partial(e^u\hat\omega - \alpha' e^{-u}\rho)\wedge\hat\omega^{n-2} + \alpha'\, i\partial\bar\partial u\wedge i\partial\bar\partial u\wedge\hat\omega^{n-2} + \mu = 0, \tag{7.3}$$
where $\mu$ is a given smooth $(n,n)$-form. When $n = 2$, the above equation reduces to the right hand side of equation (5.3) and was studied by Fu and Yau using elliptic methods in [43, 44]. The general dimensional case of (7.3) has been subsequently studied in [74, 77, 81, 24, 25].
They are of Hessian type, and suggest interesting questions about a priori estimates for such equations, some of which have been addressed in [28, 51, 52, 76, 77].

7.3 Flows without the form $\Omega$

For some questions, e.g. the relation between the balanced cone and the Kähler cone [42], it would be desirable to eliminate the assumption of the existence of a non-vanishing holomorphic form $\Omega$. Thus it is tempting to consider the flow $\partial_t\omega^{n-1} = i\partial\bar\partial\omega^{n-2}$ on an $n$-dimensional complex manifold $X$, for initial data $\omega(0)$ which are balanced, i.e. $d\omega(0)^{n-1} = 0$. The stationary points are still exactly the Kähler metrics, although they are not necessarily Ricci-flat. Explicitly, we can still write it in terms of metrics,
$$\partial_t g_{\bar kj} = -\frac{1}{n-1}\nabla^m T_{\bar kjm} + \frac{1}{2(n-1)}\Big[-g^{q\bar p}g^{s\bar r}\,T_{\bar kqs}\bar T_{j\bar p\bar r} + \frac{|T|^2}{n-1}\,g_{\bar kj}\Big]. \tag{7.4}$$
Compared to the expression (6.3) for the Anomaly flow, we see that the key term $\tilde R_{\bar kj}$ has cancelled out, and the flow is not parabolic. The lack of parabolicity poses severe problems, and it is unclear whether this flow can be used in any way. It would be very interesting if one could find a flow of balanced metrics with Kähler metrics as fixed points, but which is parabolic. In this context, we note that another flow of balanced metrics has been introduced by Bedulli and Vezzoni in [8], based on a generalization of the Calabi flow, and its short-time existence was established there."
},
{
"url": "http://arxiv.org/abs/1805.01029v2",
"title": "A flow of conformally balanced metrics with K\u00e4hler fixed points",
"abstract": "While the Anomaly flow was originally motivated by string theory, its zero\nslope case is potentially of considerable interest in non-Kahler geometry, as\nit is a flow of conformally balanced metrics whose stationary points are\nprecisely Kahler metrics. We establish its convergence on Kahler manifolds for\nsuitable initial data. We also discuss its relation to some current problems in\ncomplex geometry.",
"authors": "Duong H. Phong, Sebastien Picard, Xiangwen Zhang",
"published": "2018-05-02",
"updated": "2018-05-23",
"primary_cat": "math.DG",
"cats": [
"math.DG"
],
"main_content": "Introduction

The main purpose of this paper is to study a geometric flow which is of potential interest from several viewpoints, including mathematical physics, non-Kähler complex geometry, and the theory of non-linear partial differential equations. Let $X$ be a compact $n$-dimensional complex manifold, which admits a non-vanishing holomorphic $(n,0)$-form $\Omega$. The flow which we shall consider is the flow $t \to \omega(t)$ of Hermitian metrics defined by
$$\partial_t(\|\Omega\|_\omega\,\omega^{n-1}) = i\partial\bar\partial\omega^{n-2} \tag{1.1}$$
with an initial data $\omega_0$ which is conformally balanced, in the sense that
$$d(\|\Omega\|_{\omega_0}\,\omega_0^{n-1}) = 0. \tag{1.2}$$
Here $\|\Omega\|_{\omega_0}$ is the norm of $\Omega$, defined by $\|\Omega\|^2_{\omega_0} = i^{n^2} n!\,\Omega\wedge\bar\Omega\,\omega_0^{-n}$. The flow (1.1) is a generalization to arbitrary dimension, with the slope parameter $\alpha'$ set to 0, of the Anomaly flow introduced in [56] for $n = 3$.
This flow provides a systematic approach for solving the Hull-Strominger system [38, 39, 65] for supersymmetric compactifications of the heterotic string. Thus it is highly desirable to gain a better understanding of it, and the case $\alpha' = 0$ already provides an important and non-trivial special case. Another defining feature of the flow (1.1) is that it preserves the conformally balanced condition (1.2), and its stationary points are astheno-Kähler metrics (see §2.2 for the definition). This implies that its stationary points are Kähler [24, 53], so the convergence of the flow is closely related to a well-known question in non-Kähler geometry, namely when a conformally balanced manifold is actually Kähler. Also closely related is another fundamental question in non-Kähler geometry and algebraic-geometric stability conditions [16, 47, 77, 78], namely when does a positive $(p,p)$ cohomology class admit as representative the $p$-th power of a Kähler form. Finally, on Kähler manifolds the flow (1.1) can be viewed as a complex version of the inverse Gauss curvature flow studied extensively in convex geometry, see for example [2, 5, 12, 13, 25, 75], and is in itself quite interesting as a fully non-linear partial differential equation.

[Footnote 1: Work supported in part by the National Science Foundation Grants DMS-12-66033 and DMS-1605968, and the Simons Collaboration Grant-523313. Key words: Anomaly flow, conformally balanced metric, astheno-Kähler metric, Kähler Ricci-flat metric, complex Monge-Ampère flow.]

More details on all these motivations will be provided in Section 2 below. In [59] another generalization of the Anomaly flow to arbitrary dimension $n$ was proposed which agrees with the flow (1.1) in dimension $n = 3$. For that generalization, when $\alpha' = 0$, the flow was shown to continue to exist in all dimensions $n$, as long as the curvature and torsion remain bounded and the norm of $\Omega$ does not tend to 0. But this is all that is known at the present time. In this paper, we shall prove the following theorems:

Theorem 1 Let $X$ be an $n$-dimensional compact complex manifold with a nowhere vanishing holomorphic $(n,0)$-form $\Omega$, $n \geq 3$. Let the initial data $\omega_0$ be a Hermitian metric which is conformally balanced, i.e. the condition (1.2) holds. Then the flow is parabolic, and in particular admits a unique solution on some time interval $[0,T)$ with $T > 0$.

Clearly the flow (1.1) can only converge if the manifold $X$ is Kähler. Conversely, if the manifold $X$ is Kähler, we can prove:

Theorem 2 Consider the flow (1.1) on a compact $n$-fold $X$ equipped with a holomorphic $(n,0)$-form $\Omega$ as in the previous theorem, $n \geq 3$. Assume that the initial data $\omega(0)$ satisfies
$$\|\Omega\|_{\omega(0)}\,\omega(0)^{n-1} = \hat\chi^{n-1}, \tag{1.3}$$
where $\hat\chi$ is a Kähler metric. Then the flow (1.1) exists for all time $t > 0$, and as $t \to \infty$, the solution $\omega(t)$ converges smoothly to a metric $\omega_\infty$ satisfying
$$\omega_\infty = \|\Omega\|^{-2/(n-2)}_{\chi_\infty}\,\chi_\infty, \tag{1.4}$$
where $\chi_\infty$ is the unique Kähler Ricci-flat metric in the cohomology class $[\hat\chi]$, and $\|\Omega\|_{\chi_\infty}$ is a constant given by
$$\|\Omega\|^2_{\chi_\infty} = \frac{n!}{[\hat\chi]^n}\Big(\int_X i^{n^2}\,\Omega\wedge\bar\Omega\Big). \tag{1.5}$$
In particular, $\omega_\infty$ is Kähler and Ricci-flat.

As a consequence we obtain another proof of the classical theorem of Yau [79] on the existence of Ricci-flat Kähler metrics. A first parabolic proof of Yau's theorem was the one by Cao [10] using the Kähler-Ricci flow. More recent parabolic proofs using inverse Monge-Ampère flows were obtained in [11, 14, 17]. But, as stressed in [14] and also discussed in §2.2 below, alternative approaches are interesting not just for the alternative proofs in themselves, but also for the singularities that they would develop when no stationary point exists.

Theorem 2 will be reduced to the following theorem on a flow of complex Monge-Ampère type, which may be of independent interest:

Theorem 3 Let $X$ be a compact Kähler manifold with Kähler metric $\hat\chi$. Let $f \in C^\infty(X,\mathbf{R})$ be a given function. Consider the flow
$$\partial_t\varphi = e^{-f}\,\frac{(\hat\chi + i\partial\bar\partial\varphi)^n}{\hat\chi^n}, \qquad \varphi(x,0) = 0, \tag{1.6}$$
where $\varphi(t)$ is subject to the plurisubharmonicity condition
$$\hat\chi + i\partial\bar\partial\varphi > 0. \tag{1.7}$$
Then the flow $\varphi(t)$ exists for all $t > 0$, and the averages $\tilde\varphi(t) = \varphi(t) - \frac{1}{V}\int_X \varphi\,\hat\chi^n$, with $V = \int_X \hat\chi^n$, converge in $C^\infty$.

The flow (1.6) shares with the Kähler-Ricci flow and the inverse Monge-Ampère flow (MA$^{-1}$-flow) introduced by Collins-Hisamoto-Takahashi [14] the fact that the Monge-Ampère determinant $(\hat\chi + i\partial\bar\partial\varphi)^n/\hat\chi^n$ appears in the right hand side. However, it appears here as a first power, and not as a log or as an inverse power, which makes it not concave. In particular, it does not fall within the scope of standard parabolic PDE methods such as those developed in [63].

The paper is organized as follows. In §2, we provide some background and motivations for the flow (1.1). In §3, we compute the evolution equation for the metric and prove the short time existence as claimed in Theorem 1. In §4, the Anomaly flow is reduced to the study of a complex Monge-Ampère flow for initial data satisfying (1.3). We establish all the necessary estimates for the long-time behavior of the flows. In §5, we prove the convergence of the flow, which completes the proof of Theorems 2 and 3. In §6, we provide some further remarks about the Anomaly flow and its possible connection to some interesting problems in complex geometry. The appendix contains some calculations used in §3.

2 Motivations for the flow

We provide now some details on the three different contexts which make the flow (1.1) of particular interest.

2.1 The Hull-Strominger system

Let $X$ be a compact 3-fold equipped with a nowhere vanishing holomorphic $(3,0)$-form $\Omega$. Let $t \to \Phi(t)$ be a flow of $(2,2)$-forms with $d\Phi = 0$ for all $t$. Then the following flow was introduced in [56]:
$$\partial_t(\|\Omega\|_\omega\,\omega^2) = i\partial\bar\partial\omega - \frac{\alpha'}{4}\big(\mathrm{Tr}(Rm\wedge Rm) - \Phi(t)\big) \tag{2.1}$$
with initial data $\omega_0$ required to satisfy the condition $d(\|\Omega\|_{\omega_0}\,\omega_0^2) = 0$. Here $\alpha'$ is a physical parameter called the slope, and $Rm$ is the Riemann curvature of the metric $\omega$, viewed as a $(1,1)$-form valued in the bundle of endomorphisms of $T^{1,0}(X)$.
One case of particular interest is when $X$ is also equipped with a holomorphic vector bundle $E \to X$ with $c_1(E) = 0$, and $\Phi(t) = \mathrm{Tr}(F\wedge F)$, where $F$ is the curvature of a Hermitian metric $H(t)$ on $E$ which itself evolves simultaneously under the Donaldson heat flow
$$H^{-1}\partial_t H = -(\Lambda_\omega iF), \qquad H(0) = H_0. \tag{2.2}$$
Here $H_0$ is a given initial metric, $F$ is viewed as a $(1,1)$-form valued in the bundle of endomorphisms of $E$, and $(\Lambda_\omega iF)^\alpha{}_\beta = g^{j\bar k}F_{\bar kj}{}^\alpha{}_\beta$ is the usual Hodge contraction. The stationary points of the simultaneous flows (2.1) and (2.2) are precisely the solutions of a system of equations proposed independently by C. Hull [38, 39] and A. Strominger [65] for supersymmetric compactifications of the heterotic string to $M^{3,1}\times X$, where $M^{3,1}$ is Minkowski space-time. The Hull-Strominger system of equations is a generalization of the more specific case considered earlier by Candelas, Horowitz, Strominger and Witten [9], where $E$ was set to $T^{1,0}(X)$. If we consider only the stationary points of the system, we can set $H = \omega$, and consequently $\Phi = \mathrm{Tr}(Rm\wedge Rm)$. In this case the term $\mathrm{Tr}(Rm\wedge Rm) - \Phi(t)$ vanishes trivially, and it was shown in [9] that the stationary points of both (2.1) and (2.2) are given by the same condition of $\omega$ being a Ricci-flat Kähler metric.

A major reason for considering the flow (2.1), among all the flows which have the same stationary points, is that it preserves the condition that the form $\|\Omega\|_\omega\,\omega^2$ be closed. In fact, the $(2,2)$ de Rham class of $\|\Omega\|_{\omega(t)}\,\omega(t)^2$ can be easily read off from the flow. This is however only a weak substitute for the analogous statement for the $(1,1)$ Kähler class of $\omega$ in the Kähler-Ricci flow, and this accounts for many new difficulties that one encounters in the study of (2.1). The flow (2.1) can be expressed more explicitly in terms of a flow for the $(1,1)$-form $\omega(t)$ itself [59]. Perhaps surprisingly, this formulation shows that (2.1) can be viewed as a generalization of the Ricci flow with a triple complication, namely the metrics are not Kähler (or Levi-Civita), the norms $\|\Omega\|_\omega$ also occur, and so do quadratic expressions in the curvature tensor.

The flow (2.1) was shown in [60] to produce a simpler and unified way of recovering the solutions of the Hull-Strominger system on Calabi-Eckmann-Goldstein-Prokushkin fibrations [8, 36] originally found by Fu and Yau [31, 32]. It also directly inspired the solutions of the Fu-Yau Hessian equations found recently in [62], which extended our earlier work [55, 58]. However, it is still largely unexplored, and its long-term behavior is already known to exhibit in general rather intricate features, such as a particular sensitivity to the initial data [20, 60, 61]. Because of all these complications, it is necessary to examine the flow (2.1) in some simpler settings. The above choice of $\Phi$ in [9] has the effect of eliminating the terms in the right hand side of (2.1) which are quadratic in the curvature tensor. This effect is also achieved simply by considering the case $\alpha' = 0$. The flow (2.1) can then be considered on its own.
And while the dimension 3 for $X$ is required for the interpretation of the Hull-Strominger system as a compactification of the heterotic string, from the pure mathematical standpoint we can consider all dimensions $n \geq 3$, and the flow we propose in (1.1) is the natural generalization of the Anomaly flow (2.1) with $\alpha' = 0$.

2.2 Generalizations of the Kähler condition

The flow (1.1) also fits into the broad question of when a compact complex manifold $X$ may admit a Kähler metric or a weaker substitute. We review some of the relevant notions. Let $(X,J)$ be a compact complex manifold of complex dimension $n$ equipped with a Hermitian metric $g$, and denote its Kähler form by $\omega = i\sum g_{\bar kj}\,dz^j\wedge d\bar z^k$. Since the Kähler form of a Hermitian metric determines the metric (as $J$ is fixed), by an abuse of terminology we will not distinguish between the two notions. If $d\omega = 0$, then $g$ is called a Kähler metric, and $X$ is called a Kähler manifold. For each $2 \leq k \leq n-1$, it is natural to consider the weaker condition
$$d\,\omega^k = 0. \tag{2.1}$$
The case $k = n-1$ was introduced by Michelsohn [54], who called such a metric balanced, and the manifold $X$ a balanced manifold. By an observation due to Gray and Hervella [37], conditions such as $d\omega^k = 0$ for some $2 \leq k \leq n-2$ actually imply that $\omega$ is Kähler, so the case $k = n-1$ is the only non-trivial generalization of the Kähler property. Michelsohn found an intrinsic characterization of compact manifolds with balanced metrics by means of positive currents. Using such a characterization, Alessandrini and Bassanelli proved in [1] that the existence of balanced metrics is preserved under birational transformations. Remarkably, as we just saw in §2.1, balanced metrics also arise in string theory, since the torsion constraint equation $d(\|\Omega\|_\omega\,\omega^2) = 0$ in dimension 3 of the Hull-Strominger system just means that the metric $\|\Omega\|^{\frac12}_\omega\,\omega$ is balanced, see [49]. The existence of balanced metrics on compact Hermitian manifolds has been studied extensively (see e.g. [18, 19, 21, 22, 26, 27, 28, 54, 71, 76] and references therein).

On the other hand, another natural generalization of the Kähler condition is
$$i\partial\bar\partial\omega^\ell = 0, \tag{2.2}$$
for some $\ell$ with $1 \leq \ell \leq n-1$. If $\ell = n-1$, then $\omega$ is called a Gauduchon metric. In [34], Gauduchon proved that there always exists a unique Gauduchon metric, up to a constant conformal factor, in the conformal class of a given Hermitian metric. Gauduchon manifolds also provide a natural setting for an extension [48, 51] of the Donaldson-Uhlenbeck-Yau theory of Hermitian-Yang-Mills metrics on stable bundles over Kähler manifolds. If $\ell = n-2$, then $\omega$ is called an astheno-Kähler metric. This notion was introduced by Jost and Yau in [41] to establish the existence of Hermitian harmonic maps, and it turns out to be a particularly convenient setting for many analytic arguments. For example, Tosatti and Weinkove proved in [72] Calabi-Yau theorems for Gauduchon and strongly Gauduchon metrics on the class of compact astheno-Kähler manifolds. Recently, the existence of such metrics on compact complex manifolds has been studied widely, see for example [23, 24, 46, 52, 53].
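Since several closely related conditions are now in play, it may be convenient to collect them side by side. The following summary merely restates the definitions given in this subsection and in §2.1 (with $\Omega$ as in (1.2) where it appears):
$$\begin{array}{ll}
\text{K\"ahler:} & d\omega = 0,\\[2pt]
\text{balanced (Michelsohn; } k = n-1 \text{ in (2.1)):} & d\omega^{n-1} = 0,\\[2pt]
\text{conformally balanced:} & d(\|\Omega\|_\omega\,\omega^{n-1}) = 0,\\[2pt]
\text{Gauduchon (} \ell = n-1 \text{ in (2.2)):} & i\partial\bar\partial\omega^{n-1} = 0,\\[2pt]
\text{astheno-K\"ahler (Jost-Yau; } \ell = n-2 \text{ in (2.2)):} & i\partial\bar\partial\omega^{n-2} = 0.
\end{array}$$
Each row is implied by the Kähler condition, and, by the Gray-Hervella observation quoted above, the intermediate conditions $d\omega^k = 0$ for $2 \leq k \leq n-2$ collapse back to the Kähler case.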
An interesting question raised in [69] asks whether a compact complex non-Kähler manifold can admit both an astheno-Kähler metric and a balanced metric. Very recently, such examples were constructed in [23, 46]. In fact, the astheno-Kähler metric of Latorre-Ugarte [46] is $k$-th Gauduchon [29], meaning that $i\partial\bar\partial\omega^k\wedge\omega^{n-k-1} = 0$ for every $1 \leq k \leq n-1$. However, a single metric on a compact manifold cannot be both balanced and astheno-Kähler, unless it is Kähler. This statement can be viewed as a generalization to arbitrary dimensions of the arguments of [9]. It was proved in Matsuo-Takahashi [53], and extended in Fino-Tomassini [24] to the case of metrics which are both conformally balanced and astheno-Kähler. For easy reference, we state these results as a lemma, and include an alternative proof in this paper (see Corollary 1 in §3 for (ii) and (e) in §6 for (i)):

Lemma 1 [24, 53] Let $X$ be an $n$-dimensional compact complex manifold with Hermitian metric $\omega$. Then $\omega$ is Kähler if one of the following conditions is satisfied:
(i) $\omega$ satisfies both $d\omega^{n-1} = 0$ and $i\partial\bar\partial\omega^{n-2} = 0$;
(ii) $X$ admits a nowhere vanishing holomorphic $(n,0)$-form $\Omega$ and $\omega$ satisfies both $d(\|\Omega\|_\omega\,\omega^{n-1}) = 0$ and $i\partial\bar\partial\omega^{n-2} = 0$. In this case, $\omega$ is also Ricci-flat.

We return now to the flow (1.1). Since the right hand side of the flow is both $d$- and $i\partial\bar\partial$-exact, it follows that if $\|\Omega\|_{\omega_0}\,\omega_0^{n-1}$ is $d$-closed or $i\partial\bar\partial$-closed, then $\|\Omega\|_\omega\,\omega^{n-1}$ remains $d$-closed or $i\partial\bar\partial$-closed along the flow, and the de Rham cohomology class $[\|\Omega\|_{\omega_0}\,\omega_0^{n-1}]$ or the Bott-Chern cohomology class $[\|\Omega\|_{\omega_0}\,\omega_0^{n-1}]_{BC}$ is preserved. Moreover, thanks to (ii) in Lemma 1, we see that if the Anomaly flow (1.1) converges, the limit metric must be a Kähler Ricci-flat metric. Therefore, the flow provides a deformation path in the space of conformally balanced metrics to a Kähler metric. If no Kähler metric exists on $X$, then the flow cannot converge, and either its finite-time singularities or its long-term behavior should provide an analytic measure of the absence of Kähler metrics.

2.3 Parabolic fully non-linear equations on Hermitian manifolds

The theory of parabolic fully non-linear equations on Hermitian manifolds has been developed extensively over the years, and has resulted in some powerful and general results. As an example, let $(X,\omega)$ be a compact Hermitian manifold equipped with a smooth closed form $\hat\chi$, and consider the equation
$$\partial_t u = F(A(i\partial\bar\partial u)) - \psi(z), \qquad u(0) = u_0. \tag{2.3}$$
Here $i\partial\bar\partial u$ is the complex Hessian of $u$, and $A(i\partial\bar\partial u)$ is the set of eigenvalues of the form $\hat\chi + i\partial\bar\partial u$ with respect to the metric $\omega$. The function $F(\lambda)$ is a given function defined on a given cone $\Gamma$, and $A(i\partial\bar\partial u)$ is required to be in $\Gamma$ for all time. The function $F$ is required to satisfy many conditions, including conditions amounting to the parabolicity of the flow (2.3).
But an additional condition, originating from the earliest works of Caffarelli-Nirenberg-Spruck [7] and Krylov [43] on the theory of fully non-linear equations, is that $F$ be a concave function of $\lambda \in \Gamma$. This concavity condition, together with the existence of subsolutions and more technical hypotheses, is now known to lead to some very general theorems which can apply to a wide variety of geometric flows that have been studied in the literature (see e.g. [63] and references therein). The Anomaly flow on Calabi-Eckmann-Goldstein-Prokushkin fibrations studied in [60] provides, however, a natural geometric example of a flow that does not satisfy the concavity condition, but which is nevertheless well-behaved. This suggests the existence of a useful theory going beyond concave equations, the formulation of which would require the treatment of many more examples. As we shall see, the flow (1.1), in its realization (1.6), provides another instructive example. Note that it corresponds to $F = e^{-f(z)}\prod_{j=1}^n\lambda_j$, unlike the Kähler-Ricci flow, which corresponds to $F = \log\prod_{j=1}^n\lambda_j$, which is concave.

3 Short-time existence and proof of Theorem 1

To prove Theorem 1, we need a lemma which will allow us to compute relevant quantities with the Hodge star operator. For a $(p,q)$-form $\Theta$ on a manifold $X$, its components $\Theta_{\bar k_1\cdots\bar k_q j_1\cdots j_p}$ are defined by
$$\Theta = \frac{1}{p!q!}\sum \Theta_{\bar k_1\cdots\bar k_q j_1\cdots j_p}\, dz^{j_p}\wedge\cdots\wedge dz^{j_1}\wedge d\bar z^{k_q}\wedge\cdots\wedge d\bar z^{k_1}. \tag{3.1}$$
For a $(p,p)$-form $\Theta$, we define
$$\mathrm{Tr}\,\Theta = \langle\Theta, \omega^p\rangle = i^{-p}\prod_{\ell=1}^p g^{k_\ell\bar j_\ell}\,\Theta_{\bar j_1 k_1\cdots\bar j_p k_p}. \tag{3.2}$$

Lemma 2 Let $\alpha \in \Omega^{1,1}(X,\mathbf{R})$, $\Phi \in \Omega^{2,2}(X,\mathbf{R})$ and $\Psi \in \Omega^{3,3}(X,\mathbf{R})$ on a complex manifold $X$ of dimension $n \geq 3$. Let $\omega = i g_{\bar kj}\,dz^j\wedge d\bar z^k$ be a Hermitian metric and $\star$ its associated Hodge star operator. Then
$$\star(\alpha\wedge\omega^{n-2}) = -(n-2)!\,\alpha + (n-2)!\,(\mathrm{Tr}\,\alpha)\,\omega, \tag{3.3}$$
$$(\star(\Phi\wedge\omega^{n-3}))_{\bar kj} = i(n-3)!\,g^{s\bar r}\Phi_{\bar rs\bar kj} + \frac{i(n-3)!}{2}(\mathrm{Tr}\,\Phi)\,g_{\bar kj}, \tag{3.4}$$
and if $n \geq 4$, then
$$(\star(\Psi\wedge\omega^{n-4}))_{\bar kj} = \frac{(n-4)!}{2}\,g^{q\bar p}g^{s\bar r}\Psi_{\bar rs\bar pq\bar kj} + \frac{i(n-4)!}{6}(\mathrm{Tr}\,\Psi)\,g_{\bar kj}. \tag{3.5}$$

Proof: We will use the general formula for the Hodge star operator on $(n-1,n-1)$-forms, as given in (Lemma 3, [59]). For any $(n-1,n-1)$-form $\Theta$, there holds
$$(\star\Theta)_{\bar jk} = \frac{1}{(n-1)!(n^2-6n+6)}\Big\{6\, i^{-(n-2)}\prod_{p=1}^{n-2} g^{k_p\bar j_p}\,\Theta_{\bar jk\bar j_1k_1\cdots\bar j_{n-2}k_{n-2}} + (n-6)(\mathrm{Tr}\,\Theta)\, i\, g_{\bar jk}\Big\}.$$
In fact, the proof of (3.3) can also be found in [59, 40], but we provide details here for completeness.
First, we notice that
$$(\alpha\wedge\omega^{n-2})_{\bar kj\bar k_1j_1\cdots\bar k_{n-2}j_{n-2}} = i^{n-2}\,\alpha_{\{\bar kj}\,g_{\bar k_1j_1}\cdots g_{\bar k_{n-2}j_{n-2}\}}, \tag{3.6}$$
$$(\Phi\wedge\omega^{n-3})_{\bar kj\bar rs\bar k_1j_1\cdots\bar k_{n-3}j_{n-3}} = i^{n-3}\,\Phi_{\{\bar kj\bar rs}\,g_{\bar k_1j_1}\cdots g_{\bar k_{n-3}j_{n-3}\}}, \tag{3.7}$$
$$(\Psi\wedge\omega^{n-4})_{\bar kj\bar rs\bar pq\bar k_1j_1\cdots\bar k_{n-4}j_{n-4}} = i^{n-4}\,\Psi_{\{\bar kj\bar rs\bar pq}\,g_{\bar k_1j_1}\cdots g_{\bar k_{n-4}j_{n-4}\}}. \tag{3.8}$$
Here we use the notation $\{\,\}$ to mean antisymmetrization in both the barred and unbarred indices. Using the formula for the Hodge star on $(n-1,n-1)$-forms exhibited above, we deduce that we must have
$$(\star(\alpha\wedge\omega^{n-2}))_{\bar kj} = a_n\,\alpha_{\bar kj} + i\,b_n\,(\mathrm{Tr}\,\alpha)\,g_{\bar kj}, \tag{3.9}$$
$$(\star(\Phi\wedge\omega^{n-3}))_{\bar kj} = i\,c_n\,g^{s\bar r}\Phi_{\bar rs\bar kj} + i\,d_n\,(\mathrm{Tr}\,\Phi)\,g_{\bar kj}, \tag{3.10}$$
$$(\star(\Psi\wedge\omega^{n-4}))_{\bar kj} = e_n\,g^{q\bar p}g^{s\bar r}\Psi_{\bar rs\bar pq\bar kj} + i\,f_n\,(\mathrm{Tr}\,\Psi)\,g_{\bar kj}, \tag{3.11}$$
for some coefficients $a_n, b_n, c_n, d_n, e_n, f_n$ to be determined. If $\mathrm{Tr}\,\alpha = 0$, a direct computation shows
$$\star(\alpha\wedge\omega^{n-2}) = -(n-2)!\,\alpha. \tag{3.12}$$
Therefore $a_n = -(n-2)!$. Taking $\alpha = \omega$ and using $\star\omega^{n-1} = (n-1)!\,\omega$, we deduce the relation
$$(n-1)! = a_n + n\,b_n. \tag{3.13}$$
It follows that $b_n = (n-2)!$ and we have established (3.3).

Next, we test $\Phi = \alpha\wedge\omega$, which has components
$$(\alpha\wedge\omega)_{\bar rs\bar kj} = i\big(\alpha_{\bar kj}g_{\bar rs} - \alpha_{\bar ks}g_{\bar rj} - \alpha_{\bar rj}g_{\bar ks} + \alpha_{\bar rs}g_{\bar kj}\big). \tag{3.14}$$
Thus
$$g^{s\bar r}(\alpha\wedge\omega)_{\bar rs\bar kj} = (n-2)\,i\,\alpha_{\bar kj} - (\mathrm{Tr}\,\alpha)\,g_{\bar kj}, \tag{3.15}$$
and
$$\mathrm{Tr}\,(\alpha\wedge\omega) = 2(n-1)(\mathrm{Tr}\,\alpha). \tag{3.16}$$
Equating (3.9) with (3.10), we therefore have the relation
$$-(n-2)!\,\alpha_{\bar kj} + i(n-2)!(\mathrm{Tr}\,\alpha)\,g_{\bar kj} = i^2 c_n(n-2)\,\alpha_{\bar kj} - i\,c_n(\mathrm{Tr}\,\alpha)\,g_{\bar kj} + 2i\,d_n(n-1)(\mathrm{Tr}\,\alpha)\,g_{\bar kj}.$$
Taking $\alpha$ with $\mathrm{Tr}\,\alpha = 0$, we see that $c_n = (n-3)!$. Solving for $d_n$ gives $d_n = \frac{(n-3)!}{2}$. This establishes (3.4).

We now test $\Psi = \Phi\wedge\omega$. Its components can be worked out to be
$$(\Phi\wedge\omega)_{\bar rs\bar pq\bar kj} = i\Big\{\Phi_{\bar rs\bar pq}g_{\bar kj} + \Phi_{\bar rj\bar ps}g_{\bar kq} + \Phi_{\bar rq\bar pj}g_{\bar ks} + \Phi_{\bar ks\bar rq}g_{\bar pj} + \Phi_{\bar kj\bar rs}g_{\bar pq} + \Phi_{\bar kq\bar rj}g_{\bar ps} + \Phi_{\bar ps\bar kq}g_{\bar rj} + \Phi_{\bar pj\bar ks}g_{\bar rq} + \Phi_{\bar pq\bar kj}g_{\bar rs}\Big\}. \tag{3.17}$$
Then
$$g^{q\bar p}g^{s\bar r}(\Phi\wedge\omega)_{\bar rs\bar pq\bar kj} = 2i(n-3)\,g^{s\bar r}\Phi_{\bar kj\bar rs} - i\,(\mathrm{Tr}\,\Phi)\,g_{\bar kj}, \tag{3.18}$$
and
$$\mathrm{Tr}\,(\Phi\wedge\omega) = 3(n-2)(\mathrm{Tr}\,\Phi). \tag{3.19}$$
Equating (3.10) and (3.11) with $\Psi = \Phi\wedge\omega$ gives the relation
$$i(n-3)!\,g^{s\bar r}\Phi_{\bar rs\bar kj} + \frac{i(n-3)!}{2}(\mathrm{Tr}\,\Phi)\,g_{\bar kj} = 2i(n-3)\,e_n\,g^{s\bar r}\Phi_{\bar kj\bar rs} - i\,e_n(\mathrm{Tr}\,\Phi)\,g_{\bar kj} + 3i(n-2)\,f_n(\mathrm{Tr}\,\Phi)\,g_{\bar kj}. \tag{3.20}$$
If we choose $\Phi$ with $\mathrm{Tr}\,\Phi = 0$, we see that $e_n = \frac{(n-4)!}{2}$.
Substituting the value of $e_n$ into (3.20), we deduce $f_n = \frac{(n-4)!}{6}$. Q.E.D.

Before stating the full evolution of the components of the metric in the Anomaly flow (1.1), we establish notation which will be used subsequently. The torsion of $\omega(t)$ is given by $T = i\partial\omega$ and $\bar T = -i\bar\partial\omega$, with components
$$T = \frac12 T_{\bar kj\ell}\,dz^\ell\wedge dz^j\wedge d\bar z^k, \qquad \bar T = \frac12 \bar T_{k\bar j\bar\ell}\,d\bar z^\ell\wedge d\bar z^j\wedge dz^k, \tag{3.21}$$
given by
$$T_{\bar kj\ell} = \partial_j g_{\bar k\ell} - \partial_\ell g_{\bar kj}, \qquad \bar T_{k\bar j\bar\ell} = \partial_{\bar j} g_{\bar\ell k} - \partial_{\bar\ell} g_{\bar jk}. \tag{3.22}$$
The torsion 1-form $\tau$ is given by
$$\tau = T_\ell\,dz^\ell, \qquad T_\ell = g^{j\bar k}T_{\bar kj\ell}. \tag{3.23}$$
We may take the norms of $T$ and $\tau$ by setting
$$|T|^2 = g^{m\bar k}g^{j\bar n}g^{\ell\bar p}\,T_{\bar kj\ell}\,\bar T_{m\bar n\bar p}, \qquad |\tau|^2 = g^{j\bar k}\,T_j\,\bar T_{\bar k}. \tag{3.24}$$
The curvature tensor of the Chern connection $\nabla$ of $\omega$ is given by
$$R_{\bar kj}{}^p{}_q = -\partial_{\bar k}\big(g^{p\bar\ell}\partial_j g_{\bar\ell q}\big), \qquad Rm = R_{\bar kj}{}^p{}_q\,dz^j\wedge d\bar z^k \in \Omega^{1,1}(X)\otimes \mathrm{End}(T^{1,0}(X)). \tag{3.25}$$
For a general Hermitian metric $\omega$, there are 4 notions of Ricci curvature:
$$R_{\bar kj} = R_{\bar kj}{}^p{}_p, \qquad \tilde R_{\bar kj} = R^p{}_{p\bar kj}, \qquad R'_{\bar kj} = R_{\bar k}{}^p{}_{pj}, \qquad R''_{\bar kj} = R^p{}_{j\bar kp}. \tag{3.26}$$
Conformally balanced metrics have additional structure, and their curvatures satisfy (see Lemma 5 in [59])
$$R'_{\bar kj} = R''_{\bar kj} = \frac12 R_{\bar kj}, \qquad \tilde R_{\bar kj} = \frac12 R_{\bar kj} + \nabla^m T_{\bar kjm}. \tag{3.27}$$
We can now state the formula for the evolution of the metric tensor along the Anomaly flow (1.1). The structure of the torsion terms can be compared to other geometric flows studied in different settings, e.g. [4, 6, 42, 50, 66, 67, 73]. The appearance of quadratic torsion terms proportional to $g_{\bar kj}$ when $n \geq 4$ is reminiscent of the evolution of the metric in the Laplacian flow for closed $G_2$ structures [6, 42, 50]. The difference in the expressions for $n = 3$ and $n \geq 4$ is due to the appearance of $(n-2)(n-3)\,i\partial\omega\wedge\bar\partial\omega\wedge\omega^{n-3}$ when expanding $i\partial\bar\partial\omega^{n-2}$ if $n \geq 4$. In deriving (3.29), we cancelled coefficients of the form $\frac{n-3}{n-3}$, making it invalid to substitute $n = 3$ into this expression.

Theorem 4 Start the Anomaly flow $\frac{d}{dt}(\|\Omega\|_\omega\,\omega^{n-1}) = i\partial\bar\partial(\omega^{n-2})$ with an initial metric $\omega_0$ satisfying $d(\|\Omega\|_{\omega_0}\,\omega_0^{n-1}) = 0$. If $n = 3$, the evolution of the metric is given by
$$\partial_t g_{\bar kj} = \frac{1}{2\|\Omega\|_\omega}\Big[-\tilde R_{\bar kj} + g^{m\bar\ell}g^{s\bar r}\,T_{\bar rmj}\,\bar T_{s\bar\ell\bar k}\Big]. \tag{3.28}$$
If $n \geq 4$, the evolution of the metric is given by
$$\partial_t g_{\bar kj} = \frac{1}{(n-1)\|\Omega\|_\omega}\Big[-\tilde R_{\bar kj} + \frac{1}{2(n-2)}\big(|T|^2 - 2|\tau|^2\big)g_{\bar kj} - \frac12\, g^{q\bar p}g^{s\bar r}T_{\bar kqs}\bar T_{j\bar p\bar r} + g^{s\bar r}\big(T_{\bar kjs}\bar T_{\bar r} + T_s\bar T_{j\bar k\bar r}\big) + T_j\bar T_{\bar k}\Big]. \tag{3.29}$$
As an immediate consequence, we see that the flow has a short-time solution. Indeed, by definition,
$$\tilde R_{\bar kj} = -g^{p\bar q}\partial_{\bar q}\partial_p g_{\bar kj} + g^{p\bar q}g^{r\bar s}\,\partial_{\bar q}g_{\bar kr}\,\partial_p g_{\bar sj}.$$
The leading term of the linearization is the differential operator
$$\delta g_{\bar kj} \to -g^{p\bar q}\partial_p\partial_{\bar q}\,\delta g_{\bar kj}, \tag{3.30}$$
which is elliptic. Hence the flow is strictly parabolic. Theorem 1 is proved.

Proof (of Theorem 4): The evolution equation when $n = 3$ was already given in [59], so we assume that $n \geq 4$. Since $\|\Omega\|^2_\omega = \Omega\bar\Omega\,(\det g)^{-1}$, we have
$$\frac{d}{dt}\|\Omega\|_\omega = -\frac12\|\Omega\|_\omega\,\mathrm{Tr}\,\dot\omega. \tag{3.31}$$
The Anomaly flow (1.1) can be written as
$$-\frac12\|\Omega\|_\omega(\mathrm{Tr}\,\dot\omega)\,\omega^{n-1} + (n-1)\|\Omega\|_\omega\,\dot\omega\wedge\omega^{n-2} = (n-2)\,i\partial\bar\partial\omega\wedge\omega^{n-3} + i(n-2)(n-3)\,T\wedge\bar T\wedge\omega^{n-4}. \tag{3.32}$$
Wedging both sides with $\omega$ gives the following equation of top forms:
$$-\frac12\|\Omega\|_\omega(\mathrm{Tr}\,\dot\omega)\,\omega^n + (n-1)\|\Omega\|_\omega\,\dot\omega\wedge\omega^{n-1} = (n-2)\,i\partial\bar\partial\omega\wedge\omega^{n-2} + i(n-2)(n-3)\,T\wedge\bar T\wedge\omega^{n-3}. \tag{3.33}$$
Using the identities (A.3), (A.5), (A.7) for contracting forms, we obtain
$$-\frac12\|\Omega\|_\omega(\mathrm{Tr}\,\dot\omega) + \frac{n-1}{n}\|\Omega\|_\omega(\mathrm{Tr}\,\dot\omega) = -\frac{n-2}{2n(n-1)}\,g^{j\bar k}g^{\ell\bar m}(i\partial\bar\partial\omega)_{\bar kj\bar m\ell} - \frac{n-3}{6n(n-1)}\,g^{j\bar k}g^{q\bar p}g^{s\bar r}(T\wedge\bar T)_{\bar kj\bar pq\bar rs}. \tag{3.34}$$
Therefore
$$\mathrm{Tr}\,\dot\omega = \frac{1}{(n-1)\|\Omega\|_\omega}\,\mathrm{Tr}\,(i\partial\bar\partial\omega) + \frac{i(n-3)}{3(n-1)(n-2)}\,\frac{1}{\|\Omega\|_\omega}\,\mathrm{Tr}\,(T\wedge\bar T). \tag{3.35}$$
We now apply the Hodge star operator with respect to $\omega$ to (3.32):
$$-\frac12\|\Omega\|_\omega(\mathrm{Tr}\,\dot\omega)\star\omega^{n-1} + (n-1)\|\Omega\|_\omega\star(\dot\omega\wedge\omega^{n-2}) = (n-2)\star(i\partial\bar\partial\omega\wedge\omega^{n-3}) + i(n-2)(n-3)\star(T\wedge\bar T\wedge\omega^{n-4}). \tag{3.36}$$
Substituting Lemma 2,
$$-\frac{n-1}{2}\|\Omega\|_\omega(\mathrm{Tr}\,\dot\omega)\,g_{\bar kj} + (n-1)\|\Omega\|_\omega\big\{-\partial_t g_{\bar kj} + (\mathrm{Tr}\,\dot\omega)\,g_{\bar kj}\big\} = g^{s\bar r}(i\partial\bar\partial\omega)_{\bar rs\bar kj} + \frac12(\mathrm{Tr}\,i\partial\bar\partial\omega)\,g_{\bar kj} + \frac12 g^{q\bar p}g^{s\bar r}(T\wedge\bar T)_{\bar rs\bar pq\bar kj} + \frac{i}{6}\,\mathrm{Tr}\,(T\wedge\bar T)\,g_{\bar kj}. \tag{3.37}$$
Substituting (3.35), we see that the $(\mathrm{Tr}\,i\partial\bar\partial\omega)$ terms cancel and we are left with
$$\partial_t g_{\bar kj} = -\frac{1}{(n-1)\|\Omega\|_\omega}\,g^{s\bar r}(i\partial\bar\partial\omega)_{\bar rs\bar kj} - \frac{1}{2(n-1)\|\Omega\|_\omega}\,g^{q\bar p}g^{s\bar r}(T\wedge\bar T)_{\bar rs\bar pq\bar kj} - \frac{i}{6(n-1)(n-2)\|\Omega\|_\omega}\,\mathrm{Tr}\,(T\wedge\bar T)\,g_{\bar kj}. \tag{3.38}$$
Next, we compute the components of the terms on the right-hand side. By substituting the relations (3.27) between the Ricci curvatures of conformally balanced metrics into the identity (A.19) for the components of $i\partial\bar\partial\omega$ from the appendix, we obtain
$$g^{s\bar r}(i\partial\bar\partial\omega)_{\bar rs\bar kj} = \tilde R_{\bar kj} - g^{m\bar\ell}g^{s\bar r}\,T_{\bar rmj}\,\bar T_{s\bar\ell\bar k}. \tag{3.39}$$
Substituting this identity together with (A.13) and (A.14) into (3.38) gives equation (3.29) for the evolution of the metric. Q.E.D.
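As a quick consistency check (our remark, not part of the original argument): suppose that $\omega(t)$ happens to be Kähler, so that $T = \bar T = 0$ and hence $\tau = 0$. Then every torsion term in Theorem 4 drops out, and both (3.28) and (3.29) collapse to
$$\partial_t g_{\bar kj} = -\frac{1}{(n-1)\,\|\Omega\|_\omega}\,\tilde R_{\bar kj},$$
with $n = 3$ recovering the factor $\frac{1}{2\|\Omega\|_\omega}$ of (3.28). Since the Chern and Levi-Civita connections coincide in the Kähler case, $\tilde R_{\bar kj}$ is then the usual Ricci curvature, so the Anomaly flow moves in the direction of $-\mathrm{Ric}$ modulated by the positive conformal factor $\|\Omega\|_\omega^{-1}$, consistent with the description of (2.1) as a generalization of the Ricci flow recalled in §2.1.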
As a consequence of the above calculation, we obtain a proof of case (ii) in Lemma 1.

Corollary 1 Stationary points of the Anomaly flow $\frac{d}{dt}(\|\Omega\|_\omega\,\omega^{n-1}) = i\partial\bar\partial(\omega^{n-2})$ with initial metric $\omega_0$ satisfying $d(\|\Omega\|_{\omega_0}\,\omega_0^{n-1}) = 0$ are Ricci-flat Kähler metrics.

Proof: The case $n = 3$ was done in [59], so we assume $n \geq 4$. Setting (3.29) to zero and taking the trace gives
$$0 = -\tilde R + \frac{n}{2(n-2)}\big(|T|^2 - 2|\tau|^2\big) - \frac12|T|^2 + 3|\tau|^2. \tag{3.40}$$
By the definition of the Ricci curvatures (3.26),
$$\tilde R = g^{j\bar k}R^p{}_{p\bar kj} = -g^{j\bar k}\partial_{\bar k}\partial_j\log\det g_{\bar\ell m}, \tag{3.41}$$
hence $\tilde R = \Delta\log\|\Omega\|^2_\omega$. Simplifying, we obtain
$$(n-2)\,\Delta\log\|\Omega\|^2_\omega = |T|^2 + 2(n-3)|\tau|^2. \tag{3.42}$$
By the maximum principle, we conclude that $\log\|\Omega\|^2_\omega$ is constant and $|T|^2 = 0$. It follows that $\omega$ is Kähler with zero Ricci curvature. Q.E.D.

4 A complex Monge-Ampère flow

4.1 The Anomaly flow and a scalar Monge-Ampère flow

Let $X$ be a compact Kähler manifold with Kähler metric $\hat\chi = i\hat\chi_{\bar kj}\,dz^j\wedge d\bar z^k$. Given a potential function $\varphi: X \to \mathbf{R}$, we will use the notation
$$F[\varphi] = \frac{(\hat\chi + i\partial\bar\partial\varphi)^n}{\hat\chi^n}. \tag{4.1}$$
Let $f \in C^\infty(X,\mathbf{R})$ be a given function. Consider the Monge-Ampère flow of potentials
$$\partial_t\varphi = e^{-f}F[\varphi], \qquad \varphi(x,0) = 0, \tag{4.2}$$
subject to the plurisubharmonicity condition
$$\chi = \hat\chi + i\partial\bar\partial\varphi > 0. \tag{4.3}$$
Assume further that $X$ admits a nowhere vanishing holomorphic $(n,0)$-form $\Omega$ and consider the flow (1.1). We claim that, with the function $f$ chosen by
$$e^{-f} = \frac{1}{n-1}\,\|\Omega\|^{-2}_{\hat\chi}, \tag{4.4}$$
the Monge-Ampère flow (4.2) is just the Anomaly flow (1.1) with initial data
$$\|\Omega\|_{\omega_0}\,\omega_0^{n-1} = \hat\chi^{n-1}. \tag{4.5}$$
Indeed, let $t \to \varphi(t)$ be the solution of the Monge-Ampère flow (4.2) and set
$$\chi(t) = \hat\chi + i\partial\bar\partial\varphi(t) > 0, \qquad \|\Omega\|_{\omega(t)}\,\omega(t)^{n-1} = \chi(t)^{n-1}. \tag{4.6}$$
Then
$$\partial_t(\|\Omega\|_\omega\,\omega^{n-1}) = \partial_t\chi^{n-1} = (n-1)\,i\partial\bar\partial(\partial_t\varphi)\wedge\chi^{n-2}. \tag{4.7}$$
Next, the above relation between $\omega(t)$ and $\chi(t)$ can be inverted: taking the determinants of both sides gives $\|\Omega\|^{1/(n-1)}_\omega = \|\Omega\|^{2/(n-2)}_\chi$, and thus the relation can also be written as
$$\omega^{n-2} = \|\Omega\|^{-2}_\chi\,\chi^{n-2}. \tag{4.8}$$
It follows that
$$i\partial\bar\partial\omega^{n-2} = i\partial\bar\partial\big(\|\Omega\|^{-2}_\chi\,\chi^{n-2}\big) = i\partial\bar\partial\big(\|\Omega\|^{-2}_\chi\big)\wedge\chi^{n-2}. \tag{4.9}$$
It suffices then to show that
$$(n-1)\,\partial_t\varphi = \|\Omega\|^{-2}_\chi, \tag{4.10}$$
since this would imply that $\omega$ is a solution of the Anomaly flow (1.1), which is uniquely determined by the parabolicity of the flow established in Theorem 1. But by the definition of $\varphi$, we have
$$(n-1)\,\partial_t\varphi = (n-1)\,e^{-f}F[\varphi] = \frac{\det\hat\chi}{\Omega\bar\Omega}\cdot\frac{\det(\hat\chi + i\partial\bar\partial\varphi)}{\det\hat\chi} = \frac{1}{\|\Omega\|^2_\chi}. \tag{4.11}$$
This completes the proof of our claim.

4.2 Evolution identities

In this section, we compute the evolution of various basic quantities along the flow (1.6). Let us first establish notation. We will denote $\chi_{\bar kj} = \hat\chi_{\bar kj} + \varphi_{\bar kj}$, and use upper indices $\chi^{j\bar k}$ to denote the inverse matrix of $\chi_{\bar kj}$. We will sometimes write $\dot\varphi$ for $\partial_t\varphi$. Subscripts on a function will denote spatial partial derivatives, and $\hat\nabla$ and $\Delta_{\hat\chi} = \hat\chi^{p\bar q}\hat\nabla_p\hat\nabla_{\bar q}$ will denote covariant derivatives with respect to the background Kähler metric $\hat\chi$. We use the Chern connection, defined as usual by $\hat\nabla_{\bar k}V^p = \partial_{\bar k}V^p$, $\hat\nabla_k V^p = \hat\chi^{p\bar q}\partial_k(\hat\chi_{\bar qi}V^i)$ for any section $V$ of $T^{1,0}X$, and the curvature convention
$$[\hat\nabla_j, \hat\nabla_{\bar k}]\,W_i = -\hat R_{\bar kj}{}^p{}_i\,W_p, \tag{4.12}$$
for any section $W$ of $(T^{1,0}X)^*$. We introduce the linearized operator
$$L = e^{-f}F\,\chi^{j\bar k}\,\hat\nabla_j\hat\nabla_{\bar k}. \tag{4.13}$$
We will also use the relative endomorphism $h^i{}_j = \hat\chi^{i\bar k}\chi_{\bar kj}$, which satisfies
$$\mathrm{Tr}\,h = n + \Delta_{\hat\chi}\varphi, \qquad \mathrm{Tr}\,h^{-1} = \chi^{p\bar q}\hat\chi_{\bar qp}. \tag{4.14}$$

4.2.1 Evolution of the potential

We first compute
$$(\partial_t - L)\varphi = e^{-f}F - e^{-f}F\chi^{k\bar j}\varphi_{\bar jk} = e^{-f}F - e^{-f}F\chi^{k\bar j}(\chi_{\bar jk} - \hat\chi_{\bar jk}). \tag{4.15}$$
Therefore
$$(\partial_t - L)\varphi = -(n-1)\,e^{-f}F + e^{-f}F\,\mathrm{Tr}\,h^{-1}. \tag{4.16}$$

4.2.2 Evolution of the determinant

Next, differentiating the evolution equation in time gives
$$\partial_t(e^{-f}F) = L(e^{-f}F). \tag{4.17}$$
Expanding, we obtain
$$(\partial_t - L)F = e^f L(e^{-f})\,F - 2F e^{-f}\,\mathrm{Re}\{\chi^{j\bar k}f_j F_{\bar k}\}. \tag{4.18}$$

4.2.3 Evolution of the cohomological representative

Differentiating equation (1.6) once gives
$$\partial_t\partial_{\bar p}\varphi = e^{-f}F\chi^{k\bar j}\,\hat\nabla_{\bar p}\varphi_{\bar jk} - e^{-f}F\,\partial_{\bar p}f. \tag{4.19}$$
We introduce the notation
$$F^{k\bar j, r\bar s} = F\chi^{r\bar s}\chi^{k\bar j} - F\chi^{r\bar j}\chi^{k\bar s}, \tag{4.20}$$
so that
$$\hat\nabla_q(F\chi^{k\bar j}) = F^{k\bar j, r\bar s}\,\hat\nabla_q\varphi_{\bar sr}. \tag{4.21}$$
We note that, unlike in the case when $F$ is concave, the quantity $-F^{k\bar j, r\bar s}M_{\bar jk}M_{\bar rs}$ for a Hermitian tensor $M_{\bar ab}$ does not have a sign. Differentiating the flow twice, which amounts to differentiating (4.19) once more, gives
$$\partial_t\varphi_{\bar pq} = e^{-f}F\chi^{k\bar j}\hat\nabla_q\hat\nabla_{\bar p}\varphi_{\bar jk} + e^{-f}F^{k\bar j,r\bar s}\hat\nabla_{\bar p}\varphi_{\bar jk}\hat\nabla_q\varphi_{\bar sr} - e^{-f}F\chi^{k\bar j}\hat\nabla_{\bar p}\varphi_{\bar jk}\,\partial_q f - e^{-f}F\,\partial_q\partial_{\bar p}f + e^{-f}F\,\partial_{\bar p}f\,\partial_q f - e^{-f}F\chi^{k\bar j}\hat\nabla_q\varphi_{\bar jk}\,\partial_{\bar p}f. \tag{4.22}$$
Commuting derivatives,
$$e^{-f}F\chi^{k\bar j}\hat\nabla_q\hat\nabla_{\bar p}\varphi_{\bar jk} = e^{-f}F\chi^{k\bar j}\hat\nabla_k\hat\nabla_{\bar j}\varphi_{\bar pq} - e^{-f}F\chi^{k\bar j}\hat R_{\bar jk\bar p}{}^{\bar a}\varphi_{\bar aq} + e^{-f}F\chi^{k\bar j}\hat R_{\bar jq\bar p}{}^{\bar a}\varphi_{\bar ak}. \tag{4.23}$$
Thus we obtain the evolution of $\chi_{\bar pq} = \hat\chi_{\bar pq} + \varphi_{\bar pq}$:
$$(\partial_t - L)\chi_{\bar pq} = -e^{-f}F\chi^{k\bar j}\hat R_{\bar jk\bar p}{}^{\bar a}\varphi_{\bar aq} + e^{-f}F\chi^{k\bar j}\hat R_{\bar jq\bar p}{}^{\bar a}\varphi_{\bar ak} + e^{-f}F^{k\bar j,r\bar s}\hat\nabla_{\bar p}\chi_{\bar jk}\hat\nabla_q\chi_{\bar sr} - e^{-f}F\chi^{k\bar j}\hat\nabla_{\bar p}\chi_{\bar jk}\,f_q - e^{-f}F\chi^{k\bar j}\hat\nabla_q\chi_{\bar jk}\,f_{\bar p} - e^{-f}F\,f_{\bar pq} + e^{-f}F\,f_{\bar p}f_q. \tag{4.24}$$
Since $L$ involves covariant derivatives with respect to the background $\hat\chi$, we may take the trace with respect to $\hat\chi$ and derive
$$(\partial_t - L)\,\mathrm{Tr}\,h = -e^{-f}F\chi^{k\bar j}\hat R_{\bar jk}{}^{p\bar a}\varphi_{\bar ap} + e^{-f}F\chi^{k\bar j}\hat R_{\bar jp}{}^{p\bar a}\varphi_{\bar ak} + e^{-f}F^{k\bar j,r\bar s}\hat\chi^{p\bar q}\hat\nabla_{\bar q}\chi_{\bar jk}\hat\nabla_p\chi_{\bar sr} - 2e^{-f}\,\mathrm{Re}\{\hat\chi^{q\bar p}F_q f_{\bar p}\} - e^{-f}F\,\hat\chi^{q\bar p}f_{\bar pq} + e^{-f}F\,\hat\chi^{q\bar p}f_q f_{\bar p}. \tag{4.25}$$
Writing $\chi_{\bar k\ell} = \hat\chi_{\bar k\ell} + \varphi_{\bar k\ell}$ and $\hat R_p{}^p{}_q{}^q = \hat R$,
$$(\partial_t - L)\,\mathrm{Tr}\,h = -e^{-f}F\chi^{k\bar j}\hat R_{\bar jk}{}^{p\bar a}\chi_{\bar ap} + e^{-f}F\hat R + e^{-f}F^{k\bar j,r\bar s}\hat\chi^{p\bar q}\hat\nabla_{\bar q}\chi_{\bar jk}\hat\nabla_p\chi_{\bar sr} - 2e^{-f}\,\mathrm{Re}\{\hat\chi^{q\bar p}F_q f_{\bar p}\} - e^{-f}F\,\hat\chi^{q\bar p}f_{\bar pq} + e^{-f}F\,\hat\chi^{q\bar p}f_q f_{\bar p}. \tag{4.26}$$

4.3 Estimate of the speed

Differentiating the equation in time, we obtain
$$(\partial_t - L)\,\dot\varphi = 0. \tag{4.27}$$
By the maximum principle, we have the bound
$$\inf_X\dot\varphi(0) \leq \dot\varphi \leq \sup_X\dot\varphi(0). \tag{4.28}$$
Since $\dot\varphi(0) = e^{-f}$, it follows that $e^{-f}F = \dot\varphi \geq \inf_X e^{-f}$. As a consequence, we obtain

Lemma 3 Let $\varphi$ be a solution to (1.6) on $X\times[0,T)$ satisfying the positivity condition (1.7). There exist constants $C > 0$ and $\delta > 0$, depending only on $(X,\hat\chi)$ and $f$, such that
$$|\partial_t\varphi| \leq C, \qquad \delta \leq e^{-f}F[\varphi] \leq C, \qquad \big(\sup_X\varphi - \inf_X\varphi\big)(t) \leq C. \tag{4.29}$$
Proof: It only remains to establish the oscillation estimate. This follows from applying Yau's $C^0$ estimate [79] to the equation
$$(\hat\chi + i\partial\bar\partial\varphi)^n = (\partial_t\varphi)\,e^f\,\hat\chi^n \tag{4.30}$$
at each fixed time. Q.E.D.

4.4 Second order estimate

Lemma 4 Let $\varphi$ be a solution to (1.6) on $X\times[0,T]$ satisfying the positivity condition (1.7). There exist constants $C > 0$ and $A > 0$, depending only on $X$, $\hat\chi$ and $f$, such that
$$\Delta_{\hat\chi}\varphi(x,t) \leq C\,e^{A(\tilde\varphi(x,t) - \inf_{X\times[0,T]}\tilde\varphi)}, \tag{4.31}$$
where
$$\tilde\varphi(x,t) = \varphi(x,t) - \frac{1}{V}\int_X\varphi(\cdot,t)\,\hat\chi^n, \qquad V = \int_X\hat\chi^n. \tag{4.32}$$
We will apply the maximum principle to the following test function defined on $X\times[0,T]$:
$$G(x,t) = \log\mathrm{Tr}\,h - A\tilde\varphi + \frac{B}{2}F^2, \tag{4.33}$$
where $A, B > 0$ are constants to be determined. Before proceeding, we note the extra term $\frac{B}{2}F^2$, which does not appear in the standard test function used in the study of complex Monge-Ampère type equations (see for example [57, 79]) or the Kähler-Ricci flow [10].
Indeed, this is the main innovation of this test function, and we use it to overcome the difficult terms caused by the lack of concavity of the Monge-Ampère flow, as well as the cross terms involving the conformal factor $e^{-f}$.

We compute the evolution of the test function:
$$(\partial_t - L)G = \frac{1}{\mathrm{Tr}\,h}(\partial_t - L)\mathrm{Tr}\,h + \frac{e^{-f}F}{(\mathrm{Tr}\,h)^2}\chi^{j\bar k}(\partial_j\mathrm{Tr}\,h)(\partial_{\bar k}\mathrm{Tr}\,h) - A(\partial_t - L)\varphi + \frac{A}{V}\int_X\partial_t\varphi\,\hat\chi^n + BF(\partial_t - L)F - Be^{-f}F\chi^{j\bar k}F_jF_{\bar k}. \tag{4.34}$$
Substituting (4.16), (4.18) and (4.26),
$$(\partial_t - L)G = \frac{1}{\mathrm{Tr}\,h}\Big\{-e^{-f}F\chi^{k\bar j}\hat R_{\bar jk}{}^{p\bar a}\chi_{\bar ap} + e^{-f}F\hat R - e^{-f}F\hat\chi^{j\bar k}f_{\bar kj} + e^{-f}F\hat\chi^{j\bar k}f_jf_{\bar k} + e^{-f}F^{k\bar j,r\bar s}\hat\chi^{p\bar q}\hat\nabla_{\bar q}\chi_{\bar jk}\hat\nabla_p\chi_{\bar sr} - 2e^{-f}\mathrm{Re}\{\hat\chi^{j\bar k}F_jf_{\bar k}\} + \frac{e^{-f}F}{\mathrm{Tr}\,h}\chi^{j\bar k}(\partial_j\mathrm{Tr}\,h)(\partial_{\bar k}\mathrm{Tr}\,h)\Big\} + A(n-1)e^{-f}F - Ae^{-f}F\,\mathrm{Tr}\,h^{-1} + \frac{A}{V}\int_X e^{-f}F\,\hat\chi^n + BF^3\chi^{j\bar k}(e^{-f})_{\bar kj} - 2BF^2e^{-f}\,\mathrm{Re}\{\chi^{j\bar k}f_jF_{\bar k}\} - Be^{-f}F\chi^{j\bar k}F_jF_{\bar k}. \tag{4.35}$$
Using Lemma 3, we estimate
$$\frac{1}{\mathrm{Tr}\,h}\Big\{-e^{-f}F\chi^{k\bar j}\hat R_{\bar jk}{}^{p\bar a}\chi_{\bar ap} + e^{-f}F\hat R - e^{-f}F\hat\chi^{j\bar k}f_{\bar kj} + e^{-f}F\hat\chi^{j\bar k}f_jf_{\bar k}\Big\} \leq \frac{C}{\mathrm{Tr}\,h}\big\{(\mathrm{Tr}\,h^{-1})(\mathrm{Tr}\,h) + 1\big\} \leq C\,\mathrm{Tr}\,h^{-1}, \tag{4.36}$$
where the constant $C$ depends on $e^{-f}$ and the curvature of the background metric $\hat\chi$. Next, by (4.20) we have
$$e^{-f}F^{k\bar j,r\bar s}\hat\chi^{p\bar q}\hat\nabla_{\bar q}\chi_{\bar jk}\hat\nabla_p\chi_{\bar sr} = e^{-f}F\chi^{r\bar s}\chi^{k\bar j}\hat\chi^{p\bar q}\hat\nabla_{\bar q}\chi_{\bar jk}\hat\nabla_p\chi_{\bar sr} - e^{-f}F\chi^{r\bar j}\chi^{k\bar s}\hat\chi^{p\bar q}\hat\nabla_{\bar q}\chi_{\bar jk}\hat\nabla_p\chi_{\bar sr} = \frac{e^{-f}}{F}\hat\chi^{p\bar q}F_pF_{\bar q} - e^{-f}F\chi^{r\bar j}\chi^{k\bar s}\hat\chi^{p\bar q}\hat\nabla_{\bar q}\chi_{\bar jk}\hat\nabla_p\chi_{\bar sr}. \tag{4.37}$$
The term involving $\hat\chi^{p\bar q}F_pF_{\bar q}$ is the new bad term compared to standard arguments, and it is the reason for the addition of $\frac{B}{2}F^2$ to the test function $G$. The main inequality becomes
$$(\partial_t - L)G \leq \frac{1}{\mathrm{Tr}\,h}\Big\{\frac{e^{-f}}{F}\hat\chi^{j\bar k}F_jF_{\bar k} - 2e^{-f}\mathrm{Re}\{\hat\chi^{j\bar k}F_jf_{\bar k}\}\Big\} + \frac{e^{-f}F}{\mathrm{Tr}\,h}\Big\{\frac{1}{\mathrm{Tr}\,h}\chi^{j\bar k}(\partial_j\mathrm{Tr}\,h)(\partial_{\bar k}\mathrm{Tr}\,h) - \hat\chi^{p\bar q}\chi^{r\bar j}\chi^{k\bar s}\hat\nabla_{\bar q}\chi_{\bar jk}\hat\nabla_p\chi_{\bar sr}\Big\} - 2BF^2e^{-f}\mathrm{Re}\{\chi^{j\bar k}f_jF_{\bar k}\} - Be^{-f}F\chi^{j\bar k}F_jF_{\bar k} + \big\{C(1+B) - Ae^{-f}F\big\}\mathrm{Tr}\,h^{-1} + C(A+B+1). \tag{4.38}$$
By the inequality of Yau and Aubin [79, 3],
$$\frac{1}{\mathrm{Tr}\,h}\chi^{j\bar k}(\partial_j\mathrm{Tr}\,h)(\partial_{\bar k}\mathrm{Tr}\,h) - \hat\chi^{p\bar q}\chi^{r\bar j}\chi^{k\bar s}\hat\nabla_{\bar q}\chi_{\bar jk}\hat\nabla_p\chi_{\bar sr} \leq 0. \tag{4.39}$$
(4.39) 17 \fNext, we estimate 1 Tr h \u001ae\u2212f F \u02c6 \u03c7j\u00af kFjF\u00af k \u22122e\u2212fRe{\u02c6 \u03c7j\u00af kFjf\u00af k} \u001b \u2264 1 Tr h \u001a 2e\u2212f F \u02c6 \u03c7j\u00af kFjF\u00af k + e\u2212fF \u02c6 \u03c7j\u00af kfjf\u00af k \u001b \u2264 2e\u2212f F \u03c7j\u00af kFjF\u00af k + CTr h\u22121, (4.40) and \u22122BF 2e\u2212f Re{\u03c7j\u00af kfjF\u00af k} \u2264\u03c7j\u00af kFjF\u00af k + CB2Tr h\u22121. (4.41) Applying these estimates in the main inequality yields (\u2202t \u2212L)G \u2264 (1 + 2e\u2212fF \u22121 \u2212Be\u2212fF)\u03c7j\u00af kFjF\u00af k +{C(1 + B + B2) \u2212Ae\u2212fF}Tr h\u22121 + C(A + B + 1). (4.42) By Lemma 3, we have the uniform lower bound e\u2212fF \u2265\u03b4 > 0. Thus may choose B \u226b1 such that (1 + 2e\u2212fF \u22121 \u2212Be\u2212fF) \u22640. Next, we may choose A \u226bB \u226b1 such that (C(1 + B + B2) \u2212Ae\u2212fF) \u2264\u22121. Then if G attains a maximum on X \u00d7 [0, T] at a point (x0, t0) with t0 > 0, by the maximum principle we have the inequality 0 \u2264(\u2202t \u2212L)G \u2264\u2212Tr h\u22121 + C(A + B + 1). (4.43) It follows that the eigenvalues of h are bounded below at (x0, t0). The product of the eigenvalues of h is given by F, which is uniformly bounded along the \ufb02ow. Thus the eigenvalues of h are bounded above at (x, t0), and so Tr h \u2264C at (x0, t0). Therefore G(x, t) \u2264G(x0, t0) \u2264C \u2212A inf X\u00d7[0,T] \u02dc \u03d5. (4.44) If G attains a maximum at t0 = 0, we have already have G(x, t) \u2264C. Therefore log Tr h \u2264C + A \u02dc \u03d5 \u2212 inf X\u00d7[0,T] \u02dc \u03d5 ! (4.45) along the \ufb02ow. Q.E.D. Corollary 2 Let \u03d5 be a solution to (1.6) on X \u00d7 [0, T] satisfying the positivity condition (1.7). There exists a constant C > 0 depending only on X, \u02c6 \u03c7 and f such that C\u22121 \u02c6 \u03c7\u00af kj(z) \u2264\u03c7\u00af kj(z, t) \u2264C \u02c6 \u03c7\u00af kj(z). (4.46) Proof: We \ufb01rst use Lemma 4 to obtain an upper bound on \u2206\u02c6 \u03c7\u03d5 independent of time. Indeed, the oscillation of (supX \u02dc \u03d5 \u2212infX \u02dc \u03d5)(t) is bounded at each \ufb01xed time by Lemma 3. Since R \u02dc \u03d5 \u02c6 \u03c7n = 0, we must have that (supX \u02dc \u03d5)(t) \u22650 and (infX \u02dc \u03d5)(t) \u22640, hence \u02dc \u03d5 is uniformly bounded along the \ufb02ow. By Lemma 4, Tr h \u2264C. (4.47) This gives the upper bound of \u03c7. For the lower bound, we use that the determinant F[\u03d5] = \u03c7n/\u02c6 \u03c7n is uniformly bounded below by Lemma 3. Q.E.D. 18 \f4.5 Higher order estimates In this section, we will follow the main line of the argument given in Tsai [75] to prove the higher order estimates. We note that we cannot directly apply a standard theorem to our equation \u2202t\u03d5 = e\u2212fF[\u03d5], F[\u03d5] = det(\u02c6 \u03c7\u00af kj + \u03d5\u00af kj)/det\u02c6 \u03c7\u00af kj, since F[\u03d5] is not concave in the second derivatives of \u03d5. However, log F[\u03d5] is concave as a function of \u03d5\u00af kj, and to introduce the logarithm we split the argument into several steps to treat space and time seperately. First, we \ufb01x some notation. We will work locally on a ball Br(0) and cylinder Q = Br \u00d7 (T0, T). 
For functions v : Br \u2192R and u : Q \u2192R, we de\ufb01ne \u2225v\u2225C\u03b1(Br) = \u2225v\u2225L\u221e(Br) + sup x\u0338=y\u2208Br |v(x) \u2212v(y)| |x \u2212y|\u03b1 , (4.48) and \u2225u\u2225C\u03b1,\u03b1/2(Q) = \u2225u\u2225L\u221e(Q) + sup (x,t)\u0338=(y,s)\u2208Q |u(x, t) \u2212u(y, s)| (|x \u2212y| + |t \u2212s|1/2)\u03b1. (4.49) The main estimate of this section is the following. Lemma 5 Let \u03d5 be a solution to (1.6) on X \u00d7 [0, \u01eb) satisfying the positivity condition (1.7). Let B1 be a coordinate chart on X such that B1 \u2282Rn is a unit ball. Then there exists 0 < \u03b1 < 1 and C > 0, depending on \u02c6 \u03c7, f, and \u01eb, such that on Q = B1/2 \u00d7 [ \u01eb 2, \u01eb), \u2225\u2202t\u03d5\u2225C\u03b1,\u03b1/2(Q) + \u2225\u03d5\u00af kj\u2225C\u03b1,\u03b1/2(Q) \u2264C. (4.50) We \ufb01rst notice that the speed function of the \ufb02ow satis\ufb01es the equation \u2202t(e\u2212fF) = L(e\u2212fF) = e\u2212fF\u03c7j\u00af k(e\u2212fF)\u00af kj. (4.51) By Lemma 3 and Corollary 2, we see that this is a uniformly parabolic linear equation with bounded coe\ufb03cients on X \u00d7 [0, \u01eb). It follows from the Krylov-Safanov Harnack inequality [45] that \u2225e\u2212fF\u2225C\u03b1,\u03b1/2(Q) \u2264C, (4.52) for some \u03b1 \u2208(0, 1). This implies that \u2225F\u2225C\u03b1,\u03b1/2(Q) \u2264C, \u2225\u2202t\u03d5\u2225C\u03b1,\u03b1/2(Q) \u2264C. (4.53) For each \ufb01xed t \u2208[ \u01eb 2, \u01eb), by covering the compact manifold X we can control the H\u00a8 older norm of F(\u00b7, t) on all of X, and we have the space norm estimate \u2225F(\u00b7, t)\u2225C\u03b1(B1) \u2264C, (4.54) where C is a constant independent of t \u2208[ \u01eb 2, \u01eb). Recall that F[\u03d5] = det(\u02c6 \u03c7\u00af kj + \u03d5\u00af kj)/det\u02c6 \u03c7\u00af kj, so after possibly taking a smaller 0 < \u03b1 < 1, we may apply the estimates in TosattiWeinkove-Wang-Yang [74] to establish \u2225\u03d5\u00af kj(\u00b7, t)\u2225C\u03b1(B1) \u2264C (4.55) 19 \fwhere C is independent of t \u2208[ \u01eb 2, \u01eb). Therefore, in view of a standard lemma [44] in parabolic H\u00a8 older spaces which allows us to treat time and space separately, it remains to show that sup s,t\u2208[ \u01eb 2 ,\u01eb],s\u0338=t |\u03d5\u00af kj(z, s) \u2212\u03d5\u00af kj(z, t)| |s \u2212t| \u03b1 2 \u2264C, (4.56) for some C independent of z \u2208B1/2. Following Tsai [75], for 0 < h < \u01eb 2 we consider the function U\u03bb,h(z, t) = \u03bb\u03d5(z, t) + (1 \u2212\u03bb)\u03d5(z, t + h) with 0 \u2264\u03bb \u22641, (4.57) de\ufb01ned on B1 \u00d7 [ \u01eb 2, \u01eb \u2212h). Compute log F(z, t) \u2212log F(z, t + h) = Z 1 0 d d\u03bb log det \u0010 \u02c6 \u03c7\u00af kj + \u2202\u00af k\u2202jU\u03bb,h(z, t) \u0011 d\u03bb (4.58) = \u0012Z 1 0 \u03c7j\u00af k \u03bb,h(z, t) d\u03bb \u0013 \u00b7 (\u03d5\u00af kj(z, t) \u2212\u03d5\u00af kj(z, t + h)) where \u03c7j\u00af k \u03bb,h(z, t) is the inverse of (\u02c6 \u03c7\u00af kj + \u2202j\u2202\u00af kU\u03bb,h) > 0. Denote aj\u00af k h (z, t) = Z 1 0 \u03c7j\u00af k \u03bb,h(z, t) d\u03bb. (4.59) It follows from Corollary 2 and the space norm estimate (4.55) that aj\u00af k is uniform elliptic and satis\ufb01es \u2225aj\u00af k h (\u00b7, t)\u2225C\u03b1(B1) \u2264C (4.60) for some constant C independent of h and t \u2208[ \u01eb 2, \u01eb \u2212h). Denote \u03d5h(z, t) = \u03d5(z, t) \u2212\u03d5(z, t + h) |h| \u03b1 4 (4.61) with (z, t) \u2208B1 \u00d7 [ \u01eb 2, \u01eb \u2212h). 
Then, \u03d5h satis\ufb01es the equation aj\u00af k h (z, t)\u2202j\u2202\u00af k \u03d5h(z, t) = gh(z, t) (4.62) with gh(z, t) = log F(z, t) \u2212log F(z, t + h) |h| \u03b1 4 . (4.63) As we discussed above, at a \ufb01xed time we have that aj\u00af k h (\u00b7, t) is uniformly elliptic and H\u00a8 older continuous, with constants independent of time t and parameter h. We need the following lemma to estimate the H\u00a8 older norm of gh. 20 \fLemma 6 The function gh(z, t) satis\ufb01es \u2225gh(\u00b7, t)\u2225C \u03b1 4 (B1) \u2264C, (4.64) where C is independent of h and t \u2208[ \u01eb 2, \u01eb \u2212h). Given this lemma, we can apply the elliptic Schauder estimate to equation (4.62) at a \ufb01xed time t and obtain \u2225\u03d5h(\u00b7, t)\u2225C2(B1/2) \u2264C \u0010 \u2225gh(\u00b7, t)\u2225C \u03b1 4 (B1) + \u2225\u03d5h(\u00b7, t)\u2225L\u221e(B1) \u0011 , (4.65) with C independent of t and h. By the estimate of the speed function in Lemma 3, we have \u2225\u03d5h(\u00b7, t)\u2225L\u221e(B1) \u2264C with C independent of t and h. This together with Lemma 6 implies sup z\u2208B1/2 |\u03d5\u00af kj(z, t) \u2212\u03d5\u00af kj(z, t + h)| |h| \u03b1 4 \u2264C (4.66) for all h > 0 small, where C is independent of h and t \u2208[ \u01eb 2, \u01eb \u2212h). Hence, we complete the proof of Lemma 5. Proof of Lemma 6: Since F[\u03d5] is uniformly bounded away from zero along the \ufb02ow, we know that by (4.53) that \u2225log F\u2225C\u03b1,\u03b1/2(B1\u00d7[ \u01eb 2 ,\u01eb)) \u2264C. (4.67) This implies that 1 |h| \u03b1 4 | log F(x, t) \u2212log F(x, t + h) \u2212log F(y, t) + log F(y, t + h)| \u2264C|x \u2212y| \u03b1 4 . (4.68) Indeed, if |h| \u2264|x \u2212y|, then 1 |h| \u03b1 4 {| log F(x, t) \u2212log F(x, t + h)| + | log F(y, t) \u2212log F(y, t + h)|} \u2264 C|h| \u03b1 4 \u2264C|x \u2212y| \u03b1 4 . (4.69) On the other hand, if |h| \u2265|x \u2212y|, then 1 |h| \u03b1 4 {| log F(x, t) \u2212log F(y, t)| + | log F(x, t + h) \u2212log F(y, t + h)|} \u2264 C |x \u2212y| \u03b1 4 |h| \u03b1 4 |x \u2212y| \u03b1 4 \u2264C|x \u2212y| \u03b1 4 . (4.70) Combining these two cases and using the triangle inequality proves (4.68). Thus for any t \u2208[ \u01eb 2, \u01eb \u2212h) there holds \r \r \r log F(\u00b7, t) \u2212log F(\u00b7, t + h) |h| \u03b1 4 \r \r \r C\u03b1/4(B1) \u2264C, (4.71) where C is independent of t and h. This completes the proof of the lemma. Q.E.D. 21 \f5 Proof of Theorem 2 and Theorem 3 In this section, we prove the long time existence and convergence of the complex MongeAmp` ere \ufb02ow, which completes the proof for Theorem 3. Then, from the discussion in \u00a74.1, we also obtain the proof of Theorem 2. 5.1 Long-time existence In this section, we show that the solution \u03d5 and its normalization \u02dc \u03d5 are smooth and exist for all time. Since the \ufb02ow is parabolic, a solution exists on a maximal time interval [0, T) with T > 0. Di\ufb00erentiating the equation, we obtain \u2202t\u2202p\u03d5 = e\u2212fF\u03c7k\u00af j \u02c6 \u2207k \u02c6 \u2207\u00af j\u2202p\u03d5 \u2212e\u2212fF\u2202p\u03d5. (5.1) By Corollary 2 and Lemma 5, this is a uniformly parabolic equation for \u2202p\u03d5 with H\u00a8 older continuous coe\ufb03cients. By parabolic Schauder estimates (e.g. [44]), we obtain the C2+\u03b1,1+\u03b1/2 estimate for \u2202p\u03d5. A bootstrapping argument gives estimates on all derivatives of \u03d5. 
If T < \u221e, then our estimates allow us to take a subsequential limit and then extend the \ufb02ow using the short-time existence theorem. It follows that a smooth solution exists on [0, \u221e). We already noted in the proof of Corollary 2 that \u02dc \u03d5 is uniformly bounded, and the above argument shows that \u02dc \u03d5 and all its derivatives are bounded along the \ufb02ow. 5.2 Dilaton functional We now return to the Anomaly \ufb02ow \u2202t(\u2225\u2126\u2225\u03c9\u03c9n\u22121) = i\u2202\u00af \u2202(\u03c9n\u22122) with ansatz \u2225\u2126\u2225\u03c9\u03c9n\u22121 = \u03c7n\u22121. Recall that this ansatz is preserved and the conformally balanced metric \u03c9(t) is given by the expression \u03c9 = \u2225\u2126\u2225\u22122/(n\u22122) \u03c7 \u03c7, \u03c7(t) = \u02c6 \u03c7 + i\u2202\u00af \u2202\u03d5(t), (5.2) where \u03d5 solves the Monge-Amp` ere \ufb02ow (1.6). By the previous section, the Anomaly \ufb02ow for the Hermitian metric \u03c9(t) exists for all time. Let M(\u03c9) = Z X \u2225\u2126\u2225\u03c9 \u03c9n (5.3) denote the dilaton functional. This functional was introduced by Garcia-Fernandez, Rubio, Shahbazi and Tipler [33] to formulate a variational principle for the Hull-Strominger system on holomorphic Courant algebroids. Here we compute its evolution along the Anomaly \ufb02ow. Lemma 7 Let \u03c9(t) be a solution to the Anomaly \ufb02ow (1.1) with n \u22653. Then the dilaton functional evolves by d dtM(\u03c9(t)) = 1 2 1 (n \u22121)(n \u22122) Z X{|T|2 \u22122|\u03c4|2} \u03c9n. (5.4) 22 \fProof: Wedging the equation d dt(\u2225\u2126\u2225\u03c9\u03c9n\u22121) = i\u2202\u00af \u2202(\u03c9n\u22122) with \u03c9 gives d dt\u2225\u2126\u2225\u03c9 ! \u03c9n + (n \u22121)\u2225\u2126\u2225\u03c9\u03c9n\u22121 \u2227\u02d9 \u03c9 = i\u2202\u00af \u2202\u03c9n\u22122 \u2227\u03c9. (5.5) By de\ufb01nition \u2225\u2126\u22252 \u03c9 = \u2126\u00af \u2126(detg)\u22121, and we have d dt\u2225\u2126\u2225\u03c9 ! \u03c9n = \u2212n 2 \u2225\u2126\u2225\u03c9\u03c9n\u22121 \u2227\u02d9 \u03c9. (5.6) Therefore (n \u22122) 2 \u2225\u2126\u2225\u03c9\u03c9n\u22121 \u2227\u02d9 \u03c9 = i\u2202\u00af \u2202\u03c9n\u22122 \u2227\u03c9. (5.7) It follows that d dt (\u2225\u2126\u2225\u03c9\u03c9n) = d dt \u0010 \u2225\u2126\u2225\u03c9\u03c9n\u22121\u0011 \u2227\u03c9 + \u2225\u2126\u2225\u03c9\u03c9n\u22121 \u2227\u02d9 \u03c9 = \u0012 n n \u22122 \u0013 i\u2202\u00af \u2202\u03c9n\u22122 \u2227\u03c9. We now use Stokes theorem to obtain d dtM(t) = Z X d dt (\u2225\u2126\u2225\u03c9\u03c9n) = \u2212 n n \u22122 Z X i\u2202\u03c9n\u22122 \u2227\u00af \u2202\u03c9. (5.8) Hence d dtM(t) = \u2212n Z X i\u2202\u03c9 \u2227\u00af \u2202\u03c9 \u2227\u03c9n\u22123. (5.9) Applying identity (A.15) from the appendix with T = i\u2202\u03c9 and \u00af T = \u2212i\u00af \u2202\u03c9, we may rewrite the integral as \u2212n Z X i\u2202\u03c9 \u2227\u00af \u2202\u03c9 \u2227\u03c9n\u22123 = 1 2 1 (n \u22121)(n \u22122) Z X{|T|2 \u22122|\u03c4|2} \u03c9n. (5.10) This completes the proof of the lemma. Q.E.D. Lemma 8 Let \u03c9(t) be a solution to the Anomaly \ufb02ow (1.1) with n \u22653. Further assume that the solution \u03c9(t) is conformally K\u00a8 ahler. Then the dilaton functional evolves by d dtM(\u03c9(t)) = \u2212 1 2(n \u22121) Z X |T|2 \u03c9n. (5.11) In particular, the dilaton functional is monotone decreasing along the \ufb02ow. Proof: Fix a time t, and let \u03c9(t) = e\u03c8\u02c6 \u03c9 with d\u02c6 \u03c9 = 0. 
The components of the torsion of \u03c9 = ig\u00af kjdzj \u2227d\u00af zk are given by T\u00af kj\u2113= \u2202j\u03c8 g\u00af k\u2113\u2212\u2202\u2113\u03c8 g\u00af kj, T\u2113= \u2212(n \u22121)\u2202\u2113\u03c8. (5.12) Computing the norms of these torsion tensors gives |T|2 = 2(n \u22121)|\u2207\u03c8|2 g, |\u03c4|2 = (n \u22121)2|\u2207\u03c8|2 g. (5.13) Therefore 2|\u03c4|2 = (n \u22121)|T|2 in this case, and the identity follows from Lemma 7. Q.E.D. 23 \fLemma 9 Let \u03c9(t) be a solution on [0, \u221e) to the Anomaly \ufb02ow (1.1) with n \u22653 with initial data satisfying \u2225\u2126\u2225\u03c9(0)\u03c9(0)n\u22121 = \u02c6 \u03c7n\u22121, (5.14) where \u02c6 \u03c7 is a K\u00a8 ahler metric. Then M(\u03c9(t)) is monotonically decreasing and furthermore d dtM(\u03c9(t)) \u21920 as t \u2192\u221e. Proof: By the formula for the ansatz (4.8), we have \u03c9 = \u2225\u2126\u2225\u22122/(n\u22122) \u03c7 \u03c7, and the metric is conformally K\u00a8 ahler \u03c9(t) = e\u03c8(t)\u03c7(t) with \u03c8 = 1 (n \u22122) log \u2225\u2126\u2225\u22122 \u03c7 . (5.15) Furthermore |T|2 = 2(n \u22121)|\u2207\u03c8|2 \u03c9 = 2(n \u22121) (n \u22122)2 \u2225\u2126\u22252/(n\u22122) \u03c7 \f \f \f\u2207log \u2225\u2126\u2225\u22122 \u03c7 \f \f \f 2 \u03c7 . (5.16) Also, \u03c9n = \u2225\u2126\u2225\u22122n/(n\u22122) \u03c7 \u03c7n = \u2225\u2126\u2225\u22122n/(n\u22122) \u03c7 \u2225\u2126\u2225\u22122 \u03c7 \u2225\u2126\u22252 \u02c6 \u03c7 \u02c6 \u03c7n. (5.17) We may apply Lemma 8 and obtain that M(t) is monotonically decreasing along the \ufb02ow. Recall that (n \u22121) \u02d9 \u03d5 = \u2225\u2126\u2225\u22122 \u03c7 gives the \ufb02ow of the potential. Then d dtM(t) = \u2212 1 (n \u22122)2 Z X(\u2225\u2126\u2225\u22122 \u03c7 ) 2n\u22123 n\u22122 \f \f \f\u2207log \u2225\u2126\u2225\u22122 \u03c7 \f \f \f 2 \u03c7 \u2225\u2126\u22252 \u02c6 \u03c7 \u02c6 \u03c7n (5.18) = \u2212(n \u22121) 2n\u22123 n\u22122 (n \u22122)2 Z X \u02d9 \u03d5 1 n\u22122 |\u2207\u02d9 \u03d5|2 \u03c7 \u2225\u2126\u22252 \u02c6 \u03c7 \u02c6 \u03c7n. We compute d2 dt2M(t) = \u2212(n \u22121) 2n\u22123 n\u22122 (n \u22122)2 \u001a 1 n \u22122 Z X \u02d9 \u03d5 \u2212(n\u22123) n\u22122 \u2202t \u02d9 \u03d5 |\u2207\u02d9 \u03d5|2 \u03c7 \u2225\u2126\u22252 \u02c6 \u03c7 \u02c6 \u03c7n \u2212 Z X \u02d9 \u03d5 1 n\u22122 \u03c7j\u00af q\u03c7p\u00af k\u2202j \u02d9 \u03d5 \u2202\u00af k \u02d9 \u03d5 \u02d9 \u03d5\u00af qp \u2225\u2126\u22252 \u02c6 \u03c7 \u02c6 \u03c7n + Z X \u02d9 \u03d5 1 n\u22122 \u03c7j\u00af k (\u2202j\u2202t \u02d9 \u03d5 \u2202\u00af k \u02d9 \u03d5 + \u2202j \u02d9 \u03d5 \u2202\u00af k\u2202t \u02d9 \u03d5) \u2225\u2126\u22252 \u02c6 \u03c7 \u02c6 \u03c7n \u001b . (5.19) By our estimates, \u02d9 \u03d5 and \u03c7 are uniformly bounded above and away from zero, and all spacetime derivatives of \u02d9 \u03d5 are bounded. M(t) is uniformly bounded, monotone decreasing, and d2 dt2M(t) is uniformly bounded. It follows that d dtM(t) \u21920 as t \u21920. Q.E.D. 24 \f5.3 Convergence It remains only to show the convergence of the Anomaly \ufb02ow (1.1), which we shall do, using the dilaton functional. Suppose there exists a sequence of times tj \u2192\u221esuch that \u03c9(tj) does not converge to \u03c9\u221eas given in the theorem. By our estimates and the ArzelaAscoli theorem, upon taking a subsequence we have that \u03c9(tjk) converges smoothly to a metric \u03c9\u2032 \u221e\u0338= \u03c9\u221e. 
Since d dtM(t) \u21920, we conclude by Lemma 8 that Z X |T(\u03c9\u2032 \u221e)|2(\u03c9\u2032 \u221e)n = 0 (5.20) and hence \u03c9\u2032 \u221eis K\u00a8 ahler. By the ansatz (4.8), \u03c9\u2032 \u221e= \u2225\u2126\u2225\u22122/(n\u22122) \u03c7\u2032 \u221e \u03c7\u2032 \u221e, (5.21) and so \u2225\u2126\u2225\u03c7\u2032 \u221eis constant. It follows that \u03c7\u2032 \u221e= \u02c6 \u03c7 + i\u2202\u00af \u2202\u03d5\u221eis the unique K\u00a8 ahler Ricci-\ufb02at metric in [\u02c6 \u03c7]. Since Z X \u2225\u2126\u22252 \u03c7\u2032 \u221e (\u03c7\u2032 \u221e)n n! = Z X in2\u2126\u2227\u00af \u2126, (5.22) the constant \u2225\u2126\u2225\u03c7\u2032 \u221eis identi\ufb01ed. Thus \u03c9\u2032 \u221e= \u03c9\u221egiven in the theorem, and we have smooth convergence. Q.E.D. We chose the argument using the dilaton functional in the belief that it will be useful in future studies of the Anomaly \ufb02ow. For those readers who are only interested in the scalar equation (1.6), there are alternate ways to establish convergence of \u02dc \u03d5. For example, the functional R X e\u2212fF 2\u02c6 \u03c7n is also monotone decreasing. Alternatively, convergence can be obtained by using the Li-Yau Harnack inequality as in [10, 35, 63]. In this case, we would use the Li-Yau Harnack estimate for the heat equation ut = gj\u00af ku\u00af kj on a Hermitian manifold (M, g) proved by Gill [35], and apply it to the di\ufb00erentiated equation \u2202t \u02d9 \u03d5 = e\u2212fF\u03c7j\u00af k \u02d9 \u03d5\u00af kj. 6 Further Developments We conclude with some observations and open questions. (a) The convergence theorems established in this paper for the \ufb02ow (1.1) should be viewed as only the \ufb01rst step in a fuller theory yet to be developed. For example, we do not know at this moment whether the \ufb02ow (1.1) will converge if the initial data is only known to satisfy \u2225\u2126\u2225\u03c90\u03c9n\u22121 0 \u2208[\u02c6 \u03c7n\u22121]. We expect that it will not, unless \u2225\u2126\u2225\u03c90\u03c9n\u22121 0 = (\u03c7\u2032)n\u22121 for some K\u00a8 ahler form \u03c7\u2032, in which case [\u03c7\u2032] = [\u02c6 \u03c7]. If so, whether the \ufb02ow (1.1) converges with initial data \u2225\u2126\u2225\u03c90\u03c9n\u22121 0 may serve as a criterion for whether \u2225\u2126\u2225\u03c90\u03c9n\u22121 0 is the (n \u22121)-th power of a K\u00a8 ahler form. In general, we actually expect the failure of convergence of the \ufb02ow to provide important geometric information. As just stated above, this failure may be caused by the choice of 25 \finitial data, even within the (n \u22121, n \u22121)-cohomology class [\u02c6 \u03c7n\u22121] with \u02c6 \u03c7 K\u00a8 ahler. More important, it may be caused by the class [\u2225\u2126\u2225\u03c90\u03c9n\u22121 0 ] not containing \u02c6 \u03c7n\u22121 for any K\u00a8 ahler form \u03c7, and in particular by the manifold X not being K\u00a8 ahler. In all these situations, we expect the formation of singularities of the \ufb02ow and/or long-time behavior to re\ufb02ect the non-K\u00a8 ahler setting. We shall return to these issues elsewhere. (b) The existence of an initial metric \u03c90 satisfying the condition (4.5) is equivalent to the existence of a K\u00a8 ahler metric \u02c6 \u03c7. Indeed, if \u02c6 \u03c7 is a K\u00a8 ahler metric, we can set \u03c90 = \u2225\u2126\u2225 \u2212 2 n\u22122 \u02c6 \u03c7 \u02c6 \u03c7 to obtain a metric satisfying (4.5). 
In fact, any conformally balanced initial metric which is conformally K\u00a8 ahler satis\ufb01es (4.5). Let \u03c90 be an initial conformally balanced metric such that \u03c90 = e\u03c8 \u02c6 \u03c7 where \u03c8 : X \u2192 R is a smooth function and \u02c6 \u03c7 is a K\u00a8 ahler metric. Substituting \u03c90 = e\u03c8 \u02c6 \u03c7, we obtain \u2225\u2126\u2225\u03c90\u03c9n\u22121 0 = (e( n 2 \u22121)\u03c8\u2225\u2126\u2225\u02c6 \u03c7)\u02c6 \u03c7n\u22121. (6.1) Since d(\u2225\u2126\u2225\u03c90\u03c9n\u22121 0 ) = 0, we conclude that e( n 2 \u22121)\u03c8\u2225\u2126\u2225\u02c6 \u03c7 = C where C > 0 is a constant. It follows that \u2225\u2126\u2225\u03c90\u03c9n\u22121 0 = C \u02c6 \u03c7n\u22121. After replacing \u02c6 \u03c7 by C1/(n\u22121) \u02c6 \u03c7, we see that the ansatz (4.5) is satis\ufb01ed. In particular, we have shown that the Anomaly \ufb02ow (1.1) preserves the conformally K\u00a8 ahler condition. (c) Given an initial metric \u03c90 satisfying d(\u2225\u2126\u2225\u03c90\u03c9n\u22121 0 ) = 0, the Anomaly \ufb02ow (1.1) preserves the balanced class of the initial metric. d dt[\u2225\u2126\u2225\u03c9(t)\u03c9(t)n\u22121] = [i\u2202\u00af \u2202\u03c9n\u22122] = 0. (6.2) Here we take cohomology classes in Bott-Chern cohomology Hn\u22121,n\u22121 BC (X, R) = {closed real (n \u22121, n \u22121) forms} {i\u2202\u00af \u2202\u03b2 : \u03b2 \u2208\u2126n\u22122,n\u22122(X, R)} . (6.3) Thus \u2225\u2126\u2225\u03c9(t)\u03c9(t)n\u22121 \u2208[\u2225\u2126\u2225\u03c90\u03c9n\u22121 0 ], (6.4) where [\u2225\u2126\u2225\u03c90\u03c9n\u22121 0 ] \u2208Hn\u22121,n\u22121 BC (X, R) (6.5) is the balanced class of \u03c90. Since stationary points of the Anomaly \ufb02ow are K\u00a8 ahler metrics, the Anomaly \ufb02ow could potentially be used to study the relation between the balanced cone and K\u00a8 ahler cone on a K\u00a8 ahler Calabi-Yau manifold (X, \u2126). The interaction between these two cones was explored by J.-X. Fu and J. Xiao [30], and they raised the question of detecting when a balanced class contains a K\u00a8 ahler metric (or more generally, a limit of K\u00a8 ahler metrics). Examples are given in [30], [70] of positive balanced classes on a K\u00a8 ahler manifold which do not contain a K\u00a8 ahler metric. As discussed above in (a), the formation 26 \fof singularities of the Anomaly \ufb02ow may be related to the properties of the initial balanced class. (d) Similar questions to (a) have been brought to our attention in informal discussions with T. Collins, in connection with a conjecture of Lejmi-Sz\u00b4 ekelyhidi [47]. We brie\ufb02y describe here one special case of the Lejmi-Sz\u00b4 ekelyhidi conjecture. Let \u03c9 and \u03b1 be two K\u00a8 ahler metrics on X. It is conjectured that if Z X(\u03c9n \u2212n\u03c9 \u2227\u03b1n\u22121) \u22650, Z D(\u03c9n\u22121 \u2212\u03b1n\u22121) > 0, (6.6) for every irreducible divisor D, then there exists a K\u00a8 ahler metric \u03c9\u2032 \u2208[\u03c9] satisfying the positivity condition \u03c9\u2032n\u22121 \u2212\u03b1n\u22121 > 0. (6.7) It was proved by Xiao [78] that one can \ufb01nd a balanced Hermitian metric \u02dc \u03c9 such that [\u02dc \u03c9n\u22121] = [\u03c9n\u22121] and \u02dc \u03c9n\u22121\u2212\u03b1n\u22121 > 0, but it remains to \ufb01nd a K\u00a8 ahler metric in the balanced class [\u03c9n\u22121] with the desired positivity. 
The positivity condition (6.7) is of interest, as it corresponds to subsolutions to the fully nonlinear PDE n\u03b1n\u22121 \u2227(\u03c9 + i\u2202\u00af \u2202u) = (\u03c9 + i\u2202\u00af \u2202u)n. (6.8) The existence of such subsolutions provides the existence of a genuine solution, as established in [64, 17] (see also [68, 15, 63] for extensions and generalizations). (e) There is another \ufb02ow super\ufb01cially similar to (1.1), but which can be considered for any compact complex manifold X, and not just manifolds which admit a nowhere holomorphic (n, 0)-form \u2126, \u2202t\u03c9n\u22121 = i\u2202\u00af \u2202\u03c9n\u22122, (6.9) with initial data \u03c90 satisfying d\u03c9n\u22121 0 = 0. A similar computation to the one in the proof of Theorem 4, using Lemma 4 in [59], shows that the \ufb02ow (6.9) can be expressed as \u2202tg\u00af kj = \u2212 1 (n \u22121)\u2207mT\u00af kjm + 1 2(n \u22121){\u2212gq\u00af pgs\u00af rT\u00af kqs \u00af Tj \u00af p\u00af r + |T|2 n \u22121g\u00af kj} (6.10) for n \u22654. From this formula, it is easy to deduce case (i) in Lemma 1, namely that the stationary points of the \ufb02ow (6.9) are K\u00a8 ahler metrics: it su\ufb03ces to set (6.10) to zero and to take the trace in order to obtain |T|2 = 0. However, the \ufb02ow (6.9) may be hard to use, because it is not parabolic, and its stationary set may be too large, as it contains all K\u00a8 ahler metrics. (f) The \ufb02ow (1.1) can be viewed as a K\u00a8 ahler analogue of the inverse Gauss curvature \ufb02ow. Indeed, if we consider a \ufb02ow of a strictly closed convex hypersurface Mt in Rn by the inverse of its Gauss curvature, then it can be expressed [75] as \u2202tu = det(ugij + \u2207i\u2207ju) detgij , u(x, 0) = u0(x) > 0, (6.11) 27 \fwhere u is the support function u : Sn \u00d7 [0, T) de\ufb01ned by u(N, t) = \u27e8P, N\u27e9where P \u2208Mt is the point on Mt with normal N, and (Sn, gij) is the standard sphere. The right hand side of this \ufb02ow exhibits the determinant of the Hessian of the unknown u, just as the right hand side of the equation (1.6). A Appendix In this Appendix, we group together our conventions for di\ufb00erential forms and several identities needed for the proof of Theorem 1 and Theorem 4. A.1 Components of a di\ufb00erential form Let \u03d5 be a (p, q)-form on the manifold X. We de\ufb01ne its components \u03d5\u00af k1\u00b7\u00b7\u00b7\u00af kqj1\u00b7\u00b7\u00b7jp by \u03d5 = 1 p!q! X \u03d5\u00af k1\u00b7\u00b7\u00b7\u00af kqj1\u00b7\u00b7\u00b7jp dzjp \u2227\u00b7 \u00b7 \u00b7 \u2227dzj1 \u2227d\u00af zkq \u2227\u00b7 \u00b7 \u00b7 \u2227d\u00af zk1. (A.1) A.2 Contraction identities We note a few basic identities for contracting di\ufb00erential forms of degree (p, p). Let \u03c9 = ig\u00af kjdzj \u2227d\u00af zk be a Hermitian metric. For a (1, 1) form \u03b1 \u03b1 = \u03b1\u00af pq dzq \u2227d\u00af zp, (A.2) we have \u03b1 \u2227 \u03c9n\u22121 (n \u22121)! = \u2212i(gj\u00af k\u03b1\u00af kj) \u03c9n n! . (A.3) Next, for a (2, 2) form \u03a6 with components \u03a6 = 1 4\u03a6\u00af pq\u00af rs dzs \u2227d\u00af zr \u2227dzq \u2227d\u00af zp, (A.4) we have \u03a6 \u2227 \u03c9n\u22122 (n \u22122)! = \u22121 2 \u001a gj\u00af kg\u2113\u00af m\u03a6\u00af kj \u00af m\u2113 \u001b \u03c9n n! . (A.5) For a (3, 3) form \u03a8 with components \u03a8 = 1 36\u03a8\u00af kj \u00af pq\u00af rs dzs \u2227d\u00af zr \u2227dzq \u2227d\u00af zp \u2227dzj \u2227d\u00af zk, (A.6) we have \u03a8 \u2227 \u03c9n\u22123 (n \u22123)! 
= i 6 \u001a gj\u00af kgq\u00af pgs\u00af r\u03a8\u00af kj \u00af pq\u00af rs \u001b \u03c9n n! . (A.7) 28 \fA.3 Computing T \u2227\u00af T Next, let T be (2, 1) form. T = 1 2T\u00af ksjdzj \u2227dzs \u2227d\u00af zk. (A.8) Then T \u2227\u00af T = 1 4T\u00af ksj \u00af Tq\u00af p\u00af r dzj \u2227d\u00af zk \u2227dzs \u2227d\u00af zr \u2227dzq \u2227d\u00af zp. (A.9) Antisymmetrizing T \u2227\u00af T = 1 (3!)2(T \u2227\u00af T)\u00af pq\u00af rs\u00af kj dzj \u2227d\u00af zk \u2227dzs \u2227d\u00af zr \u2227dzq \u2227d\u00af zp. (A.10) where (T \u2227\u00af T)\u00af pq\u00af rs\u00af kj = \u001a T\u00af ksj \u00af Tq\u00af p\u00af r + T\u00af rsj \u00af Tq\u00af k\u00af p + T\u00af psj \u00af Tq\u00af r\u00af k + T\u00af kqs \u00af Tj \u00af p\u00af r + T\u00af rqs \u00af Tj\u00af k\u00af p +T\u00af pqs \u00af Tj\u00af r\u00af k + T\u00af kjq \u00af Ts\u00af p\u00af r + T\u00af rjq \u00af Ts\u00af k\u00af p + T\u00af pjq \u00af Ts\u00af r\u00af k \u001b . (A.11) Let \u03c4 = Tqdzq, Tq = gj\u00af kT\u00af kjq, |\u03c4|2 = gq\u00af pTq \u00af T\u00af p, |T|2 = gq\u00af pgs\u00af rgj\u00af kT\u00af psj \u00af Tq\u00af r\u00af k. (A.12) Then gq\u00af pgs\u00af r(T \u2227\u00af T)\u00af rs\u00af pq\u00af kj = gq\u00af pgs\u00af r(2T\u00af psj \u00af Tq\u00af r\u00af k + T\u00af kqs \u00af Tj \u00af p\u00af r) \u22122gs\u00af r(T\u00af kjs \u00af T\u00af r + Ts \u00af Tj\u00af k\u00af r) \u22122Tj \u00af T\u00af k, (A.13) and gj\u00af kgq\u00af pgs\u00af r(T \u2227\u00af T)\u00af pq\u00af rs\u00af kj = 3{|T|2 \u22122|\u03c4|2}. (A.14) Applying formula (A.7), we obtain T \u2227\u00af T \u2227\u03c9n\u22123 = i 2 1 n(n \u22121)(n \u22122){|T|2 \u22122|\u03c4|2} \u03c9n. (A.15) A.4 Computing i\u2202\u00af \u2202\u03c9 We have i\u2202\u00af \u2202\u03c9 = 1 22(i\u2202\u00af \u2202\u03c9)\u00af kj\u00af \u2113m dzm \u2227d\u00af z\u2113\u2227dzj \u2227d\u00af zk, (A.16) with (i\u2202\u00af \u2202\u03c9)\u00af kj\u00af \u2113m = \u2202\u00af \u2113\u2202jg\u00af km \u2212\u2202\u00af \u2113\u2202mg\u00af kj \u2212\u2202\u00af k\u2202jg\u00af \u2113m + \u2202\u00af k\u2202mg\u00af \u2113j. (A.17) Using the de\ufb01nition of curvature, (i\u2202\u00af \u2202\u03c9)\u00af kj\u00af \u2113m = R\u00af kj\u00af \u2113m \u2212R\u00af km\u00af \u2113j + R\u00af \u2113m\u00af kj \u2212R\u00af \u2113j\u00af km \u2212gs\u00af rT\u00af rmj \u00af Ts\u00af \u2113\u00af k. (A.18) Therefore gj\u00af k(i\u2202\u00af \u2202\u03c9)\u00af kj \u00af m\u2113= \u02dc R\u00af \u2113m \u2212R\u2032\u2032 \u00af \u2113m + R\u00af \u2113m \u2212R\u2032 \u00af \u2113m \u2212gj\u00af kgs\u00af rT\u00af rmj \u00af Ts\u00af \u2113\u00af k. (A.19) 29" + }, + { + "url": "http://arxiv.org/abs/1801.09842v1", + "title": "Fu-Yau Hessian Equations", + "abstract": "We solve the Fu-Yau equation for arbitrary dimension and arbitrary slope\n$\\alpha'$. Actually we obtain at the same time a solution of the open case\n$\\alpha'>0$, an improved solution of the known case $\\alpha'<0$, and solutions\nfor a family of Hessian equations which includes the Fu-Yau equation as a\nspecial case. The method is based on the introduction of a more stringent\nellipticity condition than the usual $\\Gamma_k$ admissible cone condition, and\nwhich can be shown to be preserved by precise estimates with scale.", + "authors": "Duong H. 
Phong, Sebastien Picard, Xiangwen Zhang", + "published": "2018-01-30", + "updated": "2018-01-30", + "primary_cat": "math.DG", + "cats": [ + "math.DG", + "math.AP", + "math.CV" + ], + "main_content": "Introduction The main goal of this paper is to solve the following non-linear partial di\ufb00erential equation proposed in 2008 by J.X. Fu and S.T. Yau [10], i\u2202\u00af \u2202(eu\u02c6 \u03c9 \u2212\u03b1\u2032e\u2212u\u03c1) \u2227\u02c6 \u03c9n\u22122 + \u03b1\u2032i\u2202\u00af \u2202u \u2227i\u2202\u00af \u2202u \u2227\u02c6 \u03c9n\u22122 + \u00b5 \u02c6 \u03c9n = 0. (1.1) Here the unknown is a scalar function u on a compact n-dimensional K\u00a8 ahler manifold (X, \u02c6 \u03c9), and the given data is a real (1, 1) form \u03c1, a function \u00b5, and a number \u03b1\u2032 \u2208R called the slope. A key innovation in the solution is the introduction of an ellipticity condition which is more restrictive than the usual cone conditions for fully non-linear second order partial di\ufb00erential equations, but which can be shown to be preserved by the continuity method using some precise estimates with scale. This innovation may be useful for other equations as well, and we shall illustrate this by using it to solve a whole family of Hessian equations in which the equation (1.1) \ufb01ts as only the simplest example. The equation (1.1) is a generalization of an equation in complex dimension 2, which was shown in [10] to arise from the Hull-Strominger system [17, 18, 27]. The Hull-Strominger system is an extension of a proposal of Candelas, Horowitz, Strominger, and Witten [5] for supersymmetric compacti\ufb01cations of the heterotic string. It poses new geometric di\ufb03culties as it involves quadratic expressions in the curvature tensor, but it can potentially lead to a new notion of canonical metric in non-K\u00a8 ahler geometry. From our point of view, the equation (1.1) is of particular interest as a model equation for an eventual extension of the classical theory of Monge-Amp` ere equations of Yau [32] and Hessian equations of Ca\ufb00arelli, Nirenberg, and Spruck [4], to more general equations mixing the unknown, its gradient, and several Hessians. When the dimension of X is n = 2, the equation (1.1) was solved by Fu and Yau in two separate papers, [10] for the case when \u03b1\u2032 > 0, and [11] for the case when \u03b1\u2032 < 0 (when \f\u03b1\u2032 = 0, the equation poses no di\ufb03culty as it reduces essentially to the Laplacian). As we shall discuss below, in the approach of [10, 11], the required estimates in the two cases \u03b1\u2032 > 0 and \u03b1\u2032 < 0 are quite di\ufb00erent. In an earlier paper [21], we had solved the equation (1.1) for general dimension n when \u03b1\u2032 < 0. However, the case \u03b1\u2032 > 0 for general dimension n remained open, as a key lower bound for the Hessian could not be established [19]. In this paper, we shall simultaneously solve the open case \u03b1\u2032 > 0 for general dimension n, improve on the solution found in [21] for the case \u03b1\u2032 < 0, and do it actually for more general equations where the factor (i\u2202\u00af \u2202u)2 in (1.1) is replaced by higher powers of i\u2202\u00af \u2202u. More precisely, let (X, \u02c6 \u03c9), \u03c1, \u00b5, \u03b1\u2032 be as above. 
For each \ufb01xed integer k, 1 \u2264k \u2264n \u22121 and each real number \u03b3 > 0, we consider the equation i\u2202\u00af \u2202 n eku\u02c6 \u03c9 \u2212\u03b1\u2032e(k\u2212\u03b3)u\u03c1 o \u2227\u02c6 \u03c9n\u22122 + \u03b1\u2032(i\u2202\u00af \u2202u)k+1 \u2227\u02c6 \u03c9n\u2212k\u22121 + \u00b5 \u02c6 \u03c9n = 0. (1.2) Clearly, when k = 1 and \u03b3 = 2, this equation reduces to the Fu-Yau equation (1.1). We shall refer to (1.2) as Fu-Yau Hessian equations. Our main result is then the following: Theorem 1 Let \u03b1\u2032 \u2208R, \u03c1 \u2208\u21261,1(X, R), and \u00b5 : X \u2192R be a smooth function such that R X \u00b5 \u02c6 \u03c9n = 0. De\ufb01ne the set \u03a5k by \u03a5k = n u \u2208C2(X, R) : e\u2212\u03b3u < \u03b4, |\u03b1\u2032||e\u2212ui\u2202\u00af \u2202u|k \u02c6 \u03c9 < \u03c4 o , (1.3) where 0 < \u03b4, \u03c4 \u226a1 are explicit \ufb01xed constants depending only on (X, \u02c6 \u03c9), \u03b1\u2032, \u03c1, \u00b5, n, k, \u03b3, whose expressions are given in (2.6, 2.7) below. Then there exists M0 \u226b1 depending on (X, \u02c6 \u03c9), \u03b1\u2032, n, k, \u03b3, \u00b5 and \u03c1, such that for each M \u2265M0, there exists a unique smooth function u \u2208\u03a5k with normalization R X eu \u02c6 \u03c9n = M solving the Fu-Yau Hessian equation (1.2). We outline now the key di\ufb00erences between the earlier approaches and the approach of the present paper. The earlier approaches [10, 11, 19, 20] were based on rewriting the equation (1.1) as \u02c6 \u03c32(\u03c9\u2032) = n(n \u22121) 2 (e2u \u22124\u03b1\u2032eu|\u2207u|2) + \u03bd (1.4) where \u03bd is a linear combination of known functions, u and \u2207u, \u03c9\u2032 is de\ufb01ned by \u03c9\u2032 = eu\u02c6 \u03c9 + \u03b1\u2032e\u2212u\u03c1 + 2n\u03b1\u2032i\u2202\u00af \u2202u, and \u02c6 \u03c3k(\u03c9\u2032) is the k-th symmetric function of the eigenvalues of \u03c9\u2032 with respect to \u02c6 \u03c9. We look then for solutions u satisfying the condition \u03c9\u2032 \u2208\u03932, where \u03932 is de\ufb01ned by the conditions \u02c6 \u03c31(\u03c9\u2032) > 0 and \u02c6 \u03c32(\u03c9\u2032) > 0. The left hand side is then > 0. When \u03b1\u2032 > 0, this implies immediately an upper bound on |\u2207u|. However, the di\ufb03culty is then to derive a positive lower bound for \u02c6 \u03c32(\u03c9\u2032), and the arguments of [10] worked only when n = 2. On the other hand, when \u03b1\u2032 < 0, such a lower bound turns out to hold because there is no cancellation in the expression e2u \u22124\u03b1\u2032eu|\u2207u|2. The estimate for 2 \f|\u2207u| and |\u02c6 \u03c32(\u03c9\u2032)| can then be obtained respectively by applying the techniques of DinewKolodziej [8], and Chou-X.J. Wang [6], Hou-Ma-Wu [16], Guan [14], and the authors [22]. The approach in the present paper relies instead on a di\ufb00erent strategy. First, the equation (1.1) corresponds to the case k = 1, \u03b3 = 2 of the Fu-Yau Hessian equations. As stated in Theorem 1, we look for solutions u \u2208\u03a51, which is a more stringent condition than \u03c9\u2032 \u2208\u03932. The set \u03a51 and its condition e\u2212u|\u03b1\u2032i\u2202\u00af \u2202u|\u02c6 \u03c9 < \u03c4 are inspired by the condition |\u03b1\u2032Rm(\u03c9)| << 1 in [21, 22] which guarantees the parabolicity of the geometric \ufb02ows introduced in these papers 1. In the method of continuity, the given equation (1.1) is realized as the end point of a family of equations for each t \u2208[0, 1]. 
The condition u \u2208\u03a51 implies that the di\ufb00usion operator F p\u00af q\u2207p\u2207\u00af q governing the evolution of |Du|2 and |\u03b1\u2032i\u2202\u00af \u2202u|2 is a controllable perturbation of the Laplacian \u2206= gp\u00af q\u2207p\u2207\u00af q. The main problem is then to show that, if u \u2208\u03a51 at time t = 0, it will stay in \u03a51 at all times. This is accomplished by establishing a priori estimates, which we shall refer to as \u201cestimates with scale\u201d, which are more precise and delicate than the usual ones. Indeed, a priori estimates for |u|, |Du|, and |\u03b1\u2032i\u2202\u00af \u2202u| are usually required only to be independent of z \u2208X and t \u2208[0, 1]. In the present situation, the normalization as given in Theorem 1 Z X eu\u02c6 \u03c9n = M (1.5) sets e\ufb00ectively a scale M, and the estimates with scale that we need are estimates for |u|, |Du|, and |\u03b1\u2032i\u2202\u00af \u2202u| in terms of some speci\ufb01c powers of M. An example of such an estimate is the C0 estimate stated in Theorem 3 below, C\u22121M \u2264eu \u2264C M, which is a version in the present context of similar C0 estimates established earlier in [10, 11, 20]. The hardest part of the paper resides in the proof of similar estimates with scale for |Du| and |i\u2202\u00af \u2202u|, as stated in Theorems 4 and 5. Neither the set \u03a51 nor the estimates with scale depend on the sign of \u03b1\u2032, which is why both cases \u03b1\u2032 > 0 and \u03b1\u2032 < 0 can be treated simultaneously. Furthermore we obtain a solution u \u2208\u03a51, which is better than a solution in \u03932. A vital clue that a strategy based on \u03a51 and estimates with scale could work was provided by the authors\u2019 earlier alternative proof [21, 22] by \ufb02ow methods of the Fu-Yau theorem [10, 11] in dimension n = 2. The power of the new method is even more evident when it comes to the general Fu-Yau Hessian equation (1.2). For k \u22652, it is no longer possible to express the equation (1.2) in terms of a single Hessian \u02c6 \u03c3k+1(\u03c9\u2032) for some (1, 1)-form \u03c9\u2032 as in (1.4). Rather, the equation leads to a combination of several Hessians, which makes it non-concave, and prevents a derivation of C2 and C2,\u03b1 estimates by standard techniques of concave PDE\u2019s. On the other hand, the method of an ellipticity condition \u03a5k preserved by estimates with scale works seamlessly in all cases of 1 \u2264k \u2264n \u22121. In fact the C3 estimates that we obtain 1In these \ufb02ows, a Hermitian metric \u03c9 evolves with time, and Rm(\u03c9) is the curvature of the Chern unitary connection of \u03c9. The condition |\u03b1\u2032Rm(\u03c9)| << 1 was subsequently also used in [9].) 3 \fappear to be the \ufb01rst C3 estimates established in the literature for any general class of Hessian equations besides the Laplacian and the Monge-Amp` ere equations. 2 Proof of Theorem 1: A Priori Estimates In our study of (1.2), we will assume that Vol(X, \u02c6 \u03c9) = 1, which can be acheived by scaling \u02c6 \u03c9 7\u2192\u03bb\u02c6 \u03c9, \u03b1\u2032 7\u2192\u03bbk\u03b1\u2032, \u03c1 7\u2192\u03bb\u2212k+1\u03c1, \u00b5 7\u2192\u03bb\u22121\u00b5. Since the equation (1.2) reduces to the Laplace equation when \u03b1\u2032 = 0, we assume from now on that \u03b1\u2032 \u0338= 0. We will use the notation C\u2113 n = n! \u2113!(n\u2212\u2113)! 
and \u02c6 \u03c3\u2113(i\u2202\u00af \u2202u) \u02c6 \u03c9n = C\u2113 n (i\u2202\u00af \u2202u)\u2113\u2227\u02c6 \u03c9n\u2212\u2113. Given \u03c1, we de\ufb01ne the di\ufb00erential operator L\u03c1 acting on functions by L\u03c1f \u02c6 \u03c9n = ni\u2202\u00af \u2202(f\u03c1) \u2227\u02c6 \u03c9n\u22122. (2.1) For each \ufb01xed k \u2208{1, 2, 3, . . ., n \u22121} and a real number \u03b3 > 0, the Fu-Yau Hessian equation (1.2) can be rewritten as 1 k \u2206\u02c6 geku + \u03b1\u2032 \u001a L\u03c1e(k\u2212\u03b3)u + \u02c6 \u03c3k+1(i\u2202\u00af \u2202u) \u001b = \u00b5. (2.2) We note that we adjusted our conventions compared to the introduction by rede\ufb01ning \u00b5, \u03c1, and \u03b1\u2032 up to a constant. From this point on, we only work with the present conventions (2.2). The standard Fu-Yau equation can be recovered by letting k = 1, \u03b3 = 2. We remark that this equation is already of interest in the case when \u03c1 \u22610, in which case the term L\u03c1e(k\u2212\u03b3)u vanishes. We can also write L\u03c1 as L\u03c1 = aj\u00af k\u2202j\u2202\u00af k + bi\u2202i + b \u00af i\u2202\u00af i + c, (2.3) where aj\u00af k is a Hermitian section of (T 1,0X)\u2217\u2297(T 0,1X)\u2217, bi is a section of (T 1,0X)\u2217, and c is a real function. All these coe\ufb03cients are characterized by the following equations ni\u2202\u00af \u2202f \u2227\u03c1\u2227\u02c6 \u03c9n\u22122 = aj\u00af k\u2202j\u2202\u00af kf \u02c6 \u03c9n, ni\u2202f \u2227\u00af \u2202\u03c1\u2227\u02c6 \u03c9n\u22122 = bi\u2202if \u02c6 \u03c9n, ni\u2202\u00af \u2202\u03c1\u2227\u02c6 \u03c9n\u22122 = c\u02c6 \u03c9n. (2.4) for an arbitrary function f, and can be expressed explicitly in terms of \u03c1 and \u02c6 \u03c9 if desired. We will use the constant \u039b depending on \u03c1 de\ufb01ned by \u2212\u039b\u02c6 gj\u00af k \u2264aj\u00af k \u2264\u039b\u02c6 gj\u00af k, \u02c6 \u03c9 = \u02c6 g\u00af kjidzj \u2227d\u00af zk, \u02c6 gj\u00af k = (\u02c6 g\u00af kj)\u22121. (2.5) We will look for solutions in the region \u03a5k = n u \u2208C2(X, R) : e\u2212\u03b3u < \u03b4, |\u03b1\u2032||e\u2212ui\u2202\u00af \u2202u|k \u02c6 \u03c9 < \u03c4 o , \u03c4 = 2\u22127 Ck n\u22121 , (2.6) 4 \fwhere 0 < \u03b4 \u226a1 is a \ufb01xed small constant depending only on (X, \u02c6 \u03c9), \u03b1\u2032, \u03c1, \u00b5, k, n, \u03b3. More precisely, it su\ufb03ces for \u03b4 to satisfy the inequality \u03b4 \u2264min ( 1, 2\u221213 |\u03b1\u2032|(k + \u03b3)3\u039b, \u0012 \u03b8 2CX (\u2225\u00b5\u2225\u221e+ \u2225\u03b1\u2032c\u2225\u221e) \u0013\u03b3/\u03b3\u2032) , (2.7) where \u03b8 = 1 2C1 \u22121, \u03b3\u2032 = min{k, \u03b3}, C1 = {2(CX + 1)(\u03b3 + k)}n \u0012 n n \u22121 \u0013n2 . (2.8) Here CX is the maximum of the constants appearing in the Poincar\u00b4 e inequality and Sobolev inequality on (X, \u02c6 \u03c9). The proof of Theorem 1 is based on the following a priori estimates: Theorem 2 Let u \u2208\u03a5k be a C5,\u03b2(X) function with normalization R X eu \u02c6 \u03c9n = M solving the k-th Fu-Yau Hessian equation (2.2). Then C\u22121M \u2264eu \u2264CM, e\u2212u|i\u2202\u00af \u2202u|\u02c6 \u03c9 \u2264CM\u22121/2, e\u22123u|\u2207\u00af \u2207\u2207u|2 \u02c6 \u03c9 \u2264C\u2032, (2.9) where C > 1 only depends on (X, \u02c6 \u03c9), \u03b1\u2032, k, \u03b3, n, \u03c1, and \u00b5. Assuming Theorem 2, we can prove Theorem 1. Both the existence and uniqueness statements will be proved by the continuity method. We begin with the existence. 
Fix \u03b1\u2032 \u2208R\\{0}, \u03b3 > 0, 1 \u2264k \u2264(n\u22121), \u03c1 \u2208\u21261,1(X, R) and \u00b5 : X \u2192R such that R X \u00b5 \u02c6 \u03c9n = 0, and de\ufb01ne the set \u03a5k as above. For a real parameter t, we consider the family of equations 1 k \u2206\u02c6 gekut + \u03b1\u2032 n tL\u03c1e(k\u2212\u03b3)ut + \u02c6 \u03c3k+1(i\u2202\u00af \u2202ut) o = t\u00b5. (2.10) As equations of di\ufb00erential forms, this family can be expressed as i\u2202\u00af \u2202 (eku k \u02c6 \u03c9 + \u03b1\u2032te(k\u2212\u03b3)u\u03c1 ) \u2227\u02c6 \u03c9n\u22122 + \u03b1\u2032 Ck n\u22121 k + 1(i\u2202\u00af \u2202u)k+1 \u2227\u02c6 \u03c9n\u2212k\u22121 \u2212t\u00b5 n \u02c6 \u03c9n = 0. (2.11) We introduce the following spaces BM = {u \u2208C5,\u03b2(X, R) : Z X eu \u02c6 \u03c9n = M}, (2.12) B1 = {(t, u) \u2208[0, 1] \u00d7 BM : u \u2208\u03a5k}, (2.13) B2 = {\u03c8 \u2208C3,\u03b2(X, R) : Z X \u03c8 \u02c6 \u03c9n = 0} (2.14) and de\ufb01ne the map \u03a8 : B1 \u2192B2 by \u03a8(t, u) = 1 k \u2206\u02c6 gekut + \u03b1\u2032tL\u03c1e(k\u2212\u03b3)ut + \u03b1\u2032\u02c6 \u03c3k+1(i\u2202\u00af \u2202ut) \u2212t\u00b5. (2.15) 5 \fWe consider I = {t \u2208[0, 1] : there exists u \u2208BM such that (t, u) \u2208B1 and \u03a8(t, u) = 0}. (2.16) First, 0 \u2208I: indeed the constant function u0 = log M \u2212log R X \u02c6 \u03c9n is in \u03a5k when M \u226b1, and u0 solves the equation at t = 0. In particular I is non-empty. Next, we show that I is open. Let (t0, u0) \u2208B1, and let L = (Du\u03a8)(t0,u0) be the linearized operator at (t0, u0), L : \u001a h \u2208C5,\u03b2(X, R) : Z X heu0 \u02c6 \u03c9n = 0 \u001b \u2192 \u001a \u03c8 \u2208C3,\u03b2(X, R) : Z X \u03c8 \u02c6 \u03c9n = 0 \u001b , (2.17) de\ufb01ned by L(h)\u02c6 \u03c9n = i\u2202\u00af \u2202{eku0h \u02c6 \u03c9 + \u03b1\u2032(k \u2212\u03b3)t0e(k\u2212\u03b3)u0h \u03c1} \u2227\u02c6 \u03c9n\u22122 +\u03b1\u2032Ck n\u22121i\u2202\u00af \u2202h \u2227(i\u2202\u00af \u2202u0)k \u2227\u02c6 \u03c9n\u2212k\u22121. (2.18) The leading order terms are L(h)\u02c6 \u03c9n = eku0\u03c7(t0,u0) \u2227\u02c6 \u03c9n\u2212k\u22121 \u2227i\u2202\u00af \u2202h + \u00b7 \u00b7 \u00b7 (2.19) where \u03c7(t,u) = \u02c6 \u03c9k + \u03b1\u2032(k \u2212\u03b3)te\u2212\u03b3u \u03c1 \u2227\u02c6 \u03c9k\u22121 + \u03b1\u2032Ck n\u22121(e\u2212ui\u2202\u00af \u2202u)k. (2.20) Since u0 \u2208\u03a5k, we see from the conditions (2.6) that \u03c7(t0,u0) > 0 as a (k, k) form and hence L is elliptic. The L2 adjoint L\u2217is readily computed by integrating by parts: Z X \u03c8L(h) \u02c6 \u03c9n = Z X h eku0\u03c7(t0,u0) \u2227\u02c6 \u03c9n\u2212k\u22121 \u2227i\u2202\u00af \u2202\u03c8 = Z X hL\u2217(\u03c8) \u02c6 \u03c9n. (2.21) Since L\u2217is an elliptic operator with no zeroth order terms, by the strong maximum principle the kernel of L\u2217consists of constant functions. An index theory argument (see e.g. [21] or [10] for full details) shows that the kernel of L is spanned by a function of constant sign. It follows that L is an isomorphism. By the implicit function theorem, there exists a unique solution (t, ut) for t su\ufb03ciently close to t0, with ut \u2208\u03a5k since \u03a5k is open. We conclude that I is open. Finally, we apply Theorem 2 to show that I is closed. Consider a sequence ti \u2208I such that ti \u2192t\u221e, and denote uti \u2208\u03a5k\u2229BM the associated C5,\u03b2 functions such that \u03a8(ti, uti) = 0. 
By di\ufb00erentating the equation e\u2212kuti\u03a8(ti, uti) = 0 with the Chern connection \u02c6 \u2207of the K\u00a8 ahler metric \u02c6 \u03c9, we obtain 0 = \u03c7(ti,uti) \u2227\u02c6 \u03c9n\u2212k\u22121 \u2227i\u2202\u00af \u2202(\u2202\u2113uti) \u02c6 \u03c9n + \u02c6 \u2207\u2113{\u03b1\u2032tie\u2212\u03b3uti((k \u2212\u03b3)2ap\u00af q\u2202puti \u2202\u00af quti + (k \u2212\u03b3)bk\u2202kuti + (k \u2212\u03b3)b \u00af k\u2202\u00af kuti + c)} +k \u02c6 \u2207\u2113|Duti|2 \u02c6 g \u2212\u03b1\u2032ke\u2212kuti \u02c6 \u03c3k+1(i\u2202\u00af \u2202uti)\u2202\u2113uti \u2212ti\u2202\u2113{e\u2212kuti\u00b5}. (2.22) 6 \fSince the equations (2.10) are of the form (2.2) with uniformly bounded coe\ufb03cients \u03c1 and \u00b5, Theorem 2 applies to give uniform control of |uti| and |\u2202\u00af \u2202\u2202uti|\u02c6 \u03c9 along this sequence. Therefore \u02c6 \u2206uti is uniformly controlled in C\u03b2(X) for any 0 < \u03b2 < 1. By Schauder estimates, we have \u2225uti\u2225C2,\u03b2 \u2264C. Thus the di\ufb00erentiated equation (2.22) is a linear elliptic equation for \u2202\u2113uti with C\u03b2 coe\ufb03cients. This equation is uniformly elliptic along the sequence, since \u03c7(ti,uti) \u22651 2 \u02c6 \u03c9k by (2.9) when M \u226b1. By Schauder estimates, we have uniform control of \u2225Duti\u2225C2,\u03b2. A bootstrap argument shows that we have uniform control of \u2225uti\u2225C6,\u03b2, hence we may extract a subsequence converging to u\u221e\u2208C5,\u03b2. Furthermore, for M \u2265M0 \u226b1 large enough, we see from (2.9) that e\u2212u\u221e\u226a1, |e\u2212ui\u2202\u00af \u2202u\u221e|\u02c6 \u03c9 \u226a1, (2.23) hence u\u221e\u2208\u03a5k. Thus I is closed. Hence I = [0, 1] and consequently there exists a C5,\u03b2 function u \u2208\u03a5k with normalization R X eu \u02c6 \u03c9n = M solving the Fu-Yau equation (2.2). By applying Schauder estimates and a bootstrap argument to the di\ufb00erentiated equation (2.22), we see that u is smooth. We complete now the proof of Theorem 1 with the proof of uniqueness. First, we show that the only solutions of the equation 1 k i\u2202\u00af \u2202eku \u2227\u02c6 \u03c9n\u22121 + \u03b1\u2032 Ck n\u22121 k + 1(i\u2202\u00af \u2202u)k+1 \u2227\u02c6 \u03c9n\u2212k\u22121 = 0 (2.24) with |\u03b1\u2032|Ck n\u22121|e\u2212ui\u2202\u00af \u2202u|k \u02c6 \u03c9 < 2\u22127 are constant functions. Multiplying by u and integrating, we see that 0 = Z X i\u2202u \u2227\u00af \u2202u \u2227 \u001a eku\u02c6 \u03c9k + \u03b1\u2032 Ck n\u22121 k + 1(i\u2202\u00af \u2202u)k \u001b \u2227\u02c6 \u03c9n\u2212k\u22121, (2.25) and hence u must be constant since eku\u02c6 \u03c9k + \u03b1\u2032 Ck n\u22121 k+1 (i\u2202\u00af \u2202u)k > 0 as a (k, k) form. Now suppose there are two distinct solutions u \u2208\u03a5k and v \u2208\u03a5k satisfying (2.2) under the normalization R X eu \u02c6 \u03c9n = R X ev \u02c6 \u03c9n = M with M \u2265M0. For t \u2208[0, 1], de\ufb01ne \u03a6(t, u) = i\u2202\u00af \u2202 (eku k \u02c6 \u03c9 + \u03b1\u2032(1 \u2212t)e(k\u2212\u03b3)u\u03c1 ) \u2227\u02c6 \u03c9n\u22122 +\u03b1\u2032 Ck n\u22121 k + 1(i\u2202\u00af \u2202u)k+1 \u2227\u02c6 \u03c9n\u2212k\u22121 \u2212(1 \u2212t)\u00b5 n \u02c6 \u03c9n, (2.26) and consider the path t 7\u2192ut satisfying \u03a6(t, ut) = 0, ut \u2208\u03a5k, R X eut \u02c6 \u03c9n = M with initial condition u0 = u. 
The same argument which shows that I is open also shows that the path ut exists for a short-time: there exists \u01eb > 0 such that ut is de\ufb01ned on [0, \u01eb). By our estimates (2.9), we may extend the path to be de\ufb01ned for t \u2208[0, 1]. By uniqueness of the equation with t = 1, we know that u1 = log M \u2212log R X \u02c6 \u03c9n. The same argument gives a path t 7\u2192vt 7 \fsatisfying \u03a6(t, vt) = 0, vt \u2208\u03a5k, R X evt \u02c6 \u03c9n = M with v0 = v and v1 = log M \u2212log R X \u02c6 \u03c9n. But then at the \ufb01rst time 0 < t0 \u22641 when ut0 = vt0, we contradict the local uniqueness of \u03a6(t, ut) = 0 given by the implicit function theorem. It follows from our discussion that in order to prove Theorem 1, it remains to establish the a priori estimates (2.9). 3 The Uniform Estimate Theorem 3 Suppose u \u2208\u03a5k solves (2.2) subject to the normalization R X eu \u02c6 \u03c9n = M. Then C\u22121M \u2264eu \u2264CM, (3.1) where C only depends on (X, \u02c6 \u03c9), k, and \u03b3. We \ufb01rst note the following general identity which holds for any function u. 0 = \u03b1\u2032(p\u2212k) Z X e(p\u2212k)ui\u2202u\u2227\u00af \u2202u\u2227(i\u2202\u00af \u2202u)k\u2227\u02c6 \u03c9n\u2212k\u22121+\u03b1\u2032 Z X e(p\u2212k)u (i\u2202\u00af \u2202u)k+1\u2227\u02c6 \u03c9n\u2212k\u22121. (3.2) Substituting the Fu-Yau Hessian equation (2.11) with t = 1, we obtain 0 = \u03b1\u2032 Ck n\u22121 k + 1(p \u2212k) Z X e(p\u2212k)ui\u2202u \u2227\u00af \u2202u \u2227(i\u2202\u00af \u2202u)k \u2227\u02c6 \u03c9n\u2212k\u22121 + Z X e(p\u2212k)u\u00b5 n \u2212 Z X e(p\u2212k)ui\u2202\u00af \u2202 (eku k \u02c6 \u03c9 + \u03b1\u2032e(k\u2212\u03b3)u\u03c1 ) \u2227\u02c6 \u03c9n\u22122. (3.3) We integrate by parts to derive 0 = \u03b1\u2032 Ck n\u22121 k + 1(p \u2212k) Z X e(p\u2212k)ui\u2202u \u2227\u00af \u2202u \u2227(i\u2202\u00af \u2202u)k \u2227\u02c6 \u03c9n\u2212k\u22121 + Z X e(p\u2212k)u\u00b5 n + (p \u2212k) Z X epu i\u2202u \u2227i\u00af \u2202u \u2227\u02c6 \u03c9n\u22121 +(p \u2212k)\u03b1\u2032 Z X e(p\u2212k)u i\u2202u \u2227i\u00af \u2202(e(k\u2212\u03b3)u\u03c1) \u2227\u02c6 \u03c9n\u22122. (3.4) Integrating by parts again gives (p \u2212k) Z X epui\u2202u \u2227\u00af \u2202u \u2227\u02c6 \u03c9n\u2212k\u22121 \u2227\u03c7 = \u2212 Z X e(p\u2212k)u\u00b5 \u02c6 \u03c9 n + p \u2212k p \u2212\u03b3 \u03b1\u2032 Z X e(p\u2212\u03b3)u \u2227i\u2202\u00af \u2202\u03c1 \u2227\u02c6 \u03c9n\u22122, (3.5) where we now assume p > \u03b3 and we de\ufb01ne \u03c7 = \u02c6 \u03c9k + \u03b1\u2032(k \u2212\u03b3)e\u2212\u03b3u\u03c1 \u2227\u02c6 \u03c9k\u22121 + \u03b1\u2032 Ck n\u22121 k + 1(e\u2212ui\u2202\u00af \u2202u)k. (3.6) 8 \fNext, we estimate i\u2202u \u2227\u00af \u2202u \u2227\u02c6 \u03c9n\u2212k\u22121 \u2227\u03c7 = |\u2207u|2 \u02c6 \u03c9 n \u02c6 \u03c9n + \u03b1\u2032(k \u2212\u03b3)e\u2212\u03b3uai\u00af juiu\u00af j n \u02c6 \u03c9n +\u03b1\u2032 Ck n\u22121 k + 1i\u2202u \u2227\u00af \u2202u \u2227(e\u2212ui\u2202\u00af \u2202u)k \u2227\u02c6 \u03c9n\u2212k\u22121 \u2265 |\u2207u|2 \u02c6 \u03c9 n \u02c6 \u03c9n \u2212|\u03b1\u2032\u039b(k \u2212\u03b3)|\u03b4|\u2207u|2 \u02c6 \u03c9 n \u02c6 \u03c9n \u2212|\u03b1\u2032| Ck n\u22121 k + 1|e\u2212ui\u2202\u00af \u2202u|k \u02c6 \u03c9 |\u2207u|2 \u02c6 \u03c9 n \u02c6 \u03c9n. 
(3.7) Since u \u2208\u03a5k, by (2.6) and (2.7) the positive term dominates the expression and we can conclude i\u2202u \u2227\u00af \u2202u \u2227\u02c6 \u03c9n\u2212k\u22121 \u2227\u03c7 \u22651 2 |\u2207u|2 \u02c6 \u03c9 n \u02c6 \u03c9n. (3.8) The proof of Theorem 3 will be divided into three propositions. We note that in the following arguments we will omit the background volume form \u02c6 \u03c9n when integrating scalar functions. Proposition 1 Suppose u \u2208\u03a5k solves (2.2) subject to normalization R X eu = M. There exists C1 > 0 such that eu \u2264C1M, (3.9) where C1 only depends on (X, \u02c6 \u03c9), n, k and \u03b3. In fact, C1 is given by (2.8). Combining (3.5) and (3.8) gives 1 2(p \u2212k) Z X epu|\u2207u|2 \u02c6 \u03c9 \u2264 \u2212 Z X e(p\u2212k)u\u00b5 + p \u2212k p \u2212\u03b3 n\u03b1\u2032 Z X e(p\u2212\u03b3)u \u2227i\u2202\u00af \u2202\u03c1 \u2227\u02c6 \u03c9n\u22122. (3.10) We estimate Z X |\u2207e p 2 u|2 \u02c6 \u03c9 \u2264 p2 2(p \u2212k) \u001a \u2225\u00b5\u2225L\u221e Z X e(p\u2212k)u + p \u2212k p \u2212\u03b3 \u2225\u03b1\u2032c\u2225L\u221e Z X e(p\u2212\u03b3)u \u001b . (3.11) For any p \u22652 max{\u03b3, k}, there holds p2 2(p\u2212k) \u2264p and p\u2212k p\u2212\u03b3 \u22642. Using e\u2212\u03b3u \u2264\u03b4 \u22641 and (2.7), we conclude that Z X |\u2207e p 2 u|2 \u02c6 \u03c9 \u2264 2(\u2225\u00b5\u2225L\u221e+ \u2225\u03b1\u2032c\u2225L\u221e)\u03b4 min{k,\u03b3} \u03b3 p Z X epu \u2264 \u03b8 CX p Z X epu \u2264 p CX Z X epu, (3.12) 9 \ffor any p \u22652(\u03b3 + k). Let \u03b2 = n n\u22121. The Sobolev inequality gives us \u0012Z X e\u03b2pu \u00131/\u03b2 \u2264CX \u0012 Z X |\u2207e p 2 u|2 \u02c6 \u03c9 + Z X epu \u0013 . (3.13) Therefore for all p \u22652(\u03b3 + k), \u2225eu\u2225Lp\u03b2 \u2264(CX + 1)1/pp1/p\u2225eu\u2225Lp. (3.14) Iterating this inequality gives \u2225eu\u2225Lp\u03b2(k+1) \u2264{(CX + 1)p} 1 p Pk i=0 1 \u03b2i \u00b7 \u03b2 1 p Pk i=1 i \u03b2i \u2225eu\u2225Lp. (3.15) Letting k \u2192\u221e, we obtain sup X eu \u2264C\u2032 1\u2225eu\u2225L2(\u03b3+k), C\u2032 1 = {2(CX + 1)(\u03b3 + k)} 1 2(\u03b3+k) P\u221e i=0 1 \u03b2i \u00b7 \u03b2 1 2(\u03b3+k) P\u221e i=1 i \u03b2i . (3.16) It follows that sup X eu \u2264C\u2032 1(sup X eu)1\u2212(2(\u03b3+k))\u22121 \u0012Z X eu \u00131/2(\u03b3+k) , (3.17) and we conclude that sup X eu \u2264C1 Z X eu, C1 = (C\u2032 1)2(\u03b3+k). (3.18) This proves the estimate. As it will be needed in the future, we note that the precise form of C1 agrees with the de\ufb01nition given in (2.8). Proposition 2 Suppose u \u2208\u03a5k solves (2.2) subject to normalization R X eu = M. There exists a constant C only depending on (X, \u02c6 \u03c9), n, k and \u03b3 such that Z X e\u2212u \u2264CM\u22121. (3.19) Setting p = \u22121 in (3.5) gives (k + 1) Z X e\u2212ui\u2202u \u2227\u00af \u2202u \u2227\u02c6 \u03c9n\u2212k\u22121 \u2227\u03c7 (3.20) = Z X e\u2212(1+k)u\u00b5 \u02c6 \u03c9n n \u22121 + k 1 + \u03b3 Z X e\u2212(1\u2212\u03b3)ui\u2202\u00af \u2202\u03c1 \u2227\u02c6 \u03c9n\u22122 \u2264 1 n\u2225\u00b5\u2225L\u221e Z X e\u2212(1+k)u + 1 + k (1 + \u03b3)n\u2225\u03b1\u2032c\u2225L\u221e Z X e\u2212(1+\u03b3)u. (3.21) Since u \u2208\u03a5k, we may use (3.8) and e\u2212\u03b3u \u2264\u03b4 \u22641 to obtain Z X e\u2212u|\u2207u|2 \u02c6 \u03c9 \u22642\u03b4 min{k,\u03b3} \u03b3 (\u2225\u00b5\u2225L\u221e+ \u2225\u03b1\u2032c\u2225L\u221e) Z X e\u2212u. 
Proposition 2 Suppose $u\in\Upsilon_k$ solves (2.2) subject to the normalization $\int_X e^u = M$. There exists a constant $C$ only depending on $(X,\hat\omega)$, $n$, $k$ and $\gamma$ such that
$$\int_X e^{-u} \le C M^{-1}. \tag{3.19}$$
Setting $p = -1$ in (3.5) gives
$$(k+1)\int_X e^{-u}\, i\partial u\wedge\bar\partial u\wedge\hat\omega^{n-k-1}\wedge\chi = \int_X e^{-(1+k)u}\mu\,\frac{\hat\omega^n}{n} - \frac{1+k}{1+\gamma}\int_X e^{-(1+\gamma)u}\, i\partial\bar\partial\rho\wedge\hat\omega^{n-2} \tag{3.20}$$
$$\le \frac1n\|\mu\|_{L^\infty}\int_X e^{-(1+k)u} + \frac{1+k}{(1+\gamma)n}\|\alpha' c\|_{L^\infty}\int_X e^{-(1+\gamma)u}. \tag{3.21}$$
Since $u\in\Upsilon_k$, we may use (3.8) and $e^{-\gamma u}\le\delta\le1$ to obtain
$$\int_X e^{-u}|\nabla u|^2_{\hat\omega} \le 2\delta^{\frac{\min\{k,\gamma\}}{\gamma}}\,(\|\mu\|_{L^\infty}+\|\alpha' c\|_{L^\infty})\int_X e^{-u}. \tag{3.22}$$
By the Poincaré inequality,
$$\int_X e^{-u} - \Big(\int_X e^{-u/2}\Big)^2 \le C_X\int_X |\nabla e^{-u/2}|^2_{\hat\omega}. \tag{3.23}$$
After using the definition of $\delta$ (2.7), it follows that
$$\int_X e^{-u} \le \frac{1}{1-\frac\theta4}\Big(\int_X e^{-u/2}\Big)^2. \tag{3.24}$$
Let $U = \{x\in X : e^u \ge \frac M2\}$. From Proposition 1, and using ${\rm Vol}(X,\hat\omega)=1$,
$$M = \int_X e^u \le C_1 M|U| + (1-|U|)\frac M2. \tag{3.25}$$
Hence $|U| \ge \theta > 0$, where we recall that $\theta$ was defined in (2.7). Using $|U|\ge\theta$ and (3.24), it was shown in [21] that the estimate
$$\int_X e^{-u} \le \frac{1}{1-\frac\theta4}\Big(1+\frac2\theta\Big)\Big(\frac{2}{\theta^2}\Big)M^{-1} \tag{3.26}$$
follows.

Proposition 3 Suppose $u\in\Upsilon_k$ solves (2.2) subject to the normalization $\int_X e^u = M$. There exists $C$ such that
$$\sup_X e^{-u} \le CM^{-1}, \tag{3.27}$$
where $C$ only depends on $(X,\hat\omega)$, $n$, $k$ and $\gamma$.

Exchanging $p$ for $-p$ in (3.5) and using (3.8) gives
$$(p+k)\int_X e^{-pu}\, i\partial u\wedge\bar\partial u\wedge\hat\omega^{n-1} \le 2\int_X e^{-(p+k)u}\mu\,\frac{\hat\omega^n}{n} - 2\alpha'\frac{p+k}{p+\gamma}\int_X e^{-(p+\gamma)u}\, i\partial\bar\partial\rho\wedge\hat\omega^{n-2}. \tag{3.28}$$
By using $e^{-\gamma u}\le\delta\le1$, we obtain
$$\int_X |\nabla e^{-\frac p2 u}|^2_{\hat\omega} \le \frac{p^2}{2(p+k)}\,\delta^{\frac{\min\{k,\gamma\}}{\gamma}}\Big(\|\mu\|_{L^\infty} + \frac{p+k}{p+\gamma}\|\alpha' c\|_{L^\infty}\Big)\int_X e^{-pu}. \tag{3.29}$$
We may use (2.7) to obtain a constant $C$ depending on $(X,\hat\omega)$, $n$, $k$, and $\gamma$ such that
$$\int_X |\nabla e^{-\frac p2 u}|^2_{\hat\omega} \le Cp\int_X e^{-pu} \tag{3.30}$$
for any $p\ge1$. Using the Sobolev inequality and iterating in a similar way to Proposition 1, we obtain
$$\sup_X e^{-u} \le C\|e^{-u}\|_{L^1}. \tag{3.31}$$
Applying Proposition 2 gives the desired estimate.

4 Setup and Notation

4.1 The formalism of evolving metrics

We come now to the key steps of establishing the gradient and the $C^2$ estimates. It turns out that, for these steps, it is more natural to view the equation (2.2) as an equation for the unknown, non-Kähler, Hermitian form
$$\omega = e^u\hat\omega \tag{4.1}$$
and to carry out calculations with respect to the Chern unitary connection $\nabla$ of $\omega$. As usual, we identify the metrics $\hat g$ and $g$ via $\hat\omega = \hat g_{\bar kj}\, idz^j\wedge d\bar z^k$ and $\omega = g_{\bar kj}\, idz^j\wedge d\bar z^k$, and denote by $\hat g^{j\bar k}$, $g^{j\bar k}$ the inverse matrices of $\hat g_{\bar kj}$, $g_{\bar kj}$. Then $g_{\bar kj} = e^u\hat g_{\bar kj}$, $g^{j\bar k} = e^{-u}\hat g^{j\bar k}$. Recall that the Chern unitary connection $\nabla$ is defined by
$$\nabla_{\bar k}V^j = \partial_{\bar k}V^j, \qquad \nabla_k V^j = g^{j\bar m}\partial_k(g_{\bar mp}V^p) \tag{4.2}$$
and its torsion and curvature by
$$[\nabla_\alpha,\nabla_\beta]V^\gamma = R_{\beta\alpha}{}^\gamma{}_\delta V^\delta + T^\delta_{\beta\alpha}\nabla_\delta V^\gamma. \tag{4.3}$$
Explicitly,
$$R_{\bar kq}{}^j{}_p = -\partial_{\bar k}\big(g^{j\bar m}\partial_q g_{\bar mp}\big), \qquad T^j_{pq} = g^{j\bar m}(\partial_p g_{\bar mq} - \partial_q g_{\bar mp}). \tag{4.4}$$
The curvatures and torsions of the metrics $g_{\bar kj}$ and $\hat g_{\bar kj}$ are then related by
$$R_{\bar kj}{}^p{}_i = \hat R_{\bar kj}{}^p{}_i - u_{\bar kj}\delta^p{}_i, \qquad T^\lambda_{kj} = u_k\delta^\lambda{}_j - u_j\delta^\lambda{}_k. \tag{4.5}$$
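As a quick consistency check on (4.5) (a minimal computation, assuming as above that $\hat\omega$ is Kähler, so that $\hat g$ has vanishing torsion): since $g_{\bar mi} = e^u\hat g_{\bar mi}$,
$$g^{p\bar m}\partial_j g_{\bar mi} = u_j\,\delta^p{}_i + \hat g^{p\bar m}\partial_j\hat g_{\bar mi},$$
so applying $-\partial_{\bar k}$ gives $R_{\bar kj}{}^p{}_i = \hat R_{\bar kj}{}^p{}_i - u_{\bar kj}\delta^p{}_i$, while antisymmetrizing in the two unbarred lower indices gives $T^\lambda_{kj} = u_k\delta^\lambda{}_j - u_j\delta^\lambda{}_k$.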
The following commutation formulas with either 3 or 4 covariant derivatives will be particularly useful:
$$\nabla_j\nabla_p\nabla_{\bar q}u = \nabla_p\nabla_{\bar q}\nabla_j u + T^m_{pj}\nabla_m\nabla_{\bar q}u \tag{4.6}$$
and
$$\nabla_{\bar k}\nabla_j\nabla_p\nabla_{\bar q}u = \nabla_p\nabla_{\bar q}\nabla_j\nabla_{\bar k}u - R_{\bar qp\bar k}{}^{\bar m}\nabla_{\bar m}\nabla_j u + R_{\bar kj}{}^m{}_p\nabla_m\nabla_{\bar q}u + T^{\bar m}_{\bar q\bar k}\nabla_p\nabla_{\bar m}\nabla_j u + T^m_{pj}\nabla_{\bar k}\nabla_m\nabla_{\bar q}u. \tag{4.7}$$
They reduce in our case to
$$\nabla_j\nabla_p\nabla_{\bar q}u = \nabla_p\nabla_{\bar q}\nabla_j u + u_pu_{\bar qj} - u_ju_{\bar qp} \tag{4.8}$$
and to
$$\nabla_{\bar k}\nabla_j\nabla_p\nabla_{\bar q}u = \nabla_p\nabla_{\bar q}\nabla_j\nabla_{\bar k}u + u_p\nabla_{\bar k}\nabla_j\nabla_{\bar q}u - u_j\nabla_{\bar k}\nabla_p\nabla_{\bar q}u + u_{\bar q}\nabla_p\nabla_{\bar k}\nabla_j u - u_{\bar k}\nabla_p\nabla_{\bar q}\nabla_j u + \hat R_{\bar kj}{}^\lambda{}_p u_{\bar q\lambda} - \hat R_{\bar qp\bar k}{}^{\bar\lambda}u_{\bar\lambda j}. \tag{4.9}$$
It will also be convenient to use the symmetric functions of the eigenvalues of $i\partial\bar\partial u$ with respect to $\omega$ rather than with respect to $\hat\omega$. Thus we define $\sigma_\ell(i\partial\bar\partial u)$ to be the $\ell$-th elementary symmetric polynomial of the eigenvalues of the endomorphism $h^i{}_j = g^{i\bar k}u_{\bar kj}$. Explicitly, if $\lambda_i$ are the eigenvalues of $h^i{}_j = g^{i\bar k}u_{\bar kj}$, then $\sigma_\ell(i\partial\bar\partial u) = \sum_{i_1<\cdots<i_\ell}\lambda_{i_1}\cdots\lambda_{i_\ell}$.

5 Gradient Estimate

Theorem 4 Suppose $u\in\Upsilon_k$ solves (2.2) subject to the normalization $\int_X e^u\hat\omega^n = M$. Then $|\nabla u|^2_g \le CM^{-1}$, where $C$ only depends on $(X,\hat\omega)$, $\alpha'$, $k$, $\gamma$, $\rho$ and $\mu$.

Combining (5.19) and (5.21) and dropping a nonnegative term,
$$-\frac{1}{|\nabla u|^4_g}F^{p\bar q}\nabla_p|\nabla u|^2_g\nabla_{\bar q}|\nabla u|^2_g - \frac{2k}{|\nabla u|^2_g}{\rm Re}\{g^{j\bar i}u_{\bar i}\nabla_j|\nabla u|^2_g\} + (1-\varepsilon)\frac{1}{|\nabla u|^2_g}|\nabla\nabla u|^2_{Fg}$$
$$\ge -(1+\sigma)^2\varepsilon\,|\nabla u|^2_F + 2(1+\sigma)k\,|\nabla u|^2_g + \frac{2(1+\sigma)(1-\varepsilon)}{|\nabla u|^2_g}{\rm Re}\{F^{p\bar q}g^{j\bar i}u_ju_{\bar ip}u_{\bar q}\}. \tag{5.22}$$
Substituting this inequality into (5.18), partial cancellation occurs and we are left with
$$F^{p\bar q}\nabla_p\nabla_{\bar q}G \ge \frac{1}{|\nabla u|^2_g}|\nabla\bar\nabla u|^2_{Fg} + \frac{\varepsilon}{|\nabla u|^2_g}|\nabla\nabla u|^2_{Fg} + \{2\sigma - 2\varepsilon(1+\sigma)\}\frac{1}{|\nabla u|^2_g}{\rm Re}\{F^{p\bar q}g^{j\bar i}u_{\bar i}u_pu_{\bar qj}\} + \sigma k|\nabla u|^2_g - (1+\sigma)^2\varepsilon|\nabla u|^2_F + \frac{1}{|\nabla u|^2_g}F^{p\bar q}g^{j\bar i}u_j\hat R_{\bar qp\bar i}{}^{\bar\lambda}u_{\bar\lambda} - \frac{2}{|\nabla u|^2_g}{\rm Re}\{g^{j\bar i}E_ju_{\bar i}\} + (2+\sigma)\tilde E. \tag{5.23}$$
Since $u\in\Upsilon_k$, we now use (4.21) in Lemma 2 to pass the norms with respect to $F^{p\bar q}$ to $g^{p\bar q}$ up to an error of order $2^{-6}$. We choose
$$\varepsilon = (1+\sigma)^{-2}(1+2^{-6})^{-1}\frac{\sigma}{2}. \tag{5.24}$$
Then
$$(1+\sigma)^2\varepsilon|\nabla u|^2_F \le \frac{\sigma}{2}|\nabla u|^2_g, \tag{5.25}$$
and
$$\frac{\varepsilon}{|\nabla u|^2_g}|\nabla\nabla u|^2_{Fg} \ge \frac{\sigma}{2(1+\sigma)^2}\,\frac{1-2^{-6}}{1+2^{-6}}\,\frac{1}{|\nabla u|^2_g}|\nabla\nabla u|^2_g. \tag{5.26}$$
Since $\sigma = 2^{-7}$, we have the inequality of numbers $\frac12\,\frac{1-2^{-6}}{(1+\sigma)^2(1+2^{-6})} \ge \frac14$. Thus
$$\frac{\varepsilon}{|\nabla u|^2_g}|\nabla\nabla u|^2_{Fg} \ge \frac{\sigma}{4}\,\frac{1}{|\nabla u|^2_g}|\nabla\nabla u|^2_g. \tag{5.27}$$
We also note the inequalities
$$\frac{1}{|\nabla u|^2_g}|\nabla\bar\nabla u|^2_{Fg} \ge (1-2^{-6})\frac{1}{|\nabla u|^2_g}|\nabla\bar\nabla u|^2_g, \tag{5.28}$$
and
$$\{2\sigma - 2\varepsilon(1+\sigma)\}\frac{1}{|\nabla u|^2_g}{\rm Re}\{F^{p\bar q}g^{j\bar i}u_{\bar i}u_pu_{\bar qj}\} \ge -\{2 - (1+\sigma)^{-1}(1+2^{-6})^{-1}\}\sigma(1+2^{-6})|\nabla\bar\nabla u|_g \ge -2\sigma(1+2^{-6})|\nabla\bar\nabla u|_g. \tag{5.29}$$
The main inequality (5.23) becomes
$$F^{p\bar q}\nabla_p\nabla_{\bar q}G \ge (1-2^{-6})\frac{1}{|\nabla u|^2_g}|\nabla\bar\nabla u|^2_g + \frac{\sigma}{4}\frac{|\nabla\nabla u|^2_g}{|\nabla u|^2_g} - 2\sigma(1+2^{-6})|\nabla\bar\nabla u|_g + \frac{\sigma}{2}|\nabla u|^2_g + \frac{1}{|\nabla u|^2_g}F^{p\bar q}g^{j\bar i}u_j\hat R_{\bar qp\bar i}{}^{\bar\lambda}u_{\bar\lambda} - \frac{2}{|\nabla u|^2_g}{\rm Re}\{g^{j\bar i}E_ju_{\bar i}\} + (2+\sigma)\tilde E. \tag{5.30}$$

5.2 Estimating the perturbative terms

5.2.1 The $E_j$ terms

Recall the constant $\Lambda$ is such that $-\Lambda\hat g^{j\bar i} \le a^{j\bar i} \le \Lambda\hat g^{j\bar i}$. We will go through each term in the definition of $E_j$ (5.12) and estimate the terms appearing in $\frac{2}{|\nabla u|^2_g}{\rm Re}\{g^{j\bar i}E_ju_{\bar i}\}$ by groups. In the following, we will use $C$ to denote constants possibly depending on $\alpha'$, $k$, $\gamma$, $a^{p\bar q}$, $b^i$, $c$, $\mu$, and their derivatives. First, using $2ab\le a^2+b^2$, we estimate the terms involving $\nabla\bar\nabla u$:
$$\frac{2|\alpha'(k-\gamma)|}{|\nabla u|^2_g}e^{-(1+\gamma)u}\big|g^{j\bar i}u_{\bar i}\big(-\gamma a^{p\bar q}u_{\bar qp}u_j + \hat\nabla_ja^{p\bar q}u_{\bar qp} + (k-\gamma)a^{p\bar q}u_pu_{\bar qj} + \bar b^qu_{\bar qj}\big)\big|$$
$$\le 2|\alpha'\Lambda(k-\gamma)(k+2\gamma)|e^{-\gamma u}|\nabla\bar\nabla u|_g + Ce^{-\gamma u}e^{-u/2}\frac{|\nabla\bar\nabla u|_g}{|\nabla u|_g}$$
$$\le 2\Big\{|\alpha'\Lambda|^{1/2}(k-\gamma)\delta^{1/2}|\nabla u|_g\Big\}\Big\{\delta^{1/2}(k+2\gamma)|\Lambda\alpha'|^{1/2}\frac{|\nabla\bar\nabla u|_g}{|\nabla u|_g}\Big\} + Ce^{-u/2}\frac{|\nabla\bar\nabla u|_g}{|\nabla u|_g}$$
$$\le |\alpha'|\Lambda(k-\gamma)^2\delta|\nabla u|^2_g + 4|\Lambda\alpha'|(k+\gamma)^2\delta\frac{|\nabla\bar\nabla u|^2_g}{|\nabla u|^2_g} + \sigma\frac{|\nabla\bar\nabla u|^2_g}{|\nabla u|^2_g} + C(\sigma)e^{-u}. \tag{5.31}$$
Second, we estimate the terms involving $\nabla\nabla u$:
$$\frac{2|\alpha'(k-\gamma)|}{|\nabla u|^2_g}e^{-(1+\gamma)u}\big|g^{j\bar i}u_{\bar i}\{(k-\gamma)a^{p\bar q}\nabla_j\nabla_pu\, u_{\bar q} + b^p\nabla_j\nabla_pu\}\big|$$
$$\le 2|\alpha'|(k-\gamma)^2\Lambda e^{-\gamma u}|\nabla\nabla u|_g + 2\Big\{C|\alpha'\Lambda|^{1/2}e^{-(1+\gamma)u/2}\Big\}\Big\{|\alpha'\Lambda|^{1/2}|k-\gamma|e^{-\gamma u/2}\frac{|\nabla\nabla u|_g}{|\nabla u|_g}\Big\}$$
$$\le |\alpha'|(k-\gamma)^2\Lambda\delta\Big\{\frac{|\nabla\nabla u|^2_g}{|\nabla u|^2_g} + |\nabla u|^2_g\Big\} + |\alpha'\Lambda|(k-\gamma)^2e^{-\gamma u}\frac{|\nabla\nabla u|^2_g}{|\nabla u|^2_g} + C^2|\alpha'\Lambda|e^{-(1+\gamma)u}$$
$$\le 2|\alpha'|\Lambda(k-\gamma)^2\delta\frac{|\nabla\nabla u|^2_g}{|\nabla u|^2_g} + \delta|\alpha'|(k-\gamma)^2\Lambda|\nabla u|^2_g + Ce^{-u}. \tag{5.32}$$
Third, we estimate the terms involving $\nabla u$ quadratically:
$$\frac{2|\alpha'(k-\gamma)|}{|\nabla u|^2_g}e^{-(1+\gamma)u}\big|g^{j\bar i}u_{\bar i}\{(k-\gamma)\hat\nabla_ja^{p\bar q}u_pu_{\bar q} - 2(1+\gamma){\rm Re}\{b^pu_p\}u_j + u_jb^iu_i\}\big| \le Ce^{-\gamma u}e^{-u/2}|\nabla u|_g \le \frac{\sigma}{16}|\nabla u|^2_g + C(\sigma)e^{-(1+2\gamma)u} \le \frac{\sigma}{16}|\nabla u|^2_g + Ce^{-u}. \tag{5.33}$$
Finally, for all the other terms in $E_j$, we can estimate
$$\frac{2|\alpha'(k-\gamma)|}{|\nabla u|^2_g}e^{-(1+\gamma)u}\big|g^{j\bar i}u_{\bar i}\{-\gamma(k-\gamma)a^{p\bar q}u_pu_{\bar q}u_j + \hat\nabla_jb^pu_p + \partial_j\bar b^qu_{\bar q}\}\big| + \frac{2}{|\nabla u|^2_g}\big|g^{j\bar i}u_{\bar i}\{-(1+\gamma)\alpha'ce^{-(1+\gamma)u}u_j + \alpha'e^{-(1+\gamma)u}\partial_jc + (k+1)e^{-(k+1)u}\mu u_j - e^{-(k+1)u}\partial_j\mu\}\big|$$
$$\le 2|\alpha'|\Lambda(k-\gamma)^2\gamma e^{-\gamma u}|\nabla u|^2_g + Ce^{-(1+\gamma)u} + Ce^{-(1+\gamma)u}\frac{e^{-u/2}}{|\nabla u|_g} + Ce^{-(k+1)u} + Ce^{-(k+1)u}\frac{e^{-u/2}}{|\nabla u|_g}$$
$$\le 2|\alpha'|\Lambda(k-\gamma)^2\gamma\delta|\nabla u|^2_g + Ce^{-u} + Ce^{-u}\frac{e^{-u/2}}{|\nabla u|_g}. \tag{5.34}$$
Putting everything together, we obtain the following estimate for the terms coming from $E_j$:
$$\frac{2}{|\nabla u|^2_g}|g^{j\bar i}E_ju_{\bar i}| \le \Big\{2|\alpha'|\Lambda(k-\gamma)^2(1+\gamma)\delta + \frac{\sigma}{16}\Big\}|\nabla u|^2_g + Ce^{-u} + Ce^{-u}\frac{e^{-u/2}}{|\nabla u|_g} + \{4|\alpha'|\Lambda(k+\gamma)^2\delta + \sigma\}\frac{|\nabla\bar\nabla u|^2_g}{|\nabla u|^2_g} + 2|\alpha'|\Lambda(k-\gamma)^2\delta\frac{|\nabla\nabla u|^2_g}{|\nabla u|^2_g}. \tag{5.35}$$

5.2.2 The $\tilde E$ terms

Next, estimating $\tilde E$ defined in (5.17) gives
$$(2+\sigma)|\tilde E| \le k(2+\sigma)|\alpha'||\sigma_{k+1}(i\partial\bar\partial u)| + (2+\sigma)|\alpha'\Lambda|(k-\gamma)^2e^{-\gamma u}|\nabla u|^2_g + 2\|\alpha'(k-\gamma)b^i\|_\infty e^{-\gamma u}e^{-u/2}|\nabla u|_g + Ce^{-(1+\gamma)u} + Ce^{-(k+1)u}. \tag{5.36}$$
Using $e^{-u}\le\delta\le1$ and
$$2\|\alpha'(k-\gamma)b\|_\infty e^{-\gamma u}e^{-u/2}|\nabla u|_g \le \frac{\sigma}{16}|\nabla u|^2_g + C(\sigma)e^{-u}e^{-2\gamma u}, \tag{5.37}$$
we obtain
$$(2+\sigma)|\tilde E| \le k(2+\sigma)|\alpha'||\sigma_{k+1}(i\partial\bar\partial u)| + (2+\sigma)|\alpha'\Lambda|(k-\gamma)^2\delta|\nabla u|^2_g + \frac{\sigma}{16}|\nabla u|^2_g + Ce^{-u}.$$
By Lemma 1, we have
$$k|\alpha'||\sigma_{k+1}(i\partial\bar\partial u)| \le \frac{k|\alpha'|\,C^{k+1}_n}{n^{1/2}n^{k/2}}|\nabla\bar\nabla u|^k_g|\nabla\bar\nabla u|_g \le \{|\alpha'|C^k_{n-1}|\nabla\bar\nabla u|^k_g\}|\nabla\bar\nabla u|_g. \tag{5.38}$$
Since $u\in\Upsilon_k$, we have $|\alpha'|C^k_{n-1}|\nabla\bar\nabla u|^k_g\le2^{-7}$. Thus
$$(2+\sigma)|\tilde E| \le \Big\{(2+\sigma)|\alpha'\Lambda|(k-\gamma)^2\delta + \frac{\sigma}{16}\Big\}|\nabla u|^2_g + 2^{-7}(2+\sigma)|\nabla\bar\nabla u|_g + Ce^{-u}. \tag{5.39}$$

5.3 Completing the estimate

Combining (5.35) and (5.39),
$$\frac{2}{|\nabla u|^2_g}|g^{j\bar i}E_ju_{\bar i}| + (2+\sigma)|\tilde E| \le \Big\{5|\alpha'|\Lambda(k-\gamma)^2(1+\gamma)\delta + \frac{\sigma}{8}\Big\}|\nabla u|^2_g + 2|\alpha'\Lambda|(k-\gamma)^2\delta\frac{|\nabla\nabla u|^2_g}{|\nabla u|^2_g} + \{4|\alpha'|\Lambda(k+\gamma)^2\delta + \sigma\}\frac{|\nabla\bar\nabla u|^2_g}{|\nabla u|^2_g} + 2^{-7}(2+\sigma)|\nabla\bar\nabla u|_g + Ce^{-u} + Ce^{-u}\frac{e^{-u/2}}{|\nabla u|_g}. \tag{5.40}$$
Recall $\sigma = 2^{-7}$; using $(k-\gamma)^2(1+\gamma)\le(k+\gamma)^3$, the definition (2.7) of $\delta$ implies
$$5|\alpha'|\Lambda(k-\gamma)^2(1+\gamma)\delta \le \frac{\sigma}{8}, \qquad 4|\alpha'\Lambda|(k+\gamma)^2\delta \le 2^{-7}.$$
Then, we have
$$\frac{2}{|\nabla u|^2_g}|g^{j\bar i}E_ju_{\bar i}| + (2+\sigma)|\tilde E| \le \frac{\sigma}{4}|\nabla u|^2_g + \frac{\sigma}{4}\frac{|\nabla\nabla u|^2_g}{|\nabla u|^2_g} + 2^{-6}\frac{|\nabla\bar\nabla u|^2_g}{|\nabla u|^2_g} + 2^{-7}(2+\sigma)|\nabla\bar\nabla u|_g + Ce^{-u} + Ce^{-u}\frac{e^{-u/2}}{|\nabla u|_g}. \tag{5.41}$$
Using (5.41), the main inequality (5.30) becomes
$$F^{p\bar q}\nabla_p\nabla_{\bar q}G \ge (1-2^{-5})\frac{1}{|\nabla u|^2_g}|\nabla\bar\nabla u|^2_g - \big\{2\sigma(1+2^{-6}) + 2^{-7}(2+\sigma)\big\}|\nabla\bar\nabla u|_g + \frac{\sigma}{4}|\nabla u|^2_g + \frac{1}{|\nabla u|^2_g}F^{p\bar q}g^{j\bar i}u_j\hat R_{\bar qp\bar i}{}^{\bar\lambda}u_{\bar\lambda} - Ce^{-u} - Ce^{-u}\frac{e^{-u/2}}{|\nabla u|_g}. \tag{5.42}$$
By our choice $\sigma = 2^{-7}$, we have the inequality of numbers
$$\big\{2\sigma(1+2^{-6}) + 2^{-7}(2+\sigma)\big\}^2\frac{1}{1-2^{-5}} \le \frac{\sigma}{2}. \tag{5.43}$$
Thus
$$\big\{2\sigma(1+2^{-6}) + 2^{-7}(2+\sigma)\big\}|\nabla\bar\nabla u|_g \le (1-2^{-5})\frac{1}{|\nabla u|^2_g}|\nabla\bar\nabla u|^2_g + \frac14\big\{2\sigma(1+2^{-6}) + 2^{-7}(2+\sigma)\big\}^2\frac{1}{1-2^{-5}}|\nabla u|^2_g \le (1-2^{-5})\frac{1}{|\nabla u|^2_g}|\nabla\bar\nabla u|^2_g + \frac{\sigma}{8}|\nabla u|^2_g. \tag{5.44}$$
We may also estimate
$$\frac{1}{|\nabla u|^2_g}F^{p\bar q}g^{j\bar i}u_j\hat R_{\bar qp\bar i}{}^{\bar\lambda}u_{\bar\lambda} \ge -Ce^{-u}. \tag{5.45}$$
Putting everything together, at $p$ there holds
$$0 \ge F^{p\bar q}\nabla_p\nabla_{\bar q}G \ge \frac{\sigma}{8}|\nabla u|^2_g - Ce^{-u}\frac{e^{-u/2}}{|\nabla u|_g} - Ce^{-u}. \tag{5.46}$$
From this inequality, we can conclude
$$|\nabla u|^2_g(p) \le Ce^{-u(p)}. \tag{5.47}$$
By definition $G(x)\le G(p)$, and we have
$$|\nabla u|^2_g \le Ce^{-u(p)}e^{(1+\sigma)(u(p)-u)} \le CM^{-1}, \tag{5.48}$$
since $e^{u(p)}e^{-u}\le C$ and $e^{-u}\le CM^{-1}$ by Theorem 3. This completes the proof of Theorem 4.

6 Second Order Estimate

Theorem 5 Let $u\in\Upsilon_k$ be a $C^4(X)$ function with normalization $\int_X e^u\hat\omega^n = M$ solving the Fu-Yau equation (2.2). Then
$$|\nabla\bar\nabla u|^2_g \le CM^{-1}, \tag{6.1}$$
where $C$ only depends on $(X,\hat\omega)$, $\alpha'$, $k$, $\gamma$, $\|\rho\|_{C^4(X,\hat\omega)}$ and $\|\mu\|_{C^2(X)}$.

We begin by noting the following elementary estimate.

Lemma 3 Let $\ell\in\{2,3,\dots,n\}$. The following estimate holds:
$$|g^{j\bar i}\sigma^{p\bar q,r\bar s}_\ell\nabla_ju_{\bar qp}\nabla_{\bar i}u_{\bar sr}| \le C^{\ell-2}_{n-2}|\nabla\bar\nabla u|^{\ell-2}_g|\nabla\bar\nabla\nabla u|^2_g. \tag{6.2}$$

Proof: Since the inequality is invariant, we may work at a point $p\in X$ where $g$ is the identity and $u_{\bar qp}$ is diagonal. At $p$, we can use (4.17) and conclude
$$|g^{j\bar i}\sigma^{p\bar q,r\bar s}_\ell\nabla_ju_{\bar qp}\nabla_{\bar i}u_{\bar sr}| \le \sum_i\sum_{p,q}|\sigma_{\ell-2}(\lambda|pq)|\,|\nabla_iu_{\bar qp}|^2. \tag{6.3}$$
By Lemma 1,
$$|\sigma_{\ell-2}(\lambda|pq)| \le \frac{C^{\ell-2}_{n-2}}{(n-2)^{(\ell-2)/2}}|\nabla\bar\nabla u|^{\ell-2}_g. \tag{6.4}$$
This inequality proves the Lemma. Q.E.D.
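For orientation, we record the lowest case $\ell = 2$ of Lemma 3 (a sanity check using only $\sigma_0 = 1$ and $C^0_{n-2} = 1$): the factor $|\sigma_{\ell-2}(\lambda|pq)|$ is identically $1$, so (6.2) reduces to
$$|g^{j\bar i}\sigma^{p\bar q,r\bar s}_2\nabla_ju_{\bar qp}\nabla_{\bar i}u_{\bar sr}| \le |\nabla\bar\nabla\nabla u|^2_g,$$
with no factor of $|\nabla\bar\nabla u|_g$ appearing.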
6.1 Differentiating the norm of second derivatives

Lemma 4 Let $u\in\Upsilon_k$ be a $C^4(X)$ function solving (2.2) with normalization $\int_X e^u = M$. There exists a constant $C > 0$ depending only on $(X,\hat\omega)$, $\alpha'$, $k$, $\gamma$, $\|\rho\|_{C^4(X,\hat\omega)}$ and $\|\mu\|_{C^2(X)}$ such that
$$F^{p\bar q}\nabla_p\nabla_{\bar q}|\nabla\bar\nabla u|^2_g \ge 2(1-2^{-5})|\nabla\bar\nabla\nabla u|^2_g - (1+2k)|\alpha'|^{-1/k}\tau^{1/k}|\nabla\nabla u|^2_g - (1+2k)|\alpha'|^{-1/k}\tau^{1/k}|\nabla\bar\nabla u|^2_g - |\nabla\bar\nabla u|_g|\nabla\bar\nabla\nabla u|_g - CM^{-1/2}|\nabla\bar\nabla\nabla u|_g - CM^{-1}|\nabla\nabla u|_g - CM^{-1}. \tag{6.5}$$
We start by differentiating (5.11) and using the definition of $F^{p\bar q}$ to obtain
$$0 = \alpha'\nabla_{\bar i}\sigma^{p\bar q}_{k+1}\nabla_j\nabla_p\nabla_{\bar q}u + F^{p\bar q}\nabla_{\bar i}\nabla_j\nabla_p\nabla_{\bar q}u + k\nabla_{\bar i}\nabla_j|\nabla u|^2_g - \alpha'(k-\gamma)(1+\gamma)e^{-(1+\gamma)u}a^{p\bar q}u_{\bar i}\nabla_j\nabla_p\nabla_{\bar q}u + \alpha'(k-\gamma)e^{-(1+\gamma)u}\nabla_{\bar i}a^{p\bar q}\nabla_j\nabla_p\nabla_{\bar q}u + \nabla_{\bar i}E_j. \tag{6.6}$$
Next, we use (4.13) and (4.9) to conclude
$$F^{p\bar q}\nabla_p\nabla_{\bar q}u_{\bar ij} = -\alpha'\sigma^{p\bar q,r\bar s}_{k+1}\nabla_j\nabla_p\nabla_{\bar q}u\,\nabla_{\bar i}\nabla_r\nabla_{\bar s}u - F^{p\bar q}\big[u_p\nabla_{\bar i}\nabla_j\nabla_{\bar q}u - u_j\nabla_{\bar i}\nabla_p\nabla_{\bar q}u + u_{\bar q}\nabla_p\nabla_{\bar i}\nabla_ju - u_{\bar i}\nabla_p\nabla_{\bar q}\nabla_ju\big] - F^{p\bar q}\hat R_{\bar ij}{}^\lambda{}_pu_{\bar q\lambda} + F^{p\bar q}\hat R_{\bar qp\bar i}{}^{\bar\lambda}u_{\bar\lambda j} - k\nabla_{\bar i}\nabla_j|\nabla u|^2_g + \alpha'(k-\gamma)(1+\gamma)e^{-(1+\gamma)u}a^{p\bar q}u_{\bar i}\nabla_j\nabla_p\nabla_{\bar q}u - \alpha'(k-\gamma)e^{-(1+\gamma)u}\nabla_{\bar i}a^{p\bar q}\nabla_j\nabla_p\nabla_{\bar q}u - \nabla_{\bar i}E_j. \tag{6.7}$$
Direct computation gives
$$F^{p\bar q}\nabla_p\nabla_{\bar q}|\nabla\bar\nabla u|^2_g = 2g^{s\bar i}g^{j\bar r}F^{p\bar q}\nabla_p\nabla_{\bar q}u_{\bar ij}\,u_{\bar rs} + 2|\nabla\bar\nabla\nabla u|^2_{Fgg}. \tag{6.8}$$
Recall (4.21) that we can pass from $F^{p\bar q}$ to the metric $g^{p\bar q}$ up to an error of order $2^{-6}$. Substituting (6.7) into (6.8) and estimating terms gives
$$F^{p\bar q}\nabla_p\nabla_{\bar q}|\nabla\bar\nabla u|^2_g \ge 2\Big\{(1-2^{-6})|\nabla\bar\nabla\nabla u|^2_g - |\alpha'g^{m\bar i}g^{j\bar n}\sigma^{p\bar q,r\bar s}_{k+1}\nabla_ju_{\bar qp}\nabla_{\bar i}u_{\bar sr}u_{\bar nm}|\Big\} - C|\nabla\bar\nabla u|_g|\nabla\bar\nabla\nabla u|_g\Big\{|Du|_g + e^{-\gamma u}|\nabla u|_g + e^{-\gamma u}e^{-u/2}\Big\} - C|\nabla\bar\nabla u|_g\Big\{e^{-u}|\nabla\bar\nabla u|_g\Big\} - 2k\big|g^{s\bar i}g^{j\bar r}\nabla_{\bar i}\nabla_j|\nabla u|^2_g\,u_{\bar rs}\big| - 2\big|g^{s\bar i}g^{j\bar r}\nabla_{\bar i}E_ju_{\bar rs}\big|. \tag{6.9}$$
The condition $u\in\Upsilon_k$ (2.6) together with $k\le(n-1)$ gives
$$C^{k-1}_{n-2}|\alpha'||\nabla\bar\nabla u|^k_g \le |\alpha'|C^k_{n-1}|\nabla\bar\nabla u|^k_g \le 2^{-7}. \tag{6.10}$$
Therefore by (6.2)
$$|\alpha'g^{m\bar i}g^{j\bar n}\sigma^{p\bar q,r\bar s}_{k+1}\nabla_ju_{\bar qp}\nabla_{\bar i}u_{\bar sr}u_{\bar nm}| \le 2^{-7}|\nabla\bar\nabla\nabla u|^2_g. \tag{6.11}$$
In the coming estimates, we will often use the $C^0$ and $C^1$ estimates, and the condition $u\in\Upsilon_k$ (2.6), which we record here for future reference.
$$e^{-u} \le CM^{-1}, \qquad |\nabla u|^2_g \le CM^{-1}, \qquad |\nabla\bar\nabla u|_g \le |\alpha'|^{-1/k}\tau^{1/k}, \tag{6.12}$$
where $\tau = (C^k_{n-1})^{-1}2^{-7}$. Since $u\in\Upsilon_k$, we have $M = \int_X e^u\hat\omega^n \ge 1$, and so we will often only keep the leading power of $M$ since $M\ge1$. Applying all this to (6.9), we have
$$F^{p\bar q}\nabla_p\nabla_{\bar q}|\nabla\bar\nabla u|^2_g \ge 2(1-2^{-5})|\nabla\bar\nabla\nabla u|^2_g - CM^{-1/2}|\nabla\bar\nabla u|_g|\nabla\bar\nabla\nabla u|_g - CM^{-1}|\nabla\bar\nabla u|_g|\nabla\bar\nabla u|_g - 2k\big|g^{s\bar i}g^{j\bar r}\nabla_{\bar i}\nabla_j|\nabla u|^2_g\,u_{\bar rs}\big| - 2\big|g^{s\bar i}g^{j\bar r}\nabla_{\bar i}E_ju_{\bar rs}\big|. \tag{6.13}$$
We will now estimate the two last terms. We compute the first of these directly, using (4.5) to commute derivatives:
$$2kg^{s\bar i}g^{j\bar r}\nabla_{\bar i}\nabla_j|\nabla u|^2_g\,u_{\bar rs} = 2kg^{s\bar i}g^{j\bar r}\Big\{g^{p\bar q}u_{\bar q}\nabla_j\nabla_{\bar i}\nabla_pu + g^{p\bar q}u_p\nabla_{\bar i}\nabla_j\nabla_{\bar q}u + g^{p\bar q}\nabla_j\nabla_pu\,\nabla_{\bar i}\nabla_{\bar q}u + g^{p\bar q}u_{\bar ip}u_{\bar qj} + g^{p\bar q}u_{\bar q}\hat R_{\bar ij}{}^\ell{}_pu_\ell - g^{p\bar q}u_{\bar q}u_{\bar ij}u_p\Big\}u_{\bar rs}. \tag{6.14}$$
We estimate
$$\big|2kg^{s\bar i}g^{j\bar r}\nabla_{\bar i}\nabla_j|\nabla u|^2_g\,u_{\bar rs}\big| \le k\Big\{4|\nabla\bar\nabla\nabla u|_g|\nabla u|_g + 2|\nabla\bar\nabla u|^2_g + 2|\nabla\nabla u|^2_g + Ce^{-u}|\nabla u|^2_g + 2|\nabla u|^2_g|\nabla\bar\nabla u|_g\Big\}|\nabla\bar\nabla u|_g. \tag{6.15}$$
We will use (6.12). Then
$$\big|2kg^{s\bar i}g^{j\bar r}\nabla_{\bar i}\nabla_j|\nabla u|^2_g\,u_{\bar rs}\big| \le 2k|\alpha'|^{-1/k}\tau^{1/k}|\nabla\bar\nabla u|^2_g + 2k|\alpha'|^{-1/k}\tau^{1/k}|\nabla\nabla u|^2_g + CM^{-1/2}|\nabla\bar\nabla\nabla u|_g + CM^{-2} + CM^{-1}. \tag{6.16}$$
Next, using the definition (5.12) of $E_j$, we keep track of the order of each term and obtain the estimate
$$|g^{s\bar i}g^{j\bar r}\nabla_{\bar i}E_ju_{\bar rs}| \le C(a,b,c,\alpha')|\nabla\bar\nabla u|_g|\nabla\bar\nabla\nabla u|_g\Big\{e^{-\gamma u}e^{-u/2} + e^{-\gamma u}|\nabla u|_g\Big\} + C(a,b,c)|\nabla\bar\nabla u|^2_g\Big\{e^{-\gamma u}|\nabla u|^2_g + e^{-\gamma u}e^{-u/2}|\nabla u|_g + e^{-(1+\gamma)u}\Big\} + C(a,b,c,\alpha')|\nabla\bar\nabla u|_g|\nabla\nabla u|_g\Big\{e^{-\gamma u}|\nabla u|^2_g + e^{-\gamma u}e^{-u/2}|\nabla u|_g + e^{-(1+\gamma)u}\Big\}$$
$$+\, C(a,b,c,\alpha')|\nabla\bar\nabla u|_g\Big\{e^{-(2+\gamma)u} + e^{-(1+\gamma)u}e^{-u/2}|\nabla u|_g + e^{-(1+\gamma)u}|\nabla u|^2_g + e^{-(1+\gamma)u}e^{-u/2}|\nabla u|^3_g + e^{-(1+\gamma)u}|\nabla u|^4_g\Big\} + C(\mu)|\nabla\bar\nabla u|_g\Big\{e^{-(k+1)u}|\nabla\bar\nabla u|_g + e^{-(k+1)u}|\nabla u|^2_g + e^{-(k+1)u}e^{-u/2}|\nabla u|_g + e^{-(k+2)u}\Big\}$$
$$+\,(k-\gamma)^2g^{s\bar k}g^{j\bar r}|(\alpha'e^{-(1+\gamma)u}a^{p\bar q}\nabla_{\bar k}\nabla_j\nabla_pu\,u_{\bar q})u_{\bar rs}| + |k-\gamma|g^{s\bar k}g^{j\bar r}|(\alpha'e^{-(1+\gamma)u}b^i\nabla_{\bar k}\nabla_j\nabla_iu)u_{\bar rs}| + |k-\gamma|g^{s\bar k}g^{j\bar r}|(\alpha'e^{-(1+\gamma)u}\gamma a^{p\bar q}u_{\bar qp}u_{\bar kj})u_{\bar rs}| + (k-\gamma)^2g^{s\bar k}g^{j\bar r}|(\alpha'e^{-(1+\gamma)u}a^{p\bar q}u_{\bar kp}u_{\bar qj})u_{\bar rs}| + (k-\gamma)^2g^{s\bar k}g^{j\bar r}|(\alpha'e^{-(1+\gamma)u}a^{p\bar q}\nabla_j\nabla_pu\,\nabla_{\bar k}\nabla_{\bar q}u)u_{\bar rs}|. \tag{6.17}$$
We will use our estimates (6.12). We also recall the notation $-\Lambda\hat g^{p\bar q} \le a^{p\bar q} \le \Lambda\hat g^{p\bar q}$. We use these estimates and commute covariant derivatives to obtain
$$|g^{s\bar k}g^{j\bar r}\nabla_{\bar k}E_ju_{\bar rs}| \le CM^{-1/2}|\nabla\bar\nabla\nabla u|_g + CM^{-1}|\nabla\nabla u|_g + CM^{-1} + CM^{-2} + CM^{-(k+1)} + CM^{-(k+2)} + (k-\gamma)^2e^{-(1+\gamma)u}g^{s\bar k}g^{j\bar r}|(\alpha'a^{p\bar q}\nabla_j\nabla_{\bar k}\nabla_pu\,u_{\bar q} + \alpha'a^{p\bar q}R_{\bar kj}{}^\lambda{}_pu_\lambda u_{\bar q})u_{\bar rs}| + |k-\gamma|e^{-(1+\gamma)u}g^{s\bar k}g^{j\bar r}|(\alpha'b^i\nabla_j\nabla_{\bar k}\nabla_iu + \alpha'b^iR_{\bar kj}{}^\lambda{}_iu_\lambda)u_{\bar rs}| + 2e^{-\gamma u}|\alpha'|\Lambda(k+\gamma)^2|\nabla\bar\nabla u|_g|\nabla\bar\nabla u|^2_g + e^{-\gamma u}|\alpha'|\Lambda(k+\gamma)^2|\nabla\bar\nabla u|_g|\nabla\nabla u|^2_g. \tag{6.18}$$
Since $u\in\Upsilon_k$, we have $2|\alpha'|\Lambda(k+\gamma)^2e^{-\gamma u}\le1$. It follows that
$$|g^{s\bar k}g^{j\bar r}\nabla_{\bar k}E_ju_{\bar rs}| \le |\alpha'|^{-1/k}\tau^{1/k}|\nabla\nabla u|^2_g + |\alpha'|^{-1/k}\tau^{1/k}|\nabla\bar\nabla u|^2_g + CM^{-1/2}|\nabla\bar\nabla\nabla u|_g + CM^{-1}|\nabla\nabla u|_g + CM^{-1}. \tag{6.19}$$
Substituting (6.16) and (6.19) into (6.13) and keeping the leading orders of $M$, we arrive at (6.5).

6.2 Using a test function

Let
$$G = |\nabla\bar\nabla u|^2_g + \Theta|\nabla u|^2_g, \tag{6.20}$$
where $\Theta\gg1$ is a large constant depending on $n$, $k$, $\alpha'$. To be precise, we let
$$\Theta = (1-2^{-6})^{-1}\big\{(1+2k)|\alpha'|^{-1/k}\tau^{1/k} + 1\big\}. \tag{6.21}$$
By (5.9),
$$F^{p\bar q}\nabla_p\nabla_{\bar q}|\nabla u|^2_g \ge |\nabla\bar\nabla u|^2_{Fg} + |\nabla\nabla u|^2_{Fg} - 2|\nabla u|_g|\nabla\bar\nabla\nabla u|_g - |\nabla u|^2_g|\nabla\bar\nabla u|_g - Ce^{-u}|\nabla u|^2_g. \tag{6.22}$$
Applying (6.12) and converting $F^{p\bar q}$ to $g^{p\bar q}$ yields
$$F^{p\bar q}\nabla_p\nabla_{\bar q}|\nabla u|^2_g \ge (1-2^{-6})|\nabla\bar\nabla u|^2_g + (1-2^{-6})|\nabla\nabla u|^2_g - CM^{-1/2}|\nabla\bar\nabla\nabla u|_g - CM^{-1}|\nabla\bar\nabla u|_g - CM^{-2}. \tag{6.23}$$
Combining (6.5) and (6.23), we have
$$F^{p\bar q}\nabla_p\nabla_{\bar q}G \ge 2(1-2^{-5})|\nabla\bar\nabla\nabla u|^2_g + |\nabla\bar\nabla u|^2_g + |\nabla\nabla u|^2_g - |\nabla\bar\nabla u|_g|\nabla\bar\nabla\nabla u|_g - CM^{-1/2}|\nabla\bar\nabla\nabla u|_g - CM^{-1}|\nabla\nabla u|_g - CM^{-1}. \tag{6.24}$$
We will split the linear terms into quadratic terms by applying
$$CM^{-1/2}|\nabla\bar\nabla\nabla u|_g \le \frac12|\nabla\bar\nabla\nabla u|^2_g + \frac{C^2}{2}M^{-1}, \tag{6.25}$$
$$|\nabla\bar\nabla u|_g|\nabla\bar\nabla\nabla u|_g \le \frac12|\nabla\bar\nabla\nabla u|^2_g + \frac12|\nabla\bar\nabla u|^2_g, \tag{6.26}$$
$$CM^{-1}|\nabla\nabla u|_g \le \frac{C^2}{4}M^{-2} + |\nabla\nabla u|^2_g. \tag{6.27}$$
Applying these estimates, we may discard the remaining quadratic positive terms and (6.24) becomes
$$F^{p\bar q}\nabla_p\nabla_{\bar q}G \ge \frac12|\nabla\bar\nabla u|^2_g - CM^{-1}. \tag{6.28}$$
Let $p\in X$ be a point where $G$ attains its maximum. From the maximum principle, $|\nabla\bar\nabla u|^2_g(p)\le CM^{-1}$. We conclude from $G\le G(p)$ that
$$|\nabla\bar\nabla u|^2_g \le CM^{-1}, \tag{6.29}$$
establishing Theorem 5.
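It may be worth noting why the precise choice of $\Theta$ in (6.21) produces the bare terms $|\nabla\bar\nabla u|^2_g$ and $|\nabla\nabla u|^2_g$ in (6.24) (a short bookkeeping check): combining (6.5) with $\Theta$ times (6.23), the coefficient in front of each of these two quantities is
$$\Theta(1-2^{-6}) - (1+2k)|\alpha'|^{-1/k}\tau^{1/k} = 1,$$
and this is exactly what is consumed by the splittings (6.26) and (6.27), leaving the term $\frac12|\nabla\bar\nabla u|^2_g$ in (6.28).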
We note that many equations involving the derivative of the unknown and/or several Hessians have been studied recently in the literature (see e.g. [3, 4, 7, 13, 15, 29, 26, 28] and references therein). It would be very interesting to determine when estimates with scale hold.

7 Third Order Estimate

Theorem 6 Let $u\in\Upsilon_k$ be a $C^5(X)$ function solving equation (2.2). Then
$$|\nabla\bar\nabla\nabla u|^2_g \le C, \tag{7.1}$$
where $C$ only depends on $(X,\hat\omega)$, $\alpha'$, $k$, $\gamma$, $\|\rho\|_{C^5(X,\hat\omega)}$ and $\|\mu\|_{C^3(X)}$.

To prove this estimate, we will apply the maximum principle to the test function
$$G = (|\nabla\bar\nabla u|^2_g + \eta)|\nabla\bar\nabla\nabla u|^2_g + B(|\nabla u|^2_g + A)|\nabla\nabla u|^2_g, \tag{7.2}$$
where $A, B\gg1$ are large constants to be specified later and $\eta = m\tau^{2/k}|\alpha'|^{-2/k}$. We will specify $m\gg1$ later, and $\tau = (C^k_{n-1})^{-1}2^{-7}$. The condition (2.6) $u\in\Upsilon_k$ implies
$$|\alpha'|^{1/k}|\nabla\bar\nabla u|_g \le \tau^{1/k}. \tag{7.3}$$
Our choice of constants ensures that $\eta$ and $|\nabla\bar\nabla u|^2_g$ are of the same $\alpha'$ scale. By our previous work, we may estimate by $C$ any term involving $u$, $\nabla u$, $\nabla\bar\nabla u$, or the curvature or torsion of $g = e^u\hat g$.

7.1 Quadratic second order term

Lemma 5 Let $u\in\Upsilon_k$ be a $C^4(X)$ function solving equation (2.2). Then for all $A\gg1$ larger than a fixed constant only depending on $|\nabla u|_g$, and for all $B>0$,
$$F^{p\bar q}\nabla_p\nabla_{\bar q}\big\{(|\nabla u|^2_g + A)|\nabla\nabla u|^2_g\big\} \ge \frac A2|\nabla\nabla\nabla u|^2_g + (1-2^{-5})|\nabla\nabla u|^4_g - \frac{1}{2^5B}|\nabla\bar\nabla\nabla u|^4_g - C(A,B), \tag{7.4}$$
where $C(A,B)$ only depends on $A$, $B$, $(X,\hat\omega)$, $\alpha'$, $k$, $\gamma$, $\|\rho\|_{C^4(X,\hat\omega)}$ and $\|\mu\|_{C^2(X)}$.

Differentiating (5.11) gives
$$F^{p\bar q}\nabla_\ell\nabla_j\nabla_p\nabla_{\bar q}u = -\alpha'(k-\gamma)\nabla_\ell(e^{-(1+\gamma)u}a^{p\bar q})\nabla_ju_{\bar qp} - \alpha'(\nabla_\ell\sigma^{p\bar q}_{k+1})\nabla_ju_{\bar qp} - k\nabla_\ell\nabla_j|\nabla u|^2_g - \nabla_\ell E_j. \tag{7.5}$$
Commuting derivatives,
$$F^{p\bar q}\nabla_p\nabla_{\bar q}\nabla_\ell\nabla_ju = F^{p\bar q}\nabla_\ell\nabla_j\nabla_p\nabla_{\bar q}u + F^{p\bar q}\nabla_p(\hat R_{\bar q\ell}{}^\lambda{}_j\nabla_\lambda u - u_{\bar q\ell}u_j) - F^{p\bar q}T^\lambda_{p\ell}\nabla_\lambda\nabla_j\nabla_{\bar q}u - F^{p\bar q}\nabla_\ell(u_p\nabla_j\nabla_{\bar q}u - u_j\nabla_p\nabla_{\bar q}u). \tag{7.6}$$
We compute directly and commute derivatives to derive
$$F^{p\bar q}\nabla_p\nabla_{\bar q}|\nabla\nabla u|^2_g = 2{\rm Re}\{g^{\ell\bar b}g^{j\bar d}F^{p\bar q}\nabla_p\nabla_{\bar q}\nabla_\ell\nabla_ju\,\nabla_{\bar b}\nabla_{\bar d}u\} + g^{\ell\bar b}g^{j\bar d}\nabla_\ell\nabla_ju\,F^{p\bar q}R_{\bar qp\bar b}{}^{\bar\lambda}\nabla_{\bar\lambda}\nabla_{\bar d}u + g^{\ell\bar b}g^{j\bar d}\nabla_\ell\nabla_ju\,F^{p\bar q}R_{\bar qp\bar d}{}^{\bar\lambda}\nabla_{\bar b}\nabla_{\bar\lambda}u + F^{p\bar q}g^{\ell\bar b}g^{j\bar d}\nabla_p\nabla_\ell\nabla_ju\,\nabla_{\bar q}\nabla_{\bar b}\nabla_{\bar d}u + F^{p\bar q}g^{\ell\bar b}g^{j\bar d}\nabla_{\bar q}\nabla_\ell\nabla_ju\,\nabla_p\nabla_{\bar b}\nabla_{\bar d}u. \tag{7.7}$$
Combining (7.5), (7.6), (7.7) and converting $F^{p\bar q}$ to $g^{p\bar q}$ using Lemma 2, we estimate
$$F^{p\bar q}\nabla_p\nabla_{\bar q}|\nabla\nabla u|^2_g \ge (1-2^{-6})|\nabla\nabla\nabla u|^2_g + (1-2^{-6})|\bar\nabla\nabla\nabla u|^2_g - 2\alpha'{\rm Re}\{g^{\ell\bar b}g^{j\bar d}\sigma^{p\bar q,r\bar s}_{k+1}\nabla_\ell u_{\bar sr}\nabla_ju_{\bar qp}\nabla_{\bar b}\nabla_{\bar d}u\} - 2{\rm Re}\{g^{\ell\bar b}g^{j\bar d}\nabla_\ell E_j\nabla_{\bar b}\nabla_{\bar d}u\} - C|\nabla\nabla u|_g(|\nabla\nabla\nabla u|_g + |\nabla\bar\nabla\nabla u|_g + |\nabla\nabla u|_g + 1). \tag{7.8}$$
We used the identity (4.8) to estimate $|\nabla\nabla\bar\nabla u|$ by $|\nabla\bar\nabla\nabla u|$ and lower order terms. Next, using Lemma 3 we estimate
$$-2{\rm Re}\{\alpha'g^{\ell\bar b}g^{j\bar d}\sigma^{p\bar q,r\bar s}_{k+1}\nabla_\ell u_{\bar sr}\nabla_ju_{\bar qp}\nabla_{\bar b}\nabla_{\bar d}u\} \ge -2C^{k-1}_{n-2}|\alpha'||\nabla\bar\nabla u|^{k-1}_g|\nabla\nabla u|_g|\nabla\bar\nabla\nabla u|^2_g \ge -2C^{k-1}_{n-2}\tau^{1-(1/k)}|\alpha'|^{1/k}|\nabla\nabla u|_g|\nabla\bar\nabla\nabla u|^2_g \tag{7.9}$$
and
$$|g^{\ell\bar b}g^{j\bar d}\nabla_\ell E_j\nabla_{\bar b}\nabla_{\bar d}u| \le C|\nabla\nabla u|_g\{1 + |\nabla\nabla u|_g + |\nabla\bar\nabla\nabla u|_g + |\nabla\nabla\nabla u|_g\}. \tag{7.10}$$
Thus
$$F^{p\bar q}\nabla_p\nabla_{\bar q}|\nabla\nabla u|^2_g \ge (1-2^{-6})|\nabla\nabla\nabla u|^2_g + (1-2^{-6})|\bar\nabla\nabla\nabla u|^2_g - C|\nabla\nabla u|_g\{|\nabla\bar\nabla\nabla u|^2_g + |\nabla\nabla\nabla u|_g + |\nabla\bar\nabla\nabla u|_g + |\nabla\nabla u|_g + 1\}. \tag{7.11}$$
By (5.14),
$$F^{p\bar q}\nabla_p\nabla_{\bar q}|\nabla u|^2_g \ge (1-2^{-6})|\nabla\bar\nabla u|^2_g + (1-2^{-6})|\nabla\nabla u|^2_g - C|\nabla\nabla u|_g - C. \tag{7.12}$$
Direct computation gives
$$F^{p\bar q}\nabla_p\nabla_{\bar q}\big\{(|\nabla u|^2_g + A)|\nabla\nabla u|^2_g\big\} = (|\nabla u|^2_g + A)F^{p\bar q}\nabla_p\nabla_{\bar q}|\nabla\nabla u|^2_g + |\nabla\nabla u|^2_gF^{p\bar q}\nabla_p\nabla_{\bar q}|\nabla u|^2_g + 2{\rm Re}\{F^{p\bar q}\nabla_p|\nabla u|^2_g\nabla_{\bar q}|\nabla\nabla u|^2_g\}. \tag{7.13}$$
We estimate
$$2\big|F^{p\bar q}\nabla_p|\nabla u|^2_g\nabla_{\bar q}|\nabla\nabla u|^2_g\big| \le 2(1+2^{-6})|\nabla\nabla u|^2_g|\nabla u|_g|\bar\nabla\nabla\nabla u|_g + 2(1+2^{-6})|\nabla\nabla u|^2_g|\nabla u|_g|\nabla\nabla\nabla u|_g + C|\bar\nabla\nabla\nabla u|_g|\nabla\nabla u|_g + C|\nabla\nabla\nabla u|_g|\nabla\nabla u|_g. \tag{7.14}$$
Substituting (7.11), (7.12), (7.14) into (7.13),
$$F^{p\bar q}\nabla_p\nabla_{\bar q}\big\{(|\nabla u|^2_g + A)|\nabla\nabla u|^2_g\big\} \ge A(1-2^{-6})\big\{|\nabla\nabla\nabla u|^2_g + |\bar\nabla\nabla\nabla u|^2_g\big\} + (1-2^{-6})|\nabla\nabla u|^4_g - 3|\nabla\nabla u|^2_g|\nabla u|_g\big\{|\bar\nabla\nabla\nabla u|_g + |\nabla\nabla\nabla u|_g\big\} - C(A)|\nabla\nabla u|_g\Big\{|\nabla\bar\nabla\nabla u|^2_g + |\nabla\nabla\nabla u|_g + |\nabla\bar\nabla\nabla u|_g + |\nabla\nabla u|^2_g + |\nabla\nabla u|_g + 1\Big\}. \tag{7.15}$$
Using $2ab\le a^2+b^2$,
$$3|\nabla\nabla u|^2_g|\nabla u|_g|\bar\nabla\nabla\nabla u|_g \le 2^{-7}|\nabla\nabla u|^4_g + 2^5 3^2|\nabla u|^2_g|\bar\nabla\nabla\nabla u|^2_g, \tag{7.16}$$
$$3|\nabla\nabla u|^2_g|\nabla u|_g|\nabla\nabla\nabla u|_g \le 2^{-7}|\nabla\nabla u|^4_g + 2^5 3^2|\nabla u|^2_g|\nabla\nabla\nabla u|^2_g, \tag{7.17}$$
$$C(A)|\nabla\nabla\nabla u|_g|\nabla\nabla u|_g \le |\nabla\nabla\nabla u|^2_g + \frac{C(A)^2}{4}|\nabla\nabla u|^2_g, \tag{7.18}$$
$$C(A)|\nabla\bar\nabla\nabla u|^2_g|\nabla\nabla u|_g \le \frac{1}{2^5B}|\nabla\bar\nabla\nabla u|^4_g + 2^3C(A)^2B|\nabla\nabla u|^2_g, \tag{7.19}$$
for a constant $B\gg1$ to be determined later. Then
$$F^{p\bar q}\nabla_p\nabla_{\bar q}\big\{(|\nabla u|^2_g + A)|\nabla\nabla u|^2_g\big\} \ge \big\{A(1-2^{-6}) - 2^6 3^2|\nabla u|^2_g - 1\big\}|\nabla\nabla\nabla u|^2_g + \big\{A(1-2^{-6}) - 2^6 3^2|\nabla u|^2_g - 1\big\}|\bar\nabla\nabla\nabla u|^2_g + (1-2^{-5})|\nabla\nabla u|^4_g - \frac{1}{2^5B}|\nabla\bar\nabla\nabla u|^4_g - C(A,B)\big\{|\nabla\nabla u|_g + |\nabla\nabla u|^2_g + |\nabla\nabla u|^3_g\big\}. \tag{7.20}$$
The terms $|\nabla\nabla u|_g + |\nabla\nabla u|^2_g + |\nabla\nabla u|^3_g$ can be absorbed into $|\nabla\nabla u|^4_g$ by Young's inequality. For $A\gg1$, we obtain (7.4).

7.2 Third order term

Lemma 6 Let $u\in\Upsilon_k$ be a $C^5(X)$ function solving equation (2.2). Then
$$F^{p\bar q}\nabla_p\nabla_{\bar q}\big\{(|\nabla\bar\nabla u|^2_g + \eta)|\nabla\bar\nabla\nabla u|^2_g\big\} \ge \frac{1}{16}|\nabla\bar\nabla\nabla u|^4_g - C|\nabla\nabla\nabla u|_g\Big\{|\nabla\bar\nabla\nabla u|_g|\nabla\nabla u|_g + |\nabla\bar\nabla\nabla u|_g + |\nabla\nabla u|_g\Big\} - C\Big\{|\nabla\bar\nabla\nabla u|^2_g|\nabla\nabla u|^2_g + |\nabla\bar\nabla\nabla u|^2_g|\nabla\nabla u|_g + |\nabla\bar\nabla\nabla u|_g|\nabla\nabla u|^2_g + |\nabla\bar\nabla\nabla u|_g|\nabla\nabla u|_g + 1\Big\}, \tag{7.21}$$
where $C$ only depends on $(X,\hat\omega)$, $\alpha'$, $k$, $\gamma$, $\|\rho\|_{C^5(X,\hat\omega)}$ and $\|\mu\|_{C^3(X)}$.

To start this computation, we differentiate (6.7):
$$F^{p\bar q}\nabla_i\nabla_p\nabla_{\bar q}u_{\bar\ell j} = -\alpha'\nabla_i(\sigma^{p\bar q,r\bar s}_{k+1})\nabla_ju_{\bar qp}\nabla_{\bar\ell}u_{\bar sr} - \alpha'\sigma^{p\bar q,r\bar s}_{k+1}\nabla_i\nabla_ju_{\bar qp}\nabla_{\bar\ell}u_{\bar sr} - \alpha'\sigma^{p\bar q,r\bar s}_{k+1}\nabla_ju_{\bar qp}\nabla_i\nabla_{\bar\ell}u_{\bar sr} + \nabla_i\big[-F^{p\bar q}u_p\nabla_{\bar\ell}u_{\bar qj} + F^{p\bar q}u_j\nabla_{\bar\ell}u_{\bar qp}\big] + \nabla_i\big[-F^{p\bar q}u_{\bar q}\nabla_pu_{\bar\ell j} + F^{p\bar q}u_{\bar\ell}\nabla_pu_{\bar qj}\big] + \nabla_i\big[F^{p\bar q}\hat R_{\bar qp\bar\ell}{}^{\bar\lambda}u_{\bar\lambda j} - F^{p\bar q}\hat R_{\bar\ell j}{}^\lambda{}_pu_{\bar q\lambda}\big]$$
$$-\,k\nabla_i\Big[g^{p\bar q}u_{\bar q}\nabla_ju_{\bar\ell p} + g^{p\bar q}u_p\nabla_{\bar\ell}u_{\bar qj} + g^{p\bar q}\nabla_j\nabla_pu\,\nabla_{\bar\ell}\nabla_{\bar q}u + g^{p\bar q}u_{\bar\ell p}u_{\bar qj} + g^{p\bar q}u_{\bar q}\hat R_{\bar\ell j}{}^\lambda{}_pu_\lambda - g^{p\bar q}u_{\bar q}u_{\bar\ell j}u_p\Big] + \nabla_i\big[\alpha'(k-\gamma)(1+\gamma)e^{-(1+\gamma)u}a^{p\bar q}u_{\bar\ell}\nabla_ju_{\bar qp}\big] - \nabla_i\big[\alpha'(k-\gamma)e^{-(1+\gamma)u}\nabla_{\bar\ell}a^{p\bar q}\nabla_ju_{\bar qp}\big] - \nabla_i\nabla_{\bar\ell}E_j. \tag{7.22}$$
Our conventions (4.3) imply the following commutator identities for any tensor $W_{\bar kj}$.
$$\nabla_p\nabla_{\bar q}W_{\bar kj} = \nabla_{\bar q}\nabla_pW_{\bar kj} + R_{\bar qp\bar k}{}^{\bar\lambda}W_{\bar\lambda j} - R_{\bar qp}{}^\lambda{}_jW_{\bar k\lambda}, \tag{7.23}$$
$$\nabla_p\nabla_{\bar q}\nabla_iW_{\bar kj} = \nabla_i\nabla_p\nabla_{\bar q}W_{\bar kj} + T^\lambda_{ip}\nabla_\lambda W_{\bar kj} - \nabla_p\big[R_{\bar qi\bar k}{}^{\bar\lambda}W_{\bar\lambda j} - R_{\bar qi}{}^\lambda{}_jW_{\bar k\lambda}\big]. \tag{7.24}$$
Thus commuting derivatives gives
$$F^{p\bar q}\nabla_p\nabla_{\bar q}\nabla_iu_{\bar kj} = F^{p\bar q}\nabla_i\nabla_p\nabla_{\bar q}u_{\bar kj} + F^{p\bar q}u_i\nabla_p\nabla_{\bar q}u_{\bar kj} - F^{p\bar q}u_p\nabla_i\nabla_{\bar q}u_{\bar kj} + F^{p\bar q}\nabla_p\big[\hat R_{\bar qi}{}^\lambda{}_ju_{\bar k\lambda} - \hat R_{\bar qi\bar k}{}^{\bar\lambda}u_{\bar\lambda j}\big]. \tag{7.25}$$
We compute the expression for $F^{p\bar q}\nabla_p\nabla_{\bar q}$ acting on $|\nabla\bar\nabla\nabla u|^2_g$, and exchange covariant derivatives to obtain
$$F^{p\bar q}\nabla_p\nabla_{\bar q}|\nabla\bar\nabla\nabla u|^2_g = 2{\rm Re}\{g^{i\bar d}g^{a\bar k}g^{j\bar b}F^{p\bar q}\nabla_p\nabla_{\bar q}\nabla_iu_{\bar kj}\nabla_{\bar d}u_{\bar ba}\} + F^{p\bar q}g^{a\bar d}g^{e\bar b}g^{c\bar f}\nabla_p\nabla_au_{\bar bc}\nabla_{\bar q}\nabla_{\bar d}u_{\bar fe} + F^{p\bar q}g^{a\bar d}g^{e\bar b}g^{c\bar f}\nabla_a\nabla_{\bar q}u_{\bar bc}\nabla_{\bar d}\nabla_pu_{\bar fe} + F^{p\bar q}g^{a\bar d}g^{e\bar b}g^{c\bar f}\nabla_a\nabla_{\bar q}u_{\bar bc}R_{\bar dp\bar f}{}^{\bar\lambda}u_{\bar\lambda e} - F^{p\bar q}g^{a\bar d}g^{e\bar b}g^{c\bar f}\nabla_a\nabla_{\bar q}u_{\bar bc}R_{\bar dp}{}^\lambda{}_eu_{\bar f\lambda}$$
$$-\,F^{p\bar q}g^{a\bar d}g^{e\bar b}g^{c\bar f}R_{\bar qa\bar b}{}^{\bar\lambda}u_{\bar\lambda c}\nabla_p\nabla_{\bar d}u_{\bar fe} + F^{p\bar q}g^{a\bar d}g^{e\bar b}g^{c\bar f}R_{\bar qa}{}^\lambda{}_cu_{\bar b\lambda}\nabla_p\nabla_{\bar d}u_{\bar fe} + g^{a\bar d}g^{e\bar b}g^{c\bar f}\nabla_au_{\bar bc}F^{p\bar q}R_{\bar qp\bar d}{}^{\bar\lambda}\nabla_{\bar\lambda}u_{\bar fe} + g^{a\bar d}g^{e\bar b}g^{c\bar f}\nabla_au_{\bar bc}F^{p\bar q}R_{\bar qp\bar f}{}^{\bar\lambda}\nabla_{\bar d}u_{\bar\lambda e} - g^{a\bar d}g^{e\bar b}g^{c\bar f}\nabla_au_{\bar bc}F^{p\bar q}R_{\bar qp}{}^\lambda{}_e\nabla_{\bar d}u_{\bar f\lambda}. \tag{7.26}$$
Substituting (7.22) and (7.25) into (7.26), and using Lemma 2, we have
$$F^{p\bar q}\nabla_p\nabla_{\bar q}|\nabla\bar\nabla\nabla u|^2_g \ge (1-2^{-6})|\nabla\nabla\bar\nabla\nabla u|^2_g + (1-2^{-6})|\nabla\bar\nabla\nabla\bar\nabla u|^2_g - 2\alpha'{\rm Re}\{g^{i\bar d}g^{a\bar k}g^{j\bar b}\nabla_i(\sigma^{p\bar q,r\bar s}_{k+1})\nabla_ju_{\bar qp}\nabla_{\bar k}u_{\bar sr}\nabla_{\bar d}u_{\bar ba}\} - 2\alpha'{\rm Re}\{g^{i\bar d}g^{a\bar k}g^{j\bar b}\sigma^{p\bar q,r\bar s}_{k+1}\nabla_i\nabla_ju_{\bar qp}\nabla_{\bar k}u_{\bar sr}\nabla_{\bar d}u_{\bar ba}\} - 2\alpha'{\rm Re}\{g^{i\bar d}g^{a\bar k}g^{j\bar b}\sigma^{p\bar q,r\bar s}_{k+1}\nabla_ju_{\bar qp}\nabla_i\nabla_{\bar k}u_{\bar sr}\nabla_{\bar d}u_{\bar ba}\}$$
$$-\,C\Big\{(|\nabla\bar\nabla\nabla\bar\nabla u|_g + |\nabla\nabla\bar\nabla\nabla u|_g)|\nabla\bar\nabla\nabla u|_g + |\nabla\bar\nabla\nabla\bar\nabla u|_g + (|\nabla\nabla\nabla u|_g + |\bar\nabla\nabla\nabla u|_g + 1)|\nabla\nabla u|_g|\nabla\bar\nabla\nabla u|_g + |\nabla\bar\nabla\nabla u|^3_g + |\nabla\bar\nabla\nabla u|^2_g + |\nabla\bar\nabla\nabla u|_g\Big\} - 2{\rm Re}\{g^{i\bar d}g^{a\bar k}g^{j\bar b}\nabla_i\nabla_{\bar k}E_j\nabla_{\bar d}u_{\bar ba}\}. \tag{7.27}$$
For the following steps, we will use that $|\alpha'|^{1/k}|\nabla\bar\nabla u|_g \le \tau^{1/k}$ for any $u\in\Upsilon_k$, where $\tau = (C^k_{n-1})^{-1}2^{-7}$.
We also recall that we use the notation $C^\ell_m = \frac{m!}{\ell!(m-\ell)!}$. If $k>1$, we can estimate
$$2|\alpha'g^{i\bar d}g^{a\bar\ell}g^{j\bar b}\nabla_i(\sigma^{p\bar q,r\bar s}_{k+1})\nabla_ju_{\bar qp}\nabla_{\bar\ell}u_{\bar sr}\nabla_{\bar d}u_{\bar ba}| \le 2|\alpha'|C^{k-2}_{n-3}|\nabla\bar\nabla u|^{k-2}_g|\nabla\bar\nabla\nabla u|^4_g \le (2C^k_{n-1}\tau)|\alpha'|^{2/k}\tau^{-2/k}|\nabla\bar\nabla\nabla u|^4_g = 2^{-6}|\alpha'|^{2/k}\tau^{-2/k}|\nabla\bar\nabla\nabla u|^4_g. \tag{7.28}$$
We used $C^{k-2}_{n-3}\le C^k_{n-1}$. If $k=1$, the term on the left-hand side vanishes and the inequality still holds. Using the same ideas, we can also estimate
$$-2\alpha'{\rm Re}\{g^{i\bar d}g^{a\bar\ell}g^{j\bar b}\sigma^{p\bar q,r\bar s}_{k+1}\nabla_i\nabla_ju_{\bar qp}\nabla_{\bar\ell}u_{\bar sr}\nabla_{\bar d}u_{\bar ba}\} - 2\alpha'{\rm Re}\{g^{i\bar d}g^{a\bar\ell}g^{j\bar b}\sigma^{p\bar q,r\bar s}_{k+1}\nabla_ju_{\bar qp}\nabla_i\nabla_{\bar\ell}u_{\bar sr}\nabla_{\bar d}u_{\bar ba}\} \ge -2|\alpha'|C^{k-1}_{n-2}|\nabla\bar\nabla u|^{k-1}_g|\nabla\bar\nabla\nabla u|^2_g\Big\{|\nabla\nabla\bar\nabla\nabla u|_g + |\nabla\bar\nabla\nabla\bar\nabla u|_g\Big\} \ge -(2C^k_{n-1}\tau)|\alpha'|^{1/k}\tau^{-1/k}|\nabla\bar\nabla\nabla u|^2_g\Big\{|\nabla\nabla\bar\nabla\nabla u|_g + |\nabla\bar\nabla\nabla\bar\nabla u|_g\Big\} = -2^{-6}|\alpha'|^{1/k}\tau^{-1/k}|\nabla\bar\nabla\nabla u|^2_g\Big\{|\nabla\nabla\bar\nabla\nabla u|_g + |\nabla\bar\nabla\nabla\bar\nabla u|_g\Big\}. \tag{7.29}$$
The perturbative terms can be estimated roughly by using the definition (5.12) of $E_j$ and keeping track of the orders of terms that we do not yet control:
$$-2{\rm Re}\{g^{i\bar d}g^{a\bar k}g^{j\bar b}\nabla_i\nabla_{\bar k}E_j\nabla_{\bar d}u_{\bar ba}\} \ge -C|\nabla\bar\nabla\nabla u|_g\Big\{|\nabla\bar\nabla\nabla\bar\nabla u|_g + |\nabla\bar\nabla\nabla\nabla u|_g + (|\nabla\bar\nabla\nabla u|_g + |\nabla\nabla\nabla u|_g)|\nabla\nabla u|_g + |\nabla\bar\nabla\nabla u|_g + |\nabla\nabla\nabla u|_g + |\nabla\nabla u|^2_g + |\nabla\nabla u|_g + 1\Big\}. \tag{7.30}$$
Applying these estimates leads to
$$F^{p\bar q}\nabla_p\nabla_{\bar q}|\nabla\bar\nabla\nabla u|^2_g \ge (1-2^{-6})\big[|\nabla\nabla\bar\nabla\nabla u|^2_g + |\nabla\bar\nabla\nabla\bar\nabla u|^2_g\big] - 2^{-6}|\alpha'|^{2/k}\tau^{-2/k}|\nabla\bar\nabla\nabla u|^4_g - 2^{-6}|\alpha'|^{1/k}\tau^{-1/k}|\nabla\bar\nabla\nabla u|^2_g\big[|\nabla\nabla\bar\nabla\nabla u|_g + |\nabla\bar\nabla\nabla\bar\nabla u|_g\big] - CP, \tag{7.31}$$
where
$$P = |\nabla\bar\nabla\nabla\bar\nabla u|_g|\nabla\bar\nabla\nabla u|_g + |\nabla\nabla\bar\nabla\nabla u|_g|\nabla\bar\nabla\nabla u|_g + |\nabla\bar\nabla\nabla\bar\nabla u|_g + |\nabla\nabla\nabla u|_g|\nabla\bar\nabla\nabla u|_g|\nabla\nabla u|_g + |\nabla\nabla\nabla u|_g|\nabla\bar\nabla\nabla u|_g + |\nabla\bar\nabla\nabla u|^2_g|\nabla\nabla u|_g + |\nabla\bar\nabla\nabla u|_g|\nabla\nabla u|^2_g + |\nabla\bar\nabla\nabla u|_g|\nabla\nabla u|_g + |\nabla\nabla\nabla u|_g|\nabla\nabla u|_g + |\nabla\bar\nabla\nabla u|^3_g + |\nabla\bar\nabla\nabla u|^2_g + |\nabla\bar\nabla\nabla u|_g. \tag{7.32}$$
We used the fact that the difference between $|\nabla\bar\nabla\nabla\nabla u|_g$ and $|\nabla\nabla\bar\nabla\nabla u|_g$ is a lower order term according to the commutation formula (7.23).
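The scaling in (7.28) and (7.29) is worth spelling out (a bookkeeping remark, using only (7.3) and the definition of $\tau$): each factor of $|\nabla\bar\nabla u|_g$ costs at most $|\alpha'|^{-1/k}\tau^{1/k}$, so for instance
$$2|\alpha'|\,C^{k-2}_{n-3}\,|\nabla\bar\nabla u|^{k-2}_g \le 2C^k_{n-1}\,|\alpha'|\,\big(|\alpha'|^{-1/k}\tau^{1/k}\big)^{k-2} = 2C^k_{n-1}\tau\cdot|\alpha'|^{2/k}\tau^{-2/k} = 2^{-6}|\alpha'|^{2/k}\tau^{-2/k},$$
since $2C^k_{n-1}\tau = 2^{-6}$; the estimate (7.29) is the same computation with $k-1$ factors in place of $k-2$.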
Next, we apply (6.5) to obtain
$$F^{p\bar q}\nabla_p\nabla_{\bar q}|\nabla\bar\nabla u|^2_g \ge |\nabla\bar\nabla\nabla u|^2_g - C|\nabla\bar\nabla\nabla u|_g - C|\nabla\nabla u|^2_g - C|\nabla\nabla u|_g - C. \tag{7.33}$$
We directly compute
$$F^{p\bar q}\nabla_p\nabla_{\bar q}\big\{(|\nabla\bar\nabla u|^2_g + \eta)|\nabla\bar\nabla\nabla u|^2_g\big\} = |\nabla\bar\nabla\nabla u|^2_gF^{p\bar q}\nabla_p\nabla_{\bar q}|\nabla\bar\nabla u|^2_g + (|\nabla\bar\nabla u|^2_g + \eta)F^{p\bar q}\nabla_p\nabla_{\bar q}|\nabla\bar\nabla\nabla u|^2_g + 2{\rm Re}\{F^{p\bar q}\nabla_p|\nabla\bar\nabla u|^2_g\nabla_{\bar q}|\nabla\bar\nabla\nabla u|^2_g\}. \tag{7.34}$$
We can estimate
$$2{\rm Re}\{F^{p\bar q}\nabla_p|\nabla\bar\nabla u|^2_g\nabla_{\bar q}|\nabla\bar\nabla\nabla u|^2_g\} \ge -4(1+2^{-6})|\nabla\bar\nabla u|_g|\nabla\bar\nabla\nabla u|^2_g|\nabla\bar\nabla\nabla\bar\nabla u|_g - 4(1+2^{-6})|\nabla\bar\nabla u|_g|\nabla\bar\nabla\nabla u|^2_g|\nabla\nabla\bar\nabla\nabla u|_g \ge -4(1+2^{-6})|\alpha'|^{-1/k}\tau^{1/k}|\nabla\bar\nabla\nabla u|^2_g|\nabla\bar\nabla\nabla\bar\nabla u|_g - 4(1+2^{-6})|\alpha'|^{-1/k}\tau^{1/k}|\nabla\bar\nabla\nabla u|^2_g|\nabla\nabla\bar\nabla\nabla u|_g. \tag{7.35}$$
Combining (7.31), (7.33), (7.35) with (7.34), setting $\eta = m|\alpha'|^{-2/k}\tau^{2/k}$ and using $|\nabla\bar\nabla u|^2_g \le |\alpha'|^{-2/k}\tau^{2/k}$ leads to
$$F^{p\bar q}\nabla_p\nabla_{\bar q}\big\{(|\nabla\bar\nabla u|^2_g + \eta)|\nabla\bar\nabla\nabla u|^2_g\big\} \ge m(1-2^{-6})|\alpha'|^{-2/k}\tau^{2/k}\Big\{|\nabla\nabla\bar\nabla\nabla u|^2_g + |\nabla\bar\nabla\nabla\bar\nabla u|^2_g\Big\} - 4(1+2^{-6})|\alpha'|^{-1/k}\tau^{1/k}|\nabla\bar\nabla\nabla u|^2_g\Big\{|\nabla\nabla\bar\nabla\nabla u|_g + |\nabla\bar\nabla\nabla\bar\nabla u|_g\Big\} - 2^{-6}(m+1)|\alpha'|^{-1/k}\tau^{1/k}|\nabla\bar\nabla\nabla u|^2_g\Big\{|\nabla\nabla\bar\nabla\nabla u|_g + |\nabla\bar\nabla\nabla\bar\nabla u|_g\Big\} + \big\{1 - 2^{-6}(m+1)\big\}|\nabla\bar\nabla\nabla u|^4_g - C|\nabla\bar\nabla\nabla u|^2_g|\nabla\nabla u|^2_g - CP. \tag{7.36}$$
Using $2ab\le a^2+b^2$, we estimate
$$4(1+2^{-6})|\alpha'|^{-1/k}\tau^{1/k}|\nabla\bar\nabla\nabla u|^2_g\{|\nabla\bar\nabla\nabla\bar\nabla u|_g + |\nabla\nabla\bar\nabla\nabla u|_g\} \le 16(1+2^{-6})^2|\alpha'|^{-2/k}\tau^{2/k}\{|\nabla\bar\nabla\nabla\bar\nabla u|^2_g + |\nabla\nabla\bar\nabla\nabla u|^2_g\} + \frac12|\nabla\bar\nabla\nabla u|^4_g, \tag{7.37}$$
and
$$2^{-6}(m+1)|\alpha'|^{-1/k}\tau^{1/k}|\nabla\bar\nabla\nabla u|^2_g\{|\nabla\nabla\bar\nabla\nabla u|_g + |\nabla\bar\nabla\nabla\bar\nabla u|_g\} \le \frac12|\alpha'|^{-2/k}\tau^{2/k}\{|\nabla\nabla\bar\nabla\nabla u|^2_g + |\nabla\bar\nabla\nabla\bar\nabla u|^2_g\} + 2^{-12}(m+1)^2|\nabla\bar\nabla\nabla u|^4_g. \tag{7.38}$$
The main inequality becomes
$$F^{p\bar q}\nabla_p\nabla_{\bar q}\big\{(|\nabla\bar\nabla u|^2_g + \eta)|\nabla\bar\nabla\nabla u|^2_g\big\} \ge \Big\{m(1-2^{-6}) - 16(1+2^{-6})^2 - \frac12\Big\}|\alpha'|^{-2/k}\tau^{2/k}\Big\{|\nabla\nabla\bar\nabla\nabla u|^2_g + |\nabla\bar\nabla\nabla\bar\nabla u|^2_g\Big\} + \Big\{\frac12 - 2^{-6}(m+1) - 2^{-12}(m+1)^2\Big\}|\nabla\bar\nabla\nabla u|^4_g - C|\nabla\bar\nabla\nabla u|^2_g|\nabla\nabla u|^2_g - CP. \tag{7.39}$$
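Since $m$ will be set equal to $18$ below, we record the elementary arithmetic behind the two bracketed coefficients in (7.39), as a direct check that both leading terms are positive for that choice:
$$18(1-2^{-6}) - 16(1+2^{-6})^2 - \tfrac12 \approx 17.719 - 16.504 - 0.5 > 0, \qquad \tfrac12 - 2^{-6}\cdot19 - 2^{-12}\cdot19^2 \approx 0.5 - 0.297 - 0.088 > 0.$$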
Next, we estimate terms on the first line in the definition (7.32) of $P$:
$$C\{|\nabla\bar\nabla\nabla\bar\nabla u|_g + |\nabla\nabla\bar\nabla\nabla u|_g\}|\nabla\bar\nabla\nabla u|_g \le \frac{1}{16}|\alpha'|^{-2/k}\tau^{2/k}\{|\nabla\bar\nabla\nabla\bar\nabla u|^2_g + |\nabla\nabla\bar\nabla\nabla u|^2_g\} + 8C^2|\alpha'|^{2/k}\tau^{-2/k}|\nabla\bar\nabla\nabla u|^2_g \tag{7.40}$$
and
$$C|\nabla\bar\nabla\nabla\bar\nabla u|_g \le \frac{1}{16}|\alpha'|^{-2/k}\tau^{2/k}|\nabla\bar\nabla\nabla\bar\nabla u|^2_g + 4C^2|\alpha'|^{2/k}\tau^{-2/k}, \tag{7.41}$$
and absorb $|\nabla\bar\nabla\nabla u|^3_g + |\nabla\bar\nabla\nabla u|^2_g + |\nabla\bar\nabla\nabla u|_g$ into $2^{-12}|\nabla\bar\nabla\nabla u|^4_g$ plus a large constant. We can now let $m = 18$ and drop the positive fourth order terms. We are left with
$$F^{p\bar q}\nabla_p\nabla_{\bar q}\big\{(|\nabla\bar\nabla u|^2_g + \eta)|\nabla\bar\nabla\nabla u|^2_g\big\} \ge \Big\{\frac12 - 2^{-6}(m+1) - 2^{-12}(m+1)^2 - 2^{-12}\Big\}|\nabla\bar\nabla\nabla u|^4_g - C|\nabla\nabla\nabla u|_g\Big\{|\nabla\bar\nabla\nabla u|_g|\nabla\nabla u|_g + |\nabla\bar\nabla\nabla u|_g + |\nabla\nabla u|_g\Big\} - C\Big\{|\nabla\bar\nabla\nabla u|^2_g|\nabla\nabla u|^2_g + |\nabla\bar\nabla\nabla u|^2_g|\nabla\nabla u|_g + |\nabla\bar\nabla\nabla u|_g|\nabla\nabla u|^2_g + |\nabla\bar\nabla\nabla u|_g|\nabla\nabla u|_g + 1\Big\}. \tag{7.42}$$
Since $m = 18$,
$$\frac12 - 2^{-6}(m+1) - 2^{-12}(m+1)^2 - 2^{-12} \ge 2^{-4}, \tag{7.43}$$
and we obtain (7.21).

7.3 Using the test function

We have computed $F^{p\bar q}\nabla_p\nabla_{\bar q}$ acting on the two terms of the test function $G$ defined in (7.2). Combining (7.4) and (7.21),
$$F^{p\bar q}\nabla_p\nabla_{\bar q}G \ge \frac{1}{32}|\nabla\bar\nabla\nabla u|^4_g + \frac{AB}{2}|\nabla\nabla\nabla u|^2_g + (1-2^{-5})B|\nabla\nabla u|^4_g - C\Big\{|\nabla\nabla\nabla u|_g|\nabla\bar\nabla\nabla u|_g|\nabla\nabla u|_g + |\nabla\nabla\nabla u|_g|\nabla\bar\nabla\nabla u|_g + |\nabla\nabla\nabla u|_g|\nabla\nabla u|_g + |\nabla\bar\nabla\nabla u|^2_g|\nabla\nabla u|^2_g + |\nabla\bar\nabla\nabla u|^2_g|\nabla\nabla u|_g + |\nabla\bar\nabla\nabla u|_g|\nabla\nabla u|^2_g + |\nabla\bar\nabla\nabla u|_g|\nabla\nabla u|_g\Big\} - C(A,B).$$
The negative terms are readily split and absorbed into the positive terms on the first line. For example,
$$C|\nabla\nabla\nabla u|_g|\nabla\bar\nabla\nabla u|_g|\nabla\nabla u|_g \le |\nabla\nabla\nabla u|^2_g + \frac{C^2}{4}|\nabla\bar\nabla\nabla u|^2_g|\nabla\nabla u|^2_g, \tag{7.44}$$
$$C|\nabla\bar\nabla\nabla u|^2_g|\nabla\nabla u|^2_g \le 2^{-7}|\nabla\bar\nabla\nabla u|^4_g + 2^5C^2|\nabla\nabla u|^4_g, \tag{7.45}$$
$$C|\nabla\bar\nabla\nabla u|^2_g|\nabla\nabla u|_g \le 2^{-7}|\nabla\bar\nabla\nabla u|^4_g + 2^5C^2|\nabla\nabla u|^2_g, \tag{7.46}$$
$$C|\nabla\bar\nabla\nabla u|_g|\nabla\nabla u|^2_g \le 2^{-7}|\nabla\bar\nabla\nabla u|^2_g + 2^5C^2|\nabla\nabla u|^4_g. \tag{7.47}$$
This leads to
$$F^{p\bar q}\nabla_p\nabla_{\bar q}G \ge 2^{-7}|\nabla\bar\nabla\nabla u|^4_g + \Big\{\frac{AB}{2} - 1\Big\}|\nabla\nabla\nabla u|^2_g + \Big\{\frac B2 - C\Big\}|\nabla\nabla u|^4_g - C(A,B). \tag{7.48}$$
By choosing $A, B\gg1$ to be large, we conclude by the maximum principle that at a point $p$ where $G$ attains its maximum, we have
$$|\nabla\bar\nabla\nabla u|^4_g(p) \le C, \qquad |\nabla\nabla u|^4_g(p) \le C. \tag{7.49}$$
Therefore $|\nabla\bar\nabla\nabla u|_g$ and $|\nabla\nabla u|_g$ are both uniformly bounded.
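For the reader's convenience, we collect the estimates established in Sections 3 through 7 for a solution $u\in\Upsilon_k$ of (2.2) with $\int_X e^u\hat\omega^n = M$:
$$C^{-1}M \le e^u \le CM, \qquad |\nabla u|^2_g \le CM^{-1}, \qquad |\nabla\bar\nabla u|^2_g \le CM^{-1}, \qquad |\nabla\bar\nabla\nabla u|_g + |\nabla\nabla u|_g \le C.$$
These are the a priori estimates (2.9) whose proof, as noted before the statement of Theorem 3, was all that remained to establish Theorem 1.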
7.4 Remark on the case k = 1

In the case of the standard Fu-Yau equation ($k = 1$), to prove Theorem 1 we can instead appeal to a general theorem on concave elliptic PDE and obtain Hölder estimates for the second order derivatives of the solution. To exploit the concave structure, we must rewrite the Fu-Yau equation in the standard form of a complex Hessian equation. Recall that $\hat\sigma_1(\chi)\,\hat\omega^n = n\chi\wedge\hat\omega^{n-1}$ and $\hat\sigma_2(\chi)\,\hat\omega^n = \frac{n(n-1)}{2}\chi^2\wedge\hat\omega^{n-2}$. A direct computation with equation (1.1) gives
$$\hat\sigma_2(e^u\hat\omega + \alpha'e^{-u}\rho + 2\alpha'i\partial\bar\partial u) = \frac{n(n-1)}{2}e^{2u} - 2(n-1)\alpha'e^u|Du|^2_{\hat\omega} - 2(n-1)\alpha'\mu + 2(n-1)(\alpha')^2e^{-u}(a^{j\bar k}u_ju_{\bar k} - b^iu_i - \bar b^{\bar i}u_{\bar i}) + 2(n-1)(\alpha')^2e^{-u}c + (n-1)e^{-u}\hat\sigma_1(\alpha'\rho) + e^{-2u}\hat\sigma_2(\alpha'\rho). \tag{7.50}$$
We note that the right hand side of the equation involves only the given data $\alpha'$, $\rho$, $\mu$, together with $u$ and $Du$. Since $u\in\Upsilon_1$, the $(1,1)$-form $\omega' = e^u\hat\omega + \alpha'e^{-u}\rho + 2\alpha'i\partial\bar\partial u$ is positive definite, and thus both sides of the above equation have a positive lower bound. Moreover, our previous estimates imply that we have uniform a priori estimates on $\|u\|_{C^{1,\beta}(X)}$ for any $0<\beta<1$. The right hand side is therefore bounded in $C^\beta(X)$. Since $\hat\sigma_2^{1/2}(\chi)$ is a concave uniformly elliptic operator on the space of admissible solutions, we may apply an Evans-Krylov type result of Tosatti-Weinkove-Wang-Yang [30] to conclude $\|u\|_{C^{2,\beta}} \le C$. However, for the general case $k\ge2$, it is impossible to rewrite equation (2.2) in the standard form of a complex Hessian equation, and thus there is no obvious concavity we can use.

Note: Just as we were about to post this paper, the preprint "The Fu-Yau equation in higher dimensions" by J. Chu, L. Huang, and X.H. Zhu appeared on the arXiv as arXiv:1801.09351, in which the existence of a solution of the Fu-Yau equation ($k = 1$) in the $\Gamma_2$ cone is stated. Our result is more precise, as our solution is in the admissible set $\Upsilon_1 \subset \Gamma_2$. Moreover, our method was used to solve a whole family of Fu-Yau Hessian equations, of which the Fu-Yau equation with $k = 1$ is only the simplest example."
},
{
"url": "http://arxiv.org/abs/1711.10697v2",
"title": "Fully non-linear parabolic equations on compact Hermitian manifolds",
"abstract": "A notion of parabolic C-subsolutions is introduced for parabolic equations, extending the theory of C-subsolutions recently developed by B. Guan and more specifically G. Székelyhidi for elliptic equations. The resulting parabolic theory provides a convenient unified approach for the study of many geometric flows.",
"authors": "Duong H. Phong, Dat T. Tô",
"published": "2017-11-29",
"updated": "2017-12-14",
"primary_cat": "math.DG",
"cats": ["math.DG", "math.AP", "math.CV"],
"main_content": "Introduction

Subsolutions play an important role in the theory of partial differential equations. Their existence can be viewed as an indication of the absence of any global obstruction.
Perhaps more importantly, it can imply crucial a priori estimates, as for example in the Dirichlet problem for the complex Monge-Ampère equation [43, 18]. However, for compact manifolds without boundary, it is necessary to extend the notion of subsolution, since the standard notion may be excluded by either the maximum principle or cohomological constraints. Very recently, more flexible and compelling notions of subsolutions have been proposed by Guan [19] and Székelyhidi [50]. In particular, they show that their notions, called C-subsolutions in [50], do imply the existence of solutions and estimates for a wide variety of fully non-linear elliptic equations on Hermitian manifolds. It is natural to consider also the parabolic case. This was done by Guan, Shi, and Sui in [21] for the usual notion of subsolution and for the Dirichlet problem. We now carry this out for the more general notion of C-subsolution on compact Hermitian manifolds, adapting the methods of [19] and especially [50]. As we shall see, the resulting parabolic theory provides a convenient unified approach to the many parabolic equations which have been studied in the literature.

¹The first author was supported in part by the National Science Foundation under NSF Grant DMS-12-66033. The second author was supported by the CFM foundation and an ATUPS travel grant.

Let $(X,\alpha)$ be a compact Hermitian manifold of dimension $n$, $\alpha = i\,\alpha_{\bar kj}dz^j\wedge d\bar z^k > 0$, and let $\chi(z)$ be a real $(1,1)$-form, $\chi = i\,\chi_{\bar kj}(z)dz^j\wedge d\bar z^k$. If $u\in C^2(X)$, let $A[u]$ be the matrix with entries $A[u]^k{}_j = \alpha^{k\bar m}(\chi_{\bar mj} + \partial_j\partial_{\bar m}u)$. We consider the fully nonlinear parabolic equation
$$\partial_t u = F(A[u]) - \psi(z), \tag{1.1}$$
where $F(A)$ is a smooth symmetric function $F(A) = f(\lambda[u])$ of the eigenvalues $\lambda_j[u]$, $1\le j\le n$, of $A[u]$, defined on an open symmetric, convex cone $\Gamma\subset\mathbb R^n$ with vertex at the origin and containing the positive orthant $\Gamma_n$. We shall assume throughout the paper that $f$ satisfies the following conditions:

(1) $f_i > 0$ for all $i$, and $f$ is concave;
(2) $f(\lambda)\to-\infty$ as $\lambda\to\partial\Gamma$;
(3) for any $\sigma < \sup_\Gamma f$ and $\lambda\in\Gamma$, we have $\lim_{t\to\infty}f(t\lambda) > \sigma$.

We shall say that a $C^2$ function $u$ on $X$ is admissible if the vector of eigenvalues of the corresponding matrix $A$ is in $\Gamma$ for any $z\in X$. Fix $T\in(0,\infty]$. To alleviate the terminology, we shall also designate by the same adjective functions in $C^{2,1}(X\times[0,T))$ which are admissible for each fixed $t\in[0,T)$. The following notion of subsolution is an adaptation to the parabolic case of Székelyhidi's [50] notion in the elliptic case:

Definition 1 An admissible function $\underline u\in C^{2,1}(X\times[0,T))$ is said to be a (parabolic) C-subsolution of (1.1) if there exist constants $\delta, K > 0$ so that, for any $(z,t)\in X\times[0,T)$, the condition
$$f(\lambda[\underline u(z,t)] + \mu) - \partial_t\underline u + \tau = \psi(z), \qquad \mu + \delta I\in\Gamma_n, \qquad \tau > -\delta \tag{1.2}$$
implies that $|\mu| + |\tau| < K$. Here $I$ denotes the vector $(1,\cdots,1)$ of eigenvalues of the identity matrix.

We shall see below (§4.1) that this notion is more general than the classical notion, defined by $f(\lambda[\underline u]) - \partial_t\underline u(z,t) > \psi(z,t)$ and studied by Guan-Shi-Sui [21].
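For orientation, it may help to check the structural conditions (1)-(3) on the model example of the Monge-Ampère operator (a standard computation, not needed in the sequel): take $f(\lambda) = \sum_{j=1}^n\log\lambda_j$ on $\Gamma = \Gamma_n$. Then $f_i = \lambda_i^{-1} > 0$ and $f$ is concave, $f(\lambda)\to-\infty$ as $\lambda\to\partial\Gamma_n$, and
$$f(t\lambda) = n\log t + f(\lambda) \to +\infty \quad\text{as } t\to\infty,$$
so conditions (1), (2), and (3) all hold.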
A C-subsolution in the sense of Székelyhidi of the equation $F(A[u]) - \psi = 0$ can be viewed as a parabolic C-subsolution of the equation (1.1) which is time-independent. But more generally, to solve the equation $F(A[u]) - \psi = 0$ by, say, the method of continuity, we must choose a time-dependent deformation of this equation, and we would then need a C-subsolution for each time. The heat equation (1.1) and the above notion of parabolic subsolution can be viewed as a canonical choice of deformation.

To discuss our results, we need a finer classification of non-linear partial differential operators due to Trudinger [61]. Let $\Gamma_\infty$ be the projection of $\Gamma$ onto $\mathbb{R}^{n-1}$,

$$\Gamma_\infty = \{\lambda' = (\lambda_1,\cdots,\lambda_{n-1});\ \lambda = (\lambda_1,\cdots,\lambda_n) \in \Gamma \text{ for some } \lambda_n\} \tag{1.3}$$

and define the function $f_\infty$ on $\Gamma_\infty$ by

$$f_\infty(\lambda') = \lim_{\lambda_n\to\infty} f(\lambda',\lambda_n). \tag{1.4}$$

It is shown in [61] that, as a consequence of the concavity of $f$, the limit is either finite for all $\lambda' \in \Gamma_\infty$ or infinite for all $\lambda' \in \Gamma_\infty$. We shall refer to the first case as the bounded case, and to the second case as the unbounded case. For example, Monge-Ampère flows belong to the unbounded case, while the J-flow and Hessian quotient flows belong to the bounded case. In the unbounded case, any admissible function, and in particular $0$ if $\lambda[\chi] \in \Gamma$, is a C-subsolution in both the elliptic and parabolic cases. We have then:

Theorem 1. Consider the flow (1.1), and assume that $f$ is in the unbounded case. Then for any admissible initial data $u_0$, the flow admits a smooth solution $u(z,t)$ on $[0,\infty)$, and its normalization $\tilde u$ defined by

$$\tilde u := u - \frac{1}{V}\int_X u\,\alpha^n,\qquad V = \int_X \alpha^n, \tag{1.5}$$

converges in $C^\infty$ to a function $\tilde u_\infty$ satisfying the following equation for some constant $c$,

$$F(A[\tilde u_\infty]) = \psi(z) + c. \tag{1.6}$$

The situation is more complicated when $f$ belongs to the bounded case:

Theorem 2. Consider the flow (1.1), and assume that it admits a subsolution $\underline{u}$ on $X\times[0,\infty)$, but that $f$ is in the bounded case. Then for any admissible data $u_0$, the equation admits a smooth solution $u(z,t)$ on $(0,\infty)$. Let $\tilde u$ be the normalization of the solution $u$, defined as before by (1.5). Assume that either one of the following two conditions holds:

(a) the initial data and the subsolution satisfy

$$\partial_t\underline{u} \ge \sup_X\big(F(A[u_0]) - \psi\big); \tag{1.7}$$

(b) or there exists a function $h(t)$ with $h'(t) \le 0$ so that

$$\sup_X\big(u(t) - h(t) - \underline{u}(t)\big) \ge 0 \tag{1.8}$$

and the Harnack inequality

$$\sup_X\big(u(t) - h(t)\big) \le -C_1\inf_X\big(u(t) - h(t)\big) + C_2 \tag{1.9}$$

holds for some constants $C_1, C_2 > 0$ independent of time.

Then $\tilde u$ converges in $C^\infty$ to a function $\tilde u_\infty$ satisfying (1.6) for some constant $c$.

The essence of the above theorems resides in the a priori estimates which are established in §2. The $C^1$ and $C^2$ estimates can be adapted from the corresponding estimates for C-subsolutions in the elliptic case, but the $C^0$ estimate turns out to be more subtle. Following Blocki [1] and Székelyhidi [50], we obtain $C^0$ estimates from the Alexandrov-Bakelman-Pucci (ABP) inequality, using this time a parabolic version of ABP due to K. Tso [62].
However, it turns out that the existence of a C-subsolution gives only partial information on the oscillation of $u$, and what can actually be estimated has to be formulated with some care, leading to the distinction between the cases of $f$ bounded and unbounded, as well as to Theorem 2. The conditions (a) and especially (b) in Theorem 2 may seem impractical at first sight, since they involve the initial data as well as the long-time behavior of the solution. Nevertheless, as we shall discuss in greater detail in §4, Theorems 1 and 2 can be successfully applied to a wide range of parabolic flows on Hermitian manifolds previously studied in the literature, including the Kähler-Ricci flow, the Chern-Ricci flow, the J-flow, the Hessian flows, the quotient Hessian flows, and mixed Hessian flows. We illustrate this by deriving in §4, as a corollary of Theorem 2, a convergence theorem for a mixed Hessian flow, which seems new to the best of our knowledge. It answers a question raised for general $1 \le \ell < k \le n$ by Fang-Lai-Ma [12] (see also Sun [44, 45, 46, 48]), and extends the solution obtained for $k = n$ by Collins-Székelyhidi [7] and subsequently also by Sun [48, 49]:

Theorem 3. Assume that $(X,\alpha)$ is a compact Kähler $n$-manifold, and fix $1 \le \ell < k \le n$. Fix a closed $(1,1)$-form $\chi$ which is $k$-positive and non-negative constants $c_j$, and assume that there exists a form $\chi' = \chi + i\partial\bar\partial\underline{u}$ which is a closed $k$-positive form and satisfies

$$kc\,(\chi')^{k-1}\wedge\alpha^{n-k} - \sum_{j=1}^{\ell} j\,c_j\,(\chi')^{j-1}\wedge\alpha^{n-j} > 0, \tag{1.10}$$

in the sense of positivity of $(n-1,n-1)$-forms. Here the constant $c$ is given by

$$c\,[\chi^k][\alpha^{n-k}] = \sum_{j=1}^{\ell} c_j\,[\chi^j][\alpha^{n-j}]. \tag{1.11}$$

Then the flow

$$\partial_t u = -\frac{\sum_{j=1}^{\ell} c_j\,\sigma_j(\lambda(A[u]))}{\sigma_k(\lambda(A[u]))} + c,\qquad u(\cdot,0) = 0, \tag{1.12}$$

admits a solution for all time which converges smoothly to a function $u_\infty$ as $t \to \infty$. The form $\omega = \chi + i\partial\bar\partial u_\infty$ is $k$-positive and satisfies the equation

$$c\,\omega^k\wedge\alpha^{n-k} = \sum_{j=1}^{\ell} c_j\,\omega^j\wedge\alpha^{n-j}. \tag{1.13}$$

Regarding the condition (a) in Theorem 2, we note that natural geometric flows whose long-time behavior may be very sensitive to the initial data are appearing increasingly frequently in non-Kähler geometry. A prime example is the Anomaly flow, studied in [33, 36, 37, 38, 13]. Finally, Theorem 2 will also be seen to imply as a corollary a theorem of Székelyhidi ([50], Proposition 26), and the condition for solvability there will be seen to correspond to condition (a) in Theorem 2. This suggests in particular that some additional conditions for the convergence of the flow cannot be dispensed with altogether.

2 A Priori Estimates

2.1 $C^0$ Estimates

We begin with the $C^0$ estimates implied by the existence of a C-subsolution for the parabolic flow (1.1). One of the key results of [50] was that the existence of a subsolution in the elliptic case implies a uniform bound for the oscillation of the unknown function $u$.
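Schematically (our paraphrase of the elliptic $C^0$ estimate of [50], recorded here only for orientation):

$$\underline{u}\ \text{a C-subsolution of}\ F(A[u]) = \psi\ \Longrightarrow\ \operatorname{osc}_X u := \sup_X u - \inf_X u \le C,$$

with $C$ depending only on the background data $(X,\alpha,\chi)$, the subsolution $\underline{u}$, and $\|\psi\|_{L^\infty}$.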
In the parabolic case, we have only the following weaker estimate:

Lemma 1. Assume that the equation (1.1) admits a parabolic C-subsolution $\underline{u}$ on $X\times[0,T)$ in the sense of Definition 1, and that there exists a $C^1$ function $h(t)$ with $h'(t) \le 0$ and

$$\sup_X\big(u(\cdot,t) - \underline{u}(\cdot,t) - h(t)\big) \ge 0. \tag{2.1}$$

Then there exists a constant $C$, depending only on $\chi$, $\alpha$, $\delta$, $\|u_0\|_{C^0}$, and $\|i\partial\bar\partial\underline{u}\|_{L^\infty}$, so that

$$u(\cdot,t) - \underline{u}(\cdot,t) - h(t) \ge -C \quad\text{for all } (z,t) \in X\times[0,T). \tag{2.2}$$

Proof. First, note that by Lemma 6, proven later in §3, the function $\partial_t u$ is uniformly bounded for all time by a constant depending only on $\psi$ and the initial data $u_0$. Integrating this estimate on $[0,\delta]$ gives a bound for $|u|$ on $X\times[0,\delta]$ depending only on $\psi$, $u_0$ and $\delta$. Thus we need only consider the range $t \ge \delta$. Next, the fact that $\underline{u}$ is a parabolic subsolution and the condition $h'(t) \le 0$ imply that $\underline{u} + h(t)$ is a parabolic subsolution as well. So it suffices to prove the desired inequality with $h(t) = 0$, as long as the constants involved do not depend on $\partial_t\underline{u}$. Fix now any $T' < T$, set $v = u - \underline{u}$, and let

$$L = \min_{X\times[0,T']} v = v(z_0,t_0) \tag{2.3}$$

for some $(z_0,t_0) \in X\times[0,T']$. We shall show that $L$ can be bounded from below by a constant depending only on the initial data $u_0$ and independent of $T'$. We can assume that $t_0 > 0$, otherwise we are already done. Let $(z^1,\cdots,z^n)$ be local holomorphic coordinates for $X$ centered at $z_0$, let $U = \{z;\ |z| < 1\}$, and define the following function on the set $\mathcal{U} = U\times\{t;\ -\delta \le 2(t-t_0) < \delta\}$,

$$w = v + \frac{\delta^2}{4}|z|^2 + |t-t_0|^2, \tag{2.4}$$

where $\delta > 0$ is the constant appearing in the definition of subsolutions. Clearly $w$ attains its minimum on $\mathcal{U}$ at $(z_0,t_0)$, and $w \ge \min_{\mathcal{U}} w + \frac{1}{4}\delta^2$ on the parabolic boundary of $\mathcal{U}$. We can thus apply the following parabolic version of the Alexandrov-Bakelman-Pucci inequality, due to K. Tso ([62], Proposition 2.1, with the function $u$ there set to $u = -w + \min_{\mathcal{U}} w + \frac{\delta^2}{4}$):

Let $\mathcal{U}$ be the subset of $\mathbb{R}^{2n+1}$ defined above, and let $w : \mathcal{U} \to \mathbb{R}$ be a smooth function which attains its minimum at $(0,t_0)$, and $w \ge \min_{\mathcal{U}} w + \frac{1}{4}\delta^2$ on the parabolic boundary of $\mathcal{U}$. Define the set

$$S := \Big\{(x,t) \in \mathcal{U} :\ w(x,t) \le w(z_0,t_0) + \frac{1}{4}\delta^2,\ |D_x w(x,t)| < \frac{\delta^2}{8},\ \text{and}\ w(y,s) \ge w(x,t) + D_x w(x,t)\cdot(y-x),\ \forall y \in U,\ s \le t\Big\}. \tag{2.5}$$

Then there is a constant $C = C(n) > 0$ so that

$$C\,\delta^{4n+2} \le \int_S (-w_t)\det(w_{ij})\,dx\,dt.$$

Returning to the proof of Lemma 1, we claim that, on the set $S$, we have

$$|w_t| + \det(D^2_{jk}w) \le C \tag{2.6}$$

for some constant depending only on $\delta$ and $\|i\partial\bar\partial\underline{u}\|_{L^\infty}$. Indeed, let

$$\mu = \lambda[u] - \lambda[\underline{u}],\qquad \tau = -\partial_t u + \partial_t\underline{u}. \tag{2.7}$$

Along $S$, we have $D^2_{ij}w \ge 0$ and $\partial_t w \le 0$. In terms of $\mu$ and $\tau$, this means that $\mu + \delta I \in \Gamma_n$ and $0 \le -\partial_t w = \tau - 2(t-t_0) \le \tau + \delta$. The fact that $u$ is a solution of the equation (1.1) can be expressed as

$$f(\lambda[\underline{u}] + \mu) - \partial_t\underline{u} + \tau = \psi(z). \tag{2.8}$$

Thus the condition that $\underline{u}$ is a parabolic subsolution implies that $|\mu|$ and $|\tau|$ are bounded uniformly in $(z,t)$.
Since along $S$ we have $\det(D^2_{ij}w) \le 2^{2n}\big(\det(D^2_{\bar kj}w)\big)^2$, it follows that both $|w_t|$ and $\det(D^2_{ij}w)$ are bounded uniformly, as was to be shown. Next, by the definition of the points $(x,t)$ in $S$, we have $w(x,t) \le L + \frac{\delta^2}{4}$. Since we can assume that $|L| > \delta^2$, it follows that $w < 0$ and $|w| \ge \frac{|L|}{2}$ on $S$. Thus we can write, in view of (2.6), for any $p > 0$,

$$C_n\,\delta^{4n+2} \le C\int_S dx\,dt \le \Big(\frac{|L|}{2}\Big)^{-p}\int_S |w(x,t)|^p\,dx\,dt \le \Big(\frac{|L|}{2}\Big)^{-p}\int_{\mathcal{U}} |w(x,t)|^p\,dx\,dt. \tag{2.9}$$

Next write

$$|w| = -w = -v - \frac{\delta^2}{4}|z|^2 - (t-t_0)^2 \le -v \le -v + \sup_X v \tag{2.10}$$

since $\sup_X v \ge 0$ by the assumption (2.1). Since $\lambda[u] \in \Gamma$ and the cone $\Gamma$ is convex, it follows that $\Delta u \ge -C$ and hence

$$\Delta(v - \sup_X v) = \Delta u - \Delta\underline{u} \ge -A \tag{2.11}$$

for some constant $A$ depending only on $\chi$, $\alpha$, and $\|i\partial\bar\partial\underline{u}\|_{L^\infty}$. The Harnack inequality applied to the function $v - \sup_X v$, in the version provided by Proposition 10 of [50], implies that

$$\|v - \sup_X v\|_{L^p(X)} \le C \tag{2.12}$$

for $C$ depending only on $(X,\alpha)$, $A$, and $p$. Substituting these bounds into (2.9) gives

$$C\,\delta^{4n+2} \le \Big(\frac{|L|}{2}\Big)^{-p}\int_{|t|<\frac{1}{2}\delta} \|\sup_X v - v\|^p_{L^p(X)}\,dt \le C'\,\delta\,\Big(\frac{|L|}{2}\Big)^{-p}, \tag{2.13}$$

from which the desired bound for $L$ follows. Q.E.D.

2.2 $C^2$ Estimates

In this section we prove an estimate for the complex Hessian of $u$ in terms of the gradient. The original strategy goes back to the work of Chou-Wang [6], with adaptations to complex Hessian equations by Hou-Ma-Wu [25], and to fully non-linear elliptic equations admitting a C-subsolution by Guan [19] and Székelyhidi [50]. Other adaptations of $C^2$ estimates can be found in [51], [32], [34], [67]. We follow closely [50].

Lemma 2. Assume that the flow (1.1) admits a C-subsolution $\underline{u}$ on $X\times[0,T)$. Then we have the following estimate

$$|i\partial\bar\partial u| \le \tilde C\Big(1 + \sup_{X\times[0,T)} |\nabla u|^2_\alpha\Big) \tag{2.14}$$

where $\tilde C$ depends only on $\|\alpha\|_{C^2}$, $\|\psi\|_{C^2}$, $\|\chi\|_{C^2}$, $\|\tilde u - \tilde{\underline{u}}\|_{L^\infty}$, $\|\nabla\underline{u}\|_{L^\infty}$, $\|i\partial\bar\partial\underline{u}\|_{L^\infty}$, $\|\partial_t u\|_{L^\infty}$, $\|\partial_t(u-\underline{u})\|_{L^\infty}$, and the dimension $n$.

Proof. Let $L = -\partial_t + F^{k\bar k}\nabla_k\nabla_{\bar k}$. Denote $g = \chi + i\partial\bar\partial u$; then $A[u]^k{}_j = \alpha^{k\bar p}g_{\bar pj}$. We would like to apply the maximum principle to the function

$$G = \log\lambda_1 + \varphi(|\nabla u|^2) + \phi(\tilde v) \tag{2.15}$$

where $v = u - \underline{u}$, $\tilde v$ is the normalization of $v$, $\lambda_1 : X \to \mathbb{R}$ is the largest eigenvalue of the matrix $A[u]$ at each point, and the functions $\varphi$ and $\phi$ will be specified below. Since the eigenvalues of $A[u]$ may not be distinct, we perturb $A[u]$ following the technique of [50], Proposition 13. Thus assume that $G$ attains its maximum on $X\times[0,T']$ at some $(z_0,t_0)$, with $t_0 > 0$. We choose local complex coordinates so that $z_0$ corresponds to $0$, and $A[u]$ is diagonal at $0$ with eigenvalues $\lambda_1 \ge \cdots \ge \lambda_n$. Let $B = (B_{ij})$ be a diagonal matrix with $0 = B_{11} < B_{22} < \cdots < B_{nn}$ and small constant entries, and set $\tilde A = A - B$. Then at the origin $\tilde A$ has eigenvalues $\tilde\lambda_1 = \lambda_1$ and $\tilde\lambda_i = \lambda_i - B_{ii} < \tilde\lambda_1$ for all $i > 1$.
Since all the eigenvalues of $\tilde A$ are distinct, we can define near $0$ the following smooth function $\tilde G$,

$$\tilde G = \log\tilde\lambda_1 + \varphi(|\nabla u|^2) + \phi(\tilde v) \tag{2.16}$$

where

$$\varphi(t) = -\frac{1}{2}\log\Big(1 - \frac{t}{2P}\Big),\qquad P = \sup_{X\times[0,T']}(|\nabla u|^2 + 1) \tag{2.17}$$

and, following [51],

$$\phi(t) = D_1 e^{-D_2 t} \tag{2.18}$$

for some large constants $D_1, D_2$ to be chosen later. Note that

$$\frac{1}{4P} \le \varphi' \le \frac{1}{2P},\qquad \varphi'' = 2(\varphi')^2 > 0. \tag{2.19}$$

The norm $|\nabla u|^2$ is taken with respect to the fixed Hermitian metric $\alpha$ on $X$, and we shall compute using covariant derivatives $\nabla$ with respect to $\alpha$. Since the matrix $B_{jm}$ is constant in a neighborhood of $0$ and since we are using the Chern unitary connection, we have $\nabla_{\bar k}B_{jm} = 0$. Our conventions for the curvature and torsion tensors of a Hermitian metric $\alpha$ are as follows,

$$[\nabla_\beta,\nabla_\alpha]V^\gamma = R_{\alpha\beta}{}^\gamma{}_\delta V^\delta + T^\delta_{\alpha\beta}\nabla_\delta V^\gamma. \tag{2.20}$$

We also set

$$\mathcal{F} = \sum_i f_i(\lambda[u]). \tag{2.21}$$

An important observation is that there exists a constant $C_1$, depending only on $\|\psi\|_{L^\infty(X)}$ and $\|\partial_t u\|_{L^\infty(X\times[0,T))}$, so that

$$\mathcal{F} \ge C_1. \tag{2.22}$$

Indeed, it follows from the properties of the cone $\Gamma$ that $\sum_i f_i(\lambda) \ge C(\sigma)$ for each fixed $\sigma$ and $\lambda \in \Gamma^\sigma$. When $\lambda = \lambda[u]$, $\sigma$ must lie in the range of $\partial_t u + \psi$, which is a compact set bounded by $\|\partial_t u\|_{L^\infty(X\times[0,T))} + \|\psi\|_{L^\infty(X)}$, hence our claim.

2.2.1 Estimate of $L(\log\tilde\lambda_1)$

Clearly

$$L\log\tilde\lambda_1 = \frac{1}{\lambda_1}\big(F^{k\bar k}\tilde\lambda_{1,\bar kk} - \partial_t\tilde\lambda_1\big) - F^{k\bar k}\frac{|\tilde\lambda_{1,\bar k}|^2}{\lambda_1^2}. \tag{2.23}$$

We work out the term $F^{k\bar k}\tilde\lambda_{1,\bar kk} - \partial_t\tilde\lambda_1$ using the flow. The usual differentiation rules ([43]) readily give

$$\tilde\lambda_{1,\bar k} = \nabla_{\bar k}g_{\bar 11} \tag{2.24}$$

and

$$\tilde\lambda_{1,\bar kk} = \nabla_k\nabla_{\bar k}g_{\bar 11} + \sum_{p>1}\frac{|\nabla_{\bar k}g_{\bar p1}|^2 + |\nabla_{\bar k}g_{\bar 1p}|^2}{\lambda_1 - \tilde\lambda_p} - \sum_{p>1}\frac{\nabla_k B_{1p}\nabla_{\bar k}g_{\bar p1} + \nabla_k B_{p1}\nabla_{\bar k}g_{\bar 1p}}{\lambda_1 - \tilde\lambda_p}, \tag{2.25}$$

while it follows from the flow that

$$\partial_t\tilde\lambda_1 = \partial_t u_{\bar 11} = F^{l\bar k,s\bar r}\nabla_{\bar 1}g_{\bar kl}\nabla_1 g_{\bar rs} + F^{k\bar k}\nabla_1\nabla_{\bar 1}g_{\bar kk} - \psi_{\bar 11}. \tag{2.26}$$

Thus

$$F^{k\bar k}\tilde\lambda_{1,\bar kk} - \partial_t\tilde\lambda_1 = F^{k\bar k}\big(\nabla_k\nabla_{\bar k}g_{\bar 11} - \nabla_1\nabla_{\bar 1}g_{\bar kk}\big) - F^{l\bar k,s\bar r}\nabla_{\bar 1}g_{\bar kl}\nabla_1 g_{\bar rs} + \psi_{\bar 11} + F^{k\bar k}\sum_{p>1}\Big\{\frac{|\nabla_{\bar k}g_{\bar p1}|^2 + |\nabla_{\bar k}g_{\bar 1p}|^2}{\lambda_1 - \tilde\lambda_p} - \frac{\nabla_k B_{1p}\nabla_{\bar k}g_{\bar p1} + \nabla_k B_{p1}\nabla_{\bar k}g_{\bar 1p}}{\lambda_1 - \tilde\lambda_p}\Big\}.$$

A simple computation gives

$$\nabla_k\nabla_{\bar k}g_{\bar 11} - \nabla_1\nabla_{\bar 1}g_{\bar kk} = -2\,\mathrm{Re}\big(T^p_{k1}\nabla_{\bar k}g_{\bar p1}\big) + T\star\nabla\chi + R\star\nabla\bar\nabla u + T\star T\star\nabla\bar\nabla u \ge -2\,\mathrm{Re}\big(T^p_{k1}\nabla_{\bar k}g_{\bar p1}\big) - C_2(\lambda_1 + 1), \tag{2.27}$$

where $C_2$ depends only on $\|\alpha\|_{C^2}$ and $\|\chi\|_{C^2}$.
We also have

$$\sum_{p>1}\Big\{\frac{|\nabla_{\bar k}g_{\bar p1}|^2 + |\nabla_{\bar k}g_{\bar 1p}|^2}{\lambda_1 - \tilde\lambda_p} - \frac{\nabla_k B_{1p}\nabla_{\bar k}g_{\bar p1} + \nabla_k B_{p1}\nabla_{\bar k}g_{\bar 1p}}{\lambda_1 - \tilde\lambda_p}\Big\} \ge \frac{1}{2}\sum_{p>1}\frac{|\nabla_{\bar k}g_{\bar p1}|^2 + |\nabla_{\bar k}g_{\bar 1p}|^2}{\lambda_1 - \tilde\lambda_p} - C_3 \ge \frac{1}{2(n\lambda_1+1)}\sum_{p>1}\big(|\nabla_{\bar k}g_{\bar p1}|^2 + |\nabla_{\bar k}g_{\bar 1p}|^2\big) - C_3, \tag{2.28}$$

where $C_3$ depends only on the dimension $n$, and the second inequality is due to the fact that $(\lambda_1 - \tilde\lambda_p)^{-1} \ge (n\lambda_1+1)^{-1}$, which itself follows from the fact that $\sum_i \lambda_i \ge 0$ and $B$ was chosen to be small. Thus

$$\nabla_k\nabla_{\bar k}g_{\bar 11} - \nabla_1\nabla_{\bar 1}g_{\bar kk} + \sum_{p>1}\Big\{\frac{|\nabla_{\bar k}g_{\bar p1}|^2 + |\nabla_{\bar k}g_{\bar 1p}|^2}{\lambda_1 - \tilde\lambda_p} - \frac{\nabla_k B_{1p}\nabla_{\bar k}g_{\bar p1} + \nabla_k B_{p1}\nabla_{\bar k}g_{\bar 1p}}{\lambda_1 - \tilde\lambda_p}\Big\}$$
$$\ge -2\,\mathrm{Re}\big(T^p_{k1}\nabla_{\bar k}g_{\bar p1}\big) + \frac{1}{2(n\lambda_1+1)}\sum_{p>1}\big(|\nabla_{\bar k}g_{\bar p1}|^2 + |\nabla_{\bar k}g_{\bar 1p}|^2\big) - C_2(\lambda_1+1) - C_3 \ge -C_4|\nabla_{\bar k}g_{\bar 11}| - C_5\lambda_1 - C_6, \tag{2.29}$$

where we have used the positive terms to absorb all the terms $T^p_{k1}\nabla_{\bar k}g_{\bar p1}$ except for $T^1_{k1}\nabla_{\bar k}g_{\bar 11}$, and $C_4, C_5, C_6$ depend only on $\|\alpha\|_{C^2}$, $\|\chi\|_{C^2}$, $n$. Altogether,

$$F^{k\bar k}\tilde\lambda_{1,\bar kk} - \partial_t\tilde\lambda_1 \ge -C_4 F^{k\bar k}|\nabla_{\bar k}g_{\bar 11}| - F^{l\bar k,s\bar r}\nabla_{\bar 1}g_{\bar kl}\nabla_1 g_{\bar rs} + \psi_{\bar 11} - C_5\mathcal{F}\lambda_1 - C_6\mathcal{F} \tag{2.30}$$

and we find

$$L\log\tilde\lambda_1 \ge -F^{k\bar k}\frac{|\tilde\lambda_{1,\bar k}|^2}{\lambda_1^2} - \frac{1}{\lambda_1}F^{l\bar k,s\bar r}\nabla_{\bar 1}g_{\bar kl}\nabla_1 g_{\bar rs} - \frac{C_4}{\lambda_1}F^{k\bar k}|\nabla_{\bar k}g_{\bar 11}| - C_7\mathcal{F}, \tag{2.31}$$

where we have bounded $\psi_{\bar 11}$ by a constant that can be absorbed in $C_6\mathcal{F}/\lambda_1 \le C_6\mathcal{F}$, since $\lambda_1 \ge 1$ by assumption, and $\mathcal{F}$ is bounded below by a constant depending on $\|\psi\|_{L^\infty}$ and $\|\partial_t u\|_{L^\infty}$. The constant $C_7$ thus depends only on $\|\alpha\|_{C^2}$, $\|\chi\|_{C^2}$, $n$, $\|\partial_t u\|_{L^\infty}$ and $\|\psi\|_{C^2}$. In view of (2.24), this can also be rewritten as

$$L\log\tilde\lambda_1 \ge -F^{k\bar k}\frac{|\tilde\lambda_{1,\bar k}|^2}{\lambda_1^2} - \frac{1}{\lambda_1}F^{l\bar k,s\bar r}\nabla_{\bar 1}g_{\bar kl}\nabla_1 g_{\bar rs} - \frac{C_4}{\lambda_1}F^{k\bar k}|\tilde\lambda_{1,\bar k}| - C_7\mathcal{F}. \tag{2.32}$$

2.2.2 Estimate for $L\varphi(|\nabla u|^2)$

Next, a direct calculation gives

$$L\varphi(|\nabla u|^2) = \varphi'\big(F^{q\bar q}\nabla_q\nabla_{\bar q} - \partial_t\big)|\nabla u|^2 + \varphi''F^{q\bar q}\nabla_q|\nabla u|^2\,\nabla_{\bar q}|\nabla u|^2$$
$$= \varphi'\big\{\nabla^j u\,(F^{q\bar q}\nabla_q\nabla_{\bar q} - \partial_t)\nabla_j u + \nabla^{\bar j}u\,(F^{q\bar q}\nabla_q\nabla_{\bar q} - \partial_t)\nabla_{\bar j}u\big\} + \varphi'F^{q\bar q}\big(|\nabla_q\nabla u|^2 + |\nabla_q\bar\nabla u|^2\big) + \varphi''F^{q\bar q}\nabla_q|\nabla u|^2\,\nabla_{\bar q}|\nabla u|^2. \tag{2.33}$$

In view of the flow, we have

$$\nabla_j\partial_t u = F^{k\bar k}\nabla_j g_{\bar kk} - \psi_j,\qquad \nabla_{\bar j}\partial_t u = F^{k\bar k}\nabla_{\bar j}g_{\bar kk} - \psi_{\bar j}. \tag{2.34}$$
It follows that

$$\big(F^{k\bar k}\nabla_k\nabla_{\bar k} - \partial_t\big)\nabla_{\bar j}u = F^{k\bar k}\big(\nabla_k\nabla_{\bar k}u_{\bar j} - \nabla_{\bar j}g_{\bar kk}\big) + \psi_{\bar j} = F^{k\bar k}\big(-\nabla_{\bar j}\chi_{\bar kk} + T^p_{kj}\nabla_{\bar j}\nabla_k u + R_{\bar jk}{}^{\bar m}{}_{\bar k}\nabla_{\bar m}u\big) + \psi_{\bar j} \tag{2.35}$$

and hence, for small $\varepsilon$, there is a constant $C_8 > 0$ depending only on $\varepsilon$, $\|\chi\|_{C^2}$, $\|\alpha\|_{C^2}$ and $\|\psi\|_{C^2}$ such that

$$\varphi'\,\nabla^{\bar j}u\,\big(F^{q\bar q}\nabla_q\nabla_{\bar q} - \partial_t\big)\nabla_{\bar j}u \ge -C_8\mathcal{F} - \frac{\varepsilon}{P}F^{q\bar q}\big(|\nabla_q\nabla u|^2 + |\nabla_q\bar\nabla u|^2\big) \tag{2.36}$$

since we can assume that $\lambda_1 \gg P = \sup_{X\times[0,T']}(|\nabla u|^2+1)$ (otherwise the desired estimate $\lambda_1 < CP$ already holds), and $(4P)^{-1} < \varphi' < (2P)^{-1}$. Similarly we obtain the same estimate for $\varphi'\,\nabla^j u\,(F^{q\bar q}\nabla_q\nabla_{\bar q} - \partial_t)\nabla_j u$. Thus, choosing $\varepsilon = 1/24$, we have

$$L\varphi(|\nabla u|^2) \ge -C_8\mathcal{F} + \frac{1}{8P}F^{q\bar q}\big(|\nabla_q\nabla u|^2 + |\nabla_q\bar\nabla u|^2\big) + \varphi''F^{q\bar q}\nabla_q|\nabla u|^2\,\nabla_{\bar q}|\nabla u|^2. \tag{2.37}$$

2.2.3 Estimate for $L\tilde G$

The evaluation of the remaining term $L\phi(\tilde v)$ is straightforward,

$$L\phi(\tilde v) = \phi'(\tilde v)\big(F^{k\bar k}\nabla_k\nabla_{\bar k}\tilde v - \partial_t\tilde v\big) + \phi''(\tilde v)F^{k\bar k}\nabla_k\tilde v\,\nabla_{\bar k}\tilde v. \tag{2.38}$$

Altogether, we have established the following lower bound for $L\tilde G$:

$$L\tilde G \ge -F^{k\bar k}\frac{|\tilde\lambda_{1,\bar k}|^2}{\lambda_1^2} - \frac{1}{\lambda_1}F^{l\bar k,s\bar r}\nabla_{\bar 1}g_{\bar kl}\nabla_1 g_{\bar rs} - \frac{C_4}{\lambda_1}F^{k\bar k}|\tilde\lambda_{1,\bar k}| - C_9\mathcal{F} + \frac{1}{8P}F^{q\bar q}\big(|\nabla_q\nabla u|^2 + |\nabla_q\bar\nabla u|^2\big)$$
$$\qquad + \varphi''F^{q\bar q}\nabla_q|\nabla u|^2\,\nabla_{\bar q}|\nabla u|^2 + \phi'(\tilde v)\big(F^{k\bar k}\nabla_k\nabla_{\bar k}\tilde v - \partial_t\tilde v\big) + \phi''(\tilde v)F^{k\bar k}\nabla_k\tilde v\,\nabla_{\bar k}\tilde v, \tag{2.39}$$

where $C_4$ and $C_9$ depend only on $\|\chi\|_{C^2}$, $\|\alpha\|_{C^2}$, $\|\psi\|_{C^2}$, $\|\partial_t u\|_{L^\infty}$ and the dimension $n$. For a small $\theta > 0$ to be chosen hereafter, we deal with the two following cases.

2.2.4 Case 1: $\theta\lambda_1 \le -\lambda_n$

In this case, we have $\theta^2\lambda_1^2 \le \lambda_n^2$. Thus we can write

$$\frac{1}{8P}F^{q\bar q}\big(|\nabla_q\nabla u|^2 + |\nabla_q\bar\nabla u|^2\big) \ge \frac{F^{n\bar n}}{8P}|u_{\bar nn}|^2 = \frac{F^{n\bar n}}{8P}|\lambda_n - \chi_{\bar nn}|^2 \ge \frac{\mathcal{F}\lambda_n^2}{10nP} - C_{10}\frac{\mathcal{F}}{P} \ge \frac{\theta^2}{10nP}\mathcal{F}\lambda_1^2 - C_{10}\mathcal{F}, \tag{2.40}$$

where $C_{10}$ depends only on $\|\chi\|_{C^2}$. Next, it is convenient to combine the first and third terms in the expression for $L\tilde G$,

$$-F^{k\bar k}\frac{|\tilde\lambda_{1,\bar k}|^2}{\lambda_1^2} - \frac{C_4}{\lambda_1}F^{k\bar k}|\tilde\lambda_{1,\bar k}| \ge -\frac{3}{2}F^{k\bar k}\frac{|\tilde\lambda_{1,\bar k}|^2}{\lambda_1^2} - C_{11}\mathcal{F}, \tag{2.41}$$

where $C_{11}$ depends only on $C_4$. At a maximum point for $\tilde G$, we have $0 \ge L\tilde G$.
Combining the lower bound (2.39) for $L\tilde G$ with the preceding inequalities and dropping the second and last terms, which are non-negative, we obtain

$$0 \ge \frac{\theta^2}{10nP}\mathcal{F}\lambda_1^2 - C_{12}\mathcal{F} - \frac{3}{2}F^{k\bar k}\frac{|\tilde\lambda_{1,\bar k}|^2}{\lambda_1^2} + \varphi''F^{q\bar q}\big|\nabla_{\bar q}|\nabla u|^2\big|^2 + \phi'(\tilde v)\big(F^{k\bar k}\nabla_k\nabla_{\bar k}\tilde v - \partial_t\tilde v\big), \tag{2.42}$$

where $C_{12} = C_9 + C_{10} + C_{11}$, depending on $\|\chi\|_{C^2}$, $\|\alpha\|_{C^2}$, $\|\psi\|_{C^2}$, $\|\partial_t u\|_{L^\infty}$ and $n$. Since we are at a critical point of $\tilde G$, we also have $\nabla\tilde G = 0$, and hence

$$\frac{\tilde\lambda_{1,\bar k}}{\lambda_1} + \varphi'\nabla_{\bar k}|\nabla u|^2 + \phi'\partial_{\bar k}\tilde v = 0, \tag{2.43}$$

which implies

$$\frac{3}{2}F^{k\bar k}\Big|\frac{\tilde\lambda_{1,\bar k}}{\lambda_1}\Big|^2 = \frac{3}{2}F^{k\bar k}\big|\varphi'\nabla_{\bar k}|\nabla u|^2 + \phi'\partial_{\bar k}\tilde v\big|^2 \le 2F^{k\bar k}(\varphi')^2\big|\nabla_{\bar k}|\nabla u|^2\big|^2 + 6F^{k\bar k}(\phi')^2|\nabla_{\bar k}\tilde v|^2 \le F^{k\bar k}\varphi''\big|\nabla_{\bar k}|\nabla u|^2\big|^2 + C_{13}\mathcal{F}P, \tag{2.44}$$

where $C_{13}$ depends on $\|\tilde v\|_{L^\infty}$ and $\|\nabla\underline{u}\|_{L^\infty}$. Since $\phi'(\tilde v)$ is bounded in terms of $\|\tilde v\|_{L^\infty}$, and $|F^{k\bar k}\nabla_k\nabla_{\bar k}\tilde v - \partial_t\tilde v| \le C_{14}\mathcal{F}\lambda_1 + C_{13}$, where $C_{14}$ depends on $\|\partial_t v\|_{L^\infty}$ and $\|i\partial\bar\partial\underline{u}\|_{L^\infty}$, we arrive at

$$0 \ge \frac{\theta^2}{10nP}\mathcal{F}\lambda_1^2 - C_{15}P\mathcal{F}, \tag{2.45}$$

where $C_{15}$ depends on $\|\chi\|_{C^2}$, $\|\alpha\|_{C^2}$, $n$, $\|\psi\|_{C^2}$, $\|i\partial\bar\partial\underline{u}\|_{L^\infty}$, $\|\nabla\underline{u}\|_{L^\infty}$, $\|\tilde v\|_{L^\infty}$, $\|\partial_t v\|_{L^\infty}$ and $\|\partial_t u\|_{L^\infty}$. This implies the desired estimate $\lambda_1 \le \tilde C P$.

2.2.5 The key estimate provided by subsolutions

In the second case, when $\theta\lambda_1 > -\lambda_n$, we need to use the following key property of subsolutions.

Lemma 3. Let $\underline{u}$ be a subsolution of the equation (1.1) in the sense of Definition 1 with the pair $(\delta,K)$. Then there exists a constant $C = C(\delta,K)$ so that, if $|\lambda[u] - \lambda[\underline{u}]| > K$ with $K$ as in Definition 1, then either

$$F^{pq}(A[u])\big(A^p{}_q[\underline{u}] - A^p{}_q[u]\big) - (\partial_t\underline{u} - \partial_t u) > C\,\mathcal{F} \tag{2.46}$$

or we have, for any $1 \le i \le n$,

$$F^{ii}(A[u]) > C\,\mathcal{F}. \tag{2.47}$$

Proof. The proof is an adaptation of the one for the elliptic version [50, Proposition 6] (see also [19] for a similar argument). However, because of the time parameter $t$, which may tend to $\infty$, we need to produce explicit bounds which are independent of $t$. As in [50], it suffices to prove that

$$\sum_{i=1}^n f_i(\lambda[u])\big(\lambda_i[\underline{u}] - \lambda_i[u]\big) - (\partial_t\underline{u} - \partial_t u) > C\,\mathcal{F}. \tag{2.48}$$

For any $(z_0,t_0) \in X\times[0,T']$, since $\underline{u}$ is a C-subsolution as in Definition 1, the set

$$A_{z_0,t_0} = \Big\{(w,s)\ \Big|\ w + \tfrac{\delta}{2}I \in \Gamma_n,\ s \ge -\delta,\ f(\lambda[\underline{u}(z_0,t_0)] + w) - \partial_t\underline{u}(z_0,t_0) + s \le \psi(z_0)\Big\}$$

is compact, and $A_{z_0,t_0} \subset B_{n+1}(0,K)$. For any $(w,s) \in A_{z_0,t_0}$, the set

$$C_{w,s} = \big\{v \in \mathbb{R}^n\ \big|\ \exists r > 0,\ w + rv \in -\delta I + \Gamma_n,\ f(\lambda[\underline{u}(z_0,t_0)] + w + rv) - \partial_t\underline{u}(z_0,t_0) + s = \psi(z_0)\big\}$$

is a cone with vertex at the origin. We claim that $C_{w,s}$ is strictly larger than $\Gamma_n$.
Indeed, for any $v \in \Gamma_n$, we can choose $r > 0$ large enough so that $|w + rv| > K$; then by the definition of C-subsolution, at $(z_0,t_0)$,

$$f(\lambda[\underline{u}] + w + rv) - \partial_t\underline{u} + s > \psi(z_0).$$

Therefore there exists $r' > 0$ such that $f(\lambda[\underline{u}] + w + r'v) - \partial_t\underline{u} + s = \psi(z_0)$, hence $v \in C_{w,s}$. This implies that $\Gamma_n \subset C_{w,s}$. Now, for any pair $(i,j)$ with $i \ne j$ and $i,j = 1,\ldots,n$, we choose $v_{(i,j)} := (v_1,\ldots,v_n)$ with $v_i = K + \delta$, $v_j = -\delta/3$ and $v_k = 0$ for $k \ne i,j$; then we have $w + v_{(i,j)} \in -\delta I + \Gamma_n$. By the definition of C-subsolution, we also have, at $(z_0,t_0)$,

$$f(\lambda[\underline{u}] + w + v_{(i,j)}) - \partial_t\underline{u} + s > \psi(z_0),$$

hence $v_{(i,j)} \in C_{w,s}$ for any pair $(i,j)$. Denote by $C^*_{w,s}$ the dual cone of $C_{w,s}$,

$$C^*_{w,s} = \{x \in \mathbb{R}^n : \langle x,y\rangle > 0,\ \forall y \in C_{w,s}\}.$$

We now prove that there is an $\varepsilon > 0$ such that if $x = (x_1,\ldots,x_n) \in C^*_{w,s}$ is a unit vector, then $x_i > \varepsilon$ for all $i = 1,\ldots,n$. First we remark that $x_i > 0$, $\forall i = 1,\ldots,n$, since $\Gamma_n \subset C_{w,s}$. Suppose that $x_1$ is the smallest of the $x_i$; then $\langle x, v_{(1,j)}\rangle > 0$ implies that $(K+\delta)x_1 \ge \frac{\delta}{3}x_j$, hence $(K+\delta)^2 x_1^2 \ge \frac{\delta^2}{9}x_j^2$ for all $j = 2,\ldots,n$, so $n(K+\delta)^2 x_1^2 \ge \delta^2/9$. Therefore we can choose $\varepsilon = \frac{\delta^2}{9n(K+\delta)^2}$.

Fix $(z_1,t_1) \in X\times[0,T']$ such that at this point $|\lambda[u] - \lambda[\underline{u}]| > K$. Let $\mathcal{T}$ be the tangent plane to $\{(\lambda,\tau)\ |\ f(\lambda) + \tau = \sigma\}$ at $(\lambda[u(z_1,t_1)], -\partial_t u(z_1,t_1))$. There are two cases:

1) There is some point $(w,s) \in A_{z_1,t_1}$ such that at $(z_1,t_1)$, $(\lambda[\underline{u}] + w, -\partial_t\underline{u} + s) \in \mathcal{T}$, i.e.

$$\nabla f(\lambda[u])\cdot\big(\lambda[\underline{u}] + w - \lambda[u]\big) + \big(-\partial_t\underline{u} + s + \partial_t u\big) = 0. \tag{2.49}$$

Now for any $v \in C_{w,s}$ there exists $r > 0$ such that $f(\lambda[\underline{u}] + w + rv) - \partial_t\underline{u} + s = \psi(z_1)$; this implies that

$$\nabla f(\lambda[u])\cdot\big(\lambda[\underline{u}] + w + rv - \lambda[u]\big) + \big(-\partial_t\underline{u} + s + \partial_t u\big) > 0,$$

so, combining with (2.49), we get $\nabla f(\lambda[u])\cdot v > 0$. It follows that at $(z_1,t_1)$ we have $\nabla f(\lambda[u]) \in C^*_{w,s}$, so $f_i(\lambda[u]) \ge \varepsilon\,|\nabla f(\lambda[u])|$ for all $i = 1,\ldots,n$, hence

$$f_i(\lambda[u]) > \frac{\varepsilon}{\sqrt n}\sum_p f_p(\lambda[u]),\quad \forall i = 1,\ldots,n,\qquad \text{where } \varepsilon = \frac{\delta^2}{9n(K+\delta)^2}.$$

2) Otherwise, we observe that if $A_{z_1,t_1} \ne \emptyset$, then $(w_0,s_0) = (-\delta/2,\ldots,-\delta/2,-\delta) \in A_{z_1,t_1}$, and at $(z_1,t_1)$ the point $(\lambda[\underline{u}] + w_0, -\partial_t\underline{u} + s_0)$ must lie above $\mathcal{T}$, in the sense that

$$(\nabla f(\lambda[u]),\,1)\cdot\big(\lambda[\underline{u}] + w_0 - \lambda[u],\ -\partial_t\underline{u} + s_0 + \partial_t u\big) > 0 \quad\text{at } (z_1,t_1). \tag{2.50}$$

Indeed, if this is not the case, using the monotonicity of $f$ we can find $v \in \Gamma_n$ such that $(\lambda[\underline{u}] + w_0 + v, -\partial_t\underline{u} + s_0) \in \mathcal{T}$, so the concavity of $(\lambda,\tau) \mapsto f(\lambda) + \tau$ implies that $(w_0 + v, s_0)$ is in $A_{z_1,t_1}$ and then satisfies the first case; this gives a contradiction. Now it follows from (2.50) that at $(z_1,t_1)$,

$$(\nabla f(\lambda[u]),\,1)\cdot\big(\lambda[\underline{u}] - \lambda[u],\ -\partial_t\underline{u} + \partial_t u\big) \ge -\nabla f(\lambda[u])\cdot w_0 - s_0 = \frac{\delta}{2}\mathcal{F} + \delta \ge \frac{\delta}{2}\mathcal{F},$$

where $\mathcal{F} = \sum_i f_i(\lambda[u]) > 0$. This means

$$\sum_{i=1}^n f_i(\lambda[u])\big(\lambda_i[\underline{u}] - \lambda_i[u]\big) - (\partial_t\underline{u} - \partial_t u) > \frac{\delta}{2}\mathcal{F} \tag{2.51}$$

as required.
Now if $A_{z_1,t_1} = \emptyset$, then at $(z_1,t_1)$,

$$f(\lambda[\underline{u}] + w_0) - \partial_t\underline{u} + s_0 > \psi(z_1),$$

hence we also have that $(\lambda[\underline{u}] + w_0, -\partial_t\underline{u} + s_0)$ lies above $\mathcal{T}$, using the concavity of $(\lambda,\tau) \mapsto f(\lambda) + \tau$. By the same argument as above, we again obtain the inequality (2.51). So we get the desired inequalities. Q.E.D.

2.2.6 Case 2: $\theta\lambda_1 > -\lambda_n$

Set

$$I = \{i;\ F^{i\bar i} \ge \theta^{-1}F^{1\bar 1}\}. \tag{2.52}$$

At the maximum point $\partial_{\bar k}\tilde G = 0$, and we can write

$$-\sum_{k\notin I}F^{k\bar k}\frac{|\tilde\lambda_{1,\bar k}|^2}{\lambda_1^2} = -\sum_{k\notin I}F^{k\bar k}\big|\varphi'\nabla_{\bar k}|\nabla u|^2 + \phi'\partial_{\bar k}\tilde v\big|^2 \ge -2(\varphi')^2\sum_{k\notin I}F^{k\bar k}\big|\nabla_{\bar k}|\nabla u|^2\big|^2 - 2(\phi')^2\sum_{k\notin I}F^{k\bar k}|\nabla_{\bar k}\tilde v|^2$$
$$\ge -\varphi''\sum_{k\notin I}F^{k\bar k}\big|\nabla_{\bar k}|\nabla u|^2\big|^2 - 2(\phi')^2\theta^{-1}F^{1\bar 1}P - C_{16}\mathcal{F}, \tag{2.53}$$

where $C_{16}$ depends on $\|\nabla\underline{u}\|_{L^\infty}$ and $\|\tilde v\|_{L^\infty}$. On the other hand,

$$-2\theta\sum_{k\in I}F^{k\bar k}\frac{|\tilde\lambda_{1,\bar k}|^2}{\lambda_1^2} \ge -2\theta\varphi''\sum_{k\in I}F^{k\bar k}\big|\nabla_{\bar k}|\nabla u|^2\big|^2 - 4\theta(\phi')^2\sum_{k\in I}F^{k\bar k}|\nabla_{\bar k}\tilde v|^2. \tag{2.54}$$

Choose $0 < \theta \ll 1$ such that $4\theta(\phi')^2 \le \frac{1}{2}\phi''$. Then (2.39) implies that

$$0 \ge -\frac{1}{\lambda_1}F^{l\bar k,s\bar r}\nabla_{\bar 1}g_{\bar kl}\nabla_1 g_{\bar rs} - (1-2\theta)\sum_{k\in I}F^{k\bar k}\frac{|\tilde\lambda_{1,\bar k}|^2}{\lambda_1^2} - \frac{C}{\lambda_1}F^{k\bar k}|\tilde\lambda_{1,\bar k}| + \frac{1}{8P}F^{q\bar q}\big(|\nabla_q\nabla u|^2 + |\nabla_q\bar\nabla u|^2\big)$$
$$\qquad + \frac{1}{2}\phi''F^{k\bar k}|\nabla_{\bar k}\tilde v|^2 + \phi'\big(F^{k\bar k}\nabla_k\nabla_{\bar k}\tilde v - \partial_t\tilde v\big) - 2(\phi')^2\theta^{-1}F^{1\bar 1}P - C_{17}\mathcal{F}, \tag{2.55}$$

where $C_{17}$ depends on $\|\chi\|_{C^2}$, $\|\alpha\|_{C^2}$, $n$, $\|\psi\|_{C^2}$, $\|\partial_t u\|_{L^\infty}$, $\|\tilde v\|_{L^\infty}$ and $\|\nabla\underline{u}\|_{L^\infty}$. The concavity of $F$ implies that

$$F^{l\bar k,s\bar r}\nabla_{\bar 1}g_{\bar kl}\nabla_1 g_{\bar rs} \le \sum_{k\in I}\frac{F^{1\bar 1} - F^{k\bar k}}{\lambda_1 - \lambda_k}|\nabla_1 g_{\bar 1k}|^2 \tag{2.56}$$

since $\frac{F^{1\bar 1} - F^{k\bar k}}{\lambda_1 - \lambda_k} \le 0$. Moreover, for $k \in I$ we have $F^{1\bar 1} \le \theta F^{k\bar k}$, and the assumption $\theta\lambda_1 \ge -\lambda_n$ yields

$$\frac{1-\theta}{\lambda_1 - \lambda_k} \ge \frac{1-2\theta}{\lambda_1}. \tag{2.57}$$

It follows that

$$\sum_{k\in I}\frac{F^{1\bar 1} - F^{k\bar k}}{\lambda_1 - \lambda_k}|\nabla_1 g_{\bar 1k}|^2 \le -\sum_{k\in I}\frac{(1-\theta)F^{k\bar k}}{\lambda_1 - \lambda_k}|\nabla_1 g_{\bar 1k}|^2 \le -\frac{1-2\theta}{\lambda_1}\sum_{k\in I}F^{k\bar k}|\nabla_1 g_{\bar 1k}|^2. \tag{2.58}$$

Combining with the previous inequalities, we obtain

$$0 \ge -(1-2\theta)\sum_{k\in I}F^{k\bar k}\frac{|\tilde\lambda_{1,\bar k}|^2 - |\nabla_1 g_{\bar 1k}|^2}{\lambda_1^2} - C_{17}\mathcal{F} - \frac{C_4}{\lambda_1}F^{k\bar k}|\tilde\lambda_{1,\bar k}| + \frac{1}{8P}F^{q\bar q}\big(|\nabla_q\nabla u|^2 + |\nabla_q\bar\nabla u|^2\big)$$
$$\qquad + \frac{1}{2}\phi''F^{k\bar k}|\nabla_{\bar k}\tilde v|^2 + \phi'\big(F^{k\bar k}\nabla_k\nabla_{\bar k}\tilde v - \partial_t\tilde v\big) - 2(\phi')^2\theta^{-1}F^{1\bar 1}P. \tag{2.59}$$
Since $\nabla_1 g_{\bar 1k} = \tilde\lambda_{1,\bar k} + O(\lambda_1)$, we have

$$-(1-2\theta)\sum_{k\in I}F^{k\bar k}\frac{|\tilde\lambda_{1,\bar k}|^2 - |\nabla_1 g_{\bar 1k}|^2}{\lambda_1^2} \ge -C_{18}\mathcal{F}, \tag{2.60}$$

where $C_{18}$ depends on $\|\chi\|_{C^2}$ and $\|\alpha\|_{C^2}$. Next, using again the equations for critical points, we can write

$$\frac{C_4}{\lambda_1}F^{k\bar k}|\tilde\lambda_{1,\bar k}| = C_4 F^{k\bar k}\big|\varphi'\nabla_{\bar k}|\nabla u|^2 + \phi'\nabla_{\bar k}\tilde v\big| \le \frac{1}{2P^{1/2}}\sum F^{k\bar k}\big(|\nabla_{\bar k}\nabla_p u| + |\nabla_{\bar k}\nabla_{\bar p}u|\big) + C_\varepsilon|\phi'|F^{k\bar k}|\nabla_{\bar k}\tilde v|^2 + \varepsilon C_{19}|\phi'|\mathcal{F} + C_{20}\mathcal{F}, \tag{2.61}$$

where $C_{19}$ and $C_{20}$ depend on $C_4$. Accordingly, the previous inequality implies

$$0 \ge \frac{1}{10P}F^{q\bar q}\big(|\nabla_q\nabla u|^2 + |\nabla_q\bar\nabla u|^2\big) + \frac{1}{2}\phi''F^{k\bar k}|\nabla_{\bar k}\tilde v|^2 + \phi'\big(F^{k\bar k}\nabla_k\nabla_{\bar k}\tilde v - \partial_t\tilde v\big) - 2(\phi')^2\theta^{-1}F^{1\bar 1}P - C_\varepsilon|\phi'|F^{k\bar k}|\nabla_{\bar k}\tilde v|^2 - \varepsilon C_{19}|\phi'|\mathcal{F} - C_{21}\mathcal{F}, \tag{2.62}$$

where $C_{21}$ depends only on $\|\chi\|_{C^2}$, $\|\alpha\|_{C^2}$, $n$, $\|\psi\|_{C^2}$, $\|\partial_t v\|_{C^0}$, $\|\tilde v\|_{L^\infty}$, $\|\partial_t u\|_{L^\infty}$ and $\|\nabla\underline{u}\|_{L^\infty}$. Finally we get

$$0 \ge F^{1\bar 1}\Big(\frac{\lambda_1^2}{20P} - 2(\phi')^2\theta^{-1}P\Big) + \Big(\frac{1}{2}\phi'' - C_\varepsilon|\phi'|\Big)F^{k\bar k}|\nabla_{\bar k}\tilde v|^2 - \varepsilon C_{19}|\phi'|\mathcal{F} + \phi'\big(F^{k\bar k}\nabla_k\nabla_{\bar k}\tilde v - \partial_t\tilde v\big) - C_{21}\mathcal{F}. \tag{2.63}$$

We now apply Lemma 3. Fix $\delta$ and $K$ as in Definition 1. If $\lambda_1 > K$, then there are two possibilities:

• Either $F^{k\bar k}(\underline{u}_{\bar kk} - u_{\bar kk}) + (\partial_t u - \partial_t\underline{u}) \ge \kappa\,\mathcal{F}$, for some $\kappa$ depending only on $\delta$ and $K$; equivalently,

$$F^{k\bar k}\nabla_k\nabla_{\bar k}\tilde v - \partial_t\tilde v \le -\kappa\,\mathcal{F} + C_{22}, \tag{2.64}$$

where $C_{22}$ depends on $\|\partial_t v\|_{L^\infty}$. Since $\phi' < 0$, we find

$$0 \ge F^{1\bar 1}\Big(\frac{\lambda_1^2}{20P} - 2(\phi')^2\theta^{-1}P\Big) + \Big(\frac{1}{2}\phi'' - C_\varepsilon|\phi'|\Big)F^{k\bar k}|\nabla_{\bar k}\tilde v|^2 - C_{23}\mathcal{F} - \varepsilon C_{19}|\phi'|\mathcal{F} - \phi'\kappa\,\mathcal{F}, \tag{2.65}$$

with $C_{23}$ depending only on $n$, $\|\chi\|_{C^2}$, $\|\alpha\|_{C^2}$, $\|\psi\|_{C^2}$, $\|\partial_t v\|_{L^\infty}$, $\|\tilde v\|_{L^\infty}$, $\|\partial_t u\|_{L^\infty}$ and $\|\nabla\underline{u}\|_{L^\infty}$. We first choose $\varepsilon$ small enough so that $\varepsilon C_{19} < \kappa/2$, then $D_2$ large enough so that $\phi'' > 2C_\varepsilon|\phi'|$. We obtain

$$0 \ge F^{1\bar 1}\Big(\frac{\lambda_1^2}{20P} - 2(\phi')^2\theta^{-1}P\Big) - C_{23}\mathcal{F} - \frac{1}{2}\phi'\kappa\,\mathcal{F}. \tag{2.66}$$

We now choose $D_1$ large enough (depending on $\|\tilde v\|_{L^\infty}$) so that $-C_{23} - \frac{1}{2}\phi'\kappa > 0$. Then

$$\frac{\lambda_1^2}{20P} \le 2(\phi')^2\theta^{-1}P \tag{2.67}$$

and the desired upper bound for $\lambda_1/P$ follows.

• Or $F^{1\bar 1} \ge \kappa\,\mathcal{F}$.
With $D_1$, $D_2$, and $\theta$ as above, the inequality (2.63) implies

$$0 \ge \kappa\,\mathcal{F}\Big(\frac{\lambda_1^2}{20P} - 2(\phi')^2\theta^{-1}P\Big) - C_{24}\mathcal{F} - \phi'F^{k\bar k}g_{\bar kk}, \tag{2.68}$$

with $C_{24}$ depending only on $\|\chi\|_{C^2}$, $\|\alpha\|_{C^2}$, $n$, $\|\psi\|_{C^2}$, $\|\partial_t v\|_{L^\infty}$, $\|\tilde v\|_{L^\infty}$, $\|\partial_t u\|_{L^\infty}$, $\|\nabla\underline{u}\|_{L^\infty}$, and $\|i\partial\bar\partial\underline{u}\|_{L^\infty}$. Since $F^{k\bar k}g_{\bar kk} \le \mathcal{F}\lambda_1$, we can divide by $\mathcal{F}P$ to get

$$0 \ge \kappa\,\frac{\lambda_1^2}{20P^2} - C_{25}\Big(1 + \frac{1}{P} + \frac{\lambda_1}{P}\Big) \tag{2.69}$$

with a constant $C_{25}$ depending only on $\|\chi\|_{C^2}$, $\|\alpha\|_{C^2}$, $n$, $\|\psi\|_{C^2}$, $\|\tilde v\|_{L^\infty}$, $\|\partial_t v\|_{L^\infty}$, $\|\partial_t u\|_{L^\infty}$, $\|\nabla\underline{u}\|_{L^\infty}$, and $\|i\partial\bar\partial\underline{u}\|_{L^\infty}$. Thus we obtain the desired bound for $\lambda_1/P$.

It was pointed out in [50] that, under an extra concavity condition on $f$, $C^2$ estimates can be derived directly from $C^0$ estimates in the elliptic case, using a test function introduced in [39]. The same holds in the parabolic case, but we omit a fuller discussion.

2.3 $C^1$ Estimates

The $C^1$ estimates are also adapted from [50], which reduces the estimates, by a blow-up argument, to a key Liouville theorem for Hessian equations due to Székelyhidi [50] and Dinew and Kolodziej [10].

Lemma 4. There exists a constant $C > 0$, depending on $\underline{u}$, $\|\partial_t u\|_{L^\infty(X\times[0,T))}$, $\|\tilde u\|_{L^\infty(X\times[0,T))}$, $\|\alpha\|_{C^2}$, $\chi$, $\psi$ and the constant $\tilde C$ in Lemma 2, such that

$$\sup_{X\times[0,T)} |\nabla u|^2_\alpha \le C. \tag{2.70}$$

Proof. Assume by contradiction that (2.70) does not hold. Then there exists a sequence $(x_k,t_k) \in X\times[0,T)$ with $t_k \to T$ such that $\lim_{k\to\infty}|\nabla u(x_k,t_k)|_\alpha = +\infty$. We can assume further that

$$R_k := |\nabla u(x_k,t_k)|_\alpha = \sup_{X\times[0,t_k]}|\nabla u(x,t)|_\alpha \to +\infty \quad\text{as } k \to +\infty,$$

and $\lim_{k\to\infty} x_k = x$. Using localization, we choose a coordinate chart $\{U, (z^1,\ldots,z^n)\}$ centered at $x$, identified with the ball $B_2(0) \subset \mathbb{C}^n$ of radius $2$ centered at the origin, such that $\alpha(0) = \beta$, where $\beta = \sum_j i\,dz^j\wedge d\bar z^j$. We also assume that $k$ is sufficiently large, so that $z_k := z(x_k) \in B_1(0)$. Define the following maps

$$\Phi_k : \mathbb{C}^n \to \mathbb{C}^n,\qquad \Phi_k(z) := R_k^{-1}z + z_k,$$
$$\tilde u_k : B_{R_k}(0) \to \mathbb{R},\qquad \tilde u_k(z) := \tilde u(\Phi_k(z), t_k) = \tilde u(R_k^{-1}z + z_k, t_k),$$

where $\tilde u = u - \frac{1}{V}\int_X u\,\alpha^n$. Then the equation $u_t = F(A) - \psi(z)$ implies that

$$f\Big(R_k^2\,\lambda\big[\beta_k^{i\bar p}(\chi_{k,\bar pj} + \tilde u_{k,\bar pj})\big]\Big) = \psi(R_k^{-1}z + z_k) + u_t(\Phi_k(z), t_k), \tag{2.71}$$

where $\beta_k := R_k^2\,\Phi_k^*\alpha$ and $\chi_k := \Phi_k^*\chi$. Since $\beta_k \to \beta$ and $\chi_k \to 0$ in $C^\infty_{loc}$ as $k \to \infty$, we get

$$\lambda\big[\beta_k^{i\bar p}(\chi_{k,\bar pj} + \tilde u_{k,\bar pj})\big] = \lambda(\tilde u_{k,\bar ji}) + O\Big(\frac{|z|}{R_k^2}\Big). \tag{2.72}$$

By the construction, we have

$$\sup_{B_{R_k}(0)}\tilde u_k \le C,\qquad \sup_{B_{R_k}(0)}|\nabla\tilde u_k| \le C \tag{2.73}$$

where $C$ depends on $\|\tilde u\|_{L^\infty}$, and $|\nabla\tilde u_k|(0) = R_k^{-1}|\nabla u(x_k,t_k)|_\alpha = 1$. Thanks to Lemma 2, we also have

$$\sup_{B_{R_k}(0)}|\partial\bar\partial\tilde u_k|_\beta \le C R_k^{-2}\sup_X|\partial\bar\partial u(\cdot,t_k)|_\alpha \le C'. \tag{2.74}$$
As in the arguments of [50, 57], it follows from (2.73), (2.74), the elliptic estimates for $\Delta$ and the Sobolev embedding that for each given compact $K \subset \mathbb{C}^n$, $0 < \gamma < 1$ and $p > 1$, there is a constant $C$ such that

$$\|\tilde u_k\|_{C^{1,\gamma}(K)} + \|\tilde u_k\|_{W^{2,p}(K)} \le C.$$

Therefore there is a subsequence of $\tilde u_k$ converging strongly in $C^{1,\gamma}_{loc}(\mathbb{C}^n)$, and weakly in $W^{2,p}_{loc}(\mathbb{C}^n)$, to a function $v$ with $\sup_{\mathbb{C}^n}(|v| + |\nabla v|) \le C$ and $\nabla v(0) \ne 0$; in particular, $v$ is not constant. The proof can now be completed exactly as in [50]. The function $v$ is shown to be a $\Gamma$-solution in the sense of Székelyhidi [50, Definition 15], and the fact that $v$ is not constant contradicts Székelyhidi's Liouville theorem for $\Gamma$-solutions [50, Theorem 20], which is itself based on the Liouville theorem of Dinew and Kolodziej [10]. Q.E.D.

2.4 Higher Order Estimates

Under the conditions on $f(\lambda)$, the uniform parabolicity of the equation (1.1) will follow once we have established an a priori estimate on $\|i\partial\bar\partial u\|_{L^\infty}$, and hence an upper bound for the eigenvalues $\lambda[u]$. However, we shall often not have uniform control of $\|u(\cdot,t)\|_{L^\infty}$. Thus we shall require the following version of the Evans-Krylov theorem for uniformly parabolic and concave equations, with the precise dependence of the constants spelled out, which can be proved using the arguments of Trudinger [60], and more particularly of Tosatti-Weinkove [53] and Gill [17].

Lemma 5. Assume that $u$ is a solution of the equation (1.1) on $X\times[0,T)$ and that there exists a constant $C_0$ with $\|i\partial\bar\partial u\|_{L^\infty} \le C_0$. Then there exist positive constants $C$ and $\gamma \in (0,1)$, depending only on $\alpha$, $\chi$, $C_0$ and $\|\psi\|_{C^2}$, such that

$$\|i\partial\bar\partial u\|_{C^\gamma(X\times[0,T))} \le C. \tag{2.75}$$

Once the $C^\gamma$ estimate for $i\partial\bar\partial u$ has been established, it is well known that a priori estimates of arbitrary order follow by bootstrap, as shown in detail for the Monge-Ampère equation in Yau [65]. We omit reproducing the proofs.

3 Proof of Theorems 1 and 2

We begin with the following simple lemma, which follows immediately by differentiating the equation (1.1) with respect to $t$ and applying the maximum principle; it shows that the solution of a linear heat equation at any time can be controlled by its initial value:

Lemma 6. Let $u(z,t)$ be a smooth solution of the flow (1.1) on any time interval $[0,T)$. Then $\partial_t u$ satisfies the linear heat equation

$$\partial_t(\partial_t u) = F^j{}_k\,\alpha^{k\bar m}\,\partial_j\partial_{\bar m}(\partial_t u) \tag{3.1}$$

and we have the following estimate for any $t \in [0,T)$,

$$\min_X\big(F(A[u_0]) - \psi\big) \le \partial_t u(\cdot,t) \le \max_X\big(F(A[u_0]) - \psi\big). \tag{3.2}$$

We can now prove a lemma which provides general sufficient conditions for the convergence of the flow:

Lemma 7. Consider the flow (1.1). Assume that the equation admits a parabolic C-subsolution $\underline{u} \in C^{2,1}(X\times[0,\infty))$, and that there exists a constant $C$ independent of time so that

$$\operatorname{osc}_X u(\cdot,t) \le C. \tag{3.3}$$

Then a smooth solution $u(z,t)$ exists for all time, and its normalization $\tilde u$ converges in $C^\infty$ to a solution $\tilde u_\infty$ of the equation (1.6) for some constant $c$.
In particular, if we assume further that $\|u\|_{L^\infty(X\times[0,\infty))} \le C$ and that for each $t > 0$ there exists $y = y(t) \in X$ such that $\partial_t u(y,t) = 0$, then $u$ converges in $C^\infty$ to a solution $u_\infty$ of the equation (1.6) with the constant $c = 0$.

Proof of Lemma 7. We begin by establishing the existence of the solution for all time. For any fixed $T > 0$, Lemma 6 shows that $|\partial_t u|$ is uniformly bounded by a constant $C$. Integrating between $0$ and $T$, we deduce that $|u|$ is uniformly bounded by $CT$. We can now apply Lemmas 4, 2, 5 to conclude that the function $u$ is uniformly bounded in $C^k$ norm (by constants depending on $k$ and $T$) for arbitrary $k$. This implies that the solution can be extended beyond $T$, and since $T$ is arbitrary, that it exists for all time.

Next, we establish the convergence. For this, we adapt the arguments of Cao [2], and especially Gill [17], based on the Harnack inequality. Since $\operatorname{osc}_X u(\cdot,t)$ is uniformly bounded by assumption, and since $\partial_t u$ is uniformly bounded in view of Lemma 6, we can apply Lemma 2 and deduce that the eigenvalues of the matrix $[\chi + i\partial\bar\partial u]$ are uniformly bounded over the time interval $[0,\infty)$. The uniform ellipticity of the equation (3.5) follows in turn from the properties (1) and (2) of the function $f(\lambda)$. Next set

$$v = \partial_t u + A \tag{3.4}$$

for some large constant $A$ so that $v > 0$. The function $v$ satisfies the heat equation

$$\partial_t v = F^{i\bar j}\partial_i\partial_{\bar j}v. \tag{3.5}$$

Since the equation (3.5) is uniformly elliptic, by the differential Harnack inequality, proved originally in the Riemannian case by Li and Yau in [29] and extended to the Hermitian case by Gill [17], section 6, it follows that there exist positive constants $C_1, C_2, C_3$, depending only on the ellipticity bounds, so that for all $0 < t_1 < t_2$ we have

$$\sup_X v(\cdot,t_1) \le \inf_X v(\cdot,t_2)\,\Big(\frac{t_2}{t_1}\Big)^{C_2}\exp\Big(\frac{C_3}{t_2-t_1} + C_1(t_2-t_1)\Big). \tag{3.6}$$

The same argument as in Cao [2], section 2, and Gill [17], section 7, shows that this estimate implies the existence of constants $C_4$ and $\eta > 0$ so that

$$\operatorname{osc}_X v(\cdot,t) \le C_4 e^{-\eta t}. \tag{3.7}$$

If we set

$$\tilde v(z,t) = v(z,t) - \frac{1}{V}\int_X v\,\alpha^n = \partial_t u(z,t) - \frac{1}{V}\int_X \partial_t u\,\alpha^n = \partial_t\tilde u, \tag{3.8}$$

it follows that

$$|\tilde v(z,t)| \le C_4 e^{-\eta t} \tag{3.9}$$

for all $z \in X$. In particular,

$$\partial_t\Big(\tilde u + \frac{C_4}{\eta}e^{-\eta t}\Big) = \tilde v - C_4 e^{-\eta t} \le 0, \tag{3.10}$$

and the function $\tilde u(z,t) + \frac{C_4}{\eta}e^{-\eta t}$ is decreasing in $t$. By the assumption (3.3), this function is uniformly bounded. Thus it converges to a function $\tilde u_\infty(z)$. By the higher order estimates of §2, the derivatives to any order of $\tilde u$ are uniformly bounded, so the convergence of $\tilde u + \frac{C_4}{\eta}e^{-\eta t}$ is actually in $C^\infty$. The function $\tilde u(z,t)$ then also converges in $C^\infty$ to the same limit $\tilde u_\infty(z)$. Now the function $\tilde u(z,t)$ satisfies the following flow,

$$\partial_t\tilde u = F(A[\tilde u]) - \psi(z) - \frac{1}{V}\int_X \partial_t u\,\alpha^n. \tag{3.11}$$

Taking limits, we obtain

$$0 = F(A[\tilde u_\infty]) - \psi(z) - \lim_{t\to\infty}\frac{1}{V}\int_X \partial_t u\,\alpha^n, \tag{3.12}$$

where the existence of the limit of the integral on the right hand side follows from the equation. Define the constant $c$ to be the value of this limit. This implies the first statement in Lemma 7.
Now we assume that $\|u\|_{L^\infty(X\times[0,\infty))} \le C$ and that for each $t \ge 0$ there exists $y = y(t) \in X$ such that $\partial_t u(y,t) = 0$. By the same argument as above, we have

$$\operatorname{osc}_X\partial_t u(\cdot,t) \le C_4 e^{-\eta t} \tag{3.13}$$

for some $C_4, \eta > 0$. Since for each $t \ge 0$ there exists $y = y(t) \in X$ such that $\partial_t u(y,t) = 0$, we infer that for any $z \in X$,

$$|\partial_t u(z,t)| = |\partial_t u(z,t) - \partial_t u(y,t)| \le \operatorname{osc}_X\partial_t u(\cdot,t) \le C_4 e^{-\eta t}. \tag{3.14}$$

Therefore, by the same argument as above, the function $u(z,t) + \frac{C_4}{\eta}e^{-\eta t}$ converges in $C^\infty$, and $\partial_t u$ converges to $0$ as $t \to +\infty$. We thus infer that $u$ converges in $C^\infty$ to a function $u_\infty$ satisfying the equation

$$F(A[u_\infty]) = \psi(z). \tag{3.15}$$

Lemma 7 is proved.

Proof of Theorem 1. Since $f$ is in the unbounded case, the function $\underline{u} = u_0$ is a C-subsolution of the flow. In view of Lemma 7, it suffices to establish a uniform bound for $\operatorname{osc}_X u(\cdot,t)$. But the flow can be re-expressed as the elliptic equation

$$F(A) = \psi + \partial_t u \tag{3.16}$$

where the right hand side $\psi + \partial_t u$ is bounded uniformly in $t$, since we have seen that $\partial_t u$ is uniformly bounded in $t$. Furthermore, because $f$ is in the unbounded case, the function $\underline{u} = u_0$ is a C-subsolution of (3.16). By the $C^0$ estimate of [50], the oscillation $\operatorname{osc}_X u(\cdot,t)$ can be bounded for each $t$ by the $C^0$ norm of the right hand side, and is hence uniformly bounded. Q.E.D.

Proof of Theorem 2. Again, it suffices to establish a uniform bound in $t$ for $\operatorname{osc}_X u(\cdot,t)$. Consider first the case (a). In view of Lemma 6 and the hypothesis, we have

$$\partial_t\underline{u} \ge \partial_t u \tag{3.17}$$

on all of $X\times[0,\infty)$. But if we rewrite the flow (1.1) as

$$F(A) = \psi + \partial_t u, \tag{3.18}$$

we see that the condition that $\underline{u}$ be a parabolic C-subsolution for the equation (1.1), together with (3.17), implies that $\underline{u}$ is a C-subsolution for the equation (3.18) in the elliptic sense. We can then apply Székelyhidi's $C^0$ estimate for the elliptic equation to obtain a uniform bound for $\operatorname{osc}_X u(\cdot,t)$.

Next, we consider the case (b). In this case, the existence of a function $h(t)$ with the indicated properties allows us to apply Lemma 1 and obtain immediately the lower bound

$$u - \underline{u} - h(t) \ge -C \tag{3.19}$$

for some constant $C$ independent of time. The inequality (1.9) then implies a uniform bound for $\operatorname{osc}_X u$.

4 Applications to Geometric Flows

Theorems 1 and 2 can be applied to many geometric flows. We should stress that they do not provide a completely independent approach, as they are themselves built on many techniques that had been developed to study these flows. Nevertheless, they may provide an attractive uniform approach.

4.1 A criterion for subsolutions

In practice, it is easier to verify that a given function $\underline{u}$ on $X\times[0,\infty)$ is a C-subsolution of the equation (1.1) using the following lemma rather than the original Definition 1:

Lemma 8. Let $\underline{u}$ be a $C^{2,1}$ admissible function on $X\times[0,\infty)$, with $\|\underline{u}\|_{C^{2,1}(X\times[0,\infty))} < \infty$. Then $\underline{u}$ is a parabolic C-subsolution in the sense of Definition 1 if and only if there exists a constant $\tilde\delta > 0$, independent of $(z,t)$, so that

$$\lim_{\mu\to+\infty} f(\lambda[\underline{u}(z,t)] + \mu e_i) - \partial_t\underline{u}(z,t) > \tilde\delta + \psi(z) \tag{4.1}$$

for each $1 \le i \le n$. In particular, if $\underline{u}$ is independent of $t$, then $\underline{u}$ is a parabolic C-subsolution if and only if

$$\lim_{\mu\to+\infty} f(\lambda[\underline{u}(z)] + \mu e_i) > \psi(z). \tag{4.2}$$
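For instance (our illustration, not part of the paper's text): in the unbounded case of §1 the criterion (4.2) holds automatically, since for any time-independent admissible $\underline{u}$ and any index $i$,

$$\lim_{\mu\to+\infty} f(\lambda[\underline{u}(z)] + \mu e_i) = +\infty > \psi(z);$$

in particular $\underline{u} = 0$ qualifies whenever $\lambda[\chi] \in \Gamma$, which is how Theorem 1 is invoked in Corollaries 2-4 below.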
Note that there is a similar lemma in the case of subsolutions for elliptic equations (see [50], Remark 8). Here the argument has to be more careful, not just because of the additional time parameter $t$, but also because the time interval $[0,\infty)$ is not bounded, invalidating certain compactness arguments.

Proof of Lemma 8. We show first that the condition (4.1) implies that $\underline{u}$ is a C-subsolution. We begin by showing that the condition (4.1) implies that there exist $\epsilon_0 > 0$ and $M > 0$ so that, for all $\epsilon \le \epsilon_0$, all $\nu > M$, all $(z,t)$, and all $1 \le i \le n$, we have

$$f(\lambda[\underline{u}(z,t)] - \epsilon I + \nu e_i) - \partial_t\underline{u}(z,t) > \frac{\tilde\delta}{4} + \psi(z). \tag{4.3}$$

This is because the condition (4.1) is equivalent to

$$f_\infty(\lambda'[\underline{u}(z,t)]) - \partial_t\underline{u}(z,t) > \tilde\delta + \psi(z). \tag{4.4}$$

Now the concavity of $f(\lambda)$ implies the concavity of its limit $f_\infty(\lambda')$, and hence the continuity of $f_\infty(\lambda')$. Furthermore, the set

$$\Lambda = \{\lambda[\underline{u}(z,t)],\ \forall (z,t) \in X\times[0,\infty)\}, \tag{4.5}$$

as well as any of its translates by $-\epsilon I$ for a fixed $\epsilon$ small enough, is compact in $\Gamma$. So are their projections on $\mathbb{R}^{n-1}$. By the uniform continuity of continuous functions on compact sets, it follows that there exists $\epsilon_0 > 0$ so that

$$f_\infty(\lambda'[\underline{u}(z,t)] - \epsilon I) - \partial_t\underline{u}(z,t) > \frac{\tilde\delta}{2} + \psi(z) \tag{4.6}$$

for all $(z,t)$ and all $\epsilon \le \epsilon_0$. But $f_\infty$ is the continuous limit of a sequence of monotone increasing continuous functions,

$$f_\infty(\lambda' - \epsilon I) = \lim_{\nu\to\infty} f(\lambda - \epsilon I + \nu e_i). \tag{4.7}$$

By Dini's theorem, the convergence is uniform over any compact subset. Thus there exists $M > 0$ large enough so that $\nu > M$ implies that

$$f(\lambda[\underline{u}(z,t)] - \epsilon I + \nu e_i) > f_\infty(\lambda'[\underline{u}(z,t)] - \epsilon I) - \frac{\tilde\delta}{4} \tag{4.8}$$

for all $(z,t)$ and all $\epsilon \le \epsilon_0$. The desired inequality (4.3) follows from (4.6) and (4.8).

Assume now that $\underline{u}$ is not a C-subsolution. Then there exist $\epsilon_m$, $\nu_m$, $\tau_m$, with $\epsilon_m \to 0$, $\nu_m \in -\epsilon_m I + \Gamma_n$, $\tau_m > -\epsilon_m$, and $|\tau_m| + |\nu_m| \to \infty$, so that

$$f(\lambda[\underline{u}(z_m,t_m)] + \nu_m) - \partial_t\underline{u}(z_m,t_m) + \tau_m = \psi(z_m). \tag{4.9}$$

Set $\nu_m = -\epsilon_m I + \mu_m$, with $\mu_m \in \Gamma_n$. Then we can write

$$\tau_m = -f(\lambda[\underline{u}(z_m,t_m)] - \epsilon_m I + \mu_m) + \partial_t\underline{u}(z_m,t_m) + \psi(z_m) \le -f(\lambda[\underline{u}(z_m,t_m)] - \epsilon_m I) + \partial_t\underline{u}(z_m,t_m) + \psi(z_m), \tag{4.10}$$

which is bounded by a constant. Thus we must have $|\nu_m|$ tending to $+\infty$, or equivalently $|\mu_m|$ tending to $+\infty$. By passing to a subsequence, we may assume that there is an index $i$ for which the $i$-th components $\mu^i_m$ of the vectors $\mu_m$ tend to $\infty$ as $m \to \infty$. By the monotonicity of $f$ in each component, we have

$$f(\lambda[\underline{u}(z_m,t_m)] - \epsilon_m I + \mu^i_m e_i) - \partial_t\underline{u}(z_m,t_m) \le f(\lambda[\underline{u}(z_m,t_m)] - \epsilon_m I + \mu_m) - \partial_t\underline{u}(z_m,t_m) = f(\lambda[\underline{u}(z_m,t_m)] + \nu_m) - \partial_t\underline{u}(z_m,t_m).$$

In view of (4.3), the left hand side is $\ge \frac{\tilde\delta}{4} + \psi(z_m)$ for $\mu^i_m$ large and $\epsilon_m$ small enough. On the other hand, the equation (4.9) implies that the right hand side is equal to $\psi(z_m) - \tau_m$. Thus we obtain

$$\frac{\tilde\delta}{4} + \psi(z_m) \le \psi(z_m) - \tau_m \le \psi(z_m) + \epsilon_m. \tag{4.11}$$
Hence $\frac{\tilde\delta}{4} \le \epsilon_m$, which is a contradiction, since $\epsilon_m \to 0$.

Finally, we show that if $\underline{u}$ is a subsolution, it must satisfy the condition (4.1). Assume otherwise. Then there exist an index $i$, a sequence $\delta_m \to 0$, and points $(z_m,t_m)$ so that

$$\lim_{\nu\to\infty} f(\lambda[\underline{u}(z_m,t_m)] + \nu e_i) - \partial_t\underline{u}(z_m,t_m) \le \delta_m + \psi(z_m). \tag{4.12}$$

Since $f$ is increasing in $\nu$, this implies that for any $\nu \in \mathbb{R}_+$ we have

$$f(\lambda[\underline{u}(z_m,t_m)] + \nu e_i) - \partial_t\underline{u}(z_m,t_m) \le \delta_m + \psi(z_m). \tag{4.13}$$

For each $\nu \in \mathbb{R}_+$, define $\tau_m$ by the equation

$$f(\lambda[\underline{u}(z_m,t_m)] + \nu e_i) - \partial_t\underline{u}(z_m,t_m) + \tau_m = \psi(z_m). \tag{4.14}$$

The previous inequality means that $\tau_m \ge -\delta_m$, and thus the pair $(\tau_m, \mu = \nu e_i)$ satisfies the equation (1.2). Since we can take $\nu \to +\infty$, this contradicts the defining property of C-subsolutions. The proof of Lemma 8 is complete.

4.2 Székelyhidi's theorem

Theorem 2 can be applied to provide a proof by parabolic methods of the following theorem, originally proved by Székelyhidi [50]:

Corollary 1. Let $(X,\alpha)$ be a compact Hermitian manifold, and let $f(\lambda)$ be a function satisfying the conditions (1-3) spelled out in §1 and in the bounded case. Let $\psi$ be a smooth function on $X$. If there exists an admissible function $u_0$ with $F(A[u_0]) \le \psi$, and if the equation $F(A[u]) = \psi$ admits a C-subsolution in the sense of [50], then the equation $F(A[u]) = \psi + c$ admits a smooth solution for some constant $c$.

Proof of Corollary 1. It follows from Lemma 8 that a C-subsolution $\underline{u}$ in the sense of [50] of the elliptic equation $F(A[u]) = \psi$ can be viewed as a time-independent parabolic C-subsolution of the equation (1.1). Consider this flow with initial value $u_0$. Then

$$\partial_t\underline{u} = 0 \ge F(A[u_0]) - \psi. \tag{4.15}$$

Thus condition (a) of Theorem 2 is satisfied, and the corollary follows.

4.3 The Kähler-Ricci flow and the Chern-Ricci flow

On Kähler manifolds $(X,\alpha)$ with $c_1(X) = 0$, the Kähler-Ricci flow is the flow $\dot g_{\bar kj} = -R_{\bar kj}$. For initial data in the Kähler class $[\alpha]$, the evolving metric can be expressed as $g_{\bar kj} = \alpha_{\bar kj} + \partial_j\partial_{\bar k}\phi$, and the flow is equivalent to the following Monge-Ampère flow,

$$\partial_t\phi = \log\frac{(\alpha + i\partial\bar\partial\phi)^n}{\alpha^n} - \psi(z) \tag{4.16}$$

for a suitable function $\psi(z)$ satisfying the compatibility condition $\int_X e^\psi\alpha^n = \int_X \alpha^n$. The convergence of this flow was proved by Cao [2], thus giving a parabolic proof of Yau's solution of the Calabi conjecture [65]. We can readily derive Cao's result from Theorem 1:

Corollary 2. For any initial data, the normalization $\tilde\phi$ of the solution of the flow (4.16) converges in $C^\infty$ to a solution of the equation $(\alpha + i\partial\bar\partial\phi)^n = e^\psi\alpha^n$.

Proof of Corollary 2. The Monge-Ampère flow (4.16) corresponds to the equation (1.1) with $\chi = \alpha$, $f(\lambda) = \log\prod_{j=1}^n\lambda_j$, and $\Gamma$ the full octant $\Gamma_n$. It is straightforward that $f$ satisfies the conditions (1-3) of §1. In particular, $f$ is in the unbounded case, and Theorem 1 applies, giving the convergence of the normalizations $\tilde\phi(\cdot,t)$ to a smooth solution of the equation $(\alpha + i\partial\bar\partial\phi)^n = e^{\psi+c}\alpha^n$ for some constant $c$.
Integrating both sides of this equation and using the compatibility condition on \u03c8, we \ufb01nd that c = 0. The corollary is proved. The generalization of the \ufb02ow (4.16) to the more general set-up of a compact Hermitian manifold (X, \u03b1) was introduced by Gill [17]. It is known as the Chern-Ricci \ufb02ow, with the Chern-Ricci tensor RicC(\u03c9) = \u2212i\u2202\u00af \u2202log \u03c9n playing the role of the Ricci tensor in the K\u00a8 ahler-Ricci \ufb02ow (we refer to [54, 55, 56, 59] and references therein). Gill proved the convergence of this \ufb02ow, thus providing an alternative proof of the generalization of Yau\u2019s theorem proved earlier by Tosatti and Weinkove [52]. Generalizations of Yau\u2019s theorem had attracted a lot of attention, and many partial results had been obtained before, including those of Cherrier [3], Guan-Li [20], and others. Theorem 1 gives immediately another proof of Gill\u2019s theorem: Corollary 3 For any initial data, the normalizations \u02dc \u03d5 of the Chern-Ricci \ufb02ow converge in C\u221eto a solution of the equation (\u03b1 + i\u2202\u00af \u2202\u03d5)n = e\u03c8+c\u03b1n, for some constant c. We note that there is a rich literature on Monge-Amp` ere equations, including considerable progress using pluripotential theory. We refer to [26, 11, 8, 23, 24, 41, 58, 59, 30, 31] and references therein. 4.4 Hessian \ufb02ows Hessian equations, where the Laplacian or the Monge-Amp` ere determinant of the unknown function u are replaced by the k-th symmetric polynomial of the eigenvalues of the Hessian 25 \fof u, were introduced by Ca\ufb00arelli, Nirenberg, and Spruck [5]. More general right hand sides and K\u00a8 ahler versions were considered respectively by Chou and Wang [6] and HouMa-Wu [25], who introduced in the process some of the key techniques for C2 estimates that we discussed in \u00a72. A general existence result on compact Hermitian manifolds was recently obtained by Dinew and Kolodziej [10], Sun [47], and Sz\u00b4 ekelyhidi [50]. See also Zhang [66]. Again, we can derive this theorem as a corollary of Theorem 1: Corollary 4 Let (X, \u03b1) be a compact Hermitian n-dimensional manifold, and let \u03c7 be a positive real (1, 1)-form which is k-positive for a given k, 1 \u2264k \u2264n. Consider the following parabolic \ufb02ow for the unknown function u, \u2202tu = log (\u03c7 + i\u2202\u00af \u2202u)k \u2227\u03b1n\u2212k \u03b1n \u2212\u03c8(z). (4.17) Then for any admissible initial data u0, the \ufb02ow admits a solution u(z, t) for all time, and its normalization \u02dc u(z, t) converge in C\u221eto a function u\u221e\u2208C\u221e(X) so that \u03c9 = \u03c7+i\u2202\u00af \u2202u\u221e satis\ufb01es the following k-Hessian equation, \u03c9k \u2227\u03b1n\u2212k = e\u03c8+c\u03b1n. (4.18) Proof of Corollary 4. This is an equation of the form (1.1), with F = f(\u03bb) = log \u03c3k(\u03bb), de\ufb01ned on the cone \u0393k = {\u03bb; \u03c3j(\u03bb) > 0, j = 1, \u00b7 \u00b7 \u00b7, k}, (4.19) where \u0010n k \u0011 \u03c3k is the k-th symmetric polynomial in the components \u03bbj, 1 \u2264j \u2264n. In our setting, \u03c3k(\u03bb[u]) = (\u03c7 + i\u2202\u00af \u2202u)k \u2227\u03b1n\u2212k \u03b1n . (4.20) It follows from [43, Corollary 2.4] that g = \u03c31/k k is concave and gi = \u2202g \u2202\u03bbi > 0 on \u0393k, hence f = log g satis\ufb01es the conditions (1-3) mentioned in \u00a71. 
The function u = 0 is a subsolution of (4.17) and f is in the unbounded case since for any \u00b5 = (\u00b51, \u00b7 \u00b7 \u00b7 , \u00b5n) \u2208\u0393k, and any 1 \u2264i \u2264n, lims\u2192\u221elog \u03c3k(\u00b51, \u00b7 \u00b7 \u00b7, \u00b5i + s, \u00b7 \u00b7 \u00b7 , \u00b5n) = \u221e. (4.21) The desired statement follows then from Theorem 1. 4.5 The J \ufb02ow and quotient Hessian \ufb02ows The J-\ufb02ow on K\u00a8 ahler manifolds was introduced independently by Donaldson [9] and Chen [4]. The case n = 2 was solved by Weinkove [63, 64], and the case of general dimension by 26 \fSong and Weinkove [42], who identi\ufb01ed a necessary and su\ufb03cient condition for the longtime existence and convergence of the \ufb02ow as the existence of a K\u00a8 ahler form \u03c7 satisfying nc\u03c7n\u22121 \u2212(n \u22121)\u03c7n\u22122 \u2227\u03c9 > 0 (4.22) in the sense of positivity of (n \u22121, n \u22121)-forms. The constant c is actually determined by cohomology. Their work was subsequently extended to inverse Hessian \ufb02ows on K\u00a8 ahler manifolds by Fang, Lai, and Ma [12], and to inverse Hessian \ufb02ows on Hermitian manifolds by Sun [44]. These \ufb02ows are all special cases of quotient Hessian \ufb02ows on Hermitian manifolds. Their stationary points are given by the corresponding quotient Hessian equations. Our results can be applied to prove the following generalization to quotient Hessian \ufb02ows of the results of [63, 64, 12], as well as an alternative proof of a result of Sz\u00b4 ekelyhidi [50, Proposition 22] on the Hessian quotient equations. The \ufb02ow (4.24) below has also been studied recently by Sun [46] where he obtained a uniform C0 estimate using Moser iteration. Our proof should be viewed as di\ufb00erent from all of these, since its C0 estimate uses neither Moser iteration nor strict C2 estimates Tr\u03b1\u03c7u \u2264C eu\u2212infXu. Corollary 5 Assume that (X, \u03b1) is a compact K\u00a8 ahler n-manifold, and \ufb01x 1 \u2264\u2113< k \u2264n. Fix a closed (1, 1)-form \u03c7 which is k-positive, and assume that there exists a function u so that the form \u03c7\u2032 = \u03c7 + i\u2202\u00af \u2202u is closed k-positive and satis\ufb01es kc (\u03c7\u2032)k\u22121 \u2227\u03b1n\u2212k \u2212\u2113(\u03c7\u2032)\u2113\u22121 \u2227\u03b1n\u2212\u2113> 0 (4.23) in the sense of the positivity of (n \u22121, n \u22121)-forms. Here c = [\u03c7\u2113]\u222a[\u03b1n\u2212\u2113] [\u03c7k]\u222a[\u03c7n\u2212k]. Then for any admissible initial data u0 \u2208C\u221e(X), the \ufb02ow \u2202tu = c \u2212\u03c7\u2113 u \u2227\u03b1n\u2212\u2113 \u03c7k u \u2227\u03b1n\u2212k (4.24) admits a solution u for all time, and it converges to a smooth function u\u221e. The form \u03c9 = \u03c7 + i\u2202\u00af \u2202u\u221eis k-positive and satis\ufb01es the equation \u03c9\u2113\u2227\u03b1n\u2212\u2113= c \u03c9k \u2227\u03b1n\u2212k. (4.25) Proof of Corollary 5. The \ufb02ow (4.24) is of the form (1.1), with f(\u03bb) = \u2212\u03c3\u2113(\u03bb) \u03c3k(\u03bb), de\ufb01ned on the cone \u0393k = {\u03bb; \u03c3j(\u03bb) > 0, j = 1, \u00b7 \u00b7 \u00b7, k}. (4.26) By the Maclaurin\u2019s inequality (cf. [43]), we have \u03c31/k k \u2264\u03c31/\u2113 \u2113 on \u0393k, hence f(\u03bb) \u2192\u2212\u221eas \u03bb \u2192\u2202\u0393k. It follows from [43, Theorem 2.16] that the function g = (\u03c3k/\u03c3\u2113) 1 (k\u2212\u2113) satis\ufb01es 27 \fgi = \u2202g \u2202\u03bbi > 0, \u2200i = 1, . . . , n and g is concave on \u0393k. 
Therefore f = \u2212g\u2212(k\u2212\u2113) satis\ufb01es the condition (1-3) spelled out in \u00a71. Moreover, f is in the bounded case with f\u221e(\u03bb\u2032) = \u2212\u2113\u03c3\u2113\u22121(\u03bb\u2032) k\u03c3k\u22121(\u03bb\u2032 where \u03bb\u2032 \u2208\u0393\u221e= \u0393k\u22121. We can assume that u0 = 0 by replacing \u03c7 (resp. u and u) by \u03c7 + i\u2202\u00af \u2202u0 (resp. u \u2212u0 and u \u2212u0). The inequality (4.23) infers that u is a subsolution of the equation (4.24). Indeed, for any (z, t) \u2208X \u00d7 [0, \u221e), set \u00b5 = \u03bb(B), Bij = \u03b1j\u00af k(\u03c7\u00af kj + u\u00af kj)(z, t). Since u is independent of t, it follows from Lemma 8 and the symmetry of f that we just need to show that for any z \u2208X if \u00b5\u2032 = (\u00b51, \u00b7 \u00b7 \u00b7, \u00b5n\u22121) then lims\u2192\u221ef(\u00b5\u2032, \u00b5n + s) > \u2212c. (4.27) This means f\u221e(\u00b5\u2032) = \u2212\u2113\u03c3\u2113\u22121(\u00b5\u2032) k\u03c3k\u22121(\u00b5\u2032) > \u2212c. (4.28) As in [50], we restrict to the tangent space of X spanned by by the eigenvalues corresponding to \u00b5\u2032. Then on this subspace \u03c3j(\u00b5\u2032) = \u03c7j\u22121 \u2227\u03b1n\u2212j \u03b1n\u22121 (4.29) for all j. Thus the preceding inequality is equivalent to kc(\u03c7\u2032)k \u2227\u03b1n\u2212k \u2212\u2113(\u03c7\u2032)\u2113\u22121 \u2227\u03b1n\u2212\u2113> 0. (4.30) By a priori estimates in Section 2, the solution exists for all times. We now use the second statement in Lemma 7 to prove the convergence. It su\ufb03ces to check that u is uniformly bounded in X \u00d7 [0, +\u221e) and for all t > 0, there exists y such that \u2202tu(y, t) = 0. The second condition is straightforward since Z X \u2202tu\u03c7k u \u2227\u03b1n\u2212k = 0. For the uniform bound we make use of the following lemma Lemma 9 Let \u03c6 \u2208C\u221e(X) function and {\u03d5s}s\u2208[0,1] be a path with \u03d5(0) = 0 and \u03d5(1) = \u03c6. Then we have Z 1 0 Z X \u2202\u03d5 \u2202s \u03c7k \u03d5 \u2227\u03b1n\u2212kds = 1 k + 1 k X j=0 Z X \u03c6\u03c7j \u03c6 \u2227\u03c7k\u2212j \u2227\u03b1n\u2212k, (4.31) so the left hand side is independent of \u03d5. Therefore we can de\ufb01ne the following functional Ik(\u03c6) = Z 1 0 Z X \u2202\u03d5 \u2202s \u03c7k \u03d5 \u2227\u03b1n\u2212kds. (4.32) 28 \fWe remark that when k = n and \u03c7 is K\u00a8 ahler, this functional is well-known (see for instance [64]). We discuss here the general case. Proof of Lemma 9. Observe that Z 1 0 Z X \u2202\u03d5 \u2202s \u03c7k \u03d5 \u2227\u03b1n\u2212kds = k X j=1 k j ! Z 1 0 Z X \u2202\u03d5 \u2202s (i\u2202\u00af \u2202\u03d5)j \u2227\u03c7k\u2212j \u2227\u03b1n\u2212kds. (4.33) For any j = 0, . . . , k we have Z 1 0 Z X \u2202\u03d5 \u2202s (i\u2202\u00af \u2202\u03d5)j \u2227\u03c7k\u2212j \u2227\u03b1n\u2212kds = Z 1 0 d ds \u0012Z X \u03d5(i\u2202\u00af \u2202\u03d5)j \u2227\u03c7k\u2212j \u2227\u03b1n\u2212k \u0013 ds \u2212 Z 1 0 Z X \u03d5 \u2202 \u2202s \u0010 (i\u2202\u00af \u2202\u03d5)j \u2227\u03c7k\u2212j \u2227\u03b1n\u2212k\u0011 ds = Z X \u03c6(i\u2202\u00af \u2202\u03c6)j \u2227\u03c7k\u2212j \u2227\u03b1n\u2212k (4.34) \u2212 Z 1 0 Z X \u03d5 \u2202 \u2202s \u0010 (i\u2202\u00af \u2202\u03d5)j \u2227\u03c7k\u2212j \u2227\u03b1n\u2212k\u0011 ds We also have Z 1 0 Z X \u03d5 \u2202 \u2202s \u0010 (i\u2202\u00af \u2202\u03d5)j \u2227\u03c7k\u2212j \u2227\u03b1n\u2212k\u0011 ds = Z 1 0 Z X j\u03d5 i\u2202\u00af \u2202\u2202\u03d5 \u2202s ! 
\u2227(i\u2202\u00af \u2202\u03d5)j\u22121 \u2227\u03c7k\u2212j \u2227\u03b1n\u2212kds = Z 1 0 Z X j \u2202\u03d5 \u2202s (i\u2202\u00af \u2202\u03d5)j \u2227\u03c7k\u2212j \u2227\u03b1n\u2212kds, (4.35) here we used in the second identity the integration by parts and the fact that \u03c7 and \u03b1 are closed. Combining (4.34) and (4.35) yields Z 1 0 Z X \u2202\u03d5 \u2202s (i\u2202\u00af \u2202\u03d5)j \u2227\u03c7k\u2212j \u2227\u03b1n\u2212kds = 1 j + 1 Z X \u03c6(i\u2202\u00af \u2202\u03c6)j \u2227\u03c7k\u2212j \u2227\u03b1n\u2212k. (4.36) Therefore (4.33) implies that Z 1 0 Z X \u2202\u03d5 \u2202s \u03c7k \u03d5 \u2227\u03b1n\u2212kds = k X j=1 k j ! 1 j + 1 Z X \u03c6(i\u2202\u00af \u2202\u03c6)j \u2227\u03c7k\u2212j \u2227\u03b1n\u2212k = k X j=1 k j ! 1 j + 1 Z X \u03c6(\u03c7\u03c6 \u2212\u03c7)j \u2227\u03c7k\u2212j \u2227\u03b1n\u2212k = k X j=1 k j ! 1 j + 1 Z X j X p=0 j p ! (\u22121)j\u2212p\u03c6\u03c7p \u03c6 \u2227\u03c7k\u2212p \u2227\u03b1n\u2212k (4.37) = k X p=0 \uf8eb \uf8ed k X j=p k j ! 1 j + 1 j p ! (\u22121)j\u2212p \uf8f6 \uf8f8 Z X \u03c6\u03c7p \u03c6 \u2227\u03c7k\u2212p \u2227\u03b1n\u2212k. 29 \fBy changing m = j \u2212p, we get k X j=p k j ! 1 j + 1 j p ! (\u22121)j\u2212p = k p ! k\u2212p X m=0 (\u22121)m m + p + 1 k \u2212p m ! . (4.38) The right hand side can be computed by k p ! k\u2212p X m=0 (\u22121)m m + p + 1 k \u2212p m ! = k p ! Z 1 0 (1 \u2212x)k\u2212pxpdx = k p ! p! Z 1 0 1 (k \u2212p + 1) . . . k(1 \u2212x)kdx = 1 k + 1, where we used the integration by parts p times in the second identity. Combining this with (4.37) and (4.38) we get the desired identity (4.31). Q.E.D. We now have for any t\u2217> 0, along the \ufb02ow Ik(u(t\u2217)) = Z t\u2217 0 Z X \u2202u \u2202t \u03c7k u \u2227\u03b1n\u2212k = Z t\u2217 0 c \u2212\u03c7\u2113 u \u2227\u03b1n\u2212\u2113 \u03c7k u \u2227\u03b1n\u2212k ! \u03c7k u \u2227\u03b1n\u2212k = 0. As in Weinkove [63, 64], there exist C1, C2 > 0 such that for all t \u2208[0, \u221e), 0 \u2264sup X u(., t) \u2264\u2212C1 inf X u(., t) + C2. (4.39) Indeed, in view of (4.31), Ik(u) = 0 along the \ufb02ow implies that k X j=0 Z X u\u03c7j u \u2227\u03c7k\u2212j \u2227\u03b1n\u2212k = 0, (4.40) hence supX u \u22650 and infX u \u22640. For the right inequality in (4.39), we remark that there exists a positive constant B such that \u03b1n \u2264B\u03c7k \u2227\u03b1n\u2212k. Therefore combining with (4.40) gives Z X u\u03b1n = Z X(u \u2212inf X u)\u03b1n + Z X inf X u \u03b1n \u2264 B Z X(u \u2212inf X u)\u03c7k \u2227\u03b1n\u2212k + inf X u Z X \u03b1n = \u2212B k X j=1 Z X u\u03c7j u \u2227\u03c7k\u2212j \u2227\u03b1n\u2212k + inf X u \u0012Z X \u03b1n \u2212B Z X \u03c7k \u2227\u03b1n\u2212k \u0013 = \u2212B k X j=1 Z X \u0012 u \u2212inf X u \u0013 \u03c7j u \u2227\u03c7k\u2212j \u2227\u03b1n\u2212k + inf X u \u0012Z X \u03b1n \u2212B(k + 1) Z X \u03c7k \u2227\u03b1n\u2212k \u0013 \u2264 inf X u \u0012Z X \u03b1n \u2212B(k + 1) Z X \u03c7k \u2227\u03b1n\u2212k \u0013 = \u2212C1 inf X u. 30 \fSince \u2206\u03b1u \u2265\u2212tr\u03b1\u03c7 \u2265\u2212A, using the fact that the Green\u2019s function G(., .) of \u03b1 is bounded from below we infer that u(x, t) = Z X u\u03b1n \u2212 Z X \u2206\u03b1u(y, t)G(x, y)\u03b1n(y) \u2264 \u2212C1 inf X u + C2. Hence we obtain the Harnack inequality, supX u \u2264\u2212C1 infX u + C2. Since we can normalize u by supX u = 0, the left inequality in (4.40) implies sup X (u(\u00b7, t) \u2212u(\u00b7, t)) \u22650. 
It follows from Lemma 1 that u \u2265u \u2212C3 for some constant C3. This give a lower bound for u since u is bounded. The Harnack inequality in (4.39) implies then a uniform bound for u. Now the second statement in Lemma 7 implies the convergence of u. Q.E.D. A natural generalization of the Hessian quotient \ufb02ows on Hermitian manifolds is the following \ufb02ow \u2202tu = log \u03c7k u \u2227\u03b1n\u2212k \u03c7\u2113 u \u2227\u03b1n\u2212\u2113\u2212\u03c8 (4.41) where \u03c8 \u2208C\u221e(X), the admissible cone is \u0393k, 1 \u2264\u2113< k \u2264n, and \u03c7u = \u03c7 + i\u2202\u00af \u2202u. This \ufb02ow was introduced by Sun [44] when k = n. We can apply Theorem 2 to obtain the following result, which is analogous to one of the main results in Sun [44], and analogous to the results of Song-Weinkove [42] and Fang-Lai-Ma [12] for k = n: Corollary 6 Let (X, \u03b1) be a compact Hermitian manifold and \u03c7 be a (1, 1)-form which is k-positive. Assume that there exists a form \u03c7\u2032 = \u03c7+i\u2202\u00af \u2202u which is k-positive, and satis\ufb01es k (\u03c7\u2032)k\u22121 \u2227\u03b1n\u2212k \u2212e\u03c8 \u2113(\u03c7\u2032)\u2113\u22121 \u2227\u03b1n\u2212\u2113> 0 (4.42) in the sense of the positivity of (n \u22121, n \u22121)-forms. Assume further that there exists an admissible u0 \u2208C\u221e(X) satisfying e\u03c8 \u2265\u03c7k u0 \u2227\u03b1n\u2212k \u03c7\u2113 u0 \u2227\u03b1n\u2212\u2113 (4.43) Then the \ufb02ow (4.41) admits a smooth solution for all time with initial data u0. Furthermore, there exists a unique constant c so that the normalization \u02dc u = u \u2212 1 [\u03b1n] Z X u\u03b1n (4.44) converges in C\u221eto a function u\u221ewith \u03c9\u221e= \u03c7 + i\u2202\u00af \u2202u\u221esatisfying \u03c9k \u221e\u2227\u03b1n\u2212k = e\u03c8+c\u03c9\u2113 \u221e\u2227\u03b1n\u2212\u2113. (4.45) 31 \fProof of Corollary 6. This equation is of the form (1.1), with F(A) = f(\u03bb) = log \u03c3k(\u03bb) \u03c3\u2113(\u03bb) , with \u03bb = \u03bb(A), (4.46) de\ufb01ned on \u0393k. As in the proof of Corollary 5 we also have that f satis\ufb01es the conditions (1-3) mentioned in \u00a71. Moreover, f is in the bounded case with f\u221e(\u03bb\u2032) = log k\u03c3k\u22121(\u03bb\u2032) \u2113\u03c3\u2113\u22121(\u03bb\u2032) where \u03bb\u2032 \u2208\u0393\u221e= \u0393k\u22121. It su\ufb03ces to verify that u = 0 is a subsolution of the equation (4.41). For any (z, t) \u2208 X \u00d7[0, \u221e), set \u00b5 = \u03bb(B), Bij = \u03b1j\u00af k\u03c7\u00af kj(z, t). Since u is independent of t, Lemma 8 implies that we just need to show that for any z \u2208X if \u00b5\u2032 = (\u00b51, \u00b7 \u00b7 \u00b7, \u00b5n\u22121), lims\u2192\u221ef(\u00b5\u2032, s) > \u03c8(z). (4.47) This means f\u221e(\u00b5\u2032) = log k\u03c3k\u22121(\u00b5\u2032) \u2113\u03c3\u2113\u22121(\u00b5\u2032) > \u03c8(z), (4.48) where we restrict to the tangent space of X spanned by by the eigenvalues corresponding to \u00b5\u2032. As the argument in the proof of Corollary 5, this inequality is equivalent to k\u03c7k \u2227\u03b1n\u2212k \u2212\u2113e\u03c8\u03c7\u2113\u22121 \u2227\u03b1n\u2212\u2113> 0. (4.49) Moreover, the condition (4.43) is equivalent to 0 = u \u2265F(A[u0]) \u2212\u03c8. (4.50) We can now apply Theorem 2 to complete the proof. Q.E.D In the case of (X, \u03b1) compact K\u00a8 ahler, the condition on \u03c8 can be simpli\ufb01ed, and we obtain an alternative proof to the main result of Sun in [45]. 
We recently learnt that Sun [49] also provided independently another proof of [45] using the same \ufb02ow as below: Corollary 7 Let (X, \u03b1) be K\u00a8 ahler and \u03c7 be a k-positive closed (1, 1)-form. Assume that there exists a closed form \u03c7\u2032 = \u03c7 + i\u2202\u00af \u2202u which is k-positive, and satis\ufb01es k (\u03c7\u2032)k\u22121 \u2227\u03b1n\u2212k \u2212e\u03c8 \u2113(\u03c7\u2032)\u2113\u22121 \u2227\u03b1n\u2212\u2113> 0 (4.51) in the sense of the positivity of (n \u22121, n \u22121)-forms. Assume further that e\u03c8 \u2265ck,\u2113= [\u03c7k] \u222a[\u03c7n\u2212k] [\u03c7\u2113] \u222a[\u03b1n\u2212\u2113] . (4.52) 32 \fThen for any admissible initial data u0 \u2208C\u221e(X), the \ufb02ow (4.41) admits a smooth solution for all time. Furthermore, there exists a unique constant c so that the normalization \u02dc u = u \u2212 1 [\u03b1n] Z X u\u03b1n (4.53) converges in C\u221eto a function u\u221ewith \u03c9\u221e= \u03c7 + i\u2202\u00af \u2202u\u221esatisfying \u03c9k \u221e\u2227\u03b1n\u2212k = e\u03c8+c\u03c9\u2113 \u221e\u2227\u03b1n\u2212\u2113. (4.54) Proof of Corollary 6. By the same argument above, the admissible function u \u2208C\u221e(X) with supX u = 0 satisfying (4.51) is a C-subsolution. As explained in the proof of Corollary 5, we can assume that u0 = 0. We \ufb01rst observe that along the \ufb02ow, the functional I\u2113de\ufb01ned in Lemma 9 is decreasing. Indeed, using Jensen\u2019s inequality and then (4.52) we have d dtI\u2113(u) = Z X \u2202u \u2202t \u03c7\u2113 u \u2227\u03b1n\u2212\u2113= Z X log \u03c7k u \u2227\u03b1n\u2212k \u03c7\u2113 u \u2227\u03b1n\u2212\u2113\u2212\u03c8 ! \u03c7\u2113 u \u2227\u03b1n\u2212\u2113 \u2264 log ck,\u2113 Z X \u03c7\u2113 u \u2227\u03b1n\u2212\u2113\u2212 Z X \u03c8\u03c7\u2113 u \u2227\u03b1n\u2212\u2113\u22640. (4.55) Set \u02c6 u := u \u2212h(t), h(t) = I\u2113(u) R X \u03c7\u2113\u2227\u03b1n\u2212\u2113. (4.56) For any t\u2217\u2208[0, \u221e) we have I\u2113(\u02c6 u(t\u2217)) = Z t\u2217 0 Z X \u2202\u02c6 u \u2202t \u03c7\u2113 u \u2227\u03b1n\u2212\u2113= Z t\u2217 0 Z X \u2202u \u2202t \u2212 1 R X \u03c7\u2113\u2227\u03b1n\u2212\u2113 d dtI\u2113(u) ! \u03c7\u2113 u \u2227\u03b1n\u2212\u2113= 0. By the same argument in Corollary 5, we deduce that there exist C1, C2 > 0 such that 0 \u2264sup X \u02c6 u(., t) \u2264\u2212C1 inf X \u02c6 u(., t) + C2, (4.57) for all t \u2208[0, \u221e). By our choice, supX u = 0, and (4.57) implies that sup X (u \u2212h(t) \u2212u) = sup X (\u02c6 u \u2212u) \u22650, \u2200t \u22650. Since I\u2113(u) is decreasing along the \ufb02ow, we also have h\u2032(t) \u22640. Theorem 2 now gives us the required result. Q.E.D. Similarly, we can consider the \ufb02ow (1.1) with \u2202tu = \u2212 \u03c7\u2113 u \u2227\u03b1n\u2212\u2113 \u03c7k u \u2227\u03b1n\u2212k ! 1 k\u2212\u2113 + \u03c8(z), u(z, 0) = 0, (4.58) where 1 \u2264\u2113< k \u2264n. When (X, \u03b1) is K\u00a8 ahler, \u03c8 is constant and k = n, this is the inverse Hessian \ufb02ow studied by Fang-Lai-Ma [12]. We can apply Theorem 2 to obtain another corollary which is analogous to the main result of Fang-Lai-Ma [12]. 33 \fCorollary 8 Let (X, \u03b1), and \u03c7 as in Corollary 6. 
Assume further that \u03c8 \u2208C\u221e(X, R+) and there exists a smooth function u with \u03c7\u2032 = \u03c7 + i\u2202\u00af \u2202u a k-positive (1, 1)-form which satis\ufb01es k\u03c8k\u2212\u2113(\u03c7\u2032)k\u22121 \u2227\u03b1n\u2212k \u2212\u2113(\u03c7\u2032)n\u2212\u2113\u22121 \u2227\u03b1n\u2212\u2113> 0 (4.59) in the sense of positivity of (n \u22121, n \u22121) forms, and \u03c8k\u2212\u2113\u2264\u03c7\u2113\u2227\u03b1n\u2212\u2113 \u03c7k \u2227\u03b1n\u2212k . (4.60) Then the \ufb02ow (4.58) exists for all time, and there is a unique constant c so that the normalized function \u02dc u converges to a function u\u221ewith \u03c9 = \u03c7 + i\u2202\u00af \u2202u\u221ea k-positive form satisfying the equation \u03c9n\u2212\u2113\u2227\u03b1n\u2212\u2113= (\u03c8 + c)k\u2212\u2113\u03c9k \u2227\u03b1n\u2212k. (4.61) In particular, if (X, \u03b1) is K\u00a8 ahler, we assume further that \u03c7 is closed, then the condition (4.60) can be simpli\ufb01ed as \u03c8k\u2212\u2113\u2264c\u2113,k = [\u03c7\u2113] \u222a[\u03b1n\u2212\u2113] [\u03c7k] \u222a[\u03b1n\u2212k]. (4.62) Proof of Corollary 8. This equation is of the form (1.1), with F(A) = f(\u03bb) = \u2212 \u03c3\u2113(\u03bb) \u03c3k(\u03bb) ! 1 k\u2212\u2113 , with \u03bb = \u03bb(A), (4.63) de\ufb01ned on \u0393k. As in Corollary 5, it follows from the Maclaurin\u2019s inequality, the monotonicity and concavity of g = (\u03c3k/\u03c3\u2113) 1 k\u2212\u2113(cf. [43]) that f satis\ufb01es the conditions (1-3) spelled out in \u00a71. Moreover, f is in the bounded case with f\u221e(\u03bb\u2032) = \u2212 \u2113\u03c3\u2113\u22121(\u03bb\u2032) k\u03c3k\u22121(\u03bb\u2032) ! 1 k\u2212\u2113 where \u03bb\u2032 \u2208\u0393\u221e= \u0393k\u22121. In addition, as the same argument in previous corollaries, the condition (4.59) is equivalent to that u = 0 is a C-subsolution for (4.58). Moreover, the condition (4.60) implies that 0 = u \u2265F(A[0]) + \u03c8. (4.64) We can now apply Theorem 2 to get the \ufb01rst result. Next, assume that (X, \u03b1) is K\u00a8 ahler and \u03c7 is closed. As in Corollary 7 and [12], the functional I\u2113(see Lemma 9) is decreasing along the \ufb02ow. Indeed, using (4.62), d dtI\u2113(u) = Z X \u2202u \u2202t \u03c7\u2113 u \u2227\u03b1n\u2212\u2113= Z X \uf8eb \uf8ed\u2212 \u03c3\u2113(\u03bb) \u03c3k(\u03bb) ! 1 k\u2212\u2113 + \u03c8 \uf8f6 \uf8f8\u03c7\u2113 u \u2227\u03b1n\u2212\u2113 \u2264 \u2212 Z X \u03c3\u2113(\u03bb) \u03c3k(\u03bb) ! 1 k\u2212\u2113 \u03c7\u2113 u \u2227\u03b1n\u2212\u2113+ c 1 k\u2212\u2113 \u2113,k Z X \u03c7\u2113 u \u2227\u03b1n\u2212\u2113. (4.65) 34 \fUsing the H\u00a8 older inequality, we get Z X \u03c7\u2113 u \u2227\u03b1n\u2212\u2113 = Z X \u03c3\u2113\u03b1n = Z X \u03c3\u2113 \u03c31/(k\u2212\u2113+1) k ! \u03c3 1 k\u2212\u2113+1 k \u03b1n \u2264 \uf8ee \uf8f0 Z X \u03c3\u2113 \u03c31/(k\u2212\u2113+1) k ! k\u2212\u2113+1 k\u2212\u2113 \u03b1n \uf8f9 \uf8fb k\u2212\u2113 k\u2212\u2113+1 \u0012Z X \u03c3k \u03b1n \u0013 1 k\u2212\u2113+1 = \uf8ee \uf8f0 Z X \u03c3\u2113(\u03bb) \u03c3k(\u03bb) ! 1 k\u2212\u2113 \u03c7\u2113 u \u2227\u03b1n\u2212\u2113 \uf8f9 \uf8fb k\u2212\u2113 k\u2212\u2113+1 \u0012Z X \u03c7k u \u2227\u03b1n\u2212k \u0013 1 k\u2212\u2113+1 = \uf8ee \uf8f0 Z X \u03c3\u2113(\u03bb) \u03c3k(\u03bb) ! 1 k\u2212\u2113 \u03c7\u2113 u \u2227\u03b1n\u2212\u2113 \uf8f9 \uf8fb k\u2212\u2113 k\u2212\u2113+1 c \u22121 k\u2212\u2113+1 \u2113,k \u0012Z X \u03c7\u2113 u \u2227\u03b1n\u2212\u2113 \u0013 1 k\u2212\u2113+1 . 
This implies that c 1 k\u2212\u2113 \u2113,k Z X \u03c7\u2113 u \u2227\u03b1n\u2212\u2113\u2264 Z X \u03c3\u2113(\u03bb) \u03c3k(\u03bb) ! 1 k\u2212\u2113 \u03c7\u2113 u \u2227\u03b1n\u2212\u2113, hence dI\u2113(u)/dt \u22640. For the rest of the proof, we follow the argument in Corollary 7, starting from the fact that I\u2113(\u02c6 u) = 0 where \u02c6 u = u \u2212 I\u2113(u) R X \u03c7\u2113\u2227\u03b1n\u2212\u2113. Then we obtain the Harnack inequality 0 \u2264sup X \u02c6 u(., t) \u2264\u2212C1 inf X \u02c6 u(., t) + C2, (4.66) for some constants C1, C2 > 0. Finally, Theorem 2 gives us the last claim. Q.E.D. 4.6 Flows with mixed Hessians \u03c3k Our method can be applied to solve other equations containing many terms of \u03c3k. We illustrate this with the equation \u2113 X j=1 cj\u03c7j u \u2227\u03b1n\u2212j = c\u03c7k u \u2227\u03b1n\u2212k (4.67) on a K\u00a8 ahler manifold (X, \u03b1), where 1 \u2264\u2113< k \u2264n, cj \u22650 are given non-negative constants, and c \u22650 is determined by cj by integrating the equation over X. 35 \fWhen k = n, It was conjectured by Fang-Lai-Ma [12] that this equation is solvable assuming that nc\u03c7\u2032n\u22121 \u2212 n\u22121 X k=1 kck\u03c7\u2032k\u22121 \u2227\u03b1n\u2212k > 0, for some closed k-positive form \u03c7\u2032 = \u03c7 + i\u2202\u00af \u2202v. This conjecture was solved recently by Collins-Sz\u00b4 ekelyhidi [7] using the continuity method. An alternative proof by \ufb02ow methods is in Sun [48]. Theorem 3 stated earlier in the Introduction is an existence result for more general equations (4.67) using the \ufb02ow (1.12) In particular, it gives a parabolic proof of a generalization of the conjecture due to Fang-Lai-Ma [12, Conjecture 5.1]. We also remark that the \ufb02ow (1.12) was mentioned in Sun [44], but no result given there, to the best of our understanding. Proof of Theorem 3. This equation is of the form (1.1), with F(A) = f(\u03bb) = \u2212 P\u2113 j=1 cj\u03c3j(\u03bb) \u03c3k(\u03bb) + c, de\ufb01ned on the cone \u0393k. As in the proof of Corollary 5, for any j = 1, . . . , \u2113, the function \u2212\u03c3j/\u03c3k on \u0393k satis\ufb01es the conditions (1-3) in \u00a71, so does f. We also have that f is in the bounded case with f\u221e(\u03bb\u2032) = \u2212 P\u2113 j=1 jcj\u03c3j\u22121(\u03bb) k\u03c3k\u22121(\u03bb) where \u03bb\u2032 \u2208\u0393\u221e= \u0393k. Suppose \u03c7\u2032 = \u03c7 + i\u2202\u00af \u2202u with supX u = 0 satis\ufb01es kc(\u03c7\u2032)k\u22121 \u2227\u03b1n\u2212k \u2212 \u2113 X j=1 jcj(\u03c7\u2032)j\u22121 \u2227\u03b1n\u2212j > 0. By the same argument in Corollary 5, this is equivalent to that u is a C-subsolution of (1.12). Observe that for all t\u2217> 0, Ik(u(t\u2217)) = Z t\u2217 0 Z X \u2202u \u2202t \u03c7k u \u2227\u03b1n\u2212k = Z t\u2217 0 Z X c \u2212 P\u2113 j=1 cj\u03c3j(\u03bb) \u03c3k(\u03bb) ! \u03c7k u \u2227\u03b1n\u2212k = Z t\u2217 0 \uf8eb \uf8edc Z X \u03c7k u \u2227\u03b1n\u2212k \u2212 \u2113 X j=1 cj Z X \u03c7j u \u2227\u03b1n\u2212j \uf8f6 \uf8f8= 0. (4.68) Therefore Lemma 9 implies that k X j=0 Z X u\u03c7j u \u2227\u03c7k\u2212j \u2227\u03b1n\u2212k = 0. 36 \fTherefore we can obtain the Harnack inequality as in Corollary 5: 0 \u2264sup X u(., t) \u2264\u2212C1 inf u(., t) + C2, (4.69) and infX u < 0, for some positive constants C1, C2. Lemma 1 then gives a uniform bound for u. Since Z X \u2202tu\u03c7k u \u2227\u03b1n\u2212k = 0, for any t > 0, there exists y = y(t) such that \u2202tu(y, t) = 0. 
The rest of the proof is the same to the proof of Corollary 5 where we used Lemma 7 to imply the convergence of the \ufb02ow. Q.E.D. We observe that equations mixing several Hessians seem to appear increasingly frequently in complex geometry. A recent example of particular interest is the Fu-Yau equation [14, 15, 32, 35] and its corresponding geometric \ufb02ows [36]. 4.7 Concluding Remarks We conclude with a few open questions. It has been conjectured by Lejmi and Sz\u00b4 ekelyhidi [28] that conditions of the form (4.22) and their generalizations can be interpreted as geometric stability conditions. This conjecture has been proved in the case of the J-\ufb02ow on toric varieties by Collins and Sz\u00b4 ekelyhidi [7]. Presumably there should be similar interpretations in terms of stability of the conditions formulated in the previous section. A discussion of stability conditions for constant scalar curvature K\u00a8 ahler metrics can be found in [40]. It would also be very helpful to have a suitable geometric interpretation of conditions such as the one on the initial data u0. Geometric \ufb02ows whose behavior may behave very di\ufb00erently depending on the initial data include the anomaly \ufb02ows studied in [33], [38], [13]. For many geometric applications, it would be desirable to extend the theory of subsolutions to allow the forms \u03c7 and \u03c8 to depend on time as well as on u and \u2207u. Acknowledgements: The second-named author would like to thank his thesis advisor, Vincent Guedj for his constant support and encouragement, and also Yuxin Ge for some useful discussions. This work was begun when the second-named author was visiting the department of Mathematics of Columbia University, and completed when the \ufb01rst-named author was visiting the Institut de Mathematiques of Toulouse. Both authors would like to thank these institutions for their hospitality. They would also like to thank Nguyen Van Hoang for a careful reading of the paper." + }, + { + "url": "http://arxiv.org/abs/1705.09763v1", + "title": "The Anomaly flow on unimodular Lie groups", + "abstract": "The Hull-Strominger system for supersymmetric vacua of the heterotic string\nallows general unitary Hermitian connections with torsion and not just the\nChern unitary connection. Solutions on unimodular Lie groups exploiting this\nflexibility were found by T. Fei and S.T. Yau. The Anomaly flow is a flow whose\nstationary points are precisely the solutions of the Hull-Strominger system.\nHere we examine its long-time behavior on unimodular Lie groups with general\nunitary Hermitian connections. We find a diverse and intricate behavior, which\ndepends very much on the Lie group and the initial data.", + "authors": "Duong H. Phong, Sebastien Picard, Xiangwen Zhang", + "published": "2017-05-27", + "updated": "2017-05-27", + "primary_cat": "math.DG", + "cats": [ + "math.DG", + "math-ph", + "math.CV", + "math.MP" + ], + "main_content": "Introduction The Hull-Strominger system [19, 20, 32] is a system of equations for supersymmetric vacua of the heterotic string which generalizes the well-known Calabi-Yau compacti\ufb01cations found by Candelas, Horowitz, Strominger, and Witten [5]. It is also of considerable interest from the point of view of non-K\u00a8 ahler geometry and non-linear partial di\ufb00erential equations [13, 14, 26, 28, 29]. While more special solutions continue to be constructed (see e.g. 
[7, 8, 9, 12, 10, 11, 25, 13, 14, 15, 23] and references therein), the complete solution of the Hull-Strominger system still seems distant. In [27], a general strategy was proposed, namely to look for solutions as stationary points of a \ufb02ow, called there the Anomaly \ufb02ow. Despite its unusual original formulation as a \ufb02ow of (2, 2)-forms, the Anomaly \ufb02ow turns out to have some remarkable properties, including some suggestive analogies with the Ricci \ufb02ow [30], and the fact [31] that it can recover the famous solutions found in 2006 by Fu and Yau [13, 14]. Even so, very little is known at this time about the Anomaly \ufb02ow in general, and it is important to gain insight from more examples. Part of the interest in the Hull-Strominger system lies in it allowing metrics on complex manifolds which have non-vanishing torsion. For such metrics, there is actually a whole line of natural unitary connections, namely the Yano-Gauduchon line [17] passing by the Chern unitary connection and the Bismut connection. The Hull-Strominger system can be formulated with any speci\ufb01c choice along this line [20], and not just the Chern unitary connection. This \ufb02exibility is essential in the solution found by Fei and Yau on unimodular Lie groups [10], using an ansatz originating from [2] and [4]. The advantage of unimodular 1Work supported in part by the National Science Foundation under NSF Grant DMS-12-66033. \fLie groups is that any left-invariant Hermitian metric is automatically balanced, so if we \ufb01x a holomorphic vector bundle with a Hermitian-Yang-Mills invariant metric, the Hull-Strominger will reduce to a single equation. A di\ufb03culty in using general Hermitian connections is that their Riemannian curvature tensor Rm will in general have components of all (2, 0), (1, 1), and (0, 2) types. But Fei and Yau discovered that, for unimodular Lie groups, the term Tr(Rm \u2227Rm) reduces to a (2, 2)-form, and that solutions to the HullStrominger system can be found with any connection on the Yano-Gauduchon line, except for the Chern unitary connection and the Lichnerowicz connection. The purpose of this paper is analyze the Anomaly \ufb02ow on unimodular Lie groups. As had been stressed earlier, the study of the Anomaly \ufb02ow has barely begun, and there is a strong need for good examples. Even in the case of toric \ufb01brations analyzed in [31], the convergence of the \ufb02ow was established only for a particular type of initial data, and a fuller understanding of the long-time behavior of the \ufb02ow is still not available. Unlike the more familiar cases of the K\u00a8 ahler-Ricci and the Donaldson heat \ufb02ows, which are known to converge whenever a stationary point exists, the Anomaly \ufb02ow is expected to behave di\ufb00erently depending on the initial data. This can be traced back to the fact that the conformally balanced condition is weaker than the K\u00a8 ahler condition, and that the positivity and closedness of a (2, 2)-form seem to carry less information than the positivity and closedness of a (1, 1)-form. The case of Lie groups is particular appealing, as the \ufb02ow reduces then to a system of ordinary di\ufb00erential equations, and its formalism can be readily extended to general connections on the Yano-Gauduchon line. Our main results are as follows. Let X be a complex 3-dimensional Lie group. Fix a basis {ea} of left-invariant holomorphic vector \ufb01elds on X, and let {ea} be the dual basis of holomorphic forms. 
Let cd ab denote the structure constants of X de\ufb01ned by this basis [ea, eb] = X d ed cd ab. (1.1) The Lie group X is said to be unimodular if the structure constants satisfy the condition X a ca ab = 0 (1.2) for any b. This condition is invariant under a change of basis of holomorphic vector \ufb01elds (see e.g. Section \u00a73.1 for a proof). The basis {ea} also de\ufb01nes a holomorphic, nowhere vanishing (3, 0)-form on X, given by \u2126= e1 \u2227e2 \u2227e3. (1.3) We de\ufb01ne the norm of \u2126with respect to any metric \u03c9 by \u2225\u2126\u22252 = i\u2126\u2227\u00af \u2126 \u0012\u03c93 3! \u0013\u22121 . (1.4) 2 \fTheorem 1 Let X be a unimodular complex 3-dimensional Lie group. Let t \u2192\u03c9(t) be the Anomaly \ufb02ow on X, as de\ufb01ned in (3.4) below. Set \u03c9(t) = ig\u00af ab(t) eb \u2227\u00af ea. Then the Anomaly \ufb02ow is given explicitly by \u2202tg\u00af ab = 1 2\u2225\u2126\u2225\u03c9 gd\u00af p \u0012 g\u00af isciapcs bd \u2212\u03b1\u2032\u03c4 4 g\u00af \u2113ign\u00af j cmapc\u2113mjci sncs bd \u0013 . (1.5) Here \u03b1\u2032 is the string tension, and \u03c4 = 2\u03ba2(2\u03ba\u22121), where \u03ba indicates the connection (2.11). In three dimensions, as listed in Fei and Yau [10] and Knapp [22], the unimodular complex Lie groups consist precisely of those whose Lie algebra is isomorphic to that of either the Abelian Lie group C3, the nilpotent Heisenberg group, the solvable group of (complex\ufb01cations of) rigid motions in R2, or the semi-simple group SL(2, C). The Anomaly \ufb02ow behaves di\ufb00erently in each case. Fixing the structure constants in each case as spelled out in \u00a73, we have: Theorem 2 Assume that \u03b1\u2032\u03c4 > 0. (a) When X = C3, any metric is a stationary point for the \ufb02ow, and the \ufb02ow is consequently stationary for any initial metric \u03c9(0). (b) When X is nilpotent, there is no stationary point. Consequently the \ufb02ow cannot converge for any initial metric. If the initial metric is diagonal, then the metric remains diagonal along the \ufb02ow, the lowest eigenvalue is constant, while the other two eigenvalues tend to +\u221eat a constant rate. (c) When X is solvable, the stationary points of the \ufb02ow are precisely the metrics with g\u00af 12 = g\u00af 21 = 0, \u03b1\u2032\u03c4 4 g3\u00af 3 = 1. (1.6) The Anomaly \ufb02ow is asymptotically instable near any stationary point. However, the condition g\u00af 12 = g\u00af 12 = 0 is preserved along the \ufb02ow, and for any initial metric satisfying this condition, the \ufb02ow converges to a stationary point. (d) When X = SL(2, C), there is a unique stationary point, given by the diagonal metric g\u00af ab = \u03b1\u2032\u03c4 2 \u03b4ab. (1.7) The linearization of the \ufb02ow at the \ufb01xed point admits both positive and negative eigenvalues. In particular, the \ufb02ow is asymptotically instable. The paper is organized as follows. In \u00a72, the terms in the main equation in the HullStrominger system, namely the anomaly cancellation condition, are worked out in detail for connections on the Yano-Gauduchon line for left-invariant metrics. While the arguments are directly inspired by those of Fei and Yau, we have adopted a more explicit component formalism that should make the proof as well as the resulting formulas more accessible and convenient for geometric \ufb02ows. The Anomaly \ufb02ow is analyzed in \u00a73. 
Theorem 1 is proved in \u00a73.2, and Theorem 2 is proved in \u00a73.3, beginning in each case with the identi\ufb01cation of the stationary points, and ending with the analysis of the Anomaly \ufb02ow proper. 3 \f2 The curvature of unitary connections with torsion on Lie groups In this section, we derive formulas for the curvature of an arbitrary connection on the Yano-Gauduchon line for Lie groups. They have been obtained before by Fei and Yau [10], but we need a formulation convenient for our subsequent study of the Anomaly \ufb02ow. 2.1 Unitary connections with torsion Let (X, \u03c9) be a complex manifold of complex dimension n, equipped with a Hermitian metric \u03c9. We shall denote by zj, 1 \u2264j \u2264n, local holomorphic coordinates on X, and by \u03be\u03b1, 1 \u2264\u03b1 \u22642n, real smooth local coordinates on X. In particular g\u00af kj denotes the metric in holomorphic coordinates, and g\u03b1\u03b2 the metric in real coordinates. We consider only unitary connections on X, that is connections \u2207on T 1,0(X) which preserve the metric, \u2207g\u00af kj = 0. Our notation for covariant derivatives is \u2207\u03b1V \u03b3 = \u2202\u03b1V \u03b3 + A\u03b3 \u03b1\u03b2V \u03b2. (2.1) and our conventions for the curvature R\u03b1\u03b2\u03b3\u03b4 and the torsion tensors T \u03b3\u03b1\u03b2 are, [\u2207\u03b2, \u2207\u03b1]V \u03b3 = R\u03b1\u03b2 \u03b3 \u03b4V \u03b4 + T \u03b4 \u03b1\u03b2\u2207\u03b4V \u03b3, (2.2) with similar formulas when using the complex coordinates zj, \u00af zj, 1 \u2264j \u2264n. For example, in complex coordinates, the components of the torsion are T p\u00af jm = Ap \u00af jm, T \u00af p\u00af jm = \u2212A\u00af p m\u00af j, T p jm = Ap jm \u2212Ap mj, T \u00af p\u00af j \u00af m = A\u00af p \u00af j \u00af m \u2212A\u00af p \u00af m\u00af j. (2.3) Among the unitary connections, of particular interest are the following three special connections: (1) The Levi-Civita connection \u2207L on T(X), characterized by the fact that its torsion is 0. In general, it does not preserve the complex structure, and in particular, it does not induce a connection on the complex tangent space T 1,0(X). It preserves the complex structure J if and only if \u03c9 is K\u00a8 ahler. (2) The Chern connection \u2207C, characterized by the fact that it preserves the complex structure, and that \u2207C \u00af j W k = \u2202\u00af jW k, i.e., Ap \u00af jm = 0. By unitarity, it follows that Ap jm = gp\u00af q\u2202jg\u00af qm. Thus the components of the torsion of the Chern connection in the \ufb01rst line of (2.3) vanish, and the torsion reduces to a section of the following bundle T = T + \u00af T \u2208(\u039b2,0 \u2297T 1,0) \u2295(\u039b0,2 \u2297T 0,1). (2.4) 4 \fwith T = 1 2 T \u2113 jk \u2202 \u2202z\u2113\u2297(dzk \u2227dzj), \u00af T = 1 2 \u00af T \u00af \u2113\u00af j\u00af k \u2202 \u2202\u00af z\u2113\u2297(d\u00af zk \u2227d\u00af zj) \u00af T \u00af \u2113\u00af j\u00af k = T \u2113jk. The torsion of the Chern connection is 0 exactly when \u03c9 is K\u00a8 ahler. (3) The Bismut connection \u2207H. Connections obtained by modifying the Levi-Civita connection by a 3-form seem to have been written down \ufb01rst by Yano in his book [35], which became the standard reference for it in the physics literature on supersymmetric sigma models (see e.g. [16, 21, 19, 32] and references therein). 
They were subsequently rediscovered by Bismut [3], who also identi\ufb01ed among them the connection preserving the complex structure, which is now known as the Bismut connection. Thus the Bismut connection can be written in two di\ufb00erent ways, depending on whether we use \u2207L or \u2207C as reference connection. To write it with \u2207L as reference connection, introduce the following real 3-form 2, H = 1 4(H + \u00af H) \u2208\u039b2,1 \u2295\u039b1,2 \u2282\u039b3 (2.5) where H = 1 2 g \u00af m\u2113T \u2113 jk dzk \u2227dzj \u2227d\u00af zm, \u00af H = 1 2 g\u00af \u2113m \u00af T \u00af \u2113\u00af j\u00af k d\u00af zk \u2227d\u00af zj \u2227dzm, H \u00af mjk = g \u00af m\u2113T \u2113 jk, \u00af Hm\u00af j\u00af k = g\u00af \u2113m \u00af T \u2113\u00af j\u00af k. (2.6) Note that, in terms of the Hermitian form \u03c9 = ig\u00af kjdzj \u2227d\u00af zk, we have H = i\u2202\u03c9, \u00af H = \u2212i\u00af \u2202\u03c9. (2.7) Next, introduce the 1-form SH = (SH \u03b1 )d\u03be\u03b1 valued in the space of endomorphisms antisymmetric with respect to the metric g\u03b1\u03b2 by setting 3 H \u22611 3! H\u03b1\u03b2\u03b3 d\u03be\u03b3 \u2227d\u03be\u03b2 \u2227d\u03be\u03b1, (SH \u03b1 )\u03b2 \u03b3 = 2H\u03b1\u03c1\u03b3 g\u03c1\u03b2. (2.8) The Bismut connection is then de\ufb01ned by \u2207H = \u2207L + SH. (2.9) 2It is important to distinguish the form H \u2208\u039b2,1 from the torsion tensor T \u2208\u039b2,0 \u2297T 1,0, although their coe\ufb03cients are the same, and they can be recovered from one another. Perhaps a good analogue is the metric g\u00af kj and the corresponding symplectic form \u03c9 = ig\u00af kj dzj \u2227d\u00af zk, which are distinct objects, although they can be recovered from one another, once the complex structure is \ufb01xed. 3An endomorphism S\u03b2\u03b3 is antisymmetric with respect to a metric g\u03c1\u03b2 if \u27e8SW, U\u27e9= \u2212\u27e8W, SU\u27e9, which means that g\u03b2\u03c1S\u03b2\u03b3 = \u2212S\u00b5\u03c1g\u03b3\u00b5, which means that S\u03c1\u03b3 is anti-symmetric, if we raise and power indices using the metric g\u03b2\u03c1. 5 \fThis formulation shows that it is unitary and that its torsion can be identi\ufb01ed with a 3-form. If we use the Chern connection as the reference connection, we can also write \u2207H j W p = \u2207C j W p \u2212T p jkW k, \u2207H \u00af j W p = \u2207C \u00af j W p + gp\u00af qg \u00af mr \u00af T \u00af m\u00af j\u00af qW r. (2.10) This formulation shows that the Bismut connection is unitary and that it manifestly preserves the complex structure. The equivalence of (2.9) and (2.10) can be found in [3]. The two connections \u2207C and \u2207H determine a line of connections \u2207(\u03ba) which are all unitary and preserve the complex structure. Indeed, in terms of an orthonormal frame, a unitary connection is characterized by a Hermitian matrix, and a real linear combination of Hermitian matrices is again Hermitian. Also a linear combination of connections on T 1,0(X) is again a connection on T 1,0(X), hence our assertion. We refer to this line of connections as the Yano-Gauduchon line of canonical connections [17]. With \u2207C as reference connection, the connection \u2207(\u03ba) can be expressed as \u2207(\u03ba) j W p = \u2207C j W p \u2212\u03ba T p jkW k, \u2207(\u03ba) \u00af j W p = \u2207C \u00af j W p + \u03ba gp\u00af qg \u00af mr \u00af T \u00af m\u00af j\u00af qW r. (2.11) Clearly \u03ba = 0 corresponds to the Chern connection and \u03ba = 1 to the Bismut connection. 
The case \u03ba = 1/2 is known as the Lichnerowicz connection. 2.2 Evaluation of curvature forms Let (X, \u03c9) be now a complex Lie group with left-invariant metric \u03c9. Let e1, . . . , en \u2208g be an orthonormal frame of left-invariant holomorphic vector \ufb01elds on X, and let e1, . . . , en be the dual frame of holomorphic 1-forms, so that \u03c9 = i X a ea \u2227\u00af ea. (2.12) The structure constants of the Lie algebra g in this basis are de\ufb01ned in (1.1) and denoted by cdab. They satisfy the Jacobi identity cq ircr jk + cq krcr ij + cq jrcr ki = 0. (2.13) By Cartan\u2019s formula for the exterior derivative, we have \u2202ea = 1 2ca bd ed \u2227eb (2.14) which is the Maurer-Cartan equation. Our goal is to determine the torsion and curvature of an arbitrary connection \u2207(\u03ba) on the Yano-Gauduchon line in terms of the frame e1, \u00b7 \u00b7 \u00b7, en and the structure constants cdab. First, we express the connection forms for \u2207(\u03ba) in terms of the structure constants of X: 6 \fLemma 1 Let \u2207(\u03ba) be a connection on the Yano-Gauduchon line for a left-invariant Hermitian metric \u03c9 on the Lie group X. Let e1, \u00b7 \u00b7 \u00b7, en be a left invariant orthonormal basis for \u03c9, and let cdab be the corresponding structure constants, as de\ufb01ned above. Then for any vector \ufb01eld V = V aea, \u2207(\u03ba) b V a = \u2202ebV a + \u03baca bdV d, \u2207(\u03ba) \u00af b V a = \u2202\u00af ebV a \u2212\u03bacdbaV d. (2.15) Proof. We use the expression for \u2207(\u03ba) with the Chern connection \u2207C as reference connection. For this, we need the torsion form H of the Chern connection. Taking the exterior derivative of \u03c9 and applying (2.14) gives H = i\u2202\u03c9 = \u22121 2 X cd ab eb \u2227ea \u2227\u00af ed. (2.16) Therefore H \u00af dab = (i\u2202\u03c9) \u00af dab = \u2212cd ab. (2.17) Next, since the coe\ufb03cients of the metric \u03c9 are constant in an orthonormal frame, the Chern connection reduces to the exterior derivative: \u2207C j = \u2202j in this frame. The lemma follows now from the formula (2.11) giving \u2207(\u03ba) in terms of \u2207C, H, and the parameter \u03ba. Henceforth, we shall denote \u2207(\u03ba) by just \u2207for simplicity. We obtain Aa bd = \u03baca bd, Aa\u00af bd = \u2212\u03bacdba, A\u00af a b \u00af d = \u2212\u03bacd ba, A\u00af a\u00af b \u00af d = \u03bacabd. (2.18) Lemma 2 The components of the curvature of the connection \u2207(\u03ba) in the orthonormal frame e1, \u00b7 \u00b7 \u00b7, en are given by Rkj p q = (\u03ba \u2212\u03ba2)cr kjcp rq, R\u00af k\u00af j p q = \u2212(\u03ba \u2212\u03ba2)crkjcqrp R\u00af kj p q = \u03ba2(\u2212cp jrcqkr + crkpcr jq). (2.19) Proof. The de\ufb01ning formula (2.2) for the curvature and the torsion tdba of the connection \u2207(\u03ba) in coordinates becomes the following formula in the frame {ea}, [\u2207a, \u2207b]W c = Rba c dW d + td ba\u2207dW c + cd ab\u2207dW c (2.20) where the last term on the right hand side is the contribution of the commutator [ea, eb]. H We now compute the left hand side using the formulas of the previous lemma. We \ufb01nd Rkj p qW q + tq kj\u2207qW p + cq jk\u2207qW p = [\u2207j, \u2207k]W p = cq jk\u2202eqW p + \u03ba2(\u2212cp krcr js + cp jrcr ks)W s +2\u03bacr kj\u2207rW p. (2.21) 7 \fBy (2.15) and the Jacobi identity (2.13), we see that the (2, 0) component is Rkj p q = \u2212\u03bacr jkcp rq + \u03ba2(\u2212cp krcr jq + cp jrcr kq) = (\u03ba \u2212\u03ba2)cr kjcp rq. 
(2.22) This proves the \ufb01rst formula in the lemma. Next, we have R\u00af k\u00af j p qW q + \u00af t\u00af q\u00af k\u00af j\u2207\u00af qW p + cqjk\u2207\u00af qW p = [\u2207\u00af j, \u2207\u00af k]W p = cqjk\u2202\u00af eqW p + \u03ba2(crjpcskr \u2212crkpcsjr)W s \u22122\u03bacrkj\u2207\u00af rW p. (2.23) We see that the (0, 2) component is R\u00af k\u00af j p q = \u03bacrjkcqrp + \u03ba2(crjpcqkr \u2212crkpcqjr) = \u2212(\u03ba \u2212\u03ba2)crkjcqrp. (2.24) which is the second formula in the lemma. Finally, we have R\u00af kj p qW q \u2212A\u00af q j\u00af k\u2207\u00af qW p + Aq\u00af kj\u2207qW p = [\u2207j, \u2207\u00af k]W p = \u03back jr\u2207\u00af rW p \u2212\u03bacjkr\u2207rW p +\u03ba2(\u2212cp jrcqkr + crkpcr jq)W q. (2.25) We see that the (1, 1) component is R\u00af kj p q = \u03ba2(\u2212cp jrcqkr + crkpcr jq). (2.26) This completes the proof of the lemma. We note our expressions satisfy the usual symmetries Rkj p s = \u2212Rjk p s, R\u00af k\u00af j p s = \u2212R\u00af j\u00af k p s (2.27) R\u00af k\u00af j p q = \u2212Rkjqp, R\u00af kj p q = R\u00af jkqp. (2.28) Next, we compute Tr (Rm\u2227Rm) on the Lie group X. This computation was \ufb01rst done by Fei-Yau [10]. Their computation showed in particular that Tr (Rm \u2227Rm) is a (2, 2) form along the Yano-Gauduchon line. Here we recover their result using our formalism. Lemma 3 Let \u2207(\u03ba) be an arbitrary connection on the Yano-Gauduchon line, as before. Assume that dim X = 3. Then Tr (Rm \u2227Rm) is a (2, 2)-form, given explicitly by Tr (Rm \u2227Rm)\u00af k\u00af \u2113ij = \u03c4 crk\u2113csrpcq ijcs qp. (2.29) where we have set as before \u03c4 = 2\u03ba2(2\u03ba \u22121). 8 \fProof. Because X is assumed to have dimension 3, the (4, 0) and (0, 4) components of Tr(Rm \u2227Rm) are automatically 0. Next, we show the vanishing of the (3, 1)-component, Tr (Rm \u2227Rm)\u00af \u2113ijk = 2(Rij p sR\u00af \u2113k s p + Rki p sR\u00af \u2113j s p + Rjk p sR\u00af \u2113i s p). (2.30) Fixing an index (ijk, \u2113), we use the previously computed formulas for the curvature of a Lie group to obtain Rij p sR\u00af \u2113k s p = (\u03ba \u2212\u03ba2)\u03ba2cr ijcp rs(\u2212cs kqcp\u2113q + cq\u2113scq kp). (2.31) Applying the Jacobi identity (2.13), Rij p sR\u00af \u2113k s p (2.32) = \u03ba3(1 \u2212\u03ba)(\u2212cp ircr jscs kqcp\u2113q + cp jrcr iscs kqcp\u2113q \u2212cq kpcp jrcr iscq\u2113s + cq kpcp ircr jscq\u2113s) = \u03ba3(1 \u2212\u03ba)(\u2212cp ircr jscs kq + cp jrcr iscs kq \u2212cp kscs jrcr iq + cp kscs ircr jq)cp\u2113q. (2.33) If we denote Fijkpq = cpircrjscskq, then Rij p sR\u00af \u2113k s p = \u03ba3(1 \u2212\u03ba)(\u2212Fijk p q + Fjik p q \u2212Fkji p q + Fkij p q) cp\u2113q. (2.34) Upon cyclically permuting (ijk), we see that Rij p sR\u00af \u2113k s p + Rki p sR\u00af \u2113j s p + Rjk p sR\u00af \u2113i s p = 0. (2.35) Therefore Tr (Rm \u2227Rm)\u00af \u2113ijk = 0 (2.36) as claimed. Next, we compute the (1, 3) component, Tr (Rm \u2227Rm)\u00af i\u00af j\u00af k\u2113 = 2(R\u00af i\u00af j p sR\u00af k\u2113 s p + R\u00af j\u00af k p sR\u00af i\u2113 s p + R\u00af k\u00af i p sR\u00af j\u2113 s p). (2.37) Using the symmetric property (2.28) and the vanishing of the (3, 1) part, we see that the (1, 3) part also vanishes. Finally, we evaluate the (2, 2)-component, Tr (Rm \u2227Rm)\u00af k\u00af \u2113ij = 2R\u00af k\u00af \u2113 p sRij s p + 2(R\u00af kj p sR\u00af \u2113i s p \u2212R\u00af ki p sR\u00af \u2113j s p). 
(2.38) From the previously established formulas, we have R\u00af k\u00af \u2113 p sRij s p = \u2212(\u03ba \u2212\u03ba2)2 crk\u2113csrpcq ijcs qp. (2.39) Next, R\u00af kj p sR\u00af \u2113i s p = \u03ba4(\u2212cp jrcskr + crkpcr js)(\u2212cs iqcp\u2113q + cq\u2113scq ip) (2.40) = \u03ba4(cp jrcskrcs iqcp\u2113q + cq ipcrkpcr jscq\u2113s \u2212cp jrcskrcq\u2113scq ip \u2212cs iqcp\u2113qcrkpcr js). 9 \fSome cancellation occurs and we are left with (R\u00af kj p sR\u00af \u2113i s p \u2212R\u00af ki p sR\u00af \u2113j s p) = \u03ba4(\u2212cskrcq\u2113scq ipcp jr \u2212cp\u2113qcrkpcr jscs iq + cskrcq\u2113scq jpcp ir + cp\u2113qcrkpcr iscs jq) = \u03ba4 (cskrcq\u2113s(\u2212cq ipcp jr + cq jpcp ir) + cp\u2113qcrkp(\u2212cr jscs iq + cr iscs jq)) = \u03ba4(cskrcq\u2113scq rpcp ij \u2212cp\u2113qcrkpcr qscs ij) = \u03ba4(cskrcq\u2113s \u2212cs\u2113rcqks)cq rpcp ij = \u03ba4cqrscsk\u2113cq rpcp ij. (2.41) Adding these two equations together, the terms of order \u03ba4 cancel and we obtain the desired formula. Recall that we view the Lie algebra of X as generated by a given basis of left-invariant holomorphic vector \ufb01elds ea, with structure constants cdab. So far we have considered only the metric \u03c9 = i P ea \u2227\u00af ea de\ufb01ned by the condition that ea be orthonormal. We consider now the general left-invariant metric given by \u03c9 = X g\u00af ba iea \u2227\u00af eb, (2.42) where g\u00af ba is a positive-de\ufb01nite Hermitian matrix. We have Lemma 4 Let X be a 3-dimensional complex Lie group, with a given basis of left-invariant holomorphic vector \ufb01elds ea with structure constants cdab. Let \u03c9 be the metric given by (2.42), and let \u2207(\u03ba) be the Hermitian connection on the Gauduchon line with parameter \u03ba. Then Tr (Rm \u2227Rm) = \u03c4 4 g\u00af \u2113ign\u00af j cmabc\u2113mjci sncs cd ed \u2227ec \u2227\u00af eb \u2227\u00af ea. (2.43) and i\u2202\u00af \u2202\u03c9\u2212\u03b1\u2032 4 Tr (Rm\u2227Rm) = 1 4 \u0012 g\u00af isciabcs cd\u2212\u03b1\u2032\u03c4 4 g\u00af \u2113ign\u00af j cmabc\u2113mjci sncs cd \u0013 ed\u2227ec\u2227\u00af eb\u2227\u00af ea. (2.44) Proof. These formulas for general metrics follow from the ones obtained earlier for the metric i P a ea \u2227\u00af ea after performing a change of basis. More speci\ufb01cally, we let P be a matrix such that \u00af P \u00af a \u00af pg\u00af abP b q = \u03b4\u00af pq. (2.45) Therefore, denoting ga\u00af b to be the inverse of g\u00af ba, we have g\u00af ba = ( \u00af P \u22121)\u00af r\u00af b(P \u22121)r a, ga\u00af b = P a r \u00af P \u00af b \u00af r. (2.46) We now perform a change of basis and de\ufb01ne fi = erP r i, [fi, fj] = frkr ij. (2.47) 10 \fThe induced transformation laws are fi = erP r i, f i = (P \u22121)i rer, ei = fr(P \u22121)r i, ei = P i rf r, (2.48) k\u2113 ij = (P \u22121)\u2113 sP r iP q jcs rq. (2.49) By construction, \u03c9 is diagonal in the basis {fa}. From (2.29), we can compute the curvature in the basis {f a}, Tr (Rm \u2227Rm) = \u03c4 4 krabksrpkq cdks qp f d \u2227f c \u2227\u00af f b \u2227\u00af f a. (2.50) Using the above transformation laws, Tr (Rm\u2227Rm) can be rewritten in terms of the {ea} basis. A straightforward computation gives the \ufb01rst formula in the lemma. To obtain the second formula, we begin by computing i\u2202\u00af \u2202\u03c9 in the model case where the metric is P a iea \u2227\u00af ea. We had already found i\u2202\u03c9 in (2.16). 
Di\ufb00erentiating again gives i\u2202\u00af \u2202(i X a ea \u2227\u00af ea) = 1 4 X c\u2113ab c\u2113 cd ed \u2227ec \u2227\u00af eb \u2227\u00af ea. (2.51) Reverting to the general metric \u03c9 given by (2.42) and performing the same change of bases as before, we \ufb01nd i\u2202\u00af \u2202\u03c9 = 1 4g\u00af isciabcs cd ed \u2227ec \u2227\u00af eb \u2227\u00af ea. (2.52) Combining this formula with the one found previously for Tr(Rm \u2227Rm), we obtain the second formula stated in the lemma. Q.E.D. 3 Hull-Strominger systems and Anomaly \ufb02ows We come now to the study of the Anomaly \ufb02ow on the complex Lie group X. 3.1 The Hull-Strominger system on unimodular Lie groups First we recall the Hull-Strominger system [19, 20, 32]. Let X be a 3-dimensional complex manifold with a nowhere vanishing holomorphic (3, 0)-form \u2126. The Hull-Strominger system is a system of equations for a Hermitian metric \u03c9 on X and a holomorphic vector bundle E \u2192X equipped with a Hermitian metric H\u00af \u03b1\u03b2 satisfying F 0,2 = F 0,2 = 0, \u03c92 \u2227F 1,1 = 0 i\u2202\u00af \u2202\u03c9 \u2212\u03b1\u2032 4 Tr(Rm \u2227Rm \u2212F \u2227F) = 0 d\u2020\u03c9 = i(\u2202\u2212\u00af \u2202) log \u2225\u2126\u2225\u03c9 (3.1) where F p,q are the components of the Chern curvature F \u2208\u039b2 \u2297End(E) of the metric H\u00af \u03b1\u03b2, and \u2225\u2126\u2225\u03c9 denotes the norm of \u2126with respect to the metric \u03c9 as de\ufb01ned in (1.4). The 11 \f\ufb01rst equation in (3.1) is the familiar Hermitian-Yang-Mills equation, and its solution is well-known for given metric \u03c9 by the theorem of Donaldson-Uhlenbeck-Yau [6, 33]. Thus the most novel aspects in the Hull-Strominger system resides in the other two equations. It has been pointed out by Li and Yau [23] that the last equation is equivalent to the following condition of \u201cconformally balanced metric\u201d, d(\u2225\u2126\u2225\u03c9\u03c92) = 0 (3.2) which is a generalization of the balanced condition introduced in 1981 by Michelsohn [24]. We shall take the bundle E \u2192X to be trivial, F to be 0, and restrict ourselves to leftinvariant metrics. In this case, the \ufb01rst equation in (3.1) is trivially satis\ufb01ed. Furthermore, the norm \u2225\u2126\u2225\u03c9 is a constant function on X, and the conformally balanced condition reduces to exactly the balanced condition of Michelsohn [24], i.e., d\u03c92 = 0. A key observation, also exploited earlier in the work of Fei and Yau [10], is that, on a unimodular complex Lie group, any left-invariant metric is unimodular. This statement is well-known and to our knowledge \ufb01rst appeared in [1]. For the reader\u2019s convenience, we provide the brief argument. Recall that a complex Lie group is said to be unimodular if there exists a left-invariant basis of holomorphic vector \ufb01elds with structure constants cdab satisfying the condition (1.2). In view of the transformation rule (2.49) for structure constants under a change of basis of left-invariant vector \ufb01elds, this statement holds for all bases if and only if it holds for some basis. Let now \u03c9 be any invariant Hermitian metric on X, and express it in terms of any basis of holomorphic ea forms as (2.42). A direct calculation using (2.14) gives \u2202\u03c92 = X 1 2(g\u00af prg\u00af qc \u2212g\u00af pcg\u00af qr)cr ab ea \u2227eb \u2227ec \u2227\u00af ep \u2227\u00af eq. 
3.2 Proof of Theorem 1

The Anomaly flow, introduced in [27], is a parabolic flow whose stationary points are solutions of the Hull-Strominger system. In the present setting, $X$ is a 3-dimensional unimodular complex Lie group with a basis $\{e_a\}$ of left-invariant holomorphic vector fields, $\Omega = e^1\wedge e^2\wedge e^3$, and the bundle $E\to X$ is taken to be trivial. Then the Anomaly flow [27] is defined to be the following flow of $(2,2)$-forms,
\[
\partial_t\big(\|\Omega\|_\omega\,\omega^2\big) = i\partial\bar\partial\omega - \frac{\alpha'}{4}\,\mathrm{Tr}\,(Rm\wedge Rm), \quad (3.4)
\]
with any given initial data of the form $\|\Omega\|_{\omega_0}\,\omega_0^2$. Theorem 1 of [30] shows how to rewrite this flow as a curvature flow for the Hermitian metric $\omega$. Setting $\Phi = i\partial\bar\partial\omega - \frac{\alpha'}{4}\mathrm{Tr}\,(Rm\wedge Rm)$, we obtain
\[
\partial_t g_{\bar ac} = -\frac{1}{2\|\Omega\|_\omega}\, g^{d\bar b}\,\Phi_{\bar a\bar bcd}. \quad (3.5)
\]
Substituting in the formulas obtained in Lemma 4 for $\Phi$, we obtain Theorem 1.

3.3 Proof of Theorem 2

We discuss the unimodular Lie group case by case, as listed in Theorem 2.

3.3.1 The Abelian Case

In this case, we have $[e_a, e_b] = 0$ for all $a, b$ (3.6), and all the structure constants $c^d{}_{ab}$ vanish. The flow (1.5) is static for all initial data, and part (a) of Theorem 2 is immediate.

3.3.2 Nilpotent case

In this case, we may assume the Lie algebra satisfies the commutation relations
\[
[e_1, e_3] = 0, \qquad [e_2, e_3] = 0, \qquad [e_1, e_2] = e_3. \quad (3.7)
\]
It follows that $c^3{}_{12} = 1$ and all other structure constants vanish. Substituting these structure constants into the flow (1.5) gives the following system
\[
\partial_t g_{\bar 11} = \frac{g^{2\bar 2}\, g_{\bar 33}}{2\|\Omega\|}, \qquad \partial_t g_{\bar 12} = -\frac{g^{1\bar 2}\, g_{\bar 33}}{2\|\Omega\|}, \qquad \partial_t g_{\bar 22} = \frac{g^{1\bar 1}\, g_{\bar 33}}{2\|\Omega\|}, \qquad \partial_t g_{\bar p3} = 0. \quad (3.8)
\]
We see that there are no stationary points in this case. This proves part (b) of Theorem 2.

In this case, we can also describe completely the flow for diagonal initial data. The diagonal property is preserved, and setting $g_{\bar ba}(t) = \lambda_a(t)\,\delta_{ab}$, we find that $\lambda_3(t) = \lambda_3(0)$ is constant in time, while $\lambda_1(t)$ and $\lambda_2(t)$ satisfy the ODE system,
\[
\partial_t\lambda_1 = \frac{\sqrt{\lambda_3(0)}}{2}\,\sqrt{\frac{\lambda_1}{\lambda_2}}, \qquad \partial_t\lambda_2 = \frac{\sqrt{\lambda_3(0)}}{2}\,\sqrt{\frac{\lambda_2}{\lambda_1}}. \quad (3.9)
\]
In particular,
\[
\partial_t\,\frac{\lambda_1}{\lambda_2} = 0. \quad (3.10)
\]
Thus the ratio $\lambda_1(t)/\lambda_2(t)$ is constant for all time. Substituting in the previous equation, we can solve explicitly for $\lambda_1(t)$ and $\lambda_2(t)$,
\[
\lambda_1(t) = \lambda_1(0) + \frac{t}{2}\,\sqrt{\frac{\lambda_1(0)\lambda_3(0)}{\lambda_2(0)}}, \qquad \lambda_2(t) = \lambda_2(0) + \frac{t}{2}\,\sqrt{\frac{\lambda_2(0)\lambda_3(0)}{\lambda_1(0)}}, \quad (3.11)
\]
which both tend to $\infty$ as $t\to\infty$.
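A quick numerical sanity check of (3.9)-(3.11) — not part of the original argument, with arbitrarily chosen initial eigenvalues — confirms that the ratio $\lambda_1/\lambda_2$ is conserved and that the flow matches the linear-in-$t$ closed form (3.11):

```python
import numpy as np

# Euler integration of the nilpotent-case system (3.9); the initial
# eigenvalues below are hypothetical choices, not values from the paper.
lam1, lam2, lam3 = 2.0, 1.0, 4.0
dt, steps = 1e-4, 20000              # integrate up to t = 2
c = np.sqrt(lam3) / 2.0
ratio0 = lam1 / lam2
for _ in range(steps):
    d1 = c * np.sqrt(lam1 / lam2)
    d2 = c * np.sqrt(lam2 / lam1)
    lam1, lam2 = lam1 + dt * d1, lam2 + dt * d2
t = dt * steps
print(lam1 / lam2, ratio0)                              # ratio conserved, cf. (3.10)
print(lam1, 2.0 + 0.5 * t * np.sqrt(2.0 * 4.0 / 1.0))   # matches (3.11)
print(lam2, 1.0 + 0.5 * t * np.sqrt(1.0 * 4.0 / 2.0))
```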
3.3.3 Solvable case

In this case, we may assume the Lie algebra satisfies the commutation relations
\[
[e_3, e_1] = e_1, \qquad [e_3, e_2] = -e_2, \qquad [e_1, e_2] = 0, \quad (3.12)
\]
that is, $c^1{}_{31} = 1$, $c^2{}_{32} = -1$, and all the other structure constants vanish. Substituting these structure constants into the flow (1.5) gives the following system
\[
\partial_t g_{\bar 11} = \frac{1}{2\|\Omega\|}\Big( g^{3\bar 3} g_{\bar 11} - \beta\, g^{3\bar 3} g^{3\bar 3} g_{\bar 11} \Big), \quad (3.13)
\]
\[
\partial_t g_{\bar 12} = \frac{1}{2\|\Omega\|}\Big( g^{3\bar 3} g_{\bar 12} - \beta\, g^{3\bar 3} g^{3\bar 3} g_{\bar 12} \Big), \quad (3.14)
\]
\[
\partial_t g_{\bar 13} = \frac{1}{2\|\Omega\|}\Big( -g^{1\bar 3} g_{\bar 11} + g^{3\bar 2} g_{\bar 12} + \beta\, g^{1\bar 3} g^{3\bar 3} g_{\bar 11} + \beta\, g^{3\bar 2} g^{3\bar 3} g_{\bar 12} \Big), \quad (3.15)
\]
\[
\partial_t g_{\bar 22} = \frac{1}{2\|\Omega\|}\Big( g^{3\bar 3} g_{\bar 22} - \beta\, g^{3\bar 3} g^{3\bar 3} g_{\bar 22} \Big), \quad (3.16)
\]
\[
\partial_t g_{\bar 23} = \frac{1}{2\|\Omega\|}\Big( g^{1\bar 3} g_{\bar 21} - g^{2\bar 3} g_{\bar 22} + \beta\, g^{1\bar 3} g^{3\bar 3} g_{\bar 21} + \beta\, g^{2\bar 3} g^{3\bar 3} g_{\bar 22} \Big), \quad (3.17)
\]
\[
\partial_t g_{\bar 33} = \frac{1}{2\|\Omega\|}\Big( g^{1\bar 1} g_{\bar 11} + g^{2\bar 2} g_{\bar 22} - g^{2\bar 1} g_{\bar 12} - g^{1\bar 2} g_{\bar 21} - \beta\, g^{1\bar 1} g^{3\bar 3} g_{\bar 11} - \beta\, g^{2\bar 2} g^{3\bar 3} g_{\bar 22} - \beta\, g^{2\bar 1} g^{3\bar 3} g_{\bar 12} - \beta\, g^{1\bar 2} g^{3\bar 3} g_{\bar 21} \Big). \quad (3.18)
\]
Here we set
\[
\beta = \frac{\alpha'\tau}{4}. \quad (3.19)
\]
We begin by identifying the stationary metrics. Let $g_{\bar ab}$ be a stationary metric. From (3.13) we see that $\beta > 0$ and $g^{3\bar 3} = \beta^{-1}$. Substituting into (3.15), we obtain
\[
2\, g^{3\bar 2}\, g_{\bar 12} = 0. \quad (3.20)
\]
If $g_{\bar 12} = 0$, the stationary point is of the desired form. Hence we must have $g^{3\bar 2} = 0$. Similarly, substituting $g^{3\bar 3} = \beta^{-1}$ into (3.17) leads to either $g_{\bar 21} = 0$ or $g^{1\bar 3} = 0$. Hence we may assume that $g^{3\bar 2} = g^{3\bar 1} = 0$, which implies that $g_{\bar 31} = g_{\bar 32} = 0$ and $g_{\bar 33} = \beta$. Next, by (3.18), we conclude
\[
0 = 2\big( g^{2\bar 1} g_{\bar 12} + g^{1\bar 2} g_{\bar 21} \big) = 4\,\mathrm{Re}\,\{ g^{2\bar 1} g_{\bar 12} \}. \quad (3.21)
\]
By the formula for inverse matrices, we have
\[
g^{2\bar 1}\, g_{\bar 12} = -\frac{|g_{\bar 21}|^2\, g_{\bar 33}}{\det g}. \quad (3.22)
\]
Therefore $g_{\bar 12} = 0$ and the solution is of the desired form. It follows that the stationary metrics are exactly the metrics which satisfy
\[
g_{\bar 12} = 0, \qquad g^{3\bar 3} = \beta^{-1}. \quad (3.23)
\]
These equations can also be rewritten as
\[
g_{\bar 12} = 0, \qquad \frac{|g_{\bar 13}|^2}{g_{\bar 11}} + \frac{|g_{\bar 23}|^2}{g_{\bar 22}} = g_{\bar 33} - \beta. \quad (3.24)
\]
In particular, there are stationary points $g_{\bar ba}$ which are not diagonal. For example, setting $g_{\bar 12} = g_{\bar 21} = 0$, $g_{\bar 33} = 2+\beta$, and all other entries to $1$ gives a stationary point. More generally, the moduli space of solutions requires locally two complex parameters $g_{\bar 13}$ and $g_{\bar 23}$, and two real parameters $g_{\bar 11}$, $g_{\bar 22}$.
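The sample stationary point just described can be checked directly against the criterion (3.23)/(3.24); the following sketch, with an arbitrary positive value of $\beta$, is illustrative only:

```python
import numpy as np

# Verify the example stationary metric of the solvable case: g_{1bar2} = 0,
# g_{3bar3} = 2 + beta, all other entries 1; beta below is a hypothetical value.
beta = 0.7
G = np.array([[1.0, 0.0, 1.0],
              [0.0, 1.0, 1.0],
              [1.0, 1.0, 2.0 + beta]])          # the Hermitian matrix (g_{ab})
Ginv = np.linalg.inv(G)
print(np.isclose(Ginv[2, 2], 1.0 / beta))       # g^{3bar3} = 1/beta, cf. (3.23)
lhs = abs(G[0, 2])**2 / G[0, 0] + abs(G[1, 2])**2 / G[1, 1]
print(np.isclose(lhs, G[2, 2] - beta))          # the equivalent form (3.24)
```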
Next, we examine the Anomaly flow. First, we consider the case of initial metrics with $g_{\bar 12}(0) = 0$. This condition is clearly preserved under the flow, and the flow for the other components of the metric becomes
\[
\partial_t g_{\bar 11} = \frac{1}{2\|\Omega\|}\, g^{3\bar 3} g_{\bar 11}\big(1 - \beta g^{3\bar 3}\big), \quad (3.25)
\]
\[
\partial_t g_{\bar 22} = \frac{1}{2\|\Omega\|}\, g^{3\bar 3} g_{\bar 22}\big(1 - \beta g^{3\bar 3}\big), \quad (3.26)
\]
\[
\partial_t g_{\bar 33} = \frac{1}{2\|\Omega\|}\Big( g^{1\bar 1} g_{\bar 11}\big(1 - \beta g^{3\bar 3}\big) + g^{2\bar 2} g_{\bar 22}\big(1 - \beta g^{3\bar 3}\big) \Big), \quad (3.27)
\]
\[
\partial_t g_{\bar 13} = \frac{1}{2\|\Omega\|}\, g^{1\bar 3} g_{\bar 11}\big(-1 + \beta g^{3\bar 3}\big), \quad (3.28)
\]
\[
\partial_t g_{\bar 23} = \frac{1}{2\|\Omega\|}\, g^{2\bar 3} g_{\bar 22}\big(-1 + \beta g^{3\bar 3}\big). \quad (3.29)
\]
We shall use the following simple formulas for the entries of the inverse metric $g^{a\bar b}$,
\[
g^{1\bar 1} = \frac{g_{\bar 22}g_{\bar 33} - |g_{\bar 23}|^2}{\det g}, \quad g^{2\bar 2} = \frac{g_{\bar 11}g_{\bar 33} - |g_{\bar 13}|^2}{\det g}, \quad g^{3\bar 3} = \frac{g_{\bar 11}g_{\bar 22}}{\det g}, \quad g^{1\bar 3} = -\frac{g_{\bar 22}g_{\bar 13}}{\det g}, \quad g^{2\bar 3} = -\frac{g_{\bar 11}g_{\bar 23}}{\det g}. \quad (3.30)
\]
We note the following identities:

• $g_{\bar 22} = a\, g_{\bar 11}$ with $a = \frac{g_{\bar 22}}{g_{\bar 11}}(0)$. This follows directly from equations (3.25) and (3.26).

• $|g_{\bar 13}| = b\, g_{\bar 11}$ with $b = \frac{|g_{\bar 13}|}{g_{\bar 11}}(0)$. Indeed, substituting $g^{1\bar 3}$ into equation (3.28) and $g^{3\bar 3}$ into (3.25), we obtain
\[
\partial_t g_{\bar 13} = \frac{1}{2\|\Omega\|}\,\frac{g_{\bar 13}\, g_{\bar 22}\, g_{\bar 11}}{\det g}\big(1 - \beta g^{3\bar 3}\big), \qquad \partial_t g_{\bar 11} = \frac{1}{2\|\Omega\|}\,\frac{g_{\bar 11}\, g_{\bar 22}\, g_{\bar 11}}{\det g}\big(1 - \beta g^{3\bar 3}\big). \quad (3.31)
\]
Hence
\[
\partial_t \ln |g_{\bar 13}| = \partial_t \ln g_{\bar 11} \quad (3.32)
\]
and this implies the desired relation.

• $|g_{\bar 23}| = c\, g_{\bar 11}$ with $c = \frac{|g_{\bar 23}|}{g_{\bar 11}}(0)$. The proof is similar to the previous case.

• $g^{3\bar 3} = \frac{d}{g_{\bar 11}^2}$ with $d = g_{\bar 11}^2(0)\, g^{3\bar 3}(0)$. First, we compute
\[
\partial_t g^{3\bar 3} = \frac{g_{\bar 22}\,\partial_t g_{\bar 11}}{\det g} + \frac{g_{\bar 11}\,\partial_t g_{\bar 22}}{\det g} - \frac{g_{\bar 11} g_{\bar 22}}{(\det g)^2}\,\partial_t\det g = \frac{1}{\|\Omega\|}\, g^{3\bar 3} g^{3\bar 3}\big(1 - \beta g^{3\bar 3}\big) - \frac{g_{\bar 11} g_{\bar 22}}{(\det g)^2}\,\partial_t\det g. \quad (3.33)
\]
It follows from (3.28) and (3.29) that
\[
\partial_t|g_{\bar 13}|^2 = \frac{1}{\|\Omega\|}\, g^{3\bar 3}\big(1-\beta g^{3\bar 3}\big)\,|g_{\bar 13}|^2, \qquad \partial_t|g_{\bar 23}|^2 = \frac{1}{\|\Omega\|}\, g^{3\bar 3}\big(1-\beta g^{3\bar 3}\big)\,|g_{\bar 23}|^2. \quad (3.34)
\]
Next,
\[
\partial_t \det g = \partial_t\big( g_{\bar 11}g_{\bar 22}g_{\bar 33} - g_{\bar 11}|g_{\bar 23}|^2 - g_{\bar 22}|g_{\bar 13}|^2 \big)
\]
\[
= \partial_t g_{\bar 11}\, g_{\bar 22}g_{\bar 33} + g_{\bar 11}\,\partial_t g_{\bar 22}\, g_{\bar 33} + g_{\bar 11}g_{\bar 22}\,\partial_t g_{\bar 33} - \partial_t g_{\bar 11}\,|g_{\bar 23}|^2 - g_{\bar 11}\,\partial_t|g_{\bar 23}|^2 - \partial_t g_{\bar 22}\,|g_{\bar 13}|^2 - g_{\bar 22}\,\partial_t|g_{\bar 13}|^2
\]
\[
= \frac{1}{2\|\Omega\|}\big(1 - \beta g^{3\bar 3}\big)\Big( 2 g^{3\bar 3} g_{\bar 11}g_{\bar 22}g_{\bar 33} + g_{\bar 11}g_{\bar 22}\big( g^{1\bar 1}g_{\bar 11} + g^{2\bar 2}g_{\bar 22} \big) \Big) - \frac{3}{2\|\Omega\|}\, g^{3\bar 3}\big(1 - \beta g^{3\bar 3}\big)\big( g_{\bar 11}|g_{\bar 23}|^2 + g_{\bar 22}|g_{\bar 13}|^2 \big). \quad (3.35)
\]
Using (3.30) yields
\[
\partial_t \det g = \frac{1}{2\|\Omega\|}\big(1 - \beta g^{3\bar 3}\big)\, g^{3\bar 3}\Big( 2 g_{\bar 11}g_{\bar 22}g_{\bar 33} + (\det g)\big( g^{1\bar 1}g_{\bar 11} + g^{2\bar 2}g_{\bar 22} \big) - 3 g_{\bar 11}|g_{\bar 23}|^2 - 3 g_{\bar 22}|g_{\bar 13}|^2 \Big) = \frac{2}{\|\Omega\|}\big(1 - \beta g^{3\bar 3}\big)\, g^{3\bar 3}\,\det g. \quad (3.36)
\]
Combining (3.33) and (3.36),
\[
\partial_t g^{3\bar 3} = -\frac{1}{\|\Omega\|}\, g^{3\bar 3} g^{3\bar 3}\big(1 - \beta g^{3\bar 3}\big). \quad (3.37)
\]
Comparing this equation with (3.25), it follows that
\[
\partial_t g^{3\bar 3} = -2\,\frac{\partial_t g_{\bar 11}}{g_{\bar 11}}\, g^{3\bar 3}, \qquad \partial_t \ln g^{3\bar 3} = 2\,\partial_t \ln \frac{1}{g_{\bar 11}}, \quad (3.38)
\]
which implies $g^{3\bar 3} = \frac{d}{g_{\bar 11}^2}$ with $d = g_{\bar 11}^2(0)\, g^{3\bar 3}(0)$.

Denote $\lambda = g_{\bar 11}$. Using the previously derived formulas and solving for $g_{\bar 33}$ from the expression for $\det g$, it follows that
\[
g_{\bar 11} = \lambda, \quad g_{\bar 22} = a\lambda, \quad |g_{\bar 13}| = b\lambda, \quad |g_{\bar 23}| = c\lambda, \quad \det g = \frac{a}{d}\,\lambda^4, \quad g_{\bar 33} = \frac{\lambda^2}{d} + \Big( \frac{c^2}{a} + b^2 \Big)\lambda. \quad (3.39)
\]
Putting the above relations into equation (3.25),
\[
\partial_t\lambda = \frac{(\det g)^{1/2}}{2}\, g^{3\bar 3}\, g_{\bar 11}\Big(1 - \beta g^{3\bar 3}\Big) = \frac12\,\sqrt{\frac{a}{d}}\,\lambda^2\cdot\frac{d}{\lambda^2}\cdot\lambda\,\Big(1 - \frac{\beta d}{\lambda^2}\Big). \quad (3.40)
\]
Then
\[
\partial_t\lambda^2 = \sqrt{ad}\,\big(\lambda^2 - \beta d\big), \quad (3.41)
\]
and hence
\[
\big|\lambda^2(t) - \beta d\big| = \big|\lambda^2(0) - \beta d\big|\cdot e^{\sqrt{ad}\; t}. \quad (3.42)
\]
Suppose $\beta > 0$. Then $\lambda$ is uniformly bounded above and away from zero as $t\to-\infty$, hence from (3.39) we see that the metric $g_{\bar ba}$ is uniformly bounded and remains non-degenerate. Thus the flow exists for all time $t < 0$, and $g_{\bar 11}^2(t)\to\beta d$ as $t\to-\infty$ for any initial data. In particular, this implies that $g^{3\bar 3}\to\beta^{-1}$. This completes the description of the flow for initial data satisfying the condition $g_{\bar 12}(0) = 0$.

We consider now the case of initial data with $g_{\bar 12}\neq 0$. In this case, we claim that the flow cannot converge to a non-degenerate metric. Indeed, if $\partial_t g_{\bar ba}\to 0$ then $g_{\bar 12}\to 0$. However, from (3.13) and (3.14), we deduce
\[
\frac{\partial_t g_{\bar 11}}{g_{\bar 11}} = \frac{\partial_t|g_{\bar 12}|}{|g_{\bar 12}|}. \quad (3.43)
\]
Therefore for all times,
\[
|g_{\bar 12}| = C\, g_{\bar 11}, \qquad C = \frac{|g_{\bar 12}(0)|}{g_{\bar 11}(0)}. \quad (3.44)
\]
The convergence to $0$ of $g_{\bar 12}$ then implies the convergence to $0$ of $g_{\bar 11}$, which contradicts the requirement that the limit be a non-degenerate metric. In particular, given a stationary solution $g_\infty$, if we perturb it in the $g_{\bar 12}$ direction and run the Anomaly flow, the flow will not take the metric back to $g_\infty$. This completes the proof of part (c) of Theorem 2.
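Before turning to the semi-simple case, we note that the reduced equation (3.41) is easy to test numerically against its exponential closed form (3.42); the sketch below (with arbitrary hypothetical values of $a$, $d$, $\beta$) is purely illustrative:

```python
import numpy as np

# Integrate d/dt lambda = (1/2) sqrt(a d) * lambda * (1 - beta d / lambda^2),
# cf. (3.40), and compare lambda(t)^2 - beta*d with the closed form (3.42).
a, d, beta = 2.0, 3.0, 0.5            # hypothetical constants with beta > 0
lam, dt, steps = 2.0, 1e-5, 100000    # integrate up to t = 1
lam0 = lam
for _ in range(steps):
    lam += dt * 0.5 * np.sqrt(a * d) * lam * (1.0 - beta * d / lam**2)
t = dt * steps
print(lam**2 - beta * d)                                    # numerical value
print((lam0**2 - beta * d) * np.exp(np.sqrt(a * d) * t))    # matches (3.42)
```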
3.3.4 The Semi-Simple Case

This is the case of $X = SL(2,\mathbf{C})$, whose standard basis $\{e_a\}$ has structure constants $c^k{}_{ij} = \epsilon_{kij}$, the Levi-Civita symbol. We begin by showing that the metrics $g_{\bar ab} = 2\beta\,\delta_{ab}$ are the only stationary points of the flow, where $\beta$ is defined in (3.19). In particular, the existence of a stationary point requires $\beta > 0$. Assume then that $g_{\bar ab}$ is a stationary point of the flow. Let $P$ be a matrix such that $\bar P^{\bar a}{}_{\bar p}\, g_{\bar ab}\, P^b{}_q = \delta_{\bar pq}$, and perform a change of basis by defining $f_i = e_r P^r{}_i$, $[f_a, f_b] = f_c\, k^c{}_{ab}$. As previously discussed (2.46), in this new frame $\{f_a\}$ we have $\omega = i\sum f^a\wedge\bar f^a$. The fixed point equation for the flow (1.5) becomes
\[
h_{\bar ab} = \beta\; k^m{}_{ap}\, h^{\bar ms}\, k^s{}_{bp}, \quad (3.45)
\]
where we defined
\[
h_{\bar ab} = k^m{}_{ap}\, k^m{}_{bp}. \quad (3.46)
\]
If we let $\kappa_{\bar ab} = c^m{}_{ap}\, c^m{}_{bp}$, by symmetry of the Levi-Civita symbol we obtain $\kappa_{\bar ab} = 2\delta_{ab}$. By using the transformation laws (2.49), we derive
\[
k^\ell{}_{ij} = (P^{-1})^\ell{}_s\, P^r{}_i\, P^q{}_j\,\epsilon_{srq}, \qquad h_{\bar ab} = \bar P^{\bar q}{}_{\bar a}\, P^p{}_b\,\kappa_{\bar qp} = 2\,\bar P^{\bar r}{}_{\bar a}\, P^r{}_b. \quad (3.47)
\]
If we substitute (3.47) into (3.45) and use $g^{a\bar b} = P^a{}_r\,\bar P^{\bar b}{}_{\bar r}$, cancellation occurs and we are left with
\[
h_{\bar ab} = 2\beta\,\bar P^{\bar p}{}_{\bar a}\, P^q{}_b\; g^{r\bar s}\,\epsilon_{\ell ps}\,\epsilon_{\ell qr}. \quad (3.48)
\]
We note the formula
\[
\sum_\ell \epsilon_{\ell ps}\,\epsilon_{\ell qr} = \delta_{pq}\delta_{sr} - \delta_{pr}\delta_{qs}. \quad (3.49)
\]
Combining (3.48) and (3.49), we obtain
\[
h_{\bar ab} = 2\beta\,\bar P^{\bar p}{}_{\bar a}\, P^q{}_b\; g^{r\bar s}\big( \delta_{pq}\delta_{sr} - \delta_{pr}\delta_{qs} \big) = 2\beta\,\big(\bar P^{\bar r}{}_{\bar a} P^r{}_b\big)\,\mathrm{Tr}(g^{-1}) - 2\beta\,\bar P^{\bar p}{}_{\bar a}\, g^{p\bar q}\, P^q{}_b. \quad (3.50)
\]
Substituting the relations $h_{\bar ab} = 2\bar P^{\bar r}{}_{\bar a} P^r{}_b$ and $g^{a\bar b} = P^a{}_r\bar P^{\bar b}{}_{\bar r}$, it follows that
\[
h_{\bar ab} = \beta\, h_{\bar ab}\,\big( P^m{}_r\,\bar P^{\bar m}{}_{\bar r} \big) - 2\beta\,\bar P^{\bar p}{}_{\bar a} P^p{}_m\,\bar P^{\bar q}{}_{\bar m} P^q{}_b = \frac12\,\beta\, h_{\bar ab}\,\mathrm{Tr}\, h - \frac12\,\beta\, h_{\bar am}\, h_{\bar mb}. \quad (3.51)
\]
Hence
\[
h_{\bar ab}\,\big(\beta\,\mathrm{Tr}\,h - 2\big) = \beta\, h_{\bar am}\, h_{\bar mb}. \quad (3.52)
\]
Multiplying both sides of the equation by the inverse $h^{b\bar a}$, we obtain $\beta\, h_{\bar bb} = \beta\,\mathrm{Tr}\,h - 2$ for each fixed $b$; summing over $b$ gives, in particular, $\mathrm{Tr}\,h = 3\beta^{-1}$. Putting this back into equation (3.52),
\[
h_{\bar ab} = \beta\, h_{\bar am}\, h_{\bar mb}. \quad (3.53)
\]
Thus $\beta h$ is an invertible idempotent matrix. This implies that $\beta h = I$ and hence $\bar P^{\bar r}{}_{\bar a} P^r{}_b = \frac12 h_{\bar ab} = \frac{1}{2\beta}\,\delta_{ab}$. Therefore,
\[
g_{\bar ab} = (\bar P^{-1})^{\bar r}{}_{\bar a}\,(P^{-1})^r{}_b = 2\beta\,\delta_{ab}, \quad (3.54)
\]
as was to be shown.

Next, we show that the stationary metric is asymptotically unstable. (Recall that a stationary point for a flow is said to be asymptotically stable if the flow converges to the stationary point for any initial data in some neighborhood of the point; it is said to be asymptotically unstable if it is not asymptotically stable.) For this, it suffices to show that the flow can be restricted to a submanifold of Hermitian metrics, and that restricted to this submanifold, the flow is asymptotically unstable. We choose this submanifold to be the submanifold of metrics $g_{\bar ab}$ which are diagonal with respect to the given basis of invariant holomorphic vector fields,
\[
g_{\bar ab} = \lambda_b\,\delta_{ab}. \quad (3.55)
\]
Using the explicit form (1.5) of the flow and the fact that the structure constants are given by $\epsilon_{abc}$, it is easy to verify that the diagonal form of metrics is preserved along the flow. The flow reduces to the following ODE system for the eigenvalues $\lambda_a$,
\[
\partial_t\lambda_1 = \frac{(\lambda_1\lambda_2\lambda_3)^{1/2}}{2}\Big( \lambda_2^{-1}\lambda_3 + \lambda_3^{-1}\lambda_2 - 2\beta\,\lambda_1^{-1} - \beta\,\lambda_1\lambda_2^{-2} - \beta\,\lambda_1\lambda_3^{-2} \Big),
\]
\[
\partial_t\lambda_2 = \frac{(\lambda_1\lambda_2\lambda_3)^{1/2}}{2}\Big( \lambda_1^{-1}\lambda_3 + \lambda_3^{-1}\lambda_1 - 2\beta\,\lambda_2^{-1} - \beta\,\lambda_2\lambda_1^{-2} - \beta\,\lambda_2\lambda_3^{-2} \Big),
\]
\[
\partial_t\lambda_3 = \frac{(\lambda_1\lambda_2\lambda_3)^{1/2}}{2}\Big( \lambda_1^{-1}\lambda_2 + \lambda_2^{-1}\lambda_1 - 2\beta\,\lambda_3^{-1} - \beta\,\lambda_3\lambda_1^{-2} - \beta\,\lambda_3\lambda_2^{-2} \Big).
\]
The linearization of the flow at the stationary point $\lambda_1 = \lambda_2 = \lambda_3 = 2\beta$ is easily worked out,
\[
\partial_t\lambda_a = \sum_b Q_{ab}\,(\delta\lambda_b) \quad (3.56)
\]
where the matrix $Q = (Q_{ab})$ is given by
\[
Q = \sqrt{\frac{\beta}{2}}\begin{pmatrix} 0 & 1 & 1 \\ 1 & 0 & 1 \\ 1 & 1 & 0 \end{pmatrix}. \quad (3.57)
\]
The matrix $Q$ has eigenvalues $-\sqrt{\beta/2}$ and $2\sqrt{\beta/2}$ with multiplicities 2 and 1 respectively. By a classical theorem on ordinary differential equations (see e.g. Theorem 3.3 in [34]), the presence of an eigenvalue with strictly positive real part implies that the flow is asymptotically unstable. Part (d) of Theorem 2 has now been proved, completing the proof of the theorem.
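The eigenvalues of $Q$ quoted above are immediate to confirm numerically (an illustrative check, with an arbitrary value of $\beta$; the eigenvalues of the all-ones matrix minus the identity are $2, -1, -1$):

```python
import numpy as np

# Eigenvalues of the linearization (3.57) at lambda_1 = lambda_2 = lambda_3 = 2*beta.
beta = 1.0                                        # hypothetical beta > 0
Q = np.sqrt(beta / 2) * (np.ones((3, 3)) - np.eye(3))
print(np.sort(np.linalg.eigvalsh(Q)))
# prints approximately [-sqrt(beta/2), -sqrt(beta/2), 2*sqrt(beta/2)]
```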
3.3.5 Remarks

We conclude with several remarks.

• For simplicity, we have formulated Theorem 2 under the assumption that $\alpha'\tau > 0$. The behavior of the Anomaly flow can be readily worked out as well by similar methods when $\alpha'\tau \leq 0$. The arguments are in fact simpler in that case, because there is then no cancellation between the two terms on the right hand side of (1.5). We leave the details to the reader.

• In general, the sign of the right hand side in the Anomaly flow is dictated by the requirement that the flow be weakly parabolic. But in the case of Lie groups, the flow reduces to an ODE, and both signs are allowed. The opposite sign can be obtained from the sign we chose here simply by a time-reversal.

• The remaining remarks are about the semi-simple case $SL(2,\mathbf{C})$. The eigenvalues of the linearized operator at the stationary point imply that there is a stable surface and an unstable curve near this point. The stable surface appears difficult to identify explicitly, but the unstable curve is easily found. It is given by the line of metrics proportional to the identity matrix, $g_{\bar ab} = \lambda\,\delta_{ab}$. This line is preserved under the flow, which reduces to
\[
\partial_t\lambda = \lambda^{3/2}\Big( 1 - \frac{2\beta}{\lambda} \Big). \quad (3.58)
\]
This equation can be solved explicitly by
\[
\lambda(t) = 2\beta\,\Big( \frac{C e^{\sqrt{2\beta}\, t} + 1}{1 - C e^{\sqrt{2\beta}\, t}} \Big)^2, \quad \text{if } \lambda(0) > 2\beta, \quad (3.59)
\]
\[
\lambda(t) = 2\beta\,\Big( \frac{1 - C e^{\sqrt{2\beta}\, t}}{C e^{\sqrt{2\beta}\, t} + 1} \Big)^2, \quad \text{if } \lambda(0) < 2\beta, \quad (3.60)
\]
where
\[
C = \Big| \frac{\sqrt{\lambda(0)} - \sqrt{2\beta}}{\sqrt{\lambda(0)} + \sqrt{2\beta}} \Big| < 1. \quad (3.61)
\]
This shows that the flow terminates in finite time at $T = \frac{1}{\sqrt{2\beta}}\log\frac{1}{C}$, with $\lambda(t)\to+\infty$ as $t\to T$ if $\lambda(0) > 2\beta$, and $\lambda(t)\to 0$ as $t\to T$ if $\lambda(0) < 2\beta$.
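That the closed form (3.59) indeed solves (3.58) can be checked by a simple finite-difference test (illustrative only; the values of $\beta$ and $\lambda(0)$ below are hypothetical, with $\lambda(0) > 2\beta$):

```python
import numpy as np

# Finite-difference check that the explicit solution (3.59) satisfies
# d(lambda)/dt = lambda^{3/2} * (1 - 2*beta/lambda), cf. (3.58).
beta, lam0 = 1.0, 5.0
C = abs((np.sqrt(lam0) - np.sqrt(2*beta)) / (np.sqrt(lam0) + np.sqrt(2*beta)))

def lam(t):                                 # the closed form (3.59)
    E = C * np.exp(np.sqrt(2*beta) * t)
    return 2*beta * ((E + 1) / (1 - E))**2

T = np.log(1/C) / np.sqrt(2*beta)           # finite blow-up time
for t in [0.0, 0.25*T, 0.5*T, 0.75*T]:
    h = 1e-6
    lhs = (lam(t + h) - lam(t - h)) / (2*h)        # numerical derivative
    rhs = lam(t)**1.5 * (1 - 2*beta / lam(t))      # right-hand side of (3.58)
    print(t, lhs, rhs)                             # lhs matches rhs at each t
```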
• More generally, if two eigenvalues are equal at some time, then they are equal for all time. This follows from rewriting the flow as
\[
\partial_t\lambda_1 = -\frac{(\lambda_1\lambda_2\lambda_3)^{1/2}}{2}\Big( \beta\Big( \frac{2}{\lambda_1} + \frac{\lambda_1}{\lambda_2^2} + \frac{\lambda_1}{\lambda_3^2} \Big) - \frac{\lambda_2^2 + \lambda_3^2}{\lambda_2\lambda_3} \Big) \quad (3.62)
\]
with similar formulas for $\partial_t\lambda_2$ and $\partial_t\lambda_3$. In particular, we have
\[
\partial_t \log\frac{\lambda_1}{\lambda_2} = -\frac{(\lambda_1\lambda_2\lambda_3)^{1/2}}{2}\Big( \beta\Big( \frac{1}{\lambda_1^2} - \frac{1}{\lambda_2^2} \Big) - \frac{\lambda_2^2 - \lambda_1^2}{\lambda_1\lambda_2\lambda_3} \Big) = -\frac{(\lambda_1\lambda_2\lambda_3)^{1/2}}{2}\,\frac{\lambda_2^2 - \lambda_1^2}{\lambda_1^2\lambda_2^2}\Big( \beta - \frac{\lambda_1\lambda_2}{\lambda_3} \Big). \quad (3.63)
\]
Let $[0,T)$ be the maximum time of existence of the flow. This equation implies that if any two eigenvalues are equal at some time $t_0$, then they are identically equal on the whole interval $[0,T)$. Indeed, by the Cauchy-Kowalevska theorem, the eigenvalues are analytic functions of $t$ near any time where they are all strictly positive. The equation (3.63) implies that, if say $\lambda_1$ and $\lambda_2$ are equal at $t_0$, then all derivatives in time of $\lambda_1$ and $\lambda_2$ at $t_0$ are also equal, as we can see by differentiating the equation (3.63). By analyticity, they must be equal in a neighborhood of $t_0$. Thus the set where $\lambda_1$ and $\lambda_2$ coincide is both open and closed. This establishes our claim.

• It follows that if the eigenvalues at the initial time are ordered as
\[
\lambda_1 \geq \lambda_2 \geq \lambda_3 \quad (3.64)
\]
then this ordering is preserved by the flow. The configuration space can be divided into the invariant and mutually disjoint subsets $\{\lambda_1 > \lambda_2 > \lambda_3\}$, $\{\lambda_1 = \lambda_2 > \lambda_3\}$, $\{\lambda_1 > \lambda_2 = \lambda_3\}$, $\{\lambda_1 = \lambda_2 = \lambda_3\}$. We have already shown that the flow diverges to $+\infty$ on the last invariant subset. We shall next make a few remarks on the other sets.

• On each of the other invariant subsets, we have the following: (a) If $\lambda_1(0) < 2\beta$, then $\lambda_1(t)$ is monotone decreasing, and in particular less than $\lambda_1(0)$ for all time $t\in[0,T)$; (b) If $\lambda_3(0) > 2\beta$, then $\lambda_3(t)$ is monotone increasing, and in particular greater than $\lambda_3(0)$ for all time $t\in[0,T)$. To see this, we express the flow as
\[
\partial_t\lambda_1 = \frac{(\lambda_1\lambda_2\lambda_3)^{1/2}}{2}\Big( \frac{\lambda_2}{\lambda_3} + \frac{\lambda_3}{\lambda_2} - \beta\Big( \frac{2}{\lambda_1} + \frac{\lambda_1}{\lambda_2^2} + \frac{\lambda_1}{\lambda_3^2} \Big) \Big). \quad (3.65)
\]
We shall make use of the following two estimates
\[
\frac{\lambda_1}{\lambda_2} + \frac{\lambda_2}{\lambda_1} - \beta\Big( \frac{2}{\lambda_3} + \frac{\lambda_3}{\lambda_1^2} + \frac{\lambda_3}{\lambda_2^2} \Big) \geq \frac{\lambda_1}{\lambda_2} + \frac{\lambda_2}{\lambda_1} - \beta\Big( \frac{2}{\lambda_3} + \frac{\lambda_3}{\lambda_3^2} + \frac{\lambda_3}{\lambda_3^2} \Big) = \frac{\lambda_1}{\lambda_2} + \frac{\lambda_2}{\lambda_1} - \frac{4\beta}{\lambda_3} \quad (3.66)
\]
and
\[
\frac{\lambda_2}{\lambda_3} + \frac{\lambda_3}{\lambda_2} - \beta\Big( \frac{2}{\lambda_1} + \frac{\lambda_1}{\lambda_2^2} + \frac{\lambda_1}{\lambda_3^2} \Big) \leq \frac{\lambda_2}{\lambda_3} + \frac{\lambda_3}{\lambda_2} - \frac{2\beta}{\lambda_2} - \frac{2\beta}{\lambda_3} = \frac{1}{\lambda_2}\big( \lambda_3 - 2\beta \big) + \frac{1}{\lambda_3}\big( \lambda_2 - 2\beta \big). \quad (3.67)
\]
We can now establish (a). First, we claim that $\lambda_1(t) < 2\beta$ for any time $t\in[0,T)$. Otherwise, let $t_0$ be the first time when $\lambda_1(t_0) = 2\beta$. Then $\lambda_1(t) < 2\beta$ on the interval $[0,t_0)$. On the interval $[0,t_0]$, we then have $\lambda_3\leq\lambda_2\leq\lambda_1\leq 2\beta$, and the inequality (3.67) implies that $\partial_t\lambda_1 < 0$ on this interval. It follows that $\lambda_1(t_0)\leq\lambda_1(0) < 2\beta$, which contradicts our assumption. But now that we know that $\lambda_1(t) < 2\beta$ for all time $t$, the same inequality (3.67) shows that $\lambda_1(t)$ is a strictly monotone decreasing function of time. Next, we establish (b). Again, let $t_0$ be the first time when $\lambda_3(t_0) = 2\beta$. On the interval $[0,t_0)$, we can apply the inequality (3.66) and obtain
\[
\partial_t\lambda_3 \geq \frac{(\lambda_1\lambda_2\lambda_3)^{1/2}}{2}\Big( \frac{\lambda_1}{\lambda_2} + \frac{\lambda_2}{\lambda_1} - \beta\,\frac{4}{\lambda_3} \Big) > 0 \quad (3.68)
\]
where we used the inequality $\frac{\lambda_1}{\lambda_2} + \frac{\lambda_2}{\lambda_1}\geq 2$. It follows that $\lambda_3(t_0) > \lambda_3(0) > 2\beta$, which is a contradiction. Thus $\lambda_3(t)$ is a strictly monotone increasing function of time.

Acknowledgements

The authors would especially like to thank Teng Fei for many stimulating discussions." }, { "url": "http://arxiv.org/abs/1610.02740v2", "title": "The Anomaly flow and the Fu-Yau equation", "abstract": "The Anomaly flow is shown to converge on toric fibrations with the Fu-Yau\nansatz, for both positive and negative values of the slope parameter $\\alpha'$.\nThis implies both results of Fu and Yau on the existence of solutions for\nHull-Strominger systems, which they proved using different methods depending on\nthe sign of $\\alpha'$. It is also the first case where the Anomaly flow can\neven be shown to exist for all time. This is in itself remarkable from the\npoint of view of the theory of fully nonlinear partial differential equations,\nas the elliptic terms in the flow are not concave.", "authors": "Duong H.
Phong, Sebastien Picard, Xiangwen Zhang", "published": "2016-10-09", "updated": "2018-03-23", "primary_cat": "math.DG", "cats": [ "math.DG", "math.AP", "math.CV" ], "main_content": "Introduction

The Hull-Strominger system [25, 36] is a system of equations for supersymmetric compactifications of the heterotic string, which is less restrictive than the Ricci-flat and Kähler conditions originally proposed by Candelas, Horowitz, Strominger, and Witten [6]. [Footnote 1: Work supported in part by the National Science Foundation Grants DMS-12-66033 and DMS-16-05968. Key words: Hull-Strominger systems, Goldstein-Prokushkin fibration, slope parameter $\alpha'$, conformally balanced, torsion constraints, curvature, Moser iteration, $C^k$ estimates, exponential convergence.] More specifically, let $Y$ be a compact 3-fold with a nowhere vanishing holomorphic $(3,0)$-form $\Omega_Y$ and a vector bundle $E\to Y$. Then the Hull-Strominger system is the following system of equations for a Hermitian metric $\chi$ on $Y$ and a Hermitian metric $H$ on $E$,
\[
F^{2,0} = F^{0,2} = 0, \qquad \chi^2\wedge F^{1,1} = 0 \quad (1.1)
\]
\[
i\partial\bar\partial\chi - \frac{\alpha'}{4}\,\mathrm{Tr}(Rm\wedge Rm - F\wedge F) = 0 \quad (1.2)
\]
\[
d\big( \|\Omega_Y\|_\chi\,\chi^2 \big) = 0. \quad (1.3)
\]
Here $\alpha'$ is a constant parameter called the slope, $\|\Omega_Y\|_\chi$ is the norm of $\Omega_Y$ with respect to $\chi$ defined by $\|\Omega_Y\|^2_\chi = i\Omega_Y\wedge\bar\Omega_Y\,\chi^{-3}$, and $Rm$ and $F$ are respectively the curvatures of the Chern unitary connections of $\chi$ and $H$, viewed as $(1,1)$-forms valued in the bundles of endomorphisms of $T^{1,0}(Y)$ and $E$. [Footnote 2: The equation (1.3) was originally written in [25, 36] as $d^\dagger\chi = i(\partial-\bar\partial)\log\|\Omega\|_\chi$. That (1.3) is an equivalent formulation is an important insight due to J. Li and S.T. Yau [29].] For fixed $\chi$, the first equation (1.1) is just the well-known Hermitian-Yang-Mills equation. The essential novelty in the Hull-Strominger system, from the point of view of both non-Kähler geometry and non-linear partial differential equations, resides rather in the last two equations (1.2) and (1.3). The equation (1.3) says that the metric $\chi$ is not required to be Kähler, but conformally balanced, and the equation (1.2) defines a new curvature condition, which differs markedly from more familiar conditions such as Einstein since it is quadratic in the curvature tensor.

By now, many special solutions of the Hull-Strominger system have been found, both in the physics (see e.g. [3, 5, 11]) and the mathematics literature (see e.g. [1, 2, 12, 14, 15, 16, 17, 18, 19, 22, 29, 30, 42]). The main goal in the present paper is rather to develop PDE techniques towards an eventual general solution. A first major difficulty in the Hull-Strominger system, symptomatic of non-Kähler geometry, is to implement the conformally balanced condition (1.3) in the absence of an analogue of the $\partial\bar\partial$-lemma. While balanced metrics can be produced by many Ansätze, none seems more natural than the others, and all lead to unwieldy expressions for the equation (1.2).
This is why it was proposed by the authors in [31] to bypass completely the choice of an Ansatz by considering instead the flow
\[
H^{-1}\,\partial_t H = -\Lambda_\chi F(H) \quad (1.4)
\]
\[
\partial_t\big( \|\Omega_Y\|_\chi\,\chi^2 \big) = i\partial\bar\partial\chi - \frac{\alpha'}{4}\,\mathrm{Tr}\big( Rm(\chi)\wedge Rm(\chi) - F(H)\wedge F(H) \big) \quad (1.5)
\]
starting from a metric $H(0)$ on $E$, and a metric $\chi(0)$ on $Y$ which is conformally balanced. Here $\Lambda_\chi\psi = \chi^2\wedge\psi\,\chi^{-3}$ is the Hodge operator on $(1,1)$-forms $\psi$. The point of this flow is that, by Chern-Weil theory, the right hand side of (1.5) is a closed $(2,2)$-form, so if the initial metric $\chi(0)$ is conformally balanced, then the metric $\chi(t)$ will remain conformally balanced for all $t$. It suffices then to determine whether the flow exists for all time and converges. Flows of the form (1.4) have been called Anomaly flows in [31], in recognition of the fact that the equation (1.2) originates from the famous Green-Schwarz anomaly cancellation mechanism [23] required for the consistency of superstring theories.

The flow (1.4) has been shown in [31] to be weakly parabolic when $|\alpha' Rm(\chi)|$ is small, which implies its short-time existence. However, for any given elliptic system, there are many parabolic flows with it as stationary point, so the true test of whether a particular parabolic flow is the right one is its long-time existence and convergence. In the general theory of fully non-linear PDE's, it is customary to select the parabolic flow by some desirable properties, such as, in the case of scalar equations, concavity in the second derivatives [8, 24, 28]. In the present case, there is no further flexibility, as the particular flow (1.4) provides the only known way of implementing the conformally balanced condition (1.3) without appealing to any particular Ansatz. In effect, the geometric constraint of the metric being conformally balanced has excluded the customary desirable analytic properties for flows, and it is vital at this stage to develop some new analytic tools.

For this, we shall consider the Anomaly flow on the model case of Calabi-Eckmann fibrations. This case retains enough features of the general case to provide a valuable guide in the future (see §2.4 below). It is also the case where J.X. Fu and S.T. Yau [18, 19] found, by a particularly difficult and delicate analysis, the first non-perturbative, non-Kähler solution of the Hull-Strominger system. The precise $C^0$ estimate that they obtained has a great influence on the present work. On the other hand, we shall see that the Anomaly flow requires very different $C^1$, $C^2$, and $C^{2,\alpha}$ estimates, which actually sharpen the elliptic estimates in many ways. In particular, they define a new region, narrower than the space of positive Hermitian forms, which is preserved by the flow and where the unknown metric ultimately belongs.

We now describe more precisely our results. Let $(X,\hat\omega)$ be a Calabi-Yau surface, equipped with a nowhere vanishing holomorphic $(2,0)$-form $\Omega$ normalized to satisfy $\|\Omega\|_{\hat\omega} = 1$, and two harmonic forms $\omega_1, \omega_2\in H^{1,1}(X,\mathbf{Z})$.
Building on an earlier construction of Calabi and Eckmann [4], Goldstein and Prokushkin [21] have shown how to associate to this data a toric fibration $\pi: Y\to X$, equipped with a $(1,0)$-form $\theta$ with the property that $\Omega_Y = \Omega\wedge\theta$ is holomorphic and non-vanishing on $Y$, and $\omega_u = \pi^*(e^u\hat\omega) + i\theta\wedge\bar\theta$ is a conformally balanced metric on $Y$ for all $u\in C^\infty(X)$ (see §2.1 for more details). We shall prove

Theorem 1. Let $\pi: Y\to X$ be a Goldstein-Prokushkin fibration over a Calabi-Yau surface $(X,\hat\omega)$ with Ricci-flat metric $\hat\omega$ as above, and $E_X\to(X,\hat\omega)$ a stable vector bundle with zero slope and Hermitian-Yang-Mills metric $H_X$. Assume that the cohomological condition $\int_X\mu = 0$ is satisfied, where $\mu$ is defined in (2.18) below. Set $E = \pi^*(E_X)$ and consider the Anomaly flow (1.4, 1.5) on $(Y,E)$, with initial data $H(0) = \pi^*(H_X)$ and $\chi(0) = \pi^*(M\hat\omega) + i\theta\wedge\bar\theta$, where $M$ is a positive constant. Then there exists $M_0\gg 1$ such that, for all $M\geq M_0$, the flow exists for all time, and converges smoothly to a solution $(\chi_\infty, \pi^*(H_X))$ of the Hull-Strominger system for $(Y,E)$.

In particular, the theorem recaptures at one stroke the results of Fu and Yau in [18, 19], where they proved the existence of solutions of the Hull-Strominger system on Goldstein-Prokushkin fibrations satisfying the cohomological condition $\int_X\mu = 0$, when $\alpha' > 0$ and $\alpha' < 0$ respectively. Restricted to a Goldstein-Prokushkin fibration, the Anomaly flow becomes equivalent to the following flow for a metric $\omega = i g_{\bar kj}\, dz^j\wedge d\bar z^k$ on a Calabi-Yau surface $X$, equipped with a nowhere vanishing holomorphic $(2,0)$-form $\Omega$,
\[
\partial_t\omega = -\frac{1}{2\|\Omega\|_\omega}\Big( \frac{R}{2} - |T|^2 - \frac{\alpha'}{4}\,\sigma_2(i\mathrm{Ric}_\omega) + 2\alpha'\,\frac{i\partial\bar\partial(\|\Omega\|_\omega\,\rho)}{\omega^2} - 2\,\frac{\mu}{\omega^2} \Big)\,\omega \quad (1.6)
\]
where $\sigma_2(\Phi) = \Phi\wedge\Phi\,\omega^{-2}$ is the usual determinant of a real $(1,1)$-form $\Phi$, relative to the metric $\omega$. The expression $|T|^2$ is the norm of the torsion of $\omega$ defined in (2.33) below. Thus the theorem that we shall actually prove is the following

Theorem 2. Let $(X,\hat\omega)$ be a Calabi-Yau surface, equipped with a Ricci-flat metric $\hat\omega$ and a nowhere vanishing holomorphic $(2,0)$-form $\Omega$ normalized by $\|\Omega\|_{\hat\omega} = 1$. Let $\alpha'$ be a nonzero real number, and let $\rho$ and $\mu$ be smooth real $(1,1)$- and $(2,2)$-forms respectively, with $\mu$ satisfying the integrability condition
\[
\int_X\mu = 0. \quad (1.7)
\]
Consider the flow (1.6), with an initial metric given by $\omega(0) = M\hat\omega$, where $M$ is a constant. Then there exists $M_0$ large enough so that, for all $M\geq M_0$, the flow (1.6) exists for all time, and converges exponentially fast to a metric $\omega_\infty$ satisfying the Fu-Yau equation
\[
i\partial\bar\partial\big( \omega_\infty - \alpha'\|\Omega\|_{\omega_\infty}\rho \big) - \frac{\alpha'}{8}\,\mathrm{Ric}_{\omega_\infty}\wedge\mathrm{Ric}_{\omega_\infty} + \mu = 0, \quad (1.8)
\]
and the normalization $\int_X \|\Omega\|_{\omega_\infty}\,\frac{\omega_\infty^2}{2!} = M$.
Rewriting the flow as a flow of the conformal factor $u$, we can now discuss more precisely its features in the context of the general theory of non-linear parabolic PDE's. The flow (1.6) can be expressed as
\[
\partial_t u = \frac12\Big( \Delta_{\hat\omega} u + \alpha' e^{-u}\,\hat\sigma_2(i\partial\bar\partial u) - 2\alpha' e^{-u}\,\frac{i\partial\bar\partial(e^{-u}\rho)}{\hat\omega^2} + |Du|^2_{\hat\omega} + e^{-u}\tilde\mu \Big) \quad (1.9)
\]
where $\tilde\mu = 2\mu\,\hat\omega^{-2}$ is a time-independent scalar function, and both the Laplacian $\Delta_{\hat\omega}$ and the determinant $\hat\sigma_2$ are taken with respect to the fixed metric $\hat\omega$. Setting the right hand side to $0$ gives the equation solved by Fu and Yau [18, 19], so the Anomaly flow is indeed a parabolic version of the Fu-Yau equation. Moreover, the equation (1.9) can be rewritten in the form
\[
2\alpha' e^u\,\partial_t u = \frac{\big( e^u\hat\omega + e^{-u}\rho + \alpha'\, i\partial\bar\partial u \big)^2}{\hat\omega^2} + w(\mu,\rho,u,Du), \quad (1.10)
\]
and it is a parabolic complex Monge-Ampère type equation. However, unlike the Kähler-Ricci flow, where the elliptic term is $\log\det(g_{i\bar j} + u_{i\bar j})$, the equation (1.10) has none of the desirable concavity properties of elliptic and parabolic equations. In particular, none of the techniques used in [18, 19] for the elliptic case (as well as the ones for the more general equations in [33, 34]) can be adapted here besides the Moser iteration technique for the $C^0$ estimate. We shall see below that the proof of Theorem 2 relies instead, in an essential manner, on the geometric formulation of (1.6), and that the estimates are obtained using the metric which evolves with the flow.

Various geometric flows have been studied in non-Kähler complex geometry (see e.g. [9, 20, 37, 38, 39, 40, 41] and references therein). The main difficulty in studying (1.6) is that it is quadratic in the Ricci curvature. This creates substantial problems in applying known techniques to try and obtain estimates on the torsion, curvature, and derivatives of curvature. To overcome these issues, we first start the flow with a metric with vanishing
This strategy appears both \ufb02exible and powerful: since the initial posting online in 2016 of the original version of the present paper, we have applied it successfully in our paper \u201cFu-Yau Hessian equations\u201d, arXiv:1801.09842, in fact with the same test functions used in the proof of Proposition 4, to solve the Fu-Yau equation and its Hessian generalizations in all dimensions, for both signs of the parameter \u03b1\u2032. See also the paper \u201cThe Fu-Yau equation in higher dimensions\u201d by J. Chu, L. Huang, and X.H. Zhu, arXiv:1801.09351. The paper is organized as follows. In Section \u00a72, we provide the background on Goldstein-Prokushkin \ufb01brations, and show how to reduce the Anomaly \ufb02ow in this case to the \ufb02ow (1.6) for metrics on a Calabi-Yau surface. Sections \u00a73-\u00a76 are devoted to successive estimates: the uniform boundedness of the metrics \u03c9 in \u00a73, the estimates for the torsion in Section \u00a74, the estimates for the curvature, and higher order derivatives of both the torsion and the curvature in Sections \u00a75-\u00a76. Finally, long time existence is shown in Section \u00a77 and the convergence of the \ufb02ow is proved in Section \u00a78. 2 Anomaly \ufb02ows on Goldstein-Prokushkin \ufb01brations 2.1 The Goldstein-Prokushkin \ufb01bration We would like to restrict the Anomaly \ufb02ow from a general 3-fold Y to the special case of a Goldstein-Prokushkin \ufb01bration \u03c0 : Y \u2192X. We begin by recalling the basic properties of Goldstein-Prokushkin \ufb01brations that we need. 5 \fLet (X, \u02c6 \u03c9) be a compact Calabi-Yau manifold of dimension 2, with \u02c6 \u03c9 a Ricci-\ufb02at K\u00a8 ahler metric, and \u2126a nowhere vanishing holomorphic (2, 0)-form, normalized so that 1 = \u2225\u2126\u22252 \u02c6 \u03c9 = \u2126\u2227\u2126\u02c6 \u03c9\u22122. (2.1) Let \u03c91, \u03c92 \u22082\u03c0 H2(X, Z) be two (1, 1)-forms such that \u03c91 \u2227\u02c6 \u03c9 = \u03c92 \u2227\u02c6 \u03c9 = 0. From this data, Goldstein and Prokushkin [21] construct a compact 3-fold Y which is a toric \ufb01bration \u03c0 : Y \u2192X over X equipped with a (1, 0) form \u03b8 on Y satisfying \u00af \u2202\u03b8 = \u03c0\u2217(\u03c91 + i\u03c92) \u2202\u03b8 = 0. (2.2) Furthermore, the (3, 0)-form \u2126Y = \u221a 3 \u2126\u2227\u03b8 (2.3) is holomorphic and nowhere vanishing, and the (1, 1)-form \u03c70 = \u03c0\u2217(\u02c6 \u03c9) + i\u03b8 \u2227\u00af \u03b8 (2.4) is positive-de\ufb01nite on Y . Observe that i\u2126Y \u2227\u2126Y = 3\u2126\u2227\u2126\u2227i\u03b8 \u2227\u00af \u03b8 = \u2225\u2126\u22252 \u02c6 \u03c9(3\u02c6 \u03c92 \u2227i\u03b8 \u2227\u00af \u03b8) = \u2225\u2126\u22252 \u02c6 \u03c9 \u03c73 0. (2.5) Thus, de\ufb01ning the norm \u2225\u2126Y \u2225\u03c7 of the holomorphic form \u2126Y on Y with respect to a metric \u03c7 as \u2225\u2126Y \u22252 \u03c7 = i\u2126Y \u2227\u00af \u2126Y \u2227\u03c7\u22123, we have \u2225\u2126Y \u2225\u03c70 = \u2225\u2126\u2225\u02c6 \u03c9 = 1. Consequently, \u2225\u2126Y \u2225\u03c70\u03c72 0 = \u02c6 \u03c92 + 2i\u02c6 \u03c9 \u2227\u03b8 \u2227\u00af \u03b8. (2.6) This implies that d(\u2225\u2126Y \u2225\u03c70\u03c72 0) = 0 by (2.2) and the fact that \u03c91 and \u03c92 wedged with \u02c6 \u03c9 gives zero. Thus \u03c70 is a conformally balanced metric on Y . 
2.2 The Fu-Yau Ansatz

In [18, 19], Fu and Yau obtain a solution of the Hull-Strominger system in the following manner. Let $\pi: Y\to X$ be a Goldstein-Prokushkin fibration, constructed as described above from a Calabi-Yau surface $(X,\hat\omega)$, equipped with two integer-valued harmonic $(1,1)$-forms $\omega_1/(2\pi)$ and $\omega_2/(2\pi)$. Let $E_X\to X$ be a stable holomorphic vector bundle over $X$, with slope $\int_X c_1(E_X)\wedge\hat\omega = 0$. Then by the Donaldson-Uhlenbeck-Yau theorem [10, 43], $E_X$ admits a metric $H_X$ with respect to $\hat\omega$ satisfying the Hermitian-Yang-Mills equation $\hat\omega\wedge F(H_X) = 0$. Let $E = \pi^*(E_X)\to Y$ be the pull-back bundle over $Y$, and let $H = \pi^*(H_X)$. Since
\[
\chi_u^2\wedge F(H) = \pi^*\big( e^u\hat\omega\wedge F(H_X) \big)\wedge\big( e^u\hat\omega + 2i\theta\wedge\bar\theta \big) = 0, \quad (2.10)
\]
it follows that $H$ is Hermitian-Yang-Mills with respect to $\chi_u$, for any $u$. Now recall that $\chi_u$ is conformally balanced for any $u$. This means that, if we look for a solution of the Strominger system in the form $(Y,E)$, equipped with the metrics $\chi_u$ and $H$, then the only equation which is left to solve is the Green-Schwarz anomaly cancellation equation (1.2), with the scalar function $u$ defining the metric $\chi_u$ as the unknown.

The key property that allows this approach to work is that, for metrics of the form $\chi_u$ in a Goldstein-Prokushkin fibration, the equation on $Y$
\[
i\partial\bar\partial\chi_u - \frac{\alpha'}{4}\,\mathrm{Tr}\big( Rm(\chi_u)\wedge Rm(\chi_u) - F\wedge F \big) = 0 \quad (2.11)
\]
descends to an equation on the base $X$. This was established by Fu and Yau [18], and a summary of their results is as follows. First, the term $i\partial\bar\partial\chi_u$ is readily worked out, using the properties (2.2) of the form $\theta$,
\[
i\partial\bar\partial\chi_u = i\partial\bar\partial\omega_u - \bar\partial\theta\wedge\partial\bar\theta. \quad (2.12)
\]
Next, the quadratic term in the curvature tensor can be worked out to be (Proposition 8 in [18])
\[
\mathrm{Tr}\big( Rm(\chi_u)\wedge Rm(\chi_u) \big) = \mathrm{Tr}\big( Rm(\omega_u)\wedge Rm(\omega_u) \big) + \partial\bar\partial\Big( \|\Omega\|_{\omega_u}\,\mathrm{Tr}\big( \bar\partial B\wedge\partial B^*\cdot\hat\omega^{-1} \big) \Big). \quad (2.13)
\]
Here $B$ is a $(1,0)$-form depending on the data $(\hat\omega,\omega_1,\omega_2)$, which is only locally defined. However the full expression $\mathrm{Tr}(\bar\partial B\wedge\partial B^*\cdot\hat\omega^{-1})$ is not only globally well-defined on $Y$, but it is the pull-back of a globally defined real $(1,1)$-form $\rho$ on $X$,
\[
\frac14\,\mathrm{Tr}\big( \bar\partial B\wedge\partial B^*\cdot\hat\omega^{-1} \big) = \pi^*(\rho). \quad (2.14)
\]
On the other hand, from $\omega_u = e^u\hat\omega$, it follows that
\[
Rm(\omega_u) = -\partial\bar\partial u\otimes I + Rm(\hat\omega) \quad (2.15)
\]
and hence, in view of the fact that the metric $\hat\omega$ is Ricci-flat,
\[
\mathrm{Tr}\big( Rm(\omega_u)\wedge Rm(\omega_u) \big) = \mathrm{Tr}\big( Rm(\hat\omega)\wedge Rm(\hat\omega) \big) + 2\,\partial\bar\partial u\wedge\partial\bar\partial u. \quad (2.16)
\]
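The trace computation behind (2.16) can be spelled out (this is just (2.15) squared, up to the sign conventions of §2; on a surface $\mathrm{Tr}\,I = 2$):
\[
\mathrm{Tr}\Big( \big( -\partial\bar\partial u\otimes I + Rm(\hat\omega) \big)\wedge\big( -\partial\bar\partial u\otimes I + Rm(\hat\omega) \big) \Big)
= (\mathrm{Tr}\,I)\,\partial\bar\partial u\wedge\partial\bar\partial u - 2\,\partial\bar\partial u\wedge\mathrm{Tr}\,Rm(\hat\omega) + \mathrm{Tr}\big( Rm(\hat\omega)\wedge Rm(\hat\omega) \big),
\]
and the middle term vanishes because $\mathrm{Tr}\,Rm(\hat\omega)$ is the Ricci form of the Ricci-flat metric $\hat\omega$.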
However the full expression Tr(\u00af \u2202B \u2227\u2202B\u2217\u00b7 \u02c6 \u03c9\u22121) is not only globally well-de\ufb01ned on Y , but it is the pull-back of a globally de\ufb01ned real (1, 1)-form \u03c1 on X, 1 4Tr(\u00af \u2202B \u2227\u2202B\u2217\u00b7 \u02c6 \u03c9\u22121) = \u03c0\u2217(\u03c1). (2.14) On the other hand, from \u03c9u = eu\u02c6 \u03c9, it follows that Rm(\u03c9u) = \u2212\u2202\u00af \u2202u \u2297I + Rm(\u02c6 \u03c9) (2.15) 7 \fand hence, in view of the fact that the metric \u02c6 \u03c9 is Ricci-\ufb02at, Tr(Rm(\u03c9u) \u2227Rm(\u03c9u)) = Tr(Rm(\u02c6 \u03c9) \u2227Rm(\u02c6 \u03c9)) + 2\u2202\u00af \u2202u \u2227\u2202\u00af \u2202u. (2.16) Altogether, the Green-Schwarz anomaly cancellation equation can be written as the following equation on the base manifold X, 0 = i\u2202\u00af \u2202(\u03c9u \u2212\u03b1\u2032\u2225\u2126\u2225\u03c9u\u03c1) \u2212\u03b1\u2032 2 (\u2202\u00af \u2202u) \u2227(\u2202\u00af \u2202u) + \u00b5 (2.17) where we have set \u00b5 = \u2212\u00af \u2202\u03b8 \u2227\u2202\u00af \u03b8 \u2212\u03b1\u2032 4 Tr(Rm(\u02c6 \u03c9) \u2227Rm(\u02c6 \u03c9)) + \u03b1\u2032 4 Tr(F(HX) \u2227F(HX)). (2.18) which is a well-de\ufb01ned (2, 2)-form on X. The equation (2.17) is the Fu-Yau equation. Clearly, a necessary condition for the existence of solutions is Z X \u00b5 = 0. (2.19) This condition was shown to be su\ufb03cient by Fu and Yau in [18] for \u03b1\u2032 > 0 and in [19] for \u03b1\u2032 < 0. Examples of \ufb01brations \u03c0 : Y \u2192X and vector bundles E \u2192X satisfying R X \u00b5 = 0 are exhibited in [18, 19]. 2.3 Reduction of the Anomaly \ufb02ow by the Fu-Yau Ansatz We consider now the Anomaly \ufb02ow (1.5) on a Goldstein-Prokushkin \ufb01bration \u03c0 : Y \u2192X, equipped with the holomorphic (3, 0)-form \u2126Y and restricted to metrics of the form \u03c7 = \u03c7u. Recall that F = F(\u03c0\u2217(HX)) is \ufb01xed, and that \u039b\u03c7F = 0. Thus the metric HX remains \ufb01xed along the \ufb02ow, and we need only concentrate on the \ufb02ow for the metric \u03c7u. We work out both sides of the \ufb02ow (1.5) in this setting. Recall that we have set \u03c9u = eu\u02c6 \u03c9 which is a metric on X. From the earlier equation (2.9) and the fact that the term \u02c6 \u03c9 \u2227\u03b8 \u2227\u00af \u03b8 is time-independent, it follows at once that \u2202t(\u2225\u2126Y \u2225\u03c7u\u03c72 u) = \u2202t(\u2225\u2126\u2225\u03c9u\u03c92 u). (2.20) On the other hand, the same formulas derived by Fu and Yau [18] for their reduction of the anomaly equation on Y to an equation on X and which we described in the previous section give i\u2202\u00af \u2202\u03c7u \u2212\u03b1\u2032 4 Tr(Rm(\u03c7u) \u2227Rm(\u03c7u) \u2212F \u2227F) = i\u2202\u00af \u2202(\u03c9u \u2212\u03b1\u2032\u2225\u2126\u2225\u03c9u\u03c1) \u2212\u03b1\u2032 2 \u2202\u00af \u2202u \u2227\u2202\u00af \u2202u + \u00b5 (2.21) 8 \fwith \u00b5 the (2, 2)-form de\ufb01ned by (2.18). Thus the Anomaly \ufb02ow for Goldstein-Prokushkin \ufb01brations is equivalent to the \ufb02ow for metrics on X given by \u2202t(\u2225\u2126\u2225\u03c9u\u03c92 u) = i\u2202\u00af \u2202(\u03c9u \u2212\u03b1\u2032\u2225\u2126\u2225\u03c9u\u03c1) \u2212\u03b1\u2032 2 \u2202\u00af \u2202u \u2227\u2202\u00af \u2202u + \u00b5. (2.22) Since we wish to apply techniques of geometric \ufb02ows, it is useful to re-express the \ufb02ow entirely in terms of curvature. 
If we denote by $\mathrm{Ric}_{\omega_u}$ the Chern-Ricci tensor of $\omega_u$, we have
\[
\mathrm{Ric}_{\omega_u} = -2\,\partial\bar\partial u \quad (2.23)
\]
since the metric $\hat\omega$ is Ricci-flat. Thus the Anomaly flow can be rewritten as the following flow of metrics on $X$,
\[
\partial_t\big( \|\Omega\|_\omega\,\omega^2 \big) = i\partial\bar\partial\big( \omega - \alpha'\|\Omega\|_\omega\,\rho \big) - \frac{\alpha'}{8}\,\mathrm{Ric}_\omega\wedge\mathrm{Ric}_\omega + \mu \quad (2.24)
\]
which we can now take as our starting point. Here we have suppressed the subindex $u$ in $\omega_u$.

A technical issue in Anomaly flows is that they are formulated in terms of flows for $\|\Omega\|_\omega\,\omega^2$, and not of $\omega$ itself. This issue was addressed in all generality in [32] in dimension 3. For the above Anomaly flow on the surface $X$ arising from the Goldstein-Prokushkin fibration, the metric $\omega$ is already characterized by its volume form, and we can proceed more directly as follows. First, using the two-dimensional identity $2\,\partial_t\omega\wedge\omega = (\partial_t\log\omega^2)\,\omega^2$ and the fact that $\omega^2 = \|\Omega\|_\omega^{-2}\,\hat\omega^2$, we can rewrite the left hand side as
\[
\partial_t\big( \|\Omega\|_\omega\,\omega^2 \big) = \|\Omega\|_\omega\big( (\partial_t\log\|\Omega\|_\omega)\,\omega^2 + 2\,\partial_t\omega\wedge\omega \big) = -\|\Omega\|_\omega\,(\partial_t\log\|\Omega\|_\omega)\,\omega^2. \quad (2.25)
\]
Next, we work out the right hand side more explicitly. Following [32], we define the torsion $T(\omega) = \frac12 T_{\bar kpq}\, dz^q\wedge dz^p\wedge d\bar z^k$ of a Hermitian metric $\omega$ by
\[
T = i\partial\omega, \qquad \bar T = -i\bar\partial\omega, \quad (2.26)
\]
and we also introduce the $(1,0)$-form $T_m$ and the $(0,1)$-form $\bar T_{\bar m}$ by
\[
T_m = g^{j\bar k}\, T_{\bar kjm}, \qquad \bar T_{\bar m} = g^{\bar jk}\,\bar T_{k\bar j\bar m}. \quad (2.27)
\]
Then
\[
(i\partial\bar\partial\omega)_{\bar kj\bar\ell m} = R_{\bar kj\bar\ell m} - R_{\bar km\bar\ell j} + R_{\bar\ell m\bar kj} - R_{\bar\ell j\bar km} + g^{s\bar r}\, T_{\bar rmj}\,\bar T_{s\bar k\bar\ell}. \quad (2.28)
\]
In general, there are several notions of Ricci curvature for Hermitian metrics, given by
\[
R_{\bar kj} = R_{\bar kj}{}^p{}_p, \qquad \tilde R_{\bar kj} = R^p{}_{p\bar kj}, \qquad R'_{\bar kj} = R_{\bar k}{}^p{}_{pj}, \qquad R''_{\bar kj} = R^p{}_{j\bar kp}. \quad (2.29)
\]
For metrics of the form $\omega = e^u\hat\omega$, where $\hat\omega$ is Kähler and Ricci-flat, the following important relations between torsion and curvature hold,
\[
T_q(\omega) = \partial_q\log\|\Omega\|_\omega, \qquad \bar T_{\bar q}(\omega) = \partial_{\bar q}\log\|\Omega\|_\omega \quad (2.30)
\]
and
\[
R_{\bar kj}(\omega) = 2\nabla_{\bar k}T_j(\omega) = 2\nabla_j\bar T_{\bar k}(\omega), \qquad R'_{\bar kj}(\omega) = R''_{\bar kj}(\omega) = \frac12\, R_{\bar kj}(\omega). \quad (2.31)
\]
Also, because $\hat\omega$ is Kähler, we have
\[
T(\omega) = i\,\partial u\wedge\omega \quad (2.32)
\]
so that the $(1,0)$-form $T_m$ actually determines in our case the full $(2,1)$ torsion tensor $T(\omega)$. Henceforth, unless explicitly indicated otherwise, we shall designate by $T$ the $(1,0)$-form $T_m\, dz^m$ rather than the $(2,1)$-form $i\partial\omega$. For example, the norm $|T|^2$ will designate the expression
\[
|T|^2 = g^{m\bar\ell}\, T_m\,\bar T_{\bar\ell} \quad (2.33)
\]
rather than $|i\partial\omega|^2$ (which can be verified to be equal to $2|T|^2$).
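The identity (2.32) is a one-line computation, spelled out here for convenience; it only uses $d\hat\omega = 0$:
\[
i\partial\omega = i\,\partial\big( e^u\hat\omega \big) = i\, e^u\,\partial u\wedge\hat\omega = i\,\partial u\wedge\omega.
\]
In particular, the full $(2,1)$-torsion is determined by $\partial u$, consistently with $T_q = \partial_q\log\|\Omega\|_\omega = -\partial_q u$, which follows from (2.30) together with $\|\Omega\|_{\omega_u} = e^{-u}$ in (2.8).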
Using these relations, and the fact that we are in dimension 2, we find
\[
i\partial\bar\partial\omega = \frac12\big( -R + 2|T|^2 \big)\,\frac{\omega^2}{2}. \quad (2.34)
\]
Substituting this equation and (2.25) in the flow (2.24), we obtain
\[
\partial_t\log\|\Omega\|_\omega = \frac{1}{\|\Omega\|_\omega}\Big( \frac{R}{2} - |T|^2 + 2\alpha'\,\frac{i\partial\bar\partial(\|\Omega\|_\omega\,\rho)}{\omega^2} - \frac{\alpha'}{4}\,\sigma_2(i\mathrm{Ric}_\omega) - \|\Omega\|^2_\omega\,\tilde\mu \Big), \quad (2.35)
\]
where we have introduced the time-independent scalar function $\tilde\mu$ by $\mu = \tilde\mu\,\frac{\hat\omega^2}{2}$, and the $\sigma_2$ operator with respect to the evolving metric,
\[
2\,\sigma_2(i\mathrm{Ric}_\omega)\,\frac{\omega^2}{2} = i\mathrm{Ric}_\omega\wedge i\mathrm{Ric}_\omega. \quad (2.36)
\]
Since the metric $\omega = e^u\hat\omega$ is entirely determined by the conformal factor $e^u$, this flow for the volume form is equivalent to the flow of metrics (1.6) quoted in the Introduction. The flow in terms of the conformal factor $u$ is easily worked out to be given by the equation (1.9).

2.4 Comparisons between the 3-dimensional Anomaly flow and its 2-dimensional reduction

It may be noteworthy that the flow (2.24) retains many of the features of the original Anomaly flow in 3 dimensions. Indeed, as shown in [32], the conformally balanced condition (1.3) in dimension 3 implies that the Hermitian metric $\chi$ on the 3-fold $Y$ satisfies exactly the same relations (2.30) and (2.31) between torsion and curvature as the metric $\omega = e^u\hat\omega$ on the surface $X$,
\[
T_q(\chi) = \partial_q\log\|\Omega_Y\|_\chi, \qquad \bar T_{\bar q}(\chi) = \partial_{\bar q}\log\|\Omega_Y\|_\chi \quad (2.37)
\]
and
\[
R_{\bar kj}(\chi) = 2\nabla_{\bar k}T_j(\chi) = 2\nabla_j\bar T_{\bar k}(\chi), \qquad R'_{\bar kj}(\chi) = R''_{\bar kj}(\chi) = \frac12\, R_{\bar kj}(\chi). \quad (2.38)
\]
This suggests that the flow (1.6) is interesting not just as a special case of the general Anomaly flow, but also as a good model for developing general methods for studying the flow.

2.5 Starting the Flow

In [31] general conditions were given for the short-time existence of the Anomaly flow, using the Nash-Moser implicit function theorem. However, the short-time existence of the flow can be seen more directly from the parabolicity of the flow, which holds when the form
\[
\omega' = e^u\hat\omega + \alpha'e^{-u}\rho + \alpha'\, i\partial\bar\partial u > 0 \quad (2.39)
\]
is positive definite. This can be seen from the scalar equation (1.9). We will always assume that we start the flow from a large constant multiple of the background metric,
\[
u(x,0) = \log M \gg 1, \qquad \omega(0) = e^{u(0)}\hat\omega = M\hat\omega. \quad (2.40)
\]
Recall that $\mu$ is defined in (2.18). In all that follows, we will assume that the cohomological condition
\[
\int_X\mu = 0 \quad (2.41)
\]
is satisfied. Integrating (2.22), and using the fact that $\|\Omega\|_\omega\,\omega^2 = e^u\hat\omega^2$, gives the following conservation law
\[
\frac{\partial}{\partial t}\int_X e^u\,\frac{\hat\omega^2}{2!} = 0. \quad (2.42)
\]
Hence
\[
\int_X e^u\,\frac{\hat\omega^2}{2!} = M \quad (2.43)
\]
along the flow.
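The conservation law (2.42), spelled out: integrating (2.22) over the compact surface $X$, each term on the right hand side drops out — the $i\partial\bar\partial$ term by Stokes' theorem, the quadratic term because $\partial\bar\partial u\wedge\partial\bar\partial u = \partial\big( \bar\partial u\wedge\partial\bar\partial u \big)$ is exact, and the last term by the assumption (2.41) — so that
\[
\partial_t\int_X \|\Omega\|_\omega\,\omega^2 = \int_X i\partial\bar\partial\big( \omega - \alpha'\|\Omega\|_\omega\rho \big) - \frac{\alpha'}{2}\int_X \partial\bar\partial u\wedge\partial\bar\partial u + \int_X\mu = 0,
\]
and since $\|\Omega\|_\omega\,\omega^2 = e^u\hat\omega^2$, this is exactly (2.42).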
3 The C⁰ estimate of the conformal factor

In this section, we will work with equation (2.22), since it is easier to work with differential forms to obtain integral estimates. We let $\hat\omega$ denote the fixed background Kähler form of $X$. We can rescale $\hat\omega$ so that $\int_X\frac{\hat\omega^2}{2!} = 1$. We will omit the background volume form $\frac{\hat\omega^2}{2!}$ when integrating scalar functions. All norms in the current section will be taken with respect to the background metric $\hat\omega$.

The starting point for the Moser iteration argument is to compute the quantity
\[
\int_X i\partial\bar\partial(e^{-ku})\wedge\omega' \quad (3.1)
\]
in two different ways. Recall that $\omega'$ is defined in (2.39). On one hand, by the definition of $\omega'$ and Stokes' theorem, we have
\[
\int_X i\partial\bar\partial(e^{-ku})\wedge\omega' = \int_X \{ e^u\hat\omega + \alpha'e^{-u}\rho \}\wedge i\partial\bar\partial(e^{-ku}). \quad (3.2)
\]
Expanding,
\[
\int_X i\partial\bar\partial(e^{-ku})\wedge\omega' = k^2\int_X e^{-ku}\{ e^u\hat\omega + \alpha'e^{-u}\rho \}\wedge i\partial u\wedge\bar\partial u - k\int_X e^{-ku}\{ e^u\hat\omega + \alpha'e^{-u}\rho \}\wedge i\partial\bar\partial u. \quad (3.3)
\]
On the other hand, without using Stokes' theorem, we obtain
\[
\int_X i\partial\bar\partial(e^{-ku})\wedge\omega' = k^2\int_X e^{-ku}\, i\partial u\wedge\bar\partial u\wedge\omega' - k\int_X e^{-ku}\{ e^u\hat\omega + \alpha'e^{-u}\rho \}\wedge i\partial\bar\partial u - \alpha'k\int_X e^{-ku}\, i\partial\bar\partial u\wedge i\partial\bar\partial u. \quad (3.4)
\]
We equate (3.3) and (3.4):
\[
0 = -k^2\int_X e^{-ku}\, i\partial u\wedge\bar\partial u\wedge\omega' + k^2\int_X e^{-ku}\{ e^u\hat\omega + \alpha'e^{-u}\rho \}\wedge i\partial u\wedge\bar\partial u + \alpha'k\int_X e^{-ku}\, i\partial\bar\partial u\wedge i\partial\bar\partial u. \quad (3.5)
\]
Using equation (2.22) and the fact that $\|\Omega\|_\omega\,\omega^2 = e^u\hat\omega^2$,
\[
0 = -k^2\int_X e^{-ku}\, i\partial u\wedge\bar\partial u\wedge\omega' + k^2\int_X e^{-ku}\{ e^u\hat\omega + \alpha'e^{-u}\rho \}\wedge i\partial u\wedge\bar\partial u - 2k\int_X e^{-ku}\mu - 2k\int_X e^{-ku}\, i\partial\bar\partial\big( e^u\hat\omega - \alpha'e^{-u}\rho \big) + 4k\int_X e^{-(k-1)u}\,\partial_t u\,\frac{\hat\omega^2}{2!}. \quad (3.6)
\]
Expanding out terms and dividing by $2k$ yields
\[
0 = -\frac{k}{2}\int_X e^{-ku}\, i\partial u\wedge\bar\partial u\wedge\omega' + \frac{k}{2}\int_X e^{-ku}\{ e^u\hat\omega + \alpha'e^{-u}\rho \}\wedge i\partial u\wedge\bar\partial u - \int_X e^{-ku}\mu
\]
\[
- \int_X e^{-(k-1)u}\, i\partial\bar\partial u\wedge\hat\omega - \int_X e^{-(k-1)u}\, i\partial u\wedge\bar\partial u\wedge\hat\omega - \alpha'\int_X e^{-(k+1)u}\, i\partial\bar\partial u\wedge\rho
\]
\[
+ \alpha'\int_X e^{-(k+1)u}\, i\partial u\wedge\bar\partial u\wedge\rho + \alpha'\int_X e^{-(k+1)u}\, i\partial\bar\partial\rho - 2\alpha'\,\mathrm{Re}\int_X e^{-(k+1)u}\, i\partial u\wedge\bar\partial\rho + 2\int_X e^{-(k-1)u}\,\partial_t u\,\frac{\hat\omega^2}{2!}. \quad (3.7)
\]
Integration by parts gives
\[
0 = -\frac{k}{2}\int_X e^{-ku}\, i\partial u\wedge\bar\partial u\wedge\omega' - \frac{k}{2}\int_X e^{-ku}\{ e^u\hat\omega + \alpha'e^{-u}\rho \}\wedge i\partial u\wedge\bar\partial u - \int_X e^{-ku}\mu + \alpha'\int_X e^{-(k+1)u}\, i\partial\bar\partial\rho - \alpha'\,\mathrm{Re}\int_X e^{-(k+1)u}\, i\partial u\wedge\bar\partial\rho + 2\int_X e^{-(k-1)u}\,\partial_t u\,\frac{\hat\omega^2}{2!}. \quad (3.8)
\]
One more integration by parts yields the following identity:
\[
\frac{k}{2}\int_X e^{-ku}\{ e^u\hat\omega + \alpha'e^{-u}\rho \}\wedge i\partial u\wedge\bar\partial u + \frac{\partial}{\partial t}\,\frac{2}{k-1}\int_X e^{-(k-1)u}\,\frac{\hat\omega^2}{2!}
= -\frac{k}{2}\int_X e^{-ku}\, i\partial u\wedge\bar\partial u\wedge\omega' - \int_X e^{-ku}\mu + \Big( \alpha' - \frac{\alpha'}{k+1} \Big)\int_X e^{-(k+1)u}\, i\partial\bar\partial\rho. \quad (3.9)
\]
The identity (3.9) will be useful later to control the infimum of $u$, but to control the supremum of $u$, we replace $k$ with $-k$ in (3.9). Then, for $k\neq 1$,
$$\frac{k}{2}\int_X e^{(k+1)u}\{\hat\omega + \alpha' e^{-2u}\rho\}\wedge i\partial u\wedge\bar\partial u + \frac{\partial}{\partial t}\,\frac{2}{k+1}\int_X e^{(k+1)u}\,\frac{\hat\omega^2}{2!} \quad (3.10)$$
$$= -\frac{k}{2}\int_X e^{ku}\,i\partial u\wedge\bar\partial u\wedge\omega' + \int_X e^{ku}\mu - \Big(\alpha' - \frac{\alpha'}{1-k}\Big)\int_X e^{(k-1)u}\,i\partial\bar\partial\rho.$$

3.1 Estimating the supremum

Proposition 1. Start the flow with initial data $e^{u(x,0)} = M$. Suppose the flow exists for $t\in[0,T)$ with $T > 0$, and that $\inf_X e^u \ge 1$ and $\alpha' e^{-2u}\rho \ge -\frac{1}{2}\hat\omega$ for all time $t\in[0,T)$. Then
$$\sup_{X\times[0,T)} e^u \le C_1 M, \quad (3.11)$$
where $C_1$ only depends on $(X,\hat\omega)$, $\rho$, $\mu$, $\alpha'$.

Proof: As long as the flow exists, we have
$$i\partial u\wedge\bar\partial u\wedge\omega' \ge 0. \quad (3.12)$$
Let $\beta = \frac{n}{n-1} = 2$. We can use (3.12), (3.10), and $\alpha' e^{-2u}\rho \ge -\frac{1}{2}\hat\omega$ to derive the following estimate for any $k\ge\beta$:
$$\frac{k}{4}\int_X e^{(k+1)u}|Du|^2 + \frac{\partial}{\partial t}\,\frac{2}{k+1}\int_X e^{(k+1)u} \le (\|\mu\|_{L^\infty} + 2|\alpha'|\|\rho\|_{C^2})\Big(\int_X e^{ku} + \int_X e^{(k-1)u}\Big). \quad (3.13)$$
Here we omit the background volume form $\frac{\hat\omega^2}{2!}$ when integrating scalars.

We now consider two cases: the case of small time and the case of large time. We must consider both these cases carefully because the objective is more than just to control $e^u$ uniformly in time; rather, we need to establish that $e^u$ stays comparable to the scale $M$ for all times.

We begin with the estimate for large time. Suppose $T\in[n, n+1]$ for an integer $n\ge 1$. Let $n-1 < \tau < \tau' < T$. Let $\zeta(t)\ge 0$ be a monotone function which is zero for $t\le\tau$, identically $1$ for $t\ge\tau'$, and $|\zeta'|\le 2(\tau'-\tau)^{-1}$. Multiplying inequality (3.13) by $\zeta$ gives, for any $k\ge\beta$,
$$\frac{k\zeta}{4}\int_X e^{(k+1)u}|Du|^2 + \frac{\partial}{\partial t}\,\frac{2\zeta}{k+1}\int_X e^{(k+1)u} \le (\|\mu\|_{L^\infty} + 2|\alpha'|\|\rho\|_{C^2})\Big\{\zeta\int_X e^{(k-1)u} + \zeta\int_X e^{ku}\Big\} + \frac{2\zeta'}{k+1}\int_X e^{(k+1)u}. \quad (3.14)$$
Let $\tau' < s \le T$. Integrating from $\tau$ to $s$ yields
$$\frac{k}{4}\int_{\tau'}^{s}\int_X e^{(k+1)u}|Du|^2 + \frac{2}{k+1}\int_X e^{(k+1)u}(s) \quad (3.15)$$
$$\le C\Big\{\int_\tau^T\int_X e^{(k-1)u} + \int_\tau^T\int_X e^{ku} + \frac{1}{\tau'-\tau}\int_\tau^T\int_X e^{(k+1)u}\Big\}, \quad (3.16)$$
for any $k\ge\beta$, where $C$ only depends on $\alpha'$, $\rho$, $\mu$. We rearrange this inequality to obtain, for $k\ge\beta+1$,
$$\frac{(k-1)}{k}\int_{\tau'}^{s}\int_X |De^{\frac{k}{2}u}|^2 + \int_X e^{ku}(s) \le Ck\Big\{\int_\tau^T\int_X e^{(k-2)u} + \int_\tau^T\int_X e^{(k-1)u} + \frac{1}{\tau'-\tau}\int_\tau^T\int_X e^{ku}\Big\}. \quad (3.17)$$
Using $e^{-u}\le 1$,
$$\int_{\tau'}^{s}\int_X |De^{\frac{k}{2}u}|^2 + \int_X e^{ku}(s) \le Ck\Big\{1 + \frac{1}{\tau'-\tau}\Big\}\Big\{\int_\tau^T\int_X e^{ku}\Big\}. \quad (3.18)$$
The Sobolev inequality gives us
$$\Big(\int_X e^{k\beta u}\Big)^{\frac{1}{\beta}} \le C'_X\Big(\int_X |e^{\frac{k}{2}u}|^2 + \int_X |De^{\frac{k}{2}u}|^2\Big), \quad (3.19)$$
where $C'_X$ is the Sobolev constant of the manifold $(X,\hat\omega)$. Let $\beta^*$ be such that $\frac{1}{\beta} + \frac{1}{\beta^*} = 1$.
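For the reader's convenience we record the numerology of the iteration (elementary arithmetic, used repeatedly below): since $n = 2$,
$$\beta = \beta^* = 2, \qquad \gamma = 1 + \frac{1}{\beta^*} = \frac{3}{2}, \qquad \sum_{i\ge 0}\gamma^{-i} = \frac{1}{1-\frac{2}{3}} = 3.$$
The gain of the exponent from $k$ to $\gamma k$ at each step is what drives the Moser iteration, and the value $3$ of the geometric series is exactly the power of $(\theta_1-\theta_2)^{-1}$ appearing in (3.23).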
By Hölder's inequality and the Sobolev inequality,
$$\int_{\tau'}^{T}\int_X e^{ku}e^{\frac{k}{\beta^*}u} \le \int_{\tau'}^{T}\Big(\int_X e^{k\beta u}\Big)^{1/\beta}\Big(\int_X e^{ku}\Big)^{1/\beta^*} \le C'_X\,\sup_{t\in[\tau',T]}\Big(\int_X e^{ku}\Big)^{1/\beta^*}\int_{\tau'}^{T}\Big\{\int_X e^{ku} + \int_X |De^{\frac{k}{2}u}|^2\Big\}. \quad (3.20)$$
Using estimate (3.18), and defining $\gamma = 1 + \frac{1}{\beta^*} = 1 + \frac{1}{2}$, we have for $k\ge 1+\beta$,
$$\Big(\int_{\tau'}^{T}\int_X e^{\gamma ku}\Big)^{1/\gamma} \le Ck\Big\{1 + \frac{1}{\tau'-\tau}\Big\}\int_\tau^T\int_X e^{ku}. \quad (3.21)$$
We will iterate with $\tau_k = (n-1) + \theta_1 - \gamma^{-k}(\theta_1-\theta_2)$, for fixed $0 < \theta_2 < \theta_1 \le 1$:
$$\Big(\int_{\tau_{k+1}}^{T}\int_X e^{\gamma^{k+1}u}\Big)^{1/\gamma^{k+1}} \le \Big\{C\gamma^k + (\theta_1-\theta_2)^{-1}\frac{C\gamma^{2k}}{1-\gamma^{-1}}\Big\}^{1/\gamma^k}\Big\{\int_{\tau_k}^{T}\int_X e^{\gamma^ku}\Big\}^{1/\gamma^k}. \quad (3.22)$$
Iterating, and using $\sum_i\gamma^{-i} = 3$, we see that for $p = \gamma^{\kappa_0} \ge 1+\beta$, there holds
$$\sup_{X\times[n-1+\theta_1,T]} e^u \le \frac{C}{(\theta_1-\theta_2)^3}\,\|e^u\|_{L^p(X\times[n-1+\theta_2,T])}, \quad (3.23)$$
where $C$ only depends on $(X,\hat\omega)$, $\rho$, $\mu$, and $\alpha'$. A standard argument can be used to relate the $L^p$ norm of $e^u$ to $\int_X e^u = M$. Indeed, by Young's inequality,
$$\sup_{X\times[n-1+\theta_1,T]} e^u \le C(\theta_1-\theta_2)^{-3}\Big(\sup_{X\times[n-1+\theta_2,T]} e^{(1-1/p)u}\Big)\Big(\int_{X\times[n-1+\theta_2,T]} e^u\Big)^{1/p} \le \frac{1}{2}\sup_{X\times[n-1+\theta_2,T]} e^u + C(\theta_1-\theta_2)^{-3p}\int_{X\times[n-1,T]} e^u, \quad (3.24)$$
for all $0 < \theta_2 < \theta_1 \le 1$. We iterate this inequality with $\theta_0 = 1$ and $\theta_{i+1} = \theta_i - \frac{1}{2}(1-\eta)\eta^{i+1}$, where $\frac{1}{2} < \eta^{3p} < 1$. Then for each $k > 1$,
$$\sup_{X\times[n,T]} e^u \le \frac{1}{2^k}\Big(\sup_{X\times[n-1+\theta_k,T]} e^u\Big) + \frac{2^{3p}CM}{(1-\eta)^{3p}\eta^{3p}}\sum_{i=0}^{k-1}\Big(\frac{1}{2\eta^{3p}}\Big)^i. \quad (3.25)$$
Taking the limit as $k\to\infty$, we obtain a constant $C$ depending only on $(X,\hat\omega)$, $\rho$, $\mu$, $\alpha'$ such that
$$\sup_{X\times[n,T]} e^u \le CM, \quad (3.26)$$
for any $T\in[n,n+1]$ and integer $n\ge 1$.

Next, we adapt the previous estimate to the small time region $[0,T]\subseteq[0,1]$. The argument is similar in essence, and we provide all details for completeness. Integrating (3.13) from $0$ to $0 < s \le T$ yields
$$\frac{k}{4}\int_0^s\int_X e^{(k+1)u}|Du|^2 + \frac{2}{k+1}\int_X e^{(k+1)u}(s) \le C\Big\{\int_0^T\int_X e^{(k-1)u} + \int_0^T\int_X e^{ku} + M^{k+1}\Big\},$$
for any $k\ge\beta$, where $C$ only depends on $\alpha'$, $\rho$, $\mu$. We rearrange this inequality to obtain, for $k\ge\beta+1$,
$$\frac{(k-1)}{k}\int_0^s\int_X |De^{\frac{k}{2}u}|^2 + \int_X e^{ku}(s) \le Ck\Big\{\int_0^T\int_X e^{(k-2)u} + \int_0^T\int_X e^{(k-1)u} + M^k\Big\}. \quad (3.27)$$
Using $e^{-u}\le 1$, we obtain the following estimate, which holds uniformly for all $0 < s \le T$:
$$\int_0^s\int_X |De^{\frac{k}{2}u}|^2 + \int_X e^{ku}(s) \le Ck\Big\{\int_0^T\int_X e^{ku} + M^k\Big\}. \quad (3.28)$$
As in the estimate (3.20), by the Hölder and Sobolev inequalities there holds
$$\int_0^T\int_X e^{ku}e^{\frac{k}{\beta^*}u} \le C'_X\,\sup_{s\in[0,T]}\Big(\int_X e^{ku}\Big)^{1/\beta^*}\int_0^T\Big\{\int_X e^{ku} + \int_X |De^{\frac{k}{2}u}|^2\Big\}. \quad (3.29)$$
Recall that $\gamma = 1+\frac{1}{\beta^*}$. Thus for $k\ge 1+\beta$,
$$\int_0^T\int_X e^{k\gamma u} \le (Ck)^\gamma\Big(\int_0^T\int_X e^{ku} + M^k\Big)^\gamma. \quad (3.30)$$
Therefore
$$\Big(\int_0^T\int_X e^{k\gamma u} + M^{k\gamma}\Big)^{1/\gamma} \le \Big\{(Ck)^\gamma\Big(\int_0^T\int_X e^{ku} + M^k\Big)^\gamma + M^{k\gamma}\Big\}^{1/\gamma}, \quad (3.31)$$
and hence
$$\Big(\int_0^T\int_X e^{k\gamma u} + M^{k\gamma}\Big)^{1/\gamma} \le Ck\Big\{\int_0^T\int_X e^{ku} + M^k\Big\}. \quad (3.32)$$
It follows that for all $\gamma k \ge 1+\beta$,
$$\Big(\int_0^T\int_X e^{\gamma^{k+1}u} + M^{\gamma^{k+1}}\Big)^{1/\gamma^{k+1}} \le \Big\{C\gamma^k\Big\}^{1/\gamma^k}\Big\{\int_0^T\int_X e^{\gamma^ku} + M^{\gamma^k}\Big\}^{1/\gamma^k}. \quad (3.33)$$
Iterating, we see that for all $k$ such that $\gamma^k \ge \gamma^{\kappa_0} \ge 1+\beta$,
$$\Big(\int_0^T\int_X e^{\gamma^{k+1}u}\Big)^{1/\gamma^{k+1}} \le \Big\{\prod_{i=\kappa_0}^{k}\big(C\gamma^i\big)^{1/\gamma^i}\Big\}\Big\{\int_0^T\int_X e^{\gamma^{\kappa_0}u} + M^{\gamma^{\kappa_0}}\Big\}^{1/\gamma^{\kappa_0}}. \quad (3.34)$$
Sending $k\to\infty$, we obtain for $p = \gamma^{\kappa_0}$,
$$\sup_{X\times[0,T]} e^u \le C\big(\|e^u\|_{L^p(X\times[0,T])} + M\big), \quad (3.35)$$
where $C$ only depends on $(X,\hat\omega)$, $\rho$, $\mu$, and $\alpha'$. Lastly, we relate the $L^p$ norm of $e^u$ to $\int_X e^u = M$. By the previous estimate,
$$\sup_{X\times[0,T]} e^u \le C\Big(\sup_{X\times[0,T]} e^{(p-1)u}\Big)^{\frac{1}{p}}\Big(\int_{X\times[0,T]} e^u\Big)^{1/p} + CM. \quad (3.36)$$
We absorb the supremum term on the right-hand side using Young's inequality. Therefore,
$$\sup_{X\times[0,T]} e^u \le CTM + CM \le CM, \quad (3.37)$$
for any $0 < T \le 1$, where $C$ only depends on $(X,\hat\omega)$, $\rho$, $\mu$, and $\alpha'$. By combining (3.26) and (3.37), we conclude the proof of Proposition 1. Q.E.D.

3.2 Estimating the infimum

We introduce the constant
$$\theta = \frac{1}{2C_1-1}. \quad (3.38)$$
Note that since $C_1 \ge 1$, we must have $0 < \theta \le 1$. Fix a small constant $0 < \delta < 1$ such that
$$\delta < \frac{\theta}{4C_X(|\alpha'|\|\rho\|_{C^2} + \|\mu\|_{C^0})}, \qquad \text{and} \qquad \alpha'\delta^2\rho \ge -\frac{1}{2}\hat\omega, \quad (3.39)$$
where $C_X$ is the Poincaré constant of the reference Kähler manifold $(X,\hat\omega)$. Define
$$S_\delta := \{t\in[0,T) : \sup_X e^{-u} \le \delta\}. \quad (3.40)$$
Recall that we start the flow at $u_0 = \log M$. It follows that if $M > \delta^{-1}$, then the flow starts in the region $S_\delta$. At any time $\hat t\in S_\delta$, we consider $U = \{z\in X : e^{-u} \le \frac{2}{M}\}$. Since $e^u < \frac{M}{2}$ on $X\setminus U$, Proposition 1 gives
$$M = \int_U e^u + \int_{X\setminus U} e^u \le |U|\sup_X e^u + (1-|U|)\frac{M}{2} \le C_1M|U| + (1-|U|)\frac{M}{2}. \quad (3.41)$$
Rearranging yields $|U| \ge (2C_1-1)^{-1}$, so that at any such $\hat t$,
$$|U| \ge \theta > 0. \quad (3.42)$$
We will also need the constant $C_0 > 1$ defined by
$$C_0 = \frac{1}{1-\frac{\theta}{4}}\Big(1+\frac{2}{\theta}\Big)\Big(\frac{2}{\theta^2}\Big). \quad (3.43)$$

3.2.1 Integral estimate

Proposition 2. Start the flow at $u_0 = \log M$, where $M$ is large enough that the flow starts in the region $S_\delta$. Suppose $[0,T]\subseteq S_\delta$. Then on $[0,T]$, there holds
$$\int_X e^{-u} \le \frac{2C_0}{M}. \quad (3.44)$$
Proof: At $t = 0$, we have $\int_X e^{-u} = \frac{1}{M} < \frac{2C_0}{M}$. Suppose $\hat t\in S_\delta$ is the first time when we reach $\int_X e^{-u} = \frac{2C_0}{M}$. Then we must have
$$\frac{\partial}{\partial t}\Big|_{t=\hat t}\int_X e^{-u} \ge 0. \quad (3.45)$$
Setting $k = 2$ in (3.9) and dropping the negative term involving $\omega' \ge 0$, we have
$$\int_X e^{-u}\{\hat\omega + \alpha' e^{-2u}\rho\}\wedge i\partial u\wedge i\bar\partial u + 2\frac{\partial}{\partial t}\int_X e^{-u} \le \Big(|\alpha'|\|\rho\|_{C^2}\int_X e^{-3u} + \|\mu\|_{C^0}\int_X e^{-2u}\Big).$$
Since $\frac{\partial}{\partial t}\big|_{t=\hat t}\int_X e^{-u} \ge 0$, and $e^{-u}\le\delta < 1$, there holds at $\hat t$,
$$\int_X |De^{-\frac{u}{2}}|^2 \le (|\alpha'|\|\rho\|_{C^2} + \|\mu\|_{C^0})\,\delta\int_X e^{-u}. \quad (3.46)$$
By the Poincaré inequality,
$$\int_X e^{-u} - \Big(\int_X e^{-\frac{u}{2}}\Big)^2 = \int_X\Big|e^{-\frac{u}{2}} - \int_X e^{-\frac{u}{2}}\Big|^2 \le C_X\int_X |De^{-\frac{u}{2}}|^2. \quad (3.47)$$
By (3.39), we have
$$\int_X e^{-u} - \Big(\int_X e^{-\frac{u}{2}}\Big)^2 \le \frac{\theta}{4}\int_X e^{-u}, \quad (3.48)$$
which implies
$$\int_X e^{-u} \le \frac{1}{1-\frac{\theta}{4}}\Big(\int_X e^{-\frac{u}{2}}\Big)^2. \quad (3.49)$$
We may use the measure estimate and (3.49) to obtain
$$\Big(\int_X e^{-\frac{u}{2}}\Big)^2 \le \Big(1+\frac{2}{\theta}\Big)\Big(\int_U e^{-\frac{u}{2}}\Big)^2 + \Big(1+\frac{\theta}{2}\Big)\Big(\int_{X\setminus U} e^{-\frac{u}{2}}\Big)^2 \le \Big(1+\frac{2}{\theta}\Big)|U|\int_U e^{-u} + \Big(1+\frac{\theta}{2}\Big)(1-|U|)\int_{X\setminus U} e^{-u} \le \Big(1+\frac{2}{\theta}\Big)\frac{2}{M} + \Big(1+\frac{\theta}{2}\Big)(1-\theta)\frac{1}{1-\frac{\theta}{4}}\Big(\int_X e^{-\frac{u}{2}}\Big)^2. \quad (3.50)$$
Thus
$$\Big(\int_X e^{-\frac{u}{2}}\Big)^2 \le \Big(1+\frac{2}{\theta}\Big)\frac{2}{M}\Big(\frac{1}{1 - (1+\frac{\theta}{2})(1-\theta)(1-\frac{\theta}{4})^{-1}}\Big). \quad (3.51)$$
For any $\theta\ge 0$, we have the elementary estimate
$$\Big(1+\frac{\theta}{2}\Big)(1-\theta)\Big(1-\frac{\theta}{4}\Big)^{-1} \le 1-\theta^2, \quad (3.52)$$
which follows since $(1-\theta^2)(1-\frac{\theta}{4}) - (1+\frac{\theta}{2})(1-\theta) = \frac{\theta}{4}(1-\theta)^2 \ge 0$. Using this and (3.49),
$$\int_X e^{-u} \le \frac{1}{1-\frac{\theta}{4}}\Big(1+\frac{2}{\theta}\Big)\Big(\frac{2}{\theta^2}\Big)\frac{1}{M} = \frac{C_0}{M}. \quad (3.53)$$
This contradicts $\int_X e^{-u} = \frac{2C_0}{M}$ at $\hat t$. It follows that $\int_X e^{-u}$ stays less than $\frac{2C_0}{M}$ for all time $t\in S_\delta$.

3.2.2 Iteration

Proposition 3. Start the flow with initial data $e^{u(x,0)} = M$. Suppose the flow exists for $t\in[0,T)$ with $T > 0$, and $[0,T)\subseteq S_\delta$. Then
$$\sup_{X\times[0,T)} e^{-u} \le \frac{C_2}{M}, \quad (3.54)$$
where $C_2$ only depends on $(X,\hat\omega)$, $\rho$, $\mu$, $\alpha'$.

Proof: We can drop the negative terms involving $\omega' \ge 0$ and use $\alpha' e^{-2u}\rho \ge -\frac{1}{2}\hat\omega$ in (3.9) to obtain the estimate, for $k\ge 2$,
$$\frac{k}{4}\int_X e^{-(k-1)u}|Du|^2 + \frac{\partial}{\partial t}\,\frac{2}{k-1}\int_X e^{-(k-1)u} \le C\Big(\int_X e^{-(k+1)u} + \int_X e^{-ku}\Big). \quad (3.55)$$
As in the upper bound on $e^u$, we split the argument into the cases of large time and small time, and first consider the case of large time. Suppose $T\in[n,n+1]$ for an integer $n\ge 1$. Let $n-1 < \tau < \tau' < T$. Let $\zeta(t)\ge 0$ be a monotone function which is zero for $t\le\tau$ and identically $1$ for $t\ge\tau'$. Multiplying (3.55) by $\zeta$ gives
$$\frac{k\zeta}{4}\int_X e^{-(k-1)u}|Du|^2 + \frac{\partial}{\partial t}\,\frac{2\zeta}{k-1}\int_X e^{-(k-1)u} \le C\Big\{\zeta\int_X e^{-(k+1)u} + \zeta\int_X e^{-ku} + \zeta'\int_X e^{-(k-1)u}\Big\}. \quad (3.56)$$
Let $\tau' < s \le T$. Integrating from $\tau$ to $s$,
$$\frac{k}{4}\int_{\tau'}^{s}\int_X e^{-(k-1)u}|Du|^2 + \frac{2}{k-1}\int_X e^{-(k-1)u}(s) \le C\Big\{\int_\tau^T\int_X e^{-(k+1)u} + \int_\tau^T\int_X e^{-ku} + \frac{1}{\tau'-\tau}\int_\tau^T\int_X e^{-(k-1)u}\Big\}. \quad (3.57)$$
We rearrange this inequality to obtain, for $k\ge 1$,
$$\int_{\tau'}^{s}\int_X |De^{-\frac{k}{2}u}|^2 + 2\int_X e^{-ku}(s) \le Ck\Big\{\int_\tau^T\int_X e^{-(k+2)u} + \int_\tau^T\int_X e^{-(k+1)u} + \frac{1}{\tau'-\tau}\int_\tau^T\int_X e^{-ku}\Big\}.$$
Since $e^{-u}\le\delta < 1$, we have
$$\int_{\tau'}^{s}\int_X |De^{-\frac{k}{2}u}|^2 + 2\int_X e^{-ku}(s) \le Ck\Big\{1+\frac{1}{\tau'-\tau}\Big\}\Big\{\int_\tau^T\int_X e^{-ku}\Big\}. \quad (3.58)$$
Recall that we denote $\beta = \frac{n}{n-1} = 2$, $\beta^*$ such that $\frac{1}{\beta}+\frac{1}{\beta^*} = 1$, and $\gamma = 1+\frac{1}{\beta^*}$. By the Sobolev inequality,
$$\int_{\tau'}^{T}\int_X e^{-ku}e^{-\frac{k}{\beta^*}u} \le \int_{\tau'}^{T}\Big(\int_X e^{-k\beta u}\Big)^{1/\beta}\Big(\int_X e^{-ku}\Big)^{1/\beta^*} \le C\,\sup_{t\in[\tau',T]}\Big(\int_X e^{-ku}\Big)^{1/\beta^*}\int_{\tau'}^{T}\Big\{\int_X e^{-ku} + \int_X |De^{-\frac{k}{2}u}|^2\Big\}. \quad (3.59)$$
Using estimate (3.58), we arrive at
$$\Big(\int_{\tau'}^{T}\int_X e^{-\gamma ku}\Big)^{1/\gamma} \le Ck\Big\{1+\frac{1}{\tau'-\tau}\Big\}\Big\{\int_\tau^T\int_X e^{-ku}\Big\}. \quad (3.60)$$
Iterating with $\tau_k = (1-\gamma^{-(k+1)}) + (n-1)$,
$$\Big(\int_{\tau_{k+1}}^{T}\int_X e^{-\gamma^{k+1}u}\Big)^{1/\gamma^{k+1}} \le \Big\{C\gamma^k + \frac{C\gamma^{2k}}{1-\gamma^{-1}}\Big\}^{1/\gamma^k}\Big\{\int_{\tau_k}^{T}\int_X e^{-\gamma^ku}\Big\}^{1/\gamma^k}. \quad (3.61)$$
Note $\tau_k \ge n-\frac{2}{3}$. Sending $k\to\infty$, we have the $C^0$ estimate
$$\sup_{X\times[n,T]} e^{-u} \le C\|e^{-u}\|_{L^1(X\times[n-\frac{2}{3},T])}. \quad (3.62)$$
By Proposition 2, for $n\le T\le n+1$ and $n\ge 1$, we obtain
$$\sup_{X\times[n,T]} e^{-u} \le \frac{C}{M}. \quad (3.63)$$
Next, we consider the small time region $[0,T]\subseteq[0,1]$. Integrating (3.55) from $0$ to $0 < s < T$, we obtain
$$\frac{k}{4}\int_0^s\int_X e^{-(k-1)u}|Du|^2 + \frac{2}{k-1}\int_X e^{-(k-1)u}(s) \le C\Big\{\int_0^T\int_X e^{-(k+1)u} + \int_0^T\int_X e^{-ku} + \frac{M^{-(k-1)}}{k-1}\Big\}.$$
We rearrange this inequality to obtain, for $k\ge 1$,
$$\int_0^s\int_X |De^{-\frac{k}{2}u}|^2 + 2\int_X e^{-ku}(s) \le Ck\Big\{\int_0^T\int_X e^{-(k+2)u} + \int_0^T\int_X e^{-(k+1)u} + M^{-k}\Big\}. \quad (3.64)$$
Since $e^{-u}\le\delta < 1$, we have
$$\int_0^s\int_X |De^{-\frac{k}{2}u}|^2 + 2\int_X e^{-ku}(s) \le Ck\Big\{\int_0^T\int_X e^{-ku} + M^{-k}\Big\}. \quad (3.65)$$
As before, by the Sobolev inequality,
$$\int_0^T\int_X e^{-ku}e^{-\frac{k}{\beta^*}u} \le C\,\sup_{s\in[0,T]}\Big(\int_X e^{-ku}\Big)^{1/\beta^*}\int_0^T\Big\{\int_X e^{-ku} + \int_X |De^{-\frac{k}{2}u}|^2\Big\}. \quad (3.66)$$
Combining this with (3.65) yields
$$\Big(\int_0^T\int_X e^{-\gamma ku} + M^{-\gamma k}\Big)^{1/\gamma} \le Ck\Big\{\int_0^T\int_X e^{-ku} + M^{-k}\Big\}. \quad (3.67)$$
Iterating, we obtain the $C^0$ estimate
$$\sup_{X\times[0,T]} e^{-u} \le C\|e^{-u}\|_{L^1(X\times[0,T])} + CM^{-1}. \quad (3.68)$$
By Proposition 2, for $0 < T \le 1$ we obtain
$$\sup_{X\times[0,T)} e^{-u} \le CTM^{-1} + CM^{-1} \le \frac{C}{M}. \quad (3.69)$$
By combining (3.63) and (3.69), we conclude the proof of Proposition 3. Q.E.D.

Theorem 3. Suppose the flow exists for $t\in[0,T)$, and initially starts with $u_0 = \log M$. There exists $M_0\gg 1$ such that for all $M\ge M_0$, there holds
$$\sup_{X\times[0,T)} e^u \le C_1M, \qquad \sup_{X\times[0,T)} e^{-u} \le \frac{C_2}{M}, \quad (3.70)$$
where $C_1$, $C_2$ only depend on $(X,\hat\omega)$, $\rho$, $\mu$, $\alpha'$.

Proof: By Proposition 1 and Proposition 3, the estimates hold as long as we stay in $S_\delta$. Choose $M_0$ such that
$$\frac{C_2}{M_0} < \frac{\delta}{2}, \quad (3.71)$$
where we recall that $\delta$ is defined in (3.39). Then at $t = 0$ we have $e^{-u_0} < \delta$, and the estimate is preserved on $[0,T)$. The theorem follows. Q.E.D.

4 Evolution of the torsion

Before proceeding, we clearly state the conventions and notation that will be used for the maximum principle estimates of Sections §4-6. All norms from this point on will be taken with respect to the evolving metric $\omega = e^u\hat\omega$, unless denoted otherwise. We will write $\omega = ig_{\bar kj}\,dz^j\wedge d\bar z^k$.
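Since the evolving and reference metrics differ by the conformal factor, norms rescale by explicit powers of $e^{-u}$. For later reference we record the pattern (a direct consequence of $g = e^u\hat g$, used implicitly in estimates such as (4.54) and (4.56) below):
$$|a|^2_\omega = e^{-u}|a|^2_{\hat\omega} \ \ \text{for a 1-form } a, \qquad |\Phi|^2_\omega = e^{-2u}|\Phi|^2_{\hat\omega} \ \ \text{for a (1,1)-form } \Phi, \qquad \hat g^{p\bar q} = e^u g^{p\bar q}.$$
In particular, any tensor bounded with respect to the fixed background metric picks up a definite power of $\|\Omega\| = e^{-u}$ when measured in the evolving metric.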
We will use the Chern connection of $\omega$ to differentiate:
$$\nabla_{\bar k}V^\alpha = \partial_{\bar k}V^\alpha, \qquad \nabla_kV^\alpha = g^{\alpha\bar\beta}\partial_k(g_{\bar\beta\gamma}V^\gamma). \quad (4.1)$$
The curvature of the metric $\omega$ is
$$R_{\bar kj}{}^\alpha{}_\beta = -\partial_{\bar k}(g^{\alpha\bar\gamma}\partial_jg_{\bar\gamma\beta}) = \hat R_{\bar kj}{}^\alpha{}_\beta - u_{\bar kj}\,\delta^\alpha{}_\beta. \quad (4.2)$$
The torsion tensor of the metric $\omega$ is $T_{\bar kmj} = \partial_mg_{\bar kj} - \partial_jg_{\bar km}$, and since $\hat\omega$ has zero torsion, we may compute
$$T^\lambda{}_{mj} = g^{\lambda\bar k}T_{\bar kmj} = u_m\delta^\lambda{}_j - u_j\delta^\lambda{}_m. \quad (4.3)$$
We note the following formulas for the torsion and Chern-Ricci curvature of the evolving metric:
$$R_{\bar kj} = R_{\bar kj}{}^\alpha{}_\alpha = -2u_{\bar kj}, \qquad T_j = T^\lambda{}_{\lambda j} = -\partial_ju. \quad (4.4)$$
Recall that $|T|^2$ refers to the norm of $T_j$, as noted in (2.33). We will often use the following commutation formulas to exchange covariant derivatives:
$$[\nabla_j,\nabla_{\bar k}]V_i = -R_{\bar kj}{}^p{}_iV_p, \qquad [\nabla_j,\nabla_k]V_i = -T^\lambda{}_{jk}\nabla_\lambda V_i. \quad (4.5)$$
To handle the differentiation of the equation, we will rewrite the terms involving $\rho$ in the flow (1.6). Compute
$$-\alpha' i\partial\bar\partial(e^{-u}\rho) = -\alpha' e^{-u}i\partial\bar\partial\rho + 2\alpha'\,\mathrm{Re}\{e^{-u}i\partial u\wedge\bar\partial\rho\} + \alpha' e^{-u}i\partial\bar\partial u\wedge\rho - \alpha' ie^{-u}\partial u\wedge\bar\partial u\wedge\rho. \quad (4.6)$$
We introduce the notation
$$-\alpha' i\partial\bar\partial(e^{-u}\rho) = \Big(-\alpha' e^{-u}\psi_\rho + \alpha' e^{-u}\mathrm{Re}\{b^i_\rho u_i\} + \alpha' e^{-u}\tilde\rho^{j\bar k}u_{\bar kj} - \alpha' e^{-u}\tilde\rho^{p\bar q}u_pu_{\bar q}\Big)\frac{\hat\omega^2}{2}, \quad (4.7)$$
where $\psi_\rho(z)$, $b^i_\rho(z)$, $\tilde\rho^{j\bar k}(z)$ are defined term by term in correspondence with the previous expression. We note that $\psi_\rho$, $b^i_\rho$, $\tilde\rho^{j\bar k}$ are bounded in $C^\infty$ by constants depending only on the form $\rho$ and the background metric $\hat\omega$. We also note that $\tilde\rho^{j\bar k}$ is Hermitian since $\rho$ is real. We may rewrite this expression as
$$-\alpha' i\partial\bar\partial(e^{-u}\rho) = \Big(-\alpha' e^{-3u}\psi_\rho - \alpha' e^{-3u}\mathrm{Re}\{b^i_\rho T_i\} - \frac{\alpha'}{2}e^{-3u}\tilde\rho^{j\bar k}R_{\bar kj} - \alpha' e^{-3u}\tilde\rho^{p\bar q}T_p\bar T_{\bar q}\Big)\frac{\omega^2}{2}. \quad (4.8)$$
With all the introduced notation, we can write the flow (1.6) in the following way:
$$\partial_tg_{\bar kj} = \frac{1}{2\|\Omega\|_\omega}\Big(-\frac{R}{2} - \frac{\alpha'}{2}\|\Omega\|^3_\omega\tilde\rho^{p\bar q}R_{\bar qp} + \frac{\alpha'}{4}\sigma_2(i\mathrm{Ric}_\omega) + |T|^2 + \|\Omega\|^2_\omega\,\nu\Big)g_{\bar kj}, \quad (4.9)$$
where
$$\nu = -\alpha'\|\Omega\|_\omega\psi_\rho - \alpha'\|\Omega\|_\omega\mathrm{Re}\{b^i_\rho T_i\} - \alpha'\|\Omega\|_\omega\tilde\rho^{p\bar q}T_p\bar T_{\bar q} + \tilde\mu. \quad (4.10)$$
In the following, we will write $\|\Omega\|$ in place of $\|\Omega\|_\omega$ for simplicity, whenever there is no risk of confusion.
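As a consistency check of (4.3) and (4.4), here is the short computation from $g_{\bar kj} = e^u\hat g_{\bar kj}$; the derivative terms of $\hat g$ cancel because $\hat\omega$ is Kähler, that is, $\partial_m\hat g_{\bar kj} = \partial_j\hat g_{\bar km}$:
$$T_{\bar kmj} = \partial_m(e^u\hat g_{\bar kj}) - \partial_j(e^u\hat g_{\bar km}) = e^u\big(u_m\hat g_{\bar kj} - u_j\hat g_{\bar km}\big), \qquad T^\lambda{}_{mj} = u_m\delta^\lambda{}_j - u_j\delta^\lambda{}_m.$$
Tracing over $\lambda = m$ in dimension $n = 2$ gives $T_j = u_j - 2u_j = -\partial_ju$, as stated in (4.4).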
4.1 Torsion tensor

Using $\|\Omega\| = e^{-u}$ and $g_{\bar kj} = e^u\hat g_{\bar kj}$, (4.9) implies the following evolution of $\|\Omega\|$:
$$\partial_t\log\|\Omega\| = \frac{1}{2\|\Omega\|}\Big(\frac{R}{2} + \frac{\alpha'}{2}\|\Omega\|^3\tilde\rho^{p\bar q}R_{\bar qp} - |T|^2 - \frac{\alpha'}{4}\sigma_2(i\mathrm{Ric}_\omega) - \|\Omega\|^2\nu\Big). \quad (4.11)$$
Using (2.30) and (4.11), we evolve
$$\partial_tT_j = \partial_j\partial_t\log\|\Omega\| = \nabla_j\Big\{\frac{1}{2\|\Omega\|}\Big(\frac{R}{2} + \frac{\alpha'}{2}\|\Omega\|^3\tilde\rho^{p\bar q}R_{\bar qp} - |T|^2 - \frac{\alpha'}{4}\sigma_2(i\mathrm{Ric}_\omega) - \|\Omega\|^2\nu\Big)\Big\}. \quad (4.12)$$
Using $\partial_j\|\Omega\| = \|\Omega\|T_j$ and the definition (4.10) of $\nu$, a straightforward computation gives
$$\partial_tT_j = \frac{1}{2\|\Omega\|}\Big\{-\frac{1}{2}T_jR + T_j|T|^2 + \frac{\alpha'}{4}T_j\sigma_2(i\mathrm{Ric}_\omega) + \frac{1}{2}\nabla_jR + \frac{\alpha'}{2}\|\Omega\|^3\tilde\rho^{p\bar q}\nabla_jR_{\bar qp} - \nabla_j|T|^2 - \frac{\alpha'}{4}\nabla_j\sigma_2(i\mathrm{Ric}_\omega) + E_j\Big\}, \quad (4.13)$$
where
$$E_j = 2\alpha'\|\Omega\|^3\psi_\rho T_j + 2\alpha'\|\Omega\|^3\mathrm{Re}\{b^i_\rho T_i\}T_j + \alpha'\|\Omega\|^3\tilde\rho^{p\bar q}R_{\bar qp}T_j + 2\alpha'\|\Omega\|^3(\tilde\rho^{p\bar q}T_p\bar T_{\bar q})T_j - \|\Omega\|^2\tilde\mu T_j + \alpha'\|\Omega\|^3\nabla_j\psi_\rho + \alpha'\|\Omega\|^3\mathrm{Re}\{\nabla_jb^i_\rho\,T_i\} + \alpha'\|\Omega\|^3\mathrm{Re}\{b^i_\rho\nabla_jT_i\} + \frac{\alpha'}{2}\|\Omega\|^3(\nabla_j\tilde\rho^{p\bar q})R_{\bar qp} + \alpha'\|\Omega\|^3(\nabla_j\tilde\rho^{p\bar q})T_p\bar T_{\bar q} + \alpha'\|\Omega\|^3\tilde\rho^{p\bar q}\nabla_jT_p\,\bar T_{\bar q} + \frac{\alpha'}{2}\|\Omega\|^3\tilde\rho^{p\bar q}T_pR_{\bar qj} - \|\Omega\|^2\nabla_j\tilde\mu. \quad (4.14)$$
Our reason for treating $E_j$ as an error term is that the $C^0$ estimate tells us that $\|\Omega\| = e^{-u} \ll 1$ if we start the flow from a large enough constant $\log M$. As we will see, the terms appearing in $E_j$ will only slightly perturb the coefficients of the leading terms in the proof of Theorem 4.

We need to express the highest order terms in (4.13) as the linearized operator acting on the torsion. First, we write the Ricci curvature in terms of the conformal factor:
$$\nabla_jR_{\bar qp} = -2\nabla_j\nabla_p\nabla_{\bar q}u. \quad (4.15)$$
Exchanging covariant derivatives,
$$-2\nabla_j\nabla_p\nabla_{\bar q}u = -2\nabla_p\nabla_{\bar q}\nabla_ju - 2T^\lambda{}_{pj}\nabla_\lambda\nabla_{\bar q}u. \quad (4.16)$$
It follows from (4.4) that
$$\nabla_jR_{\bar qp} = 2\nabla_p\nabla_{\bar q}T_j + T^\lambda{}_{pj}R_{\bar q\lambda}. \quad (4.17)$$
Hence
$$\nabla_jR - \frac{\alpha'}{2}\nabla_j\sigma_2(i\mathrm{Ric}_\omega) + \alpha'\|\Omega\|^3\tilde\rho^{p\bar q}\nabla_jR_{\bar qp} = g^{p\bar q}\nabla_jR_{\bar qp} + \alpha'\|\Omega\|^3\tilde\rho^{p\bar q}\nabla_jR_{\bar qp} - \frac{\alpha'}{2}\sigma_2^{p\bar q}\nabla_jR_{\bar qp} = 2F^{p\bar q}\nabla_p\nabla_{\bar q}T_j + F^{p\bar q}T^\lambda{}_{pj}R_{\bar q\lambda}, \quad (4.18)$$
where we introduced the notation
$$\sigma_2^{p\bar q} = R\,g^{p\bar q} - R^{p\bar q}, \quad (4.19)$$
and
$$F^{p\bar q} = g^{p\bar q} + \alpha'\|\Omega\|^3\tilde\rho^{p\bar q} - \frac{\alpha'}{2}(R\,g^{p\bar q} - R^{p\bar q}). \quad (4.20)$$
The tensor $F^{p\bar q}$ is Hermitian, and in Section §5 we will show that $F^{p\bar q}$ stays close to $g^{p\bar q}$ along the flow.
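A small identity used repeatedly below (for instance in (4.34)) is worth isolating: contracting $\sigma_2^{p\bar q}$ against the Ricci curvature reproduces twice the $\sigma_2$ operator. Writing $\lambda_1,\lambda_2$ for the eigenvalues of the Ricci endomorphism (local notation for this remark only),
$$\sigma_2^{p\bar q}R_{\bar qp} = R^2 - |\mathrm{Ric}_\omega|^2 = (\lambda_1+\lambda_2)^2 - (\lambda_1^2+\lambda_2^2) = 2\lambda_1\lambda_2 = 2\sigma_2(i\mathrm{Ric}_\omega).$$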
Substituting (4.18) into (4.13),
$$\partial_tT_j = \frac{1}{2\|\Omega\|}\Big\{F^{p\bar q}\nabla_p\nabla_{\bar q}T_j - \nabla_j|T|^2 - \frac{1}{2}T_jR + \frac{\alpha'}{4}T_j\sigma_2(i\mathrm{Ric}_\omega) + \frac{1}{2}F^{p\bar q}T^\lambda{}_{pj}R_{\bar q\lambda} + T_j|T|^2 + E_j\Big\}. \quad (4.21)$$
Before proceeding, let us discuss $\sigma_2^{p\bar q}$ and $F^{p\bar q}$ in convenient coordinates. Suppose we work at a point where the evolving metric satisfies $g_{i\bar j} = \delta_{ij}$ and $R_{\bar kj}$ is diagonal. Let $A_{ij} = g^{i\bar k}R_{\bar kj}$. The function $\sigma_2(A)$ maps a Hermitian endomorphism to the second elementary symmetric polynomial of its eigenvalues. We are working in dimension $n = 2$, so $\sigma_2(A)$ is the product of the two eigenvalues of $A$. Our operator $\sigma_2(i\mathrm{Ric}_\omega)$ defined in (2.36) is with respect to the evolving metric $\omega$, so denoting $A_{ij} = g^{i\bar k}R_{\bar kj}$, we have $\sigma_2(i\mathrm{Ric}_\omega) = \sigma_2(A)$. We define $\sigma_2^{p\bar q} = \frac{\partial\sigma_2}{\partial A_{kp}}g^{k\bar q}$. It is well known that $\frac{\partial\sigma_2}{\partial A_{11}} = A_{22}$, $\frac{\partial\sigma_2}{\partial A_{22}} = A_{11}$, and $\frac{\partial\sigma_2}{\partial A_{12}} = 0$ if $A$ is diagonal. Then in our case,
$$\sigma_2^{1\bar 1} = R_{\bar 22}, \qquad \sigma_2^{2\bar 2} = R_{\bar 11}, \qquad \sigma_2^{1\bar 2} = \sigma_2^{2\bar 1} = 0. \quad (4.22)$$
We obtain
$$F^{1\bar 1} = 1 + \alpha'\|\Omega\|^3\tilde\rho^{1\bar 1} - \frac{\alpha'}{2}R_{\bar 22}, \qquad F^{2\bar 2} = 1 + \alpha'\|\Omega\|^3\tilde\rho^{2\bar 2} - \frac{\alpha'}{2}R_{\bar 11}, \qquad F^{1\bar 2} = \alpha'\|\Omega\|^3\tilde\rho^{1\bar 2}, \qquad F^{2\bar 1} = \alpha'\|\Omega\|^3\tilde\rho^{2\bar 1}. \quad (4.23)$$

4.2 Norm of the torsion

We will compute
$$\partial_t|T|^2 = \partial_t\{g^{i\bar j}T_i\bar T_{\bar j}\}. \quad (4.24)$$
We have
$$\partial_tg^{i\bar j} = -g^{i\bar\lambda}g^{\gamma\bar j}\partial_tg_{\bar\lambda\gamma} = \frac{1}{2\|\Omega\|}\Big(\frac{R}{2} + \frac{\alpha'}{2}\|\Omega\|^3\tilde\rho^{p\bar q}R_{\bar qp} - \frac{\alpha'}{4}\sigma_2(i\mathrm{Ric}_\omega) - |T|^2 - \|\Omega\|^2\nu\Big)g^{i\bar j}. \quad (4.25)$$
Hence
$$\partial_t|T|^2 = 2\mathrm{Re}\langle\partial_tT, T\rangle + \frac{|T|^2}{2\|\Omega\|}\Big(\frac{R}{2} + \frac{\alpha'}{2}\|\Omega\|^3\tilde\rho^{p\bar q}R_{\bar qp} - \frac{\alpha'}{4}\sigma_2(i\mathrm{Ric}_\omega) - |T|^2 - \|\Omega\|^2\nu\Big). \quad (4.26)$$
Next, using the notation $|W|^2_{Fg} = F^{p\bar q}g^{i\bar j}W_{pi}\bar W_{\bar q\bar j}$,
$$F^{p\bar q}\nabla_p\nabla_{\bar q}|T|^2 = F^{p\bar q}g^{i\bar j}\nabla_p\nabla_{\bar q}T_i\,\bar T_{\bar j} + F^{p\bar q}g^{i\bar j}T_i\,\nabla_p\nabla_{\bar q}\bar T_{\bar j} + |\nabla T|^2_{Fg} + |\bar\nabla T|^2_{Fg}$$
$$= F^{p\bar q}g^{i\bar j}\nabla_p\nabla_{\bar q}T_i\,\bar T_{\bar j} + g^{i\bar j}T_i\,\overline{F^{q\bar p}\nabla_q\nabla_{\bar p}T_j} + F^{p\bar q}g^{i\bar j}T_iR_{\bar qp\bar j}{}^{\bar\lambda}\bar T_{\bar\lambda} + |\nabla T|^2_{Fg} + |\bar\nabla T|^2_{Fg}. \quad (4.27)$$
We introduce the notation $\Delta_F = F^{p\bar q}\nabla_p\nabla_{\bar q}$. We have shown
$$\Delta_F|T|^2 = 2\mathrm{Re}\langle\Delta_FT, T\rangle + |\nabla T|^2_{Fg} + |\bar\nabla T|^2_{Fg} + F^{p\bar q}g^{i\bar j}T_iR_{\bar qp\bar j}{}^{\bar\lambda}\bar T_{\bar\lambda}. \quad (4.28)$$
Combining (4.21), (4.26), and (4.28), we obtain
$$\partial_t|T|^2 = \frac{1}{2\|\Omega\|}\Big\{\Delta_F|T|^2 - |\nabla T|^2_{Fg} - |\bar\nabla T|^2_{Fg} - 2\mathrm{Re}\{g^{i\bar j}\nabla_i|T|^2\,\bar T_{\bar j}\} - \frac{1}{2}R|T|^2 + \frac{\alpha'}{4}\sigma_2(i\mathrm{Ric}_\omega)|T|^2 + \mathrm{Re}\{F^{p\bar q}g^{i\bar j}T^\lambda{}_{pi}R_{\bar q\lambda}\bar T_{\bar j}\} - F^{p\bar q}g^{i\bar j}T_iR_{\bar qp\bar j}{}^{\bar\lambda}\bar T_{\bar\lambda} + |T|^4 + \frac{\alpha'}{2}\|\Omega\|^3\tilde\rho^{p\bar q}R_{\bar qp}|T|^2 - \|\Omega\|^2|T|^2\nu + 2\mathrm{Re}\langle E,T\rangle\Big\}. \quad (4.29)$$

4.3 Estimating the torsion

Theorem 4. There exists $M_0\gg 1$ such that all $M\ge M_0$ have the following property. Start the flow with the constant function $u_0 = \log M$. If
$$|\alpha'\mathrm{Ric}_\omega| \le 10^{-6} \quad (4.30)$$
along the flow, then there exists $C_3 > 0$ depending only on $(X,\hat\omega)$, $\rho$, $\tilde\mu$ and $\alpha'$, such that
$$|T|^2 \le \frac{C_3}{M} \ll 1. \quad (4.31)$$
Denote $\Lambda = 1+\frac{1}{8}$. We will study the test function
$$G = \log|T|^2 - \Lambda\log\|\Omega\|. \quad (4.32)$$
Taking the time derivative gives us
$$\partial_tG = \frac{\partial_t|T|^2}{|T|^2} - \Lambda\,\partial_t\log\|\Omega\|. \quad (4.33)$$
Computing using (2.30) and (4.20),
$$\Delta_F\log\|\Omega\| = F^{p\bar q}\nabla_p\bar T_{\bar q} = \frac{1}{2}F^{p\bar q}R_{\bar qp} = \frac{1}{2}R - \frac{\alpha'}{4}\sigma_2^{p\bar q}R_{\bar qp} + \frac{\alpha'}{2}\|\Omega\|^3\tilde\rho^{p\bar q}R_{\bar qp} = \frac{1}{2}R - \frac{\alpha'}{2}\sigma_2(i\mathrm{Ric}_\omega) + \frac{\alpha'}{2}\|\Omega\|^3\tilde\rho^{p\bar q}R_{\bar qp}. \quad (4.34)$$
Therefore by (4.11),
$$\partial_t\log\|\Omega\| = \frac{1}{2\|\Omega\|}\Big\{\Delta_F\log\|\Omega\| - |T|^2 + \frac{\alpha'}{4}\sigma_2(i\mathrm{Ric}_\omega) - \|\Omega\|^2\nu\Big\}. \quad (4.35)$$
Substituting (4.29) and (4.35) into (4.33), we have
$$\partial_tG = \frac{1}{2\|\Omega\|}\Big\{\Delta_FG + \frac{|\nabla|T|^2|^2_F}{|T|^4} - \frac{|\nabla T|^2_{Fg}}{|T|^2} - \frac{|\bar\nabla T|^2_{Fg}}{|T|^2} - \frac{2}{|T|^2}\mathrm{Re}\{g^{i\bar j}\nabla_i|T|^2\,\bar T_{\bar j}\} - \frac{1}{2}R + \frac{\alpha'}{4}\sigma_2(i\mathrm{Ric}_\omega) + \frac{1}{|T|^2}\mathrm{Re}\{F^{p\bar q}g^{i\bar j}T^\lambda{}_{pi}R_{\bar q\lambda}\bar T_{\bar j}\} - \frac{1}{|T|^2}F^{p\bar q}g^{i\bar j}T_iR_{\bar qp\bar j}{}^{\bar\lambda}\bar T_{\bar\lambda} + |T|^2 + \frac{\alpha'}{2}\|\Omega\|^3\tilde\rho^{p\bar q}R_{\bar qp} - \|\Omega\|^2\nu + \frac{2}{|T|^2}\mathrm{Re}\langle E,T\rangle + \Lambda|T|^2 - \frac{\alpha'}{4}\Lambda\sigma_2(i\mathrm{Ric}_\omega) + \Lambda\|\Omega\|^2\nu\Big\}. \quad (4.36)$$
Let $(p,t_0)$ be the point in $X\times[0,T]$ where $G$ attains its maximum. Since we start the flow at $t = 0$ with a constant function $u_0 = \log M$, the torsion is zero at the initial time. It follows that $t_0 > 0$. The following computation will be done at this point $(p,t_0)$, and we note that $|T|^2 > 0$ at $(p,t_0)$. The critical equation $\nabla G = 0$ gives
$$0 = \frac{\nabla_i|T|^2}{|T|^2} - \Lambda T_i. \quad (4.37)$$
Using (2.38), this can be rewritten in the following way:
$$\frac{\langle\nabla_iT, T\rangle}{|T|^2} = \Lambda T_i - \frac{\langle T, \nabla_{\bar i}T\rangle}{|T|^2} = \Lambda T_i - \frac{1}{2|T|^2}g^{j\bar k}T_jR_{\bar ki}. \quad (4.38)$$
Therefore, by Cauchy-Schwarz and the critical equation,
$$-\frac{|\nabla T|^2_{Fg}}{|T|^2} \le -\Big|\frac{\langle\nabla T,T\rangle}{|T|^2}\Big|^2_F = -\Big|\Lambda T_i - \frac{1}{2|T|^2}g^{j\bar k}T_jR_{\bar ki}\Big|^2_F = -\Lambda^2|T|^2_F - \frac{1}{4|T|^4}\Big|g^{j\bar k}T_jR_{\bar ki}\Big|^2_F + \frac{\Lambda}{|T|^2}\mathrm{Re}\{F^{p\bar q}g^{j\bar k}T_jR_{\bar kp}\bar T_{\bar q}\}. \quad (4.39)$$
Here we used the notation $|V|^2_F = F^{p\bar q}V_p\bar V_{\bar q}$.
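The Cauchy-Schwarz step in (4.39) deserves a word; in sketch form, it is the contraction of a pointwise matrix inequality against the positive tensor $F^{p\bar q}$ (positivity is justified in (4.51) below):
$$F^{p\bar q}\,\frac{\langle\nabla_pT,T\rangle\,\overline{\langle\nabla_qT,T\rangle}}{|T|^4} \;\le\; F^{p\bar q}\,\frac{\big(g^{i\bar j}\nabla_pT_i\,\overline{\nabla_qT_j}\big)\,|T|^2}{|T|^4} \;=\; \frac{|\nabla T|^2_{Fg}}{|T|^2},$$
since for every vector $\xi^p$ one has $|\xi^p\langle\nabla_pT,T\rangle|^2 \le |\xi^p\nabla_pT|^2_g\,|T|^2$ by the usual Cauchy-Schwarz inequality in the $i$-index, so the difference of the two middle matrices in $p,\bar q$ is positive semi-definite.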
We may also expand the following term using the definition of $F^{p\bar q}$:
$$4|\bar\nabla T|^2_{Fg} = F^{p\bar q}g^{i\bar j}R_{\bar qi}R_{\bar jp} = |\mathrm{Ric}_\omega|^2 - \frac{\alpha'}{2}g^{i\bar j}\sigma_2^{p\bar q}R_{\bar qi}R_{\bar jp} + \alpha'\|\Omega\|^3g^{i\bar j}\tilde\rho^{p\bar q}R_{\bar qi}R_{\bar jp}. \quad (4.40)$$
Set $\varepsilon = 1/100$. Using (4.39) and (4.40), and the critical equation (4.37) once more on the first and last terms, we obtain
$$\frac{|\nabla|T|^2|^2_F}{|T|^4} - (1-\varepsilon)\frac{|\nabla T|^2_{Fg}}{|T|^2} - \frac{|\bar\nabla T|^2_{Fg}}{|T|^2} - \frac{2}{|T|^2}\mathrm{Re}\{g^{i\bar j}\nabla_i|T|^2\,\bar T_{\bar j}\}$$
$$\le \Lambda^2|T|^2_F - (1-\varepsilon)\Lambda^2|T|^2_F - \frac{(1-\varepsilon)}{4|T|^4}\Big|g^{j\bar k}T_jR_{\bar ki}\Big|^2_F + (1-\varepsilon)\frac{\Lambda}{|T|^2}\mathrm{Re}\{F^{p\bar q}g^{j\bar k}T_jR_{\bar kp}\bar T_{\bar q}\} - \frac{1}{4}\frac{|\mathrm{Ric}_\omega|^2}{|T|^2} + \frac{\alpha'}{8|T|^2}g^{i\bar j}\sigma_2^{p\bar q}R_{\bar qi}R_{\bar jp} - \frac{\alpha'}{4|T|^2}\|\Omega\|^3g^{i\bar j}\tilde\rho^{p\bar q}R_{\bar qi}R_{\bar jp} - 2\Lambda|T|^2. \quad (4.41)$$
Substituting this inequality into (4.36), our main inequality becomes
$$\partial_tG \le \frac{1}{2\|\Omega\|}\Big\{\Delta_FG - \varepsilon\frac{|\nabla T|^2_{Fg}}{|T|^2} - \frac{1}{4}\frac{|\mathrm{Ric}_\omega|^2}{|T|^2} - (\Lambda-1)|T|^2 + \varepsilon\Lambda^2|T|^2_F - \frac{1}{2}R - \frac{\alpha'}{4}(\Lambda-1)\sigma_2(i\mathrm{Ric}_\omega) + \frac{\alpha'}{8|T|^2}g^{i\bar j}\sigma_2^{p\bar q}R_{\bar qi}R_{\bar jp} + (1-\varepsilon)\frac{\Lambda}{|T|^2}\mathrm{Re}\{F^{p\bar q}g^{j\bar k}T_jR_{\bar kp}\bar T_{\bar q}\} - \frac{1}{|T|^2}F^{p\bar q}g^{i\bar j}T_iR_{\bar qp\bar j}{}^{\bar\lambda}\bar T_{\bar\lambda} + \frac{1}{|T|^2}\mathrm{Re}\{F^{p\bar q}g^{i\bar j}T^\lambda{}_{pi}R_{\bar q\lambda}\bar T_{\bar j}\} - \frac{(1-\varepsilon)}{4|T|^4}\Big|g^{j\bar k}T_jR_{\bar ki}\Big|^2_F - \frac{\alpha'}{4|T|^2}\|\Omega\|^3g^{i\bar j}\tilde\rho^{p\bar q}R_{\bar qi}R_{\bar jp} + \frac{\alpha'}{2}\|\Omega\|^3\tilde\rho^{p\bar q}R_{\bar qp} + (\Lambda-1)\|\Omega\|^2\nu + \frac{2}{|T|^2}\mathrm{Re}\langle E,T\rangle\Big\}, \quad (4.42)$$
which holds at $(p,t_0)$. Next, we use (4.2) to write the evolving curvature as
$$R_{\bar qp\bar j}{}^{\bar\lambda} = \hat R_{\bar qp\bar j}{}^{\bar\lambda} + \frac{1}{2}R_{\bar qp}\,\delta_{\bar j}{}^{\bar\lambda}. \quad (4.43)$$
This identity allows us to write
$$-\frac{1}{|T|^2}F^{p\bar q}g^{i\bar j}T_iR_{\bar qp\bar j}{}^{\bar\lambda}\bar T_{\bar\lambda} = -\frac{1}{|T|^2}F^{p\bar q}g^{i\bar j}T_i\hat R_{\bar qp\bar j}{}^{\bar\lambda}\bar T_{\bar\lambda} - \frac{1}{2}F^{p\bar q}R_{\bar qp}. \quad (4.44)$$
Next, by (4.3), the torsion can be written as
$$T^\lambda{}_{pi} = T_i\delta^\lambda{}_p - T_p\delta^\lambda{}_i, \quad (4.45)$$
so we may rewrite
$$\frac{1}{|T|^2}\mathrm{Re}\{F^{p\bar q}g^{i\bar j}T^\lambda{}_{pi}R_{\bar q\lambda}\bar T_{\bar j}\} = F^{p\bar q}R_{\bar qp} - \frac{1}{|T|^2}\mathrm{Re}\{F^{p\bar q}g^{i\bar j}R_{\bar qi}\bar T_{\bar j}T_p\}. \quad (4.46)$$
Together, we have
$$-\frac{1}{|T|^2}F^{p\bar q}g^{i\bar j}T_iR_{\bar qp\bar j}{}^{\bar\lambda}\bar T_{\bar\lambda} + \frac{1}{|T|^2}\mathrm{Re}\{F^{p\bar q}g^{i\bar j}T^\lambda{}_{pi}R_{\bar q\lambda}\bar T_{\bar j}\} = -\frac{1}{|T|^2}F^{p\bar q}g^{i\bar j}T_i\hat R_{\bar qp\bar j}{}^{\bar\lambda}\bar T_{\bar\lambda} + \frac{1}{2}F^{p\bar q}R_{\bar qp} - \frac{1}{|T|^2}\mathrm{Re}\{F^{p\bar q}g^{i\bar j}R_{\bar qi}\bar T_{\bar j}T_p\}$$
$$= -\frac{1}{|T|^2}F^{p\bar q}g^{i\bar j}T_i\hat R_{\bar qp\bar j}{}^{\bar\lambda}\bar T_{\bar\lambda} + \frac{1}{2}R - \frac{\alpha'}{2}\sigma_2(i\mathrm{Ric}_\omega) + \frac{\alpha'}{2}\|\Omega\|^3\tilde\rho^{p\bar q}R_{\bar qp} - \frac{1}{|T|^2}\mathrm{Re}\{F^{p\bar q}g^{i\bar j}R_{\bar qi}\bar T_{\bar j}T_p\}. \quad (4.47)$$
We also compute
$$|T|^2_F = |T|^2 + \alpha'\|\Omega\|^3\tilde\rho^{p\bar q}T_p\bar T_{\bar q} - \frac{\alpha'}{2}\sigma_2^{p\bar q}T_p\bar T_{\bar q}. \quad (4.48)$$
Substituting (4.47) and (4.48) in the main inequality (4.42), we see that the terms of order $R$ have cancelled:
$$\partial_tG \le \frac{1}{2\|\Omega\|}\Big\{\Delta_FG - \varepsilon\frac{|\nabla T|^2_{Fg}}{|T|^2} - \frac{1}{4}\frac{|\mathrm{Ric}_\omega|^2}{|T|^2} - (\Lambda-1-\varepsilon\Lambda^2)|T|^2 - \frac{\varepsilon\Lambda^2\alpha'}{2}\sigma_2^{p\bar q}T_p\bar T_{\bar q} - \frac{(1+\Lambda)\alpha'}{4}\sigma_2(i\mathrm{Ric}_\omega) + \frac{\alpha'}{8|T|^2}g^{i\bar j}\sigma_2^{p\bar q}R_{\bar qi}R_{\bar jp} + (\Lambda-\varepsilon\Lambda-1)\frac{1}{|T|^2}\mathrm{Re}\{F^{p\bar q}g^{j\bar k}T_jR_{\bar kp}\bar T_{\bar q}\} - \frac{(1-\varepsilon)}{4|T|^4}\Big|g^{j\bar k}T_jR_{\bar ki}\Big|^2_F - \frac{1}{|T|^2}F^{p\bar q}g^{i\bar j}T_i\hat R_{\bar qp\bar j}{}^{\bar\lambda}\bar T_{\bar\lambda} - \frac{\alpha'}{4|T|^2}\|\Omega\|^3g^{i\bar j}\tilde\rho^{p\bar q}R_{\bar qi}R_{\bar jp} + \varepsilon\Lambda^2\alpha'\|\Omega\|^3\tilde\rho^{p\bar q}T_p\bar T_{\bar q} + \alpha'\|\Omega\|^3\tilde\rho^{p\bar q}R_{\bar qp} + (\Lambda-1)\|\Omega\|^2\nu + \frac{2}{|T|^2}\mathrm{Re}\langle E,T\rangle\Big\}. \quad (4.49)$$
We now substitute $\Lambda = 1+\frac{1}{8}$ and $\varepsilon = \frac{1}{100}$; note that then $\Lambda-1-\varepsilon\Lambda^2 = \frac{1}{8} - \frac{81}{6400} \ge \frac{1}{9}$. Then
$$\partial_tG \le \frac{1}{2\|\Omega\|}\Big\{\Delta_FG - \frac{1}{100}\frac{|\nabla T|^2_{Fg}}{|T|^2} - \frac{1}{4}\frac{|\mathrm{Ric}_\omega|^2}{|T|^2} - \frac{1}{9}|T|^2 - \Big(\frac{9}{8}\Big)^2\frac{1}{100}\frac{\alpha'}{2}\sigma_2^{p\bar q}T_p\bar T_{\bar q} - \frac{17}{16}\frac{\alpha'}{2}\sigma_2(i\mathrm{Ric}_\omega) + \frac{\alpha'}{8|T|^2}g^{i\bar j}\sigma_2^{p\bar q}R_{\bar qi}R_{\bar jp} + \Big(\frac{1}{8}-\frac{9}{800}\Big)\frac{1}{|T|^2}\mathrm{Re}\{F^{p\bar q}g^{j\bar k}T_jR_{\bar kp}\bar T_{\bar q}\} - \frac{99}{400}\frac{1}{|T|^4}\Big|g^{j\bar k}T_jR_{\bar ki}\Big|^2_F - \frac{1}{|T|^2}F^{p\bar q}g^{i\bar j}T_i\hat R_{\bar qp\bar j}{}^{\bar\lambda}\bar T_{\bar\lambda} - \frac{\alpha'}{4|T|^2}\|\Omega\|^3g^{i\bar j}\tilde\rho^{p\bar q}R_{\bar qi}R_{\bar jp} + \alpha'\|\Omega\|^3\tilde\rho^{p\bar q}R_{\bar qp} + \frac{1}{100}\Big(\frac{9}{8}\Big)^2\alpha'\|\Omega\|^3\tilde\rho^{p\bar q}T_p\bar T_{\bar q} + \frac{1}{8}\|\Omega\|^2\nu + \frac{2}{|T|^2}\mathrm{Re}\langle E,T\rangle\Big\}. \quad (4.50)$$
We are assuming in the hypothesis of Theorem 4 that $|\alpha'\mathrm{Ric}_\omega| < 10^{-6}$. By Theorem 3, we know that $\|\Omega\| \le \frac{C_2}{M} \ll 1$, so for $M$ large enough we can assume
$$(1-10^{-6})g^{i\bar j} \le F^{i\bar j} \le (1+10^{-6})g^{i\bar j}. \quad (4.51)$$
One way to see this inequality is by writing $F^{i\bar j}$ in coordinates, as in (4.23). Using (4.51), we can estimate
$$-\frac{17}{16}\frac{\alpha'}{2}\sigma_2(i\mathrm{Ric}_\omega) + \frac{\alpha'}{8|T|^2}g^{i\bar j}\sigma_2^{p\bar q}R_{\bar qi}R_{\bar jp} + \Big(\frac{1}{8}-\frac{9}{800}\Big)\frac{1}{|T|^2}\mathrm{Re}\{F^{p\bar q}g^{j\bar k}T_jR_{\bar kp}\bar T_{\bar q}\}$$
$$\le \frac{17}{16}\cdot\frac{1}{2}|\alpha'\mathrm{Ric}_\omega|\,|\mathrm{Ric}_\omega| + \frac{1}{8}|\alpha'\mathrm{Ric}_\omega|\,\frac{|\mathrm{Ric}_\omega|^2}{|T|^2} + \frac{1}{7}|\mathrm{Ric}_\omega| \le \frac{1}{(2)(3)}|\mathrm{Ric}_\omega| + \frac{1}{100}\frac{|\mathrm{Ric}_\omega|^2}{|T|^2} \le \frac{1}{(2)(3)^2}|T|^2 + \Big(\frac{1}{100}+\frac{1}{(2)(2)^2}\Big)\frac{|\mathrm{Ric}_\omega|^2}{|T|^2}. \quad (4.52)$$
We also notice
$$-\Big(\frac{9}{8}\Big)^2\frac{1}{100}\frac{\alpha'}{2}\sigma_2^{p\bar q}T_p\bar T_{\bar q} \le |\alpha'\mathrm{Ric}_\omega|\,|T|^2 \le \frac{1}{10^6}|T|^2, \quad (4.53)$$
and
$$-\frac{1}{|T|^2}F^{p\bar q}g^{i\bar j}T_i\hat R_{\bar qp\bar j}{}^{\bar\lambda}\bar T_{\bar\lambda} \le Ce^{-u} = C\|\Omega\|. \quad (4.54)$$
Substituting these estimates into (4.50) gives
$$\partial_tG \le \frac{1}{2\|\Omega\|}\Big\{\Delta_FG - \frac{1}{200}\frac{|\nabla T|^2}{|T|^2} - \frac{1}{100}\frac{|\mathrm{Ric}_\omega|^2}{|T|^2} - \frac{1}{100}|T|^2 + C\|\Omega\| + \frac{\alpha'}{4|T|^2}\|\Omega\|^3g^{i\bar j}\tilde\rho^{p\bar q}R_{\bar qi}R_{\bar jp} + \alpha'\|\Omega\|^3\tilde\rho^{p\bar q}R_{\bar qp} + \frac{1}{100}\Big(\frac{9}{8}\Big)^2\alpha'\|\Omega\|^3\tilde\rho^{p\bar q}T_p\bar T_{\bar q} + \frac{1}{8}\|\Omega\|^2\nu + \frac{2}{|T|^2}\mathrm{Re}\langle E,T\rangle\Big\}. \quad (4.55)$$
By the definition of $E$ (4.14) and $\nu$ (4.10), the terms on the last two lines can only slightly perturb the coefficients of the first line, since $\|\Omega\| = e^{-u} \le \frac{C_2}{M} \ll 1$ for $M\gg 1$ large enough. We recall that $\tilde\rho^{p\bar q}$ and $b^i_\rho$ are bounded in $C^\infty$ in terms of the background metric $\hat g$, so that, for example,
$$\|\Omega\|\tilde\rho^{p\bar q} \le Ce^{-u}\hat g^{p\bar q} = Cg^{p\bar q}, \qquad \|\Omega\|^{1/2}|b^i_\rho T_i| \le C|T|. \quad (4.56)$$
This allows us to bound certain terms such as
$$\alpha'\|\Omega\|^3\tilde\rho^{p\bar q}R_{\bar qp} \le C\|\Omega\|^2|\mathrm{Ric}_\omega| \le \frac{C}{2}\|\Omega\|^2\frac{|\mathrm{Ric}_\omega|^2}{|T|^2} + \frac{C}{2}\|\Omega\|^2|T|^2, \quad (4.57)$$
and
$$\alpha'\|\Omega\|^3\mathrm{Re}\{b^i_\rho T_i\} \le C\|\Omega\|^2|T| \le \frac{C\|\Omega\|^2|T|^2}{2} + \frac{C}{2}\|\Omega\|^2. \quad (4.58)$$
Covariant derivatives with respect to the evolving metric act like $\nabla_i = \partial_i - T_i$, so we can bound terms such as
$$\frac{2}{|T|^2}\,\frac{\alpha'}{2}\|\Omega\|^3g^{j\bar k}(\nabla_j\tilde\rho^{p\bar q})R_{\bar qp}\bar T_{\bar k} \le C\|\Omega\|^2\frac{|\mathrm{Ric}_\omega|}{|T|} + C\|\Omega\|^2\frac{|\mathrm{Ric}_\omega|}{|T|}\,|T|. \quad (4.59)$$
The inequality $2ab \le a^2+b^2$ can be used to absorb terms into the first line. We also bound terms
$$-\frac{2}{|T|^2}\|\Omega\|^2g^{j\bar k}\nabla_j\tilde\mu\,\bar T_{\bar k} \le C\|\Omega\|^2\frac{\|\Omega\|^{1/2}}{|T|}. \quad (4.60)$$
Using these estimates, it is possible to show that at the maximum point $(p,t_0)$ of $G$, for $\|\Omega\| \le \frac{C_2}{M} \ll 1$, there holds
$$0 \le \frac{1}{2\|\Omega\|}\Big\{\Delta_FG - \frac{1}{200}|T|^2 + C\|\Omega\|\Big(1+\frac{\|\Omega\|^{1/2}}{|T|}\Big)\Big\}. \quad (4.61)$$
By (4.51), $\Delta_FG \le 0$ at the maximum $(p,t_0)$ of $G$, hence
$$|T|^2 \le C\|\Omega\| \le \frac{C}{M}. \quad (4.62)$$
Therefore
$$G \le G(p,t_0) \le \log\frac{C}{M} + \Lambda u(p). \quad (4.63)$$
By Theorem 3,
$$|T|^2 \le \frac{C}{M}\exp\{\Lambda(u(p)-u)\} \le \frac{C}{M}\Big(\sup_{X\times[0,T)}e^u\Big)^\Lambda\Big(\sup_{X\times[0,T)}e^{-u}\Big)^\Lambda \le \frac{C}{M}(C_2C_1)^\Lambda \ll 1. \quad (4.64)$$
This proves Theorem 4.

5 Evolution of the curvature

5.1 Ricci curvature

In this subsection, we flow the Ricci curvature of the evolving Hermitian metric $e^u\hat g$. We will use the well-known general formula for the evolution of the curvature tensor:
$$\partial_tR_{\bar kj}{}^\alpha{}_\beta = -\nabla_{\bar k}\nabla_j(g^{\alpha\bar\gamma}\partial_tg_{\bar\gamma\beta}). \quad (5.1)$$
Recall that we defined $R_{\bar kj} = R_{\bar kj}{}^\alpha{}_\alpha$, hence substituting (4.9) yields
$$\partial_tR_{\bar kj} = -\nabla_{\bar k}\nabla_j\Big\{\frac{1}{2\|\Omega\|}\Big(-R - \alpha'\|\Omega\|^3\tilde\rho^{p\bar q}R_{\bar qp} + \frac{\alpha'}{2}\sigma_2(i\mathrm{Ric}_\omega) + 2|T|^2 + 2\|\Omega\|^2\nu\Big)\Big\}. \quad (5.2)$$
Expanding out terms gives
$$\partial_tR_{\bar kj} = \frac{1}{2\|\Omega\|}\Big\{\nabla_{\bar k}\nabla_jR + \alpha'\|\Omega\|^3\tilde\rho^{p\bar q}\nabla_{\bar k}\nabla_jR_{\bar qp} - \frac{\alpha'}{2}\nabla_{\bar k}\nabla_j\sigma_2(i\mathrm{Ric}_\omega) - 2\nabla_{\bar k}\nabla_j|T|^2 + \alpha'\nabla_{\bar k}(\|\Omega\|^3\tilde\rho^{p\bar q})\nabla_jR_{\bar qp} + \alpha'\nabla_j(\|\Omega\|^3\tilde\rho^{p\bar q})\nabla_{\bar k}R_{\bar qp} + \alpha'\nabla_{\bar k}\nabla_j(\|\Omega\|^3\tilde\rho^{p\bar q})R_{\bar qp} - 2\nabla_{\bar k}\nabla_j(\|\Omega\|^2\nu)\Big\}$$
$$- \frac{\nabla_j\|\Omega\|}{2\|\Omega\|^2}\nabla_{\bar k}\Big\{R + \alpha'\|\Omega\|^3\tilde\rho^{p\bar q}R_{\bar qp} - \frac{\alpha'}{2}\sigma_2(i\mathrm{Ric}_\omega) - 2|T|^2 - 2\|\Omega\|^2\nu\Big\} - \frac{\nabla_{\bar k}\|\Omega\|}{2\|\Omega\|^2}\nabla_j\Big\{R + \alpha'\|\Omega\|^3\tilde\rho^{p\bar q}R_{\bar qp} - \frac{\alpha'}{2}\sigma_2(i\mathrm{Ric}_\omega) - 2|T|^2 - 2\|\Omega\|^2\nu\Big\}$$
$$+ \Big\{-\frac{\nabla_{\bar k}\nabla_j\|\Omega\|}{2\|\Omega\|^2} + \frac{2\nabla_{\bar k}\|\Omega\|\nabla_j\|\Omega\|}{2\|\Omega\|^3}\Big\}\Big\{R + \alpha'\|\Omega\|^3\tilde\rho^{p\bar q}R_{\bar qp} - \frac{\alpha'}{2}\sigma_2(i\mathrm{Ric}_\omega) - 2|T|^2 - 2\|\Omega\|^2\nu\Big\}. \quad (5.3)$$
Using $\nabla_j\|\Omega\| = \|\Omega\|T_j$,
$$\partial_tR_{\bar kj} = \frac{1}{2\|\Omega\|}\Big\{\nabla_{\bar k}\nabla_jR + \alpha'\|\Omega\|^3\tilde\rho^{p\bar q}\nabla_{\bar k}\nabla_jR_{\bar qp} - \frac{\alpha'}{2}\nabla_{\bar k}\nabla_j\sigma_2(i\mathrm{Ric}_\omega) - 2\nabla_{\bar k}\nabla_j|T|^2 + \alpha'\nabla_{\bar k}(\|\Omega\|^3\tilde\rho^{p\bar q})\nabla_jR_{\bar qp} + \alpha'\nabla_j(\|\Omega\|^3\tilde\rho^{p\bar q})\nabla_{\bar k}R_{\bar qp} + \alpha'\nabla_{\bar k}\nabla_j(\|\Omega\|^3\tilde\rho^{p\bar q})R_{\bar qp} - 2\nabla_{\bar k}\nabla_j\{\|\Omega\|^2\nu\}$$
$$- T_j\nabla_{\bar k}R - \alpha'T_j\nabla_{\bar k}(\|\Omega\|^3\tilde\rho^{p\bar q}R_{\bar qp}) + 2T_j\nabla_{\bar k}|T|^2 + \frac{\alpha'}{2}T_j\nabla_{\bar k}(\sigma_2(i\mathrm{Ric}_\omega)) + 2T_j\nabla_{\bar k}\{\|\Omega\|^2\nu\}$$
$$- T_{\bar k}\nabla_jR - \alpha'T_{\bar k}\nabla_j(\|\Omega\|^3\tilde\rho^{p\bar q}R_{\bar qp}) + 2T_{\bar k}\nabla_j|T|^2 + \frac{\alpha'}{2}T_{\bar k}\nabla_j(\sigma_2(i\mathrm{Ric}_\omega)) + 2T_{\bar k}\nabla_j\{\|\Omega\|^2\nu\}$$
$$+ RT_jT_{\bar k} + \alpha'T_jT_{\bar k}(\|\Omega\|^3\tilde\rho^{p\bar q}R_{\bar qp}) - 2|T|^2T_jT_{\bar k} - \frac{\alpha'}{2}\sigma_2(i\mathrm{Ric}_\omega)T_jT_{\bar k} - 2T_jT_{\bar k}\{\|\Omega\|^2\nu\}$$
$$- R\nabla_{\bar k}T_j - \alpha'\nabla_{\bar k}T_j\,(\|\Omega\|^3\tilde\rho^{p\bar q}R_{\bar qp}) + 2|T|^2\nabla_{\bar k}T_j + \frac{\alpha'}{2}\sigma_2(i\mathrm{Ric}_\omega)\nabla_{\bar k}T_j + 2\nabla_{\bar k}T_j\{\|\Omega\|^2\nu\}\Big\}. \quad (5.4)$$
We now study the highest order terms, namely
$$\nabla_{\bar k}\nabla_jR_{\bar qp} = -2\nabla_{\bar k}\nabla_j\nabla_p\nabla_{\bar q}u. \quad (5.5)$$
We will use the following commutation formula for covariant derivatives in Hermitian geometry:
$$\nabla_{\bar k}\nabla_j\nabla_p\nabla_{\bar q}u = \nabla_p\nabla_{\bar q}\nabla_j\nabla_{\bar k}u + T^\lambda{}_{pj}\nabla_{\bar q}\nabla_\lambda\nabla_{\bar k}u + \bar T^{\bar\lambda}{}_{\bar q\bar k}\nabla_p\nabla_j\nabla_{\bar\lambda}u + R_{\bar kj}{}^\lambda{}_pu_{\bar q\lambda} - R_{\bar qp\bar k}{}^{\bar\lambda}u_{\bar\lambda j} + \bar T^{\bar\lambda}{}_{\bar q\bar k}T^\gamma{}_{pj}u_{\bar\lambda\gamma}. \quad (5.6)$$
Using $R_{\bar qp} = -2u_{\bar qp}$, we obtain
$$\nabla_{\bar k}\nabla_jR_{\bar qp} = \nabla_p\nabla_{\bar q}R_{\bar kj} + T^\lambda{}_{pj}\nabla_{\bar q}R_{\bar k\lambda} + \bar T^{\bar\lambda}{}_{\bar q\bar k}\nabla_pR_{\bar\lambda j} + R_{\bar kj}{}^\lambda{}_pR_{\bar q\lambda} - R_{\bar qp\bar k}{}^{\bar\lambda}R_{\bar\lambda j} + \bar T^{\bar\lambda}{}_{\bar q\bar k}T^\gamma{}_{pj}R_{\bar\lambda\gamma}. \quad (5.7)$$
Hence
$$\nabla_{\bar k}\nabla_jR = g^{p\bar q}\nabla_p\nabla_{\bar q}R_{\bar kj} + g^{p\bar q}T^\lambda{}_{pj}\nabla_{\bar q}R_{\bar k\lambda} + g^{p\bar q}\bar T^{\bar\lambda}{}_{\bar q\bar k}\nabla_pR_{\bar\lambda j} + R_{\bar kj}{}^{p\bar q}R_{\bar qp} - R^p{}_{p\bar k}{}^{\bar\lambda}R_{\bar\lambda j} + g^{p\bar q}\bar T^{\bar\lambda}{}_{\bar q\bar k}T^\gamma{}_{pj}R_{\bar\lambda\gamma}. \quad (5.8)$$
Differentiating $\sigma_2^{p\bar q}$ (4.19) leads to the following definition:
$$\sigma_2^{p\bar q,r\bar s} = g^{p\bar q}g^{r\bar s} - g^{p\bar s}g^{r\bar q}. \quad (5.9)$$
With this notation, we now differentiate $\sigma_2(i\mathrm{Ric}_\omega)$ twice:
$$\nabla_{\bar k}\nabla_j\sigma_2(i\mathrm{Ric}_\omega) = \nabla_{\bar k}(\sigma_2^{p\bar q}\nabla_jR_{\bar qp}) = \sigma_2^{p\bar q}\nabla_{\bar k}\nabla_jR_{\bar qp} + \sigma_2^{p\bar q,r\bar s}\nabla_{\bar k}R_{\bar sr}\nabla_jR_{\bar qp}$$
$$= \sigma_2^{p\bar q}\nabla_p\nabla_{\bar q}R_{\bar kj} + \sigma_2^{p\bar q,r\bar s}\nabla_{\bar k}R_{\bar sr}\nabla_jR_{\bar qp} + \sigma_2^{p\bar q}T^\lambda{}_{pj}\nabla_{\bar q}R_{\bar k\lambda} + \sigma_2^{p\bar q}\bar T^{\bar\lambda}{}_{\bar q\bar k}\nabla_pR_{\bar\lambda j} + \sigma_2^{p\bar q}R_{\bar kj}{}^\lambda{}_pR_{\bar q\lambda} - \sigma_2^{p\bar q}R_{\bar qp\bar k}{}^{\bar\lambda}R_{\bar\lambda j} + \sigma_2^{p\bar q}\bar T^{\bar\lambda}{}_{\bar q\bar k}T^\gamma{}_{pj}R_{\bar\lambda\gamma}. \quad (5.10)$$
By (5.8) and (5.10), and proceeding similarly for the $\rho$ term, we obtain
$$\nabla_{\bar k}\nabla_jR + \alpha'\|\Omega\|^3\tilde\rho^{p\bar q}\nabla_{\bar k}\nabla_jR_{\bar qp} - \frac{\alpha'}{2}\nabla_{\bar k}\nabla_j\sigma_2(i\mathrm{Ric}_\omega)$$
$$= F^{p\bar q}\nabla_p\nabla_{\bar q}R_{\bar kj} - \frac{\alpha'}{2}\sigma_2^{p\bar q,r\bar s}\nabla_{\bar k}R_{\bar sr}\nabla_jR_{\bar qp} + F^{p\bar q}T^\lambda{}_{pj}\nabla_{\bar q}R_{\bar k\lambda} + F^{p\bar q}\bar T^{\bar\lambda}{}_{\bar q\bar k}\nabla_pR_{\bar\lambda j} + F^{p\bar q}R_{\bar kj}{}^\lambda{}_pR_{\bar q\lambda} - F^{p\bar q}R_{\bar qp\bar k}{}^{\bar\lambda}R_{\bar\lambda j} + F^{p\bar q}\bar T^{\bar\lambda}{}_{\bar q\bar k}T^\gamma{}_{pj}R_{\bar\lambda\gamma}, \quad (5.11)$$
where the definition of $F^{p\bar q}$ was given in (4.20).
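The definition (5.9) is simply the second derivative of the function $\sigma_2$; as a quick sketch, writing $\sigma_2(A) = \frac{1}{2}\big((\mathrm{tr}A)^2 - \mathrm{tr}(A^2)\big)$ for the Ricci endomorphism $A$,
$$\frac{\partial\sigma_2}{\partial A^p{}_q} = (\mathrm{tr}A)\,\delta^q{}_p - A^q{}_p, \qquad \frac{\partial^2\sigma_2}{\partial A^p{}_q\,\partial A^r{}_s} = \delta^q{}_p\delta^s{}_r - \delta^s{}_p\delta^q{}_r,$$
and raising the indices with the metric recovers (4.19) and (5.9) respectively.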
Using (2.38), we may convert derivatives of torsion of the form $\bar\nabla T$ into curvature terms, but terms $\nabla T$ are of a different type and must be treated separately. For example,
$$-2\nabla_{\bar k}\nabla_j|T|^2 = -2g^{p\bar q}\nabla_{\bar k}\nabla_jT_p\,\bar T_{\bar q} - 2g^{p\bar q}\nabla_jT_p\nabla_{\bar k}\bar T_{\bar q} - \frac{1}{2}g^{p\bar q}R_{\bar kp}R_{\bar qj} - g^{p\bar q}T_p\nabla_{\bar k}R_{\bar qj}$$
$$= -g^{p\bar q}\nabla_jR_{\bar kp}\,\bar T_{\bar q} - 2g^{p\bar q}R_{\bar kj}{}^\lambda{}_pT_\lambda\bar T_{\bar q} - 2g^{p\bar q}\nabla_jT_p\nabla_{\bar k}\bar T_{\bar q} - \frac{1}{2}g^{p\bar q}R_{\bar kp}R_{\bar qj} - g^{p\bar q}T_p\nabla_{\bar k}R_{\bar qj}. \quad (5.12)$$
Substituting (5.11) and (5.12) into (5.4),
$$\partial_tR_{\bar kj} = \frac{1}{2\|\Omega\|}\Big\{F^{p\bar q}\nabla_p\nabla_{\bar q}R_{\bar kj} - \frac{\alpha'}{2}\sigma_2^{p\bar q,r\bar s}\nabla_{\bar k}R_{\bar sr}\nabla_jR_{\bar qp} + 2\alpha'\|\Omega\|^3\tilde\rho^{p\bar q}\nabla_jT_p\nabla_{\bar k}\bar T_{\bar q} - 2g^{p\bar q}\nabla_jT_p\nabla_{\bar k}\bar T_{\bar q} + Y_{\bar kj}\Big\}, \quad (5.13)$$
where $Y_{\bar kj}$ contains various combinations of torsion and curvature terms, but is linear in first derivatives of curvature and torsion and does not contain higher-order derivatives of curvature and torsion. Explicitly,
$$Y_{\bar kj} = F^{p\bar q}T^\lambda{}_{pj}\nabla_{\bar q}R_{\bar k\lambda} + F^{p\bar q}\bar T^{\bar\lambda}{}_{\bar q\bar k}\nabla_pR_{\bar\lambda j} + F^{p\bar q}R_{\bar kj}{}^\lambda{}_pR_{\bar q\lambda} - F^{p\bar q}R_{\bar qp\bar k}{}^{\bar\lambda}R_{\bar\lambda j} + F^{p\bar q}\bar T^{\bar\lambda}{}_{\bar q\bar k}T^\gamma{}_{pj}R_{\bar\lambda\gamma} - g^{p\bar q}\nabla_jR_{\bar kp}\bar T_{\bar q} - 2g^{p\bar q}R_{\bar kj}{}^\lambda{}_pT_\lambda\bar T_{\bar q} - \frac{1}{2}g^{p\bar q}R_{\bar kp}R_{\bar qj} - g^{p\bar q}T_p\nabla_{\bar k}R_{\bar qj} + \alpha'\nabla_{\bar k}(\|\Omega\|^3\tilde\rho^{p\bar q})\nabla_jR_{\bar qp} + \alpha'\nabla_j(\|\Omega\|^3\tilde\rho^{p\bar q})\nabla_{\bar k}R_{\bar qp} + \alpha'(\nabla_{\bar k}\nabla_j\|\Omega\|^3\tilde\rho^{p\bar q})R_{\bar qp} - T_jF^{p\bar q}\nabla_{\bar k}R_{\bar qp} - \alpha'T_j\nabla_{\bar k}(\|\Omega\|^3\tilde\rho^{p\bar q})R_{\bar qp} + g^{p\bar q}T_jR_{\bar kp}\bar T_{\bar q} + 2g^{p\bar q}T_jT_p\nabla_{\bar k}\bar T_{\bar q}$$
$$- 2\Big\{-\alpha'\nabla_{\bar k}\nabla_j(\|\Omega\|^3\psi_\rho) - \frac{\alpha'}{2}\mathrm{Re}\{\|\Omega\|^3b^i_\rho\nabla_jR_{\bar ki}\} - \alpha'\mathrm{Re}\{\|\Omega\|^3b^i_\rho R_{\bar kj}{}^\lambda{}_iT_\lambda\} - \alpha'\mathrm{Re}\{\nabla_{\bar k}(\|\Omega\|^3b^i_\rho)\nabla_jT_i\} - \alpha'\mathrm{Re}\{\nabla_j(\|\Omega\|^3b^i_\rho)\nabla_{\bar k}T_i\} - \alpha'\mathrm{Re}\{\nabla_{\bar k}\nabla_j(\|\Omega\|^3b^i_\rho)T_i\} + \nabla_{\bar k}\nabla_j(\|\Omega\|^2\tilde\mu)\Big\}$$
$$+ \Big\{2\alpha'(\nabla_{\bar k}\nabla_j\|\Omega\|^3\tilde\rho^{p\bar q})T_p\bar T_{\bar q} + 2\alpha'\nabla_{\bar k}(\|\Omega\|^3\tilde\rho^{p\bar q})\nabla_j(T_p\bar T_{\bar q}) + 2\alpha'\nabla_j(\|\Omega\|^3\tilde\rho^{p\bar q})\nabla_{\bar k}(T_p\bar T_{\bar q}) + \alpha'\|\Omega\|^3\tilde\rho^{p\bar q}\nabla_jR_{\bar kp}\bar T_{\bar q} + 2\alpha'\|\Omega\|^3\tilde\rho^{p\bar q}R_{\bar kj}{}^\lambda{}_pT_\lambda\bar T_{\bar q} + \alpha'\|\Omega\|^3\tilde\rho^{p\bar q}T_p\nabla_{\bar k}R_{\bar qj} + \frac{\alpha'}{2}\|\Omega\|^3\tilde\rho^{p\bar q}R_{\bar kp}R_{\bar qj}\Big\}$$
$$+ 2T_j\nabla_{\bar k}\Big\{-\alpha'\|\Omega\|^3\psi_\rho - \alpha'\|\Omega\|^3\mathrm{Re}\{b^i_\rho T_i\} - \alpha'\|\Omega\|^3\tilde\rho^{p\bar q}T_p\bar T_{\bar q} + \|\Omega\|^2\tilde\mu\Big\}$$
$$-\,T_{\bar k}F^{p\bar q}\nabla_jR_{\bar qp} - \alpha'T_{\bar k}\nabla_j(\|\Omega\|^3\tilde\rho^{p\bar q})R_{\bar qp} + 2g^{p\bar q}T_{\bar k}\nabla_jT_p\,\bar T_{\bar q} + g^{p\bar q}T_{\bar k}T_pR_{\bar qj} + 2T_{\bar k}\nabla_j\Big\{-\alpha'\|\Omega\|^3\psi_\rho - \alpha'\|\Omega\|^3\mathrm{Re}\{b^i_\rho T_i\} - \alpha'\|\Omega\|^3\tilde\rho^{p\bar q}T_p\bar T_{\bar q} + \|\Omega\|^2\tilde\mu\Big\}$$
$$+ RT_jT_{\bar k} + \alpha'T_jT_{\bar k}(\|\Omega\|^3\tilde\rho^{p\bar q}R_{\bar qp}) - 2|T|^2T_jT_{\bar k} - \frac{\alpha'}{2}\sigma_2(i\mathrm{Ric}_\omega)T_jT_{\bar k} - 2T_jT_{\bar k}\Big\{-\alpha'\|\Omega\|^3\psi_\rho - \alpha'\|\Omega\|^3\mathrm{Re}\{b^i_\rho T_i\} - \alpha'\|\Omega\|^3\tilde\rho^{p\bar q}T_p\bar T_{\bar q} + \|\Omega\|^2\tilde\mu\Big\}$$
$$- \frac{1}{2}RR_{\bar kj} - \frac{\alpha'}{2}R_{\bar kj}(\|\Omega\|^3\tilde\rho^{p\bar q}R_{\bar qp}) + |T|^2R_{\bar kj} + \frac{\alpha'}{4}\sigma_2(i\mathrm{Ric}_\omega)R_{\bar kj} + R_{\bar kj}\Big\{-\alpha'\|\Omega\|^3\psi_\rho - \alpha'\|\Omega\|^3\mathrm{Re}\{b^i_\rho T_i\} - \alpha'\|\Omega\|^3\tilde\rho^{p\bar q}T_p\bar T_{\bar q} + \|\Omega\|^2\tilde\mu\Big\}. \quad (5.14)$$
The terms in braces indicate terms which come from substituting the definition of $\nu$ (4.10).

5.2 Evolving the norm of the curvature

We will compute
$$\partial_t|\mathrm{Ric}_\omega|^2 = \partial_t\{g^{k\bar\ell}g^{i\bar j}R_{\bar\ell i}R_{\bar kj}\}. \quad (5.15)$$
We have
$$\partial_tg^{i\bar j} = -g^{i\bar\lambda}g^{\gamma\bar j}\partial_tg_{\bar\lambda\gamma} = \frac{1}{2\|\Omega\|}\Big(\frac{R}{2} + \frac{\alpha'}{2}\|\Omega\|^3\tilde\rho^{p\bar q}R_{\bar qp} - \frac{\alpha'}{4}\sigma_2(i\mathrm{Ric}_\omega) - |T|^2 - \|\Omega\|^2\nu\Big)g^{i\bar j}. \quad (5.16)$$
Hence
$$\partial_t|\mathrm{Ric}_\omega|^2 = 2\mathrm{Re}\langle\partial_t\mathrm{Ric}_\omega, \mathrm{Ric}_\omega\rangle + \frac{|\mathrm{Ric}_\omega|^2}{2\|\Omega\|}\Big(R + \alpha'\|\Omega\|^3\tilde\rho^{p\bar q}R_{\bar qp} - \frac{\alpha'}{2}\sigma_2(i\mathrm{Ric}_\omega) - 2|T|^2 - 2\|\Omega\|^2\nu\Big). \quad (5.17)$$
Next,
$$F^{p\bar q}\nabla_p\nabla_{\bar q}|\mathrm{Ric}_\omega|^2 = g^{k\bar\ell}g^{i\bar j}F^{p\bar q}\nabla_p\nabla_{\bar q}R_{\bar\ell i}\,R_{\bar jk} + g^{k\bar\ell}g^{i\bar j}R_{\bar\ell i}\,F^{p\bar q}\nabla_p\nabla_{\bar q}R_{\bar jk} + |\nabla\mathrm{Ric}_\omega|^2_{Fgg} + |\bar\nabla\mathrm{Ric}_\omega|^2_{Fgg}$$
$$= g^{k\bar\ell}g^{i\bar j}F^{p\bar q}\nabla_p\nabla_{\bar q}R_{\bar\ell i}\,R_{\bar kj} + g^{k\bar\ell}g^{i\bar j}R_{\bar\ell i}\,F^{q\bar p}\nabla_q\nabla_{\bar p}R_{\bar kj} - g^{k\bar\ell}g^{i\bar j}R_{\bar\ell i}F^{p\bar q}R_{\bar qp}{}^\lambda{}_kR_{\bar j\lambda} + g^{k\bar\ell}g^{i\bar j}R_{\bar\ell i}F^{p\bar q}R_{\bar qp\bar j}{}^{\bar\lambda}R_{\bar\lambda k} + |\nabla\mathrm{Ric}_\omega|^2_{Fgg} + |\bar\nabla\mathrm{Ric}_\omega|^2_{Fgg}. \quad (5.18)$$
We have shown
$$\Delta_F|\mathrm{Ric}_\omega|^2 = 2\mathrm{Re}\langle\Delta_F\mathrm{Ric}_\omega, \mathrm{Ric}_\omega\rangle + |\nabla\mathrm{Ric}_\omega|^2_{Fgg} + |\bar\nabla\mathrm{Ric}_\omega|^2_{Fgg} - g^{k\bar\ell}g^{i\bar j}R_{\bar\ell i}F^{p\bar q}R_{\bar qp}{}^\lambda{}_kR_{\bar j\lambda} + g^{k\bar\ell}g^{i\bar j}R_{\bar\ell i}F^{p\bar q}R_{\bar qp\bar j}{}^{\bar\lambda}R_{\bar\lambda k}. \quad (5.19)$$
Substituting (5.13) into (5.17) gives
$$\partial_t|\mathrm{Ric}_\omega|^2 = \frac{1}{2\|\Omega\|}\Big\{\Delta_F|\mathrm{Ric}_\omega|^2 - |\nabla\mathrm{Ric}_\omega|^2_{Fgg} - |\bar\nabla\mathrm{Ric}_\omega|^2_{Fgg} \quad (5.20)$$
$$- \alpha'\mathrm{Re}\{g^{j\bar\ell}g^{m\bar k}\sigma_2^{p\bar q,r\bar s}R_{\bar\ell m}\nabla_{\bar k}R_{\bar sr}\nabla_jR_{\bar qp}\} + 4\alpha'\mathrm{Re}\{g^{j\bar\ell}g^{m\bar k}R_{\bar\ell m}\|\Omega\|^3\tilde\rho^{p\bar q}\nabla_jT_p\nabla_{\bar k}\bar T_{\bar q}\} - 4\mathrm{Re}\{g^{j\bar\ell}g^{m\bar k}R_{\bar\ell m}g^{p\bar q}\nabla_jT_p\nabla_{\bar k}\bar T_{\bar q}\} + 2\mathrm{Re}\{g^{j\bar\ell}g^{m\bar k}R_{\bar\ell m}Y_{\bar kj}\}$$
$$+ g^{k\bar\ell}g^{i\bar j}R_{\bar\ell i}F^{p\bar q}R_{\bar qp}{}^\lambda{}_kR_{\bar j\lambda} - g^{k\bar\ell}g^{i\bar j}R_{\bar\ell i}F^{p\bar q}R_{\bar qp\bar j}{}^{\bar\lambda}R_{\bar\lambda k} + |\mathrm{Ric}_\omega|^2R + \alpha'\|\Omega\|^3\tilde\rho^{p\bar q}R_{\bar qp}|\mathrm{Ric}_\omega|^2 - 2|T|^2|\mathrm{Ric}_\omega|^2 - \frac{\alpha'}{2}\sigma_2(i\mathrm{Ric}_\omega)|\mathrm{Ric}_\omega|^2 - 2\|\Omega\|^2|\mathrm{Ric}_\omega|^2\nu\Big\}.$$

5.3 Estimating the Ricci curvature

Lemma 1. Let $0 < \delta, \epsilon < \frac{1}{2}$ be such that $-\frac{1}{4}g^{p\bar q} < \alpha'\delta^2\|\Omega\|\tilde\rho^{p\bar q} < \frac{1}{4}g^{p\bar q}$, and
$$\|\Omega\|^2 \le \delta, \qquad |T|^2 \le \delta, \qquad |\alpha'\mathrm{Ric}_\omega| \le \epsilon, \quad (5.21)$$
at a point $(p,t_0)$. Let $\Lambda > 1$ be any constant. Then at $(p,t_0)$ there holds
$$\partial_t(|\alpha'\mathrm{Ric}_\omega|^2 + \Lambda|T|^2) \le \frac{1}{2\|\Omega\|}\Big\{\Delta_F(|\alpha'\mathrm{Ric}_\omega|^2 + \Lambda|T|^2) - \Big(\frac{1}{2}-2\epsilon\Big)|\alpha'\nabla\mathrm{Ric}_\omega|^2 \quad (5.22)$$
$$- \Big(\frac{\Lambda}{4} - (5+C\delta^2)\epsilon|\alpha'|^{-1}\Big)|\nabla T|^2 - \frac{\Lambda}{8}|\mathrm{Ric}_\omega|^2 + C(1+\Lambda)\epsilon\delta + C\epsilon^2 + C\Lambda\delta\Big\},$$
for some constant $C$ only depending on $\tilde\mu$, $\rho$, $\alpha'$, and the background manifold $(X,\hat\omega)$.

Proof: Since $\epsilon$ and $\delta$ are assumed to be small, we have
$$F^{p\bar q} = g^{p\bar q} + \alpha'\|\Omega\|^3\tilde\rho^{p\bar q} - \frac{\alpha'}{2}\sigma_2^{p\bar q}, \qquad \frac{1}{2}g^{p\bar q} < F^{p\bar q} < \frac{3}{2}g^{p\bar q}. \quad (5.23)$$
We note the following estimate:
$$-\alpha'\mathrm{Re}\{g^{j\bar\ell}g^{m\bar k}\sigma_2^{p\bar q,r\bar s}R_{\bar\ell m}\nabla_{\bar k}R_{\bar sr}\nabla_jR_{\bar qp}\} \le |\alpha'\mathrm{Ric}_\omega|\,|\nabla\mathrm{Ric}_\omega|^2. \quad (5.24)$$
We will estimate and group terms in (5.20) and (5.14). We will convert $F^{p\bar q}$ into the metric $g^{p\bar q}$, and handle $\tilde\rho^{p\bar q}$ and $b^i_\rho$ as in (4.56). We will also use that the norm of the full torsion $T(\omega) = i\partial\omega$ is $2|T|^2$, that $\nabla_i\|\Omega\| = \|\Omega\|T_i$, that $\nabla_{\bar k}\nabla_i\|\Omega\| = \|\Omega\|T_i\bar T_{\bar k} + 2^{-1}\|\Omega\|R_{\bar ki}$, and that $\|\Omega\| \le 1$:
$$\partial_t|\mathrm{Ric}_\omega|^2 \le \frac{1}{2\|\Omega\|}\Big\{\Delta_F|\mathrm{Ric}_\omega|^2 - \frac{1}{2}|\nabla\mathrm{Ric}_\omega|^2 - \frac{1}{2}|\bar\nabla\mathrm{Ric}_\omega|^2 + |\alpha'\mathrm{Ric}_\omega|\,|\nabla\mathrm{Ric}_\omega|^2 + (4+C\|\Omega\|^2)|\mathrm{Ric}_\omega|\,|\nabla T|^2\Big\} \quad (5.25)$$
$$+ \frac{C}{2\|\Omega\|}\Big\{|T||\mathrm{Ric}_\omega||\nabla\mathrm{Ric}_\omega| + \|\Omega\|^2(1+|T|)|\mathrm{Ric}_\omega||\nabla\mathrm{Ric}_\omega| + (|\mathrm{Ric}_\omega|+|\mathrm{Ric}_\omega|^2)|T|^2|\nabla T| + |\mathrm{Rm}||\mathrm{Ric}_\omega|^2 + |\mathrm{Rm}||\mathrm{Ric}_\omega||T|^2 + |\mathrm{Ric}_\omega|^2|T|^2 + |\mathrm{Ric}_\omega||T|^4 + |\mathrm{Ric}_\omega|^3(|T|+1)^2 + |\mathrm{Ric}_\omega|^4 + \|\Omega\|^2|\mathrm{Ric}_\omega|(|T|+1)^4(|\mathrm{Ric}_\omega|+|\mathrm{Rm}|+|\nabla T|+1)\Big\}.$$
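The absorption scheme used on the terms in the second set of braces of (5.25) is, throughout, the weighted Young inequality (elementary, recorded for convenience):
$$ab \le \frac{\epsilon_0}{2}a^2 + \frac{1}{2\epsilon_0}b^2 \qquad (a, b \ge 0,\ \epsilon_0 > 0),$$
with the weight $\epsilon_0$ chosen in each instance so that the quadratic piece is absorbed by the good terms $-\frac{1}{2}|\nabla\mathrm{Ric}_\omega|^2$ and, below, $-\frac{1}{4}|\nabla T|^2$. The estimates (5.26)-(5.29) that follow are instances of this with explicit weights.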
First, we estimate C(|Ric\u03c9| + |Ric\u03c9|2)|T|2|\u2207T| \u2264|Ric\u03c9||\u2207T|2 + C2 2 |Ric\u03c9|(1 + |Ric\u03c9|)2|T|4. (5.26) C|T||Ric\u03c9||\u2207Ric\u03c9| \u22641 2|\u03b1\u2032Ric\u03c9||\u2207Ric\u03c9|2 + C2 2|\u03b1\u2032||Ric\u03c9||T|2, (5.27) We may estimate, using |T| \u22641, C\u2225\u2126\u22252|Ric\u03c9| (|T| + 1)4|\u2207T| \u2264\u2225\u2126\u22252 |Ric\u03c9||\u2207T|2 + C2 4 (2)8\u2225\u2126\u22252|Ric\u03c9|, (5.28) C\u2225\u2126\u22252(1 + |T|)|Ric\u03c9||\u2207Ric\u03c9| \u22641 2|\u03b1\u2032Ric\u03c9||\u2207Ric\u03c9|2 + 1 2|\u03b1\u2032|(2C\u2225\u2126\u22252)2|Ric\u03c9|. (5.29) Recall that R\u00af kj \u03b1 \u03b2 = \u02c6 R\u00af kj \u03b1 \u03b2 + 1 2R\u00af kj \u03b4\u03b1 \u03b2. (5.30) Hence, using \u2225\u2126\u2225\u22641, |T| \u22641 and |\u03b1\u2032Ric\u03c9| \u22641 on lower order terms, from (5.25) and the above estimates, we get \u2202t|Ric\u03c9|2 \u2264 1 2\u2225\u2126\u2225 \u001a \u2206F|Ric\u03c9|2 \u2212 \u00121 2 \u22122|\u03b1\u2032Ric\u03c9| \u0013 |\u2207Ric\u03c9|2 + (5 + C\u2225\u2126\u22252)|Ric\u03c9||\u2207T|2 \u001b + C 2\u2225\u2126\u2225 \u001a |Ric\u03c9||T|2 + |Ric\u03c9|2 + \u2225\u2126\u22252|Ric\u03c9| \u001b . (5.31) In terms of 0 < \u01eb, \u03b4 < 1, we have \u2202t |\u03b1\u2032Ric\u03c9|2 \u2264 1 2\u2225\u2126\u2225 \u001a \u2206F |\u03b1\u2032Ric\u03c9|2 \u2212 \u00121 2 \u22122\u01eb \u0013 |\u03b1\u2032\u2207Ric\u03c9|2 +(5 + C\u03b42)\u01eb|\u03b1\u2032|\u22121|\u2207T|2 + C\u03b4 \u01eb + C\u01eb2 \u001b . (5.32) Using the evolution of the torsion (4.29) \u2202t|T|2 = 1 2\u2225\u2126\u2225 \u001a \u2206F|T|2 \u2212|\u2207T|2 F g \u22121 4|Ric\u03c9|2 F g \u22122Re{gi\u00af jgp\u00af q\u2207iTp \u00af T\u00af q \u00af T\u00af j} \u2212Re{gi\u00af jgp\u00af qTpR\u00af qi \u00af T\u00af j} \u22121 2R|T|2 + \u03b1\u2032 4 \u03c32(iRic\u03c9)|T|2 +Re{F p\u00af qgi\u00af jT \u03bb piR\u00af q\u03bb \u00af T\u00af j} \u2212F p\u00af qgi\u00af jTi( \u02c6 R\u00af qp\u00af j \u00af \u03bb + R\u00af qp\u03b4 \u00af \u03bb j )T\u00af \u03bb + |T|4 +\u03b1\u2032 2 \u2225\u2126\u22253\u02dc \u03c1p\u00af qR\u00af qp|T|2 \u2212|T|2\u2225\u2126\u22252 \u03bd + 2Re\u27e8E, T\u27e9 \u001b . (5.33) 36 \fEstimating by replacing F p\u00af q by the evolving metric gp\u00af q, \u2202t|T|2 \u2264 1 2\u2225\u2126\u2225 \u001a \u2206F|T|2 \u22121 2|\u2207T|2 \u22121 8|Ric\u03c9|2 + 2|\u2207T||T|2 + C|Ric\u03c9||T|2 +|R||T|2 + |\u03b1\u2032| 4 |Ric\u03c9|2|T|2 + \u2225\u2126\u2225| \u02c6 Rm|\u02c6 g|T|2 + |T|4 +C\u2225\u2126\u22252(|T|4 + |T|3 + |T|2 + |T|)(1 + |Ric\u03c9| + |\u2207T|) \u001b . (5.34) Estimate 2|\u2207T||T|2 \u22641 8|\u2207T|2 + 8|T|4, (5.35) and C\u2225\u2126\u22252(|T|4 + |T|3 + |T|2 + |T|)|\u2207T| \u22641 8|\u2207T|2 + 2C2\u2225\u2126\u22254(4)2. (5.36) Using 0 < \u03b4, \u01eb < 1, \u2202t |T|2 \u2264 1 2\u2225\u2126\u2225 \u001a \u2206F |T|2 \u22121 4|\u2207T|2 \u22121 8|Ric\u03c9|2 + C\u01eb\u03b4 + C\u03b4 \u001b . (5.37) Combining (5.32) and (5.37), we obtain the desired estimate. Theorem 5 Start the \ufb02ow with a constant function u0 = log M. There exists M0 \u226b1 such that for every M \u2265M0, if \u2225\u2126\u22252 \u2264C2 2 M2, |T|2 \u2264C3 M , (5.38) along the \ufb02ow, then |\u03b1\u2032Ric\u03c9| \u2264 C5 M1/2 , (5.39) where C5 only depends on (X, \u02c6 \u03c9), \u03c1, \u02dc \u00b5 and \u03b1\u2032. Here, C2 and C3 are the constants given in Theorems 3 and 4 respectively. Proof: Denote \u01eb = 1 M1/2 , \u03b4 = C3 M . 
(5.40) Let C4 denote the constant C on the right-hand side of (5.22), which only depends on (X, \u02c6 \u03c9), \u03c1, \u02dc \u00b5 and \u03b1\u2032. For M0 large enough, we can simultaneously satisfy the hypothesis of Lemma 1, and the inequalities 2\u01eb < 1 2 and (5 + C4\u03b42)\u01eb \u22641. We will study the evolution equation of |\u03b1\u2032Ric\u03c9|2 + \u039b|T|2, (5.41) where \u039b is a constant given by \u039b = max{ 4|\u03b1\u2032|\u22121, 8|\u03b1\u2032|2(C4 + 1) }. (5.42) 37 \fWith this choice of \u039b and M0, we have \u00121 2 \u22122\u01eb \u0013 \u22650, \u0012\u039b 4 \u2212(5 + C4\u03b42)\u01eb|\u03b1\u2032|\u22121 \u0013 \u22650. (5.43) At t = 0, u0 = log M and it follows that \u03b1\u20322|Ric\u03c9|2 + \u039b|T|2 = 0. (5.44) Suppose that along the \ufb02ow, we reach \u03b1\u20322|Ric\u03c9|2 + \u039b|T|2 = (2\u039bC3 + 1)\u01eb2, (5.45) at some point p \u2208X at a \ufb01rst time t0 > 0. By Lemma 1, \u2202t(|\u03b1\u2032Ric\u03c9|2 + \u039b|T|2) \u2264 1 2\u2225\u2126\u2225 \u001a \u2212\u039b 8 |Ric\u03c9|2 + C4(1 + \u039b)\u01eb\u03b4 + C4\u01eb2 + C4\u039b\u03b4 \u001b .(5.46) At (p, t0), we have |\u03b1\u2032Ric\u03c9|2 = (2\u039bC3 + 1)\u01eb2 \u2212\u039b|T|2 \u2265(2\u039bC3 + 1)\u01eb2 \u2212\u039b\u03b4. (5.47) Thus \u2202t(|\u03b1\u2032Ric\u03c9|2 + \u039b|T|2) \u2264 1 2\u2225\u2126\u2225 \u001a \u2212 \u039b 8|\u03b1\u2032|2\u01eb2 + C4\u01eb2 \u2212 \u039b2 8|\u03b1\u2032|2(2C3\u01eb2 \u2212\u03b4) + C4\u039b\u03b4 + C4(1 + \u039b)\u01eb\u03b4 \u001b . After substituting the de\ufb01nition of \u01eb and \u03b4, we obtain \u2202t(|\u03b1\u2032Ric\u03c9|2 + \u039b|T|2) \u2264 1 2\u2225\u2126\u2225 \u001a \u2212 \u0012 \u039b 8|\u03b1\u2032|2 \u2212C4 \u0013 1 M \u2212 \u0012 \u039b 8|\u03b1\u2032|2 \u2212C4 \u0013C3\u039b M +C3C4(1 + \u039b) 1 M1/2 1 M \u001b . (5.48) By our choice of \u039b (5.42), for M0 \u226b1 depending only on (X, \u02c6 \u03c9), \u03b1\u2032, \u02dc \u00b5, \u03c1, for all M \u2265M0 we have at (p, t0) \u2202t(|\u03b1\u2032Ric\u03c9|2 + \u039b|T|2) \u22640. (5.49) Hence along the \ufb02ow, there holds |\u03b1\u2032Ric\u03c9|2 + \u039b|T|2 \u2264(2\u039bC3 + 1)\u01eb2. (5.50) It follows that |\u03b1\u2032Ric\u03c9| \u2264(2\u039bC3 + 1)1/2\u01eb (5.51) is preserved along the \ufb02ow. 38 \f6 Higher order estimates 6.1 The evolution of derivatives of torsion 6.1.1 Covariant derivative of torsion Since \u2207\u00af kTj = 1 2R\u00af kj, we only need to look at \u2207kTj. We will compute \u2202t\u2207iTj = \u2207i\u2202tTj \u2212\u2202t\u0393\u03bb ijT\u03bb. (6.1) First, using the standard formula for the evolution of the Christo\ufb00el symbols and (1.6), we compute \u2202t\u0393\u03bb ij = g\u03bb\u00af \u00b5\u2207i\u2202tg\u00af \u00b5j = \u2207i \u001a 1 2\u2225\u2126\u2225 \u0012 \u2212R 2 \u2212\u03b1\u2032 2 \u2225\u2126\u22253\u02dc \u03c1p\u00af qR\u00af qp + |T|2 + \u03b1\u2032 4 \u03c32(iRic\u03c9) + \u03b1\u2032\u2225\u2126\u22252 \u03bd \u0013\u001b \u03b4\u03bb j = 1 2\u2225\u2126\u2225 \u001a \u22121 2\u2207iR \u2212\u03b1\u2032 2 \u2225\u2126\u22253\u02dc \u03c1p\u00af q\u2207iR\u00af qp + \u03b1\u2032 4 \u03c3p\u00af q 2 \u2207iR\u00af qp + gp\u00af q\u2207iTp \u00af T\u00af q +1 2gp\u00af qTpR\u00af qi + R 2 Ti \u2212|T|2Ti \u2212\u03b1\u2032 4 \u03c32(iRic\u03c9)Ti \u2212Ei \u001b \u03b4\u03bb j. (6.2) We recall that the de\ufb01nition of Ei is given in (4.14). 
Using (4.21) \u2202t\u2207iTj = 1 2\u2225\u2126\u2225\u2207i \u001a F p\u00af q\u2207p\u2207\u00af qTj \u2212\u2207j|T|2 \u22121 2TjR + \u03b1\u2032 4 Tj\u03c32(iRic\u03c9) + 1 2F p\u00af qT \u03bb pjR\u00af q\u03bb +Tj|T|2 + Ej \u001b + \u2207i \u001a 1 2\u2225\u2126\u2225 \u001b\u001a F p\u00af q\u2207p\u2207\u00af qTj \u2212gp\u00af q\u2207jTp \u00af T\u00af q \u22121 2gp\u00af qTpR\u00af qj \u22121 2TjR + \u03b1\u2032 4 Tj\u03c32(iRic\u03c9) + 1 2F p\u00af qT \u03bb pjR\u00af q\u03bb + Tj|T|2 + Ej \u001b \u2212 1 2\u2225\u2126\u2225 \u001a \u2212F p\u00af q\u2207p\u2207\u00af qTi + gp\u00af q\u2207iTp \u00af T\u00af q + 1 2gp\u00af qTpR\u00af qi +R 2 Ti \u2212|T|2Ti \u2212\u03b1\u2032 4 \u03c32(iRic\u03c9)Ti \u2212Ei \u001b Tj. (6.3) First, we may rewrite F p\u00af q\u2207p\u2207\u00af qTj = 1 2F p\u00af q\u2207pR\u00af qj. (6.4) Next, \u2207i{F p\u00af q\u2207p\u2207\u00af qTj} = F p\u00af q\u2207i\u2207p\u2207\u00af qTj + \u2207i \u0012 \u03b1\u2032\u2225\u2126\u22253\u02dc \u03c1p\u00af q \u2212\u03b1\u2032 2 \u03c3p\u00af q 2 \u0013 \u2207p\u2207\u00af qTj = F p\u00af q\u2207p\u2207i\u2207\u00af qTj + F p\u00af qT \u03bb pi\u2207\u03bb\u2207\u00af qTj + \u03b1\u2032\u2207i(\u2225\u2126\u22253\u02dc \u03c1p\u00af q)\u2207p\u2207\u00af qTj \u2212\u03b1\u2032 2 \u03c3p\u00af q,r\u00af s 2 \u2207iR\u00af sr\u2207p\u2207\u00af qTj = F p\u00af q\u2207p\u2207\u00af q\u2207iTj \u2212F p\u00af q\u2207p(R\u00af qi \u03bb jT\u03bb) + F p\u00af qT \u03bb pi\u2207\u03bbR\u00af qj +\u03b1\u2032 2 \u2207i(\u2225\u2126\u22253\u02dc \u03c1p\u00af q)\u2207pR\u00af qj \u2212\u03b1\u2032 4 \u03c3p\u00af q,r\u00af s 2 \u2207iR\u00af sr\u2207pR\u00af qj. (6.5) 39 \fWe also compute \u2207i\u2207j|T|2 = gp\u00af q\u2207i\u2207jTp \u00af T\u00af q + gp\u00af q\u2207jTp\u2207i \u00af T\u00af q + gp\u00af q\u2207iTp\u2207j \u00af T\u00af q + gp\u00af qTp\u2207i\u2207j \u00af T\u00af q = gp\u00af q\u2207i\u2207jTp \u00af T\u00af q + 1 2gp\u00af q\u2207jTpR\u00af qi + 1 2gp\u00af q\u2207iTpR\u00af qj + 1 2gp\u00af qTp\u2207iR\u00af qj. (6.6) We introduce the notation E, which denotes any combination of terms involving only Rm, T, g, \u2225\u2126\u2225, \u03b1\u2032, \u03c1 and \u00b5, as well as any derivatives of \u03c1 and \u00b5. Note that F p\u00af q is an element of E. The notation \u2217refers to a contraction using the evolving metric g. The notation DE denotes any term which is a covariant derivative of a term in E. For example, the group DE contains terms involving \u2207T, \u00af \u2207\u00af T, and \u2207Ric\u03c9. Substituting (6.4), (6.5), (6.6) gives \u2202t\u2207iTj = 1 2\u2225\u2126\u2225 \u001a \u2206F\u2207iTj \u2212\u03b1\u2032 4 \u03c3p\u00af q,r\u00af s 2 \u2207iR\u00af sr\u2207pR\u00af qj + \u2207\u2207T \u2217E + DE \u2217E + E \u001b . (6.7) Here we also used that \u2207iEj = \u2207\u2207T \u2217E + DE \u2217E + E which can be veri\ufb01ed from the de\ufb01nition of Ej given in (4.14) 6.1.2 Norm of covariant derivative of torsion We will compute \u2202t|\u2207T|2 = \u2202t{gi\u00af jgk\u00af \u2113\u2207iTk\u2207\u00af j \u00af T\u00af \u2113}. (6.8) As in (4.26), we have \u2202t|\u2207T|2 = 2Re\u27e8\u2202t\u2207T, \u2207T\u27e9 +2|\u2207T|2 2\u2225\u2126\u2225 \u0012R 2 + \u03b1\u2032 2 \u2225\u2126\u22253\u02dc \u03c1p\u00af qR\u00af qp \u2212\u03b1\u2032 4 \u03c32(iRic\u03c9) \u2212|T|2 \u2212\u2225\u2126\u22252 \u03bd \u0013 . 
(6.9) Next, \u2206F|\u2207T|2 = F p\u00af qgi\u00af jgk\u00af \u2113\u2207p\u2207\u00af q\u2207iTk\u2207\u00af j \u00af T\u00af \u2113+ gi\u00af jgk\u00af \u2113\u2207iTkF q\u00af p\u2207\u00af p\u2207q\u2207jT\u2113 +F p\u00af qgi\u00af jgk\u00af \u2113\u2207p\u2207iTk\u2207\u00af q\u2207\u00af j \u00af T\u00af \u2113+ F p\u00af qgi\u00af jgk\u00af \u2113\u2207\u00af q\u2207iTk\u2207p\u2207\u00af j \u00af T\u00af \u2113 = 2Re\u27e8\u2206F\u2207T, \u2207T\u27e9+ gi\u00af jgk\u00af \u2113\u2207iTkF p\u00af qR\u00af qp\u00af j \u00af \u03bb\u2207\u00af \u03bb \u00af T\u00af \u2113 +gi\u00af jgk\u00af \u2113\u2207iTkF p\u00af qR\u00af qp\u00af \u2113 \u00af \u03bb\u2207\u00af jT\u00af \u03bb + |\u2207\u2207T|2 F gg + F p\u00af qgi\u00af jgk\u00af \u2113\u2207\u00af q\u2207iTk\u2207p\u2207\u00af j \u00af T\u00af \u2113. The last term can be written as a norm of \u2207Ric\u03c9 plus commutator terms. Explicitly, F p\u00af qgi\u00af jgk\u00af \u2113\u2207\u00af q\u2207iTk\u2207p\u2207\u00af j \u00af T\u00af \u2113 = F p\u00af qgi\u00af jgk\u00af \u2113\u2207i\u2207\u00af qTk\u2207\u00af p\u2207jT\u2113+ F p\u00af qgi\u00af jgk\u00af \u2113R\u00af qi \u03bb kT\u03bb\u2207p\u2207\u00af j \u00af T\u00af \u2113 = F p\u00af qgi\u00af jgk\u00af \u2113\u2207i\u2207\u00af qTk\u2207j\u2207\u00af pT\u2113+ F p\u00af qgi\u00af jgk\u00af \u2113\u2207i\u2207\u00af qTkR\u00af jp\u00af \u2113 \u00af \u03bbT\u00af \u03bb +F p\u00af qgi\u00af jgk\u00af \u2113R\u00af qi \u03bb kT\u03bb\u2207\u00af j\u2207p \u00af T\u00af \u2113+ F p\u00af qgi\u00af jgk\u00af \u2113R\u00af qi \u03bb kT\u03bbR\u00af jp\u00af \u2113 \u00af \u03bb \u00af T\u00af \u03bb = 1 4F p\u00af qgi\u00af jgk\u00af \u2113\u2207iR\u00af qk\u2207jR\u00af p\u2113+ 1 2F p\u00af qgi\u00af jgk\u00af \u2113\u2207iR\u00af qkR\u00af jp\u00af \u2113 \u00af \u03bbT\u00af \u03bb +1 2F p\u00af qgi\u00af jgk\u00af \u2113R\u00af qi \u03bb kT\u03bb\u2207\u00af jR\u00af \u2113p + F p\u00af qgi\u00af jgk\u00af \u2113R\u00af qi \u03bb kT\u03bbR\u00af jp\u00af \u2113 \u00af \u03bb \u00af T\u00af \u03bb. (6.10) 40 \fHence \u2206F|\u2207T|2 = 2Re\u27e8\u2206F\u2207T, \u2207T\u27e9+ |\u2207\u2207T|2 F gg + 1 4|\u2207Ric\u03c9|2 F gg +gi\u00af jgk\u00af \u2113\u2207iTkF p\u00af qR\u00af qp\u00af j \u00af \u03bb\u2207\u00af \u03bb \u00af T\u00af \u2113+ gi\u00af jgk\u00af \u2113\u2207iTkF p\u00af qR\u00af qp\u00af \u2113 \u00af \u03bb\u2207\u00af jT\u00af \u03bb +1 2F p\u00af qgi\u00af jgk\u00af \u2113\u2207iR\u00af qkR\u00af jp\u00af \u2113 \u00af \u03bbT\u00af \u03bb + 1 2F p\u00af qgi\u00af jgk\u00af \u2113R\u00af qi \u03bb kT\u03bb\u2207\u00af jR\u00af \u2113p +F p\u00af qgi\u00af jgk\u00af \u2113R\u00af qi \u03bb kT\u03bbR\u00af jp\u00af \u2113 \u00af \u03bb \u00af T\u00af \u03bb. (6.11) Therefore, by (6.7), (6.9) and (6.11), \u2202t|\u2207T|2 = 1 2\u2225\u2126\u2225 \u001a \u2206F|\u2207T|2 \u2212|\u2207\u2207T|2 F gg \u22121 4|\u2207Ric\u03c9|2 F gg \u2212\u03b1\u2032 2 Re{gi\u00af jgk\u00af \u2113\u03c3p\u00af q,r\u00af s 2 \u2207iR\u00af sr\u2207pR\u00af qk\u2207\u00af j \u00af T\u00af \u2113} + \u2207\u2207T \u2217\u2207T \u2217E +DE \u2217DE \u2217E + DE \u2217E + E \u001b . (6.12) 6.2 The evolution of derivatives of curvature 6.2.1 Derivative of Ricci curvature We will compute \u2202t\u2207iR\u00af kj = \u2207i\u2202tR\u00af kj \u2212\u2202t\u0393\u03bb ijR\u00af k\u03bb. 
(6.13) Using (5.13) and (6.2), we obtain \u2202t\u2207iR\u00af kj = 1 2\u2225\u2126\u2225 \u001a \u2207i(F p\u00af q\u2207p\u2207\u00af qR\u00af kj) \u2212\u03b1\u2032 2 \u2207i(\u03c3p\u00af q,r\u00af s 2 \u2207\u00af kR\u00af sr\u2207jR\u00af qp) +(2gp\u00af q + 2\u03b1\u2032\u2225\u2126\u22253\u02dc \u03c1p\u00af q) \u2217\u2207\u2207T \u2217\u2207T + DDE \u2217E +DE \u2217DE \u2217E + DE \u2217E + E \u001b . (6.14) Here, we used that \u2207\u00af \u2207\u00af T = \u00af \u2207Ric\u03c9 + Rm \u2217\u00af T. Compute \u2207i(F p\u00af q\u2207p\u2207\u00af qR\u00af kj) = F p\u00af q\u2207i\u2207p\u2207\u00af qR\u00af kj + \u03b1\u2032\u2207i(\u2225\u2126\u22253\u02dc \u03c1p\u00af q)\u2207p\u2207\u00af qR\u00af kj \u2212\u03b1\u2032 2 \u2207i(\u03c3p\u00af q 2 )\u2207p\u2207\u00af qR\u00af kj = F p\u00af q\u2207p\u2207i\u2207\u00af qR\u00af kj + F p\u00af qT \u03bb pi\u2207\u03bb\u2207\u00af qR\u00af kj +\u03b1\u2032\u2207i(\u2225\u2126\u22253\u02dc \u03c1p\u00af q)\u2207p\u2207\u00af qR\u00af kj \u2212\u03b1\u2032 2 \u03c3p\u00af q,r\u00af s 2 \u2207iR\u00af sr\u2207p\u2207\u00af qR\u00af kj = F p\u00af q\u2207p\u2207\u00af q\u2207iR\u00af kj + F p\u00af q\u2207p(R\u00af qi\u00af k \u00af \u03bbR\u00af \u03bbj \u2212R\u00af qi \u03bb jR\u00af k\u03bb) +F p\u00af qT \u03bb pi\u2207\u03bb\u2207\u00af qR\u00af kj + \u03b1\u2032\u2207i(\u2225\u2126\u22253\u02dc \u03c1p\u00af q)\u2207p\u2207\u00af qR\u00af kj \u2212\u03b1\u2032 2 \u03c3p\u00af q,r\u00af s 2 \u2207iR\u00af sr\u2207p\u2207\u00af qR\u00af kj. (6.15) 41 \fHence, using that \u2207i\u03c3p\u00af q,r\u00af s 2 = 0 (5.9), we obtain \u2202t\u2207iR\u00af kj = 1 2\u2225\u2126\u2225 \u001a \u2206F\u2207iR\u00af kj \u2212\u03b1\u2032 2 \u03c3p\u00af q,r\u00af s 2 \u2207i\u2207\u00af kR\u00af sr\u2207jR\u00af qp \u2212\u03b1\u2032 2 \u03c3p\u00af q,r\u00af s 2 \u2207\u00af kR\u00af sr\u2207i\u2207jR\u00af qp \u2212\u03b1\u2032 2 \u03c3p\u00af q,r\u00af s 2 \u2207iR\u00af sr\u2207p\u2207\u00af qR\u00af kj +(2gp\u00af q + 2\u03b1\u2032\u2225\u2126\u22253\u02dc \u03c1p\u00af q) \u2217\u2207\u2207T \u2217\u2207T +DDE \u2217E + DE \u2217DE \u2217E + DE \u2217E + E \u001b . (6.16) 6.2.2 Norm of derivative of Ricci curvature We will compute \u2202t|\u2207Ric\u03c9|2 = \u2202t{gi\u00af agb\u00af kgj\u00af c\u2207iR\u00af kj\u2207aR\u00af bc}. (6.17) As in (4.26), we have \u2202t|\u2207Ric\u03c9|2 = 2Re\u27e8\u2202t\u2207Ric\u03c9, \u2207Ric\u03c9\u27e9 +3|\u2207Ric\u03c9|2 1 2\u2225\u2126\u2225 \u0012R 2 + \u03b1\u2032 2 \u2225\u2126\u22253\u02dc \u03c1p\u00af qR\u00af qp \u2212\u03b1\u2032 4 \u03c32(iRic\u03c9) \u2212|T|2 \u2212\u2225\u2126\u22252 \u03bd \u0013 . Next, compute \u2206F|\u2207Ric\u03c9|2 = F p\u00af qgi\u00af jgk\u00af \u2113gm\u00af n\u2207p\u2207\u00af q\u2207iR\u00af nk\u2207jR \u00af m\u2113+ gi\u00af jgk\u00af \u2113gm\u00af n\u2207iR\u00af nkF q\u00af p\u2207\u00af p\u2207q\u2207jR \u00af m\u2113 +|\u2207\u2207Ric\u03c9|2 F ggg + |\u2207\u2207Ric\u03c9|2 F ggg = 2Re\u27e8\u2206F\u2207Ric\u03c9, \u2207Ric\u03c9\u27e9+ |\u2207\u2207Ric\u03c9|2 F ggg + |\u2207\u2207Ric\u03c9|2 F ggg +F p\u00af qgi\u00af jgk\u00af \u2113gm\u00af n\u2207iR\u00af nkR\u00af qp\u00af j \u00af \u03bb\u2207\u00af \u03bbR\u00af \u2113m + F p\u00af qgi\u00af jgk\u00af \u2113gm\u00af n\u2207iR\u00af nkR\u00af qp\u00af \u2113 \u00af \u03bb\u2207\u00af jR\u00af \u03bbm \u2212F p\u00af qgi\u00af jgk\u00af \u2113gm\u00af n\u2207iR\u00af nkR\u00af qp \u03bb m\u2207\u00af jR\u00af \u2113\u03bb. 
(6.18) Commuting covariant derivatives |\u2207\u2207Ric\u03c9|2 F ggg = |\u2207\u2207Ric\u03c9|2 F ggg + \u2207\u2207E \u2217E + E. (6.19) Hence \u2202t|\u2207Ric\u03c9|2 = 1 2\u2225\u2126\u2225 \u001a \u2206F|\u2207Ric\u03c9|2 \u2212|\u2207\u2207Ric\u03c9|2 F ggg \u2212|\u2207\u2207Ric\u03c9|2 F ggg \u001b + 1 2\u2225\u2126\u22252Re \u001a \u2212\u03b1\u2032 2 gi\u00af agb\u00af kgj\u00af c\u03c3p\u00af q,r\u00af s 2 \u2207i\u2207\u00af kR\u00af sr\u2207jR\u00af qp\u2207aR\u00af bc \u2212\u03b1\u2032 2 gi\u00af agb\u00af kgj\u00af c\u03c3p\u00af q,r\u00af s 2 \u2207\u00af kR\u00af sr\u2207i\u2207jR\u00af qp\u2207aR\u00af bc \u2212\u03b1\u2032 2 gi\u00af agb\u00af kgj\u00af c\u03c3p\u00af q,r\u00af s 2 \u2207iR\u00af sr\u2207p\u2207\u00af qR\u00af kj\u2207aR\u00af bc +(2gp\u00af q + 2\u03b1\u2032\u2225\u2126\u22253\u02dc \u03c1p\u00af q) \u2217\u2207\u2207T \u2217\u2207T \u2217\u2207Ric\u03c9 \u001b +DDE \u2217DE \u2217E + DDE \u2217E + DE \u2217DE \u2217DE \u2217E +DE \u2217DE \u2217E + DE \u2217E. (6.20) 42 \fLemma 2 Suppose |\u03b1\u2032Ric\u03c9| \u22641 4 and \u22121 8gp\u00af q < \u03b1\u2032\u2225\u2126\u22253\u02dc \u03c1p\u00af q < 1 8gp\u00af q. Then \u2202t|\u2207Ric\u03c9|2 \u2264 1 2\u2225\u2126\u2225 \u001a \u2206F|\u2207Ric\u03c9|2 \u22121 2|\u2207\u2207Ric\u03c9|2 \u22121 2|\u2207\u2207Ric\u03c9|2 \u001b + 1 2\u2225\u2126\u2225 \u001a 9\u03b1\u20322|\u2207Ric\u03c9|4 + 5|\u2207\u2207T||\u2207T||\u2207Ric\u03c9| +DDE \u2217DE \u2217E + (DE + E)3 \u001b . (6.21) Proof: By assumption, we may use |\u2207\u2207Ric\u03c9|2 F ggg + |\u2207\u2207Ric\u03c9|2 F ggg \u22653 4(|\u2207\u2207Ric\u03c9|2 + |\u2207\u2207Ric\u03c9|2). (6.22) In coordinates where the evolving metric g is the identity, we have \u03c3p\u00af q,r\u00af s 2 = \u00b11. Using 2ab \u2264a2 + b2, estimate (6.21) follows from (6.20). 6.3 Higher order estimates Theorem 6 There exists 0 < \u03b41, \u03b42 with the following property. Suppose \u22121 8gp\u00af q < \u03b1\u2032\u2225\u2126\u22253\u02dc \u03c1p\u00af q < 1 8gp\u00af q, \u2225\u2126\u2225\u22641, (6.23) |\u03b1\u2032Ric\u03c9| \u2264\u03b41, (6.24) and |T|2 \u2264\u03b42, (6.25) along the \ufb02ow. Then |\u2207Ric\u03c9| \u2264C, |\u2207T| \u2264C, (6.26) where C depends only on \u03b41, \u03b42, \u03b1\u2032, \u03c1, \u02dc \u00b5, and (X, \u02c6 \u03c9). Proof: Let us assume that \u03b41 < 1 4. This will allow us to use the estimate 3 4g\u00af kj \u2264F j\u00af k \u22642g\u00af kj. (6.27) This follows from the de\ufb01nition of F j\u00af k, see (4.23). From (5.20), with assumptions (6.24) and (6.27) we may estimate \u2202t|Ric\u03c9|2 \u2264 1 2\u2225\u2126\u2225 \u001a \u2206F|Ric\u03c9|2 \u22121 2|\u2207Ric\u03c9|2 \u001b + 1 2\u2225\u2126\u2225Re \u001a DE \u2217E + 5 \u2207T \u2217\u2207T \u2217Ric + E \u001b . (6.28) 43 \fHere we used \u2212\u03b1\u2032Re{gj\u00af \u2113gm\u00af k\u03c3p\u00af q,r\u00af s 2 R\u00af \u2113m\u2207\u00af kR\u00af sr\u2207jR\u00af qp} \u2264\u03b41|\u2207Ric\u03c9|2, (6.29) to absorb this term into the \u2212|\u2207Ric\u03c9|2 term. We will compute the evolution of G = (|\u03b1\u2032Ric\u03c9|2 + \u03c41)|\u2207Ric\u03c9|2 + (|T|2 + \u03c42)|\u2207T|2, (6.30) where \u03c41 and \u03c42 are constants to be determined. First, we compute \u2202t{(|\u03b1\u2032Ric\u03c9|2 + \u03c41)|\u2207Ric\u03c9|2} = \u03b1\u20322\u2202t|Ric\u03c9|2|\u2207Ric\u03c9|2 + (|\u03b1\u2032Ric\u03c9|2 + \u03c41)\u2202t|\u2207Ric\u03c9|2. 
(6.31) By (6.21) and (6.28) \u2202t{(|\u03b1\u2032Ric\u03c9|2 + \u03c41)|\u2207Ric\u03c9|2} \u2264 1 2\u2225\u2126\u2225 \u001a \u2206F|\u03b1\u2032Ric\u03c9|2|\u2207Ric\u03c9|2 \u2212\u03b1\u20322 2 |\u2207Ric\u03c9|4 \u001b + 1 2\u2225\u2126\u2225Re \u001a DE \u2217E + 5 \u2207T \u2217\u2207T \u2217Ric + E \u001b \u03b1\u20322|\u2207Ric\u03c9|2 +(|\u03b1\u2032Ric\u03c9|2 + \u03c41) 2\u2225\u2126\u2225 \u001a \u2206F|\u2207Ric\u03c9|2 \u22121 2|\u2207\u2207Ric\u03c9|2 \u22121 2|\u2207\u2207Ric\u03c9|2 \u001b +(|\u03b1\u2032Ric\u03c9|2 + \u03c41) 2\u2225\u2126\u2225 \u001a 9\u03b1\u20322|\u2207Ric\u03c9|4 + 5|\u2207\u2207T||\u2207T||\u2207Ric\u03c9| +\u2207\u2207E \u2217DE \u2217E + \u2207\u2207E \u2217DE \u2217E + (DE + E)3 \u001b . (6.32) Hence \u2202t{(|\u03b1\u2032Ric\u03c9|2 + \u03c41)|\u2207Ric\u03c9|2} \u2264 1 2\u2225\u2126\u2225 \u001a \u2206F{(|\u03b1\u2032Ric\u03c9|2 + \u03c41)|\u2207Ric\u03c9|2} \u2212 \u00121 2 \u22129|\u03b1\u2032Ric\u03c9|2 \u22129\u03c41 \u0013 \u03b1\u20322|\u2207Ric\u03c9|4 \u22121 2|\u2207\u2207Ric\u03c9|2(|\u03b1\u2032Ric\u03c9|2 + \u03c41) \u22121 2|\u2207\u2207Ric\u03c9|2(|\u03b1\u2032Ric\u03c9|2 + \u03c41) \u22122Re {F i\u00af j\u2207i|\u03b1\u2032Ric\u03c9|2\u2207\u00af j|\u2207Ric\u03c9|2} + 6(\u03b42 1 + \u03c4)|\u2207\u2207T||\u2207T||\u2207Ric\u03c9| \u001b +\u03b1\u20322|\u2207Ric\u03c9|2 2\u2225\u2126\u2225 Re \u001a 5 \u2207T \u2217\u2207T \u2217Ric + DE \u2217E + E \u001b +(|\u03b1\u2032Ric\u03c9|2 + \u03c41) 2\u2225\u2126\u2225 \u001a \u2207\u2207E \u2217DE \u2217E + \u2207\u2207E \u2217DE \u2217E + (DE + E)3 \u001b . (6.33) We estimate \u22122Re {F i\u00af j\u2207i|\u03b1\u2032Ric\u03c9|2\u2207\u00af j|\u2207Ric\u03c9|2} \u2264 8|\u03b1\u2032|\u03b41|\u2207Ric\u03c9|2(|\u2207\u2207Ric\u03c9| + |\u2207\u2207Ric\u03c9| + E) \u2264 \u03b1\u20322 24 |\u2207Ric\u03c9|4 + 28\u03b42 1(|\u2207\u2207Ric\u03c9|2 + |\u2207\u2207Ric\u03c9|2) + C|\u2207Ric\u03c9|2, (6.34) 44 \f6(\u03b42 1 + \u03c41)|\u2207\u2207T||\u2207T||\u2207Ric\u03c9| \u2264 1 2(\u03b42 1 + \u03c41)|\u2207\u2207T|2 + 2132(\u03b42 1 + \u03c41)|\u2207T|2|\u2207Ric\u03c9|2 \u2264 1 2(\u03b42 1 + \u03c41)|\u2207\u2207T|2 + \u03b1\u20322 24 |\u2207Ric\u03c9|4 + 2434\u03b1\u2032\u22122(\u03b42 1 + \u03c41)2|\u2207T|4, (6.35) \u03b1\u20322|\u2207Ric\u03c9|2 2\u2225\u2126\u2225 Re \u001a 5 \u2207T \u2217\u2207T \u2217Ric + \u2207E \u2217E + E \u001b \u2264 1 2\u2225\u2126\u2225 \u001a\u03b1\u20322 24 |\u2207Ric\u03c9|4 + 2252\u03b42 1|\u2207T|4 + C|\u2207Ric\u03c9|3 + C|\u2207T|3 + C \u001b . (6.36) (|\u03b1\u2032Ric\u03c9|2 + \u03c41) 2\u2225\u2126\u2225 \u001a \u2207\u2207E \u2217DE \u2217E + \u2207\u2207E \u2217DE \u2217E + (DE + E)3 \u001b \u2264 1 2\u2225\u2126\u2225 \u001a1 4|\u2207\u2207Ric\u03c9|2(|\u03b1\u2032Ric\u03c9|2 + \u03c41) + 1 4|\u2207\u2207Ric\u03c9|2(|\u03b1\u2032Ric\u03c9|2 + \u03c41) +1 2(\u03b42 1 + \u03c41)|\u2207\u2207T|2 + C|\u2207Ric\u03c9|3 + C|\u2207T|3 + C \u001b . 
(6.37) Therefore \u2202t{(|\u03b1\u2032Ric\u03c9|2 + \u03c41)|\u2207Ric\u03c9|2} (6.38) \u2264 1 2\u2225\u2126\u2225 \u001a \u2206F{(|\u03b1\u2032Ric\u03c9|2 + \u03c41)|\u2207Ric\u03c9|2} \u2212 \u00121 4 \u22129\u03b42 1 \u22129\u03c41 \u0013 \u03b1\u20322|\u2207Ric\u03c9|4 \u2212(|\u2207\u2207Ric\u03c9|2 + |\u2207\u2207Ric\u03c9|2)(\u03c41 4 \u221228\u03b42 1) + (\u03b42 1 + \u03c41)|\u2207\u2207T|2 + \u0012 2434\u03b1\u2032\u22122(\u03b42 1 + \u03c41)2 + 2252\u03b42 1 \u0013 |\u2207T|4 + C\u03b1\u2032,\u03c4,\u03b4|\u2207Ric\u03c9|3 + C\u03b1\u2032,\u03c4,\u03b4|\u2207T|3 + C\u03b1\u2032,\u03c4,\u03b4 \u001b . Next, we compute \u2202t{(|T|2 + \u03c42)|\u2207T|2} = \u2202t|T|2|\u2207T|2 + (|T|2 + \u03c42)\u2202t|\u2207T|2. (6.39) By (4.29), we have \u2202t|T|2 \u2264 1 2\u2225\u2126\u2225 \u001a \u2206F|T|2 \u2212|\u2207T|2 F g + C|\u2207T| + C \u001b . (6.40) By (6.12), we have \u2202t|\u2207T|2 \u2264 1 2\u2225\u2126\u2225 \u001a \u2206F|\u2207T|2 \u2212|\u2207\u2207T|2 F gg + |\u03b1\u2032||\u2207T||\u2207Ric\u03c9|2 +C|\u2207\u2207T||\u2207T| + C|\u2207T|2 + C|\u2207Ric\u03c9|2 + C \u001b . (6.41) 45 \fBy our assumption |\u03b1\u2032Ric\u03c9| \u22641 4, we have |\u2207\u2207T|2 F gg \u22651 2|\u2207\u2207T|2 and |\u2207T|2 F g \u22651 2|\u2207T|2. Therefore \u2202t{(|T|2 + \u03c42)|\u2207T|2} \u2264 1 2\u2225\u2126\u2225 \u001a \u2206F{(|T|2 + \u03c42)|\u2207T|2} \u22122Re{F i\u00af j\u2207i|T|2\u2207\u00af j|\u2207T|2} \u22121 4|\u2207T|4 \u2212(|T|2 + \u03c42)1 4|\u2207\u2207T|2 + C|\u2207Ric\u03c9|3 + C|\u2207T|3 + C \u001b . (6.42) Here we used Young\u2019s inequality |\u2207T||\u2207Ric\u03c9|2 \u22641 3|\u2207T|3 + 2 3|\u2207Ric\u03c9|3. In the following, we will use that \u2207T can be expressed as Ricci curvature. We estimate \u22122Re{F i\u00af j\u2207i|T|2\u2207\u00af j|\u2207T|2} \u2264 4|T||\u2207T|(|\u2207T| + |\u2207T|)(|\u2207\u2207T| + |\u2207\u2207T|) \u2264 4|T||\u2207T|2|\u2207\u2207T| + 4|T||\u2207T|2|\u2207Ric\u03c9| + 4|T||\u2207T||Ric\u03c9||\u2207\u2207T| +4|T||\u2207T||Ric\u03c9||\u2207Ric\u03c9| + 4|T||\u2207T|(|\u2207T| + |\u2207T|)|R \u2217T|. (6.43) We may estimate the \ufb01rst term in the following way 4|T||\u2207T|2|\u2207\u2207T| \u22644|\u2207T|2(\u03b42)1/2|\u2207\u2207T| \u22641 23|\u2207T|4 + 25\u03b42|\u2207\u2207T|2. (6.44) The other terms may be estimated using Young\u2019s inequality, and we can derive \u22122Re{F i\u00af j\u2207i|T|2\u2207\u00af j|\u2207T|2} \u22641 23|\u2207T|4 + 26\u03b42|\u2207\u2207T|2 + C|\u2207T|3 + C|\u2207Ric\u03c9|3 + C. Hence \u2202t{(|T|2 + \u03c42)|\u2207T|2} \u2264 1 2\u2225\u2126\u2225 \u001a \u2206F{(|T|2 + \u03c42)|\u2207T|2} \u22121 8|\u2207T|4 \u2212(\u03c42 4 \u221226\u03b42)|\u2207\u2207T|2 + C|\u2207Ric\u03c9|3 + C|\u2207T|3 + C \u001b .(6.45) Combining (6.38) and (6.45) gives \u2202tG \u2264 1 2\u2225\u2126\u2225 \u001a \u2206FG \u2212 \u00121 4 \u22129\u03b42 1 \u22129\u03c41 \u0013 \u03b1\u20322|\u2207Ric\u03c9|4 \u2212 \u0012\u03c41 4 \u221228\u03b42 1 \u0013 (|\u2207\u2207Ric\u03c9|2 + |\u2207\u2207Ric\u03c9|2) \u2212 \u0012\u03c42 4 \u221226\u03b42 \u2212\u03b42 1 \u2212\u03c41 \u0013 |\u2207\u2207T|2 \u2212 \u00121 8 \u22122434\u03b1\u2032\u22122(\u03b42 1 + \u03c41)2 \u22122252\u03b42 1 \u0013 |\u2207T|4 +C\u03b1\u2032,\u03c4,\u03b4|\u2207Ric\u03c9|3 + C\u03b1\u2032,\u03c4,\u03b4|\u2207T|3 + C\u03b1\u2032,\u03c4,\u03b4 \u001b . (6.46) We may choose \u03c41 = min{2\u22127, 2\u221253\u22122|\u03b1\u2032|} and \u03c42 = 1. 
Then for any \u03b41, \u03b42 > 0 such that \u03b41, \u03b42 \u22642\u22126\u03c41 \u226a\u03c42 = 1, (6.47) 46 \fwe have the estimate \u2202tG \u2264 1 2\u2225\u2126\u2225 \u001a \u2206FG \u22121 8\u03b1\u20322|\u2207Ric\u03c9|4 \u22121 16|\u2207T|4 + C\u03b1\u2032,\u03c4,\u03b4 \u001b . (6.48) Now, suppose G attains its maximum at a point (z, t) where t > 0. From the above estimate, at this point we have 1 8\u03b1\u20322|\u2207Ric\u03c9|4 + 1 16|\u2207T|4 \u2264C\u03b1\u2032,\u03c4,\u03b4. (6.49) It follows that G is uniformly bounded along the \ufb02ow, and hence |\u2207Ric\u03c9| \u2264C, |\u2207T| \u2264C, (6.50) along the \ufb02ow. Corollary 1 There exists 0 < \u03b41, \u03b42 with the following property. Suppose \u22121 8gp\u00af q < \u03b1\u2032\u2225\u2126\u22253\u02dc \u03c1p\u00af q < 1 8gp\u00af q, (6.51) |\u03b1\u2032Ric\u03c9| \u2264\u03b41, (6.52) and |T|2 \u2264\u03b42, (6.53) along the \ufb02ow. If there exists \u03b40 > 0 such that 0 < \u03b40 \u2264\u2225\u2126\u2225\u22641 along the \ufb02ow, then |DkRic\u03c9| \u2264C, |DkT| \u2264C, (6.54) where C depends only on \u03b40, \u03b41, \u03b42, \u03b1\u2032, \u03c1, \u02dc \u00b5, and (X, \u02c6 \u03c9). Proof: Since \u2225\u2126\u2225= e\u2212u, we are assuming that |u| stays bounded, and that the metrics \u02c6 g and g = eu\u02c6 g are equivalent. We are also assuming that e\u2212u|Du|2 \u02c6 g \u226a1 and e\u2212u|\u03b1\u2032u\u00af kj|\u02c6 g \u226a1. By Theorem 6, there exists \u03b41 and \u03b42 such that |\u2207\u2207u| and |\u2207\u00af \u2207\u2207u| stay bounded along the \ufb02ow. We will estimate partial derivatives in coordinate charts. Since \u2202i \u00af \u2202j\u2202ku = \u2207i \u00af \u2207j\u2207ku + \u0393\u03bb iku\u00af j\u03bb, \u2202i\u2202ju = \u2207i\u2207ju + \u0393\u03bb iju\u03bb, (6.55) and the Christo\ufb00el symbol \u0393\u03bb ik = e\u2212u\u02c6 g\u03bb\u00af \u03b3\u2202i(eu\u02c6 g\u00af \u03b3k) = ui\u03b4\u03bb k + \u02c6 \u0393\u03bb ik (6.56) stays bounded, we have that |u|, |\u2202u|, |\u2202\u2202u|, |\u2202\u00af \u2202u|, |\u2202\u00af \u2202\u2202u| \u2264C. (6.57) 47 \fThe scalar equation is \u2202tu = \u2206\u02c6 \u03c9u + \u03b1\u2032e\u22122u\u02dc \u03c1p\u00af qu\u00af qp + \u03b1\u2032e\u2212u \u02c6 \u03c32(i\u2202\u00af \u2202u) + |Du|2 \u02c6 \u03c9 + e\u2212u\u03bd. (6.58) where \u03bd(x, u, Du). Di\ufb00erentiating once gives \u2202tDu = \u02c6 F p\u00af qDu\u00af qp + \u03b1\u2032D(e\u22122u\u02dc \u03c1p\u00af q)u\u00af qp + D|Du|2 \u02c6 g \u2212\u03b1\u2032e\u2212u \u02c6 \u03c32(i\u2202\u00af \u2202u)Du + D(e\u2212u\u03bd), (6.59) where \u02c6 F p\u00af q = \u02c6 gp\u00af q + \u03b1\u2032e\u22122u\u02dc \u03c1p\u00af q + \u03b1\u2032e\u2212u \u02c6 \u03c32 p\u00af q. (6.60) We note that \u02c6 F j\u00af k only di\ufb00ers from F j\u00af k (4.20) by a factor of eu. From our assumptions on |\u03b1\u2032Ric\u03c9| = e\u2212u|\u03b1\u2032\u2202\u00af \u2202u|\u02c6 g and \u2225\u2126\u2225= e\u2212u, we have uniform ellipticity of \u02c6 F j\u00af k. Di\ufb00erentiating twice yields \u2202tu\u00af kj = \u02c6 F p\u00af q\u2202p\u2202\u00af qu\u00af kj + \u03a8(x, u, \u2202u, \u2202\u2202u, \u2202\u00af \u2202u, \u2202\u00af \u2202\u2202u), (6.61) where \u03a8 is uniformly bounded along the \ufb02ow. By the Krylov-Safonov theorem [27], we have that u\u00af kj is bounded in the C\u03b1/2,\u03b1 norm. The function u and the spacial gradient Du are also bounded in the C\u03b1/2,\u03b1 norm since the right-hand sides of (6.58) and (6.59) are bounded. 
We may now apply parabolic Schauder theory (for example, in [26]) to the linearized equation (6.59). Standard theory and a bootstrap argument give higher order estimates of u, and hence we obtain estimates on derivatives of the curvature and torsion of g = eu\u02c6 g. 7 Long time existence Proposition 4 Let C1, C2, C5 be the named constants as before, which only depend on (X, \u02c6 g), \u02dc \u00b5, \u03c1, and \u03b1\u2032. There exists M0 \u226b1 such that for all M \u2265M0, the following statement holds. If the \ufb02ow exists on [0, t0), and initially starts with u0 = log M, then along the \ufb02ow 1 C1M \u2264e\u2212u \u2264C2 M , |T|2 \u2264C3 M , |\u03b1\u2032Ric\u03c9| \u2264 C5 M1/2, (7.1) and |Dku|2 \u02c6 g \u2264\u02dc Ck, 1 2\u02c6 gj\u00af k \u2264\u02c6 F j\u00af k \u22642\u02c6 gj\u00af k, (7.2) where \u02dc Ck only depends on (X, \u02c6 g), \u00b5, \u03c1, \u03b1\u2032, M. Proof: Let \u03b41 and \u03b42 be the constants from Corollary 1, and choose a smaller \u03b41 if necessary to ensure \u03b41 < 10\u22126. Recall that from Theorem 3, 1 C1M \u2264\u2225\u2126\u2225= e\u2212u \u2264C2 M (7.3) 48 \falong the \ufb02ow for M large enough. Consider the set I = {t \u2208[0, t0) such that |\u03b1\u2032Ric\u03c9| \u2264\u03b41, |T|2 \u2264\u03b42 holds on [0, t]}. (7.4) Since at t = 0 we have |\u03b1\u2032Ric\u03c9| = |T|2 = 0, we know that I is non-empty. By de\ufb01nition, I is relatively closed. We now show that I is open. Suppose \u02c6 t \u2208I. By de\ufb01nition of I, the hypothesis of Theorem 4 is satis\ufb01ed, hence |T|2 \u2264C3 M < \u03b42 at \u02c6 t as long as M is large enough. It follows that the hypothesis of Theorem 5 is satis\ufb01ed as long as M is large enough, hence |\u03b1\u2032Ric\u03c9| \u2264 C5 M1/2 < \u03b41 at \u02c6 t. We can conclude the existence of \u03b5 > 0 such that [\u02c6 t + \u03b5) \u2282I, and hence I is open. It follows that I = [0, t0). We know that \u2212C\u02c6 gp\u00af q \u2264\u02dc \u03c1p\u00af q \u2264C\u02c6 gp\u00af q since \u02dc \u03c1 can be bounded using the background metric. For M large enough, we can conclude \u22121 8e\u2212u\u02c6 gp\u00af q < \u03b1\u2032e\u22123u\u02dc \u03c1p\u00af q < 1 8e\u2212u\u02c6 gp\u00af q, (7.5) and we can apply Corollary 1 to obtain higher order estimates of u. Uniform ellipticity follows from the de\ufb01nition of \u02c6 F j\u00af k (6.60) and the estimates on |\u03b1\u2032Ric\u03c9| = e\u2212u|\u03b1\u2032\u2202\u00af \u2202u|\u02c6 g and \u2225\u2126\u2225. Q.E.D. Theorem 7 There exists M0 \u226b1 such that for all M \u2265M0, if the \ufb02ow initially starts with u0 = log M, then the \ufb02ow exists on [0, \u221e). Proof: By short-time existence [31], we know the \ufb02ow exists for some maximal time interval [0, T). If T < \u221e, we may apply the previous proposition to extend the \ufb02ow to [0, T + \u01eb), which is a contradiction. Q.E.D. 8 Convergence of the \ufb02ow We may apply Theorem 7 to construct solutions to the Fu-Yau equation. Theorem 8 There exists M0 \u226b1 such that for all M \u2265M0, if the \ufb02ow initially starts with u0 = log M, then the \ufb02ow exists on [0, \u221e) and converges smoothly to a function u\u221e, where u\u221esolves 0 = i\u2202\u00af \u2202(eu\u221e\u02c6 \u03c9 \u2212\u03b1\u2032e\u2212u\u221e\u03c1) + \u03b1\u2032 2 i\u2202\u00af \u2202u\u221e\u2227i\u2202\u00af \u2202u\u221e+ \u00b5, Z X eu\u221e= M. 
(8.1) Proof: Since we will work with the scalar equation, all norms in this section will be with respect to the background metric \u02c6 \u03c9. Let v = \u2202teu. Recall that Z X v = 0, (8.2) 49 \falong the \ufb02ow. Di\ufb00erentiating equation (2.22) with respect to time gives 2\u2202tv \u02c6 \u03c92 2! = i\u2202\u00af \u2202(v\u02c6 \u03c9 + \u03b1\u2032e\u22122uv\u03c1) + \u03b1\u2032i\u2202\u00af \u2202u \u2227i\u2202\u00af \u2202(e\u2212uv). (8.3) Consider the functional J(t) = Z X v2 \u02c6 \u03c92 2! . (8.4) Compute dJ dt = Z X v i\u2202\u00af \u2202(v\u02c6 \u03c9 + \u03b1\u2032e\u22122uv\u03c1) + \u03b1\u2032 Z X v i\u2202\u00af \u2202u \u2227i\u2202\u00af \u2202(e\u2212uv) (8.5) = \u2212 Z X i\u2202v \u2227\u00af \u2202v \u2227\u02c6 \u03c9 \u2212\u03b1\u2032 Z X i\u2202v \u2227\u00af \u2202(e\u22122uv\u03c1) \u2212\u03b1\u2032 Z X i\u2202\u00af \u2202u \u2227i\u2202v \u2227i\u00af \u2202(e\u2212uv) = \u2212 Z X |\u2207v|2 \u2212\u03b1\u2032 Z X e\u22122u i\u2202v \u2227\u00af \u2202v \u2227\u03c1 + 2\u03b1\u2032 Z X e\u22122uv i\u2202v \u2227\u00af \u2202u \u2227\u03c1 \u2212\u03b1\u2032 Z X e\u22122uv i\u2202v \u2227\u00af \u2202\u03c1 \u2212\u03b1\u2032 Z X e\u2212u i\u2202\u00af \u2202u \u2227i\u2202v \u2227i\u00af \u2202v + \u03b1\u2032 Z X e\u2212uv i\u2202\u00af \u2202u \u2227i\u2202v \u2227i\u00af \u2202u. We may estimate dJ dt \u2264 \u2212 Z X |\u2207v|2 + \u03b1\u2032\u2225\u03c1\u2225 Z X e\u22122u|\u2207v|2 + 2\u03b1\u2032\u2225\u03c1\u2225\u2225\u2207u\u2225 Z X e\u22122u|v| |\u2207v| (8.6) +\u03b1\u2032\u2225\u2202\u03c1\u2225 Z X e\u22122u|v| |\u2207v| + \u2225\u03b1\u2032e\u2212ui\u2202\u00af \u2202u\u2225 Z X |\u2207v|2 +\u2225\u2207u\u2225\u2225\u03b1\u2032e\u2212ui\u2202\u00af \u2202u\u2225 Z X |v| |\u2207v|. By Proposition 4, we know that on [0, \u221e) we have the estimates e\u2212u \u2264C2 M \u226a1, |\u2207u|2 \u02c6 g \u2264C3C1, |\u03b1\u2032e\u2212uu\u00af kj|\u02c6 g \u2264 C5 M1/2 . (8.7) Hence for any \u03b5 > 0, we can choose M large enough such that dJ dt \u2264\u22121 2 Z X |\u2207v|2 + \u03b5 Z X |v| |\u2207v| \u2264\u2212 \u00121 2 \u2212\u03b5 2 \u0013 Z X |\u2207v|2 + \u03b5 2 Z X |v|2. (8.8) Since R X v = 0, we may use the Poincar\u00b4 e inequality to obtain, for \u03b5 > 0 small enough, dJ dt \u2264\u2212\u03b7 Z X v2 = \u2212\u03b7J, (8.9) with \u03b7 > 0. This implies that J(t) \u2264Ce\u2212\u03b7t, (8.10) that is, Z X v2 \u2264Ce\u2212\u03b7t. (8.11) 50 \fFrom this estimate, we see that for any sequence v(tj) converging to v\u221e, we have v\u221e= 0. We can now show convergence of the \ufb02ow. Following the argument given in Proposition 2.2 in [7], we have Z X |eu(x, s\u2032) \u2212eu(x, s)| \u2264 Z X Z s\u2032 s |\u2202teu(x, t)| = Z s\u2032 s Z X |v(x, t)| \u2264 Z s\u2032 s \u0012Z X v2 \u0013 1 2 dt \u2264 Z +\u221e s \u0012Z X v2 \u0013 1 2 dt \u2264 C Z +\u221e s e\u2212\u03b7 2 tdt (8.12) Recall that we normalized the background metric such that R X \u02c6 \u03c92 2 = 1. This estimate shows that, as t \u2192+\u221e, eu(x, t) are Cauchy in L1 norm. Thus eu(x, t) converges in the L1 norm to some function eu\u221e(x) as t \u2192\u221e. By our uniform estimates, eu\u221eis bounded in C\u221e, and a standard argument shows that eu converges in C\u221e. Indeed, if there exist a sequence of times such that \u2225e\u2212u(x,tj) \u2212 e\u2212u\u221e(x)\u2225Ck \u2265\u01eb, then by our estimates a subsequence converges in Ck to e\u2212u\u2032 \u221e. 
Then \u2225e\u2212u\u2032 \u221e(x) \u2212e\u2212u\u221e(x)\u2225L1 = 0 but \u2225e\u2212u\u2032 \u221e(x) \u2212e\u2212u\u221e(x)\u2225Ck \u2265\u01eb, a contradiction. It follows from (8.11) that eu\u221esatis\ufb01es the Fu-Yau equation (8.1)." + }, + { + "url": "http://arxiv.org/abs/1610.02739v2", + "title": "Anomaly flows", + "abstract": "The Anomaly flow is a flow which implements the Green-Schwarz anomaly\ncancellation mechanism originating from superstring theory, while preserving\nthe conformally balanced condition of Hermitian metrics. There are several\nversions of the flow, depending on whether the gauge field also varies, or is\nassumed known. A distinctive feature of Anomaly flows is that, in $m$\ndimensions, the flow of the Hermitian metric has to be inferred from the flow\nof its $(m-1)$-th power $\\omega^{m-1}$. We show how this can be done\nexplicitly, and we work out the corresponding flows for the torsion and the\ncurvature tensors. The results are applied to produce criteria for the\nlong-time existence of the flow, in the simplest case of zero slope parameter.", + "authors": "Duong H. Phong, Sebastien Picard, Xiangwen Zhang", + "published": "2016-10-09", + "updated": "2018-03-13", + "primary_cat": "math.DG", + "cats": [ + "math.DG", + "math.AP", + "math.CV" + ], + "main_content": "Introduction Starting with the uniformization theorem, canonical metrics such as Hermitian-Yang-Mills and K\u00a8 ahler-Einstein metrics have played a major role in complex geometry. However, theoretical physics suggests more notions of metrics which should qualify in some sense as canonical. Indeed, the classical canonical metrics are typically de\ufb01ned by a linear constraint in the curvature tensor. But in string theory, the key Green-Schwarz anomaly cancellation mechanism [15] for the consistency of superstring theory is an equation which involves the square of the curvature tensor. Furthermore, while the supersymmetry of the heterotic string compacti\ufb01ed to 4-dimensional Minkowski space-time required that the intermediate space carry a complex structure [2], it allowed the corresponding Chern unitary connection to have non-vanishing torsion [30]. The resulting condition is known as a Strominger system, and Calabi-Yau manifolds with their K\u00a8 ahler Ricci-\ufb02at metrics are only a special solution. What seems to emerge then is an as-yet unexplored area of non-K\u00a8 ahler geometry, where the K\u00a8 ahler condition is replaced by some speci\ufb01c constraint on the torsion, and the canonical metric condition is replaced by an equation on the torsion and possibly higher powers of the curvature. These equations are also novel from the point of view of the theory of partial di\ufb00erential equations, and it is an important problem to develop methods for their solutions. The goal of the present paper is to develop methods for the study of the following \ufb02ow of Hermitian metrics on a 3-dimensional complex manifold X, \u2202t(\u2225\u2126\u2225\u03c9\u03c92) = i\u2202\u00af \u2202\u03c9 \u2212\u03b1\u2032(TrRm \u2227Rm \u2212\u03a6(t)) \u03c9(0) = \u03c90. (1.1) 1Work supported in part by the National Science Foundation Grants DMS-12-66033 and DMS-1605968. Key words: Green-Schwarz anomaly cancellation, quadratic terms in the curvature tensor, torsion constraints, \ufb02ows, maximum principle. 
\fHere X is equipped with a nowhere vanishing (3, 0) holomorphic form \u2126, \u2225\u2126\u2225\u03c9 is the norm of \u2126with respect to the Hermitian metric \u03c9, de\ufb01ned by \u2225\u2126\u22252 \u03c9 = i\u2126\u2227\u00af \u2126\u03c9\u22123, (1.2) and the expression \u03a6(t) is a given closed (2, 2)-form in the characteristic class c2(X), evolving with time. The expression Rm is the curvature of the Chern unitary connection of \u03c9, viewed as a (1, 1)-form valued in the bundle of endomorphisms End(T 1,0(X)) of T 1,0(X). The initial Hermitian form \u03c90 is required to satisfy the following conformally balanced condition d(\u2225\u2126\u2225\u03c90\u03c92 0) = 0. (1.3) The motivation for the \ufb02ow (1.1) is as follows. In [30], building on the earlier work of Candelas, Horowitz, Strominger, and Witten [2], Strominger identi\ufb01ed the following system of equations for a Hermitian metric \u03c9 on X and a Hermitian metric H\u00af \u03b1\u03b2 on a holomorphic vector bundle E \u2192X, F 2,0 = F 0,2 = 0, F \u2227\u03c92 = 0 (1.4) i\u2202\u00af \u2202\u03c9 \u2212\u03b1\u2032Tr(Rm \u2227Rm \u2212F \u2227F) = 0 (1.5) d\u2020\u03c9 = i(\u00af \u2202\u2212\u2202) log \u2225\u2126\u2225\u03c9, (1.6) as conditions for the product of X with 4-dimensional space-time to be a supersymmetric vacuum con\ufb01guration for the heterotic string. The conditions on F in the \ufb01rst equation above just mean that F is the curvature of the Chern unitary connection of H\u00af \u03b1\u03b2, and that H\u00af \u03b1\u03b2 is Hermitian-Yang-Mills with respect to any metric conformal to \u03c9. It is a subsequent, but basic observation of Li and Yau [18] that the third condition on \u03c9 above, which is at \ufb01rst sight a torsion constraints condition, is equivalent to the condition that \u03c9 be conformally balanced d(\u2225\u2126\u2225\u03c9\u03c92) = 0. (1.7) In the special case where (X, \u03c9) is a compact K\u00a8 ahler 3-fold with c1(X) = 0, if we take E = T 1,0(X), H = \u03c9, then the anomaly condition is automatically satis\ufb01ed. The Hermitian-Yang-Mills condition reduces to the condition that \u03c9 be Ricci-\ufb02at, which can be implemented by Yau\u2019s theorem [35]. The norm \u2225\u2126\u2225\u03c9 is then constant, and the torsion constraints follow from the K\u00a8 ahler property of \u03c9. Thus Calabi-Yau 3-folds with their Ricci-\ufb02at metrics can be viewed as special solutions of the Strominger system, and they have played a major role ever since in both superstring theory and algebraic geometry [2]. From this point of view, it is natural to think of the pair (\u03c9, H) as a canonical metric for (X, E), and if H happens to be \ufb01xed for some reason, of the metric \u03c9 itself as a canonical metric in non-K\u00a8 ahler geometry. 2 \fStrominger systems are di\ufb03cult to solve, and the \ufb01rst non-perturbative, non-K\u00a8 ahler solutions to the systems were obtained by Fu-Yau [10, 11], some twenty years after Strominger\u2019s original proposal. These solutions were on toric \ufb01brations over K3 surfaces constructed earlier by Goldstein and Prokushkin [14]. On such manifolds, Fu-Yau succeeded in reducing the Strominger system to a new complex Monge-Amp` ere equation on the two-dimensional K\u00a8 ahler base, which they succeeded in solving. Higher dimensional analogues of the Fu-Yau solution were considered by the authors in [23, 24, 25]. Geometric constructions of some special solutions of Strominger systems have been given in e.g. [1, 4, 5, 6, 7, 8, 9, 21]. 
A major problem at the present time is to develop analytical methods for solving the general Strominger system. Even if the curvature F of the bundle metric H were known and we concentrate only on the equations for \u03c9, an immediate di\ufb03culty typical of nonK\u00a8 ahler geometry, is that there is no general or convenient way of parametrizing conformally balanced metrics, comparable to the parametrization of K\u00a8 ahler metrics by their potentials which was instrumental in Yau\u2019s solution of the Ricci-\ufb02at equation. It appears to be a daunting problem to have to deal with the anomaly equation and the conformally balanced equation as a system of equations. A way of bypassing this di\ufb03culty was suggested by the authors in [22], which is to introduce the coupled geometric \ufb02ow H\u22121 \u2202tH = \u2212\u039bF \u2202t(\u2225\u2126\u2225\u03c9\u03c92) = i\u2202\u00af \u2202\u03c9 \u2212\u03b1\u2032Tr(Rm \u2227Rm \u2212F \u2227F) (1.8) with initial conditions \u03c9(0) = \u03c90, H(0) = H0, where H0 is a given metric on E, and \u03c90 is a Hermitian metric on X which satis\ufb01es the conformally balanced condition (1.3)2. The point of the \ufb02ow is that, by Chern-Weil theory, the right hand side in the second line above is always closed, and hence the condition d(\u2225\u2126\u2225\u03c9\u03c92) = 0 is preserved by the \ufb02ow. Thus there is no need to treat the conformally balanced condition as a separate equation, and the stationary points of the \ufb02ow will automatically satisfy all the equations in the Strominger system. For \ufb01xed \u03c9, the \ufb02ow of the metric H\u00af \u03b1\u03b2 is just the Donaldson heat \ufb02ow [3]. If the \ufb02ow for H\u00af \u03b1\u03b2(t) is known, and if we set \u03a6(t) = Tr(F \u2227F), then the \ufb02ow for \u03c9 reduces to the \ufb02ow (1.1). An understanding of (1.1) appears a necessary preliminary step in an understanding of (1.8). The \ufb02ow (1.8) was called the Anomaly \ufb02ow in [22], in reference to the key role played by the right hand side in the Green-Schwarz anomaly cancellation mechanism. We shall use the same generic name for all closely related \ufb02ows such as (1.1). Anomaly \ufb02ows appear to be considerably more complicated than classical \ufb02ows in geometry of which the Yang-Mills \ufb02ow and the Ricci \ufb02ow are well-known examples. A \ufb01rst hurdle is that the \ufb02ow of metrics \u03c9(t) has to be deduced from the \ufb02ow of (2, 2)-forms 2Note that there are ways for constructing individual conformally balanced metrics \u03c90 (see e.g. [33]). 3 \f\u2225\u2126\u2225\u03c9 \u03c92. Now the existence in dimension m of an (m\u22121)-th root of a positive (m\u22121, m\u22121)form has been shown by Michelsohn [20], and this passage back and forth between positive (1, 1)-forms and (m\u22121, m\u22121)-forms has played a major role e.g. in works of Popovici [27] and in the recent proof by Szekelyhidi, Tosatti, and Weinkove [31, 33] of the existence of Gauduchon metrics with prescribed volume form. However, it does not appear possible to use the formalism in these works to deduce the \ufb02ow of the curvature tensor of \u03c9 from the \ufb02ow of \u03c9m\u22121. This is one of the main goals of the present paper. What we do is to produce a seemingly new formula for the square root of a (2, 2)-form, or equivalently, for the Hodge \u22c6operator, without using the antisymmetric symbol \u03b5. 
With such a formula, and using the very speci\ufb01c torsion constraints resulting from the conformally balanced condition, we obtain the following completely explicit expression for the Anomaly \ufb02ow: Theorem 1 If the initial metric \u03c90 is conformally balanced, then the Anomaly \ufb02ow (1.1) can also be expressed as \u2202tg\u00af pq = 1 2\u2225\u2126\u2225\u03c9 \u0014 \u2212\u02dc R\u00af pq + g\u03b1\u00af \u03b2gs\u00af rT\u00af \u03b2sq \u00af T\u03b1\u00af r\u00af p \u2212\u03b1\u2032gs\u00af r(R[\u00af ps \u03b1 \u03b2R\u00af rq] \u03b2 \u03b1 \u2212\u03a6\u00af ps\u00af rq) \u0015 . (1.9) where \u02dc R\u00af kj is the Ricci tensor and T\u00af kij is the torsion tensor, as de\ufb01ned in (2.36) and (2.28) below. The brackets [ , ] denote anti-symmetrization separately in each of the two sets of barred and unbarred indices. The above theorem shows that the Anomaly \ufb02ow can be viewed as generalization of the Ricci \ufb02ow, with higher order corrections in the curvature tensor proportional to \u03b1\u2032. Indeed, the terms \u02dc R\u00af pq \u2212g\u03b1\u00af \u03b2gs\u00af rT\u00af \u03b2sq \u00af T\u03b1\u00af r\u00af p reduce to the Ricci curvature R\u00af pq (see the de\ufb01nition in (2.36)) if the torsion vanishes, and the terms with coe\ufb03cient \u03b1\u2032 are the higher order corrections. It is remarkable that this analogy with the Ricci \ufb02ow is due not to an attempt to generalize the Ricci \ufb02ow, but rather to the combination of the Green-Schwarz cancellation mechanism, more speci\ufb01cally the de Kalb-Ramond \ufb01eld i\u2202\u00af \u2202\u03c9, with the torsion constraints equivalent to the conformally balanced condition. Once the formulation of the \ufb02ow provided by Theorem 1 is available, it is straightforward to derive the \ufb02ows of the torsion and curvature tensors. The full results are given in Theorems 4 and 5 below. Here, we note only that they reinforce the same analogy with the Ricci \ufb02ow. For example, the di\ufb00usion operator in the \ufb02ow for the Ricci curvature is given by \u2202tR\u00af kj = 1 2\u2225\u2126\u2225\u03c9 (\u2206R\u00af kj + 2\u03b1\u2032g\u03bb\u00af \u00b5gs\u00af rR[\u00af r\u03bb \u03b2 \u03b1\u2207s\u2207\u00af \u00b5]R\u00af kj \u03b1 \u03b2) + \u00b7 \u00b7 \u00b7 (1.10) Up to the factor (2\u2225\u2126\u2225\u03c9)\u22121, it coincides with the di\ufb00usion operator \u2206for the Ricci curvature in the Ricci \ufb02ow, up to a higher order correction in the curvature which is proportional to \u03b1\u2032. The formulation of the Anomaly \ufb02ow provided by Theorem 1 makes it more amenable to existing techniques for \ufb02ows, and indeed many \ufb02ows of metrics with torsion have been 4 \fstudied in the literature (e.g. [13, 19, 28, 29, 32] and others). However, the Anomaly \ufb02ow still involves a combination of novel features such as the particular torsion constraints, the presence of the factor \u2225\u2126\u2225\u03c9 (which is quite important in string theory as it originates from the dilaton \ufb01eld), and especially the presence of the quadratic terms in the curvature tensor. All this makes a general solution only a remote possibility at this time. Thus we focus on two important special cases. The \ufb01rst case is the Anomaly \ufb02ow restricted to the Fu-Yau ansatz for solutions of the Strominger system on toric \ufb01brations over Ricci-\ufb02at K\u00a8 ahler surfaces. 
We can show that the Anomaly \ufb02ow converges in this case, and thus gives another proof of the existence theorem of Fu-Yau. But because of its length and complexity, the full argument will be presented in a companion paper [26] to the present one. The second case is when \u03b1\u2032 = 0. In this case, our main results are as follows. Theorem 2 Assume that \u03b1\u2032 = 0. Suppose that A > 0 and \u03c9(t) is a solution to the Anomaly \ufb02ow (3.1) below, with t \u2208[0, 1 A]. Then, for all k \u2208N, there exists a constant Ck depending on a uniform lower bound of \u2225\u2126\u2225\u03c9 such that, if |Rm|\u03c9 + |DT|\u03c9 + |T|2 \u03c9 \u2264A, for all z \u2208M and t \u2208[0, 1 A], (1.11) then, |DkRm(z, t)|\u03c9 \u2264CkA tk/2 , |Dk+1T(z, t)|\u03c9 \u2264CkA tk/2 (1.12) for all z \u2208M and t \u2208(0, 1 A]. The estimates given in the above theorem can be viewed as Shi-type derivative estimates for the curvature tensor and torsion tensor along the Anomaly \ufb02ow (3.1). With this theorem, we can provide a criterion for the long-time existence of the Anomaly \ufb02ow: Theorem 3 Assume that \u03b1\u2032 = 0, and that the Anomaly \ufb02ow (3.1) exists on an interval [0, T) for some T > 0. If inft\u2208[0,T)\u2225\u2126\u2225\u03c9 > 0 (or equivalently \u03c93(t) \u2264C \u03c93(0)), and if supX\u00d7[0,T)(|Rm|2 \u03c9 + |DT|2 \u03c9 + |T|4 \u03c9) < \u221e (1.13) then the \ufb02ow can be continued to an interval [0, T + \u01eb) for some \u01eb > 0. In particular, the \ufb02ow exists for all time, unless there is a time T > 0 and a sequence (zj, tj), with tj \u2192T, with either \u2225\u2126(zj, tj)\u2225\u03c9 \u21920, or (|Rm|2 \u03c9 + |DT|2 \u03c9 + |T|4 \u03c9)(zj, tj) \u2192\u221e. (1.14) The paper is organized as follows. In \u00a72, we begin by providing an e\ufb00ective way for recapturing the form \u2202t\u03c9 from the form \u2202t(\u2225\u2126\u2225\u03c9\u03c92). We then discuss the torsion constraints in the Strominger system, and in particular, how they result in two di\ufb00erent 5 \fnotions of Ricci curvature, but a single notion of scalar curvature. We can then prove Theorem 1. With Theorem 1, it is straightforward to derive the \ufb02ows of the curvature and of the torsion. In \u00a73, we give the proof of Theorem 2. This proof is analogous to the proof for the classical \ufb02ows, but it is more complicated here due to the non-vanishing torsion and the expression \u2225\u2126\u2225\u03c9. Once we have Theorem 2, it is easy to prove Theorem 3. Finally, we provide a list of conventions in the appendices, together with some basic identities of Hermitian geometry. 2 The \ufb02ows of the metric, torsion and curvature The \ufb01rst task in the study of a geometric \ufb02ow is to derive the \ufb02ows of the curvature tensor, and in the case of non Levi-Civita connections, of the torsion tensor. In the case of Anomaly \ufb02ows, this task is complicated by the fact that the \ufb02ow is de\ufb01ned as a \ufb02ow of the (2, 2)-form \u2225\u2126\u2225\u03c9 \u03c92, and that the \ufb02ow of \u03c9 itself has to be recaptured from there. 2.1 The equation \u03d5 \u2227\u03c9m\u22122 = \u03a6 and the Hodge \u22c6operator Since \u2202t\u03c92 = 2\u2202t\u03c9 \u2227\u03c9, the \ufb02ow of \u03c9 can be recovered from the \ufb02ow of \u03c92 if we can solve explicitly equations of the form \u03d5 \u2227\u03c9 = \u03a6 for a given \u03a6. 
We begin by doing this, in general dimension m instead of just m = 3, as the resulting formulas for the solution as well as the Hodge \u22c6operator may be of independent interest. Let \u03c9 = ig\u00af kjdzj \u2227d\u00af zk and \u03b7 be a (p, q)-form. We de\ufb01ne its components \u03b7\u00af k1\u00b7\u00b7\u00b7\u00af kqj1\u00b7\u00b7\u00b7jp by \u03b7 = 1 p!q! X \u03b7\u00af k1\u00b7\u00b7\u00b7\u00af kqj1\u00b7\u00b7\u00b7jp dzjp \u2227\u00b7 \u00b7 \u00b7 \u2227dzj1 \u2227d\u00af zkq \u2227\u00b7 \u00b7 \u00b7 \u2227d\u00af zk1. (2.1) Lemma 1 Let \u03a6 be a (m \u22121, m \u22121) form on a Hermitian manifold (X, \u03c9) of dimension m. Then the equation \u03d5 \u2227\u03c9m\u22122 = \u03a6 (2.2) admits a unique solution, given by \u03d5\u00af jk = 1 \u03b1m \u001a i\u2212(m\u22122) m\u22122 Y p=1 gkp\u00af jp\u03a6\u00af jk\u00af j1k1\u00b7\u00b7\u00b7\u00af jm\u22122km\u22122 \u2212 \u03b2m (m \u22121)!2(Tr \u03a6) i g\u00af jk \u001b (2.3) where \u03b1m and \u03b2m are universal constants, depending only on the dimension m, given by \u03b1m = (m \u22121)!(m \u22122)!(m \u22121 \u2212m2 6 ), \u03b2m = m!(m \u22122)! 6 . (2.4) 6 \fand Tr \u0398 for a (p, p)-form \u0398 is de\ufb01ned by Tr \u0398 = \u27e8\u0398, \u03c9p\u27e9= i\u2212p p Y \u2113=1 gk\u2113\u00af j\u2113\u0398\u00af j1k1\u00b7\u00b7\u00b7\u00af jpkp. (2.5) The traces of \u03d5 and \u03a6 are related by Tr\u03d5 = 1 (m \u22121)!2Tr \u03a6. (2.6) Proof. In components, the equation \u03d5 \u2227\u03c9m\u22122 = \u03a6 can be expressed as im\u22122 \u03d5{\u00af jkg\u00af j1k1 \u00b7 \u00b7 \u00b7g\u00af jm\u22122km\u22122} = \u03a6\u00af jk\u00af j1k1\u00b7\u00b7\u00b7\u00af jm\u22122km\u22122 (2.7) where the bracket {, } denote antisymmetrization of all the barred indices as well as of all the unbarred indices. We contract both sides, getting im\u22122 m\u22122 Y p=1 gkp\u00af jp\u03d5{\u00af jkg\u00af j1k1 \u00b7 \u00b7 \u00b7 g\u00af jm\u22122km\u22122} = m\u22122 Y p=1 gkp\u00af jp\u03a6\u00af jk\u00af j1k1\u00b7\u00b7\u00b7\u00af jm\u22122km\u22122. (2.8) We expand the left-hand side by writing down all the terms arising from antisymmetrization of the sub-indices. Carrying out the contractions with Qm\u22122 p=1 gkp\u00af jp, it is easy to verify that each term is a constant multiple of \u03d5\u00af jk or of (Tr \u03d5) g\u00af kj. This shows that we have a relation of the form im\u22122 \u03b1m\u03d5\u00af kj + im\u22121 \u03b2m(Tr \u03d5) g\u00af kj = m\u22122 Y p=1 gkp\u00af jp\u03a6\u00af jk\u00af j1k1\u00b7\u00b7\u00b7\u00af jm\u22122km\u22122. (2.9) Next, we have \u03d5 \u2227\u03c9m\u22121 = \u03a6 \u2227\u03c9, which implies \u27e8\u03d5, \u22c6\u03c9m\u22121\u27e9= \u27e8\u03a6, \u22c6\u03c9\u27e9 (2.10) Recalling that \u22c6\u03c9m\u22121 = (m \u22121)! \u03c9 and \u22c6\u03c9 = 1 (m\u22121)! \u03c9m\u22121, we obtain Tr \u03d5 = \u27e8\u03d5, \u03c9\u27e9= 1 (m \u22121)!2 Tr \u03a6. (2.11) This establishes the form (2.3). It is easy to see that \u03b1m \u0338= 0, otherwise we obtain a relation between g\u00af kj and \u03a6 that cannot hold for an arbitrary (m \u22121, m \u22121)-form \u03a6. To determine the precise values of \u03b1m and \u03b2m, we proceed as follows. First, contracting (2.9) with respect to gk\u00af j and using the de\ufb01nition of Tr\u0398 in (2.5), we have the following relation between \u03b1m and \u03b2m, im\u22121 \u03b1m Tr \u03d5 + im\u22121 \u03b2mm Tr \u03d5 = im\u22121 Tr \u03a6 = im\u22121 (m \u22121)!2 Tr \u03d5, (2.12) 7 \fand hence \u03b1m + \u03b2mm = (m \u22121)!2. 
(2.13) Thus it remains only to determine \u03b2m. We note that the only permutation of indices which can produce a multiple of (Tr \u03d5) g\u00af kj is of the form, e.g., \u03d5\u00af j1k1g\u00af jk g\u00af j2k2 \u00b7 \u00b7 \u00b7 g\u00af jm\u22122km\u22122 (2.14) which produces, upon contraction with Qm\u22122 p=1 gkp\u00af jp and antisymmetrization in j2, \u00b7 \u00b7 \u00b7 , jn\u22122 and k2, \u00b7 \u00b7 \u00b7, kn\u22122, (Tr \u03d5)g\u00af kj \u27e8\u03c9m\u22123, \u03c9m\u22123\u27e9= (Tr \u03d5)g\u00af kj\u2225\u03c9m\u22123\u22252. (2.15) We can compute \u2225\u03c9m\u22123\u22252 as follows \u03c9m\u22123 = (m \u22123)! X j0 m X i=0 \u2113 X j=0 \u2207m\u2212i \u00af \u2207\u2113\u2212jRm \u2217\u2207i \u00af \u2207j(\u2202tg) (3.14) Using the evolution equation of Rm, we get \u2202t(\u2207m \u00af \u2207\u2113Rm) = m X i=1 \u2113 X j=1 \u2207m\u2212i \u00af \u2207\u2113\u2212jRm \u2217\u2207i \u00af \u2207j(\u2202tg) (3.15) + 1 2\u2225\u2126\u2225\u03c9 \u2207m \u00af \u2207\u2113H + X i+j>0 m X i=0 \u2113 X j=0 \u2207m\u2212i \u00af \u2207\u2113\u2212jH \u2217\u2207i \u00af \u2207j 1 2\u2225\u2126\u2225\u03c9 ! 21 \fWe compute the second term, \u2207m \u00af \u2207\u2113H = 1 2\u2207m \u00af \u2207\u2113\u2206RRm + \u2207m \u00af \u2207\u2113\u2207\u00af \u2207(T \u2217T) + \u2207m \u00af \u2207\u2113+1(T \u2217Rm) +\u2207m \u00af \u2207\u2113\u2207( \u00af T \u2217Rm) + \u2207m \u00af \u2207\u2113(Rm \u2217Rm) + \u2207m \u00af \u2207\u2113+1(T \u2217\u03a8) +\u2207m \u00af \u2207\u2113(\u03a8 \u2217\u00af T \u2217T) + \u2207m \u00af \u2207\u2113(\u2207\u03a8 \u2217\u00af T) + \u2207m \u00af \u2207\u2113(T \u2217\u00af \u2207\u03a8) (3.16) In view of the commutation identity given in the appendix, \u2207m \u00af \u2207\u2113\u2206RRm = \u2206R(\u2207m \u00af \u2207\u2113Rm) + m X i=0 \u2113 X j=0 \u2207i \u00af \u2207jRm \u2217\u2207m\u2212i \u00af \u2207\u2113\u2212jRm (3.17) + m X i=0 \u2113 X j=0 \u2207i \u00af \u2207jT \u2217\u2207m\u2212i \u00af \u2207\u2113+1\u2212iRm + m X i=0 \u2113 X j=0 \u2207i \u00af \u2207jT \u2217\u2207m+1\u2212i \u00af \u2207\u2113\u2212jRm we obtain \u2202t(\u2207m \u00af \u2207\u2113Rm) = 1 2\u2225\u2126\u2225\u03c9 n1 2\u2206R(\u2207m \u00af \u2207\u2113Rm) + m X i=0 \u2113 X j=0 \u2207i \u00af \u2207jRm \u2217\u2207m\u2212i \u00af \u2207\u2113\u2212jRm + m X i=0 \u2113 X j=0 \u2207i \u00af \u2207jT \u2217 \u0010 \u2207m\u2212i \u00af \u2207\u2113+1\u2212iRm + \u2207m+1\u2212i \u00af \u2207\u2113\u2212jRm \u0011 + \u2207m \u00af \u2207\u2113\u2207\u00af \u2207(T \u2217T) +\u2207m \u00af \u2207\u2113+1(T \u2217Rm) + \u2207m \u00af \u2207\u2113\u2207( \u00af T \u2217Rm) + \u2207m \u00af \u2207\u2113(Rm \u2217Rm) +\u2207m \u00af \u2207\u2113+1(T \u2217\u03a8) + \u2207m \u00af \u2207\u2113(\u03a8 \u2217\u00af T \u2217T) + \u2207m \u00af \u2207\u2113(\u2207\u03a8 \u2217\u00af T) o + X i+j>0 m X i=0 \u2113 X j=0 \u2207m\u2212i \u00af \u2207\u2113\u2212jRm \u2217\u2207i \u00af \u2207j(\u2202tg) + X i+j>0 m X i=0 \u2113 X j=0 \u2207m\u2212i \u00af \u2207\u2113\u2212jH \u2217\u2207i \u00af \u2207j 1 2\u2225\u2126\u2225\u03c9 ! 
(3.18) Next we compute \u2202t|\u2207m \u00af \u2207\u2113Rm|2 \u2264 \u27e8\u2202t\u2207m \u00af \u2207\u2113Rm, \u2207m \u00af \u2207\u2113Rm\u27e9+ \u27e8\u2207m \u00af \u2207\u2113Rm, \u2202t\u2207m \u00af \u2207\u2113Rm\u27e9 (3.19) + C 2\u2225\u2126\u2225\u03c9 |\u2207m \u00af \u2207\u2113Rm|2 \u00b7 |\u03a8| We also compute \u2206R|\u2207m \u00af \u2207\u2113Rm|2 = \u27e8\u2206R\u2207m \u00af \u2207\u2113Rm, \u2207m \u00af \u2207\u2113Rm\u27e9+ \u27e8\u2207m \u00af \u2207\u2113Rm, \u2206R\u2207m \u00af \u2207\u2113Rm\u27e9(3.20) +2|\u2207m+1 \u00af \u2207\u2113Rm|2 + 2| \u00af \u2207\u2207m \u00af \u2207\u2113Rm|2 = \u27e8\u2206R\u2207m \u00af \u2207\u2113Rm, \u2207m \u00af \u2207\u2113Rm\u27e9+ \u27e8\u2207m \u00af \u2207\u2113Rm, \u2206R\u2207m \u00af \u2207\u2113Rm\u27e9 +2|\u2207m+1 \u00af \u2207\u2113Rm|2 + 2|\u2207m \u00af \u2207\u2113+1Rm|2 +2 \u0010 | \u00af \u2207\u2207m \u00af \u2207\u2113Rm|2 \u2212|\u2207m \u00af \u2207\u2113+1Rm|2\u0011 22 \fWe can estimate the last term by a commutation identity. \u00af \u2207\u2207m \u00af \u2207\u2113Rm \u2212\u2207m \u00af \u2207\u00af \u2207\u2113Rm = m\u22121 X i=0 \u2207iRm \u2217\u2207m\u22121\u2212i \u00af \u2207\u2113Rm (3.21) It follows that | \u00af \u2207\u2207m \u00af \u2207\u2113Rm|2 \u2212|\u2207m \u00af \u2207\u2113+1Rm|2 (3.22) \u2265 \u2212C|\u2207m \u00af \u2207\u2113+1Rm| \u00b7 m\u22121 X i=0 |\u2207iRm \u2217\u2207m\u22121\u2212i \u00af \u2207\u2113Rm| \u2212C m\u22121 X i=0 |\u2207iRm \u2217\u2207m\u22121\u2212i \u00af \u2207\u2113Rm|2 \u2265 \u2212C1|\u2207m \u00af \u2207\u2113+1Rm| \u00b7 m\u22121 X i=0 |DiRm| \u00b7 |Dm+\u2113\u22121\u2212iRm| \u2212C m\u22121 X i=0 |DiRm|2 \u00b7 |Dm+\u2113\u22121\u2212iRm|2 Putting all the computation together, we arrive at \u2202t|\u2207m \u00af \u2207\u2113Rm|2 \u2264 1 2\u2225\u2126\u2225\u03c9 (1 2\u2206R|\u2207m \u00af \u2207\u2113Rm|2 \u2212|\u2207m+1 \u00af \u2207\u2113Rm|2 \u2212|\u2207m \u00af \u2207\u2113+1Rm|2 +C1|\u2207m \u00af \u2207\u2113+1Rm| \u00b7 m\u22121 X i=0 |DiRm| \u00b7 |Dm+\u2113\u22121\u2212iRm| + C m\u22121 X i=0 |DiRm|2 \u00b7 |Dm+\u2113\u22121\u2212iRm|2 +C|\u2207m \u00af \u2207\u2113Rm| \u00b7 \" m X i=0 \u2113 X j=0 |\u2207i \u00af \u2207jRm| \u00b7 |\u2207m\u2212i \u00af \u2207\u2113\u2212jRm| + |\u2207m+1 \u00af \u2207\u2113+1(T \u2217T)| + m X i=0 \u2113\u22121 X j=0 |\u2207i \u00af \u2207jRm| \u00b7 |\u2207m\u2212i \u00af \u2207\u2113\u2212j(T \u2217T)| + |\u2207m \u00af \u2207\u2113+1(T \u2217Rm)| + m X i=0 \u2113 X j=0 |\u2207i \u00af \u2207jT| \u00b7 \u0010 \u2207m\u2212i \u00af \u2207\u2113+1\u2212jRm| + |\u2207m+1\u2212i \u00af \u2207\u2113\u2212jRm| \u0011 +|\u2207m+1 \u00af \u2207\u2113( \u00af T \u2217Rm)| + m X i=0 \u2113\u22121 X j=0 |\u2207i \u00af \u2207jRm| \u00b7 |\u2207m\u2212i \u00af \u2207\u2113\u22121\u2212j( \u00af T \u2217Rm)| +|\u2207m \u00af \u2207\u2113(Rm \u2217Rm)| + |\u2207m \u00af \u2207\u2113+1(T \u2217\u03a8)| + |\u2207m \u00af \u2207\u2113(\u03a8 \u2217\u00af T \u2217T)| + |\u2207m \u00af \u2207\u2113(\u2207\u03a8 \u2217\u00af T)| + X i+j>0 m X i=0 \u2113 X j=0 |\u2207m\u2212i \u00af \u2207\u2113\u2212jH| \u00b7 |\u2207i \u00af \u2207j 1 2\u2225\u2126\u2225\u03c9 ! 
| + X i+j>0 m X i=0 \u2113 X j=0 |\u2207m\u2212i \u00af \u2207\u2113\u2212jRm| \u00b7 |\u2207i \u00af \u2207j(\u2202tg)| #) + C 2\u2225\u2126\u2225\u03c9 |\u2207m \u00af \u2207\u2113Rm|2 \u00b7 |\u03a8| (3.23) where we used commutating identities for terms \u2207m \u00af \u2207\u2113\u2207\u00af \u2207(T \u2217T) and \u2207m \u00af \u2207\u2113\u2207( \u00af T \u2217Rm) in the evolution equation \u2202t\u2207k \u00af \u2207\u2113Rm. Next, we use the non-standard notation D introduced 23 \fat the beginning of this section. Note that, for a tensor E, |\u2207i \u00af \u2207jE| \u2264|Di+jE|. (3.24) Let k = m + \u2113. We have \u2202t|\u2207m \u00af \u2207\u2113Rm|2 \u2264 1 2\u2225\u2126\u2225\u03c9 (1 2\u2206R|\u2207m \u00af \u2207\u2113Rm|2 \u2212|\u2207m+1 \u00af \u2207\u2113Rm|2 \u2212|\u2207m \u00af \u2207\u2113+1Rm|2 +C1|\u2207m \u00af \u2207\u2113+1Rm| \u00b7 k\u22121 X i=0 |DiRm| \u00b7 |Dk\u22121\u2212iRm| + C k\u22121 X i=0 |DiRm|2 \u00b7 |Dk\u22121\u2212iRm|2 +C|\u2207m \u00af \u2207\u2113Rm| \u00b7 \" k X i=0 |DiRm| \u00b7 |Dk\u2212iRm| + k X i=0 |DiT| \u00b7 |Dk+1\u2212iRm| +|Dk+2(T \u2217T)| + k\u22121 X i=0 |DiRm| \u00b7 |Dk\u2212i(T \u2217T)| +|Dk+1(T \u2217Rm)| + |Dk+1( \u00af T \u2217Rm)| + k\u22121 X i=0 |DiRm| \u00b7 |Dk\u22121\u2212i( \u00af T \u2217Rm)| +|Dk(Rm \u2217Rm)| + |Dk+1(T \u2217\u03a8)| + |Dk(\u03a8 \u2217T \u2217T)| + |Dk(\u2207\u03a8 \u2217T)| + k X i=1 |Dk\u2212iH| \u00b7 |Di 1 2\u2225\u2126\u2225\u03c9 ! | + k X i=1 |Dk\u2212iRm| \u00b7 |Di(\u2202tg)| #) + C 2\u2225\u2126\u2225\u03c9 |\u2207m \u00af \u2207\u2113Rm|2 \u00b7 |\u03a8| (3.25) Recall that |DkRm|2 = X m+\u2113=k |\u2207m \u00af \u2207\u2113Rm|2 (3.26) |\u2207m \u00af \u2207\u2113+1Rm| \u2264 |Dk+1Rm|, |\u2207m \u00af \u2207\u2113Rm| \u2264|DkRm| (3.27) and we also have |Dk+1Rm|2 = X m+q=k+1 |\u2207m \u00af \u2207qRm|2 = X m+q\u22121=k, q\u22651 |\u2207m \u00af \u2207qRm|2 + |\u2207k+1Rm|2 = X m+\u2113=k, m\u22650, \u2113\u22650 |\u2207m \u00af \u2207\u2113+1Rm|2 + |\u2207k+1Rm|2 \u2264 X m+\u2113=k |\u2207m \u00af \u2207\u2113+1Rm|2 + X m+\u2113=k |\u2207m+1 \u00af \u2207\u2113Rm|2 (3.28) Using these inequalities, we get \u2202t|DkRm|2 24 \f\u2264 1 2\u2225\u2126\u2225\u03c9 (1 2\u2206R|DkRm|2 \u2212|Dk+1Rm|2 +C1|Dk+1Rm| \u00b7 k\u22121 X i=0 |DiRm| \u00b7 |Dk\u22121\u2212iRm| + C k\u22121 X i=0 |DiRm|2 \u00b7 |Dk\u22121\u2212iRm|2 +C|DkRm| \u00b7 \" k X i=0 |DiRm| \u00b7 |Dk\u2212iRm| + k X i=0 |DiT| \u00b7 |Dk+1\u2212iRm| +|Dk+2(T \u2217T)| + k\u22121 X i=0 |DiRm| \u00b7 |Dk\u2212i(T \u2217T)| +|Dk+1(T \u2217Rm)| + |Dk+1( \u00af T \u2217Rm)| + k\u22121 X i=0 |DiRm| \u00b7 |Dk\u22121\u2212i( \u00af T \u2217Rm)| +|Dk(Rm \u2217Rm)| + |Dk+1(T \u2217\u03a8)| + |Dk(\u03a8 \u2217T \u2217T)| + |Dk(\u2207\u03a8 \u2217T)| + k X i=1 |Dk\u2212iH| \u00b7 |Di 1 2\u2225\u2126\u2225\u03c9 ! | + k X i=1 |Dk\u2212iRm| \u00b7 |Di(\u2202tg)| #) + C 2\u2225\u2126\u2225\u03c9 |DkRm|2 \u00b7 |\u03a8| (3.29) We estimate the terms on right hand side one by one. Recall that we have |DjRm| \u2264 C A tj/2 , 0 \u2264j \u2264k \u22121 (3.30) |Dj+1T| \u2264 C A tj/2 , 0 \u2264j \u2264k \u22121 (3.31) |T|2 \u2264 C A; (3.32) and the unknown terms are |Dk+1Rm|, |DkRm|, |Dk+2T| and |Dk+1T|. 
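All the bullet estimates that follow turn on the same absorption mechanism; as a guide (our paraphrase of the first bullet below), the inductive bounds give, for any small $\theta > 0$,
$$|D^{k+1}Rm| \cdot C A^2\, t^{-\frac{k-1}{2}} \;\le\; \theta\,|D^{k+1}Rm|^2 + C(\theta)\,A^4\, t^{-(k-1)} \;\le\; \theta\,|D^{k+1}Rm|^2 + C(\theta)\,A^3\, t^{-k},$$
where the last inequality uses $At \le 1$; the $\theta$-term is then absorbed into the good term $-|D^{k+1}Rm|^2$ produced by the Laplacian.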
\u2022 Estimate for |Dk+1Rm| \u00b7 Pk\u22121 i=0 |DiRm| \u00b7 |Dk\u22121\u2212iRm| : |Dk+1Rm| \u00b7 k\u22121 X i=0 |DiRm| \u00b7 |Dk\u22121\u2212iRm| \u2264 |Dk+1Rm| \u00b7 k\u22121 X i=0 CA ti/2 \u00b7 CA t(k\u22121\u2212i)/2 (3.33) \u2264 |Dk+1Rm| \u00b7 CA2 t\u2212k\u22121 2 \u2264 \u03b8|Dk+1Rm|2 + C(\u03b8) A3 t\u2212k where \u03b8 is a small positive number such that C1\u03b8 < 1 4 . To obtain the last inequality, we used Cauchy-Schwarz inequality and the fact that A t < 1. \u2022 Estimate for Pk\u22121 i=0 |DiRm|2 \u00b7 |Dk\u22121\u2212iRm|2 : k\u22121 X i=0 |DiRm|2 \u00b7 |Dk\u22121\u2212iRm|2 \u2264 k\u22121 X i=0 \u0012CA ti/2 \u00132 \u00b7 \u0012 CA t(k\u22121\u2212i)/2 \u00132 (3.34) \u2264 CA4 t\u2212(k\u22121) \u2264CA3 t\u2212k 25 \f\u2022 Estimate for Pk i=0 |DiRm| \u00b7 |Dk\u2212iRm| : k X i=0 |DiRm| \u00b7 |Dk\u2212iRm| = 2|DkRm| \u00b7 |Rm| + k\u22121 X i=1 |DiRm| \u00b7 |Dk\u2212iRm| (3.35) \u2264 CA |DkRm| + CA2 t\u2212k 2 \u2022 Estimate for Pk i=0 |DiT| \u00b7 |Dk+1\u2212iRm| : k X i=0 |DiT| \u00b7 |Dk+1\u2212iRm| = |T| \u00b7 |Dk+1Rm| + |DT| \u00b7 |DkRm| + k X i=2 |DiT| \u00b7 |Dk+1\u2212iRm| \u2264 CA 1 2 |Dk+1Rm| + CA |DkRm| + CA2 t\u2212k 2 (3.36) \u2022 Estimate for |Dk+2(T \u2217T)| : |Dk+2(T \u2217T)| \u2264 k+2 X i=0 |DiT| \u00b7 |Dk+2\u2212iT| (3.37) = 2|T| \u00b7 |Dk+2T| + 2|DT| \u00b7 |Dk+1T| + k X i=2 |DiT| \u00b7 |Dk+2\u2212iT| \u2264 CA 1 2 |Dk+2T| + CA |Dk+1T| + CA2 t\u2212k 2 \u2022 Estimate for Pk\u22121 i=0 |DiRm| \u00b7 |Dk\u2212i(T \u2217T)| : k\u22121 X i=0 |DiRm| \u00b7 |Dk\u2212i(T \u2217T)| (3.38) \u2264 2 k\u22121 X i=0 |DiRm| \u00b7 |T| \u00b7 |Dk\u2212iT| + k\u22121 X i=0 k\u2212i X j=1 |DiRm| \u00b7 |DjT| \u00b7 |Dk\u2212i\u2212jT| \u2264 CA2 t\u2212k 2 \u2022 Estimate for |Dk+1(T \u2217Rm)| : |Dk+1(T \u2217Rm)| \u2264 |T| \u00b7 |Dk+1Rm| + |DT| \u00b7 |DkRm| + |Dk+1T| \u00b7 |Rm| (3.39) + k X i=2 |DiT| \u00b7 |Dk+1\u2212iRm| \u2264 CA 1 2 |Dk+1Rm| + CA |DkRm| + CA |Dk+1T| + CA2 t\u2212k 2 \u2022 Estimate |Dk+1( \u00af T \u2217Rm)| : |Dk+1( \u00af T \u2217Rm)| \u2264CA 1 2 |Dk+1Rm| + CA |DkRm| + CA |Dk+1T| + CA2 t\u2212k 2 (3.40) 26 \f\u2022 Estimate for Pk\u22121 i=0 |DiRm| \u00b7 |Dk\u22121\u2212i( \u00af T \u2217Rm)| : k\u22121 X i=0 |DiRm| \u00b7 |Dk\u22121\u2212i( \u00af T \u2217Rm)| (3.41) \u2264 k\u22121 X i=0 |DiRm| \u00b7 |T| \u00b7 |Dk\u22121\u2212iRm| + k\u22121 X i=0 k\u22121\u2212i X j=1 |DiRm| \u00b7 |DjT| \u00b7 |Dk\u22121\u2212i\u2212jRm| \u2264 CA2 t\u2212k 2 \u2022 Estimate for |Dk(Rm \u2217Rm)| : |Dk(Rm \u2217Rm)| \u2264 2|Rm| \u00b7 |DkRm| + k\u22121 X i=1 |DiRm| \u00b7 |Dk\u2212iRm| (3.42) \u2264 CA |DkRm| + CA2 t\u2212k 2 \u2022 Estimate for |Dk+1(T \u2217\u03a8)| : Recall that \u03a8\u00af pq = \u2212\u02dc R\u00af pq + gs\u00af r gm\u00af n T\u00af nsq \u00af Tm\u00af r\u00af p, we have |Dk+1(\u03a8 \u2217T)| \u2264 |Dk+1(Rm \u2217T)| + |Dk+1(T \u2217T \u2217T)| (3.43) The \ufb01rst term is the same as (3.39). We only need to estimate the second term. 
|Dk+1(T \u2217T \u2217T)| \u2264 |Dk+1T| \u00b7 |T|2 \u03c9 + X p+q=k+1;p,q>0 |DpT| \u00b7 |DqT| \u00b7 |T| (3.44) + X p+q+r=k+1;p,q,r>0 |DpT| \u00b7 |DqT| \u00b7 |DrT| \u2264 CA |Dk+1T| + CA 5 2 t\u2212(k\u22121) 2 + CA3 t\u2212(k\u22122) 2 \u2264 CA |Dk+1T| + CA2 t\u2212k 2 It follows that |Dk+1(\u03a8 \u2217T)| \u2264 CA 1 2 |Dk+1Rm| + CA (|DkRm| + |Dk+1T|) + CA2 t\u2212k 2 (3.45) \u2022 Estimate for |Dk(\u03a8 \u2217T \u2217T)| : |Dk(\u03a8 \u2217T \u2217T)| \u2264 |Dk(Rm \u2217T \u2217T)| + |Dk(T \u2217T \u2217T \u2217T)| (3.46) We use the same trick as above to estimate these two terms. For the \ufb01rst term, we have |Dk(Rm \u2217T \u2217T)| \u2264 |DkRm| \u00b7 |T|2 \u03c9 + X p+q=k;q>0 |DpRm| \u00b7 |DqT| \u00b7 |T| (3.47) + X p+q+r=k;q,r>0 |DpRm| \u00b7 |DqT| \u00b7 |DrT| \u2264 CA |DkRm| + CA 5 2 t\u2212k\u22121 2 + CA3 t\u2212k\u22122 2 \u2264 CA |DkRm| + CA2 t\u2212k 2 27 \fFor the second term, we have |Dk(T \u2217T \u2217T \u2217T)| \u2264 4|DkT| \u00b7 |T|3 + X p+q=k; p,q>0 |DpT| \u00b7 |DqT| \u00b7 |T|2 \u03c9 (3.48) + X p+q+r=k; p,q,r>0 |DpT| \u00b7 |DqT| \u00b7 |DrT| \u00b7 |T|\u03c9 + X p+q+r+s=k;p,q,r,s>0 |DpT| \u00b7 |DqT| \u00b7 |DrT| \u00b7 |DsT| \u2264 CA 5 2 t\u2212k\u22121 2 + CA3 t\u2212k\u22122 2 + CA 7 2 t\u2212k\u22123 2 + CA4 t\u2212k\u22124 2 \u2264 CA2 t\u2212k 2 Thus, we have |Dk(\u03a8 \u2217T \u2217T)| \u2264 CA |DkRm| + CA2 t\u2212k 2 . (3.49) \u2022 Estimate for |Dk(\u2207\u03a8 \u2217T)| : |Dk(\u2207\u03a8 \u2217T)| \u2264 |Dk(\u2207Rm \u2217T)| + |Dk(\u2207(T \u2217T) \u2217T)| (3.50) \u2264 |Dk+1Rm| \u00b7 |T| + |DkRm| \u00b7 |DT| + k X i=2 |Dk+1\u2212iRm| \u00b7 |DiT| +|Dk+1(T \u2217T)| \u00b7 |T| + k X i=1 |Dk+1\u2212i(T \u2217T)| \u00b7 |DiT| \u2264 CA 1 2 |Dk+1Rm| + CA |DkRm| + CA |Dk+1T| + CA2 t\u2212k 2 \u2022 Estimate for Pk i=1 |Dk\u2212iH| \u00b7 |Di \u0010 1 2\u2225\u2126\u2225\u03c9 \u0011 | : Recall that H = 1 2\u2206RRm + \u2207\u00af \u2207(T \u2217\u00af T) + \u00af \u2207(T \u2217Rm) + \u2207( \u00af T \u2217Rm) (3.51) +Rm \u2217Rm + ( \u00af \u2207T \u2212\u00af T \u2217T) \u2217\u03a8 + \u00af T \u2217\u2207\u03a8 + T \u2217\u00af \u2207\u03a8 and we also compute, for any m, \u2207m 1 2\u2225\u2126\u2225\u03c9 ! = \u2207m\u22121\u2207 1 2\u2225\u2126\u2225\u03c9 ! = \u2212\u2207m\u22121 1 2\u2225\u2126\u2225\u03c9 T ! (3.52) = \u2212\u2207m\u22121 1 2\u2225\u2126\u2225\u03c9 ! \u2217T \u2212 1 2\u2225\u2126\u2225\u03c9 \u2207m\u22121T = 1 2\u2225\u2126\u2225\u03c9 m X j=1 \u2207m\u2212jT \u2217T j\u22121 28 \fwhere T j\u22121 = T \u2217T \u2217\u00b7 \u00b7 \u00b7 \u2217T with (j \u22121) factors. Again keep in mind that the unknown terms are |Dk+1Rm|, |DkRm|,|Dk+2T| and |Dk+1T|. Notice that these terms only appear for i = 1, 2 in the summation. k X i=1 |Dk\u2212iH| \u00b7 |Di 1 2\u2225\u2126\u2225\u03c9 ! | = |Dk\u22121H| \u00b7 |D 1 2\u2225\u2126\u2225\u03c9 ! | + |Dk\u22122H| \u00b7 |D2 1 2\u2225\u2126\u2225\u03c9 ! | + k X i=3 |Dk\u2212iH| \u00b7 |Di 1 2\u2225\u2126\u2225\u03c9 ! | (3.53) Using (3.51) and (3.52), we can estimate the terms on the right hand side one by one and obtain k X i=1 |Dk\u2212iH| \u00b7 |Di 1 2\u2225\u2126\u2225\u03c9 ! | (3.54) \u2264 CA 1 2|Dk+1Rm| + CA (|DkRm| + |Dk+1T|) + CA2 t\u2212k 2 \u2022 Estimate for Pk i=1 |Dk\u2212iRm| \u00b7 |Di(\u2202tg)| : |Di(\u2202tg)| = |Di 1 2\u2225\u2126\u2225\u03c9 \u03a8 ! | = i X j=0 |Dj 1 2\u2225\u2126\u2225\u03c9 ! 
| \u00b7 |Di\u2212j\u03a8| (3.55) By the de\ufb01nition of \u03a8 and the computation (3.52), we know that the only unknown term appeared in the summation is when j = i = k. Thus, we arrive the following estimate k X i=1 |Dk\u2212iRm| \u00b7 |Di(\u2202tg)| \u2264 CA |DkRm| + CA2 t\u2212k 2 (3.56) \u2022 Estimate for the last term |DkRm|2 \u00b7 |\u03a8| : |DkRm|2 \u00b7 |\u03a8| \u2264CA |DkRm|2. (3.57) Finally, putting all the above estimates together, we obtain the lemma. Q.E.D. Following the same strategy, we can also prove the following lemma on estimates for the derivatives of the torsion. Lemma 10 Under the same assumption as in Lemma 9, we have \u2202t|Dk+1T|2 \u2264 1 2\u2225\u2126\u2225\u03c9 n1 2\u2206R|Dk+1T|2 \u22123 4 |Dk+2T|2 (3.58) +CA 1 2 \u0010 |Dk+2T| + |Dk+1Rm| \u0011 \u00b7 |Dk+1T| +CA \u0010 |Dk+1T| + |DkRm| \u0011 \u00b7 |Dk+1T| +CA2 t\u2212k 2 |\u2207k+1T| + CA3t\u2212ko . 29 \fNow we return to the proof of Theorem 2: We \ufb01rst prove the estimate (1.12) for the case k = 1. To obtain the desired estimate, we apply the maximum principle to the function G1(z, t) = t \u0010 |DRm|2 + |D2T|2\u0011 + \u039b \u0010 |Rm|2 + |DT|2\u0011 (3.59) Using Lemma 9 and Lemma 10 with k = 1, we have \u2202t \u0010 |DRm|2 + |D2T|2\u0011 (3.60) \u2264 1 2\u2225\u2126\u2225\u03c9 n1 2\u2206R \u0010 |DRm|2 + |D2T|2\u0011 \u22123 4 \u0010 |D2Rm|2 + |D3T|2\u0011 +CA 1 2 \u0010 |D2Rm| + |D3T| \u0011 \u00b7 \u0010 |DRm| + |D2T| \u0011 +CA \u0010 |DRm| + |D2T| \u00112 + CA2 t\u22121 2 \u0010 |DRm| + |D2T| \u0011 + CA3 t\u22121o \u2264 1 2\u2225\u2126\u2225\u03c9 n1 2\u2206R \u0010 |DRm|2 + |D2T|2\u0011 \u22121 2 \u0010 |D2Rm|2 + |D3T|2\u0011 +CA \u0010 |DRm|2 + |D2T|2\u0011 + CA3 t\u22121o where we used the Cauchy-Schwarz inequality in the last inequality. Recall the evolution equation \u2202t(|DT|2 + |Rm|2) \u2264 1 2\u2225\u2126\u2225\u03c9 n1 2\u2206R(|DT|2 + |Rm|2) \u22121 2(|D2T|2 + |DRm|2) +CA 3 2(|DRm| + |D2T|) + CA3o . (3.61) It follows that \u2202tG1 \u2264 1 4\u2225\u2126\u2225\u03c9 n \u2206RG1 \u2212t \u0010 |D2Rm|2 + |D3T|2\u0011 \u2212\u039b \u0010 |D2T|2 + |DRm|2\u0011 +CA t (|DRm|2 + |D2T|2) + CA3 (3.62) +CA 3 2 \u039b(|DRm| + |D2T|) + CA3 \u039b o + (|DRm|2 + |D2T|2) Again, using Cauchy-Schwarz inequality, C A 3 2 \u039b(|DRm| + |D2T|) \u2264CA3 \u039b + \u039b(|DRm|2 + |D2T|2). (3.63) Putting these estimates together, we have \u2202tG \u2264 1 4\u2225\u2126\u2225\u03c9 n \u2206RG \u2212t \u0010 |D2Rm|2 + |D3T|2\u0011 (3.64) +(\u2225\u2126\u2225\u03c9 \u2212\u039b + CAt) \u0010 |D2T|2 + |DRm|2\u0011 + CA3o 30 \fBy At \u22641 and choosing \u039b large enough, \u2202tG \u2264 1 4\u2225\u2126\u2225\u03c9 n 2\u2206RG + CA3\u039b o (3.65) We note that the choice of constant \u039b depends on the upper bound of \u2225\u2126\u2225\u03c9. However, with the assumption (1.11), we can get the uniform C0 bound of the metric depending on the uniform lower bound of \u2225\u2126\u2225\u03c9. Consequently, we obtain the upper bound of \u2225\u2126\u2225\u03c9, which also depends on the uniform lower bound of \u2225\u2126\u2225\u03c9. To \ufb01nish the proof for k = 1, observing that when t = 0, G(0) = \u039b 2 (|DT|2 + |Rm|2) \u2264C\u039bA2. (3.66) Thus, applying the maximum principle to the above inequality implies that G(t) \u2264C\u039bA2 + CA3\u039b t \u2264CA2 (3.67) It follows |DRm| + |D2T| \u2264CA t1/2 . (3.68) This establishes the estimate (1.12) when k = 1. Next, we use induction on k to prove the higher order estimates. 
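Before carrying out the induction, it may help to record (our remark) why the weights used below are the natural ones: the factor $t^k$ exactly offsets the blow-up rate expected from (1.12), so that, up to constants, boundedness of the weighted quantity is equivalent to the desired estimate:
$$t^k\big(|D^k Rm|^2 + |D^{k+1} T|^2\big) \le C A^2 \quad \Longleftrightarrow \quad |D^k Rm| + |D^{k+1} T| \le \frac{C A}{t^{k/2}}.$$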
Using Lemma 9 and Lemma 10 again, we have \u2202t \u0010 |DkRm|2 + |Dk+1T|2\u0011 (3.69) \u2264 1 2\u2225\u2126\u2225\u03c9 n1 2\u2206R \u0010 |DkRm|2 + |Dk+1T|2\u0011 \u22123 4 \u0010 |Dk+1Rm|2 + |Dk+2T|2\u0011 +CA 1 2 \u0010 |Dk+1Rm| + |Dk+2T| \u0011 \u00b7 \u0010 |DkRm| + |Dk+1T| \u0011 +CA \u0010 |DkRm| + |Dk+1T| \u00112 + CA2 t\u2212k 2 \u0010 |DkRm| + |Dk+1T| \u0011 + CA3 t\u2212ko \u2264 1 2\u2225\u2126\u2225\u03c9 n1 2\u2206R \u0010 |DkRm|2 + |Dk+1T|2\u0011 \u22121 2 \u0010 |Dk+1Rm|2 + |Dk+2T|2\u0011 +CA \u0010 |DkRm|2 + |Dk+1T|2\u0011 + CA3 t\u2212ko . Denote fj(z, t) = |DjRm|2 + |Dj+1T|2. (3.70) Then, \u2202tfk \u2264 1 4\u2225\u2126\u2225\u03c9 \u0010 \u2206Rfk \u2212fk+1 + CA fk + CA3 t\u2212k\u0011 . (3.71) 31 \fNext, we apply the maximum principle to the test function Gk(z, t) = tkfk + k X i=1 \u039bi Bk i tk\u2212i fk\u2212i (3.72) where \u039bi (1 \u2264i \u2264k) are large numbers to be determined and Bk i = (k\u22121)! (k\u2212i)! . We note that, for 1 \u2264i < k, we still have an inequality similar to (3.70) for fk\u2212i. \u2202tfk\u2212i \u2264 1 4\u2225\u2126\u2225\u03c9 \u0010 2\u2206Rfk\u2212i \u2212fk\u2212i+1 + CA fk\u2212i + CA3 t\u2212(k\u2212i)\u0011 \u2264 1 4\u2225\u2126\u2225\u03c9 \u0010 2\u2206Rfk\u2212i \u2212fk\u2212i+1 + CA3 t\u2212(k\u2212i)\u0011 (3.73) where we used the induction condition (3.9) for the term fk\u2212i when 1 \u2264i < k. From (3.70) and (3.73), we deduce \u2202tGk = ktk\u22121 fk + tk\u2202tfk + k\u22121 X i=1 \u039bi Bk i (k \u2212i) tk\u2212i\u22121 fk\u2212i + k X i=1 \u039bi Bk i tk\u2212i \u2202tfk\u2212i (3.74) = ktk\u22121 fk + 1 4\u2225\u2126\u2225\u03c9 tk \u0010 2\u2206Rfk \u2212fk+1 + CA fk + CA3 t\u2212k\u0011 + k\u22121 X i=1 \u039bi Bk i (k \u2212i) tk\u2212i\u22121 fk\u2212i + 1 4\u2225\u2126\u2225\u03c9 k X i=1 \u039bi Bk i tk\u2212i \u0010 2\u2206Rfk\u2212i \u2212fk\u2212i+1 + CA3 t\u2212(k\u2212i)\u0011 = 1 4\u2225\u2126\u2225\u03c9 2\u2206RGk \u2212 1 4\u2225\u2126\u2225\u03c9 tk fk+1 + tk\u22121fk k + CA t 4|\u2126|\u03c9 ! + 1 4\u2225\u2126\u2225\u03c9 CA3 1 + k X i=1 \u039bi Bk i ! + k\u22121 X i=1 \u039bi Bk i (k \u2212i) tk\u2212i\u22121 fk\u2212i \u2212 1 4\u2225\u2126\u2225\u03c9 k X i=1 \u039bi Bk i tk\u2212i fk\u2212i+1 \u2264 1 4\u2225\u2126\u2225\u03c9 2\u2206RGk + tk\u22121fk k + CA t 4\u2225\u2126\u2225\u03c9 \u2212\u039b1 Bk 1 4\u2225\u2126\u2225\u03c9 ! + 1 4\u2225\u2126\u2225\u03c9 CA3 + k\u22121 X i=1 \u039bi Bk i (k \u2212i) tk\u2212i\u22121 fk\u2212i \u2212 1 4\u2225\u2126\u2225\u03c9 k X i=2 \u039bi Bk i tk\u2212i fk\u2212i+1 We note that the last two terms can be re-written as k\u22121 X i=1 \u039bi Bk i (k \u2212i) tk\u2212i\u22121 fk\u2212i \u2212 1 4\u2225\u2126\u2225\u03c9 k X i=2 \u039bi Bk i tk\u2212i fk\u2212i+1 (3.75) = k\u22121 X i=1 \u039bi Bk i (k \u2212i) \u2212 1 4\u2225\u2126\u2225\u03c9 \u039bi+1 Bk i+1 ! tk\u2212i\u22121 fk\u2212i = k\u22121 X i=1 \u039bi \u2212 1 4\u2225\u2126\u2225\u03c9 \u039bi+1 ! Bk i+1 tk\u2212i\u22121 fk\u2212i 32 \fThus, we obtain \u2202tGk \u2264 1 4\u2225\u2126\u2225\u03c9 2\u2206RGk + tk\u22121fk k + CA t 4\u2225\u2126\u2225\u03c9 \u2212\u039b1 Bk 1 4\u2225\u2126\u2225\u03c9 ! + 1 4\u2225\u2126\u2225\u03c9 CA3 (3.76) + k\u22121 X i=1 \u039bi \u2212 1 4\u2225\u2126\u2225\u03c9 \u039bi+1 ! 
Bk i+1 tk\u2212i\u22121 fk\u2212i Choosing \u039b1 large enough and \u039bi \u2264 1 4\u2225\u2126\u2225\u03c9 \u039bi+1 for 1 \u2264i \u2264k \u22121, we have \u2202tGk \u2264 1 4\u2225\u2126\u2225\u03c9 (2\u2206RGk + CA3) (3.77) Note that max z\u2208M G(z, 0) = \u039bk Bk kf0 = (k \u22121)! 2 \u039bk (|Rm|2 + |DT|2) \u2264CA2 (3.78) Applying the maximum principle to the inequality satis\ufb01ed by Gk, we have max z\u2208M G(z, t) \u2264CA2 + CA3 t \u2264CA2. (3.79) Finally, we get |DkRm| + |Dk+1T| \u2264CA t\u2212k 2 . (3.80) The proof of Theorem 2 is complete. Q.E.D. 3.3 Doubling estimates for the curvature and torsion Let f(z, t) = |DT|2 \u03c9 + |Rm|2 \u03c9 + |T|4 \u03c9 (3.81) and denote f(t) = maxz\u2208M f(z, t). We can derive a doubling-time estimate for f(t), which roughly says that f(t) cannot blow up quickly. Proposition 1 There is a constant C depending on a lower bound for \u2225\u2126\u2225\u03c9 such that max M \u0010 |DT|2 + |Rm|2 + |T|4\u0011 (t) \u22644 max M \u0010 |DT|2 + |Rm|2 + |T|4\u0011 (0) (3.82) for all t \u2208[0, 1 4Cf 1 2 (0)]. 33 \fProof. The proof is standard and we apply the maximum principle to f(z, t). Recall the evolution equations, by taking k = 0 in (3.29), \u2202t|Rm|2 \u2264 1 2\u2225\u2126\u2225\u03c9 n1 2\u2206R|Rm|2 \u2212|DRm|2 + C|D2T| \u00b7 |Rm| \u00b7 |T| (3.83) +C|DRm| \u00b7 |Rm| \u00b7 |T| + C|DT|2 \u00b7 |Rm| + C|DT| \u00b7 |Rm|2 +C|Rm|3 + C|DT| \u00b7 |Rm| \u00b7 |T|2 + C|Rm|2 \u00b7 |T|2 + C|Rm| \u00b7 |T|4o We apply the Young\u2019s inequalities and get \u2202t|Rm|2 \u2264 1 2\u2225\u2126\u2225\u03c9 n1 2\u2206R|Rm|2 \u22121 2|DRm|2 + 1 2|D2T|2 + C \u0010 |DT|3 + |Rm|3 + |T|6\u0011 o (3.84) Similarly, considering the evolution equation for |DT|2 and |T|2, we can derive \u2202t|\u2207T|2 \u2264 1 2\u2225\u2126\u2225\u03c9 n1 2\u2206R|DT|2 \u22121 2|D2T|2 + 1 2|DRm|2 + C \u0010 |DT|3 + |Rm|3 + |T|6\u0011 o (3.85) and \u2202t|T|4 \u2264 1 2\u2225\u2126\u2225\u03c9 n1 2\u2206R|T|4 + C \u0010 |DT|3 + |Rm|3 + |T|6\u0011 o (3.86) Putting the above evolution equations together, we have \u2202tf(z, t) \u2264 1 2\u2225\u2126\u2225\u03c9 n1 2\u2206Rf + C \u0010 |DT|3 + |Rm|3 + |T|6\u0011 o (3.87) \u2264 1 2\u2225\u2126\u2225\u03c9 \u00121 2\u2206Rf + Cf 3 2 \u0013 Finally, by the maximum principle, we have \u2202tf(t) \u2264 C 2\u2225\u2126\u2225\u03c9 f 3 2 (3.88) which implies that f(t) \u2264 f(0) \u0010 1 \u22122C f 1 2(0) t \u00112 (3.89) Thus, as long as the \ufb02ow exists and t \u2264 1\u22121 A 2Cf 1 2 (0), we have f(t) \u2264A2f(0). Q.E.D. 34 \f3.4 A criterion for the long-time existence of the \ufb02ow We can give now the proof of Theorem 3. We begin by observing that, under the given hypotheses, the metrics \u03c9(t) are uniformly equivalent for t \u2208(T \u2212\u03b4, T). Our goal is to show that the metrics are uniformly bounded in C\u221efor some interval t \u2208(T \u2212\u03b4, T). This would imply the existence of the limit \u03c9(T) of a subsequence \u03c9(tj) with tj \u2192T. By the short-time existence theorem for the Anomaly \ufb02ow proved in [22], it follows that the \ufb02ow extends to [0, T + \u01eb) for some \u01eb > 0. 3.4.1 C1 bounds for the metric We need to establish the C\u221econvergence of (subsequence of) the metrics g\u00af kj(t) as t \u2192 T. We have already noted the C0 uniform boundedness of g\u00af kj(t). In this section, we establish the C1 bounds. For this, we \ufb01x a reference metric \u02c6 g\u00af kj and introduce the relative endomorphism hj m(t) = \u02c6 gj \u00af pg\u00af pm(t). 
(3.90) The uniform C0 bound of g\u00af kj(t) is equivalent to the C0 bound of h(t). We need to estimate the derivatives of h(t). For this, recall the curvature relation between two di\ufb00erent metrics g\u00af kj(t) and \u02c6 g\u00af kj, R\u00af kj p m = \u02c6 Rp \u00af kjm \u2212\u2202\u00af k(hp q \u02c6 \u2207jhp m) (3.91) where \u02c6 \u2207denotes the covariant derivative with respect to \u02c6 g\u00af kj. This relation can be viewed as a second order PDE in h, with bounded right hand sides because the curvature R\u00af kj pm is assumed to be bounded, and which is uniformly elliptic because the metrics g\u00af kj(t) are uniformly equivalent (and hence the relative endomorphisms h(t) are uniformly bounded away from 0 and \u221e). It follows that \u2225h\u2225C1,\u03b1 \u2264C. (3.92) 3.4.2 Ck bounds for the metric We will use the notation Gk for the summation of norms squared of all combinations of \u02c6 \u2207m \u02c6 \u2207\u2113acting on g such that m + \u2113= k. For example, G2 = | \u02c6 \u2207\u02c6 \u2207g|2 + | \u02c6 \u2207\u02c6 \u2207g|2 + | \u02c6 \u2207\u02c6 \u2207g|2. (3.93) We introduce the tensor \u0398k ij = \u2212gk\u00af \u2113\u02c6 \u2207ig\u00af \u2113j, (3.94) which is the di\ufb00erence of the background connection and the evolving connection: \u0398 = \u03930\u2212\u0393. We will use the notation Sk for the summation of norms squared of all combinations of \u2207m\u2207\u2113acting on \u0398 such that m + \u2113= k. For example, S2 = |\u2207\u2207\u0398|2 + |\u2207\u2207\u0398|2 + |\u2207\u2207\u0398|2. (3.95) 35 \fOur evolution equation is \u2202tg\u00af pq = 1 2\u2225\u2126\u2225\u03c9 \u03a8\u00af pq, (3.96) where \u03a8\u00af pq = \u2212\u02dc R\u00af pq + g\u03b1\u00af \u03b2gs\u00af rT\u00af \u03b2sq \u00af T\u03b1\u00af r\u00af p. Proposition 2 Suppose all covariant derivatives of curvature and torsion of g(t) with respect to the evolving connection \u2207are bounded on [0, T). Then all covariant derivatives of \u03a6\u00af pq 2\u2225\u2126\u2225\u03c9 with respect to the evolving connection \u2207are bounded on [0, T). Proof: Compute \u2207m\u2207 \u2113\u0012 \u03a8\u00af pq 2\u2225\u2126\u2225\u03c9 \u0013 = 1 2 X i\u2264m X j\u2264\u2113 \u2207i\u2207 j\u0012 1 \u2225\u2126\u2225\u03c9 \u0013 \u2207m\u2212i\u2207 \u2113\u2212j\u03a8\u00af pq. (3.97) We have \u2207i\u2207 j\u0012 1 \u2225\u2126\u2225\u03c9 \u0013 = \u2212\u2207i\u2207 j\u22121 T \u2225\u2126\u2225\u03c9 ! = 1 \u2225\u2126\u2225\u03c9 X \u2207i1\u2207 i2T i3 \u2217\u2207i4\u2207 i5T i6 \u2217T i7 \u2217T i8. (3.98) Since \u03a8 is written in terms of curvature and torsion, and \u2225\u2126\u2225\u03c9 has a lower bound, the proposition follows. Q.E.D. Proposition 3 Suppose all covariant derivatives of curvature and torsion of g(t) with respect to the evolving connection \u2207are bounded on [0, T). If Gi \u2264C and Si\u22121 \u2264C for all non-negative integers i \u2264k, then Gk+1 \u2264C and Sk \u2264C on [0, T). Proof: By the previous proposition, all covariant derivatives of \u03a8\u00af pq 2\u2225\u2126\u2225\u03c9 with respect to the evolving connection \u2207are bounded on [0, T). 
Let m + \u2113= k + 1, and compute \u02c6 \u2207m \u02c6 \u2207\u2113\u03a8\u00af pq 2\u2225\u2126\u2225\u03c9 = (\u2207+ \u0398)m(\u2207+ \u0398)\u2113\u03a8\u00af pq 2\u2225\u2126\u2225\u03c9 = \u2207m\u2207 \u2113\u22121\u0012 \u0398 \u03a8\u00af pq 2\u2225\u2126\u2225\u03c9 \u0013 + O(1) = \u2207m\u2207 \u2113\u22121\u0398 \u00b7 \u03a8\u00af pq 2\u2225\u2126\u2225\u03c9 + O(1), (3.99) where O(1) represents terms which involve evolving covariant derivatives of \u03a8\u00af pq 2\u2225\u2126\u2225\u03c9 and up to (k \u22121)th order evolving covariant derivatives of \u0398, which are bounded by assumption. If \u2113= 0, the right-hand side is replaced by \u2207m\u22121\u0398 \u00b7 \u03a8\u00af pq 2\u2225\u2126\u2225\u03c9 . Next, we compute \u2207m\u2207 \u2113\u22121\u0398 \u00af k \u00af i\u00af j = \u2212g\u2113\u00af k\u2207m\u2207 \u2113\u22121 \u02c6 \u2207\u00af ig\u00af j\u2113 = \u2212g\u2113\u00af k( \u02c6 \u2207\u2212\u0398)m( \u02c6 \u2207\u2212\u0398)\u2113\u22121 \u02c6 \u2207\u00af ig\u00af j\u2113 = \u2212g\u2113\u00af k \u02c6 \u2207m \u02c6 \u2207\u2113\u22121 \u02c6 \u2207\u00af ig\u00af j\u2113+ O(1). (3.100) 36 \fIt follows that \f \f \f \f \u02c6 \u2207m \u02c6 \u2207\u2113 \u03a8\u00af pq 2\u2225\u2126\u2225\u03c9 \f \f \f \f \u2264C \u0012 1 + | \u02c6 \u2207m \u02c6 \u2207\u2113g| \u0013 . (3.101) By di\ufb00erentiating the evolution equation and using the above estimate, we have \u2202t| \u02c6 \u2207m \u02c6 \u2207\u2113g|2 \u02c6 g \u2264C \u0012 1 + | \u02c6 \u2207m \u02c6 \u2207\u2113g|2 \u02c6 g \u0013 , (3.102) hence | \u02c6 \u2207m \u02c6 \u2207\u2113g| has exponential growth. This proves Gk+1 \u2264C. Then Sk \u2264C now follows from (3.100), since \u2207m\u2207 \u2113\u0398 = \u2207 m\u2207\u2113\u0398 and we can exchange evolving covariant derivatives up to bounded terms. Q.E.D. By the C1 bound on the metric, we have G1 \u2264C. We see that S0 = |\u0398| \u2264C by de\ufb01nition of \u0398. Hence we can apply the previous proposition to deduce any estimate of the form | \u02c6 \u2207m \u02c6 \u2207\u2113g| \u2264C. (3.103) By di\ufb00erentiating the evolution equation with respect to time, we obtain \u2202i t \u02c6 \u2207m \u02c6 \u2207\u2113g = \u02c6 \u2207m \u02c6 \u2207\u2113\u2202i t \u0012 \u03a8\u00af pq 2\u2225\u2126\u2225\u03c9 \u0013 . (3.104) Time derivatives of \u03a8\u00af pq 2\u2225\u2126\u2225\u03c9 can be expressed as time derivatives of connections, curvature and torsion, which in previous sections have been written as covariant derivatives of curvature and torsion. It follows that \u02c6 \u2207m \u02c6 \u2207\u2113\u2202i t \u0012 \u03a8\u00af pq 2\u2225\u2126\u2225\u03c9 \u0013 can be written in terms of evolving covariant derivatives of curvature and torsion, and hence is bounded. Therefore \f \f \f \f\u2202i t \u02c6 \u2207m \u02c6 \u2207\u2113g \f \f \f \f \u2264C, (3.105) on [0, T). Q.E.D. 4 Appendix A Conventions for di\ufb00erential forms Let \u03d5 be a (p, q)-form on the manifold X. We de\ufb01ne its components \u03d5\u00af k1\u00b7\u00b7\u00b7\u00af kqj1\u00b7\u00b7\u00b7jp by \u03d5 = 1 p!q! X \u03d5\u00af k1\u00b7\u00b7\u00b7\u00af kqj1\u00b7\u00b7\u00b7jp dzjp \u2227\u00b7 \u00b7 \u00b7 \u2227dzj1 \u2227d\u00af zkq \u2227\u00b7 \u00b7 \u00b7 \u2227d\u00af zk1. 
(A.1) 37 \fAlthough \u03c6 can be expressed in several ways under the above form, we reserve the notation \u03d5\u00af k1\u00b7\u00b7\u00b7\u00af kqj1\u00b7\u00b7\u00b7jp for the uniquely de\ufb01ned coe\ufb03cients \u03d5\u00af k1\u00b7\u00b7\u00b7\u00af kqj1\u00b7\u00b7\u00b7jp which are anti-symmetric under permutation of any two of the barred indices or any two of the unbarred indices. To each Hermitian metric g\u00af kj corresponds a Hermitian, positive (1, 1)-form de\ufb01ned by \u03c9 = ig\u00af kj dzj \u2227d\u00af zk. (A.2) The Hermitian property g\u00af kj = g\u00af jk is then equivalent to the condition \u03c9 = \u03c9. B Conventions for Chern unitary connections Let E \u2192X be a holomorphic vector bundle over a complex manifold X. Let H\u00af \u03b1\u03b2 be a Hermitian metric on E. The Chern unitary connection is de\ufb01ned by \u2207\u00af kV \u03b1 = \u2202\u00af kV \u03b1, \u2207kV \u03b1 = H\u03b1\u00af \u03b3\u2202k(H\u00af \u03b3\u03b2V \u03b2) (B.1) for V \u03b1 any section of E. Its curvature tensor is then de\ufb01ned by [\u2207j, \u2207\u00af k]V \u03b1 = F\u00af kj \u03b1 \u03b2V \u03b2. (B.2) Explicitly, we have \u2207kV \u03b1 = \u2202kV \u03b1 + A\u03b1 k\u03b2V \u03b2, A\u03b1 k\u03b2 = H\u03b1\u00af \u03b3\u2202kH\u00af \u03b3\u03b2 (B.3) and F\u00af kj \u03b1 \u03b2 = \u2212\u2202\u00af kA\u03b1 j\u03b2 = \u2212\u2202\u00af k(H\u03b1\u00af \u03b3\u2202jH\u00af \u03b3\u03b2). (B.4) In particular, when E = T 1,0(X), and g\u00af kj is a Hermitian metric on X, we have the corresponding formulas \u2207\u00af kV p = \u2202\u00af kV p, \u2207kV p = gp \u00af m\u2202k(g \u00af mqV q) [\u2207j, \u2207\u00af k]V p = R\u00af kj p qV q R\u00af kj p q = \u2212\u2202\u00af kAp jq = \u2212\u2202\u00af k(gp \u00af m\u2202jg \u00af mq) (B.5) Our convention for the curvature form Rm is Rm = R\u00af kj p qdzj \u2227d\u00af zk. (B.6) It is the same as in [10, 11], but it di\ufb00ers from that of [30] by a factor of i. When the metric on X has torsion, the commutator identities [\u2207j, \u2207k] for the Chern connections on any holomorphic vector bundle are given by Hence for any tensor A, we have [\u2207j, \u2207k]A = T \u03bb jk\u2207\u03bbA, [\u2207\u00af j, \u2207\u00af k]A = \u00af T \u00af \u03bb\u00af j\u00af k\u2207\u00af \u03bbA. (B.7) 38 \fSome useful examples are \u2207c\u2207a\u2207\u00af bA\u00af ij\u00af k\u2113 = \u2207a\u2207c\u2207\u00af bA\u00af ij\u00af k\u2113\u2212T \u03bb ca\u2207\u03bb\u2207\u00af bA\u00af ij\u00af k\u2113 = \u2207a\u2207\u00af b\u2207cA\u00af ij\u00af k\u2113\u2212T \u03bb ca\u2207\u03bb\u2207\u00af bA\u00af ij\u00af k\u2113 +\u2207a(R\u00af bc\u00af i \u00af \u03bbA\u00af \u03bbj\u00af k\u2113+ R\u00af bc\u00af k \u00af \u03bbA\u00af ij\u00af \u03bb\u2113\u2212R\u00af bc \u03bb jA\u00af i\u03bb\u00af k\u2113\u2212R\u00af bc \u03bb \u2113A\u00af ij\u00af k\u03bb). (B.8) and \u2207c\u2207\u00af d\u2207a\u2207\u00af bA = \u2207c\u2207a\u2207\u00af d\u2207\u00af bA + \u2207(Rm \u2217\u00af \u2207A) = \u2207a\u2207c\u2207\u00af d\u2207\u00af bA + T \u2217\u2207\u00af \u2207\u00af \u2207A + \u2207(Rm \u2217\u00af \u2207A) = \u2207a\u2207c\u2207\u00af b\u2207\u00af dA + \u2207\u2207( \u00af T \u2217\u00af \u2207A) + T \u2217\u2207\u00af \u2207\u00af \u2207A + \u2207(Rm \u2217\u00af \u2207A) = \u2207a\u2207\u00af b\u2207c\u2207\u00af dA + \u2207\u2207( \u00af T \u2217\u00af \u2207A) + T \u2217\u2207\u00af \u2207\u00af \u2207A + \u2207(Rm \u2217\u00af \u2207A). 
(B.9) The general pattern is \u2207(k)\u2207 (\u2113)\u2207a\u2207\u00af bA = \u2207a\u2207\u00af b\u2207(k)\u2207 (\u2113)A + X \u03bd+\u03bb=k X \u00b5+\u03c1=\u2113 \u2207(\u03bd)\u2207 (\u00b5)Rm \u2217\u2207(\u03bb)\u2207 (\u03c1)A + X \u03bd+\u03bb=k X \u00b5+\u03c1=\u2113+1 \u2207(\u03bd)\u2207 (\u00b5)T \u2217\u2207(\u03bb)\u2207 (\u03c1)A + X \u03bd+\u03bb=k+1 X \u00b5+\u03c1=\u2113 \u2207(\u03bd)\u2207 (\u00b5)T \u2217\u2207(\u03bb)\u2207 (\u03c1)A. (B.10) C Identities for non-K\u00a8 ahler metrics When the metric is not K\u00a8 ahler, the integration by parts formula becomes Z X \u2207jV j \u03c9n = Z X(Ap pj \u2212Ap jp)V j \u03c9n = Z X gp\u00af qT\u00af qpj V j \u03c9n. (C.1) It is convenient to introduce Tj = gp\u00af qT\u00af qpj (C.2) so that the above equation becomes Z X \u2207jV j \u03c9n = Z X TjV j \u03c9n. (C.3) C.1 The adjoints \u00af \u2202\u2020 and \u2202\u2020 with torsion Since the signs are crucial, we work out in detail the operators \u00af \u2202\u2020 and \u2202\u2020 on the space \u039b1,1 of (1, 1)-forms. 39 \fConsider \ufb01rst the operator \u00af \u2202: \u039b1,0 \u2192\u039b1,1. Explicitly, \u00af \u2202(fjdzj) = \u2202\u00af kfjd\u00af zk \u2227dzj = \u2212\u2202\u00af kfjdzj \u2227d\u00af zk (C.4) which means that (\u00af \u2202f)\u00af kj = \u2212\u2202\u00af kfj. (C.5) Let \u03a6 = \u03a6\u00af pqdzq \u2227d\u00af zp be a (1, 1)-form. The adjoint \u00af \u2202\u2020 is characterized by the equation \u27e8\u00af \u2202f, \u03a6\u27e9= \u27e8f, \u00af \u2202\u2020\u03a6\u27e9 (C.6) which is equivalent to Z X(\u2212\u2202\u00af kfj)\u03a6\u00af pqgp\u00af kgj\u00af q \u03c9n n! = Z X fj(\u00af \u2202\u2020\u03a6)qgj\u00af q \u03c9n n! . (C.7) Integrating by parts, we \ufb01nd (\u00af \u2202\u2020\u03a6)q = gk\u00af p(\u2207k\u03a6\u00af pq \u2212T j kj\u03a6\u00af pq) = gk\u00af p(\u2207k\u03a6\u00af pq \u2212Tk\u03a6\u00af pq). (C.8) Similarly, we work out \u2202\u2020. For f = f\u00af kd\u00af zk, we have \u2202f = \u2202jf\u00af kdzj \u2227d\u00af zk, so that (\u2202f)\u00af kj = \u2202jf\u00af k. Thus, the equation \u27e8\u2202f, \u03a6\u27e9= \u27e8f, \u2202\u2020\u03a6\u27e9becomes Z X \u2202jf\u00af k\u03a6\u00af pqgp\u00af kgj\u00af q \u03c9n n! = Z X f\u00af k(\u2202\u2020\u03a6)\u00af pgp\u00af k \u03c9n n! . (C.9) This results now into (\u2202\u2020\u03a6)\u00af q = \u2212gp\u00af j(\u2207\u00af j\u03a6\u00af qp \u2212\u00af T\u00af j\u03a6\u00af qp). (C.10) C.2 Bianchi identities for non-K\u00a8 ahler metrics It is well-known that the Riemann curvature tensor of K\u00a8 ahler metrics satis\ufb01es the following important identities R\u00af \u2113m\u00af kj = R\u00af km\u00af \u2113j = R\u00af kj\u00af \u2113m \u2207qR\u00af \u2113m k j = \u2207mR\u00af \u2113q k j, \u2207\u00af pR\u00af \u2113m k j = \u2207\u00af \u2113R\u00af pm k j. 
(C.11) For general Hermitian metrics, these identities become R\u00af \u2113m\u00af kj = R\u00af \u2113j\u00af km + \u2207\u00af \u2113T\u00af kjm R\u00af \u2113m\u00af kj = R\u00af km\u00af \u2113j + \u2207m \u00af Tj\u00af k\u00af \u2113 (C.12) and \u2207mR\u00af kj p q = \u2207jR\u00af km p q + T r jmR\u00af kr p q, \u2207mR\u00af kj \u00af pq = \u2207jR\u00af km\u00af pq + T r jmR\u00af kr\u00af pq \u2207\u00af mR\u00af kj p q = \u2207\u00af kR \u00af mj p q + \u00af T \u00af r\u00af k \u00af mR\u00af rj p q, \u2207\u00af mR\u00af kj \u00af pq = \u2207\u00af kR \u00af mj \u00af pq + \u00af T \u00af r\u00af k \u00af mR\u00af rj \u00af pq (C.13) 40 \fObserve that to interchange, say m and q in the second Bianchi identity for non-K\u00a8 ahler metrics, we have to use the \ufb01rst Bianchi identity and di\ufb00erentiate, resulting into \u2207mR\u00af kj \u00af pq \u2212\u2207qR\u00af kj \u00af pm = \u2207q\u2207\u00af kT\u00af pmj + \u2207m\u2207\u00af kT\u00af pqj + T r qmR\u00af kr\u00af pj. (C.14) The occurrence of D2T on the right hand side is a source of potential di\ufb03culties, so it is desirable not to exchange this type of pairs of indices. Acknowledgements The authors would like to thank the referees for a particularly careful reading of their paper, and in particular for pointing out several typos that could have been quite confusing to the reader." + }, + { + "url": "http://arxiv.org/abs/1602.08838v3", + "title": "The Fu-Yau equation with negative slope parameter", + "abstract": "The Fu-Yau equation is an equation introduced by J. Fu and S.T. Yau as a\ngeneralization to arbitrary dimensions of an ansatz for the Strominger system.\nAs in the Strominger system, it depends on a slope parameter $\\alpha'$. The\nequation was solved in dimension $2$ by Fu and Yau in two successive papers for\n$\\alpha'>0$, and for $\\alpha'<0$. In the present paper, we solve the Fu-Yau\nequation in arbitrary dimension for $\\alpha'<0$. To our knowledge, these are\nthe first non-trivial solutions of the Fu-Yau equation in any dimension\nstrictly greater than $2$.", + "authors": "Duong H. Phong, Sebastien Picard, Xiangwen Zhang", + "published": "2016-02-29", + "updated": "2017-04-10", + "primary_cat": "math.CV", + "cats": [ + "math.CV", + "math.AP", + "math.DG" + ], + "main_content": "Introduction Let (X, \u03c9) be a compact K\u00a8 ahler manifold of dimension n. The Fu-Yau equation with slope parameter \u03b1\u2032 is the following equation for an unknown scalar function u, i\u2202\u00af \u2202(eu\u03c9 \u2212\u03b1\u2032e\u2212u\u03c1) \u2227\u03c9n\u22122 + n\u03b1\u2032i\u2202\u00af \u2202u \u2227i\u2202\u00af \u2202u \u2227\u03c9n\u22122 + \u00b5\u03c9n n! = 0. (1.1) Here \u00b5 : X \u2192R is a smooth function satisfying R X \u00b5 \u03c9n n! = 0, \u03c1 is a smooth real (1, 1) form, and the solution u is required to be admissible, in the sense that the vector \u03bb\u2032 of eigenvalues of the Hermitian form \u03c9\u2032 = eu\u03c9 + \u03b1\u2032e\u2212u\u03c1 + 2n\u03b1\u2032i\u2202\u00af \u2202u, (1.2) with respect to \u03c9, lies in the admissible cone \u03932 de\ufb01ned in (2.3) below. Henceforth, to simplify the notation, we shall just denote \u03b1\u2032 by \u03b1. When \u03b1 = 0, the Fu-Yau equation reduces to a Laplacian equation in eu, so the only non-trivial cases are when \u03b1 is strictly positive or negative. The Fu-Yau equation was solved in dimension dim X = 2 by Fu and Yau in two ground breaking papers, \ufb01rst for \u03b1 > 0 in [15], and then for \u03b1 < 0 in [16]. 
The main goal of the present paper is to prove the following theorem:

Theorem 1. Let $\alpha < 0$. Then for any dimension $n \ge 2$, any smooth $(1,1)$-form $\rho$ and any smooth function $\mu$ satisfying the condition $\int_X \mu\,\omega^n = 0$, there exists a constant $M'$ so that, for all $M_0 \ge M'$, there exists a smooth admissible solution $u$ to the Fu-Yau equation (1.1) with normalization
$$\int_X e^u = M_0. \qquad (1.3)$$

[Footnote 1: Work supported in part by the National Science Foundation under Grant DMS-1266033, DMS-1605968 and DMS-1308136. Keywords: Hessian equations with gradient terms; symmetric functions of eigenvalues; Moser iteration; maximum principles.]

The Fu-Yau equation in general dimension $n$ and with $\alpha > 0$ has been studied in [29]. All the basic a priori estimates needed for a solution by the method of continuity had been derived there, except for a lower bound on the second symmetric function of the eigenvalues of $\omega'$. As a consequence, whether the equation is solvable for $\alpha > 0$ and $n > 2$ is still an open question at this time. It is an intriguing question whether the distinct behavior of the equations with $\alpha > 0$ and $\alpha < 0$ is indicative of a significant geometric difference in the metrics defined by the two equations.

The Fu-Yau equation is motivated by the fact that, in dimension $n = 2$, as shown in [15], it is equivalent to the Strominger system for a certain class of 3-dimensional manifolds constructed by Goldstein and Prokushkin [18]. In higher dimensions, it corresponds to an interesting modification of the Strominger system. More precisely, let $M$ be an $(n+1)$-dimensional complex manifold, equipped with a nowhere vanishing holomorphic $(n+1,0)$ form $\Omega$. Let $E \to M$ be a holomorphic vector bundle with Hermitian metric $H$. The modified Strominger system, as proposed by Fu and Yau [15], is the following system for a metric $\omega$ on $M$ and a Hermitian metric $H$ on $E$,
$$F_H \wedge \omega^n = 0, \qquad F_H^{2,0} = F_H^{0,2} = 0 \qquad (1.4)$$
$$\Big\{\, i\partial\bar\partial\omega - \frac{\alpha}{4}\big(\mathrm{Tr}\, R \wedge R - \mathrm{Tr}\, F_H \wedge F_H\big) \Big\} \wedge \omega^{n-1} = 0, \qquad (1.5)$$
$$d\Big( \|\Omega\|_\omega^{\frac{2(n-1)}{n}}\, \omega^n \Big) = 0. \qquad (1.6)$$
Here $F_H$ is the curvature of the bundle $E$ with respect to the metric $H$ and $R$ is the Riemann curvature tensor of the Chern connection of the metric $\omega$, viewed as $(1,1)$-forms valued in the endomorphisms of $E$ and $T^{1,0}(M)$ respectively. Note that a natural extension to arbitrary dimensions of the Strominger system may have been with the power $\omega^{n-2}$ in the equation (1.5). The power $\omega^{n-1}$ above is motivated by the requirement that the modified Strominger system be equivalent to the Fu-Yau equation when restricted to Goldstein-Prokushkin fibrations (see Footnote 2 below). When $n+1 = 3$, which is the case of particular interest in string theory, both powers $\omega^{n-1}$ and $\omega^{n-2}$ lead to the same Fu-Yau equation when restricted to Goldstein-Prokushkin fibrations. As observed by Li and Yau [24], in dimension $n+1 = 3$ the equation (1.6) can be replaced by the equation $d^\dagger\omega = i(\bar\partial - \partial)\ln\|\Omega\|_\omega$, which is the original form written down by Strominger [31]. Strominger's original motivation was from string theory, and the system he proposed would guarantee the $N = 1$ supersymmetry of the heterotic string compactified to Minkowski space-time by a 3-dimensional internal space $M$. For this, we would take $n+1 = 3$, and the slope $\alpha$ is positive.
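A quick degree count (our remark, not in the original text) confirms that the powers in (1.5) are consistent on the $(n+1)$-dimensional manifold $M$: both $i\partial\bar\partial\omega$ and the trace terms $\mathrm{Tr}\,R\wedge R$, $\mathrm{Tr}\,F_H\wedge F_H$ are $(2,2)$-forms, so
$$\big(\text{a } (2,2)\text{-form}\big) \wedge \omega^{n-1} \in \Lambda^{n+1,n+1}(M),$$
which is a top-degree form on $M$, as an equation of $(n+1,n+1)$-forms requires.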
Nevertheless, the Strominger systems and their modifications for general values of $n$ and $\alpha$ are compelling systems of considerable geometric interest, as they unify in a natural way two basic equations of complex geometry, namely the Hermitian-Einstein equation for holomorphic vector bundles and a generalization of the Ricci-flat equation. In particular, the equations (1.5) and (1.6) (for fixed metric $H$ on $E$) can then be viewed as legitimate non-Kähler alternatives to the canonical metrics of Kähler geometry.

[Footnote 2: In [15], there appears to be a misprint, with the power of $\omega$ in (1.5) written as $n-2$. The authors are grateful to Li-Sheng Tseng for pointing out to them the correct power $n-1$.]

The way a solution of the Fu-Yau equation would give rise to a solution of the Strominger system is a key contribution of Fu and Yau [15, 16], based on an earlier geometric construction of Goldstein and Prokushkin [18]. Recall that the Goldstein-Prokushkin construction associates a toric fibration $\pi : M \to X$ to a compact Calabi-Yau manifold $(X, \omega_X)$ of dimension $n$ with nowhere vanishing holomorphic $n$-form $\Omega_X$ and two primitive harmonic forms $\frac{\omega_1}{2\pi}, \frac{\omega_2}{2\pi} \in H^2(X, \mathbf{Z})$. Furthermore, there is a $(1,0)$-form $\theta$ on $M$ so that $\Omega = \Omega_X \wedge \theta$ is a nowhere vanishing holomorphic $(n+1)$-form on $M$, and
$$\omega_u = \pi^*(e^u \omega_X) + i\theta \wedge \bar\theta \qquad (1.7)$$
is a metric satisfying the balanced condition (1.6) for any scalar function $u$ on $X$. Let $E \to X$ be a stable holomorphic vector bundle of degree 0, and let $H$ be a Hermitian-Einstein metric on $E$, which exists by the Donaldson-Uhlenbeck-Yau theorem. Let $\pi^*(E)$, $\pi^*(H)$ be their pull-backs to $M$. The equations (1.4) and (1.6) are now satisfied. It is then shown by Fu and Yau [15, 16] that the last equation (1.5) in the Strominger system for the system $(\pi^*E, \pi^*H, M, \omega_u)$ is satisfied if and only if $u$ satisfies the Fu-Yau equation (1.1) on the manifold $X$, for suitable $\rho$ and $\mu$ given explicitly by
$$\mu\,\frac{\omega_X^n}{n!} = \frac{(n-2)!}{2}\big(\|\omega_1\|^2_{\omega_X} + \|\omega_2\|^2_{\omega_X}\big)\frac{\omega_X^n}{n!} + \frac{\alpha}{4}\,\mathrm{Tr}\big(F_H \wedge F_H - R_X \wedge R_X\big) \wedge \omega_X^{n-2}. \qquad (1.8)$$
The solvability condition $\int_X \mu\,\frac{\omega_X^n}{n!} = 0$ of the Fu-Yau equation can be viewed as a cohomological condition on $X$, $E$, $\omega_1$, $\omega_2$ and the slope parameter $\alpha$. Applying this construction of Fu and Yau, we obtain, as an immediate corollary of Theorem 1:

Theorem 2. Let $(X, \omega_X)$ be an $n$-dimensional Calabi-Yau manifold, equipped with a nowhere vanishing holomorphic $n$-form $\Omega$ ($n \ge 2$). Let $\frac{\omega_1}{2\pi}, \frac{\omega_2}{2\pi} \in H^{1,1}(X, \mathbf{Z})$ be primitive harmonic forms. Let $E \to X$ be a stable bundle with Hermitian-Einstein metric $H$. Assume that $\alpha < 0$, and the cohomological condition $\int_X \mu\,\frac{\omega_X^n}{n!} = 0$ is satisfied. Then the modified Strominger system (1.4), (1.5), (1.6) admits a smooth solution of the form $(\pi^*E, \pi^*H, M, \omega_u)$.

Following Fu-Yau [15, 16], we can construct many examples of data $(X, E, \omega_1, \omega_2, \alpha)$ satisfying the cohomological condition $\int_X \mu\,\frac{\omega_X^n}{n!} = 0$ with $\alpha < 0$ in higher dimension, and this is illustrated in §7. Thus Theorem 2 provides the first known solutions of the Strominger system in higher dimensions by the Fu-Yau ansatz.
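Since the first term on the right-hand side of (1.8) is pointwise nonnegative, the solvability condition carries a sign constraint (this consequence is our observation, with the normalizations of (1.8)): integrating (1.8) over $X$ and setting the result to zero forces
$$\frac{\alpha}{4}\int_X \mathrm{Tr}\big(F_H \wedge F_H - R_X \wedge R_X\big)\wedge\omega_X^{n-2} = -\frac{(n-2)!}{2}\int_X \big(\|\omega_1\|^2_{\omega_X} + \|\omega_2\|^2_{\omega_X}\big)\frac{\omega_X^n}{n!} \le 0,$$
so for $\alpha < 0$ the characteristic integral $\int_X \mathrm{Tr}(F_H\wedge F_H - R_X\wedge R_X)\wedge\omega_X^{n-2}$ must be nonnegative, a condition of second-Chern-class type on the data.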
In section §7, we shall exhibit a specific example due to Fu and Yau [16] with $\alpha = -2$ and $\mu = 0$. Other, more geometric, constructions of solutions to the Strominger system have also been provided in [1, 9, 10, 11, 12, 13, 14, 19, 37].

Besides its occurrence in geometry and physics, the Fu-Yau equation (1.1) is also interesting from the point of view of the theory of fully non-linear elliptic partial differential equations. In dimension $n = 2$, it is a complex Monge-Ampère equation, and the natural ellipticity condition is that the form $\omega'$ defined in (1.2) be positive-definite. But in dimension $n > 2$, it is actually a complex 2-Hessian equation, with the ellipticity condition given by the condition that the eigenvalues of $\omega'$ be in the cone $\Gamma_2$. In fact, as worked out in detail in §4.1, it can be rewritten as
$$\frac{(\omega')^2 \wedge \omega^{n-2}}{\omega^n} = \frac{n(n-1)}{2}\big(e^{2u} - 4\alpha e^u |Du|^2\big) + \nu, \qquad (1.9)$$
where $\omega'$ is the Hermitian $(1,1)$ form given in (1.2), and $\nu$ is a function depending on $u$, $Du$, $\mu$ and $\rho$, given explicitly in (4.6) below. Complex Hessian equations on compact manifolds have been studied extensively by many authors in recent years, see for example [3, 7, 8, 21, 22, 23, 25, 26, 32, 33, 34, 39, 40]. However, in comparison with the previous works, a crucial new feature of the equation here is the dependence of the right hand side on the gradient $Du$ of the unknown function $u$.

The proof of Theorem 1 is by the method of continuity, and the main task is to derive the a priori estimates. We now describe briefly some of the innovations required in the derivation of these estimates. In the original papers of Fu-Yau [15, 16], the $C^0$ estimate for equation (1.1) was proved using two different arguments depending on the sign of $\alpha$. Here we provide a unified approach. We also impose a simpler normalization condition on $\int_X e^u$ instead of on $\int_X e^{-pu}$, with $p$ depending on dimension $n$, as in [15, 16]. This simpler normalization arises naturally in the study of a parabolic version of the Fu-Yau equation, which is a reduction by the Goldstein-Prokushkin construction of a geometric flow on $(2,2)$ forms introduced by the authors [29] and called the anomaly flow. The anomaly flow preserves the balanced condition of metrics, and its stationary points satisfy the anomaly equation of the Strominger system. It can be shown that $\int_X e^u$ is constant along the flow.

The $C^1$ estimate uses the blow-up and Liouville theorem technique of Dinew-Kolodziej [7]. To adapt this argument, we need to show that there is a uniform constant $C$, depending only on $\omega$, $\rho$, $\mu$ and $\alpha$, such that
$$\sup_X |\partial\bar\partial u|_\omega \le C\big(1 + \sup_X |Du|^2_\omega\big). \qquad (1.10)$$
For standard complex Hessian equations on compact Kähler manifolds, this $C^2$ estimate was obtained by Hou-Ma-Wu [22]. It is worth mentioning that such a $C^2$ estimate combined with a blow-up argument has been a key ingredient in the solvability of several equations in complex geometry. For example, this type of estimate can be obtained for the form-type Monge-Ampère equation occurring in the proof of the Gauduchon conjecture [34, 35], and the dHYM equation [5] motivated from mirror symmetry. In this paper, we establish the $C^2$ estimate (1.10) for the Fu-Yau equation (1.1). The proof is quite different from the proof given by Fu and Yau [15, 16] for the case $n = 2$.
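For orientation, it may be worth recording the two-dimensional case explicitly (our specialization: set $n = 2$ in (1.1), so that $\omega^{n-2} = 1$): the equation becomes
$$i\partial\bar\partial\big(e^u\omega - \alpha e^{-u}\rho\big) + 2\alpha\, i\partial\bar\partial u \wedge i\partial\bar\partial u + \mu\,\frac{\omega^2}{2} = 0,$$
which is of complex Monge-Ampère type. Moreover, for $n = 2$ the cone $\Gamma_2$ (see (2.3) below) is $\{\lambda_1 + \lambda_2 > 0,\ \lambda_1\lambda_2 > 0\}$, which is exactly the positive orthant, recovering the statement that admissibility then amounts to $\omega' > 0$.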
To obtain estimate (1.10), the major obstacle in our case is the presence of gradient terms such as $e^u|Du|^2$. Indeed, if we allow the constant $C$ to depend on the gradient of $u$, the $C^2$ estimate for complex Hessian equations with gradient terms was established in [28] under a stronger cone condition, building on the techniques developed by Guan-Ren-Wang [20] for the real Hessian equations. However, the argument is not completely applicable here, because we need to get estimate (1.10) with $C$ independent of the gradient in order to get subsequently the $C^1$ estimate by blow-up arguments. In this paper, we exploit the precise form of the gradient terms to obtain a cancellation of the $\sum |u_{ip}|^2$ terms arising from differentiating the right-hand side, as seen in Lemma 1 in section §5.

We would like to stress that the estimates here are also quite different from the case $\alpha > 0$, which was studied in our previous work [27]. When $\alpha > 0$, the $C^1$ estimate easily follows from the $C^0$ estimate and ellipticity of the equation. However, a new difficulty arises, which is that the equation may become degenerate. This would happen if the right-hand side $c_n(e^{2u} - 4\alpha e^u|Du|^2)$ of the equation (1.9) tends to 0. This is not possible if $\alpha$ is negative, but it cannot be ruled out at the outset if $\alpha$ is positive. In [27], we reduced the solvability of the Fu-Yau equation with $\alpha > 0$ to a non-degeneracy estimate. In fact, this non-degeneracy estimate is equivalent to a strong Fu-Yau type gradient estimate, which was obtained in [15] for $n = 2$. It is still not known whether it holds for general $n$. In any case, a proof will certainly require a new method.

The paper is organized as follows. We give a general setup for the method of continuity in section §2 and establish the a priori estimates in sections §3, §4, §5. In section §6, we give the proof of Theorem 1. In section §7, we follow Fu-Yau's construction to give a solution with $\alpha = -2$ to the modified Strominger system. Finally, in section §8, we propose another possible generalization of the Strominger system to higher dimensions which makes use of higher Chern classes instead.

2 The Continuity Method

We shall work on a compact Kähler manifold $(X, \omega)$ of dimension $n \ge 2$. We use the notation $\omega = \sum g_{\bar kj}\, idz^j \wedge d\bar z^k$ and $\rho = \sum \rho_{\bar kj}\, idz^j \wedge d\bar z^k$, and we normalize the volume $\mathrm{Vol}(X, \omega) = \int_X \frac{\omega^n}{n!}$ to be 1. We denote by $D$ the covariant derivatives with respect to the background metric $\omega$. All norms and inner products are with respect to the background metric $\omega$ unless denoted otherwise.

2.1 The set-up for the continuity method

We shall solve the Fu-Yau equation using the continuity method. For a real parameter $t$, we consider
$$i\partial\bar\partial(e^u\omega - t\alpha e^{-u}\rho) \wedge \omega^{n-2} + n\alpha\, i\partial\bar\partial u \wedge i\partial\bar\partial u \wedge \omega^{n-2} + t\mu\,\frac{\omega^n}{n!} = 0. \qquad (2.1)$$
Define $\lambda'_{(t,u)}$ to be the eigenvalues of
$$(g'_{(t,u)})_{\bar jk} = e^u g_{\bar jk} + t\alpha e^{-u}\rho_{\bar jk} + 2n\alpha\, u_{\bar jk} \qquad (2.2)$$
with respect to the background Kähler metric $\omega$. The tensor $(g'_{(t,u)})$ is relevant because the ellipticity condition of (2.1) is $\lambda'_{(t,u)} \in \Gamma_2$, where
$$\Gamma_2 = \{\lambda \in \mathbf{R}^n;\ \sigma_1(\lambda) > 0,\ \sigma_2(\lambda) > 0\}. \qquad (2.3)$$
Here $\sigma_k(\lambda)$ is the $k$-th symmetric function of $\lambda$, defined to be
$$\sigma_k(\lambda) = \sum_{1 \le j_1 < \cdots < j_k \le n} \lambda_{j_1}\cdots\lambda_{j_k}. \qquad (2.4)$$
Let $M_0 > 0$ be a constant which will eventually be taken to be very large. For $0 < \gamma < 1$, we define the following function spaces
$$B_M = \Big\{u \in C^{2,\gamma}(X, \mathbf{R}) : \int_X e^u = M_0\Big\}, \qquad (2.5)$$
$$B_1 = \big\{(t, u) \in [0, 1] \times B_M : \lambda'_{(t,u)} \in \Gamma_2\big\}, \qquad (2.6)$$
$$B_2 = \Big\{\psi \in C^\gamma(X, \mathbf{R}) : \int_X \psi = 0\Big\}. \qquad (2.7)$$
Consider the operator $\Psi : B_1 \to B_2$ defined by
$$\Psi(t, u)\,\frac{\omega^n}{n!} = i\partial\bar\partial(e^u\omega - t\alpha e^{-u}\rho) \wedge \omega^{n-2} + n\alpha\, i\partial\bar\partial u \wedge i\partial\bar\partial u \wedge \omega^{n-2} + t\mu\,\frac{\omega^n}{n!}. \qquad (2.8)$$
Define the set
$$I = \{t \in [0, 1] : \text{there exists } u \in B_M \text{ such that } (t, u) \in B_1 \text{ and } \Psi(t, u) = 0\}. \qquad (2.9)$$
We note that $0 \in I$ due to the trivial solution $u_0 = \log M_0$. The goal is to show that $I$ is both open and closed. In the remaining part of this section, we show that $I$ is open.

2.2 Proof of the openness of I

Suppose $\hat t \in I$. Then there exists $\hat u \in B_M$ such that $\Psi(\hat t, \hat u) = 0$. We wish to use the implicit function theorem to solve $\Psi(t, u_t)$ for $t$ close to $\hat t$. We compute the linearized operator at $\hat u$ to be
$$(D_u\Psi)_{(\hat t,\hat u)} : \Big\{h \in C^{2,\gamma}(X, \mathbf{R}) : \int_X h e^{\hat u} = 0\Big\} \to \Big\{\psi \in C^\gamma(X, \mathbf{R}) : \int_X \psi = 0\Big\}, \qquad (D_u\Psi)_{(\hat t,\hat u)} = L, \qquad (2.10)$$
with
$$L(h)\,\frac{\omega^n}{n!} = i\partial\bar\partial(h e^{\hat u}\omega + \hat t\alpha h e^{-\hat u}\rho) \wedge \omega^{n-2} + 2n\alpha\, i\partial\bar\partial\hat u \wedge i\partial\bar\partial h \wedge \omega^{n-2}. \qquad (2.11)$$
Expanding terms gives
$$L(h) = (n-2)!\, g^{i\bar k}g^{p\bar j}\tilde g_{\bar kp} D_iD_{\bar j}h + 2\,\mathrm{Re}\Big\{\frac{i\partial h \wedge \bar\partial(e^{\hat u}\omega + \hat t\alpha e^{-\hat u}\rho) \wedge \omega^{n-2}}{(n!)^{-1}\omega^n}\Big\} + \Big\{\frac{i\partial\bar\partial(e^{\hat u}\omega + \hat t\alpha e^{-\hat u}\rho) \wedge \omega^{n-2}}{(n!)^{-1}\omega^n}\Big\}\,h, \qquad (2.12)$$
where $\tilde g_{\bar kp} = (g^{a\bar b}(g'_{(\hat t,\hat u)})_{\bar ba})\,g_{\bar kp} - (g'_{(\hat t,\hat u)})_{\bar kp} > 0$ since $\lambda'_{(\hat t,\hat u)} \in \Gamma_2$. We see that $L$ is a second order linear elliptic operator on $X$, and the deformation $tL + (1-t)\Delta$ shows that $L$ has the same index as the Laplacian, namely index 0. We shall compute the adjoint $L^*$ with respect to the $L^2$ inner product with volume $\frac{\omega^n}{n!}$. We integrate by parts to obtain
$$\int_X \psi\, L(h)\,\frac{\omega^n}{n!} = \int_X \psi\, i\partial\bar\partial(h e^{\hat u}\omega + \hat t\alpha h e^{-\hat u}\rho) \wedge \omega^{n-2} + 2n\alpha \int_X \psi\, i\partial\bar\partial\hat u \wedge i\partial\bar\partial h \wedge \omega^{n-2}$$
$$= \int_X h\,\{e^{\hat u}\omega + \hat t\alpha e^{-\hat u}\rho + 2n\alpha\, i\partial\bar\partial\hat u\} \wedge i\partial\bar\partial\psi \wedge \omega^{n-2} = (n-2)! \int_X h\, g^{i\bar k}g^{p\bar j}\tilde g_{\bar kp} D_iD_{\bar j}\psi\,\frac{\omega^n}{n!}. \qquad (2.13)$$
It follows that $L^* = (n-2)!\, g^{i\bar k}g^{p\bar j}\tilde g_{\bar kp} D_iD_{\bar j}$. By the strong maximum principle, the kernel of $L^*$ is the constant functions. Again by the strong maximum principle, a non-zero function in the image of $L^*$ must change sign.
Since L\u2217has index 0, the codimension of Im L\u2217is one, and so the kernel of L is spanned by a function of constant sign. To summarize, we have Ker L\u2217= R, Ker L = R\u27e8\u03c6\u27e9, \u03c6 constant sign. (2.14) By the Fredholm alternative, we obtain that (Du\u03a8)(\u02c6 t,\u02c6 u) is an isomorphism of tangent spaces. By the implicit function theorem, we can solve \u03a8(t, ut) for t close to \u02c6 t. Hence I is open. 7 \f3 The C0 Estimate This section as well as the next three are devoted to the proof of the closedness of the set I of parameters t for which the deformed equation (2.1) can be solved. For this, we need a priori C0, C1, C2, and C2,\u03b3 a priori estimates. As usual, it is notationally convenient to derive these bounds for the original equation (1.1), as long as the bounds obtained depend only on suitable norms for the data \u03c1 and \u00b5. 3.1 The supremum estimate Proposition 1 Let u be a solution to (1.1) such that \u03bb\u2032 \u2208\u03932 and R X eu = M0. Suppose eu \u22651 and \u03b1e\u22122u\u03c1 \u2265\u22121 2\u03c9. Then there exists a constant C depending only on (X, \u03c9), \u03b1, \u03c1 and \u00b5 such that sup X eu \u2264C Z X eu = CM0. (3.1) Proof. We proceed by Moser iteration. Recall the form \u03c9\u2032 de\ufb01ned by \u03c9\u2032 = eu\u03c9 + \u03b1e\u2212u\u03c1 + 2n\u03b1i\u2202\u00af \u2202u. (3.2) The starting point is to compute the quantity Z X i\u2202\u00af \u2202(e\u2212ku) \u2227\u03c9\u2032 \u2227\u03c9n\u22122 (3.3) in two di\ufb00erent ways. On one hand, by the de\ufb01nition of \u03c9\u2032 and Stokes\u2019 theorem, we have Z X i\u2202\u00af \u2202(e\u2212ku) \u2227\u03c9\u2032 \u2227\u03c9n\u22122 = Z X{eu\u03c9 + \u03b1e\u2212u\u03c1} \u2227i\u2202\u00af \u2202(e\u2212ku) \u2227\u03c9n\u22122. (3.4) Expanding Z X i\u2202\u00af \u2202(e\u2212ku) \u2227\u03c9\u2032 \u2227\u03c9n\u22122 = k2 Z X e\u2212ku{eu\u03c9 + \u03b1e\u2212u\u03c1} \u2227i\u2202u \u2227\u00af \u2202u \u2227\u03c9n\u22122 \u2212k Z X e\u2212ku{eu\u03c9 + \u03b1e\u2212u\u03c1} \u2227i\u2202\u00af \u2202u \u2227\u03c9n\u22122. (3.5) On the other hand, without using Stokes\u2019 theorem, we obtain Z X i\u2202\u00af \u2202(e\u2212ku) \u2227\u03c9\u2032 \u2227\u03c9n\u22122 = k2 Z X e\u2212kui\u2202u \u2227\u00af \u2202u \u2227\u03c9\u2032 \u2227\u03c9n\u22122 \u2212k Z X e\u2212ku{eu\u03c9 + \u03b1e\u2212u\u03c1} \u2227i\u2202\u00af \u2202u \u2227\u03c9n\u22122 \u2212(2n\u03b1)k Z X e\u2212kui\u2202\u00af \u2202u \u2227i\u2202\u00af \u2202u \u2227\u03c9n\u22122. (3.6) 8 \fWe equate (3.5) and (3.6) 0 = \u2212k2 Z X e\u2212kui\u2202u \u2227\u00af \u2202u \u2227\u03c9\u2032 \u2227\u03c9n\u22122 + k2 Z X e\u2212ku{eu\u03c9 + \u03b1e\u2212u\u03c1} \u2227i\u2202u \u2227\u00af \u2202u \u2227\u03c9n\u22122 +(2n\u03b1)k Z X e\u2212kui\u2202\u00af \u2202u \u2227i\u2202\u00af \u2202u \u2227\u03c9n\u22122. (3.7) Using equation (1.1), 0 = \u2212k2 Z X e\u2212kui\u2202u \u2227\u00af \u2202u \u2227\u03c9\u2032 \u2227\u03c9n\u22122 + k2 Z X e\u2212ku{eu\u03c9 + \u03b1e\u2212u\u03c1} \u2227i\u2202u \u2227\u00af \u2202u \u2227\u03c9n\u22122 \u22122k Z X e\u2212ku\u00b5\u03c9n n! \u22122k Z X e\u2212kui\u2202\u00af \u2202(eu\u03c9 \u2212\u03b1e\u2212u\u03c1) \u2227\u03c9n\u22122. (3.8) Expanding out terms and dividing by 2k yields 0 = \u2212k 2 Z X e\u2212kui\u2202u \u2227\u00af \u2202u \u2227\u03c9\u2032 \u2227\u03c9n\u22122 + k 2 Z X e\u2212ku{eu\u03c9 + \u03b1e\u2212u\u03c1} \u2227i\u2202u \u2227\u00af \u2202u \u2227\u03c9n\u22122 \u2212 Z X e\u2212ku\u00b5\u03c9n n! 
\u2212 Z X e\u2212(k\u22121)ui\u2202\u00af \u2202u \u2227\u03c9n\u22121 \u2212 Z X e\u2212(k\u22121)ui\u2202u \u2227\u00af \u2202u \u2227\u03c9n\u22121 \u2212\u03b1 Z X e\u2212(k+1)ui\u2202\u00af \u2202u \u2227\u03c1 \u2227\u03c9n\u22122 +\u03b1 Z X e\u2212(k+1)ui\u2202u \u2227\u00af \u2202u \u2227\u03c1 \u2227\u03c9n\u22122 + \u03b1 Z X e\u2212(k+1)ui\u2202\u00af \u2202\u03c1 \u2227\u03c9n\u22122 \u22122\u03b1Re Z X e\u2212(k+1)ui\u2202u \u2227\u00af \u2202\u03c1 \u2227\u03c9n\u22122. Integration by parts gives 0 = \u2212k 2 Z X e\u2212kui\u2202u \u2227\u00af \u2202u \u2227\u03c9\u2032 \u2227\u03c9n\u22122 \u2212k 2 Z X e\u2212ku{eu\u03c9 + \u03b1e\u2212u\u03c1} \u2227i\u2202u \u2227\u00af \u2202u \u2227\u03c9n\u22122 \u2212 Z X e\u2212ku\u00b5\u03c9n n! + \u03b1 Z X e\u2212(k+1)ui\u2202\u00af \u2202\u03c1 \u2227\u03c9n\u22122 \u2212\u03b1 Z X e\u2212(k+1)ui\u2202u \u2227\u00af \u2202\u03c1 \u2227\u03c9n\u22122. (3.9) One more integration by parts yields the following identity: k 2 Z X e\u2212ku{eu\u03c9 + \u03b1e\u2212u\u03c1} \u2227i\u2202u \u2227\u00af \u2202u \u2227\u03c9n\u22122 (3.10) = \u2212k 2 Z X e\u2212kui\u2202u \u2227\u00af \u2202u \u2227\u03c9\u2032 \u2227\u03c9n\u22122 \u2212 Z X e\u2212ku\u00b5 + (\u03b1 \u2212 \u03b1 k + 1) Z X e\u2212(k+1)ui\u2202\u00af \u2202\u03c1 \u2227\u03c9n\u22122. The identity (3.10) will be useful later to control the in\ufb01mum of u, but to control the supremum of u, we replace k with \u2212k in (3.10). Then, for k \u0338= 1, k 2 Z X e(k+1)u{\u03c9 + \u03b1e\u22122u\u03c1} \u2227i\u2202u \u2227\u00af \u2202u \u2227\u03c9n\u22122 (3.11) = \u2212k 2 Z X ekui\u2202u \u2227\u00af \u2202u \u2227\u03c9\u2032 \u2227\u03c9n\u22122 + Z X eku\u00b5 \u2212(\u03b1 \u2212 \u03b1 1 \u2212k) Z X e(k\u22121)ui\u2202\u00af \u2202\u03c1 \u2227\u03c9n\u22122. 9 \fSince \u03bb\u2032 \u2208\u03932, by the properties of the cone we have that P i\u0338=l \u03bb\u2032 i > 0 for each index l \u2208{1, \u00b7 \u00b7 \u00b7, n}. It follows that i\u2202u \u2227\u00af \u2202u \u2227\u03c9\u2032 \u2227\u03c9n\u22122 \u22650. (3.12) Let \u03b2 = n n\u22121 > 1. We can use (3.12) and (3.11) to derive the following estimate for any k \u2265\u03b2 k Z X e(k+1)u{\u03c9 + \u03b1e\u22122u\u03c1} \u2227i\u2202u \u2227\u00af \u2202u \u2227\u03c9n\u22122 \u2264C \u0012Z X eku + Z X e(k\u22121)u \u0013 . (3.13) By assumption, \u03b1e\u22122u\u03c1 \u2265\u22121 2\u03c9. For k \u22652\u03b2, we can estimate Z X |De k 2 u|2 \u2264Ck \u0012Z X e(k\u22121)u + Z X e(k\u22122)u \u0013 . (3.14) Since eu \u22651, we can conclude Z X |De k 2 u|2 \u2264Ck Z X eku, (3.15) for k \u22652\u03b2. The Sobolev inequality yields \u0012Z X ek\u03b2u \u00131/\u03b2 \u2264Ck Z X eku. (3.16) After iterating this estimate, we arrive at sup X eu \u2264C\u2225eu\u2225L2\u03b2. (3.17) To relate the L2\u03b2 norm of eu to R X eu = M0, we can use a standard scaling argument. sup X eu \u2264C \u0012 Z X eue2\u03b2u\u2212u \u00131/2\u03b2 \u2264C \u0012 sup X e(2\u03b2\u22121)u \u0013 1 2\u03b2 \u0012 Z X eu \u00131/2\u03b2 . (3.18) It follows immediately that sup X eu \u2264C Z X eu = CM0. (3.19) 3.2 An integral estimate for e\u2212u Proposition 2 Let u be a solution to (1.1) such that \u03bb\u2032 \u2208\u03932 and R X eu = M0. There exists 0 < \u03b4\u2032 < 1, chosen small enough such that e\u2212u \u2264\u03b4\u2032 implies \u03b1e\u22122u\u03c1 \u2265\u22121 2\u03c9, and C1 depending only on (X, \u03c9), \u03b1, \u03c1, \u00b5 with the following property. 
If e\u2212u \u2264\u03b4\u2032, then Z X e\u2212u \u2264C1 M0 . (3.20) 10 \fProof. Setting k = 2 in (3.10) and using (3.12) gives Z X e\u2212u{\u03c9 + \u03b1e\u22122u\u03c1} \u2227i\u2202u \u2227\u00af \u2202u \u2227\u03c9n\u22122 \u2264C \u0012 Z X e\u22122u + Z X e\u22123u \u0013 . (3.21) Choose \u03b4\u2032 > 0 such that \u03b1e\u22122u\u03c1 \u2265\u22121 2\u03c9. Since we are assuming that e\u2212u \u2264\u03b4\u2032 pointwise, we obtain Z X |De\u2212u 2 |2 \u2264C\u03b4\u2032 Z X e\u2212u. (3.22) By the Poincar\u00b4 e inequality Z X e\u2212u \u2212 \u0012 Z X e\u2212u 2 \u00132 = Z X \f \f \f \fe\u2212u 2 \u2212 Z X e\u2212u 2 \f \f \f \f 2 \u2264C Z X |De\u2212u 2 |2. (3.23) Hence, for some C0 independent of \u03b4\u2032, if \u03b4\u2032 is small Z X e\u2212u \u2264 1 1 \u2212C0\u03b4\u2032 \u0012 Z X e\u2212u 2 \u00132 . (3.24) Let U = {e\u2212u \u2264 2 M0}. Then by Proposition 1, M0 = Z U eu + Z X\\U eu \u2264|U| sup X eu + (1 \u2212|U|)M0 2 \u2264CM0|U| + (1 \u2212|U|)M0 2 . (3.25) It follows that there exists \u03b8 > 0 independent of M0 such that |U| > \u03b8 > 0. (3.26) Let \u03b5 > 0. We may use the measure estimate and (3.24) to obtain \u0012 Z X e\u2212u 2 \u00132 \u2264 (1 + C\u03b5) \u0012 Z U e\u2212u 2 \u00132 + (1 + \u03b5) \u0012 Z X\\U e\u2212u 2 \u00132 \u2264 (1 + C\u03b5)|U| Z U e\u2212u + (1 + \u03b5)(1 \u2212|U|) Z X\\U e\u2212u \u2264 (1 + C\u03b5) 2 M0 + (1 + \u03b5)(1 \u2212\u03b8) 1 1 \u2212C0\u03b4\u2032 \u0012 Z X e\u2212u 2 \u00132 . (3.27) Thus \u0012 Z X e\u2212u 2 \u00132 \u2264(1 + C\u03b5) 2 M0 \u0012 1 1 \u2212(1 + \u03b5)(1 \u2212\u03b8)(1 \u2212C0\u03b4\u2032)\u22121 \u0013 . (3.28) Therefore by (3.24) Z X e\u2212u \u2264 1 1 \u2212C0\u03b4\u2032(1 + C\u03b5) 2 M0 \u0012 1 1 \u2212(1 + \u03b5)(1 \u2212\u03b8)(1 \u2212C0\u03b4\u2032)\u22121 \u0013 . (3.29) Choose \u03b5 = \u03b8 2 and suppose 0 < \u03b4\u2032 < \u03b8 4C0. Using this choice of \u03b5 and \u03b4\u2032 along with 0 < \u03b8 < 1, it follows that (1 + \u03b5)(1 \u2212C0\u03b4\u2032)\u22121 \u22641 + \u03b8. Therefore Z X e\u2212u \u2264 1 1 \u2212\u03b8 4 (1 + C\u03b8) 2 M0 1 \u03b82 = C1 M0 . (3.30) The important point is that \u03b8 does not depend on M0, so both \u03b4\u2032 and C1 do not depend on M0. 11 \f3.3 The in\ufb01mum estimate Proposition 3 Let u be a solution to (1.1) such that \u03bb\u2032 \u2208\u03932 and R X eu = M0. There exists 0 < \u03b4\u2032 < 1 (the same \u03b4\u2032 as in Proposition 2) and C2 depending only on (X, \u03c9), \u03b1, \u03c1, \u00b5, such that if e\u2212u \u2264\u03b4\u2032, then sup X e\u2212u \u2264C2 M0 . (3.31) Proof. Combining (3.10) with (3.12), and choosing \u03b4\u2032 > 0 such that \u03b1e\u22122u\u03c1 \u2265\u22121 2\u03c9, we obtain for k \u22652 k Z X e\u2212(k\u22121)u|Du|2 \u2264C \u0012 Z X e\u2212ku + Z X e\u2212(k+1)u \u0013 . (3.32) Therefore, for k \u22651, we have Z X |De\u2212k 2 u|2 \u2264Ck \u0012Z X e\u2212(k+1)u + Z X e\u2212(k+2)u \u0013 . (3.33) If e\u2212u \u2264\u03b4\u2032 < 1, we deduce Z X |De\u2212k 2 u|2 \u2264Ck Z X e\u2212ku. (3.34) By the Sobolev inequality \u0012Z X e\u2212k\u03b2u \u00131/\u03b2 \u2264Ck Z X e\u2212ku. (3.35) By iterating this estimate, we obtain sup X e\u2212u \u2264C\u2225e\u2212u\u2225L1. (3.36) Combining this estimate with Proposition 2 completes the proof. 3.4 The C0 estimate along the continuity method Combining the supremum and in\ufb01mum estimates, we shall prove the desired C0 estimate along the continuity method (2.1). Proposition 4 Let \u03b1 \u0338= 0. 
There exists B1 > 1, B2 > 1, and M\u2032 \u226b1 depending only on (X, \u03c9), \u03b1, \u03c1, and \u00b5 such that every M0 \u2265M\u2032 has the following property. Let u0 = log M0. Suppose that for all t \u2208[0, t0) with t0 \u22641 there exists a solution ut to i\u2202\u00af \u2202(eut\u03c9 \u2212t\u03b1e\u2212ut\u03c1) \u2227\u03c9n\u22122 + n\u03b1i\u2202\u00af \u2202ut \u2227i\u2202\u00af \u2202ut \u2227\u03c9n\u22122 + t\u00b5\u03c9n n! = 0, (3.37) such that \u03bb\u2032 (t,ut) \u2208\u03932 and R X eut = M0. Then the following C0 estimate holds: eut \u2264B1M0, e\u2212ut \u2264B2 M0 . (3.38) 12 \fProof: Choose B2 = 2C2 where C2 > 1 is as in Proposition 3. Take M\u2032 \u226b1 such that (2C2)M\u2032\u22121 < \u03b4\u2032, where \u03b4\u2032 > 0 is as in Proposition 3. At t = 0, we have e\u2212u0 = M\u22121 0 < (2C2)M\u22121 0 . We claim that e\u2212ut can never reach (2C2)M\u22121 0 on [0, t0). If for some t\u2032 \u2208(0, t0) there holds e\u2212ut\u2032 = (2C2)M\u22121 0 < \u03b4\u2032, then by Proposition 3 it would follow that e\u2212ut\u2032 \u2264C2M\u22121 0 , which is a contradiction. The estimate on eut was established in Proposition 1. Q.E.D. 4 The C2 estimate We come now to one of the key estimates, namely the C2 estimate. For this, it is essential to view the Fu-Yau equation as a complex 2-Hessian equation to exploit the concavity of the operator. 4.1 The Fu-Yau equation as a Hessian equation Using the elementary symmetric function, equation (1.1) can be written as the following scalar equation 0 = {(n \u22121)eugj\u00af k + \u03b1e\u2212u\u02dc \u03c1j\u00af k}DjD\u00af ku + 2n\u03b1\u03c32(i\u2202\u00af \u2202u) + (n \u22121)eu|Du|2 \u2212\u03b1e\u2212u\u02dc \u03c1j\u00af kuju\u00af k \u22122\u03b1Re\u27e8\u2202e\u2212u, \u2202\u03c1\u27e9\u03c9 \u2212\u03b1e\u2212u\u2206\u03c9\u03c1 + \u00b5 (n \u22122)!. (4.1) Here we introduced the following notation \u02dc \u03c1j\u00af k = gj\u00af \u2113gm\u00af k((ga\u00af b\u03c1\u00af ba)g\u00af \u2113m \u2212\u03c1\u00af \u2113m), (4.2) \u27e8\u2202e\u2212u, \u2202\u03c1\u27e9\u03c9 \u03c9n n! = i\u2202e\u2212u \u2227\u00af \u2202\u03c1 \u2227\u03c9n\u22122 (n \u22122)! , \u2206\u03c9\u03c1 \u03c9n n! = i\u2202\u00af \u2202\u03c1 \u2227\u03c9n\u22122 (n \u22122)! . (4.3) As it was mentioned in (1.9), we shall rewrite the equation in terms of g\u2032 \u00af kj = eug\u00af kj + \u03b1e\u2212u\u03c1\u00af kj + 2n\u03b1u\u00af kj, (4.4) where \u03c9\u2032 = i P g\u2032 \u00af kjdzj \u2227d\u00af zk. We will use \u03bb\u2032 to denote the eigenvalues of g\u2032 with respect to the background metric \u03c9. Direct computation gives \u03c32(\u03bb\u2032) = n(n \u22121) 2 e2u + \u03b12e\u22122u\u03c32(\u03c1) + (2n\u03b1)2\u03c32(i\u2202\u00af \u2202u) + \u03b1(n \u22121)ga\u00af b\u03c1\u00af ba +(2n\u03b1){(n \u22121)eugj\u00af k + \u03b1e\u2212u\u02dc \u03c1j\u00af k}DjD\u00af ku. (4.5) Introduce the constant \u03bac = n(n\u22121) 2 . When \u03b1 \u0338= 0, we may combine (4.1) and (4.5) to obtain the following equivalent equation \u03c32(\u03bb\u2032) = \u03bac(e2u \u22124\u03b1eu|Du|2) + 2n\u03b12e\u2212u\u02dc \u03c1j\u00af kuju\u00af k \u22124n\u03b12e\u2212uRe\u27e8\u2202u, \u00af \u2202\u03c1\u27e9\u03c9 +\u03b12e\u22122u\u03c32(\u03c1) + 2n\u03b12e\u2212u\u2206\u03c9\u03c1 + \u03b1(n \u22121)ga\u00af b\u03c1\u00af ba \u2212 2n\u03b1 (n \u22122)!\u00b5. (4.6) 13 \fAs noted in the introduction, the ellipticity condition is that \u03bb\u2032 \u2208\u03932. 
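Before moving to higher order estimates, it may help to record why the sign $\alpha < 0$ rules out the degeneracy discussed in the introduction; the following one-line bound is our own expansion of an observation used in section §6 below, and it uses only (4.6) together with the $C^0$ estimate of Proposition 4. Since $-4\alpha = 4|\alpha| > 0$, every gradient term on the right-hand side of (4.6) that is not manifestly positive carries a factor $e^{-u}$, so $$\sigma_2(\lambda') \;\geq\; \kappa_c e^{2u} + 4|\alpha|\kappa_c e^u|Du|^2 - C\,e^{-u}\big(1 + |Du|^2\big) - C \;\geq\; \frac{\kappa_c}{2}\, e^{2u} \;>\; 0$$ once $e^u \geq B_2^{-1}M_0$ and $M_0 \gg 1$, where $C$ depends only on $(X,\omega)$, $\alpha$, $\rho$, $\mu$. No such lower bound is available when $\alpha > 0$, since the leading term $\kappa_c(e^{2u} - 4\alpha e^u|Du|^2)$ may then tend to $0$.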
To obtain higher order estimates, we will work with a version of the equation (4.6) which exhibits a concave elliptic operator. Denote F = \u03c31/2 2 (\u03bb\u2032). Equation (4.6) is equivalent to F = \u03c32(\u03bb\u2032)1/2 = w, (4.7) with w2 = \u03bace2u \u22122\u03b1eu \u001a 2\u03bac|Du|2 \u2212n\u03b1e\u22122u\u02dc \u03c1j\u00af kuju\u00af k + 2n\u03b1e\u22122uRe\u27e8\u2202u, \u00af \u2202\u03c1\u27e9\u03c9 \u001b +\u03b12e\u22122u\u03c32(\u03c1) + 2n\u03b12e\u2212u\u2206\u03c9\u03c1 + \u03b1(n \u22121)ga\u00af b\u03c1\u00af ba \u2212 2n\u03b1 (n \u22122)!\u00b5. (4.8) 4.2 The linearization F j\u00af k At a point p \u2208X where the background metric g\u00af kj = \u03b4kj, we will use the notation \u03c3j\u00af k 2 = \u2202\u03c32(\u03bb\u2032) \u2202g\u2032 \u00af kj , F j\u00af k = \u2202\u03c31/2 2 (\u03bb\u2032) \u2202g\u2032 \u00af kj , F j\u00af k,\u2113\u00af m = \u22022\u03c31/2 2 (\u03bb\u2032) \u2202g\u2032 \u00af kj\u2202g\u2032 \u00af m\u2113 . (4.9) Thus F j\u00af k = \u03c3j\u00af k 2 2\u03c32(\u03bb\u2032)1/2, F = F j\u00af kg\u00af kj = (n \u22121) 2 \u03c31(\u03bb\u2032) \u03c32(\u03bb\u2032)1/2. (4.10) In this section, we shall derive expressions for F j\u00af kDjD\u00af k acting on various quantities. First, 2n\u03b1F j\u00af kDjD\u00af ku = F j\u00af kg\u2032 \u00af kj \u2212euF j\u00af kg\u00af kj \u2212\u03b1e\u2212uF j\u00af k\u03c1\u00af kj = \u03c31/2 2 (\u03bb\u2032) \u2212euF \u2212\u03b1e\u2212uF j\u00af k\u03c1\u00af kj. (4.11) Covariantly di\ufb00erentiating \u03c32(\u03bb\u2032)1/2 gives \u2202p\u03c31/2 2 = F j\u00af kDpg\u2032 \u00af kj. (4.12) Substituting in the de\ufb01nition of g\u2032 \u00af kj, we obtain the following formulas for 2n\u03b1F j\u00af kDjD\u00af k acting on Du, 2n\u03b1F j\u00af kDjD\u00af k(Dpu) = \u2202p\u03c31/2 2 \u2212F j\u00af kDp(eug\u00af kj + \u03b1e\u2212u\u03c1\u00af kj), (4.13) 2n\u03b1F j\u00af kDjD\u00af k(D\u00af pu) = \u2202\u00af p\u03c31/2 2 \u2212F j\u00af kD\u00af p(eug\u00af kj + \u03b1e\u2212u\u03c1\u00af kj) + 2n\u03b1F j\u00af kR\u00af pj\u00af k \u00af qD\u00af qu. (4.14) Here R\u00af pj\u00af k \u00af q denotes the curvature of the background metric \u03c9. Introduce the notation |DDu|2 F g = F j\u00af kg\u2113\u00af mDjD\u2113uD\u00af kD \u00af mu, |D \u00af Du|2 F g = F j\u00af kg\u2113\u00af mDjD \u00af muD\u2113D\u00af ku. (4.15) 14 \fThen 2n\u03b1F j\u00af kDjD\u00af k|Du|2 = 2n\u03b1g\u2113\u00af mF j\u00af k(DjD\u00af kD\u2113u D \u00af mu + D\u2113u DjD\u00af kD \u00af mu) +2n\u03b1(|DDu|2 F g + |D \u00af Du|2 F g), (4.16) and hence 2n\u03b1F j\u00af kDjD\u00af k|Du|2 = 2Re\u27e8D\u03c31/2 2 , Du\u27e9+ 2n\u03b1F j\u00af kgl\u00af pDluR\u00af pj\u00af k \u00af qD\u00af qu \u22122Re\u27e8F j\u00af kD(eug\u00af kj + \u03b1e\u2212u\u03c1\u00af kj), Du\u27e9 +2n\u03b1(|DDu|2 F g + |D \u00af Du|2 F g). (4.17) Finally, we compute the operator 2n\u03b1F j\u00af kDjD\u00af k acting on the Hessian DpD\u00af qu. Di\ufb00erentiating the equation (4.12) again gives F j\u00af kDpD\u00af qg\u2032 \u00af kj = \u2202p\u2202\u00af q\u03c31/2 2 \u2212F i\u00af j,k\u00af \u2113Dpg\u2032 \u00af jiD\u00af qg\u2032 \u00af \u2113k. 
(4.18) Using the de\ufb01nition of g\u2032, we obtain 2n\u03b1F j\u00af kDjD\u00af kDpD\u00af qu (4.19) = 2n\u03b1F j\u00af kDpD\u00af qDjD\u00af ku + 2n\u03b1 \u0010 F j\u00af kR\u00af qj\u00af k \u00af au\u00af ap \u2212F j\u00af kR\u00af qp\u00af k \u00af au\u00af aj \u0011 = F j\u00af kDpD\u00af qg\u2032 \u00af kj \u2212F j\u00af kDpD\u00af q(eug\u00af kj + \u03b1e\u2212u\u03c1\u00af kj) + 2n\u03b1 \u0010 F j\u00af kR\u00af qj\u00af k \u00af au\u00af ap \u2212F j\u00af kR\u00af qp\u00af k \u00af au\u00af aj \u0011 = \u2202p\u2202\u00af q\u03c31/2 2 \u2212F i\u00af j,k\u00af \u2113Dpg\u2032 \u00af jiD\u00af qg\u2032 \u00af \u2113k + 2n\u03b1 \u0010 F j\u00af kR\u00af qj\u00af k \u00af au\u00af ap \u2212F j\u00af kR\u00af qp\u00af k \u00af au\u00af aj \u0011 \u2212F j\u00af k(eug\u00af kj \u2212\u03b1e\u2212u\u03c1\u00af kj)DpD\u00af qu \u2212F j\u00af k(eug\u00af kj + \u03b1e\u2212u\u03c1\u00af kj)DpuD\u00af qu +\u03b1e\u2212uF j\u00af kDpuD\u00af q\u03c1\u00af kj + \u03b1e\u2212uF j\u00af kD\u00af quDp\u03c1\u00af kj \u2212\u03b1e\u2212uF j\u00af kDpD\u00af q\u03c1\u00af kj. (4.20) 4.3 Proof of the C2 estimate Proposition 5 Let u be a smooth solution to (4.6) such that \u03bb\u2032 \u2208\u03932, and suppose the C0 estimate B\u22121 2 M0 \u2264eu \u2264B1M0 holds. Suppose the parameter \u03b1 < 0. There exists an M\u2032 such that for all M0 \u2265M\u2032, there exists C > 1 such that sup X |\u2202\u00af \u2202u|\u03c9 \u2264C(1 + sup X |Du|2 \u03c9), (4.21) where C depends on (X, \u03c9), \u03c1, \u00b5, \u03b1, M0, B1, B2. We will use the notation K = sup X |Du|2 + 1. (4.22) As before, \u03bb\u2032 = (\u03bb\u2032 1, . . . , \u03bb\u2032 n) will denote the eigenvalues of g\u2032 with respect to g, and we shall take the ordering \u03bb\u2032 1 \u2265\u03bb\u2032 2 \u2265\u00b7 \u00b7 \u00b7 \u2265\u03bb\u2032 n. We will often use that the complex Hessian of u can be bounded by \u03bb\u2032 1. Indeed, since g\u2032 \u2208\u03932, we can estimate |2n\u03b1u\u00af kj| \u2264|g\u2032 \u00af kj| + |eug\u00af kj + \u03b1e\u2212u\u03c1\u00af kj| \u2264C(\u03bb\u2032 1 + 1). (4.23) 15 \fWe now \ufb01rst state a lemma which exploits the speci\ufb01c function w and the sign of the parameter \u03b1 < 0. Lemma 1 Let u be as Proposition 5. Suppose that at p \u2208X, we have \u03bb\u2032 1 \u226bK \u22651+|Du|2. Then at p, there holds F \u226b1 + |Du|, (4.24) |\u27e8Dw, Du\u27e9| \u2264C{KF + |DDu||Du|}, (4.25) D1D\u00af 1w \u03bb\u2032 1 \u2265\u2212C \u001a F + (1 + |Du|)|DDu| \u03bb\u2032 1 + |Du\u00af 11| \u03bb\u2032 1 \u001b . (4.26) Proof: We shall compute at a point where g\u00af kj = \u03b4kj and g\u2032 \u00af kj is diagonal. Recall F = n\u22121 2w P i \u03bb\u2032 i, and P i \u03bb\u2032 i = \u03bb\u2032 1 + \u03c31\u00af 1 2 (\u03bb\u2032) \u2265\u03bb\u2032 1. For choice of normalization M0 \u226b1, by the C0 estimate we have e\u2212u \u226a1. It follows that for M0 \u226b1, w > 0 and 1 C (1 + |Du|2) \u2264w2 \u2264C(1 + |Du|2). (4.27) Thus F \u22651 C \u03bb\u2032 1 w2w \u22651 C \u03bb\u2032 1 K (1 + |Du|). (4.28) Hence F \u226b1 + |Du|. Next, we compute derivatives of w2. 
Dkw2 = 2\u03bace2uDku + 4|\u03b1|\u03baceuDk|Du|2 + 2n\u03b12e\u2212uDk{\u02dc \u03c1i\u00af juiu\u00af j} +4|\u03b1|\u03baceu|Du|2Dku \u22122n\u03b12e\u2212u\u02dc \u03c1i\u00af juiu\u00af jDku +4n\u03b12e\u2212uRe\u27e8\u2202u, \u00af \u2202\u03c1\u27e9\u03c9Dku \u22124n\u03b12e\u2212uReDk\u27e8\u2202u, \u00af \u2202\u03c1\u27e9\u03c9 +Dk \u001a \u03b12e\u22122u\u03c32(\u03c1) + 2n\u03b12e\u2212u\u2206\u03c9\u03c1 + \u03b1(n \u22121)ga\u00af b\u03c1\u00af ba \u2212 2n\u03b1 (n \u22122)!\u00b5 \u001b . (4.29) Estimate |\u27e8Dw, Du\u27e9| \u2264 1 2w|Du||Dw2| (4.30) \u2264 C \u001a|Du|4 w + 1 + |Du| w |Du||DDu| + 1 + |Du| w |Du||D \u00af Du| + |Du| w \u001b . Using Cw \u22651 + |Du|, we obtain |\u27e8Dw, Du\u27e9| \u2264CK(1 + |Du| + \u03bb\u2032 1 w ) + C|Du||DDu| \u2264C(KF + |Du||DDu|). (4.31) To complete the lemma, it remains to show (4.26). Compute D1D\u00af 1w = 1 2w \u001a \u2212|D1w2|2 2w2 + D1D\u00af 1w2 \u001b = 1 2w \u001a \u2212 1 2w2 \f \f \f \f4\u03b1\u03baceuD1|Du|2 \f \f \f \f 2 \u2212 1 2w22Re\u27e84\u03b1\u03baceuD\u00af 1|Du|2, R1\u27e9 \u2212|R1|2 2w2 + D1D\u00af 1w2 \u001b , (4.32) 16 \fwhere R1 = 2\u03bace2uD1u + 2n\u03b12e\u2212uD1{\u02dc \u03c1i\u00af juiu\u00af j} + 4|\u03b1|\u03baceu|Du|2D1u \u22122n\u03b12e\u2212u\u02dc \u03c1i\u00af juiu\u00af jD1u +4n\u03b12e\u2212uRe\u27e8\u2202u, \u00af \u2202\u03c1\u27e9\u03c9D1u \u22124n\u03b12e\u2212uReD1\u27e8\u2202u, \u00af \u2202\u03c1\u27e9\u03c9 +D1 \u001a \u03b12e\u22122u\u03c32(\u03c1) + 2n\u03b12e\u2212u\u2206\u03c9\u03c1 + \u03b1(n \u22121)ga\u00af b\u03c1\u00af ba \u2212 2n\u03b1 (n \u22122)!\u00b5 \u001b . (4.33) We estimate |R1| \u2264C|Du|3 + C(n, \u03b1, \u03c1)e\u2212u(1 + |Du|) X p {|u1p| + |u\u00af 1p|} + C, (4.34) |R1|2 \u2264C|Du|6 + C(n, \u03b1, \u03c1)e\u22122u(1 + |Du|2) X p {|u1p|2 + |u\u00af 1p|2} + C, (4.35) Re\u27e84\u03b1\u03baceuD\u00af 1|Du|2, R1\u27e9\u2264C(n, \u03b1, \u03c1)(1 + |Du|2) X p {|u1p|2 + |u\u00af 1p|2} + C|Du|6 + C. (4.36) Together with eu \u226b1 for M0 \u226b1, we use the above inequalities to obtain Re\u27e84\u03b1\u03baceuD\u00af 1|Du|2, R1\u27e9+ |R1|2 2 \u2264 \u00121 8(4\u03b1\u03bac)2e2u|Du|2 + eu \u0013 X p {|u1p|2 + |u\u00af 1p|2} + C|Du|6 + C. (4.37) Therefore D1D\u00af 1w \u2265 1 2w \u001a \u2212 1 2w2(4\u03b1\u03bac)2e2u|Du|2 X p \u00125 4|u1p|2 + 5|u\u00af 1p|2 \u0013 \u22121 w2 \u00121 8(4\u03b1\u03bac)2e2u|Du|2 + eu \u0013 X p {|u1p|2 + |u\u00af 1p|2} \u2212C w2|Du|6 \u2212C + D1D\u00af 1w2 \u001b . (4.38) Combining terms D1D\u00af 1w \u2265 1 2w \u001a \u2212 3 4w2 \u0012 (4\u03b1\u03bac)2e2u|Du|2 + eu \u0013 X p |u1p|2 \u2212C(1 + |Du|2) w2 |D \u00af Du|2 \u2212C w2|Du|6 \u2212C + D1D\u00af 1w2 \u001b . (4.39) For e\u2212u \u226a1, we have w2 \u22657 8{\u03bace2u + 4|\u03b1|\u03baceu|Du|2}. (4.40) Hence D1D\u00af 1w \u22651 2w \u001a \u22126 7(4|\u03b1|\u03bac)eu X p |u1p|2 \u2212C|D \u00af Du|2 \u2212C|Du|4 \u2212C + D1D\u00af 1w2 \u001b . (4.41) 17 \fTaking a second derivative of (4.29), we estimate for e\u2212u \u226a1, D1D\u00af 1w2 \u2265 6 7(4\u03bac|\u03b1|)eu X p {|u1p|2 + |u\u00af 1p|2} (4.42) \u2212C{(1 + |Du|)|Du\u00af 11| + (1 + |Du|2)(|DDu| + |D \u00af Du|) + |Du|4 + 1}. We see that the terms involving P p |u1p|2 cancel. Hence D1D\u00af 1w \u2265\u2212C \u001a(1 + |Du|) w {|Du\u00af 11| + (1 + |Du|)|DDu|} + |D \u00af Du|2 w + |Du|4 + 1 w \u001b . (4.43) Therefore, for \u03bb\u2032 1 \u226bK, D1D\u00af 1w \u03bb\u2032 1 \u2265\u2212C \u001a|Du\u00af 11| \u03bb\u2032 1 + (1 + |Du|)|DDu| \u03bb\u2032 1 + \u03bb\u2032 1 w + |Du| + 1 \u001b . 
(4.44) This inequality yields (4.26). Q.E.D. Given Lemma 1, we now prove the C2 estimate. We shall use the maximum principle applied to the test function of Hou-Ma-Wu [22]. Let N > 0 be a large constant to be determined later. Let L = 2n|\u03b1| supX |u|. De\ufb01ne \u03c8(t) = N log \u0012 1 + t 2L \u0013 , (4.45) for |t| \u2264L. It follows that N L > \u03c8\u2032 > N 3L, \u03c8\u2032\u2032 = \u2212|\u03c8\u2032|2 N . (4.46) De\ufb01ne \u03c6(t) = \u2212log (2K \u2212t), K = sup X |Du|2 + 1, (4.47) for 0 \u2264t \u2264K. We have \u03c6\u2032(|Du|2) \u22641 K \u22641, \u03c6\u2032(|Du|2) \u2265 1 2K , (4.48) and the relationship \u03c6\u2032\u2032 = (\u03c6\u2032)2. (4.49) First, consider G0(z, \u03be) = log (g\u2032 \u00af jk\u03bek \u00af \u03bej) \u2212\u03c8(2n\u03b1u) + \u03c6(|Du|2), (4.50) for z \u2208X and \u03be \u2208T 1,0 z (X) a unit vector. G0 is not de\ufb01ned everywhere, but we may restrict to the compact set where g\u2032 \u00af jk\u03bek \u00af \u03bej \u22650 and obtain an upper semicontinuous function. Let (p, \u03be0) be the maximum of G0. Choose coordinates centered at p such that g\u00af jk = \u03b4jk and 18 \fg\u2032 \u00af jk is diagonal. As before, we use the ordering \u03bb\u2032 1 \u2265\u00b7 \u00b7 \u00b7 \u2265\u03bb\u2032 n for the eigenvalue of g\u2032 with respect to g. At p, we have \u03bb\u2032 1(p) = g\u2032 \u00af 11(p), and \u03be0(p) = \u22021. We extend \u03be0(p) to a local unit vector \ufb01eld \u03be0 = g\u22121/2 \u00af 11 \u2202 \u2202z1. De\ufb01ne the local function G(z) = log (g\u22121 \u00af 11 g\u2032 \u00af 11) \u2212\u03c8(2n\u03b1u) + \u03c6(|Du|2). (4.51) This function G also attains a maximum at p \u2208X. We will compute at the point p. Covariantly di\ufb00erentiating G gives G\u00af j = D\u00af j(eu + \u03b1e\u2212u\u03c1\u00af 11) + 2n\u03b1D\u00af jD1D\u00af 1u g\u2032 \u00af 11 + \u03c6\u2032D\u00af j|Du|2 \u22122n\u03b1\u03c8\u2032D\u00af ju. (4.52) Covariantly di\ufb00erentiating G a second time and contracting with F i\u00af j yields F i\u00af jG\u00af ji = 2n\u03b1 \u03bb\u2032 1 F i\u00af jDiD\u00af jD1D\u00af 1u + (eu \u2212\u03b1e\u2212u\u03c1\u00af 11) \u03bb\u2032 1 F i\u00af jDiD\u00af ju + (eu + \u03b1e\u2212u\u03c1\u00af 11) \u03bb\u2032 1 |Du|2 F \u22122\u03b1e\u2212u \u03bb\u2032 1 Re{F i\u00af jui(\u03c1\u00af 11)\u00af j} + \u03b1e\u2212u \u03bb\u2032 1 F i\u00af j(\u03c1\u00af 11)\u00af ji \u2212|Dg\u2032 \u00af 11|2 F \u03bb\u2032 12 + \u03c6\u2032F i\u00af jDiD\u00af j|Du|2 +\u03c6\u2032\u2032|D|Du|2|2 F \u22122n\u03b1\u03c8\u2032F i\u00af jDiD\u00af ju \u2212(2n\u03b1)2\u03c8\u2032\u2032|Du|2 F. (4.53) Here we introduced the notation |D\u03c7|2 F = F j\u00af kDj\u03c7D\u00af k\u03c7. We \ufb01rst get an estimate for F i\u00af jDiD\u00af jD1D\u00af 1u by using the identity (4.19) and noting that the complex Hessian of u can be bounded by \u03bb\u2032 1 (4.23). 2n\u03b1F i\u00af jDiD\u00af jD1D\u00af 1u \u2265D1D\u00af 1w \u2212F i\u00af j,k\u00af \u2113D1g\u2032 \u00af jiD\u00af 1g\u2032 \u00af \u2113k \u2212C(1 + |Du|2 + \u03bb\u2032 1)F. (4.54) From (4.11), for suitable normalization eu \u226b1 and hence \u03b1e\u2212u\u03c1\u00af kj \u2265\u2212g\u00af kj, and so we have \u22122n\u03b1F i\u00af jDiD\u00af ju \u2265F \u2212w. (4.55) By (4.17), we have F i\u00af jDiD\u00af j|Du|2 \u2265 Re\u27e8Dw, Du\u27e9 n\u03b1 \u2212C(1 + |Du|2)F + (|D \u00af Du|2 F g + |DDu|2 F g). 
(4.56) Using inequalities (4.54), (4.55), (4.56), in (4.53) yields the following inequality at the maximum point p \u2208X of G 0 \u2265 1 \u03bb\u2032 1 \u001a D1D\u00af 1w \u2212F i\u00af j,k\u00af \u2113D1g\u2032 \u00af jiD\u00af 1g\u2032 \u00af \u2113k \u001b \u2212|Dg\u2032 \u00af 11|2 F \u03bb\u2032 12 + \u03c6\u2032|D \u00af Du|2 F g + \u03c6\u2032|DDu|2 F g +\u03c6\u2032\u2032|D|Du|2|2 F + \u03c6\u2032 n\u03b1Re\u27e8Dw, Du\u27e9\u2212(2n\u03b1)2\u03c8\u2032\u2032|Du|2 F +\u03c8\u2032F \u2212C \u001a 1 + \u03c6\u2032 + \u03c6\u2032|Du|2 + |Du|2 \u03bb\u2032 1 \u001b F \u2212\u03c8\u2032w \u2212C. (4.57) 19 \fWe shall assume \u03bb\u2032 1 \u226bK \u22651 + |Du|2, otherwise the estimate is complete. Since \u03c6\u2032 \u2264 1 K, we have 1 + \u03c6\u2032 + \u03c6\u2032|Du|2 + |Du|2 \u03bb\u2032 1 \u2264C. (4.58) Combining (4.24) with w \u2264C(1 + |Du|), we obtain w \u2264 1 N F for \u03bb\u2032 1 large enough. Using Lemma 1 on the terms involving derivatives of w, the main inequality becomes 0 \u2265 \u22121 \u03bb\u2032 1 F i\u00af j,k\u00af \u2113D1g\u2032 \u00af jiD\u00af 1g\u2032 \u00af \u2113k \u2212|Dg\u2032 \u00af 11|2 F \u03bb\u2032 12 + \u03c6\u2032|D \u00af Du|2 F g + \u03c6\u2032|DDu|2 F g \u2212C\u03c6\u2032|DDu||Du| \u2212C(1 + |Du|)|DDu| \u03bb\u2032 1 \u2212C |Du\u00af 11| \u03bb\u2032 1 +\u03c6\u2032\u2032|D|Du|2|2 F \u2212(2n\u03b1)2\u03c8\u2032\u2032|Du|2 F + (\u03c8\u2032 \u2212C)F. (4.59) Using the critical equation DG = 0 and thus setting (4.52) to zero, we obtain |Du\u00af 11| \u03bb\u2032 1 \u2264 C \u03bb\u2032 1 (1 + |Du|) + \u03c6\u2032|D|Du|2| + C\u03c8\u2032|Du| \u2264 F + C\u03c6\u2032|Du||DDu| + C\u03c6\u2032|Du|\u03bb\u2032 1. (4.60) In the last line we used (4.24) from Lemma 1. Using F \u2265n\u22121 2 \u03bb\u2032 1 w , we estimate \u03c6\u2032|Du|\u03bb\u2032 1 \u2264\u03bb\u2032 1 \u221a K \u2264C \u03bb\u2032 1 w \u2264CF. (4.61) Therefore, by using the equation \u03c32(\u03bb\u2032) = w2 and (4.48), C \u001a \u03c6\u2032|DDu||Du| + (1 + |Du|)|DDu| \u03bb\u2032 1 + |Du\u00af 11| \u03bb\u2032 1 \u001b \u2264 C \u001a \u03c6\u2032|DDu||Du| + (1 + |Du|)|DDu| \u03bb\u2032 1 + F \u001b = C \u001a\u0012\u03c6\u2032\u03c31/2 2 |DDu|2 \u03bb\u2032 1 \u00131/2 \u0012\u03c6\u2032|Du|2\u03bb\u2032 1 w \u00131/2 + \u0012\u03c6\u2032\u03c31/2 2 |DDu|2 \u03bb\u2032 1 \u00131/2 \u0012(1 + |Du|)2 \u03c6\u2032\u03bb\u2032 1w \u00131/2 + F \u001b \u2264 \u03c6\u2032 2 \u03c31/2 2 |DDu|2 n\u03bb\u2032 1 + C \u001a|Du|2\u03bb\u2032 1 Kw + (1 + |Du|)2K \u03bb\u2032 1w + F \u001b \u2264 \u03c6\u2032 2 \u03c31/2 2 |DDu|2 n\u03bb\u2032 1 + CF. (4.62) To obtain the last inequality, we used \u03bb\u2032 1 w \u2264 2 n\u22121F, \u03bb\u2032 1 \u226bK, Cw \u22651 + |Du|, and (4.24). Next, we note that for any \u03bb\u2032 \u2208\u03932, we have the inequality \u03bb\u2032 1\u03c31\u00af 1 2 \u22652 n\u03c32. This inequality is well-known, and a proof can be found for example in [27]. It follows that F i\u00af i \u2265F 1\u00af 1 \u2265\u03c31/2 2 n\u03bb\u2032 1 . (4.63) 20 \fTherefore \u2212C \u001a \u03c6\u2032|DDu||Du| + (1 + |Du|)|DDu| \u03bb\u2032 1 + |Du\u00af 11| \u03bb\u2032 1 \u001b \u2265\u2212\u03c6\u2032 2 |DDu|2 F g \u2212CF. 
(4.64) The main inequality becomes 0 \u2265 \u22121 \u03bb\u2032 1 F i\u00af j,k\u00af \u2113D1g\u2032 \u00af jiD\u00af 1g\u2032 \u00af \u2113k \u2212|Dg\u2032 \u00af 11|2 F \u03bb\u2032 12 + \u03c6\u2032|D \u00af Du|2 F g + \u03c6\u2032 2 |DDu|2 F g +\u03c6\u2032\u2032|D|Du|2|2 F \u2212(2n\u03b1)2\u03c8\u2032\u2032|Du|2 F + (\u03c8\u2032 \u2212C)F. (4.65) At this point, the estimate follows from the argument of Hou-Ma-Wu [22]. We present the argument for the sake of completeness. Since we are dealing with a 2-Hessian equation, we will also use ideas from [30]. Before proceeding in cases, we use the critical equation DG = 0 to notice the following estimate which holds for each \ufb01xed index i, F i\u00af i|Dig\u2032 \u00af 11|2 \u03bb\u2032 12 = F i\u00af i \f \f \f \f\u03c6\u2032Di|Du|2 \u22122n\u03b1\u03c8\u2032Diu \f \f \f \f 2 \u2264 (1 + 1 8)(\u03c6\u2032)2F i\u00af i|Di|Du|2|2 + C(\u03c8\u2032)2F i\u00af i|Diu|2 \u2264 (\u03c6\u2032)2F i\u00af i|Di|Du|2|2 + (\u03c6\u2032)2|Du|2 4 F i\u00af i X p (|uip|2 + |u\u00af ip|2) + C(\u03c8\u2032)2F i\u00af i|Diu|2 \u2264 \u03c6\u2032\u2032F i\u00af i|Di|Du|2|2 + \u03c6\u2032 4 F i\u00af i X p (|uip|2 + |u\u00af ip|2) + C(\u03c8\u2032)2F i\u00af i|Diu|2. (4.66) In the last line, we used the properties of the function \u03c6 given in (4.48) and (4.49). We shall need the constants \u03b4 = \u03c4 4 \u22123\u03c4 , \u03c4 = 1 1 + N . (4.67) Case (A): \u2212\u03bb\u2032 n \u2265\u03b4\u03bb\u2032 1. Using F i\u00af j,k\u00af \u2113\u22640 by the concavity of \u03c31/2 2 on the \u03932 cone, \u03c8\u2032\u2032 < 0, and using estimate (4.66) on |Dg\u2032 \u00af 11|2 F \u03bb\u2032 1 2 , we obtain 0 \u2265\u03c6\u2032 2 |D \u00af Du|2 F g + \u03c6\u2032 4 |DDu|2 F g \u2212C\u03c8\u20322|Du|2 F + (\u03c8\u2032 \u2212C)F. (4.68) Using the assumption on the smallest eigenvalue \u03bb\u2032 n, we estimate for \u03bb\u2032 1 large enough \u22122n\u03b1u\u00af nn = \u2212\u03bb\u2032 n + eu + \u03b1e\u2212u\u03c1\u00af nn \u2265\u03b4\u03bb\u2032 1 + eu + \u03b1e\u2212u\u03c1\u00af nn \u2265\u03b4\u03bb\u2032 1 2 . (4.69) Hence \u03c6\u2032 2 |D \u00af Du|2 F g \u2265 1 4K F n\u00af nu2 \u00af nn \u2265 \u03b42 16K(2n\u03b1)2F n\u00af n\u03bb\u2032 1 2. (4.70) 21 \fSince F n\u00af n \u2265F i\u00af i for all indices i, we have 0 \u2265 \u03b42 16K(2n\u03b1)2F n\u00af n\u03bb\u2032 1 2 \u2212CK\u03c8\u20322F n\u00af n + (\u03c8\u2032 \u2212C)F. (4.71) By (4.46), we can choose \u03c8 such that (\u03c8\u2032 \u2212C) \u22650. The estimate \u03bb\u2032 1 \u2264C(1 + K) follows. Case (B): \u2212\u03bb\u2032 n \u2264\u03b4\u03bb\u2032 1. We partition {1, \u00b7 \u00b7 \u00b7 , n} into I = {i : F i\u00af i \u2264\u03b4\u22121F 1\u00af 1}, J = {i : F i\u00af i > \u03b4\u22121F 1\u00af 1}. (4.72) Using (4.66) for each i \u2208I occurring in |Dg\u2032 \u00af 11|2 F \u03bb\u2032 1 2 , the main inequality becomes 0 \u2265 \u22121 \u03bb\u2032 1 F i\u00af j,k\u00af \u2113D1g\u2032 \u00af jiD\u00af 1g\u2032 \u00af \u2113k \u22121 \u03bb\u2032 12 X i\u2208J F i\u00af i|Dig\u2032 \u00af 11|2 + \u03c6\u2032 2 |D \u00af Du|2 F g + \u03c6\u2032 4 |DDu|2 F g +\u03c6\u2032\u2032 X i\u2208J F i\u00af i|Di|Du|2|2 \u2212(2n\u03b1)2\u03c8\u2032\u2032|Du|2 F \u2212CK(\u03c8\u2032)2\u03b4\u22121F 1\u00af 1 +(\u03c8\u2032 \u2212C)F. 
(4.73) Using (4.49), DG(p) = 0, and (4.46), X i\u2208J \u03c6\u2032\u2032F i\u00af i|Di|Du|2|2 = X i\u2208J F i\u00af i|\u03c6\u2032|Du|2 i|2 = X i\u2208J F i\u00af i \f \f \f \f \f Dig\u2032 \u00af 11 g\u2032 \u00af 11 \u22122n\u03b1\u03c8\u2032ui \f \f \f \f \f 2 \u2265 \u03c4 \u03bb\u2032 12 X i\u2208J F i\u00af i|Dig\u2032 \u00af 11|2 \u2212 \u03c4 1 \u2212\u03c4 X i\u2208J F i\u00af i |2n\u03b1\u03c8\u2032ui|2 = \u03c4 \u03bb\u2032 12 X i\u2208J F i\u00af i|Dig\u2032 \u00af 11|2 + \u03c4N 1 \u2212\u03c4 (2n\u03b1)2\u03c8\u2032\u2032 X i\u2208J F i\u00af i|ui|2 = \u03c4 \u03bb\u2032 12 X i\u2208J F i\u00af i|Dig\u2032 \u00af 11|2 + (2n\u03b1)2\u03c8\u2032\u2032 X i\u2208J F i\u00af i|ui|2. (4.74) In the last line we used the de\ufb01nition of \u03c4 (4.67). The main inequality becomes 0 \u2265 \u22121 \u03bb\u2032 1 F i\u00af j,k\u00af \u2113D1g\u2032 \u00af jiD\u00af 1g\u2032 \u00af \u2113k \u22121 \u2212\u03c4 \u03bb\u2032 12 X i\u2208J F i\u00af i|Dig\u2032 \u00af 11|2 + \u03c6\u2032 2 |D \u00af Du|2 F g + \u03c6\u2032 4 |DDu|2 F g \u2212CK(\u03c8\u2032)2\u03b4\u22121F 1\u00af 1 + (\u03c8\u2032 \u2212C)F. (4.75) Terms involving \u2212\u03c8\u2032\u2032 > 0 were discarded. Recall that if F(A) = f(\u03bb1, \u00b7 \u00b7 \u00b7 , \u03bbn) is a symmetric function of the eigenvalues of a Hermitian matrix A, then at a diagonal matrix A, we have (see [2, 17]), F i\u00af j = \u03b4ijfi, (4.76) F i\u00af j,r\u00af sTi\u00af jkTr\u00af s\u00af k = X fijTi\u00af ikTj\u00af j\u00af k + X p\u0338=q fp \u2212fq \u03bbp \u2212\u03bbq |Tp\u00af qk|2, (4.77) 22 \fwhere the second term on the right-hand side of (4.77) has to be interpreted as a limit if \u03bbp = \u03bbq. In our case f(\u03bb\u2032) = \u03c31/2 2 (\u03bb\u2032), and we may compute fp = 1 2\u03c32(\u03bb\u2032)1/2 X k\u0338=p \u03bb\u2032 k, fp \u2212fq \u03bbp \u2212\u03bbq = \u2212 1 2\u03c32(\u03bb\u2032)1/2. (4.78) Since f(\u03bb\u2032) = \u03c31/2 2 (\u03bb\u2032) is concave, identity (4.77) gives us the following inequality F i\u00af j,r\u00af sTi\u00af jkTr\u00af s\u00af k \u2264\u2212 1 2\u03c31/2 2 X p\u0338=q |Tp\u00af qk|2. (4.79) We now estimate \u22121 \u03bb\u2032 1 F i\u00af j,k\u00af \u2113D1g\u2032 \u00af jiD\u00af 1g\u2032 \u00af \u2113k \u2265 \u2212 1 2\u03bb\u2032 1w X i\u0338=j |D1g\u2032 \u00af ji|2 \u2265 1 2\u03bb\u2032 1w X i\u0338=1 |Dig\u2032 \u00af 11 \u2212Di(eu + \u03b1e\u2212u\u03c1\u00af 11) + \u03b1D1(e\u2212u\u03c1\u00af 1i)|2 \u2265 1 \u2212\u03c4 2 \u03bb\u2032 1 X i\u0338=1 1 2w|Dig\u2032 \u00af 11|2 \u2212C\u03c4 \u03bb\u2032 1w(1 + |Du|2). (4.80) For any index i \u2208J, we have that \u03bb\u2032 i \u0338= \u03bb\u2032 1. We only keep indices i \u2208J in the summation, and use the de\ufb01nitions of J and case (B), to obtain \u22121 \u03bb\u2032 1 F i\u00af j,k\u00af \u2113D1g\u2032 \u00af jiD\u00af 1g\u2032 \u00af \u2113k \u2265 1 \u2212\u03c4 2 \u03bb\u2032 1 X i\u2208J F i\u00af i \u2212F 1\u00af 1 \u03bb\u2032 1 \u2212\u03bb\u2032 i |Dig\u2032 \u00af 11|2 \u2212C\u03c4 \u03bb\u2032 1w(1 + |Du|2) \u2265 1 \u03bb\u2032 12(1 \u2212\u03c4 2)1 \u2212\u03b4 1 + \u03b4 X i\u2208J F i\u00af i|Dig\u2032 \u00af 11|2 \u2212F = 1 \u03bb\u2032 12(1 \u2212\u03c4) X i\u2208J F i\u00af i|Dig\u2032 \u00af 11|2 \u2212F, (4.81) for \u03bb\u2032 1 \u226bK. In the last line we used the de\ufb01nition of \u03b4 (4.67). The main inequality becomes 0 \u2265\u03c6\u2032 2 |D \u00af Du|2 F g \u2212CK(\u03c8\u2032)2\u03b4\u22121F 1\u00af 1 + (\u03c8\u2032 \u2212C)F. 
(4.82) Choosing N \u226b1 such that \u03c8\u2032 \u2212C \u22650, we obtain 0 \u2265 1 4K F 1\u00af 1\u03bb\u2032 1 2 \u2212CK(\u03c8\u2032)2\u03b4\u22121F 1\u00af 1. (4.83) The estimate \u03bb\u2032 1 \u2264C(1 + K) follows. Q.E.D. 23 \f5 The Gradient Estimate The gradient estimate is immediate from the equation (4.6) when \u03b1 > 0 for eu \u226b1, since \u03c32(\u03bb\u2032) \u22650. In the present case, when \u03b1 < 0, we use the blow-up argument and Liouville theorem of Dinew-Kolodziej [7]. Proposition 6 Let u be a solution to (4.6) with parameter \u03b1 < 0 such that \u03bb\u2032 \u2208\u03932. Suppose the C0 estimate B\u22121 2 M0 \u2264eu \u2264B1M0 and C2 estimate |\u2202\u00af \u2202u| \u2264C(1+supX |Du|2) hold. There exists C > 1 such that |Du|2 \u2264C, (5.1) where C depends on (X, \u03c9), \u03c1, \u00b5, \u03b1, M0, B1, B2. Proof: Proceed by contradiction. Suppose there exists a sequence of functions uk : X \u2192R solving (4.6) with \u03bb\u2032 \u2208\u03932 such that |Duk(xk)| = Ck for some xk \u2208X and Ck \u2192\u221e. After taking a subsequence, we may assume that xk \u2192x\u221efor some x\u221e\u2208X. We take a coordinate chart centered at x\u221e, and assume that all xk are inside this coordinate chart. We shall take coordinates such that \u03c9(0) = \u03b2, where \u03b2 = P j idzj \u2227d\u00af zj. De\ufb01ne the local functions \u02c6 uk(x) = uk \u0012 x Ck + xk \u0013 . (5.2) These functions have the following properties |D\u02c6 u(x)| \u2264|D\u02c6 uk(0)| = 1, |\u2202\u00af \u2202\u02c6 uk| \u2264C C2 k (1 + C2 k) \u2264C, \u2225\u02c6 uk\u2225L\u221e\u2264C. (5.3) Elliptic estimates for the Laplacian show that \u02c6 uk is bounded is C1,\u03b1. Therefore, on any BR(0) there exists a subsequence (2n\u03b1)\u02c6 uk \u2192u\u221ein C1,\u03b2(BR(0)). De\ufb01ne \u03a6k : Cn \u2192Cn to be the map \u03a6k(x) = C\u22121 k x + xk. We introduce notation analogous to [35], \u03b2k = C2 k \u03a6\u2217 k\u03c9, \u03c7k = \u03a6\u2217 k(euk\u03c9 + \u03b1e\u2212uk\u03c1). (5.4) It follows that \u03a6\u2217 k(euk\u03c9 + \u03b1e\u2212uk\u03c1 + 2n\u03b1i\u2202\u00af \u2202uk) = \u03c7k + 2n\u03b1 i\u2202\u00af \u2202\u02c6 uk. (5.5) We also note the following convergence \u03b2k \u2192\u03b2, \u03c7k \u21920, in C\u221e loc. (5.6) Since \u03c7k + 2n\u03b1i\u2202\u00af \u2202\u02c6 uk is in the \u03932 cone, it follows that for any function v \u2208C2(BR(0)) such that (i\u2202\u00af \u2202v) \u2208\u03932, we have (\u03c7k +2n\u03b1i\u2202\u00af \u2202\u02c6 uk)\u2227i\u2202\u00af \u2202v \u2227\u03b2n\u22122 \u22650. This follows from Garding\u2019s inequality P i \u2202\u03c32(\u03bb) \u2202\u03bbi \u00b5i \u22652\u03c32(\u03bb)1/2\u03c32(\u00b5)1/2 for any \u03bb, \u00b5 \u2208\u03932. Hence upon taking a limit, we have i\u2202\u00af \u2202u\u221e\u2227i\u2202\u00af \u2202v \u2227\u03b2n\u22122 \u22650 (5.7) 24 \fin the sense of currents. This is the de\ufb01nition of 2-subharmonicity introduced by Blocki [3] that is required in the Liouville theorem of Dinew-Kolodziej [7]. Having shown that u\u221eis 2-subharmonic, we will show that it is maximal by proving (i\u2202\u00af \u2202u\u221e)2 \u2227\u03b2n\u22122 = 0 as a measure. 
After multiplying through by (C2 k)n\u22122 and pulling back by \u03a6k, the equation (4.6) solved by uk becomes the following equation for \u02c6 uk (\u03c7k + 2n\u03b1i\u2202\u00af \u2202\u02c6 uk)2 \u2227\u03b2n\u22122 k (5.8) = \u001a\u03bace2\u02c6 uk C4 k \u22124\u03b1\u03bace\u02c6 uk |D\u02c6 uk|2 C2 k + 2n\u03b12e\u2212\u02c6 uk \u02dc \u03c1a\u00af bDa(\u02c6 uk)D\u00af b(\u02c6 uk) C2 k \u22124n\u03b12 C3 k e\u2212\u02c6 ukRe\u27e8\u2202\u02c6 uk, \u00af \u2202\u03c1\u27e9\u03c9 + \u03b12 C4 k e\u22122\u02c6 uk\u03c32(\u03c1) + 2n\u03b12 C4 k e\u2212\u02c6 uk\u2206\u03c9\u03c1 + \u03b1(n \u22121) C4 k ga\u00af b\u03c1\u00af ba \u22122n\u03b1 C4 k \u00b5 (n \u22122)! \u001b (\u03c9 \u25e6\u03a6k)2 \u2227\u03b2n\u22122 k . Since \u02c6 uk is uniformly bounded, |D\u02c6 uk| \u22641 and Ck \u2192\u221e, we see that the right hand side tends to zero. Combining this with (5.6), we may conclude (i\u2202\u00af \u2202\u02c6 uk)2 \u2227\u03b2n\u22122 \u21920, (5.9) in the sense of currents. Since 2n\u03b1 \u02c6 uk \u2192u\u221elocally uniformly, it is well-known (e.g. [6] Chapter III Cor. 3.6) that (2n\u03b1 i\u2202\u00af \u2202\u02c6 uk)2 \u2227\u03b2n\u22122 \u2192(i\u2202\u00af \u2202u\u221e)2 \u2227\u03b2n\u22122 weakly. Thus (i\u2202\u00af \u2202u\u221e)2 \u2227\u03b2n\u22122 = 0, (5.10) in the sense of Bedford-Taylor [4]. Since u\u221eis a bounded maximal 2-subharmonic function in Cn with bounded gradient, by the Liouville theorem of Dinew-Kolodziej [7], u\u221emust be constant. We obtain a contradiction, since |Du\u221e|2(0) = 1. Q.E.D. 6 Solving the Fu-Yau equation We return to the continuity method (2.1) i\u2202\u00af \u2202(eut\u03c9 \u2212t\u03b1e\u2212ut\u03c1) \u2227\u03c9n\u22122 + n\u03b1i\u2202\u00af \u2202ut \u2227i\u2202\u00af \u2202ut \u2227\u03c9n\u22122 + t\u00b5\u03c9n n! = 0. (6.1) We combine our estimates to establish Proposition 7 Let \u03b1 < 0. There exists M\u2032 \u226b1 such that for all M0 \u2265M\u2032, the following holds. Let u0 = log M0, and suppose that for all t \u2208[0, t0) with t0 \u22641 there exists a solution ut to (6.1) such that \u03bb\u2032 (t,ut) \u2208\u03932 and R X eut = M0. Then there exist constants C > 1 and 0 < \u03b3 < 1 only depending on (X, \u03c9), \u03c1, \u03b1, \u00b5 and M0 such that \u2225ut\u2225C2,\u03b3 \u2264C, \u03c32(\u03bb\u2032 (t,ut)) \u22651 C . (6.2) 25 \fProof: Combining our C0, C1 and C2 estimates yields \u2225ut\u2225C2 \u2264C. When \u03b1 < 0, from (4.6) it is clear that for M0 \u226b1 we have a lower bound for \u03c32(\u03bb\u2032). From (4.63), we see that \u03c32(\u03bb\u2032)1/2 is a concave uniformly elliptic operator, with right-hand side in C\u03b3\u2032 for some 0 < \u03b3\u2032 < 1. We may apply the same argument as in [27] using a result by Tosatti-WangWeinkove-Yang [36], extending an original argument of Wang [38], to obtain \u2225ut\u2225C2,\u03b3 \u2264C. Q.E.D. Since I is open and contains 0, it contains the interval [0, \u02c6 t). Consider any sequence tk \u2208I converging to \u02c6 t. Then there exists (tk, uk) \u2208B1 satisfying (2.1), and by Proposition 7, the estimate \u2225uk\u2225C2,\u03b3 \u2264C holds. By Arzela-Ascoli, after passing to a subsequence we have that uk \u2192\u02c6 u in C2,\u03b3. By the lower bound for \u03c32(\u03bb\u2032) in Proposition 7, we have \u03bb\u2032 (\u02c6 t,\u02c6 u) \u2208\u03932. Taking the limit yields (\u02c6 t, \u02c6 u) \u2208B1 and \u03a8(\u02c6 t, \u02c6 u) = 0, hence I contains \u02c6 t. 
It follows that I is closed, and together with the fact that it is already known to be open and not empty, that I = [0, 1]. We have shown the existence of a C2,\u03b3(X, R) solution to (2.1) at t = 1. Di\ufb00erentiating \u03c32(\u03bb\u2032)1/2 yields (4.13) and (4.14). We know that F j\u00af kDjD\u00af k is uniformly elliptic with coe\ufb03cients in C\u03b3. Since \u03c32(\u03bb\u2032) is a smooth function of (z, u, Du), we have that \u2202p\u03c31/2 2 is in C\u03b3. By Schauder estimates and a bootstrapping argument, we see that this solution u is smooth. This establishes existence of solutions to the Fu-Yau equation when \u03b1 < 0. 7 Application to the Strominger system By a construction of Fu and Yau [15, 16], solutions of the Fu-Yau equation can be viewed as particular solutions of the Strominger system. Our goal is this section is to describe brie\ufb02y a speci\ufb01c example due to [16], which satis\ufb01es \u03b1 = \u22122 and \u00b5 = 0, so that Theorem 1 is directly applicable. Let X be a compact Calabi-Yau manifold of dimension n with Ricci-\ufb02at metric \u03c9X and nowhere vanishing holomorphic (n, 0) form \u2126X. Let \u03c91 2\u03c0, \u03c92 2\u03c0 \u2208H2(X, Z) be primitive harmonic (1, 1) forms. This data determines a T 2 \ufb01bration \u03c0 : M \u2192X with a 1-form \u03b8, such that for every u \u2208C\u221e(X, R), the Hermitian form \u03c9u = \u03c0\u2217(eu\u03c9X) + i 2\u03b8 \u2227\u00af \u03b8, (7.1) is a metric \u03c9u > 0 on M, and \u2126= \u2126X \u2227\u03b8, (7.2) is a nowhere vanishing holomorphic (n + 1, 0) form. Furthermore, the balanced condition (1.6) is satis\ufb01ed for any \u03c9u. If we take a stable vector bundle E over (X, \u03c9X) with Hermitian-Einstein metric H (whose existence is guaranteed by the Donaldson-UhlenbeckYau theorem), then the system (\u03c0\u2217(E), \u03c0\u2217(H), M, \u03c9u) automatically satisfy the conditions 26 \f(1.4) and (1.6) in the Strominger system. Thus it remains only to solve the last condition (1.5), which is equivalent, as explained earlier in the Introduction, to the Fu-Yau equation (1.1) on X with \u00b5 given explicitly by (1.8). (The exact form of the form \u03c1 is not needed for our considerations.) The following explicit example with \u03b1 = \u22122 and \u00b5 = 0 was given in [16]. Choose line bundles L1, L2 equipped with metrics h1, h2, such that their curvature forms satisfy iFh1 = \u2212i\u2202\u00af \u2202log h1 = \u03c91 and iFh2 = \u2212i\u2202\u00af \u2202log h2 = \u03c92. Let E = L1 \u2295L2 \u2295T (1,0)X, H = (h1, h2, \u03c9X). (7.3) The curvature of (E, H) is iFH = diag(\u03c91, \u03c92, iRX). Since \u03c91 and \u03c92 are primitive and RX is Ricci-\ufb02at, we have FH \u2227\u03c9n\u22121 X = 0. Using that for a primitive (1, 1) form \u03b7 we have \u2217\u03c9X\u03b7 = \u2212 1 (n\u22122)!\u03c9n\u22122 X \u2227\u03b7, we compute \u03b1 4 Tr(FH \u2227FH \u2212RX \u2227RX) \u2227\u03c9n\u22122 X = \u22121 2(\u2212\u03c91 \u2227\u03c91 \u2212\u03c92 \u2227\u03c92) \u2227\u03c9n\u22122 X = \u2212(n \u22122)! 2 (\u2225\u03c91\u22252 \u03c9X + \u2225\u03c92\u22252 \u03c9X)\u03c9n X n! . (7.4) It follows that \u00b5 = 0 in this case. The system (\u03c0\u2217E, \u03c0\u2217H, M, \u03c9u) satis\ufb01es the modi\ufb01ed Strominger system. 
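For readers checking (7.4), the trace computation can be made explicit; the following intermediate step is implicit in the text above (our own expansion, using only that $iF_H$ is block-diagonal). Since $F_H = \mathrm{diag}(-i\omega_1, -i\omega_2, R_X)$, $$\mathrm{Tr}(F_H\wedge F_H) = -\,\omega_1\wedge\omega_1 - \omega_2\wedge\omega_2 + \mathrm{Tr}(R_X\wedge R_X),$$ so that $\mathrm{Tr}(F_H\wedge F_H - R_X\wedge R_X) = -\omega_1\wedge\omega_1 - \omega_2\wedge\omega_2$, and with $\alpha = -2$ the prefactor $\frac{\alpha}{4} = -\frac{1}{2}$ produces exactly the middle expression in (7.4). The primitivity identity $\star_{\omega_X}\eta = -\frac{1}{(n-2)!}\,\omega_X^{n-2}\wedge\eta$ then converts each $\omega_i\wedge\omega_i\wedge\omega_X^{n-2}$ into $-(n-2)!\,\|\omega_i\|^2_{\omega_X}\,\frac{\omega_X^n}{n!}$, giving the final expression.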
8 Other generalizations of the Strominger system in terms of higher Chern classes Finally, we would like to observe that another natural generalization of the Strominger system may be FH \u2227\u03c9n = 0, F 2,0 H = F 0,2 H = 0 (8.1) i\u2202\u00af \u2202(\u2225\u2126\u2225 2(n\u22122) n \u03c9 \u03c9n\u22121) \u2212\u03b1 4 (Tr R \u2227\u00b7 \u00b7 \u00b7 \u2227R \u2212Tr FH \u2227\u00b7 \u00b7 \u00b7 \u2227FH) = 0, (8.2) d \u0012 \u2225\u2126\u2225 2(n\u22121) n \u03c9 \u03c9n \u0013 = 0. (8.3) Here E \u2192M is a holomorphic vector bundle over a compact complex manifold of dimension n+1, and H and \u03c9 are Hermitian metrics on E and on M respectively. The left-hand side in (8.2) is an (n, n)-form, so that the curvatures R and F in each wedge product appears n times, giving the n-th Chern classes for T 1,0(M) and for E respectively. On a Goldstein-Prokushkin \ufb01bration, it is easy to see that the Hermitian metrics \u03c9u satisfy the conformally balanced condition (8.3) for any smooth function u on the n-dimensional base Calabi-Yau manifold X. By choosing as before a stable bundle E with its corresponding Hermitian-Einstein metric H, we can then reduce this system to the sole equation (8.2). 27 \fThis equation is in turn a scalar equation involving complex Hessian operators which may be of interest in itself. The above system also leads to a natural generalization to arbitrary dimension n+ 1 of the anomaly \ufb02ow de\ufb01ned in [29] for n+ 1 = 3. We shall return to these issues elsewhere. Acknowledgements: The authors are very grateful to Li-Sheng Tseng for discussions on modi\ufb01cations of the Strominger system." + }, + { + "url": "http://arxiv.org/abs/1508.03315v2", + "title": "Geometric flows and Strominger systems", + "abstract": "A geometric flow on $(2,2)$-forms is introduced which preserves the balanced\ncondition of metrics, and whose stationary points satisfy the anomaly equation\nin Strominger systems. The existence of solutions for a short time is\nestablished, using Hamilton's version of the Nash-Moser implicit function\ntheorem.", + "authors": "Duong H. Phong, Sebastien Picard, Xiangwen Zhang", + "published": "2015-08-13", + "updated": "2017-04-10", + "primary_cat": "math.DG", + "cats": [ + "math.DG", + "math.CV" + ], + "main_content": "Introduction The Strominger system is a system of equations for a metric on a 3-dimensional complex manifold X equipped with a nowhere vanishing holomorphic 3-form, and a Hermitian metric on a holomorphic vector bundle over X. It is of considerable interest both in physics, where it is the equation for supersymmetric compacti\ufb01cations of the heterotic string to a four-dimensional space-time, and in mathematics, where it is a non-K\u00a8 ahler generalization of a Calabi-Yau metric [34] coupled to a Hermitian-Einstein connection [5, 33]. The \ufb01rst mathematically rigorous solutions to the Strominger system were found by Li-Yau [21] and Fu-Yau [14, 15], and more solutions were found in [1, 2, 7, 8, 9, 10, 11, 12, 17, 23, 32]. Other solutions on physical grounds were studied by physicists in e.g. [3, 4, 6]. Generalizations of the Fu-Yau equation to higher dimensions are considered in [24, 25]. The general solution appears out of reach at the present time. The main goal of this paper is to propose a geometric \ufb02ow of (2, 2)-forms, whose stationary points provide solutions of the Strominger system. 
We call it the Anomaly flow, as its curvature terms are the local characteristic classes arising in gravitational and Yang-Mills anomalies in string theory. In this paper, we present some evidence that the Anomaly flow may provide a viable approach to Strominger systems. In particular, we show that it preserves the balanced property of metrics, that it also admits a more conventional description as a flow of Hermitian metrics by their curvatures, and that short-time solutions always exist for small values of the string tension parameter. Estimates for solutions and criteria for long-time existence and convergence are relegated to later work. 2 The Anomaly flow Let $X$ be a compact 3-dimensional complex manifold which admits a nowhere vanishing holomorphic $(3,0)$-form $\Omega$. Let $E \to X$ be a holomorphic vector bundle over $X$. Let $\omega_0$
The Anomaly \ufb02ow for X with given \u03a60 is de\ufb01ned by \u2202t(\u2225\u2126\u2225\u03c9\u03c92) = i\u2202\u00af \u2202\u03c9 \u2212\u03b1\u2032 4 (Tr R \u2227R \u2212\u03a60) (2.4) with initial condition \u03c9(0) = \u03c90. The following simple theorem provides the motivation for the Anomaly \ufb02ows: Theorem 1 Let X, E, \u2126be as above, and consider the equations (2.1) or (2.4) on either (X, E) or X, with initial metrics \u03c90 and H0. 2 \f(a) The equation (2.4) is well-de\ufb01ned as a \ufb02ow, i.e., it de\ufb01nes a vector \ufb01eld on the space of metrics, or equivalently, a vector \ufb01eld on the space of positive Hermitian (2, 2)forms. The \ufb02ows are local, in the sense that the vector \ufb01elds are given by local expressions in the underlying Hermitian metrics (or the underlying (2, 2)-form). Similarly for the equation (2.1), which de\ufb01nes now a vector \ufb01eld on the direct sum of the space of Hermitian metrics on X (or positive (2, 2)-forms) with the space of Hermitian metrics on E. (b) Assume that the Anomaly \ufb02ows admit a smooth solution on some time interval [0, T). If the initial metric \u03c90 satis\ufb01es the balanced condition 1 d(\u2225\u2126\u2225\u03c90\u03c92 0) = 0 (2.5) then the metric \u03c9(t) will satisfy the same balanced condition, namely d(\u2225\u2126\u2225\u03c9t\u03c92 t ) = 0, for any t in its time interval [0, T) of existence. (c) Assume that the Anomaly \ufb02ow for (X, E) exists for all time [0, \u221e), and that the initial metric satis\ufb01es the balanced-like condition (2.5). If the \ufb02ow (\u03c9(t), H(t)) converges to a pair (\u03c9\u221e, H\u221e) of Hermitian metrics on X and E, then this pair satis\ufb01es the Strominger system of equations 2 F\u221e\u2227\u03c92 \u221e= 0 F 2,0 \u221e= F 0,2 \u221e= 0 i\u2202\u00af \u2202\u03c9\u221e= \u03b1\u2032 4 (TrR\u221e\u2227R\u221e\u2212TrF\u221e\u2227F\u221e) d \u0010 \u2225\u2126\u2225\u03c9\u221e\u03c92 \u221e \u0011 = 0. (2.6) Similarly, if the Anomaly \ufb02ow on X with given \u03a60 converges, then the limiting metric \u03c9\u221e satis\ufb01es i\u2202\u00af \u2202\u03c9\u221e= \u03b1\u2032 4 (Tr(R\u221e\u2227R\u221e) \u2212\u03a60) d \u0010 \u2225\u2126\u2225\u03c9\u221e\u03c92 \u221e \u0011 = 0. (2.7) We note that the condition d(\u2225\u2126\u2225\u03c9\u221e\u03c92 \u221e) = 0 has been shown by Li and Yau [21] to be equivalent to the condition d\u2020\u03c9\u221e= i(\u00af \u2202\u2212\u2202) log \u2225\u2126\u2225\u03c9\u221e, so that the system (2.6) is indeed equivalent to the system originally written down by Strominger [30]. We also note that the equations for H in the Strominger system are just the HermitianEinstein equation for a Chern unitary connection, and the \ufb02ow of H in (2.1) is of course the well-known Donaldson heat \ufb02ow, which is gauge equivalent to the Yang-Mills \ufb02ow. 1The usual balanced condition for a Hermitian metric \u03c9 in dimension n is d\u03c9n\u22121 = 0. For simplicity, we use the same terminology for the slight modi\ufb01cation used in (2.5). 2It has recently been brought to our attention that the same system of equations for supersymmetric compacti\ufb01cations was independently proposed by C. Hull [19, 20]. 3 \fNext, we consider the issue of short-time solutions for the \ufb02ows, and when they would be parabolic. 
For a fixed metric $\omega$, we define a modification $\tilde\star$ of the Hodge operator $\star_\omega$, mapping the space of $(2,2)$-forms $\delta\Psi$ to the space of $(1,1)$-forms, by
$$\tilde\star\,\delta\Psi = \frac{1}{2\|\Omega\|_\omega}\big(\langle\star\,\delta\Psi,\,\omega\rangle_\omega\,\omega - \star\,\delta\Psi\big). \qquad (2.8)$$
Next, we view the curvature tensor $R_{\bar kj}{}^p{}_q$ of $\omega$ as the operator $Rm$ from the space of $(1,1)$-forms $\delta\omega$ into itself given by
$$Rm(\delta\omega)_{\bar kj} = R_{\bar kj}{}^{p\bar q}\,(\delta\omega)_{\bar qp}. \qquad (2.9)$$
We can then define the following linear differential operator $\tilde\Delta$ of order 2 on the space of Hermitian tensors $\delta\Psi$ of type $(2,2)$,
$$\tilde\Delta(\delta\Psi) = i\partial\bar\partial\Big(\tilde\star\,\delta\Psi - \frac{\alpha'}{2}\,Rm(\tilde\star\,\delta\Psi)\Big). \qquad (2.10)$$
We note that the range of $\tilde\Delta$ is contained in the space of closed $(2,2)$-forms. We shall say that the operator $\tilde\Delta$ is elliptic on the space of closed $(2,2)$-forms if its symbol, restricted to the null space of the symbol of the exterior derivative $d$, admits only eigenvalues with strictly positive real parts. We then have the following theorem:

Theorem 2. Let $(X,\Omega,E)$ be as before, and consider the flows (2.1) and (2.4), with initial Hermitian metrics $\omega_0$ and $H_0$ on $X$ and $E$ respectively. If the operator $\tilde\Delta$ with respect to the initial metric $\omega_0$ is elliptic on the space of closed $(2,2)$-forms, in the above sense, then both flows (2.1) and (2.4) admit a smooth solution on some non-trivial finite time interval.

3 Proof of Theorem 1

Let $\Psi = \|\Omega\|_\omega\,\omega^2$. The essence of Theorem 1 is that $\omega$ can be recaptured from $\Psi$ (and of course vice versa) by purely local expressions. For this we need to discuss the issue of the $(n-1)$-th root of a positive $(n-1,n-1)$-form in some detail.

3.1 The $(n-1)$-th root of an $(n-1,n-1)$-form

In general, in dimension $n$, let $\Phi$ be an $(n-1,n-1)$-form which is positive definite, in the sense that $\Phi\wedge i\,\eta\wedge\bar\eta$ is a positive $(n,n)$-form for any non-zero $(1,0)$-form $\eta$, and equals $0$ if and only if $\eta=0$. Michelsohn [22] has shown that there exists a unique positive $(1,1)$-form $\omega$ with
$$\omega^{n-1} = \Phi. \qquad (3.1)$$
We need a usable formula for $\omega$, which can be obtained as follows. Let $\Phi$ be expressed, as in [22, 31], by
$$\Phi = i^{n-1}(n-1)!\sum_{k,j}\mathrm{sgn}(k,j)\,\Phi^{k\bar j}\; dz^1\wedge d\bar z^1\wedge\cdots\wedge\widehat{dz^k}\wedge d\bar z^k\wedge\cdots\wedge dz^j\wedge\widehat{d\bar z^j}\wedge\cdots\wedge dz^n\wedge d\bar z^n, \qquad (3.2)$$
where the hats indicate omitted factors, and $\mathrm{sgn}(k,j) = -1$ if $k > j$ and $\mathrm{sgn}(k,j) = 1$ otherwise. One advantage of this representation is that $\Phi^{k\bar j}$ is a Hermitian matrix. The $(n-1)$-th root $\omega = i\,g_{\bar jk}\,dz^k\wedge d\bar z^j$ of $\Phi$ is then given by
$$g_{\bar jk} = (\det g)\,(\Phi^{-1})_{\bar jk}, \qquad (3.3)$$
where $(\Phi^{-1})_{\bar jk}$ is the inverse matrix of $\Phi^{k\bar j}$, i.e., $\Phi^{k\bar j}(\Phi^{-1})_{\bar j\ell} = \delta^k{}_\ell$. To see this, note that the entry $(\omega^{n-1})^{j\bar k}$ in the product $\omega^{n-1}$ is obtained by taking the product of the entries, with corresponding permutation signs and an $(n-1)!$ factor, of the matrix obtained from $g_{\bar pq}$ by removing the $j$-th row and the $k$-th column; in other words, it is the $j\bar k$ cofactor of the matrix $(g_{\bar pq})$, and (3.3) is just a reformulation of this statement.
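Formula (3.3) is easy to confirm numerically in matrix form: for $n=3$, the matrix avatar of $\omega^{n-1}$ is the cofactor matrix $\det(g)\,g^{-1}$, and $\det\Phi = (\det g)^{n-1}$ lets one invert the root. A minimal sketch (the helper names and test data are ours, for illustration only):

```python
import numpy as np

n = 3
rng = np.random.default_rng(1)

# Toy positive-definite Hermitian metric g
A = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
g = A @ A.conj().T + n * np.eye(n)

# Matrix avatar of omega^{n-1}: the cofactor matrix of g
Phi = np.linalg.det(g).real * np.linalg.inv(g)

# Recover g from Phi alone: det(Phi) = det(g)^{n-1}, then formula (3.3)
det_g = np.linalg.det(Phi).real ** (1.0 / (n - 1))
g_recovered = det_g * np.linalg.inv(Phi)

print(np.allclose(g, g_recovered))   # True: (3.3) inverts the (n-1)-th root
```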
The notion of $(n-1)$-th root is independent of any metric. Nevertheless, it can be useful to express it in terms of the Hodge star operator. If $\tilde\omega = i\,\tilde g_{\bar kj}\,dz^j\wedge d\bar z^k$ is any metric, recall that the Hodge star operator $\star_{\tilde\omega}$ with respect to $\tilde\omega$ is defined by the equation
$$\phi\wedge\Phi = \langle\phi,\,\star_{\tilde\omega}\Phi\rangle_{\tilde\omega}\,\frac{\tilde\omega^n}{n!} \qquad (3.4)$$
for any $(1,1)$-form $\phi = \phi_{\bar kj}\,dz^j\wedge d\bar z^k$. The left-hand side is easily recognized to be
$$i^{-1}(n-1)!\,\phi_{\bar kj}\Phi^{j\bar k}\prod_{\ell=1}^n i\,dz^\ell\wedge d\bar z^\ell = -i\,\frac{\phi_{\bar kj}\Phi^{j\bar k}}{\det\tilde g}\,\frac{\tilde\omega^n}{n}. \qquad (3.5)$$
Here we use the fact that $(\Phi^{j\bar k})$ is Hermitian. We can also write the right-hand side of (3.4) as
$$\tilde g^{p\bar k}\,\tilde g^{j\bar q}\,\phi_{\bar kj}\,\overline{(\star_{\tilde\omega}\Phi)_{\bar pq}}\,\frac{\tilde\omega^n}{n!}. \qquad (3.6)$$
This implies that
$$(\star_{\tilde\omega}\Phi)_{\bar pq} = i\,\frac{(n-1)!}{\det\tilde g}\,\Phi^{k\bar j}\,\tilde g_{\bar pk}\,\tilde g_{\bar jq}. \qquad (3.7)$$
As a check, we note that, since the expression in (3.5) is an $(n,n)$-form, the factor $\phi_{\bar kj}\Phi^{j\bar k}/\det\tilde g$ must be a scalar, and hence $\Phi^{k\bar j}$ should be interpreted as a section of $(\Lambda^{1,1})^*\otimes K_X\otimes \overline{K_X}$. This is consistent with the fact that the expression given in (3.7) is a $(1,1)$-form. Taking $\tilde\omega$ in (3.7) to be $\omega$ itself, we obtain $\star_\omega\Phi = (n-1)!\,\omega$, a formula that can also easily be seen using an orthonormal basis for $\omega$. Henceforth, we shall suppress the subindex $\omega$ in the star operator when the metric $\omega$ is implicit.

3.2 The relation $\Psi = \|\Omega\|_\omega\,\omega^2$

We return to the setting of a 3-fold $X$, equipped with a fixed nowhere vanishing $(3,0)$-form $\Omega$. We will use the notation $|\Omega|^2 = \Omega\bar\Omega$ and $\|\Omega\|^2_\omega = \Omega\bar\Omega\,(\det g)^{-1}$. If $\Psi$ is any positive $(2,2)$-form, we claim that there is a unique positive $(1,1)$-form $\omega$ so that $\Psi = \|\Omega\|_\omega\,\omega^2$. Indeed, this equation determines the norm $\|\Omega\|_\omega$, since taking determinants gives
$$\det\Psi = \big(|\Omega|^2(\det g)^{-1}\big)^{3/2}(\det g)^2 \qquad (3.8)$$
and hence
$$(\det g)^{1/2} = \frac{\det\Psi}{|\Omega|^3}. \qquad (3.9)$$
This determines $\det g$ in terms of $\Psi$, and hence $\|\Omega\|_\omega$ in terms of $\Psi$. We can then obtain $\omega$ as the square root of the positive $(2,2)$-form $\|\Omega\|_\omega^{-1}\Psi$. The relations (3.3) and (3.7) become
$$g_{\bar jk} = \frac{\det\Psi}{|\Omega|^2}\,(\Psi^{-1})_{\bar jk}, \qquad \star_\omega\Psi = 2\,\|\Omega\|_\omega\,\omega. \qquad (3.10)$$

3.3 Proof of Theorem 1, Part (a)

It is now straightforward to relate the variations of $\omega$ to the variations of $\Psi$. Differentiating the first equation of (3.10) gives
$$\delta g_{\bar jk} = \frac{1}{\|\Omega\|_\omega\,\det g}\Big(g_{\bar qp}\,\delta\Psi^{p\bar q}\,g_{\bar jk} - g_{\bar jp}\,\delta\Psi^{p\bar q}\,g_{\bar qk}\Big). \qquad (3.11)$$
In intrinsic notation, using (3.7), this can be rewritten as
$$\delta\omega = \frac{1}{2\|\Omega\|_\omega}\big(\langle\star\,\delta\Psi,\omega\rangle_\omega\,\omega - \star\,\delta\Psi\big) = \tilde\star\,\delta\Psi, \qquad (3.12)$$
in view of the definition (2.8) of the operator $\tilde\star$. (Formulas (3.9)–(3.11) can be confirmed numerically; see the sketch below.)
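As a concrete check (a toy matrix model, with $n=3$ and a scalar stand-in for the pointwise value $|\Omega|$; all helpers below are invented for the illustration), one can differentiate the recovery map $\Psi\mapsto g$ of (3.10) by finite differences and compare against the matrix form of (3.11), namely $\delta g = \big(\mathrm{tr}(g\,\delta\Psi)\,g - g\,\delta\Psi\,g\big)/(\|\Omega\|_\omega\det g)$:

```python
import numpy as np

rng = np.random.default_rng(2)
n, absOmega = 3, 1.7          # absOmega: stand-in for the pointwise value |Omega|

def rand_herm_pd(n):
    A = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
    return A @ A.conj().T + n * np.eye(n)

def rand_herm(n):
    B = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
    return (B + B.conj().T) / 2

g = rand_herm_pd(n)

def recover_g(Psi):
    """Recovery map (3.9)-(3.10): g = (det Psi / |Omega|^2) Psi^{-1}."""
    return (np.linalg.det(Psi).real / absOmega**2) * np.linalg.inv(Psi)

# Matrix avatar of Psi = ||Omega||_w w^2 for n = 3:  Psi = ||Omega|| det(g) g^{-1}
norm_Omega = absOmega / np.sqrt(np.linalg.det(g).real)
Psi = norm_Omega * np.linalg.det(g).real * np.linalg.inv(g)
assert np.allclose(recover_g(Psi), g)

# Finite-difference derivative of the recovery map vs. the matrix form of (3.11)
dPsi, eps = rand_herm(n), 1e-6
fd = (recover_g(Psi + eps * dPsi) - recover_g(Psi - eps * dPsi)) / (2 * eps)
formula = (np.trace(g @ dPsi) * g - g @ dPsi @ g) / (norm_Omega * np.linalg.det(g).real)
print(np.allclose(fd, formula, atol=1e-5))   # True
```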
In particular, along the Anomaly flow, we can replace $\delta\omega$ by $\partial_t\omega$ and $\delta\Psi$ by $\partial_t\Psi = \partial_t(\|\Omega\|_\omega\omega^2)$. We obtain in this way an equation giving $\partial_t\omega$ in terms of $\omega$ and its curvature, which is the more conventional description of a geometric flow of Hermitian metrics. Equivalently, the flows can be written as flows of $(2,2)$-forms $\Psi$, given by a local vector field on the space of positive Hermitian $(2,2)$-forms.

3.4 Proof of Theorem 1, Parts (b) and (c)

The only non-trivial part of Theorem 1 is the conceptual part due to the issue of taking square roots. Once this issue has been clarified, the proof is straightforward. Part (b) follows immediately from the fact that the right hand sides of (2.1) and (2.4) are always closed forms. This follows in turn from the fact that $d\partial\bar\partial\omega = d^2\bar\partial\omega = 0$, and that both $\mathrm{Tr}(R\wedge R)$ and $\mathrm{Tr}(F\wedge F)$ are well-known closed representatives of the Chern classes $c_2(T^{1,0})$ and $c_2(E)$ of the bundles $T^{1,0}(X)$ and $E$. Thus
$$\partial_t\big(d(\|\Omega\|_\omega\omega^2)\big) = d\,\partial_t\big(\|\Omega\|_\omega\omega^2\big) = 0, \qquad (3.13)$$
and $d(\|\Omega\|_\omega\omega^2)=0$ for all time $t$, if $d(\|\Omega\|_\omega\omega^2)=0$ at $t=0$.

Part (c) follows immediately from Part (b), which guarantees that the balanced-like equation in the Strominger system is satisfied for all time. The equation $F^{2,0}_\infty = F^{0,2}_\infty = 0$ is automatic for all Chern connections. The other equations are obvious consequences of the fact that the limiting metric $\omega_\infty$ must be stationary.

4 Short-time existence and parabolicity

To obtain short-time existence, we consider the Anomaly flows as an evolution equation. Let $V$ be a smooth vector bundle over a compact manifold $X$, and consider the equation
$$\partial_t\psi = E(\psi), \qquad (4.1)$$
where $E(\psi)$ is a non-linear differential operator of order 2, acting on the sections $\psi$ of $V$. If the eigenvalues of the symbol $\sigma(\delta E(\psi))(x,\xi)$ of the linearization $\delta E$ of $E$ have strictly positive real parts for $\xi\neq 0$, $(x,\xi)\in T^*(X)$, then the equation is parabolic, and the evolution equation with initial data $\psi$ admits a solution for short time. More generally, we have the following version of the Nash-Moser theorem, as formulated by Hamilton [18], and applied by him to show the existence of short-time solutions for the Ricci flow:

Lemma 1. Let $L: C^\infty(V)\to C^\infty(W)$ be a linear differential operator of order 1 with values in another vector bundle $W$. Assume that
(a) the composition $Q(\Psi) = L(\Psi)E(\Psi)$ is a differential operator of order at most 1;
(b) the symbol $\sigma(\delta E(\Psi))(x,\xi)$ has eigenvalues with strictly positive real parts when restricted to the kernel of the symbol $\sigma(\delta L(\Psi))(x,\xi)$.
Then the initial value problem (4.1) admits a unique solution for short time.

4.1 Proof of Theorem 2

We consider first the notationally simpler case of the Anomaly flow (2.4) on $X$. In this case, the bundle $V$ is the bundle $\Lambda^{2,2}(X)$ of $(2,2)$-forms, the sections $\psi$ are the $(2,2)$-forms $\Psi$, and $E(\psi)$ is given by the right hand side of (2.4).
The linearization of $i\partial\bar\partial\omega$ follows readily from the equation (3.12):
$$i\partial\bar\partial\,\delta\omega = i\partial\bar\partial\Big(\frac{1}{2\|\Omega\|_\omega}\big(\langle\star\,\delta\Psi,\omega\rangle_\omega\,\omega - \star\,\delta\Psi\big)\Big) = i\partial\bar\partial\big(\tilde\star\,\delta\Psi\big), \qquad (4.2)$$
in the notation of (2.8). Next, we determine the linearization of the curvature terms in the Anomaly flow. The variation of the curvature $F$ of a unitary Chern connection under a variation $\delta H$ of the Hermitian metric is given by (see e.g. [29])
$$\delta F = \bar\partial\,\partial_H\big(H^{-1}\delta H\big), \qquad (4.3)$$
where $\partial_H$ denotes the covariant exterior derivative in the unbarred directions. In particular,
$$\delta\,\mathrm{Tr}(F\wedge F) = 2\,\mathrm{Tr}\big(F\wedge\bar\partial\partial_H(H^{-1}\delta H)\big). \qquad (4.4)$$
In view of the Bianchi identity $d_H F = 0$, this can be rewritten as
$$\delta\,\mathrm{Tr}(F\wedge F) = -2\,\partial\bar\partial\,\mathrm{Tr}\big(F H^{-1}\delta H\big). \qquad (4.5)$$
Specializing to the case where the vector bundle is $T^{1,0}(X)$, we obtain
$$\delta\,\mathrm{Tr}(R\wedge R) = -2\,\partial\bar\partial\big(R_{\bar kj}{}^{\alpha\bar\beta}\,\delta g_{\bar\beta\alpha}\,dz^j\wedge d\bar z^k\big).$$
It follows that
$$\delta\,\mathrm{Tr}(R\wedge R) = 2i\,\partial\bar\partial\big(R_{\bar kj}{}^{\alpha\bar\beta}(\delta\omega)_{\bar\beta\alpha}\,dz^j\wedge d\bar z^k\big) = 2i\,\partial\bar\partial\big(Rm(\delta\omega)\big) = 2i\,\partial\bar\partial\big(Rm(\tilde\star\,\delta\Psi)\big), \qquad (4.6)$$
where we view the Riemann curvature tensor as an operator on $(1,1)$-forms, as explained in §2. Combining the formulas (4.2) and (4.6) gives the linearization of $E(\Psi)$ in the case of the Anomaly flow on $X$:
$$\delta E(\delta\Psi) = \tilde\Delta(\delta\Psi). \qquad (4.7)$$
We apply Hamilton's version of the Nash-Moser implicit function theorem with the choice $L = d$ on $(2,2)$-forms. Because the right hand side $E$ of the Anomaly flow is a closed $(2,2)$-form, we do have $LE = 0$. The condition of ellipticity of the operator $\tilde\Delta$ restricted to the space of closed $(2,2)$-forms, as formulated in §2, is precisely the condition which allows Hamilton's version of the Nash-Moser implicit function theorem to apply. Thus the existence of solutions to the equation (2.4) for short time follows.

The case of the Anomaly flow on $(X,E)$ can be treated in the same manner. We view the flow as being of the form
$$\partial_t\Psi = E(\Psi, H), \qquad \partial_t H = F(\Psi, H), \qquad (4.8)$$
with the pair $(\Psi, H)$ given by sections of the direct sum of the bundle of $(2,2)$-forms with the bundle of Hermitian quadratic forms on $E$. Clearly $E$ and $F$ are non-linear differential operators of order 2. Applying the formulas for variations of curvature to the bundles $T^{1,0}$ and $E$, we find
$$\delta E = \partial\bar\partial\Big(i\,\delta\omega + \frac{\alpha'}{2}\big(\mathrm{Tr}(R\,g^{-1}\delta g) - \mathrm{Tr}(F H^{-1}\delta H)\big)\Big), \qquad \delta F = -H\Lambda\bar\partial\partial_H(H^{-1}\delta H) - H\,\delta g^{j\bar k}F_{\bar kj} - \delta H\,\Lambda F, \qquad (4.9)$$
where $F_{\bar kj}$ is viewed as a $(1,1)$-form valued in the bundle of endomorphisms of $E$. It follows that the symbols of the linearizations of $E$ and $F$ are given by
$$\sigma(\delta E): (\delta\Psi,\delta H)\mapsto \sigma(\delta E_X)(\delta\Psi) - \frac{\alpha'}{2}\,\xi\wedge\bar\xi\wedge\mathrm{Tr}(FH^{-1}\delta H), \qquad \sigma(\delta F): (\delta\Psi,\delta H)\mapsto |\xi|^2\,\delta H, \qquad (4.10)$$
where we have temporarily denoted by $E_X$ the expression on the right hand side of the Anomaly flow on $X$. (The variational formula (4.3) underlying these computations is purely algebraic and can be checked symbolically; see the sketch below.)
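A symbolic confirmation of (4.3) in a local holomorphic frame, in one complex variable, using the Wirtinger trick of treating $z$ and $\bar z$ (here `w`) as independent variables. This is a sketch under our own toy choices of $H$ and $\delta H$; it checks the algebraic identity $\delta\big(\bar\partial(H^{-1}\partial H)\big) = \bar\partial\big(\partial\sigma + [H^{-1}\partial H, \sigma]\big)$ with $\sigma = H^{-1}\delta H$:

```python
import sympy as sp

z, w, t = sp.symbols('z w t')   # w plays the role of zbar

# Invertible 2x2 metric H(z, w) and a variation dH (toy polynomial entries)
H  = sp.Matrix([[2 + z*w, z], [w, 3 + z*w]])
dH = sp.Matrix([[z*w, w**2], [z**2, 1 + z*w]])

def curvature(M):
    """Chern curvature in a holomorphic frame: F = d/dw ( M^{-1} dM/dz )."""
    return sp.diff(M.inv() * sp.diff(M, z), w)

# Left side: t-derivative of F(H + t dH) at t = 0
lhs = sp.diff(curvature(H + t * dH), t).subs(t, 0)

# Right side: (4.3), with the covariant derivative d_H s = ds + [H^{-1} dH, s]
sigma = H.inv() * dH
A = H.inv() * sp.diff(H, z)
rhs = sp.diff(sp.diff(sigma, z) + A*sigma - sigma*A, w)

print((lhs - rhs).applyfunc(sp.cancel).is_zero_matrix)   # True
```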
Returning to the proof, we choose the operator $L$ of the Nash-Moser implicit function theorem as $L(\Psi, H) = (d\Psi, 0)$. The right hand side $E(\Psi,H)$ is again always a closed form, so we do have $L(E,F) = 0$. Furthermore, the above formulas show that the symbol of the combined system $(E,F)$ is a triangular block matrix, with the blocks on the diagonal given by $\sigma(\delta E_X)$ acting on $\delta\Psi$ and $\sigma(\delta F)$ acting on $\delta H$. The first block has already been shown to have eigenvalues with positive real parts when restricted to the kernel of $d$, while the second is manifestly strictly positive. So the short-time existence of solutions to the Anomaly flow on $(X,E)$ follows, and the proof of Theorem 2 is complete.

4.2 Discussion of the parabolicity condition

The linearization operator $\tilde\Delta$ and its ellipticity restricted to the space of closed $(2,2)$-forms is of considerable importance, since $\tilde\Delta$ controls the evolution of derivative quantities of $\Psi$ and $\omega$ such as the curvature. First we observe that the ellipticity condition can also be reformulated in terms of operators on $(1,1)$-forms. It suffices to set $\delta\Psi = \star\,\delta T$ for $(1,1)$-forms $T$. Then the ellipticity of $\tilde\Delta$ on the space of closed $(2,2)$-forms is equivalent to the ellipticity of the operator
$$\star\,\delta T \mapsto \tilde\Delta(\star\,\delta T) \qquad (4.11)$$
restricted to $(1,1)$-forms $\delta T$ satisfying $d^\dagger\delta T = 0$, defined similarly in terms of the eigenvalues of its symbol restricted to the kernel of the symbol of $d^\dagger$. This follows at once from the well-known formula $d^\dagger = -\star\, d\, \star$.

Next, it is instructive to work out the ellipticity condition of the operator $\tilde\Delta$ restricted to the space of $(2,2)$-forms more explicitly. The symbol $\sigma_{\tilde\Delta}$ of $\tilde\Delta$ is given by
$$\sigma_{\tilde\Delta}(\xi): \delta\Psi \mapsto i\,\xi\wedge\bar\xi\wedge\Big(\tilde\star\,\delta\Psi - \frac{\alpha'}{2}\,Rm(\tilde\star\,\delta\Psi)\Big), \qquad (4.12)$$
where $\xi\in T^*(X)$. The following lemma is useful:

Lemma 2. For all $\delta\Psi$ in the kernel of the symbol of the exterior derivative $d$ on $(2,2)$-forms, we have
$$i\,\xi\wedge\bar\xi\wedge\tilde\star\,\delta\Psi = \frac{1}{2\|\Omega\|_\omega}\,|\xi|^2\,\delta\Psi. \qquad (4.13)$$

Proof. Without loss of generality, we can choose coordinates so that $g_{\bar jk} = \delta_{jk}$, and, after an additional orthogonal rotation if necessary, that $\xi = (\xi_1, 0, 0)$. Let $\Phi$ denote the left-hand side of (4.13). Using the coordinate expression (3.11) and the convention (3.2) for the components of a $(2,2)$-form, we find
$$\Phi_{k\bar 1} = 0,\ \ k = 1,2,3; \qquad \Phi_{k\bar k} = \frac{1}{2\|\Omega\|_\omega}\,|\xi|^2\big(\delta\Psi_{1\bar 1} + \delta\Psi_{k\bar k}\big),\ \ k = 2,3; \qquad \Phi_{2\bar 3} = \frac{1}{2\|\Omega\|_\omega}\,|\xi|^2\,\delta\Psi_{2\bar 3}. \qquad (4.14)$$
Now the kernel of the symbol of the exterior derivative $d$ on $(2,2)$-forms is given by forms $\delta\Psi$ satisfying
$$\xi_j\,\delta\Psi_{j\bar k} = 0, \qquad k = 1,2,3. \qquad (4.15)$$
In the given coordinate system, this reduces to $\delta\Psi_{1\bar k} = 0$ for $k=1,2,3$. Using these relations, we can rewrite the above identities as $\Phi = \frac{1}{2}\|\Omega\|_\omega^{-1}|\xi|^2\,\delta\Psi$. The lemma is proved.
This allows us to identify immediately an important and quite general situation where the ellipticity condition, and hence the existence of short-time solutions, holds:

Proposition 1. Consider the operator
$$\delta\Psi \mapsto -\,i\,\xi\wedge\bar\xi\wedge\frac{\alpha'}{2}\,Rm\big(\langle\star\,\delta\Psi,\omega\rangle_\omega\,\omega - \star\,\delta\Psi\big) \qquad (4.16)$$
restricted to the kernel of the symbol of the exterior derivative $d$ on $(2,2)$-forms. If it has operator norm $< |\xi|^2$ for any $\xi\neq 0$, then the Anomaly flows with $\omega$ as initial metric admit smooth solutions for at least a short time.

5 Remarks

We conclude with a few remarks.

(1) The balanced condition. One of the major challenges in Strominger systems is that, even with the metric $H$ on the vector bundle $E$ fixed, it is a system in the metric $\omega$, in the sense that both the anomaly equation and the balanced-like condition have to be satisfied. One natural approach is to try to solve the anomaly equation with a particular ansatz which guarantees that the metric is balanced. One possible ansatz is the very general one proposed by Fu-Wang-Wu [13], Tosatti-Weinkove [31] and Popovici [28],
$$\omega^2 = \omega_0^2 + i\partial\bar\partial(u\,\tilde\omega). \qquad (5.1)$$
Here $\omega_0$ is a balanced metric, $\tilde\omega$ is an arbitrary $(1,1)$-form, and the form $\omega^2$ is required to be positive. Tosatti and Weinkove have also shown in [31] how to find a single balanced metric $\omega_0$. Another ansatz is the one by Fu and Yau [14], in the special case of the Goldstein-Prokushkin manifolds discussed below (see eq. (5.2)). But unlike in the Kähler case, where metrics can be represented by a potential, there does not appear to be a uniquely compelling ansatz for balanced metrics at the present time. Thus, a very attractive feature of the Anomaly flows is that they guarantee that the metrics be balanced, without appealing to any particular ansatz.

(2) Toric fibrations over K3 surfaces. The first non-perturbative solution of a Strominger system was found by Fu-Yau [14], as a toric fibration over a K3 surface. More specifically, Goldstein and Prokushkin [16] had shown how to construct a toric fibration $\pi: X\to S$, given a Calabi-Yau surface $(S,\omega_S)$ with Ricci-flat Kähler metric $\omega_S = i\,(g_S)_{\bar kj}\,dz^j\wedge d\bar z^k$, and two anti-self-dual $(1,1)$-forms $\kappa_1, \kappa_2 \in 2\pi H^2(S,\mathbb{Z})$. Furthermore, there is a $(1,0)$-form $\theta$ on $X$ so that $\partial\theta = 0$, $\bar\partial\theta = \pi^*(\kappa_1 + i\kappa_2)$, and if $\Omega_S$ is a non-vanishing holomorphic $(2,0)$-form on $S$, then the non-vanishing holomorphic $(3,0)$-form on $X$ is given by $\Omega = \theta\wedge\pi^*(\Omega_S)$. The $(1,1)$-form $\omega_0 = \pi^*(\omega_S) + i\,\theta\wedge\bar\theta$ is a balanced metric, $d\omega_0^2 = 0$, and it also satisfies $\|\Omega\|_{\omega_0} = 1$. This implies that $d(\|\Omega\|_{\omega_0}\omega_0^2) = 0$. Under suitable cohomological conditions on the classes $\kappa_1$ and $\kappa_2$, Fu and Yau found a solution of the Strominger system under the following ansatz,
$$\omega_u = \pi^*(e^u\,\omega_S) + i\,\theta\wedge\bar\theta, \qquad (5.2)$$
where $u$ is a scalar function on $S$. Metrics of the form $\omega_u$ are automatically balanced and satisfy $\|\Omega\|_{\omega_u} = e^{-u}$. Here we observe that another hint that the Anomaly flow is a natural flow is that it preserves the Fu-Yau ansatz (5.2).
Indeed, under the cohomological conditions mentioned previously, Fu and Yau have shown that
$$i\partial\bar\partial\omega_u - \frac{\alpha'}{4}\big(\mathrm{Tr}(R_u\wedge R_u) - \mathrm{Tr}(F\wedge F)\big) = i\partial\bar\partial\big(e^u\omega_S - \alpha' e^{-u}\rho\big) - \frac{\alpha'}{2}\,\partial\bar\partial u\wedge\partial\bar\partial u + \tilde\mu\,\frac{\omega_S^2}{2!}$$
for some smooth function $\tilde\mu: S\to\mathbb{R}$ and smooth real $(1,1)$-form $\rho$ defined on $S$ and given by
$$\rho = -\frac{i}{2}\,\mathrm{Tr}\big(\bar\partial B\wedge\partial B^*\,g_S^{-1}\big), \qquad (5.3)$$
where $B$ is defined on $S$ and depends only on $(S,\omega_S)$ and $\kappa_1, \kappa_2$. Thus $\partial_t(\|\Omega\|_{\omega_u}\omega_u^2)$ is the pull-back of a $(2,2)$-form on $S$. Since
$$\|\Omega\|_{\omega_u}\,\omega_u^2 = \omega_0^2 + (e^u - 1)\,\omega_S^2, \qquad (5.4)$$
we see that an evolution of $\|\Omega\|_{\omega_u}\omega_u^2$ by a term proportional to $\omega_S^2$ is just an evolution of the conformal factor $u$, and $\omega_u$ still satisfies the same ansatz. In fact, the Anomaly flow is immediately seen to be equivalent to the scalar equation
$$\partial_t u = \frac{e^{-u}}{2}\Big(\Delta e^u + \alpha'\,\sigma_2(i\partial\bar\partial u) - 2\alpha'\,\frac{i\partial\bar\partial(e^{-u}\rho)}{\omega_S^2} + \tilde\mu\Big), \qquad (5.5)$$
where the Laplacian and $\sigma_2$, normalized by $\sigma_2(i\partial\bar\partial u)\,\omega_S^2 = i\partial\bar\partial u\wedge i\partial\bar\partial u$, are taken with respect to the metric $\omega_S$. This equation may be of interest in its own right (a toy numerical discretization is sketched after this remark). Its parabolicity is equivalent to the ellipticity of the right hand side, which is the ellipticity condition imposed by Fu and Yau ([14], eqs. (7.3) and (8.1)). Explicitly, denote
$$E(u) = i\partial\bar\partial\omega_u - \frac{\alpha'}{4}\big(\mathrm{Tr}(R_u\wedge R_u) - \mathrm{Tr}(F\wedge F)\big) = i\partial\bar\partial\big(e^u\omega_S - \alpha' e^{-u}\rho\big) + \frac{\alpha'}{2}\,i\partial\bar\partial u\wedge i\partial\bar\partial u + \tilde\mu\,\frac{\omega_S^2}{2!}.$$
Its symbol is given by
$$\sigma(\delta E): \delta u \mapsto i\,\xi\wedge\bar\xi\wedge(\delta u)\big(e^u\omega_S + \alpha' e^{-u}\rho + \alpha'\,i\partial\bar\partial u\big). \qquad (5.6)$$
The ellipticity condition reduces to the following condition on $u$:
$$e^u\omega_S + \alpha' e^{-u}\rho + \alpha'\,i\partial\bar\partial u > 0. \qquad (5.7)$$
We now compare this condition with Proposition 1. The condition
$$|\alpha'\,Rm| \ll 1 \qquad (5.8)$$
implies short-time existence for the Anomaly flow by Proposition 1. In [14], Fu and Yau computed the curvature of the metric $\omega_u$ of (5.2). Fixing a point $p\in X$, they constructed a frame of holomorphic vector fields such that at $p$,
$$g_u = \begin{pmatrix} e^u g_S & 0 \\ 0 & 1 \end{pmatrix}, \qquad Rm = \begin{pmatrix} R_{11} & R_{12} \\ R_{21} & R_{22} \end{pmatrix}, \qquad (5.9)$$
where the entries $R_{jk}$ are given by
$$R_{11} = R_S - \partial\bar\partial u\; I + e^{-u}\,\bar\partial B\wedge\partial B^*\,g_S^{-1}, \qquad (5.10)$$
$$R_{12} = -\nabla\bar\partial B + \partial u\wedge\bar\partial B, \qquad (5.11)$$
$$R_{21} = \bar\partial\big(e^{-u}\,\partial B^*\,g_S^{-1}\big), \qquad (5.12)$$
$$R_{22} = e^{-u}\big(\partial B^*\,g_S^{-1}\big)\wedge\bar\partial B. \qquad (5.13)$$
Here $B = (\varphi_1, \varphi_2)^T$ is a column vector of locally defined functions $\varphi_i$ on $S$, and $\bar\partial B$ is globally defined on $S$. Using the definition (5.3) of $\rho$ and the fact that $\mathrm{Tr}\,R_S = 0$, we see that $|\alpha'\,Rm|_{\omega_u} \ll 1$ implies
$$|2\alpha' e^{-2u}\rho|_{\omega_S} \ll 1, \qquad |2\alpha' e^{-u}\partial\bar\partial u|_{\omega_S} \ll 1. \qquad (5.14)$$
It is clear that (5.14) implies (5.7).
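To give a feel for the scalar reduction (5.5), here is a toy explicit-Euler discretization. It is heavily simplified and purely illustrative: the true base $(S,\omega_S)$ is a Ricci-flat K3 surface, while below we substitute a flat torus $[0,2\pi)^4$ with coordinates $(x_1,y_1,x_2,y_2)$, $z_j = x_j + iy_j$, and we set $\rho\equiv 0$ and $\tilde\mu$ constant; the $\sigma_2$ term is the determinant of the complex Hessian, up to the normalization fixed in (5.5). All names and parameters are our own choices:

```python
import numpy as np

N, alpha, mu, dt, steps = 12, 0.05, 0.0, 5e-4, 200
m = np.fft.fftfreq(N) * N                      # integer Fourier modes

def deriv(f, axis):
    shape = [1]*4; shape[axis] = N
    return np.fft.ifftn(1j * m.reshape(shape) * np.fft.fftn(f))

def dz(f, j):    # d/dz_j = (d/dx_j - i d/dy_j)/2, axes ordered (x1,y1,x2,y2)
    return 0.5*(deriv(f, 2*j) - 1j*deriv(f, 2*j+1))

def dzbar(f, j):
    return 0.5*(deriv(f, 2*j) + 1j*deriv(f, 2*j+1))

def hessian(f):  # complex Hessian f_{j kbar} on the flat background
    return [[dzbar(dz(f, j), k) for k in range(2)] for j in range(2)]

x = np.linspace(0, 2*np.pi, N, endpoint=False)
X1, Y1, X2, Y2 = np.meshgrid(x, x, x, x, indexing='ij')
u = 0.1*np.cos(X1)*np.cos(Y2)                  # small initial data

for _ in range(steps):
    h = hessian(np.exp(u))
    lap = (h[0][0] + h[1][1]).real             # Delta e^u
    H = hessian(u)
    sigma2 = (H[0][0]*H[1][1] - H[0][1]*H[1][0]).real
    u = u + dt * 0.5*np.exp(-u)*(lap + alpha*sigma2 + mu)

print(float(np.abs(u).max()))   # small: the toy flow smooths u toward a constant
```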
Update: We note that there has been very recently significant progress on the Anomaly flows in several directions. In particular, an explicit formula for $\partial_t g_{\bar kj}$ has now been obtained [26], which overcomes the difficulty of the flow being originally formulated as a flow of $(2,2)$-forms. Furthermore, the convergence of the Anomaly flow on Goldstein-Prokushkin fibrations has been established in [27], for both $\alpha' > 0$ and $\alpha' < 0$, thus unifying the results of Fu and Yau [14, 15]. These constitute strong evidence that the Anomaly flows should provide an efficient approach to Strominger systems."
    },
    {
        "url": "http://arxiv.org/abs/1508.03254v3",
        "title": "A second order estimate for general complex Hessian equations",
        "abstract": "We derive a priori $C^2$ estimates for the $\chi$-plurisubharmonic solutions of general complex Hessian equations with right-hand side depending on gradients.",
        "authors": "Duong H. Phong, Sebastien Picard, Xiangwen Zhang",
        "published": "2015-08-13",
        "updated": "2017-04-10",
        "primary_cat": "math.CV",
        "cats": [
            "math.CV",
            "math.AP",
            "math.DG"
        ],
        "main_content": "Introduction

Let $(X,\omega)$ be a compact Kähler manifold of dimension $n\ge 2$. Let $u\in C^\infty(X)$ and consider a $(1,1)$-form $\chi(z,u)$, possibly depending on $u$ and satisfying the positivity condition $\chi \ge \varepsilon\omega$ for some $\varepsilon > 0$. We define
$$g = \chi(z,u) + i\partial\bar\partial u, \qquad (1.1)$$
and $u$ is called $\chi$-plurisubharmonic if $g > 0$ as a $(1,1)$-form. In this paper, we are concerned with the following complex Hessian equation, for $1\le k\le n$,
$$\big(\chi(z,u) + i\partial\bar\partial u\big)^k\wedge\omega^{n-k} = \psi(z, Du, u)\,\omega^n, \qquad (1.2)$$
where $\psi(z,v,u)\in C^\infty\big((T^{1,0}(X))^*\times\mathbb{R}\big)$ is a given strictly positive function.

The complex Hessian equation can be viewed as an intermediate equation between the Laplace equation and the complex Monge-Ampère equation. It encompasses the most natural invariants of the complex Hessian matrix of a real valued function, namely the elementary symmetric polynomials of its eigenvalues. When $k=1$, equation (1.2) is quasilinear, and the estimates follow from the classical theory of quasilinear PDE. The real counterparts of (1.2) for $1 < k\le n$, with $\psi$ not depending on the gradient of $u$, have been studied extensively in the literature (see the survey paper [23] and more recent related work [7]), as these equations appear naturally and play very important roles in both classical and conformal geometry. When the right-hand side $\psi$ depends on the gradient of the solution, even the real case has been a long-standing problem, due to substantial difficulties in obtaining a priori $C^2$ estimates. This problem was recently solved by Guan-Ren-Wang [10] for convex solutions of real Hessian equations.

In the complex case, the equation (1.2) with $\psi = \psi(z,u)$ has been extensively studied in recent years, due to its appearance in many geometric problems, including the J-flow [18] and quaternionic geometry [1]. The related Dirichlet problem for equation (1.2) on domains in $\mathbb{C}^n$ has been studied by Li [15] and Blocki [3]. The corresponding problem on compact Kähler or Hermitian manifolds has also been studied extensively; see, for example, [4, 11, 13, 16, 25]. [Footnote 1: Work supported in part by the National Science Foundation under Grants DMS-1266033, DMS-1605968 and DMS-1308136.]
In particular, as a crucial step in the continuity method, $C^2$ estimates for complex Hessian type equations have been studied in various settings; see [12, 20, 21, 22, 24]. However, the equation (1.2) with $\psi = \psi(z,Du,u)$ has been much less studied. An important case, corresponding to $k = n = 2$ (so that it is actually a Monge-Ampère equation in two dimensions), is central to the solution by Fu and Yau [5, 6] of a Strominger system on a toric fibration over a K3 surface. A natural generalization of this case to general dimension $n$ was suggested by Fu and Yau [5] and can be expressed as
$$\Big(\big(e^u + fe^{-u}\big)\omega + n\,i\partial\bar\partial u\Big)^2\wedge\omega^{n-2} = \psi(z, Du, u)\,\omega^n, \qquad (1.3)$$
where $\psi(z,v,u)$ is a function on $(T^{1,0}(X))^*\times\mathbb{R}$ with a particular structure, and $(X,\omega)$ is a compact Kähler manifold. A priori estimates for this equation were obtained by the authors in [17].

In this paper, motivated by our previous work [17], we study a priori $C^2$ estimates for the equation (1.2) with general $\chi(z,u)$ and general right hand side $\psi(z,Du,u)$. Building on the techniques developed by Guan-Ren-Wang [10] (see also [14]) for real Hessian equations, we can prove the following theorem.

Theorem 1. Let $(X,\omega)$ be a compact Kähler manifold of complex dimension $n$. Suppose $u\in C^4(X)$ is a solution of equation (1.2) with $g = \chi + i\partial\bar\partial u > 0$ and $\chi(z,u)\ge\varepsilon\omega$. Let $0 < \psi(z,v,u)\in C^\infty\big((T^{1,0}X)^*\times\mathbb{R}\big)$. Then we have the following uniform second order derivative estimate
$$|D\bar Du|_\omega \le C, \qquad (1.4)$$
where $C$ is a positive constant depending only on $\varepsilon$, $n$, $k$, $\sup_X|u|$, $\sup_X|Du|$, the $C^2$ norm of $\chi$ as a function of $(u,z)$, the infimum of $\psi$, and the $C^2$ norm of $\psi$ as a function of $(z,Du,u)$, all restricted to the ranges in $Du$ and $u$ defined by the uniform upper bounds on $|u|$ and $|Du|$.

We remark that the above estimate is stated for $\chi$-plurisubharmonic solutions, that is, $g = \chi + i\partial\bar\partial u > 0$. Actually, we only need to assume that $g$ lies in the cone $\Gamma_{k+1}$ (see (3.11) below for the definition of the Gårding cone $\Gamma_k$, and also the discussion in Remark 1 at the end of the paper). However, a better condition would be $g\in\Gamma_k$, which is the natural cone for ellipticity. In fact, this is still an open problem even for real Hessian equations when $2 < k < n$. If $k = 2$, Guan-Ren-Wang [10] removed the convexity assumption by investigating the structure of the operator. A simpler argument was given recently by Spruck-Xiao [19]. However, these arguments are not applicable to the complex case, due to the difference between the terms $|DDu|^2$ and $|D\bar Du|^2$ in the complex setting. When $k=2$ in the complex setting, $C^2$ estimates for equation (1.3) were obtained in [17] without the plurisubharmonicity assumption, but the techniques rely on the specific right hand side $\psi(z,Du,u)$ studied there. We also note that if $k = n$, the condition $g = \chi + i\partial\bar\partial u > 0$ is the natural assumption for the ellipticity of equation (1.2). Thus, our result implies the a priori $C^2$ estimate for complex Monge-Ampère equations with right hand side depending on gradients:
$$\big(\chi(z,u) + i\partial\bar\partial u\big)^n = \psi(z, Du, u)\,\omega^n.$$
This generalizes the $C^2$ estimate for the equation studied by Fu and Yau [5, 6] mentioned above, which corresponds to $n = 2$ and a specific form $\chi(z,u)$ as well as a specific right hand side $\psi(z,Du,u)$. For dimension $n\ge 2$ and $k = n$, the estimate was obtained by Guan-Ma [8] using a different method, in which the structure of the Monge-Ampère operator plays an important role.

Compared to the estimates when $\psi = \psi(z,u)$, the dependence on the gradient of $u$ in the equation (1.2) creates substantial new difficulties. The main obstacle is the appearance of terms such as $|DDu|^2$ and $|D\bar Du|^2$ when one differentiates the equation twice. We adapt the techniques used in [10] and [14] for real Hessian equations to overcome these difficulties. Furthermore, we also need to handle properly some subtle issues when dealing with the third order terms, due to complex conjugacy.

2 Preliminaries

Let $\sigma_k$ be the $k$-th elementary symmetric function, that is, for $1\le k\le n$ and $\lambda = (\lambda_1,\dots,\lambda_n)\in\mathbb{R}^n$,
$$\sigma_k(\lambda) = \sum_{1\le i_1 < \cdots < i_k \le n}\lambda_{i_1}\cdots\lambda_{i_k}.$$
In terms of the eigenvalues $\lambda = \lambda(g)$ of $g$ with respect to $\omega$, the equation (1.2) takes the form
$$\sigma_k(\lambda(g)) = \psi(z, Du, u) > 0. \qquad (2.1)$$
Our calculations will be carried out at a point $z$ on the manifold $X$, and we shall use coordinates such that at this point $\omega = i\sum\delta_{\ell k}\,dz^k\wedge d\bar z^\ell$ and $g_{\bar ji}$ is diagonal. We will also use the notation
$$F = \sum_p \sigma_k^{p\bar p}. \qquad (2.2)$$
Differentiating equation (2.1) yields
$$\sigma_k^{p\bar q}\,D_{\bar j}g_{\bar qp} = D_{\bar j}\psi. \qquad (2.3)$$
Differentiating the equation a second time gives
$$\sigma_k^{p\bar q}\,D_iD_{\bar j}g_{\bar qp} + \sigma_k^{p\bar q,r\bar s}\,D_ig_{\bar qp}\,D_{\bar j}g_{\bar sr} = D_iD_{\bar j}\psi \ge -C\big(1 + |DDu|^2 + |D\bar Du|^2\big) + \sum_\ell\psi_{v_\ell}\,u_{\ell\bar ji} + \sum_\ell\psi_{\bar v_\ell}\,u_{\bar\ell\bar ji}. \qquad (2.4)$$
We will denote by $C$ a uniform constant which depends only on $(X,\omega)$, $n$, $k$, $\|\chi\|_{C^2}$, $\inf\psi$, $\|u\|_{C^1}$ and $\|\psi\|_{C^2}$.

We now compute the operator $\sigma_k^{p\bar q}D_pD_{\bar q}$ acting on $g_{\bar ji} = \chi_{\bar ji} + u_{\bar ji}$. Recalling that $\chi_{\bar ji}$ depends on $u$, we estimate
$$\sigma_k^{p\bar q}\,D_pD_{\bar q}g_{\bar ji} = \sigma_k^{p\bar q}\,D_pD_{\bar q}D_iD_{\bar j}u + \sigma_k^{p\bar q}\,D_pD_{\bar q}\chi_{\bar ji} \ge \sigma_k^{p\bar q}\,D_pD_{\bar q}D_iD_{\bar j}u - C(1+\lambda_1)F. \qquad (2.5)$$
Commuting derivatives,
$$D_pD_{\bar q}D_iD_{\bar j}u = D_iD_{\bar j}D_pD_{\bar q}u - R_{\bar qi\bar j}{}^{\bar a}\,u_{\bar ap} + R_{\bar qp\bar j}{}^{\bar a}\,u_{\bar ai} = D_iD_{\bar j}g_{\bar qp} - D_iD_{\bar j}\chi_{\bar qp} - R_{\bar qi\bar j}{}^{\bar a}\,u_{\bar ap} + R_{\bar qp\bar j}{}^{\bar a}\,u_{\bar ai}. \qquad (2.6)$$
Therefore, by (2.4),
$$\sigma_k^{p\bar q}\,D_pD_{\bar q}g_{\bar ji} \ge -\sigma_k^{p\bar q,r\bar s}\,D_jg_{\bar qp}\,D_{\bar j}g_{\bar sr} + \sum_\ell\psi_{v_\ell}\,g_{\bar ji\ell} + \sum_\ell\psi_{\bar v_\ell}\,g_{\bar ji\bar\ell} - C\big(1 + |DDu|^2 + |D\bar Du|^2 + (1+\lambda_1)F\big). \qquad (2.7)$$
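The computations above and below lean repeatedly on the first derivatives $\sigma_k^{p\bar p} = \sigma_{k-1}(\lambda|p)$ and on the expansion identity $\sigma_l(\lambda) = \sigma_l(\lambda|p) + \lambda_p\sigma_{l-1}(\lambda|p)$. A quick numerical confirmation (the helper `sigma` and all data are ours, for illustration only):

```python
import numpy as np
from itertools import combinations

def sigma(k, lam):
    """k-th elementary symmetric function of the vector lam."""
    return sum(np.prod(c) for c in combinations(lam, k))

rng = np.random.default_rng(3)
lam = rng.uniform(0.5, 2.0, size=5)   # a sample point with all entries positive
n, k, p = len(lam), 3, 1
drop = np.delete(lam, p)              # the vector (lambda | p)

# sigma_k^{p pbar} = sigma_{k-1}(lambda | p): compare with a finite difference
eps = 1e-7
e = np.zeros(n); e[p] = eps
fd = (sigma(k, lam + e) - sigma(k, lam - e)) / (2 * eps)
print(np.isclose(fd, sigma(k - 1, drop)))                                   # True

# sigma_l(lambda) = sigma_l(lambda|p) + lambda_p sigma_{l-1}(lambda|p)
print(np.isclose(sigma(k, lam), sigma(k, drop) + lam[p] * sigma(k - 1, drop)))  # True
```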
We next compute the operator $\sigma_k^{p\bar q}D_pD_{\bar q}$ acting on $|Du|^2$. Introduce the notation
$$|DDu|^2_{\sigma\omega} = \sigma_k^{p\bar q}\,\omega^{m\bar\ell}\,D_pD_mu\,D_{\bar q}D_{\bar\ell}u, \qquad |D\bar Du|^2_{\sigma\omega} = \sigma_k^{p\bar q}\,\omega^{m\bar\ell}\,D_pD_{\bar\ell}u\,D_mD_{\bar q}u. \qquad (2.8)$$
Then
$$\sigma_k^{p\bar q}\,|Du|^2_{\bar qp} = \sigma_k^{p\bar q}\big(D_pD_{\bar q}D_mu\,D_{\bar m}u + D_mu\,D_pD_{\bar q}D_{\bar m}u\big) + |DDu|^2_{\sigma\omega} + |D\bar Du|^2_{\sigma\omega} = \sigma_k^{p\bar q}\big\{D_m(g_{\bar qp}-\chi_{\bar qp})\,D_{\bar m}u + D_mu\,D_{\bar m}(g_{\bar qp}-\chi_{\bar qp})\big\} + \sigma_k^{p\bar q}\,R_{\bar qp}{}^{m\bar\ell}\,u_{\bar\ell}\,u_m + |DDu|^2_{\sigma\omega} + |D\bar Du|^2_{\sigma\omega}. \qquad (2.9)$$
Using the differentiated equation, we obtain
$$\sigma_k^{p\bar q}\,|Du|^2_{\bar qp} \ge 2\,\mathrm{Re}\langle Du, D\psi\rangle - C(1+F) + |DDu|^2_{\sigma\omega} + |D\bar Du|^2_{\sigma\omega} \ge 2\,\mathrm{Re}\Big\{\sum_{p,m}\big(D_pD_mu\,D_{\bar p}u + D_pu\,D_{\bar p}D_mu\big)\psi_{v_m}\Big\} - C(1+F) + |DDu|^2_{\sigma\omega} + |D\bar Du|^2_{\sigma\omega}.$$
To simplify the expression, we introduce the notation
$$\langle D|Du|^2, D_{\bar v}\psi\rangle = \sum_{m,p}\big(D_mD_pu\,D_{\bar p}u + D_pu\,D_mD_{\bar p}u\big)\psi_{v_m}. \qquad (2.10)$$
We obtain
$$\sigma_k^{p\bar q}\,|Du|^2_{\bar qp} \ge 2\,\mathrm{Re}\langle D|Du|^2, D_{\bar v}\psi\rangle - C(1+F) + |DDu|^2_{\sigma\omega} + |D\bar Du|^2_{\sigma\omega}. \qquad (2.11)$$
We also compute
$$-\sigma_k^{p\bar q}\,u_{\bar qp} = \sigma_k^{p\bar q}\big(\chi_{\bar qp} - g_{\bar qp}\big) \ge \varepsilon F - k\psi. \qquad (2.12)$$

3 The C² estimate

In this section, we give the proof of the estimate stated in the theorem. When $k=1$, the equation (1.2) becomes
$$\Delta_\omega u + \mathrm{Tr}_\omega\chi(z,u) = n\,\psi(z,Du,u), \qquad (3.1)$$
where $\Delta_\omega$ and $\mathrm{Tr}_\omega$ are the Laplacian and trace with respect to the background metric $\omega$. It follows that $\Delta_\omega u$ is bounded, and the desired estimate follows in turn from the positivity of the metric $g$. Henceforth, we assume that $k\ge 2$.

Motivated by the idea from [10] for real Hessian equations, we apply the maximum principle to the following test function:
$$G = \log P_m + mN|Du|^2 - mMu, \qquad (3.2)$$
where $P_m = \sum_j\lambda_j^m$. Here, $m$, $M$ and $N$ are large positive constants to be determined later. We may assume that the maximum of $G$ is achieved at some point $z\in X$. After rotating the coordinates, we may assume that the matrix $g_{\bar ji} = \chi_{\bar ji} + u_{\bar ji}$ is diagonal.

Recall that if $F(A) = f(\lambda_1,\dots,\lambda_n)$ is a symmetric function of the eigenvalues of a Hermitian matrix $A = (a_{\bar ji})$, then at a diagonal matrix $A$ with distinct eigenvalues, we have (see [2]),
$$F^{i\bar j} = \delta_{ij}\,f_i, \qquad (3.3)$$
$$F^{i\bar j,r\bar s}\,w_{i\bar jk}\,w_{r\bar s\bar k} = \sum f_{ij}\,w_{i\bar ik}\,w_{j\bar j\bar k} + \sum_{p\neq q}\frac{f_p - f_q}{\lambda_p - \lambda_q}\,|w_{p\bar qk}|^2, \qquad (3.4)$$
where $F^{i\bar j} = \partial F/\partial a_{\bar ji}$, $F^{i\bar j,r\bar s} = \partial^2 F/\partial a_{\bar ji}\partial a_{\bar sr}$, and $w_{i\bar jk}$ is an arbitrary tensor. (Formula (3.3) is easy to confirm numerically; see the sketch below.) Using these identities to differentiate $G$, we first obtain the critical equation
$$\frac{DP_m}{P_m} + mN\,D|Du|^2 - mM\,Du = 0. \qquad (3.5)$$
Differentiating $G$ a second time and contracting with $\sigma_k^{p\bar q}$ yields
$$0 \ge \frac{m}{P_m}\Big\{\sum_j\lambda_j^{m-1}\,\sigma_k^{p\bar p}\,D_pD_{\bar p}g_{\bar jj}\Big\} - \frac{|DP_m|^2_\sigma}{P_m^2} + mN\,\sigma_k^{p\bar p}\,|Du|^2_{\bar pp} - mM\,\sigma_k^{p\bar p}\,u_{\bar pp} + \frac{m}{P_m}\Big\{(m-1)\sum_j\lambda_j^{m-2}\,\sigma_k^{p\bar p}\,|D_pg_{\bar jj}|^2 + \sigma_k^{p\bar p}\sum_{i\neq j}\frac{\lambda_i^{m-1}-\lambda_j^{m-1}}{\lambda_i-\lambda_j}\,|D_pg_{\bar ji}|^2\Big\}. \qquad (3.6)$$
Here, we used the notation $|\eta|^2_\sigma = \sigma_k^{p\bar q}\,\eta_p\,\eta_{\bar q}$.
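A minimal numerical check of (3.3) at a diagonal point, comparing a finite-difference derivative of $A\mapsto\sigma_k(\lambda(A))$ against $f_i = \sigma_{k-1}(\lambda|i)$ (the helpers and data below are invented for the example):

```python
import numpy as np
from itertools import combinations

def sigma(k, lam):
    return sum(np.prod(c) for c in combinations(lam, k))

def sigma_k_of(A, k):
    return sigma(k, np.linalg.eigvalsh(A))

rng = np.random.default_rng(4)
lam = np.sort(rng.uniform(0.5, 3.0, size=4))[::-1]   # distinct eigenvalues
A, k, eps = np.diag(lam).astype(complex), 2, 1e-6

for i in range(4):
    E = np.zeros((4, 4), dtype=complex); E[i, i] = eps
    fd = (sigma_k_of(A + E, k) - sigma_k_of(A - E, k)) / (2 * eps)
    exact = sigma(k - 1, np.delete(lam, i))           # f_i = sigma_{k-1}(lambda|i)
    assert np.isclose(fd, exact, atol=1e-5)
print("F^{i ibar} = sigma_{k-1}(lambda|i) confirmed at a diagonal point")
```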
Substituting (2.7), (2.11) and (2.12),
$$0 \ge \frac{1}{P_m}\Big\{-C\sum_j\lambda_j^{m-1}\big(1 + |DDu|^2 + |D\bar Du|^2 + (1+\lambda_1)F\big)\Big\} + \frac{1}{P_m}\Big\{\sum_j\lambda_j^{m-1}\Big(-\sigma_k^{p\bar q,r\bar s}D_jg_{\bar qp}D_{\bar j}g_{\bar sr} + \sum_\ell\psi_{v_\ell}g_{\bar jj\ell} + \sum_\ell\psi_{\bar v_\ell}g_{\bar jj\bar\ell}\Big)\Big\} + \frac{1}{P_m}\Big\{(m-1)\sum_j\lambda_j^{m-2}\sigma_k^{p\bar p}|D_pg_{\bar jj}|^2 + \sigma_k^{p\bar p}\sum_{i\neq j}\frac{\lambda_i^{m-1}-\lambda_j^{m-1}}{\lambda_i-\lambda_j}|D_pg_{\bar ji}|^2\Big\} - \frac{|DP_m|^2_\sigma}{mP_m^2} + N\big(|DDu|^2_{\sigma\omega} + |D\bar Du|^2_{\sigma\omega}\big) + N\langle D|Du|^2, D_{\bar v}\psi\rangle + N\langle D_{\bar v}\psi, D|Du|^2\rangle + (M\varepsilon - CN)F - kM\psi. \qquad (3.7)$$
From the critical equation (3.5), we have
$$\frac{1}{P_m}\sum_{j,\ell}\lambda_j^{m-1}\,g_{\bar jj\ell}\,\psi_{v_\ell} = \frac{1}{m}\Big\langle\frac{DP_m}{P_m}, D_{\bar v}\psi\Big\rangle = -N\langle D|Du|^2, D_{\bar v}\psi\rangle + M\langle Du, D_{\bar v}\psi\rangle.$$
It follows that
$$\frac{1}{P_m}\sum_{j,\ell}\big(\psi_{v_\ell}g_{\bar jj\ell} + \psi_{\bar v_\ell}g_{\bar jj\bar\ell}\big) + N\langle D|Du|^2, D_{\bar v}\psi\rangle + N\langle D_{\bar v}\psi, D|Du|^2\rangle = M\big(\langle Du, D_{\bar v}\psi\rangle + \langle D_{\bar v}\psi, Du\rangle\big) \ge -CM.$$
Using (3.4), one can obtain the well-known identity
$$-\sigma_k^{p\bar q,r\bar s}\,D_jg_{\bar qp}\,D_{\bar j}g_{\bar sr} = -\sigma_k^{p\bar p,q\bar q}\,D_jg_{\bar pp}\,D_{\bar j}g_{\bar qq} + \sigma_k^{p\bar p,q\bar q}\,|D_jg_{\bar pq}|^2, \qquad (3.8)$$
where $\sigma_k^{p\bar p,q\bar q} = \frac{\partial}{\partial\lambda_p}\frac{\partial}{\partial\lambda_q}\sigma_k(\lambda)$. We assume that $\lambda_1\gg 1$, otherwise the $C^2$ estimate is complete. The main inequality (3.7) becomes
$$0 \ge -\frac{C}{\lambda_1}\big\{1 + |DDu|^2 + |D\bar Du|^2\big\} + \frac{1}{P_m}\Big\{\sum_j\lambda_j^{m-1}\big(-\sigma_k^{p\bar p,q\bar q}D_jg_{\bar pp}D_{\bar j}g_{\bar qq} + \sigma_k^{p\bar p,q\bar q}|D_jg_{\bar pq}|^2\big)\Big\} + \frac{1}{P_m}\Big\{(m-1)\sum_j\lambda_j^{m-2}\sigma_k^{p\bar p}|D_pg_{\bar jj}|^2 + \sigma_k^{p\bar p}\sum_{i\neq j}\frac{\lambda_i^{m-1}-\lambda_j^{m-1}}{\lambda_i-\lambda_j}|D_pg_{\bar ji}|^2\Big\} - \frac{|DP_m|^2_\sigma}{mP_m^2} + N\big(|DDu|^2_{\sigma\omega} + |D\bar Du|^2_{\sigma\omega}\big) + (M\varepsilon - CN - C)F - CM. \qquad (3.9)$$
The main objective is to show that the third order terms on the right hand side of (3.9) are nonnegative. To deal with this issue, we need a lemma from [10] (see also [9, 14]).

Lemma 1 ([10]). Suppose $1\le\ell < k\le n$, and let $\alpha = 1/(k-\ell)$. Let $W = (w_{\bar qp})$ be a Hermitian tensor in the $\Gamma_k$ cone. Then, for any $\theta > 0$,
$$-\sigma_k^{p\bar p,q\bar q}(W)\,w_{\bar ppi}\,w_{\bar qq\bar i} + \Big(1-\alpha+\frac{\alpha}{\theta}\Big)\frac{|D_i\sigma_k(W)|^2}{\sigma_k(W)} \ge \sigma_k(W)\,(\alpha + 1 - \alpha\theta)\,\Big|\frac{D_i\sigma_\ell(W)}{\sigma_\ell(W)}\Big|^2 - \frac{\sigma_k}{\sigma_\ell}(W)\,\sigma_\ell^{p\bar p,q\bar q}(W)\,w_{\bar ppi}\,w_{\bar qq\bar i}. \qquad (3.10)$$
Here the $\Gamma_k$ cone is defined by
$$\Gamma_k = \{\lambda\in\mathbb{R}^n \mid \sigma_m(\lambda) > 0,\ m = 1,\dots,k\}, \qquad (3.11)$$
and we say that a Hermitian matrix $W\in\Gamma_k$ if $\lambda(W)\in\Gamma_k$. It follows from the above lemma, by taking $\ell = 1$, that
$$-\sigma_k^{p\bar p,q\bar q}\,D_ig_{\bar pp}\,D_{\bar i}g_{\bar qq} + K\,|D_i\sigma_k|^2 \ge 0 \qquad (3.12)$$
for $K > \big(1-\alpha+\frac{\alpha}{\theta}\big)(\inf\psi)^{-1}$ if $2\le k\le n$. We shall denote
$$A_i = \frac{\lambda_i^{m-1}}{P_m}\Big\{K\,|D_i\sigma_k|^2 - \sigma_k^{p\bar p,q\bar q}\,D_ig_{\bar pp}\,D_{\bar i}g_{\bar qq}\Big\}, \qquad B_i = \frac{1}{P_m}\Big\{\sum_p\sigma_k^{p\bar p,i\bar i}\,\lambda_p^{m-1}\,|D_ig_{\bar pp}|^2\Big\}, \qquad C_i = \frac{(m-1)\,\sigma_k^{i\bar i}}{P_m}\Big\{\sum_p\lambda_p^{m-2}\,|D_ig_{\bar pp}|^2\Big\},$$
We shall denote Ai = \u03bbm\u22121 i Pm \u001a K|Di\u03c3k|2 \u2212\u03c3p\u00af p,q\u00af q k Dig\u00af ppD\u00af ig\u00af qq \u001b , Bi = 1 Pm \u001a X p \u03c3p\u00af p,i\u00af i k \u03bbm\u22121 p |Dig\u00af pp|2 \u001b , Ci = (m \u22121)\u03c3i\u00af i k Pm \u001a X p \u03bbm\u22122 p |Dig\u00af pp|2 \u001b , 7 \fDi = 1 Pm \u001a X p\u0338=i \u03c3p\u00af p k \u03bbm\u22121 p \u2212\u03bbm\u22121 i \u03bbp \u2212\u03bbi |Dig\u00af pp|2 \u001b , Ei = m\u03c3i\u00af i k P 2 m \f \f \f \f X p \u03bbm\u22121 p Dig\u00af pp \f \f \f \f 2 . De\ufb01ne Tj \u00af pq = Dj\u03c7\u00af pq \u2212Dq\u03c7\u00af pj. For any 0 < \u03c4 < 1, we can estimate 1 Pm \u001a X p \u03bbm\u22121 p \u03c3j\u00af j,i\u00af i k |Dpg\u00af ji|2 \u001b \u2265 1 Pm \u001a X p \u03bbm\u22121 p \u03c3p\u00af p,i\u00af i k |Dig\u00af pp + Tp\u00af pi|2 \u001b \u2265 1 Pm \u001a X p \u03bbm\u22121 p \u03c3p\u00af p,i\u00af i k {(1 \u2212\u03c4)|Dig\u00af pp|2 \u2212C\u03c4|Tp\u00af pi|2} \u001b = (1 \u2212\u03c4) X i Bi \u2212C\u03c4 Pm X p \u03bbm\u22122 p (\u03bbp\u03c3p\u00af p,i\u00af i k )|Tp\u00af pi|2. Now, we use \u03c3l(\u03bb|i) and \u03c3l(\u03bb|ij) to denote the l-th elementary function of (\u03bb|i) = (\u03bb1, \u00b7 \u00b7 \u00b7, c \u03bbi, \u00b7 \u00b7 \u00b7 , \u03bbn) \u2208Rn\u22121 and (\u03bb|ij) = (\u03bb1, \u00b7 \u00b7 \u00b7, c \u03bbi, \u00b7 \u00b7 \u00b7 , c \u03bbj, \u00b7 \u00b7 \u00b7, \u03bbn) \u2208Rn\u22122 respectively. The following simple identities are used frequently, \u03c3i\u00af i k = \u03c3k\u22121(\u03bb|i), \u03c3p\u00af p,i\u00af i k = \u03c3k\u22122(\u03bb|pi). Using the identity \u03c3l(\u03bb) = \u03c3l(\u03bb|p) + \u03bbp\u03c3l\u22121(\u03bb|p) for any 1 \u2264p \u2264n, we obtain 1 Pm \u001a X p \u03bbm\u22121 p \u03c3j\u00af j,i\u00af i k |Dpg\u00af ji|2 \u001b \u2265 (1 \u2212\u03c4) X i Bi \u2212C\u03c4 Pm X p \u03bbm\u22122 p (\u03c3i\u00af i k \u2212\u03c3k\u22121(\u03bb|pi))|Tp\u00af pi|2 \u2265 (1 \u2212\u03c4) X i Bi \u2212C\u03c4 \u03bb2 1 F \u2265(1 \u2212\u03c4) X i Bi \u2212F. (3.13) We used the notation C\u03c4 for a constant depending on \u03c4. To get the last inequality above, we assumed that \u03bb2 1 \u2265C\u03c4; otherwise, we already have the desired estimate \u03bb1 \u2264C. Similarly, we may estimate 1 Pm \u03c3j\u00af j k X i\u0338=p \u03bbm\u22121 i \u2212\u03bbm\u22121 p \u03bbi \u2212\u03bbp |Djg\u00af pi|2 \u2265 1 Pm \u03c3p\u00af p k X i;p\u0338=i \u03bbm\u22121 i \u2212\u03bbm\u22121 p \u03bbi \u2212\u03bbp |Dig\u00af pp + Tp\u00af pi|2 (3.14) \u2265 1 Pm \u03c3p\u00af p k X i;p\u0338=i \u03bbm\u22121 i \u2212\u03bbm\u22121 p \u03bbi \u2212\u03bbp {(1 \u2212\u03c4)|Dig\u00af pp|2 \u2212C\u03c4|Tp\u00af pi|2} \u2265 X i (1 \u2212\u03c4)Di \u2212C\u03c4 \u03bb2 1 F \u2265 X i (1 \u2212\u03c4)Di \u2212F. With the introduced notation in place, the main inequality becomes 0 \u2265 \u2212C(K) \u03bb1 \u001a 1 + |DDu|2 + |D \u00af Du|2 \u001b \u2212\u03c4 |DPm|2 \u03c3 mP 2 m + X i \u001a Ai + (1 \u2212\u03c4)Bi + Ci + (1 \u2212\u03c4)Di \u2212(1 \u2212\u03c4)Ei \u001b +N(|DDu|2 \u03c3\u03c9 + |D \u00af Du|2 \u03c3\u03c9) + (M\u03b5 \u2212CN \u2212C)F \u2212CM. (3.15) 8 \fUsing the critical equation (3.5), we have \u03c4 |DPm|2 \u03c3 mP 2 m = \u03c4m \f \f \f \fND|Du|2 \u2212MDu \f \f \f \f 2 \u03c3 \u22642\u03c4m(N2|D|Du|2|2 \u03c3 + M2|Du|2 \u03c3) \u2264 C\u03c4mN2(|DDu|2 \u03c3\u03c9 + |D \u00af Du|2 \u03c3\u03c9) + C\u03c4mM2F. 
We thus have
$$0 \ge -\frac{C(K)}{\lambda_1}\big\{1 + |DDu|^2 + |D\bar Du|^2\big\} + \big(N - C\tau mN^2\big)\big(|DDu|^2_{\sigma\omega} + |D\bar Du|^2_{\sigma\omega}\big) + \sum_i\big\{A_i + (1-\tau)B_i + C_i + (1-\tau)D_i - (1-\tau)E_i\big\} + \big(M\varepsilon - C\tau mM^2 - CN - C\big)F - CM. \qquad (3.17)$$

3.1 Estimating the Third Order Terms

In this subsection, we adapt the argument in [14] to estimate the third order terms.

Lemma 2. For sufficiently large $m$, the following estimates hold:
$$P_m^2\big(B_1 + C_1 + D_1 - E_1\big) \ge P_m\,\lambda_1^{m-2}\sum_{p\neq 1}\sigma_k^{p\bar p}\,|D_1g_{\bar pp}|^2 - \lambda_1^m\,\sigma_k^{1\bar 1}\,\lambda_1^{m-2}\,|D_1g_{\bar 11}|^2, \qquad (3.18)$$
and for any fixed $i\neq 1$,
$$P_m^2\big(B_i + C_i + D_i - E_i\big) \ge 0. \qquad (3.19)$$

Proof. Fix $i\in\{1,2,\dots,n\}$. First, we compute
$$P_m(B_i + D_i) = \sum_{p\neq i}\sigma_k^{p\bar p,i\bar i}\,\lambda_p^{m-1}\,|D_ig_{\bar pp}|^2 + \sum_{p\neq i}\sigma_k^{p\bar p}\,\frac{\lambda_p^{m-1}-\lambda_i^{m-1}}{\lambda_p-\lambda_i}\,|D_ig_{\bar pp}|^2 = \sum_{p\neq i}\lambda_p^{m-2}\big(\lambda_p\,\sigma_k^{p\bar p,i\bar i} + \sigma_k^{p\bar p}\big)|D_ig_{\bar pp}|^2 + \sum_{p\neq i}\sigma_k^{p\bar p}\sum_{q=0}^{m-3}\lambda_p^q\,\lambda_i^{m-2-q}\,|D_ig_{\bar pp}|^2.$$
Note that $\lambda_p\,\sigma_k^{p\bar p,i\bar i} + \sigma_k^{p\bar p} \ge \sigma_k^{i\bar i}$. To see this, we write
$$\lambda_p\,\sigma_k^{p\bar p,i\bar i} + \sigma_k^{p\bar p} = \lambda_p\,\sigma_{k-2}(\lambda|pi) + \sigma_{k-1}(\lambda|p) = \sigma_{k-1}(\lambda|i) - \sigma_{k-1}(\lambda|ip) + \sigma_{k-1}(\lambda|p) = \sigma_{k-1}(\lambda|i) + \lambda_i\,\sigma_{k-2}(\lambda|ip) \ge \sigma_{k-1}(\lambda|i) = \sigma_k^{i\bar i},$$
where we used the standard identity $\sigma_l(\lambda) = \sigma_l(\lambda|p) + \lambda_p\,\sigma_{l-1}(\lambda|p)$ twice, to get the second and third equalities. Therefore
$$P_m(B_i + D_i) \ge \sigma_k^{i\bar i}\Big\{\sum_{p\neq i}\lambda_p^{m-2}\,|D_ig_{\bar pp}|^2\Big\} + \Big\{\sum_{p\neq i}\sigma_k^{p\bar p}\sum_{q=0}^{m-3}\lambda_p^q\,\lambda_i^{m-2-q}\,|D_ig_{\bar pp}|^2\Big\}. \qquad (3.20)$$
It follows that
$$P_m(B_i + C_i + D_i) \ge m\,\sigma_k^{i\bar i}\sum_{p\neq i}\lambda_p^{m-2}\,|D_ig_{\bar pp}|^2 + (m-1)\,\sigma_k^{i\bar i}\,\lambda_i^{m-2}\,|D_ig_{\bar ii}|^2 + \sum_{p\neq i}\sigma_k^{p\bar p}\sum_{q=0}^{m-3}\lambda_p^q\,\lambda_i^{m-2-q}\,|D_ig_{\bar pp}|^2. \qquad (3.21)$$
Expanding out the definition of $E_i$,
$$P_m^2\,E_i = m\,\sigma_k^{i\bar i}\sum_{p\neq i}\lambda_p^{2m-2}\,|D_ig_{\bar pp}|^2 + m\,\sigma_k^{i\bar i}\,\lambda_i^{2m-2}\,|D_ig_{\bar ii}|^2 + m\,\sigma_k^{i\bar i}\sum_p\sum_{q\neq p}\lambda_p^{m-1}\lambda_q^{m-1}\,D_ig_{\bar pp}\,D_{\bar i}g_{\bar qq}. \qquad (3.22)$$
Therefore
$$P_m^2\big(B_i + C_i + D_i - E_i\big) \ge \Big\{m\,\sigma_k^{i\bar i}\sum_{p\neq i}\big(P_m - \lambda_p^m\big)\lambda_p^{m-2}\,|D_ig_{\bar pp}|^2 - m\,\sigma_k^{i\bar i}\sum_{p\neq i}\sum_{q\neq p,i}\lambda_p^{m-1}\lambda_q^{m-1}\,D_ig_{\bar pp}\,D_{\bar i}g_{\bar qq}\Big\} + P_m\sum_{p\neq i}\sigma_k^{p\bar p}\sum_{q=0}^{m-3}\lambda_p^q\,\lambda_i^{m-2-q}\,|D_ig_{\bar pp}|^2 - 2m\,\sigma_k^{i\bar i}\,\mathrm{Re}\sum_{q\neq i}\lambda_i^{m-1}\lambda_q^{m-1}\,D_ig_{\bar ii}\,D_{\bar i}g_{\bar qq} + \big\{(m-1)P_m - m\lambda_i^m\big\}\,\sigma_k^{i\bar i}\,\lambda_i^{m-2}\,|D_ig_{\bar ii}|^2. \qquad (3.23)$$
We shall estimate the expression in brackets. First,
$$m\,\sigma_k^{i\bar i}\sum_{p\neq i}\big(P_m - \lambda_p^m\big)\lambda_p^{m-2}\,|D_ig_{\bar pp}|^2 = m\,\sigma_k^{i\bar i}\sum_{p\neq i}\sum_{q\neq p,i}\lambda_q^m\,\lambda_p^{m-2}\,|D_ig_{\bar pp}|^2 + m\,\sigma_k^{i\bar i}\sum_{p\neq i}\lambda_i^m\,\lambda_p^{m-2}\,|D_ig_{\bar pp}|^2.$$
Next, we can estimate
$$-m\,\sigma_k^{i\bar i}\sum_{p\neq i}\sum_{q\neq p,i}\lambda_p^{m-1}\lambda_q^{m-1}\,D_ig_{\bar pp}\,D_{\bar i}g_{\bar qq} \ge -m\,\sigma_k^{i\bar i}\sum_{p\neq i}\sum_{q\neq p,i}\frac{1}{2}\big(\lambda_p^{m-2}\lambda_q^m\,|D_ig_{\bar pp}|^2 + \lambda_p^m\lambda_q^{m-2}\,|D_ig_{\bar qq}|^2\big) = -m\,\sigma_k^{i\bar i}\sum_{p\neq i}\sum_{q\neq p,i}\lambda_p^{m-2}\lambda_q^m\,|D_ig_{\bar pp}|^2. \qquad (3.24)$$
We arrive at
$$P_m^2\big(B_i + C_i + D_i - E_i\big) \ge m\,\sigma_k^{i\bar i}\sum_{p\neq i}\lambda_i^m\lambda_p^{m-2}\,|D_ig_{\bar pp}|^2 + P_m\sum_{p\neq i}\sigma_k^{p\bar p}\sum_{q=0}^{m-3}\lambda_p^q\lambda_i^{m-2-q}\,|D_ig_{\bar pp}|^2 - 2m\,\sigma_k^{i\bar i}\,\mathrm{Re}\Big\{\lambda_i^{m-1}\,D_ig_{\bar ii}\sum_{q\neq i}\lambda_q^{m-1}\,D_{\bar i}g_{\bar qq}\Big\} + \big\{(m-1)P_m - m\lambda_i^m\big\}\,\sigma_k^{i\bar i}\,\lambda_i^{m-2}\,|D_ig_{\bar ii}|^2. \qquad (3.25)$$
The next step is to extract good terms from the second summation on the first line. We fix a $p\neq i$.

Case 1: $\lambda_i \ge \lambda_p$. Then $\sigma_k^{p\bar p} \ge \sigma_k^{i\bar i}$. Hence
$$P_m\,\sigma_k^{p\bar p}\sum_{q=1}^{m-3}\lambda_p^q\lambda_i^{m-2-q} \ge \lambda_i^m\,\sigma_k^{i\bar i}\sum_{q=1}^{m-3}\lambda_p^q\lambda_p^{m-2-q} = (m-3)\,\sigma_k^{i\bar i}\,\lambda_i^m\lambda_p^{m-2}. \qquad (3.26)$$

Case 2: $\lambda_i \le \lambda_p$. Then $\lambda_p\,\sigma_k^{p\bar p} = \lambda_i\,\sigma_k^{i\bar i} + \big(\sigma_k(\lambda|i) - \sigma_k(\lambda|p)\big) \ge \lambda_i\,\sigma_k^{i\bar i}$, and we obtain
$$P_m\,\sigma_k^{p\bar p}\sum_{q=1}^{m-3}\lambda_p^q\lambda_i^{m-2-q} \ge \lambda_p^m\,\sigma_k^{i\bar i}\sum_{q=1}^{m-3}\lambda_p^{q-1}\lambda_i^{m-1-q} \ge (m-3)\,\sigma_k^{i\bar i}\,\lambda_i^m\lambda_p^{m-2}. \qquad (3.27)$$
Combining both cases, we have
$$P_m\,\sigma_k^{p\bar p}\sum_{q=0}^{m-3}\lambda_p^q\lambda_i^{m-2-q}\,|D_ig_{\bar pp}|^2 = P_m\,\sigma_k^{p\bar p}\sum_{q=1}^{m-3}\lambda_p^q\lambda_i^{m-2-q}\,|D_ig_{\bar pp}|^2 + P_m\,\sigma_k^{p\bar p}\,\lambda_i^{m-2}\,|D_ig_{\bar pp}|^2 \ge (m-3)\,\sigma_k^{i\bar i}\,\lambda_i^m\lambda_p^{m-2}\,|D_ig_{\bar pp}|^2 + P_m\,\sigma_k^{p\bar p}\,\lambda_i^{m-2}\,|D_ig_{\bar pp}|^2.$$
Substituting this estimate into inequality (3.25), we obtain
$$P_m^2\big(B_i + C_i + D_i - E_i\big) \ge (2m-3)\,\sigma_k^{i\bar i}\sum_{p\neq i}\lambda_i^m\lambda_p^{m-2}\,|D_ig_{\bar pp}|^2 - 2m\,\sigma_k^{i\bar i}\,\mathrm{Re}\Big\{\lambda_i^{m-1}\,D_ig_{\bar ii}\sum_{p\neq i}\lambda_p^{m-1}\,D_{\bar i}g_{\bar pp}\Big\} + P_m\,\lambda_i^{m-2}\sum_{p\neq i}\sigma_k^{p\bar p}\,|D_ig_{\bar pp}|^2 + \big\{(m-1)P_m - m\lambda_i^m\big\}\,\sigma_k^{i\bar i}\,\lambda_i^{m-2}\,|D_ig_{\bar ii}|^2. \qquad (3.28)$$
Choose $m\gg 1$ such that
$$m^2 \le (2m-3)(m-2). \qquad (3.29)$$
We can therefore estimate
$$2m\,\sigma_k^{i\bar i}\,\mathrm{Re}\Big\{\lambda_i^{m-1}\,D_ig_{\bar ii}\sum_{p\neq i}\lambda_p^{m-1}\,D_{\bar i}g_{\bar pp}\Big\} \le 2\,\sigma_k^{i\bar i}\sum_{p\neq i}\Big\{(2m-3)^{1/2}\lambda_i^{m/2}\lambda_p^{\frac{m-2}{2}}\,|D_ig_{\bar pp}|\Big\}\Big\{(m-2)^{1/2}\lambda_i^{\frac{m-2}{2}}\lambda_p^{m/2}\,|D_{\bar i}g_{\bar ii}|\Big\} \le (2m-3)\,\sigma_k^{i\bar i}\sum_{p\neq i}\lambda_i^m\lambda_p^{m-2}\,|D_ig_{\bar pp}|^2 + (m-2)\,\sigma_k^{i\bar i}\sum_{p\neq i}\lambda_i^{m-2}\lambda_p^m\,|D_{\bar i}g_{\bar ii}|^2. \qquad (3.30)$$
We finally arrive at
$$P_m^2\big(B_i + C_i + D_i - E_i\big) \ge P_m\,\lambda_i^{m-2}\sum_{p\neq i}\sigma_k^{p\bar p}\,|D_ig_{\bar pp}|^2 + \big\{(m-1)P_m - m\lambda_i^m\big\}\,\sigma_k^{i\bar i}\,\lambda_i^{m-2}\,|D_ig_{\bar ii}|^2 - (m-2)\,\sigma_k^{i\bar i}\sum_{p\neq i}\lambda_i^{m-2}\lambda_p^m\,|D_{\bar i}g_{\bar ii}|^2. \qquad (3.31)$$
If we let $i = 1$, we obtain inequality (3.18).
For any fixed $i\neq 1$, this inequality yields
$$P_m^2\big(B_i + C_i + D_i - E_i\big) \ge P_m\,\lambda_i^{m-2}\sum_{p\neq i}\sigma_k^{p\bar p}\,|D_ig_{\bar pp}|^2 + \big\{(m-1)\lambda_1^m - \lambda_i^m\big\}\,\sigma_k^{i\bar i}\,\lambda_i^{m-2}\,|D_ig_{\bar ii}|^2 + (m-1)\sum_{p\neq 1,i}\lambda_p^m\,\sigma_k^{i\bar i}\,\lambda_i^{m-2}\,|D_ig_{\bar ii}|^2 - (m-2)\,\sigma_k^{i\bar i}\sum_{p\neq i}\lambda_i^{m-2}\lambda_p^m\,|D_{\bar i}g_{\bar ii}|^2 \ge P_m\,\lambda_i^{m-2}\sum_{p\neq i}\sigma_k^{p\bar p}\,|D_ig_{\bar pp}|^2 \ge 0.$$
This completes the proof of Lemma 2. Q.E.D.

We observed in (3.12) that $A_i\ge 0$. Lemma 2 implies that for any $i\neq 1$, $A_i + B_i + C_i + D_i - E_i \ge 0$. Thus we have shown that for $i\neq 1$, the third order terms in the main inequality (3.17) are indeed nonnegative. The only remaining case is $i = 1$. By adapting once again the techniques from [10], we obtain the following lemma.

Lemma 3. Let $1 < k\le n$. Suppose there exists $0 < \delta\le 1$ such that $\lambda_\mu \ge \delta\lambda_1$ for some $\mu\in\{1,2,\dots,k-1\}$. There exists a small $\delta' > 0$ such that if $\lambda_{\mu+1}\le\delta'\lambda_1$, then $A_1 + B_1 + C_1 + D_1 - E_1 \ge 0$.

Proof. By Lemma 2, we have
$$P_m^2\big(A_1 + B_1 + C_1 + D_1 - E_1\big) \ge P_m^2\,A_1 + P_m\,\lambda_1^{m-2}\sum_{p\neq 1}\sigma_k^{p\bar p}\,|D_1g_{\bar pp}|^2 - \lambda_1^m\,\sigma_k^{1\bar 1}\,\lambda_1^{m-2}\,|D_1g_{\bar 11}|^2. \qquad (3.32)$$
The key insight in [10], used also in [14], is to extract a good term involving $|D_1g_{\bar 11}|^2$ from $A_1$. By the inequality in Lemma 1 (with $\theta = \frac12$), we have for $\mu < k$
$$P_m^2\,A_1 \ge P_m\,\lambda_1^{m-1}\,\frac{\sigma_k}{\sigma_\mu^2}\Big\{\Big(1+\frac{\alpha}{2}\Big)\Big|\sum_p\sigma_\mu^{p\bar p}\,D_1g_{\bar pp}\Big|^2 - \sigma_\mu\,\sigma_\mu^{p\bar p,q\bar q}\,D_1g_{\bar pp}\,D_{\bar 1}g_{\bar qq}\Big\} = P_m\,\lambda_1^{m-1}\,\frac{\sigma_k}{\sigma_\mu^2}\Big\{\sum_p\Big(1+\frac{\alpha}{2}\Big)\big|\sigma_\mu^{p\bar p}\,D_1g_{\bar pp}\big|^2 + \sum_{p\neq q}\frac{\alpha}{2}\,\sigma_\mu^{p\bar p}\,D_1g_{\bar pp}\;\sigma_\mu^{q\bar q}\,D_{\bar 1}g_{\bar qq} + \sum_{p\neq q}\big(\sigma_\mu^{p\bar p}\sigma_\mu^{q\bar q} - \sigma_\mu\,\sigma_\mu^{p\bar p,q\bar q}\big)\,D_1g_{\bar pp}\,D_{\bar 1}g_{\bar qq}\Big\} \ge P_m\,\lambda_1^{m-1}\,\frac{\sigma_k}{\sigma_\mu^2}\Big\{\sum_p\big|\sigma_\mu^{p\bar p}\,D_1g_{\bar pp}\big|^2 - \sum_{p\neq q}\big|F^{pq}\,D_1g_{\bar pp}\,D_{\bar 1}g_{\bar qq}\big|\Big\}, \qquad (3.33)$$
where we defined $F^{pq} = \sigma_\mu^{p\bar p}\sigma_\mu^{q\bar q} - \sigma_\mu\,\sigma_\mu^{p\bar p,q\bar q}$. Notice that if $\mu = 1$, then $F^{pq} = 1$. If $\mu\ge 2$, then the Newton-MacLaurin inequality implies
$$F^{pq} = \sigma_{\mu-1}(\lambda|pq)^2 - \sigma_\mu(\lambda|pq)\,\sigma_{\mu-2}(\lambda|pq) \ge 0. \qquad (3.34)$$
(A quick numerical confirmation of (3.34) is sketched below.) We split the sum involving $F^{pq}$ in the following way:
$$\sum_{p\neq q}\big|F^{pq}\,D_1g_{\bar pp}\,D_{\bar 1}g_{\bar qq}\big| = \sum_{p\neq q;\,p,q\le\mu}F^{pq}\,|D_1g_{\bar pp}|\,|D_{\bar 1}g_{\bar qq}| + \sum_{(p,q)\in J}F^{pq}\,|D_1g_{\bar pp}|\,|D_{\bar 1}g_{\bar qq}|, \qquad (3.35)$$
where $J$ is the set of indices where at least one of $p\neq q$ is strictly greater than $\mu$. The summation of the terms in $J$ can be estimated by
$$-\sum_{(p,q)\in J}F^{pq}\,|D_1g_{\bar pp}|\,|D_{\bar 1}g_{\bar qq}| \ge -\sum_{(p,q)\in J}\sigma_\mu^{p\bar p}\sigma_\mu^{q\bar q}\,|D_1g_{\bar pp}|\,|D_{\bar 1}g_{\bar qq}| \ge -\epsilon\sum_{p\le\mu}\big|\sigma_\mu^{p\bar p}\,D_1g_{\bar pp}\big|^2 - C\sum_{p>\mu}\big|\sigma_\mu^{p\bar p}\,D_1g_{\bar pp}\big|^2. \qquad (3.36)$$
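The Newton-MacLaurin inequality (3.34) is easy to stress-test numerically; the sketch below checks it on random positive vectors standing in for $(\lambda|pq)$ (the paper applies it in the relevant Gårding cone; the helper `sigma` is our own):

```python
import numpy as np
from itertools import combinations

def sigma(k, lam):
    if k == 0:
        return 1.0
    return sum(np.prod(c) for c in combinations(lam, k))

# Newton-MacLaurin: sigma_{mu-1}^2 >= sigma_mu * sigma_{mu-2} for positive entries
rng = np.random.default_rng(5)
ok = True
for _ in range(1000):
    lam = rng.uniform(0.01, 10.0, size=6)
    for mu in range(2, 6):
        ok &= sigma(mu - 1, lam)**2 >= sigma(mu, lam) * sigma(mu - 2, lam)
print(ok)   # True
```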
Returning to (3.35): if $\mu = 1$, the first term on the right hand side of (3.35) vanishes, and the estimate (3.36) applies to all terms on the right hand side of (3.35). If $\mu\ge 2$, we have for $p,q\le\mu$,
$$\sigma_{\mu-1}(\lambda|pq) \le C\,\frac{\lambda_1\cdots\lambda_{\mu+1}}{\lambda_p\lambda_q} \le C\,\sigma_\mu^{p\bar p}\,\frac{\lambda_{\mu+1}}{\lambda_q}. \qquad (3.37)$$
Using (3.34) and (3.37), for $\delta'$ small enough we can control
$$-\sum_{p\neq q;\,p,q\le\mu}F^{pq}\,|D_1g_{\bar pp}|\,|D_{\bar 1}g_{\bar qq}| \ge -\sum_{p\neq q;\,p,q\le\mu}\sigma_{\mu-1}(\lambda|pq)^2\,|D_1g_{\bar pp}|\,|D_{\bar 1}g_{\bar qq}| \ge -C\lambda_{\mu+1}^2\sum_{p\neq q;\,p,q\le\mu}\frac{\sigma_\mu^{p\bar p}}{\lambda_p}\,|D_1g_{\bar pp}|\;\frac{\sigma_\mu^{q\bar q}}{\lambda_q}\,|D_{\bar 1}g_{\bar qq}| \ge -C\sum_{p\le\mu}\frac{\lambda_{\mu+1}^2}{\lambda_p^2}\,\big|\sigma_\mu^{p\bar p}\,D_1g_{\bar pp}\big|^2 \ge -C\sum_{p\le\mu}\frac{\delta'^2}{\delta^2}\,\big|\sigma_\mu^{p\bar p}\,D_1g_{\bar pp}\big|^2 \ge -\epsilon\sum_{p\le\mu}\big|\sigma_\mu^{p\bar p}\,D_1g_{\bar pp}\big|^2. \qquad (3.38)$$
Combining all cases, we have
$$-\sum_{p\neq q}\big|F^{pq}\,D_1g_{\bar pp}\,D_{\bar 1}g_{\bar qq}\big| \ge -2\epsilon\sum_{p\le\mu}\big|\sigma_\mu^{p\bar p}\,D_1g_{\bar pp}\big|^2 - C\sum_{p>\mu}\big|\sigma_\mu^{p\bar p}\,D_1g_{\bar pp}\big|^2. \qquad (3.39)$$
Using this inequality in (3.33) yields
$$P_m^2\,A_1 \ge P_m\,\lambda_1^{m-1}\,\frac{\sigma_k}{\sigma_\mu^2}\Big\{(1-2\epsilon)\sum_{p\le\mu}\big|\sigma_\mu^{p\bar p}\,D_1g_{\bar pp}\big|^2 - C\sum_{p>\mu}\big|\sigma_\mu^{p\bar p}\,D_1g_{\bar pp}\big|^2\Big\} \ge (1-2\epsilon)\,P_m\,\lambda_1^{m-1}\,\frac{\sigma_k}{\sigma_\mu^2}\,\big|\sigma_\mu^{1\bar 1}\,D_1g_{\bar 11}\big|^2 - C\,P_m\,\lambda_1^{m-1}\,\frac{\sigma_k}{\sigma_\mu^2}\sum_{p>\mu}\big|\sigma_\mu^{p\bar p}\,D_1g_{\bar pp}\big|^2. \qquad (3.40)$$
We estimate
$$(1-2\epsilon)\,P_m\,\lambda_1^{m-1}\,\frac{\sigma_k}{\sigma_\mu^2}\,\big|\sigma_\mu^{1\bar 1}\,D_1g_{\bar 11}\big|^2 = (1-2\epsilon)\,P_m\,\lambda_1^{m-2}\,\frac{\sigma_k}{\lambda_1}\Big(\frac{\lambda_1\,\sigma_\mu^{1\bar 1}}{\sigma_\mu}\Big)^2\,|D_1g_{\bar 11}|^2 \ge (1-2\epsilon)\,P_m\,\lambda_1^{m-2}\,\frac{\sigma_k}{\lambda_1}\Big(1 - C\,\frac{\lambda_{\mu+1}}{\lambda_1}\Big)^2\,|D_1g_{\bar 11}|^2 \ge (1-2\epsilon)(1-C\delta')^2\,P_m\,\lambda_1^{m-2}\,\sigma_k^{1\bar 1}\,|D_1g_{\bar 11}|^2 \ge (1-2\epsilon)(1-C\delta')^2(1+\delta^m)\,\lambda_1^{2m-2}\,\sigma_k^{1\bar 1}\,|D_1g_{\bar 11}|^2. \qquad (3.41)$$
For $\delta'$ and $\epsilon$ small enough, we obtain
$$P_m^2\,A_1 \ge \lambda_1^m\,\sigma_k^{1\bar 1}\,\lambda_1^{m-2}\,|D_1g_{\bar 11}|^2 - C\,P_m\,\lambda_1^{m-1}\,\frac{\sigma_k}{\sigma_\mu^2}\sum_{p>\mu}\big|\sigma_\mu^{p\bar p}\,D_1g_{\bar pp}\big|^2. \qquad (3.42)$$
We see that the $|D_1g_{\bar 11}|^2$ term cancels from inequality (3.32), and we are left with
$$P_m^2\big(A_1 + B_1 + C_1 + D_1 - E_1\big) \ge P_m\,\lambda_1^{m-2}\sum_{p>\mu}\Big\{\sigma_k^{p\bar p} - C\,\frac{\lambda_1\,\sigma_k\,(\sigma_\mu^{p\bar p})^2}{\sigma_\mu^2}\Big\}\,|D_1g_{\bar pp}|^2. \qquad (3.43)$$
For $\delta'$ small enough, the above expression is nonnegative. Indeed, for any $p > \mu$, we have
$$\big(\lambda_1\,\sigma_\mu^{p\bar p}\big)^2 \le \frac{1}{\delta^2}\big(\lambda_\mu\,\sigma_\mu^{p\bar p}\big)^2 \le C\,\frac{\sigma_\mu^2}{\delta^2}, \qquad (3.44)$$
and therefore
$$C\,\frac{\lambda_1\,\sigma_k\,(\sigma_\mu^{p\bar p})^2}{\sigma_\mu^2} \le \frac{C}{\delta^2}\,\frac{\sigma_k}{\lambda_1}. \qquad (3.45)$$
On the other hand, we notice that if $p > k$, then $\sigma_k^{p\bar p} \ge \lambda_1\cdots\lambda_{k-1} \ge c_n\,\frac{\sigma_k}{\lambda_k} \ge \frac{c_n}{\delta'}\,\frac{\sigma_k}{\lambda_1}$; if $\mu < p \le k$, then $\sigma_k^{p\bar p} \ge \frac{\lambda_1\cdots\lambda_k}{\lambda_p} \ge c_n\,\frac{\sigma_k}{\lambda_p} \ge \frac{c_n}{\delta'}\,\frac{\sigma_k}{\lambda_1}$.
It follows that for \u03b4\u2032 small enough we have \u03c3p\u00af p k \u2265C \u03bb1\u03c3k(\u03c3p\u00af p \u00b5 )2 \u03c32 \u00b5 . (3.46) This completes the proof of Lemma 3. Q.E.D. 3.2 Completing the Proof With Lemma 2 and Lemma 3 at our disposal, we claim that we may assume in inequality (3.17) that Ai + Bi + Ci + Di \u2212Ei \u22650, \u2200i = 1, \u00b7 \u00b7 \u00b7, n. (3.47) Indeed, \ufb01rst set \u03b41 = 1. If \u03bb2 \u2264\u03b42\u03bb1 for \u03b42 > 0 small enough, then by Lemma 3 we see that (3.47) holds. Otherwise, \u03bb2 \u2265\u03b42\u03bb1. If \u03bb3 \u2264\u03b43\u03bb1 for \u03b43 > 0 small enough, then by Lemma 3 we see that (3.47) holds. Otherwise, \u03bb3 \u2265\u03b43\u03bb1. Proceeding iteratively, we may 14 \farrive at \u03bbk \u2265\u03b4k\u03bb1. But in this case, the C2 estimate follows directly from the equation as C \u2265\u03c3k \u2265\u03bb1 \u00b7 \u00b7 \u00b7 \u03bbk \u2265(\u03b4k)k\u22121\u03bb1. (3.48) Therefore we may assume (3.47), and inequality (3.17) becomes 0 \u2265 \u2212C(K) \u03bb1 \u001a 1 + |DDu|2 + |D \u00af Du|2 \u001b + (N \u2212C\u03c4mN2)(|DDu|2 \u03c3\u03c9 + |D \u00af Du|2 \u03c3\u03c9) +(M\u03b5 \u2212C\u03c4mM2 \u2212CN \u2212C)F \u2212CM. (3.49) Since for \ufb01xed i, \u03c3i\u00af i k \u2265\u03c31\u00af 1 k \u2265k n \u03c3k \u03bb1 \u2265 1 C\u03bb1, we can estimate |DDu|2 \u03c3\u03c9 + |D \u00af Du|2 \u03c3\u03c9 \u2265 1 C\u03bb1 (|DDu|2 + |D \u00af Du|2) \u2265 1 C\u03bb1 |DDu|2 + \u03bb1 C . (3.50) This leads to 0 \u2265 \u001aN C \u2212C\u03c4mN2 \u2212C(K) \u001b \u03bb1 + 1 \u03bb1 \u001aN C \u2212C\u03c4mN2 \u2212C(K) \u001b\u001a 1 + |DDu|2 \u001b +(M\u03b5 \u2212C\u03c4mM2 \u2212CN \u2212C)F \u2212CM. By choosing \u03c4 small, for example, \u03c4 = 1 NM , we have 0 \u2265 \u001aN C \u2212Cm M N \u2212C(K) \u001b \u03bb1 + 1 \u03bb1 \u001aN C \u2212Cm M N \u2212C(K) \u001b\u001a 1 + |DDu|2 \u001b +(M\u03b5 \u2212Cm N M \u2212CN \u2212C)F \u2212CM. Taking N and M large enough, we can make the coe\ufb03cients of the \ufb01rst three terms to be positive. For example, if we let M = N2 for N large, then N C \u2212Cm M N \u2212C(K) = N C \u2212Cm N \u2212C(K) > 0 and M\u03b5 \u2212Cm N M \u2212CN \u2212C = N2\u03b5 \u2212CmN \u2212CN \u2212C > 0. Thus, an upper bound of \u03bb1 follows. Q.E.D. Remark 1 In the above estimate, we assume that \u03bb = (\u03bb1, \u00b7 \u00b7 \u00b7, \u03bbn) \u2208\u0393n. Indeed, our estimate still works with \u03bb \u2208\u0393k+1. It was observed in [14] (Lemma 7) that if \u03bb \u2208\u0393k+1, then \u03bb1 \u2265\u00b7 \u00b7 \u00b7 \u2265\u03bbn > \u2212K0 for some positive constant K0. Thus, we can replace \u03bb by \u02dc \u03bb = \u03bb + K0I in our test function G in (3.2). Acknowledgements: The authors would like to thank Pengfei Guan for stimulating conversations and for sharing his unpublished notes [8]. The authors are also very grateful to the referees for an exceptionally careful reading and for many helpful suggestions. 15" + }, + { + "url": "http://arxiv.org/abs/1507.08193v2", + "title": "On estimates for the Fu-Yau generalization of a Strominger system", + "abstract": "We study an equation proposed by Fu and Yau as a natural $n$-dimensional\ngeneralization of a Strominger system that they solved in dimension $2$. It is\na complex Hessian equation with right hand side depending on gradients.\nBuilding on the methods of Fu and Yau, we obtain $C^0$, $C^2$ and\n$C^{2,\\alpha}$ a priori estimates. 
We also identify difficulties in extending\nthe Fu-Yau arguments for non-degeneracy from dimension $2$ to higher\ndimensions.", + "authors": "Duong H. Phong, Sebastien Picard, Xiangwen Zhang", + "published": "2015-07-29", + "updated": "2017-04-10", + "primary_cat": "math.CV", + "cats": [ + "math.CV", + "math.AP", + "math.DG" + ], + "main_content": "Introduction In 1985, Strominger [20] proposed a system of equations for compacti\ufb01cations of superstring theories which satisfy the key physical requirement of N = 1 supersymmetry. These equations are also remarkable from the mathematical standpoint, as they combine in a novel way features of Ricci-\ufb02at metrics on Calabi-Yau manifolds together with HermitianEinstein metrics on holomorphic vector bundles. Solutions of Strominger systems were indeed obtained perturbatively by Li and Yau [14] from Ricci-\ufb02at and Hermitian-Einstein metrics. However, non-perturbative solutions proved to be daunting, and it was a major breakthrough when Fu and Yau [7] obtained the \ufb01rst such solution, some twenty years after Strominger\u2019s original proposal. The particular Strominger solution obtained by Fu and Yau was a toric \ufb01bration over a K3 surface. For such manifolds, Fu and Yau succeeded in reducing the Strominger system to the special case in dimension n = 2 of the following equation, on a compact n-dimensional K\u00a8 ahler manifold (X, \u03c9), i\u2202\u00af \u2202(eu \u2212\u03b1fe\u2212u) \u2227\u03c9n\u22121 + n\u03b1i\u2202\u00af \u2202u \u2227i\u2202\u00af \u2202u \u2227\u03c9n\u22122 + \u00b5\u03c9n n! = 0, (1.1) where \u03b1 > 0 is a constant, f \u22650 is a smooth function, \u00b5 is a smooth function such that R X \u00b5 = 0, and the ellipticity condition described further below in (2.3) is imposed. When n = 2, it becomes a Monge-Amp` ere equation, and Fu and Yau [7] suggested the problem of studying the equation (1.1) for general dimension n. 1Work supported in part by the National Science Foundation under Grant DMS-12-66033 and DMS1308136. Keywords: Hessian equations; symmetric functions of eigenvalues; Moser iteration, inequalities of Guan-Ren-Wang; maximum principles. AMS classi\ufb01cation numbers: 32Q26 (32Q15, 32Q20, 32U05, 32W20), 35Kxx. \fIn this paper, we provide a partial answer to the problem raised by Fu and Yau. More speci\ufb01cally, we express the equation (1.1) in a more standard complex Hessian type equation (see (2.7)), and we establish C0, C2, and C2,\u03b1 a priori estimates for the equation. An upper C1 bound is automatic from the equation. But just as in the case n = 2 treated by Fu and Yau, the C2 estimate is contingent upon a lower bound for the second symmetric function \u03c32(g\u2032) of the eigenvalues of the unknown Hermitian form g\u2032 \u00af kj given in (2.4). Indeed, this is equivalent to an improved gradient estimate. One of the key innovations of Fu and Yau was a proof of such a lower bound in dimension n = 2. However, while we were able to obtain a sharp generalization of their computations to arbitrary dimensions, it turned out that this was not strong enough to imply the desired lower bound (see \u00a77), and it is at this time unclear whether such a lower bound does hold. Our proof of the C0 estimate is a close parallel of the proof by Moser iteration methods used in [7]. 
The C2 estimate also builds in an essential way on the methods of [7], but we also exploit some new inequalities due to Guan, Ren, and Wang [10] in their work on real Hessian equations with gradient terms on the right hand side. Although the C2,\u03b1 estimate does not require much new work, it does not follow from the classical Evans-Krylov theory due to the dependence of the gradient on the right hand side. However, we can obtain the desired estimate by using the recent works of Wang [26] and Tosatti-Wang-WeinkoveYang [24] which deal with the C2,\u03b1 regularity of complex Monge-Amp` ere type equations with H\u00a8 older regular right hand side. The estimate still open is the lower bound for \u03c32(g\u2032). We discuss in detail the di\ufb03culties in trying to extend to higher dimensions the Fu-Yau arguments for a lower bound for \u03c32(g\u2032). To handle higher dimensions, we work with general coordinate systems rather than the adapted ones with \u2207u = (u1, 0, \u00b7 \u00b7 \u00b7, 0) used by Fu and Yau. This allows us a simpli\ufb01ed and more transparent derivation of the Fu-Yau results for n = 2, and a clearer picture of why their arguments are not strong enough for higher dimensions. Because of the complexity of the calculations and possibly for future use, this is presented in detail in section \u00a77. 2 The Fu-Yau Equation We begin by writing equation (1.1) proposed by Fu-Yau [7] in a more explicit form. Let \u039b be a Hermitian (1, 1)-form, and let \u03c3k(\u039b) be the k-th symmetric function of its eigenvalues relative to the K\u00a8 ahler form \u03c9, that is, \u03c3k(\u039b) = n k !\u039bk \u2227\u03c9n\u2212k \u03c9n = X j1<\u00b7\u00b7\u00b7 0, (2.3) where \u03c9 = i P g\u00af jkdzk \u2227d\u00af zj. It is convenient to introduce also the following Hermitian (1, 1)-form, g\u2032 \u00af jk = (eu + fe\u2212u)g\u00af jk + 2n\u03b1u\u00af jk. (2.4) If we denote by \u03bbj the eigenvalues of i\u2202\u00af \u2202u, by \u03bb\u2032 j the eigenvalues of g\u2032 \u00af jk, and by \u02dc \u03bbj the eigenvalues of \u02dc g\u00af jk, all with respect to g\u00af jk, then it is easy to see that \u03bb\u2032 j = (eu + fe\u2212u) + 2n\u03b1\u03bbj, \u02dc \u03bbj = X k\u0338=j \u03bb\u2032 k, (2.5) and hence the following relations between the symmetric functions of i\u2202\u00af \u2202u and g\u2032 \u00af jk, \u03c31(g\u2032) = n(eu + fe\u2212u) + 2n\u03b1\u03c31(i\u2202\u00af \u2202u) \u03c32(g\u2032) = 4n2\u03b12\u03c32(i\u2202\u00af \u2202u) + 2n(n \u22121)\u03b1(eu + fe\u2212u)\u03c31(i\u2202\u00af \u2202u) + n(n \u22121) 2 (eu + fe\u2212u)2 and between the symmetric functions of g\u2032 \u00af jk and \u02dc g\u00af jk, \u03c31(\u02dc g) = (n \u22121)\u03c31(g\u2032) \u03c32(\u02dc g) = 1 2(n \u22121)(n \u22122)\u03c31(g\u2032)2 + \u03c32(g\u2032). (2.6) Substituting equation (2.2) in the above expression for \u03c32(g\u2032), we can re-write the equation in terms of g\u2032 \u00af jk as \u03c32(g\u2032) = n(n \u22121) 2 e2u(1 \u22124\u03b1e\u2212u|Du|2) + 2\u03b1n(n \u22121)fe\u2212u|Du|2 +n(n \u22121)f + n(n \u22121) 2 e\u22122uf 2 \u22122n\u03b1\u00b5 +2\u03b1n(n \u22121)e\u2212u(\u2206f \u22122Re(gj\u00af kfju\u00af k)). (2.7) It follows that equations (2.2) and (2.7) are equivalent when \u03b1 \u0338= 0. Here as in the rest of the paper, we denote by D the covariant derivative with respect to the given metric g\u00af kj. Furthermore, as in [7], we impose a normalization condition on a solution u. Let \u03b2 = n n\u22121, and \u03b3 = 4 \u03b2\u22121. 
For A \u226a1, we impose \u0012Z X e\u2212\u03b3u \u0013 1 \u03b3 = A. (2.8) 3 \fThe ellipticity condition for equation (2.7) is that the eigenvalues of g\u2032 \u00af jk with respect to the metric g\u00af jk should be in the \u03932 cone, \u03932 = {\u03bb\u2032 \u2208Rn; \u03c31(\u03bb\u2032) > 0, \u03c32(\u03bb\u2032) > 0}. (2.9) Moreover, we remark that g\u2032 \u2208\u03932 implies that \u02dc g\u00af jk > 0 by relation (2.5). The equation (2.7) \ufb01ts in the framework of complex Hessian equations on closed manifolds, which have been studied extensively by many authors in recent years, see for example, [1, 3, 4, 11, 12, 15, 16, 21, 22, 23, 28, 29]. However, in comparison with previous works, (2.7) has two new di\ufb03culties. The \ufb01rst di\ufb03culty is the dependence on the gradient of the right hand side of the equation. This causes some trouble when attempting to obtain a C2 estimate. The second di\ufb03culty is the possible degeneracy of the equation. It is easy to see that even for the ideal case f = \u00b5 = 0 in equation (2.7), the right hand side might be zero. Therefore, to get smooth solutions, one needs to show that it is not degenerate under certain conditions on A. See \u00a74 and \u00a77 for more discussions of this particular di\ufb03culty. Before moving to next subsection, we want to emphasize that these two di\ufb03culties occur when \u03b1 > 0 in equation (2.7). If \u03b1 < 0, the behavior of the equation is quite di\ufb00erent and Fu-Yau studied the n = 2 case in [8]. We will investigate the higher dimensional case in other work. 2.1 The linearization F j\u00af k of \u03c32(g\u2032) We can view the Fu-Yau equation (2.7) as a complex Hessian equation of \u03c32 type, with a right hand side depending on Du. In accordance with standard notation in partial di\ufb00erential equations, we also denote \u03c32(g\u2032) by F, viewed as a function of u, Du, and D \u00af Du. In particular, F j\u00af k \u2261\u2202F/\u2202g\u2032 \u00af kj, and the linearization of \u03c32(g\u2032) is given by \u03b4\u03c32(g\u2032) = F j\u00af k\u03b4g\u2032 \u00af kj. (2.10) We shall need explicit formulas for F j\u00af k, and for the operator 2n\u03b1F j\u00af kDjD\u00af k acting on u, the gradient Du of u, the square |Du|2 of the gradient, and the complex hessian DpD\u00af qu. We summarize brie\ufb02y here our notations and conventions. The Hermitian form \u03c9 de\ufb01ned by a K\u00a8 ahler metric g\u00af kj is given by \u03c9 = ig\u00af kjdzj \u2227d\u00af zk. The Chern unitary connection with respect to the metric \u03c9 is denoted by D\u00af j = \u2202 \u2202\u00af zj , DjV p = gp\u00af q\u2202j(g\u00af qmV m), and the curvature tensor is de\ufb01ned by [D\u00af k, Dj]V m = \u2212R\u00af kj m pV p. The Ricci curvature R\u00af kj is given by R\u00af kj = R\u00af kj mm. Given a second Hermitian tensor g\u2032 \u00af km, the relative endomorphism hjk from g\u2032 \u00af km to g\u00af km is de\ufb01ned by hj k = gk \u00af mg\u2032 \u00af mj. 4 \fWriting \u03c32(g\u2032) = ((Tr h)2 \u2212Tr h2)/2, we readily \ufb01nd F j\u00af k = gj \u00af pgq\u00af k\u02dc g\u00af pq (2.11) where \u02dc g\u00af pq is the metric introduced in (2.3). In particular F j\u00af kg\u00af kj = (n \u22121)Tr h, and hence 2n\u03b1F j\u00af kDjD\u00af ku = F j\u00af kg\u2032 \u00af kj \u2212(eu + fe\u2212u)F j\u00af kg\u00af kj = 2F \u2212(n \u22121)(eu + fe\u2212u)Tr h. (2.12) Next, the variational formula for \u03c32(g\u2032) implies \u2202pF = F j\u00af kDpg\u2032 \u00af kj. 
(2.13) Substituting in the de\ufb01nition of g\u2032 \u00af kj, we obtain the following formula for 2n\u03b1F j\u00af kDjD\u00af k(Dpu), 2n\u03b1F j\u00af kDjD\u00af k(Dpu) = \u2202pF \u2212(n \u22121)Tr h \u2202p(eu + fe\u2212u). (2.14) Similarly, we \ufb01nd 2n\u03b1F j\u00af kDjD\u00af k(D\u00af pu) = \u2202\u00af pF \u2212(n \u22121)Tr h \u2202\u00af p(eu + fe\u2212u) + 2n\u03b1\u02dc g\u00af \u2113mR\u00af p \u00af \u2113m\u00af qD\u00af qu (2.15) where R\u00af p \u00af \u2113m\u00af q is the curvature of metric \u03c9. This additional curvature term resulted from the commutation of covariant derivatives D\u00af pDj and DjD\u00af p when acting on D\u00af ku. It is now easy to deduce 2n\u03b1F j\u00af kDjD\u00af k|Du|2. Introduce the notation |DDu|2 F g = F j\u00af kg\u2113\u00af mDj\u2113uD\u00af k \u00af mu, |D \u00af Du|2 F g = F j\u00af kg\u2113\u00af mDj \u00af muD\u2113\u00af ku. (2.16) Then 2n\u03b1F j\u00af kDjD\u00af k|Du|2 = 2n\u03b1g\u2113\u00af mF j\u00af k(DjD\u00af kD\u2113u D \u00af mu + D\u2113u DjD\u00af kD \u00af mu) +2n\u03b1(|DDu|2 F g + |D \u00af Du|2 F g) (2.17) and hence, in view of the formulas (2.14) and (2.15), 2n\u03b1F j\u00af kDjD\u00af k|Du|2 = g\u2113\u00af m(\u2202\u2113F\u2202\u00af mu + \u2202\u00af mF\u2202\u2113u) + 2n\u03b1\u02dc g\u00af \u2113m\u2202puRm\u00af \u2113p\u00af q\u2202\u00af qu \u2212(n \u22121)Tr h g\u2113\u00af m(\u2202\u00af m(eu + fe\u2212u)\u2202\u2113u + \u2202\u2113(eu + fe\u2212u)\u2202\u00af mu) +2n\u03b1(|DDu|2 F g + |D \u00af Du|2 F g). (2.18) Finally, the operator 2n\u03b1F j\u00af kDjD\u00af k acting on the Hessian DpD\u00af qu can be obtained in a similar way from di\ufb00erentiating the equation (2.13) again, giving F j\u00af kDpD\u00af qg\u2032 \u00af kj = \u2202p\u2202\u00af qF \u2212Dp(Tr h)D\u00af q(Tr h) + Djhj pD\u00af kh\u00af q \u00af k. (2.19) 5 \fWe can extract the term DpD\u00af qDjD\u00af ku from the left-hand side. Permuting the order of di\ufb00erentiation, we \ufb01nd 2n\u03b1F j\u00af kDjD\u00af kDpD\u00af qu (2.20) = 2n\u03b1F j\u00af kDpD\u00af qDjD\u00af ku + 2n\u03b1 \u0010 F j\u00af kR\u00af qj\u00af k \u00af au\u00af ap \u2212F j\u00af kR\u00af qp\u00af k \u00af au\u00af aj \u0011 = F j\u00af kDpD\u00af qg\u2032 \u00af kj \u2212 \u0010 eu \u2212fe\u2212u\u0011 u\u00af qpF j\u00af kg\u00af kj \u2212 \u0010 eu + fe\u2212u\u0011 upu\u00af qF j\u00af kg\u00af kj +2e\u2212uRe (upf\u00af q) F j\u00af kg\u00af kj \u2212f\u00af qpe\u2212uF j\u00af kg\u00af kj + 2n\u03b1 \u0010 F j\u00af kR\u00af qj\u00af k \u00af au\u00af ap \u2212F j\u00af kR\u00af qp\u00af k \u00af au\u00af aj \u0011 = \u2202p\u2202\u00af qF \u2212Dp(Tr h)D\u00af q(Tr h) + Djhj pD\u00af kh\u00af q \u00af k + 2n\u03b1 \u0010 F j\u00af kR\u00af qj\u00af k \u00af au\u00af ap \u2212F j\u00af kR\u00af qp\u00af k \u00af au\u00af aj \u0011 + n \u2212(eu \u2212fe\u2212u)DpD\u00af qu \u2212(eu + fe\u2212u)DpuD\u00af qu + 2e\u2212uRe(upf\u00af q) \u2212e\u2212uDpD\u00af qf o (n \u22121)Tr h. All these formulas are quite general. For the speci\ufb01c Fu-Yau equation, we can substitute the right hand side of equation (2.7) for F = \u03c32(g\u2032), as we shall do in sections \u00a74 and \u00a77. 3 The C0 Estimate The following C0 estimate holds: Theorem 1 Let (X, \u03c9) be a compact K\u00a8 ahler manifold of dimension n with Vol(X, \u03c9) = 1. Let u be a solution of (2.7) under ellipticity condition (2.3) and normalization condition (2.8). 
Then, for A < 1, there exists a constant C0 depending only on (X, \u03c9), \u03b1, \u2225f\u2225C2, and \u2225\u00b5\u2225L\u221esuch that e\u2212inf u \u2264C0A. (3.1) Furthermore, if A is chosen small enough such that C0A < 1, then there is a constant C1 depending also only on (X, \u03c9), \u03b1, \u2225f\u2225C2, and \u2225\u00b5\u2225L\u221esuch that esup u \u2264C1A\u22121. (3.2) Proof. We proceed by Moser iteration. First, we de\ufb01ne the Hermitian form corresponding to \u02dc g\u00af ji: \u02dc \u03c9 = (n \u22121)(eu + fe\u2212u)\u03c9 + 2n\u03b1((\u2206u)\u03c9 \u2212i\u2202\u00af \u2202u) > 0. (3.3) Let k \u22652. The starting point is to compute the quantity Z X i\u2202\u00af \u2202(e\u2212ku) \u2227\u02dc \u03c9 \u2227\u03c9n\u22122 (3.4) in two di\ufb00erent ways. On one hand, by the de\ufb01nition of \u02dc \u03c9 and Stokes\u2019 theorem, we have Z X i\u2202\u00af \u2202(e\u2212ku)\u2227\u02dc \u03c9\u2227\u03c9n\u22122 = Z X(n\u22121)(eu+fe\u2212u)i\u2202\u00af \u2202(e\u2212ku)\u2227\u03c9n\u22121+2n\u03b1 Z X(\u2206u)i\u2202\u00af \u2202(e\u2212ku)\u2227\u03c9n\u22121. (3.5) 6 \fUsing the volume form \u03c9n n! , we compute 1 (n \u22121)! Z X i\u2202\u00af \u2202(e\u2212ku) \u2227\u02dc \u03c9 \u2227\u03c9n\u22122 (3.6) = Z X(n \u22121)(eu + fe\u2212u)\u2206(e\u2212ku) + 2n\u03b1 Z X(\u2206u)\u2206(e\u2212ku) = k2(n \u22121) Z X(eu + fe\u2212u)e\u2212ku|Du|2 \u2212k(n \u22121) Z X(eu + fe\u2212u)e\u2212ku\u2206u +2k2n\u03b1 Z X e\u2212ku\u2206u|Du|2 \u22122kn\u03b1 Z X e\u2212ku(\u2206u)2. On the other hand, using equation (2.2), we obtain Z X i\u2202\u00af \u2202(e\u2212ku) \u2227\u02dc \u03c9 \u2227\u03c9n\u22122 (3.7) = k2 Z X e\u2212kui\u2202u \u2227\u00af \u2202u \u2227\u02dc \u03c9 \u2227\u03c9n\u22122 \u2212k Z X(n \u22121)e\u2212ku(eu + fe\u2212u)i\u2202\u00af \u2202u \u2227\u03c9n\u22121 \u22122kn\u03b1 Z X e\u2212ku\u2206u i\u2202\u00af \u2202u \u2227\u03c9n\u22121 + 2kn\u03b1 Z X e\u2212kui\u2202\u00af \u2202u \u2227i\u2202\u00af \u2202u \u2227\u03c9n\u22122 = k2 Z X e\u2212kui\u2202u \u2227\u00af \u2202u \u2227\u02dc \u03c9 \u2227\u03c9n\u22122 \u2212k Z X(n \u22121)e\u2212ku(eu + fe\u2212u)i\u2202\u00af \u2202u \u2227\u03c9n\u22121 \u22122kn\u03b1 Z X e\u2212ku\u2206u i\u2202\u00af \u2202u \u2227\u03c9n\u22121 \u22122k(n \u22122)! Z X e\u2212ku\u00b5\u03c9n n! \u22122k Z X e\u2212kui\u2202\u00af \u2202(eu \u2212fe\u2212u) \u2227\u03c9n\u22121. Expanding out terms and using the de\ufb01nition of \u02dc \u03c9 yields 1 (n \u22121)! Z X i\u2202\u00af \u2202(e\u2212ku) \u2227\u02dc \u03c9 \u2227\u03c9n\u22122 (3.8) = k2(n \u22121) Z X e\u2212ku(eu + fe\u2212u)|Du|2 + k2(2n\u03b1) Z X e\u2212ku\u2206u|Du|2 \u2212k2(2n\u03b1) (n \u22121)! Z X e\u2212kui\u2202u \u2227\u00af \u2202u \u2227i\u2202\u00af \u2202u \u2227\u03c9n\u22122 \u2212k(n \u22121) Z X e\u2212ku(eu + fe\u2212u)\u2206u \u22122kn\u03b1 Z X e\u2212ku(\u2206u)2 \u2212 2k n \u22121 Z X e\u2212ku\u00b5 \u22122k Z X e\u2212(k\u22121)u(|Du|2 + \u2206u) \u22122k Z X e\u2212(k+1)uf\u2206u +2k Z X e\u2212(k+1)u\u2206f + 2k Z X e\u2212(k+1)uf|Du|2 \u22124k Z X e\u2212(k+1)uRe(gj\u00af kfju\u00af k). We now equate (3.6) and (3.8) and cancel repeating terms. 0 = \u2212 kn\u03b1 (n \u22121)! 
Z X e\u2212kui\u2202u \u2227\u00af \u2202u \u2227i\u2202\u00af \u2202u \u2227\u03c9n\u22122 \u2212 Z X e\u2212(k\u22121)u|Du|2 (3.9) \u2212 1 n \u22121 Z X e\u2212ku\u00b5 \u2212 Z X e\u2212(k\u22121)u\u2206u + Z X e\u2212(k+1)u\u2206f + Z X e\u2212(k+1)uf|Du|2 \u22122 Z X e\u2212(k+1)uRe(gj\u00af kfju\u00af k) \u2212 Z X e\u2212(k+1)uf\u2206u. 7 \fIntegration by parts gives 0 = \u2212 kn\u03b1 (n \u22121)! Z X e\u2212kui\u2202u \u2227\u00af \u2202u \u2227i\u2202\u00af \u2202u \u2227\u03c9n\u22122 \u2212 1 n \u22121 Z X e\u2212ku\u00b5 \u2212k Z X e\u2212(k\u22121)u|Du|2 + Z X e\u2212(k+1)u\u2206f \u2212k Z X e\u2212(k+1)uf|Du|2 \u2212 Z X e\u2212(k+1)ugj\u00af kfju\u00af k. (3.10) One more integration by parts yields the following identity: k Z X e\u2212ku|Du|2(eu + fe\u2212u) (3.11) = \u2212 kn\u03b1 (n \u22121)! Z X e\u2212kui\u2202u \u2227\u00af \u2202u \u2227i\u2202\u00af \u2202u \u2227\u03c9n\u22122 \u2212 1 n \u22121 Z X e\u2212ku\u00b5 + (1 \u2212 1 k + 1) Z X e\u2212(k+1)u\u2206f. We now estimate the \ufb01rst term on the right hand side. At a point p \u2208X, choose coordinates such that g\u00af kj = \u03b4kj and u\u00af kj is diagonal. From the condition \u02dc g > 0, we see that \u02dc g\u00af kk = (n \u22121)(eu + fe\u2212u) + 2n\u03b1(\u2206u \u2212u\u00af kk) > 0 at p. We compute i\u2202u \u2227\u00af \u2202u \u2227i\u2202\u00af \u2202u \u2227\u03c9n\u22122 = (n \u22122)! X i |ui|2(\u2206u \u2212u\u00af ii)\u03c9n n! > \u2212(n \u22121)! 2n\u03b1 |Du|2(eu + fe\u2212u)\u03c9n n! . (3.12) Using this inequality in (3.11), we obtain k 2 Z X e\u2212ku|Du|2(eu + fe\u2212u) \u2264 \u2212 1 n \u22121 Z X e\u2212ku\u00b5 + \u0012 1 \u2212 1 k + 1 \u0013 Z X e\u2212(k+1)u\u2206f. (3.13) Since f \u22650, we can deduce the following estimate: k Z X e\u2212(k\u22121)u|Du|2 \u2264C \u0012Z X e\u2212ku + Z X e\u2212(k+1)u \u0013 . (3.14) Therefore, for k \u22651, we have Z X |De\u2212k 2 u|2 \u2264Ck \u0012Z X e\u2212(k+1)u + Z X e\u2212(k+2)u \u0013 . (3.15) To obtain a C0 estimate, we use the method of Moser iteration as done in [7]. We set \u03b2 = n (n\u22121). The Sobolev inequality gives us \u0012Z X |e\u2212k 2 u|2\u03b2 \u0013 1 \u03b2 \u2264C \u0012Z X |e\u2212k 2 u|2 + Z X |De\u2212k 2 u|2 \u0013 . (3.16) Combining the Sobolev inequality with (3.15) yields \u0012Z X e\u2212k\u03b2u \u0013 1 \u03b2 \u2264Ck \u0012Z X e\u2212ku + Z X e\u2212(k+1)u + Z X e\u2212(k+2)u \u0013 . (3.17) 8 \fApplying H\u00a8 older\u2019s inequality, we get \u0012Z X e\u2212k\u03b2u \u0013 1 \u03b2 \u2264Ck \u001a \u0012Z X e\u2212(k+2)u \u0013 k k+2 + \u0012Z X e\u2212(k+2)u \u0013 k k+1 + Z X e\u2212(k+2)u \u001b . (3.18) For this inequality to be useful, we need to take k large enough so that k\u03b2 \u2265k + 2. In order to proceed with the iteration, we consider two cases. Case 1: For all k \u2265\u03b3 = 4 \u03b2\u22121, we have R X e\u2212ku \u22641. In this case, for each k \u2265\u03b3, (3.18) gives us \u0012Z X e\u2212k\u03b2u \u0013 1 \u03b2 \u2264Ck \u0012Z X e\u2212(k+2)u \u0013 k k+2 . (3.19) Using H\u00a8 older\u2019s inequality, we also have Z X e\u2212(k+2)u = Z X \u0010 e\u2212u\u0011 \u03b2\u03b3 2 \u0010 e\u2212u\u0011 2k\u2212\u03b3 2 \u2264 \u0012Z X e\u2212k\u03b2u \u0013 \u03b3 2k \u0012Z X e\u2212ku \u00131\u2212\u03b3 2k . (3.20) Therefore \u0012Z X e\u2212k\u03b2u \u0013 1 \u03b2 \u2264Ck \u0012Z X e\u2212k\u03b2u \u0013 \u03b3 2(k+2) \u0012Z X e\u2212ku \u0013 2k\u2212\u03b3 2(k+2) . 
(3.21) By regrouping and using the identity \u03b3\u03b2 = 4 + \u03b3, we obtain \u0012Z X e\u2212k\u03b2u \u0013 1 \u03b2 \u2264(Ck) 2(k+2) 2k\u2212\u03b3 Z X e\u2212ku. (3.22) Since k \u2265\u03b3 \u22654, we have 2(k+2) 2k\u2212\u03b3 \u22642(k+2) k \u22643. Thus \u2225e\u2212u\u2225Lk\u03b2 \u2264(Ck)3/k\u2225e\u2212u\u2225Lk, (3.23) for k \u2265\u03b3. We iterate this estimate and conclude e\u2212inf u = \u2225e\u2212u\u2225L\u221e\u2264C\u2225e\u2212u\u2225L\u03b3 = CA. (3.24) Case 2: There exists a k0 > \u03b3 such that R X e\u2212k0u > 1. In this case, using Vol(X, \u03c9) = 1 and H\u00a8 older\u2019s inequality, we have R X e\u2212ku > 1 for all k \u2265k0. After possibly increasing k0, we take k \u2265k0 \u2265\u03b3\u03b2 > \u03b3. From (3.18) and (3.20), we have \u0012Z X e\u2212k\u03b2u \u0013 1 \u03b2 \u2264Ck Z X e\u2212(k+2)u \u2264Ck \u0012Z X e\u2212k\u03b2u \u0013 \u03b3 2k \u0012Z X e\u2212ku \u00131\u2212\u03b3 2k . (3.25) After rearranging, we obtain \u2225e\u2212u\u2225Lk\u03b2 \u2264(Ck) 2 2k\u2212\u03b3\u03b2 \u2225e\u2212u\u2225 2k\u2212\u03b3 2k\u2212\u03b3\u03b2 Lk . (3.26) 9 \fSince we assume k \u2265\u03b3\u03b2, we conclude \u2225e\u2212u\u2225Lk\u03b2 \u2264(Ck) 2 k \u2225e\u2212u\u2225 2k\u2212\u03b3 2k\u2212\u03b3\u03b2 Lk . (3.27) We set \u0398(\u03bd) = 1 \u03b2 2k0\u03b2\u03bd \u2212\u03b3 2k0\u03b2\u03bd\u22121 \u2212\u03b3 ! . (3.28) For i, j \u2208N with j \u22650 and i \u2265j, we have i Y \u03bd=j \u0398(\u03bd) = 2k0 \u2212\u03b3 \u03b2i 2k0 \u2212 \u03b3 \u03b2j\u22121 \u2264 2k0 2k0 \u2212\u03b3\u03b2 \u22642. (3.29) Therefore we can iterate our estimate in the following way: \u2225e\u2212u\u2225Lk0\u03b2i+1 \u2264 (Ck0\u03b2i) 2 k0\u03b2i \u2225e\u2212u\u2225\u0398(i) Lk0\u03b2i \u2264 \uf8eb \uf8ed i Y j=0 (Ck0\u03b2j) 2 k0\u03b2j Qi \u03bd=j+1 \u0398(\u03bd) \uf8f6 \uf8f8\u2225e\u2212u\u2225 Qi \u03bd=0 \u0398(\u03bd) Lk0 \u2264 \uf8eb \uf8ed \u221e Y j=0 (Ck0\u03b2j) 4 k0\u03b2j \uf8f6 \uf8f8\u2225e\u2212u\u22252 Lk0 \u2264C\u2225e\u2212u\u22252 Lk0. (3.30) As we let i \u2192\u221e, we have \u2225e\u2212u\u2225L\u221e\u2264C\u2225e\u2212u\u22252 Lk0. (3.31) We would like to estimate \u2225e\u2212u\u2225Lk0 in terms of \u2225e\u2212u\u2225L\u03b3. Starting from (3.18), we can follow either case 1 or case 2, depending on the size of R e\u2212(k+2)u. We then arrive at estimate (3.23) or (3.26): \u2225e\u2212u\u2225Lk0 \u2264C\u2225e\u2212u\u2225 L k0 \u03b2 , or \u2225e\u2212u\u2225Lk0 \u2264C\u2225e\u2212u\u2225r L k0 \u03b2 , for some r > 1. (3.32) By repeating this process \ufb01nitely many times, we can control \u2225e\u2212u\u2225Lk0 \u2264C\u2225e\u2212u\u2225a L\u03b3 for some a \u22651. Since \u2225e\u2212u\u2225L\u03b3 = A < 1, we have e\u2212inf u = \u2225e\u2212u\u2225L\u221e\u2264C\u2225e\u2212u\u22252a L\u03b3 \u2264CA. (3.33) To control the supremum of u, we replace k with \u2212k in (3.11). Then, for k \u0338= 1, k Z X eku|Du|2(eu + fe\u2212u) (3.34) = \u2212 kn\u03b1 (n \u22121)! Z X ekui\u2202u \u2227\u00af \u2202u \u2227i\u2202\u00af \u2202u \u2227\u03c9n\u22122 + 1 n \u22121 Z X eku\u00b5 \u2212(1 \u2212 1 1 \u2212k) Z X e(k\u22121)u\u2206f. Proceeding as before in the case of the in\ufb01mum estimate, we can use (3.12) to derive the following estimate for any k greater than a \ufb01xed number greater than 1 k Z X e(k+1)u|Du|2 \u2264C \u0012Z X eku + Z X e(k\u22121)u \u0013 . (3.35) 10 \fThus for k \u22652\u03b2, we can estimate Z X |De k 2 u|2 \u2264Ck \u0012Z X e(k\u22121)u + Z X e(k\u22122)u \u0013 . 
(3.36) Since e\u2212inf u = CA \u226a1, we can conclude Z X |De k 2 u|2 \u2264Ck Z X eku, (3.37) for k \u22652\u03b2. The Sobolev inequality yields \u0012Z X ek\u03b2u \u00131/\u03b2 \u2264Ck Z X eku. (3.38) By iterating this estimate, we have esup u \u2264C\u2225eu\u2225L2\u03b2. (3.39) To complete the supremum estimate, we need another inequality. Setting k = \u22121 in (3.10), we have Z X e2u|Du|2 = \u2212 n\u03b1 (n \u22121)! Z X eui\u2202u \u2227\u00af \u2202u \u2227i\u2202\u00af \u2202u \u2227\u03c9n\u22122 + 1 n \u22121 Z X eu\u00b5 \u2212 Z X \u2206f \u2212 Z X f|Du|2 + Z X gj\u00af kfju\u00af k. (3.40) We estimate the \ufb01rst term on the RHS by using (3.12), and since eu \u22651 we obtain Z X e2u|Du|2 \u2264C \u0012Z X eu + Z X |u| + 1 \u0013 \u2264C \u0012Z X eu \u0013 . (3.41) Therefore Z X |Deu|2 \u2264C \u0012Z X eu \u0013 . (3.42) Either by using this estimate, or using a scaling argument, one can obtain from (3.39) that esup u \u2264C\u2225eu\u2225L2, (3.43) so the objective now is to control \u2225eu\u2225L2. Consider the set U = {x : eu \u22642 A}. We have A\u03b3 = Z U e\u2212\u03b3u + Z X\\U e\u2212\u03b3u \u2264e\u2212\u03b3 inf u|U| + A\u03b3 2\u03b3 (1 \u2212|U|) \u2264 \u0012 CA\u03b3 \u2212A\u03b3 2\u03b3 \u0013 |U| + A\u03b3 2\u03b3 . Therefore, |U| \u22651 \u22121 2\u03b3 C \u22121 2\u03b3 := \u03b4 > 0. (3.44) 11 \fTo estimate the L2 norm of eu, we follow the argument from Tosatti-Weinkove [25]. Let \u03c8 = eu, and let \u03c8 := R X \u03c8. By the Poincar\u00b4 e inequality and (3.42), \u2225\u03c8 \u2212\u03c8\u2225L2 \u2264\u2225D\u03c8\u2225L2 \u2264C\u2225\u03c8\u22251/2 L1 . (3.45) We compute \u03b4 \u03c8 \u2264 Z U |\u03c8| \u2264 Z U |\u03c8 \u2212\u03c8| + |\u03c8| \u2264 Z X |\u03c8 \u2212\u03c8| + 2 A|U| \u2264C A(1 + \u2225\u03c8 \u2212\u03c8\u2225L1). (3.46) Using the previous estimate and (3.45), it is now easy to obtain an L1 estimate: \u2225\u03c8\u2225L1 \u2264 \u2225\u03c8 \u2212\u03c8\u2225L1 + \u2225\u03c8\u2225L1 \u2264CA\u22121(1 + \u2225\u03c8 \u2212\u03c8\u2225L1) \u2264 CA\u22121(1 + \u2225\u03c8 \u2212\u03c8\u2225L2) \u2264CA\u22121(1 + \u2225\u03c8\u22251/2 L1 ). (3.47) Therefore \u2225\u03c8\u2225L1 is under control, and by (3.45), we can deduce that \u2225\u03c8\u2225L2 \u2264CA\u22121. By (3.43), we have esup u \u2264CA\u22121. (3.48) 4 The C1 Estimate As we mentioned previously, the a priori gradient estimate is easy due to the special structure of the right hand side of equation (2.7). De\ufb01ne the constant \u03bac by \u03bac = n(n \u22121) 2 . (4.1) Our equation (2.7) is e\u22122uF = \u03bac \u22124\u03b1\u03bac \u001a e\u2212u|Du|2 \u2212fe\u22123u|Du|2 + e\u22123u(gi\u00af kfiu\u00af k + gi\u00af kf\u00af kui) \u001b +\u03bace\u22122u \u001a 2f + f 2e\u22122u + 4\u03b1e\u2212u\u2206f \u001b \u22122n\u03b1e\u22122u\u00b5. (4.2) We \ufb01rst estimate 0 < e\u22122uF \u2264\u03bac \u22124\u03b1\u03bace\u2212u|Du|2 \u001a 1 \u2212(\u2225f\u2225\u221e+ 1)e\u22122u \u001b + O(e\u22122u). (4.3) For a choice of A small enough, we can make e\u2212u \u2264CA \u226a1. It follows that e\u2212u|Du|2 \u2264C. (4.4) Theorem 2 Let u be a solution of (2.7) under ellipticity condition (2.3) and normalization condition (2.8). If A is small enough, then there exists a positive constant C depending on (X, g), \u03b1, \u2225f\u2225C2, and \u2225\u00b5\u2225L\u221esuch that e\u2212u|Du|2 \u2264C. 
(4.5) 12 \fWe observe that the present situation is di\ufb00erent from the situation for the standard complex Hessian equation \u03c3k(g\u00af ji + u\u00af ji) = f(z) (4.6) on a compact K\u00a8 ahler manifold (X, g) with 0 < f(x) \u2208C\u221e(X), see [3, 11, 29]. For the standard equation (4.6) with non-degenerate right hand side f(z), one needs to work very hard to get the gradient estimate since the upper bound of the C2 estimate depends on the C1 estimate. Once the gradient estimate is obtained, the non-degeneracy of f(z) together with the C2 upper bound imply the uniform ellipticity of the equation. In our current situation, the structure of equation (1.1) is better in the sense that it automatically gives a C1 upper bound. However, in this case, the C1 estimate is not good enough to give uniform ellipticity. For that purpose, we need to get a uniform positive lower bound for e\u22122uF, which turns out to be equivalent to a sharper C1 upper bound. From this viewpoint, the desired gradient estimate here is much more involved than in the standard case. We will continue to discuss this in \u00a77. 5 The C2 Estimate In this section, we derive the a priori C2 estimate of equation (2.7) under the assumption of a sharp gradient estimate. As previously mentioned, the presence of the gradient of u on the right hand side brings substantial di\ufb03culties. For real Hessian equations, this problem was recently addressed by Guan-Ren-Wang [10] under some assumptions. Here, we adapt some of their ideas to the complex setting. However, there are still some troublesome terms such as |DDu|2 which cannot be handled as in the real case. This is the reason for the sharp gradient estimate assumption in our estimate. Our theorem is the following. Theorem 3 Let u be a solution of (2.7) under ellipticity condition (2.3) and normalization condition (2.8). Suppose that for every 0 < \u03b4 < 1, there exists an 0 < A\u03b4 \u226a1 such that for all 0 < A \u2264A\u03b4, the following bound holds: e\u2212u|Du|2 \u2264\u03b4. (5.1) Then there exists 0 < A0 \u226a1 such that for all 0 < A \u2264A0, there holds 1 C g \u2264\u02dc g \u2264Cg, (5.2) where C is a constant depending on \u2225u\u2225L\u221e, \u2225Du\u2225L\u221e, (X, \u03c9), \u2225f\u2225C2, \u2225\u00b5\u2225C2, \u03b1, A. Let B0, B1 be constants depending on (X, \u03c9), \u2225f\u2225C2, \u2225\u00b5\u2225C2, \u03b1. Recall that we have the following C0 estimates e\u2212u \u2264B0A \u226a1, eu \u2264B1A\u22121. (5.3) 13 \fThe estimate in the assumption (5.1) was obtained by Fu-Yau in [7] when X has dimension n = 2. The Fu-Yau estimate is rederived in \u00a77 and can be found in (7.35). Whether a Fu-Yau type gradient estimate holds for dimension n > 2 is still unknown. From equation (4.2), one can see that such an estimate implies a lower bound for e\u22122uF. For the purpose of the C2 estimate, we shall take e\u22122uF \u22651 2. (5.4) To prove the theorem, it su\ufb03ces to obtain an upper bound on the maximal eigenvalue of g\u2032. The upper and lower bounds of \u02dc g will then follow from the relations between g\u2032 and \u02dc g as discussed in \u00a72. Before proceeding with the C2 estimate, we state a lemma due to Guan-Ren-Wang [10]. Lemma 1 Suppose vij is an endomorphism such that v \u2208\u03932. Then for any tensor A\u00af ij, \u2212 X i\u0338=j A\u00af iiA\u00af jj \u2265\u2212 \f \f \f\u03c3i\u00af i 2 (v)A\u00af ii \f \f \f 2 |\u03c32(v)| . 
(5.5) In particular, \u2212 X i\u0338=j Dkg\u2032 \u00af iiD\u00af kg\u2032 \u00af jj \u2265\u2212|DkF|2 |F| . (5.6) Proof. We reproduce the proof of the Guan-Ren-Wang inequality for completeness. Let H = \u03c32(v) \u03c31(v). Di\ufb00erentiate log H with respect to the (p, p) entry to obtain Hp\u00af p H = \u03c3p\u00af p 2 \u03c32 \u2212\u03c3p\u00af p 1 \u03c31 . (5.7) Di\ufb00erentiate again Hp\u00af p,q\u00af q H = Hp\u00af pHq\u00af q H2 + \u03c3p\u00af p,q\u00af q 2 \u03c32 \u2212\u03c3p\u00af p 2 \u03c3q\u00af q 2 (\u03c32)2 + \u03c3p\u00af p 1 \u03c3q\u00af q 1 (\u03c31)2 . (5.8) Since H is concave, we have 0 \u2265 |Hp\u00af pA\u00af pp|2 H2 + \u03c3p\u00af p,q\u00af q 2 A\u00af ppA\u00af qq \u03c32 \u2212|\u03c3p\u00af p 2 A\u00af pp|2 (\u03c32)2 + |\u03c3p\u00af p 1 A\u00af pp|2 (\u03c31)2 \u2265 \u03c3p\u00af p,q\u00af q 2 A\u00af ppA\u00af qq \u03c32 \u2212 \f \f \f \f \f \u03c3p\u00af p 2 A\u00af pp \u03c32 \f \f \f \f \f 2 . (5.9) This completes the proof of the Guan-Ren-Wang inequality. 14 \fWe now proceed to the proof of the C2 estimate. We shall apply the maximum principle to a function similar to the one used by Hou-Ma-Wu in [11]. Let M > 0 be a large constant to be determined later. Let supX |u| \u2264L. De\ufb01ne \u03c8(t) = M 2n\u03b1 log \u0012 1 + t L \u0013 , (5.10) It follows that M 2n\u03b1L > \u03c8\u2032 > M 4n\u03b1L, 2n\u03b1\u03c8\u2032\u2032 = \u2212|2n\u03b1\u03c8\u2032|2 M . (5.11) For small \u03b4 > 0 to be chosen later, we de\ufb01ne \u03c6(t) = \u2212log (M2 \u2212t), M2 = 17\u03b4B1 A . (5.12) Note that \u03c6(|Du|2) is well-de\ufb01ned by the assumption on gradient estimate (5.1). Indeed, we may choose A0 \u226a1 depending on \u03b4 such that, for any 0 < A \u2264A0, |Du|2 \u2264\u03b4eu \u2264\u03b4B1 A , (5.13) and hence \u03c6\u2032(|Du|2) \u2264 A 16\u03b4B1 . (5.14) Furthermore, we have the lower bound \u03c6\u2032(|Du|2) \u2265 A 17\u03b4B1 \u2265 e\u2212u 17\u03b4B0B1 , (5.15) and the relationship \u03c6\u2032\u2032 = (\u03c6\u2032)2. (5.16) First, consider G0(z, \u03be) = log (g\u2032 \u00af jk\u03bek \u00af \u03bej) \u22122n\u03b1\u03c8(u) + \u03c6(|Du|2), (5.17) for z \u2208X and \u03be \u2208T 1,0 z (X) a unit vector. G0 is not de\ufb01ned everywhere, but we may restrict to the compact set where g\u2032 \u00af jk\u03bek \u00af \u03bej \u22650 and obtain an upper semicontinuous function. Let (p, \u03be0) be the maximum of G0. Choose coordinates centered at p such that g\u00af jk = \u03b4jk and g\u2032 \u00af jk is diagonal. Suppose g\u2032 \u00af 11 is the largest eigenvalue of g\u2032. Then \u03be0(p) = \u22021, and we extend this to a local unit vector \ufb01eld \u03be0 = g\u22121/2 \u00af 11 \u2202 \u2202z1. De\ufb01ne the local function G(z) = log (g\u22121 \u00af 11 g\u2032 \u00af 11) \u22122n\u03b1\u03c8(u) + \u03c6(|Du|2). (5.18) This function G also attains a maximum at p \u2208X. We will compute at the point p. We shall be assuming that g\u2032 \u00af 11(p) \u226b1, otherwise we would already have an upper bound on the maximal eigenvalue of g\u2032 and the C2 estimate would be complete. 15 \fCovariantly di\ufb00erentiating G gives G\u00af j = (eu + fe\u2212u)\u00af j + 2n\u03b1D\u00af jD1D\u00af 1u g\u2032 \u00af 11 + \u03c6\u2032|Du|2 \u00af j \u22122n\u03b1\u03c8\u2032u\u00af j. 
(5.19) Di\ufb00erentiating G a second time and contracting with F i\u00af j yields F i\u00af jG\u00af ji = 2n\u03b1 g\u2032 \u00af 11 F i\u00af jDiD\u00af jD1D\u00af 1u + (eu \u2212fe\u2212u) g\u2032 \u00af 11 F i\u00af ju\u00af ji + (eu + fe\u2212u) g\u2032 \u00af 11 |Du|2 F \u22122e\u2212u g\u2032 \u00af 11 Re(F i\u00af juif\u00af j) + e\u2212u g\u2032 \u00af 11 F i\u00af jf\u00af ji \u2212|Dg\u2032 \u00af 11|2 F (g\u2032 \u00af 11)2 + \u03c6\u2032F i\u00af j|Du|2 \u00af ji + \u03c6\u2032\u2032|D|Du|2|2 F \u22122n\u03b1\u03c8\u2032F i\u00af ju\u00af ji \u22122n\u03b1\u03c8\u2032\u2032|Du|2 F. (5.20) Here we introduced the notation |D\u03c7|2 F = F j\u00af kDj\u03c7D\u00af k\u03c7. (5.21) We will get an estimate for DiD\u00af jD1D\u00af 1u using our formula (2.20). First, notice g\u2032 \u00af 11 \u2264Tr h \u2264ng\u2032 \u00af 11. (5.22) Furthermore, since g\u2032 \u2208\u03932, we can estimate for each k, 2n\u03b1|u\u00af kk| \u2264|g\u2032 \u00af kk| + |eu + fe\u2212u| \u2264C(g\u2032 \u00af 11 + 1). (5.23) Using these inequalities, we may estimate (2.20) in the following way 2n\u03b1F i\u00af jDiD\u00af jD1D\u00af 1u \u2265D1D\u00af 1F \u2212D1(Tr h)D\u00af 1(Tr h) + |D1g\u2032|2 \u2212C(g\u2032 \u00af 11)2 \u2212C. (5.24) We now substitute this inequality into (5.20) to obtain F i\u00af jG\u00af ji \u2265 1 g\u2032 \u00af 11 \u0010 |D1g\u2032|2 \u2212|D1Tr h|2 + D1D\u00af 1F \u0011 + (eu \u2212fe\u2212u) g\u2032 \u00af 11 F i\u00af ju\u00af ji \u2212|Dg\u2032 \u00af 11|2 F (g\u2032 \u00af 11)2 + \u03c6\u2032F i\u00af j|Du|2 \u00af ji +\u03c6\u2032\u2032|D|Du|2|2 F \u22122n\u03b1\u03c8\u2032F i\u00af ju\u00af ji \u22122n\u03b1\u03c8\u2032\u2032|Du|2 F \u2212Cg\u2032 \u00af 11 \u2212C. (5.25) We have the identity 2n\u03b1F i\u00af ju\u00af ji = F i\u00af jg\u2032 \u00af ji \u2212(eu + fe\u2212u)F i\u00af jg\u00af ji = 2F \u2212(eu + fe\u2212u)(n \u22121)Tr h. (5.26) Note that by estimates (5.3), we have that (n \u22121)eu \u22651 for small enough choice of A. Using this fact with (5.22), we obtain 0 \u2265 1 g\u2032 \u00af 11 \u0010 |D1g\u2032|2 \u2212D1Tr hD\u00af 1Tr h + D1D\u00af 1F \u0011 \u2212|Dg\u2032 \u00af 11|2 F (g\u2032 \u00af 11)2 + \u03c6\u2032F i\u00af j|Du|2 \u00af ji +\u03c6\u2032\u2032|D|Du|2|2 F + (\u03c8\u2032 \u2212C)g\u2032 \u00af 11 \u22122n\u03b1\u03c8\u2032\u2032|Du|2 F \u2212C. (5.27) 16 \fWe now compute the term involving \u03c6\u2032. By (2.18), we have 2n\u03b1F i\u00af j|Du|2 \u00af ji \u2265 \u22122|Du||DF| \u2212Cg\u2032 \u00af 11 + 2n\u03b1(|D \u00af Du|2 F g + |DDu|2 F g) \u2212C. (5.28) Therefore 0 \u2265 1 g\u2032 \u00af 11 \u001a |D1g\u2032|2 \u2212D1Tr hD\u00af 1Tr h + D1D\u00af 1F \u001b \u2212|Dg\u2032 \u00af 11|2 F (g\u2032 \u00af 11)2 + \u03c6\u2032|D \u00af Du|2 F g + \u03c6\u2032|DDu|2 F g +\u03c6\u2032\u2032|D|Du|2|2 F \u2212\u03c6\u2032 n\u03b1|Du||DF| + (\u03c8\u2032 \u2212C\u03c6\u2032 \u2212C)g\u2032 \u00af 11 \u22122n\u03b1\u03c8\u2032\u2032|Du|2 F \u2212C. (5.29) De\ufb01ne \u03c4 = 1 1 + M . 
(5.30) Using (5.16), DG(p) = 0, and (5.11), \u03c6\u2032\u2032|D|Du|2|2 F = X i F i\u00af i|\u03c6\u2032|Du|2 i|2 = X i F i\u00af i \f \f \f \f \f\u2212Dig\u2032 \u00af 11 g\u2032 \u00af 11 + 2n\u03b1\u03c8\u2032ui \f \f \f \f \f 2 \u2265 \u03c4 X i F i\u00af i \f \f \f \f \f Dig\u2032 \u00af 11 g\u2032 \u00af 11 \f \f \f \f \f 2 \u2212 \u03c4 1 \u2212\u03c4 X i F i\u00af i |2n\u03b1\u03c8\u2032ui|2 = \u03c4 X i F i\u00af i \f \f \f \f \f Dig\u2032 \u00af 11 g\u2032 \u00af 11 \f \f \f \f \f 2 + \u03c4M 1 \u2212\u03c4 2n\u03b1\u03c8\u2032\u2032 X i F i\u00af i|ui|2 = \u03c4 X i F i\u00af i \f \f \f \f \f Dig\u2032 \u00af 11 g\u2032 \u00af 11 \f \f \f \f \f 2 + 2n\u03b1\u03c8\u2032\u2032 X i F i\u00af i|ui|2. (5.31) Thus 0 \u2265 \u03c6\u2032 2 |D \u00af Du|2 F g + \u03c6\u2032 2 |DDu|2 F g + 1 g\u2032 \u00af 11 D1D\u00af 1F \u2212\u03c6\u2032 n\u03b1|DF||Du| + 1 g\u2032 \u00af 11 \u001a |D1g\u2032|2 \u2212D1Tr hD\u00af 1Tr h \u001b \u2212(1 \u2212\u03c4)|Dg\u2032 \u00af 11|2 F (g\u2032 \u00af 11)2 +\u03c6\u2032 2 |D \u00af Du|2 F g + \u03c6\u2032 2 |DDu|2 F g + \u001a M 4n\u03b1L \u2212C\u03c6\u2032 \u2212C \u001b g\u2032 \u00af 11 \u2212C. (5.32) Computing in coordinates and applying the Guan-Ren-Wang inequality (Lemma 1) yields \u0010 |D1g\u2032|2 \u2212D1Tr hD\u00af 1Tr h \u0011 = X i\u0338=j |D1g\u2032 \u00af ij|2 \u2212 X i\u0338=j D1g\u2032 \u00af iiD\u00af 1g\u2032 \u00af jj \u2265 X i\u0338=j |D1g\u2032 \u00af ij|2 \u2212|D1F|2 F . Using the de\ufb01nition of g\u2032, we obtain X i\u0338=j |D1g\u2032 \u00af ij|2 \u2265 X j>1 |D1g\u2032 \u00af 1j|2 = X j>1 |Djg\u2032 \u00af 11 \u2212(eu + fe\u2212u)j|2 \u2265 \u0012 1 \u2212\u03c4 2 \u0013 X j>1 |Djg\u2032 \u00af 11|2 \u2212CM (5.33) 17 \fwhere the last constant CM depends on \u03c4, and hence on M. We therefore arrive at 0 \u2265 \u03c6\u2032 2 |D \u00af Du|2 F g + \u03c6\u2032 2 |DDu|2 F g + 1 g\u2032 \u00af 11 D1D\u00af 1F \u2212\u03c6\u2032 n\u03b1|DF||Du| + \u0012 1 \u2212\u03c4 2 \u0013 1 g\u2032 \u00af 11 X j>1 |Djg\u2032 \u00af 11|2 \u2212(1 \u2212\u03c4)|Dg\u2032 \u00af 11|2 F (g\u2032 \u00af 11)2 + \u03c6\u2032 2 |D \u00af Du|2 F g + \u03c6\u2032 2 |DDu|2 F g \u2212|D1F|2 g\u2032 \u00af 11F + \u001a M 4n\u03b1L \u2212C\u03c6\u2032 \u2212C \u001b g\u2032 \u00af 11 \u2212CM. (5.34) At this point, it will be important to distinguish constants which depend on A from those that do not. Let B denote a constant depending on (X, g), \u2225f\u2225C2, \u2225\u00b5\u2225C2, \u03b1. As before, we use C to denote a constant depending on (X, g), \u2225u\u2225\u221e, \u2225Du\u2225\u221e, \u2225f\u2225C2, \u2225\u00b5\u2225C2, \u03b1 and use CM to denote the constants which may also depend on M. We now state two lemmas. Lemma 2 Under the non-degeneracy assumption (5.4) and C0 estimate (5.3), there holds |DF| \u2264B(eu|Du| + e\u2212u) \u0010 |DDu| + |D \u00af Du| \u0011 + C. (5.35) |DF|2 |F| \u2264Beu \u0010 |DDu|2 + |D \u00af Du|2\u0011 + C. (5.36) |D1D\u00af 1F| \u2264Beu \u0010 |DDu|2 + |D \u00af Du|2 + eu|Du\u00af 11| \u0011 + C. (5.37) Lemma 3 Let p \u2208X be a point where G attains a maximum. Assuming g\u2032 \u00af 11(p) \u226b1 is large enough, then at p we have 1 \u2212\u03c4 2 g\u2032 \u00af 11 X j>1 |Djg\u2032 \u00af 11|2 \u2212(1 \u2212\u03c4)|Dg\u2032 \u00af 11|2 F (g\u2032 \u00af 11)2 + \u03c6\u2032 2 |D \u00af Du|2 F g + \u03c6\u2032 2 |DDu|2 F g \u22650. (5.38) Assuming these lemmas, we shall now prove the C2 estimate. 
We may assume g\u2032 \u00af 11 \u226b1 is large at the point p \u2208X, otherwise we already have the desired estimate. Applying both lemmas to (5.34), we have 0 \u2265 \u03c6\u2032 2 |D \u00af Du|2 F g + \u03c6\u2032 2 |DDu|2 F g \u2212Beu g\u2032 \u00af 11 \u001a |DDu|2 + |D \u00af Du|2 + eu|Du\u00af 11| \u001b \u2212\u03c6\u2032 n\u03b1|DF| |Du| + \u001a M 4n\u03b1L \u2212C(1 + \u03c6\u2032) \u001b g\u2032 \u00af 11 \u2212CM. (5.39) Using DG(p) = 0 (5.19), we may estimate |Du\u00af 11| \u2264 C + |Du|g\u2032 \u00af 11 \u001a \u03c8\u2032 + \u03c6\u2032 2n\u03b1(|DDu| + |D \u00af Du|) \u001b \u2264 C + \u0010 e\u2212u 2 (|DDu| + |D \u00af Du|) \u0011 e u 2 |Du|g\u2032 \u00af 11 \u03c6\u2032 2n\u03b1 ! + |Du|\u03c8\u2032g\u2032 \u00af 11 (5.40) \u2264 C + e\u2212u \u0010 |DDu|2 + |D \u00af Du|2\u0011 + (\u03c6\u2032)2 (2n\u03b1)2 eu|Du|2 2 (g\u2032 \u00af 11)2 + |Du|\u03c8\u2032g\u2032 \u00af 11. 18 \fUsing the estimate (5.40), the estimate (5.11) for \u03c8\u2032, and F i\u00af i \u2265F 1\u00af 1, (5.39) becomes 0 \u2265 1 g\u2032 \u00af 11 \u001a\u03c6\u2032 2 g\u2032 \u00af 11F 1\u00af 1 \u2212Beu \u001b \u0010 |DDu|2 + |D \u00af Du|2\u0011 \u2212\u03c6\u2032 n\u03b1|DF| |Du| + \u001a M 4n\u03b1L \u2212C(1 + \u03c6\u2032 + (\u03c6\u2032)2) \u001b g\u2032 \u00af 11 \u2212CM. (5.41) We shall show that for small enough A, we can ensure \u03c6\u2032 2 g\u2032 \u00af 11F 1\u00af 1 \u2212Beu \u22651. (5.42) Indeed, this follows from the basic fact that g\u2032 \u00af 11F 1\u00af 1 \u2265c(n)F. Note that g\u2032 \u00af 11 is the largest eigenvalue and hence g\u2032 \u00af 11 \u2265 1 n\u22121\u03c31(\u03bb\u2032|1). We use the notation \u03c3k(\u03bb\u2032|j) for the k-th symmetric function of (\u03bb\u2032|j) = (\u03bb\u2032 1, \u00b7 \u00b7 \u00b7, c \u03bb\u2032 j, \u00b7 \u00b7 \u00b7 , \u03bb\u2032 n) \u2208Rn\u22121. For example, \u03c31(\u03bb\u2032|1) = P i\u0338=1 \u03bb\u2032 i = F 1\u00af 1. This implies \u03c32(\u03bb\u2032) = \u03c31(\u03bb\u2032|1)\u03bb1 + \u03c32(\u03bb\u2032|1) \u2264 \u03c31(\u03bb\u2032|1)\u03bb\u2032 1 + n \u22122 2(n \u22121)\u03c32 1(\u03bb\u2032|1) \u2264 \u03c31(\u03bb\u2032|1)\u03bb\u2032 1 + n \u22122 2 \u03c31(\u03bb\u2032|1)\u03bb\u2032 1, which gives the desired estimate g\u2032 \u00af 11F 1\u00af 1 \u22652 nF. Therefore, using (5.4) and (5.15) g\u2032 \u00af 11 \u03c6\u2032 2 F 1\u00af 1 \u2212Beu \u2265 \u03c6\u2032e2u n (Fe\u22122u) \u2212Beu \u2265eu 1 2n(17\u03b4B0B1) \u2212B ! \u22651, (5.43) when our parameter A0 is chosen such that \u03b4 is su\ufb03ciently small and eu su\ufb03ciently large. The estimate is possible because the B0, B1, B are independent of A. Now that our normalization A \u2264A0 has been chosen, we recall the bounds (5.14) and (5.15) for \u03c6\u2032, and set M1 = M 4n\u03b1L \u2212C(1 + A 16\u03b4B1 + ( A 16\u03b4B1 )2), which is positive for M large enough. The inequality (5.41) implies 0 \u2265 1 g\u2032 \u00af 11 (|DDu|2 + |D \u00af Du|2) + M1g\u2032 \u00af 11 \u2212\u03c6\u2032 n\u03b1|DF| |Du| \u2212CM \u2265 M 1 2 1 (|DDu|2 + |D \u00af Du|2) 1 2 + 3 4M1g\u2032 \u00af 11 \u2212\u03c6\u2032 n\u03b1|DF| |Du| \u2212CM \u2265 3 4M1g\u2032 \u00af 11 \u2212CM In the last inequality, we made use of the estimate (5.35) for |DF|, and chose M large enough. Thus we have established g\u2032 \u00af 11 \u2264C. (5.44) This completes the proof of Theorem 3. 19 \fProof of Lemma 2. Using the de\ufb01nition (2.7) of F = \u03c32(g\u2032), we shall estimate DF and D1D\u00af 1F. 
The expression (2.7) shows that F is a linear combination of the expressions eau, e\u00b1u|Du|2, e\u2212uDu, e\u2212u \u00af Du, with coe\ufb03cients given by smooth functions whose derivatives of any \ufb01xed order can be bounded by constants C. The constant a can take the values 0, \u00b11, 2. Thus DF can be bounded by a linear combination of the above expressions and their derivatives. The expressions can themselves be bounded by constants C, while D(eau) can be bounded by constants C, and D(e\u00b1u|Du|2) = e\u00b1u(DDu \u00b7 \u00af Du + Du \u00b7 D \u00af Du) \u00b1 e\u00b1u(Du)|Du|2, D(e\u2212uDu) = e\u2212uDDu \u2212e\u2212uDu Du, D(e\u2212u \u00af Du) = e\u2212uD \u00af Du \u2212e\u2212uDu \u00af Du. All the last terms on the right hand side of each of the above three equations can be bounded by C. The estimate (5.35) for |DF| follows. Next, we turn to D1D\u00af 1F. For this, we view D\u00af 1F as a linear combination of the above expressions and their derivatives, and apply D1. In this process, we can ignore all terms bounded by expressions of the form epu|Du|q(|DDu| + |D \u00af Du|) + eru + |Du|s for some p, q, r, s \u22650 since they can all be absorbed into eu(|DDu|2 + |D \u00af Du|2) + C (recall that eu > 1 by the assumption (5.1)). Examples of such terms are the bounds obtained in (5.35) for |DF|. Thus, when the derivative D1 lands on the coe\ufb03cients of the linear combination giving D\u00af 1F, we obtain only expressions that can be bounded by the right hand side of (5.35) and can be ignored. This means that, to establish the bound (5.37), it su\ufb03ces to consider the expressions D1D\u00af 1(eau), D1D\u00af 1(e\u00b1u|Du|2), D1D\u00af 1(e\u2212uDu), and D1D\u00af 1(e\u2212u \u00af Du). Modulo O(epu|Du|q(|DDu| + |D \u00af Du|) + eru + |Du|s), we can write D1D\u00af 1(eau) = 0, and D1D\u00af 1(e\u2212u \u00af Du) = e\u2212uD1D\u00af 1 \u00af Du, D1D\u00af 1(e\u2212uDu) = e\u2212uD1D\u00af 1Du which can clearly be bounded by the right hand side of (5.37). Similarly, D1D\u00af 1(e\u00b1u|Du|2) = e\u00b1uD1(D\u00af 1Du \u00b7 \u00af Du + Du \u00b7 D\u00af 1 \u00af Du) = e\u00b1u(D1D\u00af 1Du \u00b7 \u00af Du + Du \u00b7 D1D\u00af 1 \u00af Du) +e\u00b1u(D\u00af 1Du \u00b7 D1 \u00af Du + D1Du \u00b7 D\u00af 1 \u00af Du). It follows that |D1D\u00af 1(e\u00b1u|Du|2)| \u2264eu(|D \u00af Du|2 + |DDu|2) + eu|Du| |Du1\u00af 1|. (5.45) 20 \fWe note that by |Du|2 \u2264eu (5.1), we may estimate |Du| by eu since eu \u22651. Thus all the terms in D1D\u00af 1F can be bounded by the right hand side of (5.37), completing the proof of (5.37). Using the lower bound for F in (5.4) and the fact that (eu|Du| + e\u2212u)2 \u22642(e2u|Du|2 + e\u22122u) \u2264Be3u, in view of the assumption e\u2212u \u22641 and |Du|2 \u2264eu (5.1), we have |DF|2 F \u2264 Be3u F \u0010 |DDu|2 + |D \u00af Du|2\u0011 + C \u2264Beu \u0010 |DDu|2 + |D \u00af Du|2\u0011 + C. Q.E.D. Proof of Lemma 3. This argument will adapt the proof of Proposition 9 in Guan-RenWang [10] to the complex setting. Recall that we are working at a point p with DG = 0, g\u00af kj = \u03b4kj and u\u00af kj, g\u2032 diagonal, and g\u2032 \u00af 11 \u2265g\u2032 \u00af 22 \u2265. . . \u2265g\u2032 \u00af nn. The \ufb01rst step is the following computation, for a \u2208{1, 2, . . . , n} \ufb01xed. \u2212(1 \u2212\u03c4)F a\u00af a \f \f \f \f \f Dag\u2032 \u00af 11 g\u2032 \u00af 11 \f \f \f \f \f 2 + \u03c6\u2032 2 F a\u00af a |ua\u00af a|2 + X k |uka|2 ! 
= \u2212(1 \u2212\u03c4)F a\u00af a \f \f \f\u03c6\u2032|Du|2 a \u22122n\u03b1\u03c8\u2032ua \f \f \f 2 + \u03c6\u2032 2 F a\u00af a |ua\u00af a|2 + X k |uka|2 ! \u2265 \u22122(1 \u2212\u03c4)F a\u00af a \u0012\f \f \f\u03c6\u2032|Du|2 a \f \f \f 2 + |2n\u03b1\u03c8\u2032ua|2 \u0013 + \u03c6\u2032 2 F a\u00af a |ua\u00af a|2 + X k |uka|2 ! \u2265 F a\u00af a \u001a \u03c6\u2032 \u00b7 \u00121 2 \u22124(1 \u2212\u03c4)\u03c6\u2032|Du|2 \u0013 |ua\u00af a|2 + X k |uka|2 ! \u2212C \u001b . (5.46) By our choice of \u03c6, we have (5.13), (5.14) and (5.15), hence \u03c6\u2032|Du|2 \u22641 16, \u03c6\u2032 > 1 C > 0. (5.47) Thus we have for a \u2208{1, 2, . . . , n}, \u2212(1 \u2212\u03c4)F a\u00af a \f \f \f \f \f Dag\u2032 \u00af 11 g\u2032 \u00af 11 \f \f \f \f \f 2 + \u03c6\u2032 2 F a\u00af a |ua\u00af a|2 + X k |uka|2 ! \u2265F a\u00af a \u00121 4|ua\u00af a|2 \u2212C \u0013 (5.48) If g\u2032 \u00af 11 \u226b1, then u\u00af 11 = 1 2n\u03b1 \u0010 g\u2032 \u00af 11 \u2212(eu + fe\u2212u) \u0011 \u226b1, (5.49) hence by letting a = 1 in (5.48) \u2212(1 \u2212\u03c4)F 1\u00af 1 \f \f \f \f \f D1g\u2032 \u00af 11 g\u2032 \u00af 11 \f \f \f \f \f 2 + \u03c6\u2032 2 F 1\u00af 1 |u1\u00af 1|2 + X k |uk1|2 ! \u22650 (5.50) 21 \ffor g\u2032 \u00af 11 su\ufb03ciently large. Thus to prove the lemma, it needs to be shown that 1 \u2212\u03c4 2 g\u2032 \u00af 11 X i>1 |Dig\u2032 \u00af 11|2 \u2212(1 \u2212\u03c4) X i>1 F i\u00af i \f \f \f \f \f Dig\u2032 \u00af 11 g\u2032 \u00af 11 \f \f \f \f \f 2 + X i>1 \u03c6\u2032 2 F i\u00af i |ui\u00af i|2 + X k |uki|2 ! \u22650. (5.51) To prove this estimate, we proceed by cases. Let 0 < \u03b5 < \u03c4 2(1 \u2212\u03c4). (5.52) Case (A): Pn\u22121 i=2 g\u2032 \u00af ii < \u03b5g\u2032 \u00af 11. In this case, we have F n\u00af n = g\u2032 \u00af 11 + Pn\u22121 i=2 g\u2032 \u00af ii < (1 + \u03b5)g\u2032 \u00af 11. Thus 1 \u2212\u03c4 2 g\u2032 \u00af 11 X i>1 |Dig\u2032 \u00af 11|2 \u2212(1 \u2212\u03c4) X i>1 F i\u00af i \f \f \f \f \f Dig\u2032 \u00af 11 g\u2032 \u00af 11 \f \f \f \f \f 2 \u2265 1 \u2212\u03c4 2 g\u2032 \u00af 11 X i>1 |Dig\u2032 \u00af 11|2 \u2212(1 \u2212\u03c4) X i>1 F n\u00af n \f \f \f \f \f Dig\u2032 \u00af 11 g\u2032 \u00af 11 \f \f \f \f \f 2 \u2265 \u0012 1 \u2212\u03c4 2 \u2212(1 \u2212\u03c4)(1 + \u03b5) \u0013 1 g\u2032 \u00af 11 X i>1 |Dig\u2032 \u00af 11|2, which is nonnegative by the choice of \u03b5. This proves (5.51). Case (B): Pn\u22121 i=2 g\u2032 \u00af ii \u2265\u03b5g\u2032 \u00af 11. In this case, we have g\u2032 \u00af 22 \u2265 \u03b5 n\u22122g\u2032 \u00af 11. For g\u2032 \u00af 11 large enough, u\u00af 22 = 1 2n\u03b1 \u0010 g\u2032 \u00af 22 \u2212(eu + fe\u2212u) \u0011 \u2265 1 2n\u03b1 \u03b5 n \u22122g\u2032 \u00af 11 \u2212C \u2265 \u03b5 4n(n \u22122)\u03b1g\u2032 \u00af 11. (5.53) We divide case (B) into subcases. Case (B1): F 2\u00af 2 \u22651. By (5.48) \u2212(1 \u2212\u03c4) X i>1 F i\u00af i \f \f \f \f \f Dig\u2032 \u00af 11 g\u2032 \u00af 11 \f \f \f \f \f 2 + X i>1 \u03c6\u2032 2 F i\u00af i |ui\u00af i|2 + X k |uki|2 ! \u2265 F 2\u00af 2 \u00121 4|u2\u00af 2|2 \u2212C \u0013 + X i>2 F i\u00af i \u00121 4|ui\u00af i|2 \u2212C \u0013 \u2265 \u00121 4|u2\u00af 2|2 \u2212C \u0013 \u2212C X i>2 F i\u00af i \u2265 \u03b52 43(n(n \u22122)\u03b1)2(g\u2032 \u00af 11)2 \u2212Cg\u2032 \u00af 11 \u2212C \u22650. (5.54) Case (B2): F 2\u00af 2 < 1. 
In this case, \u2212g\u2032 \u00af nn > \u22121 + (g\u2032 \u00af 11 \u2212g\u2032 \u00af 22) + n\u22121 X i=2 g\u2032 \u00af ii \u2265\u22121 + \u03b5g\u2032 \u00af 11 \u2265\u03b5 2g\u2032 \u00af 11. (5.55) \u2212u\u00af nn = 1 2n\u03b1 \u0010 \u2212g\u2032 \u00af nn + (eu + fe\u2212u) \u0011 \u2265 \u03b5 4n\u03b1g\u2032 \u00af 11. (5.56) 22 \fNote that the assumption of case (B) implies F n\u00af n \u2265(1 + \u03b5)g\u2032 1\u00af 1. Another computation using (5.48) yields \u2212(1 \u2212\u03c4) X i>1 F i\u00af i \f \f \f \f \f Dig\u2032 \u00af 11 g\u2032 \u00af 11 \f \f \f \f \f 2 + X i>1 \u03c6\u2032 2 F i\u00af i |ui\u00af i|2 + X k |uki|2 ! \u2265 F n\u00af n \u00121 4|un\u00af n|2 \u2212C \u0013 + n\u22121 X i=2 F i\u00af i \u00121 4|ui\u00af i|2 \u2212C \u0013 \u2265 F n\u00af n \u03b52 43(n\u03b1)2|g\u2032 1\u00af 1|2 \u2212C ! \u2212Cg\u2032 1\u00af 1 \u2265 (1 + \u01eb)g\u2032 1\u00af 1 \u03b52 43(n\u03b1)2|g\u2032 1\u00af 1|2 \u2212C ! \u2212Cg\u2032 1\u00af 1 \u22650. (5.57) This establishes (5.51), and thus proves Lemma 3. Q.E.D. 6 The C2,\u03b7 Estimate At this point, we have shown the a priori C2 estimates (5.2) for equation (2.7), under the assumption of a sharp C1 upper bound (5.1). This C2 estimate implies that the equation is uniformly elliptic and that it is also a concave operator. We would like to apply the Evans-Krylov theorem [6, 13, 19] to show the C2,\u03b7 bound. However, we cannot apply the standard theorem directly. In fact, equation (2.7) is of the following form \u03c32 \u0010 \u03c7\u00af jk(z, u) + u\u00af jk \u0011 = \u03d5(z, u, Du). By the a priori C2 estimate, we have uniform bounds for the complex Hessian \u2202\u00af \u2202u and hence for \u2206u. This implies that u \u2208C1,\u03b8 for some \u03b8 \u2208(0, 1). Therefore, the function \u03d5(z, u, Du) = right hand side of equation (2.7) is only C\u03b8 even if f and \u00b5 are smooth on X. Thus, the standard Evans-Krylov theorem is not directly applicable as it requires a C1,1 bound for \u03d5, which depends on the C3 norm of u in our case. The C2,\u03b7 regularity for the complex Monge-Amp` ere equations with only H\u00a8 older continuous right hand side was obtained by Dinew-Zhang-Zhang [5] for u \u2208C1,1. The assumption on u was weaken to be \u2206u \u2208L\u221eby Wang [26] and it was later extended to more general settings by Tosatti-Wang-Weinkove-Yang [24]. Indeed, our setup here \ufb01ts well into the general picture in [24] (Theorem 1.1 for equation (1.4) in [24]). We note that our \u03c7\u00af jk = (eu + fe\u2212u) g\u00af jk \u2208C1,\u03b8 and \u03d5 \u2208C\u03b8. And thus we can apply their main result to conclude the following C2,\u03b7 bound for u. We refer the reader to [24] for details. Theorem 4 Let u \u2208C2(X) be a solution to (2.7) with normalization condition (2.8). Then, there exist positive constants 0 < \u03b7 < 1 and C depending on n, (X, g), \u2225u\u2225L\u221e, \u2225Du\u2225L\u221e, \u2225\u2206u\u2225L\u221e, \u2225f\u2225C3, \u2225\u00b5\u2225C1 and \u03b1 such that \u2225u\u2225C2,\u03b7(X) \u2264C. (6.1) 23 \f7 Non-Degeneracy and Sharp Gradient Bounds In order to solve equation (2.7) subject to normalization condition (2.8), one can use the method of continuity. This can be done by introducing the parameter t, and replacing f by tf and \u00b5 by t\u00b5. 
e\u22122uF = \u03bac{1 \u22124\u03b1e\u2212u|Du|2} + 4\u03b1\u03bac \u001a tfe\u22123u|Du|2 \u2212te\u22123u(gi\u00af kfiu\u00af k + gi\u00af kf\u00af kui) \u001b +\u03bacte\u22122u \u001a 2f + f 2e\u22122u + 4\u03b1e\u2212u\u2206f \u001b \u22122n\u03b1te\u22122u\u00b5. (7.1) We see that when t = 0, the equation admits the trivial solution u = \u2212log A, and the right hand side is equal to \u03bac. The issue addressed in this section is whether the right-hand side can degenerate to zero as t tends to t = 1. For simplicity, we shall suppress the parameter t in our computations and write f instead of tf and \u00b5 instead of t\u00b5. The theorem of Fu-Yau [7] is the following. Theorem 5 (Fu-Yau [7]) Let the dimension of X be equal to n = 2. For any \u03b4 > 0, there exists A\u03b4 > 0 depending on (X, \u03c9), f \u03b1, \u00b5, such that if A < A\u03b4, then for any solution u of the Fu-Yau equation (2.7) with normalized condition (2.8), there holds e\u22122uF \u2265\u03bac \u2212\u03b4. (7.2) In the rest of this section, we investigate the non-degeneracy estimate for the higher dimensional case. As mentioned in the Introduction, we follow the idea of Fu-Yau closely, but we work with general coordinate systems rather than the adapted ones with \u2207u = (u1, 0, \u00b7 \u00b7 \u00b7, 0) used by Fu and Yau. This allows us a simpli\ufb01ed and more transparent derivation of the Fu-Yau results for n = 2, and a clearer picture of why their arguments are not strong enough for higher dimensions. Following Fu-Yau, we apply the maximum principle to the following function G = 1 \u22124\u03b1e\u2212u|Du|2 + 4\u03b1e\u2212\u03b5u \u22124\u03b1e\u2212\u03b5 inf u. (7.3) 7.1 First computation of F j\u00af kDjD\u00af kG We begin by computing F j\u00af kDjD\u00af k(\u22124\u03b1e\u2212u|Du|2). Because we shall ultimately evaluate this expression as a critical point of G, where D(e\u2212u|Du|2) = D(e\u2212\u03b5u) (7.4) it is advantageous to express DjD\u00af k(\u22124\u03b1e\u2212u|Du|2) in terms of D(e\u2212u|Du|2) as much as possible. Thus we write F j\u00af kDjD\u00af k(\u22124\u03b1e\u2212u|Du|2) = 4\u03b1F j\u00af kD\u00af kuDj(e\u2212u|Du|2) + 4\u03b1F j\u00af kDjuD\u00af k(e\u2212u|Du|2) +4\u03b1e\u2212u|Du|2 F j\u00af kDjuD\u00af ku \u22124\u03b1e\u2212uF j\u00af kDjD\u00af k|Du|2 +4\u03b1(F j\u00af kDjD\u00af ku) e\u2212u|Du|2. (7.5) 24 \fOn the other hand, a straightforward computation gives F j\u00af kDjD\u00af k(4\u03b1e\u2212\u03b5u) = 4\u03b1\u03b52|Du|2 Fe\u2212\u03b5u \u22124\u03b1\u03b5(F j\u00af kDjD\u00af ku)e\u2212\u03b5u. (7.6) and thus F j\u00af kDjD\u00af kG = 4\u03b1F j\u00af kD\u00af kuDj(e\u2212u|Du|2) + 4\u03b1F j\u00af kDjuD\u00af k(e\u2212u|Du|2) +4\u03b1(e\u2212u|Du|2 + \u03b52e\u2212\u03b5u) |Du|2 F \u22124\u03b1e\u2212uF j\u00af kDjD\u00af k|Du|2 +4\u03b1(F j\u00af kDjD\u00af ku) (e\u2212u|Du|2 \u2212\u03b5e\u2212\u03b5u) (7.7) where we have introduced the notation |Du|2 F = F j\u00af kDjuD\u00af ku. We can now substitute in the critical point equation (7.4) of G, and obtain F j\u00af kDjD\u00af kG = 4\u03b1(e\u2212u|Du|2 \u22122\u03b5e\u2212\u03b5u + \u03b52e\u2212\u03b5u)|Du|2 F +4\u03b1(F j\u00af kDjD\u00af ku) (e\u2212u|Du|2 \u2212\u03b5e\u2212\u03b5u) \u22124\u03b1e\u2212uF j\u00af kDjD\u00af k|Du|2 (7.8) Both expressions F j\u00af kDjD\u00af ku and F j\u00af kDjD\u00af k|Du|2 have been computed in section \u00a72 and are found in equations (2.12) and (2.18). 
Substituting in the formulas derived there, we obtain F j\u00af kDjD\u00af kG = 4\u03b1(e\u2212u|Du|2 \u22122\u03b5e\u2212\u03b5u + \u03b52e\u2212\u03b5u)|Du|2 F \u22124\u03b1e\u2212u(|DDu|2 F g + |D \u00af Du|2 F g) + \u001a4 nF \u22122(n \u22121) n (eu + fe\u2212u)Tr h \u001b (e\u2212u|Du|2 \u2212\u03b5e\u2212\u03b5u) +2(n \u22121) n Tr h e\u2212ug\u2113\u00af m{\u2202\u2113(eu + fe\u2212u)\u2202\u00af mu + \u2202\u00af m(eu + fe\u2212u)\u2202\u2113u} \u22122 ne\u2212ug\u2113\u00af m{\u2202\u2113F\u2202\u00af mu + \u2202\u00af mF\u2202\u2113u} + 4\u03b1e\u2212u\u02dc g\u00af \u2113mRm\u00af \u2113p\u00af q\u2202pu\u2202\u00af qu. (7.9) We now make use of a key partial cancellation, observed by Blocki in his proof of C1 estimates for the Monge-Amp` ere equation [2] (see also [9, 29], and [17, 18] for other applications of this partial cancellation), between |DDu|2 F g and |Du|2|Du|2 F g, which is the following. At a critical point of G, the relation (7.4) implies gi\u00af jDpDiuD\u00af ju = \u2212gi\u00af jD\u00af jDpuDiu + (|Du|2 \u2212\u03b5e(1\u2212\u03b5)u)Dpu. (7.10) We can now estimate |DDu|2 F g from below by |DDu|2 F g \u2265 1 |Du|2|gi\u00af jDpDiuD\u00af ju|2 F = 1 |Du|2 \f \f \f \fgi\u00af jD\u00af jDpuDiu \u2212(|Du|2 \u2212\u03b5e(1\u2212\u03b5)u)Dpu \f \f \f \f 2 F = |Du|2|Du|2 F + \u03b52e2(1\u2212\u03b5)u |Du|2 F |Du|2 \u22122\u03b5|Du|2 Fe(1\u2212\u03b5)u + 1 |Du|2|gi\u00af jDiuD\u00af jDpu|2 F \u2212 2 |Du|2(|Du|2 \u2212\u03b5e(1\u2212\u03b5)u)Re(F p\u00af qgi\u00af jD\u00af quDiuD\u00af jDpu). 25 \fThe terms |Du|2|Du|2 F and \u22122\u03b5e(1\u2212\u03b5)u|Du|2 F will cancel out similar terms in F j\u00af kDjD\u00af kG. The expression Re(F p\u00af qgi\u00af jD\u00af quDiuD\u00af jDpu) can be rewritten as Re(F p\u00af qgi\u00af jD\u00af quDiuD\u00af jDpu) = 1 2n\u03b1Re(F p\u00af qgi\u00af jD\u00af quDiug\u2032 \u00af jp) \u2212eu + fe\u2212u 2n\u03b1 |Du|2 F = 1 2n\u03b1{F|Du|2 \u2212 n X j=1 \u03c32(\u03bb\u2032|j)|uj|2 \u2212(eu + fe\u2212u)|Du|2 F} by going to coordinates where g\u00af kj = \u03b4\u00af kj, and u\u00af kj is diagonal at the point p where the function G attains its minimum. Here \u03c3k(\u03bb\u2032|j) denotes the k-th symmetric function of the (n \u22121) \u00d7 (n \u22121) diagonal matrix with eigenvalues \u03bb\u2032 m, m \u0338= j. We also make use of the other term |D \u00af Du|2 F g, which we rewrite as |D \u00af Du|2 F g = F p\u00af qgi\u00af j (2n\u03b1)2(g\u2032 \u00af jp \u2212(eu + fe\u2212u)g\u00af jp)(g\u2032 \u00af qi \u2212(eu + fe\u2212u)g\u00af qi) = 1 (2n\u03b1)2(|g\u2032|2 F g \u22124(eu + fe\u2212u)F + (n \u22121)(eu + fe\u2212u)2Tr h). Again using the above coordinates, we can work out a more explicit expression for |g\u2032|2 F g, \u2212e\u2212u n2\u03b1|g\u2032|2 F g = \u2212e\u2212u n2\u03b1 n X j=1 \u02dc \u03bbj(\u03bb\u2032 j)2 = \u2212e\u2212u n2\u03b1(F n X j=1 \u03bb\u2032 j \u2212 n X j=1 \u03c32(\u03bb\u2032|j)\u03bb\u2032 j) = \u2212e\u2212u n2\u03b1(FTr h \u2212 n X j=1 (\u03c33(g\u2032) \u2212\u03c33(\u03bb\u2032|j)) = \u2212e\u2212u n2\u03b1F Tr h + e\u2212u n2\u03b13\u03c33(g\u2032). 
Thus we \ufb01nd, at a critical point of the test function G, F j\u00af kDjD\u00af kG \u2264 \u001a \u22124 ne\u22122u(eu + fe\u2212u)|Du|2 + 4 n\u03b5e\u2212(1+\u03b5)u(eu + fe\u2212u) \u22124\u03b1\u03b52e\u22122\u03b5u +4\u03b1\u03b52e\u2212\u03b5ue\u2212u|Du|2 \u001b eu |Du|2 F |Du|2 \u22124\u03b1e\u2212u |Du|2 |gi\u00af jDiuD\u00af jDpu|2 F +(e\u2212u|Du|2 \u2212\u03b5e\u2212\u03b5u) \u001a8 nF \u22124 n n X j=1 \u03c32(\u03bb\u2032|j) |uj|2 |Du|2 \u22122n \u22121 n (eu + fe\u2212u)Tr h \u001b \u2212e\u2212u n2\u03b1F Tr h + e\u2212u n2\u03b13\u03c33(g\u2032) + 4 n2\u03b1e\u2212u(eu + fe\u2212u)F \u2212n \u22121 n2\u03b1 e\u2212u(eu + fe\u2212u)2Tr h +2(n \u22121) n Tr h e\u2212u 2Re \u27e8D(eu + fe\u2212u), Du\u27e9\u22122 ne\u2212u 2Re\u27e8DF, Du\u27e9 +4\u03b1e\u2212u\u02dc g\u00af \u2113mRm\u00af \u2113p\u00af q\u2202pu\u2202\u00af qu. (7.11) 7.2 Using the equation So far, we have not used the equation (2.7). We shall now use it to evaluate and simplify the preceding estimate for F j\u00af kDjD\u00af kG. It is convenient to rewrite the equation (2.7) in the 26 \ffollowing form F = n(n \u22121) 2 e2u(1 \u22124\u03b1e\u2212u|Du|2) \u22122n\u03b1\u03bd (7.12) where the function \u03bd is de\ufb01ned to be \u03bd = \u00b5 \u2212(n \u22121)fe\u2212u|Du|2 \u2212n \u22121 2\u03b1 f \u2212n \u22121 4\u03b1 e\u22122uf 2 \u2212(n \u22121)e\u2212u(\u2206f \u2212gj\u00af k(DjfD\u00af ku + D\u00af kuDjf)). (7.13) At a critical point (7.4) for G, we have \u2202\u00af \u2113F = n(n \u22121)e2u\u2202\u00af \u2113u(1 \u22124\u03b1e\u2212u|Du|2) + n(n \u22121) 2 e2u\u2202\u00af \u2113(\u22124\u03b1e\u2212u|Du|2) \u22122n\u03b1\u2202\u00af \u2113\u03bd = 2F \u2202\u00af \u2113u \u22124n\u03b1\u2202\u00af \u2113u\u03bd + 2\u03b1n(n \u22121)\u03b5\u2202\u00af \u2113ue(2\u2212\u03b5)u \u22122n\u03b1\u2202\u00af \u2113\u03bd. (7.14) The preceding expression is unwieldy if we write it down in full. To avoid unnecessary details, it is convenient to introduce the following groups of expressions: \u2022 The group E0 consists of the following expressions e\u2212uTr h, e\u22122uF, e\u22123u\u03c33(g\u2032), e\u2212u|Du|2 F |Du|2 , e\u22122u X j \u03c32(\u03bb\u2032|j) |uj|2 |Du|2, 1 (7.15) where \u03c33 and \u03c32(\u03bb\u2032|j) denote the symmetric functions of the eigenvalues of the matrix g\u2032 \u00af kj. \u2022 The group E1 consists of expressions of the form \u03b5e\u2212\u03b5u \u03a6, \u03a6 \u2208E0. (7.16) \u2022 The group E2 consists of expressions of the form c \u03a6, with \u03a6 \u2208E0 and c << \u03b5e\u2212\u03b5u (7.17) where the inequality indicated on the coe\ufb03cient c should hold for \u03b5 << 1 and A << 1. For example, any function of the form e\u22122uv with v a bounded function can be classi\ufb01ed into the group E2. Another example is the expression 4\u03b1\u03b52e\u2212\u03b5u|Du|2 F, which can be viewed as belonging to e2uE2, since 4\u03b1\u03b52e\u2212\u03b5u|Du|2 F = e2u \u001a \u03b52e\u2212\u03b5u(4\u03b1e\u2212u|Du|2) e\u2212u|Du|2 F |Du|2 \u001b (7.18) and the expression 4\u03b1e\u2212u|Du|2 is bounded as we vary A by the C1 estimate (4.5). We now claim that \u22122 ne\u2212u2Re\u27e8DF, Du\u27e9= \u22128 nF e\u2212u|Du|2 \u22128\u03b1(n \u22121)\u03b5e(2\u2212\u03b5)u(e\u2212u|Du|2) (7.19) modulo terms of the form e2uE2. 
Indeed, absorbing all terms e\u2212u|Du|2 into O(1) yields \u22122 n e\u2212u(\u22122n\u03b1)2Re\u27e8D\u03bd, Du\u27e9 = \u22124\u03b1(n \u22121)fe\u2212u 2Re\u27e8Du, D(e\u2212u|Du|2)\u27e9 +4\u03b1(n \u22121)e\u2212u 2Re\u27e8Df, D(e\u2212u|Du|2)\u27e9+ O(1). Applying the critical point equation (7.4), we see that this term is of the form e2uE2. 27 \f7.2.1 The expression for F j\u00af kDjD\u00af kG up to E2 terms It is now easy to clean up considerably the expression for F j\u00af kDjD\u00af kG. Up to e2uE1 and e2uE2 terms, (7.11) is F j\u00af kDjD\u00af kG \u2264 \u2212{ 4 ne\u2212u|Du|2}eu|Du|2 F |Du|2 + e\u2212u|Du|2 \u001a8 nF \u22124 n X j \u03c32(\u03bb\u2032|j) |uj|2 |Du|2 \u22122n \u22121 n euTr h \u001b \u22124\u03b1e\u2212u |Du|2 |gi\u00af jDiuD\u00af jDpu|2 F \u2212e\u2212u n2\u03b1F Tr h + e\u2212u n2\u03b13\u03c33 + 4 n2\u03b1F \u2212n \u22121 n2\u03b1 euTr h +4n \u22121 n Tr h|Du|2 \u22122 ne\u2212u 2Re\u27e8DF, Du\u27e9. We can now make use of the formula (7.19) for \u27e8DF, Du\u27e9modulo e2uE1 and e2uE2 obtained in the previous section. The expression 8F/n in the top line cancels out. Regrouping terms in terms of |Du|2 F/|Du|2, Tr h and F, we have F j\u00af kDjD\u00af kG \u2264 euTr h \u001a 2n \u22121 n e\u2212u|Du|2 \u2212n \u22121 n2\u03b1 \u2212e\u22122uF n2\u03b1 \u001b + 4 n2\u03b1F \u2212{ 4 ne\u2212u|Du|2}eu|Du|2 F |Du|2 \u22124e\u2212u|Du|2 n X j \u03c32(\u03bb\u2032|j) |uj|2 |Du|2 + e\u2212u n2\u03b13\u03c33 \u22124\u03b1e\u2212u |Du|2 |gi\u00af jDiuD\u00af jDpu|2 F. (7.20) We can now eliminate systematically 4\u03b1e\u2212u|Du|2 using the equation 4\u03b1e\u2212u|Du|2 = 1 \u2212e\u22122uF \u03bac modulo E2. (7.21) where it is convenient to introduce the critical value \u03bac as in (4.1). The coe\ufb03cient of euTr h above becomes 2n \u22121 n e\u2212u|Du|2 \u2212n \u22121 n2\u03b1 \u2212e\u22122uF n2\u03b1 = n \u22121 n\u03b1 {1 2 \u22121 n \u2212e\u22122uF \u03bac }. (7.22) We multiply by e\u22122u and summarize the previous calculations in the following inequality, (F j\u00af kDjD\u00af kG)e\u22122u \u2264 \u001a1 2 \u22121 n \u2212e\u22122uF \u03bac \u001bn \u22121 n\u03b1 e\u2212u Tr h + 3 n2\u03b1e\u22123u\u03c33 + 4 n2\u03b1e\u22122uF \u2212 \u001a 1 \u2212e\u22122uF \u03bac \u001b 1 n\u03b1e\u2212u|Du|2 F |Du|2 \u22121 n\u03b1 \u001a 1 \u2212e\u22122uF \u03bac \u001b e\u22122u n X j=1 \u03c32(\u03bb\u2032|j) |uj|2 |Du|2 \u22124\u03b1e\u22123u |Du|2 |gi\u00af jDiuD\u00af jDpu|2 F (7.23) modulo terms in groups E1 and E2. The terms in group E1 in the expression for (F j\u00af kDjD\u00af kG)e\u22122u come from (7.11) and (7.19) and can be worked out to be \u03b5e\u22122ue\u2212\u03b5u \u001a 2n \u22121 n eu Tr h \u22128 nF + 4 neu|Du|2 F |Du|2 + 4 n X j \u03c32(\u03bb\u2032|j) |uj|2 |Du|2 \u22122(n \u22121)e2u(1 \u2212e\u22122uF \u03bac ) \u001b = \u03b5e\u22122ue\u2212\u03b5u \u001a 2n \u22121 n eu Tr h \u22124 nF \u22122(n \u22121)e2u + 4 neu|Du|2 F |Du|2 + 4 n X j \u03c32(\u03bb\u2032|j) |uj|2 |Du|2 \u001b . 
(7.24) 28 \fAn explicit expression for the expression |gi\u00af juiu\u00af jp|2 F occurring above is \u22124\u03b1e\u22123u |Du|2 \f \f \fgi\u00af juiu\u00af jp \f \f \f 2 F = \u2212 e\u22123u n2\u03b1|Du|2 \f \f \fgi\u00af juig\u2032 \u00af jp \f \f \f 2 F \u2212e\u22123u(eu + fe\u2212u)2|Du|2 F |Du|2n2\u03b1 +2e\u22123u(eu + fe\u2212u) |Du|2n2\u03b1 F p\u00af qgi\u00af ju\u00af quig\u2032 \u00af jp = \u2212e\u22123u n2\u03b1 F Tr h + 2e\u22122u n2\u03b1 F + (e\u22122uF n2\u03b1 \u2212 1 n2\u03b1)e\u2212u|Du|2 F |Du|2 +e\u22123u n2\u03b1 \u03c33 \u2212e\u22123u n2\u03b1 n X j=1 \u03c33(\u03bb\u2032|j) |uj|2 |Du|2 \u22122e\u22122u n2\u03b1 n X j=1 \u03c32(\u03bb\u2032|j) |uj|2 |Du|2 modulo terms in group E2. Here we used \f \f \fgi\u00af juig\u2032 \u00af jp \f \f \f 2 F = n X j=1 \u02dc \u03bbj\u03bb\u2032 j|uj|2\u03bb\u2032 j = n X j=1 (F \u2212\u03c32(\u03bb\u2032|j))|uj|2(Tr h \u2212\u02dc \u03bbj). Combining this expression with the previous two expressions, we obtain the following Theorem 6 Let p \u2208X be a point where the function G achieves its minimum. Set \u03bap = (e\u22122uF)(p), \u03b8 = 2\u03b1\u03b5e\u2212\u03b5u(p). (7.25) Then we have 0 \u2264 \u001a1 2 \u22121 n \u22123 2 \u03bap \u03bac + \u03b8 \u001bn \u22121 n e\u2212u Tr h + \u001an + 1 n (\u03bap 1 n \u22121 \u22121) + 2\u03b8 \u001b1 ne\u2212u|Du|2 F |Du|2 + \u001a 6 n2 \u22122 n\u03b8 \u001b \u03bap \u2212(n \u22121)\u03b8 \u2212e\u22123u n2 n X j=1 \u03c33(\u03bb\u2032|j) |uj|2 |Du|2 + 4 n2e\u22123u\u03c33 \u2212e\u22122u n \u001an + 2 n \u2212\u03bap \u03bac \u22122\u03b8 \u001b n X j=1 \u03c32(\u03bb\u2032|j) |uj|2 |Du|2 (7.26) up to terms in group E2. 7.3 A simpli\ufb01ed Fu-Yau argument in dimension n = 2 We can now rederive the following key estimate of Fu-Yau [7] when n = 2 (and hence \u03bac = 1): for any \u03b4 > 0, there exists A\u03b4 > 0 so that, if A < A\u03b4, then the minimum \u03ba = minX(e\u22122uF) at any time t satis\ufb01es the lower bound \u03ba > 1 \u22122\u03b4. (7.27) 29 \fIndeed, \ufb01x \u03b4 > 0, with \u03b4 << 1. Recall that the test function G(z) assumes its minimum at a point p, and set \u03bap = (e\u22122uF)(p). In view of the C0 estimate, e\u22122uF = \u03bacG + O(A\u03b5), and hence \u03ba \u2265\u03bap + O(A\u03b5). Thus it su\ufb03ces to show that (7.27) holds with \u03ba replaced by \u03bap. It also su\ufb03ces to show that if \u03bap > 1/4, then \u03bap > 1 \u2212\u03b4 for A\u03b4 small enough. This is because \u03bap = \u03ba = 1 when t = 0, as discussed in (7.1). As t varies, \u03ba cannot reach 1/2, since the \ufb01rst time it does so, we would have then \u03bap > 1/4 (for A0 small enough), and hence \u03ba > 1 \u22123\u03b4, which is a contradiction. But then \u03ba > 1/2 for all time, and hence \u03ba > 1 \u22122\u03b4 for all time, as desired. We now argue by contradiction. Assume that \u03bap > 1/4. If \u03bap > 1\u2212\u03b4/2, we are done, so we assume that \u03bap \u22641\u2212\u03b4/2. In dimension n = 2, \u03c33 and \u03c32(\u03bb\u2032|j) all vanish. Incorporating the error terms in E1 and E2, the inequality (7.26) implies, for A small enough, c1e\u2212uTr h(p) + c2e\u2212u|Du|2 F |Du|2 (p) \u2264c3\u03bap + c4 (7.28) where c1, c2, c3, c4 are strictly positive constants, depending only on \u03b4. Since \u03bap is bounded by an absolute constant, it follows that e\u2212u Tr h(p) + |Du|2 F |Du|2 (p) ! \u2264c5, where c5 is a constant depending only on \u03b4. 
This implies that all terms in \u03b8\u22121E2 can be bounded by c, where c is a constant that can be made arbitrarily small by taking \u03b5 and A to be small. Going back again to the inequality (7.26), we can bound the term |Du|2 F as follows, \u001an + 1 n (\u03bap 1 n \u22121 \u22121) + 2\u03b8 \u001b 1 n\u03b1eu|Du|2 F |Du|2 \u2264 \u001an + 1 n (\u03bap 1 n \u22121 \u22121) + 2\u03b8 \u001b 1 n\u03b1eu\u03bb\u2032 1 (7.29) where \u03bb\u2032 1 is either the largest or the lowest eigenvalue of g\u2032 \u00af pq, depending on the sign of the coe\ufb03cient. In dimension n = 2, \u03bac = 1, and Theorem 6 implies, modulo additive terms of order E2, 0 \u2264 (\u22123 4\u03bap + 1 2\u03b8)(\u03bb\u2032 1 + \u03bb\u2032 2)e\u2212u + (3 4(\u03bap \u22121) + \u03b8)e\u2212u\u03bb\u2032 1 + (3 2 \u2212\u03b8)\u03bap \u2212\u03b8 = \u2212(3 4 \u22123\u03b8 2)e\u2212u\u03bb\u2032 1 \u2212(3 4\u03bap \u2212\u03b8 2)e\u2212u\u03bb\u2032 2 + (3 2 \u2212\u03b8)\u03bap \u2212\u03b8. (7.30) Since a1\u03bb\u2032 1 + a2\u03bb\u2032 2 \u22652\u221aa1a2 q \u03bb\u2032 1\u03bb\u2032 2 for any a1, a2 \u22650, and since \u03bb\u2032 1\u03bb\u2032 2 = e2u\u03bap, we obtain (3 4 \u22123\u03b8 2) 1 2(3 4\u03bap \u2212\u03b8 2) 1 2\u03ba 1 2 p \u22643 4\u03bap \u2212\u03b8 2(\u03bap + 1). (7.31) 30 \fThe leading term \u03ba2 p cancels upon squaring both sides. Since we have assumed that \u03ba is bounded away from 0, we can also divide by \u03bap and the error terms of type E2 will remain of type E2. We obtain, discarding terms of order \u03b82 and dividing through by \u03b8\u03bap, \u2212\u03bap \u22121 3 \u2264\u22122 3(\u03bap + 1), or equivalently, \u03bap \u22651 (7.32) modulo additive constants which can be made arbitrarily small by taking A small. This establishes the desired lower bound for \u03ba. To \ufb01nish the discussion on dimension n = 2, we note that Theorem 5 implies the sharp gradient estimate assumption (5.1) in the C2 estimate. From the previous analysis, we may choose A\u03b4 such that \u03ba = minX(e\u22122uF) \u22651 \u2212\u03b1 \u03b4. (7.33) From (4.3), we have 1 \u2212\u03b1 \u03b4 \u2264e\u22122uF \u22641 \u22124\u03b1e\u2212u|Du|2 \u001a 1 \u2212(\u2225f\u2225\u221e+ 1)e\u22122u \u001b + O(e\u22122u). (7.34) After choosing to be A\u03b4 smaller if necessary, we see that the previous inequality implies e\u2212u|Du|2 \u2264\u03b4. (7.35) 7.4 The case of higher dimension n In higher dimensions, it is not di\ufb03cult to see that the inequality (7.26) obtained in Theorem 6 is not powerful enough to provide a lower bound for \u03bap. In fact, even if we restrict ourselves only to the leading terms by setting formally \u03b8 = 0, the computation and examples indicate that the case n = 2 case is quite special. In the n = 2 case, as shown in (7.31) with \u03b8 = 0, it is easy to see that the leading terms about \u03bap cancel perfectly between both sides. However, this is not the case for higher dimensions. We illustrate the problem in the case n = 3. Suppose that at the point p \u2208X where G achieves its minimum, Du happens to be in the direction of \u03bb\u2032 1. Substituting \u03bac = 3 and n = 3, the inequality (7.26) with \u03b8 = 0 obtained in Theorem 6 becomes 0 \u2264 \u00121 9 \u2212\u03bap 3 \u0013 (\u03bb\u2032 1 + \u03bb\u2032 2 + \u03bb\u2032 3)e\u2212u + \u00122 9\u03bap \u22124 9 \u0013 e\u2212u(\u03bb\u2032 2 + \u03bb\u2032 3) + 2 3\u03bap +4 9e\u22123u\u03bb\u2032 1\u03bb\u2032 2\u03bb\u2032 3 + \u0012\u03bap 9 \u22125 9 \u0013 e\u22122u\u03bb\u2032 2\u03bb\u2032 3. 
(7.36) This inequality cannot prevent \u03bap = e\u22122u\u03c32(\u03bb\u2032) from starting at \u03bac = 3 and then going to zero along the method of continuity. Indeed, the path e\u2212u\u03bb\u2032 = (1, s, s) gives \u03bap = 2s + s2 and the previous inequality reduces to 0 \u22641 9(s4 \u22122s2 + 1). (7.37) 31 \fThus it is unclear whether the non-degeneracy estimate holds in higher dimensions, and it would certainly require a di\ufb00erent method. Acknowledgements: The authors would like to thank Pengfei Guan for stimulating conversations and for his notes on Fu-Yau\u2019s equation. They would also like to thank Valentino Tosatti for his lectures and notes on Strominger systems. The authors are also very grateful to the referee for a particularly careful reading of the paper, and for numerous suggestions which helped clarify the paper a great deal." + } + ], + "Nicholas P. Warner": [ + { + "url": "http://arxiv.org/abs/1912.13108v1", + "title": "Lectures on Microstate Geometries", + "abstract": "These are notes for some introductory lectures about microstate geometries\nand their construction. The first lecture considers BPS black holes in four\ndimensions as a way to introduce what one should expect from the BPS equations.\nThe second lecture discusses the \"no solitons without topology\" theorem. The\nsubsequent lectures move on to the construction and properties of bubbled\nmicrostate geometries in five dimensions. Since these are graduate lectures,\nthey involve a strongly computational narrative intended to build some\nproficiency and understanding of supersymmetric solitons in five-dimensions.\nThe narrative also regularly \"plays tourist\" pointing to specific features that\nare more general or have broader impact. The last sections contain brief\ncomments on the larger setting of microstate geometries and describe some of\nthe more recent developments. These lectures were given as a mini-course at the\nIPhT, Saclay in May, 2019.", + "authors": "Nicholas P. Warner", + "published": "2019-12-30", + "updated": "2019-12-30", + "primary_cat": "hep-th", + "cats": [ + "hep-th" + ], + "main_content": "Introduction 3 1.1 Background . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 6 2 BPS Black Holes in Four Dimensions 6 2.1 The Reissner-Nordstr\u00a8 om black hole . . . . . . . . . . . . . . . . . . . . . . . . . . . 6 2.2 Extreme Reissner-Nordstr\u00a8 om as an exemplar of branes . . . . . . . . . . . . . . . . 8 2.3 Hawking temperature . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 10 2.4 Supersymmetric \u201cbranes\u201d in four dimensions . . . . . . . . . . . . . . . . . . . . . 11 2.4.1 Spinor conventions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 13 2.5 Some general observations about the supersymmetric solution . . . . . . . . . . . . 13 2.5.1 A supersymmetric summary . . . . . . . . . . . . . . . . . . . . . . . . . . . 17 2.6 A \ufb01nal footnote . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 17 3 Solitons, horizons and topology 18 3.1 Interlude: Mass and conserved charges . . . . . . . . . . . . . . . . . . . . . . . . . 18 3.1.1 Expansions at in\ufb01nity . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 18 3.1.2 Killing vectors and Komar integrals . . . . . . . . . . . . . . . . . . . . . . 19 3.2 A relatively simple supergravity theory . . . . . . . . . . . . . . . . . . . . . . . . . 21 3.3 \u201cNo solitons without topology\u201d . . . . . . . . . . . . . . . . 
. . . . . . . . . . . . . 23 3.4 Supporting mass with topology . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 25 3.5 The BPS equations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 26 4 Microstate geometries in \ufb01ve dimensions 29 4.1 Gibbons-Hawking metrics . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 29 4.2 Harmonic forms on GH metrics . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 31 4.3 Solving the BPS equations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 33 4.4 Removing singularities . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 35 4.5 The bubble equations and closed time-like curves . . . . . . . . . . . . . . . . . . . 35 4.6 Ambi-polar GH metrics . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 36 4.7 Two centres: AdS3 \u00d7 S2 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 38 4.8 Regularity . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 40 4.9 The asymptotic charges . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 40 4.10 Summary . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 42 4.11 Final comment: Geometric transitions . . . . . . . . . . . . . . . . . . . . . . . . . 43 5 Scaling microstate geometries 44 6 Studying microstate geometries 46 6.1 Stringy and M-theory realizations . . . . . . . . . . . . . . . . . . . . . . . . . . . . 46 6.2 Holographic \ufb01eld theory . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 47 6.3 Excitations of microstate geometries . . . . . . . . . . . . . . . . . . . . . . . . . . 48 2 \f6.3.1 Fluctuating microstate geometries . . . . . . . . . . . . . . . . . . . . . . . 48 6.3.2 Other excitations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 48 6.4 Underlying mathematical structure . . . . . . . . . . . . . . . . . . . . . . . . . . . 49 6.5 Probing microstate geometries . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 49 6.6 non-BPS microstate geometries . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 51 7 A last comment 52 1 Introduction Solitons are loosely de\ufb01ned to be smooth, stable, localized, \ufb01nite-energy solutions of some particular system of equations. They have played a remarkably rich role in physics and typically arise in non-linear systems when there is a stable balance between an attractive, cohesive force and some form of dispersive phenomenon. The most celebrated, early recording of this phenomenon was John Russell\u2019s horseback pursuit of a solitary wave along two miles of the Union Canal. This was followed by the modeling such phenomena via the Korteweg-de Vries (KdV) equation. The 20th century led to the deeper understanding, and solution of, a number of non-linear, solitonic systems using integrable hierarchies and inverse scattering methods. In physics, there are many beautiful examples of the role of solitons in the semi-classical description of phase transitions. In one phase there might be natural, low-energy perturbative excitations that give a fundamental description of the system, and there may be some high energy, non-linear solitonic excitations. As a coupling constant, or the energy changes, a new phase can emerge in which the solitons become light while the original perturbative degrees of freedom becomes massive. 
The fundamental description of the system must change dramatically in the new phase because it is usually simpler to describe the system in terms of its light degrees of freedom. In the Ising model at low temperature, the light \ufb01elds are the localized spins while the solitons are \u201cmassive\u201d domain walls between domains of di\ufb00erent spin. At the critical temperature, the domain walls become light and the entire system has a simple description in terms of a massless Majorana fermion. In supersymmetric Yang-Mills theories, the standard phase is described in terms of gluons and quarks. However, at strong coupling the light degrees of freedom can be monopoles or dyons. In string theory, the vast array of dualities between apparently very di\ufb00erent string/membrane theories (IIA, IIB, heterotic, M-theory ... ) may be viewed in terms of precisely such phase transitions in which solitonic degrees of freedom in one theory become light and thus provide a better e\ufb00ective description of the theory in terms of a \u201cdual\u201d form of string/M-theory. Such phase transitions are often driven by changes of topology. For example, the solitons might be branes wrapping cycles, and if the cycle collapses, the corresponding soliton becomes massless. In this way, the W-bosons of the heterotic string emerge from M2 branes wrapping 2-cycles in M-theory. Since solitons are generically smooth con\ufb01gurations of \ufb01elds, while particles are typically singular (\u03b4-function) sources, the discovery of solitons led to the idea that perhaps they could lead to better descriptions of fundamental particles in Nature. The classically in\ufb01nite self-energy 3 \fof the electron was unsatisfactory and the hope was that, at some scale, it would resolve into a smooth solitonic \u201clump.\u201d There was also a desire that electromagnetism should be \u201cunitary,\u201d not in the sense of quantum mechanics, but meaning that electromagnetism should be self-consistent and complete in itself: Rather than having the electron as a separate fundamental particle, the hope was to describe it as a lump made out of electromagnetic \ufb01elds. This required a \u201cnon-linear\u201d electromagnetism and this was one of the driving forces behind the development of Born-Infeld theory. While this idea is no longer a serious contender in describing the electron, Born-Infeld theories have had a renaissance in the description of \ufb01eld theories on branes in string theory. With the advent of General Relativity, and particularly because of the non-linearities of the equations of motion, there was deep interest in \ufb01nding solitonic solutions and seeing whether such solitons could play a role in describing the end-states of stars, or other collapsed objects. Unfortunately, it became clear by the 1960\u2019s, and particularly through the work of Lichnerowicz, that there are no smooth solitonic solutions to General Relativity in (3 + 1)-dimensions. Indeed, the singularity theorems of Penrose and Hawking showed that singularities were, in fact, an essential part of Nature, as described by General Relativity. The scienti\ufb01c community thus began to embrace singularities, so long as they are safely inside horizons. Indeed, the triumphs of LIGO in the last few years have con\ufb01rmed the accuracy of General Relativity in extreme, non-linear regimes and black hole and their horizons appear to be an essential part of Nature. 
The study of solitons in General Relativity continued, but was re-shaped by the singularity theorems. The idea was to weaken the de\ufb01nition of solitons and look for stationary (timeindependent) lumps that can have singularities so long as the singularities are safely inside horizons. Indeed, theorems were proved to show that this was the only option and the mantra \u201cNo Solitons without Horizons\u201d became lore in General Relativity, even when coupled to all sorts of di\ufb00erent matter systems. Ideally, a soliton in General Relativity should be not only classically stable, but stable within quantum mechanics. Hawking\u2019s discovery of black-hole evaporation meant that solitons must have vanishing Hawking temperature and thus be extremal. Indeed, to ensure the stability, the solitons must be BPS, that is, they must be the lowest energy state in the sector of a theory with a given set of charges. This implies that there is no other state into which they can decay. Therefore, the \ufb01rst step was to \ufb01nd BPS solitons, which, as we will discuss, are typically supersymmetric. Once one \ufb01nds this class of solitons, one can try to be more adventurous and \ufb01nd solitons that are, perhaps, classically stable, or meta-stable, but decay by tunneling, or perhaps gravitational radiation. The BPS solitons thus become important islands of stability from which one can explore more physical, time dependent \u201clumps and their quantum descendants.\u201d There are indeed, simple, solitonic black-hole solutions in Einstein-Maxwell gravity, and this is where we will start this course. They are invaluable for illustrating the structure of BPS solutions and for highlighting the Faustian Bargain of supersymmetry. So, in the face of the triumph of General Relativity in describing everything seen by LIGO, why should we be concerned about General Relativity and its description of black holes? Why should we revisit gravitational solitons? The answer is \u201cThe Black-Hole Information Problem.\u201d Because Hawking radiation originates from vacuum polarization just above the horizon, the uniqueness of black holes in General Relativity implies that Hawking radiation is universal, ther4 \fmal and (almost) featureless. In particular, it is independent of how, and from what, the black hole formed. Semi-classical back-reaction of this Hawking radiation also implies that the black hole will evaporate, albeit extremely slowly. It is, therefore, impossible to reconstruct the interior state of a black hole (apart from mass, charge and angular momentum) from the exterior data, and thus from the \ufb01nal state of the Hawking radiation. The evaporation process cannot, therefore, be represented through a unitary transformation of states in a Hilbert space. Hence black-hole evaporation, as predicted by General Relativity and Quantum Mechanics, is inconsistent with a foundational postulate of Quantum Mechanics. Based on its horizon area, the black hole at the core of the Milky Way should have about e1090 microstates. From the outside, black-hole uniqueness implies that its state is unique, as would be the state of its Hawking radiation were it to evaporate. The problem is therefore vast: e1090 \u0338= 1! However, the evaporation time-scale of a Schwarzschild black hole is also vast: a one solar mass black hole evaporates into a zero-temperature reservoir in about 1067 years. Moreover the evaporation time is proportional to M3. 
For the black hole at the core of the Milky Way, this evaporation time is about 1087 years. This led to the idea that, over such vast time-scales, the microstate data of a black hole should be able to dribble out with only tiny (quantum?) corrections to General Relativity. In 2009, Mathur used strong sub-additivity of quantum information to show that this idea was wrong and that solving the Information Problem would require corrections of order 1 to the physics at the horizon scale. Because of the information problem, there is a growing consensus that new physics must emerge at the horizon scale of a black hole. The challenge is to \ufb01nd a classical mechanism to support such horizon-scale structure, and if one can do this, then one can investigate how blackhole microstructure can be encoded in the horizon-scale structure. We thus come back to the issue of gravitational solitons. To support horizon-scale microstructure we need smooth, horizonless geometries that closely approximate the geometries of black holes up where the horizon would form. This is the de\ufb01nition of a Microstate Geometry. Instead of having a horizon, such geometries cap-o\ufb00smoothly at very high red-shift. The absence of horizon is not only essential to solving the information problem but also to smoothness: singularity theorems tell us that singularities must form inside horizons. This would be a very short course if Microstate Geometries did not exist. Examples of such geometries were \ufb01rst constructed almost 15 years ago (largely because the creators of the geometries were ignorant of the many theorems that forbade their existence). It is thus extremely interesting to chart how the Microstate Geometries dodged the \u201cNo Go\u201d theorems. The answer is a combination of higher-dimensional gravity theories, topology and Chern-Simons interactions. We discuss this in the second lecture. Subsequent lectures will involve the construction of Microstate Geometries and an investigation of their properties. As I indicated above, the explicit examples that we will discuss here will be supersymmetric/BPS. This part of the story is now very well-developed. The systematic construction of generic, non-supersymmetric Microstate Geometries remains a major unsolved problem: we have sporadic, very exotic examples, but, as yet, we do not have Microstate Geometries that match the properties of astrophysical black holes. This is a problem we hope to solve in the not too-distant future. 5 \f1.1 Background In this course I will assume that you are familiar with \u2022 The basics of general relativity: that you can work with metrics, compute connections, curvatures and geodesics and know about isometries and Killing vectors. I will assume you know action principles and how to compute energy-momentum tensors and obtain Einstein\u2019s equations. \u2022 Di\ufb00erential forms: You should be familiar with di\ufb00erential forms, exterior derivatives and Hodge duals. If not, it would be important to review the relevant section of any source like [1] Sections 5.4 and 7.9 before Lecture 2. \u2022 Lorentz spinors in four dimensions. While we will use Lorentz spinors in a generic spacetime and in both four and \ufb01ve dimensions, it will not be a major part of the course and you will be \ufb01ne if you have only seen the Dirac equation in Minkowski space. 
2 BPS Black Holes in Four Dimensions 2.1 The Reissner-Nordstr\u00a8 om black hole The most general, spherically symmetric, time-independent metric1 has the form ds2 = \u2212e2 \u03b1(r) dt2 + e2 \u03b2(r) dr2 + e2 \u03b3(r) dr dt + e2 \u03b4(r) \u0000d\u03b82 + sin2 \u03b8 d\u03c62) (2.1) \u2192\u2212e2 \u03b1(r) dt2 + e2 \u03b2(r) dr2 + e2 \u03b3(r) dr dt + r2 \u0000d\u03b82 + sin2 \u03b8 d\u03c62) . (2.2) where the possible re-de\ufb01nition r \u2192\u02dc r(r) has been \ufb01xed by setting the areas of the concentric spheres to 4\u03c0r2. One is also free to de\ufb01ne t = \u02dc t + f(r) and so dt2 = (d\u02dc t2 + 2f\u2032(r)drd\u02dc t + f\u2032(r)2dr2). This can be used to gauge away the e2 \u03b3(r) dr dt term, and so, without loss of generality we may take: ds2 = \u2212e2 \u03b1(r) dt2 + e2 \u03b2(r) dr2 + r2 \u0000d\u03b82 + sin2 \u03b8 d\u03c62) . (2.3) Choosing (x0, x1, x2, x3) = (t, r, \u03b8, \u03c6), the non-zero components of the a\ufb03ne connections are: \u03930 01 = \u03930 10 = \u03b1\u2032(r) , \u03931 00 = e2 (\u03b1\u2212\u03b2) \u03b1\u2032(r) , \u03931 11 = \u03b2\u2032(r) , \u03931 22 = \u2212r e\u22122 \u03b2 , \u03931 33 = \u2212r sin2 \u03b8 e\u22122 \u03b2 , \u03932 12 = \u03932 21 = 1 r , \u03932 33 = \u2212sin \u03b8 cos \u03b8 , \u03933 13 = \u03933 31 = 1 r , \u03933 23 = \u03933 32 = cot \u03b8 . (2.4) The independent, non-zero components of the Riemann tensor are: R0101 = \u03b1\u2032(r) \u03b2\u2032(r) \u2212\u03b1\u2032\u2032(r) \u2212(\u03b1\u2032(r))2 , R0202 = \u2212r e\u22122 \u03b2 \u03b1\u2032(r) , R0303 = R0303 sin2 \u03b8 , R1212 = r e\u22122 \u03b2 \u03b2\u2032(r) , R1313 = R1212 sin2 \u03b8 , R2323 = \u00001 \u2212e\u22122 \u03b2\u0001 sin2 \u03b8 . (2.5) 1I do not need to assume time-independence to arrive at the Reissner-Nordstr\u00a8 om black hole. One can prove a generalization of Birkho\ufb00\u2019s theorem that shows that spherical symmetry implies time-independence. I am taking a short-cut here to simplify the pedagogy. 6 \fThe non-zero components of the Ricci tensor are: R00 = e2 (\u03b1\u2212\u03b2) \u0012 \u03b1\u2032\u2032(r) + (\u03b1\u2032(r))2 \u2212\u03b1\u2032(r) \u03b2\u2032(r) + 2 r \u03b1\u2032(r) \u0013 , R11 = \u2212 \u0012 \u03b1\u2032\u2032(r) + (\u03b1\u2032(r))2 \u2212\u03b1\u2032(r) \u03b2\u2032(r) \u22122 r \u03b2\u2032(r) \u0013 , R22 = e\u22122 \u03b2 \u0000r (\u03b2\u2032(r) \u2212\u03b1\u2032(r)) \u22121 \u0001 + 1 , R33 = R22 sin2 \u03b8 . (2.6) The most-general, spherically symmetric, time-independent electromagnetic \ufb01eld is given by F = E(r) dt \u2227dr + p sin \u03b8 d\u03b8 \u2227d\u03c6 . (2.7) Closure of F, that is, dF = 0, implies that p is a constant. The non-trivial Maxwell equation is d \u22c64 F = 0 \u21d4 d dr \u0000r2 e\u2212\u03b1\u2212\u03b2 E(r) \u0001 = 0 . (2.8) The energy-momentum tensor is given by T\u00b5\u03bd = 1 4\u03c0 (F\u00b5\u03c1 F\u03bd\u03c1 \u2212 1 4 g\u00b5\u03bd F \u03c1\u03c3F\u03c1\u03c3) , (2.9) whose components are T00 = 1 8\u03c0 \u0010 e\u22122 \u03b2 \u0000E(r) \u00012 + e2 \u03b1 p2 r4 \u0011 , T11 = \u22121 8\u03c0 \u0010 e\u22122 \u03b1 \u0000E(r) \u00012 + e2 \u03b2 p2 r4 \u0011 , T22 = 1 8\u03c0 \u0010 r2 e\u22122 (\u03b1+\u03b2) \u0000E(r) \u00012 + p2 r2 \u0011 , T33 = T22 sin2 \u03b8 . (2.10) Note that this is traceless. Note: The factor of 1 4\u03c0 in (2.9) comes from taking CGS/Gaussian units in which \u03f50 = 1 4\u03c0 and \u00b50 = 4\u03c0. This makes the equations simpler if one takes G = 1. This is probably the last time I will get the units correct. 
Einstein\u2019s equations are: R\u00b5\u03bd = 8\u03c0G ( T\u00b5\u03bd \u22121 2 T g\u00b5\u03bd) = 8\u03c0G T\u00b5\u03bd . (2.11) since T \u2261g\u00b5\u03bdT\u00b5\u03bd = 0. The \ufb01rst equation (taking G = 1) is: R11 + e2 (\u03b2\u2212\u03b1) R00 = 8\u03c0 (T11 + e2 (\u03b2\u2212\u03b1) T00) . (2.12) and is equivalent to 2 r (\u03b1\u2032(r) + \u03b2\u2032(r)) = 0 . (2.13) Hence \u03b1 = \u2212\u03b2 + constant . (2.14) One can absorb the constant of integration into a rescaling of t and so, without loss of generality, one can take: \u03b1 = \u2212\u03b2 . (2.15) 7 \fThis means that the Maxwell equation (2.8) becomes d dr \u0000r2 E(r) \u0001 = 0 \u21d4 E(r) = q r2 , (2.16) for some constant, q. If the metric is asymptotically Minkowskian, then (2.7) tells us that q is the electrostatic charge. The remaining Einstein equations are e\u22122 \u03b1 \u0012 \u03b1\u2032\u2032(r) + 2(\u03b1\u2032(r))2 + 2 r \u03b1\u2032(r) \u0013 = \u0000E(r) \u00012 + e2 (\u03b1+\u03b2) p2 r4 = p2 + q2 r4 , e2 \u03b1 \u00002 r \u03b1\u2032(r) + 1 \u0001 \u22121 = \u2212r2 \u0000E(r) \u00012 \u2212p2 r2 = \u2212p2 + q2 r2 . (2.17) The second equation is equivalent to d dr \u0000r e2 \u03b1\u0001 = 1 \u2212p2 + q2 r2 \u21d4 e2 \u03b1 = 1 \u22122 m r + p2 + q2 r2 , (2.18) for some constant of integration, m. The other equation in (2.17) is automatically satis\ufb01ed (as it must be because of the Bianchi identities for R\u00b5\u03bd and conservation of the energy-momentum tensor). Hence the solution is given by the Reissner-Nordstr\u00a8 om metric: ds2 = \u2212 \u0010 1 \u22122 m r + p2 + q2 r2 \u0011 dt2 + \u0010 1 \u22122 m r + p2 + q2 r2 \u0011\u22121 dr2 + r2 \u0000d\u03b82+sin2 \u03b8 d\u03c62) . (2.19) In the Newtonian approximation for asymptotically \ufb02at spaces, one has g00 \u2248\u2212 \u0010 1 + 2 \u03c6(\u20d7 x) c2 \u0011 , (2.20) where \u03c6(\u20d7 x) is the Newtonian gravitational potential. This means that we can identify the parameter m with the Keplerian mass of the system. The electric \ufb01eld is given by F = dA = q r2 dt \u2227dr + p sin \u03b8 d\u03b8 \u2227d\u03c6 , A = \u03a6(r) dt + p cos \u03b8 d\u03c6 , \u03a6 = q r , (2.21) and p and q represent the magnetic and electric charges (in units with \u03f50 = 1 4\u03c0 and \u00b50 = 4\u03c0). For simplicity I will, henceforth set p = 0. 2.2 Extreme Reissner-Nordstr\u00a8 om as an exemplar of branes The horizons of a simple black hole are identi\ufb01ed by seeking the null hypersurfaces de\ufb01ned by constant r. That is, one solves grr = 0, which yields: r = r\u00b1 = m \u00b1 p m2 \u2212q2 . (2.22) The physical range of Reissner-Nordstr\u00a8 om metrics have m \u2265q (if m < q then the metric has a naked singularity). 8 \fThe metrics with m = q are known as extremal Reisnner-Nordstr\u00a8 om and have the form: ds2 = \u2212H(r)\u22122 dt2 + H(r)2 dr2 + r2 \u0000d\u03b82 + sin2 \u03b8 d\u03c62) = \u2212H(r)\u22122 dt2 + H(r)2 h d\u03c12 + \u03c12 \u0000d\u03b82 + sin2 \u03b8 d\u03c62) i . (2.23) where H \u2261 \u0010 1 \u2212q r \u0011\u22121 = 1 + q \u03c1 , \u03c1 \u2261r \u2212q . (2.24) The inner and outer horizon merge into a single, extremal horizon at \u03c1 = 0 or r = q. There are several things to note here: \u2022 The spatial sections of the metric, de\ufb01ned by (\u03c1, \u03b8, \u03c6), have the \ufb02at, Euclidean metric on R3. \u2022 H(\u03c1) is a harmonic function on the \ufb02at R3. \u2022 The function, H\u22121, is an electric potential for the Maxwell \ufb01eld, E(r) in (2.16). 
None of these facts are an accident. Indeed, a generic BPS p-brane has a metric of the form (H(\u20d7 y))\u03b1 \u03b7\u00b5\u03bd dx\u00b5dx\u03bd + (H(\u20d7 y))\u03b2 d\u20d7 y \u00b7 d\u20d7 y , (2.25) where \u03b1, \u03b2 \u2208R, \u03b7\u00b5\u03bd is the metric on a Minkowski space, R1,p, parallel to the brane, the metric, d\u20d7 y \u00b7d\u20d7 y, transverse to the brane is \ufb02at, and H(\u20d7 y), is harmonic. Moreover, H(\u20d7 y) is the electrostatic potential for the Maxwell \ufb01eld sourced by the brane. A generic brane also sources a dilaton \ufb01eld, \u03c6, that is also entirely determined in terms of H(\u20d7 y). Returning to the Reissner-Nordstr\u00a8 om metric, there are some other general threads that I wish to draw out. First, note that as r \u2192\u221e, the metric (2.19) goes to \ufb02at Minkowski space: ds2 = \u2212dt2 + dr2 + r2 \u0000d\u03b82 + sin2 \u03b8 d\u03c62) . (2.26) However, as one approaches the horizon of the extremal Reissner-Nordstr\u00a8 om metric, \u03c1 \u21920, ds2 \u223c\u2212 \u0010\u03c1 q \u00112 dt2 + \u0010\u03c1 q \u0011\u22122 \u0000d\u03c12 + \u03c12 (d\u03b82 + sin2 \u03b8 d\u03c62) \u0001 (2.27) = q2 \u0014 \u22121 q4 \u03c12 dt2 + d\u03c12 \u03c12 \u0015 + q2 \u0000d\u03b82 + sin2 \u03b8 d\u03c62\u0001 . (2.28) This is a product space: AdS2 \u00d7 S2. (Historically it is known as the Robinson-Bertotti metric.) The \u201ccosmological constant\u201d of the AdS2 arises because the electromagnetic \ufb02ux becomes q\u22121 times the volume form of AdS2 in the near-horizon region: F = q r2 dt \u2227dr \u223c1 q dt \u2227dr = 1 q \u00121 q \u03c1 dt \u0013 \u2227 \u0012 q d\u03c1 \u03c1 \u0013 . (2.29) This is also a feature of many extremal brane geometries: The near-horizon limit of (2.25) typically yields something of the form q2 \u0014d\u03c12 \u03c12 + q\u22122 \u03c12 \u03b7\u00b5\u03bd dx\u00b5dx\u03bd \u0015 + q2 d\u21262 n , (2.30) 9 \fwhere d\u21262 n is the metric of the unit sphere in the \ufb02at space described by the Cartesian coordinates, \u20d7 y. For a p-brane, this the metric (of a Poincar\u00b4 e slice) of AdSp+2 \u00d7Sn. It is also useful to note that to go from the asymptotically-\ufb02at metric (2.26) to the AdS2 \u00d7S2 metric in (2.28), one simply drops the 1 in the harmonic function, H(\u03c1) = 1 + q \u03c1. In more general geometries whose asymptotic form is governed by harmonic functions, one often \ufb01nds that the transition from asymptotically-\ufb02at to asymptotically-AdS is achieved in exactly this way, and the mathematical process of going between such classes of asymptotics is sometimes referred to as \u201cThrowing away the 1\u2019s.\u201d One should also be mindful in using metrics like (2.30): they look \u201ccomplete\u201d and smooth for 0 < \u03c1 < \u221e, but we know they are not complete and that the apparent singularity at \u03c1 = 0 is a coordinate singularity. One should remember that \u03c1 = 0, or r = q, is the horizon and that the patch covered by the metric (2.30) is geodesically incomplete. In particular, the interior of the black hole corresponds to 0 < r < q, or \u2212q < \u03c1 < 0. One can use infalling null rays to set up Eddington-Finkelstein coordinates that continue (2.30) across the horizon to the actual, physical singularity at \u03c1 = \u2212q, where H vanishes. We will tend to use metrics like (2.30) and one should remember that when we do this, we are neglecting the black-hole interior. 
That said, in microstate geometries there is no \u201cinterior\u201d because the geometry caps o\ufb00 smoothly above where the black-hole horizon would form. 2.3 Hawking temperature The most direct way to compute the Hawking temperature of a static black hole is to use the fact that thermal ensembles are periodic in imaginary time with period \u03b2 = 1 k T . (2.31) For for static black holes one therefore continues t \u2192i\u03c4 and then examines the periodicity of the geometry in \u03c4 [2]. For the Reissner-Nordstr\u00a8 om metric one has ds2 = r\u22122 \u0000(r \u2212r+)(r \u2212r\u2212) \u0001 d\u03c4 2 + r2 \u0000(r \u2212r+)(r \u2212r\u2212) \u0001\u22121 dr2 + r2 \u0000d\u03b82 + sin2 \u03b8 d\u03c62) . (2.32) where r\u00b1 is given by (2.22). For r \u223cr+ one has ds2 = r\u22122 + (r+ \u2212r\u2212) \u03c12 d\u03c4 2 + 4 r2 + (r+ \u2212r\u2212)\u22121 d\u03c12 + r2 + \u0000d\u03b82 + sin2 \u03b8 d\u03c62) = 4 r2 + (r+ \u2212r\u2212)\u22121 \u0010 d\u03c12 + 1 4 r\u22124 + (r+ \u2212r\u2212)2 \u03c12 d\u03c4 2\u0011 + r2 + \u0000d\u03b82 + sin2 \u03b8 d\u03c62) = 4 r2 + (r+ \u2212r\u2212)\u22121 (d\u03c12 + \u03c12d\u03c72) + r2 + \u0000d\u03b82 + sin2 \u03b8 d\u03c62) . (2.33) where \u03c1 \u2261\u221ar \u2212r+ , \u03c7 \u2261 1 2 r\u22122 + (r+ \u2212r\u2212) \u03c4 . (2.34) To be smooth, \u03c7 must have a period of 2\u03c0, which means one must have: \u03c4 \u2261\u03c4 + 4\u03c0 r2 + (r+ \u2212r\u2212) (2.35) 10 \fand hence (taking k = 1) the Hawking temperature is given by: TH = (r+ \u2212r\u2212) 4\u03c0 r2 + = \u210fc3 p m2 \u2212q2 2\u03c0 G k \u0000m + p m2 \u2212q2 \u00012 , (2.36) where I have restored Planck\u2019s, Newton\u2019s and Boltzmann\u2019s constants as well as the speed of light. For our purposes, the most important aspect of this is that the Hawking temperature vanishes for extremal black holes. Thus, not only are these solutions classically stable and static, but they are also time-independent in quantum mechanics. 2.4 Supersymmetric \u201cbranes\u201d in four dimensions Another extremely important property of the extreme Reissner-Nordstr\u00a8 om metric is that it is supersymmetric, and it is this property that allows one to construct explicit and remarkable generalizations in the form of solitons. To exhibit the supersymmetry, and how it \ufb01xes the solution, we consider a generalization of (2.23) with more general choices of functions: ds2 = \u2212H(\u20d7 y)\u22122 dt2 + H(\u20d7 y)2 d\u20d7 y \u00b7 d\u20d7 y . (2.37) where H(\u20d7 y) is an arbitrary function on the R3 de\ufb01ned by \u20d7 y. We similarly consider a more general electrostatic \ufb01eld A = \u03a6(\u20d7 y) dt , F = dA = \u2212\u2202i\u03a6(\u20d7 y) dt \u2227dyi . (2.38) Introduce the frames e0 = H(\u20d7 y)\u22121 dt , ei = H(\u20d7 y) dyi , i = 1, 2, 3 . (2.39) The spin connection, \u03c9\u00b5 ab = \u2212\u03c9\u00b5 ba, is then de\ufb01ned so to make the frames covariant constant: 0 = \u2207\u00b5 ea\u03bd \u2261\u2202\u00b5ea\u03bd + \u03c9\u00b5ab eb\u03bd \u2212\u0393\u03c1 \u00b5\u03bd ea\u03c1 . (2.40) where the frame indices, a, b, c... are raised and lowered using the Minkowski metric \u03b7ab = \u03b7ab = diag(\u22121, 1, 1, 1). The frame components and connection 1-forms are de\ufb01ned via: ea \u2261ea\u03bd dx\u03bd , \u03c9ab \u2261\u03c9\u00b5 ab dx\u00b5 , (2.41) and (2.40) implies: d ea = \u2212\u03c9ab \u2227eb . (2.42) This equation is su\ufb03cient to determine the connection 1-forms, \u03c9ab. 
A straightforward calculation using (2.39) yields: \u03c90i = \u2212\u03c9i0 = H\u22122(\u2202iH) e0 , \u03c9ij = \u2212\u03c9ji = \u2212H\u22122 \u0000(\u2202iH) ej \u2212(\u2202jH) ei\u0001 . (2.43) Supersymmetry transforms bosons into fermions, and vice versa: \u03b4\u03f5boson = \u03f5 fermions , \u03b4\u03f5fermion = \u03f5 bosons . (2.44) 11 \fand a supersymmetric background is de\ufb01ned to be one that is invariant under such a supersymmetry variation. Thus the variations in (2.44) are required to vanish. Supersymmetric backgrounds almost universally involve non-trivial bosonic \ufb01elds and have all the fermions set to zero. This means that the non-trivial supersymmetry constraints come from solving the variations of the fermions. A solution, \u03f5, to \u03b4\u03f5fermions = 0 is called a Killing spinor. It is instructive to do some dimensional analysis in (2.44). Bosonic actions involve two derivatives whereas fermionic actions only involve one derivative. This means that the dimensions of fermions exceed the dimensions of the corresponding bosons by 1 2. This is consistent with the bosonic variation if we assign \u03f5 to have a dimension of \u22121 2. To make the dimensions work in the fermion variation one must have a derivative on the right-hand side. Thus the fermionic variations generically involve derivatives of bosons and the supersymmetry equations are typically \ufb01rst-order di\ufb00erential equations on the bosons. Strictly speaking, the foregoing argument is only valid in the absence of gravity because Newton\u2019s constant has dimension 2 and so can also be used to make up any discrepancy in dimensions. One can re\ufb01ne the argument above to describe how and where Newton\u2019s constant can appear and how it can rescale \ufb01elds. I am not going to do this because I am simply trying to set up expectations rather than prove theorems: the BPS equations should involve \ufb01rst order derivatives of fundamental bosons and sometimes there will also be bosonic terms without derivatives, but these will come with factors of the square-root of Newton\u2019s constant. In a supergravity theory there are necessarily gravitini, \u03c8i \u00b5, and these transform at least into the graviton and, in extended supergravity theories, they also transform into the Maxwell \ufb01eld strengths. Thus the primary set of supersymmetry equations come from setting the gravitino variations to zero. More complicated supergravity theories can also have spin\u22121 2 \ufb01elds and their variations lead to further constraints on the bosonic background. In the simplest four-dimensional supergravity (N =2) that combines gravitons and Maxwell \ufb01elds, the gravitino variation may be written as [3]: \u03b4\u03c8\u00b5 = \u2207\u00b5\u03f5 \u2212 1 4 F\u03c1\u03c3\u03b3\u03c1\u03b3\u03c3\u03b3\u00b5\u03f5 . (2.45) Thus supersymmetry requires us to solve the \ufb01rst order system: \u2207\u00b5\u03f5 \u2212 1 4 F\u03c1\u03c3\u03b3\u03c1\u03b3\u03c3\u03b3\u00b5 \u03f5 \u2261\u2202\u00b5\u03f5 + 1 4\u03c9\u00b5ab \u03b3a\u03b3b \u03f5 \u2212 1 4 F\u03c1\u03c3\u03b3\u03c1\u03b3\u03c3\u03b3\u00b5\u03f5 = 0 . (2.46) It is simplest to write everything in terms of frame components: e\u00b5c \u2202\u00b5\u03f5 + 1 4\u03c9cab \u03b3a\u03b3b \u03f5 \u2212 1 4 Fab\u03b3a\u03b3b\u03b3c\u03f5 = 0 . (2.47) where e\u00b5c are the inverse frames (de\ufb01ned via e\u00b5ce\u00b5a = \u03b4a c ) and F = \u2212\u2202i\u03a6(\u20d7 y) dt \u2227dyi = \u2212(\u2202i\u03a6) e0 \u2227ei . 
(2.48) The components of these four equations are: H \u2202t\u03f5 + 1 2 H\u22122(\u2202iH) \u03b30\u03b3i\u03f5 + 1 2 (\u2202i\u03a6) \u03b30\u03b3i\u03b30 \u03f5 = 0 , (2.49) H\u22121 \u2202i\u03f5 + 1 4 H\u22122(\u2202jH) (\u03b3j\u03b3i \u2212\u03b3i\u03b3j)\u03f5 \u2212 1 2 (\u2202j\u03a6) \u03b30\u03b3j\u03b3i \u03f5 = 0 . (2.50) 12 \fThere is a relatively obvious family of solutions to this. Since the metric is time-independent, it is natural to seek solutions with \u2202t\u03f5 = 0. The \ufb01rst equation then gives: \u03b30\u03b3ih \u2212\u2202i \u0000H\u22121\u0001 + (\u2202i\u03a6) \u03b30 i \u03f5 = 0 . (2.51) Since \u03b30 = \u2212\u03b30 has eigenvalues \u00b11, this means we must take: \u03b30 \u03f5 = \u00b1\u03f5 , \u03a6 = \u2213H\u22121 + const. (2.52) Using (2.52) this in (2.50) gives: 0 = H\u22121 \u2202i\u03f5 \u2212 h 1 4 \u0000\u2202j \u0000H\u22121\u0001\u0001 (\u03b3j\u03b3i \u2212\u03b3i\u03b3j) \u2212 1 2 \u03b3j\u03b3i i \u03f5 (2.53) = H\u22121 \u2202i\u03f5 \u2212 1 2 \u0000\u2202i \u0000H\u22121\u0001\u0001 \u03f5 , (2.54) and hence \u03f5 = H\u22121 2 \u03f50 , (2.55) where \u03f50 is a constant spinor. Finally, we observe that the Maxwell equation, d \u2217F = 0 yields: \u03b4ij \u2202i \u0000H2\u2202j\u03a6 \u0001 = 0 , (2.56) and using (2.52), this becomes \u03b4ij \u2202i\u2202j H = 0 . (2.57) Thus H is a harmonic function on the R3. It is a tedious, but straightforward exercise that (2.52) and (2.57) imply that the Einstein equations are satis\ufb01ed. 2.4.1 Spinor conventions Following [3], I am taking \u03b3a\u03b3b + \u03b3b\u03b3a = \u22122\u03b7ab, with explicit forms of \u03b3a given by: \u03b30 = \uf8eb \uf8ed0 1 l2\u00d72 1 l2\u00d72 0 \uf8f6 \uf8f8, \u03b3i = \uf8eb \uf8ed0 \u03c3i \u2212\u03c3i 0 \uf8f6 \uf8f8, \u03b35 = \uf8eb \uf8ed1 l2\u00d72 0 0 \u22121 l2\u00d72 \uf8f6 \uf8f8. (2.58) Frame indices, a, b, c...., are raised and lowered with \u03b7ab and \u03b7ab and space-time indices, \u03c1, \u00b5, \u03bd...., are raised and lowered with g\u00b5\u03bd and g\u00b5\u03bd. The \u03b3-matrices with space-time indices are de\ufb01ned by \u03b3\u00b5 \u2261e\u00b5a\u03b3a. 2.5 Some general observations about the supersymmetric solution There are many important and general lessons arising from this computation. First, this is a huge generalization of the Reissner Nordstr\u00a8 om metric because H is, a priori, a general harmonic function on R3. Indeed one can take H = \u03b50 + N X j=1 qj |\u20d7 y \u2212\u20d7 y(j)| . (2.59) 13 \ffor some constant \u03b50 and some charges qj sourced at the points \u20d7 y(j). Metric regularity requires that H be strictly positive or strictly negative. We take the convention H > 0, and then regularity means \u03b50 \u22650 , qi \u22650 , (2.60) with at least one of them non-zero. If \u03b50 \u0338= 0, one can scale t and the coordinates \u20d7 y so that \u03b50 = 1. If \u03b50 = 1 then H \u21921 as r \u2261|\u20d7 y| \u2192\u221eand the metric (2.37) is asymptotically \ufb02at. If \u03b50 = 0 then H \u2192Q r , Q \u2261 N X j=1 qj , (2.61) and the metric is asymptotic to AdS2 \u00d7 S2 at in\ufb01nity, as in (2.28). As one approaches the charge source at \u20d7 y(j), one has H \u223c qj |\u20d7 y \u2212\u20d7 y(j)| , (2.62) and the metric becomes exactly like that of extreme Reissner-Nordstr\u00a8 om, as described in Section 2.2. 
This solution is thus a static collection of extreme Reissner-Nordström black holes: there is a perfect balance between the gravitational attraction ($\sim m$) and the electrostatic repulsion ($\sim q$). The positions of the black holes, $\vec y^{(j)}$, are freely choosable "moduli." This solution was first obtained over 70 years ago by Majumdar and Papapetrou [4,5].

Comment: There is a common belief that, in BPS solutions, the perfect balance of gravitational attraction and electrostatic repulsion leads to large moduli spaces for the relative positions of such objects. This belief is false: as we will see in later lectures, there can be other interactions between BPS components, and these interactions can create a "potential" that reduces the naive dimension of the moduli space.

Turning to the supersymmetry, we note that the first constraint that arises is $\partial_i\Phi = \mp\partial_i(H^{-1})$ in (2.52). The choice of sign is initially arbitrary, but it determines which sign of the charge is to be viewed as supersymmetric (BPS) and which sign breaks the supersymmetry (is anti-BPS). Indeed, this sign choice is correlated with the sign of the projector: $\gamma^0\epsilon = \pm\epsilon$. Since $\gamma^0$ is traceless and $(\gamma^0)^2 = \mathbb{1}$, the matrix $\gamma^0$ has eigenvalues $+1$ and $-1$, each with degeneracy 2. The projection condition in (2.52) thus cuts the number of supersymmetries in half, and the two possible sets of supersymmetries (corresponding to $\pm$ in (2.52)) are mutually incompatible. For simplicity, we will take
\[
\gamma^0\,\epsilon = -\epsilon\,, \qquad \Phi = +H^{-1}\,, \tag{2.63}
\]
so that all the electric charges are positive. Adding a negative charge would then break the supersymmetry and make the metric (2.37) singular ($H$ would vanish on some hypersurface).

The underlying supersymmetric model has $N=2$ supersymmetry in four dimensions. This means that the supersymmetry parameter, $\epsilon$, can be taken to be a (complex) Dirac spinor. (The designation $N=2$ reflects the fact that supersymmetry is usually counted in terms of Majorana (real) spinors, and a Dirac spinor has two Majorana pieces, which are essentially its "real and imaginary parts.") There are thus eight independent supersymmetries, because the total number of supersymmetries is determined by counting real parameters. The projection condition (2.63) cuts this number to four supersymmetries, and thus, relative to the underlying model, one can view the multi-black-hole solution as $\tfrac12$-BPS. However, these black holes are typically embedded in M-theory, or in a ten-dimensional string theory, both of which have 32 supersymmetries. Relative to such theories, the multi-black-hole solution is $\tfrac18$-BPS.

Comment: Think of this as a warning: the only unambiguous way to express the amount of supersymmetry is to specify the number of supersymmetry parameters. Other ways of expressing the amount of supersymmetry depend on dimension and context. In these lectures I will consider solutions with 4 supersymmetries and take the macroscopic view that they are generically $\tfrac18$-BPS states of M-theory or type II supergravity.

The physical impact of the constraint $\partial_i\Phi = \mp\partial_i(H^{-1})$ should be evident from (2.20) and (2.47): supersymmetry sets the mass equal to the charge. It is for this reason that supersymmetry and BPS have almost become synonymous.
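As a quick sanity check of the conventions in (2.58) and of the projector counting above, one can verify the Clifford algebra and the eigenvalue structure of $\gamma^0$ numerically. The following is a minimal sketch (Python with numpy assumed; the signature $\eta = {\rm diag}(-1,1,1,1)$ is read off from the relation $\gamma^a\gamma^b + \gamma^b\gamma^a = -2\eta^{ab}$, and is an assumption of this sketch rather than something stated explicitly in the text):

```python
import numpy as np

# Check of the spinor conventions (2.58): Clifford algebra, and the
# statement that gamma^0 is traceless, squares to 1l, and has doubly
# degenerate eigenvalues +1, -1, so (2.63) keeps half the spinors.
s = [np.array([[0, 1], [1, 0]], dtype=complex),       # sigma_1
     np.array([[0, -1j], [1j, 0]], dtype=complex),    # sigma_2
     np.array([[1, 0], [0, -1]], dtype=complex)]      # sigma_3
I2, Z2 = np.eye(2, dtype=complex), np.zeros((2, 2), dtype=complex)
gamma = [np.block([[Z2, I2], [I2, Z2]])] + \
        [np.block([[Z2, si], [-si, Z2]]) for si in s]
eta = np.diag([-1.0, 1.0, 1.0, 1.0])

# {gamma^a, gamma^b} = -2 eta^{ab} 1l
for a in range(4):
    for b in range(4):
        anti = gamma[a] @ gamma[b] + gamma[b] @ gamma[a]
        assert np.allclose(anti, -2 * eta[a, b] * np.eye(4))

g0 = gamma[0]
assert abs(np.trace(g0)) < 1e-12 and np.allclose(g0 @ g0, np.eye(4))
assert np.allclose(np.sort(np.linalg.eigvalsh(g0).real), [-1, -1, 1, 1])
P_minus = (np.eye(4) - g0) / 2       # enforces gamma^0 eps = -eps, eq. (2.63)
assert np.linalg.matrix_rank(P_minus) == 2
print("conventions (2.58) verified; the projection (2.63) is half-rank")
```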
The other fundamental feature of supersymmetry is that it "squares to the Hamiltonian." This has several important consequences for the analysis of the supersymmetry equations. First, consider the commutator of two covariant derivatives. It is trivial to write this in terms of the curvature tensor:
\[
\big[\nabla_\mu, \nabla_\nu\big]\,\epsilon = \tfrac14\,R_{\mu\nu ab}\,\gamma^a\gamma^b\,\epsilon\,. \tag{2.64}
\]
On the other hand, one can simplify the left-hand side using the supersymmetry equations (2.46) to produce expressions in terms of the Maxwell field and its derivatives. Indeed, the result is the "integrability condition" for the supersymmetry equations (2.46), and it provides algebraic relationships between the curvature tensor and the Maxwell field and its derivatives. A careful analysis shows that these integrability conditions imply that almost all of the equations of motion are satisfied. That is, almost all of the Maxwell equations and Einstein equations are satisfied as a result of solving the supersymmetry conditions. There are a number of theorems that cover this issue (see, for example, [6-8]), and generically one needs to supplement the supersymmetry equations with just one (carefully chosen) component of the equations of motion so as to satisfy all of the equations of motion. There are also many circumstances in which solving the supersymmetry equations actually solves all the equations of motion.

One should also note that the converse is not true: solving the equations of motion does not imply supersymmetry. One can easily see that the generic Reissner-Nordström black hole with $m \ne q$ is not supersymmetric. In the example above, the supersymmetry equations related the electric potentials to metric coefficients, but did not fully determine the underlying solution: the fact that $H$ must be harmonic only followed from using a particular equation of motion. We will find something similar with microstate geometries.

The other fundamental consequence of the supersymmetry "squaring to the Hamiltonian" is that, if $\epsilon$ solves the supersymmetry equations, then the vector field
\[
K^\mu = \bar\epsilon\,\gamma^\mu\epsilon = e^\mu{}_a\,\bar\epsilon\,\gamma^a\epsilon \tag{2.65}
\]
is a Killing vector (where $\bar\epsilon \equiv \epsilon^\dagger\gamma_0$ is the Dirac conjugate). The detailed proof depends on the specifics of the supersymmetry equation, but one simply computes $\nabla_\mu K_\nu$ and then uses the supersymmetry equations to replace $\nabla_\mu\epsilon$ and $\nabla_\mu\bar\epsilon$. Upon symmetrization, $\nabla_\mu K_\nu + \nabla_\nu K_\mu$, one usually finds that everything cancels. This has been proven for many supergravity theories coupled to matter, and, in particular, for M-theory (see, for example, [6-8]).

One can indeed verify this for the solution above. In particular, there is only one Killing vector (namely, $\frac{\partial}{\partial t}$) for the metric (2.37). The time-like component of (2.65) is given by
\[
K^0 = e^{\mu=0}{}_{a=0}\,\bar\epsilon\,\gamma^0\epsilon = H\,\epsilon^\dagger\gamma_0\gamma^0\epsilon = -H\,\epsilon^\dagger\epsilon = -\epsilon_0^\dagger\epsilon_0\,, \tag{2.66}
\]
where I have used (2.55) in the last step. The important point is that the time-component of $K^\mu$ must be a constant (and all other components must vanish) for $K^\mu$ to be the Killing vector.
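Both statements are easy to test numerically. The sketch below (numpy assumed, same explicit matrices and conventions as in the previous sketch) checks that, for a spinor obeying the projection (2.63), the spatial components of $K^a = \bar\epsilon\gamma^a\epsilon$ vanish and $K^a$ is strictly time-like, and also that for arbitrary spinors $K^a$ is never space-like, a fact proved in general just below:

```python
import numpy as np

# K^a = bar(eps) gamma^a eps: never space-like; strictly time-like with
# vanishing spatial components once gamma^0 eps = -eps is imposed.
s1 = np.array([[0, 1], [1, 0]], dtype=complex)
s2 = np.array([[0, -1j], [1j, 0]], dtype=complex)
s3 = np.array([[1, 0], [0, -1]], dtype=complex)
I2, Z2 = np.eye(2, dtype=complex), np.zeros((2, 2), dtype=complex)
g = [np.block([[Z2, I2], [I2, Z2]])] + \
    [np.block([[Z2, si], [-si, Z2]]) for si in (s1, s2, s3)]
eta = np.diag([-1.0, 1.0, 1.0, 1.0])
g_low0 = -g[0]                                  # gamma_0 = eta_{00} gamma^0

rng = np.random.default_rng(1)
for _ in range(1000):                           # generic spinors
    eps = rng.normal(size=4) + 1j * rng.normal(size=4)
    K = np.array([(eps.conj() @ g_low0 @ ga @ eps).real for ga in g])
    assert K @ eta @ K <= 1e-10                 # never space-like

eps = rng.normal(size=4) + 1j * rng.normal(size=4)
eps = (np.eye(4) - g[0]) @ eps / 2              # impose (2.63)
K = np.array([(eps.conj() @ g_low0 @ ga @ eps).real for ga in g])
assert np.allclose(K[1:], 0)                    # spatial components vanish
assert K @ eta @ K < 0                          # strictly time-like
print("K^0 =", K[0], "; K^i = 0; eta_ab K^a K^b < 0")
```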
As a practical consideration, the fact that $K^\mu$, defined by (2.65), must be a Killing vector is usually used in reverse: to determine the norm of the spinor, as in (2.55). More generally, we know that (2.65) always defines a time-like or null Killing vector. Indeed, the proof that $K^\mu$ is time-like or null is a relatively straightforward use of the Schwarz inequality on the spin space. To see this, consider the two spinors $\epsilon$ and $\eta$, where
\[
\eta \equiv v_i\,\gamma^0\gamma^i\,\epsilon\,, \tag{2.67}
\]
for some vector $v_i$ in $\mathbb{R}^3$, where the indices, $i$, are spatial frame indices. One can use the properties of the $\gamma$-matrices to show that:
\[
\eta^\dagger\eta = |v|^2\,\epsilon^\dagger\epsilon\,. \tag{2.68}
\]
The Schwarz inequality implies
\[
|v_i\,(\epsilon^\dagger\gamma^0\gamma^i\epsilon)|^2 \le (\epsilon^\dagger\epsilon)\,(\eta^\dagger\eta) = |v|^2\,(\epsilon^\dagger\epsilon)^2\,. \tag{2.69}
\]
Choosing $v_i = (\epsilon^\dagger\gamma^0\gamma^i\epsilon) = (\bar\epsilon\gamma^i\epsilon)$, one gets
\[
|v|^4 \le |v|^2\,(\epsilon^\dagger\epsilon)^2 \quad\Leftrightarrow\quad |(\bar\epsilon\gamma^i\epsilon)|^2 \le (\epsilon^\dagger\epsilon)^2 = (\bar\epsilon\gamma^0\epsilon)^2\,. \tag{2.70}
\]
The last inequality is precisely the inequality
\[
\eta_{ab}\,K^a K^b \le 0\,. \tag{2.71}
\]
It is also amusing to note that there is equality in (2.71) if and only if $\eta$ is proportional to $\epsilon$. This means that $K^\mu$ is null if and only if
\[
v_i\,\gamma^0\gamma^i\,\epsilon = \lambda\,\epsilon\,, \tag{2.72}
\]
for some vector $v_i$ and some number, $\lambda$. This is a projection condition on $\epsilon$ and it is, in fact, incompatible with (2.52). Using this kind of argument, it is not very difficult to determine when the Killing vector (2.65) is time-like or null. In these lectures, $K^\mu$ will always be time-like.

2.5.1 A supersymmetric summary

The take-away messages here are:

• Supersymmetry involves solving first-order equations for spinors on a manifold. The solutions to these equations are called Killing spinors.

• The supersymmetries, or Killing spinors, are usually confined to a subspace of all the spinors on the manifold. This subspace is typically defined by projection conditions on the spinors and is sometimes called the supersymmetry bundle.

• Supersymmetry usually imposes a "BPS condition," in that the overall mass of the system is locked to (some of) the charges of the system.

• Most important: Solving the supersymmetry equations is usually much easier than solving all the equations of motion. This is because the former are a first-order system while the latter are a second-order (non-linear) system. Solving the supersymmetry equations typically solves almost all of the equations of motion.

• If $\epsilon$ is a supersymmetry, then the tensors $T^{\mu_1\dots\mu_k} = \bar\epsilon\,\gamma^{\mu_1}\cdots\gamma^{\mu_k}\epsilon$ have really interesting geometric properties, and in particular $K^\mu = \bar\epsilon\gamma^\mu\epsilon$ is generically a non-space-like Killing vector. The study and classification of these rich tensor structures is a very active programme at Saclay, led by Mariana Graña and Ruben Minasian.

• Most important: Imposing supersymmetry is a "Faustian bargain" for the physics. Computations are much easier and the solutions are relatively simple, but the solutions are necessarily BPS and time-independent, both at the classical and quantum levels. In particular, supersymmetric black-hole solutions have vanishing Hawking temperature.
The hope is that, like super-QCD, there is still important essential physics in the supersymmetric theory that gives valuable insight into the non-supersymmetric theories that underpin the natural world.

2.6 A final footnote

In Section 2.4 I considered a very simple metric Ansatz (2.37). In principle, the most general supersymmetric metric is only required to be time-independent, and so I could have started with the most general time-independent metric, which has the form
\[
ds^2 = -H^{-2}\,(dt + \omega_i\,dy^i)^2 + \gamma_{ij}\,dy^i\,dy^j\,, \tag{2.73}
\]
where $H$, $\omega$ and $\gamma_{ij}$ all depend upon the $y^k$. One can also start with generic electric and magnetic fields. Gibbons and Hull [3] argue that this leads only to the Majumdar-Papapetrou metrics obtained in Section 2.4. While I believe the result is correct, in preparing this course I discovered that their proof appears to contain an error. There is a sentence where they say "Each of the three terms in (27) is non-negative and so each must vanish separately." Unfortunately, the middle term is actually non-positive (and not non-negative). If one sets the magnetic field to zero, then the offending term vanishes. It does, however, make me wonder if they missed some amusing magnetic generalization ... but I doubt it. I leave a careful examination of this as an exercise for the reader.

3 Solitons, horizons and topology

The goal here is to understand how microstate geometries evade the "No solitons without horizons" theorems. Such theorems were rigorously proved in $(3+1)$ dimensions and have generalizations (under implicit and sometimes unstated assumptions) to higher-dimensional theories. The first lesson is therefore that to find non-trivial microstate geometries, one must work in more than four space-time dimensions. We are going to consider a simple version of the "No solitons" theorem in a basic class of supergravity theories in five dimensions. This setting suffices to see how the theorems usually work, and how they can be evaded. Before we start this, it is important to recall some basic facts about asymptotics and charges.

3.1 Interlude: Mass and conserved charges

To investigate the "No Solitons" theorem we are going to have to dissect the definitions of mass, angular momentum and charge for a generic solution in $D$ space-time dimensions. My discussion here draws heavily on the excellent review by Peet [9] and my work with Gary Gibbons [10].

3.1.1 Expansions at infinity

To determine the normalized asymptotic charges for an asymptotically flat metric in a $D$-dimensional space-time one should start from the canonically normalized action:
\[
S = \int d^Dx\,\sqrt{-g}\,\Big(\frac{R}{16\pi G_D} + \mathcal{L}_{\rm matter}\Big)\,, \tag{3.1}
\]
where $G_D$ is the Newton constant. The Einstein equations are:
\[
R_{\mu\nu} - \tfrac12\,R\,g_{\mu\nu} = 8\pi G_D\,T_{\mu\nu}\,, \tag{3.2}
\]
where $T_{\mu\nu}$ is the canonically normalized energy-momentum tensor. The Einstein equations may be rewritten as
\[
R_{\mu\nu} = 8\pi G_D\,\Big(T_{\mu\nu} - \frac{1}{D-2}\,T\,g_{\mu\nu}\Big)\,, \tag{3.3}
\]
where $T$ is the trace of $T_{\mu\nu}$. If one linearizes around a flat metric, using an expansion $g_{\mu\nu} = \eta_{\mu\nu} + h_{\mu\nu}$, then, in an appropriate harmonic gauge, one has
\[
\eta^{\rho\sigma}\partial_\rho\partial_\sigma\,h_{\mu\nu} \approx 16\pi G_D\,\Big(T_{\mu\nu} - \frac{1}{D-2}\,T\,g_{\mu\nu}\Big)\,. \tag{3.4}
\]
If one assumes that the matter is non-relativistic, one can neglect time derivatives and write this as a solution to the Laplace equation on a $(D-1)$-dimensional hypersurface, $\Sigma$:
\[
h_{\mu\nu}(\vec x) \approx \frac{16\pi G_D}{A_{D-2}}\int_\Sigma d^{D-1}\vec y\;\frac{1}{|\vec x - \vec y|^{D-3}}\,\Big(T_{\mu\nu}(|\vec x - \vec y|) - \frac{1}{D-2}\,T(|\vec x - \vec y|)\,g_{\mu\nu}\Big)\,, \tag{3.5}
\]
where $A_{D-2}$ is the volume of a unit $(D-2)$-sphere and is an inherent part of the relevant Green function. For future reference we note that $A_3 = 2\pi^2$. One then recalls that the momentum and angular momentum of the configuration are obtained by various integrals of $T_{\mu\nu}$ over the space-like hypersurface:
\[
P^\mu = \int_\Sigma d^{D-1}x\;T^{\mu 0}\,, \qquad J^{\mu\nu} = \int_\Sigma d^{D-1}x\;\big(x^\mu\,T^{\nu 0} - x^\nu\,T^{\mu 0}\big)\,. \tag{3.6}
\]
By expanding (3.5) one obtains [11,9]:
\[
g_{00} = -1 + \frac{16\pi G_D}{(D-2)\,A_{D-2}}\,\frac{M}{\rho^{D-3}} + \dots\,, \tag{3.7}
\]
\[
g_{ij} = \Big(1 + \frac{16\pi G_D}{(D-2)(D-3)\,A_{D-2}}\,\frac{M}{\rho^{D-3}} + \dots\Big)\,\delta_{ij}\,, \tag{3.8}
\]
\[
g_{0i} = \frac{16\pi G_D}{A_{D-2}}\,\frac{x^j J_{ji}}{\rho^{D-1}} + \dots\,, \tag{3.9}
\]
where $\rho$ is the radial coordinate. More generally, the expansions (3.7)-(3.9) are used to define the asymptotic charges of a generic, asymptotically-flat metric. For Maxwell fields, $F_{\mu\nu}$, one can generalize Gauss' law and integrate $\star F$ over the $(D-2)$-sphere at infinity on $\Sigma$. However, a standard normalization that is commonly used in the literature, and which we will use here, is to take the gauge potential for time-independent solutions to be:
\[
A \sim \frac{Q}{\rho^{D-3}}\,dt\,, \qquad F \sim (D-3)\,\frac{Q}{\rho^{D-2}}\,dt\wedge d\rho\,, \qquad \rho\to\infty\,. \tag{3.10}
\]

3.1.2 Killing vectors and Komar integrals

A Killing vector, $K^\mu$, defines a symmetry of the metric (an isometry). It satisfies the Killing equation
\[
\nabla_\mu K_\nu + \nabla_\nu K_\mu = 0\,. \tag{3.11}
\]
Any single vector can be locally integrated so that it is tangent to a coordinate axis: $K^\mu\frac{\partial}{\partial x^\mu} = \frac{\partial}{\partial v}$, where $v$ is one of the coordinates. In this coordinate frame, (3.11) is simply equivalent to:
\[
\frac{\partial}{\partial v}\,g_{\mu\nu} = 0\,. \tag{3.12}
\]
It follows that, in this frame, the curvature tensors are all independent of $v$, and so
\[
K^\mu\nabla_\mu R = 0\,, \tag{3.13}
\]
where $R$ is the Ricci scalar. Note that this equation is independent of the choice of coordinates. Indeed, one can also show, using (3.11), that a Killing vector satisfies some rather more general identities:
\[
\nabla_\mu\nabla_\nu K_\rho = R_{\sigma\mu\nu\rho}\,K^\sigma\,, \qquad \nabla^\rho\nabla_\rho K_\mu = -R_{\mu\nu}\,K^\nu\,, \tag{3.14}
\]
where $R_{\sigma\mu\nu\rho}$ is the Riemann tensor and $R_{\mu\nu}$ is the Ricci tensor.

Figure 1: The space-like hypersurface $\Sigma$ slicing through black holes and solitonic lumps. The sphere "at infinity," $S^{D-2}$, is taken to be large enough to encompass all the sources and only be sensitive to the leading asymptotics of all fields.

A Killing vector can be used to define conserved currents using the energy-momentum tensor.
That is, if $K^\mu$ is a Killing vector, then the current
\[
\hat J^\mu \equiv K_\nu\,T^{\mu\nu} \tag{3.15}
\]
is necessarily conserved:
\[
\nabla_\mu\hat J^\mu = (\nabla_\mu K_\nu)\,T^{\mu\nu} + K_\nu\,(\nabla_\mu T^{\mu\nu}) = 0\,, \tag{3.16}
\]
where the first term vanishes because of the symmetry of $T^{\mu\nu}$ and (3.11), and the second term vanishes because of the conservation of $T^{\mu\nu}$. For smooth space-times with Killing vectors one can use this to define a corresponding conserved quantity. However, there is a more convenient refinement given by the Komar integral. Consider a space-like hypersurface, $\Sigma$, in an asymptotically flat space-time. Let $S^{(D-2)}$ be the $(D-2)$-sphere at infinity on $\Sigma$. (See Fig. 1.) Consider the Komar integral
\[
I_K = \int_{S^{(D-2)}} \star\,dK\,, \tag{3.17}
\]
where $K = K_\mu\,dx^\mu$ and $\star$ is the Hodge dual in $D$ dimensions. If $\Sigma$ is smooth, and one applies Stokes' theorem, one obtains
\[
I_K = \int_\Sigma d\star dK = -2\int_\Sigma R_{\mu\nu}\,K^\mu n^\nu\,d\Sigma\,, \tag{3.18}
\]
where $n^\nu$ is the unit normal to $\Sigma$ and I have used (3.14). We can now use Einstein's equations, (3.2), to write this as
\[
I_K = -16\pi G_D\int_\Sigma \Big(T_{\mu\nu} - \frac{1}{D-2}\,T\,g_{\mu\nu}\Big)\,K^\mu n^\nu\,d\Sigma\,. \tag{3.19}
\]
If it were not for the trace term, $T$, this would be precisely the charge associated with $\hat J^\mu$, defined in (3.15). Instead, we can actually define the current associated with $K^\mu$ as:
\[
J^\mu \equiv K_\nu\,R^{\mu\nu} = 8\pi G_D\,\Big(T^{\mu\nu} - \frac{1}{D-2}\,T\,g^{\mu\nu}\Big)\,K_\nu\,. \tag{3.20}
\]
It is easy to verify that this is also conserved:
\[
\nabla_\mu J^\mu = (\nabla_\mu K_\nu)\,R^{\mu\nu} + K_\nu\,(\nabla_\mu R^{\mu\nu}) = \tfrac12\,K^\nu\,\nabla_\nu R = 0\,, \tag{3.21}
\]
where the first term vanishes, once again, because of the symmetry of $R^{\mu\nu}$ and (3.11). The second term is simplified using the Bianchi identity:
\[
\nabla_\mu R^{\mu\nu} = \tfrac12\,\nabla^\nu R\,, \tag{3.22}
\]
and the final equality in (3.21) follows from (3.13). Equivalently, one can use the energy-momentum expression in (3.20), and then one needs to know that $K^\nu\nabla_\nu T = 0$, but this follows from (3.13) and the trace of Einstein's equations. The reason why one uses the "improved" current (3.20) is that it can be written in terms of a surface integral, (3.17), and therefore can be generalized to situations in which the interior geometry is not smooth.

So far we have not specified any other properties of the Killing vector, $K^\mu$, and so the Komar integral can give any type of conserved quantity. Suppose that $K^\mu$ is time-like. The Komar mass is then defined by:
\[
M = -\frac{1}{16\pi G_D}\,\frac{(D-2)}{(D-3)}\int_{S^{D-2}} \star\,dK = -\frac{1}{16\pi G_D}\,\frac{(D-2)}{(D-3)}\int_{S^{D-2}} \big(\partial_\mu K_\nu - \partial_\nu K_\mu\big)\,d\Sigma^{\mu\nu}\,. \tag{3.23}
\]
Note that if we take $K^\mu\frac{\partial}{\partial x^\mu} = \frac{\partial}{\partial t}$, then the one-form, $K$, is given by $K = g_{0\nu}\,dx^\nu$, and
\[
\star\,dK = (\partial_\mu g_{0\nu})\,\star(dx^\mu\wedge dx^\nu) \;\to\; (\partial_\rho g_{00})\,\rho^{D-2}\,{\rm Vol}_{S^{D-2}}\,. \tag{3.24}
\]
Using (3.7), one sees that the normalized formula (3.23) does indeed yield the correct asymptotic "Keplerian"/ADM mass.
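This last normalization check is easy to reproduce symbolically. The sketch below (Python with sympy assumed) substitutes the expansion (3.7) into (3.23) via (3.24); the angular integral simply supplies a factor of $A_{D-2}$, and the result is exactly $M$ for any $D > 3$:

```python
import sympy as sp

# Check that the Komar normalization (3.23) reproduces the mass M of
# the asymptotic expansion (3.7), using (3.24) for *dK at infinity.
D, rho, M, G, A = sp.symbols('D rho M G A', positive=True)

g00 = -1 + 16*sp.pi*G*M / ((D - 2)*A*rho**(D - 3))     # eq. (3.7)
surface_int = sp.diff(g00, rho) * rho**(D - 2) * A     # (3.24) integrated over S^{D-2}
komar_mass = -(D - 2)/(D - 3) * surface_int / (16*sp.pi*G)

assert sp.simplify(komar_mass - M) == 0
print("Komar integral gives M for all D:", sp.simplify(komar_mass))
```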
Also note that, in the non-relativistic limit in which $T_{00} \gg |T_{ij}|$ and $|g_{00}| \gg |g_{ij}|$, one has $T \equiv g^{\mu\nu}T_{\mu\nu} \approx g^{00}T_{00} \approx (g_{00})^{-1}T_{00}$, and hence
\[
I_K = -16\pi G_D\int_\Sigma \Big(T_{00} - \frac{1}{D-2}\,T\,g_{00}\Big)\,d\Sigma = -16\pi G_D\int_\Sigma \Big(1 - \frac{1}{D-2}\Big)\,T_{00}\,d\Sigma\,. \tag{3.25}
\]
Thus, in the non-relativistic limit, one also has
\[
M = \int_\Sigma T_{00}\,d\Sigma\,, \tag{3.26}
\]
as one would expect.

3.2 A relatively simple supergravity theory

We will work with ungauged, $N=2$ supergravity coupled to two vector multiplets in five dimensions. This theory contains three vector fields, $A^I$, with field strengths, $F^I \equiv dA^I$, and two independent scalars, which may conveniently be parametrized by the fields $X^I$, $I = 1,2,3$, satisfying the constraint $X^1X^2X^3 = 1$. The bosonic action is
\[
S = \int \sqrt{-g}\,d^5x\,\Big(R - \tfrac12\,Q_{IJ}\,F^I_{\mu\nu}F^{J\,\mu\nu} - Q_{IJ}\,\partial_\mu X^I\,\partial^\mu X^J - \tfrac{1}{24}\,C_{IJK}\,F^I_{\mu\nu}F^J_{\rho\sigma}A^K_\lambda\,\bar\epsilon^{\mu\nu\rho\sigma\lambda}\Big)\,, \tag{3.27}
\]
with $I,J = 1,2,3$. The structure constants are given by $C_{IJK} \equiv |\epsilon_{IJK}|$ and the metric for the kinetic terms is
\[
Q_{IJ} = \tfrac12\,{\rm diag}\big((X^1)^{-2}, (X^2)^{-2}, (X^3)^{-2}\big)\,. \tag{3.28}
\]
One should note the new feature, the Chern-Simons term $F\wedge F\wedge A$, which will be critical to the construction of solitons. We could, in fact, work with "minimal supergravity," that is, with pure $N=2$ supergravity and no extra vector multiplets. This corresponds to taking $A^1 = A^2 = A^3$ and $X^1 = X^2 = X^3 = 1$. Note that, with this choice, the action still contains a non-trivial Chern-Simons term. Going to the minimal theory simplifies the computations a little, but having three vector fields makes the role and structure of the Chern-Simons interaction all the more transparent.

In this section we will not use the supersymmetry explicitly: we will simply consider solutions to the equations of motion obtained from the action (3.27). We are going to seek solitons with a time-independent metric and time-independent matter. (There are well-known families of solitons, called Q-balls [12,13], that have time-dependent matter, but the energy-momentum tensor, and hence the metric, are time-independent.) The most general form of such a metric on a five-dimensional Lorentzian, stationary space-time, $\mathcal{M}_5$, is:
\[
ds^2_5 = -Z^{-2}\,(dt + k)^2 + Z\,ds^2_4\,, \tag{3.29}
\]
where $ds^2_4$ is a general Riemannian metric on a four-dimensional spatial base manifold, $\mathcal{B}$. The "warp factor," $Z$, is a function and $k$ is a vector (one-form) field on $\mathcal{B}$. For later convenience I have added $Z$ as a warp factor in front of $ds^2_4$. The Maxwell fields are time-independent and so may be decomposed into electric and magnetic components:
\[
A^I = -Z_I^{-1}\,(dt + k) + B^{(I)}\,, \tag{3.30}
\]
where $B^{(I)}$ is a one-form on $\mathcal{B}$. It will prove convenient to define the magnetic field strengths:
\[
\Theta^{(I)} \equiv dB^{(I)}\,, \tag{3.31}
\]
each of which is a two-form on $\mathcal{B}$. The Einstein equations coming from (3.27) are:
\[
R_{\mu\nu} - \tfrac12\,g_{\mu\nu}\,R = Q_{IJ}\Big[F^I_{\mu\rho}\,F^{J}{}_\nu{}^\rho - \tfrac14\,g_{\mu\nu}\,F^I_{\rho\sigma}F^{J\,\rho\sigma} + \partial_\mu X^I\,\partial_\nu X^J - \tfrac12\,g_{\mu\nu}\,g^{\rho\sigma}\,\partial_\rho X^I\,\partial_\sigma X^J\Big]\,. \tag{3.32}
\]
Taking traces and rearranging gives the equation:
\[
R_{\mu\nu} = Q_{IJ}\Big[F^I_{\mu\rho}\,F^{J}{}_\nu{}^\rho - \tfrac16\,g_{\mu\nu}\,F^I_{\rho\sigma}F^{J\,\rho\sigma} + \partial_\mu X^I\,\partial_\nu X^J\Big]\,. \tag{3.33}
\]
The Maxwell equations coming from (3.27) are:
\[
\nabla_\rho\big(Q_{IJ}\,F^{J\,\rho}{}_\mu\big) = J^{\rm CS}_{I\,\mu}\,, \tag{3.34}
\]
where the Chern-Simons currents are given by:
\[
J^{\rm CS}_{I\,\mu} \equiv \tfrac{1}{16}\,C_{IJK}\,\epsilon_\mu{}^{\alpha\beta\gamma\delta}\,F^J_{\alpha\beta}\,F^K_{\gamma\delta}\,. \tag{3.35}
\]
One can also easily obtain the equations of motion for the scalars.

3.3 "No solitons without topology"

In addition to assuming that the metric is stationary and that the matter is time-independent, I now assume that the five-dimensional space-time, $\mathcal{M}_5$, is smooth, horizonless and asymptotic to Minkowski space at infinity. In particular, I will require the space-time to be sectioned, at fixed times, by smooth, space-like hypersurfaces, $\Sigma$. The goal is to study the properties of the Komar mass, and the strategy of the "No Solitons" theorem is to argue that the mass, $M$, must be zero. One then leverages this to show that all the dynamical fields must be trivial and that the solution must, in fact, be Minkowski space globally.

The first step is to massage the equations of motion into a more useful form. Define the duals of the Maxwell fields, $G_I$, via
\[
G_{I\,\rho\mu\nu} \equiv \tfrac12\,Q_{IJ}\,F^{J\,\alpha\beta}\,\epsilon_{\alpha\beta\rho\mu\nu}\,, \tag{3.36}
\]
and introduce the inverse, $Q^{IJ}$, of $Q_{IJ}$:
\[
Q^{IJ}\,Q_{JK} = \delta^I_K\,. \tag{3.37}
\]
It follows from the Bianchi identities ($dF^J = 0$) for $F^J_{\mu\nu}$ that $G_J$ satisfies:
\[
\nabla_\rho\big(Q^{IJ}\,G_J{}^{\mu\nu\rho}\big) = 0\,. \tag{3.38}
\]
Similarly, from the equations of motion (3.34) for $F^J_{\mu\nu}$ one has
\[
\nabla_{[\lambda}G_{|I|\,\rho\mu\nu]} = \tfrac38\,C_{IJK}\,F^J_{[\lambda\rho}\,F^K_{\mu\nu]} \quad\Leftrightarrow\quad dG_I = \tfrac14\,C_{IJK}\,F^J\wedge F^K\,, \tag{3.39}
\]
where $|I|$ means that the index $I$ is not involved in the skew-symmetrization bracket $[\dots]$. One can easily verify that
\[
Q^{IJ}\,G_{I}{}^{\mu\rho\sigma}\,G_{J\,\nu\rho\sigma} = Q_{IJ}\,\big(2\,F^{I\,\mu\rho}\,F^J_{\nu\rho} - \delta^\mu_\nu\,F^I_{\rho\sigma}F^{J\,\rho\sigma}\big)\,, \tag{3.40}
\]
and so we may rewrite the Einstein equation (3.33) as
\[
R_{\mu\nu} = Q_{IJ}\Big[\tfrac23\,F^I_{\mu\rho}\,F^{J}{}_\nu{}^\rho + \partial_\mu X^I\,\partial_\nu X^J\Big] + \tfrac16\,Q^{IJ}\,G_{I\,\mu\rho\sigma}\,G_J{}_\nu{}^{\rho\sigma}\,. \tag{3.41}
\]
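The identity (3.40) is a pure $\epsilon$-contraction exercise, and one can verify it numerically. The sketch below (numpy assumed) fixes the conventions $\epsilon_{01234} = +1$ and $\eta = {\rm diag}(-1,1,1,1,1)$ (assumptions of this sketch; the identity is insensitive to the choice as long as it is applied consistently) and checks (3.40) for random antisymmetric field strengths and a random positive diagonal $Q_{IJ}$:

```python
import itertools
import numpy as np

# Numerical check of the duality identity (3.40).
rng = np.random.default_rng(2)
eta = np.diag([-1.0, 1.0, 1.0, 1.0, 1.0])

eps = np.zeros((5,) * 5)
for p in itertools.permutations(range(5)):
    eps[p] = np.linalg.det(np.eye(5)[list(p)])    # sign of the permutation

Q = np.diag(rng.uniform(0.5, 2.0, size=3))        # Q_{IJ}
Qinv = np.linalg.inv(Q)

F = rng.normal(size=(3, 5, 5))
F = F - np.swapaxes(F, 1, 2)                      # F^I_{mu nu}, antisymmetric
Fup = np.einsum('ma,nb,Iab->Imn', eta, eta, F)    # indices raised with eta

# G_{I rho mu nu} = (1/2) Q_{IJ} F^{J ab} eps_{ab rho mu nu}, eq. (3.36)
G = 0.5 * np.einsum('IJ,Jab,abrmn->Irmn', Q, Fup, eps)
Gup = np.einsum('ra,mb,nc,Iabc->Irmn', eta, eta, eta, G)

# Left side: Q^{IJ} G_I^{mu rho sig} G_{J nu rho sig} (free mu up, nu down)
lhs = np.einsum('IJ,Imrs,Jnrs->mn', Qinv, Gup, G)
rhs = 2 * np.einsum('IJ,Imr,Jnr->mn', Q, Fup, F) \
      - np.eye(5) * np.einsum('IJ,Irs,Jrs->', Q, F, Fup)
assert np.allclose(lhs, rhs)
print("identity (3.40) verified numerically")
```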
Since I am assuming that the matter is time-independent, the Lie derivatives of all the fields along $K^\mu$ must vanish:
\[
\mathcal{L}_K F^I = 0\,, \qquad \mathcal{L}_K G_I = 0\,, \qquad \mathcal{L}_K X^I = 0\,. \tag{3.42}
\]
Cartan's formula states that, for a $p$-form, $\alpha$, one has
\[
\mathcal{L}_K\alpha = d\big(i_K(\alpha)\big) + i_K(d\alpha)\,. \tag{3.43}
\]
Taking $\alpha = F^I$ one has, locally,
\[
K^\rho F^I_{\rho\mu} = \partial_\mu\lambda^I\,, \tag{3.44}
\]
for some functions $\lambda^I$. If the space-time manifold were not simply connected one could, in principle, encounter jumps in the value of the $\lambda^I$ if one were to integrate (3.44) around a closed curve. To avoid this issue I will, from now on, assume that our space-time manifold is simply connected. With this assumption, the arbitrary constants in the definitions of the functions, $\lambda^I$, may be fixed by requiring that the $\lambda^I$ vanish at infinity. Physically, the functions, $\lambda^I$, are the electrostatic potentials of the two-forms, $F^I$. Taking $\alpha = G_I$ one has
\[
d\big(i_K(G_I)\big) = -i_K(dG_I) = -\tfrac14\,C_{ILM}\,i_K\big(F^L\wedge F^M\big) = -\tfrac12\,C_{ILM}\,d\lambda^L\wedge F^M = -\tfrac12\,C_{ILM}\,d\big(\lambda^L F^M\big)\,, \tag{3.45}
\]
where I have used (3.39) and (3.44). The assumption of simple connectivity is a weak one because we could pass to a covering space. However, one cannot assume that $H^2(\mathcal{M}_5)$ is trivial. Indeed, this is the crucial issue that makes solitons possible. For the moment, however, I will assume that $H^2(\mathcal{M}_5)$ is trivial, and hence
\[
K^\rho\,G_{I\,\rho\mu\nu} + \tfrac12\,C_{IJK}\,\lambda^J F^K_{\mu\nu} = \partial_\mu\Lambda_{I\,\nu} - \partial_\nu\Lambda_{I\,\mu}\,, \tag{3.46}
\]
where the $\Lambda_I$ are globally defined one-forms. Using (3.44) and (3.46) one finds that:
\[
K^\mu\big(Q_{IJ}\,F^I_{\mu\rho}\,F^{J}{}_\nu{}^\rho\big) = -\nabla^\rho\big(Q_{IJ}\,\lambda^I F^J_{\rho\nu}\big) + \tfrac{1}{16}\,C_{IJK}\,\epsilon_\nu{}^{\alpha\beta\gamma\delta}\,\lambda^I F^J_{\alpha\beta}\,F^K_{\gamma\delta}\,, \tag{3.47}
\]
\[
K^\mu\big(Q^{IJ}\,G_{I\,\mu\rho\sigma}\,G_J{}_\nu{}^{\rho\sigma}\big) = -2\,\nabla^\rho\big(Q^{IJ}\,\Lambda_I{}^\sigma\,G_{J\,\rho\nu\sigma}\big) - \tfrac14\,C_{IJK}\,\epsilon_\nu{}^{\alpha\beta\gamma\delta}\,\lambda^I F^J_{\alpha\beta}\,F^K_{\gamma\delta}\,, \tag{3.48}
\]
and hence Einstein's equations (3.41) imply:
\[
K^\mu R_{\mu\nu} = -\tfrac13\,\nabla^\mu\Big[\,2\,Q_{IJ}\,\lambda^I F^J_{\mu\nu} + Q^{IJ}\,\Lambda_I{}^\sigma\,G_{J\,\mu\nu\sigma}\Big]\,, \tag{3.49}
\]
where I have used $K^\mu\partial_\mu X^I = \mathcal{L}_K X^I = 0$. Note that the $\lambda\,(\star F\wedge F)$ terms have canceled in (3.49).

The whole point is that the mass of the solution is given by
\[
M = {\rm const.}\int_\Sigma R_{\mu\nu}\,K^\mu n^\nu\,d\Sigma = {\rm const.}\int_\Sigma \nabla_\mu\mathcal{X}^\mu\,d\Sigma\,, \tag{3.50}
\]
where $\mathcal{X}^\mu$ can be read off from (3.49). This means that the mass is given by a pure boundary term. Moreover, all the fields $\lambda^I$, $F^J$ and $G_J$ fall off at infinity too fast for there to be a finite boundary term, and thus one finds that
\[
M = 0\,. \tag{3.51}
\]
Now consider the volume integral over $\Sigma$ in (3.50). Since $M = 0$, one now has
\[
0 = \int_\Sigma R_{00}\,d\Sigma\,. \tag{3.52}
\]
Now remember that the metric on $\Sigma$ is positive definite. This means that not only is the measure in (3.52) positive definite, but also that all the terms in the 00-component of (3.41) are positive definite:
\[
R_{00} = Q_{IJ}\Big[\tfrac23\,F^I_{0\rho}\,F^{J}{}_0{}^\rho\Big] + \tfrac16\,Q^{IJ}\,G_{I\,0\rho\sigma}\,G_J{}_0{}^{\rho\sigma} \ge 0\,. \tag{3.53}
\]
(In writing this I have dropped all the terms involving $\partial_0 X^I = \partial_t X^I = 0$ because $X^I$ is time-independent.) As a result, one learns that
\[
G_{I\,0\rho\sigma} = 0\,, \qquad F^I_{0\rho} = 0\,. \tag{3.54}
\]
Since $G_I$ is the dual of $F^I$, the first equation means that $F^I_{ij} = 0$, and hence
\[
F^I_{\mu\nu} \equiv 0\,. \tag{3.55}
\]
Finally, (3.27) reveals that only the $F^I$ source the scalars, and since the $F^I$ vanish, one has
\[
\partial_\mu\Big[\sqrt{|g|}\,g^{\mu\nu}\,Q_{IJ}\,\partial_\nu X^I\Big] = 0\,. \tag{3.56}
\]
Since the fields are time-independent, this is a negative-definite scalar Laplacian on $\Sigma$. The only non-singular solutions that go to a constant at infinity are thus the solutions with
\[
X^I = {\rm const.} \tag{3.57}
\]
One therefore concludes that all the dynamical fields are trivial and the space-time is simply Minkowski space.

Comments: (i) The basic structure of all these theorems is to argue that, if $\Sigma$ is smooth, then the Komar mass density is always a total derivative and hence the mass vanishes. One can then invoke positive-mass theorems that tell us that if the matter satisfies the dominant energy condition and the space-time is asymptotic to Minkowski space and has $M = 0$, then it can only be global Minkowski space. Rather than invoking the sledge-hammer of positive-mass theorems, I established the result by using the details of the equations of motion. (ii) The "No solitons" theorem also depends upon $\Sigma$ being smooth, with a positive definite induced metric.
If there are event horizons, then the hypersurface becomes null at the horizons and singularities form in the interior of the horizons. One can repair the "no solitons" theorem by excising the horizons and introducing interior boundaries on the horizons. The integral for $M$ then gets boundary contributions from these horizons. When this is unpacked, one gets an expression for $M$ (the internal energy of the system) in terms of the "classical thermodynamic variables" of each black hole: the horizon area (entropy), surface gravity (temperature), angular momentum, angular velocity of the horizon, charge, electrostatic potential, .... For more details, see [14,15]. This was believed to be the only way to get solitons, and hence the original mantra of "No solitons without horizons." Our purpose here is to obtain smooth, horizonless solitons, and so the only boundary of $\Sigma$ is at infinity.

3.4 Supporting mass with topology

From the way I presented the theorem, it is pretty evident how one gets around "No solitons without horizons": there can only be massive solitons when $H^2(\mathcal{M}_5)$ is non-trivial. Indeed, the correct form of (3.46) is
\[
K^\rho\,G_{I\,\rho\mu\nu} + \tfrac12\,C_{IJK}\,\lambda^J F^K_{\mu\nu} = \partial_\mu\Lambda_{I\,\nu} - \partial_\nu\Lambda_{I\,\mu} + H_{I\,\mu\nu}\,, \tag{3.58}
\]
where the $\Lambda_I$ are globally defined one-forms and the $H_I$ are closed but not exact two-forms. That is, one cannot write $H_I = d\nu_I$ where the $\nu_I$ are globally well-defined one-forms. This then means that (3.49) has an extra term:
\[
K^\mu R_{\mu\nu} = -\tfrac13\,\nabla^\mu\Big[\,2\,Q_{IJ}\,\lambda^I F^J_{\mu\nu} + Q^{IJ}\,\Lambda_I{}^\sigma\,G_{J\,\mu\nu\sigma}\Big] + \tfrac16\,Q^{IJ}\,H_I^{\rho\sigma}\,G_{J\,\rho\sigma\nu}\,. \tag{3.59}
\]
The last term in (3.59) may be expressed as
\[
\tfrac16\,Q^{IJ}\,H_I^{\rho\sigma}\,G_{J\,\rho\sigma\nu} = \tfrac{1}{12}\,\epsilon^{\alpha\beta\rho\sigma}{}_\nu\,F^I_{\alpha\beta}\,H_{I\,\rho\sigma}\,. \tag{3.60}
\]
This means that, rather than finding $M = 0$, one finds that $M$ can be supported by a topological integral:
\[
M = \frac{1}{32\pi G_5}\int_\Sigma \Big[Q^{IJ}\,H_I^{\rho\sigma}\,G_{J\,\rho\sigma\nu}\Big]\,d\Sigma^\nu = \frac{1}{64\pi G_5}\int_\Sigma \epsilon^{\alpha\beta\rho\sigma}{}_\nu\,F^I_{\alpha\beta}\,H_{I\,\rho\sigma}\,d\Sigma^\nu = \frac{1}{16\pi G_5}\int_\Sigma F^I\wedge H_I\,. \tag{3.61}
\]

3.5 The BPS equations

Before leaving the discussion of this particular supergravity theory, I will summarize the BPS equations that arise from requiring supersymmetry. The first detailed analysis of the BPS equations was done in [6]; however, this first work was incomplete in that it missed a major and essential simplification that was subsequently discovered in [16,17].

Supersymmetry necessarily makes the metric stationary and the other fields time-independent. One can therefore take the metric to have the form (3.29). One can similarly decompose the Maxwell fields according to (3.30) and use the definition (3.31). If one seeks the solutions that possess four supersymmetries, one first finds that the scalars and warp factors are directly related to the electrostatic potentials:
\[
Z \equiv \big(Z_1 Z_2 Z_3\big)^{1/3}\,, \qquad X^1 = \Big(\frac{Z_2 Z_3}{Z_1^2}\Big)^{1/3}\,, \quad X^2 = \Big(\frac{Z_1 Z_3}{Z_2^2}\Big)^{1/3}\,, \quad X^3 = \Big(\frac{Z_1 Z_2}{Z_3^2}\Big)^{1/3}\,. \tag{3.62}
\]
This is, once again, a BPS constraint, and the expression for $Z$ means that the mass, $M$, is given by the sum of the electric charges: $M = Q_1 + Q_2 + Q_3$. Requiring four supersymmetries also imposes that the metric, $ds^2_4$, on the base, $\mathcal{B}$, be hyper-Kähler.
Finally, the complete system of BPS equations and field equations can then be reduced to solving the following system [16,17]:
\[
\Theta^{(I)} = \star_4\,\Theta^{(I)}\,, \tag{3.63}
\]
\[
\nabla^2 Z_I = \tfrac12\,C_{IJK}\,\star_4\big(\Theta^{(J)}\wedge\Theta^{(K)}\big)\,, \tag{3.64}
\]
\[
dk + \star_4\,dk = Z_I\,\Theta^{(I)}\,, \tag{3.65}
\]
where $\star_4$ is the Hodge dual taken with respect to the four-dimensional metric, $ds^2_4$, and $\nabla$ is the covariant derivative in this metric. Solving this system on a hyper-Kähler base, $\mathcal{B}$, yields the most general solutions to the supergravity action with four supersymmetries.

While we will only need the foregoing details, it is interesting to note how this comes about in solving the supersymmetry conditions. In particular, the solution requires the supersymmetries to satisfy:
\[
\Theta^{(I)}_{ab}\,\gamma^a\gamma^b\,\epsilon = 0\,, \qquad \hat R_{\alpha\beta ab}\,\gamma^a\gamma^b\,\epsilon = 0\,, \tag{3.66}
\]
where $a, b, c, d, \dots$ and $\alpha, \beta, \dots$ are, respectively, frame and tangent-space indices on $\mathcal{B}$, and $\hat R_{\alpha\beta ab}$ is the Riemann tensor on $\mathcal{B}$. One can solve these equations by imposing the projection condition:
\[
\gamma^1\gamma^2\gamma^3\gamma^4\,\epsilon = \epsilon \quad\Leftrightarrow\quad \gamma^a\gamma^b\,\epsilon = -\tfrac12\,\epsilon^{abcd}\,\gamma^c\gamma^d\,\epsilon\,, \tag{3.67}
\]
and imposing (3.63) and
\[
\hat R_{abcd} = \tfrac12\,\epsilon_{cdef}\,\hat R_{abef}\,. \tag{3.68}
\]
Note that $\gamma^1\gamma^2\gamma^3\gamma^4$ is traceless and satisfies:
\[
\big(\gamma^1\gamma^2\gamma^3\gamma^4\big)^2 = \mathbb{1}\,. \tag{3.69}
\]
Thus (3.67) reduces the supersymmetries by half: the solutions of (3.63)-(3.65) thus lead to BPS solutions with 4 supersymmetries (we started with $N=2$ supersymmetry, which, in five dimensions, means eight real supercharges).

Let $\hat\nabla_\alpha$ be the covariant derivative on $\mathcal{B}$. For any spinor one has:
\[
\big[\hat\nabla_\alpha, \hat\nabla_\beta\big]\,\epsilon = \tfrac14\,\hat R_{\alpha\beta\gamma\delta}\,\gamma^\gamma\gamma^\delta\,\epsilon\,, \tag{3.70}
\]
and so, for the spinors satisfying (3.67), and hence (3.66), the right-hand side vanishes. This is the integrability condition for
\[
\hat\nabla_\alpha\,\epsilon = 0\,, \tag{3.71}
\]
and hence the Killing spinors are covariantly constant on $\mathcal{B}$. Recall that the Riemann curvature measures the monodromy for parallel transport around closed loops, and on the 4-manifold, $\mathcal{B}$, the generic monodromy is $SO(4) = (SU(2)\times SU(2))/\mathbb{Z}_2$. The self-duality constraint on the curvature means that the metric, $ds^2_4$, must be "half-flat," that is, have trivial monodromy in one $SU(2)$ factor. This allows the solution of (3.71) on the spin bundle with trivial monodromy. If there is trivial monodromy in both $SU(2)$ factors then the Riemann tensor vanishes and the manifold is flat. More generally, one can allow non-trivial monodromy in one $SU(2)$ factor, and such metrics are hyper-Kähler.
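The equivalence in (3.67) is easy to check with explicit matrices. The sketch below (numpy assumed) uses one standard choice of Euclidean gamma matrices on $\mathcal{B}$, with $\{\gamma^a,\gamma^b\} = 2\delta^{ab}$ and $\epsilon_{1234} = +1$; these base-space conventions are assumptions of the sketch, since the lectures do not fix them here. It verifies (3.69) and the duality relation in (3.67) on the projected spinors:

```python
import itertools
import numpy as np

# gamma_a = [[0, s_a], [s_a^dag, 0]] with s_a = (sigma_1, sigma_2,
# sigma_3, i*1l) gives a Euclidean Clifford algebra {g_a, g_b} = 2 delta.
s = [np.array([[0, 1], [1, 0]], dtype=complex),
     np.array([[0, -1j], [1j, 0]], dtype=complex),
     np.array([[1, 0], [0, -1]], dtype=complex),
     1j * np.eye(2, dtype=complex)]
Z2 = np.zeros((2, 2), dtype=complex)
g = [np.block([[Z2, sa], [sa.conj().T, Z2]]) for sa in s]
for a in range(4):
    for b in range(4):
        assert np.allclose(g[a] @ g[b] + g[b] @ g[a],
                           2 * (a == b) * np.eye(4))

g5 = g[0] @ g[1] @ g[2] @ g[3]                   # gamma^1 gamma^2 gamma^3 gamma^4
assert abs(np.trace(g5)) < 1e-12                 # traceless
assert np.allclose(g5 @ g5, np.eye(4))           # eq. (3.69)

eps4 = np.zeros((4,) * 4)                        # eps_{1234} = +1
for p in itertools.permutations(range(4)):
    eps4[p] = np.linalg.det(np.eye(4)[list(p)])

# On the +1 eigenspace of g5, check the duality relation of (3.67)
# for a != b (the a = b case is trivial by the Clifford algebra).
rng = np.random.default_rng(3)
spinor = (np.eye(4) + g5) @ (rng.normal(size=4) + 1j*rng.normal(size=4)) / 2
for a in range(4):
    for b in range(4):
        if a == b:
            continue
        lhs = g[a] @ g[b] @ spinor
        rhs = -0.5 * sum(eps4[a, b, c, d] * (g[c] @ g[d] @ spinor)
                         for c in range(4) for d in range(4))
        assert np.allclose(lhs, rhs)
print("projection condition (3.67) and (3.69) verified")
```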
There are many things to note about the BPS system. First, observe that it is, in fact, a linear system of equations. (This fact was not discovered in [6] but was shown with some more deconstruction in [16,17].) The form of the BPS equations (3.63)-(3.65) descends from the non-linear form of the Maxwell equations, $d\star F \sim F\wedge F$. However, (3.63) is linear and homogeneous, while (3.64) and (3.65) are linear but with sources that are quadratic in the solutions to the preceding equations. The entire system is basically some variant of four-dimensional, Euclidean electromagnetism. Next, observe that equations (3.63) and (3.65) are first order while (3.64) is second order. The former come from imposing the supersymmetry conditions while the latter is actually one of the equations of motion.

Observe that one can take $\Theta^{(I)} \equiv 0$, and then one has $\nabla^2 Z_I = 0$, which means that the $Z_I$ are harmonic. Similarly, $k$ then reduces to a harmonic vector field. If these are to fall off at infinity, then one must have singular sources on $\mathcal{B}$, and this will lead to multi-black-hole solutions. The choice of harmonic $k$ means that one can add angular momentum to these five-dimensional black holes while still preserving supersymmetry. There are thus no solitons if $\Theta^{(I)} \equiv 0$.

The magnetic fluxes, by definition (see (3.31)), satisfy $d\Theta^{(I)} = 0$, and so (3.63) implies $d\star_4\Theta^{(I)} = 0$. Thus the $\Theta^{(I)}$ are harmonic, and if they are to be smooth, they must be cohomological, that is, belong to $H^2(\mathcal{B})$. It is then through (3.64) that they contribute quadratically to the $Z_I$, and then through (3.62) to $Z$, and hence to $g_{00}$ and the mass, $M$. One thus sees very explicitly how (3.61) is being implemented in the BPS equations. Conversely, one sees that if $H^2(\mathcal{B})$ is trivial, then there are no smooth magnetic sources, which means $\Theta^{(I)} \equiv 0$, which leads back to either multi-black-holes or empty space. One also sees, in (3.64), how the Chern-Simons interaction in $d\star F \sim F\wedge F$ enables a pair of magnetic fluxes to combine to source an electric charge. Thus the BPS solutions based on smooth cohomological fluxes do not have any point-source electric charges. Instead, the electric charge sources are smoothly distributed into cohomological fluxes.

The last BPS equation, (3.65), also has a very interesting physical meaning. The vector $k$ contains the information about the angular momentum of the solution (see (3.9)). The source is the product of electric potentials, $Z_I$, and magnetic fluxes, $\Theta^{(I)}$, and so should be thought of as an analogue of $\vec E\times\vec B$ in electrodynamics. Thus the angular momentum is generated by a three-way interaction: two smooth magnetic fields, $\Theta^{(J)}$ and $\Theta^{(K)}$, create a smooth electrostatic field, $Z_I$, which in turn creates angular momentum when it interacts with the third smooth magnetic field, $\Theta^{(I)}$. The mechanism is the same as the $\vec E\times\vec B$ interaction that generates angular momentum for an electron in the presence of a magnetic monopole.

To conclude, it is important to note that I may have just built a "castle in the air," in that it appears that all this beautiful BPS structure cannot lead to smooth solitonic solutions. The starting point of BPS solutions is to choose a hyper-Kähler metric, $ds^2_4$, on $\mathcal{B}$. To obtain a solution that is asymptotic, at infinity, to Minkowski space, one must therefore require $ds^2_4$ to be asymptotic to the flat metric on $\mathbb{R}^4$. In the early 1980's there was a goal to classify all Riemannian metrics in four dimensions with self-dual curvature: these metrics were known as gravitational instantons. One of the by-products of the proofs of the positive-mass theorems [18-21] was to establish that there are no asymptotically Euclidean gravitational instantons. That is, the only smooth, hyper-Kähler, Riemannian metric that is asymptotic to $\mathbb{R}^4$ is $\mathbb{R}^4$ itself with its flat metric. Thus there can be no topology, and seemingly no solitons .... However, we will see that the mathematical universe has much richer possibilities:
even in the face of such a discouraging theorem, there are indeed solitons. Like many "no go" theorems, one can evade it by weakening some of the assumptions.

4 Microstate geometries in five dimensions

We need non-trivial hyper-Kähler metrics on the four-dimensional base manifold, $\mathcal{B}$, and to that end we will start with one of the most useful and explicit examples of such metrics in four dimensions: the Gibbons-Hawking ALE metrics [22]. These were created as part of the gravitational-instanton program, and they are not ruled out by the theorems of Schoen, Yau and Witten because, while they are flat at infinity, they are not Euclidean at infinity. Indeed, as we will discuss, the asymptotic structure is $\mathbb{R}^4/\mathbb{Z}_n$. My presentation here draws heavily on the review article [23].

4.1 Gibbons-Hawking metrics

Gibbons-Hawking (GH) ALE spaces are non-trivial $U(1)$ fibrations over a flat $\mathbb{R}^3$ base:
\[
ds^2_4 = V^{-1}\,\big(d\psi + \vec A\cdot d\vec y\big)^2 + V\,\big(dy_1^2 + dy_2^2 + dy_3^2\big)\,, \tag{4.1}
\]
where $V$ is harmonic on the flat $\mathbb{R}^3$:
\[
\nabla^2 V = 0\,, \tag{4.2}
\]
while the connection, $A = \vec A\cdot d\vec y$, is related to $V$ via
\[
\vec\nabla\times\vec A = \vec\nabla V\,. \tag{4.3}
\]
The scaling transformation $V\to\lambda^2 V$, $A\to\lambda^2 A$, $y^i\to\lambda^{-1}y^i$, $\psi\to\lambda\psi$ preserves (4.1)-(4.3). We will fix this scaling freedom by fixing the period of the $\psi$ coordinate:
\[
\psi \equiv \psi + 4\pi\,. \tag{4.4}
\]
This family of metrics is the unique class of four-dimensional hyper-Kähler metrics with a tri-holomorphic $U(1)$ isometry. (Tri-holomorphic means that the $U(1)$ isometry preserves all three complex structures of the hyper-Kähler metric.) Moreover, a four-dimensional hyper-Kähler manifold with a $U(1)\times U(1)$ symmetry must, at least locally, have the Gibbons-Hawking form with an extra $U(1)$ symmetry around an axis in the $\mathbb{R}^3$ [24]. Perhaps more usefully, these GH spaces provide a very explicit and easily analyzed family of hyper-Kähler metrics. The standard form is to take $V$ to be sourced at discrete points, $\vec y^{(j)}$, in the $\mathbb{R}^3$:
\[
V = \varepsilon_0 + \sum_{j=1}^{N}\frac{q_j}{r_j}\,, \qquad r_j \equiv |\vec y - \vec y^{(j)}|\,. \tag{4.5}
\]
For the metric to be Riemannian (positive definite), one must take $\varepsilon_0 \ge 0$ and $q_j \ge 0$.

Exercise: Compute the curvature of this GH metric and show that it satisfies (3.68). (The sign will only work out correctly if you choose the correct orientation on the manifold.)

To determine $\vec A$, we need the vector fields, $\vec v_i$, that satisfy:
\[
\vec\nabla\times\vec v_i = \vec\nabla\Big(\frac{1}{r_i}\Big)\,. \tag{4.6}
\]
Choose coordinates, $\vec y = (x, y, z)$, so that $\vec y^{(i)} = (0, 0, a)$, and let $\phi$ denote the polar angle in the $(x,y)$-plane; then:
\[
\vec v_i\cdot d\vec y = \Big(\frac{(z-a)}{r_i} + c_i\Big)\,d\phi\,, \tag{4.7}
\]
where $c_i$ is a constant. The vector field, $\vec v_i$, is regular away from the $z$-axis but has a Dirac string along the $z$-axis. By choosing $c_i$ we can cancel the string along the positive or negative $z$-axis, and by moving the axis we can arrange these strings to run in any direction we choose, but they must start or finish at some $\vec y^{(i)}$, or run out to infinity.
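Equation (4.7) is easily verified with a computer algebra system. The following sketch (sympy assumed) writes $\vec v_i$ in Cartesian components, using $d\phi = (x\,dy - y\,dx)/(x^2+y^2)$, and checks (4.6) directly; note that the string constant $c_i$ drops out of the curl:

```python
import sympy as sp

# Symbolic check of (4.6)-(4.7): curl(v_i) = grad(1/r_i).
x, y, z, a, c = sp.symbols('x y z a c', real=True)
r = sp.sqrt(x**2 + y**2 + (z - a)**2)
f = (z - a)/r + c
rho2 = x**2 + y**2

v = sp.Matrix([-f*y/rho2, f*x/rho2, 0])        # ((z-a)/r + c) dphi in components

curl = sp.Matrix([sp.diff(v[2], y) - sp.diff(v[1], z),
                  sp.diff(v[0], z) - sp.diff(v[2], x),
                  sp.diff(v[1], x) - sp.diff(v[0], y)])
grad = sp.Matrix([sp.diff(1/r, s) for s in (x, y, z)])

assert sp.simplify(curl - grad) == sp.zeros(3, 1)
print("(4.7) satisfies curl v_i = grad(1/r_i); c_i drops out")
```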
There appear to be singularities in the metric at $r_j = 0$. However, if one changes to polar coordinates, $(\hat r, \hat\theta, \hat\phi)$, centered at $r_j = 0$, the metric limits to the form:
\[
ds^2_4 \sim q_j^{-1}\,\hat r\,\big(d\psi + q_j\cos\hat\theta\,d\hat\phi\big)^2 + q_j\,\hat r^{-1}\,\big(d\hat r^2 + \hat r^2 d\hat\theta^2 + \hat r^2\sin^2\hat\theta\,d\hat\phi^2\big)
= q_j\Big[\hat r\,\big(q_j^{-1}d\psi + \cos\hat\theta\,d\hat\phi\big)^2 + \hat r^{-1}\big(d\hat r^2 + \hat r^2 d\hat\theta^2 + \hat r^2\sin^2\hat\theta\,d\hat\phi^2\big)\Big]\,, \tag{4.8}
\]
where we have used the fact that, near $r_j = 0$, one has $V \sim \frac{q_j}{r_j} + {\rm const.}$, for which the solution to (4.3) gives:
\[
A = q_j\cos\hat\theta\,d\hat\phi\,. \tag{4.9}
\]
Now choose a new radial coordinate
\[
\rho = 2\sqrt{\hat r} = 2\sqrt{|\vec y - \vec y^{(j)}|}\,; \tag{4.10}
\]
then the metric is locally of the form:
\[
ds^2_4 \sim q_j\,\big(d\rho^2 + \rho^2\,d\Omega^2_{3,q_j}\big)\,, \tag{4.11}
\]
where
\[
d\Omega^2_{3,q_j} \equiv \big(q_j^{-1}d\psi + \cos\hat\theta\,d\hat\phi\big)^2 + d\hat\theta^2 + \sin^2\hat\theta\,d\hat\phi^2\,. \tag{4.12}
\]
Define $\chi = q_j^{-1}\psi$ and observe that
\[
d\Omega^2_3 \equiv \big(d\chi + \cos\theta\,d\phi\big)^2 + d\theta^2 + \sin^2\theta\,d\phi^2 \tag{4.13}
\]
is the metric on the $S^3$ defined by $|\zeta_1|^2 + |\zeta_2|^2 = 1$ in $\mathbb{C}^2$, with
\[
\zeta_1 = e^{\frac{i}{2}(\chi - \phi)}\cos\tfrac{\theta}{2}\,, \qquad \zeta_2 = e^{\frac{i}{2}(\chi + \phi)}\sin\tfrac{\theta}{2}\,. \tag{4.14}
\]
To fully cover the sphere one must have:
\[
0 \le \chi \le 4\pi\,, \qquad 0 \le \theta \le \pi\,, \qquad 0 \le \phi \le 2\pi\,. \tag{4.15}
\]
However, (4.12) and (4.4) imply that a precise matching to (4.13) requires
\[
\chi \equiv q_j^{-1}\psi \equiv q_j^{-1}(\psi + 4\pi) \equiv \chi + \frac{4\pi}{q_j}\,. \tag{4.16}
\]
Thus the metric (4.12) is that of $S^3/\mathbb{Z}_{q_j}$, in which the quotient is taken on the Hopf fiber, $\chi$, according to (4.16).

Exercise: Check the claims underlying (4.13)-(4.15).

For such a quotient to be well-defined, one must have:
\[
q_j \in \mathbb{Z}\,. \tag{4.17}
\]
If $q_j = \pm1$ then there are no identifications on the $S^3$ and the metric in the region around $r_j \to 0$ reduces to that of flat $\mathbb{R}^4$. If $|q_j| > 1$ then $r_j = 0$ is an orbifold point. Such singularities are nicely resolved in string theory and are thus acceptable singularities of manifolds. Also note that at infinity one has:
\[
V \sim \varepsilon_0 + \frac{q_0}{r}\,, \qquad q_0 \equiv \sum_{j=1}^{N} q_j\,. \tag{4.18}
\]
If $\varepsilon_0 \ne 0$ then the metric is asymptotic to $\mathbb{R}^3\times S^1$ and the GH metrics are known as multi-Taub-NUT. If $\varepsilon_0 = 0$ then the metric is asymptotic to $\mathbb{R}^4/\mathbb{Z}_{q_0}$, where the $\mathbb{Z}_{q_0}$ is modded out of the $S^3$ at infinity exactly as in (4.12). If $|q_0| = 1$ then the metric is asymptotic to $\mathbb{R}^4$, and if $|q_0| > 1$ then the non-trivial identifications at infinity mean that it is an ALE (Asymptotically Locally Euclidean) space. Henceforth (largely for simplicity of the asymptotics), we take
\[
\varepsilon_0 = 0\,. \tag{4.19}
\]
One should also remember that, for the metric to be Riemannian, one must have $q_j \in \mathbb{Z}^+\cup\{0\}$. This means that if you want the metric to be asymptotic to $\mathbb{R}^4$, then $q_0 \equiv \sum_{j=1}^N q_j = 1$, and hence one must have $q_i = 1$ and $q_j = 0$, $j\ne i$, for some $i$. Then $V = \frac{1}{r_i}$ and the space is globally $\mathbb{R}^4$. Thus the only GH metric that is Riemannian and asymptotic to flat $\mathbb{R}^4$ must be flat $\mathbb{R}^4$ everywhere. So it seems we have to work with ALE spaces.
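Underlying all of this is the statement (4.2) that the multi-center $V$ of (4.5) is harmonic away from its sources; the same computation justifies the harmonic function $H$ of (2.59). A two-center sympy sketch:

```python
import sympy as sp

# Check that V = eps0 + q1/r1 + q2/r2 satisfies the flat Laplace
# equation (4.2) away from the two centers (placed on the z-axis).
x, y, z, a1, a2, q1, q2, eps0 = sp.symbols('x y z a1 a2 q1 q2 eps0', real=True)
r1 = sp.sqrt(x**2 + y**2 + (z - a1)**2)
r2 = sp.sqrt(x**2 + y**2 + (z - a2)**2)
V = eps0 + q1/r1 + q2/r2

laplacian = sum(sp.diff(V, s, 2) for s in (x, y, z))
assert sp.simplify(laplacian) == 0
print("V of (4.5) is harmonic away from the GH centers")
```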
4.2 Harmonic forms on GH metrics

Our goal is to solve the BPS equations, and we start with the first layer, (3.63), seeking harmonic magnetic fluxes on a GH space. We start by identifying the dual homology cycles. A generic GH metric has $\frac12 N(N-1)$ topologically non-trivial two-cycles, $\Delta_{ij}$, that run between the GH centers. These two-cycles can be defined by taking any curve, $\gamma_{ij}$, between $\vec y^{(i)}$ and $\vec y^{(j)}$ and considering the $\psi$-fiber of (4.1) along the curve. Because of the factor of $V^{-1}$ in (4.1), the fiber collapses to zero at the GH centers, and so the curve and the fiber sweep out a 2-sphere (up to $\mathbb{Z}_{|q_j|}$ orbifolds). (See Fig. 2.) These spheres intersect one another at the common points $\vec y^{(j)}$. There are $(N-1)$ linearly independent homology two-spheres, and the set $\Delta_{i\,(i+1)}$ represents a basis. (If one has $q_j = 1$ at every GH center, then the integer homology corresponds to the root lattice of $SU(N)$, with an intersection matrix given by the inner product of the roots.)

Figure 2: This figure depicts some non-trivial cycles of the Gibbons-Hawking geometry. The behaviour of the $U(1)$ fiber is shown along curves between the sources of the potential, $V$. Here the fibers sweep out a pair of intersecting homology spheres.

To define the dual cohomology, it is convenient to introduce a set of frames
\[
\hat e^1 = V^{-\frac12}\,(d\psi + A)\,, \qquad \hat e^{a+1} = V^{\frac12}\,dy^a\,, \quad a = 1,2,3\,, \tag{4.20}
\]
and two associated sets of two-forms:
\[
\Omega^{(a)}_\pm \equiv \hat e^1\wedge\hat e^{a+1} \pm \tfrac12\,\epsilon_{abc}\,\hat e^{b+1}\wedge\hat e^{c+1}\,, \quad a = 1,2,3\,. \tag{4.21}
\]
The two-forms, $\Omega^{(a)}_-$, are anti-self-dual, harmonic and non-normalizable, and they define the hyper-Kähler structure on the base. The forms, $\Omega^{(a)}_+$, are self-dual and can be used to construct harmonic fluxes that are dual to the two-cycles. Consider the self-dual two-forms:
\[
\Theta^{(I)} \equiv -\sum_{a=1}^{3}\big(\partial_a\big(V^{-1}K^I\big)\big)\,\Omega^{(a)}_+\,. \tag{4.22}
\]
Then $\Theta^{(I)}$ is closed (and hence co-closed and harmonic) if and only if $K^I$ is harmonic in $\mathbb{R}^3$, i.e. $\nabla^2 K^I = 0$.

Exercise: Compute the spin connection of the GH metric using the frames (4.20) and solve the equation (3.71) using the projection condition (3.67).

We now have the choice of how to distribute the sources of the $K^I$ throughout the $\mathbb{R}^3$ base of the GH space. One can make all sorts of black objects by allowing singular sources, but we want smooth, cohomological fluxes. Indeed, the $\Theta^{(I)}$ will be smooth if and only if $K^I/V$ is smooth; this occurs if and only if $K^I$ has the form:
\[
K^I = k^I_0 + \sum_{j=1}^{N}\frac{k^I_j}{r_j}\,, \tag{4.23}
\]
for some constants $k^I_0$ and $k^I_j$. Also note that the "gauge transformation"
\[
K^I \to K^I + c^I\,V\,, \tag{4.24}
\]
for some constants, $c^I$, leaves $\Theta^{(I)}$ unchanged, and so there are only $N$ independent parameters in each $K^I$. In addition, since $\varepsilon_0 = 0$, one must take $k^I_0 = 0$ for $\Theta^{(I)}$ to remain finite at infinity. The remaining $(N-1)$ parameters then describe harmonic forms that are dual to the non-trivial two-cycles. (If $\varepsilon_0 \ne 0$ then the extra parameter is that of a Maxwell field whose gauge potential gives the Wilson line around the $S^1$ at infinity.)

Exercise: Show that the two-forms, $\Theta^{(I)}$, defined by (4.22) and (4.23), are normalizable on standard GH spaces (with $V > 0$ everywhere). That is, show that the $\Theta^{(I)}$ are square integrable:
\[
\int \Theta^{(I)}\wedge\Theta^{(I)} < \infty \quad \text{(with no sum on } I\text{)}\,, \tag{4.25}
\]
where the integral is taken over the whole GH base space.

It is straightforward to find a local potential such that $\Theta^{(I)} = dB^{(I)}$:
\[
B^{(I)} \equiv V^{-1}K^I\,(d\psi + A) + \vec\xi^{(I)}\cdot d\vec y\,, \tag{4.26}
\]
where
\[
\vec\nabla\times\vec\xi^{(I)} = -\vec\nabla K^I\,. \tag{4.27}
\]
Hence, the $\vec\xi^{(I)}$ are the vector potentials of magnetic monopoles located at the singular points of the $K^I$. One can use these local potentials to compute the fluxes, $\Pi^{(I)}_{ij}$, of $\Theta^{(I)}$ through the cycles:
\[
\Pi^{(I)}_{ij} \equiv \frac{1}{4\pi}\int_{\Delta_{ij}}\Theta^{(I)} = \Big(\frac{k^I_j}{q_j} - \frac{k^I_i}{q_i}\Big)\,. \tag{4.28}
\]
I have normalized these periods for later convenience.

Exercise: Excise the points $r_i = 0$ and $r_j = 0$ from $\Delta_{ij}$ and then use (4.26) on this punctured cycle to prove (4.28).

4.3 Solving the BPS equations

We have solved the first layer of the BPS equations, (3.63), and our task now is to solve the remaining two, (3.64) and (3.65). Such solutions were derived in [6,25] for Riemannian Gibbons-Hawking metrics, and the result is relatively simple.

Exercise: Substitute the two-forms (4.22) into (3.64) and show that the resulting equation has the solution:
\[
Z_I = \tfrac12\,C_{IJK}\,V^{-1}K^JK^K + L_I\,, \tag{4.29}
\]
where the $L_I$ are independent harmonic functions.

Now write the one-form, $k$, as:
\[
k = \mu\,(d\psi + A) + \omega\,, \tag{4.30}
\]
and then (3.65) becomes:
\[
\vec\nabla\times\vec\omega = \big(V\,\vec\nabla\mu - \mu\,\vec\nabla V\big) - V\sum_{I=1}^{3} Z_I\,\vec\nabla\Big(\frac{K^I}{V}\Big)\,. \tag{4.31}
\]
Taking the divergence yields the following equation for $\mu$:
\[
\nabla^2\mu = V^{-1}\,\vec\nabla\cdot\Big(V\sum_{I=1}^{3} Z_I\,\vec\nabla\frac{K^I}{V}\Big)\,, \tag{4.32}
\]
which is solved by:
\[
\mu = \tfrac16\,C_{IJK}\,\frac{K^IK^JK^K}{V^2} + \frac{1}{2V}\,K^IL_I + M\,, \tag{4.33}
\]
where $M$ is yet another harmonic function on $\mathbb{R}^3$. Indeed, $M$ determines the anti-self-dual part of $dk$ that cancels out of (3.65). Substituting this result for $\mu$ into (4.31), we find that $\omega$ satisfies:
\[
\vec\nabla\times\vec\omega = V\,\vec\nabla M - M\,\vec\nabla V + \tfrac12\,\big(K^I\,\vec\nabla L_I - L_I\,\vec\nabla K^I\big)\,. \tag{4.34}
\]
The integrability condition for this equation is obtained by taking the divergence of both sides. The left-hand side trivially vanishes, while the right-hand side vanishes because $K^I$, $L_I$, $M$ and $V$ are harmonic. The harmonic functions in (4.34) will involve constants and point sources at $r_j = 0$. Thus the right-hand side of (4.34) will have two kinds of terms:
\[
\frac{1}{r_i}\,\vec\nabla\frac{1}{r_j} - \frac{1}{r_j}\,\vec\nabla\frac{1}{r_i} \qquad {\rm and} \qquad \vec\nabla\frac{1}{r_i}\,. \tag{4.35}
\]
To determine the pieces that make up $\vec\omega$, introduce coordinates with the $z$-axis running through $\vec y^{(i)}$ and $\vec y^{(j)}$, so that $\vec y^{(i)} = (0,0,a)$ and $\vec y^{(j)} = (0,0,b)$ with $a > b$. Define
\[
\omega_{ij} \equiv -\frac{\big(x^2 + y^2 + (z - a + r_i)(z - b - r_j)\big)}{(a - b)\,r_i\,r_j}\,d\phi\,. \tag{4.36}
\]
One can then easily verify that these vector fields satisfy:
\[
\vec\nabla\times\vec\omega_{ij} = \frac{1}{r_i}\,\vec\nabla\frac{1}{r_j} - \frac{1}{r_j}\,\vec\nabla\frac{1}{r_i} + \frac{1}{r_{ij}}\Big(\vec\nabla\frac{1}{r_i} - \vec\nabla\frac{1}{r_j}\Big)\,, \tag{4.37}
\]
where
\[
r_{ij} \equiv |\vec y^{(i)} - \vec y^{(j)}| \tag{4.38}
\]
is the distance between the $i^{\rm th}$ and $j^{\rm th}$ centers in the Gibbons-Hawking metric. We then see that the general solution for $\vec\omega$ may be written as:
\[
\vec\omega = \sum_{i,j}^{N} a_{ij}\,\vec\omega_{ij} + \sum_{i}^{N} b_i\,\vec v_i\,, \tag{4.39}
\]
for some constants $a_{ij}$, $b_i$, where the $\vec v_i$ are defined in (4.7). The important point about this choice of solution is that the $\omega_{ij}$ have no string singularities whatsoever.
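One can indeed "easily verify" (4.37), and the no-string property, with sympy. The sketch below uses made-up rational centers; the identity (4.37) is checked exactly at a random rational point (full symbolic simplification works too, but is slow), and the $d\phi$ coefficient is shown to vanish on all three segments of the $z$-axis:

```python
import sympy as sp

# Check of (4.36)-(4.38): curl(omega_ij) against the RHS of (4.37),
# and absence of Dirac strings on the z-axis.
x, y, z, s = sp.symbols('x y z s', real=True, positive=False), None, None, None
x, y, z = sp.symbols('x y z', real=True)
s = sp.symbols('s', positive=True)
a, b = sp.Rational(3, 2), sp.Rational(-1, 2)            # sample centers, a > b
ri = sp.sqrt(x**2 + y**2 + (z - a)**2)
rj = sp.sqrt(x**2 + y**2 + (z - b)**2)
w = -(x**2 + y**2 + (z - a + ri)*(z - b - rj)) / ((a - b)*ri*rj)
rho2 = x**2 + y**2
v = sp.Matrix([-w*y/rho2, w*x/rho2, 0])                 # omega_ij in Cartesian form

curl = sp.Matrix([sp.diff(v[2], y) - sp.diff(v[1], z),
                  sp.diff(v[0], z) - sp.diff(v[2], x),
                  sp.diff(v[1], x) - sp.diff(v[0], y)])
grad = lambda f: sp.Matrix([sp.diff(f, t) for t in (x, y, z)])
rhs = (1/ri)*grad(1/rj) - (1/rj)*grad(1/ri) + (grad(1/ri) - grad(1/rj))/(a - b)

pt = {x: sp.Rational(1, 3), y: sp.Rational(2, 7), z: sp.Rational(5, 11)}
assert all(sp.simplify((curl[k] - rhs[k]).subs(pt)) == 0 for k in range(3))

# No Dirac strings: w -> 0 on the axis above a, between the centers, below b.
for z0 in (2, sp.Rational(1, 2), -1):
    assert sp.limit(w.subs({x: 0, y: s, z: z0}), s, 0) == 0
print("(4.37) holds and omega_ij carries no Dirac strings")
```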
These $\omega_{ij}$ can be used to solve (4.34) with the first set of source terms in (4.35) without introducing Dirac-Misner strings, but at the cost of adding new source terms of the form of the second term in (4.35). Using the $\omega_{ij}$ shows that the string singularities in $\vec\omega$ can be reduced to those associated with the second set of terms in (4.35), and so there are at most $N$ possible string singularities, and these can be arranged to run in any direction from each of the points $\vec y^{(j)}$.

4.4 Removing singularities

The functions $L_I$ and $M$ should be chosen to ensure that the metric is regular as $r_j\to0$, which means that the warp factors, $Z_I$, and the function, $\mu$, must be regular as $r_j\to0$. From (4.29) and (4.33), one can easily see that regularity requires:
\[
L_I = \ell^I_0 + \sum_{j=1}^{N}\frac{\ell^I_j}{r_j}\,, \qquad M = m_0 + \sum_{j=1}^{N}\frac{m_j}{r_j}\,, \tag{4.40}
\]
with
\[
\ell^I_j = -\tfrac12\,C_{IJK}\,\frac{k^J_j k^K_j}{q_j}\,, \qquad j = 1,\dots,N\,; \tag{4.41}
\]
\[
m_j = \tfrac{1}{12}\,C_{IJK}\,\frac{k^I_j k^J_j k^K_j}{q_j^2} = \tfrac12\,\frac{k^1_j k^2_j k^3_j}{q_j^2}\,, \qquad j = 1,\dots,N\,. \tag{4.42}
\]
As I noted above, in order to obtain solutions that are (locally) asymptotic to five-dimensional Minkowski space, $\mathbb{R}^{4,1}$ (possibly divided by $\mathbb{Z}_{q_0}$), one must take $\varepsilon_0 = 0$ in (4.5) and $k^I_0 = 0$ in (4.23). Moreover, $\mu$ must vanish at infinity, and this fixes $m_0$. For simplicity I will also take $Z_I\to1$ as $r\to\infty$. Hence, the solutions that are asymptotic to five-dimensional Minkowski space have:
\[
\varepsilon_0 = 0\,, \qquad k^I_0 = 0\,, \qquad \ell^I_0 = 1\,, \qquad m_0 = -\tfrac12\,q_0^{-1}\sum_{j=1}^{N}\sum_{I=1}^{3} k^I_j\,. \tag{4.43}
\]
It is straightforward to generalize these results to solutions with different asymptotics, and in particular to Taub-NUT. We have now created a solution to the BPS equations that is based on harmonic fluxes and has no divergent behaviour in the metric.
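The regularity conditions (4.41)-(4.42) follow from a one-line series expansion, which sympy reproduces. A single-center sketch, expanding $Z_I$ and $\mu$ near $r = 0$ and solving for the coefficients that kill the $1/r$ poles:

```python
import sympy as sp

# Series check of (4.41)-(4.42): near a GH center, V ~ q/r, K^I ~ k_I/r,
# L_I ~ 1 + l_I/r, M ~ m/r; kill the 1/r poles of Z_I (4.29) and mu (4.33).
r, q = sp.symbols('r q', positive=True)
k1, k2, k3, l1, l2, l3, m = sp.symbols('k1 k2 k3 l1 l2 l3 m', real=True)

V = q/r
K = [k1/r, k2/r, k3/r]
L = [1 + l1/r, 1 + l2/r, 1 + l3/r]
Z = [K[1]*K[2]/V + L[0], K[0]*K[2]/V + L[1], K[0]*K[1]/V + L[2]]      # (4.29)
mu = K[0]*K[1]*K[2]/V**2 + sum(K[i]*L[i] for i in range(3))/(2*V) + m/r  # (4.33)

pole = lambda f: sp.limit(r*sp.expand(f), r, 0)       # coefficient of 1/r

l_fix = sp.solve([pole(Z[i]) for i in range(3)], [l1, l2, l3])
assert sp.simplify(l_fix[l1] + k2*k3/q) == 0          # eq. (4.41)
assert sp.simplify(l_fix[l2] + k1*k3/q) == 0
assert sp.simplify(l_fix[l3] + k1*k2/q) == 0

m_fix = sp.solve(pole(mu.subs(l_fix)), m)
assert sp.simplify(m_fix[0] - k1*k2*k3/(2*q**2)) == 0  # eq. (4.42)
print("regularity at the GH centers reproduces (4.41) and (4.42)")
```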
4.5 The bubble equations and closed time-like curves

While we have removed the obvious divergent behaviour in the fields and in the metric, we have not yet created a non-singular physical five-dimensional metric. In particular, we must make sure that there are no closed time-like curves (CTC's). The easiest way to expose the potential problem is to consider the constant-time, $t$, slices of the metric (3.29) and the resulting metric induced on the hypersurfaces, $\mathcal{B}$:
$$d\hat{s}^2_4 \;=\; -Z^{-2}\, k^2 + Z\, ds^2_4\,. \qquad (4.44)$$
The danger arises when, along some closed curve, the first (negative) term wins out over the second and thus produces a CTC. A physically acceptable metric requires us to eliminate this possibility.

There is another, more stringent, highly desirable (but not strictly essential) constraint: requiring that the metric (4.44) be positive definite. If this is true then the function, $t$, represents a globally-defined time function and the five-dimensional metric, (3.29), is called stably causal. This means that not only is the metric causal, but it also remains causal under small perturbations.

One begins the investigation at the obvious danger points: $r_j \to 0$. The functions $Z_I$ and $\mu$ are finite there, but $V \to \infty$, and so the $\psi$-circle has vanishing size when measured using $ds^2_4$. Thus the $\psi$-circles become CTC's unless we impose the further conditions:
$$\mu\big(\vec{y} = \vec{y}^{(j)}\big) \;=\; 0\,, \qquad j = 1,\dots,N\,. \qquad (4.45)$$
This ensures that the $\psi$-circles pinch off in the metric (4.44).

There is a second danger, coming from Dirac strings in $\omega$: if one goes to the axis between two GH points, $r_i \to 0$ and $r_j \to 0$, then the azimuthal angle, $\phi$, around that axis can become a CTC if $\omega$ has a Dirac string. One therefore needs to collect all the terms in the sources for $\omega$ that give rise to Dirac strings and set them to zero. It is relatively easy to show (see [23]) that (4.31) implies that imposing the constraint (4.45) also removes the Dirac strings from $\omega$.

Performing the expansion of $\mu$ using (4.33), (4.23), (4.40) and (4.42) around each Gibbons-Hawking point, one finds that (4.45) becomes the Bubble Equations:
$$\sum_{\substack{j=1 \\ j\neq i}}^{N} \frac{\Gamma_{ij}}{\big|\vec{y}^{(j)} - \vec{y}^{(i)}\big|} \;=\; -2\,\Big( m_0\, q_i + \tfrac{1}{2} \sum_{I=1}^{3} k^I_i \Big)\,, \qquad (4.46)$$
where
$$\Gamma_{ij} \;\equiv\; q_i\, q_j\, \Pi^{(1)}_{ij}\, \Pi^{(2)}_{ij}\, \Pi^{(3)}_{ij}\,, \qquad r_{ij} \;\equiv\; \big|\vec{y}^{(i)} - \vec{y}^{(j)}\big|\,. \qquad (4.47)$$
If one adds together all of the bubble equations, then the left-hand side vanishes identically (because $\Gamma_{ji} = -\Gamma_{ij}$), and one obtains the condition on $m_0$ in (4.43). This is simply the condition $\mu \to 0$ as $r \to \infty$, and it also means that there is no Dirac-Misner string running out to infinity. Thus there are only $(N-1)$ independent bubble equations.

We refer to (4.46) as the bubble equations because they relate the flux through each bubble to the physical size of the bubble, represented by $r_{ij}$. Note that, for a generic configuration, a bubble size is non-zero if and only if all three of the fluxes are non-zero. Thus the bubbling transition will only be generically possible for the three-charge system.

If all the fluxes are fixed then there are $3N$ moduli, $\vec{y}^{(j)}$, but really this is $3(N-1)$ because one can translate the centroid of the $\vec{y}^{(j)}$ without changing the metric. Once we impose the bubble equations, there are $3(N-1) - (N-1) = 2(N-1)$ moduli remaining. It is for this reason that I made the remark in Section 2.5 that the relative positions of BPS objects are not always free parameters.

Solving the bubble equations guarantees that there are no CTC's in the immediate vicinity of the GH points. This does not, however, guarantee that there are no CTC's elsewhere, and the solution may also have other serious pathologies, particularly if one of the $Z_I$'s changes sign. There is also one last, crucial, and surprising, ingredient that appears to be essential to the construction of viable microstate geometries.
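The bubble equations are easy to study numerically. The sketch below (my own; toy fluxes, $C_{IJK} = |\epsilon_{IJK}|$ again) solves (4.46) for three collinear centers. Whether a physically sensible root exists depends entirely on the chosen fluxes, so in practice one scans initial guesses and checks the result; the built-in consistency check is that the right-hand sides must sum to zero.

```python
import numpy as np
from scipy.optimize import fsolve

q = np.array([1., -3., 3.])                    # GH charges, q_0 = +1
k = np.array([[1., 1., 1.],
              [3., 3., 3.],
              [2., 2., 2.]])                   # toy flux parameters k_i^I

Pi = lambda I, i, j: k[j, I]/q[j] - k[i, I]/q[i]          # fluxes, eq. (4.28)
Gamma = np.array([[q[i]*q[j]*Pi(0, i, j)*Pi(1, i, j)*Pi(2, i, j) if i != j else 0.
                   for j in range(3)] for i in range(3)])
m0 = -0.5*np.sum(k)/np.sum(q)                              # eq. (4.43)
rhs = -2.*(m0*q + 0.5*k.sum(axis=1))
assert np.isclose(rhs.sum(), 0.)   # Gamma_ji = -Gamma_ij forces sum(rhs) = 0

def bubble_eqs(s):
    """Collinear centers at z = (0, s[0], s[1]); only N-1 = 2 equations
    are independent, so we return the first two."""
    pos = np.array([0., s[0], s[1]])
    return [sum(Gamma[i, j]/abs(pos[i] - pos[j]) for j in range(3) if j != i)
            - rhs[i] for i in range(2)]

s = fsolve(bubble_eqs, x0=[1.5, 2.5])
print("center positions: 0,", s, " residuals:", bubble_eqs(s))
```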
4.6 Ambi-polar GH metrics

We concluded Section 3.5 by noting that the only smooth, hyper-Kähler, Riemannian metric that is asymptotic to $\mathbb{R}^4$ is $\mathbb{R}^4$ itself with its flat metric. This creates an obvious challenge to microstate geometries. But it gets worse: if one starts with a Riemannian GH metric, it is almost impossible to find solutions to the bubble equations that result in globally smooth metrics. The physical problem is that the magnetic fluxes create an expansion force on the bubbles, and there is no counterbalancing force to create the equilibrium that the bubble equations seem to require.

The solution is extremely simple, and seemingly radical: drop the requirement that the metric of $\mathcal{B}$ be Riemannian. More specifically, we will allow the metric on $\mathcal{B}$ to be ambi-polar: that is, the signature of the base metric, $ds^2_4$, can change from $+4$ to $-4$, with apparently singular intervening surfaces. Away from these singular surfaces, we require that the metric still be smooth and hyper-Kähler. For the GH metrics, this means that we are going to allow $V$ to change sign and, in particular, allow the GH charges, $q_i \in \mathbb{Z}$, to have any sign. This leads to vast families of ambi-polar GH metrics that are asymptotic to flat $\mathbb{R}^4$: one simply imposes
$$\epsilon_0 = 0\,, \qquad q_0 \;\equiv\; \sum_{j=1}^{N} q_j \;=\; +1\,. \qquad (4.48)$$
The apparent disaster is the horribly singular behaviour at the surfaces on which $V = 0$. The miracle is that, even though the base metric is extremely singular, the "warp factor," $Z$, of (3.29) changes sign (by passing through a pole) at the $V = 0$ surfaces in precisely the right way to create a smooth, Lorentzian five-manifold. Indeed, (3.62) and (4.29) imply that, near $V = 0$, $Z \sim V^{-1}$. This cancels the factor of $V$ in front of $d\vec{y}\cdot d\vec{y}$ in (4.1) and produces a finite limit for the metric along $\mathbb{R}^3$. On the other hand, the same factor of $V^{-1}$ appears to create a double pole along the $\psi$-fiber. However, the explicit form of $\mu$ in (4.33) can be used to show that this double pole, and the sub-leading single pole, are exactly cancelled by poles coming from the $-Z^{-2}(dt+k)^2$ term. It is also evident from (3.62) that the factors of $V$ cancel in the expressions for the scalars, $X^I$. It is also very straightforward to show that, in the Maxwell potential (3.30), the singularity at $V = 0$ in (4.26) cancels against the singularity coming from $Z_I^{-1} k$, and thus $A^{(I)}$ is smooth in the neighbourhood of $V = 0$.

Exercise: (i) Consider what happens to the homology 2-cycles defined in Section 4.2, but now in the full physical metric, (3.29), or, equivalently, in the metric (4.44) on $\mathcal{B}$. Show that these cycles remain compact and well defined if and only if one imposes (4.45). (ii) Observe that the divergence in $\Theta^{(I)}$ at $V = 0$ makes the period integral, (4.28), completely ill defined. Show that (3.30) is regular in the neighbourhood of the $V = 0$ surfaces. Consider $F^{(I)} = dA^{(I)}$ and prove that the period integral of $F^{(I)}$ is well defined and is given by:
$$\Pi^{(I)}_{ij} \;\equiv\; \frac{1}{4\pi} \int_{\Delta_{ij}} F^{(I)} \;=\; \Big(\frac{k^I_j}{q_j} - \frac{k^I_i}{q_i}\Big)\,. \qquad (4.49)$$

Exercise: Write the metric (4.44) in the form:
$$d\hat{s}^2_4 \;=\; -Z^{-2}\big(\mu(d\psi + A) + \omega\big)^2 + Z V^{-1}\big(d\psi + A\big)^2 + Z V\big(dr^2 + r^2 d\theta^2 + r^2\sin^2\theta\, d\phi^2\big)$$
$$=\; \frac{\mathcal{Q}}{Z^2 V^2}\,\Big(d\psi + A - \frac{\mu V^2}{\mathcal{Q}}\,\omega\Big)^2 + Z V\Big(r^2\sin^2\theta\, d\phi^2 - \frac{\omega^2}{\mathcal{Q}}\Big) + Z V\big(dr^2 + r^2 d\theta^2\big)\,,$$
where
$$Z \;\equiv\; (Z_1 Z_2 Z_3)^{1/3}\,, \qquad \mathcal{Q} \;\equiv\; Z_1 Z_2 Z_3\, V - \mu^2\, V^2\,. \qquad (4.50)$$
Show that one can write $\mathcal{Q}$ in the form:
$$\mathcal{Q} \;=\; -M^2 V^2 - \tfrac{1}{3}\, M\, C_{IJK} K^I K^J K^K - M V K^I L_I - \tfrac{1}{4}\,\big(K^I L_I\big)^2 + \tfrac{1}{6}\, V\, C^{IJK} L_I L_J L_K + \tfrac{1}{4}\, C^{IJK} C_{IMN}\, L_J L_K K^M K^N\,.$$
Observe that $Z V$ and $\mathcal{Q}$ are smooth (and generically non-vanishing) near the surfaces where $V = 0$, and hence conclude that (3.29) is smooth across these surfaces.

The surfaces at which $V = 0$ thus pose no inherent problems with smoothness in ambi-polar metrics: (3.29) can indeed be a smooth Lorentzian metric. Moreover, everything we discussed about magnetic fluxes remains valid when translated to the full, five-dimensional metric and the full, five-dimensional Maxwell fields.
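The quartic expression for $\mathcal{Q}$ can be verified symbolically. Here is a small sympy check of my own, specializing to $C_{IJK} = |\epsilon_{IJK}|$, in which case $\tfrac{1}{3}C_{IJK}K^IK^JK^K = 2K^1K^2K^3$, $\tfrac{1}{6}C^{IJK}L_IL_JL_K = L_1L_2L_3$, and the last term collapses to $\sum_{I<J}L_IL_JK^IK^J$:

```python
import sympy as sp

V, M = sp.symbols('V M')
K1, K2, K3, L1, L2, L3 = sp.symbols('K1 K2 K3 L1 L2 L3')

# Z_I and mu from (4.29) and (4.33), with C_IJK = |eps_IJK|:
Z1, Z2, Z3 = K2*K3/V + L1, K1*K3/V + L2, K1*K2/V + L3
mu = K1*K2*K3/V**2 + (K1*L1 + K2*L2 + K3*L3)/(2*V) + M
KL = K1*L1 + K2*L2 + K3*L3

Q = sp.expand(Z1*Z2*Z3*V - mu**2*V**2)                      # definition (4.50)
Q_claim = (-M**2*V**2 - 2*M*K1*K2*K3 - M*V*KL
           - sp.Rational(1, 4)*KL**2 + V*L1*L2*L3
           + L1*L2*K1*K2 + L1*L3*K1*K3 + L2*L3*K2*K3)
print(sp.simplify(Q - Q_claim))                             # -> 0
```

Note, in particular, that all the negative powers of $V$ cancel in the expansion, which is the algebraic heart of the smoothness across $V = 0$.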
There is one rather interesting physical feature of the $V = 0$ surfaces: the Killing vector, $\frac{\partial}{\partial t}$, is time-like everywhere except at the $V = 0$ surfaces, on which it becomes a null vector. The $V = 0$ surfaces are thus known as evanescent ergosurfaces. Because the time-like Killing vector becomes null on these surfaces, they potentially have a very important role in collecting and storing information [26-30].

4.7 Two centres: AdS$_3 \times$ S$^2$

It is extremely instructive to look at a very simple example [31-33] that represents a "local model" of how the full geometry is resolved around a pair of GH charges with opposite signs. For simplicity we take all the $K^I$'s to be equal and set:
$$V \;=\; q\,\Big(\frac{1}{r_+} - \frac{1}{r_-}\Big)\,, \qquad K^I \;\equiv\; K \;=\; k\,\Big(\frac{1}{r_+} + \frac{1}{r_-}\Big)\,, \qquad (4.51)$$
where
$$r_\pm \;\equiv\; \sqrt{\rho^2 + (z \mp a)^2}\,, \qquad (4.52)$$
in cylindrical polar coordinates, $(z, \rho, \phi)$, on the $\mathbb{R}^3$ of the GH base. Regularity of the functions $Z_I$ and $\mu$ at the GH points determines the functions $L_I$ and $M$ up to additive constants. Since we do not want any rotation at infinity, we need $\mu$ to vanish at infinity, and for simplicity we set the constants in the $L_I$ to zero. Thus we find:
$$L_I \;\equiv\; L \;=\; -\frac{k^2}{q}\,\Big(\frac{1}{r_+} - \frac{1}{r_-}\Big)\,, \qquad M \;=\; -\frac{2 k^3}{a\, q^2} + \frac{k^3}{2 q^2}\,\Big(\frac{1}{r_+} + \frac{1}{r_-}\Big)\,. \qquad (4.53)$$
The vector potentials for this solution are then:
$$A \;=\; q\,\Big(\frac{(z-a)}{r_+} - \frac{(z+a)}{r_-}\Big)\, d\phi\,, \qquad \omega \;=\; -\frac{2 k^3}{a\, q}\; \frac{\rho^2 + (z - a + r_+)(z + a - r_-)}{r_+\, r_-}\; d\phi\,. \qquad (4.54)$$
The five-dimensional metric is then:
$$ds^2_5 \;\equiv\; -Z^{-2}\big(dt + \mu(d\psi + A) + \omega\big)^2 + Z\,\big(V^{-1}(d\psi + A)^2 + V(d\rho^2 + \rho^2 d\phi^2 + dz^2)\big)\,, \qquad (4.55)$$
where
$$Z \;=\; V^{-1} K^2 + L \;=\; -\frac{4 k^2}{q}\, \frac{1}{(r_+ - r_-)}\,, \qquad (4.56)$$
$$\mu \;=\; V^{-2} K^3 + \tfrac{3}{2}\, V^{-1} K L + M \;=\; \frac{4 k^3}{q^2}\, \frac{(r_+ + r_-)}{(r_+ - r_-)^2} - \frac{2 k^3}{a\, q^2}\,.$$
To map this onto a more familiar metric, one must make a transformation to oblate spheroidal coordinates, like those employed in [34] to map the positive-definite, two-centered GH space onto the Eguchi-Hanson form:
$$z \;=\; a \cosh 2\xi\, \cos\theta\,, \qquad \rho \;=\; a \sinh 2\xi\, \sin\theta\,, \qquad \xi \geq 0\,, \quad 0 \leq \theta \leq \pi\,. \qquad (4.57)$$
In particular, one has $r_\pm = a\,(\cosh 2\xi \mp \cos\theta)$. One then rescales and shifts the remaining variables according to:
$$\tau \;\equiv\; \frac{a\, q}{8 k^3}\, t\,, \qquad \varphi_1 \;\equiv\; \frac{1}{2q}\,\psi - \frac{a\, q}{8 k^3}\, t\,, \qquad \varphi_2 \;\equiv\; \phi - \frac{1}{2q}\,\psi + \frac{a\, q}{4 k^3}\, t\,, \qquad (4.58)$$
and the five-dimensional metric takes the standard AdS$_3 \times$ S$^2$ form:
$$ds^2_5 \;\equiv\; R_1^2\,\big[-\cosh^2\xi\, d\tau^2 + d\xi^2 + \sinh^2\xi\, d\varphi_1^2\big] + R_2^2\,\big[d\theta^2 + \sin^2\theta\, d\varphi_2^2\big]\,, \qquad (4.59)$$
with
$$R_1 \;=\; 2 R_2 \;=\; 4 k\,. \qquad (4.60)$$
Note that the first factor in the metric is global AdS$_3$ with $-\infty < \tau < \infty$. One should also recall that the GH fiber coordinate has period $4\pi$, and therefore, for $|q| = 1$, the angles, $\varphi_j$, both have periods $2\pi$. For $q \neq 1$, the $\mathbb{Z}_{|q|}$ orbifold associated with the GH points emerges as a simultaneous $\mathbb{Z}_{|q|}$ quotient on the longitudes of the AdS$_3$ and S$^2$. This metric is completely smooth, and the "bubble," or non-trivial topology, is simply the 2-sphere. It is a "local model" in the sense that, if the two GH points are far from the other GH points, one can "zoom in" on that pair and the metric locally reduces to the construction here.
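The coordinate identity quoted below (4.57), $r_\pm = a(\cosh 2\xi \mp \cos\theta)$, is a one-line sympy check (my sketch; comparing squares avoids branch issues with the square root):

```python
import sympy as sp

xi, th, a = sp.symbols('xi theta a', positive=True)
z, rho = a*sp.cosh(2*xi)*sp.cos(th), a*sp.sinh(2*xi)*sp.sin(th)   # eq. (4.57)

rp2 = rho**2 + (z - a)**2          # r_+^2, from eq. (4.52)
rm2 = rho**2 + (z + a)**2          # r_-^2
print(sp.simplify(rp2 - (a*(sp.cosh(2*xi) - sp.cos(th)))**2))     # -> 0
print(sp.simplify(rm2 - (a*(sp.cosh(2*xi) + sp.cos(th)))**2))     # -> 0
```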
One can also check that the time-like Killing vector, $T \equiv \frac{\partial}{\partial t}$, transforms as follows:
$$\frac{\partial}{\partial t} \;=\; \frac{1}{R_1}\,\Big[\frac{\partial}{\partial \tau} - \frac{\partial}{\partial \varphi_1} + 2\,\frac{\partial}{\partial \varphi_2}\Big]\,, \qquad (4.61)$$
whose norm is
$$T^\mu T_\mu \;=\; -R_1^{-2}\, \cos^2\theta\,. \qquad (4.62)$$
One may think of the integral curves of $T$ as world-lines of non-space-like "observers" that are time-like everywhere except at $\theta = \frac{\pi}{2}$. From this perspective, there is nothing unusual about the evanescent ergosurface.

4.8 Regularity

We have examined smoothness on evanescent ergosurfaces and the absence of CTC's near the GH points, $\vec{y}^{(j)}$. There is now the much broader question of whether the metric is regular globally, and whether it is globally free of CTC's and perhaps even stably causal.

First we observe that writing the metric in the form (4.50) means that one must have $ZV > 0$ and $\mathcal{Q} > 0$. However, the parametrization of the scalar fields in (3.62) requires that all the functions, $Z_I$, have the same sign. We therefore must require
$$Z_I\, V \;>\; 0\,, \qquad I = 1, 2, 3\,. \qquad (4.63)$$
For the five-dimensional metric to be stably causal, $\mathcal{Q}$ must not only be positive but must satisfy [35]:
$$-g^{\mu\nu}\,\partial_\mu t\, \partial_\nu t \;=\; -g^{tt} \;=\; (ZV)^{-1}\,\big(\mathcal{Q} - \omega^2\big) \;>\; 0\,, \qquad (4.64)$$
where $\omega$ is squared using the $\mathbb{R}^3$ metric.

First we note that, for generic fluxes, satisfying the bubble equations is nowhere near sufficient to guarantee (4.63) and (4.64). This is because generic fluxes can produce positive electric charge contributions from some collections of bubbles and negative electric charge contributions from other collections of bubbles. Such combinations of localizable positive and negative charges create a very pathological solution that typically fails to satisfy (4.63) in some region. While there are no theorems, if one satisfies the bubble equations and makes sure that (4.63) is satisfied, then the five-dimensional metric usually satisfies (4.64). (I know of no counterexamples and have constructed many, many families of such microstate geometries.)

So the bottom line is that one must solve the bubble equations and make sure that (4.63) and (4.64) are satisfied. Satisfying (4.63) seems to guarantee (4.64), but one should always explore the metric numerically to make sure that (4.64) is indeed true.

4.9 The asymptotic charges

As described in Section 3.1, one can obtain the electric charges, mass and angular momenta of bubbled geometries by expanding $Z_I$ and $k$ at infinity. It is, however, more convenient to translate the asymptotics into the standard coordinates of the Gibbons-Hawking spaces. In particular, one should remember that the GH radial coordinate, $r$, is related to the radial coordinate $\rho$ on flat $\mathbb{R}^4$ via (4.10), that is, $r = \frac{1}{4}\rho^2$. One then finds
$$Z_I \;\sim\; 1 + \frac{Q_I}{4\, r} + \dots\,, \qquad \rho \to \infty\,, \qquad (4.65)$$
and from (4.29) one easily obtains
$$Q_I \;=\; -2\, C_{IJK} \sum_{j=1}^{N} q_j^{-1}\, \tilde{k}^J_j\, \tilde{k}^K_j\,, \qquad (4.66)$$
where
$$\tilde{k}^I_j \;\equiv\; k^I_j - q_j\, N\, k^I_{\rm avg}\,, \qquad k^I_{\rm avg} \;\equiv\; \frac{1}{N} \sum_{j=1}^{N} k^I_j\,. \qquad (4.67)$$
Note that $\tilde{k}^I_j$ is gauge invariant under (4.24). This may be recast in a more suggestive form that reflects the origin of the charges as coming from magnetic fluxes via the Chern-Simons interaction:
$$Q_I \;=\; \sum_{i,j=1}^{N} Q_{I\,ij}\,, \qquad Q_{I\,ij} \;\equiv\; -\tfrac{1}{4}\, C_{IJK}\, q_i\, q_j\, \Pi^{(J)}_{ij}\, \Pi^{(K)}_{ij}\,, \qquad (4.68)$$
where the $Q_{I\,ij}$ may be thought of as the contributions to the charge coming from each bubble.
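The charge formulas translate directly into code. A minimal sketch (mine; STU model with $C_{IJK}=|\epsilon_{IJK}|$, toy data carried over from the bubble-equation example above, with approximate positions from that solve). It also returns the angular momenta $J_R$ and $J_L$ using (4.74)-(4.76), which are derived just below:

```python
import numpy as np

def asymptotic_charges(q, k, y):
    """Q_I from (4.66)-(4.68), and J_R = J1+J2, J_L = J1-J2 from (4.74)-(4.76),
    assuming C_IJK = |eps_IJK| and sum(q) = 1.
    q: (N,) GH charges; k: (N,3) flux parameters; y: (N,3) centers."""
    ktil = k - np.outer(q, k.sum(axis=0))       # \tilde k_j^I = k_j^I - q_j N k_avg^I
    Q = np.array([-4.*np.sum(ktil[:, (I+1) % 3]*ktil[:, (I+2) % 3]/q)
                  for I in range(3)])           # eq. (4.66)
    JR = 8.*np.sum(ktil[:, 0]*ktil[:, 1]*ktil[:, 2]/q**2)      # eq. (4.75)
    D = (ktil.sum(axis=1)[:, None]*y).sum(axis=0)              # eq. (4.74)
    JL = 8.*np.linalg.norm(D)                                  # eq. (4.76)
    return Q, JR, JL

q = np.array([1., -3., 3.])
k = np.array([[1., 1., 1.], [3., 3., 3.], [2., 2., 2.]])
y = np.array([[0., 0., 0.], [0., 0., 1.60], [0., 0., 2.46]])   # approx. bubble-eq root
print(asymptotic_charges(q, k, y))
```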
Expanding $g_{00}$, one has:
$$-g_{00} \;=\; (Z_1 Z_2 Z_3)^{-\frac{2}{3}} \;\sim\; 1 - \frac{2}{3} \sum_{I=1}^{3} \frac{Q_I}{4\, r}\,, \qquad (4.69)$$
and comparing this with (3.7) one finds
$$M \;=\; \frac{\pi}{4 G_5}\,\big(Q_1 + Q_2 + Q_3\big)\,. \qquad (4.70)$$
It is fairly common to go to a system of units in which the five-dimensional Planck length, $\ell_5$, is unity, and this means (see, for example, [9,36]):
$$G_5 \;=\; \frac{\pi}{4}\,. \qquad (4.71)$$
In particular, this means that the BPS condition on the solution takes the simpler standard form:
$$M \;=\; Q_1 + Q_2 + Q_3\,. \qquad (4.72)$$
One can also recast the expressions for the angular momenta in terms of the GH formulation:
$$k \;\sim\; \frac{1}{4\,\rho^2}\,\big((J_1 + J_2) + (J_1 - J_2)\cos\theta\big)\, d\psi + \dots\,, \qquad (4.73)$$
which means one can obtain both commuting angular momenta from an expansion of $\mu$. There are two types of such terms: the simple $\frac{1}{r}$ terms and the dipole terms arising from the expansion of $V^{-1} K^I$. Following [35], define the dipoles
$$\vec{D}_j \;\equiv\; \sum_{I} \tilde{k}^I_j\; \vec{y}^{(j)}\,, \qquad \vec{D} \;\equiv\; \sum_{j=1}^{N} \vec{D}_j\,, \qquad (4.74)$$
and then the expansion of $k$ takes the form (4.73) if one takes $\vec{D}$ to define the polar axis from which $\theta$ is measured. One then arrives at
$$J_R \;\equiv\; J_1 + J_2 \;=\; \frac{4}{3}\, C_{IJK} \sum_{j=1}^{N} q_j^{-2}\, \tilde{k}^I_j\, \tilde{k}^J_j\, \tilde{k}^K_j\,, \qquad (4.75)$$
$$J_L \;\equiv\; J_1 - J_2 \;=\; 8\,\big|\vec{D}\big|\,. \qquad (4.76)$$
While we have put modulus signs around $\vec{D}$ in (4.76), one should note that it does have a meaningful orientation, and so we will sometimes consider $\vec{J}_L = 8\,\vec{D}$.

One can use the bubble equations to obtain another, rather more intuitive, expression for $J_1 - J_2$. One should first note that the right-hand side of the bubble equation, (4.46), may be written as $-\sum_I \tilde{k}^I_i$. Multiplying this by $\vec{y}^{(i)}$ and summing over $i$ yields:
$$\vec{J}_L \;\equiv\; 8\,\vec{D} \;=\; -\frac{4}{3}\, C_{IJK} \sum_{\substack{i,j=1 \\ j\neq i}}^{N} \Pi^{(I)}_{ij}\, \Pi^{(J)}_{ij}\, \Pi^{(K)}_{ij}\; q_i\, q_j\, \frac{\vec{y}^{(i)}}{r_{ij}} \;=\; -\frac{2}{3}\, C_{IJK} \sum_{\substack{i,j=1 \\ j\neq i}}^{N} q_i\, q_j\, \Pi^{(I)}_{ij}\, \Pi^{(J)}_{ij}\, \Pi^{(K)}_{ij}\; \frac{\big(\vec{y}^{(i)} - \vec{y}^{(j)}\big)}{\big|\vec{y}^{(i)} - \vec{y}^{(j)}\big|}\,, \qquad (4.77)$$
where we have used the skew symmetry $\Pi_{ij} = -\Pi_{ji}$ to obtain the second identity. This result suggests that one should define an angular-momentum flux vector associated with the $ij^{\rm th}$ bubble:
$$\vec{J}_L^{\,ij} \;\equiv\; -\frac{4}{3}\, q_i\, q_j\, C_{IJK}\, \Pi^{(I)}_{ij}\, \Pi^{(J)}_{ij}\, \Pi^{(K)}_{ij}\; \hat{y}_{ij}\,, \qquad (4.78)$$
where the $\hat{y}_{ij}$ are unit vectors,
$$\hat{y}_{ij} \;\equiv\; \frac{\big(\vec{y}^{(i)} - \vec{y}^{(j)}\big)}{\big|\vec{y}^{(i)} - \vec{y}^{(j)}\big|}\,. \qquad (4.79)$$
This means that the flux terms on the left-hand side of the bubble equation actually have a natural spatial direction, and, once this is incorporated, it yields the contribution of each bubble to $J_L$. This "angular momentum associated with a bubble" is of great significance to the quantization of bubbled geometries. If one quantizes the moduli space then the corresponding symplectic form treats each bubble as a distinct spin system whose individual angular momentum must be quantized [37-40].

4.10 Summary

One can make a bubbled solution as follows:

• Start with a GH metric (4.1) with (4.5). In particular, choose the $q_j \in \mathbb{Z}$ and $\vec{y}^{(j)} \in \mathbb{R}^3$ as one wishes. For metrics asymptotic to $\mathbb{R}^{4,1}$, impose the constraint (4.48) on the $q_i$. Compute $\vec{A}$ from (4.3). (To do this, (4.7) is useful.)

• Choose the magnetic fluxes, $\Pi^{(I)}_{ij}$, by choosing the flux parameters, $k^I_i$, in (4.23).
• Fix the functions $L_I$ and $M$ according to (4.42) and (4.43), and thereby determine the functions $Z_I$ and $\mu$ using (4.29) and (4.33).

• Solve (4.34) to determine $\vec{\omega}$. In doing this, (4.36) and (4.37) are very useful.

• Solve the Bubble Equations (4.46): this will put $(N-1)$ constraints on the separations of the $\vec{y}^{(j)}$.

• Check for global absence of CTC's and for causal stability: make sure (4.63) and (4.64) are satisfied.

• Compute the global charges: $Q_I$, $J_1$, $J_2$.

[Figure 3: Bubbling a black ring by blowing up new topological cycles on a GH space.]

One can easily count the moduli in a solution with $N$ centers. There are the $3(N-1)$ components of the $\vec{y}^{(j)} \in \mathbb{R}^3$, minus $(N-1)$ bubble equations, for a total of $2(N-1)$ geometric moduli. There are then $3(N-1)$ flux parameters, coming from the $3N$ values of the $k^I_i$ minus the "gauge transformations" (4.24). This gives $5(N-1)$ parameters. Finally, there are also three global rotations on $\mathbb{R}^3$ to be subtracted, reducing this to $5(N-1) - 3$. If one fixes the 3 electric charges, $Q_I$, and the angular momenta, $J_1$, $J_2$, then there remain $5(N-2) - 3$ free parameters. There are obviously a large number of degrees of freedom as $N$ becomes large. As we will discuss in the next lecture, this is only the tip of an iceberg: the bubbles can have shape modes, which means infinitely many Fourier coefficients.

There are many issues I have ignored. First, the flux parameters are necessarily quantized in string theory: see, for example, [35]. As we noted above, the geometric moduli must also be quantized [37-40]. This latter fact led to a remarkable triumph in the holographic field theory dual to these geometries.

4.11 Final comment: Geometric transitions

One of the motivations behind the discovery of microstate geometries is the important physical idea of geometric transitions. A geometric transition typically involves a topology change in which one kind of source is replaced by another. Here we are thinking of black holes, based on singular electric charge sources, being replaced by magnetic fluxes on 2-cycles. The beautiful thing about ambi-polar GH metrics is that they give this picture a natural realization.

One can start from a black ring in flat space:
$$V \;=\; \frac{1}{r}\,, \qquad \Theta^{(I)} \;=\; 0\,, \qquad Z_I \;=\; \frac{Q_I}{r_b}\,, \qquad r_b \;=\; \sqrt{x^2 + y^2 + (z-b)^2}\,. \qquad (4.80)$$
It is a black ring because it is singular around the entire $\psi$-fiber. One can then imagine "creating a pair" of GH points (see Fig. 3) in the neighborhood of the locus of the original black ring:
$$V \;=\; \frac{1}{r} + \frac{q}{r_+} - \frac{q}{r_-}\,, \qquad r_\pm \;=\; \sqrt{x^2 + y^2 + (z - b \pm a)^2}\,, \qquad (4.81)$$
and then replacing the singular source by fluxes on the bubbles. In this way one has "blown up a cycle" underneath the original black hole and created a microstate geometry instead. The new solution has to satisfy the bubble equations in order to be time-independent and BPS. Thus the separation, $2a$, of $r_\pm$ is fixed by the fluxes. However, one can imagine a dynamical process that nucleates such a bubble, which then grows to an equilibrium size set by the bubble equations. One can also blow down the bubble, and even do this while preserving the BPS property. If one takes $q$ to be large, while keeping the fluxes (and hence the charges) fixed, then the bubble equations require the separation, $2a$, of $r_\pm$ to get smaller.
While this is far from a continuous process ($q \in \mathbb{Z}$, and $q \to \infty$ is a singular limit), it captures the essence of a transition from a microstate geometry to a black object. Physically, we think of the geometric transition that forms a microstate geometry as a phase transition in the state of the matter system. This idea is also the basis of the holographic descriptions of many infra-red phases of strongly coupled quantum field theories.

5 Scaling microstate geometries

We have realized our initial goal of creating solitonic solutions, but the ultimate goal of the Microstate Geometry programme is to generate solitonic solutions that look like black holes to arbitrary precision, and then to determine to what extent they can capture the microstate structure of a black hole. (Again, we are only considering BPS black holes here.) In this respect, the most important classes of microstate geometries are the so-called scaling geometries.

Scaling solutions arise whenever there is a set of points, $\mathcal{S}$, for which the bubble equations admit homogeneous solutions [41,42,37,43,38]:
$$\sum_{\substack{j\in\mathcal{S} \\ j\neq i}} \frac{\Gamma_{ij}}{\big|\vec{y}^{(j)} - \vec{y}^{(i)}\big|} \;=\; 0\,, \qquad i \in \mathcal{S}\,. \qquad (5.1)$$
It then follows that such a cluster of points can be scaled:
$$\vec{y}^{(j)} - \vec{y}^{(i)} \;\to\; \lambda\,\big(\vec{y}^{(j)} - \vec{y}^{(i)}\big)\,, \qquad (5.2)$$
for $\lambda \in \mathbb{R}$, and one can then examine the limit in which $\lambda \to 0$. The geometries are, of course, required to satisfy (4.46) and not (5.1). However, given a solution of (5.1), one can easily make infinitesimal perturbations of the points, $\vec{y}^{(i)}$, and if $|\vec{y}^{(j)} - \vec{y}^{(i)}|$ is sufficiently small this will generate finite terms on the right-hand side of (5.1), and these can be used to generate solutions to the full bubble equations (4.46). In this way, the moduli space of physical solutions that satisfy (4.46) can contain scaling solutions in which a set of points, $\mathcal{S}$, can approach one another arbitrarily closely.

The simplest example of this kind of behaviour comes from scaling triangles. Suppose that the $|\Gamma_{ij}|$, $i,j = 1,2,3$, satisfy the triangle inequalities:
$$|\Gamma_{13}| \;<\; |\Gamma_{12}| + |\Gamma_{23}| \quad\text{and cyclic}\,, \qquad (5.3)$$
which means that we may arrange the points so that
$$\big|\vec{y}^{(j)} - \vec{y}^{(i)}\big| \;=\; \lambda\,|\Gamma_{ij}|\,, \qquad (5.4)$$
for $\lambda \in \mathbb{R}^+$. The fluxes can then usually be arranged so that the homogeneous bubble equations, (5.1), are trivially satisfied, since each equation amounts to $\pm\lambda^{-1} \mp \lambda^{-1} = 0$. When the triangle has infinitesimal size, making infinitesimal deformations of the angles can be used to generate solutions to the original bubble equations (4.46). In particular, in a physical solution to (4.46) with three fluxes that obey (5.3), one can make the three points approach one another arbitrarily closely by adjusting the angles in the triangle so that they approach the angles in the triangle defined by (5.4). (A toy numerical version of such a triangle is sketched below.)

[Figure 4: The effect of back-reaction on scaling geometries. As the configuration scales to zero size in $\mathbb{R}^3$, it actually retains its physical size in the complete geometry while descending an AdS throat.]

The existence of scaling solutions to the bubble equations was first noted in [41,42], in which the bubble equations emerged as integrability conditions. However, from the four-dimensional perspective of [41,42], this appeared to be a singular limit of a multi-black-hole solution. It was subsequently shown that, from the perspective of five-dimensional supergravity, this limit is not only non-singular but also defines perhaps the most important class of physical solutions [37,43,38].
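Here is the promised toy illustration of a scaling triangle, per (5.1)-(5.4). The $|\Gamma_{ij}|$ values are hypothetical, chosen only to satisfy the triangle inequalities; the arrangement of flux signs needed for the $\pm\lambda^{-1}$ cancellation is assumed.

```python
import numpy as np

def triangle_positions(G12, G23, G13, lam):
    """Three centers in a plane with |y1-y2| = lam*|G12|, |y2-y3| = lam*|G23|,
    |y1-y3| = lam*|G13|; the |G_ij| must satisfy the triangle inequalities."""
    a, b, c = lam*abs(G12), lam*abs(G23), lam*abs(G13)
    x3 = (a**2 + c**2 - b**2)/(2.*a)
    return np.array([[0., 0., 0.],
                     [a, 0., 0.],
                     [x3, np.sqrt(c**2 - x3**2), 0.]])

for lam in (1.0, 0.1, 0.001):          # the whole cluster shrinks with lam...
    y = triangle_positions(3., 4., 5., lam)
    seps = [np.linalg.norm(y[i] - y[j]) for i, j in ((0, 1), (1, 2), (0, 2))]
    print(lam, np.round(seps, 6))      # ...but the ratios stay fixed at 3:4:5
```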
Suppose that we have a scaling cluster, $\mathcal{S}$, that is centred on the origin, $r = 0$. Let $\epsilon$ be the largest separation (in $\mathbb{R}^3$) between points in $\mathcal{S}$, and let $\eta$ be the smallest distance between a point in $\mathcal{S}$ and a point, $\vec{y}^{(i)}$, that is not in $\mathcal{S}$. Assume that $\epsilon \ll \eta$ and, for simplicity, suppose that the total geometric charge of the cluster is unity: $q_{\mathcal{S}} \equiv \sum_{i\in\mathcal{S}} q_i = 1$. In the intermediate range of $r$, in which $\epsilon \ll r \ll \eta$, one has $V \sim \frac{1}{r}$, and all the other functions $K^I$ and $L_I$ behave as $\mathcal{O}(r^{-1})$. This means that, in the intermediate region, $Z_I \sim \frac{Q_{I,\mathcal{S}}}{4r}$, where the $Q_{I,\mathcal{S}}$ are the electric charges associated with the scaling cluster. Using this in (3.29) and (4.1), we see that the metric in the intermediate region becomes:
$$ds^2_5 \;=\; -\frac{16\, r^2}{a^4}\,(dt + k)^2 + \frac{a^2}{4}\,\frac{dr^2}{r^2} + \frac{a^2}{4}\,\big[(d\psi + \cos\theta\, d\phi)^2 + d\theta^2 + \sin^2\theta\, d\phi^2\big]\,, \qquad (5.5)$$
where $a = (Q_{1,\mathcal{S}}\, Q_{2,\mathcal{S}}\, Q_{3,\mathcal{S}})^{1/6}$. This is the metric of an AdS$_2 \times$ S$^3$ throat of a rotating, extremal black hole.

There are several important consequences of this result. First, such scaling clusters look almost exactly like extremal black holes, except that they "cap off" in a collection of bubbles just above where the horizon would be for the extremal black hole (see Fig. 4). [Footnote 6: From the perspective of an infalling observer.] Moreover, while it appears, from the perspective of the $\mathbb{R}^3$ base, that the bubbles are collapsing in the scaling limit, they are, in fact, simply creating an AdS throat and descending down it as it forms. The physical size of the bubbles approaches a large, finite value whose scale is set by the radius, $a$, of the S$^3$ of the throat, which corresponds to the horizon of the would-be black hole. Thus the scaling microstate geometries represent deep bound states of bubbles that realize the goal of creating smooth, solitonic solutions that look like BPS black holes. One obtains similar results for black rings from scaling clusters whose net geometric charge, $q_{\mathcal{S}}$, is zero.

The fact that one can adjust classical parameters so that the scaling points approach one another arbitrarily closely means that the AdS throat can be made arbitrarily deep. However, the angular momentum, (4.77), depends, via (4.78), upon the details of the locations of the points, and when angular momentum is quantized this will lead to a discretization of the moduli space and will limit the depth of simple scaling solutions like those based on scaling triangles [38]. More generally, it was proposed in [38], and then proven in [39], that the individual contributions, $\vec{J}_L^{\,ij}$, in (4.78) must be separately quantized, and so, upon quantization, the classical moduli space is completely discrete. This has the very interesting physical consequence that, even though very long, deep throats are macroscopic regions of space-time in which the curvature length scale can be uniformly bounded well above the Planck scale, quantum effects can wipe out such regions of space-time.
This also has very important implications for the dual holographic theory and, in particular, leads to the correct holographic prediction of the energy gap of the typical sector of the conformal field theory [37-40].

6 Studying microstate geometries

So far I have taken a more pedagogical approach to these lectures. For the remaining time, I will survey some of the broader ideas about microstate geometries. This overview is necessarily incomplete and rather idiosyncratic.

6.1 Stringy and M-theory realizations

To keep things simple, the focus so far has been on simple microstate geometries in five-dimensional supergravity coupled to vector multiplets. While solitons in five dimensions are interesting in their own right, the primary goal of the microstate geometry program is to study solitons and associated microstructure in string theory and M-theory.

First, it should be remembered that one can obtain five-dimensional supergravity coupled to vector multiplets from dimensional reduction of M-theory on a Calabi-Yau complex 3-fold. Indeed, the simplest way to get to the model introduced in Section 3.2 is to compactify M-theory on a $T^6$. Decomposing the $T^6$ into three $T^2$'s, labelled by $(x^5, x^6)$, $(x^7, x^8)$ and $(x^9, x^{10})$, one reduces the eleven-dimensional fields to the five-dimensional fields according to:
$$ds^2_{11} \;=\; ds^2_5 + \big(Z_2 Z_3 Z_1^{-2}\big)^{\frac{1}{3}}\big(dx_5^2 + dx_6^2\big) + \big(Z_1 Z_3 Z_2^{-2}\big)^{\frac{1}{3}}\big(dx_7^2 + dx_8^2\big) + \big(Z_1 Z_2 Z_3^{-2}\big)^{\frac{1}{3}}\big(dx_9^2 + dx_{10}^2\big)\,,$$
and
$$C^{(3)} \;=\; A^{(1)} \wedge dx^5 \wedge dx^6 + A^{(2)} \wedge dx^7 \wedge dx^8 + A^{(3)} \wedge dx^9 \wedge dx^{10}\,. \qquad (6.1)$$
Note how the five-dimensional scalars appear as the volumes of the $T^2$'s. The three charges in five dimensions now have the interpretation of M2-brane charges. Similarly, one can get the five-dimensional supergravity via a $T^5$ or K3 $\times$ S$^1$ compactification of IIB supergravity. It is, of course, the reverse of this perspective that is important: microstate geometries are a very natural part of string theory, and the deeper study of the properties of microstate geometries must necessarily be informed by string theory and what it tells us about the states of matter in such geometries.

6.2 Holographic field theory

Holography has had a very great impact on our understanding of the physics of microstate geometries. Basically, if there is a string theory background that is sourced by some brane charges and the geometry, in some limit, becomes that of AdS$_{p+1} \times$ S$^{D-p-1}$, then the general "stringy" expectation is that there should be an underlying dual, strongly-coupled (quantum) CFT in $p$ dimensions lying on (some part of) the underlying branes. In particular, an AdS$_3$ should have a dual $(1+1)$-dimensional conformal field theory. This is a very general principle in string theory, and it comes from the fact that strings can either be closed, and describe the gravity sector, or be open, ending on D-branes, and thus encoding the field theory on the branes. The power and importance of this body of ideas is that it gives one two different ways to study the same problem: one can study gravity through the dual CFT, or study the CFT through the gravity.

A particularly important form of this arises in the study of microstate geometries in six dimensions. Such geometries may be viewed as simple uplifts of the five-dimensional geometries we have been studying here.
Specifically, one can work with the IIB compactification on a $T^4$ to six dimensions. The five-dimensional black holes and microstate geometries are then elementary S$^1$ compactifications of this six-dimensional theory, and so it is easy to pass from one formulation to the other. The importance of the six-dimensional formulation is that supersymmetric black holes have infinitely long AdS$_3$ throats. There is then a dual $(1+1)$-dimensional CFT, and the world-volume of this CFT lies along the S$^1$ of the compactification from six to five dimensions. One can then study the microstates of the black hole by studying this CFT and its excitations. This is how we now understand the first detailed accounting of supersymmetric microstate structure, achieved by Strominger and Vafa in 1996 [44]: the microstates are momentum excitations of the D1-D5 CFT. The holographic duals of these states have given rise to another class of microstate geometries known as superstrata [45-47].

The holography of microstate geometries raises a host of interesting questions. For example, what CFT states are described by the full range of multi-bubbled scaling microstate geometries? Or what are the CFT states described by small fluctuations around a scaling microstate geometry? Using superstrata, we have made a great deal of progress on the latter question for the simplest scaling microstate geometries, but the former question remains unanswered in anything other than very broad-brush ideas about renormalization-group flows to tensor-product theories.

6.3 Excitations of microstate geometries

We have focussed entirely on microstate geometries that exist in five dimensions and whose dynamics is trivial in the compactified dimensions. This raises two obvious questions: have we obtained the most general possible microstate geometry in five dimensions, and are there new possibilities if we allow non-trivial dynamics in the extra dimensions? The answers to these questions are no and yes.

6.3.1 Fluctuating microstate geometries

Most obviously, we chose a Gibbons-Hawking base metric, and there are certainly more general hyper-Kähler Riemannian base metrics, and there are almost certainly even richer families of hyper-Kähler ambi-polar base metrics. The problem here is simply computational: outside the GH geometries, there are some explicitly known hyper-Kähler Riemannian metrics in four dimensions, but they are either too symmetric to be very useful [48], or too complicated to enable explicit computation (see, for example, [49-52]). Essentially nothing is known about ambi-polar hyper-Kähler metrics beyond the Gibbons-Hawking family.

Putting this issue aside, one can ask if one has the complete set of solutions to the BPS equations, (3.63)-(3.65). Again the answer is no. As mentioned above, we now have superstrata, which are families of fluctuating BPS solutions to (3.63): there are closed, self-dual 2-forms, $\Theta^{(I)}$, that depend on non-trivial modes along $\psi$ and $\phi$. Indeed, these solutions depend on arbitrary Fourier modes of the form $n\phi + m\psi$ with $n \geq m$. Thus the magnetic fluxes can fluctuate and, through the back-reaction to these fluctuations, the bubbles can develop shape modes ... and still be BPS.
These solutions were first obtained [45,53,46,47] from six-dimensional microstate geometries that fluctuate as functions of $(v, \psi, \phi)$, where $v$ is a coordinate along the extra S$^1$ of the six-dimensional theory. The beauty of the six-dimensional formulation is that one can establish a precise correspondence between the fluctuations of the superstrata and specific excitations of the dual D1-D5 CFT.

So far, such fluctuating solutions have only been constructed on single-bubble solutions. It is an open problem how to construct such fluctuating solutions on multi-bubbled geometries, and whether these fluctuations are constrained through matching conditions at intersection points of the bubbles. There is, however, a very interesting proposal [54] for how one might generalize superstrata to multi-centered geometries. Since the holography of multi-bubbled solutions is also an open problem, the entire holographic interpretation of fluctuating multi-bubbled solutions is completely new territory.

6.3.2 Other excitations

In string theory, once one has non-trivial topological cycles, it is very interesting to ask about the new fields and excitations that come from wrapping branes around the cycles. This was, for example, how Hull and Townsend [55] showed how to get the W-bosons of the $E_8 \times E_8$ string from the type II string compactified on K3. One can do the same kind of thing in microstate geometries, and some of the corresponding supergravity solutions were investigated in [56-59]. It is even more interesting to examine the classes of states and field theories that emerge from such brane wrapping of cycles in microstate geometries [60]. In particular, one finds vast numbers of massive states that are expected to become massless as the microstate geometry gets deeper and deeper. It was shown in [60] that the number of such states grows exponentially with the black-hole charges and, when they become massless, they will yield a leading contribution to the entropy. From the perspective of holographic field theory, these states seem to be avatars of the Higgs branch of the dual CFT.

The wrapped-brane states are extremely rich and complex because they also involve the physics of the compactified dimensions in very non-trivial ways. The simplest such wrapped branes are point-like in the extra dimensions, but they feel the magnetic fluxes threading the extra dimensions and so are expected to settle into Landau orbits in the magnetic fields. It is this structure that leads to their vast degeneracy.

6.4 Underlying mathematical structure

Ambi-polar, hyper-Kähler metrics are a whole new world that mathematicians are just beginning to explore [61-63,54]. The crucial observation that makes it all mathematically accessible is that the five-dimensional metric, (3.29), is smooth and Lorentzian, or, better, that the four-dimensional metric, (4.44), is smooth and Riemannian. Perturbative computations strongly suggest that there should be new, rich families of ambi-polar hyper-Kähler metrics that have yet to be discovered. Riemannian Kähler manifolds also have an extremely beautiful mathematical structure that relates moduli of the metric to harmonic analysis of 2-forms.
The general theory of this for ambi-polar metrics has yet to be developed, but the calculations described in Section 6.3.1 show that there are infinite families of harmonic forms, and so one expects infinite-dimensional families of metric moduli. These expectations of infinite families of harmonic forms and metric moduli fly in the face of "Riemannian" experience, but the evanescent ergosurfaces and their "acceptable" singular structure open up a vast new set of possibilities, and what is "acceptable," and what is not, remains to be understood. In practice, the ultimate arbiter of acceptability is the regularity of the full Maxwell field (3.30) and the full metric (3.29).

Finally, I cannot resist noting that the original Riemannian ALE spaces have an intersection form isomorphic to the Cartan matrix of the Lie algebra of SU(n), and the Weyl group of SU(n) emerges as the monodromy group acting on the cycles as one moves around the moduli space of the metric. It would be extremely interesting to understand the ambi-polar generalization of this story. It was suggested in [54] that this may involve an extension to super Lie algebras.

6.5 Probing microstate geometries

There are many ways to try to probe microstate geometries, and even the simplest geodesic probes produce some extremely interesting results. Gravitational tidal forces are dictated by the Riemann tensor, and for ordinary black holes one has, simply on dimensional grounds,
$$R_{\mu\nu\rho\sigma}\, R^{\mu\nu\rho\sigma} \;\sim\; \frac{m^2}{r^6}\,. \qquad (6.2)$$
At the horizon one has $r \sim m$, and hence
$$R_{\mu\nu\rho\sigma}\, R^{\mu\nu\rho\sigma} \;\sim\; \frac{1}{m^4}\,. \qquad (6.3)$$
This means that the tidal stresses at the horizon scale decrease as the mass of the black hole gets larger.

A more careful analysis of this class of problem involves geodesic deviation. That is, one considers a family of geodesics with proper velocities denoted by $V^\mu = \frac{dx^\mu}{d\tau}$. One defines the "deviation vector," $S^\rho$, to be a space-like displacement across the family of geodesics. Indeed, by appropriately synchronizing the proper time between neighboring geodesics, one can arrange $S^\rho V_\rho = 0$. This means that $S^\rho$ is a space-like vector in the rest-frame of the geodesic observer. One can also re-scale $S^\mu$ at any one point so that $S^\mu S_\mu = 1$, and it therefore represents a unit displacement across the family. The tidal forces are then measured by the relative acceleration, $A^\mu \equiv \frac{D^2 S^\mu}{d\tau^2}$, of neighboring geodesics. A straightforward calculation leads to the equation of geodesic deviation:
$$A^\mu \;\equiv\; \frac{D^2 S^\mu}{d\tau^2} \;=\; -R^\mu{}_{\nu\rho\sigma}\, V^\nu S^\rho V^\sigma\,. \qquad (6.4)$$
The skew-symmetry of the Riemann tensor means that $A^\mu V_\mu = 0$, and so the tidal acceleration is similarly space-like, representing the tidal stress in the rest-frame of the infalling observer with velocity $V^\mu$. To find the largest stress, one can maximize the norm, $\sqrt{A^\mu A_\mu}$, of $A^\mu$ over all choices of $S^\mu$, subject to the constraint $S^\mu S_\mu = 1$. To analyze the stress forces it is convenient to introduce what is sometimes called the "tidal tensor":
$$\mathcal{A}^\mu{}_\rho \;\equiv\; -R^\mu{}_{\nu\rho\sigma}\, V^\nu V^\sigma\,, \qquad (6.5)$$
and consider its norm. Indeed, we define
$$|\mathcal{A}| \;\equiv\; \sqrt{\mathcal{A}^\mu{}_\rho\, \mathcal{A}^\rho{}_\mu}\,. \qquad (6.6)$$
Note that, since $V^\mu = \frac{dx^\mu}{d\tau}$ is dimensionless, $\mathcal{A}$ has the same dimensions as the curvature tensor, $({\rm length})^{-2}$. The important thing about black holes is that they typically have only one scale, the mass, $m$, and so one finds the natural variant of (6.3):
$$|\mathcal{A}| \;\sim\; \frac{1}{m^2}\,, \qquad (6.7)$$
which means that the tidal forces get very small as the mass grows larger.

The important thing about microstate geometries is that they necessarily have multiple scales. [Footnote 7: This is also hugely important in the dual holographic field theory.] In particular, there is the scale at the top of the black-hole throat, $r \sim b$, which is usually set by the mass, $m$, and there is the scale at the "bottom" of the throat, $r \sim a$, at which one encounters the cap. The scale, $a$, is determined by moduli, but, as noted above, these moduli are quantized, and $a$ cannot be made arbitrarily small. In this description, taking $a \to 0$ corresponds to the black-hole limit, in which $r = 0$ corresponds to the horizon. The ratio, $\frac{b}{a}$, determines the red-shift between the cap and the top of the throat, and for a "typical" microstate geometry it is extremely large.

Given that there are now (at least) two scales, there are more possibilities for the right-hand side of (6.7). In particular, explicit calculations [64,65] show that there are generically terms of the form
$$|\mathcal{A}| \;\sim\; \frac{a^2\, b^2}{r^6}\,. \qquad (6.8)$$
This may be thought of as a higher multipole moment of the metric, and it is induced by the presence of a cap. Note that it vanishes in the black-hole limit ($a \to 0$). As one gets near the cap, $r \to a$, this tidal tensor becomes extremely large, as one might expect:
$$|\mathcal{A}| \;\sim\; \frac{b^2}{a^4}\,. \qquad (6.9)$$
In this context, "large" means large compared to a mixture of the compactification scale and the string scale, and so other stringy or Kaluza-Klein modes will become important. If the tidal forces become "large," the probe will either discover that space-time is compactified or that it is made of strings.

However, there is an even more important transition: if the probe is dropped from the top of the throat, then it will encounter the deviations from the black-hole metric while traveling at ultra-relativistic speeds. Indeed, the tidal tensor, (6.5), involves factors of the velocity, and for a particle released from rest at $r \sim b$, this adds a further factor of $b$ to the tidal force [64,65]. As a result, one finds
$$|\mathcal{A}| \;\gtrsim\; 1 \quad\Leftrightarrow\quad r \;\sim\; \sqrt{a\, b}\,. \qquad (6.10)$$
In other words, the ultra-relativistic speeds create the transition to the stringy, or Kaluza-Klein, phase long before the cap: in logarithmic terms, about "half-way down" the throat. This is expected to greatly influence our understanding of how matter "scrambles" into microstate structure.
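The "half-way down" statement is just the identity $\log(b/\sqrt{ab}) = \tfrac{1}{2}\log(b/a)$. A two-line check with toy scales (my numbers, purely illustrative):

```python
import numpy as np

a, b = 1.0, 1.0e6                        # toy cap scale and throat-top scale
r_trans = np.sqrt(a*b)                   # transition radius of eq. (6.10)
print(np.log(b/r_trans) / np.log(b/a))   # -> 0.5: half-way down, in log terms
```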
6.6 Non-BPS microstate geometries

It would be remiss to finish these lectures without saying something about non-BPS microstate geometries. Finding explicit non-BPS solutions has become a very large enterprise. The starting point of this enterprise was the JMaRT solution [66]. More systematic methods were subsequently developed, and they resulted in solutions in which the supersymmetry was broken in a controlled manner via gravitational holonomy [67-69]. In the last few years, an even more general systematic approach has been developed [70-72], and this has produced a rich family of examples. The basic idea is that, with enough symmetry, one can reduce a supergravity solution to an effectively two-dimensional problem. The dynamics can then be expressed in terms of a scalar sigma model. This can then be solved by an array of methods, ranging from inverse-scattering techniques through to exploiting special properties of nilpotent sub-algebras.

Ideally, one would like to develop inverse-scattering methods to find generic non-extremal microstate geometries. While there has been some progress in this direction [73-75], the complexity of the underlying supergravity theory makes this approach extremely challenging.

The problem with the non-BPS and non-extremal microstate geometries constructed so far is that they seem to be very atypical outliers in the space of generic black-hole microstates. Correspondingly, they are dual to extremely exotic coherent states in the black-hole CFT. I suspect that this limitation is caused by the limitations of analytic computation rather than by the physics of microstate geometries. It is possible that analytic methods may improve, but it also seems that numerical methods have progressed sufficiently far (see, for example, [76,77]) that it will soon be feasible to start numerical searches for more generic non-extremal microstate geometries.

Finally, a promising and, as yet, relatively unexplored approach to non-extremal microstate geometries can be made using perturbation theory to construct near-BPS solutions. Just as there are BPS fluctuations of bubbles in microstate geometries, there are supersymmetry-breaking fluctuations of microstate geometries. One also expects the large-scale BPS bubbled geometries to be stable against such small fluctuations, and so one might hope to find explicit solutions. The challenge will be to handle the gravitational and electromagnetic radiation that will be generated by such fluctuations but, with luck, the radiation field will be a second-order correction to such non-BPS solutions. Such perturbations would provide a direct way to create solutions with small Hawking temperature, and the radiation they generate will presumably be the microstate analog of Hawking radiation.

7 A last comment

In Section 3.3, I established the five-dimensional version of the "No Solitons without Topology" theorem. At least to this extent, microstate geometries represent the only viable gravitational mechanism that produces smooth, horizonless geometries that look just like black holes up until one is arbitrarily close to the horizon scale. As a result, any classical, smooth, horizonless geometry must be some form of microstate geometry. Moreover, if you try to replace a black hole with a strongly quantum pile of mush, then this pile of mush should, near the horizon scale, have some form of semi-classical limit in order for it to be consistent with the General Relativistic calculations that underpin the results of LIGO. It is thus not much more of a stretch for the quantum pile of mush to have a semi-classical limit that takes one very close to the horizon scale. If there is such a semi-classical limit, it must be described by a microstate geometry. Similarly, if you want to study a quantum system near the horizon scale, then microstate geometries provide the only possible classical mechanism to support the quantum system you want to study.

It is also important to recall one of the lessons coming from 20 years of holographic field theory.
The gravity sector of the duality is extremely good at capturing the large-scale collective effects of the dual strongly-coupled quantum system. Supergravity is a fairly crude instrument when it comes to the details of the microstructure, but it excels in capturing the large-scale bulk expression of the quantum system. It therefore seems that microstate geometries must emerge as one aspect of the universal, large-scale, strong-coupling expression of the quantum systems that underpin black-hole microstate structure. In particular, this gives one hope that microstate geometries can capture the universal physics that will lead to a large-scale, hydrodynamic description of the quantum microstates of black holes, much as holography provided hydrodynamic insight into the properties of quark-gluon plasmas. For example, one might reasonably hope to use microstate geometries to compute the effective viscosity created by horizon-scale microstructure.

In short, whether or not you accept the entire microstate-geometry/fuzzball paradigm, microstate geometries still provide one of the best ways to probe collective large-scale effects of black-hole microstructure and to study quantum systems close to the horizon scale.

Acknowledgments

I am indebted to all my collaborators (especially Iosif Bena) and colleagues who were instrumental in developing many of the ideas described in these lectures. I would like to thank Riccardo Guida and Marco Schiro for encouraging me to give these lectures and for handling the logistics. I would also like to thank Dominique Hirondel for his careful reading of these notes and for sending me very helpful corrections. I am very grateful to the IPhT, CEA-Saclay for support and hospitality, as well as for providing a forum for these lectures. This work was supported in part by ERC Grant number 787320 (QBH Structure) and by DOE grant DE-SC0011687.