{ "url": "http://arxiv.org/abs/2404.16745v1", "title": "Statistical Inference for Covariate-Adjusted and Interpretable Generalized Factor Model with Application to Testing Fairness", "abstract": "In the era of data explosion, statisticians have been developing\ninterpretable and computationally efficient statistical methods to measure\nlatent factors (e.g., skills, abilities, and personalities) using large-scale\nassessment data. In addition to understanding the latent information, the\ncovariate effect on responses controlling for latent factors is also of great\nscientific interest and has wide applications, such as evaluating the fairness\nof educational testing, where the covariate effect reflects whether a test\nquestion is biased toward certain individual characteristics (e.g., gender and\nrace) taking into account their latent abilities. However, the large sample\nsize, substantial covariate dimension, and great test length pose challenges to\ndeveloping efficient methods and drawing valid inferences. Moreover, to\naccommodate the commonly encountered discrete types of responses, nonlinear\nlatent factor models are often assumed, bringing further complexity to the\nproblem. To address these challenges, we consider a covariate-adjusted\ngeneralized factor model and develop novel and interpretable conditions to\naddress the identifiability issue. Based on the identifiability conditions, we\npropose a joint maximum likelihood estimation method and establish estimation\nconsistency and asymptotic normality results for the covariate effects under a\npractical yet challenging asymptotic regime. Furthermore, we derive estimation\nand inference results for latent factors and the factor loadings. We illustrate\nthe finite sample performance of the proposed method through extensive\nnumerical studies and an application to an educational assessment dataset\nobtained from the Programme for International Student Assessment (PISA).", "authors": "Jing Ouyang, Chengyu Cui, Kean Ming Tan, Gongjun Xu", "published": "2024-04-25", "updated": "2024-04-25", "primary_cat": "stat.ME", "cats": [ "stat.ME" ], "label": "Original Paper", "paper_cat": "LLM Fairness", "gt": "Latent factors, often referred to as hidden factors, play an increasingly important role in modern statistics to analyze large-scale complex measurement data and find wide-ranging applications across various scientific fields, including educational assessments (Reckase 2009, Hambleton & Swaminathan 2013), macroeconomics forecasting (Stock & Watson 2002, Lam et al. 2011), and biomedical diagnosis (Carvalho et al. 2008, Frichot et al. 2013). For instance, in educational testing and social sciences, latent factors are used to model unobservable traits of respondents, such as skills, personality, and attitudes (von Davier Matthias 2008, Reckase 2009); in biology and genomics, latent factors are used to capture underlying genetic factors, gene expression patterns, or hidden biological mechanisms (Carvalho et al. 2008, Frichot et al. 2013). To uncover the latent factors and analyze large-scale complex data, various latent factor models have been developed and extensively investigated in the existing literature (Bai 2003, Bai & Li 2012, Fan et al. 2013, Chen et al. 2023b, Wang 2022). In addition to measuring the latent factors, the observed covariates and the covariate effects conditional on the latent factors hold significant scientific interpretations in many applications (Reboussin et al. 2008, Park et al. 2018). 
One important application is testing fairness, which has received increasing attention in the fields of education, psychology, and social sciences (Candell & Drasgow 1988, Belzak & Bauer 2020, Chen et al. 2023a). In educational assessments, testing fairness, or measurement invariance, implies that groups from diverse backgrounds have the same probability of endorsing the test items, controlling for individual proficiency levels (Millsap 2012). Testing fairness is not only of scientific interest to psychometricians and statisticians but also attracts widespread public attention (Toch 1984). In the era of rapid technological advancements, international and large-scale educational assessments are becoming increasingly prevalent. One example is the Programme for International Student Assessment (PISA), a large-scale international assessment with substantial sample size and test length (OECD 2019). PISA assesses the knowledge and skills of 15-year-old students in the mathematics, reading, and science domains (OECD 2019). In PISA 2018, over 600,000 students from 37 OECD (Organisation for Economic Co-operation and Development) countries and 42 partner countries/economies participated in the test (OECD 2019). To assess the fairness of test designs in such large-scale assessments, it is important to develop modern and computationally efficient methodologies for interpreting the effects of observed covariates (e.g., gender and race) on the item responses, controlling for the latent factors. However, the discrete nature of the item responses, the increasing sample size, and the large number of test items in modern educational assessments pose great challenges for estimation and inference on the covariate effects as well as on the latent factors. For instance, in educational and psychological measurement, such a testing fairness issue (measurement invariance) is typically assessed by differential item functioning (DIF) analysis of item response data, which aims to detect DIF items; a DIF item has a response distribution that depends not only on the measured latent factors but also on respondents' covariates (such as group membership). Although many statistical methods have been developed for DIF analysis, existing methods often require domain knowledge to pre-specify DIF-free items, namely anchor items, which may be misspecified and lead to biased estimation and inference results (Thissen 1988, Tay et al. 2016). To address this limitation, researchers developed item purification methods that iteratively select anchor items through stepwise selection models (Candell & Drasgow 1988, Fidalgo et al. 2000, Kopf et al. 2015). More recently, tree-based methods (Tutz & Berger 2016), regularized estimation methods (Bauer et al. 2020, Belzak & Bauer 2020, Wang et al. 2023), item pair functioning methods (Bechger & Maris 2015), and many other non-anchor-based methods have been proposed. However, these non-anchor-based methods do not provide valid statistical inference guarantees for testing the covariate effects. It remains an open problem to perform statistical inference on the covariate effects and the latent factors in educational assessments. To address this open problem, we study statistical estimation and inference for a general family of covariate-adjusted nonlinear factor models, which includes the popular factor models for binary, count, continuous, and mixed-type data that commonly occur in educational assessments. 
The nonlinear model setting poses great challenges for estimation and statistical inference. Despite recent progress in the factor analysis literature, most existing studies focus on estimation and inference under linear factor models (Stock & Watson 2002, Bai & Li 2012, Fan et al. 2013) and covariate-adjusted linear factor models (Leek & Storey 2008, Wang et al. 2017, Gerard & Stephens 2020, Bing et al. 2024). The techniques employed in linear factor model settings are not applicable here due to the nonlinearity inherent in the general models under consideration. Recently, several researchers have also investigated parameter estimation and inference for generalized linear factor models (Chen et al. 2019, Wang 2022, Chen et al. 2023b). However, they either focus only on the overall consistency properties of the estimation or do not incorporate covariates into the models. In a concurrent work, motivated by applications in single-cell omics, Du et al. (2023) considered a generalized linear factor model with covariates and studied its inference theory, where the latent factors are used as surrogate variables to control for unmeasured confounding. However, they imposed relatively stringent assumptions on the sparsity of the covariate effects and the dimension of the covariates, and their theoretical results also rely on data splitting. Moreover, Du et al. (2023) focused only on statistical inference on the covariate effects, while inference on the factors and loadings, which is often of great interest in educational assessments, was left unexplored. Establishing inference results for covariate effects and latent factors simultaneously under nonlinear models remains an open and challenging problem, due to the identifiability issue arising from the incorporation of covariates and the nonlinearity of the considered general models. To overcome these issues, we develop a novel framework for performing statistical inference on all model parameters and latent factors under a general family of covariate-adjusted generalized factor models. Specifically, we propose a set of interpretable and practical identifiability conditions for identifying the model parameters, and further incorporate these conditions into the development of a computationally efficient likelihood-based estimation method. Under these identifiability conditions, we develop new techniques to address the aforementioned theoretical challenges and obtain estimation consistency and asymptotic normality for the covariate effects under a practical yet challenging asymptotic regime. Furthermore, building upon these results, we establish estimation consistency and provide valid inference results for the factor loadings and latent factors that are often of scientific interest, advancing our theoretical understanding of nonlinear latent factor models. The rest of the paper is organized as follows. In Section 2, we introduce the model setup of the covariate-adjusted generalized factor model. Section 3 discusses the associated identifiability issues and further presents the proposed identifiability conditions and estimation method. Section 4 establishes the theoretical properties for not only the covariate effects but also the latent factors and factor loadings. In Section 5, we perform extensive numerical studies to illustrate the performance of the proposed estimation method and the validity of the theoretical results. 
In Section 6, we analyze an educational testing dataset from the Programme for International Student Assessment (PISA) and identify test items that may lead to potential bias among different test-takers. We conclude with some potential future directions in Section 7. Notation: For any integer $N$, let $[N] = \{1, \ldots, N\}$. For any set $S$, let $\#S$ be its cardinality. For any vector $r = (r_1, \ldots, r_l)^\top$, let $\|r\|_0 = \#(\{j : r_j \neq 0\})$, $\|r\|_\infty = \max_{j=1,\ldots,l} |r_j|$, and $\|r\|_q = (\sum_{j=1}^l |r_j|^q)^{1/q}$ for $q \geq 1$. We define $1^{(y)}_x$ to be the $y$-dimensional vector with the $x$-th entry equal to 1 and all other entries equal to 0. For any symmetric matrix $M$, let $\lambda_{\min}(M)$ and $\lambda_{\max}(M)$ be the smallest and largest eigenvalues of $M$. For any matrix $A = (a_{ij})_{n \times l}$, let $\|A\|_{\infty,1} = \max_{j=1,\ldots,l} \sum_{i=1}^n |a_{ij}|$ be the maximum absolute column sum, $\|A\|_{1,\infty} = \max_{i=1,\ldots,n} \sum_{j=1}^l |a_{ij}|$ be the maximum absolute row sum, $\|A\|_{\max} = \max_{i,j} |a_{ij}|$ be the maximum absolute entry, $\|A\|_F = (\sum_{i=1}^n \sum_{j=1}^l |a_{ij}|^2)^{1/2}$ be the Frobenius norm of $A$, and $\|A\| = \sqrt{\lambda_{\max}(A^\top A)}$ be the spectral norm of $A$. Let $\|\cdot\|_{\varphi_1}$ denote the sub-exponential norm. Define the notation $A^v = \mathrm{vec}(A) \in \mathbb{R}^{nl}$ to indicate the vectorization of a matrix $A \in \mathbb{R}^{n \times l}$. Finally, we denote by $\otimes$ the Kronecker product.", "main_content": "Consider $n$ independent subjects with $q$ measured responses and $p^*$ observed covariates. For the $i$th subject, let $Y_i \in \mathbb{R}^q$ be a $q$-dimensional vector of responses corresponding to $q$ measurement items and $X_i^c \in \mathbb{R}^{p^*}$ be a $p^*$-dimensional vector of observed covariates. Moreover, let $U_i$ be a $K$-dimensional vector of latent factors representing unobservable traits such as skills and personalities, where we assume $K$ is specified, as in many educational assessments. We assume that the $q$-dimensional responses $Y_i$ are conditionally independent given $X_i^c$ and $U_i$. Specifically, we model the $j$th response for the $i$th subject, $Y_{ij}$, by the following conditional distribution: $Y_{ij} \sim p_{ij}(y \mid w_{ij})$, where $w_{ij} = \beta_{j0} + \gamma_j^\top U_i + \beta_{jc}^\top X_i^c$. (1) Here $\beta_{j0} \in \mathbb{R}$ is the intercept parameter, $\beta_{jc} = (\beta_{j1}, \ldots, \beta_{jp^*})^\top \in \mathbb{R}^{p^*}$ are the coefficient parameters for the observed covariates, and $\gamma_j = (\gamma_{j1}, \ldots, \gamma_{jK})^\top \in \mathbb{R}^K$ are the factor loadings. For better presentation, we write $\beta_j = (\beta_{j0}, \beta_{jc}^\top)^\top$ as an assembled vector of intercept and coefficients and define $X_i = (1, (X_i^c)^\top)^\top$ with dimension $p = p^* + 1$, which gives $w_{ij} = \gamma_j^\top U_i + \beta_j^\top X_i$. Given $w_{ij}$, the function $p_{ij}$ is some specified probability density (mass) function. Here, we consider a general and flexible modeling framework by allowing different types of $p_{ij}$ functions to model diverse response data in wide-ranging applications, such as binary item response data in educational and psychological assessments (Mellenbergh 1994, Reckase 2009) and mixed types of data in educational and macroeconomic applications (Rijmen et al. 2003, Wang 2022); see also Remark 1. A schematic diagram of the proposed model setup is presented in Figure 1. 
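To make the data-generating mechanism in (1) concrete, the following sketch simulates binary responses from the covariate-adjusted factor model with a logistic choice of $p_{ij}$; the dimensions, the correlation between covariates and factors, and the sparse DIF pattern below are illustrative assumptions rather than settings prescribed by the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
n, q, K, p_star = 500, 50, 2, 3            # subjects, items, factors, covariates (assumed sizes)

# Jointly draw latent factors U_i and covariates X_i^c, allowing them to be correlated.
cov = 0.3 * np.ones((K + p_star, K + p_star)) + 0.7 * np.eye(K + p_star)
Z = rng.multivariate_normal(np.zeros(K + p_star), cov, size=n)
U, Xc = Z[:, :K], Z[:, K:]

beta0 = rng.normal(size=q)                 # intercepts (difficulty parameters)
Gamma = rng.uniform(0.5, 1.5, size=(q, K)) # loadings (discrimination parameters)
Bc = np.zeros((q, p_star))                 # covariate (DIF) effects, mostly zero
Bc[:5, 0] = 0.5                            # assume the first five items are biased w.r.t. covariate 1

# w_ij = beta_j0 + gamma_j' U_i + beta_jc' X_i^c;  Y_ij ~ Bernoulli(logit^{-1}(w_ij)).
W = beta0[None, :] + U @ Gamma.T + Xc @ Bc.T
Y = rng.binomial(1, 1.0 / (1.0 + np.exp(-W)))   # n x q binary response matrix
```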
Figure 1: A schematic diagram of the proposed model in (1), with covariates $X_i \in \mathbb{R}^p$, latent factors $U_i \in \mathbb{R}^K$, and responses $Y_{ij} \in \mathbb{R}$, $j \in [q]$, connected through the coefficients $\beta_1, \ldots, \beta_q$ and loadings $\gamma_1, \ldots, \gamma_q$. The subscript $i$ indicates the $i$th subject, out of $n$ independent subjects. The response variable $Y_{ij}$ can be discrete or continuous. Our proposed covariate-adjusted generalized factor model in (1) is motivated by applications in testing fairness. In the context of educational assessment, a subject's responses to questions depend on latent factors $U_i$ such as students' abilities and skills, and are potentially affected by observed covariates $X_i^c$ such as age, gender, and race, among others (Linda M. Collins 2009). The intercept $\beta_{j0}$ is often interpreted as the difficulty level of item $j$ and referred to as the difficulty parameter in psychometrics (Hambleton & Swaminathan 2013, Reckase 2009). The capability of item $j$ to further differentiate individuals based on their latent abilities is captured by $\gamma_j = (\gamma_{j1}, \ldots, \gamma_{jK})^\top$, whose entries are also referred to as discrimination parameters (Hambleton & Swaminathan 2013, Reckase 2009). The effects of the observed covariates $X_i^c$ on the subject's response to the $j$th question $Y_{ij}$, conditional on the latent abilities $U_i$, are captured by $\beta_{jc} = (\beta_{j1}, \ldots, \beta_{jp^*})^\top$, which are referred to as DIF effects in psychometrics (Holland & Wainer 2012). This setting gives rise to the fairness problem of validating whether the response probabilities to the measurements differ across genders, races, or countries of origin while holding abilities and skills at the same level. Given the observed data from $n$ independent subjects, we are interested in studying the relationships between $Y_i$ and $X_i^c$ after adjusting for the latent factors $U_i$ in (1). Specifically, our goal is to test the statistical hypothesis $H_0: \beta_{js} = 0$ versus $H_a: \beta_{js} \neq 0$ for $s \in [p^*]$, where $\beta_{js}$ is the regression coefficient for the $s$th covariate and the $j$th response, after adjusting for the latent factor $U_i$. In many applications, the latent factors and factor loadings also carry important scientific interpretations, such as students' abilities and test items' characteristics. This motivates us to perform statistical inference on the parameters $\beta_{j0}$, $\gamma_j$, and $U_i$ as well. Remark 1. The proposed model setup (1) is general and flexible, as various functions $p_{ij}$ can be used to model diverse types of response data in wide-ranging applications. For instance, in educational assessments, the logistic factor model (Reckase 2009) with $p_{ij}(y \mid w_{ij}) = \exp(w_{ij} y)/\{1 + \exp(w_{ij})\}$, $y \in \{0, 1\}$, and the probit factor model (Birnbaum 1968) with $p_{ij}(y \mid w_{ij}) = \{\Phi(w_{ij})\}^y \{1 - \Phi(w_{ij})\}^{1-y}$, $y \in \{0, 1\}$, where $\Phi(\cdot)$ is the cumulative distribution function of the standard normal distribution, are widely used to model binary responses indicating correct or incorrect answers to the test items. Such models are often referred to as item response theory models (Reckase 2009). In economics and finance, linear factor models with $p_{ij}(y \mid w_{ij}) \propto \exp\{-(y - w_{ij})^2/(2\sigma^2)\}$, where $y \in \mathbb{R}$ and $\sigma^2$ is the variance parameter, are commonly used to model continuous responses, such as GDP, interest rates, and consumer indices (Bai 2003, Bai & Li 2012, Stock & Watson 2016). 
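The three choices of $p_{ij}$ listed in Remark 1 translate into the following individual log-likelihood contributions $l_{ij}(w_{ij}) = \log p_{ij}(Y_{ij} \mid w_{ij})$; this is a minimal sketch, and treating the Gaussian variance $\sigma^2$ as known is a simplifying assumption.

```python
import numpy as np
from scipy.stats import norm

def loglik_logistic(y, w):
    # Logistic factor model: log p(y | w) = w*y - log(1 + exp(w)), y in {0, 1}.
    return w * y - np.logaddexp(0.0, w)

def loglik_probit(y, w):
    # Probit factor model: log p(y | w) = y*log Phi(w) + (1 - y)*log Phi(-w).
    return y * norm.logcdf(w) + (1 - y) * norm.logcdf(-w)

def loglik_linear(y, w, sigma2=1.0):
    # Linear factor model with Gaussian responses and known variance sigma2.
    return -0.5 * (y - w) ** 2 / sigma2 - 0.5 * np.log(2 * np.pi * sigma2)
```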
Moreover, depending on the observed responses, different types of functions $p_{ij}$ can be used to model the response from each item $j \in [q]$. Therefore, mixed types of data, which are common in educational measurement (Rijmen et al. 2003) and macroeconomic applications (Wang 2022), can also be analyzed by our proposed model. Remark 2. In addition to testing fairness, the considered model finds wide-ranging applications in the real world. For instance, in genomics, the gene expression status may depend on unmeasured confounders or latent biological factors and also be associated with variables of interest including medical treatment, disease status, and gender (Wang et al. 2017, Du et al. 2023). The covariate-adjusted generalized factor model helps to investigate the effects of the variables of interest on gene expression, controlling for the latent factors (Du et al. 2023). This setting is also applicable to other scenarios, such as brain imaging, where the activity of a brain region may depend on measurable spatial distance from neighboring regions and on latent structures due to unmodeled factors (Leek & Storey 2008). To analyze large-scale measurement data, we aim to develop a computationally efficient estimation method and to provide inference theory for quantifying uncertainty in the estimation. Motivated by recent work in high-dimensional factor analysis, we treat the latent factors as fixed parameters and apply a joint maximum likelihood method for estimation (Bai 2003, Fan et al. 2013, Chen et al. 2020). Specifically, we let the collection of item responses from the $n$ independent subjects be $Y = (Y_1, \ldots, Y_n)^\top \in \mathbb{R}^{n \times q}$ and the design matrix of observed covariates be $X = (X_1, \ldots, X_n)^\top \in \mathbb{R}^{n \times p}$. For the model parameters, the discrimination parameters for all $q$ items are denoted as $\Gamma = (\gamma_1, \ldots, \gamma_q)^\top \in \mathbb{R}^{q \times K}$, while the intercepts and covariate effects for all $q$ items are denoted as $B = (\beta_1, \ldots, \beta_q)^\top \in \mathbb{R}^{q \times p}$. The latent factors from all $n$ subjects are $U = (U_1, \ldots, U_n)^\top \in \mathbb{R}^{n \times K}$. Then, the joint log-likelihood function can be written as follows: $L(Y \mid \Gamma, U, B, X) = \frac{1}{nq} \sum_{i=1}^n \sum_{j=1}^q l_{ij}(\beta_{j0} + \gamma_j^\top U_i + \beta_{jc}^\top X_i^c)$, (2) where the function $l_{ij}(w_{ij}) = \log p_{ij}(Y_{ij} \mid w_{ij})$ is the individual log-likelihood function with $w_{ij} = \beta_{j0} + \gamma_j^\top U_i + \beta_{jc}^\top X_i^c$. We aim to obtain $(\hat\Gamma, \hat U, \hat B)$ by maximizing the joint likelihood function $L(Y \mid \Gamma, U, B, X)$. While the estimators can be computed efficiently by maximizing the joint likelihood function through an alternating maximization algorithm (Collins et al. 2002, Chen et al. 2019), challenges emerge for performing statistical inference on the model parameters. • One challenge concerns model identifiability. Without additional constraints, the covariate effects are not identifiable due to the incorporation of covariates and their potential dependence on the latent factors. The latent factors and factor loadings encounter similar identifiability issues as in traditional factor analysis (Bai & Li 2012, Fan et al. 2013). Ensuring that the model is statistically identifiable is the fundamental prerequisite for achieving model reliability and making valid inferences (Allman et al. 2009, Gu & Xu 2020). • Another challenge arises from the nonlinearity of our proposed model. 
In the existing literature, most studies focus on statistical inference for our proposed setting in the context of linear models (Bai & Li 2012, Fan et al. 2013, Wang et al. 2017). On the other hand, settings with a general log-likelihood function $l_{ij}(w_{ij})$, including covariate-adjusted logistic and probit factor models, are less investigated. Common techniques for linear models are not applicable to the considered general nonlinear model setting. Motivated by these challenges, we propose interpretable and practical identifiability conditions in Section 3.1. We then incorporate these conditions into the joint-likelihood-based estimation method in Section 3.2. Furthermore, we introduce a novel inference framework for performing statistical inference on $\beta_j$, $\gamma_j$, and $U_i$ in Section 4. 3 Method 3.1 Model Identifiability Identifiability issues commonly occur in latent variable models (Allman et al. 2009, Bai & Li 2012, Xu 2017). The proposed model in (1) has two major identifiability issues. The first issue is that the proposed model remains unchanged after certain linear transformations of both $B$ and $U$, causing the covariate effects together with the intercepts, represented by $B$, and the latent factors, denoted by $U$, to be unidentifiable. The second issue is that the model is invariant under an invertible transformation of both $U$ and $\Gamma$, as in linear factor models (Bai & Li 2012, Fan et al. 2013), causing the latent factors $U$ and factor loadings $\Gamma$ to be undetermined. Specifically, under the model setup in (1), we define the joint probability distribution of the responses to be $P(Y \mid \Gamma, U, B, X) = \prod_{i=1}^n \prod_{j=1}^q p_{ij}(Y_{ij} \mid w_{ij})$. The model parameters are identifiable if and only if for any response $Y$, there does not exist $(\Gamma, U, B) \neq (\tilde\Gamma, \tilde U, \tilde B)$ such that $P(Y \mid \Gamma, U, B, X) = P(Y \mid \tilde\Gamma, \tilde U, \tilde B, X)$. The first issue, concerning the identifiability of $B$ and $U$, is that for any $(\Gamma, U, B)$ and any transformation matrix $A$, there exist $\tilde\Gamma = \Gamma$, $\tilde U = U + X A^\top$, and $\tilde B = B - \Gamma A$ such that $P(Y \mid \Gamma, U, B, X) = P(Y \mid \tilde\Gamma, \tilde U, \tilde B, X)$. This identifiability issue leads to the indeterminacy of the covariate effects and latent factors. The second issue is related to the identifiability of $U$ and $\Gamma$. For any $(\tilde\Gamma, \tilde U, \tilde B)$ and any invertible matrix $G$, there exist $\bar\Gamma = \tilde\Gamma (G^\top)^{-1}$, $\bar U = \tilde U G$, and $\bar B = \tilde B$ such that $P(Y \mid \tilde\Gamma, \tilde U, \tilde B, X) = P(Y \mid \bar\Gamma, \bar U, \bar B, X)$. This causes the latent factors and factor loadings to be unidentifiable. Remark 3. Intuitively, the unidentifiable $\tilde B = B - \Gamma A$ can be interpreted as including both direct and indirect effects of $X$ on the response $Y$. We take the intercept and covariate effect on the first item ($\tilde\beta_1$) as an example and illustrate it in Figure 2. One part of $\tilde\beta_1$ is the direct effect of $X$ on $Y$ (see the orange line in the left panel), whereas another part of $\tilde\beta_1$ may be explained through the latent factors $U$, as the latent factors are unobserved and there are potential correlations between the latent factors and the observed covariates. The latter part of $\tilde\beta_1$ can be considered the indirect effect (see the blue line in the right panel). 
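The two sources of non-identifiability described above can be checked numerically: shifting $(U, B)$ by any $A$ and rotating $(U, \Gamma)$ by any invertible $G$ leaves every $w_{ij}$, and hence the likelihood, unchanged. The dimensions below are arbitrary illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(1)
n, q, K, p = 200, 30, 2, 4                   # p includes the intercept column

U = rng.normal(size=(n, K))
X = np.column_stack([np.ones(n), rng.normal(size=(n, p - 1))])
Gamma = rng.normal(size=(q, K))
B = rng.normal(size=(q, p))

A = rng.normal(size=(K, p))                  # arbitrary transformation matrix
G = rng.normal(size=(K, K)) + 3 * np.eye(K)  # arbitrary (almost surely invertible) matrix

W = U @ Gamma.T + X @ B.T                    # w_ij collected in an n x q matrix

# First transformation: U -> U + X A', B -> B - Gamma A, Gamma unchanged.
# Second transformation: U -> U G, Gamma -> Gamma (G')^{-1}.
U_t = (U + X @ A.T) @ G
Gamma_t = Gamma @ np.linalg.inv(G.T)
B_t = B - Gamma @ A
W_t = U_t @ Gamma_t.T + X @ B_t.T

print(np.allclose(W, W_t))                   # True: identical w_ij, hence identical likelihood
```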
Figure 2: The direct effect (orange solid line in the left panel) and the indirect effect (blue solid line in the right panel) for item 1. The first identifiability issue is a new challenge introduced by the covariate adjustment in the model, whereas the second issue is common in traditional factor models (Bai & Li 2012, Fan et al. 2013). Considering the two issues together, for any $(\Gamma, U, B)$, $A$, and $G$, there exist transformations $\tilde\Gamma = \Gamma (G^\top)^{-1}$, $\tilde U = (U + X A^\top) G$, and $\tilde B = B - \Gamma A$ such that $P(Y \mid \Gamma, U, B, X) = P(Y \mid \tilde\Gamma, \tilde U, \tilde B, X)$. In the rest of this subsection, we propose identifiability conditions to address these issues. For notational convenience, throughout the rest of the paper we define $\phi^* = (\Gamma^*, U^*, B^*)$ as the true parameters. Identifiability Conditions As described earlier, the correlation between the design matrix of covariates $X$ and the latent factors $U^*$ results in the identifiability issue for $B^*$. In the psychometrics literature, the intercept $\beta_{j0}^*$ is commonly referred to as the difficulty parameter, while $\beta_{jc}^*$ represents the effects of the observed covariates, namely the DIF effects, on the response to item $j$ (Reckase 2009, Holland & Wainer 2012). The different scientific interpretations motivate us to develop different identifiability conditions for $\beta_{j0}^*$ and $\beta_{jc}^*$, respectively. Specifically, we propose a centering condition on $U^*$ to ensure the identifiability of the intercepts $\beta_{j0}^*$ for all items $j \in [q]$. On the other hand, to identify the covariate effects $\beta_{jc}^*$, a natural idea is to impose that the covariate effects $\beta_{jc}^*$ for all items $j \in [q]$ are sparse, as is done in many regularized methods and item purification methods (Candell & Drasgow 1988, Fidalgo et al. 2000, Bauer et al. 2020, Belzak & Bauer 2020). In Chen et al. (2023a), an interpretable identifiability condition is proposed for selecting sparse covariate effects, yet this condition is specific to uni-dimensional covariates. Motivated by Chen et al. (2023a), we propose the following minimal $\ell_1$ condition applicable to the general case where the covariates are multi-dimensional. To better present the identifiability conditions, we write $A = (a_0, a_1, \ldots, a_{p^*}) \in \mathbb{R}^{K \times p}$ and define $A_c = (a_1, \ldots, a_{p^*}) \in \mathbb{R}^{K \times p^*}$ as the part applied to the covariate effects. Condition 1. (i) $\sum_{i=1}^n U_i^* = 0_K$. (ii) $\sum_{j=1}^q \|\beta_{jc}^*\|_1 < \sum_{j=1}^q \|\beta_{jc}^* - A_c^\top \gamma_j^*\|_1$ for any $A_c \neq 0$. Condition 1(i) assumes the latent abilities $U^*$ are centered to ensure the identifiability of the intercepts $\beta_{j0}^*$, which is commonly assumed in the item response theory literature (Reckase 2009). Condition 1(ii) is motivated by practical applications. For instance, in educational testing, practitioners need to identify and remove biased test items, that is, items with non-zero covariate effects ($\beta_{js}^* \neq 0$). 
In practice, most of the designed items are unbiased, and therefore it is reasonable to assume that the majority of items have no covariate effects, that is, the covariate effects $\beta_{jc}^*$ are sparse (Holland & Wainer 2012, Chen et al. 2023a). Next, we present a sufficient and necessary condition for Condition 1(ii) to hold. Proposition 1. Condition 1(ii) holds if and only if for any $v \in \mathbb{R}^K \setminus \{0_K\}$, $\sum_{j=1}^q |v^\top \gamma_j^*| \, I(\beta_{js}^* = 0) > \sum_{j=1}^q \mathrm{sign}(\beta_{js}^*) \, v^\top \gamma_j^* \, I(\beta_{js}^* \neq 0)$ for all $s \in [p^*]$. (3) Remark 4. Proposition 1 implies that Condition 1(ii) holds when $\{j : \beta_{js}^* \neq 0\}$ is separated into $\{j : \beta_{js}^* > 0\}$ and $\{j : \beta_{js}^* < 0\}$ in a balanced way. With diversified signs of $\beta_{js}^*$, Proposition 1 holds when a considerable proportion of test items have no covariate effect ($\beta_{js}^* = 0$). For example, when $\gamma_j^* = m 1^{(K)}_k$ with $m > 0$, Condition 1(ii) holds if and only if $\sum_{j=1}^q |m| \{-I(\beta_{js}^*/m > 0) + I(\beta_{js}^*/m \leq 0)\} > 0$ and $\sum_{j=1}^q |m| \{-I(\beta_{js}^*/m \geq 0) + I(\beta_{js}^*/m < 0)\} < 0$. With slightly more than $q/2$ items corresponding to $\beta_{js}^* = 0$, Condition 1(ii) holds. Moreover, if $\#\{j : \beta_{js}^* > 0\}$ and $\#\{j : \beta_{js}^* < 0\}$ are comparable, then Condition 1(ii) holds even when fewer than $q/2$ items correspond to $\beta_{js}^* = 0$ and more than $q/2$ items correspond to $\beta_{js}^* \neq 0$. Though assuming a "sparse" structure, our assumption here differs from the existing high-dimensional literature. In high-dimensional regression models, the covariate coefficient obtained when regressing the dependent variable on high-dimensional covariates is often assumed to be sparse, with the proportion of non-zero covariate coefficients asymptotically approaching zero. In our setting, Condition 1(ii) allows for relatively dense settings where the proportion of items with non-zero covariate effects is some positive constant. To perform simultaneous estimation and inference on $\Gamma^*$ and $U^*$, we consider the following identifiability conditions to address the second identifiability issue. Condition 2. (i) $(U^*)^\top U^*$ is diagonal. (ii) $(\Gamma^*)^\top \Gamma^*$ is diagonal. (iii) $n^{-1} (U^*)^\top U^* = q^{-1} (\Gamma^*)^\top \Gamma^*$. Condition 2 is a set of widely used identifiability conditions in the factor analysis literature (Bai 2003, Bai & Li 2012, Wang 2022). For practical and theoretical benefits, we impose Condition 2 to address the identifiability issue related to $G$. It is worth mentioning that this condition can be replaced by other identifiability conditions. For true parameters satisfying any identifiability condition, we can always find a transformation such that the transformed parameters satisfy our proposed Conditions 1-2, and the proposed estimation method and theoretical results in the subsequent sections still apply, up to such a transformation. 3.2 Joint Maximum Likelihood Estimation In this section, we introduce a joint-likelihood-based estimation method for the covariate effects $B$, the latent factors $U$, and the factor loadings $\Gamma$ simultaneously. Incorporating Conditions 1-2 into the estimation procedure, we obtain maximum joint-likelihood-based estimators of $\phi^* = (\Gamma^*, U^*, B^*)$ that satisfy the proposed identifiability conditions. 
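Before turning to the estimation details, the following sketch makes the sign-balance requirement of Proposition 1 and Remark 4 concrete by checking inequality (3) over a random grid of unit directions $v$ for a single covariate; a full verification would need all $v \neq 0_K$, so this is only an illustration under assumed loadings and DIF effects.

```python
import numpy as np

rng = np.random.default_rng(2)
q, K = 60, 2

Gamma = rng.uniform(0.5, 1.5, size=(q, K))       # assumed positive loadings
beta_s = np.zeros(q)                             # DIF effects for one covariate s
beta_s[:6], beta_s[6:12] = 0.4, -0.4             # balanced positive/negative DIF items

def condition_3_holds(Gamma, beta_s, n_dirs=5000, rng=rng):
    """Check inequality (3) on randomly sampled unit directions v (illustrative only)."""
    V = rng.normal(size=(n_dirs, Gamma.shape[1]))
    V /= np.linalg.norm(V, axis=1, keepdims=True)
    lhs = np.abs(V @ Gamma.T) @ (beta_s == 0).astype(float)      # sum_j |v' gamma_j| 1{beta_js = 0}
    rhs = (V @ Gamma.T) @ (np.sign(beta_s) * (beta_s != 0))      # sum_j sign(beta_js) v' gamma_j 1{beta_js != 0}
    return bool(np.all(lhs > rhs))

print(condition_3_holds(Gamma, beta_s))          # True for this balanced, mostly-unbiased design
```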
With Condition 1, we address the identifiability issue related to the transformation matrix $A$. Specifically, for any parameters $\phi = (\Gamma, U, B)$, there exists a matrix $A^* = (a_0^*, A_c^*)$ with $A_c^* = \mathrm{argmin}_{A_c \in \mathbb{R}^{K \times p^*}} \sum_{j=1}^q \|\beta_{jc} - A_c^\top \gamma_j\|_1$ and $a_0^* = -n^{-1} \sum_{i=1}^n (U_i + A_c^* X_i^c)$ such that the transformed matrices $U^* = U + X (A^*)^\top$ and $B^* = B - \Gamma A^*$ satisfy Condition 1. This transformation idea naturally leads to the following estimation methodology for $B^*$. To estimate $B^*$ and $U^*$ satisfying Condition 1, we first obtain the maximum likelihood estimator $\hat\phi = (\hat\Gamma, \hat U, \hat B)$ by $\hat\phi = \mathrm{argmin}_{\phi \in \Omega_\phi} \, -L(Y \mid \phi, X)$, (4) where the parameter space $\Omega_\phi$ is given as $\Omega_\phi = \{\phi : \|\phi\|_{\max} \leq C\}$ for some large $C$. To solve (4), we employ an alternating minimization algorithm. Specifically, for steps $t = 0, 1, \ldots$, we compute $(\hat\Gamma^{(t+1)}, \hat B^{(t+1)}) = \mathrm{argmin}_{\Gamma \in \mathbb{R}^{q \times K}, B \in \mathbb{R}^{q \times p}} -L(Y \mid \Gamma, U^{(t)}, B, X)$ and $\hat U^{(t+1)} = \mathrm{argmin}_{U \in \mathbb{R}^{n \times K}} -L(Y \mid \Gamma^{(t+1)}, U, B^{(t+1)}, X)$, until the quantity $\max\{\|\hat\Gamma^{(t+1)} - \hat\Gamma^{(t)}\|_F, \|\hat U^{(t+1)} - \hat U^{(t)}\|_F, \|\hat B^{(t+1)} - \hat B^{(t)}\|_F\}$ is less than some pre-specified tolerance value for convergence. We then estimate $A_c$ by minimizing the $\ell_1$ norm: $\hat A_c = \mathrm{argmin}_{A_c \in \mathbb{R}^{K \times p^*}} \sum_{j=1}^q \|\hat\beta_{jc} - A_c^\top \hat\gamma_j\|_1$. (5) Next, we estimate $\hat a_0 = -n^{-1} \sum_{i=1}^n (\hat U_i + \hat A_c X_i^c)$ and let $\hat A = (\hat a_0, \hat A_c)$. Given the estimators $\hat A$, $\hat\Gamma$, and $\hat B$, we then construct $\hat B^* = \hat B - \hat\Gamma \hat A$ and $\tilde U = \hat U + X \hat A^\top$ such that Condition 1 holds. Recall that Condition 2 addresses the identifiability issue related to the invertible matrix $G$. Specifically, for any parameters $(\Gamma, U)$, there exists a matrix $G^*$ such that Condition 2 holds for $U^* = (U + X (A^*)^\top) G^*$ and $\Gamma^* = \Gamma (G^*)^{-\top}$. Let $\mathcal{U} = \mathrm{diag}(\varrho_1, \ldots, \varrho_K)$ be the diagonal matrix containing the $K$ eigenvalues of $(nq)^{-1} (\Gamma^\top \Gamma)^{1/2} (U + X A^\top)^\top (U + X A^\top) (\Gamma^\top \Gamma)^{1/2}$, and let $\mathcal{V}$ be the matrix containing the corresponding eigenvectors. We set $G^* = (q^{-1} \Gamma^\top \Gamma)^{1/2} \mathcal{V} \mathcal{U}^{-1/4}$. To further estimate $\Gamma^*$ and $U^*$, we need to obtain an estimator of the invertible matrix $G^*$. Given the maximum likelihood estimators obtained in (4) and $\hat A$ in (5), we estimate $G^*$ via $\hat G = (q^{-1} \hat\Gamma^\top \hat\Gamma)^{1/2} \hat{\mathcal{V}} \hat{\mathcal{U}}^{-1/4}$, where $\hat{\mathcal{U}}$ and $\hat{\mathcal{V}}$ are the matrices containing the eigenvalues and eigenvectors of $(nq)^{-1} (\hat\Gamma^\top \hat\Gamma)^{1/2} (\hat U + X \hat A^\top)^\top (\hat U + X \hat A^\top) (\hat\Gamma^\top \hat\Gamma)^{1/2}$, respectively. With $\hat G$ and $\hat A$, we obtain the following transformed estimators that satisfy Condition 2: $\hat\Gamma^* = \hat\Gamma (\hat G^\top)^{-1}$ and $\hat U^* = (\hat U + X \hat A^\top) \hat G$. To quantify the uncertainty of the proposed estimators, we will show that they are asymptotically normally distributed. Specifically, in Theorem 2 of Section 4, we establish the asymptotic normality of $\hat\beta_j^*$, which allows us to make inference on the covariate effects $\beta_j^*$. 
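The estimation pipeline of this subsection can be prototyped in three steps: (i) maximize the joint likelihood (4) by alternating updates, (ii) solve the $\ell_1$ problem (5) for $\hat A_c$ and center to obtain $\hat a_0$, and (iii) rescale with $\hat G$ so that Conditions 1-2 hold. The sketch below does this for the logistic model; for brevity it replaces the exact inner maximizations of the alternating algorithm with single gradient steps and uses a generic optimizer for the least-absolute-deviation step, so it is a simplified illustration rather than the authors' implementation.

```python
import numpy as np
from numpy.linalg import eigh, inv
from scipy.optimize import minimize

def _sym_sqrt(S):
    """Symmetric square root of a positive semi-definite matrix."""
    vals, vecs = eigh(S)
    return vecs @ np.diag(np.sqrt(np.clip(vals, 0.0, None))) @ vecs.T

def fit_cov_adjusted_logistic(Y, Xc, K, n_iter=500, lr=0.05, seed=0):
    """Crude joint-likelihood fit of the logistic covariate-adjusted factor model (illustrative)."""
    rng = np.random.default_rng(seed)
    n, q = Y.shape
    X = np.column_stack([np.ones(n), Xc])               # p = p* + 1 columns, intercept first
    p = X.shape[1]
    U = rng.normal(scale=0.1, size=(n, K))
    Gamma = rng.normal(scale=0.1, size=(q, K))
    B = np.zeros((q, p))

    # (i) Alternating updates on the joint log-likelihood (2); single gradient steps
    #     stand in for the exact inner maximizations.
    for _ in range(n_iter):
        R = Y - 1.0 / (1.0 + np.exp(-(U @ Gamma.T + X @ B.T)))   # dl_ij/dw_ij for the logistic model
        Gamma, B = Gamma + lr * R.T @ U / n, B + lr * R.T @ X / n
        R = Y - 1.0 / (1.0 + np.exp(-(U @ Gamma.T + X @ B.T)))
        U = U + lr * R @ Gamma / q

    # (ii) The ell_1 step (5): each column of A_c is a least-absolute-deviation regression
    #      of one covariate's estimated effects on the estimated loadings.
    Ac = np.column_stack([
        minimize(lambda a, s=s: np.abs(B[:, 1 + s] - Gamma @ a).sum(),
                 np.zeros(K), method="Nelder-Mead").x
        for s in range(p - 1)])
    a0 = -np.mean(U + Xc @ Ac.T, axis=0)                 # centering, Condition 1(i)
    A = np.column_stack([a0, Ac])

    # (iii) Rescale with G-hat so that Condition 2 holds.
    B_star = B - Gamma @ A
    U_shift = U + X @ A.T
    S_half = _sym_sqrt(Gamma.T @ Gamma / q)              # (q^{-1} Gamma' Gamma)^{1/2}
    evals, evecs = eigh(S_half @ (U_shift.T @ U_shift / n) @ S_half)
    G = S_half @ evecs @ np.diag(evals ** -0.25)
    return Gamma @ inv(G.T), U_shift @ G, B_star         # (Gamma*, U*, B*) estimates
```

On data generated as in the earlier simulation sketch, a call such as `fit_cov_adjusted_logistic(Y, Xc, K=2)` returns estimates whose scale and location are pinned down by Conditions 1-2, which is what makes the entries of $\hat B^*$ comparable across items.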
Moreover, as the latent factors U \u2217 i and factor loadings \u03b3\u2217 j often have important interpretations in domain sciences, we are also interested in the inference on parameters U \u2217 i and \u03b3\u2217 j . In Theorem 2, we also derive the asymptotic distributions for estimators b U \u2217 i and b \u03b3\u2217 j , providing inference results for parameters U \u2217 i and \u03b3\u2217 j . 4 Theoretical Results We propose a novel framework to establish the estimation consistency and asymptotic normality for the proposed joint-likelihood-based estimators b \u03d5\u2217= (b \u0393\u2217, b U\u2217, b B\u2217) in Section 3. To establish the theoretical results for b \u03d5\u2217, we impose the following regularity assumptions. Assumption 1. There exist constants M > 0, \u03ba > 0 such that: (i) \u03a3\u2217 u = limn\u2192\u221en\u22121(U\u2217)\u22baU\u2217exists and is positive definite. For i \u2208[n], \u2225U \u2217 i \u22252 \u2264M. (ii) \u03a3\u2217 \u03b3 = limq\u2192\u221eq\u22121(\u0393\u2217)\u22ba\u0393\u2217exists and is positive definite. For j \u2208[q], \u2225\u03b3\u2217 j \u22252 \u2264M. (iii) \u03a3x = limn\u2192\u221en\u22121 Pn i=1 XiX\u22ba i exists and 1/\u03ba2 \u2264\u03bbmin(\u03a3x) \u2264\u03bbmax(\u03a3x) \u2264\u03ba2. For i \u2208[n], maxi \u2225Xi\u2225\u221e\u2264M. 16 (iv) \u03a3\u2217 ux = limn\u2192\u221en\u22121 Pn i=1 U \u2217 i X\u22ba i exists and \u2225\u03a3\u2217 ux\u03a3\u22121 x \u22251,\u221e\u2264M. The eigenvalues of (\u03a3\u2217 u \u2212\u03a3\u2217 ux\u03a3\u22121 x (\u03a3\u2217 ux)\u22ba)\u03a3\u2217 \u03b3 are distinct. Assumptions 1 is commonly used in the factor analysis literature. In particular, Assumptions 1(i)\u2013(ii) correspond to Assumptions A-B in Bai (2003) under linear factor models, ensuring the compactness of the parameter space on U\u2217and \u0393\u2217. Under nonlinear factor models, such conditions on compact parameter space are also commonly assumed (Wang 2022, Chen et al. 2023b). Assumption 1(iii) is standard regularity conditions for the nonlinear setting that is needed to establish the concentration of the gradient and estimation error for the model parameters when p diverges. In addition, Assumption 1(iv) is a crucial identification condition; similar conditions have been imposed in the existing literature such as Assumption G in Bai (2003) in the context of linear factor models and Assumption 6 in Wang (2022) in the context of nonlinear factor models without covariates. Assumption 2. For any i \u2208[n] and j \u2208[q], assume that lij(\u00b7) is three times differentiable, and we denote the first, second, and third order derivatives of lij(wij) with respect to wij as l\u2032 ij(wij), l\u2032\u2032 ij(wij), and l\u2032\u2032\u2032 ij(wij), respectively. There exist M > 0 and \u03be \u22654 such that E(|l\u2032 ij(wij)|\u03be) \u2264M and |l\u2032 ij(wij)| is sub-exponential with \u2225l\u2032 ij(wij)\u2225\u03c61 \u2264M. Furthermore, we assume E{l\u2032 ij(w\u2217 ij)} = 0. Within a compact space of wij, we have bL \u2264\u2212l\u2032\u2032 ij(wij) \u2264bU and |l\u2032\u2032\u2032 ij(wij)| \u2264bU for bU > bL > 0. Assumption 2 assumes smoothness on the log-likelihood function lij(wij). In particular, it assumes sub-exponential distributions and finite fourth-moments of the first order derivatives l\u2032 ij(wij). For commonly used linear or nonlinear factor models, the assumption is not restrictive and can be satisfied with a large \u03be. 
For instance, consider the logistic model with l\u2032 ij(wij) = Yij \u2212exp(wij)/{1+exp(wij)}, we have |l\u2032 ij(wij)| \u22641 and \u03be can be taken as \u221e. The boundedness conditions for l\u2032\u2032 ij(wij) and l\u2032\u2032\u2032 ij(wij) are necessary to guarantee the convexity of the joint likelihood function. In a special case of linear factor models, l\u2032\u2032 ij(wij) is a constant and the boundedness conditions naturally hold. For popular nonlinear models such as lo17 gistic factor models, probit factor models, and Poisson factor models, the boundedness of l\u2032\u2032 ij(wij) and l\u2032\u2032\u2032 ij(wij) can also be easily verified. Assumption 3. For \u03be specified in Assumption 2 and a sufficiently small \u03f5 > 0, we assume as n, q, p \u2192\u221e, p p n \u2227(pq) (nq)\u03f5+3/\u03be \u21920. (6) Assumption 3 is needed to ensure that the derivative of the likelihood function equals zero at the maximum likelihood estimator with high probability, a key property in the theoretical analysis. In particular, we need the estimation errors of all model parameters to converge to 0 uniformly with high probability. Such uniform convergence results involve delicate analysis of the convexity of the objective function, for which technically we need Assumption 3. For most of the popularly used generalized factor models, \u03be can be taken as any large value as discussed above, thus (nq)\u03f5+3/\u03be is of a smaller order of p n \u2227(pq), given small \u03f5. Specifically, Assumption 3 implies p = o(n1/2 \u2227q) up to a small order term, an asymptotic regime that is reasonable for many educational assessments. Next, we impose additional assumptions crucial to establishing the theoretical properties of the proposed estimators. One challenge for theoretical analysis is to handle the dependence between the latent factors U\u2217and the design matrix X. To address this challenge, we employ the following transformed U0 that are orthogonal with X, which plays an important role in establishing the theoretical results (see Supplementary Materials for details). In particular, for i \u2208[n], we let U 0 i = (G\u2021)\u22ba(U \u2217 i \u2212A\u2021Xi). Here G\u2021 = (q\u22121(\u0393\u2217)\u22ba\u0393\u2217)1/2 V\u2217(U \u2217)\u22121/4 and A\u2021 = (U\u2217)\u22baX(X\u22baX)\u22121, where U \u2217= diag(\u03f1\u2217 1, . . . , \u03f1\u2217 K) with diagonal elements being the K eigenvalues of (nq)\u22121((\u0393\u2217)\u22ba\u0393\u2217)1/2(U\u2217)\u22ba(In\u2212Px)U\u2217((\u0393\u2217)\u22ba\u0393\u2217)1/2 with Px = X(X\u22baX)\u22121X\u22baand V\u2217containing the matrix of corresponding eigenvectors. Under this transformation for U 0 i , we further define \u03b30 j = (G\u2021)\u22121\u03b3\u2217 j and \u03b20 j = \u03b2\u2217 j + (A\u2021)\u22ba\u03b3\u2217 j for j \u2208[q], and write Z0 i = ((U 0 i )\u22ba X\u22ba i )\u22baand w0 ij = (\u03b30 j )\u22baU 0 i + (\u03b20 j)\u22baXi. These transformed parameters \u03b30 j \u2019s, U 0 i \u2019s, and \u03b20 j\u2019s give the same joint likelihood value as that of the true parameters \u03b3\u2217 j \u2019s, U \u2217 i \u2019s and \u03b2\u2217 j\u2019s, which 18 facilitate our theoretical understanding of the joint-likelihood-based estimators. Assumption 4. (i) For any j \u2208[q], \u2212n\u22121 Pn i=1 l\u2032\u2032 ij(w0 ij)Z0 i (Z0 i )\u22ba p \u2192\u03a80 jz for some positive definite matrix \u03a80 jz and n\u22121/2 Pn i=1 l\u2032 ij(w0 ij)Z0 i d \u2192N(0, \u21260 jz). 
(ii) For any i \u2208[n], \u2212q\u22121 Pq j=1 l\u2032\u2032 ij(w0 ij)\u03b30 j (\u03b30 j )\u22ba p \u2192\u03a80 i\u03b3 for some positive definite matrix \u03a80 i\u03b3 and q\u22121/2 Pq j=1 l\u2032 ij(w0 ij)\u03b30 j d \u2192N(0, \u21260 i\u03b3). Assumption 4 is a generalization of Assumption F(3)-(4) in Bai (2003) for linear models to the nonlinear setting. Specifically, we need Assumption 4(i) to derive the asymptotic distributions of the estimators b \u03b2\u2217 j and b \u03b3\u2217 j , and Assumption 4(ii) is used for establishing the asymptotic distribution of b U \u2217 i . Note that these assumptions are imposed on the loglikelihood derivative functions evaluated at the true parameters w0 ij, Z0 i , and \u03b30 j . In general, for the popular generalized factor models, such assumptions hold with mild conditions. For example, under linear models, l\u2032 ij(wij) is the random error and l\u2032\u2032 ij(wij) is a constant. Then \u03a80 jz and \u03a80 i\u03b3 naturally exist and are positive definite followed by Assumption 1. The limiting distributions of n\u22121/2 Pn i=1 l\u2032 ij(w0 ij)Z0 i and q\u22121/2 Pq j=1 l\u2032 ij(w0 ij)\u03b30 j can be derived by the central limit theorem under standard regularity conditions. Under logistic and probit models, l\u2032 ij(wij) and l\u2032\u2032 ij(wij) are both finite inside a compact parameters space and similar arguments can be applied to show the validity of Assumption 4. We present the following assumption to establish the theoretical properties of the transformed matrix b A as defined in (5). In particular, we define A0 = (G\u2021)\u22baA\u2021 and write A0 = (a0 0, . . . , a0 p\u2217)\u22ba. Note that the estimation problem of (5) is related to the median regression problem with measurement errors. To understand the properties of this estimator, following existing M-estimation literature (He & Shao 1996, 2000), we define \u03c80 js(a) = \u03b30 j sign{\u03b20 js + (\u03b30 j )\u22ba(a \u2212a0 s)} and \u03c7s(a) = Pq j=1 \u03c80 js(a) for j \u2208[q] and s \u2208[p\u2217]. We further define a perturbed version of \u03c80 js(a), denoted as \u03c8js(a, \u03b4js), as follows: \u03c8js(a, \u03b4js) = \u0010 \u03b30 j + \u0002 \u03b4js \u221an \u0003 [1:K] \u0011 sign n \u03b20 js + \u0002 \u03b4js \u221an \u0003 K+1 \u2212(\u03b30 j + \u0002 \u03b4js \u221an \u0003 [1:K])\u22ba(a \u2212a0 s) o , s \u2208[p\u2217] 19 where the perturbation \u03b4js = \uf8eb \uf8ec \uf8ed IK 0 0 (1(p) s )\u22ba \uf8f6 \uf8f7 \uf8f8 \u0010 \u2212 n X i=1 l\u2032\u2032 ij(w0 ij)Z0 i (Z0 i )\u22ba\u0011\u22121\u0010\u221an n X i=1 l\u2032 ij(w0 ij)Z0 i \u0011 , is asymptotically normally distributed by Assumption 4. We define b \u03c7s(a) = Pq j=1 E\u03c8js(a, \u03b4js). Assumption 5. For \u03c7s(a), we assume that there exists some constant c > 0 such that mina\u0338=0 |q\u22121\u03c7s(a)| > c holds for all s \u2208[p\u2217]. Assume there exists as0 for each s \u2208[p\u2217] such that b \u03c7s(as0) = 0 with p\u221an\u2225\u03b1s0\u2225\u21920. In a neighbourhood of \u03b1s0, b \u03c7s(a) has a nonsingular derivative such that {q\u22121\u2207ab \u03c7s(\u03b1s0)}\u22121 = O(1) and q\u22121|\u2207ab \u03c7s(a)\u2212\u2207ab \u03c7s(\u03b1s0)| \u2264k|a\u2212\u03b1s0|. We assume \u03b9nq,p := max \b \u2225\u03b1s0\u2225, q\u22121 Pq j=1 \u03c8js(as0, \u03b4js) \t = o \u0000(p\u221an)\u22121\u0001 . 
Assumption 5 is crucial in addressing the theoretical difficulties of establishing the consistent estimation for A0, a challenging problem related to median regression with weakly dependent measurement errors. In Assumption 5, we treat the minimizer of | Pq j=1 \u03c8(a, \u03b4js)| as an M-estimator and adopt the Bahadur representation results in He & Shao (1996) for the theoretical analysis. For an ideal case where \u03b4js are independent and normally distributed with finite variances, which corresponds to the setting in median regression with measurement errors (He & Liang 2000), these assumptions can be easily verified. Assumption 5 discusses beyond such an ideal case and covers general settings. In addition to independent and Gaussian measurement errors, this condition also accommodates the case when \u03b4js are asymptotically normal and weakly dependent with finite variances, as implied by Assumption 4 and the conditional independence of Yij. We want to emphasize that Assumption 5 allows for both sparse and dense settings of the covariate effects. Consider an example of K = p = 1 and \u03b3j = 1 for j \u2208[q]. Suppose \u03b2\u2217 js is zero for all j \u2208[q1] and nonzero otherwise. Then this condition is satisfied as long as #{j : \u03b2\u2217 js > 0} and #{j : \u03b2\u2217 js < 0} are comparable, even when the sparsity level q1 is small. Under the proposed assumptions, we next present our main theoretical results. 20 Theorem 1 (Average Consistency). Suppose the true parameters \u03d5\u2217= (\u0393\u2217, U\u2217, B\u2217) satisfy identifiability conditions 1\u20132. Under Assumptions 1\u20135, we have q\u22121\u2225b B\u2217\u2212B\u2217\u22252 F = Op \u0012p2 log qp n + p log n q \u0013 ; (7) if we further assume p3/2(nq)\u03f5+3/\u03be(p1/2n\u22121/2 + q\u22121/2) = o(1), then we have n\u22121\u2225b U\u2217\u2212U\u2217\u22252 F = Op \u0012p log qp n + log n q \u0013 ; (8) q\u22121\u2225b \u0393\u2217\u2212\u0393\u2217\u22252 F = Op \u0012p log qp n + log n q \u0013 . (9) Theorem 1 presents the average convergence rates of b \u03d5\u2217. Consider an oracle case with U\u2217 and \u0393\u2217known, the estimation of B\u2217reduces to an M-estimation problem. For M-estimators under general parametric models, it can be shown that the optimal convergence rates in squared \u21132-norm is Op(p/n) under p(log p)3/n \u21920 (He & Shao 2000). In terms of our average convergence rate on b B\u2217, the first term in (7), n\u22121p2 log(qp), approximately matches the convergence rate Op(p/n) up to a relatively small order term of p log(qp). The second term in (7), q\u22121p log n, is mainly due to the estimation error for the latent factor U\u2217. In educational applications, it is common to assume the number of subjects n is much larger than the number of items q. Under such a practical setting with n \u226bq and p relatively small, the term q\u22121 log n in (8) dominates in the derived convergence rate of b U\u2217, which matches with the optimal convergence rate Op(q\u22121) for factor models without covariates (Bai & Li 2012, Wang 2022) up to a small order term. Remark 5. The additional condition p3/2(nq)\u03f5+3/\u03be(p1/2n\u22121/2 + q\u22121/2) = o(1) in Theorem 1 is used to handle the challenges related to the invertible matrix G that affects the theoretical properties of b U\u2217and b \u0393\u2217. It is needed for establishing the estimation consistency of b U\u2217and b \u0393\u2217 but not for that of b B\u2217. 
With sufficiently large \u03be and small \u03f5, this assumption is approximately p = o(n1/4 \u2227q1/3) up to a small order term. 21 Remark 6. One challenge in establishing the estimation consistency for b \u03d5\u2217arises from the unrestricted dependence structure between U\u2217and X. If we consider the ideal case where the columns of U\u2217and X are orthogonal, i.e., (U\u2217)\u22baX = 0K\u00d7p, then we can achieve comparable or superior convergence rates with less stringent assumptions. Specifically, with Assumptions 1\u20133 only, we can obtain the same convergence rates for b U\u2217and b \u0393\u2217as in (8) and (9), respectively. Moreover, with Assumptions 1\u20133, the average convergence rate for the consistent estimator of B\u2217is Op(n\u22121p log qp+q\u22121 log n), which is tighter than (7) by a factor of p. With estimation consistency results established, we next derive the asymptotic normal distributions for the estimators, which enable us to perform statistical inference on the true parameters. Theorem 2 (Asymptotic Normality). Suppose the true parameters \u03d5\u2217= (\u0393\u2217, U\u2217, B\u2217) satisfy identifiability conditions 1\u20132. Under Assumptions 1\u20135, we have the asymptotic distributions as follows. Denote \u03b6\u22122 nq,p = n\u22121p log qp + q\u22121log n. If p3/2\u221an(nq)3/\u03be\u03b6\u22122 nq,p \u21920, for any j \u2208[q] and a \u2208Rp with \u2225a\u22252 = 1, \u221ana\u22ba(\u03a3\u2217 \u03b2,j)\u22121/2( b \u03b2\u2217 j \u2212\u03b2\u2217 j) d \u2192N(0, 1), (10) where \u03a3\u2217 \u03b2,j = (\u2212(A0)\u22ba, Ip)(\u03a80 jz)\u22121\u21260 jz(\u03a80 jz)\u22121(\u2212(A0)\u22ba, Ip)\u22ba, and for any j \u2208[q], \u221an(\u03a3\u2217 \u03b3,j)\u22121/2(b \u03b3\u2217 j \u2212\u03b3\u2217 j ) d \u2192N(0, IK), (11) where \u03a3\u2217 \u03b3,j = G\u2021(IK, 0)(\u03a80 jz)\u22121\u21260 jz(\u03a80 jz)\u22121 (IK, 0)\u22ba(G\u2021)\u22ba. Furthermore, for any i \u2208[n], if q = O(n) and p3/2\u221aq(nq)3/\u03be\u03b6\u22122 nq,p \u21920, \u221aq(\u03a3\u2217 u,i)\u22121/2( b U \u2217 i \u2212U \u2217 i ) d \u2192N(0, IK), (12) where \u03a3\u2217 u,i = (G\u2021)\u2212\u22ba(\u03a80 i\u03b3)\u22121\u21260 i\u03b3(\u03a80 i\u03b3)\u22121(G\u2021)\u22121. 22 The asymptotic covariance matrices in Theorem 2 can be consistently estimated. Due to the space limitations, we defer the construction of the consistent estimators b \u03a3\u2217 \u03b2,j, b \u03a3\u2217 \u03b3,j, and b \u03a3\u2217 u,i to Supplementary Materials. Theorem 2 provides the asymptotic distributions for all individual estimators. In particular, with the asymptotic distributions and the consistent estimators b \u03a3\u2217 \u03b2,j for the asymptotic covariance matrices, we can perform hypothesis testing on \u03b2\u2217 js for j \u2208[q] and s \u2208[p\u2217]. We reject the null hypothesis \u03b2\u2217 js = 0 at significance level \u03b1 if |\u221an(b \u03c3\u2217 \u03b2,js)\u22121b \u03b2\u2217 js| > \u03a6\u22121(1 \u2212\u03b1/2), where (b \u03c3\u2217 \u03b2,js)2 is the (s + 1)-th diagonal entry in b \u03a3\u2217 \u03b2,j. For the asymptotic normality of b \u03b2\u2217 j, the condition p3/2\u221an(nq)3/\u03be(n\u22121p log qp+q\u22121 log n) \u2192 0 together with Assumption 3 gives p = o{n1/5 \u2227(q2/n)1/3} up to a small order term, and further implies n \u226aq2, which is consistent with established conditions in the existing factor analysis literature (Bai & Li 2012, Wang 2022). 
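In code, the rejection rule just described is a standard Wald test. The sketch below assumes that a point estimate and the corresponding diagonal entry of the estimated asymptotic covariance matrix are already available from the fitted model (their construction is deferred to the Supplementary Materials), so the function name and inputs here are placeholders.

```python
import numpy as np
from scipy.stats import norm

def wald_test_dif(beta_hat_js, sigma2_hat_js, n, alpha=0.05):
    """Two-sided test of H0: beta*_js = 0 based on the asymptotic normality in Theorem 2."""
    z = np.sqrt(n) * beta_hat_js / np.sqrt(sigma2_hat_js)   # sqrt(n) * (sigma_hat)^{-1} * beta_hat
    p_value = 2.0 * norm.sf(np.abs(z))
    reject = np.abs(z) > norm.ppf(1.0 - alpha / 2.0)
    return z, p_value, reject
```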
For the asymptotic normality of b U \u2217 i , the additional condition that q = O(n) is a reasonable assumption in educational applications where the number of items q is much fewer than the number of subjects n. In this case, the scaling conditions imply p = o{q1/3 \u2227(n2/q)1/5} up to a small order term. Similarly for the asymptotic normality of b \u03b3\u2217 j , the proposed conditions give p = o{n1/5 \u2227(q2/n)1/3} up to a small order term. Remark 7. Similar to the discussion in Remark 6, the challenges arising from the unrestricted dependence between U\u2217and X also affect the derivation of the asymptotic distributions for the proposed estimators. If we consider the ideal case with (U\u2217)\u22baX = 0K\u00d7p, we can establish the asymptotic normality for all individual estimators under Assumptions 1\u20134 only and weaker scaling conditions. Specifically, when (U\u2217)\u22baX = 0K\u00d7p, the scaling condition becomes p\u221an(nq)3/\u03be(n\u22121p log qp+q\u22121 log n) \u21920 for deriving asymptotic normality of b \u03b2\u2217 j and b \u03b3\u2217 j , which is milder than that for (10) and (11). 23 5 Simulation Study In this section, we study the finite-sample performance of the proposed joint-likelihoodbased estimator. We focus on the logistic latent factor model in (1) with pij(y | wij) = exp(wijy)/{1 + exp(wij)}, where wij = (\u03b3\u2217 j )\u22baU \u2217 i + (\u03b2\u2217 j)\u22baXi. The logistic latent factor model is commonly used in the context of educational assessment and is also referred to as the item response theory model (Mellenbergh 1994, Hambleton & Swaminathan 2013). We apply the proposed method to estimate B\u2217and perform statistical inference on testing the null hypothesis \u03b2\u2217 js = 0. We start with presenting the data generating process. We set the number of subjects n = {300, 500, 1000, 1500, 2000}, the number of items q = {100, 300, 500}, the covariate dimension p = {5, 10, 30}, and the factor dimension K = 2, respectively. We jointly generate Xc i and U \u2217 i from N(0, \u03a3) where \u03a3ij = \u03c4 |i\u2212j| with \u03c4 \u2208{0, 0.2, 0.5, 0.7}. In addition, we set the loading matrix \u0393\u2217 [,k] = 1(K) k \u2297vk, where \u2297is the Kronecker product and vk is a (q/K)-dimensional vector with each entry generated independently and identically from Unif[0.5, 1.5]. For the covariate effects B\u2217, we set the intercept terms to equal \u03b2\u2217 j0 = 0. For the remaining entries in B\u2217, we consider the following two settings: (1) sparse setting: \u03b2\u2217 js = \u03c1 for s = 1, . . . , p and j = 5s\u22124, . . . , 5s and other \u03b2\u2217 js are set to zero; (2) dense setting: \u03b2\u2217 js = \u03c1 for s = 1, . . . , p and j = Rsq/5 + 1, . . . , (Rs + 1)q/5 with Rs = s \u22125\u230as/5\u230b, and other \u03b2\u2217 js are set to zero. Here, the signal strength is set as \u03c1 \u2208{0.3, 0.5}. Intuitively, in the sparse setting, we set 5 items to be biased for each covariate whereas in the dense setting, 20% of items are biased items for each covariate. For better empirical stability, after reaching convergence in the proposed alternating maximization algorithm and transforming the obtained MLEs into ones that satisfy Conditions 1\u20132, we repeat another round of maximization and transformation. We take the significance level at 5% and calculate the averaged type I error based on all the entries \u03b2\u2217 js = 0 and the averaged power for all non-zero entries, over 100 replications. 
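For reference, the data-generating process just described can be written down directly. The sketch below produces one replicate under the sparse setting (the dense setting only changes which rows of $B^*$ are nonzero); it treats $p$ as the number of covariates, assumes $q$ is a multiple of $K$, and is an illustration of the stated design rather than the authors' simulation code.

```python
import numpy as np

def simulate_sparse_setting(n=1000, q=300, p=5, K=2, tau=0.5, rho=0.3, seed=0):
    rng = np.random.default_rng(seed)
    # Jointly Gaussian covariates and factors with correlation Sigma_ij = tau^|i-j|.
    d = p + K
    Sigma = tau ** np.abs(np.subtract.outer(np.arange(d), np.arange(d)))
    Z = rng.multivariate_normal(np.zeros(d), Sigma, size=n)
    Xc, U = Z[:, :p], Z[:, p:]

    # Loadings: column k of Gamma is the Kronecker product of the k-th indicator with
    # a (q/K)-vector v_k whose entries are Unif[0.5, 1.5].
    Gamma = np.zeros((q, K))
    for k in range(K):
        Gamma[:, k] = np.kron(np.eye(K)[k], rng.uniform(0.5, 1.5, size=q // K))

    # Sparse covariate effects: beta*_{js} = rho for j = 5s-4, ..., 5s (1-based); intercepts zero.
    B = np.zeros((q, p + 1))
    for s in range(1, p + 1):
        B[5 * (s - 1):5 * s, s] = rho

    W = U @ Gamma.T + np.column_stack([np.ones(n), Xc]) @ B.T
    Y = rng.binomial(1, 1.0 / (1.0 + np.exp(-W)))      # logistic item responses
    return Y, Xc, U, Gamma, B
```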
The averaged hypothesis testing results are presented in Figures 3-6 for p = 5 and p = 30, across different settings. Additional numerical results for p = 10 are presented in the Supplementary Materials.
Figure 3: Powers and type I errors under the sparse setting at p = 5; panels correspond to q = 100, 300, 500 (rows) and rho = 0.3, 0.5 (columns), with power/type I error plotted against the sample size n = 300, 500, 1000, 1500, 2000. Red circles denote correlation parameter τ = 0, green triangles τ = 0.2, blue squares τ = 0.5, and purple crosses τ = 0.7.
Figure 4: Powers and type I errors under the sparse setting at p = 30; panel layout and legend as in Figure 3.
Figure 5: Powers and type I errors under the dense setting at p = 5; panel layout and legend as in Figure 3.
Figure 6: Powers and type I errors under the dense setting at p = 30 (same panel layout as Figure 3). Red circles denote correlation parameter τ = 0, green triangles τ = 0.2, blue squares τ = 0.5, and purple crosses τ = 0.7.

From Figures 3-6, we observe that the type I errors are well controlled at the 5% significance level, which is consistent with the asymptotic properties of $\hat{B}^*$ in Theorem 2. Moreover, the power increases to one as the sample size n increases across all of the settings we consider. Comparing the left panels (ρ = 0.3) to the right panels (ρ = 0.5) in Figures 3-6, we see that the power increases as the signal strength ρ increases. Comparing the plots in Figures 3-4 to the corresponding plots in Figures 5-6, we see that the powers under the sparse setting (Figures 3-4) are generally higher than those under the dense setting (Figures 5-6). Nonetheless, our proposed method is generally stable under both sparse and dense settings. In addition, we observe similar results when we increase the covariate dimension p from p = 5 (Figures 3 and 5) to p = 30 (Figures 4 and 6); we refer the reader to the Supplementary Materials for additional numerical results for p = 10. Moreover, we observe similar results when we increase the test length q from q = 100 (top row) to q = 500 (bottom row) in Figures 3-6. In terms of the correlation between $X$ and $U^*$, we observe that while the power converges to one as the sample size increases, the power decreases as the correlation τ increases.

6 Data Application

We apply our proposed method to analyze the Programme for International Student Assessment (PISA) 2018 data, which can be downloaded from https://www.oecd.org/pisa/data/2018database/. PISA is a worldwide testing program that compares the academic performance of 15-year-old students across many countries (OECD 2019). More than 600,000 students from 79 countries/economies, representing a population of 31 million 15-year-olds, participated in this program. PISA 2018 used a computer-based assessment mode, and the assessment lasted two hours for each student, with test items mainly evaluating students' proficiency in the mathematics, reading, and science domains. A total of 930 minutes of test items were used, and each student took a different combination of the test items. In addition to the assessment questions, background questionnaires were provided to collect students' information.

In this study, we focus on the PISA 2018 data from Taipei. The observed responses are binary, indicating whether students' responses to the test items are correct, and we use the popular item response theory model with the logit link (i.e., the logistic latent factor model; Reckase 2009). Due to the block design of this large-scale assessment, each student was assigned only a subset of the test items, and for the Taipei data, 86% of the response matrix is unobserved. Note that this missingness can be considered conditionally independent of the responses given the students' characteristics. Our proposed method and inference results naturally accommodate such missing data and can be directly applied. Specifically, to accommodate the incomplete responses, we modify the joint log-likelihood function in (2) into $L_{\mathrm{obs}}(\mathbf{Y} \mid \Gamma, U, B, X) = \sum_{i=1}^{n} \sum_{j \in Q_i} l_{ij}(\gamma_j^\top U_i + \beta_j^\top X_i)$, where $Q_i$ denotes the set of questions for which the responses from student $i$ are observed.
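To illustrate how the incomplete responses enter the estimation, the following minimal sketch (our own function name, not the authors' implementation) evaluates $L_{\mathrm{obs}}$ for the logistic model using an indicator mask for the observed entries; the same mask would simply be carried through the alternating maximization updates.

```python
import numpy as np

def observed_loglik_logistic(Y, mask, X, U, Gamma, B):
    """Observed-data joint log-likelihood
        L_obs = sum_i sum_{j in Q_i} l_ij(gamma_j' U_i + beta_j' X_i)
    for binary responses under the logit link. `mask[i, j]` is True exactly
    when item j belongs to Q_i, i.e., student i's response to item j is
    observed; missing entries of Y may hold any placeholder value.
    """
    W = U @ Gamma.T + X @ B.T             # w_ij = gamma_j' U_i + beta_j' X_i
    # Bernoulli log-likelihood with the logit link, in a numerically stable
    # form: l_ij(w) = y*w - log(1 + exp(w)) = y*w - logaddexp(0, w).
    ll = Y * W - np.logaddexp(0.0, W)
    return float(np.sum(ll[mask]))
```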
In this study, we include gender and 8 school stratum variables as covariates ($p^* = 9$). These variables record whether the school is public, whether it is in an urban area, and so on. After data preprocessing, we have n = 6063 students and q = 194 questions. Following the existing literature (Reckase 2009, Millsap 2012), we take K = 3 to interpret the three latent abilities measured by the math, reading, and science questions. We apply the proposed method to estimate the effects of the gender and school stratum variables on students' responses.

We obtain the estimators of the gender effect for each PISA question and construct the corresponding 95% confidence intervals, which are presented in Figure 7. There are 10 questions highlighted in red, as their estimated gender effects are statistically significant after the Bonferroni correction. Among the reading items, there is only one significant item, and the corresponding confidence interval is below zero, indicating that this question is biased towards female test-takers, conditioning on the students' latent abilities. Most of the confidence intervals corresponding to the biased items in the math and science sections are above zero, indicating that these questions are biased towards male test-takers. In social science research, it is documented that female students typically score better than male students on reading tests, while male students often outperform female students on math and science tests (Quinn & Cooc 2015, Balart & Oosterveen 2019). Our results indicate that there may exist potential measurement biases contributing to such an observed gender gap in educational testing. Our proposed method offers a useful tool to identify such biased test items, thereby contributing to enhancing testing fairness by providing practitioners with valuable information for item calibration.

Figure 7: Confidence intervals for the effect of the gender covariate on each PISA question using the Taipei (TAP) data, with the gender effect estimator plotted against the PISA questions grouped into math, reading, and science. Red intervals correspond to questions with a significant gender bias after Bonferroni correction. (For illustration purposes, we omit confidence intervals with upper bounds exceeding 6 or lower bounds below -6 in this figure.)
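For concreteness, the following sketch shows how per-item Wald confidence intervals and Bonferroni-corrected significance flags of the kind displayed in Figure 7 could be formed; the inputs `beta_hat` and `se` are hypothetical placeholders for the per-item point estimates and standard errors produced by the fitted model, and the function name is ours.

```python
import numpy as np
from scipy.stats import norm

def wald_intervals_and_flags(beta_hat, se, alpha=0.05):
    """Per-item 95% Wald confidence intervals and Bonferroni-corrected flags
    for one covariate's effects across items (a sketch; inputs are the
    per-item estimates and standard errors from the fitted model)."""
    beta_hat, se = np.asarray(beta_hat), np.asarray(se)
    z = norm.ppf(1 - alpha / 2)
    lower, upper = beta_hat - z * se, beta_hat + z * se
    # Two-sided p-values for H0: beta*_js = 0, then a Bonferroni correction
    # over the q items tested for this covariate.
    pvals = 2 * norm.sf(np.abs(beta_hat) / se)
    flagged = pvals < alpha / beta_hat.size
    return lower, upper, pvals, flagged
```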
To further illustrate the estimation results, Table 1 lists the p-values for testing the gender effect for each of the 10 identified significant questions, along with the proportions of female and male test-takers who answered each question correctly. We can see that the signs of the gender effects estimated by our proposed method align with the disparities in the reported proportions between females and males.

Item code    Item title                 Female (%)   Male (%)   p-value
Mathematics
CM496Q01S    Cash Withdrawal            51.29        58.44      2.77×10^-7 (+)
CM800Q01S    Computer Games             96.63        93.61      < 1×10^-8 (−)
Reading
CR466Q06S    Work Right                 91.91        86.02      1.95×10^-5 (−)
Science
CS608Q01S    Ammonoids                  57.68        68.15      4.65×10^-5 (+)
CS643Q01S    Comparing Light Bulbs      68.57        73.41      1.08×10^-5 (+)
CS643Q02S    Comparing Light Bulbs 2    63.00        57.50      4.64×10^-4 (−)
CS657Q03S    Invasive Species           46.00        54.36      8.47×10^-5 (+)
CS527Q04S    Extinction of Dinosaurs    36.19        50.18      8.13×10^-5 (+)
CS648Q02S    Habitable Zone             41.69        45.19      1.34×10^-4 (+)
CS607Q01S    Birds and Caterpillars     88.14        91.47      1.99×10^-4 (+)

Table 1: Proportions of full credit among female and male test-takers for the significant items of PISA 2018 in Taipei. (+) and (−) denote items with positively and negatively estimated gender effects, respectively.

For example, the estimated gender effect corresponding to the item "CM496Q01S Cash Withdrawal" is positive with a p-value of 2.77×10^-7, implying that this question is statistically significantly biased towards male test-takers. This is consistent with the observation in Table 1 that 58.44% of male students correctly answered this question, exceeding the corresponding proportion of female students, 51.29%.

Besides the gender effects, we estimate the effects of the school strata on the students' responses and present the point and interval estimation results in the left panel of Figure 8. All the detected biased questions are from the math and science sections, with 6 questions showing significant effects of whether the student attends a public school and 5 questions showing significant effects of whether the school is in a rural area. To further investigate the importance of controlling for the latent ability factors, we compare the results from our proposed method, which includes the latent factors, with the results from directly regressing the responses on the covariates without latent factors. From the right panel of Figure 8, we can see that without conditioning on the latent factors, an excessive number of items are detected for the covariate indicating whether the school is public or private. On the other hand, no biased items are detected if we only apply generalized linear regression to estimate the effect of the covariate indicating whether the school is in a rural area.

Figure 8: Confidence intervals for the effects of the school stratum covariates (public school and rural region) on each PISA question for the Taipei (TAP) data; panels labeled "Public" and "Rural" show the proposed method with latent factors, and panels labeled "without latent variable" show the fits without latent factors. Red intervals correspond to questions with a significant school stratum bias after Bonferroni correction.
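As a point of reference for the comparison just described, here is a minimal sketch of the "without latent factors" baseline, assuming a per-item logistic regression of the observed responses on the covariates only (SciPy-based; the function name and implementation details are ours, not the authors').

```python
import numpy as np
from scipy.optimize import minimize

def fit_item_glm_no_factors(y, X):
    """Logistic regression of one item's responses on the observed covariates
    only, i.e., the 'without latent variable' baseline used for comparison.
    For the PISA data, `y` and `X` would be restricted to the students with
    an observed response to this item."""
    def negloglik(beta):
        w = X @ beta
        return -np.sum(y * w - np.logaddexp(0.0, w))

    res = minimize(negloglik, np.zeros(X.shape[1]), method="BFGS")
    beta_hat = res.x
    # Wald standard errors from the inverse of the observed information matrix.
    prob = 1.0 / (1.0 + np.exp(-(X @ beta_hat)))
    info = (X * (prob * (1.0 - prob))[:, None]).T @ X
    se = np.sqrt(np.diag(np.linalg.inv(info)))
    return beta_hat, se
```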
7 Discussion

In this work, we study the covariate-adjusted generalized factor model, which has wide interdisciplinary applications such as educational assessments and psychological measurements. In particular, new identifiability issues arise due to the incorporation of covariates in the model setup. To address these issues and identify the model parameters, we propose novel and interpretable conditions, which are crucial for developing the estimation approach and inference results. With model identifiability guaranteed, we propose a computationally efficient joint-likelihood-based estimation method for the model parameters. Theoretically, we obtain estimation consistency and asymptotic normality not only for the covariate effects but also for the latent factors and factor loadings.

There are several future directions motivated by the proposed method. In this manuscript, we focus on the case in which p grows at a slower rate than the number of subjects n and the number of items q, a common setting in educational assessments. It would be interesting to further develop estimation and inference results under the high-dimensional setting in which p is larger than n and q. Moreover, in this manuscript, we assume that the dimension of the latent factors K is fixed and known. One possible generalization is to allow K to grow with n and q. Intuitively, an increasing latent dimension K makes the identifiability and inference issues more challenging due to the increasing degrees of freedom of the transformation matrix. With the theoretical results in this work, another interesting related problem is to further develop simultaneous inference on group-wise covariate coefficients, which we leave for future investigation.
We illustrate\nthe finite sample performance of the proposed method through extensive\nnumerical studies and an application to an educational assessment dataset\nobtained from the Programme for International Student Assessment (PISA).", "authors": "Jing Ouyang, Chengyu Cui, Kean Ming Tan, Gongjun Xu", "published": "2024-04-25", "updated": "2024-04-25", "primary_cat": "stat.ME", "cats": [ "stat.ME" ], "main_content": "Consider n independent subjects with q measured responses and p\u2217observed covariates. \u2217 For the ith subject, let Yi \u2208Rq be a q-dimensional vector of responses corresponding to measurement items and Rbe a-dimensional vector of observed covariates. q measurement items and Xc i \u2208Rp\u2217be a p\u2217-dimensional vector of observed covariates. Moreover, let be a-dimensional vector of latent factors representing the unobservable Moreover, let Ui be a K-dimensional vector of latent factors representing the unobservable traits such as skills and personalities, where we assume K is specified as in many educational assessments. We assume that the q-dimensional responses Yi are conditionally independent, given Xc i and Ui. Specifically, we model the jth response for the ith subject, Yij, by the following conditional distribution: Yij \u223cpij(y | wij), where wij = \u03b2j0 + \u03b3\u22ba j Ui + \u03b2\u22ba jcXc i . (1) Here \u03b2j0 \u2208R is the intercept parameter, \u03b2jc = (\u03b2j1, . . . , \u03b2jp\u2217)\u22ba\u2208Rp\u2217are the coefficient parameters for the observed covariates, and\u22baR are the factor loadings. \u2208\u2217\u2208 parameters for the observed covariates, and \u03b3j = (\u03b3j1, . . . , \u03b3jK)\u22ba\u2208RK are the factor loadings. \u22ba \u2208 For better presentation, we write \u03b2j = (\u03b2j0, \u03b2\u22ba jc)\u22baas an assembled vector of intercept and coefficients and define Xi = (1, (Xc i )\u22ba)\u22bawith dimension p = p\u2217+ 1, which gives wij = \u03b3\u22ba j Ui + \u03b2\u22ba j Xi. Given wij, the function pij is some specified probability density (mass) function. Here, we consider a general and flexible modeling framework by allowing different types of pij functions to model diverse response data in wide-ranging applications, such as binary item response data in educational and psychological assessments (Mellenbergh 1994, Reckase 2009) and mixed types of data in educational and macroeconomic applications (Rijmen et al. 2003, Wang 2022); see also Remark 1. A schematic diagram of the proposed model setup is 6 presented in Figure 1. Xi Yi1 Ui Yi2 Yi,q\u22121 Yiq \u2026 \u2026 \u03b21 \u03b22 \u03b2q\u22121 \u03b2q \u2026 \u03b31 \u03b32 \u03b3q\u22121 \u03b3q \u2026 Xi \u2208Rp Ui \u2208RK Yij \u2208R, j \u2208[q] Figure 1: A schematic diagram of the proposed model in (1). The subscript i indicates the ith subject, out of n independent subjects. The response variable Yij can be discrete or continuous. Our proposed covariate-adjusted generalized factor model in (1) is motivated by applications in testing fairness. In the context of educational assessment, the subject\u2019s responses to questions are dependent on latent factors Ui such as students\u2019 abilities and skills, and are potentially affected by observed covariates Xc i such as age, gender, and race, among others (Linda M. Collins 2009). The intercept \u03b2j0 is often interpreted as the difficulty level of item j and referred to as the difficulty parameter in psychometrics (Hambleton & Swaminathan 2013, Reckase 2009). 
The capability of item j to further differentiate individuals based on their latent abilities is captured by \u03b3j = (\u03b3j1, . . . , \u03b3jK)\u22ba, which are also referred to as discrimination parameters (Hambleton & Swaminathan 2013, Reckase 2009). The effects of observed covariates Xc i on subject\u2019s response to the jth question Yij, conditioned on latent abilities Ui, are captured by \u03b2jc = (\u03b2j1, . . . , \u03b2jp\u2217)\u22ba, which are referred to as DIF effects in psychometrics (Holland & Wainer 2012). This setting gives rise to the fairness problem of validating whether the response probabilities to the measurements differ across different genders, races, or countries of origin while holding their abilities and skills at the same level. 7 Given the observed data from n independent subjects, we are interested in studying the relationships between Yi and Xc i after adjusting for the latent factors Ui in (1). Specifically, our goal is to test the statistical hypothesis H0 : \u03b2js = 0 versus Ha : \u03b2js \u0338= 0 for s \u2208[p\u2217], where \u03b2js is the regression coefficient for the sth covariate and the jth response, after adjusting for the latent factor Ui. In many applications, the latent factors and factor loadings also carry important scientific interpretations such as students\u2019 abilities and test items\u2019 characteristics. This motivates us to perform statistical inference on the parameters \u03b2j0, \u03b3j, and Ui as well. Remark 1. The proposed model setup (1) is general and flexible as various functions pij\u2019s could be used to model diverse types of response data in wide-ranging applications. For instance, in educational assessments, logistic factor model (Reckase 2009) with pij(y | wij) = exp(wijy)/{1 + exp(wij)}, y \u2208{0, 1} and probit factor model (Birnbaum 1968) with pij(y | wij) = {\u03a6(wij)}y{1 \u2212\u03a6(wij)}1\u2212y, y \u2208{0, 1} where \u03a6(\u00b7) is the cumulative density function of standard normal distribution, are widely used to model the binary responses, indicating correct or incorrect answers to the test items. Such types of models are often referred to as item response theory models (Reckase 2009). In economics and finances, linear factor models with pij(y | wij) \u221dexp{\u2212(y \u2212wij)2/(2\u03c32)}, where y \u2208R and \u03c32 is the variance parameter, are commonly used to model continuous responses, such as GDP, interest rate, and consumer index (Bai 2003, Bai & Li 2012, Stock & Watson 2016). Moreover, depending on the the observed responses, different types of function pij\u2019s can be used to model the response from each item j \u2208[q]. Therefore, mixed types of data, which are common in educational measurements (Rijmen et al. 2003) and macroeconomic applications (Wang 2022), can also be analyzed by our proposed model. 8 Remark 2. In addition to testing fairness, the considered model finds wide-ranging applications in the real world. For instance, in genomics, the gene expression status may depend on unmeasured confounders or latent biological factors and also be associated with the variables of interest including medical treatment, disease status, and gender (Wang et al. 2017, Du et al. 2023). The covariate-adjusted general factor model helps to investigate the effects of the variables of interest on gene expressions, controlling for the latent factors (Du et al. 2023). 
This setting is also applicable to other scenarios, such as brain imaging, where the activity of a brain region may depend on measurable spatial distance from neighboring regions and latent structures due to unmodeled factors (Leek & Storey 2008). To analyze large-scale measurement data, we aim to develop a computationally efficient estimation method and to provide inference theory for quantifying uncertainty in the estimation. Motivated by recent work in high-dimensional factor analysis, we treat the latent factors as fixed parameters and apply a joint maximum likelihood method for estimation (Bai 2003, Fan et al. 2013, Chen et al. 2020). Specifically, we let the collection of the item responses from n independent subjects be Y = (Y1, . . . , Yn)\u22ba n\u00d7q and the design matrix of observed covariates to be X = (X1, . . . , Xn)\u22ba n\u00d7p. For model parameters, the discrimination parameters for all q items are denoted as \u0393 = (\u03b31, . . . , \u03b3q)\u22ba q\u00d7K, while the intercepts and the covariate effects for all q items are denoted as B = (\u03b21, . . . , \u03b2q)\u22ba q\u00d7p. The latent factors from all n subjects are U = (U1, . . . , Un)\u22ba n\u00d7K. Then, the joint log-likelihood function can be written as follows: L(Y | \u0393, U, B, X) = 1 nq n X i=1 q X j=1 lij(\u03b2j0 + \u03b3\u22ba j Ui + \u03b2\u22ba jcXc i ), (2) where the function lij(wij) = log pij(Yij|wij) is the individual log-likelihood function with wij = \u03b2j0 + \u03b3\u22ba j Ui + \u03b2\u22ba jcXc i . We aim to obtain (b \u0393, b U, b B) from maximizing the joint likelihood function L(Y | \u0393, U, B, X). While the estimators can be computed efficiently by maximizing the joint likelihood 9 function through an alternating maximization algorithm (Collins et al. 2002, Chen et al. 2019), challenges emerge for performing statistical inference on the model parameters. \u2022 One challenge concerns the model identifiability. Without additional constraints, the covariate effects are not identifiable due to the incorporation of covariates and their potential dependence on latent factors. The latent factors and factor loadings encounter similar identifiability issues as in traditional factor analysis (Bai & Li 2012, Fan et al. 2013). Ensuring that the model is statistically identifiable is the fundamental prerequisite for achieving model reliability and making valid inferences (Allman et al. 2009, Gu & Xu 2020). \u2022 Another challenge arises from the nonlinearity of our proposed model. In the existing literature, most studies focus on the statistical inference for our proposed setting in the context of linear models (Bai & Li 2012, Fan et al. 2013, Wang et al. 2017). On the other hand, settings with general log-likelihood function lij(wij), including covariateadjusted logistic and probit factor models, are less investigated. Common techniques for linear models are not applicable to the considered general nonlinear model setting. Motivated by these challenges, we propose interpretable and practical identifiability conditions in Section 3.1. We then incorporate these conditions into the joint-likelihood-based estimation method in Section 3.2. Furthermore, we introduce a novel inference framework for performing statistical inference on \u03b2j, \u03b3j, and Ui in Section 4. 3 Method 3.1 Model Identifiability Identifiability issues commonly occur in latent variable models (Allman et al. 2009, Bai & Li 2012, Xu 2017). The proposed model in (1) has two major identifiability issues. 
The first issue is that the proposed model remains unchanged after certain linear transformations of 10 both B and U, causing the covariate effects together with the intercepts, represented by B, and the latent factors, denoted by U, to be unidentifiable. The second issue is that the model is invariant after an invertible transformation of both U and \u0393 as in the linear factor models (Bai & Li 2012, Fan et al. 2013), causing the latent factors U and factor loadings \u0393 to be undetermined. Specifically, under the model setup in (1), we define the joint probability distribution of responses to be P(Y | \u0393, U, B, X) = Qn i=1 Qq j=1 pij(Yij|wij). The model parameters are identifiable if and only if for any response Y, there does not exist (\u0393, U, B) \u0338= (e \u0393, e U, e B) such that P(Y | \u0393, U, B, X) = P(Y | e \u0393, e U, e B, X). The first issue concerning the identifiability of B and U is that for any (\u0393, U, B) and any transformation matrix A, there exist e \u0393 = \u0393, e U = U + XA\u22ba, and e B = B \u2212\u0393A such that P(Y | \u0393, U, B, X) = P(Y | e \u0393, e U, e B, X). This identifiability issue leads to the indeterminacy of the covariate effects and latent factors. The second issue is related to the identifiability of U and \u0393. For any (e \u0393, e U, e B) and any invertible matrix G, there exist \u00af \u0393 = e \u0393(G\u22ba)\u22121, \u00af U = e UG, and \u00af B = e B such that P(Y | e \u0393, e U, e B, X) = P(Y | \u00af \u0393, \u00af U, \u00af B, X). This causes the latent factors and factor loadings to be unidentifiable. Remark 3. Intuitively, the unidentifiable e B = B \u2212\u0393A can be interpreted to include both direct and indirect effects of X on response Y. We take the intercept and covariate effect on the first item ( e \u03b21) as an example and illustrate it in Figure 2. One part of e \u03b21 is the direct effect from X onto Y (see the orange line in the left panel), whereas another part of e \u03b21 may be explained through the latent factors U, as the latent factors are unobserved and there are potential correlations between latent factors and observed covariates. The latter part of e \u03b21 can be considered as the indirect effect (see the blue line in the right panel). 11 Xi Yi1 Ui Yi2 Yi,q\u22121 Yiq \u2026 \u2026 \u03b21 \u03b22 \u03b2q\u22121 \u03b2q \u2026 \u03b31 \u03b32 \u03b3q\u22121 \u03b3q \u2026 Xi Yi1 Ui Yi2 Yi,q\u22121 Yiq \u2026 \u2026 \u03b21 \u03b22 \u03b2q\u22121 \u03b2q \u2026 \u03b31 \u03b32 \u03b3q\u22121 \u03b3q \u2026 Figure 2: The direct effects (orange solid line in the left panel) and the indirect effects (blue solid line in the right panel) for item 1. The first identifiability issue is a new challenge introduced by the covariate adjustment in the model, whereas the second issue is common in traditional factor models (Bai & Li 2012, Fan et al. 2013). Considering the two issues together, for any (\u0393, U, B), A, and G, there exist transformations e \u0393 = \u0393(G\u22ba)\u22121, e U = (U + XA\u22ba)G, and e B = B \u2212\u0393A such that P(Y | \u0393, U, B, X) = P(Y | e \u0393, e U, e B, X). In the rest of this subsection, we propose identifiability conditions to address these issues. For notation convenience, throughout the rest of the paper, we define \u03d5\u2217= (\u0393\u2217, U\u2217, B\u2217) as the true parameters. 
Identifiability Conditions As described earlier, the correlation between the design matrix of covariates X and the latent factors U\u2217results in the identifiability issue of B\u2217. In the psychometrics literature, the intercept \u03b2\u2217 j0 is commonly referred to as the difficulty parameter, while \u03b2\u2217 jc represents the effects of observed covariates, namely DIF effects, on the response to item j (Reckase 2009, Holland & Wainer 2012). The different scientific interpretations motivate us to develop different identifiability conditions for \u03b2\u2217 j0 and \u03b2\u2217 jc, respectively. Specifically, we propose a centering condition on U\u2217to ensure the identifiability of the intercept \u03b2\u2217 j0 for all items j \u2208[q]. On the other hand, to identify the covariate effects \u03b2\u2217 jc, a natural idea is to impose the covariate effects \u03b2\u2217 jc for all items j \u2208[q] to be sparse, as shown in many regularized methods and item purification methods (Candell & Drasgow 1988, Fidalgo et al. 2000, Bauer et al. 2020, Belzak & Bauer 2020). In Chen et al. (2023a), 12 an interpretable identifiability condition is proposed for selecting sparse covariate effects, yet this condition is specific to uni-dimensional covariates. Motivated by Chen et al. (2023a), we propose the following minimal \u21131 condition applicable to general cases where the covariates are multi-dimensional. To better present the identifiability conditions, we write A = (a0, a1, . . . , ap\u2217) \u2208RK\u00d7p and define Ac = (a1, . . . , ap\u2217) \u2208RK\u00d7p\u2217as the part applied to the covariate effects. Condition 1. (i) Pn i=1 U \u2217 i = 0K. (ii) Pq j=1 \u2225\u03b2\u2217 jc\u22251 < Pq j=1 \u2225\u03b2\u2217 jc \u2212A\u22ba c\u03b3\u2217 j \u22251 for any Ac \u0338= 0. Condition 1(i) assumes the latent abilities U\u2217are centered to ensure the identifiability of the intercepts \u03b2\u2217 j0\u2019s, which is commonly assumed in the item response theory literature (Reckase 2009). Condition 1(ii) is motivated by practical applications. For instance, in educational testing, practitioners need to identify and remove biased test items, correspondingly, items with non-zero covariate effects (\u03b2\u2217 js \u0338= 0). In practice, most of the designed items are unbiased, and therefore, it is reasonable to assume that the majority of items have no covariate effects, that is, the covariate effects \u03b2\u2217 jc\u2019s are sparse (Holland & Wainer 2012, Chen et al. 2023a). Next, we present a sufficient and necessary condition for Condition 1(ii) to hold. Proposition 1. Condition 1(ii) holds if and only if for any v \u2208RK \\ {0K}, q X j=1 \f \fv\u22ba\u03b3\u2217 j \f \fI(\u03b2\u2217 js = 0) > q X j=1 sign(\u03b2\u2217 js)v\u22ba\u03b3\u2217 j I(\u03b2\u2217 js \u0338= 0), \u2200s \u2208[p\u2217]. (3) Remark 4. Proposition 1 implies that Condition 1(ii) holds when {j : \u03b2\u2217 js \u0338= 0} is separated into {j : \u03b2\u2217 js > 0} and {j : \u03b2\u2217 js < 0} in a balanced way. With diversified signs of \u03b2\u2217 js, Proposition 1 holds when a considerable proportion of test items have no covariate effect (\u03b2\u2217 js \u0338= 0). For example, when \u03b3\u2217 j = m1(k) K with m > 0, Condition 1(ii) holds if and only if Pq j=1 |m|{\u2212I(\u03b2\u2217 js/m > 0) + I(\u03b2\u2217 js/m \u22640)} > 0 and Pq j=1 |m|{\u2212I(\u03b2\u2217 js/m \u22650) + I(\u03b2\u2217 js/m < 0)} < 0. 
With slightly more than q/2 items correspond to \u03b2\u2217 js = 0, Condition 1(ii) holds. Moreover, if #{j : \u03b2\u2217 js > 0} and #{j : \u03b2\u2217 js < 0} are comparable, then Condition 1(ii) holds even when less than q/2 items correspond to \u03b2\u2217 js = 0 and more than q/2 items correspond 13 to \u03b2\u2217 js \u0338= 0. Though assuming a \u201csparse\u201d structure, our assumption here differs from existing high-dimensional literature. In high-dimensional regression models, the covariate coefficient when regressing the dependent variable on high-dimensional covariates, is often assumed to be sparse, with the proportion of the non-zero covariate coefficients asymptotically approaching zero. In our setting, Condition 1(ii) allows for relatively dense settings where the proportion of items with non-zero covariate effects is some positive constant. To perform simultaneous estimation and inference on \u0393\u2217and U\u2217, we consider the following identifiability conditions to address the second identifiability issue. Condition 2. (i) (U\u2217)\u22baU\u2217is diagonal. (ii) (\u0393\u2217)\u22ba\u0393\u2217is diagonal. (iii) n\u22121(U\u2217)\u22baU\u2217= q\u22121(\u0393\u2217)\u22ba\u0393\u2217. Condition 2 is a set of widely used identifiability conditions in the factor analysis literature (Bai 2003, Bai & Li 2012, Wang 2022). For practical and theoretical benefits, we impose Condition 2 to address the identifiability issue related to G. It is worth mentioning that this condition can be replaced by other identifiability conditions. For true parameters satisfying any identifiability condition, we can always find a transformation such that the transformed parameters satisfy our proposed Conditions 1\u20132 and the proposed estimation method and theoretical results in the subsequent sections still apply, up to such a transformation. 3.2 Joint Maximum Likelihood Estimation In this section, we introduce a joint-likelihood-based estimation method for the covariate effects B, the latent factors U, and factor loadings \u0393 simultaneously. Incorporating Conditions 1\u20132 into the estimation procedure, we obtain the maximum joint-likelihood-based estimators for \u03d5\u2217= (\u0393\u2217, U\u2217, B\u2217) that satisfy the proposed identifiability conditions. With Condition 1, we address the identifiability issue related to the transformation matrix A. Specifically, for any parameters \u03d5 = (\u0393, U, B), there exists a matrix A\u2217= (a\u2217 0, A\u2217 c) with A\u2217 c = argminAc\u2208RK\u00d7p\u2217 Pq j=1 \u2225\u03b2jc \u2212A\u22ba c\u03b3j\u22251 and a\u2217 0 = \u2212n\u22121 Pn i=1(Ui + A\u2217 cXc i ) such that 14 the transformed matrices U\u2217= U + X(A\u2217)\u22baand B\u2217= B \u2212\u0393A\u2217satisfy Condition 1. The transformation idea naturally leads to the following estimation methodology for B\u2217. To estimate B\u2217and U\u2217that satisfy Condition 1, we first obtain the maximum likelihood estimator b \u03d5 = (b \u0393, b U, b B) by b \u03d5 = argmin \u03d5\u2208\u2126\u03d5 \u2212L(Y | \u03d5, X), (4) where the parameter space \u2126\u03d5 is given as \u2126\u03d5 = {\u03d5 : \u2225\u03d5\u2225max \u2264C} for some large C. To solve (4), we employ an alternating minimization algorithm. Specifically, for steps t = 0, 1, . . 
., we compute b \u0393(t+1), b B(t+1) = argmin \u0393\u2208Rq\u00d7K, B\u2208Rq\u00d7p \u2212L(Y | \u0393, U(t), B, X); b U(t+1) = argmin U\u2208Rn\u00d7K \u2212L(Y | \u0393(t+1), U, B(t+1), X), until the quantity max{\u2225b \u0393(t+1) \u2212b \u0393(t)\u2225F, \u2225b U(t+1) \u2212b U(t)\u2225F, \u2225b B(t+1) \u2212b B(t)\u2225F} is less than some pre-specified tolerance value for convergence. We then estimate Ac by minimizing the \u21131norm b Ac = argmin Ac\u2208RK\u00d7p\u2217 q X j=1 \u2225b \u03b2jc \u2212A\u22ba c b \u03b3j\u22251. (5) Next, we estimate b a0 = \u2212n\u22121 Pn i=1( b Ui + b AcXc i ) and let b A = (b a0, b Ac). Given the estimators b A, b \u0393, and b B, we then construct b B\u2217= b B \u2212b \u0393b A and e U = b U + Xb A\u22ba such that Condition 1 holds. Recall that Condition 2 addresses the identifiability issue related to the invertible matrix G. Specifically, for any parameters (\u0393, U), there exists a matrix G\u2217such that Condition 2 holds for U\u2217= (U+X(A\u2217)\u22ba)G\u2217and \u0393\u2217= \u0393(G\u2217)\u2212\u22ba. Let U = diag(\u03f11, . . . , \u03f1K) be a diagonal 15 matrix that contains the K eigenvalues of (nq)\u22121(\u0393\u22ba\u0393)1/2(U + XA\u22ba)\u22ba(U + XA\u22ba) (\u0393\u22ba\u0393)1/2 and let V be a matrix that contains its corresponding eigenvectors. We set G\u2217= (q\u22121\u0393\u22ba\u0393)1/2 VU \u22121/4. To further estimate \u0393\u2217and U\u2217, we need to obtain an estimator for the invertible matrix G\u2217. Given the maximum likelihood estimators obtained in (4) and b A in (5), we estimate G\u2217via b G = (q\u22121b \u0393\u22bab \u0393)1/2 b V b U \u22121/4 where b U and b V are matrices that contain the eigenvalues and eigenvectors of (nq)\u22121(b \u0393\u22bab \u0393)1/2( b U+Xb A\u22ba)\u22ba( b U+Xb A\u22ba) (b \u0393\u22bab \u0393)1/2, respectively. With b G and b A, we now obtain the following transformed estimators that satisfy Condition 2: b \u0393\u2217= b \u0393( b G\u22ba)\u22121 and b U\u2217= ( b U + Xb A\u22ba) b G. To quantify the uncertainty of the proposed estimators, we will show that the proposed estimators are asymptotically normally distributed. Specifically, in Theorem 2 of Section 4, we establish the asymptotic normality result for b \u03b2\u2217 j, which allows us to make inference on the covariate effects \u03b2\u2217 j. Moreover, as the latent factors U \u2217 i and factor loadings \u03b3\u2217 j often have important interpretations in domain sciences, we are also interested in the inference on parameters U \u2217 i and \u03b3\u2217 j . In Theorem 2, we also derive the asymptotic distributions for estimators b U \u2217 i and b \u03b3\u2217 j , providing inference results for parameters U \u2217 i and \u03b3\u2217 j . 4 Theoretical Results We propose a novel framework to establish the estimation consistency and asymptotic normality for the proposed joint-likelihood-based estimators b \u03d5\u2217= (b \u0393\u2217, b U\u2217, b B\u2217) in Section 3. To establish the theoretical results for b \u03d5\u2217, we impose the following regularity assumptions. Assumption 1. There exist constants M > 0, \u03ba > 0 such that: (i) \u03a3\u2217 u = limn\u2192\u221en\u22121(U\u2217)\u22baU\u2217exists and is positive definite. For i \u2208[n], \u2225U \u2217 i \u22252 \u2264M. (ii) \u03a3\u2217 \u03b3 = limq\u2192\u221eq\u22121(\u0393\u2217)\u22ba\u0393\u2217exists and is positive definite. For j \u2208[q], \u2225\u03b3\u2217 j \u22252 \u2264M. 
(iii) \u03a3x = limn\u2192\u221en\u22121 Pn i=1 XiX\u22ba i exists and 1/\u03ba2 \u2264\u03bbmin(\u03a3x) \u2264\u03bbmax(\u03a3x) \u2264\u03ba2. For i \u2208[n], maxi \u2225Xi\u2225\u221e\u2264M. 16 (iv) \u03a3\u2217 ux = limn\u2192\u221en\u22121 Pn i=1 U \u2217 i X\u22ba i exists and \u2225\u03a3\u2217 ux\u03a3\u22121 x \u22251,\u221e\u2264M. The eigenvalues of (\u03a3\u2217 u \u2212\u03a3\u2217 ux\u03a3\u22121 x (\u03a3\u2217 ux)\u22ba)\u03a3\u2217 \u03b3 are distinct. Assumptions 1 is commonly used in the factor analysis literature. In particular, Assumptions 1(i)\u2013(ii) correspond to Assumptions A-B in Bai (2003) under linear factor models, ensuring the compactness of the parameter space on U\u2217and \u0393\u2217. Under nonlinear factor models, such conditions on compact parameter space are also commonly assumed (Wang 2022, Chen et al. 2023b). Assumption 1(iii) is standard regularity conditions for the nonlinear setting that is needed to establish the concentration of the gradient and estimation error for the model parameters when p diverges. In addition, Assumption 1(iv) is a crucial identification condition; similar conditions have been imposed in the existing literature such as Assumption G in Bai (2003) in the context of linear factor models and Assumption 6 in Wang (2022) in the context of nonlinear factor models without covariates. Assumption 2. For any i \u2208[n] and j \u2208[q], assume that lij(\u00b7) is three times differentiable, and we denote the first, second, and third order derivatives of lij(wij) with respect to wij as l\u2032 ij(wij), l\u2032\u2032 ij(wij), and l\u2032\u2032\u2032 ij(wij), respectively. There exist M > 0 and \u03be \u22654 such that E(|l\u2032 ij(wij)|\u03be) \u2264M and |l\u2032 ij(wij)| is sub-exponential with \u2225l\u2032 ij(wij)\u2225\u03c61 \u2264M. Furthermore, we assume E{l\u2032 ij(w\u2217 ij)} = 0. Within a compact space of wij, we have bL \u2264\u2212l\u2032\u2032 ij(wij) \u2264bU and |l\u2032\u2032\u2032 ij(wij)| \u2264bU for bU > bL > 0. Assumption 2 assumes smoothness on the log-likelihood function lij(wij). In particular, it assumes sub-exponential distributions and finite fourth-moments of the first order derivatives l\u2032 ij(wij). For commonly used linear or nonlinear factor models, the assumption is not restrictive and can be satisfied with a large \u03be. For instance, consider the logistic model with l\u2032 ij(wij) = Yij \u2212exp(wij)/{1+exp(wij)}, we have |l\u2032 ij(wij)| \u22641 and \u03be can be taken as \u221e. The boundedness conditions for l\u2032\u2032 ij(wij) and l\u2032\u2032\u2032 ij(wij) are necessary to guarantee the convexity of the joint likelihood function. In a special case of linear factor models, l\u2032\u2032 ij(wij) is a constant and the boundedness conditions naturally hold. For popular nonlinear models such as lo17 gistic factor models, probit factor models, and Poisson factor models, the boundedness of l\u2032\u2032 ij(wij) and l\u2032\u2032\u2032 ij(wij) can also be easily verified. Assumption 3. For \u03be specified in Assumption 2 and a sufficiently small \u03f5 > 0, we assume as n, q, p \u2192\u221e, p p n \u2227(pq) (nq)\u03f5+3/\u03be \u21920. (6) Assumption 3 is needed to ensure that the derivative of the likelihood function equals zero at the maximum likelihood estimator with high probability, a key property in the theoretical analysis. In particular, we need the estimation errors of all model parameters to converge to 0 uniformly with high probability. 
Such uniform convergence results involve delicate analysis of the convexity of the objective function, for which technically we need Assumption 3. For most of the popularly used generalized factor models, \u03be can be taken as any large value as discussed above, thus (nq)\u03f5+3/\u03be is of a smaller order of p n \u2227(pq), given small \u03f5. Specifically, Assumption 3 implies p = o(n1/2 \u2227q) up to a small order term, an asymptotic regime that is reasonable for many educational assessments. Next, we impose additional assumptions crucial to establishing the theoretical properties of the proposed estimators. One challenge for theoretical analysis is to handle the dependence between the latent factors U\u2217and the design matrix X. To address this challenge, we employ the following transformed U0 that are orthogonal with X, which plays an important role in establishing the theoretical results (see Supplementary Materials for details). In particular, for i \u2208[n], we let U 0 i = (G\u2021)\u22ba(U \u2217 i \u2212A\u2021Xi). Here G\u2021 = (q\u22121(\u0393\u2217)\u22ba\u0393\u2217)1/2 V\u2217(U \u2217)\u22121/4 and A\u2021 = (U\u2217)\u22baX(X\u22baX)\u22121, where U \u2217= diag(\u03f1\u2217 1, . . . , \u03f1\u2217 K) with diagonal elements being the K eigenvalues of (nq)\u22121((\u0393\u2217)\u22ba\u0393\u2217)1/2(U\u2217)\u22ba(In\u2212Px)U\u2217((\u0393\u2217)\u22ba\u0393\u2217)1/2 with Px = X(X\u22baX)\u22121X\u22baand V\u2217containing the matrix of corresponding eigenvectors. Under this transformation for U 0 i , we further define \u03b30 j = (G\u2021)\u22121\u03b3\u2217 j and \u03b20 j = \u03b2\u2217 j + (A\u2021)\u22ba\u03b3\u2217 j for j \u2208[q], and write Z0 i = ((U 0 i )\u22ba X\u22ba i )\u22baand w0 ij = (\u03b30 j )\u22baU 0 i + (\u03b20 j)\u22baXi. These transformed parameters \u03b30 j \u2019s, U 0 i \u2019s, and \u03b20 j\u2019s give the same joint likelihood value as that of the true parameters \u03b3\u2217 j \u2019s, U \u2217 i \u2019s and \u03b2\u2217 j\u2019s, which 18 facilitate our theoretical understanding of the joint-likelihood-based estimators. Assumption 4. (i) For any j \u2208[q], \u2212n\u22121 Pn i=1 l\u2032\u2032 ij(w0 ij)Z0 i (Z0 i )\u22ba p \u2192\u03a80 jz for some positive definite matrix \u03a80 jz and n\u22121/2 Pn i=1 l\u2032 ij(w0 ij)Z0 i d \u2192N(0, \u21260 jz). (ii) For any i \u2208[n], \u2212q\u22121 Pq j=1 l\u2032\u2032 ij(w0 ij)\u03b30 j (\u03b30 j )\u22ba p \u2192\u03a80 i\u03b3 for some positive definite matrix \u03a80 i\u03b3 and q\u22121/2 Pq j=1 l\u2032 ij(w0 ij)\u03b30 j d \u2192N(0, \u21260 i\u03b3). Assumption 4 is a generalization of Assumption F(3)-(4) in Bai (2003) for linear models to the nonlinear setting. Specifically, we need Assumption 4(i) to derive the asymptotic distributions of the estimators b \u03b2\u2217 j and b \u03b3\u2217 j , and Assumption 4(ii) is used for establishing the asymptotic distribution of b U \u2217 i . Note that these assumptions are imposed on the loglikelihood derivative functions evaluated at the true parameters w0 ij, Z0 i , and \u03b30 j . In general, for the popular generalized factor models, such assumptions hold with mild conditions. For example, under linear models, l\u2032 ij(wij) is the random error and l\u2032\u2032 ij(wij) is a constant. Then \u03a80 jz and \u03a80 i\u03b3 naturally exist and are positive definite followed by Assumption 1. 
The limiting distributions of n\u22121/2 Pn i=1 l\u2032 ij(w0 ij)Z0 i and q\u22121/2 Pq j=1 l\u2032 ij(w0 ij)\u03b30 j can be derived by the central limit theorem under standard regularity conditions. Under logistic and probit models, l\u2032 ij(wij) and l\u2032\u2032 ij(wij) are both finite inside a compact parameters space and similar arguments can be applied to show the validity of Assumption 4. We present the following assumption to establish the theoretical properties of the transformed matrix b A as defined in (5). In particular, we define A0 = (G\u2021)\u22baA\u2021 and write A0 = (a0 0, . . . , a0 p\u2217)\u22ba. Note that the estimation problem of (5) is related to the median regression problem with measurement errors. To understand the properties of this estimator, following existing M-estimation literature (He & Shao 1996, 2000), we define \u03c80 js(a) = \u03b30 j sign{\u03b20 js + (\u03b30 j )\u22ba(a \u2212a0 s)} and \u03c7s(a) = Pq j=1 \u03c80 js(a) for j \u2208[q] and s \u2208[p\u2217]. We further define a perturbed version of \u03c80 js(a), denoted as \u03c8js(a, \u03b4js), as follows: \u03c8js(a, \u03b4js) = \u0010 \u03b30 j + \u0002 \u03b4js \u221an \u0003 [1:K] \u0011 sign n \u03b20 js + \u0002 \u03b4js \u221an \u0003 K+1 \u2212(\u03b30 j + \u0002 \u03b4js \u221an \u0003 [1:K])\u22ba(a \u2212a0 s) o , s \u2208[p\u2217] 19 where the perturbation \u03b4js = \uf8eb \uf8ec \uf8ed IK 0 0 (1(p) s )\u22ba \uf8f6 \uf8f7 \uf8f8 \u0010 \u2212 n X i=1 l\u2032\u2032 ij(w0 ij)Z0 i (Z0 i )\u22ba\u0011\u22121\u0010\u221an n X i=1 l\u2032 ij(w0 ij)Z0 i \u0011 , is asymptotically normally distributed by Assumption 4. We define b \u03c7s(a) = Pq j=1 E\u03c8js(a, \u03b4js). Assumption 5. For \u03c7s(a), we assume that there exists some constant c > 0 such that mina\u0338=0 |q\u22121\u03c7s(a)| > c holds for all s \u2208[p\u2217]. Assume there exists as0 for each s \u2208[p\u2217] such that b \u03c7s(as0) = 0 with p\u221an\u2225\u03b1s0\u2225\u21920. In a neighbourhood of \u03b1s0, b \u03c7s(a) has a nonsingular derivative such that {q\u22121\u2207ab \u03c7s(\u03b1s0)}\u22121 = O(1) and q\u22121|\u2207ab \u03c7s(a)\u2212\u2207ab \u03c7s(\u03b1s0)| \u2264k|a\u2212\u03b1s0|. We assume \u03b9nq,p := max \b \u2225\u03b1s0\u2225, q\u22121 Pq j=1 \u03c8js(as0, \u03b4js) \t = o \u0000(p\u221an)\u22121\u0001 . Assumption 5 is crucial in addressing the theoretical difficulties of establishing the consistent estimation for A0, a challenging problem related to median regression with weakly dependent measurement errors. In Assumption 5, we treat the minimizer of | Pq j=1 \u03c8(a, \u03b4js)| as an M-estimator and adopt the Bahadur representation results in He & Shao (1996) for the theoretical analysis. For an ideal case where \u03b4js are independent and normally distributed with finite variances, which corresponds to the setting in median regression with measurement errors (He & Liang 2000), these assumptions can be easily verified. Assumption 5 discusses beyond such an ideal case and covers general settings. In addition to independent and Gaussian measurement errors, this condition also accommodates the case when \u03b4js are asymptotically normal and weakly dependent with finite variances, as implied by Assumption 4 and the conditional independence of Yij. We want to emphasize that Assumption 5 allows for both sparse and dense settings of the covariate effects. Consider an example of K = p = 1 and \u03b3j = 1 for j \u2208[q]. 
Suppose \u03b2\u2217 js is zero for all j \u2208[q1] and nonzero otherwise. Then this condition is satisfied as long as #{j : \u03b2\u2217 js > 0} and #{j : \u03b2\u2217 js < 0} are comparable, even when the sparsity level q1 is small. Under the proposed assumptions, we next present our main theoretical results. 20 Theorem 1 (Average Consistency). Suppose the true parameters \u03d5\u2217= (\u0393\u2217, U\u2217, B\u2217) satisfy identifiability conditions 1\u20132. Under Assumptions 1\u20135, we have q\u22121\u2225b B\u2217\u2212B\u2217\u22252 F = Op \u0012p2 log qp n + p log n q \u0013 ; (7) if we further assume p3/2(nq)\u03f5+3/\u03be(p1/2n\u22121/2 + q\u22121/2) = o(1), then we have n\u22121\u2225b U\u2217\u2212U\u2217\u22252 F = Op \u0012p log qp n + log n q \u0013 ; (8) q\u22121\u2225b \u0393\u2217\u2212\u0393\u2217\u22252 F = Op \u0012p log qp n + log n q \u0013 . (9) Theorem 1 presents the average convergence rates of b \u03d5\u2217. Consider an oracle case with U\u2217 and \u0393\u2217known, the estimation of B\u2217reduces to an M-estimation problem. For M-estimators under general parametric models, it can be shown that the optimal convergence rates in squared \u21132-norm is Op(p/n) under p(log p)3/n \u21920 (He & Shao 2000). In terms of our average convergence rate on b B\u2217, the first term in (7), n\u22121p2 log(qp), approximately matches the convergence rate Op(p/n) up to a relatively small order term of p log(qp). The second term in (7), q\u22121p log n, is mainly due to the estimation error for the latent factor U\u2217. In educational applications, it is common to assume the number of subjects n is much larger than the number of items q. Under such a practical setting with n \u226bq and p relatively small, the term q\u22121 log n in (8) dominates in the derived convergence rate of b U\u2217, which matches with the optimal convergence rate Op(q\u22121) for factor models without covariates (Bai & Li 2012, Wang 2022) up to a small order term. Remark 5. The additional condition p3/2(nq)\u03f5+3/\u03be(p1/2n\u22121/2 + q\u22121/2) = o(1) in Theorem 1 is used to handle the challenges related to the invertible matrix G that affects the theoretical properties of b U\u2217and b \u0393\u2217. It is needed for establishing the estimation consistency of b U\u2217and b \u0393\u2217 but not for that of b B\u2217. With sufficiently large \u03be and small \u03f5, this assumption is approximately p = o(n1/4 \u2227q1/3) up to a small order term. 21 Remark 6. One challenge in establishing the estimation consistency for b \u03d5\u2217arises from the unrestricted dependence structure between U\u2217and X. If we consider the ideal case where the columns of U\u2217and X are orthogonal, i.e., (U\u2217)\u22baX = 0K\u00d7p, then we can achieve comparable or superior convergence rates with less stringent assumptions. Specifically, with Assumptions 1\u20133 only, we can obtain the same convergence rates for b U\u2217and b \u0393\u2217as in (8) and (9), respectively. Moreover, with Assumptions 1\u20133, the average convergence rate for the consistent estimator of B\u2217is Op(n\u22121p log qp+q\u22121 log n), which is tighter than (7) by a factor of p. With estimation consistency results established, we next derive the asymptotic normal distributions for the estimators, which enable us to perform statistical inference on the true parameters. Theorem 2 (Asymptotic Normality). Suppose the true parameters \u03d5\u2217= (\u0393\u2217, U\u2217, B\u2217) satisfy identifiability conditions 1\u20132. 
Under Assumptions 1\u20135, we have the asymptotic distributions as follows. Denote \u03b6\u22122 nq,p = n\u22121p log qp + q\u22121log n. If p3/2\u221an(nq)3/\u03be\u03b6\u22122 nq,p \u21920, for any j \u2208[q] and a \u2208Rp with \u2225a\u22252 = 1, \u221ana\u22ba(\u03a3\u2217 \u03b2,j)\u22121/2( b \u03b2\u2217 j \u2212\u03b2\u2217 j) d \u2192N(0, 1), (10) where \u03a3\u2217 \u03b2,j = (\u2212(A0)\u22ba, Ip)(\u03a80 jz)\u22121\u21260 jz(\u03a80 jz)\u22121(\u2212(A0)\u22ba, Ip)\u22ba, and for any j \u2208[q], \u221an(\u03a3\u2217 \u03b3,j)\u22121/2(b \u03b3\u2217 j \u2212\u03b3\u2217 j ) d \u2192N(0, IK), (11) where \u03a3\u2217 \u03b3,j = G\u2021(IK, 0)(\u03a80 jz)\u22121\u21260 jz(\u03a80 jz)\u22121 (IK, 0)\u22ba(G\u2021)\u22ba. Furthermore, for any i \u2208[n], if q = O(n) and p3/2\u221aq(nq)3/\u03be\u03b6\u22122 nq,p \u21920, \u221aq(\u03a3\u2217 u,i)\u22121/2( b U \u2217 i \u2212U \u2217 i ) d \u2192N(0, IK), (12) where \u03a3\u2217 u,i = (G\u2021)\u2212\u22ba(\u03a80 i\u03b3)\u22121\u21260 i\u03b3(\u03a80 i\u03b3)\u22121(G\u2021)\u22121. 22 The asymptotic covariance matrices in Theorem 2 can be consistently estimated. Due to the space limitations, we defer the construction of the consistent estimators b \u03a3\u2217 \u03b2,j, b \u03a3\u2217 \u03b3,j, and b \u03a3\u2217 u,i to Supplementary Materials. Theorem 2 provides the asymptotic distributions for all individual estimators. In particular, with the asymptotic distributions and the consistent estimators b \u03a3\u2217 \u03b2,j for the asymptotic covariance matrices, we can perform hypothesis testing on \u03b2\u2217 js for j \u2208[q] and s \u2208[p\u2217]. We reject the null hypothesis \u03b2\u2217 js = 0 at significance level \u03b1 if |\u221an(b \u03c3\u2217 \u03b2,js)\u22121b \u03b2\u2217 js| > \u03a6\u22121(1 \u2212\u03b1/2), where (b \u03c3\u2217 \u03b2,js)2 is the (s + 1)-th diagonal entry in b \u03a3\u2217 \u03b2,j. For the asymptotic normality of b \u03b2\u2217 j, the condition p3/2\u221an(nq)3/\u03be(n\u22121p log qp+q\u22121 log n) \u2192 0 together with Assumption 3 gives p = o{n1/5 \u2227(q2/n)1/3} up to a small order term, and further implies n \u226aq2, which is consistent with established conditions in the existing factor analysis literature (Bai & Li 2012, Wang 2022). For the asymptotic normality of b U \u2217 i , the additional condition that q = O(n) is a reasonable assumption in educational applications where the number of items q is much fewer than the number of subjects n. In this case, the scaling conditions imply p = o{q1/3 \u2227(n2/q)1/5} up to a small order term. Similarly for the asymptotic normality of b \u03b3\u2217 j , the proposed conditions give p = o{n1/5 \u2227(q2/n)1/3} up to a small order term. Remark 7. Similar to the discussion in Remark 6, the challenges arising from the unrestricted dependence between U\u2217and X also affect the derivation of the asymptotic distributions for the proposed estimators. If we consider the ideal case with (U\u2217)\u22baX = 0K\u00d7p, we can establish the asymptotic normality for all individual estimators under Assumptions 1\u20134 only and weaker scaling conditions. Specifically, when (U\u2217)\u22baX = 0K\u00d7p, the scaling condition becomes p\u221an(nq)3/\u03be(n\u22121p log qp+q\u22121 log n) \u21920 for deriving asymptotic normality of b \u03b2\u2217 j and b \u03b3\u2217 j , which is milder than that for (10) and (11). 23 5 Simulation Study In this section, we study the finite-sample performance of the proposed joint-likelihoodbased estimator. 
We focus on the logistic latent factor model in (1) with pij(y | wij) = exp(wijy)/{1 + exp(wij)}, where wij = (\u03b3\u2217 j )\u22baU \u2217 i + (\u03b2\u2217 j)\u22baXi. The logistic latent factor model is commonly used in the context of educational assessment and is also referred to as the item response theory model (Mellenbergh 1994, Hambleton & Swaminathan 2013). We apply the proposed method to estimate B\u2217and perform statistical inference on testing the null hypothesis \u03b2\u2217 js = 0. We start with presenting the data generating process. We set the number of subjects n = {300, 500, 1000, 1500, 2000}, the number of items q = {100, 300, 500}, the covariate dimension p = {5, 10, 30}, and the factor dimension K = 2, respectively. We jointly generate Xc i and U \u2217 i from N(0, \u03a3) where \u03a3ij = \u03c4 |i\u2212j| with \u03c4 \u2208{0, 0.2, 0.5, 0.7}. In addition, we set the loading matrix \u0393\u2217 [,k] = 1(K) k \u2297vk, where \u2297is the Kronecker product and vk is a (q/K)-dimensional vector with each entry generated independently and identically from Unif[0.5, 1.5]. For the covariate effects B\u2217, we set the intercept terms to equal \u03b2\u2217 j0 = 0. For the remaining entries in B\u2217, we consider the following two settings: (1) sparse setting: \u03b2\u2217 js = \u03c1 for s = 1, . . . , p and j = 5s\u22124, . . . , 5s and other \u03b2\u2217 js are set to zero; (2) dense setting: \u03b2\u2217 js = \u03c1 for s = 1, . . . , p and j = Rsq/5 + 1, . . . , (Rs + 1)q/5 with Rs = s \u22125\u230as/5\u230b, and other \u03b2\u2217 js are set to zero. Here, the signal strength is set as \u03c1 \u2208{0.3, 0.5}. Intuitively, in the sparse setting, we set 5 items to be biased for each covariate whereas in the dense setting, 20% of items are biased items for each covariate. For better empirical stability, after reaching convergence in the proposed alternating maximization algorithm and transforming the obtained MLEs into ones that satisfy Conditions 1\u20132, we repeat another round of maximization and transformation. We take the significance level at 5% and calculate the averaged type I error based on all the entries \u03b2\u2217 js = 0 and the averaged power for all non-zero entries, over 100 replications. The averaged hypothesis testing results are presented in Figures 3\u20136 for p = 5 and p = 30, across different 24 settings. Additional numerical results for p = 10 are presented in the Supplementary Materials. 0.00 0.20 0.40 0.60 0.80 1.00 0.05 300 500 1000 1500 2000 n Power/Type I Errors p = 5, q = 100, rho = 0.3 0.00 0.20 0.40 0.60 0.80 1.00 0.05 300 500 1000 1500 2000 n Power/Type I Errors p = 5, q = 100, rho = 0.5 0.00 0.20 0.40 0.60 0.80 1.00 0.05 300 500 1000 1500 2000 n Power/Type I Errors p = 5, q = 300, rho = 0.3 0.00 0.20 0.40 0.60 0.80 1.00 0.05 300 500 1000 1500 2000 n Power/Type I Errors p = 5, q = 300, rho = 0.5 0.00 0.20 0.40 0.60 0.80 1.00 0.05 300 500 1000 1500 2000 n Power/Type I Errors p = 5, q = 500, rho = 0.3 0.00 0.20 0.40 0.60 0.80 1.00 0.05 300 500 1000 1500 2000 n Power/Type I Errors p = 5, q = 500, rho = 0.5 Figure 3: Powers and type I errors under sparse setting at p = 5. Red Circles ( ) denote correlation parameter \u03c4 = 0. Green triangles ( ) represent the case \u03c4 = 0.2. Blue squares ( ) indicate \u03c4 = 0.5. Purple crosses ( ) represent the \u03c4 = 0.7. 
Figure 3: Powers and type I errors under the sparse setting at p = 5. Red circles denote correlation parameter τ = 0, green triangles τ = 0.2, blue squares τ = 0.5, and purple crosses τ = 0.7. (Each panel plots power/type I error against n; panels correspond to q = 100, 300, 500 and ρ = 0.3, 0.5.)
Figure 4: Powers and type I errors under the sparse setting at p = 30. Legend and panel layout as in Figure 3.
Figure 5: Powers and type I errors under the dense setting at p = 5. Legend and panel layout as in Figure 3.
Figure 6: Powers and type I errors under the dense setting at p = 30. Legend and panel layout as in Figure 3.
From Figures 3–6, we observe that the type I errors are well controlled at the significance level 5%, which is consistent with the asymptotic properties of $\hat{B}^*$ in Theorem 2. Moreover, the power increases to one as the sample size n increases across all of the settings we consider. Comparing the left panels (ρ = 0.3) to the right panels (ρ = 0.5) in Figures 3–6, we see that the power increases as we increase the signal strength ρ. Comparing the plots in Figures 3–4 to the corresponding plots in Figures 5–6, we see that the powers under the sparse setting (Figures 3–4) are generally higher than those under the dense setting (Figures 5–6). Nonetheless, our proposed method is generally stable under both sparse and dense settings.
In addition, we observe similar results when we increase the covariate dimension p from p = 5 (Figures 3 and 5) to p = 30 (Figures 4 and 6). We refer the reader to the Supplementary Materials for additional numerical results for p = 10. Moreover, we observe similar results when we increase the test length q from q = 100 (top row) to q = 500 (bottom row) in Figures 3–6. In terms of the correlation between X and $U^*$, we observe that while the power converges to one as we increase the sample size, the power decreases as the correlation τ increases.
6 Data Application
We apply our proposed method to analyze the Programme for International Student Assessment (PISA) 2018 data, which can be downloaded from https://www.oecd.org/pisa/data/2018database/. PISA is a worldwide testing program that compares the academic performance of 15-year-old students across many countries (OECD 2019). More than 600,000 students from 79 countries/economies, representing a population of 31 million 15-year-olds, participated in this program. PISA 2018 used a computer-based assessment mode and the assessment lasted two hours for each student, with test items mainly evaluating students' proficiency in the mathematics, reading, and science domains. A total of 930 minutes of test items were used and each student took a different combination of the test items. In addition to the assessment questions, background questionnaires were provided to collect students' information.
In this study, we focus on the PISA 2018 data from Taipei. The observed responses are binary, indicating whether students' responses to the test items are correct, and we use the popular item response theory model with the logit link (i.e., the logistic latent factor model; Reckase 2009). Due to the block design nature of the large-scale assessment, each student was only assigned a subset of the test items, and for the Taipei data, 86% of the response matrix is unobserved. Note that this missingness can be considered as conditionally independent of the responses given the students' characteristics. Our proposed method and inference results naturally accommodate such missing data and can be directly applied. Specifically, to accommodate the incomplete responses, we can modify the joint log-likelihood function in (2) into $L_{\mathrm{obs}}(Y \mid \Gamma, U, B, X) = \sum_{i=1}^{n} \sum_{j \in Q_i} l_{ij}(\gamma_j^\top U_i + \beta_j^\top X_i)$, where $Q_i$ denotes the set of questions for which the responses from student $i$ are observed. In this study, we include gender and 8 school stratum variables as covariates ($p^* = 9$). These variables record whether the school is public, whether it is located in an urban area, etc. After data preprocessing, we have n = 6063 students and q = 194 questions. Following the existing literature (Reckase 2009, Millsap 2012), we take K = 3 to interpret the three latent abilities measured by the math, reading, and science questions.
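A minimal sketch (not the authors' implementation) of the observed-data log-likelihood $L_{\mathrm{obs}}$ described above, for the logistic latent factor model with block-design missingness, is given below; the inputs are hypothetical and missing entries are coded as NaN.

```python
# Minimal sketch (assumptions ours): observed-data log-likelihood L_obs under the
# logistic latent factor model, summing only over the observed entries Q_i.
# Y is an n x q matrix with np.nan for unadministered items.
import numpy as np

def obs_loglik(Y, U, Gamma, X, B):
    """Y: n x q responses in {0, 1, nan}; U: n x K factors; Gamma: q x K loadings;
    X: n x p covariates; B: q x p covariate effects."""
    W = U @ Gamma.T + X @ B.T                    # linear predictors gamma_j'U_i + beta_j'X_i
    observed = ~np.isnan(Y)
    Yo = np.where(observed, Y, 0.0)              # placeholder values for missing cells
    # Bernoulli log-likelihood y*w - log(1 + exp(w)), accumulated over observed cells only.
    ll_cell = Yo * W - np.logaddexp(0.0, W)
    return np.sum(ll_cell[observed])
```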
We apply the proposed method to estimate the effects of the gender and school stratum variables on students' responses. We obtain the estimators of the gender effect for each PISA question and construct the corresponding 95% confidence intervals, which are presented in Figure 7. There are 10 questions highlighted in red because their estimated gender effects are statistically significant after the Bonferroni correction. Among the reading items, there is only one significant item and the corresponding confidence interval is below zero, indicating that this question is biased towards female test-takers, conditioning on the students' latent abilities. Most of the confidence intervals corresponding to the biased items in the math and science sections are above zero, indicating that these questions are biased towards male test-takers. In social science research, it is documented that female students typically score better than male students during reading tests, while male students often outperform female students during math and science tests (Quinn & Cooc 2015, Balart & Oosterveen 2019). Our results indicate that there may exist potential measurement biases resulting in such an observed gender gap in educational testing. Our proposed method offers a useful tool to identify such biased test items, thereby contributing to enhancing testing fairness by providing practitioners with valuable information for item calibration.
Figure 7: Confidence intervals for the effect of the gender covariate on each PISA question using the Taipei data, grouped by the math, reading, and science sections (y-axis: gender effect estimator). Red intervals correspond to questions with significant gender bias after the Bonferroni correction. (For illustration purposes, we omit confidence intervals with upper bounds exceeding 6 or lower bounds below -6 in this figure.)
To further illustrate the estimation results, Table 1 lists the p-values for testing the gender effect for each of the 10 identified significant questions, along with the proportions of female and male test-takers who answered each question correctly. We can see that the signs of the estimated gender effects from our proposed method align with the disparities in the reported proportions between females and males. For example, the estimated gender effect corresponding to the item "CM496Q01S Cash Withdrawal" is positive with a p-value of 2.77 × 10^{-7}, implying that this question is statistically significantly biased towards male test-takers. This is consistent with the observation in Table 1 that 58.44% of male students correctly answered this question, which exceeds the proportion of females, 51.29%.
Table 1: Proportion of full credit for females and males on the significant items of PISA 2018 in Taipei. (+) and (−) denote the items with positively and negatively estimated gender effects, respectively.
Item code    Item Title                   Female (%)   Male (%)   p-value
Mathematics
CM496Q01S    Cash Withdrawal              51.29        58.44      2.77 × 10^{-7} (+)
CM800Q01S    Computer Games               96.63        93.61      < 1 × 10^{-8} (−)
Reading
CR466Q06S    Work Right                   91.91        86.02      1.95 × 10^{-5} (−)
Science
CS608Q01S    Ammonoids                    57.68        68.15      4.65 × 10^{-5} (+)
CS643Q01S    Comparing Light Bulbs        68.57        73.41      1.08 × 10^{-5} (+)
CS643Q02S    Comparing Light Bulbs2       63.00        57.50      4.64 × 10^{-4} (−)
CS657Q03S    Invasive Species             46.00        54.36      8.47 × 10^{-5} (+)
CS527Q04S    Extinction of Dinosours3     36.19        50.18      8.13 × 10^{-5} (+)
CS648Q02S    Habitable Zone               41.69        45.19      1.34 × 10^{-4} (+)
CS607Q01S    Birds and Caterpillars       88.14        91.47      1.99 × 10^{-4} (+)
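The Bonferroni adjustment used for Figure 7 and Table 1 can be applied directly to the per-item p-values; a minimal sketch (ours, with a hypothetical p-value vector) is given below.

```python
# Minimal sketch (assumptions ours): flag items whose gender-effect p-values remain
# significant after a Bonferroni correction over the q tested items.
# `pvals` is a hypothetical length-q array of two-sided p-values, e.g. from the
# Wald statistics sketched earlier.
import numpy as np

def bonferroni_flags(pvals, alpha=0.05):
    pvals = np.asarray(pvals)
    q = len(pvals)
    return pvals < alpha / q          # True for items significant after correction

# With q = 194 items and alpha = 0.05, the adjusted threshold is about 2.58e-4.
```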
Besides gender effects, we estimate the effects of the school stratum variables on the students' responses and present the point and interval estimation results in the left panel of Figure 8. All the detected biased questions are from the math and science sections, with 6 questions showing significant effects of whether the student attends a public school and 5 questions showing significant effects of whether the school is in a rural area. To further investigate the importance of controlling for the latent ability factors, we compare the results from our proposed method, which includes the latent factors, to the results from directly regressing the responses on the covariates without latent factors. From the right panel of Figure 8, we can see that without conditioning on the latent factors, an excessive number of items are detected for the covariate indicating whether the school is public or private. On the other hand, no biased items are detected if we only apply generalized linear regression to estimate the effect of the covariate indicating whether the school is in a rural area.
Figure 8: Confidence intervals for the effect of the school stratum covariates on each PISA question, grouped by the math, reading, and science sections. The four panels show the public-school effect and the rural-region effect, each estimated with and without the latent variables. Red intervals correspond to questions with significant school stratum bias after the Bonferroni correction.
7 Discussion
In this work, we study the covariate-adjusted generalized factor model that has wide interdisciplinary applications such as educational assessments and psychological measurements. In particular, new identifiability issues arise due to the incorporation of covariates in the model setup. To address these issues and identify the model parameters, we propose novel and interpretable conditions, which are crucial for developing the estimation approach and inference results. With model identifiability guaranteed, we propose a computationally efficient joint-likelihood-based estimation method for the model parameters. Theoretically, we obtain estimation consistency and asymptotic normality not only for the covariate effects but also for the latent factors and factor loadings.
There are several future directions motivated by the proposed method. In this manuscript, we focus on the case in which p grows at a slower rate than the number of subjects n and the number of items q, a common setting in educational assessments. It is interesting to further develop estimation and inference results under the high-dimensional setting in which p is larger than n and q. Moreover, in this manuscript, we assume that the dimension of the latent factors K is fixed and known. One possible generalization is to allow K to grow with n and q. Intuitively, an increasing latent dimension K makes the identifiability and inference issues more challenging due to the increasing degrees of freedom of the transformation matrix.
With the theoretical results in this work, another interesting related problem is to further develop simultaneous inference on group-wise covariate coefficients, which we leave for future investigation.", "introduction": "
The nonlinear model setting poses great challenges for estimation and statistical inference. Despite recent progress in the factor analysis literature, most existing studies focus on estimation and inference under linear factor models (Stock & Watson 2002, Bai & Li 2012, Fan et al. 2013) and covariate-adjusted linear factor models (Leek & Storey 2008, Wang et al. 2017, Gerard & Stephens 2020, Bing et al. 2024). The techniques employed in linear factor model settings are not applicable here due to the nonlinearity inherent in the general models under consideration. Recently, several researchers have also investigated parameter estimation and inference for generalized linear factor models (Chen et al. 2019, Wang 2022, Chen et al. 2023b). However, they either focus only on the overall consistency properties of the estimation or do not incorporate covariates into the models. In a concurrent work, motivated by applications in single-cell omics, Du et al. (2023) considered a generalized linear factor model with covariates and studied its inference theory, where the latent factors are used as surrogate variables to control for unmeasured confounding. However, they imposed relatively stringent assumptions on the sparsity of covariate effects and the dimension of covariates, and their theoretical results also rely on data-splitting. Moreover, Du et al. (2023) focused only on statistical inference on the covariate effects, while that on factors and loadings was unexplored, which is often of great interest in educational assessments.
Establishing inference results for covariate effects and latent factors simultaneously under nonlinear models remains an open and challenging problem, due to the identifiability issue arising from the incorporation of covariates and the nonlinearity of the considered general models. To overcome these issues, we develop a novel framework for performing statistical inference on all model parameters and latent factors under a general family of covariate-adjusted generalized factor models. Specifically, we propose a set of interpretable and practical identifiability conditions for identifying the model parameters, and further incorporate these conditions into the development of a computationally efficient likelihood-based estimation method. Under these identifiability conditions, we develop new techniques to address the aforementioned theoretical challenges and obtain estimation consistency and asymptotic normality for the covariate effects under a practical yet challenging asymptotic regime. Furthermore, building upon these results, we establish estimation consistency and provide valid inference results for the factor loadings and latent factors that are often of scientific interest, advancing our theoretical understanding of nonlinear latent factor models.
The rest of the paper is organized as follows. In Section 2, we introduce the model setup of the covariate-adjusted generalized factor model. Section 3 discusses the associated identifiability issues and further presents the proposed identifiability conditions and estimation method. Section 4 establishes the theoretical properties for not only the covariate effects but also the latent factors and factor loadings. In Section 5, we perform extensive numerical studies to illustrate the performance of the proposed estimation method and the validity of the theoretical results. In Section 6, we analyze an educational testing dataset from the Programme for International Student Assessment (PISA) and identify test items that may lead to potential bias among different test-takers. We conclude by providing some potential future directions in Section 7.
Notation: For any integer $N$, let $[N] = \{1, \ldots, N\}$. For any set $S$, let $\#S$ be its cardinality. For any vector $r = (r_1, \ldots, r_l)^\top$, let $\|r\|_0 = \#(\{j : r_j \neq 0\})$, $\|r\|_\infty = \max_{j=1,\ldots,l} |r_j|$, and $\|r\|_q = (\sum_{j=1}^{l} |r_j|^q)^{1/q}$ for $q \geq 1$. We define $1^{(y)}_x$ to be the $y$-dimensional vector with $x$-th entry equal to 1 and all other entries equal to 0. For any symmetric matrix $M$, let $\lambda_{\min}(M)$ and $\lambda_{\max}(M)$ be the smallest and largest eigenvalues of $M$. For any matrix $A = (a_{ij})_{n \times l}$, let $\|A\|_{\infty,1} = \max_{j=1,\ldots,l} \sum_{i=1}^{n} |a_{ij}|$ be the maximum absolute column sum, $\|A\|_{1,\infty} = \max_{i=1,\ldots,n} \sum_{j=1}^{l} |a_{ij}|$ be the maximum absolute row sum, $\|A\|_{\max} = \max_{i,j} |a_{ij}|$ be the maximum absolute matrix entry, $\|A\|_F = (\sum_{i=1}^{n} \sum_{j=1}^{l} |a_{ij}|^2)^{1/2}$ be the Frobenius norm of $A$, and $\|A\| = \sqrt{\lambda_{\max}(A^\top A)}$ be the spectral norm of $A$. Let $\|\cdot\|_{\varphi_1}$ be the sub-exponential norm. Define the notation $A^{v} = \mathrm{vec}(A) \in \mathbb{R}^{nl}$ to indicate the vectorized matrix $A \in \mathbb{R}^{n \times l}$. Finally, we denote $\otimes$ as the Kronecker product." } ], "Chengyu Cui": [ { "url": "http://arxiv.org/abs/2401.13090v1", "title": "Variational Estimation for Multidimensional Generalized Partial Credit Model", "abstract": "Multidimensional item response theory (MIRT) models have generated increasing\ninterest in the psychometrics literature.
Efficient approaches for estimating\nMIRT models with dichotomous responses have been developed, but constructing an\nequally efficient and robust algorithm for polytomous models has received\nlimited attention. To address this gap, this paper presents a novel Gaussian\nvariational estimation algorithm for the multidimensional generalized partial\ncredit model (MGPCM). The proposed algorithm demonstrates both fast and\naccurate performance, as illustrated through a series of simulation studies and\ntwo real data analyses.", "authors": "Chengyu Cui, Chun Wang, Gongjun Xu", "published": "2024-01-23", "updated": "2024-01-23", "primary_cat": "stat.ME", "cats": [ "stat.ME" ], "main_content": "The generalized partial credit model (GPCM) (Muraki, 1992; Embretson and Reise, 2013), also known as the compensatory multidimensional two-parameter partial credit model (M-2PPC) (Yao and Schwarz, 2006), is a popular IRT model for polytomous responses. It allows for the assessment of partial scores for constructed response items and intermediate steps that students have accomplished on the path toward solving the items completely. Suppose we have N examinees and J test items, with the random variable $Y_{ij}$ denoting the partial credit of person $i$'s response to item $j$. For each item $j$, $a_j$ is a discrimination parameter that reflects the strength of association between the latent trait and item responses, and $\beta_{jk}$ is a threshold parameter that separates two adjacent response categories. The partial credit model (Masters, 1982) is a special case of the GPCM where the discrimination parameter is fixed to be the same across different items. The item response function of the GPCM, which characterizes the probability of a specific response, is given by
$\Pr(Y_{ij} = k \mid \theta_i, a_j, \beta_{jk}) = \dfrac{\exp\{\sum_{r=0}^{k} a_j(\theta_i - \beta_{jr})\}}{\sum_{v=0}^{K_j-1} \exp\{\sum_{r=0}^{v} a_j(\theta_i - \beta_{jr})\}}$,   (2.1)
where $k = 0, 1, \cdots, K_j - 1$ and $K_j$ is the number of different partial credit scores for the $j$th item. The multidimensional generalized partial credit model (MGPCM) is a natural multidimensional extension of the GPCM. The main idea is to replace the uni-dimensional latent ability with a D-dimensional vector, where each dimension represents a facet of the multidimensional construct (i.e., science and literacy). Similarly, the discrimination parameters $a_j$ also become D-dimensional vectors to reflect the discrimination power of item $j$ with respect to each facet (i.e., dimension) of the multidimensional construct $\theta$. The threshold parameters stay the same as in uni-dimensional models. Equation (2.1) is therefore updated as follows:
$\Pr(Y_{ij} = k \mid \theta_i, a_j, \beta_{jk}) = \dfrac{\exp\{\sum_{r=0}^{k} (a_j'\theta_i - \beta_{jr})\}}{\sum_{v=0}^{K_j-1} \exp\{\sum_{r=0}^{v} (a_j'\theta_i - \beta_{jr})\}}$.   (2.2)
Here $a_j'\theta_i$ indicates the inner product of $a_j$ and $\theta_i$, i.e., $a_j'\theta_i = \sum_{d=1}^{D} a_{jd}\theta_{id}$, where $a_{jd}$ and $\theta_{id}$ are the $d$th components of $a_j$ and $\theta_i$, respectively. With a slight re-parameterization, we have the following item response function for the MGPCM, which we will use throughout the paper:
$\Pr(Y_{ij} = k \mid \theta_i, a_j, b_{jk}) = \dfrac{\exp(k\,a_j'\theta_i - b_{jk})}{\sum_{v=0}^{K_j-1} \exp(v\,a_j'\theta_i - b_{jv})}$.   (2.3)
In Equation (2.3), $b_{jk}$ replaces $\sum_{r=0}^{k} \beta_{jr}$ in Equation (2.2) for each $k = 0, 1, \cdots, K_j - 1$. Note that for model identification, we can only have $K_j - 1$ estimable threshold parameters for each item $j$, and hence we fix $b_{j0} = 0$.
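As a small illustration of the item response function (2.3), the following sketch (not the authors' code) computes the category probabilities for a single item and respondent; the numerical inputs are arbitrary.

```python
# Minimal sketch (assumptions ours): category probabilities under the MGPCM item
# response function (2.3), Pr(Y_ij = k) proportional to exp(k * a_j'theta_i - b_jk),
# with b_j0 fixed at 0.
import numpy as np

def mgpcm_probs(theta_i, a_j, b_j):
    """theta_i: (D,) latent traits; a_j: (D,) discriminations;
    b_j: (K_j,) thresholds with b_j[0] = 0. Returns a (K_j,) probability vector."""
    K_j = len(b_j)
    k = np.arange(K_j)
    logits = k * (a_j @ theta_i) - b_j            # k * a_j'theta_i - b_jk
    logits -= logits.max()                        # subtract max for numerical stability
    p = np.exp(logits)
    return p / p.sum()

# Example: D = 2 traits and K_j = 4 score categories (arbitrary values).
probs = mgpcm_probs(np.array([0.3, -0.5]), np.array([1.2, 0.8]),
                    np.array([0.0, -0.2, 0.4, 1.1]))
```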
5 3 Gaussian Variational Approximation 3.1 Derivation of algorithm In this section, we describe the derivation of the GVEM algorithm. In the following, we denote the collection of item parameters by Mp being the total number of parameters to be estimated, i.e. Mp = {aj \u2208RD, bjk \u2208R : j = 1, \u00b7 \u00b7 \u00b7 , J, k = 1, \u00b7 \u00b7 \u00b7 , Kj \u22121} for MGPCM. As we discussed above, the parameter bj0 is fixed as 0 for all j. To be consistent with the common convention, we assume the latent vector \u03b8 follows a multivariate normal distribution of 0 mean and covariance \u03a3\u03b8 with density function denoted by \u03d5(\u00b7). The marginal probability of response vector Yi = (Yi1, \u00b7 \u00b7 \u00b7 , YiJ)\u2032 for person i is defined as follows Pr(Yi|Mp) = Z \u03b8i J Y j=1 Kj\u22121 Y k=0 I(Yij=k) Pr(Yij = k|\u03b8i, Mp)\u03d5(\u03b8i)d\u03b8i = Z \u03b8i J Y j=1 Kj\u22121 Y k=0 I(Yij=k) exp(ka\u2032 j\u03b8i \u2212bjk) PKj\u22121 v=0 exp(va\u2032 j\u03b8i \u2212bjv) \u03d5(\u03b8)d\u03b8i, where I(Yij=k) is an indicator function equal to 1 if Yij = k and zero otherwise, and therefore the marginal log-likelihood function for all responses from the examinees is given by l(Mp|Y ) = N X i=1 log Pr(Yi|Mp) = N X i=1 log Z \u03b8i J Y j=1 Pr(Yij|\u03b8i, Mp)\u03d5(\u03b8i)d\u03b8i, (3.1) where Y = (Y1, \u00b7 \u00b7 \u00b7 , YN)\u2032 is the N \u00d7 J matrix of realized categorical responses. Following the variational estimation literature (Blei et al., 2017; Cho et al., 2021), we first derive a variational lower bound for MGPCM. We define KL{p(\u00b7)\u2225q(\u00b7)} the Kullback-Leibler divergence of probability distribution p and q. For an arbitrary probability density function q(\u00b7), the marginal 6 log-likelihood in Equation (3.1) has the following lower bound (Blei et al., 2017): l(Mp|Y ) = N X i=1 Z \u03b8i qi(\u03b8i)d\u03b8i log Pr(Yi|Mp) = N X i=1 Z \u03b8i log hP(Yi, \u03b8i|Mp)qi(\u03b8i) P(\u03b8i|Yi, Mp)qi(\u03b8i) i qi(\u03b8i)d\u03b8i = N X i=1 \u0014 Z \u03b8i \u0002 log P(Yi, \u03b8i|Mp) \u0003 qi(\u03b8i)d\u03b8i \u2212 Z \u03b8i \u0002 log q(\u03b8i) \u0003 qi(\u03b8i)d\u03b8i + KL n qi(\u03b8i) \r \rP(\u03b8i|Yi, Mp) o\u0015 \u2265 N X i=1 Z \u03b8i \u0002 log P(Yi, \u03b8i|Mp) \u0003 qi(\u03b8i)d\u03b8i \u2212 N X i=1 Z \u03b8i \u0002 log q(\u03b8i) \u0003 qi(\u03b8i)d\u03b8i. (3.2) The last inequality holds if and only if the KL divergence between the variational distribution qi(\u00b7) and the posterior distribution P(\u00b7|Yi, Mp) is 0, which indicates qi(\u03b8i) = P(\u03b8i|Yi, Mp). In the literature of variational inference, the right-hand side of Equation (3.2) is defined to be the evidence lower bound (Blei et al., 2017), which is equivalent, up to a constant with respect to q(\u00b7), to the KL divergence between the assumed variational distribution q(\u00b7) and the conditional density of the latent variables given the observations. In the following, we construct an approximation for the marginal maximum likelihood estimator from the evidence lower bound. The primary objective is to identify a suitable distribution that can approximate the posterior distribution P(\u03b8i|Yi, Mp). Motivated by this insight, we propose to construct an EM-type algorithm to compute the marginal maximum likelihood estimator. 
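For reference, the evidence lower bound derived above can be written compactly as follows; this is a restatement of the inequality in (3.2), with equality if and only if $q_i(\theta_i) = P(\theta_i \mid Y_i, M_p)$ for every $i$.

```latex
% Evidence lower bound (ELBO) for the marginal log-likelihood, cf. (3.2)
\begin{aligned}
l(M_p \mid Y)
  &= \sum_{i=1}^{N} \log \Pr(Y_i \mid M_p) \\
  &= \sum_{i=1}^{N} \Big\{ \mathbb{E}_{q_i}\big[\log P(Y_i, \theta_i \mid M_p)\big]
       - \mathbb{E}_{q_i}\big[\log q_i(\theta_i)\big]
       + \mathrm{KL}\big(q_i(\theta_i) \,\|\, P(\theta_i \mid Y_i, M_p)\big) \Big\} \\
  &\ge \sum_{i=1}^{N} \Big\{ \mathbb{E}_{q_i}\big[\log P(Y_i, \theta_i \mid M_p)\big]
       - \mathbb{E}_{q_i}\big[\log q_i(\theta_i)\big] \Big\}.
\end{aligned}
```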
In the E-step, we evaluate the expectation of the complete data log-likelihood, and the expectation is taken with respect to the latent variables \u03b8i under its variational probability density function qi(\u00b7): N X i=1 Z \u03b8i log P(Yi, \u03b8i|Mp)qi(\u03b8i)d\u03b8i, Here the density function qi(\u03b8i) is chosen to minimize the KL divergence KL{qi(\u03b8i\u2225P(\u03b8i|Yi, Mp)} as the best approximation to the posterior distribution. The second term in the evidence lower bound is left out since it is irrelevant to item parameters. However, a problem with respect to minimizing 7 the KL divergence is that it is hard to find an explicit formula for the posterior distribution of \u03b8i with respect to the previous estimated item parameters \u02c6 Mp, as it involves computing Ddimensional integrals. Numerical methods, such as the Gauss\u2013Hermite approximation, Monte Carlo expectation-maximization, and stochastic expectation-maximization, are often used to provide fast approximation. Herein we adopt the Gaussian variational inference method. It is widely accepted that the posterior distribution of the latent ability P(\u03b8i|Yi, Mp) can be approximated by a Gaussian distribution (Chang and Stout, 1993; Wang, 2015), and hence we aim to find an optimal qi(\u03b8i) in the family of Gaussian distribution while minimizing the KL divergence between qi(\u03b8i) and P(\u03b8i|Yi, Mp). Since the posterior distribution can be expressed as P(\u03b8i|Yi, Mp) = P(Yi, \u03b8i|Mp) P(Yi|Mp) , we only need to evaluate P(Yi, \u03b8i|Mp) to find a proper qi(\u00b7) as KL{qi(\u03b8i)||P(\u03b8i|Yi, Mp)} = KL{qi(\u03b8i)||P(\u03b8i, Yi|Mp)} + C. Under the setting of MGPCM, the logarithm of joint distribution function of \u03b8i and Yi is log P(Yi, \u03b8i|Mp) = log P(Yi|\u03b8i, Mp) + log \u03d5(\u03b8i) = J X j=1 ( Kj\u22121 X k=0 I(Yij=k) h ka\u2032 j\u03b8i \u2212bjk \u2212log( Kj\u22121 X v=0 exp(va\u2032 j\u03b8i \u2212bjv)) i) + log \u03d5(\u03b8i). (3.3) The nonlinear softmax function, defined by fv(x) = exp(xv)/[Pn k=1 exp(xk)] for an n-dimensional vector x, is the main cause of the intractability of integral. To overcome this problem, a variational lower bound based on the approximation to the softmax function is proposed and by augmenting Equation (3.3) with variational parameters, the evidence lower bound can be computed explicitly without resorting to numeric integration. Among the many approximations for the softmax function, we adopt a One-Versus-Each bound (Tisais, 2016), which well approximates the softmax function and captures the model features. We start with the following inequality: fv(x) = exv Pn k=1 exk \u2265 n Y k=1,k\u0338=v exv exv + exk = 2 n Y k=1 exv exv + exk . (3.4) 8 Denote by kij the realized partial credit score for the response of the ith person for the jth item. Then by applying the bound (3.4) to (3.3), we have log P(Yi, \u03b8i|Mp) = log \u03d5(\u03b8i) + J X j=1 h Kj\u22121 X k=0 I(Yij=k) log exp(xijk) PKj\u22121 v=0 exp(xijv) i \u2265log \u03d5(\u03b8i) + J X j=1 ( Kj\u22121 X k=0 I(Yij=k) \u0002 log 2 + Kj\u22121 X v=0 log exp(xijk) exp(xijv) + exp(xijk) i) = log \u03d5(\u03b8i) + J X j=1 ( log 2 \u2212 Kj\u22121 X k=0 log \u0002 1 + exp(xijk \u2212xijkij) \u0003 ) . Here we denote xijk = ka\u2032 j\u03b8i\u2212bjk for short. We wish to draw attention to our selection of the \u201cOneVersus-Each bound\u201d. 
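As a quick numerical sanity check of the bound (3.4) (ours, with arbitrary random inputs): the $k = v$ factor of the all-$k$ product equals $1/2$, which is exactly where the leading factor of 2 comes from.

```python
# Minimal sketch (assumptions ours): verify the One-Versus-Each bound in (3.4),
# softmax(x)_v >= prod_{k != v} sigmoid(x_v - x_k), on random inputs.
import numpy as np

rng = np.random.default_rng(0)
for _ in range(5):
    x = rng.normal(size=6)
    v = rng.integers(len(x))
    softmax_v = np.exp(x[v]) / np.exp(x).sum()
    # Product over k != v of e^{x_v}/(e^{x_v} + e^{x_k}) = sigmoid(x_v - x_k).
    bound = np.prod([1.0 / (1.0 + np.exp(x[k] - x[v]))
                     for k in range(len(x)) if k != v])
    assert softmax_v >= bound - 1e-12
    print(round(softmax_v, 4), ">=", round(bound, 4))
```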
It can be established for (3.4) that a strict inequivalence holds true in all cases, except for the exceptional circumstance xv/xk \u2192\u221efor all k \u0338= v with at most one exception. Additionally, the approximation is the closest when xv is among the largest of all xk. The idea of maximum likelihood estimation indicates that, when partial credit score Yij is recorded as kij, kija\u2032 j\u03b8i \u2212bjkij is the most likely to be the largest among all va\u2032 j\u03b8i \u2212bjv for v = 0, 1, \u00b7 \u00b7 \u00b7 , Kj \u22121. Therefore the feature of the One-Versus-Each bound does fit well as an approximation to the marginal maximum likelihood estimator. The logistic sigmoid function (3.4) can be further approximated by a local variational approach: log P(Yi, \u03b8i|Mp) \u2265log \u03d5(\u03b8i) + J X j=1 ( log 2 \u2212 Kj\u22121 X k=0 \u03b7(\u03beijk) \u0002 (xijk \u2212xijkij)2 \u2212\u03beijk 2\u0003 \u2212 Kj\u22121 X k=0 1 2(xijk \u2212xijkij \u2212\u03beijk) \u2212 Kj\u22121 X k=0 log(1 + e\u03beijk) ) , where \u03be = {\u03beijk}i,j,k are called variational parameters, which will be iteratively updated together with item parameters in the M-step. Here the function \u03b7(x) is defined as (ex \u22121)/[4x(ex + 1)] (Jaakkola and Jordan, 2000). We use this local variational approximation for a suitable expectation of log P(Yi, \u03b8i|Mp) (i.e., given below in Equation (3.5)) that can be written as a quadratic form with respect to \u03b8i. This will facilitate the selection of qi(\u00b7) in the family of Gaussian distributions. 9 Next we substitute xijk by ka\u2032 j\u03b8i \u2212bjk and write the above lower bound of joint distribution function as log P(Yi, \u03b8i|Mp) \u2265 J X j=1 ( \u2212 Kj\u22121 X k=0 h \u03b7(\u03beijk)(k \u2212kij)2\u03b8\u2032 iaja\u2032 j\u03b8i \u22122(k \u2212kij)\u03b7(\u03beijk)(bjk \u2212bjkij)a\u2032 j\u03b8i + 1 2(k \u2212kij)a\u2032 j\u03b8i + \u03b7(\u03beijk)(bjk \u2212bjkij)2 \u22121 2(bjk \u2212bjkij) \u2212\u03b7(\u03beijk)\u03beijk 2 \u22121 2\u03beijk + log(1 + e\u03beijk) i + log 2 ) + log \u03d5(\u03b8i), and therefore the expectation of the log-likelihood, which needs computing in the E-step, takes the following form: E(Mp, \u03be) := Z \u03b8i log P(Yi|\u03b8i, Mp)qi(\u03b8i)d\u03b8i + Z \u03b8i log \u03d5(\u03b8i)qi(\u03b8i)d\u03b8i \u2265 Z \u03b8i J X j=1 ( log 2 \u2212 Kj\u22121 X k=0 h \u03b7(\u03beijk)(k \u2212kij)2\u03b8\u2032 iaja\u2032 j\u03b8i \u22122(k \u2212kij)\u03b7(\u03beijk)(bjk \u2212bjkij)a\u2032 j\u03b8i +1 2(k \u2212kij)a\u2032 j\u03b8i + \u03b7(\u03beijk)(bjk \u2212bjkij)2 \u22121 2(bjk \u2212bjkij) \u2212\u03b7(\u03beijk)\u03beijk 2 \u22121 2\u03beijk + log(1 + e\u03beijk) i) qi(\u03b8i)d\u03b8i + Z \u03b8i log \u03d5(\u03b8i)qi(\u03b8i)d\u03b8i. (3.5) For a minimized KL divergence, qi(\u03b8i) is selected as follows: log qi(\u03b8i) \u221d J X j=1 Kj\u22121 X k=0 n (k\u2212kij) \u0002 2\u03b7(\u03beijk)(bjk\u2212bjkij)\u22120.5 \u0003 a\u2032 j\u03b8i\u2212\u03b7(\u03beijk)(k\u2212kij)2\u03b8\u2032 iaja\u2032 j\u03b8i o \u2212\u03b8\u2032 i\u03a3\u22121 \u03b8 \u03b8i 2 . As the choice of qi(\u00b7) has been confined in the Gaussian family, it suffices to give the update for the mean and covariance matrix: \u00b5i = \u03a3i \u00d7 J X j=1 Kj\u22121 X k=0 (k \u2212kij) h 2\u03b7(\u03beijk)(bjk \u2212bjkij) \u22121 2 i a\u2032 j; (3.6) \u03a3\u22121 i = \u03a3\u22121 \u03b8 + 2 J X j=1 Kj\u22121 X k=0 \u03b7(\u03beijk)(k \u2212kij)2aja\u2032 j. 
(3.7) 10 In each iteration, the item parameters and variational parameters \u03beijk, aj, bjk are obtained from the previous M-step, and taken as the initial value if it is the first iteration. For the M-step, the item parameters are chosen to maximize the above expectation of the lower bound obtained by plugging Equation (3.6) and (3.7) into Equation (3.5): E(Mp, \u03be) \u2265 N X i=1 J X j=1 log 2 + N X i=1 J X j=1 Kj\u22121 X k=0 ( \u2212\u03b7(\u03beijk)(k \u2212kij)2a\u2032 j h \u03a3(t) i + (\u00b5(t) i )(\u00b5(t) i )\u2032i aj + (k \u2212kij)[2\u03b7(\u03beijk)(bjk \u2212bjkij) \u22121 2]a\u2032 j\u00b5(t) i \u2212\u03b7(\u03beijk)(bjk \u2212bjkij)2 + 1 2(bjk \u2212bjkij) + \u03b7(\u03beijk)\u03beijk 2 + 1 2\u03beijk \u2212log(1 + e\u03beijk) ) \u2212N 2 log |\u03a3(t) \u03b8 | (3.8) \u2212 N X i=1 1 2Tr{(\u03a3(t) \u03b8 )\u22121[\u03a3(t) i + (\u00b5(t) i )(\u00b5(t) i )\u2032i := E(Mp, \u03be). (3.9) Through maximizing the lower bound E(Mp, \u03be), we can derive a new set of item parameters that could potentially maximize the left-hand side. Updating the variational parameters helps to prevent the iteration from leading to a smaller value of the target expectation by shrinking the inequality too much when the right-hand side is maximized. The efficiency of this majorizationmaximization approach depends on the goodness of fit of the adopted softmax bound. To maximize the lower bound on the expectation concerning the item parameters Mp and variational parameters \u03be, we employ a Gauss-Seidel scheme to handle the nonlinear terms regarding the parameters. Each iterative update uses the most recently updated copies of the parameters. The update are given as follows. For each j = 1, \u00b7 \u00b7 \u00b7 , J, aj = 1 2 h N X i=1 Kj\u22121 X k=0 \u03b7(\u03beijk)(k \u2212kij)2(\u03a3i + \u00b5i\u00b5\u2032 i) i\u22121n N X i=1 Kj\u22121 X k=0 (k \u2212kij) h 2\u03b7(\u03beijk)(bjk \u2212bjkij) \u22121 2 i \u00b5i o , (3.10) 11 with \u03beijk, bjk from the last iteration or initialization and \u00b5i, \u03a3i from E-step. Next for the threshold parameters, for each j = 1, \u00b7 \u00b7 \u00b7 , J, k = 1, \u00b7 \u00b7 \u00b7 Kj \u22121, bjk = PN i=1[B1(i, j, k)I(k\u0338=kij) + I(k=kij) PKj\u22121 v=0,v\u0338=k B2(i, j, v, k)] 2 PN i=1(\u03b7(\u03beijk)I(k\u0338=kij) + I(k=kij) PKj\u22121 v=0,v\u0338=k \u03b7(\u03beijv)) , (3.11) where B1(i, j, k) = 2\u03b7(\u03beijk)(k \u2212kij)a\u2032 j\u00b5i + 0.5 + 2\u03b7(\u03beijk)bjkij; B2(i, j, v, k) = \u22122\u03b7(\u03beijk)(v \u2212k)a\u2032 j\u00b5i \u22120.5 + 2\u03b7(\u03beijv)bjv. Here aj are from the previous step and \u00b5i, \u03a3i are from the previous E-step. Finally for the variational parameters \u03beijk, for each i = 1, \u00b7 \u00b7 \u00b7 , N, j = 1, \u00b7 \u00b7 \u00b7 , J, k = 0, \u00b7 \u00b7 \u00b7 Kj \u22121, we have update as \u03be2 ijk = [(k \u2212kij)a\u2032 j\u00b5i \u2212(bjk \u2212bjkij)]2 + (k \u2212kij)2a\u2032 j\u03a3iaj. (3.12) with all other parameters obtained from the latest updates. In the exploratory analysis where we do not have any prior information on the item factor loadings, the assumed covariance \u03a3\u03b8 is fixed as ID and later proper rotations (Browne, 2001) are imposed to allow the factors to be correlated and thus allow for analysis of latent structures. But for confirmatory factor analysis, we update the covariance as \u03a3\u03b8 = 1 N N X i=1 (\u03a3i + \u00b5i\u00b5\u2032 i). (3.13) and scale its diagonal entries to be 1. 
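The following is a minimal sketch (assumptions and vectorization choices ours, not the authors' implementation) of one pGVEM-style iteration that combines the E-step updates (3.6)-(3.7) with the M-step updates (3.10) and (3.12). For brevity we assume a common number of categories K across items and omit the threshold update (3.11), the covariance update (3.13), and any convergence check.

```python
# Minimal sketch (assumptions ours): one pGVEM-style iteration for the MGPCM.
# Y holds observed categories k_ij in {0, ..., K-1}; A, b, xi are current values.
import numpy as np

def eta(x):
    # Jaakkola-Jordan function (e^x - 1) / (4x(e^x + 1)), with limit 1/8 at x = 0.
    x = np.asarray(x, dtype=float)
    out = np.full_like(x, 0.125)
    nz = np.abs(x) > 1e-10
    out[nz] = (np.exp(x[nz]) - 1.0) / (4.0 * x[nz] * (np.exp(x[nz]) + 1.0))
    return out

def gvem_iteration(Y, A, b, xi, Sigma_theta):
    """Y: (N, J) integer scores; A: (J, D) loadings; b: (J, K) thresholds (b[:, 0] = 0);
    xi: (N, J, K) variational parameters; Sigma_theta: (D, D) latent covariance."""
    N, J = Y.shape
    K = b.shape[1]
    D = A.shape[1]
    ks = np.arange(K)

    # E-step: variational mean mu_i and covariance Sigma_i, as in (3.6)-(3.7).
    mu = np.zeros((N, D))
    Sig = np.zeros((N, D, D))
    prec0 = np.linalg.inv(Sigma_theta)
    for i in range(N):
        kij = Y[i]
        diff_k = ks[None, :] - kij[:, None]          # (J, K): k - k_ij
        diff_b = b - b[np.arange(J), kij][:, None]   # (J, K): b_jk - b_jk_ij
        w = eta(xi[i])                               # (J, K)
        prec = prec0 + 2.0 * np.einsum('jk,jd,je->de', w * diff_k**2, A, A)
        Sig[i] = np.linalg.inv(prec)
        rhs = np.einsum('jk,jd->d', diff_k * (2.0 * w * diff_b - 0.5), A)
        mu[i] = Sig[i] @ rhs

    # M-step: update each discrimination vector a_j as in (3.10).
    for j in range(J):
        kij = Y[:, j]
        diff_k = ks[None, :] - kij[:, None]          # (N, K)
        diff_b = b[j][None, :] - b[j, kij][:, None]  # (N, K)
        w = eta(xi[:, j, :])
        M = np.einsum('ik,ide->de', w * diff_k**2,
                      Sig + mu[:, :, None] * mu[:, None, :])
        rhs = np.einsum('ik,id->d', diff_k * (2.0 * w * diff_b - 0.5), mu)
        A[j] = 0.5 * np.linalg.solve(M, rhs)

    # Update the variational parameters xi as in (3.12).
    for i in range(N):
        kij = Y[i]
        diff_k = ks[None, :] - kij[:, None]
        diff_b = b - b[np.arange(J), kij][:, None]
        am = A @ mu[i]                                   # (J,): a_j' mu_i
        quad = np.einsum('jd,de,je->j', A, Sig[i], A)    # a_j' Sigma_i a_j
        xi[i] = np.sqrt((diff_k * am[:, None] - diff_b)**2 + diff_k**2 * quad[:, None])
    return mu, Sig, A, xi
```

In a full implementation, this iteration would alternate with the threshold update (3.11) and, in the confirmatory case, the covariance update (3.13), until a stopping rule such as (4.1) below is met.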
3.2 Standard Error Estimation Computing standard errors (SEs) of the item parameter estimates is crucial for various applications, such as multidimensional computerized adaptive testing, item parameter calibration as well 12 as differential item functioning. Challenges arise in estimating SEs when dealing with a highdimensional latent domain and polytomous responses as in MGPCM. The commonly used method for estimating SEs is based on the approximated Fisher\u2019s information matrix. However, taking the inverse of a prohibitively large information matrix (due to high dimensions and long test length) can be unstable when the sample size of the examinees is not large enough. An alternative numerical approximation using Gaussian quadrature in EM estimation has been proposed (Cagnone and Monari, 2013), but it is computationally expensive and sensitive to dimensionality. The supplemented expectation maximization (SEM) algorithm has also been developed in the IRT literature (e.g., Tian et al., 2013). However, in pilot simulations we found that none of these methods is capable of providing stable estimations, especially when the dimension D and the number of categories K is large. Therefore, to estimate the standard errors of item parameters under the pGVEM framework for MGPCM, we adopt a bootstrap approach that uses a resampling procedure. Bootstrap is an efficient alternative when the standard SEs estimation is mathematically intractable (Efron and Tibshirani, 1986). The resampling procedure avoids the direct computation of SEs. The bootstrap procedure in the pGVEM framework is implemented as follows. First we simulated B bootstrap datasets based on \u02c6 Mp = {\u02c6 aj,\u02c6 bj}j estimated from the pGVEM scheme. Then we apply the pGVEM method to estimate the item parameters for each of the bootstrap datasets, denoted by \u02c6 M (1) p , \u00b7 \u00b7 \u00b7 , \u02c6 M (B) p . The standard errors are estimated by c SEv = v u u t 1 B \u22121 B X i=1 (\u02c6 v(i) \u2212\u02c6 v)2, where v denotes item parameter ajr or bjk and \u02c6 v(i) is its ith bootstrap estimate. Given that our objective is to estimate SEs rather than the distributions of the estimators, in our study, we take the number of bootstrap samples to be 50, which generates stable results numerically. 13 3.3 Determining Latent Dimension In this section, we discuss how to select the appropriate number of latent dimensions. We propose to use the information criterion such as AIC or BIC to compare the model fit with different dimensions. In the MGPCM, direct computation of the residual sum of squares is costly. So we adopt the modified version of the information criterion where the expectation are replaced by its lower bound given by (3.9): AIC\u2217=2(\u2225\u02c6 A\u22250 + \u2225\u02c6 B\u22250 + \u2225nondiag( \u02c6 \u03a3\u03b8)\u22250/2) \u22122E( \u02c6 Mp, \u02c6 \u03be), (3.14) BIC\u2217= log(N)(\u2225\u02c6 A\u22250 + \u2225\u02c6 B\u22250 + \u2225nondiag( \u02c6 \u03a3\u03b8)\u22250/2) \u22122E( \u02c6 Mp, \u02c6 \u03be), (3.15) where \u02c6 A = (\u02c6 a1, \u00b7 \u00b7 \u00b7 , \u02c6 aJ) and \u02c6 B = (\u02c6 b1, \u00b7 \u00b7 \u00b7 , \u02c6 bJ) are assembled matrices of the discrimination and threshold parameters, respectively, nondiag( \u02c6 \u03a3\u03b8) denotes the nondiagonal entries of \u02c6 \u03a3\u03b8, and the zero norm || \u00b7 ||0 counts the number of nonzero entries of the assembled matrix. 
Here note that the term \u2225\u02c6 B\u22250 does not increase with dimension D and it denotes the number of all effective threshold parameters. In addition, since the covariance matrix \u02c6 \u03a3\u03b8 is symmetric with unit diagonal entries, we count the effective number of parameters in \u02c6 \u03a3\u03b8 as \u2225nondiag( \u02c6 \u03a3\u03b8)\u22250/2. The major advantage of the proposed criteria is that the lower bound of expectation is readily obtained with the updated item parameters and variation parameters, with no extra computation cost. 4 Simulation Studies 4.1 Study I We conducted simulation studies to compare the empirical performance of the proposed pGVEM algorithm with the EM algorithm with fixed quadrature, Metropolis-Hastings Robbins-Monro (MHRM) algorithm (Cai, 2010), and the stochastic EM (StEM) for MGPCM, which are implemented in the R package \u2018mirt\u2019 (Chalmers, 2012), in terms of mean squared error and bias of the estimation together with their computation time. The simulations were conducted in the 14 exploratory factor analysis (EFA) scenario, where no constraints on the item factor loading structure were imposed during the analysis. EFA is generally more computationally challenging than confirmatory factor analysis. In our study, we assumed that the latent covariance matrix was ID to remove scale and rotational indeterminacy, and no further assumptions on the structure of the loading matrix A were made during the analysis. After estimating the parameters by our pGVEM algorithm, we performed proper oblique rotation to allow the factors to be correlated. Many methods, including varimax, direct oblimin, quartimax, equamax, and promax (Browne, 2001; Hendrickson and White, 1964), are available for factor rotation in the literature. In our simulation study, we applied promax rotation as it is one of the most computationally efficient oblique rotation methods in large-scale factor analysis. For the estimation implemented in the \u2018mirt\u2019 package, we use the built-in promax rotation to obtain the estimation, and for the pGVEM estimation, we use the function promax in R, with default m = 4, to perform the rotation after the iteration ends. The manipulated conditions include: (1) sample size N = 200, 500; (2) test length J = 10, 20; (3) number of categories K = 3, 6; (5) low and high correlation among the latent traits; (6) small and large scale loadings. For each condition, a total number of 100 replicated cases were simulated. In the context of partial credit models, it is noted that the scaling of loadings plays a pivotal role in shaping the likelihood function. Specifically, when loadings are high and there exist multiple categories for partial credit scoring, the following case usually occurs: the probability of attaining the highest or lowest scores becomes disproportionately large. Consequently, this dominance of extreme scores may result in insufficient records of intermediate scores, thereby making the estimation of threshold parameters problematic. Therefore in the simulation studies, we considered two cases: (1) low scale loading: parameter ajr was simulated from Unif(0.5, 1) for all j = 1, \u00b7 \u00b7 \u00b7 , J, r = 1, \u00b7 \u00b7 \u00b7 , D; (2) high scale loading: parameter ajr was simulated from Unif(1, 2) for all j = 1, \u00b7 \u00b7 \u00b7 , J, r = 1, \u00b7 \u00b7 \u00b7 , D. The threshold parameters bjk are simulated from N(0, 1) for all j = 1, \u00b7 \u00b7 \u00b7 , J and k = 1, \u00b7 \u00b7 \u00b7 , K\u22121. 
For the latent variables, they were simulated from a multivariate 15 normal distribution with 0 mean and covariance matrix \u03a3\u03b8. The diagonal entries were fixed as 1 and off-diagonal entries were generated from a uniform distribution Unif(0.1, 0.3) in the low correlation case and Unif(0.5, 0.7) in the high correlation case. For the responses generated from the simulated model parameters, we perform simulation only on cases where for all j = 1, \u00b7 \u00b7 \u00b7 , J and k = 0, \u00b7 \u00b7 \u00b7 K \u22121, #{i | Yij = k, i = 1, \u00b7 \u00b7 \u00b7 , N} > 0. Here # denoting the set counting operator. Skipping any item with all responses being 0 is necessary since the threshold parameter linked to each category of this item cannot be identified within the finite sample context. For the convergence criterion, the algorithm was terminated when the change of all item parameters between two iterations dropped below a pre-specified threshold, i.e., 1 J \u00d7 D + J \u00d7 K J X j=1 h D X r=1 \u0000a(t) jr \u2212a(t\u22121) jr \u00012 + K\u22121 X k=1 \u0000b(t) jk \u2212b(t\u22121) jk \u00012i < 10\u22125. (4.1) The estimation errors are presented separately in the form of mean squared error and bias for the discrimination and threshold parameters and the covariance matrix, averaged across the test items: Biasa = 1 JD J X j=1 D X r=1 \u02c6 ajr \u2212ajr, MSEa = 1 JD J X j=1 \u2225aj \u2212\u02c6 aj\u22252 2; Biasa = 1 J(K \u22121) J X j=1 K\u22121 X k=1 \u02c6 bjr \u2212bjr, MSEb = 1 J(K \u22121) J X j=1 K\u22121 X k=1 (bjk \u2212\u02c6 bjk)2; Bias\u03a3 = 2 D(D \u22121) X l p from (y, x), the linear quantile regression estimator of \u03b2\u2217is defined as a minimizer of the empirical analog of Q(\u00b7): \ufffd \u03b2 \u2208argmin \u03b2\u2208Rp \ufffd Q(\u03b2), where \ufffd Q(\u03b2) := 1 N work of Koenker and Bassett (1978), q N N \ufffd i=1 \ufffd i=1 \u03c1\u03c4(yi \u2212xT i \u03b2). (3) \ufffd \ufffd \ufffd Since the seminal work of Koenker and Bassett (1978), quantile regression (QR) has been extensively studied from both statistical and computational perspectives. We refer to Koenker (2005) and Koenker et al. (2017) for a systematic introduction of quantile regression under various settings. 4 Decentralized Quantile Regression By the convexity of the check function, the population loss function Q(\u00b7) in (2) is also convex. Moreover, under mild conditions, Q(\u00b7) is twice di\ufb00erentiable and strongly convex in a neighborhood of \u03b2\u2217with Hessian matrix H := \u22072Q(\u03b2\u2217) = E{f\u03b5|x(0)xxT}, where f\u03b5|x(\u00b7) denotes the conditional density of \u03b5 given x. In contrast, the empirical loss b Q(\u00b7) is not di\ufb00erentiable at \u03b2\u2217, and its \u201ccurvature energy\u201d is concentrated at a single point. This is substantially di\ufb00erent from other widely used loss functions that are at least locally strongly convex, such as the squared or logistic loss. To deal with the non-smoothness issue, Horowitz (1998) proposed to smooth the objective function, or equivalently the check function \u03c1\u03c4(\u00b7), to obtain \u03c1H \u03c4 (u) = u{\u03c4\u2212G(\u2212u/h)}, where G(\u00b7) is a smooth function and h > 0 is the smoothing parameter or bandwidth. See also Wang, Stefanski and Zhu (2012), Wu, Ma and Yin (2015), Galvao and Kato (2016) and Chen, Liu and Zhang (2019) for extensions of such a smoothed objective function approach with more complex data. 
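For concreteness, the check loss and a Horowitz-type smoothed version can be written in a few lines; the choice of $G$ as the standard normal distribution function below is purely illustrative (an assumption of ours, not prescribed by the text).

```python
# Minimal sketch (assumptions ours): the quantile check function rho_tau and a
# Horowitz-type smoothed version rho^H_tau(u) = u * {tau - G(-u/h)}, with G taken
# to be the standard normal CDF for illustration only.
import numpy as np
from scipy.stats import norm

def check_loss(u, tau):
    return u * (tau - (u < 0))            # rho_tau(u) = u * (tau - 1{u < 0})

def smoothed_check_loss(u, tau, h):
    return u * (tau - norm.cdf(-u / h))   # Horowitz smoothing with bandwidth h

u = np.linspace(-2, 2, 5)
print(check_loss(u, 0.5), smoothed_check_loss(u, 0.5, 0.25))
```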
However, Horowitz\u2019s smoothing gains smoothness at the cost of convexity, which inevitably raises optimization issues especially when p is large. On the other hand, by the \ufb01rst-order condition, the population parameter \u03b2\u2217satis\ufb01es the moment condition \u2207Q(\u03b2\u2217) = E[{1(y < xT\u03b2) \u2212\u03c4}x]|\u03b2=\u03b2\u2217= 0. This property motivates a smoothed estimating equation (SEE) estimator (Whang, 2006; Kaplan and Sun, 2017), de\ufb01ned as the solution to the smoothed moment condition 1 N N X i=1 \b G \u0000(xT i \u03b2 \u2212yi)/h \u0001 \u2212\u03c4 \t xi = 0. (4) From an M-estimation viewpoint, the aforementioned SEE estimator can be equivalently de\ufb01ned as a minimizer of the empirical smoothed loss function b Qh(\u03b2) = 1 N N X i=1 \u2113h(yi \u2212xT i \u03b2) with \u2113h(u) = (\u03c1\u03c4 \u2217Kh)(u) = Z \u221e \u2212\u221e \u03c1\u03c4(v)Kh(v \u2212u) dv, (5) where K(\u00b7) is a kernel function, Kh(u) = (1/h)K(u/h), and \u2217is the convolution operator. This approach will be referred to as conquer, which stands for convolution-type smoothed quantile regression. The ensuing estimator is then denoted by b \u03b2cq = b \u03b2 cq h \u2208 argmin\u03b2\u2208Rp b Qh(\u03b2). To see the connection between SEE and conquer methods, de\ufb01ne K(u) = R u \u2212\u221eK(v) dv, and note that the empirical loss b Qh(\u00b7) in (5) is twice continuously di\ufb00erentiable with gradient and Hessian given by \u2207b Qh(\u03b2) = (1/N) PN i=1{K((xT i \u03b2 \u2212yi)/h) \u2212\u03c4}xi and \u22072 b Qh(\u03b2) = (1/N) PN i=1 Kh(yi \u2212xT i \u03b2) \u00b7 xixT i , respectively. When a non-negative kernel is used, b Qh(\u00b7) is convex so that any minimizer of \u03b2 7\u2192b Qh(\u03b2) satis\ufb01es the \ufb01rst-order moment condition (4) with G = K. When the dimension p is \ufb01xed, asymptotic properties of the SEE or conquer estimator have been studied by Kaplan and Sun (2017) and Fernandes, Guerre and Horta (2021), although the former focused on a more challenging instrumental variables quantile regression problem. In the \ufb01nite sample setup, He et al. (2021) established exponential-type concentration inequalities and nonasymptotic Bahadur representation for the conquer estimator, while allowing the dimension p to grow with the sample size n. Their results reveal a key feature of the smoothing parameter: the bandwidth should adapt to both the sample size n 5 Tan, Battey and Zhou and dimensionality p, so as to achieve a trade-o\ufb00between statistical accuracy and computational stability. For statistical inference, He et al. (2021) suggested and proved the validity of the multiplier bootstrap for conquer, which has desirable \ufb01nite sample performance under various settings, including those at extreme quantile levels. We refer to Section 5 of He et al. (2021) for further details on the computational aspects of conquer. 2.2 Distributed quantile regression with conquer Before detailing an approach for distributed inference for QR coe\ufb03cients, motivated primarily by situations in which the data are distributed, we start with some remarks on computation. The optimization problem in (3) can be recast as a convex linear program, solvable by the simplex or interior point methods. The latter has a computational complexity of order O(N1+ap3 log N) for some a \u2208(0, 1/2). An e\ufb03cient algorithm, the Frisch-Newton algorithm with preprocessing, has an improved complexity of O{(Np)2(1+a)/3p3 log N + Np} (Portnoy and Koenker, 1997). 
While not inordinate relative to the O(p2N) complexity of least squares, to achieve the same quality of distributional approximation, quantile regression requires a considerably larger sample size. Thus for formal inference there are sometimes computational advantages to parallelized inference even when data are available in their totality. For ease of exposition, assume that the m data sources are of equal sample size n, so that N = m \u00b7 n. The combined data set is {(yi, xi)}N i=1, where xi is a p-dimensional vector. For j = 1, . . . , m, the jth location stores a subsample of n observations, denoted by Dj = {(yi, xi)}i\u2208Ij, and {Ij}m j=1 are disjoint index sets satisfying \u222am j=1Ij = {1, . . . , N} and |Ij| = n, where |Ij| is the cardinality of Ij. Under a conditional quantile regression model, the observations (y1, x1), . . . , (yN, xN) are i.i.d. sampled from (y, x) \u223cP satisfying Q\u03c4(y|x) = xT\u03b2\u2217, and the model parameter \u03b2\u2217 is equivalently de\ufb01ned as \u03b2\u2217= argmin \u03b2\u2208Rp Q(\u03b2), Q(\u03b2) := E(y,x)\u223cP \b \u03c1\u03c4 \u0000y \u2212xT\u03b2) \t , (6) where \u03c1\u03c4(\u00b7) is the check function. Unlike the model setting considered by Wang et al. (2017) and Jordan, Lee and Yang (2019) in which the target loss function is twice di\ufb00erentiable and has Lipschitz continuous second derivative, the non-smooth check function is not everywhere di\ufb00erentiable, which prevents gradient-based optimization methods from being e\ufb03cient. Given two bandwidths h, b > 0, we de\ufb01ne the global and local smoothed quantile loss functions as b Qh(\u03b2) = 1 N N X i=1 \u2113h(yi \u2212xT i \u03b2) and b Qj,b(\u03b2) = 1 n X i\u2208Ij \u2113b(yi \u2212xT i \u03b2), j = 1, . . . , m, (7) where the loss function \u2113h(\u00b7) is as de\ufb01ned in (5). Hereafter, h and b will be referred to as the global bandwidth and local bandwidth, respectively, and we assume b \u2265h > 0. In the context of quantile regression, we extend the approximate Newton-type method proposed by Shamir, Srebro and Zhang (2014) through convolution smoothing; see also Wang et al. (2017) and Jordan, Lee and Yang (2019). Notably, the ideas behind these Newton-type 6 Decentralized Quantile Regression methods coincide, to some extent, with the classical one-step construction (Bickel, 1975), which focused on improving an initial estimator that is already consistent but not e\ufb03cient. Starting with an initial estimator e \u03b2(0) of \u03b2\u2217, we de\ufb01ne the shifted conquer loss function e Q(\u03b2) = b Q1,b(\u03b2) \u2212 \u2207b Q1,b( e \u03b2(0)) \u2212\u2207b Qh( e \u03b2(0)), \u03b2 \u000b , (8) which leverages local higher-order information and global \ufb01rst-order information, and therefore depends on both local and global bandwidths b and h. The resulting communicatione\ufb03cient estimator is given by e \u03b2(1) = e \u03b2(1) b,h \u2208argmin \u03b2\u2208Rp e Q(\u03b2). (9) Informal motivation for the aforementioned approach is provided by a Taylor series expansion of the global loss function around the initial estimator e \u03b2(0). For a suitable choice of e \u03b2(0), the approximation error is well controlled in view of the heuristic argument outlined by Jordan, Lee and Yang (2019). 
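The shifted-loss construction in (8)-(9) can be sketched in a few lines; the sketch below is ours (not the authors' code), uses a Gaussian kernel so that $\bar K = \Phi$ and $K_h(u) = \phi(u/h)/h$ in the conquer gradient and Hessian, and solves the inner problem by plain Newton steps with a tiny ridge term added purely for numerical stability.

```python
# Minimal sketch (assumptions ours): one round of the shifted conquer update in (8)-(9).
# Machine 1 holds (X1, y1); grad_global is the average of the local conquer gradients
# at the current iterate, computed with the global bandwidth h.
import numpy as np
from scipy.stats import norm

def conquer_grad(X, y, beta, tau, h):
    r = X @ beta - y
    return X.T @ (norm.cdf(r / h) - tau) / len(y)       # gradient of the smoothed loss

def one_round_update(X1, y1, beta_init, grad_global, tau, b, n_newton=20):
    """Minimize Q_tilde(beta) = Q_{1,b}(beta) - <grad Q_{1,b}(beta_init) - grad_global, beta>."""
    shift = conquer_grad(X1, y1, beta_init, tau, b) - grad_global
    beta = beta_init.copy()
    p = X1.shape[1]
    for _ in range(n_newton):
        r = X1 @ beta - y1
        grad = conquer_grad(X1, y1, beta, tau, b) - shift
        W = norm.pdf(r / b) / b                          # K_b(y_i - x_i'beta)
        hess = (X1 * W[:, None]).T @ X1 / len(y1) + 1e-8 * np.eye(p)
        beta -= np.linalg.solve(hess, grad)
    return beta
```

Here grad_global plays the role of $\nabla \hat{Q}_h(\tilde{\beta}^{(0)})$ in (8): in a distributed setting, each machine evaluates its local conquer gradient at $\tilde{\beta}^{(0)}$ with the global bandwidth $h$ and ships it to machine 1, which averages the $m$ vectors.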
Furthermore, on writing e Q(\u03b2) more explicitly as e Q(\u03b2; e \u03b2(0)), it can be arranged, through bandwidths b and h, that argmin\u03b2\u2208Rp e Q(\u03b2; e \u03b2(0)) is a contraction mapping in a suitable neighborhood of \u03b2\u2217, to be de\ufb01ned. Intuitively, in view of Banach\u2019s \ufb01xed point theorem, the sequence of minimizers obtained through iteration of this procedure converges to the global conquer estimator, itself converging to \u03b2\u2217. In the limit of increasing iterations, there is no information loss over the oracle procedure with access to all data simultaneously, in spite of the data being distributed. The theoretical results of this section establish the delicate choices of h, b, and the number of iterations in order for the synthesis error to match the statistical error of the global conquer estimator. Under the conditional quantile model (1), the generic data vector (y, x) can be written in a linear form y = xT\u03b2\u2217+ \u03b5, where the model error \u03b5 satis\ufb01es Q\u03c4(\u03b5|x) = 0. Let f\u03b5|x(\u00b7) be the conditional density function of \u03b5 given x. Given i.i.d. observations {(yi, xi)}N i=1, we write \u03b5i = yi \u2212xT i \u03b2\u2217, satisfying P(\u03b5i \u22640|xi) = \u03c4. To investigate the statistical properties of e \u03b2(1), we impose some regularity conditions. (C1). There exist \u00af f \u2265f > 0 such that f \u2264f\u03b5|x(0) \u2264\u00af f almost surely (over x). Moreover, there exists some l0 > 0 such that |f\u03b5|x(u) \u2212f\u03b5|x(v)| \u2264l0|u \u2212v| for all u, v \u2208R almost surely. (C2). K(\u00b7) is a symmetric and non-negative kernel that satis\ufb01es \u03ba2 := R \u221e \u2212\u221eu2K(u) du < \u221e, \u03bau := supu\u2208R K(u) < \u221eand \u03bal := min|u|\u22641 K(u) > 0. (C3). The predictor x \u2208Rp is sub-Gaussian: there exists \u03c51 > 0 such that P(|zTu| \u2265\u03c51t) \u2264 2e\u2212t2/2 for every unit vector u \u2208Sp\u22121 and t \u22650, where z = \u03a3\u22121/2x and \u03a3 = E(xxT) is positive de\ufb01nite. Condition (C1) imposes regularity conditions on the conditional density function. These are standard in quantile regression. In (C2), the requirement min|u|\u22641 K(u) > 0 is for technical simplicity and can be relaxed to min|u|\u2264c K(u) > 0 for some c \u2208(0, 1), which will only change the constant terms in all of our theoretical results. In particular, for kernels that are compactly supported on [\u22121, 1], we may choose c = 1/2 and assume \u03bal = min|u|\u22641/2 K(u) > 0 instead. Distributions with heavier tails than Gaussian on x are excluded by Condition (C3) in order to guarantee exponential-type concentration bounds for estimators of quantile regression coe\ufb03cients. 7 Tan, Battey and Zhou For some radii r, r\u2217> 0, de\ufb01ne the events E0(r) = \b e \u03b2(0) \u2208\u0398(r) \t and E\u2217(r\u2217) = \b \u2225\u2207b Qh(\u03b2\u2217)\u2225\u2126\u2264r\u2217 \t , (10) where \u0398(r) := {\u03b2 \u2208Rp : \u2225\u03b2 \u2212\u03b2\u2217\u2225\u03a3 \u2264r} and \u2126:= \u03a3\u22121. In particular, E0(r) is a \u201cgood\u201d event on which the initial estimator e \u03b2(0) falls into a local neighborhood \u0398(r) around \u03b2\u2217. Recall that H = E{f\u03b5|x(0)xxT} is the Hessian of the population quantile loss \u03b2 7\u2192E{\u03c1\u03c4(y \u2212xT\u03b2)} at \u03b2\u2217. The following theorem provides the statistical properties of e \u03b2(1). 
Theorem 1 Assume Conditions (C1)\u2013(C3) hold, and let E0(r0) and E\u2217(r\u2217) be the events de\ufb01ned in (10) for some r0 \u2273r\u2217> 0. Let x > 0, and suppose the bandwidths b \u2265h > 0 satisfy max{r0, p (p + x)/n } \u2272b \u22721 and p (p + x)/N \u2272h. Conditioned on the event E0(r0) \u2229E\u2217(r\u2217), the one-step estimator e \u03b2(1) de\ufb01ned in (9) satis\ufb01es \u2225e \u03b2(1) \u2212\u03b2\u2217\u2225\u03a3 \u2272 r p + x nb + r p + x Nh + b ! \u00b7 r0 + r\u2217 (11) and \u2225H( e \u03b2(1) \u2212\u03b2\u2217) + \u2207b Qh(\u03b2\u2217) \u2225\u2126\u2272 r p + x nb + r p + x Nh + b ! \u00b7 r0 (12) with probability at least 1 \u22123e\u2212x. Equation (11) is the prediction error for the estimator obtained from running a single iteration of our proposed method, while equation (12) provides bounds on a linear Bahadur representation of the estimator, used later for detailed statistical inference on \u03b2\u2217or functionals thereof. Before proceeding, we \ufb01rst discuss some implications of Theorem 1. The parameter r0 captures the convergence rate of the initial estimator e \u03b2(0). It can be constructed either on a single local machine that has access to n observations or via averaging all the local estimators. The former is communication-free, while the latter usually improves the statistical accuracy at the cost of one round of communication. Therefore, we may expect a conservative convergence rate of the initial estimator, which is of order p p/n. In this case, the rate r0 \u224d p p/n is sub-optimal compared to that of the global QR estimator b \u03b2 in (3) or the conquer estimator b \u03b2cq in (5). Large sample properties of b \u03b2cq have been examined by Fernandes, Guerre and Horta (2021) when p is \ufb01xed, and by He et al. (2021) under the increasing-p regime. According to the latter, the expected prediction error of b \u03b2cq, namely \u2225b \u03b2cq \u2212\u03b2\u2217\u2225\u03a3, is primarily determined by \u2225\u2207b Qh(\u03b2\u2217)\u2225\u2126which is of order p p/N + h2; see Lemma 16 in the Appendix. Therefore, the second term on the right-hand side of (11) corresponds to the optimal statistical rate, provided that (p/N)1/2 \u2272h \u2272(p/N)1/4 when all the data are used. Turning to the \ufb01rst term, we see that with properly chosen bandwidths b and h, say b \u224d(p/n)1/3 and h \u224d(p/N)1/3, the one-step estimator e \u03b2(1) re\ufb01nes the statistical accuracy of e \u03b2(0) by a factor of order (p/n)1/3. 8 Decentralized Quantile Regression We can repeat the one-step procedure in (9) using e \u03b2(1) as an initial estimator, thereby obtaining e \u03b2(2). After T iterations, we denote the resulting distributed QR estimator by e \u03b2(T). Since the statistical error is reduced by a factor of (p/n)1/3, with high probability, at each iteration, we expect that after \u2126 \u0000\u2308log(m)/ log(n/p)\u2309 \u0001 iterations, the communicatione\ufb03cient distributed estimator e \u03b2(T) will achieve the same convergence rate as the global estimator b \u03b2 or b \u03b2cq. We formally describe the above iterative procedure as follows, starting at iteration 0 with an initial estimate e \u03b2(0). At iteration t = 1, 2, . . ., construct the shifted conquer loss function e Q(t)(\u03b2) = b Q1,b(\u03b2) \u2212 \u2207b Q1,b( e \u03b2(t\u22121)) \u2212\u2207b Qh( e \u03b2(t\u22121)), \u03b2 \u000b , (13) yielding e \u03b2(t) that minimizes e Q(t)(\u00b7), that is, e \u03b2(t) \u2208argmin \u03b2\u2208Rp e Q(t)(\u03b2). 
(14) As before, b \u2265h > 0 are the local and global bandwidths, respectively. The details are described in Algorithm 1. Notably, the shifted loss e Q(t)(\u00b7) (t \u22651) is twice-di\ufb00erentiable, convex and (provably) locally strongly convex. To solve the shifted conquer loss minimization problem in (14), in Section A.1 of the Appendix, we describe a gradient descent (GD) algorithm modi\ufb01ed by the application of a Barzilai-Borwein step (Barzilai and Borwein, 1988). Such a \ufb01rst-order algorithm is computationally scalable to large dimensions. In reminiscence of the classical one-step estimator of Bickel (1975), we may instead seek an approximate solution to the minimization problem (14) at each iteration by performing one step of Newton\u2019s method. At iteration t, \u2207e Q(t)( e \u03b2(t\u22121)) = \u2207b Qh( e \u03b2(t\u22121)) and \u22072 e Q(t)( e \u03b2(t\u22121)) = \u22072 b Q1,b( e \u03b2(t\u22121)). Thus, starting with an initialization \u03b2(0), the Newton step computes the update \u03b2(t) = \u03b2(t\u22121) \u2212{\u22072 b Q1,b(\u03b2(t\u22121))}\u22121\u2207b Qh(\u03b2(t\u22121)) for t = 1, 2, . . .. At each iteration, the above one-step update essentially performs a Newton-type step based on \u03b2(t\u22121). While computationally advantageous, the desirable statistical properties of this one-step estimator rely on uniform convergence of the sample Hessian, which typically requires stronger scaling with the sample size. Theorem 2 below provides the statistical properties of the distributed conquer estimator e \u03b2(T), including high probability bounds on both estimation error and Bahadur linearization error. The latter serves as an intermediate step for establishing the asymptotic distribution of e \u03b2(T). Similar results can be obtained for \u03b2(T). In fact, the analysis in this case is much simpler due to the closed-form expression, and is therefore omitted. Theorem 2 Assume the same set of conditions in Theorem 1. Then, conditioned on E0(r0) \u2229E\u2217(r\u2217), the distributed estimator e \u03b2(T) with T \u2273log(r0/r\u2217)/ log(1/b) satis\ufb01es \u2225e \u03b2(T) \u2212\u03b2\u2217\u2225\u03a3 \u2272r\u2217, \u2225H( e \u03b2(T) \u2212\u03b2\u2217) + \u2207b Qh(\u03b2\u2217) \u2225\u2126\u2272 \bp (p + x)/(nb) + p (p + x)/(Nh) + b \t \u00b7 r\u2217, (15) with probability at least 1 \u2212(2T + 1)e\u2212x. 9 Tan, Battey and Zhou Algorithm 1 Distributed Quantile Regression via Convolution Smoothing. Input: data batches {(yi, xi)}i\u2208Ij, j = 1, . . . , m, stored on m local machines, quantile level \u03c4 \u2208(0, 1), bandwidths b, h > 0, initialization e \u03b2(0), maximum number of iterations T, g0 = 1. 1: for t = 1, 2 . . . , T do 2: Broadcast e \u03b2(t\u22121) to all local machines. 3: for j = 1, . . . , m do 4: Compute \u2207b Qj,h( e \u03b2(t\u22121)) on the jth local machine, and send it to the master (\ufb01rst) machine. 5: end for 6: Compute the global gradient \u2207b Qh( e \u03b2(t\u22121)) = (1/m) Pm j=1 \u2207b Qj,h( e \u03b2(t\u22121)) and its \u2113\u221enorm gt = \u2225\u2207b Qh( e \u03b2(t\u22121))\u2225\u221eon the master. 7: if gt > gt\u22121 or gt < 10\u22125 break 8: otherwise Compute \u2207b Q1,b( e \u03b2(t\u22121)), and solve e \u03b2(t) \u2208argmin\u03b2\u2208Rp e Q(t)(\u03b2) on the master. 9: end for Output: e \u03b2(T). 
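A compact sketch of Algorithm 1, with the m data batches held as a list within a single R session, is given below. The inner minimization is carried out by optim with the analytical gradient, as a simple stand-in for the Barzilai-Borwein gradient descent of Section A.1; a Gaussian kernel is assumed and all names are illustrative.

```r
# Sketch of Algorithm 1 on simulated "machines": `batches` is a list of m data
# sets, batches[[j]] = list(X = n x p matrix, y = length-n vector). A Gaussian
# kernel is assumed; names are illustrative, and optim() replaces the
# Barzilai-Borwein gradient descent of Section A.1 for the inner problem.

grad_smooth <- function(beta, X, y, tau, h)   # smoothed QR gradient, bandwidth h
  colMeans(as.numeric(pnorm((X %*% beta - y) / h) - tau) * X)

distributed_conquer <- function(batches, tau, b, h, beta0, T_rounds = 10) {
  X1 <- batches[[1]]$X; y1 <- batches[[1]]$y   # master (first) machine
  beta <- beta0; g_prev <- 1                   # g0 = 1, as in Algorithm 1
  for (t in seq_len(T_rounds)) {
    # one communication round: each machine returns its p-vector local gradient
    grads <- lapply(batches, function(d) grad_smooth(beta, d$X, d$y, tau, h))
    g_global <- Reduce(`+`, grads) / length(batches)
    g_norm <- max(abs(g_global))
    if (g_norm > g_prev || g_norm < 1e-5) break   # stopping rule of Algorithm 1
    g_prev <- g_norm
    # solve the shifted conquer problem (14) on the master machine
    shift <- grad_smooth(beta, X1, y1, tau, b) - g_global
    obj <- function(bta) {                     # Gaussian-kernel smoothed loss
      r <- as.numeric(y1 - X1 %*% bta)
      mean(r * (pnorm(r / b) - (1 - tau)) + b * dnorm(r / b)) - sum(shift * bta)
    }
    grd <- function(bta) grad_smooth(bta, X1, y1, tau, b) - shift
    beta <- optim(beta, obj, grd, method = "BFGS")$par
  }
  beta
}
```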
Remark 3 From the proof of Theorem 2, we see that the multi-round estimate e \u03b2(t) after t iterations satis\ufb01es with high probability that \u2225e \u03b2(t) \u2212\u03b2\u2217\u22252 \u2272\u03b4t \u00b7 r0 + r\u2217 with \u03b4 = r p nb + r p Nh + b, where r0 and r\u2217represent, respectively, the initial convergence rate and the global rate (attainable by the centralized estimator). This result also characterizes the trade-o\ufb00between communication cost and estimation accuracy. After running the algorithm for t rounds, the communication cost for each local machine/node is O(pt). On the other hand, since the statistical limit of distributed estimation is determined by r\u2217, we need as many as O(log(r\u2217/r0)/ log(1/\u03b4)) communication rounds for the proposed distributed estimator to achieve the optimal rate, resulting in a total communication cost O(p log(r\u2217/r0)/ log(1/\u03b4)) for each local machine. Ignoring logarithmic factors, the above parameters (r0, r\u2217, \u03b4) will be taken as r0 \u224d r p n, r\u2217\u224d r p N , and \u03b4 \u224d \u0012 p n \u00131/3 . Now let us discuss the construction of the initial estimator e \u03b2(0). Using a local sample from a single source, we can take e \u03b2(0) to be either the standard QR estimator (Koenker and Bassett, 1978) or the conquer estimator described in Section 2.1. That is, e \u03b2(0) \u2208argmin \u03b2\u2208Rp 1 n X i\u2208I1 \u03c1\u03c4(yi \u2212xT i \u03b2) or e \u03b2(0) \u2208argmin \u03b2\u2208Rp b Q1,b(\u03b2), (16) where b > 0 is the local bandwidth. In the diverging-p regime, explicit high probability error bounds for the QR and conquer estimators can be found in Pan and Zhou (2021) and He et al. (2021), respectively. 10 Decentralized Quantile Regression Theorem 4 Assume that Conditions (C1)\u2013(C3) hold, and choose the bandwidths b, h > 0 as b \u224d{(p + log(n log m))/n}1/3 and h \u224d{(p + log(n log m))/N}\u03b3 for any \u03b3 \u2208[1/3, 1/2]. Moreover, suppose the sample size per source satis\ufb01es n \u2273p+log(n log m). Then, starting at iteration 0 with an initial estimate e \u03b2(0) given in (16), the multi-round distributed estimator e \u03b2 = e \u03b2(T) with T \u224d\u2308log(m) log(n/p)\u2309satis\ufb01es \u2225e \u03b2 \u2212\u03b2\u2217\u2225\u03a3 \u2272 r p + log(n log m) N (17) and \r \r \r \rH( e \u03b2 \u2212\u03b2\u2217) + 1 N N X i=1 \b K(\u2212\u03b5i/h) \u2212\u03c4 \t xi \r \r \r \r \u2126 \u2272(p + log(n log m))5/6 n1/3N1/2 + p + log(n log m) Nh1/2 (18) with probability at least 1 \u2212Cn\u22121, where m = N/n is the number of sources. Remark 5 Guided by the theoretically \u201coptimal\u201d choice of the local and global bandwidths stated in Theorem 4, in practice we suggest to choose b = c \u00b7 \u0012p + log n n \u00131/3 and h = c \u00b7 \u0012p + log N N \u00131/3 , (19) for some positive constant c. For preprocessed data that has constant-level scales, we may choose c from {0.5, 1, 2.5, 5} using a validation set. More generally, we consider a heuristic, dynamic method for choosing c. To solve the optimization problem in (14), Section A.1 describes a quasi-Newton-type algorithm, namely the gradient descent with step size automatically determined by the Barzilai-Borwein method. At each iteration, we set c in (19) to be the minimum between the sample standard deviation and the median absolute deviation (multiplied by 1.4826) of the residuals from the previous iterate. The resulting estimate is then scale-invariant. 
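A small helper implementing the rule of thumb (19), together with the dynamic choice of c described above, might look as follows; this is a sketch, and the constants and defaults are starting points rather than tuned values.

```r
# Bandwidth rule of thumb (19); `resid` holds residuals from the previous
# iterate, or NULL for preprocessed unit-scale data, in which case the fixed
# constant c is used. A sketch, not the packaged implementation.
choose_bandwidths <- function(n, N, p, resid = NULL, c = 1) {
  if (!is.null(resid))
    c <- min(sd(resid), mad(resid))   # mad() already includes the 1.4826 factor
  list(b = c * ((p + log(n)) / n)^(1/3),
       h = c * ((p + log(N)) / N)^(1/3))
}
```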
The scaling condition n \u2273p + log(n log m), while not as easy to parse as the m \u2272 \u221a N condition implicated by simple meta analyses, is appreciably less stringent. To visualize this constraint, we introduce the function u(m, N) := N p + log(N/m) + log(log m) so that the knife-edge permissible value of m arises when m = u(m, N). This \ufb01xed point equation is thus solved when u(m, N)/m = 1. Figure 1 plots u(m, N)/m against m and N for p = n/10 and p = n/2. The permissible scaling of m with N is the curve traced out by the intersection of u(m, N)/m with the constant function, taking value 1 for all values of the argument. From Figure 1, the permissible scaling of m with N is visibly faster than \u221a N. By comparing Figures 1(a) and (b), we see that this scaling is made more severe by proportional increases in p. 11 Tan, Battey and Zhou (a) (b) Figure 1: Plot of u(m, N)/m against m and N overlaid with the constant function to indicate the \ufb01xed point of u(m, N) for (a) p = n/10 and (b) p = n/2. Using a local estimator as the initialization is most e\ufb03cient in terms of storage, communication, and computational complexity. Alternatively, one can use the so called divideand-conquer (meta analysis) estimator based on simply averaging the local QR estimators (Volgushev, Chao and Cheng, 2019) as the initialization. This improves the statistical stability at the cost of one round of communication. For j = 1, . . . , m, de\ufb01ne the local empirical loss functions b Qj(\u03b2) = (1/n) P i\u2208Ij \u03c1\u03c4(yi \u2212xT i \u03b2), and the corresponding local QR estimators b \u03b2loc j \u2208argmin\u03b2\u2208Rp b Qj(\u03b2). Estimators obtained from the separate sources are combined after one round of communication to construct a global estimator, namely the divide-and-conquer quantile regression (DC-QR) estimator b \u03b2dc = 1 m m X j=1 b \u03b2loc j . (20) For quantile regression, Volgushev, Chao and Cheng (2019) derived the estimation error of b \u03b2dc when the (random) covariates have \ufb01xed dimension p and are uniformly bounded, that is, max1\u2264i\u2264n \u2225xi\u22252 \u2264cp for some cp > 0. Under regularity conditions that are similar to Condition (C1), Theorem 3.1 therein implies \u2225b \u03b2dc \u2212b \u03b2\u22252 = OP log N n + (log N)7/4 n1/4N1/2 ! + oP(N\u22121/2) as long as m = o(N/ log N). If communication constraints allow, we recommend using the DC-QR estimator b \u03b2dc as the initial estimator, and setting T = max{\u2308log m\u2309, 2} in Algorithm 1. The whole procedure hence requires at most T + 1 communication rounds. In Section 4.2, we demonstrate via numerical studies that the bias of the DC-QR estimator is visibly larger than the bias of the proposed distributed conquer estimator under extreme quantile regression models with heteroscedastic errors. As a result, con\ufb01dence sets based on a normal approximation to the DC-QR Wald statistic are susceptible to severe undercoverage in linear heteroscedastic models. 12 Decentralized Quantile Regression Remark 6 Just as the statistical aspects of extreme value theory are challenged by the limitation of data beyond extreme thresholds, QR coe\ufb03cients at extreme quantiles are notoriously hard to estimate. Section D of the Appendix details a minor adaptation of our procedure which improves its performance at extreme quantile levels. 
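For completeness, here is a sketch of the divide-and-conquer estimator (20), which also serves as an initializer and benchmark in our numerical studies, using the quantreg package employed in Section 4; it assumes the design matrices already contain an intercept column if one is desired.

```r
# Divide-and-conquer QR estimator (20): average of the m local QR fits, one
# round of communication. Sketch only; batches[[j]] = list(X, y) as before.
library(quantreg)

dc_qr <- function(batches, tau) {
  local_fits <- sapply(batches, function(d)
    coef(rq(d$y ~ d$X - 1, tau = tau)))   # local QR on machine j, no added intercept
  rowMeans(local_fits)
}
```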
2.3 Distributed inference 2.3.1 Wald-type confidence sets With a view to more detailed statistical inference beyond point estimation, we \ufb01rst establish a distributional approximation in the form of a Berry-Esseen bound. This forms the basis for a Wald test, which can be inverted to give con\ufb01dence sets for \u03b2\u2217and linear functionals thereof. Construction of the pivotal test statistic relies on a consistent estimator of the asymptotic variance, which is typically obtained using a nonparametric estimate of the conditional density function of the response given the covariates. Theorem 7 Under the same set of conditions in Theorem 4, the distributed conquer estimator e \u03b2 = e \u03b2(T) satis\ufb01es sup x\u2208R, a\u2208Rp \f \fP \b N1/2aT( e \u03b2 \u2212\u03b2\u2217)/\u03c3\u03c4,h \u2264x \t \u2212\u03a6(x) \f \f \u2272p + log(n log m) (Nh)1/2 + N1/2h2 + (p + log(n log m))5/6 n1/3 , (21) where \u03c32 \u03c4,h = aTH\u22121E[{K(\u2212\u03b5/h) \u2212\u03c4}2xxT]H\u22121a and \u03a6(\u00b7) is the standard normal distribution function. In particular, under the scaling p + log(log m) = o(min{n2/5, N3/8}), the distributed estimator e \u03b2 with bandwidths b \u224d{(p + log(n log m))/n}1/3 and h \u224d{(p + log(n log m))/N}2/5 satis\ufb01es N1/2\u03c3\u22121 \u03c4,h aT( e \u03b2 \u2212\u03b2\u2217) d \u2212 \u2192N(0, 1) and N1/2aT( e \u03b2 \u2212\u03b2\u2217) (aTH\u22121\u03a3H\u22121a)1/2 d \u2212 \u2192N \u00000, \u03c4(1 \u2212\u03c4) \u0001 uniformly over a \u2208Rp as n \u2192\u221e, where d \u2212 \u2192is a shorthand for convergence in distribution. The accuracy of the normal approximation hinges on both the global and local bandwidths, and on the scaling of m with N and p. The role of b is via (15), in view of which, the upper bound in Theorem 7 is of order (p + x)1/2 r p + x nb + b + r p + x Nh ! + N1/2h2, where x = log(n log m). Minimizing as a function of (h, b) delivers the rate in Theorem 7 by taking b \u224d \u0012p + x n \u00131/3 and h \u224d \u0012p + x N \u00132/5 . To our knowledge, Theorem 7 is the \ufb01rst Berry-Esseen inequality with explicit error bounds depending on both n and p in a distributed setting. 13 Tan, Battey and Zhou We \ufb01rst describe methods that use the normal distribution with estimated variance for calibration. Let e \u03b2 = e \u03b2(T) be the communication-e\ufb03cient estimator discussed in the previous subsection. Under mild conditions, Theorem 7 establishes the asymptotic normality that for every 1 \u2264j \u2264p, N1/2(e \u03b2j \u2212\u03b2\u2217 j ) (H\u22121\u03a3H\u22121)1/2 jj d \u2212 \u2192N \u00000, \u03c4(1 \u2212\u03c4) \u0001 , where H = E{f\u03b5|x(0)xxT} and \u03a3 = E(xxT). The problem is then reduced to estimating the pointwise variance (H\u22121\u03a3H\u22121)jj, i.e., the jth diagonal entry of H\u22121\u03a3H\u22121. To this end, de\ufb01ne the residual function and \ufb01tted residuals as \u03b5i(\u03b2) = yi \u2212xT i \u03b2 for \u03b2 \u2208Rp and b \u03b5i = \u03b5i( e \u03b2) = yi \u2212xT i e \u03b2, i = 1, . . . , N. (22) In a nondistributed setting, the p \u00d7 p Hessian matrix H can be estimated by the following variant of Powell\u2019s kernel-type estimator (Powell, 1991) b Hb = 1 m m X j=1 b Hj,b with b Hj,b = 1 nb X i\u2208Ij \u03c6 \u0000b \u03b5i/b \u0001 xixT i , j = 1, . . . , m, (23) where \u03c6(\u00b7) is the standard normal density function and b > 0 is a bandwidth that may di\ufb00er from the previous one. 
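A sketch of this Powell-type estimator, averaged over the m local pieces as in (23) and assuming a Gaussian kernel, is given below; the data-batch representation and function name are illustrative.

```r
# Powell-type kernel estimator (23) of H = E{f_{eps|x}(0) x x'}, averaged over
# the m local pieces; `beta_hat` is the fitted coefficient vector and b > 0 a
# bandwidth. Gaussian kernel assumed; names are illustrative.
powell_hessian <- function(batches, beta_hat, b) {
  H_local <- lapply(batches, function(d) {
    r <- as.numeric(d$y - d$X %*% beta_hat)              # fitted residuals (22)
    crossprod(d$X * (dnorm(r / b) / b), d$X) / nrow(d$X) # H-hat_{j,b}
  })
  Reduce(`+`, H_local) / length(batches)
}
```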
Moreover, de\ufb01ne b \u03a3 = (1/m) Pm j=1 b \u03a3j and b \u03a3b(\u03c4) = (1/m) Pm j=1 b \u03a3j,b(\u03c4), where b \u03a3j = 1 n X i\u2208Ij xixT i and b \u03a3j,b(\u03c4) = 1 n X i\u2208Ij \b K(\u2212b \u03b5i/b) \u2212\u03c4 \t2xixT i . (24) Computing the full matrix estimators b Hb and b \u03a3 or b \u03a3b(\u03c4) requires each machine to communicate p \u00d7 p local estimators b Hj,b and b \u03a3j or b \u03a3j,b(\u03c4) to the master machine. This incurs excessive communication cost. To achieve a trade-o\ufb00between communication e\ufb03ciency and statistical accuracy, we instead use a local pointwise variance estimator \u03c4(1 \u2212\u03c4) \u0000 b H\u22121 1,b b \u03a31 b H\u22121 1,b \u0001 jj or \u0000 b H\u22121 1,b b \u03a31,b(\u03c4) b H\u22121 1,b \u0001 jj. (25) For the latter, note that b \u03a31,b(\u03c4) can be viewed as a sample analog of E{K(\u2212\u03b5/b)\u2212\u03c4}2xxT, which is closely related to the asymptotic variance of e \u03b2 as revealed by Theorem 7. Moreover, as discussed in Fernandes, Guerre and Horta (2021), the width of a con\ufb01dence interval based on b H\u22121 1,b b \u03a31,b(\u03c4) b H\u22121 1,b for any element of \u03b2\u2217is asymptotically narrower than that based on the na\u00a8 \u0131ve variance estimator \u03c4(1 \u2212\u03c4) b H\u22121 1,b b \u03a31 b H\u22121 1,b. For every \u03b2 \u2208Rp, de\ufb01ne the matrix-valued function b H1,b(\u03b2) = 1 nb X i\u2208I1 \u03c6 \u0000\u03b5i(\u03b2)/b \u0001 xixT i , where \u03b5i(\u03b2) = yi \u2212xT i \u03b2 are as in (22). Under this notation, b H1,b = b H1,b( e \u03b2). The next result provides a uniform convergence result for b H1,b(\u03b2) over \u03b2 in a local neighborhood of \u03b2\u2217. For any symmetric matrix A \u2208Rp\u00d7p, we use \u2225\u00b7 \u2225\u2126(\u2126= \u03a3\u22121) to denote the relative operator norm, that is, \u2225A\u2225\u2126= \u2225\u03a3\u22121/2A\u03a3\u22121/2\u22252. With this notation, we have \u2225\u03a3\u2225\u2126= 1. 14 Decentralized Quantile Regression Proposition 8 Conditions (C1)\u2013(C3) ensure that, for any r, x > 0, sup \u03b2\u2208\u0398(r) \u2225b H1,b(\u03b2) \u2212H\u2225\u2126\u2272 r p log n + x nb + b + r (26) with probability at least 1\u22123e\u2212x as long as n \u2273p+x and b \u2273(p log n+x)/n. In addition, if f\u2032 \u03b5|x(\u00b7) is Lipschitz continuous, that is, |f\u2032 \u03b5|x(u) \u2212f\u2032 \u03b5|x(v)| \u2264l1|u \u2212v| for all u, v \u2208R almost surely (over x), then sup\u03b2\u2208\u0398(r) \u2225b H1,b(\u03b2) \u2212H\u2225\u2126\u2272 p (p log n + x)/(nb) + b2 + r with high probability. Theorem 9 Under the same set of assumptions in Theorem 4, the local estimators b \u03a31 and b H1,b with b \u224d{p log(n)/n}1/3 satisfy the bounds \u2225b \u03a31 \u2212\u03a3\u2225\u2126\u2272(p + log n)1/2n\u22121/2 and \u2225b H1,b \u2212H\u2225\u2126\u2272(p log n)1/3n\u22121/3 with probability at least 1 \u2212Cn\u22121 as long as n \u2273p log n. In a simpler case where f\u03b5|x(0) = f\u03b5(0) is independent of x, we have H\u22121\u03a3H\u22121 = {f\u03b5(0)}\u22122\u2126so that it su\ufb03ces to estimate the univariate density function f\u03b5|x(\u00b7) at 0. Arguably the most commonly used method is the following kernel density estimator: b f\u03b5(0) = 1 Nb N X i=1 K \u0000b \u03b5i/b \u0001 = 1 m m X j=1 b f\u03b5,j(0), (27) where b f\u03b5,j(0) = (nb)\u22121 P i\u2208Ij K(b \u03b5i/b) for j = 1, . . . , m. Therefore, b f\u03b5(0) can be easily computed in a distributed manner. 
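The following sketch collects these pieces: the local estimators in (23)-(25) computed from the first machine's data, and the distributed density estimate (27), which only requires one scalar per machine. A Gaussian kernel and equal batch sizes are assumed; names are illustrative.

```r
# Local variance components (23)-(25) from the first machine's data, and the
# distributed density estimate (27), which needs one scalar per machine.
# Gaussian kernel and equal batch sizes assumed; names are illustrative.

local_variance <- function(X1, y1, beta_hat, tau, b) {
  n <- nrow(X1)
  r <- as.numeric(y1 - X1 %*% beta_hat)                    # residuals on machine 1
  H1    <- crossprod(X1 * (dnorm(r / b) / b), X1) / n      # H-hat_{1,b}, cf. (23)
  Sig1  <- crossprod(X1) / n                               # Sigma-hat_1
  Sig1b <- crossprod(X1 * (pnorm(-r / b) - tau)^2, X1) / n # Sigma-hat_{1,b}(tau)
  Hinv <- solve(H1)
  list(sandwich_tau   = Hinv %*% Sig1b %*% Hinv,           # first option in (25)
       sandwich_naive = tau * (1 - tau) * Hinv %*% Sig1 %*% Hinv)
}

f_eps0 <- function(batches, beta_hat, b) {
  # average of the m local kernel density estimates at zero, as in (27)
  mean(sapply(batches, function(d) {
    r <- as.numeric(d$y - d$X %*% beta_hat)
    mean(dnorm(r / b)) / b
  }))
}
```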
For convenience, we use the standard normal density as kernel function and the rule-of-thumb bandwidth by Hall and Sheather (1988), that is, brot N = N\u22121/3 \u00b7 \u03a6\u22121(1 \u2212\u03b1/2)2/3 ( 1.5 \u00b7 \u03c6(\u03a6\u22121(\u03c4))2 2\u03a6\u22121(\u03c4)2 + 1 )1/3 , where \u03b1 is a prepeci\ufb01ed probability of miscoverage. For the kernel matrix estimators b H1,b and b \u03a31,b(\u03c4) de\ufb01ned in (23) and (24), we use the same local bandwidth b as in Algorithm 1 for e\ufb03cient distributed quantile regression. The corresponding normal-based con\ufb01dence intervals for \u03b2\u2217 j (j = 1, . . . , p) are given by h e \u03b2j \u2212\u03a6\u22121(1 \u2212\u03b1/2) \u00b7 b \u03c3j \u00b7 N\u22121/2, e \u03b2j + \u03a6\u22121(1 \u2212\u03b1/2) \u00b7 b \u03c3j \u00b7 N\u22121/2 i , (28) where b \u03c3j = ( b H\u22121 1,b b \u03a31,b(\u03c4) b H\u22121 1,b)1/2 jj , p \u03c4(1 \u2212\u03c4) ( b H\u22121 1,b b \u03a31 b H\u22121 1,b)1/2 jj or b f\u03b5(0)\u22121(b \u03a31)1/2 jj p \u03c4(1 \u2212\u03c4). The \ufb01rst two variance estimates are preferred under general heteroscedastic models in which H = E{f\u03b5|x(0)xxT} no longer takes the form f\u03b5(0) \u00b7 \u03a3. Remark 10 The construction of normal-based con\ufb01dence intervals as in (28) depends crucially on the asymptotic variance estimation. The validity of b \u03c3j = b f\u03b5(0)\u22121(b \u03a31)1/2 jj p \u03c4(1 \u2212\u03c4) relies on the assumption that H = E{f\u03b5|x(0)xxT} takes the form f\u03b5(0) \u00b7 \u03a3. This holds 15 Tan, Battey and Zhou trivially when the model error \u03b5 and covariates x are independent, which is arguably too restrictive in the context of quantile regression. More generally, let us consider a standard location-scale model y = xT\u03b2\u2217+ \u03c3(x) \u00b7 e, where e \u223cfe(\u00b7) is independent of x and \u03c3(\u00b7) is a non-negative function. In this case, we have \u03b5 = \u03c3(x) \u00b7 e, whose conditional and unconditional densities at 0 are f\u03b5|x(0) = fe(0)/\u03c3(x) and f\u03b5(0) = fe(0) \u00b7 E{1/\u03c3(x)}. This reveals that H = fe(0) \u00b7 E{xxT/\u03c3(x)} and f\u03b5(0) \u00b7 \u03a3 are generally unequal, and therefore the use of b \u03c3j = ( b H\u22121 1,b b \u03a31,b(\u03c4) b H\u22121 1,b)1/2 jj or b \u03c3j = p \u03c4(1 \u2212\u03c4) ( b H\u22121 1,b b \u03a31 b H\u22121 1,b)1/2 jj . is more robust and preferable under heteroscedastic models. 2.3.2 Score-type confidence sets While the Wald test inverts to give explicit con\ufb01dence intervals as in equation (28), con\ufb01dence sets based on other types of test acknowledge that the set of parameter values consistent with the data need not form an interval. For some k = 1, . . . , p, consider the hypothesis Hk 0 : \u03b2\u2217 k = ck versus Hk 1 : \u03b2\u2217 k \u0338= ck, (29) where ck is a predetermined constant. Let e \u03b2Hk = (e \u03b2Hk,1, . . . , e \u03b2Hk,p)T \u2208Rp denote the distributed quantile regression estimator with its kth coordinate constrained at the hypothesized value, i.e., e \u03b2Hk,k = ck. To construct a score test, de\ufb01ne the gradient b S = (b S1, . . . , b Sp)T = N \u00b7 \u2207b Qh( e \u03b2Hk) = m X j=1 X i\u2208Ij b \u03beixi, (30) where b \u03bei = K{(xT i e \u03b2Hk \u2212yi)/h} \u2212\u03c4. Under the null hypothesis Hk 0 , it is reasonable to expect the t-statistic b Tk, which is de\ufb01ned as N1/2 b Sk divided by the estimated standard deviation, to be asymptotically normally distributed. 
We can write the t-statistic in terms of the self-normalized sum b Tk as (Efron, 1969): b Tk = b Sk/b Vk q {N \u2212(b Sk/b Vk)2}/(N \u22121) , (31) where b Sk = Pm j=1 P i\u2208Ij b \u03beixik and b V 2 k = Pm j=1 P i\u2208Ij(b \u03beixik)2. This representation has the advantage that the quantities b Sk and b Vk can be calculated in a distributed manner without information loss. Write \u03bei = K(\u2212\u03b5i/h)\u2212\u03c4 and \u00b5k = E(\u03beixik), where \u03b5i = yi \u2212xT i \u03b2\u2217. Denote the \u201coracle\u201d version of b Tk by Tk: Tk = N\u22121/2 PN i=1(\u03beixik \u2212\u00b5k) q (N \u22121)\u22121 PN i=1(\u03beixik \u2212N\u22121 PN \u2113=1 \u03be\u2113x\u2113k)2 = Sk/Vk p {N \u2212(Sk/Vk)2}/(N \u22121) , where Sk = PN i=1(\u03beixik \u2212\u00b5k) and V 2 k = PN i=1(\u03beixik \u2212\u00b5k)2. Note that Sk is a sum of independent zero-mean random variables. Asymptotic properties of the self-normalized 16 Decentralized Quantile Regression sum Sk/Vk have been well established in the literature (de la Pa\u02dc na, Lai and Shao, 2009). On writing b Tk more explicitly as b Tk(ck), we de\ufb01ne the \u03b1-level con\ufb01dence set associated with the score test as \b ck : \u03a6\u22121(\u03b1/2) \u2264b Tk(ck) \u2264\u03a6\u22121(1 \u2212\u03b1/2) \t . (32) This will often, but need not always, deliver intervals. The possibility of non-interval con\ufb01dence sets should be viewed as an advantage, as exempli\ufb01ed by Fieller\u2019s problem (Fieller, 1954). The disadvantage of using the score statistic for constructing con\ufb01dence sets is that b Tk(ck) has to be evaluated for a multitude of ck values, in practice over a \ufb01ne grid of points. The computational burden of this is considerable relative to the Wald construction in Section 2.3.1. 2.3.3 Resampling-based confidence sets An alternative widely used approach treats an interval as the primary mode of inference rather than the signi\ufb01cance test and constructs the former directly by resampling methods such as the bootstrap. Resampling approaches typically provide tighter con\ufb01dence limits than the Wald-based interval due to their implicit higher-order accuracy over limiting distributional approximations. However, the computational burden is high in the present context. Recall from Theorem 4 that the multi-round distributed estimator e \u03b2 = (e \u03b21, . . . , e \u03b2p)T admits the following asymptotic linear (Bahadur) representation: N1/2( e \u03b2 \u2212\u03b2\u2217) = \u2212H\u22121 1 \u221a N N X i=1 \b K(\u2212\u03b5i/h) \u2212\u03c4 \t xi + oP(1). Motivated by this asymptotic representation, Belloni et al. (2017) suggested and proved the validity of the multiplier score bootstrap, which is based on randomly perturbing the asymptotic linear forms of the nonlinear quantile regression estimators. Intuitively, the distribution of N1/2( e \u03b2 \u2212\u03b2\u2217) can be approximately estimated by the bootstrap draw of N1/2( e \u03b2\u266d\u2212e \u03b2) := \u2212b H\u22121 1 \u221a N N X i=1 ei \b K(\u2212b \u03b5i/h) \u2212\u03c4 \t xi, (33) where e1, . . . , eN are i.i.d. standard normal random variables, b H denotes a generic (consistent) estimator of H, and b \u03b5i are \ufb01tted residuals. In the distributed framework, each bootstrap draw requires one round of communication. The composite communication cost can be exorbitant when the number of bootstrap replications is large, say 1000 or 2000. 
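A sketch of a single draw of the form (33) is given below, assuming access to the pooled data and to some consistent Hessian estimate (for instance the Powell-type estimator sketched earlier); in a genuinely distributed setting each such draw costs one round of communication, which is what motivates the cheaper constructions considered next.

```r
# One multiplier bootstrap draw of the form (33): perturb the estimated scores
# with i.i.d. standard normal weights. `H_hat` is any consistent estimate of H
# (e.g. the Powell-type estimator sketched earlier); a sketch only.
boot_draw <- function(X, y, beta_hat, tau, h, H_hat) {
  N  <- nrow(X)
  r  <- as.numeric(y - X %*% beta_hat)
  xi <- (pnorm(-r / h) - tau) * X            # scores {bar-K(-eps_i/h) - tau} x_i
  e  <- rnorm(N)                             # multiplier weights
  as.numeric(beta_hat - solve(H_hat, colSums(e * xi)) / N)
}

# e.g. draws <- replicate(1000, boot_draw(X, y, beta_hat, tau, h, H_hat));
# a simple percentile interval for coordinate j is quantile(draws[j, ], c(.025, .975)).
```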
Recently, Yu, Chao and Cheng (2020) proposed two bootstrap methods for constructing simultaneous con\ufb01dence intervals with distributed data. To operationalize their proposals in the present context, de\ufb01ne b \u03bei = {K(\u2212b \u03b5i/h)\u2212\u03c4}xi for i = 1, . . . , N, and let b H1 be a local estimator of H using the n samples on the \ufb01rst machine. For example, b H1 can be taken as either b H1,b given in (23) or b f\u03b5(0)\u00b7 b \u03a31. Then, consider the following two multiplier bootstrap statistics w\u266f= (w\u266f 1, . . . , w\u266f p)T = \u2212b H\u22121 1 1 \u221am m X j=1 ej \u00b7 n1/2\u2207b Qj,h( e \u03b2) (34) 17 Tan, Battey and Zhou and w\u266d= (w\u266d 1, . . . , w\u266d p)T = \u2212b H\u22121 1 1 \u221an + m \u22121 ( n X i=1 ei \u00b7 b \u03bei + m X j=2 en+j\u22121 \u00b7 n1/2\u2207b Qj,h( e \u03b2) ) , (35) both of which only require one additional round of communication, and therefore are communication-e\ufb03cient. As before, e1, . . . , en+m\u22121 are i.i.d. standard normal variables. For any q \u2208(0, 1) and 1 \u2264j \u2264p, let c\u266f j(q) and c\u266d j(q) be the (conditional) q-quantiles of w\u266f j and w\u266d j, respectively, de\ufb01ned as c\u266f j(q) = inf{t \u2208R : P\u2217(w\u266f j \u2264t) \u2265q} and c\u266d(q) = inf{t \u2208R : P\u2217(w\u266d j \u2264t) \u2265q}, where P\u2217(\u00b7) = P(\u00b7 | y1, x1, . . . , yN, xN) denotes the conditional probability given the observed samples. The ensuing bootstrap con\ufb01dence intervals for \u03b2\u2217 j (j = 1, . . . , p) are given by \" e \u03b2j \u2212 c\u266f j(1 \u2212\u03b1/2) \u221a N , e \u03b2j \u2212 c\u266f j(\u03b1/2) \u221a N # and \" e \u03b2j \u2212 c\u266d j(1 \u2212\u03b1/2) \u221a N , e \u03b2j \u2212 c\u266d j(\u03b1/2) \u221a N # . (36) Our simulations in Section 4.2 show that the two bootstrap methods have nearly identical performance when m is large, while the latter is more stable and thus preferable when m is relatively small. We leave the theoretical analysis of these distributed bootstrap methods in the future as a signi\ufb01cant amount of additional work is still needed. 2.4 Comparison with prior work The problem of distributed quantile regression has been considered in two earlier papers. Volgushev, Chao and Cheng (2019) established the statistical properties of the estimator obtained by averaging m local estimators, each constructed according to equation (3). The single round of communication and direct use of the check function means that there are no tuning parameters. However, as indicated in Section 1, the permissible scaling of m with N required to ensure the optimal statistical properties is restrictive, and violation of this constraint leads to under-coverage of resulting con\ufb01dence sets. Under-coverage is particularly severe under the highly plausible scenario in which the quantile regression error depends on the covariates. See Section 4 for an empirical demonstration. For M-estimation with a convex loss, Chen, Liu and Zhang (2021) proposed a general multi-round distributed procedure paired with stochastic gradient descent. When applied to quantile regression, their approach is a variant of stochastic subgradient descent. For minimizing a convex but non-di\ufb00erentiable function, subgradient methods typically exhibit very slow (sublinear) convergence and hence are not computationally stable. This explains the unpopularity of subgradient approaches among other computational methods for quantile regression. 
Theoretically, their distributed QR estimator needs a sufficiently large local sample size, namely $n \gtrsim (Np)^{1/2}\log(N)$, to achieve the optimal rate $O_P(\sqrt{p/N})$; see Theorem 4.7 therein. In addition to the suboptimal scaling, Chen, Liu and Zhang (2021) only derived the convergence rate for point estimation, without the uncertainty quantification sought in the present work. The same authors (Chen, Liu and Zhang, 2019) proposed a procedure specifically for distributed quantile regression. Their smoothed loss function is closer to that of Horowitz (1998) and is incompatible with the ideas of Jordan, Lee and Yang (2019) and Wang et al. (2017) due to violation of the uniform Lipschitz continuity condition on the second derivative. They instead exploit a representation of the estimator in terms of estimator-dependent "sufficient statistics". Since the representation is not of closed form, an iterative approach is required, using the estimate at iteration $t$ to update the sufficient statistics at each component source. While the limited communication improves the permissible scaling of $m$ with $N$ over the approach of Volgushev, Chao and Cheng (2019), the construction is such that $m$ matrices of size $p \times p$ (and other quantities) are communicated at each iteration. Communication of Hessian matrices is generally viewed as too communication-intensive, particularly when $p$ is large. We note that none of these approaches is generalizable to the sparse high-dimensional setting. The penalization required to enforce sparsity in high dimensions exacerbates bias, so that meta-analysis hinges on the ability to de-bias such estimators prior to aggregation. Attempts to construct de-biased estimators for quantile regression have, so far, relied on the unrealistic assumption that the quantile regression error is independent of the covariates. The key representation used by Chen, Liu and Zhang (2019) is violated upon penalization of their smoothed quantile regression estimator, and suffers from singularity when $p > n$ if penalization is not applied. An anonymous reviewer pointed out concurrent work by Jiang and Yu (2021) (the first version of our manuscript dates back to late September 2020), who also proposed communication-efficient algorithms for distributed quantile regression by means of convolution smoothing. The main difference between that work and ours concerns the theoretical aspects. Under similar regularity and moment conditions, we provide explicit non-asymptotic concentration bounds as well as Berry-Esseen-type bounds for the normal approximation. These results complement the conventional $O_P$ statements in Jiang and Yu (2021). To achieve the global convergence rate in low dimensions, Theorem 3.2 in Jiang and Yu (2021) requires $(p, n, N)$ to satisfy $n = N^r$ and $p \asymp N^c$ for some $0 < r \le 1$ and $0 < c < \min(3/8, r)$, while our result (Theorem 4) only requires $n \gtrsim p + \log\log(N/n)$. In high dimensions, Theorem 4.1 in Jiang and Yu (2021) and its proof show that the dimension $p$ cannot exceed the sample size $N$, in the sense that $p \asymp N^c$ for some $c \in (0, 1)$. Our results, detailed in Section 3, show that the penalized distributed QR estimator achieves the global rate under the sample size requirements $n \gtrsim s^2 \log p$ and $N \gtrsim s^3 \log p$, which considerably relax those in Jiang and Yu (2021). 3.
Distributed Penalized Quantile Regression in High Dimensions In this section, we consider quantile regression in high-dimensional sparse models with distributed data. In such models, the total number of predictors p can be very large, while the number of important predictors is signi\ufb01cantly smaller. As before, assume that the data set {(yi, xi)}N i=1 with N = n \u00b7 m is distributed across m sources, so that each source j contributes n i.i.d. observations Dj = {(yi, xi)}i\u2208Ij indexed by Ij. Assume further that the sparsity \u2225\u03b2\u2217\u22250 := Pp j=1 1(\u03b2\u2217 j \u0338= 0) is at most s, which is much smaller than the local sample size, that is, s = o(n). 19 Tan, Battey and Zhou 3.1 Penalized conquer with distributed data To \ufb01t sparse models in high dimensions, the use of \u21131 penalization has become a common practice since the seminal work of Tibshirani (1996). The \u21131-penalized quantile regression (\u21131-QR) estimator is de\ufb01ned as b \u03b2 \u2208argmin \u03b2\u2208Rp 1 N N X i=1 \u03c1\u03c4(yi \u2212xT i \u03b2) + \u03bb \u00b7 \u2225\u03b2\u22251 = argmin \u03b2\u2208Rp b Q(\u03b2) + \u03bb \u00b7 \u2225\u03b2\u22251, (37) where \u03bb > 0 is a regularization parameter. Statistical properties and computational methods for \u21131-QR have been well studied in the past decade; see, for example, Wang, Li and Jiang (2007), Wu and Lange (2008), Li and Zhu (2008), Belloni and Chernozhukov (2011), Wang, Wu and Li (2012), Yi and Huang (2017) and Gu et al. (2018). Recently, Tan, Wang and Zhou (2022) studied the \u21131-penalized conquer (\u21131-conquer) estimator, which is a solution to the following optimization problem min \u03b2\u2208Rp 1 N N X i=1 (\u03c1\u03c4 \u2217Kh)(yi \u2212xT i \u03b2) | {z } b Qh(\u03b2) + \u03bb \u00b7 \u2225\u03b2\u22251, (38) where K(\u00b7) is a non-negative kernel and h > 0 is the bandwidth. Notably, the smoothed loss function b Qh(\u00b7) is (provably) strongly convex in a local neighborhood of \u03b2\u2217with high probability. With a proper initialization, the corresponding optimization problem with \u21131-penalization can be e\ufb03ciently solved via \ufb01rst-order algorithms. In a distributed setting, we extend the iterative algorithm in Section 2 as follows. Let e \u03b2(0) \u2208Rp be an initial regularized estimator. Denote by e Q(\u03b2) = b Q1,b(\u03b2) \u2212\u27e8\u2207b Q1,b( e \u03b2(0)) \u2212 \u2207b Qh( e \u03b2(0)), \u03b2\u27e9the same shifted conquer loss as in (8), where b and h are the local and global bandwidths. Analogously to (9), the communication-e\ufb03cient penalized conquer estimator is de\ufb01ned as e \u03b2(1) \u2208argmin \u03b2\u2208Rp e Q(\u03b2) + \u03bb \u00b7 \u2225\u03b2\u22251, (39) where \u03bb > 0 is a regularization parameter. Optimization problem (39) is convex, which we solve using a local adaptive majorize-minimize algorithm detailed in Section A.2 of the Appendix. Let S \u2286{1, . . . , p} be the support of \u03b2\u2217, and assume that data are generated from a sparse conditional quantile model (1) with |S| \u2264s. De\ufb01ne the \u21131-cone \u039b = \u039b(s, p) = \b \u03b2 \u2208Rp : \u2225\u03b2 \u2212\u03b2\u2217\u22251 \u22644s1/2\u2225\u03b2 \u2212\u03b2\u2217\u2225\u03a3 \t . 
(40) Given r > 0 and \u03bb\u2217> 0, we de\ufb01ne the \u201cgood\u201d events E0(r) = \b e \u03b2(0) \u2208\u0398(r) \u2229\u039b \t and E\u2217(\u03bb\u2217) = \b \u2225\u2207b Qh(\u03b2\u2217) \u2212\u2207Qh(\u03b2\u2217)\u2225\u221e\u2264\u03bb\u2217 \t , (41) which, with slight abuse of notation, extend those given in (10) to the high-dimensional setting. In the following, we \ufb01rst establish upper bounds for the \u21131and \u21132-errors of the one-step penalized estimator e \u03b2(1), provided that the initial estimator e \u03b2(0) falls in a local neighborhood 20 Decentralized Quantile Regression of \u03b2\u2217. In parallel to Condition (C3), we impose the following moment condition on the highdimensional random vector x \u2208Rp of covariates. (C4). The predictor x = (x1, . . . , xp)T \u2208Rp (with x1 \u22611) has bounded components and uniformly bounded kurtosis. That is, there exists B \u22651 such that max1\u2264j\u2264p |xj| \u2264B almost surely, and \u00b54 := supu\u2208Sp\u22121 E(zTu)4 < \u221e, where z = \u03a3\u22121/2x and \u03a3 = (\u03c3jk)1\u2264j,k\u2264p = E(xxT) is positive de\ufb01nite. Write \u03c3u = max1\u2264j\u2264p \u03c31/2 jj and \u03bbl = \u03bbmin(\u03a3) \u2208(0, 1]. For convenience, we assume \u03bbl = 1. For technical reasons, the bounded covariates assumption is also imposed in Wang et al. (2017) and Jordan, Lee and Yang (2019) for sparse linear regression and generalized linear models. Theorem 11 Assume Conditions (C1), (C2), and (C4) hold. For \u03b4 \u2208(0, 1) and r0, \u03bb\u2217> 0, let b \u2265h > 0 and \u03bb = 2.5(\u03bb\u2217+ \u03f1) > 0 satisfy \u03f1 \u224dmax \"( 1 b r log(p/\u03b4) n + 1 h r log(p/\u03b4) N ) s1/2r0, s\u22121/2\u0000br0 + h2\u0001 # and s1/2\u03bb \u2272b \u22721. Conditioned on the event E0(r0)\u2229E\u2217(\u03bb\u2217), the one-step estimator e \u03b2(1) de\ufb01ned in (39) satis\ufb01es e \u03b2(1) \u2208\u039b and \u2225e \u03b2(1) \u2212\u03b2\u2217\u2225\u03a3 \u2272 ( s b r log(p/\u03b4) n + b + s h r log(p/\u03b4) N ) r0 + s1/2\u03bb\u2217+ h2 (42) with probability at least 1 \u2212\u03b4. In Theorem 11, the prespeci\ufb01ed parameter r0 > 0 quanti\ufb01es the accuracy of the initial regularized QR estimator e \u03b2(0) under \u21132-norm. Using a subsample of size n to construct such an estimator, the nearly minimax-optimal rate is p s log(p)/n (Belloni and Chernozhukov, 2011; Wang and He, 2021). With a suitable choice for the regularization weight \u03bb, we ensure that e \u03b2(1) must lie in the restricted set \u039b, and bandwidths b, h > 0, the estimation error of e \u03b2(1) is of the order ( s b r log(p/\u03b4) n + b + s h r log(p/\u03b4) N ) | {z } contraction factor r0 + s1/2\u03bb\u2217+ h2 | {z } near\u2212optimal rate . As we shall see, the second term is related to the near-optimal rate when the entire dataset is used. The \ufb01rst term involves a contraction factor that is of the order s b p log(p/\u03b4)/n+b+ s h p log(p/\u03b4)/N. With su\ufb03ciently many samples per source\u2014namely, n \u2273s2 log(p/\u03b4), the above one-step estimation procedure, which uses one round of communication, improves the statistical accuracy of e \u03b2(0) as long as s p log(p/\u03b4)/n \u2272b \u22721 and s p log(p/\u03b4)/N \u2272h \u22721. Next, we describe an iterative, multi-round procedure for estimating a sparse \u03b2\u2217\u2208Rp in a distributed setting. Let b Qj,b(\u00b7) and b Qj,h(\u00b7), j = 1, . . . , m, be the local empirical loss functions given in (7). 
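Before describing the rounds in detail, here is a minimal sketch of the l1-penalized shifted-conquer update that each round solves (cf. (39) above and (43) below). For simplicity the sketch uses plain proximal gradient descent with soft-thresholding and a fixed step size rather than the local adaptive majorize-minimize algorithm of Section A.2; all names, step sizes, and iteration caps are illustrative.

```r
# One l1-penalized shifted-conquer update, cf. (39)/(43), solved by plain
# proximal gradient with soft-thresholding and a fixed step size (a simple
# stand-in for the local adaptive majorize-minimize algorithm of Section A.2).
# `g_global` is the averaged gradient received from the m machines at beta_prev.
soft_thresh <- function(z, t) sign(z) * pmax(abs(z) - t, 0)

grad_smooth <- function(beta, X, y, tau, h)
  colMeans(as.numeric(pnorm((X %*% beta - y) / h) - tau) * X)

l1_shifted_update <- function(X1, y1, g_global, beta_prev, tau, b, lambda,
                              step = 0.1, max_iter = 500) {
  shift <- grad_smooth(beta_prev, X1, y1, tau, b) - g_global
  beta <- beta_prev
  for (k in seq_len(max_iter)) {
    g <- grad_smooth(beta, X1, y1, tau, b) - shift   # gradient of shifted loss
    beta_new <- soft_thresh(beta - step * g, step * lambda)
    if (max(abs(beta_new - beta)) < 1e-6) { beta <- beta_new; break }
    beta <- beta_new
  }
  beta
}
```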
At iteration 0, the \ufb01rst (master) machine computes an initial estimator e \u03b2(0) as well as \u2207b Q1,b( e \u03b20), and broadcast e \u03b2(0) to all local machines. For j = 1, . . . , m, the 21 Tan, Battey and Zhou jth local machine then computes gradients \u2207b Qj,h( e \u03b2(0)), which are then transmitted back to the \ufb01rst. At iteration t = 1, 2, . . . , T, the \ufb01rst machine solves the \u21131-penalized shifted conquer loss minimization e \u03b2(t) \u2208argmin \u03b2\u2208Rp b Q1,b(\u03b2) \u2212 \u2207b Q1,b( e \u03b2(t\u22121)) \u2212\u2207b Qh( e \u03b2(t\u22121)), \u03b2 \u000b | {z } =: e Q(t)(\u03b2) + \u03bbt \u00b7 \u2225\u03b2\u22251, (43) where \u2207b Qh( e \u03b2(t\u22121)) = (1/m) Pm j=1 \u2207b Qj,h( e \u03b2(t\u22121)), and \u03bbt > 0 are regularization parameters. Theorem 12 Assume Conditions (C1), (C2), and (C4) hold. Given \u03b4 \u2208(0, 1), choose the local and global bandwidths as b \u224ds1/2\b log(p/\u03b4)/n \t1/4 and h \u224d \b s log(p/\u03b4)/N \t1/4. (44) For r0, \u03bb\u2217> 0, write r\u2217= s1/2\u03bb\u2217and set \u03bbt = 2.5(\u03bb\u2217+ \u03f1t) > 0 (t \u22651) with \u03f1t \u224dmax n \u03b3ts\u22121/2r0 + \u03b3s\u22121/2(r\u2217+ h2)1(t \u22652), p log(p/\u03b4)/N o , where \u03b3 = \u03b3(s, p, n, N, \u03b4) \u224ds1/2 max{log(p/\u03b4)/n, s log(p/\u03b4)/N}1/4. Let the sample size per source and total sample size satisfy n \u2273s2 log(p/\u03b4) and N \u2273s3 log(p/\u03b4), so that \u03b3 < 1. Moreover, assume r0 \u2272min{1, (m/s)1/4} and r\u2217\u2272b. Then, conditioned on the event E\u2217(\u03bb\u2217) \u2229E0(r0), the T th iterate e \u03b2(T) with T \u2273log(r0/r\u2217)/ log(1/\u03b3) satis\ufb01es \u2225e \u03b2(T) \u2212\u03b2\u2217\u2225\u03a3 \u2272s1/2\u03bb\u2217+ h2 and \u2225e \u03b2(T) \u2212\u03b2\u2217\u22251 \u2272s\u03bb\u2217+ s1/2h2 (45) with probability at least 1 \u2212T\u03b4. According to Theorem 12, the success of the iterative procedure described above relies on a su\ufb03ciently accurate initial estimator e \u03b2(0). For example, we may choose e \u03b2(0) to be a local \u21131-conquer estimator e \u03b2(0) \u2208argmin \u03b2\u2208Rp b Q1,b0(\u03b2) + \u03bb0 \u00b7 \u2225\u03b2\u22251, (46) or a local \u21131-QR estimator which is a minimizer of the program argmin \u03b2\u2208Rp 1 n X i\u2208I1 \u03c1\u03c4(yi \u2212xT i \u03b2) + \u03bb0 \u00b7 \u2225\u03b2\u22251. (47) High probability estimation error bounds for \u21131-QR were derived by Belloni and Chernozhukov (2011), Wang (2013), and more recently by Wang and He (2021) under weaker assumptions. The estimation error for \u21131-conquer is provided by the following result, which is a variant of Theorem 4.1 in Tan, Wang and Zhou (2022). Proposition 13 Assume Conditions (C1), (C2) and (C4) hold. For \u03b4 \u2208(0, 1), set the regularization parameter \u03bb0 \u224d p \u03c4(1 \u2212\u03c4) log(p/\u03b4)/n. Provided that p s log(p/\u03b4)/n \u2272b0 \u2272 1, the local \u21131-conquer estimator e \u03b2(0) given in (46) satis\ufb01es \u2225e \u03b2(0) \u2212\u03b2\u2217\u2225\u03a3 \u2272s1/2\u03bb0 + b2 0 (48) with probability at least 1 \u2212\u03b4. If in addition b0 \u2272{s log(p/\u03b4)/n}1/4, then e \u03b2(0) \u2208\u039b. 22 Decentralized Quantile Regression With the above preparations, we are now ready to state the estimator error bound for the distributed regularized conquer estimator e \u03b2(T) in high dimensions. 
Theorem 14 Assume Conditions (C1), (C2) and (C4) hold, and that the data are generated from a sparse conditional quantile model (1) with \u2225\u03b2\u2217\u22250 \u2264s. Suppose the sample size per source and total sample size satisfy n \u2273s2 log(p) and N \u2273s3 log(p). Choose the bandwidths b, h > 0 and regularization parameters \u03bbt (t \u22651) as b \u224ds1/2{log(p)/n}1/4, h \u224d{s log(p)/N}1/4 and \u03bbt \u224d r log(p) N + max \u001as2 log(p) n , s3 log(p) N \u001bt/4 r log(p) n . Starting at iteration 0 with an initial estimate e \u03b2(0) as described in Proposition 13, the distributed estimator e \u03b2 = e \u03b2(T) with T \u224d\u2308log(m)\u2309communication rounds satis\ufb01es the error bounds \u2225e \u03b2 \u2212\u03b2\u2217\u22252 \u2272 r s log(p) N and \u2225e \u03b2 \u2212\u03b2\u2217\u22251 \u2272s r log(p) N (49) with probability at least 1 \u2212C log(m)/N. Theorems 11\u201314 are non-trivial extensions of Theorem 3 in Wang et al. (2017) to the context of quantile regression. The latter can be applied to the squared loss for linear regression and logistic loss for classi\ufb01cation. Let \u2113(\u00b7) be the loss function of interest, and it is assumed therein that |\u2113\u2032(u) \u2212\u2113\u2032(v)| \u2264L|u \u2212v| for any u, v \u2208R and sup u\u2208R |\u2113\u2032\u2032\u2032(u)| \u2264M. The key of the proof is to control the di\ufb00erence between the gradient vectors \u2207e Q(t)( e \u03b2(t\u22121)) and \u2207e Q(t)(\u03b2\u2217) at each iteration. For this purpose, the proof of Theorem 3 in Wang et al. (2017) is based on the second-order Taylor\u2019s series expansion, so that the above parameters L and M arise and are treated as constants. In particular, M = 0 for the quadratic loss. In our context, if we take \u2113(\u00b7) to be the local conquer loss (\u03c1\u03c4 \u2217Kb)(\u00b7), then it is easy to see that L \u224db\u22121 and M \u224db\u22122. Since the bandwidth b decays as a function of (n, p), neither the result nor proof argument in Wang et al. (2017) apply to quantile regression even with smoothing. In the Appendix, we provide a self-contained proof of Theorems 11 and 12, which relies on a uniform control of the \ufb02uctuations of gradient processes and a restricted strong convexity property for the empirical conquer loss. 3.2 Distributed quantile regression via ADMM In this subsection, we describe an alternative algorithm based on the alternating direction method of multiplier (ADMM) for penalized quantile regression with distributed data. ADMM, which was \ufb01rst introduced by Douglas and Rachford (1956) and Gabay and Mercier (1976), has a number of successful applications in modern statistical machine learning. We refer to Boyd et al. (2011) for a comprehensive review on ADMM. In the context of quantile 23 Tan, Battey and Zhou regression, Yu, Lin and Wang (2017) and Gu et al. (2018) respectively proposed ADMMbased algorithms for \ufb01tting penalized QR with both convex and folded-concave penalties. As argued in Boyd et al. (2011), ADMM is well suited for distributed convex optimization problems under minimum structural assumption. For solving penalized QR, in the following we revisit the parallel implementation of the ADMM-based algorithm proposed in Yu, Lin and Wang (2017). Recall that the total dataset {(yi, xi)}N i=1 with N = n \u00b7 m is distributed across m sources, each containing a data batch indexed by Ij (j = 1, . . . , m). Write y = (y1, . . . , yN)T = (yT 1 , . . . , yT m)T and X = (x1, . . . 
, xN)T = (XT 1 , . . . , XT m)T \u2208RN\u00d7p, where yj = yIj \u2208Rn and Xj \u2208Rn\u00d7p. Under this set of notation, the \u21131-QR problem (37) can be recast into an equivalent problem minimize rj,\u03b2j,\u03b2 \u001a m X j=1 \u03c1\u03c4(rj) + \u03bbN\u2225\u03b2\u22251 \u001b such that yj \u2212Xj\u03b2j = rj, \u03b2j = \u03b2, j = 1, . . . , m, where \u03bbN = N\u03bb. Here we write \u03c1\u03c4(r) = Pn i=1 \u03c1\u03c4(ri) for r = (r1, . . . , rn)T. To solve this linearly constrained optimization problem, the ADMM updates at iteration k = 0, 1, . . . are \u03b2k+1 = argmin \u03b2 \u001am\u03b3 2 \r \r\u03b2 \u2212\u00af \u03b2k \u2212\u00af \u03b4k/\u03b3 \r \r2 2 + \u03bbN\u2225\u03b2\u22251 \u001b , (50) rk+1 j = argmin rj \u001a \u03c1\u03c4(rj) + \u03b3 2 \r \ryj \u2212Xj\u03b2k j + uk j /\u03b3 \u2212rj \r \r2 2 \u001b , (51) \u03b2k+1 j = (XT j Xj + Ip)\u22121\b XT j \u0000yj \u2212rk+1 j + uk j /\u03b3 \u0001 \u2212\u03b4k j /\u03b3 + \u03b2k+1\t , uk+1 j = uk j + \u03b3 \u0000yj \u2212Xj\u03b2k+1 j \u2212rk+1 j \u0001 , \u03b4k+1 j = \u03b4k j + \u03b3 \u0000\u03b2k+1 j \u2212\u03b2k+1\u0001 , where \u00af \u03b2k = (1/m) Pm j=1 \u03b2k j , \u00af \u03b4k = (1/m) Pm j=1 \u03b4k j , and \u03b3 > 0 is the augmentation parameter. In particular, the \u03b2-update in (50) and the r-update in (51) have explicit expressions, which are \u03b2k+1 = \u0000 \u00af \u03b2k + \u00af \u03b4k/\u03b3 \u2212\u03bbN/(m\u03b3)1p \u0001 + \u2212 \u0000\u2212\u00af \u03b2k \u2212\u00af \u03b4k/\u03b3 \u2212\u03bbN/(m\u03b3)1p \u0001 \u2212 and rk+1 = \u0000yj \u2212Xj\u03b2k j + uk j /\u03b3 \u2212\u03c4\u03b3\u221211n \u0001 + \u2212 \u0000\u2212yj + Xj\u03b2k j \u2212uk j /\u03b3 + (\u03c4 \u22121)\u03b3\u221211n \u0001 +, respectively, where 1q := (1, . . . , 1)T \u2208Rq for each integer q \u22651. The above parallel version of the ADMM to solve (37) involves primal variables \u03b2 \u2208Rp, (rT 1 , . . . , rT m)T \u2208RN and the dual variable (uT 1 , . . . , uT m)T \u2208RN. As a general-purpose algorithm, its convergence can be quite slow when applied to large-scale datasets. For example, under a numerical setting with p = 100, N = 30, 000 and m \u2208{1, 10, 100} considered in Yu, Lin and Wang (2017), it takes more than 100 iterations for the parallel implementation of the ADMM to converge. In a distributed framework, this amounts to (at least) 100 communication rounds in order to achieve the desired level of statistical accuracy. At even larger data scales, our numerical results (see Figure 4 below) show evidence that the proposed multi-round, distributed estimator can perform as well as the global estimator within T = 10 communication rounds. 24 Decentralized Quantile Regression 4. Numerical Studies 4.1 Distributed quantile regression Starting with the low-dimensional setting, we compare the proposed multi-round procedure with the following methods: (i) global QR estimator using all of the available N = mn observations; (ii) the averaging-based estimator based on local QR estimators; (iii) the proposed method with T \u2208{1, 4, 10} communication rounds; and (iv) a non-smooth version of the proposed method, which uses the subgradient of the QR loss as the global gradient, with T \u2208{1, 4, 10} communication rounds. We employ the R packages conquer and quantreg to compute the conquer and standard QR estimators, respectively. As shown in He et al. 
(2021), the performance of conquer is insensitive to the choice of kernel functions, and thus we use the Gaussian kernel wherever smoothing is required. Our proposed method involves an initial estimator e \u03b2(0) and two smoothing parameters h and b. There are multiple ways to obtain an adequate initialization. For instance, as suggested by Jordan, Lee and Yang (2019), one can use the simple averaging estimator as the initialization, i.e., the average of local QR estimators across m sources. For simplicity, we take e \u03b2(0) to be a conquer estimator computed based on n independent data points from one source. For the bandwidths, we set h = 2.5\u00b7{(p+log N)/N}1/3 and b = 2.5\u00b7{(p+log n)/n}1/3 according to the theoretical analysis in Section 2.2. To generate the data, we consider two types of heteroscedastic models: 1. Linear heteroscedasticity: yi = xT i \u03b2\u2217+ (0.2xip + 1){\u03b5i \u2212F \u22121 \u03b5i (\u03c4)}; 2. Quadratic heteroscedasticity: yi = xT i \u03b2\u2217+ 0.5{1 + (0.25xip \u22121)2}{\u03b5i \u2212F \u22121 \u03b5i (\u03c4)}, where xi is generated from a multivariate uniform distribution on the cube 31/2 \u00b7 [\u22121, 1]p+1 with covariance matrix \u03a3 = (0.5|j\u2212k|)1\u2264,j,k\u2264p+1, and \u03b2\u2217= 1p is a p-vector of ones. The random noise is generated from a t-distribution with 2 degrees of freedom, denoted by t2. To evaluate the performance across di\ufb00erent methods, we report the estimation error under the \u21132-norm, i.e., \u2225b \u03b2 \u2212\u03b2\u2217\u22252. Table 1 presents the results when n = 300, p = 10, m \u2208{50, 100, 200, 400, 600, 1000}, and \u03c4 = 0.8, averaged over 100 trials. With the same p and \u03c4, we report the results with a \ufb01xed total sample size N = 150, 000, m = N/n, and varying local sample size n \u2208{300, 500, 1000, 1500, 3000, 6000} in Table 2. The global QR estimator, which always has the smallest error as expected, serves as a benchmark for communication-e\ufb03cient methods. From Table 1, we see that the proposed multi-round distributed estimator yields the best performance among the communicatione\ufb03cient estimators, and as the number of communication rounds grows, it becomes almost as good as the global QR estimator even though the one-step estimator (T = 1) performs rather poorly. The performance of the averaging-based QR is comparable to that of the proposed method when the number of machines m is smaller than the local sample size n. As suggested by the theoretical analyses, when m is larger than n, the proposed method outperforms the averaging-based QR. To highlight the importance of smoothing for distributed quantile regression, we also implement the multi-round procedure using the subgradient of the QR loss, namely, \u2207b Q(\u03b2) = (nm)\u22121 Pm j=1 P i\u2208Ij{1(yi < xT i \u03b2)\u2212\u03c4}xi, instead of \u2207b Qh(\u00b7). Note that the estimation error of this subgradient-based method is barely improvable as the number of machines increases. When the number of total samples N is \ufb01xed, from Table 2, 25 Tan, Battey and Zhou we \ufb01nd that the subgradient-based method only performs well if the local sample size is extremely large, which makes all the methods desirable. This demonstrates the importance of smoothing in the context of distributed learning with non-smooth loss functions. Next, we perform a sensitivity analysis to assess the e\ufb00ect of the initial estimator on the \ufb01nal solution of the proposed method with T = 10. 
Next, we perform a sensitivity analysis to assess the effect of the initial estimator on the final solution of the proposed method with $T = 10$. To this end, we conduct additional numerical studies in which we consider different initial estimators, computed using different sample sizes $n_{\mathrm{init}} \in \{150, 300, 500, 1000, 5000\}$. Specifically, we consider the aforementioned linear and quadratic heteroscedastic models with $n = 300$, $p = 10$, $m = 400$, and $\tau = 0.8$. The average estimation errors for the proposed method with $T \in \{1, 4, 10\}$ and for the global QR estimator are summarized in Table 3. From Table 3, we see that the estimation error for the proposed method with $T = 1$ decreases as we increase the sample size used to calculate the initial estimator. Moreover, we see that implementing the proposed method with $T \in \{4, 10\}$ improves the estimation error significantly, and that the estimation error is no longer sensitive to the initial sample size. The proposed method with $T = 10$ yields an estimator that performs as well as the global QR (implemented using the data from all sources) even when $n_{\mathrm{init}} = 150$. The results suggest that, after some rounds of communication, the proposed method is not sensitive to the sample size used to calculate the initial estimator.

Table 1: Estimation error under linear and quadratic heteroscedastic models with $t_2$ noise, averaged over 100 trials. Results for $\tau = 0.8$, $n = 300$, and $p = 10$, across $m \in \{50, 100, 200, 400, 600, 1000\}$, are reported.

Linear heteroscedastic model with $\tau = 0.8$, $n = 300$, and $p = 10$

  Methods                              m=50    m=100   m=200   m=400   m=600   m=1000
  averaging-based QR                   0.077   0.060   0.047   0.041   0.037   0.035
  distributed QR (T = 1)               0.197   0.216   0.223   0.213   0.192   0.198
  distributed QR (T = 4)               0.173   0.223   0.256   0.202   0.187   0.175
  distributed QR (T = 10)              0.259   0.341   0.427   0.313   0.271   0.305
  distributed smoothed QR (T = 1)      0.159   0.163   0.163   0.151   0.138   0.143
  distributed smoothed QR (T = 4)      0.076   0.066   0.051   0.032   0.027   0.029
  distributed smoothed QR (T = 10)     0.075   0.071   0.039   0.027   0.021   0.020
  global QR                            0.069   0.050   0.035   0.025   0.019   0.016

Quadratic heteroscedastic model with $\tau = 0.8$, $n = 300$, and $p = 10$

  averaging-based QR                   0.079   0.063   0.050   0.043   0.038   0.036
  distributed QR (T = 1)               0.206   0.210   0.233   0.204   0.198   0.205
  distributed QR (T = 4)               0.198   0.232   0.281   0.211   0.235   0.184
  distributed QR (T = 10)              0.263   0.401   0.423   0.330   0.396   0.332
  distributed smoothed QR (T = 1)      0.162   0.160   0.163   0.151   0.148   0.148
  distributed smoothed QR (T = 4)      0.079   0.062   0.051   0.033   0.034   0.033
  distributed smoothed QR (T = 10)     0.077   0.057   0.041   0.027   0.024   0.021
  global QR                            0.071   0.050   0.036   0.025   0.020   0.017

4.2 Distributed confidence construction

In terms of uncertainty quantification, we assess the performance of the proposed method for constructing confidence intervals by calculating the coverage probability and the width of the confidence interval for each regression coefficient. For point estimation, we implement Algorithm 1 with $T = 10$ and employ the averaging-based QR estimator as the initialization. The bandwidths are set to $h = 1.5 \cdot \{(p + \log N)/N\}^{1/3}$ and $b = 1.5 \cdot \{(p + \log n)/n\}^{1/3}$.
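The averaging-based QR estimator used for initialization here can be sketched as follows, assuming the quantreg package and that the data are stored as lists of local blocks; the wrapper name is illustrative.

```r
# Sketch of the averaging-based (divide-and-conquer) QR estimator: fit a
# standard QR on each source and average the local coefficient vectors.
library(quantreg)

averaging_qr <- function(X_list, y_list, tau) {
  local_fits <- mapply(function(X, y) coef(rq(y ~ X - 1, tau = tau)),
                       X_list, y_list, SIMPLIFY = TRUE)   # p x m matrix
  rowMeans(local_fits)
}
```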
Table 2: Estimation error under linear and quadratic heteroscedastic models with $t_2$ noise, averaged over 100 trials. Results for $\tau = 0.8$, $N = nm = 150{,}000$, and $p = 10$, across $n \in \{300, 500, 1000, 1500, 3000, 6000\}$, are reported.

Linear heteroscedastic model with $\tau = 0.8$, $N = 150{,}000$, $m = N/n$, and $p = 10$

  Methods                              n=300   n=500   n=1000  n=1500  n=3000  n=6000
  averaging-based QR                   0.037   0.029   0.025   0.023   0.022   0.022
  distributed QR (T = 1)               0.206   0.140   0.071   0.053   0.034   0.026
  distributed QR (T = 4)               0.252   0.100   0.026   0.023   0.022   0.022
  distributed QR (T = 10)              0.389   0.124   0.024   0.023   0.022   0.022
  distributed smoothed QR (T = 1)      0.149   0.092   0.049   0.038   0.028   0.024
  distributed smoothed QR (T = 4)      0.042   0.024   0.023   0.023   0.023   0.023
  distributed smoothed QR (T = 10)     0.029   0.023   0.023   0.023   0.023   0.023
  global QR                            0.023   0.022   0.022   0.022   0.022   0.022

Quadratic heteroscedastic model with $\tau = 0.8$, $N = 150{,}000$, $m = N/n$, and $p = 10$

  averaging-based QR                   0.039   0.030   0.026   0.024   0.023   0.022
  distributed QR (T = 1)               0.215   0.143   0.075   0.053   0.034   0.027
  distributed QR (T = 4)               0.236   0.101   0.028   0.023   0.022   0.022
  distributed QR (T = 10)              0.371   0.123   0.026   0.023   0.022   0.022
  distributed smoothed QR (T = 1)      0.151   0.095   0.051   0.040   0.029   0.025
  distributed smoothed QR (T = 4)      0.037   0.025   0.024   0.024   0.023   0.023
  distributed smoothed QR (T = 10)     0.025   0.024   0.024   0.024   0.023   0.023
  global QR                            0.023   0.023   0.023   0.023   0.022   0.022

Table 3: Estimation error under linear and quadratic heteroscedastic models with $t_2$ noise, averaged over 100 trials. Results for $\tau = 0.8$, $n = 300$, $p = 10$, $m = 400$, with the initial estimator computed using different sample sizes $n_{\mathrm{init}} \in \{150, 300, 500, 1000, 5000\}$, are reported.

Linear heteroscedastic model with $\tau = 0.8$, $n = 300$, $p = 10$, and $m = 400$

  Methods                              n_init=150  n_init=300  n_init=500  n_init=1000  n_init=5000
  distributed smoothed QR (T = 1)      0.211       0.181       0.151       0.108        0.085
  distributed smoothed QR (T = 4)      0.041       0.042       0.032       0.033        0.031
  distributed smoothed QR (T = 10)     0.027       0.038       0.027       0.026        0.027
  global QR                            0.025       0.025       0.025       0.024        0.025

Quadratic heteroscedastic model with $\tau = 0.8$, $n = 300$, $p = 10$, and $m = 400$

  distributed smoothed QR (T = 1)      0.214       0.151       0.112       0.087        0.047
  distributed smoothed QR (T = 4)      0.039       0.033       0.032       0.032        0.028
  distributed smoothed QR (T = 10)     0.027       0.027       0.029       0.027        0.027
  global QR                            0.025       0.025       0.025       0.025        0.025

For confidence interval construction, we first consider four methods: the asymptotic normal-based interval (28) for the proposed communication-efficient estimator (CE-Normal), the normal-based interval (28) with $\widetilde\beta$ replaced by $\widehat\beta_{\mathrm{dc}}$ of equation (20) (DC-Normal), and the two communication-efficient bootstrap constructions given in (34) (CE-Boot (a)) and in (35) (CE-Boot (b)). Recall that the normal-based method requires estimating the asymptotic variances $\sigma_j^2$, and the bootstrap methods depend on $H^{-1}$. We consider two types of variance estimators. The first is easier to implement but relies on the assumption that $H = f_\varepsilon(0) \cdot \Sigma$, which holds when $\varepsilon$ is independent of $x$. In this case, $\sigma_j^2 = \tau(1-\tau)\{f_\varepsilon(0)\}^{-2}(\Sigma^{-1})_{jj}$. We compute the global density estimator $\widehat f_\varepsilon(0)$ as in (27) with a rule-of-thumb bandwidth
\[
b^{\mathrm{rot}}_N = N^{-1/3} \cdot \Phi^{-1}(1-\alpha/2)^{2/3} \Big[ \frac{1.5 \cdot \phi\{\Phi^{-1}(\tau)\}^2}{2\Phi^{-1}(\tau)^2 + 1} \Big]^{1/3},
\]
and a local covariance matrix estimator $\widehat\Sigma_1$. The second estimator is more general and takes the form
\[
\widehat\sigma_j = \{\tau(1-\tau)\}^{1/2} \big( \widehat H_{1,b}^{-1} \widehat\Sigma_1 \widehat H_{1,b}^{-1} \big)_{jj}^{1/2},
\]
where $\widehat H_{1,b}$ is given in (23).
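For the second (sandwich-form) variance estimator, the corresponding normal-based interval can be sketched as follows; the argument names mirror $\widehat H_{1,b}$ and $\widehat\Sigma_1$ above, the $\sqrt{N}$ scaling reflects the asymptotic normality of the final estimator, and the function name and interface are illustrative rather than part of any package.

```r
# Sketch of the CE-Normal interval with the sandwich-form variance estimator:
# sigma_j = sqrt(tau * (1 - tau)) * sqrt((H1b^{-1} Sigma1 H1b^{-1})_{jj}).
ce_normal_ci <- function(beta_hat, H1b, Sigma1, N, tau, alpha = 0.05) {
  H_inv <- solve(H1b)
  se <- sqrt(tau * (1 - tau) * diag(H_inv %*% Sigma1 %*% H_inv) / N)
  z <- qnorm(1 - alpha / 2)
  cbind(lower = beta_hat - z * se, upper = beta_hat + z * se)
}
```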
We generate the design matrix in the same way as in Section 4.1, and focus on the following linear heteroscedastic models with different levels of heterogeneity:
\begin{align}
y_i &= x_i^T\beta^* + (0.2\, x_{ip} + 1)\{\varepsilon_i - F_{\varepsilon_i}^{-1}(\tau)\}; \tag{52} \\
y_i &= x_i^T\beta^* + (0.4\, x_{ip} + 1)\{\varepsilon_i - F_{\varepsilon_i}^{-1}(\tau)\}. \tag{53}
\end{align}
We set $p = 50$, $n = 2000$, $\tau = 0.4$ and let $m$ vary from 20 to 400. The results for 95% confidence intervals are reported in Figures 2 and 3. For all of our numerical results, we found that the coverage probabilities and widths of the 95% confidence intervals for the first $p - 1$ regression coefficients (independent of the random noise) are similar across all methods. Specifically, the proposed CE-Normal and CE-Boot (a) and (b) methods perform very well across various model settings. Since the results across all methods are similar, they are omitted due to limited space. We focus on reporting the empirical coverage probabilities and widths of the 95% confidence intervals for the last regression coefficient in Figures 2 and 3. We use the first type of variance estimator in the top panels and the second type in the bottom panels. From panels (b) and (d) in Figures 2 and 3, we see that the normal-based method for the simple averaging estimator suffers from severe undercoverage when the heterogeneous covariate effect is strong, which, in our case, comes from the last covariate. Score-based confidence sets, while computationally more intensive due to inversion of the test, are extremely efficient due to the linearity of the self-normalized representation exploited in our construction. We illustrate the improvements in a smaller-scale simulation study in Section E of the Appendix.

4.3 Distributed penalized quantile regression

The following numerical study illustrates the performance of the procedure proposed in Section 3 when the dimension $p$ is larger than $n$ for each of the $m$ sources. For comparison purposes, we also consider the $\ell_1$-penalized conquer ($\ell_1$-conquer) fitted to all $N = nm$ observations, which is practically infeasible for the problems that motivated our work, and the simple averaging estimator, namely the average of the $m$ local $\ell_1$-conquer estimates.
[Figure 2 (panels (a)-(d): confidence-interval width and empirical coverage versus the number of machines; plot omitted).]
Figure 2: Properties of confidence intervals for the regression coefficients under model (52) with $t_{1.5}$ noise when $\tau = 0.8$ and $(n, p) = (2000, 50)$, using type I variance estimators (first row) and type II variance estimators (second row). Panels (a) and (c) depict the widths of the confidence intervals for the last regression coefficient over Monte Carlo replicates for the CE-Normal, DC-Normal, CE-Boot (a), and CE-Boot (b) methods; empirical coverage probabilities for the last coefficient are shown in panels (b) and (d).

[Figure 3 (same layout as Figure 2; plot omitted).]
Figure 3: Confidence intervals for the regression coefficients under model (53). Other details are the same as in Figure 2.

The performance of the proposed procedure is shown for $T = 1$ and for $T$ chosen adaptively using the stopping criterion in Algorithm 1. As previously discussed, we use the Gaussian kernel for smoothing and the simple averaging estimator as the initialization. We consider the following heteroscedastic models with $s = 5$ significant variables:

1. Linear heteroscedasticity: $y_i = 3 + \sum_{j=1}^5 x_{ij} + (0.2\, x_{i1} + 1)\{\varepsilon_i - F_{\varepsilon_i}^{-1}(\tau)\}$;

2. Quadratic heteroscedasticity: $y_i = 3 + \sum_{j=1}^5 x_{ij} + 0.5\{1 + (0.25\, x_{ip} - 1)^2\}\{\varepsilon_i - F_{\varepsilon_i}^{-1}(\tau)\}$,

where $x_i$ and $\varepsilon_i$ are generated in the same way as in Section 4.1. Moreover, we set $\tau = 0.8$, $p = 500$, $n = 400$ and $m \in \{20, 40, 60, 80, 100, 120\}$. Guided by the theoretical results in Section 3, we set the bandwidths $(b, h)$ as $b = 0.75\, s^{1/2}\{\log(p)/n\}^{1/4}$ and $h = 0.75\{s\log(p)/N\}^{1/4}$. The regularization parameter $\lambda > 0$ is selected using a validation set of size $n = 200$ for easier illustration and comparison.
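The validation-based choice of $\lambda$ can be sketched as follows. Here fit_l1_shifted_conquer is a hypothetical placeholder for the $\ell_1$-penalized procedure of Section 3 (it is not an existing function), and scoring candidate values by the quantile check loss on the held-out set is an assumption made for illustration.

```r
# Hedged sketch: choose lambda by the quantile check loss on a validation set.
# `fit_l1_shifted_conquer` is a hypothetical stand-in for the penalized
# distributed procedure of Section 3; only the tuning logic is illustrated.
check_loss <- function(r, tau) mean(r * (tau - (r < 0)))

select_lambda <- function(X_tr, y_tr, X_val, y_val, tau, lambdas, ...) {
  val_err <- sapply(lambdas, function(lam) {
    beta_hat <- fit_l1_shifted_conquer(X_tr, y_tr, tau = tau, lambda = lam, ...)
    check_loss(drop(y_val - X_val %*% beta_hat), tau)
  })
  lambdas[which.min(val_err)]
}
```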
Note that Wang et al. (2017) use 60% of the data for training, 20% as a held-out validation set for tuning the parameters, and the remaining 20% for testing. Figure 4 provides plots of the statistical error versus the number of machines, averaged over 100 Monte Carlo replications, for the proposed distributed estimator and the simple averaging estimator. The latter performs poorly under both heteroscedastic models. This is not surprising, because $\ell_1$-penalization induces visible finite-sample bias into the estimates, which is unaffected by aggregation no matter how many machines are available. Using this estimate as an initial value, the multi-round procedure considerably reduces the estimation error after one round of communication ($T = 1$), and eventually performs almost as well as the global $\ell_1$-conquer when $T$ is automatically determined by the stopping criterion. Since $\lambda$ is tuned in the same way for all three methods, the global $\ell_1$-conquer estimator does not necessarily have the best performance, but it still provides a yardstick for the distributed estimators.

Acknowledgments

We sincerely thank the Action Editor and two anonymous reviewers for their constructive comments that helped improve the previous version of the manuscript. K. M. Tan was supported by NSF Grants DMS-1949730 and DMS-2113356. H. Battey was supported by the EPSRC Fellowship EP/T01864X/1. W.-X. Zhou acknowledges the support of NSF Grant DMS-2113409.

[Figure 4 (two panels of estimation error versus the number of machines; plot omitted).]
Figure 4: Estimation error as a function of $m$ under linear (panel (a)) and quadratic (panel (b)) heteroscedastic models with $t_{1.5}$ noise. Each point corresponds to the average of 100 Monte Carlo replications for $(n, p) = (400, 500)$. Three methods are implemented: (i) the multi-round method with $T = 1$ and with $T = 10$ as in Algorithm 1; (ii) the global $\ell_1$-conquer estimator; (iii) the simple averaging estimator.

Appendix A. Optimization Algorithms

A.1 First-order algorithm for solving (9)

Given an initial estimator $\widetilde\beta^{(0)}$ of $\beta^*$, and the global and local bandwidths $b, h > 0$, recall from (8) that the shifted conquer loss function takes the form $\widetilde Q(\beta) = \widehat Q_{1,b}(\beta) - \langle \nabla\widehat Q_{1,b}(\widetilde\beta^{(0)}) - \nabla\widehat Q_h(\widetilde\beta^{(0)}), \beta \rangle$. The communication-efficient procedure involves repeatedly minimizing $\beta \mapsto \widetilde Q(\beta)$. Since $\widetilde Q(\beta)$ is smooth and convex, we employ the gradient descent (GD) method, which at the $k$th iteration computes
\[
\widehat\beta^{k+1} = \widehat\beta^k - \eta_k \cdot \nabla\widetilde Q(\widehat\beta^k), \tag{54}
\]
where $\eta_k > 0$ is the stepsize. The choice of stepsize is an important aspect of GD for achieving fast convergence, and has been thoroughly studied in the optimization literature. Alternatively, one can set the stepsize to be the inverse Hessian, namely $\{\nabla^2\widetilde Q(\widehat\beta^k)\}^{-1}$, which leads to the Newton-Raphson method. The Newton step is computationally expensive at each iteration when $p$ is large. Moreover, when the quantile level $\tau$ is close to 0 or 1, $\nabla^2\widetilde Q(\widehat\beta^k)$ may have a large condition number, thus causing instability in computing its inverse.
Motivated by the gradient-based method proposed by He et al. (2021) for solving (5), we consider the use of the Barzilai-Borwein stepsize in (54) (Barzilai and Borwein, 1988). The main idea of the Barzilai-Borwein method is to seek a simple approximation of the inverse Hessian without having to compute it explicitly. In particular, for $k = 1, 2, \ldots$, the Barzilai-Borwein stepsizes are defined as
\[
\eta_{1,k} = \frac{\langle \widehat\beta^k - \widehat\beta^{k-1}, \widehat\beta^k - \widehat\beta^{k-1} \rangle}{\langle \widehat\beta^k - \widehat\beta^{k-1}, \nabla\widetilde Q(\widehat\beta^k) - \nabla\widetilde Q(\widehat\beta^{k-1}) \rangle}, \qquad
\eta_{2,k} = \frac{\langle \widehat\beta^k - \widehat\beta^{k-1}, \nabla\widetilde Q(\widehat\beta^k) - \nabla\widetilde Q(\widehat\beta^{k-1}) \rangle}{\langle \nabla\widetilde Q(\widehat\beta^k) - \nabla\widetilde Q(\widehat\beta^{k-1}), \nabla\widetilde Q(\widehat\beta^k) - \nabla\widetilde Q(\widehat\beta^{k-1}) \rangle}. \tag{55}
\]
When the quantile level $\tau$ approaches 0 or 1, the objective function is flat in some directions and hence the Hessian matrix becomes more ill-conditioned. To stabilize the algorithm, we set the stepsize to $\eta_k = \min(\eta_{1,k}, \eta_{2,k}, C)$ ($k = 1, 2, \ldots$) for some constant $C > 0$, say $C = 20$. The pseudo-code for the above Barzilai-Borwein GD method for solving (8) is given in Algorithm 2.

Algorithm 2: Gradient descent with Barzilai-Borwein stepsize (GD-BB) for solving (8).
Input: local data vectors $\{(y_i, x_i)\}_{i \in I_1}$, $\tau \in (0, 1)$, bandwidth $b \in (0, 1)$, initialization $\widehat\beta^0 = \widetilde\beta^{(0)}$, gradient vectors $\nabla\widehat Q_{1,b}(\widetilde\beta^{(0)})$ and $\nabla\widehat Q_h(\widetilde\beta^{(0)})$, and convergence tolerance $\delta$.
1: Compute $\widehat\beta^1 \leftarrow \widehat\beta^0 - \nabla\widetilde Q(\widehat\beta^0)$.
2: for $k = 1, 2, \ldots$ do
3:   Compute the stepsizes $\eta_{1,k}$ and $\eta_{2,k}$ defined in (55).
4:   Set $\eta_k \leftarrow \min\{\eta_{1,k}, \eta_{2,k}, 20\}$ if $\eta_{1,k}, \eta_{2,k} > 0$, and $\eta_k \leftarrow 1$ otherwise.
5:   Update $\widehat\beta^{k+1} \leftarrow \widehat\beta^k - \eta_k \nabla\widetilde Q(\widehat\beta^k)$.
6: end for when $\|\nabla\widetilde Q(\widehat\beta^k)\|_2 \le \delta$.
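A minimal base R sketch of Algorithm 2 is given below. It assumes the Gaussian kernel, so the integrated kernel in the conquer gradient is the standard normal distribution function, and it takes the communicated gradient difference $\nabla\widehat Q_{1,b}(\widetilde\beta^{(0)}) - \nabla\widehat Q_h(\widetilde\beta^{(0)})$ as an input vector; the function names are illustrative.

```r
# Sketch of GD with Barzilai-Borwein stepsizes for the shifted conquer loss.
conquer_grad <- function(X, y, beta, tau, h) {
  # gradient of the Gaussian-kernel smoothed QR loss at beta
  colMeans(X * (pnorm(drop(X %*% beta - y) / h) - tau))
}

gd_bb <- function(X1, y1, beta0, grad_shift, tau, b, tol = 1e-6, max_iter = 500) {
  # grad_shift = local gradient at beta0 (bandwidth b) minus the global
  # gradient at beta0 (bandwidth h), received from the other machines
  shifted_grad <- function(beta) conquer_grad(X1, y1, beta, tau, b) - grad_shift
  beta_old <- beta0
  g_old <- shifted_grad(beta_old)
  beta <- beta_old - g_old                         # step 1 of Algorithm 2
  for (k in seq_len(max_iter)) {
    g <- shifted_grad(beta)
    if (sqrt(sum(g^2)) <= tol) break
    s <- beta - beta_old; d <- g - g_old
    eta1 <- sum(s * s) / sum(s * d)
    eta2 <- sum(s * d) / sum(d * d)
    eta <- if (is.finite(eta1) && is.finite(eta2) && eta1 > 0 && eta2 > 0)
      min(eta1, eta2, 20) else 1                   # capped BB stepsize
    beta_old <- beta; g_old <- g
    beta <- beta - eta * g
  }
  beta
}
```

Each communication round then amounts to recomputing the global gradient at the current iterate across all machines and re-running this local solver with the updated shift.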
A.2 Local adaptive majorize-minimize algorithm for solving (39)

In this section, we provide an algorithm to solve the $\ell_1$-penalized shifted conquer loss minimization. In particular, given an initial estimator $\widetilde\beta^{(0)}$, the $\ell_1$-penalized shifted conquer loss takes the form $\widetilde Q(\beta) + \lambda\|\beta_-\|_1$, where
\[
\widetilde Q(\beta) = \widehat Q_{1,b}(\beta) - \langle \nabla\widehat Q_{1,b}(\widetilde\beta^{(0)}) - \nabla\widehat Q_h(\widetilde\beta^{(0)}), \beta \rangle. \tag{56}
\]
Here $\beta_- \in \mathbb{R}^{p-1}$ denotes the subvector of $\beta \in \mathbb{R}^p$ with its first coordinate removed. Due to the non-differentiability of the $\ell_1$-norm, the GD-BB method in Algorithm 2 is no longer applicable. By extending the majorize-minimize (MM) algorithm (Hunter and Lange, 2000) for standard quantile regression, we employ a local adaptive majorize-minimize (LAMM) principle (Fan et al., 2018) to minimize the penalized conquer loss $\beta \mapsto \widetilde Q(\beta) + \lambda\|\beta_-\|_1$. At the $k$th iteration with a previous estimate $\widehat\beta^{k-1}$, the main idea of the LAMM algorithm is to construct an isotropic quadratic function that locally majorizes the shifted conquer loss function $\widetilde Q(\cdot)$. Specifically, for some quadratic parameter $\phi_k > 0$, we define the quadratic function
\[
F(\beta; \phi_k, \widehat\beta^{k-1}) = \widetilde Q(\widehat\beta^{k-1}) + \langle \nabla\widetilde Q(\widehat\beta^{k-1}), \beta - \widehat\beta^{k-1} \rangle + \frac{\phi_k}{2}\|\beta - \widehat\beta^{k-1}\|_2^2,
\]
and then compute the update $\widehat\beta^k$ by solving
\[
\underset{\beta \in \mathbb{R}^p}{\text{minimize}} \ \big\{ F(\beta; \phi_k, \widehat\beta^{k-1}) + \lambda\|\beta_-\|_1 \big\}. \tag{57}
\]
The isotropic form of $F(\beta; \phi_k, \widehat\beta^{k-1})$, as a function of $\beta$, permits a simple analytic solution $\widehat\beta^k = (\widehat\beta^k_1, \ldots, \widehat\beta^k_p)^T$ that takes the form
\[
\widehat\beta^k_1 = \widehat\beta^{k-1}_1 - \phi_k^{-1}\nabla_{\beta_1}\widetilde Q(\widehat\beta^{k-1}), \qquad
\widehat\beta^k_j = S\big(\widehat\beta^{k-1}_j - \phi_k^{-1}\nabla_{\beta_j}\widetilde Q(\widehat\beta^{k-1}), \, \phi_k^{-1}\lambda\big) \ \ \text{for } j = 2, \ldots, p,
\]
where $S(a, b) = \mathrm{sign}(a)\cdot\max(|a| - b, 0)$ is the soft-thresholding operator. To enforce overall descent of the function value, we need $\phi_k > 0$ to be sufficiently large so that $F(\widehat\beta^k; \phi_k, \widehat\beta^{k-1}) \ge \widetilde Q(\widehat\beta^k)$, and hence
\[
\widetilde Q(\widehat\beta^k) + \lambda\|\widehat\beta^k_-\|_1 \le F(\widehat\beta^k; \phi_k, \widehat\beta^{k-1}) + \lambda\|\widehat\beta^k_-\|_1 \le F(\widehat\beta^{k-1}; \phi_k, \widehat\beta^{k-1}) + \lambda\|\widehat\beta^{k-1}_-\|_1 = \widetilde Q(\widehat\beta^{k-1}) + \lambda\|\widehat\beta^{k-1}_-\|_1.
\]
To choose a proper $\phi_k$ in practice, we start from a relatively small value $\phi_{k,0} = 0.001$, and successively inflate it by a factor $\gamma = 1.1$, that is, $\phi_{k,\ell} = \gamma\phi_{k,\ell-1}$ for $\ell = 1, 2, \ldots$, until the majorization requirement is met. See Algorithm 3 for the pseudo-code of the LAMM algorithm for solving (39).

Algorithm 3: Local adaptive majorize-minimize (LAMM) algorithm for solving (39).
Input: local data vectors $\{(y_i, x_i)\}_{i \in I_1}$, $\tau \in (0, 1)$, bandwidth $b \in (0, 1)$, initialization $\widehat\beta^0 = \widetilde\beta^{(0)}$, gradient vectors $\nabla\widehat Q_{1,b}(\widetilde\beta^{(0)})$ and $\nabla\widehat Q_h(\widetilde\beta^{(0)})$, regularization parameter $\lambda > 0$, isotropic parameter $\phi_0$, and convergence tolerance $\delta$.
1: for $k = 1, 2, \ldots$ do
2:   Set $\phi_k \leftarrow \max\{\phi_0, \phi_{k-1}/1.1\}$.
3:   repeat
4:     Update $\widehat\beta^k_1 \leftarrow \widehat\beta^{k-1}_1 - \phi_k^{-1}\nabla_{\beta_1}\widetilde Q(\widehat\beta^{k-1})$.
5:     Update $\widehat\beta^k_j \leftarrow S(\widehat\beta^{k-1}_j - \phi_k^{-1}\nabla_{\beta_j}\widetilde Q(\widehat\beta^{k-1}), \phi_k^{-1}\lambda)$ for $j = 2, \ldots, p$.
6:     If $F(\widehat\beta^k; \phi_k, \widehat\beta^{k-1}) < \widetilde Q(\widehat\beta^k)$, set $\phi_k \leftarrow 1.1\phi_k$.
7:   until $F(\widehat\beta^k; \phi_k, \widehat\beta^{k-1}) \ge \widetilde Q(\widehat\beta^k)$.
8: end for when $\|\widehat\beta^k - \widehat\beta^{k-1}\|_2 \le \delta$.

Appendix B. Proof of Main Results

B.1 Supporting lemmas

Recall that the data vector $(y, x) \in \mathbb{R} \times \mathbb{R}^p$ satisfies the conditional quantile model $Q_\tau(y|x) = F_{y|x}^{-1}(\tau) = x^T\beta^*$. Equivalently, $y = x^T\beta^* + \varepsilon$ with $Q_\tau(\varepsilon|x) = 0$. Under Condition (C2), $z = \Sigma^{-1/2}x \in \mathbb{R}^p$ denotes the standardized vector of covariates such that $\mathbb{E}(zz^T) = I_p$. For every $\delta \in (0, 1]$, define
\[
\gamma_\delta = \inf\Big\{ \gamma > 0 : \sup_{u \in \mathbb{S}^{p-1}} \mathbb{E}\big\{ (z^Tu)^2 1(|z^Tu| > \gamma) \big\} \le \delta \Big\}.
\]
(58) By the sub-Gaussian assumption on x, \u03b3\u03b4 depends only on \u03b4 and \u03c51, and the map \u03b4 7\u2192\u03b3\u03b4 is non-increasing with \u03b3\u03b4 \u21930 as \u03b4 \u21921. Under a weaker condition that z has uniformly bounded fourth moments, namely, \u00b54 = supu\u2208Sp\u22121 E(zTu)4 < \u221e, we have \u03b3\u03b4 \u2264(\u00b54/\u03b4)1/2. For the conditional density f\u03b5|x(\u00b7), Condition (C1) ensures that as long as b is su\ufb03ciently small, fb \u2264min |u|\u2264b/2 f\u03b5|x(u) \u2264max |u|\u2264b/2 f\u03b5|x(u) \u2264\u00af fb almost surely (over x) for some constants \u00af fb \u2265fb > 0. For example, we may take fb = f \u2212l0b/2 and \u00af fb = \u00af f +l0b/2. Given a kernel K(\u00b7) and bandwidth b > 0, recall that b Q1,b(\u03b2) = (1/n) P i\u2208I1 \u2113b(yi \u2212xT i \u03b2) denotes the local smoothed loss function, where \u2113b(\u00b7) = (\u03c1\u03c4 \u2217Kb)(\u00b7). The proof of Theorem 1 depends heavily on the local strong convexity of b Q1,b(\u00b7) in a neighborhood of \u03b2\u2217. To this end, we \ufb01rst introduce the notion of symmetrized Bregman divergence. For any di\ufb00erentiable convex function \u03c8 : Rk \u2192R (k \u22651), the corresponding Bregman divergence is given by D\u03c8(w\u2032, w) = \u03c8(w\u2032) \u2212\u03c8(w) \u2212\u27e8\u2207\u03c8(w), w\u2032 \u2212w\u27e9. De\ufb01ne its symmetrized version as D\u03c8(w, w\u2032) = D\u03c8(w, w\u2032) + D\u03c8(w\u2032, w) = \u2207\u03c8(w) \u2212\u2207\u03c8(w\u2032), w \u2212w\u2032\u000b , w, w\u2032 \u2208Rk. (59) The \ufb01rst two lemmas, Lemma 15 and 16, provide a lower bound on the symmetrized Bregman divergence of the shifted conquer loss e Q(\u00b7) given in (8) and an upper bound on the global gradient, respectively. These two results coincide with Lemmas A.1 and A.2 in the supplement of He et al. (2021). We reproduce them here for the sake of readability. In particular, the former implies the restricted strong convexity property of e Q(\u00b7). Lemma 15 For any x > 0 and 0 < r \u2264b/(4\u03b30.25), inf \u03b2\u2208\u0398(r) D e Q(\u03b2, \u03b2\u2217) \u03bal\u2225\u03b2 \u2212\u03b2\u2217\u22252 \u03a3 \u22653 4fb \u2212\u00af f1/2 b 5 4 r bp r2n + r bx 8r2n ! \u2212 bx 3r2n (60) with probability at least 1 \u2212e\u2212x, where \u03bal = min|u|\u22641 K(u) > 0. 35 Tan, Battey and Zhou Consider the gradient \u2207b Qh(\u00b7) evaluated at \u03b2\u2217, namely, \u2207b Qh(\u03b2\u2217) = 1 N N X i=1 \b K(\u2212\u03b5i/h) \u2212\u03c4 \t xi, where \u03b5i = yi \u2212xT i \u03b2\u2217. The following lemma provides an upper bound on the \u21132-norm of \u2207b Qh(\u03b2\u2217). Recall that \u2126= \u03a3\u22121, and we write \u2225u\u2225\u2126= \u2225\u03a3\u22121/2u\u22252 for u \u2208Rp. Lemma 16 Conditions (C1)\u2013(C3) ensure that, for any x > 0, \u2225\u2207b Qh(\u03b2\u2217)\u2225\u2126\u2264C0 \u0012r p + x N + h2 \u0013 (61) with probability at least 1\u2212e\u2212x as long as N \u2273p+x, where C0 > 0 is a constant depending only on (\u03c4, l0, \u03c51, \u03ba2). Next, we extend the above results to the high-dimensional setting in which p \u226bn. Recall that \u0398(r) (r > 0) denotes the local \u21132 neighborhood of \u03b2\u2217under \u2225\u00b7 \u2225\u03a3-norm. Furthermore, de\ufb01ne the \u21131-cone \u039b as in (40), that is, \u039b = \b \u03b2 \u2208Rp : \u2225\u03b2 \u2212\u03b2\u2217\u22251 \u22644s1/2\u2225\u03b2 \u2212\u03b2\u2217\u2225\u03a3 \t . (62) Lemma 17 Assume Conditions (C1), (C2) and (C4) hold. 
Then, for any x > 0, 0 < r \u2264 b/(4\u03b30.25) and L > 0, inf \u03b2\u2208\u0398(r)\u2229\u039b D e Q(\u03b2, \u03b2\u2217) \u03bal\u2225\u03b2 \u2212\u03b2\u2217\u22252 \u03a3 \u22653 4fb \u2212\u00af f1/2 b ( 5B r 2bs log(2p) r2n + r bx 8r2n ) \u2212 bx 3r2n (63) with probability at least 1 \u2212e\u2212x. Proof of Lemma 17: Following the proof of Lemma 4.1 in Tan, Wang and Zhou (2022), it su\ufb03ces to bound E \r \r \r \r 1 n n X i=1 ei\u03c8b/2(\u03b5i)xi \r \r \r \r \u221e = E Ee \r \r \r \r 1 n n X i=1 ei\u03c8b/2(\u03b5i)xi \r \r \r \r \u221e , (64) where e1, . . . , en are i.i.d. Rademacher random variables, \u03c8b/2(\u03b5i) = 1(|\u03b5i| \u2264b/2), and Ee denotes the (conditional) expectation over e1, . . . , en given all the remaining random variables. Applying Hoe\ufb00ding\u2019s moment inequality yields Ee \r \r \r \r 1 n n X i=1 ei\u03c8b/2(\u03b5i)xi \r \r \r \r \u221e \u2264max 1\u2264j\u2264p \u001a 1 n n X i=1 x2 ij\u03c82 b/2(\u03b5i) \u001b1/2r 2 log(2p) n \u2264 \u001a 1 n n X i=1 \u03c8b/2(\u03b5i) \u001b1/2 B r 2 log(2p) n . 36 Decentralized Quantile Regression Moreover, note that E{\u03c8b/2(\u03b5i)|xi} \u2264\u00af fb \u00b7 b. Substituting these bounds into (64), we obtain that E \r \r \r \r 1 n n X i=1 ei\u03c8b/2(\u03b5i)xi \r \r \r \r \u221e \u2264(2 \u00af fb)1/2B r b log(2p) n . Keep the rest of the proof the same, we obtain the claimed bound (63). For the empirical loss b Qh(\u03b2) = (1/N) PN i=1 \u2113h(yi \u2212xT i \u03b2), de\ufb01ne its population counterpart Qh(\u03b2) = E b Qh(\u03b2) = E(y,x)\u223cP {\u2113h(y \u2212xT\u03b2)}. Lemma 18 Conditions (C1), (C2) and (C4) ensure that, for any x > 0, \u2225\u2207b Qh(\u03b2\u2217) \u2212\u2207Qh(\u03b2\u2217)\u2225\u221e\u2264C(\u03c4, h) r log(2p) + x N + B max(\u03c4, 1 \u2212\u03c4)log(2p) + x 3N with probability at least 1 \u2212e\u2212x, where C(\u03c4, h) = \u03c3u p 2{\u03c4(1 \u2212\u03c4) + (1 + \u03c4)l0\u03ba2h2} and \u03c3u = max1\u2264j\u2264p \u03c31/2 jj . Proof of Lemma 18: To begin with, write \u2225\u2207Qh(\u03b2\u2217) \u2212\u2207Qh(\u03b2\u2217)\u2225\u221e= max 1\u2264j\u2264p \f \f \f \f 1 N N X i=1 (1 \u2212E)wixij \f \f \f \f, where wi = K(\u2212\u03b5i/h) \u2212\u03c4 for i = 1, . . . , n, and note that |wixij| \u2264B\u00af \u03c4 with \u00af \u03c4 := max(\u03c4, 1 \u2212 \u03c4). Using Taylor series expansion and integration by parts, we obtain that |E \u0000wi|xi \u0001 | \u22640.5l0\u03ba2h2 and E \b K2(\u2212\u03b5i/h)|xi \t \u2264\u03c4 + l0\u03ba2h2, which in turn implies E(w2 i |xi) \u2264\u03c4(1\u2212\u03c4)+(1+\u03c4)l0\u03ba2h2 = \u03c4(1\u2212\u03c4)+Ch2. Hence, applying Bernstein\u2019s inequality yields that, for any 1 \u2264j \u2264p and z \u22650, \f \f \f \f 1 N N X i=1 (1 \u2212E)wixij \f \f \f \f \u2264\u03c31/2 jj r 2 \b \u03c4(1 \u2212\u03c4) + Ch2\t z N + B\u00af \u03c4 3 z N with probability at least 1 \u22122e\u2212z. Taking z = log(2p) + x, the claimed bound follows immediately from the union bound. With the above preparations, we are ready to prove the main results in the paper. B.2 Proof of Theorem 1 Proof of (11). The proof is carried out conditioning on the \u201cgood\u201d event that e \u03b2(0) \u2208 \u0398(r0). Let e \u03b2 = e \u03b2(1) be the one-step estimator that minimizes e Q(\u00b7). 
Set rloc = b/(4\u03b30.25) with \u03b30.25 given in (58), and de\ufb01ne an intermediate estimator e \u03b2\u03b7 = \u03b2\u2217+ \u03b7( e \u03b2 \u2212\u03b2\u2217) with \u03b7 = sup \b u \u2208[0, 1] : \u03b2\u2217+ u( e \u03b2 \u2212\u03b2\u2217) \u2208\u0398(rloc) \t ( = 1 if e \u03b2 \u2208\u0398(rloc), \u2208(0, 1) if e \u03b2 / \u2208\u0398(rloc). 37 Tan, Battey and Zhou In other words, \u03b7 is the largest value of u \u2208(0, 1] such that the corresponding convex combination of \u03b2\u2217and e \u03b2\u2014namely, (1 \u2212u)\u03b2\u2217+ u e \u03b2\u2014falls into the region \u0398(r0). Hence, if e \u03b2 / \u2208\u0398(rloc), we must have e \u03b2\u03b7 \u2208\u2202\u0398(rloc) = {\u03b2 \u2208Rp : \u2225\u03b2 \u2212\u03b2\u2217\u2225\u03a3 = rloc}. By Lemma C.1 in Sun, Zhou and Fan (2020), the three points e \u03b2, e \u03b2\u03b7 and \u03b2\u2217satisfy D e Q( e \u03b2\u03b7, \u03b2\u2217) \u2264\u03b7D e Q( e \u03b2, \u03b2\u2217), where by (59), D e Q(\u03b2, \u03b2\u2217) = \u2207e Q(\u03b2)\u2212\u2207e Q(\u03b2\u2217), \u03b2\u2212\u03b2\u2217\u000b = \u2207b Q1,b(\u03b2)\u2212\u2207b Q1,b(\u03b2\u2217), \u03b2\u2212\u03b2\u2217\u000b = D b Q1,b(\u03b2, \u03b2\u2217) for \u03b2 \u2208Rp. Taking into account the \ufb01rst-order optimality condition \u2207e Q( e \u03b2) = 0, it follows that D e Q( e \u03b2\u03b7, \u03b2\u2217) \u2264\u2212\u03b7 \u2207e Q(\u03b2\u2217), e \u03b2 \u2212\u03b2\u2217\u000b \u2264\u2225\u2207e Q(\u03b2\u2217)\u2225\u2126\u00b7 \u2225e \u03b2\u03b7 \u2212\u03b2\u2217\u2225\u03a3. (65) For the left-hand side of (65), applying Lemma 15 yields that with probability at least 1 \u2212e\u2212x, D e Q(\u03b2, \u03b2\u2217) \u22650.5f\u03bal \u00b7 \u2225\u03b2 \u2212\u03b2\u2217\u22252 \u03a3 (66) holds uniformly over all \u03b2 \u2208\u0398(rloc) as long as (p + x)/n \u2272b \u22721. To bound the right-hand side of (65), we de\ufb01ne vector-valued random processes ( \u22061(\u03b2) = \u03a3\u22121/2\b \u2207b Q1,b(\u03b2) \u2212\u2207b Q1,b(\u03b2\u2217) \u2212H(\u03b2 \u2212\u03b2\u2217) \t , \u2206(\u03b2) = \u03a3\u22121/2\b \u2207b Qh(\u03b2) \u2212\u2207b Qh(\u03b2\u2217) \u2212H(\u03b2 \u2212\u03b2\u2217) \t , (67) where \u03a3 = E(xxT) and H = E{f\u03b5|x(0)xxT}. Following the proof of Theorem 4.2 in He et al. (2021), it can be shown that, with probability at least 1 \u22122e\u2212x, sup \u03b2\u2208\u0398(r) \u2225\u22061(\u03b2)\u22252 \u2264C1r \u0012r p + x nb + r + b \u0013 and sup \u03b2\u2208\u0398(r) \u2225\u2206(\u03b2)\u22252 \u2264C1r \u0012r p + x Nh + r + h \u0013 (68) as long as b \u2273 p (p + x)/n, h \u2273 p (p + x)/N and n \u2273p + x, where C1 > 0 is a constant independent of (N, n, p, h, b). Recall that \u2207e Q(\u03b2) = \u2207e Q1,b(\u03b2) \u2212\u2207e Q1,b( e \u03b2(0)) + \u2207e Qh( e \u03b2(0)) and b \u2265h. Hence, applying (68) yields that, conditioned on the event E0(r0) \u2229E\u2217(r\u2217), \u2225\u2207e Q(\u03b2\u2217)\u2225\u2126= \u2225\u2206( e \u03b2(0)) \u2212\u22061( e \u03b2(0)) + \u03a3\u22121/2\u2207b Qh(\u03b2\u2217)\u22252 \u2264\u2225\u2206( e \u03b2(0))\u22252 + \u2225\u22061( e \u03b2(0))\u22252 + \u2225\u2207b Qh(\u03b2\u2217)\u2225\u2126 \u2264C1 r p + x nb + r p + x Nh + 2r0 + 2b ! \u00b7 r0 + r\u2217. (69) Together, the bounds (65), (66) and (69) imply that, conditioned on E0(r0) \u2229E\u2217(r\u2217), \u2225e \u03b2\u03b7 \u2212\u03b2\u2217\u2225\u03a3 \u22642(f\u03bal)\u22121\u2225\u2207e Q(\u03b2\u2217)\u2225\u2126 \u22642(f\u03bal)\u22121 ( C1 r p + x nb + r p + x Nh + 2r0 + 2b ! 
\u00b7 r0 + r\u2217 ) (70) 38 Decentralized Quantile Regression with probability at least 1 \u22123e\u2212x. We further let the bandwidths b \u2265h > 0 satisfy b \u2273max(r0, r\u2217) and p (p + x)/(nb) + b + p (p + x)/(Nh) \u22721, so that the right-hand of (70) is strictly less than rloc. Then, the intermediate point e \u03b2\u03b7\u2014a convex combination of \u03b2\u2217 and e \u03b2\u2014falls into the interior of the local region \u0398(r0) with high probability conditioned on E0(r0) \u2229E\u2217(r\u2217). Via proof by contradiction, we must have e \u03b2 \u2208\u0398(rloc) and hence e \u03b2\u03b7 = e \u03b2; otherwise if e \u03b2 / \u2208\u0398(rloc), by construction e \u03b2\u03b7 lies on the boundary of \u0398(rloc), which is a contradiction. As a result, the bound (70) also applies to e \u03b2, as desired. Proof of (12). To establish the Bahadur representation, note that the random process \u22061(\u00b7) de\ufb01ned in (67) can be written as \u22061(\u03b2) = \u03a3\u22121/2{\u2207e Q(\u03b2) \u2212\u2207e Q(\u03b2\u2217) \u2212H(\u03b2 \u2212\u03b2\u2217)}. Moreover, note that \u2207e Q(\u03b2\u2217) \u2212\u2207b Qh(\u03b2\u2217) = \u2207b Q1,b(\u03b2\u2217) \u2212\u2207b Q1,b( e \u03b2(0)) + \u2207b Qh( e \u03b2(0)) \u2212\u2207b Qh(\u03b2\u2217), which in turn implies \u2225\u2207e Q(\u03b2\u2217) \u2212\u2207b Qh(\u03b2\u2217)\u2225\u2126\u2264\u2225\u22061( e \u03b2(0))\u22252 + \u2225\u2206( e \u03b2(0))\u22252, where \u2206(\u00b7) is given in (67). Recall that \u2207e Q( e \u03b2) = 0, and conditioned on E0(r0) \u2229E\u2217(r\u2217), \u2225e \u03b2 \u2212\u03b2\u2217\u2225\u03a3 \u2264r1 \u224d r p + x nb + r p + x Nh + b ! \u00b7 r0 + r\u2217 with high probability. Under the assumed constraints on b, h and r0 \u2273r\u2217, we may assume r1 \u2264r0. Consequently, \u2225\u2207b Qh(\u03b2\u2217) + H( e \u03b2 \u2212\u03b2\u2217)\u2225\u2126 = \u2225\u03a3\u22121/2\u2207b Qh(\u03b2\u2217) \u2212\u03a3\u22121/2\u2207e Q(\u03b2\u2217) \u2212\u22061( e \u03b2)\u22252 \u2264\u2225\u22061( e \u03b2(0))\u22252 + \u2225\u2206( e \u03b2(0))\u22252 + \u2225\u22061( e \u03b2)\u22252 \u22642 sup \u03b2\u2208\u0398(r0) \u2225\u22061(\u03b2)\u22252 + sup \u03b2\u2208\u0398(r0) \u2225\u2206(\u03b2)\u22252 (71) with the same probability conditioned on E0(r0) \u2229E\u2217(r\u2217). Combining (71) with the bounds in (68) completes the proof of (12). B.3 Proof of Theorem 2 Recall that, for t = 1, 2, . . ., e Q(t)(\u00b7) given in (13) denotes the shifted smoothed QR loss at iteration t, whose gradient and Hessian are \u2207e Q(t)(\u03b2) = \u2207b Q1,b(\u03b2) \u2212\u2207b Q1,b( e \u03b2(t\u22121)) + \u2207b Qh( e \u03b2(t\u22121)) and \u22072 e Q(t)(\u03b2) = \u22072 b Q1,b(\u03b2). Let \u22061(\u03b2) and \u2206(\u03b2) be the stochastic processes de\ufb01ned in the proof of Theorem 1. The gradient \u2207e Q(t)(\u03b2\u2217) can thus be written as \u03a3\u22121/2\u2207e Q(t)(\u03b2\u2217) = {\u2206( e \u03b2(t\u22121)) \u2212\u22061( e \u03b2(t\u22121))} + \u03a3\u22121/2\u2207b Qh(\u03b2\u2217), so that \u2225\u2207e Q(t)(\u03b2\u2217)\u2225\u2126\u2264\u2225\u2206( e \u03b2(t\u22121))\u22252 + \u2225\u22061( e \u03b2(t\u22121))\u22252 + \u2225\u2207b Qh(\u03b2\u2217)\u2225\u2126. (72) 39 Tan, Battey and Zhou Given a sequence of iterates { e \u03b2(t)}t=0,1,...,T , we de\ufb01ne the \u201cgood\u201d events Et(rt) = \b e \u03b2(t) \u2208\u0398(rt) \t , t = 0, . . . , T, for some sequence of radii r0 \u2265r1 \u2265\u00b7 \u00b7 \u00b7 \u2265rT > 0 to be determined. 
Moreover, note that all the shifted loss functions e Q(t)(\u00b7) have the same symmetrized Bregman divergence, denoted by D(\u03b21, \u03b22) = \u27e8\u2207e Q(t)(\u03b21) \u2212\u2207e Q(t)(\u03b22), \u03b21 \u2212\u03b22\u27e9= \u27e8\u2207b Q1,b(\u03b21) \u2212\u2207b Q1,b(\u03b22), \u03b21 \u2212\u03b22\u27e9. In other words, the curvature of shifted loss functions is only determined by the local loss b Q1,b(\u00b7). De\ufb01ne the local radius rloc = b/(4\u03b30.25) the same way as in the proof of Theorem 1. As long as (p + x)/n \u2272b \u22721, Lemma 15 ensures that with probability at least 1 \u2212e\u2212x, D(\u03b2, \u03b2\u2217) \u22650.5f\u03bal \u00b7 \u2225\u03b2 \u2212\u03b2\u2217\u22252 \u03a3 = \u03ba \u00b7 \u2225\u03b2 \u2212\u03b2\u2217\u22252 \u03a3 (73) holds uniformly over all \u03b2 \u2208\u0398(rloc), where \u03ba := 0.5f\u03bal is the curvature parameter. Let Eloc be the event that the local strong convexity (73) holds. Proceeding via proof by contradiction, at each iteration t \u22651, we may construct an intermediate estimator e \u03b2(t) imd\u2014as a convex combination of e \u03b2(t) and \u03b2\u2217\u2014which falls in \u0398(rloc). If event E\u2217(r\u2217) \u2229Eloc occurs, then the bounds (65), (72) and (73) guarantee \u2225e \u03b2(t) imd \u2212\u03b2\u2217\u2225\u03a3 \u2264\u03ba\u22121\u2225\u2207Q(t)(\u03b2\u2217)\u2225\u2126\u2264\u03ba\u22121\b \u2225\u2206( e \u03b2(t\u22121))\u22252 + \u2225\u22061( e \u03b2(t\u22121))\u22252 + r\u2217 \t , (74) where \u2206(\u00b7) and \u22061(\u00b7) are the random processes de\ufb01ned in (67). For e \u03b2(t) which minimizes the shifted loss e Q(t)(\u00b7), the \ufb01rst-order condition \u2207e Q(t)( e \u03b2(t)) = 0 holds, and hence \u2225H( e \u03b2(t) \u2212\u03b2\u2217) + \u2207b Qh(\u03b2\u2217)\u2225\u2126 = \u2225\u03a3\u22121/2{\u2207e Q(t)( e \u03b2(t)) \u2212\u2207e Q(t)(\u03b2\u2217) \u2212H( e \u03b2(t) \u2212\u03b2\u2217)} + \u03a3\u22121/2{\u2207e Q(t)(\u03b2\u2217) \u2212\u2207b Qh(\u03b2\u2217)}\u22252 \u2264\u2225\u22061( e \u03b2(t))\u22252 + \u2225\u22061( e \u03b2(t\u22121))\u22252 + \u2225\u2206( e \u03b2(t\u22121))\u22252. (75) In what follows we deal with {( e \u03b2(t) imd, e \u03b2(t))}t=1,2,... sequentially, conditioning on E0(r0) \u2229 E\u2217(r\u2217) \u2229Eloc. In view of the basic inequalities (74) and (75), the key is to control the random processes \u2206(\u00b7) and \u22061(\u00b7) as we have done in (68). De\ufb01ne the event F(r) = ( sup \u03b2\u2208\u0398(r) \b \u2225\u22061(\u03b2)\u22252 + \u2225\u2206(\u03b2)\u22252 \t \u2264\u03b4(x) \u00b7 r ) with \u03b4(x) = C{ p (p + x)/(nb) + p (p + x)/(Nh) + b} for some C > 0, so that P{F(r)} \u2265 1 \u22122e\u2212x for every 0 < r \u2272b. At iteration 1, the bound (74) yields that, conditioned on E0(r0) \u2229E\u2217(r\u2217) \u2229Eloc \u2229F(r0), \u2225e \u03b2(1) imd \u2212\u03b2\u2217\u2225\u03a3 \u2264r1 := \u03ba\u22121\u03b4(x) \u00b7 r0 + \u03ba\u22121r\u2217. The imposed constraints on (b, h, r0, r\u2217) ensure that \u03ba\u22121\u03b4(x) < 1, r1 < rloc \u224db and r1 \u2264r0. Via proof by contradiction, we must have e \u03b2(1) = e \u03b2(1) imd \u2208\u0398(rloc), which in turn certi\ufb01es 40 Decentralized Quantile Regression the event E1(r1) = { e \u03b2(1) \u2208\u0398(r1)}. 
This, combined with (75) implies that, conditioned on E0(r0) \u2229E\u2217(r\u2217) \u2229Eloc \u2229F(r0), ( \u2225e \u03b2(1) \u2212\u03b2\u2217\u2225\u03a3 \u2264\u03ba\u22121\u03b4(x) \u00b7 r0 + \u03ba\u22121r\u2217= r1 \u2264r0, \u2225H( e \u03b2(1) \u2212\u03b2\u2217) + \u2207b Qh(\u03b2\u2217)\u2225\u2126\u22642\u03b4(x) \u00b7 r0. (76) Now assume that for some t \u22651, e \u03b2(t) \u2208\u0398(rt) with rt := \u03ba\u22121\u03b4(x) \u00b7 rt\u22121 + \u03ba\u22121r\u2217\u2264rt\u22121, and r\u2113< rloc for all \u2113= 1, . . . , t. At iteration t + 1, applying the general bound (74) again we see that, if event Et(rt) \u2229E\u2217(r\u2217) \u2229Eloc \u2229F(rt) occurs, \u2225e \u03b2(t+1) imd \u2212\u03b2\u2217\u2225\u03a3 \u2264\u03ba\u22121\u03b4(x) \u00b7 rt + \u03ba\u22121r\u2217. Set rt+1 = \u03ba\u22121\u03b4(x) \u00b7 rt + \u03ba\u22121r\u2217, and note that rt+1 \u2264\u03ba\u22121\u03b4(x) \u00b7 rt\u22121 + \u03ba\u22121r\u2217= rt < rloc. This means that e \u03b2(t+1) imd falls into the interior of \u0398(rloc), which in turn implies e \u03b2(t+1) = e \u03b2(t+1) imd \u2208\u0398(rloc) and certi\ufb01es event Et+1(rt+1). Conditioned on Et(rt) \u2229E\u2217(r\u2217) \u2229Eloc \u2229F(rt), we combine the consequence that e \u03b2(t+1) \u2208\u0398(rt+1) \u2286\u0398(rt) with the general bound (75), thereby obtaining ( \u2225e \u03b2(t+1) \u2212\u03b2\u2217\u2225\u03a3 \u2264\u03ba\u22121\u03b4(x) \u00b7 rt + \u03ba\u22121r\u2217= rt+1 \u2264rt, \u2225H( e \u03b2(t+1) \u2212\u03b2\u2217) + \u2207b Qh(\u03b2\u2217)\u2225\u2126\u22642\u03b4(x) \u00b7 rt. (77) Repeat the above argument until we obtain e \u03b2(T) for some T \u22651. For every 1 \u2264t \u2264T, note that conditioned on Et\u22121(rt\u22121) \u2229E\u2217(r\u2217) \u2229Eloc \u2229F(rt\u22121), the event Et(rt) must happen. Therefore, conditioned on E0(r0) \u2229E\u2217(r\u2217) \u2229Eloc \u2229{\u2229T\u22121 t=0 F(rt\u22121)}, e \u03b2(T) satis\ufb01es the bounds ( \u2225e \u03b2(T) \u2212\u03b2\u2217\u2225\u03a3 \u2264\u03ba\u22121\u03b4(x) \u00b7 rT\u22121 + \u03ba\u22121r\u2217=: rT \u2264rT\u22121, \u2225H( e \u03b2(T) \u2212\u03b2\u2217) + \u2207b Qh(\u03b2\u2217)\u2225\u2126\u22642\u03b4(x) \u00b7 rT\u22121. (78) It is easy to see that rt = {\u03ba\u22121\u03b4(x)}tr0 + 1\u2212{\u03ba\u22121\u03b4(x)}t 1\u2212\u03ba\u22121\u03b4(x) \u03ba\u22121r\u2217for t = 1, . . . , T. We thus take T = \u2308log(r0/r\u2217)/ log(\u03ba/\u03b4(x))\u2309+ 1, the smallest integer such that {\u03ba\u22121\u03b4(x)}T\u22121r0 \u2264r\u2217. Finally, conditioned on E0(r0) \u2229E\u2217(r\u2217), we combine (76)\u2013(78) with (68), (73) and the union bound to conclude that, with probability at least 1 \u2212(2T + 1)e\u2212x, \uf8f1 \uf8f2 \uf8f3 \u2225e \u03b2(T) \u2212\u03b2\u2217\u2225\u03a3 \u2264\u03ba\u22121\u03b4(x) \u00b7 r\u2217+ 1 \u03ba\u2212\u03b4(x)r\u2217\u2272r\u2217, \u2225H( e \u03b2(T) \u2212\u03b2\u2217) + \u2207b Qh(\u03b2\u2217)\u2225\u2126\u22642\u03b4(x) \b r\u2217+ 1 \u03ba\u2212\u03b4(x)r\u2217 \t \u2272\u03b4(x) \u00b7 r\u2217 Under the constraints p (p + x)/n \u2272b \u22721 and p (p + x)/N \u2272h \u2264b, we have \u03b4(x) \u2272b1/2 and hence log(\u03ba/\u03b4(x)) \u2273log(1/b). This completes the proof of (15). B.4 Proof of Theorem 4 Let e \u03b2(0) be the initial estimator given in (16). For x > 0, we can apply either Theorem 2.1 in Pan and Zhou (2021) if e \u03b2(0) is a local standard QR estimator, or Theorem 3.1 in He et al. 
(2021) with a bandwidth b \u224d{(p + x)/n}1/3 if e \u03b2(0) is a local conquer\u2014convolution 41 Tan, Battey and Zhou smoothed quantile regression\u2014estimator. In either case, e \u03b2(0) satis\ufb01es the bound \u2225e \u03b2(0) \u2212 \u03b2\u2217\u2225\u03a3 \u2264r0 \u224d p (p + x)/n with probability at least 1 \u22122e\u2212x as long as n \u2273p + x. For the second event E\u2217(r\u2217) in (10), it follows from Lemma 16 with r\u2217\u224d p (p + x)/N + h2 that P{E\u2217(r\u2217)} \u22651\u2212e\u2212x. Putting together the pieces, we conclude that the event E0(r0)\u2229E\u2217(r\u2217) occurs with probability at least 1 \u22123e\u2212x. Set x = log(n log m). Given the speci\ufb01ed choice of the bandwidths b, h > 0, we have r0 \u224d r p + log(n log m) n , r\u2217\u224d r p + log(n log m) N and r p + x nb + r p + x Nh + b \u224d \u0012p + log(n log m) n \u00131/3 + r p + log(n log m) Nh . Finally, applying the high-level result in Theorem 2 yields (17) and (18). B.5 Proof of Theorem 7 To simplify the presentation, we set q = p + log(n log m) throughout the proof. For an arbitrary vector a \u2208Rp, de\ufb01ne the partial sums SN = N\u22121/2 PN i=1 wivi and S0 N = SN \u2212ESN, where wi = K(\u2212\u03b5i/h) \u2212\u03c4 and vi = (H\u22121a)Txi. Recall that |E(wi|xi)| \u22640.5l1\u03ba2h2 and hence |E(wivi)| \u22640.5l1\u03ba2\u2225H\u22121a\u2225\u03a3 \u00b7 h2. Now we are ready to prove the normal approximation for e \u03b2. To begin with, we have |N1/2aT( e \u03b2 \u2212\u03b2\u2217) + S0 N| \u2264N1/2 \f \f \f \f \f * \u03a31/2H\u22121a, \u03a3\u22121/2H( e \u03b2 \u2212\u03b2\u2217) + \u03a3\u22121/2 1 N N X i=1 {K(\u2212\u03b5i/h) \u2212\u03c4}xi +\f \f \f \f \f + |ESN| \u2264N1/2\u2225H\u22121a\u2225\u03a3 \u00b7 \r \r \r \r \rH( e \u03b2 \u2212\u03b2\u2217) + 1 N N X i=1 {K(\u2212\u03b5i/h) \u2212\u03c4}xi \r \r \r \r \r \u2126 + 0.5l1\u03ba2\u2225H\u22121a\u2225\u03a3 \u00b7 N1/2h2. By (18), it follows that with probability at least 1 \u2212Cn\u22121, |N1/2aT( e \u03b2 \u2212\u03b2\u2217) + S0 N| \u2264C1\u2225H\u22121a\u2225\u03a3 \u00b7 \b q5/6n\u22121/3 + q(Nh)\u22121/2 + N1/2h2\t . (79) For the centered partial sum S0 N, applying the Berry-Esseen inequality (see, e.g. Shevtsova (2013)) yields sup x\u2208R \f \fP \b S0 N \u2264var(SN)1/2x \t \u2212\u03a6(x) \f \f \u22640.5N\u22121/2var(wv)\u22123/2E|wv \u2212E(wv)|3, (80) where w = K(\u2212\u03b5/h) \u2212\u03c4 and v = (H\u22121a)Tx. Following the proof of Lemma 18, it can be shown that E(w2|x) \u2264\u03c4(1\u2212\u03c4)+(1+\u03c4)l0\u03ba2h2 and |E(w2|x)\u2212\u03c4(1\u2212\u03c4)| \u2272h. Consequently, var(wv) = {\u03c4(1 \u2212\u03c4) + O(h)}\u2225H\u22121a\u22252 \u03a3 and E|wv|3 \u2264max(\u03c4, 1 \u2212\u03c4)E(w2|v|3) \u2264\u00b53{\u03c4(1 \u2212 \u03c4) + O(h2)}\u2225H\u22121a\u22253 \u03a3, where \u00b53 = supu\u2208Sp\u22121 E|zTu|3. Substituting these bounds into (80) gives sup x\u2208R \f \fP \b S0 N \u2264var(SN)1/2x \t \u2212\u03a6(x) \f \f \u2264C2N\u22121/2. (81) 42 Decentralized Quantile Regression Write \u03c32 \u03c4,h = E{K(\u2212\u03b5/h) \u2212\u03c4}2\u27e8H\u22121a, x\u27e92, and note that |var(SN) \u2212\u03c32 \u03c4,h| = (Ewv)2 \u2264 (0.5l0\u03ba2h2)2 \u00b7 \u2225H\u22121a\u22252 \u03a3. Comparing the distribution functions of two Gaussian random variables shows that sup x\u2208R \f \f\u03a6 \u0000x/var(S0 N)1/2\u0001 \u2212\u03a6 \u0000x/\u03c3\u03c4,h \u0001\f \f \u2264C3h4. (82) Let G \u223cN(0, 1). 
Applying the bounds (79), (81) and (82), we conclude that for any x \u2208R and a \u2208Rp, P \b N1/2aT( e \u03b2 \u2212\u03b2\u2217) \u2264x \t \u2264P h S0 N \u2264x + C1\u2225H\u22121a\u2225\u03a3 \u00b7 \b q5/6n\u22121/3 + q(Nh)\u22121/2 + N1/2h2\ti + Cn\u22121 \u2264P h var(S0 N)1/2G \u2264x + C1\u2225H\u22121a\u2225\u03a3 \u00b7 \b q5/6n\u22121/3 + q(Nh)\u22121/2 + N1/2h2\ti + Cn\u22121 + C2N\u22121/2 \u2264P h \u03c3\u03c4,hG \u2264x + C1\u2225H\u22121a\u2225\u03a3 \u00b7 \b q5/6n\u22121/3 + q(Nh)\u22121/2 + N1/2h2\ti + Cn\u22121 + C2N\u22121/2 + C3h4 \u2264P \u0000\u03c3\u03c4,hG \u2264x \u0001 + Cn\u22121 + C1\u2225H\u22121a\u2225\u03a3 (2\u03c0)1/2\u03c3\u03c4,h \b q5/6n\u22121/3 + q(Nh)\u22121/2 + N1/2h2\t + C2N\u22121/2 + C3h4. A similar argument leads to a series of reverse inequalities. The claimed bound then follows by noting that |\u03c32 \u03c4,h \u2212\u03c4(1 \u2212\u03c4)\u2225H\u22121a\u22252 \u03a3| \u2272h. B.6 Proof of Proposition 8 Without loss of generality, assume I1 = {1, . . . , n}, and write H1(\u03b2) = E b H1(\u03b2). Consider the change of variable \u03b4 = \u03a31/2(\u03b2 \u2212\u03b2\u2217), so that \u03b2 \u2208\u0398(r) is equivalent to \u03b4 \u2208Bp(r). Recall that zi = \u03a3\u22121/2xi \u2208Rp are isotropic random vectors. De\ufb01ne b H(\u03b4) = 1 n n X i=1 \u03c6b(\u03b5i \u2212zT i \u03b4)zizT i and H(\u03b4) = E \b b H(\u03b4) \t , (83) so that b H(\u03b4) = \u03a3\u22121/2 b H1(\u03b2)\u03a3\u22121/2 and H(\u03b4) = \u03a3\u22121/2H1(\u03b2)\u03a3\u22121/2, where \u03c6b(u) = (1/b)\u03c6(u/b). For any \u03f5 \u2208(0, r), there exists an \u03f5-net {\u03b41, . . . , \u03b4d\u03f5} with d\u03f5 \u2264(1 + 2r/\u03f5)p satisfying that, for each \u03b4 \u2208Bp(r), there exists some 1 \u2264j \u2264d\u03f5 such that \u2225\u03b4 \u2212\u03b4j\u22252 \u2264\u03f5. Hence, \u2225b H(\u03b4) \u2212H(\u03b4)\u22252 \u2264\u2225b H(\u03b4) \u2212b H(\u03b4j)\u22252 + \u2225b H(\u03b4j) \u2212H(\u03b4j)\u22252 + \u2225H(\u03b4j) \u2212H(\u03b4)\u22252 =: I1(\u03b4) + I2(\u03b4j) + I3(\u03b4). 43 Tan, Battey and Zhou Starting with I1(\u03b4), note that |\u03c6b(u) \u2212\u03c6b(v)| \u2264supt |\u03c6\u2032(t)| \u00b7 b\u22122|u \u2212v| \u2264(2b)\u22122|u \u2212v| for all u, v \u2208R. It follows that I1(\u03b4) \u2264 sup u,v\u2208Sp\u22121 1 n n X i=1 |\u03c6b(\u03b5i \u2212zT i \u03b4) \u2212\u03c6b(\u03b5i \u2212zT i \u03b4j)| \u00b7 |zT i u \u00b7 zT i v| \u2264(2b)\u22122 sup u,v\u2208Sp\u22121 1 n n X i=1 |zT i (\u03b4 \u2212\u03b4j) \u00b7 zT i u \u00b7 zT i v| \u2264(2b)\u22122\u03f5 \u00b7 max 1\u2264i\u2264n \u2225zi\u22252 \u00b7 \r \r \r \r 1 n n X i=1 zizT i \r \r \r \r 2 . (84) To bound max1\u2264i\u2264n \u2225zi\u22252, using a standard covering argument we have, for any \u03f51 \u2208 (0, 1), an \u03f51-net N\u03f51 \u2286Sp\u22121 with |N\u03f51| \u2264(1 + 2/\u03f51)p such that max1\u2264i\u2264n \u2225zi\u22252 \u2264(1 \u2212 \u03f51)\u22121 max1\u2264i\u2264n maxu\u2208N\u03f51 zT i u. Given 1 \u2264i \u2264n and u \u2208N\u03f51, recall that P(|zT i u| \u2265 \u03c51u) \u22642e\u2212u2/2 for any u \u22650. Taking the union bound over i and u, and setting u = p 2x + 2 log(2n) + 2p log(1 + 2/\u03f51), we obtain that with probability at least 1\u22122n(1 + 2/\u03f51)pe\u2212u2/2 = 1 \u2212e\u2212x, max1\u2264i\u2264n \u2225zi\u22252 \u2264(1 \u2212\u03f51)\u22121\u03c51 p 2x + 2 log(2n) + 2p log(1 + 2/\u03f51). 
By minimizing this upper bound with respect to \u03f51 \u2208(0, 1), we obtain that with probability at least 1 \u2212e\u2212x, max 1\u2264i\u2264n \u2225zi\u22252 \u2272(p + log n + x)1/2. For \u2225(1/n) Pn i=1 zizT i \u22252, it follows from the covering argument along with Bernstein\u2019s inequality that, with probability at least 1 \u2212e\u2212x/3, \r \r \r \r 1 n n X i=1 zizT i \u2212Ip \r \r \r \r 2 \u2272 r p + x n _ p + x n . Plugging the above bounds into (84) yields sup \u03b4\u2208Bp(r) I1(\u03b4) \u2272(p + log n + x)1/2b\u22122\u03f5 (85) with probability at least 1 \u22122e\u2212x as long as n \u2273p + x. For I3(\u03b4), it can be similarly obtained that I3(\u03b4) \u2264(2b)\u22122 sup u,v\u2208Sp\u22121 E|zT(\u03b4 \u2212\u03b4j) \u00b7 zTu \u00b7 zTv| \u2264\u00b53(2b)\u22122\u03f5 (86) uniformly over all \u03b4 \u2208Bp(r). Turning to I2(\u03b4j), note that b H(\u03b4j) \u2212H(\u03b4j) = (1/n) Pn i=1(1 \u2212E)\u03c6ijzizT i , where \u03c6ij = \u03c6b(\u03b5i \u2212zT i \u03b4j) satisfy |\u03c6ij| \u2264(2\u03c0)\u22121/2b\u22121 and E \u0000\u03c62 ij|xi \u0001 = 1 b2 Z \u221e \u2212\u221e \u03c62 \u0012\u27e8zi, \u03b4\u27e9\u2212t b \u0013 f\u03b5i|xi(t) dt = 1 b Z \u221e \u2212\u221e \u03c62(u)f\u03b5i|xi(zT i \u03b4 \u2212bu) du \u2264 \u00af f 2\u03c01/2b 44 Decentralized Quantile Regression almost surely. Given \u03f52 \u2208(0, 1/2), there exits an \u03f52-net M of the sphere Sp\u22121 with |M| \u2264 (1+2/\u03f52)p such that \u2225b H(\u03b4j)\u2212H(\u03b4j)\u22252 \u2264(1\u22122\u03f52)\u22121 maxu\u2208M |uT{ b H(\u03b4j)\u2212H(\u03b4j)}u|. Given u \u2208M and k = 2, 3, . . ., we bound the higher order moments of \u03c6ij(zT i u)2 by E|\u03c6ij(zT i u)2|k \u2264\u00af f(2\u03c01/2b)\u22121 \u00b7 {(2\u03c0)\u22121/2b\u22121}k\u22122\u03c52k 1 \u00b7 2k Z \u221e 0 P \u0000|zT i u| \u2265\u03c51u \u0001 u2k\u22121du \u2264\u00af f(2\u03c01/2b)\u22121 \u00b7 {(2\u03c0)\u22121/2b\u22121}k\u22122\u03c52k 1 \u00b7 4k Z \u221e 0 u2k\u22121e\u2212u2/2du \u2264\u00af f(2\u03c01/2b)\u22121 \u00b7 {(2\u03c0)\u22121/2b\u22121}k\u22122\u03c52k 1 \u00b7 2k+1k!. In particular, E\u03c62 ij(zT i u)4 \u22648\u03c0\u22121/2\u03c54 1 \u00af fb\u22121, and for each k \u22653, E|\u03c6ij(zT i u)2|k \u2264 k! 2 \u00b7 8\u03c0\u22121/2\u03c54 1 \u00af fb\u22121 \u00b7( p 2/\u03c0 \u03c52 1b\u22121)k\u22122. Applying Bernstein\u2019s inequality and the union bound, we \ufb01nd that for any u \u22650, \u2225b H(\u03b4j) \u2212H(\u03b4j)\u22252 \u2264 1 1 \u22122\u03f52 max u\u2208M \f \f \f \f 1 n n X i=1 (1 \u2212E)\u03c6ij(zT i u)2 \f \f \f \f \u2264 \u03c52 1 1 \u22122\u03f52 \u0012 4\u03c0\u22121/4 \u00af f1/2 r u nb + r 2 \u03c0 u nb \u0013 with probability at least 1 \u22122(1 + 2/\u03f52)pe\u2212u = 1 \u2212elog(2)+p log(1+2/\u03f52)\u2212u. Setting \u03f52 = 2/(e3 \u22121) and u = log(2) + 3p + v, it follows that with probability at least 1 \u2212e\u2212v, I2(\u03b4j) \u2272 r p + v nb + p + v nb . Once again, taking the union bound over j = 1, . . . , d\u03f5 and setting v = p log(1 + 2r/\u03f5) + x, we obtain that with probability at least 1 \u2212d\u03f5e\u2212v \u22651 \u2212e\u2212x, max 1\u2264j\u2264N\u03f5 I2(\u03b4j) \u2272 r p log(3er/\u03f5) + x nb + p log(3er/\u03f5) + x nb . 
(87) Combining (85), (86) and (87), and taking \u03f5 = r/n2 \u2208(0, r) in the beginning of the proof, we conclude that with probability at least 1 \u22123e\u2212x, sup \u03b2\u2208\u0398(r) \u2225b H1(\u03b2) \u2212H1(\u03b2)\u2225\u2126\u2272 r p log n + x nb + p log n + x nb + (p + log n + x)1/2r (nb)2 as long as n \u2273p + x. Moreover, note that for every \u03b2 \u2208\u0398(r), \u2225H1(\u03b2) \u2212H\u2225\u2126 = \r \r \r \rE Z 1 0 Z \u221e \u2212\u221e \u03c6(u) \b f\u03b5|x(t\u27e8x, \u03b2 \u2212\u03b2\u2217\u27e9\u2212bu) \u2212f\u03b5|x(0) \t du dt \u00b7 zzT \r \r \r \r 2 . By the Lipschitz continuity of f\u03b5|x(\u00b7) and f\u2032 \u03b5|x(\u00b7), we have |f\u03b5|x(t\u27e8x, \u03b2\u2212\u03b2\u2217\u27e9\u2212bu)\u2212f\u03b5|x(\u2212bu)| \u2264 l0 \u00b7 t \u00b7 |xT(\u03b2 \u2212\u03b2\u2217)| and |f\u03b5|x(\u2212bu) \u2212f\u03b5|x(0) + bf\u2032 \u03b5|x(0) \u00b7 u| \u2264| R \u2212bu 0 {f\u2032 \u03b5|x(v) \u2212f\u2032 \u03b5|x(0)} dv| \u2264 0.5l1b2u2. Plugging these into the above inequality yields \u2225H1(\u03b2) \u2212H\u2225\u2126 \u22640.5l0 sup u\u2208Sp\u22121 E \b |xT(\u03b2 \u2212\u03b2\u2217)|(zTu)2\t + 0.5l1b2 \u22640.5 \u0000l0\u00b53r + l1b2\u0001 . Putting together the pieces proves the claimed bound, provided that b \u2273(p log n + x)/n. 45 Tan, Battey and Zhou B.7 Proof of Theorem 11 Without loss of generality, assume I1 = {1, . . . , n}. Let S = supp(\u03b2\u2217) \u2286{1, . . . , p} be the true active set with cardinality |S| \u2264s, and write e \u03b4 = e \u03b2 \u2212\u03b2\u2217with e \u03b2 = e \u03b2(1) for simplicity. By the \ufb01rst-order optimality condition, there exits a subgradient e \u03be \u2208\u2202\u2225e \u03b2\u22251 such that \u2207e Q( e \u03b2) + \u03bb \u00b7 e \u03be = 0 and e \u03beT e \u03b2 = \u2225e \u03b2\u22251. Hence, e \u03be, \u03b2\u2217\u2212e \u03b2 \u000b \u2264\u2225\u03b2\u2217\u22251 \u2212\u2225e \u03b2\u22251 = \u2225\u03b2\u2217 S\u22251 \u2212\u2225e \u03b4Sc\u22251 \u2212\u2225e \u03b4S + \u03b2\u2217 S\u22251 \u2264\u2225e \u03b4S\u22251 \u2212\u2225e \u03b4Sc\u22251. This, together with the convexity of e Q(\u00b7), implies 0 \u2264D e Q( e \u03b2, \u03b2\u2217) = \u2207e Q( e \u03b2) \u2212\u2207e Q(\u03b2\u2217), e \u03b2 \u2212\u03b2\u2217\u000b = \u03bb e \u03be, \u03b2\u2217\u2212e \u03b2 \u000b \u2212 \u2207e Q(\u03b2\u2217), e \u03b4 \u000b \u2264\u03bb \u0000\u2225e \u03b4S\u22251 \u2212\u2225e \u03b4Sc\u22251 \u0001 \u2212 \u2207e Q(\u03b2\u2217), e \u03b4 \u000b , (88) where \u2207e Q(\u03b2\u2217) = \u2207b Q1,b(\u03b2\u2217) \u2212\u2207b Q1,b( e \u03b2(0)) + \u2207b Qh( e \u03b2(0)). De\ufb01ne gradient-based random processes D1(\u03b2) = \u2207b Q1,b(\u03b2) \u2212\u2207b Q1,b(\u03b2\u2217), D(\u03b2) = \u2207b Qh(\u03b2) \u2212\u2207b Qh(\u03b2\u2217), and their means E1(\u03b2) = ED1(\u03b2) and E(\u03b2) = ED(\u03b2). Moreover, let Qh(\u03b2) = E(\u03c1\u03c4 \u2217 Kh)(y \u2212xT\u03b2) be the population smoothed loss function. It is easy to see that E b Qh(\u03b2) = Qh(\u03b2) and E b Q1,b(\u03b2) = Qb(\u03b2). Then, the gradient \u2207e Q(\u03b2\u2217) can be decomposed as \b D(\u03b2) \u2212E(\u03b2) \t\f \f \f \u03b2= e \u03b2(0) + \b E1(\u03b2) \u2212D1(\u03b2) \t\f \f \f \u03b2= e \u03b2(0) + \u2207b Qh(\u03b2\u2217) \u2212\u2207Qh(\u03b2\u2217) + \b E(\u03b2) \u2212E1(\u03b2) \t\f \f \f \u03b2= e \u03b2(0) + \u2207Qh(\u03b2\u2217). 
For r > 0, de\ufb01ne the suprema of random processes over the local \u21131/\u21132 region \u0398(r) \u2229\u039b \u03a01(r) = sup \u03b2\u2208\u0398(r)\u2229\u039b \u2225D1(\u03b2) \u2212E1(\u03b2)\u2225\u221e, \u03a0(r) = sup \u03b2\u2208\u0398(r)\u2229\u039b \u2225D(\u03b2) \u2212E(\u03b2)\u2225\u221e, (89) and the deterministic quantities \u03c9(r) = sup \u03b2\u2208\u0398(r) \u2225E(\u03b2) \u2212E1(\u03b2)\u2225\u2126, \u03c9\u2217= \u2225\u2207Qh(\u03b2\u2217)\u2225\u2126. (90) If event E\u2217(\u03bb\u2217) \u2229E0(r0) occurs, then using H\u00a8 older\u2019s inequality gives |\u27e8\u2207e Q(\u03b2\u2217), e \u03b4\u27e9| \u2264 \b \u03a0(r0) + \u03a01(r0) + \u03bb\u2217 \t \u00b7 \u2225e \u03b4\u22251 + {\u03c9(r0) + \u03c9\u2217} \u00b7 \u2225e \u03b4\u2225\u03a3. Let \u03bb = 2.5(\u03bb\u2217+ \u03f1) with \u03f1 satisfying \u03f1 \u2265max \u001a \u03a0(r0) + \u03a01(r0), \u03c9(r0) + \u03c9\u2217 s1/2 \u001b , (91) so that \u03a0(r0) + \u03a01(r0) + \u03bb\u2217\u22640.4\u03bb and \u03c9(r0) + \u03c9\u2217\u22640.4s1/2\u03bb. Substituting the above bounds into (88) yields 0 \u22641.4\u2225e \u03b4S\u22251 \u22120.6\u2225e \u03b4Sc\u22251 + 0.4s1/2\u2225e \u03b4\u2225\u03a3. Consequently, \u2225e \u03b4\u22251 \u2264(10/3)\u2225e \u03b4S\u22251 + (2/3)s1/2\u2225e \u03b4\u2225\u03a3 \u22644s1/2\u2225e \u03b4\u2225\u03a3, showing that { e \u03b2 \u2208\u039b} occurs. 46 Decentralized Quantile Regression Throughout the rest of the proof, we assume event E\u2217(\u03bb\u2217)\u2229E0(r0) occurs. Turning to the left-hand side of (88), for rloc := b/(4\u03b30.25), we de\ufb01ne e \u03b2\u03b7 = \u03b2\u2217+ \u03b7( e \u03b2 \u2212\u03b2\u2217) with 0 < \u03b7 \u22641 the same way as in the \ufb01rst paragraph in the proof of Theorem 1. Under the requirement (91) on \u03f1, we have e \u03b2\u03b7 \u2208\u0398(rloc) \u2229\u039b and hence by (88), D e Q( e \u03b2\u03b7, \u03b2\u2217) \u2264\u03b7 \u00b7 D e Q( e \u03b2, \u03b2\u2217) \u2264\u03b7 \u00b7 \u00001.4\u03bb\u2225e \u03b4S\u22251 + 0.4s1/2\u03bb\u2225e \u03b4\u2225\u03a3 \u0001 \u22641.8s1/2\u03bb \u00b7 \u2225e \u03b2\u03b7 \u2212\u03b2\u2217\u2225\u03a3. For the lower bound, Lemma 17 implies D e Q( e \u03b2\u03b7, \u03b2\u2217) \u22650.5f\u03bal \u00b7 \u2225e \u03b2\u03b7 \u2212\u03b2\u2217\u22252 \u03a3 with probability at least 1 \u2212e\u2212x as long as (s log p + x)/n \u2272b \u22721. We thus conclude that \u2225e \u03b2\u03b7 \u2212\u03b2\u2217\u2225\u03a3 \u22643.6(f\u03bal)\u22121s1/2\u03bb. (92) It remains to choose a su\ufb03ciently large \u03bb, or equivalently \u03f1, so that (91) is satis\ufb01ed. The following two lemmas provide upper bounds on the suprema \u03a0(r0), \u03a01(r0) and \u03c9(r0), de\ufb01ned in (89) and (90). Lemma 19 Assume Conditions (C1), (C2) and (C4) hold. For any r > 0 and x > 0, \u03a0(r) = sup \u03b2\u2208\u0398(r)\u2229\u039b \u2225D(\u03b2) \u2212E(\u03b2)\u2225\u221e \u2264 h c1h\u22121p 2s log(2p)/N + c2 p {log(2p) + x}/(Nh) + c3s1/2{log(2p) + x}/(Nh) i \u00b7 r (93) with probability at least 1 \u2212e\u2212x, where c1 = 20\u03bauB2, c2 = (2\u03bau \u00af f\u00b54)1/2\u03c3u and c3 = (16 + 4/3)\u03bauB2. The same high probability bound, with (N, h) replaced by (n, b), holds for \u03a01(r). Lemma 20 Conditions (C1), (C2) and (C4) ensure that \u03c9(r) \u2264l0\u03ba1|b \u2212h|r for any r > 0 and \u03c9\u2217\u2264l0\u03ba2h2/2, where \u03ba1 = R \u221e \u2212\u221e|u|K(u) du and \u03ba2 = R \u221e \u2212\u221eu2K(u) du. 
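As a quick standalone sanity check on the kernel constants κ1 = ∫|u|K(u) du and κ2 = ∫u²K(u) du appearing in Lemma 20, the short sketch below evaluates them numerically for the Gaussian and uniform kernels. This is only an illustration of the constants, not part of the proof, and the kernel choices are assumptions.

```python
import numpy as np
from scipy.integrate import quad

# kappa1 = int |u| K(u) du and kappa2 = int u^2 K(u) du, as in Lemma 20.
gaussian = lambda u: np.exp(-u**2 / 2) / np.sqrt(2 * np.pi)
uniform = lambda u: 0.5  # density of U[-1, 1] on its support

for name, K, lo, hi in [("gaussian", gaussian, -np.inf, np.inf),
                        ("uniform", uniform, -1.0, 1.0)]:
    kappa1, _ = quad(lambda u: abs(u) * K(u), lo, hi)
    kappa2, _ = quad(lambda u: u**2 * K(u), lo, hi)
    # Expected values: gaussian kappa1 = sqrt(2/pi) ~ 0.798, kappa2 = 1;
    # uniform kappa1 = 1/2, kappa2 = 1/3.
    print(f"{name}: kappa1 = {kappa1:.4f}, kappa2 = {kappa2:.4f}")
```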
Given the bandwidth 0 < h \u2264b \u22721, applying Lemmas 19 and 20 yields that \u03a0(r0) + \u03a01(r0) \u2272s1/2r0 \u00b7 1 b r log p + x n + 1 h r log p + x N ! with probability at least 1\u22122e\u2212x, and \u03c9(r0)+\u03c9\u2217\u2264l0(\u03ba1br0+\u03ba2h2/2). Hence, a su\ufb03ciently large \u03f1, which is of order \u03f1 \u224dmax hn b\u22121p s \u00b7 (log p + x)/n + h\u22121p s \u00b7 (log p + x)/N o r0, s\u22121/2\u0000br0 + h2\u0001i , guarantees that (91) holds with high probability. Consequently, conditioning on E\u2217(\u03bb\u2217) \u2229 E0(r0), the intermediate \u201cestimator\u201d e \u03b2\u03b7 satis\ufb01es the error bound (92) with probability at least 1 \u22123e\u2212x. We then set \u03b4 = 3e\u2212x \u2208(0, 1), so that log p + x = log(3p/\u03b4) \u224dlog(p/\u03b4) and \u03f1 \u224dmax hn b\u22121p s log(p/\u03b4)/n + h\u22121p s log(p/\u03b4)/N o r0, s\u22121/2\u0000br0 + h2\u0001i . With the above choice of \u03f1, let b > 14.4\u03b30.25(f\u03bal)\u22121s1/2\u03bb so that the right-hand side of (92) is strictly less than rloc. Via proof by contradiction, we must have e \u03b2 = e \u03b2\u03b7 \u2208\u0398(rloc) and hence the bound (92) also applies to e \u03b2, as claimed. 47 Tan, Battey and Zhou B.8 Proof of Theorem 12 The whole proof will be carried out conditioning on E\u2217(\u03bb\u2217) \u2229E0(r0) for some prespeci\ufb01ed r0, \u03bb\u2217> 0, and write r\u2217= s1/2\u03bb\u2217. Examine the proof of Theorem 11, we see that to obtain the desired error bound for the \ufb01rst iterate e \u03b2(1), the regularization parameter \u03bb1 needs to be su\ufb03ciently large. Given \u03b4 \u2208(0, 1), we set \u03bb1 = 2.5(\u03bb\u2217+ \u03f11), where \u03f11 > 0 is of order \u03f11 \u224dmax hn b\u22121p s log(p/\u03b4)/n + h\u22121p s log(p/\u03b4)/N o r0, s\u22121/2(br0 + h2) i Provided that the bandwidths b \u2265h > 0 satisfy r\u2217+ max hn b\u22121s p log(p/\u03b4)/n + h\u22121s p log(p/\u03b4)/N o r0, br0 + h2i \u2272b \u22721, the \ufb01rst iterate e \u03b2(1) satis\ufb01es e \u03b2(1) \u2208\u039b and \u2225e \u03b2(1) \u2212\u03b2\u2217\u2225\u03a3 \u2264C1 n b\u22121s p log(p/\u03b4)/n + b + h\u22121s p log(p/\u03b4)/N o | {z } =: \u03b3 \u00b7r0 + C2(r\u2217+ h2) =: r1 (94) with probability at least 1 \u2212\u03b4, where \u03b3 = \u03b3(s, p, n, N, h, b, \u03b4) > 0 is a contraction factor. With the stated choice of bandwidths b \u224ds1/2{log(p/\u03b4)/n}1/4 and h \u224d{s log(p/\u03b4)/N}1/4, we have \u03b3 \u224d \b s2 log(p/\u03b4)/n \t1/4 + \b s3 log(p/\u03b4)/N \t1/4 and \u03f11 \u224dmax \b \u03b3s\u22121/2r0, p log(p/\u03b4)/N \t . A su\ufb03ciently accurate initial estimator\u2014say, r0 \u2272min{1, (m/s)1/4}\u2014ensures that s1/2\u03f11 \u2272 b. Moreover, we need the local and total sample sizes to be su\ufb03ciently large\u2014namely, n \u2273s2 log(p/\u03b4) and N \u2273s3 log(p/\u03b4)\u2014so that the contraction factor \u03b3 is strictly less than 1. As a result, the one-step procedure reduces the estimation error of e \u03b2(0) by a factor of \u03b3. For t = 2, 3, . . . , T, de\ufb01ne the events Et(rt) := { e \u03b2(t) \u2208\u0398(rt) \u2229\u039b} and rt := \u03b3rt\u22121 + C2(r\u2217+ h2) = \u03b3tr0 + C2 1 \u2212\u03b3t 1 \u2212\u03b3 (r\u2217+ h2). At iteration t \u22652, we set \u03bbt = 3(\u03bb\u2217+ \u03f1t) with \u03f1t \u224dmax n \u03b3s\u22121/2rt\u22121, p log(p/\u03b4)/N o . 
Together, the last two displays imply \u03f1t \u224dmax n \u03b3ts\u22121/2r0 + \u03b3s\u22121/2(\u03bb\u2217+ h2)1(t \u22652), p log(p/\u03b4)/N o . Under the stated conditions on r0 and (n, N), we have s1/2\u03f1t \u2272b for every t \u22652. Applying Theorem 11 repeatedly, we obtain that conditioned on the event E\u2217(\u03bb\u2217) \u2229Et\u22121(rt\u22121), the tth iterate e \u03b2(t) satis\ufb01es e \u03b2(t) \u2208\u039b and \u2225e \u03b2(t) \u2212\u03b2\u2217\u2225\u03a3 \u2264\u03b3rt\u22121 + C2(r\u2217+ h2) = rt = \u03b3tr0 + C2 1 \u2212\u03b3t 1 \u2212\u03b3 (r\u2217+ h2) (95) 48 Decentralized Quantile Regression with probability at least 1 \u2212\u03b4. Note that r\u2217= s1/2\u03bb\u2217corresponds to the optimal rate under \u21132-norm. We thus choose the number of iterations T to be the smallest integer such that \u03b3T r0 \u2264r\u2217, that is, T = \u2308log(r0/r\u2217)/ log(1/\u03b3)\u2309. Applying the union bound over t = 1, 2, . . . , T yields that conditioned on E\u2217(\u03bb\u2217) \u2229E0(r0), the T th iterate e \u03b2(T) satis\ufb01es the error bounds \u2225e \u03b2(T) \u2212\u03b2\u2217\u2225\u03a3 \u2272s1/2\u03bb\u2217+ h2 and \u2225e \u03b2(T) \u2212\u03b2\u2217\u22251 \u2272s\u03bb\u2217+ s1/2h2 with probability at least 1 \u2212T\u03b4. This completes the proof of the theorem. B.9 Proof of Proposition 13 The proof is similar in spirit to that of Theorem 11, while certain modi\ufb01cations are required. We thus provide a sketch proof for completeness. Note that the shifted loss e Q(\u00b7) in the proof of Theorem 11 shares the Hessian as well as symmetrized Bregman divergence with the local loss b Q1,b(\u00b7). With slight abuse of notation, let e \u03b4 = e \u03b2 \u2212\u03b2\u2217with e \u03b2 = e \u03b2(0). Inequality (88) implies 0 \u2264D b Q1,b( e \u03b2, \u03b2\u2217) \u2264\u03bb0 \u0000\u2225e \u03b4S\u22251 \u2212\u2225e \u03b4Sc\u22251 \u0001 \u2212 \u2207b Q1,b(\u03b2\u2217), e \u03b4 \u000b . (96) By H\u00a8 older\u2019s inequality, |\u27e8\u2207b Q1,b(\u03b2\u2217), e \u03b4 \u27e9| \u2264\u2225\u2207b Q1,b(\u03b2\u2217) \u2212\u2207Q1,b(\u03b2\u2217)\u2225\u221e\u00b7 \u2225e \u03b4\u22251 + \u2225\u2207Q1,b(\u03b2\u2217)\u2225\u2126\u00b7 \u2225e \u03b4\u2225\u03a3, where Q1,b(\u03b2) = E b Q1,b(\u03b2) is the population loss. By the Lipschitz continuity of f\u03b5|x(\u00b7), it can be shown that \u2225\u2207Q1,b(\u03b2\u2217)\u2225\u2126\u22640.5l0\u03ba2b2. Moreover, let the regularization parameter \u03bb0 satisfy \u03bb0 \u22652.5\u2225\u2207b Q1,b(\u03b2\u2217) \u2212\u2207Q1,b(\u03b2\u2217)\u2225\u221e. (97) Then, |\u27e8\u2207b Q1,b(\u03b2\u2217), e \u03b4\u27e9| \u22640.4\u03bb0\u2225e \u03b4\u22251 + 0.5l0\u03ba2b2\u2225e \u03b4\u2225\u03a3. Substituting these into (96) yields D b Q1,b( e \u03b2, \u03b2\u2217) \u2264 \u00001.4s1/2\u03bb0 + 0.5l0\u03ba2b2\u0001 \u00b7 \u2225e \u03b4\u2225\u03a3 and 0 \u2264\u03bb0 \u0000\u2225e \u03b4S\u22251 \u2212\u2225e \u03b4Sc\u22251 \u0001 + 0.4\u03bb0\u2225e \u03b4\u22251 + 0.5l0\u03ba2b2\u2225e \u03b4\u2225\u03a3. The latter implies \u2225e \u03b4\u22251 \u2264(10/3)\u2225e \u03b4S\u22251+(5/6)l0\u03ba2\u03bb\u22121 0 b2\u2225e \u03b4\u2225\u03a3 \u2264L\u2225e \u03b4\u2225\u03a3 with L := (10/3)s1/2+(5/6)l0\u03ba2\u03bb\u22121 0 b2. 
Starting from here, we introduce an intermediate \u201cestimator\u201d e \u03b2\u03b7 = \u03b2\u2217+ \u03b7( e \u03b2 \u2212\u03b2\u2217), for some 0 < \u03b7 \u22641, the same way as in the proof of Theorem 11, so that e \u03b2\u03b7 \u2208\u0398(rloc) with rloc = b/(4\u03b30.25). Since e \u03b2\u03b7 \u2212\u03b2\u2217= \u03b7( e \u03b2\u03b7 \u2212\u03b2\u2217), we also have \u2225e \u03b2\u03b7 \u2212\u03b2\u2217\u22251 \u2264L\u2225e \u03b2\u03b7 \u2212\u03b2\u2217\u2225\u03a3 for the same L > 1 given above. Applying Lemma 4.1 in Tan, Wang and Zhou (2022) and Lemma 17, we obtain that with probability at least 1 \u2212e\u2212x, D b Q1,b( e \u03b2\u03b7, \u03b2\u2217) \u22650.5f\u03bal \u00b7 \u2225e \u03b2\u03b7 \u2212\u03b2\u2217\u22252 \u03a3, provided that n \u2273b\u22121{L2 log(p) + x}. (98) 49 Tan, Battey and Zhou Consequently, 0.5f\u03bal \u00b7 \u2225e \u03b2\u03b7 \u2212\u03b2\u2217\u22252 \u03a3 \u2264D b Q1,b( e \u03b2\u03b7, \u03b2\u2217) \u2264\u03b7D b Q1,b( e \u03b2, \u03b2\u2217) \u2264 \u00001.4s1/2\u03bb0 + 0.5l0\u03ba2b2\u0001 \u00b7 \u2225e \u03b2\u03b7 \u2212\u03b2\u2217\u2225\u03a3 with probability at least 1 \u2212e\u2212x. Canceling \u2225e \u03b2\u03b7 \u2212\u03b2\u2217\u2225\u03a3 on both sides yields \u2225e \u03b2\u03b7 \u2212\u03b2\u2217\u2225\u03a3 \u2264 (f\u03bal)\u22121(2.8s1/2\u03bb0 + l0\u03ba2b2). Provided that b > 4\u03b30.25(f\u03bal)\u22121\u00002.8s1/2\u03bb0 + l0\u03ba2b2\u0001 , (99) e \u03b2\u03b7 falls in the interior of \u0398(rloc) (with high probability). Thus we must have e \u03b2\u03b7 = e \u03b2, and the same error bound holds for e \u03b2, that is, \u2225e \u03b2 \u2212\u03b2\u2217\u2225\u03a3 \u2272s1/2\u03bb0 + b2. It remains to tune the regularization parameter \u03bb0 and bandwidth b so that (97) and (99) hold, and to determine the scaling of the sample size required to ensure the lower bound (98). Applying a local version of Lemma 18 yields that with probability at least 1 \u2212e\u2212x, \u2225\u2207b Q1,b(\u03b2\u2217) \u2212\u2207Q1,b(\u03b2\u2217)\u2225\u221e\u2264C(\u03c4, b) r log(2p) + x n + B max(\u03c4, 1 \u2212\u03c4)log(2p) + x 3n (100) for the same constants therein. Given \u03b4 \u2208(0, 1), we take x = log(2/\u03b4) in (98) and (100), and set \u03bb0 \u224d p \u03c4(1 \u2212\u03c4) log(p/\u03b4)/n, so that (97) holds with high probability. Furthermore, as long as the bandwidth b and sample size n are such that p s log(p/\u03b4)/n \u2272b \u22721, both requirements in (98) and (99) are satis\ufb01ed. This completes the proof of (48). If in addition \u03bb0 \u22651.25l0\u03ba2s\u22121/2b2, then \u2225e \u03b4\u22251 \u22644s1/2\u2225e \u03b4\u2225\u03a3 and hence e \u03b2 \u2208\u039b. Appendix C. Proof of Auxiliary Lemmas C.1 Proof of Lemma 19 For r1, r2 > 0, de\ufb01ne the parameter set \u03980(r1, r2) = {\u03b4 \u2208Rp : \u2225\u03b4\u22251 \u2264r1, \u2225\u03b4\u2225\u03a3 \u2264r2}. Consider the change of variable v = \u03b2 \u2212\u03b2\u2217, so that v \u2208\u03980(4s1/2r, r) for \u03b2 \u2208\u0398(r) \u2229\u039b. Consequently, sup \u03b2\u2208\u0398(r1,r2) \u2225D(\u03b2) \u2212E(\u03b2)\u2225\u221e = max 1\u2264j\u2264p sup v\u2208\u03980(r1,r2) \f \f \f \f \f 1 N N X i=1 (1 \u2212E) \u001a K \u0012xT i v \u2212\u03b5i h \u0013 \u2212K \u0012\u2212\u03b5i h \u0013\u001b xij | {z } =:\u03c8ij(v) \f \f \f \f \f =: max 1\u2264j\u2264p \u03a8j, (101) where \u03a8j = supv\u2208\u03980(r1,r2) |(1/N) PN i=1(1 \u2212E)\u03c8ij(v)|. 
Since K(\u00b7) = K\u2032(\u00b7) is uniformly bounded, sup v\u2208\u03980(r1,r2) |\u03c8ij(v)| \u2264\u03bauB2 r1 h . 50 Decentralized Quantile Regression By Bousquet\u2019s version of Talagrand\u2019s inequality (Bousquet, 2003), we obtain that for any z > 0, \u03a8j \u22645 4E\u03a8j + sup v\u2208\u03980(r1,r2) \b E\u03c82 ij(v) \t1/2 r 2z N + (4 + 1/3)\u03bauB2 r1z Nh (102) with probability at least 1 \u22122e\u2212z. For v \u2208\u03980(r1, r2), E\u03c82 ij(v) = E \" x2 ij Z \u221e \u2212\u221e \b K \u0000xTv/h \u2212u/h \u0001 \u2212K(\u2212u/h) \t2f\u03b5|x(u) du # = h E x2 ij Z \u221e \u2212\u221e \b K \u0000xTv/h + v \u0001 \u2212K(v) \t2f\u03b5|x(\u2212vh) dv \u0013 \u2264\u00af fh\u22121E \" x2 ij (xTv)2 Z \u221e \u2212\u221e \u001a Z 1 0 K \u0000v + wxTv/h \u0001 dw \u001b2 dv # \u2264\u00af fh\u22121E x2 ij (xTv)2 \"Z 1 0 \u001a Z \u221e \u2212\u221e K2\u0000v + wxTv/h \u0001 dv \u001b1/2 dw #2! (by Minkowski\u2019s integral inequality) \u2264\u03bau \u00af f \u00b7 h\u22121E \u0000xij \u00b7 xTv \u00012 \u2264\u03bau \u00af f \u00b7 h\u22121\u0000Ex4 ij \u00011/2\b E(xTv)4\t1/2 \u2264\u03bau \u00af f\u03c3jj\u00b54 \u00b7 h\u22121r2 2, where the last inequality uses the bound Ex4 ij = E\u27e8\u03a3\u22121/2x, \u03a31/2ej\u27e94 \u2264\u00b54\u2225\u03a31/2ej\u22254 2 = \u03c32 jj\u00b54. We next bound the mean E\u03a8j. By Rademacher symmetrization, E\u03a8j \u22642E sup v\u2208\u03980(r1,r2) \f \f \f \f 1 N N X i=1 ei\u03c8ij(v) \f \f \f \f = 2E ( Ee sup v\u2208\u03980(r1,r2) \f \f \f \f 1 N N X i=1 ei\u03c8ij(v) \f \f \f \f ) , where Ee denotes the conditional expectation over e1, . . . , en given the remaining variables, and e1, . . . , en are i.i.d. Rademacher random variables. For each i, write \u03c8ij(v) = \u03d5i(xT i v), where \u03d5i(\u00b7) is such that \u03d5i(0) = 0 and |\u03d5i(u) \u2212\u03d5i(v)| \u2264\u03bau|xij| \u00b7 h\u22121|u \u2212v|. Then, by Talagrand\u2019s contraction principle (see, e.g., Theorem 4.12 in Ledoux and Talagrand (1991)), Ee sup v\u2208\u03980(r1,r2) \f \f \f \f 1 N N X i=1 ei\u03c8ij(v) \f \f \f \f \u22642\u03bau max 1\u2264i\u2264N |xij| \u00b7 Ee sup v\u2208\u03980(r1,r2) \f \f \f \f 1 Nh N X i=1 eixT i v \f \f \f \f \u22642\u03bauB r1 h Ee \r \r \r \r 1 N N X i=1 eixi \r \r \r \r \u221e . By Hoe\ufb00ding\u2019s moment inequality, Ee \r \r \r \r 1 N N X i=1 eixi \r \r \r \r \u221e \u2264max 1\u2264k\u2264p \u0012 1 N N X i=1 x2 ik \u00131/2r 2 log(2p) N . 51 Tan, Battey and Zhou Putting together the above three inequalities yields E\u03a8j \u22644\u03bauB2 r1 h r 2 log(2p) N for any j = 1, . . . , p. (103) To sum up, we take z = log(2p) + x, r1 = 4s1/2r and r2 = r in (102), which combined with (101), (103) and the union bound, completes the proof of (93). C.2 Proof of Lemma 20 Under the conditional quantile model (1), note that E(\u03b2) = \u2207Qh(\u03b2) \u2212\u2207Qh(\u03b2\u2217), where Qh(\u03b2) = E b Qh(\u03b2) is the population smoothed loss which is twice-di\ufb00erentiable with Hessian \u22072Qh(\u03b2) = E{Kh((xT\u03b2 \u2212y)/h)xxT}. Moreover, de\ufb01ne H0 = E{f\u03b5|x(0)zzT} = \u03a3\u22121/2H\u03a3\u22121/2, where H = E{f\u03b5|x(0)xxT} and z = \u03a3\u22121/2x. 
By the mean value theorem for vector-valued functions, \u03a3\u22121/2E(\u03b2) = \u03a3\u22121/2E Z 1 0 \u22072Qh \u0000(1 \u2212t)\u03b2\u2217+ t\u03b2 \u0001 dt \u03a3\u22121/2 \u00b7 \u03a31/2(\u03b2 \u2212\u03b2\u2217) = E Z 1 0 Z \u221e \u2212\u221e K(u)f\u03b5|x(t \u00b7 zT\u03b4 \u2212hu) du dt \u00b7 zzT\u03b4, where \u03b4 = \u03a31/2(\u03b2 \u2212\u03b2\u2217). Similarly, \u03a3\u22121/2E1(\u03b2) = E Z 1 0 Z \u221e \u2212\u221e K(u)f\u03b5|x(t \u00b7 zT\u03b4 \u2212bu) du dt \u00b7 zzT\u03b4. where Q1,b(\u03b2) = E b Q1,b(\u03b2). This, together with the Lipschitz continuity of f\u03b5|x(\u00b7) (implied by Condition (C1)), implies that for any \u03b2 \u2208\u0398(r), \u2225E(\u03b2) \u2212E1(\u03b2)\u2225\u2126 \u2264 sup u\u2208Sp\u22121 E Z 1 0 Z \u221e \u2212\u221e K(u)|f\u03b5|x(t \u00b7 zT\u03b4 \u2212hu) \u2212f\u03b5|x(t \u00b7 zT\u03b4 \u2212bu)| du dt \u00b7 |zT\u03b4 \u00b7 zTu| \u2264l0 Z \u221e \u2212\u221e |u|K(u) du \u00b7 |b \u2212h| sup u\u2208Sp\u22121 \b E(zTu)2\t1/2\u2225\u03b4\u2225\u03a3 = l0\u03ba1|b \u2212h|r, as claimed. Turning to \u03a3\u22121/2\u2207Qh(\u03b2\u2217) = E{K(\u2212\u03b5/h) \u2212\u03c4}z, by integration by parts we get E \b K(\u2212\u03b5/h)|x \t = Z \u221e \u2212\u221e K(\u2212t/h) dF\u03b5|x(t) = \u22121 h Z \u221e \u2212\u221e K(\u2212t/h)F\u03b5|x(t) dt = Z \u221e \u2212\u221e K(u)F\u03b5|x(\u2212hu) dt = \u03c4 + Z \u221e \u2212\u221e K(u) Z \u2212hu 0 \b f\u03b5|x(t) \u2212f\u03b5|x(0) \t dt du. Combined with the Lipschitz continuity of f\u03b5|x(\u00b7), this implies \u2225\u2207Qh(\u03b2\u2217)\u2225\u2126= sup u\u2208Sp\u22121 E Z \u221e \u2212\u221e K(u) Z \u2212hu 0 \b f\u03b5|x(t) \u2212f\u03b5|x(0) \t dt du \u00b7 zTu \u22641 2l0\u03ba2h2, thus completing the proof. 52 Decentralized Quantile Regression Appendix D. Estimation at Extreme Quantile Levels As highlighted in the introduction, the causal mechanisms underpinning extreme behavior are of high relevance in numerous \ufb01elds, and QR at extreme quantile levels operationalizes attempts to understand these. Rather than treating variables on an equal footing as Engelke and Hitz (2020), QR singles out a particular variable for which understanding is sought. Just as the statistical aspects of extreme value theory are challenged by the limitation of data beyond extreme thresholds, QR coe\ufb03cients at extreme quantiles are notoriously hard to estimate. The following minor adaptation of our procedure improves its performance at extreme quantile levels. Recall from Section 2.1 that the conquer method is evolved from a smoothed estimating equation approach (Kaplan and Sun, 2017). The latter constructs a smoothed sample analog of the moment condition. As observed by both Fernandes, Guerre and Horta (2021) and He et al. (2021), the smoothing bias primarily a\ufb00ects the intercept estimation especially in the random design setting. Now let us take a closer look at the moment condition. For every p-vector u = (u1, . . . , up)T, we use u\u2212\u2208Rp\u22121 to denote its (p \u22121)-subvector with the \ufb01rst coordinate removed, i.e. u\u2212= (u2, . . . , up)T. Then, the \ufb01rst-order moment condition can be written as ( E{1(y < xT \u2212\u03b2\u2212+ \u03b21) \u2212\u03c4} = 0, E{1(y < xT \u2212\u03b2\u2212+ \u03b21) \u2212\u03c4}xj = 0, j = 2, . . . , p, whose sample counterpart is ( PN i=1{1(yi < xT i,\u2212\u03b2\u2212+ \u03b21) \u2212\u03c4} = 0, PN i=1{1(yi < xT i,\u2212\u03b2\u2212+ \u03b21) \u2212\u03c4}xij = 0, j = 2, . . . , p. 
(104) To \ufb01nd the solution of the above system of equations, note that given \u03b2\u2212\u2208Rp\u22121, the \ufb01rst equation can be (approximately) solved by taking \u03b21 to be the sample \u03c4-quantile of {yi \u2212xT i,\u2212\u03b2\u2212}N i=1, which only allows for an error 1/N. The main di\ufb03culty then arises from solving the remaining p \u22121 equations, for which analytical solutions do not exist. To mitigate the smoothing bias of conquer for intercept estimation, we consider a hybrid estimating equation approach that solves ( PN i=1{1(yi < xT i,\u2212\u03b2\u2212+ \u03b21) \u2212\u03c4} = 0, PN i=1{K(\u2212(yi \u2212\u03b21 \u2212xT i,\u2212\u03b2\u2212)/h) \u2212\u03c4}xi,\u2212= 0p\u22121, (105) where K(u) = R u \u2212\u221eK(v) dv for some kernel function K(\u00b7). Note that, given \u03b2\u2212\u2208Rp\u22121, the \ufb01rst equation in (105) can be solved by taking \u03b21 to be the sample \u03c4-quantile of {yi \u2212 xT i,\u2212\u03b2\u2212}N i=1, and given \u03b21, solving the second vector equation is equivalent to minimizing the conquer loss \u03b2\u22127\u2192b Qh(\u03b21, \u03b2\u2212). This motivates the following iterative procedure, starting at iteration 0 with initial estimates \u03b2(0) 1 and \u03b2(0) \u2212 of the intercept and slope coe\ufb03cients, respectively. The procedure involves two steps. At iteration t = 1, 2, . . .: Re\ufb01tted intercept. Using the current slope coe\ufb03cients estimate \u03b2(t\u22121) \u2212 , we compute the residuals r(t\u22121) i = yi \u2212xT i,\u2212\u03b2(t\u22121) \u2212 , and then update the intercept \u03b2(t) 1 as the sample \u03c4quantile of {r(t\u22121) i }N i=1. 53 Tan, Battey and Zhou Adjusted conquer. With a re\ufb01tted intercept \u03b2(t) 1 , we take any solution to the optimization problem min \u03b2\u2212\u2208Rp\u22121 b Qh(\u03b2(t) 1 , \u03b2\u2212) = min \u03b2\u2212\u2208Rp\u22121 1 N N X i=1 (\u03c1\u03c4 \u2217Kh)(yi \u2212\u03b2(t) 1 \u2212xT i,\u2212\u03b2\u2212) as the updated slope estimator, denoted by b \u03b2(t) \u2212. We refer to the above method as two-step conquer. Using two-step conquer, in Algorithm 4 we present a modi\ufb01ed multi-round distributed algorithm, which is particularly suited for extreme quantile regressions with \u03c4 close to either 0 or 1. Appendix E. Additional Simulation Studies In this section, we provide numerical studies for score-based con\ufb01dence sets to complement those in Section 4.2 of the paper. Speci\ufb01cally, in each of 200 Monte Carlo replications, p = 10 covariates are generated at random from a uniform distribution on [\u22121, 1] and a response variable is generated according to the linear heteroscedastic model yi = xT i \u03b2\u2217+ (0.25xi1 + 0.25xi2 + 0.75){\u03b5i \u2212F \u22121 \u03b5i (\u03c4)}, i = 1, . . . , N, where \u03c4 = 0.9 and \u03b5i is drawn from the t-distribution with 1.5 degrees of freedom. The intercept \u03b2\u2217 0 is taken as 2 and all other elements of \u03b2\u2217as unity. For n = 200 and m = 100, Tables 4 and 5 report the simulated coverage probabilities and mean width for each con\ufb01dence set construction described in Section 2.3. In addition to the aforementioned methods, the construction in equation (32) is referred to as CE-Score. 
\u03b2\u2217 1 \u03b2\u2217 2 \u03b2\u2217 3 \u03b2\u2217 4 \u03b2\u2217 5 \u03b2\u2217 6 \u03b2\u2217 7 \u03b2\u2217 8 \u03b2\u2217 9 \u03b2\u2217 10 DC-Normal 0.810 0.805 0.995 1.000 0.985 0.995 0.995 0.990 1.000 0.995 CE-Normal 0.935 0.865 0.925 0.930 0.895 0.940 0.895 0.925 0.910 0.910 CE-Boot (a) 0.970 0.940 0.960 0.945 0.940 0.975 0.950 0.955 0.950 0.955 CE-Boot (b) 0.965 0.910 0.945 0.950 0.935 0.975 0.935 0.955 0.930 0.950 CE-Score 0.960 0.905 0.960 0.970 0.915 0.980 0.920 0.955 0.945 0.950 Table 4: Monte Carlo coverage probabilities for the case when p = 10, n = 200, and m = 100. Apart from those based on the averaging estimator b \u03b2dc, which has appreciable undercoverage for the \ufb01st two coe\ufb03cients, all constructions are broadly comparable in terms of coverage probability. Con\ufb01dence intervals based on the score statistic are considerably narrower than the others, although at higher computational cost due to the need to calculate b Tk(ck) from equation (32) over a grid of ck values. Tables 6 and 7 for n = 200 and m = 200 are qualitatively similar. For roughly similar coverage, the higher total sample size N = nm reduces the widths of the con\ufb01dence intervals. The larger value of m also has the e\ufb00ect of reducing the DC-Normal coverage probability 54 Decentralized Quantile Regression Algorithm 4 E\ufb03cient Distributed Quantile Regression via Two-Step Conquer. Input: data batches {(yi, xi)}i\u2208Ij, j = 1, . . . , m, stored at m sites, quantile level \u03c4 \u2208(0, 1), bandwidths b, h > 0, initialization e \u03b2(0) \u2208Rp, maximum number of iterations T, g0 = 1. 1: for t = 1, 2 . . . , T do 2: Broadcast e \u03b2(t\u22121) \u2212 \u2208Rp\u22121 to all local machines. 3: for j = 1, . . . , m do 4: At the jth site, compute the sample \u03c4-quantile of {b r(t\u22121) i := yi \u2212\u27e8xi,\u2212, e \u03b2(t\u22121) \u2212 \u27e9}i\u2208Ij, denoted by b q(t\u22121) j , and send it the master (\ufb01rst) machine. 5: end for 6: Calculate b q(t) = (1/m) Pm j=1 b q(t\u22121) j on the master, and send it to every local machine. 7: for j = 1, . . . , m do 8: On the jth machine, compute the gradient vector b g(t\u22121) j,h = \u22121 n X i\u2208Ij \u2113\u2032 h(b r(t\u22121) i \u2212b q(t))xi,\u2212\u2208Rp\u22121, and send it to the master. 9: end for 10: On the master machine, calculate b g(t\u22121) h = 1 m m X j=1 b g(t\u22121) j,h and gt = \u2225b g(t\u22121) h \u2225\u221e. 11: if gt > gt\u22121 or gt < 10\u22125 break 12: otherwise Calculate b g(t\u22121) 1,b = (\u22121/n) P i\u2208I1 \u2113\u2032 b(b r(t\u22121) i \u2212b q(t))xi,\u2212, and solve the shifted conquer loss minimization b \u03b8(t) \u2208argmin \u03b8\u2208Rp\u22121 b Q1,b(b q(t), \u03b8) \u2212 b g(t\u22121) 1,b \u2212b g(t\u22121) h , \u03b8 \u000b on the master machine. De\ufb01ne e \u03b2(t) = (b q(t), (b \u03b8(t))T)T as the tth iterate. 13: end for Output: e \u03b2(T). due to the bias in b \u03b2dc. This bias does not disappear asymptotically in m but rather the variation of b \u03b2dc around the wrong point is diminished, leading to poor coverage. Further simulation results for n = 400 with m = 100 and m = 200 are reported in Tables 8\u201311. Similar conclusions can be drawn. 
55 Tan, Battey and Zhou \u03b2\u2217 1 \u03b2\u2217 2 \u03b2\u2217 3 \u03b2\u2217 4 \u03b2\u2217 5 \u03b2\u2217 6 \u03b2\u2217 7 \u03b2\u2217 8 \u03b2\u2217 9 \u03b2\u2217 10 DC-Normal 0.359 0.344 0.338 0.336 0.348 0.347 0.344 0.336 0.337 0.340 CE-Normal 0.207 0.197 0.191 0.187 0.192 0.192 0.191 0.186 0.189 0.189 CE-Boot (a) 0.238 0.234 0.216 0.217 0.228 0.225 0.222 0.216 0.216 0.219 CE-Boot (b) 0.222 0.215 0.206 0.203 0.210 0.209 0.208 0.202 0.204 0.204 CE-Score 0.162 0.162 0.162 0.162 0.162 0.161 0.161 0.161 0.161 0.161 Table 5: Monte Carlo mean width of the constructed con\ufb01dence intervals for the case when p = 10, n = 200, and m = 100. \u03b2\u2217 1 \u03b2\u2217 2 \u03b2\u2217 3 \u03b2\u2217 4 \u03b2\u2217 5 \u03b2\u2217 6 \u03b2\u2217 7 \u03b2\u2217 8 \u03b2\u2217 9 \u03b2\u2217 10 DC-Normal 0.595 0.615 0.990 0.995 1.000 0.990 0.995 1.000 0.990 0.995 CE-Normal 0.895 0.925 0.935 0.950 0.915 0.910 0.940 0.950 0.920 0.900 CE-Boot (a) 0.955 0.945 0.960 0.985 0.960 0.960 0.985 0.980 0.955 0.945 CE-Boot (b) 0.945 0.935 0.950 0.975 0.955 0.935 0.985 0.975 0.950 0.930 CE-Score 0.955 0.955 0.950 0.970 0.955 0.935 0.970 0.980 0.940 0.965 Table 6: Monte Carlo coverage probabilities for the case when p = 10, n = 200, and m = 200. \u03b2\u2217 1 \u03b2\u2217 2 \u03b2\u2217 3 \u03b2\u2217 4 \u03b2\u2217 5 \u03b2\u2217 6 \u03b2\u2217 7 \u03b2\u2217 8 \u03b2\u2217 9 \u03b2\u2217 10 DC-Normal 0.259 0.257 0.257 0.247 0.251 0.251 0.254 0.244 0.252 0.245 CE-Normal 0.149 0.145 0.138 0.137 0.135 0.138 0.138 0.134 0.135 0.134 CE-Boot (a) 0.173 0.173 0.165 0.162 0.160 0.164 0.164 0.158 0.162 0.160 CE-Boot (b) 0.165 0.163 0.155 0.154 0.150 0.154 0.154 0.150 0.152 0.151 CE-Score 0.114 0.114 0.113 0.113 0.113 0.113 0.113 0.112 0.113 0.112 Table 7: Monte Carlo mean width of the constructed con\ufb01dence intervals for the case when p = 10, n = 200, and m = 200. 56 Decentralized Quantile Regression \u03b2\u2217 1 \u03b2\u2217 2 \u03b2\u2217 3 \u03b2\u2217 4 \u03b2\u2217 5 \u03b2\u2217 6 \u03b2\u2217 7 \u03b2\u2217 8 \u03b2\u2217 9 \u03b2\u2217 10 DC-Normal 0.765 0.715 0.995 1.000 0.995 0.980 0.990 1.000 0.985 1.000 CE-Normal 0.965 0.950 0.935 0.960 0.940 0.920 0.960 0.960 0.960 0.960 CE-Boot (a) 0.990 0.970 0.970 0.980 0.950 0.940 0.965 0.975 0.960 0.955 CE-Boot (b) 0.975 0.965 0.945 0.975 0.960 0.930 0.975 0.965 0.960 0.975 CE-Score 0.950 0.950 0.945 0.970 0.950 0.935 0.980 0.980 0.930 0.965 Table 8: Monte Carlo coverage probabilities (p = 10, n = 400, m = 100). \u03b2\u2217 1 \u03b2\u2217 2 \u03b2\u2217 3 \u03b2\u2217 4 \u03b2\u2217 5 \u03b2\u2217 6 \u03b2\u2217 7 \u03b2\u2217 8 \u03b2\u2217 9 \u03b2\u2217 10 DC-Normal 0.191 0.189 0.187 0.184 0.184 0.184 0.185 0.180 0.185 0.182 CE-Normal 0.141 0.139 0.132 0.130 0.130 0.130 0.131 0.130 0.133 0.131 CE-Boot (a) 0.153 0.154 0.147 0.144 0.144 0.144 0.146 0.143 0.146 0.145 CE-Boot (b) 0.147 0.147 0.139 0.138 0.137 0.137 0.139 0.137 0.140 0.138 CE-Score 0.114 0.114 0.112 0.113 0.112 0.113 0.113 0.112 0.112 0.113 Table 9: Monte Carlo mean width (p = 10, n = 400, m = 100). 
\u03b2\u2217 1 \u03b2\u2217 2 \u03b2\u2217 3 \u03b2\u2217 4 \u03b2\u2217 5 \u03b2\u2217 6 \u03b2\u2217 7 \u03b2\u2217 8 \u03b2\u2217 9 \u03b2\u2217 10 DC-Normal 0.495 0.450 0.985 0.995 0.985 0.990 0.985 0.980 0.980 0.990 CE-Normal 0.945 0.915 0.940 0.960 0.950 0.935 0.925 0.950 0.930 0.930 CE-Boot (a) 0.965 0.940 0.960 0.980 0.975 0.950 0.960 0.980 0.945 0.970 CE-Boot (b) 0.955 0.935 0.955 0.975 0.970 0.950 0.945 0.970 0.930 0.960 CE-Score 0.930 0.930 0.965 0.965 0.955 0.930 0.965 0.950 0.955 0.960 Table 10: Monte Carlo coverage probabilities (p = 10, n = 400, m = 200). \u03b2\u2217 1 \u03b2\u2217 2 \u03b2\u2217 3 \u03b2\u2217 4 \u03b2\u2217 5 \u03b2\u2217 6 \u03b2\u2217 7 \u03b2\u2217 8 \u03b2\u2217 9 \u03b2\u2217 10 DC-Normal 0.131 0.129 0.128 0.127 0.127 0.126 0.129 0.124 0.124 0.125 CE-Normal 0.097 0.096 0.091 0.091 0.092 0.091 0.092 0.089 0.089 0.090 CE-Boot (a) 0.106 0.105 0.100 0.101 0.101 0.100 0.103 0.099 0.097 0.099 CE-Boot (b) 0.103 0.102 0.097 0.097 0.097 0.097 0.098 0.095 0.094 0.095 CE-Score 0.078 0.078 0.077 0.077 0.077 0.077 0.077 0.078 0.077 0.077 Table 11: Monte Carlo mean width (p = 10, n = 400, m = 200).", "introduction": "Quantile regression is indispensable for understanding pathways of dependence irretrievable through a standard conditional mean regression analysis. Since its inception by Koenker and Bassett (1978), appreciable e\ufb00ort has been expended in understanding and operational- izing quantile regression. Statistical aspects have focused on the situation in which all the \u00a92022 Kean Ming Tan, Heather Battey, and Wen-Xin Zhou. License: CC-BY 4.0, see https://creativecommons.org/licenses/by/4.0/. Attribution requirements are provided at http://jmlr.org/papers/v23/21-1269.html. arXiv:2110.13113v2 [stat.ME] 22 Aug 2022 Tan, Battey and Zhou data are simultaneously available for inference while practical aspects have centered around reformulations of the quantile regression optimization problem for computational e\ufb03ciency. Challenges arise when data are distributed, either by the study design or due to storage and privacy concerns. The latter have become more prominent, with less centralized systems tending to be preferred both by the individuals whose data are collected and by those responsible for ensuring their security. In such settings, communication costs associated with statistical procedures become a consideration in addition to their theoretical properties. Ideally, inferential tools are sought whose communication costs are as low as possible without sacri\ufb01cing statistical accuracy, where the latter would be quanti\ufb01ed in terms of estimation error or distributional approximation errors for test statistics. Data may be naturally partitioned because of the way they were collected, or deliberately distributed for other reasons. Li et al. (2020) provided examples in which the distributed setting arises: (i) when data are too numerous to be stored in a central location; (ii) when privacy and security are a concern, such as for medical records, necessitating decentralized statistical analyses. We are motivated particularly by situations in which there are separate data-collecting entities such as local governments, research labs, hospitals, or smart phones, and direct data sharing raises concerns over privacy or loss of ownership. 
Due to privacy concerns over sending raw data, data collected at each location must remain there, which makes communication e\ufb03ciency critical, especially when the network comprises an enormous number of local data-collecting entities. Communication in the network can be slower than local computation by three orders of magnitude due to limited bandwidth (Lan et al., 2020). It is therefore desirable to communicate as few rounds as possible, leaving expensive computation to local machines. Among two general principles that have been proposed for distributed statistical infer- ence, the simple meta-analysis approach of averaging estimates from separate data sources has the advantage of only requiring one round of communication. Jordan, Lee and Yang (2019) highlighted some disadvantages. Notably, in a simpler setting than that posed by quantile regression, a stringent constraint on the number of sources is implicated. To at- tain the convergence rate hypothetically achievable using the combined sample of size N, a meta-analysis must limit the number of sources, m, to be far fewer than \u221a N. This is due to small sample bias inherent to most nonlinear estimators, which does not diminish upon aggregation. A violation of the scaling condition slows the convergence rate of the estima- tor. This, while sometimes acceptable for point estimation, is detrimental for statistical inference, as illustrated later in simulations. By extending the distributed approximate Newton algorithm (Shamir, Srebro and Zhang, 2014), Wang et al. (2017) and Jordan, Lee and Yang (2019) proposed an alternative prin- ciple for distributed inference in parametric models, which requires a controlled amount of further communications to yield statistically optimal estimators without the restriction m = o( \u221a N) on the number of machines. Another variant of this principle was consid- ered in Fan, Guo and Wang (2021), along with a simultaneous analysis of the optimization and statistical errors. For reasons outlined below, these ideas are not directly applica- ble to quantile regression without considerable methodological development, guided by the detailed theoretical analysis provided in this paper. Quantile regression quanti\ufb01es dependence of an outcome variable\u2019s quantiles on a num- ber of covariates. All quantiles are potentially of interest but to take an important example, 2 Decentralized Quantile Regression quantile regression has bearing on the types of applications for which conditional extreme value analysis might be considered. There are relatively few successful examples of modeling the extremes. The pioneering work of Engelke and Hitz (2020) being a notable exception, suitable when all variables are on an equal footing. When explanation for extreme behavior of a particular variable is sought, as would be the case in many hydrological, sociological, and medical applications, quantile regression provides succinct interpretable conclusions. Subtle graphical structure is deducible from a succession of quantile regression analyses by a result of Cox (2007), generalizing insights by Cochran (1938). Besides substantive under- standing furnished by a quantile regression model, coe\ufb03cient estimators enjoy robustness properties in the form of limited sensitivity to anomalous data or leptokurtic tail behavior of the conditional distribution of the outcome. 
The associated non-di\ufb00erentiable loss function, otherwise responsible for tortuously slow computation, necessitates linear programming reformulations, solvable by variants of sim- plex and interior point methods. These algorithms are not compatible with distributed architectures, rendering statistical inference challenging when data are distributed. Even when computation is ignored and the non-di\ufb00erentiable loss function is used directly, both the distributed estimation procedures proposed by Wang et al. (2017), Jordan, Lee and Yang (2019) and Fan, Guo and Wang (2021) and the technical devices used therein are un- available due to their requirements on the loss function. Namely that it be strongly convex and twice di\ufb00erentiable with Lipschitz continuous second derivatives. Two papers by Volgushev, Chao and Cheng (2019) and Chen, Liu and Zhang (2021) are motivated by the challenges of distributed data and the relevance of quantile regression, seeking synthesized estimators of quantile regression coe\ufb03cients. These papers employed the meta-analysis approach, thereby requiring stringent scaling to achieve the desired theoretical guarantees, although with the advantage of requiring a single round of communication. We discuss these works in greater detail in Section 2.4. In addition to the scaling de\ufb01ciencies, a generalization of the simple meta-analysis approach to high-dimensional settings has proved elusive for quantile regression. In sparse high-dimensional linear and generalized linear models, the success of meta-analyses hinges on the ability to de-bias suitably penalized estimators (Lee et al., 2017; Battey et al., 2018). Such de-biased estimators are unavailable for penalized quantile regression, except under the stringent assumption that the regression error is independent of the covariates (Bradic and Kolar, 2017). The present paper operationalizes the ideas of Jordan, Lee and Yang (2019) and Wang et al. (2017) in the context of quantile regression, enabling distributed estimation and inference in low and high-dimensional regimes. The key idea of our proposal is double-smoothing of the local and global approximate loss functions, which requires di\ufb00erent smoothing band- widths to achieve desirable statistical properties. Speci\ufb01cally, our proposed synthesized estimator achieves the optimal statistical rate of convergence by a delicate combination of local and global smoothing, and number of communication rounds. The latter turns out to be small. In the low-dimensional regime, we further detail distributed constructions of con\ufb01dence sets. Among these is one based on a self-normalized reformulation of a score-type statistic. Modulo estimation of the parameter vector, score constructions rewritten as self-normalized sums enjoy a form of linearity that enables synthesis across data sources without information loss. To our knowledge, this work is the \ufb01rst to provide Berry-Esseen type quanti\ufb01cation of 3 Tan, Battey and Zhou distributional approximation errors in a distributed setting, which may be of independent interest. In the high-dimensional regime, the proposed doubly-smoothed local and global objective functions are coupled with an \u21131 penalty to encourage sparse solutions, which we solve using a locally adaptive majorize-minimize algorithm. Theoretically, we show that the resulting estimator is near-optimal under both the \u21131 and \u21132 norms. The results are presented in Section 3. 
Notation: For every integer k ≥ 1, we use R^k to denote the k-dimensional Euclidean space. The inner product of any two vectors u = (u_1, . . . , u_k)^T, v = (v_1, . . . , v_k)^T ∈ R^k is defined by u^T v = ⟨u, v⟩ = Σ_{i=1}^k u_i v_i. We use ∥·∥_p (1 ≤ p ≤ ∞) to denote the ℓ_p-norm in R^k: ∥u∥_p = (Σ_{i=1}^k |u_i|^p)^{1/p} and ∥u∥_∞ = max_{1≤i≤k} |u_i|. Throughout this paper, we use bold capital letters to represent matrices. For k ≥ 2, I_k represents the identity matrix of size k. For any k × k symmetric matrix A ∈ R^{k×k}, ∥A∥_2 is the operator norm of A. For a positive semidefinite matrix A ∈ R^{k×k}, ∥·∥_A denotes the norm induced by A, given by ∥u∥_A = ∥A^{1/2} u∥_2 for u ∈ R^k. Moreover, given r ≥ 0, define the Euclidean ball and sphere in R^k as B^k(r) = {u ∈ R^k : ∥u∥_2 ≤ r} and S^{k−1}(r) = ∂B^k(r) = {u ∈ R^k : ∥u∥_2 = r}, respectively. In particular, S^{k−1} ≡ S^{k−1}(1) denotes the unit sphere. For two sequences of non-negative numbers {a_n}_{n≥1} and {b_n}_{n≥1}, a_n ≲ b_n indicates that there exists a constant C > 0, independent of n, such that a_n ≤ C b_n; a_n ≳ b_n is equivalent to b_n ≲ a_n; and a_n ≍ b_n means that both a_n ≲ b_n and b_n ≲ a_n hold.
The preceding model assumption is equivalent to
y_i = x_i^T β^* + ε_i  and  P(ε_i ≤ 0 | x_i) = τ.   (2.1)
Throughout the paper, we set x_1 ≡ 1 so that β^*_1 denotes the intercept. To avoid notational clutter, the dependence of β^* and ε_i on τ will be suppressed. Given a random sample {(y_i, x_i)}_{i=1}^n, a penalized QR estimator is generally defined as either the global optimum or one of the local optima of the optimization problem
minimize_{β = (β_1, . . . , β_p)^T ∈ R^p}  { Q̂(β) + Σ_{j=1}^p q_λ(|β_j|) },  where  Q̂(β) := (1/n) Σ_{i=1}^n ρ_τ(y_i − x_i^T β),   (2.2)
ρ_τ(u) = u{τ − 1(u < 0)} is the τ-quantile loss, also referred to as the check function, and q_λ(·) : [0, ∞) → [0, ∞) is a sparsity-inducing penalty function parametrized by λ > 0. Due to convexity, the ℓ_1-penalized method, for which q_λ(t) = λt (t ≥ 0), has dominated the literature on high-dimensional statistics. Works in the context of quantile regression include Wang, Li and Jiang (2007), Belloni and Chernozhukov (2011), Bradic, Fan and Wang (2011), Wang (2013), Zheng, Peng and He (2015), and Sivakumar and Banerjee (2017), among others. Various algorithms can be employed to solve the resulting ℓ_1-penalized problem (Bach et al., 2012; Boyd et al., 2010; Koenker et al., 2017; Gu et al., 2018). To alleviate the non-negligible bias induced by the ℓ_1 penalty, folded concave penalties have been used in, for example, Wang, Wu and Li (2012) and Fan, Xue and Zou (2014), leading to non-convex optimization problems. Together, the non-differentiable quantile loss and the non-convex penalty bring fundamental statistical and computational challenges. Statistical theory for non-convex regularized quantile regression is relatively underdeveloped: most of the existing results are established either under stringent minimum signal strength conditions, or for the hypothetical global optimum (or one of the local optima). Motivated by the algorithmic approaches developed by Zou and Li (2008) and Fan et al. (2018), we consider a multistep iterative method that solves a sequence of convex problems, thereby bypassing the computational issues of solving the non-convex problem (2.2) directly. Theoretically, a major difficulty is that the quantile loss is piecewise linear, so that its "curvature energy" is concentrated at a single point. This is in contrast to many popular loss functions considered in the statistical literature, such as the squared, logistic, or Huber loss, which are at least locally strongly convex. Therefore, a proper smoothing scheme that creates smoothness and local strong convexity is the key to the success of the proposed framework.
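Before turning to the smoothing approach, the following minimal sketch illustrates model (2.1) and the ℓ_1 special case of objective (2.2): it generates data whose τ-th conditional quantile is linear in x and evaluates the penalized check-loss objective at a candidate β. The design, sample size, penalty level, and the choice to leave the intercept unpenalized are arbitrary illustrative assumptions.

```python
import numpy as np

def check_loss(u, tau):
    # rho_tau(u) = u * (tau - 1{u < 0})
    return u * (tau - (u < 0))

def penalized_qr_objective(beta, y, X, tau, lam):
    # Q_hat(beta) + lam * ||beta[1:]||_1 (intercept left unpenalized)
    return np.mean(check_loss(y - X @ beta, tau)) + lam * np.sum(np.abs(beta[1:]))

rng = np.random.default_rng(1)
n, p, tau = 500, 10, 0.75
X = np.column_stack([np.ones(n), rng.normal(size=(n, p - 1))])  # x_1 = 1 (intercept)
beta_star = np.zeros(p); beta_star[:4] = [1.0, 2.0, 0.0, -1.5]
eps = rng.normal(size=n)
y = X @ beta_star + (eps - np.quantile(eps, tau))  # centers errors so that, approximately,
                                                   # P(eps_i <= 0 | x_i) = tau
print(penalized_qr_objective(beta_star, y, X, tau, lam=0.05))
```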
2.2 Convolution-type smoothing approach

Let F_{ε|x}(·) be the conditional distribution of ε given x. The population quantile loss can then be written as
Q(β) = E_x { ∫_{−∞}^{∞} ρ_τ(u − ⟨x, β − β^*⟩) dF_{ε|x}(u) },
where E_x(·) is the expectation taken with respect to x. Provided that the conditional distribution F_{ε|x}(·) is sufficiently smooth, Q(β) is twice differentiable and strongly convex in a neighborhood of β^*. For every β ∈ R^p, let F̂(·; β) be the empirical cumulative distribution function (ECDF) of the residuals {r_i(β) := y_i − x_i^T β}_{i=1}^n, i.e., F̂(u; β) = (1/n) Σ_{i=1}^n 1{r_i(β) ≤ u} for any u ∈ R. Then, the empirical quantile loss Q̂(·) in (2.2) can be expressed as
Q̂(β) = ∫_{−∞}^{∞} ρ_τ(u) dF̂(u; β).   (2.3)
Since the ECDF F̂(·; β) is discontinuous, the standard empirical quantile loss Q̂(·) has the same degree of smoothness as ρ_τ(·). This motivates Fernandes, Guerre and Horta (2021) to use a kernel CDF estimator. Given the residuals r_i(β) = y_i − x_i^T β and a smoothing parameter/bandwidth h = h_n > 0, let F̂_h(·; β) be the distribution function of the classical Rosenblatt–Parzen kernel density estimator:
F̂_h(u; β) = ∫_{−∞}^{u} f̂_h(t; β) dt  with  f̂_h(t; β) = (1/n) Σ_{i=1}^n K_h(t − r_i(β)),
where K : R → [0, ∞) is a symmetric, non-negative kernel that integrates to one, and K_h(u) := (1/h)K(u/h) for u ∈ R. Replacing F̂(u; β) in (2.3) with its kernel-smoothed counterpart F̂_h(u; β) yields the following smoothed empirical quantile loss
Q̂_h(β) := ∫_{−∞}^{∞} ρ_τ(u) dF̂_h(u; β) = (1/(nh)) Σ_{i=1}^n ∫_{−∞}^{∞} ρ_τ(u) K((u + x_i^T β − y_i)/h) du.   (2.4)
Define the integrated kernel function K̄ : R → [0, 1] as K̄(u) = ∫_{−∞}^{u} K(t) dt. As will be shown in Section 4.1, the smoothed empirical quantile objective function Q̂_h(β) is twice continuously differentiable with gradient ∇Q̂_h(β) = (1/n) Σ_{i=1}^n {K̄(−r_i(β)/h) − τ} x_i and Hessian matrix ∇²Q̂_h(β) = (1/n) Σ_{i=1}^n K_h(−r_i(β)) x_i x_i^T. Moreover, we will show that the smoothed objective function Q̂_h(·) is strongly convex in a cone local neighborhood of β^* with high probability; see Proposition 4.2.
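The sketch below implements the smoothed loss Q̂_h in (2.4)–(2.5) together with its gradient and Hessian for the Gaussian kernel, for which K̄ = Φ and the closed form of ℓ_h is given in Remark 2.1(ii) below. The finite-difference check at the end, and all data and tuning choices, are assumptions made only for illustration.

```python
import numpy as np
from scipy.stats import norm

def conquer_loss_grad_hess(beta, y, X, tau, h):
    """Smoothed quantile loss (2.4)/(2.5), gradient, and Hessian for the Gaussian kernel."""
    n = X.shape[0]
    r = y - X @ beta                       # residuals r_i(beta)
    v = r / h
    # Closed form of ell_h from Remark 2.1(ii): (h/2) G(r/h) + (tau - 1/2) r,
    # with G(v) = sqrt(2/pi) exp(-v^2/2) + v (1 - 2 Phi(-v)).
    G = np.sqrt(2 / np.pi) * np.exp(-v**2 / 2) + v * (1 - 2 * norm.cdf(-v))
    loss = np.mean(0.5 * h * G + (tau - 0.5) * r)
    # Gradient: (1/n) sum_i {Kbar(-r_i/h) - tau} x_i with Kbar = Phi.
    grad = X.T @ (norm.cdf(-v) - tau) / n
    # Hessian: (1/n) sum_i K_h(-r_i) x_i x_i^T with K_h(u) = phi(u/h)/h.
    w = norm.pdf(v) / h
    hess = (X * w[:, None]).T @ X / n
    return loss, grad, hess

# Finite-difference check of the first gradient coordinate (illustration only).
rng = np.random.default_rng(2)
X = rng.normal(size=(200, 3)); y = X @ np.array([1.0, -1.0, 0.5]) + rng.normal(size=200)
beta, tau, h, eps = np.zeros(3), 0.3, 0.5, 1e-6
loss, grad, _ = conquer_loss_grad_hess(beta, y, X, tau, h)
loss2, _, _ = conquer_loss_grad_hess(beta + np.array([eps, 0.0, 0.0]), y, X, tau, h)
print(grad[0], (loss2 - loss) / eps)   # the two numbers should nearly agree
```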
Remark 2.1. For a given kernel function K(·) and bandwidth h > 0, the smoothed quantile loss Q̂_h(·) defined in (2.4) can be equivalently written as Q̂_h(β) = (1/n) Σ_{i=1}^n ℓ_h(y_i − x_i^T β), where
ℓ_h(u) = (ρ_τ ∗ K_h)(u) = ∫_{−∞}^{∞} ρ_τ(v) K_h(v − u) dv,  u ∈ R.   (2.5)
Here ∗ denotes the convolution operator. To better understand this smoothing mechanism, we compute the smoothed loss ℓ_h = ρ_τ ∗ K_h explicitly for several widely used kernel functions. Recall that ρ_τ(u) = |u|/2 + (τ − 1/2)u.

(i) (Uniform kernel) For the uniform kernel K(u) = (1/2)·1(|u| ≤ 1), which is the density function of the uniform distribution on [−1, 1], the resulting smoothed loss takes the form ℓ_h(u) = (h/2)U(u/h) + (τ − 1/2)u, where U(u) = (u^2/2 + 1/2)·1(|u| ≤ 1) + |u|·1(|u| > 1) is a Huber-type loss. Convolution plays the role of random smoothing in the sense that ℓ_h(u) = (1/2)E|Z_u| + (τ − 1/2)u, where for every u ∈ R, Z_u denotes a random variable uniformly distributed between u − h and u + h.

(ii) (Gaussian kernel) For the Gaussian kernel K(u) = φ(u), the density function of the standard normal distribution, the resulting smoothed loss is ℓ_h(u) = (1/2)E|G_u| + (τ − 1/2)u, where G_u ∼ N(u, h^2). Note that |G_u| follows a folded normal distribution (Leone, Nelson and Nottingham, 1961) with mean E|G_u| = (2/π)^{1/2} h e^{−u^2/(2h^2)} + u{1 − 2Φ(−u/h)}. Hence, the smoothed loss can be written as ℓ_h(u) = (h/2)G(u/h) + (τ − 1/2)u, where G(u) = (2/π)^{1/2} e^{−u^2/2} + u{1 − 2Φ(−u)}.

(iii) (Laplacian kernel) In the case of the Laplacian kernel K(u) = e^{−|u|}/2, we have ℓ_h(u) = ρ_τ(u) + (h/2)e^{−|u|/h}.

(iv) (Logistic kernel) In the case of the logistic kernel K(u) = e^{−u}/(1 + e^{−u})^2, the resulting smoothed loss is ℓ_h(u) = τu + h log(1 + e^{−u/h}).

(v) (Epanechnikov kernel) For the Epanechnikov kernel K(u) = (3/4)(1 − u^2)·1(|u| ≤ 1), the resulting smoothed loss is ℓ_h(u) = (h/2)E(u/h) + (τ − 1/2)u, where E(u) = (3u^2/4 − u^4/8 + 3/8)·1(|u| ≤ 1) + |u|·1(|u| > 1).

2.3 Iteratively reweighted ℓ_1-penalized method

Let {(y_i, x_i)}_{i=1}^n be independent data vectors from the conditional quantile model (2.1) with a sparse target parameter β^* ∈ R^p. Extending the one-step LLA algorithm proposed by Zou and Li (2008), we consider a multi-step, iteratively regularized method as follows. Let q_λ(·) be a prespecified penalty function that is differentiable almost everywhere. Starting at iteration 0 with an initial estimator β̂^(0), for ℓ = 1, 2, . . ., we iteratively update the previous estimator β̂^(ℓ−1) by solving
β̂^(ℓ) = (β̂^(ℓ)_1, . . . , β̂^(ℓ)_p)^T ∈ argmin_{β = (β_1,...,β_p)^T} { Q̂_h(β) + Σ_{j=1}^p q'_λ(|β̂^(ℓ−1)_j|) |β_j| },   (2.6)
where q'_λ(·) is the first-order derivative of q_λ(·), and Q̂_h(·) is the convolution-smoothed quantile objective function defined in (2.4). To avoid notational clutter, we suppress the dependence of {β̂^(ℓ) = β̂^(ℓ)_h(τ, λ)}_{ℓ≥0} on the quantile index τ, bandwidth h, and penalty level λ. The penalty function q_λ(·), or its derivative to be exact, plays the role of producing sparse solutions. We consider a class of penalty functions that satisfies the following conditions.

(A1) The penalty function q_λ is of the form q_λ(t) = λ^2 q(t/λ) for t ≥ 0, where q : [0, ∞) → [0, ∞) satisfies: (i) q is non-decreasing on [0, ∞) with q(0) = 0; (ii) q(·) is differentiable almost everywhere on (0, ∞), 0 ≤ q'(t) ≤ 1 and lim_{t↓0} q'(t) = 1; (iii) q'(t_1) ≤ q'(t_2) for all t_1 ≥ t_2 ≥ 0.

Examples of penalties that satisfy Condition (A1) include:

1. ℓ_1-penalty: q(t) = |t|.
In this case, q′(t) = 1 for all t > 0. Therefore, β̂^(1) defined in (2.6) with ℓ = 1 is the ℓ1-penalized SQR estimator, and the procedure stops after the first step.
2. Smoothly clipped absolute deviation (SCAD) penalty (Fan and Li, 2001): the function q(·) is defined through its derivative q′(t) = 1(t ≤ 1) + {(a − t)_+/(a − 1)} 1(t > 1) for t ≥ 0 and some a > 2, with q(0) = 0. Fan and Li (2001) suggested a = 3.7 based on a Bayesian argument.
3. Minimax concave penalty (MCP) (Zhang, 2010a): the function q(·) is defined through its derivative q′(t) = (1 − t/a)_+ for t ≥ 0 and some a ≥ 1, with q(0) = 0.
4. Capped-ℓ1 penalty (Zhang, 2010b): q(t) = min(a/2, t) and q′(t) = 1(t ≤ a/2) for t ≥ 0 and some a ≥ 1.
If we start the multi-step procedure with any penalty q_λ that satisfies Condition (A1) and the trivial initialization β̂^(0) = 0, then q′_λ(|β̂_j^(0)|) = q′_λ(0) = λ for j = 1, . . . , p, and hence the first step is essentially computing an ℓ1-penalized smoothed QR estimator. At each subsequent iteration, the subproblem (2.6) can be expressed as a weighted ℓ1-penalized smoothed quantile loss minimization:
minimize_{β∈R^p} { Q̂_h(β) + ∥λ ∘ β∥_1 },   (2.7)
where λ = (λ_1, . . . , λ_p)^T is a p-vector of regularization parameters with λ_j ≥ 0, and ∘ denotes the Hadamard product. We summarize this iteratively reweighted ℓ1-penalized method in Algorithm 1.
Algorithm 1 Iteratively Reweighted ℓ1-Penalized Smoothed QR.
Input: data vectors {(y_i, x_i)}_{i=1}^n, quantile index τ ∈ (0, 1), bandwidth h > 0, and an initial estimator β̂^(0) ∈ R^p. For ℓ = 1, 2, . . ., repeat
1. Set λ_j^(ℓ−1) = q′_λ(|β̂_j^(ℓ−1)|) for j = 1, . . . , p;
2. Compute β̂^(ℓ) ∈ argmin_{β∈R^p} { Q̂_h(β) + ∥λ^(ℓ−1) ∘ β∥_1 };   (2.8)
until convergence.
In Section 4, we will establish non-asymptotic statistical theory for the sequence of estimators {β̂^(ℓ)}_{ℓ≥0} initialized with β̂^(0) = 0 when the penalty q_λ(t) = λ^2 q(t/λ) obeys Condition (A1). To reduce the (regularization) bias when the signal is sufficiently strong, we are particularly interested in concave penalties q(·), which not only satisfy Condition (A1) but also have a redescending derivative, i.e., q′(t) = 0 for all sufficiently large t. Another widely applicable idea for bias reduction is the adaptive Lasso (Zou, 2006), which is a one-step procedure that solves, in the context of quantile regression,
β̃ ∈ argmin_{β∈R^p} { Q̂(β) + λ Σ_{j=1}^p w(|β̃_j^(0)|) |β_j| },   (2.9)
where β̃^(0) = (β̃_1^(0), . . . , β̃_p^(0))^T is an initial estimator of β*, say the ℓ1-QR (or QR-Lasso) estimator (Belloni and Chernozhukov, 2011), and w(t) := t^{−γ} for t > 0 and some γ > 0. Note that the weight function λw(·) for the adaptive Lasso is quite different from q′_λ(·) = λq′(·/λ) in (2.6). As discussed in Fan and Lv (2008), in contrast to the concave penalties such as SCAD and MCP, a drawback of the adaptive Lasso weighting is that zero is an absorbing state: once a coefficient is shrunk to zero, it will remain zero throughout the remaining iterations.
As a result, any true positive that is left out by the initial Lasso estimator will be missed in the second stage as well. The aforementioned is an important phenomenon which was empirically veri\ufb01ed by Fan et al. (2018). 8 Remark 2.2. In practice, it is common to leave a subset of parameters, such as the intercept and coe\ufb03cients which correspond to features that are already viewed relevant, unpenalized throughout the multi-step procedure (2.6). Given a predetermined index set R \u2286[p], we can modify Algorithm 1 by taking \u03bb(\u2113) = (\u03bb(\u2113) 1 , . . . , \u03bb(\u2113) p )T (\u2113\u22650) to be \u03bb(\u2113) j = 0 for j \u2208R and \u03bb(\u2113) j = q\u2032 \u03bb(|b \u03b2(\u2113) j |) for j < R. Theoretically, we will study the sequence of estimates {b \u03b2(\u2113)}\u2113\u22651 obtained from Algorithm 1 because a special treatment of leaving parameters indexed by R unpenalized only makes things more convoluted and does not bring new insights from a theoretical viewpoint. 3 Algorithm As discussed in Section 2.3, the multi-step convex relaxation method leads to a sequence of iteratively reweighted \u21131-penalized problems. Computationally, it su\ufb03ces to develop e\ufb03cient algorithms for solving the convex problem (2.8). For several commonly used kernels, explicit forms of the smoothed check loss functions are given in Remark 2.1. In the following sections, we present specialized algorithms for two representative kernel functions: the uniform kernel and the Gaussian kernel. 3.1 A coordinate descent algorithm for uniform kernel First we describe a coordinate descent algorithm for solving (2.8) with the uniform kernel, i.e., K(u) = 1/2 for |u| \u22641. The coordinate descent algorithm is an iterative method that minimizes the objective function with respect to one variable at a time while \ufb01xing the other variables. To implement the algorithm, we calculate the partial derivative of the loss function in (2.8) with respect to each variable, and derive the corresponding update for each variable while keeping the others \ufb01xed. The gradient of the loss function in (2.8) involves \u00af K(\u00b7). For the uniform kernel, we have \u00af K \u0012xT i \u03b2 \u2212yi h \u0013 = \uf8f1 \uf8f4 \uf8f4 \uf8f4 \uf8f4 \uf8f4 \uf8f4 \uf8f2 \uf8f4 \uf8f4 \uf8f4 \uf8f4 \uf8f4 \uf8f4 \uf8f3 1 if xT i \u03b2 \u2212yi \u2265h, 1 2 \u0010 xT i \u03b2\u2212yi h + 1 \u0011 if |xT i \u03b2 \u2212yi| \u2264h, 0 if xT i \u03b2 \u2212yi \u2264\u2212h. Let C1 = {i : xT i \u03b2 \u2212yi \u2264\u2212h}, C2 = {i : |xT i \u03b2 \u2212yi| \u2264h}, and C3 = {i : xT i \u03b2 \u2212yi \u2265h}. Then, the \ufb01rst-order optimality condition of minimizing \u03b2j \u2192b Qh(\u03b2) + \u2225\u03bb(\u2113\u22121) \u25e6\u03b2\u2212\u22251 can be written as \u2212\u03c4 n X i=1 xij + 1 2 X i\u2208C2 xij + X i\u2208C3 xi j + 1 2h X i\u2208C2 (xT i \u03b2 \u2212yi)xi j + n\u03bb(\u2113\u22121) j b z j = 0, whereb zj \u2208\u2202|b \u03b2j| is the subgradient. This leads to the following closed-form solution for b \u03b2j: b \u03b2j = S \uf8f1 \uf8f4 \uf8f4 \uf8f4 \uf8f2 \uf8f4 \uf8f4 \uf8f4 \uf8f3 2h\u03c4 Pn i=1 xij \u22122h Pn i\u2208C3 xij \u2212h Pn i\u2208C2 xi j + P i\u2208C2 xi j(yi \u2212\u27e8xi,\u2212j, \u03b2\u2212j\u27e9) P i\u2208C2 x2 i j , 2nh\u03bb(\u2113\u22121) j P i\u2208C2 x2 i j \uf8fc \uf8f4 \uf8f4 \uf8f4 \uf8fd \uf8f4 \uf8f4 \uf8f4 \uf8fe, where S (a, b) = sign(a) max(|a| \u2212b, 0) denotes the soft-thresholding operator. 
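To make the closed-form update above concrete, here is a minimal sketch, in Python, of a single coordinate update for the uniform-kernel smoothed loss; the function names and the handling of the degenerate case (no observations in the smoothing band) are our own choices rather than part of the paper's implementation. Algorithm 2 below simply wraps this update in a full coordinate-descent loop.

```python
import numpy as np

def soft_threshold(a, b):
    """S(a, b) = sign(a) * max(|a| - b, 0)."""
    return np.sign(a) * np.maximum(np.abs(a) - b, 0.0)

def update_coordinate(j, beta, X, y, tau, h, lam):
    """One coordinate update of beta_j for the uniform-kernel smoothed quantile
    loss with weighted l1 penalty; lam is the vector of per-coordinate
    penalties lambda_j^{(l-1)} appearing in (2.8)."""
    n = X.shape[0]
    r = X @ beta - y                      # signed residuals x_i^T beta - y_i
    C2 = np.abs(r) <= h                   # |x_i^T beta - y_i| <= h
    C3 = r >= h                           # x_i^T beta - y_i >= h
    xj = X[:, j]
    denom = np.sum(xj[C2] ** 2)
    if denom == 0.0:                      # no observation in the smoothing band
        return beta[j]
    # partial residuals excluding coordinate j: y_i - <x_{i,-j}, beta_{-j}>
    partial = y - (X @ beta - xj * beta[j])
    numer = (2 * h * tau * np.sum(xj)
             - 2 * h * np.sum(xj[C3])
             - h * np.sum(xj[C2])
             + np.sum(xj[C2] * partial[C2]))
    return soft_threshold(numer / denom, 2 * n * h * lam[j] / denom)
```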
Therefore, a solution of (2.8) can be obtained by iteratively updating each β̂_j until convergence. The details are summarized in Algorithm 2.
Algorithm 2 Coordinate Descent Algorithm for Solving (2.8) with Uniform Kernel.
Input quantile level τ, smoothing parameter h, regularization parameter λ^(ℓ−1), and convergence criterion ε. Initialization β̂^(0) = 0. Iterate the following until the stopping criterion ∥β̂^(t) − β̂^(t−1)∥_2 ≤ ε is met, where β̂^(t) is the value of β obtained at the tth iteration. That is, for each j = 1, . . . , p:
1. Set C1 = {i : x_i^T β − y_i ≤ −h}, C2 = {i : |x_i^T β − y_i| ≤ h}, and C3 = {i : x_i^T β − y_i ≥ h}, where β denotes the updated solution at the current iteration; these sets are the same as those used in the derivation above.
2. Set β̂_j^(t) = S( [2hτ Σ_{i=1}^n x_{ij} − 2h Σ_{i∈C3} x_{ij} − h Σ_{i∈C2} x_{ij} + Σ_{i∈C2} x_{ij}(y_i − ⟨x_{i,−j}, β_{−j}⟩)] / Σ_{i∈C2} x_{ij}^2 , 2nh λ_j^(ℓ−1) / Σ_{i∈C2} x_{ij}^2 ), where S(a, b) = sign(a) max(|a| − b, 0) is the soft-thresholding operator.
Output the estimated parameter β̂^(t).
Compared to existing algorithms for solving ℓ1-regularized quantile regression, Algorithm 2 is computationally efficient, especially for large-scale problems. Its computational complexity is similar to that of the coordinate descent algorithm for the Lasso.
3.2 An alternating direction method of multipliers algorithm for Gaussian kernel
Next we consider smoothing via the Gaussian kernel. In this case, we have K̄((x_i^T β − y_i)/h) = Φ((x_i^T β − y_i)/h), where Φ(·) is the cumulative distribution function of the standard normal distribution. The coordinate descent approach of the previous section can no longer be employed, at least not directly, to solve (2.8), since minimizing β_j ↦ Q̂_h(β) + ∥λ^(ℓ−1) ∘ β∥_1 admits no closed-form solution under the Gaussian kernel. To address this issue, we introduce an alternating direction method of multipliers (ADMM) algorithm that solves (2.8) by decoupling terms that are difficult to optimize jointly. A similar approach has been considered in Gu et al. (2018) for solving standard quantile regression with ℓ1-regularization. Let r = (r_1, . . . , r_n)^T with r_i = y_i − ⟨x_i, β⟩. Optimization problem (2.8) can then be rewritten as
minimize_{β∈R^p, r∈R^n} { Q̂_h(r) + ∥λ^(ℓ−1) ∘ β∥_1 }, subject to r = y − Xβ.   (3.1)
The augmented Lagrangian for (3.1) is
L_ρ(β, r, η) = Q̂_h(r) + ∥λ^(ℓ−1) ∘ β∥_1 + ⟨η, r − y + Xβ⟩ + (ρ/2)∥r − y + Xβ∥_2^2,   (3.2)
where η is the Lagrange multiplier and ρ is a tuning parameter for the ADMM algorithm. Updates for the ADMM are derived by minimizing over each block of variables while keeping the others fixed. We summarize the details in Algorithm 3. The update for β involves solving a Lasso regression problem, for which efficient software is available. Alternatively, one can also linearize the loss function as in Gu et al. (2018) to obtain
Algorithm 3 ADMM Algorithm for Solving (2.8) with Gaussian Kernel.
Input quantile parameter \u03c4, smoothing parameter h, regularization parameter \u03bb(\u2113\u22121), and the convergence criterion \u03f5. Initialize the primal variables b \u03b2(0) = b r(0) = 0 and the dual variable b \u03b7(0) = 0. Iterate the following until the stopping criterion \u2225b \u03b2(t) \u2212b \u03b2(t\u22121)\u22252 \u2264\u03f5 is met: 1. Update \u03b2 as b \u03b2(t) = argmin \u03b2\u2208Rp \uf8f1 \uf8f4 \uf8f4 \uf8f2 \uf8f4 \uf8f4 \uf8f3 \u03c1 2 \r \r \r \r \r \ry \u2212b r(t\u22121) \u22121 \u221a\u03c1 b \u03b7(t\u22121) \u2212X\u03b2 \r \r \r \r \r \r 2 2 + \u2225\u03bb(\u2113\u22121) \u25e6\u03b2\u2212\u22251 \uf8fc \uf8f4 \uf8f4 \uf8fd \uf8f4 \uf8f4 \uf8fe. 2. Iterate the following until convergence: for each i = 1, . . . , n, update ri by solving \u03c4 \u2212\u03a6 \u0012\u2212ri h \u0013 +b \u03b7(t\u22121) i + \u03c1\u0000ri \u2212yi + \u27e8xi, b \u03b2(t)\u27e9\u0001 = 0. 3. Update \u03b7 as b \u03b7(t) = b \u03b7(t\u22121) + \u03c1\u0000b r(t) \u2212y + Xb \u03b2(t)\u0001. Output the estimated parameter b \u03b2(t). a closed-form solution. The updates for r can be obtained using coordinate descent algorithm by updating each coordinate of r using standard numerical methods such as the bisection method. See Algorithm 3 for details. 4 Statistical theory In this section, we provide a comprehensive analysis of the sequence of regularized quatile regression estimators {b \u03b2(\u2113)}\u2113\u22651 obtained by solving (2.6) iteratively, initialized with b \u03b2(0) = 0. For simplicity, we restrict our attention to a \ufb01xed quantile level \u03c4 \u2208(0, 1) of interest. We \ufb01rst characterize the (deterministic) bias induced by convolution smoothing described in Section 4.1. In Section 4.2, we provide high probability bounds (under \u21131and \u21132-errors) for the one-step estimator b \u03b2(1), i.e., the \u21131-penalized smoothed QR estimator (\u21131-SQR) which is of independent interest. With a \ufb02exible choice of the bandwidth h, these error bounds for b \u03b2(1) are near-minimax optimal (Wang and He, 2021), and coincide with those of the \u21131-QR estimator Belloni and Chernozhukov (2011). In Section 4.3, we analyze b \u03b2(\u2113) (\u2113\u22652) whose overall estimation error consists of three parts: shrinkage bias, oracle rate, and smoothing bias. Our analysis reveals that the multi-step iterative algorithm re\ufb01nes the statistical rate in a sequential manner: every relaxation step shrinks the estimation error from the previous step by a \u03b4-fraction for some \u03b4 \u2208(0, 1). Under a necessary beta-min condition, we show that the multi-step estimator b \u03b2(\u2113) with \u2113\u2273log{log(p)} achieves the oracle rate of convergence, i.e., it shares the convergence rate of the oracle estimator that has access to the true active set. Under a sub-Gaussian condition on the feature vector and a stronger sample size requirement, we further show in Section 4.4 that the multi-step estimator b \u03b2(\u2113) with \u2113\u2273log(s) coincides with the oracle estimator with high probability, and hence achieves variable selection consistency. Throughout, we use the notation \u201c\u2272\u201d to indicate \u201c\u2264\u201d up to constants that are independent of (s, p, n). 11 4.1 Smoothing bias To begin with, note that the smoothed quantile objective b Qh(\u00b7) de\ufb01ned in (2.4) can be written as b Qh(\u03b2) = (1 \u2212\u03c4) Z 0 \u2212\u221e b Fh(u; \u03b2) du + \u03c4 Z \u221e 0 {1 \u2212b Fh(u; \u03b2)} du. 
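Before moving on to the theory, the following small sketch, which is ours and not the authors' code, illustrates the coordinate-wise r-update in step 2 of Algorithm 3: for each i, the left-hand side of the displayed equation is strictly increasing in r_i, so a simple bracket-and-bisect search suffices. The bracket-expansion logic and the tolerance are arbitrary illustrative choices.

```python
import numpy as np
from scipy.stats import norm

def admm_r_update(y, Xbeta, eta, tau, h, rho, tol=1e-8):
    """Solve, for each i, tau - Phi(-r_i/h) + eta_i + rho*(r_i - y_i + x_i^T beta) = 0
    by bisection; the left-hand side is strictly increasing in r_i."""
    def g(r, i):
        return tau - norm.cdf(-r / h) + eta[i] + rho * (r - y[i] + Xbeta[i])

    r_new = np.empty_like(y)
    for i in range(len(y)):
        lo, hi = -1.0, 1.0
        while g(lo, i) > 0:      # expand the lower end until g(lo) <= 0
            lo *= 2.0
        while g(hi, i) < 0:      # expand the upper end until g(hi) >= 0
            hi *= 2.0
        while hi - lo > tol:     # bisection on the monotone function
            mid = 0.5 * (lo + hi)
            if g(mid, i) > 0:
                hi = mid
            else:
                lo = mid
        r_new[i] = 0.5 * (lo + hi)
    return r_new
```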
Recall the integrated kernel function \u00af K(u) = R u \u2212\u221eK(t) dt, which is non-decreasing and takes values in [0, 1]. With ri(\u03b2) = yi \u2212xT i \u03b2, the gradient vector and Hessian matrix of b Qh(\u03b2) are, respectively, \u2207b Qh(\u03b2) = 1 n n X i=1 \b \u00af K\u0000\u2212ri(\u03b2)/h\u0001 \u2212\u03c4\txi and \u22072 b Qh(\u03b2) = 1 n n X i=1 Kh(\u2212ri(\u03b2))xixT i . (4.1) To examine the bias induced by smoothing, de\ufb01ne the expected smoothed loss function Qh(\u03b2) = E{b Qh(\u03b2)}, \u03b2 \u2208Rp, and the pseudo parameter \u03b2\u2217 h = (\u03b2\u2217 h,1, . . . , \u03b2\u2217 h,p)T \u2208argmin \u03b2\u2208Rp Qh(\u03b2), (4.2) which is the population minimizer of the smoothed quantile loss and varies with h. In general, \u03b2\u2217 h di\ufb00ers from \u03b2\u2217\u2013 the unknown parameter vector in model (2.1). The latter is identi\ufb01ed as the unique minimizer of the population quantile objective Q(\u03b2) := E{b Q(\u03b2)}. However, as the smoothed quantile loss \u2113h(\u00b7) in (2.5) approximates the quantile loss \u03c1\u03c4(\u00b7) as h = hn \u21920, \u03b2\u2217 h is expected to converge to \u03b2\u2217, and we refer to \u2225\u03b2\u2217 h \u2212\u03b2\u2217\u22252 as the approximation error or bias due to smoothing. The following result provides upper bounds of the smoothing bias under mild conditions on the random covariates x \u2208Rp, the conditional density of \u03b5 given x, and the kernel function. Throughout Section 4, we assume that the second moment \u03a3 = (\u03c3 jk)1\u2264j,k\u2264p = E(xxT) of x = (x1, . . . , xp)T (with x1 \u22611) exists and is positive de\ufb01nite. Moreover, let \u03b31 = \u03b31(\u03a3) \u22651, \u03b3p = \u03b3p(\u03a3) \u2208(0, 1], and \u03c32 x = max1\u2264j\u2264p \u03c3 jj. (B1) The conditional density of \u03b5 given x, denoted by f\u03b5|x, satis\ufb01es fl \u2264f\u03b5|x(0) \u2264fu almost surely (over x) for some fu \u2265fl > 0. Moreover, there exists a constant l0 > 0 such that |f\u03b5|x(u) \u2212f\u03b5|x(v)| \u2264l0|u \u2212v| for all u, v \u2208R almost surely (over x). (B2) The kernel function K : R \u2192[0, \u221e) is symmetric around zero, and satis\ufb01es R \u221e \u2212\u221eK(u) du = 1 and R \u221e \u2212\u221eu2K(u) du < \u221e. For \u2113= 1, 2, . . ., let \u03ba\u2113= R \u221e \u2212\u221e|u|\u2113K(u) du be the \u2113-th absolute moment of K(\u00b7). Proposition 4.1. Assume that Conditions (B1) and (B2) hold, and \u00b53 := supu\u2208Sp\u22121 E|zTu|3 < \u221e with z = \u03a3\u22121/2x. Provided 0 < h < fl/(c0l0), \u03b2\u2217 h is the unique minimizer of \u03b2 7\u2192Qh(\u03b2) and satis\ufb01es \u2225\u03b2\u2217 h \u2212\u03b2\u2217\u2225\u03a3 \u2264c0l0 f \u22121 l h2, (4.3) where c0 = (\u00b53 + \u03ba2)/2 + \u03ba1. In addition, assume \u03ba3 < \u221eand f\u03b5|x has an l1-Lipschitz continuous derivative almost everywhere for some l1 > 0. Then \r \r \r \r \r\u03a3\u22121J(\u03b2\u2217 h \u2212\u03b2\u2217) + 1 2\u03ba2h2 \u00b7 \u03a3\u22121E\b f \u2032 \u03b5|x(0)x\t\r \r \r \r \r\u03a3 \u2264Ch3, (4.4) where J = E{f\u03b5|x(0) \u00b7 xxT}, and C > 0 depends only on (fl, l0, l1, \u00b53) and the kernel K. 12 Proposition 4.1 is a non-asymptotic version of Theorem 1 in Fernandes, Guerre and Horta (2021), and explicitly captures the dependence of the bias on several model-based quantities. 
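As a quick sanity check of the expressions in (4.1), the following sketch evaluates the smoothed gradient and Hessian directly for the Gaussian kernel; it is our own illustrative code, not an optimized implementation.

```python
import numpy as np
from scipy.stats import norm

def smoothed_qr_grad_hess(beta, X, y, tau, h):
    """Gradient and Hessian of the Gaussian-kernel smoothed quantile loss,
    following (4.1): with r_i = y_i - x_i^T beta,
      grad = (1/n) * sum_i {Kbar(-r_i/h) - tau} x_i,
      hess = (1/n) * sum_i K_h(-r_i) x_i x_i^T,
    where Kbar = Phi and K_h(u) = phi(u/h)/h for the Gaussian kernel."""
    n = X.shape[0]
    r = y - X @ beta
    grad = X.T @ (norm.cdf(-r / h) - tau) / n
    weights = norm.pdf(r / h) / h            # K_h(-r_i) = K_h(r_i) by symmetry
    hess = (X * weights[:, None]).T @ X / n
    return grad, hess
```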
Note that the p \u00d7 p matrix J = E{f\u03b5|x(0) \u00b7 xxT} is the Hessian of the population quantile objective Q(\u00b7) evaluated at \u03b2\u2217, i.e., J = \u22072Q(\u03b2\u2217). Under Condition (B1), fl\u03b3p(\u03a3) \u2264\u03b3p(J) \u2264\u03b31(J) \u2264fu\u03b31(\u03a3). An interesting implication of Proposition 4.1 is that, when both f\u03b5|x(0) and f \u2032 \u03b5|x(0) are independent of x (i.e., f\u03b5|x(0) = f\u03b5(0) and f \u2032 \u03b5|x(0) = f \u2032 \u03b5(0)), the bias decomposition bound (4.4) simpli\ufb01es to \r \r \r \r \r \r f\u03b5(0)(\u03b2\u2217 h \u2212\u03b2\u2217) + 0.5 f \u2032 \u03b5(0)\u03ba2h2 \uf8ee \uf8ef \uf8ef \uf8ef \uf8ef \uf8f0 1 0p\u22121 \uf8f9 \uf8fa \uf8fa \uf8fa \uf8fa \uf8fb \r \r \r \r \r \r\u03a3 \u2264Ch3. In other words, the smoothing bias is concentrated primarily on the intercept. To some extent, this observation further certi\ufb01es the bene\ufb01t of smoothing in variable selection of which the main focus is on the slope coe\ufb03cients rather than the intercept. 4.2 \u21131-penalized smoothed quantile regression Given a bandwidth h > 0 and a regularization parameter \u03bb > 0, let b \u03b2h = b \u03b2h(\u03c4, \u03bb) be the \u21131-penalized SQR (\u21131-SQR) estimator, de\ufb01ned as the solution to the following convex optimization problem: min \u03b2\u2208Rp \bb Qh(\u03b2) + \u03bb\u2225\u03b2\u22251 \t. (4.5) In this section, we characterize the estimation error of b \u03b2h \u2208Rp under \u21132and \u21131-norms. First we impose a moment condition on the (random) covariate vector x = (x1, . . . , xp)T \u2208Rp with x1 \u22611. Without loss of generality, assume \u00b5 j = E(xj) = 0 for 2 \u2264j \u2264p; otherwise, consider a change of variable (\u03b21, \u03b22, . . . , \u03b2p)T 7\u2192(\u03b21 + Pp j=2 \u00b5 j\u03b2 j, \u03b22, . . . , \u03b2p)T so that the obtained results apply to model F\u22121 y|x(\u03c4) = \u03b2\u266d 0 + Pp j=2(xj \u2212\u00b5 j)\u03b2\u2217 j, where \u03b2\u266d 0 = \u03b2\u2217 0 + Pp j=2 \u00b5 j\u03b2\u2217 j. (B3) \u03a3 = E(xxT) is positive de\ufb01nite and z = \u03a3\u22121/2x \u2208Rp is sub-exponential: there exist constants \u03c50, c0 \u22651 such that P(|zTu| \u2265\u03c50\u2225u\u22252 \u00b7 t) \u2264c0e\u2212t for all u \u2208Rp and t \u22650. For convenience, we assume c0 = 1, and write \u03c32 x = max1\u2264j\u2264p E(x2 j). Moreover, for r, l > 0, de\ufb01ne the (rescaled) \u21132-ball and \u21131-cone as B\u03a3(r) = {\u03b4 \u2208Rp : \u2225\u03b4\u2225\u03a3 \u2264r} and C\u03a3(l) = \b\u03b4 \u2208Rp : \u2225\u03b4\u22251 \u2264l\u2225\u03b4\u2225\u03a3 \t. (4.6) Our theoretical analysis of the \u21131-SQR estimator depends crucially on the following \u201cgood\u201d event, which is related to the local restricted strong convexity (RSC) of the empirical smoothed quantile loss function. We refer the reader to Negahban et al. (2012) and Loh and Wainwright (2015) for detailed discussions of the restricted strong convexity for regularized M-estimation in high dimensions. De\ufb01nition 4.1. (Local Restricted Strong Convexity) Given radius parameters r, l > 0 and a curvature parameter \u03ba > 0, de\ufb01ne the event Ersc(r, l, \u03ba) = \uf8f1 \uf8f4 \uf8f4 \uf8f2 \uf8f4 \uf8f4 \uf8f3 \u27e8\u2207b Qh(\u03b2) \u2212\u2207b Qh(\u03b2\u2217), \u03b2 \u2212\u03b2\u2217\u27e9 \u2225\u03b2 \u2212\u03b2\u2217\u22252 \u03a3 \u2265\u03ba for all \u03b2 \u2208\u03b2\u2217+ B\u03a3(r) \u2229C\u03a3(l) \uf8fc \uf8f4 \uf8f4 \uf8fd \uf8f4 \uf8f4 \uf8fe. 
(4.7) 13 Our \ufb01rst result shows that, with suitably chosen (r, l, \u03ba), the event Ersc(r, l, \u03ba) occurs with high probability. In order for the local RSC condition to hold, the radius parameter r has to be of the same order as, or possibly smaller than the bandwidth h. Proposition 4.2. Assume Conditions (B1)\u2013(B3) hold, and \u03bal = min|u|\u22641 K(u) > 0. Moreover, let (r, l, h) and n satisfy 20\u03c52 0 r \u2264h \u2264fl/(2l0) and n \u2265C\u03c32 x fu f \u22122 l (l/r)2h log(2p) (4.8) for a su\ufb03ciently large constant C. Then, the local RSC event Ersc(r, l, \u03ba) with \u03ba = (\u03bal fl)/2 occurs with probability at least 1 \u2212(2p)\u22121. Remark 4.1. We do not claim that the values of the constants appearing in Proposition 4.2 are optimal. They result from non-asymptotic probabilistic bounds which re\ufb02ect worst-case scenarios. The condition min|u|\u22641 K(u) > 0 is only for theoretical and notational convenience. If the kernel K(\u00b7) is compactly supported on [\u22121, 1], we may rescale it to obtain Ka(u) = (1/a)K(u/a) for some a > 1. Then, Ka(\u00b7) is supported on [\u2212a, a] with min|u|\u22641 K(u) > 0. For example, (i) (Gaussian kernel) if K(u) = (2\u03c0)\u22121/2e\u2212u2/2 is the Gaussian kernel, we have \u03bal = (2\u03c0e)\u22121/2 \u2248 0.242 and \u03ba2 = 1; (ii) (Uniform kernel) if K(u) = (1/2)1(|u| \u22641) is the uniform kernel, we may consider its rescaled version K3/2(u) = (1/3)1(|u| \u22643/2). In this case, \u03bal = 1/3 and \u03ba2 = 3/4. Throughout, we view (\u03bal, \u03ba2) as absolute constants. Theorem 4.1. Under the conditional quantile model (2.1) with \u03b2\u2217\u2208Rp being s-sparse, assume Conditions (B1)\u2013(B3) hold with \u03bal = min|u|\u22641 K(u) > 0. Then, the \u21131-SQR estimator b \u03b2 = b \u03b2h with \u03bb \u224d\u03c3x p \u03c4(1 \u2212\u03c4) log(p)/n satis\ufb01es the bounds \u2225b \u03b2 \u2212\u03b2\u2217\u22252 \u2264C1 f \u22121 l s1/2\u03bb and \u2225b \u03b2 \u2212\u03b2\u2217\u22251 \u2264C2 f \u22121 l s\u03bb (4.9) with probability at least 1 \u2212p\u22121, provided that the bandwidth satis\ufb01es max \u03c3x fl r s log p n , \u03c32 x fu f 2 l s log p n ! \u2272h \u2264min \bfl/(2l0), (s1/2\u03bb)1/2\t, where the constants C1,C2 > 0 depend only on (l0, \u03c50, \u03b3p, \u03bal, \u03ba2). The above theorem shows that with a proper yet \ufb02exible choice of the bandwidth, the \u21131penalized smoothed QR estimator achieves the same rate of convergence as the \u21131-QR estimator under both \u21131and \u21132-errors (Belloni and Chernozhukov, 2011). Technically, we assume the random feature vector is sub-exponential, which is arguably the weakest moment condition in highdimensional regression analysis under random design (Wainwright, 2019). This preliminary result is of independent interest, and more importantly, it paves the way for further analysis of smoothed quantile regression with iteratively reweighted \u21131-regularization. 14 4.3 Concave regularization and oracle rate of convergence In this section, we derive rates of convergence for the solution path {b \u03b2(\u2113)}\u2113=1,2,... of the multi-step iterative algorithm de\ufb01ned in (2.6). Starting from b \u03b2(0) = 0, we note that b \u03b2(1) is exactly the \u21131-SQR estimator studied in the previous section; see Theorem 4.1. 
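Before stating the results for the later iterates, here is a self-contained sketch of the multi-step procedure in Algorithm 1 with SCAD-derivative weights. It is our own illustration: the inner weighted-ℓ1 problems are solved by a plain proximal-gradient loop on the Gaussian-kernel smoothed loss rather than by the coordinate descent or ADMM routines of Section 3, and the step size and iteration counts are ad hoc choices.

```python
import numpy as np
from scipy.stats import norm

def scad_deriv(t, lam, a=3.7):
    """q'_lambda(t) = lam*1(t <= lam) + (a*lam - t)_+/(a - 1)*1(t > lam), t >= 0."""
    t = np.abs(t)
    return np.where(t <= lam, lam, np.maximum(a * lam - t, 0.0) / (a - 1.0))

def prox_grad_weighted_l1(X, y, tau, h, w, beta0, step, n_iter=500):
    """Proximal gradient for min_beta Qhat_h(beta) + sum_j w_j |beta_j|,
    with the Gaussian-kernel smoothed loss."""
    n = X.shape[0]
    beta = beta0.copy()
    for _ in range(n_iter):
        r = y - X @ beta
        grad = X.T @ (norm.cdf(-r / h) - tau) / n
        z = beta - step * grad
        beta = np.sign(z) * np.maximum(np.abs(z) - step * w, 0.0)
    return beta

def irw_sqr(X, y, tau, h, lam, n_steps=3, step=None):
    """Iteratively reweighted l1-penalized smoothed QR (sketch of Algorithm 1)."""
    n, p = X.shape
    if step is None:
        # crude step size: the Hessian operator norm is at most ||X||^2/(n*h*sqrt(2*pi))
        step = n * h * np.sqrt(2 * np.pi) / np.linalg.norm(X, 2) ** 2
    beta = np.zeros(p)
    for _ in range(n_steps):
        w = scad_deriv(beta, lam)   # lambda_j^{(l-1)} = q'_lam(|beta_j^{(l-1)}|)
        beta = prox_grad_weighted_l1(X, y, tau, h, w, beta, step)
    return beta
```

With the zero initialization, the first pass uses equal weights λ and therefore reproduces the ℓ1-SQR estimator of the previous section; subsequent passes downweight coordinates whose current estimates exceed aλ in magnitude.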
For subsequent b \u03b2(\u2113)\u2019s, we \ufb01rst state the result as a deterministic claim in Theorem 4.2, but conditioned on some \u201cgood\u201d event regarding the local RSC property and the gradient of b Qh(\u00b7) at \u03b2\u2217. Under Condition (B3) on the random covariate vector, probabilistic claims enter in certifying that this \u201cgood\u201d event holds with high probability with a suitable choice of \u03bb and h; see Theorem 4.3. Recall the event Ersc(r, l, \u03ba) de\ufb01ned in (4.7) on which a local RSC property of the smoothed quantile objective b Qh(\u00b7) holds, where \u03ba is a curvature parameter. Moreover, de\ufb01ne w\u2217 h = wh(\u03b2\u2217) \u2208Rp and b\u2217 h = \u2225\u03a3\u22121/2\u2207Qh(\u03b2\u2217)\u22252, (4.10) where wh(\u03b2) = \u2207b Qh(\u03b2) \u2212\u2207Qh(\u03b2) is the centered score function, and b\u2217 h \u22650 quanti\ufb01es the bias induced by smoothing. For the standard quantile loss, we have \u2207Q(\u03b2\u2217) = 0. Under Conditions (B1) and (B2), examine the proof of Proposition 4.1 yields b\u2217 h \u2264l0\u03ba2h2/2, that is, the smoothing bias has magnitude of the order h2. To re\ufb01ne the statistical rate obtained in Theorem 4.1, which is near-minimax optimal for estimating sparse targets, we need an additional beta-min condition on \u2225\u03b2\u2217 S\u2225min = minj\u2208S |\u03b2\u2217 j|, where S = {1 \u2264j \u2264p : \u03b2\u2217 j , 0} is the active set of \u03b2\u2217. For a deterministic analysis, we \ufb01rst derive the contraction property of the solution path {b \u03b2(\u2113)}\u2113\u22651 conditioned on some \u201cgood\u201d event. Theorem 4.2. Given \u03ba > 0 and a penalty function q(\u00b7) satisfying (A1), assume that there exists some constant \u03b10 > 0 such that \u03b10 p 1 + {q\u2032(\u03b10)/2}2 > 1 \u03ba\u03b3p and q\u2032(\u03b10) > 0. (4.11) Let the penalty level \u03bb and bandwidth h satisfy b\u2217 h \u2264(s/\u03b3p)1/2\u03bb. Moreover, de\ufb01ne ropt = \u03b31/2 p \u03b10cs1/2\u03bb and l = {(2 + 2 q\u2032(\u03b10))(c2 + 1)1/2 + 2 q\u2032(\u03b10)}(s/\u03b3p)1/2, where the constant c > 0 is de\ufb01ned through the equation 0.5q\u2032(\u03b10)(c2 + 1)1/2 + 2 = \u03b10\u03ba\u03b3p \u00b7 c. (4.12) Then, for any r \u2265ropt, conditioned on the event Ersc(r, l, \u03ba) \u2229{\u2225w\u2217 h\u2225\u221e\u22640.5q\u2032(\u03b10)\u03bb}, the sequence of solutions {b \u03b2(\u2113)}\u2113\u22651 to programs (2.6) satis\ufb01es \u2225b \u03b2(\u2113) \u2212\u03b2\u2217\u2225\u03a3 \u2264\u03b4 \u00b7 \u2225b \u03b2(\u2113\u22121) \u2212\u03b2\u2217\u2225\u03a3 + \u03ba\u22121\u03b3\u22121/2 p \b\u2225q\u2032 \u03bb((|\u03b2\u2217 S| \u2212\u03b10\u03bb)+)\u22252 + \u2225w\u2217 h,S\u22252 \t | {z } =:rora + \u03ba\u22121b\u2217 h, (4.13) where \u03b4 = p 1 + {q\u2032(\u03b10)/2}2/(\u03b10\u03ba\u03b3p) \u2208(0, 1) and u+ = max(u, 0). In addition, \u2225b \u03b2(\u2113) \u2212\u03b2\u2217\u2225\u03a3 \u2264\u03b4\u2113\u22121ropt + (1 \u2212\u03b4)\u22121\u0000rora + \u03ba\u22121b\u2217 h \u0001 for any \u2113\u22652. (4.14) Theorem 4.2 reveals how iteratively reweighted \u21131-penalization re\ufb01nes the statistical rate in a sequential manner: every relaxation step shrinks the estimation error from the previous step by a \u03b4-fraction. 
The error term that does not vary with reweighted penalization consists of \r \r \rq\u2032 \u03bb \u0000(|\u03b2\u2217 S| \u2212\u03b10\u03bb)+ \u0001\r \r \r2 | {z } shrinkage bias , \r \r \rw\u2217 h,S \r \r \r2 | {z } oracle rate , and b\u2217 h |{z} smoothing bias . 15 The \ufb01rst term \u2225q\u2032 \u03bb((|\u03b2\u2217 S| \u2212\u03b10\u03bb)+)\u22252 is known as the shrinkage bias induced by the folded-concave penalty function (Fan et al., 2018). For the \u21131-norm penalty, i.e., q\u03bb(t) = \u03bb|t| and q\u2032 \u03bb(t) = \u03bb sign(t), the shrinkage bias can be as large as s1/2\u03bb. Without any prior knowledge on the signal strength, we have \u2225q\u2032 \u03bb((|\u03b2\u2217 S| \u2212\u03b10\u03bb)+)\u22252 \u2264\u2225q\u2032 \u03bb(0S)\u22252 = s1/2\u03bb for any penalty q\u03bb satisfying Condition (A1). Assume q\u03bb(t) = \u03bb2q(t/\u03bb) is a concave penalty de\ufb01ned on R+ with \u03b1\u2217:= inf{\u03b1 > 0 : q\u2032(\u03b1) = 0} < \u221e. Given a regularization parameter \u03bb > 0, consider the decomposition S = S0 \u222aS1, where S0 = \bj \u2208S : |\u03b2j| < (\u03b10 + \u03b1\u2217)\u03bb\t and S1 = \b j \u2208S : |\u03b2j| \u2265(\u03b10 + \u03b1\u2217)\u03bb\t have cardinalities s0 and s1, respectively. The shrinkage bias term can then be bounded by \u2225q\u2032 \u03bb((|\u03b2\u2217 S| \u2212\u03b10\u03bb)+)\u22252 \u2264\u2225q\u2032 \u03bb(0S0)\u22252 = s1/2 0 \u03bb. Under the beta-min condition \u2225\u03b2\u2217 S\u2225min \u2265(\u03b10 + \u03b1\u2217)\u03bb, the shrinkage bias vanishes, and hence the \ufb01nal rate of convergence is determined by \u2225w\u2217 h,S\u22252 and b\u2217 h. As previously noted, the latter is the smoothing bias term, and satis\ufb01es b\u2217 h \u2264l0\u03ba2h2/2. The terminology \u201coracle\u201d stems from the \u201coracle estimator\u201d, de\ufb01ned as the QR estimator that knows in advance the true subset of the important features. For a better comparison, we de\ufb01ne the oracle smoothed QR estimator as b \u03b2ora = argmin \u03b2\u2208Rp:\u03b2Sc=0 b Qh(\u03b2) = argmin \u03b2\u2208Rp:\u03b2Sc=0 1 n n X i=1 \u2113h(yi \u2212xT i,S\u03b2S), (4.15) where \u2113h(\u00b7) is the smoothed quantile loss given in (2.5). As we will show in Section 4.4, the oracle SQR estimator b \u03b2ora satis\ufb01es the bound \u2225b \u03b2ora \u2212\u03b2\u2217\u22252 \u2272\u2225w\u2217 h,S\u22252 + h2 with high probability, and \u2225w\u2217 h,S\u22252 is of order \u221as/n. Theorem 4.2 is a deterministic result. Probabilistic claims enter in certifying that the local RSC condition holds with high probability (see Proposition 4.2), and in verifying that the \u201cgood\u201d event {\u2225w\u2217 h\u2225\u221e\u22640.5q\u2032(\u03b10)\u03bb} occurs with high probability with a speci\ufb01ed choice of \u03bb. The following theorem states, under a necessary beta-min condition, the iteratively reweighted \u21131-penalized SQR (IRW-\u21131-SQR) estimator b \u03b2(\u2113), after a few iterations, achieves the estimation error of the oracle that knows the sparsity pattern of \u03b2\u2217. Theorem 4.3. In addition to Conditions (A1), (B1)\u2013(B3), assume there exist \u03b11 > \u03b10 > 0 such that q\u2032(\u03b10) > 0, \u03b10 p 4 + {q\u2032(\u03b10}2 > (\u03bal fl\u03b3p)\u22121 and q\u2032(\u03b11) = 0, (4.16) where \u03bal = min|u|\u22641 K(u) > 0. 
Moreover, let the regularization parameter \u03bb and bandwidth h satisfy \u03bb \u224d\u03c3x p \u03c4(1 \u2212\u03c4) log(p)/n and max \u03c3x fl r s log p n , \u03c32 x fu f 2 l s log p n ! \u2272h \u2272(s1/2\u03bb)1/2. For any t \u22650, under the beta-min condition \u2225\u03b2\u2217 S\u2225min \u2265(\u03b10+\u03b11)\u03bb and scaling n \u2273max{s log(p), s+t}, the IRW-\u21131-SQR estimator b \u03b2(\u2113) with \u2113\u2273\u2308log{log(p)}/ log(1/\u03b4)\u2309satis\ufb01es the bounds \u2225b \u03b2(\u2113) \u2212\u03b2\u2217\u22252 \u2272f \u22121 l r s + t n + h2 ! and \u2225b \u03b2(\u2113) \u2212\u03b2\u2217\u22251 \u2272f \u22121 l s1/2 r s + t n + h2 ! (4.17) with probability at least 1 \u2212p\u22121 \u2212e\u2212t, where \u03b4 = p 4 + {q\u2032(\u03b10)}2/(\u03b10\u03bal fl\u03b3p) \u2208(0, 1). 16 Remark 4.2 (Oracle rate of convergence and high-dimensional scaling). The conclusion of Theorem 4.3 is referred to as the weak oracle property: the IRW-\u21131-SQR estimator achieves the convergence rate of the oracle b \u03b2ora when the support set S were known a priori. Starting from b \u03b2(0) = 0, the one-step estimator b \u03b2(1) (\u21131-SQR) has an estimation error (under \u21132-norm) of order p s \u00b7 log(p)/n (see Theorem 4.1). Under an almost necessary and su\ufb03cient beta-min condition\u2014 \u2225\u03b2\u2217 S\u2225min \u2273 p log(p)/n, a re\ufb01ned near-oracle statistical rate \u221as/n+h2 can be attained by a multi-step iterative procedure, which solves a sequence of convex programs. Here, \u221as/n is referred to as the oracle rate, and the h2-term quanti\ufb01es the smoothing bias (Proposition 4.1). In order to certify the local RSC property of the smoothed objective function, the bandwidth should have magnitude at least of the order p s log(p)/n. If we choose a bandwidth h \u224d p s log(p)/n, the \u21132-error of the multistep estimator will be of order \u221as/n + s log(p)/n under the high-dimensional scaling n \u2273s log(p). Intuitively, the main reason for having an extra term s log(p)/n is that even if the underlying vector \u03b2\u2217is s-sparse, the population parameter \u03b2\u2217 h \u2208Rp corresponding to the smoothed objective function (see (4.2)) may be denser. As a result, there is a statistical price to pay for smoothing. Remark 4.3 (Minimum signal strength and oracle rate). In a linear regression model y = xT\u03b2\u2217+ \u03b5 with a Gaussian error \u03b5 \u223cN(0, \u03c32), consider the parameter space \u2126s,a = {\u03b2 \u2208Rp : \u2225\u03b2\u22250 \u2264 s, minj:\u03b2j,0 |\u03b2 j| \u2265a} for a > 0. Assuming that the design matrix X = (x1, . . . , xn)T \u2208Rn\u00d7p satis\ufb01es a restricted isometry property and has normalized columns (each column has an \u21132-norm equal to \u221an), Ndaoud (2019) derived the following sharp lower bounds for the minimax risk \u03c8(s, a) := inf b \u03b2 sup\u03b2\u2217\u2208\u2126s,a E\u2225b \u03b2 \u2212\u03b2\u2217\u22252 2: for any \u03f5 \u2208(0, 1), \u03c8(s, a) \u2265{1 + o(1)}2\u03c32s log(ep/s) n for any a \u2264(1 \u2212\u03f5)\u03c3 r 2 log(ep/s) n and \u03c8(s, a) \u2265{1 + o(1)}\u03c32s n for any a \u2265(1 + \u03f5)\u03c3 r 2 log(ep/s) n , where the limit corresponds to s/p \u21920 and s log(ep/s)/n \u21920. The minimax rate 2\u03c32s log(ep/s)/n can be attained by both Lasso and Slope (Bellec, Lecu\u00b4 e and Tsybakov, 2018), while the oracle rate \u03c32s/n can only be achieved when the magnitude of the minimum signal is of order \u03c3 p log(p/s)/n. 
For estimating an s-sparse vector \u03b2\u2217\u2208Rp in the conditional quantile model (2.1), Wang and He (2021) proved the lower bound p s log(p/s)/n for the minimax estimation error under \u21132-norm. In order to achieve the re\ufb01ned oracle rate, Fan, Xue and Zou (2014) required a stronger beta-min condition, i.e., \u2225\u03b2\u2217 S\u2225min \u2273 p s log(p)/n, and a stringent independence assumption between \u03b5 and x in the conditional quantile model (2.1). The beta-min condition imposed in Theorems 4.2 and 4.3 is almost necessary and su\ufb03cient, and is the weakest possible up to constant factors. 4.4 Strong oracle property In this section, we establish the strong oracle property for the multi-step estimator b \u03b2(\u2113) when \u2113is su\ufb03ciently large, i.e., b \u03b2(\u2113) equals the oracle estimator b \u03b2ora with high probability (Fan and Lv, 2011). To this end, we de\ufb01ne a similar local RSC event to Ersc(r, l, \u03ba) given in (4.7). Recall that S \u2286[p] is the support of \u03b2\u2217. Given radius parameters r, l > 0 and a curvature parameter \u03ba > 0, de\ufb01ne Grsc(r, l, \u03ba) = \uf8f1 \uf8f4 \uf8f4 \uf8f2 \uf8f4 \uf8f4 \uf8f3 \u27e8b Qh(\u03b21) \u2212\u2207b Qh(\u03b22), \u03b21 \u2212\u03b22\u27e9 \u2225\u03b21 \u2212\u03b22\u22252 \u03a3 \u2265\u03ba for all (\u03b21, \u03b22) \u2208\u039b(r, l) \uf8fc \uf8f4 \uf8f4 \uf8fd \uf8f4 \uf8f4 \uf8fe, (4.18) 17 where \u039b(r, l) := {(\u03b21, \u03b22) : \u03b21 \u2208\u03b22 + B\u03a3(r) \u2229C\u03a3(l), \u03b22 \u2208\u03b2\u2217+ B\u03a3(r/2), supp(\u03b22) \u2286S}. Similarly to (4.10), we de\ufb01ne the oracle score wora h = \u2207b Qh(b \u03b2ora) \u2208Rp, (4.19) where b \u03b2ora is de\ufb01ned in (4.15). By the optimality of b \u03b2ora, we have wora h,S = (\u22121/n) Pn i=1 \u2113\u2032 h(yi \u2212 xT i,Sb \u03b2ora S )xi,S = 0s. Like Theorem 4.2, the following result is also deterministic given the stated conditioning. Theorem 4.4. Assume Condition (A1) holds, and for some predetermined \u03b4 \u2208(0, 1) and \u03ba > 0, there exist constants \u03b11 > \u03b10 > 0 such that q\u2032(\u03b10) > 0, \u03b10 p 1 + {q\u2032(\u03b10)/2}2 > 1 \u03b4\u03ba\u03b3p and q\u2032(\u03b11) = 0. (4.20) Moreover, let r \u2265\u03b31/2 p \u03b10c1s1/2\u03bb and l = {2 + 2 q\u2032(\u03b10)}(c2 1 + 1)1/2(s/\u03b3p)1/2, where c1 > 0 is a constant determined by 0.5q\u2032(\u03b10)(c2 1 + 1)1/2 + 1 = \u03b10\u03ba\u03b3pc1. (4.21) Assume the beta-min condition \u2225\u03b2\u2217 S\u2225min \u2265(\u03b10 + \u03b11)\u03bb holds. Then, conditioned on the event \b\u2225wora h \u2225\u221e\u22640.5q\u2032(\u03b10)\u03bb\t \u2229\b\u2225b \u03b2ora \u2212\u03b2\u2217\u2225\u03a3 \u2264r/2\t \u2229Grsc(r, l, \u03ba) \u2229 \uf8f1 \uf8f4 \uf8f4 \uf8f2 \uf8f4 \uf8f4 \uf8f3\u2225b \u03b2ora \u2212\u03b2\u2217\u2225\u221e\u2264 \uf8ee \uf8ef \uf8ef \uf8ef \uf8ef \uf8ef \uf8f0\u03b10 \u2212 p 1 + {q\u2032(\u03b10)/2}2 \u03b4\u03ba\u03b3p \uf8f9 \uf8fa \uf8fa \uf8fa \uf8fa \uf8fa \uf8fb\u03bb \uf8fc \uf8f4 \uf8f4 \uf8fd \uf8f4 \uf8f4 \uf8fe, (4.22) the strong oracle property holds: b \u03b2(\u2113) = b \u03b2ora provided \u2113\u2265\u2308log(s1/2/\u03b4)/ log(1/\u03b4)\u2309. Our next goal is is to control the probability of the events in (4.22). To this end, we need the following statistical properties of the oracle estimator b \u03b2ora, including a deviation bound and a nonasymptotic Kiefer-Bahadur representation that are of independent interest. 
The latter requires a slightly stronger moment condition on the random feature. (B1\u2032) In addition to Condition (B1), assume supu\u2208R |f\u03b5|x(u)| \u2264fu < \u221ealmost surely over x. (B2\u2032) In addition to Condition (B2), assume supu\u2208R K(u) \u2264\u03bau for some \u03bau \u2208(0, 1]. (B3\u2032) The (random) covariate vector x = \u03a31/2z \u2208Rp is sub-Gaussian: there exists some \u03c51 \u22651 such that P(|zTu| \u2265\u03c51\u2225u\u22252 \u00b7 t) \u22642e\u2212t2/2 for all u \u2208Rp and t \u22650. Note that the oracle b \u03b2ora \u2208Rp with b \u03b2ora Sc = 0 is essentially an unpenalized smoothed QR estimator in the low-dimensional regime \u201cs \u226an\u201d. We refer to Fernandes, Guerre and Horta (2021) for a comprehensive asymptotic analysis when s is \ufb01xed, and He et al. (2020) for a \ufb01nite sample theory when s is allowed to grow with n. This paper concerns the case where both s (intrinsic dimension) and p (ambient dimension) can grow with sample size n. We therefore summarize the estimation bound and Bahadur representation for b \u03b2ora S by He et al. (2020) in the following proposition. Let S = E(xSxT S) and D = E{ f\u03b5|x(0) \u00b7 xSxT S} (4.23) be, respectively, the s \u00d7 s sub-matrices of \u03a3 and J indexed by the true support S \u2286[p]. 18 Proposition 4.3. Assume Conditions (B1\u2032)\u2013(B3\u2032) hold. For any t \u22650, suppose the sample size n and the bandwidth h = hn are such that n \u2273s + t and \u221a(s + t)/n \u2272h \u22721. Then, the oracle estimator b \u03b2ora de\ufb01ned in (4.15) satis\ufb01es \u2225b \u03b2ora \u2212\u03b2\u2217\u2225\u03a3 = \u2225(b \u03b2ora \u2212\u03b2\u2217)S\u2225S \u2272f \u22121 l r s + t n + h2 ! (4.24) with probability at least 1 \u22122e\u2212t. Moreover, \r \r \r \r \rD(b \u03b2ora \u2212\u03b2\u2217)S + 1 n n X i=1 \b \u00af K(\u2212\u03b5i/h) \u2212\u03c4\txi,S \r \r \r \r \rS\u22121 \u2272s + t h1/2n + h r s + t n + h3 (4.25) with probability at least 1 \u22123e\u2212t. Finally, with the above preparations, we are able to establish the strong oracle property of b \u03b2(\u2113) when \u2113is su\ufb03ciently large. Theorem 4.5. Assume Conditions (B1\u2032)\u2013(B3\u2032) and (A1) hold with \u03bal = min|u|\u22641 K(u) > 0 and max j\u2208Sc \u2225JjS(JSS)\u22121\u22251 \u2264A0. (4.26) for some A0 \u22651. For a prespeci\ufb01ed \u03b4 \u2208(0, 1), suppose there exist constants \u03b11 > \u03b10 satisfying (4.20) with \u03ba = \u03bal fl/2, and the beta-min condition \u2225\u03b2\u2217 S\u2225min \u2265(\u03b10 + \u03b11)\u03bb. Choose the bandwidth h and penalty level \u03bb as h \u224d{log(p)/n}1/4 and \u03bb \u224d p log(p)/n. Then, with probability at least 1 \u22122p\u22121 \u22125n\u22121, b \u03b2(\u2113) = b \u03b2ora for all \u2113\u2265\u2308log(s1/2/\u03b4)/ log(1/\u03b4)\u2309, provided that the sparsity s and ambient dimension p obey the growth condition max{s2 log(p), s8/3/(log p)} \u2272n. As stated in Theorem 4.5, in addition to the beta-min condition \u2225\u03b2\u2217 S\u2225min \u2273 p log(p)/n, we need an extra assumption (4.26) to establish the strong oracle property. Informally speaking, if we regress every spurious (density-weighted) feature f\u03b5|x(0) \u00b7 xj (j \u2208Sc) on the important (densityweighted) features f\u03b5|x(0) \u00b7 xS, (4.26) requires the \u21131-norm of the resulting regression coe\ufb03cient vector to be bounded by A0. 
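To illustrate what (4.26) asks for, the sketch below computes max_{j∈S^c} ∥J_{jS}(J_{SS})^{−1}∥_1 in the special case where f_{ε|x}(0) does not depend on x, so that J is proportional to Σ and the constant cancels. The AR(1) covariance and the ten-coordinate support mimic the simulation design of Section 5; both are illustrative assumptions of ours, not part of the theory.

```python
import numpy as np

def max_l1_projection(Sigma, S):
    """Compute max_{j not in S} || Sigma_{jS} (Sigma_{SS})^{-1} ||_1, which equals
    the quantity in (4.26) when J is proportional to Sigma (homoscedastic case)."""
    p = Sigma.shape[0]
    Sc = np.setdiff1d(np.arange(p), S)
    coefs = Sigma[np.ix_(Sc, S)] @ np.linalg.inv(Sigma[np.ix_(S, S)])
    return np.abs(coefs).sum(axis=1).max()

p = 400
Sigma = 0.7 ** np.abs(np.subtract.outer(np.arange(p), np.arange(p)))  # AR(1), rho = 0.7
S = np.arange(0, 19, 2)            # ten alternating coordinates, as in the simulations
print(max_l1_projection(Sigma, S)) # the left-hand side of (4.26) for this design
```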
It is worth noting that assumption (4.26) is much weaker than the irrepresentable condition, which is sufficient and nearly necessary for model selection consistency of the Lasso (Zhao and Yu, 2006; Meinshausen and Bühlmann, 2006; Lahiri, 2021) in the conditional mean model. A population version of the irrepresentable condition is that, for some α ∈ (0, 1), max_{j∈S^c} ∥Σ_{jS}(Σ_{SS})^{−1}∥_1 ≤ α. For conditional mean regression with heavy-tailed errors, Loh (2017) established the strong oracle property for any local stationary point of the folded concave penalized optimization problem (2.2) subject to an ℓ1-ball constraint, when the loss function is twice differentiable. The required growth condition on (s, p) is max{s log(p), s^2} ≲ n; see Theorem 2 in Loh (2017). For sparse quantile regression, our result requires a slightly stronger scaling, max{s^2 log(p), s^{8/3}/(log p)} ≲ n, due to the non-smoothness of the quantile loss. Intuitively, the strong oracle property is related to second-order accuracy and efficiency: the oracle estimator is asymptotically normal provided that the sparsity s does not grow too fast with the sample size. For Huber's M-estimator, He and Shao (2000) proved the asymptotic normality of its linear functionals under the scaling s^2 log(s) = o(n); in the context of quantile regression, the same asymptotic results usually hold under stronger growth conditions, due to both the non-linearity and the non-smoothness of the problem, such as s^3(log n)^2 = o(n) (Welsh, 1989; He and Shao, 2000) and s^{8/3} = o(n) (He et al., 2020). To some extent, this explains why the high-dimensional scaling in our Theorem 4.5 is slightly stronger than those needed for regularized M-estimators with smooth loss functions.
5 Numerical study
We perform numerical studies to assess the performance of the proposed regularized quantile regression method using ℓ1 and SCAD penalties. The SCAD penalty (Fan and Li, 2001) is defined through its derivative, which takes the form q′_λ(t) = λ 1(t ≤ λ) + (a − 1)^{−1}(aλ − t)_+ 1(t > λ) for t ≥ 0, where we pick a = 3.7 as suggested in Fan and Li (2001), although it may not be the optimal value for quantile regression. We use uniform and Gaussian kernels to smooth the quantile loss, and then employ the multi-stage convex relaxation method described in Algorithm 1 with ℓ = 3 iterations. We will show later in this section that, for moderately large p, ℓ = 3 iterations is often sufficient and that more iterations lead to little to no improvement in estimation accuracy. We compare our proposal, iteratively reweighted ℓ1-penalized smoothed quantile regression, with the standard Lasso implemented in the R package glmnet, and with both ℓ1- and folded concave penalized quantile regression implemented in the R package FHDQR (Gu et al., 2018). As a benchmark, we also compute the oracle estimator by fitting unpenalized quantile regression using only the important covariates. The regularization parameter λ for the Lasso and for penalized QR is selected via five-fold cross-validation; for the latter, we use the check loss to define the validation error. Specifically, we choose the λ value that yields the minimum cross-validation error under the ℓ2-loss and the check loss for the Lasso and penalized QR, respectively.
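For concreteness, a minimal sketch of the check-loss cross-validation just described is given below. It uses scikit-learn's QuantileRegressor (assuming a recent scikit-learn/SciPy) as a stand-in ℓ1-penalized QR solver; the experiments reported here actually rely on glmnet and FHDQR in R.

```python
import numpy as np
from sklearn.linear_model import QuantileRegressor

def check_loss(u, tau):
    """Average check loss: rho_tau(u) = u * (tau - 1(u < 0))."""
    return np.mean(u * (tau - (u < 0)))

def cv_select_lambda(X, y, tau, lambdas, n_folds=5, seed=0):
    """Five-fold cross-validation with check-loss validation error."""
    n = X.shape[0]
    rng = np.random.default_rng(seed)
    folds = rng.integers(0, n_folds, size=n)
    cv_err = np.zeros(len(lambdas))
    for k, lam in enumerate(lambdas):
        for f in range(n_folds):
            tr, va = folds != f, folds == f
            model = QuantileRegressor(quantile=tau, alpha=lam, solver="highs")
            model.fit(X[tr], y[tr])
            cv_err[k] += check_loss(y[va] - model.predict(X[va]), tau)
        cv_err[k] /= n_folds
    return lambdas[int(np.argmin(cv_err))]
```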
The proposed method involves a smoothing parameter h, which can also be tuned via cross-validation in practice. Recall that convolution smoothing facilitates optimization through a balanced trade-off between statistical accuracy and computational complexity. Our numerical experiments show that the results are rather insensitive to the choice of the bandwidth provided that it lies in a reasonable range (neither too small nor too large). The default value of h is set to max{0.05, √(τ(1 − τ)) {log(p)/n}^{1/4}}. We note that this particular choice of h is by no means numerically optimal. For all the numerical experiments, we generate synthetic data {(y_i, x_i)}_{i=1}^n from a linear model y_i = x_i^T β* + ε_i with β* = (1.8, 0, 1.6, 0, 1.4, 0, 1.2, 0, 1, 0, −1, 0, −1.2, 0, −1.4, 0, −1.6, 0, −1.8, 0_{p−19})^T, and x_i ∼ N_p(0, Σ) with Σ = (0.7^{|j−k|})_{1≤j,k≤p}. The random error follows one of the following four distributions: (i) the standard normal distribution N(0, 1); (ii) the t-distribution with 1.5 degrees of freedom; (iii) the standard Cauchy distribution; and (iv) a mixture of normal distributions, 0.7N(0, 1) + 0.3N(0, 25). To evaluate the performance of the different methods, we report the true and false positive rates (TPR and FPR), defined as the proportion of correctly estimated nonzeros and the proportion of falsely estimated nonzeros, respectively. We also report the sum of squared errors (SSE), i.e., ∥β̂ − β*∥_2^2. Results for the four noise distributions under moderate (n = 500, p = 400) and high-dimensional (n = 500, p = 1000) settings, averaged over 100 replications, are displayed in Tables 1–4. Under Gaussian random noise, we see from Table 1 that all methods have similar TPR and FPR. The Lasso has the lowest SSE compared to QR-Lasso and SQR-Lasso, which coincides with the fact that quantile regression does lose some efficiency in a normal model. For both standard and smoothed quantile regression, iteratively reweighted regularization with the SCAD penalty considerably reduces the estimation error and comes close to the oracle procedure. Similar results hold when the minimax concave penalty is used. This supports our theoretical results on SQR that concave regularization improves the estimation error from √(s log(p)/n) to the near-oracle rate √({s + log(p)}/n). Among all regularized quantile regression methods, the proposed procedure, iteratively reweighted ℓ1-penalized SQR with either uniform or Gaussian kernel smoothing, has the best overall performance. Table 1: Numerical comparisons under the Gaussian model. The empirical average (and standard error) of the true and false positive rates (TPR and FPR) as well as the sum of squared errors (SSE), over 100 simulations, are reported.
Moderate Dimension (n = 500, p = 400) High Dimension (n = 500, p = 1000) Methods TPR FPR Error TPR FPR Error Lasso 1 (0) 0.067 (0.003) 0.147 (0.006) 1 (0) 0.033 (0.001) 0.167 (0.006) SCAD 1 (0) 0.055 (0.003) 0.062 (0.012) 1 (0) 0.026 (0.001) 0.051 (0.003) QR-Lasso 1 (0) 0.119 (0.006) 0.240 (0.009) 1 (0) 0.068 (0.003) 0.284 (0.009) QR-SCAD 1 (0) 0.112 (0.006) 0.183 (0.014) 1 (0) 0.069 (0.004) 0.161 (0.010) SQR-Lasso (uniform) 1 (0) 0.066 (0.003) 0.224 (0.013) 1 (0) 0.036 (0.002) 0.234 (0.007) SQR-SCAD (uniform) 1 (0) 0.057 (0.004) 0.129 (0.011) 1 (0) 0.032 (0.002) 0.116 (0.008) SQR-Lasso (Gaussian) 1 (0) 0.072 (0.004) 0.191 (0.007) 1 (0) 0.034 (0.002) 0.223 (0.007) SQR-SCAD (Gaussian) 1 (0) 0.056 (0.003) 0.131 (0.010) 1 (0) 0.028 (0.002) 0.108 (0.007) Oracle 1 (0) 0 (0) 0.049 (0.003) 1 (0) 0 (0) 0.053 (0.003) Next, we examine the performance of di\ufb00erent methods when outliers are present. From Table 2 we see that the Lasso has the highest SSE with TPR merely above 0.5 in both moderateand high-dimensional settings. In contrast, regularized quantile regression methods have high TPR while maintain low FPR. The FPR and SSE for SQR are further reduced by a visible margin when the SCAD penalty is used. This corroborates our main message that high-dimensional quantile regression signi\ufb01cantly bene\ufb01ts from smoothing and non-convex regularization. Similar results can be found in Table 3 and 4 for Cauchy and a mixture normal error distributions. Table 2: Numerical comparisons under t1.5 model. Moderate Dimension (n = 500, p = 400) High Dimension (n = 500, p = 1000) Methods TPR FPR Error TPR FPR Error Lasso 0.908 (0.016) 0.052 (0.002) 4.615 (0.401) 0.854 (0.022) 0.023 (0.001) 5.668 (0.524) SCAD 0.842 (0.020) 0.044 (0.002) 7.138 (0.739 0.790 (0.024) 0.019 (0.001) 8.253 (0.762) QR-Lasso 1 (0) 0.112 (0.005) 0.417 (0.015) 1 (0) 0.065 (0.003) 0.541 (0.021) QR-SCAD 1 (0) 0.103 (0.005) 0.346 (0.024) 1 (0) 0.062 (0.003) 0.362 (0.022) SQR-Lasso (uniform) 0.999 (0.001) 0.067 (0.004) 0.387 (0.032) 1 (0) 0.032 (0.002) 0.433 (0.017) SQR-SCAD (uniform) 0.999 (0.001) 0.055 (0.004) 0.266 (0.028) 1 (0) 0.028 (0.002) 0.230 (0.017) SQR-Lasso (Gaussian) 1 (0) 0.066 (0.003) 0.332 (0.012) 1 (0) 0.030 (0.001) 0.420 (0.017) SQR-SCAD (Gaussian) 1 (0) 0.048 (0.003) 0.238 (0.018) 1 (0) 0.024 (0.001) 0.220 (0.015) Oracle 1 (0) 0 (0) 0.065 (0.004) 1 (0) 0 (0) 0.074 (0.004) Lastly, we assess more closely the e\ufb00ects of iteratively reweighted \u21131-regularization; see Algorithm 1. We keep the above model settings and focus on three di\ufb00erent noise distributions: (i) t distribution with 1.5 degrees of freedom; (ii) standard Cauchy distribution; and (iii) a mixture normal distribution. For simplicity, we set the tuning parameter \u03bb = 0.5 p log(p)/n. We run Algorithm 1 with uniform kernel and stop after 7 iterations. Starting with b \u03b2(0) = 0, recall that b \u03b2(1) is the SQR-Lasso estimator. To quantify the relative performance of the solution path, at \u2113th iteration, we de\ufb01ne the relative improvement of b \u03b2(\u2113) with respect to b \u03b2(\u2113\u22121) as \u2225b \u03b2(\u2113\u22121) \u2212\u03b2\u2217\u22252 2 \u2212\u2225b \u03b2(\u2113) \u2212\u03b2\u2217\u22252 2 \u2225b \u03b2(1) \u2212\u03b2\u2217\u22252 2 , \u2113\u22652. (5.1) 21 Table 3: Numerical comparisons under Cauchy model. 
Moderate Dimension (n = 500, p = 400) High Dimension (n = 500, p = 1000) Methods TPR FPR Error TPR FPR Error Lasso 0.344 (0.032) 0.021 (0.003) 16.799 (0.522) 0.305 (0.033) 0.009 (0.001) 17.479 (0.953) SCAD 0.297 (0.028) 0.020 (0.002) 20.382 (0.860) 0.272 (0.029) 0.009 (0.001) 19.526 (0.871) QR-Lasso 1 (0) 0.118 (0.004) 0.546 (0.022) 1 (0) 0.060 (0.002) 0.709 (0.025) QR-SCAD 1 (0) 0.112 (0.005) 0.585 (0.047) 1 (0) 0.058 (0.002) 0.473 (0.034) SQR-Lasso (uniform) 0.990 (0.004) 0.054 (0.002) 0.628 (0.070) 0.999 (0.010) 0.030 (0.002) 0.588 (0.042) SQR-SCAD (uniform) 0.992 (0.004) 0.045 (0.003) 0.391 (0.047) 0.998 (0.002) 0.026 (0.001) 0.308 (0.031) SQR-Lasso (Gaussian) 1 (0) 0.058 (0.002) 0.434 (0.017) 1 (0) 0.028 (0.001) 0.533 (0.019) SQR-SCAD (Gaussian) 1 (0) 0.042 (0.002) 0.298 (0.021) 1 (0) 0.022 (0.001) 0.276 (0.021) Oracle 1 (0) 0 (0) 0.076 (0.004) 1 (0) 0 (0) 0.080 (0.004) Table 4: Numerical comparisons under mixture normal model. Moderate Dimension (n = 500, p = 400) High Dimension (n = 500, p = 1000) Methods TPR FPR Error TPR FPR Error Lasso 0.999 (0.001) 0.062 (0.003) 1.253 (0.058) 1 (0) 0.030 (0.001) 1.346 (0.047) SCAD 0.996 (0.002) 0.048 (0.002) 0.606 (0.063) 0.995 (0.002) 0.025 (0.001) 0.746 (0.070) QR-Lasso 1 (0) 0.126 (0.005) 0.507 (0.019) 1 (0) 0.059 (0.002) 0.559 (0.017) QR-SCAD 1 (0) 0.121 (0.006) 0.546 (0.041) 1 (0) 0.057 (0.002) 0.361 (0.020) SQR-Lasso (uniform) 0.999 (0.001) 0.070 (0.004) 0.496 (0.040) 1 (0) 0.030 (0.002) 0.462 (0.013) SQR-SCAD (uniform) 1 (0) 0.060 (0.004) 0.366 (0.029) 1 (0) 0.026 (0.002) 0.244 (0.016) SQR-Lasso (Gaussian) 1 (0) 0.072 (0.003) 0.405 (0.015) 1 (0) 0.029 (0.001) 0.443 (0.013) SQR-SCAD (Gaussian) 1 (0) 0.054 (0.003) 0.346 (0.024) 1 (0) 0.024 (0.001) 0.242 (0.015) Oracle 1 (0) 0 (0) 0.087 (0.005) 1 (0) 0 (0) 0.086 (0.004) The relative improvement is a value between zero and one. A value close to zero indicates that there is little improvement in estimation error and vice versa. The results for n = 500 and p \u2208 {200, 400, 1000, 2000}, averaged over 100 replications, are summarized in Figure 2. We see that running an additional iteration (\u2113= 2) leads to the most signi\ufb01cant improvement. The estimator, after \u2113= 3 iterations, can still be improved under the t and Cauchy models. In all the (n, p) settings considered, running \u2113\u22654 iterations only shows marginal improvement, suggesting that the multistep procedure with \u2113= 3 is su\ufb03cient for moderate-scale datasets. 6 An application to gene expression data We apply the proposed method to an expression quantitative trait locus (eQTL) dataset previously analyzed in Scheetz et al. (2006), Kim, Choi and Oh (2008) and Wang, Wu and Li (2012). The dataset was collected on a study that used eQTL mapping in laboratory rats to investigate and identify genetic variation in the mammalian eye that is relevant to human eye disease (Scheetz et al., 2006). Following Wang, Wu and Li (2012), we study the association between gene TRIM32, which was found to be associated with human eye disease, and the other expressions at other probes. The data consists of expression values of 31,042 probe sets on 120 rats. After some data pre-processing steps as described in Wang, Wu and Li (2012), the number of probes are reduced to 18,958. We further select the top 500 probes that have the highest absolute correlation with the expression of the response. We apply the proposed method using the uniform kernel and SCAD penalty, with regularization parameter selected by ten-fold cross-validation. 
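The marginal screening step just described takes only a few lines; the sketch below, with hypothetical array names and assuming no constant probe columns, keeps the probes with the largest absolute Pearson correlation with the response.

```python
import numpy as np

def top_k_by_correlation(X, y, k=500):
    """Return the column indices of the k probes most correlated (in absolute
    value) with the response, together with the reduced design matrix."""
    Xc = X - X.mean(axis=0)
    yc = y - y.mean()
    cors = (Xc * yc[:, None]).sum(axis=0) / (
        np.sqrt((Xc ** 2).sum(axis=0)) * np.sqrt((yc ** 2).sum())
    )
    idx = np.argsort(-np.abs(cors))[:k]
    return idx, X[:, idx]
```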
Figure 2: Plots of relative improvement defined in (5.1) versus number of iterations when n = 500 and p ∈ {200, 400, 1000, 2000}. The three panels correspond to models with different noise distributions: (i) t distribution with 1.5 degrees of freedom; (ii) standard Cauchy distribution; and (iii) a mixture normal distribution.
For comparisons, we also implement the ℓ1- and concave regularized quantile regression methods, denoted by QR-Lasso and QR-SCAD, using the R package FHDQR. Similar to Wang, Wu and Li (2012), we conduct 50 random partitions of the data, each time randomly selecting the expression values of 80 rats as the training data and the remaining 40 rats as the testing data. The selected model size and prediction error (under quantile loss), averaged over the 50 random partitions, are reported in Table 5. We observe from Table 5 that SQR has consistently lower prediction errors than standard QR across all three quantile levels considered. The prediction error is further improved for SQR when the SCAD penalty is used. In contrast, QR-SCAD exhibits no improvement over QR-Lasso in prediction accuracy, which is in line with the observation in Wang, Wu and Li (2012). One explanation may be that the lack of smoothness and strong convexity of the quantile loss overshadows the bias-reducing property of the concave penalty. These results suggest that high-dimensional quantile regression considerably benefits from smoothing and concave regularization in terms of model selection ability, prediction accuracy, and computational feasibility.
7 Discussions
In this paper we introduced a class of penalized convolution smoothed methods for fitting sparse quantile regression models in high dimensions. Convolution smoothing turns the non-differentiable check loss into a twice-differentiable and convex surrogate, and the resulting empirical loss is proven to be locally strongly convex (with high probability). To reduce the ℓ1-regularization bias as the signal strengthens, we considered a multi-step, iterative procedure which solves a weighted ℓ1-penalized smoothed quantile objective function at each iteration. Statistically, we established the oracle-like performance of the output of this procedure, such as the oracle convergence rate and variable selection consistency, under an almost necessary and sufficient minimum signal strength condition. From a computational perspective, convolution smoothing and convex relaxation together enable the use of gradient-based algorithms that are much more scalable to large-scale datasets. In summary, through convolution smoothing with a suitably chosen bandwidth, we aim to seek a bet- Table 5: The average selected model size and prediction error (under quantile loss), with standard errors in parentheses, over 50 random partitions.
Methods Model Size Prediction Error QR-Lasso (\u03c4 = 0.3) 38.28 (3.192) 0.225 (0.005) QR-SCAD (\u03c4 = 0.3) 34.66 (3.291) 0.241 (0.006) SQR-Lasso (\u03c4 = 0.3) 45.28 (1.866) 0.118 (0.003) SQR-SCAD (\u03c4 = 0.3) 31.32 (1.827) 0.106 (0.003) QR-Lasso (\u03c4 = 0.5) 33.76 (1.985) 0.222 (0.003) QR-SCAD (\u03c4 = 0.5) 30.28 (2.114) 0.236 (0.004) SQR-Lasso (\u03c4 = 0.5) 36.76 (1.533) 0.142 (0.003) SQR-SCAD (\u03c4 = 0.5) 29.58 (2.006) 0.132 (0.003) QR-Lasso (\u03c4 = 0.7) 29.66 (1.669) 0.195 (0.003) QR-SCAD (\u03c4 = 0.7) 24.22 (1.942) 0.205 (0.003) SQR-Lasso (\u03c4 = 0.7) 41.44 (2.262) 0.124 (0.003) SQR-SCAD (\u03c4 = 0.7) 27.52 (2.269) 0.116 (0.004) ter trade-o\ufb00between statistical accuracy and computational precision for high-dimensional quantile regression. The proposed procedures will be implemented in the R package conquer, available at https://cran.r-project.org/web/packages/conquer/index.html. The Python code is also publicly accessible at https://github.com/WenxinZhou/conquer, with an option to perform post-selection-inference (via bootstrap). There are several avenues for future work. When the parameter of interest arises in a matrix form, the low-rankness is often used to capture its low intrinsic dimension. This falls into the general category of ill-posed inverse problems, where the number of observations/measurements is much smaller than the ambient dimension of the model. See Chandrasekaran et al. (2012) for a general framework to convert notions of simplicity into convex penalty functions, resulting in convex optimization solutions to linear, underdetermined inverse problems. The idea of concave penalization can also be applied to low-rank matrix recovery problems. In essence, one can use a concave function to penalize the vector of singular values of matrix \u0398 \u2208Rp1\u00d7p2. We refer to Wang, Zhang and Gu (2017) for a uni\ufb01ed computational and statistical framework for non-convex low-rank matrix estimation when the Frobenius norm is used as the data-\ufb01tting measure. We conjecture that the proposed multi-step reweighted convex penalization approach and convolution smoothing will lead to oracle statistical guarantees and fast computational methods for quantile matrix regression and quantile matrix completion problems (Belloni et al, 2019). We leave this as future work.", "introduction": "Massive complex datasets bring challenges to data analysis due to the presence of outliers and het- erogeneity. Consider regression of a scalar response y on a p-dimensional predictor x \u2208Rp. The least squares method focuses on the conditional mean of the outcome given the predictor. Despite its popularity in the statistical and econometric literature, it is sensitive to outliers and fails to cap- ture heterogeneity in the set of important features. Moreover, in many applications, the scienti\ufb01c question of interest may not be fully addressed by inferring the conditional mean. Since the seminal *Department of Statistics, University of Michigan, Ann Arbor, Michigan 48109, USA. E- mail:keanming@umich.edu. \u2020Miami Herbert Business School, University of Miami, Coral Gables, FL 33146, USA. E- mail:lanwang@mbs.miami.edu. \u2021Department of Mathematics, University of California, San Diego, La Jolla, CA 92093, USA. E- mail:wez243@ucsd.edu. 
Since the seminal work of Koenker and Bassett (1978), quantile regression (QR) has gained increasing attention by offering a set of complementary methods designed to explore data features that are invisible to least squares methods. Quantile regression is robust to data heterogeneity and outliers, and also offers unique insights into the entire conditional distribution of the outcome given the predictor. We refer to Koenker (2005) and Koenker et al. (2017) for an overview of quantile regression theory, methods and applications. In the high-dimensional setting in which the number of features, p, exceeds the number of observations, n, it is often the case that only a small subset of a large pool of features influences the conditional distribution of the outcome. To perform estimation and variable selection simultaneously, the standard approach is to minimize the empirical loss plus a penalty on the model complexity. The ℓ1-penalty is arguably the most commonly used penalty function that induces sparsity (Tibshirani, 1996). Least squares methods with ℓ1-regularization have been extensively studied in the past two decades. Because of the extremely long list of relevant literature, we refer the reader to the monographs Bühlmann and van de Geer (2011), Hastie, Tibshirani and Wainwright (2015), Wainwright (2019), Fan et al. (2020), and the references therein. In the context of quantile regression, Belloni and Chernozhukov (2011) provided a comprehensive analysis of the ℓ1-penalized quantile regression as well as the post-penalized QR estimator. Since then, the literature on high-dimensional quantile regression has grown rapidly, and we refer to Chapter 15 of Koenker et al. (2017) for an overview. It is now a consensus that the ℓ1-penalty induces non-negligible bias (Fan and Li, 2001; Zou, 2006; Zhang and Zhang, 2012), due to which the selected model tends to include spurious variables unless stringent conditions are imposed on the design matrix, such as the strong irrepresentable condition (Zhao and Yu, 2006; Meinshausen and Bühlmann, 2006). To reduce the bias induced by the ℓ1-penalty when the signal is sufficiently strong, various concave penalty functions have been designed (Fan and Li, 2001; Zhang, 2010a,b). For concave penalized M-estimation with convex and locally strongly convex losses, a large body of literature has shown that there exists a local solution that possesses the oracle property, i.e., a solution that is as efficient as the oracle estimator obtained by assuming the true active set is known a priori, under a certain minimum signal strength condition, also known as the beta-min condition. We refer the reader to Fan and Li (2001), Zou and Li (2008), Kim, Choi and Oh (2008), Zhang (2010b), Fan and Lv (2011), Zhang and Zhang (2012), Kim and Kwon (2012), Loh and Wainwright (2015), and Loh (2017) for more details. By comparison, quantile regression with concave regularization is much less understood theoretically, primarily due to the challenges in analyzing the piecewise linear quantile loss and the concave penalty simultaneously. Let $\beta^* \in \mathbb{R}^p$ be the s-sparse underlying parameter vector with support $S = \{1 \le j \le p : \beta^*_j \ne 0\}$, and define the minimum signal strength $\|\beta^*_S\|_{\min} = \min_{j\in S} |\beta^*_j|$.
Under a beta-min condition $\|\beta^*_S\|_{\min} \gg n^{-1/2}\max\{s, \sqrt{\log(p)}\}$, Wang, Wu and Li (2012) showed that the oracle QR estimator belongs to the set of local minima of the non-convex penalized quantile objective function with probability approaching one. From a different angle, Fan, Xue and Zou (2014) proved that the oracle QR estimator can be obtained via the one-step local linear approximation (LLA) algorithm (Zou and Li, 2008) under a beta-min condition $\|\beta^*_S\|_{\min} \gtrsim \sqrt{s\log(p)/n}$, that is, the minimal non-zero coefficient is of order $\sqrt{s\log(p)/n}$ in magnitude. We refer to Chapter 16 of Koenker et al. (2017) for an overview of the existing results on non-convex regularized quantile regression. Existing work on folded concave penalized QR either imposes stringent signal strength assumptions or only establishes theoretical guarantees for some local optimum which, due to non-convexity, is not necessarily the solution obtained by any practical algorithm. In other words, there is no guarantee that the solution obtained from a given algorithm will satisfy the desired statistical properties, leaving a gap between theory and practice.

A natural way to resolve the non-differentiability issue is to smooth the piecewise linear quantile loss using a kernel. The idea of kernel smoothing was first considered by Horowitz (1998) in the context of bootstrap inference for median regression. Horowitz (1998) showed that the estimator obtained from the smoothed quantile loss is asymptotically equivalent to the standard quantile regression estimator. This motivates a series of works on smoothed quantile regression when the number of features is fixed (Whang, 2006; Wu, Ma and Yin, 2015; Galvao and Kato, 2016). However, smoothing the piecewise linear loss directly yields a non-convex function for which a global minimum is not guaranteed. This poses even more challenges in the high-dimensional setting.

Figure 1: Plots of a standard quantile loss, Horowitz's smoothed quantile loss (Horowitz, 1998), and a convolution-type smoothed quantile loss.

In this paper, we propose and study a new method for quantile regression in high-dimensional sparse models, which is based on convolution smoothing and iteratively reweighted ℓ1-penalization. To deal with the non-smoothness, we smooth the piecewise linear quantile loss via convolution. The idea is to smooth the subgradient of the quantile loss, and then integrate it to obtain a smoothed loss function that is also convex. See Figure 1 for a visualization of Horowitz's and convolution smoothing methods. Fernandes, Guerre and Horta (2021) developed the traditional asymptotic theory for convolution smoothing in the context of linear quantile regression when the sample size n tends to infinity while p is kept fixed. For high-dimensional sparse models, we extend the one-step LLA algorithm proposed by Zou and Li (2008), and propose a multi-step, iterative procedure which solves a weighted ℓ1-penalized smoothed quantile objective function at each iteration. This multi-step procedure consists of a sequence of convex programs, which is similar to the multi-stage convex relaxation method for sparse regularization (Zhang, 2010b; Fan et al., 2018).
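To illustrate what convolution smoothing looks like in practice, the short Python sketch below evaluates the standard check loss and a convolution-smoothed version. For the Gaussian kernel, the convolution $(\rho_\tau * K_h)(u) = E[\rho_\tau(u + hZ)]$ with Z ~ N(0, 1) admits the closed form $u\{\tau - \Phi(-u/h)\} + h\,\phi(u/h)$; this closed form is our own elementary calculation for the Gaussian-kernel case rather than a formula quoted from the text, so treat it as an assumption of the sketch. The smoothed loss is convex and differentiable, which is the property the multi-step procedure exploits.

```python
import numpy as np
from scipy.stats import norm

def check_loss(u, tau):
    """Standard quantile (check) loss rho_tau(u) = u * (tau - 1{u < 0})."""
    return u * (tau - (u < 0))

def smoothed_check_loss(u, tau, h):
    """Convolution-smoothed check loss with a Gaussian kernel and bandwidth h:
    closed form of E[rho_tau(u + h*Z)], Z ~ N(0, 1)."""
    return u * (tau - norm.cdf(-u / h)) + h * norm.pdf(u / h)

def smoothed_check_grad(u, tau, h):
    """Derivative in u: tau - Phi(-u/h); a smooth surrogate of the check-loss subgradient."""
    return tau - norm.cdf(-u / h)

if __name__ == "__main__":
    u = np.linspace(-1, 1, 5)
    tau, h = 0.5, 0.25
    print(check_loss(u, tau))              # kinked at zero
    print(smoothed_check_loss(u, tau, h))  # smooth and convex; close to the check loss when |u| >> h
```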
Computationally, for different smoothing kernels, typified by the uniform and Gaussian kernels, we propose efficient algorithms to minimize the weighted ℓ1-penalized smoothed quantile objective function at each stage. Compared with existing methods for fitting high-dimensional quantile regression, the proposed gradient-based algorithms are more scalable to large-scale problems with either a large sample size or high dimensionality. Since the proposed multi-step procedure delivers a sequence of solutions iteratively, to understand how these estimators evolve statistically, we provide a delicate analysis of the estimator at each stage, whose overall estimation error consists of three components: shrinkage bias, oracle rate, and smoothing bias. The theoretical analysis in Zhang (2010b) and Fan et al. (2018) is primarily suited for the quadratic case, although the method applies to more general loss functions. In this work, we aim at establishing theoretical underpinnings of why and how convolution smoothing and iteratively reweighted ℓ1-penalization help with achieving oracle properties for quantile regression. In particular, we show that the solution from the first iteration, i.e., the ℓ1-penalized smoothed quantile regression estimator, is near minimax optimal and coincides with existing results for the ℓ1-penalized QR estimator. Moreover, our analysis reveals that the multi-step, iterative algorithm refines the statistical rate in a sequential manner: every relaxation step shrinks the estimation error from the previous step by a δ-fraction for some predetermined δ ∈ (0, 1). All the results are non-asymptotic with explicit errors depending on (s, p, n), including the deterministic smoothing bias and stochastic statistical errors. With a minimal requirement on the signal strength, $\|\beta^*_S\|_{\min} \gtrsim \sqrt{\log(p)/n}$, we show that after as many as $\ell \gtrsim \lceil \log(\max\{\log(p), s\}) \rceil$ iterations, the multi-step algorithm will deliver an estimator that achieves the oracle rate of convergence as well as the strong oracle property. The latter implies variable selection consistency as a byproduct. To our knowledge, these are the first statistical characterizations of computationally feasible concave regularized quantile regression estimators.

The rest of the paper is organized as follows. In Section 2, we describe the convolution-type smoothing approach for quantile regression, followed by an iteratively reweighted ℓ1-penalized procedure for fitting high-dimensional sparse models. At each stage, the problem boils down to minimizing a weighted ℓ1-penalized smoothed quantile objective function, for which we propose efficient and scalable algorithms in Section 3, with a particular focus on the uniform and Gaussian kernels. In Section 4, we provide theoretical guarantees for the sequence of estimators obtained by the multi-step method, including estimation error bounds (in high probability) and the strong oracle property. A numerical demonstration of the proposed method on simulated data and a real data application are provided in Sections 5 and 6, respectively. The proofs of all theoretical results are given in the online supplementary material. The Python code that implements the proposed iteratively reweighted regularized quantile regression procedure is available at https://github.com/WenxinZhou/conquer.
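The multi-step procedure described above can be prototyped in a few dozen lines. The sketch below is written under our own simplifying assumptions rather than as a reproduction of the conquer implementation: it minimizes a Gaussian-kernel smoothed quantile loss plus a weighted ℓ1 penalty by proximal gradient descent, and re-weights the penalty at each stage using the SCAD derivative evaluated at the previous solution (the local linear approximation idea). The step-size bound, the SCAD constant a = 3.7, the bandwidth heuristic, and the number of stages are illustrative choices.

```python
import numpy as np
from scipy.stats import norm

def scad_weight(beta, lam, a=3.7):
    """SCAD derivative p'_lam(|beta|): the weight used in the next weighted-l1 stage."""
    b = np.abs(beta)
    return np.where(b <= lam, lam, np.maximum(a * lam - b, 0.0) / (a - 1.0))

def smoothed_qr_grad(X, y, beta0, beta, tau, h):
    """Gradient of the Gaussian-kernel convolution-smoothed quantile loss."""
    r = y - beta0 - X @ beta
    g = -(tau - norm.cdf(-r / h))          # d(loss)/d(residual), with a sign flip for the chain rule
    return g.mean(), X.T @ g / len(y)

def weighted_l1_sqr(X, y, tau, h, weights, n_iter=500):
    """Proximal gradient (ISTA) for the weighted-l1 penalized smoothed quantile objective."""
    n, p = X.shape
    # rough curvature bound of the smoothed loss, used as a conservative step size
    L = 2.0 * norm.pdf(0.0) / h * (np.linalg.norm(X, 2) ** 2 / n + 1.0)
    step = 1.0 / L
    beta0, beta = np.quantile(y, tau), np.zeros(p)
    for _ in range(n_iter):
        g0, g = smoothed_qr_grad(X, y, beta0, beta, tau, h)
        beta0 -= step * g0
        z = beta - step * g
        beta = np.sign(z) * np.maximum(np.abs(z) - step * weights, 0.0)  # soft-thresholding
    return beta0, beta

def multistep_sqr(X, y, tau, lam, h=None, n_stages=3):
    """Multi-step procedure: an l1 stage first, then SCAD-reweighted l1 stages."""
    n, p = X.shape
    h = h or max(0.05, (tau * (1 - tau)) ** 0.5 * (np.log(p) / n) ** 0.25)  # heuristic bandwidth
    weights = np.full(p, lam)                    # stage 1: plain l1 penalty
    for _ in range(n_stages):
        beta0, beta = weighted_l1_sqr(X, y, tau, h, weights)
        weights = scad_weight(beta, lam)         # re-weight for the next stage
    return beta0, beta
```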
Notation: For every integer k ≥ 1, we use R^k to denote the k-dimensional Euclidean space, and write [k] = {1, . . . , k}. The inner product of any two vectors $u = (u_1, \ldots, u_k)^\top$ and $v = (v_1, \ldots, v_k)^\top \in \mathbb{R}^k$ is defined by $u^\top v = \langle u, v \rangle = \sum_{i=1}^k u_i v_i$. Moreover, let $u \circ v = (u_1 v_1, \ldots, u_k v_k)^\top$ denote the Hadamard product of u and v. For a subset S ⊆ [k] with cardinality |S|, we write $u_S \in \mathbb{R}^{|S|}$ for the subvector of u that consists of the entries of u indexed by S. We use $\|\cdot\|_q$ (1 ≤ q ≤ ∞) to denote the ℓq-norm in R^k: $\|u\|_q = (\sum_{i=1}^k |u_i|^q)^{1/q}$ and $\|u\|_\infty = \max_{1\le i\le k} |u_i|$. For k ≥ 2, $S^{k-1} = \{u \in \mathbb{R}^k : \|u\|_2 = 1\}$ denotes the unit sphere in R^k. For any function f : R → R and vector $u = (u_1, \ldots, u_k)^\top \in \mathbb{R}^k$, we write $f(u) = (f(u_1), \ldots, f(u_k))^\top \in \mathbb{R}^k$. Throughout this paper, we use bold uppercase letters to represent matrices. For k ≥ 2, $I_k$ represents the k × k identity matrix. For any k × k symmetric, positive semidefinite matrix $A \in \mathbb{R}^{k\times k}$, we use $\gamma(A) \in \mathbb{R}^k$ to denote its vector of eigenvalues, ordered as $\gamma_1(A) \ge \cdots \ge \gamma_k(A) \ge 0$, and let $\|A\|_2 = \gamma_1(A)$ be the operator norm of A. Moreover, let $\|\cdot\|_A$ denote the vector norm induced by A: $\|u\|_A = \|A^{1/2}u\|_2$ for $u \in \mathbb{R}^k$. For any two real numbers u and v, we write u ∨ v = max(u, v) and u ∧ v = min(u, v). For two sequences of non-negative numbers $\{a_n\}_{n\ge 1}$ and $\{b_n\}_{n\ge 1}$, $a_n \lesssim b_n$ indicates that there exists a constant C > 0, independent of n, such that $a_n \le C b_n$; $a_n \gtrsim b_n$ is equivalent to $b_n \lesssim a_n$; $a_n \asymp b_n$ is equivalent to $a_n \lesssim b_n$ and $b_n \lesssim a_n$. For two numbers C1 and C2, we write C2 = C2(C1) if C2 depends only on C1." }, { "url": "http://arxiv.org/abs/1905.11588v2", "title": "Estimating and Inferring the Maximum Degree of Stimulus-Locked Time-Varying Brain Connectivity Networks", "abstract": "Neuroscientists have enjoyed much success in understanding brain functions by constructing brain connectivity networks using data collected under highly controlled experimental settings. However, these experimental settings bear little resemblance to our real-life experience in day-to-day interactions with the surroundings. To address this issue, neuroscientists have been measuring brain activity under natural viewing experiments in which the subjects are given continuous stimuli, such as watching a movie or listening to a story. The main challenge with this approach is that the measured signal consists of the stimulus-induced signal as well as intrinsic-neural and non-neuronal signals. By exploiting the experimental design, we propose to estimate the stimulus-locked brain network by treating non-stimulus-induced signals as nuisance parameters. In many neuroscience applications, it is often important to identify brain regions that are connected to many other brain regions during a cognitive process. We propose an inferential method to test whether the maximum degree of the estimated network is larger than a pre-specified number. We prove that the type I error can be controlled and that the power increases to one asymptotically. Simulation studies are conducted to assess the performance of our method.
Finally, we analyze a functional magnetic resonance imaging dataset obtained under the Sherlock Holmes movie stimuli.", "authors": "Kean Ming Tan, Junwei Lu, Tong Zhang, Han Liu", "published": "2019-05-28", "updated": "2019-06-20", "primary_cat": "stat.ML", "cats": [ "stat.ML" ], "main_content": "2.1 A Statistical Model

Let X(z), S(z), and E(z) be the observed data, the stimulus-induced signal, and the subject-specific effects at time Z = z, respectively. Assume that Z is a continuous random variable with a continuous density. For a given Z = z, we model the observed data as the sum of the stimulus-induced signal and the subject-specific effects:

$X(z) = S(z) + E(z), \quad S(z) \mid Z = z \sim N_d\{0, \Sigma(z)\}, \quad E(z) \mid Z = z \sim N_d\{0, L_X(z)\},$   (2)

where Σ(z) is the covariance matrix of the stimulus-induced signal, and $L_X(z)$ is the covariance matrix of the subject-specific effects. We assume that S(z) and E(z) are independent for all z. Thus, estimating the stimulus-locked brain connectivity network amounts to estimating $\{\Sigma(z)\}^{-1}$. Fitting the model in (1) using the observed data will yield an estimate of $\{\Sigma(z) + L_X(z)\}^{-1}$, and thus, (1) fails to estimate the stimulus-locked brain connectivity network $\{\Sigma(z)\}^{-1}$. To address this issue, we exploit the experimental design aspect of natural viewing experiments. In many studies, neuroscientists often measure brain activity for multiple subjects under the same continuous natural stimulus (Chen et al. 2017, Simony et al. 2016). Let X(z) and Y(z) be the measured data for two subjects at time point Z = z. Since the same natural stimulus is given to both subjects, this motivates the following statistical model:

$X(z) = S(z) + E_X(z), \quad Y(z) = S(z) + E_Y(z), \quad S(z) \mid Z = z \sim N_d\{0, \Sigma(z)\}, \quad E_X(z) \mid Z = z \sim N_d\{0, L_X(z)\}, \quad E_Y(z) \mid Z = z \sim N_d\{0, L_Y(z)\},$   (3)

where S(z) is the stimulus-induced signal, and $E_X(z)$ and $E_Y(z)$ are the subject-specific effects at Z = z. Model (3) motivates the calculation of the inter-subject covariance between two subjects rather than the within-subject covariance. For a given time point Z = z, we have

$E[X(z)\{Y(z)\}^\top \mid Z = z] = E[S(z)\{S(z)\}^\top \mid Z = z] + E[E_X(z)\{E_Y(z)\}^\top \mid Z = z] = \Sigma(z).$

That is, we estimate Σ(z) via the inter-subject covariance by treating $L_X(z)$ and $L_Y(z)$ as nuisance parameters. In the neuroscience literature, several authors have calculated the inter-subject covariance matrix to estimate marginal dependencies among brain regions that are stimulus-locked (Chen et al. 2017, Simony et al. 2016). They have found that calculating the inter-subject covariance better captures the stimulus-locked marginal relationships for pairs of brain regions. For simplicity, throughout the paper, we focus on two subjects. When there are multiple subjects, we can split the subjects into two groups and use the average of each group to estimate the stimulus-locked brain network. We also discuss a U-statistic type estimator for the case of multiple subjects in Appendix B.

2.2 Inter-Subject Time-Varying Gaussian Graphical Models

We now propose inter-subject time-varying Gaussian graphical models for estimating stimulus-locked time-varying brain networks. Let (Z1, X1, Y1), . . . , (Zn, Xn, Yn) be n independent realizations of the triplets (Z, X, Y). Both subjects share the same Z1, . . . , Zn since they are given the same continuous stimulus. Let K : R → R be a symmetric kernel function.
To obtain an estimate of Σ(z), we propose the inter-subject kernel-smoothed covariance estimator

$\hat{\Sigma}(z) = \frac{\sum_{i\in[n]} K_h(Z_i - z)\, X_i Y_i^{\top}}{\sum_{i\in[n]} K_h(Z_i - z)},$   (4)

where $K_h(Z_i - z) = K\{(Z_i - z)/h\}/h$, h > 0 is the bandwidth parameter, and [n] = {1, . . . , n}. For simplicity, we use the Epanechnikov kernel

$K(u) = 0.75\,(1 - u^2)\,\mathbf{1}\{|u| \le 1\},$   (5)

where $\mathbf{1}\{|u| \le 1\}$ is an indicator function that takes value one if |u| ≤ 1 and zero otherwise. The choice of kernel is not essential as long as it satisfies the regularity conditions in Section 5.1. Let $\Theta(z) = \{\Sigma(z)\}^{-1}$. Given the kernel-smoothed inter-subject covariance estimator in (4), there are multiple approaches to obtain an estimate of the inverse covariance matrix Θ(z). We consider the CLIME estimator proposed by Cai et al. (2011). Let $e_j$ be the jth canonical basis vector in R^d. For a vector $v \in \mathbb{R}^d$, let $\|v\|_1 = \sum_{j=1}^d |v_j|$ and $\|v\|_\infty = \max_j |v_j|$. For each j ∈ [d], the CLIME estimator takes the form

$\hat{\Theta}_j(z) = \arg\min_{\theta\in\mathbb{R}^d} \|\theta\|_1 \quad \text{subject to} \quad \|\hat{\Sigma}(z)\,\theta - e_j\|_\infty \le \lambda,$   (6)

where λ > 0 is a tuning parameter that controls the sparsity of $\hat{\Theta}_j(z)$. We construct an estimator for the stimulus-locked brain network as $\hat{\Theta}(z) = [\{\hat{\Theta}_1(z)\}^{\top}, \ldots, \{\hat{\Theta}_d(z)\}^{\top}]$.

There are two tuning parameters in our proposed method: a bandwidth parameter h that controls the smoothness of the estimated covariance matrix, and a tuning parameter λ that controls the sparsity of the estimated network. The bandwidth parameter h can be selected according to the scientific context. For instance, in many neuroscience applications that involve continuous natural stimuli, we select h such that there are always at least 30% of the time points with non-zero kernel weights. In the following, we propose an L-fold cross-validation type procedure to select λ. We first partition the n time points into L folds. Let $C_\ell$ be an index set containing the time points for the ℓth fold. Let $\hat{\Theta}(z)^{(-\ell)}$ be the inverse covariance matrix estimated using the data excluding the ℓth fold, and let $\hat{\Sigma}(z)^{(\ell)}$ be the kernel-smoothed covariance estimated using the data only from the ℓth fold. We calculate the following quantity for various values of λ:

$\mathrm{cv}_\lambda = \frac{1}{L}\sum_{\ell=1}^{L}\sum_{i\in C_\ell} \bigl\|\hat{\Sigma}(z_i, \lambda)^{(\ell)}\,\hat{\Theta}(z_i, \lambda)^{(-\ell)} - I_d\bigr\|_{\max},$   (7)

where $\|\cdot\|_{\max}$ is the element-wise max norm for matrices. From extensive numerical studies, we find that picking the λ that minimizes the above quantity tends to be too conservative. We instead propose to pick the smallest λ with $\mathrm{cv}_\lambda$ smaller than the minimum plus two standard deviations.
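As a concrete illustration of the estimator in (4) with the Epanechnikov kernel in (5), the following numpy sketch computes the inter-subject kernel-smoothed covariance at a given time point; the CLIME step in (6) is left to a dedicated solver (such as the R package clime used later) and is not reproduced here.

```python
import numpy as np

def epanechnikov(u):
    """Epanechnikov kernel K(u) = 0.75 (1 - u^2) 1{|u| <= 1}, as in (5)."""
    return 0.75 * (1.0 - u ** 2) * (np.abs(u) <= 1)

def isc_covariance(X, Y, Z, z, h):
    """Inter-subject kernel-smoothed covariance Sigma_hat(z) in (4).

    X, Y : (n, d) arrays of measurements for the two subjects,
    Z    : (n,) array of time points shared by both subjects,
    z    : time point at which to evaluate the estimator,
    h    : bandwidth parameter.
    """
    w = epanechnikov((Z - z) / h) / h          # kernel weights K_h(Z_i - z)
    if w.sum() <= 0:
        raise ValueError("no observation receives positive kernel weight at z")
    S = (X * w[:, None]).T @ Y / w.sum()       # sum_i w_i X_i Y_i^T / sum_i w_i
    # Note: the finite-sample estimate need not be exactly symmetric; symmetrizing
    # via (S + S.T) / 2 is a common optional post-processing step, not prescribed above.
    return S

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    n, d = 200, 5
    Z = rng.uniform(0, 1, n)
    X, Y = rng.standard_normal((n, d)), rng.standard_normal((n, d))
    Sigma_hat = isc_covariance(X, Y, Z, z=0.5, h=1.2 * n ** (-1 / 5))
```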
2.3 Inference on Maximum Degree

We consider testing the hypothesis:

H0 : for all z ∈ [0, 1], the maximum degree of the graph is not greater than k,
H1 : there exists a z0 ∈ [0, 1] such that the maximum degree of the graph is greater than k.   (8)

In the existing literature, many authors have proposed to test whether there is an edge between two nodes in a graph (see Neykov et al. 2018, and the references therein). Due to the ℓ1 penalty used to encourage a sparse graph, classical test statistics are no longer asymptotically normal. We employ the de-biased test statistic

$\hat{\Theta}^{\mathrm{de}}_{jk}(z) = \hat{\Theta}_{jk}(z) - \frac{\{\hat{\Theta}_j(z)\}^{\top}\{\hat{\Sigma}(z)\hat{\Theta}_k(z) - e_k\}}{\{\hat{\Theta}_j(z)\}^{\top}\hat{\Sigma}_j(z)},$   (9)

where $\hat{\Theta}_j(z)$ is the jth column of $\hat{\Theta}(z)$. The subtrahend in (9) is the bias introduced by imposing an ℓ1 penalty during the estimation procedure. We use (9) to construct a test statistic for testing the maximum degree of a time-varying graph. Let G(z) = {V, E(z)} be an undirected graph, where V = {1, . . . , d} is a set of d nodes and E(z) ⊆ V × V is a set of edges connecting pairs of nodes. Let

$T_E = \sup_{z\in[0,1]} \max_{(j,k)\in E(z)} \sqrt{nh}\,\bigl|\hat{\Theta}^{\mathrm{de}}_{jk}(z) - \Theta_{jk}(z)\bigr| \cdot \Bigl\{\frac{1}{n}\sum_{i\in[n]} K_h(Z_i - z)\Bigr\}.$   (10)

The edge set E(z) is defined based on the hypothesis testing problem. In the context of testing the maximum degree of a time-varying graph as in (8), E(z) = V × V, and therefore the maximum is taken over all possible edges between pairs of nodes. Throughout the manuscript, we will use the notation E(z) to indicate some predefined known edge set. This general edge set will be different for testing different graph structures, and we refer the reader to Appendix A for details.

Since the test statistic (10) involves taking the supremum over z and the maximum over all edges in E(z), it is challenging to evaluate its asymptotic distribution. To this end, we generalize the Gaussian multiplier bootstrap proposed in Chernozhukov et al. (2013) and Chernozhukov et al. (2014b) to approximate the distribution of the test statistic $T_E$. Let ξ1, . . . , ξn be i.i.d. N(0, 1). We construct the bootstrap statistic as

$T^B_E = \sup_{z\in[0,1]} \max_{(j,k)\in E(z)} \sqrt{nh}\,\Biggl|\frac{\sum_{i\in[n]} \{\hat{\Theta}_j(z)\}^{\top} K_h(Z_i - z)\bigl\{X_i Y_i^{\top}\hat{\Theta}_k(z) - e_k\bigr\}\,\xi_i / n}{\{\hat{\Theta}_j(z)\}^{\top}\hat{\Sigma}_j(z)}\Biggr|.$   (11)

We denote the conditional (1 − α)-quantile of $T^B_E$ given $\{(Z_i, X_i, Y_i)\}_{i\in[n]}$ as

$c(1 - \alpha, E) = \inf\bigl(t \in \mathbb{R} \mid P\bigl[T^B_E \le t \mid \{(Z_i, X_i, Y_i)\}_{i\in[n]}\bigr] \ge 1 - \alpha\bigr).$   (12)

The quantity c(1 − α, E) can be calculated numerically using Monte Carlo. In Section 5.2, we show that the quantile of $T_E$ in (10) can be estimated accurately by the conditional (1 − α)-quantile of the bootstrap statistic.

We now propose an inference framework for testing hypothesis problems of the form (8). Our proposed method is motivated by the step-down method in Romano & Wolf (2005) for multiple hypothesis tests. The details are summarized in Algorithm 1. Algorithm 1 involves evaluating all values of z ∈ [0, 1]. In practice, we implement the proposed method by discretizing the values of z ∈ [0, 1] into a large number of time points. We note that there will be an approximation error from taking the maximum over the discretized time points instead of the supremum of the continuous trajectory. This approximation error can be made arbitrarily small by increasing the density of the discretization.

Algorithm 1 Testing Maximum Degree of a Time-Varying Graph.
Input: type I error α; pre-specified degree k; de-biased estimator $\hat{\Theta}^{\mathrm{de}}(z)$ for z ∈ [0, 1].
1. Compute the conditional quantile $c(1-\alpha, E) = \inf\bigl[t \in \mathbb{R} \mid P\bigl(T^B_E \le t \mid \{(Z_i, X_i, Y_i)\}_{i\in[n]}\bigr) \ge 1 - \alpha\bigr]$, where $T^B_E$ is the bootstrap statistic defined in (11).
2. Construct the rejected edge set
$R(z) = \Bigl\{ e \in E(z) \;\Big|\; \sqrt{nh}\cdot\bigl|\hat{\Theta}^{\mathrm{de}}_e(z)\bigr|\cdot\sum_{i\in[n]} K_h(Z_i - z)/n > c(1 - \alpha, E) \Bigr\}.$
3. Compute $d_{\mathrm{rej}}$ as the maximum degree of the dynamic graph based on the rejected edge set.
Output: Reject the null hypothesis if $d_{\mathrm{rej}} > k$.

In Section 5.2, we will show that Algorithm 1 is able to control the type I error at a pre-specified value α. Moreover, the power of the proposed inferential method increases to one as we increase the number of time points n. In fact, the proposed inferential method can be generalized to test a wide variety of structures that satisfy the monotone graph property. Some examples of the monotone graph property are that the graph is connected, the graph has no more than k connected components, the maximum degree of the graph is larger than k, the graph has no more than k isolated nodes, and the graph contains a clique of size larger than k. This generalization is presented in Appendix A.

3 Simulation Studies

We perform numerical studies to evaluate the performance of our proposal using the inter-subject covariance relative to the typical time-varying Gaussian graphical model using the within-subject covariance. To this end, we define the true positive rate as the proportion of correctly identified non-zeros in the true inverse covariance matrix, and the false positive rate as the proportion of zeros that are incorrectly identified to be non-zeros. To evaluate our testing procedure, we calculate the type I error rate and power as the proportion of falsely rejected H0 and correctly rejected H0, respectively, over a large number of data sets.

To generate the data, we first construct the inverse covariance matrix Θ(z) for z = {0, 0.2, 0.5}. At z = 0, we set (d − 2)/4 off-diagonal elements of Θ(0) to equal 0.3, chosen at random with equal probability. At z = 0.2, we set an additional (d − 2)/4 off-diagonal elements of Θ(0) to equal 0.3. At z = 0.5, we randomly select two columns of Θ(0.2) and add k + 1 edges to each of the two columns. This guarantees that the maximum degree of the graph is greater than k. To ensure that the inverse covariance matrix is smooth, for z ∈ [0, 0.2], we construct Θ(z) by taking linear interpolations between the elements of Θ(0) and Θ(0.2). For z ∈ [0.2, 0.5], we construct Θ(z) in a similar fashion based on Θ(0.2) and Θ(0.5). The construction is illustrated in Figure 1.

Figure 1: (a): A graph corresponding to Θ(0) with maximum degree no greater than four. (b): A graph corresponding to Θ(0.2) with maximum degree less than or equal to four. The red dashed edges are additional edges that are added to Θ(0). (c): A graph corresponding to Θ(0.5) with maximum degree larger than four. The red dashed edges are additional edges that are added to Θ(0.2) such that the maximum degree of the graph is larger than four.

To ensure that the inverse covariance matrix is positive definite, we set $\Theta_{jj}(z) = |\Lambda_{\min}\{\Theta(z)\}| + 0.1$, where $\Lambda_{\min}\{\Theta(z)\}$ is the minimum eigenvalue of Θ(z). We then rescale the matrix such that the diagonal elements of Θ(z) equal one. The covariance matrix Σ(z) can be obtained by taking the inverse of Θ(z) for each value of z.
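The construction just described is easy to reproduce. The sketch below builds Θ(0), Θ(0.2), and Θ(0.5) as in the text, linearly interpolates between them, enforces positive definiteness via the |Λ_min| + 0.1 diagonal adjustment, and rescales the diagonal to one; details such as the random seed, the symmetric placement of entries, and applying the adjustment at the three anchor matrices before interpolating are our own implementation choices.

```python
import numpy as np

def add_random_edges(Theta, n_edges, value=0.3, rng=None, cols=None, per_col=None):
    """Set randomly chosen off-diagonal entries (symmetrically) to `value`."""
    rng = rng if rng is not None else np.random.default_rng()
    d = Theta.shape[0]
    Theta = Theta.copy()
    if cols is None:
        added = 0
        while added < n_edges:
            j, k = rng.integers(d, size=2)
            if j != k and Theta[j, k] == 0:
                Theta[j, k] = Theta[k, j] = value
                added += 1
    else:                                    # add `per_col` edges to each selected column
        for j in cols:
            ks = rng.choice([k for k in range(d) if k != j and Theta[j, k] == 0],
                            size=per_col, replace=False)
            Theta[j, ks] = Theta[ks, j] = value
    return Theta

def make_pd_and_rescale(Theta, bump=0.1):
    """Diagonal adjustment |Lambda_min| + bump, then rescale so diag(Theta) = 1."""
    lam_min = np.linalg.eigvalsh(Theta).min()
    np.fill_diagonal(Theta, np.abs(lam_min) + bump)
    D = np.sqrt(np.diag(Theta))
    return Theta / np.outer(D, D)

def theta_path(d=50, k=5, rng=None):
    """Return a function z -> Theta(z) built from Theta(0), Theta(0.2), Theta(0.5)."""
    rng = rng if rng is not None else np.random.default_rng(1)
    T0 = add_random_edges(np.zeros((d, d)), (d - 2) // 4, rng=rng)
    T2 = add_random_edges(T0, (d - 2) // 4, rng=rng)
    hubs = rng.choice(d, size=2, replace=False)
    T5 = add_random_edges(T2, None, rng=rng, cols=hubs, per_col=k + 1)
    T0, T2, T5 = (make_pd_and_rescale(T) for T in (T0, T2, T5))
    def Theta(z):
        if z <= 0.2:
            w = z / 0.2
            return (1 - w) * T0 + w * T2
        w = min((z - 0.2) / 0.3, 1.0)
        return (1 - w) * T2 + w * T5
    return Theta
```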
Model (3) involves the subject-specific covariance matrices $L_X(z)$ and $L_Y(z)$. For simplicity, we assume that these covariance matrices stay constant over time. We generate $L_X$ by setting the diagonal elements to be one and the off-diagonal elements to be 0.3. Then, we add random perturbations $\epsilon_k\epsilon_k^{\top}$ to $L_X$ for k = 1, . . . , 10, where $\epsilon_k \sim N_d(0, I_d)$. The matrix $L_Y$ is generated similarly. To generate the data according to (3), we first generate $Z_i \sim \mathrm{Unif}(0, 1)$. Given Z1, . . . , Zn, we generate $S(Z_i) \mid Z = Z_i \sim N_d\{0, \Sigma(Z_i)\}$. We then simulate $E_X(Z_i) \mid Z = Z_i \sim N_d(0, L_X)$ and $E_Y(Z_i) \mid Z = Z_i \sim N_d(0, L_Y)$. Finally, for each value of Z, we generate $X(Z_i) = S(Z_i) + E_X(Z_i)$ and $Y(Z_i) = S(Z_i) + E_Y(Z_i)$. Note that both $X(Z_i)$ and $Y(Z_i)$ share the same generated $S(Z_i)$, since both subjects are given the same natural continuous stimulus. In the following sections, we assess the performance of our proposal relative to that of the typical approach for time-varying Gaussian graphical models using the within-subject covariance matrix as input. We then evaluate the proposed inferential procedure of Section 2.3 by calculating its type I error and power.

3.1 Estimation

To mimic the data application we consider, we generate the data with n = 945, d = 172, and k = 10. Given the data (Z1, X1, Y1), . . . , (Zn, Xn, Yn), we estimate the covariance matrix at Z = z using the inter-subject kernel-smoothed covariance estimator as defined in (4). To obtain estimates of the inverse covariance matrices $\hat{\Theta}(Z_1), \ldots, \hat{\Theta}(Z_n)$, we use the CLIME estimator as described in (6), implemented using the R package clime. There are two tuning parameters, h and λ: we set $h = 1.2 \cdot n^{-1/5}$ and vary the tuning parameter λ to obtain the ROC curve in Figure 2. The smoothing parameter h is selected such that there are always at least 30% of the time points with non-zero kernel weights. We compare our proposal to time-varying Gaussian graphical models with the kernel-smoothed within-subject covariance matrix. The true and false positive rates, averaged over 100 data sets, are shown in Figure 2.

Figure 2: The true and false positive rates for the numerical study with n = 952, d = 172, and k = 10. Panels (a), (b), and (c) correspond to Z = {0.25, 0.50, 0.75}, respectively. The two curves represent our proposal (black solid line) and the within-subject time-varying Gaussian graphical model (black dashed line), respectively.

From Figure 2, we see that our proposed method outperforms the typical approach for time-varying Gaussian graphical models that calculates the within-subject covariance matrix. This is because the typical approach does not estimate the parameter of interest, as discussed in Section 2.2. Our proposed method treats the subject-specific effects as nuisance parameters and is able to estimate the stimulus-locked graph accurately.
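For completeness, the sketch below shows one way to compute the true and false positive rates defined at the beginning of this section from an estimated and a true inverse covariance matrix; thresholding the estimate at a small tolerance and restricting attention to off-diagonal entries are our own conventions rather than choices stated in the text.

```python
import numpy as np

def support(Theta, tol=1e-8):
    """Off-diagonal support (edge indicators) of a precision matrix."""
    S = np.abs(Theta) > tol
    np.fill_diagonal(S, False)
    return S

def tpr_fpr(Theta_hat, Theta_true, tol=1e-8):
    """True/false positive rates of the estimated edge set against the truth."""
    est, truth = support(Theta_hat, tol), support(Theta_true, tol)
    d = Theta_true.shape[0]
    tpr = (est & truth).sum() / max(truth.sum(), 1)
    fpr = (est & ~truth).sum() / max((~truth).sum() - d, 1)   # exclude the diagonal from the negatives
    return tpr, fpr
```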
3.2 Testing the Maximum Degree of a Time-Varying Graph

In this section, we evaluate the proposed inferential method in Algorithm 1 by calculating its type I error and power. In all of our simulation studies, we consider d = 50 and B = 500 bootstrap samples, across a range of sample sizes n. Similarly, we select the smoothing parameter to be $h = 1.2 \cdot n^{-1/5}$. The tuning parameter λ is then selected using the cross-validation criterion defined in (7). The tuning parameter $\lambda = 0.9 \cdot \{h^2 + \sqrt{\log(d/h)/(nh)}\}$ is selected for one of the simulated data sets. For computational purposes, we use this value of the tuning parameter across all replications. We construct the test statistic $T_E$ and the Gaussian multiplier bootstrap statistic $T^B_E$ as defined in (10) and (11), respectively. Both statistics involve evaluating the supremum over z ∈ [0, 1]. In our simulation studies, we approximate the supremum by taking the maximum of the statistics over a grid of 50 evenly spaced points z ∈ [z_min, z_max], where $z_{\min} = \min\{Z_i\}_{i\in[n]}$ and $z_{\max} = \max\{Z_i\}_{i\in[n]}$. Our testing procedure tests the hypothesis

H0 : for all z ∈ [z_min, z_max], the maximum degree of the graph is no greater than k,
H1 : there exists a z0 ∈ [z_min, z_max] such that the maximum degree of the graph is greater than k.

For the power analysis, we construct Θ(z) according to Figure 1 by randomly selecting two columns of Θ(0.2) and adding k + 1 edges to each of the two columns. This ensures that the maximum degree of the graph is greater than k. To evaluate the type I error under H0, instead of adding k + 1 edges to the two columns, we add sufficient edges such that the maximum degree of the graph is no greater than k. For the purpose of illustrating the type I error and power in the finite sample setting, we increase the signal-to-noise ratio of the data by reducing the effect of the nuisance parameters in the data generating mechanism described in Section 3. The type I error and power for k = {5, 6}, averaged over 500 data sets, are reported in Table 1. We see that the type I error is controlled and that the power increases to one as we increase the number of time points n.

Table 1: The type I error and power for testing the maximum degree of the graph at the 0.05 significance level are calculated as the proportion of falsely rejected and correctly rejected null hypotheses, respectively, over 500 data sets. Simulation results with d = 50 and k = {5, 6}, over a range of n, are shown.

                       n = 400   n = 600   n = 800   n = 1000   n = 1500
k = 5   Type I error   0.014     0.024     0.030     0.034      0.028
        Power          0.068     0.182     0.690     0.976      1
k = 6   Type I error   0.032     0.040     0.034     0.028      0.018
        Power          0.050     0.142     0.446     0.898      1
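A rough numpy sketch of the grid-based implementation described above is given below: it evaluates the multiplier bootstrap statistic in (11) over a grid of time points to obtain the quantile c(1 − α, E), forms the rejected edge set of Algorithm 1, and reports the resulting maximum degree. It assumes the CLIME estimates Θ̂(z), the de-biased estimates Θ̂^de(z), and the smoothed covariances Σ̂(z) have been precomputed and are supplied as callables; E(z) is taken to be all off-diagonal pairs, and the vectorization details and the reduction over the grid are our own.

```python
import numpy as np

def epanechnikov(u):
    return 0.75 * (1.0 - u ** 2) * (np.abs(u) <= 1)

def bootstrap_stat(X, Y, Z, z_grid, Theta_hat, Sigma_hat, h, xi):
    """One draw of the multiplier bootstrap statistic T_E^B in (11), maximized over the z-grid."""
    n, d = X.shape
    sup_val = 0.0
    for z in z_grid:
        w = epanechnikov((Z - z) / h) / h                 # K_h(Z_i - z)
        Th, Sig = Theta_hat(z), Sigma_hat(z)
        A, B = X @ Th, Y @ Th                             # A[i, j] = X_i^T Theta_j, B[i, k] = Y_i^T Theta_k
        wx = w * xi
        num = (A * wx[:, None]).T @ B / n - wx.sum() / n * Th.T  # (1/n) sum_i w_i xi_i {Theta_j^T X_i Y_i^T Theta_k - Theta_kj}
        denom = (Th * Sig).sum(axis=0)                    # Theta_j^T Sigma_j for each column j
        stat = np.sqrt(n * h) * np.abs(num / denom[:, None])
        np.fill_diagonal(stat, 0.0)                       # E(z): off-diagonal pairs only
        sup_val = max(sup_val, stat.max())
    return sup_val

def max_degree_test(X, Y, Z, z_grid, Theta_de, Theta_hat, Sigma_hat, h, k, alpha=0.05, B=500, seed=0):
    """Algorithm 1 on a grid: reject H0 if the rejected-edge graph has maximum degree > k."""
    rng = np.random.default_rng(seed)
    n, d = X.shape
    TB = np.array([bootstrap_stat(X, Y, Z, z_grid, Theta_hat, Sigma_hat, h, rng.standard_normal(n))
                   for _ in range(B)])
    c = np.quantile(TB, 1 - alpha)                        # conditional (1 - alpha)-quantile in (12)
    d_rej = 0
    for z in z_grid:
        w_bar = (epanechnikov((Z - z) / h) / h).mean()    # (1/n) sum_i K_h(Z_i - z)
        rejected = np.sqrt(n * h) * np.abs(Theta_de(z)) * w_bar > c
        np.fill_diagonal(rejected, False)
        d_rej = max(d_rej, int(rejected.sum(axis=1).max()))   # max over the grid of the per-time max degree
    return d_rej > k, d_rej, c
```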
4 Sherlock Holmes Data

We analyze a brain imaging data set studied in Chen et al. (2017). This data set consists of fMRI measurements of 17 subjects while watching audio-visual movie stimuli in an fMRI scanner. More specifically, the subjects were asked to watch a 23-minute segment of the BBC television series Sherlock, taken from the beginning of the first episode of the series. The fMRI measurements were taken every 1.5 seconds of the movie, yielding n = 945 brain images for each subject. To understand the dynamics of the brain connectivity network under natural continuous stimuli, we partition the movie into 26 scenes (Chen et al. 2017). The data were pre-processed for slice time correction, motion correction, linear detrending, high-pass filtering, and coregistration to a template brain (Chen et al. 2017). Furthermore, for each subject, we attempt to mitigate issues caused by non-neuronal signal sources by regressing out the average white matter signal. There are measurements for 271,633 voxels in this data set. For interpretation purposes, we reduce the dimension from 271,633 voxels to d = 172 regions of interest (ROIs) as described in Baldassano et al. (2015). We map the n = 945 brain images taken across the 23 minutes into the interval [0, 1] chronologically. We then standardize each of the 172 ROIs to have mean zero and standard deviation one.

We first estimate the stimulus-locked time-varying brain connectivity network. To this end, we construct the inter-subject kernel-smoothed covariance matrix $\hat{\Sigma}(z)$ as defined in (4). Since there are 17 subjects, we randomly split the 17 subjects into two groups and use the averaged data to construct (4). Note that we could also construct a brain connectivity network for each pair of subjects separately. We then obtain estimates of the inverse covariance matrix using the CLIME estimator as in (6). We set the smoothing parameter $h = 1.2 \cdot n^{-1/5}$ so that at least 30% of the kernel weights are non-zero across all time points Z. For the sparsity tuning parameter, our theoretical results suggest picking $\lambda = C \cdot \{h^2 + \sqrt{\log(d/h)/(nh)}\}$ to guarantee a consistent estimator. We select the constant C from a sequence of candidate values using the 5-fold cross-validation procedure described in (7), and this yields $\lambda = 1.4 \cdot \{h^2 + \sqrt{\log(d/h)/(nh)}\}$. Heatmaps of the estimated stimulus-locked brain connectivity networks for three different scenes in Sherlock are shown in Figure 3.

Figure 3: Heatmaps of the estimated stimulus-locked brain connectivity network for three different scenes in Sherlock. (a) Watson psychiatrist scene; (b) Park run in scene; and (c) Watson joins in scene. Colored elements in the heatmaps correspond to edges in the estimated brain network.

From Figure 3, we see that quite a number of connections between brain regions remain the same across different scenes in the movie. It is also evident that the graph structure changes across different scenes. We see that most brain regions are very sparsely connected, with the exception of a few ROIs. This raises the question of identifying whether there are hub ROIs that are connected to many other ROIs under audio-visual stimuli. To answer this question, we perform a hypothesis test of whether there are hub nodes that are connected to many other nodes in the graph across the 26 scenes and, if there are such hub nodes, which ROIs they correspond to. More formally, we test the hypothesis

H0 : for all z ∈ [0, 1], the maximum degree of the graph is no greater than 15,
H1 : there exists a z0 ∈ [0, 1] such that the maximum degree of the graph is greater than 15.

The number 15 is chosen since we are interested in testing whether there is any brain region that is connected to more than 10% of the total number of brain regions. We apply Algorithm 1 with 26 values of z corresponding to the middle of the 26 scenes. Figure 4 shows the ROIs that have more than 12 rejected edges across the 26 scenes based on Algorithm 1. Since the maximum degree of the rejected nodes in some scenes is larger than 15, we reject the null hypothesis that the maximum degree of the graph is no greater than 15. In Figure 5, we plot the sagittal snapshot of the brain connectivity network, visualizing the rejected edges from Algorithm 1 and the identified hub ROIs.
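The selection of the constant C above follows the L-fold criterion in (7). The sketch below illustrates that step under our own assumptions: it uses hypothetical helpers clime_fit(Sigma, lam) for the CLIME fit and isc_cov(...) for the kernel-smoothed covariance (as in the earlier sketch), assigns folds by random permutation of time points, and simply minimizes the criterion over a candidate grid for C (the text instead recommends the smallest λ whose criterion lies within two standard deviations of the minimum).

```python
import numpy as np

def cv_criterion(X, Y, Z, h, lam, clime_fit, isc_cov, n_folds=5, seed=0):
    """L-fold criterion cv_lambda in (7): average element-wise max norm of
    Sigma_hat^(l)(z_i) Theta_hat^(-l)(z_i) - I_d over held-out time points."""
    rng = np.random.default_rng(seed)
    n, d = X.shape
    folds = rng.permutation(n) % n_folds
    total = 0.0
    for l in range(n_folds):
        tr, te = folds != l, folds == l
        for i in np.where(te)[0]:
            Sig_l = isc_cov(X[te], Y[te], Z[te], Z[i], h)                    # covariance from fold l only
            Theta_ml = clime_fit(isc_cov(X[tr], Y[tr], Z[tr], Z[i], h), lam)  # CLIME on the remaining folds
            total += np.abs(Sig_l @ Theta_ml - np.eye(d)).max()
    return total / n_folds

def select_constant(X, Y, Z, h, clime_fit, isc_cov, C_grid=(0.6, 0.8, 1.0, 1.2, 1.4, 1.6)):
    """Pick C in lambda = C * (h^2 + sqrt(log(d/h) / (n h))) by minimizing cv_lambda."""
    n, d = X.shape
    base = h ** 2 + np.sqrt(np.log(d / h) / (n * h))
    scores = [cv_criterion(X, Y, Z, h, C * base, clime_fit, isc_cov) for C in C_grid]
    return C_grid[int(np.argmin(scores))]
```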
Figure 4: The x-axis displays the 26 scenes in the movie and the y-axis displays the number of rejected edges from Algorithm 1. The numbers correspond to the regions of interest (ROIs) in the brain. The ROIs correspond to the frontal pole (7, 155), temporal fusiform cortex (16, 100), lingual gyrus (17), cingulate gyrus (19), cingulate gyrus (20), temporal pole (42), paracingulate gyrus (70), precuneus cortex (102), and postcentral gyrus (109).

From Figure 4, we see that the rejected hub nodes (nodes that have more than 15 rejected edges) correspond to the frontal pole (7), temporal fusiform cortex (16, 100), lingual gyrus (17), and precuneus (102) regions of the brain. Many studies have suggested that the frontal pole plays significant roles in higher order cognitive operations such as decision making and moral reasoning (among others, Okuda et al. 2003). The fusiform cortex is linked to face and body recognition (see Iaria et al. 2008, and the references therein). In addition, the lingual gyrus is known for its involvement in the processing of visual information about parts of human faces (McCarthy et al. 1999). Thus, it is not surprising that both of these ROIs have more than 15 rejected edges, since the brain images are collected while the subjects are exposed to an audio-visual movie stimulus.

Figure 5: Sagittal snapshots of the rejected edges based on Algorithm 1. Panels (a)-(c) contain the snapshots for the "teens in rain", "police press conference", and "lab flirting" scenes, respectively. The red nodes and red edges are regions of interest that have more than 15 rejected edges. The grey edges are rejected edges from nodes that have no more than 15 rejected edges. For (c), the green nodes and edges are regions of interest that have more than 12 rejected edges.

Compared to the lingual gyrus, temporal fusiform cortex, and frontal pole, the precuneus is the least well-understood brain region in the current literature. We see from Figure 4 that the precuneus is the most connected ROI across many scenes. This is supported by the observation in Hagmann et al. (2008) that the precuneus serves as a hub region connected to many other parts of the brain. In recent years, Lerner et al. (2011) and Ames et al. (2015) conducted experiments in which subjects were asked to listen to a story under an fMRI scanner. Their results suggest that the precuneus represents high-level concepts in the story, integrating feature information arriving from many different ROIs of the brain. Interestingly, we find that the precuneus has the highest number of rejected edges during the first half of the movie and that the number of rejected edges decreases significantly during the second half of the movie.
Our results correspond well to the findings of Lerner et al. (2011) and Ames et al. (2015), in which the precuneus is active when the subjects comprehend the story. However, it also raises an interesting scientific question for future study: is the precuneus active only while the subjects are trying to comprehend the story; that is, once the story is understood, does the precuneus become less active?

5 Theoretical Results

We establish uniform rates of convergence for the proposed estimators, and show that the testing procedure in Algorithm 1 is a uniformly valid test. We study the asymptotic regime in which n, d, and s are allowed to increase. In the context of the Sherlock Holmes data set, n is the total number of brain images obtained under the continuous stimulus, d is the number of brain regions, and s is the maximum number of connections for each brain region in the true stimulus-locked brain connectivity network. The current theoretical results assume that Z is a random variable with a continuous density. Our theoretical results can be easily generalized to the case when $\{Z_i\}_{i\in[n]}$ are fixed.

5.1 Theoretical Results on Parameter Estimation

Our proposed estimator involves a kernel function K(·): we require K(·) to be symmetric, bounded, unimodal, and compactly supported. More formally, for l = 1, 2, 3, 4,

$\int K(u)\,du = 1, \quad \int u\,K(u)\,du = 0, \quad \int u^{l} K(u)\,du < \infty, \quad \int K^{l}(u)\,du < \infty.$   (13)

In addition, we require the total variation of K(·) to be bounded, i.e., $\|K\|_{\mathrm{TV}} < \infty$, where $\|K\|_{\mathrm{TV}} = \int |\dot{K}|$. In other words, we require the kernel function to be a smooth function. The Epanechnikov kernel we consider in (5) for analyzing the Sherlock data satisfies (13). A unimodal kernel function is extremely plausible in our setting: for instance, to estimate the brain network in the "police press conference" scene, we expect the brain images within that scene to play a larger role than brain images that are far away from the scene. One practical limitation of the conditions on the kernel function is the symmetric kernel condition. When we are estimating a stimulus-locked brain network for a particular time point, the ideal case is to weight the previous images more heavily than the future brain images. The scientific reasoning is that there may be some time lag for information processing. In order to capture this effect, a more carefully designed kernel function is needed, which is beyond the scope of this paper.

Next, we impose regularity conditions on the marginal density $f_Z(\cdot)$.

Assumption 1. There exists a constant $\underline{f}_Z$ such that $\inf_{z\in[0,1]} f_Z(z) \ge \underline{f}_Z > 0$. Furthermore, $f_Z$ is twice continuously differentiable, and there exists a constant $\bar{f}_Z < \infty$ such that $\max\{\|f_Z\|_\infty, \|\dot{f}_Z\|_\infty, \|\ddot{f}_Z\|_\infty\} \le \bar{f}_Z$.

Next, we impose smoothness assumptions on the inter-subject covariance matrix Σ(·). Our theoretical results hold for any positive definite subject-specific covariance matrices $L_X(z)$ and $L_Y(z)$, since these matrices are treated as nuisance parameters.

Assumption 2. There exists a constant $M_\sigma$ such that

$\sup_{z\in[0,1]} \max_{j,k\in[d]} \max\bigl\{|\Sigma_{jk}(z)|, |\dot{\Sigma}_{jk}(z)|, |\ddot{\Sigma}_{jk}(z)|\bigr\} \le M_\sigma.$

In other words, we assume that the inter-subject covariance matrices are smooth and do not change too rapidly across neighboring time points.
This assumption clearly holds in a dynamic brain network setting, where we expect the brain network to change smoothly over time. Assumptions 1 and 2 on $f_Z(z)$ and Σ(z) are standard assumptions in the nonparametric statistics literature (see, for instance, Chapter 2 of Pagan & Ullah 1999). The following theorem establishes the uniform rate of convergence for $\hat{\Sigma}(z)$.

Theorem 1. Assume that h = o(1) and that $\log^2(d/h)/(nh) = o(1)$. Under Assumptions 1-2, we have

$\sup_{z\in[0,1]} \bigl\|\hat{\Sigma}(z) - \Sigma(z)\bigr\|_{\max} = O_P\Bigl\{h^2 + \sqrt{\frac{\log(d/h)}{nh}}\Bigr\}.$

Theorem 1 guarantees that our estimator always converges to the population parameter under the max norm, provided the smoothing parameter h goes to zero asymptotically. For instance, this is satisfied if $h = C \cdot n^{-1/5}$ for some constant C > 0. The quantity $\sup_{z\in[0,1]}\|\hat{\Sigma}(z) - \Sigma(z)\|_{\max}$ can be upper bounded by the sum of two terms, $\sup_{z\in[0,1]}\|E[\hat{\Sigma}(z)] - \Sigma(z)\|_{\max}$ and $\sup_{z\in[0,1]}\|\hat{\Sigma}(z) - E[\hat{\Sigma}(z)]\|_{\max}$, which are known as the bias and variance terms, respectively, in the kernel smoothing literature (see, for instance, Chapter 2 of Pagan & Ullah 1999). The term $h^2$ in the upper bound corresponds to the bias term and the term $\sqrt{\log(d/h)/(nh)}$ corresponds to the variance term.

Next, we establish theoretical results for $\hat{\Theta}(z)$. Recall that the stimulus-locked brain connectivity network is encoded by the support of the inverse covariance matrix Θ(z): $\Theta_{jk}(z) = 0$ if and only if the jth and kth brain regions are conditionally independent given all of the other brain regions. We consider the class of inverse covariance matrices:

$\mathcal{U}_{s,M} = \bigl\{\Theta \in \mathbb{R}^{d\times d} \mid \Theta \succ 0,\ \|\Theta\|_2 \le \rho,\ \max_{j\in[d]}\|\Theta_j\|_0 \le s,\ \max_{j\in[d]}\|\Theta_j\|_1 \le M\bigr\}.$   (14)

Here, $\|\Theta\|_2$ is the largest singular value of Θ and $\|\Theta_j\|_0$ is the number of non-zeros in $\Theta_j$. Brain connectivity networks are usually densely connected due to the intrinsic-neural and non-neuronal signals, such as background processing. Instead of assuming an overall sparse brain network, we assume that the stimulus-locked brain network Θ(z) is sparse, and allow the intrinsic brain network unrelated to the stimulus to be dense. The sparsity assumption on the stimulus-locked brain network is plausible in this setting since it characterizes brain activities that are specific to the stimulus. For instance, we may believe that only certain brain regions are active during cognitive processes. The other conditions are satisfied since Θ(z) is the inverse of a positive definite covariance matrix. Given Theorem 1, the following corollary establishes the uniform rates of convergence for $\hat{\Theta}(z)$ using the CLIME estimator as defined in (6). It follows directly from the proof of Theorem 6 in Cai et al. (2011).

Corollary 1. Assume that $\Theta(z) \in \mathcal{U}_{s,M}$ for all z ∈ [0, 1]. Let $\lambda \ge C\cdot\{h^2 + \sqrt{\log(d/h)/(nh)}\}$ for C > 0. Under the same conditions as in Theorem 1,

$\sup_{z\in[0,1]} \bigl\|\hat{\Theta}(z) - \Theta(z)\bigr\|_{\max} = O_P\Bigl\{h^2 + \sqrt{\log(d/h)/(nh)}\Bigr\};$   (15)

$\sup_{z\in[0,1]} \max_{j\in[d]} \bigl\|\hat{\Theta}_j(z) - \Theta_j(z)\bigr\|_1 = O_P\Bigl[s\cdot\Bigl\{h^2 + \sqrt{\log(d/h)/(nh)}\Bigr\}\Bigr];$   (16)

$\sup_{z\in[0,1]} \max_{j\in[d]} \bigl\|\{\hat{\Theta}_j(z)\}^{\top}\hat{\Sigma}(z) - e_j\bigr\|_\infty = O_P\Bigl\{h^2 + \sqrt{\log(d/h)/(nh)}\Bigr\}.$   (17)
In the real data analysis, Corollary 1 is helpful in terms of selecting the sparsity tuning parameter λ: it motivates a sparsity tuning parameter of the form $\lambda \ge C\cdot\{h^2 + \sqrt{\log(d/h)/(nh)}\}$ to guarantee a statistically consistent estimate of the stimulus-locked brain network. To select the constant C, we consider a sequence of numbers and select the appropriate C using the data-driven cross-validation procedure in (7).

5.2 Theoretical Results on Topological Inference

In this section, we first show that the distribution of the test statistic $T_E$ can be approximated by the conditional (1 − α)-quantile of the bootstrap statistic $T^B_E$. Next, we show that the proposed testing method in Algorithm 1 is valid in the sense that the type I error can be controlled at a pre-specified level α. Recall from (12) the definition of c(1 − α, E). The following theorem shows that the Gaussian multiplier bootstrap is valid for approximating the quantile of the test statistic $T_E$ in (10). Our results are based on the series of works on Gaussian approximation and the multiplier bootstrap in high dimensions (see, e.g., Chernozhukov et al. 2013, 2014b). We see from (10) that $T_E$ involves taking the supremum over z ∈ [0, 1] and a dynamic edge set E(z). Due to the dynamic edge set E(z), existing theoretical results for Gaussian multiplier bootstrap methods cannot be directly applied. We construct a novel Gaussian approximation result for the supremum of the empirical processes of $T_E$ by carefully characterizing the capacity of the dynamic edge set E(z).

Theorem 2. Assume that $\sqrt{nh^5} + s\cdot\sqrt{nh^9} = o(1)$. In addition, assume that $s\sqrt{\log^4(d/h)/(nh^2)} + \log^{22}(s)\cdot\log^8(d/h)/(nh) = o(1)$. Under the same conditions as in Corollary 1, we have

$\lim_{n\to\infty} \sup_{\Theta(\cdot)\in\mathcal{U}_{s,M}} P_{\Theta(\cdot)}\bigl\{T_E \ge c(1-\alpha, E)\bigr\} \le \alpha.$

Some of the scaling conditions are standard conditions in nonparametric estimation (Tsybakov 2009). The most notable scaling conditions are $s\sqrt{\log^4(d/h)/(nh^2)} = o(1)$ and $\log^{22}(s)\cdot\log^8(d/h)/(nh) = o(1)$: these conditions arise from the Gaussian approximation for the multiplier bootstrap (Chernozhukov et al. 2013). These scaling conditions hold asymptotically as long as the number of brain images n is much larger than the maximum degree s in the graph. This corresponds well with the real data analysis, in which we expect only certain ROIs to be active during information processing.

Recall the hypothesis testing problem in (8). We now show that the type I error of the proposed inferential method for testing the maximum degree of a time-varying graph can be controlled at a pre-specified level α.

Theorem 3. Assume that the same conditions as in Theorem 2 hold. Under the null hypothesis in (8), we have

$\lim_{n\to\infty} P_{\mathrm{null}}(\text{Algorithm 1 rejects the null hypothesis}) \le \alpha.$

To study the power of the proposed method, we define the signal strength of a precision matrix Θ as

$\mathrm{Sig}_{\deg}(\Theta) := \max_{E'\subseteq E(\Theta),\ \mathrm{Deg}(E') > k}\ \min_{e\in E'} |\Theta_e|,$   (18)

where Deg(E) is the maximum degree of the graph G = (V, E). Under the alternative hypothesis in (8), there exists a z0 ∈ [0, 1] such that the maximum degree of the graph is greater than k. We define the parameter space under the alternative:

$\mathcal{G}_1(\theta) = \bigl[\Theta(\cdot) \in \mathcal{U}_{s,M} \ \big|\ \mathrm{Sig}_{\deg}\{\Theta(z_0)\} \ge \theta \text{ for some } z_0 \in [0,1]\bigr].$   (19)

The following theorem presents the power analysis of Algorithm 1.
Theorem 4. Assume that the same conditions as in Theorem 2 hold and select the smoothing parameter such that $h = o(n^{-1/5})$. Under the alternative hypothesis in (8) and the assumption that $\theta \ge C\sqrt{\log(d/h)/(nh)}$, where C is a fixed large constant, we have

$\lim_{n\to\infty} \inf_{\Theta\in\mathcal{G}_1(\theta)} P_\Theta(\text{Algorithm 1 rejects the null hypothesis}) = 1,$   (20)

for any fixed α ∈ (0, 1).

The signal strength condition defined in (18) is weaker than the typical minimal signal strength condition, $\min_{e\in E(\Theta)} |\Theta_e|$, required for testing a single edge in a conditional independence graph. The condition in (18) requires only that there exists a subgraph whose maximum degree is larger than k and whose minimal signal strength is above a certain level. In our real data analysis, this requires only that the edges of brain regions that are highly connected to many other brain regions be strong, which is plausible since these regions should have high brain activity.

6 Discussion

We consider estimating stimulus-locked brain connectivity networks from data obtained under natural continuous stimuli. Due to the lack of highly controlled experiments that remove all spontaneous and individual variations, the measured brain signal consists of not only the stimulus-induced signal, but also intrinsic-neural and non-neuronal signals that are subject specific. The typical approach for estimating time-varying Gaussian graphical models fails to estimate the stimulus-locked brain connectivity network accurately due to the presence of subject-specific effects. By exploiting the experimental design aspect of the problem, we propose a simple approach to estimating the stimulus-locked brain connectivity network. In particular, rather than calculating the within-subject smoothed covariance matrix as in the typical approach for modeling time-varying Gaussian graphical models, we propose to construct the inter-subject smoothed covariance matrix instead, treating the subject-specific effects as nuisance parameters. To answer the scientific question of whether there are any brain regions that are connected to many other brain regions during the given stimulus, we propose an inferential method for testing the maximum degree of a stimulus-locked time-varying graph. In our analysis, we found that several interesting brain regions, such as the fusiform cortex, lingual gyrus, and precuneus, are highly connected. According to the neuroscience literature, these brain regions are mainly responsible for higher-order cognitive operations and face and body recognition, and serve as a control region that integrates information from other brain regions. We have also extended the proposed inferential framework to testing various topological graph structures. These are detailed in Appendix A.

A practical limitation of our proposed method is the Gaussian assumption on the data. While we focus on the time-varying Gaussian graphical model in this paper, our framework can be extended to other types of time-varying graphical models, such as the time-varying discrete graphical model or the time-varying nonparanormal graphical model (Kolar et al. 2010, Lu et al. 2015b). Another limitation is the independence assumption on the data across time points. All of our theoretical results can be generalized to the case when the data across time points are correlated, and we leave such a generalization for future work.
Acknowledgement

We thank Hasson's lab at the Princeton Neuroscience Institute for providing us with the fMRI data set collected under an audio-visual movie stimulus. We thank Janice Chen for very helpful conversations on preprocessing the fMRI data set and on interpreting the results of our analysis.

A Inference on Topological Structure of Time-Varying Graph

In this section, we generalize Algorithm 1 in the main manuscript to test various graph structures that satisfy a monotone graph property. In A.1, we briefly introduce some concepts from graph theory, including the notions of isomorphism, graph property, monotone graph property, and critical edge set. In A.2, we provide a test statistic and an estimate of its quantile using the Gaussian multiplier bootstrap. We then develop an algorithm to test the dynamic topological structure of a time-varying graph that satisfies a monotone graph property.

A.1 Graph Theory

Let $G = (V, E)$ be an undirected graph, where $V = \{1, \ldots, d\}$ is a set of nodes and $E \subseteq V \times V$ is a set of edges connecting pairs of nodes. Let $\mathcal{G}$ be the set of all graphs with the same number of nodes. For any two graphs $G = (V, E)$ and $G' = (V, E')$, we write $G \subseteq G'$ if $G$ is a subgraph of $G'$, that is, if $E \subseteq E'$. We begin by introducing some concepts from graph theory (see, for instance, Chapter 4 of Lovász 2012).

Definition 1. Two graphs $G = (V, E)$ and $G' = (V, E')$ are said to be isomorphic if there exists a permutation $\pi : V \to V$ such that $(j, k) \in E$ if and only if $\{\pi(j), \pi(k)\} \in E'$.

The notion of isomorphism is used in the graph theory literature to quantify whether two graphs have the same topological structure, up to a permutation of the vertices (see Chapter 1.2 of Bondy & Murty 1976). We provide two concrete examples of the notion of isomorphism in Figure 6.

Figure 6: Graphs (i) and (ii) are isomorphic. Graphs (iii) and (iv) are not isomorphic.

Next, we introduce the notion of a graph property. A graph property is a property of graphs that depends only on the structure of the graphs; that is, a graph property is invariant under permutation of the vertices. A formal definition is given as follows.

Definition 2. For two graphs $G$ and $G'$ that are isomorphic, a graph property is a function $P : \mathcal{G} \to \{0, 1\}$ such that $P(G) = P(G')$. A graph $G$ satisfies the graph property $P$ if $P(G) = 1$.

Some examples of graph properties are: the graph is connected; the graph has no more than $k$ connected components; the maximum degree of the graph is larger than $k$; the graph has no more than $k$ isolated nodes; the graph contains a clique of size larger than $k$; and the graph contains a triangle. For instance, the two graphs in Figures 6(i) and 6(ii) are isomorphic and satisfy the graph property of being connected.

Definition 3. For two graphs $G \subseteq G'$, a graph property $P$ is monotone if $P(G) = 1$ implies that $P(G') = 1$.

In other words, a graph property is monotone if it is preserved under the addition of new edges. Many graph properties of interest, such as those given in the paragraph immediately after Definition 2, are monotone. In Figure 7, we present several examples of monotone graph properties by showing that adding additional edges to a graph that satisfies the property preserves the property.
For instance, we see from Figure 7(a) that the existing graph with gray edges are connected. Adding the red edges to the existing graph, the graph remains connected and therefore the graph property is monotone. Another example is the graph with maximum degree at least three as in Figure 7(c). We see that adding the red dash edges to the graph preserves the graph property of having maximum degree at least three. (a) (b) (c) (d) Figure 7: Some examples on graph property that are monotone. The gray edges are the original edges and the red dash edges are additional edges added to the existing graph. (a) Graph that is connected. (b) Graph that has no more than three connected components. (c) Graph with maximum degree at least three. (d) Graph with no more than two isolated nodes. Adding the red dash edges to the existing graphs does not change the graph property. For a given graph G = (V, E), we de\ufb01ne the class of edge sets satisfying the graph property P as P = {E \u2286V \u00d7 V | P(G) = 1}. (21) Finally, we introduce the notion of critical edge set in the following de\ufb01nition. De\ufb01nition 4. Given any edge set E \u2286V \u00d7 V , we de\ufb01ne the critical edge set of E for a given monotone graph property P as C(E, P) = {e | e \u0338\u2208E, there exists E\u2032 \u2287E such that E\u2032 \u2208P and E\u2032\\{e} / \u2208P}. (22) 20 For a given monotone graph property P, the critical edge set is the set of edges that will change the graph property of the graph once added to the existing graph. We provide two examples in Figure 8. Suppose that P is the graph property of being connected. In Figure 8(a), we see that the graph is not connected, and thus P(G) = 0. Adding any of the red dash edges in Figure 8(b) changes P(G) = 0 to P(G) = 1. (a) (b) (c) (d) Figure 8: Let P be the graph property of being connected. Gray edges are the original edges of a graph G and the red dash edges are the critical edges that will change the graph property from P(G) = 0 to P(G) = 1. (a) The graph satis\ufb01es P(G) = 0. (b) The graph property changes from P(G) = 0 to P(G) = 1 if some red dash edges are added to the graph. (c) The graph satis\ufb01es P(G) = 0. (d) The graph property changes from P(G) = 0 to P(G) = 1 if some red dash edges are added to the existing graph. A.2 An Algorithm for Topological Inference Throughout the rest of the paper, we denote G(z) = {V, E(z)} as the graph at Z = z. We consider hypothesis testing problem of the form H0 : P{G(z)} = 0 for all z \u2208[0, 1] H1 : there exists a z0 \u2208[0, 1] such that P{G(z0)} = 1, (23) where G(\u00b7) is the true underlying graph and P is a given monotone graph property as de\ufb01ned in De\ufb01nition 3. We provide two concrete examples of the hypothesis testing problem in (23). Example 1. Number of connected components: H0 : for all z \u2208[0, 1], the number of connected components is greater than k, H1 : there exists a z0 \u2208[0, 1] such that the number of connected components is not greater than k. Example 2. Maximum degree of the graph: H0 : for all z \u2208[0, 1], the maximum degree of the graph is not greater than k, H1 : there exists a z0 \u2208[0, 1] such that the maximum degree of the graph is greater than k. 21 We now propose an algorithm to test the topological structure of a time-varying graph. The proposed algorithm is very general and is able to test the hypothesis problem of the form in (23). 
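As a concrete illustration of Definitions 2-4 and Examples 1-2 before turning to the algorithm, the sketch below evaluates two monotone graph properties (maximum degree greater than k, and at most k connected components) and computes the critical edge set for the connectivity property. For connectivity, one can check from Definition 4 that the critical edges are exactly the non-edges whose endpoints lie in distinct connected components of the current graph, which is consistent with the examples in Figure 8. The function names and the edge-list representation are illustrative choices, not part of the paper.

import itertools

def component_labels(V, E):
    # union-find giving a component representative for every vertex
    parent = {v: v for v in V}
    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x
    for u, v in E:
        parent[find(u)] = find(v)
    return {v: find(v) for v in V}

def P_max_degree(V, E, k):
    # monotone property of Example 2: maximum degree of G is greater than k
    deg = {v: 0 for v in V}
    for u, v in E:
        deg[u] += 1
        deg[v] += 1
    return max(deg.values(), default=0) > k

def P_num_components(V, E, k):
    # monotone property of Example 1: at most k connected components
    return len(set(component_labels(V, E).values())) <= k

def critical_edges_connected(V, E):
    # critical edge set (Definition 4) for the property "G is connected":
    # the non-edges whose endpoints lie in distinct components of (V, E)
    lab = component_labels(V, E)
    existing = {frozenset(e) for e in E}
    return {(u, v) for u, v in itertools.combinations(sorted(V), 2)
            if frozenset((u, v)) not in existing and lab[u] != lab[v]}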
Our proposed algorithm is motivated by the step-down algorithm in Romano & Wolf (2005) for testing multiple hypothesis simultaneously. The main crux of our algorithm is as follows. By De\ufb01nition 4, the critical edge set C{Et\u22121(z), P} contains edges that may change the graph property from P{G(z)} = 0 to P{G(z)} = 1. Thus, at the t-th iteration of the proposed algorithm, it su\ufb03ces to test whether the edges on the critical edge set C{Et\u22121(z), P} are rejected. Let Et(z) = Et\u22121(z) \u222aR(z), where R(z) is the rejected edge set from the critical edge set C{Et\u22121(z), P}. Since P is a monotone graph property, if there exists a z0 \u2208[0, 1] such that Et(z0) \u2208P, we directly reject the null hypothesis H0 : P{G(z)} = 0 for all z. This is due to the de\ufb01nition of monotone graph property that adding more edges does not change the graph property. If Et(z0) / \u2208P, we repeat this process until the null hypothesis is rejected or no more edges in the critical edge set are rejected. We summarize the procedure in Algorithm 2. Algorithm 2 Dynamic skip-down method. Input: A monotone graph property P; b \u0398de(z) for z \u2208[0, 1]. Initialize: t = 1; E0(z) = \u2205for z \u2208[0, 1]. Repeat: 1. Compute the critical edge set C{Et\u22121(z), P} for z \u2208[0, 1] and the conditional quantile c{1 \u2212\u03b1, C(Et\u22121, P)} = inf \u0010 t \u2208R | P[T B C(Et\u22121,P) \u2264t | {(Xi, Yi, Zi)}i\u2208[n]] \u22651 \u2212\u03b1 \u0011 , where T B C(Et\u22121,P) is the bootstrap statistic de\ufb01ned in (11) with the maximum taken over the edge set C{Et\u22121(z), P}. 2. Construct the rejected edge set R(z) = \uf8ee \uf8f0e \u2208C{Et\u22121(z), P} | \u221a nh \u00b7 | b \u0398de e (z)| \u00b7 X i\u2208[n] Kh(Zi \u2212z)/n > c{1 \u2212\u03b1, C(Et\u22121, P)} \uf8f9 \uf8fb. 3. Update the rejected edge set Et(z) \u2190Et\u22121(z) \u222aR(z) for z \u2208[0, 1]. 4. t \u2190t + 1. Until: There exists a z0 \u2208[0, 1] such that Et(z0) \u2208P, or Et(z) = Et\u22121(z) for z \u2208[0, 1]. Output: \u03c8\u03b1 = 1 if there exists a z0 \u2208[0, 1] such that Et(z0) \u2208P and \u03c8\u03b1 = 0 otherwise. Finally, we generalize the theoretical results in Theorems 3 and 4 to the general testing procedure in Algorithm 2. Given a monotone graph property P, let G0 = (\u0398(\u00b7) \u2208Us,M | P[G{\u0398(z)}] = 0 for all z \u2208[0, 1]). We now show that the type I error of the proposed inferential method in Algorithm 2 can 22 be controlled at a pre-speci\ufb01ed level \u03b1. Theorem 5. Under the same conditions in Theorem 2, we have lim n\u2192\u221esup \u0398(\u00b7)\u2208G0 P\u0398(\u00b7) (\u03c8\u03b1 = 1) \u2264\u03b1. In order to study the power analysis for testing graph structure that satis\ufb01es the monotone graph property, we de\ufb01ne signal strength of a precision matrix \u0398 as Sig(\u0398) := max E\u2032\u2286E(\u0398),P(E\u2032)=1 min e\u2208E\u2032 |\u0398e|. (24) Under H1 : there exists a z0 \u2208[0, 1] such that P{G(z0)} = 1, we de\ufb01ne the parameter space G1(\u03b8; P) = \u0010 \u0398(\u00b7) \u2208Us,M \f \f \f P[G{\u0398(z0)}] = 1 and Sig{\u0398(z0)} \u2265\u03b8 for some z0 \u2208[0, 1] \u0011 . (25) Again, we emphasize that the signal strength de\ufb01ned in (24) is weaker than the typical minimal signal strength for testing a single edge in a graph mine\u2208E(\u0398) |\u0398e|. Sig(\u0398) only requires that there exists a subgraph satisfying the property of interest such that the minimal signal strength on that subgraph is above certain level. 
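Complementing the description of Algorithm 2 above, the following sketch mirrors the dynamic skip-down iteration over a grid of stimulus times, with callables standing in for the critical edge set, the bootstrap quantile, and the edge-level statistic. All names (critical_edge_set, bootstrap_quantile, edge_statistic) and the grid discretization of z in [0, 1] are illustrative placeholders for quantities defined in the manuscript, so this is a schematic rendering under those assumptions rather than the authors' code.

def skip_down_test(z_grid, property_P, critical_edge_set, bootstrap_quantile,
                   edge_statistic, V):
    """Schematic version of Algorithm 2 (dynamic skip-down method).

    property_P(V, E) -> bool            : monotone graph property P
    critical_edge_set(E, z) -> set      : C{E, P} at stimulus time z
    bootstrap_quantile(C_by_z) -> float : c(1 - alpha, C) from the multiplier bootstrap
    edge_statistic(e, z) -> float       : sqrt(nh) * |Theta_hat^de_e(z)| * mean_i K_h(Z_i - z)
    Returns 1 if the null hypothesis is rejected and 0 otherwise.
    """
    E = {z: set() for z in z_grid}              # E_0(z) is the empty edge set
    while True:
        C = {z: critical_edge_set(E[z], z) for z in z_grid}
        c_alpha = bootstrap_quantile(C)         # conditional (1 - alpha)-quantile
        rejected_any = False
        for z in z_grid:
            R = {e for e in C[z] if edge_statistic(e, z) > c_alpha}
            if R:
                rejected_any = True
            E[z] |= R                           # E_t(z) = E_{t-1}(z) union R(z)
        if any(property_P(V, E[z]) for z in z_grid):
            return 1                            # some E_t(z0) satisfies P: reject H0
        if not rejected_any:
            return 0                            # no new rejections: stop and retain H0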
For example, for P(G) = 1 if and only if G is connected, it su\ufb03ces for \u0398 belongs to G1(\u03b8; P) if the minimal signal strength on a spanning tree is larger than \u03b8. The following theorem presents the power analysis of our test. Theorem 6. Assume that the same conditions in Theorem 2 hold and select the smoothing parameter h = o(1/n\u22121/5). Assume that \u03b8 \u2265C p log(dn)/n2/5 for some su\ufb03ciently large constant C. Under the alternative hypothesis H1 : P(G) = 1 in (23), we have lim n\u2192\u221e inf \u0398\u2208G1(\u03b8;P) P\u0398(\u03c8\u03b1 = 1) = 1 (26) for any \ufb01xed \u03b1 \u2208(0, 1). Thus, we have shown in Theorem 6 that the power of the proposed inferential method increases to one asymptotically. B A U-Statistic Type Estimator The main manuscript primarily concerns the case when there are two subjects. In this section, we present a U-statistic type inter-subject covariance to accommodate the case when there are more than two subjects. First, we note that the same natural stimuli is given to all subjects. This motivates the following statistical model for each Z = z: X(\u2113) = S + E(\u2113), S|Z = z \u223cNd{0, \u03a3(z)}, E(\u2113)|Z = z \u223cNd{0, L(\u2113)(z)}, 23 where X(\u2113), E(\u2113), and L(\u2113)(z) are the data, subject speci\ufb01c e\ufb00ect, and the covariance matrix for the subject speci\ufb01c e\ufb00ect for the \u2113th subject, respectively. Suppose that there N subjects. Then, the following U-statistic type inter-subject covariance matrix can be constructed to estimate \u03a3(z): b \u03a3U(z) = 1 \u0000N 2 \u0001 X 1\u2264\u2113<\u2113\u2032\u2264N \"P i\u2208[n] Kh(Zi \u2212z)X(\u2113) i {X(\u2113\u2032) i }T P i\u2208[n] Kh(Zi \u2212z) # . (27) We leave the theoretical analysis of the above estimator for future work. C Preliminaries In this section, we de\ufb01ne some notation that will be used throughout the Appendix. Let [n] denote the set {1, . . . , n} and let [d] denote the set {1, . . . , d}. For two scalars a, b, we de\ufb01ne a \u2228b = max(a, b). We denote the \u2113q-norm for the vector v as \u2225v\u2225q = (P j\u2208[d] |vj|q)1/q for 1 \u2264q < \u221e. In addition, we let supp(v) = {j : vj \u0338= 0}, \u2225v\u22250 = |supp(v)|, and \u2225v\u2225\u221e= maxj\u2208[d] |vj|, where |supp(v)| is the number of non-zero elements in v. For a matrix A \u2208Rn1\u00d7n2, we denote the jth column as Aj. We denote the Frobenius norm of A by \u2225A\u22252 F = P i\u2208[n1] P j\u2208[n2] A2 ij, the max norm \u2225A\u2225max = maxi\u2208[n1],j\u2208[n2] |Aij|, and the operator norm \u2225A\u22252 = sup\u2225v\u22252=1 \u2225Av\u22252. Given a function f, let \u02d9 f and \u00a8 f be the \ufb01rst and secondorder derivatives, respectively. For 1 \u2264p < \u221e, let \u2225f\u2225p = ( R f p)1/p denote the Lp norm of f and let \u2225f\u2225\u221e= supx |f(x)|. The total variation of f is de\ufb01ned as \u2225f\u2225TV = R | \u02d9 f|. We use the Landau symbol an = O(bn) to indicate the existence of a constant C > 0 such that an \u2264C \u00b7 bn for two sequences an and bn. We write an = o(bn) if limn\u2192\u221ean/bn \u21920. Let C, C1, C2, . . . be generic constants whose values may vary from line to line. Let Pn(f) = 1 n X i\u2208[n] f(Xi) and Gn(f) = \u221an \u00b7 [Pn(f) \u2212E{f(Xi)}]. 
(28) For notational convenience, for \ufb01xed j, k \u2208[d], let gz,jk(Zi, Xij, Yik) = Kh(Zi \u2212z)XijYik, wz(Zi) = Kh(Zi \u2212z), (29) qz,jk(Zi, Xij, Yik) = gz,jk(Zi, Xij, Yik) \u2212E{gz,jk(Z, Xj, Yk)}, (30) and let kz(Zi) = wz(Zi) \u2212E{wz(Z)}. (31) Recall from 5 that K(\u00b7) can be any symmetric kernel function that satis\ufb01es (13) and that Kh(Zi \u2212z) = K{(Zi \u2212z)/h}/h. By the de\ufb01nition of b \u03a3(z) in (27), we have b \u03a3jk(z) = P i\u2208[n] gz,jk(Zi, Xij, Yik) P i\u2208[n] wz(Zi) = Pn(gz,jk) Pn(wz) . (32) 24 In addition, let J(1) z,jk(Zi, Xi, Yi) = \u221a h\u00b7{\u0398j(z)}T \u00b7 \u0002 Kh(Zi \u2212z)XiY T i \u2212E \b Kh(Z \u2212z)XY T\t\u0003 \u00b7\u0398k(z), (33) J(2) z,jk(Zi) = \u221a h \u00b7 {\u0398j(z)}T \u00b7 [Kh(Zi \u2212z) \u2212E {Kh(Z \u2212z)}] \u00b7 \u03a3(z) \u00b7 \u0398k(z), (34) Jz,jk(Zi, Xi, Yi) = J(1) z,jk(Zi, Xi, Yi) \u2212J(2) z,jk(Zi), (35) and let Wz,jk(Zi, Xij, Yik) = \u221a h \u00b7 {Kh(Zi \u2212z)XijYik \u2212Kh(Zi \u2212z)\u03a3jk(z)} . (36) For two functions f and g, we de\ufb01ne its convolution as (f \u2217g)(x) = Z f(x \u2212z)g(z)dz. (37) In our proofs, we will use the following property of the derivative of a convolution \u2202 \u2202x(f \u2217g) = \u2202f \u2202x \u2217g. (38) Finally, our proofs use the following inequality Z b1 0 p log(b2/\u03f5)d\u03f5 \u2264 p b1 \u00b7 sZ b1 0 log(b2/\u03f5)d\u03f5 = b1 \u00b7 p 1 + log(b2/b1), (39) where the \ufb01rst inequality holds by an application of Jensen\u2019s inequality. D Proof of Results in 5.1 In this section, we establish the uniform rate of convergence for b \u03a3(z) and b \u0398(z) over z \u2208[0, 1]. To prove Theorem 1, we \ufb01rst observe that sup z\u2208[0,1] \r \r \rb \u03a3(z) \u2212\u03a3(z) \r \r \r max \u2264sup z\u2208[0,1] max j,k\u2208[d] \f \f \fb \u03a3jk(z) \u2212E n b \u03a3jk(z) o\f \f \f+ sup z\u2208[0,1] max j,k\u2208[d] \f \f \fE n b \u03a3jk(z) o \u2212\u03a3jk(z) \f \f \f . (40) The \ufb01rst term is known as the variance term and the second term is known as the bias term in the kernel smoothing literature (see, for instance, Chapter 2 of Pagan & Ullah 1999). Both the variance and bias terms involve evaluating the quantity E{b \u03a3jk(z)}. From (32), we see that b \u03a3jk(z) involves the quotient of two averages and it is not straightforward to evaluate its expectation. The following lemma quanti\ufb01es E{b \u03a3jk(z)} in terms of the expectations of its numerator and its denominator. 25 Lemma 1. Under the following conditions \f \f \f \f Gn(wz) \u221an \u00b7 E {Pn(wz)} \f \f \f \f < 1 and E {Pn(wz)} \u0338= 0, (41) we have E n b \u03a3jk(z) o = E {Pn(gz,jk)} E {Pn(wz)} + 1 nO h E n Gn(wz) \u00b7 Gn(gz,jk) o + E \b G2 n(gz,jk) \ti . (42) We note that (42) only holds under the two conditions in (41). In the proof of Theorem 1, we will show that the two conditions in (41) hold for n su\ufb03ciently large. To obtain upper bounds for the bias and variance terms in (40), we use the following intermediate lemmas. Lemma 2. Assume that h = o(1). Under Assumptions 1-2, we have sup z\u2208[0,1] max j,k\u2208[d] \f \f \fE{Pn(gz,jk)} \u2212fZ(z)\u03a3jk(z) \f \f \f = O(h2), (43) sup z\u2208[0,1] \f \f \fE{Pn(wz)} \u2212fZ(z) \f \f \f = O(h2), (44) sup z\u2208[0,1] max j,k\u2208[d] 1 n \f \f \fE n Gn(gz,jk) \u00b7 Gn(wz) o\f \f \f = O \u0012 1 nh \u0013 , (45) and sup z\u2208[0,1] 1 nE n G2 n(wz) o = O \u0012 1 nh \u0013 . (46) Lemma 3. Assume that h = o(1) and log2(d/h)/(nh) = o(1). 
Under Assumptions 1-2, there exists a universal constant C > 0 such that sup z\u2208[0,1] max j,k\u2208[d] \f \f \fGn(wz) \u2228Gn(gz,jk) \f \f \f \u2264C \u00b7 r log(d/h) h , (47) with probability at least 1 \u22123/d. The proofs of Lemmas 1-3 are deferred to Sections 1-3, respectively. We now provide a proof of Theorem 1. D.1 Proof of Theorem 1 Recall from (40) that sup z\u2208[0,1] \r \r \rb \u03a3(z) \u2212\u03a3(z) \r \r \r max \u2264sup z\u2208[0,1] max j,k\u2208[d] \f \f \fb \u03a3jk(z) \u2212E n b \u03a3jk(z) o\f \f \f + sup z\u2208[0,1] max j,k\u2208[d] \f \f \fE n b \u03a3jk(z) o \u2212\u03a3jk(z) \f \f \f = I1 + I2. 26 It su\ufb03ces to obtain upper bounds for I1 and I2. We \ufb01rst verify that the two conditions in (41) hold. By Lemma 2, we have \f \f \fE{Pn(wz)} \f \f \f = O(h2) + fZ(z) \u2265f Z(z) > 0, where the last inequality follows from Assumption 1. Moreover, \f \f \f \f Gn(wz) \u221an \u00b7 E {Pn(wz)} \f \f \f \f \u2264C \u00b7 1 \u221an|Gn(wz)| \u00b7 1 fZ(z) + O(h2) \u2264C1 \u00b7 r log(d/h) nh \u00b7 1 fZ(z) + O(h2) < 1, for su\ufb03ciently large n, where the \ufb01rst inequality is obtained by an application of Lemma 2, the second inequality is obtained by an application of Lemma 3, and the last inequality is obtained by the scaling assumptions h = o(1) and log(d/h)/(nh) = o(1). Upper bound for I1: By (55) in the proof of Lemma 1, we have b \u03a3jk(z) = Gn(gz,jk) \u221anE {Pn(wz)}+E{Pn(gz,jk)} E {Pn(wz)} \u2212Gn(wz)E{Pn(gz,jk)} \u221anE2{Pn(wz)} +1 nO hn Gn(wz)Gn(gj,zk) o + G2 n(gz,jk) i . Thus, by Lemma 1, we have I1 = sup z\u2208[0,1] max j,k\u2208[d] \f \f \f \f \f \f \f \f \f Gn(gz,jk) \u221an \u00b7 E {Pn(wz)} | {z } I11 \u2212Gn(wz) \u00b7 E{Pn(gz,jk)} \u221an \u00b7 E2{Pn(wz)} | {z } I12 +I13 \f \f \f \f \f \f \f \f \f \u2264sup z\u2208[0,1] max j,k\u2208[d] {|I11| + |I12| + |I13|} , (48) where I13 = O[{Gn(wz)Gn(gj,zk)} + G2 n(gz,jk) + E{Gn(wz) \u00b7 Gn(gj,zk)} + E{G2 n(gz,jk)}]/n. We now provide upper bounds for I11, I12, and I13. By an application of Lemmas 2 and 3, we obtain sup z\u2208[0,1] max j,k\u2208[d] |I11| \u2264n\u22121/2 \u00b7 sup z\u2208[0,1] max j,k\u2208[d] \f \f \f \f Gn(gz,jk) fZ(z) + O(h2) \f \f \f \f \u2264C \u00b7 r log(d/h) nh . (49) Similarly, we have sup z\u2208[0,1] max j,k\u2208[d] |I12| \u2264n\u22121/2 \u00b7 sup z\u2208[0,1] max j,k\u2208[d] \f \f \f \f Gn(gz,jk){fZ(z)\u03a3jk(z) + O(h2)} {fZ(z) + O(h2)}2 \f \f \f \f \u2264C \u00b7 r log(d/h) nh . (50) 27 For I13, we have sup z\u2208[0,1] max j,k\u2208[d] |I13| \u2264sup z\u2208[0,1] max j,k\u2208[d] \f \f \f \f 1 nO hn Gn(wz) \u00b7 Gn(gz,jk) o + G2 n(gz,jk) i\f \f \f \f + O \u0012 1 nh \u0013 \u2264C \u00b7 log(d/h) nh + O \u0012 1 nh \u0013 \u2264C \u00b7 log(d/h) nh , (51) where the \ufb01rst and second inequalities follow from Lemmas 2 and 3, respectively. Combining (49), (50), and (51), we have I1 \u2264C \u00b7 r log(d/h) nh , (52) with probability at least 1 \u22123/d. 
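As an informal numerical aside, the sup-norm rate established by the bias and variance bounds above can be checked in a small simulation. The sketch below generates a two-subject toy model X_i = S_i + E_i^(1), Y_i = S_i + E_i^(2) with a smoothly varying Sigma(z), forms the inter-subject estimator of (32) on a grid, and prints the observed sup-norm error next to the rate h^2 + sqrt(log(d/h)/(nh)). The specific choice of Sigma(z), kernel, and constants is an illustrative assumption; this is a sanity-check sketch, not part of the theory.

import numpy as np

rng = np.random.default_rng(0)
n, d, h = 2000, 10, 0.15
Z = rng.uniform(0.0, 1.0, size=n)

def sigma_of_z(z):
    # toy smoothly varying covariance: AR(1) with z-dependent correlation
    rho = 0.3 + 0.4 * np.sin(np.pi * z)
    idx = np.arange(d)
    return rho ** np.abs(idx[:, None] - idx[None, :])

# two subjects sharing the stimulus-locked signal S, plus independent noise
S = np.stack([rng.multivariate_normal(np.zeros(d), sigma_of_z(z)) for z in Z])
X = S + rng.normal(scale=0.5, size=(n, d))
Y = S + rng.normal(scale=0.5, size=(n, d))

def kernel(u):
    return 0.75 * np.clip(1.0 - u ** 2, 0.0, None)   # Epanechnikov

def sigma_hat(z):
    w = kernel((Z - z) / h) / h
    return (X * w[:, None]).T @ Y / w.sum()           # estimator in (32)

grid = np.linspace(0.05, 0.95, 19)                    # interior grid to avoid boundary bias
err = max(np.abs(sigma_hat(z) - sigma_of_z(z)).max() for z in grid)
rate = h ** 2 + np.sqrt(np.log(d / h) / (n * h))
print(f"sup-norm error {err:.3f} vs. h^2 + sqrt(log(d/h)/(nh)) = {rate:.3f}")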
Upper bound for I2: By Lemmas 1 and 2, we have I2 = sup z\u2208[0,1] max j,k\u2208[d] \f \f \f \f E {Pn(gz,jk)} E {Pn(wz)} \u2212\u03a3jk(z) + 1 nO h E n Gn(wz) \u00b7 Gn(gz,jk) o + E \b G2 n(gz,jk) \ti\f \f \f \f \u2264sup z\u2208[0,1] max j,k\u2208[d] \f \f \f \f fZ(z)\u03a3jk(z) + O(h2) fZ(z) + O(h2) \u2212\u03a3jk(z) + 1 nO h E n Gn(wz) \u00b7 Gn(gz,jk) o + E \b G2 n(gz,jk) \ti\f \f \f \f = sup z\u2208[0,1] max j,k\u2208[d] \f \f \f \f fZ(z)\u03a3jk(z) + O(h2) fZ(z) + O(h2) \u2212\u03a3jk(z) + O \u0012 1 nh \u0013\f \f \f \f = sup z\u2208[0,1] max j,k\u2208[d] \f \f \f \f O(h2)\u03a3jk(z) fZ(z) + O(h2) + O \u0012 1 nh \u0013\f \f \f \f \u2264C \u00b7 \u0012 h2 + 1 nh \u0013 , (53) where the \ufb01rst inequality follows from (43) and (44), the second equality follows from (45) and (46), and the last inequality follows from the assumption that h = o(1). Combining the upper bounds (52) and (53), we obtain sup z\u2208[0,1] \r \r \rb \u03a3(z) \u2212\u03a3(z) \r \r \r max \u2264C \u00b7 ( h2 + r log(d/h) nh ) with probability at least 1 \u22123/d. 28 E Proof of Technical Lemmas in Appendix D In this section, we provide the proofs of Lemmas 1-3. E.1 Proof of Lemma 1 The proof of the lemma uses the following fact (1 + x)\u22121 = 1 \u2212x + O(x2) for any |x| < 1. (54) From (32), we have b \u03a3jk(z) = Pn(gz,jk) Pn(wz) = Pn(gz,jk) \u2212E {Pn(gz,jk)} + E {Pn(gz,jk)} E {Pn(wz)} \u00b7 \u0014E {Pn(wz)} Pn(wz) \u0015 = n\u22121/2 \u00b7 Gn(gz,jk) + E {Pn(gz,jk)} E {Pn(wz)} \u00b7 \u0014 1 + Pn(wz) \u2212E [Pn(wz)] E [Pn(wz)] \u0015\u22121 = n\u22121/2 \u00b7 Gn(gz,jk) + E [Pn(gz,jk)] E {Pn(wz)} \u00b7 \u0014 1 + Gn(wz) \u221an \u00b7 E {Pn(wz)} \u0015\u22121 . Under the conditions (41) and by applying (54), we have b \u03a3jk(z) = n\u22121/2 \u00b7 Gn(gz,jk) + E {Pn(gz,jk)} E {Pn(wz)} \u00b7 \u0012 1 \u2212 Gn(wz) \u221an \u00b7 E {Pn(wz)} + O \u0014 G2 n(wz) n \u00b7 E2 {Pn(wz)} \u0015\u0013 = Gn(gz,jk) \u221anE {Pn(wz)} + E{Pn(gz,jk)} E {Pn(wz)} \u2212Gn(wz)E{Pn(gz,jk)} \u221anE2{Pn(wz)} + 1 nO hn Gn(wz)Gn(gz,jk) o + G2 n(gz,jk) i . (55) Note that E{Gn(f)} = 0 by the de\ufb01nition of Gn(f) in (28). Taking expectation on both sides of (55), we obtain E n b \u03a3jk(z) o = E {Pn(gz,jk)} E {Pn(wz)} + 1 nO h E n Gn(wz) \u00b7 Gn(gz,jk) o + E \b G2 n(gz,jk) \ti , as desired. E.2 Proof of Lemma 2 To prove Lemma 2, we write the expectation as an integral and apply Taylor expansion to the density function and the covariance function. We will show that the higher-order terms of the Taylor expansion can be bounded by O(h2). We start by proving (43). 29 Proof of (43): Recall from (29) the de\ufb01nition of gz,jk(Zi, Xij, Yik) = Kh(Zi \u2212z)XijYik. Thus, we have E{Pn(gz,jk)} = E \u001a1 hK \u0012Z \u2212z h \u0013 XjYk \u001b = E \u001a1 hK \u0012Z \u2212z h \u0013 E(XjYk | Z) \u001b = E \u001a1 hK \u0012Z \u2212z h \u0013 E(SjSk | Z) \u001b = E \u001a1 hK \u0012Z \u2212z h \u0013 \u03a3jk(Z) \u001b = Z 1 hK \u0012Z \u2212z h \u0013 \u03a3jk(Z)fZ(Z)dZ = Z K(u)\u03a3jk(uh + z)fZ(uh + z)du, (56) where the third equality hold using the fact that the subject-speci\ufb01c e\ufb00ects are independent between two subjects, and the last equality holds by a change of variable, u = (Z \u2212z)/h. 
Applying Taylor expansions to \u03a3jk(uh + z) and fZ(uh + z), we have \u03a3jk(u + zh) = \u03a3jk(z) + uh \u00b7 \u02d9 \u03a3jk(z) + u2h2 \u00b7 \u00a8 \u03a3jk(z\u2032) (57) and fZ(u + zh) = fZ(z) + uh \u00b7 \u02d9 fZ(z) + u2h2 \u00b7 \u00a8 fZ(z\u2032\u2032), (58) where z\u2032 and z\u2032\u2032 are between z and uh+z. Substituting (57) and (58) into the last expression of (56), we have Z K(u) n \u03a3jk(z) + uh \u00b7 \u02d9 \u03a3jk(z) + u2h2 \u00b7 \u00a8 \u03a3jk(z\u2032) o \u00b7 n fZ(z) + uh \u00b7 \u02d9 fZ(z) + u2h2 \u00b7 \u00a8 fZ(z\u2032\u2032) o du. (59) By (13), we have R uK(u)du = 0 and R ulK(u)du < \u221efor l = 1, 2, 3, 4. By Assumptions 1 and 2, we have h2 Z u2K(u) \u00a8 \u03a3jk(z\u2032)fZ(z)du \u2264h2CM\u03c3 \u00af fZ = O(h2), h2 Z u2K(u) \u02d9 \u03a3jk(z) \u02d9 fZ(z)du \u2264h2CM\u03c3 \u00af fZ = O(h2), h2 Z u2K(u)\u03a3jk(z) \u00a8 fZ(z\u2032\u2032)du \u2264h2CM\u03c3 \u00af fZ = O(h2). (60) 30 Substituting (60) into (59) and bounding the other higher-order terms by O(h2), we obtain E{Pn(gz,jk)} = \u03a3jk(z)fZ(z) + O(h2), for all z \u2208[0, 1] and j, k \u2208[d]. This implies that sup z\u2208[0,1] max j,k\u2208[d] |E{Pn(gz,jk)} \u2212\u03a3jk(z)fZ(z)| = O(h2). The proof of (44) follows from the same set of argument. Proof of (45): Recall from (29) the de\ufb01nition of wz(Zi) = Kh(Zi \u2212z). Thus, we have 1 nE n Gn(gz,jk) \u00b7 Gn(wz) o = E n Pn(gz,jk) \u00b7 Pn(wz) o \u2212E{Pn(gz,jk)} \u00b7 E{Pn(wz)} = E \uf8ee \uf8f0 \uf8f1 \uf8f2 \uf8f3 1 n X i\u2208[n] Kh(Zi \u2212z)XijYik \uf8fc \uf8fd \uf8fe\u00b7 \uf8f1 \uf8f2 \uf8f3 1 n X i\u2208[n] Kh(Zi \u2212z) \uf8fc \uf8fd \uf8fe \uf8f9 \uf8fb\u2212E{Pn(gz,jk)} \u00b7 E{Pn(wz)} = 1 nE \b K2 h(Z \u2212z)SjSk \t + 1 n2E \uf8f1 \uf8f2 \uf8f3 X i\u2208[n] X i\u2032\u0338=i Kh(Zi \u2212z)Kh(Zi\u2032 \u2212z)XijYik \uf8fc \uf8fd \uf8fe\u2212E{Pn(gz,jk)} \u00b7 E{Pn(wz)} = 1 nE \b K2 h(Z \u2212z)\u03a3jk(Z) \t + n \u22121 n [E {Kh(Z \u2212z)} \u00b7 E {Kh(Z \u2212z)\u03a3jk(Z)}] \u2212E{Pn(gz,jk)}E{Pn(wz)} = 1 nE \b K2 h(Z \u2212z)\u03a3jk(Z) \t | {z } I1 \u22121 nE{Pn(gz,jk)}E{Pn(wz)} | {z } I2 , (61) where the second to the last equality follows from the fact that Zi and Zi\u2032 are independent. We now obtain an upper bound for I1. By (13) and Assumptions 1-2, we have I1 = 1 nh Z 1 hK2 \u0012Z \u2212z h \u0013 \u03a3jk(Z)fZ(Z)dZ \u22641 nh \u00b7 M\u03c3 \u00b7 \u00af fZ Z 1 hK2 \u0012Z \u2212z h \u0013 dZ = O \u0012 1 nh \u0013 , (62) where the last equality holds by a change of variable. Moreover, by (43) and (44), we have I2 = 1 n \b fZ(z)\u03a3jk(z) + O(h2) \t \u00b7 {fZ(z) + O(h2)} = O \u00121 n \u0013 . (63) Substituting (62) and (63) into (61), and taking the supreme over z \u2208[0, 1] and j, k \u2208[d] on 31 both sides of the equation, we obtain sup z\u2208[0,1] max j,k\u2208[d] \f \f \f \f 1 nE n Gn(gz,jk) \u00b7 Gn(wz) o\f \f \f \f = O \u0012 1 nh \u0013 + O \u00121 n \u0013 = O \u0012 1 nh \u0013 , where the last equality holds by the scaling assumption of h = o(1). The proof of (46) follows from the same set of argument. E.3 Proof of Lemma 3 The proof of Lemma 3 involves obtaining upper bounds for the supreme of the empirical processes Gn(wz) and Gn(gz,jk). To this end, we apply the Talagrand\u2019s inequality in Lemma 20. Let F be a function class. In order to apply Talagrand\u2019s inequality, we need to evaluate the quantities \u03b7 and \u03c4 2 such that sup f\u2208F \u2225f\u2225\u221e\u2264\u03b7 and sup f\u2208F Var(f(X)) \u2264\u03c4 2. 
Talagrand\u2019s inequality in Lemma 20 provides an upper bound for the supreme of an empirical process in terms of its expectation. By Lemma 21, the expectation can then be upper bounded as a function of the covering number of the function class F, denoted as N{F, L2(Q), \u03f5}. The following lemmas provide upper bounds for the supreme of the empirical processes Gn(wz) and Gn(gz,jk), respectively. The proofs are deferred to Sections E.3.1 and E.3.2, respectively. Lemma 4. Assume that h = o(1) and log(d/h)/(nh) = o(1). Under Assumptions 1-2, for su\ufb03ciently large n, there exists a universal constant C > 0 such that sup z\u2208[0,1] |Gn(wz)| \u2264C \u00b7 r log(d/h) h , (64) with probability at least 1 \u22121/d. Lemma 5. Assume that h = o(1) and log2(d/h)/(nh) = o(1). Under Assumptions 1-2, for su\ufb03ciently large n, there exists a universal constant C > 0 such that sup z\u2208[0,1] max j,k\u2208[d] |Gn(gz,jk)| \u2264C \u00b7 r log(d/h) h , (65) with probability at least 1 \u22122/d. 32 Applying Lemmas 4 and 5, we obtain sup z\u2208[0,1] max j,k\u2208[d] \f \f \fGn(wz) \u2228Gn(gz,jk) \f \f \f \u2264sup z\u2208[0,1] |Gn(wz)| + sup z\u2208[0,1] max j,k\u2208[d] |Gn(gz,jk)| \u2264C \u00b7 r log(d/h) h , with probability at least 1 \u22123/d, as desired. E.3.1 Proof of Lemma 4 The proof of Lemma 4 uses the set of arguments as detailed in the beginning of E.3. Recall from (29) and (31) the de\ufb01nition of wz(Zi) = Kh(Zi \u2212z) and kz(Zi) = wz(Zi) \u2212E{wz(Z)}, respectively. We consider the class of function K = {kz | z \u2208[0, 1]} . (66) First, note that sup z\u2208[0,1] \u2225kz\u2225\u221e= sup z\u2208[0,1] \u2225wz(Zi) \u2212E{wz(Z)}\u2225\u221e \u22641 h\u2225K\u2225\u221e+ \u00af fZ + O(h2) \u22642 h\u2225K\u2225\u221e, (67) where the \ufb01rst inequality holds by (13) and Lemma 2, and the last inequality holds by the scaling assumption h = o(1) for su\ufb03ciently large n. Next, we obtain an upper bound for the variance of kz(Zi). Note that sup z\u2208[0,1] Var{kz(Z)} = sup z\u2208[0,1] E \u0000[wz(Z) \u2212E{wz(Z)}]2\u0001 \u2264sup z\u2208[0,1] 2E{w2 z(Z)} | {z } I1 + sup z\u2208[0,1] 2E2{wz(Z)} | {z } I2 , where we apply the inequality (x \u2212y)2 \u22642x2 + 2y2 for two scalars x, y. By Lemma 2, we have I2 \u22642{ \u00af fZ + O(h2)}2. Also, by a change of variable and second-order Taylor expansion 33 on the marginal density fZ(\u00b7), we have I1 = 2 sup z\u2208[0,1] Z 1 h2K2 \u0012Z \u2212z h \u0013 fZ(Z)dZ = 2 sup z\u2208[0,1] 1 h Z K2(u)fZ(uh + z)du = 2 sup z\u2208[0,1] 1 h Z K2(u) n fZ(z) + uh \u02d9 fZ(z) + u2h2 \u00a8 fZ(z\u2032) o du for z\u2032 \u2208(z, u + zh) \u22642 h \u00af fZ\u2225K\u22252 2 + O(1) + O(h). (68) Thus, for su\ufb03ciently large n and the assumption that h = o(1), we have sup z\u2208[0,1] Var{kz(Z)} \u22643 h \u00b7 \u00af fZ \u00b7 \u2225K\u22252 2. (69) By Lemma 16, the covering number for the function class K satis\ufb01es sup Q N{K, L2(Q), \u03f5} \u2264 4 \u00b7 \u2225K\u2225TV \u00b7 C4/5 K \u00b7 \u00af f 1/5 Z h\u03f5 !5 . (70) We are now ready to obtain an upper bound for the supreme of the empirical process, supz\u2208[0,1] |Gn(wz)|. 
By Lemma 21 with A = 2 \u00b7 \u2225K\u2225TV \u00b7 C4/5 K \u00b7 \u00af f 1/5 Z /\u2225K\u2225\u221e, \u2225F\u2225L2(Pn) = 2 \u00b7 \u2225K\u2225\u221e/h, V = 5, \u03c32 P = 3 \u00b7 \u00af fZ \u00b7 \u2225K\u22252 2/h, for su\ufb03ciently large n, we obtain E ( sup z\u2208[0,1] 1 \u221an \u00b7 |Gn(wz)| ) = E \uf8eb \uf8edsup z\u2208[0,1] 1 n \f \f \f \f \f \f X i\u2208[n] [wz(Zi) \u2212E{wz(Z)}] \f \f \f \f \f \f \uf8f6 \uf8f8 \u2264C \u00b7 (r log(1/h) nh + log(1/h) n ) \u2264C \u00b7 r log(1/h) nh , (71) where C > 0 is some su\ufb03ciently large constant. By Lemma 20 with \u03c4 2 = 3 \u00af fZ \u00b7 \u2225K\u22252 2/h, \u03b7 = 2 \u00b7 \u2225K\u2225\u221e/h, E[Y ] \u2264C \u00b7 p log(1/h)/(nh), and picking t = p log(d)/n, for su\ufb03ciently 34 large n, we have sup z\u2208[0,1] 1 \u221an \u00b7 |Gn(wz)| = sup z\u2208[0,1] 1 n \f \f \f \f \f \f X i\u2208[n] (wz(Zi) \u2212E{wz(Z)} \f \f \f \f \f \f \u2264C \u00b7 \uf8eb \uf8ed r log(1/h) nh + r log(d) nh \u00b7 s 1 + r log(1/h) nh + log(d) nh \uf8f6 \uf8f8 \u2264C \u00b7 r log(d/h) nh , with probability 1\u22121/d, where the last expression holds by the assumption that log(d/h)/(nh) = o(1) and h = o(1). Multiplying both sides of the above equation by \u221an completes the proof of Lemma 4. E.3.2 Proof of Lemma 5 The proof of Lemma 5 uses the set of arguments as detailed in the beginning of E.3. For convenience, we prove Lemma 5 by conditioning on the event A = \u001a max i\u2208[n] max j\u2208[d] max(|Xij|, |Yij|) \u2264MX \u00b7 p log d \u001b . (72) Since Xij and Yij conditioned on Z are Gaussian random variables, the event A occurs with probability at least 1 \u22121/d for su\ufb03ciently large constant MX > 0. Recall from (29) and (30) the de\ufb01nition of gz,jk(Zi, Xij, Yik) = Kh(Zi \u2212z)XijYik and qz,jk(Zi, Xij, Yik) = gz,jk(Zi, Xij, Yik) \u2212E{gz,jk(Z, Xj, Yk)}, respectively. We consider the function class Q = {qz,jk | z \u2208[0, 1], j, k \u2208[d]} . (73) We \ufb01rst obtain an upper bound for the function class sup z\u2208[0,1] max j,k\u2208[d] \u2225qz,jk\u2225\u221e= sup z\u2208[0,1] max j,k\u2208[d] \u2225gz,jk(Zi, Xij, Yik) \u2212E{gz,jk(Z, Xj, Yk)}\u2225\u221e \u2264sup z\u2208[0,1] max j,k\u2208[d] \u2225gz,jk(Zi, Xij, Yik)\u2225\u221e+ sup z\u2208[0,1] max j,k\u2208[d] \u2225E{gz,jk(Z, Xj, Yk)}\u2225\u221e \u2264sup z\u2208[0,1] max j,k\u2208[d] \u2225Kh(Zi \u2212z)XijYik\u2225\u221e+ \u00af fZ \u00b7 M\u03c3 + O(h2) \u22641 h \u00b7 M 2 X \u00b7 \u2225K\u2225\u221e\u00b7 log d + \u00af fZ \u00b7 M\u03c3 + O(h2) \u22642 h \u00b7 M 2 X \u00b7 \u2225K\u2225\u221e\u00b7 log d, (74) where the second inequality holds by Assumptions 1-2 and Lemma 2, the third inequality 35 holds by (13) and by conditioning on the event A, and the last inequality holds by the scaling assumption h = o(1) for su\ufb03ciently large n. Next, we obtain an upper bound for the variance of qz,jk(Zi, Xij, Yik). Note that sup z\u2208[0,1] max j,k\u2208[d] Var{qz,jk(Z, Xj, Yk)} = sup z\u2208[0,1] max j,k\u2208[d] E \u0002 (gz,jk(Z, Xj, Yk) \u2212E{gz,jk(Z, Xj, Yk)})2\u0003 \u2264sup z\u2208[0,1] max j,k\u2208[d] 2E \b g2 z,jk(Z, Xj, Yk) \t | {z } I1 + sup z\u2208[0,1] max j,k\u2208[d] 2E2{gz,jk(Z, Xj, Yk)} | {z } I2 , where we apply the inequality (x\u2212y)2 \u22642x2+2y2 for two scalars x, y. By Lemma 2, we have I2 \u22642 \b \u00af fZ \u00b7 M\u03c3 + O(h2) \t2. 
Also, by a change of variable and second-order Taylor expansion on the marginal density fZ(\u00b7) as in (68), we have I1 = 2 sup z\u2208[0,1] max j,k\u2208[d] E \b K2 h(Z \u2212z) \u00b7 E \u0000X2 j Y 2 k | Z \u0001\t \u22642\u03ba sup z\u2208[0,1] E \b K2 h(Z \u2212z) \t \u22642\u03ba h \u00b7 \u00af fZ \u00b7 \u2225K\u22252 2 + O(1) + O(h), where the \ufb01rst inequality follows from the fact that |E(X2 j Y 2 k | Z)| \u2264\u03ba for some \u03ba < \u221esince these are Gaussian random variables, and the second inequality follows from (68). Thus, for su\ufb03ciently large n and the assumption that h = o(1), we have sup z\u2208[0,1] max j,k\u2208[d] Var{qz,j,k(Z, Xj, Yk)} \u22643\u03ba h \u00b7 \u00af fZ \u00b7 \u2225K\u22252 2. (75) By Lemma 17, the covering number for the function class Q satis\ufb01es sup Q N{Q, L2(Q), \u03f5} \u2264 4\u2225K\u2225TV \u00b7 C4/5 K \u00b7 \u00af f 1/5 Z \u00b7 M 1/5 \u03c3 \u00b7 d1/10 \u00b7 M 2/5 X \u00b7 log2/5 d h\u03f5 !5 . (76) We now obtain an upper bound for the supreme of the empirical process, sup z\u2208[0,1] max j,k\u2208[d] |Gn(gz,jk)|. By Lemma 21 with A = 2 \u00b7 \u2225K\u2225TV \u00b7 C4/5 K \u00b7 \u00af f 1/5 Z \u00b7 M 1/5 \u03c3 \u00b7 d1/10/\u2225K\u2225\u221e, \u2225F\u2225L2(Pn) = 2 \u00b7 \u2225K\u2225\u221e\u00b7 36 M 2 X \u00b7 log d/h, V = 5, \u03c32 P = (3\u03ba/h) \u00b7 \u00af fZ \u00b7 \u2225K\u22252 2, for su\ufb03ciently large n, we obtain E ( sup z\u2208[0,1] max j,k\u2208[d] 1 \u221an \u00b7 |Gn(gz,jk)| ) = E \uf8eb \uf8edsup z\u2208[0,1] max j,k\u2208[d] 1 n \u00b7 \f \f \f \f \f \f X i\u2208[n] [gz,jk(Zi, Xij, Yik) \u2212E{gz,jk(Z, Xj, Yk)}] \f \f \f \f \f \f \uf8f6 \uf8f8 \u2264C \u00b7 (r log(d/h) nh + log(d/h) n ) \u2264C \u00b7 r log(d/h) nh , (77) where the last inequality holds by the assumption log(d/h)/nh = o(1). By Lemma 20 with \u03c4 2 = 3 \u00b7 \u03ba \u00b7 \u00af fZ \u00b7 \u2225K\u22252 2/h, \u03b7 = 2 \u00b7 \u2225K\u2225\u221e\u00b7 M 2 X \u00b7 log d/h, E[Y ] \u2264C \u00b7 p log(d/h)/(nh), and picking t = p log d/n, for su\ufb03ciently large n, we have sup z\u2208[0,1] max j,k\u2208[d] 1 \u221an \u00b7 |Gn(gz,jk)| = sup z\u2208[0,1] max j,k\u2208[d] 1 n \u00b7 \f \f \f \f \f \f X i\u2208[n] [gz,jk(Zi, Xij, Yik) \u2212E{gz,jk(Z, Xj, Yk)}] \f \f \f \f \f \f \u2264C \u00b7 \uf8f1 \uf8f2 \uf8f3 r log(d/h) nh + r log d nh \u00b7 s 1 + log d \u00b7 r log(d/h) nh + log2 d nh \uf8fc \uf8fd \uf8fe \u2264C \u00b7 r log(d/h) nh , with probability at least 1 \u22122/d. The second inequality holds by the assumption that log2(d/h)/(nh) = o(1). Multiplying both sides of the equation by \u221an, we completed the proof of Lemma 5. F Proof of Theorem 2 In this section, we provide the proof of Theorem 2. To prove Theorem 2, we use a similar set of arguments in the series of work on Gaussian multiplier bootstrap of the supreme of empirical process (see, for instance, Chernozhukov et al. 2013, 2014a,b). Recall from (10) and (11) that TE = sup z\u2208[0,1] max (j,k)\u2208E(z) \u221a nh \u00b7 \f \f \f b \u0398de jk(z) \u2212\u0398jk(z) \f \f \f \u00b7 Pn(wz) (78) and T B E = sup z\u2208[0,1] max (j,k)\u2208E(z) \u221a nh \u00b7 \f \f \f \f \f \f \f P i\u2208[n] n b \u0398j(z) oT Kh(Zi \u2212z) n XiY T i b \u0398k(z) \u2212ek o \u03bei/n n b \u0398j(z) oT b \u03a3j(z) \f \f \f \f \f \f \f , (79) 37 respectively, where \u03bei \u223cN(0, 1). Note that for notational convenience, we drop the subscript E from TE and T B E throughout the proof. We aim to show that T B is a good approximation of T. 
However, T and T B are not exact averages. To apply the results in Chernozhukov et al. 2014a, we de\ufb01ne four intermediate processes: T0 = sup z\u2208[0,1] max (j,k)\u2208E(z) \u221a nh \u00b7 \f \f \f \f \f \f X i\u2208[n] {\u0398j(z)}T Kh(Zi \u2212z) \b XiY T i \u2212\u03a3(z) \t \u0398k(z)/n \f \f \f \f \f \f ; (80) T00 = sup z\u2208[0,1] max (j,k)\u2208E(z) \u221a nh \u00b7 \f \f \f \f X i\u2208[n] {\u0398j(z)}T Kh(Zi \u2212z) \b XiY T i \u2212\u03a3(z) \t \u0398k(z)/n \u2212{\u0398j(z)}T \u0012h E{Kh(Z \u2212z)XY T} \u2212E{Kh(Z \u2212z)}\u03a3(z) i\u0013 \u0398k(z)/n \f \f \f \f; (81) T B 0 = sup z\u2208[0,1] max (j,k)\u2208E(z) \u221a nh \u00b7 \f \f \f \f \f \f X i\u2208[n] h {\u0398j(z)}T Kh(Zi \u2212z) \b XiY T i \u2212\u03a3(z) \t \u0398k(z) i \u03bei/n \f \f \f \f \f \f , (82) T B 00 = sup z\u2208[0,1] max (j,k)\u2208E(z) \u221a nh \u00b7 \f \f \f \f X i\u2208[n] n {\u0398j(z)}T Kh(Zi \u2212z) \b XiY T i \u2212\u03a3(z) \t \u0398k(z)/n \u2212{\u0398j(z)}T \u0010h E{Kh(Z \u2212z)XY T} \u2212E{Kh(Z \u2212z)}\u03a3(z) i\u0011 \u0398k(z) o \u00b7 \u03bei/n \f \f \f \f; (83) where \u03bei i.i.d. \u223cN(0, 1). To prove Theorem 2, we show that T00 is a good approximation of T and that T B 00 is a good approximation of T B. We then show that there exists a Gaussian process W such that both T B 00 and T00 can be accurately approximated by W. This is done by applications of Theorems A.1 and A.2 in Chernozhukov et al. (2014a). The following summarizes the chain of empirical and Gaussian processes that we are going to study T \u2190 \u2192T0 \u2190 \u2192T00 \u2190 \u2192W \u2190 \u2192T B 00 \u2190 \u2192T B 0 \u2190 \u2192T B. The following lemma provides an approximation error between the statistic T and the intermediate empirical process T00. Lemma 6. Assume that h2+ p log(d/h)/nh = o(1). Under Assumptions 1-2, for su\ufb03ciently 38 large n, there exists a universal constant C > 0 such that |T \u2212T00| \u2264C \u00b7 \u001a\u221a nh5 + s \u00b7 \u221a nh9 + s \u00b7 log(d/h) \u221a nh + \u00b7s \u00b7 h2 \u00b7 p log(d/h) \u001b , with probability at least 1 \u22121/d. Proof. The proof is deferred to F.2. We now apply Theorems A.1 and A.2 in Chernozhukov et al. (2014a) to show that there exists a Gaussian process W such that the quantities |T00 \u2212W| and |T B 00 \u2212W| can be controlled, respectively. The results are stated in the following lemmas. Lemma 7. Assume that log6 s \u00b7 log4(d/h)/(nh) = o(1). Under Assumptions 1-2, for su\ufb03ciently large n, there exists universal constants C, C\u2032 > 0 such that P \" |T00 \u2212W| \u2265C \u00b7 \u001alog6(s) \u00b7 log4(d/h) nh \u001b1/8# \u2264C\u2032 \u00b7 \u001alog6(s) \u00b7 log4(d/h) nh \u001b1/8 . Proof. The proof is deferred to F.3. Lemma 8. Assume that log4(s) \u00b7 log3(d/h)/(nh) = o(1). Under Assumptions 1-2, for su\ufb03ciently large n, there exists universal constants C, C\u2032\u2032 > 0 such that P \" |T B 00 \u2212W| > C \u00b7 \u001alog4(s) \u00b7 log3(d/h) nh \u001b1/8 \f \f \f {(Zi, Xi, Yi)}i\u2208[n] # \u2264C\u2032\u2032\u00b7 \u001alog4(s) \u00b7 log3(d/h) nh \u001b1/8 , with probability at least 1 \u22123/n. Proof. The proof is deferred to F.4. Finally, the following lemma provides an upper bound on the di\ufb00erence between T B and T B 00, conditioned on the data {(Zi, Xi, Yi)}i\u2208[n]. Lemma 9. Assume that s \u00b7 q h3 log3(d/h) + s \u00b7 q log4(d/h)/nh2 + p h5 log n = o(1). 
Under Assumptions 1-2, for su\ufb03ciently large n, there exists universal constants C, C\u2032\u2032 > 0 such that, with probability at least 1 \u22121/d, P \uf8ee \uf8f0|T B \u2212T B 00| > C \u00b7 q h3 log3(d/h) + s \u00b7 s log4(d/h) nh2 + p h5 log n \f \f \f {(Zi, Xi, Yi)}i\u2208[n] \uf8f9 \uf8fb\u22642/d+1/n. Proof. The proof is deferred to F.5. With Lemmas 6-9, we are now ready to prove Theorem 2. 39 F.1 Proof of Theorem 2 Recall that for notational convenience, we drop the subscript E from TE and T B E throughout the proof. In this section, we show that T can be well-approximated by the (1\u2212\u03b1)-conditional quantile of T B, i.e., P{T \u2265c(1 \u2212\u03b1)} \u2264\u03b1. For notational convenience, we let r = r1 + r2 + r3 + r4, where r1 = \u221a nh5 + s \u00b7 \u221a nh9 + s \u00b7 log(d/h) \u221a nh + \u00b7s \u00b7 h2 \u00b7 p log(d/h) r2 = \u001alog6 s \u00b7 log4(d/h) nh \u001b1/8 r3 = \u001alog4 s \u00b7 log3(d/h) nh \u001b1/8 r4 = q h3 log3(d/h) + s \u00b7 s log4(d/h) nh2 + p h5 log n. These are the scaling that appears in Lemmas 6-9. By Lemmas 6 and 7, it can be shown that P(|T \u2212W| \u22652r2) \u2264P(|T \u2212T00| + |T00 \u2212W| \u22652r2) \u22642r2, (84) since r2 \u2265r1 and r2 \u22651/d. With some abuse of notation, throughout the proof, we write P\u03be(T B \u2265t) to indicate P[T B \u2265t | {(Zi, Xi, Yi)}i\u2208[n]]. By Lemmas 8 and 9, we have P\u03be(|T B \u2212W| \u22652r2) \u2264P\u03be(|T B \u2212T B 00| + |T B 00 \u2212W| \u22652r2) \u22642r2, (85) since r2 \u2265r3 and r2 \u22652/d + 1/n. De\ufb01ne the event E = \u0000P[|T B 00 \u2212W| > r2 | {(Zi, Xi, Yi)}i\u2208[n] \u2264r2] \u0001 , and note that P(E) \u22651\u22122/d\u22124/n by Lemmas 8 and 9. Throughout the proof, we condition on the event E. By the triangle inequality, we obtain P{T \u2264c(1 \u2212\u03b1)} \u22651 \u2212P{T \u2212W + W + 2r2 \u2265c(1 \u2212\u03b1) + 2r2} \u22651 \u2212P(|T \u2212W| \u22652r2) \u2212P{W \u2265c(1 \u2212\u03b1) \u22122r2} \u2265P{|W| \u2264c(1 \u2212\u03b1) \u22122r2} \u22122r2, (86) where the last inequality follows from (84). By a similar argument and by (85), we have P{W \u2264c(1 \u2212\u03b1) \u22122r2} \u2265P\u03be{T B \u2264c(1 \u2212\u03b1) \u22124r2} \u22122r2 \u2265P\u03be{T B \u2264c(1 \u2212\u03b1)} \u22122r2 \u2212P\u03be{|T B \u2212c(1 \u2212\u03b1)| \u2264r2}, (87) where the last inequality follows from the fact that P(X \u2264t\u2212\u03f5)\u2212P(X \u2264t) \u2265\u2212P(|X \u2212t| \u2264 40 \u03f5) for any \u03f5 > 0. Thus, combining (86) and (87), we obtain P{T \u2264c(1 \u2212\u03b1)} \u22651 \u2212\u03b1 \u22124r2 \u2212P\u03be{|T B \u2212c(1 \u2212\u03b1)| \u2264r2}. (88) It remains to show that the quantity P\u03be{|T B \u2212c(1 \u2212\u03b1)| \u2264r2} converges to zero as we increase n. By the de\ufb01nition of T00 and from (35), we have T00 = sup z\u2208[0,1] max j,k\u2208[d] 1 \u221an \f \f \f \f \f \f X i\u2208[n] Jz,jk(Zi, Xi, Yi) \f \f \f \f \f \f and T B 00 = sup z\u2208[0,1] max j,k\u2208[d] 1 \u221an \f \f \f \f \f \f X i\u2208[n] Jz,jk(Zi, Xi, Yi)\u03bei \f \f \f \f \f \f . Let b \u03c32 z,jk = Pn i=1 J2 z,jk(Zi, Xi, Yi)/n be the conditional variance, and let \u03c3 = infz,jk b \u03c3z,jk and \u00af \u03c3 = supz,jk b \u03c3z,jk. By Lemma A.1 of Chernozhukov et al. (2014b) and Theorem 3 of Chernozhukov et al. 
(2013), we obtain P\u03be{|T B \u2212c(1 \u2212\u03b1)| \u2264r2} \u2264C \u00b7 \u00af \u03c3/\u03c3 \u00b7 r2 \u00b7 {E[T B | {(Zi, Xi, Yi)}i\u2208[n]] + p 1 \u2228log(\u03c3/r2)} \u2264C \u00b7 \u00af \u03c3/\u03c3 \u00b7 r2 \u00b7 {E[T B 00 | {(Zi, Xi, Yi)}i\u2208[n]] + E[|T B \u2212T B 00| | {(Zi, Xi, Yi)}i\u2208[n]] + p 1 \u2228log(\u03c3/r2)}. (89) We \ufb01rst calculate the quantity \u00af \u03c3. By (110), we have sup z\u2208[0,1] max j,k\u2208[d] \u2225J2 z,jk(Zi, Xi, Yi)\u2225\u221e\u2264C \u00b7 log2 s/h. (90) Moreover, by (110), we have sup z\u2208[0,1] max j,k\u2208[d] E[J4 z,jk(Zi, Xi, Yi)] \u2264C \u00b7 log4 s/h2. (91) De\ufb01ne the function class J \u2032 = {J2 z,jk(\u00b7) | z \u2208[0, 1], j, k \u2208[d]}. By Lemmas 15, 18 and 19, we have sup Q N{J \u2032, L2(Q), \u03f5} \u2264C \u00b7 d2 \u00b7 d17/24 \u00b7 log3/4 d h11/12 \u00b7 \u03f5 !24 . (92) Thus, applying Lemma 21 with \u03c32 P = C \u00b7 log4 s/h2 and \u2225F\u2225L2(Pn) \u2264C \u00b7 d2 \u00b7 (d17/24 \u00b7 log3/4 d/h11/12)24, we have E \uf8ee \uf8f0sup z\u2208[0,1] max j,k\u2208[d] 1 n \f \f \f \f \f \f X i\u2208[n] J2 z,jk(Zi, Xi, Yi) \u2212E{J2 z,jk(Z, X, Y )} \f \f \f \f \f \f \uf8f9 \uf8fb\u2264C \u00b7 s log5(d/h) nh2 . 41 By an application of the Markov\u2019s inequality, we obtain P \uf8eb \uf8edsup z\u2208[0,1] max j,k\u2208[d] \uf8ee \uf8f01 n X i\u2208[n] J2 z,jk(Zi, Xi, Yi) \u2212E{J2 z,jk(Zi, Xi, Yi)} \uf8f9 \uf8fb\u2265C \u00b7 \u001alog5(d/h) nh2 \u001b1/4\uf8f6 \uf8f8\u2264C\u00b7 \u001alog5(d/h) nh2 \u001b1/4 . (93) Thus, we have with probability at least 1 \u2212C \u00b7 \b log5(d/h)/(nh2) \t1/4, \u00af \u03c32 = sup z\u2208[0,1] max j,k\u2208[d] 1 n X i\u2208[n] J2 z,jk(Zi, Xi, Yi) \u2264sup z\u2208[0,1] max j,k\u2208[d] E{J2 z,jk(Zi, Xi, Yi)}+C\u00b7 \u001alog5(d/h) nh2 \u001b1/4 \u2264C\u00b7log2 s, (94) where the last inequality follows from (115) for su\ufb03ciently large n. By Lemma 10, we have infz,j,k E{J2 z,jk(Z, X, Y )} \u2265c > 0. Therefore, we have \u03c32 = inf z,j,k 1 n n X i=1 J2 z,jk(Zi, Xi, Yi) \u2265c\u2212sup z,j,k 1 n n X i=1 [J2 z,jk(Zi, Xi, Yi)\u2212E{J2 z|(j,k)(Z, X, Y )}] \u2265c/2 > 0, with probability at least 1 \u2212C \u00b7 \b log5(d/h)/(nh2) \t1/4. Next, we calculate the quantity E[T B 00 | {(Zi, Xi, Yi)}i\u2208[n]]. By Dudley\u2019s inequality (see, e.g., Corollary 2.2.8 in Van Der Vaart & Wellner 1996) and (116), we obtain E[T B 00 | {(Zi, Xi, Yi)}i\u2208[n]] \u2264C \u00b7 log s \u00b7 p log(d/h). (95) Moreover, by Lemma 9, we have E[|T B\u2212T B 00| | {(Zi, Xi, Yi)}i\u2208[n]] \u2264C\u00b7 q h3 log3(d/h)+s\u00b7 s log4(d/h) nh2 + p h5 log n \u2264r2, (96) with probability at least 1\u22122/d\u22121/n. Substituting (94), (95), and (96) into (89), we obtain P\u03be{|T B \u2212c(1 \u2212\u03b1)| \u2264r2} \u2264C \u00b7 \u001alog22 s \u00b7 log8(d/h) nh \u001b1/8 . (97) Thus, substituting (97) into (88), we have P{T \u2264c(1 \u2212\u03b1)} \u22651 \u2212\u03b1 \u22124r2 \u2212log22 s \u00b7 log8(d/h) nh . By the scaling assumptions, r2 = o(1) and log22 s \u00b7 log8(d/h)/(nh) = o(1). Thus, this implies that lim n\u2192\u221eP{T \u2264c(1 \u2212\u03b1)} \u22651 \u2212\u03b1, which implies that lim n\u2192\u221eP{T \u2265c(1 \u2212\u03b1)} \u2264\u03b1, 42 as desired. F.2 Proof of Lemma 6 In this section, we show that |T \u2212T00| is upper bounded by the quantity C \u00b7 \u001a\u221a nh5 + s \u00b7 \u221a nh9 + s \u00b7 log(d/h) \u221a nh + \u00b7s \u00b7 h2 \u00b7 p log(d/h) \u001b with high probability for su\ufb03ciently large constant C > 0. 
By the triangle inequality, we have |T \u2212T00| \u2264|T \u2212T0| + |T0 \u2212T00|. Thus, is su\ufb03ces to obtain upper bounds for the terms |T \u2212T0| and |T0 \u2212T00|. Upper Bound for |T \u2212T0|: Let e \u0398k = \u0010 b \u03981k, . . . , b \u0398(j\u22121)k, \u0398jk, b \u0398(j+1)k, . . . , b \u0398dk \u0011T \u2208Rd. Then, the statistics T can be rewritten as T = sup z\u2208[0,1] max (j,k)\u2208E(z) \u221a nh \u00b7 \f \f \f b \u0398de jk(z) \u2212\u0398jk(z) \f \f \f \u00b7 Pn(wz) = sup z\u2208[0,1] max (j,k)\u2208E(z) \u221a nh \u00b7 \f \f \f \f \f \f \f b \u0398jk(z) \u2212\u0398jk(z) \u2212 n b \u0398j(z) oT n b \u03a3(z) b \u0398k \u2212ek o n b \u0398j(z) oT b \u03a3j(z) \f \f \f \f \f \f \f \u00b7 Pn(wz) = sup z\u2208[0,1] max (j,k)\u2208E(z) \u221a nh \u00b7 \f \f \f \f \f \f \f n b \u0398j(z) oT n b \u03a3(z) e \u0398k \u2212ek o n b \u0398j(z) oT b \u03a3j(z) \f \f \f \f \f \f \f \u00b7 Pn(wz). (98) To obtain an upper bound on the di\ufb00erence between T and T0, we make use of the following inequality: \f \f \f \f x 1 + \u03b4 \u2212y \f \f \f \f \u22642 \u00b7 y \u00b7 |\u03b4| + 2 \u00b7 |x \u2212y| for any |\u03b4| \u22641 2. (99) Recall from (80) that T0 = sup z\u2208[0,1] max (j,k)\u2208E(z) \u221a nh \u00b7 \f \f \f \f \f \f X i\u2208[n] {\u0398j(z)}T Kh(Zi \u2212z) \b XiY T i \u2212\u03a3(z) \t \u0398k(z)/n \f \f \f \f \f \f . Applying (99) with x = { b \u0398j(z)}T{b \u03a3(z) e \u0398k \u2212ek}, \u03b4 = { b \u0398j(z)}T b \u03a3j(z) \u22121, and y = 43 {\u0398j(z)}T{b \u03a3(z) \u2212\u03a3(z)}\u0398k(z), and by the triangle inequality, we have |T \u2212T0| \u2264sup z\u2208[0,1] max (j,k)\u2208E(z) \u221a nh \u00b7 \f \f \f \f \f n b \u0398j(z) oT n b \u03a3(z) e \u0398k \u2212ek o \u00b7 Pn(wz) n b \u0398j(z) oT b \u03a3j(z) \u22121 n X i\u2208[n] {\u0398j(z)}T Kh(Zi \u2212z) \b XiY T i \u2212\u03a3(z) \t \u0398k(z \u2264sup z\u2208[0,1] max (j,k)\u2208E(z) \u221a nh \u00b7 \f \f \f \f \f n b \u0398j(z) oT n b \u03a3(z) e \u0398k \u2212ek o n b \u0398j(z) oT b \u03a3j(z) \u2212{\u0398j(z)}T n b \u03a3(z) \u2212\u03a3(z) o \u0398k(z) \f \f \f \f \f \u00b7 |Pn(wz)| \u22642 sup z\u2208[0,1] max (j,k)\u2208E(z) \u221a nh \u00b7 \u0014 {\u0398j(z)}T n b \u03a3(z) \u2212\u03a3(z) o \u0398k(z) \u00b7 \f \f \f \f n b \u0398j(z) oT b \u03a3j(z) \u22121 \f \f \f \f \u0015 \u00b7 |Pn(wz)| | {z } I1 + 2 sup z\u2208[0,1] max (j,k)\u2208E(z) \u221a nh \u00b7 \u0014n b \u0398j(z) oT n b \u03a3(z) e \u0398k \u2212ek o \u2212{\u0398j(z)}T n b \u03a3(z) \u2212\u03a3(z) o \u0398k(z) \u0015 \u00b7 |Pn(wz)| | {z } I2 . (100) It remains to obtain upper bounds for I1 and I2 in (100). Upper bound for I1: By Corollary 1, we have sup z\u2208[0,1] max j\u2208[d] \f \f \f \f n b \u0398j(z) oT b \u03a3j(z) \u22121 \f \f \f \f \u2264C \u00b7 \" h2 + r log(d/h) nh # . (101) Moreover, by Lemmas 4 and 2, we have sup z\u2208[0,1] |Pn(wz)| \u2264|E {Pn(wz)}| + C \u00b7 r log(d/h) nh = \u00af fZ + O ( h2 + r log(d/h) nh ) , (102) with probability at least 1 \u22121/d. 
Thus, by Holder\u2019s inequality, we have I1 \u22642 sup z\u2208[0,1] max (j,k)\u2208E(z) \u221a nh \u00b7 \f \f \f \f n b \u0398j(z) oT b \u03a3j(z) \u22121 \f \f \f \f \u00b7 |Pn(wz)| \u00b7 \f \f \f{\u0398j(z)}T n b \u03a3(z) \u2212\u03a3(z) o \u0398k(z) \f \f \f \u22642 sup z\u2208[0,1] max (j,k)\u2208E(z) \u221a nh \u00b7 \f \f \f \f n b \u0398j(z) oT b \u03a3j(z) \u22121 \f \f \f \f \u00b7 |Pn(wz)| \u00b7 \u2225\u0398j(z)\u22252 1 \u00b7 \u2225b \u03a3(z) \u2212\u03a3(z)\u2225max \u22642 \u00b7 M2 \u00b7 \u221a nh \u00b7 C \u00b7 ( h2 + r log(d/h) nh ) \u00b7 \" \u00af fZ + O ( h2 + r log(d/h) nh )# \u00b7 ( h2 + r log(d/h) nh ) \u2264C \u00b7 \u221a nh \u00b7 ( h2 + r log(d/h) nh )2 , (103) 44 with probability greater than 1\u22124/d, where the third inequality holds by Theorem 1, (101), and (102). Upper bound for I2: To obtain an upper bound for I2, we \ufb01rst decompose the quantity \u221a nh \u00b7 { b \u0398j(z)}T{b \u03a3(z) e \u0398k \u2212ek} into the following \u221a nh \u00b7 n b \u0398j(z) oT n b \u03a3(z) e \u0398k \u2212ek o = \u221a nh \u00b7 n b \u0398j(z) oT b \u03a3(z) n e \u0398k(z) \u2212\u0398k(z) o | {z } I21 + \u221a nh \u00b7 n b \u0398j(z) oT n b \u03a3(z) \u2212\u03a3(z) o \u0398k(z) | {z } I22 . Next, we show that I21 converges to zero and that the di\ufb00erence between I22 and the term \u221a nh \u00b7 {\u0398j(z)}T{b \u03a3(z) \u2212\u03a3(z)}\u0398k(z) is small. Upper bound for I21: By Holder\u2019s inequality and Corollary 1, we have |I21| \u2264sup z\u2208[0,1] max (j,k)\u2208E(z) \u221a nh \u00b7 \r \r \r \r n b \u0398j(z) oT b \u03a3\u2212j(z) \r \r \r \r \u221e \u00b7 \r \r \r b \u0398k(z) \u2212\u0398k(z) \r \r \r 1 \u2264C \u00b7 \u221a nh \u00b7 s \u00b7 ( h2 + r log(d/h) nh )2 \u2264C \u00b7 \u001a s \u00b7 \u221a nh9 + s \u00b7 log(d/h) \u221a nh + s \u00b7 h2 \u00b7 p log(d/h) \u001b , (104) with probability at least 1 \u22121/d. Decomposition of I22: By adding and subtracting terms, we have I22 = \u221a nh \u00b7 n b \u0398j(z) \u2212\u0398j(z) oT n b \u03a3(z) \u2212\u03a3(z) o \u0398k(z) | {z } I221 + \u221a nh \u00b7 {\u0398j(z)}T n b \u03a3(z) \u2212\u03a3(z) o \u0398k(z) | {z } I222 . (105) Similar to (104), we have |I221| \u2264sup z\u2208[0,1] max (j,k)\u2208E(z) \u221a nh \u00b7 \r \r \r b \u0398j(z) \u2212\u0398j(z) \r \r \r 1 \u00b7 \r \r \rb \u03a3(z) \u2212\u03a3(z) \r \r \r max \u00b7 \u2225\u0398k(z)\u22251 \u2264C \u00b7 \u221a nh \u00b7 M \u00b7 s \u00b7 ( h2 + r log(d/h) nh )2 \u2264C \u00b7 \u001a s \u00b7 \u221a nh9 + s \u00b7 log(d/h) \u221a nh + s \u00b7 h2 \u00b7 p log(d/h) \u001b , (106) where the second inequality holds by Holder\u2019s inequality, Corollary 1, and the fact that 45 \u0398(z) \u2208Us,M. Combining the results (104)-(106), we have I2 = 2 sup z\u2208[0,1] max (j,k)\u2208E(z) \u221a nh \u00b7 \u0014n b \u0398j(z) oT n b \u03a3(z) e \u0398k \u2212ek o \u2212{\u0398j(z)}T n b \u03a3(z) \u2212\u03a3(z) o \u0398k(z) \u0015 \u00b7 |Pn(wz)| \u22642 \u00b7 sup z\u2208[0,1] |Pn(wz)| \u00b7 [I21 + I221] \u22642 \u00b7 \" \u00af fZ + O ( h2 + r log(d/h) nh )# \u00b7 (I21 + I221) \u2264C \u00b7 \u001a s \u00b7 \u221a nh9 + s \u00b7 log(d/h) \u221a nh + s \u00b7 h2 \u00b7 p log(d/h) \u001b , (107) where the third inequality follows from (102). Combining the upper bounds for I1 in (103) and I2 in (107), we have |T \u2212T0| \u2264C \u00b7 \u001a s \u00b7 \u221a nh9 + s \u00b7 log(d/h) \u221a nh + s \u00b7 h2 \u00b7 p log(d/h) \u001b , (108) with probability at least 1 \u22121/d. 
Upper bound for |T0 \u2212T00|: Recall from (81) the de\ufb01nition of T00 T00 = sup z\u2208[0,1] max (j,k)\u2208E(z) \u221a nh \u00b7 \f \f \f \f X i\u2208[n] {\u0398j(z)}T Kh(Zi \u2212z) \b XiY T i \u2212\u03a3(z) \t \u0398k(z)/n \u2212{\u0398j(z)}T \u0014 E{Kh(Z \u2212z)XY T} \u2212E{Kh(Z \u2212z)}\u03a3(z) \u0015 \u0398k(z)/n \f \f \f \f; Using the triangle inequality ||x| \u2212|y|| \u2264|x \u2212y|, we obtain |T0 \u2212T00| \u2264 \u221a nh \u00b7 sup z\u2208[0,1] max (j,k)\u2208E(z) \f \f \f{\u0398j(z)}T h E{Kh(Z \u2212z)XY T} \u2212E{Kh(Z \u2212z)}\u03a3(z) i \u0398k(z) \f \f \f \u2264 \u221a nh \u00b7 sup z\u2208[0,1] max (j,k)\u2208E(z) \u2225\u0398j(z)\u22251 \u00b7 \u2225\u0398k(z)\u22251 \u00b7 |E{Kh(Z \u2212z)XjYk} \u2212E{Kh(Z \u2212z)} \u00b7 \u03a3jk(z)| \u2264 \u221a nh \u00b7 M 2 \u00b7 sup z\u2208[0,1] max (j,k)\u2208E(z) |E{Kh(Z \u2212z)XjYk} \u2212E{Kh(Z \u2212z)} \u00b7 \u03a3jk(z)| = \u221a nh \u00b7 M 2 \u00b7 \f \ffZ(z) \u00b7 \u03a3jk(z) + O(h2) \u2212fZ(z) \u00b7 \u03a3jk(z) + \u03a3jk(z) \u00b7 O(h2) \f \f \u2264M 2 \u00b7 M\u03c3 \u00b7 \u221a nh5, (109) where the second inequality follows from an application of Holder\u2019s inequality, the third 46 inequality follows from the fact that \u0398(z) \u2208Us,M, the \ufb01rst equality follows by an application of Lemma 2, and the last inequality follows from Assumption 2 and that h2 = o(1). Thus, combining (108) and (109), there exists a constant C > 0 such that |T \u2212T00| \u2264C \u00b7 \u001a\u221a nh5 + s \u00b7 \u221a nh9 + s \u00b7 log(d/h) \u221a nh + \u00b7s \u00b7 h2 \u00b7 p log(d/h) \u001b , with probability at least 1 \u22121/d. F.3 Proof of Lemma 7 Recall from (81) the de\ufb01nition T00 = sup z\u2208[0,1] max (j,k)\u2208E(z) \u221a nh \u00b7 \f \f \f \f X i\u2208[n] {\u0398j(z)}T Kh(Zi \u2212z) \b XiY T i \u2212\u03a3(z) \t \u0398k(z)/n \u2212{\u0398j(z)}T \u0014 E{Kh(Z \u2212z)XY T} \u2212E{Kh(Z \u2212z)}\u03a3(z) \u0015 \u0398k(z)/n \f \f \f \f. Recall from (35) that Jz,jk(Zi, Xi, Yi) = J(1) z,jk(Zi, Xi, Yi) \u2212J(2) z,jk(Zi), where J(1) z,jk(Zi, Xi, Yi) and J(2) z,jk(Zi) are as de\ufb01ned in (33) and (34), respectively. Let J = {Jz,jk | z \u2208[0, 1], j, k \u2208[d]}. Then the intermediate empirical average T00 can be written as T00 = sup z\u2208[0,1] max (j,k)\u2208E(z) \f \f \f \f \f \f 1 \u221an X i\u2208[n] Jz,jk(Zi, Xi, Yi) \f \f \f \f \f \f . In this section, we show that there exists a Gaussian process W such that |T00 \u2212W| \u2264C \u00b7 \u001alog6 s \u00b7 log4(d/h) nh \u001b1/8 with high probability. To this end, we apply Theorem A.1 in Chernozhukov et al. (2014a), which involves the following quantities \u2022 upper bound for sup z\u2208[0,1] max j,k\u2208[d] \u2225Jz,jk(Zi, Xi, Yi)\u2225\u221e; \u2022 upper bound for sup z\u2208[0,1] max j,k\u2208[d] E \b J2 z,jk(Z, X, Y ) \t ; \u2022 covering number for the function class J . Let Sj(z) and Sk(z) to be the support of \u0398j(z) and \u0398k(z), respectively. Note that the cardinality for both sets are less than s. We now obtain the above quantities. 
47 Upper bound for sup z\u2208[0,1] max j,k\u2208[d] \u2225Jz,jk(Zi, Xi, Yi)\u2225\u221e: We have with probability at least 1 \u22121/(2s), sup z\u2208[0,1] max j,k\u2208[d] \u2225Jz,jk(Zi, Xi, Yi)\u2225\u221e \u2264 \u221a h \u00b7 sup z\u2208[0,1] max j,k\u2208[d] \u2225\u0398j(z)\u22251 \u00b7 \u2225\u0398k(z)\u22251 \u00b7 \u0012 max j\u2208Sj(z),k\u2208Sk(z) \u2225qz,jk\u2225\u221e+ M\u03c3 \u00b7 \u2225kz\u2225\u221e \u0013 \u2264 \u221a h \u00b7 M 2 \u00b7 \u001a2 h \u00b7 M 2 X \u00b7 \u2225K\u2225\u221e\u00b7 log(2s) + M\u03c3 \u00b7 2 h \u00b7 \u2225K\u2225\u221e \u001b \u2264 4 \u221a h \u00b7 M 2 \u00b7 M 2 X \u00b7 M\u03c3 \u00b7 \u2225K\u2225\u221e\u00b7 log(2s) = C1 \u00b7 log s \u221a h , (110) where the \ufb01rst inequality follows by Holder\u2019s inequality and the de\ufb01nition of qz,jk and kz and the second inequality follows from (67) and (74). Note that since we are only taking max over the set Sj(z) and Sk(z), instead of a log d factor from (74), we obtain a log(2s) factor. Upper bound for sup z\u2208[0,1] max j,k\u2208[d] E{J2 z,jk(Z, X, Y )}: By an application of the inequality (x \u2212y)2 \u22642x2 + 2y2, we have sup z\u2208[0,1] max j,k\u2208[d] E \b J2 z,jk(Z, X, Y ) \t = sup z\u2208[0,1] max j,k\u2208[d] E \u0014n J(1) z,jk(Z, X, Y ) \u2212J(2) z,jk(Z) o2\u0015 \u22642 sup z\u2208[0,1] max j,k\u2208[d] E \u0014n J(1) z,jk(Z, X, Y ) o2\u0015 | {z } I1 + 2 sup z\u2208[0,1] max j,k\u2208[d] E \u0014n J(2) z,jk(Z) o2\u0015 | {z } I2 . To obtain an upper bound for I1, we need an upper bound for sup z\u2208[0,1] max j,k\u2208[d] E{ max j\u2208Sj(z),k\u2208Sk(z) q2 z,jk}. Recall from (29) the de\ufb01nition of gz,jk(Zi, Xij, Yik) = Kh(Zi\u2212z)XijYik and that qz,jk(Zi, Xij, Yik) = gz,jk(Zi, Xij, Yik) \u2212E{gz,jk(Z, Xj, Yk)}. Thus, we have sup z\u2208[0,1] max j,k\u2208[d] E \u001a max j\u2208Sj(z),k\u2208Sk(z) q2 z,jk \u001b = sup z\u2208[0,1] max j,k\u2208[d] E \u0014 max j\u2208Sj(z),k\u2208Sk(z)\u2208[d] {gz,jk \u2212E(gz,jk)}2 \u0015 \u22642 sup z\u2208[0,1] max j,k\u2208[d] E \u001a max j\u2208Sj(z),k\u2208Sk(z)\u2208[d] g2 z,jk \u001b + 2 sup z\u2208[0,1] max j,k\u2208[d] E2(gz,jk), (111) where we apply the fact that (x\u2212y)2 \u22642x2 +2y2 to obtain the last inequality. By Lemma 2, 48 we have 2 sup z\u2208[0,1] max j,k\u2208[d] E2(gz,jk) \u22642 \b \u00af fZ \u00b7 M\u03c3 + O(h2) \t2. Moreover, we have 2 sup z\u2208[0,1] max j,k\u2208[d] E \u001a max j\u2208Sj(z),k\u2208Sk(z)\u2208[d] g2 z,jk \u001b = 2 sup z\u2208[0,1] max j,k\u2208[d] E \u001a max j\u2208Sj(z),k\u2208Sk(z)\u2208[d] K2 h(Z \u2212z)X2 j Y 2 k \u001b \u22642 \u00b7 M 4 X \u00b7 log2(2s) sup z\u2208[0,1] max j,k\u2208[d] E \b K2 h(Z \u2212z) \t \u22642 \u00b7 M 4 X \u00b7 log2(2s) \u00b7 \u001a1 h \u00b7 \u00af fZ \u00b7 \u2225K\u22252 2 + O(1) + O(h2) \u001b \u22643 \u00b7 \u00af fZ \u00b7 \u2225K\u22252 2 \u00b7 M 4 X \u00b7 log2(2s) h , with probability at least 1 \u22121/(2s), where the second inequality follows from an application of Lemma 2. 
Thus, by Holder\u2019s inequality, we have I1 \u22642 \u00b7 h \u00b7 sup z\u2208[0,1] max j,k\u2208[d] E \"\u001a \u2225\u0398j(z)\u22251 \u00b7 \u2225\u0398k(z)\u22251 \u00b7 max j\u2208Sj(z),k\u2208Sk(z) |qz,jk| \u001b2# \u22642 \u00b7 h \u00b7 M 4 \u00b7 sup z\u2208[0,1] max j,k\u2208[d] E \u001a max j\u2208Sj(z),k\u2208Sk(z) q2 z,jk \u001b \u22642 \u00b7 h \u00b7 M 4 \u00b7 \u0014 3 \u00b7 \u00af fZ \u00b7 \u2225K\u22252 2 \u00b7 M 4 X \u00b7 log2(2s) h + 2 \b \u00af fZ \u00b7 M\u03c3 + O(h2) \t2 \u0015 \u22648 \u00b7 M 4 \u00b7 \u00af fZ \u00b7 M 4 X \u00b7 \u2225K\u22252 2 \u00b7 log2(2s), (112) where the second inequality holds by the fact that \u0398(z) \u2208Us,M. Similarly, to obtain an upper bound for I2, we use the fact from (69) that sup z\u2208[0,1] E \b k2 z \t \u22643 h \u00b7 \u00af fZ \u00b7 \u2225K\u22252 2. (113) By Holder\u2019s inequality, we have I2 \u22642 \u00b7 h \u00b7 sup z\u2208[0,1] max j,k\u2208[d] E \"\u001a \u2225\u0398j(z)\u22251 \u00b7 \u2225\u0398k(z)\u22251 \u00b7 max (j,k)\u2208E(z) |\u03a3jk(z)| \u00b7 |kz| \u001b2# \u22642 \u00b7 h \u00b7 M 4 \u00b7 M 2 \u03c3 \u00b7 sup z\u2208[0,1] E \u0000k2 z \u0001 \u22646 \u00b7 M 2 \u03c3 \u00b7 M 4 \u00b7 \u00af fZ \u00b7 \u2225K\u22252 2, (114) where the second inequality holds by Assumption 2 and by the fact that \u0398(z) \u2208Us,M, and the last inequality holds by (113). 49 Combining the upper bounds for I1 (112) and I2 (114), we have sup z\u2208[0,1] max j,k\u2208[d] E \b J2 z,jk(Z, X, Y ) \t \u22648\u00b7M 4 \u00b7 \u00af fZ \u00b7\u2225K\u22252 2 \u00b7 \b M 2 \u03c3 + M 4 X \u00b7 log2(2s) \t \u2264C \u00b7log2 s = \u03c32 J, (115) for su\ufb03ciently large C > 0. Covering number of the function class J : First, we note that the function class J is generated from the addition of two function classes J (1) jk = n J(1) z,jk | z \u2208[0, 1] o and J (2) jk = n J(2) z,jk | z \u2208[0, 1] o . Thus, to obtain the covering number of J , we \ufb01rst obtain the covering number for the function classes J (1) jk and J (2) jk . Then, we apply Lemma 15 to obtain the covering number of the function class J . From Lemma 18, we have with probability at least 1 \u22121/d, N{J (1) jk , L2(Q), \u03f5} \u2264C \u00b7 d5/4 \u00b7 log3/2 d \u221a h \u00b7 \u03f5 !6 . Moreover, from Lemma 19, we have N{J (2) jk , L2(Q), \u03f5} \u2264C \u00b7 \u0012 d1/6 h4/3 \u00b7 \u03f5 \u00136 . Applying Lemma 15 with a1 = d5/4 \u00b7 log3/2 d/h1/2, v1 = 6, a2 = d1/6/h4/3, and v2 = 6, we have N{J , L2(Q), \u03f5} \u2264C \u00b7 d2 \u00b7 d17/24 \u00b7 log3/4 d h11/12 \u00b7 \u03f5 !12 , (116) where we multiply d2 on the right hand side since the function class J is taken over all j, k \u2208[d]. Application of Theorem A.1 in Chernozhukov et al. (2014a): Applying Theorem A.1 in Chernozhukov et al. (2014a) with a = d65/24 \u00b7 log7/4 d/h17/12, b = C \u00b7 log s/ \u221a h, \u03c3J = C \u00b7 log s, and Kn = A \u00b7 {log n \u2228log(ab/\u03c3J)} = C \u00b7 log(d/h), for su\ufb03ciently large constant A, C > 0, there exists a random process W such that for any \u03b3 \u2208(0, 1), P \" |T00 \u2212W| \u2265C \u00b7 ( bKn (\u03b3n)1/2 + (b\u03c3J)1/2K3/4 n \u03b31/2n1/4 + b1/3\u03c32/3 J K2/3 n \u03b31/3n1/6 )# \u2264C\u2032 \u00b7 \u0012 \u03b3 + log n n \u0013 50 for some absolute constant C\u2032. Picking \u03b3 = \b log6 s \u00b7 log4(d/h)/(nh) \t1/8, we have P \" |T00 \u2212W| \u2265C \u00b7 \u001alog6 s \u00b7 log4(d/h) nh \u001b1/8# \u2264C\u2032 \u00b7 \u001alog6 s \u00b7 log4(d/h) nh \u001b1/8 , as desired. 
F.4 Proof of Lemma 8 Recall from the proof of Lemma 7 that T00 = sup z\u2208[0,1] max (j,k)\u2208E(z) \f \f \f \f \f \f 1 \u221an X i\u2208[n] Jz,jk(Zi, Xi, Yi) \f \f \f \f \f \f . We note that T B 00 = sup z\u2208[0,1] max (j,k)\u2208E(z) \f \f \f \f \f \f 1 \u221an X i\u2208[n] Jz,jk(Zi, Xi, Yi) \u00b7 \u03bei \f \f \f \f \f \f , where \u03bei i.i.d. \u223cN(0, 1). To show that the term |W \u2212T B 00| can be controlled, we apply Theorem A.2 in Chernozhukov et al. (2014a). Let \u03c8n = r \u03c32 JKn n + \u0012b2\u03c32 JK3 n n \u00131/4 and \u03b3n(\u03b4) = 1 \u03b4 \u0012b2\u03c32 JK3 n n \u00131/4 + 1 n, as de\ufb01ned in Theorem A.2 in Chernozhukov et al. (2014a). From the proof of Lemma 7, we have b = C \u00b7 log s/ \u221a h, Kn = C \u00b7 log(d/h), and \u03c3J = C \u00b7 log s. Since b2Kn = C \u00b7 log2 s \u00b7 log(d/h)/h \u2264n \u00b7 log2 s for su\ufb03ciently large n, there exists a constant C\u2032\u2032 > 0 such that P h |T B 00 \u2212W| > \u03c8n + \u03b4 \f \f \f {(Zi, Xi, Yi)}i\u2208[n] i \u2264C\u2032\u2032 \u00b7 \u03b3n(\u03b4), with probability at least 1 \u22123/n. Choosing \u03b4 = \b log4(s) \u00b7 log3(d/h)/(nh) \t1/8, we have P \" |T B 00 \u2212W| > C \u00b7 \u001alog4(s) \u00b7 log3(d/h) nh \u001b1/8 \f \f \f {(Zi, Xi, Yi)}i\u2208[n] # \u2264C\u2032\u2032\u00b7 \u001alog4(s) \u00b7 log3(d/h) nh \u001b1/8 , with probability at least 1 \u22123/n. 51 F.5 Proof of Lemma 9 In this section, we show that |T B \u2212T B 00| is upper bounded by the quantity C \u00b7 \uf8f1 \uf8f2 \uf8f3s \u00b7 q h3 log3(d/h) + s \u00b7 s log4(d/h) nh2 + p h5 log n \uf8fc \uf8fd \uf8fe with high probability for su\ufb03ciently large constant C > 0. Throughout the proof of this lemma, we conditioned on the data {(Zi, Xi, Yi)}i\u2208[n]. By the triangle inequality, we have |T B \u2212T B 00| \u2264|T B \u2212T B 0 | + |T B 0 \u2212T B 00|. Thus, it su\ufb03ces to obtain upper bounds for the terms |T B \u2212T B 0 | and |T B 0 \u2212T B 00|. Upper bound for |T B \u2212T B 0 |: Recall from (79) and (82) that T B = sup z\u2208[0,1] max (j,k)\u2208E(z) \u221a nh \u00b7 \f \f \f \f \f \f \f P i\u2208[n] n b \u0398j(z) oT Kh(Zi \u2212z) n XiY T i b \u0398k(z) \u2212ek o \u03bei/n n b \u0398j(z) oT b \u03a3j(z) \f \f \f \f \f \f \f , and that T B 0 = sup z\u2208[0,1] max (j,k)\u2208E(z) \u221a nh \u00b7 \f \f \f \f \f \f X i\u2208[n] h {\u0398j(z)}T Kh(Zi \u2212z) \b XiY T i \u2212\u03a3(z) \t \u0398k(z) i \u03bei/n \f \f \f \f \f \f , 52 respectively. 
Using the triangle inequality, we have |T B \u2212T B 0 | \u2264 \u221a nh \u00b7 \f \f \f \f \f sup z\u2208[0,1] max (j,k)\u2208E(z) \" 1 n X i\u2208[n] n b \u0398j(z) oT Kh(Zi \u2212z) n XiY T i b \u0398k(z) \u2212ek o / n b \u0398j(z) oT b \u03a3j(z) \u22121 n X i\u2208[n] {\u0398j(z)}T Kh(Zi \u2212z) \b XiY T i \u2212\u03a3(z) \t \u0398k(z) # \u03bei \f \f \f \f \f \u22642 \u221a nh \u00b7 \f \f \f \f \f \f sup z\u2208[0,1] max (j,k)\u2208E(z) 1 n X i\u2208[n] n b \u0398j(z) \u2212\u0398j(z) oT Kh(Zi \u2212z) \b XiY T i \u2212\u03a3(z) \t \u0398k(z)\u03bei \f \f \f \f \f \f | {z } I1 + 2 \u221a nh \u00b7 \f \f \f \f \f \f sup z\u2208[0,1] max (j,k)\u2208E(z) 1 n X i\u2208[n] {\u0398j(z)}T Kh(Zi \u2212z)XiY T i \u0010 b \u0398k(z) \u2212\u0398k(z) \u0011 \u03bei \f \f \f \f \f \f | {z } I2 + 2 \u221a nh \u00b7 \f \f \f \f \f \f sup z\u2208[0,1] max (j,k)\u2208E(z) 1 n X i\u2208[n] {\u0398j(z)}T Kh(Zi \u2212z) \b XiY T i \u2212\u03a3(z) \t \u0398k(z)\u03bei \f \f \f \f \f \f \u00b7 \f \f \f \f n b \u0398j(z) oT b \u03a3j(z) \u22121 | {z } I3 (117) where the second inequality holds by another application of the triangle inequality and inequality in (99). We now obtain upper bounds for I1, I2, and I3. Upper bound for I1: By an application of Holder\u2019s inequality, we have I1 \u2264sup z\u2208[0,1] max j,k\u2208[d] \r \r \r b \u0398j(z) \u2212\u0398j(z) \r \r \r 1 \u00b7 \u2225\u0398k(z)\u22251 \u00b7 \u221a nh \u00b7 \f \f \f \f \f \f sup z\u2208[0,1] max j,k\u2208[d] 1 n X i\u2208[n] {Kh(Zi \u2212z)XijYik \u2212Kh(Zi \u2212z)\u03a3jk(z)} \u2264M \u00b7 C \u00b7 s \u00b7 ( h2 + r log(d/h) nh ) \u00b7 \u221a nh \u00b7 \f \f \f \f \f \f sup z\u2208[0,1] max j,k\u2208[d] 1 n X i\u2208[n] {Kh(Zi \u2212z)XijYik \u2212Kh(Zi \u2212z)\u03a3jk(z)} \u03bei \f \f \f \f \f \f , (118) where the last inequality follows from the fact that \u0398(z) \u2208Us,M and by an application of Corollary 1. For notational convenience, we use the notation as de\ufb01ned in (36) Wz,jk(Zi, Xij, Yik) = \u221a h \u00b7 {Kh(Zi \u2212z)XijYik \u2212Kh(Zi \u2212z)\u03a3jk(z)} . (119) Then, we have r h n X i\u2208[n] {Kh(Zi \u2212z)XijYik \u2212Kh(Zi \u2212z)\u03a3jk(z)} \u03bei = 1 \u221an X i\u2208[n] Wz,jk(Zi, Xij, Yik) \u00b7 \u03bei. 53 We note that conditioned on the data {(Zi, Xi, Yi)}i\u2208[n], the above expression is a Gaussian process. It remains to bound the supreme of the Gaussian process 1 \u221an X i\u2208[n] Wz,jk(Zi, Xij, Yik) \u00b7 \u03bei \u223cN \uf8f1 \uf8f2 \uf8f30, 1 n X i\u2208[n] W 2 z,jk(Zi, Xij, Yik) \uf8fc \uf8fd \uf8fe in probability. To this end, we apply the Dudley\u2019s inequality (see, e.g., Corollary 2.2.8 in Van Der Vaart & Wellner 1996) and the Borell\u2019s inequality (see, e.g., Proposition A.2.1 in Van Der Vaart & Wellner 1996), which involves the following quantities: \u2022 upper bound on the conditional variance P i\u2208[n] W 2 z,jk(Zi, Xij, Yik)/n; \u2022 the covering number of the function class W = {Wz,jk(\u00b7) | z \u2208[0, 1], j, k \u2208[d]} under the L2 norm on the empirical measure. 
Upper bound for the conditional variance Pn i=1 W 2 z,jk(Zi, Xij, Yik)/n : By the de\ufb01nition of Wz,jk(Zi, Xij, Yik) in (119), we have 1 n n X i=1 W 2 z,jk(Zi, Xij, Yik) = h n \u00b7 X i\u2208[n] {Kh(Zi \u2212z)XijYik \u2212Kh(Zi \u2212z)\u03a3jk(z)}2 \u2264h \u00b7 max i\u2208[n] {Kh(Zi \u2212z)XijYik \u2212Kh(Zi \u2212z)\u03a3jk(z)}2 \u22642h \u00b7 max i\u2208[n] \b K2 h(Zi \u2212z)X2 ijY 2 ik + K2 h(Zi \u2212z)\u03a32 jk(z) \t \u22642h \u00b7 \u0012 1 h2 \u00b7 \u2225K\u22252 \u221e\u00b7 M 4 X \u00b7 log2 d + 1 h2 \u00b7 \u2225K\u22252 \u221e\u00b7 M 2 \u03c3 \u0013 \u2264C \u00b7 log2 d h , (120) with probability at least 1 \u22121/d. Note that the second inequality holds by the fact that (x \u2212y)2 \u22642x2 + 2y2, and the third inequality holds by (13) and Assumption 2, and the fact that max(Xij, Yij) \u2264MX \u00b7 \u221alog d with probability at least 1 \u22121/d. Covering number of the function class W: To obtain the covering number of the function class W under the L2 norm on the empirical measure, it su\ufb03ces to obtain the covering number sup Q N{W, L2(Q), \u03f5}. First, we note that Wz,jk = \u221a h\u00b7{gz,jk \u2212wz \u00b7 \u03a3jk(z)}. 54 From Lemma 16, we have K1 = {wz(\u00b7) | z \u2208[0, 1]} and that sup Q N{K1, L2(Q), \u03f5} \u2264 \u00122 \u00b7 CK \u00b7 \u2225K\u2225TV h\u03f5 \u00134 . Also, From Lemma 17, we have G1,jk = {gz,jk(\u00b7) | z \u2208[0, 1]} and that sup Q N{G1,jk, L2(Q), \u03f5} \u2264 \u00122 \u00b7 M 2 X \u00b7 log d \u00b7 CK \u00b7 \u2225K\u2225TV h\u03f5 \u00134 . Moreover, by Assumption 2, \u03a3jk(z) is M\u03c3-Lipschitz. Thus, applying Lemmas 14 and 15, we obtain sup Q N{W, L2(Q), \u03f5} \u2264222\u00b7M\u03c3\u00b7M 8 X\u00b7C8 K\u00b7\u2225K\u22258 TV\u00b7\u2225K\u22255 \u221e\u00b7d2\u00b7 log4/9 d h17/18\u03f5 !9 = C\u00b7d2\u00b7 log4/9 d h17/18\u03f5 !9 , (121) where the term d2 appear on the right hand side because the function class W is over j, k \u2208[d]. Applying Dudley\u2019s inequality and Borell\u2019s inequality: Applying Dudley\u2019s inequality (see Corollary 2.2.8 in Van Der Vaart & Wellner 1996) with (120) and (121), we have E \uf8f1 \uf8f2 \uf8f3sup z\u2208[0,1] max j,k\u2208[d] 1 \u221an X i\u2208[n] Wz,jk(Zi, Xij, Yik) \u00b7 \u03bei \uf8fc \uf8fd \uf8fe\u2264C \u00b7 Z C\u00b7 q log2 d h 0 v u u tlog d2/9 \u00b7 log4/9 d h17/18\u03f5 ! d\u03f5. Applying (39) with b1 = C \u00b7 q log2 d/h and b2 = d2/9 \u00b7 log4/9 d/h17/18, we have E \uf8f1 \uf8f2 \uf8f3sup z\u2208[0,1] max j,k\u2208[d] 1 \u221an X i\u2208[n] Wz,jk(Zi, Xij, Yik) \u00b7 \u03bei \uf8fc \uf8fd \uf8fe\u2264C \u00b7 s log3(d/h) h , (122) for some su\ufb03ciently large C > 0. By Borell\u2019s inequality (see Proposition A.2.1 in Van Der Vaart & Wellner 1996), for \u03bb > 0, we have P \uf8ee \uf8f0 \f \f \f \f \f \f sup z\u2208[0,1] max j,k\u2208E(z) 1 \u221an X i\u2208[n] Wz,jk(Zi, Xij, Yik) \u00b7 \u03bei \f \f \f \f \f \f \u2265C \u00b7 s log3(d/h) h + \u03bb \f \f \f \f \f {(Zi, Xi, Yi)}i\u2208[n] \uf8f9 \uf8fb \u22642 \u00b7 exp \u0012 \u2212\u03bb2 2\u03c32 X \u0013 , where \u03c32 X is the upper bound on the conditional variance. Picking \u03bb = C \u00b7 q log3(d/h) h , we 55 have P \uf8ee \uf8f0 \f \f \f \f \f \f sup z\u2208[0,1] max j,k\u2208[d] 1 \u221an X i\u2208[n] Wz,jk(Zi, Xij, Yik) \u00b7 \u03bei \f \f \f \f \f \f \u2265C \u00b7 s log3(d/h) h \f \f \f \f \f {(Zi, Xi, Yi)}i\u2208[n] \uf8f9 \uf8fb\u22641 d. 
(123) Thus, substituting (123) into (118), we have I1 \u2264C \u00b7 M \u00b7 s \u00b7 ( h2 + r log(d/h) nh ) \u00b7 s log3(d/h) h \u2264C \u00b7 s \u00b7 q h3 log3(d/h) + C \u00b7 s \u00b7 s log4(d/h) nh2 , (124) with probability 1 \u22121/d. Upper bound for I2: By an application of Holder\u2019s inequality, we have I2 \u2264 \u221a nh \u00b7 sup z\u2208[0,1] max j,k\u2208[d] \r \r \r b \u0398j(z) \u2212\u0398j(z) \r \r \r 1 \u00b7 \r \r \r b \u0398k(z) \r \r \r 1 \u00b7 \f \f \f \f \f \f sup z\u2208[0,1] max j,k\u2208[d] 1 n X i\u2208[n] {Kh(Zi \u2212z)XijYik} \u03bei \f \f \f \f \f \f \u2264sup z\u2208[0,1] max j,k\u2208[d] h\r \r \r b \u0398k(z) \u2212\u0398k(z) \r \r \r 1 + \u2225\u0398k(z)\u22251 i \u00b7 C \u00b7 s \u00b7 ( h2 + r log(d/h) nh ) \u00d7 \u221a nh \u00b7 \f \f \f \f \f \f sup z\u2208[0,1] max j,k\u2208[d] 1 n X i\u2208[n] {Kh(Zi \u2212z)XijYik} \u03bei \f \f \f \f \f \f \u2264C \u00b7 M \u00b7 s \u00b7 ( h2 + r log(d/h) nh ) \u00b7 \u221a nh \u00b7 \f \f \f \f \f \f sup z\u2208[0,1] max j,k\u2208[d] 1 n X i\u2208[n] {Kh(Zi \u2212z)XijYik} \u03bei \f \f \f \f \f \f , (125) where the second inequality holds by triangle inequality and Corollary 1, and the last inequality holds by another application of Corollary 1 and the assumption that h2+ p log(d/h)/(nh) = o(1). Recall the de\ufb01nition of gz,jk(Zi, Xij, Yik) = Kh(Zi \u2212z)XijYik. Conditioned on the data {(Zi, Xi, Yi)}i\u2208[n], we note that r h n X i\u2208[n] {Kh(Zi \u2212z)XijYik} \u03bei = 1 \u221an X i\u2208[n] \u221a h\u00b7gz,jk(Zi, Xij, Yik)\u00b7\u03bei \u223cN \uf8f1 \uf8f2 \uf8f30, h n X i\u2208[n] g2 z,jk(Zi, Xij, Yik) \uf8fc \uf8fd \uf8fe. Similar to the upper bound for I1, we apply Dudley\u2019s inequality and Borell\u2019s inequality to bound the supreme of the Gaussian process in the last expression. To this end, we need to obtain an upper bound for the conditional covariance. By (74), 56 we have h n X i\u2208[n] g2 z,jk(Zi, Xij, Yik) \u22641 h \u00b7 M 4 X \u00b7 \u2225K\u22254 \u221e\u00b7 log2 d, (126) with probability at least 1 \u22121/d. In addition, by an application of Lemma 17, the covering number for the class of function { \u221a h \u00b7 gz,jk(\u00b7) | z \u2208[0, 1], j, k \u2208[d]} is sup Q N hn\u221a h \u00b7 gz,jk(\u00b7) | z \u2208[0, 1], j, k \u2208[d] o , L2(Q), \u03f5 i \u2264d2\u00b7 \u00122 \u00b7 M 2 X \u00b7 log d \u00b7 CK \u00b7 \u2225K\u2225TV \u221a h\u03f5 \u00134 . (127) By an application of Dudley\u2019s inequality, we have E \uf8f1 \uf8f2 \uf8f3sup z\u2208[0,1] max j,k\u2208[d] 1 \u221an X i\u2208[n] \u221a h \u00b7 gz,jk(Zi, Xij, Yik) \u00b7 \u03bei \uf8fc \uf8fd \uf8fe\u2264C\u00b7 Z q M4 X\u00b7\u2225K\u22254 \u221e\u00b7 log2 d h 0 s log \u0012d1/2 \u00b7 log d h1/2\u03f5 \u0013 d\u03f5. Applying (39) with b1 = q M 4 X \u00b7 \u2225K\u22254 \u221e\u00b7 log2 d/h and b2 = d1/2 \u00b7 log d/h1/2, we have E \uf8f1 \uf8f2 \uf8f3sup z\u2208[0,1] max j,k\u2208[d] 1 \u221an X i\u2208[n] \u221a h \u00b7 gz,jk(Zi, Xij, Yik) \u00b7 \u03bei \uf8fc \uf8fd \uf8fe\u2264C \u00b7 s log3(d/h) h . (128) By Borell\u2019s inequality (see Proposition A.2.1 in Van Der Vaart & Wellner 1996), we have P \uf8ee \uf8f0 \f \f \f \f \f \f sup z\u2208[0,1] max j,k\u2208[d] 1 \u221an X i\u2208[n] \u221a h \u00b7 gz,jk(Zi, Xij, Yik) \u00b7 \u03bei \f \f \f \f \f \f \u2265C \u00b7 s log3(d/h) h + \u03bb \f \f \f \f \f {(Zi, Xi, Yi)}i\u2208[n] \uf8f9 \uf8fb \u22642 \u00b7 exp \u0012 \u2212\u03bb2 2\u03c32 X \u0013 . 
Picking \u03bb = C \u00b7 q log3(d/h) h , we have P \uf8ee \uf8f0 \f \f \f \f \f \f sup z\u2208[0,1] max j,k\u2208[d] 1 \u221an X i\u2208[n] \u221a h \u00b7 gz,jk(Zi, Xij, Yik) \u00b7 \u03bei \f \f \f \f \f \f \u2265C \u00b7 s log3(d/h) h \f \f \f \f \f {(Zi, Xi, Yi)}i\u2208[n] \uf8f9 \uf8fb\u22641 d. (129) Thus, by (125) and (129), we have I2 \u2264C \u00b7 M \u00b7 s \u00b7 ( h2 + r log(d/h) nh ) \u00b7 s log3(d/h) h \u2264C \u00b7 s \u00b7 q h3 log3(d/h) + C \u00b7 s \u00b7 s log4(d/h) nh2 , (130) 57 with probability at least 1 \u22121/d. Upper bound for I3: By an application of Holder\u2019s inequality, we have I3 \u2264sup z\u2208[0,1] max j\u2208[d] \r \r \r \r n b \u0398j(z) oT b \u03a3(z) \u2212ej \r \r \r \r \u221e \r \r \r b \u0398k(z) \r \r \r 2 1 \u00b7 \u221a nh \u00b7 \f \f \f \f \f \f sup z\u2208[0,1] max j,k\u2208[d] 1 n X i\u2208[n] {Kh(Zi \u2212z)XijYik \u2212Kh(Zi \u2212z)\u03a3j \u2264M3 \u00b7 C \u00b7 s \u00b7 ( h2 + r log(d/h) nh ) \u221a nh \u00b7 \f \f \f \f \f \f sup z\u2208[0,1] max j,k\u2208[d] 1 n X i\u2208[n] {Kh(Zi \u2212z)XijYik \u2212Kh(Zi \u2212z)\u03a3jk(z)} \u03bei \f \f \f \f \f \f \u2264C \u00b7 M3 \u00b7 s \u00b7 ( h2 + r log(d/h) nh ) \u00b7 s log3(d/h) h \u2264C \u00b7 s \u00b7 q h3 log3(d/h) + C \u00b7 s \u00b7 s log4(d/h) nh2 , (131) where the second inequality holds by the fact that \u0398(z) \u2208Us,M and by an application of Corollary 1, and the third inequality holds by (123). Thus, combining (124), (130), and (131), we have |T B \u2212T B 0 | \u2264C \u00b7 s \u00b7 q h3 log3(d/h) + C \u00b7 s \u00b7 s log4(d/h) nh2 (132) with probability at least 1 \u22123/d. Upper bound for |T B 0 \u2212T B 00|: Recall from (83) that T B 00 = sup z\u2208[0,1] max (j,k)\u2208E(z) \u221a nh \u00b7 \f \f \f \f X i\u2208[n] \u0010 {\u0398j(z)}T Kh(Zi \u2212z) \b XiY T i \u2212\u03a3(z) \t \u0398k(z)/n \u2212{\u0398j(z)}T h E{Kh(Z \u2212z)XY T} \u2212E{Kh(Z \u2212z)}\u03a3(z) i \u0398k(z) \u0011 \u00b7 \u03bei/n \f \f \f \f. 58 By the triangle inequality, we have |T B 0 \u2212T B 00| \u2264 \u221a nh \u00b7 sup z\u2208[0,1] max (j,k)\u2208E(z) \f \f \f \f \f \f 1 n X i\u2208[n] \u0010 {\u0398j(z)}T \u0002 E \b Kh(Zi \u2212z)XiY T i \t \u2212E{Kh(Zi \u2212z)}\u03a3(z) \u0003 \u0398k(z) \u0011 \u00b7 \u03bei \f \f \f \f \f \f \u2264 \u221a nh \u00b7 sup z\u2208[0,1] max (j,k)\u2208E(z) \f \f \f{\u0398j(z)}T \u0002 E \b Kh(Z \u2212z)XY T \t \u2212E{Kh(Z \u2212z)}\u03a3(z) \u0003 \u0398k(z) \f \f \f \u00b7 \f \f \f \f \f \f 1 n X i\u2208[n] \u03bei \f \f \f \f \f \f \u2264 \u221a nh \u00b7 M2 \u00b7 C \u00b7 h2 \u00b7 \f \f \f \f \f \f 1 n X i\u2208[n] \u03bei \f \f \f \f \f \f , (133) where the last inequality holds by applying Holder\u2019s inequality and Lemma 2. Since \u03bei i.i.d. \u223c N(0, 1), by the Gaussian tail inequality, we have P \uf8eb \uf8ed \f \f \f \f \f \f 1 n X i\u2208[n] \u03bei \f \f \f \f \f \f > r 2 log n n \uf8f6 \uf8f8\u22641 n. Thus, substituting the above expression into (133), we obtain |T B 0 \u2212T B 00| \u2264 \u221a nh \u00b7 M 2 \u00b7 C \u00b7 h2 \u00b7 r 2 log n n \u2264C \u00b7 p h5 log n, (134) with probability at least 1 \u22121/n. 
Combining the upper bounds: Combining the upper bounds (132) and (134), and applying the union bound, we have P \uf8ee \uf8f0\f \fT B \u2212T B 00 \f \f \u2265C \u00b7 s \u00b7 q h3 log3(d/h) + C \u00b7 s \u00b7 s log4(d/h) nh2 + C \u00b7 p h5 log n \f \f \f \f \f {(Zi, Xi, Yi)}i\u2208[n] \uf8f9 \uf8fb \u2264P \uf8ee \uf8f0\f \fT B \u2212T B 0 \f \f + \f \fT B 0 \u2212T B 00 \f \f \u2265C \u00b7 s \u00b7 q h3 log3(d/h) + C \u00b7 s \u00b7 s log4(d/h) nh2 + C \u00b7 p h5 log n \f \f \f \f \f {(Zi, Xi, Yi)}i\u2208[n] \uf8f9 \uf8fb \u2264P \uf8ee \uf8f0\f \fT B \u2212T B 0 \f \f \u2265C \u00b7 s \u00b7 q h3 log3(d/h) + C \u00b7 s \u00b7 s log4(d/h) nh2 \f \f \f \f \f {(Zi, Xi, Yi)}i\u2208[n] \uf8f9 \uf8fb + P \" \f \fT B 0 \u2212T B 00 \f \f \u2265C \u00b7 p h5 log n \f \f \f \f \f {(Zi, Xi, Yi)}i\u2208[n] # \u22642/d + 1/n, as desired. 59 F.6 Lower Bound of the Variance We aim to show that the variance of Jz,jk de\ufb01ned in (35) is bounded from below. Lemma 10. Under the same conditions of Theorem 2, there exists a constant c > 0 such that infz minj,k Var(Jz,jk) \u2265c > 0. Proof. In this proof, we will apply Isserlis\u2019 theorem (Isserlis 1918). Given T \u223cN(0, \u03a3), Isserlis\u2019 theorem implies that for any vectors u, v \u2208Rd, E{(uTTTTv)2} = E{(uTT)2}E{(vTT)2} + 2{E(uTTvTT)}2 = (uT\u03a3u)(vT\u03a3v) + 2(uT\u03a3v)2 (135) According to the de\ufb01nition of Jz,jk in (35), it can be decomposed into Jz,jk(Zi, Xi, Yi) = J(1) z,jk(Z, Xi, Yi) \u2212J(2) z,jk(Zi). Recall that J(1) z,jk(Zi, Xi, Yi) = \u221a h \u00b7 {\u0398j(z)}T \u00b7 \u0002 Kh(Zi \u2212z)XiY T i \u2212E \b Kh(Z \u2212z)XY T\t\u0003 \u00b7 \u0398k(z), and J(2) z,jk(Zi) = \u221a h \u00b7 {\u0398j(z)}T \u00b7 [Kh(Zi \u2212z) \u2212E {Kh(Z \u2212z)}] \u00b7 \u03a3(z) \u00b7 \u0398k(z). We will calculate Var{J(2) z,jk(Z)}, Var{J(1) z,jk(Z, X, Y )}, and Cov{J(1) z,jk(Z, X, Y ), J(2) z,jk(Z)} separately. We \ufb01rst calculate Var{J(2) z,jk(Z)}. Following a similar method as the proof of Lemma 2, we have E{Kh(Z \u2212z)} = fZ(z) + O(h2) and E{K2 h(Z \u2212z)} = h\u22121fZ(z) R K2(u)du + O(1). This implies that Var{J(2) z,jk(Z)} = \u03982 jk(z) \u00b7 fZ(z) Z K2(u)du + O(h). (136) Next, we proceed to calculate the variance of J(1) z,jk(Z). By a change of variable and Taylor\u2019s expansion, we obtain \u0398j(z)TE \b Kh(Z \u2212z)\u03a3(Z)}\u0398k(z) = \u0398j(z)T \u001aZ K(u)\u03a3(z + uh)fZ(z + uh)du \u001b \u0398k(z) = \u0398j(z)T \u0014Z K(u){\u03a3(z) + uh \u02d9 \u03a3(z) + u2h2 \u00a8 \u03a3(z\u2032)}{fZ(z) + uh \u02d9 fZ(z) + u2h2 \u00a8 fZ(z)}du \u0015 \u0398k(z). (137) Note that each term in the integrant that involves R uK(u)du is equal to zero since R uK(u)du = 60 0 by assumption. For terms with \u03a3(z), we have \u0398j(z)T\u03a3(z)\u0398k(z) Z K(u){fZ(z) + uh \u02d9 fZ(z) + u2h2 \u00a8 fZ(z)}du = \u0398jk(z){fZ(z) + O(h2)}. For terms that involve \u02d9 \u03a3(z) and \u00a8 \u03a3(z\u2032), we have \u0398j(z)T \u02d9 \u03a3(z)\u0398k(z) \u2264M\u03c3\u2225\u0398j(z)\u22252\u2225\u0398k(z)\u22252 \u2264\u03c12M\u03c3 = O(1), since the maximum eigenvalue of \u0398(z) is bounded by \u03c1 by assumption. Thus, combining the above into (137), we have \u0398j(z)TE \b Kh(Z \u2212z)\u03a3(Z)}\u0398k(z) = \u0398jk(z)fZ(z) + O(h2). (138) Next, we bound the second moment. 
By the Isserlis\u2019 theorem in (135), and by taking the conditional expectation, we have E \u0002 K2 h(Z \u2212z){\u0398j(z)TXY T\u0398k(z)}2\u0003 = E \u0000K2 h(Z \u2212z)[{\u0398j(z)T\u03a3(Z)\u0398j(z)}{\u0398k(z)T\u03a3(Z)\u0398k(z)} + 2{\u0398j(z)T\u03a3(Z)\u0398k(z)}2] \u0001 . (139) Following a similar argument as in (138), we can derive E \u0002 K2 h(Z\u2212z){\u0398j(z)TXY T\u0398k(z)}2\u0003 = {\u0398jj(z)\u0398kk(z)+2\u03982 jk(z)}fZ(z)h\u22121 Z K2(u)du+O(1) (140) Thus, we have Var n J(1) z,jk(Z) o = {\u0398jj(z)\u0398kk(z) + 2\u03982 jk(z)}fZ(z) Z K2(u)du + O(h). (141) Now we begin to bound the Cov{J(1) z,jk(Z), J(2) z,jk(Z)}. By using a similar argument as (138), we have E \u0002 \u0398jk(z)K2 h(Z \u2212z){\u0398j(z)TXY T\u0398k(z)} \u0003 = \u03982 jk(z) \u00b7 h\u22121fZ(z) Z K2(u)du + O(1), (142) Combining with (142) and (138), and using the covariance formula, we have that Cov \b J(1) z,jk(Z), J(2) z,jk(Z) \t = \u03982 jk(z)fZ(z) Z K2(u)du + O(h). (143) 61 Using (136), (141) and (143), we have Var{Jz,jk(Z)} = Var(J(1) z,jk(Z)) + Var{J(2) z,jk(Z)} \u22122 Cov \b J(1) z,jk(Z), J(2) z,jk(Z) \t = {\u0398jj(z)\u0398kk(z) + \u03982 jk(z)}fZ(z) Z K2(u)du + O(h) \u2265\u03c12f Z, where the last inequality is because \u03c1 is smaller than the minimum eigenvalue of \u03a3(z) for any z \u2208[0, 1] and infz\u2208[0,1] fZ(z) \u2265f Z > 0 by Assumption 1. Since the lower bound above is uniformly true over z, j, k, the lemma is proven. G Proof of Theorem 5 In this section, we show that the proposed procedure in Algorithm 1 is able to control the type I error below a pre-speci\ufb01ed level \u03b1. We \ufb01rst de\ufb01ne some notation that will be used throughout the proof of Theorem 3. Let E\u2217(z) be the true edge set at Z = z. That is, E\u2217(z) is the set of edges induced by the true inverse covariance matrix \u0398(z). Recall from De\ufb01nition 4 that the critical edge set is de\ufb01ned as C{E(z), P} = {e | e \u0338\u2208E(z), there exists E\u2032(z) \u2287E(z) such that E\u2032(z) \u2208P and E\u2032(z)\\{e} / \u2208P}, (144) where P = {E \u2286V \u00d7 V | P(G) = 1} is the class of edge sets satisfying the graph property P. Suppose that Algorithm 1 rejects the null hypothesis at the Tth iteration. That is, there exists z0 \u2208[0, 1] such that ET(z0) \u2208P but ET\u22121(z0) / \u2208P. To prove Theorem 3, we state the following two lemmas on the properties of critical edge set. Lemma 11. Let ET(z0) \u2208P for some z0 \u2208[0, 1]. Then, at least one rejected edge in ET(z0) is in the critical edge set C{E\u2217(z0), P}. Lemma 12. Let \u00af e \u2208C{E\u2217(z0), P} be the \ufb01rst rejected edge in the critical edge set C{E\u2217(z0), P}. Suppose that \u00af e is rejected at the lth step of Algorithm 1. Then, C{E\u2217(z), P} \u2286C{El\u22121(z), P} for all z \u2208[0, 1]. The proofs of Lemmas 11 and 12 are deferred to Sections G.2 and G.3, respectively. We now provide the proof of Theorem 3. G.1 Proof of Theorem 3 Suppose that Algorithm 1 rejects the null hypothesis at the Tth iteration. That is, ET(z0) \u2208 P and ET\u22121(z0) / \u2208P. By Lemma 11, there is at least one edge in ET(z0) that is also in the critical edge set C{E\u2217(z0), P}. We denote the \ufb01rst rejected edge in the critical edge set as \u00af e, 62 i.e., \u00af e \u2208C{E\u2217(z0), P} and suppose that \u00af e is rejected at the lth iteration of Algorithm 1. We note that l is not necessarily T. 
Thus, we have sup z\u2208[0,1] max e\u2208C{E\u2217(z),P} \u221a nh \u00b7 b \u0398de e (z) \u00b7 X i\u2208[n] Kh(Zi \u2212z)/n \u2265 \u221a nh \u00b7 b \u0398de \u00af e (z0) \u00b7 X i\u2208[n] Kh(Zi \u2212z0)/n \u2265c{1 \u2212\u03b1, C(El\u22121, P)} \u2265c{1 \u2212\u03b1, C(E\u2217, P)}, where the \ufb01rst inequality follows by Lemma 11, the second inequality follows from the lth step of Algorithm 1, and the last inequality follows directly from Lemma 12. Under the null hypothesis, \u0398e(z) = 0 for any e \u2208C{E\u2217(z), P}. By Theorem 2, we have lim n\u2192\u221e sup \u0398(\u00b7)\u2208G0 P\u0398(\u00b7)(\u03c8\u03b1 = 1) \u2264lim n\u2192\u221e sup \u0398(\u00b7)\u2208G0 P \uf8ee \uf8f0sup z\u2208[0,1] max e\u2208C{E\u2217(z),P} \u221a nh \u00b7 | b \u0398de jk(z)| \u00b7 X i\u2208[n] Kh(Zi \u2212z)/n \u2265c{1 \u2212\u03b1, C(E\u2217, P)} \uf8f9 \uf8fb \u2264\u03b1, as desired. G.2 Proof of Lemma 11 To prove Lemma 11, it su\ufb03ces to show that the intersection between the two sets ET(z0) and C{E\u2217(z0), P} is not an empty set, i.e., ET(z0) \u2229C{E\u2217(z0), P} \u0338= \u2205. To this end, we let F = ET(z0) \u222aE\u2217(z0) and let ET(z0) \\ E\u2217(z0) = {e1, e2, . . . , ek}. We note that the set ET(z0) \\ E\u2217(z0) is not an empty set since ET(z0) \u2208P but E\u2217(z0) / \u2208P. Using the fact that P is monotone and that ET(z0) \u2208P, we have F \u2208P since adding additional edges to ET(z0) does not change the graph property of ET(z0). Then, we have E\u2217(z0) \u2286E\u2217(z0) \u222a{e1} \u2286E\u2217(z0) \u222a{e1, e2} \u2286\u00b7 \u00b7 \u00b7 \u2286E\u2217(z0) \u222a{e1, . . . , ek} = F. Since E\u2217(z0) / \u2208P and F \u2208P, there must exists an edge set {e1, . . . , ek0} for k0 \u2264k that changes the graph property of E\u2217(z0) from E\u2217(z0) / \u2208P to E\u2217(z0) \u222a{e1, . . . , ek0} \u2208P. Thus, there must exists at least an edge \u00af e \u2208{e1, . . . , ek0} such that \u00af e \u2208C{E\u2217(z0), P} since adding the set of edges {e1, . . . , ek0} changes the graph property of E\u2217(z0). Also, \u00af e \u2208ET(z0) by construction. Thus, we conclude that ET(z0) \u2229C{E\u2217(z0), P} \u0338= \u2205. G.3 Proof of Lemma 12 Let \u00af e \u2208C{E\u2217(z0), P} be the \ufb01rst rejected edge in the critical edge set C{E\u2217(z0), P} for some z0 \u2208[0, 1]. Suppose that \u00af e is rejected at the lth step of Algorithm 1. We want to show that C{E\u2217(z), P} \u2286C{El\u22121(z), P} for all z \u2208[0, 1]. It su\ufb03ces to show that C{E\u2217(z0), P} \u2286 63 C{El\u22121(z0), P}. In other words, we want to prove that for any e\u2032 \u2208C{E\u2217(z0), P}, e\u2032 \u2208 C{El\u22121(z0), P}. We \ufb01rst note the following fact El\u22121(z0) \u2229C{E\u2217(z0), P} = \u2205 and El\u22121(z0) / \u2208P. (145) By the de\ufb01nition of the critical edge set (144), we construct a set E\u2032 such that E\u2217(z0) \u2287 E\u2032, E\u2032 \u2208P, and E\u2032 \\ {e\u2032} / \u2208P, for any e\u2032 \u2208C{E\u2217(z0), P}. By the de\ufb01nition of monotone property, we have E\u2032 \u222aEl\u22121(z0) \u2208P. Since C{E\u2032 \u222aEl\u22121(z0), P} \u2286C{El\u22121(z0), P}, to show that e\u2032 \u2208C{El\u22121(z0), P}, it is equivalent to showing e\u2032 \u2208C{E\u2032 \u222aEl\u22121(z0), P}. That is, we want to show {E\u2032 \u222aEl\u22121(z0)} \\ {e\u2032} / \u2208P. This is equivalent to showing {E\u2032 \\ e\u2032} \u222a{El\u22121(z0) \\ E\u2032} / \u2208P. 
(146) There are two cases: (1) El\u22121(z0) \\ E\u2032 = \u2205and (2) El\u22121(z0) \\ E\u2032 \u0338= \u2205. For the \ufb01rst case, (146) is true by the construction of E\u2032. For the second case, we prove by contradiction. Suppose that (E\u2032 \\ e\u2032) \u222a{El\u22121(z0) \\ E\u2032} \u2208P. Let El\u22121(z0) \\ E\u2032 = {e\u2032 1, . . . ,\u2032 k }. By the de\ufb01nition of monotone property, we have E\u2032\\{e\u2032} \u2286(E\u2032\\{e\u2032})\u222a{e\u2032 1} \u2286\u00b7 \u00b7 \u00b7 \u2286(E\u2032\\{e\u2032})\u222a{e\u2032 1, e\u2032 2, . . . , e\u2032 k} = (E\u2032\\{e\u2032})\u222a(El\u22121(z0)\\E\u2032). Since E\u2032 \\ {e\u2032} / \u2208P by construction, and that (E\u2032 \\ {e\u2032}) \u222a(El\u22121(z0) \\ E\u2032) \u2208P, there must exists an edge set {e1, . . . , ek0} for k0 \u2264k that changes the graph property of E\u2032 \\ {e\u2032} / \u2208P to (E\u2032 \\ {e\u2032}) \u222a{e\u2032 1, . . . , e\u2032 k0} \u2208P. Since e\u2032 k0 \u2208El\u22121(z0) \\ E\u2032 and that E\u2217(z0) \u2286E\u2032 by construction, we have e\u2032 k0 / \u2208E\u2217(z0). Thus, e\u2032 k0 \u2208C{E\u2217(z0), P}. This contradicts the fact that El\u22121(z0) \u2229C{E\u2217(z0), P} = \u2205. H Proof of Theorem 6 By the de\ufb01nition in (25), if \u0398(\u00b7) \u2208G1(\u03b8; P), there exists an edge set E\u2032 0 and z0 \u2208[0, 1] satisfying E\u2032 0 \u2286E{\u0398(z0)}, P(E\u2032 0) = 1 and min e\u2208E\u2032 0 |\u0398e(z0)| > C p log(d/h)/nh, (147) and we will determine the magnitude tf constant C later. We aim to show that P{E\u2032 0 \u2229 C(\u2205, P)} = P(E\u2032 0) = 1. First, there exists a subgraph E\u2032\u2032 0 \u2282E\u2032 0 such that P(E\u2032\u2032 0) = P(E\u2032 0) = 1 and for any e E \u2282E\u2032\u2032 0, P( e E) = 0. We can construct such E\u2032\u2032 0 by deleting edges from E\u2032 0 until it is impossible to further deleting any edge such that the property P is still true. By De\ufb01nition 4, E\u2032\u2032 0 \u2286C(\u2205, P) and therefore E\u2032\u2032 0 \u2286E\u2032 0 \u2229C(\u2205, P). By monotone property, we have 64 P{E\u2032 0 \u2229C(\u2205, P)} = P(E\u2032 0) = 1 since P(E\u2032\u2032 0) = P(E\u2032 0) = 1. Consider the following event E1 = h min e\u2208E\u2032 0\u2229C(\u2205,P) \u221a nh| b \u0398de e (z0)| \u00b7 X i\u2208[n] Kh(Zi \u2212z0)/n > c{1 \u2212\u03b1, C(\u2205, P)} i . According to Algorithm 1, the rejected set in the \ufb01rst iteration at z0 is E1(z0) = h e \u2208C(\u2205, P) : \u221a nh| b \u0398de e (z0)| \u00b7 X i\u2208[n] Kh(Zi \u2212z0)/n > c{1 \u2212\u03b1, C(\u2205, P)} i . Under the event E1, we have E\u2032 0 \u2229C(\u2205, P) \u2286E1(z0) and since P{E\u2032 0 \u2229C(\u2205, P)} = P(E\u2032 0), we have P{E1(z0)} = P(E\u2032 0) = 1. Therefore, P(\u03c8\u03b1 = 1) \u2265P(E1). (148) It su\ufb03ces to bound P(E1) then. We consider two events E2 = h min e\u2208E\u2032 0 \u221a nh|\u0398e(z0)| \u00b7 X i\u2208[n] Kh(Zi \u2212z0)/n > 2c{1 \u2212\u03b1, C(\u2205, P)} i ; E3 = h max e\u2208V \u00d7V \u221a nh| b \u0398de e (z0) \u2212\u0398e(z0)| \u00b7 X i\u2208[n] Kh(Zi \u2212z0)/n \u2264c{1 \u2212\u03b1, C(\u2205, P)} i . We have P(E1) \u2265P(E2 \u222aE3). By Lemmas 2 and 3, we have P n supz \f \f \f P i\u2208[n] Kh(Zi \u2212z)/n \u2212fZ(z) \f \f \f > p log(d/h)/nh o < 3/d. 
(149) Combining with (15) in Corollary 1, we have with probability at least 1 \u22126/d, sup z max e\u2208V \u00d7V \u221a nh| b \u0398de e (z) \u2212\u0398e(z)| \u00b7 X i\u2208[n] Kh(Zi \u2212z)/n \u2264C p log(d/h)/nh \u00b7 \u221a nh. For any \ufb01xed \u03b1 \u2208(0, 1) and su\ufb03ciently large d, n, as C(\u2205, P) \u2286V \u00d7 V , we have c{1 \u2212\u03b1, C(\u2205, P)} \u2264c(1 \u2212\u03b1, V \u00d7 V ) \u2264C p log(d/h)/nh \u00b7 \u221a nh. Thus P(E3) > 1 \u22126/d. Similarly, we also have P(E2) > 1 \u22123/d. By (148), we have P(\u03c8\u03b1 = 1) \u2265P(E1) \u2265P(E2 \u222aE3) \u22651 \u22129/d. Therefore, we complete the proof of the theorem. 65 I Technical Lemmas on Covering Number In this section, we present some technical lemmas on the covering number of some function classes. Lemma 13 provides an upper bound on the covering number for the class of function of bounded variation. Lemma 14 provides an upper bound on the covering number of a class of Lipschitz function. Lemma 15 provides an upper bound on the covering numbers for function classes generated from the product and addition of two function classes. Lemma 13. (Lemma 3 in Gin\u00b4 e & Nickl 2009) Let K : R \u2192R be a function of bounded variation. De\ufb01ne the function class Fh = [K {(t \u2212\u00b7)/h)} | t \u2208R]. Then, there exists CK < \u221eindependent of h and of K such that for all 0 < \u03f5 < 1, sup Q N{Fh, L2(Q), \u03f5} \u2264 \u00122 \u00b7 CK \u00b7 \u2225K\u2225TV \u03f5 \u00134 , where \u2225K\u2225TV is the total variation norm of the function K. Lemma 14. Let f(l) be a Lipschitz function de\ufb01ned on [0, 1] such that |f(l) \u2212f(l\u2032)| \u2264Lf \u00b7 |l \u2212l\u2032| for any l, l\u2032 \u2208[0, 1]. We de\ufb01ne the constant function class F = {gl := f(l) | l \u2208[0, 1]}. For any probability measure Q, the covering number of the function class F satis\ufb01es N{F, L2(Q), \u03f5} \u2264Lf \u03f5 , where \u03f5 \u2208(0, 1). Proof. Let N = n i\u03f5 Lf | i = 1, . . . , Lf \u03f5 o . By de\ufb01nition of N, for any l \u2208[0, 1], there exists an l\u2032 \u2208N such that |l \u2212l\u2032| \u2264\u03f5/Lf. Thus, we have |f(l) \u2212f(l\u2032)| \u2264Lf \u00b7 |l \u2212l\u2032| \u2264\u03f5. This implies that {gl | l \u2208N} is an \u03f5-cover of the function class F. To complete the proof, we note that the cardinality of the set |N| \u2264Lf/\u03f5. Lemma 15. Let F1 and F2 be two function classes satisfying N{F1, L2(Q), a1\u03f5} \u2264C1\u03f5\u2212v1 and N{F2, L2(Q), a2\u03f5} \u2264C2\u03f5\u2212v2 for some C1, C2, a1, a2, v1, v2 > 0 and any 0 < \u03f5 < 1. De\ufb01ne \u2225F\u2113\u2225\u221e= sup f\u2208F\u2113 \u2225f\u2225\u221efor \u2113= 1, 2 and U = \u2225F1\u2225\u221e\u2228\u2225F2\u2225\u221e. For the function classes F\u00d7 = {f1f2 | f1 \u2208F1, f2 \u2208F2} and F+ = {f1 + f2 | f1 \u2208F1, f2 \u2208F2}, we have for any \u03f5 \u2208(0, 1), N{F\u00d7, L2(Q), \u03f5} \u2264C1 \u00b7 C2 \u00b7 \u00122a1U \u03f5 \u0013v1 \u00b7 \u00122a2U \u03f5 \u0013v2 66 and N{F+, L2(Q), \u03f5} \u2264C1 \u00b7 C2 \u00b7 \u00122a1 \u03f5 \u0013v1 \u00b7 \u00122a2 \u03f5 \u0013v2 . Lemma 16. Let wz(u) = Kh(u \u2212z). We de\ufb01ne the function classes K1 = {wz(\u00b7) | z \u2208[0, 1]} and K2 = [E{wz(Z)} | z \u2208[0, 1]] . Given Assumptions 1-2, we have for any \u03f5 \u2208(0, 1), sup Q N{K1, L2(Q), \u03f5} \u2264 \u00122 \u00b7 CK \u00b7 \u2225K\u2225TV h\u03f5 \u00134 and sup Q N{K2, L2(Q), \u03f5} \u22642 h\u03f5 \u00b7 \u2225K\u2225TV \u00b7 \u00af fZ. 
Moreover, let kz(u) = wz(u) \u2212E{wz(Z)} and let K = {kz(\u00b7) | z \u2208[0, 1]}. We have sup Q N{K, L2(Q), \u03f5} \u2264 4 \u00b7 \u2225K\u2225TV \u00b7 C4/5 K \u00b7 \u00af f 1/5 Z h\u03f5 !5 . Proof. The covering number for the function class K1 is obtained by an application of Lemma 13. To obtain the covering number for K2, we show that the constant function E{wz(Z)} is Lipschitz. The covering number is obtained by applying Lemma 14. Finally, we note that the function class K is generated from the addition of the two function classes K1 and K2. The covering number can be obtained by an application of Lemma 15. The details are deferred to I.1. Lemma 17. Let gz,jk(u, Xij, Yik) = Kh(u \u2212z)XijYik. We de\ufb01ne the function classes G1,jk = {gz,jk(\u00b7) | z \u2208[0, 1]} and G2,jk = [E{gz,jk(Z, Xj, Yk)} | z \u2208[0, 1]] . Given Assumptions 1-2, for all \u03f5 \u2208(0, 1), sup Q N{G1,jk, L2(Q), \u03f5} \u2264 \u00122 \u00b7 M 2 X \u00b7 log d \u00b7 CK \u00b7 \u2225K\u2225TV h\u03f5 \u00134 and sup Q N{G2,jk, L2(Q), \u03f5} \u22642 h\u03f5 \u00b7 \u2225K\u2225TV \u00b7 \u00af fZ \u00b7 M\u03c3, with probability at least 1 \u22121/d. Moreover, let qz,jk(u, Xij, Yik) = gz,jk(u, Xij, Yik) \u2212 67 E{gz,jk(Z, Xj, Yk)} and let Gjk = {qz,jk(\u00b7) | z \u2208[0, 1]}. We have sup Q N{Gjk, L2(Q), \u03f5} \u2264 4 \u00b7 \u2225K\u2225TV \u00b7 C4/5 K \u00b7 \u00af f 1/5 Z \u00b7 M 1/5 \u03c3 \u00b7 M 8/5 X \u00b7 log4/5 d h\u03f5 !5 . with probability at least 1 \u22121/d. Proof. The proof uses the same set of argument as in the proof of Lemma 16. The probability statement comes from the fact that we upper bound the random variable Xj by MX \u00b7 \u221alog d for some constant MX > 0. The details are deferred to I.2. Lemma 18. Let J(1) z,jk(u, Xi, Yi) = \u221a h\u00b7{\u0398j(z)}T\u00b7 \u0002 Kh(u \u2212z)XiY T i \u2212E \b Kh(Z \u2212z)XY T\t\u0003 \u00b7 \u0398k(z) and let J (1) jk = {J(1) z,jk | z \u2208[0, 1]}. Given Assumptions 1-2, for all \u03f5 \u2208(0, 1) sup Q N{J (1) jk , L2(Q), \u03f5} \u2264C \u00b7 d5/4 \u00b7 log3/2 d \u221a h \u00b7 \u03f5 !6 , with probability at least 1 \u22121/d, where C > 0 is a generic constant that does not depend on d, h, and n. Proof. The proof is deferred to I.3. Lemma 19. Let J(2) z,jk(u) = \u221a h \u00b7 {\u0398j(z)}T \u00b7 [Kh(u \u2212z) \u2212E {Kh(Z \u2212z)}] \u00b7 \u03a3(z) \u00b7 \u0398k(z) and let J (2) jk = {J(2) z,jk | z \u2208[0, 1]}. Given Assumptions 1-2, for all probability measures Q on R and all 0 < \u03f5 < 1, N{J (2) jk , L2(Q), \u03f5} \u2264C \u00b7 \u0012 d1/6 h4/3 \u00b7 \u03f5 \u00136 , where C > 0 is a generic constant that does not depend on d, h, and n. Proof. We \ufb01rst note that J (2) jk is a function class generated from the product of two function classes K as in Lemma 16 and \u0398jk = {\u0398jk(z) | z \u2208[0, 1]}. To obtain the covering number of \u0398jk, we show that the constant function \u0398jk(z) is Lipschitz and apply Lemma 14. We then apply Lemma 15 to obtain the covering number of J (2) jk . The details are deferred to I.4. I.1 Proof of Lemma 16 Let wz(u) = Kh(u \u2212z) and that kz(u) = wz(u) \u2212E{wz(Z)}. We \ufb01rst obtain the covering number for the function classes K1 = {wz(\u00b7) | z \u2208[0, 1]} and K2 = [E{wz(Z)} | z \u2208[0, 1]]. Then, we apply Lemma 15 to obtain the covering number of the function class K = {kz(\u00b7) | z \u2208[0, 1]}. 
68 Covering number for K1: By an application of Lemma 13, the covering number for K1 is sup Q N{K1, L2(Q), \u03f5} \u2264 \u00122 \u00b7 CK \u00b7 \u2225K\u2225TV h\u03f5 \u00134 . (150) Covering number for K2: First, note that E{wz(Z)} = R Kh(z \u2212Z)fZ(Z)dZ = (Kh \u2217fZ)(z) is a function of z generated by the convolution (Kh \u2217fZ)(z). By the property of the derivative of a convolution as in (38), we have sup z0\u2208[0,1] \f \f \f \f \u2202 \u2202zE{wz(Z)} \f \f \f z=z0 \f \f \f \f = sup z0\u2208[0,1] \f \f \f \u02d9 Kh \u2217fZ(z0) \f \f \f = \r \r \r( \u02d9 Kh \u2217fZ)(z) \r \r \r \u221e\u2264 \r \r \r \u02d9 Kh \r \r \r 1 \u00b7 \u2225fZ\u2225\u221e, (151) where the last expression is obtained by an application of Young\u2019s inequality. The expression in (151) depends on the quantity \u2225\u02d9 Kh\u22251, which is equal to the following expression \r \r \r \u02d9 Kh \r \r \r 1 = Z 1 h2 \f \f \f \f \u02d9 K \u0012Z \u2212z h \u0013\f \f \f \f dZ = 1 h Z \f \f \f \u02d9 K(u) \f \f \f du = 1 h \u00b7 \u2225K\u2225TV, (152) where the second inequality holds by a change of variable, and \u2225K\u2225TV is the total variation of the function K(\u00b7). Substituting (152) into (151) and by Assumption 1, we have sup z0\u2208[0,1] \f \f \f \f \u2202 \u2202zE{wz(Z)} \f \f \f z=z0 \f \f \f \f \u22641 h \u00b7 \u2225K\u2225TV \u00b7 \u00af fZ. (153) Thus, for any z1, z2 \u2208[0, 1], we have |E{wz1(Z)} \u2212E{wz2(Z)}| \u22641 h \u00b7 \u2225K\u2225TV \u00b7 \u00af fZ \u00b7 |z1 \u2212z2|, implying that E{wz(Z)} is a Lipschitz continuous function with Lipschitz constant h\u22121 \u00b7 \u2225K\u2225TV \u00b7 \u00af fZ. By Lemma 14, an upper bound for the covering number of K2 is sup Q N{K2, L2(Q), \u03f5} \u22642 h\u03f5 \u00b7 \u2225K\u2225TV \u00b7 \u00af fZ. (154) Covering number of the function class K: The function class K can be written as K = {f1 \u2212f2 | f1 \u2208K1, f2 \u2208K2}. By an application of Lemma 15 with C1 = (2\u00b7CK\u00b7\u2225K\u2225TV)4, C2 = 2\u00b7 \u00af fZ\u00b7\u2225K\u2225TV, a1 = a2 = h\u22121, v1 = 4, and v2 = 1, along with (150) and (154), we obtain sup Q N{K, L2(Q), \u03f5} \u2264 4 \u00b7 \u2225K\u2225TV \u00b7 C4/5 K \u00b7 \u00af f 1/5 Z h\u03f5 !5 . 69 I.2 Proof of Lemma 17 Throughout the proof, we condition on the event A = \u001a max i\u2208[n] max j\u2208[d] max(|Xij|, |Yij|) \u2264MX \u00b7 p log d \u001b . (155) Since X and Y conditioned on Z are Gaussian random variables, the event A occurs with probability at least 1 \u22121/d for su\ufb03ciently large constant MX > 0. Recall that gz,jk(u, Xij, Yik) = Kh(u \u2212z)XijYik and that qz,jk(u, Xij, Yik) = Kh(u \u2212 z)XijYik \u2212E{Kh(Z \u2212z)XjYk}. We \ufb01rst obtain the covering number of the function classes G1,jk = {gz,jk(\u00b7) | z \u2208[0, 1]} and G2,jk = [E{gz,jk(Z, Xj, Yk)} | z \u2208[0, 1]]. Then, we apply Lemma 15 to obtain the covering number of the function class Gjk = {qz,jk(\u00b7) | z \u2208[0, 1], j, k \u2208[d]}. Covering number for G1,jk: Conditioned on the event A in (155), we have gz,jk(u, Xij, Yik) = Kh(u \u2212z)XijYik \u2264M 2 X \u00b7 log d \u00b7 Kh(u \u2212z). By an application of Lemma 13, the covering number for G1,jk is sup Q N{G1,jk, L2(Q), \u03f5} \u2264 \u00122 \u00b7 M 2 X \u00b7 log d \u00b7 CK \u00b7 \u2225K\u2225TV h\u03f5 \u00134 . (156) Covering number for G2,jk: We now obtain the covering number for G2,jk by showing that the function E{gz,jk(Z, Xj, Yk)} is Lipschitz. 
First, note that E{gz,jk(Z, Xj, Yk)} = E{Kh(Z \u2212z) \u00b7 \u03a3jk(Z)} = Z Kh(z \u2212Z) \u00b7 \u03d5jk(Z)dZ = (Kh \u2217\u03d5jk)(z), where \u03d5jk(Z) = fZ(Z)\u00b7\u03a3jk(Z) and Kh \u2217\u03d5jk is the convolution between Kh and \u03d5jk. Similar 70 to (151)-(153), we have sup z0\u2208[0,1] max j,k\u2208[d] \f \f \f \f \u2202 \u2202zE{gz,jk(Z, Xj, Yk)} \f \f \f z=z0 \f \f \f \f = sup z0\u2208[0,1] max j,k\u2208[d] \f \f \f( \u02d9 Kh \u2217\u03d5jk)(z0) \f \f \f = max j,k\u2208[d] \r \r \r( \u02d9 Kh \u2217\u03d5jk)(z) \r \r \r \u221e \u2264 \r \r \r \u02d9 Kh \r \r \r 1 \u00b7 max j,k\u2208[d] \u2225\u03d5jk\u2225\u221e \u22641 h \u00b7 \u2225K\u2225TV \u00b7 \u00af fZ \u00b7 M\u03c3, (157) where the \ufb01rst inequality is obtained by an application of Young\u2019s inequality, and the last expression is obtained by (152) and Assumptions 1-2. Equation 157 implies that for any z1, z2 \u2208[0, 1], |E{gz1,jk(Z, Xj, Yk)} \u2212E{gz2,jk(Z, Xj, Yk)}| \u22641 h \u00b7 \u2225K\u2225TV \u00b7 \u00af fZ \u00b7 M\u03c3 \u00b7 |z1 \u2212z2|, implying that E{gz,jk(Z, Xj, Yk)} is a Lipschitz continuous function with Lipschitz constant h\u22121 \u00b7 \u2225K\u2225TV \u00b7 \u00af fZ \u00b7 M\u03c3. By an application of Lemma 14, we have sup Q N{G2,jk, L2(Q), \u03f5} \u22642 h\u03f5 \u00b7 \u2225K\u2225TV \u00b7 \u00af fZ \u00b7 M\u03c3. (158) Covering number of the function class Gjk: The function class Gjk can be written as Gjk = {f1,jk \u2212f2,jk | f1,jk \u2208G1,jk, f2,jk \u2208G2,jk, j, k \u2208[d]}. By an application of Lemma 15 with C1 = (2 \u00b7 CK \u00b7 \u2225K\u2225TV \u00b7 M 2 X)4, C2 = 2 \u00b7 \u00af fZ \u00b7 \u2225K\u2225TV \u00b7 M\u03c3, a1 = h\u22121 \u00b7 log d, a2 = h\u22121, v1 = 4, and v2 = 1, along with (156) and (158), we obtain sup Q N{Gjk, L2(Q), \u03f5} \u2264 4 \u00b7 \u2225K\u2225TV \u00b7 C4/5 K \u00b7 \u00af f 1/5 Z \u00b7 M 1/5 \u03c3 \u00b7 M 8/5 X \u00b7 log4/5 d h\u03f5 !5 , (159) as desired. I.3 Proof of Lemma 18 Similar to the proof of Lemma 17, we condition on the event A = \u001a max i\u2208[n] max j\u2208[d] max(|Xij|, |Yij|) \u2264MX \u00b7 p log d \u001b . The event A holds with probability at least 1 \u22121/d. Recall that J(1) z,jk(u, Xi, Yi) = \u221a h\u00b7{\u0398j(z)}T \u00b7 \u0002 Kh(u \u2212z)XiY T i \u2212E \b Kh(Z \u2212z)XY T\t\u0003 \u00b7 71 \u0398k(z) and let J (1) jk = n J(1) z,jk | z \u2208[0, 1] o . To obtain the covering number of the function class J (1) jk , we consider bounding the covering number of a larger class of function. To this end, we de\ufb01ne \u03a6(1) \u03c9 (u, Xi, Yi) = \u221a h\u00b7 \u0002 Kh(u \u2212z)XiY T i \u2212E \b Kh(Z \u2212z)XY T\t\u0003 to be a d\u00d7d matrix. We denote the (j, k)th element of \u03a6(1) \u03c9 (u, Xi, Yi) as \u03a6(1) \u03c9,jk(u, Xij, Yik) = \u221a h\u00b7q\u03c9,jk(u, Xij, Yik), where q\u03c9,jk(u, Xij, Yik) = Kh(u\u2212\u03c9)XijYik\u2212E{Kh(Z\u2212\u03c9)XjYk}. We aim to obtain an \u03f5-cover N (1\u2032) for the following function class J (1\u2032) jk = \u0002 {\u0398j(z)}T\u03a6(1) \u03c9 (\u00b7)\u0398k(z) | \u03c9, z \u2208[0, 1] \u0003 . In other words, we show that for any (\u03c91, z1) \u2208[0, 1]2, there exists (\u03c92, z2) \u2208N (1\u2032) such that \r \r \r{\u0398j(z)}T\u03a6(1) \u03c9 (u, Xi, Yi)\u0398k(z) \u2212{\u0398j(z\u2032)}T\u03a6(1) \u03c9\u2032 (u, Xi, Yi)\u0398k(z\u2032) \r \r \r L2(Q) \u2264\u03f5. 
Given any j, k \u2208[d], \u03c9, \u03c9\u2032, z, z\u2032 \u2208[0, 1], by the triangle inequality, we have \r \r \r{\u0398j(z1)}T \u03a6(1) \u03c91 (u, Xi, Yi)\u0398k(z1) \u2212{\u0398j(z2)}T \u03a6(1) \u03c92 (u, Xi, Yi)\u0398k(z2) \r \r \r L2(Q) \u2264 \r \r \r{\u0398j(z1) \u2212\u0398j(z2)}T \u03a6(1) \u03c91 (u, Xi, Yi)\u0398k(z1) \r \r \r L2(Q) | {z } I1 + \r \r \r{\u0398j(z2)}T n \u03a6(1) \u03c91 (u, Xi, Yi) \u2212\u03a6(1) \u03c92 (u, Xi, Yi) o \u0398k(z1) \r \r \r L2( | {z I2 + \r \r \r{\u0398j(z2)}T \u03a6(1) \u03c92 (u, Xi, Yi) {\u0398k(z1) \u2212\u0398k(z2)} \r \r \r L2(Q) | {z } I3 . (160) We now obtain the upper bounds for I1, I2, and I3. Upper bound for I1 and I3: First, we note that by Holder\u2019s inequality, we have I1 \u2264\u2225\u0398j(z1) \u2212\u0398j(z2)\u22251 \u00b7 max j,k\u2208[d] \r \r \r\u03a6(1) \u03c91,jk(u, Xij, Yik) \r \r \r L2(Q) \u00b7 \u2225\u0398k(z1)\u22251. Since \u0398(z) \u2208U(s, M, \u03c1), we have sup z\u2208[0,1] max j\u2208[d] \u2225\u0398j(z)\u22251 \u2264M. (161) Moreover, for any z1, z2 \u2208[0, 1], we have sup j\u2208[d] \u2225\u0398j(z1) \u2212\u0398j(z2)\u22251 \u2264 \u221a d \u00b7 \u2225\u0398(z1) \u2212\u0398(z2)\u22252 \u2264 \u221a d \u00b7 \u2225\u0398(z1)\u22252 \u00b7 \u2225Id \u2212\u03a3(z1)\u0398(z2)\u22252 \u2264 \u221a d \u00b7 \u2225\u0398(z1)\u22252 \u00b7 \u2225\u0398(z2)\u22252 \u00b7 \u2225\u03a3(z1) \u2212\u03a3(z2)\u22252 \u2264 \u221a d \u00b7 \u03c12 \u00b7 d \u00b7 \u2225\u03a3(z1) \u2212\u03a3(z2)\u2225max \u2264d3/2 \u00b7 \u03c12 \u00b7 M\u03c3 \u00b7 |z1 \u2212z2|, (162) 72 where the second to the last inequality follows from the fact that \u0398(z) \u2208U(s, M, \u03c1) and the last inequality follows from Assumption 2. Finally, from (74) and the de\ufb01nition of \u03a6(1) \u03c91,jk(\u00b7) = \u221a h \u00b7 q\u03c91,jk(\u00b7), we have max j,k\u2208[d] \r \r \r\u03a6(1) \u03c91,jk(u, Xij, Yik) \r \r \r L2(Q) \u2264 2 \u221a h \u00b7 M 2 X \u00b7 \u2225K\u2225\u221e\u00b7 log d. (163) Combining (161)-(163), we have I1 \u2264d3/2 \u00b7 log d \u00b7 \u03c12 \u00b7 M\u03c3 \u00b7 M \u00b7 M 2 X \u00b7 \u2225K\u2225\u221e\u00b7 2 \u221a h \u00b7 |z1 \u2212z2|. (164) We note that I3 can be upper bounded the same way as I1. Upper bound for I2: Recall from (160) that I2 = \r \r \r{\u0398j(z2)}T \b \u03a6(1) \u03c91 (u, Xi, Yi) \u2212\u03a6(1) \u03c92 (u, Xi, Yi) \t \u0398k(z1) \r \r \r L2(Q) \u2264\u2225\u0398k(z1)\u2225\u00b7 \u2225\u0398j(z2)\u22251 \u00b7 max j,k\u2208[d] \r \r \r \u221a h \u00b7 {q\u03c91,jk(u, Xij, Yik) \u2212q\u03c92,jk(u, Xij, Yik)} \r \r \r L2(Q) , where the inequality holds by Holder\u2019s inequality and the de\ufb01nition of \u03a6(1) \u03c9 (u, Xi, Yi). Let \u03a6(1) jk = n\u221a h \u00b7 q\u03c9,jk(\u00b7) | \u03c9 \u2208[0, 1] o and recall from Lemma 17 that we constructed an \u03f5-cover N (1\u2032\u2032) \u2282[0, 1] for the function class \u03a6(1) jk with cardinality \f \fN (1\u2032\u2032)\f \f = \u0012 4\u00b7\u2225K\u2225TV\u00b7C4/5 K \u00b7 \u00af f1/5 Z \u00b7M1/5 \u03c3 \u00b7M8/5 X \u00b7log4/5 d \u221a h\u00b7\u03f5 \u00135 . Since the construction of the \u03f5-cover in Lemma 17 is independent of the indices j and k, we have that for any j, k \u2208[d] and \u03c91 \u2208[0, 1], there exists a \u03c92 \u2208N (1\u2032\u2032) such that max j,k\u2208[d] \r \r \r \u221a h \u00b7 {q\u03c91,jk(u, Xij, Yik) \u2212q\u03c92,jk(u, Xij, Yik)} \r \r \r L2(Q) \u2264\u03f5. (165) Thus, by (161) and (165), we have I2 \u2264M 2 \u00b7 \u03f5. 
(166) Covering number of the function class J (1) jk : Since J (1) jk \u2282J (1\u2032) jk , the covering number of J (1) jk is upper bounded by the covering number of J (1\u2032) jk . It su\ufb03ces to construct an \u03f5-cover of the function class J (1\u2032) jk . In the following, we show that N (1\u2032) = N (1\u2032\u2032) \u00d7 n i \u00b7 \u03f5 \u00b7 \u221a h | i = 1, . . . , 1 \u03f5\u00b7 \u221a h o is an \u03f5-cover of J (1\u2032) jk . For any (\u03c91, z1) \u2208[0, 1]2, there exists (\u03c92, z2) \u2208N (1\u2032) such that (165) holds and that |z1 \u2212z2| \u2264 \u221a h\u00b7\u03f5. Thus, combining (164) and 73 (166), we have \r \r{\u0398j(z1)}T\u03a6(1) \u03c91 (u, Xi, Yi)\u0398k(z1) \u2212{\u0398j(z2)}T\u03a6(1) \u03c92 (u, Xi, Yi)\u0398k(z2) \r \r L2(Q) \u22642 \u00b7 d3/2 \u00b7 log d \u00b7 \u03c12 \u00b7 M\u03c3 \u00b7 M \u00b7 M 2 X \u00b7 \u2225K\u2225\u221e\u00b7 2 \u221a h \u00b7 |z1 \u2212z2| + M 2 \u00b7 \u03f5 \u22644 \u00b7 d3/2 \u00b7 log d \u00b7 \u03c12 \u00b7 M\u03c3 \u00b7 M \u00b7 M 2 X \u00b7 \u2225K\u2225\u221e\u00b7 2 \u221a h \u00b7 \u03f5 + M 2 \u00b7 \u03f5 \u2264C \u00b7 d3/2 \u00b7 log d \u00b7 \u03f5, (167) where C > 0 is a generic constant that does not depend on d, h, and n. Thus, we have N{J (1\u2032) jk , L2(Q), C\u00b7d3/2\u00b7log d\u00b7\u03f5} \u2264 \f \f \fN (1\u2032)\f \f \f \u2264 4 \u00b7 \u2225K\u2225TV \u00b7 C4/5 K \u00b7 \u00af f 1/5 Z \u00b7 M 1/5 \u03c3 \u00b7 M 8/5 X \u00b7 log4/5 d \u221a h \u00b7 \u03f5 !5 \u00b7 1 \u221a h \u00b7 \u03f5 . Since J (1) jk \u2282J (1\u2032) jk , the above expression implies that N{J (1) jk , L2(Q), \u03f5} \u2264N{J (1\u2032) jk , L2(Q), \u03f5} \u2264C \u00b7 d5/4 \u00b7 log3/2 d \u221a h \u00b7 \u03f5 !6 , (168) as desired. I.4 Proof of Lemma 19 First, we note that J(2) z,jk(u) = \u221a h \u00b7 {\u0398j(z)}T \u00b7 [Kh(u \u2212z) \u2212E {Kh(Z \u2212z)}] \u00b7 \u03a3(z) \u00b7 \u0398k(z) = \u221a h \u00b7 kz(u) \u00b7 \u0398jk(z), where kz(u) = Kh(u \u2212z) \u2212E{Kh(Z \u2212z)}. Let J (2) jk = n J(2) z,jk | z \u2208[0, 1] o . Furthermore, recall that K = {kz(\u00b7) | z \u2208[0, 1]} and let \u0398jk = {\u0398jk(z) | z \u2208[0, 1]}. The function class J (2) jk can be written as J (2) jk = { \u221a h \u00b7 f1 \u00b7 f2,jk | f1 \u2208K, f2,jk \u2208\u2126jk}. It su\ufb03ces to obtain the covering number for K and \u2126jk, and apply Lemma 15. Covering number of the function class K: By Lemma 16, we have N{K, L2(Q), \u03f5} \u2264 4 \u00b7 \u2225K\u2225TV \u00b7 C4/5 K \u00b7 \u00af f 1/5 Z h\u03f5 !5 . (169) 74 Covering number of the function class \u0398jk: We show that \u0398jk(z) is Lipschitz, and apply Lemma 14 to obtain the covering number for \u0398jk. Similar to (162), for any z1, z2 \u2208[0, 1], we have \u2225\u0398(z1) \u2212\u0398(z2)\u2225max \u2264\u2225\u0398(z1)\u22252 \u00b7 \u2225\u0398(z2) \u00b7 {\u03a3(z1) \u2212\u03a3(z2)}\u22252 \u2264\u2225\u0398(z1)\u22252 \u00b7 \u2225\u0398(z2)\u22252 \u00b7 \u2225\u03a3(z1) \u2212\u03a3(z2)\u22252 \u2264\u03c12 \u00b7 d \u00b7 \u2225\u03a3(z1) \u2212\u03a3(z2)\u2225max \u2264\u03c12 \u00b7 d \u00b7 M\u03c3 \u00b7 |z1 \u2212z2|, where the last inequality follows from Assumption 2. Since \u0398jk(z) is \u03c12 \u00b7 d \u00b7 M\u03c3-Lipschitz, by Lemma 14, we have N{\u0398jk, L2(Q), \u03f5} \u2264M\u03c3 \u00b7 \u03c12 \u00b7 d \u03f5 . (170) Covering number of the function class J (2) jk : We now apply Lemma 15 to obtain the covering number of J (2) jk . 
Applying Lemma 15 with a1 = d, v1 = 1, C1 = M\u03c3 \u00b7 \u03c12, a2 = h\u22121, v2 = 5, C2 = \u0010 4 \u00b7 \u2225K\u2225TV \u00b7 C4/5 K \u00b7 \u00af f 1/5 Z \u00115 , and U = 2 h \u00b7 \u2225K\u2225\u221e, along with (169) and (170), we have N{J (2) jk , L2(Q), \u221a h \u00b7 \u03f5} \u2264C \u00b7 \u0012 d1/6 h11/6 \u00b7 \u03f5 \u00136 , where C > 0 is a generic constant that does not depend on n, d, and h. This implies that N{J (2) jk , L2(Q), \u03f5} \u2264C \u00b7 \u0012 d1/6 h4/3 \u00b7 \u03f5 \u00136 , as desired. J Technical Lemmas on Empirical Process In this section, we present some existing tools on empirical process. The following lemma states that the supreme of any empirical process is concentrated near its mean. It follows directly from Theorem 2.3 in Bousquet (2002). Lemma 20. (Theorem A.1 in Van de Geer 2008) Let X1, . . . , Xn be independent random variables and let F be a function class such that there exists \u03b7 and \u03c4 2 satisfying sup f\u2208F \u2225f\u2225\u221e\u2264\u03b7 and sup f\u2208F 1 n X i\u2208n Var{f(Xi)} \u2264\u03c4 2. De\ufb01ne Y = sup f\u2208F \f \f \f \f \f \f 1 n X i\u2208[n] [f(Xi) \u2212E{f(Xi)}] \f \f \f \f \f \f . 75 Then, for any t > 0, P h Y \u2265E(Y ) + t p 2 {\u03c4 2 + 2\u03b7E(Y )} + 2t2\u03b7/3 i \u2264exp \u0000\u2212nt2\u0001 . The above inequality involves evaluating the expectation of the supreme of the empirical process. The following lemma follows directly from Theorem 3.12 in Koltchinskii (2011). It provides an upper bound on the expectation of the supreme of the empirical process as a function of its covering number. Lemma 21. (Lemma F.1 in Lu et al. 2015a) Assume that the functions in F de\ufb01ned on X are uniformly bounded by a constant U and F(\u00b7) is the envelope of F such that |f(x)| \u2264F(x) for all x \u2208X and f \u2208F. Let \u03c32 P = sup f\u2208F E(f 2). Let X1, . . . , Xn be i.i.d. copies of the random variables X. We denote the empirical measure as Pn = 1 n P i\u2208[n] \u03b4Xi. If for some A, V > 0 and for all \u03f5 > 0 and n \u22651, the covering entropy satis\ufb01es N{F, L2(Pn), \u03f5} \u2264 \u0012A\u2225F\u2225L2(Pn) \u03f5 \u0013V , then for any i.i.d. sub-gaussian mean zero random variables \u03be1, . . . , \u03ben, there exists a universal constant C such that E \uf8f1 \uf8f2 \uf8f3sup f\u2208F 1 n \f \f \f \f \f \f X i\u2208[n] \u03beif(Xi) \f \f \f \f \f \f \uf8fc \uf8fd \uf8fe\u2264C (r V n \u03c3P s log \u0012A\u2225F\u2225L2(P) \u03c3P \u0013 + V U n log \u0012A\u2225F\u2225L2(P) \u03c3P \u0013) . Furthermore, we have E \uf8f1 \uf8f2 \uf8f3sup f\u2208F 1 n \f \f \f \f \f \f X i\u2208[n] [f(Xi) \u2212E{f(Xi)}] \f \f \f \f \f \f \uf8fc \uf8fd \uf8fe\u2264C (r V n \u03c3P s log \u0012A\u2225F\u2225L2(P) \u03c3P \u0013 + V U n log \u0012A\u2225F\u2225L2(P) \u03c3P \u0013) .", "introduction": "In the past few decades, much e\ufb00ort has been put into understanding task-based brain connectivity networks. For instance, in a typical visual mapping experiment, subjects are presented with a simple static visual stimulus and are asked to maintain \ufb01xation at the 1 arXiv:1905.11588v2 [stat.ML] 20 Jun 2019 visual stimulus, while their brain activities are measured. Under such highly controlled experimental settings, numerous studies have shown that there are substantial similarities across brain connectivity networks constructed for di\ufb00erent subjects (Press et al. 2001, Has- son et al. 2003). 
However, such experimental settings bear little resemblance to our real-life experience in several aspects: natural viewing consists of a continuous stream of perceptual stimuli; subjects can freely move their eyes; there are interactions among viewing, context, and emotion (Hasson et al. 2004). To address this issue, neuroscientists have started mea- suring brain activity under continuous natural stimuli, such as watching a movie or listening to a story (Hasson et al. 2004, Simony et al. 2016, Chen et al. 2017). The main scienti\ufb01c question is to understand the dynamics of the brain connectivity network that are speci\ufb01c to the continuous natural stimuli. In the neuroscience literature, a typical approach for constructing a brain connectivity network is to calculate a sample covariance matrix for each subject: the covariance matrix encodes marginal relationships for each pair of brain regions within each subject. More recently, graphical models have been used in modeling brain connectivity networks: graphical models encode conditional dependence relationships between each pair of brain regions, given the others (Rubinov & Sporns 2010). A graph consists of d nodes, each representing a random variable, as well as a set of edges joining pairs of nodes corresponding to conditionally dependent variables. There is a vast literature on learning the structure of static undirected graphical models, and we refer the reader to Drton & Maathuis (2017) for a detailed review. Under natural continuous stimuli, it is often of interest to estimate a dynamic brain connectivity network, i.e., a graph that changes over time. A natural candidate for this purpose is the time-varying Gaussian graphical model (Zhou et al. 2010, Kolar et al. 2010). The time-varying Gaussian graphical model assumes X(z) | Z = z \u223cNd{0, \u03a3X(z)}, (1) where \u03a3X(z) is the covariance matrix of X(z) given Z = z, and Z \u2208[0, 1] has a continuous density. The inverse covariance matrix {\u03a3X(z)}\u22121 encodes conditional dependence relation- ships between pairs of random variables at time Z = z: {\u03a3X(z)}\u22121 jk = 0 if and only if the jth and kth variables are conditionally independent given the other variables at time Z = z. In natural viewing experiments, the main goal is to construct a brain connectivity network that is locked to the processing of external stimuli, referred to as stimulus-locked network (Simony et al. 2016, Chen et al. 2017, Regev et al. 2018). Constructing a stimulus-locked network can better characterize the dynamic changes of brain patterns across the continuous stimulus (Simony et al. 2016). The main challenge in constructing stimulus-locked network is the lack of highly controlled experiments that remove spontaneous and individual variations. The measured blood-oxygen-level dependent (BOLD) signal consists of not only signal that is speci\ufb01c to the stimulus, but also intrinsic neural signal (random \ufb02uctuations) and non- neuronal signal (physiological noise) that are speci\ufb01c to each subject. The intrinsic neural signal and non-neuronal signal can be interpreted as measurement error or latent variables that confound the stimuli-speci\ufb01c signal. We refer to non-stimulus-induced signals as subject speci\ufb01c e\ufb00ects throughout the manuscript. 
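As a point of reference for what fitting (1) within a single subject involves, the short sketch below (Python with NumPy; the function name, the Gaussian kernel, and the bandwidth are illustrative assumptions rather than the estimator developed in this paper) computes a kernel-smoothed covariance estimate of $\Sigma_X(z_0)$ and inverts it to read off conditional dependence at time $z_0$; in practice a sparse precision estimator such as the graphical lasso would replace the plain inverse.

import numpy as np

def timevarying_covariance(X, Z, z0, h):
    """Kernel-smoothed covariance estimate of Sigma_X(z0) from data X (n x d)
    observed at time points Z (n,), using a Gaussian kernel with bandwidth h.
    Illustrative sketch only, not the estimator studied in this paper."""
    w = np.exp(-0.5 * ((Z - z0) / h) ** 2)   # kernel weights, up to a normalizing constant
    w = w / w.sum()
    mu = (w[:, None] * X).sum(axis=0)        # kernel-weighted mean at z0
    Xc = X - mu
    return (w[:, None] * Xc).T @ Xc          # kernel-weighted covariance at z0

# Toy usage: n = 200 scans of d = 5 brain regions observed on [0, 1].
rng = np.random.default_rng(0)
n, d = 200, 5
Z = rng.uniform(size=n)
X = rng.standard_normal((n, d))
Sigma_hat = timevarying_covariance(X, Z, z0=0.5, h=0.1)
Omega_hat = np.linalg.inv(Sigma_hat)         # dense precision matrix; a graphical lasso
                                             # fit would be used to obtain a sparse graph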
Thus, directly \ufb01tting (1) using the measured data 2 will yield a time-varying graph that primarily re\ufb02ects intrinsic BOLD \ufb02uctuations within each brain rather than BOLD \ufb02uctuations due to the natural continuous stimulus. In this paper, we exploit the experimental design aspect of natural viewing experiments and propose to estimate a dynamic stimulus-locked brain connectivity network by treating the intrinsic neural signal and non-neuronal signal as nuisance parameters. Our proposal exploits the fact that the same stimulus will be given to multiple independent subjects, and that the intrinsic neural signal and non-neuronal signal for di\ufb00erent subjects are independent. Thus, this motivates us to estimate a brain connectivity network across two brains rather than within each brain. In fact, this approach has been considered in Simony et al. (2016) and Chen et al. (2017) where they estimated brain connectivity networks by calculating covariance for brain regions between two brains. After estimating the stimulus-locked brain connectivity network, the next important question is to infer whether there are any regions of interest that are connected to many other regions of interest during cognitive process (Hagmann et al. 2008). These highly connected brain regions are referred to as hub nodes, and the number of connections for each brain region is referred to as degree. Identifying hub brain regions that are speci\ufb01c to the given natural continuous stimulus will lead to a better understanding of the cognitive processes in the brain, and may shed light on various cognitive disorders. In the existing literature, several authors have proposed statistical methodologies to estimate networks with hubs (see, for instance, Tan et al. 2014). In this paper, we instead focus on developing a novel inferential framework to test the hypothesis whether there exists at least one time point such that the maximum degree of the graph is greater than k. Our proposed inferential framework is motivated by two major components: (1) the Gaussian multiplier bootstrap for approximating the distribution of supreme of empirical processes (Chernozhukov et al. 2013, 2014b), and (2) the step-down method for multiple hypothesis testing problems (Romano & Wolf 2005). In a concurrent work, Neykov et al. (2019) proposed a framework for testing general graph structure on a static graph. In Appendix A, we will show that our proposed method can be extended to testing a large family of graph structures similar to that of Neykov et al. (2019)." }, { "url": "http://arxiv.org/abs/1810.07913v2", "title": "Robust Sparse Reduced Rank Regression in High Dimensions", "abstract": "We propose robust sparse reduced rank regression for analyzing large and\ncomplex high-dimensional data with heavy-tailed random noise. The proposed\nmethod is based on a convex relaxation of a rank- and sparsity-constrained\nnon-convex optimization problem, which is then solved using the alternating\ndirection method of multipliers algorithm. We establish non-asymptotic\nestimation error bounds under both Frobenius and nuclear norms in the\nhigh-dimensional setting. This is a major contribution over existing results in\nreduced rank regression, which mainly focus on rank selection and prediction\nconsistency. Our theoretical results quantify the tradeoff between\nheavy-tailedness of the random noise and statistical bias. 
For random noise\nwith bounded $(1+\\delta)$th moment with $\\delta \\in (0,1)$, the rate of\nconvergence is a function of $\\delta$, and is slower than the sub-Gaussian-type\ndeviation bounds; for random noise with bounded second moment, we obtain a rate\nof convergence as if sub-Gaussian noise were assumed. Furthermore, the\ntransition between the two regimes is smooth. We illustrate the performance of\nthe proposed method via extensive numerical studies and a data application.", "authors": "Kean Ming Tan, Qiang Sun, Daniela Witten", "published": "2018-10-18", "updated": "2019-04-14", "primary_cat": "stat.ML", "cats": [ "stat.ML", "cs.LG" ], "main_content": "2.1 Formulation Suppose we observe n independent samples of q-dimensional response variables and pdimensional covariates. Let Y \u2208Rn\u00d7q be the observed response and let X \u2208Rn\u00d7p be the observed covariates. We consider the matrix regression model Y = XA\u2217+ E, (1) where A\u2217\u2208Rp\u00d7q is the underlying regression coefficient matrix and E \u2208Rn\u00d7q is an error matrix. Each row of E is an independent mean-zero and potentially heavy-tailed random noise vector. Reduced rank regression seeks to characterize the relationships between Y and X in a parsimonious way by restricting the rank of A\u2217(Izenman, 1975). An estimator of A\u2217can be obtained by solving the optimization problem minimize A\u2208Rp\u00d7q tr {(Y \u2212XA)T(Y \u2212XA)} , subject to rank(A) \u2264r, (2) where r is typically much smaller than min{n, p, q}. Due to the rank constraint on A, (2) is non-convex: nonetheless, the global solution of (2) has a closed form solution (Izenman, 1975). It is well-known that squared error loss is sensitive to outliers or heavy-tailed random error (Huber, 1973). To address this issue, it is natural to substitute the squared error loss with a loss function that is robust against outliers. We propose to estimate A\u2217under the Huber loss function, formally defined as follows. Definition 1 (Huber Loss and Robustification Parameter). The Huber loss \u2113\u03c4(\u00b7) is defined as \ufffd 1 2 \u2113\u03c4(z) = \ufffd 1 2z2, if |z| \u2264\u03c4, \u03c4|z| \u22121 2\u03c4 2, if |z| > \u03c4, || \u2264 1 2\u03c4 2, if |z| > \u03c4, where \u03c4 > 0 is referred to as the robustification parameter that trades bias for robustness. The Huber loss function blends the squared error loss (|z| \u2264\u03c4) and the absolute deviation loss (|z| > \u03c4), as determined by the robustification parameter \u03c4. Compared to the squared error loss, large values of z are down-weighted under the Huber loss, thereby resulting in robustness. Generally, an estimator obtained from minimizing the Huber loss is biased. The robustification parameter \u03c4 quantifies the tradeoff between bias and robustness: a smaller value of \u03c4 introduces more bias but also encourages the estimator to be more robust to outliers. We will provide guidelines for selecting \u03c4 based on the sample size and the dimensions of A\u2217in later sections. Throughout the paper, for M \u2208Rp\u00d7q, we write \u2113\u03c4(M) = \ufffdp i=1 \ufffdq j=1 \u2113\u03c4(Mij) for notational convenience. In the high-dimensional setting in which n < p or n < q, it is theoretically challenging to estimate A\u2217accurately without imposing additional structural assumptions in addition 4 to the low rank assumption. To address this challenge, Chen et al. 
(2012) and Chen and Huang (2012) proposed methods for simultaneous dimension reduction and variable selection. In particular, they decomposed A\u2217into the product of its singular vectors, and imposed sparsity-inducing penalty on the left and right singular vectors. Thus, their proposed methods involve solving optimization problems with non-convex objective. Given that the goal is to estimate A\u2217rather than its singular vectors, we propose to estimate A\u2217directly. Under the Huber loss, a robust and sparse estimate of A\u2217can be obtained by solving the optimization problem: minimize A\u2208Rp\u00d7q \u001a 1 n\u2113\u03c4 (Y \u2212XA) \u001b , subject to rank(A) \u2264r and card(A) \u2264k, (3) where card(A) is the number of non-zero elements in A. Optimization problem (3) is nonconvex due to the rank and cardinality constraints on A. We instead propose to estimate A\u2217by solving the following convex relaxation: minimize A\u2208Rp\u00d7q \u001a 1 n\u2113\u03c4 (Y \u2212XA) + \u03bb (\u2225A\u2225\u2217+ \u03b3\u2225A\u22251,1) \u001b , (4) where \u03bb and \u03b3 are non-negative tuning parameters, \u2225\u00b7\u2225\u2217is the nuclear norm that encourages the solution to be low rank, and \u2225\u00b7 \u22251,1 is the entry-wise \u21131-norm that encourages the solution to be sparse. The nuclear norm and the \u21131,1 norm constraints are the tightest convex relaxations of the rank and cardinality constraints, respectively (Recht et al., 2010; Jojic et al., 2011). In Section 3, we will show that the estimator obtained from solving the convex relaxation in (4) has a favorable statistical convergence rate under a bounded moment condition on the random noise. 2.2 Algorithm We now develop an alternating direction method of multipliers (ADMM) algorithm for solving (4), which allows us to decouple some of the terms that are di\ufb03cult to optimize jointly (Eckstein and Bertsekas, 1992; Boyd et al., 2010). More speci\ufb01cally, (4) is equivalent to minimize A,Z,W\u2208Rp\u00d7q,D\u2208Rn\u00d7q \u001a 1 n\u2113\u03c4 (Y \u2212D) + \u03bb (\u2225W\u2225\u2217+ \u03b3\u2225Z\u22251,1) \u001b , subject to \uf8eb \uf8ec \uf8ed D Z W \uf8f6 \uf8f7 \uf8f8= \uf8eb \uf8ec \uf8ed X I I \uf8f6 \uf8f7 \uf8f8A. (5) For notational convenience, let B = (BD, BZ, BW )T, e X = (X, I, I)T, and \u2126= (D, Z, W)T. The scaled augmented Lagrangian of (5) takes the form L\u03c1(A, D, Z, W, B) = 1 n\u2113\u03c4(Y \u2212D) + \u03bb (\u2225W\u2225\u2217+ \u03b3\u2225Z\u22251,1) + \u03c1 2\u2225\u2126\u2212e XA + B\u22252 F, 5 Algorithm 1 An ADMM Algorithm for Solving (5). 1. Initialize the parameters: (a) primal variables A, D, Z, and W to the zero matrix. (b) dual variables BD, BZ, and BW to the zero matrix. (c) constants \u03c1 > 0 and \u03f5 > 0. 2. Iterate until the stopping criterion \u2225At \u2212At\u22121\u22252 F/\u2225At\u22121\u22252 F \u2264\u03f5 is met, where At is the value of A obtained at the tth iteration: (a) Update A, Z, W, D: i. A = ( e XT e X)\u22121 e XT(\u2126+ B). ii. Z = S(A \u2212BZ, \u03bb\u03b3/\u03c1). Here S denote the soft-thresholding operator, applied element-wise to a matrix: S(Aij, b) = sign(Aij) max(|Aij| \u2212b, 0). iii. W = P j max (\u03c9j \u2212\u03bb/\u03c1, 0) ajbT j , where P j \u03c9jajbT j is the singular value decomposition of A \u2212BW . iv. C = XA \u2212BD. Set Dij = \uf8f1 \uf8f2 \uf8f3 (Yij + n\u03c1Cij)/(1 + n\u03c1), if |n\u03c1(Yij \u2212Cij)/(1 + n\u03c1)| \u2264\u03c4, Yij \u2212S(Yij \u2212Cij, \u03c4/(n\u03c1)), otherwise. 
(b) Update BD, BZ, BW : i. BD = BD + D \u2212XA; ii. BZ = BZ + Z \u2212A; iii. BW = BW + W \u2212A. where A, D, Z, W are the primal variables, and B is the dual variable. Algorithm 1 summarizes the ADMM algorithm for solving (5). A detailed derivation is deferred to Appendix A. Note that the term ( e XT e X)\u22121 can be calculated before Step 2 in Algorithm 1. Therefore, the computational bottleneck in each iteration of Algorithm 1 is the singular value decomposition of a p \u00d7 q matrix with computational complexity O(p2q + q3). 3 Statistical Theory We study the theoretical properties of b A obtained from solving (4). Let Vp,q = {U \u2208 Rp\u00d7q : UTU = Iq} be the Stiefel manifold of p \u00d7 q orthonormal matrices. Throughout the theoretical analysis, we assume that A\u2217can be decomposed as A\u2217= U\u2217\u039b\u2217(V\u2217)T = r X k=1 \u03bb\u2217 ku\u2217 k(v\u2217 k)T, (6) where U\u2217\u2208Vp,r, V\u2217\u2208Vq,r, maxk \u2225u\u2217 k\u22250 \u2264su, and maxk \u2225v\u2217 k\u22250 \u2264sv with su, sv \u226an, r \u226an, and rsusv \u226an. Consequently, A\u2217is sparse and low rank. Let S = supp(A\u2217) be the support set of A\u2217with cardinality |S| = s, i.e., S contains indices for the non-zero elements in A\u2217. Note that s \u2264rsusv. 6 For simplicity, we consider the case of \ufb01xed design matrix X and assume that the covariates are standardized such that maxi,j |Xij| = 1. To characterize the heavy-tailed random noise, we impose a bounded moment condition on the random noise. Condition 1 (Bounded Moment Condition). For \u03b4 > 0, each entry of the random error matrix E in (1) has bounded (1 + \u03b4)th moment v\u03b4 \u2261max i,j E \u0000|Eij|1+\u03b4\u0001 < \u221e. Condition 1 is a relaxation of the commonly used sub-Gaussian assumption to accommodate heavy-tailed random noise. For instance, the t-distribution with degrees of freedom larger than one can be accommodated by the bounded moment condition. This condition has also been used in the context of high-dimensional Huber linear regression (Sun et al., 2018). Let H\u03c4(A) be the Hessian matrix of the Huber loss function \u2113\u03c4 (Y \u2212XA) /n in (5). In addition to the random noise, the Hessian matrix is a function of the parameter A, and H\u03c4(A) may equal zero for some A, because the Huber loss is linear at the tails. To avoid singularity of H\u03c4(A), we will study the Hessian matrix in a local neighborhood of A\u2217. To this end, we de\ufb01ne and impose conditions on the localized restricted eigenvalues of H\u03c4(A). De\ufb01nition 2 (Localized Restricted Eigenvalues). The minimum and maximum localized restricted eigenvalues for H\u03c4(A) are de\ufb01ned as \u03ba\u2212(H\u03c4(A), \u03be, \u03b7) = inf U,A \u001avec(U)TH\u03c4(A)vec(U) \u2225U\u22252 F : (A, U) \u2208C(m, \u03be, \u03b7) \u001b , \u03ba+(H\u03c4(A), \u03be, \u03b7) = sup U,A \u001avec(U)TH\u03c4(A)vec(U) \u2225U\u22252 F : (A, U) \u2208C(m, \u03be, \u03b7) \u001b , where C(m, \u03be, \u03b7)={(A, U) \u2208Rp\u00d7q\u00d7Rp\u00d7q : U \u0338= 0, S \u2286J, |J| \u2264m, \u2225USc\u22251,1 \u2264\u03be\u2225US\u22251,1, \u2225A\u2212A\u2217\u22251,1 \u2264\u03b7} is a local \u21131,1-cone. Condition 2. There exist constants 0 < \u03balower \u2264\u03baupper < \u221esuch that the localized restricted eigenvalues of H\u03c4 are lower-and upper-bounded by \u03balower/2 \u2264\u03ba\u2212(H\u03c4(A), \u03be, \u03b7) \u2264\u03ba+(H\u03c4(A), \u03be, \u03b7) \u2264\u03baupper. 
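To make the updates in Algorithm 1 above concrete, the following sketch (Python with NumPy; the function names are illustrative and this is not the authors' implementation) spells out the Huber loss of Definition 1 and the three proximal operations the algorithm relies on: entry-wise soft-thresholding for the Z-update, singular-value soft-thresholding for the W-update, and the closed-form Huber-type update for D in step 2(a)iv.

import numpy as np

def huber_loss(z, tau):
    """Huber loss of Definition 1, summed over all entries of z."""
    a = np.abs(z)
    return np.sum(np.where(a <= tau, 0.5 * a ** 2, tau * a - 0.5 * tau ** 2))

def soft(A, b):
    """Entry-wise soft-thresholding S(A, b), as in step 2(a)ii."""
    return np.sign(A) * np.maximum(np.abs(A) - b, 0.0)

def svd_soft(A, b):
    """Singular-value soft-thresholding (nuclear-norm proximal map), as in step 2(a)iii."""
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    return (U * np.maximum(s - b, 0.0)) @ Vt

def huber_update(Y, C, tau, n, rho):
    """Closed-form update for D in step 2(a)iv, with C = X A - B_D."""
    D_quad = (Y + n * rho * C) / (1.0 + n * rho)
    use_quad = np.abs(n * rho * (Y - C) / (1.0 + n * rho)) <= tau
    D_lin = Y - soft(Y - C, tau / (n * rho))
    return np.where(use_quad, D_quad, D_lin)

Each iteration thus costs one singular value decomposition of a p x q matrix plus entry-wise operations, consistent with the O(p^2 q + q^3) bottleneck noted above.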
A similar type of localized condition was proposed in Fan et al. (2018) for general loss functions and in Sun et al. (2018) for the analysis of robust linear regression in high dimensions. In what follows, we justify Condition 2 by showing that it is implied by the restricted eigenvalue condition on the empirical Gram matrix S = XTX/n. To this end, we de\ufb01ne the restricted eigenvalues of a matrix and then place a condition on the restricted eigenvalues of S. 7 De\ufb01nition 3 (Restricted Eigenvalues of a Matrix). Given \u03be > 1, the minimum and maximum restricted eigenvalues of S are de\ufb01ned as \u03c1\u2212(S, \u03be, m) = inf U ( tr(UTSU) \u2225U\u22252 1,2 : U \u2208Rp\u00d7q, U \u0338= 0, S \u2286J, |J| \u2264m, \u2225UJc\u22251,1 \u2264\u03be\u2225UJ\u22251,1 ) , \u03c1+(S, \u03be, m) = sup U ( tr(UTSU) \u2225U\u22252 1,2 : U \u2208Rp\u00d7q, U \u0338= 0, S \u2286J, |J| \u2264m, \u2225UJc\u22251,1 \u2264\u03be\u2225UJ\u22251,1 ) , respectively. Condition 3. There exist constants 0 < \u03balower \u2264\u03baupper < \u221esuch that the restricted eigenvalues of S are lowerand upper-bounded by \u03balower \u2264\u03c1\u2212(S, \u03be, m) \u2264\u03c1+(S, \u03be, m) \u2264\u03baupper. Condition 3 is a variant of the restricted eigenvalue condition that is commonly used in high-dimensional non-asymptotic analysis. It can be shown that Condition 3 holds with high probability if each row of X is a sub-Gaussian random vector. Under Condition 3, we now show that the localized restricted eigenvalues for the Hessian matrix are bounded with high probability under conditions on the robusti\ufb01cation parameter \u03c4 and the sample size n. That is, we prove that the localized restricted eigenvalues condition in Condition 2 holds with high probability under Condition 3. The result is summarized in the following lemma. Lemma 1. Consider A \u2208C(m, \u03be, \u03b7) where C(m, \u03be, \u03b7) is the local \u21131,1-cone as de\ufb01ned in De\ufb01nition 2. Let \u03c4 \u2265min(8\u03b7, C \u00b7 (m\u03bd\u03b4)1/(1+\u03b4)) and let n > C\u2032 \u00b7 m2 log(pq) for su\ufb03ciently large constants C, C\u2032 > 0. Under Conditions 1 and 3, there exists constants \u03balower and \u03baupper such that the localized restricted eigenvalues of H\u03c4(A) satisfy 0 < \u03balower/2 \u2264\u03ba\u2212(H\u03c4(A), \u03be, \u03b7) \u2264\u03ba+(H\u03c4(A), \u03be, \u03b7) \u2264\u03baupper < \u221e with probability at least 1 \u2212(pq)\u22121. Lemma 1 shows that Condition 2 holds with high probability, as long as Condition 3 on the empirical Gram matrix S holds. Note that the constants \u03balower and \u03baupper also appear in Condition 3. We now present our main results on the estimation error of b A under the Frobenius norm and nuclear norm in the following theorem. For simplicity, we will present our main results conditioned on the event that Conditions 1\u20132 hold. Theorem 1. Let b A be a solution to (4) with truncation and tuning parameters \u03c4 \u2273 \u0012 nv\u03b4 log(pq) \u00131/ min{(1+\u03b4),2} , \u03bb \u2273v1/ min(1+\u03b4,2) \u03b4 \u0012log(pq) n \u0013min{\u03b4/(1+\u03b4),1/2} 8 and \u03b3 > 2.5. Suppose that Conditions 1\u20132 hold with \u03be = (2\u03b3 + 5)/(2\u03b3 \u22125), \u03balower > 0 and \u03b7 \u2273\u03ba\u22121 lower\u03bbs. Assume that n > Cs2 log(pq) for some su\ufb03ciently large universal constant C > 0. 
Then, with probability at least 1 \u2212(pq)\u22121, we have \r \r b A \u2212A\u2217\r \r F \u2272\u03ba\u22121 lowerv1/ min{1+\u03b4,2} \u03b4 \u221arsusv \u001alog(pq) n \u001bmin{\u03b4/(1+\u03b4),1/2} , \r \r b A \u2212A\u2217\r \r \u2217\u2272\u03ba\u22121 lowerv1/ min{1+\u03b4,2} \u03b4 rsusv \u001alog(pq) n \u001bmin{\u03b4/(1+\u03b4),1/2} . Theorem 1 establishes the non-asymptotic convergence rates of our proposed estimator under both Frobenius and nuclear norms in the high-dimensional setting. To the best of our knowledge, we are the \ufb01rst to establish such results on the estimation error for robust sparse reduced rank regression. By contrast, most of the existing work on reduced rank regression focuses on rank selection consistency and prediction consistency (Bunea et al., 2011, 2012). Moreover, the prediction consistency results in She and Chen (2017) are established under the assumption that the rank of the design matrix X is smaller than the number of observations n. When the random noise has second or higher moments, i.e., \u03b4 \u22651, our proposed estimator achieves a parametric rate of convergence as if sub-Gaussian random noise were assumed. It achieves a slower rate of convergence only when the random noise is extremely heavy-tailed, i.e., 0 < \u03b4 < 1. Intuitively, one might expect the optimal rate of convergence under the Frobenius norm to have the form \r \r b A \u2212A\u2217\r \r F \u2272 p r(su+sv) \u001alog(pq) n \u001bmin{\u03b4/(1+\u03b4),1/2} , since there are a total of roughly r(su + sv) nonzero parameters to be estimated in A\u2217as de\ufb01ned in (6). Using the convex relaxation (4), we gain computational tractability while losing a scaling factor of p susv/(su+sv). By de\ufb01ning the e\ufb00ective dimension as de\ufb00= rsusv and the e\ufb00ective sample size as ne\ufb00= \b n/ log \u0000pq) \tmin{2\u03b4/(1+\u03b4),1}, the upper bounds in Theorem 1 can be rewritten as \r \r b A \u2212A\u2217\r \r F \u2272 s de\ufb00 ne\ufb00 , \r \r b A \u2212A\u2217\r \r \u2217\u2272 de\ufb00 \u221ane\ufb00 . The e\ufb00ective dimension depends only on the sparsity and rank, while the e\ufb00ective sample size depends only on the sample size divided by the log of the number of free parameters, as if there were no structural constraints. Our results exhibit an interesting phenomenon: the rate of convergence is a\ufb00ected by the heavy-tailedness only through the e\ufb00ective sample size; the e\ufb00ective dimension stays the same regardless of \u03b4. This parallels results for Huber linear regression in Sun et al. (2018). 4 Numerical Studies We perform extensive numerical studies to evaluate the performance of our proposal for robust sparse reduced rank regression. Five approaches are compared in our numerical 9 studies: our proposal with Huber loss, hubersrrr; our proposal with squared error loss (with \u03c4 \u2192\u221e), srrr; robust reduced rank regression with an additional mean parameter that models the outliers (She and Chen, 2017), r4; penalized reduced rank regression via an adaptive nuclear norm (Chen et al., 2013), rrr; and the penalized reduced rank regression via a ridge penalty (Mukherjee and Zhu, 2011), rrridge. The proposals rrridge, rrr, and r4 do not assume sparsity on the regression coe\ufb03cients. Moreover, r4 can only be implemented in the low-dimensional setting in which n \u2265p, or under the assumption that the design matrix X is low rank. Among the \ufb01ve proposals, only hubersrrr and r4 are robust against outliers. 
For all of our numerical studies, we generate each row of X from a multivariate normal distribution with mean zero and covariance matrix \u03a3, where \u03a3ij = 0.5|i\u2212j| for 1 \u2264i, j \u2264 p. Then, all elements of X are divided by the maximum absolute value of X such that maxi,j |Xij| = 1. The response matrix Y is then generated according to Y = XA\u2217+ E. We consider two di\ufb00erent types of outliers: (i) heavy-tailed random noise E, and (ii) contamination of some percentage of the elements of Y. We simulate data with sparse and non-sparse low rank matrix A\u2217. The details for the di\ufb00erent scenarios will be speci\ufb01ed in Section 4.1. Our proposal hubersrrr involves three tuning parameters. We select the tuning parameters using \ufb01ve-fold cross-validation: we vary \u03bb across a \ufb01ne grid of values, consider four values of \u03b3 = {2.5, 3, 3.5, 4} as suggested by Theorem 1, and considered a range of the robusti\ufb01cation parameter \u03c4 = c{n/ log(pq)}1/2, where c = {0.4, 0.45, . . . , 1.45, 1.5}. The tuning parameters for srrr are selected in a similar fashion with \u03c4 \u2192\u221e. For scenarios with non-sparse regression coe\ufb03cients, we simply set \u03b3 = 0 for hubersrrr and srrr for fair comparison against other approaches that do not assume sparsity. For r3, we select the tuning parameter using \ufb01ve di\ufb00erent information criteria implemented in the R package rrpack (Chen et al., 2013), and report the best result. For rrridge, we specify the correct rank for A\u2217and simply consider a \ufb01ne grid of tuning parameters for the ridge penalty and report the best result. The two tuning parameters for r4 control the sparsity of the mean shift parameter for modeling outliers, and the rank of A\u2217. We implement r4 by specifying the correct rank of A\u2217, and choose the sparsity tuning parameter according to \ufb01ve-fold cross-validation. In other words, we give a major advantage to rrridge and r4, in that we provide the rank of A\u2217as an input. To evaluate the performance across di\ufb00erent methods, we calculate the di\ufb00erence between the estimated regression coe\ufb03cients b A and the true coe\ufb03cients A\u2217under the Frobenius norm. In addition, for scenarios with in which A\u2217is sparse, we calculate the true and false positive rates (TPR and FPR), de\ufb01ned as the proportion of correctly estimated nonzeros in the true parameter, and the proportion of zeros that are incorrectly estimated to be nonzero in the true parameter, respectively. Since some existing approaches are not applicable in the high-dimensional setting, we perform numerical studies under the low-dimensional setting in which n \u2265p in Section 4.1. 10 We then illustrate the performance of our proposed methods, hubersrrr and srrr, in the high-dimensional setting in Section 4.2. 4.1 Low-Dimensional Setting with n \u2265p In this section, we perform numerical studies with n = 200, p = 50, and q = 10. We \ufb01rst consider two cases in which A\u2217has low rank but is not sparse: 1. Rank one matrix: A\u2217= u1vT 1 , where each element of u1 \u2208Rp and v1 \u2208Rq is generated from a uniform distribution on the interval [\u22121, 0.5] \u222a[0.5, 1]. 2. Rank two matrix: A\u2217= u1vT 1 +u2vT 2 , where each element of u1, u2 \u2208Rp and v1, v2 \u2208 Rq is generated from a uniform distribution on the interval [\u22121, 0.5] \u222a[0.5, 1]. 
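The simulation design just described can be generated with a short script such as the following (Python with NumPy; the random seed is arbitrary, the interval for the entries of u1 and v1 is interpreted as [-1, -0.5] union [0.5, 1], and the t-distributed noise is one of the settings introduced below). It also builds the grid of robustification parameters tau = c{n/log(pq)}^{1/2} used for cross-validation.

import numpy as np

rng = np.random.default_rng(1)
n, p, q = 200, 50, 10

# Rows of X drawn from N(0, Sigma) with Sigma_ij = 0.5^{|i-j|}, then rescaled
# so that max_{i,j} |X_ij| = 1.
Sigma = 0.5 ** np.abs(np.subtract.outer(np.arange(p), np.arange(p)))
X = rng.multivariate_normal(np.zeros(p), Sigma, size=n)
X = X / np.max(np.abs(X))

# Rank-one A* = u1 v1^T with entries drawn from [-1, -0.5] union [0.5, 1].
def unif_signed(size):
    return rng.choice([-1, 1], size=size) * rng.uniform(0.5, 1.0, size=size)

A_star = np.outer(unif_signed(p), unif_signed(q))

# Heavy-tailed noise: t-distribution with 1.5 degrees of freedom (one setting below).
E = rng.standard_t(df=1.5, size=(n, q))
Y = X @ A_star + E

# Grid for the robustification parameter tau = c * sqrt(n / log(pq)), c in {0.4, ..., 1.5}.
taus = np.arange(0.4, 1.55, 0.05) * np.sqrt(n / np.log(p * q))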
We then generate random noise E \u2208Rn\u00d7q from three di\ufb00erent distributions: (i) the normal distribution N(0, 4), (ii) the t-distribution with degrees of freedom 1.5, and (iii) the lognormal distribution log N(0, 1.22). Moreover, we consider a contamination scenario in which we generate each element of E from the N(0, 4) distribution, and then randomly contaminate 5% and 10% of the elements in Y by replacing them with random values generated from a uniform distribution on the interval [10, 20]. The estimation error for each method under the Frobenius norm, averaged over 100 data sets, is reported in Table 1. From Table 1, we see that rrr and rrridge outperform all other methods when A\u2217is rank one under Gaussian noise. This is not surprising, since rrr and rrridge are tailored for reduced rank regression without outliers. We see that hubersrrr has similar performance to srrr, suggesting that there is no loss of e\ufb03ciency for hubersrrr even when there are no outliers. When the random noise is generated from the t-distribution, r4 has the best performance, followed by hubersrrr. The estimation errors for methods that do not model the outliers are substantially higher. For log-normal random noise, hubersrrr outperforms r4. Under the data contamination model, r4 and hubersrrr perform similarly, and both outperform all of the other methods. These results corroborate the observation in She and Chen (2017) that the estimation of low rank matrices is extremely sensitive to outliers. As we increase the contamination percentage of the observed outcomes, we see that the performance of the non-robust methods deteriorates. Similar results are observed for the case when A\u2217has rank two. Next, we consider two cases in which A\u2217is both sparse and low rank: 1. Sparse rank one matrix: A\u2217= u1vT 1 with u1 = (1T 4, 0T p\u22124)T and v1 = (1T 4, 0T q\u22124)T; 2. Sparse rank two matrix: A\u2217= u1vT 1 + u2vT 2 with u1 = (1T 4, 0T p\u22124)T, v1 = (1T 4, 0T q\u22124)T, u2 = (0T 2, 1T 4, 0T p\u22126)T, and v2 = (0T 2, 1T 4, 0T q\u22126)T. The heavy-tailed random noise and data contamination scenarios are as described earlier. The results, averaged over 100 data sets, are reported in Table 2. 11 Table 1: The mean (and standard error) of the di\ufb00erence between the estimated regression coe\ufb03cients and the true regression coe\ufb03cients under the Frobenius norm, averaged over 100 data sets, in the setting where A\u2217is not sparse, with n = 200, p = 50, and q = 10. Three distributions of random noise are considered: normal, t, and log-normal. We also considered contaminating 5% or 10% of the elements of Y. 
Rank of A\u2217 Random Noise Data Contamination Methods Normal t Log-normal 0% 5% 10% rrr 5.80 (0.07) 17.71 (2.84) 10.71 (0.19) 5.80 (0.07) 10.35 (0.13) 12.33 (0.11) rrridge 5.42 (0.06) 13.79 (0.52) 9.22 (0.17) 5.42 (0.06) 9.07 (0.10) 10.93 (0.11) 1 srrr 7.19 (0.08) 26.75 (5.32) 10.41 (0.13) 7.19 (0.08) 10.49 (0.09) 11.76 (0.10) r4 7.32 (0.10) 4.65 (0.07) 8.88 (0.16) 7.32 (0.10) 7.93 (0.11) 8.54 (0.12) hubersrrr 7.21 (0.08) 6.96 (0.13) 6.70 (0.08) 7.21 (0.08) 7.92 (0.09) 8.40 (0.09) rrr 6.09 (0.09) 31.20 (5.64) 12.08 (0.32) 6.09 (0.09) 12.29 (0.19) 16.81 (0.25) rrridge 9.16 (0.09) 22.75 (1.16) 15.16 (0.20) 9.16 (0.09) 15.24 (0.12) 18.22 (0.13) 2 srrr 8.69 (0.11) 41.76 (11.42) 14.20 (0.24) 8.69 (0.11) 14.94 (0.16) 18.26 (0.18) r4 11.63 (0.13) 8.51 (0.41) 12.56 (0.17) 11.63 (0.13) 12.62 (0.14) 13.69 (0.15) hubersrrr 8.70 (0.11) 8.25 (0.24) 7.82 (0.11) 8.70 (0.11) 9.81 (0.13) 10.99 (0.15) Table 2: Results for the case where A\u2217is sparse, with n = 200, p = 50, and q = 10. Other details are as in Table 1. rank of A\u2217 Random Noise Data Contamination Methods Normal t-dist Log-normal 0% 5% 10% rrr 4.65 (0.04) 6.95 (0.88) 4.98 (0.01) 4.65 (0.04) 5.00 (0.01) 5.00 (0.01) rrridge 2.73 (0.03) 7.78 (0.55) 4.17 (0.08) 2.73 (0.03) 4.02 (0.04) 4.64 (0.05) 1 srrr 2.57 (0.04) 5.02 (0.08) 4.48 (0.06) 2.54 (0.04) 4.54 (0.04) 4.94 (0.04) r4 7.29 (0.10) 4.79 (0.09) 10.44 (0.16) 7.29 (0.10) 7.98 (0.12) 8.94 (0.12) hubersrrr 2.57 (0.04) 2.82 (0.13) 2.37 (0.05) 2.54 (0.04) 2.93 (0.05) 3.28 (0.06) rrr 5.25 (0.04) 10.05 (0.82) 8.18 (0.03) 5.25 (0.04) 8.22 (0.01) 8.24 (0.01) rrridge 4.36 (0.03) 9.35 (0.54) 6.00 (0.05) 4.36 (0.03) 6.11 (0.04) 6.82 (0.04) 2 srrr 3.26 (0.04) 7.81 (0.11) 5.78 (0.11) 3.26 (0.04) 5.83 (0.06) 6.96 (0.07) r4 11.55 (0.12) 7.75 (0.12) 12.91 (0.15) 11.55 (0.12) 12.50 (0.13) 13.65 (0.15) hubersrrr 3.27 (0.04) 3.44 (0.11) 3.07 (0.05) 3.27 (0.04) 3.70 (0.04) 4.09 (0.06) When A\u2217is sparse, hubersrrr and srrr outperform all of the methods that do not assume sparsity. In particular, we see that r4 has the worst performance when the random noise is normal or log-normal, or when the data are contaminated. The method rrr has an MSE of 5.00 when the data are contaminated, due to the fact that the information criteria always select models with the regression coe\ufb03cients estimated to be zero. In short, our proposal hubersrrr has the best performance across all scenarios and is robust against di\ufb00erent types of outliers. 12 Table 3: Results for the case when A\u2217is sparse and low rank in the high-dimensional setting with n = 150, p = 200, and q = 10. Three distributions of random noise are considered: normal, t, and log-normal. We report the mean (and standard error) of the true and false positive rates, and the di\ufb00erence between b A and A\u2217under Frobenius norm, averaged over 100 data sets. 
Rank of A\u2217 Noise srrr hubersrrr TPR FPR Frobenius TPR FPR Frobenius Normal 0.95 (0.01) 0.12 (0.01) 3.74 (0.05) 0.95 (0.01) 0.13 (0.01) 3.75 (0.05) 1 t-dist 0.01 (0.01) 0.01 (0.01) 6.23 (1.23) 0.96 (0.02) 0.14 (0.01) 4.28 (0.47) Log-normal 0.08 (0.02) 0.01 (0.01) 5.00 (0.02) 0.98 (0.01) 0.15 (0.01) 3.53 (0.06) Normal 0.96 (0.01) 0.15 (0.01) 4.65 (0.05) 0.96 (0.01) 0.16 (0.01) 4.65 (0.05) 2 t-dist 0.06 (0.02) 0.01 (0.01) 9.38 (1.21) 0.97 (0.01) 0.17 (0.01) 5.11 (0.48) Log-normal 0.39 (0.03) 0.03 (0.01) 7.50 (0.09) 0.98 (0.01) 0.18 (0.01) 4.41 (0.07) 4.2 High-Dimensional Setting with p > n In this section, we assess the performance of our proposed method in the high-dimensional setting, when the matrix A\u2217is sparse. To this end, we perform numerical studies with q = 10, p = 200, and n = 150. Note that r4 is not applicable when p > n. Moreover, rrr and rrridge do not assume sparsity and therefore their results are omitted. We consider low rank and sparse matrices A\u2217described in Section 4.1. Similarly, two types of outliers are considered: heavy-tailed random noise, and data contamination. The TPR, FPR, and estimation error under Frobenius norm for both types of scenarios, averaged over 100 data sets, are summarized in Tables 3\u20134, respectively. We see that for Gaussian random noise, hubersrrr is comparable to srrr, indicating that there is little loss of e\ufb03ciency when there are no outliers. However, in scenarios in which the random noise is heavy-tailed, hubersrrr has high TPR, low FPR, and low Frobenius norm compared to srrr. In fact, we see that when the random noise is heavy-tailed, the TPR and FPR of srrr are approximately zero. We see similar performance for the case when the data are contaminated in Table 4. These results suggest that hubersrrr should be preferred in all scenarios since it allows accurate estimation of A\u2217when the random noise are heavy-tailed, or under data contamination. Moreover, there is little loss of e\ufb03ciency compared to srrr when there are no outliers. 5 Data Application We apply the proposed robust sparse reduced rank regression to the Arabidopsis thaliana data set, which consists of gene expression measurements for n = 118 samples (Rodr\u00b4 \u0131guesConcepci\u00b4 on and Boronat, 2002; Wille et al., 2004; Ma et al., 2007; Tan et al., 2015; She and Chen, 2017). It is known that isoprenoids play many important roles in biochemical 13 Table 4: Results for the case when A\u2217is sparse and low rank, and n = 150, p = 200, and q = 10, with 5% and 10% of the data being contaminated. Other details are as in Table 3. Rank of A\u2217 Contamination % srrr hubersrrr TPR FPR Frobenius TPR FPR Frobenius 0% 0.95 (0.01) 0.12 (0.01) 3.74 (0.05) 0.95 (0.01) 0.13 (0.01) 3.75 (0.05) 1 5% 0.14 (0.02) 0.02 (0.01) 5.08 (0.03) 0.82 (0.03) 0.12 (0.01) 4.24 (0.06) 10% 0.04 (0.01) 0.01 (0.01) 5.13 (0.04) 0.74 (0.03) 0.11 (0.01) 4.52 (0.06) 0% 0.96 (0.01) 0.15 (0.01) 4.65 (0.05) 0.96 (0.01) 0.16 (0.01) 4.65 (0.05) 2 5% 0.49 (0.02) 0.06 (0.01) 7.43 (0.07) 0.94 (0.01) 0.15 (0.01) 5.22 (0.07) 10% 0.21 (0.02) 0.03 (0.01) 8.13 (0.04) 0.90 (0.01) 0.15 (0.01) 5.63 (0.08) functions such as respiration, photosynthesis, and regulation of growth in plants. Here, we explore the connection between two isoprenoid biosynthesis pathways and some downstream pathways. Similar to She and Chen (2017), we treat the p = 39 genes from two isoprenoid biosynthesis pathways as the predictors, and treat the q = 795 genes from 56 downstream pathways as the response. 
Thus, $X \in \mathbb{R}^{118 \times 39}$ and $Y \in \mathbb{R}^{118 \times 795}$, and we are interested in fitting the model $Y = XA + E$. We scale each element of X such that $\max_{i,j} |X_{ij}| = 1$, and standardize each column of Y to have mean zero and standard deviation one. To assess whether there are outliers in Y, we perform Grubbs' test on each column of Y (Grubbs, 1950). Grubbs' test, also known as the maximum normalized residual test, is used to detect outliers from a normal distribution. After a Bonferroni correction, we find that 260 genes contain outliers. In Figure 1, we plot histograms for three genes that contain outliers. [Figure 1: three histogram panels (frequency versus expression level), one per gene listed in the caption.] Figure 1: Histograms for three genes from the abscisic acid, jasmonic acid, and phytosterol pathways that are heavy-tailed. These genes are AT1G30100, AT1G72520, and AT4G34650, respectively. In Section 4.2, we illustrated with numerical studies that if the response variables are heavy-tailed, sparse reduced rank regression with squared error loss will lead to incorrect estimates. We now illustrate the difference between solving (4) with the Huber loss and with the squared error loss. We set $\gamma = 3$, and pick $\lambda$ such that there are 1000 non-zeros in the estimated coefficient matrix. For the robust method, we set the robustification parameter to $\tau = 3$ for simplicity; in principle, this quantity can be chosen using cross-validation. Let $\hat{A}_{\text{hubersrrr}}$ and $\hat{A}_{\text{srrr}}$ be the estimated regression coefficients for the robust and non-robust methods, respectively. To measure the difference between the two approaches in terms of regression coefficients and prediction, we compute the quantities $\|\hat{A}_{\text{hubersrrr}} - \hat{A}_{\text{srrr}}\|_F / \|\hat{A}_{\text{hubersrrr}}\|_F \approx 37\%$ and $\|X\hat{A}_{\text{hubersrrr}} - X\hat{A}_{\text{srrr}}\|_F / \|X\hat{A}_{\text{hubersrrr}}\|_F \approx 35\%$. Figure 2 displays scatterplots of the right singular vectors of $X\hat{A}_{\text{srrr}}$ against the right singular vectors of $X\hat{A}_{\text{hubersrrr}}$. We see that while the first singular vectors are similar between the two methods, the second and third singular vectors are very different. These results suggest that the regression coefficients and model predictions can be quite different between robust and non-robust methods when there are outliers, and that care needs to be taken during model fitting.
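For completeness, the column-wise Grubbs screening used earlier in this section to flag heavy-tailed genes can be sketched as follows (Python with NumPy and scipy.stats for the t quantile; the critical value is the standard two-sided one for the maximum normalized residual test, the Bonferroni correction simply divides the level by the number of columns, and this is an illustration rather than the authors' code).

import numpy as np
from scipy import stats

def grubbs_statistic(y):
    """Maximum normalized residual G = max_i |y_i - ybar| / s."""
    return np.max(np.abs(y - y.mean())) / y.std(ddof=1)

def grubbs_critical(N, alpha):
    """Two-sided Grubbs critical value at level alpha for a sample of size N."""
    t = stats.t.ppf(1 - alpha / (2 * N), df=N - 2)
    return (N - 1) / np.sqrt(N) * np.sqrt(t ** 2 / (N - 2 + t ** 2))

def columns_with_outliers(Y, alpha=0.05):
    """Indices of columns of Y whose Grubbs statistic exceeds the
    Bonferroni-adjusted critical value (level alpha divided by q columns)."""
    n, q = Y.shape
    crit = grubbs_critical(n, alpha / q)
    return [j for j in range(q) if grubbs_statistic(Y[:, j]) > crit]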
[Figure 2: three scatterplot panels (first, second, and third singular vectors), each plotting the non-robust singular vector against the robust singular vector.] Figure 2: Scatterplots of the leading right singular vectors of $X\hat{A}_{\text{hubersrrr}}$ and $X\hat{A}_{\text{srrr}}$. 6 Discussion We propose robust sparse reduced rank regression for analyzing large, complex, and possibly contaminated data. Our proposal is based on a convex relaxation, and is thus computationally tractable. We show that our proposal is statistically consistent under both Frobenius and nuclear norms in the high-dimensional setting in which p > n. By contrast, most of the existing literature on reduced rank regression focuses on prediction and rank selection consistency. In this paper, we focus on tail robustness, i.e., the performance of an estimator in the presence of heavy-tailed noise. We show that the proposed robust estimator can achieve exponential-type deviation errors while requiring only bounded low-order moments. Tail robustness is different from the classical definition of robustness, which is characterized by the breakdown point (Hampel, 1971), i.e., the proportion of outliers that a procedure can tolerate before it produces arbitrarily large estimates. However, the breakdown point does not shed light on the convergence properties of an estimator, such as consistency and efficiency. Intuitively, the breakdown point characterizes a form of worst-case robustness, while tail robustness corresponds to average-case robustness. So a natural question arises: what is the connection between average-case robustness and worst-case robustness? We leave this for future work.", "introduction": "Low rank matrix approximation methods have enjoyed success in modeling and extracting information from large and complex data across various scientific disciplines. However, large-scale data sets are often accompanied by outliers due to possible measurement error, or because the population exhibits a leptokurtic distribution. As shown in She and Chen (2017), one single outlier can have a devastating effect on low rank matrix estimation. Consequently, non-robust procedures for low rank matrix estimation could lead to inferior estimates and spurious scientific conclusions. For instance, in the context of financial data, it is evident that asset prices follow heavy-tailed distributions: if the heavy-tailedness is not accounted for in statistical modeling, then the recovery of common market behaviors and asset return forecasting may be jeopardized (Cont, 2001; Müller et al., 1998). In the context of reduced rank regression, She and Chen (2017) addressed this challenge by explicitly modeling the outliers with a sparse mean shift matrix of parameters. This approach requires an augmentation of the parameter space, which introduces a new statistical challenge: it raises possible identifiability issues between the parameters of interest and the mean shift parameters. For instance, Candes et al. (2011) proposed a form of robust principal component analysis by introducing an additional sparse matrix to model the outliers. To ensure identifiability, an incoherence condition is assumed on the singular vectors of the original parameter of interest. In other words, the parameter of interest cannot be sparse.
Therefore, it is unclear whether She and Chen (2017) can be generalized to the high-dimensional setting in which the number of covariates is larger than the number of observations. Similar ideas have been considered in the context of robust linear regression (She and Owen, 2011) and robust clustering (Wang et al., 2016; Liu et al., 2012). In many statistical applications, the outliers themselves are not of interest. Rather than introducing additional parameters to model the outliers, it is more natural to develop robust statistical methods that are less sensitive to outliers. There is limited work along these lines in low rank matrix approximation problems. In fact, She and Chen (2017) pointed out that in the context of reduced rank regression, directly applying a robust loss function that down-weights the outliers, such as the Huber loss, may result in nontrivial computational and theoretical challenges due to the low rank constraint. So a natural question arises: can we develop a computationally e\ufb03cient robust sparse low rank matrix approximation procedure that is less sensitive to outliers and yet has sound statistical guarantees? In this paper, we propose a novel method for \ufb01tting robust sparse reduced rank regression in the high-dimensional setting. We propose to minimize the Huber loss function subject to both sparsity and rank constraints. This leads to a non-convex optimization problem, and is thus computational intractable. To address this challenge, we consider a convex relaxation, which can be solved via an alternating direction method of multipliers algorithm. Most of the existing theoretical analysis of reduced rank regression focuses on rank selection consistency and prediction consistency (Bunea et al., 2011; Mukherjee and Zhu, 2011; Bunea et al., 2012; Chen et al., 2013). Moreover, the theoretical results for robust reduced rank regression of She and Chen (2017) are developed under the assumption that the design matrix is low rank. Non-asymptotic analysis of the estimation error, however, is not well- studied in the context of reduced rank regression, especially in the high-dimensional setting. To bridge this gap in the literature, we provide non-asymptotic analysis of the estimation error under both Frobenius and nuclear norms for robust sparse reduced rank regression. Our results require a matrix-type restricted eigenvalue condition, and are free of incoherence conditions that arise from the identi\ufb01ability issues discussed in Candes et al. (2011). The robustness of our proposed estimator is evidenced by its \ufb01nite sample performance in the presence of heavy-tailed data, i.e., data for which high-order moments are not \ufb01nite. When the sampling distribution is heavy-tailed, there is a higher chance that some data are sampled far away from their mean. We refer to these outlying data as heavy-tailed outliers. 2 Theoretically, we establish non-asymptotic results that quantify the tradeo\ufb00between heavy- tailedness of the random noise and statistical bias: for random noise with bounded (1+\u03b4)th moment, the rate of convergence, depending on \u03b4, is slower than the sub-Gaussian-type deviation bounds; for random noise with bounded second moment, we recover results as if sub-Gaussian errors were assumed; and the transition between the two regimes is smooth. The Huber loss has a robusti\ufb01cation parameter that trades bias for robustness. 
In past work, the robusti\ufb01cation parameter is usually \ufb01xed using the 95%-e\ufb03ciency rule (among others, Huber, 1964, 1973; Portnoy, 1985; Mammen, 1989; He and Shao, 1996). Therefore, estimators obtained under Huber loss are typically biased. To achieve asymptotic unbi- asedness and robustness simultaneously, within the context of robust linear regression, Sun et al. (2018) showed that the robusti\ufb01cation parameter has to adapt to the sample size, dimensionality, and moments of the random noise. Motivated by Sun et al. (2018), we will establish theoretical results for the proposed method by allowing the robusti\ufb01cation parameter to diverge. Heavy-tailed robustness is di\ufb00erent from the conventional perspective on robust statistics under the Huber\u2019s \u03f5-contamination model, which focuses on developing robust procedures with a high breakdown point (Huber, 1964). The breakdown point of an estimator is de\ufb01ned roughly as the proportion of arbitrary outliers an estimator can tolerate before the estima- tor produces arbitrarily large estimates, or breaks down (Hampel, 1971). Since the seminal work of Tukey (1975), a number of depth-based procedures have been proposed for this purpose (among others, Liu, 1990; Zuo and Ser\ufb02ing, 2000; Mizera, 2002; Salibian-Barrera and Zamar, 2002). Other research directions for robust statistics focus on robust and re- sistant M-estimators: these include the least median of squares and least trimmed squares (Rousseeuw, 1984), the S-estimator (Rousseeuw and Yohai, 1984), and the MM-estimator (Yohai, 1987). We refer to Portnoy and He (2000) for a literature review on classical robust statistics, and Chen et al. (2018) for recent developments on non-asymptotic analysis under the \u03f5-contamination model. Notation: For any vector u = (u1, . . . , up)T \u2208Rp and q \u22651, let \u2225u\u2225q = \u0000 Pp j=1 |uj|q\u00011/q denote the \u2113q norm. Let \u2225u\u22250 = Pp j=1 1(uj \u0338= 0) denote the number of nonzero entries of u, and let \u2225u\u2225\u221e= max1\u2264j\u2264p |uj|. For any two vectors u, v \u2208Rp, let \u27e8u, v\u27e9= uTv. Moreover, for two sequences of real numbers {an}n\u22651 and {bn}n\u22651, an \u2272bn signi\ufb01es that an \u2264Cbn for some constant C > 0 that is independent of n, an \u2273bn if bn \u2272an, and an \u224dbn signi\ufb01es that an \u2272bn and bn \u2272an. If A is an m \u00d7 n matrix, we use \u2225A\u2225q to denote its order-q operator norm, de\ufb01ned by \u2225A\u2225q = maxu\u2208Rn \u2225Au\u2225q/\u2225u\u2225q. We de\ufb01ne the (p, q)-norm of a m \u00d7 n matrix A as the usual \u2113q norm of the vector of row-wise \u2113p norms of A: \r \rA \r \r p,q \u2261 \r \r\u0000 \u2225A1\u00b7\u2225p, . . . , \u2225Am\u00b7\u2225p) \r \r q, where Aj\u00b7 is the jth row of A. We use \u2225A\u2225\u2217= Pmin{m,n} k=1 \u03bbk to denote the nuclear norm of A, where \u03bbk is the kth singular value of A. Let \u2225A\u2225F = qPm i=1 Pn j=1 A2 ij be the Frobenius norm of A. Finally, let vec(A) be the vectorization of the matrix A, obtained by concatenating the columns of A into a vector. 
3" }, { "url": "http://arxiv.org/abs/1809.06024v1", "title": "A convex formulation for high-dimensional sparse sliced inverse regression", "abstract": "Sliced inverse regression is a popular tool for sufficient dimension\nreduction, which replaces covariates with a minimal set of their linear\ncombinations without loss of information on the conditional distribution of the\nresponse given the covariates. The estimated linear combinations include all\ncovariates, making results difficult to interpret and perhaps unnecessarily\nvariable, particularly when the number of covariates is large. In this paper,\nwe propose a convex formulation for fitting sparse sliced inverse regression in\nhigh dimensions. Our proposal estimates the subspace of the linear combinations\nof the covariates directly and performs variable selection simultaneously. We\nsolve the resulting convex optimization problem via the linearized alternating\ndirection methods of multiplier algorithm, and establish an upper bound on the\nsubspace distance between the estimated and the true subspaces. Through\nnumerical studies, we show that our proposal is able to identify the correct\ncovariates in the high-dimensional setting.", "authors": "Kean Ming Tan, Zhaoran Wang, Tong Zhang, Han Liu, R. Dennis Cook", "published": "2018-09-17", "updated": "2018-09-17", "primary_cat": "stat.ML", "cats": [ "stat.ML", "cs.LG" ], "main_content": "2.1 Sliced inverse regression Li (1991) considered the general regression model y = f(\u03b2T 1 x, . . . , \u03b2T Kx, \u01eb), (2) where \u01eb is a stochastic error independent of x and f(\u00b7) is an unknown link function. Model (2) is equivalent to (1) in the sense that the conditional distribution of y given x is captured by a set of K linear combinations of x (Zeng & Zhu 2010, Lemma 1). It has been shown that the central subspace Vy|x spanned by \u03b21, . . . , \u03b2K can be identified. In fact, sliced inverse regression gives the maximum likelihood estimator of the central subspace if x given y is normally distributed and y is categorical (Cook & Forzani 2008, \u00a74.1). Sliced inverse regression requires the linearity condition on the covariates x: for any a \u2208Rd, E(aTx | \u03b2T 1 x, . . . , \u03b2T Kx) = b0 + b1\u03b2T 1 x + \u00b7 \u00b7 \u00b7 + bK\u03b2T Kx (3) 3 for some constants b0, . . . , bK. The linearity condition (3) is satis\ufb01ed when the distribution of x is elliptically symmetric (Li 1991). For instance, (3) holds when x is normally distributed with covariance matrix \u03a3x. The linearity condition involves only the marginal distribution of x and is regarded as mild in the su\ufb03cient dimension reduction literature. Under the linearity condition (3), the inverse regression curve E(x | y) resides in the linear subspace spanned by \u03a3x\u03b21, . . . , \u03a3x\u03b2K (Li 1991, Theorem 3.1). In other words, \u03a3E(x|y)\u03b2k = \u03bbk\u03a3x\u03b2k for k = 1, . . . , K, where \u03a3E(x|y) is the covariance matrix of the conditional expectation E(x | y), \u03bbk is the kth largest generalized eigenvalue, \u03b2T k \u03a3x\u03b2k = 1 and \u03b2T j \u03a3x\u03b2k = 0 for j \u0338= k. Let the columns of V \u2208Rd\u00d7K represent a basis for Vy|x. Then a basis can be estimated by solving the generalized eigenvalue problem b \u03a3E(x|y)V = b \u03a3xV \u039b, (4) where b \u03a3E(x|y) is an estimator of \u03a3E(x|y), V \u2208Rd\u00d7K consists of K eigenvectors such that V Tb \u03a3xV = IK, and \u039b = diag(\u03bb1, . . . , \u03bbK) \u2208RK\u00d7K. 
By de\ufb01nition, \u03a3E(x|y) is of rank K. An estimator of V can be obtained equivalently by solving the non-convex optimization problem minimize V \u2208Rd\u00d7K \u2212tr n V Tb \u03a3E(x|y)V o subject to V Tb \u03a3xV = IK. (5) Let b V be a solution of (5). Then, the central subspace is estimated as span(b V ) and the su\ufb03cient dimension reduced variables are b V Tx. 2.2 Estimators for the conditional covariance Let (y1, x1), . . . , (yn, xn) be n independent and identically distributed observations. We denote the order statistics of the response by y(1) \u2264\u00b7 \u00b7 \u00b7 \u2264y(n). In addition, de\ufb01ne x(i)\u2217as the value of x associated with the ith order statistic of y. For instance, if the \ufb01fth observation y5 is the largest then y(n) = y5 and x(n)\u2217= x5. To estimate \u03a3E(x|y) we use the identity cov{E(x | y)} = cov(x) \u2212E{cov(x | y)}. Let T = E{cov(x | y)}. Then, b \u03a3E(x|y) = b \u03a3x \u2212b T, where b \u03a3x is the sample covariance matrix of x and b T is an estimator of T. There are two widely used estimators for T. The \ufb01rst is b T = 1 n \u230an/2\u230b X i=1 {x(2i)\u2217\u2212x(2i\u22121)\u2217}{x(2i)\u2217\u2212x(2i\u22121)\u2217}T, (6) where \u230an/2\u230bdenotes the largest integer less than or equal to n/2. The second estimator of T can be obtained by partitioning the n observations into H 4 slices according to the order statistics of y and then computing the weighted average of the sample covariance matrices within each slice. Let S1, . . . , SH be H sets containing the indices of y partitioned according to their order statistics. Then, e T = 1 H H X h=1 ( 1 nh X i\u2208Sh (xi \u2212\u00af xSh) (xi \u2212\u00af xSh)T ) . (7) Several authors have shown that b T and e T are consistent estimators of T in the lowdimensional setting (Hsing & Carroll 1992, Zhu & Ng 1995, Zhu & Fang 1996). Zhu et al. (2006) established consistency for e T when d increases as a function of n, but at a slower rate than n. Dai et al. (2015) studied an estimator of the form in (6) in the context of nonparametric regression. In \u00a7 4, we will show that b T converges to T in the high-dimensional setting under the max norm. Similar results can be shown for e T. 3 Convex Sparse Sliced Inverse Regression 3.1 Problem formulation Recall from \u00a7 2.1 that the goal of sliced inverse regression is to estimate the central subspace spanned by \u03b21, . . . , \u03b2K. Thus, instead of estimating each column of V as in (5), we propose to directly estimate the orthogonal projection \u03a0 = V V T onto the subspace spanned by V . By a change of variable, (5) can be rewritten as minimize \u03a0\u2208M \u2212tr n b \u03a3E(x|y)\u03a0 o subject to b \u03a31/2 x \u03a0b \u03a31/2 x \u2208B, (8) where B = {b \u03a31/2 x \u03a0b \u03a31/2 x : V Tb \u03a3xV = IK} and M is the set of d \u00d7 d symmetric positive semi-de\ufb01nite matrices. Instead of solving the non-convex optimization problem in (8), we propose the convex relaxation minimize \u03a0\u2208M \u2212tr n b \u03a3E(x|y)\u03a0 o subject to \u2225b \u03a31/2 x \u03a0\u03a31/2 x \u2225\u2217\u2264K, \u2225b \u03a31/2 x \u03a0b \u03a31/2 x \u2225sp \u22641, (9) 5 where \u2225b \u03a31/2 x \u03a0\u03a31/2 x \u2225\u2217 = trace(b \u03a31/2 x \u03a0\u03a31/2 x ), \u2225b \u03a31/2 x \u03a0\u03a31/2 x \u2225sp = sup v:vTv=1 ( d X j=1 (b \u03a31/2 x \u03a0\u03a31/2 x v)2 j )1/2 , are the nuclear norm and the spectral norm, respectively. 
The nuclear norm constrains the solution to be of low rank and the spectral norm constrains the maximum eigenvalue of the solution. A similar convex relaxation has been used in sparse principal component analysis and canonical correlation analysis (Vu et al. 2013, Gao et al. 2017). To achieve variable selection, we impose a lasso penalty on \u03a0 to encourage the estimated subspace to be sparse. To this end, we introduce the notion of subspace sparsity. De\ufb01nition 1. Let \u03a0 = V V T be the orthogonal projection matrix onto the subspace V. The sparsity level of V is the total number of non-zero diagonal elements in \u03a0, s = |supp{diag(\u03a0)}|. Suppose, for example, that \u03a0jj = 0. Since \u03a0jj = PK k=1 V 2 jk, this implies that Vjk = 0 for all k \u2208(1, . . . , K). That is, the entire jth row of V is zero when \u03a0jj = 0, which corresponds to not selecting the jth variable. It seems intuitive to use the trace penalty to penalize only the diagonal elements of \u03a0 for variable selection. However, if a diagonal element of \u03a0 is zero, the elements in the corresponding row and column of \u03a0 are zero. This motivates us to impose an \u21131 penalty on all elements of \u03a0. To encourage sparsity, we propose solving the optimization problem minimize \u03a0\u2208M \u2212tr n b \u03a3E(x|y)\u03a0 o +\u03c1\u2225\u03a0\u22251 subject to \u2225b \u03a31/2 x \u03a0b \u03a31/2 x \u2225\u2217\u2264K, \u2225b \u03a31/2 x \u03a0b \u03a31/2 x \u2225sp \u22641, (10) where \u2225\u03a0\u22251 = P i,j |\u03a0ij|, and \u03c1 is a positive tuning parameter that controls the sparsity of the solution b \u03a0. Unlike most existing work, our proposal does not require the inversion of the empirical covariance matrix b \u03a3x. By De\ufb01nition 1, the estimated sparse solution b \u03a0 from solving (10) will yield sparse basis vectors. 3.2 Linearized alternating direction of method of multipliers algorithm The main di\ufb03culty in solving (10) is the interaction between the penalty term and the constraints. To solve (10), we use the linearized alternating direction method of multipliers algorithm that allows us to decouple terms that are di\ufb03cult to optimize jointly (Zhang et al. 6 2011, Wang & Yuan 2012, Yang & Yuan 2013). Convergence of the algorithm has been studied in Fang et al. (2015). The details are presented in Algorithm 1 and its derivation is deferred to the Appendix. Algorithm 1 amounts to performing soft-thresholding, computing a singular value decomposition, and modifying the obtained singular values with a monotone piecewise linear function. Optimization problem (10) can also be solved via the standard alternating direction method of multipliers algorithm (Boyd et al. 2010). In this case, however, there is no closedform solution for updating the primal variable \u03a0 as in Step 3(a) of Algorithm 1. Instead of soft-thresholding, it involves solving a d2-dimensional lasso regression problem in each iteration, which may be computationally prohibitive when the number of covariates d is large. Algorithm 1 Linearized Alternating Direction of Method of Multipliers Algorithm. 1. Input the variables: b \u03a3x, b \u03a3E(x|y), the tuning parameter \u03c1, rank constraint K, the L-ADMM parameters \u03bd > 0, tolerance level \u01eb > 0, and \u03c4 = 4\u03bd\u03bb2 max(b \u03a3x), where \u03bbmax(b \u03a3x) is the largest eigenvalue of b \u03a3x. 2. Initialize the parameters: primal variables \u03a0(0) = Id, H(0) = Id, and dual variable \u0393(0) = 0. 3. 
Iterate until the stopping criterion \u2225\u03a0(t) \u2212\u03a0(t\u22121)\u2225F \u2264\u01eb is met, where \u03a0(t) is \u03a0 obtained at the tth iteration: (a) \u03a0(t+1) = Soft[\u03a0(t) + b \u03a3E(x|y)/\u03c4 \u2212\u03bd{b \u03a3x\u03a0(t)b \u03a3x\u2212b \u03a31/2 x (H(t) \u2212\u0393(t))b \u03a31/2 x }/\u03c4, \u03c1/\u03c4], where Soft denotes the soft-thresholding operator, applied element-wise to a matrix, Soft(Aij, b) = sign(Aij) max (|Aij| \u2212b, 0). (b) H(t+1) = Pd j=1 min{1, max (\u03c9j \u2212\u03b3\u2217, 0)}ujuT j , where Pd j=1 \u03c9jujuT j is the singular value decomposition of \u0393(t) + b \u03a31/2 x \u03a0(t+1)b \u03a31/2 x , and \u03b3\u2217= argmin \u03b3>0 \u03b3, subject to d X j=1 min{1, max (\u03c9j \u2212\u03b3, 0)} \u2264K. (c) \u0393(t+1) = \u0393(t) + b \u03a31/2 x \u03a0(t+1)b \u03a31/2 x \u2212H(t+1). 3.3 Tuning parameter selection Our proposed method (10) involves two user-speci\ufb01ed tuning parameters: the dimension K of the central subspace Vy|x and a sparsity tuning parameter \u03c1. Zhu et al. (2006) used the Bayesian information criterion to select K. Several authors proposed to select K using 7 bootstrap procedures (Ye & Weiss 2003, Dong & Li 2010, Ma & Zhu 2012). In addition, sequential testing procedures were developed for determining K (Li 1991, Bura & Cook 2001a, Cook & Ni 2005, Ma & Zhu 2013b). Motivated by Cook & Forzani (2008), we propose a cross-validation approach to select the tuning parameters K and \u03c1. Let b \u03a0 be the solution of (10), and recall that span(b \u03a0) is an estimate of the central subspace Vy|x. Let b \u03c01, . . . , b \u03c0K be the top K eigenvectors of b \u03a0. Given a new data point x\u2217, de\ufb01ne b R(x\u2217) = (b \u03c0T 1 x\u2217, . . . , b \u03c0T Kx\u2217)T, wi(x\u2217) = exp n \u22121 2\u2225b R(x\u2217) \u2212b R(xi)\u22252 2 o Pn i=1 exp n \u22121 2\u2225b R(x\u2217) \u2212b R(xi)\u22252 2 o, where \u2225a\u22252 = (Pd j=1 a2 j)1/2 for a \u2208Rd. The conditional mean E(y | x = x\u2217) can then be estimated as b E(y | x = x\u2217) = n X i=1 wi(x\u2217)yi. (11) Details on the derivation of (11) are deferred to \u00a7 6. We propose an M-fold cross-validation procedure to select the tuning parameters K and \u03c1 based on (11). We \ufb01rst partition the n observations into M sets, C1, . . . , CM. For each set Cm, we obtain an estimate of b \u03a0 using all observations outside the set Cm. We then predict the conditional mean for observations in Cm using (11). The tuning parameters K and \u03c1 are now chosen to minimize the overall prediction errorPM m=1 P i\u2208Cm{yi \u2212b E(y | x = xi)}2/(M|Cm|), where |Cm| is the cardinality of the set Cm. 4 Theoretical Results We study the theoretical properties of the proposed estimator b \u03a0 obtained from solving (10) under the non-asymptotic setting in which n, d, s, and K are allowed to grow. Throughout this section, we assume that the linearity condition in (3) holds and that x1, . . . , xn are independent random variables that are sub-Gaussian with covariance matrix \u03a3x. Moreover, for simplicity, we assume that the largest generalized eigenvalue \u03bb1 is bounded by some constant, and that K < min(s, log d). To quantify the distance between the estimated and population subspaces, we \ufb01rst establish a concentration result for \u03a3E(x|y) under the max norm. Recall that y(1), . . . , y(n) are the order statistics of y1, . . . , yn. Let m{y(i)} = E{x | y(i)}. We state an assumption on the smoothness of m(y). 8 Assumption 1. 
Let B > 0 and let \u039en(B) be the collection of all the n-point partitions \u2212B \u2264y(1) \u2264\u00b7 \u00b7 \u00b7 \u2264y(n) \u2264B on the interval [\u2212B, B]. A vector-valued m(y) is said to have a total variation of order 1/4 if for any \ufb01xed B > 0, lim n\u2192\u221e 1 n1/4 sup \u039en(B) n X i=2 \u2225m{y(i)} \u2212m{y(i\u22121)}\u2225\u221e= 0, where \u2225a\u2225\u221e= maxj |aj| for a \u2208Rd. A similar assumption is given by Hsing & Carroll (1992) and Zhu & Ng (1995), except that they considered the Euclidean norm on the quantity m{y(i)} \u2212m{y(i\u22121)} rather than the \u2113\u221enorm. In our problem, it su\ufb03ces to assume the smoothness condition under the \u2113\u221e norm, since we are bounding the estimation error of b T under the max norm. The following lemma provides an upper bound on the estimation error of b T in (6). Lemma 1. Assume that y1, . . . , yn \u2208[\u2212B, B] has a bounded support for some \ufb01xed B > 0. Assume that x1, . . . , xn are independent sub-Gaussian random variables with covariance matrix \u03a3x. Under Assumption 1, for su\ufb03ciently large n, there exists constants C, C\u2032 > 0 such that with probability at least 1 \u2212exp(\u2212C\u2032 log d), \u2225b T \u2212T\u2225max = C(log d/n)1/2, where \u2225A\u2225max = maxi,j |Aij| for A \u2208Rd\u00d7d. For simplicity, we assume that y has a bounded support in Lemma 1. When y is unbounded, a more re\ufb01ned analysis is needed to obtain an upper bound on the estimation error under additional assumptions on the inverse regression curve and the empirical distribution of y (Zhu et al. 2006). Similar results can be shown for the estimator e T in (7). We next state a result on the sample covariance matrix b \u03a3x, which follows from Lemma 1 of Ravikumar et al. (2011). Proposition 1. Assume that x1, . . . , xn are independent sub-Gaussian random variables with the covariance matrix \u03a3x. Let b \u03a3x be the sample covariance matrix. Then there exists constants C1, C\u2032 1 > 0 such that \u2225b \u03a3x \u2212\u03a3x\u2225max = C1(log d/n)1/2 with probability at least 1 \u2212exp(\u2212C\u2032 1 log d). 9 Corollary 1. Let b \u03a3E(x|y) = b \u03a3x \u2212b T. Under the conditions in Lemma 1 and Proposition 1, there exists constants C2, C\u2032 2 > 0 such that \u2225b \u03a3E(x|y) \u2212\u03a3E(x|y)\u2225max \u2264C2(log d/n)1/2 with probability at least 1 \u2212exp(C\u2032 2 log d). Corollary 1 follows directly from Lemma 1 and Proposition 1. Next, we state an assumption on the s-sparse eigenvalue of \u03a3x. The assumption is commonly used in the highdimensional literature (see, for instance, Meinshausen & Yu 2009). Assumption 2. The s-sparse minimal and maximal eigenvalues of \u03a3x are \u03bbmin(\u03a3x, s) = min v:\u2225v\u22250\u2264s vT\u03a3xv vTv , \u03bbmax(\u03a3x, s) = max v:\u2225v\u22250\u2264s vT\u03a3xv vTv , (12) where \u2225v\u22250 is the number of non-zero elements in v. Assume that there exists a constant c > 0 such that c\u22121 \u2264\u03bbmin(\u03a3x, s) \u2264\u03bbmax(\u03a3x, s) \u2264c. We now quantify the distance between the estimated and population subspaces. To this end, we establish the notion of distance between subspaces (Vu et al. 2013). De\ufb01nition 2. Let V and b V be K-dimensional subspaces of Rd. Let P\u03a0 and Pb \u03a0 be the projection matrices onto the subspaces V and b V, respectively. The distance between the two subspaces are de\ufb01ned as D \u0000V, b V \u0001 = \u2225P\u03a0 \u2212Pb \u03a0\u2225F. 
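The quantity in Definition 2 is straightforward to compute from arbitrary (not necessarily orthonormal) basis matrices; a short sketch, with our own function names, is given below.

```python
# Sketch of the subspace distance of Definition 2: the Frobenius norm of the
# difference between the two orthogonal projection matrices. Names are ours.
import numpy as np

def projection(B):
    """Orthogonal projection onto the column space of a full-column-rank basis B."""
    Q, _ = np.linalg.qr(B)                  # orthonormalize the columns of B
    return Q @ Q.T

def subspace_distance(V, V_hat):
    """D(V, V_hat) = || P_V - P_{V_hat} ||_F as in Definition 2."""
    return np.linalg.norm(projection(V) - projection(V_hat), ord="fro")
```

In the numerical studies of Section 5, the estimated basis can be taken to be the top K eigenvectors of the solution of (10).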
The following theorem provides an upper bound on the subspace distance as de\ufb01ned in De\ufb01nition 2 between \u03a0 and the solution b \u03a0 obtained from solving (10). Theorem 1. Let V and b V be the true and estimated subspaces, respectively. Let n > Cs2 log d/\u03bb2 K for some su\ufb03ciently large constant C, where \u03bbK is the Kth generalized eigenvalue of the pair of matrices {\u03a3E(x|y), \u03a3x}. Assume that \u03bbKK2 < s log d. Let \u03c1 \u2265C1(log d/n)1/2 for some constant C1. Under conditions in Corollary 1 and Assumption 2, D(V, b V) \u2264C2s(log d/n)1/2/\u03bbK with probability at least 1 \u2212exp(\u2212C3s) \u2212exp(\u2212C4 log d) for some constants C2, C3, and C4. Theorem 1 states that with probability tending to one, the distance between the estimated and population subspaces is proportional to s(log d/n)1/2/\u03bbK and decays to zero if s = o{\u03bbK(n/ log d)1/2}. That is, the number of active covariates cannot be too large. We will illustrate the results in Theorem 1 in \u00a7 5. 10 Remark 1. Our results allow the dimension K to increase as a function of n, d, s under the constraint that \u03bbK = \u03c9{s(log d/n)1/2}, where the notation f(n) = \u03c9{g(n)} indicates limn\u2192\u221e|f(n)/g(n)| \u2192\u221e. In other words, the signal to noise ratio in terms of the Kth generalized eigenvalue \u03bbK has to be su\ufb03ciently large to attain a small estimation error. We require that \u03bbKK2 < s log d, so K cannot be too large compared to the number of active covariates. 5 Numerical Studies We compare our proposal to three other methods on high-dimensional sparse sliced inverse regression under various simulation settings: Yin & Hilafu (2014), Li & Yin (2008), and Wang et al. (2018). Recall from De\ufb01nition 1 that subspace sparsity is determined by the diagonal elements of \u03a0. Let b \u03a0 be an estimator of \u03a0. We de\ufb01ne the true positive rate as the proportion of correctly identi\ufb01ed non-zero diagonals, and the false positive rate as the proportion of zero diagonals that are incorrectly identi\ufb01ed to be non-zeros. Furthermore, we calculate the absolute correlation coe\ufb03cient between the true su\ufb03cient predictor and its estimate. For simulation settings with K > 1, we calculate the pairwise correlation between the estimated directions and each of the true su\ufb03cient dimension reduction directions. We then select the maximum pairwise correlation for each of the true direction and take their average. In addition, we compute the subspace distance between the true and estimated subspace to illustrate the theoretical result in Theorem 1. We simulated x from Nd(0, \u03a3x), where (\u03a3x)ij = 0.5|i\u2212j| for 1 \u2264i, j \u2264d, \u01eb from N(0, 1), and employed the following regression models: 1. A linear regression model with three active predictors: y = (x1 + x2 + x3)/31/2 + 2\u01eb. In this setting, the central subspace is spanned by the directions \u03b2 = (13, 0d\u22123)T and K = 1. 2. A non-linear regression model with three active predictors: y = 1 + exp{(x1 + x2 + x3)/31/2} + \u01eb. This regression model has recently been considered in Yin & Hilafu (2014). In this study, the central subspace is spanned by the direction \u03b2 = (13, 0d\u22123)T and K = 1. 11 3. A non-linear regression model with \ufb01ve active predictors: y = x1 + x2 + x3 0.5 + (x4 + x5 + 1.5)2 + 0.1\u01eb. This simulation setting is similar to that of Chen et al. (2010). 
In this study, the central subspace is spanned by the directions \u03b21 = (13, 0d\u22123)T, \u03b22 = (03, 12, 0d\u22125)T, and K = 2. Sliced inverse regression requires estimators of the marginal and conditional covariance matrices, \u03a3x and \u03a3E(x|y). We estimated \u03a3x using the sample covariance matrix b \u03a3x. Then, \u03a3E(x|y) can be estimated using the identity b \u03a3E(x|y) = b \u03a3x \u2212e T, where e T is de\ufb01ned in (7). We constructed e T with H = 5 slices. There are two tuning parameters in our proposal (10), which we selected using the cross-validation idea outlined in \u00a7 3.3. Similarly, we used cross-validation to select tuning parameters for Wang et al. (2018). For the proposal in Li & Yin (2008), the authors proposed three di\ufb00erent methods for selecting the tuning parameters: we performed tuning parameter selection with these three methods and reported only the best results for Li & Yin (2008). We considered multiple set of tuning parameters for Yin & Hilafu (2014) and reported only the best results for their proposal. The true and false positive rates, and the absolute correlation coe\ufb03cient, averaged over 200 data sets, are reported in Table 1. Table 1: True and false positive rates, and absolute correlation coe\ufb03cient with n = (100, 200) and d = 150. The mean (standard error), averaged over 200 data sets, are reported. All entries are multiplied by 100. TPR, true positive rate; FPR, false positive rate; corr, absolute correlation coe\ufb03cient. n = 100 and d = 150 n = 200 and d = 150 Setting 1 Setting 2 Setting 3 Setting 1 Setting 2 Setting 3 TPR 96 (1) 94\u00b72 (1\u00b72) 91\u00b73 (1\u00b71) 98\u00b72 (0\u00b75) 98\u00b75 (0\u00b75) 98\u00b79 (2\u00b75) Our proposed method FPR 6 (0\u00b79) 3\u00b76 (0\u00b77) 7\u00b74 (0\u00b71) 3\u00b74 (0\u00b74) 1\u00b71 (0\u00b72) 2\u00b75 (0\u00b73) corr 88\u00b73 (0\u00b79) 86\u00b74 (1\u00b71) 74\u00b72 (1\u00b71) 90\u00b79 (0\u00b75) 92\u00b71 (0\u00b75) 79\u00b72 (0\u00b76) TPR 95\u00b73 (0\u00b79) 100 (0) 99\u00b76 (0\u00b74) 100 (0) 100 (0) 100 (0) Yin & Hilafu (2014) FPR 4\u00b79 (0\u00b71) 4\u00b78 (0\u00b71) 3\u00b75 (0\u00b71) 5\u00b79 (0\u00b72) 6\u00b77 (0\u00b73) 4\u00b75 (0\u00b72) corr 59\u00b72 (1\u00b71) 87\u00b78 (0\u00b75) 78\u00b78 (0\u00b76) 78 (0\u00b76) 94\u00b72 (0\u00b72) 87\u00b74 (0\u00b75) TPR 97\u00b78 (0\u00b71) 98\u00b71 (0\u00b71) 97\u00b78 (0\u00b71) 98\u00b79 (0\u00b71) 99\u00b71 (0\u00b71) 97\u00b79 (0\u00b71) Li & Yin (2008) FPR 8\u00b73 (1\u00b72) 3\u00b78 (0\u00b78) 23\u00b74 (1\u00b71) 1\u00b72 (0\u00b74) 0\u00b73 (0\u00b72) 19\u00b77 (1\u00b71) corr 84\u00b73 (0\u00b79) 88\u00b79 (0\u00b76) 62\u00b77 (0\u00b77) 93\u00b76 (0\u00b74) 95\u00b78 (0\u00b73) 69\u00b77 (0\u00b75) TPR 88\u00b78 (1\u00b75) 93\u00b75 (1\u00b72) 80\u00b71 (1\u00b72) 97\u00b75 (1\u00b70) 98\u00b78 (0\u00b77) 96\u00b73 (0\u00b76) Wang et al. (2018) FPR 0\u00b76 (0\u00b71) 0\u00b76 (0\u00b71) 0\u00b72 (0\u00b71) 0\u00b73 (0\u00b71) 0\u00b73 (0\u00b71) 0\u00b71 (0\u00b71) corr 81\u00b75 (1\u00b74) 85\u00b71 (1\u00b73) 69\u00b79 (1\u00b71) 91\u00b73 (1\u00b71) 93\u00b72 (1\u00b70) 84\u00b74 (0\u00b77) 12 0.00 0.05 0.10 0.15 0.00 0.05 0.10 0.15 0.20 s \u00d7 (log d n)0.5 Subspace distance (a) 0.00 0.05 0.10 0.15 0.00 0.05 0.10 0.15 0.20 0.25 s \u00d7 (log d n)0.5 Subspace distance (b) 0.00 0.05 0.10 0.15 0.20 0.25 0.0 0.2 0.4 0.6 0.8 s \u00d7 (log d n)0.5 Subspace distance (c) Figure 1: Results for the subspace distance, averaged over 500 data sets. 
Panels (a), (b), and (c) are the results for simulation settings 1, 2, and 3, respectively. The lines are obtained by varying the sample size n with d = 100 (circle black line) and d = 200 (square gray line), respectively. Table 1 shows that the proposed method performs competitively against recent proposals for high-dimensional sliced inverse regression (Yin & Hilafu 2014, Wang et al. 2018, Li & Yin 2008). In the low-dimensional setting when n = 200, our method performs competitively with all of the existing methods across all three settings. In the high-dimensional setting when n = 100, for setting one, our proposal yields the best absolute correlation between the true and estimated su\ufb03cient dimension direction. All methods perform similarly in setting two. Setting three is a harder problem and the method of Li & Yin (2008) has an extremely high false positive rate. The method of Wang et al. (2018) has the lowest true positive rate and a low correlation, and that of Yin & Hilafu (2014) slightly outperforms our proposal in terms of true positive rate and correlation. However, the tuning parameters for our proposals are selected entirely using cross-validation and that we report the best results for Yin & Hilafu (2014) after considering multiple tuning parameters. Moreover, Yin & Hilafu (2014) has the worst performance in setting one. In short, our proposed method is the most robust proposal across all three settings in the high-dimensional setting. Next, we evaluated the distance between the estimated and the population subspaces. We assume that K is known, and select \u03c1 = 2(log d/n)1/2 as suggested by Theorem 1. The results for d = (100, 200) as a function of n, averaged over 500 data sets, are presented in Figures 1(a)\u2013(c). The subspace distance between the estimated and population subspaces is indeed proportional to s(log d/n)1/2. 13 6 An Extension to Sparse Principal Fitted Components We brie\ufb02y outline an extension of the proposed method for principal \ufb01tted components in the high-dimensional setting. Cook & Forzani (2008) proposed several model-based su\ufb03cient dimension reduction methods, collectively referred to as principal \ufb01tted components. Let xy be the conditional random variable of x given y. Assume that xy is normally distributed from Nd(\u00b5y, \u2206). Furthermore, let \u00af \u00b5 = E(x), and let V\u0393 = span(\u00b5y \u2212\u00af \u00b5 | y \u2208Sy), where \u0393 \u2208 Rd\u00d7K denotes a semi-orthogonal matrix whose columns form a basis for the K-dimensional subspace V\u0393, and Sy denotes the sample space of y. Cook & Forzani (2008) considered the inverse regression model x = \u00af \u00b5 + \u0393\u03be{f(y) \u2212\u00af f(y)} + \u22061/2\u01eb, (13) where \u03be \u2208RK\u00d7r is an unrestricted rank K matrix with K < r, f(y) \u2208Rr is a known vector-valued function of y, and \u01eb is N(0, Id) that is independent of y. The covariates f(y) usually takes the form of polynomial, piecewise linear, or Fourier basis functions. Thus, the regression model (13) can e\ufb00ectively model nonlinear relationships between the covariates and the response. Principal \ufb01tted components yields sliced inverse regression as a special case when y is categorical (Cook & Forzani 2008). 
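To make the choice of f(.) concrete, the sketch below (ours, with an illustrative cubic polynomial basis) forms the centered basis matrix F with rows f(y_i) minus its average, together with the fitted covariance matrix defined in the next paragraph, which plays the role of the conditional-covariance estimate in the convex relaxation for principal fitted components.

```python
# Sketch (ours) of the principal fitted components ingredients: a centered
# polynomial basis F and the fitted covariance X'F(F'F)^{-1}F'X / n defined in
# the next paragraph. The cubic basis is an illustrative choice of f(y).
import numpy as np

def fitted_covariance(X, y, degree=3):
    n = len(y)
    Xc = X - X.mean(axis=0)                            # rows (x_i - x_bar)'
    F = np.column_stack([y ** k for k in range(1, degree + 1)])
    F = F - F.mean(axis=0)                             # rows f(y_i) - f_bar(y)
    # Fitted values from the multivariate linear regression of x on f(y).
    X_fit = F @ np.linalg.solve(F.T @ F, F.T @ Xc)
    return Xc.T @ X_fit / n                            # equals X'F(F'F)^{-1}F'X / n
```

Substituting this matrix for the conditional-covariance estimate in (10) gives the relaxation stated below as (14).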
Under model (13), Cook & Forzani (2008) showed that the maximum likelihood estimator of the central subspace V\u0393 can be obtained by solving the generalized eigenvalue problem b \u03a3\ufb01tV = b \u03a3xV \u039b, where b \u03a3\ufb01t is the sample covariance matrix of the estimated vectors from the linear regression of x on f. More speci\ufb01cally, let X denote the n \u00d7 d matrix with rows (x \u2212\u00af x)T and let F denote the n \u00d7 r matrix with rows {f(y) \u2212\u00af f(y)}T. Then, b \u03a3\ufb01t = XTF(FTF)\u22121FTX/n and b \u03a3x = XTX/n. While the estimator of the central subspace is derived under the normality assumption, it is also robust to non-normal error (Cook & Forzani 2008, Theorem 3\u00b75). Therefore, normality assumption on the covariates is not crucial to the principal \ufb01tted components. A convex relaxation for the principal \ufb01tted components takes the form minimize \u03a0\u2208M \u2212tr \u0010 b \u03a3\ufb01t\u03a0 \u0011 + \u03c1\u2225\u03a0\u22251 subject to \u2225b \u03a31/2 x \u03a0b \u03a31/2 x \u2225\u2217\u2264K, \u2225b \u03a31/2 x \u03a0b \u03a31/2 x \u2225sp \u22641. (14) Algorithm 1 can directly be adapted to solve (14); with some abuse of notation, let b \u03a0 be the solution to (14) and let b \u03c01, . . . , b \u03c0K be the K largest eigenvector of b \u03a0. One of the main advantages of principal \ufb01tted components is that a model for x given y can be inverted to provide a method for estimating the mean function E(y | x) without 14 specifying a model for the joint distribution (y, x). Let R(x) be the K-dimensional su\ufb03cient reduction. Let g(x | y) and g{R(x) | y} be the conditional densities of x given y and R(x) given y. Then, the conditional expectation can be written as E(y | x) = E{y | R(x)} = E[yg{R(x) | y}] E[g{R(x) | y}] , where the expectation is taken with respect to the random variable y. Under the normality assumption on xy, for a new data point x\u2217, the conditional mean can be estimated as b E(y | x = x\u2217) = n X i=1 wi(x\u2217)yi, wi(x\u2217) = exp n \u22121 2\u2225b R(x\u2217) \u2212b R(xi)\u22252 2 o Pn i=1 exp n \u22121 2\u2225b R(x\u2217) \u2212b R(xi)\u22252 2 o, where b R(x\u2217) = (b \u03c0T 1 x\u2217, . . . , b \u03c0T Kx\u2217)T is an estimate of the K-dimensional su\ufb03cient reduction. This motivates the cross-validation procedure described in \u00a7 3.3 for selecting the tuning parameters K and \u03c1. 7 Discussion We have proposed a convex relaxation for sparse sliced inverse regression in the highdimensional setting, using the fact that sliced inverse regression is a special case of the generalized eigenvalue problem. As discussed in Chen et al. (2010) and Li (2007), many other su\ufb03cient dimension reduction methods can be formulated as sparse generalized eigenvalue problems. These include sliced average variance estimation, directional regression, principal \ufb01tted components, principal hessian direction, and iterative hessian transformation. Therefore, these models can all be applied using the proposed method in (10) with di\ufb00erent choices of covariance matrices. Many su\ufb03cient dimension reduction methods rely on the linearity condition (3), but this is not always satis\ufb01ed. To address this, Ma & Zhu (2012) proposed a semiparametric approach for su\ufb03cient dimension reduction that removes the linearity condition. 
In future work, it will be of interest to propose a high-dimensional semiparametric approach for suf\ufb01cient dimension reduction using recently developed theoretical tools in high-dimensional statistics. Many authors have proposed methods to estimate the subspace dimension K. These include the Bayesian information criterion, the bootstrap, and sequential testing (Zhu et al. 2006, Ye & Weiss 2003, Dong & Li 2010, Ma & Zhu 2012, Li 1991, Bura & Cook 2001a, 15 Cook & Ni 2005). Ma & Zhang (2015) proposed a validated information criterion for selecting K in dimension reduction models. However, these methods are not directly applicable to the high-dimensional setting. It will be of interest to develop a principled way to estimate the subspace dimension K consistently in this setting. A Derivation of Algorithm 1 In this section, we derive the linearized alternating direction methods of multiplier algorithm for solving (10). Optimization problem (10) is equivalent to minimize \u03a0,H\u2208M \u2212tr n b \u03a3E(x|y)\u03a0 o + \u03c1\u2225\u03a0\u22251 + g(H) subject to b \u03a31/2 x \u03a0b \u03a31/2 x = H, (15) where g(H) = \u221e1(\u2225H\u2225\u2217>K) + \u221e1(\u2225H\u2225sp>1). The scaled augmented Lagrangian for (15) takes the form L(\u03a0, H, \u0393) = \u2212tr n b \u03a3E(x|y)\u03a0 o + \u03c1\u2225\u03a0\u22251 + g(H) + \u03bd 2 \r \r \rb \u03a31/2 x \u03a0b \u03a31/2 x \u2212H + \u0393 \r \r \r 2 F . (16) The proposed algorithm requires the updates for \u03a0, H, and \u0393. We now proceed to derive the updates for \u03a0 and H. Update for \u03a0: To obtain a closed-form update for \u03a0, we linearize the quadratic term in the scaled augmented Lagrangian (16) as suggested by Fang et al. (2015). For two matrices A, B \u2208Rd\u00d7d, we use the identity vec (ABA) = \u0000AT \u2297A \u0001 vec(B), where vec(\u00b7) is the vectorization operation which converts a matrix into a column vector and \u2297is the Kronecker product. Let \u03c0 = vec(\u03a0), h = vec(H), and \u03b3 = vec(\u0393). Thus, an update for \u03c0 can be obtained by minimizing \u2212vec n b \u03a3E(x|y) oT \u03c0 + \u03c1\u2225\u03c0\u22251 + \u03bd 2 \r \r \r \u0010 b \u03a31/2 x \u2297b \u03a31/2 x \u0011 \u03c0 \u2212h + \u03b3 \r \r \r 2 2 . (17) However, there is no closed-form solution for \u03c0 due to the quadratic term in (17). Similar to that of Fang et al. (2015), we linearize the quadratic term in (17) by applying a second-order Taylor Expansion and obtain the following update for \u03c0: \u03c0(t+1) = argmin \u03c0 \u0014 \u2212vec n b \u03a3E(x|y) oT \u03c0 + \u03c1\u2225\u03c0\u22251 + \u03bd n \u03c0 \u2212\u03c0(t)oT m(t) + \u03c4 2\u2225\u03c0 \u2212\u03c0(t)\u22252 2 \u0015 , (18) 16 where \u03c0(t) is the value of \u03c0 at the tth iteration and m(t) = (b \u03a31/2 x \u2297b \u03a31/2 x ){b \u03a31/2 x \u2297b \u03a31/2 x \u03c0(t) \u2212h(t) + \u03b3(t)}. As suggested by Fang et al. (2015), we pick \u03c4 > 2\u03bd\u03bb2 max(b \u03a3x) to ensure the convergence of the linearized alternating direction method of multipliers algorithm. 
Problem (18) is equivalent to \u03a0(t+1) = argmin \u03a0\u2208M \u03c1\u2225\u03a0\u22251 + \u03c4 2 \r \r \r \r\u03a0 \u2212 \u0014 \u03a0(t) + 1 \u03c4 b \u03a3E(x|y) \u2212\u03bd \u03c4 b \u03a3x\u03a0(t)b \u03a3x + \u03bd \u03c4 b \u03a31/2 x {H(t) \u2212\u0393(t)}b \u03a31/2 x \u0015\r \r \r \r 2 F , (19) which has the closed-form solution \u03a0(t+1) = Soft \u0012 \u03a0(t) + 1 \u03c4 b \u03a3E(x|y) \u2212\u03bd \u03c4 h b \u03a3x\u03a0(t)b \u03a3x \u2212b \u03a31/2 x \b H(t) \u2212\u0393(t)\t b \u03a31/2 x i , \u03c1 \u03c4 \u0013 . Here, Soft denotes the soft-thresholding operator, applied element-wise to a matrix: Soft(Aij, b) = sign(Aij) max (|Aij| \u2212b, 0). Update for H: The update for H can be obtained as H(t+1) = argmin H\u2208M g(H) + \u03bd 2 \r \r \rH \u2212 n b \u03a31/2 x \u03a0(t+1)b \u03a31/2 x + \u0393(t)o\r \r \r 2 F , (20) which has a closed-form solution. The following proposition follows directly from Lemma 4.1 in Vu et al. (2013) and Proposition 10.2 in Gao et al. (2017). Proposition 2. Let Pd j=1 \u03c9jujuT j be the singular value decomposition of W. Let H\u2217be the solution to the optimization problem minimize H\u2208M \u2225H \u2212W\u2225F subject to \u2225H\u2225\u2217\u2264K and \u2225H\u2225sp \u22641. Then, H\u2217= Pd j=1 min{1, max (\u03c9j \u2212\u03b3\u2217, 0)}ujuT j , where \u03b3\u2217= argmin \u03b3>0 \u03b3 subject to d X j=1 min {1, max (\u03c9j \u2212\u03b3, 0)} \u2264K. Let Pd j=1 \u03c9jujuT j be the singular value decomposition of \u0393(t) + b \u03a31/2 x \u03a0(t+1)b \u03a31/2 x . Thus, by 17 proposition (2), we have H(t+1) = d X j=1 min {1, max (\u03c9j \u2212\u03b3\u2217, 0)} ujuT j , where \u03b3\u2217= argmin \u03b3>0 \u03b3 subject to d X j=1 min {1, max (\u03c9j \u2212\u03b3, 0)} \u2264K. (21) Finally, the update for the dual variable \u0393 takes the form \u0393(t+1) = \u0393(t) + b \u03a31/2 x \u03a0(t+1)b \u03a31/2 x \u2212H(t+1). B Proof of Lemma 1 Recall from (6) that b T = 1 n \u230an/2\u230b X i=1 {x(2i)\u2217\u2212x(2i\u22121)\u2217}{x(2i)\u2217\u2212x(2i\u22121)\u2217}T. (22) In this section, we show that the estimation error between b T and T is bounded above under the max norm. Recall that y(i) is the ith order statistic of y1, . . . , yn and x(i)\u2217is the value of x corresponding to y(i). Denote the inverse regression curve and its residual by m(y) = E(x | y) and \u01eb = x \u2212m(y). It is shown in ? that the concomitants \u01eb(i)\u2217= x(i)\u2217\u2212m{y(i)} are conditionally independent with mean zero, given the order statistics y(i). We denote the jth element of m(y), \u01eb, and x as mj(y), \u01ebj, and xj, respectively. To prove Lemma 1, we state some properties of the residual in the following proposition. The proof of the following proposition is a direct consequence of Jensen\u2019s inequality and the properties of sub-Gaussian random variables. Proposition 3. Assume that x is sub-Gaussian with mean zero and covariance matrix \u03a3x. Then, xj has a sub-Gaussian norm \u2225xj\u2225\u03c82 \u2264C{(\u03a3x)jj}1/2, where \u2225xj\u2225\u03c82 = supp\u22651 p\u22121/2(E|xj|p)1/p. Moreover, mj(y) and \u01ebj are sub-Gaussian with sub-Gaussian norm \u2225xj\u2225\u03c82 and 2\u2225xj\u2225\u03c82, respectively. 18 We now prove Lemma 1. 
Substituting x = m(y) + \u01eb into (22), we have b T = 1 n \u230an/2\u230b X i=1 {x(2i)\u2217\u2212x(2i\u22121)\u2217}{x(2i)\u2217\u2212x(2i\u22121)\u2217}T = 1 n \u230an/2\u230b X i=1 [m{y(2i)} \u2212m{y(2i\u22121)}][m{y(2i)} \u2212m{y(2i\u22121)}]T + 1 n \u230an/2\u230b X i=1 {\u01eb(2i)\u2217\u2212\u01eb(2i\u22121)\u2217}[m{y(2i)} \u2212m{y(2i\u22121)}] + 1 n \u230an/2\u230b X i=1 [m{y(2i)} \u2212m{y(2i\u22121)}]{\u01eb(2i)\u2217\u2212\u01eb(2i\u22121)\u2217}T + 1 n \u230an/2\u230b X i=1 {\u01eb(2i)\u2217\u2212\u01eb(2i\u22121)\u2217}{\u01eb(2i)\u2217\u2212\u01eb(2i\u22121)\u2217}T = W1 + W2 + W3 + W4. (23) Thus, by the triangle inequality, we have \u2225b T \u2212T\u2225max = \u2225W1\u2225max + \u2225W2\u2225max + \u2225W3\u2225max + \u2225W4 \u2212T\u2225max, (24) where \u2225W1\u2225max = maxj,k |(W1)jk| is the largest absolute element in W1. It su\ufb03ces to show that the (j, k)th element of the above terms are bounded above. For su\ufb03ciently large n, we have (W1)jk = 1 n \u230an/2\u230b X i=1 [mj{y(2i)} \u2212mj{y(2i\u22121)}][mk{y(2i)} \u2212mk{y(2i\u22121)}] \u2264n\u22121/2 \" 1 n1/4 sup \u039en(B) n X i=2 \u2225m{y(i)} \u2212m{y(i\u22121)}\u2225\u221e #2 \u2264\u03c4 2n\u22121/2, (25) where the last inequality holds by Assumption 1 for some arbitrary small constant \u03c4 > 0. Since this upper bound hold uniformly for all j, k, we have \u2225W1\u2225max \u2264\u03c4 2n\u22121/2. We now obtain an upper bound for W2. We use the fact that E(\u01ebij) = E(xij)\u2212E{E(xij | yi)} = 0. By Proposition 3, \u01ebij is sub-Gaussian with mean zero and sub-Gaussian norm 2\u2225xj\u2225\u03c82. Suppose that 2\u2225xj\u2225\u03c82 \u2264L for some constant L > 0. Thus, by Lemma 9, there exists constant C such that pr{maxi,j |\u01ebij| \u2265C(log d)1/2} \u2264exp(\u2212C\u2032 log d) with C\u2032 > 2. We 19 have with probability at least 1 \u2212exp(\u2212C\u2032 log d), |(W2)jk| = \f \f \f \f \f \f 1 n \u230an/2\u230b X i=1 {\u01eb(2i)\u2217,j \u2212\u01eb(2i\u22121)\u2217,j}[mk{y(2i)} \u2212mk{y(2i\u22121)}]T \f \f \f \f \f \f \u2264 2 n3/4 max i,j |\u01ebij| \" 1 n1/4 sup \u039en(B) n X i=2 \u2225m{y(i)} \u2212m{y(i\u22121)}\u2225\u221e # \u2264C 1 n1/4(log d/n)1/2\u03c4, (26) where the last inequality follows from Assumption 1 for some arbitrarily small constant \u03c4 > 0. By taking the union bound, we have pr \u001a \u2225W2\u2225max \u2265C 1 n1/4(log d/n)1/2\u03c4 \u001b \u2264 X j,k pr \u001a |(W2)jk| \u2265C 1 n1/4(log d/n)1/2\u03c4 \u001b \u2264d2pr \u001a |(W2)jk| \u2265C 1 n1/4(log d/n)1/2\u03c4 \u001b \u2264exp(\u2212C\u2032 log d + 2 log d) = exp(\u2212C\u2032\u2032 log d). Thus, we have \u2225W2\u2225max \u2264Cn\u22121/4(log d/n)1/2\u03c4 with probability at least 1 \u2212exp(\u2212C\u2032\u2032 log d). The term \u2225W3\u2225max can be upper bounded similarly. We now provide an upper bound on the term \u2225W4 \u2212T\u2225max. We have (W4)jk \u2212Tjk = 1 n \u230an/2\u230b X i=1 {\u01eb(2i)\u2217,j \u2212\u01eb(2i\u22121)\u2217,j}{\u01eb(2i)\u2217,k \u2212\u01eb(2i\u22121)\u2217,k} \u2212Tjk = 1 n n X i=1 \u01ebij\u01ebik \u2212Tjk ! \u22121 n \u230an/2\u230b X i=1 \u01eb(2i)\u2217,j\u01eb(2i\u22121)\u2217,k \u22121 n \u230an/2\u230b X i=1 \u01eb(2i)\u2217,k\u01eb(2i\u22121)\u2217,j. (27) By Lemma 10, \u01ebij\u01ebik \u2212Tjk is sub-exponential random variable with mean zero and with sub-exponential norm bounded by CL2, since \u2225xj\u2225\u03c82 \u2264L for some constant L > 0. 
By Lemma 11, we obtain pr \f \f \f \f \f 1 n n X i=1 \u01ebij\u01ebik \u2212Tjk \f \f \f \f \f \u2265t ! \u22642 exp \u001a \u2212C\u2032 min \u0012 nt2 C2L4, nt CL2 \u0013\u001b . Similar to the proof of upper bound for \u2225W2\u2225max, taking the union bound and picking 20 t = C(log d/n)1/2 for some su\ufb03ciently large constant C, we have max j,k \f \f \f \f \f 1 n n X i=1 \u01ebij\u01ebik \u2212Tjk \f \f \f \f \f \u2264C(log d/n)1/2, with probability at least 1 \u2212exp(\u2212C\u2032 log d). It remains to show that the rest of the terms in (27) is upper bounded by C(log d/n)1/2. Throughout the rest of the argument, we conditioned on the event that the order statistics are given. Given the order statistics y(1), . . . , y(n), the concomitants \u01eb(2i)\u2217,j and \u01eb(2i\u22121)\u2217,k are independent with mean zero (?). Thus, by Lemma 10, \u01eb(2i)\u2217,j\u01eb(2i\u22121)\u2217,k is sub-exponential with mean zero and sub-exponential norm upper bounded by CL2. By Lemma 11, taking the union bound, and picking t = C(log d/n)1/2, we have pr \uf8f1 \uf8f2 \uf8f3max j,k \f \f \f \f \f \f 1 n \u230an/2\u230b X i=1 \u01eb(2i)\u2217,j\u01eb(2i\u22121)\u2217,k \f \f \f \f \f \f \u2265C(log d/n)1/2 | y(1), . . . , y(n) \uf8fc \uf8fd \uf8fe\u2264exp(\u2212C\u2032 log d). Since the above expression holds for any order statistics of y, we have pr \uf8f1 \uf8f2 \uf8f3max j,k \f \f \f \f \f \f 1 n \u230an/2\u230b X i=1 \u01eb(2i)\u2217,j\u01eb(2i\u22121)\u2217,k \f \f \f \f \f \f \u2265C(log d/n)1/2 \uf8fc \uf8fd \uf8fe = E \uf8ee \uf8f0pr \uf8f1 \uf8f2 \uf8f3max j,k \f \f \f \f \f \f 1 n \u230an/2\u230b X i=1 \u01eb(2i)\u2217,j\u01eb(2i\u22121)\u2217,k \f \f \f \f \f \f \u2265C(log d/n)1/2 | y(1), . . . , y(n) \uf8fc \uf8fd \uf8fe \uf8f9 \uf8fb \u2264exp(\u2212C\u2032 log d), which implies max j,k \f \f \f \f \f \f 1 n \u230an/2\u230b X i=1 \u01eb(2i)\u2217,j\u01eb(2i\u22121)\u2217,k \f \f \f \f \f \f \u2264C(log d/n)1/2 with probability at least 1 \u2212exp(\u2212C\u2032 log d). Thus, we have \u2225W4 \u2212T\u2225max \u2264C(log d/n)1/2 with probability at least 1 \u2212exp(\u2212C\u2032 log d). Combining the upper bounds in (25)-(27), we have \u2225b T \u2212T\u2225max \u2264n\u22121/2\u03c4 2 + Cn\u22121/4\u03c4(log d/n)1/2 + C(log d/n)1/2 \u2264C\u2032(log d/n)1/2 for su\ufb03ciently large n, since (log d/n)1/2 is the dominating term. 21 C Proof of Theorem 1 The proof of Theorem 1 is motivated from Gao et al. (2017) in the context of sparse canonical correlation analysis. The proofs of the technical lemmas in this section is deferred to Section D. We begin with de\ufb01ning some notation that will be used throughout the proof of Theorem 1. Let Sv be the set containing indices of non-zero rows of V \u2208Rd\u00d7K. Thus, |Sv| = s. Also, recall that \u03a0 = V V T and let S = supp(\u03a0). Thus, |S| \u2264s2. Let Sc be the complementary set of S. Let b \u03a0 be a solution from solving (10). Let V and b V be the subspaces for \u03a0 and b \u03a0. The goal is to establish the rate of convergence for the subspace distance D(V, b V). The following proposition reparametrizes the conditional covariance matrix \u03a3E(x|y) in terms of V , \u039b, and \u03a3x. Proposition 4. The solution, up to sign jointly, of (4) is V if and only if \u03a3E(x|y) can be written as \u03a3E(x|y) = \u03a3xV \u039bV T\u03a3x, where V T\u03a3xV = IK. Note that b V is normalized with respect to the sample covariance matrix b \u03a3x. In contrast, the truth V is normalized with respect to \u03a3x. 
Motivated by Gao et al. (2017), to facilitate the proof, we let e \u03a3E(x|y) = b \u03a3xV \u039bV Tb \u03a3x, e V = V (V Tb \u03a3xV )\u22121/2, e \u03a0 = e V e V T, and e \u039b = (V Tb \u03a3xV )1/2\u039b(V Tb \u03a3xV )1/2. The intuition of e V and e \u039b is to approximate V and \u039b, respectively, since V Tb \u03a3xV is close to the identity matrix IK. The following lemma establishes concentration between e \u039b and \u039b, and between \u03a0 and e \u03a0. Lemma 2. For any C > 0, there exists C\u2032 > 0 depending only on C such that \u2225e \u039b \u2212\u039b\u2225sp \u2264C(s/n)1/2 and \u2225e \u03a0 \u2212\u03a0\u2225F \u2264CK(s/n)1/2, with probability greater than 1 \u2212exp(\u2212C\u2032s). Throughout the proof of Theorem 1, we let \u2206= b \u03a0 \u2212e \u03a0. We further partition the set Sc into J sets such that Sc 1 is the index set of the largest l entries of |\u2206|, Sc 2 is the index set of the second largest l entries of |\u2206|, and so forth, with |Sc J| \u2264l. We \ufb01rst state some technical lemmas that are needed to prove Theorem 1. For two matrices A, B, we de\ufb01ne the 22 inner product as \u27e8A, B\u27e9= tr(AB) The following curvature lemma establishes a bound on the curvature of the objective function. It follows directly from Gao et al. (2017). Lemma 3. Let U \u2208Rd\u00d7K be an orthonormal matrix, i.e., UTU = IK. Furthermore, let L \u2208RK\u00d7K and D = diag(d1, . . . , dK) \u2208RK\u00d7K, where d1 \u2265\u00b7 \u00b7 \u00b7 \u2265dK > 0. If E \u2208Rd\u00d7d is such that \u2225E\u2225\u2217\u2264K and \u2225E\u2225sp \u22641, then ULUT, UUT \u2212E \u000b \u2265dK 2 \r \rUUT \u2212E \r \r2 F \u2212\u2225L \u2212D\u2225F\u2225UUT \u2212E\u2225F. The following lemma shows that e \u03a0 satis\ufb01es the constrains \r \rb \u03a31/2 x e \u03a0b \u03a31/2 x \r \r \u2217\u2264K and \r \rb \u03a31/2 x e \u03a0b \u03a31/2 x \r \r sp \u22641. Lemma 4. Let e V = V (V Tb \u03a3xV )\u22121/2 and let e \u03a0 = e V e V T. Then, e \u03a0 satis\ufb01es both the constrains \r \rb \u03a31/2 x e \u03a0b \u03a31/2 x \r \r \u2217\u2264K and \r \rb \u03a31/2 x e \u03a0b \u03a31/2 x \r \r sp \u22641. Lemma 5. We have J X j=2 \u2225\u2206Sc j \u2225F \u2264l\u22121/2\u2225\u2206Sc\u22251. Lemma 6. Assume that x1, . . . , xn are independent sub-Gaussian random variables with covariance matrix \u03a3x. Let b \u03a3x be the sample covariance matrix. Let s < min(n, d). For any C > 0, there exists a constant C\u2032 > 0 such that c\u22121 \u2212C{s log(ed)/n}1/2 \u2264\u03bbmin(b \u03a3x, s) \u2264\u03bbmax(b \u03a3x, s) \u2264c + C{s log(ed)/n}1/2, with probability greater than 1\u2212exp{\u2212C\u2032s log(ed)}, where c is a constant from Assumption 2. Finally, we need a lemma on the concentration between e \u03a3E(x|y) and b \u03a3E(x|y). Lemma 7. Let b \u03a3E(x|y) be the estimator as de\ufb01ned in Corollary 1 and let e \u03a3E(x|y) = b \u03a3xV \u039bV Tb \u03a3x. Assume that K\u03bb1(log d/n)1/2 is bounded by some constant. We have \u2225e \u03a3E(x|y) \u2212b \u03a3E(x|y)\u2225max \u2264C(log d/n)1/2 with probability at least 1 \u2212exp(\u2212C\u2032 log d). We now provide the proof of Theorem 1. 23 Proof. Recall that \u2206= b \u03a0 \u2212e \u03a0. The proof involves obtaining an upper bound and a lower bound on the quantity \u2225b \u03a31/2 x \u2206b \u03a31/2 x \u22252 F. Combining the upper and lower bounds, we obtain an upper bound on \u2225\u2206\u2225F. 
Upper bound for \u2225b \u03a31/2 x \u2206b \u03a31/2 x \u22252 F: By Lemma 4, e \u03a0 is a feasible solution of (10). Since b \u03a0 is the optimum solution of (10), we have \u2212\u27e8b \u03a3E(x|y), b \u03a0\u27e9+ \u03c1\u2225b \u03a0\u22251 \u2264\u2212\u27e8b \u03a3E(x|y), e \u03a0\u27e9+ \u03c1\u2225e \u03a0\u22251, implying \u2212\u27e8e \u03a3E(x|y), \u2206\u27e9\u2264\u27e8b \u03a3E(x|y) \u2212e \u03a3E(x|y), \u2206\u27e9+ \u03c1\u2225e \u03a0\u22251 \u2212\u03c1\u2225b \u03a0\u22251 \u2264\u2225b \u03a3E(x|y) \u2212e \u03a3E(x|y)\u2225max\u2225\u2206\u22251 + \u03c1\u2225e \u03a0\u22251 \u2212\u03c1\u2225e \u03a0 + \u2206\u22251 \u2264\u03c1 2\u2225\u2206\u22251 + \u03c1\u2225e \u03a0\u22251 \u2212\u03c1\u2225e \u03a0 + \u2206\u22251, (28) where the second inequality holds by Holder\u2019s inequality, and the last inequality holds by picking \u03c1 > 2\u2225e \u03a3E(x|y) \u2212b \u03a3E(x|y)\u2225max. By de\ufb01nition of e \u03a0, we have supp(e \u03a0) = supp(\u03a0). Thus, we obtain \u2225e \u03a0\u22251 \u2212\u2225e \u03a0 + \u2206\u22251 = \u2225e \u03a0S\u22251 \u2212\u2225e \u03a0S + \u2206S\u22251 \u2212\u2225\u2206Sc\u22251 \u2264\u2225\u2206S\u22251 \u2212\u2225\u2206Sc\u22251, (29) where the last inequality follows from the triangle inequality. Substituting (29) into (28), we obtain \u2212\u27e8e \u03a3E(x|y), \u2206\u27e9\u22643\u03c1 2 \u2225\u2206S\u22251 \u2212\u03c1 2\u2225\u2206Sc\u22251. (30) Next, we obtain a lower bound for \u2212\u27e8e \u03a3E(x|y), \u2206\u27e9. By an application of Lemma 3, we have \u2212\u27e8e \u03a3E(x|y), \u2206\u27e9= \u27e8b \u03a31/2 x V \u039bV Tb \u03a31/2 x , b \u03a31/2 x (e \u03a0 \u2212b \u03a0)b \u03a31/2 x \u27e9 = \u27e8b \u03a31/2 x e V e \u039be V Tb \u03a31/2 x , b \u03a31/2 x (e \u03a0 \u2212b \u03a0)b \u03a31/2 x \u27e9 \u2265\u03bbK 2 \u2225b \u03a31/2 x (e \u03a0 \u2212b \u03a0)b \u03a31/2 x \u22252 F \u2212\u2225e \u039b \u2212\u039b\u2225F\u2225b \u03a31/2 x (e \u03a0 \u2212b \u03a0)b \u03a31/2 x \u2225F, (31) where \u03bbK is the Kth generalized eigenvalue of the pair of matrices {\u03a3E(x|y), \u03a3x}. For notational convenience, we let \u03b3 = \u2225e \u039b \u2212\u039b\u2225F. Combining (31) and (30), we obtain \u03bbK\u2225b \u03a31/2 x \u2206b \u03a31/2 x \u22252 F \u22642\u03b3\u2225b \u03a31/2 x \u2206b \u03a31/2 x \u2225F + 3\u03c1\u2225\u2206S\u22251 \u2212\u03c1\u2225\u2206Sc\u22251, (32) implying \u2225b \u03a31/2 x \u2206b \u03a31/2 x \u22252 F \u22642\u03b3 \u03bbK \u2225b \u03a31/2 x \u2206b \u03a31/2 x \u2225F + 3\u03c1 \u03bbK \u2225\u2206S\u22251. (33) 24 Thus, \u2225b \u03a31/2 x \u2206b \u03a31/2 x \u22252 F \u22644\u03b32 \u03bb2 K + 6\u03c1 \u03bbK \u2225\u2206S\u22251, (34) where we use the fact that ax2 \u2264bx + c implies that x2 \u2264b2/a2 + 2c/a. Lower bound for \u2225b \u03a31/2 x \u2206b \u03a31/2 x \u22252 F: We start by showing that \u2206lies in a restricted set, referred to as the generalized cone condition in Gao et al. (2017). By (32), we have 0 \u22642\u03b3\u2225b \u03a31/2 x \u2206b \u03a31/2 x \u2225F + 3\u03c1\u2225\u2206S\u22251 \u2212\u03c1\u2225\u2206Sc\u22251 \u2264\u03b32 \u03bbK + \u03bbK\u2225b \u03a31/2 x \u2206b \u03a31/2 x \u22252 F + 3\u03c1\u2225\u2206S\u22251 \u2212\u03c1\u2225\u2206Sc\u22251 \u22645\u03b32 \u03bbK + 9\u03c1\u2225\u2206S\u22251 \u2212\u03c1\u2225\u2206Sc\u22251, where the second inequality is obtained by using the fact that 2ab \u2264a2 + b2, and the last inequality is obtained by substituting (34) into the second expression. 
This implies that \u2225\u2206Sc\u22251 \u22649\u2225\u2206S\u22251 + 5\u03b32 \u03c1\u03bbK . (35) Furthermore, by Lemma 5 and (35), we have J X j=2 \u2225\u2206Sc j \u2225F \u2264l\u22121/2\u2225\u2206Sc\u22251 \u22649l\u22121/2\u2225\u2206S\u22251 + 5\u03b32 \u03c1\u03bbKl1/2 \u22649sl\u22121/2\u2225\u2206S\u2225F + 5\u03b32 \u03c1\u03bbKl1/2, (36) where the last inequality is obtained by using the fact that \u2225\u2206S\u22251 \u2264s\u2225\u2206S\u2225F. Thus, by the triangle inequality, we have \u2225b \u03a31/2 x \u2206b \u03a31/2 x \u2225F \u2265\u2225b \u03a31/2 x \u2206S\u222aSc 1 b \u03a31/2 x \u2225F \u2212 J X j=2 \u2225b \u03a31/2 x \u2206Sc j b \u03a31/2 x \u2225F \u2265\u03bbmin(b \u03a3x, s + l)\u2225\u2206S\u222aSc 1\u2225F \u2212\u03bbmax(b \u03a3x, l) J X j=2 \u2225\u2206Sc j \u2225F, (37) where \u03bbmin(b \u03a3x, s+l) and \u03bbmax(b \u03a3x, l) are the minimum (s+l)-sparse eigenvalue and maximum l-sparse eigenvalue of b \u03a3x, respectively. Substituting (36) into (37), we have \u2225b \u03a31/2 x \u2206b \u03a31/2 x \u2225F \u2265 n \u03bbmin(b \u03a3x, s + l) \u22129\u03bbmax(b \u03a3x, l)sl\u22121/2o \u2225\u2206S\u222aSc 1\u2225F \u22125\u03bbmax(b \u03a3x, l)\u03b32 \u03c1\u03bbKl1/2 . (38) 25 Choose l = c1s2. By Lemma S5, we have with probability at least 1 \u2212exp{C\u2032(c1s2 + s) log ed}, c\u22121\u2212C \u001a(c1s2 + s) log(ed) n \u001b1/2 \u2264\u03bbmin(b \u03a3x, s+l) \u2264\u03bbmax(b \u03a3x, s+l) \u2264c+C \u001a(c1s2 + s) log(ed) n \u001b1/2 . Thus, we have \u03bbmin(b \u03a3x, s + l) \u22129\u03bbmax(b \u03a3x, l)sl\u22121/2 = \u03bbmin(b \u03a3x, s + l) \u22129c\u22121/2 1 \u03bbmax(b \u03a3x, l) \u2265c\u22121 \u2212C \u001a(c1s2 + s) log(ed) n \u001b1/2 \u22129c\u22121/2 1 c \u22129c\u22121/2 1 C \u001a(c1s2 + s) log(ed) n \u001b1/2 . This quantity can be lower bounded by a constant C1 > 0 as long as c1 is su\ufb03ciently large and that n > C2(c1s2 + s) log(ed). Similarly, the term 5\u03bbmax(b \u03a3x, l) can be upper bounded by a constant C3 > 0 by Assumption 2. Thus, we have \u2225b \u03a31/2 x \u2206b \u03a31/2 x \u2225F \u2265C1\u2225\u2206S\u222aSc 1\u2225F \u2212C3 \u03b32 \u03c1\u03bbKl1/2. (39) Combining the lower and upper bounds for \u2225b \u03a31/2 x \u2206b \u03a31/2 x \u2225F: By (39) and (34), we obtain C1\u2225\u2206S\u222aSc 1\u2225F \u2264C3 \u03b32 \u03c1\u03bbKl1/2 + \u2225b \u03a31/2 x \u2206b \u03a31/2 x \u2225F \u2264C3 \u03b32 \u03c1\u03bbKl1/2 + \u00124\u03b32 \u03bb2 K + 6\u03c1 \u03bbK \u2225\u2206S\u22251 \u00131/2 \u2264C3 \u03b32 \u03c1\u03bbKl1/2 + \u00124\u03b32 \u03bb2 K + 6\u03c1s \u03bbK \u2225\u2206S\u222aSc 1\u2225F \u00131/2 , where we use the fact that \u2225\u2206S\u22251 \u2264s\u2225\u2206S\u2225F \u2264s\u2225\u2206S\u222aSc 1\u2225F. This implies that \u2225\u2206S\u222aSc 1\u2225F \u2264C4 \u03b32 \u03c1\u03bbKl1/2 + C5 \u00124\u03b32 \u03bb2 K + 6\u03c1s \u03bbK \u2225\u2206S\u222aSc 1\u2225F \u00131/2 . 
26 By squaring both sides, we have \u2225\u2206S\u222aSc 1\u22252 F \u2264C2 4 \u03b34 l\u03c12\u03bb2 K + C2 5 \u00124\u03b32 \u03bb2 K + 6\u03c1s \u03bbK \u2225\u2206S\u222aSc 1\u2225F \u0013 + 2C4C5 \u03b32 \u03c1\u03bbKl1/2 \u00124\u03b32 \u03bb2 K + 6\u03c1s \u03bbK \u2225\u2206S\u222aSc 1\u2225F \u00131/2 \u22642C2 4 \u03b34 l\u03c12\u03bb2 K + 2C2 5 \u00124\u03b32 \u03bb2 K + 6\u03c1s \u03bbK \u2225\u2206S\u222aSc 1\u2225F \u0013 = 2C2 4 \u03b34 l\u03c12\u03bb2 K + 8C2 5 \u03b32 \u03bb2 K + 12C2 5 \u03c1s \u03bbK \u2225\u2206S\u222aSc 1\u2225F, (40) where the second inequality holds by using the fact that 2ab \u2264a2 + b2. Using the fact that ax2 \u2264bx + c implies that x2 \u2264b2/a2 + 2c/a, we obtain \u2225\u2206S\u222aSc 1\u22252 F \u2264C6 \u0012 \u03b32 \u03bb2 K + \u03b34 l\u03c12\u03bb2 K + \u03c12s2 \u03bb2 K \u0013 . (41) By the triangle inequality, \u2225\u2206\u2225F \u2264\u2225\u2206S\u222aSc 1\u2225F + \u2225\u2206(S\u222aSc 1)c\u2225F \u2264\u2225\u2206S\u222aSc 1\u2225F + PJ j=2 \u2225\u2206Sc j \u2225F. Thus, by (36), we have \u2225\u2206\u2225F \u2264\u2225\u2206S\u222aSc 1\u2225F + 9sl\u22121/2\u2225\u2206S\u222aSc 1\u2225F + 5\u03b32 \u03c1\u03bbKl1/2. (42) Recall that l = c1s2 and that K < log d. By Lemma 2, \u03b3 = \u2225e \u039b \u2212\u039b\u2225F \u2264K1/2\u2225e \u039b \u2212\u039b\u2225sp \u2264 C(Ks/n)1/2 \u2264C6\u03c1l1/2, with probability at least 1 \u2212exp(\u2212C\u2032s). Substituting (41) into (42), we obtain \u2225\u2206\u2225F \u2264 \u0012 1 + 9 c1 \u0013 \u2225\u2206S\u222aSc 1\u2225F + 5C6\u03c1l1/2 \u03bbK \u2264 \u0012 1 + 9 c1 \u0013 \u001a C6 \u0012 \u03b32 \u03bb2 K + \u03b34 l\u03c12\u03bb2 K + \u03c12s2 \u03bb2 K \u0013\u001b1/2 + 5C6\u03c1l1/2 \u03bbK = \u0012 1 + 9 c1 \u0013 \u001a C6 \u0012C2 6\u03c12c1s2 \u03bb2 K + C4 6\u03c12c1s2 \u03bb2 K + \u03c12s2 \u03bb2 K \u0013\u001b1/2 + 5C6\u03c1c1/2 1 s \u03bbK \u2264C s\u03c1 \u03bbK , (43) for C su\ufb03ciently large, with large probability. By the triangle inequality, we have \u2225b \u03a0 \u2212\u03a0\u2225F \u2264\u2225\u2206\u2225F + \u2225\u03a0 \u2212e \u03a0\u2225F \u2264C s\u03c1 \u03bbK + CK(s/n)1/2 where the second inequality holds by an application of Lemma 2. 27 Let b V and V be the subspace corresponding to b \u03a0 and \u03a0, respectively. Then, by Corollary 3.2 of Vu et al. (2013), we have D \u0000V, b V \u0001 \u2264C s\u03c1 \u03bbK + CK(s/n)1/2, for some constant C, which concludes the proof. Finally, by an application of Lemma 7, we have \u03c1 \u2265C(log d/n)1/2 with probability at least 1 \u2212exp(C\u2032 log p). Therefore D \u0000V, b V \u0001 \u2264C s \u03bbK \u0012log d n \u00131/2 + CK(s/n)1/2 \u2264C s \u03bbK \u0012log d n \u00131/2 , where the second inequality holds by the assumption that \u03bbKK2/{s log(d)} is upper bounded by a constant. This concludes the proof. D Proof of Lemmas in Appendix C D.1 Proof of Lemma 2 The proof of Lemma 2 requires the following probabilistic bound on the operator norm and a lemma on perturbation bound of square root matrices. The following proposition is a special case of Remark 5.40 in an unpublished 2015 technical report, Vershynin, R. (arXiv:1011.3027). Proposition 5. Let Y be an n\u00d7s matrix whose rows are independent sub-Gaussian isotropic random vectors in Rs. Let \u03b4 = C(s/n)1/2 + t/(n)1/2. Then, for every t \u22650, pr (\r \r \r \r 1 nY TY \u2212Is \r \r \r \r sp \u2264max(\u03b4, \u03b42) ) \u22651 \u22122 exp(\u2212ct2/2). The constants c, C depend only on the sub-Gaussian norm of the rows. 
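As an informal numerical illustration of Proposition 5 (ours, and not part of any proof), one can verify the stated (s/n)^{1/2} scaling of the operator-norm error using standard normal, hence sub-Gaussian isotropic, rows.

```python
# Informal Monte Carlo check (ours) of the rate in Proposition 5: for matrices
# with independent standard normal rows, ||Y'Y/n - I_s||_sp is of order (s/n)^{1/2}.
import numpy as np

rng = np.random.default_rng(0)
n, s, reps = 2000, 50, 20
errors = []
for _ in range(reps):
    Y = rng.standard_normal((n, s))                            # independent isotropic rows
    errors.append(np.linalg.norm(Y.T @ Y / n - np.eye(s), 2))  # spectral norm of the deviation
# The two numbers agree up to the constant C in Proposition 5.
print(f"mean error {np.mean(errors):.3f} vs (s/n)^0.5 = {np.sqrt(s / n):.3f}")
```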
Lemma 8 (Lemma 2 in ?). Let E and F be positive semi-de\ufb01nite matrices. Then, for any unitarily invariant norm \u2225\u00b7 \u2225sp, we have \u2225E1/2 \u2212F 1/2\u2225sp \u2264 1 \u03c3min(E1/2) + \u03c3min(F 1/2)\u2225E \u2212F\u2225sp, where \u03c3min(E1/2) is the smallest non-zero singular value of E1/2. 28 We are now ready to prove Lemma 2. Proof. Recall from Appendix C that e \u039b = (V Tb \u03a3xV )1/2\u039b(V Tb \u03a3xV )1/2. Then \u2225e \u039b \u2212\u039b\u2225sp = \r \r \r \u0000V Tb \u03a3xV \u00011/2\u039b \u0000V Tb \u03a3xV \u00011/2 \u2212\u039b \r \r \r sp = \r \r \r \u0000V Tb \u03a3xV \u00011/2\u039b \u0000V Tb \u03a3xV \u00011/2 \u2212\u039b \u0000V Tb \u03a3xV \u00011/2 + \u039b \u0000V Tb \u03a3xV \u00011/2 \u2212\u039b \r \r \r sp \u2264 \r \r \r \u0000V Tb \u03a3xV \u00011/2\u039b \u0000V Tb \u03a3xV \u00011/2 \u2212\u039b \u0000V Tb \u03a3xV \u00011/2\r \r \r sp + \r \r \r\u039b \u0000V Tb \u03a3xV \u00011/2 \u2212\u039b \r \r \r sp \u2264\u2225\u039b\u2225sp \r \r \r \u0000V Tb \u03a3xV \u00011/2 \u2212IK \r \r \r sp \r \r \r \u0000V Tb \u03a3xV \u00011/2\r \r \r sp + \u2225\u039b\u2225sp \r \r \r \u0000V Tb \u03a3xV \u00011/2 \u2212IK \r \r \r sp . (44) By Lemma 8, \r \r \r \u0000V Tb \u03a3xV \u00011/2 \u2212IK \r \r \r sp \u2264C \r \r \rV Tb \u03a3xV \u2212IK \r \r \r sp (45) for some constant C. Thus, it su\ufb03ces to establish an upper bound on \u2225V Tb \u03a3xV \u2212IK\u2225sp. Recall that b \u03a3x = XTX/n, where each row of X is independent sub-Gaussian random variables with covariance matrix \u03a3x. Also, recall that |Sv| = s. Thus, by the de\ufb01nition of the spectral norm, we obtain \r \r \rV Tb \u03a3xV \u2212IK \r \r \r sp = \r \r \rV Tb \u03a3xV \u2212V T\u03a3xV \r \r \r sp = \r \r \rV T Sv(b \u03a3x,Sv\u00d7Sv \u2212\u03a3x,Sv\u00d7Sv)VSv \r \r \r sp \u2264 \r \r \r\u03a31/2 x,Sv\u00d7SvVSv \r \r \r 2 sp \r \r \r\u03a3\u22121/2 x,Sv\u00d7Sv b \u03a3x,Sv\u00d7Sv\u03a3\u22121/2 x,Sv\u00d7Sv \u2212Is \r \r \r sp \u2264 \r \r \r\u03a3\u22121/2 x,Sv\u00d7Sv b \u03a3x,Sv\u00d7Sv\u03a3\u22121/2 x,Sv\u00d7Sv \u2212Is \r \r \r sp , (46) where the last inequality follows from the fact that \r \r \r\u03a31/2 x,Sv\u00d7SvVSv \r \r \r sp \u22641. It remains to show that \r \r \r\u03a3\u22121/2 x,Sv\u00d7Sv b \u03a3x,Sv\u00d7Sv\u03a3\u22121/2 x,Sv\u00d7Sv \u2212Is \r \r \r sp is upper bounded with high probability. Let Z = XSv\u03a3\u22121/2 x,Sv\u00d7Sv \u2208Rn\u00d7s. Thus, each row of Z is independent isotropic 29 sub-Gaussian random variables. Therefore, by an application of Proposition 5, pr \u001a\r \r \r\u03a3\u22121/2 x,Sv\u00d7Sv b \u03a3x,Sv\u00d7Sv\u03a3\u22121/2 x,Sv\u00d7Sv \u2212Is \r \r \r sp \u2264max(\u03b4, \u03b42) \u001b = pr (\r \r \r \r 1 nZTZ \u2212Is \r \r \r \r sp \u2264max(\u03b4, \u03b42) ) \u22651 \u22122 exp(\u2212ct2/2). Picking t = Cs1/2 for su\ufb03ciently large C > 0, we have \r \r \r\u03a3\u22121/2 x,Sv\u00d7Sv b \u03a3x,Sv\u00d7Sv\u03a3\u22121/2 x,Sv\u00d7Sv \u2212Is \r \r \r sp \u2264C(s/n)1/2 with probability at least 1 \u2212exp(\u2212C\u2032s). Substituting the last expression into (46) and combining (45) and (46), we obtain \r \r \r(V Tb \u03a3xV )1/2 \u2212IK \r \r \r sp \u2264C(s/n)1/2, (47) with probability at least 1 \u2212exp(\u2212C\u2032s). It remains to obtain an upper bound for \u2225(V Tb \u03a3xV )1/2\u2225sp. 
By Holder\u2019s inequality, we have \r \r \r(V Tb \u03a3xV )1/2\r \r \r sp \u2264\u2225V \u2225sp\u2225b \u03a3x,Sv\u00d7Sv\u22251/2 sp \u2264c1/2 \u0010 \u2225b \u03a3x,Sv\u00d7Sv \u2212\u03a3x,Sv\u00d7Sv\u22251/2 sp + \u2225\u03a3x,Sv\u00d7Sv\u22251/2 sp \u0011 \u2264c1/2C(s/n)1/2 + c, with probability at least 1\u2212exp(\u2212C\u2032s), where the last inequality is obtained by an application of Lemma 6 with \ufb01xed support Sv. Thus, from (44), we obtain \u2225e \u039b \u2212\u039b\u2225sp \u2264\u2225\u039b\u2225spC(s/n)1/2 \u0010 1 + \u2225(V Tb \u03a3xV )1/2\u2225sp \u0011 \u2264C\u2032(s/n)1/2 for su\ufb03ciently large C\u2032, where we use the fact that \u2225\u039b\u2225sp is upper bounded by some constant. To show the second part of the Lemma, using the de\ufb01nition of e \u03a0 and applying the Jensen\u2019s inequality, we have \u2225e \u03a0 \u2212\u03a0\u2225F = \r \r \r \rV \u0010 V Tb \u03a3xV \u0011\u22121 V T \u2212V V T \r \r \r \r F \u2264\u2225V \u22252 sp \r \r \r \rIK \u2212 \u0010 V Tb \u03a3xV \u0011\u22121\r \r \r \r F \u2264cK \r \r \r \rIK \u2212 \u0010 V Tb \u03a3xV \u0011\u22121\r \r \r \r sp , (48) 30 where the second inequality holds using the fact that the Frobenius norm of a matrix is upper bounded by the rank K times the operator norm of a matrix. Thus, it su\ufb03ces to obtain an upper bound for \u2225IK \u2212(V Tb \u03a3xV )\u22121\u2225sp. First, note that \r \r \rIK \u2212(V Tb \u03a3xV )\u22121\r \r \r sp \u2264 \r \r \rV Tb \u03a3xV \u2212IK \r \r \r sp \r \r \r(V Tb \u03a3xV )\u22121\r \r \r sp . It remains to bound \r \r \r(V Tb \u03a3xV )\u22121\r \r \r sp. By Weyl\u2019s inequality, we have \u03c3min(V T\u03a3xV ) \u2264 \u03c3min(V Tb \u03a3xV ) + \u2225V Tb \u03a3xV \u2212V T\u03a3xV \u2225sp. Thus, with probability at least 1 \u2212exp(C\u2032s), \r \r \r(V Tb \u03a3xV )\u22121\r \r \r sp = 1 \u03c3min \u0010 V Tb \u03a3xV \u0011 \u2264 1 1 \u2212\u2225V Tb \u03a3xV \u2212V T\u03a3xV \u2225sp \u2264C. Combining the above terms, we have with probability at least 1 \u2212exp(C\u2032s), \r \r \rIK \u2212(V Tb \u03a3xV )\u22121\r \r \r sp \u2264 \r \r \rV Tb \u03a3xV \u2212IK \r \r \r sp \r \r \r(V Tb \u03a3xV )\u22121\r \r \r sp \u2264C \r \r \rV Tb \u03a3xV \u2212IK \r \r \r sp \u2264C(s/n)1/2, (49) where the last inequality follows from (47) and Lemma 6 with \ufb01xed support. Substituting (49) into (48), we obtain the desired results. D.2 Proof of Lemma 3 Proof. Let uj be the jth column of U and let aj = uT j Euj. The term dK 2 \r \rUUT \u2212E \r \r2 F 31 can be upper bounded as dK 2 \r \rUUT \u2212E \r \r2 F = dK 2 \b\r \rUUT\r \r2 F + \r \rE \r \r2 F \u22122tr \u0000UTEU \u0001\t \u2264dK 2 \b tr \u0000IK \u0001 + \u2225E\u2225sp\u2225E\u2225\u2217\u22122tr \u0000UTEU \u0001\t \u2264dK K \u2212 K X j=1 aj ! = dK K X j=1 (1 \u2212aj) . (50) Moreover, we have ULUT, UUT \u2212E \u000b = UDUT, UUT \u2212E \u000b + U(L \u2212D)UT, UUT \u2212E \u000b \u2265tr \u0000UDUT \u2212UDUTE \u0001 \u2212\u2225L \u2212D\u2225F\u2225UUT \u2212E\u2225F = tr \u0000D(IK \u2212UTEU) \u0001 \u2212\u2225L \u2212D\u2225F\u2225UUT \u2212E\u2225F \u2265dK K X j=1 (1 \u2212aj) \u2212\u2225L \u2212D\u2225F\u2225UUT \u2212E\u2225F. (51) Combining (50) and (51), we have the desired result. D.3 Proof of Lemma 4 Proof. It su\ufb03ces to show that e \u03a0 satis\ufb01es the constrains \r \rb \u03a31/2 x e \u03a0b \u03a31/2 x \r \r \u2217\u2264K and \r \rb \u03a31/2 x e \u03a0b \u03a31/2 x \r \r sp \u2264 1. 
By the definition of $\widetilde{V}$, we have $\widetilde{V}^{\mathrm{T}}\widehat{\Sigma}_x\widetilde{V} = (V^{\mathrm{T}}\widehat{\Sigma}_x V)^{-1/2}V^{\mathrm{T}}\widehat{\Sigma}_x V(V^{\mathrm{T}}\widehat{\Sigma}_x V)^{-1/2} = I_K$. This implies that $\widehat{\Sigma}_x^{1/2}\widetilde{V}$ is an orthogonal matrix. Thus,
$$\|\widehat{\Sigma}_x^{1/2}\widetilde{\Pi}\widehat{\Sigma}_x^{1/2}\|_{\mathrm{sp}} \le \|\widehat{\Sigma}_x^{1/2}\widetilde{V}\|_{\mathrm{sp}}^{2} = 1. \quad (52)$$
Moreover, we have
$$\mathrm{tr}\Big(\widehat{\Sigma}_x^{1/2}\widetilde{\Pi}\widehat{\Sigma}_x^{1/2}\,\widehat{\Sigma}_x^{1/2}\widetilde{\Pi}\widehat{\Sigma}_x^{1/2}\Big) = \mathrm{tr}\Big\{\widehat{\Sigma}_x^{1/2}V(V^{\mathrm{T}}\widehat{\Sigma}_x V)^{-1}V^{\mathrm{T}}\widehat{\Sigma}_x^{1/2}\Big\} = K. \quad (53)$$
Combining (52) and (53), we have $\|\widehat{\Sigma}_x^{1/2}\widetilde{\Pi}\widehat{\Sigma}_x^{1/2}\|_{*} = K$ and $\|\widehat{\Sigma}_x^{1/2}\widetilde{\Pi}\widehat{\Sigma}_x^{1/2}\|_{\mathrm{sp}} \le 1$.

D.4 Proof of Lemma 5

Proof. By the definition of the sets $S_1^c,\ldots,S_J^c$, we have
$$l\cdot\|\Delta_{S_j^c}\|_{\max} \le \|\Delta_{S_{j-1}^c}\|_1, \quad (54)$$
since all $l$ elements of $\Delta_{S_{j-1}^c}$ are larger than the elements of $\Delta_{S_j^c}$. Thus, we have
$$\sum_{j=2}^{J}\|\Delta_{S_j^c}\|_{\mathrm{F}} \le l^{1/2}\sum_{j=2}^{J}\|\Delta_{S_j^c}\|_{\max} \le l^{-1/2}\sum_{j=2}^{J}\|\Delta_{S_{j-1}^c}\|_1 \le l^{-1/2}\|\Delta_{S^c}\|_1,$$
where the second inequality holds by (54).

D.5 Proof of Lemma 6

Proof. Let $T\subset\{1,\ldots,d\}$ be a set with cardinality $|T| = s$. We have
$$\max_{T\subset\{1,\ldots,d\},|T|=s}\|\widehat{\Sigma}_{x,T\times T}-\Sigma_{x,T\times T}\|_{\mathrm{sp}}
\le \max_{T\subset\{1,\ldots,d\},|T|=s}\|\Sigma_{x,T\times T}^{-1/2}\widehat{\Sigma}_{x,T\times T}\Sigma_{x,T\times T}^{-1/2}-I_s\|_{\mathrm{sp}}\;\max_{T\subset\{1,\ldots,d\},|T|=s}\|\Sigma_{x,T\times T}\|_{\mathrm{sp}}
\le c\cdot\max_{T\subset\{1,\ldots,d\},|T|=s}\|\Sigma_{x,T\times T}^{-1/2}\widehat{\Sigma}_{x,T\times T}\Sigma_{x,T\times T}^{-1/2}-I_s\|_{\mathrm{sp}}, \quad (55)$$
where the last inequality holds by Assumption 2. By a similar argument in the proof of Lemma 2, we have for any fixed set $T$ with cardinality $|T| = s$,
$$\mathrm{pr}\Big\{\|\Sigma_{x,T\times T}^{-1/2}\widehat{\Sigma}_{x,T\times T}\Sigma_{x,T\times T}^{-1/2}-I_s\|_{\mathrm{sp}} \ge \max(\delta,\delta^2)\Big\} \le 2\exp(-ct^2/2). \quad (56)$$
Thus, by the union bound, we have
$$\begin{aligned}
\mathrm{pr}\Big\{\max_{T\subset\{1,\ldots,d\},|T|=s}\|\Sigma_{x,T\times T}^{-1/2}\widehat{\Sigma}_{x,T\times T}\Sigma_{x,T\times T}^{-1/2}-I_s\|_{\mathrm{sp}} \ge \max(\delta,\delta^2)\Big\}
&\le \sum_{T\subset\{1,\ldots,d\},|T|=s}\mathrm{pr}\Big\{\|\Sigma_{x,T\times T}^{-1/2}\widehat{\Sigma}_{x,T\times T}\Sigma_{x,T\times T}^{-1/2}-I_s\|_{\mathrm{sp}} \ge \max(\delta,\delta^2)\Big\}\\
&\le 2\binom{d}{s}\exp(-ct^2/2) \le 2\Big(\frac{ed}{s}\Big)^{s}\exp(-ct^2/2),
\end{aligned}$$
where the first inequality follows from (56). Picking $t = C\{s\log(ed)\}^{1/2}$, we obtain
$$\max_{T\subset\{1,\ldots,d\},|T|=s}\|\Sigma_{x,T\times T}^{-1/2}\widehat{\Sigma}_{x,T\times T}\Sigma_{x,T\times T}^{-1/2}-I_s\|_{\mathrm{sp}} \le C\{s\log(ed)/n\}^{1/2} \quad (57)$$
for sufficiently large constant $C$, with probability greater than $1-\exp\{-C's\log(ed)\}$. Thus, substituting (57) into (55), we have
$$\lambda_{\max}(\widehat{\Sigma}_x,s) \le \lambda_{\max}(\Sigma_x,s)+cC\{s\log(ed)/n\}^{1/2} \le c+cC\{s\log(ed)/n\}^{1/2}, \quad (58)$$
where the last inequality holds by Assumption 2. Using the same upper bound (57), it can be shown that $1/c-cC\{s\log(ed)/n\}^{1/2} \le \lambda_{\min}(\widehat{\Sigma}_x,s)$, as desired.

D.6 Proof of Lemma 7

Proof. By the triangle inequality, we have
$$\|\widehat{\Sigma}_{E(x|y)}-\widetilde{\Sigma}_{E(x|y)}\|_{\max} \le \|\widehat{\Sigma}_{E(x|y)}-\Sigma_{E(x|y)}\|_{\max}+\|\widetilde{\Sigma}_{E(x|y)}-\Sigma_{E(x|y)}\|_{\max}.$$
The first term can be upper bounded by an application of Corollary 1, i.e., $\|\widehat{\Sigma}_{E(x|y)}-\Sigma_{E(x|y)}\|_{\max} \le C(\log d/n)^{1/2}$ with probability at least $1-\exp(-C'\log d)$. It remains to show that the second term can be upper bounded at the same rate with high probability.

By adding and subtracting terms, and the triangle inequality, we have
$$\|\widetilde{\Sigma}_{E(x|y)}-\Sigma_{E(x|y)}\|_{\max} \le \|(\widehat{\Sigma}_x-\Sigma_x)V\Lambda V^{\mathrm{T}}\Sigma_x\|_{\max}+\|\Sigma_x V\Lambda V^{\mathrm{T}}(\widehat{\Sigma}_x-\Sigma_x)\|_{\max}+\|(\widehat{\Sigma}_x-\Sigma_x)V\Lambda V^{\mathrm{T}}(\widehat{\Sigma}_x-\Sigma_x)\|_{\max} = I_1+I_2+I_3. \quad (59)$$
We now obtain an upper bound for $I_1$. Following the arguments in the proof of Lemma 6.4 in Gao et al. (2017), $I_1$ can be rewritten as
$$\max_{j,k}\bigg|\frac{1}{n}\sum_{i=1}^{n}\big[x_{ij}(x_i^{\mathrm{T}}V\Lambda V^{\mathrm{T}}\Sigma_x)_k-E\{x_{ij}(x_i^{\mathrm{T}}V\Lambda V^{\mathrm{T}}\Sigma_x)_k\}\big]\bigg|.$$
Each term inside the summand is an independent, centered sub-exponential random variable with sub-exponential norm upper bounded by some constant. By Lemma 11 and the union bound, we have that $I_1 \le C(\log d/n)^{1/2}$ with probability at least $1-\exp(-C'\log d)$. The term $I_2$ can be upper bounded in a similar fashion. Finally, $I_3$ can be rewritten as
$$I_3 \le \sum_{k=1}^{K}\lambda_k\|(\widehat{\Sigma}_x-\Sigma_x)V_kV_k^{\mathrm{T}}(\widehat{\Sigma}_x-\Sigma_x)\|_{\max} \le \sum_{k=1}^{K}\lambda_k\max_{j,k}\bigg[\frac{1}{n}\sum_{i=1}^{n}\{x_{ij}x_i^{\mathrm{T}}V_k-E(x_{ij}x_i^{\mathrm{T}}V_k)\}\bigg]^{2}.$$
Note that the term $\frac{1}{n}\sum_{i=1}^{n}\{x_{ij}x_i^{\mathrm{T}}V_k-E(x_{ij}x_i^{\mathrm{T}}V_k)\}$ is a sub-exponential random variable with mean zero and sub-exponential norm upper bounded by a constant. Thus, by Lemma 11, we have
$$I_3 \le K\lambda_1 C\,\frac{\log d}{n},$$
with probability at least $1-\exp(-C'\log d)$. Under the assumption that $K\lambda_1(\log d/n)^{1/2}$ is bounded by some constant, combining the upper bounds, we have $\|\widehat{\Sigma}_{E(x|y)}-\widetilde{\Sigma}_{E(x|y)}\|_{\max} \le C(\log d/n)^{1/2}$, as desired.

E Auxiliary Lemmas

In this section, we provide some auxiliary lemmas that are used in our proofs. We first recall the definitions of sub-Gaussian and sub-exponential random variables; we refer the reader to Chapter 5 of an unpublished 2015 technical report, Vershynin, R. (arXiv:1011.3027), for details. Let $Z$ be a sub-Gaussian random variable. Let $\|Z\|_{\psi_2}$ be the sub-Gaussian norm, which takes the form $\|Z\|_{\psi_2} = \sup_{p\ge1}p^{-1/2}(E|Z|^{p})^{1/p}$. Similarly, let $X$ be a sub-exponential random variable and let $\|X\|_{\psi_1}$ be the sub-exponential norm, which takes the form $\|X\|_{\psi_1} = \sup_{p\ge1}p^{-1}(E|X|^{p})^{1/p}$. We note that if $Z$ is a centered normal random variable with variance $\sigma^2$, then $Z$ is sub-Gaussian with $\|Z\|_{\psi_2} \le C\sigma$. We provide the tail probability of a sub-Gaussian random variable in the following lemma, which follows directly from Lemma 5.5 of the same report.

Lemma 9. Let $X$ be a sub-Gaussian random variable with mean zero and sub-Gaussian norm $\|X\|_{\psi_2} \le L$ for some constant $L$. Then, for all $t > 0$, $\mathrm{pr}(|X| \ge t) \le \exp(1-Ct^2/L^2)$.

The following lemma follows directly from Lemma 5.14 of the same report; it summarizes the relationship between sub-Gaussian and sub-exponential random variables.

Lemma 10. Let $Z_1$ and $Z_2$ be two sub-Gaussian random variables.
Then, $Z_1Z_2$ is a sub-exponential random variable with $\|Z_1Z_2\|_{\psi_1} \le C\max(\|Z_1\|_{\psi_2}^{2},\|Z_2\|_{\psi_2}^{2})$, where $C > 0$ is a positive constant.

The following inequality is a Bernstein-type inequality for sub-exponential random variables; it is Proposition 5.16 of the same report.

Lemma 11. Let $X_1,\ldots,X_n$ be independent centered sub-exponential random variables, and let $K = \max_i\|X_i\|_{\psi_1}$. Then, for any $t > 0$, we have
$$\mathrm{pr}\bigg(\bigg|\frac{1}{n}\sum_{i=1}^{n}X_i\bigg| > t\bigg) \le 2\exp\bigg\{-C\min\bigg(\frac{nt^2}{K^2},\frac{nt}{K}\bigg)\bigg\},$$
where $C > 0$ is an absolute constant.", "introduction": "We consider regression of a univariate response $y\in\mathbb{R}$ on a stochastic covariate vector $x = (x_1,\ldots,x_d)^{\mathrm{T}}\in\mathbb{R}^d$ in which the number of covariates $d$ exceeds the sample size $n$. The goal is to infer the conditional distribution of $y$ given $x$. When $d$ is large, it is often desirable to perform dimension reduction on the covariates with the aim of minimizing information loss. Sufficient dimension reduction is popular for this purpose (Li 1991, Cook 1994, 1998). Let $K < \min(n,d)$ and let $\beta_1,\ldots,\beta_K\in\mathbb{R}^d$ be $d$-dimensional vectors. We assume that
$$y \perp\!\!\!\perp x \mid \big(\beta_1^{\mathrm{T}}x,\ldots,\beta_K^{\mathrm{T}}x\big), \quad (1)$$
where $\perp\!\!\!\perp$ signifies independence. Equation (1) implies that $y$ can be explained by a set of $K$ linear combinations of $x$. A dimension reduction subspace $V$ is defined as the subspace spanned by $\beta_1,\ldots,\beta_K$ such that (1) holds. We henceforth refer to $\beta_1,\ldots,\beta_K$ as the sufficient dimension reduction directions. Dimension reduction subspaces are not unique in general, and Cook (1994) defined the central subspace, $V_{y|x}$, as the intersection of all dimension reduction subspaces. Under regularity conditions, the central subspace exists and is also the unique minimum dimension reduction subspace that satisfies (1). Many authors have proposed methods to estimate the central subspace (Li 1991, Cook & Weisberg 1991, Cook & Lee 1999, Bura & Cook 2001b,a, Cook 2000, 2007, Cook & Forzani 2008, Li & Wang 2007, Cook & Forzani 2009, Ma & Zhu 2012, 2013a). The sufficient dimension reduction literature is vast: see Ma & Zhu (2013b) for a comprehensive list of references. We focus on sliced inverse regression for estimating the central subspace $V_{y|x}$ (Li 1991). In the low-dimensional setting in which $d < n$, the central subspace $V_{y|x}$ can be estimated consistently (Li 1991, Hsing & Carroll 1992, Zhu & Ng 1995, Zhu & Fang 1996, Zhu et al. 2006). One drawback of sliced inverse regression is that the estimated sufficient dimension reduction directions involve all $d$ covariates, so these directions are hard to interpret, and important covariates may be difficult to identify. Numerous attempts have been made to perform variable selection for sliced inverse regression in the low-dimensional setting (Cook 2004, Li et al. 2005, Ni et al. 2005, Li & Yin 2008, Li 2007). Most are conducted stepwise, estimating a sparse solution for each direction. However, sparsity in each sufficient dimension reduction direction does not correspond to variable selection unless an entire row of the basis matrix $(\beta_1,\ldots,\beta_K)$ is set to zero, and Chen et al. (2010) proposed a novel penalty to encourage this. Their proposal involves solving a non-convex problem and a global optimum solution is often not guaranteed.
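To make the conditional-independence model (1) and the sliced inverse regression idea concrete, the following minimal Python sketch (illustrative only; the function name, the slicing scheme, and the single-index data-generating model are our own choices, not taken from the paper) estimates the leading central-subspace direction in a low-dimensional example.

```python
import numpy as np
from scipy.linalg import eigh

def sir_leading_direction(X, y, n_slices=10):
    """Classical sliced inverse regression (Li, 1991) in the low-dimensional case:
    leading generalized eigenvector of Cov{E(X|Y)} with respect to Cov(X)."""
    n, d = X.shape
    Xc = X - X.mean(axis=0)
    Sigma_x = Xc.T @ Xc / n
    M = np.zeros((d, d))
    for idx in np.array_split(np.argsort(y), n_slices):   # slice the response
        m = Xc[idx].mean(axis=0)                           # within-slice covariate mean
        M += (len(idx) / n) * np.outer(m, m)               # estimate of Cov{E(X|Y)}
    vals, vecs = eigh(M, Sigma_x)                          # solve M v = lambda Sigma_x v
    return vecs[:, -1]

# Single-index example of model (1) with K = 1 and a monotone link.
rng = np.random.default_rng(1)
n, d = 500, 10
beta = np.zeros(d); beta[:3] = 1 / np.sqrt(3)
X = rng.standard_normal((n, d))
y = np.exp(X @ beta) + 0.1 * rng.standard_normal(n)
b = sir_leading_direction(X, y)
print(abs(b @ beta) / np.linalg.norm(b))   # close to 1: the direction is recovered
```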
In the high-dimensional setting, Lin et al. (2018) proposed a screening approach to perform variable selection. The selected variables are then used to \ufb01t classical sliced in- verse regression. Yin & Hilafu (2014) proposed a sequential approach for estimating high- dimensional sliced inverse regression. Both proposals are step-wise procedures that do not correspond to solving a convex optimization problem. Moreover, as discussed in Yin & Hilafu (2014), theoretical properties for their proposed estimators are hard to establish due to the sequential procedure used to obtain the estimators. Yu et al. (2013) proposed using \u21131-minimization with an adaptive Dantzig selector, and 2 established a non-asymptotic error bound for the resulting estimator. Wang et al. (2018) recast sliced inverse regression as a reduced-rank regression problem, proposed solving a non- convex optimization problem for simultaneous variable selection and dimension reduction, and showed that their proposed method is prediction consistent. However, there is a gap between the optimization problem and the theoretical results: there is no guarantee that the estimator obtained from solving the proposed biconvex optimization problem is the global minimum. Most existing work in the high-dimensional su\ufb03cient dimension reduction literature in- volves non-convex optimization problems. Moreover, they seek to estimate a set of re- duced predictors that are not identi\ufb01able by de\ufb01nition, rather than the central subspace. In this paper, we propose a convex formulation for sparse sliced inverse regression in the high-dimensional setting by adapting techniques from sparse canonical correlation analysis (Vu et al. 2013, Gao et al. 2017). Our proposal estimates the central subspace directly and performs variable selection simultaneously. Moreover, the proposed method can be adapted for su\ufb03cient dimension reduction methods that can be formulated as generalized eigenvalue problems. These include sliced average variance estimation, directional regression, principal \ufb01tted components, principal hessian direction, and iterative hessian transformation." }, { "url": "http://arxiv.org/abs/1604.08697v3", "title": "Sparse Generalized Eigenvalue Problem: Optimal Statistical Rates via Truncated Rayleigh Flow", "abstract": "Sparse generalized eigenvalue problem (GEP) plays a pivotal role in a large\nfamily of high-dimensional statistical models, including sparse Fisher's\ndiscriminant analysis, canonical correlation analysis, and sufficient dimension\nreduction. Sparse GEP involves solving a non-convex optimization problem. Most\nexisting methods and theory in the context of specific statistical models that\nare special cases of the sparse GEP require restrictive structural assumptions\non the input matrices. In this paper, we propose a two-stage computational\nframework to solve the sparse GEP. At the first stage, we solve a convex\nrelaxation of the sparse GEP. Taking the solution as an initial value, we then\nexploit a nonconvex optimization perspective and propose the truncated Rayleigh\nflow method (Rifle) to estimate the leading generalized eigenvector. We show\nthat Rifle converges linearly to a solution with the optimal statistical rate\nof convergence for many statistical models. Theoretically, our method\nsignificantly improves upon the existing literature by eliminating structural\nassumptions on the input matrices for both stages. 
To achieve this, our\nanalysis involves two key ingredients: (i) a new analysis of the gradient based\nmethod on nonconvex objective functions, and (ii) a fine-grained\ncharacterization of the evolution of sparsity patterns along the solution path.\nThorough numerical studies are provided to validate the theoretical results.", "authors": "Kean Ming Tan, Zhaoran Wang, Han Liu, Tong Zhang", "published": "2016-04-29", "updated": "2018-08-31", "primary_cat": "stat.ML", "cats": [ "stat.ML" ], "main_content": "Many high-dimensional multivariate statistics methods can be formulated as special instances of (2). For instance, when \ufffd B = I, (2) reduces to the sparse principal component analysis (PCA) that has received considerable attention within the past decade (among others, Zou et al., 2006; d\u2019Aspremont et al., 2007, 2008; Witten et al., 2009; Ma, 2013; Cai et al., 2013; Yuan and Zhang, 2013; Vu et al., 2013; Vu and Lei, 2013; Birnbaum et al., 2013; Wang et al., 2013, 2014; Gu et al., 2014). In the following, we provide three examples when \ufffd B is not the identity matrix. We start with sparse Fisher\u2019s discriminant analysis for classification problem (among others, Tibshirani et al., 2003; Guo et al., 2007; Leng, 2008; Clemmensen et al., 2012; Mai et al., 2012, 2016; Kolar and Liu, 2015; Gaynanova and Kolar, 2015; Fan et al., 2015). Example 1. Sparse Fisher\u2019s discriminant analysis: Given n observations with K distinct classes, Fisher\u2019s discriminant problem seeks a low-dimensional projection of the observations such that the between-class variance, \u03a3b, is large relative to the within-class variance, \u03a3w. Let \ufffd \u03a3b and \ufffd \u03a3w be estimators of \u03a3b and \u03a3w, respectively. To obtain a sparse leading discriminant vector, one solves maximize v vT \ufffd \u03a3bv, subject to vT \ufffd \u03a3wv = 1, \u2225v\u22250 \u2264s. (6) This is a special case of (2) with \ufffd A = \ufffd \u03a3b and \ufffd B = \ufffd \u03a3w. \ufffd This is a special case of (2) with \ufffd A = \ufffd \u03a3b and \ufffd B = \ufffd \u03a3w. Next, we consider sparse canonical correlation anal two high-dimensional random vectors (Witten et al., 200 \ufffd \ufffd \ufffd \ufffd Next, we consider sparse canonical correlation analysis that explores the relationship between two high-dimensional random vectors (Witten et al., 2009; Chen et al., 2013; Gao et al., 2017, 2015). Example 2. Sparse canonical correlation analysis: Let X and Y be two random vectors. Let \u03a3x and \u03a3y be the covariance matrices for X and Y , respectively, and let \u03a3xy be the cross-covariance matrix between X and Y . To obtain sparse leading canonical direction vectors, we solve maximize vx,vy vT x \ufffd \u03a3xyvy, subject to vT x \ufffd \u03a3xvx = vT y \ufffd \u03a3yvy = 1, \u2225vx\u22250 \u2264sx, \u2225vy\u22250 \u2264sy, (7) ere sx and sy control the cardinality of vx and vy. This is a special case of (2) with \ufffd \ufffd \ufffd where sx and sy control the cardinality of vx and vy. This is a special case of (2) with \ufffd A = tees \ufffd 0 \ufffd \u03a3xy \ufffd \u03a3xy 0 for sparse C \ufffd , \ufffd B = A were es \ufffd\ufffd \u03a3x 0 0 \ufffd \u03a3y tablished \ufffd , v = \ufffd vx vy \ufffd . \ufffd \ufffd \ufffd \ufffd Theoretical guarantees for sparse CCA were established recently. Chen et al. (2013) proposed a nonconvex optimization algorithm for solving (7) with theoretical guarantees. 
However, their algorithm involves obtaining accurate estimators of \u03a3\u22121 x and \u03a3\u22121 y , which are in general difficult to obtain without imposing sparsity assumption on \u03a3\u22121 x and \u03a3\u22121 y . In a follow-up work, Gao et al. (2017) proposed a two-stage procedure that attains the optimal statistical rate of convergence (Gao et al., 2015). However, they require the matrix \u03a3xy to be low-rank, positive semidefinite, and that the rank of \u03a3xy is known a priori. As suggested in Gao et al. (2015), the low-rank assumption on \u03a3xy may be unrealistic in many real data applications where one is interested in recovering the first few sparse canonical correlation directions while there might be additional directions in the 4 population structure. Our proposal does not impose any structural assumption on \u03a3x, \u03a3y, and we only require \u03a3xy to be approximately low rank in the sense that the leading generalized eigenvalue is larger than the remaining. Next, we consider a regression problem with a univariate response Y and d-dimensional covariates X, with the goal of inferring the conditional distribution of Y given X. Su\ufb03cient dimension reduction is a popular approach for reducing the dimensionality of the covariates (Li, 1991; Cook and Lee, 1999; Cook, 2000, 2007; Cook and Forzani, 2008; Ma and Zhu, 2013). It can be shown that many su\ufb03cient dimension reduction methods can be formulated as generalized eigenvalue problems (Li, 2007; Chen et al., 2010). In the following, we consider the sparse sliced inverse regression (Li, 1991). Example 3. Sparse sliced inverse regression: Consider the model Y = f(vT 1 X, . . . , vT KX, \u03f5), where \u03f5 is the stochastic error independent of X, and f(\u00b7) is an unknown link function. Li (1991) proved that under regularity conditions, the subspace spanned by v1, . . . , vK can be identi\ufb01ed. Let \u03a3x be the covariance matrix for X and let \u03a3E(X|Y ) be the covariance matrix of the conditional expectation E(X | Y ). The \ufb01rst leading eigenvector of the subspace spanned by v1, . . . , vK can be identi\ufb01ed by solving maximize v vT b \u03a3E(X|Y )v, subject to vT b \u03a3xv = 1, \u2225v\u22250 \u2264s. (8) This is a special case of (2) with b A = b \u03a3E(X|Y ) and b B = b \u03a3x. Many authors have proposed methods for sparse sliced inverse regression (Li and Nachtsheim, 2006; Zhu et al., 2006; Li and Yin, 2008; Chen et al., 2010; Yin and Hilafu, 2015). More generally, in the context of sparse su\ufb03cient dimension reduction, Li (2007) and Chen et al. (2010) reformulated sparse su\ufb03cient dimension reduction problems into the sparse generalized eigenvalue problem in (2). However, these approaches lack algorithmic and non-asymptotic statistical guarantees in the high-dimensional setting. Our results are applicable to most sparse su\ufb03cient dimension reduction methods. 3 Methodology and Algorithm In Section 3.1, we propose an iterative algorithm to estimate v\u2217by solving (2), which we refer to as truncated Rayleigh \ufb02ow method (Ri\ufb02e). Ri\ufb02e requires an input of an initial vector v0 that is su\ufb03ciently close to v\u2217. To this end, we propose a convex optimization approach to obtain such an initial vector v0 in Section 3.2. 
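Each of the examples above reduces to supplying a matrix pair $(\widehat{A},\widehat{B})$ to the algorithms developed in this section. As one concrete instance, the short Python sketch below (illustrative only; the function name and the $1/n$ rather than $1/(n-1)$ scaling are our own choices) assembles the pair for the sparse CCA problem of Example 2 from two data matrices.

```python
import numpy as np

def cca_gep_pair(X, Y):
    """Assemble (A_hat, B_hat) for Example 2:
    A_hat = [[0, S_xy], [S_xy^T, 0]],  B_hat = blockdiag(S_x, S_y)."""
    n = X.shape[0]
    Xc, Yc = X - X.mean(axis=0), Y - Y.mean(axis=0)
    S_x, S_y, S_xy = Xc.T @ Xc / n, Yc.T @ Yc / n, Xc.T @ Yc / n
    p, q = S_x.shape[0], S_y.shape[0]
    A = np.zeros((p + q, p + q))
    A[:p, p:], A[p:, :p] = S_xy, S_xy.T
    B = np.zeros((p + q, p + q))
    B[:p, :p], B[p:, p:] = S_x, S_y
    return A, B

# A leading generalized eigenvector of (A_hat, B_hat) stacks the two canonical directions.
rng = np.random.default_rng(2)
X = rng.standard_normal((300, 5))
Y = X[:, :2] @ rng.standard_normal((2, 4)) + 0.5 * rng.standard_normal((300, 4))
A_hat, B_hat = cca_gep_pair(X, Y)
print(A_hat.shape, B_hat.shape)   # (9, 9) (9, 9)
```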
3.1 Truncated Rayleigh Flow Method (Rifle)

Optimization problem (2) can be rewritten as
$$\max_{v\in\mathbb{R}^d}\;\frac{v^{\mathrm{T}}\widehat{A}v}{v^{\mathrm{T}}\widehat{B}v}, \quad \text{subject to } \|v\|_0 \le s,$$
where the objective function is generally referred to as the generalized Rayleigh quotient. The main crux of our proposed algorithm is as follows. Given an initial vector $v_0$, we first compute the gradient of the generalized Rayleigh quotient. We then update the initial vector by its ascent direction and normalize it such that the updated vector has norm one. This step ensures that the generalized Rayleigh quotient for the updated vector is at least as large as that of the initial vector. Indeed, in Theorem 1, we show that if the initial vector $v_0$ is close to $v^*$, then this step ensures that the updated vector is closer to $v^*$ compared to $v_0$. Next, we truncate the updated vector by keeping the elements with the largest $k$ absolute values and setting the remaining elements to zero. This step ensures that the updated vector is $k$-sparse, i.e., only $k$ entries are non-zero. Finally, we normalize the updated vector such that it has norm one. These steps are repeated until convergence. We summarize the details in Algorithm 1.

Algorithm 1 Truncated Rayleigh Flow Method (Rifle)
Input: matrices $\widehat{A}$, $\widehat{B}$, initial vector $v_0$, cardinality $k\in\{1,\ldots,d\}$, and step size $\eta$.
Truncate: truncate $v_0$ by keeping the largest $k$ absolute elements, and setting the remaining entries to zero. Let $t = 1$.
Repeat the following until convergence:
1. $\rho_{t-1} \leftarrow v_{t-1}^{\mathrm{T}}\widehat{A}v_{t-1}/v_{t-1}^{\mathrm{T}}\widehat{B}v_{t-1}$.
2. $C \leftarrow I+(\eta/\rho_{t-1})\cdot(\widehat{A}-\rho_{t-1}\widehat{B})$.
3. $v_t' \leftarrow Cv_{t-1}/\|Cv_{t-1}\|_2$.
4. Let $F_t = \mathrm{supp}(v_t',k)$ contain the indices of $v_t'$ with the largest $k$ absolute values, and let $\mathrm{Truncate}(v_t',F_t)$ be the truncated vector of $v_t'$ obtained by setting $(v_t')_i = 0$ for $i\notin F_t$.
5. $\widehat{v}_t \leftarrow \mathrm{Truncate}(v_t',F_t)$.
6. $v_t \leftarrow \widehat{v}_t/\|\widehat{v}_t\|_2$.
7. $t \leftarrow t+1$.
Output: $v_t$.

In addition to an initial vector $v_0$, Algorithm 1 requires the choice of a step size $\eta$ and a tuning parameter $k$ on the cardinality of the solution. As suggested by the theoretical results in Section 4, we need $\eta$ to be sufficiently small such that $\eta\lambda_{\max}(\widehat{B}) < 1$. In practice, the tuning parameter $k$ can be selected using cross-validation or based on prior knowledge. The computational complexity of each iteration of Algorithm 1 is $O(kd+d)$: $O(d)$ for selecting the $k$ largest elements of a $d$-dimensional vector to obtain the set $F_t$, and $O(kd)$ for taking the product between a truncated vector and a matrix with columns restricted to the set $F_t$, and for calculating the difference between two matrices with columns restricted to the set $F_t$.

3.2 A Convex Optimization Approach to Obtain $v_0$

As mentioned in Section 3.1, it is crucial to obtain an initial vector $v_0$ that is close to $v^*$ for Rifle. Gao et al. (2017) have proposed a convex formulation to estimate the subspace spanned by the $K$ leading generalized eigenvectors for sparse CCA, under the assumption that $A$ is low rank and positive semidefinite. Rather than estimating the $K$ leading generalized eigenvectors, the main idea of Gao et al. (2017) is to obtain an estimator of the subspace spanned by the $K$ leading generalized eigenvectors directly.
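A minimal Python sketch of the Rifle iteration in Algorithm 1 is given below (illustrative only, not the authors' implementation; the convergence check on successive iterates and the default step size are our own choices). It follows Steps 1-7 directly and assumes $\widehat{A}$ symmetric, $\widehat{B}$ positive definite, and a nonzero initial vector.

```python
import numpy as np

def rifle(A, B, v0, k, eta=0.01, max_iter=500, tol=1e-8):
    """Truncated Rayleigh flow sketch following Algorithm 1."""
    d = A.shape[0]

    def truncate_and_normalize(u):
        keep = np.argsort(np.abs(u))[-k:]      # indices of the k largest |entries|
        w = np.zeros(d)
        w[keep] = u[keep]
        return w / np.linalg.norm(w)

    v = truncate_and_normalize(v0)             # initial truncation step
    for _ in range(max_iter):
        rho = (v @ A @ v) / (v @ B @ v)        # Step 1: generalized Rayleigh quotient
        u = v + (eta / rho) * (A @ v - rho * (B @ v))   # Steps 2-3: apply C and ...
        u /= np.linalg.norm(u)                 # ... normalize
        v_new = truncate_and_normalize(u)      # Steps 4-6: truncate, renormalize
        if np.linalg.norm(v_new - v) < tol:
            return v_new
        v = v_new
    return v
```

In practice one would supply, for example, the $(\widehat{A},\widehat{B})$ pair from the sparse CCA or FDA constructions and an initial vector $v_0$ taken as the leading eigenvector of the solution to (10).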
In this section, we point out that the proposed convex relaxation can be used more generally to estimate the subspace of a sparse generalized eigenvalue problem, without the low-rank and positive semidefinite structural assumptions on $A$. Similar to (2), the optimization problem for estimating the $K$ generalized eigenvectors can be written as
$$\min_{U\in\mathbb{R}^{d\times K}}\;-\mathrm{tr}\big(U^{\mathrm{T}}\widehat{A}U\big), \quad \text{subject to } U^{\mathrm{T}}\widehat{B}U = I_K.$$
Rather than estimating the $K$ generalized eigenvectors, which involves minimizing a concave function, we consider approximating the subspace spanned by these generalized eigenvectors. Let $P = UU^{\mathrm{T}}$ and let $\mathcal{O} = \{\widehat{B}^{1/2}P\widehat{B}^{1/2} : U^{\mathrm{T}}\widehat{B}U = I_K\}$. By a change of variable, we obtain
$$\min_{P\in\mathbb{R}^{d\times d}}\;-\mathrm{tr}\big(\widehat{A}P\big), \quad \text{subject to } P\in\mathcal{O}, \quad (9)$$
where the objective function is now linear in $P$. We consider the following convex relaxation of (9), with a lasso penalty on $P$ to encourage the estimated subspace to be sparse:
$$\min_{P\in\mathbb{R}^{d\times d}}\;-\mathrm{tr}\big(\widehat{A}P\big)+\zeta\|P\|_{1,1}, \quad \text{subject to } \|\widehat{B}^{1/2}P\widehat{B}^{1/2}\|_{*}\le K \text{ and } \|\widehat{B}^{1/2}P\widehat{B}^{1/2}\|_{2}\le 1, \quad (10)$$
where $\|\cdot\|_{*}$ and $\|\cdot\|_{2}$ are the nuclear norm and spectral norm, which encourage the solution to be low rank and its eigenvalues to be bounded, respectively. Here, $\zeta$ and $K$ are two tuning parameters that encourage the estimated subspace $P$ to be sparse and low rank, respectively. The convex optimization problem (10) can be solved using the alternating direction method of multipliers, and we summarize the details in Algorithm 2 (Boyd et al., 2010; Eckstein, 2012). The computational bottleneck in Algorithm 2 is the singular value decomposition of a $d\times d$ matrix, thus yielding a computational complexity of $O(d^3)$. Compared to the computational complexity of $O(kd+d)$ for Algorithm 1, it can be seen that obtaining a good initial vector $v_0$ is much more time consuming than refining the initial value.

Let $\widehat{P}$ be an estimator obtained from solving (10). Then, the initial value $v_0$ can be set to be the largest eigenvector of $\widehat{P}$. The theoretical guarantees for $v_0$ obtained via this approach are presented in Proposition 1 in Section 4.1. In practice, for the purpose of obtaining an initial value $v_0$, one can simply set $K = 1$ and $\zeta$ to be approximately $(\log d/n)^{1/2}$. In fact, we suggest setting $\zeta$ conservatively since there is a refinement step using Rifle to obtain an estimator that is closer to $v^*$.

Algorithm 2 ADMM Algorithm for Solving (10)
Input: matrices $\widehat{A}$, $\widehat{B}$, tuning parameters $\zeta$, $K$, ADMM parameter $\nu$, and convergence criterion $\epsilon$.
Initialize: matrices $P_0$, $H_0$, and $\Gamma_0$. Let $t = 1$.
Repeat the following until $\|P_{t+1}-P_t\|_{\mathrm{F}} \le \epsilon$:
1. Update $P$ by solving the following lasso problem:
$$P_{t+1} = \arg\min_{P}\;\frac{\nu}{2}\|\widehat{B}^{1/2}P\widehat{B}^{1/2}-H_t+\Gamma_t\|_{\mathrm{F}}^{2}-\mathrm{tr}(\widehat{A}P)+\zeta\|P\|_{1,1}.$$
2. Let $\sum_{j=1}^{d}\omega_ja_ja_j^{\mathrm{T}}$ be the singular value decomposition of $\Gamma_t+\widehat{B}^{1/2}P_{t+1}\widehat{B}^{1/2}$ and let
$$\gamma^* = \arg\min_{\gamma>0}\;\gamma \quad \text{subject to}\quad \sum_{j=1}^{d}\min\{1,\max(\omega_j-\gamma,0)\}\le K.$$
Update $H$ by $H_{t+1} = \sum_{j=1}^{d}\min\{1,\max(\omega_j-\gamma^*,0)\}a_ja_j^{\mathrm{T}}$.
3. Update $\Gamma$ by $\Gamma_{t+1} = \Gamma_t+\widehat{B}^{1/2}P_{t+1}\widehat{B}^{1/2}-H_{t+1}$.
4. $t \leftarrow t+1$.
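Step 2 of Algorithm 2 is a one-dimensional search for $\gamma^*$ followed by a spectral shrinkage. The sketch below (illustrative only; the bisection search, the use of an eigendecomposition for the symmetric input, and the function name are our own choices, and the lasso subproblem of Step 1 is not shown) indicates one way such an update could be carried out.

```python
import numpy as np

def h_update(M, K, tol=1e-10):
    """Sketch of Step 2 of Algorithm 2 for a symmetric input M:
    find gamma so that the capped, shifted spectrum sums to at most K, then rebuild H."""
    M = (M + M.T) / 2
    w, Q = np.linalg.eigh(M)
    cap_sum = lambda g: np.minimum(1.0, np.maximum(w - g, 0.0)).sum()
    if cap_sum(0.0) <= K:
        gamma = 0.0
    else:
        lo, hi = 0.0, float(w.max())          # cap_sum(hi) = 0 <= K
        while hi - lo > tol:
            mid = 0.5 * (lo + hi)
            lo, hi = (mid, hi) if cap_sum(mid) > K else (lo, mid)
        gamma = hi
    shrunk = np.minimum(1.0, np.maximum(w - gamma, 0.0))
    return (Q * shrunk) @ Q.T                 # H = sum_j min{1, max(w_j - gamma, 0)} a_j a_j^T
```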
4 Theoretical Results We show that if the matrix pair (A, B) has a unique sparse leading generalized eigenvector, then Algorithm 1 can accurately recover the population leading generalized eigenvector from the noisy matrix pair ( b A, b B). Recall from the Introduction that A is symmetric and B is positive de\ufb01nite. This condition ensures that all generalized eigenvalues are real. Recall that v\u2217is the leading generalized eigenvector of (A, B). Let V = supp(v\u2217) be the index set corresponding to the non-zero elements of v\u2217, and let |V | = s. Let F \u2282{1, . . . , d} be a superset of V , i.e., V \u2282F, with cardinality |F| = k\u2032. Throughout the paper, for notational convenience, let \u03bbj and b \u03bbj be the jth generalized eigenvalue of the matrix pairs (A, B) and ( b A, b B), respectively. Moreover, let \u03bbj(F) and b \u03bbj(F) be the jth generalized eigenvalue of the matrix pair (AF , BF ) and ( b AF , b BF ), respectively. Our theoretical results depend on several quantities that are speci\ufb01c to the generalized eigenvalue problem. Let cr(A, B) = min v:\u2225v\u22252=1 \u0002 (vT Av)2 + (vT Bv)2\u00031/2 > 0 (11) be the Crawford number of the symmetric-de\ufb01nite matrix pair (A, B) (Stewart, 1979). Let cr(k\u2032) = inf F:|F|\u2264k\u2032 cr(AF , BF ) and \u03f5(k\u2032) = p \u03c1(EA, k\u2032)2 + \u03c1(EB, k\u2032)2, (12) 8 where \u03c1(EA, k\u2032) is as de\ufb01ned in (4). In the following, we start with an assumption that these quantities are upper bounded for su\ufb03ciently large n. Assumption 1. For su\ufb03ciently large n, there exist constants b, c > 0 such that \u03f5(k\u2032) cr(k\u2032) \u2264b and \u03c1(EB, k\u2032) \u2264c\u03bbmin(B) for any k\u2032 \u226an, where cr(k\u2032) and \u03f5(k\u2032) are de\ufb01ned in (12). Provided that n is large enough, it can be shown that the Assumption 1 holds with high probability for most statistical models. In fact, we will show in Proposition 2 in Section 4.2 that as long as n > Ck\u2032 log d for some su\ufb03ciently large constant C, then Assumption 1 is satis\ufb01ed with high probability for most statistical models. We will use the following implications of Assumption 1 in our theoretical analysis, which are implied by matrix perturbation theory (Stewart, 1979; Stewart and Sun, 1990). In detail, by applications of Lemmas 1 and 2 in Appendix A, we have that for any F \u2282{1, . . . , d} with |F| = k\u2032, there exist constants a, c such that (1 \u2212a)\u03bbj(F) \u2264b \u03bbj(F) \u2264(1 + a)\u03bbj(F), (1 \u2212c)\u03bbj(BF ) \u2264\u03bbj(b BF ) \u2264(1 + c)\u03bbj(BF ), and clower \u00b7 \u03ba(B) \u2264\u03ba(b BF ) \u2264cupper \u00b7 \u03ba(B), (13) where clower = (1 \u2212c)/(1 + c), cupper = (1 + c)/(1 \u2212c), c is the same constant in Assumption 1, and \u03ba(B) is the condition number of the matrix B. Meanwhile, let \u03b3 = (1 + a)\u03bb2/[(1 \u2212a)\u03bb1]. Finally, we de\ufb01ne v(F) to be the solution of a generalized eigenvalue problem restricted to a superset of V (V \u2282F): v(F) = arg max v\u2208Rd vT b Av, subject to vT b Bv = 1, supp(v) \u2286F. (14) The quantity v(F) can be interpreted as the solution of a generalized eigenvalue problem for a low-dimensional problem when k\u2032 < n. In the following theorem, we present our main theoretical result for Algorithm 1 as a function of the \u21132 distance between v(F) and v\u2217. Theorem 1. Let k\u2032 = 2k + s and choose k = Cs for su\ufb03ciently large C. 
In addition, choose \u03b7 such that \u03b7\u03bbmax(B) < 1/(1 + c) and \u03bd = q 1 + 2[(s/k)1/2 + s/k] \u00b7 s 1 \u22121 + c 8 \u00b7 \u03b7 \u00b7 \u03bbmin(B) \u00b7 \u0014 1 \u2212\u03b3 cupper\u03ba(B) + \u03b3 \u0015 < 1. Input an initial vector v0 with \u2225v0\u22252 = 1 satisfying |(v\u2217)T v0|/\u2225v\u2217\u22252 \u22651 \u2212\u03b8(A, B), where \u03b8(A, B) is a quantity given in Lemma 3 that depends on the matrix pair (A, B). Under Assumption 1, we have s 1 \u2212|(v\u2217)T vt| \u2225v\u2217\u22252 \u2264\u03bdt \u00b7 p \u03b8(A, B) + \u221a 20 1 \u2212\u03bd \u00b7 s 1 \u2212 |v(F)T v\u2217| \u2225v(F)\u22252\u2225v\u2217\u22252 . (15) 9 For simplicity, assume that (v\u2217)T vt is positive without loss of generality. Since vt is a unit vector, from (15) we have 1 \u2212|(v\u2217)T vt| \u2225v\u2217\u22252 = 1 2 \r \r \r \rvt \u2212 v\u2217 \u2225v\u2217\u22252 \r \r \r \r 2 2 , 1 \u2212 |v(F)T v\u2217| \u2225v(F)\u22252\u2225v\u2217\u22252 = 1 2 \r \r \r \r v(F) \u2225v(F)\u22252 \u2212 v\u2217 \u2225v\u2217\u22252 \r \r \r \r 2 2 . Thus, (15) states that the \u21132 distance between v\u2217/\u2225v\u2217\u22252 and vt can be upper bounded by two terms. The \ufb01rst term on the right-hand side of (15) quanti\ufb01es the optimization error, which decreases to zero at a geometric rate since \u03bd < 1. Meanwhile, the second term on the right-hand side of (15) is the statistical error introduced for solving generalized eigenvalue problem restricted to the set F as in (14). The result in Theorem 1 depends on the estimation error between v(F) and v\u2217. The following corollary quanti\ufb01es such estimation error for a general class of symmetric-de\ufb01nite matrix pair (A, B). Corollary 1. For a general class of symmetric-de\ufb01nite matrix pair (A, B), let \u2206\u03bb = min j>1 \u03bb1 \u2212(1 + a)\u03bbj p 1 + \u03bb2 1 q 1 + (1 \u2212a)2\u03bb2 j (16) denote the eigengap for the generalized eigenvalue problem (Stewart, 1979; Stewart and Sun, 1990). Assume that \u2206\u03bb > \u03f5(k\u2032)/cr(k\u2032). Then, under the same conditions as in Theorem 1, we have s 1 \u2212|(v\u2217)T vt| \u2225v\u2217\u22252 \u2264\u03bdt \u00b7 p \u03b8(A, B) + \u221a 10 1 \u2212\u03bd \u00b7 2 \u2206\u03bb \u00b7 (cr(k\u2032) \u2212\u03f5(k\u2032)) \u00b7 \u03f5(k\u2032), where \u03f5(k\u2032) = p \u03c1(EA, k\u2032)2 + \u03c1(EB, k\u2032)2. For a large class of statistical models, \u03f5(k\u2032) converges to zero at the rate of p s log d/n with high probability. 4.1 Theoretical Results for the Initialization in (10) Theorem 1 involves a condition on the initialization v0: the cosine angle between v\u2217and v0 needs to be strictly larger than a constant. In other words, the initialization v0 needs to be close to v\u2217. We now present some theoretical guarantees for the initialization procedure in Section 3.2. In the context of sparse CCA, Gao et al. (2017) have shown that the estimated subspace obtained from solving convex relaxation of the form (10) converges to the true subspace, under the assumption that A is low rank and positive semide\ufb01nite, and that the rank of A is known. In the following proposition, we remove the aforementioned assumptions on A. Thus, a similar result holds more generally for the sparse generalized eigenvalue problem with symmetric-de\ufb01nite matrix pair (A, B). To this end, we de\ufb01ne some additional notation. 
Let V\u2217\u2208Rd\u00d7d be d generalized eigenvectors and let \u039b\u2217\u2208Rd\u00d7d be a diagonal matrix of generalized eigenvalues of the matrix pair (A, B), respectively. Let Sv be a set containing indices of non-zero rows of V\u2217\u2208Rd\u00d7d. For simplicity, assume that |Sv| = s and that the eigenvalues of B are bounded. The matrix A can be rewritten in terms of 10 its generalized eigenvectors and generalized eigenvalues up to sign jointly, A = BV\u2217\u039b\u2217(V\u2217)T B (Gao et al., 2017). Let e A = b BV\u2217\u039b\u2217(V\u2217)T b B and let P\u2217= V\u2217 \u00b7K(V\u2217 \u00b7K)T , where V\u2217 \u00b7K are the \ufb01rst K generalized eigenvectors of (A, B). Let b P be a solution to (10) with tuning parameters \u03b6 and K. The following proposition establishes an upper bound for the di\ufb00erence between b P and P\u2217under the Frobenius norm. Proposition 1. Assume that n is su\ufb03ciently large such that \u03c1(EB, s2) \u2264c\u03bbmin(B), where c is the same constant that appears in Assumption 1. Let \u03b4gap = \u03bbK \u2212c\u03ba(B)\u03bbK+1/(1 \u2212c), and assume that \u03b4gap > 0. Set \u03b6 > 2\u2225b A \u2212e A\u2225\u221e,\u221e. Then, \u2225b P \u2212P\u2217\u2225F \u2264C \u0012 s \u03b4gap \u00b7 \u2225b A \u2212e A\u2225\u221e,\u221e+ K \u00b7 \u2225b BSv \u2212BSv\u22252 \u0013 , where C is a generic constant that does not depend on the generalized eigenvalues and the dimensions n, d, s, and K. For most statistical models, it can be shown that \u2225b A\u2212e A\u2225\u221e,\u221e\u2264C1 p log d/n and \u2225b BSv\u2212BSv\u22252 \u2264 C2 p s/n with high probability for generic constants C1 and C2. Thus, picking \u03b6 > C3 p log d/n, the upper bound can be simpli\ufb01ed to \u2225b P \u2212P\u2217\u2225F \u2264C s \u03b4gap \u00b7 r log d n + K r s n ! . Choosing K = 1 in (10), by a variant of the Davis-Kahan Theorem in Vu et al. (2013), Proposition 1 guarantees that by setting v0 to be the leading eigenvector of b P, then v0 will be su\ufb03ciently close to v\u2217as long as the conditions in Proposition 1 are satis\ufb01ed. In the next section, we will quantify the sample size condition needed for Proposition 1 to hold under various statistical models. 4.2 Applications to Sparse PCA and Sparse CCA In this section, we provide some discussions on the implications of Theorem 1 and Proposition 1 in the context of sparse PCA and CCA, respectively. More speci\ufb01cally, for each model, we \ufb01rst verify that the initial vector v0 obtained from solving (10) is close to v\u2217. Therefore, the assumption on v0 in Theorem 1 is satis\ufb01ed. Next, we compare our results from Theorem 1 to the minimax optimal rate of convergence for each model. Sparse principal component analysis: We start with the sparse PCA problem. We assume the model X \u223cN(0, \u03a3). As mentioned in Section 2, sparse PCA is a special case of sparse generalized eigenvalue problem when (A, B) = (\u03a3, I) and ( b A, b B) = (b \u03a3, I), where b \u03a3 is the sample covariance matrix. Thus, optimization problem (10) reduces to a convex relaxation of sparse PCA proposed by Vu et al. (2013). In this case, using a variant of the theoretical results in Proposition 1, the initial value v0 converges to v\u2217as long as n > Cs2 log d. 
Note that applying Corollary 1 directly to the sparse PCA problem will give a loose upper bound (on the eigenfactor) since the additional information on the matrix pair (A, B) = (\u03a3, I), with B restricted to the identity matrix and A restricted to positive de\ufb01nite matrix, are not used in the derivation of Corollary 1. In other words, 11 the results in Corollary 1 are derived under a much larger class of matrix pair (A, B). To this end, we resort to the following corollary on the variant of Davis-Kahan perturbation result for sparse PCA (see, for instance, Yu et al., 2014). Corollary 2. Let (A, B) = (\u03a3, I) and let \u03a3 be a symmetric positive de\ufb01nite matrix. Let b A = b \u03a3 be the sample covariance matrix. We have \u03c1( b A \u2212A, s) \u2264C p \u03bb1(A) r s log d n holds with high probability for some constant C > 0. Suppose that |F| = k\u2032 and that k\u2032 = O(s). Then, by the Davis-Kahan Theorem, s 1 \u2212 |v(F)T v\u2217| \u2225v(F)\u22252\u2225v\u2217\u22252 \u2264C\u2032 p \u03bb1(A) \u03bb1(A) \u2212\u03bb2(A) r s log d n holds with high probability for some constant C\u2032 > 0. Combining Corollary 2 with Theorem 1, our results indicate that as the optimization error decays to zero, our proposed estimator has a statistical rate of convergence of approximately p \u03bb1(A) \u03bb1(A) \u2212\u03bb2(A) r s log d n , which matches the minimax optimal rate of convergence for sparse PCA problem (Cai et al., 2013). Sparse canonical correlation analysis: For sparse CCA, we assume the model: X Y ! \u223cN(0, \u03a3) and \u03a3 = \u03a3x \u03a3xy \u03a3T xy \u03a3y ! . Recall from Example 2 the de\ufb01nitions of b A and b B in the context of sparse CCA. The following proposition characterizes the rate of convergence between b \u03a3 and \u03a3. It follows from Lemma 6.5 of Gao et al. (2017). Note that for the ease of presentation, we omit the dependence on the eigenvalues of A and B for CCA. Proposition 2. Let b \u03a3x, b \u03a3y, and b \u03a3xy be the sample covariances of \u03a3x, \u03a3y, and \u03a3xy, respectively. For any C > 0 and positive integer k, there exists a constant C\u2032 > 0 such that \u03c1(b \u03a3x \u2212\u03a3x, k) \u2264C s k log d n , \u03c1(b \u03a3y \u2212\u03a3y, k) \u2264C s k log d n , and \u03c1(b \u03a3xy \u2212\u03a3xy, k) \u2264C s k log d n , with high probability. Moreover, \u2225b \u03a3xy \u2212\u03a3xy\u2225\u221e,\u221e\u2264C p log d/n with high probability. We now verify the sample size condition in Proposition 1. From Proposition 2, we have \u03c1(EB, s2) = OP ( p s2 log d/n). Thus, we need n > Cs2 log d for some generic constant C. Under the sample size condition and using the results in Proposition 2, it can be shown that \u2225e A \u2212b A\u2225\u221e,\u221e\u2264 12 \u2225e A \u2212A\u2225\u221e,\u221e+ \u2225b A \u2212A\u2225\u221e,\u221e= OP ( p log d/n). Moreover, \u2225b BSv \u2212BSv\u22252 = OP ( p s/n). Thus, as long as n > Cs2 log d, v0 converges to v\u2217. This veri\ufb01es the assumption on v0 in Theorem 1. In a recent paper by Ma and Li (2016), the authors have shown that the minimax optimal eigenfactor takes the form p 1 \u2212\u03bb2 1 p 1 \u2212\u03bb2 2/(\u03bb1 \u2212\u03bb2) in the low-dimensional setting in which n > d, under the assumption that \u03a3x = \u03a3y = I. 
Adapting the results in Ma and Li (2016) in a similar fashion as in Corollary 2, Theorem 1 indicates that with high probability, our proposed estimator obtains the minimax statistical rate of convergence of approximately p 1 \u2212\u03bb2 1 p 1 \u2212\u03bb2 2 \u03bb1 \u2212\u03bb2 \u00b7 r s log d n , (17) for the case when \u03a3x = \u03a3y = I. However, the minimax optimal eigenfactor for general \u03a3x and \u03a3y remains an open problem in the literature. To obtain the rate of convergence for general \u03a3x and \u03a3y, we will apply Corollary 1 to the sparse CCA problem. Choosing k to be of the same order as s, Proposition 2 implies that both \u03c1(EA, k\u2032) and \u03c1(EB, k\u2032) are at the order of p s log d/n with high probability. Thus, Corollary 1 indicates that as the optimization error decays to zero, our proposed estimator has a statistical rate of convergence of approximately p 1 + \u03bb2 1 p 1 + \u03bb2 2 \u03bb1 \u2212\u03bb2 \u00b7 r s log d n . (18) The upper bound is expected to be loose in terms of the eigenfactor since the class of paired matrices (A, B) considered in Corollary 1 is a much larger class of matrices than that of the sparse CCA. In short, our theoretical results are very general and are not based on any statistical model. Moreover, the results in Theorem 1 are written as a function of the estimation error between v(F), the solution of a generalized eigenvalue problem restricted on the set F, and v\u2217. Therefore, existing minimax optimal results for various statistical models in the low-dimensional setting can be adapted to the high-dimensional setting in a similar fashion as in the case of sparse CCA. 5 Numerical Studies We perform extensive numerical studies to evaluate the performance of our proposal, Ri\ufb02e, compared to existing methods. We consider sparse Fisher\u2019s discriminant analysis and sparse canonical correlation analysis, each of which can be recast as the sparse generalized eigenvalue problem (2), as shown in Examples 1 and 2. Ri\ufb02e involves an initial vector v0 and a tuning parameter k on the cardinality. We employ the convex optimization approach proposed in Section 3.2 to obtain an initial vector v0. The convex approach involves two tuning parameters: we simply select \u03b6 = p log d/n and K = 1 as suggested by the theoretical analysis. Note that these tuning parameters can be selected conservatively since there is a re\ufb01nement step to obtain a \ufb01nal estimator using Ri\ufb02e. It is challenging to propose a general model selection technique for the selection of k in a sparse generalized eigenvalue problem since it is not based on any statistical model and it includes both unsupervised learning and supervised learning methods as its special cases. For supervised learning 13 methods such as sparse FDA, we perform cross-validation to select the truncation parameter k. For unsupervised learning methods such as the sparse PCA and CCA, it is generally agreed upon in the literature that model selection problem is challenging. In principle, we could also use cross-validation techniques to select k in these settings such as the procedure considered in Witten et al. (2009). For simplicity, in our simulation studies, we assess the performance of our estimator in the context of sparse CCA across several values of k and examine the role of k under \ufb01nite sample setting. 5.1 Fisher\u2019s Discriminant Analysis We consider high-dimensional classi\ufb01cation problem using sparse Fisher\u2019s discriminant analysis. 
The data consist of an $n\times d$ matrix $\mathbf{X}$ with $d$ features measured on $n$ observations, each of which belongs to one of $K$ classes. We let $x_i$ denote the $i$th row of $\mathbf{X}$, and let $C_k\subset\{1,\ldots,n\}$ contain the indices of the observations in the $k$th class, with $n_k = |C_k|$ and $\sum_{k=1}^{K}n_k = n$. Recall from Example 1 that this is a special case of the sparse generalized eigenvalue problem with $\widehat{A} = \widehat{\Sigma}_b$ and $\widehat{B} = \widehat{\Sigma}_w$. Let $\widehat{\mu}_k = \sum_{i\in C_k}x_i/n_k$ be the estimated mean for the $k$th class. The standard estimates for $\Sigma_w$ and $\Sigma_b$ are
$$\widehat{\Sigma}_w = \frac{1}{n}\sum_{k=1}^{K}\sum_{i\in C_k}(x_i-\widehat{\mu}_k)(x_i-\widehat{\mu}_k)^{\mathrm{T}} \quad \text{and} \quad \widehat{\Sigma}_b = \frac{1}{n}\sum_{k=1}^{K}n_k\widehat{\mu}_k\widehat{\mu}_k^{\mathrm{T}}.$$
We consider two simulation settings similar to those of Witten et al. (2009):
1. Binary classification: in this example, we set $\mu_1 = 0$, $\mu_{2j} = 0.5$ for $j = \{2, 4, \ldots, 40\}$, and $\mu_{2j} = 0$ otherwise. Let $\Sigma$ be a block diagonal covariance matrix with five blocks, each of dimension $d/5\times d/5$. The $(j,j')$th element of each block takes value $0.8^{|j-j'|}$. As suggested by Witten et al. (2009), this covariance structure is intended to mimic the covariance structure of gene expression data. The data are simulated as $x_i\sim N(\mu_k,\Sigma)$ for $i\in C_k$.
2. Multi-class classification: there are $K = 4$ classes in this example. Let $\mu_{kj} = (k-1)/3$ for $j = \{2, 4, \ldots, 40\}$ and $\mu_{kj} = 0$ otherwise. The data are simulated as $x_i\sim N(\mu_k,\Sigma)$ for $i\in C_k$, with the same covariance structure as in the binary classification setting.
As noted in Witten et al. (2009), a one-dimensional projection of the data fully captures the class structure. Four approaches are compared: (i) Rifle; (ii) $\ell_1$-penalized logistic or multinomial regression implemented using the R package glmnet; (iii) $\ell_1$-penalized FDA with a diagonal estimate of $\Sigma_w$ implemented using the R package penalizedLDA (Witten et al., 2009); and (iv) the direct approach to sparse discriminant analysis (Mai et al., 2012, 2016) implemented using the R packages dsda and msda for binary and multi-class classification, respectively. For each method, models are fit on the training set with tuning parameters selected using 5-fold cross-validation. Then, the models are evaluated on the test set. In addition to the aforementioned models, we consider an oracle estimator using the theoretical direction $v^*$, computed using the population quantities $\Sigma_w$ and $\Sigma_b$. To compare the performance of the different proposals, we report the misclassification error on the test set and the number of non-zero features selected in the models. The results for 400 training samples and 1000 test samples, with $d = 500$ features, are reported in Table 1. From Table 1, we see that Rifle has the lowest misclassification error compared to the other competing methods. This suggests that Algorithm 1 works well with the initial value obtained from the convex approach in Section 3.2. Witten et al. (2009) has the highest misclassification error in both of our simulation settings, since it does not take into account the dependencies among the features. Mai et al. (2012) and Mai et al. (2016) perform slightly worse than our proposal in terms of misclassification error. Moreover, they use a large number of features in their models, which renders interpretation difficult. In contrast, the number of features selected by our proposal is very close to that of the oracle estimator.
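The within- and between-class covariance estimates above translate directly into code. The following sketch (illustrative only; the function name and the synthetic data are our own choices) computes $\widehat{\Sigma}_w$ and $\widehat{\Sigma}_b$ from a data matrix and a label vector, giving the $(\widehat{A},\widehat{B})$ pair of Example 1.

```python
import numpy as np

def fda_gep_pair(X, labels):
    """Return (A_hat, B_hat) = (Sigma_b_hat, Sigma_w_hat) for sparse FDA (Example 1)."""
    n, d = X.shape
    Sigma_w = np.zeros((d, d))
    Sigma_b = np.zeros((d, d))
    for k in np.unique(labels):
        Xk = X[labels == k]
        nk = Xk.shape[0]
        mu_k = Xk.mean(axis=0)
        R = Xk - mu_k
        Sigma_w += R.T @ R / n                    # (1/n) sum_{i in C_k} (x_i - mu_k)(x_i - mu_k)^T
        Sigma_b += nk * np.outer(mu_k, mu_k) / n  # (1/n) sum_k n_k mu_k mu_k^T
    return Sigma_b, Sigma_w

rng = np.random.default_rng(3)
X = np.vstack([rng.normal(0.0, 1.0, (50, 8)), rng.normal(0.5, 1.0, (50, 8))])
labels = np.repeat([0, 1], 50)
A_hat, B_hat = fda_gep_pair(X, labels)   # inputs for Algorithms 1 and 2
```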
Table 1: The number of misclassified observations out of 1000 test samples and the number of non-zero features (and standard errors) for the binary and multi-class classification problems, averaged over 200 data sets. The results (rounded to the nearest integer) are for models trained with 400 training samples and 500 features.

                          $\ell_1$-penalized   $\ell_1$-FDA   direct    Rifle     oracle
Binary        Error       32 (1)               298 (1)        29 (1)    15 (1)    8 (1)
              Features    88 (1)               23 (1)         105 (2)   42 (1)    41 (0)
Multi-class   Error       495 (2)              497 (1)        247 (2)   192 (2)   153 (1)
              Features    54 (2)               22 (1)         102 (2)   42 (1)    41 (0)

5.2 Canonical Correlation Analysis

In this section, we study the relationship between two sets of random variables $X\in\mathbb{R}^{d/2}$ and $Y\in\mathbb{R}^{d/2}$ in the high-dimensional setting using sparse CCA. Let $\Sigma_x$, $\Sigma_y$, and $\Sigma_{xy}$ be the covariance matrices of $X$ and $Y$, and the cross-covariance matrix of $X$ and $Y$, respectively. We consider two different scenarios in which $\Sigma_{xy}$ is low rank and approximately low rank, respectively. Throughout the simulation studies, we compare our proposal to Witten et al. (2009), implemented using the R package PMA. Their proposal involves choosing two tuning parameters that control the sparsity of the estimated directional vectors. We consider a range of tuning parameters and choose the tuning parameters that yield the lowest estimation error for Witten et al. (2009). We assess the performance of Rifle by considering multiple values of $k = \{6, 8, 10, 15\}$. The output of both our proposal and that of Witten et al. (2009) is normalized to have norm one, whereas the true parameters $v_x^*$ and $v_y^*$ are normalized with respect to $\Sigma_x$ and $\Sigma_y$. To evaluate the performance of the two methods, we normalize $v_x^*$ and $v_y^*$ such that they have norm one, and compute the squared $\ell_2$ distance between the estimated and the true directional vectors.

5.2.1 Low Rank $\Sigma_{xy}$

Assume that $(X, Y)\sim N(0,\Sigma)$ with
$$\Sigma = \begin{pmatrix}\Sigma_x & \Sigma_{xy}\\ \Sigma_{xy}^{\mathrm{T}} & \Sigma_y\end{pmatrix} \quad \text{and} \quad \Sigma_{xy} = \Sigma_xv_x^*\lambda_1(v_y^*)^{\mathrm{T}}\Sigma_y,$$
where $0 < \lambda_1 < 1$ is the largest generalized eigenvalue and $v_x^*$ and $v_y^*$ are the leading pair of canonical directions. The data consist of two $n\times(d/2)$ matrices $\mathbf{X}$ and $\mathbf{Y}$. We assume that each row of the two matrices is generated according to $(x_i, y_i)\sim N(0,\Sigma)$. The goal of CCA is to estimate the canonical directions $v_x^*$ and $v_y^*$ based on the data matrices $\mathbf{X}$ and $\mathbf{Y}$. Let $\widehat{\Sigma}_x$, $\widehat{\Sigma}_y$ be the sample covariance matrices of $X$ and $Y$, and let $\widehat{\Sigma}_{xy}$ be the sample cross-covariance matrix of $X$ and $Y$. Recall from Example 2 that the sparse CCA problem can be recast as the generalized eigenvalue problem with
$$\widehat{A} = \begin{pmatrix}0 & \widehat{\Sigma}_{xy}\\ \widehat{\Sigma}_{xy}^{\mathrm{T}} & 0\end{pmatrix}, \quad \widehat{B} = \begin{pmatrix}\widehat{\Sigma}_x & 0\\ 0 & \widehat{\Sigma}_y\end{pmatrix}, \quad \text{and} \quad v = \begin{pmatrix}v_x\\ v_y\end{pmatrix}.$$
In our simulation setting, we set $\lambda_1 = 0.9$, $v_{x,j}^* = v_{y,j}^* = 1/\sqrt{3}$ for $j = \{1, 6, 11\}$, and $v_{x,j}^* = v_{y,j}^* = 0$ otherwise. Then, we normalize $v_x^*$ and $v_y^*$ such that $(v_x^*)^{\mathrm{T}}\Sigma_xv_x^* = (v_y^*)^{\mathrm{T}}\Sigma_yv_y^* = 1$. We consider the case when $\Sigma_x$ and $\Sigma_y$ are block diagonal matrices with five blocks, each of dimension $d/5\times d/5$, where the $(j,j')$th element of each block takes value $0.8^{|j-j'|}$. The results for $d = 500$, $s = 6$, averaged over 200 data sets, are summarized in Table 2.

Table 2: Results for low rank $\Sigma_{xy}$.
The squared $\ell_2$ distance between the estimated and true leading generalized eigenvectors as a function of the sample size $n$ for $d = 500$, $s = 6$. The results are averaged over 200 data sets.

                 PMA           Rifle (k = 6)   Rifle (k = 8)   Rifle (k = 10)   Rifle (k = 15)
$v_x$  n = 200   0.72 (0.01)   0.21 (0.02)     0.11 (0.02)     0.08 (0.02)      0.07 (0.01)
       n = 400   0.61 (0.01)   0.01 (0.01)     0.01 (0.01)     0.01 (0.01)      0.01 (0.01)
       n = 600   0.58 (0.01)   0.01 (0.01)     0.01 (0.01)     0.01 (0.01)      0.01 (0.01)
$v_y$  n = 200   0.70 (0.01)   0.24 (0.02)     0.24 (0.02)     0.35 (0.02)      0.58 (0.01)
       n = 400   0.62 (0.01)   0.02 (0.01)     0.07 (0.01)     0.15 (0.01)      0.32 (0.01)
       n = 600   0.59 (0.01)   0.01 (0.01)     0.04 (0.01)     0.08 (0.01)      0.19 (0.01)

From Table 2, we see that our proposal outperforms Witten et al. (2009) uniformly across different sample sizes. This is not surprising since Witten et al. (2009) uses diagonal estimates of $\Sigma_x$ and $\Sigma_y$ to compute the directional vectors. The $\ell_2$ distance for our proposal decreases as we increase $n$. Moreover, the $\ell_2$ distance increases when we increase $k$. These results confirm our theoretical analysis in Theorem 1.

5.2.2 Approximately Low Rank $\Sigma_{xy}$

In this section, we consider the case when $\Sigma_{xy}$ is approximately low rank. We consider the same simulation setup as in the previous section, except that $\Sigma_{xy}$ is now approximately low rank, generated as follows:
$$\Sigma_{xy} = \Sigma_xv_x^*\lambda_1(v_y^*)^{\mathrm{T}}\Sigma_y+\Sigma_xV_x^*\Lambda(V_y^*)^{\mathrm{T}}\Sigma_y,$$
with $\lambda_1 = 0.9$. Here, $\Lambda\in\mathbb{R}^{200\times200}$ is a diagonal matrix with diagonal entries equal to 0.1, and $V_x^*, V_y^*\in\mathbb{R}^{d/2\times200}$ are normalized orthogonal matrices such that $(V_x^*)^{\mathrm{T}}\Sigma_xV_x^* = I$ and $(V_y^*)^{\mathrm{T}}\Sigma_yV_y^* = I$, respectively. The goal is to recover the leading generalized eigenvectors $v_x^*$ and $v_y^*$. The results for $d = 1000$, $s = 6$, averaged over 200 data sets, are summarized in Table 3. From Table 3, we see that the performance of Rifle is much better than that of PMA across all settings. As we increase the number of samples $n$, the $\ell_2$ distance decreases for all values of $k$. Interestingly, as we increase $k$ from $k = 6$ to $k = 10$ for the case when $n = 400$, the $\ell_2$ distance decreases slightly. This is because in the high-dimensional setting, the initial value is not estimated accurately. Thus, when we choose $k = s = 6$, some of the true support is not selected after truncating the initial value $v_0$, and therefore the estimate has a higher $\ell_2$ distance. In this case, by selecting a larger value of $k$, we are able to ensure that the true support is selected, which yields a lower $\ell_2$ distance. Note that if an even larger $k$ is selected, then the $\ell_2$ distance will eventually increase, as in the case of $k = 15$ for $v_y$.

Table 3: Results for approximately low rank $\Sigma_{xy}$. The squared $\ell_2$ distance between the estimated and true leading generalized eigenvectors as a function of the sample size $n$ for $d = 1000$, $s = 6$. The results are averaged over 200 data sets.
                 PMA           Rifle (k = 6)   Rifle (k = 8)   Rifle (k = 10)   Rifle (k = 15)
$v_x$  n = 400   0.63 (0.01)   0.30 (0.02)     0.19 (0.02)     0.13 (0.02)      0.07 (0.01)
       n = 600   0.62 (0.01)   0.11 (0.01)     0.07 (0.01)     0.09 (0.01)      0.07 (0.01)
       n = 800   0.57 (0.01)   0.02 (0.01)     0.05 (0.01)     0.08 (0.01)      0.07 (0.01)
$v_y$  n = 400   0.66 (0.01)   0.31 (0.02)     0.26 (0.02)     0.22 (0.02)      0.25 (0.01)
       n = 600   0.63 (0.01)   0.10 (0.01)     0.11 (0.01)     0.13 (0.01)      0.16 (0.01)
       n = 800   0.55 (0.01)   0.02 (0.01)     0.07 (0.01)     0.11 (0.01)      0.13 (0.01)

6 Data Application

In this section, we apply our method in the context of sparse sliced inverse regression as in Example 3. The data sets we consider are:
1. Leukemia (Golub et al., 1999): 7,129 gene expression measurements from 25 patients with acute myeloid leukemia and 47 patients with acute lymphoblastic leukemia. The data are available from http://www.broadinstitute.org/cgi-bin/cancer/datasets.cgi. Recently, this data set was analyzed in the context of sparse sufficient dimension reduction in Yin and Hilafu (2015).
2. Lung cancer (Spira et al., 2007): 22,283 gene expression measurements from large airway epithelial cells sampled from 97 smokers with lung cancer and 90 smokers without lung cancer. The data are publicly available from GEO at accession number GDS2771.

We preprocess the leukemia data set following Golub et al. (1999) and Yin and Hilafu (2015). In particular, we set gene expression readings of 100 or fewer to 100, and expression readings of 16,000 or more to 16,000. We then remove genes for which the difference and the ratio between the maximum and minimum readings are less than 500 and 5, respectively. A log-transformation is then applied to the data. This gives us a data matrix $\mathbf{X}$ with 72 rows/samples and 3,571 columns/genes. For the lung cancer data, we simply select the 2,000 genes with the largest variance as in Petersen et al. (2016). This gives a data matrix with 167 rows/samples and 2,000 columns/genes. We further standardize both data sets so that each gene has mean zero and variance one.

Recall from Example 3 that in order to apply our method, we need the estimates $\widehat{A} = \widehat{\Sigma}_{E(X|Y)}$ and $\widehat{B} = \widehat{\Sigma}_x$. The quantity $\widehat{\Sigma}_x$ is simply the sample covariance matrix of $X$. Let $n_1$ and $n_2$ be the numbers of samples in the two classes of the data set. Let $\widehat{\Sigma}_{x,1}$ and $\widehat{\Sigma}_{x,2}$ be the sample covariance matrices calculated using only data from class one and class two, respectively. Then, the covariance matrix of the conditional expectation can be estimated by
$$\widehat{\Sigma}_{E[X|Y]} = \widehat{\Sigma}_x-\frac{1}{n}\sum_{k=1}^{2}n_k\widehat{\Sigma}_{x,k}, \quad \text{where } n = n_1+n_2$$
(Li, 1991; Li and Nachtsheim, 2006; Zhu et al., 2006; Li and Yin, 2008; Chen et al., 2010; Yin and Hilafu, 2015). Let $\widehat{v}_t$ be the output of Algorithm 1. Similar to Yin and Hilafu (2015), we plot box-plots of the sufficient predictor, $\mathbf{X}\widehat{v}_t$, for the two classes in each data set. The results with $k = 25$ for the leukemia and lung cancer data sets are shown in Figures 1(a)-(b), respectively. From Figure 1(a), for the leukemia data set, we see that the sufficient predictor for the two groups is much better separated than in the results of Yin and Hilafu (2015). Moreover, our proposal comes with theoretical guarantees, whereas their proposal is sequential and without theoretical guarantees. For the lung cancer data set, we see that there is some overlap between the sufficient predictor for subjects with and without lung cancer.
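For a binary response, the two inputs described above are only a few lines of code. The following sketch (illustrative only; the function name is ours, and the loading and preprocessing of the leukemia and lung cancer data are not reproduced) computes $\widehat{\Sigma}_x$ and $\widehat{\Sigma}_{E[X|Y]}$ from a data matrix and a two-class label vector, after which the sufficient predictor is simply $\mathbf{X}\widehat{v}_t$.

```python
import numpy as np

def sir_pair_two_class(X, y):
    """Return (A_hat, B_hat) for sparse SIR with a binary response:
    B_hat = Sigma_x_hat and A_hat = Sigma_x_hat - (1/n) * sum_k n_k * Sigma_{x,k}_hat."""
    n = X.shape[0]
    Xc = X - X.mean(axis=0)
    Sigma_x = Xc.T @ Xc / n
    pooled_within = np.zeros_like(Sigma_x)
    for k in np.unique(y):
        Xk = X[y == k]
        nk = Xk.shape[0]
        Rk = Xk - Xk.mean(axis=0)
        pooled_within += nk * (Rk.T @ Rk / nk) / n   # (n_k / n) * Sigma_{x,k}_hat
    return Sigma_x - pooled_within, Sigma_x

# Given v_hat from Algorithm 1, the sufficient predictor is X @ v_hat,
# which can be compared across the two classes with box-plots as in Figure 1.
```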
These results are consistent with the literature, where it is known that the lung cancer data set poses a much more difficult classification problem than the leukemia data set (Fan and Fan, 2008; Petersen et al., 2016).

Figure 1: Panels (a) and (b) contain box-plots of the sufficient predictor Xv̂t obtained from Algorithm 1 for the leukemia and lung cancer data sets. In panel (a), the y-axis represents patients with acute lymphoblastic leukemia (ALL) and acute myeloid leukemia (AML), respectively. In panel (b), the y-axis represents patients with and without lung cancer, respectively.

7 Discussion

We propose a two-stage computational framework for solving the sparse generalized eigenvalue problem. The proposed method successfully handles the ill-conditioned normalization matrix that arises in the high-dimensional setting due to finite sample estimation, and the final estimator enjoys geometric convergence to a solution with the optimal statistical rate of convergence. Our method and theory have applications to a large class of statistical models, including but not limited to sparse FDA, sparse CCA, and sparse SDR. Compared to existing theory for each specific statistical model, our theory is very general and does not require any structural assumption on (A, B).

Our theoretical results in Theorem 1 rely on selecting the tuning parameter k such that k = Cs for some constant C > 1. However, in practice the true sparsity level s is unknown, and it may be difficult to select the value of k. To remove the dependence on s, one of the reviewers suggested a thresholding strategy: instead of truncating the vector v't and keeping the top k elements, one can apply a C · sqrt(log d / n) threshold to the updated vector v't from Step 3 of Algorithm 1, where C is some user-specified constant. To evaluate the thresholding strategy, we perform a small-scale numerical study on the FDA binary classification example similar to that of Section 5.1, with n = 200 and d = 200. We compare the estimator obtained using the soft-thresholding rule (Soft-Rifle) with that of our proposed truncation rule by calculating the estimation error between these estimators and the oracle direction. The results, averaged across 50 iterations, are presented in Table 4. From Table 4, we see that, depending on the choice of the constant C, the soft-thresholding rule has performance similar to the truncation rule, suggesting that substituting the soft-thresholding rule into Steps 4 and 5 of Algorithm 1 will also work.

In the case when v* is only approximately sparse, i.e., s = d, the current theoretical results are no longer applicable. To address this issue, we can redefine the notion of the sparsity level s. As suggested by one of the reviewers, we can define the effective sparsity level s' as the ℓq norm (q < 1) of v*, or as the ratio between, for example, the ℓ1 and ℓ∞ norms of v*. The theoretical properties of the thresholding strategy and of weak sparsity are challenging to establish under our current theoretical framework.
In particular, due to the normalization constraint v^T B̂ v in the denominator, to analyze the gradient ascent step in Step 2 we require that the input vector have support of cardinality k'. This condition is needed to control the condition number of B̂_F, where F is an index set such that |F| = k'. Developing a new theoretical framework for solving the sparse generalized eigenvalue problem is beyond the scope of this paper, and we leave it for future work.

Table 4: Estimation error between the true standardized generalized eigenvector (∥v*∥2 = 1) and the estimated generalized eigenvector for the binary classification problem, averaged over 50 data sets. The number of non-zero features is also reported. The results are with n = 200 and d = 200. The true sparsity level is s = 40.

                     Soft-Rifle                        Rifle
                     C = 1    C = 0.5   C = 0.25       k = 35   k = 40   k = 55
Estimation Error     0.180    0.048     0.072          0.181    0.048    0.072
Features             33.5     39.7      53.3           35       40       55

There are several additional future directions for the sparse generalized eigenvalue problem. It will be interesting to study whether Rifle can be generalized to estimate the subspace spanned by the top K leading generalized eigenvectors. The computational bottleneck of the current approach is the convex relaxation method for obtaining the initial vector v0, which has a computational complexity of O(d^3) per iteration. This yields a total computational complexity of O(d^3) + O(kd + d) for the proposed two-stage computational framework. In future work, it will be of paramount importance to propose an efficient convex algorithm for obtaining v0 so that our proposal scales to large data sets.

Acknowledgement

We thank the editor, associate editor, and two reviewers for their helpful comments, which improved an earlier version of this paper. We thank Chao Gao and Xiaodong Li for responding to our inquiries. Tong Zhang was supported by NSF IIS-1250985, NSF IIS-1407939, and NIH R01AI116744. Kean Ming Tan was supported by NSF IIS-1250985, NSF IIS-1407939, and NSF DMS-1811315.", "introduction": "A large class of high-dimensional statistical methods, such as canonical correlation analysis (CCA), Fisher's discriminant analysis (FDA), and sufficient dimension reduction (SDR), can be formulated as the generalized eigenvalue problem (GEP). Let A ∈ R^{d×d} be a symmetric matrix and let B ∈ R^{d×d} be a positive definite matrix. For a symmetric-definite matrix pair (A, B), the generalized eigenvalue problem aims to obtain v* ∈ R^d satisfying

Av* = λmax(A, B) · Bv*,   (1)

where v* is the leading generalized eigenvector corresponding to the largest generalized eigenvalue λmax(A, B) of the matrix pair (A, B). The largest generalized eigenvalue can also be characterized as λmax(A, B) = max_{v∈R^d} v^T A v, subject to v^T B v = 1. In many real-world applications, the matrix pair (A, B) is a population quantity that is unknown in general. Instead, we can only access (Â, B̂), an estimator of (A, B) based on n independent observations: Â = A + E_A and B̂ = B + E_B, where E_A and E_B are stochastic errors due to finite sample estimation. For the statistical models considered in this paper, E_A and E_B are symmetric matrices. In the high-dimensional setting in which d > n, we assume that the leading generalized eigenvector v* is sparse.
Let s = ∥v*∥0 be the number of non-zero entries in v*, and assume that s is much smaller than n and d. We aim to estimate v* based on Â and B̂ by solving the following optimization problem:

maximize_{v∈R^d} v^T Â v, subject to v^T B̂ v = 1, ∥v∥0 ≤ s.   (2)

There are three major challenges in solving (2). Firstly, in the high-dimensional setting, B̂ is singular and not invertible, and classical algorithms that require taking the inverse of B̂ are not directly applicable (Golub and Van Loan, 2012). Secondly, due to the normalization term v^T B̂ v = 1, many recent proposals for solving the sparse eigenvalue problem, such as the truncated power method in Yuan and Zhang (2013), cannot be directly applied to solve (2). Thirdly, (2) requires maximizing a convex objective function over a nonconvex set, which is NP-hard even when B̂ is the identity matrix (Moghaddam et al., 2006a,b).

In this paper, we propose a two-stage computational framework for solving the sparse GEP in (2). At the first stage, we solve a convex relaxation of (2). Our proposal generalizes the convex relaxation proposed in Gao et al. (2017) in the context of sparse CCA to the sparse GEP setting. Gao et al. (2017) assumes that A is low rank and positive semidefinite, and that the rank of A is known. Our theoretical analysis removes all of the aforementioned assumptions. Using the solution as an initial value, we propose a nonconvex optimization algorithm to solve (2) directly. The proposed algorithm iteratively performs a gradient ascent step on the generalized Rayleigh quotient v^T Â v / v^T B̂ v, and a truncation step that preserves the top k entries of v with the largest magnitudes while setting the remaining entries to zero. Here, k is a tuning parameter that controls the cardinality of the solution. Theoretical guarantees are established for the proposed nonconvex algorithm. To the best of our knowledge, this is the first general theoretical result for the sparse generalized eigenvalue problem in the high-dimensional setting.

We provide a brief description of the theoretical result for the nonconvex algorithm at the second stage. Let {vt}_{t=0}^{L} be the solution sequence resulting from the proposed algorithm, where L is the total number of iterations and v0 is the initialization point. We prove that, under mild conditions,

∥vt − v*∥2 ≤ ν^t · ∥v0 − v*∥2 (optimization error) + sqrt( ρ(E_A, 2k+s)^2 + ρ(E_B, 2k+s)^2 ) / ξ(A, B) (statistical error),  t = 1, . . . , L.   (3)

The quantities ν ∈ (0, 1) and ξ(A, B) depend on the population matrix pair (A, B). These quantities will be specified in Section 4. Meanwhile, ρ(E_A, 2k+s) is defined as

ρ(E_A, 2k+s) = sup_{∥u∥2=1, ∥u∥0≤2k+s} |u^T E_A u|,   (4)

and ρ(E_B, 2k+s) is defined similarly. The first term on the right-hand side quantifies the exponential decay of the optimization error, while the second term characterizes the statistical error due to finite sample estimation. In particular, for many statistical models that can be formulated as a sparse GEP, such as sparse CCA, sparse FDA, and sparse SDR, we establish that

max{ ρ(E_A, 2k+s), ρ(E_B, 2k+s) } ≤ sqrt( (s + 2k) log d / n )   (5)

with high probability. Consequently, for any properly chosen k that is of the same order as s, the algorithm achieves an estimator of v* with the optimal statistical rate of convergence sqrt(s log d / n).
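As an illustration of the two steps just described, here is a minimal NumPy sketch of a gradient-ascent-plus-truncation iteration for (2). It is a schematic under stated assumptions rather than the exact update rule analyzed above: the step size, the iteration count, the renormalization convention, and the toy matrices (A, B) are arbitrary illustration choices, and the function names are hypothetical.

```python
import numpy as np

def truncate(v, k):
    """Keep the k largest-magnitude entries of v and set the rest to zero."""
    out = np.zeros_like(v)
    idx = np.argsort(np.abs(v))[-k:]
    out[idx] = v[idx]
    return out

def sparse_gep_ascent(A, B, v0, k, eta=0.1, n_iter=200):
    """Schematic iteration: gradient ascent on the generalized Rayleigh quotient
    v'Av / v'Bv, followed by truncation to the top-k coordinates and
    renormalization so that v'Bv = 1."""
    v = v0 / np.sqrt(v0 @ B @ v0)
    for _ in range(n_iter):
        rho = (v @ A @ v) / (v @ B @ v)               # current Rayleigh quotient
        grad = 2.0 * (A @ v - rho * (B @ v)) / (v @ B @ v)
        v = v + eta * grad                             # ascent step
        v = truncate(v, k)                             # truncation step
        v = v / np.sqrt(v @ B @ v)                     # renormalize with respect to B
    return v

# Toy usage on a small synthetic symmetric-definite pair (A, B).
rng = np.random.default_rng(1)
d, s = 30, 5
v_star = np.zeros(d)
v_star[:s] = 1.0 / np.sqrt(s)
B = np.eye(d)
A = np.outer(v_star, v_star) + 0.01 * np.eye(d)
v_hat = sparse_gep_ascent(A, B, v0=rng.standard_normal(d), k=2 * s)
```

The truncation step is the operation that the theory above couples with a choice of k on the order of s.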
The sparse generalized eigenvalue problem in (2) is also closely related to the classical matrix computation literature (see, e.g., Golub and Van Loan, 2012 for a survey, and more recent results in Ge et al., 2016). There are two key differences between our results and existing work. Firstly, we have an additional nonconvex constraint on the sparsity level, which allows us to handle the high-dimensional setting. Secondly, due to the existence of stochastic errors, we allow the normalization matrix B̂ to be rank-deficient, while in the classical setting B̂ is assumed to be positive definite. In comparison with existing generalized eigenvalue algorithms, our algorithm keeps the iterative solution sequence within a basin that involves only a few coordinates of v, such that the corresponding submatrix of B̂ is positive definite. Moreover, our algorithm ensures that the statistical errors in (3) are in terms of the largest sparse eigenvalues of the stochastic errors E_A and E_B, as defined in (4). In contrast, a straightforward application of classical matrix perturbation theory gives statistical error terms that involve the largest eigenvalues of E_A and E_B, which are much larger than their corresponding sparse eigenvalues (Stewart and Sun, 1990). An R package for fitting the sparse generalized eigenvalue problem will be uploaded to CRAN.

Notation: Let v = (v1, . . . , vd)^T ∈ R^d. We define the ℓq-norm of v as ∥v∥q = (Σ_{j=1}^{d} |vj|^q)^{1/q} for 1 ≤ q < ∞. Let λmax(Z) and λmin(Z) be the largest and smallest eigenvalues of Z, respectively. If Z is positive definite, we define its condition number as κ(Z) = λmax(Z)/λmin(Z). We denote by λk(Z) the kth eigenvalue of Z, and the spectral norm of Z by ∥Z∥2 = sup_{∥v∥2=1} ∥Zv∥2. Furthermore, let ∥Z∥1,1 = Σ_{i,j} |Zij|, ∥Z∥∞,∞ = max_{i,j} |Zij|, and ∥Z∥* = tr(Z). For F ⊂ {1, . . . , d}, let Z·F ∈ R^{d×|F|} and ZF· ∈ R^{|F|×d} be the submatrices of Z whose columns and rows, respectively, are restricted to the set F. With some abuse of notation, let ZF ∈ R^{|F|×|F|} be the submatrix of Z whose rows and columns are restricted to the set F. Finally, we define ρ(Z, s) = sup_{∥u∥2=1, ∥u∥0≤s} |u^T Z u|." }, { "url": "http://arxiv.org/abs/1402.7349v2", "title": "Learning Graphical Models With Hubs", "abstract": "We consider the problem of learning a high-dimensional graphical model in which certain hub nodes are highly-connected to many other nodes. Many authors have studied the use of an l1 penalty in order to learn a sparse graph in the high-dimensional setting. However, the l1 penalty implicitly assumes that each edge is equally likely and independent of all other edges. We propose a general framework to accommodate more realistic networks with hub nodes, using a convex formulation that involves a row-column overlap norm penalty. We apply this general framework to three widely-used probabilistic graphical models: the Gaussian graphical model, the covariance graph model, and the binary Ising model. An alternating direction method of multipliers algorithm is used to solve the corresponding convex optimization problems. On synthetic data, we demonstrate that our proposed framework outperforms competitors that do not explicitly model hub nodes.
We illustrate our proposal on a webpage data set and a gene expression data set.", "authors": "Kean Ming Tan, Palma London, Karthik Mohan, Su-In Lee, Maryam Fazel, Daniela Witten", "published": "2014-02-28", "updated": "2014-08-09", "primary_cat": "stat.ML", "cats": [ "stat.ML", "stat.CO", "stat.ME" ], "main_content": "In this section, we present a general framework to accommodate networks with hub nodes.

Figure 1: (a): Heatmap of the inverse covariance matrix in a toy example of a Gaussian graphical model with four hub nodes. White elements are zero and colored elements are non-zero in the inverse covariance matrix. Thus, colored elements correspond to edges in the graph. (b): Estimate from the hub graphical lasso, proposed in this paper. (c): Graphical lasso estimate.

2.1 The Hub Penalty Function

Let X be an n × p data matrix, Θ a p × p symmetric matrix containing the parameters of interest, and ℓ(X, Θ) a loss function (assumed to be convex in Θ). In order to obtain a sparse and interpretable graph estimate, many authors have considered the problem

minimize_{Θ∈S} { ℓ(X, Θ) + λ∥Θ − diag(Θ)∥1 },   (1)

where λ is a non-negative tuning parameter, S is some set depending on the loss function, and ∥·∥1 is the sum of the absolute values of the matrix elements. For instance, in the case of a Gaussian graphical model, we could take ℓ(X, Θ) = −log det Θ + trace(SΘ), the negative log-likelihood of the data, where S is the empirical covariance matrix and S is the set of p × p positive definite matrices. The solution to (1) can then be interpreted as an estimate of the inverse covariance matrix.

The ℓ1 penalty in (1) encourages zeros in the solution, but it typically does not yield an estimate that contains hubs. In order to explicitly model hub nodes in a graph, we wish to replace the ℓ1 penalty in (1) with a convex penalty that encourages a solution that can be decomposed as Z + V + V^T, where Z is a sparse symmetric matrix, and V is a matrix whose columns are either entirely zero or almost entirely non-zero (see Figure 2). The sparse elements of Z represent edges between non-hub nodes, and the non-zero columns of V correspond to hub nodes. We achieve this goal via the hub penalty function, which takes the form

P(Θ) = min_{V,Z: Θ=V+V^T+Z} { λ1∥Z − diag(Z)∥1 + λ2∥V − diag(V)∥1 + λ3 Σ_{j=1}^{p} ∥(V − diag(V))j∥q }.   (2)

Here λ1, λ2, and λ3 are nonnegative tuning parameters. Sparsity in Z is encouraged via the ℓ1 penalty on its off-diagonal elements, and is controlled by the value of λ1. The ℓ1 and ℓ1/ℓq norms on the columns of V induce group sparsity when q = 2 (Yuan and Lin, 2007b; Simon et al., 2013); λ3 controls the selection of hub nodes, and λ2 controls the sparsity of each hub node's connections to other nodes.

Figure 2: Decomposition of a symmetric matrix Θ into Z + V + V^T, where Z is sparse, and most columns of V are entirely zero. Blue, white, green, and red elements are diagonal, zero, non-zero in Z, and non-zero due to two hubs in V, respectively.
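To make the decomposition Θ = Z + V + V^T in Figure 2 concrete, the following is a hypothetical toy construction in NumPy; the dimensions, sparsity level, number of hubs, and entry magnitudes are all arbitrary illustration choices and do not correspond to any simulation set-up used in this paper.

```python
import numpy as np

rng = np.random.default_rng(0)
p, n_hubs = 30, 2

# Z: sparse symmetric part, carrying edges between non-hub nodes.
Z = np.zeros((p, p))
for _ in range(20):
    i, j = rng.choice(p, size=2, replace=False)
    Z[i, j] = Z[j, i] = rng.uniform(0.2, 0.5) * rng.choice([-1, 1])

# V: columns are either entirely zero or (almost) entirely non-zero;
# each non-zero column corresponds to one hub node.
V = np.zeros((p, p))
hub_idx = rng.choice(p, size=n_hubs, replace=False)
for h in hub_idx:
    V[:, h] = rng.uniform(0.1, 0.3, size=p) * rng.choice([-1, 1], size=p)

Theta = Z + V + V.T                                   # symmetric parameter with hub structure
# Make Theta diagonally dominant, hence positive definite (only needed if it
# is meant to play the role of a valid precision matrix).
np.fill_diagonal(Theta, np.abs(Theta).sum(axis=1) + 1.0)
```

Here Z carries the scattered non-hub edges, while each non-zero column of V fills in an entire row and column of Θ, producing the dense bands visible in Figure 1(a).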
The convex penalty (2) can be combined with ℓ(X, Θ) to yield the convex optimization problem

minimize_{Θ∈S, V, Z} { ℓ(X, Θ) + λ1∥Z − diag(Z)∥1 + λ2∥V − diag(V)∥1 + λ3 Σ_{j=1}^{p} ∥(V − diag(V))j∥q } subject to Θ = V + V^T + Z,   (3)

where the set S depends on the loss function ℓ(X, Θ). Note that when λ2 → ∞ or λ3 → ∞, (3) reduces to (1). In this paper, we take q = 2, which leads to estimation of a network containing dense hub nodes. Other values of q, such as q = ∞, are also possible (see, e.g., Mohan et al., 2014). We note that the hub penalty function is closely related to recent work on overlapping group lasso penalties in the context of learning multiple sparse precision matrices (Mohan et al., 2014).

2.2 Algorithm

In order to solve (3) with q = 2, we use an alternating direction method of multipliers (ADMM) algorithm (see, e.g., Eckstein and Bertsekas, 1992; Boyd et al., 2010; Eckstein, 2012). ADMM is an attractive algorithm for this problem, as it allows us to decouple some of the terms in (3) that are difficult to optimize jointly. In order to develop an ADMM algorithm for (3) with guaranteed convergence, we reformulate it as a consensus problem, as in Ma et al. (2013). The convergence of the algorithm to the optimal solution follows from classical results (see, e.g., the review papers Boyd et al., 2010; Eckstein, 2012).

In greater detail, we let B = (Θ, V, Z), B̃ = (Θ̃, Ṽ, Z̃),

f(B) = ℓ(X, Θ) + λ1∥Z − diag(Z)∥1 + λ2∥V − diag(V)∥1 + λ3 Σ_{j=1}^{p} ∥(V − diag(V))j∥2,

and g(B̃) = 0 if Θ̃ = Ṽ + Ṽ^T + Z̃, and g(B̃) = ∞ otherwise. Then, we can rewrite (3) as

minimize_{B, B̃} { f(B) + g(B̃) } subject to B = B̃.   (4)

Algorithm 1 ADMM Algorithm for Solving (3).
1. Initialize the parameters:
   (a) primal variables Θ, V, Z, Θ̃, Ṽ, and Z̃ to the p × p identity matrix;
   (b) dual variables W1, W2, and W3 to the p × p zero matrix;
   (c) constants ρ > 0 and τ > 0.
2. Iterate until the stopping criterion ∥Θ^t − Θ^{t−1}∥_F^2 / ∥Θ^{t−1}∥_F^2 ≤ τ is met, where Θ^t is the value of Θ obtained at the tth iteration:
   (a) Update Θ, V, Z:
       i. Θ = argmin_{Θ∈S} { ℓ(X, Θ) + (ρ/2)∥Θ − Θ̃ + W1∥_F^2 }.
       ii. Z = S(Z̃ − W3, λ1/ρ), diag(Z) = diag(Z̃ − W3). Here S denotes the soft-thresholding operator, applied element-wise to a matrix: S(Aij, b) = sign(Aij) max(|Aij| − b, 0).
       iii. C = Ṽ − W2 − diag(Ṽ − W2).
       iv. Vj = max( 1 − λ3/(ρ∥S(Cj, λ2/ρ)∥2), 0 ) · S(Cj, λ2/ρ) for j = 1, . . . , p.
       v. diag(V) = diag(Ṽ − W2).
   (b) Update Θ̃, Ṽ, Z̃:
       i. Γ = (ρ/6) [ (Θ + W1) − (V + W2) − (V + W2)^T − (Z + W3) ];
       ii. Θ̃ = Θ + W1 − (1/ρ)Γ;
       iii. Ṽ = (1/ρ)(Γ + Γ^T) + V + W2;
       iv. Z̃ = (1/ρ)Γ + Z + W3.
   (c) Update W1, W2, W3:
       i. W1 = W1 + Θ − Θ̃;
       ii. W2 = W2 + V − Ṽ;
       iii. W3 = W3 + Z − Z̃.
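To illustrate Steps 2(a)ii-v of Algorithm 1, here is a minimal NumPy sketch of the Z- and V-updates, written directly from the formulas above; it is a sketch under stated assumptions (the helper names are ours, and no claim is made of matching the authors' implementation).

```python
import numpy as np

def soft_threshold(A, b):
    """Element-wise soft-thresholding operator S(A, b) of Step 2(a)ii."""
    return np.sign(A) * np.maximum(np.abs(A) - b, 0.0)

def update_Z(Z_tilde, W3, lam1, rho):
    """Step 2(a)ii: soft-threshold the off-diagonal entries, keep the diagonal."""
    Z = soft_threshold(Z_tilde - W3, lam1 / rho)
    np.fill_diagonal(Z, np.diag(Z_tilde - W3))
    return Z

def update_V(V_tilde, W2, lam2, lam3, rho):
    """Steps 2(a)iii-v: column-wise group shrinkage of C = V_tilde - W2 - diag(.)."""
    C = V_tilde - W2
    C = C - np.diag(np.diag(C))
    V = np.zeros_like(C)
    for j in range(C.shape[1]):
        Sj = soft_threshold(C[:, j], lam2 / rho)
        norm_Sj = np.linalg.norm(Sj)
        if norm_Sj > 0:
            V[:, j] = max(1.0 - lam3 / (rho * norm_Sj), 0.0) * Sj
    np.fill_diagonal(V, np.diag(V_tilde - W2))
    return V
```

The Θ-update of Step 2(a)i depends on the loss function and is treated separately in the sections that follow.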
The scaled augmented Lagrangian for (4) takes the form

L(B, B̃, W) = ℓ(X, Θ) + λ1∥Z − diag(Z)∥1 + λ2∥V − diag(V)∥1 + λ3 Σ_{j=1}^{p} ∥(V − diag(V))j∥2 + g(B̃) + (ρ/2)∥B − B̃ + W∥_F^2,

where B and B̃ are the primal variables, and W = (W1, W2, W3) is the dual variable. Note that the scaled augmented Lagrangian can be derived from the usual Lagrangian by adding a quadratic term and completing the square (Boyd et al., 2010).

A general algorithm for solving (3) is provided in Algorithm 1; the derivation is in Appendix A. Note that only the update for Θ (Step 2(a)i) depends on the form of the convex loss function ℓ(X, Θ). In the following sections, we consider special cases of (3) that lead to estimation of Gaussian graphical models, covariance graph models, and binary networks with hub nodes.

3. The Hub Graphical Lasso

Assume that x1, . . . , xn i.i.d. ∼ N(0, Σ). The well-known graphical lasso problem (see, e.g., Friedman et al., 2007) takes the form of (1) with ℓ(X, Θ) = −log det Θ + trace(SΘ), where S is the empirical covariance matrix of X:

minimize_{Θ∈S} { −log det Θ + trace(SΘ) + λ Σ_{j≠j'} |Θjj'| },   (5)

where S = {Θ : Θ ≻ 0 and Θ = Θ^T}. The solution to this optimization problem serves as an estimate of Σ^{−1}. We now use the hub penalty function to extend the graphical lasso in order to accommodate hub nodes.

3.1 Formulation and Algorithm

We propose the hub graphical lasso (HGL) optimization problem, which takes the form

minimize_{Θ∈S} { −log det Θ + trace(SΘ) + P(Θ) }.   (6)

Again, S = {Θ : Θ ≻ 0 and Θ = Θ^T}. It encourages a solution that contains hub nodes, as well as edges that connect non-hubs (Figure 1). Problem (6) can be solved using Algorithm 1. The update for Θ in Algorithm 1 (Step 2(a)i) can be derived by minimizing

−log det Θ + trace(SΘ) + (ρ/2)∥Θ − Θ̃ + W1∥_F^2   (7)

with respect to Θ (note that the constraint Θ ∈ S in (6) is treated as an implicit constraint, due to the domain of definition of the log det function). This can be shown to have the solution Θ = (1/2) U ( D + sqrt(D^2 + (4/ρ)I) ) U^T, where UDU^T denotes the eigen-decomposition of Θ̃ − W1 − (1/ρ)S. The complexity of the ADMM algorithm for HGL is O(p^3) per iteration; this is the complexity of the eigen-decomposition for updating Θ.

We now briefly compare the computational time of the ADMM algorithm for solving (6) to that of an interior point method (using the solver Sedumi called from cvx). On a 1.86 GHz Intel Core 2 Duo machine, the interior point method takes roughly 3 minutes, while ADMM takes only 1 second, on a data set with p = 30. We present a more extensive run time study for the ADMM algorithm for HGL in Appendix E.

3.2 Conditions for HGL Solution to be Block Diagonal

In order to reduce the computations for solving the HGL problem, we now present a necessary condition and a sufficient condition for the HGL solution to be block diagonal, subject to some permutation of the rows and columns. The conditions depend only on the tuning parameters λ1, λ2, and λ3.
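Before stating these conditions, here is a minimal NumPy sketch of the closed-form Θ-update just derived for the Gaussian log-likelihood loss; the function name and the toy inputs are hypothetical, and this is an illustrative sketch rather than the authors' code.

```python
import numpy as np

def update_theta_hgl(S, Theta_tilde, W1, rho):
    """Theta-update of Step 2(a)i for the Gaussian log-likelihood loss:
    argmin_Theta -log det(Theta) + tr(S Theta) + (rho/2)||Theta - Theta_tilde + W1||_F^2,
    solved in closed form via one eigen-decomposition."""
    M = Theta_tilde - W1 - S / rho
    d, U = np.linalg.eigh(M)                           # M = U diag(d) U^T
    theta_eigs = 0.5 * (d + np.sqrt(d ** 2 + 4.0 / rho))
    return (U * theta_eigs) @ U.T                      # U diag(theta_eigs) U^T, positive definite

# Hypothetical usage with a small empirical covariance matrix.
rng = np.random.default_rng(2)
X = rng.standard_normal((50, 10))
S_hat = np.cov(X, rowvar=False)
p = S_hat.shape[0]
Theta_new = update_theta_hgl(S_hat, np.eye(p), np.zeros((p, p)), rho=2.5)
```

Because every iteration requires this p × p eigen-decomposition, the per-iteration cost is O(p^3), which is exactly what the block-diagonal screening described next is designed to reduce.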
These conditions build upon similar results in the context of Gaussian graphical models from the recent literature (see, e.g., Witten et al., 2011; Mazumder and Hastie, 2012; Yang et al., 2012b; Danaher et al., 2014; Mohan et al., 2014). Let C1, C2, . . . , CK denote a partition of the p features.

Theorem 1 A sufficient condition for the HGL solution to be block diagonal with blocks given by C1, C2, . . . , CK is that min{ λ1, λ2/2 } > |Sjj'| for all j ∈ Ck, j' ∈ Ck', k ≠ k'.

Theorem 2 A necessary condition for the HGL solution to be block diagonal with blocks given by C1, C2, . . . , CK is that min{ λ1, (λ2 + λ3)/2 } > |Sjj'| for all j ∈ Ck, j' ∈ Ck', k ≠ k'.

Theorem 1 implies that one can screen the empirical covariance matrix S to check whether the HGL solution is block diagonal (using standard algorithms for identifying the connected components of an undirected graph; see, e.g., Tarjan, 1972). Suppose that the HGL solution is block diagonal with K blocks, containing p1, . . . , pK features, with Σ_{k=1}^{K} pk = p. Then one can simply solve the HGL problem on the features within each block separately. Recall that the bottleneck of the HGL algorithm is the eigen-decomposition for updating Θ. The block diagonal condition leads to massive computational speed-ups for implementing the HGL algorithm: instead of computing an eigen-decomposition for a p × p matrix in each iteration, we compute the eigen-decompositions of K matrices of dimensions p1 × p1, . . . , pK × pK. The computational complexity per iteration is reduced from O(p^3) to Σ_{k=1}^{K} O(pk^3). We illustrate the reduction in computational time due to these results in an example with p = 500. Without exploiting Theorem 1, the ADMM algorithm for HGL (with a particular value of λ) takes 159 seconds; in contrast, it takes only 22 seconds when Theorem 1 is applied. The estimated precision matrix has 107 connected components, the largest of which contains 212 nodes.

3.3 Some Properties of HGL

We now present several properties of the HGL optimization problem (6), which can be used to provide guidance on the suitable range for the tuning parameters λ1, λ2, and λ3. In what follows, Z* and V* denote the optimal solutions for Z and V in (6). Let 1/s + 1/q = 1 (recall that q appears in (2)).

Lemma 3 A sufficient condition for Z* to be a diagonal matrix is that λ1 > (λ2 + λ3)/2.

Lemma 4 A sufficient condition for V* to be a diagonal matrix is that λ1 < λ2/2 + λ3/(2(p − 1)^{1/s}).

Corollary 5 A necessary condition for both V* and Z* to be non-diagonal matrices is that λ2/2 + λ3/(2(p − 1)^{1/s}) ≤ λ1 ≤ (λ2 + λ3)/2.

Furthermore, (6) reduces to the graphical lasso problem (5) under a simple condition.

Lemma 6 If q = 1, then (6) reduces to (5) with tuning parameter min{ λ1, (λ2 + λ3)/2 }.

Note also that when λ2 → ∞ or λ3 → ∞, (6) reduces to (5) with tuning parameter λ1. However, throughout the rest of this paper, we assume that q = 2, and that λ2 and λ3 are finite.

The solution Θ̂ of (6) is unique, since (6) is a strictly convex problem. We now consider the question of whether the decomposition Θ̂ = V̂ + V̂^T + Ẑ is unique.
We see that the decomposition is unique in a certain regime of the tuning parameters. For instance, according to Lemma 3, when λ1 > (λ2 + λ3)/2, Ẑ is a diagonal matrix and hence V̂ is unique. Similarly, according to Lemma 4, when λ1 < λ2/2 + λ3/(2(p − 1)^{1/s}), V̂ is a diagonal matrix and hence Ẑ is unique. Studying more general conditions on S and on λ1, λ2, and λ3 under which the decomposition is guaranteed to be unique is a challenging problem and is outside the scope of this paper.

3.4 Tuning Parameter Selection

In this section, we propose a Bayesian information criterion (BIC)-type quantity for tuning parameter selection in (6). Recall from Section 2 that the hub penalty function (2) decomposes the parameter of interest into the sum of three matrices, Θ = Z + V + V^T, and places an ℓ1 penalty on Z and an ℓ1/ℓ2 penalty on V. For the graphical lasso problem in (5), many authors have proposed to select the tuning parameter λ such that Θ̂ minimizes the quantity

−n · log det(Θ̂) + n · trace(SΘ̂) + log(n) · |Θ̂|,

where |Θ̂| is the cardinality of Θ̂, that is, the number of unique non-zeros in Θ̂ (see, e.g., Yuan and Lin, 2007a). (The term log(n) · |Θ̂| is motivated by the fact that the degrees of freedom of an estimate involving the ℓ1 penalty can be approximated by the cardinality of the estimated parameter; Zou et al., 2007.)

Using a similar idea, we propose the following BIC-type quantity for selecting the set of tuning parameters (λ1, λ2, λ3) for (6):

BIC(Θ̂, V̂, Ẑ) = −n · log det(Θ̂) + n · trace(SΘ̂) + log(n) · |Ẑ| + log(n) · ( ν + c · [ |V̂| − ν ] ),

where ν is the number of estimated hub nodes, that is, ν = Σ_{j=1}^{p} 1{∥V̂j∥0 > 0}, c is a constant between zero and one, and |Ẑ| and |V̂| are the cardinalities (the numbers of unique non-zeros) of Ẑ and V̂, respectively. We select the set of tuning parameters (λ1, λ2, λ3) for which the quantity BIC(Θ̂, V̂, Ẑ) is minimized. Note that when the constant c is small, BIC(Θ̂, V̂, Ẑ) will favor more hub nodes in V̂. In this manuscript, we take c = 0.2.

3.5 Simulation Study

In this section, we compare HGL to two sets of proposals: proposals that learn an Erdős-Rényi Gaussian graphical model, and proposals that learn a Gaussian graphical model in which some nodes are highly-connected.

3.5.1 Notation and Measures of Performance

We start by defining some notation. Let Θ̂ be the estimate of Θ = Σ^{−1} from a given proposal, and let Θ̂j be its jth column. Let H denote the set of indices of the hub nodes in Θ (that is, the set of true hub nodes in the graph), and let |H| denote the cardinality of this set. In addition, let Ĥr be the set of estimated hub nodes: the set of nodes in Θ̂ that are among the |H| most highly-connected nodes and that have at least r edges. The values chosen for |H| and r depend on the simulation set-up and will be specified in each simulation study.
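As a small illustration of this notation, the following NumPy sketch computes Ĥr and, anticipating the measures defined next, the proportion |Ĥr ∩ H| / |H|. The function names, the 10^-5 tolerance used to declare an edge, and the tie-breaking behavior of the sort are illustration choices.

```python
import numpy as np

def estimated_hub_set(Theta_hat, n_hubs, r, tol=1e-5):
    """H_hat_r: nodes of Theta_hat that are among the n_hubs most highly
    connected nodes and that have at least r edges (off-diagonal entries
    whose absolute value exceeds tol)."""
    A = np.abs(Theta_hat) > tol
    np.fill_diagonal(A, False)
    degrees = A.sum(axis=1)
    top = set(np.argsort(degrees)[-n_hubs:])           # the |H| most connected nodes
    return {j for j in top if degrees[j] >= r}

def prop_correct_hubs(Theta_hat, true_hubs, r, tol=1e-5):
    """Proportion of correctly estimated hub nodes, |H_hat_r intersect H| / |H|."""
    H_hat = estimated_hub_set(Theta_hat, len(true_hubs), r, tol)
    return len(H_hat & set(true_hubs)) / len(true_hubs)
```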
We now define several measures of performance that will be used to evaluate the various methods.

• Number of correctly estimated edges: Σ_{j<j'} 1{ |Θ̂jj'| > 10^-5 and Θjj' ≠ 0 }.

• Proportion of correctly estimated hub edges: Σ_{j∈H, j'≠j} 1{ |Θ̂jj'| > 10^-5 and Θjj' ≠ 0 } / Σ_{j∈H, j'≠j} 1{ Θjj' ≠ 0 }.

• Proportion of correctly estimated hub nodes: |Ĥr ∩ H| / |H|.

• Sum of squared errors: Σ_{j<j'} ( Θ̂jj' − Θjj' )^2.

We compare against the following proposals, among others:

• The hub screening procedure of Hero and Rajaratnam (2012), in which estimated partial correlations are thresholded based on their absolute value, and a hub node is declared if the number of nonzero elements in the corresponding column of the thresholded partial correlation matrix is sufficiently large. Note that the purpose of Hero and Rajaratnam (2012) is to screen for hub nodes, rather than to estimate the individual edges in the network.

• The scale-free network estimation procedure of Liu and Ihler (2011). This is the solution to the non-convex optimization problem

minimize_{Θ∈S} { −log det Θ + trace(SΘ) + α Σ_{j=1}^{p} log( ∥θ\j∥1 + ϵj ) + Σ_{j=1}^{p} βj |θjj| },   (8)
where θ\j = { θjj' | j' ≠ j }, and ϵj, βj, and α are tuning parameters. Here, S = {Θ : Θ ≻ 0 and Θ = Θ^T}.

• The sparse partial correlation estimation procedure of Peng et al. (2009), implemented using the R package space. This is an extension of the neighborhood selection approach of Meinshausen and Bühlmann (2006) that combines p ℓ1-penalized regression problems in order to obtain a symmetric estimator. The authors claimed that the proposal performs well in estimating a scale-free network.

Figure 3: Simulation for the Gaussian graphical model. Row I: Results for Set-up I. Row II: Results for Set-up II. Row III: Results for Set-up III. The results are for n = 1000 and p = 1500. In each panel, the x-axis displays the number of estimated edges, and the vertical gray line is the number of edges in the true network. The y-axes are as follows: Column (a): Number of correctly estimated edges; Column (b): Proportion of correctly estimated hub edges; Column (c): Proportion of correctly estimated hub nodes; Column (d): Sum of squared errors. The black solid circles are the results for HGL based on tuning parameters selected using the BIC-type criterion defined in Section 3.4. Colored lines correspond to the graphical lasso (Friedman et al., 2007); HGL with λ3 = 0.5, λ3 = 1, and λ3 = 2; and neighborhood selection (Meinshausen and Bühlmann, 2006).

We generated data under Set-ups I and III (described in Section 3.5.2) with n = 250 and p = 500, and with |H| = 10 for Set-up I. The results, averaged over 100 data sets, are displayed in Figures 4 and 5.
To obtain Figures 4 and 5, we applied Liu and Ihler (2011) using a fine grid of α values, and using the choices for βj and ϵj specified by the authors: βj = 2α/ϵj, where ϵj is a small constant specified in Liu and Ihler (2011). There are two tuning parameters in Hero and Rajaratnam (2012): (1) ρ, the value used to threshold the partial correlation matrix, and (2) d, the number of non-zero elements required for a column of the thresholded matrix to be declared a hub node. We used d = {10, 20} in Figures 4 and 5, and used a fine grid of values for ρ. Note that the value of d has no effect on the results in Figures 4(a)-(b) and 5(a)-(b), and that larger values of d tend to yield worse results in Figures 4(c) and 5(c). For Peng et al. (2009), we used a fine grid of tuning parameter values to obtain the curves shown in Figures 4 and 5. The sum of squared errors was not reported for Peng et al. (2009) and Hero and Rajaratnam (2012), since they do not directly yield an estimate of the precision matrix. As a baseline reference, the graphical lasso is included in the comparison. (In this subsection, a relatively small value of p was used due to the computations required to run the R package space, as well as the computational demands of the Liu and Ihler (2011) algorithm.)

We see from Figure 4 that HGL outperforms the competitors when the underlying network contains hub nodes. It is not surprising that Liu and Ihler (2011) yields better results than the graphical lasso, since the former approach is implemented via an iterative procedure: in each iteration, the graphical lasso is performed with an updated tuning parameter based on the estimate obtained in the previous iteration. Hero and Rajaratnam (2012) has the worst results in Figures 4(a)-(b); this is not surprising, since the purpose of Hero and Rajaratnam (2012) is to screen for hub nodes, rather than to estimate the individual edges in the network. From Figure 5, we see that the performance of HGL is comparable to that of Liu and Ihler (2011) and Peng et al. (2009) under the assumption of a scale-free network; note that this is the precise setting for which Liu and Ihler (2011)'s proposal is intended, and Peng et al. (2009) reported that their proposal performs well in this setting. In contrast, HGL is not intended for the scale-free network setting (as mentioned in the Introduction, it is intended for a setting with hub nodes). Again, Liu and Ihler (2011) and Peng et al. (2009) outperform the graphical lasso, and Hero and Rajaratnam (2012) has the worst results in Figures 5(a)-(b). Finally, we see from Figures 4 and 5 that the BIC-type criterion for HGL proposed in Section 3.4 yields good results.
Figure 4: Simulation for the Gaussian graphical model. Set-up I was applied with n = 250 and p = 500. Details of the axis labels and the solid black circles are as in Figure 3. The colored lines correspond to the graphical lasso (Friedman et al., 2007); HGL with λ3 = 1, λ3 = 2, and λ3 = 3; the hub screening procedure (Hero and Rajaratnam, 2012) with d = 10 and d = 20; the scale-free network approach (Liu and Ihler, 2011); and sparse partial correlation estimation (Peng et al., 2009).

Figure 5: Simulation for the Gaussian graphical model. Set-up III was applied with n = 250 and p = 500. Details of the axis labels and the solid black circles are as in Figure 3. The colored lines correspond to the graphical lasso (Friedman et al., 2007); HGL with λ3 = 1, λ3 = 2, and λ3 = 3; the hub screening procedure (Hero and Rajaratnam, 2012) with d = 10 and d = 20; the scale-free network approach (Liu and Ihler, 2011); and sparse partial correlation estimation (Peng et al., 2009).
4. The Hub Covariance Graph

In this section, we consider estimation of a covariance matrix under the assumption that x1, . . . , xn i.i.d. ∼ N(0, Σ); this is of interest because the sparsity pattern of Σ specifies the structure of the marginal independence graph (see, e.g., Drton and Richardson, 2003; Chaudhuri et al., 2007; Drton and Richardson, 2008). We extend the covariance estimator of Xue et al. (2012) to accommodate hub nodes.

4.1 Formulation and Algorithm

Xue et al. (2012) proposed to estimate Σ using

Σ̂ = argmin_{Σ∈S} { (1/2)∥Σ − S∥_F^2 + λ∥Σ∥1 },   (9)

where S is the empirical covariance matrix, S = {Σ : Σ ⪰ ϵI and Σ = Σ^T}, and ϵ is a small positive constant; we take ϵ = 10^-4. We extend (9) to accommodate hubs by imposing the hub penalty function (2) on Σ. This results in the hub covariance graph (HCG) optimization problem,

minimize_{Σ∈S} { (1/2)∥Σ − S∥_F^2 + P(Σ) },

which can be solved via Algorithm 1. To update Θ = Σ in Step 2(a)i, we note that

argmin_{Σ∈S} { (1/2)∥Σ − S∥_F^2 + (ρ/2)∥Σ − Σ̃ + W1∥_F^2 } = (1/(1 + ρ)) ( S + ρΣ̃ − ρW1 )+,

where (A)+ is the projection of a matrix A onto the convex cone {Σ ⪰ ϵI}. That is, if Σ_{j=1}^{p} dj uj uj^T denotes the eigen-decomposition of the matrix A, then (A)+ is defined as Σ_{j=1}^{p} max(dj, ϵ) uj uj^T. The complexity of the ADMM algorithm is O(p^3) per iteration, due to the complexity of the eigen-decomposition for updating Σ.

4.2 Simulation Study

We compare HCG to two competitors for obtaining a sparse estimate of Σ:

1. The non-convex ℓ1-penalized log-likelihood approach of Bien and Tibshirani (2011), using the R package spcov. This approach solves minimize_{Σ≻0} { log det Σ + trace(Σ^{-1}S) + λ∥Σ∥1 }.

2. The convex ℓ1-penalized approach of Xue et al. (2012), given in (9).

We first generated an adjacency matrix A as in Set-up I in Section 3.5.2, modified to have |H| = 20 hub nodes. Then Ē was generated as described in Section 3.5.2, and we set Σ equal to Ē + (0.1 − Λmin(Ē))I. Next, we generated x1, . . . , xn i.i.d. ∼ N(0, Σ). Finally, we standardized the variables to have standard deviation one. In this simulation study, we set n = 500 and p = 1000.

Figure 6 displays the results, averaged over 100 simulated data sets. We calculated the proportion of correctly estimated hub nodes as defined in Section 3.5.1 with r = 200. We used a fine grid of tuning parameters for Xue et al. (2012) in order to obtain the curves shown in each panel of Figure 6. HCG involves three tuning parameters, λ1, λ2, and λ3. We fixed λ1 = 0.2, considered three values of λ3 (each shown in a different color), and varied λ2 in order to obtain the curves shown in Figure 6. Figure 6 does not display the results for the proposal of Bien and Tibshirani (2011), due to computational constraints in the spcov R package. Instead, we compared our proposal to that of Bien and Tibshirani (2011) using n = 100 and p = 200; those results are presented in Figure 10 in Appendix D.
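To make the Σ-update of Section 4.1 concrete, here is a minimal NumPy sketch of the projection (A)+ and the resulting closed-form update; it is an illustrative sketch with hypothetical function names, not the authors' implementation.

```python
import numpy as np

def project_psd(A, eps=1e-4):
    """(A)_+ : project a symmetric matrix onto the cone {Sigma >= eps * I}
    by flooring its eigenvalues at eps."""
    d, U = np.linalg.eigh(A)
    return (U * np.maximum(d, eps)) @ U.T

def update_sigma_hcg(S, Sigma_tilde, W1, rho, eps=1e-4):
    """Sigma-update of Step 2(a)i for the hub covariance graph:
    the minimizer of 0.5||Sigma - S||_F^2 + (rho/2)||Sigma - Sigma_tilde + W1||_F^2
    over {Sigma >= eps I} is the projection of (S + rho*Sigma_tilde - rho*W1)/(1 + rho)."""
    return project_psd((S + rho * Sigma_tilde - rho * W1) / (1.0 + rho), eps)
```

As in HGL, the eigen-decomposition inside the projection is what drives the O(p^3) per-iteration cost noted above.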
Figure 6: Covariance graph simulation with n = 500 and p = 1000. Details of the axis labels are as in Figure 3. The colored lines correspond to the proposal of Xue et al. (2012), and to HCG with λ3 = 1, λ3 = 1.5, and λ3 = 2.

We see that HCG outperforms the proposals of Xue et al. (2012) (Figures 6 and 10) and Bien and Tibshirani (2011) (Figure 10). These results are not surprising, since those other methods do not explicitly model the hub nodes.

5. The Hub Binary Network

In this section, we focus on estimating a binary Ising Markov random field, which we refer to as a binary network. We refer the reader to Ravikumar et al. (2010) for an in-depth discussion of this type of graphical model and its applications. In this set-up, each entry of the n × p data matrix X takes on a value of zero or one. We assume that the observations x1, . . . , xn are i.i.d. with density

p(x, Θ) = (1/Z(Θ)) exp( Σ_{j=1}^{p} θjj xj + Σ_{1≤j<j'≤p} θjj' xj xj' ),

where Z(Θ) is the normalizing constant.

> ⟨Θ_T, S⟩ + λ1∥Z_T − diag(Z_T)∥1 + λ2∥V_T − diag(V_T)∥1 + λ3∥V_T − diag(V_T)∥1,q,

or equivalently, that ⟨Θ_{T^c}, S⟩ + λ1∥Z_{T^c}∥1 + λ2∥V_{T^c}∥1 + λ3( ∥V − diag(V)∥1,q − ∥V_T − diag(V_T)∥1,q ) > 0. Since ∥V − diag(V)∥1,q ≥ ∥V_T − diag(V_T)∥1,q, it suffices to show that

⟨Θ_{T^c}, S⟩ + λ1∥Z_{T^c}∥1 + λ2∥V_{T^c}∥1 > 0.   (17)

Note that ⟨Θ_{T^c}, S⟩ = ⟨Θ_{T^c}, S_{T^c}⟩. By the sufficient condition, Smax < λ1 and 2Smax < λ2. In addition, we have that

|⟨Θ_{T^c}, S⟩| = |⟨Θ_{T^c}, S_{T^c}⟩| = |⟨V_{T^c} + (V^T)_{T^c} + Z_{T^c}, S_{T^c}⟩| = |⟨2V_{T^c} + Z_{T^c}, S_{T^c}⟩| ≤ (2∥V_{T^c}∥1 + ∥Z_{T^c}∥1) Smax < λ2∥V_{T^c}∥1 + λ1∥Z_{T^c}∥1,

where the last inequality follows from the sufficient condition. We have shown (17), as desired.

Proof of Theorem 2 (Necessary Condition) We first present a simple lemma for proving Theorem 2.
Throughout the proof of Theorem 2, ∥·∥∞ denotes the maximal absolute element of a matrix, and ∥·∥∞,s denotes the dual norm of ∥·∥1,q.

Lemma 7 The dual representation of P̃(Θ) in (15) is

P̃*(Θ) = max_{X,Y,Λ} ⟨Λ, Θ⟩ subject to Λ + Λ^T = λ̂2 X + λ̂3 Y, ∥X∥∞ ≤ 1, ∥Λ∥∞ ≤ 1, ∥Y∥∞,s ≤ 1, Xii = 0, Yii = 0, Λii = 0 for i = 1, . . . , p,   (18)

where 1/s + 1/q = 1.

Proof We first state the dual representations of the norms in (15):

∥Z − diag(Z)∥1 = max_Λ ⟨Λ, Z⟩ subject to ∥Λ∥∞ ≤ 1, Λii = 0 for i = 1, . . . , p,
∥V − diag(V)∥1 = max_X ⟨X, V⟩ subject to ∥X∥∞ ≤ 1, Xii = 0 for i = 1, . . . , p,
∥V − diag(V)∥1,q = max_Y ⟨Y, V⟩ subject to ∥Y∥∞,s ≤ 1, Yii = 0 for i = 1, . . . , p.

Then,

P̃(Θ) = min_{V,Z} { ∥Z − diag(Z)∥1 + λ̂2∥V − diag(V)∥1 + λ̂3∥V − diag(V)∥1,q } subject to Θ = Z + V + V^T
      = min_{V,Z} max_{Λ,X,Y} { ⟨Λ, Z⟩ + λ̂2⟨X, V⟩ + λ̂3⟨Y, V⟩ } subject to ∥Λ∥∞ ≤ 1, ∥X∥∞ ≤ 1, ∥Y∥∞,s ≤ 1; Λii = Xii = Yii = 0 for i = 1, . . . , p; Θ = Z + V + V^T
      = max_{Λ,X,Y} min_{V,Z} { ⟨Λ, Z⟩ + λ̂2⟨X, V⟩ + λ̂3⟨Y, V⟩ } subject to ∥Λ∥∞ ≤ 1, ∥X∥∞ ≤ 1, ∥Y∥∞,s ≤ 1; Λii = Xii = Yii = 0 for i = 1, . . . , p; Θ = Z + V + V^T
      = max_{Λ,X,Y} ⟨Λ, Θ⟩ subject to Λ + Λ^T = λ̂2 X + λ̂3 Y, ∥X∥∞ ≤ 1, ∥Λ∥∞ ≤ 1, ∥Y∥∞,s ≤ 1, Xii = Yii = Λii = 0 for i = 1, . . . , p.

The third equality holds since the constraints on (V, Z) and on (Λ, X, Y) are both compact convex sets, and so by the minimax theorem we can swap the max and the min. The last equality follows from the fact that

min_{V,Z: Θ=Z+V+V^T} { ⟨Λ, Z⟩ + λ̂2⟨X, V⟩ + λ̂3⟨Y, V⟩ } = ⟨Λ, Θ⟩ if Λ + Λ^T = λ̂2 X + λ̂3 Y, and −∞ otherwise.

We now present the proof of Theorem 2.

Proof The optimality condition for (16) is given by

0 = −Θ^{-1} + S + λ1 Λ,   (19)

where Λ is a subgradient of P̃(Θ) in (15) and the left-hand side of the above equation is a zero matrix of size p × p. Now suppose that the Θ* that solves (19) is supported on T, i.e., Θ*_{T^c} = 0. Then for any (i, j) ∈ T^c, we have that

0 = Sij + λ1 Λ*ij,   (20)

where Λ* is a subgradient of P̃(Θ*). Note that Λ* must be an optimal solution to the optimization problem (18). Therefore, it is also a feasible solution to (18), implying that |Λ*ij + Λ*ji| ≤ λ̂2 + λ̂3 and |Λ*ij| ≤ 1. From (20), we have that Λ*ij = −Sij/λ1, and thus

λ1 ≥ λ1 max_{(i,j)∈T^c} |Λ*ij| = λ1 max_{(i,j)∈T^c} |Sij|/λ1 = Smax.
Also, recall that λ̂2 = λ2/λ1 and λ̂3 = λ3/λ1. We have that

λ2 + λ3 ≥ λ1 max_{(i,j)∈T^c} |Λ*ij + Λ*ji| = λ1 max_{(i,j)∈T^c} 2|Sij|/λ1 = 2Smax.

Hence, we obtain the desired result.

Appendix C: Some Properties of HGL

Proof of Lemma 3

Proof Let (Θ*, Z*, V*) be the solution to (6) and suppose that Z* is not a diagonal matrix. Note that Z* is symmetric, since Θ ∈ S ≡ {Θ : Θ ≻ 0 and Θ = Θ^T}. Let Ẑ = diag(Z*), the matrix that contains the diagonal elements of Z*. Also, construct V̂ as follows: V̂ij = V*ij + Z*ij/2 if i ≠ j, and V̂jj = V*jj otherwise. Then we have that Θ* = Ẑ + V̂ + V̂^T. Thus, (Θ*, Ẑ, V̂) is a feasible solution to (6). We now show that (Θ*, Ẑ, V̂) has a smaller objective than (Θ*, Z*, V*) in (6), giving us a contradiction.

Note that

λ1∥Ẑ − diag(Ẑ)∥1 + λ2∥V̂ − diag(V̂)∥1 = λ2∥V̂ − diag(V̂)∥1 = λ2 Σ_{i≠j} |V*ij + Z*ij/2| ≤ λ2∥V* − diag(V*)∥1 + (λ2/2)∥Z* − diag(Z*)∥1,

and

λ3 Σ_{j=1}^{p} ∥(V̂ − diag(V̂))j∥q ≤ λ3 Σ_{j=1}^{p} ∥(V* − diag(V*))j∥q + (λ3/2) Σ_{j=1}^{p} ∥(Z* − diag(Z*))j∥q ≤ λ3 Σ_{j=1}^{p} ∥(V* − diag(V*))j∥q + (λ3/2)∥Z* − diag(Z*)∥1,

where the last inequality follows from the fact that for any vector x ∈ R^p and q ≥ 1, ∥x∥q is a nonincreasing function of q (Gentle, 2007). Summing up the above inequalities, we get that

λ1∥Ẑ − diag(Ẑ)∥1 + λ2∥V̂ − diag(V̂)∥1 + λ3 Σ_{j=1}^{p} ∥(V̂ − diag(V̂))j∥q ≤ ((λ2 + λ3)/2)∥Z* − diag(Z*)∥1 + λ2∥V* − diag(V*)∥1 + λ3 Σ_{j=1}^{p} ∥(V* − diag(V*))j∥q < λ1∥Z* − diag(Z*)∥1 + λ2∥V* − diag(V*)∥1 + λ3 Σ_{j=1}^{p} ∥(V* − diag(V*))j∥q,

where the last inequality uses the assumption that λ1 > (λ2 + λ3)/2. We arrive at a contradiction, and therefore the result holds.

Proof of Lemma 4

Proof Let (Θ*, Z*, V*) be the solution to (6) and suppose that V* is not a diagonal matrix. Let V̂ = diag(V*), the diagonal matrix that contains the diagonal elements of V*. Also construct Ẑ as follows: Ẑij = Z*ij + V*ij + V*ji if i ≠ j, and Ẑij = Z*ij otherwise. Then we have that Θ* = V̂ + V̂^T + Ẑ. We now show that (Θ*, Ẑ, V̂) has a smaller objective value than (Θ*, Z*, V*) in (6), giving us a contradiction. We start by noting that

λ1∥Ẑ − diag(Ẑ)∥1 + λ2∥V̂ − diag(V̂)∥1 = λ1∥Ẑ − diag(Ẑ)∥1 ≤ λ1∥Z* − diag(Z*)∥1 + 2λ1∥V* − diag(V*)∥1.
27 Tan, London, Mohan, Lee, Fazel, and Witten By Holder\u2019s Inequality, we know that xT y \u2264\u2225x\u2225q\u2225y\u2225s where 1 s + 1 q = 1 and x, y \u2208Rp\u22121. Setting y = sign(x), we have that \u2225x\u22251 \u2264(p \u22121) 1 s \u2225x\u2225q. Consequently, \u03bb3 (p \u22121) 1 s \u2225V\u2217\u2212diag(V\u2217)\u22251 \u2264\u03bb3 p X j=1 \u2225(V\u2217\u2212diag(V\u2217))j\u2225q. Combining these results, we have that \u03bb1\u2225\u02c6 Z \u2212diag(\u02c6 Z)\u22251 + \u03bb2\u2225\u02c6 V \u2212diag( \u02c6 V)\u22251 + \u03bb3 p X j=1 \u2225( \u02c6 V \u2212diag( \u02c6 V))j\u2225q \u2264\u03bb1\u2225Z\u2217\u2212diag(Z\u2217)\u22251 + 2\u03bb1\u2225V\u2217\u2212diag(V\u2217)\u22251 < \u03bb1\u2225Z\u2217\u2212diag(Z\u2217)\u22251 + \u03bb2 + \u03bb3 (p \u22121) 1 s ! \u2225V\u2217\u2212diag(V\u2217)\u22251 \u2264\u03bb1\u2225Z\u2217\u2212diag(Z\u2217)\u22251 + \u03bb2\u2225V\u2217\u2212diag(V\u2217)\u22251 + \u03bb3 p X j=1 \u2225(V\u2217\u2212diag(V\u2217))j\u2225q, where we use the assumption that \u03bb1 < \u03bb2 2 + \u03bb3 2(p\u22121) 1 s . This leads to a contradiction. Proof of Lemma 6 In this proof, we consider the case when \u03bb1 > \u03bb2+\u03bb3 2 . A similar proof technique can be used to prove the case when \u03bb1 < \u03bb2+\u03bb3 2 . Proof Let f(\u0398, V, Z) denote the objective of (6) with q = 1, and (\u0398\u2217, V\u2217, Z\u2217) the optimal solution. By Lemma 3, the assumption that \u03bb1 > \u03bb2+\u03bb3 2 implies that Z\u2217is a diagonal matrix. Now let \u02c6 V = 1 2 \u0000V\u2217+ (V\u2217)T \u0001 . Then f(\u0398\u2217, \u02c6 V, Z\u2217) = \u2212log det \u0398\u2217+ \u27e8\u0398\u2217, S\u27e9+ \u03bb1\u2225Z\u2217\u2212diag(Z\u2217)\u22251 + (\u03bb2 + \u03bb3)\u2225\u02c6 V \u2212diag( \u02c6 V)\u22251 = \u2212log det \u0398\u2217+ \u27e8\u0398\u2217, S\u27e9+ \u03bb2 + \u03bb3 2 \u2225V\u2217+ V\u2217T \u2212diag(V\u2217+ V\u2217T )\u22251 \u2264 \u2212log det \u0398\u2217+ \u27e8\u0398\u2217, S\u27e9+ (\u03bb2 + \u03bb3)\u2225V\u2217\u2212diag(V\u2217)\u22251 = f(\u0398\u2217, V\u2217, Z\u2217) \u2264 f(\u0398\u2217, \u02c6 V, Z\u2217), where the last inequality follows from the assumption that (\u0398\u2217, V\u2217, Z\u2217) solves (6). By strict convexity of f, this means that V\u2217= \u02c6 V, i.e., V\u2217is symmetric. This implies that f(\u0398\u2217, V\u2217, Z\u2217) = \u2212log det \u0398\u2217+ \u27e8\u0398\u2217, S\u27e9+ \u03bb2 + \u03bb3 2 \u2225V\u2217+ V\u2217T \u2212diag(V\u2217+ V\u2217T )\u22251 = \u2212log det \u0398\u2217+ \u27e8\u0398\u2217, S\u27e9+ \u03bb2 + \u03bb3 2 \u2225\u0398\u2217\u2212diag(\u0398\u2217)\u22251 (21) = g(\u0398\u2217), 28 Learning Graphical Models With Hubs where g(\u0398) is the objective of the graphical lasso optimization problem, evaluated at \u0398, with tuning parameter \u03bb2+\u03bb3 2 . Suppose that \u02dc \u0398 minimizes g(\u0398), and \u0398\u2217\u0338= \u02dc \u0398. Then, by (21) and strict convexity of g, g(\u0398\u2217) = f(\u0398\u2217, V\u2217, Z\u2217) \u2264f( \u02dc \u0398, \u02dc \u0398/2, 0) = g( \u02dc \u0398) < g(\u0398\u2217), giving us a contradiction. Thus it must be that \u02dc \u0398 = \u0398\u2217. Appendix D: Simulation Study for Hub Covariance Graph In this section, we present the results for the simulation study described in Section 4.2 with n = 100, p = 200, and |H| = 4. We calculate the proportion of correctly estimated hub nodes with r = 40. The results are shown in Figure 10. 
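As an aside on Lemma 6 above before turning to Figure 10: the lemma states that, when q = 1 and λ1 > (λ2 + λ3)/2, the HGL estimate coincides with the graphical lasso estimate with tuning parameter (λ2 + λ3)/2. The following is a minimal sketch of that equivalent graphical lasso problem, assuming scikit-learn's graphical_lasso; the sample data, the tuning values, and the variable names are illustrative, and HGL itself is not implemented here.

```python
# Minimal illustration of the reduction in Lemma 6: when q = 1 and
# lambda1 > (lambda2 + lambda3)/2, the HGL estimate equals the graphical lasso
# estimate with tuning parameter (lambda2 + lambda3)/2.  HGL itself is not
# implemented; we only form the equivalent graphical lasso problem.
# The data and the values lam1, lam2, lam3 are illustrative.
import numpy as np
from sklearn.covariance import graphical_lasso

rng = np.random.default_rng(0)
n, p = 200, 20
X = rng.standard_normal((n, p))
S = np.cov(X, rowvar=False)           # empirical covariance matrix S

lam1, lam2, lam3 = 1.0, 0.5, 0.4      # chosen so that lam1 > (lam2 + lam3) / 2
assert lam1 > (lam2 + lam3) / 2

# Graphical lasso with tuning parameter (lambda2 + lambda3)/2; by Lemma 6 this
# is the problem whose solution the HGL estimate equals in this regime.
_, Theta_hat = graphical_lasso(S, alpha=(lam2 + lam3) / 2)
off_diag = ~np.eye(p, dtype=bool)
print("nonzero off-diagonal entries:", np.sum(np.abs(Theta_hat[off_diag]) > 1e-8))
```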
As we can see from Figure 10, our proposal outperforms Bien and Tibshirani (2011). In particular, we can see from Figure 10(c) that Bien and Tibshirani (2011) fails to identify hub nodes.

[Figure 10 appears here; its four panels plot, against the number of estimated edges, (a) the number of correctly estimated edges, (b) the proportion of correctly estimated hub edges, (c) the proportion of correctly estimated hubs, and (d) the sum of squared errors.] Figure 10: Covariance graph simulation with n = 100 and p = 200. Details of the axis labels are as in Figure 3. The colored lines correspond to the proposal of Xue et al. (2012); HCG with λ3 = 1, λ3 = 1.5, and λ3 = 2; and the proposal of Bien and Tibshirani (2011).

Appendix E: Run Time Study for the ADMM algorithm for HGL

In this section, we present a more extensive run time study for the ADMM algorithm for HGL. We ran experiments with p = 100, 200, 300 and with n = p/2 on a 2.26GHz Intel Core 2 Duo machine. Results averaged over 10 replications are displayed in Figures 11(a)-(b), where the panels depict the run time and number of iterations required for the algorithm to converge, as a function of λ1, with λ2 = 0.5 and λ3 = 2 fixed. The number of iterations required for the algorithm to converge is computed as the total number of iterations in Step 2 of Algorithm 1. We see from Figure 11(a) that as p increases from 100 to 300, the run times increase substantially, but never exceed several minutes. Note that these results are without using the block diagonal condition in Theorem 1.

[Figure 11 appears here; both panels are plotted against λ1 on a log scale, with separate curves for p = 100, 200, 300.] Figure 11: (a): Run time (in seconds) of the ADMM algorithm for HGL, as a function of λ1, for fixed values of λ2 and λ3. (b): The total number of iterations required for the ADMM algorithm for HGL to converge, as a function of λ1. All results are averaged over 10 simulated data sets. These results are without using the block diagonal condition in Theorem 1.

Appendix F: Update for Θ in Step 2(a)i for Binary Ising Model using Barzilai-Borwein Method

We consider updating Θ in Step 2(a)i of Algorithm 1 for binary Ising model.
Let h(\u0398) = \uf8f1 \uf8f2 \uf8f3\u2212 p X j=1 p X j\u2032=1 \u03b8jj\u2032(XT X)jj\u2032 + p X i=1 p X j=1 log \uf8eb \uf8ed1 + exp \uf8ee \uf8f0\u03b8jj + X j\u2032\u0338=j \u03b8jj\u2032xij\u2032 \uf8f9 \uf8fb \uf8f6 \uf8f8+ \u03c1 2\u2225\u0398 \u2212\u02dc \u0398 + W1\u22252 F \uf8fc \uf8fd \uf8fe. Then, the optimization problem for Step 2(a)i of Algorithm 1 is minimize \u0398\u2208S h(\u0398), (22) where S = {\u0398 : \u0398 = \u0398T }. In solving (22), we will treat \u0398 \u2208S as an implicit constraint. The Barzilai-Borwein method is a gradient descent method with the step-size chosen to mimic the secant condition of the BFGS method (see, e.g., Barzilai and Borwein, 1988; Nocedal and Wright, 2006). The convergence of the Barzilai-Borwein method for unconstrained minimization using a non-monotone line search was shown in Raydan (1997). Recent convergence results for a quadratic cost function can be found in Dai (2013). To implement the Barzilai-Borwein method, we need to evaluate the gradient of h(\u0398). Let \u2207h(\u0398) be a p \u00d7 p matrix, where the (j, j\u2032) entry is the gradient of h(\u0398) with respect to \u03b8jj\u2032, computed under the constraint \u0398 \u2208S, that is, \u03b8jj\u2032 = \u03b8j\u2032j. Then, (\u2207h(\u0398))jj = \u2212(XT X)jj + n X i=1 \" exp(\u03b8jj + P j\u2032\u0338=j \u03b8jj\u2032xij\u2032) 1 + exp(\u03b8jj + P j\u2032\u0338=j \u03b8jj\u2032xij\u2032) # + \u03c1(\u03b8jj \u2212\u02dc \u03b8jj + (W1)jj), 30 Learning Graphical Models With Hubs and (\u2207h(\u0398))jj\u2032 = \u22122(XT X)jj + 2\u03c1(\u03b8jj\u2032 \u2212\u02dc \u03b8jj\u2032 + (W1)jj\u2032) + n X i=1 \" xij\u2032 exp(\u03b8jj + P j\u2032\u0338=j \u03b8jj\u2032xij\u2032) 1 + exp(\u03b8jj + P j\u2032\u0338=j \u03b8jj\u2032xij\u2032) + xij exp(\u03b8j\u2032j\u2032 + P j\u0338=j\u2032 \u03b8jj\u2032xij) 1 + exp(\u03b8j\u2032j\u2032 + P j\u0338=j\u2032 \u03b8jj\u2032xij) # . A simple implementation of the Barzilai-Borwein algorithm for solving (22) is detailed in Algorithm 2. We note that the Barzilai-Borwein algorithm can be improved (see, e.g., Barzilai and Borwein, 1988; Wright et al., 2009). We leave such improvement for future work. Algorithm 2 Barzilai-Borwein Algorithm for Solving (22). 1. Initialize the parameters: (a) \u03981 = I and \u03980 = 2I. (b) constant \u03c4 > 0. 2. Iterate until the stopping criterion \u2225\u0398t\u2212\u0398t\u22121\u22252 F \u2225\u0398t\u22121\u22252 F \u2264\u03c4 is met, where \u0398t is the value of \u0398 obtained at the tth iteration: (a) \u03b1t = trace \u0002 (\u0398t \u2212\u0398t\u22121)T (\u0398t \u2212\u0398t\u22121) \u0003 /trace \u0002 (\u0398t \u2212\u0398t\u22121)T (\u2207h(\u0398t) \u2212\u2207h(\u0398t\u22121)) \u0003 . (b) \u0398t+1 = \u0398t \u2212\u03b1t\u2207h(\u0398t). 31 Tan, London, Mohan, Lee, Fazel, and Witten", "introduction": "Graphical models are used to model a wide variety of systems, such as gene regulatory networks and social interaction networks. A graph consists of a set of p nodes, each rep- resenting a variable, and a set of edges between pairs of nodes. The presence of an edge between two nodes indicates a relationship between the two variables. In this manuscript, we consider two types of graphs: conditional independence graphs and marginal indepen- dence graphs. In a conditional independence graph, an edge connects a pair of variables if and only if they are conditionally dependent\u2014dependent conditional upon the other vari- ables. 
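Returning briefly to Algorithm 2 above (the Barzilai-Borwein update for the binary Ising model), the following is a minimal runnable sketch of that iteration. The placeholder quadratic objective, the dimensions, and all variable names are illustrative; in practice the gradient ∇h(Θ) from Appendix F would be supplied, and a production implementation would add the usual safeguards.

```python
# A minimal sketch of the Barzilai-Borwein iteration in Algorithm 2 above,
# applied to a generic smooth objective.  The Ising-model h(Theta) and its
# gradient from Appendix F would be plugged in via grad_h; the quadratic used
# here is only a placeholder to make the sketch runnable.
import numpy as np

def barzilai_borwein(grad_h, p, tau=1e-8, max_iter=500):
    Theta_prev = 2.0 * np.eye(p)          # Theta_0 = 2I, as in step 1(a)
    Theta = np.eye(p)                     # Theta_1 = I
    for _ in range(max_iter):
        g, g_prev = grad_h(Theta), grad_h(Theta_prev)
        dTheta, dG = Theta - Theta_prev, g - g_prev
        # Step 2(a): BB step size mimicking the secant condition (no safeguards).
        alpha = np.trace(dTheta.T @ dTheta) / np.trace(dTheta.T @ dG)
        Theta_prev, Theta = Theta, Theta - alpha * g          # step 2(b)
        # Stopping criterion from step 2: relative change in squared Frobenius norm.
        if np.linalg.norm(Theta - Theta_prev, "fro") ** 2 \
                <= tau * np.linalg.norm(Theta_prev, "fro") ** 2:
            break
    return Theta

# Placeholder smooth objective h(Theta) = 0.5 * ||Theta - A||_F^2 with gradient
# Theta - A, standing in for the Ising terms plus the rho/2 augmented term.
p = 5
A = np.diag(np.arange(1.0, p + 1))
Theta_hat = barzilai_borwein(lambda T: T - A, p)
print(np.round(Theta_hat, 3))             # should be close to A
```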
In a marginal independence graph, two nodes are joined by an edge if and only if they are marginally dependent\u2014dependent without conditioning on the other variables. In recent years, many authors have studied the problem of learning a graphical model in the high-dimensional setting, in which the number of variables p is larger than the number of observations n. Let X be a n \u00d7 p matrix, with rows x1, . . . , xn. Throughout the rest of the text, we will focus on three speci\ufb01c types of graphical models: 1. A Gaussian graphical model, where x1, . . . , xn i.i.d. \u223cN(0, \u03a3). In this setting, (\u03a3\u22121)jj\u2032 = 0 for some j \u0338= j\u2032 if and only if the jth and j\u2032th variables are conditionally independent (Mardia et al., 1979); therefore, the sparsity pattern of \u03a3\u22121 determines the conditional independence graph. 2. A Gaussian covariance graph model, where x1, . . . , xn i.i.d. \u223cN(0, \u03a3). Then \u03a3jj\u2032 = 0 for some j \u0338= j\u2032 if and only if the jth and j\u2032th variables are marginally independent. Therefore, the sparsity pattern of \u03a3 determines the marginal independence graph. 3. A binary Ising graphical model, where x1, . . . , xn are i.i.d. with density function p(x, \u0398) = 1 Z(\u0398) exp \uf8ee \uf8f0 p X j=1 \u03b8jjxj + X 1\u2264j n, many authors have proposed applying an \u21131 penalty to the parameter encoding each edge, in order to encourage sparsity. For instance, such an approach is taken by Yuan and Lin (2007a), Friedman et al. (2007), Rothman et al. (2008), and Yuan (2008) in the Gaussian graphical model; El Karoui (2008), Bickel and Levina (2008), Rothman et al. (2009), Bien and Tibshirani (2011), Cai and Liu (2011), and Xue et al. (2012) in the covariance graph model; and Lee et al. (2007), H\u00a8 o\ufb02ing and Tibshirani (2009), and Ravikumar et al. (2010) in the binary model. However, applying an \u21131 penalty to each edge can be interpreted as placing an inde- pendent double-exponential prior on each edge. Consequently, such an approach implicitly assumes that each edge is equally likely and independent of all other edges; this corre- sponds to an Erd\u02dd os-R\u00b4 enyi graph in which most nodes have approximately the same number 2 Learning Graphical Models With Hubs of edges (Erd\u02dd os and R\u00b4 enyi, 1959). This is unrealistic in many real-world networks, in which we believe that certain nodes (which, unfortunately, are not known a priori) have a lot more edges than other nodes. An example is the network of webpages in the World Wide Web, where a relatively small number of webpages are connected to many other webpages (Barab\u00b4 asi and Albert, 1999). A number of authors have shown that real-world networks are scale-free, in the sense that the number of edges for each node follows a power-law distribution; examples include gene-regulatory networks, social networks, and networks of collaborations among scientists (among others, Barab\u00b4 asi and Albert, 1999; Barab\u00b4 asi, 2009; Liljeros et al., 2001; Jeong et al., 2001; Newman, 2000; Li et al., 2005). More recently, Hao et al. (2012) have shown that certain genes, referred to as super hubs, regulate hundreds of downstream genes in a gene regulatory network, resulting in far denser connections than are typically seen in a scale-free network. In this paper, we refer to very densely-connected nodes, such as the \u201csuper hubs\u201d con- sidered in Hao et al. (2012), as hubs. 
When we refer to hubs, we have in mind nodes that are connected to a very substantial number of other nodes in the network\u2014and in partic- ular, we are referring to nodes that are much more densely-connected than even the most highly-connected node in a scale-free network. An example of a network containing hub nodes is shown in Figure 1. Here we propose a convex penalty function for estimating graphs containing hubs. Our formulation simultaneously identi\ufb01es the hubs and estimates the entire graph. The penalty function yields a convex optimization problem when combined with a convex loss function. We consider the application of this hub penalty function in modeling Gaussian graphical models, covariance graph models, and binary Ising models. Our formulation does not require that we know a priori which nodes in the network are hubs. In related work, several authors have proposed methods to estimate a scale-free Gaussian graphical model (Liu and Ihler, 2011; Defazio and Caetano, 2012). However, those methods do not model hub nodes\u2014the most highly-connected nodes that arise in a scale-free network are far less connected than the hubs that we consider in our formulation. Under a di\ufb00erent framework, some authors proposed a screening-based procedure to identify hub nodes in the context of Gaussian graphical models (Hero and Rajaratnam, 2012; Firouzi and Hero, 2013). Our proposal outperforms such approaches when hub nodes are present (see discussion in Section 3.5.4). In Figure 1, the performance of our proposed approach is shown in a toy example in the context of a Gaussian graphical model. We see that when the true network contains hub nodes (Figure 1(a)), our proposed approach (Figure 1(b)) is much better able to recover the network than is the graphical lasso (Figure 1(c)), a well-studied approach that applies an \u21131 penalty to each edge in the graph (Friedman et al., 2007). We present the hub penalty function in Section 2. We then apply it to the Gaussian graphical model, the covariance graph model, and the binary Ising model in Sections 3, 4, and 5, respectively. In Section 6, we apply our approach to a webpage data set and a gene expression data set. We close with a discussion in Section 7." }, { "url": "http://arxiv.org/abs/1307.5339v1", "title": "The Cluster Graphical Lasso for improved estimation of Gaussian graphical models", "abstract": "We consider the task of estimating a Gaussian graphical model in the\nhigh-dimensional setting. The graphical lasso, which involves maximizing the\nGaussian log likelihood subject to an l1 penalty, is a well-studied approach\nfor this task. We begin by introducing a surprising connection between the\ngraphical lasso and hierarchical clustering: the graphical lasso in effect\nperforms a two-step procedure, in which (1) single linkage hierarchical\nclustering is performed on the variables in order to identify connected\ncomponents, and then (2) an l1-penalized log likelihood is maximized on the\nsubset of variables within each connected component. In other words, the\ngraphical lasso determines the connected components of the estimated network\nvia single linkage clustering. Unfortunately, single linkage clustering is\nknown to perform poorly in certain settings. Therefore, we propose the cluster\ngraphical lasso, which involves clustering the features using an alternative to\nsingle linkage clustering, and then performing the graphical lasso on the\nsubset of variables within each cluster. 
We establish model selection\nconsistency for this technique, and demonstrate its improved performance\nrelative to the graphical lasso in a simulation study, as well as in\napplications to an equities data set, a university webpage data set, and a gene\nexpression data set.", "authors": "Kean Ming Tan, Daniela Witten, Ali Shojaie", "published": "2013-07-19", "updated": "2013-07-19", "primary_cat": "stat.ML", "cats": [ "stat.ML", "stat.ME" ], "main_content": "We assume that the columns of X have been standardized to have mean zero and variance one. Let \u02dc S denote the p \u00d7 p matrix whose elements take the form \u02dc Sjj\u2032 = |XT j Xj\u2032|/n = |Sjj\u2032| where Xj is the jth column of X. Theorem 2. Let C1, . . . , CK denote the clusters that result from performing single linkage hierarchical clustering (SLC) using similarity matrix \u02dc S, and cutting the resulting dendrogram at a height of 0 \u2264\u03bb \u22641. Let D1, . . . , DR denote the connected components of the graphical lasso solution with tuning parameter \u03bb. Then, K = R, and there exists a permutation \u03c0 such that Ck = D\u03c0(k) for k = 1, . . . , K. 4 Theorem 2, which is proven in the Appendix, establishes a surprising connection between two seemingly unrelated techniques: the graphical lasso and SLC. The connected components of the graphical lasso solution are identical to the clusters obtained by performing SLC based on the similarity matrix \u02dc S. Theorem 2 refers to cutting a dendrogram that results from performing SLC using a similarity matrix. This concept is made clear in Figure 1. In this example, \u02dc S is given by \uf8ee \uf8ef \uf8ef \uf8ef \uf8ef \uf8ef \uf8ef \uf8ef \uf8f0 1 0.8 0.6 0.3 0.8 1 0.5 0.2 0.6 0.5 1 0.1 0.3 0.2 0.1 1 \uf8f9 \uf8fa \uf8fa \uf8fa \uf8fa \uf8fa \uf8fa \uf8fa \uf8fb . (2) For instance, cutting the dendrogram at a height of \u03bb = 0.79 results in three clusters, {1, 2}, {3}, and {4}. Theorem 2 further indicates that these are the same as the connected components that result from applying the graphical lasso to S with tuning parameter \u03bb. Figure 1: SLC is performed on the similarity matrix (2). The resulting dendrogram is shown, as are the clusters that result from cutting the dendrogram at various heights. 5 3 The cluster graphical lasso 3.1 A simple alternative to SLC Motivated by Theorem 2, as well as by the fact that the clusters identi\ufb01ed by SLC tend to have an undesirable chain structure (Hastie et al. 2009), we now explore an alternative approach, in which we perform clustering before applying the graphical lasso to the set of features within each cluster. The cluster graphical lasso (CGL) is presented in Algorithm 1. We partition the features Algorithm 1 Cluster graphical lasso (CGL) 1. Let C1, . . . , CK be the clusters obtained by performing a clustering method of choice based on the similarity matrix \u02dc S. The kth cluster contains |Ck| features. 2. For k = 1, . . . , K: (a) Let Sk be the empirical covariance matrix for the features in the kth cluster. Here, Sk is a |Ck| \u00d7 |Ck| matrix. (b) Solve the graphical lasso problem (1) using the covariance matrix Sk with a given value of \u03bbk. Let \u02c6 \u0398k denote the graphical lasso estimate. 3. Combine the K resulting graphical lasso estimates into a p \u00d7 p matrix \u02c6 \u0398 that is block diagonal with blocks \u02c6 \u03981, . . . , \u02c6 \u0398K. 
into K clusters based on \u02dc S, and then perform the graphical lasso estimation procedure on the subset of variables within each cluster. Provided that the true network has several connected components, and that the clustering technique that we use in Step 1 is better than SLC, we expect CGL to outperform graphical lasso. Furthermore, we note that by Theorem 2, the usual graphical lasso is a special case of Algorithm 1, in which the clusters in Step 1 are obtained by cutting the SLC dendrogram at a height \u03bb, and in which \u03bb1 = . . . = \u03bbK = \u03bb in Step 2(a). The advantage of CGL over the graphical lasso is two-fold. 1. As mentioned earlier, SLC often performs poorly, often resulting in estimated graphs with 6 one large connected component, and many very small ones. Therefore, identifying the connected components using a better clustering procedure may yield improved results. 2. As revealed by Theorems 1 and 2, the graphical lasso e\ufb00ectively couples two operations using a single tuning parameter \u03bb: identi\ufb01cation of the connected components in the network estimate, and identi\ufb01cation of the edge set within each connected component. Therefore, in order for the graphical lasso to yield a solution with many connected components, each connected component must typically be extremely sparse. CGL allows for these two operations to be decoupled, often to advantage. 3.2 Interpretation of CGL as a penalized log likelihood problem Consider the optimization problem minimize \u0398 {\u2212log det \u0398 + tr(S\u0398) + X j\u0338=j\u2032 wjj\u2032|\u0398jj\u2032|}, (3) where wjj\u2032 = \uf8f1 \uf8f4 \uf8f4 \uf8f2 \uf8f4 \uf8f4 \uf8f3 \u03bbk if j, j\u2032 \u2208Ck \u221e if j \u2208Ck, j\u2032 \u2208Ck\u2032, k \u0338= k\u2032 . By inspection, the solution to this problem is the CGL network estimate. In other words, the CGL procedure amounts to solving a penalized log likelihood problem in which we impose an arbitrarily large penalty on |\u0398jj\u2032| if the jth and j\u2032th features are in di\ufb00erent clusters. In contrast, if wjj\u2032 = \u03bb in (3), then this amounts to the graphical lasso optimization problem (1). 3.3 Tuning parameter selection CGL involves several tuning parameters: the number of clusters K and the sparsity parameters \u03bb1, . . . , \u03bbK. It is well-known that selecting tuning parameters in unsupervised set7 tings is a challenging problem (for an overview and several past proposals, see e.g. Milligan & Cooper 1985, Gordon 1996, Tibshirani et al. 2001, Hastie et al. 2009, Meinshausen & Buhlmann 2010). Algorithm 2 outlines an approach for selecting K. It involves leaving out random elements from the matrix \u02dc S and performing clustering. The clusters obtained are then used to impute the left-out elements, and the corresponding mean squared error is computed. Roughly speaking, the optimal K is that for which the mean squared error is smallest. This is related to past approaches in the literature for performing tuning parameter selection in the unsupervised setting by recasting the unsupervised problem as a supervised one (see e.g. Wold 1978, Owen & Perry 2009, Witten et al. 2009). The numerical investigation in Section 5 indicates that our algorithm results in reasonable estimates of the number of connected components, and that the performance of CGL is not very sensitive to the value of K. In Corollary 3, we propose a choice of \u03bb1, . . . , \u03bbK that guarantees consistent recovery of the connected components. 
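To make Algorithm 1 concrete, here is a minimal sketch of CGL, assuming SciPy's hierarchical clustering and scikit-learn's graphical_lasso; the average-linkage choice, the single tuning parameter lam shared across clusters, and the synthetic data are illustrative conveniences, not prescriptions from the paper.

```python
# A minimal sketch of Algorithm 1 (CGL): cluster the features using the
# similarity matrix S_tilde = |X^T X| / n, fit a graphical lasso within each
# cluster, and assemble a block-diagonal precision estimate.
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster
from scipy.spatial.distance import squareform
from sklearn.covariance import graphical_lasso

def cluster_graphical_lasso(X, K, lam):
    n, p = X.shape
    X = (X - X.mean(0)) / X.std(0)                     # standardize the columns
    S_tilde = np.abs(X.T @ X) / n                      # similarity matrix
    # Step 1: hierarchical clustering on the dissimilarity 1 - S_tilde.
    D = 1.0 - S_tilde
    np.fill_diagonal(D, 0.0)
    labels = fcluster(linkage(squareform(D, checks=False), method="average"),
                      t=K, criterion="maxclust")
    # Steps 2-3: graphical lasso within each cluster, assembled block-diagonally.
    Theta_hat = np.zeros((p, p))
    for k in np.unique(labels):
        idx = np.where(labels == k)[0]
        if len(idx) == 1:                              # singleton cluster
            Theta_hat[idx[0], idx[0]] = 1.0 / np.cov(X[:, idx[0]])
            continue
        S_k = np.cov(X[:, idx], rowvar=False)
        _, Theta_k = graphical_lasso(S_k, alpha=lam)
        Theta_hat[np.ix_(idx, idx)] = Theta_k
    return Theta_hat

# Example usage on synthetic data (illustrative only).
rng = np.random.default_rng(1)
X = rng.standard_normal((200, 30))
Theta_hat = cluster_graphical_lasso(X, K=3, lam=0.2)
```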
4 Consistency of cluster graphical lasso In this section, we establish that CGL consistently recovers the connected components of the underlying graph, as well as its edge set. A number of authors have shown consistency of the graphical lasso solution for di\ufb00erent matrix norms (Rothman et al. 2008, Lam & Fan 2009, Cai et al. 2011). Lam & Fan (2009) further showed that under certain conditions, the graphical lasso solution is sparsistent, i.e., zero entries of the inverse covariance matrix are correctly estimated with probability tending to one. Lam & Fan (2009) also showed that there is no choice of \u03bb that can simultaneously achieve the optimal rate of sparsistency and consistency for estimating \u03a3\u22121, unless the number of non-zero elements in the o\ufb00-diagonal entries is no larger than O(p). In a more recent work, Ravikumar et al. (2011) studied the graphical lasso estimator under a variety of tail conditions, and established that the 8 Algorithm 2 Selecting the number of clusters K 1. Repeat the following procedure T times: (a) Let M be a set that contains p(p \u22121)/2T elements of the form (i, j), where (i, j) is drawn randomly from {(i, j) : i, j \u2208{1, . . . , p}, i < j}. Augment the set M such that if (i, j) \u2208M, then (j, i) \u2208M. We refer to M as a set of missing elements. (b) Construct a p \u00d7 p matrix, \u02dc S\u2217, for which the elements in M are removed and are replaced by taking the average of the corresponding row and column means of the non-missing elements in \u02dc S: \u02dc S\u2217 ij = \uf8f1 \uf8f4 \uf8f4 \uf8f2 \uf8f4 \uf8f4 \uf8f3 \u02dc Sij if (i, j) / \u2208M 0.5 \uf8eb \uf8edX j\u2032\u2208Mc i \u02dc Sij\u2032/|Mc i| + X i\u2032\u2208Mc j \u02dc Si\u2032j/|Mc j| \uf8f6 \uf8f8 if (i, j) \u2208M , (4) where Mc i = {j : (i, j) / \u2208M} and |Mc i| is the cardinality of Mc i. (c) For each value of K under consideration: i. Perform the clustering method of choice based on the similarity matrix \u02dc S\u2217. Let C1, . . . , CK denote the clusters obtained. ii. Construct a p \u00d7 p matrix B in which each element is imputed using the block structure of \u02dc S\u2217based on the clusters C1, . . . , CK obtained: Bij = \uf8f1 \uf8f4 \uf8f4 \uf8f4 \uf8f2 \uf8f4 \uf8f4 \uf8f4 \uf8f3 P i\u2032,j\u2032\u2208Ck,i\u2032\u0338=j\u2032 \u02dc S\u2217 i\u2032j\u2032 |Ck| \u00d7 (|Ck| \u22121) if i, j \u2208Ck and i \u0338= j P k\u0338=k\u2032 P i\u2032\u2208Ck,j\u2032\u2208Ck\u2032 \u02dc S\u2217 i\u2032j\u2032 p2 \u2212P k |Ck|2 if i \u2208Ck, j \u2208Ck\u2032, and k \u0338= k\u2032 , (5) iii. Calculate the mean squared error as follows: X (i,j)\u2208M ( \u02dc Sij \u2212Bij)2/|M|. (6) 2. For each value of K that was considered in Step 1(c), calculate mK, the mean of quantity (6) over the iterations, as well as sK, its standard error. 3. Identify the set {K : mK \u2264mK+1 + 1.5 \u00d7 sK+1}. Select the smallest value in this set. 9 procedure correctly identi\ufb01es the structure of the graph, if an incoherence assumption holds on the Hessian of the inverse covariance matrix, and if the minimum non-zero entry of the inverse covariance matrix is su\ufb03ciently large. We will restate these conditions more precisely in Theorem 4. Here, we focus on model selection consistency of CGL, in the setting where the inverse covariance matrix is block diagonal. 
To establish the model selection consistency of CGL, we need to show that (i) CGL correctly identi\ufb01es the connected components of the graph, and (ii) it correctly identi\ufb01es the set of edges (i.e. the set of non-zero values of the inverse covariance matrix) within each of the connected components. More speci\ufb01cally, we \ufb01rst show that CGL with clusters obtained from performing SLC, average linkage hierarchical clustering (ALC), or complete linkage hierarchical clustering (CLC) based on \u02dc S consistently identi\ufb01es the connected components of the graph. Next, we adapt the results of Ravikumar et al. (2011) on model selection consistency of graphical lasso in order to establish the rates of convergence of the CGL estimate. As we will show below, our results highlight the potential advantages of CGL in the settings where the underlying inverse covariance matrix is block diagonal (i.e. the graph consists of multiple connected components). As a byproduct, we also address the problem of determining the appropriate set of tuning parameters for penalized estimation of the inverse covariance matrix in high dimensions: given knowledge of K, the number of connected components in the graph, we suggest a choice of \u03bb1, . . . , \u03bbK for CGL that leads to consistent identi\ufb01cation of the connected components in the underlying network. In the context of the graphical lasso, Banerjee et al. (2008) have suggested a choice of \u03bb such that the probability of adding edges between two disconnected components is bounded by \u03b1, given by \u03bb = (max i 0 such that c2t2 > 2. Then performing SLC, ALC, or CLC with similarity matrix \u02dc S satis\ufb01es P(\u2203k : \u02c6 Ck \u0338= Ck) \u2264c1 p2\u2212c2t2. 11 Lemma 1 establishes the consistency of identi\ufb01cation of connected components by performing hierarchical clustering using SLC, ALC, or CLC, provided that n = \u2126(log p) as n, p \u2192\u221e, and provided that no within-block element of \u03a3 is too small in absolute value. B\u00a8 uhlmann et al. (2012) also commented on the consistency of hierarchical clustering. Let \u02dc S1, . . . , \u02dc SK denote the K blocks of \u02dc S corresponding to the features in \u02c6 C1, . . . , \u02c6 CK. In other words, \u02dc Sk is a | \u02c6 Ck| \u00d7 | \u02c6 Ck| matrix. The following corollary on selecting the tuning parameter \u03bb1, . . . , \u03bbK for CGL is a direct consequence of Lemma 1 and Theorem 1. Corollary 3. Assume that the diagonal elements of \u03a3 are bounded and that min i,j\u2208Ck:k=1,...,K |\u03a3ij| = \u2126 r log p n ! . Let \u00af \u03bbk be the smallest value that cuts the dendrogram resulting from applying SLC to \u02dc Sk into two clusters. Performing CGL with SLC, ALC, or CLC and penalty parameter \u03bbk \u2208[0, \u00af \u03bbk) for k = 1, . . . , K leads to consistent identi\ufb01cation of the K connected components if n = \u2126(log p) as n, p \u2192\u221e. Corollary 3 implies that one can consistently recover the K connected components by (a) performing hierarchical clustering based on \u02dc S to obtain K clusters and (b) choosing the tuning parameter \u03bbk \u2208[0, \u00af \u03bbk) in Step 2(b) of Algorithm 1 for each of the K clusters. However, Corollary 3 does not guarantee that this set of tuning parameters \u03bb1, . . . , \u03bbK will identify the correct edge set within each connected component. We now establish such a result. 
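A small Monte Carlo sketch of the clustering-consistency statement above (Lemma 1): with a block-diagonal covariance whose within-block entries are bounded away from zero, hierarchical clustering on S̃ recovers the true blocks in most replications. The block sizes, the within-block correlation of 0.5, the choice of average linkage, and the number of replications are all illustrative choices.

```python
# Monte Carlo check of block recovery by hierarchical clustering on S_tilde.
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster
from scipy.spatial.distance import squareform

rng = np.random.default_rng(2)
K, block, n = 3, 10, 200
p = K * block
Sigma = np.zeros((p, p))
for k in range(K):                         # equicorrelated blocks, rho = 0.5
    idx = slice(k * block, (k + 1) * block)
    Sigma[idx, idx] = 0.5
np.fill_diagonal(Sigma, 1.0)
true_labels = np.repeat(np.arange(K), block)

def recovered(X):
    # Columns already have unit variance by construction, so we skip standardizing.
    S_tilde = np.abs(X.T @ X) / X.shape[0]
    D = 1.0 - S_tilde
    np.fill_diagonal(D, 0.0)
    labels = fcluster(linkage(squareform(D, checks=False), method="average"),
                      t=K, criterion="maxclust")
    # Exact block recovery up to a relabelling of the clusters.
    pure = all(len(set(labels[true_labels == k])) == 1 for k in range(K))
    distinct = len({labels[true_labels == k][0] for k in range(K)}) == K
    return pure and distinct

hits = sum(recovered(rng.multivariate_normal(np.zeros(p), Sigma, size=n))
           for _ in range(50))
print("empirical recovery frequency:", hits / 50)
```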
4.2 Model selection consistency of CGL The following theorem combines Lemma 1 with results on model selection consistency of the graphical lasso (Ravikumar et al. 2011) in order to establish the model selection consistency of CGL. We start by introducing some notation and stating the assumptions needed. 12 Let \u0398 = \u03a3\u22121. \u0398 speci\ufb01es an undirected graph with K connected components; the kth connected component has edge set Ek = {(i, j) \u2208Ck : i \u0338= j, \u03b8ij \u0338= 0}. Let E = \u222aK k=1Ek. Also, let Fk = Ek \u222a{(j, j) : j \u2208Ck}; this is the union of the edge set and the diagonal elements for the kth connected component. De\ufb01ne dk to be the maximum degree in the kth connected component, and let d = maxk dk. Also, de\ufb01ne pk = |Ck|, pmin = mink pk, and pmax = maxk pk. Assumption 1 involves the Hessian of Equation 1, which takes the form \u0393(k) = \u22072 \u0398k{\u2212log det(\u0398k)} = \u03a3k \u2297\u03a3k, where \u2297is the Kronecker matrix product, and \u0393(k) is a p2 k \u00d7 p2 k matrix. With some abuse of notation, we de\ufb01ne \u0393(k) AB as the |A| \u00d7 |B| submatrix of \u0393(k) whose rows and columns are indexed by A and B respectively, i.e., \u0393(k) AB = [\u03a3k \u2297\u03a3k]AB \u2208R|A|\u00d7|B|. Assumption 1. There exists some \u03b1 \u2208(0, 1] such that for all k = 1, . . . , K, max e\u2208{(Ck\u00d7Ck)\\Fk} \u2225\u0393(k) eFk(\u0393(k) FkFk)\u22121\u22251 \u2264(1 \u2212\u03b1). Assumption 2. For \u03b8min := min(i,j)\u2208E |\u03b8ij|, the minimum non-zero o\ufb00-diagonal element of the inverse covariance matrix, \u03b8min = \u2126 r log p n ! as n and p grow. We now present our main theorem. It relies heavily on Theorem 2 of Ravikumar et al. (2011), to which we refer the reader for details. Theorem 4. Assume that \u03a3 satis\ufb01es the conditions in Lemma 1. Further, assume that Assumptions 1 and 2 are satis\ufb01ed, and that \u2225\u03a3k\u2225\u221eand \u2225(\u0393(k) Fk,Fk)\u22121\u2225\u221eare bounded, where 13 \u2225A\u2225\u221edenotes the \u2113\u221enorm of A. Assume K = O(pmin), \u03c4 > 3, where \u03c4 is a user-de\ufb01ned parameter, and n = \u2126 \u0000log(p) \u2228d2 log(pmax) \u0001 as n, pmin \u2192\u221e. Let \u02c6 E denote the edge set from the CGL estimate using SLC, ALC, or CLC with K clusters and \u03bbk = 8 \u03b1 q c2(\u03c4 log pk+log 4) n . Then \u02c6 E = E with probability at least 1 \u2212c1 p2\u2212c2t2 \u2212K p2\u2212\u03c4 min . Remark 1. Theorem 4 states that CGL with SLC can improve upon existing model selection consistency results for the graphical lasso. Recall that Theorem 2 indicates that graphical lasso is a two-step procedure in which SLC precedes precision matrix estimation within each cluster. However, the two steps in the graphical lasso procedure involve a single tuning parameter, \u03bb. The improved rates for CGL in Theorem 4 are achieved by decoupling the choice of tuning parameters in the two steps. Remark 2. Note that the assumption of Lemma 1 that \u03a3ij \u0338= 0 for all (i, j) \u2208Ck does not require the underlying conditional independence graph for that connected component to be fully connected. For example, consider the case where connected components of the underlying graph are forests, with non-zero partial correlations on each edge of the graph. Note that there exists a unique path Pij of length q between each pair of elements i and j in the same block. Then by Theorem 1 of Jones & West (2005), \u03a3ij = c \u03b8ik1\u03b8k1k2 . . . 
\u03b8kq\u22121j, for some constant c depending on the path Pij and the determinant of the precision matrix. By the positive de\ufb01niteness of precision matrix, and the fact that partial correlations along the edges of the graph are non-zero, it follows that \u03a3ij \u0338= 0 for all i and j in the same block. Remark 3. The convergence rates in Theorem 4 can result in improvements over existing rates for the graphical lasso estimator. For instance, suppose the graph consists of K = ma connected components of size pk = mb each, for positive integers m, a, and b such that m > 1 and a < b. Also, assume that d = m. Then, based on the results of Ravikumar et al. 14 (2011), consistency of the graphical lasso requires n = \u2126(m2(a+b) log(m)) samples, whereas Theorem 4 implies that n = \u2126(bm2 log(m)) samples su\ufb03ce for consistent estimation using CGL. Remark 3 is not surprising. The reduction in the required sample size for CGL is achieved from the extra information on the number of connected components in the graph. This result suggests that decoupling the identi\ufb01cation of connected components and estimation of the edges can result in improved estimation of Gaussian graphical models in the settings where \u03a3\u22121 is block diagonal. The results in the next two sections provide empirical evidence in support of these \ufb01ndings. 5 Simulation study We consider two simulation settings: \u03a3\u22121 is block diagonal with two blocks, and \u03a3\u22121 is approximately block diagonal with two blocks. Algorithm 3 Generation of sparse inverse covariance matrix \u03a3\u22121 1. Let C1, . . . , CK denote a partition of the p features. 2. For k = 1, . . . , K, construct a |Ck| \u00d7 |Ck| matrix Ak as follows. For each j < j\u2032, (Ak)jj\u2032 = \uf8f1 \uf8f4 \uf8f2 \uf8f4 \uf8f3 Unif(0.25, 0.75) with probability (1 \u2212s) \u00d7 0.75 Unif(\u22120.75, \u22120.25) with probability (1 \u2212s) \u00d7 0.25 0 with probability s , (8) where s is the level of sparsity in Ak. Then, we set (Ak)j\u2032j = (Ak)jj\u2032 to obtain symmetry. Furthermore, set the diagonal entries of Ak to equal zero. 3. Create a matrix \u03a3\u22121 that is block diagonal with blocks A1, . . . , AK. 4. In order to achieve positive de\ufb01niteness of \u03a3\u22121, calculate the minimum eigenvalue emin of \u03a3\u22121. For j = 1, . . . , p, set (\u03a3\u22121)jj = \u2212emin + 0.1 if emin < 0. (9) 15 5.1 \u03a3\u22121 block diagonal We generated a n \u00d7 p data matrix X according to x1, . . . , xn iid \u223cN(0, \u03a3), where \u03a3\u22121 is a block diagonal inverse covariance matrix with two equally-sized blocks, and sparsity level s = 0.4 within each block, generated using Algorithm 3. We \ufb01rst standardized the variables to have mean zero and variance one. Then, we performed the graphical lasso as well as CGL using ALC with (a) the tuning parameter K selected using Algorithm 2 and (b) K = {2, 4}. For simplicity, we chose \u03bb1 = . . . = \u03bbK = \u03bb where \u03bb ranges from 0.01 to 1. We considered two cases in our simulation: n = 50, p = 100 and n = 200, p = 100. Results are presented in Figure 2. More extensive simulation results are presented in the Supplementary Materials. Let \u03bbk be as de\ufb01ned in Corollary 3; that is, it is the smallest value of the tuning parameter that will break up the kth connected component asymptotically. 
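Before turning to the results in Figure 2, the following is a minimal sketch of the data-generating mechanism in Algorithm 3 above; the dimensions, the number of blocks, and the sparsity level are illustrative.

```python
# A minimal sketch of Algorithm 3: generate a sparse block-diagonal inverse
# covariance matrix with K equally sized blocks and within-block sparsity s,
# then draw Gaussian data from the implied covariance.
import numpy as np

def generate_precision(p, K, s, rng):
    block = p // K
    Theta = np.zeros((p, p))
    for k in range(K):
        idx = np.arange(k * block, (k + 1) * block)
        A = np.zeros((block, block))
        for i in range(block):
            for j in range(i + 1, block):
                u = rng.random()
                if u < (1 - s) * 0.75:                 # Unif(0.25, 0.75) entry
                    A[i, j] = rng.uniform(0.25, 0.75)
                elif u < (1 - s):                      # Unif(-0.75, -0.25) entry
                    A[i, j] = rng.uniform(-0.75, -0.25)
                A[j, i] = A[i, j]                      # symmetry; diagonal stays 0
        Theta[np.ix_(idx, idx)] = A
    # Step 4: shift the diagonal to make Theta positive definite.
    e_min = np.linalg.eigvalsh(Theta).min()
    if e_min < 0:
        np.fill_diagonal(Theta, -e_min + 0.1)
    return Theta

rng = np.random.default_rng(3)
Theta = generate_precision(p=100, K=2, s=0.4, rng=rng)
Sigma = np.linalg.inv(Theta)                           # covariance used to draw X
X = rng.multivariate_normal(np.zeros(100), Sigma, size=50)
```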
The solid circles in Figure 2 correspond to λ = mink λk − ε for some tiny positive ε, i.e., λ is the largest value such that all K of the connected components are consistently identified according to Corollary 3. For instance, the black solid circle corresponds to the largest value of λ such that the graphical lasso consistently identifies the connected components according to Corollary 3. From Corollary 3, any value of λ to the right of the solid circles should consistently identify the connected components. On the other hand, the black triangle in Figure 2 corresponds to the value of λ proposed by Banerjee et al. (2008) using α = 0.05, as in Equation 7. This choice of λ guarantees that the probability of adding edges between two disconnected components is bounded by α = 0.05.

[Figure 2 appears here; the panel contents are described in the caption, with curves for the graphical lasso and for CGL using ALC with K chosen by Algorithm 2, K = 2, and K = 4.] Figure 2: Results from graphical lasso, CGL using ALC with various values of K and with K selected automatically using Algorithm 2, averaged over 200 iterations. The true graph is sparse, with two connected components. (a): Plot of average mean squared error of estimated inverse covariance matrix against the average number of non-zero edges with n = 50, p = 100. (b): ROC curve for the number of edges: true positive rate = (number of correctly estimated non-zero)/(total number of non-zero) and false positive rate = (number of incorrectly estimated non-zero)/(total number of zero) with n = 50, p = 100. (c)-(d): As in (a)-(b) with n = 200, p = 100. The solid circles indicate λ = mink λk − ε from Corollary 3 and the triangle indicates the choice of λ proposed by Banerjee et al. (2008).

From Figures 2(a)-(b), we see that for a given number of non-zero edges, CGL has similar MSE as compared to the graphical lasso, on the region of interest. Also, CGL tends to yield a higher fraction of correctly identified non-zero edges, as compared to graphical lasso. We see that the value of λ proposed by Banerjee et al. (2008) leads to a sparser estimate than does Corollary 3, since the black solid triangle is to the left of the black solid circle. This is consistent with the fact that Banerjee et al. (2008)'s choice of λ is guaranteed not to erroneously connect two separate components, but is not guaranteed to avoid erroneously disconnecting a connected component. Moreover, for CGL, the choice of λ from Corollary 3 results in identifying more true edges, compared to the same choice of λ for graphical lasso. This is mainly due to the fact that CGL does better at identifying the connected components. In Figures 2(a)-(b), n < p and the signal-to-noise ratio is quite low. Consequently, CGL with K = 4 clusters outperforms CGL with K = 2 clusters, even though the true number of connected components is two.
This is due to the fact that in this particular simulation set-up with n = 50, ALC with K = 2 has the tendency to produce one cluster containing most of the features and one cluster containing just a couple of features; thus, it may fail to identify the connected components correctly. In contrast, ALC with K = 4 tends to identify the two connected components almost exactly (though it also creates two additional clusters that contain just a couple of features). Figures 2(c)-(d) indicate that when n = 200 and p = 100, CGL has a lower MSE than does the graphical lasso for a fixed number of non-zero edges. In addition, CGL with K = 2 using ALC has the best performance in terms of identifying non-zero edges. The result is not surprising because when n = 200 and p = 100, ALC is able to cluster the features into two clusters almost perfectly. Overall, CGL leads to more accurate estimation of the inverse covariance matrix than does the graphical lasso when the true covariance matrix is block diagonal. In addition, these results suggest that the method of Algorithm 2 for selecting K, the number of clusters, leads to appropriate choices regardless of the sample size.

5.2 Σ−1 approximately block diagonal

We repeated the simulation from Section 5.1, except that Σ−1 is now approximately block diagonal. That is, we generated data according to Algorithm 3, but between Steps (c) and (d) we altered the resulting Σ−1 such that 2.5% or 10% of the elements outside of the blocks are drawn i.i.d. from a Unif(−0.5, 0.5) distribution. We considered the case when n = 200, p = 100 in this section. Results are presented in Figure 3. From Figures 3(a)-(b), we see that CGL outperforms the graphical lasso when the assumption of block diagonality is only slightly violated. However, as the assumption is increasingly violated, graphical lasso's performance improves relative to CGL, as is shown in Figures 3(c)-(d). As in the previous section, we see that Algorithm 2 results in reasonable estimates of the number of clusters.

[Figure 3 appears here; its panels are laid out as in Figure 2, with curves for the graphical lasso and for CGL using ALC with K chosen by Algorithm 2, K = 2, and K = 4.] Figure 3: As in Figure 2, with n = 200, p = 100, and Σ−1 approximately block diagonal. (a)-(b): 2.5% of the off-block elements do not equal zero. (c)-(d): 10% of the off-block elements do not equal zero.

6 Application to real data sets

We explore three applications of CGL: to an equities data set in which the features are known to belong to distinct groups, to a webpage data set in which the features are easily interpreted, and to a gene expression data set in which the true conditional dependence among the features is partially known. Throughout this section, we choose λ1 = . . . = λK = λ in CGL for simplicity.

6.1 Equities data

We analyze the stock price data from Yahoo! Finance described in Liu et al.
(2012), and available in the huge package on CRAN (Zhao et al. 2012). This data set consists of daily closing prices for stocks in the S&P 500 index between January 1, 2003 and January 1, 2008. Stocks that are not consistently included in the S&P 500 index during this time period are removed. This leaves us with 1258 daily closing prices for 452 stocks, which are categorized into 10 Global Industry Classification Standard (GICS) sectors. Let Pij denote the closing price of the jth stock on the ith day. Then, we construct a 1257 × 452 data matrix X whose (i, j) element is defined as xij = log(P(i+1)j/Pij) for i = 1, . . . , 1257 and j = 1, . . . , 452. Instead of Winsorizing the data as in Liu et al. (2012), we simply standardize each stock to have mean zero and standard deviation one. In this example, the true GICS sector for each stock is known. However, we did not use this information. Instead, we performed CGL with CLC and with tuning parameters K = 10 (since there are 10 categories) and λ = 0.37. The network estimate has 2123 edges. We then chose the tuning parameter for the graphical lasso in order to obtain the same number of estimated edges. The estimated networks are presented in Figure 4, with nodes colored according to GICS sector.

[Figure 4 appears here.] Figure 4: Networks constructed by (a): CGL with K = 10 and λ = 0.37 (2123 edges). (b): graphical lasso with λ chosen to yield 2123 edges.
Each stock was colored based on its GICS sector: Consumer Discretionary (black), Consumer Staples (red), Energy (green), Financials (blue), Health Care (cyan), Industrials (purple), Information Technology (yellow), Materials (grey), Telecommunications Services (white), and Utilities (orange).

Figure 4 reveals that the network estimated by CGL is more easily interpretable than the network estimated by graphical lasso. For instance, stocks that are categorized as consumer staples (red nodes) are for the most part conditionally independent of stocks in other GICS sectors. Consumer staples are products such as food, beverages, and household items. Therefore, stock prices for consumer staples are approximately stable, regardless of the economy and the prices of other stocks.

6.2 University webpages

In this section, we consider the university webpages data set from the "World Wide Knowledge Base" project at Carnegie Mellon University. This data set was preprocessed by Cardoso-Cachopo (2009) and previously studied in Guo et al. (2011). It includes webpages from four computer science departments at the following universities: Cornell, Texas, Washington, and Wisconsin. In this analysis, we consider only student webpages. This gives us n = 544 student webpages and p = 4800 distinct terms that appear on these webpages. Let fij be the frequency of the jth term in the ith webpage. We construct a n × p matrix whose (i, j) element is log(1 + fij). We selected 100 terms with the largest entropy out of a total of 4800 terms, where the entropy of the jth term is defined as −∑i gij log(gij)/log(n), with gij = fij/∑i fij and both sums running over i = 1, . . . , n. We then standardized each term to have mean zero and standard deviation one. For CGL, we used CLC with tuning parameters K = 5 and λ = 0.25. The estimated network has a total of 158 edges. We then performed the graphical lasso with λ chosen such that the estimated network has the same number of edges as the network estimated by CGL, i.e., 158 edges. The resulting networks are presented in Figure 5. For ease of viewing, we colored yellow all of the nodes in subnetworks that contained more than one node in the CGL network estimate.

From Figure 5(a), we see that CGL groups related words into subnetworks. For instance, the terms "computer", "science", "depart", "univers", "email", and "address" are connected within a subnetwork. In addition, the terms "office", "fax", "phone", and "mail" are connected within a subnetwork. Other interesting subnetworks include "graduate"-"student"-"work"-"year"-"study" and "school"-"music". In contrast, the graphical lasso in Figure 5(b) identifies a large subnetwork that contains so many nodes that interpretation is rendered difficult. In addition, it fails to identify most of the interesting phrases within subnetworks described above.
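A minimal sketch of the preprocessing just described for the webpage data follows: the log(1 + fij) transform, entropy-based selection of the 100 highest-entropy terms, and column standardization. The Poisson-generated frequency matrix is only a stand-in for the real term counts, and the variable names are illustrative.

```python
# Term-frequency preprocessing: log(1 + f), entropy-based term selection,
# and standardization, with a placeholder frequency matrix.
import numpy as np

rng = np.random.default_rng(4)
n, p = 544, 4800
F = rng.poisson(0.2, size=(n, p)).astype(float)   # placeholder term frequencies f_ij

X = np.log1p(F)                                   # (i, j) entry is log(1 + f_ij)

col_sums = F.sum(axis=0)
col_sums[col_sums == 0] = 1.0                     # guard against all-zero terms
G = F / col_sums                                  # g_ij = f_ij / sum_i f_ij
with np.errstate(divide="ignore", invalid="ignore"):
    # Entropy of term j: -sum_i g_ij log(g_ij) / log(n), treating 0 log 0 as 0.
    H = -np.sum(np.where(G > 0, G * np.log(G), 0.0), axis=0) / np.log(n)

top = np.argsort(H)[::-1][:100]                   # 100 terms with the largest entropy
X = X[:, top]
X = (X - X.mean(axis=0)) / X.std(axis=0)          # standardize each selected term
```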
[Figure 5 appears here; its two panels display the estimated term networks, with nodes labeled by the 100 selected terms.] Figure 5: Network constructed by (a): CGL with CLC with K = 5 and λ = 0.25 (158 edges). (b): graphical lasso with λ chosen to yield 158 edges. Nodes that are in a multiple-node subnetwork in the CGL solution are colored in yellow.

6.3 Arabidopsis thaliana

We consider the Arabidopsis thaliana data set, which consists of gene expression measurements for 118 samples and 39 genes (Rodrígues-Concepción & Boronat 2002). This data set has been previously studied by Wille et al. (2004), Ma et al. (2007), and Liu et al. (2011). It is known that plants contain two isoprenoid biosynthesis pathways, the mevalonate acid (MVA) pathway and the methylerythritol phosphate (MEP) pathway (Rodrígues-Concepción & Boronat 2002).
Of these 39 genes, 19 correspond to the MEP pathway, 15 correspond to the MVA pathway, and five encode proteins located in the mitochondrion (Wille et al. 2004, Lange & Ghassemian 2003). Our goal was to reconstruct the gene regulatory network of the two isoprenoid biosynthesis pathways. Although we know that both pathways operate independently under normal conditions, interactions between certain genes in the two pathways have been reported (Wille et al. 2004). We expect genes within each of the two pathways to be more connected than genes between the two pathways. We began by standardizing each gene to have mean zero and standard deviation one. For CGL, we set the tuning parameters K = 7 and λ = 0.4. The estimated network has a total of 85 edges. Note that the number of clusters K = 7 was chosen so that the estimated network has several connected components that contain multiple genes. We also performed the graphical lasso with λ chosen to yield 85 edges. The estimated networks are shown in Figure 6. Note that the red nodes, grey nodes, and white nodes in Figure 6 represent the MEP pathway, the MVA pathway, and the mitochondrion, respectively.

From Figure 6(a), we see that CGL identifies several separate subnetworks that might be potentially interesting. In the MEP pathway, the genes DXR, CMK, MCT, MECPS, and GGPPS11 are mostly connected. In addition, the genes AACT1 and HMGR1, which are known to be in the MVA pathway, are connected to the genes MECPS, CMK, and DXR. Wille et al. (2004) suggested that AACT1 and HMGR1 form candidates for cross-talk between the MEP and MVA pathways. For the MVA pathway, the genes HMGR2, MK, AACT2, MPDC1, FPPS2, and FPPS1 are closely connected. In addition, there are edges among these genes and the genes IPPI1, GGPPS12, and GGPPS6. These findings are mostly in agreement with Wille et al. (2004). In contrast, the graphical lasso results are hard to interpret, since most nodes are part of a very large connected component, as is shown in Figure 6(b).

[Figure 6 appears here; its two panels display the estimated gene networks, with nodes labeled by the 39 genes.] Figure 6: Pathways identified by (a): CGL with K = 7 and λ = 0.4 (85 edges), and (b): graphical lasso with λ chosen to yield 85 edges. Red, grey, and white nodes represent genes in the MEP pathway, MVA pathway, and mitochondrion, respectively.

7 Discussion

We have shown that identifying the connected components of the graphical lasso solution is equivalent to performing SLC based on S̃, the absolute value of the empirical covariance matrix. Based on this connection, we have proposed the cluster graphical lasso, an improved version of the graphical lasso for sparse inverse covariance estimation. A shortcoming of the graphical lasso is that in order to avoid obtaining a network estimate in which one connected component contains a huge number of nodes, one needs to impose a huge penalty λ in (1).
When such a large value of \u03bb is used, the graphical lasso solution tends to contain many isolated nodes as well as a number of small subnetworks with just a few edges. This can lead to an underestimate of the number of edges in the network. In contrast, CGL decouples the identi\ufb01cation of the connected components in the network estimate and 24 the identi\ufb01cation of the edge structure within each connected component. Hence, it does not su\ufb00er from the same problem as the graphical lasso. In this paper, we have considered the use of hierarchical clustering in the CGL procedure. We have shown that performing hierarchical clustering on \u02dc S leads to consistent cluster recovery. As a byproduct, we suggest a choice of \u03bb1, . . . , \u03bbK in CGL that yields consistent identi\ufb01cation of the connected components. In addition, we establish the model selection consistency of CGL. Detailed exploration of di\ufb00erent clustering methods in the context of CGL is left to future work. Equation 3 indicates that CGL can be interpreted as the solution to a penalized log likelihood problem, where the penalty function has edge-speci\ufb01c weights that are based upon a previously-obtained clustering of the features. This parallels the adaptive lasso (Zou 2006), in which a consistent estimate of the coe\ufb03cients is used to weight the \u21131 penalty in a regression problem. Other weighting schemes could be explored in the context of (3); this is left as a topic for future investigation. In this paper, we have investigated the use of clustering before estimating a graphical model using the graphical lasso. As we have seen, this approach is quite natural due to a connection between the graphical lasso and single linkage clustering. However, in principle, one could perform clustering before estimating a graphical model using another technique, such as neighborhood selection (Meinshausen & B\u00a8 uhlmann 2006), sparse partial correlation estimation (Peng et al. 2009), the nonparanormal (Liu et al. 2009, Xue & Zou 2012), or constrained \u21131 minimization (Cai et al. 2011). We leave a full investigation of these approaches for future research. Acknowledgments We thank Noah Simon for helpful conversations about the connection between graphical lasso and SLC; Jian Guo and Ji Zhu for providing the university webpage data set in Guo et al. (2011); and Han Liu and Tuo Zhao for sharing the equities data set in Liu et al. (2012). 25 Appendix Proof of Theorem 2 We begin with a lemma (Mirkin 1996, Jain & Dubes 1988). Lemma 2. Let C1, . . . , CK denote the clusters that result from performing SLC using similarity matrix \u02dc S, and cutting the resulting dendrogram at a height of 0 \u2264\u03bb \u22641. Also, let A be a p \u00d7 p matrix whose (j, j\u2032)th element is 1 if j = j\u2032 and is 1{ \u02dc Sjj\u2032\u2265\u03bb} otherwise. Let D1, . . . , DR denote the connected components of the undirected graph that contains an edge between the jth and j\u2032th nodes if and only if Ajj\u2032 \u0338= 0. Then, K = R, and there exists a permutation \u03c0 such that Ck = D\u03c0(k) for k = 1, . . . , K. Combining Theorem 1 and Lemma 2 leads directly to Theorem 2. Proof of Lemma 1 In order to prove Lemma 1, we \ufb01rst present an additional lemma. Lemma 3. Let x1, . . . , xn be i.i.d. N(0, \u03a3), and assume that \u03a3ii \u2264M \u2200i, where M is some constant. 
The associated absolute empirical covariance matrix \u02dc S satis\ufb01es P[max i,j | \u02dc Sij \u2212|\u03a3ij|| \u2265t r log p n ] \u2264 c1 pc2t2\u22122. In order to prove Lemma 3, \ufb01rst note that by the reverse triangle inequality, |Sij \u2212\u03a3ij| \u2265 | \u02dc Sij \u2212|\u03a3ij||. This implies that P[| \u02dc Sij \u2212|\u03a3ij|| \u2265\u03b4] \u2264P[|Sij \u2212\u03a3ij| \u2265\u03b4]. Then the result follows from applying Lemma 1 of Ravikumar et al. (2011), together with the union bound inequality. We now proceed with the proof of Lemma 1. Proof. Let a = mini,j\u2208Ck:k=1,...,K |\u03a3ij|. The assumptions imply that the following holds for performing SLC, ALC, or CLC based on \u02dc S: {max i,j | \u02dc Sij \u2212|\u03a3ij|| < a/2} = \u21d2{ \u02c6 Ck = Ck \u2200k} Therefore, P(\u2203k : \u02c6 Ck \u0338= Ck) \u2264P[max i,j | \u02dc Sij \u2212|\u03a3ij|| \u2265a/2] \u2264P[max i,j | \u02dc Sij \u2212|\u03a3ij|| \u2265t r log p n ] \u2264 c1 pc2t2\u22122, where the last inequality holds by Lemma 3. 26 Proof of Theorem 4 Proof. De\ufb01ne \u02c6 Ek to be the edge set obtained from applying the graphical lasso to the set of features in Ck. Recall that \u02c6 Ck is the kth cluster estimated in the clustering step of CGL. We begin by noting that in order for CGL to yield the correct edge set, it must yield the correct edge set within each of the true connected components, and it must also yield no edges between the connected components. In other words, ( \u02c6 E = E) \u2261( \u02c6 Ek = Ek \u2200k = 1, . . . , K) \u2229( \u02c6 Ck = Ck \u2200k = 1, . . . , K). Thus, ( \u02c6 E \u0338= E) = (\u2203k : \u02c6 Ek \u0338= Ek) \u222a(\u2203k : \u02c6 Ck \u0338= Ck), which implies that P( \u02c6 E \u0338= E) \u2264 P(\u2203k : \u02c6 Ek \u0338= Ek) + P(\u2203k : \u02c6 Ck \u0338= Ck) \u2264 K X k=1 P( \u02c6 Ek \u0338= Ek) + P(\u2203k : \u02c6 Ck \u0338= Ck). By Lemma 1, we have that P(\u2203k : \u02c6 Ck \u0338= Ck) \u2264 c1 pc2t2\u22122. By Theorem 2 of Ravikumar et al. (2011), since n = \u2126(d2 log(pmax)) and \u03bbk = 8 \u03b1 q c2(\u03c4 log pk+log 4) n , it follows that P( \u02c6 Ek \u0338= Ek) \u2264 1 p\u03c4\u22122 k \u2264 1 p\u03c4\u22122 min for all k. The result follows directly.", "introduction": "Graphical models have been extensively used in various domains, including modeling of gene regulatory networks and social interaction networks. A graph consists of a set of p nodes, corresponding to random variables, as well as a set of edges joining pairs of nodes. In a conditional independence graph, the absence of an edge between a pair of nodes indicates a pair of variables that are conditionally independent given the rest of the variables in the data set, and the presence of an edge indicates a pair of conditionally dependent nodes. Hence, graphical models can be used to compactly represent complex joint distributions using a set 1 arXiv:1307.5339v1 [stat.ML] 19 Jul 2013 of local relationships speci\ufb01ed by a graph. Throughout the rest of the text, we will focus on Gaussian graphical models. Let X be a n \u00d7 p matrix where n is the number of observations and p is the number of features; the rows of X are denoted as x1, . . . , xn. Assume that x1, . . . , xn iid \u223cN(0, \u03a3) where \u03a3 is a p \u00d7 p covariance matrix. Under this simple model, there is an equivalence between a zero in the inverse covariance matrix and a pair of conditionally independent variables (Mardia et al. 1979). 
More precisely, $(\Sigma^{-1})_{jj'} = 0$ for some $j \neq j'$ if and only if the $j$th and $j'$th features are conditionally independent given the other variables. Let $S$ denote the empirical covariance matrix of $X$, defined as $S = X^{T}X/n$. A natural way to estimate $\Sigma^{-1}$ is via maximum likelihood. This approach involves maximizing $\log\det\Theta - \mathrm{tr}(S\Theta)$ with respect to $\Theta$, where $\Theta$ is an optimization variable; the solution $\hat{\Theta} = S^{-1}$ serves as an estimate for $\Sigma^{-1}$. However, in high dimensional settings where $p \gg n$, $S$ is singular and is not invertible. Furthermore, even if $S$ is invertible, $\hat{\Theta} = S^{-1}$ typically contains no elements that are exactly equal to zero. This corresponds to a graph in which the nodes are fully connected to each other; such a graph does not provide useful information. To overcome these problems, Yuan & Lin (2007) proposed to maximize the penalized log likelihood
$$\log\det\Theta - \mathrm{tr}(S\Theta) - \lambda \sum_{j\neq j'} |\Theta_{jj'}| \quad (1)$$
with respect to $\Theta$, penalizing only the off-diagonal elements of $\Theta$. A number of algorithms have been proposed to solve (1) (among others Friedman et al. 2007, Yuan & Lin 2007, Rothman et al. 2008, Yuan 2008, Scheinberg et al. 2010). Note that some authors have considered a slight modification to (1) in which the diagonal elements of $\Theta$ are also penalized. We refer to the maximizer of (1) as the graphical lasso solution; it serves as an estimate for $\Sigma^{-1}$. When the nonnegative tuning parameter $\lambda$ is sufficiently large, the estimate will be sparse, with some elements exactly equal to zero. These zero elements correspond to pairs of variables that are estimated to be conditionally independent. Witten et al. (2011) and Mazumder & Hastie (2012) presented the following result:
Theorem 1. The connected components of the graphical lasso solution with tuning parameter $\lambda$ are the same as the connected components of the undirected graph corresponding to the $p \times p$ adjacency matrix $A(\lambda)$, defined as $A_{jj'}(\lambda) = 1$ if $j = j'$, and $A_{jj'}(\lambda) = 1_{(|S_{jj'}| > \lambda)}$ if $j \neq j'$. Here $1_{(|S_{jj'}| > \lambda)}$ is an indicator variable that equals 1 if $|S_{jj'}| > \lambda$, and equals 0 otherwise.
For instance, consider a partition of the features into two disjoint sets, $D_1$ and $D_2$. Theorem 1 indicates that if $|S_{jj'}| \le \lambda$ for all $j \in D_1$ and all $j' \in D_2$, then the features in $D_1$ and $D_2$ are in two separated connected components of the graphical lasso solution. Theorem 1 reveals that solving problem (1) boils down to two steps:
1. Identify the connected components of the undirected graph with adjacency matrix $A$.
2. Perform graphical lasso with parameter $\lambda$ on each connected component separately (a short code sketch of this decomposition is given below).
In this paper, we will show that identifying the connected components in the graphical lasso solution – that is, Step 1 of the two-step procedure described above – is equivalent to performing single linkage hierarchical clustering (SLC) on the basis of a similarity matrix given by the absolute value of the elements of the empirical covariance matrix $S$. However, we know that SLC tends to produce trailing clusters in which individual features are merged one at a time, especially if the data are noisy and the clusters are not clearly separated (Hastie et al. 2009).
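To make the two-step decomposition concrete, the following is a minimal sketch in Python, assuming scikit-learn's GraphicalLasso and SciPy's connected-components routine are available; the helper name two_step_glasso and its arguments are illustrative rather than part of any existing software.

```python
import numpy as np
from scipy.sparse import csr_matrix
from scipy.sparse.csgraph import connected_components
from sklearn.covariance import GraphicalLasso

def two_step_glasso(X, lam):
    """Theorem 1 screening: threshold |S| at lam, split the features into the
    connected components of A(lam), then run the graphical lasso within each."""
    n, p = X.shape
    S = X.T @ X / n                               # empirical covariance (X assumed centered)
    A = (np.abs(S) > lam).astype(int)             # adjacency matrix A(lam)
    np.fill_diagonal(A, 1)
    n_comp, labels = connected_components(csr_matrix(A), directed=False)
    Theta = np.zeros((p, p))
    for c in range(n_comp):
        idx = np.flatnonzero(labels == c)
        if idx.size == 1:                         # isolated node: no off-diagonal entries
            Theta[idx[0], idx[0]] = 1.0 / S[idx[0], idx[0]]
            continue
        gl = GraphicalLasso(alpha=lam).fit(X[:, idx])
        Theta[np.ix_(idx, idx)] = gl.precision_   # block of the estimated precision matrix
    return Theta, labels
```

Using the same λ in both steps, as in this sketch, recovers the plain graphical lasso screening of Theorem 1; CGL instead obtains the partition from a hierarchical clustering with K clusters and allows component-specific penalties λ1, . . . , λK.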
In addition, the two steps of the graphical lasso algorithm are based on the same tuning parameter, \u03bb, which can be suboptimal. Motivated by the connection between the graphical lasso solution and single linkage clustering, we therefore propose a new alternative 3 to the graphical lasso. We will \ufb01rst perform clustering of the variables using an alternative to single linkage clustering, and then perform the graphical lasso on the subset of variables within each cluster. Our approach decouples the cuto\ufb00for the clustering step from the tuning parameter used for the graphical lasso problem. This results in improved detection of the connected components in high dimensional Gaussian graphical models, leading to more accurate network estimates. Based on this new approach, we also propose a new method for choosing the tuning parameter for the graphical lasso problem on the subset of variables in each cluster, which results in consistent identi\ufb01cation of the connected components in the graph. The rest of the paper is organized as follows. In Section 2, we establish a connection be- tween the graphical lasso and single linkage clustering. In Section 3, we present our proposal for cluster graphical lasso, a modi\ufb01cation of the graphical lasso that involves discovery of the connected components via an alternative to SLC. We prove model selection consistency of our procedure in Section 4. Simulation results are in Section 5, and Section 6 contains an application of cluster graphical lasso to an equities data set, a webpage data set, and a gene expression data set. The Discussion is in Section 7." } ], "Gongjun Xu": [ { "url": "http://arxiv.org/abs/1307.8217v1", "title": "Bootstrapping a Change-Point Cox Model for Survival Data", "abstract": "This paper investigates the (in)-consistency of various bootstrap methods for\nmaking inference on a change-point in time in the Cox model with right censored\nsurvival data. A criterion is established for the consistency of any bootstrap\nmethod. It is shown that the usual nonparametric bootstrap is inconsistent for\nthe maximum partial likelihood estimation of the change-point. A new\nmodel-based bootstrap approach is proposed and its consistency established.\nSimulation studies are carried out to assess the performance of various\nbootstrap schemes.", "authors": "Gongjun Xu, Bodhisattva Sen, Zhiliang Ying", "published": "2013-07-31", "updated": "2013-07-31", "primary_cat": "stat.ME", "cats": [ "stat.ME" ], "main_content": "We use T to denote survival time and C censoring time. Throughout, a \u2227b = min{a, b} and a\u2228b = max{a, b}. Let \u02dc T = T \u2227C and \u03b4 = 1T\u2264C indicating failure (1) or censoring (0). Furthermore, there is a p-dimensional covariate process Z(t), c\u00b4 agl\u00b4 ad (left-continuous with right-hand limits), which may include an individual\u2019s treatment assignment and certain relevant characteristics. In 2 this paper we focus on the external time-dependent covariate and assume that Z is observed over the study interval [0, \u03c4], \u03c4 < \u221e. An external time-dependent covariate means that its value path is not directly generated by the individual under the study. Examples include the age of an individual and the air pollution level for asthma study; see Chapter 6.3.1 in Kalb\ufb02eisch and Prentice (2002) for more discussion. 
Furthermore, we assume that Z is of bounded total variation on [0, \u03c4] and the covariance matrix V ar(Z(t)) is strictly positive de\ufb01nite for any t \u2208[0, \u03c4]. Given covariate Z, the survival time T is assumed to be conditionally independent of the censoring time C. The hazard rate function of T follows the change-point Cox model (2) with \u03b10 and \u03b20 belonging to bounded convex sets \u0398\u03b1 and \u0398\u03b2 in Rp, respectively. We assume that the baseline hazard function \u03bb0(\u00b7) is bounded on [0, \u03c4] with inft\u2208[0,\u03c4] \u03bb0(t) > 0 and the conditional distribution of censoring time G(\u00b7|Z) satis\ufb01es supz\u2208V G(\u03c4|z) < 1, where V is the set of all possible paths of Z. To ensure the identi\ufb01ability of \u03b60, we further assume that \u03bb0(\u00b7) and G(\u00b7|z), for any z \u2208V, are continuous at \u03b60. The observed data ( \u02dc Ti, \u03b4i, Zi), i = 1, ..., n, consist of n i.i.d. realizations of ( \u02dc T, \u03b4, Z). The Cox partial likelihood (Cox, 1975) is Ln(\u03b1, \u03b2, \u03b6) = Y 1\u2264i\u2264n,\u03b4i=1 e\u03b1\u2032Zi(Ti)1Ti\u2264\u03b6+\u03b2\u2032Zi(Ti)1Ti>\u03b6 P 1\u2264j\u2264n, \u02dc Tj\u2265Ti e\u03b1\u2032Zj(Ti)1Ti\u2264\u03b6+\u03b2\u2032Zj(Ti)1Ti>\u03b6 . (3) Let ln(\u03b1, \u03b2, \u03b6) = log Ln(\u03b1, \u03b2, \u03b6), which is continuous in \u03b1 and \u03b2 but c\u00b4 adl\u00b4 ag in \u03b6. In fact, it is a step function in \u03b6 and hence could have multiple maximizers. To avoid ambiguity, we say that (\u02dc \u03b1\u2032 n, \u02dc \u03b2\u2032 n, \u02dc \u03b6n)\u2032 \u2208\u0398 := \u0398\u03b1 \u00d7 \u0398\u03b2 \u00d7 [0, \u03c4] is a maximizer if ln(\u02dc \u03b1n, \u02dc \u03b2n, \u02dc \u03b6n\u2212) \u2228ln(\u02dc \u03b1n, \u02dc \u03b2n, \u02dc \u03b6n) = sup (\u03b1\u2032,\u03b2\u2032,\u03b6)\u2032\u2208\u0398 ln(\u03b1, \u03b2, \u03b6). Since, for each \u03b6, ln(\u03b1, \u03b2, \u03b6) as a function of \u03b1 and \u03b2 has a unique maximizer, we can choose as our MPLE the maximizer with the smallest value of \u03b6. In other words, our estimator (\u02c6 \u03b1\u2032 n, \u02c6 \u03b2\u2032 n, \u02c6 \u03b6n)\u2032 \u2208\u0398 will be the only maximizer such that if (\u02dc \u03b1\u2032 n, \u02dc \u03b2\u2032 n, \u02dc \u03b6n)\u2032 \u2208\u0398 is any other maximizer, then \u02c6 \u03b6n < \u02dc \u03b6n. In this case, we say that \u02c6 \u03b8n := (\u02c6 \u03b1\u2032 n, \u02c6 \u03b2\u2032 n, \u02c6 \u03b6n)\u2032 is the smallest argmax of ln and write it as \u02c6 \u03b8n := (\u02c6 \u03b1\u2032 n, \u02c6 \u03b2\u2032 n, \u02c6 \u03b6n)\u2032 := sargmax(\u03b1\u2032,\u03b2\u2032,\u03b6)\u2032\u2208\u0398ln(\u03b1, \u03b2, \u03b6). (4) As discussed in the Introduction, it is not practical to directly use the limiting distribution of \u02c6 \u03b6n for constructing CIs for \u03b60. Thus it is desirable to develop bootstrap approaches. 2.1 Bootstrap procedures We start with a brief review of bootstrap procedures. Consider a sample Xn = {X1, \u00b7 \u00b7 \u00b7 , Xn} iid \u223cFX. Suppose that we are interested in estimating the distribution function FRn of a random variable Rn(Xn, FX). A bootstrap procedure generates X\u2217 n = {X\u2217 1, . . . , X\u2217 mn} iid \u223c\u02c6 FX,n given Xn, where \u02c6 FX,n is an estimator of FX from Xn and mn is a constant depending on n, and then estimates FRn by F \u2217 Rn, the conditional distribution function of Rn(X\u2217 n, \u02c6 FX,n) given Xn. Let d denote a metric metrizing weak convergence of distributions. 
We say that F \u2217 Rn is weakly consistent if d(FRn, F \u2217 Rn) \u21920 in probability. If FRn has a weak limit FR, then weak consistency requires F \u2217 Rn to converge weakly to FR, in probability. In the current context, we are interested in the distribution of n(\u02c6 \u03b6n \u2212\u03b60). Then for a consistent bootstrap procedure, the conditional distribution of mn(\u02c6 \u03b6\u2217 n\u2212\u02c6 \u03b6n) given the data must provide a good approximation to the distribution function of n(\u02c6 \u03b6n \u2212\u03b60), where \u02c6 \u03b6\u2217 n is the estimator of \u03b60 obtained from the bootstrap sample. In the following we introduce several bootstrap methods commonly used in the literature for the model (2). We start with the classical bootstrap based on the ED. 3 Method 1 (Classical bootstrap) Draw a random sample {( \u02dc T \u2217 n,i, \u03b4\u2217 n,i, Z\u2217 n,i) : i = 1, \u00b7 \u00b7 \u00b7 , n} from the ED of the data {( \u02dc Ti, \u03b4i, Zi) : i = 1, \u00b7 \u00b7 \u00b7 , n}. An alternative to the usual nonparametric bootstrap method (Method 1) considered in nonregular problems is the m-out-of-n bootstrap; see, e.g., Bickel, G\u00a8 otze and van Zwet (1997). Method 2 (m-out-of-n bootstrap) Choose an increasing sequence {mn}\u221e n=1 such that mn = o(n) and mn \u2192\u221e. Draw a random sample {( \u02dc T \u2217 n,i, \u03b4\u2217 n,i, Z\u2217 n,i) : i = 1, \u00b7 \u00b7 \u00b7 , mn} from the ED of the data {( \u02dc Ti, \u03b4i, Zi) : i = 1, \u00b7 \u00b7 \u00b7 , n}. Two widely used conditional bootstrap procedures for the Cox model are given in Methods 3 and 4 below; see Burr (1994). These methods are model-based and need estimators of the conditional distributions of T and C given Z. Method 3 (Bootstrap conditional on covariates) 1. Fit the Cox regression model and construct an estimator of the conditional distribution of T given Z as \u02c6 F b n(t|Z) = 1 \u2212exp \u0010 \u2212 Z t 0 e\u02c6 \u03b1\u2032 nZ(s)1s\u2264\u02c6 \u03b6n+\u02c6 \u03b2\u2032 nZ(s)1s>\u02c6 \u03b6nd\u02c6 \u039bb n,0(s) \u0011 , (5) where \u02c6 \u039bb n,0(s) is the Breslow estimator of the cumulative baseline hazard function \u039b0, i.e., \u02c6 \u039bb n,0(t) = Z t 0 \u0010 n X j=1 Yj(s)e\u02c6 \u03b1\u2032 nZj(s)1s\u2264\u02c6 \u03b6n+\u02c6 \u03b2\u2032 nZj(s)1s>\u02c6 \u03b6n \u0011\u22121 d \u0010 n X i=1 Ni(s) \u0011 with Yi(t) = 1 \u02dc Ti\u2265t and Ni(t) = 1 \u02dc Ti\u2264t,\u03b4i=1. In addition, we construct a conditional distribution estimator \u02c6 Gn(\u00b7|Z) of G(\u00b7|Z); see Section 4.1.2 for more discussion on estimating G(\u00b7|Z). 2. For given Z1, \u00b7 \u00b7 \u00b7 , Zn, generate i.i.d. replicates {T \u2217 n,i, C\u2217 n,i : i = 1, \u00b7 \u00b7 \u00b7 , n} from the conditional distribution estimators { \u02c6 F b n(\u00b7|Zi), \u02c6 Gn(\u00b7|Zi) : i = 1, \u00b7 \u00b7 \u00b7 , n}, respectively. Then we obtain a bootstrap sample {( \u02dc T \u2217 n,i, \u03b4\u2217 n,i, Zi) : i = 1, \u00b7 \u00b7 \u00b7 , n}, where \u02dc T \u2217 n,i = T \u2217 n,i \u2227C\u2217 n,i and \u03b4\u2217 n,i = 1T \u2217 n,i\u2264C\u2217 n,i. Method 4 (Bootstrap conditional on covariates and censoring) 1. Same as Step 1 in Method 3. 2. For given Z1, \u00b7 \u00b7 \u00b7 , Zn, generate T \u2217 n,i from \u02c6 F b n(\u00b7|Zi). If \u03b4i = 0, let C\u2217 n,i = Ci; otherwise, generate C\u2217 n,i from \u02c6 Gn(\u00b7|Zi) conditioning on C\u2217 n,i > Ti. Methods 1, 3 and 4 are the most widely used bootstrap methods for the Cox regression model. 
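For concreteness, the resampling step of Methods 1 and 2 can be sketched as follows (a minimal Python/NumPy illustration; `estimator` stands for any routine returning the maximum partial likelihood estimate of ζ0 from a sample, and the function and variable names are illustrative).

```python
import numpy as np

def resample_change_point_cox(T_tilde, delta, Z, estimator, n_boot=1000, m=None, seed=0):
    """Method 1 (m=None: resample n triples) and Method 2 (m=m_n<n), applied to the
    observed triples (T_tilde_i, delta_i, Z_i)."""
    rng = np.random.default_rng(seed)
    n = len(T_tilde)
    m = n if m is None else m
    zeta_star = np.empty(n_boot)
    for b in range(n_boot):
        idx = rng.integers(0, n, size=m)        # i.i.d. draws from the empirical distribution
        zeta_star[b] = estimator(T_tilde[idx], delta[idx], Z[idx])
    return zeta_star

# Basic bootstrap CI for zeta_0: approximate the law of n(zeta_hat - zeta_0)
# by that of m(zeta_star - zeta_hat):
#   q_lo, q_hi = np.quantile(m * (zeta_star - zeta_hat), [0.025, 0.975])
#   ci = (zeta_hat - q_hi / n, zeta_hat - q_lo / n)
```

Taking m = n gives the classical bootstrap (Method 1), while m = mn with mn = o(n) gives the m-out-of-n bootstrap (Method 2); the commented lines show the confidence interval that mimics n(ζ̂n − ζ0) by the conditional law of mn(ζ̂∗n − ζ̂n).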
In the following sections, we demonstrate through theoretical derivation and simulation the inconsistency of these methods for constructing CIs for \u03b60. To get a consistent estimate of the distribution of n(\u02c6 \u03b6n \u2212\u03b60), we propose the following smooth bootstrap procedures. Method 5 (Smooth bootstrap conditional on covariates) 1.Choose an appropriate nonparametric smoothing procedure (e.g., kernel estimation method in Wells (1994)) to build an estimator \u02c6 \u03bbn,0 of \u03bb0. The associated estimator of F(t|Z) is \u02c6 F s n(t|Z) = 1 \u2212exp \u0010 \u2212 Z t 0 e\u02c6 \u03b1\u2032 nZ(s)1s\u2264\u02c6 \u03b6n+\u02c6 \u03b2\u2032 nZ(s)1s>\u02c6 \u03b6n \u02c6 \u03bbn,0(s)ds \u0011 . (6) 2. For given Z1, \u00b7 \u00b7 \u00b7 , Zn, generate i.i.d. replicates {T \u2217 n,i, C\u2217 n,i : i = 1, \u00b7 \u00b7 \u00b7 , n} from the conditional distribution estimatiors { \u02c6 F s n(\u00b7|Zi), \u02c6 Gn(\u00b7|Zi) : i = 1, \u00b7 \u00b7 \u00b7 , n}, respectively. Then we obtain a bootstrap sample { \u02dc T \u2217 n,i, \u03b4\u2217 n,i, Zi : i = 1, \u00b7 \u00b7 \u00b7 , n}. 4 Method 6 (Smooth bootstrap conditional on covariates and censoring) 1. Same as Step 1 in Method 5. 2. For given Z1, \u00b7 \u00b7 \u00b7 , Zn, generate T \u2217 n,i from \u02c6 F s n(\u00b7|Zi). If \u03b4i = 0, let C\u2217 n,i = Ci; otherwise, generate C\u2217 n,i from \u02c6 Gn(\u00b7|Zi) conditioning on C\u2217 n,i > Ti. We will use a general convergence result established in Section 3 to prove that the smooth bootstrap procedures (Methods 5 and 6) and the m-out-of-n procedure (Method 2) are consistent. We will also illustrate through a simulation study that the smooth bootstrap methods outperform the m-out-of-n method. 3 A general convergence result In this section we prove a general convergence theorem for triangular arrays of random variables in the non-regular Cox proportional hazard model with a change-point in time. This theorem will be applied to show the consistency of the bootstrap procedures introduced in the previous section. We \ufb01rst introduce some notation. Let P be a distribution satisfying the change-point Cox model (2) for some parameter \u03b80 := (\u03b1\u2032 0, \u03b2\u2032 0, \u03b60)\u2032 \u2208\u0398 := \u0398\u03b1 \u00d7 \u0398\u03b2 \u00d7 [0, \u03c4]. Consider a triangular array of independent random samples {( \u02dc Tn,i, \u03b4n,i, Zn,i) : i = 1, \u00b7 \u00b7 \u00b7 , mn} de\ufb01ned on a probability space (\u2126, A, P), where \u02dc Tn,i = Tn,i \u2227Cn,i, \u03b4n,i = 1Tn,i\u2264Cn,i, and mn \u2192\u221eas n \u2192\u221e. We use E to denote the expectation operator with respect to P. Furthermore, we assume that {( \u02dc Tn,i, \u03b4n,i, Zn,i) : i = 1, \u00b7 \u00b7 \u00b7 , mn} jointly follows a distribution Qn, and for each i, the distribution of ( \u02dc Tn,i, \u03b4n,i, Zn,i) is Qn,i. As in Section 2, we assume that under Qn, the covariate process Z(t) is c\u00b4 agl\u00b4 ad and has bounded total variation on [0, \u03c4]. We write Z\u22970 = 1, Z\u22971 = Z, and Z\u22972 = ZZ\u2032. For the ith subject, let Yn,i(t) = 1 \u02dc Tn,i\u2265t and Nn,i(t) = 1 \u02dc Tn,i\u2264t,\u03b4n,i=1. 
For \u03b3 \u2208Rp and k = 0, 1 and 2, let Sn,k(t; \u03b3) = 1 mn mn X i=1 Yn,i(t)Z\u2297k n,i (t) exp(\u03b3\u2032Zn,i(t)), sn,k(t; \u03b3) = Qn \u0010 1 mn mn X i=1 Yn,i(t)Z\u2297k n,i (t) exp(\u03b3\u2032Zn,i(t)) \u0011 , sk(t; \u03b3) = P \u0010 Y (t)Z\u2297k(t) exp(\u03b3\u2032Z) \u0011 , An,k(t) = Qn \u0010 1 mn mn X i=1 Z t 0 Z\u2297k n,i (s)dNn,i(s) \u0011 , Ak(t) = P \u0010 Z t 0 Z\u2297k(s)dN(s) \u0011 = Z t 0 sk(s; \u03b101s\u2264\u03b60 + \u03b201s>\u03b60)\u03bb0(s)ds, where we use Qn(\u00b7) and P(\u00b7) to denote the expectation operators under the distributions Qn and P, respectively. We write \u00af Zn(t; \u03b3) = Sn,1(t; \u03b3) Sn,0(t; \u03b3), \u00af zn(t; \u03b3) = sn,1(t; \u03b3) sn,0(t; \u03b3), \u00af z(t; \u03b3) = s1(t; \u03b3) s0(t; \u03b3). Further we denote the ratio between Sn,0(t; \u03b31) and Sn,0(t; \u03b32) by Rn(t; \u03b31, \u03b32) = Sn,0(t; \u03b31) Sn,0(t; \u03b32). 5 Similarly we write rn(t; \u03b31, \u03b32) = sn,0(t; \u03b31) sn,0(t; \u03b32), r(t; \u03b31, \u03b32) = s0(t; \u03b31) s0(t; \u03b32). Using the above notation, for \u03b8 = (\u03b1\u2032, \u03b2\u2032, \u03b6)\u2032, the log partial likelihood function of {( \u02dc Tn,i, \u03b4n,i, Zn,i) : i = 1, \u00b7 \u00b7 \u00b7 , mn} takes the form l\u2217 n(\u03b8) = mn X i=1 Z \u03c4 0 \u0010 (\u03b11s\u2264\u03b6 + \u03b21s>\u03b6)\u2032Zn,i \u2212log Sn,0(s; \u03b11s\u2264\u03b6 + \u03b21s>\u03b6) \u0011 dNn,i(s). Denote the MPLE of l\u2217 n(\u03b8) by \u03b8\u2217 n = (\u03b1\u2217 n \u2032, \u03b2\u2217 n \u2032, \u03b6\u2217 n)\u2032, i.e., \u03b8\u2217 n := sargmax\u03b8\u2208\u0398l\u2217 n(\u03b8). Let \u03b8n = (\u03b1\u2032 n, \u03b2\u2032 n, \u03b6n)\u2032 be given by \u03b8n := sargmax\u03b8\u2208\u0398Qn \u0012 1 mn mn X i=1 Z \u03c4 0 \u0000(\u03b11s\u2264\u03b6 + \u03b21s>\u03b6)\u2032Zn,i \u2212log sn,0(s; \u03b11s\u2264\u03b6 + \u03b21s>\u03b6) \u0001 dNn,i(s) \u0013 . The existence of \u03b8n is guaranteed as the above objective function is concave in \u03b1 and \u03b2 for every \ufb01xed \u03b6 and bounded and c\u00b4 adl\u00b4 ag as a function of \u03b6. When Qn is the ED of a sample generated from model (2), \u03b8n becomes the usual MPLE \u02c6 \u03b8n of ln(\u03b8) as de\ufb01ned in (4). In the following, we derive su\ufb03cient conditions on the distribution Qn that guarantees the weak convergence of (\u221amn(\u03b1\u2217 n \u2212\u03b1n)\u2032, \u221amn(\u03b2\u2217 n \u2212\u03b2n)\u2032, mn(\u03b6\u2217 n \u2212\u03b6n))\u2032. 3.1 Consistency and the rate of convergence We \ufb01rst show the consistency of the MPLE \u03b8\u2217 n of l\u2217 n, whose proof is given in Section 6. We need the following assumption. A1. For k = 0, 1 and 2, as n \u2192\u221e, sup t\u2208[0,\u03c4],\u03b3\u2208\u0398\u03b1\u222a\u0398\u03b2 |sn,k(t; \u03b3) \u2212sk(t; \u03b3)| \u2212 \u21920 and sup t\u2208[0,\u03c4] |An,k(t) \u2212Ak(t)| \u2212 \u21920, where | \u00b7 | denotes the L1 norm. Condition A1 indicates that Qn approaches the distribution satisfying the Cox model (2) in the sense that the di\ufb00erence between expectations of Sn,k (An,k) under distributions Qn and P goes to 0 as n \u2192\u221e. When Qn is the ED of a sample from model (2), the uniform law of large numbers implies A1; see Section 4.1.1 for more details. Theorem 7 Under condition A1, \u03b8n \u2212 \u2192\u03b80 and \u03b8\u2217 n P \u2212 \u2192\u03b80. 
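In practice, the smallest-argmax MPLE can be computed by profiling the partial likelihood over candidate change points; since the objective is a step function in ζ, the candidates can be taken to be the observed uncensored times, possibly restricted to an interior window as in the simulation study of Section 5. The following is a minimal sketch in Python (NumPy and SciPy assumed, time-constant covariates for simplicity); because the objective is concave in (α, β) for each fixed ζ, a generic quasi-Newton solver is used as a stand-in for the inner maximization.

```python
import numpy as np
from scipy.optimize import minimize

def log_partial_lik(params, T_tilde, delta, Z, zeta):
    """Log partial likelihood (3) for a fixed change point zeta, with
    time-constant covariates; params stacks (alpha, beta)."""
    p = Z.shape[1]
    alpha, beta = params[:p], params[p:]
    ll = 0.0
    for i in np.flatnonzero(delta == 1):
        gamma = alpha if T_tilde[i] <= zeta else beta     # regime at the event time
        at_risk = T_tilde >= T_tilde[i]                   # Y_j(T_i) = 1{T_tilde_j >= T_i}
        ll += Z[i] @ gamma - np.log(np.sum(np.exp(Z[at_risk] @ gamma)))
    return ll

def mple(T_tilde, delta, Z, zeta_grid):
    """Profile over candidate change points and return the smallest argmax, as in (4)."""
    p = Z.shape[1]
    best_val, best_par, best_zeta = -np.inf, None, None
    for zeta in np.sort(zeta_grid):                       # increasing zeta: keep the first maximizer
        res = minimize(lambda par: -log_partial_lik(par, T_tilde, delta, Z, zeta),
                       x0=np.zeros(2 * p), method="BFGS")
        if -res.fun > best_val + 1e-10:
            best_val, best_par, best_zeta = -res.fun, res.x, zeta
    return best_par[:p], best_par[p:], best_zeta
```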
We consider the rate of convergence of \u03b8\u2217 n and show that the estimators of the \u201cregular\u201d parameters, \u03b1\u2217 n and \u03b2\u2217 n, converge at a rate of mn\u22121/2 while the change-point \u03b6\u2217 n converges at rate m\u22121 n . To guarantee the right rate of convergence, we need the following condition. 6 A2. There exist positive constants \u03c11 and \u03c12 such that, for any sequence {hn} satisfying hn \u2192\u221e and hn/mn \u21920 as n \u2192\u221e, the following holds: lim n\u2192\u221esup |h|> hn mn 1 h \f \f \f \f Z \u03b6n+h \u03b6n dAn,1(s) \u2212 Z \u03b6n+h \u03b6n \u00af zn(s; \u03b1n1s\u2264\u03b6n + \u03b2n1s>\u03b6n)dAn,0(s) \f \f \f \f = 0, and \u03c11 < lim n\u2192\u221e inf |h|> hn mn 1 h |An,0(\u03b6n + h) \u2212An,0(\u03b6n)| \u2264lim n\u2192\u221esup |h|> hn mn 1 h |An,0(\u03b6n + h) \u2212An,0(\u03b6n)| < \u03c12. Note that condition A2 holds if under Qn the survival time T has uniformly bounded baseline hazard rate function \u03bbn,0 in some neighborhood of \u03b6n. In this case, An,0 has right derivative sn,0(\u03b6n; \u03b2n)\u03bbn,0(\u03b6n) at \u03b6+ n and left derivative sn,0(\u03b6n; \u03b1n)\u03bbn,0(\u03b6n) at \u03b6\u2212 n , which implies A2. Theorem 8 Under conditions A1 and A2, \f \f\u0000\u221amn(\u03b1\u2217 n \u2212\u03b1n)\u2032, \u221amn(\u03b2\u2217 n \u2212\u03b2n)\u2032, mn(\u03b6\u2217 n \u2212\u03b6n) \u0001\u2032\f \f = OP(1). 3.2 Asymptotic distribution To compute the asymptotic distribution of \u03b8\u2217 n, we need the following assumption. A3. For any t \u2208R, h1 < h2 and 0 / \u2208(h1, h2), Qn \u0012 mn X k=1 Z \u03b6n+h2/mn \u03b6n+h1/mn e\u0131t(\u03b1n\u2212\u03b2n)\u2032Zn,k(s)dNn,k(s) \u0013 \u2192s0 (\u03b60; \u03b30 + \u0131t(\u03b10 \u2212\u03b20)) \u03bb0(\u03b60)(h2 \u2212h1), where \u0131 is the imaginary unit and \u03b30 = \u03b101h2\u22640 + \u03b2010\u2264h1. Condition A3 holds if under Qn the survival time T has uniformly bounded baseline hazard rate function \u03bbn,0 converging uniformly to \u03bb0 in some neighborhood of \u03b60. This is satis\ufb01ed by the smooth bootstrap methods introduced in Section 2.1 and therefore guarantees their consistency; see Section 4.2.1 for more details. We write Xn(\u03b8) = mn\u22121(l\u2217 n(\u03b8) \u2212l\u2217 n(\u03b8n)). Note that \u03b8\u2217 n is also the maximizer of Xn. For h := (h\u2032 \u03b1, h\u2032 \u03b2, h\u03b6)\u2032 \u2208Rp \u00d7 Rp \u00d7 R, consider the multiparameter process U \u2217 n(h) := mnXn \u0010 \u03b1n + h\u03b1 \u221amn , \u03b2n + h\u03b2 \u221amn , \u03b6n + h\u03b6 mn \u0011 and observe that (\u221amn(\u03b1\u2217 n \u2212\u03b1n)\u2032, \u221amn(\u03b2\u2217 n \u2212\u03b2n)\u2032, mn(\u03b6\u2217 n \u2212\u03b6n))\u2032 = sargmaxh\u2208R2p+1U \u2217 n(h). (7) We proceed to describe the limit law of the process U \u2217 n. Let \u0393\u2212and \u0393+ be two homogenous Poisson processes with intensities \u03b3\u2212= s0(\u03b60; \u03b10)\u03bb0(\u03b60) and \u03b3+ = s0(\u03b60; \u03b20)\u03bb0(\u03b60), 7 respectively. De\ufb01ne two sequences of i.i.d. random variables v\u2212= (v\u2212 i )\u221e i=1 and v+ = (v+ i )\u221e i=1 such that v\u2212 i and v+ i follow distributions: P(v\u2212 i \u2264z) = s0(\u03b60; \u03b10)\u22121E h 1Z(\u03b60)\u2264zY (\u03b60)e\u03b1\u2032 0Z(\u03b60)i and P(v+ i \u2264z) = s0(\u03b60; \u03b20)\u22121E h 1Z(\u03b60)\u2264zY (\u03b60)e\u03b2\u2032 0Z(\u03b60)i . 
Additionally, take two Gaussian Rp-valued random vectors U1 \u223cN \u0012 0, Z \u03b60 0 Q(s; \u03b10)s0(s; \u03b10)\u03bb0(s)ds \u0013 , U6 \u223cN \u0012 0, Z \u03c4 \u03b60 Q(s; \u03b20)s0(s; \u03b20)\u03bb0(s)ds \u0013 , where for s \u2208R and \u03b3 \u2208Rp, Q(s; \u03b3) is de\ufb01ned as Q(s; \u03b3) = s2(s; \u03b3) s0(s; \u03b3) \u2212\u00af z(s; \u03b3)\u22972. (8) Suppose that \u0393\u2212,\u0393+, v\u2212, v+, U1 and U6 are all independent. For h\u03b6 \u2208R, de\ufb01ne the vectorvalued process (U2, U3, U4, U5) as U2(h\u03b6) := 1h\u03b6<0 \u0010 \u0393\u2212(\u2212h\u03b6) log r(\u03b60; \u03b10, \u03b20) + X 1\u2264i\u2264\u0393\u2212(\u2212h\u03b6) (\u03b20 \u2212\u03b10)v\u2212 i \u0011 , U3(h\u03b6) := 1h\u03b6<0\u0393\u2212(\u2212h\u03b6), U4(h\u03b6) := 1h\u03b6>0 \u0010 \u0393+(h\u03b6) log r(\u03b60; \u03b20, \u03b10) + X 1\u2264i\u2264\u0393+(h\u03b6) (\u03b10 \u2212\u03b20)v+ i \u0011 , U5(h\u03b6) := 1h\u03b6>0\u0393+(h\u03b6). Furthermore, de\ufb01ne processes J(h\u03b6) := U3(h\u03b6) + U5(h\u03b6) and U(h\u03b1, h\u03b2, h\u03b6) := h\u2032 \u03b1U1 \u22121 2h\u2032 \u03b1 \u0012 Z \u03b60 0 Q(s; \u03b10)s0(s; \u03b10)\u03bb0(s)ds \u0013 h\u03b1 + U2(h\u03b6) +h\u2032 \u03b2U6 \u22121 2h\u2032 \u03b2 \u0012 Z \u03c4 \u03b60 Q(s; \u03b20)s0(s; \u03b20)\u03bb0(s)ds \u0013 h\u03b2 + U4(h\u03b6). Observe that J is the sequence of jumps of U. Our goal is to show that the asymptotic distribution of the MPLE is exactly that of the smallest argmax of U. Before doing this, we state the following result about the smallest argmax of U. Lemma 9 Let \u03c6 = (\u03c6\u2032 \u03b1, \u03c6\u2032 \u03b2, \u03c6\u03b6)\u2032 = sargmaxh\u2208R2p+1U(h) with \u03c6\u03b1, \u03c6\u03b2 and \u03c6\u03b6 corresponding to the \ufb01rst p, the second p and the last component of \u03c6, respectively. Then \u03c6 is well-de\ufb01ned. Moreover, \u03c6\u03b1, \u03c6\u03b2 and \u03c6\u03b6 are mutually independent and \u03c6\u03b1 \u223c N \u0012 0, \u0010 Z \u03b60 0 Q(s; \u03b10)s0(s; \u03b10)\u03bb0(s)ds \u0011\u22121\u0013 , (9) \u03c6\u03b2 \u223c N \u0012 0, \u0010 Z \u03c4 \u03b60 Q(s; \u03b20)s0(s; \u03b20)\u03bb0(s)ds \u0011\u22121\u0013 . (10) 8 We are now in a position to give our main result. To state the result, we need to introduce some further notation. For any given compact set K \u2282Rd, d \u2208N, we de\ufb01ne the space DK as the Skorohod space of functions f : K \u2192R having \u201cquadrant limits\u201d and continuous from above; see Neuhaus (1971) and Seijo and Sen (2011a) for more information about this space. Further, we take DK as a metric space endowed with the Skorohod metric, which ensures the existence of conditional probability distributions for its random elements; see Neuhaus (1971) and Theorem 10.2.2 of Dudley (2002). Theorem 10 Under conditions A1-A3, for a compact rectangle \u0398 \u2282R2p+1, U \u2217 n converges weakly in the Skorohod topology to U in D\u0398. Moreover, \uf8eb \uf8ed \u221amn(\u03b1\u2217 n \u2212\u03b1n) \u221amn(\u03b2\u2217 n \u2212\u03b2n) mn(\u03b6\u2217 n \u2212\u03b6n) \uf8f6 \uf8f8\u21ddsargmaxh\u2208R2p+1U(h), where \u21dddenotes weak convergence. We consider the MPLE \u02c6 \u03b8n of ln(\u03b8) as de\ufb01ned in (4). In this case, we can take mn = n, Qn,i = P and \u03b8n = \u03b80. Then conditions A1-A3 automatically hold and we immediately obtain the following corollary from Theorem 10; see also Pons (2002). 
Corollary 11 Under the model setup in Section 2, for the MPLE \u02c6 \u03b8n = (\u02c6 \u03b1\u2032 n, \u02c6 \u03b2\u2032 n, \u02c6 \u03b6n)\u2032, \uf8eb \uf8ed \u221an(\u02c6 \u03b1n \u2212\u03b10) \u221an(\u02c6 \u03b2n \u2212\u03b20) n(\u02c6 \u03b6n \u2212\u03b60) \uf8f6 \uf8f8\u21ddsargmaxh\u2208R2p+1U(h). 4 Large sample properties of the bootstrap procedures In this section we use the results from the previous section to prove the (in)-consistency of di\ufb00erent bootstrap methods introduced in Section 2.1. In Section 4.1, we argue that the classical bootstrap method (Method 1) and the conditional methods (Methods 3 and 4) are inconsistent. In Section 4.2, we prove the consistency of the smooth bootstrap (Methods 5 and 6) and the m-out-of-n bootstrap (Method 2). Recall the notation and de\ufb01nitions in the beginning of Section 2. In particular, note that we have i.i.d. random vectors ( \u02dc Ti, \u03b4i, Zi)\u221e i=1 from (2). Let X be the \u03c3-algebra generated by the sequence ( \u02dc Ti, \u03b4i, Zi)\u221e i=1. For a metric space (X, d), consider X-valued random elements (Vn)\u221e n=1 and V de\ufb01ned on the probability space (\u2126, A, P). We say that Vn converges conditionally in probability to V , in probability, if for any given \u03f5 > 0 P(d(Vn, V ) > \u03f5 | X) P \u2212 \u21920, and we write Vn PX \u2212 \u2212 \u2192 P V. 9 4.1 Inconsistent bootstrap methods 4.1.1 Classical bootstrap Consider the classical bootstrap Method 1 introduced in Section 2.1. We set mn = n and Qn,j, j = 1, \u00b7 \u00b7 \u00b7 , n, to be the ED of the data {( \u02dc Ti, \u03b4i, Zi) : i = 1, \u00b7 \u00b7 \u00b7 , n}. This implies that for k = 0, 1, 2, sn,k(t; \u03b3) = Qn \u0012 1 mn mn X i=1 Yn,i(t)Z\u2297k n,i (t) exp(\u03b3\u2032Zn,i(t)) \u0013 = 1 n n X i=1 Yi(t)Z\u2297k i (t) exp(\u03b3\u2032Zi(t)), (11) An,k(t) = Qn \u0012 1 mn mn X i=1 Z t 0 Z\u2297k n,i (s)dNn,i(s) \u0013 = 1 n n X i=1 Z t 0 Z\u2297k i (s)dNi(s). (12) Therefore, \u03b8n = \u02c6 \u03b8n and condition A1 holds. Apply Theorem 7 and we have that the bootstrap estimator \u03b8\u2217 n converges conditionally in probability to the true value \u03b80, in probability. Proposition 12 For Method 1, \u03b8\u2217 n PX \u2212 \u2212 \u2192 P \u03b80. As for the weak convergence, we show in Lemma 13 that condition A3 does not hold. Hence, Theorem 10 is not applicable in this case. Lemma 13 For Method 1, there is h0 > 0 such that for any h > h0, the sequences \u001a n X i=1 Z \u02c6 \u03b6n+ h n \u02c6 \u03b6n dNi (s) \u001b\u221e n=1 and \u001a n X i=1 Z \u02c6 \u03b6n \u02c6 \u03b6n\u2212h n dNi (s) \u001b\u221e n=1 (13) do not converge in probability. Furthermore, \u001a n X i=1 Z \u02c6 \u03b6n+h/n \u02c6 \u03b6n \u03c6i(s)dNi(s) \u001b\u221e n=1 and \u001a n X i=1 Z \u02c6 \u03b6n \u02c6 \u03b6n\u2212h n \u03c6i(s)dNi (s) \u001b\u221e n=1 , (14) where \u03c6i(s) := e\u0131t((\u02c6 \u03b1n\u2212\u02c6 \u03b2n)\u2032Zi(s)\u2212log rn(s;\u02c6 \u03b1n,\u02c6 \u03b2n)) \u22121, do not converge in probability. The following theorem shows that, conditional on the data, (U \u2217 n)\u221e n=1 does not have any weak limit in probability. Consider the Skorohod space D\u0398 with compact set \u0398 \u2282R2p+1. 
We say that (U \u2217 n)\u221e n=1 has no weak limit in probability in D\u0398 if there is no probability measure \u00b5 de\ufb01ned on D\u0398 such that \u03c1(\u00b5n, \u00b5) P \u2212 \u21920, where \u00b5n is the conditional distribution of U \u2217 n given X, and \u03c1 is a metric metrizing weak convergence on D\u0398. Theorem 14 There is a compact set \u0398 \u2208R2p+1 such that, conditional on the data, U \u2217 n does not have a weak limit in probability in D\u0398. 10 Proof of Theorem 14. It su\ufb03ces to show that there is some h > 0 such that, conditional on the data, U \u2217 n(0, 0, h) does not have a weak limit in probability. In this case, U \u2217 n(0, 0, h) = n X i=1 Z \u02c6 \u03b6n+h/n \u02c6 \u03b6n \u0000(\u02c6 \u03b1n \u2212\u02c6 \u03b2n)\u2032Zn,i(s) \u2212log Rn(s; \u02c6 \u03b1n, \u02c6 \u03b2n) \u0001 dNn,i(s). Consider the conditional characteristic function of U \u2217 n(0, 0, h) given X. A similar argument as in the proof of Lemma 20 implies that E[e\u0131tU\u2217 n(0,0,h)|X] = (1 + oP(1)) exp \u0010 n X i=1 Z \u02c6 \u03b6n+h/n \u02c6 \u03b6n \u03c6i(s)dNi(s) \u0011 , where \u03c6i(s) is de\ufb01ned as in Lemma 13. Then Lemma 13 implies the desired conclusion. The result that U \u2217 n does not have any weak limit in probability makes the existence of a weak limit for n(\u03b6\u2217 n \u2212\u02c6 \u03b6n) very unlikely; see (7). But a complete proof of the non-existence may be complicated due to the non-linearity of the smallest argmax functional. For this reason, theoretically we do not pursue this problem any further, and we will use simulation results to illustrate the inconsistency in Section 5. 4.1.2 Conditional bootstrap For Methods 3 and 4 in Section 2.1, we consider that mn = n, Qn,i(Zn,i = Zi) = 1, and the cumulative hazard function of T takes the form \u039bn,0 = \u02c6 \u039bb n,0, where \u02c6 \u039bb n,0 is the Breslow estimator as de\ufb01ned in Method 3. Therefore, for k = 0, 1, 2, sn,k(t; \u03b3) = 1 n n X i=1 Qn,i(Yn,i(t)|Zi)Z\u2297k i (t) exp(\u03b3\u2032Zi(t)) and An,k(t) = Z t 0 sn,k(s; \u02c6 \u03b1n1s\u2264\u02c6 \u03b6n + \u02c6 \u03b2n1s>\u02c6 \u03b6n)d\u02c6 \u039bb n,0(s). Thus \u03b8n = \u02c6 \u03b8n. A uniformly consistent estimator of G is usually needed in conditional bootstrap methods for the Cox model. We assume that sup t\u2208[0,\u03c4],z\u2208V | \u02c6 Gn(t|z) \u2212G(t|z)| P \u2212 \u21920, (15) where V is the set of all possible sample paths of covariate Z. Note that \u02c6 Gn can be taken as the Kaplan-Meier estimator when Ci\u2019s are i.i.d. or Z\u2032 is are time-independent and categorical; see also Beran (1981) for a class of nonparametric estimates of the conditional distribution. For more general time-dependent covariates Z, it is hard, if not impossible, to obtain a consistent estimator of G without further model assumption. In the literature, a common approach is to assume that the censoring time follows the Cox model (1), in which case a consistent estimator of G can be constructed based on the usual Breslow estimator (Cox and Oakes, 1984). In this paper we assume that (15) holds and do not go into the problem of estimating G any further. Under the setup in Section 2, it is known that supt\u2208[0,\u03c4] |\u039bn,0(t) \u2212\u039b0(t)| P \u2212 \u21920 (Andersen et al., 1993). Together with (15), this implies condition A1. Apply Theorem 7 and we have the following convergence result. 11 Proposition 15 For Methods 3 and 4, if (15) holds, then \u03b8\u2217 n PX \u2212 \u2212 \u2192 P \u03b80. 
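To illustrate how a bootstrap sample is generated under Method 3, the following minimal sketch (Python/NumPy, time-constant covariates; the function names are illustrative) computes the jumps of the Breslow estimator at the distinct uncensored times and then draws T* from the fitted conditional distribution in (5) by inverting the resulting step-function cumulative hazard; the censoring draw from Ĝn(·|Zi), for instance a group-wise Kaplan-Meier estimator as in Section 5, is assumed to be available and is omitted.

```python
import numpy as np

def breslow_increments(T_tilde, delta, Z, alpha, beta, zeta):
    """Jumps of the Breslow estimator of the cumulative baseline hazard at the
    distinct uncensored times (time-constant covariates)."""
    times = np.unique(T_tilde[delta == 1])
    dLam = np.empty(times.size)
    for k, t in enumerate(times):
        gamma = alpha if t <= zeta else beta
        risk = T_tilde >= t                               # at-risk set at time t
        d_t = np.sum((T_tilde == t) & (delta == 1))       # number of events at t
        dLam[k] = d_t / np.sum(np.exp(Z[risk] @ gamma))
    return times, dLam

def sample_T_star(Zi, times, dLam, alpha, beta, zeta, rng):
    """Method 3: draw T* from the estimator F^b_n(.|Z_i) in (5) by inverting the
    step-function cumulative hazard implied by the Breslow estimator."""
    lin = np.where(times <= zeta, Zi @ alpha, Zi @ beta)  # regime-specific linear predictor
    H = np.cumsum(np.exp(lin) * dLam)                     # subject-specific cumulative hazard
    e = rng.exponential(1.0)                              # since P(T* > t) = exp(-H(t))
    hit = np.flatnonzero(H >= e)
    return times[hit[0]] if hit.size else np.inf          # mass beyond the last jump: no event
```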
As with Method 1, we argue that Methods 3 and 4 are also inconsistent. We start with the following lemma. Lemma 16 For Methods 3 and 4, there is h0 > 0 such that for any h > h0, the sequences \u001a n \u0012 \u02c6 \u039bb n,0 \u0010 \u02c6 \u03b6n + h n \u0011 \u2212\u02c6 \u039bb n,0 \u0000\u02c6 \u03b6n \u0001\u0013\u001b\u221e n=1 and \u001a n \u0012 \u02c6 \u039bb n,0 \u0000\u02c6 \u03b6n \u0001 \u2212\u02c6 \u039bb n,0 \u0010 \u02c6 \u03b6n \u2212h n \u0011\u0013\u001b\u221e n=1 do not converge in probability. Proof of Lemma 16. We only need to show that the \ufb01rst sequence does not converge in probability. For h > 0, n \u0012 \u02c6 \u039bb n,0 \u0010 \u02c6 \u03b6n + h n \u0011 \u2212\u02c6 \u039bb n,0 \u0000\u02c6 \u03b6n \u0001\u0013 = n Z \u02c6 \u03b6n+ h n \u02c6 \u03b6n \u0010 n X j=1 Yj(s)e\u02c6 \u03b1\u2032 nZj(s)1s\u2264\u02c6 \u03b6n+\u02c6 \u03b2\u2032 nZj(s)1s>\u02c6 \u03b6n \u0011\u22121 d \u0010 n X i=1 Ni(s) \u0011 = (1 + oP(1)) Z \u02c6 \u03b6n+ h n \u02c6 \u03b6n s0(s; \u03b20)\u22121d \u0010 n X i=1 Ni(s) \u0011 = (1 + oP(1))s0(\u03b60; \u03b20)\u22121 n X i=1 Z \u02c6 \u03b6n+ h n \u02c6 \u03b6n dNi(s). Thus, it su\ufb03ces to show that Pn i=1 R \u02c6 \u03b6n+ h n \u02c6 \u03b6n dNi(s) does not converge in probability. Apply Lemma 13 and we have the desired conclusion. Based on Lemma 16, we further show that, conditional on the data, the sequence {U \u2217 n}\u221e n=1 does not have a weak limit in probability. Theorem 17 There is a compact set \u0398 \u2208R2p+1 such that, conditional on the data, U \u2217 n does not have a weak limit in probability in D\u0398. Proof of Theorem 17. For h > 0, consider the conditional characteristic function of U \u2217 n(0, 0, h) given X. A similar argument as in the proof of Lemma 20 implies that E[e\u0131tU\u2217 n(0,0,h)|X] = (1 + oP(1)) exp \u001a n \u0010 \u02c6 \u039bb n,0 \u0010 \u02c6 \u03b6n + h n \u0011 \u2212\u02c6 \u039bb n,0 \u0010 \u02c6 \u03b6n \u0011 \u0011 \u00d7 \u0010 e\u2212log r(\u03b60;\u03b10,\u03b20) s0(\u03b60; \u0131t(\u03b10 \u2212\u03b20) + \u03b20) \u2212s0(\u03b60; \u03b20) \u0011\u001b . Hence, Lemma 16 implies the desired conclusion. 4.2 Consistent bootstrap methods In this section we show that the smooth bootstrap (Methods 5 and 6) and the m-out-of-n bootstrap (Method 2) are consistent for constructing CIs for \u03b60. The results from Section 3 can be directly applied to derive su\ufb03cient conditions on the distribution from which the bootstrap samples are generated. Let \u02c6 Qn be a distribution constructed from the data {( \u02dc Ti, \u03b4i, Zi) : i = 1, \u00b7 \u00b7 \u00b7 , n}. If conditions A1-A3 hold with Qn = \u02c6 Qn, then the weak convergence of the bootstrap estimate follows from Theorem 10 applied conditionally given the data. 12 4.2.1 Smooth bootstrap Consider Methods 5 and 6. To prove the consistence, thanks to Theorem 10, we only need to show conditions A1-A3 hold conditionally on the data with mn = n and Qn the distribution of the bootstrap sample. Recall that \u02c6 \u03bbn,0(\u00b7) and \u02c6 G(\u00b7|Z) are the estimated smooth baseline hazard rate function of T and the conditional distribution of C given Z, respectively. In addition to (15), we need the following convergence result: sup t\u2208[0,\u03c4] |\u02c6 \u03bbn,0(t) \u2212\u03bb0(t)| P \u2212 \u21920. (16) Note that (16) is ful\ufb01lled if \u02c6 \u03bbn,0 is the usual kernel estimator (Wells, 1994). 
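A simple way to obtain a baseline hazard estimate of the kind required by (16), and to carry out the sampling step of Method 5, is sketched below (Python/NumPy, time-constant covariates, names illustrative): the Breslow increments from the previous sketch are smoothed with a Gaussian kernel, and T* is then drawn by inverting the resulting continuous cumulative hazard on a grid. The bandwidth can be chosen, for example, by a normal-reference rule, as in the simulation study of Section 5.

```python
import numpy as np

def kernel_baseline_hazard(times, dLam, grid, bandwidth):
    """Gaussian-kernel smoothing of the Breslow increments, one standard way to
    build a smooth estimate of the baseline hazard rate."""
    u = (grid[:, None] - times[None, :]) / bandwidth
    K = np.exp(-0.5 * u ** 2) / np.sqrt(2.0 * np.pi)
    return (K @ dLam) / bandwidth                         # hazard estimate evaluated on the grid

def sample_T_star_smooth(Zi, grid, lam0_hat, alpha, beta, zeta, rng):
    """Method 5: draw T* from the estimator F^s_n(.|Z_i) in (6) by inverting the
    smoothed cumulative hazard, approximated on a fine grid over [0, tau]."""
    lin = np.where(grid <= zeta, Zi @ alpha, Zi @ beta)   # regime-specific linear predictor
    dH = np.exp(lin) * lam0_hat * np.diff(grid, prepend=grid[0])
    H = np.cumsum(dH)                                     # approximate cumulative hazard
    e = rng.exponential(1.0)
    hit = np.flatnonzero(H >= e)
    return grid[hit[0]] if hit.size else np.inf           # survives past the grid; censored later
```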
Similarly as in Section 4.1.2, we have \u03b8n = \u02c6 \u03b8n, sn,k(t; \u03b3) P \u2212 \u2192sk(t; \u03b3), and An,k(t) P \u2212 \u2192Ak(t). In addition, Z \u02c6 \u03b6n+h \u02c6 \u03b6n dAn,1(s) \u2212 Z \u02c6 \u03b6n+h \u02c6 \u03b6n \u00af zn(s; \u02c6 \u03b1n1s\u2264\u02c6 \u03b6n + \u02c6 \u03b2n1s>\u02c6 \u03b6n)dAn,0(s) = 0, and for any t \u2208R, h1 < h2 and 0 / \u2208(h1, h2), Qn \u0012 n X i=1 Z \u02c6 \u03b6n+h2/n \u02c6 \u03b6n+h1/n e\u0131t(\u02c6 \u03b1n\u2212\u02c6 \u03b2n)\u2032Zn,k(s)dNn,i(s) \u0013 = n Z \u02c6 \u03b6n+h2/n \u02c6 \u03b6n+h1/n s0 \u0000s; \u03b3n + \u0131t(\u02c6 \u03b1n \u2212\u02c6 \u03b2n) \u0001\u02c6 \u03bb0(s)ds P \u2212 \u2192 s0 \u0000\u03b60; \u03b30 + \u0131t(\u03b10 \u2212\u03b20) \u0001 \u03bb0(\u03b60)(h2 \u2212h1), where \u03b3n = \u03b1n1h2\u22640 + \u03b2n10\u2264h1 and \u03b30 = \u03b101h2\u22640 + \u03b2010\u2264h1. Therefore, A1-A3 hold and Theorems 7 and 10 give the weak consistency result. Proposition 18 For Methods 5 and 6, if (15) and (16) hold, then \u03b8\u2217 n PX \u2212 \u2212 \u2192 P \u03b80 and conditional on the data, \uf8eb \uf8ed \u221an(\u03b1\u2217 n \u2212\u02c6 \u03b1n) \u221an(\u03b2\u2217 n \u2212\u02c6 \u03b2n) n(\u03b6\u2217 n \u2212\u02c6 \u03b6n) \uf8f6 \uf8f8\u21ddsargmaxh\u2208R2p+1U(h). (17) 4.2.2 m-out-of-n bootstrap Consider the m-out-of-n bootstrap (Method 2). We will again use the consistency results established in Section 3. We set mn \u2192\u221eand mn/n \u21920 as n \u2192\u221e. Similar to the classical bootstrap, Qn,j, j = 1, \u00b7 \u00b7 \u00b7 , n, is the ED of the data {( \u02dc Ti, \u03b4i, Zi) : i = 1, \u00b7 \u00b7 \u00b7 , n}, and (11) and (12) hold. Therefore \u03b8n = \u02c6 \u03b8n and condition A1 holds. Consider the \ufb01rst equation in A2 and we have sup |h|>hn/mn 1 h \f \f \f \f Z \u02c6 \u03b6n+h \u02c6 \u03b6n dAn,1(s) \u2212 Z \u02c6 \u03b6n+h \u02c6 \u03b6n \u00af zn(s; \u02c6 \u03b1n1s\u2264\u02c6 \u03b6n + \u02c6 \u03b2n1s>\u02c6 \u03b6n)dAn,0(s) \f \f \f \f = sup |h|>hn/mn 1 nh \f \f \f \f n X i=1 Z \u02c6 \u03b6n+h \u02c6 \u03b6n \u0002 Zi \u2212\u00af zn(s; \u02c6 \u03b1n1s\u2264\u02c6 \u03b6n + \u02c6 \u03b2n1s>\u02c6 \u03b6n) \u0003 dNi(s) \f \f \f \f P \u2212 \u2192 0. 13 As to condition A3, for any t \u2208R, h1 < h2 and 0 / \u2208(h1, h2), Qn \u0012 mn X i=1 Z \u02c6 \u03b6n+h2/mn \u02c6 \u03b6n+h1/mn e\u0131t(\u02c6 \u03b1n\u2212\u02c6 \u03b2n)\u2032Zn,i(s)dNn,i(s) \u0013 = mn n n X i=1 Z \u02c6 \u03b6n+h2/mn \u02c6 \u03b6n+h1/mn e\u0131t(\u02c6 \u03b1n\u2212\u02c6 \u03b2n)\u2032Zi(s)dNi(s) P \u2212 \u2192 s0 (\u03b60; \u03b30 + \u0131t(\u03b10 \u2212\u03b20)) \u03bb0(\u03b60)(h2 \u2212h1), where \u03b30 = \u03b101h2\u22640 + \u03b2010\u2264h1. Therefore, A1-A3 hold and we have the following proposition. Proposition 19 For the m-out-of-n bootstrap method, if mn \u2192\u221eand mn/n \u21920 as n \u2192\u221e, then \u03b8\u2217 n PX \u2212 \u2212 \u2192 P \u03b80 and conditional on the data \uf8eb \uf8ed \u221amn(\u03b1\u2217 n \u2212\u02c6 \u03b1n) \u221amn(\u03b2\u2217 n \u2212\u02c6 \u03b2n) mn(\u03b6\u2217 n \u2212\u02c6 \u03b6n) \uf8f6 \uf8f8\u21ddsargmaxh\u2208R2p+1U(h). 5 Simulation In this section we compare the \ufb01nite sample performance of the di\ufb00erent bootstrap schemes introduced in Section 2.1. We consider a single covariate Z which has a Bernoulli distribution with parameter 0.5. That is, a subject is equally likely to be assigned to the control group (Z = 0) and the treatment group (Z = 1). 
The model parameter values are set at $\alpha_0 = 0$, $\beta_0 = -1.5$, and $\zeta_0 = 1$. The baseline hazard rate is assumed constant and taken as $\lambda_0(t) = 0.5$. Note that, at $\zeta_0 = 1$, the cumulative mortality for the control group is $1 - \exp(-0.5) \approx 39\%$. The censoring times are chosen to be independent and follow an exponential distribution with rate parameter 0.1, truncated at $\tau = 4$. This results in a censoring rate of about 36%. Figure 1 gives the Kaplan-Meier curves of a simulated sample of size n = 1000, which clearly show the lag feature around the change-point time $\zeta_0 = 1$. We consider 1000 random samples of sample sizes n = 200, 500, 1000. For each simulated sample and for each bootstrap method, 1000 bootstrap replicates are generated to approximate the bootstrap distribution. The conditional censoring distribution estimator $\hat{G}(\cdot|Z)$ is taken as the Kaplan-Meier estimator within each group (Z = 0 and Z = 1). For the smooth bootstrap, we use a kernel density estimator based on the Gaussian kernel and choose the bandwidth by the so-called "normal-reference rule" (Scott, 1992). For the m-out-of-n bootstrap, we try three different choices of $m_n$: $n^{4/5}$, $n^{9/10}$ and $n^{14/15}$. To reduce the computational complexity, we restrict $\zeta \in [0.5, 1.5]$ when calculating the MPLE of $\zeta_0$.

Table 1 provides the simulation results of coverage proportions and average lengths of nominal 95% CIs for $\zeta_0$ that are estimated using the different bootstrap methods. The first column ("Smooth") gives the results of the smooth bootstrap Method 5, the second column ("Classical") corresponds to the classical bootstrap Method 1, the third column ("Conditional") corresponds to the conditional bootstrap Method 3, and the last three columns correspond to the m-out-of-n bootstrap with different choices of $m_n$. The results of Methods 4 and 6 are similar to those of Method 3 and Method 5, respectively, and therefore are not presented. We can see from Table 1 that the smooth bootstrap outperforms all the others in terms of coverage rate and average length. The m-out-of-n bootstrap also performs reasonably well, but the average length is bigger than that of the smooth bootstrap. This may be due to the fact that the m-out-of-n bootstrap method converges at rate $m_n^{-1}$ instead of $n^{-1}$. Table 1 also shows that the commonly used bootstrap Methods 1 and 3 provide under-coverage, which indicates their inconsistency.

Figure 1: Kaplan-Meier curves of a simulated sample (survival probability against time, for the Z = 1 and Z = 0 groups).

Table 1: The estimated coverage rates and average lengths of nominal 95% CIs for $\zeta_0$.

                   Smooth   Classical   Conditional   m_n = n^{4/5}   n^{9/10}   n^{14/15}
  n = 300  Coverage  0.96      0.88        0.87            0.97         0.93       0.91
           Length    0.64      0.56        0.54            0.80         0.69       0.64
  n = 500  Coverage  0.96      0.89        0.89            0.98         0.96       0.94
           Length    0.46      0.48        0.46            0.77         0.62       0.58
  n = 1000 Coverage  0.95      0.89        0.88            0.99         0.97       0.94
           Length    0.24      0.30        0.29            0.69         0.48       0.42

To further illustrate the performance of the different bootstrap methods, we compare the histograms of the distribution of $n(\hat{\zeta}_n - \zeta_0)$, obtained from 1000 random samples of sample size 1000, with its bootstrap estimates from a single sample. All bootstrap estimates are based on 1000 bootstrap replicates. It is clearly shown in Figure 2 that the smooth bootstrap (top right panel) provides the best approximation to the actual distribution obtained from 1000 random samples (top left panel).
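For reference, the data-generating mechanism of this section can be reproduced with the short sketch below (Python/NumPy; the truncation of the censoring time is read here as administrative censoring at τ = 4, which gives a censoring proportion close to the 36% reported above). Since the hazard is piecewise constant in t, the survival time is obtained in closed form by inverting its cumulative hazard.

```python
import numpy as np

def simulate_change_point_cox(n, alpha0=0.0, beta0=-1.5, zeta0=1.0,
                              lam0=0.5, cens_rate=0.1, tau=4.0, seed=0):
    """One sample from the design of this section: Z ~ Bernoulli(0.5), hazard
    lam0*exp(alpha0*Z) on (0, zeta0] and lam0*exp(beta0*Z) afterwards,
    censoring Exp(cens_rate) capped at tau."""
    rng = np.random.default_rng(seed)
    Z = rng.binomial(1, 0.5, size=n).astype(float)
    h1 = lam0 * np.exp(alpha0 * Z)              # hazard before the change point
    h2 = lam0 * np.exp(beta0 * Z)               # hazard after the change point
    E = rng.exponential(1.0, size=n)            # T solves H(T) = E, H piecewise linear
    T = np.where(E <= h1 * zeta0, E / h1, zeta0 + (E - h1 * zeta0) / h2)
    C = np.minimum(rng.exponential(1.0 / cens_rate, size=n), tau)
    T_tilde = np.minimum(T, C)
    delta = (T <= C).astype(int)
    return T_tilde, delta, Z
```

Wrapping the resampling schemes of Section 2.1 and an MPLE routine around this generator allows one to reproduce the type of comparison summarized in Table 1.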
6 Proof of Theorems

This section contains proofs of Theorems 7, 8 and 10.

Figure 2: Histograms of the distribution of $n(\hat{\zeta}_n - \zeta_0)$ and its bootstrap estimates. The top left panel shows the distribution of $n(\hat{\zeta}_n - \zeta_0)$ obtained from 1000 random samples; the top right panel shows the distribution of $n(\hat{\zeta}^*_n - \hat{\zeta}_n)$ for the smooth bootstrap (Method 5), the middle left the classical bootstrap (Method 1), the middle right the conditional bootstrap (Method 3). The bottom left panel shows the distribution of $m_n(\hat{\zeta}^*_n - \hat{\zeta}_n)$ with $m_n = n^{9/10}$, and the bottom right for $m_n = n^{14/15}$.

6.1 Proof of Theorem 7

We first show that $\theta_n \to \theta_0$. Write $\Theta = \Theta_\alpha \times \Theta_\beta \times [0, \tau]$. By the definition, $\theta_n = (\alpha_n', \beta_n', \zeta_n)'$ is the smallest maximizer of
$$X_0(\theta) := X_0(\alpha, \beta, \zeta) := \int_0^\tau \big((\alpha - \alpha_0)1_{s\le\zeta} + (\beta - \beta_0)1_{s>\zeta}\big)'\, dA_{n,1}(s) - \int_0^\tau \log\frac{s_{n,0}(s;\, \alpha 1_{s\le\zeta} + \beta 1_{s>\zeta})}{s_{n,0}(s;\, \alpha_0 1_{s\le\zeta_0} + \beta_0 1_{s>\zeta_0})}\, dA_{n,0}(s).$$
Thus, $X_0(\theta_n) \ge 0$. By condition A1, we have
$$|X_0(\theta_n) - X(\theta_n)| \le \sup_{\theta\in\Theta} |X_0(\theta) - X(\theta)| \to 0,$$
where
$$X(\theta) = \int_0^\tau \big((\alpha - \alpha_0)'s_1(s; \alpha_0) - s_0(s; \alpha_0)\log r(s; \alpha, \alpha_0)\big)\, 1_{s\le\zeta\wedge\zeta_0}\, d\Lambda_0(s) + \int_0^\tau \big((\alpha - \beta_0)'s_1(s; \beta_0) - s_0(s; \beta_0)\log r(s; \alpha, \beta_0)\big)\, 1_{\zeta_0<s\le\zeta}\, d\Lambda_0(s) + \int_0^\tau \big((\beta - \alpha_0)'s_1(s; \alpha_0) - s_0(s; \alpha_0)\log r(s; \beta, \alpha_0)\big)\, 1_{\zeta<s\le\zeta_0}\, d\Lambda_0(s) + \int_0^\tau \big((\beta - \beta_0)'s_1(s; \beta_0) - s_0(s; \beta_0)\log r(s; \beta, \beta_0)\big)\, 1_{s>\zeta\vee\zeta_0}\, d\Lambda_0(s). \quad (18)$$
Then by the continuous mapping theorem, the conclusion follows from the fact that $\theta_0$ is the unique maximizer of $X(\theta)$ and that $X(\theta_0) = 0$. We now show the consistency of $\theta^*_n$. For $\gamma_n \in \{\alpha_n, \beta_n\}$ and $\gamma \in \Theta_\alpha \cup \Theta_\beta$, let
$$w_n(t; \gamma) := \int_0^t (\gamma - \gamma_n)'\, dM_{n,1}(s) - \int_0^t \log r_n(s; \gamma, \gamma_n)\, dM_{n,0}(s), \quad\text{where}\quad M_{n,k}(t) = \frac{1}{m_n}\sum_{i=1}^{m_n}\int_0^t Z_{n,i}^{\otimes k}(s)\, dN_{n,i}(s) - A_{n,k}(t), \ \ k = 0, 1. \quad (19)$$
For any $\epsilon_1 > 0$, we have
$$Q_n\Big(\sup_{t\in[0,\tau]} |w_n(t; \gamma)| \ge 2\epsilon_1\Big) \le Q_n\Big(\sup_{t\in[0,\tau]} \Big|\int_0^t (\gamma - \gamma_n)'\, dM_{n,1}(s)\Big|^2 \ge \epsilon_1^2\Big) + Q_n\Big(\sup_{t\in[0,\tau]} \Big|\int_0^t \log r_n(s; \gamma, \gamma_n)\, dM_{n,0}(s)\Big|^2 \ge \epsilon_1^2\Big).$$
Then by Lenglart\u2019s inequality for c\u00b4 adl\u00b4 ag processes (Jacod and Shiryaev, 2002, p.35), we have that for \u03f52 > 0, there exists a constant B > 0 such that Qn \u0010 sup t\u2208[0,\u03c4] |wn(t; \u03b3)| \u22652\u03f51 \u0011 \u2264 2 \u0010\u03f52 \u03f51 + B \u03f52 1mn \u0011 + Qn \u0010 1 m2 n mn X i=1 Z t 0 \u0000(\u03b3 \u2212\u03b3n)\u2032Zn,i(s) \u00012dNn,i(s) > \u03f52 \u0011 +Qn \u0012 1 m2 n mn X i=1 Z t 0 \u0000log rn(s; \u03b3, \u03b3n) \u00012dNn,i(s) > \u03f52 \u0013 \u2264 2 \u0010\u03f52 \u03f52 1 + B \u03f52 1mn \u0011 + 2B \u03f52m2 n , (20) where the last inequality follows from Chebyshev inequality. Since \u03f51 and \u03f52 are arbitrary, it follows that sup t\u2208[0,\u03c4] |wn(t; \u03b3)| P \u2212 \u21920, for \u03b3 \u2208\u0398\u03b1 \u222a\u0398\u03b2. (21) On the other hand, by Lemma 22 in the Appendix, we have sup t\u2208[0,\u03c4] \f \f \f \f 1 mn mn X i=1 Z t 0 (log Rn(s; \u03b3, \u03b3n) \u2212log rn(s; \u03b3, \u03b3n)) dNn,i(s) \f \f \f \f \u21920. (22) 17 Thus, (21) and (22) imply that for \u03b8 = (\u03b1\u2032, \u03b2\u2032, \u03b6)\u2032 with \u03b1 \u2208\u0398\u03b1 and \u03b2 \u2208\u0398\u03b2, sup \u03b6\u2208[0,\u03c4] |Xn(\u03b8) \u2212X\u2217 n(\u03b8)| P \u2212 \u21920, where Xn(\u03b8) = mn\u22121(l\u2217 n(\u03b8) \u2212l\u2217 n(\u03b8n)) and X\u2217 n(\u03b8) = Qn \u0012 1 mn mn X i=1 Z \u03c4 0 \u0000(\u03b1 \u2212\u03b1n)\u2032Zn,i(s) \u2212log rn(s; \u03b1, \u03b1n) \u0001 1s\u2264\u03b6\u2227\u03b6ndNn,i(s) \u0013 + Qn \u0012 1 mn mn X i=1 Z \u03c4 0 \u0000(\u03b1 \u2212\u03b2n)\u2032Zn,i(s) \u2212log rn(s; \u03b1, \u03b2n) \u0001 1\u03b6n\u03b6\u2228\u03b6ndNn,i(s) \u0013 . A similar argument as in the proof of Theorem II.1 in Andersen and Gill (1982) implies that the convergence of |Xn(\u03b8)\u2212X\u2217 n(\u03b8)| is uniform in \u03b8 \u2208\u0398. Then by the result that \u03b8n \u2192\u03b80 and condition A1, we have that sup \u03b8\u2208\u0398 |Xn(\u03b8) \u2212X(\u03b8)| P \u2212 \u21920, where X(\u03b8) is de\ufb01ned as in (18). Apply Corollary 3.2.3 (ii) in van der Vaart and Wellner (1996) and we obtain the desired convergence result. 6.2 Proof of Theorem 8 For \u03b1 = (\u03b11, \u00b7 \u00b7 \u00b7 , \u03b1p)\u2032 and \u03b2 = (\u03b21, \u00b7 \u00b7 \u00b7 , \u03b2p)\u2032, let Conv(\u03b1, \u03b2) := {\u03b3 = (\u03b31, \u00b7 \u00b7 \u00b7 , \u03b3p)\u2032 : \u03b2i \u2227\u03b1i \u2264 \u03b3i \u2264\u03b2i \u2228\u03b1i, i = 1, \u00b7 \u00b7 \u00b7 , p}. By the de\ufb01nition of MPLE, we have 0 \u2264Xn(\u03b8\u2217 n) \u2212Xn(\u03b8n). 
Take Taylor\u2019s expansion of Xn(\u03b8\u2217 n) \u2212Xn(\u03b8n) with respect to \u03b1n and \u03b2n and we obtain that there exist \u02dc \u03b1n \u2208Conv(\u03b1\u2217 n, \u03b1n) and \u02dc \u03b2n \u2208Conv(\u03b2\u2217 n, \u03b2n) such that 0 \u2264\u221amn|\u03b1\u2217 n \u2212\u03b1n|I1 + \u221amn|\u03b2\u2217 n \u2212\u03b2n|I2 \u2212mn 2 |\u03b1\u2217 n \u2212\u03b1n|2\f \f \f 1 mn mn X i=1 Z \u03c4 0 Qn(s; \u02dc \u03b1n)1s\u2264\u03b6ndNn,i(s) \f \f \f \u2212mn 2 |\u03b2\u2217 n \u2212\u03b2n|2\f \f \f 1 mn mn X i=1 Z \u03c4 0 Qn(s; \u02dc \u03b2n)1s>\u03b6ndNn,i(s) \f \f \f + mn|\u03b6n \u2212\u03b6\u2217 n|an + mn|\u03b6n \u2212\u03b6\u2217 n|bn, where I1 = \f \f \f 1 \u221amn mn X i=1 Z \u03c4 0 \u0000Zn,i(s) \u2212\u00af Zn(s; \u03b1n) \u0001 1s\u2264\u03b6ndNn,i(s) \f \f \f, I2 = \f \f \f 1 \u221amn mn X i=1 Z \u03c4 0 \u0000Zn,i(s) \u2212\u00af Zn(s; \u03b2n) \u0001 1s>\u03b6ndNn,i(s) \f \f \f, 18 an = 1 mn|\u03b6n \u2212\u03b6\u2217 n| mn X i=1 Z \u03c4 0 \u0000(\u03b1\u2217 n \u2212\u03b2\u2217 n)\u2032Zn,i(s) \u2212log Rn(s; \u03b1\u2217 n, \u03b2\u2217 n) \u0001 1\u03b6n\u03b6ndNn,i(s). We consider the quantities in (23) one by one. For I1, we have I1 \u2264 \f \f \f \f 1 \u221amn mn X i=1 Z \u03c4 0 (Zn,i(s) \u2212\u00af zn(s; \u03b1n)) 1s\u2264\u03b6ndNn,i(s) \f \f \f \f + \f \f \f \f 1 \u221amn mn X i=1 Z \u03c4 0 \u0000\u00af zn(s; \u03b1n) \u2212\u00af Zn(s; \u03b1n) \u0001 1s\u2264\u03b6ndNn,i(s) \f \f \f \f. (24) By the de\ufb01nition of \u03b1n and \u03b6n, the \ufb01rst term in (24) equals \u221amn \f \f \f Z \u03b6n 0 dMn,1(s) \u2212 Z \u03b6n 0 \u00af zn(s; \u03b1n)dMn,0(s) \f \f \f, where Mn,k, k = 0, 1, is de\ufb01ned as in (19). Then similarly as in the derivation of (20), Lenglart\u2019s inequality implies that the above quantity is OP(1). Consider the second term in (24). From the proof of Lemma 22, we know that {Yn,i(t)Z\u2297k n,i e\u03b3\u2032Zn,i(t)} is manageable. Then by the boundedness property of Z and inequality (7.10) in page 38 of Pollard (1990), there exists constant B > 0 such that E h sup s\u2208[0,\u03c4] \f \f\u00af zn(s; \u03b1n) \u2212\u00af Zn(s; \u03b1n) \f \f i \u2264 B \u221amn , which implies that the second term in (24) is also OP(1). Therefore, I1 = OP(1). 19 Similarly, we obtain that I2 = OP(1). Consider I3 and we have that \f \f \f \fI3 \u2212 Z \u03b6n 0 \u03c30(Q(s; \u02dc \u03b1n))dAn,0(s) \f \f \f \f \u2264 \f \f \f \f 1 mn mn X i=1 Z \u03b6n 0 (\u03c30(Qn(s; \u02dc \u03b1n)) \u2212\u03c30(Q(s; \u02dc \u03b1n))) dNn,i(s) \f \f \f \f + sup \u03b6\u2208[0,\u03c4] \f \f \f \f Z \u03b6 0 \u03c30(Q(s; \u02dc \u03b1n))dMn,0(s) \f \f \f \f. The \ufb01rst term in the right hand side of the above display converges to 0 due to the convergence of \u03c30(Qn). The second term also converges to 0 in probability by Lenglart\u2019s inequality. Thus, together with condition A1 and the convergence of \u03b8n to \u03b80, we have I3 = (1 + oP(1)) Z \u03b60 0 \u03c30(Q(s; \u03b10))s0(s; \u03b10)d\u039b0(s). Similarly, I4 = (1 + oP(1)) R \u03c4 \u03b60 \u03c30(Q(s; \u03b20))s0(s; \u03b20)d\u039b0(s). 
Consider an in (23) and we have that \f \f \fan \u22121\u03b6n<\u03b6\u2217 n |\u03b6n \u2212\u03b6\u2217 n| \u0010 Z \u03b6\u2217 n \u03b6n (\u03b1\u2217 n \u2212\u03b2\u2217 n)\u2032dAn,1(s) \u2212 Z \u03b6\u2217 n \u03b6n log rn(s; \u03b1\u2217 n, \u03b2\u2217 n)dAn,0(s) \u0011\f \f \f (25) \u2264 1\u03b6n<\u03b6\u2217 n |\u03b6n \u2212\u03b6\u2217 n| \f \f \f Z \u03b6\u2217 n \u03b6n (\u03b1\u2217 n \u2212\u03b2\u2217 n)\u2032dMn,1(s) \f \f \f + 1\u03b6n<\u03b6\u2217 n |\u03b6n \u2212\u03b6\u2217 n| \f \f \f Z \u03b6\u2217 n \u03b6n log rn(s; \u03b1\u2217 n, \u03b2\u2217 n)dMn,0(s) \f \f \f + 1\u03b6n<\u03b6\u2217 n |\u03b6n \u2212\u03b6\u2217 n| sup s\u2208[\u03b6n,\u03b6\u2217 n] \f \f log Rn(s; \u03b1\u2217 n, \u03b2\u2217 n) \u2212log rn(s; \u03b1\u2217 n, \u03b2\u2217 n) \f \f \f \f \f Z \u03b6\u2217 n \u03b6n dMn,0(s) \f \f \f + 1\u03b6n<\u03b6\u2217 n \u03b6\u2217 n \u2212\u03b6n sup s\u2208[\u03b6n,\u03b6\u2217 n] \f \f log Rn(s; \u03b1\u2217 n, \u03b2\u2217 n) \u2212log rn(s; \u03b1\u2217 n, \u03b2\u2217 n) \f \f Z \u03b6\u2217 n \u03b6n dAn,0(s) =: an,1 + an,2 + an,3 + an,4. For an,k, k = 1, 2, 3, by Lenglart\u2019s inequality and condition A2, we have that for any positive \u03f5 and \u03f5n,j, j \u2208Z+, there exists a constant B > 0 such that Qn \u0010 sup mn|\u03b6\u2217 n\u2212\u03b6n|>hn an,k > \u03f5 \u0011 \u2264 \u221e X j=1 Qn \u0010 sup 2j\u22121hn\u2264mn|\u03b6\u2217 n\u2212\u03b6n|<2jhn a2 n,k > \u03f52\u0011 \u2264 \u221e X j=1 \u03f5n,j \u03f52 + B\u03c12 \u03f5n,j2jhn , where \u03c12 is de\ufb01ned as in condition A2. Since \u03f5n,j are arbitrary, it follows that supmn|\u03b6\u2217 n\u2212\u03b6n|>hn an,k P \u2212 \u21920, k = 1, 2, 3. In addition, for an,4, we have sup mn|\u03b6\u2217 n\u2212\u03b6n|>hn an,4 = oP(1) sup mn|\u03b6\u2217 n\u2212\u03b6n|>hn 1\u03b6n<\u03b6\u2217 n \u03b6\u2217 n \u2212\u03b6n Z \u03b6\u2217 n \u03b6n dAn,0(s) P \u2212 \u21920. 20 Thus (25) P \u2212 \u21920. From condition A2 and Theorem 7, we have sup mn|\u03b6\u2217 n\u2212\u03b6n|>hn 1\u03b6n<\u03b6\u2217 n |\u03b6n \u2212\u03b6\u2217 n| \u0012 Z \u03b6\u2217 n \u03b6n (\u03b1\u2217 n \u2212\u03b2\u2217 n)\u2032dAn,1(s) \u2212 Z \u03b6\u2217 n \u03b6n log rn(s; \u03b1\u2217 n, \u03b2\u2217 n)dAn,0(s) \u0013 = sup mn|\u03b6\u2217 n\u2212\u03b6n|>hn 1\u03b6n<\u03b6\u2217 n |\u03b6n \u2212\u03b6\u2217 n| Z \u03b6\u2217 n \u03b6n \u0000(\u03b10 \u2212\u03b20)\u2032\u00af zn(s; \u03b20) \u2212log rn(s; \u03b10, \u03b20) \u0001 dAn,0(s) + oP(1). Since (\u03b10 \u2212\u03b20)\u2032\u00af zn(s; \u03b20) \u2212log rn(s; \u03b10, \u03b20) < 0 and it is continuous in a neighborhood of \u03b60, there exists a constant \u03ba0 < 0 such that for any sequence hn \u2192\u221eand hn/mn \u21920, 0 < \u2212\u03ba0\u03c11 \u2264 inf mn|\u03b6n\u2212\u03b6\u2217 n|>hn {\u2212an} \u2264 sup mn|\u03b6n\u2212\u03b6\u2217 n|>hn {\u2212an} \u2264\u2212\u03c12\u03ba0 holds with probability tending to 1 as n \u2192\u221e. Similarly, we have 0 < \u22122\u03ba0\u03c11 \u2264 inf mn|\u03b6n\u2212\u03b6\u2217 n|>hn {\u2212an \u2212bn} \u2264 sup mn|\u03b6n\u2212\u03b6\u2217 n|>hn {\u2212an \u2212bn} \u2264\u22122\u03c12\u03ba0 holds with probability tending to 1 as n \u2192\u221e. Combining the above derivations for (23), we have that \u2212mn|\u03b6n \u2212\u03b6\u2217 n|(an + bn) \u2264\u221amn|\u03b1\u2217 n \u2212\u03b1n|OP(1) + \u221amn|\u03b2\u2217 n \u2212\u03b2n|OP(1) \u2212OP(1)mn(|\u03b1\u2217 n \u2212\u03b1n|2 + |\u03b2\u2217 n \u2212\u03b2n|2) (26) = OP(1). 
Thus mn(\u03b6n \u2212\u03b6\u2217 n) = OP(1). As a consequence, (26) implies that \u221amn|\u03b1\u2217 n \u2212\u03b1n|OP(1) + \u221amn|\u03b2\u2217 n \u2212\u03b2n|OP(1) \u2212OP(1)mn(|\u03b1\u2217 n \u2212\u03b1n|2 + |\u03b2\u2217 n \u2212\u03b2n|2) = OP(1). This gives that |\u221amn(\u03b1\u2217 n \u2212\u03b1n)| = OP(1) and |\u221amn(\u03b2\u2217 n \u2212\u03b2n)| = OP(1). 6.3 Proof of Theorem 10 Let Un,1 = 1 \u221amn mn X i=1 Z \u03b6n 0 (Zn,i(s) \u2212\u00af zn(s; \u03b1n)) dNn,i(s), Un,2(h\u03b6) = mn X i=1 Z \u03b6n \u03b6n+h\u03b6/mn \u0000(\u03b2n \u2212\u03b1n)\u2032Zn,i(s) \u2212log rn(s; \u03b2n, \u03b1n) \u0001 1h\u03b6<0dNn,i(s), Un,3(h\u03b6) = mn X i=1 Z \u03b6n \u03b6n+h\u03b6/mn 1h\u03b6<0dNn,i(s), Un,4(h\u03b6) = mn X i=1 Z \u03b6n+h\u03b6/mn \u03b6n \u0000(\u03b1n \u2212\u03b2n)\u2032Zn,i(s) \u2212log rn(s; \u03b1n, \u03b2n) \u0001 1h\u03b6>0dNn,i(s), Un,5(h\u03b6) = mn X i=1 Z \u03b6n+h\u03b6/mn \u03b6n 1h\u03b6>0dNn,i(s), Un,6 = 1 \u221amn mn X i=1 Z \u03c4 \u03b6n (Zn,i(s) \u2212\u00af zn(s; \u03b2n)) dNn,i(s). 21 We de\ufb01ne processes Jn(h\u03b6) := Un,3(h\u03b6) + Un,5(h\u03b6) and Un(h) := h\u2032 \u03b1Un,1 \u22121 2h\u2032 \u03b1 \u0010 Z \u03b60 0 Q(s; \u03b10)s0(s; \u03b10)\u03bb0(s)ds \u0011 h\u03b1 + Un,2(h\u03b6) +h\u2032 \u03b2Un,6 \u22121 2h\u2032 \u03b2 \u0010 Z \u03c4 \u03b60 Q(s; \u03b20)s0(s; \u03b20)\u03bb0(s)ds \u0011 h\u03b2 + Un,4(h\u03b6). The limit law of Un and Jn can be deduced from that of Un,i and is given as follows. Lemma 20 Let K \u2282R be a compact interval and \u0398 = \u02dc \u0398\u00d7K \u2282R2p+1 a compact set. Then, under conditions A1-A3, (Un, Jn) converges weakly in the Skorohod topology to (U, J) in D\u0398 \u00d7 DK. Next we show that processes Un and U \u2217 n have the same asymptotic distribution. Lemma 21 Let \u0398 be a compact set in R2p+1. Then under conditions A1 and A2, sup h\u2208\u0398 |Un(h) \u2212U \u2217 n(h)| P \u2212 \u21920. Thus, (U \u2217 n, J\u2217 n) also converges weakly in the Skorohod topology to (U, J) in D\u0398 \u00d7 DK. Then by Theorem 3.1 in Seijo and Sen (2011b), we have the desired conclusion. 7 Appendix This appendix contains proofs of Lemmas 9, 13, 20, 21 and 22. Proof of Lemma 9. From the de\ufb01nition of U it is easily seen that \u03c6\u03b1 = \u0012 Z \u03b60 0 Q(s; \u03b10)s0(s; \u03b10)\u03bb0(s)ds \u0013\u22121 U1, \u03c6\u03b2 = \u0012Z \u03c4 \u03b60 Q(s; \u03b20)s0(s; \u03b20)\u03bb0(s)ds \u0013\u22121 U6, \u03c6\u03b6 = sargmaxh\u2208R2p+1{U2(h\u03b6) + U4(h\u03b6)}. Due to the independence of U1, U2, U4 and U6, \u03c6\u03b1, \u03c6\u03b2, and \u03c6\u03b6 are independent. In addition, (9) and (10) hold. We now show the existence of \u03c6\u03b6. It su\ufb03ces to show that U2(h\u03b6) + U4(h\u03b6) \u2192\u2212\u221eas |h\u03b6| \u2192\u221e. For h\u03b6 > 0, U4(h\u03b6) = \u2212\u0393+(h\u03b6) log r(\u03b60; \u03b10, \u03b20) + X 1\u2264i\u2264\u0393+(h\u03b6) (\u03b10 \u2212\u03b20)v+ i = X 1\u2264i\u2264\u0393+(h\u03b6) \b (\u03b10 \u2212\u03b20)v+ i \u2212E \u0002 (\u03b10 \u2212\u03b20)v+ i \u0003\t + E \u0002 (\u03b10 \u2212\u03b20)v+ i \u2212log r(\u03b60; \u03b10, \u03b20) \u0003 \u0393+(h\u03b6). Since \u0393+(h\u03b6) \u2212 \u2192\u221eas h\u03b6 \u2192\u221eand E \u0002 (\u03b10 \u2212\u03b20)v+ i \u2212log r(\u03b60; \u03b10, \u03b20) \u0003 = (\u03b10 \u2212\u03b20)\u2032z(\u03b60; \u03b20) \u2212log r(\u03b60; \u03b10, \u03b20) < 0, U4(h\u03b6) \u2212 \u2192\u2212\u221eas h\u03b6 \u2192\u221e. 
A similar argument gives U2(h\u03b6) \u2192\u2212\u221eas h\u03b6 \u2192\u2212\u221e, which completes the proof. 22 Proof of Lemma 13. For (13), we only need to show that the \ufb01rst sequence does not converge in probability. Take \u03f5 < 1/4. From Theorem 8, there exists a constant B\u03f5 > 0 such that P(n|\u02c6 \u03b6n\u2212\u03b60| \u2264 B\u03f5) > 1 \u2212\u03f5 for all large n. Choose h > 2B\u03f5 and let \u02c6 En = n X i=1 Z \u02c6 \u03b6n+ h n \u02c6 \u03b6n dNi (s) , En,1 = n X i=1 Z \u03b60+ h\u2212B\u03f5 n \u03b60+ B\u03f5 n dNi (s) , En,2 = n X i=1 Z \u03b60+ h+B\u03f5 n \u03b60\u2212B\u03f5 n dNi (s) . Then, P(En,1 \u2264\u02c6 En \u2264En,2) \u2265P(n|\u02c6 \u03b6n \u2212\u03b60| \u2264B\u03f5) > 1 \u2212\u03f5. (27) We know that for any h1 < h2, n X i=1 Z \u03b60+ h2 n \u03b60+ h1 n dNi(s) \u21ddPoisson \u0000\u03bb0(\u03b60)[(h2 \u2212h1 \u22280)s0(\u03b60; \u03b20) \u2212(h1 \u22270)s0(\u03b60; \u03b10)] \u0001 . Therefore, En,1 \u21ddPoisson(\u03bb0(\u03b60)(h\u22122B\u03f5)s0(\u03b60; \u03b20)) and En,2 \u21ddPoisson(\u03bb0(\u03b60)(h+B\u03f5)s0(\u03b60; \u03b20)+ \u03bb0(\u03b60)B\u03f5s0(\u03b60; \u03b10)). Then by Lemma A.4 in Seijo and Sen (2011a), there is a constant h0 such that when h > h0, we can \ufb01nd two numbers N1,h < N2,h \u2208N satisfying lim inf n\u2192\u221eP(En,1 > N2,h) > 2\u03f5 and lim inf n\u2192\u221eP(En,2 < N1,h) > 2\u03f5. Combining with (27), we have P( \u02c6 En \u2265En,1 > N2,h, i.o.) > \u03f5 and P( \u02c6 En \u2264En,2 < N1,h, i.o.) > \u03f5. Then by the Hewitt-Savage 0-1 law, the permutation invariant events { \u02c6 En > N2,h, i.o.} and { \u02c6 En < N1,h, i.o.} occur with probability 1, which implies that \u02c6 En does not have an almost sure limit. A similar argument applies for any increasing sequence of natural numbers {nk}\u221e k=1 and gives that \u02c6 En does not converge in probability. For (14), consider the real part of \u03c6i, Re(\u03c6i), and de\ufb01ne \u02c6 E\u03c6 n = Pn i=1 R \u02c6 \u03b6n+ h n \u02c6 \u03b6n Re(\u03c6i(s)) dNi (s), E\u03c6 n,1 = Pn i=1 R \u03b60+ h\u2212B\u03f5 n \u03b60+ B\u03f5 n Re(\u03c6i(s))dNi (s), and E\u03c6 n,2 = Pn i=1 R \u03b60+ h+B\u03f5 n \u03b60\u2212B\u03f5 n Re(\u03c6i(s)) dNi (s) . It is su\ufb03cient to show that \u02c6 E\u03c6 n does not converge. Since for any h1 < h2, Pn i=1 R \u03b60+ h2 n \u03b60+ h1 n Re(\u03c6i(s))dNi(s) converges to a compound Poisson distribution, then a similar argument as above gives the desired conclusion. Proof of Lemma 20. It is su\ufb03cient to show the weak convergence in probability of (Un,1, \u00b7 \u00b7 \u00b7 , Un,6) to (U1, \u00b7 \u00b7 \u00b7 , U6). We \ufb01rst prove the convergence of its \ufb01nite dimensional joint characteristic function. Consider real numbers h\u2212N < \u00b7 \u00b7 \u00b7 < h\u22121 < 0 = h0 < h1 < \u00b7 \u00b7 \u00b7 < hN and the linear combination Wn = \u00b5Un,1 + vUn,6 + X \u2212N\u2264j\u2264\u22121 {qj(Un,2(hj) \u2212Un,2(hj+1)) + pj(Un,3(hj) \u2212Un,3(hj+1))} + X 1\u2264j\u2264N {qj(Un,4(hj) \u2212Un,4(hj\u22121)) + pj(Un,5(hj) \u2212Un,5(hj+1))} , 23 where pj, qj \u2208R, j = \u2212N, \u00b7 \u00b7 \u00b7 , N, and \u00b5, v \u2208R1\u00d7p. For simplicity, we write hj,n = hj/mn. 
The characteristic function of Wn is E[e\u0131tWn] and can be expressed as E \u0014 exp \u0012 \u0131t\u00b5 1 \u221amn mn X k=1 Z \u03b6n 0 (Zn,k(s) \u2212\u00af zn(s; \u03b1n)) dNn,k(s) + \u0131tv Z \u03c4 \u03b6n (Zn,k(s) \u2212\u00af zn(s; \u03b2n)) dNn,k(s) + \u0131t \u22121 X j=\u2212N mn X k=1 Z \u03b6n+hj+1,n \u03b6n+hj,n \u0000qj(\u03b2n \u2212\u03b1n)\u2032Zn,k(s) \u2212qj log rn(s; \u03b2n, \u03b1n) + pj \u0001 dNn,k(s) + \u0131t N X j=1 mn X k=1 Z \u03b6n+hj,n \u03b6n+hj\u22121,n \u0000qj(\u03b1n \u2212\u03b2n)\u2032Zn,k(s) \u2212qj log rn(s; \u03b1n, \u03b2n) + pj \u0001 dNn,k(s) \u0013\u0015 . By the independence of the observations {( \u02dc Tn,k, \u03b4n,k, Zn,k) : k = 1, \u00b7 \u00b7 \u00b7 , mn}, E[e\u0131tWn] can be further written as mn Y k=1 Qn \u001a 1 + Z \u03c4 0 h e\u0131t 1 \u221amn \u00b5 \u0000Zn,k(s)\u2212\u00af zn(s;\u03b1n) \u0001 1s<\u03b6n \u22121 i dNn,k(s) + Z \u03c4 0 h e\u0131tv 1 \u221amn \u0000Zn,k(s)\u2212\u00af zn(s;\u03b2n) \u0001 1s>\u03b6n \u22121 i dNn,k(s) + X \u2212N\u2264j\u2264\u22121 Z \u03b6n+hj+1,n \u03b6n+hj,n h e\u0131t(qj(\u03b2n\u2212\u03b1n)\u2032Zn,k(s)\u2212qj log rn(s;\u03b2n,\u03b1n)+pj) \u22121 i dNn,k(s) + X 1\u2264j\u2264N Z \u03b6n+hj,n \u03b6n+hj\u22121,n h e\u0131t(qj(\u03b1n\u2212\u03b2n)\u2032Zn,k(s)\u2212qj log rn(s;\u03b1n,\u03b2n)+pj) \u22121 i dNn,k(s) \u001b . For the \ufb01rst two integrals, take Taylor\u2019s expansions of the exponential functions and we have that E[e\u0131tWn] equals mn Y k=1 \u001a 1 + Qn \u0012Z \u03c4 0 1 \u221amn it\u00b5 \u0010 Zn,k(s) \u2212\u00af zn(s; \u03b1n) \u0011 1s<\u03b6ndNn,k(s) \u0013 + Qn \u0012Z \u03c4 0 1 \u221amn itv \u0010 Zn,k(s) \u2212\u00af zn(s; \u03b2n) \u0011 1s>\u03b6ndNn,k(s) \u0013 \u2212 1 2mn Qn \u0012 Z \u03c4 0 t2\u00b5 \u0010 Zn,k(s) \u2212\u00af zn(s; \u03b1n) \u0011\u22972 \u00b5\u2032 1s<\u03b6ndNn,k(s) \u0013 \u2212 1 2mn Qn \u0012 Z \u03c4 0 t2v \u0010 Zn,k(s) \u2212\u00af zn(s; \u03b2n) \u0011\u22972 v\u2032 1s>\u03b6ndNn,k(s) \u0013 + o(m\u22121 n ) + X \u2212N\u2264j\u2264\u22121 Qn \u0012 Z \u03b6n+hj+1,n \u03b6n+hj,n h e\u0131t(qj(\u03b2n\u2212\u03b1n)\u2032Zn,k(s)\u2212qj log rn(s;\u03b2n,\u03b1n)+pj) \u22121 i dNn,k(s) \u0013 + X 1\u2264j\u2264N Qn \u0012 Z \u03b6n+hj,n \u03b6n+hj\u22121,n h e\u0131t(qj(\u03b1n\u2212\u03b2n)\u2032Zn,k(s)\u2212qj log rn(s;\u03b1n,\u03b2n)+pj) \u22121 i dNn,k(s) \u0013\u001b . By condition A2, Qn \u0000 Pmn i=1 R \u03b6n 0 \u0000Zn,k(s)\u2212\u00af zn(s; \u03b1n) \u0001 dNn,k(s) \u0001 and Qn \u0000 Pmn i=1 R \u03c4 \u03b6n \u0000Zn,k(s)\u2212\u00af zn(s; \u03b2n) \u0001 dNn,k(s) \u0001 24 converge to 0. Thus E[e\u0131tWn] equals (1 + o(1)) exp \u001a \u22121 2 mn X k=1 Qn \u0012 Z \u03b6n 0 t2\u00b5 \u0000Zn,k(s) \u2212\u00af zn(s; \u03b1n) \u0001\u22972\u00b5\u2032 dNn,k(s) \u0013 \u22121 2 mn X k=1 Qn \u0012 Z \u03c4 \u03b6n t2v \u0000Zn,k(s) \u2212\u00af zn(s; \u03b2n) \u0001\u22972v dNn,k(s) \u0013\u001b \u00d7 exp \u001a \u22121 X j=\u2212N Qn \u0012 mn X k=1 Z \u03b6n+hj+1,n \u03b6n+hj,n h e\u0131t(qj(\u03b2n\u2212\u03b1n)\u2032Zn,k(s)\u2212qj log rn(s;\u03b2n,\u03b1n)+pj) \u22121 i dNn,k(s) \u0013\u001b \u00d7 exp \u001a N X j=1 Qn \u0012 mn X k=1 Z \u03b6n+hj,n \u03b6n+hj\u22121,n h e\u0131t(qj(\u03b1n\u2212\u03b2n)\u2032Zn,k(s)\u2212qj log rn(s;\u03b1n,\u03b2n)+pj) \u22121 i dNn,k(s) \u0013\u001b . It is easily seen that the \ufb01rst exponential component in the above display converges to E[e\u0131t\u00b5U1+\u0131tvU6]. 
Therefore, Lemma 22 together with condition A3 implies that E[e\u0131tWn] = (1 + o(1))E[e\u0131t\u00b5U1+\u0131tvU6] \u00d7 exp \u001a \u03bb0(\u03b60) X \u2212N\u2264j\u2264\u22121 (hj+1 \u2212hj) \u00d7 h e\u2212qj log r(\u03b60;\u03b20,\u03b10)+pj s0(\u03b60; itqj(\u03b20 \u2212\u03b10) + \u03b10) \u2212s0(\u03b60; \u03b10) i\u001b \u00d7 exp \u001a \u03bb0(\u03b60) X 1\u2264j\u2264N (hj \u2212hj\u22121) \u00d7 h e\u2212qj log r(\u03b60;\u03b10,\u03b20)+pj s0(\u03b60; itqj(\u03b10 \u2212\u03b20) + \u03b20) \u2212s0(\u03b60; \u03b20) i\u001b . For (U1, \u00b7 \u00b7 \u00b7 , U6) as de\ufb01ned in Section 3.2, de\ufb01ne the linear combination W :=\u00b5U1 + vU6 + X \u2212N\u2264j\u2264\u22121 {qj(Un,2(hj) \u2212U2(hj+1)) + pj(U3(hj) \u2212Un,3(hj+1))} + X 1\u2264j\u2264N {qj(Un,4(hj) \u2212U4(hj\u22121)) + pj(U5(hj) \u2212Un,5(hj+1))} . By the de\ufb01nition of (U1, \u00b7 \u00b7 \u00b7 , U6), we know that the characteristic function of W has the same form as the limit of E[e\u0131tWn]. Thus, we have the weak convergence of the \ufb01nite dimensional distributions. To further prove the weak convergence of (U1, \u00b7 \u00b7 \u00b7 , U6), we use Theorem 15.6 in Billingsley (1968). It\u2019s su\ufb03cient to show for each Un,i, i = 2, 3, 4, 5, there exists a nondecreasing, continuous function F such that for any h1 < h < h2, E|Un,i(h1) \u2212Un,i(h)||Un,i(h2) \u2212Un,i(h)| \u2264(F(h2) \u2212F(h1))2. (28) 25 Consider Un.2. For h1 < h < h2 < 0, E|Un,2(h1) \u2212Un,2(h)||Un,2(h2) \u2212Un,2(h)| \u2264 E mn X i=1 Z \u03b6n+h/mn \u03b6n+h1/mn \f \f(\u03b2n \u2212\u03b1n)\u2032Zn,i(s) \u2212log rn(s; \u03b2n, \u03b1n) \f \f dNn,i(s) \u00d7 mn X i=1 Z \u03b6n+h2/mn \u03b6n+h/mn \f \f(\u03b2n \u2212\u03b1n)\u2032Zn,i(s) \u2212log rn(s; \u03b2n, \u03b1n) \f \f dNn,i(s) \u2264 sup 1\u2264i\u2264mn s\u2208[\u03b6n+ h1 mn ,\u03b6n+ h2 mn ] \f \f(\u03b2n \u2212\u03b1n)\u2032Zn,i(s) \u2212log rn(s; \u03b2n, \u03b1n) \f \f \u00d7 m2 nE \u0014 Z \u03b6n+h/mn \u03b6n+h1/mn dNn,i(s) \u0015 E \u0014 Z \u03b6n+h2/mn \u03b6n+h/mn dNn,i(s) \u0015 \u2264 B|h2 \u2212h1|2, where B > 0 is some constant. Thus, (28) holds for Un,2. Similar arguments give that (28) is satis\ufb01ed for Un,i, i = 3, 4, 5. Then our conclusion follows from Theorem 15.6 in Billingsley (1968). Proof of Lemma 21. For notational simplicity, we write h\u03b1,n = h\u03b1 \u221amn , h\u03b2,n = h\u03b2 \u221amn , h\u03b6,n = h\u03b6 mn . We start by writing U \u2217 n as follows: U \u2217 n(h) := un,1(h) + un,2(h) + un,3(h) + un,4(h) where un,1(h) = mn X i=1 Z \u03c4 0 \u0000h\u2032 \u03b1,nZn,i(s) \u2212log Rn(s; \u03b1n + h\u03b1,n, \u03b1n) \u0001 1s\u2264\u03b6n\u2227(\u03b6n+h\u03b6,n)dNn,i(s), un,2(h) = mn X i=1 Z \u03c4 0 \u0000(\u03b1n \u2212\u03b2n + h\u03b1,n)\u2032Zn,i(s) \u2212log Rn(s; \u03b1n + h\u03b1,n, \u03b2n) \u0001 1\u03b6n\u03b6n\u2228(\u03b6n+h\u03b6,n)dNn,i(s). 26 For h = (h\u2032 \u03b1, h\u2032 \u03b2, h\u03b6)\u2032 \u2208\u0398, consider the di\ufb00erence between un,1 and the \ufb01rst two terms in Un: \f \f \fun,1(h) \u2212h\u2032 \u03b1Un,1 + 1 2h\u2032 \u03b1 \u0010 Z \u03b60 0 Q(s; \u03b10)s0(s; \u03b10)\u03bb0(s)ds \u0011 h\u03b1 \f \f \f \u2264 \f \f \f \f mn X i=1 Z \u03c4 0 \u0000h\u2032 \u03b1,nZn,i(s) \u2212log Rn(s; \u03b1n + h\u03b1,n, \u03b1n) \u0001 1\u03b6n\u2227(\u03b6n+h\u03b6,n)\u03b60 \u0001 \u03bb0(t), (2) where a second regression parameter vector \u03b10 is added to model (1) and \u03b60 is the change-point parameter. 
It is clear that estimation of the change-point $\zeta_0$ is an important step in the model-based inference. For the identifiability of model (2), we assume throughout that $\alpha_0 \neq \beta_0$, since otherwise this model reduces to (1) and $\zeta_0$ is not identifiable. Model (2) has been extensively studied in the literature. Liang, Self and Liu (1990) considered the problem of testing the null hypothesis of no change-point effect based on a maximal score statistic. Luo, Turnbull and Clark (1997) focused on testing $H_0: \zeta = \zeta_0$ vs. $H_1: \zeta \neq \zeta_0$ for a pre-specified $\zeta_0$ and derived the asymptotic distribution of the partial likelihood ratio test statistic under $H_0$. For estimation of the change-point parameter $\zeta_0$, Luo (1996) and Pons (2002) showed that the MPLE of $\zeta_0$ is $n^{-1}$ consistent while that of the regression parameter vector is $n^{-1/2}$ consistent. This is largely due to the fact that the partial likelihood function is not differentiable with respect to the change-point parameter, and therefore the usual Taylor expansion is not applicable. This "nonstandard" asymptotic behavior of the MPLE of $\zeta_0$ is typical in change-point regression problems; see Kosorok and Song (2007), Lan, Banerjee and Michailidis (2009), and Seijo and Sen (2011a) for examples of different models. Although the asymptotic distribution of the MPLE of $\zeta_0$ has been derived in the literature (Luo, 1996; Pons, 2002), it cannot be directly used for making inference for $\zeta_0$ due to the presence of nuisance parameters. Bootstrap methods bypass the difficulty of estimating the nuisance parameters and are generally reliable in standard $n^{-1/2}$ convergence problems; see Efron and Tibshirani (1993) and Davison and Hinkley (1997). When the bootstrap is applied to nonstandard problems such as the change-point model (2), however, it may yield invalid confidence intervals (CIs) for $\zeta_0$. The failure of the usual bootstrap methods in nonstandard situations has been documented in the literature; see Abrevaya and Huang (2005) and Sen, Banerjee and Woodroofe (2010) for situations giving rise to $n^{1/3}$ asymptotics, and Bose and Chatterjee (2001) for general M-estimation problems. However, the change-point problem for the Cox model (2) is quite different from the problems considered by the above authors, and the performance of different bootstrap methods has not been investigated. Various bootstrap procedures have been applied to the standard Cox model (1) (Davison and Hinkley, 1997). However, as will be shown in Section 4, the commonly used bootstrap methods, such as sampling directly from the empirical distribution (ED) and sampling conditional on covariates (Burr, 1994), provide invalid CIs for the change-point parameter in model (2). Indeed, we show that the bootstrap estimates constructed by these methods are the smallest maximizers of certain stochastic processes and, conditional on the data, these processes do not have any weak limit. This strongly suggests not only the inconsistency but also the nonexistence of any weak limit for the corresponding bootstrap estimates. To obtain consistent bootstrap procedures, we develop a new bootstrap approach that, conditional on the covariates, draws samples from a smooth approximation to the distribution of the survival time and from an estimate of the distribution of the censoring time.
A key step in the new approach is the smooth approximation to the distribution of the survival time, which makes the bootstrap scheme successfully mimic the local behavior of the true distribution function at the location of \u03b60. As a result, the proposed approach yields asymptotically valid CIs for \u03b60. Furthermore, the asymptotic theory is also validated through simulation studies with reasonable sample sizes. The rest of this paper is organized as follows. In Section 2 we describe the model setup and introduce di\ufb00erent bootstrap schemes. In Section 3, we state a series of convergence results. In Section 4 we study the inconsistency of the standard bootstrap methods, including sampling from the ED, and we prove the consistency of the smooth and the m-out-of-n bootstrap procedures. We compare the \ufb01nite sample performance of di\ufb00erent bootstrap methods through a simulation study in Section 5. Proofs of the main theorems are in Section 6. Proofs of several lemmas are provided in the Appendix." } ], "Chun Wang": [ { "url": "http://arxiv.org/abs/2305.04528v1", "title": "Precise Masses, Ages of ~1.0 million RGB and RC stars observed by the LAMOST", "abstract": "We construct a catalogue of stellar masses and ages for 696,680 red giant\nbranch (RGB) stars, 180,436 primary red clump (RC) stars, and 120,907 secondary\nRC stars selected from the LAMOST\\,DR8. The RGBs, primary RCs, and secondary\nRCs are identified with the large frequency spacing ($\\Delta \\nu$) and period\nspacing ($\\Delta P$), estimated from the LAMOST spectra with spectral SNRs $>\n10$ by the neural network method supervised with the seismologic information\nfrom LAMOST-Kepler sample stars. The purity and completeness of both RGB and RC\nsamples are better than 95\\% and 90\\%, respectively. The mass and age of RGBs\nand RCs are determined again with the neural network method by taking the\nLAMOST-Kepler giant stars as the training set. The typical uncertainties of\nstellar mass and age are, respectively, 10\\% and 30\\% for the RGB stellar\nsample. For RCs, the typical uncertainties of stellar mass and age are 9\\% and\n24\\%, respectively. The RGB and RC stellar samples cover a large volume of the\nMilky Way (5 $< R < 20$\\,kpc and $|Z| <$\\,5\\,kpc), which are valuable data sets\nfor various Galactic studies.", "authors": "Chun Wang, Yang Huang, Yutao Zhou, Huawei Zhang", "published": "2023-05-08", "updated": "2023-05-08", "primary_cat": "astro-ph.GA", "cats": [ "astro-ph.GA", "astro-ph.SR" ], "main_content": "The LAMOST Galactic survey (Deng et al. 2012; Liu et al. 2014; Zhao et al. 2012) is a spectroscopic survey to obtain over 10 million stellar spectra. 11,214,076 optical (3700\u20139000 \u00c5) lowresolution spectra (R\u223c1800) have been released by March 2021, of which more than 90 per cent are stellar spectra. The value-added catalogue of LAMOST DR8 by Wang et al. (2022) provide stellar atmospheric parameters (Teff, log g, [Fe/H] and [M/H]), chemical elemental abundance to metal or iron ratios ([\u03b1/M], [C/Fe], [N/Fe]), absolute magnitudes of 14 photometric bands, i.e., G, Bp, Rp of Gaia, J, H, Ks of 2MASS, W1, W2 of WISE, B, V, r of APASS and g, r, i of SDSS, and spectro-photometric distances for 4.9 million unique stars targeted by LAMOST DR8. This value-added catalogue of LAMOST DR8 is publicly available at the CDS and the LAMOST official website 2. 
For stars with spectral signal-to-noise ratios (SNRs) larger than 50, precisions of Teff, log g, [Fe/H], [M/H], [C/Fe], [N/Fe] and [\u03b1/M] are 85 K, 0.10 dex, 0.05 dex, 0.05 dex, 0.05 dex, 0.08 dex and 0.03 dex, respectively. The typical uncertainties of 14 band\u2019s absolute magnitudes are 0.16\u20130.22 mag for stars with spectral SNRs larger than 50, corresponding to a typical distance uncertainty of around 10 %. This stellar sample in the catalogue contains main-sequence stars, main-sequence turnoff stars, sub-giant stars and giant stars of all evolutionary stages (e.g. RGB, RC, AGB etc.). We can select samples of RGB and RC from the catalogue as the first step. 3. Selection of RGBs and RCs Before separating RGBs and RCs, we first select giant stars using the Teff and log g provided by Wang et al. (2022). By cuts of Teff \u22645800 K and log g \u22643.8, a total of about 1.3 million \u2264 \u2264 1 The primary RC stars are ignited degenerately as the descendants of low-mass stars (typically smaller than 2 M\u2299), while the secondary RC stars (ignited non-degenerately) are the descendants of high-mass stars (typically larger than 2 M\u2299). 2 The catalogue will be available at the LAMOST official website via http://www.lamost.org/dr8/v1.0/doc/vac. giant stars are selected. RGB and RC are stars with burning hydrogen in a shell around an inert helium core and stars with helium-core and hydrogen-shell burning, respectively. However, the RGBs and RCs occupy overlapping parameter spaces in a classical Herzsprung\u2013Russell diagram (HRD). This is due to the fact that the RGBs and RCs can have quite similar surface characteristics such as effective temperature, surface gravity and luminosity (Elsworth et al. 2017; Wu et al. 2019). The classical method of classifying RGB and RC is based on the Teff\u2013log g\u2013 [Fe/H] relation as well as colour-metallicity diagram. The typical contamination rate of RC from RGB stars using this method could be better than \u223c10% (Bovy et al. 2014; Huang et al. 2015, 2020). Now, asteroseismology has become the gold standard for separating RGBs and RCs (Montalb\u00e1n et al. 2010; Bedding et al. 2011; Mosser et al. 2011, 2012; Vrard et al. 2016; Hawkins et al. 2018). Although RGBs and RCs have similar surface characteristics, they have totally different interior structures. The solarlike oscillations in red giant stars are excited and intrinsically damped by turbulence in the envelopes near-surface convection and can have acoustic (p-mode) and gravity (g-mode) characteristics (Chaplin & Miglio 2013; Wu et al. 2019). The p-mode and g-mode are always associated with the stellar envelope with pressure as the restoring force and the inner core with buoyancy as the restoring force, respectively. Mixed modes character also exists, displaying g-mode-like behaviour in the central region of a star, and p-mode-like behaviour in the envelope. The evolved red giant stars always show mixed mode. The core density of RCs is lower than that of RGBs with the same luminosity, which causes a significantly stronger coupling between gand p-modes and leads to larger period spacing (Bedding et al. 2011; Wu et al. 2019). Thus one can distinguish RGBs and RCs from the period spacing (\u2206P). There are two kinds of RCs, the primary RC stars (formed from lower-mass stars) and the secondary RC stars (formed from more massive stars). The secondary RC stars have larger \u2206\u03bd compared to primary RC stars. 
Thus, the asteroseismology is a powerful tool to separate RGBs, primary RCs and secondary RCs (Bedding et al. 2011; Stello et al. 2013; Pinsonneault et al. 2014; Vrard et al. 2016; Elsworth et al. 2017; Wu et al. 2019), the typical contamination rate of RC from RGB using the method is only 3% (Ting et al. 2018; Wu et al. 2019). The photospheric abundances reflect the interior structure of stars via the efficacy of extra-mixing on the upper RGB(Martell et al. 2008; Masseron & Gilmore 2015; Masseron et al. 2017; Masseron & Hawkins 2017; Ting et al. 2018). Thus, one can directly derive \u2206\u03bd and \u2206P from low-resolution spectra with a data-driven method using stars with accurate estimates of \u2206\u03bd and \u2206P (Ting et al. 2018; Wu et al. 2019) as training stars and finally separate RGBs, primary RCs and secondary RCs. The \u2206\u03bd and \u2206P of about 6100 Kepler giants are estimated through the Fourier analysis of their light curves by Vrard et al. (2016). Amongst them, 2662 unique stars have a good quality of LAMOST spectrum (S/N > 50), including 826 RGBs, 1599 primary RCs and 237 secondary RCs, which is a good training sample for deriving \u2206\u03bd and \u2206P from LAMOST spectra. Here, we select 1800 stars as training stars to estimate \u2206\u03bd and \u2206P from LAMOST spectra. Other stars are selected as testing stars. First, we pre-process LAMOST spectra and build up a neural network model to construct the relation between the pre-processed LAMOST spectra and asteroseismic parameters, i.e., \u2206\u03bd and \u2206P. Then we estimate \u2206\u03bd and \u2206P for all testing stars using LAMOST spectra based on the neural network model. The pre-processing of LAMOST spectra and neural network model are the same as that of Wang et al. (2022). As discussed in Wang et al. (2022), Article number, page 2 of 13 C. Wang: Masses and Ages of red giant stars we only use spectra in the wavelength range of 3900-6800 \u00c5 and 8450\u20138950 \u00c5, because of the low-quality spectra in the 3700\u2013 3900 \u00c5 for most of the stars and serious background contamination (including sky emission lines and telluric bands) and very limited e\ufb00ective information in the spectra of 6800\u20138450 \u00c5. Following Wang et al. (2022), a neural network contains three layers is build up, which can be written as: P = \u03c9\u03c3(\u03c9 \u2032 i\u03c3(\u03c9 \u2032\u2032 j\u03c3(\u03c9\u03bbk f\u03bb + bk) + bj) + b \u2032 i), (1) where P is the asteroseismic parameters \u2206\u03bd or \u2206P; \u03c3 is the Relu activation function; \u03c9 and b are weights and biases of the network to be optimized; the index i, j and k denote the number of neurons in the third, second and \ufb01rst layer; and \u03bb denotes the wavelength pixel. The neurons for the \ufb01rst, second and third layers are respectively 512, 256 and 64. The architecture of the neural network is empirically adjusted based on the performance of both training and testing samples. A delicate balance must be struck between achieving high accuracy of parameters and avoiding over\ufb01tting the training data. The training process is carried out with the Tensor flow package in Python. After estimating asteroseismic parameters for training and testing samples, we compare our frequency spacing and period spacing (\u2206\u03bdNN and \u2206PNN) with these provided by Vrard et al. (2016) for testing stars, as shown in Fig. 1. We can \ufb01nd that our \u2206\u03bdNN and \u2206PNN match well with these of Vrard et al. (2016). 
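The text states that the three-layer network of Eq. (1), with 512, 256 and 64 neurons and ReLU activations, is trained with TensorFlow in Python. As an illustration only, a minimal Keras-style sketch of such a network is given below; the placeholder arrays, optimizer, learning rate, loss and number of epochs are assumptions for the example and are not specified in the text.

```python
import numpy as np
import tensorflow as tf

# Placeholder data with plausible shapes (to be replaced by the pre-processed
# LAMOST fluxes and the Kepler delta_nu or delta_P labels); purely illustrative.
rng = np.random.default_rng(0)
flux_train = rng.normal(size=(1800, 3000)).astype("float32")
dnu_train = rng.uniform(1.0, 15.0, size=1800).astype("float32")
flux_test = rng.normal(size=(862, 3000)).astype("float32")

def build_seismic_network(n_pixels):
    """Three fully connected hidden layers (512/256/64, ReLU) mapping a
    pre-processed flux vector to a single asteroseismic parameter."""
    return tf.keras.Sequential([
        tf.keras.layers.Input(shape=(n_pixels,)),
        tf.keras.layers.Dense(512, activation="relu"),
        tf.keras.layers.Dense(256, activation="relu"),
        tf.keras.layers.Dense(64, activation="relu"),
        tf.keras.layers.Dense(1),  # linear output for delta_nu or delta_P
    ])

model = build_seismic_network(n_pixels=flux_train.shape[1])
model.compile(optimizer=tf.keras.optimizers.Adam(1e-4), loss="mse")
model.fit(flux_train, dnu_train, validation_split=0.1,
          epochs=200, batch_size=64, verbose=0)
dnu_pred = model.predict(flux_test).ravel()
```

In practice one separate model of this kind is trained per label, as described above for $\Delta\nu$ and $\Delta P$.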
Our neural network could derive precise frequency spacing and period spacing from LAMOST spectra. The \u2206P of RC and RGB stars are very di\ufb00erent, they are \u223c 300s and \u223c70s, respectively (Bedding et al. 2011; Ting et al. 2018). Fig. 2 shows the stellar distributions on the plane of \u2206\u03bd and \u2206P derived by neural network method for training and testing sample. The plots clearly show that a gap of \u2206P between the RGB and RC stellar populations is visible for both training and testing samples. It illustrates that the spectroscopically estimated frequency spacing and period spacing (\u2206\u03bd and \u2206P ) using our neural network method, especially \u2206P, could allow one to separate the RC stars from the RGB stars. Here we identify giant stars with \u2206P > 150 s as RCs, and giant stars with \u2206P \u2264150 s as red giant stars. Fig. 2 also suggests that it is also possible to classify primary and secondary RC stars using the spectroscopically estimated frequency spacing and period spacing. RC stars with \u2206\u03bd > 6\u00b5Hz can be classi\ufb01ed as secondary RC stars (Yang et al. 2012; Ting et al. 2018). To separate the primary RCs from the secondary RCs, we select \u2206P > 150 s and \u2206\u03bd > 5\u00b5Hz as secondary RCs, \u2206P > 150 s and \u2206\u03bd \u22645\u00b5Hz as primary RCs. The completeness of RGB, primary RC, and secondary RC are respectively 99.2%(280/282), 96.8%(485/501), and 62%(49/79) examined by the testing set. The purity of RGB, primary RC, and secondary RC are respectively 98.9% (280/283), 94.2% (485/515), and 79% (49/62) examined again by the testing set. We adopt the constructed neural network model to \u223c1.3 million LAMOST spectra collected by LAMOST DR8 of giant stars to estimate their spectroscopic frequency spacing and period spacing. With the aforementioned criteria of separating RGBs, primary RCs and secondary RCs, 937,082, 244,458, and 167,105 spectra for 696,680 unique RGBs, 180,436 unique primary RCs and 120,907 unique secondary RCs with spectral SNRs > 10 are found. For a star with multiple observed spectra, the classi\ufb01cation, mass and age derived by the spectrum with the highest spectral SNR are suggested. 4. Ages and masses of RGBs and RCs In this Section, we estimate the masses and ages of our RGBs, primary RCs and secondary RCs selected from the LAMOST low-resolution spectroscopic survey. Doing so, we \ufb01rst estimate the masses and ages using the respective asteroseismic information and isochrone \ufb01tting method, for LAMOST-Kepler giant stars. Then we estimate masses and ages for all LAMOST DR8 giant stars, using the neural network method by taking the LAMOST-Kepler giant stars as the training set. For both RGBs and RCs, we build up two di\ufb00erent neural network models to estimate ages and masses, respectively. 4.1. Ages and masses of LAMOST-Kepler training and testing stars Masses of RGBs and RCs can be accurately measured from the asteroseismic information achieved by Kepler mission. We cross-match our RGB and RC samples to the Kepler sample with accurate asteroseismic parameters measured by Yu et al. (2018), using light curves of stars provided by the Kepler mission. In total, 3,220 and 3,276 common RGBs and RCs are found with both high-quality spectra (SNR>50), well-derived spectroscopic and precise asteroseismic parameters. 
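The selection thresholds quoted in Section 3 ($\Delta P = 150$ s to separate RGB from RC stars, and $\Delta\nu = 5\,\mu$Hz to separate primary from secondary RC stars) amount to a simple decision rule. The small helper below is only an illustration of those cuts; the function name and return labels are arbitrary.

```python
def classify_giant(delta_nu, delta_p):
    """Classify a giant star from its spectroscopically inferred large
    frequency spacing (delta_nu, in microHz) and period spacing
    (delta_p, in seconds), following the cuts described in Section 3."""
    if delta_p <= 150.0:
        return "RGB"           # small period spacing: shell H burning
    if delta_nu > 5.0:
        return "secondary RC"  # He-core burning, higher-mass progenitor
    return "primary RC"        # He-core burning, lower-mass progenitor

# Example: a star with delta_p ~ 300 s and delta_nu ~ 3 microHz
print(classify_giant(3.0, 300.0))  # -> "primary RC"
```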
For these RGB and RC common stars, we determine their asteroseismic masses using modi\ufb01ed scaling relations, M M\u2299 = ( \u2206\u03bd f\u2206\u03bd\u2206\u03bd\u2299 )\u22124( \u03bdmax \u03bdmax,\u2299 )3( Te\ufb00 Te\ufb00,\u2299 )3/2, (2) Here the Solar values of Te\ufb00,\u2299, \u03bdmax,\u2299and \u2206\u03bd\u2299are 5777 K, 3090 \u00b5Hz and 135.1 \u00b5Hz (Huber et al. 2011), respectively. Where f\u2206\u03bd, the correction factor can be obtained for all stars using the publicly available code Asfgrid11 provided by Sharma et al. (2016). With this modi\ufb01ed scaling relation, the masses of these 3,220 and 3,276 RGB and RC common stars are derived with precise \u2206\u03bd and \u03bdmax provided by Yu et al. (2018) and effective temperature Te\ufb00taken from Wang et al. (2022). Fig. 3 shows the masses distributions estimated from the asteroseismic and spectroscopic stellar atmospheric parameters, as well as their associated uncertainties. The plot clearly shows that the typical uncertainties of masses of RGBs and RCs are respectively 4.1 and 8.4 per cent. There are several stars with a mass lower than 0.7 M\u2299as shown in Fig. 3, especially for RCs. Some of these stars may be post-mass-transfer helium-burning stars (Li et al. 2022b), which may be in binary systems, su\ufb00ered a mass-transfer history with a mass-loss of about 0.1\u20130.2 M\u2299. We make a crossmatch to the list of post-mass-transfer helium-burning red giants provided by Li et al. (2022b) and \ufb01nd twelve common stars, nine of them are stars with M < 0.7 M\u2299. In the current work, we did not consider such an extreme mass-loss case and thus the readers should be careful of the parameters of a star with a mass smaller than 0.7 M\u2299. We further estimate the ages of these common stars based on the stellar isochrone \ufb01tting method using the aforementioned estimated masses and the e\ufb00ective temperatures, metallicities and surface gravities provided by Wang et al. (2022). Similar to that of Huang et al. (2020), we adopt the PARSEC isochrones calculated with a mass-loss parameter \u03b7Reimers = 0.2 (Bressan et al. 2012) as the stellar evolution tracks. Here, we adopt one Bayesian approach similar to Xiang et al. (2017) and Huang et al. (2020) when doing isochrone \ufb01tting. The mass M, e\ufb00ective temperatures Te\ufb00, metallicity [M/H] and surface gravity log g are the input constraints. Article number, page 3 of 13 A&A proofs: manuscript no. age_mass Fig. 1. The comparisons of \u2206\u03bd and \u2206P provided by Vrard et al. (2016) (X-axis) using Kepler data and that derived by neural network model (Y-axis) for 1, 800 training (black dots) and 862 testing stars (red dots). The values of the mean and standard deviation of the di\ufb00erences are labelled in the bottom-right corner of each panel. Fig. 2. The stellar distributions on the plane of \u2206\u03bd-\u2206P derived by neural network method for 1,800 training (left panel) and 862 testing stars (right panel). The black, red and blue dots show the 826 RGBs, 1,599 primary RCs and 237 secondary RCs classi\ufb01ed by Vrard et al. (2016) using Kepler data, respectively. The blue dashed line (\u2206P = 150 s) is plotted to separate RGBs and RCs. The black dashed line (\u2206\u03bd = 5\u00b5Hz) is plotted to separate primary RCs and secondary RCs. Fig. 3. Distributions of 3,220 LAMOST-Kpler common RGBs (left panels) and 3,276 LAMOST-Kpler common RCs (right panels) on the M\u2013 \u2206M/M plane. 
M and \u2206M are the estimated masses and their associated uncertainties. The red dot presents the median values in each mass bin with a bin size of 0.1 M\u2299. Green dots are stars with M < 0.7 M\u2299. Blue stars show the post-mass-transfer helium-burning stars identi\ufb01ed by Li et al. (2022b). Distributions of masses and their associated uncertainties are over-plotted using histograms. Article number, page 4 of 13 C. Wang: Masses and Ages of red giant stars The posterior probability distributions as a function of age for three stars of typical ages for both RGB and RC stars are shown in Fig. 4. As shown in Fig. 4, we can \ufb01nd that the age distributions show prominent peaks, which suggest that the resultant ages are well constrained, especially thanks to the precise mass estimates from asteroseismology information. Fig. 5 shows the distributions of ages, as well as the uncertainties, of RGBs and RCs, estimated using the isochrone \ufb01tting method. Both the RGB and RC samples cover a whole range of possible ages of stars, from close to zero to 13 Gyr (close to the universe age). The typical age uncertainties of RGB and RC stars are 14 and 23 per cent, respectively. There are only a small fraction stars that have age uncertainties larger than 40 per cent. Larger age uncertainties of RCs compared to RGBs are the consequence of larger mass uncertainties of RCs. It is noted that the relationship between age and its associated uncertainty of RGBs is opposite to that of RCs at \u03c4 < 8 Gyr, which may be the consequence of the larger variations of mass uncertainty of RCs compared to that of RGBs at 1.0 < M < 1.5 M\u2299. The mass uncertainty decrease 1.7% and 0.02% when mass decrease from 1.55 M\u2299 (\u03c4 \u223c2.0 Gyr) to 1.05 M\u2299(\u03c4 \u223c9.0 Gyr) for RCs and RGBs, respectively. Large mass uncertainties produce large age uncertainties (\u2206\u03c4), thus older RGBs and RCs have respectively smaller and larger relative age uncertainties (\u2206\u03c4/\u03c4). After estimating the masses and ages of these 3,103 and 3,114 RGB and RC common stars, we divide both RGB and RC common stars into two sub-samples, a training and a testing subsample. The training sample of RGB and RC contains 1,493 and 2,499 stars, respectively. Other 1,610 and 615 RGB and RC stars form the testing samples. When selecting training stars, the number of training stars in each age bin is the same as each other as much as possible. As shown in Fig. 5, the RGB sample contains fewer old stars as compared to the RC sample. This is the same for the training set. Fig. 6 shows the distributions of the RGB, RC training samples and testing samples on the [M/H]\u2013[\u03b1/Fe] plane, colour-coded by mass and age. 4.2. Age and mass determinations of RGB and RC stars in LAMOST DR8 After building up the training and testing stars with accurate masses and ages, we derive masses and ages of RGB and RC stars from the LAMOST DR8 spectra, using the neural network method, which is detailed described in Section 3.1. The aforementioned RGB and RC training stars are adopted as training samples. After pre-processing the LAMOST spectra (same as that of Wang et al. (2022)) of all the training and testing stars, we build up the model to construct the relation between the preprocessed LAMOST spectra and mass/age for RGBs and RCs. 
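For concreteness, the modified scaling relation of Eq. (2) used in Section 4.1, together with the solar reference values quoted there ($\Delta\nu_\odot = 135.1\,\mu$Hz, $\nu_{\rm max,\odot} = 3090\,\mu$Hz, $T_{\rm eff,\odot} = 5777$ K), can be transcribed directly as a short function. This is only a sketch: the correction factor $f_{\Delta\nu}$ must be obtained externally (e.g., from Asfgrid, Sharma et al. 2016) and is treated here as a plain input, and the example numbers are illustrative.

```python
DNU_SUN = 135.1      # solar large frequency spacing, muHz
NUMAX_SUN = 3090.0   # solar frequency of maximum power, muHz
TEFF_SUN = 5777.0    # solar effective temperature, K

def scaling_mass(delta_nu, nu_max, teff, f_delta_nu=1.0):
    """Asteroseismic mass in solar units from the modified scaling relation
    of Eq. (2): (delta_nu / (f * dnu_sun))**-4 * (nu_max / numax_sun)**3
    * (teff / teff_sun)**1.5. Inputs in muHz and K."""
    return ((delta_nu / (f_delta_nu * DNU_SUN)) ** -4
            * (nu_max / NUMAX_SUN) ** 3
            * (teff / TEFF_SUN) ** 1.5)

# Illustrative values for a red-clump-like star; yields roughly 1.3 (solar units).
print(scaling_mass(delta_nu=4.0, nu_max=35.0, teff=4750.0, f_delta_nu=0.98))
```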
Our neural network models to estimate masses and ages also contain three layers, the total number of neurons for the \ufb01rst, second, and third layers are respectively 512, 256, and 64, which is the same as the neural network models to estimate asteroseismic parameters in Section 3. Fig. 7 shows the comparison of mass estimated with scaling relation (MS ) in Section 4.1 and mass derived by neural network model (MNN) using LAMOST spectra for testing RGB and RC stars. We can \ufb01nd that MNN match well with MS for individual stars. The systematic uncertainties are very small, the standard deviations of estimated mass are 10 per cent and 9.6 per cent for RGBs and RCs, respectively. Fig. 8 shows the comparison of age estimated with the isochrone \ufb01tting method (\u03c4ISO) in Section 4.1 and age derived by neural network model (\u03c4NN) using LAMOST spectra for testing RGB and RC stars. We can \ufb01nd that the \u03c4NN is consistent with the \u03c4ISO for individual stars. The \u03c4NN are overestimated about 10 per cent for RGB stars. The systematic uncertainties of \u03c4NN of RCs are very small. The standard deviations of estimated age are 30 per cent and 24 per cent for RGBs and RCs, respectively. Figs. 7 or 8 also show the comparisons between MNN and MS or \u03c4NN and \u03c4ISO for training stars. The MNN (\u03c4NN) is well in agreement with MS (\u03c4ISO). The standard deviations of estimated mass are respectively 5.9 per cent and 9.4 per cent for RGB and RC training stars. For age estimates of RGB and RC training stars, the standard deviations are 26.6 per cent and 19.4 per cent, respectively. In summary, our neural networks could derive precise mass and age for RGB and RC stars from LAMOST spectra. Finally, we adopt our neural network models to 937,082, 244,458, and 167,105 RGB, primary RC and secondary RC stellar spectra and derive their spectroscopic masses and ages. 5. Validation of the spectroscopic masses and ages of RGBs and RCs In this section, we \ufb01rst examine the uncertainties of the masses and ages of RGBs and RCs using the neural network method through internal comparisons. Secondly, through external comparisons, we further examine the uncertainties, including the systematic biases, of masses and ages. 5.1. The uncertainties of estimated masses and ages of RGBs and RCs The uncertainties of estimated masses and ages depend on the spectral noise (random uncertainties) and method uncertainties. As discussed in Huang et al. (2020) and Wang et al. (2022), the random uncertainties of masses and ages are estimated by comparing results derived from duplicate observations of similar spectral SNRs (di\ufb00ered by less than 10 per cent) collected during di\ufb00erent nights. The relative residuals of mass and age estimate (after divided by \u221a 2) along with mean spectral SNRs are shown in Fig. 9. To properly obtain random uncertainties of age and mass, we \ufb01t the relative residuals with the equation similar to Huang et al. (2020): \u03c3r = a + c (SNR)b , (3) where \u03c3r represents the random uncertainty. Besides the random uncertainties, method uncertainties (\u03c3m) are also considered when we estimate the uncertainties of stellar parameters. The \ufb01nal uncertainties are given by p \u03c32 r + \u03c32 m. The method uncertainties are provided by the relative residuals between our estimated results and the asteroseismic results of the training sample. For the age and mass of RGB stars, the method uncertainties are 26.6 and 5.9 per cent as shown in Figs. 
7 and 8, respectively. The method uncertainties of age and mass are 19.4 and 9.4 per cent for RC stars, as shown in Figs. 7 and 8, respectively.

Fig. 4. The posterior probability distributions as a function of age for three stars of typical ages for both RGB (top panels) and RC (bottom panels) stars. The blue solid lines denote the median values of the stellar ages. The blue dashed lines show the standard deviations of the stellar ages. The six stars are (top row) KIC 9597653 (2.9 ± 0.3 Gyr), KIC 5698093 (6.55 ± 0.75 Gyr) and KIC 5701829 (9.1 ± 1.2 Gyr), and (bottom row) KIC 5613570 (3.0 ± 0.45 Gyr), KIC 5358060 (6.35 ± 0.95 Gyr) and KIC 5553045 (9.0 ± 1.35 Gyr).

Fig. 5. Distributions of 3,220 LAMOST-Kepler common RGBs (left panels) and 3,276 LAMOST-Kepler common RCs (right panels) on the τ–Δτ/τ plane. τ and Δτ are the estimated ages using the isochrone fitting method and their associated uncertainties. The red dot presents the median value in each age bin with a bin size of 1.0 Gyr. Green dots are stars with M < 0.7 M⊙. Blue stars show the post-mass-transfer helium-burning stars identified by Li et al. (2022b). Distributions of ages and their associated uncertainties are over-plotted using histograms.

5.2. The ages of members of open clusters

Stars in an open cluster are believed to form almost simultaneously from a single gas cloud, thus they have almost the same age, metallicity, distance and kinematics. Open clusters are therefore a good test bed to check the accuracy of our ages. Zhong et al. (2020a) and Zhong et al. (2020b) provide 8,811 cluster members of 295 clusters observed by LAMOST, obtained by cross-matching the open cluster catalogue and the member stars provided by Cantat-Gaudin et al. (2018) with LAMOST DR5. Amongst these clusters, open clusters with a wide age range (1–10 Gyr) are carefully selected for checking our age estimates. As shown in Fig. 10, our ages of open clusters are consistent with the literature values for both RGB and RC stars, from the young end (∼2 Gyr) to the old end (∼10 Gyr). Table 1 presents our age estimates and the literature age values.

5.3. Comparison of masses and ages of RGB and RC stars with previous results

A catalogue containing accurate stellar ages for ∼0.64 million RGB stars targeted by the LAMOST DR4 has been provided by Wu et al. (2019). Sanders & Das (2018) have determined distances and ages for ∼3 million stars with astrometric information from Gaia DR2 and spectroscopic parameters from massive spectroscopic surveys, including the APOGEE, Gaia-ESO, GALAH, LAMOST, RAVE, and SEGUE surveys.

Fig. 6. Distributions of LAMOST-Kepler 1,493 training RGB stars (left panels) and 2,499 training RC stars (right panels) on the [Fe/H]–[α/Fe] plane, colour-coded by stellar age (top panels) and mass (bottom panels). Here the values of [Fe/H] and [α/Fe] come from the value-added catalogue of LAMOST DR8 (Wang et al. 2022).
Fig. 7. The comparisons of mass estimated with the scaling relation (MS) in Section 4.1 and mass derived by the neural network model (MNN) using LAMOST spectra for the LAMOST-Kepler RGB (left panel) and RC (right panel) testing sample (red dots; 1,610 RGBs, 615 RCs) and training sample (black dots; 1,493 RGBs, 2,499 RCs). The µ and σ are the mean and standard deviation of (MS − MNN)/MS, as marked in the bottom-right corner of each panel.

We cross-match our catalogue with their catalogues and obtain ∼97,000 and 83,000 common RGB stars (SNR ≥ 50, 2.0 < log g < 3.2 dex, 4000 < Teff < 5500 K) with the catalogues of Wu et al. (2019) and Sanders & Das (2018), respectively. The comparisons are shown in Fig. 11. Generally, our ages are younger than those estimated by Wu et al. (2019). In particular, for the old part, our ages are younger than the ages of Wu et al. (2019) by 2–3 Gyr. This is mainly due to the different choices of isochrone database between this work and Wu et al. (2019), who adopt the isochrones from PARSEC and Yonsei–Yale (Y2; Demarque et al. 2004), respectively. The lack of old (low mass: M < 1.0 M⊙) RGB training stars in this work may also cause our relatively younger age estimates. Our estimated ages agree well with those estimated by Sanders & Das (2018).

Fig. 8. The comparisons of age estimated with isochrone fitting (τISO) in Section 4.1 and age derived by the neural network model (τNN) using LAMOST spectra for the LAMOST-Kepler 3,103 RGB (left panel) and RC (right panel) stars in the testing sample (red dots; 1,610 RGBs, 615 RCs) and training sample (black dots; 1,493 RGBs, 2,499 RCs). The µ and σ are the mean and standard deviation of (τISO − τNN)/τISO, as marked in the bottom-right corner of each panel.

Fig. 9. Relative internal residuals of estimated masses (left panels) and ages (right panels) using the neural network method given by duplicate observations of similar spectral SNRs for LAMOST RGB (top panels) and RC (bottom panels) stars. Black dots are the differences of duplicate observations with SNR differences smaller than 10%. Blue dots and error bars represent the median values and standard deviations (after being divided by √2) of the relative residuals in the individual spectral SNR bins. Red lines indicate fits of the standard deviations as a function of spectral SNR.

Montalbán et al. (2021) have provided precise stellar ages (∼11%) and masses for 95 RGBs observed by the Kepler space mission by combining asteroseismology with the kinematics and chemical abundances of stars. There are 62 common stars between our catalogue and their list. As shown in Fig. 12, the ages of old stars with τ > 8 Gyr are underestimated by 2–3 Gyr compared to the results of Montalbán et al. (2021), because the masses of these stars are overestimated by ∼0.1 M⊙. Montalbán et al. (2021) estimate the masses adopting the individual frequencies of radial modes as observational constraints, which may provide more accurate masses (Lebreton & Goupil 2014; Miglio et al. 2017; Rendle et al. 2019; Montalbán et al. 2021). Building up a training sample containing enough RGBs with accurate age and mass estimates adopting individual frequencies as observational constraints may dramatically improve the determination of age and mass for RGBs using machine learning methods.
A potential challenge is the lack of precise individual frequencies for a larger sample of red giant stars (∼2000–6000 stars). This is, however, beyond the scope of this paper. The precise distances, masses, ages and 3D velocities for ∼140,000 primary RCs selected from the LAMOST have been determined by our former effort (Huang et al. 2020). The catalogue of Sanders & Das (2018) also contains a large number of RC stars with age estimates. Our catalogue has ∼52,000 and 72,000 RC stars in common with these two catalogues, respectively.

Fig. 10. Comparisons of our estimated ages (τNN) of open clusters with literature values (τLIT) for LAMOST RGBs (left panel) and RCs (right panel).

Fig. 11. Comparisons of our estimated ages with these provided by Wu et al. (2019) (left panel) and Sanders & Das (2018) (right panel) for 97,112 and 83,199 common RGB stars with SNRs > 50, respectively.

Fig. 12. Comparisons of our estimated ages and masses with these provided by Montalbán et al. (2021) for 62 common RGB stars.

Table 1. Age values of open clusters estimated using our neural network and the literature age values for both LAMOST RGB and RC stars. The numbers of RGB and RC cluster members are also presented. Columns: Cluster, LIT age (Gyr), age of RGB (Gyr), number of RGB, age of RC (Gyr), number of RC.", "introduction": "The Milky Way (MW) is an excellent laboratory to test galaxy formation and evolution histories, because it is the only galaxy in which the individual stars can be resolved in exquisite detail. One can study the properties of the stellar populations and the assemblage history of the Galaxy using multidimensional information, including but not limited to accurate ages, masses, stellar atmospheric parameters, chemical abundances, 3D positions, and 3D velocities. Thanks to a number of large-scale spectroscopic surveys, e.g., the RAVE (Steinmetz et al. 2006), the SEGUE (Yanny et al. 2009), the LAMOST Experiment for Galactic Understanding and Exploration (LEGUE; Deng et al. 2012; Liu et al. 2014; Zhao et al. 2012), the Galactic Archaeology with HERMES (GALAH; De Silva et al. 2015), the Apache Point Observatory Galactic Evolution Experiment (APOGEE; Majewski et al. 2017) and the Gaia mission (Gaia Collaboration et al. 2016), accurate stellar atmospheric parameters, chemical element abundance ratios, 3D positions and 3D velocities are now available for a huge sample of Galactic stars. The age of stars can hardly be measured directly but can generally be derived indirectly from photometric and spectroscopic data in combination with stellar evolutionary models (Soderblom 2010). Many methods have been explored to estimate the ages of stars at different evolutionary stages. Matching with stellar isochrones by stellar atmospheric parameters yielded by spectra is adopted to estimate the ages for main-sequence turnoff and subgiant stars (e.g., Xiang et al. 2017; Sanders & Das 2018), and the uncertainties of the estimated ages are ∼20%–30%.
It is not easy to infer the age information for evolved red giant stars with the isochrone fitting method due to the mixing of stars of various evolutionary stages on the Hertzsprung–Russell (HR) diagram. The degeneracy can be broken if the stellar mass is determined independently. In this manner, age estimates with a precision of about 10%–20% have been achieved for subgiants (Li et al. 2020), red giants (Tayar et al. 2017; Wu et al. 2018; Bellinger 2020; Li et al. 2022a; Wu et al. 2023) and red clump stars (Huang et al. 2020) in the Kepler fields, since these stars have precise asteroseismic masses with uncertainties of 8%–10% (e.g., Huber et al. 2014; Yu et al. 2018). Those asteroseismic masses and ages are adopted as training data to estimate the masses and ages directly from stellar spectra using a data-driven method (e.g. Ness et al. 2016; Ho et al. 2017; Ting et al. 2018; Wu et al. 2019; Huang et al. 2020). The physics behind the estimates of spectroscopic mass from the data-driven method is that stellar spectra carry key features that determine the carbon-to-nitrogen abundance ratio [C/N], which is tightly correlated with stellar mass as a result of the convective mixing through the CNO cycle (e.g., the first dredge-up process). Of course, [C/N] ratios deduced from optical/infrared spectra can also be used to derive ages directly through a trained regression relation between [C/N] and mass (e.g., Martig et al. 2016a; Ness et al. 2016). By March 2021, the Low-Resolution Spectroscopic Survey of LAMOST DR8 had released 11,214,076 optical (3700–9000 Å) spectra with a resolving power of R ∼ 1800, of which more than 90 per cent are stellar spectra. The classifications and radial velocity Vr measurements for these spectra are provided by the official catalogue of LAMOST DR8 (Luo et al. 2015). The accurate stellar parameters, including the atmospheric stellar parameters, chemical element abundance ratios, absolute magnitudes in 14 bands and photometric distances, have been provided by Wang et al. (2022). In this work, we have applied a technique similar to that used by Ting et al. (2018) to separate red giant branch (RGB) stars and red clump (RC) stars. The latter can be further divided into primary RC stars and secondary RC stars according to their stellar masses (e.g., Bovy et al. 2014; Huang et al. 2020). The ages and masses of these separated RGBs, primary RCs and secondary RCs will be estimated using the neural network machine learning technique by adopting LAMOST–Kepler common stars as the training sample. Other stellar parameters estimated by Wang et al. (2022), together with the information from the LAMOST DR8 and the astrometric information from Gaia EDR3 (Gaia Collaboration et al. 2020), are also provided. The paper is organized as follows. Section 2 introduces the data employed in this work. Section 3 presents the selections of RGBs, primary RCs and secondary RCs. In Section 4, we present estimates of ages and masses derived from LAMOST spectra using the neural network technique. A detailed uncertainty analysis is presented in Section 5. In Section 6, we introduce the properties of our RGB, primary RC and secondary RC stellar samples. Finally, a summary is given in Section 7.
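As a rough illustration of the data-driven step described above (regressing asteroseismic masses and ages of LAMOST–Kepler common stars against their LAMOST spectra), the following scikit-learn sketch trains a small feed-forward network; the file names, network architecture and preprocessing are illustrative assumptions, not the configuration adopted in this work.

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPRegressor
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# spectra: (n_stars, n_pixels) continuum-normalised fluxes (placeholder file)
# labels : (n_stars, 2) asteroseismic [mass, age] of the Kepler training stars
spectra = np.load("lamost_kepler_spectra.npy")
labels = np.load("lamost_kepler_mass_age.npy")

X_train, X_test, y_train, y_test = train_test_split(
    spectra, labels, test_size=0.3, random_state=0)

model = make_pipeline(
    StandardScaler(),
    MLPRegressor(hidden_layer_sizes=(100, 50), max_iter=500, random_state=0))
model.fit(X_train, y_train)

pred = model.predict(X_test)
rel_resid = (y_test - pred) / y_test       # relative residuals, cf. Figs. 7-8
print("mass bias / scatter:", rel_resid[:, 0].mean(), rel_resid[:, 0].std())
print("age  bias / scatter:", rel_resid[:, 1].mean(), rel_resid[:, 1].std())
```

The bias and scatter of the relative residuals on the held-out test stars correspond to the µ and σ values quoted in the comparison figures.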
}, { "url": "http://arxiv.org/abs/2304.02958v1", "title": "Spatial metallicity variations of mono-temperature stellar populations revealed by early-type stars in LAMOST", "abstract": "We investigate the radial metallicity gradients and azimuthal metallicity\ndistributions on the Galactocentric $X$--$Y$ plane using mono-temperature\nstellar populations selected from LAMOST MRS young stellar sample. The\nestimated radial metallicity gradient ranges from $-$0.015\\,dex/kpc to\n$-$0.07\\,dex/kpc, which decreases as effective temperature decreases (or\nstellar age increases) at $7500 < T_{\\rm eff} < 12500$\\,K ($\\tau < $1.5 Gyr).\nThe azimuthal metallicity excess (metallicity after subtracting radial\nmetallicity gradient, $\\Delta$\\,[M/H]) distributions exhibit inhomogeneities\nwith dispersions of 0.04\\,dex to 0.07\\,dex, which decrease as effective\ntemperature decreases. We also identify five potential metal-poor substructures\nwith large metallicity excess dispersions. The metallicity excess distributions\nof these five metal-poor substructures suggest that they contain a larger\nfraction of metal-poor stars compared to other control samples. These\nmetal-poor substructures may be associated with high-velocity clouds that\ninfall into the Galactic disk from the Galactic halo, which are not quickly\nwell-mixed with the pre-existing ISM of the Galactic disk. As a result, these\nhigh-velocity clouds produce some metal-poor stars and the observed metal-poor\nsubstructures. The variations of metallicity inhomogeneities with different\nstellar populations indicate that high-velocity clouds are not well mixed with\nthe pre-existing Galactic disk ISM within 0.3\\,Gyr.", "authors": "Chun Wang, Haibo Yuan, Maosheng Xiang, Yuan-Sen Ting, Yang Huang, Xiaowei Liu", "published": "2023-04-06", "updated": "2023-04-06", "primary_cat": "astro-ph.GA", "cats": [ "astro-ph.GA" ], "main_content": "2.1. Coordinate Systems and Galactic Parameters Two coordinate systems are used in this paper. One is a righthanded Cartesian coordinate system (X, Y, Z) centred on the Galactic centre, with X increasing towards the Galactic centre, Y in the direction of Galactic rotation, and Z representing the height from the disk mid-plane, positive towards the north Galactic pole. The other is a Galactocentric cylindrical coordinate system (R, \u03a6, Z), with R representing the Galactocentric distance, \u03a6 increasing in the direction of Galactic rotation, and Z the same as that in the Cartesian system. The Sun is assumed to be at the Galactic midplane (i.e., Z\u2299= 0 pc) and has a value of R\u2299equal to 8.34 kpc (Reid et al. 2014). 2.2. Sample selections As of March 2021, the LAMOST Medium-Resolution Spectroscopic Survey had collected 22,356,885 optical (4950\u2013 5350 \u00c5 and 6300\u20136800 \u00c5) spectra with a resolution of R \u223c7500. Sun et al. (2021) selected 40,034 late-B and A-type mainsequence stars from the LAMOST Medium-Resolution Spectroscopic Survey (hereafter named the LAMOST-MRS young stellar sample) and extracted their accurate stellar atmospheric parameters. For a star with spectral SNR \u223c60, the cross validated scatter is \u223c75 K, 0.06 dex and 0.05 dex for Teff, log g and [M/H], respectively. We adopt the photogeometric distances provided by Bailer-Jones et al. (2021) for these stars. The X, Y, Z, R, \u03a6 of each star are estimated using its distance and coordinates, and the corresponding errors are also given using the error transfer function. 
For the LAMOST-MRS young stellar sample, stars with Teff and [M/H] uncertainties larger than 75 K and 0.05 dex are discarded to ensure the reliability of the results. Only stars with 7000 < Teff < 15000 K, log g > 3.5 dex, and −1.2 < [M/H] < 0.5 dex are selected, as they are main-sequence stars with accurate metallicity determinations. Because young stars are mostly located in the Galactic plane, stars with |Z| ≥ 0.15 kpc are also removed to reduce the contamination from old stars. Finally, our LAMOST-MRS young stellar sample consists of 14,692 unique stars. The stellar number density distribution on the X–Y plane is shown in Figure 1. Fig. 1. Stellar number density distribution on the X–Y plane of the final LAMOST-MRS young stellar sample. The faint limiting magnitude of the LAMOST-MRS is g ∼ 15 mag. The faintest stars in the four effective temperature bins (adopted in the following context) have absolute g-band magnitudes of ∼4.5 mag (7500 < Teff < 8000 K), ∼4.0 mag (8000 < Teff < 8500 K), ∼3.5 mag (9500 < Teff < 10500 K), and ∼3.2 mag (10000 < Teff < 12500 K). Based on the distance modulus, stellar samples with 7500 < Teff < 8000 K, 8000 < Teff < 8500 K, 9500 < Teff < 10500 K and 10000 < Teff < 12500 K are incomplete at distances d > 1.258 kpc, d > 1.584 kpc, d > 1.955 kpc and d > 2.29 kpc, respectively. 2.3. Correcting for the Teff–metallicity systematics caused by the NLTE effect Metallicities of LAMOST-MRS young stars are estimated based on LTE models, but NLTE effects can significantly affect their spectra and may incur a bias of the estimated metallicity with respect to Teff (Xiang et al. 2022). To reduce the NLTE effects on the estimated metallicity values, we use a sixth-order polynomial to model the Teff–metallicity trend of the LAMOST-MRS young stellar sample. The dependency of metallicity on Teff is mitigated by subtracting the fitted relation. We fit the relation between metallicity and Teff only using stars with 8.7 < R < 9.5 kpc to avoid the effects of the radial metallicity gradients. Figure 2 shows the polynomial fit of the estimated Teff–metallicity trend. The distribution of stars on the Teff–[M/H]corr plane is also presented in Figure 2, where [M/H]corr is the metallicity after subtracting the trend. The figure shows that the final metallicity ([M/H]corr) does not vary with Teff after subtracting the mean trend. The metallicity used in the following is the corrected metallicity ([M/H]corr). 3. Radial metallicity gradients of mono-temperature young stellar populations In this section, we investigate the radial metallicity gradients of mono-temperature stellar populations. We divide stars from the LAMOST-MRS stellar sample into different stellar populations in four effective temperature bins: 7500 < Teff < 8000 K, Fig. 2. The stellar number density distributions on the effective temperature and metallicity planes. The left panel displays the relation between the estimated metallicity and Teff (black symbols) and the corresponding polynomial fit (red solid line) for the LAMOST-MRS young stellar sample.
The black symbols represent the mean estimated metallicity and Teff in each Teff bin with a ΔTeff of 100 K, and the dashed red lines represent the corresponding standard deviations of the metallicity in each Teff bin. Additionally, the stellar number density distribution on the Teff–[M/H] plane is plotted in the left panel. The right panel shows the stellar number density distribution on the Teff–[M/H]corr plane; the solid and dashed red lines represent the mean corrected metallicity values and the corresponding standard deviations of the metallicity in each Teff bin, respectively. 8500 < Teff < 9000 K, 9500 < Teff < 10500 K, and 10000 < Teff < 12500 K. We discard stars with 9000 < Teff < 9500 K because the Hα lines of these stars are not sensitive to Teff, which leads to large uncertainties in the measured metallicities. In each temperature bin, we further divide stars into small radial annuli of 0.25 kpc. Bins containing fewer than 3 stars are discarded. We perform linear regression on the metallicity as a function of the Galactic radius R, and the slope is adopted as the radial metallicity gradient. Figure 3 shows the fitting results for these four mono-temperature stellar populations. As shown in the figure, a linear regression captures the trend between metallicity and Galactic radius R well. The radial metallicity gradient of the Galactic disk plays an important role in the study of Galactic chemical and dynamical evolution histories. It is also a fundamental input parameter in models of Galactic chemical evolution. Previous studies on the radial metallicity gradient of the Galactic disk using different tracers (including OB stars by Daflon & Cunha (2004), Cepheid variables by Andrievsky et al. (2002) and Luck et al. (2006), H ii regions by Balser et al. (2011a), open clusters by Chen et al. (2003) and Magrini et al. (2009), planetary nebulae by Costa et al. (2004) and Henry et al. (2010), FGK dwarfs by Katz et al. (2011), Cheng et al. (2012) and Boeche et al. (2013), red giant stars by Hayden et al. (2014) and Boeche et al. (2014), and red clump stars by Huang et al. (2015)) have shown that the Galactic disk has a negative radial metallicity gradient, which generally supports an "inside-out" disk-forming scenario. The variations of the radial metallicity gradient with stellar age (τ) have also been investigated by Xiang et al. (2015) and Wang et al. (2019) using main-sequence turn-off (MSTO) stars, as well as in other works (e.g., Casagrande et al. 2011; Toyouchi & Chiba 2018). Wang et al. (2019) found that radial metallicity gradients steepen with increasing age at τ < 4 Gyr, reach a maximum at 4 < τ < 6 Gyr, and then flatten with age. Xiang et al. (2015) also found a similar trend of radial metallicity gradients with stellar age. The relation of the radial metallicity gradient with stellar age for the youngest stellar populations (τ < 2 Gyr) was not investigated in these two works due to the inaccuracy of the estimated stellar ages for the youngest MSTO stars. For the youngest main-sequence stars, the stellar age has a tight relation with the effective temperature. We derive median stellar ages for these four mono-temperature stellar populations according to the PARSEC isochrones (Bressan et al. 2012).
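A minimal sketch of the two measurement steps described above, i.e., removing the NLTE-driven Teff–[M/H] trend with a sixth-order polynomial and fitting the radial gradient to median metallicities in 0.25 kpc annuli, is given below; the array names and helper structure are illustrative assumptions.

```python
import numpy as np

def detrend_metallicity(teff, mh, r, deg=6, r_range=(8.7, 9.5)):
    """Subtract the Teff-[M/H] trend fitted only on stars in a narrow radial
    range, so that the radial gradient does not bias the correction."""
    sel = (r > r_range[0]) & (r < r_range[1])
    coeffs = np.polyfit(teff[sel], mh[sel], deg)
    return mh - np.polyval(coeffs, teff)          # [M/H]_corr

def radial_gradient(r, mh_corr, dr=0.25, min_stars=3):
    """Median [M/H]_corr in radial annuli of width dr, then a linear fit;
    the slope is the radial metallicity gradient in dex/kpc."""
    edges = np.arange(r.min(), r.max() + dr, dr)
    centres, medians = [], []
    for lo, hi in zip(edges[:-1], edges[1:]):
        sel = (r >= lo) & (r < hi)
        if sel.sum() < min_stars:                 # discard sparse annuli
            continue
        centres.append(0.5 * (lo + hi))
        medians.append(np.median(mh_corr[sel]))
    slope, intercept = np.polyfit(centres, medians, 1)
    return slope, intercept
```

Applying these two steps per effective-temperature bin yields one gradient (and its formal uncertainty, e.g. from bootstrapping the annulus medians) per mono-temperature population.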
Age distributions in these four effective temperature bins, with metallicities between −0.5 dex and 0.5 dex (similar to the metallicity coverage of our sample), predicted by the PARSEC isochrones are shown in Figure 4. The median ages are 1.00 Gyr, 0.72 Gyr, 0.39 Gyr, and 0.27 Gyr, respectively, in the four effective temperature bins of 7500 < Teff < 8000 K, 8500 < Teff < 9000 K, 9500 < Teff < 10500 K, and 10000 < Teff < 12500 K. Figure 5 shows the variation of the radial metallicity gradient with stellar age for our LAMOST-MRS young stellar sample and for the MSTO stars presented by Wang et al. (2019) and Xiang et al. (2015). Radial metallicity gradients estimated using young objects (including OB stars, Cepheid variables, H ii regions and open clusters (OCs)) are also over-plotted in the figure. We assume a mean age of 0.2 Gyr for OB stars, Cepheid variables and H ii regions. Chen et al. (2003) divided their OCs into two age bins, τ < 0.8 Gyr and τ > 0.8 Gyr, for which we assume mean ages of 0.1 Gyr and 2 Gyr, following Chen et al. (2003). Radial metallicity gradients estimated using our LAMOST-MRS young stellar sample are well aligned with the values estimated using other young objects in previous works (Daflon & Cunha 2004; Andrievsky et al. 2002; Luck et al. 2006; Balser et al. 2011a; Chen et al. 2003) within the uncertainties of the measurements. We find that the radial metallicity gradients of young stellar populations (τ < 2 Gyr) steepen as age increases (at a rate of ∼−0.055 dex kpc−1 Gyr−1) within the uncertainty of our estimates. The result is consistent with those of Wang et al. (2019) and Xiang et al. (2015), who also found that the radial metallicity gradients decrease with increasing stellar age for young stellar populations (1.5 < τ < 4 Gyr). Our results extend the relation between radial metallicity gradient and stellar age to τ < 1.5 Gyr compared to Wang et al. (2019) and Xiang et al. (2015). Fig. 3. Radial metallicity gradients of these four mono-temperature stellar populations. The red symbols represent the median metallicity values in individual radial bins. The red line represents the linear regression over the red symbols. The temperature range of each stellar population, the slope of the linear fit (the radial metallicity gradient), and its associated uncertainty are marked at the bottom left of each panel (Teff [7500, 8000] K: −0.075 ± 0.013; [8000, 9000] K: −0.085 ± 0.019; [9500, 10500] K: −0.018 ± 0.009; [10000, 12500] K: −0.021 ± 0.011 dex/kpc). Fig. 5. Radial metallicity ([M/H]/[Fe/H]) gradient variations with stellar age for the LAMOST-MRS young stellar sample in this work and for the MSTO stars presented by Wang et al. (2019) and Xiang et al. (2015).
Radial metallicity gradients estimated using young objects (OB stars, Cepheid variables, H ii regions and OCs) from previous works are also shown using different symbols, as labelled in the figure. Note that the X-axis is logarithmic. The typical uncertainties of X and Y are 0.0114 kpc and 0.0055 kpc, respectively, which are much smaller than the bin size of 0.1 kpc. To better visualize the azimuthal variation, we subtract the mean radial metallicity gradient as measured in the previous section. We introduce a new metallicity scale denoted as the "metallicity excess", Δ[M/H] = [M/H] − [M/H]R, where [M/H]R is the median metallicity in each R bin with a bin scale of 0.5 kpc. In each X and Y bin, we estimate the mean Δ[M/H] and its associated dispersion (σ[M/H]), as well as their uncertainties. 4.1. Global statistics Figure 6 shows the azimuthal metallicity excess (Δ[M/H]) distributions of these four mono-temperature stellar populations on the X–Y plane, which show significant inhomogeneities on the bin scale of 0.1 × 0.1 kpc. The difference of metallicity excess spans almost 0.4 dex (ranging from −0.2 dex to 0.2 dex). The dispersions of the (median) metallicity excess (shown in Figure 6) are respectively 0.04 dex, 0.058 dex, 0.057 dex, and 0.066 dex for the stellar populations with effective temperature coverage of 7500 < Teff < 8000 K, 8500 < Teff < 9000 K, 9500 < Teff < 10500 K, and 10000 < Teff < 12500 K, as shown in Figure 7. The metallicity excess dispersion increases as temperature increases. We compute the two-point correlation function of the metallicity excess for these four mono-temperature stellar populations to quantify the scale length associated with the observed metallicity inhomogeneity, following the method of Kreckel et al. (2020). The 50% level scale length is ∼0.05 kpc, which is smaller than our spatial bin size of 0.1 kpc. The result suggests that the scale length of the metallicity inhomogeneity is smaller than 0.1 kpc. This scale length is much smaller than the 0.5–1.0 kpc predicted by Krumholz & Ting (2018) and the 0.3 kpc (50% level) estimated by Kreckel et al. (2020) using H II regions in external galaxies. It is possible that their results in external galaxies may not be directly applicable to the Milky Way, and the origin of the chemical inhomogeneities observed in this paper may differ from those observed in external galaxies. The scale length is consistent with that of De Cia et al. (2021), who suggest that pristine gas falling into the Galactic disk can produce chemical inhomogeneities on the scale of tens of pc. To check the reliability of the metallicity inhomogeneities, we study the distributions of the metallicity excess uncertainties (Δ[M/H]err) in Figure A.2. The Δ[M/H]err values are mostly smaller than 0.1 dex (with median values of 0.030, 0.039, 0.042, and 0.048 dex for the stellar populations with 7500 < Teff < 8000 K, 8500 < Teff < 9000 K, 9500 < Teff < 10500 K, and 10000 < Teff < 12500 K, respectively), which is much smaller than the difference of metallicity excess shown in Figure 6. The comparison between Figure 6 and Figure A.2 suggests that the observed azimuthal metallicity inhomogeneities are reliable. Figure 6 and Figure 7 not only show the azimuthal metallicity inhomogeneity but also its variation with effective temperature.
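The metallicity-excess maps described above (Δ[M/H] relative to the median metallicity of 0.5 kpc radial bins, then averaged in 0.1 × 0.1 kpc cells on the X–Y plane) can be sketched with SciPy's binned statistics; the function and variable names below are illustrative, and the minimum of eight stars per cell follows the criterion quoted for Figure 6.

```python
import numpy as np
from scipy.stats import binned_statistic, binned_statistic_2d

def metallicity_excess(r, mh, dr=0.5):
    """Delta[M/H] = [M/H] minus the median [M/H] of the star's radial bin."""
    edges = np.arange(r.min(), r.max() + dr, dr)
    med, _, binnum = binned_statistic(r, mh, statistic="median", bins=edges)
    return mh - med[binnum - 1]

def excess_map(x, y, dmh, cell=0.1, min_stars=8):
    """Mean Delta[M/H] and its dispersion in cell x cell kpc bins on X-Y."""
    xe = np.arange(x.min(), x.max() + cell, cell)
    ye = np.arange(y.min(), y.max() + cell, cell)
    mean, _, _, _ = binned_statistic_2d(x, y, dmh, "mean", bins=[xe, ye])
    std, _, _, _ = binned_statistic_2d(x, y, dmh, "std", bins=[xe, ye])
    count, _, _, _ = binned_statistic_2d(x, y, dmh, "count", bins=[xe, ye])
    mean[count < min_stars] = np.nan   # drop poorly populated cells
    std[count < min_stars] = np.nan
    return mean, std
```

The dispersion map corresponds to the σ[M/H] maps discussed below, and the uncertainty of each cell can be approximated as std / sqrt(count).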
Azimuthal metallicity inhomogeneities (larger metallicity dispersion means larger metallicity inhomogeneity) of young stellar populations (Te\ufb00> 8000 K) are much larger than that of the old stellar population (Te\ufb00< 8000 K). Besides metallicity excess distributions, we also investigate the spatial distributions of metallicity excess dispersions. Figures A.3 and A.4 show the spatial distributions of metallicity excess dispersion and its associated uncertainty. From Figure A.3, we can \ufb01nd inhomogeneities of metallicity dispersion distributions for the four mono-temperature stellar populations. Inhomogeneities of young stellar populations (Te\ufb00> 8000 K) are much larger than those of the old stellar population (Te\ufb00< 8000 K). Metallicity dispersion uncertainties shown in Figure A.4 are much smaller than the di\ufb00erence of metallicity dispersions shown in Figure A.3, which suggests that observed azimuthal metallicity dispersion distributions in Figure A.3 are reliable. 4.2. Metal-poor substructures From Figure 6 and Figure A.3, we observe several metal-poor substructures with large metallicity dispersions. In the old stellar population (8000 < Te\ufb00< 9000 K), we identify one metalpoor substructure (labelled \u2018a\u2019 in Figure 6), which is not found in the stellar populations of Te\ufb00> 9500 K. The metallicity of this metal-poor substructure is smaller than its nearby regions by \u223c0.1 dex. The metal-poor substructure is also found in the stellar population of 7500 < Te\ufb00< 8000 K, while it is much weaker. In the younger stellar population (9500 < Te\ufb00< 12500 K), there are four metal-poor regions labelled \u2018b\u2019, \u2018c\u2019, \u2018d, and \u2018e\u2019 in Figure 6, which are not found in the two old stellar populations. The metallicities of these four regions are also smaller than their nearby regions by \u223c0.1 dex. The spatial sizes of \u2018a\u2019, \u2018b\u2019, \u2018c\u2019, \u2018d\u2019, and \u2018e\u2019 substructures are respectively \u223c0.7 \u00d7 0.2, 0.3 \u00d7 0.2, 0.8 \u00d7 0.2, 0.4 \u00d7 0.1 kpc, and 0.1 \u00d7 0.4 kpc. Azimuthal metallicity variations with spatial positions may be tracked along spiral arms (e.g., Khoperskov et al. 2018; Poggio et al. 2022). We compare the azimuthal metallicity distributions, especially those of the \ufb01ve identi\ufb01ed metal-poor substructures, with the spiral arms (shown as the blue solid and dashed line, indicating from left to right segments of the Local (Orion) arm and the Perseus arm) determined using high mass star forming regions by Reid et al. (2019). From Figure 6, we \ufb01nd that metallicity distributions of all these four monotemperature stellar populations do not track the expected locations of spiral arms, similar to the results presented by Hawkins (2022). Hawkins (2022) mapped out azimuthal metallicity distributions using LAMOST OBAF young stars selected from LAMOST low-resolution spectra by Xiang et al. (2022). However, our detailed azimuthal metallicity patterns have large differences compared to those of Hawkins (2022). We cross-match the LAMOST-MRS young stellar sample with their OBAF stellar sample and plot azimuthal metallicity distributions using the common stars. For these common stars, the azimuthal metallicity distributions are similar either using the metallicity of the LAMOST-MRS or LAMOST OBAF sample, which are also similar to the results presented here. 
Different fractions and origins of the contamination from cold stars in the two samples may be responsible for the difference between our results and those of Hawkins (2022). In Figure A.5 we show the normalised metallicity excess distributions of these five metal-poor regions (the 'a', 'b', 'c', 'd', and 'e' regions) and those of the control regions (framed in black lines). The skewnesses of the metallicity distributions are also estimated. All of them are smaller than zero, suggesting that the metallicity distributions, both in the metal-poor regions and in the control regions, have metal-poor tails. Metal-poor substructures contain a larger fraction of metal-poor stars and have much smaller skewness compared to the control regions. 5. The ISM mixing process The azimuthal metallicity distributions mapped out by the LAMOST-MRS stellar sample presented in Section 4 suggest that there are significant azimuthal metallicity inhomogeneities. Similarly, the azimuthal metallicity distributions of the ISM (Balser et al. 2011b, 2015; De Cia et al. 2021), Cepheid variable stars (Pedicelli et al. 2009), and open clusters (Davies et al. 2009; Fu et al. 2022) in the MW also show significant azimuthal metallicity inhomogeneities. Azimuthal metallicity inhomogeneities of external spiral galaxies have also been found by investigating the metallicity of H II regions (Ho et al. 2017, 2018; Kreckel et al. 2019, 2020). These observed azimuthal metallicity inhomogeneities, both in the MW and in external spiral galaxies, suggest that the ISM is not well mixed. Fig. 6. Metallicity excess distributions after subtracting radial metallicity gradients for these four mono-temperature stellar populations, binned by 0.1 × 0.1 kpc on the X–Y plane. Bins that contain fewer than eight stars are discarded. The minimum number of stars in each bin is selected to ensure that the median metallicity excess uncertainties produced in these bins can distinguish the metal-poor substructures from other regions. The positions at R = 8.5, 9.0, 9.5, 10.0, and 10.5 kpc are marked with black arcs. The black star symbol indicates the position of the Sun. The centre and 1σ width of the spiral arms are shown by blue solid and dashed lines in all panels. Regions framed in red lines and magenta dashed lines indicate the spatial positions of the five metal-poor substructures (labelled 'a', 'b', 'c', 'd', and 'e'). A region framed by red lines is a real metal-poor substructure of this mono-temperature stellar population, whereas a region framed by magenta dashed lines is the corresponding region of a metal-poor substructure found in other mono-temperature stellar populations. Control regions are framed in black lines.
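A small helper such as the following can be used to summarise the Δ[M/H] distribution of a metal-poor substructure or a control region (mean, dispersion, skewness and metal-poor fraction); the −0.1 dex threshold used for the metal-poor fraction is an illustrative assumption, not a value defined in the paper.

```python
import numpy as np
from scipy.stats import skew

def region_stats(dmh, in_region, poor_threshold=-0.1):
    """Summarise Delta[M/H] inside a region selected by the boolean mask
    `in_region`; a negative skewness indicates a metal-poor tail."""
    vals = dmh[in_region]
    return {
        "mean": float(np.mean(vals)),
        "sigma": float(np.std(vals)),
        "skewness": float(skew(vals)),
        "poor_fraction": float(np.mean(vals < poor_threshold)),
    }
```

Comparing these summaries between the labelled substructures and the control regions reproduces the kind of contrast discussed above (lower mean, larger dispersion, more negative skewness in the substructures).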
High-velocity clouds infalling into the Galactic disk from the Galactic halo are metal-poor, with metallicities ranging from 0.1 to 1.0 times solar (Fox 2017; Wright et al. 2021; De Cia et al. 2021). High-velocity clouds can be generated by the "galactic fountain" cycle and other gas accretion processes. Intermediate-velocity clouds are also the main observational manifestations of the ongoing "galactic fountain" cycle (Wakker & van Woerden 1997). However, unlike high-velocity clouds, intermediate-velocity clouds are nearby systems and have near-solar or solar metallicity. The five metal-poor substructures found in the current paper have smaller mean metallicity values and larger dispersions. They also contain a larger fraction of metal-poor stars than other Galactic disk regions. These results suggest that these five metal-poor substructures may be associated with high-velocity clouds that infall into the Galactic disk from the Galactic halo. These high-velocity clouds are not quickly well mixed with the ISM of the Galactic disk after they fall into the Galactic disk, and thus they have time to produce stars that are more metal-poor than stars born in the pre-existing Galactic disk ISM. As a result, these high-velocity clouds produce some metal-poor stars and the observed metal-poor substructures. Four of the metal-poor substructures are only found in the young stellar populations (Teff > 9500 K). These metal-poor substructures of the stellar population with 9500 < Teff < 10500 K (with a median stellar age of 0.39 Gyr) may suggest that the corresponding high-velocity clouds had fallen into the Galactic disk 0.39 Gyr ago. Similarly, these metal-poor substructures of the population with 10000 < Teff < 12500 K (with a median stellar age of 0.27 Gyr) may suggest that the corresponding high-velocity clouds had not been well mixed into the Galactic disk ISM before 0.27 Gyr. In conclusion, these high-velocity clouds infalling into the Galactic disk from the Galactic halo are not well mixed with the pre-existing Galactic disk ISM within 0.12 Gyr (0.39 − 0.27 Gyr). It is noted that the 'd' metal-poor substructure of the stellar population with 10000 < Teff < 12500 K moves ∼0.1 kpc on the X–Y plane compared to that of 9500 < Teff < 10500 K. This may be a consequence of the velocity difference between the infalling high-velocity clouds and the stars or ISM of the Galactic disk: a velocity difference of ∼0.82 km s−1 is needed to move 0.1 kpc within 0.12 Gyr, which is reasonable. The other metal-poor substructure is found in the metallicity distributions of the old stellar populations (Teff < 9000 K), suggesting that the corresponding high-velocity cloud had not sufficiently mixed with the Galactic disk ISM before 0.72 Gyr (the median age of the stellar population with 8000 < Teff < 9000 K) and had fallen into the Galactic disk 1.0 Gyr (the median age of the stellar population with 7500 < Teff < 8000 K) ago. The results suggest that this high-velocity cloud was not well mixed into the Galactic disk ISM within 0.28 Gyr. This substructure is not found in the metallicity distributions of the young stellar populations, suggesting that this high-velocity cloud had been well mixed into the Galactic disk ISM 0.39 Gyr ago. 6.
Summary In this work, we use the LAMOST MRS young stellar sample with accurate e\ufb00ective temperature and metallicity to investigate the radial metallicity gradients and azimuthal metallicity distributions of di\ufb00erent stellar populations with varying e\ufb00ective temperatures (or ages). The estimated radial metallicity gradient ranges from \u22120.015 dex/kpc to \u22120.07 dex/kpc, which decreases as e\ufb00ective temperature decreases (or stellar age increases). The result is consistent with those of Xiang et al. (2015) and Wang et al. (2019), who also found that the radial metallicity gradients decrease with increasing stellar age for young stellar populations (1.5 < \u03c4 < 4 Gyr). Our results extended their study of the relation between radial metallicity gradient and stellar age to \u03c4 < 1.5 Gyr. After subtracting radial metallicity gradients on the X\u2013Y plane, the azimuthal metallicity distributions of all these four mono-temperature stellar populations show signi\ufb01cant metallicity inhomogeneities, which is consistent with previous studies of azimuthal metallicity distributions of the Milky Way (Balser et al. 2011b, 2015; De Cia et al. 2021; Pedicelli et al. 2009; Davies et al. 2009) and external spiral galaxies (Ho et al. 2017, 2018; Kreckel et al. 2019, 2020). The result suggests that the ISM is not well mixed at any time. We \ufb01nd \ufb01ve metal-poor substructures with sizes of \u223c0.2\u2013 1.0 kpc, metallicities of which are smaller than their nearby regions by \u223c0.1 dex. These metal-poor substructures may be associated with high-velocity clouds infalling into the Galactic disk from the Galactic halo. According to the results of stellar populations at di\ufb00erent ages, we suggest that high-velocity clouds infalling into the Galactic disk from the Galactic halo are not well mixed with the pre-existing Galactic disk ISM within 0.3 Gyr. The size and spatial distribution of our stellar sample are mainly limited by the faint limiting magnitude (15 mag in gband) of the LAMOST Medium-Resolution Spectroscopic Survey. Future large-scale spectroscopic surveys, including SDSSV (Kollmeier et al. 2017; Zari et al. 2021) and 4MOST (de Jong et al. 2022), are expected to improve our work by enlarging the sample size and spatial coverage. Acknowledgements. We appreciate the helpful comments of the anonymous referee. This work was funded by the National Key R&D Program of China (No. 2019YFA0405500) and the National Natural Science Foundation of China (NSFC Grant No.12203037, No.12222301, No.12173007 and No.11973001). We acknowledge the science research grants from the China Manned Space Project with NO. CMS-CSST-2021-B03. We used data from the European Space Agency mission Gaia (http://www.cosmos.esa.int/gaia), processed by the Gaia Data Processing and Analysis Consortium (DPAC; see http://www.cosmos.esa.int/web/gaia/dpac/consortium). Guoshoujing Telescope (the Large Sky Area Multi-Object Fiber Spectroscopic Telescope LAMOST) is a National Major Scienti\ufb01c Project built by the Chinese Academy of Sciences. Funding for the project has been provided by the National Development and Reform Commission. LAMOST is operated and managed by the National Astronomical Observatories, Chinese Academy of Sciences.", "introduction": "Metals in the interstellar medium (ISM) regulate its\u2019 cooling pro- cess and the star formation of galaxies. 
The mixing process between infalling gas and the local ISM changes the fundamental properties, including the chemical abundance, of the local ISM and the future star formation process. Understanding the mixing process of the ISM is crucial for informing our understanding of galaxy formation and evolution histories, such as Galactic archaeology and chemical evolution (Tinsley 1980; Matteucci 2012, 2021). Most studies thus far assume that the ISM is well mixed on small scales. The metallicity inhomogeneity of the ISM can help us to understand the ISM mixing. On large scales, the ISM of the Galactic disk shows a negative radial metallicity gradient (Balser et al. 2011a), suggesting an "inside-out" disk formation scenario. On small scales, the metallicities of the ISM (Balser et al. 2011b, 2015; De Cia et al. 2021), Cepheid variable stars (Pedicelli et al. 2009; Poggio et al. 2022), open clusters (Davies et al. 2009; Poggio et al. 2022), young upper main-sequence stars (Poggio et al. 2022), and young main-sequence stars (Hawkins 2022) in the Milky Way (MW) present azimuthal metallicity variations. For external spiral galaxies, azimuthal ISM metallicity variations (after subtracting the radial metallicity gradients) have also been found by studying the metallicity of H II regions (Ho et al. 2017, 2018; Kreckel et al. 2019, 2020). All these observed azimuthal metallicity inhomogeneities of the MW and other spiral galaxies suggest that the ISM is not well mixed. To better understand the ISM mixing process, we need to explore the variation of the metallicity inhomogeneity of the ISM over time, which has not been well studied yet. The stellar surface chemical abundance remains almost unchanged during the main-sequence evolutionary stage. It is thus a fossil record of the ISM at the birth of these stars. The metallicity inhomogeneity of the ISM and its variation over time can therefore be studied using a stellar sample with a wide age coverage. However, stars in the MW have moved away from their birth positions due to the secular evolution of the MW. Old stars have experienced stronger radial migration (Minchev & Famaey 2010; Kubryk et al. 2015; Frankel et al. 2018, 2020; Lian et al. 2022) compared to young stars. According to Frankel et al. (2018) and Frankel et al. (2020), 68% of stars have migrated within a distance of 3.6 √(τ/8 Gyr) kpc and 2.6 √(τ/6 Gyr) kpc, respectively. Hence, stars can move up to ∼1 kpc within 1 Gyr. Lian et al. (2022) suggest that the average migration distance is 0.5–1.6 kpc at an age of 2 Gyr and 1.0–1.8 kpc at an age of 3 Gyr. In conclusion, young stars in the MW are valuable for studying the ISM mixing process due to their large sample size and wide age coverage compared to other young objects (e.g., open clusters, Cepheid variable stars, H II regions) and because they are less affected by secular evolution compared to older stars. For young main-sequence stars, effective temperatures are tightly correlated with stellar ages (Schaller et al. 1992; Zorec & Royer 2012; Sun et al. 2021). The variations of the ISM metallicity homogeneity (or inhomogeneity) over time can be investigated using a young main-sequence stellar sample with accurate determinations of the stellar atmospheric parameters (effective temperature Teff, surface gravity log g and metallicity [M/H]).
Recently, accurate stellar atmospheric parameters were de- rived from LAMOST medium-resolution spectra for 40,034 young (early-type) main-sequence stars (Sun et al. 2021). The stellar sample spans a wide range of e\ufb00ective temperatures (7500-15000K), and covers a large and contiguous volume of the Galactic disk. It allows us to investigate the ISM metallic- ity inhomogeneity and its variations over time. In this work, we investigate the radial metallicity gradients and azimuthal metal- licity distribution features of mono-temperature stellar popula- tions across the Galactic disk within \u221210.5 < X < \u22128.0 kpc, \u22121.0 < Y < 1.5 kpc and |Z| < 0.15 kpc using the young stellar sample. This paper is organized as follows. In Section 2, we intro- duce the adopted young stellar sample. In Section 3, we present the radial metallicity gradients of mono-temperature stellar pop- ulations. We present the azimuthal metallicity distributions of mono-temperature stellar populations in Section 4. The con- straints on the ISM mixing process using the azimuthal metal- licity distributions are discussed in Section 5. Finally, we sum- marize our work in Section 6." } ], "Tong Zhang": [ { "url": "http://arxiv.org/abs/2403.06769v2", "title": "Strength Lies in Differences! Towards Effective Non-collaborative Dialogues via Tailored Strategy Planning", "abstract": "We investigate non-collaborative dialogue agents, which are expected to\nengage in strategic conversations with diverse users, for securing a mutual\nagreement that leans favorably towards the system's objectives. This poses two\nmain challenges for existing dialogue agents: 1) The inability to integrate\nuser-specific characteristics into the strategic planning, and 2) The\ndifficulty of training strategic planners that can be generalized to diverse\nusers. To address these challenges, we propose Trip to enhance the capability\nin tailored strategic planning, incorporating a user-aware strategic planning\nmodule and a population-based training paradigm. Through experiments on\nbenchmark non-collaborative dialogue tasks, we demonstrate the effectiveness of\nTrip in catering to diverse users.", "authors": "Tong Zhang, Chen Huang, Yang Deng, Hongru Liang, Jia Liu, Zujie Wen, Wenqiang Lei, Tat-Seng Chua", "published": "2024-03-11", "updated": "2024-05-07", "primary_cat": "cs.CL", "cats": [ "cs.CL" ], "main_content": "Our research is closely tied to the strategic planning and training paradigms to address the noncollaborative tasks in the era of LLMs. We provide a literature review and highlight our differences. Strategic planning for non-collaborative dialogues. Recent researches have introduced various methods based on LLMs to enhance their effectiveness in strategic planning. These methods can be categorized into two types: 1) Developing stimulus prompts to unleash the potential of LLMs. (Chen et al., 2023) validate the effectiveness of using mixed-initiative prompts to tackle proactive dialogue challenges. (Deng et al., 2023b) and (Zhang et al., 2023a) encourage LLMs to engage in selfreflection to plan their next actions. (Fu et al., 2023) employ self-play simulations to iteratively refine strategic planning by soliciting feedback from other LLMs. Nonetheless, as highlighted by (Deng et al., 2023c), the effectiveness of these approaches is impeded by non-trainable parameters. 2) Equipping LLMs with an external strategy planner. 
The planner is capable of generating prompts at each turn, providing nuanced, instance-specific guidance and control over LLMs. This could be integrated using methods like Monte Carlo Tree Search (Yu et al., 2023) or a plug-in model (Deng et al., 2023c), which can be fine-tuned to improve the strategic planning capability without affecting the functionalities of LLM-powered dialogue agents. However, these methods still struggle to achieve promising results due to their inability to integrate user-specific characteristics into their strategic planning. Complementary to (Deng et al., 2023c), our work investigates the importance of tailored strategic planning by modeling user-related characteristics explicitly. Training paradigms for non-collaborative dialogues. Current training paradigms involve the dialogue agent interacting with a single user simulator to enhance its strategic planning capabilities. Specifically, (Chawla et al., 2023b) build a user simulator that mimics human-human dialogue data in a supervised manner, while (Yu et al., 2023; Deng et al., 2023c) resort to a role-playing LLM-based user simulator. However, a single user simulator can only represent the behaviors of one user or one type of user, potentially leading to the under-representation of other users' behaviors, as evidenced by (Liu et al., 2023; Shi et al., 2019). Therefore, existing training paradigms fail to produce strategic planners that cater to diverse users with varying behaviors. In this paper, our work investigates the importance of tailored strategic planning by diversifying the users' behaviors using population-based training. 3 Strategic Planning Evaluation We introduce a novel evaluation protocol to analyze the limitations of existing LLM-based dialogue agents and highlight their inability to handle users exhibiting various non-collaborative behaviors. The overall evaluation process is illustrated in Figure 1. Figure 1: The overall evaluation process. See more details of our evaluation protocol in Appendix A. 3.1 Evaluation Setup Evaluation Overview. The environment encompasses various synthetic user simulators showcasing diverse non-collaborative behaviors. In the evaluation process, each dialogue agent must interact with these simulators (Deng et al., 2023c). During their interactions, the dialogue agent and the user simulator alternate in employing strategies in their responses, with the ultimate aim of maximizing their own self-interest. The interactions continue until the conversational goal is achieved or the maximum number of turns is reached. We gather these interactions and assess the agents' performance. Baselines. We consider two representative baselines: the Standard agent (i.e., a vanilla LLM without any modification) and the PPDPP agent (Deng et al., 2023c), which is the current SOTA agent with a trainable external strategy planner (notably, we also consider other existing dialogue agents in our main experiments). Diverse User Simulators. Our simulators are synthesized with non-collaborative behaviors, guided by their task-relevant personas. As evidenced by previous studies (Deng et al., 2023a; Bianchi et al., 2024), LLMs are limited in demonstrating non-collaborative behaviors. To this end, we prompt non-collaborative behaviors explicitly into LLMs using resisting strategies that are designed to foil persuasion attempts (Fransen et al., 2015; Tian et al., 2020; Dutt et al., 2021). Initially, we equip LLMs with different personas (Jiang et al., 2023;
Zhou et al., 2023b; Zhang et al., 2023b), which are used to select non-collaborative behaviors from the set of resisting strategies. Following (Wang et al., 2019; Jiang et al., 2024), we consider two types of personas, namely the Big-Five personality traits (Goldberg, 1992), i.e., Openness, Conscientiousness, Extraversion, Agreeableness, and Neuroticism, and the decision-making styles (Scott and Bruce, 1995), i.e., Directive, Conceptual, Analytical, and Behavioral, together with a cohesive LLM-generated description for each fine-grained persona. Additionally, we employ the resisting strategies outlined by (Dutt et al., 2021) to direct the behavior of the simulators. Finally, our mixed-initiative role-play prompt for each agent includes the assigned persona, a set of resisting strategies, and the conversation context. These elements aid in guiding user simulators to exhibit diverse non-collaborative behaviors. In total, we develop 300 diverse user simulators for each evaluation task, representing 20 persona categories (i.e., Big-Five Personality × Decision-Making Styles). Evaluation Tasks. In line with (Deng et al., 2023b; Wang et al., 2019), we conduct experiments on two benchmark non-collaborative tasks: the price negotiation task, utilizing the test set of CraigslistBargain (CB) (He et al., 2018), and the charity persuasion task, employing the test set of PersuasionForGood (P4G) (Wang et al., 2019); our data split follows the previous study (Deng et al., 2023c; Wang et al., 2019). Notably, the dialogue agents play the role of the buyer and the persuader, respectively, to accomplish their goals. Evaluation Metrics. Following (Deng et al., 2023c), we consider three commonly used metrics: Success Rate (SR), Average Turn (AT) and Sale-to-List Ratio (SL%). The SR measures effectiveness by the percentage of goal achievement within a maximum number of turns, while the AT measures efficiency by the average number of turns required to achieve the goal. For the CB task, we additionally adopt the SL% (Zhou et al., 2019) to determine the effectiveness of goal completion. Formally, SL% = (P_deal − P_target^seller) / (P_target^buyer − P_target^seller), where P_deal is the final deal price, and P_target^buyer and P_target^seller are the target prices of the two parties. A higher SL% indicates that the buyer gets more benefit from the deal. If no deal is reached at the end, we set SL% to 0.
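For concreteness, the three evaluation metrics can be computed as in the sketch below; the function names and the dialogue bookkeeping are illustrative assumptions, not part of the authors' released code.

```python
def sale_to_list_ratio(deal_price, buyer_target, seller_target):
    """SL% = (P_deal - P_target^seller) / (P_target^buyer - P_target^seller).
    Returns 0.0 when no deal was reached (deal_price is None)."""
    if deal_price is None:
        return 0.0
    return (deal_price - seller_target) / (buyer_target - seller_target)

def success_rate_and_avg_turn(dialogues):
    """dialogues: list of (reached_goal: bool, n_turns: int) per conversation."""
    n = len(dialogues)
    sr = sum(1 for reached, _ in dialogues if reached) / n
    at = sum(turns for _, turns in dialogues) / n
    return sr, at
```

Note that SL% rewards the buyer-side agent for closing deals nearer its own target price, so a higher SL% corresponds to a more favorable outcome for the system.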
Table 1: The performance of the PPDPP dialogue agent tested across various personas of user simulators, for the price negotiation task (SR higher is better, AT lower is better, SL% higher is better) and the persuasion-for-good task (SR, AT). The signed value in parentheses is the change relative to the Standard dialogue agent. The symbol ⋆ indicates that the performance exhibits minimal variation, specifically within a 5% range of the maximum value. The effectiveness of PPDPP varies significantly across different user personas.
Openness (Big Five): SR 0.76 (+0.23), AT 6.66 (+0.63), SL% 0.34 (+0.12); SR 0.47 (+0.34), AT 8.92 (+1.00)
Conscientiousness (Big Five): SR 0.69 (+0.25), AT 7.20 (+1.04), SL% 0.27 (+0.06); SR 0.39 (+0.33), AT 8.90 (+1.10)
Extraversion (Big Five): SR 0.74 (+0.16), AT 6.17 (+1.47), SL% 0.39 (+0.15); SR 0.45 (+0.35), AT 8.73 (+1.25)
Agreeableness (Big Five): SR 0.40 (+0.01)⋆, AT 6.82 (+0.71), SL% 0.28 (+0.06); SR 0.18 (+0.12), AT 9.85 (+0.13)⋆
Neuroticism (Big Five): SR 0.31 (−0.02)⋆, AT 6.81 (+1.12), SL% 0.20 (−0.02)⋆; SR 0.12 (+0.02)⋆, AT 9.78 (+0.14)⋆
Analytical (Decision): SR 0.37 (+0.04)⋆, AT 7.07 (+0.61), SL% 0.26 (+0.06)⋆; SR 0.16 (+0.09), AT 9.43 (+0.56)⋆
Directive (Decision): SR 0.41 (+0.05)⋆, AT 6.71 (+1.48), SL% 0.18 (−0.03)⋆; SR 0.12 (−0.02)⋆, AT 9.31 (+0.62)
Behavioral (Decision): SR 0.78 (+0.25), AT 6.45 (+1.20), SL% 0.39 (+0.16); SR 0.53 (+0.37), AT 8.94 (+1.04)
Conceptual (Decision): SR 0.77 (+0.23), AT 6.62 (+0.78), SL% 0.42 (+0.17); SR 0.49 (+0.36), AT 9.02 (+0.94)
Overall Performance: SR 0.58 (+0.14), AT 6.72 (+1.01), SL% 0.31 (+0.09); SR 0.32 (+0.23), AT 9.20 (+0.76)
3.2 Experimental Findings We analyze the performance of existing dialogue agents across user simulators with various non-collaborative behaviors. Specifically, we assess the advancement of PPDPP compared to the Standard agent. As illustrated in Table 1, while PPDPP shows a notable improvement in overall performance, it does not adapt well to users employing different non-collaborative strategies. Its effectiveness varies significantly among users with different personas, with its advantage over the Standard not being significant in 17.77% of cases (e.g., it increases SR by 0.02 for Analytical in price negotiation) and even performing worse than the Standard in 8.88% of cases (e.g., it decreases SR by 0.02 for Neuroticism in price negotiation). This motivates the need for a dialogue agent to perform strategic planning tailored to diverse users (we find that other baselines also have similar issues, as detailed in Section 5). 4 TRIP: Tailored Strategic Planning To enhance LLMs' tailored strategic planning, we propose an effective method, TRIP, which develops an external planner by modeling user characteristics and training with diverse user simulators. As illustrated in Figure 2, our TRIP includes a user-aware strategic planning module and a population-based training paradigm. The former aims to explicitly model user characteristics (e.g., mental states and future actions), while the latter incorporates diverse user simulators for training simultaneously. 4.1 User-Aware Strategic Planning TRIP aims to explicitly infer user characteristics and then incorporate them into the strategic planning module, parameterized by a trainable BERT. In particular, building upon the advanced Theory-of-Mind capability of LLMs (Sap et al., 2022; Moghaddam and Honey, 2023), TRIP captures users' mental states and future possible actions during interactions to understand their interests and predicts how TRIP's responses may influence them.
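A minimal sketch of this user-aware planning step is given below: an LLM is first queried for the user's mental state and likely next action (the Theory-of-Mind step), and the resulting summary is concatenated with the dialogue history and fed to a BERT-based classifier over candidate strategies. The prompt wording, the strategy labels and the model choices are illustrative assumptions rather than the authors' implementation.

```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

# Illustrative strategy labels; the actual set is the one pre-defined by prior work.
STRATEGIES = ["emotional_appeal", "logical_appeal", "credibility", "proposal"]

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
planner = AutoModelForSequenceClassification.from_pretrained(
    "bert-base-uncased", num_labels=len(STRATEGIES))

TOM_PROMPT = ("Given the dialogue so far, briefly describe (1) the user's mental "
              "state (e.g., target price or willingness to donate) and (2) the "
              "user's likely next action.\n\nDialogue:\n{dialogue}")

def plan_next_strategy(dialogue_history, query_llm):
    """query_llm: callable str -> str wrapping whichever LLM is available."""
    # 1) Theory-of-Mind step: infer mental state M and future action F.
    tom_summary = query_llm(TOM_PROMPT.format(dialogue="\n".join(dialogue_history)))
    # 2) Feed {M, F, D} to the trainable planner to score candidate strategies.
    planner_input = tom_summary + " [SEP] " + " ".join(dialogue_history)
    enc = tokenizer(planner_input, truncation=True, max_length=512,
                    return_tensors="pt")
    with torch.no_grad():
        logits = planner(**enc).logits
    return STRATEGIES[int(logits.argmax(dim=-1))]
```

The selected strategy label would then be mapped to its natural-language instruction and prepended to the dialogue agent's generation prompt.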
In this case, the mental state pertains to what the user aims to accomplish, such as the target price or whether they will donate, while the future actions relate to what the user is likely to discuss next (Hu et al., 2023; Zhou et al., 2023a). Formally, given the dialogue history D = (u_1^sys, u_1^usr, ..., u_t^sys, u_t^usr), where u_i^sys and u_i^usr denote the i-th utterances of the two parties and t is the number of utterances, we feed the dialogue history D into the LLM and prompt it to infer the mental states M and future actions F, i.e., P_LLM(M, F | D). Subsequently, we feed {M, F, D} into the strategy planner π_θ to predict the next strategy. The output space of π_θ is a set of strategies pre-defined by (Deng et al., 2023c; Wang et al., 2019), e.g., the elicitation of specific emotions to influence others, and each strategy is attached to a pre-defined natural-language instruction. 4.2 Population-based Training Paradigm Given that a single user simulator tends to favor limited behaviors while under-representing others (Shi et al., 2019; Liu et al., 2023), we explore training a dialogue agent using a set of user simulators employing different non-collaborative strategies to accommodate diverse users. To achieve this, we propose a population-based reinforcement learning (RL) training paradigm, which aims to enhance the adaptability of a dialogue agent to new user groups by training with larger and more diverse populations (Charakorn et al., 2020). We offer a comprehensive explanation of this approach below. Figure 2: TRIP Overview. This method includes a user-aware strategic planning module (UASP) and a population-based training paradigm (PBTP). The UASP incorporates user-specific characteristics into strategic planning using Theory-of-Mind (ToM). The PBTP diversifies the training user simulators to promote the agent's adaptation. We use numbers to indicate the overall process of TRIP. Population Setup. Similar to Section 3.1, we build 40 diverse user simulators, each embodying a specific persona description. We ensure a balanced representation of each persona category within our user simulators for population-based RL training. We denote these simulators as K = {k_1, k_2, ..., k_40}. During each iteration, we sample from K using a distribution p, allowing the dialogue agent S to interact with the sampled simulator. The distribution p is initialized based on the frequency of the various personas. Reward Design. Following (Deng et al., 2023c), we prompt LLMs to judge the conversation progress at each turn and transform it into scalar rewards. Specifically, in the negotiation task, we employ a separate GPT-3.5 (OpenAI, 2022) to assess whether both parties have reached a deal, while in the persuasion task, we ask the GPT-3.5-based user simulator to express its willingness to donate. Our rewards are determined by three situations: 1) successful goal achievement by the dialogue agent results in a significant positive reward, defined as 1.0 in the charity persuasion task and the value of SL% in the price negotiation task; 2) failure to achieve the goal leads to a substantial negative reward of -1.0 for the dialogue agent; 3) furthermore, we assign a small negative reward (-0.1) per turn to penalize lengthy conversations, which promotes efficient goal achievement. Optimization.
During RL training, we maximize the expected reward of the strategy planner π_θ by utilizing the REINFORCE algorithm (Williams, 1992): θ ← θ − α ∇_θ log π_θ · R_t, where θ denotes the trainable parameters of the strategy planner, α denotes the learning rate, and R_t is the total reward accumulated from turn t to the final turn T: R_t = sum_{t'=t}^{T} γ^{T−t'} r_{t'}, where γ is a discount factor. 5 Experiments This section aims to evaluate the effectiveness of our TRIP, following the evaluation protocol proposed in Section 3.1. We first report the overall performance of the dialogue agents in Section 5.1. Next, we conduct an in-depth analysis to reveal the tailored strategies of TRIP in Section 5.2. Finally, we perform ablation studies in Section 5.3 to sort out the performance variation due to different user awareness and training populations, and to find a dominant predictor for tailored strategic planning. LLM-based baselines. We consider LLM-based dialogue agents with two types of strategic planning modules, as discussed in Section 2: 1) prompt-based planning, including Standard, ProCoT (Deng et al., 2023b) and ICL-AIF (Fu et al., 2023), which use mixed-initiative prompts, CoT, and AI feedback to select next strategies, respectively; and 2) external strategy planners, including GDP-MCTS (Yu et al., 2023) and PPDPP (Deng et al., 2023c), which utilize Monte Carlo Tree Search and a trainable plug-in for determining next-step strategies, respectively. Note that all baselines fail to model user-specific characteristics explicitly and are trained using one user simulator. Implementation details are presented in Appendix B. Evaluation Metrics. We use the same automatic metrics mentioned in Section 3.1. Figure 3: The agents' performance across various personas. We report their success rate on the two tasks, namely price negotiation (left) and charity persuasion (right). TRIP achieves balanced improvements on all personas, significantly outperforming other agents by a considerable margin. Due to limited space, we report other results using different metrics in Appendix D. Furthermore, we conduct human evaluation to assess the practical
This evidences the effectiveness and practical utility of our proposed TRIP. 5.2 Strategy Analysis In this section, we analyze the effectiveness of our TRIP in tailored strategic planning. Specifically, in each user interaction, we gather the strategies employed by each agent at every turn and combine Agents Price Negotiation Persuasion for Good SR\u2191 AT\u2193 SL%\u2191 SR\u2191 AT\u2193 Standard 0.4444 7.73 0.2222 0.0930 9.96 ProCoT 0.6040 7.62 0.2307 0.1833 9.90 ICL-AIF 0.3411 8.42 0.2503 0.1667 9.91 GDP-MCTS 0.4444 7.63 0.2401 0.2466 9.74 PPDPP 0.5855 6.72 0.3144 0.3233 9.20 TRIP (Ours) 0.6888 6.34 0.4096 0.5533 8.51 Table 2: Overall evaluation. TRIP is promising for achieving effective non-collaborative strategies. Figure 4: Human Evaluation Results. TRIP shows a high practical utility to deal with real users. them in a sequential order to form a strategy sequence. Then, we compare the strategy sequences employed by different agents. We utilize BERT (Devlin et al., 2018) and the t-SNE method (Van der Maaten and Hinton, 2008) to encode each strategy sequence into an embedding vector. Subsequently, we use the Euclidean distance measure to calculate the average distance between any two strategy sequences used by agents with the same persona, as well as the average distance between any two strategy sequences used by agents with different Figure 5: Case study on the charity persuasion task (Top-3 conversation rounds). The user resisting strategies and agent strategies are marked in bleu and red respectively. While PPDPP repeats its strategy usage pattern to different user types, TRIP effectively tailor its strategies for different users. When dealing with theOpenness persona (Left), TRIP introduces the charitable organization and evoke specific emotions to sway users\u2019 decision. Conversely, in addressing the Neuroticism persona (Right), TRIP tends to discuss personal experiences related to charity and employs reasoning persuade the user. Models Intra-Persona\u2193 Inter-Persona\u2191 Standard 24.93 13.51 ProCoT 21.37 15.65 ICL-AIF 22.84 15.33 GDP-MCTS 20.72 16.09 PPDPP 19.37 17.28 TRIP (Ours) 16.14 20.26 Table 3: The strategy distribution of different agents. The Intra-Persona metric donates the average distance for a particular persona. The Inter-Persona metric donate the average distance for different personas. TRIP achieves the best performance, showcasing its effectiveness in devising tailored strategies for diverse users. personas. This is akin to the metrics (i.e., the IntraClass and Inter-Class analysis) used in the metric learning community (Roth et al., 2019) and we term them as the Intra-Persona and Inter-Persona. The results are shown in Table 3. TRIP demonstrates a greater awareness of population dynamics, resulting in reduced variance across specific user simulators. As shown in Table 3, TRIP achieves the lowest Intra-Persona and the highest Inter-Persona. This indicates that the strategy sequences of TRIP exhibit similarity when interacting with users sharing the same personas and non-collaborative behaviors. Also, these sequences are distinct when compared to users with different personas. This further reveals that TRIP holds advantages in devising tailored strategies for diverse users. For better understanding, we present a case study in Figure 5 and examine the strategy sequence employed by PPDPP and TRIP in an charity persuasion task. 
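The Intra-Persona/Inter-Persona measure used in Table 3 is straightforward to reproduce once each strategy sequence has been embedded. The sketch below assumes the embeddings are already available (the paper obtains them with BERT followed by t-SNE); the function name and interface are our own.

```python
import itertools
import numpy as np

def intra_inter_persona(embeddings, personas):
    """Average pairwise Euclidean distance between strategy-sequence embeddings,
    split into same-persona pairs (Intra-Persona) and different-persona pairs
    (Inter-Persona).

    embeddings : (N, d) array, one encoded strategy sequence per dialogue
    personas   : length-N sequence of persona labels
    """
    intra, inter = [], []
    for i, j in itertools.combinations(range(len(personas)), 2):
        dist = np.linalg.norm(embeddings[i] - embeddings[j])
        (intra if personas[i] == personas[j] else inter).append(dist)
    return float(np.mean(intra)), float(np.mean(inter))
```

Tailored planning should then show up as a low Intra-Persona average together with a high Inter-Persona average, which is the pattern reported for TRIP.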
Specifically, PPDPP repeats its strategy usage pattern to different user types, briefly using of credentials and citing organizational impacts to establish credibility and earn the persuadee\u2019s trust. In contrast, TRIP demonstrates a deeper understanding of the users and provides more tailored strategies. When dealing with the Neuroticism persona, TRIP tends to discuss personal experiences related to charity and employs reasoning persuade the user. Conversely, in addressing the Openness persona, TRIP introduces the charitable organization and evoke specific emotions to sway users\u2019 decision. The strategy sequence used by TRIP is believed to be more persuasive, as demonstrated by (Barford and Smillie, 2016; Wang et al., 2019), stating that the Openness users are inclined to embrace novelty and be easily influenced by emotions, while the Neuroticism users are more likely to be influenced by others\u2019 personal experiences. In this regard, we believe that these strategic differences may provide valuable insights for the future research on the non-collaborative dialogues. 5.3 Ablation Study This section aims to sort out the performance variation of different user awareness and training population. To analyze the effectiveness of each design, we consider the following variants of TRIP. Models Price Negotiation Persuasion for Good SR\u2191 AT\u2193 SL%\u2191 SR\u2191 AT\u2193 TRIP 0.6888 6.34 0.4096 0.5533 8.51 TRIPw/o UA 0.6988 6.38 0.3881 0.5133 8.69 TRIPw/o POP 0.5766 7.00 0.3505 0.4400 8.95 TRIPw/ 10 POP & w/o UA 0.6377 6.73 0.3543 0.4700 8.79 TRIPw/ 10 POP 0.6700 6.12 0.3537 0.4733 8.72 PPDPP 0.5855 6.72 0.3144 0.3233 9.20 Table 4: The evaluation results of ablation study. The user-aware strategic planning module and populationbased training are effective to improve agents and complement each other. \u2022 TRIPw/o POP: We eliminate the population-based training approach from TRIP and instead have TRIP engage with a single fixed LLM-based user simulator for training, without any specific roleplaying persona. \u2022 TRIPw/o UA: We remove the user-aware strategic planning module, and only takes the conversation history as inputs to plan next strategies. \u2022 TRIPw/ 10 POP: It utilizes 10 personas for population training, each simulator is randomly selected from a pool of 20 persona categories. \u2022 TRIPw/ 10 POP & w/o UA: In this variant, we remove the user-aware strategic planning module from TRIP w/ 10 POP. We summarize the overall performance of each model variation Table 4. Based on these results, we draw the following observations: User-aware strategic planning and populationbased training paradigm are both effective to produce tailored strategic planning. Specifically, compared to TRIPw/o UA, we note TRIP improves the persuasion success rate (0.3233 \u21920.4400) and the deal benefit SL% (0.3144 \u21920.3505). This suggest that incorporating user mental states and future actions can assist the agent in developing more effective strategies. Notably, this variant slightly decreases the deal success rate (0.6988 \u21920.6888). This can be attributed to the fact that deeply modeling user characteristics may inadvertently decrease the seller\u2019s willingness to engage in the deal, as the focus is on maximizing one\u2019s own benefits. Moreover, compared to TRIPw/o POP, we observe that TRIP yield positive improvements across all metrics, such as significant increase in SL% (0.3505 \u21920.4096). 
This demonstrates that diversifying the behaviors of training user simulators effectively improves the agent\u2019s performance. Diverse training populations is more beneficial to improve the adaptability of dialogue agents, but it may also present additional training challenges. As shown in Table 4, comFigure 6: The test performance of different number of training user simulators. PPDPP converges easily but has a limited upper bound in terms of performance. pared to TRIPw/o UA and TRIPw/o POP, we find that diverse training populations is more important for TRIP\u2019s superiority. Moreover, we find that TRIPw/o UA demonstrates higher performances than TRIPw/ 10 POP & w/o UA and PPDPP (i.e., A single fixed user simulator). To provide a detailed understanding of the impact of the number of training user simulators, we present their test performance of in 1000 training interactions, as depicted in Figure 6. Particularly, during the initial 400 interactions, we observe that TRIPw/o UA and TRIPw/ 10 POP & w/o UA exhibit slower convergence compared to PPDPP. This suggests that not keeping the training user simulator fixed can introduce instability in the initial training phase, as also noted in (Lewis et al., 2017). However, beyond 500 interactions, the training process of TRIPw/o UA stabilizes, leading to a significant performance enhancement, surpassing the other two agents. Additionally, it is observed that PPDPP\u2019s performance declines after specific interactions (e.g., 600 in price negotiation), suggesting that extensive interactions with a single user simulator cannot consistently enhance agents\u2019 performance. 6 Conclusion In this study, we investigate the inadequacies of current LLM-based dialogue agents in catering in diverse non-cooperative users. To address this, we propose TRIP, a method designed to tailor strategic planning for non-collaborative dialogues. The idea behind our TRIP is simple, involving a user-aware strategic planning module and a population-based training paradigm. Experimental results across diverse users demonstrate the superior effectiveness and efficiency of TRIP. We consider our work as laying the groundwork for enhancing the adaptability and flexibility of non-cooperative dialogue agents in the era of LLMs. Moving forward, we plan to further explore the potential of populationaware agents in reducing the capital expenditure associated with training and coaching novice agents. Limitations In this section, we discuss the limitations of this work from the following perspectives: Sensitivity of Prompts. Similar to other studies on prompting LLMs (Deng et al., 2023b), the evaluation results are expected to be influenced by the prompts. Following (Deng et al., 2023c), we employ the mixed-initiative format to formulate our prompts, as it offers stability and control. The impact of prompts and their optimality present important areas of investigation within LLMs, calling for exploration in future studies. Limited Non-collaborative Tasks. We only conduct our experiments on the two non-collaborative dialogue tasks (i.e., price negotiation and charity persuasion) due to their status as classic and widely-recognized benchmarks (Deng et al., 2023b; Chawla et al., 2023a). 
In the future, we plan to apply our proposed TRIP in a broader range of non-collaborative dialogue scenarios.", "introduction": "Non-collaborative dialogues, such as negotiation (He et al., 2018) and persuasion (Wang et al., 2019), occur when the agent and user hold conflicting in- terests (Deng et al., 2023b,a). Typically, both par- ties need to employ various strategies to achieve an agreement favorable to themselves (Keizer et al., 2017; Zhan et al., 2024). As user resistance varies depending on the agent\u2019s strategies (Shi et al., 2019; Dutt et al., 2021), it is imperative for the agent to perform strategic planning tailored to diverse users. Relying on a one-size-fits-all strategy can leave the agent vulnerable to others taking advan- tage due to its lack of adaptability and flexibility (Yang et al., 2021; Xu et al., 2023). Recent efforts have resorted to large language models (LLMs) as dialogue agents to perform non- collaborative tasks (Deng et al., 2023b; Fu et al., 2023; Zhang et al., 2023a). They aim to guide the response of LLMs through mixed-initiative prompts (Chen et al., 2023; Deng et al., 2023b; *Corresponding author. Zhang et al., 2023a) or incorporating an exter- nal strategy planner (Yu et al., 2023; Deng et al., 2023c). However, these initiatives has been criti- cized regarding its performance in real-world sce- narios (Deng et al., 2023c; Kwon et al., 2024), where users have various non-collaborative strate- gies. We attribute this outcome to the neglect of two crucial aspects: 1) Existing methods fail to incor- porate explicit user-specific characteristics into their strategic planning, instead relying solely on the conversational history. Importantly, by creat- ing informative representations of individual users, agents can adapt their behaviors and devise tailored strategies (Jang et al., 2020; Yang et al., 2021). 2) Their training paradigm fails to generate strate- gic planners that generalize well to diverse users. Their paradigms are oversimplified, relying on a single user simulator for interactive training. This simulator is restricted in generating varied non- collaborative behaviors, often exhibiting a focus on prioritizing user contentment (Zhang et al., 2023c; Durmus et al., 2023; Bianchi et al., 2024). Essen- tially, agents trained in this manner are accustomed to engage with a single user exclusively, leading to rigidity and obstinacy when encountering new users with different interaction behaviors (Wang et al., 2023; Safdari et al., 2023). To provide more evidence for the above anal- ysis, we establish an evaluation protocol, which situates diverse user simulators with varying non- collaborative behaviors. We investigate the limi- tations of current LLM-based dialogue agents on strategic planning (cf. Section 3 for details). The evaluation results clearly demonstrate that exist- ing agents struggle to tailor their strategies for di- verse users, leading to sub-optimal performances. This limitation compromises the practical utility of these agents, both in functioning as a successful agent in conversational AI and in providing social skills training in pedagogy. The key challenges lie in making dialogue agents aware of diverse arXiv:2403.06769v2 [cs.CL] 7 May 2024 non-collaborative user behaviors and devising tailored strategies for individual users. To tackle these challenges, we design a sim- ple yet effective method, called TRIP, to im- prove LLMs\u2019 capability in Tailored stRategIc Planning. 
TRIP includes a user-aware strategic planning module and a population-based train- ing paradigm. Specifically, the strategic planning module incorporates user-specific characteristics into strategic planning using the Theory-of-Mind (ToM) (Premack and Woodruff, 1978; Wimmer and Perner, 1983). This involves analyzing users\u2019 mental states and future possible actions during in- teractions to understand their interests (Yang et al., 2021; Chawla et al., 2023a). Moreover, instead of relying on a solitary user simulator, our population- based training paradigm promotes the adaptation of the strategic planning module to various users, achieved by training it with more diverse user sim- ulators. Each simulator is equipped with extensive sets of non-collaborative strategies and role-playing personas. As such, TRIP essentially manipulates the experience of the dialogue agent, enabling it to recognize the importance of tailoring strategies for individual users. Our key contributions are con- cluded below: \u2022 We emphasize the significance of tailoring strate- gies for diverse users in non-collaborative dia- logues. We verify the inadequacies of current LLM-based dialogue agents in this aspect. \u2022 We propose TRIP to achieve tailored strategic planning, which includes a user-aware strategic planning module and a population-based training paradigm. \u2022 We conduct experiments on benchmark non- collaborative dialogue tasks (i.e., negotiation and persuasion). Our findings suggest that TRIP is proficient in catering to diverse users using tai- lored strategies, consistently outperforming base- lines across different tasks." } ], "Bodhisattva Sen": [ { "url": "http://arxiv.org/abs/1312.6341v1", "title": "Model Based Bootstrap Methods for Interval Censored Data", "abstract": "We investigate the performance of model based bootstrap methods for\nconstructing point-wise confidence intervals around the survival function with\ninterval censored data. We show that bootstrapping from the nonparametric\nmaximum likelihood estimator of the survival function is inconsistent for both\nthe current status and case 2 interval censoring models. A model based smoothed\nbootstrap procedure is proposed and shown to be consistent. In addition,\nsimulation studies are conducted to illustrate the (in)-consistency of the\nbootstrap methods. Our conclusions in the interval censoring model would extend\nmore generally to estimators in regression models that exhibit non-standard\nrates of convergence.", "authors": "Bodhisattva Sen, Gongjun Xu", "published": "2013-12-22", "updated": "2013-12-22", "primary_cat": "stat.ME", "cats": [ "stat.ME" ], "main_content": "2.1 A sufficient condition for the consistency of the bootstrap Under the current status model, our data are Zn = {(Ti, \u2206i)}n i=1, where \u2206i = 1Xi\u2264Ti. Each Xi can be interpreted as the unobserved time of onset of a disease and Ti is the check-up time at which the ith patient is observed. We assume that Xi \u223cF and Ti \u223cG are independent and that F and G are continuously differentiable at t0 > 0 (t0 being a point in the interior of the support of F) with derivatives f(t0) > 0 and g(t0) > 0. We want to approximate the distribution function Hn of \u03b3n = n1/3{ \u02dc Fn(t0) \u2212F(t0)} by using bootstrap methods. 
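To fix ideas before stating the consistency conditions, the sketch below simulates current status data under the design used later in the simulations (F = Exp(1), T ~ Uniform[0, 2]) and computes the NPMLE. It relies on the standard fact that, for current status data, the NPMLE coincides with the isotonic least-squares fit of the indicators Δ on the ordered observation times (the greatest-convex-minorant characterization), so scikit-learn's IsotonicRegression can be used; this is an illustrative sketch, not the authors' code.

```python
import numpy as np
from sklearn.isotonic import IsotonicRegression

rng = np.random.default_rng(0)

def simulate_current_status(n, rng):
    """Current status data: latent X ~ Exp(1), checkup time T ~ Uniform[0, 2], Delta = 1{X <= T}."""
    x = rng.exponential(1.0, size=n)
    t = rng.uniform(0.0, 2.0, size=n)
    return t, (x <= t).astype(float)

def npmle_current_status(t, delta):
    """NPMLE of F at the ordered observation times: the left derivative of the greatest
    convex minorant of the cumulative-sum diagram, i.e. the isotonic regression of the
    Delta's on the ordered T's, clipped to [0, 1]."""
    order = np.argsort(t)
    ts, ds = t[order], delta[order]
    fhat = IsotonicRegression(y_min=0.0, y_max=1.0).fit_transform(ts, ds)
    return ts, fhat

t, delta = simulate_current_status(500, rng)
ts, fhat = npmle_current_status(t, delta)
idx = np.searchsorted(ts, 1.0, side="right") - 1        # NPMLE evaluated at t0 = 1
print("NPMLE at t0 = 1:", fhat[idx], "   true F(1) =", 1 - np.exp(-1.0))
```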
In our model based bootstrap approach we choose an estimator, say Fn, of F (which could be NPMLE \u02dc Fn or a smoothed version of it) and generate the bootstrapped response values as \u2206\u2217 i \u223cBernoulli(Fn(Ti)), fixing the values of Ti. This is the analogue of \u201cbootstrapping residuals\u201d in our setup. Let \u02dc F \u2217 n be the NPMLE of the bootstrap sample. 4 In the following we establish conditions on Fn such that the bootstrap procedure is consistent, i.e., \u03b3\u2217 n = n1/3{ \u02dc F \u2217 n(t0) \u2212Fn(t0)} converges weakly to \u03baC, as de\ufb01ned in (2), given the data. We \ufb01rst start by formalizing the notion of consistency of the bootstrap. Let H\u2217 n be the conditional distribution function of \u03b3\u2217 n, the bootstrap counterpart of \u03b3n, given the data. Let d denote the Levy metric or any other metric metrizing weak convergence of distribution functions. We say that H\u2217 n is weakly consistent if d(Hn, H\u2217 n) \u21920 in probability. If the convergence holds with probability 1, then we say that the bootstrap is strongly consistent. If Hn has a weak limit H, then consistency requires H\u2217 n to converge weakly to H, in probability; and if H is continuous, consistency requires supx\u2208R |H\u2217 n(x) \u2212H(x)| \u21920 in probability as n \u2192\u221e. Let Fn be a sequence of distribution functions that converge weakly to F and suppose that lim n\u2192\u221e\u2225Fn \u2212F\u2225= 0, (3) almost surely, where for any bounded function h : I \u2192R, \u2225h\u2225= supx\u2208I |h(x)|. As shown in Groeneboom and Wellner (1992), the NPMLE obtained from the bootstrap sample \u02dc F \u2217 n, de\ufb01ned as the maximizer of (1) over all distribution functions is a step function with possible jumps only at the predictor values T1, . . . , Tn. We have the following result on the consistency of bootstrap methods in the current status model. Theorem 2.1 If (3) and lim n\u2192\u221en1/3|Fn(t0 + n\u22121/3t) \u2212Fn(t0) \u2212f(t0)n\u22121/3t| = 0 (4) hold almost surely, then conditional on the data, the bootstrap estimator \u03b3\u2217 n converges in distribution to \u03baC, as de\ufb01ned in (2), almost surely. 2.2 Inconsistency of bootstrapping from \u02dc Fn Consider the case when we bootstrap from the NPMLE \u02dc Fn. Conditional on the predictor Ti, we generate the bootstrap response \u2206\u2217 i \u223cBernoulli( \u02dc Fn(Ti)). Thus we take Fn = \u02dc Fn and approximate the sampling distribution of \u03b3n by the conditional distribution of \u03b3\u2217 n = n1/3{ \u02dc F \u2217 n(t0) \u2212\u02dc Fn(t0)}, given the data. For this bootstrap procedure to be consistent, the conditional distribution of \u03b3\u2217 n must converge to that of \u03baC, in probability. Theorem 2.2 Unconditionally \u03b3\u2217 n does not converge in distribution to \u03baC, and thus the bootstrap method is inconsistent. In fact it can be argued, as in Sen et al. (2010), that conditionally, \u03b3\u2217 n does not have any weak limit in probability. The inconsistency of bootstrapping from the NPMLE 5 results from the lack of smoothness of \u02dc Fn. At a more technical level, the lack of the smoothness manifests itself through the failure of equation (18) in the Appendix. We illustrate through a simulation study the inconsistency of the NPMLE bootstrap method. 
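For concreteness, here is a minimal sketch of one such bootstrap loop with F_n taken to be the NPMLE, i.e. exactly the choice analyzed (and shown to fail) below; the same loop with F_n replaced by a smoothed estimator gives the consistent procedure of Section 2.3. All function names are ours.

```python
import numpy as np
from sklearn.isotonic import IsotonicRegression

def npmle_at(t, delta, t0):
    """NPMLE of F at t0 (its value at the largest observation time <= t0)."""
    order = np.argsort(t)
    ts, ds = t[order], delta[order]
    fhat = IsotonicRegression(y_min=0.0, y_max=1.0).fit_transform(ts, ds)
    return fhat[np.searchsorted(ts, t0, side="right") - 1], ts, fhat

def npmle_bootstrap(t, delta, t0, B=500, rng=None):
    """Bootstrap draws of gamma*_n = n^(1/3) { F*_n(t0) - F_n(t0) } with F_n = the NPMLE,
    i.e. Delta*_i ~ Bernoulli(F_n(T_i)) holding the observation times T_i fixed."""
    rng = rng or np.random.default_rng()
    n = len(t)
    fn_t0, ts, fn_at_ts = npmle_at(t, delta, t0)
    gammas = np.empty(B)
    for b in range(B):
        delta_star = rng.binomial(1, fn_at_ts).astype(float)   # bootstrap responses at the sorted T's
        f_star_t0, _, _ = npmle_at(ts, delta_star, t0)
        gammas[b] = n ** (1.0 / 3.0) * (f_star_t0 - fn_t0)
    return gammas
```

Percentile confidence intervals for F(t0) are then read off from the empirical quantiles of the B values of gamma*_n.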
The upper panel of Table 1 gives the estimated coverage probabilities of nominal 90% con\ufb01dence intervals for F(1), where the true distribution of F is assumed to be Exp(1), or the folded normal distribution, |N(0, 1)|, and G is taken as the uniform distribution on [0, 2]. We use 500 bootstrap samples to compute each con\ufb01dence interval and construct 500 such intervals. Throughout, we adopt this bootstrap setup unless otherwise speci\ufb01ed. Table 1 shows that the coverage probabilities are much smaller than the nominal 90% value and there is no signi\ufb01cant improvement as the sample size increases. Table 1: Estimated coverage probabilities of nominal 90% CIs for F(1) for two distributions: Exp(1) and |Z| with Z \u223cN(0, 1). n 100 200 500 NPMLE Exp(1) 0.73 0.72 0.74 |N(0, 1)| 0.69 0.70 0.73 SMLE Exp(1) 0.89 0.88 0.90 |N(0, 1)| 0.88 0.91 0.89 Furthermore, we compare the exact and bootstrapped distributions. Due to limitations of space, we only present results for F being Exp(1). Figure 1(a) shows the distribution of \u03b3n, obtained from 10000 random samples of sample size 500, and its bootstrap estimate (that of \u03b3\u2217 n) from a single sample based on 10000 bootstrap replicates. We see that the bootstrap distribution is di\ufb00erent from that of \u03b3n. \u22121.5 \u22121.0 \u22120.5 0.0 0.5 1.0 1.5 0.0 0.2 0.4 0.6 0.8 t Density (a) NPMLE bootstrap \u22121.5 \u22121.0 \u22120.5 0.0 0.5 1.0 1.5 0.0 0.2 0.4 0.6 0.8 t Density (b) SMLE bootstrap Figure 1: Estimated density functions of \u03b3n from 10000 Monte Carlo simulation (solid curve) and the bootstrap distribution of \u03b3\u2217 n when bootstrap samples are drawn from NPMLE \u02dc Fn (dashed, left panel) and SMLE \u02c7 Fn,h with h = 0.3 (dashed, right panel). F is taken as Exp(1) and n = 500. To illustrate the behavior of the conditional distribution of \u03b3\u2217 n we show in Figure 2(a) the estimated 0.95 quantiles of the bootstrap distributions for two independent 6 data sequences as the sample size increases from 500 to 5000. The 0.95 quantile of the limiting distribution of \u03b3n is indicated by the solid line in each panel of Figure 2. We can see that the bootstrap 0.95 quantiles \ufb02uctuate enormously as the sample size increases from 500 to 5000 and do not converge to the 0.95 quantile of \u03baC. This gives strong empirical evidence that the bootstrapped 0.95 quantiles do not converge. 1000 2000 3000 4000 5000 0.0 0.5 1.0 1.5 2.0 Sample size 95% quantile (a) NPMLE bootstrap 1000 2000 3000 4000 5000 0.0 0.5 1.0 1.5 2.0 Sample size 95% quantile (b) SMLE bootstrap Figure 2: Estimated 0.95 quantiles of the bootstrap (dashed) and the limiting (solid) distribution. 2.3 Consistent bootstrap methods We show that generating bootstrap samples from a suitably smoothed version of \u02dc Fn leads to a consistent bootstrap procedure. We propose the following smoothed estimator \u02c7 Fn of \u02dc Fn; see Groeneboom, Jongbloed and Witte (2010). Let K be a di\ufb00erentiable symmetric kernel density with compact support (say [\u22121, 1]) and let \u00af K(t) = R t \u2212\u221eK(s) ds be the corresponding distribution function. Let h be the smoothing parameter. Note that h may depend on the sample size n but, for notational convenience, in the following we write h instead of hn. Let Kh(t) = K(t/h)/h and \u00af Kh(t) = \u00af K(t/h). Then the smoothed maximum likelihood estimator (SMLE) of F is de\ufb01ned as \u02c7 Fn(t) \u2261\u02c7 Fn,h(t) = Z \u00af Kh(t \u2212s) d \u02dc Fn(s). 
(5) It can be easily seen that \u02c7 Fn,h is a non-decreasing function, as for t2 > t1, \u00af Kh(t2\u2212s) \u2265 \u00af Kh(t1 \u2212s) for all s. Throughout this paper, without further speci\ufb01cation, we use the following kernel function to illustrate the performance of the SMLE bootstrap: K(t) \u221d(1 \u2212t2)21[\u22121,1](t). (6) \u02c7 Fn,h is a smoothed version of the step function \u02dc Fn. As discussed in the previous section, the lack of smoothness of \u02dc Fn leads to the inconsistency of the NPMLE bootstrap method. On the other hand, the SMLE successfully mimics the local behavior 7 of F at t0, and consequently gives the desired consistency as shown in Theorem 2.3 below. Recall that when bootstrapping from the SMLE \u02c7 Fn,h our bootstrap sample is {(\u2206\u2217 i , Ti)}n i=1 where \u2206\u2217 i \u223cBernoulli( \u02c7 Fn,h(Ti)). Following Groeneboom and Wellner (1992), we assume that the point of interest t0 is in the interior of the support of F, S = [0, M0] with M0 < \u221e, on which F and G have bounded densities f and g staying away from zero, respectively. Furthermore, density g has a bounded derivative on S. Theorem 2.3 Suppose F and G satisfy the conditions listed above. Given that h \u21920 and n1/3(log n)\u22121h \u2192\u221e, the conditional distribution of n1/3{ \u02dc F \u2217 n(t0)\u2212\u02c7 Fn,h(t0)}, given the data, converges to that of \u03baC, in probability. Thus, bootstrapping from \u02c7 Fn,h is weakly consistent. We use simulation to illustrate the consistency of the SMLE bootstrap procedure. The lower panel of Table 1 gives the estimated coverage probabilities of nominal 90% con\ufb01dence intervals for F(1) (when F is assumed to be Exp(1) or |N(0, 1)| and G is taken as the uniform distribution on [0, 2]). Here we take bandwidth h = 0.3. We see that the coverage probabilities are consistent with the nominal 90% level. Figure 1(b) compares the distributions of \u03b3n, obtained from 10000 random samples of size 500, and the SMLE bootstrap estimator from a single sample, when F is Exp(1). In addition, Figure 2(b) shows the estimated 0.95 quantiles of the bootstrap distributions for two independent data sequences. We see that for the SMLE bootstrap, the estimated 0.95 quantile is converging to the appropriate limiting value. This validates our theoretical result. 2.4 Choice of the tuning parameter in practice We propose a bootstrap-based method of choosing the smoothing bandwidth h, required for computing SMLE \u02c7 Fn,h. A commonly used criterion for judging the e\ufb03cacy of bandwidth selection techniques is the mean squared error (MSE) for bandwidth h, MSE(h) = E[{ \u02c7 Fn,h(t0) \u2212F(t0)}2]. (7) However, the above quantity is not directly computable since F is unknown in applications. To overcome this di\ufb03culty, di\ufb00erent procedures have been explored in the literature. Among them, bootstrap is again one of the most widely used methods and we use it to estimate the MSE in our paper. This bootstrap approach works as follows. The idea is to approximate (7) by BMSE(h) = 1 B XB i=1{ \u02c7 F \u2217 n,h(t0) \u2212Fn(t0)}2, (8) where \u02c7 F \u2217 n,h(t0) is constructed as in (5) (with bandwidth h) from data {(Ti, \u2206\u2217 i )}n i=1 with \u2206\u2217 i \u223cBernoulli(Fn(Ti)), for i = 1, . . . , n, and B is a large number. Throughout, we take B = 500. In the following we study two choices of Fn, the NPMLE and 8 the SMLE, and show that the NPMLE does not give consistent estimates of MSE(h) while the SMLE performs well. 
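The SMLE in (5) with the kernel in (6), and the bootstrap criterion BMSE(h) in (8), can be sketched as follows. Here kbar is the distribution function of K(t) = (15/16)(1 - t^2)^2 on [-1, 1]; the function names and the way the chosen F_n is passed in are our own illustrative choices, not the authors' code.

```python
import numpy as np
from sklearn.isotonic import IsotonicRegression

def kbar(u):
    """Distribution function of the kernel K(t) = (15/16) (1 - t^2)^2 on [-1, 1]."""
    u = np.clip(u, -1.0, 1.0)
    return (15.0 / 16.0) * (u - 2.0 * u ** 3 / 3.0 + u ** 5 / 5.0) + 0.5

def smle(t, delta, h, x):
    """SMLE (5) at the point x: integrate Kbar((x - s)/h) against the jumps of the NPMLE."""
    order = np.argsort(t)
    ts, ds = t[order], delta[order]
    fhat = IsotonicRegression(y_min=0.0, y_max=1.0).fit_transform(ts, ds)
    jumps = np.diff(fhat, prepend=0.0)              # mass placed by the NPMLE at each ts[j]
    return float(np.sum(kbar((x - ts) / h) * jumps))

def bmse(t, fn_at_t, fn_t0, t0, h, B=500, rng=None):
    """Bootstrap estimate (8) of MSE(h): draw Delta*_i ~ Bernoulli(F_n(T_i)), recompute the
    SMLE with bandwidth h, and average the squared deviation from F_n(t0)."""
    rng = rng or np.random.default_rng()
    sq_err = np.empty(B)
    for b in range(B):
        delta_star = rng.binomial(1, fn_at_t).astype(float)
        sq_err[b] = (smle(t, delta_star, h, t0) - fn_t0) ** 2
    return sq_err.mean()
```

Minimizing bmse over a grid of h values gives the data-driven bandwidth; which F_n to use inside the resampling is discussed next.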
A natural choice of Fn is \u02dc Fn, the NPMLE based on the data. However, as shown in Figure 3(a), the estimated MSE curves for di\ufb00erent data sets are not consistent with the true curve, simulated from 500 independent samples. Here we use the kernel function in (6). For each MSE curve, we approximate it using BMSE(hi) with hi = i/20, for i = 1, . . . , 20. 0.2 0.4 0.6 0.8 1.0 0.000 0.002 0.004 0.006 0.008 h MSE (a) MSE from NPMLE 0.2 0.4 0.6 0.8 1.0 0.000 0.001 0.002 0.003 0.004 h (b) MSE from SMLE 0.2 0.4 0.6 0.8 1.0 0.000 0.001 0.002 0.003 0.004 h (c) MSE with di\ufb00erent h0 Figure 3: (a) Estimated MSE curves from the NPMLE bootstrap (dashed) and the true MSE based on 500 random samples (solid); (b) estimated MSE from the SMLE with pre-chosen bandwidth h0 = 0.5 (dashed); (c) estimated MSE from the SMLE with h0 = 0.3, 0.4, 0.5, 0.6 and 0.7 (dashed). F is taken as Exp(1) and n = 1000. Another choice of Fn is \u02c7 Fn,h0, the SMLE with a pre-chosen bandwidth h0. This strategy is commonly used to select the bandwidth in density estimation; see, e.g., Hazelton (1996) and Gonz\u00b4 alez-Manteiga et al. (1996). We choose h0 as the initial bandwidth and sample \u2206\u2217 i \u223cBernoulli( \u02c7 Fn,h0(Ti)). Then the MSE is estimated by 1 B XB i=1{ \u02c7 F \u2217 n,h(t0) \u2212\u02c7 Fn,h0(t0)}2. (9) Under the same simulation setup as for the NPMLE, we show in Figure 3(b) that the estimated MSE curves from the SMLE are consistent with the true curve based on 500 random samples. A related issue in practice is how to choose the optimal initial smoothing bandwidth h0. As in density estimation, where it has been shown that di\ufb00erent initial values of bandwidths yield consistent estimation results, we also illustrate that di\ufb00erent h0 values give similar estimated MSE curves and therefore do not a\ufb00ect our \ufb01nal estimation much. We illustrate this through a simulation study. We choose 5 initial values of h0, h0 = 0.3, 0.4, 0.5, 0.6 and 0.7, and show in Figure 3(c) that the estimated MSE curves with di\ufb00erent h0 values have similar shapes and are consistent with the true MSE curve. The minimum values of the estimated MSE curves are also close to the true minimum. 9 3 Interval censoring, case 2 In case 2 censoring, an individual is checked exactly at two time points (T1, T2). Suppose that we have n independent and identically distributed random vectors {(Xi, Ti,1, Ti,2)}n i=1, where for each pair (Xi, Ti,1, Ti,2), Xi \u223cF and (Ti,1, Ti,2) are independent and Ti,1 < Ti,2. For the ith individual, we observe (Ti,1, Ti,2, \u2206i,1, \u2206i,2) where \u2206i,1 = 1Xi\u2264Ti,1 and \u2206i,2 = 1Ti,1 0, P(\u03c1(Vn, V ) > \u03f5 | Zn) \u2212 \u21920 almost surely (in probability), where Zn denotes our observed data. 6.1 Proof of Theorem 2.1 We denote our bootstrap sample by (T1, \u2206\u2217 1), . . . , (Tn, \u2206\u2217 n). Let P\u2217 n denote the induced measure of the bootstrap sample and write P\u2217 nf(\u2206, T) = 1 n n X i=1 f(\u2206\u2217 i , Ti). Letting A = {(x, t) : x \u2264t}, we de\ufb01ne the following stochastic processes: V \u2217 n (t) = P\u2217 n1A1R\u00d7[0,t] = 1 n n X i=1 \u2206\u2217 i 1Ti\u2264t, G\u2217 n(t) = P\u2217 n1R\u00d7[0,t] = 1 n n X i=1 1Ti\u2264t. Let PT,n be the empirical probability measure of {Ti}n i=1. Let Pn be the probability measure induced by Fn and PT,n. Note that under the conditional bootstrap procedure, P\u2217 nf(T) = Pnf(T) = PT,nf(T) = Pn i=1 f(Ti)/n. We use En to denote the expectation with respect to Pn. 
Appealing to the characterization of \u02dc F \u2217 n (see pp. 298-299 of van der Vaart and Wellner, 2000b), we know that \u02dc F \u2217 n(t) \u2264a i\ufb00 arg min s {V \u2217 n (s) \u2212aG\u2217 n(s)} \u2265T(t) (13) where T(t) is the largest observation time that does not exceed t. By (13), the event that n1/3{ \u02dc F \u2217 n(t0) \u2212Fn(t0)} \u2264x is equivalent to arg min s \b V \u2217 n (s) \u2212[xn\u22121/3 + Fn(t0)]G\u2217 n(s) \t \u2265T(t0). 15 This is the same as n1/3 h arg min s \b V \u2217 n (s) \u2212[xn\u22121/3 + Fn(t0)]G\u2217 n(s) \t \u2212t0 i \u2265n1/3(T(t0) \u2212t0). Changing s 7\u2192t0 + tn\u22121/3 and using the fact that n1/3(T(t0) \u2212t0) = o(1), the above inequality can be re-expressed as arg min t \u0002 V \u2217 n (t0 + tn\u22121/3) \u2212{xn\u22121/3 + Fn(t0)}G\u2217 n(t0 + tn\u22121/3) \u0003 \u2265o(1). The left hand side of the above inequality can be written as arg min t \u0002 P\u2217 n1A1R\u00d7[0,t0+tn\u22121/3] \u2212Fn(t0)G\u2217 n(t0 + tn\u22121/3) \u2212xn\u22121/3G\u2217 n(t0 + tn\u22121/3) \u0003 = arg min t h n2/3P\u2217 n{1A \u2212Fn(t0)}(1R\u00d7[0,t0+tn\u22121/3] \u22121R\u00d7[0,t0]) \u2212xn1/3[G\u2217 n(t0 + tn\u22121/3) \u2212G\u2217 n(t0)] i . = arg min t h n2/3P\u2217 n(1A \u2212Fn(T))(1R\u00d7[0,t0+tn\u22121/3] \u22121R\u00d7[0,t0]) +n2/3P\u2217 n(Fn(T) \u2212Fn(t0))(1R\u00d7[0,t0+tn\u22121/3] \u22121R\u00d7[0,t0]) \u2212xn1/3[G\u2217 n(t0 + tn\u22121/3) \u2212G\u2217 n(t0)] i . (14) To study the distribution of \u03b3\u2217 n, we start with the distributions of the three terms in (14). This is given in the following Lemma. Lemma 6.1 We have the following convergence results: (i) We have that xn1/3{G\u2217 n(t0 + tn\u22121/3) \u2212G\u2217 n(t0)} \u2192xg(t0)t, (15) uniformly on compacta, almost surely. (ii) Let Z be a standard two-sided Brownian motion on R such that Z(0) = 0. If (3) holds, then, conditional on the data, the process n2/3P\u2217 n \b (1A \u2212Fn(T))(1R\u00d7[0,t0+tn\u22121/3] \u22121R\u00d7[0,t0]) \t d \u2192 p F(t0)[1 \u2212F(t0)]g(t0)Z(t) (16) almost surely in the space D(R). (iii) If we have the following convergence uniformly on compacts in t lim n\u2192\u221en1/3|Fn(t0 + n\u22121/3t) \u2212Fn(t0) \u2212f(t0)n\u22121/3t| = 0, (17) then conditionally n2/3P\u2217 n \b (Fn(T) \u2212Fn(t0))(1R\u00d7[0,t0+tn\u22121/3] \u22121R\u00d7[0,t0]) \t \u21921 2f(t0)g(t0)t2 (18) uniformly on compacta, almost surely. 16 Proof of Lemma 6.1. (i) To show the \ufb01rst convergence result, observe that xn1/3{G\u2217 n(t0 + tn\u22121/3) \u2212G\u2217 n(t0)} = xn1/3(PT,n \u2212P)(1R\u00d7[0,t0+tn\u22121/3] \u22121R\u00d7[0,t0]) + xn1/3P(1R\u00d7[0,t0+tn\u22121/3] \u22121R\u00d7[0,t0]). By the law of iterated logarithm, this equals o(1) + xn1/3{G(t0 + tn\u22121/3) \u2212G(t0)} \u2192xg(t0)t, a.s., uniformly on compacta. (ii) To show (16), let Zn,i(t) = n\u22121/3(\u2206\u2217 i \u2212Fn(Ti))Wn,i(t) where Wn,i(t) = 1Ti\u2264t0+tn\u22121/3\u2212 1Ti\u2264t0. The left-hand side of (16) then can be expressed as Pn i=1 Zn,i(t). Note that Zn,i(t) has mean 0 and variance \u03c32 n,i(t) = n\u22122/3W 2 n,i(t)Fn(Ti)[1 \u2212Fn(Ti)]. Therefore, for h > 0, s2 n(t) := Pn i=1 \u03c32 n,i(t) can be simpli\ufb01ed as n1/3PT,n[Fn(T)(1 \u2212Fn(T))(1T\u2264t0+tn\u22121/3 \u22121T\u2264t0)2]. 
The preceding display is equal to o(1) + n1/3P[Fn(T)(1 \u2212Fn(T))(1T\u2264t0+tn\u22121/3 \u22121T\u2264t0)2] = o(1) + n1/3 Z t0+tn\u22121/3 t0 Fn(u)(1 \u2212Fn(u))g(u)du = o(1) + Z t 0 Fn(t0 + sn\u22121/3)[1 \u2212Fn(t0 + sn\u22121/3)]g(t0 + sn\u22121/3)ds a.s \u2192 F(t0)[1 \u2212F(t0)]g(t0)t. By the Lindeberg-Feller CLT (see pp. 359 of Billingsley, 1995) we have n X i=1 Zn,i(t) d \u2192N(0, F(t0)[1 \u2212F(t0)]g(t0)t) for every t \u2208R. Similarly we have the the convergence of the \ufb01nite dimensional joint distribution. We only need to show the tightness of Pn i=1 Zn,i(t). By Theorem 15.6 in Billingsley (1968), it is su\ufb03cient to show that there exists a nondecreasing, continuous function H such that for any t1 < t < t2, \u03b3 > 0 and \u03b1 > 1/2 En h\f \f \f n X i=1 Zn,i(t1)\u2212 n X i=1 Zn,i(t) \f \f \f \u03b3\f \f \f n X i=1 Zn,i(t2)\u2212 n X i=1 Zn,i(t) \f \f \f \u03b3i \u2264(H(t2)\u2212H(t1))2\u03b1. (19) 17 Take \u03b3 = 2 and \u03b1 = 1. Note that the following inequality holds almost surely En h\f \f \f X i Zn,i(t1) \u2212 X i Zn,i(t) \f \f \f 2\f \f \f X j Zn,j(t2) \u2212 X j Zn,j(t) \f \f \f 2i = n\u22124/3En h X i (\u2206\u2217 i \u2212Fn(Ti))21t0+t1n\u22121/3 0, as follows: n2/3 \f \f(P\u2217 n \u2212P) \u0002 (Fn(T) \u2212Fn(t0))(1R\u00d7[0,t0+tn\u22121/3] \u22121R\u00d7[0,t0]) \u0003\f \f = n2/3\f \f \f Z t0+tn\u22121/3 t0 (Fn(u) \u2212Fn(t0)) d(PT,n \u2212P)(u) \f \f \f = n2/3\f \f \f h (PT,n \u2212P)(u)(Fn(u) \u2212Fn(t0)) it0+tn\u22121/3 t0 \u2212 Z t0+tn\u22121/3 t0 (PT,n \u2212P)(u) dFn(u) \f \f \f \u2264 (\u221an\u2225PT,n \u2212P\u2225) 2n1/6 \u0002 Fn(t0 + Kn\u22121/3) \u2212Fn(t0 \u2212Kn\u22121/3) \u0003 = o(1), a.s. Therefore \ufb01rst term in (20) is ignorable given that (17) holds. The second term in (20) can be simpli\ufb01ed as: n2/3P \u0002 (Fn(T) \u2212Fn(t0))(1R\u00d7[0,t0+tn\u22121/3] \u22121R\u00d7[0,t0]) \u0003 = n1/3 Z t 0 [Fn(t0 + sn\u22121/3) \u2212F(t0)]g(t0 + sn\u22121/3)ds = (1 + o(1))n1/3 Z t 0 sn\u22121/3f(t0)g(t0 + sn\u22121/3)ds \u2192 f(t0)g(t0)1 2t2, where the last step follows from the assumption that g is continuous at t0. This gives the desired conclusion. 18 We proceed to prove Theorem 2.1. Proof of Theorem 2.1. This follows from a similar argument as in Section 3.2.15 in van der Vaart and Wellner (2000b). We only need to show the uniform tightness of the minimum, i.e., for any \u03f5 and B0 > 0, there exists a constant B such that Pn \u0012 max x\u2208[\u2212B0,B0] n1/3 \f \f \farg min s \b V \u2217 n (s) \u2212[xn\u22121/3 + Fn(t0)]G\u2217 n(s) \t \u2212t0 \f \f \f > B \u0013 < \u03f5, a.s. Here recall that Pn is the probability measure induced by Fn and the empirical probability measure of {Ti}n i=1. The above tightness result follows from Theorem 3.4.1 in van der Vaart and Wellner (2000b). In particular, following their notation, we take M\u2217 n(h) = P\u2217 n(1A \u2212Fn(T))(1R\u00d7[0,t0+h] \u22121R\u00d7[0,t0]) + P\u2217 n(Fn(T) \u2212Fn(t0))(1R\u00d7[0,t0+h] \u22121R\u00d7[0,t0]) \u2212xn\u22121/3[G\u2217 n(t0 + h) \u2212G\u2217 n(t0)], and Mn(h) = Pn(1A \u2212Fn(T))(1R\u00d7[0,t0+h] \u22121R\u00d7[0,t0]). Then the conditions of their Theorem 3.4.1 are satis\ufb01ed with \u03c6n(\u03b4) = \u221a \u03b4 + x\u03b4n1/6. Together with the fact that Mn(h) converges to 0 in probability, which follows from the results in Lemma 6.1, the tightness result holds and we have the weak convergence of \u03b3\u2217 n. 
The above tightness result and Lemma 6.1 imply that conditional on the data n1/3 arg mins{V \u2217 n (s) \u2212[xn\u22121/3 + Fn(t0)]G\u2217 n(s)} \u2212t0 converges weakly to the process T(x) := argmint \u001ap F(t0)[1 \u2212F(t0)]g(t0)Z(t) + 1 2f(t0)g(t0)t2 \u2212xg(t0)t \u001b almost surely. Therefore conditionally we have the following convergence Pn(n1/3( \u02dc F \u2217 n(t0) \u2212Fn(t0)) \u2264x) a.s. \u2212 \u2212 \u2192 P(T(x) \u22650) = P(T(x) \u2212f(t0)\u22121x \u2265\u2212f(t0)\u22121x). By the stationary of process T(x) \u2212f(t0)\u22121x as shown in Groeneboom (1989), we have that T(x) \u2212f(t0)\u22121x and T(0) have the same distribution function. Therefore, Pn(n1/3( \u02dc F \u2217 n(t0) \u2212Fn(t0)) \u2264x) converges almost surely to P(T(0) \u2265\u2212f(t0)\u22121x) = P \u0010 argmint np F(t0)[1 \u2212F(t0)]g(t0)Z(t) + 1 2f(t0)g(t0)t2o \u2265\u2212f(t0)\u22121x \u0011 = P \u0010 argmint n Z \u0010 f(t0)2/3g(t0)1/3 [4F(t0)(1 \u2212F(t0)]1/3t \u0011 + \u0010 f(t0)2/3g(t0)1/3 [4F(t0)(1 \u2212F(t0)]1/3t \u00112o \u2265\u2212\u03ba\u22121x \u0011 . Here recall that \u03ba = {4F(t0)(1\u2212F(t0))f(t0)/g(t0)}1/3 and the last equation is due to the Brownian scaling. Then, the above display is equal to P(argmint{Z(t) + t2} \u2265\u2212\u03ba\u22121x) = P(argmint{Z(t) + t2} \u2264\u03ba\u22121x), which gives the desired conclusion. 19 6.2 Proofs of Theorems 2.2 and 2.3 Proof of Theorem 2.2. Proof of Theorem 2.2 follows a similar argument as in the proof of Theorem 3.1 in Sen et al. (2010) and we only show the key steps. The bootstrap NPMLE \u02dc F \u2217 n is the left derivative of the greatest convex minorant of the cumulative sum diagram consisting of points (G\u2217 n(t), V \u2217 n (t)). Let F\u2217 n be the corresponding cumulative sum diagram function, i.e., for u \u2208[G\u2217 n(T(i)), G\u2217 n(T(i+1))) function F\u2217 n(u) = V \u2217 n (T(i)), where T(1) \u2264T(2) \u2264\u00b7 \u00b7 \u00b7 \u2264T(n) are the order statistics of T1, . . . , Tn. Then \u03b3\u2217 n equals the left derivative at t = 0 of the greatest convex minorant of process Z\u2217 n(t) := n2/3{F\u2217 n(G\u2217 n(t0) + n\u22121/3t)) \u2212F\u2217 n(G\u2217 n(t0)) \u2212\u02dc Fn(t0)n\u22121/3t}. We further write Z\u2217 n(t) as Z\u2217 n,1(t) + Z\u2217 n,2(t), where Z\u2217 n,1(t) := n2/3{(F\u2217 n \u2212\u02dc Fn)(G\u2217 n(t0) + n\u22121/3t)) \u2212(F\u2217 n \u2212\u02dc Fn)(G\u2217 n(t0))}, Z\u2217 n,2(t) := n2/3{\u02dc Fn(G\u2217 n(t0) + n\u22121/3t)) \u2212\u02dc Fn(G\u2217 n(t0)) \u2212\u02dc Fn(t0)n\u22121/3t}. Here \u02dc Fn is the greatest convex minorant of the cumulative sum diagram function based on the observed data (Ti, \u2206i), i = 1, . . . , n. These Z\u2217processes take analogous forms as Z\u2217 n(h), Z\u2217 n,1(h), Z\u2217 n,2(h) in Section 3.2 in Sen et al. (2010). For f, let LRf be its greatest convex minorant on R. Following the proofs of Theorem 3.1 in Sen et al. (2010) and Lemma 6.1, we have unconditionally Z\u2217 n,1(t) d \u2192 U1(t) := p F(t0)[1 \u2212F(t0)]Z1(t), Z\u2217 n,2(t) d \u2192 U2(t) := LRZ0 2(t) \u2212LRZ0 2(0) \u2212(LRZ0 2)\u2032(0)t, where Z0 2(t) = p F(t0)[1 \u2212F(t0)]Z2(t) + 1 2f(t0)g\u22121(t0)t2, and Z1(t) and Z2(t) are two independent two-sided standard Brownian motions. Furthermore, we have that the unconditional distribution of \u03b3\u2217 n converges to that of LR(U1 + U2)\u2032(0), which is di\ufb00erent from that of \u03baC. This gives the inconsistency result. Proof of Theorem 2.3. We will apply Theorem 2.1. 
By Lemma 5.9 in Groeneboom and Wellner (1992), the NPMLE \u02dc Fn satis\ufb01es \u2225\u02dc Fn \u2212F\u2225= Op(n\u22121/3 log n) under our assumption of F and G. Since h \u21920 and n1/3(log n)\u22121 h \u2192\u221e, we have the following holds uniformly in t: | \u02c7 Fn,h(t) \u2212F(t)| \u2264 \f \f \f Z \u00af Kh(t \u2212s)d( \u02dc Fn(s) \u2212F(s)) \f \f \f + \f \f \f Z \u00af Kh(t \u2212s)dF(s) \u2212F(t) \f \f \f = \f \f \f Z ( \u02dc Fn(s) \u2212F(s))dKh(t \u2212s) \f \f \f + \f \f \f Z \u00af Kh(t \u2212s)dF(s) \u2212F(t) \f \f \f + op(1) = O(1)\u2225\u02dc Fn \u2212F\u2225h\u22121 + op(1) = op(1). 20 Thus (3) holds in probability. Next we show that (17) holds in probability for SMLE \u02c7 Fn,h, i.e., lim n\u2192\u221en1/3| \u02c7 Fn,h(t0 + n\u22121/3t) \u2212\u02c7 Fn,h(t0) \u2212f(t0)n\u22121/3t| = 0. Note that we have n1/3( \u02c7 Fn,h(t0 + n\u22121/3t) \u2212\u02c7 Fn,h(t0)) = n1/3 Z \b \u00af Kh(t0 + n\u22121/3t \u2212s) \u2212\u00af Kh(t0 \u2212s) \t d \u02dc Fn(s) = op(1) + t Z Kh(t0 \u2212s) d \u02dc Fn(s). Integrating by parts yields Z Kh(t0 \u2212s) d \u02dc Fn(s) = Z Kh(t0 \u2212s) d( \u02dc Fn(s) \u2212F(s)) + Z Kh(t0 \u2212s)dF(s) = \u2212 Z ( \u02dc Fn(s) \u2212F(s))dKh(t0 \u2212s) + Z Kh(t0 \u2212s)dF(s) + op(1) = f(t0) + op(1), given that \u2225\u02dc Fn \u2212F\u2225= Op(n\u22121/3 log n), h \u21920 and n1/3(log n)\u22121h \u2192\u221e. 6.3 Proof of Theorem 3.1 Let P\u2217 n denote the induced measure of the bootstrap sample. For any t > 0, de\ufb01ne W \u2217 n(t) = W \u2217 n,1(t) + Z t 0 {Fn(s) \u2212Fn(t0)} dW \u2217 n,2(s), where for k = 1 and 2, W \u2217 n,k(t) = Z t1\u2208[0,t],x\u2264t1 dP\u2217 n(x, t1, t2) {Fn(t1)}k + Z t1\u2208[0,t],t1t2 dP\u2217 n(x, t1, t2) {Fn(t2) \u22121}k . Thanks to the characterization of \u02dc F \u2217,(1) n (see, e.g., Groeneboom, 1991), we know that Pn h (n log n)1/3{ \u02dc F \u2217,(1) n (t0) \u2212Fn(t0)} > x i = Pn[T \u2217,(0) n (Fn(t0) + (n log n)\u22121/3x) < t0], where Pn is the probability measure induced by Fn and the empirical probability measure of {Ti,1, Ti,2}n i=1, and T \u2217,(0) n (x) := sargmint[W \u2217 n(t) \u2212{x \u2212Fn(t0)}W \u2217 n,2(t)]. 21 For a function w(t), sargmintw(t) means the maximum value of the minimizers of function w(t). If there is a unique minimizer, then sargmintw(t) = argmintw(t). By the de\ufb01nition of T \u2217,(0) n (x), we can write (n log n)1/3\u0010 T \u2217,(0) n (Fn(t0) + (n log n)1/3x) \u2212t0 \u0011 = sargmint n n2/3(log n)\u22121/3(W \u2217 n(t0 + (n log n)1/3t) \u2212W \u2217 n(t0)) \u2212x(n log n)1/3(W \u2217 n,2(t0 + (n log n)\u22121/3t) \u2212W \u2217 n,2(t0)) o = sargmint n n2/3(log n)\u22121/3(W \u2217 n,1(t0 + (n log n)1/3t) \u2212W \u2217 n,1(t0)) + n2/3(log n)\u22121/3 Z t0+(n log n)1/3t t0 (Fn(s) \u2212Fn(t0))dW \u2217 n,2(s) \u2212xn1/3(log n)\u22122/3(W \u2217 n,2(t0 + (n log n)\u22121/3t) \u2212W \u2217 n,2(t0)) o . As in the proof of Theorem 6.1, we start with the distributions of three terms in the above display. This is given by the following lemma. 
Lemma 6.2 For a sequence of distribution functions Fn that converge weakly to F, if the following convergence holds uniformly on compacts (in t) lim n\u2192\u221e(n log n)1/3|Fn(t0 + (n log n)\u22121/3t) \u2212Fn(t0) \u2212f(t0)(n log n)\u22121/3t| = 0, (21) then, conditionally on the data, we have the the following convergence almost surely uniformly on compacta n1/3(log n)\u22122/3(W \u2217 n,2(t0 + (n log n)\u22121/3t) \u2212W \u2217 n,2(t0)) p \u2212 \u2192 2 3f(t0)h(t0, t0)t, n2/3(log n)\u22121/3 Z t0+(n log n)1/3t t0 (Fn(s) \u2212Fn(t0))dW \u2217 n,2(s) p \u2212 \u2192 1 3h(t0, t0)t2, n2/3(log n)\u22121/3(W \u2217 n,1(t0 + (n log n)1/3t) \u2212W \u2217 n,1(t0)) d \u2212 \u2192 r 2 3h(t0, t0)/f(t0)Z(t). Proof of Lemma 6.2. Proof of Lemma 6.2 follows from a similar argument as in the proof of Theorem 5.3 in Groeneboom (1991). Note that for B0 > 0, if (21) holds, we have that Pn \u0010 T1 < X < T2, T1, T2 \u2208[t0, t0 + B0(n log n)\u22121/3] \u0011 = (1 + o(1)) Z t0\u2264t1t1+B0(n log n)\u22121/3 (Fn(t2) \u2212Fn(t1))2 dP\u2217 n(x, t1, t2) + Z t2\u2208[t0,t0+(n log n)\u22121/3t] 1t1t1+B0(n log n)\u22121/3 (Fn(t2) \u2212Fn(t1))2 + 1x>t2 (1 \u2212Fn(t2))2dP\u2217 n(x, t1, t2) \u0015 . The above display is equal to (1 + o(1))n1/3(log n)\u22122/3 \u00d7 \u0014 Z t1\u2208[t0,t0+(n log n)\u22121/3t] 1x\u2264t1 Fn(t1) + 1t1t1+B0(n log n)\u22121/3 Fn(t2) \u2212Fn(t1) dH(t1, t2) + Z t2\u2208[t0,t0+(n log n)\u22121/3t] 1t1t1+B0(n log n)\u22121/3 Fn(t2) \u2212Fn(t1) + 1x>t2 1 \u2212Fn(t2)dH(t1, t2) \u0015 = (1 + o(1)) Z t1\u2208[t0,t0+(n log n)\u22121/3t] 1t1t1+B0(n log n)\u22121/3 f(t0)(t2 \u2212t1) h(t0, t0)dt1dt2 +(1 + o(1)) Z t2\u2208[t0,t0+(n log n)\u22121/3t] 1t1t1+B0(n log n)\u22121/3 f(t0)(t2 \u2212t1) h(t0, t0)dt1dt2 + o(1) = (1 + o(1))2h(t0, t0)t 3f(t0) . In the above display, recall that H is the distribution function of T1 and T2. This gives the \ufb01rst convergence result. The second convergence follows from a similar argument. Conditionally on T\u2019s, 23 the following equation holds with probability 1, almost surely, n2/3(log n)\u22121/3 Z t0+(n log n)1/3t t0 (Fn(s) \u2212Fn(t0))dW \u2217 n,2(s) = n2/3(log n)\u22121/3 \u0014 Z t1\u2208[t0,t0+(n log n)\u22121/3t],x\u2264t1 Fn(t1) \u2212Fn(t0) Fn(t1)2 dP\u2217 n(x, t1, t2) + Z t1 \u2208[t0, t0 + (n log n)\u22121/3t], t1 < x \u2264t2, t2 > t1 + B0(n log n)\u22121/3 Fn(t1) \u2212Fn(t0) (Fn(t2) \u2212Fn(t1))2dP\u2217 n(x, t1, t2) + Z t2 \u2208[t0, t0 + (n log n)\u22121/3t], t1 < x \u2264t2, t2 > t1 + B0(n log n)\u22121/3 Fn(t2) \u2212Fn(t0) (Fn(t2) \u2212Fn(t1))2dP\u2217 n(x, t1, t2) + Z t2\u2208[t0,t0+(n log n)\u22121/3t],x>t2 Fn(t2) \u2212Fn(t0) (1 \u2212Fn(t2))2 dP\u2217 n(x, t1, t2) \u0015 . The preceding display is equal to (1 + o(1)) Z t1 \u2208[t0, t0 + (n log n)\u22121/3t], t1 < x \u2264t2, t2 > t1 + B0(n log n)\u22121/3 t1 \u2212t0 t2 \u2212t1 h(t0, t0)dt1dt2 +(1 + o(1)) Z t2 \u2208[t0, t0 + (n log n)\u22121/3t], t1 < x \u2264t2, t2 > t1 + B0(n log n)\u22121/3 t1 \u2212t0 t2 \u2212t1 h(t0, t0)dt1dt2 + o(1) = (1 + o(1))1 3h(t0, t0)t2. The third convergence result follows from a similar argument as in the proof of Lemma 5.5 in Groeneboom (1991). 
For t \u2208[0, B0], let \u00af W \u2217 n(t) = n2/3(log n)\u22121/3 \u0014 Z t1 \u2208[t0, t0 + (n log n)1/3t], x \u2264t1, t2 > t1 + B0(n log n)\u22121/3 1 Fn(t1)dP\u2217 n(x, t1, t2) \u2212 Z t1 \u2208[t0, t0 + (n log n)1/3t], t1 < x \u2264t2 t2 > t1 + B0(n log n)\u22121/3 1 Fn(t2) \u2212Fn(t1)dP\u2217 n(x, t1, t2) + Z t2 \u2208[t0, t0 + (n log n)1/3t], t1 < x \u2264t2 t2 > t1 + B0(n log n)\u22121/3 1 Fn(t2) \u2212Fn(t1)dP\u2217 n(x, t1, t2) \u2212 Z t2 \u2208[t0, t0 + (n log n)1/3t], x > t2, t2 > t1 + B0(n log n)\u22121/3 1 1 \u2212Fn(t2)dP\u2217 n(x, t1, t2) \u0015 . We can see that conditional on T\u2019s, \u00af W \u2217 n(t) is a martingale and its variance is given 24 by (1 + o(1))n1/3(log n)\u22122/3 \u0014 Z t1\u2208[t0,t0+(n log n)\u22121/3t],x\u2264t1 1 Fn(t1)dH(t1, t2) + Z t1 \u2208[t0, t0 + (n log n)\u22121/3t], t1 < x \u2264t2, t2 > t1 + B0(n log n)\u22121/3 1 Fn(t2) \u2212Fn(t1)dH(t1, t2) + Z t2 \u2208[t0, t0 + (n log n)\u22121/3t], t1 < x \u2264t2, t2 > t1 + B0(n log n)\u22121/3 1 Fn(t2) \u2212Fn(t1)dH(t1, t2) + Z t2\u2208[t0,t0+(n log n)\u22121/3t],x>t2 1 1 \u2212Fn(t2)dH(t1, t2) \u0015 = (1 + o(1))2h(t0, t0)t 3f(t0) , a.s. Therefore, by the martingale central limit theorem, we have the conditional weak convergence of \u00af W \u2217 n(t) to q 2 3h(t0, t0)/f(t0)Z(t). Next we show that the di\ufb00erence between \u00af W \u2217 n and n2/3(log n)\u22121/3(W \u2217 n,1(t0 + (n log n)1/3t) \u2212W \u2217 n,1(t0)) is ignorable. Let \u02c6 W \u2217 n = n2/3(log n)\u22121/3(W \u2217 n,1(t0 + (n log n)1/3t) \u2212W \u2217 n,1(t0)) \u2212\u00af W \u2217 n. By Markov\u2019s inequality and a similar argument as in the proof of the second convergence result, we obtain that Pn \u0012 max t\u2208[0,B0] | \u02c6 W \u2217 n(t)| > \u03f5 \u0013 \u2264O(1) \u03f5\u22121n2/3 (log n)1/3 Z t1,t2\u2208[t0,t0+(n log n)\u22121/3B0] dH(t1, t2) = o(1) a.s. Therefore, we have the third convergence result. Proof of Theorem 3.1. We need the following tightness result, whose proof follows from a similar argument as in the proof of Lemma 5.6 in Groeneboom (1991) and is omitted from this paper. Lemma 6.3 If (21) holds, then for any \u03f5 > 0 and B0 > 0, there exists a constant B such that the following result holds almost surely Pn \u0012 max x\u2208[\u2212B0,B0](n log n)1/3 \f \fT \u2217,(0) n (Fn(t0) + (n log n)\u22121/3x) \u2212t0 \f \f > B \u0013 < \u03f5. The above lemma and Lemma 6.2 imply that (T \u2217,(0) n (Fn(t0) + (n log n)\u22121/3x) \u2212t0 converges to the process T(x) := sargmint (s 2h(t0, t0) 3f(t0) Z(t) + 1 3h(t0, t0)t2 \u2212x2h(t0, t0)t 3f(t0) ) a.s. 25 Since Pn((n log n)1/3(F \u2217,(1) n (t0) \u2212Fn(t0)) \u2264x) = Pn(T \u2217,(0) n (Fn(t0) + (n log n)\u22121/3x) \u2265 t0), we have Pn \u0000(n log n)1/3(F \u2217,(1) n (t0) \u2212Fn(t0)) \u2264x \u0001 a.s. \u2212 \u2212 \u2192P(T(x) \u2212f \u22121(t0)x \u2265\u2212f \u22121(t0)x). By the stationary of process T(x) \u2212f \u22121(t0)x as given in Groeneboom (1989), we have that Pn((n log n)1/3(F \u2217,(1) n (t0)\u2212Fn(t0)) \u2264x) converges to P(T(0) \u2265\u2212f \u22121(t0)x). Then by a Brownian scaling argument as in the proof of Theorem 2.1, we obtain the desired conclusion.", "introduction": "In recent years there has been considerable research on the analysis of interval cen- sored data. 
Such data arise extensively in epidemiological studies and clinical trials, especially in large-scale panel studies where the event of interest, which is typically an infection with a disease or some other failure (like organ failure), is not observed exactly but is only known to happen between two consecutive examination times. In particular, large-scale HIV/AIDS studies typically yield various types of interval censored data where interest centers on the distribution of time to HIV infection, but the exact time of infection is only known to lie between two consecutive followups at the clinic. For general interval censored data, often called mixed case interval censoring, an individual is checked at several time points and the status of the individual is ascertained: 1 if the infection/failure has occurred by the time he/she is checked and 0 otherwise. Let X be the unobserved time of onset of some disease, having distribution function F, and let T1 \u2264T2 \u2264\u00b7 \u00b7 \u00b7 \u2264TK be the K observation times. Here X and (T1, . . . , TK) are assumed to be independent and K can be random. 1 arXiv:1312.6341v1 [stat.ME] 22 Dec 2013 We observe (T1, . . . , TK, \u22061, . . . , \u2206K) where \u2206k = 1Tk\u22121 0, assumed to be in the interior of the support of F. When the observation number K \u22611, we say that we have case 1 interval censoring or current status data. In this case, our observations are (Ti, \u2206i) with \u2206i = 1Xi\u2264Ti, i = 1, . . . , n. The nonparametric maximum likelihood estimator (NPMLE) \u02dc Fn of F maximizes the log-likelihood function F 7\u2192 Xn i=1{\u2206i log F(Ti) + (1 \u2212\u2206i) log(1 \u2212F(Ti))} (1) over all distribution functions F and it can be characterized as the left derivative of the greatest convex minorant of the cumulative sum diagram of the data; see page 41 of Groeneboom and Wellner (1992). Let G be the distribution function of T and assume that F and G are continuously di\ufb00erentiable at t0 with derivatives f(t0) > 0 and g(t0) > 0, respectively. Under these assumptions, it is well known that \u03b3n := n1/3{ \u02dc Fn(t0) \u2212F(t0)} \u2192\u03baC, (2) in distribution, where \u03ba = [4F(t0){1 \u2212F(t0)}f(t0)/g(t0)]1/3, C = arg minh\u2208R{Z(h) + h2}, and Z is a standard two-sided Brownian motion process, originating from 0. In the general mixed case interval censoring model, the limiting distribution of the NPMLE is unknown. In fact, in the literature only very limited theoretical results are available on the NPMLE. Groeneboom and Wellner (1992) discussed the asymptotics of the behavior of the NPMLE in a particular version of the case 2 censoring model (K = 2); Wellner (1995) studied the consistency of the NPMLE where each subject gets exactly k examination times; van der Vaart and Wellner (2000a) proved the consistency of the NPMLE of the mixed case interval censoring in the Hellinger distance; see also Schick and Yu (2000) and Song (2004). We are interested in constructing a pointwise con\ufb01dence interval for F at t0 in the general mixed case censoring model. In the literature, very few results exist that address the construction of pointwise con\ufb01dence intervals (Song, 2004; Sen and Banerjee, 2007). Even in the current status model, where we know the limiting distribution of the NPMLE, to construct a con\ufb01dence interval for F(t0) we need to estimate the nuisance parameter \u03ba, which is indeed quite di\ufb03cult \u2013 it involves estimation of the derivative of F and that of the distribution of T. 
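Even with κ in hand, quantiles of C are needed to turn (2) into a confidence interval; they can be obtained from the computations in Groeneboom and Wellner (2001) or approximated by a crude Monte Carlo such as the sketch below, which discretizes a two-sided Brownian motion on a grid and records the argmin of Z(h) + h^2. The grid width, step size and function name are arbitrary choices of ours.

```python
import numpy as np

def chernoff_argmin_draws(B=2000, half_width=3.0, m=3000, rng=None):
    """Crude Monte Carlo draws of C = argmin_h { Z(h) + h^2 }, Z a standard two-sided
    Brownian motion with Z(0) = 0, restricting h to a grid of 2m + 1 points on
    [-half_width, half_width] (the minimizer has light tails, so a moderate window
    suffices for a sketch)."""
    rng = rng or np.random.default_rng()
    grid = np.linspace(-half_width, half_width, 2 * m + 1)
    step = half_width / m
    draws = np.empty(B)
    for b in range(B):
        # independent Brownian paths to the right and to the left of 0
        right = np.concatenate(([0.0], np.cumsum(rng.normal(0.0, np.sqrt(step), m))))
        left = np.concatenate(([0.0], np.cumsum(rng.normal(0.0, np.sqrt(step), m))))
        z = np.concatenate((left[::-1][:-1], right))     # Z over the whole grid
        draws[b] = grid[np.argmin(z + grid ** 2)]
    return draws

draws = chernoff_argmin_draws(B=2000, rng=np.random.default_rng(1))
print("approximate 0.975 quantile of C:", np.quantile(draws, 0.975))
```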
For the current status model, there exist a few methods that can be used for constructing con\ufb01dence intervals for F(t0): The m-out-of-n bootstrap method and subsampling are known to be consistent in this setting (Politis, Romano and Wolf, 1999; Lee and Pun, 2006). However, both methods require the choice of a block size. In practice, the choice of this tuning parameter is quite tricky and the con\ufb01dence intervals vary drastically with di\ufb00erent choices of the block size. The estimation of the nuisance parameter \u03ba can be avoided by using the likelihood-ratio test of Banerjee and Wellner (2001). 2 Recently, Groeneboom, Jongbloed and Witte (2010) proposed estimates of F(t0) based on smoothed likelihood function and smoothed NPMLE. However, the limiting distributions depend on the derivative of the density function. In this paper we consider bootstrap methods for constructing con\ufb01dence intervals for F(t0) and investigate the (in)-consistency and performance of two model-based bootstrap procedures that are based on the NPMLE of F, in the general framework of mixed case interval censoring. Bootstrap intervals avoid the problem of estimating nuisance parameters and are generally reliable in problems with \u221an convergence rates. See Bickel and Freedman (1981), Singh (1981), and Shao and Tu (1995) and references therein. In regression models, there are two main bootstrapping strategies: \u201cbootstrapping pairs\u201d and \u201cbootstrapping residuals\u201d (see e.g., page 113 of Efron and Tibshirani, 1993). Abrevaya and Huang (2005) considered \u201cbootstrapping pairs\u201d, i.e., boot- strapping from the empirical distribution function of the data, and showed that the procedure is inconsistent for the current status model and also other cube-root conver- gent estimators. In \u201cbootstrapping residuals\u201d one \ufb01xes (conditions on) the predictor values and generates the response according to the estimated regression model us- ing bootstrapped residuals. In a binary regression problem, as in the current status model, this corresponds to generating the responses as independent Bernoulli random variables with success probability obtained from the \ufb01tted regression model. In this paper we focus on the \u201cbootstrapping residuals\u201d procedure. In particular, for the mixed-case interval censoring model, conditional on an individual\u2019s observation times T1, . . . , TK, we generate bootstrap sample (\u2206\u2217 1, . . . , \u2206\u2217 K, 1 \u2212PK k=1 \u2206\u2217 k) following a multinomial distribution with n = 1 and pk = \u02c6 Fn(Tk) \u2212\u02c6 Fn(Tk\u22121), k = 1, . . . , K + 1, i.e., (\u2206\u2217 1, . . . , \u2206\u2217 K, 1 \u2212 XK k=1\u2206\u2217 k) \u223cMultinomial(1, { \u02c6 Fn(Tk) \u2212\u02c6 Fn(Tk\u22121)}K+1 i=1 ), where \u02c6 Fn is an estimator of F and \u02c6 Fn(T0) = 0, \u02c6 Fn(TK+1) = 1. We call this a model- based bootstrap scheme, as it uses the inherent features of the model. We study the behavior of the bootstrap method when \u02c6 Fn = \u02dc Fn, the NPMLE of F, and \u02c6 Fn is a smooth estimator of F. Speci\ufb01cally, in Section 2.1 we state a general bootstrap convergence result for the current status model which provides su\ufb03cient conditions for any bootstrap scheme to be consistent. In Section 2.2 we illustrate, both theoretically and through simulation, the inconsistency of the NPMLE bootstrap method. The failure of the NPMLE bootstrap is mostly due to the non-di\ufb00erentiability of \u02dc Fn. 
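As a concrete illustration of the model-based multinomial scheme just described, the sketch below generates one bootstrap response vector per subject from the observation times and a generic estimator of F supplied as a callable; the names and interface are ours, not the authors' implementation.

```python
import numpy as np

def bootstrap_mixed_case(obs_times, fn_hat, rng=None):
    """One model-based bootstrap draw for mixed case interval censoring.

    obs_times : list of 1-d arrays; obs_times[i] holds T_{i,1} < ... < T_{i,K_i}
    fn_hat    : callable estimator of F (e.g. the NPMLE or a smoothed version)
    Returns a list of 0/1 arrays (Delta*_{i,1}, ..., Delta*_{i,K_i}), one per subject.
    """
    rng = rng or np.random.default_rng()
    deltas = []
    for t in obs_times:
        # cell probabilities F(T_1), F(T_2) - F(T_1), ..., 1 - F(T_K)
        cdf = np.concatenate(([0.0], fn_hat(t), [1.0]))
        probs = np.clip(np.diff(cdf), 0.0, None)   # numerical safeguard; already valid if F is monotone
        probs /= probs.sum()
        cell = rng.multinomial(1, probs)           # one multinomial(1, p_1, ..., p_{K+1}) draw
        deltas.append(cell[:-1])                   # drop the final "beyond T_K" cell
    return deltas
```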
On the other hand, the smoothed NPMLE is differentiable and successfully mimics the local behavior of the true distribution function F at the location of interest, i.e., t0. As a result, the method yields asymptotically valid confidence intervals; see Section 2.3, where we prove the consistency of the smoothed bootstrap procedure, again in the current status model. The smoothed bootstrap procedure requires the choice of a smoothing bandwidth, and we discuss this problem of bandwidth selection in Section 2.4.

Next, in Section 3, we study the case 2 interval censoring model, i.e., when K ≡ 2. Even in this case, the distribution of the NPMLE is not completely known, although conjectures and partial results exist. Groeneboom (1991) studied a one-step estimator F_n^(1), obtained at the first step of the iterative convex minorant algorithm (see Groeneboom and Wellner, 1992), and conjectured that F_n^(1) is asymptotically equivalent to the NPMLE. This conjecture is called the working hypothesis in this paper and is still unproved. We assume that this conjecture holds and focus on bootstrapping the distribution of the one-step estimator. We show the inconsistency of bootstrapping from the NPMLE and the consistency of the smoothed bootstrap method.

In general mixed case interval censoring, Sen and Banerjee (2007) introduced a pseudolikelihood method for estimating F(t0). However, the pseudolikelihood does not use the full information in the data and may not be as efficient as the NPMLE (Song, 2004). This is illustrated by a simulation study in Section 4, where we compare the finite sample performance of different bootstrap methods under the general setup of the mixed case interval censoring model. These comparisons illustrate the superior performance of the smoothed bootstrap procedure. Our results also shed light on the behavior of bootstrap methods in similar non-standard convergence problems, such as the monotone regression estimator (Brunk, 1970), Rousseeuw's least median of squares estimator (Rousseeuw, 1984), and the estimator of the shorth (Andrews et al., 1972; Shorack and Wellner, 1986); see also Groeneboom and Wellner (2001) for statistical problems in which the distribution of C arises." }, { "url": "http://arxiv.org/abs/1311.6849v2", "title": "Testing against a linear regression model using ideas from shape-restricted estimation", "abstract": "A formal likelihood ratio hypothesis test for the validity of a parametric\nregression function is proposed, using a large-dimensional, nonparametric\ndouble cone alternative. For example, the test against a constant function uses\nthe alternative of increasing or decreasing regression functions, and the test\nagainst a linear function uses the convex or concave alternative. The proposed\ntest is exact, unbiased and the critical value is easily computed. The power of\nthe test increases to one as the sample size increases, under very mild\nassumptions -- even when the alternative is mis-specified. That is, the power\nof the test converges to one for any true regression function that deviates (in\na non-degenerate way) from the parametric null hypothesis. We also formulate\ntests for the linear versus partial linear model, and consider the special case\nof the additive model. Simulations show that our procedure behaves well consistently when compared with other methods.
Although the alternative fit is\nnon-parametric, no tuning parameters are involved.", "authors": "Bodhisattva Sen, Mary Meyer", "published": "2013-11-26", "updated": "2014-06-27", "primary_cat": "stat.ME", "cats": [ "stat.ME" ], "main_content": "A set C ⊂ Rn is a cone if for all θ ∈ C and λ > 0, we have λθ ∈ C. If C is a convex cone then α1θ1 + α2θ2 ∈ C, for any positive scalars α1, α2 and any θ1, θ2 ∈ C. We define the projection θI of θ0 ∈ Rn onto I as

θI := Π(θ0|I) ≡ argmin_{θ∈I} ‖θ0 − θ‖²,

where ‖·‖ is the usual Euclidean norm in Rn. The projection is uniquely defined by the two conditions

⟨θI, θ0 − θI⟩ = 0,  ⟨θ0 − θI, ξ⟩ ≤ 0, for all ξ ∈ I,   (7)

where, for a = (a1, . . . , an) and b = (b1, . . . , bn), ⟨a, b⟩ = ∑_{i=1}^n ai bi; see Theorem 2.2.1 of Bertsekas (2003). From these it is clear that

⟨θ0 − θI, ξ⟩ = 0, for all ξ ∈ S.   (8)

Similarly, the projection of θ0 onto D is θD := Π(θ0|D) ≡ argmin_{θ∈D} ‖θ0 − θ‖².

Let ΩI := I ∩ S⊥ and ΩD := D ∩ S⊥, where S⊥ denotes the orthogonal complement of S. Then ΩI and ΩD are closed convex cones, and the projection of y ∈ Rn onto I is the sum of the projections onto ΩI and S. The cone ΩI can alternatively be specified by a set of generators, that is, a set of vectors δ1, . . . , δM in the cone such that

ΩI = { θ ∈ Rn : θ = ∑_{j=1}^M bj δj, for bj ≥ 0, j = 1, . . . , M }.

If the m × n matrix A in (4) has full row rank, then M = m and the generators of ΩI are the columns of A⊤(AA⊤)^{−1}. Otherwise, Proposition 1 of Meyer (1999) can be used to find the generators. The generators of ΩD = D ∩ S⊥ are −δ1, . . . , −δM. Define the cone polar to I as

Io = { ρ ∈ Rn : ⟨ρ, θ⟩ ≤ 0, for all θ ∈ I }.

Then Io is a convex cone orthogonal to S. We can similarly define Do, also orthogonal to S. For a proof of the following see Meyer (1999).

Lemma 2.1. The projection of y ∈ Rn onto Io is the residual of the projection of y onto I and vice versa; also, the projection of y onto Do is the residual of the projection of y onto D and vice versa.

We will assume the following two conditions for the rest of the paper:

(A1) S is the largest linear subspace contained in I;
(A2) ΩI ⊂ Do, or equivalently, ΩD ⊂ Io.

Note that if (A1) holds, ΩI does not contain a linear space (of dimension one or larger), and the intersection of ΩI and ΩD is the origin. Assumption (A2) is needed for the unbiasedness of our testing procedure. For Example 1 of the Introduction, (A2) says that the projection of an increasing vector θ onto the decreasing cone D is a constant vector, and vice versa.
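Before the formal verification given next, here is a quick numerical check of this claim (a sketch, assuming scikit-learn is available; not from the paper): the projection onto the decreasing cone is an antitonic least-squares fit, and applying it to a centered increasing vector returns a constant vector.

```python
import numpy as np
from sklearn.isotonic import IsotonicRegression

n = 10
theta = np.linspace(-1.0, 1.0, n)    # increasing and centered, so theta lies in Omega_I
x = np.arange(n, dtype=float)
# projection onto the decreasing cone D = antitonic (decreasing) least-squares fit
proj_D = IsotonicRegression(increasing=False).fit_transform(x, theta)
print(np.allclose(proj_D, 0.0))      # True: the projection is constant (the origin here)
```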
To see that this holds, consider θ ∈ ΩI, i.e., ∑_{i=1}^n θi = 0 and θ1 ≤ ··· ≤ θn, and ρ ∈ D, i.e., ρ1 ≥ ··· ≥ ρn. Then, defining the partial sums Θk = ∑_{i=1}^k θi, we have

∑_{i=1}^n θi ρi = θ1 ρ1 + ∑_{i=2}^n ρi (Θi − Θi−1) = ∑_{i=1}^{n−1} (ρi − ρi+1) Θi

(using Θn = 0), which is nonpositive because Θi ≤ 0 for i = 1, . . . , n − 1. Then by (7) the projection of θ onto D is the origin.

2.1 Testing

We start with a brief review of testing H0 : θ0 ∈ S versus H1 : θ0 ∈ I\S, under the normal errors assumption. The log-likelihood function (up to a constant) is

ℓ(θ, σ²) = −‖Y − θ‖²/(2σ²) − (n/2) log σ².

After a bit of simplification, we get the likelihood ratio statistic

ΛI = 2 { max_{θ∈I, σ²>0} ℓ(θ, σ²) − max_{θ∈S, σ²>0} ℓ(θ, σ²) } = n log( ‖Y − θ̂S‖² / ‖Y − θ̂I‖² ),

where θ̂I = Π(Y|I) and θ̂S = Π(Y|S). An equivalent test is to reject H0 if

TI(Y) := ‖θ̂S − θ̂I‖² / ‖Y − θ̂S‖² = (SSE0 − SSEI)/SSE0   (9)

is large, where SSE0 := ‖Y − θ̂S‖² is the squared length of the residual of Y onto S, and SSEI := ‖Y − θ̂I‖² is the squared length of the residual of Y onto I. Further, SSE0 − SSEI = ‖Π(Y|ΩI)‖², by the orthogonality of ΩI and S.

Since the null hypothesis is composite, the dependence of the test statistic on the parameters under the hypotheses must be assessed. The following result shows that the distribution of TI is invariant to translations in S as well as to scaling.

Lemma 2.2. For any s ∈ S and ϵ ∈ Rn,

Π(ϵ + s|I) = Π(ϵ|I) + s, and Π(ϵ + s|S) = Π(ϵ|S) + s.   (10)

Next, consider model (1) and suppose that θ0 ∈ S. Then the distribution of TI(Y) is the same as that of TI(ϵ).

Proof. To establish the first of the assertions in (10) it suffices to show that Π(ϵ|I) + s satisfies the necessary and sufficient conditions (7) for Π(ϵ + s|I). Clearly, Π(ϵ|I) + s ∈ I and

⟨ϵ + s − (Π(ϵ|I) + s), ξ⟩ = ⟨ϵ − Π(ϵ|I), ξ⟩ ≤ 0, for all ξ ∈ I.

Also, ⟨Π(ϵ|I) + s, ϵ + s − (Π(ϵ|I) + s)⟩ = ⟨Π(ϵ|I) + s, ϵ − Π(ϵ|I)⟩ = 0, as ⟨Π(ϵ|I), ϵ − Π(ϵ|I)⟩ = 0 and ⟨s, ϵ − Π(ϵ|I)⟩ = 0 (since ±s ∈ S). The second assertion in (10) can be established similarly, and in fact more easily. Also, it is easy to see that, for any ϵ ∈ Rn, Π(σϵ|I) = σΠ(ϵ|I) and Π(σϵ|S) = σΠ(ϵ|S). Now, using (9),

TI(Y) = ‖Π(σϵ|S) − Π(σϵ|I)‖² / ‖σϵ − Π(σϵ|S)‖² = ‖Π(ϵ|S) − Π(ϵ|I)‖² / ‖ϵ − Π(ϵ|S)‖² = TI(ϵ).

This completes the proof.
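Lemma 2.2 is what makes the test practical: under H0 the distribution of TI does not depend on θ0 ∈ S or on σ, so its null quantiles can be simulated once the error distribution G is fixed. The following Python sketch (not the authors' code) does this for Example 1, with S the constant vectors, I the increasing cone, Π(·|I) computed by isotonic regression (scikit-learn is assumed), and G taken to be standard Gaussian purely for illustration.

```python
import numpy as np
from sklearn.isotonic import IsotonicRegression

def T_I(y, x):
    """T_I(Y) = (SSE0 - SSE_I)/SSE0 from (9) for Example 1
    (S = constant vectors, I = monotone increasing cone)."""
    theta_S = np.full_like(y, y.mean())
    theta_I = IsotonicRegression(increasing=True).fit_transform(x, y)
    sse0 = np.sum((y - theta_S) ** 2)
    sse_I = np.sum((y - theta_I) ** 2)
    return (sse0 - sse_I) / sse0

def null_quantile(x, alpha=0.05, B=5000, seed=0):
    """Monte Carlo (1 - alpha) null quantile of T_I(eps) with eps ~ N(0, 1),
    justified by the invariance in Lemma 2.2 when G is known."""
    rng = np.random.default_rng(seed)
    stats = [T_I(rng.standard_normal(len(x)), x) for _ in range(B)]
    return np.quantile(stats, 1.0 - alpha)

# usage sketch: reject H0 : theta_0 in S when T_I(Y) exceeds the simulated quantile
x = np.linspace(0.0, 1.0, 100)
c_alpha = null_quantile(x)
```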
7 3 Our procedure To test the hypothesis H0 : \u03b80 \u2208S versus H1 : \u03b80 \u2208I \u222aD\\S, we project Y separately on I and D to obtain \u02c6 \u03b8I := \u03a0(Y|I) and \u02c6 \u03b8D := \u03a0(Y|D), respectively. Let TI(Y) be de\ufb01ned as in (9) and TD(Y) de\ufb01ned similarly, with \u02c6 \u03b8D instead of \u02c6 \u03b8I. We de\ufb01ne our test statistic (which is equivalent to the likelihood ratio statistic under normal errors) as T(Y) := max {TI(Y), TD(Y)} = max {\u2225\u03a0(Y|S) \u2212\u03a0(Y|I)\u22252, \u2225\u03a0(Y|S) \u2212\u03a0(Y|D)\u22252} \u2225Y \u2212\u03a0(Y|S)\u22252 . (11) We reject H0 when T is large. Lemma 3.1. Consider model (1). Suppose that \u03b80 \u2208S. Then the distribution of T(Y) is the same as that of T(\u03f5), i.e., T(Y) = max {\u2225\u03a0(\u03f5|S) \u2212\u03a0(\u03f5|I)\u22252, \u2225\u03a0(\u03f5|S) \u2212\u03a0(\u03f5|D)\u22252} \u2225\u03f5 \u2212\u03a0(\u03f5|S)\u22252 = T(\u03f5). (12) Proof. The distribution of T is also invariant to translations in S and scaling, so if \u03b80 \u2208S, using the same technique as in Lemma 2.2, we have the desired result. Suppose that T(\u03f5) has distribution function Hn. Then, we reject H0 if T(Y) > c\u03b1 := H\u22121 n (1 \u2212\u03b1), (13) where \u03b1 \u2208[0, 1] is the desired level of the test. The distribution Hn can be approximated, up to any desired precision, by Monte Carlo simulations using (12) if G is assumed known; hence, the test procedure described in (13) has exact level \u03b1, under H0, for any \u03b1 \u2208[0, 1]. If G is completely unknown, we can approximate G by Gn, where Gn is the empirical distribution of the standardized residuals, obtained under H0. Here, by the residual vector we mean \u02dc r := Y\u2212\u03a0(Y|S) and by the standardized residual we mean \u02c6 r := [\u02dc r\u2212 (e\u22a4\u02dc r/n)e]/ p Var(\u02c6 r), where e := (1, . . . , 1)\u22a4\u2208Rn. Thus, Hn can be approximated by using Monte Carlo simulations where, instead of G, we draw i.i.d. (conditional on the given data) samples from Gn. In fact, the following theorem, proved in Section A.4, shows that if G is assumed completely unknown, we can bootstrap from any consistent estimator \u02c6 Gn of G and still consistently estimate the critical value of our test statistic. Note that the conditions required for Theorem 3.2 to hold are indeed very minimal and will be satis\ufb01ed for any reasonable bootstrap scheme, and in particular, when bootstrapping from the empirical distribution of the standardized residuals. 8 Theorem 3.2. Suppose that \u02c6 Gn is a sequence of distribution functions such that \u02c6 Gn \u2192G a.s. and R x2d \u02c6 Gn(x) \u2192 R x2dG(x) a.s. Also, suppose that the sequence E(n\u2225\u03f5 \u2212\u03a0(\u03f5|S)\u2225\u22122) is bounded. Let \u02c6 \u03f5 = (\u02c6 \u03f51, . . . , \u02c6 \u03f5n) where \u02c6 \u03f51, . . . , \u02c6 \u03f5n are (conditionally) i.i.d. \u02c6 Gn and let Dn denote the distribution function of T(\u02c6 \u03f5), conditional on the data. De\ufb01ne the Levy distance between the distributions Hn and Dn as dL(Hn, Dn) := inf{\u03b7 > 0 : Dn(x \u2212\u03b7) \u2212\u03b7 \u2264Hn(x) \u2264Dn(x + \u03b7) + \u03b7, for all x \u2208R}. Then, dL(Hn, Dn) \u21920 a.s. (14) It can also be shown that for \u03b80 \u2208I \u222aD the test is unbiased, that is, the power is at least as large as the test size. The proof of the following result can be found in Section A.1. Theorem 3.3. 
Let Y0 := s + \u03c3\u03f5, for s \u2208S, and the components of \u03f5 are i.i.d. G. Suppose further that G is a symmetric (around 0) distribution. Choose any \u03b8 \u03b8 \u03b8 \u2208\u2126I and let Y1 := Y0 + \u03b8 \u03b8 \u03b8. Then for any a > 0, P (T(Y1) > a) \u2265P (T(Y0) > a) . (15) It is completely analogous to show that the theorem holds for \u03b8 \u03b8 \u03b8 \u2208\u2126D. The unbiasedness of the test now follows from the fact that if \u03b8 \u03b8 \u03b80 \u2208I\u222aD\\S, then \u03b8 \u03b8 \u03b80 = s+\u03b8 \u03b8 \u03b8 for s \u2208S and either \u03b8 \u03b8 \u03b8 \u2208\u2126I or \u03b8 \u03b8 \u03b8 \u2208\u2126D. In both cases, for Y = \u03b8 \u03b8 \u03b80+\u03c3\u03f5 and a := H\u22121 n (1\u2212\u03b1) > 0, for some \u03b1 \u2208(0, 1), by (25) we have P(T(Y) > a) \u22651 \u2212(1 \u2212\u03b1) = \u03b1. 3.1 Asymptotic power of the test We no longer assume that \u03b8 \u03b8 \u03b80 is in the double cone unless explicitly mentioned otherwise. We show that, under mild assumptions, the power of the test goes to one, as the sample size increases, if H0 is not true. For convenience of notation, we suppress the dependence on n and continue using the notation introduced in the previous sections. For example we still use \u03b8 \u03b8 \u03b80, \u02c6 \u03b8 \u03b8 \u03b8I, \u02c6 \u03b8 \u03b8 \u03b8D, \u02c6 \u03b8 \u03b8 \u03b8S, etc. although as n changes these vectors obviously change. A intuitive way of visualizing \u03b8 \u03b8 \u03b80, as n changes, is to consider \u03b8 \u03b8 \u03b80 as the evaluation of a \ufb01xed function \u03c60 at n points as in (3). We assume that for any \u03b8 \u03b8 \u03b80 \u2208I the projection \u02c6 \u03b8 \u03b8 \u03b8I = \u03a0(Y|I) is consistent in estimating \u03b8 \u03b8 \u03b80, under the squared error loss, i.e., for Y as in model (1) and \u03b8 \u03b8 \u03b80 \u2208I, \u2225\u02c6 \u03b8 \u03b8 \u03b8I \u2212\u03b8 \u03b8 \u03b80\u22252 = op(n). (16) The following lemma shows that even if \u03b8 \u03b8 \u03b80 does not lie in I, (16) implies that the projection of the data onto I is close to the projection of \u03b8 \u03b8 \u03b80 onto I; see Section A.4 for the proof. 9 Lemma 3.4. Consider model (1) where now \u03b8 \u03b8 \u03b80 is any point in Rn. Let \u03b8 \u03b8 \u03b8I be the projection of \u03b8 \u03b8 \u03b80 onto I. If (16) holds, then \u2225\u02c6 \u03b8 \u03b8 \u03b8I \u2212\u03b8 \u03b8 \u03b8I\u22252 = op(n). Similarly, let \u03b8 \u03b8 \u03b8D and \u02c6 \u03b8 \u03b8 \u03b8D be the projections of \u03b8 \u03b8 \u03b80 and Y onto D, respectively. Then, if (16) holds, \u2225\u02c6 \u03b8 \u03b8 \u03b8D \u2212\u03b8 \u03b8 \u03b8D\u22252 = op(n). Theorem 3.5. Consider testing H0 : \u03b8 \u03b8 \u03b80 \u2208S using the test statistic T de\ufb01ned in (11) where Y follows model (1). Then, under H0, T = op(1), if (16) holds. Proof. Under H0, \u03b8 \u03b8 \u03b8I = \u03b8 \u03b8 \u03b8D = \u03b8 \u03b8 \u03b80. The denominator of T is SSE0. As S is a \ufb01nite dimensional vector space of \ufb01xed dimension, SSE0/n \u2192p \u03c32. The numerator of TI can be handled as follows. 
Observe that, \u2225\u02c6 \u03b8 \u03b8 \u03b8I \u2212\u02c6 \u03b8 \u03b8 \u03b8S\u22252 = \u2225\u02c6 \u03b8 \u03b8 \u03b8I \u2212\u03b8 \u03b8 \u03b8I\u22252 + \u2225\u03b8 \u03b8 \u03b8I \u2212\u02c6 \u03b8 \u03b8 \u03b8S\u22252 + 2\u27e8\u02c6 \u03b8 \u03b8 \u03b8I \u2212\u03b8 \u03b8 \u03b8I,\u03b8 \u03b8 \u03b8I \u2212\u02c6 \u03b8 \u03b8 \u03b8S\u27e9 (17) \u2264 \u2225\u02c6 \u03b8 \u03b8 \u03b8I \u2212\u03b8 \u03b8 \u03b8I\u22252 + \u2225\u03b8 \u03b8 \u03b80 \u2212\u02c6 \u03b8 \u03b8 \u03b8S\u22252 + 2\u2225\u02c6 \u03b8 \u03b8 \u03b8I \u2212\u03b8 \u03b8 \u03b8I\u2225\u2225\u03b8 \u03b8 \u03b80 \u2212\u02c6 \u03b8 \u03b8 \u03b8S\u2225 = op(n) + Op(1) + op(n), where we have used Lemma 3.4 and the fact \u2225\u03b8 \u03b8 \u03b80 \u2212\u02c6 \u03b8 \u03b8 \u03b8S\u22252 = Op(1). Therefore, \u2225\u02c6 \u03b8 \u03b8 \u03b8I \u2212 \u02c6 \u03b8 \u03b8 \u03b8S\u22252/n = op(1) and thus, TI = op(1). An exact same analysis can be done for TD to obtain the desired result. Next we consider the case where the null hypothesis does not hold. If \u03b8 \u03b8 \u03b8S is the projection of \u03b8 \u03b8 \u03b80 on the linear subspace S, we will assume that the sequence {\u03b8 \u03b8 \u03b80}, as n grows, is such that lim n\u2192\u221e max{\u2225\u03b8 \u03b8 \u03b8I \u2212\u03b8 \u03b8 \u03b8S\u2225, \u2225\u03b8 \u03b8 \u03b8D \u2212\u03b8 \u03b8 \u03b8S\u2225} n = c, (18) for some constant c > 0. Obviously, if \u03b8 \u03b8 \u03b80 \u2208S, then \u03b8 \u03b8 \u03b8S = \u03b8 \u03b8 \u03b8I = \u03b8 \u03b8 \u03b8D = \u03b8 \u03b8 \u03b80 and (18) does not hold. If \u03b8 \u03b8 \u03b80 \u2208I \u222aD\\S, then (18) holds if \u2225\u03b8 \u03b8 \u03b80 \u2212\u03b8 \u03b8 \u03b8S\u2225/n \u2192c, because either \u03b8 \u03b8 \u03b80 = \u03b8 \u03b8 \u03b8I which implies \u03b8 \u03b8 \u03b8D = \u03b8 \u03b8 \u03b8S (by (A2)), or \u03b8 \u03b8 \u03b80 = \u03b8 \u03b8 \u03b8D which in turn implies \u03b8 \u03b8 \u03b8I = \u03b8 \u03b8 \u03b8S. Observe that (18) is essentially the population version of the numerator of our test statistic (see (11)), where we replace Y by \u03b8 \u03b8 \u03b80. The following result, proved in Section A.4, shows that if we have a twice-di\ufb00erentiable function that is not a\ufb03ne, then then (18) must hold for some c > 0. Theorem 3.6. Suppose that \u03c60 : [0, 1]d \u2192R, d \u22651, is a twice-continuously di\ufb00erentiable function. Suppose that {Xi}\u221e i=1 be a sequence of i.i.d. random variables such that Xi \u223c\u00b5, a continuous distribution on [0, 1]d. Let \u03b8 \u03b8 \u03b80 := (\u03c60(X1), . . . , \u03c60(Xn))\u22a4 \u2208Rn. Let I be de\ufb01ned as in (6), i.e., I is the convex cone of evaluations of all convex functions at the data points. Let D := \u2212I and let S be the set of evaluations of all a\ufb03ne functions at the data points. If \u03c60 is not a\ufb03ne a.e. \u00b5, then (18) must hold for some c > 0. 10 Intuitively, we require that \u03c60 is di\ufb00erent from the null hypothesis class of functions in a non-degenerate way. For example, if a function is constant except at a \ufb01nite number of points, then (18) does not hold. Some further motivation for condition (18) is given in Section A.2. Theorem 3.7. Consider testing H0 : \u03b8 \u03b8 \u03b80 \u2208S using the test statistic T de\ufb01ned in (11) where Y follows model (1). If (16) and (18) hold then T \u2192p \u03ba, for some \u03ba > 0. Proof. The denominator of T is SSE0. 
As S is a \ufb01nite dimensional vector space of \ufb01xed dimension, SSE0/n \u2192p \u03b7, for some \u03b7 > 0. The numerator of TI can be handled as follows. Observe that \u2225\u03b8 \u03b8 \u03b8I \u2212\u02c6 \u03b8 \u03b8 \u03b8S\u22252 = \u2225\u03b8 \u03b8 \u03b8I \u2212\u03b8 \u03b8 \u03b8S\u22252 + \u2225\u03b8 \u03b8 \u03b8S \u2212\u02c6 \u03b8 \u03b8 \u03b8S\u22252 + 2\u27e8\u03b8 \u03b8 \u03b8I \u2212\u03b8 \u03b8 \u03b8S,\u03b8 \u03b8 \u03b8S \u2212\u02c6 \u03b8 \u03b8 \u03b8S\u27e9 = \u2225\u03b8 \u03b8 \u03b8I \u2212\u03b8 \u03b8 \u03b8S\u22252 + Op(1) + op(n). Therefore, using (17) and the previous display, we get \u2225\u02c6 \u03b8 \u03b8 \u03b8I \u2212\u02c6 \u03b8 \u03b8 \u03b8S\u22252 = op(n) + \u2225\u03b8 \u03b8 \u03b8I \u2212\u02c6 \u03b8 \u03b8 \u03b8S\u22252 = \u2225\u03b8 \u03b8 \u03b8I \u2212\u03b8 \u03b8 \u03b8S\u22252 + op(n). Using a similar analysis for TD gives the desired result. Corollary 3.8. Fix 0 < \u03b1 < 1 and suppose that (16) and (18) hold. Then, the power of the test in (13) converges to 1, i.e., P(T(Y) > c\u03b1) \u21921, as n \u2192\u221e, where Y follows model (1) and \u03b1 \u2208(0, 1). Proof. The result immediately follows from Theorems 3.5 and 3.7 since c\u03b1 = op(1) and T(Y) \u2192p \u03ba > 0, under (16) and (18). 4 Examples In this section we come back to the examples discussed in the Introduction. We assume model (3) and that there is a class of functions F that is approximated by points in the cone I. The double cone is thus growing in dimension with the sample size n. Then (16) reduces to assuming that the cone is su\ufb03ciently large dimensional so that if \u03c60 \u2208F, the projection of Y onto the cone is a consistent estimator of \u03c60, i.e., (16) holds with \u03b80 = (\u03c60(x1), . . . , \u03c60(xn))\u22a4. Proofs of the results in this section can be found in Section A.4. 11 4.1 Example 1 Consider testing against a constant regression function \u03c60 in (3). The following theorem, proved in Section A.4, is similar in spirit to Theorem 3.5 but gives the precise rate at which the test statistic T decreases to 0, under H0. Theorem 4.1. Consider data {(xi, Yi)}n i=1 from model (3) and suppose that \u03c60 \u2261c0, for some unknown c0 \u2208R. Then T = Op(log n/n) where T is de\ufb01ned in (11). The following result, also proved in Section A.4, shows that for functions of bounded variation which are non-constant, the power of our proposed test converges to 1, as n grows. Theorem 4.2. Consider data {(xi, Yi)}n i=1 from model (3) where \u03c60 : [0, 1] \u2192R is assumed to be of bounded variation. Also assume that xi \u2208[0, 1], for all i = 1, . . . , n. De\ufb01ne the design distribution function as Fn(s) = 1 n n X i=1 I(xi \u2264s), (19) for s \u2208[0, 1], where I stands for the indicator function. Suppose that there is a continuous strictly increasing distribution function F such that sup s\u2208[0,1] |Fn(s) \u2212F(s)| \u21920 as n \u2192\u221e. (20) Also, suppose that \u03c60 is not a constant function a.e. F. Then T\u2192p c, as n \u2192\u221e, for some constant c > 0. Corollary 4.3. Consider the same setup as in Theorem 4.2. Then, for \u03b1 \u2208(0, 1), the power of the test in (13) converges to 1, as n \u2192\u221e. Proof. The result immediately follows from Theorems 4.1 and 4.2 since c\u03b1 = op(1) and T \u2192p c > 0. 4.2 Example 2 In this subsection we consider testing H0 : \u03c60 is a\ufb03ne, i.e., \u03c60(x) = a + bx, x \u2208[0, 1], where a, b \u2208R are unknown. 
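For the convex/concave double cone used in this test, the cone projections are quadratic programs. The sketch below (Python with cvxpy; an illustration under the stated assumptions, not the authors' R/MATLAB code) projects onto the convex and concave cones for sorted, distinct design points via slope-monotonicity constraints and evaluates the statistic T of (11) against the affine null; the critical value can then be simulated exactly as in the earlier sketch, by Lemma 3.1.

```python
import numpy as np
import cvxpy as cp

def project_shape(x, y, concave=False):
    """Least-squares projection of y onto the cone of convex (or concave)
    sequences over the sorted, distinct design points x."""
    theta = cp.Variable(len(y))
    slopes = cp.multiply(theta[1:] - theta[:-1], 1.0 / np.diff(x))
    cons = [slopes[1:] <= slopes[:-1]] if concave else [slopes[1:] >= slopes[:-1]]
    cp.Problem(cp.Minimize(cp.sum_squares(y - theta)), cons).solve()
    return theta.value

def double_cone_stat(x, y):
    """T(Y) of (11) for the convex/concave double cone against the affine null."""
    X = np.column_stack([np.ones_like(x), x])
    theta_S = X @ np.linalg.lstsq(X, y, rcond=None)[0]
    sse0 = np.sum((y - theta_S) ** 2)
    num = max(np.sum((project_shape(x, y) - theta_S) ** 2),
              np.sum((project_shape(x, y, concave=True) - theta_S) ** 2))
    return num / sse0
```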
Recall the setup of Example 2 in the Introduction. Observe that S \u2282I as the linear constraints describing I, as stated in (5), are clearly satis\ufb01ed by any \u03b8 \u2208S. To see that S is the largest linear subspace contained in I, i.e., (A1) holds, we note that for \u03b8 \u03b8 \u03b8 \u2208I, \u2212\u03b8 \u03b8 \u03b8 \u2208I only if \u03b8 \u03b8 \u03b8 \u2208S. Assumption (A2) holds if the projection of a convex function, onto the concave cone, is an a\ufb03ne function, and vice-versa. To see this observe that the generators \u03b4 \u03b4 \u03b41, . . . ,\u03b4 \u03b4 \u03b4n\u22122 of \u2126I are pairwise 12 positively correlated, so the projection of any \u03b4 \u03b4 \u03b4j onto D is the origin by (7). Therefore the projection of any positive linear combination of the \u03b4 \u03b4 \u03b4j, i.e., any vector in \u2126I, onto D, is also the origin, and hence projections of vectors in I onto D are in S. Next we state two results on the limiting behavior of the test statistic T under the following condition: (C) Let x1, . . . , xn \u2208[0, 1]. Assume that there exists c1, c2 > 0 such that c1/n \u2264 xi \u2212xi\u22121 \u2264c2/n, for i = 2, . . . , n. Theorem 4.4. Consider data {(xi, Yi)}n i=1 from model (3) and suppose that \u03c60 : [0, 1] \u2192R is a\ufb03ne, i.e., \u03c60(x) = a + bx, for some unknown a, b \u2208R. Also suppose that the errors \u03f5i, i = 1, . . . , n, are sub-gaussian. If condition (C) holds then T = Op n\u22121 \u0012 log n 2c1 \u00135/4! , where T is de\ufb01ned in (11). Remark 4.5. The proof of the above result is very similar to that of Theorem 4.1; we now use the fact \u2225\u03b80 \u2212\u02c6 \u03b8I\u22252 = Op \u0010 (log n 2c1)5/4\u0011 (see Remark 2.2 of Guntuboyina and Sen (2013)). The following result shows that for any twice-di\ufb00erentiable function \u03c60 on [0, 1] which is not a\ufb03ne, the power of our test converges to 1, as n grows. Theorem 4.6. Consider data {(xi, Yi)}n i=1 from model (3) where \u03c60 : [0, 1] \u2192R is assumed to be twice-di\ufb00erentiable. Assume that condition (C) holds and suppose that the errors \u03f5i, i = 1, . . . , n, are sub-gaussian. De\ufb01ne the design distribution function Fn as in (19) and suppose that there is a continuous strictly increasing distribution function F on [0, 1] such that (20) holds. Also, suppose that \u03c60 is not an a\ufb03ne function a.e. F. Then T\u2192p c, for some constant c > 0. Remark 4.7. The proof of the above result is very similar to that of Theorem 4.2; we now use the fact that a twice-di\ufb00erentiable function on [0, 1] can be expressed as the di\ufb00erence of two convex functions. Corollary 4.8. Consider the same setup as in Theorem 4.6. Then, for \u03b1 \u2208(0, 1), the power of the test based on T converges to 1, as n \u2192\u221e. 4.3 Example 3 Consider model (3) where now X := {x1, . . . , xn} \u2282Rd, for d \u22652, is a set of n distinct points and \u03c60 is de\ufb01ned on a closed convex set X \u2282Rd. In this subsection we address the problem of testing H0 : \u03c60 is a\ufb03ne. Recall the notation from Example 13 3 in the Introduction. As the convex cone I under consideration cannot be easily represented as (4), we \ufb01rst discuss the computation of \u03a0(Y |I) and \u03a0(Y |D). We can compute \u03a0(Y |I) by solving the following (quadratic) optimization problem: minimize\u03be1,...,\u03ben;\u03b8 \u2225Y \u2212\u03b8\u22252 subject to \u03b8j + \u27e8\u2206ij, \u03bej\u27e9\u2264\u03b8i; i = 1, . . . , n; j = 1, . . . 
, n, (21) where \u2206ij := xi \u2212xj \u2208Rd, and \u03bei \u2208Rd and \u03b8 = (\u03b81, . . . , \u03b8n)\u22a4\u2208Rn; see Seijo and Sen (2011), Lim and Glynn (2012), Kuosmanen (2008). Note that the solution to the above problem is unique in \u03b8 due to the strong convexity of the objective in \u03b8. The computation, characterization and consistency of \u03a0(Y |I) has been established in Seijo and Sen (2011); also see Lim and Glynn (2012). We use the cvx package in MATLAB to compute \u03a0(Y |I). The projection on D can be obtained by solving (21) where we now replace the \u201c\u2264\u201d in the constraints by \u201c\u2265\u201d. Although we expect Theorem 4.6 to generalize to this case, a complete proof of this fact is di\ufb03cult and beyond the scope of the paper. The main di\ufb03culty is in showing that (16) holds for d \u22652. The convex regression problem described in (21) su\ufb00ers from possible over-\ufb01tting at the boundary of Conv(X), where Conv(A) denotes the convex hull of the set A. The norms of the \ufb01tted \u02c6 \u03bej\u2019s near the boundary of Conv(X) can be very large and there can be a large proportion of data points at the boundary of Conv(X) for d \u22652. Note that Seijo and Sen (2011) shows that the estimated convex function converges to the true \u03c60 a.s. (when \u03c60 is convex) only on compacts in the interior of support of the convex hull of the design points, and does not consider the boundary points. As a remedy to this possible over-\ufb01tting we can consider solving the least squares problem over the class of convex functions that are uniformly Lipschitz. For a convex function \u03c8 : X \u2192R, let us denote by \u2202\u03c8(x) the sub-di\ufb00erential (set of all subgradients) set at x \u2208X, and by \u2225\u2202\u03c8(x)\u2225the supremum norm of vectors in \u2202\u03c8(x). For L > 0, consider the class \u02dc IL of convex functions with Lipschitz norm bounded by L, i.e., \u02dc IL := {\u03c8 : X \u2192R| \u03c8 is convex, \u2225\u2202\u03c8\u2225X \u2264L}. (22) The resulting optimization problem can now be expressed as (compare with (21)): minimize\u03be1,...,\u03ben;\u03b8 1 2\u2225Y \u2212\u03b8\u22252 subject to \u03b8j + \u27e8\u2206ij, \u03bej\u27e9\u2264\u03b8i; i = 1, . . . , n; j = 1, . . . , n, \u2225\u03bej\u2225\u2264L, j = 1, . . . , n. Let \u02c6 \u03b8 \u03b8 \u03b8I,L and \u02c6 \u03b8 \u03b8 \u03b8D,L denote the projections of Y onto IL and DL, the set of all evaluations (at the data points) of functions in \u02dc IL and \u02dc DL := \u2212\u02dc IL, respectively. We will use the modi\ufb01ed test-statistic TL := min{\u2225\u02c6 \u03b8S \u2212\u02c6 \u03b8D,L\u22252, \u2225\u02c6 \u03b8S \u2212\u02c6 \u03b8I,L\u22252} \u2225Y \u2212\u02c6 \u03b8S\u22252 . 14 Note that in de\ufb01ning TL all we have done is to use \u02c6 \u03b8 \u03b8 \u03b8I,L and \u02c6 \u03b8 \u03b8 \u03b8D,L instead of \u02c6 \u03b8 \u03b8 \u03b8I and \u02c6 \u03b8 \u03b8 \u03b8D as in our original test-statistic T. In the following we show that (16) holds for TL. The proof of the result can be found in Section A.4. Theorem 4.9. Consider data {(xi, Yi)}n i=1 from the regression model Yi = \u03c60(xi)+\u03f5i, for i = 1, 2, . . . , n, where we now assume that (i) X = [0, 1]d; (ii) \u03c60 \u2208\u02dc IL0 for some L0 > 0; (iii) xi \u2208X\u2019s are \ufb01xed constants; and (iv) \u03f5i\u2019s are i.i.d. sub-gaussian errors. Given data from such a model, and letting \u03b8 \u03b8 \u03b80 = (\u03c60(x1), \u03c60(x2), . . . 
, \u03c60(xn))\u22a4, we can show that for any L > L0, \u2225\u02c6 \u03b8 \u03b8 \u03b8I,L \u2212\u03b8 \u03b8 \u03b80\u22252 = op(n). (23) Remark 4.10. At a technical level, the above result holds because the class of all convex functions that are uniformly bounded and uniformly Lipschitz is totally bounded (under the L\u221emetric) whereas the class of all convex functions is not totally bounded. The next result shows that the power of the test based on TL indeed converges to 1, as n grows. The proof follows using a similar argument as in the proof of Theorem 4.2. Theorem 4.11. Consider the setup of Theorem 4.9. Moreover, if the design distribution (of the xi\u2019s) converges to a probability measure \u00b5 on X such that \u03c60 is not an a\ufb03ne function a.e. \u00b5, then TL\u2192p c, for some constant c > 0. Hence the power of the test based on TL converges to 1, as n \u2192\u221e, for any signi\ufb01cance level \u03b1 \u2208(0, 1). Remark 4.12. We conjecture that for the test based on T, as de\ufb01ned in (11), Theorems 4.9 and 4.11 will hold under appropriate conditions on the design distribution, but a complete proof is beyond the scope of this paper. Remark 4.13. The assumption that X = [0, 1]d can be extended to any compact subset of Rd. 5 Extensions 5.1 Weighted regression If \u03f5 is mean zero Gaussian with covariance matrix \u03c32\u03a3, for a known positive de\ufb01nite \u03a3, we can readily transform the problem to the i.i.d. case. If U \u22a4U is the Cholesky decomposition of \u03a3, pre-multiply the model equation Y = \u03b8 \u03b8 \u03b80 + \u03f5 through by U \u22a4to get \u02dc Y = \u02dc \u03b8 \u03b8 \u03b80 + \u02dc \u03f5, where \u02dc \u03f51, . . . , \u02dc \u03f5n are i.i.d. mean zero Gaussian errors with variance \u03c32. Then, minimize \u2225\u02dc Y \u2212\u02dc \u03b8 \u03b8 \u03b8\u22252 over \u02dc \u03b8 \u03b8 \u03b8 \u2208\u02dc I \u222a\u02dc D, where \u02dc I is de\ufb01ned by \u02dc A = A(U \u22a4)\u22121. A basis for the null space \u02dc S is obtained by premultiplying a basis for S by U \u22a4, and generators for \u02dc \u2126I = \u02dc I \u2229\u02dc S\u22a5are obtained by premultiplying the generators of \u2126I by U \u22a4. The test may be performed within the transformed model. If the distribution of the error is non-Gaussian we can still standardize Y as above, and perform our test after making appropriate modi\ufb01cations while simulating the null distribution. 15 This is useful for correlated errors with known correlation function, or when the observations are weighted. Furthermore, we can relax the assumption that the x values are distinct, for if the values of x are not distinct, the Yi values may be averaged over each distinct xi, and the test can be performed on the averages using the number of terms in the average as weights. 5.2 Linear versus partially linear models We now consider testing against a parametric regression function, with parametrically modeled covariates. The model is Yi = \u03c60(xi) + z\u22a4 i \u03b1 + \u03f5i, i = 1, . . . , n, (24) where \u03b1 is a k-dimensional parameter vector, zi is the k-dimensional covariate, and interest is in testing against a parametric form of \u03c60, such as constant or linear, or more generally \u03b8 \u03b8 \u03b80 = X\u03b2 where S = {\u03b8 \u03b8 \u03b8 \u2208Rn : \u03b8 \u03b8 \u03b8 = X\u03b2} is the largest linear space in a convex cone I, for which (A1) and (A2) hold. 
For example, X = e can be used to test for the signi\ufb01cance of the predictor, while controlling for the e\ufb00ects of covariates z. If X = [e|e1], the null hypothesis is that the expected value of the response is linear in x, for any \ufb01xed values of the covariates. Accounting for covariates is important for two reasons. First, if the covariates explain some of the variation in the response, then the power of the test is higher when the variation is modeled. Second, if the covariates are related to the predictor, confounding can occur if the covariates are missing from the model. The assumption that the xi values are distinct is no longer practical; without covariates we could assume distinct xi values without loss of generality, because we could average the Yi values at the distinct xi and perform a weighted regression. However, we could have duplicate xi values that have di\ufb00erent covariate values. Therefore, we need equality constraints as well as inequality constraints, to ensure that \u03b80i = \u03b80j when xi = xj. An appropriate cone can be de\ufb01ned as I = {\u03b8 \u03b8 \u03b8 \u2208Rn : A\u03b8 \u03b8 \u03b8 \u22650 and B\u03b8 \u03b8 \u03b8 = 0}, where S = {\u03b8 \u03b8 \u03b8 \u2208Rn : \u03b8 \u03b8 \u03b8 = X\u03b2} is the largest linear space in I. For identi\ufb01ability considerations, we assume that the columns of Z and X together form a linearly independent set, where Z is the n \u00d7 k design matrix whose rows are z1, . . . , zn. Let L = S +Z, where Z is the column space of Z. De\ufb01ne \u02dc \u03b4 \u03b4 \u03b4j = \u03b4j \u2212P L\u03b4j, for j = 1, . . . , M, where P L is the projection matrix for the linear space L and \u03b41, . . . , \u03b4M are the generators of \u2126I. We may now de\ufb01ne the cone \u02dc \u2126I as generated by \u02dc \u03b4 \u03b4 \u03b41, . . . , \u02dc \u03b4 \u03b4 \u03b4M. Similarly, the generators of \u02dc \u2126D are \u2212\u02dc \u03b4 \u03b4 \u03b41, . . . , \u2212\u02dc \u03b4 \u03b4 \u03b4M. De\ufb01ne \u03be = \u03b80 + Z\u03b1. Then H0 : \u03be \u2208L is the appropriate null hypothesis and the alternative hypothesis is \u03be \u2208\u02dc I \u222a\u02dc D\\L, where \u02dc I = L + \u02dc \u2126I and \u02dc D = L + \u02dc \u2126D. Then L is the largest linear space contained in \u02dc I or in \u02dc D, and it is straight-forward to verify that if \u2126D \u2286Io, we also have \u02dc \u2126D \u2286\u02dc Io. Therefore the conditions (A1) and (A2) hold for the model with covariates, whenever they hold for the cone without covariates. 16 5.3 Additive models We consider an extension of (24), where Yi = \u03c601(x1i) + \u00b7 \u00b7 \u00b7 + \u03c60d(xdi) + z\u22a4 i \u03b1 + \u03f5i, and the null hypothesis speci\ufb01es parametric formulations for each \u03c60j, j = 1, . . . , d. Let \u03b8ji = \u03c60j(xji), j = 1, . . . , d, and \u03b8 = \u03b81+\u00b7 \u00b7 \u00b7+\u03b8d+z\u22a4 i \u03b1 \u2208Rn. The null hypothesis is H0 : \u03b8j \u2208Sj, for j = 1, . . . , d, or H0 : \u03b8 \u2208S where S = S1 + \u00b7 \u00b7 \u00b7 + Sd + Z, and Z is the column space of the n\u00d7k matrix whose rows are z1, . . . , zn. De\ufb01ne closed convex cones I1, . . . , Id, where Sj is the largest linear space in Ij. Then I = I1 +\u00b7 \u00b7 \u00b7+Id +Z is a closed convex cone in Rn, containing the linear space S. The projection \u02c6 \u03b8 of the data Y onto the cone I exists and is unique, and Meyer (2013a) gave necessary and su\ufb03cient conditions for identi\ufb01ability of the components \u02c6 \u03b81, . . . 
, \u02c6 \u03b8d, and \u03b1. When the identi\ufb01ability conditions hold, then S is the largest linear space in I. De\ufb01ne Dj := \u2212Ij for i = 1, . . . , d, and D = D1 +\u00b7 \u00b7 \u00b7+Dd +Z. Then I \u222aD is a double cone, and we may test the null hypothesis H0 : \u03b8 \u2208S versus Ha : \u03b8 \u2208I \u222aD\\S using the test statistic (11). However, we may like to include in the alternative hypothesis the possibility that, say, \u03b81 \u2208I1 and \u03b82 \u2208D2. Thus, for d = 2, we would like the alternative set to be the quadruple cone de\ufb01ned as the union of four cones: I1 + I2, I1 + D2, D1 + I2, and D1 + D2. Then D1 + D2 is the cone opposite to I1 + I2, and D1 + I2 is the cone opposite to I1 + D2, and the largest linear space contained in any of these cones is S = S1 + S2 + Z. For arbitrary d \u22651, the multiple cone alternative has 2d components; call these C1, . . . , C2d. The proposed test involves projecting Y onto each of the 2d combinations of cones, and T(Y) = maxj=1,...,2d {\u2225\u03a0(Y|S) \u2212\u03a0(Y|Cj)\u22252} \u2225Y \u2212\u03a0(Y|S)\u22252 . using the smallest sum of squared residuals in T(Y). The distribution of the test statistic is again invariant to scale and translations in S, so for known error distribution G, the null distribution may be simulated to the desired precision. This provides another option for testing against the linear model, that is di\ufb00erent from the fully convex/concave alternative of Example 3, but requires the additional assumption of additivity. It also provides tests for more speci\ufb01c alternatives: for example, suppose that the null hypothesis is E(y) = \u03b20 + \u03b21x1 + \u03b22x2 + \u03b23x2 2 + \u03b24z, where z is an indicator variable. If we can assume that the e\ufb00ects are additive, then we can use the cone I1 of convex functions and the cone I2 of functions with positive third derivative as outlined in Example 2 of the Introduction. If the additivity assumptions are correct, this quadruple cone alternative might provide better power than the more general, fully convex/concave alternative. 17 5.4 Testing against a constant function The traditional F-test for the parametric least-squares regression model has the null hypothesis that none of the predictors is (linearly) related to the response. For an n \u00d7 p full-rank design matrix, the F statistic has null distribution F(p \u22121, n \u2212p). To test against the constant function when the relationship of the response with the predictors is unspeci\ufb01ed, we can turn to our cone alternatives. Consider model (3) where the predictor values are X = {x1, x2, . . . , xn} \u2282Rd, d \u22651. A cone that contains the one-dimensional null space of all constant vectors is de\ufb01ned for multiple isotonic regression using a partial order on X. That is, xi \u2aafxj if xi \u2264xj holds coordinate-wise. Two points xi and xj in X are comparable if either xi \u2aafxj or xj \u2aafxi. Partial orders are re\ufb02exive, anti-symmetric, and transitive, but di\ufb00er from complete orders in that pairs of points are not required to be comparable. The regression function \u03c60 is isotonic with respect to \u2aafon X if \u03c60(xi) \u2264\u03c60(xj) whenever xi \u2aafxj, and \u03c60 is anti-tonic if \u03c60(xi) \u2265\u03c60(xj) whenever xi \u2aafxj. In Section A.3 we show that assumptions (A1) and (A2) hold for the double cone of isotonic and anti-tonic functions. 
However, the double cone for multiple isotonic regression is unsatisfactory because if one of the predictors reverses sign, the value of the statistic (11) (for testing against a constant function) also changes. For two predictors, it is more appropriate to de\ufb01ne a quadruple cone, considering pairs of increasing/decreasing relationships in the partial order. For three predictors we need an octuple cone, which is comprised of four double-cones. See Section A.3 for more details and simulations results. 6 Simulation studies In this section we investigate the \ufb01nite-sample performance of the proposed procedure based on T, as de\ufb01ned in (11), for testing the goodness-of-\ufb01t of parametric regression models. We consider the case of a single predictor, the test against a linear regression function with multiple predictors, and the test of linear versus partial linear model, comparing our procedure with competing methods. In all the simulation settings we assume that the errors are Gaussian. Overall our procedure performs well; although for some scenarios there are other methods that are somewhat better, none of the other methods has the same consistent good performance. Our procedure, being an exact test, always gives the desired level of signi\ufb01cance, whereas other methods have in\ufb02ated test size in some scenarios and are only approximate. Further, most of the other methods depend on tuning parameters for the alternative \ufb01t. The goodness-of-\ufb01t of parametric regression models has received a lot of attention in the statistical literature. Stute et al. (1998) used the empirical process of the regressors marked by the residuals to construct various omnibus goodness-of-\ufb01t tests. Wild bootstrap approximations were used to \ufb01nd the cut-o\ufb00of the test statistics. 18 We denote the two variant test statistics \u2013 the Kolmogorov-Smirnov type and the Cram\u00b4 er-von Mises type \u2013 by S1 and S2, respectively. We implement these methods using the \u201cIntRegGOF\u201d library in the R package. Fan and Huang (2001) proposed a lack-of-\ufb01t test based on Fourier transforms; also see Christensen and Sun (2010) for a very similar method. The main drawback of this approach is that the method needs a reliable estimator of \u03c32 to compute the test-statistic, and it can be very di\ufb03cult to obtain such an estimator under model mis-speci\ufb01cation. We present the power study of the adaptive Neyman test (T \u2217 AN,1; see equation (2.1) of Fan and Huang (2001)) using the known \u03c32 (as a gold standard) and an estimated \u03c32. We denote this method by FH. Pe\u02dc na and Slate (2006) proposed an easy-to-implement single global procedure for testing the various assumptions of a linear model. The test can be viewed as a Neyman smooth test and relies only on the standardized residual vector. We implemented the procedure using the \u201cgvlma\u201d library in the R package and denote it by PS. 6.1 Examples with a one-dimensional predictor Proportions of rejections for 10,000 data sets simulated from Yi = \u03c60(xi) + \u03f5i, i = 1, . . . , n = 100, are shown in Fig. 6.1. In the \ufb01rst plot, the power of the test against a constant function is shown when the true regression function is \u03c60(x) = 10a(x\u22122/3)2 +, where (\u00b7)+ = max(\u00b7, 0), and the e\ufb00ect size a ranges from 0 to 7. 
The alternative for our proposed test (labeled T in the figures) is the increasing/decreasing double cone, and the power is compared with the F-test with linear alternative and the FH test with both known and estimated variance. In the second plot, power of the test against a linear function is shown for the same "ramp" regression function φ0(x) = 10a{(x − 2/3)₊}². The alternative for the proposed test is the convex/concave double cone, and the power is compared with the F-test with quadratic alternative, the FH test with known and estimated variance, S1 and S2, and the PS test. As with the test against a constant function, ours has better power and the FH test with estimated variance has inflated test size. Finally, we consider the null hypothesis that φ0 is quadratic, and the true regression function is φ0(x) = a exp(3x − 2). The double-cone alternative is as given in Example 2 of the Introduction. The S2 test has slightly higher power than ours in this situation, and the PS test has power similar to the S1 test. The FH test with known variance has a small test size compared to the target, and low power.

[Figure 6.1: Power functions for the tests against the constant, linear, and quadratic models with a one-dimensional predictor; n = 100 observations, equally spaced x ∈ [0, 1], and σ² = 1.]

6.2 Testing against the linear model

We consider two data generating models. Model 1 is adapted from Stute et al. (1998) (see Model 3 of their paper) and can be expressed as Y = 2 + 5X1 − X2 + aX1X2 + ϵ, with covariate (X1, . . . , Xd), where X1, . . . , Xd are i.i.d. Uniform(0, 1) and ϵ is drawn from a normal distribution with mean 0. Stute et al. (1998) used d = 2 in their simulations but we use d = 2, 4. Model 2 is adapted from Fan and Huang (2001) (see Example 4 of their paper) and can be written as Y = X1 + aX2² + 2X4 + ϵ, where (X1, X2, X3, X4) is the covariate vector. The covariates X1, X2, X3 are normally distributed with mean 0 and variance 1 and pairwise correlation 0.5. The predictor X4 is binary with probability of "success" 0.4 and independent of X1, X2 and X3.

Random samples of size n = 100 are drawn from Model 1 (and also from Model 2) and a multiple linear regression model is fitted to the samples, without the interaction X1X2 term (X2² term). Thus, the null hypothesis holds if and only if a = 0. In all the following p-value calculations, whenever required, we use 1000 bootstrap samples to estimate the critical values of the tests. For Models 1 and 2 we implement the fully convex/concave double-cone alternative and denote the method by T. For Model 2 we also implement the octuple cone alternative under the assumption that the effects are additive, treating X4 as a parametrically modeled covariate (as described in Section 5.3); we denote this method by T2. We also implement the generalized likelihood ratio test of Fan and Jiang (2007); see equation (4.24) of their paper (also see Fan and Jiang (2005)). The test computes a likelihood ratio statistic, assuming normal errors, obtained from the parametric and nonparametric fits. We denote this method by L.
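For reference, the two data generating models just described can be simulated directly; the sketch below (Python, not the authors' code) does so, with the error standard deviation and random seed left as parameters since they are not pinned down in this excerpt, and with a = 0 corresponding to the null hypothesis.

```python
import numpy as np

def model1(n, a, d=2, sigma=1.0, rng=None):
    """Model 1: Y = 2 + 5*X1 - X2 + a*X1*X2 + eps, with X1, ..., Xd i.i.d. U(0, 1)."""
    rng = np.random.default_rng() if rng is None else rng
    X = rng.uniform(size=(n, d))
    y = 2 + 5 * X[:, 0] - X[:, 1] + a * X[:, 0] * X[:, 1] + sigma * rng.standard_normal(n)
    return X, y

def model2(n, a, sigma=1.0, rng=None):
    """Model 2: Y = X1 + a*X2^2 + 2*X4 + eps, with (X1, X2, X3) standard normal with
    pairwise correlation 0.5 and X4 ~ Bernoulli(0.4) independent of the rest."""
    rng = np.random.default_rng() if rng is None else rng
    R = np.full((3, 3), 0.5)
    np.fill_diagonal(R, 1.0)
    X123 = rng.multivariate_normal(np.zeros(3), R, size=n)
    X4 = rng.binomial(1, 0.4, size=n)
    y = X123[:, 0] + a * X123[:, 1] ** 2 + 2 * X4 + sigma * rng.standard_normal(n)
    return np.column_stack([X123, X4]), y
```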
As the procedure L involves fitting a smooth nonparametric model, it requires the delicate choice of smoothing bandwidth(s). We use the "np" package in R to compute the nonparametric kernel estimator, with the optimal bandwidth chosen by the "npregbw" function in that package. This procedure is similar in spirit to that used in Härdle and Mammen (1993). To compute the critical value of the test we use the wild bootstrap method. We also compare our method with the recently proposed goodness-of-fit test of a linear model by Sen and Sen (2013). Their procedure assumes the independence of the error and the predictors in the model and tests for the independence of the residual (obtained from the fitted linear model) and the predictors. The critical value of the test is computed using a bootstrap approach. We denote this method by K.

[Figure 6.2: Power functions for the test against linear models for Models 1 and 2 with n = 100 observations and σ² = 1.]

From Fig. 6.2 it is clear that our procedure overall has good finite sample performance compared to the competing methods. Note that as a increases, the power of our test monotonically increases in all problems. As expected, S1 and S2 behave poorly as the dimension of the covariate increases. The method L is anti-conservative and hence shows higher power in some scenarios. It is also computationally intensive, especially for higher dimensional covariates. For Model 2, both the fully convex/concave and the octuple-cone additive alternatives perform quite well compared to the other methods.

6.3 Linear versus partial linear model

We compare the power of the test for the linear versus partial linear model (24), with H0 : φ0 is affine, including a categorical covariate with three levels. The n = 100 x values are equally spaced in [0, 1]. We compare our test, with the convex/concave alternative, to the standard F-test using a quadratic alternative and to the test for linear versus partial linear from Fan and Huang (2001), Section 2.4 (labeled FH). Two versions of the FH test are used; the first version uses an estimate of the model variance and the second assumes the variance is known. The first two plots in Fig. 6.3 show power for a = 0, 1, . . . , 6 when the true function is φ0(x) = 3ax² + x, and the target test size is α = .05. In the first plot, the values of the covariate are generated independently of x; for the second, the predictors are related, so that the categorical covariate is more likely to have level 1 when x is small and more likely to have level 3 when x is large. Here the F-test is the gold standard, because the true model satisfies all the assumptions. The proposed test performs similarly to the FH test with known variance; for the unknown variance case the power of the FH test is larger but its test size is inflated. In the third and fourth plots, the regression function is φ0(x) = 20a(x − 1/2)³ + x.
The F-test is not able to reject the null hypothesis because the alternative is incorrect, but the true function is also not contained in the double-cone (convex/concave) alternative of the proposed method. However, the proposed test still compares well with the FH test, especially when the predictors are correlated.

[Figure 6.3: Power for the test against affine φ0 with n = 100 observations, equally spaced x ∈ [0, 1], and σ² = 1. The covariate z is categorical with three levels; the panels correspond to φ(x) = x² + x and φ(x) = (x − 1/2)³, each with independent and with dependent x and z.]

6.4 Testing against constant function, with covariates

Testing the significance of a predictor while controlling for covariate effects can be accomplished using the partial linear model (24), with H0 : φ0(x) ≡ c, for some unknown c, using the double cone alternative for monotone φ0. Our method is compared with the standard F-test with linear alternative and the FH test, both with known and unknown variance, as in the previous subsection. The first two plots of Fig. 6.4 display power for 10,000 simulated data sets from φ0(x) = ax, for n = 100 x values equally spaced in [0, 1], with a ranging from 0 to 3. The power of the proposed test is close to the gold-standard F-test, and the FH test with unknown variance again has inflated test size. When the predictors are related, the FH test has unacceptably large test size. In the third and fourth plots, data were simulated using φ0(x) = a sin(3πx), with a ranging from 0 to 1.5. The true φ0 is not in the alternative set for either the F-test or the proposed test; however, the proposed test can reject the null consistently for higher values of a. In this scenario, the size of the FH test with known variance is inflated for correlated predictors.

[Figure 6.4: Power for the test against constant φ0 with n = 100 observations, equally spaced x ∈ [0, 1], and σ² = 1. The covariate z is categorical with three levels; the panels correspond to φ(x) = x and φ(x) = sin(3πx), each with independent and with dependent x and z.]

[Figure 6.5: Centered components of the additive anti-tonic fit to the Rubber data set, plotted against hardness (Shore units) and tensile strength (kg/sq m).]

6.5 Real data analysis

Data example 1: We study the well-known Boston housing dataset collected by Harrison Jr and Rubinfeld (1978) to study the effect of air pollution on real estate prices in the greater Boston area in the 1970s. The data consist of 506 observations on 16 variables, with each observation pertaining to one census tract. We use the version of the data that incorporates the minor corrections found by Gilley and Pace (1996). Our procedure, assuming normal errors, yields a p-value of essentially 0 and rejects the linear model specification, as used in Harrison Jr and Rubinfeld (1978), while the method of Stute et al.
(1998) yields a p-value of more than 0.2. The method of Sen and Sen (2013) also yields a highly significant p-value.

Data example 2: We consider the Rubber data set, found in the R package MASS, representing "accelerated testing of tyre rubber". The response variable is the abrasion loss in gm/hr, with two predictors of loss: the hardness in Shore units and the tensile strength in kg/sq m. A linear regression (fitting a plane to the data) gives R² = .84, and the usual residual plots do not provide evidence against linearity. Further, the Stute tests provide p-values of .39 and .18, respectively, and the Peña and Slate test against the linear model provides p = .92. The quadruple cone alternative of Section 5.3 provides p = .047, although the fully convex/concave double cone alternative does not reject at α = .05. Some insight into the true function can be found by fitting the constrained additive model, using the reasonable assumption that the expected response is decreasing in both predictors. The fit is roughly linear in "hardness" but is more like a sigmoidal or step function in "tensile strength"; these components are shown in Fig. 6.5. Although neither fit is convex or concave, the quadruple cone method can still detect the departure from linearity.

Data example 3: To demonstrate the partial linear test, we use data from a study of predictors of blood plasma levels of the micronutrient beta carotene in healthy subjects, as discussed by Nierenberg et al. (1989). Smoking status (current, former, never) and sex are categorical predictors whose effects on the response (log of blood plasma beta carotene) are determined to be significant at α = .05. If interest is in determining the effect of the age of the subject, a linear relationship might be assumed; this fit is shown in the first plot of Fig. 6.6, where the six lines correspond to the smoking/sex combinations.

[Figure 6.6: Log of blood plasma beta carotene (BPBC) plotted against age, with the plot character representing combinations of sex and smoking status; the three panels show the linear, convex, and concave fits.]

The covariates must be included in the model because they are related to both the response and the predictor of interest. To test whether the linear fit is appropriate, we use our test for the linear versus partial linear model, which returns a p-value of .047. The convex and concave fits have similar sums of squared residuals, with the concave fit having the smaller, so the concave fit represents the projection of the response vector onto the double cone. However, the fact that the convex fit is almost as close to the data implies that neither is correct; perhaps the true function is concave at the left and convex at the right.

7 Discussion

We have developed a test against a parametric regression function, where the alternative involves large-dimensional convex cones. The critical value of the test can be easily computed via simulation, and the test is exact if we assume a known form of the error distribution. For a given parametric model, a very general alternative is guaranteed to have power tending to one as the sample size increases, under mild conditions.
However, if additional a priori assumptions are available, these can be incorporated to boost the power in small to moderate-sized samples. For example, when testing against an additive linear function such as \u03c60(x1, x2, x3) = \u03b20 + \u03b21x1 + \u03b22x2 + \u03b23x3, we can use the \u201cfully convex\u201d model of Example 3, or if we feel con\ufb01dent that the additivity assumption is valid, we can use the octuple cone of Section 5.3. This power improvement was seen in model 2 simulations, in Fig. 6.2. The authors have provided R and matlab routines for the general method and for 24 the speci\ufb01c examples. In the R package DoubleCone, there are three functions. The \ufb01rst, doubconetest is the generic version; the user provides a constraint matrix that de\ufb01nes the cone I for which the null space of the constraint matrix is S and (A1) and (A2) hold. The function provides a p-value for the test that the expected value of a vector is in the null space using the double-cone alternative. The function agconst performs a test of the null hypothesis that the expected value of y is constant versus the alternative that it is monotone (increasing or decreasing) in each of the predictors, using double, quadruple, or octuple cones. Finally, the function partlintest performs a test of a linear model versus a partial linear model, using a double-cone alternative. The user can test against a constant, linear, or quadratic function, while controlling for the e\ufb00ects of (optional) covariates. The matlab routine (http://www.stat.columbia.edu/\u223cbodhi/Bodhi/Publications.html) performs the test of Example 3. A Appendix A.1 Unbiasedness We show that the power of the test for \u03b8 \u03b8 \u03b80 \u2208I \u222aD is at least as large as the test size. In the following we give the proof of Theorem 3.3 in the main paper. Theorem A.1. (Restatement of Theorem 3.3) Let Y0 := s + \u03c3\u03f5, for s \u2208S, where the components of \u03f5 are i.i.d. G. Suppose further that G is a symmetric (around 0) distribution. Choose any \u03b8 \u03b8 \u03b8 \u2208\u2126I, and let Y1 := Y0 + \u03b8 \u03b8 \u03b8. Then for any a > 0, P (T(Y1) > a) \u2265P (T(Y0) > a) . (25) Without loss of generality, we assume that s = 0 as the distribution of T is invariant for any s \u2208S, by Lemma 3.1. To prove (25), de\ufb01ne X1 := \u2225\u03a0(Y0|\u2126I)\u22252 and X2 := \u2225\u03a0(\u2212Y0|\u2126I)\u22252. Then X1 and X2 have the same distribution as G is a symmetric around 0, and \u2225\u03a0(Y0|\u2126D)\u22252 = \u2225\u03a0(\u2212Y0|\u2126I)\u22252. In particular, max \b \u2225\u03a0(Y0|\u2126I)\u22252, \u2225\u03a0(Y0|\u2126D)\u22252\t = max {X1, X2} =: T0. Let A be the event that {X1 \u2265X2}. By symmetry P(A) = 1/2, and for any a > 0, P(T0 \u2265a) = 1 2 [P(T0 \u2265a|A) + P(T0 \u2265a|Ac)] = 1 2 [P(X1 \u2265a|A) + P(X2 \u2265a|Ac)] . Let Y2 := Y0 \u2212\u03b8 \u03b8 \u03b8 and de\ufb01ne W1 := \u2225\u03a0(Y1|\u2126I)\u22252, W2 := \u2225\u03a0(\u2212Y1|\u2126I)\u22252, W3 := \u2225\u03a0(Y2|\u2126I)\u22252, and W4 := \u2225\u03a0(\u2212Y2|\u2126I)\u22252. Then W1 and W4 are equal in distribution, as are W2 and W3. 25 Lemma A.2. W1 \u2265X1 and W4 \u2265X2. Proof. 
Using Lemma 2.1, W1 = \u2225\u03a0(Y1|\u2126I)\u22252 = \u2225\u03a0(Y1|I)\u22252 \u2212\u2225\u03a0(Y1|S)\u22252 = \u2225Y1 \u2212\u03a0(Y1|Io)\u22252 \u2212\u2225\u03a0(Y1|S)\u22252 = \u2225Y1 \u2212\u03b8 \u03b8 \u03b8 \u2212[\u03a0(Y1|Io) \u2212\u03b8 \u03b8 \u03b8]\u22252 \u2212\u2225\u03a0(Y1|S)\u22252 \u2265 \u2225Y0 \u2212\u03a0(Y0|Io)\u22252 \u2212\u2225\u03a0(Y1|S)\u22252 = \u2225\u03a0(Y0|I)\u22252 \u2212\u2225\u03a0(Y1|S)\u22252 = \u2225\u03a0(Y0|\u2126I)\u22252 = X1, where the last equality uses \u03a0(Y1|S) = \u03a0(Y0|S). The inequality holds because \u03a0(Y1|Io) \u2212\u03b8 \u03b8 \u03b8 \u2208Io, by (A2). The proof of W4 \u2265X2 is similar. Proof of Theorem A.1: Let S = max(W1, W2), which is equal in distribution to max(W3, W4). For any a > 0, P(S > a) = 1 2 [P(S > a|A) + P(S > a|Ac)] \u2265 1 2 [P(W1 > a|A) + P(W4 > a|Ac)] \u2265 1 2 [P(X1 > a|A) + P(X2 > a|Ac)] = P(T0 > a). Finally, we note that T(Y1) = S/SSE0 and T(Y0) = T0/SSE0, where SSE0 is the sum of squared residuals of the projection of either Y1 or Y0 onto S, because \u03b8 \u03b8 \u03b8 is orthogonal to S. A.2 Some intuition under which the power goes to 1 The following result shows that if both projections \u03b8 \u03b8 \u03b8I and \u03b8 \u03b8 \u03b8D belong to S then \u03b8 \u03b8 \u03b80 must itself lie in S, if I is a \u201clarge\u201d cone. This motivates the fact that if \u03b8 \u03b8 \u03b80 / \u2208S, then both \u03b8 \u03b8 \u03b8I and \u03b8 \u03b8 \u03b8D cannot be very close to \u03b8 \u03b8 \u03b8S, and (18) might hold. The largeness of I can be represented through the following condition. Suppose that any \u03be \u03be \u03be \u2208Rn can be expressed as \u03be \u03be \u03be = \u03be \u03be \u03beI + \u03be \u03be \u03beD (26) for some \u03be \u03be \u03beI \u2208I and \u03be \u03be \u03beD \u2208D. This condition holds for I = {\u03b8 \u03b8 \u03b8 : A A A\u03b8 \u03b8 \u03b8 \u22650 0 0} where A A A is irreducible, and S is the null row space of A A A. A constraint matrix is irreducible 26 as de\ufb01ned by Meyer (1999) if the constraints de\ufb01ned by the rows are in a sense nonredundant. Then bases for S and \u2126I together span Rn, so any \u03b8 \u03b8 \u03b80 \u2208Rn can be written as the sum of vectors in S, \u2126I, and \u2126D simply by writing \u03b8 \u03b8 \u03b80 as a linear combination of these basis vectors, and gathering terms with negative coe\ufb03cients to be included in the \u2126D component. Lemma A.3. If (26) holds then \u03b8 \u03b8 \u03b80 \u2208S if and only if \u03b8 \u03b8 \u03b8I \u2208S and \u03b8 \u03b8 \u03b8D \u2208S. Proof. Suppose that \u03b8 \u03b8 \u03b80 \u2208S. Then \u03b8 \u03b8 \u03b80 \u2208I and thus \u03b8 \u03b8 \u03b8I = \u03b8 \u03b8 \u03b80. Similarly, \u03b8 \u03b8 \u03b80 \u2208D and \u03b8 \u03b8 \u03b8D = \u03b8 \u03b8 \u03b80. Hence, \u03b8 \u03b8 \u03b80 = \u03b8 \u03b8 \u03b8I = \u03b8 \u03b8 \u03b8D \u2208S. Suppose now that \u03b8 \u03b8 \u03b8I,\u03b8 \u03b8 \u03b8D \u2208S. By (8), \u27e8\u03b8 \u03b8 \u03b80 \u2212\u03b8 \u03b8 \u03b8I,s s s\u27e9= \u27e8\u03b8 \u03b8 \u03b80 \u2212\u03b8 \u03b8 \u03b8D,s s s\u27e9= 0 for any s s s \u2208S; this implies that \u27e8\u03b8 \u03b8 \u03b8I,s s s\u27e9= \u27e8\u03b8 \u03b8 \u03b8D,s s s\u27e9for all s s s \u2208S, so \u03b8 \u03b8 \u03b8I = \u03b8 \u03b8 \u03b8D =: \u03b3 \u03b3 \u03b3. 
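As a small numerical check of the decomposition (26) in the one-dimensional monotone case (an illustrative R sketch; the particular splitting rule based on positive and negative increments is just one convenient choice and is not part of the formal argument), any vector splits into a non-decreasing component in I plus a non-increasing component in D:

set.seed(2)
xi   <- rnorm(10)                      # an arbitrary vector
d    <- diff(xi)
xi_I <- cumsum(c(0, pmax(d, 0)))       # accumulated positive increments: non-decreasing, so in I
xi_D <- xi - xi_I                      # remaining increments are <= 0: non-increasing, so in D
c(all(diff(xi_I) >= 0), all(diff(xi_D) <= 0), max(abs(xi - (xi_I + xi_D))))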
From (7) applied to D and I it follows that \u27e8\u03b8 \u03b8 \u03b80 \u2212\u03b3 \u03b3 \u03b3,\u03be \u03be \u03beI + \u03be \u03be \u03beD\u27e9\u22640, for all \u03be \u03be \u03beI \u2208I and \u03be \u03be \u03beD \u2208D. As any \u03be \u03be \u03be \u2208Rn can be expressed as \u03be \u03be \u03be = \u03be \u03be \u03beI +\u03be \u03be \u03beD for \u03be \u03be \u03beI \u2208I and \u03be \u03be \u03beD \u2208D, the above display yields \u27e8\u03b8 \u03b8 \u03b80 \u2212\u03b3 \u03b3 \u03b3,\u03be \u03be \u03be\u27e9\u22640 for all \u03be \u03be \u03be \u2208Rn. Taking \u03be \u03be \u03be and \u2212\u03be \u03be \u03be in the above display we get that \u27e8\u03b8 \u03b8 \u03b80 \u2212\u03b3 \u03b3 \u03b3,\u03be \u03be \u03be\u27e9= 0 for all \u03be \u03be \u03be \u2208Rn, which implies that \u03b8 \u03b8 \u03b80 = \u03b3 \u03b3 \u03b3, thereby proving the result. A.3 Testing against a constant function The traditional F-test for the parametric least-squares regression model has the null hypothesis that none of the predictors is (linearly) related to the response. For an n \u00d7 p full-rank design matrix, the F statistic has null distribution F(p \u22121, n \u2212p). To test against the constant function when the relationship of the response with the predictors is unspeci\ufb01ed, we can turn to our cone alternatives. Consider model (3) where \u03c60 is the unknown true regression function and X = {x x x1,x x x2, . . . ,x x xn} \u2282Rd is the set of predictor values. We can assume without loss of generality that there are no duplicate x x x values; otherwise we average the response values at each distinct x x x, and do weighted regression. Interest is in H0 : \u03c60 \u2261c for some unknown scalar c \u2208R, against a general alternative. A cone that contains the one-dimensional null space is de\ufb01ned for multiple isotonic regression. We start with some de\ufb01nitions. A partial ordering on X may be de\ufb01ned as x x xi \u2aafx x xj if x x xi \u2264x x xj holds coordinate-wise. Two points x x xi and x x xj in X are comparable if either x x xi \u2aafx x xj or x x xj \u2aafx x xi. Partial orderings are re\ufb02exive, anti-symmetric, and transitive, but di\ufb00er from complete orderings in that pairs of points are not required to be comparable. A function \u03c60 : X \u2192R is isotonic with respect to the partial ordering if \u03c60(x x xi) \u2264\u03c60(x x xj) whenever x x xi \u2aafx x xj. If \u03b8 \u03b8 \u03b8 \u2208Rn is de\ufb01ned as \u03b8i = \u03c60(x x xi), we can consider the set I of \u03b8 \u03b8 \u03b8 \u2208Rn such that \u03b8i \u2264\u03b8j whenever x x xi \u2aafx x xj. The set I is a convex cone in Rn, and a constraint matrix A A A can be found so that I = {\u03b8 \u03b8 \u03b8 : A A A\u03b8 \u03b8 \u03b8 \u22650 0 0}. 27 Assumption (A1) holds if X is connected; i.e., for all proper subsets X0 \u2282X, each point in X0 is comparable with at least one point not in X0. If X is not connected, it can be broken down into smaller connected subsets, and the null space of A A A consists of vectors that are constant over the subsets. We have the following lemma. Lemma A.4. The null space S of the constraint matrix A A A associated with isotonic regression on the set X is spanned by the constant vectors if and only if the set X is connected. Proof. Let A A A be the constraint matrix associated with isotonic regression on the set X such that there is a row for every comparable pair (i.e., before reducing). 
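To make this construction concrete, the following R sketch builds the constraint matrix with one row per comparable pair for a bivariate design under the coordinatewise partial ordering and computes the dimension of its null space; in line with Lemma A.4 and the remark above, that dimension equals the number of connected pieces of X, so it is 1 when X is connected. The random design points and the rank computation are only for illustration.

set.seed(3)
n <- 15
X <- cbind(runif(n), runif(n))                    # bivariate design points
rows <- list()
for (i in 1:n) for (j in 1:n) {
  if (i != j && all(X[i, ] <= X[j, ])) {          # x_i precedes x_j coordinatewise
    r <- numeric(n); r[i] <- -1; r[j] <- 1        # row encoding theta_j - theta_i >= 0
    rows[[length(rows) + 1]] <- r
  }
}
A <- do.call(rbind, rows)
n - qr(A)$rank                                    # dimension of S = {theta : A theta = 0}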
Let C be the null space of A A A (it is easy to see that the null space of the reduced constraint matrix is also S). First, suppose \u03b8 \u03b8 \u03b8 \u2208S and \u03b8a \u0338= \u03b8b for some 1 \u2264a, b \u2264n. Let Xa \u2282X be the set of points that are comparable to x x xa, and let Xb \u2282X be the set of points that are comparable to x x xb. If x x xj is in the intersection, then there is row of A A A where the j-th element is \u22121 and the a-th element is +1 (or vice-versa), as well as a row where the j-th element is \u22121 and the b-th element is +1 (or vice-versa), so that \u03b8a = \u03b8j and \u03b8b = \u03b8j. Therefore if \u03b8a \u0338= \u03b8b, Xa \u2229Xb is empty and X is not connected. Second, suppose X is connected and \u03b8 \u03b8 \u03b8 \u2208S. For any a \u0338= b, 1 \u2264a, b \u2264n, de\ufb01ne Xa and Xb as above; then because of connectedness there must be an x x xj in the intersection, and hence \u03b8a = \u03b8b. Thus only constant vectors are in S. The classical isotonic regression with a partial ordering can be formulated using upper and lower sets. The set U \u2286X is an upper set with respect to \u2aafif x x x1 \u2208U, x x x2 \u2208X, and x x x1 \u2aafx x x2 imply that x x x2 \u2208U. Similarly, the set L \u2286X is a lower set with respect to \u2aafif x x x2 \u2208L, x x x1 \u2208X, and x x x1 \u2aafx x x2 imply that x x x1 \u2208L. The following results can be found in Gebhardt (1970), Barlow and Brunk (1972), and Dykstra (1981). Let U and L be the collections of upper and lower sets in X, respectively. The isotonic regression estimator at x x x \u2208X has the closed form \u02c6 \u03c60(x x x) = max U\u2208U:x x x\u2208U min L\u2208L:x x x\u2208L AvY(L \u2229U), where AvY(S) is the average of all the Y values for which the predictor values are in S \u2286X. Further, if \u03b8 \u03b8 \u03b8 \u2208I, then for any upper set U we have Av\u03b8 \u03b8 \u03b8(U) \u2265Av\u03b8 \u03b8 \u03b8(U c). (27) Similarly, if L is a lower set and \u03b8 \u03b8 \u03b8 \u2208I, Av\u03b8 \u03b8 \u03b8(L) \u2264Av\u03b8 \u03b8 \u03b8(Lc), as the complement of an upper set is a lower set and vice-versa. Finally, any \u03b8 \u03b8 \u03b8 \u2208I can be written as a linear combination of indicator functions for upper sets, with non-negative coe\ufb03cients, plus a constant vector. We could de\ufb01ne a double cone where D = {\u03b8 \u03b8 \u03b8 : A A A\u03b8 \u03b8 \u03b8 \u22640 0 0} for antitonic \u03b8 \u03b8 \u03b8. To show the condition (A2) holds, we \ufb01rst show that if \u03b8 \u03b8 \u03b8 \u2208\u2126I = I \u2229S\u22a5, then the projection of \u03b8 \u03b8 \u03b8 on D is the origin; hence \u2126I \u2286Do. Let \u03b8 \u03b8 \u03b8U be the \u201ccentered\u201d indictor vector for an 28 upper set U. That is, \u03b8 \u03b8 \u03b8Ui = a if x x xi \u2208U, \u03b8 \u03b8 \u03b8Ui = b if x x xi / \u2208U, and Pn i=1 \u03b8 \u03b8 \u03b8Ui = 0. Then \u03b8 \u03b8 \u03b8U \u2208\u2126I, and \u27e8\u03b8 \u03b8 \u03b8U,\u03b8 \u03b8 \u03b8\u27e9\u22650 for any \u03b8 \u03b8 \u03b8 \u2208I by (27). For, \u27e8\u03b8 \u03b8 \u03b8U,\u03b8 \u03b8 \u03b8\u27e9 = a X i:xi\u2208Uc \u03b8i + b X i:xi\u2208U \u03b8i = (n \u2212nu)aAv\u03b8(U c) + nubAv\u03b8(U) \u2265 {(n \u2212nu)a + nub}Av\u03b8(U c) = 0, where nu is the number of elements in U. Similarly, \u27e8\u03b8 \u03b8 \u03b8U,\u03c1 \u03c1 \u03c1\u27e9\u22640 for any \u03c1 \u03c1 \u03c1 \u2208D. 
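The upper/lower-set representation described earlier in this subsection can be checked directly in one dimension, where upper sets are right tails and lower sets are left tails of the ordered design. The R sketch below (illustrative only; the brute-force max-min search is not how one would compute the fit in practice) evaluates the max-min formula and compares it with the fit returned by isoreg.

set.seed(4)
n <- 30
x <- (1:n) / n
y <- x^2 + rnorm(n, sd = 0.2)
maxmin_fit <- sapply(1:n, function(k)            # outer max over upper sets {i, ..., n} containing k
  max(sapply(1:k, function(i)
    min(sapply(k:n, function(j) mean(y[i:j]))))))# inner min over lower sets {1, ..., j} containing k
max(abs(maxmin_fit - isoreg(x, y)$yf))           # agrees with the isotonic fit up to rounding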
We can write any \u03b8 \u03b8 \u03b8 \u2208\u2126I as a linear combination of centered upper set indicator vectors with non-negative coe\ufb03cients, so that \u27e8\u03b8 \u03b8 \u03b8,\u03c1 \u03c1 \u03c1\u27e9\u2264 0 for any \u03b8 \u03b8 \u03b8 \u2208\u2126I and \u03c1 \u03c1 \u03c1 \u2208D. Then for any \u03b8 \u03b8 \u03b8 \u2208\u2126I and \u03c1 \u03c1 \u03c1 \u2208D, \u2225\u03b8 \u03b8 \u03b8 \u2212\u03c1 \u03c1 \u03c1\u22252 = \u2225\u03b8 \u03b8 \u03b8\u22252 + \u2225\u03c1 \u03c1 \u03c1\u22252 \u22122\u27e8\u03b8 \u03b8 \u03b8,\u03c1 \u03c1 \u03c1\u27e9\u2265\u2225\u03b8 \u03b8 \u03b8\u22252, hence the projection of \u03b8 \u03b8 \u03b8 \u2208\u2126I onto D is the origin. The double cone for multiple isotonic regression is unsatisfactory because if one of the predictors reverses sign, the value of the statistic (11) (for testing against a constant function) also changes. For two predictors, it is more appropriate to de\ufb01ne a quadruple cone. De\ufb01ne: \u2022 I1: the cone de\ufb01ned by the partial ordering: x x xi \u2aafx x xj if x1i \u2264x1j and x2i \u2264x2j; \u2022 I2: the cone de\ufb01ned by the partial ordering: x x xi \u2aafx x xj if x1i \u2264x1j and x2i \u2265x2j; \u2022 I3: the cone de\ufb01ned by the partial ordering: x x xi \u2aafx x xj if x1i \u2265x1j and x2i \u2264x2j; and \u2022 I4: the cone de\ufb01ned by the partial ordering: x x xi \u2aafx x xj if x1i \u2265x1j and x2i \u2265x2j. The cones I1 and I4 form a double cone as do I2 and I3. If X connected, the onedimensional space S of constant vectors is the largest linear space in any of the four cones and (A1) is met. Let \u02c6 \u03b8 \u03b8 \u03b8j be the projection of Y Y Y onto Ij, and SSEj = \u2225Y Y Y \u2212\u02c6 \u03b8 \u03b8 \u03b8j\u22252 for j = 1, 2, 3, 4, while SSE0 = Pn i=1(Yi \u2212\u00af Y )2. De\ufb01ne Tj = (SSE0 \u2212SSEj)/SSE0, for j = 1, 2, 3, 4, and T = max{T1, T2, T3, T4}. The distribution of T is invariant to translations in S; this can be proved using the same technique as Lemma 2.2. Therefore the null distribution can be simulated up to any desired precision for known G. To show that the test is unbiased for \u03b8 \u03b8 \u03b80 in the quadruple cone I1 \u222aI2 \u222aI3 \u222aI4, we note that condition (A2) becomes \u21261 \u2286Io 4 and \u21262 \u2286Io 3, and vice-versa; the results of Section A.1 hold for the quadruple cone. For three predictors we need an octuple cone, which is comprised of four double-cones and similar results follow. The results of Section 3.1 depend on the consistency of the multiple isotonic regression estimator. Although Hanson et al. (1973) proved point-wise and uniform consistency 29 0 1 2 3 4 0.0 0.2 0.4 0.6 0.8 1.0 power \u03c6(x1x2)=2ax1x2 F-test T FH S1&2 0 1 2 3 4 5 0.0 0.2 0.4 0.6 0.8 1.0 \u03c6(x1x2)=4a(x1-1/2)2 0 1 2 3 4 5 6 7 0.0 0.2 0.4 0.6 0.8 1.0 \u03c6(x1x2)=3a[max(x1-2/3,x2-2/3,0)]2 Figure A.1: Power function for test against the constant model, with n = 100 observations, predictors generated uniformly in [0, 1]2 and \u03c32 = 1. (on compacts in the interior of the support of the covariates) for the projection estimator in the bivariate case (also see Makowski (1977)), result (16) for the general case of multiple isotonic regression is still an open problem. A.3.1 Simulation study We consider the test against a constant function using the quadruple cone alternative of Section A.3, and model Yi = \u03c60(x1i, x2i) + \u03f5i, i = 1, . . . 
, 100, where for each simulated data set, the (x1, x2) values are generated uniformly in the unit square. The power is compared with the standard F-test where the alternative model is Yi = \u03b20 + \u03b21x1i + \u03b22x2i + \u03b23x1ix2i + \u03f5i, the FH test with known variance, and S1 and S2. In the \ufb01rst plot of Figure A.3.1, power is shown when the true regression function is \u03c60(x1, x2) = 2ax1x2. Here the assumptions for the parametric F-test are correct, so the F-test has the highest power, although the power for the proposed test is only slightly smaller. In the second plot, the regression function is quadratic in x1, with vertex in the center of the design points. The F-test fails to reject the constant model because of the non-linear alternative, but the true regression function is also far from the quadruple-cone alternative. However, the proposed test has power comparable to the FH test with known variance. In the \ufb01nal plot, the regression function is constant on [0, 2/3]\u00d7[0, 2/3], and increasing in both predictors beyond 2/3. The proposed test has the best power compared with the alternatives. The S1 and S2 tests do not perform well for the test against a constant function in two dimensions, compared with other testing situations. A.4 Proofs of Lemmas and Theorems Proof of Theorem 3.2 To study the distribution of T(\u02c6 \u03f5) \u223cDn and relate it to that of T(\u03f5) \u223cHn, we consider the following quantile coupling: Let Z1, . . . be a sequence of 30 i.i.d. Uniform(0,1) random variables. De\ufb01ne \u03f5j = G\u22121(Zj) and let \u02c6 \u03f5nj = \u02c6 G\u22121 n (Zj), for j = 1, . . . , n; n \u22651. Observe that T(\u03f51, . . . , \u03f5n) \u223cHn and the conditional distribution of T(\u02c6 \u03f5n1, . . . , \u02c6 \u03f5nn), given the data, is Dn. For notational simplicity, for the rest of the proof, we will denote by \u03f5 := (\u03f51, . . . , \u03f5n) and by \u02c6 \u03f5 := (\u02c6 \u03f5n1, . . . , \u02c6 \u03f5nn). Note that, \u2225\u03a0(\u03f5|S) \u2212\u03a0(\u03f5|I)\u2225 = \u2225{\u03a0(\u03f5|S) \u2212\u03a0(\u02c6 \u03f5|S)} + {\u03a0(\u02c6 \u03f5|S) \u2212\u03a0(\u02c6 \u03f5|I)} + {\u03a0(\u02c6 \u03f5|I) \u2212\u03a0(\u03f5|I)}\u2225 \u2264 \u2225\u03a0(\u03f5|S) \u2212\u03a0(\u02c6 \u03f5|S)\u2225+ \u2225\u03a0(\u02c6 \u03f5|S) \u2212\u03a0(\u02c6 \u03f5|I)\u2225+ \u2225\u03a0(\u02c6 \u03f5|I) \u2212\u03a0(\u03f5|I)\u2225 \u2264 2\u2225\u03f5 \u2212\u02c6 \u03f5\u2225+ \u2225\u03a0(\u02c6 \u03f5|S) \u2212\u03a0(\u02c6 \u03f5|I)\u2225 (28) where we have used the fact that the projection operator is 1-Lipschitz. To simplify notation, let U := \u2225\u03a0(\u03f5|S) \u2212\u03a0(\u03f5|I)\u2225, V := \u2225\u03a0(\u03f5|S) \u2212\u03a0(\u03f5|D)\u2225and W := \u2225\u03f5 \u2212 \u03a0(\u03f5|S)\u2225. Also, let \u02c6 U := \u2225\u03a0(\u02c6 \u03f5|S) \u2212\u03a0(\u02c6 \u03f5|I)\u2225, \u02c6 V := \u2225\u03a0(\u02c6 \u03f5|S) \u2212\u03a0(\u02c6 \u03f5|D)\u2225and \u02c6 W := \u2225\u02c6 \u03f5 \u2212\u03a0(\u02c6 \u03f5|S)\u2225. Thus, using a similar argument as in (28) we obtain, max n |U \u2212\u02c6 U|, |V \u2212\u02c6 V |, |W \u2212\u02c6 W| o \u22642\u2225\u03f5 \u2212\u02c6 \u03f5\u2225. 
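The quantile coupling used here is easy to simulate. In the R sketch below (a side illustration under the assumption that Ĝn is the empirical distribution of an i.i.d. N(0, 1) sample and G is the standard normal; the grid approximation of the quantile integral is ad hoc), the squared 2-Wasserstein distance between Ĝn and G shrinks as n grows, which is the quantity that controls the distance between Hn and Dn in the remainder of the proof.

set.seed(5)
d2sq <- function(n, ngrid = 1e4) {
  eps <- rnorm(n)                                   # sample whose empirical c.d.f. plays the role of G_n
  t   <- (seq_len(ngrid) - 0.5) / ngrid             # grid on (0, 1) for the quantile integral
  mean((quantile(eps, t, type = 1) - qnorm(t))^2)   # approximates the squared 2-Wasserstein distance
}
sapply(c(50, 200, 1000), d2sq)                      # decreases towards zero as n grows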
Therefore, |T 1/2(\u03f5) \u2212T 1/2(\u02c6 \u03f5)| = \f \f \f \f \f max {U, V } W \u2212max{ \u02c6 U, \u02c6 V } \u02c6 W \f \f \f \f \f \u2264 | max{U, V } \u2212max{ \u02c6 U, \u02c6 V }| W + max{ \u02c6 U, \u02c6 V } \f \f \f \f 1 W \u22121 \u02c6 W \f \f \f \f \u2264 max{|U \u2212\u02c6 U|, |V \u2212\u02c6 V |} W + max{ \u02c6 U, \u02c6 V } \u02c6 W | \u02c6 W \u2212W| W \u2264 2\u2225\u03f5 \u2212\u02c6 \u03f5\u2225 W + | \u02c6 W \u2212W| W \u22644\u2225\u03f5 \u2212\u02c6 \u03f5\u2225 W (29) as max{ \u02c6 U, \u02c6 V }/ \u02c6 W \u22641. For two probability measure \u00b5 and \u03bd on R, let dp(\u00b5, \u03bd) denote the p-Wasserstein distance between \u00b5 and \u03bd, i.e., [dp(\u00b5, \u03bd)]p := inf J {E|S \u2212T|p : S \u223c\u00b5, T \u223c\u03bd}, where the in\ufb01mum is taken over all joint distributions J with marginals \u00b5, \u03bd. Now, E \u00121 n\u2225\u03f5 \u2212\u02c6 \u03f5\u22252 \u0013 = 1 n n X j=1 E(\u03f5i \u2212\u02c6 \u03f5nj)2 = Z 1 0 |G\u22121 n (t) \u2212G\u22121(t)|2dt = d2(Gn, G), (30) where the last equality follows from Shorack and Wellner (1986, Theorem 2, page 64). Further, by Shorack and Wellner (1986, Theorem 1, page 63), and two conditions on \u02c6 Gn stated in the theorem, it follows that d2(Gn, G) \u21920 a.s. Therefore, d1(Hn, Dn) \u2264 E|T(\u02c6 \u03f5) \u2212T(\u03f5)| \u2264 2E|T 1/2(\u02c6 \u03f5) \u2212T 1/2(\u03f5)| \u2264 8E \u0012\u2225\u03f5 \u2212\u02c6 \u03f5\u2225 W \u0013 \u2264 8 s E \u0010 n W 2 \u0011 E \u00121 n\u2225\u03f5 \u2212\u02c6 \u03f5\u22252 \u0013 \u21920 a.s., 31 where we have used (29) and (30), E(nW \u22122) is uniformly bounded, and the fact that d2(Gn, G) \u21920 a.s. The result now follows from the fact that dL(Hn, Dn) \u2264 p d1(Hn, Dn); see Huber (1981, pages 31\u201333). Proof of Lemma 3.4: Let \u03c1 \u03c1 \u03c1 := \u03b8 \u03b8 \u03b80 \u2212\u03b8 \u03b8 \u03b8I. Then \u03c1 \u03c1 \u03c1 \u22a5\u03b8 \u03b8 \u03b8I. Let Z Z Z = Y \u2212\u03c1 \u03c1 \u03c1, and note that Z Z Z = \u03b8 \u03b8 \u03b8I + \u03f5 \u03f5 \u03f5. If \u02c7 \u03b8 \u03b8 \u03b8I is the projection of Z Z Z onto I, then \u2225\u02c7 \u03b8 \u03b8 \u03b8I \u2212\u03b8 \u03b8 \u03b8I\u22252 = op(n) and \u27e8Z Z Z \u2212\u02c7 \u03b8 \u03b8 \u03b8I,\u03b8 \u03b8 \u03b8I\u27e9= op(n). (31) The \ufb01rst follows from assumption (16) and the latter holds because 0 \u2264\u2212\u27e8Z Z Z \u2212\u02c7 \u03b8 \u03b8 \u03b8I,\u03b8 \u03b8 \u03b8I\u27e9= \u27e8Z Z Z \u2212\u02c7 \u03b8 \u03b8 \u03b8I, \u02c7 \u03b8 \u03b8 \u03b8I \u2212\u03b8 \u03b8 \u03b8I\u27e9\u2264\u2225Z Z Z \u2212\u02c7 \u03b8 \u03b8 \u03b8I\u2225\u2225\u02c7 \u03b8 \u03b8 \u03b8I \u2212\u03b8 \u03b8 \u03b8I\u2225= op(n), as \u2225Z Z Z \u2212\u02c7 \u03b8 \u03b8 \u03b8I\u22252/n \u2264\u2225Z Z Z \u2212\u03b8 \u03b8 \u03b8I\u22252/n = Op(1), where we have used the characterization of projection on a closed convex cone (7). 
Starting with \u2225\u02c7 \u03b8 \u03b8 \u03b8I \u2212\u03b8 \u03b8 \u03b8I\u22252 = \u2225\u02c7 \u03b8 \u03b8 \u03b8I \u2212\u02c6 \u03b8 \u03b8 \u03b8I\u22252 + \u2225\u02c6 \u03b8 \u03b8 \u03b8I \u2212\u03b8 \u03b8 \u03b8I\u22252 + 2\u27e8\u02c7 \u03b8 \u03b8 \u03b8I \u2212\u02c6 \u03b8 \u03b8 \u03b8I, \u02c6 \u03b8 \u03b8 \u03b8I \u2212\u03b8 \u03b8 \u03b8I\u27e9, we rearrange to get \u2225\u02c7 \u03b8 \u03b8 \u03b8I \u2212\u03b8 \u03b8 \u03b8I\u22252 \u2212\u2225\u02c6 \u03b8 \u03b8 \u03b8I \u2212\u03b8 \u03b8 \u03b8I\u22252 = \u2225\u02c7 \u03b8 \u03b8 \u03b8I \u2212\u02c6 \u03b8 \u03b8 \u03b8I\u22252 + 2\u27e8\u02c7 \u03b8 \u03b8 \u03b8I \u2212\u02c6 \u03b8 \u03b8 \u03b8I, \u02c6 \u03b8 \u03b8 \u03b8I \u2212\u03b8 \u03b8 \u03b8I\u27e9 = \u2225\u02c7 \u03b8 \u03b8 \u03b8I \u2212\u02c6 \u03b8 \u03b8 \u03b8I\u22252 + 2\u27e8\u02c7 \u03b8 \u03b8 \u03b8I \u2212Y, \u02c6 \u03b8 \u03b8 \u03b8I \u2212\u03b8 \u03b8 \u03b8I\u27e9+ 2\u27e8Y \u2212\u02c6 \u03b8 \u03b8 \u03b8I, \u02c6 \u03b8 \u03b8 \u03b8I \u2212\u03b8 \u03b8 \u03b8I\u27e9. (32) As Y = Z Z Z + \u03c1 \u03c1 \u03c1 and \u03c1 \u03c1 \u03c1 \u22a5\u03b8 \u03b8 \u03b8I, we have \u27e8Y \u2212\u02c7 \u03b8 \u03b8 \u03b8I, \u02c6 \u03b8 \u03b8 \u03b8I \u2212\u03b8 \u03b8 \u03b8I\u27e9 = \u27e8Y \u2212\u02c7 \u03b8 \u03b8 \u03b8I, \u02c6 \u03b8 \u03b8 \u03b8I\u27e9\u2212\u27e8Y \u2212\u02c7 \u03b8 \u03b8 \u03b8I,\u03b8 \u03b8 \u03b8I\u27e9 = [\u27e8Z Z Z \u2212\u02c7 \u03b8 \u03b8 \u03b8I, \u02c6 \u03b8 \u03b8 \u03b8I\u27e9+ \u27e8\u03c1 \u03c1 \u03c1, \u02c6 \u03b8 \u03b8 \u03b8I\u27e9] \u2212\u27e8Z Z Z \u2212\u02c7 \u03b8 \u03b8 \u03b8I,\u03b8 \u03b8 \u03b8I\u27e9. Also, \u27e8Y \u2212\u02c6 \u03b8 \u03b8 \u03b8I, \u02c6 \u03b8 \u03b8 \u03b8I \u2212\u03b8 \u03b8 \u03b8I\u27e9= \u2212\u27e8Y \u2212\u02c6 \u03b8 \u03b8 \u03b8I,\u03b8 \u03b8 \u03b8I\u27e9. Thus, (32) equals \u2225\u02c7 \u03b8 \u03b8 \u03b8I \u2212\u02c6 \u03b8 \u03b8 \u03b8I\u22252 \u22122\u27e8Y \u2212\u02c6 \u03b8 \u03b8 \u03b8I,\u03b8 \u03b8 \u03b8I\u27e9\u22122\u27e8Z Z Z \u2212\u02c7 \u03b8 \u03b8 \u03b8I, \u02c6 \u03b8 \u03b8 \u03b8I\u27e9\u22122\u27e8\u03c1 \u03c1 \u03c1, \u02c6 \u03b8 \u03b8 \u03b8I\u27e9+ 2\u27e8Z Z Z \u2212\u02c7 \u03b8 \u03b8 \u03b8I,\u03b8 \u03b8 \u03b8I\u27e9. The \ufb01rst four terms in the above expression are positive (with their signs), and the last is op(n) by (31). Therefore, \u2225\u02c6 \u03b8 \u03b8 \u03b8I \u2212\u03b8 \u03b8 \u03b8I\u22252 \u2264\u2225\u02c7 \u03b8 \u03b8 \u03b8I \u2212\u03b8 \u03b8 \u03b8I\u22252 + op(n), which when combined with (31) gives the desired result. The proof of the other part is completely analogous. Proof of Theorem 3.6: To show that (18) holds let us de\ufb01ne the convex projection of \u03c60 with respect to the measure \u00b5 as \u03c6I = argmin \u03c6 convex Z [0,1]d(\u03c60(x) \u2212\u03c6(x))2d\u00b5(x), where the minimization is over all convex functions from [0, 1]d to R. The existence and uniqueness (a.s.) of \u03c6I follows from the fact that H := L2([0, 1]d, \u00b5) is a Hilbert 32 space with the inner product \u27e8f, g\u27e9H = R [0,1]d f(x)g(x)d\u00b5(x), and the space \u02dc I of all convex functions in H is a closed convex set in H. Let \u02dc D of the space of all concave functions in H. We can similarly de\ufb01ne the non-increasing projection \u03c6D of \u03c60. Next we show that if \u03c60 is not a\ufb03ne a.e. \u00b5 then either \u03c6I is not a\ufb03ne a.e. \u00b5 or \u03c6D is not a\ufb03ne a.e. \u00b5. 
Suppose not, i.e., suppose that \u03c6I and \u03c6D are both a\ufb03ne a.e. \u00b5. Then by (8), \u27e8\u03c60 \u2212\u03c6I, f\u27e9H = \u27e8\u03c60 \u2212\u03c6D, f\u27e9H = 0 for any a\ufb03ne f. This implies that \u27e8\u03c6I, f\u27e9H = \u27e8\u03c6D, f\u27e9H for all a\ufb03ne f, so \u03c6I = \u03c6D =: \u03c6S a.e., where \u03c6S is a\ufb03ne. Note that \u03c6S is indeed the projection of \u03c60 onto the space of all a\ufb03ne functions in H. From (7) applied to \u02dc D and \u02dc I it follows that \u27e8\u03c60 \u2212\u03c6S, fI + fD\u27e9H \u22640, for all fI \u2208\u02dc I and fD \u2208\u02dc D. As any f \u2208H that is twice continuously di\ufb00erentiable can be expressed as f = fI + fD for fI \u2208\u02dc I and fD \u2208\u02dc D, the above display yields \u27e8\u03c60 \u2212\u03c6S, f\u27e9H \u22640 for all f that is twice continuously di\ufb00erentiable. Taking f and \u2212f in the last inequality we get that \u27e8\u03c60 \u2212\u03c6S, f\u27e9H = 0 for all f that is twice continuously di\ufb00erentiable, which implies that \u03c60 = \u03c6S a.e. \u00b5, giving rise to a contradiction. We can also show that lim n\u2192\u221e 1 n\u2225\u03b8 \u03b8 \u03b8I \u2212\u03b8 \u03b8 \u03b8S\u22252 = Z [0,1]d(\u03c6I(x) \u2212\u03c6S(x))2d\u00b5(x) > 0. Similarly, n\u22121\u2225\u03b8 \u03b8 \u03b8D \u2212\u03b8 \u03b8 \u03b8S\u22252 also converges to a positive number. This proves (18). Proof of Theorem 4.1: Observe that TI = \u2225\u02c6 \u03b8 \u03b8 \u03b8S \u2212\u02c6 \u03b8 \u03b8 \u03b8I\u22252/\u2225Y \u2212\u02c6 \u03b8 \u03b8 \u03b8S\u22252. Letting en := (1, 1, . . . , 1)\u22a4\u2208Rn, and noting that \u02c6 \u03b8 \u03b8 \u03b8S = \u00af Y en, where \u00af Y = Pn i=1 Yi/n, we have 1 n\u2225Y \u2212\u02c6 \u03b8 \u03b8 \u03b8S\u22252 \u2192p \u03c32 > 0. Now, \u2225\u02c6 \u03b8 \u03b8 \u03b8S \u2212\u02c6 \u03b8 \u03b8 \u03b8I\u22252 = \u2225\u02c6 \u03b8 \u03b8 \u03b8S \u2212\u03b8 \u03b8 \u03b80 + \u03b8 \u03b8 \u03b80 \u2212\u02c6 \u03b8 \u03b8 \u03b8I\u22252 \u2264 2\u2225\u02c6 \u03b8 \u03b8 \u03b8S \u2212\u03b8 \u03b8 \u03b80\u22252 + 2\u2225\u02c6 \u03b8 \u03b8 \u03b8I \u2212\u03b8 \u03b8 \u03b80\u22252 = Op(1) + Op(log n), where we have used the facts that \u2225\u02c6 \u03b8 \u03b8 \u03b8S \u2212\u03b8 \u03b8 \u03b80\u22252 = Op(1) and \u2225\u02c6 \u03b8 \u03b8 \u03b8I \u2212\u03b8 \u03b8 \u03b80\u22252 = Op(log n) (see Chatterjee et al. (2013); also see Meyer and Woodroofe (2000) and Zhang (2002)). A similar result can be obtained for TD to arrive at the conclusion T = Op(log(n)/n). Proof of Theorem 4.2: We will verify (16) and (18) and apply Theorem 3.7 to obtain the desired result. First observe that (16) follows immediately from the results in Chatterjee et al. (2013); also see Zhang (2002). To show that (18) holds let us de\ufb01ne the non-decreasing projection of \u03c60 with respect to the measure F as \u03c6I = argmin \u03c6\u2191 Z 1 0 (\u03c60(x) \u2212\u03c6(x))2dF(x), 33 where the minimization is over all non-decreasing functions. We can similarly de\ufb01ne the non-increasing projection \u03c6D of \u03c60. As \u03c60 is not a constant a.e. F, it can be shown by a very similar argument as in Lemma A.3 that either \u03c6I is not a constant a.e. F or \u03c6D is not a constant a.e. F (note that here we use the fact that a function of bounded variation on an interval can be expressed as the di\ufb00erence of two monotone functions). Without loss of generality let us assume that \u03c6I is not a constant a.e. F. 
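As a numerical aside on the null rate in Theorem 4.1 (a rough R sketch using the monotone double cone and the statistic (SSE0 − SSEj)/SSE0 maximized over the increasing and decreasing fits; the sample sizes and replication counts are arbitrary), n T / log n stays of constant order as n grows when the true regression function is constant:

set.seed(6)
Tstat <- function(y, x) {
  sse0 <- sum((y - mean(y))^2)
  ssei <- sum((y - isoreg(x, y)$yf)^2)          # SSE of the non-decreasing fit
  ssed <- sum((y + isoreg(x, -y)$yf)^2)         # SSE of the non-increasing fit
  (sse0 - min(ssei, ssed)) / sse0
}
sapply(c(100, 400, 1600), function(n) {
  x <- (1:n) / n
  median(replicate(200, n * Tstat(rnorm(n), x) / log(n)))   # remains bounded in n
})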
Therefore, lim n\u2192\u221e 1 n\u2225\u03b8 \u03b8 \u03b8I \u2212\u03b8 \u03b8 \u03b8S\u22252 = Z 1 0 (\u03c6I(x) \u2212c0)2dF(x) > 0, which proves (18), where c0 = argminc\u2208R R 1 0 (\u03c60(x) \u2212c)2dF(x). Proof of Theorem 4.9: The theorem follows from known metric entropy results on the class of uniformly bounded convex functions that are uniformly Lipschitz in conjunction with known results on consistency of least squares estimators; see Theorem 4.8 of Van de Geer (2000). We give the details below. The notion of covering numbers will be used in the sequel. For \u03f5 > 0 and a subset G of functions, the \u03f5-covering number of G under the metric \u2113, denoted by N(S, \u03f5; \u2113), is de\ufb01ned as the smallest number of closed balls of radius \u03f5 whose union contains G. Fix any B > 0 and L > L0. Recall the de\ufb01nition of the class \u02dc IL, given in (22). We de\ufb01ne the class of uniformly bounded convex functions that are uniformly Lipschitz as \u02dc IL,B := {\u03c8 \u2208IL : \u2225\u03c8\u2225X \u2264B}. Using Theorem 3.2 of Guntuboyina and Sen (2013) (also see Bronshtein (1976)) we know that log N \u0010 \u02dc IL,B, \u03f5; L\u221e \u0011 \u2264c \u0012B + dL \u03f5 \u0013d/2 , (33) for all 0 < \u03f5 \u2264\u03f50(B + dL), where \u03f50 > 0 is a \ufb01xed constant and L\u221eis the supremum norm. In the following we denote by IL and IL,B the convex sets (in Rn) of all evaluations (at the data points) of functions in IL and IL,B, respectively; cf. (6). Thus, we can say that \u02c6 \u03b8 \u03b8 \u03b8I,L is the projection of Y on IL. Let \u02c6 \u03b8 \u03b8 \u03b8I,L,B denote the projection of Y onto IL,B. We now use Theorem 4.8 of Van de Geer (2000) to show that \u2225\u02c6 \u03b8 \u03b8 \u03b8I,L,B \u2212\u03b8 \u03b8 \u03b80\u22252 = op(n). (34) Note that equation (4.26) in Van de Geer (2000) is trivially satis\ufb01ed as we have sub-gaussian errors and equation (4.27) easily follows from (33). Denote the i-th coordinate of a vector b \u2208Rn by b(i), for i = 1, . . . , n. De\ufb01ne the event An := {maxi=1,...,n |\u02c6 \u03b8 \u03b8 \u03b8 (i) I,L| \u2264B0}. Next we show that there exists B0 > 0 such 34 that P(An) \u21921, as n \u2192\u221e. (35) As \u02c6 \u03b8 \u03b8 \u03b8I,L is a projection on the closed convex set IL, we have \u27e8Y \u2212\u02c6 \u03b8 \u03b8 \u03b8I,L, \u03b3 \u2212\u02c6 \u03b8 \u03b8 \u03b8I,L\u27e9\u22640, for all \u03b3 \u2208IL. Letting e := (1, 1, . . . , 1)\u22a4\u2208Rn, note that for any c \u2208R, ce \u2208IL. Hence, \u27e8Y \u2212\u02c6 \u03b8 \u03b8 \u03b8I,L, ce \u2212\u02c6 \u03b8 \u03b8 \u03b8I,L\u27e9= c\u27e8Y \u2212\u02c6 \u03b8 \u03b8 \u03b8I,L, e\u27e9\u2212\u27e8Y \u2212\u02c6 \u03b8 \u03b8 \u03b8I,L, \u02c6 \u03b8 \u03b8 \u03b8I,L\u27e9\u22640, for all c \u2208R, and thus \u27e8Y \u2212\u02c6 \u03b8 \u03b8 \u03b8I,L, e\u27e9= 0, i.e., n\u00af Y = Pn i=1 Yi = Pn i=1 \u02c6 \u03b8 \u03b8 \u03b8 (i) I,L. Now, for any i \u2208 {1, 2, . . . , n}, |\u02c6 \u03b8 \u03b8 \u03b8 (i) I,L| \u2264 |\u02c6 \u03b8 \u03b8 \u03b8 (i) I,L \u2212\u00af Y | + |\u00af Y | = \f \f \f \f \f \u02c6 \u03b8 \u03b8 \u03b8 (i) I,L \u22121 n n X j=1 \u02c6 \u03b8 \u03b8 \u03b8 (j) I,L \f \f \f \f \f + |\u00af Y | \u2264 1 n n X j=1 \f \f \f\u02c6 \u03b8 \u03b8 \u03b8 (i) I,L \u2212\u02c6 \u03b8 \u03b8 \u03b8 (j) I,L \f \f \f + |\u00af Y | \u2264 L n n X j=1 \u2225xi \u2212xj\u2225+ \u2225\u03c60\u2225X + |\u00af \u03f5| \u2264 \u221a dL + \u03ba + 1 =: B0, a.s. 
for large enough n, where we have used the fact that ∥xi − xj∥ ≤ √d, ∥φ∥X < κ for some κ > 0, and that ¯ϵ = Σⁿi=1 ϵi/n → 0 a.s. As IL,B0 ⊂ IL, we trivially have ∥Y − θ̂I,L∥² ≤ ∥Y − θ̂I,L,B0∥². If An happens, θ̂I,L ∈ IL,B0, and thus ∥Y − θ̂I,L,B0∥² = min_{θ ∈ IL,B0} ∥Y − θ∥² ≤ ∥Y − θ̂I,L∥². From the last two inequalities it follows that if An occurs, then θ̂I,L = θ̂I,L,B0, as θ̂I,L,B0 is the unique minimizer. Now using (35), (23) immediately follows from (34).

1 Introduction

Let Y := (Y1, Y2, . . . , Yn) ∈ Rⁿ and consider the model Y = θ0 + σϵ, (1) where θ0 ∈ Rⁿ, the components of ϵ = (ϵ1, . . . , ϵn) are i.i.d. G with mean 0 and variance 1, and σ > 0. In this paper we address the problem of testing H0 : θ0 ∈ S, where S := {θ ∈ Rⁿ : θ = Xβ, β ∈ Rᵏ, for some k ≥ 1}, (2) and X is a known design matrix. We develop a test for H0 which is equivalent to the likelihood ratio test with normal errors. To describe the test, let I be a large-dimensional convex cone (for a quite general alternative) that contains the linear space S, and define the "opposite" cone D = −I = {x : −x ∈ I}. We test H0 against H1 : θ ∈ I ∪ D \ S, and the test statistic is formulated by comparing the projection of Y onto S with the projection of Y onto the double cone I ∪ D. Projections onto convex cones are discussed in Silvapulle and Sen (2005), Chapter 3; see Robertson et al. (1988) for the specific case of isotonic regression, and Meyer (2013b) for a cone-projection algorithm. We show that the test is unbiased, and that the critical value of the test, for any fixed level α ∈ (0, 1), can be computed exactly (via simulation) if the error distribution G is known (e.g., G is assumed to be standard normal). If G is assumed to be completely unknown, the critical value can be approximated via the bootstrap. Also, the test is completely automated and does not involve the choice of tuning parameters (e.g., smoothing bandwidths). More importantly, we show that the power of the test converges to 1, under mild conditions as n grows large, for θ0 not only in the alternative H1 but for "almost" all θ0 ∉ S. To better understand the scope of our procedure we first look at a few motivating examples.

Example 1: Suppose that φ0 : [0, 1] → R is an unknown function of bounded variation and we are given design points x1 < x2 < · · · < xn in [0, 1], and data Yi, i = 1, . . . , n, from the model Yi = φ0(xi) + σϵi, (3) where ϵ1, . . . , ϵn are i.i.d. mean zero, variance 1 errors, and σ > 0. Suppose that we want to test that φ0 is a constant function, i.e., φ0 ≡ c, for some unknown c ∈ R. We can formulate this as in (2) with θ0 := (φ0(x1), φ0(x2), . . . , φ0(xn))⊤ ∈ Rⁿ and X = e := (1, 1, . . . , 1)⊤ ∈ Rⁿ.
We can take I = {θ ∈ Rⁿ : θ1 ≤ θ2 ≤ . . . ≤ θn} to be the set of sequences of non-decreasing real numbers. The cone I can also be expressed as I = {θ ∈ Rⁿ : Aθ ≥ 0}, (4) where the (n − 1) × n constraint matrix A contains mostly zeros, except Ai,i = −1 and Ai,i+1 = 1, and S = {θ ∈ Rⁿ : Aθ = 0} is the largest linear space in I. Then, not only can we test for φ0 to be constant against the alternative that it is monotone, but as will be shown in Corollary 4.3, the power of the test will converge to 1, as n → ∞, for any φ0 of bounded variation that deviates from a constant function in a non-degenerate way. In fact, we find the rates of convergence for our test statistic under H0 (Theorem 4.1) and the alternative (Theorem 4.2).

Figure 1.1: Scatterplots generated from (3) with independent N(0, 1) errors, σ = 1, and equally spaced xi, where φ0(x) is shown as the dotted curve. Left: Increasing (dashed) and decreasing (solid) fits with φ0(x) = sin(3πx). Right: Convex (dashed) and concave (solid) fits with φ0(x) = 4 − 6x + 40(x − 1/2)³.

The intuition behind this remarkable power property of the test statistic is that a function is both increasing and decreasing if and only if it is a constant. So, if either of the projections of Y, on I and D respectively, is not close to S, then the underlying regression function is unlikely to be a constant. Consider testing against a constant function given the left scatterplot in Fig. 1.1 which was generated from a sinusoidal function. The decreasing fit to the scatterplot represents the projection of Y onto the double cone, because it has smaller sum of squared residuals than the increasing fit. Although the true regression function is neither increasing nor decreasing, the projection onto the double cone is sufficiently different from the projection onto S, to the extent that the proposed test rejects H0 at level α = 0.05. The power for this test, given the indicated function and error variance, is 0.16 for n = 50, rising to 0.53 for n = 100 and 0.98 for n = 200. Our procedure can be extended to test against a constant function when the covariates are multi-dimensional; see Section A.3 for the details.

Example 2: Consider (3) and suppose that we want to test whether φ0 is affine, i.e., φ0(x) = a + bx, x ∈ [0, 1], where a, b ∈ R are unknown. This problem can again be formulated as in (2) where X = [e|e1] and e1 := (x1, x2, . . . , xn)⊤. We may use the double cone of convex/concave functions, i.e., I can be defined as I = {θ ∈ Rⁿ : (θ2 − θ1)/(x2 − x1) ≤ (θ3 − θ2)/(x3 − x2) ≤ · · · ≤ (θn − θn−1)/(xn − xn−1)}, (5) if the x values are distinct and ordered. I can also be defined by a constraint matrix A as in (4) where the non-zero elements of the (n − 2) × n matrix A are Ai,i = xi+2 − xi+1, Ai,i+1 = xi − xi+2, and Ai,i+2 = xi+1 − xi. We also show that not only can we test for φ0 to be affine against the alternative that it is convex/concave, but as will be shown in Corollary 4.6, the power of the test will converge to 1 for any non-affine smooth φ0. The second scatterplot in Fig.
1.1 is generated from a cubic regression function that is neither convex nor concave, but our test rejects the linearity hypothesis at \u03b1 = 0.05. The power for this test with n = 50 and the function speci\ufb01ed in the plot is 0.77, rising to 0.99 when n = 100. If we want to test against a quadratic function, we can use a matrix appropriate for constraining the third derivative of the regression function to be non-negative. In this case, A is (n \u22123) \u00d7 n and the non-zero elements are Ai,i = \u2212(xi+3 \u2212 xi+2)(xi+3 \u2212xi+1)(xi+2 \u2212xi+1), Ai,i+1 = (xi+3 \u2212xi)(xi+3 \u2212xi+2)(xi+2 \u2212xi), Ai,i+2 = \u2212(xi+3 \u2212xi)(xi+3 \u2212xi+1)(xi+1 \u2212xi), and Ai,i+3 = (xi+1 \u2212xi)(xi+2 \u2212xi)(xi+2 \u2212xi+1), for i = 1, . . . , n \u22123. Higher-order polynomials can be used in the null hypothesis by determining the appropriate constraint matrix in a similar fashion. Example 3: We assume the same setup as in model (3) where now {x1, . . . , xn} are n distinct points in Rd, for d \u22651. Consider testing for goodness-of-\ufb01t of the linear model, i.e., test the hypothesis H0 : \u03c60 is a\ufb03ne (i.e., \u03c60(x) = a + b\u22a4x for some a \u2208R and b \u2208Rd). De\ufb01ne \u03b80 := (\u03c60(x1), \u03c60(x2), . . . , \u03c60(xn))\u22a4\u2208Rn, and the model can be seen as a special case of (1). We want to test H0 : \u03b80 \u2208S where S is de\ufb01ned as in (2) with X being the n \u00d7 (d + 1) matrix with the i-th row (1, xi), for i = 1, . . . , n. We can consider I to be the cone of evaluations of all convex functions, i.e., I = {\u03b8 \u2208Rn : \u03c8(xi) = \u03b8i, where \u03c8 : Rd \u2192R is any convex function}. (6) The set I is a closed convex cone in Rn; see Seijo and Sen (2011). Then, D := \u2212I is the set of all vectors that are evaluations of concave functions at the data points. We show that under certain assumptions, when \u03c60 is any smooth function that is not a\ufb03ne, the power of our test converges to 1; see Section 4.3 for the details and the computation of the test statistic. Again, the intuition behind this power property of the test statistic is that if a function is both convex and concave then it must be a\ufb03ne. Over the last two decades several tests for the goodness-of-\ufb01t of a parametric model have been proposed; see e.g., Cox et al. (1988), Azzalini and Bowman (1993), Eu- bank and Spiegelman (1990), Hardle and Mammen (1993), Fan and Huang (2001), Stute (1997), Guerre and Lavergne (2005), Christensen and Sun (2010), Neumeyer and Van Keilegom (2010) and the references therein. Most tests use a nonparametric regression estimator and run into the problem of choosing tuning parameters, e.g., smoothing bandwidth(s). Our procedure does not involve the choice of any tuning parameter. Also, the critical values of most competing procedures need to be approx- imated using resampling techniques (e.g., the bootstrap) whereas the cut-o\ufb00in our test can be computed exactly (via simulation) if the error distribution G is assumed known (e.g., Gaussian). 4 The above three examples demonstrate the usefulness of the test with the double cone alternative. In addition, we formulate a test for the linear versus partial linear model. For example, we can test the signi\ufb01cance of a single predictor while controlling for covariate e\ufb00ects, or we can test the null hypothesis that the response is linear in a single predictor, in the presence of parametrically-modeled covariates. 
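Returning to Example 2, the constraint matrix given there is easy to write down and check numerically. The R sketch below (illustrative only; the design points and the test vectors are arbitrary) verifies that affine vectors lie in the null space S = {θ : Aθ = 0}, while evaluations of a convex function satisfy Aθ ≥ 0.

set.seed(7)
n <- 8
x <- sort(runif(n))                           # distinct, ordered design points
A <- matrix(0, n - 2, n)
for (i in 1:(n - 2)) {                        # rows encode nonnegative second divided differences
  A[i, i]     <- x[i + 2] - x[i + 1]
  A[i, i + 1] <- x[i]     - x[i + 2]
  A[i, i + 2] <- x[i + 1] - x[i]
}
max(abs(A %*% (2 - 3 * x)))                   # an affine theta: A theta = 0 up to rounding
all(A %*% x^2 > -1e-12)                       # a convex theta: A theta >= 0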
We also provide a test suitable for the special case of additive effects, and a special test against a constant model. It is worth mentioning that the problem of testing H0 versus a closed convex cone I ⊃ S is well studied; see Raubertas et al. (1986) and Robertson et al. (1988), Chapter 2. Under the normal errors assumption, the null distribution of a likelihood ratio test statistic is that of a mixture of Beta random variables, where the mixing parameters are determined through simulations. The paper is organized as follows: In Section 2 we introduce some notation and definitions and describe the problem of testing H0 versus a closed convex cone I ⊃ S. We describe our test statistic and state the main results about our testing procedure in Section 3. In Section 4 we get back to the three examples discussed above and characterize the limiting behavior of the test statistic under H0 and otherwise. Extensions of our procedure to weighted regression, partially linear and additive models, as well as testing for a constant function in multi-dimension, are discussed in Section 5. In Section 6 we illustrate the finite sample behavior of our method and its variants using simulation and real data examples, and compare it with other competing methods. The Appendix contains proofs of some of the results and other technical details.