AcademicEval / intro_8K /test_introduction_short_2404.16745v1.json
{
"url": "http://arxiv.org/abs/2404.16745v1",
"title": "Statistical Inference for Covariate-Adjusted and Interpretable Generalized Factor Model with Application to Testing Fairness",
"abstract": "In the era of data explosion, statisticians have been developing\ninterpretable and computationally efficient statistical methods to measure\nlatent factors (e.g., skills, abilities, and personalities) using large-scale\nassessment data. In addition to understanding the latent information, the\ncovariate effect on responses controlling for latent factors is also of great\nscientific interest and has wide applications, such as evaluating the fairness\nof educational testing, where the covariate effect reflects whether a test\nquestion is biased toward certain individual characteristics (e.g., gender and\nrace) taking into account their latent abilities. However, the large sample\nsize, substantial covariate dimension, and great test length pose challenges to\ndeveloping efficient methods and drawing valid inferences. Moreover, to\naccommodate the commonly encountered discrete types of responses, nonlinear\nlatent factor models are often assumed, bringing further complexity to the\nproblem. To address these challenges, we consider a covariate-adjusted\ngeneralized factor model and develop novel and interpretable conditions to\naddress the identifiability issue. Based on the identifiability conditions, we\npropose a joint maximum likelihood estimation method and establish estimation\nconsistency and asymptotic normality results for the covariate effects under a\npractical yet challenging asymptotic regime. Furthermore, we derive estimation\nand inference results for latent factors and the factor loadings. We illustrate\nthe finite sample performance of the proposed method through extensive\nnumerical studies and an application to an educational assessment dataset\nobtained from the Programme for International Student Assessment (PISA).",
"authors": "Jing Ouyang, Chengyu Cui, Kean Ming Tan, Gongjun Xu",
"published": "2024-04-25",
"updated": "2024-04-25",
"primary_cat": "stat.ME",
"cats": [
"stat.ME"
],
"label": "Original Paper",
"paper_cat": "LLM Fairness",
"gt": "Latent factors, often referred to as hidden factors, play an increasingly important role in modern statistics for analyzing large-scale complex measurement data and find wide-ranging applications across various scientific fields, including educational assessments (Reckase 2009, Hambleton & Swaminathan 2013), macroeconomic forecasting (Stock & Watson 2002, Lam et al. 2011), and biomedical diagnosis (Carvalho et al. 2008, Frichot et al. 2013). For instance, in educational testing and social sciences, latent factors are used to model unobservable traits of respondents, such as skills, personality, and attitudes (von Davier 2008, Reckase 2009); in biology and genomics, latent factors are used to capture underlying genetic factors, gene expression patterns, or hidden biological mechanisms (Carvalho et al. 2008, Frichot et al. 2013). To uncover the latent factors and analyze large-scale complex data, various latent factor models have been developed and extensively investigated in the existing literature (Bai 2003, Bai & Li 2012, Fan et al. 2013, Chen et al. 2023b, Wang 2022). In addition to measuring the latent factors, the observed covariates and the covariate effects conditional on the latent factors hold significant scientific interpretations in many applications (Reboussin et al. 2008, Park et al. 2018). One important application is testing fairness, which receives increasing attention in the fields of education, psychology, and social sciences (Candell & Drasgow 1988, Belzak & Bauer 2020, Chen et al. 2023a). In educational assessments, testing fairness, or measurement invariance, implies that groups from diverse backgrounds have the same probability of endorsing the test items, controlling for individual proficiency levels (Millsap 2012). Testing fairness is not only of scientific interest to psychometricians and statisticians but also attracts widespread public awareness (Toch 1984). 
In the era of rapid technological advancements, international and large-scale educational assessments are becoming increasingly prevalent. One example is the Programme for International Student Assessment (PISA), a large-scale international assessment with substantial sample size and test length (OECD 2019). PISA assesses the knowledge and skills of 15-year-old students in the mathematics, reading, and science domains (OECD 2019). In PISA 2018, over 600,000 students from 37 OECD (Organisation for Economic Co-operation and Development) countries and 42 partner countries/economies participated in the test (OECD 2019). To assess the fairness of test designs in such large-scale assessments, it is important to develop modern and computationally efficient methodologies for interpreting the effects of observed covariates (e.g., gender and race) on the item responses, controlling for the latent factors. However, the discrete nature of the item responses, the increasing sample size, and the large number of test items in modern educational assessments pose great challenges for estimation and inference on the covariate effects as well as on the latent factors. For instance, in educational and psychological measurements, such a testing fairness issue (measurement invariance) is typically assessed by differential item functioning (DIF) analysis of item response data, which aims to detect DIF items, where a DIF item has a response distribution that depends not only on the measured latent factors but also on respondents' covariates (such as group membership). Although many statistical methods have been developed for DIF analysis, existing methods often require domain knowledge to pre-specify DIF-free items, namely anchor items, which may be misspecified and lead to biased estimation and inference results (Thissen 1988, Tay et al. 2016). 
To address this limitation, researchers developed item purification methods to iteratively select anchor items through stepwise selection models (Candell & Drasgow 1988, Fidalgo et al. 2000, Kopf et al. 2015). More recently, tree-based methods (Tutz & Berger 2016), regularized estimation methods (Bauer et al. 2020, Belzak & Bauer 2020, Wang et al. 2023), item pair functioning methods (Bechger & Maris 2015), and many other non-anchor-based methods have been proposed. However, these non-anchor-based methods do not provide valid statistical inference guarantees for testing the covariate effects. It remains an open problem to perform statistical inference on the covariate effects and the latent factors in educational assessments. To address this open problem, we study statistical estimation and inference for a general family of covariate-adjusted nonlinear factor models, which includes the popular factor models for binary, count, continuous, and mixed-type data that commonly occur in educational assessments. The nonlinear model setting poses great challenges for estimation and statistical inference. Despite recent progress in the factor analysis literature, most existing studies focus on estimation and inference under linear factor models (Stock & Watson 2002, Bai & Li 2012, Fan et al. 2013) and covariate-adjusted linear factor models (Leek & Storey 2008, Wang et al. 2017, Gerard & Stephens 2020, Bing et al. 2024). The techniques employed in linear factor model settings are not applicable here due to the nonlinearity inherent in the general models under consideration. Recently, several researchers have also investigated parameter estimation and inference for generalized linear factor models (Chen et al. 2019, Wang 2022, Chen et al. 2023b). However, they either focus only on the overall consistency properties of the estimation or do not incorporate covariates into the models. 
In concurrent work motivated by applications in single-cell omics, Du et al. (2023) considered a generalized linear factor model with covariates and studied its inference theory, where the latent factors are used as surrogate variables to control for unmeasured confounding. However, they imposed relatively stringent assumptions on the sparsity of the covariate effects and the dimension of the covariates, and their theoretical results also rely on data splitting. Moreover, Du et al. (2023) focused only on statistical inference on the covariate effects, while inference on the factors and loadings, which is often of great interest in educational assessments, was left unexplored. Establishing inference results for covariate effects and latent factors simultaneously under nonlinear models remains an open and challenging problem, due to the identifiability issue arising from the incorporation of covariates and the nonlinearity of the considered general models. To overcome these issues, we develop a novel framework for performing statistical inference on all model parameters and latent factors under a general family of covariate-adjusted generalized factor models. Specifically, we propose a set of interpretable and practical identifiability conditions for identifying the model parameters, and further incorporate these conditions into the development of a computationally efficient likelihood-based estimation method. Under these identifiability conditions, we develop new techniques to address the aforementioned theoretical challenges and obtain estimation consistency and asymptotic normality for the covariate effects under a practical yet challenging asymptotic regime. Furthermore, building upon these results, we establish estimation consistency and provide valid inference results for the factor loadings and latent factors that are often of scientific interest, advancing our theoretical understanding of nonlinear latent factor models. The rest of the paper is organized as follows. 
In Section 2, we introduce the model setup of the covariate-adjusted generalized factor model. Section 3 discusses the associated identifiability issues and further presents the proposed identifiability conditions and estimation method. Section 4 establishes the theoretical properties for not only the covariate effects but also the latent factors and factor loadings. In Section 5, we perform extensive numerical studies to illustrate the performance of the proposed estimation method and the validity of the theoretical results. In Section 6, we analyze an educational testing dataset from the Programme for International Student Assessment (PISA) and identify test items that may lead to potential bias among different test-takers. We conclude by discussing potential future directions in Section 7. Notation: For any integer $N$, let $[N] = \{1, \ldots, N\}$. For any set $S$, let $\#S$ denote its cardinality. For any vector $r = (r_1, \ldots, r_l)^\top$, let $\|r\|_0 = \#\{j : r_j \neq 0\}$, $\|r\|_\infty = \max_{j=1,\ldots,l} |r_j|$, and $\|r\|_q = (\sum_{j=1}^{l} |r_j|^q)^{1/q}$ for $q \geq 1$. We define $1^{(y)}_x$ to be the $y$-dimensional vector with the $x$-th entry equal to 1 and all other entries equal to 0. For any symmetric matrix $M$, let $\lambda_{\min}(M)$ and $\lambda_{\max}(M)$ be the smallest and largest eigenvalues of $M$. For any matrix $A = (a_{ij})_{n \times l}$, let $\|A\|_{\infty,1} = \max_{j=1,\ldots,l} \sum_{i=1}^{n} |a_{ij}|$ be the maximum absolute column sum, $\|A\|_{1,\infty} = \max_{i=1,\ldots,n} \sum_{j=1}^{l} |a_{ij}|$ be the maximum absolute row sum, $\|A\|_{\max} = \max_{i,j} |a_{ij}|$ be the maximum absolute entry, $\|A\|_F = (\sum_{i=1}^{n} \sum_{j=1}^{l} |a_{ij}|^2)^{1/2}$ be the Frobenius norm of $A$, and $\|A\| = \sqrt{\lambda_{\max}(A^\top A)}$ be the spectral norm of $A$. Let $\|\cdot\|_{\psi_1}$ denote the sub-exponential norm. Define $A^v = \mathrm{vec}(A) \in \mathbb{R}^{nl}$ to be the vectorization of the matrix $A \in \mathbb{R}^{n \times l}$. Finally, we denote by $\otimes$ the Kronecker product.",
"main_content": "Consider $n$ independent subjects with $q$ measured responses and $p^*$ observed covariates. For the $i$th subject, let $Y_i \in \mathbb{R}^q$ be a $q$-dimensional vector of responses corresponding to $q$ measurement items, and let $X^c_i \in \mathbb{R}^{p^*}$ be a $p^*$-dimensional vector of observed covariates. Moreover, let $U_i$ be a $K$-dimensional vector of latent factors representing unobservable traits such as skills and personalities, where we assume $K$ is specified, as in many educational assessments. We assume that the $q$-dimensional responses $Y_i$ are conditionally independent given $X^c_i$ and $U_i$. Specifically, we model the $j$th response for the $i$th subject, $Y_{ij}$, by the following conditional distribution: $Y_{ij} \sim p_{ij}(y \mid w_{ij})$, where $w_{ij} = \beta_{j0} + \gamma_j^\top U_i + \beta_{jc}^\top X^c_i$. (1) Here $\beta_{j0} \in \mathbb{R}$ is the intercept parameter, $\beta_{jc} = (\beta_{j1}, \ldots, \beta_{jp^*})^\top \in \mathbb{R}^{p^*}$ are the coefficient parameters for the observed covariates, and $\gamma_j = (\gamma_{j1}, \ldots, \gamma_{jK})^\top \in \mathbb{R}^K$ are the factor loadings. For better presentation, we write $\beta_j = (\beta_{j0}, \beta_{jc}^\top)^\top$ as an assembled vector of intercept and coefficients and define $X_i = (1, (X^c_i)^\top)^\top$ with dimension $p = p^* + 1$, which gives $w_{ij} = \gamma_j^\top U_i + \beta_j^\top X_i$. Given $w_{ij}$, the function $p_{ij}$ is some specified probability density (mass) function. 
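As a concrete illustration of model (1), data can be simulated directly; the following minimal sketch (dimensions and parameter values are hypothetical, and a logistic $p_{ij}$ is assumed, as in Remark 1) generates binary responses from the covariate-adjusted factor model:

```python
import numpy as np

rng = np.random.default_rng(0)
n, q, p_star, K = 500, 20, 2, 3              # subjects, items, covariates, factors

X_c = rng.normal(size=(n, p_star))           # observed covariates X^c_i
U = rng.normal(size=(n, K))                  # latent factors U_i
Gamma = rng.uniform(0.5, 1.5, size=(q, K))   # loadings gamma_j
beta0 = rng.normal(size=q)                   # intercepts beta_{j0}
Beta_c = np.zeros((q, p_star))               # covariate (DIF) effects, mostly zero
Beta_c[:3] = rng.normal(size=(3, p_star))    # a few biased items

# w_ij = beta_{j0} + gamma_j' U_i + beta_{jc}' X^c_i, as in (1)
W = beta0[None, :] + U @ Gamma.T + X_c @ Beta_c.T
P = 1.0 / (1.0 + np.exp(-W))                 # logistic p_ij(1 | w_ij)
Y = rng.binomial(1, P)                       # binary item responses
```

The mostly-zero rows of `Beta_c` mirror the sparsity of DIF effects assumed later in Condition 1(ii).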
Here, we consider a general and flexible modeling framework by allowing different types of $p_{ij}$ functions to model diverse response data in wide-ranging applications, such as binary item response data in educational and psychological assessments (Mellenbergh 1994, Reckase 2009) and mixed types of data in educational and macroeconomic applications (Rijmen et al. 2003, Wang 2022); see also Remark 1. A schematic diagram of the proposed model setup is presented in Figure 1. [Figure 1: A schematic diagram of the proposed model in (1), with $X_i \in \mathbb{R}^p$, $U_i \in \mathbb{R}^K$, and $Y_{ij} \in \mathbb{R}$ for $j \in [q]$. The subscript $i$ indicates the $i$th subject, out of $n$ independent subjects. The response variable $Y_{ij}$ can be discrete or continuous.] Our proposed covariate-adjusted generalized factor model in (1) is motivated by applications in testing fairness. In the context of educational assessment, the subject's responses to questions depend on latent factors $U_i$, such as students' abilities and skills, and are potentially affected by observed covariates $X^c_i$, such as age, gender, and race, among others (Linda M. Collins 2009). The intercept $\beta_{j0}$ is often interpreted as the difficulty level of item $j$ and referred to as the difficulty parameter in psychometrics (Hambleton & Swaminathan 2013, Reckase 2009). The capability of item $j$ to further differentiate individuals based on their latent abilities is captured by $\gamma_j = (\gamma_{j1}, \ldots, \gamma_{jK})^\top$, whose entries are also referred to as discrimination parameters (Hambleton & Swaminathan 2013, Reckase 2009). The effects of the observed covariates $X^c_i$ on the subject's response to the $j$th question $Y_{ij}$, conditioned on the latent abilities $U_i$, are captured by $\beta_{jc} = (\beta_{j1}, \ldots, \beta_{jp^*})^\top$, which are referred to as DIF effects in psychometrics (Holland & Wainer 2012). 
This setting gives rise to the fairness problem of validating whether the response probabilities to the measurements differ across different genders, races, or countries of origin while holding abilities and skills at the same level. Given the observed data from $n$ independent subjects, we are interested in studying the relationships between $Y_i$ and $X^c_i$ after adjusting for the latent factors $U_i$ in (1). Specifically, our goal is to test the statistical hypothesis $H_0: \beta_{js} = 0$ versus $H_a: \beta_{js} \neq 0$ for $s \in [p^*]$, where $\beta_{js}$ is the regression coefficient for the $s$th covariate and the $j$th response, after adjusting for the latent factor $U_i$. In many applications, the latent factors and factor loadings also carry important scientific interpretations, such as students' abilities and test items' characteristics. This motivates us to perform statistical inference on the parameters $\beta_{j0}$, $\gamma_j$, and $U_i$ as well. Remark 1. The proposed model setup (1) is general and flexible, as various functions $p_{ij}$ can be used to model diverse types of response data in wide-ranging applications. For instance, in educational assessments, the logistic factor model (Reckase 2009) with $p_{ij}(y \mid w_{ij}) = \exp(w_{ij} y)/\{1 + \exp(w_{ij})\}$, $y \in \{0, 1\}$, and the probit factor model (Birnbaum 1968) with $p_{ij}(y \mid w_{ij}) = \{\Phi(w_{ij})\}^y \{1 - \Phi(w_{ij})\}^{1-y}$, $y \in \{0, 1\}$, where $\Phi(\cdot)$ is the cumulative distribution function of the standard normal distribution, are widely used to model binary responses indicating correct or incorrect answers to test items. Such models are often referred to as item response theory models (Reckase 2009). In economics and finance, linear factor models with $p_{ij}(y \mid w_{ij}) \propto \exp\{-(y - w_{ij})^2/(2\sigma^2)\}$, where $y \in \mathbb{R}$ and $\sigma^2$ is the variance parameter, are commonly used to model continuous responses, such as GDP, interest rates, and consumer indices (Bai 2003, Bai & Li 2012, Stock & Watson 2016). 
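The logistic and probit response functions of Remark 1 can be written down directly; the following is a minimal sketch (function names are ours, not the paper's), using the error function to evaluate the standard normal CDF:

```python
import math

def p_logistic(y, w):
    """Logistic factor model: p(y | w) = exp(w*y) / (1 + exp(w)), y in {0, 1}."""
    return math.exp(w * y) / (1.0 + math.exp(w))

def Phi(w):
    """Standard normal CDF, written via the error function."""
    return 0.5 * (1.0 + math.erf(w / math.sqrt(2.0)))

def p_probit(y, w):
    """Probit factor model: p(y | w) = Phi(w)^y * (1 - Phi(w))^(1 - y)."""
    return Phi(w) ** y * (1.0 - Phi(w)) ** (1 - y)
```

Both define valid Bernoulli masses, since $p(0 \mid w) + p(1 \mid w) = 1$ for any $w$.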
Moreover, depending on the observed responses, different types of functions $p_{ij}$ can be used to model the response from each item $j \in [q]$. Therefore, mixed types of data, which are common in educational measurements (Rijmen et al. 2003) and macroeconomic applications (Wang 2022), can also be analyzed by our proposed model. Remark 2. In addition to testing fairness, the considered model finds wide-ranging applications in the real world. For instance, in genomics, the gene expression status may depend on unmeasured confounders or latent biological factors and also be associated with variables of interest including medical treatment, disease status, and gender (Wang et al. 2017, Du et al. 2023). The covariate-adjusted generalized factor model helps to investigate the effects of the variables of interest on gene expression, controlling for the latent factors (Du et al. 2023). This setting is also applicable to other scenarios, such as brain imaging, where the activity of a brain region may depend on measurable spatial distance from neighboring regions and latent structures due to unmodeled factors (Leek & Storey 2008). To analyze large-scale measurement data, we aim to develop a computationally efficient estimation method and to provide inference theory for quantifying the uncertainty in the estimation. Motivated by recent work in high-dimensional factor analysis, we treat the latent factors as fixed parameters and apply a joint maximum likelihood method for estimation (Bai 2003, Fan et al. 2013, Chen et al. 2020). Specifically, we let the collection of item responses from the $n$ independent subjects be $Y = (Y_1, \ldots, Y_n)^\top \in \mathbb{R}^{n \times q}$ and the design matrix of observed covariates be $X = (X_1, \ldots, X_n)^\top \in \mathbb{R}^{n \times p}$. For the model parameters, the discrimination parameters for all $q$ items are denoted as $\Gamma = (\gamma_1, \ldots, \gamma_q)^\top \in \mathbb{R}^{q \times K}$, while the intercepts and covariate effects for all $q$ items are denoted as $B = (\beta_1, \ldots, \beta_q)^\top \in \mathbb{R}^{q \times p}$. The latent factors from all $n$ subjects are $U = (U_1, \ldots, U_n)^\top \in \mathbb{R}^{n \times K}$. Then, the joint log-likelihood function can be written as follows: $L(Y \mid \Gamma, U, B, X) = (nq)^{-1} \sum_{i=1}^{n} \sum_{j=1}^{q} l_{ij}(\beta_{j0} + \gamma_j^\top U_i + \beta_{jc}^\top X^c_i)$, (2) where $l_{ij}(w_{ij}) = \log p_{ij}(Y_{ij} \mid w_{ij})$ is the individual log-likelihood function with $w_{ij} = \beta_{j0} + \gamma_j^\top U_i + \beta_{jc}^\top X^c_i$. We aim to obtain $(\hat{\Gamma}, \hat{U}, \hat{B})$ by maximizing the joint likelihood function $L(Y \mid \Gamma, U, B, X)$. While the estimators can be computed efficiently by maximizing the joint likelihood function through an alternating maximization algorithm (Collins et al. 2002, Chen et al. 2019), challenges emerge for performing statistical inference on the model parameters. • One challenge concerns model identifiability. Without additional constraints, the covariate effects are not identifiable due to the incorporation of covariates and their potential dependence on the latent factors. The latent factors and factor loadings encounter similar identifiability issues as in traditional factor analysis (Bai & Li 2012, Fan et al. 2013). Ensuring that the model is statistically identifiable is the fundamental prerequisite for achieving model reliability and making valid inferences (Allman et al. 2009, Gu & Xu 2020). • Another challenge arises from the nonlinearity of our proposed model. In the existing literature, most studies focus on statistical inference for our proposed setting in the context of linear models (Bai & Li 2012, Fan et al. 2013, Wang et al. 2017). On the other hand, settings with a general log-likelihood function $l_{ij}(w_{ij})$, including covariate-adjusted logistic and probit factor models, are less investigated. Common techniques for linear models are not applicable to the considered general nonlinear model setting. Motivated by these challenges, we propose interpretable and practical identifiability conditions in Section 3.1. 
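For the logistic case, the individual log-likelihood in (2) has the closed form $l_{ij}(w_{ij}) = w_{ij} Y_{ij} - \log(1 + e^{w_{ij}})$, so the joint log-likelihood can be evaluated in a fully vectorized way. A sketch under that assumption (the function name and array layout are our own):

```python
import numpy as np

def joint_log_likelihood(Y, Gamma, U, B, X):
    """Average joint log-likelihood (2) under a logistic p_ij.
    B stacks (beta_{j0}, beta_{jc}') row-wise and X carries a leading
    column of ones, so X @ B.T reproduces beta_{j0} + beta_{jc}' X^c_i."""
    n, q = Y.shape
    W = U @ Gamma.T + X @ B.T                  # w_ij = gamma_j' U_i + beta_j' X_i
    # Bernoulli log-likelihood: l_ij(w) = w * y - log(1 + exp(w))
    return float(np.sum(W * Y - np.logaddexp(0.0, W)) / (n * q))
```

Using `np.logaddexp` avoids overflow of `exp(w)` for large linear predictors.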
We then incorporate these conditions into the joint-likelihood-based estimation method in Section 3.2. Furthermore, we introduce a novel inference framework for performing statistical inference on $\beta_j$, $\gamma_j$, and $U_i$ in Section 4. 3 Method. 3.1 Model Identifiability. Identifiability issues commonly occur in latent variable models (Allman et al. 2009, Bai & Li 2012, Xu 2017). The proposed model in (1) has two major identifiability issues. The first issue is that the proposed model remains unchanged after certain linear transformations of both $B$ and $U$, causing the covariate effects together with the intercepts, represented by $B$, and the latent factors, denoted by $U$, to be unidentifiable. The second issue is that the model is invariant after an invertible transformation of both $U$ and $\Gamma$, as in linear factor models (Bai & Li 2012, Fan et al. 2013), causing the latent factors $U$ and factor loadings $\Gamma$ to be undetermined. Specifically, under the model setup in (1), we define the joint probability distribution of the responses to be $P(Y \mid \Gamma, U, B, X) = \prod_{i=1}^{n} \prod_{j=1}^{q} p_{ij}(Y_{ij} \mid w_{ij})$. The model parameters are identifiable if and only if for any response $Y$, there does not exist $(\Gamma, U, B) \neq (\tilde{\Gamma}, \tilde{U}, \tilde{B})$ such that $P(Y \mid \Gamma, U, B, X) = P(Y \mid \tilde{\Gamma}, \tilde{U}, \tilde{B}, X)$. The first issue, concerning the identifiability of $B$ and $U$, is that for any $(\Gamma, U, B)$ and any transformation matrix $A$, there exist $\tilde{\Gamma} = \Gamma$, $\tilde{U} = U + XA^\top$, and $\tilde{B} = B - \Gamma A$ such that $P(Y \mid \Gamma, U, B, X) = P(Y \mid \tilde{\Gamma}, \tilde{U}, \tilde{B}, X)$. This identifiability issue leads to the indeterminacy of the covariate effects and latent factors. The second issue is related to the identifiability of $U$ and $\Gamma$. For any $(\tilde{\Gamma}, \tilde{U}, \tilde{B})$ and any invertible matrix $G$, there exist $\bar{\Gamma} = \tilde{\Gamma}(G^\top)^{-1}$, $\bar{U} = \tilde{U}G$, and $\bar{B} = \tilde{B}$ such that $P(Y \mid \tilde{\Gamma}, \tilde{U}, \tilde{B}, X) = P(Y \mid \bar{\Gamma}, \bar{U}, \bar{B}, X)$. 
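This invariance is easy to verify numerically: transforming $(\Gamma, U, B)$ by any $A$ and any invertible $G$ leaves every linear predictor $w_{ij}$, and hence the likelihood, unchanged. A small check with random matrices (dimensions are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(1)
n, q, p, K = 50, 8, 3, 2
X = rng.normal(size=(n, p))
U = rng.normal(size=(n, K))
Gamma = rng.normal(size=(q, K))
B = rng.normal(size=(q, p))

A = rng.normal(size=(K, p))                    # arbitrary transformation matrix
G = rng.normal(size=(K, K)) + 2.0 * np.eye(K)  # invertible (generically)

W = U @ Gamma.T + X @ B.T                      # original linear predictors
Gamma_t = Gamma @ np.linalg.inv(G.T)           # Gamma~ = Gamma (G')^{-1}
U_t = (U + X @ A.T) @ G                        # U~ = (U + X A') G
B_t = B - Gamma @ A                            # B~ = B - Gamma A
W_t = U_t @ Gamma_t.T + X @ B_t.T              # transformed predictors

print(np.allclose(W, W_t))                     # prints True
```

Algebraically, the $XA^\top\Gamma^\top$ terms cancel and $G G^{-1}$ drops out, which is exactly the unidentifiability described above.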
This causes the latent factors and factor loadings to be unidentifiable. Remark 3. Intuitively, the unidentifiable $\tilde{B} = B - \Gamma A$ can be interpreted as including both direct and indirect effects of $X$ on the response $Y$. We take the intercept and covariate effect on the first item ($\tilde{\beta}_1$) as an example and illustrate it in Figure 2. One part of $\tilde{\beta}_1$ is the direct effect of $X$ on $Y$ (the orange line in the left panel), whereas another part of $\tilde{\beta}_1$ may be explained through the latent factors $U$, as the latent factors are unobserved and there are potential correlations between the latent factors and the observed covariates. The latter part of $\tilde{\beta}_1$ can be considered the indirect effect (the blue line in the right panel). [Figure 2: The direct effects (orange solid line in the left panel) and the indirect effects (blue solid line in the right panel) for item 1.] The first identifiability issue is a new challenge introduced by the covariate adjustment in the model, whereas the second issue is common in traditional factor models (Bai & Li 2012, Fan et al. 2013). Considering the two issues together, for any $(\Gamma, U, B)$, $A$, and $G$, there exist transformations $\tilde{\Gamma} = \Gamma(G^\top)^{-1}$, $\tilde{U} = (U + XA^\top)G$, and $\tilde{B} = B - \Gamma A$ such that $P(Y \mid \Gamma, U, B, X) = P(Y \mid \tilde{\Gamma}, \tilde{U}, \tilde{B}, X)$. In the rest of this subsection, we propose identifiability conditions to address these issues. For notational convenience, throughout the rest of the paper, we define $\phi^* = (\Gamma^*, U^*, B^*)$ as the true parameters. 
Identifiability Conditions. As described earlier, the correlation between the design matrix of covariates $X$ and the latent factors $U^*$ results in the identifiability issue of $B^*$. In the psychometrics literature, the intercept $\beta^*_{j0}$ is commonly referred to as the difficulty parameter, while $\beta^*_{jc}$ represents the effects of the observed covariates, namely DIF effects, on the response to item $j$ (Reckase 2009, Holland & Wainer 2012). The different scientific interpretations motivate us to develop different identifiability conditions for $\beta^*_{j0}$ and $\beta^*_{jc}$, respectively. Specifically, we propose a centering condition on $U^*$ to ensure the identifiability of the intercept $\beta^*_{j0}$ for all items $j \in [q]$. On the other hand, to identify the covariate effects $\beta^*_{jc}$, a natural idea is to impose that the covariate effects $\beta^*_{jc}$ for all items $j \in [q]$ be sparse, as in many regularized methods and item purification methods (Candell & Drasgow 1988, Fidalgo et al. 2000, Bauer et al. 2020, Belzak & Bauer 2020). In Chen et al. (2023a), an interpretable identifiability condition is proposed for selecting sparse covariate effects, yet this condition is specific to uni-dimensional covariates. Motivated by Chen et al. (2023a), we propose the following minimal $\ell_1$ condition applicable to the general case where the covariates are multi-dimensional. To better present the identifiability conditions, we write $A = (a_0, a_1, \ldots, a_{p^*}) \in \mathbb{R}^{K \times p}$ and define $A_c = (a_1, \ldots, a_{p^*}) \in \mathbb{R}^{K \times p^*}$ as the part applied to the covariate effects. Condition 1. (i) $\sum_{i=1}^{n} U^*_i = 0_K$. (ii) $\sum_{j=1}^{q} \|\beta^*_{jc}\|_1 < \sum_{j=1}^{q} \|\beta^*_{jc} - A_c^\top \gamma^*_j\|_1$ for any $A_c \neq 0$. 
Condition 1(i) assumes the latent abilities $U^*$ are centered to ensure the identifiability of the intercepts $\beta^*_{j0}$, as is commonly assumed in the item response theory literature (Reckase 2009). Condition 1(ii) is motivated by practical applications. For instance, in educational testing, practitioners need to identify and remove biased test items, that is, items with non-zero covariate effects ($\beta^*_{js} \neq 0$). In practice, most designed items are unbiased, and therefore it is reasonable to assume that the majority of items have no covariate effects, that is, the covariate effects $\beta^*_{jc}$ are sparse (Holland & Wainer 2012, Chen et al. 2023a). Next, we present a necessary and sufficient condition for Condition 1(ii) to hold. Proposition 1. Condition 1(ii) holds if and only if for any $v \in \mathbb{R}^K \setminus \{0_K\}$, $\sum_{j=1}^{q} |v^\top \gamma^*_j| I(\beta^*_{js} = 0) > \sum_{j=1}^{q} \mathrm{sign}(\beta^*_{js})\, v^\top \gamma^*_j\, I(\beta^*_{js} \neq 0)$ for all $s \in [p^*]$. (3) Remark 4. Proposition 1 implies that Condition 1(ii) holds when $\{j : \beta^*_{js} \neq 0\}$ is separated into $\{j : \beta^*_{js} > 0\}$ and $\{j : \beta^*_{js} < 0\}$ in a balanced way. With diversified signs of $\beta^*_{js}$, Proposition 1 holds when a considerable proportion of test items have no covariate effect ($\beta^*_{js} = 0$). For example, when $\gamma^*_j = m 1^{(K)}_k$ with $m > 0$, Condition 1(ii) holds if and only if $\sum_{j=1}^{q} |m|\{-I(\beta^*_{js}/m > 0) + I(\beta^*_{js}/m \leq 0)\} > 0$ and $\sum_{j=1}^{q} |m|\{-I(\beta^*_{js}/m \geq 0) + I(\beta^*_{js}/m < 0)\} < 0$. With slightly more than $q/2$ items corresponding to $\beta^*_{js} = 0$, Condition 1(ii) holds. Moreover, if $\#\{j : \beta^*_{js} > 0\}$ and $\#\{j : \beta^*_{js} < 0\}$ are comparable, then Condition 1(ii) holds even when fewer than $q/2$ items correspond to $\beta^*_{js} = 0$ and more than $q/2$ items correspond to $\beta^*_{js} \neq 0$. 
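Because (3) quantifies over every direction $v$, it cannot be verified exhaustively, but a randomized check over many sampled directions can refute it or lend support. The sketch below is our own construction (not from the paper) and illustrates the balanced-sign intuition of Remark 4:

```python
import numpy as np

def check_prop1_randomized(Gamma, beta_s, n_dirs=2000, seed=0):
    """Randomized check of inequality (3) for a single covariate s:
    sum_j |v' g_j| 1(b_js = 0) > sum_j sign(b_js) (v' g_j) 1(b_js != 0).
    A failure refutes Condition 1(ii); passing every sampled v only
    supports it, since the condition quantifies over all v != 0."""
    rng = np.random.default_rng(seed)
    zero = beta_s == 0
    for _ in range(n_dirs):
        v = rng.normal(size=Gamma.shape[1])
        lhs = np.sum(np.abs(Gamma[zero] @ v))
        rhs = np.sum(np.sign(beta_s[~zero]) * (Gamma[~zero] @ v))
        if lhs <= rhs:
            return False
    return True

# Balanced signs and mostly-zero effects: the favorable case of Remark 4.
Gamma_ex = np.ones((10, 2))
beta_ex = np.array([0.8, -0.8, 0, 0, 0, 0, 0, 0, 0, 0.0])
print(check_prop1_randomized(Gamma_ex, beta_ex))      # prints True

# All effects non-zero with a common sign: the inequality fails fast.
print(check_prop1_randomized(Gamma_ex, np.ones(10)))  # prints False
```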
Though it assumes a \u201csparse\u201d structure, our assumption differs from the existing high-dimensional literature. In high-dimensional regression models, the coefficient vector obtained when regressing the dependent variable on high-dimensional covariates is often assumed to be sparse, with the proportion of non-zero coefficients asymptotically approaching zero. In our setting, Condition 1(ii) allows for relatively dense settings where the proportion of items with non-zero covariate effects is a positive constant. To perform simultaneous estimation and inference on $\Gamma^*$ and $U^*$, we consider the following identifiability conditions to address the second identifiability issue. Condition 2. (i) $(U^*)^\top U^*$ is diagonal. (ii) $(\Gamma^*)^\top \Gamma^*$ is diagonal. (iii) $n^{-1}(U^*)^\top U^* = q^{-1}(\Gamma^*)^\top \Gamma^*$. Condition 2 is a set of widely used identifiability conditions in the factor analysis literature (Bai 2003, Bai & Li 2012, Wang 2022). For practical and theoretical benefits, we impose Condition 2 to address the identifiability issue related to $G$. It is worth mentioning that this condition can be replaced by other identifiability conditions. For true parameters satisfying any identifiability condition, we can always find a transformation such that the transformed parameters satisfy our proposed Conditions 1-2, and the proposed estimation method and theoretical results in the subsequent sections still apply, up to such a transformation. 3.2 Joint Maximum Likelihood Estimation. In this section, we introduce a joint-likelihood-based method for estimating the covariate effects $B$, the latent factors $U$, and the factor loadings $\Gamma$ simultaneously. Incorporating Conditions 1-2 into the estimation procedure, we obtain maximum joint-likelihood-based estimators for $\phi^* = (\Gamma^*, U^*, B^*)$ that satisfy the proposed identifiability conditions. 
With Condition 1, we address the identifiability issue related to the transformation matrix $A$. Specifically, for any parameters $\phi = (\Gamma, U, B)$, there exists a matrix $A^* = (a^*_0, A^*_c)$ with $A^*_c = \mathrm{argmin}_{A_c \in \mathbb{R}^{K \times p^*}} \sum_{j=1}^{q} \|\beta_{jc} - A_c^\top \gamma_j\|_1$ and $a^*_0 = -n^{-1} \sum_{i=1}^{n} (U_i + A^*_c X^c_i)$ such that the transformed matrices $U^* = U + X(A^*)^\top$ and $B^* = B - \Gamma A^*$ satisfy Condition 1. This transformation idea naturally leads to the following estimation methodology for $B^*$. To estimate $B^*$ and $U^*$ satisfying Condition 1, we first obtain the maximum likelihood estimator $\hat{\phi} = (\hat{\Gamma}, \hat{U}, \hat{B})$ by $\hat{\phi} = \mathrm{argmin}_{\phi \in \Omega_\phi} -L(Y \mid \phi, X)$, (4) where the parameter space is $\Omega_\phi = \{\phi : \|\phi\|_{\max} \leq C\}$ for some large $C$. To solve (4), we employ an alternating minimization algorithm. Specifically, for steps $t = 0, 1, \ldots$, we compute $(\hat{\Gamma}^{(t+1)}, \hat{B}^{(t+1)}) = \mathrm{argmin}_{\Gamma \in \mathbb{R}^{q \times K},\, B \in \mathbb{R}^{q \times p}} -L(Y \mid \Gamma, \hat{U}^{(t)}, B, X)$ and $\hat{U}^{(t+1)} = \mathrm{argmin}_{U \in \mathbb{R}^{n \times K}} -L(Y \mid \hat{\Gamma}^{(t+1)}, U, \hat{B}^{(t+1)}, X)$, until the quantity $\max\{\|\hat{\Gamma}^{(t+1)} - \hat{\Gamma}^{(t)}\|_F, \|\hat{U}^{(t+1)} - \hat{U}^{(t)}\|_F, \|\hat{B}^{(t+1)} - \hat{B}^{(t)}\|_F\}$ falls below some pre-specified convergence tolerance. We then estimate $A_c$ by minimizing the $\ell_1$ norm: $\hat{A}_c = \mathrm{argmin}_{A_c \in \mathbb{R}^{K \times p^*}} \sum_{j=1}^{q} \|\hat{\beta}_{jc} - A_c^\top \hat{\gamma}_j\|_1$. (5) Next, we estimate $\hat{a}_0 = -n^{-1} \sum_{i=1}^{n} (\hat{U}_i + \hat{A}_c X^c_i)$ and let $\hat{A} = (\hat{a}_0, \hat{A}_c)$. Given the estimators $\hat{A}$, $\hat{\Gamma}$, and $\hat{B}$, we then construct $\hat{B}^* = \hat{B} - \hat{\Gamma}\hat{A}$ and $\tilde{U} = \hat{U} + X\hat{A}^\top$ such that Condition 1 holds. Recall that Condition 2 addresses the identifiability issue related to the invertible matrix $G$. 
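The $\ell_1$ minimization in (5) decouples across covariates: each column of $A_c$ solves a least-absolute-deviations (LAD) regression of the estimated effects $\{\hat{\beta}_{js}\}_j$ on the estimated loadings $\{\hat{\gamma}_j\}_j$. A sketch using iteratively reweighted least squares (the paper states only the $\ell_1$ objective; the solver choice here is ours):

```python
import numpy as np

def lad_fit(G, b, n_iter=100, eps=1e-8):
    """Least-absolute-deviations fit min_a sum_j |b_j - G[j] @ a|,
    solved by iteratively reweighted least squares."""
    a = np.zeros(G.shape[1])
    for _ in range(n_iter):
        w = 1.0 / np.sqrt(np.abs(b - G @ a) + eps)   # IRLS weights
        a_new, *_ = np.linalg.lstsq(w[:, None] * G, w * b, rcond=None)
        if np.allclose(a, a_new, atol=1e-9):
            return a_new
        a = a_new
    return a

def estimate_Ac(Beta_c, Gamma):
    """Solve (5): the objective sum_j ||beta_jc - Ac' gamma_j||_1
    decouples into one LAD regression per covariate column."""
    return np.column_stack([lad_fit(Gamma, Beta_c[:, s])
                            for s in range(Beta_c.shape[1])])
```

When most rows of `Beta_c` are zero, as Condition 1(ii) envisions, the LAD fit passes through the zero items and the recovered $\hat{A}_c$ is close to zero.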
Specifically, for any parameters (Γ, U), there exists a matrix G∗ such that Condition 2 holds for U∗ = (U + X(A∗)⊤)G∗ and Γ∗ = Γ(G∗)^{−⊤}. Let 𝒰 = diag(ϱ_1, . . . , ϱ_K) be the diagonal matrix containing the K eigenvalues of (nq)^{-1}(Γ⊤Γ)^{1/2}(U + XA⊤)⊤(U + XA⊤)(Γ⊤Γ)^{1/2}, and let V be the matrix of corresponding eigenvectors. We set G∗ = (q^{-1}Γ⊤Γ)^{1/2} V 𝒰^{−1/4}. To further estimate Γ∗ and U∗, we need an estimator of the invertible matrix G∗. Given the maximum likelihood estimators obtained in (4) and Â in (5), we estimate G∗ via Ĝ = (q^{-1}Γ̂⊤Γ̂)^{1/2} V̂ 𝒰̂^{−1/4}, where 𝒰̂ and V̂ contain the eigenvalues and eigenvectors of (nq)^{-1}(Γ̂⊤Γ̂)^{1/2}(Û + XÂ⊤)⊤(Û + XÂ⊤)(Γ̂⊤Γ̂)^{1/2}, respectively. With Ĝ and Â, we obtain the following transformed estimators that satisfy Condition 2: Γ̂∗ = Γ̂(Ĝ⊤)^{-1} and Û∗ = (Û + XÂ⊤)Ĝ. To quantify the uncertainty of the proposed estimators, we will show that they are asymptotically normally distributed. Specifically, in Theorem 2 of Section 4, we establish the asymptotic normality of β̂∗_j, which allows us to make inference on the covariate effects β∗_j. Moreover, as the latent factors U∗_i and factor loadings γ∗_j often carry important interpretations in the domain sciences, we are also interested in inference on U∗_i and γ∗_j. In Theorem 2, we also derive the asymptotic distributions of the estimators Û∗_i and γ̂∗_j, providing inference results for the parameters U∗_i and γ∗_j.
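The eigendecomposition construction of Ĝ above can be checked numerically. Below is a small sketch (our own code, with illustrative inputs standing in for Û + XÂ⊤ and Γ̂) that builds G from the stated eigendecomposition and verifies that the transformed quantities satisfy Condition 2.

```python
import numpy as np

def sym_sqrt(S):
    """Symmetric positive-definite square root via eigendecomposition."""
    w, V = np.linalg.eigh(S)
    return V @ np.diag(np.sqrt(w)) @ V.T

rng = np.random.default_rng(1)
n, q, K = 500, 40, 2
# Ut plays the role of U_hat + X A_hat^T; its columns are deliberately correlated.
Ut = rng.normal(size=(n, K)) @ np.array([[1.0, 0.3], [0.0, 1.0]])
Gamma = rng.normal(size=(q, K))

Sg = Gamma.T @ Gamma / q                    # q^{-1} Gamma^T Gamma
Su = Ut.T @ Ut / n                          # n^{-1} Ut^T Ut
Sg_half = sym_sqrt(Sg)
# eigenvalues/eigenvectors of (nq)^{-1} (Gamma^T Gamma)^{1/2} Ut^T Ut (Gamma^T Gamma)^{1/2}
evals, V = np.linalg.eigh(Sg_half @ Su @ Sg_half)
G = Sg_half @ V @ np.diag(evals ** -0.25)   # G = (q^{-1} Gamma^T Gamma)^{1/2} V D^{-1/4}

U_star = Ut @ G
Gamma_star = Gamma @ np.linalg.inv(G).T

Cu = U_star.T @ U_star / n                  # should be diagonal (Condition 2(i))
Cg = Gamma_star.T @ Gamma_star / q          # should be diagonal and equal to Cu
```

A short calculation confirms why this works: writing D for the diagonal eigenvalue matrix, both n^{-1}(U∗)⊤U∗ and q^{-1}(Γ∗)⊤Γ∗ reduce to D^{1/2}, so Conditions 2(i)–(iii) hold simultaneously.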
4 Theoretical Results

We propose a novel framework to establish estimation consistency and asymptotic normality for the joint-likelihood-based estimators ϕ̂∗ = (Γ̂∗, Û∗, B̂∗) proposed in Section 3. To establish these results, we impose the following regularity assumptions.

Assumption 1. There exist constants M > 0 and κ > 0 such that: (i) Σ∗_u = lim_{n→∞} n^{-1}(U∗)⊤U∗ exists and is positive definite, and ‖U∗_i‖_2 ≤ M for i ∈ [n]. (ii) Σ∗_γ = lim_{q→∞} q^{-1}(Γ∗)⊤Γ∗ exists and is positive definite, and ‖γ∗_j‖_2 ≤ M for j ∈ [q]. (iii) Σ_x = lim_{n→∞} n^{-1} Σ_{i=1}^n X_i X_i⊤ exists with 1/κ² ≤ λ_min(Σ_x) ≤ λ_max(Σ_x) ≤ κ², and max_i ‖X_i‖_∞ ≤ M for i ∈ [n]. (iv) Σ∗_{ux} = lim_{n→∞} n^{-1} Σ_{i=1}^n U∗_i X_i⊤ exists and ‖Σ∗_{ux} Σ_x^{-1}‖_{1,∞} ≤ M. The eigenvalues of (Σ∗_u − Σ∗_{ux} Σ_x^{-1} (Σ∗_{ux})⊤)Σ∗_γ are distinct.

Assumption 1 is commonly used in the factor analysis literature. In particular, Assumptions 1(i)–(ii) correspond to Assumptions A–B in Bai (2003) under linear factor models, ensuring the compactness of the parameter space for U∗ and Γ∗. Under nonlinear factor models, such compactness conditions are also commonly assumed (Wang 2022, Chen et al. 2023b). Assumption 1(iii) is a standard regularity condition for the nonlinear setting, needed to establish the concentration of the gradient and the estimation error of the model parameters when p diverges.
In addition, Assumption 1(iv) is a crucial identification condition; similar conditions have been imposed in the existing literature, such as Assumption G in Bai (2003) in the context of linear factor models and Assumption 6 in Wang (2022) in the context of nonlinear factor models without covariates.

Assumption 2. For any i ∈ [n] and j ∈ [q], assume that l_ij(·) is three times differentiable, and denote the first, second, and third order derivatives of l_ij(w_ij) with respect to w_ij by l′_ij(w_ij), l′′_ij(w_ij), and l′′′_ij(w_ij), respectively. There exist M > 0 and ξ ≥ 4 such that E(|l′_ij(w_ij)|^ξ) ≤ M and |l′_ij(w_ij)| is sub-exponential with ‖l′_ij(w_ij)‖_{ψ1} ≤ M. Furthermore, we assume E{l′_ij(w∗_ij)} = 0. Within a compact space of w_ij, we have b_L ≤ −l′′_ij(w_ij) ≤ b_U and |l′′′_ij(w_ij)| ≤ b_U for some b_U > b_L > 0.

Assumption 2 imposes smoothness on the log-likelihood function l_ij(w_ij). In particular, it assumes sub-exponential tails and finite fourth moments for the first order derivatives l′_ij(w_ij). For commonly used linear or nonlinear factor models, the assumption is not restrictive and can be satisfied with a large ξ. For instance, consider the logistic model with l′_ij(w_ij) = Y_ij − exp(w_ij)/{1 + exp(w_ij)}; then |l′_ij(w_ij)| ≤ 1 and ξ can be taken as ∞. The boundedness conditions on l′′_ij(w_ij) and l′′′_ij(w_ij) are necessary to guarantee the convexity of the joint likelihood function. In the special case of linear factor models, l′′_ij(w_ij) is a constant and the boundedness conditions naturally hold. For popular nonlinear models such as logistic factor models, probit factor models, and Poisson factor models, the boundedness of l′′_ij(w_ij) and l′′′_ij(w_ij) can also be easily verified.

Assumption 3.
For the ξ specified in Assumption 2 and a sufficiently small ϵ > 0, we assume that, as n, q, p → ∞,

p (nq)^{ϵ+3/ξ} / √(n ∧ (pq)) → 0.   (6)

Assumption 3 is needed to ensure that the derivative of the likelihood function equals zero at the maximum likelihood estimator with high probability, a key property in the theoretical analysis. In particular, we need the estimation errors of all model parameters to converge to 0 uniformly with high probability. Such uniform convergence results involve a delicate analysis of the convexity of the objective function, for which technically we need Assumption 3. For most of the popularly used generalized factor models, ξ can be taken as any large value as discussed above; thus (nq)^{ϵ+3/ξ} is of smaller order than √(n ∧ (pq)), given small ϵ. Specifically, Assumption 3 implies p = o(n^{1/2} ∧ q) up to a small order term, an asymptotic regime that is reasonable for many educational assessments. Next, we impose additional assumptions crucial to establishing the theoretical properties of the proposed estimators. One challenge for the theoretical analysis is to handle the dependence between the latent factors U∗ and the design matrix X. To address this challenge, we employ the following transformed U⁰, orthogonal to X, which plays an important role in establishing the theoretical results (see Supplementary Materials for details). In particular, for i ∈ [n], we let U⁰_i = (G‡)⊤(U∗_i − A‡X_i). Here G‡ = (q^{-1}(Γ∗)⊤Γ∗)^{1/2} V∗(𝒰∗)^{−1/4} and A‡ = (U∗)⊤X(X⊤X)^{-1}, where 𝒰∗ = diag(ϱ∗_1, . . . , ϱ∗_K) with diagonal elements being the K eigenvalues of (nq)^{-1}((Γ∗)⊤Γ∗)^{1/2}(U∗)⊤(I_n − P_x)U∗((Γ∗)⊤Γ∗)^{1/2}, with P_x = X(X⊤X)^{-1}X⊤ and V∗ containing the matrix of corresponding eigenvectors.
Under this transformation for U⁰_i, we further define γ⁰_j = (G‡)^{-1}γ∗_j and β⁰_j = β∗_j + (A‡)⊤γ∗_j for j ∈ [q], and write Z⁰_i = ((U⁰_i)⊤, X_i⊤)⊤ and w⁰_ij = (γ⁰_j)⊤U⁰_i + (β⁰_j)⊤X_i. These transformed parameters γ⁰_j, U⁰_i, and β⁰_j give the same joint likelihood value as the true parameters γ∗_j, U∗_i, and β∗_j, which facilitates our theoretical understanding of the joint-likelihood-based estimators.

Assumption 4. (i) For any j ∈ [q], −n^{-1} Σ_{i=1}^n l′′_ij(w⁰_ij)Z⁰_i(Z⁰_i)⊤ →_p Ψ⁰_{jz} for some positive definite matrix Ψ⁰_{jz}, and n^{-1/2} Σ_{i=1}^n l′_ij(w⁰_ij)Z⁰_i →_d N(0, Ω⁰_{jz}). (ii) For any i ∈ [n], −q^{-1} Σ_{j=1}^q l′′_ij(w⁰_ij)γ⁰_j(γ⁰_j)⊤ →_p Ψ⁰_{iγ} for some positive definite matrix Ψ⁰_{iγ}, and q^{-1/2} Σ_{j=1}^q l′_ij(w⁰_ij)γ⁰_j →_d N(0, Ω⁰_{iγ}).

Assumption 4 is a generalization of Assumptions F(3)–(4) in Bai (2003) for linear models to the nonlinear setting. Specifically, we need Assumption 4(i) to derive the asymptotic distributions of the estimators β̂∗_j and γ̂∗_j, and Assumption 4(ii) is used to establish the asymptotic distribution of Û∗_i. Note that these assumptions are imposed on the log-likelihood derivatives evaluated at the true parameters w⁰_ij, Z⁰_i, and γ⁰_j. In general, for the popular generalized factor models, such assumptions hold under mild conditions. For example, under linear models, l′_ij(w_ij) is the random error and l′′_ij(w_ij) is a constant; then Ψ⁰_{jz} and Ψ⁰_{iγ} naturally exist and are positive definite, following Assumption 1.
The limiting distributions of n^{-1/2} Σ_{i=1}^n l′_ij(w⁰_ij)Z⁰_i and q^{-1/2} Σ_{j=1}^q l′_ij(w⁰_ij)γ⁰_j can be derived by the central limit theorem under standard regularity conditions. Under logistic and probit models, l′_ij(w_ij) and l′′_ij(w_ij) are both finite inside a compact parameter space, and similar arguments can be applied to show the validity of Assumption 4. We present the following assumption to establish the theoretical properties of the transformed matrix Â as defined in (5). In particular, we define A⁰ = (G‡)⊤A‡ and write A⁰ = (a⁰_0, . . . , a⁰_{p∗})⊤. Note that the estimation problem in (5) is related to the median regression problem with measurement errors. To understand the properties of this estimator, following the existing M-estimation literature (He & Shao 1996, 2000), we define ψ⁰_{js}(a) = γ⁰_j sign{β⁰_{js} + (γ⁰_j)⊤(a − a⁰_s)} and χ_s(a) = Σ_{j=1}^q ψ⁰_{js}(a) for j ∈ [q] and s ∈ [p∗]. We further define a perturbed version of ψ⁰_{js}(a), denoted ψ_{js}(a, δ_{js}), as follows:

ψ_{js}(a, δ_{js}) = (γ⁰_j + [δ_{js}/√n]_{[1:K]}) sign{β⁰_{js} + [δ_{js}/√n]_{K+1} − (γ⁰_j + [δ_{js}/√n]_{[1:K]})⊤(a − a⁰_s)},  s ∈ [p∗],

where the perturbation

δ_{js} = [ I_K, 0 ; 0, (1^{(p)}_s)⊤ ] ( −Σ_{i=1}^n l′′_ij(w⁰_ij)Z⁰_i(Z⁰_i)⊤ )^{-1} ( √n Σ_{i=1}^n l′_ij(w⁰_ij)Z⁰_i )

is asymptotically normally distributed by Assumption 4. We define χ̂_s(a) = Σ_{j=1}^q E ψ_{js}(a, δ_{js}).

Assumption 5. For χ_s(a), we assume that there exists some constant c > 0 such that min_{a≠0} |q^{-1}χ_s(a)| > c holds for all s ∈ [p∗]. Assume there exists a_{s0} for each s ∈ [p∗] such that χ̂_s(a_{s0}) = 0 with p√n‖α_{s0}‖ → 0.
In a neighbourhood of α_{s0}, χ̂_s(a) has a nonsingular derivative such that {q^{-1}∇_a χ̂_s(α_{s0})}^{-1} = O(1) and q^{-1}|∇_a χ̂_s(a) − ∇_a χ̂_s(α_{s0})| ≤ k|a − α_{s0}|. We assume ι_{nq,p} := max{‖α_{s0}‖, q^{-1} Σ_{j=1}^q ψ_{js}(a_{s0}, δ_{js})} = o((p√n)^{-1}).

Assumption 5 is crucial in addressing the theoretical difficulties of establishing consistent estimation for A⁰, a challenging problem related to median regression with weakly dependent measurement errors. In Assumption 5, we treat the minimizer of |Σ_{j=1}^q ψ_{js}(a, δ_{js})| as an M-estimator and adopt the Bahadur representation results in He & Shao (1996) for the theoretical analysis. For an ideal case where the δ_{js} are independent and normally distributed with finite variances, which corresponds to the setting of median regression with measurement errors (He & Liang 2000), these assumptions can be easily verified. Assumption 5 goes beyond this ideal case and covers general settings. In addition to independent and Gaussian measurement errors, this condition also accommodates the case when the δ_{js} are asymptotically normal and weakly dependent with finite variances, as implied by Assumption 4 and the conditional independence of Y_ij. We want to emphasize that Assumption 5 allows for both sparse and dense settings of the covariate effects. Consider an example with K = p = 1 and γ_j = 1 for j ∈ [q]. Suppose β∗_{js} is zero for all j ∈ [q1] and nonzero otherwise. Then this condition is satisfied as long as #{j : β∗_{js} > 0} and #{j : β∗_{js} < 0} are comparable, even when the sparsity level q1 is small. Under the proposed assumptions, we next present our main theoretical results.

Theorem 1 (Average Consistency). Suppose the true parameters ϕ∗ = (Γ∗, U∗, B∗) satisfy identifiability Conditions 1–2.
Under Assumptions 1–5, we have

q^{-1}‖B̂∗ − B∗‖_F² = O_p( p² log(qp)/n + p log(n)/q );   (7)

if we further assume p^{3/2}(nq)^{ϵ+3/ξ}(p^{1/2}n^{-1/2} + q^{-1/2}) = o(1), then we have

n^{-1}‖Û∗ − U∗‖_F² = O_p( p log(qp)/n + log(n)/q );   (8)
q^{-1}‖Γ̂∗ − Γ∗‖_F² = O_p( p log(qp)/n + log(n)/q ).   (9)

Theorem 1 presents the average convergence rates of ϕ̂∗. In an oracle case with U∗ and Γ∗ known, the estimation of B∗ reduces to an M-estimation problem. For M-estimators under general parametric models, it can be shown that the optimal convergence rate in squared ℓ2-norm is O_p(p/n) under p(log p)³/n → 0 (He & Shao 2000). In terms of our average convergence rate for B̂∗, the first term in (7), n^{-1}p² log(qp), approximately matches the rate O_p(p/n) up to a relatively small order term of p log(qp). The second term in (7), q^{-1}p log n, is mainly due to the estimation error for the latent factors U∗. In educational applications, it is common to assume that the number of subjects n is much larger than the number of items q. Under such a practical setting with n ≫ q and p relatively small, the term q^{-1} log n dominates the derived convergence rate of Û∗ in (8), which matches the optimal convergence rate O_p(q^{-1}) for factor models without covariates (Bai & Li 2012, Wang 2022) up to a small order term.

Remark 5. The additional condition p^{3/2}(nq)^{ϵ+3/ξ}(p^{1/2}n^{-1/2} + q^{-1/2}) = o(1) in Theorem 1 is used to handle the challenges related to the invertible matrix G, which affects the theoretical properties of Û∗ and Γ̂∗. It is needed for establishing the estimation consistency of Û∗ and Γ̂∗ but not for that of B̂∗.
With sufficiently large ξ and small ϵ, this assumption is approximately p = o(n^{1/4} ∧ q^{1/3}) up to a small order term.

Remark 6. One challenge in establishing the estimation consistency of ϕ̂∗ arises from the unrestricted dependence structure between U∗ and X. If we consider the ideal case where the columns of U∗ and X are orthogonal, i.e., (U∗)⊤X = 0_{K×p}, then we can achieve comparable or superior convergence rates with less stringent assumptions. Specifically, with Assumptions 1–3 only, we can obtain the same convergence rates for Û∗ and Γ̂∗ as in (8) and (9), respectively. Moreover, with Assumptions 1–3, the average convergence rate for the consistent estimator of B∗ is O_p(n^{-1}p log(qp) + q^{-1} log n), which is tighter than (7) by a factor of p.

With the estimation consistency results established, we next derive the asymptotic normal distributions of the estimators, which enable us to perform statistical inference on the true parameters.

Theorem 2 (Asymptotic Normality). Suppose the true parameters ϕ∗ = (Γ∗, U∗, B∗) satisfy identifiability Conditions 1–2. Under Assumptions 1–5, we have the following asymptotic distributions. Denote ζ^{-2}_{nq,p} = n^{-1}p log(qp) + q^{-1} log n. If p^{3/2}√n(nq)^{3/ξ}ζ^{-2}_{nq,p} → 0, then for any j ∈ [q] and a ∈ R^p with ‖a‖_2 = 1,

√n a⊤(Σ∗_{β,j})^{-1/2}(β̂∗_j − β∗_j) →_d N(0, 1),   (10)

where Σ∗_{β,j} = (−(A⁰)⊤, I_p)(Ψ⁰_{jz})^{-1}Ω⁰_{jz}(Ψ⁰_{jz})^{-1}(−(A⁰)⊤, I_p)⊤, and for any j ∈ [q],

√n (Σ∗_{γ,j})^{-1/2}(γ̂∗_j − γ∗_j) →_d N(0, I_K),   (11)

where Σ∗_{γ,j} = G‡(I_K, 0)(Ψ⁰_{jz})^{-1}Ω⁰_{jz}(Ψ⁰_{jz})^{-1}(I_K, 0)⊤(G‡)⊤.
Furthermore, for any i ∈ [n], if q = O(n) and p^{3/2}√q(nq)^{3/ξ}ζ^{-2}_{nq,p} → 0,

√q (Σ∗_{u,i})^{-1/2}(Û∗_i − U∗_i) →_d N(0, I_K),   (12)

where Σ∗_{u,i} = (G‡)^{−⊤}(Ψ⁰_{iγ})^{-1}Ω⁰_{iγ}(Ψ⁰_{iγ})^{-1}(G‡)^{-1}.

The asymptotic covariance matrices in Theorem 2 can be consistently estimated. Due to space limitations, we defer the construction of the consistent estimators Σ̂∗_{β,j}, Σ̂∗_{γ,j}, and Σ̂∗_{u,i} to the Supplementary Materials. Theorem 2 provides the asymptotic distributions of all individual estimators. In particular, with the asymptotic distributions and the consistent estimators Σ̂∗_{β,j} of the asymptotic covariance matrices, we can perform hypothesis testing on β∗_{js} for j ∈ [q] and s ∈ [p∗]. We reject the null hypothesis β∗_{js} = 0 at significance level α if |√n(σ̂∗_{β,js})^{-1}β̂∗_{js}| > Φ^{-1}(1 − α/2), where (σ̂∗_{β,js})² is the (s + 1)-th diagonal entry of Σ̂∗_{β,j}.

For the asymptotic normality of β̂∗_j, the condition p^{3/2}√n(nq)^{3/ξ}(n^{-1}p log(qp) + q^{-1} log n) → 0, together with Assumption 3, gives p = o{n^{1/5} ∧ (q²/n)^{1/3}} up to a small order term, and further implies n ≪ q², which is consistent with established conditions in the existing factor analysis literature (Bai & Li 2012, Wang 2022). For the asymptotic normality of Û∗_i, the additional condition q = O(n) is a reasonable assumption in educational applications, where the number of items q is much smaller than the number of subjects n. In this case, the scaling conditions imply p = o{q^{1/3} ∧ (n²/q)^{1/5}} up to a small order term. Similarly, for the asymptotic normality of γ̂∗_j, the proposed conditions give p = o{n^{1/5} ∧ (q²/n)^{1/3}} up to a small order term.
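The rejection rule above is a standard Wald-type z-test once β̂∗_{js} and σ̂∗_{β,js} are in hand. A minimal sketch (the inputs below are illustrative numbers, not outputs of the full estimation procedure):

```python
import math

def wald_test_beta(beta_hat_js, sigma_hat_js, n, alpha=0.05):
    """Two-sided test of H0: beta*_js = 0 based on the asymptotic normality in
    Theorem 2: reject when |sqrt(n) * beta_hat / sigma_hat| > Phi^{-1}(1 - alpha/2)."""
    z = math.sqrt(n) * beta_hat_js / sigma_hat_js
    # two-sided p-value from the standard normal CDF, Phi(x) = (1 + erf(x/sqrt(2)))/2
    p_value = 2.0 * (1.0 - 0.5 * (1.0 + math.erf(abs(z) / math.sqrt(2.0))))
    critical = 1.959963984540054            # Phi^{-1}(0.975) for alpha = 0.05
    return z, p_value, abs(z) > critical

# A strong effect is flagged; a negligible one is not.
z1, p1, reject1 = wald_test_beta(beta_hat_js=0.30, sigma_hat_js=1.2, n=1000)
z2, p2, reject2 = wald_test_beta(beta_hat_js=0.01, sigma_hat_js=1.2, n=1000)
```

In the data application of Section 6, many such tests are run (one per item and covariate), so in practice the level α would be adjusted, e.g., by a Bonferroni correction as done there.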
Remark 7. Similar to the discussion in Remark 6, the challenges arising from the unrestricted dependence between U∗ and X also affect the derivation of the asymptotic distributions of the proposed estimators. If we consider the ideal case with (U∗)⊤X = 0_{K×p}, we can establish the asymptotic normality of all individual estimators under Assumptions 1–4 only and weaker scaling conditions. Specifically, when (U∗)⊤X = 0_{K×p}, the scaling condition becomes p√n(nq)^{3/ξ}(n^{-1}p log(qp) + q^{-1} log n) → 0 for deriving the asymptotic normality of β̂∗_j and γ̂∗_j, which is milder than that for (10) and (11).

5 Simulation Study

In this section, we study the finite-sample performance of the proposed joint-likelihood-based estimator. We focus on the logistic latent factor model in (1) with p_ij(y | w_ij) = exp(w_ij y)/{1 + exp(w_ij)}, where w_ij = (γ∗_j)⊤U∗_i + (β∗_j)⊤X_i. The logistic latent factor model is commonly used in the context of educational assessment and is also referred to as the item response theory model (Mellenbergh 1994, Hambleton & Swaminathan 2013). We apply the proposed method to estimate B∗ and to perform statistical inference on testing the null hypothesis β∗_{js} = 0. We start by presenting the data generating process. We set the number of subjects n ∈ {300, 500, 1000, 1500, 2000}, the number of items q ∈ {100, 300, 500}, the covariate dimension p ∈ {5, 10, 30}, and the factor dimension K = 2. We jointly generate X^c_i and U∗_i from N(0, Σ), where Σ_ij = τ^{|i−j|} with τ ∈ {0, 0.2, 0.5, 0.7}. In addition, we set the loading matrix Γ∗_{[,k]} = 1^{(K)}_k ⊗ v_k, where ⊗ is the Kronecker product and v_k is a (q/K)-dimensional vector with each entry generated independently and identically from Unif[0.5, 1.5].
For the covariate effects B∗, we set the intercept terms β∗_{j0} = 0. For the remaining entries of B∗, we consider the following two settings: (1) sparse setting: β∗_{js} = ρ for s = 1, . . . , p and j = 5s−4, . . . , 5s, with all other β∗_{js} set to zero; (2) dense setting: β∗_{js} = ρ for s = 1, . . . , p and j = R_s q/5 + 1, . . . , (R_s + 1)q/5 with R_s = s − 5⌊s/5⌋, with all other β∗_{js} set to zero. Here, the signal strength is set as ρ ∈ {0.3, 0.5}. Intuitively, in the sparse setting 5 items are biased for each covariate, whereas in the dense setting 20% of the items are biased for each covariate. For better empirical stability, after reaching convergence in the proposed alternating minimization algorithm and transforming the obtained MLEs into ones that satisfy Conditions 1–2, we repeat another round of optimization and transformation. We take the significance level at 5% and calculate the average type I error over all entries with β∗_{js} = 0 and the average power over all nonzero entries, based on 100 replications. The averaged hypothesis testing results are presented in Figures 3–6 for p = 5 and p = 30 across different settings. Additional numerical results for p = 10 are presented in the Supplementary Materials.
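The data-generating process above can be sketched as follows. This is our reading of the simulation design; the function name and defaults are illustrative.

```python
import numpy as np

def simulate_design(n=300, q=100, p=5, K=2, tau=0.5, rho=0.3, dense=False, seed=0):
    """Sketch of the Section 5 design: logistic responses with correlated (X, U*)."""
    rng = np.random.default_rng(seed)
    # (X_i, U*_i) jointly N(0, Sigma) with Sigma_ij = tau^{|i-j|}
    d = p + K
    idx = np.arange(d)
    Sigma = tau ** np.abs(np.subtract.outer(idx, idx))
    Z = rng.multivariate_normal(np.zeros(d), Sigma, size=n)
    X, U = Z[:, :p], Z[:, p:]
    # loadings: Gamma[:, k] supported on the k-th block of q/K items, Unif[0.5, 1.5]
    Gamma = np.zeros((q, K))
    m = q // K
    for k in range(K):
        Gamma[k * m:(k + 1) * m, k] = rng.uniform(0.5, 1.5, m)
    # sparse: 5 biased items per covariate (j = 5s-4, ..., 5s);
    # dense: q/5 biased items per covariate (j = R_s q/5 + 1, ..., (R_s+1) q/5)
    B = np.zeros((q, p))
    for s in range(1, p + 1):
        if dense:
            Rs = s - 5 * (s // 5)
            rows = np.arange(Rs * q // 5, (Rs + 1) * q // 5)
        else:
            rows = np.arange(5 * (s - 1), 5 * s)
        B[rows, s - 1] = rho
    W = U @ Gamma.T + X @ B.T
    Y = (rng.random((n, q)) < 1.0 / (1.0 + np.exp(-W))).astype(int)
    return Y, X, U, Gamma, B

Y, X, U, Gamma, B = simulate_design()                 # sparse setting
Yd, Xd, Ud, Gd, Bd = simulate_design(dense=True)      # dense setting
```

With the defaults (q = 100, p = 5), the sparse setting yields 25 nonzero covariate effects and the dense setting 100, matching the 5-items-versus-20%-of-items description above.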
[Figure 3 omitted: six panels plotting power/type I error against n ∈ {300, 500, 1000, 1500, 2000} for q ∈ {100, 300, 500} and ρ ∈ {0.3, 0.5}.]
Figure 3: Powers and type I errors under the sparse setting at p = 5. Red circles denote correlation parameter τ = 0, green triangles τ = 0.2, blue squares τ = 0.5, and purple crosses τ = 0.7.

[Figure 4 omitted: same panel layout as Figure 3.]
Figure 4: Powers and type I errors under the sparse setting at p = 30. Red circles denote correlation parameter τ = 0, green triangles τ = 0.2, blue squares τ = 0.5, and purple crosses τ = 0.7.
[Figure 5 omitted: same panel layout as Figure 3.]
Figure 5: Powers and type I errors under the dense setting at p = 5. Red circles denote correlation parameter τ = 0, green triangles τ = 0.2, blue squares τ = 0.5, and purple crosses τ = 0.7.

[Figure 6 omitted: same panel layout as Figure 3.]
Figure 6: Powers and type I errors under the dense setting at p = 30. Red circles denote correlation parameter τ = 0, green triangles τ = 0.2, blue squares τ = 0.5, and purple crosses τ = 0.7.

From Figures 3–6, we observe that the type I errors are well controlled at the significance level 5%, which is consistent with the asymptotic properties of B̂∗ in Theorem 2.
Moreover, the power increases to one as the sample size n increases across all of the settings we consider. Comparing the left panels (ρ = 0.3) to the right panels (ρ = 0.5) in Figures 3–6, we see that the power increases with the signal strength ρ. Comparing the plots in Figures 3–4 to the corresponding plots in Figures 5–6, we see that the powers under the sparse setting (Figures 3–4) are generally higher than those under the dense setting (Figures 5–6). Nonetheless, our proposed method is generally stable under both sparse and dense settings. In addition, we observe similar results when we increase the covariate dimension p from p = 5 (Figures 3 and 5) to p = 30 (Figures 4 and 6); we refer the reader to the Supplementary Materials for additional numerical results for p = 10. Moreover, we observe similar results when we increase the test length q from q = 100 (top row) to q = 500 (bottom row) in Figures 3–6. In terms of the correlation between X and U∗, we observe that while the power converges to one as we increase the sample size, the power decreases as the correlation τ increases.

6 Data Application

We apply our proposed method to analyze the Programme for International Student Assessment (PISA) 2018 data². PISA is a worldwide testing program that compares the academic performance of 15-year-old students across many countries (OECD 2019). More than 600,000 students from 79 countries/economies, representing a population of 31 million 15-year-olds, participated in this program. PISA 2018 used a computer-based assessment mode, and the assessment lasted two hours for each student, with test items mainly evaluating students' proficiency in the mathematics, reading, and science domains. A total of 930 minutes of test items were used, and each student took a different combination of the test items.
In addition to the assessment questions, background questionnaires were provided to collect students' information. (²The data can be downloaded from: https://www.oecd.org/pisa/data/2018database/)

In this study, we focus on the PISA 2018 data from Taipei. The observed responses are binary, indicating whether students' responses to the test items are correct, and we use the popular item response theory model with the logit link (i.e., the logistic latent factor model; Reckase 2009). Due to the block design of the large-scale assessment, each student was assigned only a subset of the test items, and for the Taipei data, 86% of the response matrix is unobserved. Note that this missingness can be considered conditionally independent of the responses given the students' characteristics. Our proposed method and inference results naturally accommodate such missing data and can be directly applied. Specifically, to accommodate the incomplete responses, we modify the joint log-likelihood function in (2) into L_obs(Y | Γ, U, B, X) = Σ_{i=1}^n Σ_{j∈Q_i} l_ij(γ_j⊤U_i + β_j⊤X_i), where Q_i denotes the set of questions for which the responses from student i are observed. In this study, we include gender and 8 variables for school strata as covariates (p∗ = 9). These variables record whether the school is public, in an urban place, etc. After data preprocessing, we have n = 6063 students and q = 194 questions. Following the existing literature (Reckase 2009, Millsap 2012), we take K = 3 to interpret the three latent abilities measured by the math, reading, and science questions. We apply the proposed method to estimate the effects of the gender and school strata variables on students' responses. We obtain the estimators of the gender effect for each PISA question and construct the corresponding 95% confidence intervals. The constructed 95% confidence intervals for the gender coefficients are presented in Figure 7.
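The observed-entry modification L_obs can be sketched with a 0/1 mask encoding the sets Q_i. The sketch below uses simulated inputs (all names illustrative); only entries with mask = 1 contribute to the likelihood, mirroring the block design.

```python
import numpy as np

def loglik_observed(Y, mask, Gamma, U, B, X):
    """Joint log-likelihood summed over observed entries only (logit link);
    mask[i, j] = 1 encodes j in Q_i, i.e., student i saw question j."""
    W = U @ Gamma.T + X @ B.T
    ll = Y * W - np.logaddexp(0.0, W)      # l_ij(w_ij) for every (i, j)
    return np.sum(ll * mask)

rng = np.random.default_rng(2)
n, q, p, K = 50, 20, 2, 3
X, U = rng.normal(size=(n, p)), rng.normal(size=(n, K))
Gamma, B = rng.normal(size=(q, K)), rng.normal(size=(q, p)) * 0.2
Y = (rng.random((n, q)) < 0.5).astype(float)
# roughly 86% missing, as in the Taipei data
mask = (rng.random((n, q)) < 0.14).astype(float)

full = loglik_observed(Y, np.ones((n, q)), Gamma, U, B, X)
obs = loglik_observed(Y, mask, Gamma, U, B, X)
```

Since every term l_ij is negative under the logit link, masking out unobserved entries makes the observed-data log-likelihood larger than the (hypothetical) complete-data one; the estimation algorithm of Section 3.2 applies unchanged with L replaced by L_obs.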
There are 10 questions highlighted in red, as their estimated gender effects are statistically significant after the Bonferroni correction. Among the reading items, there is only one significant item, and the corresponding confidence interval is below zero, indicating that this question is biased towards female test-takers, conditioning on the students' latent abilities. Most of the confidence intervals corresponding to the biased items in the math and science sections are above zero, indicating that these questions are biased towards male test-takers. In social science research, it is documented that female students typically score better than male students on reading tests, while male students often outperform female students on math and science tests (Quinn & Cooc 2015, Balart & Oosterveen 2019). Our results indicate that there may exist potential measurement biases contributing to such an observed gender gap in educational testing. Our proposed method offers a useful tool to identify such biased test items, thereby contributing to enhancing testing fairness by providing practitioners with valuable information for item calibration.

[Figure 7 omitted: interval plot of the gender effect estimator across the 194 PISA questions, grouped into math, reading, and science sections.]
Figure 7: Confidence intervals for the effect of the gender covariate on each PISA question using the Taipei data. Red intervals correspond to confidence intervals for questions with significant gender bias after Bonferroni correction. (For illustration purposes, we omit the confidence intervals with upper bounds exceeding 6 or lower bounds below −6 in this figure.)

To further illustrate the estimation results, Table 1 lists the p-values for testing the gender effect for each of the 10 identified significant questions, along with the proportions of female and male test-takers who answered each question correctly.
We can see that the signs of the estimated gender effects by our proposed method align with the disparities in the reported proportions between females and males. For example, the estimated gender effect corresponding to the item "CM496Q01S Cash Withdrawal" is positive with a p-value of 2.77×10^{-7}, implying that this question is statistically significantly biased towards male test-takers. This is consistent with the observation in Table 1 that 58.44% of male students correctly answered this question, which exceeds the proportion of females, 51.29%.

Table 1: Proportion of full credit in females and males for significant items of PISA 2018 in Taipei. (+) and (−) denote the items with positively and negatively estimated gender effects, respectively.

Item code | Item Title | Female (%) | Male (%) | p-value
Mathematics
CM496Q01S | Cash Withdrawal | 51.29 | 58.44 | 2.77×10^{-7} (+)
CM800Q01S | Computer Games | 96.63 | 93.61 | < 1×10^{-8} (−)
Reading
CR466Q06S | Work Right | 91.91 | 86.02 | 1.95×10^{-5} (−)
Science
CS608Q01S | Ammonoids | 57.68 | 68.15 | 4.65×10^{-5} (+)
CS643Q01S | Comparing Light Bulbs | 68.57 | 73.41 | 1.08×10^{-5} (+)
CS643Q02S | Comparing Light Bulbs | 63.00 | 57.50 | 4.64×10^{-4} (−)
CS657Q03S | Invasive Species | 46.00 | 54.36 | 8.47×10^{-5} (+)
CS527Q04S | Extinction of Dinosaurs | 36.19 | 50.18 | 8.13×10^{-5} (+)
CS648Q02S | Habitable Zone | 41.69 | 45.19 | 1.34×10^{-4} (+)
CS607Q01S | Birds and Caterpillars | 88.14 | 91.47 | 1.99×10^{-4} (+)

Besides gender effects, we estimate the effects of school strata on the students' responses and present the point and interval estimation results in the left panel of Figure 8. All the detected biased questions are from the math and science sections, with 6 questions showing significant effects of whether the school is public and 5 questions for whether it is located in a rural area.
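The Female (%) and Male (%) columns of Table 1 are simply per-item proportions of correct answers among the test-takers who were assigned that item. A small simulated sketch of that computation (the data here are synthetic, with rates loosely echoing the Cash Withdrawal row; nothing below is the paper's actual data):

```python
import numpy as np

# Simulate one item's responses with block-design missingness and a
# gender gap in the correct-answer rate (hypothetical values).
rng = np.random.default_rng(1)
n = 1000
female = rng.random(n) < 0.5                 # gender indicator per student
p_correct = np.where(female, 0.51, 0.58)     # true rates by group
y = (rng.random(n) < p_correct).astype(float)
y[rng.random(n) < 0.86] = np.nan             # ~86% of responses unassigned

def pct_correct(y, mask):
    """Percent correct among the observed responses in a subgroup."""
    return 100 * np.nanmean(y[mask])

female_pct = pct_correct(y, female)
male_pct = pct_correct(y, ~female)
```

Comparing such raw proportions to the sign of the estimated covariate effect, as the paragraph above does, is a useful sanity check, though only the model-based estimate controls for the latent abilities.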
To further investigate the importance of controlling for the latent ability factors, we compare the results from our proposed method, which includes the latent factors, to the results from directly regressing responses on covariates without latent factors. From the right panel of Figure 8, we can see that without conditioning on the latent factors, there are excessive items detected for the covariate indicating whether the school is public or private. On the other hand, no biased items are detected if we only apply generalized linear regression to estimate the effect of the covariate indicating whether the school is in a rural area.

Figure 8: Confidence intervals for the effect of each school stratum covariate on each PISA question (left panels: the proposed method, for the public-school and rural-region covariates; right panels: the same effects estimated without latent variables). Red intervals correspond to confidence intervals for questions with significant school stratum bias after the Bonferroni correction.

7 Discussion

In this work, we study the covariate-adjusted generalized factor model that has wide interdisciplinary applications such as educational assessments and psychological measurements. In particular, new identifiability issues arise due to the incorporation of covariates in the model setup. To address these issues and identify the model parameters, we propose novel and interpretable conditions, which are crucial for developing the estimation approach and inference results. With model identifiability guaranteed, we propose a computationally efficient joint-likelihood-based estimation method for the model parameters.
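The naive comparator described above, regressing one item's responses on covariates without any latent factors, is an ordinary logistic regression. A self-contained sketch using Newton's method (a generic implementation for illustration, not the paper's code; all names and the simulated covariate are hypothetical):

```python
import numpy as np

def logistic_fit(X, y, iters=25):
    """Plain logistic regression via Newton's method: the comparator
    that regresses an item's 0/1 responses on covariates alone,
    ignoring the latent abilities U_i."""
    n, p = X.shape
    beta = np.zeros(p)
    for _ in range(iters):
        mu = 1.0 / (1.0 + np.exp(-X @ beta))       # fitted probabilities
        W = mu * (1.0 - mu)                        # IRLS weights
        H = X.T @ (W[:, None] * X) + 1e-8 * np.eye(p)
        beta += np.linalg.solve(H, X.T @ (y - mu)) # Newton step
    return beta

# Simulated example: intercept plus a binary public-school dummy.
rng = np.random.default_rng(2)
n = 2000
X = np.column_stack([np.ones(n), rng.integers(0, 2, n)])
true_beta = np.array([-0.5, 0.8])
y = (rng.random(n) < 1.0 / (1.0 + np.exp(-(X @ true_beta)))).astype(float)
beta_hat = logistic_fit(X, y)
```

When responses actually depend on latent abilities correlated with the covariate, this latent-factor-free fit absorbs ability differences into the covariate coefficient, which is one mechanism behind the excessive detections seen in the right panel of Figure 8.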
Theoretically, we obtain estimation consistency and asymptotic normality results for not only the covariate effects but also the latent factors and factor loadings. There are several future directions motivated by the proposed method. In this manuscript, we focus on the case in which p grows at a slower rate than the number of subjects n and the number of items q, a common setting in educational assessments. It would be interesting to further develop estimation and inference results under the high-dimensional setting in which p is larger than n and q. Moreover, in this manuscript, we assume that the dimension of the latent factors K is fixed and known. One possible generalization is to allow K to grow with n and q. Intuitively, an increasing latent dimension K makes the identifiability and inference issues more challenging due to the increasing degrees of freedom of the transformation matrix. Given the theoretical results in this work, another interesting related problem is to develop simultaneous inference on group-wise covariate coefficients, which we leave for future investigation."
}