|
|
"main_content": "The source of distribution shift can be isolated to components of the joint distribution. One special case of distribution shift is covariate shift [Shimodaira, 2000, Zadrozny, 2004, Huang et al., 2006, Gretton et al., 2009, Sugiyama et al., 2007, Bickel et al., 2009, Chen et al., 2016, Schneider et al., 2020], where only the covariate distribution P(X) changes across domains. Ben-David et al. [2009] give upper-bounds on target error based on the H-divergence between the source and target covariate distributions, which motivates domain alignment methods like the Domain Adversarial Neural Networks [Ganin et al., 2016] and others [Long et al., 2015, Blanchard et al., 2017]. Others have followed up on this work with other notions of covariate distance for domain adaptation, such as mean maximum discrepancy (MMD) [Long et al., 2016], Wasserstein distance [Courty et al., 2017], etc. However, Kpotufe and Martinet [2018] show that these divergence metrics fail to capture many important properties of transferability, such as asymmetry and non-overlapping support. Furthermore, Zhao et al. [2019] shows that even with the alignment of covariates, large distances between label distributions can inhibit transfer; they propose a label conditional importance weighting adjustment to address this limitation. Other works have also proposed conditional covariate alignment [des Combes et al., 2020, Li et al., 2018c,b]. Another form of distribution shift is label shift, where only the label distribution changes across domains. Lipton et al. [2018] propose a method to address this scenario. Schrouff et al. [2022] illustrate that many real-world problems exhibit more complex \u2019compound\u2019 shifts than just covariate or label shifts alone. One can leverage domain adaptation to address distribution shifts; however, these methods are contingent on having access to unlabeled or partially labeled samples from the target domain during training. When such samples are available, more sophisticated domain adaptation strategies aim to leverage and adapt spurious feature information to enhance performance [Liu et al., 2021, Zhang et al., 2021, Kirichenko et al., 2022]. 2 However, domain generalization, as a problem, does not assume access to such samples [Muandet et al., 2013]. To address the domain generalization problem, Invariant Causal Predictors (ICP) leverage shared causal structure to learn domain-general predictors [Peters et al., 2016]. Previous works, enumerated in the introduction (Section 1), have proposed various algorithms to identify domain-general predictors. Arjovsky et al. [2019]\u2019s proposed invariance risk minimization (IRM) and its variants motivated by domain invariance: min w,\u03a6 1 |Etr| X e\u2208Etr Re(w \u25e6\u03a6) s.t. w \u2208argmin e w Re( e w \u00b7 \u03a6), \u2200e \u2208Etr, where Re(w \u25e6\u03a6) = E \u0002 \u2113(y, w \u00b7 \u03a6(x)) \u0003 , with loss function \u2113, feature extractor \u03a6, and linear predictor w. This objective aims to learn a representation \u03a6 such that predictor w that minimizes empirical risks on average across all domains also minimizes within-domain empirical risk for all domains. However, Rosenfeld et al. [2020], Ahuja et al. [2020] showed that this objective requires unreasonable constraints on the number of observed domains at train times, e.g., observing distinct domains on the order of the rank of spurious features. 
Follow-up works have attempted to address these limitations with stronger constraints on the problem, as enumerated in the introduction. Our method falls under domain generalization; however, unlike the domain-general solutions previously discussed, our proposed solution leverages conditions other than domain invariance directly, which we show may be better suited to learning domain-general representations.

3 Causality and Domain Generalization

We often represent causal relationships with a causal graph. A causal graph is a directed acyclic graph (DAG), G = (V, E), with nodes V representing random variables and directed edges E representing causal relationships, i.e., parents are causes and children are effects. A structural equation model (SEM) provides a mathematical representation of the causal relationships in its corresponding DAG. Each variable Y ∈ V is given by Y = f_Y(X) + ε_Y, where X denotes the parents of Y in G, f_Y is a deterministic function, and ε_Y is an error capturing exogenous influences on Y. The main property we need here is that f_Y is invariant to interventions on V \ {Y} and is consequently invariant to changes in P(V) induced by these interventions. Interventions refer to changes to f_Z, Z ∈ V \ {Y}.

In this work, we focus on domain-general predictors that are linear functions of features with domain-general mechanisms, denoted g_dg := w ∘ Φ_dg, where w is a linear predictor and Φ_dg identifies features with domain-general mechanisms. We use "domain-general" rather than "domain-invariant" since domain invariance is strongly tied to the property Y ⊥⊥ e | Z_dg [Arjovsky et al., 2019]. As shown in the subsequent sections, this work leverages other properties of appropriate causal graphs to obtain domain-general features. This distinction is crucial given the challenges associated with learning domain-general features through domain-invariance methods [Rosenfeld et al., 2020].

Given the presence of a distribution shift, it is essential to identify some common structure across domains that can be utilized for out-of-distribution (OOD) generalization. For example, Shimodaira [2000] assumes P(Y | X) is shared across all domains in the covariate shift problem. In this work, we consider a setting where each domain is composed of observed features and labels, X ∈ 𝒳, Y ∈ 𝒴, where X is given by an invertible function Γ of two latent random variables: the domain-general Z_dg ∈ 𝒵_dg and the spurious Z_spu ∈ 𝒵_spu. By construction, the conditional expectation of the label Y given the domain-general features Z_dg is the same across domains, i.e.,

$$\mathbb{E}_{e_i}[Y \mid Z_{dg} = z_{dg}] = \mathbb{E}_{e_j}[Y \mid Z_{dg} = z_{dg}] \qquad (1)$$

for all z_dg ∈ 𝒵_dg and all e_i ≠ e_j ∈ ℰ. Conversely, this robustness to e does not necessarily extend to the spurious features Z_spu; in other words, Z_spu may take values that lead a predictor relying on it to arbitrarily high error rates. A sound strategy for learning a domain-general predictor, one that is robust to distribution shift, is therefore to identify the latent domain-general Z_dg from the observed features X.

Figure 1: Partial Ancestral Graph over {e, Z_dg, Z_spu, Y, X} representing all non-trivial and valid generative processes (DAGs); dashed edges indicate that an edge may or may not exist.
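As a concrete illustration of Equation 1 (a toy example, not the paper's experimental setup), consider a linear SEM in which the mechanism Z_dg → Y is fixed while the noise scale of the spurious mechanism varies with the domain. The regression of Y on Z_dg is then stable across domains, whereas the regression of Y on Z_spu shifts; all values below are illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_domain(sigma_e, n=100_000):
    """Toy SEM Z_dg -> Y -> Z_spu; only the spurious noise scale sigma_e changes across domains."""
    z_dg = rng.normal(0.0, 1.0, n)              # domain-general latent
    y = 1.5 * z_dg + rng.normal(0.0, 0.25, n)   # invariant mechanism Z_dg -> Y
    z_spu = y + rng.normal(0.0, sigma_e, n)     # spurious latent; its noise varies with e
    return z_dg, z_spu, y

for sigma_e in [0.1, 2.0]:
    z_dg, z_spu, y = sample_domain(sigma_e)
    # Least-squares slope of Y on each latent (a proxy for E[Y | Z = z] under linearity).
    slope_dg = np.polyfit(z_dg, y, 1)[0]
    slope_spu = np.polyfit(z_spu, y, 1)[0]
    print(f"sigma_e={sigma_e}: slope on Z_dg={slope_dg:.2f}, slope on Z_spu={slope_spu:.2f}")
# The slope on Z_dg stays near 1.5 in both domains; the slope on Z_spu changes with sigma_e.
```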
The approach we take is motivated by the Reichenbach Common Cause Principle, which states that if two events are correlated, there is either a causal connection between them that is responsible for the correlation, or there is a third event, a so-called (Reichenbachian) common cause, which brings about the correlation [Hitchcock and Rédei, 2021, Rédei, 2002]. This principle allows us to posit the class of generative processes, or causal mechanisms, that give rise to the correlated observed features and labels, where the observed features are a function of domain-general and spurious features. We represent these generative processes as causal graphs. Importantly, the mapping from a node's causal parents to itself is preserved in all distributions generated by the causal graph (Equation 1), and distributions can otherwise vary arbitrarily so long as they preserve the conditional independencies implied by the DAG (Markov property [Pearl, 2010]). We now enumerate the DAGs that give rise to observed features with spurious correlations with the label.

Valid DAGs. We consider generative processes where both latent features, Z_spu and Z_dg, and the observed X are correlated with Y, and the observed X is a function of only Z_dg and Z_spu (Figure 1). Given this setup, there is an enumerable set of valid generative processes. Such processes (i) contain no cycles, (ii) are feature complete, i.e., include the edges Z_dg → X ← Z_spu, and (iii) have the observed features mediate domain influence, i.e., there is no direct domain influence on the label (e ↛ Y). We discuss this enumeration in detail in Appendix B. The result of our analysis is a representative set of DAGs describing valid generative processes; these DAGs come from orienting the partial ancestral graph (PAG) in Figure 1. Comparing the conditional independencies implied by the DAGs obtained from Figure 1, as illustrated in Figure 2, yields three canonical DAGs from the literature (see Appendix B for further discussion). Other DAGs that induce spurious correlations are outside the scope of this work.

Figure 2: Generative processes. Graphical models depicting the structure of possible data-generating processes; shaded nodes indicate observed variables. (a) Causal [Arjovsky et al., 2019], (b) Anticausal [Rosenfeld et al., 2020], (c) Fully Informative Causal [Ahuja et al., 2021]. X represents the observed features, Y represents the observed targets, and e represents domain influences (domain indexes in practice). There is an explicit separation of domain-general Z_dg and domain-specific Z_spu features; they are combined to generate the observed X. Dashed edges indicate the possibility of an edge.

Conditional independencies implied by the identified DAGs (Figure 2).

Table 1: Generative Processes and Sufficient Conditions for Domain-Generality

  Graphs in Figure 2              (a)   (b)   (c)
  Z_dg ⊥⊥ Z_spu | {Y, e}          ✓     ✓     ✗
  Identifying Z_dg is necessary   ✓     ✓     ✗

Fig. 2a: Z_dg ⊥⊥ Z_spu | {Y, e}; Y ⊥⊥ e | Z_dg. This causal graphical model implies that the mapping from Z_dg to its causal child Y is preserved, and consequently Equation 1 holds [Pearl, 2010, Peters et al., 2016]. As an example, consider the task of predicting the spread of a disease. Features may include causes (vaccination rate and public health policies) and effects (coughing).
Here e is the time of year; the distribution of coughing changes depending on the season.

Fig. 2b: Z_dg ⊥⊥ Z_spu | {Y, e}; Z_dg ⊥⊥ Z_spu | Y; Y ⊥⊥ e | Z_dg; Z_dg ⊥⊥ e. This causal graphical model does not directly imply that the mapping Z_dg → Y is preserved across domains. However, in this work, it represents the setting where the inverse of the causal direction is preserved (the inverse being Z_dg → Y), and thus Equation 1 holds. A context where this setting is relevant is healthcare, where medical conditions (Y) cause symptoms (Z_dg), but the prediction task is to predict conditions from symptoms; this mapping Z_dg → Y, opposite to the causal direction, is preserved across distributions. Again, we may consider e to be the time of year; the distribution of coughing changes depending on the season.

Fig. 2c: Y ⊥⊥ e | Z_dg; Z_dg ⊥⊥ e. Similar to Figure 2a, this causal graphical model implies that the mapping from Z_dg to its causal child Y is preserved, so Equation 1 holds [Pearl, 2010, Peters et al., 2016]. This setting is especially interesting because it represents a Fully Informative Invariant Features setting, that is, Z_spu ⊥⊥ Y | Z_dg [Ahuja et al., 2021]. Said differently, Z_spu does not induce a backdoor path from e to Y that Z_dg does not block. As an example, consider the task of predicting hospital readmission rates. Features may include the severity of illness, which is a direct cause of readmission, and the length of stay, which is also caused by the severity of illness. However, length of stay may not be a cause of readmission; the correlation between the two would result from the confounding effect of a common cause, illness severity. Here e is an indicator for distinct hospitals.

We call the condition Y ⊥⊥ e | Z_dg the domain invariance property. This condition is common to all the DAGs in Figure 2. We call the condition Z_dg ⊥⊥ Z_spu | {Y, e} the target conditioned representation independence (TCRI) property. This condition is common to the DAGs in Figures 2a and 2b. In the settings considered in this work, the TCRI property is equivalently Z_dg ⊥⊥ Z_spu | Y for all e ∈ ℰ, since e simply indexes the set of empirical distributions available at training.

Domain generalization with conditional independencies. Kaur et al. [2022] showed that sufficiently regularizing for the correct conditional independencies described by the appropriate DAG can give domain-general solutions, i.e., identifies Z_dg. In practice, however, the latent features are not observed (even partially), so one cannot regularize for these independencies directly. Other works have also highlighted the need to consider generative processes when designing algorithms robust to distribution shift [Veitch et al., 2021, Makar et al., 2022]. However, previous work has largely focused on regularizing for the domain invariance property, ignoring the conditional independence property Z_dg ⊥⊥ Z_spu | {Y, e}.

Sufficiency of ERM under Fully Informative Invariant Features. Despite the known challenges of learning domain-general features from the domain-invariance property in practice, this approach persists, likely because it is the only property shared across all of the DAGs. We alleviate this constraint by observing that the graph in Fig. 2c falls under what Ahuja et al.
[2021] refer to as the fully informative invariant features (FIIF) setting, meaning that Z_spu is redundant, having only information about Y that is already in Z_dg. Ahuja et al. [2021] show that the empirical risk minimizer is domain-general for bounded features.

Easy vs. hard DAGs imply the generality of TCRI. Consequently, we categorize the generative processes into easy and hard cases (Table 1): (i) easy, meaning that minimizing average risk gives domain-general solutions, i.e., ERM is sufficient (Fig. 2c), and (ii) hard, meaning that one needs to identify Z_dg to obtain domain-general solutions (Figs. 2a-2b). We show empirically that regularizing for Z_dg ⊥⊥ Z_spu | Y for all e ∈ ℰ also gives a domain-general solution in the easy case. The generality of TCRI follows from its sufficiency for identifying the domain-general Z_dg in the hard cases while still giving domain-general solutions empirically in the easy case.

4 Proposed Learning Framework

We have now clarified that the hard DAGs (i.e., those not solved by ERM) share the TCRI property. The challenge is that Z_dg and Z_spu are not independently observed; otherwise, one could regularize for the property directly. Existing work such as Kaur et al. [2022] empirically studies semi-synthetic datasets where Z_spu is (partially) observed and directly learns Z_dg by regularizing that Φ(X) ⊥⊥ Z_spu | {Y, e} for a feature extractor Φ. To our knowledge, we are the first to leverage the TCRI property without requiring observation of Z_spu. Next, we set up our approach with some key assumptions. The first is that the observed distributions are Markov with respect to an appropriate DAG.

Assumption 4.1. All distributions, sources and targets, are generated by one of the following structural causal models (SCMs):

$$\text{causal:}\quad \mathrm{SCM}(e) := \begin{cases} Z_{dg}^{(e)} \sim P_{Z_{dg}}^{(e)}, \\ Y^{(e)} \leftarrow \langle w_{dg}^{*}, Z_{dg}^{(e)} \rangle + \eta_Y, \\ Z_{spu}^{(e)} \leftarrow \langle w_{spu}^{*}, Y \rangle + \eta_{Z_{spu}}^{(e)}, \\ X \leftarrow \Gamma(Z_{dg}, Z_{spu}), \end{cases} \qquad (2)$$

$$\text{anticausal:}\quad \mathrm{SCM}(e) := \begin{cases} Y^{(e)} \sim P_Y, \\ Z_{dg}^{(e)} \leftarrow \langle \tilde{w}_{dg}, Y \rangle + \eta_{Z_{dg}}^{(e)}, \\ Z_{spu}^{(e)} \leftarrow \langle w_{spu}^{*}, Y \rangle + \eta_{Z_{spu}}^{(e)}, \\ X \leftarrow \Gamma(Z_{dg}, Z_{spu}), \end{cases} \qquad (3)$$

$$\text{FIIF:}\quad \mathrm{SCM}(e) := \begin{cases} Z_{dg}^{(e)} \sim P_{Z_{dg}}^{(e)}, \\ Y^{(e)} \leftarrow \langle w_{dg}^{*}, Z_{dg}^{(e)} \rangle + \eta_Y, \\ Z_{spu}^{(e)} \leftarrow \langle w_{spu}^{*}, Z_{dg} \rangle + \eta_{Z_{spu}}^{(e)}, \\ X \leftarrow \Gamma(Z_{dg}, Z_{spu}), \end{cases} \qquad (4)$$

where P_{Z_dg} is the causal covariate distribution, the w's are linear generative mechanisms, the η's are exogenous independent noise variables, and Γ : 𝒵_dg × 𝒵_spu → 𝒳 is an invertible function. It follows from having causal mechanisms that we can learn a predictor w*_dg for Z_dg that is domain-general (Equations 2-4); w*_dg inverts the mapping w̃_dg in the anticausal case. These structural causal models (Equations 2-4) correspond to the causal graphs in Figures 2a-2c, respectively.

Assumption 4.2 (Structural). Causal graphs and their distributions are Markov and faithful [Pearl, 2010].

Given Assumption 4.2, we aim to leverage the TCRI property (Z_dg ⊥⊥ Z_spu | Y for all e ∈ ℰ_tr) to learn the latent Z_dg without observing Z_spu directly. We do this by learning two feature extractors that, together, recover Z_dg and Z_spu and satisfy TCRI (Figure 3). We formally define these properties as follows.
Definition 4.3 (Total Information Criterion (TIC)). Φ = Φ_dg ⊕ Φ_spu satisfies TIC with respect to random variables X, Y, e if, for Φ(X^e) = [Φ_dg(X^e); Φ_spu(X^e)], there exists a linear operator T such that T(Φ(X^e)) = [Z^e_dg; Z^e_spu] for all e ∈ ℰ_tr.

Figure 3: Modeling approach. The input X^e is mapped by Φ_dg and Φ_spu to estimates of Z_dg and Z_spu; θ_c predicts ŷ_c from the domain-general features, and a per-domain θ_e predicts ŷ_e from the concatenation of both. During training, both representations, Φ_dg and Φ_spu, generate domain-general and domain-specific predictions, respectively. However, only the domain-general representation/prediction is used during testing, indicated by the solid red arrows in the figure.

In other words, a feature extractor that satisfies the total information criterion recovers the complete latent feature set Z_dg, Z_spu. This allows us to define the proposed implementation of the TCRI property non-trivially: conditional independence of arbitrary subsets of the latents need not have the same implications for domain generalization. We note that X ⊥⊥ Y | {Z_dg, Z_spu}, so X has no information about Y that is not in Z_dg, Z_spu.

Definition 4.4 (Target Conditioned Representation Independence). Φ = Φ_dg ⊕ Φ_spu satisfies TCRI with respect to random variables X, Y, e if Φ_dg(X) ⊥⊥ Φ_spu(X) | Y for all e ∈ ℰ.

Proposition 4.5. Assume that Φ_dg(X) and Φ_spu(X) are correlated with Y. Given Assumptions 4.1-4.2 and a representation Φ = Φ_dg ⊕ Φ_spu that satisfies TIC, Φ_dg(X) = Z_dg ⟺ Φ satisfies TCRI (see Appendix C for the proof).

Proposition 4.5 shows that TCRI is necessary and sufficient to identify Z_dg from a set of training domains. We note that we can verify whether Φ_dg(X) and Φ_spu(X) are correlated with Y by checking whether the learned predictors perform at chance level. Next, we describe our proposed algorithm for implementing the conditions needed to learn such a feature map. Figure 3 illustrates the learning framework.

Learning Objective: The first term in our proposed objective is L_{Φ_dg} = R^e(θ_c ∘ Φ_dg), where Φ_dg : 𝒳 → R^m is a feature extractor, θ_c : R^m → 𝒴 is a linear predictor, and R^e(θ_c ∘ Φ_dg) = E[ℓ(y, θ_c · Φ_dg(x))] is the empirical risk achieved by the feature extractor and predictor pair on samples from domain e. Φ_dg and θ_c are designed to capture the domain-general portion of the framework. Next, to implement the total information criterion, we use another feature extractor Φ_spu : 𝒳 → R^o, designed to capture the domain-specific information in X that is not captured by Φ_dg. Together, we have Φ = Φ_dg ⊕ Φ_spu, and Φ has a domain-specific predictor θ_e : R^{m+o} → 𝒴 for each training domain, allowing the feature extractor to utilize domain-specific information to learn distinct optimal domain-specific (non-general) predictors: L_Φ = R^e(θ_e ∘ Φ). L_Φ aims to ensure that Φ_dg and Φ_spu together capture all of the information about Y in X, i.e., the total information criterion. Since we do not know o and m, we set them to the same size in our experiments; o and m could be treated as hyperparameters, though we do not treat them as such. Finally, we implement the TCRI property (Definition 4.4). We denote by L_TCRI a conditional independence penalty between Φ_dg and Φ_spu.
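To make the structure of the objective concrete before specifying L_TCRI, the following is a minimal sketch of one training step (PyTorch-style; module names, dimensions, and the ci_penalty placeholder are illustrative assumptions, not the released implementation). The ci_penalty argument stands in for the conditional independence penalty L_TCRI specified next.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class TCRIModel(nn.Module):
    """Two feature extractors: Phi_dg feeds a shared classifier theta_c;
    the concatenation [Phi_dg; Phi_spu] feeds one classifier theta_e per training domain."""
    def __init__(self, backbone_dg, backbone_spu, m, o, n_classes, n_domains):
        super().__init__()
        self.phi_dg, self.phi_spu = backbone_dg, backbone_spu
        self.theta_c = nn.Linear(m, n_classes)                        # domain-general head
        self.theta_e = nn.ModuleList(
            [nn.Linear(m + o, n_classes) for _ in range(n_domains)]   # per-domain heads
        )

    def forward(self, x, domain_idx):
        z_dg, z_spu = self.phi_dg(x), self.phi_spu(x)
        y_dg = self.theta_c(z_dg)                                     # used at test time
        y_e = self.theta_e[domain_idx](torch.cat([z_dg, z_spu], dim=1))
        return y_dg, y_e, z_dg, z_spu

def training_step(model, minibatches, beta, ci_penalty):
    """minibatches: one (x, y) pair per training domain; ci_penalty(z_dg, z_spu, y)
    is a label-conditioned independence penalty between the two representations."""
    loss = 0.0
    for e, (x, y) in enumerate(minibatches):
        y_dg, y_e, z_dg, z_spu = model(x, e)
        loss = loss + F.cross_entropy(y_dg, y)            # L_{Phi_dg}: domain-general risk
        loss = loss + F.cross_entropy(y_e, y)             # L_{Phi}: total information criterion term
        loss = loss + beta * ci_penalty(z_dg, z_spu, y)   # L_TCRI
    return loss / len(minibatches)
```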
We utilize the Hilbert-Schmidt Independence Criterion (HSIC) [Gretton et al., 2007] as L_TCRI; however, in principle, any conditional independence penalty can be used in its place. HSIC:

$$L_{TCRI}(\Phi_{dg}, \Phi_{spu}) = \frac{1}{2} \sum_{k \in \{0,1\}} \widehat{\mathrm{HSIC}}\big(\Phi_{dg}(X), \Phi_{spu}(X)\big)^{y=k} = \frac{1}{2} \sum_{k \in \{0,1\}} \frac{1}{n_k^2}\, \mathrm{tr}\big(K_{\Phi_{dg}} H_{n_k} K_{\Phi_{spu}} H_{n_k}\big)^{y=k},$$

where the superscript y = k denotes that the estimate is computed only on the examples with label k, k indicates which class the examples in the estimate correspond to, and C is the number of classes (here C = 2). K_{Φ_dg} ∈ R^{n_k × n_k} and K_{Φ_spu} ∈ R^{n_k × n_k} are Gram matrices with K^{i,j}_{Φ_dg} = κ(Φ_dg(X)_i, Φ_dg(X)_j) and K^{i,j}_{Φ_spu} = ω(Φ_spu(X)_i, Φ_spu(X)_j), where the kernels κ, ω are radial basis functions; H_{n_k} = I_{n_k} − (1/n_k) 1 1^⊤ is a centering matrix, I_{n_k} is the n_k × n_k identity matrix, 1_{n_k} is the n_k-dimensional vector whose elements are all 1, and ⊤ denotes the transpose. We condition on the label by taking only the examples of each label, computing the empirical HSIC, and then averaging.

Taken together, the full objective to be minimized is

$$L = \frac{1}{|\mathcal{E}_{tr}|} \sum_{e \in \mathcal{E}_{tr}} \Big[ R^e(\theta_c \circ \Phi_{dg}) + R^e(\theta_e \circ \Phi) + \beta\, L_{TCRI}(\Phi_{dg}, \Phi_{spu}) \Big],$$

where β > 0 is a hyperparameter and |ℰ_tr| is the number of training domains. Figure 3 shows the full framework. We note that when β = 0, this loss reduces to ERM. While we minimize this objective with respect to Φ, θ_c, θ_1, ..., θ_{|ℰ_tr|}, only the domain-general representation and its predictor, θ_c ∘ Φ_dg, are used for inference.

5 Experiments

We begin by evaluating on simulated data, i.e., with known ground-truth mechanisms; we use Equation 5 to generate the simulated data, with domain parameter σ_{e_i}; code is provided in the supplemental materials.

$$\mathrm{SCM}(e_i) := \begin{cases} Z_{dg}^{(e_i)} \sim \mathcal{N}(0, \sigma_{e_i}^2), \\ Y^{(e_i)} = Z_{dg}^{(e_i)} + \mathcal{N}(0, \sigma_y^2), \\ Z_{spu}^{(e_i)} = Y^{(e_i)} + \mathcal{N}(0, \sigma_{e_i}^2). \end{cases} \qquad (5)$$

Table 2: Continuous simulated results – feature extractor with a dummy predictor θ_c = 1, i.e., ŷ = x · Φ_dg · w, where x ∈ R^{N×2}, Φ_dg, Φ_spu ∈ R^{2×1}, and w ∈ R. Oracle indicates the coefficients achieved by regressing y on Z_dg directly.

  Algorithm   (Φ_dg)_0 (i.e., Z_dg weight)   (Φ_dg)_1 (i.e., Z_spu weight)
  ERM         0.29                           0.71
  IRM         0.28                           0.71
  TCRI        1.01                           0.06
  Oracle      1.04                           0.00

We observe 2 domains with parameters σ_{e=0} = 0.1 and σ_{e=1} = 0.2, with σ_y = 0.25, 5000 samples, and linear feature extractors and predictors. We use partial covariance as the conditional independence penalty L_TCRI. Table 2 shows the learned value of Φ_dg, where 'Oracle' indicates the true coefficients obtained by regressing Y on the domain-general Z_dg directly. The ideal Φ_dg recovers Z_dg and puts zero weight on Z_spu. Now, we evaluate the efficacy of our proposed objective on non-simulated datasets.

5.1 Semisynthetic and Real-World Datasets

Algorithms: We compare our method to baselines corresponding to DAG properties: Empirical Risk Minimization (ERM [Vapnik, 1991]), Invariant Risk Minimization (IRM [Arjovsky et al., 2019]), Variance Risk Extrapolation (V-REx [Krueger et al., 2021]), [Li et al., 2018a], Group Distributionally Robust Optimization (GroupDRO [Sagawa et al., 2019]), and Information Bottleneck methods (IB_ERM/IB_IRM [Ahuja et al., 2021]). Additional baseline methods are provided in Appendix A.
We evaluate our proposed method on the semisynthetic ColoredMNIST [Arjovsky et al., 2019] and Spurious-PACS datasets and on the real-world Terra Incognita dataset [Beery et al., 2018]. Given observed domains ℰ_tr = {e : 1, 2, ..., |ℰ_tr|}, we train on ℰ_tr \ {e_i} and evaluate the model on the unseen domain e_i, for each e_i ∈ ℰ_tr.

ColoredMNIST: The ColoredMNIST dataset [Arjovsky et al., 2019] is composed of 7000 pairs of (2 × 28 × 28)-dimensional hand-written digit images and binary labels. There are three domains with different correlations between image color and label: the image color is spuriously related to the label by assigning a color to each of the two classes (0: digits 0-4, 1: digits 5-9). The color is then flipped with probabilities {0.1, 0.2, 0.9} to create three domains, making the color-label relationship domain-specific because it changes across domains. There is also label-flip noise of 0.25, so we expect the best accuracy a domain-general model can achieve to be 75%, while a non-domain-general model can achieve higher accuracy on domains where the spurious correlation holds. In this dataset, Z_dg corresponds to the original image, Z_spu to the color, e to the label-color correlation, Y to the image label, and X to the observed colored image. This dataset follows the generative process of Figure 2a [Arjovsky et al., 2019].

Spurious-PACS: Variables: X: images; Y: non-urban (elephant, giraffe, horse) vs. urban (dog, guitar, house, person). Domains: {{cartoon, art painting}, {art painting, cartoon}, {photo}} [Li et al., 2017]. The photo domain is the same as in the original dataset. In the {cartoon, art painting} domain, urban examples are selected from the original cartoon domain, while non-urban examples are selected from the original art painting domain. In the {art painting, cartoon} domain, urban examples are selected from the original art painting domain, while non-urban examples are selected from the original cartoon domain. This sampling encourages the model to use spurious correlations (domain-related information) to predict the labels; however, since these relationships are flipped between the {cartoon, art painting} and {art painting, cartoon} domains, such predictions will be wrong when generalized to other domains.

Terra Incognita: The Terra Incognita dataset contains subsets of the Caltech Camera Traps dataset [Beery et al., 2018] defined by Gulrajani and Lopez-Paz [2020]. There are four domains representing different camera locations {L100, L38, L43, L46} in the American Southwest, and 10 classes to predict: 9 species of wild animals {bird, bobcat, cat, coyote, dog, opossum, rabbit, raccoon, squirrel} and an 'empty' (no-animal) class. Like Ahuja et al. [2021], we classify this dataset as following the generative process in Figure 2c, the Fully Informative Invariant Features (FIIF) setting. Additional details on model architecture, training, and hyperparameters are provided in Appendix 5.

Model Selection. The standard approach to model selection is training-domain hold-out validation-set accuracy. We find that model selection across hyperparameters using this held-out training-domain validation accuracy often returns non-domain-general models in the 'hard' cases. One advantage of our model is that we can perform model selection based on the TCRI condition (conditional independence between the two representations) on held-out training-domain validation examples to mitigate this challenge.
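As an illustration of this selection heuristic (a sketch under assumptions, not the paper's exact procedure), the TCRI condition can be scored on held-out training-domain examples with a label-conditioned HSIC estimate, keeping the hyperparameter setting with the smallest score; the RBF kernel follows the penalty defined above, while the bandwidth and other details below are assumptions:

```python
import numpy as np

def rbf_gram(z, gamma=1.0):
    """RBF kernel Gram matrix K_ij = exp(-gamma * ||z_i - z_j||^2)."""
    sq = np.sum(z ** 2, axis=1)
    d2 = sq[:, None] + sq[None, :] - 2.0 * z @ z.T
    return np.exp(-gamma * d2)

def hsic(z1, z2, gamma=1.0):
    """Biased empirical HSIC: (1/n^2) tr(K H L H)."""
    n = z1.shape[0]
    h = np.eye(n) - np.ones((n, n)) / n           # centering matrix
    k, l = rbf_gram(z1, gamma), rbf_gram(z2, gamma)
    return np.trace(k @ h @ l @ h) / (n ** 2)

def tcri_score(phi_dg, phi_spu, y):
    """Label-conditioned HSIC between the two representations, averaged over classes."""
    return np.mean([hsic(phi_dg[y == c], phi_spu[y == c]) for c in np.unique(y)])

# Model selection sketch: among candidate hyperparameter settings, keep the model whose
# held-out training-domain representations have the smallest tcri_score, e.g.
# best = min(candidates, key=lambda m: tcri_score(m.phi_dg(x_val), m.phi_spu(x_val), y_val))
```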
In the easy case, we expect the empirical risk minimizer to be domain-general, so selecting the best-performing training-domain model is sound; we additionally do this for all baselines (see Appendix A.1 for further discussion). We find that, empirically, this heuristic works in the examples we study in this work. Nevertheless, model selection under distribution shift remains a significant bottleneck for domain generalization.

5.2 Results and Discussion

Table 3: ℰ \ e_test → e_test (model selection on a held-out source-domain validation set). The 'average' column reports the generalization accuracy averaged over the runs with each domain held out as e_test; the 'worst-case' column reports the lowest such accuracy.

                  ColoredMNIST                Spurious-PACS               Terra Incognita
  Algorithm       average       worst-case    average       worst-case    average       worst-case
  ERM             51.6 ± 0.1    10.0 ± 0.1    57.2 ± 0.7    31.2 ± 1.3    44.2 ± 1.8    35.1 ± 2.8
  IRM             51.7 ± 0.1     9.9 ± 0.1    54.7 ± 0.8    30.3 ± 0.3    38.9 ± 3.7    32.6 ± 4.7
  GroupDRO        52.0 ± 0.1     9.9 ± 0.1    58.5 ± 0.4    37.7 ± 0.7    47.8 ± 0.9    39.9 ± 0.7
  VREx            51.7 ± 0.2    10.2 ± 0.0    58.8 ± 0.4    37.5 ± 1.1    45.1 ± 0.4    38.1 ± 1.3
  IB_ERM          51.5 ± 0.2    10.0 ± 0.1    56.3 ± 1.1    35.5 ± 0.4    46.0 ± 1.4    39.3 ± 1.1
  IB_IRM          51.7 ± 0.0     9.9 ± 0.0    55.9 ± 1.2    33.8 ± 2.2    37.0 ± 2.8    29.6 ± 4.1
  TCRI_HSIC       59.6 ± 1.8    45.1 ± 6.7    63.4 ± 0.2    62.3 ± 0.2    49.2 ± 0.3    40.4 ± 1.6

Table 4: Total Information Criterion: domain-general (DG) and domain-specific (DS) accuracies on ColoredMNIST. The DG classifier is shared across all training domains, and the DS classifiers are trained on each training domain. Column groups indicate the domain from which the held-out examples are sampled; within each DS group, the sub-column indicates which domain-specific predictor is used ('–' marks the predictor of the held-out test domain, which is not trained). {+90%, +80%, -90%} indicate the domains with {0.1, 0.2, 0.9} label-color flip probability, respectively.

  Test      DG classifier on        DS classifiers on +90%     DS classifiers on +80%     DS classifiers on -90%
  domain    +90%   +80%   -90%      +90%   +80%   -90%         +90%   +80%   -90%         +90%   +80%   -90%
  +90%      68.7   69.0   68.5      –      90.1    9.8         –      79.9   20.1         –      10.4   89.9
  +80%      63.1   62.4   64.4      76.3   –      24.3         70.0   –      30.4         24.5   –      76.3
  -90%      65.6   63.4   44.1      75.3   75.3   –            69.2   69.5   –            29.3   26.0   –

Table 5: TIC ablation for ColoredMNIST.

  Algorithm              average       worst-case
  TCRI_HSIC (No TIC)     51.8 ± 5.9    27.7 ± 8.9
  TCRI_HSIC              59.6 ± 1.8    45.1 ± 6.7

Worst-domain Accuracy. A critical implication of domain generality is stability: robustness in worst-domain performance, up to domain difficulty. While average accuracy across domains provides some insight into an algorithm's ability to generalize to new domains, the average hides the variance of performance across domains. Average accuracy can improve while the worst-domain accuracy stays the same or decreases, leading to incorrect conclusions about domain generalization. Additionally, in real-world challenges such as algorithmic fairness, where worst-group performance is considered, some metrics of fairness are analogous to achieving domain generalization [Creager et al., 2021].

Results. TCRI achieves the highest average and worst-case accuracy across all baselines (Table 3). On ColoredMNIST, no method recovers the ideal domain-general model's accuracy of 75%; however, TCRI achieves an increase of over 7 percentage points in both average and worst-case accuracy over the next-best baseline.
Appendix A.2 reports transfer accuracies with cross-validation on held-out test-domain examples (oracle); TCRI again outperforms all baselines, achieving an average accuracy of 70.0% ± 0.4 and a worst-case accuracy of 65.7% ± 1.5, showing that regularizing for TCRI gives solutions very close to optimally domain-general. Similarly, on the Spurious-PACS dataset, TCRI outperforms the baselines: it achieves the highest average accuracy of 63.4% ± 0.2 and worst-case accuracy of 62.3% ± 0.1, with the next best, VREx, achieving 58.8 ± 1.0 and 33.8 ± 0.0, respectively. Additionally, on the Terra Incognita dataset, TCRI achieves the highest average and worst-case accuracies of 49.2% ± 0.3 and 40.4% ± 1.6, with the next best, GroupDRO, achieving 47.8 ± 0.9 and 39.9 ± 0.7, respectively. Appendix A.2 also reports transfer accuracies with cross-validation on held-out target-domain examples (oracle), where we observe that TCRI again obtains the highest average and worst-case accuracy for Spurious-PACS and Terra Incognita.

Overall, regularizing for TCRI gives the most domain-general solutions compared to our baselines, achieving the highest worst-case accuracy on all benchmarks. Additionally, TCRI achieves the highest average accuracy on ColoredMNIST and Spurious-PACS and the second highest on Terra Incognita, where we expect the empirical risk minimizer to be domain-general. Additional results are provided in Appendix A.

The Effect of the Total Information Criterion. Without the TIC loss term, our proposed method is less effective. Table 5 shows that for ColoredMNIST, the hardest 'hard' case we encounter, removing the TIC term degrades both average and worst-case accuracy, which drop by over 8 and 18 percentage points, respectively.

Separation of Domain-General and Domain-Specific Features. In the case of ColoredMNIST, we can reason about the extent of feature disentanglement from the accuracies achieved by the domain-general and domain-specific predictors. Table 4 shows the extent to which each component of Φ, i.e., Φ_dg and Φ_spu, behaves as expected. For each domain, we observe that the domain-specific predictors' accuracies follow the same trend as the color-label correlation, indicating that they capture the color-label relationship. The domain-general predictor, however, does not follow such a trend, indicating that it is not using color to predict. For example, when evaluating the domain-specific predictors from the +90% test-domain experiment (row +90%) on held-out examples from the +80% training domain (columns "DS classifiers on +80%"), we find that the +80% domain-specific predictor achieves an accuracy of 79.9%, exactly what one would expect from a predictor that uses a color correlation with the same direction '+'. Conversely, the -90% predictor achieves an accuracy of 20.1%, exactly what one would expect from a predictor that uses a color correlation with the opposite direction '-'. The -90% domain has the opposite label-color pairing, so a color-based classifier will give the opposite label in any '+' domain. Another advantage of this method, exemplified by Table 4, is that if one believes a particular target domain is close to one of the training domains, one can opt to use that domain's domain-specific predictor and leverage spurious information to improve performance.

On Benchmarking Domain Generalization.
Previous work on benchmarking domain generalization showed that, across standard benchmarks, the domain-unaware empirical risk minimizer outperforms or matches state-of-the-art domain generalization methods [Gulrajani and Lopez-Paz, 2020]. Additionally, Rosenfeld et al. [2022] give results showing weak conditions that define regimes where the empirical risk minimizer across domains is optimal in both average and worst-case accuracy. Consequently, to accurately evaluate our work and the baselines, we focus on settings where it is clear that (i) the empirical risk minimizer fails, (ii) spurious features, as we have defined them, do not generalize across the observed domains, and (iii) there is room for improvement via better domain-general predictions. We discuss this point further in Appendix A.1.

Oracle Transfer Accuracies. While model selection is an integral part of the machine learning development cycle, it remains a non-trivial challenge under distribution shift. While we have proposed a selection process tailored to our method, which can be generalized to other methods with an assumed causal graph, we acknowledge that model selection under distribution shift is still an important open problem. Consequently, we disentangle this challenge from the learning problem and evaluate an algorithm's capacity to give domain-general solutions independently of model selection. We report experimental results using held-out test-set examples for model selection in Appendix A, Table 6. We find that our method, TCRI_HSIC, also outperforms the baselines in this setting.

6 Conclusion and Future Work

We reduce the gap in learning domain-general predictors by leveraging conditional independence properties implied by generative processes to identify domain-general mechanisms. We do this without independent observations of the domain-general and spurious mechanisms, and we show that our framework outperforms other state-of-the-art domain-generalization algorithms on real-world datasets in both average and worst-case accuracy across domains. Future work includes further improvements to the framework to fully recover the strict set of domain-general mechanisms, and model selection strategies that preserve desired domain-general properties.

Acknowledgements

OS was partially supported by the UIUC Beckman Institute Graduate Research Fellowship, NSF-NRT 1735252. This work is partially supported by NSF III 2046795, IIS 1909577, CCF 1934986, NIH 1R01MH116226-01A, NIFA award 2020-67021-32799, the Alfred P. Sloan Foundation, and Google Inc." |