diff --git "a/abs_29K_G/test_abstract_long_2405.01008v2.json" "b/abs_29K_G/test_abstract_long_2405.01008v2.json" new file mode 100644--- /dev/null +++ "b/abs_29K_G/test_abstract_long_2405.01008v2.json" @@ -0,0 +1,797 @@ +{ + "url": "http://arxiv.org/abs/2405.01008v2", + "title": "On Mechanistic Knowledge Localization in Text-to-Image Generative Models", + "abstract": "Identifying layers within text-to-image models which control visual\nattributes can facilitate efficient model editing through closed-form updates.\nRecent work, leveraging causal tracing show that early Stable-Diffusion\nvariants confine knowledge primarily to the first layer of the CLIP\ntext-encoder, while it diffuses throughout the UNet.Extending this framework,\nwe observe that for recent models (e.g., SD-XL, DeepFloyd), causal tracing\nfails in pinpointing localized knowledge, highlighting challenges in model\nediting. To address this issue, we introduce the concept of Mechanistic\nLocalization in text-to-image models, where knowledge about various visual\nattributes (e.g., \"style\", \"objects\", \"facts\") can be mechanistically localized\nto a small fraction of layers in the UNet, thus facilitating efficient model\nediting. We localize knowledge using our method LocoGen which measures the\ndirect effect of intermediate layers to output generation by performing\ninterventions in the cross-attention layers of the UNet. We then employ\nLocoEdit, a fast closed-form editing method across popular open-source\ntext-to-image models (including the latest SD-XL)and explore the possibilities\nof neuron-level model editing. Using Mechanistic Localization, our work offers\na better view of successes and failures in localization-based text-to-image\nmodel editing. Code will be available at\nhttps://github.com/samyadeepbasu/LocoGen.", + "authors": "Samyadeep Basu, Keivan Rezaei, Priyatham Kattakinda, Ryan Rossi, Cherry Zhao, Vlad Morariu, Varun Manjunatha, Soheil Feizi", + "published": "2024-05-02", + "updated": "2024-05-08", + "primary_cat": "cs.CV", + "cats": [ + "cs.CV" + ], + "label": "Original Paper", + "paper_cat": "Diffusion AND Model", + "gt": "Identifying layers within text-to-image models which control visual\nattributes can facilitate efficient model editing through closed-form updates.\nRecent work, leveraging causal tracing show that early Stable-Diffusion\nvariants confine knowledge primarily to the first layer of the CLIP\ntext-encoder, while it diffuses throughout the UNet.Extending this framework,\nwe observe that for recent models (e.g., SD-XL, DeepFloyd), causal tracing\nfails in pinpointing localized knowledge, highlighting challenges in model\nediting. To address this issue, we introduce the concept of Mechanistic\nLocalization in text-to-image models, where knowledge about various visual\nattributes (e.g., \"style\", \"objects\", \"facts\") can be mechanistically localized\nto a small fraction of layers in the UNet, thus facilitating efficient model\nediting. We localize knowledge using our method LocoGen which measures the\ndirect effect of intermediate layers to output generation by performing\ninterventions in the cross-attention layers of the UNet. We then employ\nLocoEdit, a fast closed-form editing method across popular open-source\ntext-to-image models (including the latest SD-XL)and explore the possibilities\nof neuron-level model editing. Using Mechanistic Localization, our work offers\na better view of successes and failures in localization-based text-to-image\nmodel editing. 
Code will be available at\nhttps://github.com/samyadeepbasu/LocoGen.", + "main_content": "Introduction In recent years, substantial strides in conditional image generation have been made through diffusion-based text-toimage generative models, including notable examples like Stable-Diffusion (Rombach et al., 2021), Imagen (Saharia *Equal contribution 1University of Maryland 2Adobe Research. Correspondence to: Samyadeep Basu . Proceedings of the 41 st International Conference on Machine Learning, Vienna, Austria. PMLR 235, 2024. Copyright 2024 by the author(s). et al., 2022), and DALLE (Ramesh et al., 2021). These models have captured widespread attention owing to their impressive image generation and editing capabilities, as evidenced by leading FID scores on prominent benchmarks such as MS-COCO (Lin et al., 2014). Typically trained on extensive billion-scale image-text pairs like LAION5B (Schuhmann et al., 2022), these models encapsulate a diverse array of visual concepts, encompassing color, artistic styles, objects, and renowned personalities. A recent work (Basu et al., 2023) designs an interpretability framework using causal tracing (Pearl, 2001) to trace the location of knowledge about various styles, objects or facts in text-to-image generative models. Essentially, causal tracing finds the indirect effects of intermediate layers (Pearl, 2001), by finding layers which can restore a model with corrupted inputs to its original state. Using this framework, the authors find that knowledge about various visual attributes is distributed in the UNet, whereas, there exists a unique causal state in the CLIP text-encoder where knowledge is localized. This unique causal state in the text-encoder can be leveraged to edit text-to-image models in order to remove style, objects or update facts effectively. However, we note that their framework is restricted to early Stable-Diffusion variants such as Stable-Diffusion-v1-5. In our paper, we first revisit knowledge localization for textto-image generative models, specifically examining the effectiveness of causal tracing beyond Stable-Diffusion-v1-5. While causal tracing successfully identifies unique localized states in the text-encoder for Stable-Diffusion variants, including v1-5 and v2-1, it fails to do so for recent models like SD-XL (Podell et al., 2023) and DeepFloyd1 across different visual attributes. In the UNet, causal states are distributed across a majority of open-source text-to-image models (excluding DeepFloyd), aligning with findings in Basu et al. (2023). Notably, for DeepFloyd, we observe a lack of strong causal states corresponding to visual attributes in the UNet. To address the universal knowledge localization framework absence across different text-to-image models, we introduce the concept of mechanistic localization that aims to identify a small number of layers which control the generation of distinct visual attributes, across a spectrum of text-to-image 1https://github.com/deep-floyd/IF 1 arXiv:2405.01008v2 [cs.CV] 8 May 2024 \fPreprint Text-Encoder Replace the embedding of original prompt with a target prompt (e.g., \u2018a painting of a house\u2019) for this layer Low Fidelity to \u2019Van Gogh\u2019 style (b) UNet layers Cross-Attn Layers which use a different prompt than other layers Original Prompt: \u2018A house in the style of Van Gogh\u2019 \u201cMechanistic localization\u201d Output Modified! 
Cross-Attn layers Causal Intervention (Prior Works) Text-Encoder Causal Layer Distributed Knowledge No \u201cMechanistic localization\u201d LocoGen (Ours) (a) Prompt: \u2018A house in the style of Van Gogh\u2019 Original Generation Figure 1. LOCOGEN: Identifying UNet layers that, when given different input, can alter visual attributes (e.g., style, objects, facts). (a) Earlier works (Basu et al., 2023) which show distributed knowledge using causal interventions. (b) LOCOGEN where a few cross-attention layers receive a different prompt-embedding than the original, leading to generation of images without the particular style. models. To achieve this, we propose LOCOGEN, a method that finds a subset of cross-attention layers in the UNet such that when the input to their key and value matrices is changed, output generation for a given visual attribute (e.g., \u201cstyle\u201d) is modified (see Figure 1). This intervention in the intermediate layers has a direct effect on the output \u2013 therefore LOCOGEN measures the direct effect of intermediate layers, as opposed to indirect effects in causal tracing. Leveraging LOCOGEN, we probe knowledge locations for different visual attributes across popular open-source text-to-image models such as Stable-Diffusion-v1, StableDiffusion-v2, OpenJourney2, SD-XL (Podell et al., 2023) and DeepFloyd. For all models, we find that unique locations can be identified for visual attributes (e.g., \u201cstyle\u201d, \u201cobjects\u201d, \u201cfacts\u201d). Using these locations, we then perform weight-space model editing to remove artistic \u201cstyles\u201d, modify trademarked \u201cobjects\u201d and update outdated \u201cfacts\u201d in text-to-image models. This weight-space editing is performed using LOCOEDIT which updates the key and value matrices using a closed-form update in the locations identified by LOCOGEN. Moreover, for certain attributes such as \u201cstyle\u201d, we show that knowledge can be traced and edited to a subset of neurons, therefore highlighting the possibilities of neuron-level model editing. Contributions. In summary, our contributions include: \u2022 We highlight the drawbacks of existing interpretability methods such as causal tracing for localizing knowledge in latest text-to-image models. \u2022 We introduce LOCOGEN which can universally identify layers that control for visual attributes across a large spectrum of open-source text-to-image models. \u2022 By examining edited models using LOCOEDIT along 2https://huggingface.co/prompthero/openjourney with LOCOGEN, we observe that this efficient approach is successful across a majority of text-to-image models. 2. Related Works Intepretability of Text-to-Image Models. To our understanding, there\u2019s limited exploration into the inner workings of text-to-image models, such as Stable-Diffusion. DAAM (Tang et al., 2023; Hertz et al., 2022) scrutinizes diffusion models through the analysis of cross-attention maps between text tokens and images, highlighting their semantic precision. (Chefer et al., 2023) understand the decomposition of concepts in diffusion models. (Basu et al., 2023) leverage causal tracing to understand how knowledge is stored in text-to-image models such as Stable-Diffusion-v1. Editing Text-to-Image Models. The capacity to modify a diffusion model\u2019s behavior without starting from scratch was initially investigated in Concept-Ablation (Kumari et al., 2023) and Concept-Erasure (Gandikota et al., 2023). 
Another method, TIME (Orgad et al., 2023), alters all the cross-attention layers' key and value matrices to translate between concepts, though it lacks interpretability and applications on a real-use case of model editing. (Basu et al., 2023) edits text-to-image models in the text-encoder space by leveraging a singular causal state. However, existing works overlook newer text-to-image models (e.g., SD-XL and DeepFloyd), which we delve into in detail. 3. Preliminaries Diffusion models start with an initial random real image x0; the noisy image at time step t is expressed as xt = √(αt) x0 + √(1 − αt) ϵ. Here, αt determines the strength of the random Gaussian noise, gradually diminishing as the time step increases, ensuring that xT ∼ N(0, I). The denoising network ϵθ(xt, c, t) is pre-trained to denoise the noisy image xt and produce xt−1. Typically, the conditional input c for the denoising network ϵθ(.) is a text-embedding derived from a caption c through a text-encoder, denoted as c = vγ(c). The noising as well as the denoising operation can also occur in a latent space defined by z = E(x) (Rombach et al., 2021) for better efficiency. The pre-training objective learns to denoise in the latent space as denoted by: L(z, c) = Eϵ,t ||ϵ − ϵθ(zt, c, t)||_2^2, where zt = E(xt) and E is an encoder such as VQVAE (van den Oord et al., 2017). Figure 2. Causal tracing for UNet. Similar to (Basu et al., 2023), we find that knowledge is causally distributed across the UNet for text-to-image models such as SD-v2-1 and SD-XL. For DeepFloyd we do not observe any significant causal state in the UNet. Figure 3. Causal tracing for text-encoder. Unlike SD-v1-5 and SD-v2-1, we find that a singular causal state does not exist in the text-encoder for SD-XL and DeepFloyd. 4. On the Effectiveness of Causal Tracing for Text-to-Image Models In this section, we empirically examine the effectiveness of causal tracing for models beyond Stable-Diffusion-v1-5. In particular, we assess the ability of causal tracing to identify localized control points in Stable-Diffusion-v2-1, OpenJourney, SD-XL and DeepFloyd. Causal Tracing in UNet. In Figure 2, we find that knowledge across different visual attributes is distributed in the UNet for all the text-to-image models (except for DeepFloyd), similar to Stable-Diffusion-v1-5. However, the degree of distribution varies between different text-to-image models. While knowledge about various visual attributes is densely distributed in Stable-Diffusion variants, for SD-XL we find that the distribution is extremely sparse (e.g., only 5% of the total layers are causal). For DeepFloyd, we observe that there are no strong causal states in the UNet. We provide more qualitative visualizations on causal tracing across these text-to-image models in Appendix A. Overall, these results reinforce the difficulty of editing knowledge in the UNet directly due to (i) distribution of causal states or (ii) absence of any. Causal Tracing in Text-Encoder. Basu et al. (2023) show that there exists a unique causal state in the text-encoder for Stable-Diffusion-v1-5 and Stable-Diffusion-v2-1 which can be used to perform fast model editing. In Figure 3, we find that such a unique causal state is absent in the text-encoder for DeepFloyd and SD-XL.
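For readers who prefer code to notation, the denoising objective from the preliminaries above can be summarized in a short PyTorch sketch. Here `eps_theta` is a stand-in for the (latent) UNet and `alpha_bar` for the cumulative noise schedule; both names are assumptions of this illustration rather than part of the paper's released code.

```python
import torch
import torch.nn.functional as F

def denoising_loss(eps_theta, x0, c, alpha_bar):
    """Sketch of L(z, c) = E_{eps,t} || eps - eps_theta(z_t, c, t) ||_2^2.

    eps_theta: callable (x_t, c, t) -> predicted noise (stand-in for the UNet).
    x0:        clean images or latents, shape (B, ...).
    c:         text-embeddings conditioning the denoiser.
    alpha_bar: cumulative noise schedule, shape (T,).
    """
    B, T = x0.shape[0], alpha_bar.shape[0]
    t = torch.randint(0, T, (B,), device=x0.device)        # one random timestep per example
    eps = torch.randn_like(x0)                              # Gaussian noise to be injected
    a = alpha_bar[t].view(B, *([1] * (x0.dim() - 1)))       # broadcast over spatial dims
    x_t = a.sqrt() * x0 + (1.0 - a).sqrt() * eps            # forward noising of x0
    return F.mse_loss(eps_theta(x_t, c, t), eps)            # train the network to predict eps
```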
We note that DeepFloyd uses a T5 text-encoder, whereas SD-XL uses a combination of CLIP-ViT-L and OpenCLIP-ViT-G (Radford et al., 2021). Our empirical results indicate that a unique causal state arises only when a CLIP text-encoder is used by itself in a text-to-image model. 5. LOCOGEN: Towards Mechanistic Knowledge Localization Given the lack of generalizability of knowledge localization using causal tracing as shown in Section 4, we introduce LOCOGEN, which can identify localized control regions for visual attributes across all text-to-image models. 5.1. Knowledge Control in Cross-Attention Layers During the inference process, the regulation of image generation involves the utilization of classifier-free guidance, as outlined in Ho & Salimans (2021), which incorporates scores from both the conditional and unconditional diffusion models at each time-step. Specifically, the classifier-free guidance is applied at each time-step to combine the conditional (ϵθ(zt, c, t)) and unconditional score estimates (ϵθ(zt, t)). The result is a combined score denoted as ϵ̂(zt, c, t): ϵ̂(zt, c, t) = ϵθ(zt, c, t) + α (ϵθ(zt, c, t) − ϵθ(zt, t)), ∀t ∈ [T, 1]. (1) This combined score is used to update the latent zt using DDIM sampling (Song et al., 2020) at each time-step to obtain the final latent code z0. We term the model ϵθ(zt, c, t) as the Clean Model and the final image generated as Iclean. We note that text is incorporated in the process of generation using cross-attention layers denoted by {Cl}_{l=1}^M within ϵθ(zt, c, t), ∀t ∈ [T, 1]. These layers include key and value matrices – {W_l^K, W_l^V}_{l=1}^M – that take the text-embedding c of the input prompt and guide the generation toward the text prompt. Generally, the text-embedding c is the same across all these layers. However, in order to localize and find control points for different visual attributes, we replace the original text-embedding c with a target prompt embedding c′ across a small subset of the cross-attention layers and measure its direct effect on the generated image. 5.1.1. ALTERED INPUTS We say that a model receives altered input when a subset of cross-attention layers C′ ⊂ {Cl}_{l=1}^M receives a different text-embedding c′ than the other cross-attention layers that (Figure 4 panels: SD-v1-5, Open Journey, SD-XL and SD-v2-1, with prompts such as 'A house in the style of Van Gogh', 'A town in the style of Monet', 'R2D2 in street' and 'President of United States'.) Figure 4. Interpretability Results: Images generated by intervening on the layers identified by LOCOGEN across various open-source text-to-image models. We compare the original generation vs.
generation by intervening on the layers identified with LOCOGEN along with a target prompt. We find that across various text-to-image models, visual attributes such as style, objects, facts can be manipulated by intervening only on a very small fraction of cross-attention layers. take c as input. We name these layers as controlling layers. We denote by Ialtered the image generated using this model and Equation (1) with altered inputs when zT is given as the initial noise. We denote the model \u03f5\u03b8(zt, c, c\u2032, t) with the altered inputs as the Altered Model with the following inference procedure: \u02c6 \u03f5(zt, c, c\u2032, t) = \u03f5\u03b8(zt, c, c\u2032, t) + \u03b1(\u03f5\u03b8(zt, c, c\u2032, t) \u2212\u03f5\u03b8(zt, t)) . As an example, to find the layers where style knowledge corresponding to a particular artist is stored, {Cl}M l=1 \u2212C\u2032 receive text-embeddings corresponding to the prompt \u2018An in the style of \u2019, whereas the layers in C\u2032 receive text-embeddings corresponding to the prompt \u2018An in the style of painting\u2019. If the generated image with these inputs do not have that particular style, we realize that controlling layers C\u2032 are responsible for incorporating that specified style in the output (see Figure 1). In fact, this replacement operation enables finding locations across different cross-attention layers where various visual attribute knowledge is localized. 5.1.2. LOCOGEN ALGORITHM Our goal is to find controlling layers C\u2032 for different visual attributes. We note that the cardinality of the set |C\u2032| = m is a hyper-parameter and the search space for C\u2032 is exponential. Given |C\u2032| = m, there are \u0000M m \u0001 possibilities for C\u2032, thus, we restrict our search space to only adjacent cross-attention layers. In fact, we consider all C\u2032 such that C\u2032 = {Cl}j+m\u22121 l=j for j \u2208[1, M \u2212m + 1]. Selecting the hyper-parameter m. To select the cardinality of the set C\u2032, we run an iterative hyper-parameter search with m \u2208[1, M], where M is selected based on the maximum number of cross-attention layers in a given text-to-image generative model. At each iteration of the hyper-parameter search, we investigate whether there exists a set of m adjacent cross-attention layers that are responsible for the generation of the specific visual attribute. We find minimum m that such controlling layers for the particular attribute exists. To apply LOCOGEN for a particular attribute, we obtain a set of input prompts T = {Ti}N i=1 that include the particular attribute and corresponding set of prompts T \u2032 = {T \u2032 i}N i=1 where T \u2032 i is analogous to Ti except that the particular attribute is removed/updated. These prompts serve to create altered images and assess the presence of the specified attribute within them. Let ci be the text-embedding of Ti and c\u2032 i be that of T \u2032 i. Given m, we examine all M \u2212m + 1 possible candidates for controlling layers. For each of them, we generate N 4 \fPreprint SD-v1-5 SD-v2-1 SD-XL \u2013 Style, Facts SD-XL Objects Layers CLIP-Score Style Objects Facts Objects, Facts Style Facts Objects, Style Style, Facts Objects (a) (b) (c) (d) Figure 5. CLIP-Score of the generated images with original prompt for style, objects and target prompt for facts after intervening on layers through LOCOGEN. Lower CLIP-Score for objects, style indicate correct localization, whereas a higher CLIP-Score indicates such for facts. 
(a) For SD-v1-5 (m=2), objects, facts can be controlled from Layer 6, whereas style can be controlled from Layer 8. (b) For SD-v2-1(m=3), facts are controlled from Layer 7, style and objects from Layer 8. (c,d): For SD-XL, style (m=3), facts(m=5) are controlled from Layer 45, whereas objects are controlled from Layer 15. altered images where i-th image is generated by giving c\u2032 i as the input embedding to selected m layers and ci to other ones. Then we measure the CLIP-Score (Hessel et al., 2021) of original text prompt Ti to the generated image for style, objects and target text prompt T \u2032 i to the generated image for facts. For style and objects, drop in CLIP-Score shows the removal of the attribute while for facts increase in score shows similarity to the updated fact. We take the average of the mentioned score across all 1 \u2264i \u2264N. By doing that for all candidates, we report the one with minimum average CLIP-Score for style, objects and maximum average CLIP-Score for facts. These layers could be candidate layers controlling the generation of the specific attribute. Algorithm 1 provides the pseudocode to find the best candidate. Figure 5 shows CLIP-Score across different candidates. Algorithm 1 LOCOGEN Input: m, {Ti}N i=1, {T \u2032 i}N i=1, {ci}N i=1, {c\u2032 i}N i=1 Output: Candidate controlling set for j \u21901, . . . , M \u2212m do C\u2032 \u2190{Cl}j+m\u22121 l=j for i \u21901, . . . , N do si \u2190CLIP-SCORE (Ti, Ialtered) s\u2032 i \u2190CLIP-SCORE (T \u2032 i, Ialtered) aj \u2190AVERAGE \u0000{si}N i=1 \u0001 \u25b7for objects, style aj \u2190AVERAGE \u0000{s\u2032 i}N i=1 \u0001 \u25b7for facts j\u2217\u2190arg minj aj \u25b7for objects, style j\u2217\u2190arg maxj aj \u25b7for facts return aj\u2217, {Cl}j\u2217+m\u22121 l=j\u2217 We set a threshold for average CLIP-Score and find the minimum m such that there exists m adjacent cross-attention layers whose corresponding CLIP-Score meets the requirement. We point the reader to Appendix G for the values of m selected for different models and thresholds. Dataset for Prompts. We use the prompts used in (Basu et al., 2023; Kumari et al., 2023) to extract locations in the UNet which control for various visual attributes such as objects, style and facts. More details in Appendix C. 5.2. Empirical Results In this section, we provide empirical results highlighting the localized layers across various open-source text-to-image generative models: Stable-Diffusion Variants. Across both models, as depicted qualitatively in Figure 4 and quantitatively in Figure 5-(a), we observe the presence of a distinctive subset of layers that govern specific visual attributes. In the case of both SD-v1-5 and SD-v2-1, the control for \u201cstyle\u201d is centralized at l = 8 with m = 2. In SD-v1-5, the control for \u201cobjects\u201d and \u201cfacts\u201d emanates from the same locations: l = 6 and m = 2. However, in SD-v2-1, \u201cobjects\u201d are controlled from l = 8, while \u201cfacts\u201d are influenced by l = 7. Despite sharing a similar UNet architecture and undergoing training with comparable scales of pre-training data, these models diverge in the text-encoder utilized. This discrepancy in text-encoder choice may contribute to the variation in how they store knowledge concerning different attributes. Open-Journey. We note that Open-Journey exhibits control locations similar to SD-v1-5 for various visual attributes. 
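As a concrete reading of Algorithm 1, the search reduces to a short loop over windows of m adjacent cross-attention layers. The sketch below assumes two helper functions that are not part of the paper or its released code: `generate_altered(layers, c, c_alt)`, which runs generation while feeding `c_alt` only to the listed cross-attention layers, and `clip_score(text, image)`, which returns the CLIP-Score.

```python
def locogen_search(m, M, prompts, target_prompts, embeds, target_embeds,
                   generate_altered, clip_score, attribute="style"):
    """Sketch of Algorithm 1 (LOCOGEN): scan all windows of m adjacent cross-attention layers."""
    averages = []
    for j in range(M - m + 1):                      # candidate window C' = {C_j, ..., C_{j+m-1}}
        layers = list(range(j, j + m))
        scores = []
        for T_i, T_alt, c_i, c_alt in zip(prompts, target_prompts, embeds, target_embeds):
            image = generate_altered(layers, c_i, c_alt)       # c_alt fed only to `layers`
            # style/objects: score vs. the original prompt (a drop means the attribute was removed);
            # facts: score vs. the target prompt (a rise means the fact was updated).
            reference = T_alt if attribute == "facts" else T_i
            scores.append(clip_score(reference, image))
        averages.append(sum(scores) / len(scores))
    pick = max if attribute == "facts" else min
    j_star = averages.index(pick(averages))
    return j_star, averages[j_star]
```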
As illustrated in Figure 4 and Figure 5-(a), \u201cobjects\u201d and \u201cfacts\u201d are governed from l = 6, while \u201cstyle\u201d is controlled from l = 8. Despite the architectural resemblance between Open-Journey and SD-v1-5, it\u2019s important to highlight that Open-Journey undergoes fine-tuning on a subset of images generated from Mid-Journey. This suggests that the control locations for visual attributes are more closely tied to the underlying model architecture than to the specifics of the training or fine-tuning data. SD-XL. Within SD-XL, our investigation reveals that both 5 \fPreprint Original Edited Original Edited Original Edited Original Edited SDv1-5 Open Journey SDv2-1 Edit: Remove Style of \u2018Van Gogh\u2019 Edit : Remove Style of \u2018Monet\u2019 Edit: Modify trademarked \u2018Snoopy\u2019 Edit: Update with correct \u2018British Monarch\u2019 SD-XL Figure 6. LOCOEDIT (Model editing) results at locations identified by LOCOGEN across various open-source text-to-image models. We observe that locations identified by our interpretability framework can be edited effectively to remove styles, objects and update facts in text-to-image models. We provide more visualizations in Appendix B. Original Layer 6 Original Layer 12 \u2018President of United States\u2019 \u2018The British Monarch\u2019 Original Prompt Original Layer 16 Original Layer 12 Original Prompt \u2018Ocean in Monet Style\u2019 \u2018Women working in garden in Van Gogh style\u2019 Figure 7. Interpretability Results for DeepFloyd. We find the control points for visual attributes to be dependent on the underlying prompts, rather than the visual attribute. \u201cstyle\u201d and \u201cfacts\u201d can be effectively controlled from l = 45, with m = 3 as evidenced in Figure 4 and Figure 5-(c). For the attribute \u201cobjects,\u201d control is situated at l = 15, albeit with a slightly larger value of m = 5. In summary, SD-XL, consisting of a total of 70 cross-attention layers, underscores a significant finding: various attributes in image generation can be governed by only a small subset of layers. DeepFloyd. Across SD-v1-5, SD-v2-1, Open-Journey, and SD-XL, our findings indicate that visual attributes like \u201cstyle\u201d, \u201cobjects\u201d and \u201cfacts,\u201d irrespective of the specific prompt used, can be traced back to control points situated within a limited number of layers. However, in the case of DeepFloyd, our observations differ. We find instead, that all attributes display localization dependent on the specific prompt employed. To illustrate, factual knowledge related to \u201cThe British Monarch\u201d is governed from l = 6 with m = 3, whereas factual knowledge tied to \u201cThe President of the United States\u201d is controlled from l = 12 (see Figure 7). This divergence in localization patterns highlights the nuanced behavior of DeepFloyd in comparison to the other models examined. More results can be referred in Appendix B.5. Human-Study Results. We run a human-study to verify that LOCOGEN can effectively identify controlling layers for different visual attributes. In our setup, evaluators assess 132 image pairs, each comprising an image generated by Clean Model and an image generated by Altered Model whose identified cross-attention layers takes different inputs. Evaluators determine whether the visual attribute is changed in the image generated by Altered Model(for instance, the artistic Van Gogh style is removed from the original image or not). 
Covering 33 image pairs, generated with different prompts per model, with five participating evaluators, our experiments reveal a 92.58% verification rate for the impact of LOCOGEN-identified layers on visual attributes. See more details in Appendix J. 6. LOCOEDIT: Editing to Ablate Concepts In this section, we analyse the effectiveness of closed-form edit updates in the layers identified by LOCOGEN across different text-to-image generative models. Figure 8. Quantitative Model Editing Results for Text-to-Image Models (SD-v1-5, SD-v2-1, OpenJourney, SD-XL; original vs. edited). We observe a drop in CLIP-Score for "style" and "objects", while an increase in CLIP-Score for "facts", therefore highlighting correct edits. 6.1. Method Algorithm 1 extracts the exact set of cross-attention layers from which the knowledge about a particular visual attribute (e.g., style) is controlled. We denote this set as Cloc, where Cloc ⊂ C and |Cloc| = m. This set of extracted cross-attention layers Cloc, each containing value and key matrices, is denoted as Cloc = {Ŵ_l^K, Ŵ_l^V}_{l=1}^m. The objective is to modify these weight matrices {Ŵ_l^K, Ŵ_l^V}_{l=1}^m such that they transform the original prompt (e.g., 'A house in the style of Van Gogh') to a target prompt (e.g., 'A house in the style of a painting') in a way that the visual attribute in the generation is modified. Similar to Section 5.1.2, we use a set of input prompts Torig = {T_i^o}_{i=1}^N consisting of prompts featuring the particular visual attribute. Simultaneously, we create a counterpart set Ttarget = {T_i^t}_{i=1}^N where each T_i^t is identical to T_i^o but lacks the particular attribute in focus. Let c_i^o ∈ R^d be the text-embedding of the last subject token in T_i^o and c_i^t ∈ R^d be that of T_i^t. We obtain the matrix Xorig ∈ R^{N×d} by stacking the vectors c_1^o, c_2^o, ..., c_N^o and the matrix Xtarget ∈ R^{N×d} by stacking c_1^t, c_2^t, ..., c_N^t. To learn a mapping between the key and the value embeddings, we solve the following optimization for each layer l ∈ [1, m], written here for the key matrices: min_{W_l^K} ||Xorig W_l^K − Xtarget Ŵ_l^K||_2^2 + λ_K ||W_l^K − Ŵ_l^K||_2^2, where λ_K is the regularizer. Letting Ytarget = Xtarget Ŵ_l^K, the optimal closed-form solution for the key matrix is: W_l^K = (Xorig^T Xorig + λ_K I)^{−1} (Xorig^T Ytarget + λ_K Ŵ_l^K). The same update is applied to obtain the optimal matrix for the value embeddings. 6.2. Model Editing Results Stable-Diffusion Variants, Open-Journey and SD-XL. In Figure 6 and Figure 8, it becomes apparent that LOCOEDIT effectively integrates accurate edits into the locations identified by LOCOGEN. Qualitatively examining the visual edits in Figure 6, our method demonstrates the capability to remove artistic "styles", modify trademarked "objects," and update outdated "facts" within a text-to-image model with accurate information. This visual assessment is complemented by the quantitative analysis in Figure 8, where we observe that the CLIP-Score of images generated by the edited model, given prompts containing specific visual attributes, consistently registers lower than that of the clean model for "objects" and "style." For "facts," we gauge the CLIP-Score of images from the model with the correct facts, wherein a higher CLIP-Score indicates a correct edit, as illustrated in Figure 8.
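The closed-form update above is a small ridge-regression step per identified layer; a minimal sketch, assuming the stacked embeddings X_orig, X_target and the original matrix Ŵ are already available as tensors and using an illustrative regularizer value, is shown below. Applying it to both the key and value matrices of each identified layer yields the edited model.

```python
import torch

def locoedit_update(X_orig, X_target, W_hat, lam=0.01):
    """Closed-form solution of  min_W ||X_orig W - X_target W_hat||^2 + lam ||W - W_hat||^2.

    X_orig, X_target: (N, d) last-subject-token embeddings of original / target prompts.
    W_hat:            (d, d') original key (or value) matrix of an identified layer.
    Returns W = (X_orig^T X_orig + lam I)^{-1} (X_orig^T Y_target + lam W_hat),
    with Y_target = X_target W_hat.
    """
    d = X_orig.shape[1]
    Y_target = X_target @ W_hat
    A = X_orig.T @ X_orig + lam * torch.eye(d, dtype=X_orig.dtype, device=X_orig.device)
    B = X_orig.T @ Y_target + lam * W_hat
    return torch.linalg.solve(A, B)
```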
Combining both qualitative and quantitative findings, these results collectively underscore the effectiveness of LOCOEDIT across SD-v1-5, SD-v2-1, Open-Journey, and SD-XL. However, it\u2019s noteworthy that the efficacy of closed-form edits varies among different textto-image models. Specifically, in the case of \u201cstyle,\u201d we observe the most substantial drop in CLIP-Score between the edited and unedited models for SD-v1-5 and Open-Journey, while the drop is comparatively less for SD-v2-1 and SDXL. Conversely, for \u201cfacts,\u201d we find that all models perform similarly in updating with new information. Limitations with DeepFloyd Closed-Form Edits. DeepFloyd, despite revealing distinct locations through LOCOGEN (albeit depending on the underlying prompt), exhibits challenges in effective closed-form edits at these locations. Appendix M provides qualitative visualizations illustrating this limitation. The model employs a T5-encoder with bidirectional attention, diverging from other text-to-image models using CLIP-variants with causal attention. Closedform edits, relying on mapping the last-subject token embedding to a target embedding, are typically effective in text-embeddings generated with causal attention, where the last-subject token holds crucial information. However, the T5-encoder presents a hurdle as tokens beyond the last subject token contribute essential information about the target attribute. Consequently, restricting the mapping to the lastsubject token alone proves ineffective for a T5-encoder. While LOCOGEN along with LOCOEDIT makes model editing more interpretable \u2013 we also find that localized-model editing is better than updating all layers in the UNet as shown in Appendix F. We also compare our method with existing editing methods (Basu et al., 2023; Kumari et al., 2023; Gandikota et al., 2023) in Appendix I. We find that our editing method is at par with existing baselines, with the added advantage of generalizability to models beyond Stable-Diffusion-v1-5. In Appendix L, we also show the 7 \fPreprint Original Image 50 Neurons Layer A painting of rocky ocean shore under the luminous night sky in the style of\u00a0Van Gogh Painting of a grieving woman in the style of\u00a0Pablo Picasso A painting of a river in the style of\u00a0Monet 100 Neurons Figure 9. Neuron-Level Model Editing Qualitative. Results when applying neuron-level dropout on identified neurons in layers specified with LOCOGEN on Stable Diffusion v1.5. The second and third columns display images with 50 and 100 modified neurons out of 1280 in controlling layers, respectively. The last column shows images with a different embedding in controlling layers. robustness of our method to generic prompts. 7. On Neuron-Level Model Editing In this section, we explore the feasibility of effecting neuronlevel modifications to eliminate stylistic attributes from the output of text-to-image models. According to layers identified with LOCOGEN, our objective is to ascertain whether the selective dropout of neurons at the activation layers within the specified cross-attention layers (key and value embeddings) can successfully eliminate stylistic elements. To accomplish this objective, we first need to identify which neurons are responsible for the generation of particular artistic styles, e.g., Van Gogh. We examine the activations of neurons in the embedding space of key and value matrices in identified cross-attention layers. 
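One way this comparison can be implemented is sketched below: a per-neuron two-sample z-score over key/value activations for style vs. style-free prompts, as described next. The tensor names and the particular z-score variant are assumptions of this illustration, not the released implementation.

```python
import torch

def rank_neurons_by_style(c_style, c_neutral, W):
    """Rank neurons of a key/value embedding by how differently they activate
    for prompts with vs. without a particular style.

    c_style:   (N1, d) last-subject-token embeddings of prompts containing the style.
    c_neutral: (N2, d) embeddings of prompts without the style.
    W:         (d, d') key or value matrix of an identified cross-attention layer.
    Returns neuron indices sorted by |z-score|, most style-sensitive first.
    """
    z1 = c_style @ W                      # activations with the style present, (N1, d')
    z2 = c_neutral @ W                    # activations without the style, (N2, d')
    mean_diff = z1.mean(dim=0) - z2.mean(dim=0)
    std_err = (z1.var(dim=0) / z1.shape[0] + z2.var(dim=0) / z2.shape[0]).clamp_min(1e-8).sqrt()
    z = mean_diff / std_err               # per-neuron two-sample z-score
    return torch.argsort(z.abs(), descending=True)
```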
More specifically, we pinpoint neurons that exhibit significant variations when comparing input prompts that include a particular style with the case that input prompts do not involve the specified style. To execute this process, we collect a set of N1 prompts that feature the specific style, e.g. Van Gogh. We gather text-embeddings of the last subject token of these prompts denoted by c1, c2, ..., cN1, where ci \u2208Rd. We also obtain a set of N2 prompts without any particular style and analogously obtain {c\u2032 1, c\u2032 2, ..., c\u2032 N2}, where c\u2032 i \u2208Rd. Next, for the key or value matrix W \u2208Rd\u00d7d\u2032, we consider key or value embedding of these input prompts, i.e., {zi}N1 i=1 \u222a{z\u2032 i}N2 i=1 where zi = ciW and z\u2032 i = c\u2032 iW. We Salvador Dali Van Gogh Monet Pablo Picasso Greg Rutkowski 0.1 0.2 0.3 Original 30 Neurons 50 Neurons 100 Neurons Layer Figure 10. Neuron-Level Model Editing Quantitative. Average CLIP-Score of generated images to text prompt \u2019style of \u2019. Brown bars show similarity to original generated image; red, orange, and green bars show similarity to generated image when 30, 50, and 100 neurons are modified, respectively; and blue bars refer to images when controlling layers receive other prompt. note that zi, z\u2032 i \u2208Rd\u2032. Subsequently, for each of these d\u2032 neurons, we assess the statistical difference in their activations between input prompts that include a particular style and those without it. Specifically, we compute the z-score for each neuron within two groups of activations: z1, z2, ..., zN1 and z\u2032 1, z\u2032 2, . .. , z\u2032 N2. The neurons are then ranked based on the absolute value of their z-score, with the top neurons representing those that exhibit significant differences in activations depending on the presence or absence of a particular concept in the input prompt. During generation, we drop-out these neurons and see if particular style is removed or not. As seen in Figure 9, neuron-level modification at inference time is effective at removing styles. This shows that knowledge about a particular style can be even more localized to a few neurons. It is noteworthy that the extent of style removal increases with the modification of more neurons, albeit with a trade-off in the quality of generated images. This arises because modified neurons may encapsulate information related to other visual attributes. To quantify the effectiveness of this approach, we measure the drop in CLIP-Score for modified images across various styles. Figure 10 presents a bar-plot illustrating these similarity scores. Notably, drop in CLIP-Score demonstrates that neuron-level model editing effectively removes the styles associated with different artists in the generated images. We refer to Appendix K.1 for more details on neuron-level model editing experiments. 8.", + "additional_graph_info": { + "graph": [ + [ + "Samyadeep Basu", + "Soheil Feizi" + ], + [ + "Samyadeep Basu", + "Daniela Massiceti" + ], + [ + "Samyadeep Basu", + "Shell Xu Hu" + ], + [ + "Samyadeep Basu", + "Varun Manjunatha" + ], + [ + "Soheil Feizi", + "Ken Duffy" + ], + [ + "Daniela Massiceti", + "Puneet K. Dokania" + ], + [ + "Daniela Massiceti", + "N. 
Siddharth" + ], + [ + "Shell Xu Hu", + "Minyoung Kim" + ], + [ + "Shell Xu Hu", + "Md Rifat Arefin" + ], + [ + "Varun Manjunatha", + "Nirat Saini" + ], + [ + "Varun Manjunatha", + "Mohit Iyyer" + ] + ], + "node_feat": { + "Samyadeep Basu": [ + { + "url": "http://arxiv.org/abs/2405.01008v2", + "title": "On Mechanistic Knowledge Localization in Text-to-Image Generative Models", + "abstract": "Identifying layers within text-to-image models which control visual\nattributes can facilitate efficient model editing through closed-form updates.\nRecent work, leveraging causal tracing show that early Stable-Diffusion\nvariants confine knowledge primarily to the first layer of the CLIP\ntext-encoder, while it diffuses throughout the UNet.Extending this framework,\nwe observe that for recent models (e.g., SD-XL, DeepFloyd), causal tracing\nfails in pinpointing localized knowledge, highlighting challenges in model\nediting. To address this issue, we introduce the concept of Mechanistic\nLocalization in text-to-image models, where knowledge about various visual\nattributes (e.g., \"style\", \"objects\", \"facts\") can be mechanistically localized\nto a small fraction of layers in the UNet, thus facilitating efficient model\nediting. We localize knowledge using our method LocoGen which measures the\ndirect effect of intermediate layers to output generation by performing\ninterventions in the cross-attention layers of the UNet. We then employ\nLocoEdit, a fast closed-form editing method across popular open-source\ntext-to-image models (including the latest SD-XL)and explore the possibilities\nof neuron-level model editing. Using Mechanistic Localization, our work offers\na better view of successes and failures in localization-based text-to-image\nmodel editing. Code will be available at\nhttps://github.com/samyadeepbasu/LocoGen.", + "authors": "Samyadeep Basu, Keivan Rezaei, Priyatham Kattakinda, Ryan Rossi, Cherry Zhao, Vlad Morariu, Varun Manjunatha, Soheil Feizi", + "published": "2024-05-02", + "updated": "2024-05-08", + "primary_cat": "cs.CV", + "cats": [ + "cs.CV" + ], + "main_content": "Introduction In recent years, substantial strides in conditional image generation have been made through diffusion-based text-toimage generative models, including notable examples like Stable-Diffusion (Rombach et al., 2021), Imagen (Saharia *Equal contribution 1University of Maryland 2Adobe Research. Correspondence to: Samyadeep Basu . Proceedings of the 41 st International Conference on Machine Learning, Vienna, Austria. PMLR 235, 2024. Copyright 2024 by the author(s). et al., 2022), and DALLE (Ramesh et al., 2021). These models have captured widespread attention owing to their impressive image generation and editing capabilities, as evidenced by leading FID scores on prominent benchmarks such as MS-COCO (Lin et al., 2014). Typically trained on extensive billion-scale image-text pairs like LAION5B (Schuhmann et al., 2022), these models encapsulate a diverse array of visual concepts, encompassing color, artistic styles, objects, and renowned personalities. A recent work (Basu et al., 2023) designs an interpretability framework using causal tracing (Pearl, 2001) to trace the location of knowledge about various styles, objects or facts in text-to-image generative models. Essentially, causal tracing finds the indirect effects of intermediate layers (Pearl, 2001), by finding layers which can restore a model with corrupted inputs to its original state. 
Using this framework, the authors find that knowledge about various visual attributes is distributed in the UNet, whereas, there exists a unique causal state in the CLIP text-encoder where knowledge is localized. This unique causal state in the text-encoder can be leveraged to edit text-to-image models in order to remove style, objects or update facts effectively. However, we note that their framework is restricted to early Stable-Diffusion variants such as Stable-Diffusion-v1-5. In our paper, we first revisit knowledge localization for textto-image generative models, specifically examining the effectiveness of causal tracing beyond Stable-Diffusion-v1-5. While causal tracing successfully identifies unique localized states in the text-encoder for Stable-Diffusion variants, including v1-5 and v2-1, it fails to do so for recent models like SD-XL (Podell et al., 2023) and DeepFloyd1 across different visual attributes. In the UNet, causal states are distributed across a majority of open-source text-to-image models (excluding DeepFloyd), aligning with findings in Basu et al. (2023). Notably, for DeepFloyd, we observe a lack of strong causal states corresponding to visual attributes in the UNet. To address the universal knowledge localization framework absence across different text-to-image models, we introduce the concept of mechanistic localization that aims to identify a small number of layers which control the generation of distinct visual attributes, across a spectrum of text-to-image 1https://github.com/deep-floyd/IF 1 arXiv:2405.01008v2 [cs.CV] 8 May 2024 \fPreprint Text-Encoder Replace the embedding of original prompt with a target prompt (e.g., \u2018a painting of a house\u2019) for this layer Low Fidelity to \u2019Van Gogh\u2019 style (b) UNet layers Cross-Attn Layers which use a different prompt than other layers Original Prompt: \u2018A house in the style of Van Gogh\u2019 \u201cMechanistic localization\u201d Output Modified! Cross-Attn layers Causal Intervention (Prior Works) Text-Encoder Causal Layer Distributed Knowledge No \u201cMechanistic localization\u201d LocoGen (Ours) (a) Prompt: \u2018A house in the style of Van Gogh\u2019 Original Generation Figure 1. LOCOGEN: Identifying UNet layers that, when given different input, can alter visual attributes (e.g., style, objects, facts). (a) Earlier works (Basu et al., 2023) which show distributed knowledge using causal interventions. (b) LOCOGEN where a few cross-attention layers receive a different prompt-embedding than the original, leading to generation of images without the particular style. models. To achieve this, we propose LOCOGEN, a method that finds a subset of cross-attention layers in the UNet such that when the input to their key and value matrices is changed, output generation for a given visual attribute (e.g., \u201cstyle\u201d) is modified (see Figure 1). This intervention in the intermediate layers has a direct effect on the output \u2013 therefore LOCOGEN measures the direct effect of intermediate layers, as opposed to indirect effects in causal tracing. Leveraging LOCOGEN, we probe knowledge locations for different visual attributes across popular open-source text-to-image models such as Stable-Diffusion-v1, StableDiffusion-v2, OpenJourney2, SD-XL (Podell et al., 2023) and DeepFloyd. For all models, we find that unique locations can be identified for visual attributes (e.g., \u201cstyle\u201d, \u201cobjects\u201d, \u201cfacts\u201d). 
Using these locations, we then perform weight-space model editing to remove artistic \u201cstyles\u201d, modify trademarked \u201cobjects\u201d and update outdated \u201cfacts\u201d in text-to-image models. This weight-space editing is performed using LOCOEDIT which updates the key and value matrices using a closed-form update in the locations identified by LOCOGEN. Moreover, for certain attributes such as \u201cstyle\u201d, we show that knowledge can be traced and edited to a subset of neurons, therefore highlighting the possibilities of neuron-level model editing. Contributions. In summary, our contributions include: \u2022 We highlight the drawbacks of existing interpretability methods such as causal tracing for localizing knowledge in latest text-to-image models. \u2022 We introduce LOCOGEN which can universally identify layers that control for visual attributes across a large spectrum of open-source text-to-image models. \u2022 By examining edited models using LOCOEDIT along 2https://huggingface.co/prompthero/openjourney with LOCOGEN, we observe that this efficient approach is successful across a majority of text-to-image models. 2. Related Works Intepretability of Text-to-Image Models. To our understanding, there\u2019s limited exploration into the inner workings of text-to-image models, such as Stable-Diffusion. DAAM (Tang et al., 2023; Hertz et al., 2022) scrutinizes diffusion models through the analysis of cross-attention maps between text tokens and images, highlighting their semantic precision. (Chefer et al., 2023) understand the decomposition of concepts in diffusion models. (Basu et al., 2023) leverage causal tracing to understand how knowledge is stored in text-to-image models such as Stable-Diffusion-v1. Editing Text-to-Image Models. The capacity to modify a diffusion model\u2019s behavior without starting from scratch was initially investigated in Concept-Ablation (Kumari et al., 2023) and Concept-Erasure (Gandikota et al., 2023). Another method, TIME (Orgad et al., 2023), alters all the cross-attention layers\u2019 key and value matrices to translate between concepts, though lacks interpretability and applications on a real-use case of model editing. (Basu et al., 2023) edits text-to-image models in the text-encoder space by leveraging a singular causal state. However, existing works overlook newer text-to-image models (e.g., SD-XL and DeepFloyd), which we delve into in detail. 3. Preliminaries Diffusion models start with an initial random real image x0, the noisy image at time step t is expressed as xt = \u221a\u03b1tx0 + p (1 \u2212\u03b1t)\u03f5. Here, \u03b1t determines the strength of the random Gaussian noise, gradually diminishing as the time step increases, ensuring that xT \u223cN(0, I). The denoising network \u03f5\u03b8(xt, c, t), is pre-trained to denoise the noisy image xt and produce xt\u22121. Typically, the con2 \fPreprint Layers SD-v2-1 UNet SD-XL-UNet DeepFloyd-UNet 70 227 163 CLIP-Score Figure 2. Causal tracing for UNet. Similar to (Basu et al., 2023), we find that knowledge is causally distributed across the UNet for text-to-image models such as SD-v2-1 and SD-XL. For DeepFloyd we do not observe any significant causal state in the UNet. DeepFloyd Layers CLIP-Score 24 46 0 0 SD-XL Figure 3. Causal tracing for text-encoder. Unlike SD-v1-5 and SD-v2-1, we find that a singular causal states does not exist in the text-encoder for SD-XL and DeepFloyd. ditional input c for the denoising network \u03f5\u03b8(.) 
is a textembedding derived from a caption c through a text-encoder, denoted as c = v\u03b3(c). The noising as well as the denoising operation can also occur in a latent space defined by z = E(x) (Rombach et al., 2021) for better efficiency. The pre-training objective learns to denoise in the latent space as denoted by: L(z, c) = E\u03f5,t||\u03f5 \u2212\u03f5\u03b8(zt, c, t)||2 2, where zt = E(xt) and E is an encoder such as VQVAE (van den Oord et al., 2017). 4. On the Effectiveness of Causal Tracing for Text-to-Image Models In this section, we empirically observe the effectiveness of causal tracing to models beyond Stable-Diffusion-v15. In particular, we find the ability of causal tracing to identify localized control points in Stable-Diffusion-v2-1, OpenJourney, SD-XL and DeepFloyd. Causal Tracing in UNet. In Figure 2, we find that knowledge across different visual attributes is distributed in the UNet for all the text-to-image models (except for DeepFloyd), similar to Stable-Diffusion-v1-5. However, the degree of distribution varies between different text-to-image models. While knowledge about various visual attributes is densely distributed in Stable-Diffusion variants, for SDXL we find that the distribution is extremely sparse (e.g., only 5% of the total layers are causal). For DeepFloyd, we observe that there are no strong causal states in the UNet. We provide more qualitative visualizations on causal tracing across the these text-to-image models in Appendix A. Overall, these results reinforce the difficulty of editing knowledge in the UNet directly due to (i) distribution of causal states or (ii) absence of any. Causal Tracing in Text-Encoder. Basu et al. (2023) show that there exists a unique causal state in the text-encoder for Stable-Diffusion-v1-5 and Stable-Diffusion-v2-1 which can be used to perform fast model editing. In Figure 3, we find that such an unique causal state is absent in the text-encoder for DeepFloyd and SD-XL. We note that DeepFloyd uses a T5-text encoder, whereas SD-XL uses a a combination of CLIP-ViT-L and OpenCLIP-ViT-G (Radford et al., 2021). Our empirical results indicate that an unique causal state arises only when a CLIP text-encoder is used by itself in a text-to-image model. 5. LOCOGEN: Towards Mechanistic Knowledge Localization Given the lack of generalizability of knowledge localization using causal tracing as shown in Section 4, we introduce LOCOGEN , which can identify localized control regions for visual attributes across all text-to-image models. 5.1. Knowledge Control in Cross-Attention Layers During the inference process, the regulation of image generation involves the utilization of classifier-free guidance, as outlined in Ho & Salimans (2021) which incorporates scores from both the conditional and unconditional diffusion models at each time-step. Specifically, the classifier-free guidance is applied at each time-step to combine the conditional (\u03f5\u03b8(zt, c, t)) and unconditional score estimates (\u03f5\u03b8(zt, t)). The result is a combined score denoted as \u02c6 \u03f5(zt, c, t). \u02c6 \u03f5(zt, c, t) = \u03f5\u03b8(zt, c, t) + \u03b1 (\u03f5\u03b8 (zt, c, t) \u2212\u03f5\u03b8(zt, t)) , \u2200t \u2208[T, 1] . (1) This combined score is used to update the latent zt using DDIM sampling (Song et al., 2020) at each time-step to obtain the final latent code z0. We term the model \u03f5\u03b8(zt, c, t) as the Clean Model and the final image generated as Iclean. 
We note that text is incorporated in the process of generation using cross-attention layers denoted by {Cl}M l=1 within \u03f5\u03b8(zt, c, t) \u2200t \u2208[T, 1]. These layers include key and value matrices \u2013 {W K l , W V l }M l=1 that take text-embedding c of the input prompt and guide the generation toward the text prompt. Generally, the text-embedding c is same across all these layers. However, in order to localize and find control points for different visual attributes, we replace the original text-embedding c with a target prompt embedding c\u2032 across a small subset of the cross-attention layers and measure its direct effect on the generated image. 5.1.1. ALTERED INPUTS We say that a model receives altered input when a subset of cross-attention layers C\u2032 \u2282{Cl}M l=1 receive a different text-embedding c\u2032 than the other cross-attention layers that 3 \fPreprint SDv1-5 Open Journey SDXL SDv2-1 Layer 8 Original Layer 8 Original Layer 6 Original Layer 6 Original Layer 8 Original Layer 8 Original Layer 6 Original Layer 6 Original Layer 8 Original Layer 8 Original Layer 8 Original Layer 7 Original Layer 45 Original Layer 45 Original Layer 45 Original Layer 45 Original 'A house in the style of Van Gogh\u2019 'A town in the style of Monet\u2019 \u2018R2D2 in street\u2019 \u2018President of United States\u2019 'A house in the style of Van Gogh\u2019 'A house in the style of Van Gogh\u2019 'A house in the style of Van Gogh\u2019 'A town in the style of Monet\u2019 'A town in the style of Monet\u2019 'A tree in the style of Van Gogh\u2019 \u2018R2D2\u2019 \u2018R2D2\u2019 \u2018The British Monarch\u2019 \u2018The British Monarch\u2019 \u2018The British Monarch\u2019 \u2018President of United States\u2019 Original Prompt: Original Prompt: Original Prompt: Original Prompt: Figure 4. Interpretability Results: Images generated by intervening on the layers identified by LOCOGEN across various opensource text-to-image models. We compare the original generation vs. generation by intervening on the layers identified with LOCOGEN along with a target prompt. We find that across various text-to-image models, visual attributes such as style, objects, facts can be manipulated by intervening only on a very small fraction of cross-attention layers. take c as input. We name these layers as controlling layers. We denote by Ialtered the image generated using this model and Equation (1) with altered inputs when zT is given as the initial noise. We denote the model \u03f5\u03b8(zt, c, c\u2032, t) with the altered inputs as the Altered Model with the following inference procedure: \u02c6 \u03f5(zt, c, c\u2032, t) = \u03f5\u03b8(zt, c, c\u2032, t) + \u03b1(\u03f5\u03b8(zt, c, c\u2032, t) \u2212\u03f5\u03b8(zt, t)) . As an example, to find the layers where style knowledge corresponding to a particular artist is stored, {Cl}M l=1 \u2212C\u2032 receive text-embeddings corresponding to the prompt \u2018An in the style of \u2019, whereas the layers in C\u2032 receive text-embeddings corresponding to the prompt \u2018An in the style of painting\u2019. If the generated image with these inputs do not have that particular style, we realize that controlling layers C\u2032 are responsible for incorporating that specified style in the output (see Figure 1). In fact, this replacement operation enables finding locations across different cross-attention layers where various visual attribute knowledge is localized. 5.1.2. 
LOCOGEN ALGORITHM Our goal is to find controlling layers C\u2032 for different visual attributes. We note that the cardinality of the set |C\u2032| = m is a hyper-parameter and the search space for C\u2032 is exponential. Given |C\u2032| = m, there are \u0000M m \u0001 possibilities for C\u2032, thus, we restrict our search space to only adjacent cross-attention layers. In fact, we consider all C\u2032 such that C\u2032 = {Cl}j+m\u22121 l=j for j \u2208[1, M \u2212m + 1]. Selecting the hyper-parameter m. To select the cardinality of the set C\u2032, we run an iterative hyper-parameter search with m \u2208[1, M], where M is selected based on the maximum number of cross-attention layers in a given text-to-image generative model. At each iteration of the hyper-parameter search, we investigate whether there exists a set of m adjacent cross-attention layers that are responsible for the generation of the specific visual attribute. We find minimum m that such controlling layers for the particular attribute exists. To apply LOCOGEN for a particular attribute, we obtain a set of input prompts T = {Ti}N i=1 that include the particular attribute and corresponding set of prompts T \u2032 = {T \u2032 i}N i=1 where T \u2032 i is analogous to Ti except that the particular attribute is removed/updated. These prompts serve to create altered images and assess the presence of the specified attribute within them. Let ci be the text-embedding of Ti and c\u2032 i be that of T \u2032 i. Given m, we examine all M \u2212m + 1 possible candidates for controlling layers. For each of them, we generate N 4 \fPreprint SD-v1-5 SD-v2-1 SD-XL \u2013 Style, Facts SD-XL Objects Layers CLIP-Score Style Objects Facts Objects, Facts Style Facts Objects, Style Style, Facts Objects (a) (b) (c) (d) Figure 5. CLIP-Score of the generated images with original prompt for style, objects and target prompt for facts after intervening on layers through LOCOGEN. Lower CLIP-Score for objects, style indicate correct localization, whereas a higher CLIP-Score indicates such for facts. (a) For SD-v1-5 (m=2), objects, facts can be controlled from Layer 6, whereas style can be controlled from Layer 8. (b) For SD-v2-1(m=3), facts are controlled from Layer 7, style and objects from Layer 8. (c,d): For SD-XL, style (m=3), facts(m=5) are controlled from Layer 45, whereas objects are controlled from Layer 15. altered images where i-th image is generated by giving c\u2032 i as the input embedding to selected m layers and ci to other ones. Then we measure the CLIP-Score (Hessel et al., 2021) of original text prompt Ti to the generated image for style, objects and target text prompt T \u2032 i to the generated image for facts. For style and objects, drop in CLIP-Score shows the removal of the attribute while for facts increase in score shows similarity to the updated fact. We take the average of the mentioned score across all 1 \u2264i \u2264N. By doing that for all candidates, we report the one with minimum average CLIP-Score for style, objects and maximum average CLIP-Score for facts. These layers could be candidate layers controlling the generation of the specific attribute. Algorithm 1 provides the pseudocode to find the best candidate. Figure 5 shows CLIP-Score across different candidates. Algorithm 1 LOCOGEN Input: m, {Ti}N i=1, {T \u2032 i}N i=1, {ci}N i=1, {c\u2032 i}N i=1 Output: Candidate controlling set for j \u21901, . . . , M \u2212m do C\u2032 \u2190{Cl}j+m\u22121 l=j for i \u21901, . . . 
, N do si \u2190CLIP-SCORE (Ti, Ialtered) s\u2032 i \u2190CLIP-SCORE (T \u2032 i, Ialtered) aj \u2190AVERAGE \u0000{si}N i=1 \u0001 \u25b7for objects, style aj \u2190AVERAGE \u0000{s\u2032 i}N i=1 \u0001 \u25b7for facts j\u2217\u2190arg minj aj \u25b7for objects, style j\u2217\u2190arg maxj aj \u25b7for facts return aj\u2217, {Cl}j\u2217+m\u22121 l=j\u2217 We set a threshold for average CLIP-Score and find the minimum m such that there exists m adjacent cross-attention layers whose corresponding CLIP-Score meets the requirement. We point the reader to Appendix G for the values of m selected for different models and thresholds. Dataset for Prompts. We use the prompts used in (Basu et al., 2023; Kumari et al., 2023) to extract locations in the UNet which control for various visual attributes such as objects, style and facts. More details in Appendix C. 5.2. Empirical Results In this section, we provide empirical results highlighting the localized layers across various open-source text-to-image generative models: Stable-Diffusion Variants. Across both models, as depicted qualitatively in Figure 4 and quantitatively in Figure 5-(a), we observe the presence of a distinctive subset of layers that govern specific visual attributes. In the case of both SD-v1-5 and SD-v2-1, the control for \u201cstyle\u201d is centralized at l = 8 with m = 2. In SD-v1-5, the control for \u201cobjects\u201d and \u201cfacts\u201d emanates from the same locations: l = 6 and m = 2. However, in SD-v2-1, \u201cobjects\u201d are controlled from l = 8, while \u201cfacts\u201d are influenced by l = 7. Despite sharing a similar UNet architecture and undergoing training with comparable scales of pre-training data, these models diverge in the text-encoder utilized. This discrepancy in text-encoder choice may contribute to the variation in how they store knowledge concerning different attributes. Open-Journey. We note that Open-Journey exhibits control locations similar to SD-v1-5 for various visual attributes. As illustrated in Figure 4 and Figure 5-(a), \u201cobjects\u201d and \u201cfacts\u201d are governed from l = 6, while \u201cstyle\u201d is controlled from l = 8. Despite the architectural resemblance between Open-Journey and SD-v1-5, it\u2019s important to highlight that Open-Journey undergoes fine-tuning on a subset of images generated from Mid-Journey. This suggests that the control locations for visual attributes are more closely tied to the underlying model architecture than to the specifics of the training or fine-tuning data. SD-XL. Within SD-XL, our investigation reveals that both 5 \fPreprint Original Edited Original Edited Original Edited Original Edited SDv1-5 Open Journey SDv2-1 Edit: Remove Style of \u2018Van Gogh\u2019 Edit : Remove Style of \u2018Monet\u2019 Edit: Modify trademarked \u2018Snoopy\u2019 Edit: Update with correct \u2018British Monarch\u2019 SD-XL Figure 6. LOCOEDIT (Model editing) results at locations identified by LOCOGEN across various open-source text-to-image models. We observe that locations identified by our interpretability framework can be edited effectively to remove styles, objects and update facts in text-to-image models. We provide more visualizations in Appendix B. Original Layer 6 Original Layer 12 \u2018President of United States\u2019 \u2018The British Monarch\u2019 Original Prompt Original Layer 16 Original Layer 12 Original Prompt \u2018Ocean in Monet Style\u2019 \u2018Women working in garden in Van Gogh style\u2019 Figure 7. Interpretability Results for DeepFloyd. 
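The LOCOGEN search in Algorithm 1 can be sketched in a few lines of Python. The helpers `generate_altered` (which runs the diffusion sampler while feeding the target embedding only to the selected cross-attention layers) and `clip_score` are assumed callables for this sketch, not functions from the paper's released code.

```python
from typing import Callable, Sequence

def locogen_search(
    num_layers: int,                 # M: number of cross-attention layers in the UNet
    window: int,                     # m: number of adjacent layers to intervene on
    orig_prompts: Sequence[str],     # T_i: prompts containing the attribute
    target_prompts: Sequence[str],   # T'_i: prompts with the attribute removed/updated
    generate_altered: Callable,      # (prompt, target_prompt, layer_ids) -> image
    clip_score: Callable,            # (caption, image) -> float
    attribute_kind: str = "style",   # "style"/"objects": minimize score; "facts": maximize
):
    """Sliding-window search over adjacent cross-attention layers (Algorithm 1)."""
    best_j, best_score = None, None
    for j in range(num_layers - window + 1):
        layer_ids = list(range(j, j + window))      # candidate controlling layers C'
        scores = []
        for t_orig, t_target in zip(orig_prompts, target_prompts):
            image = generate_altered(t_orig, t_target, layer_ids)
            # style/objects: score against the original prompt (lower = attribute removed)
            # facts: score against the target prompt (higher = fact updated)
            ref = t_orig if attribute_kind in ("style", "objects") else t_target
            scores.append(clip_score(ref, image))
        avg = sum(scores) / len(scores)
        better = (
            best_score is None
            or (attribute_kind in ("style", "objects") and avg < best_score)
            or (attribute_kind == "facts" and avg > best_score)
        )
        if better:
            best_j, best_score = j, avg
    return best_j, best_score
```

In a diffusers-based implementation, `generate_altered` can be realized by temporarily swapping the `encoder_hidden_states` passed to the UNet's `attn2` (cross-attention) modules for the chosen layer indices, for example via custom attention processors.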
We find the control points for visual attributes to be dependent on the underlying prompts, rather than the visual attribute. \u201cstyle\u201d and \u201cfacts\u201d can be effectively controlled from l = 45, with m = 3 as evidenced in Figure 4 and Figure 5-(c). For the attribute \u201cobjects,\u201d control is situated at l = 15, albeit with a slightly larger value of m = 5. In summary, SD-XL, consisting of a total of 70 cross-attention layers, underscores a significant finding: various attributes in image generation can be governed by only a small subset of layers. DeepFloyd. Across SD-v1-5, SD-v2-1, Open-Journey, and SD-XL, our findings indicate that visual attributes like \u201cstyle\u201d, \u201cobjects\u201d and \u201cfacts,\u201d irrespective of the specific prompt used, can be traced back to control points situated within a limited number of layers. However, in the case of DeepFloyd, our observations differ. We find instead, that all attributes display localization dependent on the specific prompt employed. To illustrate, factual knowledge related to \u201cThe British Monarch\u201d is governed from l = 6 with m = 3, whereas factual knowledge tied to \u201cThe President of the United States\u201d is controlled from l = 12 (see Figure 7). This divergence in localization patterns highlights the nuanced behavior of DeepFloyd in comparison to the other models examined. More results can be referred in Appendix B.5. Human-Study Results. We run a human-study to verify that LOCOGEN can effectively identify controlling layers for different visual attributes. In our setup, evaluators assess 132 image pairs, each comprising an image generated by Clean Model and an image generated by Altered Model whose identified cross-attention layers takes different inputs. Evaluators determine whether the visual attribute is changed in the image generated by Altered Model(for instance, the artistic Van Gogh style is removed from the original image or not). Covering 33 image pairs, generated with different prompts per model, with five participating evaluators, our experiments reveal a 92.58% verification rate for the impact of LOCOGEN-identified layers on visual attributes. See more details in Appendix J. 6. LOCOEDIT: Editing to Ablate Concepts In this section, we analyse the effectiveness of closed-form edit updates in the layers identified by LOCOGEN across different text-to-image generative models. 6 \fPreprint SD-v1-5 SD-v2-1 OpenJourney SD-XL Original Edited Figure 8. Quantitative Model Editing Results for Text-toImage Models. We observe a drop in CLIP-Score for \u201cstyle\u201d and \u201dobjects\u201d, while an increase in CLIP-Score for \u201cfacts\u201d therefore highlighting correct edits. 6.1. Method Algorithm 1 extracts the exact set of cross-attention layers from which the knowledge about a particular visual attribute (e.g., style) is controlled. We denote this set as Cloc, where Cloc \u2282C and |Cloc| = m. This set of extracted crossattention layers Cloc, each containing value and key matrices is denoted as Cloc = { \u02c6 W K l , \u02c6 W V l }m l=1. The objective is to modify these weight matrices { \u02c6 W K l , \u02c6 W V l }m l=1 such that they transform the original prompt (e.g., \u2019A house in the style of Van Gogh\u2019) to a target prompt (e.g., \u2019A house in the style of a painting\u2019) in a way that the visual attribute in the generation is modified. 
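To make $C_{loc}$ concrete in code, the sketch below gathers the key/value projection matrices of the LOCOGEN-identified cross-attention layers from a Stable-Diffusion UNet using the diffusers library. The layer-indexing convention (enumerating `attn2` modules in module order) and the example values of $j^*$ and $m$ are our own illustrative assumptions and may not match the paper's official implementation.

```python
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
)

# Enumerate the UNet's cross-attention modules ("attn2") in module order.
cross_attn = [m for n, m in pipe.unet.named_modules() if n.endswith("attn2")]

# Illustrative values (Section 5.2 reports layer 8 with m=2 for "style" on SD-v1-5;
# the mapping of that index to diffusers module order is an assumption).
j_star, m = 8, 2
c_loc = cross_attn[j_star : j_star + m]

# Each controlling layer exposes the key/value projections W_K (to_k) and W_V (to_v).
key_value_weights = [(layer.to_k.weight, layer.to_v.weight) for layer in c_loc]
```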
Similar to Section 5.1.2, we use a set of input prompts $T_{\text{orig}} = \{T_i^o\}_{i=1}^{N}$ featuring the particular visual attribute, and a counterpart set $T_{\text{target}} = \{T_i^t\}_{i=1}^{N}$, where each $T_i^t$ is identical to $T_i^o$ except that the attribute in focus is removed or updated. Let $c_i^o \in \mathbb{R}^d$ be the text-embedding of the last subject token in $T_i^o$, and $c_i^t \in \mathbb{R}^d$ that of $T_i^t$. We form the matrix $X_{\text{orig}} \in \mathbb{R}^{N \times d}$ by stacking the vectors $c_1^o, c_2^o, \ldots, c_N^o$, and $X_{\text{target}} \in \mathbb{R}^{N \times d}$ by stacking $c_1^t, c_2^t, \ldots, c_N^t$. To learn a mapping for the key and value embeddings, we solve the following optimization for each layer $l \in [1, m]$, shown here for the key matrices:
$$\min_{W_l^K} \; \|X_{\text{orig}} W_l^K - X_{\text{target}} \hat{W}_l^K\|_2^2 + \lambda_K \|W_l^K - \hat{W}_l^K\|_2^2,$$
where $\lambda_K$ is the regularizer. Letting $Y_{\text{target}} = X_{\text{target}} \hat{W}_l^K$, the optimal closed-form solution for the key matrix is:
$$W_l^K = (X_{\text{orig}}^{T} X_{\text{orig}} + \lambda_K I)^{-1} (X_{\text{orig}}^{T} Y_{\text{target}} + \lambda_K \hat{W}_l^K).$$
The same procedure is applied to obtain the optimal value matrices (a code sketch of this closed-form update is given after the model-editing results below).

6.2. Model Editing Results

Stable-Diffusion Variants, Open-Journey and SD-XL. Figures 6 and 8 show that LOCOEDIT applies accurate edits at the locations identified by LOCOGEN. Qualitatively (Figure 6), our method can remove artistic "styles", modify trademarked "objects", and update outdated "facts" in a text-to-image model with accurate information. This visual assessment is complemented by the quantitative analysis in Figure 8: the CLIP-Score of images generated by the edited model, given prompts containing the targeted visual attribute, is consistently lower than that of the clean model for "objects" and "style". For "facts", we measure the CLIP-Score of the generated images against the correct fact, where a higher CLIP-Score indicates a correct edit, as illustrated in Figure 8. Together, these qualitative and quantitative results underscore the effectiveness of LOCOEDIT across SD-v1-5, SD-v2-1, Open-Journey, and SD-XL. However, the efficacy of closed-form edits varies across text-to-image models. For "style", we observe the largest drop in CLIP-Score between the edited and unedited models for SD-v1-5 and Open-Journey, while the drop is smaller for SD-v2-1 and SD-XL. For "facts", all models perform similarly in updating to new information.

Limitations with DeepFloyd Closed-Form Edits. DeepFloyd, despite revealing distinct locations through LOCOGEN (albeit dependent on the underlying prompt), is challenging to edit effectively with closed-form updates at these locations. Appendix M provides qualitative visualizations illustrating this limitation. The model employs a T5 text-encoder with bidirectional attention, unlike the other text-to-image models, which use CLIP variants with causal attention. Closed-form edits that map the last-subject-token embedding to a target embedding are typically effective for text-embeddings produced with causal attention, where the last subject token holds the crucial information. With the T5-encoder, however, tokens beyond the last subject token also carry essential information about the target attribute.
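The closed-form update of Section 6.1 can be written in a few lines. The sketch below is a minimal NumPy version; it assumes the last-subject-token embeddings have already been stacked into `X_orig` and `X_target` (array names ours), that `W_hat` is the pre-trained key or value projection of one controlling layer, and that the regularizer value is merely illustrative.

```python
import numpy as np

def locoedit_closed_form(X_orig: np.ndarray,    # (N, d) original-prompt embeddings
                         X_target: np.ndarray,  # (N, d) target-prompt embeddings
                         W_hat: np.ndarray,     # (d, d') pre-trained key or value matrix
                         lam: float = 0.1):     # regularizer lambda_K (illustrative)
    """Solve min_W ||X_orig W - X_target W_hat||^2 + lam * ||W - W_hat||^2 in closed form."""
    d = X_orig.shape[1]
    Y_target = X_target @ W_hat
    A = X_orig.T @ X_orig + lam * np.eye(d)
    B = X_orig.T @ Y_target + lam * W_hat
    return np.linalg.solve(A, B)                # (d, d') edited projection matrix

# The same routine is applied to W_K and W_V of every layer in C_loc.
```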
Consequently, restricting the mapping to the lastsubject token alone proves ineffective for a T5-encoder. While LOCOGEN along with LOCOEDIT makes model editing more interpretable \u2013 we also find that localized-model editing is better than updating all layers in the UNet as shown in Appendix F. We also compare our method with existing editing methods (Basu et al., 2023; Kumari et al., 2023; Gandikota et al., 2023) in Appendix I. We find that our editing method is at par with existing baselines, with the added advantage of generalizability to models beyond Stable-Diffusion-v1-5. In Appendix L, we also show the 7 \fPreprint Original Image 50 Neurons Layer A painting of rocky ocean shore under the luminous night sky in the style of\u00a0Van Gogh Painting of a grieving woman in the style of\u00a0Pablo Picasso A painting of a river in the style of\u00a0Monet 100 Neurons Figure 9. Neuron-Level Model Editing Qualitative. Results when applying neuron-level dropout on identified neurons in layers specified with LOCOGEN on Stable Diffusion v1.5. The second and third columns display images with 50 and 100 modified neurons out of 1280 in controlling layers, respectively. The last column shows images with a different embedding in controlling layers. robustness of our method to generic prompts. 7. On Neuron-Level Model Editing In this section, we explore the feasibility of effecting neuronlevel modifications to eliminate stylistic attributes from the output of text-to-image models. According to layers identified with LOCOGEN, our objective is to ascertain whether the selective dropout of neurons at the activation layers within the specified cross-attention layers (key and value embeddings) can successfully eliminate stylistic elements. To accomplish this objective, we first need to identify which neurons are responsible for the generation of particular artistic styles, e.g., Van Gogh. We examine the activations of neurons in the embedding space of key and value matrices in identified cross-attention layers. More specifically, we pinpoint neurons that exhibit significant variations when comparing input prompts that include a particular style with the case that input prompts do not involve the specified style. To execute this process, we collect a set of N1 prompts that feature the specific style, e.g. Van Gogh. We gather text-embeddings of the last subject token of these prompts denoted by c1, c2, ..., cN1, where ci \u2208Rd. We also obtain a set of N2 prompts without any particular style and analogously obtain {c\u2032 1, c\u2032 2, ..., c\u2032 N2}, where c\u2032 i \u2208Rd. Next, for the key or value matrix W \u2208Rd\u00d7d\u2032, we consider key or value embedding of these input prompts, i.e., {zi}N1 i=1 \u222a{z\u2032 i}N2 i=1 where zi = ciW and z\u2032 i = c\u2032 iW. We Salvador Dali Van Gogh Monet Pablo Picasso Greg Rutkowski 0.1 0.2 0.3 Original 30 Neurons 50 Neurons 100 Neurons Layer Figure 10. Neuron-Level Model Editing Quantitative. Average CLIP-Score of generated images to text prompt \u2019style of \u2019. Brown bars show similarity to original generated image; red, orange, and green bars show similarity to generated image when 30, 50, and 100 neurons are modified, respectively; and blue bars refer to images when controlling layers receive other prompt. note that zi, z\u2032 i \u2208Rd\u2032. Subsequently, for each of these d\u2032 neurons, we assess the statistical difference in their activations between input prompts that include a particular style and those without it. 
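A small sketch of this neuron-selection step is given below (the ranking criterion is made precise in the next paragraph). The two-sample z-statistic used here is one common choice; the exact statistic in the paper may differ, and the choice of how many neurons to drop is illustrative.

```python
import numpy as np

def rank_neurons_by_zscore(Z_style: np.ndarray,    # (N1, d') activations for styled prompts
                           Z_plain: np.ndarray):   # (N2, d') activations for plain prompts
    """Rank the d' neurons of a key/value embedding by how differently they activate
    for prompts with vs. without the target style (two-sample z-statistic)."""
    n1, n2 = len(Z_style), len(Z_plain)
    mean_diff = Z_style.mean(axis=0) - Z_plain.mean(axis=0)
    pooled_se = np.sqrt(Z_style.var(axis=0) / n1 + Z_plain.var(axis=0) / n2 + 1e-8)
    z = mean_diff / pooled_se
    return np.argsort(-np.abs(z))                  # neuron indices, most style-specific first

# At inference time, the top-k indices (e.g., k = 50) are zeroed out in the key/value
# embeddings of the controlling layers to suppress the style.
```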
Specifically, we compute the z-score for each neuron within two groups of activations: z1, z2, ..., zN1 and z\u2032 1, z\u2032 2, . .. , z\u2032 N2. The neurons are then ranked based on the absolute value of their z-score, with the top neurons representing those that exhibit significant differences in activations depending on the presence or absence of a particular concept in the input prompt. During generation, we drop-out these neurons and see if particular style is removed or not. As seen in Figure 9, neuron-level modification at inference time is effective at removing styles. This shows that knowledge about a particular style can be even more localized to a few neurons. It is noteworthy that the extent of style removal increases with the modification of more neurons, albeit with a trade-off in the quality of generated images. This arises because modified neurons may encapsulate information related to other visual attributes. To quantify the effectiveness of this approach, we measure the drop in CLIP-Score for modified images across various styles. Figure 10 presents a bar-plot illustrating these similarity scores. Notably, drop in CLIP-Score demonstrates that neuron-level model editing effectively removes the styles associated with different artists in the generated images. We refer to Appendix K.1 for more details on neuron-level model editing experiments. 8." + }, + { + "url": "http://arxiv.org/abs/2310.13730v1", + "title": "Localizing and Editing Knowledge in Text-to-Image Generative Models", + "abstract": "Text-to-Image Diffusion Models such as Stable-Diffusion and Imagen have\nachieved unprecedented quality of photorealism with state-of-the-art FID scores\non MS-COCO and other generation benchmarks. Given a caption, image generation\nrequires fine-grained knowledge about attributes such as object structure,\nstyle, and viewpoint amongst others. Where does this information reside in\ntext-to-image generative models? In our paper, we tackle this question and\nunderstand how knowledge corresponding to distinct visual attributes is stored\nin large-scale text-to-image diffusion models. We adapt Causal Mediation\nAnalysis for text-to-image models and trace knowledge about distinct visual\nattributes to various (causal) components in the (i) UNet and (ii) text-encoder\nof the diffusion model. In particular, we show that unlike generative\nlarge-language models, knowledge about different attributes is not localized in\nisolated components, but is instead distributed amongst a set of components in\nthe conditional UNet. These sets of components are often distinct for different\nvisual attributes. Remarkably, we find that the CLIP text-encoder in public\ntext-to-image models such as Stable-Diffusion contains only one causal state\nacross different visual attributes, and this is the first self-attention layer\ncorresponding to the last subject token of the attribute in the caption. This\nis in stark contrast to the causal states in other language models which are\noften the mid-MLP layers. 
Based on this observation of only one causal state in\nthe text-encoder, we introduce a fast, data-free model editing method\nDiff-QuickFix which can effectively edit concepts in text-to-image models.\nDiffQuickFix can edit (ablate) concepts in under a second with a closed-form\nupdate, providing a significant 1000x speedup and comparable editing\nperformance to existing fine-tuning based editing methods.", + "authors": "Samyadeep Basu, Nanxuan Zhao, Vlad Morariu, Soheil Feizi, Varun Manjunatha", + "published": "2023-10-20", + "updated": "2023-10-20", + "primary_cat": "cs.CV", + "cats": [ + "cs.CV" + ], + "main_content": "INTRODUCTION Text-to-Image generative models such as Stable-Diffusion (Rombach et al., 2021), Imagen (Saharia et al., 2022) and DALLE (Ramesh et al., 2021) have revolutionized conditional image generation in the last few years. These models have attracted a lot of attention due to their impressive image generation and editing capabilities, obtaining state-of-the-art FID scores on common generation benchmarks such as MS-COCO (Lin et al., 2014). Text-to-Image generation models are generally trained on billion-scale image-text pairs such as LAION-5B (Schuhmann et al., 2022) which typically consist of a plethora of visual concepts encompassing color, artistic styles, objects, and famous personalities, amongst others. Prior works (Carlini et al., 2023; Somepalli et al., 2023a;b) have shown that text-to-image models such as Stable-Diffusion memorize various aspects of the pre-training dataset. For example, given a caption from the LAION dataset, a model can generate an exact image from the training dataset corresponding to the caption in certain cases (Carlini et al., 2023). These observations reinforce that some form of knowledge corresponding to visual attributes is stored in the parameter space of text-to-image model. When an image is generated, it possesses visual attributes such as (but not limited to) the presence of distinct objects with their own characteristics (such as color or texture), artistic style or scene viewpoint. This attribute-specific information is usually specified in the conditioning textual prompt to the UNet in text-to-image models which is used to pull relevant knowledge from the UNet to 1 arXiv:2310.13730v1 [cs.CV] 20 Oct 2023 \fPreprint \u2026.. \u2026.. Add Gaussian Noise to the token embeddings corresponding to the attribute of interest (e.g., Van Gogh in case of style) \u2026.. \u2026.. c = \u2018Apple in Van Gogh Style\u2019 Clean Model Cross-Attn Self-Attn ResNet Corrupted Model Option 1: Causal Tracing for UNet Copy clean states to the corrupted model across layers in UNet Option 2: Causal Tracing for Text-Encoder Copy clean states to the corrupted model across layers in the text-encoder c = \u2018Apple in Van Gogh Style\u2019 \u2026.. \u2026.. Restoration on Corrupted Model (Causal State) c = \u2018Apple in Van Gogh Style\u2019 \u2026.. \u2026.. Restoration on Corrupted Model (Non-Causal State) c = \u2018Apple in Van Gogh Style\u2019 Text-Encoder Text-Encoder Text-Encoder Text-Encoder Generated Image is faithful to caption Generated Image is not faithful to caption (a) (b) (c) (d) Corrupted States Figure 1: Causal Tracing in Text-to-Image Models for (i) UNet and (ii) Text-Encoder shows that knowledge location matters, i.e., restoring causal layers in a corrupted model causes the model to obey the prompt again, while restoring non-causal layers does not. 
(a) Clean Model: We prompt a Stable-Diffusion model in the conventional way and generate an image as output. (b) Corrupted Model: Token embeddings corresponding to attribute of interest are corrupted, leading to a generated image that does not obey the prompt. (c) Restored (Causal) Model: Causal layer activations are now copied from the clean model to the corrupted model. We observe that the corrupted model can now generate images with high fidelity to the original caption. (d) Restored (Non-Causal) Model: Non-causal layer activations are copied from the clean model to the corrupted model, but we now observe that the generated image does not obey the prompt. Note that a single layer is copied at a time, and it can be from either the UNet (Option 1, solid violet arrow) or the text-encoder (Option 2, broken black arrow). construct and subsequently generate an image. This leads to an important question: How and where is knowledge corresponding to various visual attributes stored in text-to-image models? In this work, we empirically study this question towards understanding how knowledge corresponding to different visual attributes is stored in text-to-image models, using Stable Diffusion(Rombach et al., 2021) as a representative model. In particular, we adapt Causal Mediation Analysis (Vig et al., 2020; Pearl, 2013) for large-scale text-to-image diffusion models to identify specific causal components in the (i) UNet and (ii) the text-encoder where visual attribute knowledge resides. Previously, Causal Meditation Analysis has been used for understanding where factual knowledge is stored in LLMs. In particular, (Meng et al., 2023) find that factual knowledge is localized and stored in the mid-MLP layers of a LLM such as GPT-J (Wang & Komatsuzaki, 2021). Our work, however, paints a different picture for multimodal text-to-image models, we specifically find that knowledge is not localized to one particular component. Instead, there exist various components in the UNet where knowledge is stored. However, each of these components store attribute information with a different efficacy and often different attributes have a distinct set of causal components where knowledge is stored. For e.g., for style \u2013 we find that the first self-attention layer in the UNet stores style related knowledge, however it is not causally important for other attributes such as objects, viewpoint or action. To our surprise, we specifically find that the cross-attention layers are not causally important states and a significant amount of knowledge is in fact stored in components such as the ResNet blocks and the self-attention blocks. Remarkably, in the text-encoder, we find that knowledge corresponding to distinct attributes is strongly localized, contrary to the UNet. However unlike generative language models (Meng et al., 2023) where the mid MLP layers are causal states, we find that the first self-attention layer is causal in the CLIP based text-encoders of public text-to-image generative models (e.g., Stable-Diffusion). Identification of local causal states in a given model has a crucial benefit: it allows for incorporating controlled edits to the model by updating only a tiny fraction of the model parameters without any fine-tuning. Using our observation that the text-encoder hosts only one localized causal state, we introduce a new data-free and fast model editing method DIFF-QUICKFIX which can edit concepts in text-to-image models effectively using a closed-form update. 
In particular, we show that DIFF2 \fPreprint QUICKFIX can (i) remove copyrighted styles, (ii) trademarked objects as well as (iii) update stale knowledge 1000x faster than existing fine-tuning based editing methods such as (Kumari et al., 2023; Gandikota et al., 2023a) with comparable or even better performance in some cases. In summary, our contributions are as follows: \u2022 We adapt Causal Mediation Analysis (Pearl, 2013; Vig et al., 2020) to large-scale text-toimage models (with Stable-Diffusion as a representative model), and use it to trace knowledge corresponding to various visual attributes in the UNet and text-encoder. \u2022 We perform large-scale analysis of the identified causal components and shed light on the knowledge flow corresponding to various visual attributes in the UNet and the text-encoder. \u2022 Leveraging the interpretability observations of localized causal states in the text-encoder, we develop a light-weight method DIFF-QUICKFIX which can edit various concepts in text-to-image models in under a second, 1000x faster than existing concept ablating methods Kumari et al. (2023); Gandikota et al. (2023a). 2 RELATED WORKS Text-to-Image Diffusion Models. In the last year, a large number of text-to-image models such as Stable-Diffusion (Rombach et al., 2021), DALLE (Ramesh et al., 2021) , Imagen (Saharia et al., 2022) and others (Balaji et al., 2023; Chang et al., 2023; Ding et al., 2022; Kang et al., 2023) have been released. In addition, the open-source community has released DeepFloyd1 and Midjourney2 which can generate photorealistic images given a text prompt. While most of these models operate in the latent space of the images, they differ in the text-encoder used. For e.g., Stable-Diffusion uses CLIP for the text-encoder, whereas Imagen uses T5. These text-to-image diffusion models have been used as a basis for various applications such as image-editing, semantic-segmentation, object-detection, image restoration and zero-shot classification. Intepretability of Text-to-Image Models. To our knowledge, few works delve into the mechanisms of large text-to-image models like Stable-Diffusion. DAAM (Tang et al., 2022) interprets diffusion models by analyzing cross-attention maps between text tokens and images, emphasizing their semantic accuracy for interpretation. In contrast, our approach focuses on comprehending the inner workings of diffusion models by investigating the storage of visual knowledge related to different attributes. We explore various model layers beyond just the cross-attention layer. Editing Text-to-Image Models. Understanding knowledge storage in diffusion models has significant implications for model editing. This ability to modify a diffusion model\u2019s behavior without retraining from scratch were first explored in Concept-Ablation (Kumari et al., 2023) and ConceptErasure (Gandikota et al., 2023a). TIME (Orgad et al., 2023) is another model editing method which translates between concepts by modifying the key and value matrices in cross-attention layers. However, the experiments in (Orgad et al., 2023) do not specifically target removing or updating concepts such as those used in (Kumari et al., 2023; Gandikota et al., 2023a). We also acknowledge concurrent works (Gandikota et al., 2023b) and (Arad et al., 2023) use a closed-form update on the cross-attention layers and text-encoder respectively to ablate concepts. 
However, we note that our work focuses primarily on first understanding how knowledge is stored in text-to-image models and subsequently using this information to design a closed-form editing method for editing concepts. 3 CAUSAL TRACING FOR TEXT-TO-IMAGE GENERATIVE MODELS In this section, we first provide a brief overview of diffusion models in Sec.(3.1). We then describe how causal tracing is adapted to multimodal diffusion models such as Stable-Diffusion. 3.1 BACKGROUND Diffusion models are inspired by non-equilibrium thermodynamics and specifically aim to learn to denoise data through a number of steps. Usually, noise is added to the data following a Markov chain across multiple time-steps t \u2208[0, T]. Starting from an initial random real image x0, the noisy 1https://www.deepfloyd.ai 2https://www.midjourney.com/ 3 \fPreprint image at time-step t is defined as xt = \u221a\u03b1tx0 + p (1 \u2212\u03b1t)\u03f5. In particular, \u03b1t determines the strength of the random Gaussian noise and it gradually decreases as the time-step increases such that xT \u223cN(0, I). The denoising network denoted by \u03f5\u03b8(xt, c, t) is pre-trained to denoise the noisy image xt to obtain xt\u22121. Usually, the conditional input c to the denoising network \u03f5\u03b8(.) is a text-embedding of a caption c through a text-encoder c = v\u03b3(c) which is paired with the original real image x0. The pre-training objective for diffusion models can be defined as follows for a given image-text pair denoted by (x, c): L(x, c) = E\u03f5,t||\u03f5 \u2212\u03f5\u03b8(xt, c, t)||2 2, (1) where \u03b8 is the set of learnable parameters. For better training efficiency, the noising as well as the denoising operation occurs in a latent space defined by z = E(x) Rombach et al. (2021). In this case, the pre-training objective learns to denoise in the latent space as denoted by: L(x, c) = E\u03f5,t||\u03f5 \u2212\u03f5\u03b8(zt, c, t)||2 2, (2) where zt = E(xt) and E is an encoder such as VQ-VAE (van den Oord et al., 2018). During inference, where the objective is to synthesize an image given a text-condition c, a random Gaussian noise xT \u223cN(0, I) is iteratively denoised for a fixed range of time-steps in order to produce the final image. We provide more details on the pre-training and inference steps in Appendix L. 3.2 ADAPTING CAUSAL TRACING FOR TEXT-TO-IMAGE DIFFUSION MODELS Causal Mediation Analysis (Pearl, 2013; Vig et al., 2020) is a method from causal inference that studies the change in a response variable following an intervention on intermediate variables of interest (mediators). One can think of the internal model components (e.g., specific neurons or layer activations) as mediators along a directed acyclic graph between the input and output. For textto-image diffusion models, we use Causal Mediation Analysis to trace the causal effects of these internal model components within the UNet and the text-encoder which contributes towards the generation of images with specific visual attributes (e.g., objects, style). For example, we find the subset of model components in the text-to-image model which are causal for generating images with specific objects, styles, viewpoints, action or color. Where is Causal Tracing Performed? We identify the causal model components in both the UNet \u03f5\u03b8 and the text-encoder v\u03b3. 
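For concreteness, the denoising objective in Eqs. (1)-(2) above corresponds to a training step like the following sketch; `eps_model`, `text_encoder` and the cumulative noise schedule `alphas_cumprod` are generic stand-ins for the UNet, the text-encoder and a standard DDPM schedule rather than anything specific to this paper.

```python
import torch
import torch.nn.functional as F

def diffusion_training_step(eps_model, text_encoder, z0, caption_ids, alphas_cumprod):
    """One denoising training step: noise a clean latent z0 and regress the noise (Eq. 2)."""
    t = torch.randint(0, len(alphas_cumprod), (z0.shape[0],), device=z0.device)
    alpha_bar = alphas_cumprod[t].view(-1, 1, 1, 1)      # z0 is a (B, C, H, W) latent batch
    noise = torch.randn_like(z0)
    z_t = alpha_bar.sqrt() * z0 + (1.0 - alpha_bar).sqrt() * noise   # forward noising
    c = text_encoder(caption_ids)                        # conditioning embedding c
    return F.mse_loss(eps_model(z_t, c, t), noise)       # ||eps - eps_theta(z_t, c, t)||^2
```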
For \u03f5\u03b8, we perform the causal tracing at the granularity of layers, whereas for the text-encoder, causal tracing is performed at the granularity of hidden states of the token embeddings in c across distinct layers. The UNet \u03f5\u03b8 consists of 70 unique layers distributed amongst three types of blocks: (i) down-block; (ii) mid-block and (iii) up-block. Each of these blocks contain varying number of cross-attention layers, self-attention layers and residual layers. Fig 1 visualizes the internal states of the UNet and how causal tracing for knowledge attribution is performed. For the text-encoder v\u03b3, there are 12 blocks in total with each block consisting of a self-attention layer and a MLP layer (see Fig 1). We highlight that the text-encoder in text-to-image models such as Stable-Diffusion has a GPT-style architecture with a causal self-attention, though it\u2019s pre-trained without a language modeling objective. More details on the layers used in Appendix J. Given a caption c, an image x is generated starting from some random Gaussian noise. This image x encapsulates the visual properties embedded in the caption c. For e.g., the caption c can contain information corresponding from objects to action etc. We specifically identify distinct components in the UNet and the text-encoder which are causally responsible for these properties. Creating the Probe Captions. We primarily focus on four different visual attributes for causal tracing: (i) objects; (ii) style; (iii) color; and (iv) action. In particular, identifying the location of knowledge storage for objects and style can be useful to perform post-hoc editing of diffusion models to edit concepts (e.g., delete or update certain concepts). We provide the complete details about the probe dataset used for causal tracing in Appendix A. The probe dataset also contains additional captions for viewpoint and count attribute. However, we do not focus on them as often the generations from the unedited model are erroneous for these attributes (see Appendix E for details). 3.3 TRACING KNOWLEDGE IN UNET During inference, classifier-free guidance (Ho & Salimans, 2022) is used to regulate imagegeneration by incorporating scores from the conditional and unconditional diffusion model at each 4 \fPreprint self-attention-0 down-1-resnet-0 down-block mid-block up-block down-1-resnet-1 down-0-resnet-0 mid-block-cross-attn mid-block-resnet-1 down-1-ff Original Corrupted Causal State Non-Causal State Prompt: \u2018Airplane in the style of Van Gogh\u2019 Prompt: \u2018A photo of a vase in the kitchen\u2019 Prompt: \u2018A photo of a dog running\u2019 Prompt: \u2018A black bag\u2019 self-attention-0 down-1-resnet-1 mid-block-cross-attn down-1-ff Style Objects Action Color Higher CLIP-Score Layers in UNet Figure 2: Causal Tracing Results for the UNet: Knowledge is Distributed. The intensity of the bars indicate the CLIP-Score between the generated image (after causal intervention) and the original caption. For each attribute, we find that the causal states are distributed across the UNet and the distribution varies amongst distinct attributes. For e.g., self-attn in the first layer is causal for style, but not for objects, action or color. Similarly, mid-block cross-attn is causal for action, but not for the other attributes. On the right-side, we visualize the images generated by (i) Original model; (ii) Corrupted Model; (iii) Restored causal states and (iv) Restored non-causal states in the UNet for style, action, object, color attributes. 
of the time-steps. In particular, at each time-step, classifier-free guidance is used in the following way to combine the conditional (\u03f5\u03b8(zt, c, t)) and unconditional score estimates (\u03f5\u03b8(zt, t)) at each time-step t to obtain the combined score denoted as \u02c6 \u03f5(zt, c, t): \u02c6 \u03f5\u03b8(zt, c, t) = \u03f5\u03b8(zt, c, t) + \u03b1(\u03f5\u03b8(zt, c, t) \u2212\u03f5\u03b8(zt, t)), \u2200t \u2208[T, 1]. (3) This combined score is used to update the latent zt using DDIM sampling (Song et al., 2020) at each time-step iteratively to obtain the final latent code z0. To perform causal tracing on the UNet \u03f5\u03b8 (see Fig 1 for visualization), we perform a sequence of operations that is somewhat analogous to earlier work from (Meng et al., 2023) which investigated knowledge-tracing in large language models. We consider three types of model configurations: (i) a clean model \u03f5\u03b8, where classifier-free guidance is used as default; (ii) a corrupted model \u03f5corr \u03b8 , where the word embedding of the subject (e.g., Van Gogh) of a given attribute (e.g., style) corresponding to a caption c is corrupted with Gaussian Noise; and, (iii) a restored model \u03f5restored \u03b8 , which is similar to \u03f5corr \u03b8 except that one of its layers is restored from the clean model at each time-step of the classifierfree guidance. Given a list of layers A, let ai \u2208A denote the ith layer whose importance needs to be evaluated. Let \u03f5\u03b8[ai], \u03f5corr \u03b8 [ai] and \u03f5restored \u03b8 [ai] denote the activations of layer ai. To find the importance of layer ai for a particular attribute embedded in a caption c, we perform the following replacement operation on the corrupted model \u03f5corr \u03b8 to obtain the restored model \u03f5restored \u03b8 : \u03f5restored \u03b8 [ai] : \u03f5corr \u03b8 [ai] = \u03f5\u03b8[ai]. (4) Next, we obtain the restored model by replacing the activations of layer ai of the corrupted model with those of the clean model to get a restored layer \u03f5restored \u03b8 [ai]. We run classifier-free guidance to obtain the combined score estimate: \u02c6 \u03f5restored \u03b8 (zt, c, t) = \u03f5restored \u03b8 (zt, c, t)+\u03b1(\u03f5restored \u03b8 (zt, c, t)\u2212\u03f5restored \u03b8 (zt, t)), \u2200t \u2208[T, 1]. (5) The final latent z0 is obtained with the score from Equation (5) at each time-step using DDIM (Song et al., 2020) and passed through the VQ-VAE decoder to obtain the final image xrestored 0 . 5 \fPreprint Higher CLIP-Score Original Corrupted Causal State Non-Causal State Prompt: \u2018A photo of a sandwich in the room\u2019 Prompt: \u2018A deer running\u2019 Self-attn-0 Self-attn-0 Figure 3: Causal Tracing in the Text-Encoder: Knowledge is Localized. In the CLIP textencoder used for Stable-Diffusion, we find the existence of only one causal state, which is the first self-attention layer corresponding to the last subject token. The CLIP-Score(Left) is computed across all the four visual attributes. Visualizations (Right) further illustrate that restoring the sole causal state (self-attn-0) leads to image generation with high fidelity to the original captions. 3.4 TRACING KNOWLEDGE IN THE TEXT-ENCODER The text-encoder in public text-to-image models such as Stable-Diffusion is a CLIP-ViT-L/336px text-encoder Rombach et al. (2021). 
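A schematic PyTorch sketch of the layer-restoration step in Eqs. (4)-(5) above is shown below. Here `run_sampler` stands in for the full classifier-free-guidance DDIM loop and is not a real diffusers API; the sketch also assumes the traced module returns a single tensor and is invoked the same number of times in the clean and corrupted runs.

```python
import torch

def trace_unet_layer(unet, layer_name, clean_inputs, corrupted_inputs, run_sampler):
    """Causal tracing for one UNet layer: record its activations during a clean run,
    then replay them during a run on corrupted inputs (the restore operation in Eq. 4)."""
    layer = dict(unet.named_modules())[layer_name]
    cache, call_idx = [], {"i": 0}

    # Pass 1: clean run; cache the layer's output at every forward call during sampling.
    handle = layer.register_forward_hook(lambda mod, inp, out: cache.append(out.detach()))
    run_sampler(unet, clean_inputs)
    handle.remove()

    # Pass 2: corrupted run; overwrite this layer's output with the cached clean activation.
    def restore_hook(mod, inp, out):
        restored = cache[call_idx["i"]]
        call_idx["i"] += 1
        return restored

    handle = layer.register_forward_hook(restore_hook)
    restored_image = run_sampler(unet, corrupted_inputs)   # x_0^restored
    handle.remove()
    return restored_image
```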
Similar to Sec.(3.3), we define three states of the CLIP textencoder: (i) Clean model denoted by v\u03b3; (ii) Corrupted model vcorr \u03b3 where the word embedding of the subject in a given caption c is corrupted; (iii) Restored model vrestored \u03b3 which is similar to vcorr \u03b3 except that one of its layers is copied from v\u03b3. Similar to Sec.(3.3), to find the effect of the layer ai \u2208A, where A consists of all the layers to probe in the CLIP text-encoder: vrestored \u03b3 [ai] : vcorr \u03b3 [ai] = v\u03b3[ai], (6) We then use the restored text-encoder vrestored \u03b3 with classifier-free guidance to obtain the final score estimate: \u02c6 \u03f5\u03b8(zt, c\u2032, t) = \u03f5\u03b8(zt, c\u2032, t) + \u03b1(\u03f5\u03b8(zt, c\u2032, t) \u2212\u03f5\u03b8(zt, t)), \u2200t \u2208[T, 1] (7) where c\u2032 = vrestored \u03b3 [ai](c) for a given caption c. This score estimate \u02c6 \u03f5\u03b8(zt, c\u2032, t) at each time-step t is used to obtain the final latent code z0 which is then used with the VQ-VAE decoder to obtain the final image xrestored 0 . 3.5 EXTRACTING CAUSAL STATES USING CLIP-SCORE In this section, we discuss details on how to retrieve causal states using automated metrics such as CLIP-Score (Hessel et al., 2021). Let xrestored 0 (ai) be the final image generated by the diffusion model after intervening on layer ai, x0 be the image generated by the clean diffusion model and xcorr be the final image generated by the corrupted model. In particular, we are interested in the average indirect effect (Vig et al., 2020; Pearl, 2013) which measures the difference between the corrupted model and the restored model. Intuitively, a higher value of average indirect effect (AIE) signifies that the restored model deviates from the corrupted model. To compute the average indirect effect with respect to causal mediation analysis for text-to-image models such as Stable-Diffusion, we use CLIP-Score which computes the similarity between an image embedding and a caption embedding. In particular, AIE = |CLIPScore(xrestored 0 , c)\u2212CLIPScore(xcorr 0 , c)|. Given xcorr 0 is common across all the layers for a caption, we can use CLIPScore(xrestored 0 , c) as the AIE. Selecting Threshold for CLIP-Score. In order to determine the optimal threshold value for CLIP-Score, we select a small validation set of 10 prompts per attribute. To this end, we establish a concise user study interface (refer to Appendix D for details). Through human participation, we collect binary ratings if an image generated by restoring a particular layer is faithful to the original captions. We then extract the common causal states across all the prompts for a given attribute and find the average (across all the prompts) CLIP-Score for each causal state. We then use the lowest average CLIP-Score corresponding to a causal state as the threshold, which we apply on the probe dataset in Appendix A to filter the causal states at scale for each attribute separately. 4 HOW IS KNOWLEDGE STORED IN TEXT-TO-IMAGE MODELS? In this section, we discuss the results of tracing knowledge across various components of the textto-image model in details. 6 \fPreprint Tracing Results for UNet. In Fig 2, we illustrate the distribution of causal states across different visual attributes within the UNet architecture using the CLIP-Score metric. This metric evaluates the faithfulness of the image produced by the restored state xrestored 0 compared to the original caption c. 
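A sketch of the CLIP-Score computation used to extract causal states (Section 3.5) is shown below, built on the Hugging Face CLIP implementation. The checkpoint choice is ours, and the score is a raw cosine similarity without the rescaling used in some CLIPScore variants.

```python
import torch
from transformers import CLIPModel, CLIPProcessor

model = CLIPModel.from_pretrained("openai/clip-vit-large-patch14")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-large-patch14")

@torch.no_grad()
def clip_score(caption: str, image) -> float:
    """Cosine similarity between CLIP embeddings of a caption and a PIL image."""
    inputs = processor(text=[caption], images=image, return_tensors="pt", padding=True)
    img = model.get_image_features(pixel_values=inputs["pixel_values"])
    txt = model.get_text_features(input_ids=inputs["input_ids"],
                                  attention_mask=inputs["attention_mask"])
    img = img / img.norm(dim=-1, keepdim=True)
    txt = txt / txt.norm(dim=-1, keepdim=True)
    return float((img * txt).sum())

def average_indirect_effect(caption, restored_image, corrupted_image):
    # Since the corrupted image is shared across layers for a given caption,
    # ranking layers by clip_score(caption, restored_image) alone is equivalent in practice.
    return abs(clip_score(caption, restored_image) - clip_score(caption, corrupted_image))
```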
From the insights derived in Fig 2, it becomes evident that causal states are spread across diverse components of the UNet. In particular, we find that the density of the causal states are more in the up-block of the UNet when compared to the down-block or the mid-block. Nonetheless, a notable distinction emerges in this distribution across distinct attributes. For instance, when examining the style attribute, the initial self-attention layer demonstrates causality, whereas this causal relationship is absent for other attributes. Similarly, in the context of the action attribute, the cross-attention layer within the mid-block exhibits causality, which contrasts with its non-causal behavior concerning other visual attributes. Fig 2 showcases the images generated by restoring both causal and non-causal layers within the UNet. A comprehensive qualitative enumeration of both causal and non-causal layers for each visual attribute is provided in Appendix B. Our findings underscore the presence of information pertaining to various visual attributes in regions beyond the cross-attention layers. Importantly, we observe that the distribution of information within the UNet diverges from the patterns identified in extensive generative language models, as noted in prior research (Meng et al., 2023), where attribute-related knowledge is confined to a few proximate layers. In Appendix M, we provide additional causal tracing results, where we add Gaussian noise to the entire text-embedding. Even in such a case, certain causal states can restore the model close to its original configuration, highlighting that the conditional information can be completely bypassed if certain causal states are active. Tracing Results for Text-Encoder. In Fig 3, we illustrate the causal states in the text-encoder for Stable-Diffusion corresponding to various visual attributes. At the text-encoder level, we find that the causal states are localized to the first self-attention layer corresponding to the last subject token across all the attributes. In fact, there exists only one causal state in the text-encoder. Qualitative visualizations in Fig 3 and Appendix C illustrate that the restoration of layers other than the first self-attention layer corresponding to the subject token does not lead to images with high fidelity to the original caption. Remarkably, this observation is distinct from generative language models where factual knowledge is primarily localized in the proximate mid MLP layers Meng et al. (2023). General Takeaway. Causal components corresponding to various visual attributes are dispersed (with a different distribution between distinct attributes) in the UNet, whereas there exists only one causal component in the text-encoder. The text-encoder\u2019s strong localization of causal states for visual attributes enables controlled knowledge manipulation in text-to-image models, facilitating updates or removal of concepts. However, since attribute knowledge is dispersed in the UNet, targeted editing is challenging without layer interference. While fine-tuning methods for UNet model editing exist (Gandikota et al., 2023a; Kumari et al., 2023), they lack scalability and don\u2019t support simultaneous editing of multiple concepts. In the next section, we introduce a closed-form editing method, DIFF-QUICKFIX, leveraging our causal tracing insights to efficiently edit various concepts in text-to-image models. 
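For completeness, the corruption step that underlies all of the tracing results above (Gaussian noise added to the subject's token embeddings, Section 3.3 and Figure 1) can be sketched with a forward hook on the CLIP token-embedding layer. The noise scale and checkpoint below are illustrative choices, not the paper's exact settings.

```python
import torch
from transformers import CLIPTextModel, CLIPTokenizer

tokenizer = CLIPTokenizer.from_pretrained("openai/clip-vit-large-patch14")
text_encoder = CLIPTextModel.from_pretrained("openai/clip-vit-large-patch14")

def encode_with_corruption(caption: str, subject: str, noise_scale: float = 3.0):
    """Encode a caption while adding Gaussian noise to the subject's token embeddings."""
    ids = tokenizer(caption, return_tensors="pt").input_ids
    subj_ids = set(tokenizer(subject, add_special_tokens=False).input_ids)
    positions = [i for i, tok in enumerate(ids[0].tolist()) if tok in subj_ids]

    def noise_hook(module, inputs, output):
        output = output.clone()
        output[0, positions] += noise_scale * torch.randn_like(output[0, positions])
        return output

    handle = text_encoder.text_model.embeddings.token_embedding.register_forward_hook(noise_hook)
    with torch.no_grad():
        corrupted = text_encoder(ids).last_hidden_state   # conditioning c of the corrupted model
    handle.remove()
    return corrupted
```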
5 DIFF-QUICKFIX: FAST MODEL EDITING FOR TEXT-TO-IMAGE MODELS 5.1 EDITING METHOD Recent works such as (Kumari et al., 2023; Gandikota et al., 2023a) edit concepts from text-to-image diffusion models by fine-tuning the UNet. They generate training data for fine-tuning using the pretrained diffusion model itself. While both methods are effective at editing concepts, fine-tuning the UNet can be expensive due to backpropogation of gradients through the UNet. To circumvent this issue, we design a fast, data-free model editing method leveraging our interpretability observations in Section 4, where we find that there exists only one causal state (the very first self-attention layer) in the text-encoder for Stable-Diffusion. Our editing method DIFF-QUICKFIX can update text-to-image diffusion models in a targeted way in under 1s through a closed-form update making it 1000x faster than existing fine-tuning based concept ablating methods such as (Kumari et al., 2023; Gandikota et al., 2023a). The first self-attention layer in the text-encoder for Stable-Diffusion contains four updatable weight matrices: Wk, Wq, Wv and Wout, where Wk, Wq, Wv are the projection matrices for the key, query and value embeddings respectively. Wout is the projection matrix before the output from the self-attn-0 layer after 7 \fPreprint Causal Layer (a) (b) Editing Causal Layers vs. Non-Causal Layers Comparison with Other Methods 1s ~6min ~6.5min 1s ~6min ~6.5min Editing time per concept Effectiveness of Multi-Concept Ablated Model (c) Figure 4: Quantitative Analysis of DIFF-QUICKFIX. (a) Editing Causal vs. Non-Causal Layers (Averaged across Objects, Style and Facts): Lower CLIP-Score for causal layer indicates successful edits; (b) Efficacy of DIFF-QUICKFIX when compared to other methods \u2013 Our method leads to comparable CLIP-Scores to fine-tuning based approaches, but can edit concepts 1000x faster; (c) DIFF-QUICKFIX can be used to effectively edit multiple concepts at once, shown by comparable CLIP-Scores to the single-concept edited ones. the attention operations. DIFF-QUICKFIX specifically updates this Wout matrix by collecting caption pairs (ck, cv) where ck (key) is the original caption and cv (value) is the caption to which ck is mapped. For e.g., to remove the style of \u2018Van Gogh\u2019, we set ck = \u2018Van Gogh\u2019 and cv = \u2018Painting\u2019. In particular, to update Wout, we solve the following optimization problem: min Wout N X i=1 \u2225Woutki \u2212vi\u22252 2 + \u03bb\u2225Wout \u2212W \u2032 out\u22252 2, (8) where \u03bb is a regularizer to not deviate significantly from the original pre-trained weights W \u2032 out, N denotes the total number of caption pairs containing the last subject token embeddings of the key and value. ki corresponds to the embedding of cki after the attention operation using Wq, Wk and Wv for the ith caption pair. vi corresponds to the embedding of cvi after the original pre-trained weights W \u2032 out acts on it. One can observe that Eq. (8) has a closed-form solution due to the absence of any non-linearities. In particular, the optimal Wout can be expressed as the following: Wout = (\u03bbW \u2032 out + N X i=1 vikT i )(\u03bbI + N X i=1 kikT i )\u22121, (9) In Section 5.3, we show qualitative as well as quantitative results using DIFF-QUICKFIX for editing various concepts in text-to-image models. 5.2 EXPERIMENTAL SETUP We validate DIFF-QUICKFIX by applying edits to a Stable-Diffusion (Rombach et al., 2021) model and quantifying the efficacy of the edit. 
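The closed-form solution in Eq. (9) above is straightforward to implement. The sketch below is a minimal NumPy version, where `K` and `V` stack the key/value embeddings k_i and v_i described in the text; the array names and regularizer value are ours.

```python
import numpy as np

def diff_quickfix_update(K: np.ndarray,          # (N, d_k) rows are k_i (source embeddings)
                         V: np.ndarray,          # (N, d_v) rows are v_i (target embeddings)
                         W_out_old: np.ndarray,  # (d_v, d_k) pre-trained projection W'_out
                         lam: float = 0.1):      # regularizer (illustrative value)
    """Closed form of Eq. (8): W_out = (lam*W'_out + sum_i v_i k_i^T)(lam*I + sum_i k_i k_i^T)^-1."""
    d_k = K.shape[1]
    A = lam * W_out_old + V.T @ K          # sum_i v_i k_i^T
    B = lam * np.eye(d_k) + K.T @ K        # sum_i k_i k_i^T
    return A @ np.linalg.inv(B)            # (d_v, d_k) edited W_out
```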
For removing concepts such as artistic styles or objects using DIFF-QUICKFIX, we use the prompt dataset from (Kumari et al., 2023). For updating knowledge (e.g., President of a country) in text-to-image models, we add newer prompts to the prompt dataset from (Kumari et al., 2023) and provide further details in Appendix N. We compare our method with (i) Original Stable-Diffusion; (ii) Editing methods from (Kumari et al., 2023) and (Gandikota et al., 2023a). To validate the effectiveness of editing methods including our DIFF-QUICKFIX, we perform evaluation using automated metrics such as CLIP-Score. In particular, we compute the CLIP-Score between the images from the edited model and the concept corresponding to the visual attribute which is edited. A low CLIP-Score therefore indicates correct edits. 5.3 EDITING RESULTS Editing Non-causal Layers Does Not Lead to Correct Edits. We use DIFF-QUICKFIX with the non-causal self-attention layers in the text-encoder to ablate styles, objects and update facts. In Fig 4-(a), we compute the CLIP-Score between the generated images and the attribute from the original captions (e.g., van gogh in the case of style). In particular, we find that editing the noncausal layers does not lead to any intended model changes \u2013 highlighted by the high CLIP-Scores consistently across non-causal layers (layers numbered 1 to 11). However, editing the sole causal layer (layer-0) leads to correct model changes, highlighted by the lower CLIP-Score between the 8 \fPreprint Before Edit After Edit Before Edit After Edit Before Edit After Edit Removing Snoopy Removing Van Gogh Style Removing R2D2 Removing Monet Style Updating the President of US Removing Nemo Removing Grumpy Cat Updating the British Monarch Before Edit After Edit Figure 5: Qualitative Examples with using DIFF-QUICKFIX to ablate style, objects and update facts in text-to-image models. More qualitative examples in the Appendix F. generated images from the edited model and the attribute from the original captions. This shows that identifying the causal states in the model is particularly important to perform targeted model editing for ablating concepts. In Appendix G, we show additional qualitative visualizations highlighting that editing the non-causal states lead to similar model outputs as the unedited model. Efficacy in Removing Styles and Objects. Fig 4-(b) shows the average CLIP-Score of the generated images from the edited model computed with the relevant attributes from the original captions. We find that the CLIP-Score from the edited model with DIFF-QUICKFIX decreases when compared to the generations from the unedited model. We also find that our editing method has comparable CLIP-Scores to other fine-tuning based approaches such as Concept-Erase (Gandikota et al., 2023a) and Concept-Ablation (Kumari et al., 2023), which are more computationally expensive. Fig 5 shows qualitative visualizations corresponding to images generated by the text-to-image model before and after the edit operations. Together, these quantitative and qualitative results show that DIFF-QUICKFIX is able to effectively remove various styles and objects from an underlying text-to-image model. In Appendix F we provide additional qualitative visualizations and in Fig 52 we show additional results showing that our editing method does not harm surrounding concepts (For e.g., removing the style of Van Gogh does not harm the style of Monet). Efficacy in Updating Stale Knowledge. 
The CLIP-Score between the generated images and a caption designating the incorrect fact (e.g., Donald Trump as the President of the US) decreases from 0.28 to 0.23 after editing with DIFF-QUICKFIX, while the CLIP-Score with the correct fact (e.g., Joe Biden as the President of the US) increases from 0.22 to 0.29 after the relevant edit. This shows that the incorrect fact is updated with the correct fact in the text-to-image model. Additional qualitative visualizations are provided in Fig 5 and Appendix F. Multiple Edits using DIFF-QUICKFIX. An important feature of DIFF-QUICKFIX is its capability to ablate multiple concepts simultaneously. In Fig 4-(c), our framework demonstrates the removal of up to 10 distinct styles and objects at once. This multi-concept ablation results in lower CLIP-Scores compared to the original model, similar CLIP-Scores to single concept editing. This scalability suggests our framework\u2019s potential for large-scale multi-concept editing. In Appendix H, we provide qualitative visualizations of generations from the multi-concept ablated model, showcasing the effectiveness of our editing method in removing multiple concepts. Additionally, we highlight DIFF-QUICKFIX\u2019s efficiency in eliminating a larger number of artistic styles, successfully removing 50 top artistic styles from Stable-Diffusion. 6" + }, + { + "url": "http://arxiv.org/abs/2310.02426v1", + "title": "EditVal: Benchmarking Diffusion Based Text-Guided Image Editing Methods", + "abstract": "A plethora of text-guided image editing methods have recently been developed\nby leveraging the impressive capabilities of large-scale diffusion-based\ngenerative models such as Imagen and Stable Diffusion. A standardized\nevaluation protocol, however, does not exist to compare methods across\ndifferent types of fine-grained edits. To address this gap, we introduce\nEditVal, a standardized benchmark for quantitatively evaluating text-guided\nimage editing methods. EditVal consists of a curated dataset of images, a set\nof editable attributes for each image drawn from 13 possible edit types, and an\nautomated evaluation pipeline that uses pre-trained vision-language models to\nassess the fidelity of generated images for each edit type. We use EditVal to\nbenchmark 8 cutting-edge diffusion-based editing methods including SINE, Imagic\nand Instruct-Pix2Pix. We complement this with a large-scale human study where\nwe show that EditVall's automated evaluation pipeline is strongly correlated\nwith human-preferences for the edit types we considered. From both the human\nstudy and automated evaluation, we find that: (i) Instruct-Pix2Pix, Null-Text\nand SINE are the top-performing methods averaged across different edit types,\nhowever {\\it only} Instruct-Pix2Pix and Null-Text are able to preserve original\nimage properties; (ii) Most of the editing methods fail at edits involving\nspatial operations (e.g., changing the position of an object). (iii) There is\nno `winner' method which ranks the best individually across a range of\ndifferent edit types. We hope that our benchmark can pave the way to developing\nmore reliable text-guided image editing tools in the future. 
We will publicly\nrelease EditVal, and all associated code and human-study templates to support\nthese research directions in https://deep-ml-research.github.io/editval/.", + "authors": "Samyadeep Basu, Mehrdad Saberi, Shweta Bhardwaj, Atoosa Malemir Chegini, Daniela Massiceti, Maziar Sanjabi, Shell Xu Hu, Soheil Feizi", + "published": "2023-10-03", + "updated": "2023-10-03", + "primary_cat": "cs.CV", + "cats": [ + "cs.CV" + ], + "main_content": "INTRODUCTION Large-scale text-to-image diffusion models such as Stable-Diffusion, Imagen and DALL-E (Rombach et al., 2022; Ho et al., 2021a; Balaji et al., 2023; Saharia et al., 2022; Ho et al., 2021b) have seen rapid advances over the last years, demonstrating impressive image generation capabilities across a wide set of domains. A highly impactful use-case of these models lies in using them to edit images via natural language prompts (Hertz et al., 2022; Kawar et al., 2023; Mokady et al., 2022; Zhang et al., 2022; Ruiz et al., 2023; Shi et al., 2023; Couairon et al., 2022; Meng et al., 2022; Brooks et al., 2023). This capability has a great number of industrial applications, including design, manufacturing and engineering, but can also be used as a tool to accelerate machine learning research. For example, a model can be prompted to generate counterfactual examples to probe its interpretability, or rare examples that are used to augment training datasets to improve a model\u2019s out-of-distribution robustness (Vendrow et al., 2023; Trabucco et al., 2023). Evaluating diffusion based text-guided image editing models, however, is challenging due to the difficulties in measuring how faithfully a generated image obeys a requested edit. Moreover, there are broad classes of edits for which methods need to be evaluated. Typically, a CLIP image-text similarity score (Hessel et al., 2021) is used to quantify the efficacy of a given edit. However, these scores have been shown to not always be reliable (Goel et al., 2022). CLIP scores also cannot tease *: Equal Contribution; All collection and processing of data as well as all experiments were solely conducted by UMD. The final EditVal will also be released by UMD. 1 arXiv:2310.02426v1 [cs.CV] 3 Oct 2023 \fPreprint Original Pix2Pix Dreambooth Null Text SINE \u00a0 Background-change: \u2018Change background to desert\u2019 \u00a0 \u00a0 Position-Replacement: \u2018Move stop-sign to center\u2019 \u00a0 Color: \u2018Color of the boat to black\u2019 Object addition: \u2018Add a wine glass\u2019 \u00a0 Alter parts:\u2019Add a sweater\u2019 \u00a0 Figure 1: Qualitative Examples from Image Editing on EDITVAL. We find that for non-spatial edits (e.g., Changing background, color of an object, adding objects), Instruct-Pix2Pix performs well, while other methods struggle. For spatial edits (e.g., Position-replacement), none of the editing methods lead to effective edits. apart particular aspects of an edit, for example, if changing the position of a particular object leaves the rest of the image unchanged (Gokhale et al., 2023). These gaps could be addressed by using human evaluators, but this is usually not scalable and thus limits the scope of edits and datasets that can be considered. Moreover, human studies often lack a standardized protocol, making it difficult to fairly compare methods. To address these issues, we introduce EDITVAL, a standardized benchmark for evaluating text-guided image editing methods at scale across a wide range of edit types. 
Our benchmark consists of 3 components: i) a curated set of test images from MS-COCO (Lin et al., 2014) spanning 19 object classes, ii) a set of manually defined editable attributes for each image based on 13 possible edit types (e.g. adding an object, changing an object\u2019s position), and iii) two standardized pipelines \u2013 one automated and the other a large-scale human study \u2013 to evaluate the fidelity of the edited images. Given an image and one of its editable attributes, we apply a standardized template to construct a text prompt (e.g. \u2018Change the position of the donuts to the left of the plate\u2019) and give this as input to the text-guided image editing model. The generated image is then assessed using our standardized evaluation pipelines, which leverage powerful pre-trained auxiliary models (e.g., object detectors) and a human study template to quantify the edit fidelity. Together, EDITVAL provides a standardized benchmark for evaluation of text-guided image editing methods at scale. We use EDITVAL to evaluate 8 state-of-the-art text-guided image editing methods including SINE (Zhang et al., 2022), Imagic (Kawar et al., 2023) and Instruct-Pix2Pix (Brooks et al., 2023), amongst others. We first validate that EDITVAL\u2019s scores are well-aligned with human evaluators for these models by running a large-scale human study, where we find a strong positive correlation between the corresponding scores. We then use EDITVAL to benchmark and probe the success and failure modes of these methods (see Fig 1 for qualitative visualizations). Overall, we find that (i) while methods such as SINE (Zhang et al., 2022), Instruct-Pix2Pix (Brooks et al., 2023) and Null-Text (Mokady et al., 2022) obtain the highest scores on EDITVAL amongst other methods, only Instruct-Pix2Pix and Null-Text are able to preserve original image properties; (ii) there is no \u2018winner\u2019 method which performs the best across all 13 edit types; and (iii) on complex editing operations involving spatial manipulation, such as altering the position of an existing object or adding a new object at a particular position, all methods perform poorly. Figure 2: EDITVAL contains 648 unique image-edit operations across 19 classes from MS-COCO spanning a variety of real-world edits. Edit types range from simple categories like adding or replacing an object to more complex ones such as changing an action, viewpoint or replacing the position of an object. We hope that our results can pave the way to developing more reliable text-guided image editing tools in the future. To our knowledge, this is the first work to compare text-guided image editing methods in a standardized manner. We, therefore, release EDITVAL, including all images, edit operations, evaluation scripts, and human study templates, to drive further progress in this direction. In summary, our contributions are: \u2022 EDITVAL, a standardized benchmark dataset for evaluating text-guided image editing methods across diverse edit types, validated through a large-scale human study. \u2022 An automated evaluation pipeline and standardized human-study template which can be used to compare text-guided image editing methods at scale.
\u2022 A comprehensive evaluation of 8 state-of-the-art image editing methods on EDITVAL. To the best of our knowledge, this is the first work to compare a large number of text-guided image editing methods at scale on a common benchmark. 2 RELATED WORKS Text-Guided Image Editing Methods. Recently, text-guided image diffusion models (Rombach et al., 2022; Balaji et al., 2023; Ho et al., 2021b; Saharia et al., 2022; Ho et al., 2021a) have demonstrated strong image generation capabilities which have resulted in state-of-the-art FID scores on generation benchmarks such as MS-COCO. These models are usually pre-trained on a large corpus of image-text pairs such as LAION (Schuhmann et al., 2022) using a diffusion objective. Recently these powerful text-guided image generation models have been used to edit real-images(Hertz et al., 2022; Kawar et al., 2023; Mokady et al., 2022; Zhang et al., 2022; Ruiz et al., 2023; Shi et al., 2023; Couairon et al., 2022; Meng et al., 2022; Brooks et al., 2023). Image Editing Benchmarks. To date, TedBench (Kawar et al., 2023) and EditBench (Wang et al., 2023) have been proposed as text-guided image editing benchmarks, however, both have limitations. TedBench is relatively small, evaluating on 100 images encompassing only highly common edit types like object addition and color changes. It also lacks evaluation of recent popular methods like SINE (Zhang et al., 2022) and Pix2Pix (Brooks et al., 2023). EditBench, on the other hand, is limited to evaluating mask-guided image editing methods which require an additional mask to be provided along with the edit prompt. Our proposed EDITVAL, instead, can be applied to any text-guided editing method including mask-guided methods. Further details comparing EDITVAL to EditBench can be found in Appendix K. 3 EDITVAL: EVALUATION BENCHMARK FOR TEXT-GUIDED IMAGE EDITING Our text-guided image editing benchmark, EDITVAL, comprises three components: (i) A seed dataset D with carefully chosen images from C classes in MS-COCO; (ii) an edit type suite A containing different edit operations to be applied to the images in D; and (iii) two evaluation procedures to 3 \fPreprint assess the quality of the edited versions of the images in D for a given image editing method: one involving a human study and the other utilizing an automated pipeline with powerful pre-trained vision-language models. Our versatile benchmark easily accommodates new edit types (and associated edit operations) and simplifies the evaluation of novel text-guided image editing methods. By using edit types in A to create prompts for images in D and assessing the resulting edited images, our evaluation procedure, derived from both human studies and automated evaluations, provides a quantitative measure of editing quality across various edit types. Notably, in contrast to TedBench (Kawar et al., 2023), which lacks scalability and requires evaluating edited images for all methods for every novel editing method, our EDITVAL human study evaluates only the edited images of the new text-guided image editing method under consideration. Below, we provide a detailed description of each EDITVAL component. 
3.1 DATASET DESCRIPTION AND EDIT TYPE SUITE We begin by defining a set of 13 distinct edit types denoted as the edit type suite A = {a_i}_{i=1}^{13}, including (i) object-addition, (ii) object-replacement, (iii) positional addition, (iv) size, (v) position-replacement, (vi) alter-parts, (vii) background, (viii) texture, (ix) style, (x) color, (xi) shape, (xii) action, and (xiii) viewpoint edits. Each of these edit types is defined in detail in Appendix B.2. For each edit type, we employ ChatGPT (version 3.5) to identify classes from MS-COCO for which that edit type makes sense in real-world scenarios. We motivate our choice of MS-COCO as a dataset in Appendix B.1. Specifically, we prompt ChatGPT with \u201cList the classes in MS-COCO for which a_i is plausible\u201d where a_i \u2208 A. We validated these classes in a small-scale human study where we ask human participants to rate whether the output classes can be used in practice for incorporating the given edit type (see Appendix H). We then select the classes, from the total pool of 80 MS-COCO object categories, with the highest overlap across the 13 edit types, resulting in 19 classes which we denote as C. We curate 92 images across these 19 classes for editing, denoted as D = {x_j}_{j=1}^{92}. For each edit type a_i \u2208 A and object class c_k \u2208 C, we generate specific prompts using ChatGPT to obtain the changes that are plausible for that edit type and object class. For instance, for class c_k = \"Bench\" and edit type a_i = object-addition, we prompt ChatGPT with \u201cWhat objects can be added to a Bench?\u201d. This results in a unique set of edit operations for each class in C and each edit type in A, which we use to construct the benchmark. After this careful curation of edit types and their corresponding edit operations, EDITVAL contains 648 unique operations encompassing a wide range of real-world image manipulations. We include this as an easy-to-use json file in the following format: {class : {image-id : {edit-type : [e1, e2, ..., en]}}}, where [e1, e2, ..., en] correspond to the edits to be made for the given edit type. For example, in the case of object-addition, the template could be: {\u201cbench\u201d : {11345 : {object-addition : [\u201cball\u201d, \u201ccup\u201d, ..., \u201cbooks\u201d]}}}. From this, prompts can be generated in a standardized way for each image, for example, \u201cAdd a cup to the bench\u201d. The image and the prompt can then be input into a given image editing method in order to generate the modified image. This can easily be applied to any image editing method. Qualitative examples of the edit operations are provided in Appendix E. Adding new edit operations to EDITVAL. One of the primary benefits of this modularized set-up is that new edit types and operations can be added very easily to EDITVAL. For example, for any new edit type, an entry needs only to be made in editval.json with the corresponding metadata to define new editing operations. The edit types and their metadata can be defined using human experts or with assistance from ChatGPT. 3.2 EVALUATION PIPELINES The third component of EDITVAL is a pair of complementary evaluation pipelines: (i) the design of a large-scale human study with accompanying standardized templates, and (ii) an automatic evaluation pipeline which leverages powerful pre-trained vision-language models to evaluate the generated image edits. We use both pipelines to assess the robustness of 8 state-of-the-art image-editing methods.
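For reference, the prompt-construction step from editval.json described above can be sketched in a few lines of Python. This is an illustrative sketch, not the benchmark's released tooling: the json schema follows the format above, while the template strings and the helper name are hypothetical.

```python
import json

# Minimal sketch: expand EditVal's nested json of edit operations into text prompts.
# The schema {class: {image-id: {edit-type: [e1, ...]}}} follows the paper; the
# template strings below are illustrative, not the benchmark's exact wording.
TEMPLATES = {
    "object-addition": "Add a {edit} to the {cls}",
    "object-replacement": "Replace the {cls} with a {edit}",
    "alter-parts": "{edit} to the {cls}",
}

def generate_prompts(editval_path: str):
    with open(editval_path) as f:
        editval = json.load(f)
    prompts = []
    for cls, images in editval.items():
        for image_id, edit_types in images.items():
            for edit_type, edits in edit_types.items():
                template = TEMPLATES.get(edit_type)
                if template is None:
                    continue  # edit types without a template are skipped in this sketch
                for edit in edits:
                    prompts.append({
                        "class": cls,
                        "image_id": image_id,
                        "edit_type": edit_type,
                        "prompt": template.format(edit=edit, cls=cls),
                    })
    return prompts

# Example: generate_prompts("editval.json") might yield entries such as
# {"class": "bench", "image_id": "11345", "edit_type": "object-addition",
#  "prompt": "Add a cup to the bench"}
```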
Specifically, we use the human study to evaluate the generated image edits for all 13 edit types in A, while the automated pipeline is used to evaluate a subset of 6 out of the 13 types, specifically: (i) object-addition; (ii) object-replacement; (iii) positional-addition; (iv) size; (v) positional-replacement; (vi) alter-parts. This decision was motivated by work which has shown that vision-language models (Radford et al., 2021) cannot reliably recognize concepts involving viewpoint or action (Gokhale et al., 2023), hence we use these models to only evaluate edit types with object-centric modifications. Figure 3: Template for the AMT Human Study: A single task displays an edit operation, a source image, an edited image from a given image-editing method and a set of questions to assess edit fidelity. Note that our human-study template does not require edited images from other methods in order to evaluate a given text-guided image editing method (e.g., TedBench (Kawar et al., 2023) requires edited images from all other methods). This makes our human-study template scalable, and it can be independently used with any new editing method. 3.2.1 HUMAN STUDY DESIGN AND TEMPLATES We conduct a large-scale human study using Amazon Mechanical Turk to evaluate the robustness of a set of 8 state-of-the-art image-editing methods across the 648 edit operations. We use the same set of images and instructions across all 8 image-editing methods to ensure a fair comparison. In this study, as shown in Fig 3, annotators view a source image from D, an edit operation, and the edited image resulting from applying the text instruction using an image-editing method. Participants are then tasked with answering three questions regarding the edited image\u2019s quality. These questions, outlined in Fig 3, assess: (i) the accuracy of the specified edit in the instruction, (ii) the preservation of untargeted characteristics of the main object, and (iii) the preservation of untargeted parts of the image aside from the main object. For the first question, there are four selectable options ranging from the edit \u2018not being applied\u2019 (score: 0) to it being \u2018perfectly applied\u2019 (score: 3). Likewise, for the second and third questions, the options span from the characteristics being \u2018completely changed\u2019 to them being \u2018perfectly preserved.\u2019 Each level of annotation corresponds to values within the scoring range of {0, 1, 2, 3}. The human annotations from this study therefore enable the evaluation of image-editing methods based on (i) the success of the edit, (ii) the preservation of main object properties, and (iii) fidelity to the original image.
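Concretely, the per-question options map onto numeric scores in {0, 1, 2, 3}; a minimal, illustrative aggregation is sketched below. The record field names and the normalization to [0, 1] are assumptions for illustration, not the paper's released analysis code.

```python
from collections import defaultdict

# Illustrative aggregation of AMT annotations into mean scores per method and
# edit type. The option-to-score mapping follows the {0, 1, 2, 3} scheme above;
# the record field names are hypothetical.
OPTION_TO_SCORE = {
    "Not Applied": 0, "Minorly Applied": 1,
    "Adequately Applied": 2, "Perfectly Applied": 3,
}

def mean_editing_quality(annotations):
    """annotations: iterable of dicts with keys 'method', 'edit_type', 'q1_option'."""
    sums, counts = defaultdict(float), defaultdict(int)
    for ann in annotations:
        key = (ann["method"], ann["edit_type"])
        sums[key] += OPTION_TO_SCORE[ann["q1_option"]]
        counts[key] += 1
    # Normalize to [0, 1] so scores are comparable across questions (an assumption).
    return {key: sums[key] / (3 * counts[key]) for key in sums}
```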
In particular, we quantitatively measure the success of each editing method by computing the mean human-annotation score for each of the 13 edit-types (see Fig 4). We also apply several quality checks to validate the annotations from all three assigned workers, detailed in Appendix G. Figure 4: Human study results for the top 4 image-editing methods (with respect to editing accuracy) across the different questions in the human study template, with panels (a) Instruct-Pix2Pix, (b) SINE, (c) Null Text Inversion and (d) Dreambooth. (i) Editing Quality: We find that Instruct-Pix2Pix, SINE, Null-Text, and Dreambooth are the top-performing methods. (ii) Object-Properties Preserved: Instruct-Pix2Pix and Null-Text fare well in preserving original object properties. (iii) Image-Context Preserved: Instruct-Pix2Pix and Null-Text fare well in preserving the context of the original images. 3.2.2 AUTOMATED EVALUATION USING VISION-LANGUAGE MODELS Given the set of edited images from any text-guided image editing method, our automated evaluation procedure produces a binary score for each of the images corresponding to a subset of the edit types in A, denoting whether the edit was successful or not. Formally, given the original image x, the edited image x_edit, the edit type a \u2208 A and one of the possible edit operations o for this edit type, we define the per-image edit accuracy R(x, x_edit, a, o) as follows:

R(x, x_{\mathrm{edit}}, a, o) = \begin{cases} 1, & \text{if the edit is correct} \\ 0, & \text{otherwise} \end{cases}   (1)

CLIP (Radford et al., 2021) is effective for assessing the alignment between the edited image x_edit and the prompt created using the edit operation o. However, it often fails to recognize fine-grained spatial relations (Gokhale et al., 2023) like positional-addition, position-replacement, or size (refer to Appendix J for a broader discussion). To address this, we instead use OwL-ViT (Minderer et al., 2022), a vision-language model with fine-grained object localization capabilities, in our pipeline. OwL-ViT is pre-trained on a vast corpus of 3.6 billion image-text pairs with a contrastive objective, and is then fine-tuned on publicly available detection datasets using a bipartite matching loss for object detection. OwL-ViT thus provides reliable bounding box annotations with object accuracies which we can leverage to validate size, positional-addition, and position-replacement edits.
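As a concrete illustration of how such detector-based checks can be implemented, the sketch below uses the OwL-ViT checkpoint available in the Hugging Face transformers library to test whether both the original class and an added object are detected in an edited image (the object-addition rule described next). The checkpoint name and helper function are assumptions for illustration rather than the paper's exact evaluation script; the 0.1 detection threshold follows the implementation details reported below.

```python
import torch
from PIL import Image
from transformers import OwlViTProcessor, OwlViTForObjectDetection

# Sketch of a presence check with open-vocabulary detection: the object-addition
# edit counts as correct only if both the original class and the added object
# are detected in the edited image.
processor = OwlViTProcessor.from_pretrained("google/owlvit-base-patch32")
model = OwlViTForObjectDetection.from_pretrained("google/owlvit-base-patch32")

def object_addition_correct(edited_image: Image.Image, cls: str, added: str,
                            threshold: float = 0.1) -> int:
    queries = [f"a photo of a {cls}", f"a photo of a {added}"]
    inputs = processor(text=[queries], images=edited_image, return_tensors="pt")
    with torch.no_grad():
        outputs = model(**inputs)
    target_sizes = torch.tensor([edited_image.size[::-1]])
    # post_process_object_detection is available in recent transformers versions.
    detections = processor.post_process_object_detection(
        outputs, threshold=threshold, target_sizes=target_sizes)[0]
    detected = {queries[i] for i in detections["labels"].tolist()}
    return int(len(detected) == len(queries))  # R(x, x_edit, a, o) for object-addition
```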
We define specific rules for each edit-type in {object-addition, object-replacement, positional-addition, position-replacement, size, alter-parts} to determine whether the corresponding edit is correct. For instance, to validate an edit R(x, x_edit, a, o) = 1 where a = object-addition, both the old object in image x and the new object o must be present in the edited image x_edit. We provide detailed rules for each edit operation in Appendix C. 4 EMPIRICAL RESULTS ON EDITVAL 4.1 IMPLEMENTATION DETAILS Using EDITVAL, we rigorously evaluate eight of the recently introduced text-guided image editing methods: (i) Instruct-Pix2Pix (Brooks et al., 2023); (ii) Textual Inversion (Gal et al., 2022); (iii) SINE (Zhang et al., 2022); (iv) Imagic (Kawar et al., 2023); (v) Null-Text Inversion (Mokady et al., 2022); (vi) SDE-Edit (Meng et al., 2022); (vii) Diffedit (Couairon et al., 2022); (viii) Dreambooth (Ruiz et al., 2023). For all these methods, we use their public implementations with Stable-Diffusion (Rombach et al., 2022). Considering each method has distinct sets of hyper-parameters, we generate edited images for each method across a range of hyper-parameter sweeps. We provide all implementation and hyper-parameter details for each method in the Appendix. For our automated evaluation we use the OwL-ViT (Minderer et al., 2022) implementation from Hugging-Face and use a threshold of 0.1 to extract the object bounding boxes. 4.2 HUMAN STUDY EVALUATION The goal of our human study is to evaluate the text-guided image editing models along 3 dimensions: (i) the quality of the text-guided editing applied, (ii) the quality of other object properties preserved, and (iii) the quality of the source image\u2019s overall context preserved. These dimensions mirror the 3 questions presented to human annotators, as discussed in Sec 3.2.1. In Fig 4, we visualize the scores from the top 4 editing methods for each of the three questions asked in the human study template. Figure 5: Evaluation on EDITVAL using OwL-ViT across eight state-of-the-art text-guided image editing methods (editing accuracy per edit type: object-addition, object-replacement, alter-parts, positional-addition, position-replacement and size). We find that while the text-guided image editing methods perform satisfactorily for edits corresponding to object manipulation, they suffer on edits requiring spatial knowledge such as positional-addition or position-replacement. Overall, we find Instruct-Pix2Pix, Null-Text and SINE to perform well across the majority of the editing types. In the \u201cQuality of Editing\u201d dimension, which denotes the efficacy of editing, we find that Instruct-Pix2Pix, SINE and Null-Text perform the best amongst all methods. Dreambooth displays a large variation in scores across the different edit types. In particular, we also find that the human-study scores for edit types involving non-spatial changes (e.g., object-addition, object-replacement, alter-parts) are higher than for edits involving spatial changes (e.g., positional-addition, size). However, we highlight that there is no one consistent \u2018winner\u2019 across all the edit types. For \u201cQuality of Object Properties Preserved\u201d and \u201cQuality of Image Context Preserved\u201d, we find that Null-Text and Instruct-Pix2Pix fare the best across the methods. This suggests that they are better at preserving the qualitative aspects of the object and image, which is an important requirement in editing.
SINE and Dreambooth, on the other hand, obtain low scores on these two questions despite their high scores in editing efficacy. Overall, based on the human scores across these three questions, Instruct-Pix2Pix and Null-Text fare the best amongst all methods. We provide more details on the human study data collection, filtering and evaluation in Appendix G and more results in Fig 11. 4.3 AUTOMATED EVALUATION USING VISION-LANGUAGE MODELS We use our automated evaluation pipeline described in Sec 3.2.2 to evaluate the 8 state-of-the-art image-editing methods across 6 of the 13 edit types in A. From our results in Fig 5, we find that the performance of most text-guided image editing methods suffer even on simple editing operations, including object-addition and object-replacement. For example, across the 8 image editing methods we evaluated, we see that their editing accuracy ranges from only 35% to 55% for object-addition. Of the methods, we find that SINE (Zhang et al., 2022), InstructPix2Pix (Brooks et al., 2023) and Null-Text (Mokady et al., 2022) perform the best for edit types that directly modify the object, for example object-addition, object-replacement and alter-parts. For size, on the other hand, we find Instruct-Pix2Pix (Brooks et al., 2023) performs the best, with SDE-Edit (Meng et al., 2022), SINE (Zhang et al., 2022), Null-Text (Mokady et al., 2022) and Imagic (Kawar et al., 2023) also performing comparably. Although there is no clear \u2018winner\u2019, generally we find Instruct-Pix2Pix to be a strong text-guided image editing method for editing operations corresponding to object manipulation. We highlight that InstructPix2Pix does not require any fine-tuning during the editing operation unlike other methods, including Dreambooth, SINE and Imagic2. For spatial editing operations such as positional-addition 2Although Dreambooth and Textual-Inversion require more than one sample for fine-tuning, for fairness we only use one sample to be consistent across all the methods. 7 \fPreprint and position-replacement, however, we find that none of the text-guided image editing methods perform well. In particular, for position-replacement, we find that most of the text-guided image editing methods have a very low accuracy ranging between 0 to 15%. For positional-addition, the editing accuracy ranges from 0% to 30%, with Null-Text inversion and Instruct-Pix2Pix performing the best. These results show that current text-guided image editing methods are yet to handle complex editing operations which require spatial manipulation in images. We provide visual case studies corresponding to different editing methods and edit operations from EDITVAL in Appendix N and Fig 1. Figure 6: EDITVAL correlation with human-score from AMT Study for six edit-types. We obtain human annotation scores falling in the range of {0, 1, 2, 3} for all the images involving a given edit-type; the correlation is then computed b/w these scores and EDITVAL binary scores. The general trend depicts a moderate-to-strong correlation b/w two evaluations. Fidelity of Edited Images to Original Images. In Figure 7-(b), we use the DINO score (Caron et al., 2021) to assess the similarity between original and edited images across all edit types in EDITVAL. DINO scores represent the average pairwise similarity of [CLS] embeddings between these images. From these scores, we find that Textual-Inversion often leads to significant deviations from the original images. 
Diffedit, on the other hand, generally maintains fidelity with DINO scores exceeding 0.85 across most categories, aligning with our human evaluation results. For complex spatial edits like position-replacement, where methods are sensitive, edited images tend to resemble the originals. These scores show strong correlation with human evaluation in Appendix D. We also compute FID scores (Figure 7-(a)) to gauge image quality across all edit types. Instruct-Pix2Pix, followed by DiffEdit, achieves the lowest FID scores, indicating superior image quality and editing performance. Conversely, Textual-Inversion exhibits the highest FID score overall, suggesting lower image quality in the edited images. Interestingly, these results closely parallel our automated DINO score evaluation (Figure 7-(b)). Overall, we also find a strong alignment of the FID and DINO scores with the questions asked in the human study: (i) \u201cQuality of Object Properties Preserved\u201d and (ii) \u201cQuality of Image Context Preserved\u201d. Diffedit and Instruct-Pix2Pix obtain low FID scores and a high DINO score, signifying that the edited images do not change significantly from the original. This is similar to the human study results obtained in Fig 4-(b) and Fig 4-(c). General Takeaway. Instruct-Pix2Pix, Null-Text and SINE are the top-performing methods on EDITVAL with both automated evaluation and human-study, with Instruct-Pix2Pix and Null-Text being better at preserving original image properties than other methods. 4.4 ON THE ALIGNMENT BETWEEN AUTOMATED EVALUATION AND HUMAN-STUDY One of the primary contributions of EDITVAL is to provide an automated proxy evaluation of text-guided image editing methods for the set of edit types in A. To validate the effectiveness of automated evaluation scores from EDITVAL, we compute their correlation with the annotation scores obtained from our human study. In particular, we compute the correlation between the human annotation scores, which fall within the range of {0, 1, 2, 3}, and the binary scores derived from EDITVAL for the six primary edit types. The correlation numbers are then averaged across all editing methods. We evaluate the correlation using two prominent similarity measures: (i) the Pearson correlation coefficient and (ii) cosine similarity, and report the results of our analysis in Fig 6. Specifically, we observe that the position-replacement edit type attains a perfect correlation of 1.0, indicating an accurate alignment between EDITVAL scores and human annotation scores. Other edit types also display strong, noteworthy correlations: object-addition has a correlation between 0.6 and 0.7, while positional-addition and alter-parts attain only moderate correlations ranging from 0.45 to 0.6. These scores support the alignment of our automated pipeline with human ground-truth annotations. Figure 7: Fidelity of Edited Images to Original Images. (a) Average FID (Heusel et al., 2017) score computed between the original images and the edited images across all the 8 methods tested on EDITVAL; a lower FID score quantitatively indicates better image quality. (b) Average DINO score between the original images and the edited images across all the 8 methods tested on EDITVAL. We find that for certain methods such as Textual-Inversion, the edited images change significantly from the original images across all edit categories. For spatial changes, we find that the edited images do not change significantly across different methods.
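For completeness, the two agreement measures reported in Fig 6 can be computed with standard tooling; the snippet below is an illustrative sketch assuming the human scores in {0, 1, 2, 3} and the binary EDITVAL scores have been collected into aligned arrays (one entry per image-edit operation).

```python
import numpy as np
from scipy.stats import pearsonr

# Illustrative computation of the agreement measures in Fig 6: human annotation
# scores in {0, 1, 2, 3} vs. binary EditVal scores for the same edited images.
def agreement(human_scores, editval_scores):
    h = np.asarray(human_scores, dtype=float)
    e = np.asarray(editval_scores, dtype=float)
    pearson, _ = pearsonr(h, e)
    cosine = float(h @ e / (np.linalg.norm(h) * np.linalg.norm(e) + 1e-8))
    return {"pearson": pearson, "cosine": cosine}

# Example usage with toy values: agreement([3, 0, 2, 1], [1, 0, 1, 0])
```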
5 QUALITATIVE ANALYSIS WITH VISUAL CASE STUDIES In our case study, detailed in Fig 1, we present qualitative examples from the evaluation of various text-guided image editing methods using EDITVAL. Specifically, within this case study, we examine a subset of edit types, showcasing both successful edits and instances of failure. For the background-change edit type applied to an image of the \"car\", we observe that InstructPix2Pix, Null-Text and Dreambooth can accurately replace the background, whereas SINE partially accomplishes the edit. In particular, SINE makes significant changes to the original car. It is noteworthy that with SINE and Dreambooth, not only is the background changed but also the original properties of the car, such as its size and viewpoint. This aligns with the findings from our human study in Fig 4, where the preservation of image context after background edits is often challenging. In the case of position-replacement, involving the task of moving a stop sign to the center of the image, we find that all editing methods struggle to achieve this operation successfully. For edits of a simpler nature, like color changes, Pix2Pix performs well, while other methods face difficulties. In one instance, where the goal is to change the boat\u2019s color to black, Dreambooth fails to change the color and alters the background instead. SINE introduces the color black but also shrinks the boat\u2019s size and changes the background. For other edit types such as object-addition or alter-parts, we find that Instruct-Pix2Pix is able to apply the intended edit without changing much of the qualitative aspects of the original image. Additional case-studies are covered in Appendix N. In summary, our extensive analysis of 8 text-guided image editing methods on EDITVAL reveal that while certain methods, such as Instruct-Pix2Pix and Null-Text, excel at introducing correct edits without altering object properties or image context, most methods struggle to preserve the image context, even when performing localized edits correctly. 6" + }, + { + "url": "http://arxiv.org/abs/2307.09233v2", + "title": "Augmenting CLIP with Improved Visio-Linguistic Reasoning", + "abstract": "Image-text contrastive models such as CLIP are useful for a variety of\ndownstream applications including zero-shot classification, image-text\nretrieval and transfer learning. However, these contrastively trained\nvision-language models often fail on compositional visio-linguistic tasks such\nas Winoground with performance equivalent to random chance. In our paper, we\naddress this issue and propose a sample-efficient light-weight method called\nSDS-CLIP to improve the compositional visio-linguistic reasoning capabilities\nof CLIP. The core idea of our method is to use differentiable image\nparameterizations to fine-tune CLIP with a distillation objective from large\ntext-to-image generative models such as Stable-Diffusion which are relatively\ngood at visio-linguistic reasoning tasks. On the challenging Winoground\ncompositional reasoning benchmark, our method improves the absolute\nvisio-linguistic performance of different CLIP models by up to 7%, while on the\nARO dataset, our method improves the visio-linguistic performance by upto 3%.\nAs a byproduct of inducing visio-linguistic reasoning into CLIP, we also find\nthat the zero-shot performance improves marginally on a variety of downstream\ndatasets. 
Our method reinforces that carefully designed distillation objectives\nfrom generative models can be leveraged to extend existing contrastive\nimage-text models with improved visio-linguistic reasoning capabilities.", + "authors": "Samyadeep Basu, Maziar Sanjabi, Daniela Massiceti, Shell Xu Hu, Soheil Feizi", + "published": "2023-07-18", + "updated": "2023-07-27", + "primary_cat": "cs.CV", + "cats": [ + "cs.CV" + ], + "main_content": "Introduction In the past few years, image-text contrastively pre-trained multimodal models such as CLIP (Radford et al. 2021a) have shown tremendous ability to perform zero-shot classification (Mu et al. 2021; Minderer et al. 2022), imagetext retrieval (Diwan et al. 2022; Thrush et al. 2022) and image-captioning (Yu et al. 2022; Li et al. 2022; Mokady, Hertz, and Bermano 2021). These contrastive models are also used as a part of various state-of-the-art pipelines for downstream tasks such as segmentation (Wang et al. 2021; L\u00fcddecke and Ecker 2021), object-detection (Minderer et al. 2022; Zhong et al. 2021) and model interpretability (Moayeri et al. 2023). However, recent works have shown that these models fail on visio-linguistic reasoning tasks, for example identifying the relative position between objects in an image. In fact, the performance of CLIP on Winoground (Thrush et al. 2022; Diwan et al. 2022), a challenging benchmark for *These authors contributed equally. Figure 1: Our simple fine-tuning method SDS-CLIP improves over CLIP on challenging vision-language tasks which require compositional reasoning. visio-linguistic reasoning, is very close to random chance. The failure of CLIP on this benchmark has been shown to be an artifact of its contrastive training objective which learns shortcuts as it optimizes for the task of retrieval (Yuksekgonul et al. 2023). These shortcuts enable CLIP to perform well for image-text retrieval and zero-shot classification, but lead to failures in visio-linguistic tasks which require more fine-grained understanding of the objects in an image and their spatial positions (Tejankar et al. 2021; Yuksekgonul et al. 2023; Huang et al. 2023). In contrast, textto-image generative models like Stable Diffusion (Rombach et al. 2021; Saharia et al. 2022; Ramesh et al. 2022; Zhang et al. 2023; Balaji et al. 2023) have been shown to have reasonable visio-linguistic reasoning abilities (Li et al. 2023a; Clark and Jaini 2023). Recent works have shown that this might be attributed to their text conditioning mechanism which leads to more semantically consistent cross-attention maps and hence better learned correspondences between objects in an image and words in the text that have a visual grounding (Hertz et al. 2022; Tang et al. 2022; Orgad, Kawar, and Belinkov 2023; Li et al. 2023b). Perhaps because of this, text-to-image also perform well in zero-shot classification (Krojer et al. 2023; Clark and Jaini 2023; Chen et al. 2023; Li et al. 2023a). To perform image-text matching, the denoising diffusion score can be computed \u2013 which is essentially the expectation of the gap between the predicted noise (conditioned on arXiv:2307.09233v2 [cs.CV] 27 Jul 2023 \fFigure 2: We introduce a fine-tuning method for CLIP using a distillation loss from any text-to-image generative model (e.g., Stable-Diffusion) which improves the visio-linguistic reasoning abilities of CLIP. Our method fine-tunes the LayerNorm parameters in CLIP using a combination of contrastive loss and SDS loss with only 118k image-text pairs from MSCOCO. 
This makes our method extremely sample-efficient as well as parameter-efficient during fine-tuning. the text) and the noise added to the original image across multiple time-steps. For example, (Li et al. 2023a) show that the denoising diffusion score from Stable-Diffusion outperforms CLIP variants on Winoground, whereas (Clark and Jaini 2023) show that text-to-image generative models such as Imagen outperform CLIP on similar visio-linguistic reasoning tasks. The strong visio-linguistic reasoning capabilities of these generative text-to-image models make them attractive for many vision-language tasks; however, their computationally expensive inference makes them infeasible to use in all cases. For example, for an image-text matching task, multiple forward passes through the large text-to-image model are required with different levels of noise across many time-steps. In contrast, CLIP models can classify an image with just a single forward pass through the image and text encoders. In Fig 3 and Fig 4, we show that this can lead to CLIP being up to 40x faster than the denoising diffusion score from Stable-Diffusion in solving the Winoground tasks. Can we augment CLIP with improved visio-linguistic capabilities by distilling knowledge from text-to-image generative models such as Stable-Diffusion? To this end, we design an extremely light-weight, sample-efficient and parameter-efficient fine-tuning method for CLIP which improves its visio-linguistic reasoning abilities while also marginally improving its zero-shot abilities on a wide range of downstream datasets. In particular, we use score-distillation sampling (SDS) (Poole et al. 2022) with Stable-Diffusion (Rombach et al. 2021) to regularize the contrastive loss during fine-tuning (see Fig 2) with a small paired image-text dataset. To implement this regularizer, we use differentiable image parameterizations (Mordvintsev et al. 2018), which optimize the embeddings from CLIP such that they are also aligned with respect to the denoising diffusion loss. Using only \u223c118k image-text pairs from MS-COCO and tuning only the LayerNorm parameters of CLIP during fine-tuning, we find that our method boosts the visio-linguistic reasoning scores of a variety of CLIP models by a 1.5-7% margin on the Winoground dataset. Notably, we find that augmenting CLIP with visio-linguistic reasoning also marginally boosts its zero-shot classification capabilities. Our work highlights that existing internet-scale image-text contrastive models can be improved in a post-hoc light-weight fine-tuning step. In summary, the contributions in our paper are as follows: \u2022 We highlight the importance of the denoising diffusion loss from large-scale text-to-image models in visio-linguistic reasoning. \u2022 We introduce a novel sample-efficient and parameter-efficient fine-tuning method to equip CLIP with better visio-linguistic reasoning capabilities, empirically validated on challenging visio-linguistic benchmarks. \u2022 We show that improving the visio-linguistic reasoning capabilities of CLIP improves its downstream zero-shot performance on a variety of downstream datasets. 2 Related Works Image-text contrastive models. Image-text models that have been contrastively trained on internet-scale data, such as CLIP (Radford et al. 2021a), have been shown to have strong zero-shot classification capabilities. However, recent works (Thrush et al. 2022; Diwan et al.
2022) have highlighted their limitations in visio-linguistic reasoning, as shown in the challenging Winoground benchmark. Yuksekgonul et al. (2023) also observe this issue and introduce a new benchmark ARO for image-text models which requires a significant amount of visio-linguistic reasoning to solve. We note that (Yuksekgonul et al. 2023) use a fine-tuning strategy to improve on their benchmark, but the strategy is akin to adversarial training where one already knows the downstream failure mode. Emerging Abilities of Text-to-image diffusion models. One of the emerging abilities of these text-to-image models is the strong semantic correspondences between image and text in the cross-attention layers. This has been highlighted in a string of recent works (Tang et al. 2022; Hertz et al. 2022; Xu et al. 2023; Mokady et al. 2022; Liao et al. 2023; Chen, Laina, and Vedaldi 2023). Figure 3: Denoising Diffusion Score computation takes \u223c40x more time than the image-text alignment score in CLIP. The higher inference time incurred by diffusion score computation from text-to-image generative models such as Stable-Diffusion makes it infeasible to use in practice. These strong correspondences have also shown emerging abilities in solving zero-shot classification and vision-language tasks which require some form of visual reasoning (Clark and Jaini 2023; Li et al. 2023a; Krojer et al. 2023; Chen et al. 2023). We highlight that works such as (Li et al. 2023a; Krojer et al. 2023) use Stable-Diffusion, which itself consists of CLIP\u2019s text-encoder for the conditional text-embedding. This shows that the diffusion objective potentially has a strong contribution in the improved visio-linguistic reasoning abilities of text-to-image generative models. 3 Preliminaries CLIP CLIP (Radford et al. 2021b) is an image-text model which is pre-trained using a contrastive objective, typically on internet-scale data. The core intuition of the training objective is to align the text and image embeddings of image-text pairs in a shared embedding space. To do this, CLIP consists of two components: (i) an image encoder f_\phi which transforms a raw image x_i into an image embedding e_img(x_i) = f_\phi(x_i) \in R^d; and (ii) a text encoder g_\gamma which transforms a raw text caption c_i into a text embedding e_text(c_i) = g_\gamma(c_i) \in R^d, both of which map to an embedding dimensionality d. Given a dataset D = {(x_i, c_i)}_{i=1}^{N} of image-text pairs, where (x_i, c_i) is the i-th image-text pair, CLIP uses a contrastive objective to pull the image and text embeddings of matched pairs together, while pushing those of unmatched pairs apart. Formally, the contrastive objective can be defined as:

L_{CLIP} = L_{image-text} + L_{text-image}   (1)

where

L_{image-text} = -\frac{1}{2N} \sum_{j=1}^{N} \log \frac{\exp(e_{img}(x_j)^\top e_{text}(c_j)/\tau)}{\sum_{k=1}^{N} \exp(e_{img}(x_j)^\top e_{text}(c_k)/\tau)}   (2)

L_{text-image} = -\frac{1}{2N} \sum_{j=1}^{N} \log \frac{\exp(e_{img}(x_j)^\top e_{text}(c_j)/\tau)}{\sum_{k=1}^{N} \exp(e_{img}(x_k)^\top e_{text}(c_j)/\tau)}   (3)

where \tau is a trainable temperature parameter. Usually D is an internet-scale dataset consisting of millions of image-text pairs. Furthermore, during pre-training, the embeddings e_img(x_i) and e_text(c_i) are normalized to have a unit norm. Benchmark datasets Winoground (Thrush et al. 2022; Diwan et al. 2022) is a challenging vision-language dataset for evaluating the visio-linguistic characteristics of contrastively trained image-text models.
The dataset consists of 400 tasks, where each task consists of two image-text pairs. The objective is to independently assign the correct text caption to each image (see Fig 1). Each task is also annotated with meta-data corresponding to whether the task requires object-understanding, relational-understanding or both. The tasks in Winoground are challenging as the images differ in fine-grained ways and assigning the correct text captions requires inherent compositional visual reasoning. ARO (Yuksekgonul et al. 2023) similarly tests visio-linguistic reasoning and consists of three types of tasks: (i) Visual Genome Attribution to test the understanding of object properties; (ii) Visual Genome Relation to test for relational understanding between objects; and (iii) COCO-Order and Flickr30k-Order to test for order sensitivity of the words in a text, when performing image-text matching. We highlight that Winoground, though slightly smaller in size than ARO, is more challenging as it requires reasoning beyond visio-linguistic compositional knowledge (Diwan et al. 2022). Denoising Diffusion Score Works concurrent to this paper (Clark and Jaini 2023; Li et al. 2023a; Krojer et al. 2023) show that it is possible to use the denoising diffusion score from text-to-image generative models to perform image-matching tasks. These works find that this approach performs comparably to CLIP at zero-shot classification, but performs much better than CLIP on relational and attribute-binding tasks which require compositional generalization. Given an image x and a caption c, the denoising diffusion score denoted by d(x, c) is defined as:

d(x, c) = E_{t \sim T, \epsilon \sim \mathcal{N}(0, I)} \left[ \| \epsilon_\theta(v_\alpha(x), t, c) - \epsilon \|^2 \right]   (4)

In the case of Winoground image-text matching tasks, where an image x needs to be matched with the correct caption from a set of captions denoted as C = {c_i}_{i=1}^{n}, the denoising diffusion score is used in the following way to select a caption c^* from the set C:

c^* = \arg\min_{c \in C} E_{t \sim T, \epsilon \sim \mathcal{N}(0, I)} \left[ \| \epsilon_\theta(v_\alpha(x), t, c) - \epsilon \|^2 \right]   (5)

where t is the sampled time-step, \epsilon_\theta is the noise prediction UNet (Ronneberger, Fischer, and Brox 2015), v_\alpha is an encoder (e.g., VQ-VAE) which maps the image x to a latent code, and \epsilon is the sampled Gaussian noise. Figure 4: Various CLIP variants underperform on the Winoground visio-linguistic reasoning benchmark when compared to the diffusion score from Stable-Diffusion. (Left): Different CLIP architectures pre-trained on OpenAI\u2019s private data lag behind the denoising diffusion score. (Right): Pre-training CLIP on a larger pre-training corpus (e.g. LAION-2B) does not improve its performance on Winoground. 4 Generative Text-to-Image Models are Strong Visio-Linguistic Reasoners In this section, we use the denoising diffusion score of eq. (4) to compute the performance of Stable-Diffusion on the Winoground tasks. In particular, for every possible caption c \in C, we perform 50 samplings of time-steps and noise for the denoising diffusion score of eq. (4). In Fig 4-(Left), we find that the denoising diffusion score for the Winoground image-text matching task is better than all the CLIP varieties pre-trained on distinct architectures using OpenAI\u2019s private dataset of image-text pairs. For example, the denoising diffusion score from Stable-Diffusion leads with an accuracy of 34% on Winoground, whereas various CLIP variants have accuracies between 24% and 30%.
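A minimal sketch of how the denoising diffusion score of Eqs. (4)-(5) can be computed with the open-source diffusers library is shown below. The checkpoint name, image pre-processing and 50-sample Monte-Carlo estimate are assumptions for illustration; the original evaluations may differ in these details.

```python
import torch
import torch.nn.functional as F
from diffusers import StableDiffusionPipeline

# Sketch of the denoising diffusion score (Eqs. 4-5). `image` is assumed to be a
# [1, 3, 512, 512] tensor scaled to [-1, 1]; 50 (timestep, noise) samples are
# drawn per caption, as in the Winoground evaluation described above.
pipe = StableDiffusionPipeline.from_pretrained("runwayml/stable-diffusion-v1-5")

@torch.no_grad()
def diffusion_score(image: torch.Tensor, caption: str, n_samples: int = 50) -> float:
    latents = pipe.vae.encode(image).latent_dist.sample() * 0.18215
    tokens = pipe.tokenizer(caption, padding="max_length", truncation=True,
                            max_length=pipe.tokenizer.model_max_length,
                            return_tensors="pt")
    text_emb = pipe.text_encoder(tokens.input_ids)[0]
    losses = []
    for _ in range(n_samples):
        t = torch.randint(0, pipe.scheduler.config.num_train_timesteps, (1,))
        noise = torch.randn_like(latents)
        noisy = pipe.scheduler.add_noise(latents, noise, t)
        pred = pipe.unet(noisy, t, encoder_hidden_states=text_emb).sample
        losses.append(F.mse_loss(pred, noise).item())
    return sum(losses) / len(losses)

def match_caption(image, captions):
    # Eq. (5): pick the caption with the lowest expected denoising error.
    return min(captions, key=lambda c: diffusion_score(image, c))
```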
Does the scale of pre-training data impact CLIP\u2019s visio-linguistic reasoning abilities? We measure the performance of different CLIP varieties pre-trained on various scales of data upto to 2B image-text pairs (see Fig 4 Right). We find that increasing the amount of pre-training data does not uniformly improve CLIP\u2019s performance on the Winoground tasks, with it still falling short of Diffusion Score performance. Even with 2B image-text pairs (at a similar scale to which Stable-Diffusion is trained) CLIP lags behind Stable-Diffusion by 6.5%, thus highlighting that the scale of pre-training data in Stable-Diffusion is not the main contributor for its Winoground performance. Does the lack of cross-attention impact CLIP\u2019s visiolinguistic reasoning abilities? One argument for CLIP\u2019s lower performance in visio-linguistic reasoning tasks is the lack of cross-attention layers, or the late-fusion between the image and text embeddings. As an alternative, we compare its performance to CoCa (Yu et al. 2022). CoCa is similarly trained with a contrastive objective but it has an additional image-captioning objective and also fuses the image and text embeddings with a cross-attention layer in the early layers of the multimodal text-encoder (i.e. early-fusion). We find that CoCa pre-trained on 2B image-text pairs achieves 30% on Winoground which is similar to CLIP\u2019s performance with ViT-B/32. When fine-tuned on MS-COCO, however, the performance of CoCa drops down to 16.5%. In both cases, the performance is still below the denoising diffusion score. These observations reinforce that the contrastive objective in CLIP may be ill-suited to handling vision-language tasks beyond retrieval which require more fine-grained forms of compositional reasoning and understanding. 5 Distilling Visio-linguistic Reasoning to CLIP In the earlier section, we showed that the denoising diffusion score is a strong metric for solving tasks which require visio-linguistic reasoning. In this section, we present a posthoc fine-tuning method for CLIP which distills knowledge from Stable-Diffusion to improve its visio-linguistic reasoning capabilites. SDS-CLIP: Our Method The core idea of our approach is to regularise the contrastive objective in CLIP with the denoising diffusion score from Stable Diffusion (see Eq.(4)). Our method builds on recent works such as Dreamfusion (Poole et al. 2022) where they learn the parameters of a 3D NeRF model by mapping the output of the NeRF into the input space of the UNet and optimizing it with the denoising diffusion loss, also known as the score-distillation sampling (SDS). In a similar vein, we fine-tune the parameters of CLIP using SDS. Our set-up can be thought of as a special case of knowledge distillation where the teacher is the text-to-image model and the CLIP is the student model. In inference, this allows CLIP to leverage the visio-linguistic reasoning capabilities of textto-image diffusion models. Formally, we map the output of the image encoder f\u03d5 in CLIP to the input space of the UNet \u03f5\u03b8. Specifically, given the image encoder f\u03d5 from CLIP, we map the embedding for a given image x through a linear map hw \u2208 Rd\u00d74\u00d764\u00d764 into the input space of Stable Diffusion\u2019s UNet, \u03f5\u03b8. This can be formalized as \u03f5\u03b8(hw(f\u03d5(x)), t, c) where t is the time step and c is the corresponding text caption for the given image. We then use this term in place of \u03f5\u03b8(v\u03b1(x), t, c) in Eq. 
(5) to arrive at a denoising diffusion loss L_SDS which encourages image-text binding with feedback from the diffusion loss:

L_{SDS} = E_{t \sim T, \epsilon \sim \mathcal{N}(0, I)} \left[ \| \epsilon_\theta(h_w(f_\phi(x)), t, c) - \epsilon \|^2 \right]   (6)

We practically implement this by adding the denoising diffusion loss to the original contrastive objective of CLIP such that it acts as a regularizer:

L_{total} = L_{CLIP} + \lambda L_{SDS}   (7)

where \lambda is a hyper-parameter that can be set with a grid-search. We note that there are multiple ways to incorporate a diffusion loss into CLIP\u2019s objective. We found that adding it as an additional loss term led to the best results; however, we include the full set of design choices we considered in the Appendix. Similar to differentiable image parameterizations (Mordvintsev et al. 2018), where a given function is optimized by backpropagation through the image generation process, the UNet parameters \theta are kept frozen during the optimization process. Specifically, given L_{total}(\phi, \gamma, w, \theta):

\phi^*, \gamma^*, w^* = \arg\min_{\phi, \gamma, w} L_{total}(\phi, \gamma, w, \theta)   (8)

where \phi, \gamma, w are the learnable parameters of CLIP\u2019s image encoder, text encoder and the linear map between CLIP and the UNet in Stable-Diffusion.

Table 1: Our fine-tuning method SDS-CLIP improves CLIP performance on the Winoground benchmark by 1.5% to 7% across various CLIP variants. Specifically, we find that our method improves on the sub-categories involving object-swap and relational understanding which comprise the majority of the tasks in Winoground. Note that only fine-tuning with image-text pairs from MS-COCO without the distillation loss often leads to a drop in performance for Winoground.

Model | Overall | Object | Relation | Both | 1 Main Pred | 2 Main Preds
ViT-B/16 (CLIP) | 0.24 | 0.28 | 0.18 | 0.57 | 0.29 | 0.11
Only COCO FT | 0.23 | 0.27 | 0.19 | 0.56 | 0.30 | 0.11
Ours | 0.31 | 0.35 | 0.25 | 0.69 | 0.36 | 0.16
ViT-B/32 (CLIP) | 0.30 | 0.35 | 0.22 | 0.80 | 0.34 | 0.18
Only COCO FT | 0.28 | 0.31 | 0.20 | 0.76 | 0.31 | 0.16
Ours | 0.32 | 0.38 | 0.23 | 0.69 | 0.36 | 0.20
ViT-L/14 (CLIP) | 0.28 | 0.27 | 0.25 | 0.57 | 0.29 | 0.24
Only COCO FT | 0.26 | 0.27 | 0.25 | 0.56 | 0.30 | 0.23
Ours | 0.295 | 0.32 | 0.25 | 0.53 | 0.32 | 0.18
ViT-L/14-336 (CLIP) | 0.27 | 0.32 | 0.21 | 0.57 | 0.30 | 0.19
Only COCO FT | 0.23 | 0.28 | 0.19 | 0.53 | 0.26 | 0.17
Ours | 0.285 | 0.34 | 0.23 | 0.56 | 0.31 | 0.21
ResNet-50 (CLIP) | 0.25 | 0.29 | 0.19 | 0.5 | 0.27 | 0.18
Only COCO FT | 0.24 | 0.27 | 0.20 | 0.49 | 0.27 | 0.16
Ours | 0.265 | 0.30 | 0.21 | 0.42 | 0.29 | 0.19

Algorithm 1: Algorithm to fine-tune CLIP with distillation from Stable-Diffusion for improved visio-linguistic reasoning
Require: D: image-text pairs, f_\phi: CLIP\u2019s image-encoder, g_\gamma: CLIP\u2019s text-encoder, \epsilon_\theta: UNet; N: number of epochs; \lambda: hyper-parameter for the regularizer; |B|: batch-size.
while i \neq N do
    {x_j, y_j}_{j=1}^{|B|} \leftarrow Sample a batch from D
    t \leftarrow Sample time-steps using DDPM
    \epsilon \leftarrow Sample Gaussian noise \epsilon \sim \mathcal{N}(0, I)
    L_clip \leftarrow Compute contrastive loss as in eq. (1)
    L_SDS \leftarrow Compute SDS loss as in eq. (6)
    L_total \leftarrow L_clip + \lambda L_SDS
    L_total.backward()   \u25b7 Backprop
    \phi, \gamma, w \leftarrow Update the relevant parameters
    i \leftarrow i + 1
end while

6 Experiments In this section, we empirically validate our proposed method SDS-CLIP on visio-linguistic reasoning using two challenging benchmarks (Winoground, ARO) and zero-shot image classification using a suite of downstream datasets (ImageNet, CIFAR-100, and others).
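Before turning to the results, one update of Algorithm 1 together with Eqs. (6)-(7) can be summarized in a PyTorch-style sketch. The module and function names (the linear map h_w, contrastive_loss, and the UNet/scheduler handles wired up from CLIP and Stable-Diffusion) are illustrative stand-ins, and \lambda is set by grid-search as noted above; this is a sketch of the approach under those assumptions, not the authors' released implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

# Condensed sketch of one SDS-CLIP update (Algorithm 1, Eqs. 6-7). The UNet and
# scheduler are assumed to come from a frozen Stable-Diffusion model; `c_emb`
# is the SD text-encoder conditioning for the caption; `contrastive_loss`
# stands in for Eq. (1).
class EmbeddingToLatent(nn.Module):
    """Linear map h_w from CLIP's d-dim image embedding to the UNet input space."""
    def __init__(self, d: int):
        super().__init__()
        self.proj = nn.Linear(d, 4 * 64 * 64)

    def forward(self, emb):
        return self.proj(emb).view(-1, 4, 64, 64)

def sds_clip_step(clip_image_emb, clip_text_emb, c_emb, h_w, unet, scheduler,
                  contrastive_loss, lam):
    loss_clip = contrastive_loss(clip_image_emb, clip_text_emb)      # Eq. (1)
    latents = h_w(clip_image_emb)                                    # h_w(f_phi(x))
    t = torch.randint(0, scheduler.config.num_train_timesteps,
                      (latents.shape[0],), device=latents.device)
    noise = torch.randn_like(latents)
    noisy = scheduler.add_noise(latents, noise, t)
    # The UNet parameters stay frozen; gradients still flow back through it
    # to h_w and to CLIP's trainable parameters.
    pred = unet(noisy, t, encoder_hidden_states=c_emb).sample
    loss_sds = F.mse_loss(pred, noise)                               # Eq. (6)
    return loss_clip + lam * loss_sds                                # Eq. (7)
```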
Overall, we show that our method improves CLIP\u2019s performance significantly on Winoground and some key tasks in ARO, while also marginally improving the downstream zero-shot performance. (All experiments and data processing were performed by researchers at UMD.) Experimental Setup CLIP Models. We consider the following CLIP variants in our experiments: (i) CLIP ViT-B/16; (ii) CLIP ViT-B/32; (iii) CLIP ViT-L/14; (iv) CLIP ViT-L/14 336px; (v) CLIP ResNet-50. For each variant, we use our proposed method SDS-CLIP to fine-tune its parameters from the official OpenAI pre-trained checkpoint. We provide further results with a CLIP variant pre-trained on public data in Appendix (C). Implementation Details. Due to computational limits, we fine-tune CLIP from a publicly available checkpoint instead of training from scratch. Notably, we only fine-tune the LayerNorm parameters (Basu et al. 2023) of CLIP using image-text pairs from MS-COCO (Lin et al. 2014). In particular, we choose MS-COCO as it is relatively small and less noisy than other image-text datasets such as CC-3M or CC-12M (Sharma et al. 2018). In total, we fine-tune CLIP using our proposed method with only 118k image-text pairs (see Algo. (1) for the fine-tuning steps). Both these factors make our fine-tuning method extremely sample-efficient as well as parameter-efficient. With the linear transformation and the LayerNorm parameters, our fine-tuning method optimizes only \u223c8M of CLIP\u2019s total parameters. We tune the regularization hyper-parameter \u03bb for ViT-B/16 and use it for the other CLIP variants (see Appendix for more details). We fine-tune each CLIP model for 5 epochs, though we find that after 1 epoch, performance is already very strong.

Table 2: Distillation from Stable-Diffusion primarily helps on the relational-understanding and attribute-binding tasks from the ARO dataset. Performance of fine-tuned CLIP with our distillation loss on the ARO benchmark.

Model | VG-Relation | VG-Attribution | COCO-Order | Flickr-Order
ViT-B/16 (CLIP) | 0.52 | 0.62 | 0.38 | 0.46
Only COCO FT | 0.51 | 0.62 | 0.37 | 0.45
Ours | 0.535 | 0.63 | 0.38 | 0.46
ViT-B/32 (CLIP) | 0.50 | 0.61 | 0.37 | 0.48
Only COCO FT | 0.50 | 0.60 | 0.37 | 0.48
Ours | 0.53 | 0.62 | 0.36 | 0.48
ViT-L/14 (CLIP) | 0.53 | 0.61 | 0.35 | 0.44
Only COCO FT | 0.53 | 0.61 | 0.36 | 0.44
Ours | 0.55 | 0.64 | 0.36 | 0.44
ViT-L/14-336 (CLIP) | 0.53 | 0.61 | 0.38 | 0.43
Only COCO FT | 0.53 | 0.61 | 0.37 | 0.42
Ours | 0.54 | 0.63 | 0.38 | 0.42
ResNet-50 (CLIP) | 0.53 | 0.63 | 0.44 | 0.51
Only COCO FT | 0.52 | 0.63 | 0.44 | 0.50
Ours | 0.55 | 0.66 | 0.43 | 0.51

Baselines. We compare our method with two different baselines: (i) pre-trained CLIP checkpoints; and (ii) CLIP fine-tuned on MS-COCO using only the contrastive loss, without the additional distillation loss. (ii) is particularly crucial to eliminate the effect of the image-text pairs from MS-COCO in the fine-tuning step. Results on Winoground We first evaluate our proposed method SDS-CLIP on Winoground (Thrush et al. 2022), a highly challenging visio-linguistic reasoning benchmark. In Table (1), we show that our proposed method leads to an absolute improvement of between 1.5-7% across all sub-categories in the benchmark and across all CLIP variants. For ViT-B/16 (CLIP), we find that the overall improvement is the largest, with a gain of 7%. For other CLIP variants, we find the gain to be consistently between 1.5%-2%. In the Appendix, we report results on CLIP variants pre-trained on public data, where we see similar improvements.
Next, we dissect the performance of SDS-CLIP on the sub-categories of Winoground: the object-swap, relation and both sub-categories. We also dissect performance by the number of predicates present in the captions. We find that SDS-CLIP consistently improves on the object-swap and relational understanding sub-categories. On the tasks containing only one predicate, SDS-CLIP consistently improves across all the variants of CLIP, while on tasks containing two predicates, SDS-CLIP improves on all CLIP variants except ViT-L/14. Interestingly, while we observed a performance gain in each sub-category separately, we found that tasks containing both sub-categories incurred a drop. We note, however, that the tasks containing both object-swap and relation tags make up only \u223c5% of all Winoground tasks, which might not be entirely representative of tasks reasoning about object swaps as well as their relational understanding together. Overall, we find that our fine-tuning method consistently improves the performance on a wide set of CLIP variants on the Winoground tasks, especially on the object-swap and relational sub-categories, as well as on tasks with captions containing different numbers of predicates. These results highlight the potential in distilling knowledge encoded in text-to-image models to contrastive models. Results on ARO dataset We also evaluate the effectiveness of SDS-CLIP on the ARO dataset (Yuksekgonul et al. 2023). This dataset consists of three types of tasks which focus on (i) attribute-understanding, (ii) relational-understanding and (iii) order-understanding. In Table (2), we show that SDS-CLIP improves on the attribute-binding and relational understanding tasks by 1%-3% across a variety of CLIP models. However, we do not observe any improvement in the order-understanding tasks, as the denoising diffusion score from the teacher Stable-Diffusion is itself erroneous, which we describe in detail in the next section. When does distillation not help CLIP? While we find that distilling knowledge from Stable-Diffusion to CLIP helps in object-swap, relational-understanding and attribute-binding visio-linguistic tasks, it does not help on tasks where the order of the text is perturbed (e.g. the COCO-Order and Flickr-Order tasks in the ARO dataset). This is shown in the final two columns of Table (2). In fact, we find that the denoising diffusion score in eq. (4) leads to accuracies of 0.24 for COCO-Order and 0.34 for Flickr-Order, which is in fact lower than CLIP models. Concurrent works (Krojer et al. 2023) have shown similarly low performance for text-ordering tasks. A potential reason could be that ordering tasks only test for grammatical understanding, which current text encoders cannot effectively model. Another reason could be that the denoising diffusion score is not affected by word ordering, as the image semantics are not changed as a result. Figure 5: Our fine-tuning method does not harm the zero-shot abilities of CLIP. In fact, for certain downstream datasets (e.g., ImageNet, CIFAR-10, MNIST, Aircraft, Flowers) we observe an improvement in the zero-shot performance between 1% and 8% for ViT-B/16. For other CLIP models (ViT-B/32 and ViT-L/14), we find no drop in zero-shot performance. Does the zero-shot performance get affected? One of the drawbacks of fine-tuning CLIP using an additional distillation objective along with the contrastive loss can be a potential decrease in the downstream zero-shot performance.
The contrastive losses used to train CLIP are known to be a proxy for a retrieval task (Radford et al. 2021b). Thus, down-weighting this loss via an additional objective could harm CLIP's zero-shot performance, which is itself a retrieval task. In practice, we find this not to be the case (see Fig 5). In fact, we find that the zero-shot performance of ViT-B/16 increases across a variety of downstream datasets (ImageNet, MNIST, Aircraft, Flowers, DTD, PETS). The zero-shot improvement ranges from 1% up to 8% across the downstream datasets. For other ViT-based CLIP architectures such as ViT-B/32 and ViT-L/14, we find marginal improvements in the range of 1%-1.5% (see Fig 5) and find no drop in zero-shot performance across various downstream datasets. These results suggest that improving the visio-linguistic reasoning abilities of contrastive models such as CLIP in a post-hoc fine-tuning step does not harm their inherent zero-shot abilities, but can instead lead to marginal zero-shot improvements in certain cases. Given that our current results are obtained with small batch-sizes, we hypothesize that increasing the batch-sizes may further boost zero-shot results. Does distilling features directly from UNet help? Previous works such as (Xu et al. 2023) find that the frozen features of the UNet contain structural information about the image. Motivated by this, we also investigate if distilling knowledge directly from the frozen UNet features is beneficial. Given an image x and its caption c, the frozen features f from the UNet (where $I(x, c) = \\epsilon_{\\theta}(v_{\\alpha}(x), t, c)$, similar to (Xu et al. 2023)) can be extracted. We then use these frozen internal representations from the UNet to regularize the features of the image encoder in CLIP. In particular: $L_{total} = L_{CLIP} + \\lambda \\lVert h_{w}(f_{\\phi}(x)) - I(x, c)\\rVert_{2}^{2}$ (9). However, we find that distillation in this way does not lead to improved performance on visio-linguistic reasoning. In fact, for ViT-B/16 (CLIP) we find the Winoground score decreases from 0.24 to 0.23. This result shows that using score-distillation sampling, which involves backpropagation through the UNet, is critical to distill knowledge from diffusion models to other discriminative models and subsequently achieve strong visio-linguistic performance." + }, + { + "url": "http://arxiv.org/abs/2304.01917v1", + "title": "Strong Baselines for Parameter Efficient Few-Shot Fine-tuning", + "abstract": "Few-shot classification (FSC) entails learning novel classes given only a few\nexamples per class after a pre-training (or meta-training) phase on a set of\nbase classes. Recent works have shown that simply fine-tuning a pre-trained\nVision Transformer (ViT) on new test classes is a strong approach for FSC.\nFine-tuning ViTs, however, is expensive in time, compute and storage. This has\nmotivated the design of parameter efficient fine-tuning (PEFT) methods which\nfine-tune only a fraction of the Transformer's parameters. While these methods\nhave shown promise, inconsistencies in experimental conditions make it\ndifficult to disentangle their advantage from other experimental factors\nincluding the feature extractor architecture, pre-trained initialization and\nfine-tuning algorithm, amongst others. In our paper, we conduct a large-scale,\nexperimentally consistent, empirical analysis to study PEFTs for few-shot image\nclassification. 
Through a battery of over 1.8k controlled experiments on\nlarge-scale few-shot benchmarks including Meta-Dataset (MD) and ORBIT, we\nuncover novel insights on PEFTs that cast light on their efficacy in\nfine-tuning ViTs for few-shot classification. Through our controlled empirical\nstudy, we have two main findings: (i) Fine-tuning just the LayerNorm parameters\n(which we call LN-Tune) during few-shot adaptation is an extremely strong\nbaseline across ViTs pre-trained with both self-supervised and supervised\nobjectives, (ii) For self-supervised ViTs, we find that simply learning a set\nof scaling parameters for each attention matrix (which we call AttnScale) along\nwith a domain-residual adapter (DRA) module leads to state-of-the-art\nperformance (while being $\\sim\\!$ 9$\\times$ more parameter-efficient) on MD.\nOur extensive empirical findings set strong baselines and call for rethinking\nthe current design of PEFT methods for FSC.", + "authors": "Samyadeep Basu, Daniela Massiceti, Shell Xu Hu, Soheil Feizi", + "published": "2023-04-04", + "updated": "2023-04-04", + "primary_cat": "cs.CV", + "cats": [ + "cs.CV" + ], + "main_content": "Introduction Few-shot classi\ufb01cation (FSC) involves learning a new classi\ufb01cation task given only a few labelled training examples from each of the novel classes. It has a large numFigure 1. We introduce two strong PEFT baselines for few-shot image classi\ufb01cation: (i) LN-TUNE which \ufb01ne-tunes only the LayerNorm parameters of the ViT; (ii) ATTNSCALE which \ufb01ne-tunes a scaling parameter for the attention matrices along with a domain residual adapter. These approaches outperform full \ufb01ne-tuning and all other existing PEFT methods on MD and show competitive performance on ORBIT. ber of mainstream applications such as drug-discovery [23], robotics [20] and personalized object recognition [17] among others. Usually, a given few-shot classi\ufb01cation task consists of a few-labelled examples from the new classes (support set) and a testing set of unlabeled held-out examples of those classes (query set). Recent works [11, 15, 28] have shown that \ufb01ne-tuning a large pre-trained Vision Transformer (ViT) on the support set of new test tasks achieves state-of-the-art performance on large-scale few-shot classi\ufb01cation benchmarks such as META-DATASET (MD). Because of their high number of parameters, however, \ufb01ne-tuning ViTs is extremely expensive in terms of storage, compute, and time. This limits the ability to learn new downstream tasks in real-world applications where resources are constrained (e.g., personalization on edge or mobile devices) since (i) storing the task\u2019s \ufb01netuned parameters on the edge may be unfeasible, especially for a large number of downstream tasks and (ii) \ufb01ne-tuning on each new task takes prohibitively long. 1 arXiv:2304.01917v1 [cs.CV] 4 Apr 2023 \fFigure 2. ATTNSCALE leads to SoTA performance on MD with self-supervised ViTs and LN-TUNE leads to SoTA performance for supervised ViTs. Pareto-Plot comparing the average MD accuracy with the model parameters updated during few-shot adaptation: (a) Averaged across self-supervised ViT-S/16 and ViT-B/16 (DINO); (b) Averaged across supervised ViT-S/16(DeiT), ViT-B/16(DeiT) and ViT-B/16(ImageNet-21k). We \ufb01nd that the recently proposed eTT [28] does not generalize well to supervised objectives and two simple but strong baselines LN-TUNE and ATTNSCALE outperform existing PEFT methods. Averaged across all domains in MD except ImageNet. 
As a result, much recent progress has been made in designing light-weight, fast and parameter-ef\ufb01cient \ufb01netuning (PEFT) methods [28, 12]. These reduce the computational requirements to adapt a ViT to a new test task by \ufb01ne-tuning only a fraction of the ViT\u2019s total parameters. However, inconsistencies in experimental setups make it dif\ufb01cult to disentangle the bene\ufb01t of PEFT methods from other experimental factors, including pre-training initialization, feature extractor architecture, \ufb01ne-tuning algorithm, downstream dataset and other hyperparameters. Prompttuning [12], for example, is the state-of-the-art PEFT method on the transfer learning benchmark VTAB [30], while eTT [28] performs strongly on few-shot classi\ufb01cation in MD. Both, however, use distinct feature extractors, pre-training initializations, \ufb01ne-tuning algorithms, and hyperparameters, thus limiting our understanding of the generalizability of these PEFT methods across different setups. To address this, we perform a large-scale empirical analysis of top-performing PEFT methods on two largescale few-shot image classi\ufb01cation benchmarks, METADATASET [26] and ORBIT [17]. Our experimentation involves \u223c1.8k \ufb01ne-tuning experiments which quantify the performance of PEFT methods under experimentally controlled settings including ViT architectures, pre-training objectives, and \ufb01ne-tuning algorithms. This enables us to compare PEFT methods in a fair and consistent way and also draw out novel insights on the interaction between these different components in the \ufb01ne-tuning pipeline. Our main \ufb01nding is that the embarrassingly simple approach of \ufb01ne-tuning just a ViT\u2019s LayerNorm parameters (only 0.08% of total parameters) on a new test task leads to better performance than with full model \ufb01ne-tuning and other PEFT methods on MD and ORBIT. We call this baseline LN-TUNE. We also \ufb01nd that the recently proposed eTT [28], primarily designed for self-supervised ViTs, lags behind some of the PEFT methods which we evaluate in our empirical study. In lieu of this, we propose a new strong baseline called ATTNSCALE which leads to improved fewshot performance over eTT and other PEFT methods for self-supervised ViTs. In particular, ATTNSCALE learns only a scaling parameter for each entry in the attention matrices along with a domain-residual module during few-shot adaptation, making it \u223c9x more parameter-ef\ufb01cient than eTT. Importantly, ATTNSCALE is extremely simple to implement, requires less than 6 lines of code, and can be easily integrated with any ViT architecture. These approaches establish two new, strong PEFT baselines for few-shot classi\ufb01cation, however our empirical study also reveals several interesting insights: (i) None of the carefully designed existing PEFT methods show consistent performance rankings across different pre-training methods (Section 6.1). (ii) We \ufb01nd that for different degrees of domain shifts, distinct PEFT methods are preferred highlighting that the need for surgically designing PEFT methods for different domain shifts (Section 6.3). (iii) Dropping PEFT methods from earlier layers in the ViT for large domain shifts (e.g. Omniglot, Quickdraw, Traf\ufb01c-Sign) is detrimental to few-shot performance (Section 6.4). 
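To make the main finding above concrete, here is a minimal sketch of the LN-TUNE idea: freeze every ViT parameter and re-enable only the LayerNorm affine parameters before adapting on a task's support set. The `timm` model name, checkpoint choice and learning rate are illustrative placeholders, not the exact configuration used in the study; any ViT whose normalization layers are `nn.LayerNorm` behaves the same way.

```python
import timm
import torch
import torch.nn as nn

def ln_tune_parameters(vit: nn.Module):
    """Freeze everything, then unfreeze only the LayerNorm weights/biases (LN-TUNE)."""
    for p in vit.parameters():
        p.requires_grad = False
    ln_params = []
    for module in vit.modules():
        if isinstance(module, nn.LayerNorm):
            for p in module.parameters():
                p.requires_grad = True
                ln_params.append(p)
    return ln_params

vit = timm.create_model("vit_small_patch16_224", pretrained=False)  # placeholder backbone
params = ln_tune_parameters(vit)
total = sum(p.numel() for p in vit.parameters())
tuned = sum(p.numel() for p in params)
print(f"tuning {tuned:,} / {total:,} parameters ({100 * tuned / total:.3f}%)")  # well under 0.1%
optimizer = torch.optim.Adam(params, lr=1e-3)  # per-task learning rate is chosen on a validation set
```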
In summary, our contributions are as follows: \u2022 A large-scale, experimentally consistent, empirical analysis of a wide-range of PEFT methods for fewshot classi\ufb01cation on 2 challenging large-scale benchmarks, META-DATASET and ORBIT. \u2022 An embarrassingly simple PEFT baseline, LN-TUNE, which \ufb01ne-tunes less than 0.08% of a ViT\u2019s parameters outperforming all existing PEFT methods on MD amongst supervised ViTs. \u2022 An easy-to-implement method, ATTNSCALE, which sets a new state-of-the-art on MD amongst selfsupervised ViTs while \ufb01ne-tuning <1.2% of the ViT\u2019s \fparameters. Our \ufb01ndings highlight that there is no one-size-\ufb01ts-all PEFT method and simple parameter-ef\ufb01cient \ufb01ne-tuning baselines should not be overlooked. 2. Related Works ViTs in few-shot classi\ufb01cation. CNNs have primarily been used as the feature extractor backbone in few-shot classi\ufb01cation methods [6, 22, 4, 8, 27], however, recently ViTs have replaced them as the state-of-the-art [11] in challenging few-shot classi\ufb01cation benchmarks like METADATASET. In these methods, the ViT is typically pre-trained with a self-supervised (or meta-learning) objective on a large dataset and then \ufb01ne-tuned on new test tasks. While some works [5, 7] have explored pre-training techniques to make ViTs speci\ufb01cally suited to downstream few-shot classi\ufb01cation, \ufb01ne-tuning a ViT at test time remains expensive. Our work therefore aims to shed light on parameter-ef\ufb01cient \ufb01ne-tuning methods for few-shot classi\ufb01cation. PEFT methods for few-shot classi\ufb01cation. Parameter ef\ufb01cient \ufb01ne-tuning methods have been extensively studied in Transformers for NLP tasks with adapters [9], LoRA [10], pre\ufb01x-tuning [16] and prompt-tuning [14] serving as strong alternatives to \ufb01ne-tuning all the Transformer\u2019s parameters. PEFTs have also been explored in Vision Transformers for computer vision tasks, with methods like visual prompt tuning [12] for transfer learning which work by tuning pre\ufb01xes attached to the input and eTT [28] which tune pre\ufb01xes attached to key and value matrices in the self-attention layers. [28] show that eTT results in performance close to full model tuning for ViTs pre-trained using DINO using only 9% of the total model parameters on the large-scale META-DATASET. 3. Few-Shot Classi\ufb01cation Preliminaries In few-shot classi\ufb01cation, the goal is to adapt a classi\ufb01er to a new task at test time using a small number of training examples of each new class. In \ufb01ne-tuning-based approaches, this adaptation process is done by \ufb01ne-tuning the model on the training examples, before then evaluating it on a held-out set of test examples. Formally, given a pre-trained feature extractor f\u03b8, a fewshot task is sampled from a test dataset D. The task is composed of a support set S (of training examples) and a query set Q (of held-out test examples). Generally, N unique classes are \ufb01rst sampled from the underlying dataset D. For each class j \u2208[1, N], kj s examples are sampled for the support set S and kj q examples are sampled for the query set Q. If kj s = k is \ufb01xed for \u2200j \u2208[1, N] classes, then the task is known as a N-way, k-shot task. When given a new test task, the objective is to \ufb01ne-tune the underlying feature extractor f\u03b8 or the parameter-ef\ufb01cient module p\u03c6 on the task\u2019s support set S using a \ufb01ne-tuning algorithm F. 
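To ground the support/query task format just defined, the sketch below draws a variable-way, variable-shot episode from a labelled dataset. The way, shot and query ranges are illustrative placeholders rather than the exact Meta-Dataset sampling rules, and classes are assumed to contain enough examples for the requested counts.

```python
import random
from collections import defaultdict

def sample_episode(labelled_data, n_way=(5, 10), k_shot=(1, 5), k_query=(5, 15)):
    """labelled_data: list of (example, label) pairs.
    Returns (support, query) lists of (example, label); way/shot counts vary per task,
    mimicking variable-way, variable-shot (class-imbalanced) episodes."""
    by_class = defaultdict(list)
    for x, y in labelled_data:
        by_class[y].append(x)
    ways = random.randint(*n_way)
    classes = random.sample(sorted(by_class), ways)
    support, query = [], []
    for y in classes:
        ks, kq = random.randint(*k_shot), random.randint(*k_query)
        pool = random.sample(by_class[y], ks + kq)
        support += [(x, y) for x in pool[:ks]]
        query += [(x, y) for x in pool[ks:]]
    return support, query
```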
In parameter-ef\ufb01cient \ufb01ne-tuning approaches, f\u03b8 is frozen and only the parameters in p\u03c6 are \ufb01ne-tuned. More speci\ufb01cally, we can formalize the \ufb01ne-tuning procedure as follows: \u03c6\u2217= min \u03c6 \u2113(f\u03b8, p\u03c6, F(S)) (1) Inference on the query examples is done depending on the \ufb01ne-tuning algorithm F (see Section 4) for details). We follow the variable-way, variable way sampling protocol from [26] where kj s, kj q and N vary for each sampled fewshot task. This setting generates class-imbalanced few-shot tasks which make it challenging as the model needs to handle tasks of varying sizes. 4. Large-Scale Empirical Study Design PEFT methods have been widely used to make fewshot adaptation more computationally ef\ufb01cient [12, 28, 21], however, inconsistencies in experimental setups make it dif\ufb01cult to disentangle the gain from PEFT methods versus other experimental factors. To address this, we conduct a wide-scale experimentally controlled study of over 1.8k experiments. We control for the pre-trained model (including pre-training objective and architecture), PEFT module type, position of the PEFT module, \ufb01ne-tuning algorithm, learning hyperparameters and downstream dataset. Below we provide details of each of these components: Pre-trained models. For pre-training objectives we consider the self-supervised objective DINO [3] and the supervised objective DeiT [25]. For architectures, we consider ViT-S/16 and ViT-B/16 [25]. These architectures are pretrained using the given objectives on ImageNet-1k. In addition, we also consider ViT-B/16, which is pre-trained on the large-scale ImageNet-21k. These objectives and architectures were chosen as they lead in downstream few-shot performance [11] on MD. More details on pre-training are included in the Appendix. PEFT methods. We consider the following 7 existing methods for parameter-ef\ufb01cient \ufb01ne-tuning: adapters [9], LoRA [10], shallow prompt-tuning and deep prompttuning [12], eTT [28], ladder tuning [24], and bias tuning [29]. We also compare to full model \ufb01ne-tuning [11] and our 2 strong baselines: \ufb01ne-tuning only the ViT\u2019s LayerNorm parameters (LN-TUNE), and learning a simple scaling factor for the elements in the attention matrices (ATTNSCALE) (see Section 5.2). Of the existing methods, adapters and LoRA have been extensively used for \ufb01ne-tuning Transformers in few-shot NLP tasks. Ladder tuning is a more recent memory-ef\ufb01cient as well as parameter-ef\ufb01cient \ufb01ne-tuning method for language models like T5 [19]. Ladder is tuning is memory-ef\ufb01cient as it avoids back-propagation through the entire featureextractor backbone. Shallow and deep prompt tuning are adaptations of [14] for transfer learning in vision. eTT [28] \fFigure 3. PEFT methods (except our LN-TUNE strong baseline) lack consistency across different pre-training paradigms. (a) The ranks of the 7 top-performing PEFT methods on META-DATASET change across different pre-training paradigms when measured under controlled settings; (b) The Spearman correlations between the different pre-trained models with respect to the performance rank of all 10 PEFT methods are not consistently high. Evaluation across all domains in MD except ImageNet. \ufb01ne-tunes only the pre\ufb01xes attached to the key and value matrices in a ViT\u2019s self-attention layers. eTT is also the only method to have been tested on the large-scale METADATASET benchmark. 
Note, we omit the prototype regularization used in eTT to ensure fair comparison to other PEFT methods where prototype regularization is not used. We provide further information for each of these methods in the Appendix. Position of PEFT methods. We consider two con\ufb01gurations in which the PEFTs are inserted in the ViT: (i) We insert PEFTs in each of the layers, including the \ufb01nal; (ii) We insert PEFT in the \ufb01nal layer and in one of the layers between the \ufb01rst and the \ufb01nal layer, leading to two layers in total. For (ii) each \ufb01ne-tuning experiment is repeated 12 times (see Section 6.4 for analyses). Fine-tuning algorithms . We consider 3 \ufb01ne-tuning algorithms given a new test task: (i) LINEAR: We attach a linear classi\ufb01cation layer after the \ufb01nal layer of the ViT and \ufb01ne-tune both the PEFT\u2019s and this layer\u2019s parameters using a cross-entropy loss. (ii) PROTOAUG: Following the stateof-the-art \ufb01ne-tuning approach in [11], we use the examples from the task\u2019s support set to initialize class prototypes, similar to ProtoNets [22], and then use a query set to \ufb01netune the ViT. where the query set is an augmented version of the support set. In particular, we apply color-jitter and translation augmentations on the support set to generate the query set. (iii) PROTONCC: Following [15, 28], we do not apply augmentations to generate the query set and instead treat the query set as a copy of the support set, and \ufb01ne-tune the ViT in a similar way to PROTOAUG. For (ii) and (iii), inference on the query set is performed using a ProtoNets classi\ufb01er [22], while for (i), the linear classi\ufb01er is used. Hyperparameters. We standardize the hyperparameters across our entire experimental setup. Following [11], we choose a learning rate from {0.0001, 0.001, 0.01, 0.1} and select the rate that gives the best performance on the validation set. The validation set is a \ufb01xed set of 5 fewshot tasks sampled from the downstream dataset to which the ViT is being adapted. For each few-shot task, we \ufb01netune for 40 steps with the Adam optimizer [13] using the selected learning rate. Downstream datasets. We run all our experiments on two challenging large-scale few-shot classi\ufb01cation benchmarks (i) META-DATASET [26] and (ii) ORBIT [17]. METADATASET consists of 10 different sub-datasets, and is currently the most widely used few-shot classi\ufb01cation benchmark. Note, we remove the ilsvrc 2012 sub-dataset from META-DATASET as our ViT models have been pre-trained on ImageNet. ORBIT is a few-shot classi\ufb01cation benchmark containing noisy, real-world videos of everyday objects across 17 test users. In accordance with [26], we sample 600 few-shot tasks per sub-dataset in META-DATASET while for ORBIT, we sample 50 tasks per user. In total, each experimental analysis is performed on 6250 few-shot tasks. GPU compute. Given the large memory requirements to \ufb01ne-tune ViTs especially for tasks sampled from MD (due to large support set sizes), we use an A6000 GPU (with 48GB memory) for ViT-B/16 and an A5000 GPU (with 24GB memory) for ViT-S/16. 5. Embarrassingly Simple Strong Baselines for Few-Shot Fine-tuning Our standardised large-scale empirical study led us to discover two embarrassingly simple but strong baselines for parameter-ef\ufb01cient few-shot \ufb01ne-tuning: LN-TUNE and ATTNSCALE. 
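Before turning to the two baselines, a rough sketch of the PROTONCC/PROTOAUG fine-tuning loop described above may help: class prototypes are built from support embeddings, the (optionally augmented) support set is classified against those prototypes, and only the PEFT parameters receive gradients. The `augment` callable, the 40-step budget and the learning rate follow the description above; the shapes, the encoder handle and the decision to let gradients flow through the prototype computation are assumptions of this sketch rather than details pinned down in the text.

```python
import torch
import torch.nn.functional as F

def prototype_logits(features, prototypes):
    """ProtoNets-style logits: negative squared Euclidean distance to each class prototype."""
    return -torch.cdist(features, prototypes) ** 2

def finetune_on_support(encoder, peft_params, support_x, support_y, augment=None, steps=40, lr=1e-3):
    """support_x: (N, C, H, W) images; support_y: (N,) labels in [0, n_way).
    augment=None gives the PROTONCC variant; colour-jitter/translation gives PROTOAUG."""
    opt = torch.optim.Adam(peft_params, lr=lr)
    n_way = int(support_y.max()) + 1
    for _ in range(steps):
        feats = encoder(support_x)
        protos = torch.stack([feats[support_y == c].mean(0) for c in range(n_way)])
        query_x = augment(support_x) if augment is not None else support_x
        logits = prototype_logits(encoder(query_x), protos)
        loss = F.cross_entropy(logits, support_y)
        opt.zero_grad()
        loss.backward()
        opt.step()
    return encoder
```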
Both of these methods perform better than full model fine-tuning and all other existing PEFT methods on MD at a fraction of the computational cost. Below we describe each of these strong baselines: 5.1. LN-TUNE LN-TUNE works by fine-tuning only the ViT's LayerNorm parameters on a task's support set. Formally, for a given ViT with L layers, the i-th layer has two LayerNorm blocks: one before its attention block and one before its MLP block. Given an input vector $a \\in \\mathbb{R}^{d}$ from the previous layer or block, the operation of the first block can be defined as $\\mathrm{LayerNorm}^{i}_{1}(a) = \\gamma^{i}_{1} \\odot (a - \\mu)/\\sigma + \\beta^{i}_{1}$, and the operation of the second block as $\\mathrm{LayerNorm}^{i}_{2}(a) = \\gamma^{i}_{2} \\odot (a - \\mu)/\\sigma + \\beta^{i}_{2}$. Here $\\{\\gamma^{i}_{1}, \\beta^{i}_{1}, \\gamma^{i}_{2}, \\beta^{i}_{2}\\} \\subset \\mathbb{R}^{d}$ are the only learnable parameters for the i-th layer. For a given task, these parameters across all L layers are fine-tuned using the task's support set S. As a result, LN-TUNE is extremely light-weight when compared to the other PEFT methods. For example, a ViT-S/16 has only ~18.6k LayerNorm parameters, while a ViT-B/16 has only ~37k. Since ViT-S/16 and ViT-B/16 have ~22M and ~76M parameters, respectively, this accounts for less than 0.08% of the total parameters. 5.2. ATTNSCALE As a second strong baseline, we introduce ATTNSCALE, a modification of the recently proposed eTT [28]. Here, we replace the attentive prefix tuning part in eTT with a learnable scaling parameter on each element in the attention matrices, which we tune along with eTT's DRA module, reducing the number of learnable parameters by ~9x. Given a ViT with L layers, $n_h$ attention heads and n tokens, the weight matrices in the i-th layer's attention block for the j-th head are defined as $W^{ij}_{q} \\in \\mathbb{R}^{d \\times d_e}$, $W^{ij}_{k} \\in \\mathbb{R}^{d \\times d_e}$ and $W^{ij}_{v} \\in \\mathbb{R}^{d \\times d_e}$. Here d is the dimension of the token embeddings and $d_e$ is the dimension of the tokens after the weight matrix projection. $Q^{ij} \\in \\mathbb{R}^{n \\times d}$, $K^{ij} \\in \\mathbb{R}^{n \\times d}$ and $V^{ij} \\in \\mathbb{R}^{n \\times d}$ are defined as the query, key and value tokens, respectively. The attention matrix in the i-th layer for the j-th head can be defined as: $A^{ij} = \\mathrm{softmax}\\big((Q^{ij} W^{ij}_{q})(K^{ij} W^{ij}_{k})^{T} / \\sqrt{d_e}\\big)$ (2), where $A^{ij} \\in \\mathbb{R}^{n \\times n}$. ATTNSCALE applies a point-wise scaling factor to each element in the attention matrix before the softmax operation. These scaling factors are learned during fine-tuning on the task's support set S. In particular, we define a learnable scaling tensor $A_{\\alpha} \\in \\mathbb{R}^{n \\times n \\times L \\times n_h}$. $A_{\\alpha}$ can be reshaped as $\\{A^{i}_{\\alpha}\\}_{i=1}^{L}$, where $A^{i}_{\\alpha} \\in \\mathbb{R}^{n \\times n \\times n_h}$ is the scaling tensor for the i-th layer. For each attention head $j \\in [1, n_h]$, the scaling matrix is defined as $A^{ij}_{\\alpha} \\in \\mathbb{R}^{n \\times n}$: $A^{ij} = \\mathrm{softmax}\\big(A^{ij}_{\\alpha} \\odot (Q^{ij} W^{ij}_{q})(K^{ij} W^{ij}_{k})^{T} / \\sqrt{d_e}\\big)$ (3). During few-shot adaptation, only $A^{ij}_{\\alpha}$ is learned along with the parameters in the DRA module from eTT. Note, $\\{W^{ij}_{q}, W^{ij}_{k}, W^{ij}_{v}\\}$ are kept frozen for each i-th layer and j-th attention head. In principle, the scaling factor $A_{\\alpha}$ replaces the attentive prefix tuning used in eTT. Figure 4. Different attention heads encode similar attention maps in self-supervised ViTs: (a) ViT-S/16 (DINO); (b) ViT-S/16 (DeiT). We compute the Pearson correlation between the attention scores of different heads $h_i$, $\\forall i \\in [1, n_h]$. Self-supervised ViTs encode attention across different heads more similarly than supervised ViTs.
Correlation is averaged across examples from 100 tasks from each of the 10 domains in MD. the attentive-pre\ufb01x tuning (APT) module in eTT. This APT module uses \u223c9% model parameters, whereas ATTNSCALE uses only \u223c1.2% but still gives improved MD performance. We also propose a light-weight extension of ATTNSCALE, called ATTNSCALELITE, which learns the same scaling parameters across all nh attention heads in a given layer, rather than different ones for each head. This is motivated by an observation that all nh attention heads in a layer have similar attention maps. We show this in Fig 4 where we plot the pairwise Pearson correlation [1] between the attention values of different heads. Here, for self-supervised ViTs, we see strong correlation values between different heads in a given layer indicating that different heads encode similar kinds of attention maps. This is similar for supervised ViTs, however, the correlation values are slightly lower. Formally, for ATTNSCALELITE, we de\ufb01ne the scaling parameter for the ith layer as Ai \u03b1 \u2208Rn\u00d7n and Aij \u03b1 = Ai \u03b1, \u2200j \u2208[1, nh]. ATTNSCALELITE requires only 0.25% of the total parameters for ViT-S/16 and only 0.09% for ViT-B/16 which makes it an extremely light-weight module. In Section 6, we provide \ufb01ne-grained results on the ef\ufb01cacy of both ATTNSCALE and ATTNSCALELITE for downstream few-shot adaptation. We also provide a PyTorch-like implementation of ATTNSCALE and ATTNSCALELITE in the Appendix. 6. Empirical Results on META-DATASET We use our wide-scale empirical study to derive novel insights on PEFT methods for few-shot classi\ufb01cation. In particular, we use our results on MD to answer the following key questions: 1 Do PEFT methods rank similarly across different pre-training architectures and learning objectives? 2 How does the \ufb01ne-tuning algorithm in\ufb02uence the performance of a PEFT method? 3 Is the optimal PEFT method \fPEFT MSCOCO Traf\ufb01c-Sign Omniglot Aircraft DTD VGG-Flower Quickdraw Cu-birds Fungi Overall Rank Full 61.5 87.3 78.7 75.4 86.9 94.2 73.6 85.4 54.7 77.5 6 Adapter 55.8 52.2 54.7 60.01 83.8 94.6 60.5 84.8 55.9 66.8 9 Bias 63.4 90.4 80.4 77.5 84.7 95.1 74.3 85.6 58.9 78.8 4 LoRA 62.1 88.1 80.8 80.8 86.8 94.8 72.7 85.8 59.8 78.9 3 Ladder 55.7 52.2 54.7 60.01 83.8 94.6 60.5 84.8 55.9 67.0 8 Prompt-Shallow 52.7 58.9 61.8 62.9 83.0 94.2 66.0 83.4 55.5 68.7 7 Prompt-Deep 62.8 85.6 77.0 73.3 85.3 96.2 73.2 86.1 58.2 77.5 6 eTT 61.5 89.1 78.9 75.8 85.1 95.1 73.5 86.1 58.2 78.1 5 LN-TUNE 64.2 91.2 77.9 75.3 84.4 96.9 74.7 87.5 59.9 79.1 2 ATTNSCALE 61.9 91.4 80.9 78.8 85.8 95.9 74.4 86.7 59.01 79.4 1 ATTNSCALELITE 61.6 91.0 80.2 77.9 85.8 96.0 73.9 86.7 59.0 79.1 2 Table 1. Our strong baselines, LN-TUNE and ATTNSCALE, rank in the top 2 of all PEFT methods on the few-shot classi\ufb01cation benchmark, META-DATASET. Results shown for a ViT-S/16 (DINO), and exclude the ImageNet split. Blue: Best overall performing PEFT method; Red: existing PEFT methods.; Brown: Best performing module amongst existing PEFT methods.; Magenta: Strong baselines proposed in our paper.; Domains with large domain shifts from ImageNet-1k. different for different data domains? 4 Can PEFT modules be dropped from certain positions in the feature extractor? This can lead to signi\ufb01cant memory and storage savings during few-shot deployment. 5 What is the impact of PEFT methods on the downstream \ufb01ne-tuning run-time? 
These are critical factors when deploying a few-shot classi\ufb01er in the wild. We also show that our two simple but strong baselines, LN-TUNE and ATTNSCALE, perform better than full \ufb01ne-tuning and all top-performing PEFT methods. 6.1. Consistency Across Pre-Training Models We analyse the in\ufb02uence of pre-training model by ranking the performance of different PEFT methods across the different pre-training objectives and architectures described in Section 4. To isolate the role of the pre-trained model, for each run, we keep all other variables constant including the \ufb01ne-tuning algorithm, position of the modules, and hyperparameters. We report the results using the PROTOAUG \ufb01ne-tuning algorithm in Fig 3, and include results for PROTONCC and LINEAR in the Appendix. Existing PEFT methods. In Fig 3-(a), we \ufb01nd that PEFT methods rank inconsistently, with no single best approach, across the different pre-trained models. In Fig 3(b), we plot the Spearman correlation of the PEFT method\u2019s ranking between different pre-trained models. We observe that the correlation values across all pairs of pre-trained models are not consistently high, suggesting that existing PEFT methods do not generalize similarly for different pre-trained architectures and objectives. We also \ufb01nd that adapters, ladder-tuning and shallow prompt-tuning all have sub-par performances on MD (\u223c10% drop) when compared to LoRA, bias-tuning, eTT and deep prompt-tuning (see Fig 6). We also highlight that shallow prompt-tuning struggles with few-shot classi\ufb01cation on MD despite performing competitively on transfer learning natural tasks in VTAB [12]. Deep prompt-tuning [12], which is the state-ofthe-art PEFT module on VTAB, performs competitively on MD across all pre-trained models, but falls short of methFigure 5. With PEFT methods, we \ufb01nd PROTOAUG to have the best performance on META-DATASET, while LINEAR performs the worst. MD accuracy averaged over all 10 PEFT methods with different \ufb01ne-tuning algorithms. ods like eTT [28], LoRA [10], bias-tuning [29] and full model-tuning [11] (see Fig 3). This result highlights that strongly performing PEFT methods for transfer learning do not generalize well to the challenging few-shot setting of MD. eTT [28] for ViT-S/16(DINO) outperforms full modeltuning, but also lags behind LoRA and bias-tuning. Overall, we \ufb01nd bias-tuning [29] to consistently rank amongst the top 4 across all the pre-training models, outperforming many of the more complex PEFT methods. Our strong baselines. From Fig 3, we \ufb01nd that our strong baselines, LN-TUNE and ATTNSCALE, perform strongly across all the pre-trained models on MD. In particular, LN-TUNE performs the best for supervised ViTs (pre-trained on ImageNet-1k and ImageNet-21k) consistently. We also highlight that for supervised ViTs, none of the PEFT methods except LN-TUNE reaches performance close to full \ufb01ne-tuning. ATTNSCALE, which is around 9x more parameter-ef\ufb01cient than eTT, has the best fewshot performance for self-supervised ViTs pre-trained using \fFigure 6. Adapters, ladder tuning and shallow prompt-tuning are the worst-performing PEFT methods on META-DATASET (averaged across all the domains in META-DATASET) showing a signi\ufb01cant drop of (\u223c10%) compared to other methods. DINO [3]. For self-supervised ViTs, LN-TUNE performs closely to ATTNSCALE and ranks in the top 2 methods. 6.2. 
Effect of Fine-tuning Algorithm We quantify the impact of 3 different algorithms for \ufb01netuning the parameters in PEFTs: LINEAR, PROTOAUG and PROTONCC. We \ufb01nd that PROTOAUG outperforms PROTONCC and strongly outperforms LINEAR across all pretraining objectives and PEFT methods including full model tuning (Fig 5). In some cases, PROTOAUG and PROTONCC outperform LINEAR by as much as 20%. We also \ufb01nd that for self-supervised pre-training objectives like DINO [3], the gap between PROTOAUG and PROTONCC is \u223c2.2%, whereas for supervised objectives like DeiT [25] this gap is higher at \u223c 4.7% (for both ImageNet-1k and ImageNet-21k initializations). Since the only difference between PROTOAUG and PROTONCC is that the query set is an augmented version of the support set, this suggests that applying augmentations during few-shot (meta) \ufb01ne-tuning is more effective with supervised than self-supervised objectives. We also note that when using full model \ufb01ne-tuning, PROTOAUG outperforms PROTONCC by \u223c5% for DINO and by \u223c6.7% for DeiT objectives. This gap is higher than when used with other PEFT methods (see Table 2). This suggests that PROTOAUG\u2019s ef\ufb01cacy decreases when used in conjunction with PEFT methods. Method PROTOAUG PROTONCC Performance Gap Full Tuning (DINO) 77.2 72.2 \u22065.0% All PEFTs (DINO) 75.4 73.2 \u22062.2% Full Tuning (DeiT) 78.1 71.38 \u22066.7% All PEFTs (DeiT) 73.1 68.4 \u22064.7% Table 2. The performance gap between PROTOAUG and PROTONCC is more with full \ufb01ne-tuning than when used with PEFT methods. 6.3. Comparing Performance Across Domains We leverage the distinct sub-datasets in MD to compare the performance of PEFT methods across domains. Since each sub-datasets has a different degree of domains shifts from the pre-training dataset (ImageNet), we also evaluate the robustness of different PEFT methods to these shifts. In Table 1, we show these results with a ViT-S/16 pretrained with DINO, and observe that none of the PEFT methods are consistently the best across domains. We show similar results for other pre-trained ViTs in the Appendix. Existing PEFT methods. We observe that deep prompt-tuning is the best PEFT method for domains with smaller degrees of shift from ImageNet such as Cu-Birds and VGG-Flower. It is second best on MS-COCO, which is also similar to ImageNet. We \ufb01nd, however, that for larger domain shifts such as Omniglot, Quickdraw and Traf\ufb01c-Sign it struggles, with LoRA and bias-tuning showing stronger performance. This is similarly the case for adapters, LoRA, and ladder-tuning which also perform poorly on larger domain shifts and have the lowest average performance on MD generally. Our strong baselines. We \ufb01nd that LN-TUNE in Table 1 outperforms all existing PEFT methods in 5 out of the 9 domains, with ATTNSCALE lagging behind it only slightly in these 5 domains. However, for domains with a larger shift (e.g., Omniglot, Traf\ufb01c-Sign), ATTNSCALE performs better than LN-TUNE. Even for Quickdraw, where there is a signi\ufb01cant shift, ATTNSCALE and LN-TUNE perform almost similarly. Overall on MD, ATTNSCALE ranks the best in terms of few-shot performance. These results suggest that our two strong baselines can be used complementarily: when the domain shift from the pre-training dataset is high, ATTNSCALE is better suited, whereas when the domain shift is low, LN-TUNE is the stronger approach. 
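Since ATTNSCALE appears throughout these comparisons, a self-contained sketch of a scaled attention block in the spirit of Section 5.2 is given below for reference: a per-layer, per-head scaling tensor is applied element-wise to the pre-softmax attention logits while the projection weights stay frozen. This is a simplified re-implementation for illustration only (single layer, no DRA module, fixed token count), not the authors' released code.

```python
import torch
import torch.nn as nn

class AttnScaleAttention(nn.Module):
    """Frozen multi-head self-attention with a learnable element-wise scale A_alpha
    on the pre-softmax attention logits (one n x n matrix per head)."""
    def __init__(self, dim, num_heads, num_tokens, shared_across_heads=False):
        super().__init__()
        self.num_heads = num_heads
        self.head_dim = dim // num_heads
        self.qkv = nn.Linear(dim, dim * 3, bias=False)
        self.proj = nn.Linear(dim, dim)
        for p in [*self.qkv.parameters(), *self.proj.parameters()]:
            p.requires_grad = False                      # W_q, W_k, W_v (and output proj) stay frozen
        heads = 1 if shared_across_heads else num_heads  # AttnScaleLite shares one scale per layer
        self.attn_scale = nn.Parameter(torch.ones(heads, num_tokens, num_tokens))

    def forward(self, x):                                # x: (B, n, dim)
        B, n, dim = x.shape
        qkv = self.qkv(x).reshape(B, n, 3, self.num_heads, self.head_dim).permute(2, 0, 3, 1, 4)
        q, k, v = qkv[0], qkv[1], qkv[2]                 # each: (B, heads, n, head_dim)
        logits = (q @ k.transpose(-2, -1)) / self.head_dim ** 0.5
        attn = (self.attn_scale * logits).softmax(dim=-1)  # scale logits element-wise, then softmax
        return self.proj((attn @ v).transpose(1, 2).reshape(B, n, dim))
```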
Our results highlight that current PEFT methods are not robust to varying degree of domain shifts and requires rethinking the current designs of PEFT modules to be uniformly robust to all domain shifts. Overall, our proposed strong baselines lead to the best performance in 7 out of 9 domains. Performance of ATTNSCALELITE. We observe from Table 1 that ATTNSCALELITE performs similarly to LN-TUNE but slightly worse than ATTNSCALE (by around 0.5\u22120.7%) on larger domain shifts for self-supervised ViTS/16(DINO). For smaller domain shifts, ATTNSCALELITE matches the performance of ATTNSCALE. For supervised ViTs, we \ufb01nd that ATTNSCALELITE lags behind ATTNSCALE by a larger margin of 1.2\u22121.8% for large domain shifts (see Appendix for results). The decrease in the effectiveness of ATTNSCALELITE for supervised ViTs can be attributed to the fact, that different heads encode attention maps less similarly than self-supervised ViTs. Therefore, learning a separate set of scaling parameters for different heads is more bene\ufb01cial for few-shot adaptation. 6.4. Can we drop PEFT modules from ViT layers? In Secs. 6.3 and 6.2, the PEFT modules are inserted in each of the 12 layers of the ViT. In this section, we use our \fModel Full Adapter Bias LoRA Ladder Prompt-Deep Prompt-Shallow eTT LN-TUNE ATTNSCALE ATTNSCALELITE ViT-S(DINO) 63.1 62.6 67.1 66.4 62.7 65.7 51.8 65.6 67.8 67.2 66.9 ViT-S(DeiT) 66.6 66.8 66.4 67.6 66.9 66.7 63.4 68.4 68.8 67.1 66.2 Table 3. LN-TUNE results in the best performance on ORBIT while ATTNSCALE is extremely competitive. Frame accuracy results are shown for a ViT-S/16. Red: Existing PEFT methods; Magenta: Strong baselines proposed in our paper.; Brown: Best performing module amongst existing PEFT methods.; Best overall performing PEFT method; We provide additional results with ViT-B/16 in the Appendix. strong baselines, LN-TUNE and ATTNSCALE, to examine if dropping PEFT modules from the majority of layers impacts performance. Speci\ufb01cally, we insert a PEFT module in the \ufb01nal layer of the ViT and another in 1 other layer (between 1-11). We vary the position of the second PEFT and observe its impact on performance (Fig 7). Results. From Fig 7, we \ufb01nd that inserting the PEFT into the later layers of the ViT improves the performance more than inserting it in the earlier layers for domains with a small degree of shift from ImageNet (e.g., MSCOCO, DTD, VGG-Flower, Cu birds). However, for large domain shifts such as in Traf\ufb01c-Sign, Quickdraw and Omniglot, we \ufb01nd that inserting LN-TUNE in the earlier layers is crucial. In particular for these domains, we \ufb01nd that inserting LNTUNE only in the later layers results in \u223c10% drop in accuracy . We observe similar results for ATTNSCALE (see Appendix). 6.5. Impact on Fine-tuning Time Although PEFT methods save a signi\ufb01cant amount of storage, they are not necessarily faster to \ufb01ne-tune compared to full model \ufb01ne-tuning as the modules are often deep inside the network and gradients must be backpropagated through the entire backbone. We empirically quantify this by measuring the \ufb01ne-tuning time per task across all MD domains. In particular, we compute the speedup factor of the PEFT methods when compared to full \ufb01ne-tuning. We \ufb01nd that all the PEFT methods, except ladder-tuning, provide a speedup of only 1.3-1.9x compared to full \ufb01ne-tuning. 
Ladder-tuning, since it does not require any gradient computation through the backbone, has a greater speedup of 3.3x compared to full \ufb01ne-tuning (See Appendix). 7. Results on Tasks from ORBIT In this section, we compare PEFT methods on the challenging personalization tasks from ORBIT. We modify the task sampling procedure in ORBIT to decrease the maximum size of the support set sizes, so that PROTOAUG can be used for \ufb01ne-tuning. We provide the detailed task sampling procedure in the Appendix. Overall Results. From Table 3, we \ufb01nd that biastuning and eTT have the best performances amongst the existing PEFT methods for ViT-S/16 (DINO) and ViT-S/16 (DeiT), respectively. These results reinforce our previous \ufb01nding that different PEFT methods may be suited to difFigure 7. Dropping LN-TUNE from earlier layers in the ViT for large domain shifts (e.g., Traf\ufb01c-Sign, Quickdraw, Omniglot) leads to a large drop in accuracy. We investigate the effect of inserting LN-TUNE at different layers in the ViT. ferent pre-training objectives. Overall, we \ufb01nd that LNTUNE results in the best few-shot performance for both selfsupervised (DINO) and supervised (DeiT) pre-training objectives across all PEFT methods. ATTNSCALE ranks in the top 2 for DINO, however, for DeiT we \ufb01nd its performance slightly drops but still ranks within the top 4 PEFT methods. 8." + }, + { + "url": "http://arxiv.org/abs/2110.11190v1", + "title": "On Hard Episodes in Meta-Learning", + "abstract": "Existing meta-learners primarily focus on improving the average task accuracy\nacross multiple episodes. Different episodes, however, may vary in hardness and\nquality leading to a wide gap in the meta-learner's performance across\nepisodes. Understanding this issue is particularly critical in industrial\nfew-shot settings, where there is limited control over test episodes as they\nare typically uploaded by end-users. In this paper, we empirically analyse the\nbehaviour of meta-learners on episodes of varying hardness across three\nstandard benchmark datasets: CIFAR-FS, mini-ImageNet, and tiered-ImageNet.\nSurprisingly, we observe a wide gap in accuracy of around 50% between the\nhardest and easiest episodes across all the standard benchmarks and\nmeta-learners. We additionally investigate various properties of hard episodes\nand highlight their connection to catastrophic forgetting during meta-training.\nTo address the issue of sub-par performance on hard episodes, we investigate\nand benchmark different meta-training strategies based on adversarial training\nand curriculum learning. We find that adversarial training strategies are much\nmore powerful than curriculum learning in improving the prediction performance\non hard episodes.", + "authors": "Samyadeep Basu, Amr Sharaf, Nicolo Fusi, Soheil Feizi", + "published": "2021-10-21", + "updated": "2021-10-21", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "cs.CV" + ], + "main_content": "INTRODUCTION Humans have a remarkable ability to learn new concepts from very few examples and generalize effectively to unseen tasks. However, standard deep learning approaches still lag behind human capabilities in learning from few examples. For large over-parameterized deep models, learning with general supervision from only a few examples leads to over-\ufb01tting and thus poor generalization. 
To circumvent this, the paradigm of few-shot learning (Wang et al., 2020; Fei-fei et al., 2006; Vinyals et al., 2017) aims to effectively learn new concepts from very few labeled examples. These learned concepts can generalize well to future unseen learning tasks. Several frameworks have been proposed for tackling the few-shot learning scenario: transfer-learning (Dhillon et al., 2019), selftraining (Phoo & Hariharan, 2020) and meta-learning (Hospedales et al., 2020; Finn et al., 2017; Snell et al., 2017). Meta-learning in particular aims to learn the process of learning from few examples and has shown remarkable performance across various few-shot benchmarks (Hospedales et al., 2020). In meta-learning, several few-shot tasks (episodes) are sampled from a set of base classes and the underlying model is trained to perform well on these tasks leading to improved generalization in learning from only few examples belonging to novel and unseen classes. Existing meta-learners such as prototypical networks (Snell et al., 2017), MAML (Finn et al., 2017), MetaOptNet (Lee et al., 2019), and R2D2 (Bertinetto et al., 2018) primarily focus on improving prediction performance on average across multiple episodes. However, different episodes have distinct characteristics and hardness which might lead to a wide variance in prediction accuracy across episodes. This problem is much more prevalent in few-shot models deployed in the industry. For example, meta-trained models are often deployed in the cloud for the end-users to use for various tasks such as object recognition, detection, semantic segmentation in computer vision and natural language understanding in NLP. In such settings, the end-users upload their own few-shot dataset to 1 arXiv:2110.11190v1 [cs.LG] 21 Oct 2021 \fPreprint perform predictions on new and unseen examples belonging to novel classes. In practice, different users may upload few-shot datasets of varying quality and hardness, leading to a wide disparity in performance across different users. To draw a parallel to the widely accepted experimental protocols in meta-learning, each of the uploaded few-shot dataset and the corresponding unseen examples is equivalent to a test episode. In this paper, we study this issue and investigate how existing state-of-the-art meta-learners (Snell et al., 2017; Bertinetto et al., 2018; Lee et al., 2019) perform on episodes of varying hardness. Across three benchmark datasets: CIFAR-FS, mini-ImageNet, and tieredImageNet, we observe that there is a gap of \u224850% in prediction accuracy between the easiest and hardest episodes. To this end, we identify several intriguing properties of hard episodes in meta-learning. For instance, we \ufb01nd that hard episodes are forgotten more easily than easy episodes during meta-training. To improve prediction performance on hard episodes, we investigate and benchmark various adversarial training and curriculum learning strategies that can be used jointly with any existing meta-learner. Empirically, we \ufb01nd that adversarial training strategies are much more powerful than curriculum learning in improving the prediction performance on hard episodes. The aim of our paper is not to chase another state-of-the-art in meta-learning, but to perform a \ufb01ne-grained inspection of hard episodes across various meta-learning methods. In summary, we make the following contributions: \u2022 We present a detailed analysis of episode hardness in meta-learning across few-shot benchmarks and state-of-the-art meta-learners. 
In particular, we study various properties (e.g., semantic characteristics, forgetting) of episode hardness across different meta-learners and architectures. \u2022 We \ufb01nd strong connections between episode hardness and catastrophic forgetting in metalearning. While catastrophic forgetting can occur when meta-training with multiple datasets in sequence (Yap et al., 2020), we observe that forgetting events can occur even when the tasks during meta-training are drawn from a single dataset. In particular, we \ufb01nd that hard episodes are easy to forget, while easy episodes are dif\ufb01cult to forget. \u2022 Based on our analysis, we investigate and benchmark different adversarial training and curriculum training strategies to augment general purpose meta-training for improving prediction performance on hard episodes. Empirically, we \ufb01nd that although there is no one-size\ufb01ts-all solution, adversarial meta-training strategies are more powerful when compared to curriculum learning strategies. 2 BACKGROUND AND RELATED WORK Meta-learning aims to learn an underlying model that can generalize and adapt well to examples from unseen classes by the process of learning to learn. This is primarily achieved by mimicking the evaluation and adaptation procedure during meta-training. In general, there are three types of metalearners: (a) Memory-based methods (Ravi & Larochelle, 2017; Munkhdalai et al., 2018; Santoro et al., 2016) adapt to novel classes with a memory attached to the meta-learner; (b) Metric-learning based methods (Snell et al., 2017; Sung et al., 2017) aim to learn transferable deep representations which can adapt to unseen classes without any additional \ufb01ne-tuning; (c) Optimization based methods (Finn et al., 2017; Lee et al., 2019; Bertinetto et al., 2018) learn a good pre-training initialization for effective transfer to unseen tasks with only a few optimization steps. Although the primary focus of our work is meta-learning, we note that other few-shot learning paradigms such as transfer learning (Chen et al., 2021; Sun et al., 2019; Dhillon et al., 2020) have also shown competitive performance with meta-learning. While there has been a signi\ufb01cant progress in improving the state-of-the-art in meta-learning, very few work investigates the effectiveness of existing meta-learning approaches on episodes of varying hardness. A recent and concurrent work by Arnold et al. (2021) discusses episode dif\ufb01culty and the impact of random episodic sampling during meta-training. Based on their analysis, Arnold et al. (2021) propose a re-weighted optimization framework for meta-training based on importance sampling. Although our paper and Arnold et al. (2021) tackle similar problems of episodic hardness, there are several points which distinguishes our work: 2 \fPreprint Figure 1: Accuracy (y-axis) of existing meta-learners on the hardest and the easiest episode across standard few-shot datasets and meta-learners (x-axis). Note that there is a wide gap of \u224850% between the prediction performance on the easiest and hardest episode. \u2022 We provide a much more \ufb01ne-grained analysis of episode hardness than Arnold et al. (2021). Arnold et al. (2021) primarily discuss the transferability of episodes across different meta-learners, while we \ufb01nd and investigate a strong connection between episode hardness and catastrophic forgetting. \u2022 Arnold et al. (2021) propose a loss re-weighting framework for improving the average accuracy across episodes. 
In contrary, we investigate the effectiveness of adversarial training (Gong et al., 2020) and general curriculum learning techniques in improving the average as well as worst-case prediction performance in meta-learning. Adversarial meta-learning techniques have previously been used in conjunction with dataaugmentation (Ni et al., 2021) to select the augmentation type resulting in the worst-case loss among different augmentation techniques. In this paper, we focus on how such strategies can be useful in improving the prediction performance of the hard episodes in addition to the average accuracy. 3 RETHINKING EPISODIC ACCURACY Existing state-of-the-art meta-learners (Finn et al., 2017; Lee et al., 2019; Snell et al., 2017; Bertinetto et al., 2018) primarily focus on optimizing for the average loss across multiple training episodes or tasks. However, solely the average performance in isolation does not give enough insights into how meta-learners perform on episodes of varying quality and hardness. Such insights can be particularly crucial to investigate and debug meta-learning models deployed in the wild, where the model can encounter diverse test episodes. In this section, we go beyond the average accuracy across different test episodes and evaluate meta-learners on episodes of varying hardness. First, we discuss how to quantify the hardness of an episode and then discuss the performance of meta-learners on hard episodes. 3.1 WHAT IS A GOOD MEASURE OF EPISODE HARDNESS? Episodic sampling (i.e. sampling various few-shot tasks from a base dataset) in meta-learning takes place in two steps: (i) First the episode classes are sampled from the class distribution of the base classes : c \u223cp(Cbase); (ii) Next, an episode \u03c4 is sampled from the data distribution conditioned on the set of sampled classes c: \u03c4 \u223cp(D|c), where D is the base dataset. An episode \u03c4 consists of a set of support examples \u03c4s and query examples \u03c4q. In few-shot learning, a n-way, k-shot episode is sampled which results in sampling n classes and k support examples per class. Based on this, the meta-learning optimization objective can be generalized as the following: \u03b8\u2217= arg min \u03b8 E\u03c4[\u2113(F\u03b8\u2032 , \u03c4q)] (1) where F is the base architecture with \u03b8 as the model parameters and \u03b8 \u2032 = A(\u03b8, \u03c4s) is the \ufb01netuning step with the support examples. Different meta-learners have different types of \ufb01ne-tuning procedures and we direct the readers to (Finn et al., 2017; Snell et al., 2017; Bertinetto et al., 2018) 3 \fPreprint for more information on the characteristics of A. Based on this de\ufb01nition, we de\ufb01ne the hardness of an episode H(\u03c4) in terms of the loss incurred on the query examples in an episode: H(\u03c4) = \u2113(F\u03b8\u2217, \u03c4q) (2) We choose query loss as a metric for hardness because of its inherent simplicity in computation as well as interpretation. In addition, we \ufb01nd a strong negative correlation between the episodic loss and the accuracy (\u2248\u22120.92 for mini-ImageNet and \u2248\u22120.89 for tieredImageNet with prototypical networks). This is true for other meta-learners such as R2D2 too (See Appendix A for more details). Alternatively, hardness of an episode can also be de\ufb01ned as the average log-odds of the query example (Dhillon et al., 2020). 
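The hardness measure defined above is straightforward to compute in practice: adapt on the support set, record the query loss, and rank episodes by it. The sketch below assumes an `adapt` routine (e.g., prototype construction or a few gradient steps) that returns a classifier over query inputs; it is a generic placeholder for whichever meta-learner is being probed, not the authors' pipeline.

```python
import torch
import torch.nn.functional as F

def episode_hardness(model, adapt, episodes):
    """H(tau) = query loss after adapting on the support set.
    adapt(model, support) -> callable mapping query inputs to class logits."""
    losses = []
    for support, (query_x, query_y) in episodes:
        classifier = adapt(model, support)           # e.g. build prototypes or fine-tune a copy
        with torch.no_grad():
            losses.append(F.cross_entropy(classifier(query_x), query_y).item())
    order = sorted(range(len(episodes)), key=lambda i: losses[i])
    return losses, order[0], order[-1]               # per-episode losses, easiest index, hardest index

# The strong negative loss/accuracy correlation reported above can be checked with, e.g.,
# scipy.stats.pearsonr(losses, accuracies) over the same set of test episodes.
```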
3.2 PERFORMANCE OF META-LEARNERS ON HARD EPISODES To understand the effectiveness of meta-learners on episodes of varying hardness, we \ufb01rst order the test episodes in decreasing order of their hardness. Then, we evaluate different meta-learners on the easiest and the hardest test episode. Across all the few-shot benchmark datasets such as miniImageNet, CIFAR-FS and tieredImageNet, we \ufb01nd in Fig. (1) that there is a gap in accuracy of \u224850% between the episodes with the highest and the lowest loss. Furthermore, we \ufb01nd that the meta-learner and architecture which performs well on average, does not necessarily perform well on the hardest episode. For example, in the case of mini-ImageNet, prototypical networks with a stronger architecture such as ResNet performs better than the 4-layered convolutional architecture on an average, but not on the hard episodes. Moreover in Fig. (1), we notice that for tieredImageNet, prototypical networks with ResNet performs the best on easy episodes, while R2D2 with ResNet performs slightly better on the hard episodes. The wide gap in accuracy between hard and easy episodes can be magni\ufb01ed for meta-learners deployed in the wild, where the model might encounter episodes which are signi\ufb01cantly hard and diverse in nature. Going forward, we believe that metalearning studies should not only report average episodic accuracy, but also the prediction accuracy on easy and hard episodes, to present a complete picture of the effectiveness of the meta-learner. 4 VISUAL SEMANTICS OF HARD EPISODES Based on the observed disparity in performance between hard and easy episodes, a natural question arises: what causes certain episodes to incur high loss? While quantitatively, the hardness of an episode can be de\ufb01ned in terms of the loss incurred on the query examples, it does not provide salient insights into the qualitative nature of hard episodes. In general, we \ufb01nd that episodes incur high loss when there is a mismatch in the semantic characteristics between the support and query examples. For example, when a majority of the support examples have objects of only one category in the frame and the query examples have multiple objects of different categories surrounding the primary object of interest, the underlying meta-learner often leads to a wrong prediction. Furthermore, when the shape of the objects in the query examples is slightly different from the objects in the support examples, the prediction is often erroneous. Figure 2: Semantic Properties of Hard and Easy Episodes: (a) Hard episode, Class: electric-guitar; (b) Easy episode, Class: mixing-bowl; The images marked in red borders are misclassi\ufb01ed query examples. In Fig. (2)-(a), we notice that the query examples that have different objects (e.g., humans) along with the primary object (i.e., guitar) are often misclassi\ufb01ed. In Fig. (2)-(b), where most of the query examples are classi\ufb01ed correctly, we \ufb01nd that the misclassi\ufb01ed examples are of two types: (i) query images in which the primary object is occluded with a secondary object; (ii) the shape of the 4 \fPreprint Figure 3: Accuracy of easy and hard episodes (y-axis) during the course of meta-training across different epochs (x-axis); Hard episodes often have a \ufb01nal accuracy less than the maximum accuracy reached during meta-training. object in the query example is different from the object shapes in the support examples. We provide additional examples of hard episodes in Appendix E. 
5 HARD EPISODES SUFFER FROM FORGETTING In supervised learning, catastrophic forgetting is prevalent when tasks from different distributions are learned sequentially (Kirkpatrick et al., 2016). In such cases, old tasks are forgotten as the model encounters and learns from new tasks. Toneva et al. (2019) has shown that certain examples can be forgotten with high frequency during the course of supervised training even when the samples are drawn from a single dataset. In meta-learning, Yap et al. (2020) has shown that in meta-training with tasks from multiple task distributions sequentially, tasks from the old distribution in the sequence can be forgotten as the meta-learner encounters new tasks. However, we observe that even in the case of meta-training with tasks drawn from a single task distribution, certain types of tasks (episodes) can be forgotten during the course of training. In particular, we analyze the connection between the hardness of episodes and catastrophic forgetting in meta-learning. We track the behaviour of easy and hard episodes during meta-training and in summary \ufb01nd that: (i) For hard episodes, we notice that the \ufb01nal accuracy at the end of the training drops signi\ufb01cantly (\u224815% in some cases) from the maximum accuracy obtained during the course of meta-training. (ii) Hard episodes have more number of forgetting events in comparison to the easy episodes during the course of meta-training. This behaviour is more pronounced in the later stages of meta-training where the accuracies of the easy episodes have already stabilized. 5.1 DEFINING FORGETTING EVENTS IN META LEARNING During the course of meta-training, the set of sampled tasks are different in each epoch. In order to track forgetting events during meta-training, we \ufb01rst randomly select a set of k episodes (E = {\u03c4}k i=1) and track their accuracy, throughout the course of meta-training. In our experiments, we set k = 160 . We primarily de\ufb01ne two types of forgetting events: (i) Local forgetting event; (ii) Global forgetting event. Global forgetting events. For a given episode, a global forgetting event is encountered if the accuracy of the episode at the end of meta-training is less than the maximum accuracy reached during the course of training by a particular threshold. Formally, given an episode \u03c4 with the maximum accuracy accmax(\u03c4) = maxj accj(\u03c4), a global forgetting event occurs if accmax(\u03c4) \u2265accend(\u03c4) + \u03b1, where \u03b1 is a threshold and accend(\u03c4) is the accuracy at the end of meta-training. Note that for each episode, a global forgetting event can occur only once. Local forgetting events. For an episode \u03c4 in the jth epoch of meta-training, a local forgetting event is encountered if the accuracy of the episode at the jth epoch (accj(\u03c4)) is less than the accuracy at the (j \u22121)th epoch (accj\u22121(\u03c4)) by a particular threshold, denoted by \u03b1. Formally, a local forgotten 5 \fPreprint event is encountered if accj(\u03c4) + \u03b1 \u2264accj\u22121(\u03c4). Empirically, we study local forgetting events for 0.03 \u2264\u03b1 \u22640.15. 5.2 FORGETTING EVENTS AND HARD EPISODES Global forgetting events. In Fig. (3), we track the accuracy of the hardest and easiest episode from each of the few-shot datasets during the entire course of meta-training across different epochs. Visually, we observe that for the hard episode, the accuracy decreases after a certain point during the course of meta-training. 
5.2 FORGETTING EVENTS AND HARD EPISODES

Global forgetting events. In Fig. (3), we track the accuracy of the hardest and easiest episode from each of the few-shot datasets over the entire course of meta-training. Visually, we observe that for the hard episode the accuracy decreases after a certain point during meta-training, whereas for the easy episode this is not the case and the accuracy increases until the end of meta-training.

Figure 3: Accuracy of easy and hard episodes (y-axis) during the course of meta-training across different epochs (x-axis); hard episodes often have a final accuracy less than the maximum accuracy reached during meta-training.

To draw more insights, we compute the global forgetting behaviour of different episodes. We first choose the 15 hardest and 15 easiest episodes from CIFAR-FS, mini-ImageNet and tieredImageNet, and compute their final episodic accuracy and the maximum episodic accuracy reached during meta-training. In Fig. (4), we find that for the hard episodes, the gap between the final accuracy and the maximum accuracy reached during meta-training is significantly larger than for the easy episodes. The gap is particularly large for mini-ImageNet and tieredImageNet, while for CIFAR-FS it is relatively narrow: for mini-ImageNet the gap can be ≈15%, for tieredImageNet ≈10%, and for CIFAR-FS ≈6%. This shows that hard episodes are globally forgotten in comparison to easier episodes over the entire course of meta-training. We provide further results on global forgetting events in Appendix C.

Figure 4: Hard episodes have a wider gap between the final accuracy (y-axis) and the maximum accuracy reached during meta-training (x-axis), in comparison to easy episodes. This behaviour is more pronounced for mini-ImageNet and tieredImageNet.

Local forgetting events. In order to compare the frequency of local forgetting events for easy and hard episodes across the three few-shot datasets, we first choose 15 easy and 15 hard episodes from each dataset. Across this set of episodes, we then count the number of local forgetting events for various thresholds. In general, across the entire duration of meta-training, we find that hard episodes have more local forgetting events than easy episodes. For instance, in Fig. (5)-(a), we observe a substantial gap in the number of forgetting events encountered for easy and hard episodes over the entire course of meta-training. To gain more insight into this gap, we examine how it behaves in the first 20 epochs of meta-training (Fig. (5)-(b)) and in the last 20 epochs of meta-training (Fig. (5)-(c)). In particular, we find that the gap is narrow during the initial stages of meta-training, whereas it widens substantially during the later stages.

To summarize, we find that forgetting occurs in meta-learning even when the tasks are drawn from a single task distribution. Furthermore, we find a strong connection between episode hardness and forgetting: hard episodes are more easily forgotten than easy episodes, both locally and globally. In the next section, we investigate two meta-training strategies to improve few-shot performance on hard episodes.

Figure 5: (a) Total number of local forgetting events (y-axis) across different thresholds (x-axis) during the course of meta-training; (b) total number of local forgetting events during the first 20 epochs of meta-training; (c) total number of local forgetting events during the last 20 epochs of meta-training. The number of local forgetting events is higher for hard episodes than for easy episodes across different thresholds.
6 IMPROVING PERFORMANCE ON HARD EPISODES

In this section, we investigate and benchmark two meta-training strategies, based on adversarial training and curriculum learning, to improve prediction performance on hard episodes. Recent work (Gong et al., 2020; Ni et al., 2021) uses adversarial training to select, from a wide pool of diverse data augmentation methods, the episode-specific augmentations resulting in the highest loss. This loss is then optimized with respect to the model parameters during training. Such augmentation selection strategies have been shown to mitigate over-fitting and improve generalization. In our work, we study how such training strategies can be used to select hard episodes and improve prediction performance on them. In particular, we investigate two variants of adversarial training, as first proposed in Gong et al. (2020): (i) general adversarial training (AT); (ii) adversarial curriculum training (ACT).

6.1 GENERAL ADVERSARIAL TRAINING

We adopt the adversarial training procedure from (Gong et al., 2020) to first select episodes with a high loss and optimize the meta-learner only with respect to the loss incurred by such hard episodes. This involves solving a saddle-point optimization problem, in which the underlying loss is minimized with respect to the model parameters and maximized with respect to the input. During each update of the meta-learner, we first draw a batch of episodes, each containing support and query examples. For each element of the batch, we additionally sample a number of similar episodes and select the episode with the highest loss. The model parameters are then updated with respect to the gradient of the loss incurred by the selected hard episode. Formally, we solve the following min-max optimization:

$$\min_{\theta} \; \mathbb{E}_{\tau}\Big[\max_{t \in g(\tau)} \ell(F_{\theta'}, t_q)\Big] \qquad (3)$$

where θ' = A(θ, t_s) is the fine-tuning step on the support examples of the selected highest-loss episode, g(τ) is an operator that samples additional candidate episodes for each task τ in the batch, F is the base learner and ℓ is the loss function. In our experiments, g(τ) samples four additional episodes per sampled episode τ. We provide a more detailed description of the hyper-parameters in Appendix B.1.
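A minimal PyTorch-style sketch of one AT update implementing the inner max of Eq. (3). The learner interface (`adapt`, `query_loss`) and the task sampler are assumptions made here for illustration; since g(τ) samples four additional episodes per task, the candidate pool below has size five.

```python
import torch

def hardest_episode(learner, candidates):
    """Inner max of Eq. (3): among the candidate pool g(tau), pick the episode with the largest query loss."""
    with torch.no_grad():                                        # selection step only; no gradients needed here
        losses = [learner.adapt(ep.support_x, ep.support_y)
                         .query_loss(ep.query_x, ep.query_y) for ep in candidates]
    return candidates[int(torch.stack(losses).argmax())]

def at_update(learner, optimizer, task_sampler, batch_size=4, pool_size=5):
    """One adversarial-training (AT) step: minimize the query loss of the selected hard episodes."""
    optimizer.zero_grad()
    for _ in range(batch_size):
        pool = [task_sampler.sample() for _ in range(pool_size)]  # tau plus the extra episodes from g(tau)
        ep = hardest_episode(learner, pool)
        adapted = learner.adapt(ep.support_x, ep.support_y)       # theta' = A(theta, t_s)
        loss = adapted.query_loss(ep.query_x, ep.query_y) / batch_size
        loss.backward()
    optimizer.step()
```

The curriculum variant (ACT) described in the next subsection keeps the same selection machinery but picks the easiest candidate (argmin) during the first half of meta-training, as formalized in Eq. (4) below.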
6.2 ADVERSARIAL CURRICULUM TRAINING

Curriculum learning (Hacohen & Weinshall, 2019; Bengio et al., 2009) aims to mimic the learning process of humans and animals, who learn new tasks in a well-defined order: easy tasks are learned first, followed gradually by more complex tasks. Inspired by this, we modify Eq. (3) and investigate a curriculum meta-training strategy. Specifically, during the initial phase of meta-training we select only easy episodes to learn from, while in the later stages of training harder episodes are selected. Formally, if the underlying model is meta-trained for |e| epochs, then for the first |e|/2 epochs the following loss is optimized:

$$\min_{\theta} \; \mathbb{E}_{\tau}\Big[\min_{t \in g(\tau)} \ell(F_{\theta'}, t_q)\Big] \qquad (4)$$

and for the last |e|/2 epochs of meta-training, the general min-max adversarial loss described in Eq. (3) is used.

Table 1: Average episodic performance of general adversarial training (AT) and adversarial curriculum training (ACT) across different meta-learners.

Method               | CIFAR-FS 1-shot | CIFAR-FS 5-shot | mini-ImageNet 1-shot | mini-ImageNet 5-shot | tieredImageNet 1-shot | tieredImageNet 5-shot
Conv-ProtoNet        | 62.6  | 80.9  | 52.3  | 70.4  | 52.6  | 71.5
  + AT               | 63.3  | 80.8  | 52.7  | 70.8  | 52.7  | 71.9
  + ACT              | 63.1  | 80.0  | 52.4  | 70.3  | 52.9  | 71.5
ResNet12-ProtoNet    | 71.0  | 84.0  | 59.0  | 74.8  | 61.8  | 80.0
  + AT               | 71.2  | 83.9  | 58.9  | 74.9  | 62.9  | 80.4
  + ACT              | 69.3  | 82.6  | 56.0  | 72.8  | 63.4  | 79.9
Conv-R2D2            | 67.6  | 82.6  | 55.3  | 72.4  | 56.8  | 75.00
  + AT               | 68.0  | 82.8  | 55.7  | 72.4  | 57.2  | 75.00
  + ACT              | 68.3  | 82.6  | 55.4  | 71.3  | 57.8  | 74.9
ResNet12-R2D2        | 70.0  | 84.8  | 58.6  | 75.5  | 62.8  | 80.4
  + AT               | 71.0  | 84.9  | 58.63 | 76.0  | 63.7  | 81.3
  + ACT              | 69.4  | 83.2  | 56.3  | 74.4  | 63.5  | 80.9
ResNet12-MetaOptNet  | 70.8  | 84.0  | 60.1  | 77.4  | 62.9  | 80.7
  + AT               | 71.2  | 84.6  | 60.9  | 78.2  | 63.3  | 80.9
  + ACT              | 70.2  | 83.9  | 59.8  | 77.1  | 62.1  | 79.8

7 EXPERIMENTS

7.1 EXPERIMENTAL SETUP

Datasets. We use three standard few-shot classification datasets for our experiments: (i) CIFAR-FS (Bertinetto et al., 2018); (ii) mini-ImageNet (Vinyals et al., 2016); and (iii) tieredImageNet (Ren et al., 2018). CIFAR-FS is sampled from CIFAR-100, whereas mini-ImageNet and tieredImageNet are subsets of ImageNet (Deng et al., 2009). We use the class splits from (Lee et al., 2019) for all three datasets. Note that tieredImageNet is a more challenging dataset than mini-ImageNet, as its splits are constructed from near the root of the ImageNet hierarchy. We provide more details on the datasets in Appendix B.2.

Architectures. We primarily use two standard architectures for our experiments: (i) the 4-layer convolutional network introduced in (Vinyals et al., 2017); (ii) ResNet-12 (He et al., 2015), which is used by (Oreshkin et al., 2018) in the few-shot setting. Both architectures use batch normalization after every convolutional layer and ReLU as the activation function. Similar architectures for few-shot learning have previously been used in (Lee et al., 2019; Arnold et al., 2021).

Meta-learners. We use prototypical networks (Snell et al., 2017) from the metric-learning family of few-shot algorithms. In addition, we use MetaOptNet (Lee et al., 2019) and R2D2 (Bertinetto et al., 2018) as representative algorithms from the optimization-based meta-learning methods.

Model Selection. We use the validation set of each dataset to select the best model. We run the validation procedure every 1k iterations on 2k episodes from the validation set to select the best model. Finally, we evaluate on 1k test episodes from the test set of each dataset.
7.2 DISCUSSION

7.2.1 ADVERSARIAL TRAINING IMPROVES PERFORMANCE ON HARD EPISODES

Across different meta-learners (ProtoNets, R2D2, MetaOptNet) and few-shot datasets (CIFAR-FS, mini-ImageNet, tieredImageNet), we find that the adversarial training (AT) strategy works well in general, improving both the average episodic performance and the episodic performance on hard episodes. The comprehensive results for the average performance of the meta-learners are presented in Table (1), while the results on hard episodes are presented in Table (2). In particular, we find that adversarial meta-training strategies never hurt the average episodic performance and improve over the baseline in a majority of our experimental settings. The adversarial training strategy (AT), however, leads to a large gain over the baseline meta-training strategy on hard episodes. Specifically, the improvements are more significant in the 1-shot case than in the 5-shot case. For example, in the 1-shot case, we observe a 5% gain for CIFAR-FS with R2D2 and a 3% gain for mini-ImageNet with prototypical networks. For tieredImageNet, we observe ≈2% improvement in episodic performance on hard episodes with prototypical networks.

Table 2: Performance of general adversarial training (AT) and adversarial curriculum training (ACT) across different meta-learners on hard episodes. We report the mean accuracy over the 30 hardest episodes for each meta-learner.

Method               | CIFAR-FS 1-shot | CIFAR-FS 5-shot | mini-ImageNet 1-shot | mini-ImageNet 5-shot | tieredImageNet 1-shot | tieredImageNet 5-shot
Conv-ProtoNet        | 28.8  | 63.0  | 20.2  | 53.2  | 22.2  | 52.3
  + AT               | 29.3  | 62.9  | 19.3  | 53.5  | 24.6  | 52.7
  + ACT              | 28.5  | 61.3  | 19.4  | 52.7  | 22.1  | 53.1
ResNet12-ProtoNet    | 36.0  | 66.7  | 26.2  | 56.5  | 29.8  | 60.4
  + AT               | 33.7  | 66.7  | 29.2  | 58.3  | 31.2  | 60.9
  + ACT              | 29.46 | 64.9  | 24.5  | 54.7  | 29.6  | 60.4
Conv-R2D2            | 32.2  | 65.3  | 21.4  | 55.7  | 27.0  | 56.6
  + AT               | 35.3  | 65.7  | 24.0  | 55.6  | 27.6  | 56.6
  + ACT              | 32.8  | 65.5  | 23.7  | 54.5  | 27.8  | 56.7
ResNet12-R2D2        | 30.6  | 68.0  | 28.0  | 59.0  | 34.0  | 61.5
  + AT               | 35.6  | 68.0  | 28.5  | 60.1  | 33.8  | 62.1
  + ACT              | 31.8  | 65.9  | 24.4  | 56.8  | 30.8  | 62.0
ResNet12-MetaOptNet  | 37.2  | 69.2  | 29.5  | 61.1  | 30.4  | 61.5
  + AT               | 37.9  | 70.0  | 31.2  | 62.3  | 31.5  | 61.4
  + ACT              | 37.1  | 69.1  | 29.2  | 60.5  | 30.2  | 60.9

7.2.2 ADVERSARIAL TRAINING IS BETTER THAN CURRICULUM TRAINING

Although curriculum training leads to better generalization in supervised learning (Hacohen & Weinshall, 2019; Bengio et al., 2009), we find that in meta-learning the adversarial curriculum strategy (ACT) generally performs worse than both the baseline and general adversarial training (AT) in a majority of our experimental settings. Our observation on curriculum training for meta-learning is consistent with the recent work of (Arnold et al., 2021), which shows that curriculum meta-training strategies underperform significantly when compared to baseline meta-training. Although the curriculum formulation in (Arnold et al., 2021) differs from ours, both methods present easy episodes to the meta-learner first, followed by hard episodes. While we present a negative result on curriculum meta-training, we believe this observation can serve as a starting point for developing more advanced and improved curriculum meta-training strategies in the future. In summary, we find that although there is no one-size-fits-all solution for improving performance on hard episodes, adversarial meta-training strategies perform better than the baseline and curriculum learning.
8" + }, + { + "url": "http://arxiv.org/abs/2109.08754v1", + "title": "Semi-Supervised Few-Shot Intent Classification and Slot Filling", + "abstract": "Intent classification (IC) and slot filling (SF) are two fundamental tasks in\nmodern Natural Language Understanding (NLU) systems. Collecting and annotating\nlarge amounts of data to train deep learning models for such systems is not\nscalable. This problem can be addressed by learning from few examples using\nfast supervised meta-learning techniques such as prototypical networks. In this\nwork, we systematically investigate how contrastive learning and unsupervised\ndata augmentation methods can benefit these existing supervised meta-learning\npipelines for jointly modelled IC/SF tasks. Through extensive experiments\nacross standard IC/SF benchmarks (SNIPS and ATIS), we show that our proposed\nsemi-supervised approaches outperform standard supervised meta-learning\nmethods: contrastive losses in conjunction with prototypical networks\nconsistently outperform the existing state-of-the-art for both IC and SF tasks,\nwhile data augmentation strategies primarily improve few-shot IC by a\nsignificant margin.", + "authors": "Samyadeep Basu, Karine lp Kiun Chong, Amr Sharaf, Alex Fischer, Vishal Rohra, Michael Amoake, Hazem El-Hammamy, Ehi Nosakhare, Vijay Ramani, Benjamin Han", + "published": "2021-09-17", + "updated": "2021-09-17", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL" + ], + "main_content": "Introduction We study the problem of few-shot Intent Classi\ufb01cation (IC) and Slot Filling (SF). In the few-shot learning setting, the learner has to learn given only a handful of training examples. We propose a semisupervised approach for solving this problem based on augmenting supervised meta-learning with unsupervised data augmentation and contrastive learning. We systematically investigate how different data augmentation and contrastive learning strategies improve IC/SF performance, and show that our semi-supervised approach outperforms state-ofthe-art models for few-shot IC/SF. Given the user utterance: \u201cBook me a table for 6 at Lebanese Taverna\u201d, an IC model identi\ufb01es \u201cRestaurant Booking\u201d as the intent of interest, and an SF model identi\ufb01es the slot types and values: Party_Size:\"6\", Name: * First three authors contributed equally \"Lebanese Taverna\". These functionalities are typically driven by powerful deep learning models that rely on huge amounts of domain speci\ufb01c training data. As such labeled data is rarely available, building models that can learn from only a few examples per class is inevitable. Few-shot learning techniques (Krone et al., 2020; Ren and Xue, 2020) have been recently proposed to address the problem of generalizing to unseen classes in IC/SF when only a few training examples per class are available. Krone et al. (2020) utilized meta-learning approaches such as prototypical networks (Snell et al., 2017) and MAML (Finn et al., 2017) to jointly model IC/SF. They showed that prototypical networks outperform other prevalent meta-learning techniques such as MAML as well as \ufb01ne-tuning. In this paper, we extend this powerful supervised meta-learning technique with unsupervised contrastive learning and data augmentation. Rajendran et al. (2020) showed that metalearners can be particular prone to over\ufb01tting which can be partially alleviated by data augmentation (Liu et al., 2020). 
Data augmentation strategies in NLP have been shown to boost performance in general text classification settings (Wei and Zou, 2019b; Xie et al., 2019; Lee et al., 2021); however, there exists very little work on how data augmentation can be used effectively in the meta-learning pipeline for NLU tasks. To address this question, we first introduce a novel data augmentation strategy, slot-list values, for IC/SF tasks, which generates synthetic utterances using dictionary-based slot values. Additionally, we investigate how state-of-the-art augmentation strategies such as backtranslation (Xie et al., 2019) and perturbation-based augmentations such as EDA, Easy Data Augmentation (Wei and Zou, 2019b), can be used alongside prototypical networks.

We further investigate how contrastive learning (Chen et al., 2020) can be used as an additional regularizer during the meta-training stage to create meta-learners that generalize better. Contrastive learning is useful for creating improved prototypes, as it pulls similar representations together while pushing apart dissimilar ones. Through extensive experiments across SNIPS and ATIS, we show that meta-training with contrastive losses in conjunction with the general prototypical loss function improves IC/SF performance for unseen classes with few examples. Our contributions include:

- We demonstrate the effectiveness of contrastive losses as a regularizer in the meta-learning pipeline, by empirically showing how it improves few-shot IC/SF tasks across benchmark datasets such as SNIPS and ATIS.
- We illustrate the positive impact of data augmentation techniques such as backtranslation and EDA in improving few-shot IC tasks.

2 Proposed Approaches

We follow the few-shot learning setup for IC/SF described in (Krone et al., 2020), with a few modifications. Instead of using a frozen backbone such as BERT or ELMo with a BiLSTM head, we use a more powerful pre-trained RoBERTa encoder. Additionally, in contrast to (Krone et al., 2020), we update our encoder during the meta-training stage. For a given utterance x^i = {x^i_1, x^i_2, ..., x^i_n} with n tokens, we first use the RoBERTa model, denoted by f_φ, to encode the utterance, resulting in h^i = {h^i_0, h^i_1, ..., h^i_n}, where h^i_0 denotes the embedding of the special sequence-start token. We use this token embedding as the utterance-level embedding for intent classification. For slot filling, we use each of the token embeddings {h^i_j}_{j=1}^n of the i-th utterance. Given a support set S, where S_l consists of utterances belonging to the intent class c_l and S_a consists of tokens from the slot class c_a, we first compute the class prototypes for intents (c_l) and slots (c_a):

$$c_l = \frac{1}{|S_l|} \sum_{x^i \in S_l} f_{\phi}(x^i) \qquad (1)$$

$$c_a = \frac{1}{|S_a|} \sum_{x^i_j \in S_a} f_{\phi}(x^i_j) \quad \forall x^i \in S \qquad (2)$$

Given a query example z and a distance function d, a distribution over the different classes is computed as the softmax of the negative distances to the class prototypes. Specifically, we denote the intent-specific log-likelihood loss as:

$$L_{IC}(\phi, z) = -\log \frac{\exp(-d(f_{\phi}(z), c_l))}{\sum_{l'} \exp(-d(f_{\phi}(z), c_{l'}))} \qquad (3)$$

We use Euclidean distance as the standard distance function. Similarly, we define the slot-specific loss as L_Slots(φ, z). For a given query set Q, the cumulative loss for intents and slots is the log-likelihood averaged across all query samples, denoted by L_Total(φ):

$$L_{Total}(\phi) = \sum_{z \in Q} \frac{1}{|Q|} \big\{ L_{IC}(\phi, z) + L_{Slots}(\phi, z) \big\} \qquad (4)$$
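A minimal PyTorch sketch of Eqs. (1)–(4): prototypes are class-wise means of support embeddings, and the episode loss is the negative log-likelihood of a softmax over negative Euclidean distances. Tensor names are illustrative assumptions; the slot loss is obtained by calling the same functions on token-level embeddings and slot labels.

```python
import torch
import torch.nn.functional as F

def class_prototypes(support_emb, support_labels):
    """Eqs. (1)-(2): one prototype per class, the mean embedding of its support examples (or tokens)."""
    classes = support_labels.unique()
    protos = torch.stack([support_emb[support_labels == c].mean(dim=0) for c in classes])
    return protos, classes

def prototypical_loss(query_emb, query_labels, protos, proto_classes):
    """Eq. (3), averaged over the query set as in Eq. (4): softmax over negative Euclidean distances."""
    dists = torch.cdist(query_emb, protos)                          # (num_query, num_classes)
    log_p = F.log_softmax(-dists, dim=1)
    targets = torch.stack([(proto_classes == y).nonzero(as_tuple=True)[0][0]
                           for y in query_labels])                  # map labels to prototype indices
    return F.nll_loss(log_p, targets)

# Episode loss of Eq. (4), with illustrative tensor names:
# utter_protos, utter_classes = class_prototypes(support_utterance_emb, support_intents)
# slot_protos, slot_classes = class_prototypes(support_token_emb, support_slot_labels)
# loss = prototypical_loss(query_utterance_emb, query_intents, utter_protos, utter_classes) \
#      + prototypical_loss(query_token_emb, query_slot_labels, slot_protos, slot_classes)
```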
2.1 Contrastive Learning

The general idea of contrastive learning (Chen et al., 2020) is to pull together the representations of similar samples while pushing apart the representations of dissimilar samples in an embedding space. In our work, we incorporate a supervised contrastive loss as an additional regularizer alongside the prototypical loss in Eq. (4). In particular, we identify the places in the meta-training pipeline where incorporating the contrastive loss is most beneficial for good generalization to few-shot classes. We devise two types of contrastive losses for the IC/SF tasks: (a) a contrastive loss for intents, L_contrastiveIC(φ), which uses the utterance-level token embedding; and (b) a contrastive loss for slots, L_contrastiveSF(φ), which uses the individual token embeddings. The regularized prototypical loss is:

$$L_{Total}(\phi) = \sum_{z \in Q} \frac{1}{|Q|} \big\{ L_{IC}(\phi, z) + L_{Slots}(\phi, z) \big\} + \lambda_1 L_{contrastiveIC}(\phi) + \lambda_2 L_{contrastiveSF}(\phi) \qquad (5)$$

We provide more details about the two contrastive losses in the Appendix.

2.2 Data Augmentation for Few-shot IC/SF

Prior works in computer vision (Liu et al., 2020; Ni et al., 2020) have shown that data augmentation is very effective in meta-learning. In this section, we use various data augmentation strategies to improve the meta-learning pipeline for IC/SF tasks. Data augmentation for joint IC/SF tasks in NLU is particularly challenging, as augmentation is primarily possible at the level of intents. For intent-level data augmentation, we use state-of-the-art techniques such as backtranslation (Xie et al., 2019) and EDA (Wei and Zou, 2019b) along with prototypical networks.

Table 1: Few-shot classification accuracy with contrastive learning (CL) for prototypical networks. For CL (IC), only L_contrastiveIC is used, whereas for CL (IC + SF), both L_contrastiveIC and L_contrastiveSF are used.

Level                                                | SNIPS (Kmax=20) IC Acc | SNIPS (Kmax=20) Slot F1 | ATIS (Kmax=20) IC Acc | ATIS (Kmax=20) Slot F1 | SNIPS (Kmax=100) IC Acc | SNIPS (Kmax=100) Slot F1 | ATIS (Kmax=100) IC Acc | ATIS (Kmax=100) Slot F1
Krone et al. (2020)                                  | 0.877 ± 0.01  | 0.597 ± 0.017 | 0.660 ± 0.02  | 0.340 ± 0.004 | 0.877 ± 0.01  | 0.621 ± 0.007 | 0.719 ± 0.01  | 0.412 ± 0.02
Baseline (Ours)                                      | 0.887 ± 0.06  | 0.597 ± 0.04  | 0.737 ± 0.06  | 0.74 ± 0.01   | 0.907 ± 0.05  | 0.593 ± 0.04  | 0.80 ± 0.04   | 0.70 ± 0.02
CL (IC), Support (m-train)                           | 0.905 ± 0.05  | 0.594 ± 0.04  | 0.75 ± 0.07   | 0.748 ± 0.02  | 0.912 ± 0.03  | 0.594 ± 0.04  | 0.802 ± 0.06  | 0.70 ± 0.02
CL (IC), Support+Query (m-train)                     | 0.908 ± 0.06  | 0.596 ± 0.04  | 0.76 ± 0.04   | 0.748 ± 0.02  | 0.93 ± 0.05   | 0.60 ± 0.03   | 0.829 ± 0.06  | 0.703 ± 0.03
CL (IC + SF), Support (m-train)                      | 0.903 ± 0.06  | 0.60 ± 0.04   | 0.757 ± 0.04  | 0.755 ± 0.02  | 0.92 ± 0.01   | 0.60 ± 0.04   | 0.826 ± 0.05  | 0.70 ± 0.03
CL (IC + SF), Support+Query (m-train)                | 0.91 ± 0.04   | 0.60 ± 0.03   | 0.75 ± 0.07   | 0.756 ± 0.02  | 0.93 ± 0.03   | 0.60 ± 0.04   | 0.833 ± 0.05  | 0.71 ± 0.02
CL (IC + SF), DA (Slot list), Support+Query (m-train)| 0.921 ± 0.037 | 0.619 ± 0.037 | 0.803 ± 0.069 | 0.748 ± 0.019 | 0.923 ± 0.055 | 0.619 ± 0.035 | 0.821 ± 0.08  | 0.73 ± 0.02
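The supervised contrastive regularizers in Eq. (5) can be sketched as follows; a standard supervised contrastive formulation is assumed here, since the exact variant is described in the paper's appendix. The same function is applied to utterance embeddings with intent labels (L_contrastiveIC) and to token embeddings with slot labels (L_contrastiveSF).

```python
import torch
import torch.nn.functional as F

def supervised_contrastive(embeddings, labels, temperature=0.1):
    """Pull together same-class embeddings and push apart different-class ones."""
    z = F.normalize(embeddings, dim=1)
    sim = (z @ z.t()) / temperature                              # pairwise cosine similarities
    self_mask = torch.eye(len(z), dtype=torch.bool, device=z.device)
    sim = sim.masked_fill(self_mask, float('-inf'))              # exclude self-pairs
    log_prob = sim - torch.logsumexp(sim, dim=1, keepdim=True)
    pos_mask = (labels.unsqueeze(0) == labels.unsqueeze(1)) & ~self_mask
    pos_counts = pos_mask.sum(dim=1).clamp(min=1)
    per_anchor = -log_prob.masked_fill(~pos_mask, 0.0).sum(dim=1) / pos_counts
    return per_anchor[pos_mask.any(dim=1)].mean()                # ignore anchors with no positives

# Regularized loss of Eq. (5), with lambda1 and lambda2 as tunable hyper-parameters:
# total = proto_ic_loss + proto_slot_loss \
#         + lambda1 * supervised_contrastive(utterance_emb, intent_labels) \
#         + lambda2 * supervised_contrastive(token_emb, slot_labels)
```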
We also introduce a novel data augmentation technique, slot-list values, which effectively leverages the structure of joint IC/SF tasks. In particular, we investigate the effectiveness of these data augmentation techniques at different levels of the meta-learning pipeline: (a) support at meta-training; (b) support + query at meta-training; (c) support at meta-testing; (d) combinations of these. We provide details about these augmentation methods below.

2.2.1 Slot-List Values Augmentation

In IC/SF datasets, certain slot types can often take values specified in a finite list. For example, in the SNIPS dataset the slot type facility can take values from the list ["smoking room", "spa", "indoor", "outdoor", "pool", "internet", "parking", "wifi"]. Specific to the discrete slot filling task, (Shah et al., 2019) used such values to learn an additional attention module for improving SF. Such lists can be created from the training dataset and used for data augmentation. We leverage these lists to create synthetic utterances by replacing the values of slot types in a given utterance with other values from the list: e.g., given the utterance "Book a table at a pool bar", we synthesize another utterance "Book a table at a indoor bar".

2.2.2 Augmentation by Backtranslation

Backtranslation translates an utterance into an intermediate language and back to its original language using a neural machine translation model. Previous work (Edunov et al., 2018; Yu et al., 2018; Sennrich et al., 2015) showed that backtranslation is extremely effective as a data augmentation technique for NLP applications. In our paper, we use a pre-trained en-es NMT model (Junczys-Dowmunt et al., 2018) to generate the augmented utterances. To ensure that the generated utterances are diverse, we follow the procedure in (Xie et al., 2019): we employ restricted sampling from the model's output probability distribution instead of beam search.

2.2.3 EDA Data Augmentation

Adding small perturbations to the training data via random insertion, deletion, swapping and synonym replacement is a simple way to generate synthetic data for augmentation. Previous work by (Wei and Zou, 2019a) showed that this EDA technique achieves state-of-the-art results on various text-classification tasks. In our work, we use EDA to generate synthetic data at different stages of meta-learning.

3 Experiments

Datasets: We use two well-known standard benchmarks for IC/SF tasks: SNIPS (Coucke et al., 2018) and ATIS (Hemphill et al., 1990). In general, SNIPS is the more challenging dataset, as it contains intents from diverse domains. The ATIS dataset, although imbalanced, contains intents only from the airline domain.

Episode Construction: We follow the standard episode construction technique described in (Krone et al., 2020; Triantafillou et al., 2020), where the number of classes and the shots per class in each episode are sampled dynamically. Triantafillou et al. (2020) showed that this dynamic sampling procedure helps in dealing with the intent class imbalance present in ATIS.

Few-shot Splits: For the SNIPS dataset, we use 4 intent classes for meta-training and 3 intent classes for meta-testing. Similar to (Krone et al., 2020), we do not form a development split for SNIPS, as there are only 7 intent classes and the episode construction process requires at least 3 classes in each split.
For the ATIS dataset, we first select intent classes with more than 15 examples, then use 5 intent classes for meta-training and 7 intent classes for meta-testing. The remaining classes are used as a development split. In (Krone et al., 2020), the intent classes for each split are chosen manually. This is not representative of realistic situations, where the types of few-shot classes can vary considerably. To address this issue, we report our experimental results averaged over 5 seeds, where in each run the intent classes for each split are randomly sampled. In each experimental run, we evaluate our results on 100 episodes sampled from the test split. We refer to our re-implementation of (Krone et al., 2020) with this strategy as Baseline.

Table 2: Few-shot IC accuracy with Data Augmentation (DA) for prototypical networks; m-train refers to meta-training and m-test refers to meta-testing.

Level                                          | SNIPS (Kmax=20) IC Acc | ATIS (Kmax=20) IC Acc | SNIPS (Kmax=100) IC Acc | ATIS (Kmax=100) IC Acc
(Krone et al., 2020)                           | 0.877 ± 0.01  | 0.660 ± 0.02  | 0.877 ± 0.01  | 0.719 ± 0.01
Baseline (Ours)                                | 0.887 ± 0.06  | 0.737 ± 0.06  | 0.907 ± 0.05  | 0.80 ± 0.04
DA (Slot-list) Support (m-train)               | 0.898 ± 0.061 | 0.735 ± 0.052 | 0.916 ± 0.055 | 0.810 ± 0.052
DA (Slot-list) Support, Query (m-train)        | 0.919 ± 0.062 | 0.800 ± 0.054 | 0.917 ± 0.051 | 0.806 ± 0.066
DA (Slot-list) Support (m-train, m-test)       | 0.905 ± 0.062 | 0.772 ± 0.044 | 0.922 ± 0.051 | 0.818 ± 0.056
DA (Slot-list) Support (m-test)                | 0.926 ± 0.038 | 0.764 ± 0.073 | 0.931 ± 0.037 | 0.840 ± 0.047
DA (Backtranslation) Support (m-train)         | 0.885 ± 0.03  | 0.77 ± 0.06   | 0.928 ± 0.029 | 0.79 ± 0.06
DA (Backtranslation) Support (m-train, m-test) | 0.881 ± 0.03  | 0.79 ± 0.05   | 0.931 ± 0.030 | 0.795 ± 0.06
DA (Backtranslation) Support (m-test)          | 0.895 ± 0.036 | 0.71 ± 0.06   | 0.899 ± 0.06  | 0.77 ± 0.14
DA (EDA) Support (m-train)                     | 0.893 ± 0.062 | 0.787 ± 0.07  | 0.911 ± 0.04  | 0.805 ± 0.08
DA (EDA) Support (m-train, m-test)             | 0.893 ± 0.047 | 0.761 ± 0.08  | 0.915 ± 0.04  | 0.808 ± 0.10
DA (EDA) Support (m-test)                      | 0.892 ± 0.047 | 0.731 ± 0.06  | 0.915 ± 0.05  | 0.78 ± 0.059

Main Results: Table 1 shows the results of adding contrastive losses as a regularizer to our baseline. Overall, we observe that across both SNIPS and ATIS, using contrastive losses as a regularizer predominantly improves IC accuracy while marginally improving the SF F1 score. In particular, using contrastive losses as a regularizer on both the support and query sets during meta-training leads to the best performance. We also find that the combination of contrastive losses and data augmentation via slot-list values outperforms models trained with only contrastive losses or only data augmentation.

Table 2 shows the results of adding data augmentation to the few-shot IC tasks. We observe that these techniques in general significantly improve few-shot IC performance, depending on the level of the meta-learning pipeline at which the data is augmented. More specifically, for SNIPS we notice up to 4% and 2% gains in IC accuracy for Kmax = 20 and Kmax = 100, respectively. With EDA, we find that augmentation during meta-training and meta-testing together leads to a noteworthy gain in few-shot IC performance across both SNIPS and ATIS.
In comparison, backtranslation is effective in improving the few-shot IC performance for SNIPS, when the shots per class is higher such as in Kmax = 100. However for ATIS, we observe a signi\ufb01cant gain in IC only for Kmax = 20. Comparatively, our novel data augmentation approach slot-list values approach generally shows gain in IC at all levels and shots per class. For SF, we \ufb01nd that data augmentation leads to only limited improvements when compared to IC (see Appendix C for a detailed discussion). We hypothesize that this is because data augmentation occurs primarily for intents and does not provide any explicit signal for improving SF tasks. To further understand if additional explicit signals for slots improve the SF performances, we incorporate syntactic information into the model (see Appendix D for details). We primarily notice that the addition of syntactic knowledge only leads to marginal improvements in SF tasks. We attribute this to the low shots per slot class, an artifact of the episodic sampling procedure (Krone et al., 2020), done per intent class in the joint IC/SF setting. 4" + }, + { + "url": "http://arxiv.org/abs/2006.14651v2", + "title": "Influence Functions in Deep Learning Are Fragile", + "abstract": "Influence functions approximate the effect of training samples in test-time\npredictions and have a wide variety of applications in machine learning\ninterpretability and uncertainty estimation. A commonly-used (first-order)\ninfluence function can be implemented efficiently as a post-hoc method\nrequiring access only to the gradients and Hessian of the model. For linear\nmodels, influence functions are well-defined due to the convexity of the\nunderlying loss function and are generally accurate even across difficult\nsettings where model changes are fairly large such as estimating group\ninfluences. Influence functions, however, are not well-understood in the\ncontext of deep learning with non-convex loss functions. In this paper, we\nprovide a comprehensive and large-scale empirical study of successes and\nfailures of influence functions in neural network models trained on datasets\nsuch as Iris, MNIST, CIFAR-10 and ImageNet. Through our extensive experiments,\nwe show that the network architecture, its depth and width, as well as the\nextent of model parameterization and regularization techniques have strong\neffects in the accuracy of influence functions. In particular, we find that (i)\ninfluence estimates are fairly accurate for shallow networks, while for deeper\nnetworks the estimates are often erroneous; (ii) for certain network\narchitectures and datasets, training with weight-decay regularization is\nimportant to get high-quality influence estimates; and (iii) the accuracy of\ninfluence estimates can vary significantly depending on the examined test\npoints. These results suggest that in general influence functions in deep\nlearning are fragile and call for developing improved influence estimation\nmethods to mitigate these issues in non-convex setups.", + "authors": "Samyadeep Basu, Philip Pope, Soheil Feizi", + "published": "2020-06-25", + "updated": "2021-02-10", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "stat.ML" + ], + "main_content": "INTRODUCTION In machine learning, in\ufb02uence functions (Cook & Weisberg, 1980) can be used to estimate the change in model parameters when the empirical weight distribution of the training samples is perturbed in\ufb01nitesimally. 
This approximation is cheaper to compute compared to the expensive process of repeatedly re-training the model to retrieve the exact parameter changes. In\ufb02uence functions could thus be used to understand the effect of removing an individual training point (or, groups of training samples) on the model predictions at the test-time. Leveraging a \ufb01rst-order Taylor\u2019s approximation of the loss function, (Koh & Liang, 2017) has shown that a (\ufb01rst-order) in\ufb02uence function, computed using the gradient and the Hessian of the loss function, can be useful to interpret machine learning models, \ufb01x mislabelled training samples and create data poisoning attacks. In\ufb02uence functions are in general well-de\ufb01ned and studied for models such as logistic regression (Koh & Liang, 2017), where the underlying loss-function is convex. For convex loss functions, in\ufb02uence functions are also accurate even when the model perturbations are fairly large (e.g. in the group in\ufb02uence case (Koh et al., 2019b; Basu et al., 2020)). However, when the convexity assumption of the underlying loss function is violated, which is the case in deep learning, the behaviour of in\ufb02uence functions is not well understood and is still an open area of research. With recent advances in computer vision (Szeliski, 2010), natural language processing (Sebastiani, 2002), high-stakes applications such as medicine (Lundervold & Lundervold, 2018), it has become particularly important \u2217Authors contributed equally 1 arXiv:2006.14651v2 [cs.LG] 10 Feb 2021 \fPublished as a conference paper at ICLR 2021 to interpret deep model predictions. This makes it critical to understand in\ufb02uence functions in the context of deep learning, which is the main focus of our paper. Despite their non-convexity, it is sometimes believed that in\ufb02uence functions would work for deep networks. The excellent work of (Koh & Liang, 2017) successfully demonstrated one example of in\ufb02uence estimation for a deep network, a small (2600 parameters), \"all-convolutional\" network (Springenberg et al., 2015). To the best of our knowledge, this is the one of the few cases for deep networks where in\ufb02uence estimation has been shown to work. A question of key importance to practitioners then arises: for what other classes of deep networks does in\ufb02uence estimation work? In this work, we provide a comprehensive study of this question and \ufb01nd a pessimistic answer: in\ufb02uence estimation is quite fragile for a variety of deep networks. In the case of deep networks, several factors might have an impact on in\ufb02uence estimates: (i) due to non-convexity of the loss function, different initializations of the perturbed model can lead to signi\ufb01cantly different model parameters (with approximately similar loss values); (ii) even if the initialization of the model is \ufb01xed, the curvature values of the network (i.e. eigenvalues of the Hessian matrix) at optimal model parameters might be very large in very deep networks, leading to a substantial Taylor\u2019s approximation error of the loss function and thus resulting in poor in\ufb02uence estimates; (iii) for large neural networks, computing the exact inverse-Hessian Vector product, required in computation of in\ufb02uence estimates, can be computationally very expensive. 
Thus, one needs to use approximate inverse-Hessian Vector product techniques which might be erroneous; resulting in low quality in\ufb02uence estimates; and \ufb01nally (iv) different architectures can have different loss landscape geometries near the optimal model parameters, leading to varying in\ufb02uence estimates. In this paper, we study aforementioned issues of using in\ufb02uence functions in deep learning through an extensive experimental study on progressively-growing complex models and datasets. We \ufb01rst start our analysis with a case study of a small neural network for the Iris dataset where the exact Hessian matrix can be computed. We then progressively increase the complexity of the network and analyse a CNN architecture (depth of 6) trained on 10% of MNIST dataset, similar to (Koh & Liang, 2017). Next, we evaluate the accuracy of in\ufb02uence estimates for more complex deep architectures (e.g. ResNets) trained on MNIST and CIFAR-10. Finally, we compute in\ufb02uence estimates on the ImageNet dataset using ResNet-50. We make the following observations through our analysis: \u2022 We \ufb01nd that the network depth and width have a strong impact on in\ufb02uence estimates. In particular, we show that in\ufb02uence estimates are fairly accurate when the network is shallow, while for deeper models, in\ufb02uence estimates are often erroneous. We attribute this partially to the increasing curvature values of the network as the depth increases. \u2022 We observe that the weight decay regularization is important to obtain high quality in\ufb02uence estimates in certain architectures and datasets. \u2022 We show that the inverse-Hessian Vector product approximation techniques such as stochastic estimation (Agarwal et al., 2016) are erroneous, especially when the network is deep. This can contribute to the low quality of in\ufb02uence estimates in deep models. \u2022 We observe that the choice of test-point has a substantial impact on the quality of in\ufb02uence estimates, across different datasets and architectures. \u2022 In very large-scale datasets such as ImageNet, we have found that even ground-truth in\ufb02uence estimates (obtained by leave-one-out re-training) can be inaccurate and noisy partially due to the model\u2019s training and convergence. These results highlight sensitivity of current in\ufb02uence functions in deep learning and call for developing robust in\ufb02uence estimators to be used in large-scale machine learning applications. 2 RELATED WORKS In\ufb02uence functions are primarily used to identify important training samples for test-time predictions and debug machine learning models (Koh & Liang, 2017). Similar to in\ufb02uence functions, (Chaudhuri & Mykland, 1993) tackles the problem of approximating a dataset using a subset of the dataset. In recent times, there is an increase in the applications of in\ufb02uence functions for tasks other than interpretability. For e.g.(Schulam & Saria, 2019) has used in\ufb02uence functions to audit 2 \fPublished as a conference paper at ICLR 2021 the reliability of test-predictions. In NLP, in\ufb02uence functions have been used to detect biases in word-embeddings (Brunet et al., 2018) whereas in the domain of ML security, in\ufb02uence functions have been shown to be effective in crafting stronger data-poisoning attacks (Koh et al., 2019a). In\ufb02uence functions are also effective in the identi\ufb01cation of important training groups (rather than an individual sample) (Basu et al., 2019; Koh et al., 2019b). 
Prior theoretical work (Giordano et al., 2018; 2019) has focused on quantifying finite-sample error bounds for influence estimates when compared to the ground-truth re-training procedures. Recently, alternative methods to find influential samples in deep networks have been proposed. In (Yeh et al., 2018), test-time predictions are explained by a kernel function evaluated at the training samples. Influential training examples can also be obtained by tracking the change in loss for a test prediction through model checkpoints stored during training (Pruthi et al., 2020). While these alternative methods (Yeh et al., 2018; Pruthi et al., 2020) work well for interpreting deep model predictions, they lack the "jackknife"-like ability of influence functions, which makes influence functions useful in multiple applications beyond interpretability (e.g., uncertainty estimation).

3 BASICS OF INFLUENCE FUNCTIONS

Consider h to be a function parameterized by θ which maps from an input feature space X to an output space Y. The training samples are denoted by the set S = {z_i : (x_i, y_i)}_{i=1}^n, and the loss function is ℓ(h_θ(z)) for a particular training example z. Standard empirical risk minimization solves the following optimization problem:

$$\theta^{*} = \arg\min_{\theta} \frac{1}{n} \sum_{i=1}^{n} \ell(h_{\theta}(z_i)) \qquad (1)$$

Up-weighting a training example z by an infinitesimal amount ε leads to a new set of model parameters, denoted by θ^ε_{z}, obtained by solving:

$$\theta^{\epsilon}_{\{z\}} = \arg\min_{\theta} \frac{1}{n} \sum_{i=1}^{n} \ell(h_{\theta}(z_i)) + \epsilon\, \ell(h_{\theta}(z)) \qquad (2)$$

Removing a training point z corresponds to up-weighting it by ε = −1/n in Equation (2). The main idea used by (Koh & Liang, 2017) is to approximate θ^ε_{z} by a first-order Taylor series expansion around the optimal model parameters θ*, which leads to:

$$\theta^{\epsilon}_{\{z\}} \approx \theta^{*} - \epsilon\, H_{\theta^{*}}^{-1} \nabla_{\theta}\, \ell(h_{\theta^{*}}(z)) \qquad (3)$$

where H_{θ*} is the Hessian with respect to the model parameters at θ*. Following the classical result of (Cook & Weisberg, 1980), the change in model parameters (Δθ = θ^ε_{z} − θ*) upon up-weighting the training example z can be approximated by the influence function I(z):

$$I(z) = \frac{d\theta^{\epsilon}_{\{z\}}}{d\epsilon}\bigg|_{\epsilon=0} = -H_{\theta^{*}}^{-1} \nabla_{\theta}\, \ell(h_{\theta^{*}}(z)) \qquad (4)$$

The change in the loss value for a particular test point z_t when a training point z is up-weighted can be approximated in closed form via the chain rule (Koh & Liang, 2017):

$$I(z, z_t) = -\nabla \ell(h_{\theta^{*}}(z_t))^{T} H_{\theta^{*}}^{-1} \nabla \ell(h_{\theta^{*}}(z)) \qquad (5)$$

I(z, z_t)/n approximates the change in the loss for the test sample z_t when the training sample z is removed from the training set. This result, however, relies on the assumption that the underlying loss function is strictly convex in the model parameters θ and that the Hessian H_{θ*} is positive-definite (Koh & Liang, 2017). For large models, inverting the exact Hessian H_{θ*} is expensive. In such cases, the inverse-Hessian vector product can be computed efficiently with a combination of Hessian-vector products (Pearlmutter, 1994) and optimization techniques (see the Appendix for details).
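For a model small enough that the Hessian can be formed explicitly (as in the Iris experiments that follow), Eqs. (4)–(5) can be computed directly. The sketch below is illustrative: the `loss_fn(model, data)` interface is an assumption, a small damping term is added in case the Hessian is singular, and for larger models the explicit solve would be replaced by approximate inverse-Hessian-vector products (e.g., the stochastic estimation of Agarwal et al., 2016).

```python
import torch

def flat_grad(loss, params, create_graph=False):
    """Gradient of `loss` w.r.t. `params`, flattened into a single vector."""
    grads = torch.autograd.grad(loss, params, create_graph=create_graph)
    return torch.cat([g.reshape(-1) for g in grads])

def exact_hessian(loss, params):
    """Exact Hessian of `loss` w.r.t. `params` (only feasible for small models)."""
    g = flat_grad(loss, params, create_graph=True)
    rows = [torch.cat([h.reshape(-1) for h in torch.autograd.grad(g[i], params, retain_graph=True)])
            for i in range(g.numel())]
    return torch.stack(rows)

def influence_on_test_loss(model, loss_fn, train_set, z, z_test, damping=1e-3):
    """Eq. (5): I(z, z_t) = -grad_test^T  H^{-1}  grad_train, evaluated at the optimal parameters."""
    params = [p for p in model.parameters() if p.requires_grad]
    H = exact_hessian(loss_fn(model, train_set), params)          # Hessian of the empirical risk
    H = H.detach() + damping * torch.eye(H.shape[0], device=H.device)  # damping for (near-)singular Hessians
    g_train = flat_grad(loss_fn(model, [z]), params).detach()
    g_test = flat_grad(loss_fn(model, [z_test]), params).detach()
    return -(g_test @ torch.linalg.solve(H, g_train))
```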
4 WHAT CAN GO WRONG FOR INFLUENCE FUNCTIONS IN DEEP LEARNING?

First-order influence functions (Koh & Liang, 2017) assume that the underlying loss function is convex and that the change in model parameters is small when the empirical weight distribution of the training data is infinitesimally perturbed. In essence, this requires the Taylor's gap in Equation (3) to be small for an accurate influence estimate. In the case of non-convex loss functions, however, this assumption is not generally true. Empirically, we find that the Taylor's gap is strongly affected by common hyper-parameters of deep networks. For example, in Fig. (1)-(a,b), we find that for networks trained without weight-decay regularization on Iris, the Taylor's gap is large, resulting in low-quality influence estimates. In a similar vein, when the network depth and width are considerably large (i.e., the over-parameterized regime), the Taylor's gap increases and substantially degrades the quality of influence estimates (Fig. (2)). Empirically, this increase in the Taylor's gap correlates strongly with the curvature values of the loss function evaluated at the optimal model parameters, as observed in Fig. (2)-(b). Further complications may arise for larger models, where influence estimation requires an additional approximation to compute the inverse-Hessian vector product. Nonetheless, we observe in Fig. (2)-(a) that on Iris this approximation has only a marginal impact on influence estimation. These results show that the network architecture, hyper-parameters, and loss curvature are important factors for proper influence estimation. In the next section, we discuss these issues in detail through controlled experiments on datasets and models of increasing complexity.

Figure 1: Iris dataset experimental results. (a,b) Comparison of the norm of parameter changes computed with the influence function vs. re-training; (a) trained with weight-decay; (b) trained without weight-decay. (c) Spearman correlation vs. network depth. (d) Spearman correlation vs. network width.

5 EXPERIMENTS

Datasets: We first study the behaviour of influence functions on the small Iris dataset (Anderson, 1936), where the exact Hessian can be computed. We then progressively increase the complexity of the model and datasets: we use small MNIST (Koh & Liang, 2017) to evaluate the accuracy of influence functions in a small CNN architecture with a depth of 6. Next, we study influence functions on modern deep architectures trained on the standard MNIST (LeCun et al., 1998) and CIFAR-10 (Krizhevsky et al., 2000) datasets. Finally, to understand how influence functions scale to large datasets, we use ImageNet (Deng et al., 2009) to compute the influence estimates.

Evaluation Metrics: We evaluate the accuracy of influence estimates at a given test point z_t using both Pearson (Kirch, 2008) and Spearman rank-order correlation (Spearman, 1904) with the ground truth (obtained by re-training the model) across a set of training points. Most existing interpretability methods require influential examples to be ranked in the correct order of their importance (Ghorbani et al., 2017). Therefore, to evaluate the accuracy of influence estimates, Spearman correlation is often the better choice.
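The evaluation protocol used throughout the experiments can be sketched as follows: the ground-truth influence of a training point is the change in test loss after re-training without it (in practice, re-training is warm-started from θ* for a fraction of the original training steps), and the predicted influences are compared to these ground truths with Pearson and Spearman correlations. Helper names such as `train_fn` and `eval_test_loss` are illustrative assumptions.

```python
import numpy as np
from scipy import stats

def ground_truth_influence(train_fn, eval_test_loss, train_set, removed_idx, base_model):
    """Change in the test loss when one training point is removed and the model is re-trained."""
    reduced = [z for i, z in enumerate(train_set) if i != removed_idx]
    retrained = train_fn(reduced, init=base_model)     # warm-start re-training from the optimal parameters
    return eval_test_loss(retrained) - eval_test_loss(base_model)

def influence_quality(predicted, ground_truth):
    """Pearson and Spearman correlations between predicted and ground-truth influence scores."""
    pearson, _ = stats.pearsonr(predicted, ground_truth)
    spearman, _ = stats.spearmanr(predicted, ground_truth)
    return pearson, spearman

# Usage (illustrative): compute predicted[i] with the first-order estimate I(z_i, z_t)/n and
# ground_truth[i] by leave-one-out re-training for the same training points, then report
# influence_quality(predicted, ground_truth) for each examined test point z_t.
```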
5.1 UNDERSTANDING INFLUENCE FUNCTIONS WHEN THE EXACT HESSIAN CAN BE COMPUTED Setup: Computing in\ufb02uence estimates with the exact Hessian has certain advantages in our study: a) it bypasses inverse-Hessian Vector product approximation techniques which induce errors in computing in\ufb02uence estimates. Thus, we can compare in\ufb02uence estimates computed with exact vs. approximate inverse-Hessian Vector products to quantify this type of error; b) The deviation of the parameters computed with the in\ufb02uence function from the exact parameters can be computed exactly. This information can be useful to further quantify the error incurred by (\ufb01rst-order) in\ufb02uence estimates in the non-convex setup. However, computations of the exact Hessian matrix and its inverse are only computationally feasible for models with small number of parameters. Thus, we use the Iris dataset along with a small feed-forward neural network to analyse the behaviour of in\ufb02uence 4 \fPublished as a conference paper at ICLR 2021 Figure 2: Iris dataset experimental results; (a) Spearman correlation of in\ufb02uence estimates with the ground-truth estimates computed with stochastic estimation vs. exact inverse-Hessian vector product. (b) Top eigenvalue of the Hessian vs. the network depth. (c) Spearman correlation between the norm of parameter changes computed with in\ufb02uence function vs. re-training. function computed with the exact Hessian in a non-convex setup. We train models to convergence for 60k iterations with full-batch gradient descent. To obtain the ground-truth estimates, we retrain the models for 7.5k steps, starting from the optimal model parameters. For our analysis, we choose the test-point with the maximum loss and evaluate the accuracy of in\ufb02uence estimates with the ground-truth amongst of the top 16.6% of the training points. Through our experiments with the exact Hessian, we answer some relevant questions related to how properties of the network such as depth, width and regularizers (e.g. weight-decay) affect the in\ufb02uence estimates. The Effect of Weight-Decay: One of the simpler and common regularization techniques used to train neural networks is weight-decay regularization. In particular, a term \u03bb\u2225\u03b8\u22252 2, penalizing the scaled norm of the model parameters is added to the objective function, during training, where \u03bb is a hyperparameter which needs to be tuned. We train a simple feed-forward network1 with and without weight-decay regularization. For the network trained with weight-decay, we observe a Spearman correlation of 0.97 between the in\ufb02uence estimates and the ground-truth estimates. In comparison, for the network trained without a weight-decay regularization, the Spearman correlation estimates decrease to 0.508. In this case, we notice that the Hessian matrix is singular, thus a damping factor of 0.001 is added to the Hessian matrix, to make it invertible. To further understand the reason for this decrease in the quality of in\ufb02uence estimates, we compare the following metric across all training examples: a) Norm of the model parameter changes computed by re-training; b) Norm of the model parameter changes computed using the in\ufb02uence function (i.e. \u2225H\u22121 \u03b8\u2217\u2207\u2113(zi)\u22252 \u2200i \u2208[1, n]) (Fig. 1-(a,b)). 
We observe that when the network is trained without weight-decay, changes in model parameters computed with the in\ufb02uence function have a substantially larger deviation from those computed using re-training. This essentially suggests that the gap in Taylor expansion, using (\ufb01rstorder) in\ufb02uence estimates is large, when the model is trained without weight-decay. We observe similar results with smooth activation functions such as tanh (see the Appendix for details). The Effect Of Network Depth: From Fig. 1-(c), we see that network depth has a dramatic effect on the quality of in\ufb02uence estimates. For example, when the depth of the network is increased to 8, we notice a considerable decrease in the Spearman correlation estimates. To further our understanding about the decrease in the quality of in\ufb02uence estimates when the network is deeper, we compute the gap in the approximation between the ground-truth parameter changes (computed by re-training) and the approximate parameter changes (computed using the in\ufb02uence function). To quantify the error gap, we compute the Spearman correlation estimates between the norm of true and approximate parameter changes across the top 16.6% of the in\ufb02uential examples. We \ufb01nd that with increasing depth, the Spearman correlation estimates between the norm of the true and approximate parameter changes decrease. From Fig. 2-(c), we see that the approximation error gap is particularly large when the depth of the network is more than 5. We also notice a consistent increase in the curvature of the loss function (Fig. 2-(b)), as the network becomes deeper. This possibly suggests that the curvature information of the network can be an upper bound in the approximation error gap between 1With width of 5, depth of 1 and ReLU activations 5 \fPublished as a conference paper at ICLR 2021 Figure 3: Experiments on small MNIST using a CNN architecture. (a) Estimation of in\ufb02uence function with and without weight decay on (a) the top in\ufb02uential points, (b) training points at 30th percentile of in\ufb02uence score distribution. (c) Correlation vs the weight decay factor (evaluated on the top in\ufb02uential points). the true parameters and the ones computed using the in\ufb02uence function. Even in case of non-smooth activation functions like ReLU, we have a similar observation. (see the Appendix for more details). The Effect Of Network Width: To see the effect of the network width on the quality of in\ufb02uence estimates, we evaluate the in\ufb02uence estimates for a feed-forward network of constant depth, by progressively increasing its width. From Fig. 1-(d), we observe that with an increase in network width, the Spearman correlation decreases consistently. For example, we \ufb01nd that the Spearman correlation decreases from 0.82 to 0.56, when the width of the network is increased from 8 to 50. This observation suggests that over-parameterizing a network by increasing its width has a strong impact in the quality of in\ufb02uence estimates. The Effect of Stochastic Estimation on inverse-Hessian Vector Product: For large deep networks, the inverse-Hessian Vector product is computed using stochastic estimation(Agarwal et al., 2016), as the exact Hessian matrix cannot be computed and inverted. To understand the effectiveness of stochastic approximation, we compute the in\ufb02uence estimates with both the exact Hessian and stochastic estimation. 
We observe that across different network depths, the influence estimates computed with stochastic estimation have a marginally lower Spearman correlation than the ones computed with the exact Hessian. From Fig. 2-(a), we find that the approximation error is larger when the network is deeper.

5.2 UNDERSTANDING INFLUENCE FUNCTIONS IN SHALLOW CNN ARCHITECTURES

Setup: In this section, we perform a case study using a CNN architecture on the small MNIST dataset (i.e., 10% of MNIST), a setup similar to (Koh & Liang, 2017). The model has 2,600 parameters and is trained for 500k iterations to reach convergence at the optimal model parameters θ*. The ground-truth estimates are obtained by re-training the models from the optimal parameter set θ* for 30k iterations; when trained with weight-decay, a regularization factor of 0.001 is used. To assess the accuracy of influence estimates, we select a set of test points with high test losses computed at the optimal model parameters. For each test point, we select the 100 training samples with the highest influence scores and compute the ground-truth influence by re-training the model. We also select 100 training points with influence scores at the 30th percentile of the entire influence score distribution; these training points have low influence scores and lower variance in their scores compared to the top influential points. The model is trained with and without weight-decay regularization. When trained with weight-decay and evaluated on the top influential points, we find that the correlation estimates are consistently significant (Fig. 3-(a)), which is consistent with the results reported in (Koh & Liang, 2017). However, when the evaluation is done on the set of training samples at the 30th percentile of the influence score distribution, the correlation estimates decrease significantly (Fig. 3-(b)). This shows that only the influence estimates of the top influential points are precise when compared to ground-truth re-trainings. Furthermore, without weight-decay regularization, influence estimates in both cases are poor across all test points (Fig. 3-(a,b)). To further understand the impact of weight-decay on influence estimates, we train the network with different weight-decay regularization factors. From Fig. 3-(c), we see that the selection of the weight-decay factor is important for obtaining high-quality influence estimates.
3-(a,b), we \ufb01nd that the selection of test-point also has a strong impact on the quality of in\ufb02uence estimates. For example, when the network is trained with weight-decay and the in\ufb02uence estimates are computed for top in\ufb02uential training points, we notice that the Spearman correlation estimates range from 0.92 to 0.38 across different test-points and have a high variance. These results show that despite some successful applications of in\ufb02uence functions in this nonconvex setup, as reported in (Koh & Liang, 2017), their performances are very sensitive to hyperparameters of the experiment as well as to the training procedure. In the next two sections, we assess the quality of in\ufb02uence estimates on more complex architectures and datasets including MNIST, CIFAR-10 and ImageNet. In particular, we desire to understand, if the insights gained from experiments on smaller networks can be generalized to more complex networks and datasets. 5.3 UNDERSTANDING INFLUENCE FUNCTIONS IN DEEP ARCHITECTURES Setup: In this section, we evaluate the accuracy of in\ufb02uence estimates using MNIST and CIFAR-10 datasets across different network architectures including small CNN(Koh & Liang, 2017), LeNet (Lecun et al., 1998), ResNets (He et al., 2015), and VGGNets (Simonyan & Zisserman, 2015)3. To compute in\ufb02uence estimates, we choose two test points for each architecture: a) the test-point with the highest loss, and b) the test-point at the 50th percentile of the losses of all test points. For each of these two test points, we select the top 40 in\ufb02uential training samples and compute the correlation of their in\ufb02uence estimates with the ground-truth estimates. To compute the ground-truth in\ufb02uence estimates, we follow the strategy of (Koh & Liang, 2017), where we re-train the models from optimal parameters for 6% of the steps used for training the optimal model. When the networks are trained with a weight-decay regularization, we use a constant weight-decay factor of 0.001 across all the architectures (see Appendix for more details). Results On MNIST: From Table 1, we observe that for the test-point with the highest loss, the in\ufb02uence estimates in the small CNN and LeNet architectures (trained with the weight-decay regularization) have high qualities. These networks have 2.6k and 44k parameters, respectively, and are relatively smaller and less deep than the other networks used in our experimental setup. As the depth of the network increases, we observe a consistent decrease in quality of in\ufb02uence estimates. For the test-point with a loss at the 50th percentile of test-point losses, we observe that in\ufb02uence estimates only in the small CNN architecture have good qualities. Results On CIFAR-10: For CIFAR-10, across all architectures trained with the weight-decay regularization, we observe that the correlation estimates for the test-point with the highest loss are highly signi\ufb01cant. For example, the correlation estimates are above 0.6 for a majority of the network architectures. However, for the test-point evaluated at the 50th percentile of the loss, the correlations decrease marginally across most of the architectures. We \ufb01nd that on CIFAR-10, even architectures trained without weight-decay regularization have highly signi\ufb01cant correlation estimates when evaluated with the test-point which incurs the highest loss. 3For CIFAR-10, evaluations on small CNN have not been performed due to the poor test accuracy. 
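For reference, the evaluation protocol used in these sections can be sketched as follows. This is our own illustrative sketch rather than the authors' code; `influence_score`, `train_from`, and `test_loss` are hypothetical stand-ins for the actual influence routine, warm-start training loop, and loss evaluation.

```python
# Sketch of the evaluation protocol: correlate predicted influence scores with the
# change in test loss after leave-one-out re-training warm-started from theta*.
import copy
import numpy as np
from scipy.stats import pearsonr, spearmanr

def ground_truth_influence(model_star, train_set, z_test, idx, steps, train_from, test_loss):
    base = test_loss(model_star, z_test)
    model = copy.deepcopy(model_star)                      # start from optimal parameters
    reduced = [z for i, z in enumerate(train_set) if i != idx]
    train_from(model, reduced, steps=steps)                # short warm-start re-training
    return test_loss(model, z_test) - base

def correlation_with_ground_truth(model_star, train_set, z_test, influence_score,
                                  train_from, test_loss, top_k=40, steps=1000):
    scores = np.array([influence_score(model_star, z, z_test) for z in train_set])
    top = np.argsort(scores)[::-1][:top_k]                 # most influential training points
    truth = np.array([ground_truth_influence(model_star, train_set, z_test, int(i),
                                              steps, train_from, test_loss) for i in top])
    return pearsonr(scores[top], truth)[0], spearmanr(scores[top], truth)[0]
```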
7 \fPublished as a conference paper at ICLR 2021 In case of MNIST, we have found that in shallow networks, the in\ufb02uence estimates are fairly accurate while for deeper networks, the quality of in\ufb02uence estimates decrease. For CIFAR-10, although the in\ufb02uence estimates are signi\ufb01cant, we found that the correlations are marginally lower in deeper networks such as ResNet-50. The improved quality of in\ufb02uence estimates in CIFAR-10 can be attributed to the fact that for a similar depth, architectures trained on CIFAR-10 are less over-parameterized compared to architectures trained on MNIST. Note that, in Section 5.1, where the exact Hessian matrix can be computed, we observed that over-parameterization decreases the quality of in\ufb02uence estimates. From Table(1), we also observed that the selection of test-point has a sizeable impact on the quality of in\ufb02uence estimates. Furthermore, we noticed large variations in the quality of in\ufb02uence estimates across different architectures. In general we found that in\ufb02uence estimates for small CNN and LeNet are reasonably accurate, while for ResNet-50, the quality of estimates decrease across both MNIST and CIFAR-10. Precise reasons for these variations are dif\ufb01cult to establish. We hypothesize that it can be due to the following factors: (i) Different architectures trained on different datasets have contrasting characteristics of loss landscapes at the optimal parameters which can have an impact on in\ufb02uence estimates. (ii) The weight-decay factor may need to be set differently in various architectures, to obtain high quality in\ufb02uence estimates. Figure 4: In\ufb02uence for CIFAR-100 Results on CIFAR-100: In the case of CIFAR-100, we train a ResNet-18 model with a weight-decay regularization factor of 5e\u22124. The in\ufb02uence estimates are then computed for test-points with the highest losses (Index: 6017, 2407, 9383) and testpoints around the 50th percentile of the test loss (Index: 783, 7106) over multiple model initialisations. Unlike in the case of MNIST and CIFAR-10, from Fig. 4 we observe the correlation estimates to be of substantially poor quality. We provide additional visualizations of the in\ufb02uential training examples in the Appendix section. 5.4 IS SCALING INFLUENCE ESTIMATES TO IMAGENET POSSIBLE? The application of in\ufb02uence functions to ImageNet scale models provides an appealing yet challenging opportunity. It is appealing because, if successful, it opens a range of applications to large-scale image models, including interpretability, robustness, data poisoning, and uncertainty estimation. It is challenging for a number of reasons. Notable among these is the high computational cost of training and re-training, which limits the number of ground truth evaluations. In addition, all of the previously discussed dif\ufb01culties in in\ufb02uence estimations still remain, including (i) non-convexity of the loss, (ii) selection of scaling and damping hyperparameters in the stochastic estimation of the Hessian, and (iii) the lack of convergence of the model parameters. The scale of ImageNet raises additional questions about the feasibility of leave-one-out retraining as the ground truth estimator. Given that there are 1.2M images in the training set, is it even possible that the removal of one image can signi\ufb01cantly alter the model? In other words, we question whether or not reliable ground truth estimates may be obtained through leave-one-out re-training at this scale. 
To illustrate this, we conduct an additional in\ufb02uence estimation on ImageNet. After training an initial model to 92.302% top5 test accuracy, we select two test points at random, calculate in\ufb02uence over the entire training set, and then select the top 50 points by their in\ufb02uences as candidates for re-training. We then use the re-training procedure suggested by (Koh & Liang, 2017), which starts leave-one-out re-training from the parameter set obtained after the initial training. We re-train for an additional 2 epochs, approximately 5% of the original training time, and calculate the correlations. We observe that for both test points, both Pearson and Spearman correlations are very low (less than 0.15, see details in the Appendix). In our experiments, we observe high variability among ground-truth estimates obtained by retraining the model (see the appendix for details). We conjecture that this may be partially due to the fact that the original model has not be fully converged. To study this, we train the original model with all training points for an additional 2 epochs and measure the change in the test loss. We \ufb01nd that the overall top5 test accuracy has improved slightly to 92.336 % (+0.034) and the loss for one 8 \fPublished as a conference paper at ICLR 2021 Figure 5: (a) Difference in norm of parameters obtained by re-training from scratch vs. re-training from optimal parameters. (b) Correlation estimates with re-training from scratch vs. re-training from optimal parameters. of the considered test points has decreased by relatively a signi\ufb01cant amount of 0.679. However, the loss for the other point has increased slightly by 0.066. Such changes in loss values can therefore out-power the effect of leave-one-out re-training procedure. Second, we calculate the 2-norm of the weight gradients, which should be close to zero near an optimal point, and compare it to a standard pre-trained ImageNet ResNet-50 model as a baseline. We \ufb01nd these norms to be 20.18 and and 15.89, respectively, showing our model has similar weight gradient norm to the baseline. Although these norms are relatively small given that there are 25.5M parameters, further re-training the model still changes loss values for some samples considerably, making the ground-truth estimates noisy. We suggest that one way to obtain reliable ground-truth in\ufb02uence estimates in such large models can be through assessing the in\ufb02uence of a group of samples, rather than a single one. 6 DISCUSSION ON GROUND-TRUTH INFLUENCE In our experimental setup, to obtain the ground-truth in\ufb02uence, we follow the strategy of re-training from optimal model parameters as shown in (Koh & Liang, 2017; Koh et al., 2019b). Even for moderately sized datasets and architectures, re-training from scratch (instead of re-training from optimal model parameters) is computationally expensive. Although re-training from optimal model parameters is an approximation compared to re-training from scratch, we notice that the approximation works quite well in practice. To validate the effectiveness of this strategy, we \ufb01rst compute the norm of the difference in parameters obtained by re-training from scratch vs. re-training from optimal parameters. Next we compute the correlation between the in\ufb02uence estimates and ground-truth using both the re-training strategies. From Fig. 5, we observe the norm of parameter differences using the two re-training strategies to be small. 
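The two ground-truth protocols being contrasted here can be sketched as follows (again, our own illustration; `init_model` and `train` are hypothetical helpers):

```python
# Sketch: ground-truth parameter change from removing training point idx, obtained by
# (a) re-training from scratch and (b) warm-start re-training from the optimal theta*.
import copy
import torch

def params_vector(model):
    return torch.cat([p.detach().reshape(-1) for p in model.parameters()])

def retrain_from_scratch(init_model, train, data, idx, full_steps):
    model = init_model()                                   # fresh random initialization
    train(model, [z for i, z in enumerate(data) if i != idx], steps=full_steps)
    return params_vector(model)

def retrain_from_optimum(model_star, train, data, idx, finetune_steps):
    model = copy.deepcopy(model_star)                      # warm start at theta*
    train(model, [z for i, z in enumerate(data) if i != idx], steps=finetune_steps)
    return params_vector(model)

# The check reported in Fig. 5 amounts to verifying that
# (retrain_from_scratch(...) - retrain_from_optimum(...)).norm() is small, and that
# correlations computed against either choice of ground truth are close.
```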
Similarly, using either re-training strategy as the ground truth yields similar correlation estimates. These results highlight that re-training from optimal parameters (although an approximation) is close to re-training from scratch." + }, + { + "url": "http://arxiv.org/abs/1911.00418v2", + "title": "On Second-Order Group Influence Functions for Black-Box Predictions", + "abstract": "With the rapid adoption of machine learning systems in sensitive applications, there is an increasing need to make black-box models explainable. Often we want to identify an influential group of training samples in a particular test prediction for a given machine learning model. Existing influence functions tackle this problem by using first-order approximations of the effect of removing a sample from the training set on model parameters. To compute the influence of a group of training samples (rather than an individual point) in model predictions, the change in optimal model parameters after removing that group from the training set can be large. Thus, in such cases, the first-order approximation can be loose. In this paper, we address this issue and propose second-order influence functions for identifying influential groups in test-time predictions. For linear models, across different sizes and types of groups, we show that using the proposed second-order influence function improves the correlation between the computed influence values and the ground truth ones. We also show that second-order influence functions could be used with optimization techniques to improve the selection of the most influential group for a test-sample.", + "authors": "Samyadeep Basu, Xuchen You, Soheil Feizi", + "published": "2019-11-01", + "updated": "2020-07-07", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "stat.ML" + ], + "main_content": "Introduction Recently, there has been rapid and significant success in applying machine learning methods to a wide range of applications, including vision (Szeliski, 2010), natural language processing (Sebastiani, 2002), medicine (Lundervold and Lundervold, 2018), and finance (Lin, Hu, and Tsai, 2012). In sensitive applications such as medicine, we would like to explain test-time model predictions to humans. An important question is: why does the model make a certain prediction for a particular test sample? One way to address this is to trace model predictions back to their training data. More specifically, one can ask which training samples were the most influential ones for a given test prediction. Influence functions (Cook and Weisberg, 1980) from robust statistics measure the dependency of optimal model parameters on training samples. Previously, (Koh and Liang, 2017) used first-order approximations of influence functions to estimate how much model parameters would change if a training point was up-weighted by an infinitesimal amount. Such an approximation can be used to identify the most influential training samples in a test prediction.
Moreover, this approximation is similar to the leave-one-out re-training, thus the \ufb01rst-order in\ufb02uence function proposed in (Koh and Liang, 2017) bypasses the expensive process of repeated re-training the model to \ufb01nd in\ufb02uential training samples in a test-time prediction. In some applications, one may want to understand how model parameters would change when large groups of training samples are removed from the training set. This could be useful to identify groups of training data which drive the decision for a particular test prediction. As shown in (Koh, Ang, Teo, and Liang, 2019a), \ufb01nding in\ufb02uential groups can be useful in real-world applications such as diagnosing batch effects (Yang, Li, Qian, Wilhelmsen, Shen, and Li, 2019), apportioning credit between different data sources (Arrieta-Ibarra, Goff, Jim\u00c3l\u2019nez-Hern\u00c3 \u02db andez, Lanier, and Weyl, 2018), understanding effects of different demographic groups (Chen, Johansson, and Sontag, 2018) or in a multiparty learning setting (Hayes and Ohrimenko, 2019). (Koh et al., 2019a) approximates the group in\ufb02uence by sum of \ufb01rst-order individual in\ufb02uences over training samples in the considered group. However, removal of a large group from training can lead to a large perturbation to model parameters. Therefore, in\ufb02uence functions based on \ufb01rst-order approximations may not be accurate in this setup. Moreover, approximating the group in\ufb02uence by adding individual sample in\ufb02uences ignores possible cross correlations that may exist among samples in the group. In this paper, we relax the \ufb01rst-order approximations of current in\ufb02uence functions and study how second-order approximations can be used to capture model changes when a potentially large group of training samples is up-weighted. Considering a training set S and a group U \u2282S, existing arXiv:1911.00418v2 [cs.LG] 7 Jul 2020 \fOn Second-Order Group In\ufb02uence Functions for Black-Box Predictions \ufb01rst-order approximations of the group in\ufb02uence function (Koh et al., 2019a) can be written as the sum of \ufb01rst-order in\ufb02uences of individual points. That is, I(1)(U) = |U| X i=1 I(1) i where I(1)(U) is the \ufb01rst-order group in\ufb02uence function and I(1) i is the \ufb01rst-order in\ufb02uence for the ith sample in U. On the other hand, our proposed second-order group in\ufb02uence function has the following form: I(2)(U) = I(1)(U) + I \u2032(U) where I \u2032(U) captures informative cross-dependencies among samples in the group and is a function of gradient vectors and the Hessian matrix evaluated at the optimal model parameters. We present a more precise statement of this result in Theorem 1. We note that the proposed secondorder in\ufb02uence function can be computed ef\ufb01ciently even for large models. We discuss its computational complexity in Section 6. Our analysis shows that the proposed second-order in\ufb02uence function captures model changes ef\ufb01ciently even when the size of the groups are relatively large or the changes to the model parameters are signi\ufb01cant as in the case of groups with similar properties. For example, in an MNIST classi\ufb01cation problem using logistic regression, when 50% of the training samples are removed, the correlation between the ground truth estimate and second-order in\ufb02uence values improves by over 55% when compared to the existing \ufb01rstorder in\ufb02uence values. 
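The cross-dependency issue can be seen even in a toy least-squares problem. The following self-contained snippet (ours, not from the paper) shows that the parameter change caused by removing two nearly duplicate training points is not the sum of the changes caused by removing each point alone:

```python
# Toy illustration: group effects on least-squares parameters are not additive.
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(50, 3))
X[1] = X[0] + 0.01 * rng.normal(size=3)          # points 0 and 1 are nearly duplicates
y = X @ np.array([1.0, -2.0, 0.5]) + 0.1 * rng.normal(size=50)

def fit(mask):
    return np.linalg.lstsq(X[mask], y[mask], rcond=None)[0]

full = np.ones(len(X), dtype=bool)
theta = fit(full)

def delta(remove_idx):
    mask = full.copy()
    mask[list(remove_idx)] = False
    return fit(mask) - theta

d0, d1, d01 = delta([0]), delta([1]), delta([0, 1])
print(np.linalg.norm(d01 - (d0 + d1)))           # nonzero in general: the cross term is real
```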
We note that higher-order in\ufb02uence functions have been used in statistics (James, Lingling, Eric, and van der Vaart, 2017) for point and interval estimates of non-linear functionals in parameteric, semi-parametric and non-parametric models. However, to the best of our knowledge, this is the \ufb01rst time, higher-order in\ufb02uence functions are used for the interpretability task in the machine learning community. Similar to (Koh and Liang, 2017) and (Koh et al., 2019a), our main results for the second-order in\ufb02uence functions hold for linear prediction models where the underlying optimization is convex. However, we also additionally explore effectiveness of both \ufb01rst-order and second-order group in\ufb02uence functions in the case of deep neural networks. We observe that none of the methods provide good estimates of the ground-truth in\ufb02uence across different groups 1. In summary, we make the following contributions: \u2022 We propose second-order group in\ufb02uence functions that consider cross dependencies among the samples in the considered group. 1Note that experiments of (Koh and Liang, 2017) focus only on the most in\ufb02uential individual training samples. \u2022 Through several experiments over linear models, across different sizes and types of groups, we show that the second-order in\ufb02uence estimates have higher correlations with the ground truth when compared to the \ufb01rst-order ones, especially when the changes to the underlying model is relatively large. \u2022 We also show that our proposed second-order group in\ufb02uence function can be used to improve the selection of the most in\ufb02uential training group. 2. Related Works In\ufb02uence functions, a classical technique from robust statistics introduced by (Cook and Weisberg, 1980; Cook and Sanford, 1982) were \ufb01rst used in the machine learning community for interpretability by (Koh and Liang, 2017) to approximate the effect of upweighting a training point on the model parameters and test-loss for a particular test sample. In the past few years, there has been an increase in the applications of in\ufb02uence functions for a variety of machine learning tasks. (Schulam and Saria, 2019) used in\ufb02uence functions to produce con\ufb01dence intervals for a prediction and to audit the reliability of predictions. (Wang, Ustun, and Calmon, 2019) used in\ufb02uence functions to approximate the gradient in order to recover a counterfactual distribution and increase model fairness, while (Brunet, Alkalay-Houlihan, Anderson, and Zemel, 2018) used in\ufb02uence functions to understand the origins of bias in word-embeddings. (Koh, Steinhardt, and Liang, 2019b) crafted stronger data poisoning attacks using in\ufb02uence functions. In\ufb02uence functions can also be used to detect extrapolation (Madras, Atwood, and D\u00e2 \u02d8 A\u00b4 ZAmour, 2019) in certain speci\ufb01c cases, validate causal inference models (Alaa and Van Der Schaar, 2019) and identify in\ufb02uential pre-training points (Chen, Si, Li, Chelba, Kumar, Boning, and Hsieh, 2020). In\ufb01nitesimal jackknife or the delta method are ideas closely related to in\ufb02uence functions for linear approximations of leave-oneout cross validation (Jaeckel, 1972; Efron, 1992). Recently a higher-order instance (Giordano, Jordan, and Broderick, 2019) of in\ufb01nitesimal jackknife (Jaeckel, 1972) was used to approximate cross-validation procedures. 
While their setting corresponding to approximations of leave-k-out retraining is relatively similar to our paper, our higher-order terms preserve the empirical weight distribution of the training data in the ERM and are derived from in\ufb02uence functions, while in (Giordano et al., 2019) instances of in\ufb01nitesimal jackknife is used. These differences lead to our higherorder terms being marginally different than the one proposed in (Giordano et al., 2019). Our proposed second-order approximation for group in\ufb02uence function is additionally backed by a thorough empirical study across different settings in the case of linear models which has not yet been explored in prior works. \fOn Second-Order Group In\ufb02uence Functions for Black-Box Predictions 3. Background We consider the classical supervised learning problem setup, where the task is to learn a function h (also called the hypothesis) mapping from the input space X to an output space Y. We denote the input-output pair as {x, y}. We assume that our learning algorithm is given training examples S := {zi = (xi, yi)}m i=1 drawn i.i.d from some unknown distribution P. Let \u0398 be the space of the parameters of considered hypothesis class. The goal is to select model parameters \u03b8 to minimize the empirical risk as follows: min \u03b8\u2208\u0398 L\u2205(\u03b8) := 1 |S| X z\u2208S \u2113(h\u03b8(z)), (1) where |S| = m, denotes the cardinality of the training set, the subscript \u2205indicates that the whole set S is used in training and \u2113is the associated loss function. We refer to the optimal parameters computed by the above optimization as \u03b8\u2217. Let \u2207\u03b8L\u2205(\u03b8) and H\u03b8\u2217= \u22072 \u03b8L\u2205(\u03b8) be the gradient and the Hessian of the loss function, respectively. First, we discuss the case where we want to compute the effect of an individual training sample z on optimal model parameters as well as the test predictions made by the model. The effect or in\ufb02uence of a training sample on the model parameters could characterized by removing that particular training sample and retraining the model again as follows: \u03b8\u2217 {z} = arg min \u03b8\u2208\u0398 L{z}(\u03b8) = 1 |S| \u22121 X zi\u0338=z \u2113(h\u03b8(zi)) (2) Then, we can compute the change in model parameters as \u2206\u03b8 = \u03b8\u2217 {z} \u2212\u03b8\u2217, due to removal of a training point z. However, re-training the model for every such training sample is expensive when |S| is large. In\ufb02uence functions based on \ufb01rst-order approximations introduced by (Cook and Weisberg, 1980; Cook and Sanford, 1982) was used by (Koh and Liang, 2017) to approximate this change. Upweighting a training point z by an in\ufb01nitesimal amount \u03f5 leads to a new optimal model parameters, \u03b8\u03f5 {z}, obtained by solving the following optimization problem: \u03b8\u03f5 {z} = arg min \u03b8\u2208\u0398 1 |S| X z\u2208S \u2113(h\u03b8(zi)) + \u03f5\u2113(h\u03b8(z)) (3) Removing a point z is similar to up-weighting its corresponding weight by \u03f5 = \u22121 |S|. The main idea used by (Koh and Liang, 2017) is to approximate \u03b8\u2217 {z} by minimizing the \ufb01rst-order Taylor series approximation around \u03b8\u2217. 
Following the classical result by (Cook and Weisberg, 1980), the change in the model parameters \u03b8\u2217on up-weighting z can be approximated by the in\ufb02uence function (Koh and Liang, 2017) denoted by I: I(z) = d\u03b8\u03f5 {z} d\u03f5 |\u03f5=0 = \u2212H\u22121 \u03b8\u2217\u2207\u03b8\u2113(h\u03b8\u2217(z)) (4) A detailed proof can be found in (Koh and Liang, 2017). Using the given formulation, we can track the change with respect to any function of \u03b8\u2217. The change in the test loss for a particular test point zt when a training point z is upweighted can be approximated as a closed form expression: I(z, zt) = \u2212\u2207\u03b8\u2113(h\u03b8\u2217(zt))T H\u22121 \u03b8\u2217\u2207\u03b8\u2113(h\u03b8\u2217(z)) (5) This result is based on the assumption (Koh and Liang, 2017) that the loss function L(\u03b8) is strictly convex in the model parameters \u03b8 and the Hessian H\u03b8\u2217is therefore positive-de\ufb01nite. This approximation is very similar to forming a quadratic approximation around the optimal parameters \u03b8\u2217and taking a single Newton step. However explicitly computing H\u03b8\u2217 and it\u2019s inverse H\u22121 \u03b8\u2217is not required. Using the Hessianvector product rule (Pearlmutter, 1994) in\ufb02uence functions can be computed ef\ufb01ciently. 4. Group In\ufb02uence Function Our goal in this section is to understand how the model parameters would change if a particular group of samples was up-weighted from the training set. However, up-weighting a group can lead to large perturbations to the training data distribution and therefore model parameters, which does not follow the small perturbation assumption of the \ufb01rst-order in\ufb02uence functions. In this section, we extend in\ufb02uence functions using second-order approximations to better capture changes in model parameters due to up-weighting a group of training samples. In Section 5, we show that our proposed second-order group in\ufb02uence function can be used in conjunction with optimization techniques to select the most in\ufb02uential training groups in a test prediction. The empirical risk minimization (ERM) when we remove U samples from training can be written as: LU(\u03b8) = 1 |S| \u2212|U| X z\u2208S\\U \u2113(h\u03b8(z)) (6) To approximate how optimal solution of this optimization is related to \u03b8\u2217, we study the effect of up-weighting a group of training samples on model parameters. Note that in this case, updated weights should still be a valid distribution, i.e. if a group of training samples has been up-weighted, the rest of samples should be down-weighted to preserve the sum to one constraint of weights in the ERM formulation. In the individual in\ufb02uence function case (when the size of the group is one), up-weighting a sample by \u03f5 leads to down-weighting other samples by \u03f5/(m \u22121) whose effect can be neglected similar to the formulation of (Koh and Liang, 2017). In our formulation for the group in\ufb02uence function, we assume that the weights of samples in the set U has been up-weighted all by \u03f5 and use p = |U| |S| to denote the fraction of up-weighted training samples. This \fOn Second-Order Group In\ufb02uence Functions for Black-Box Predictions leads to a down-weighting of the rest of training samples by \u02dc \u03f5 = |U| |S|\u2212|U|\u03f5, to preserve the empirical weight distributioxn of the training data. This is also important in order to have a fair comparison with the ground-truth leave-out-retraining estimates. 
Therefore, the resulting ERM can be written as: \u03b8\u03f5 U = arg min \u03b8 L\u03f5 U(\u03b8) where L\u03f5 U(\u03b8) = 1 |S| \u0010 X z\u2208S\\U (1 \u2212\u02dc \u03f5)\u2113(h\u03b8(z)) (7) + X z\u2208U (1 + \u03f5)\u2113(h\u03b8(z)) \u0011 . Or equivalently In the above formulation, if \u03f5 = 0 we get the original loss function L\u2205(\u03b8) (where none of the training samples are removed) and if \u03f5 = \u22121, we get the loss function LU(\u03b8) (where samples are removed from training). Let \u03b8\u03f5 U denote the optimal parameters for L\u03f5 U minimization. Essentially we are concerned about the change in the model parameters (i.e. \u2206\u03b8 = \u03b8\u03f5 U \u2212\u03b8\u2217) when each training sample in a group of size |U| is upweighted by a factor of \u03f5. The key step of the derivation is to expand \u03b8\u03f5 U around \u03b8\u2217(the minimizer of L0 U(\u03b8), or L\u2205(\u03b8)) with respect to the order of \u03f5, the upweighting parameter. In order to do that, we use the perturbation theory (Avrachenkov, Filar, and Howlett, 2013) to expand \u03b8\u03f5 U around \u03b8\u2217. Frequently used in quantum mechanics and also in other areas of physics such as particle physics, condensed matter and atomic physics, perturbation theory \ufb01nds approximate solution to a problem (\u03b8\u03f5 U) by starting from the exact solution of a closely related and simpler problem (\u03b8\u2217). As \u03f5 gets smaller and smaller, these higher order terms become less signi\ufb01cant. However, for large model perturbations (such as the case of group in\ufb02uence functions), using higher-order terms can reduce approximation errors signi\ufb01cantly. The following perturbation series forms the core of our derivation for second-order in\ufb02uence functions: \u03b8\u03f5 U \u2212\u03b8\u2217= O(\u03f5)\u03b8(1) + O(\u03f52)\u03b8(2) + O(\u03f53)\u03b8(3) + \u00b7 \u00b7 \u00b7 (8) where \u03b8(1) characterizes the \ufb01rst-order (in \u03f5) perturbation vector of model parameters while \u03b8(2) is the second-order (in \u03f5) model perturbation vector. We hide the dependencies of these perturbation vectors to constants (such as |U|) with the O(.) notation. In the case of computing in\ufb02uence of individual points, as shown by (Koh and Liang, 2017), the scaling of \u03b8(1) is in the order of 1/|S| while the scaling of the second-order coef\ufb01cient is 1/|S|2 which is very small when S is large. Thus, in this case, the second-order term can be ignored. In the case of computing the group in\ufb02uence, the second-order coef\ufb01cient is in the order of |U|2/|S|2, which can be large when the size of U is large. Thus, in our de\ufb01nition of the group in\ufb02uence function, both \u03b8(1) and \u03b8(2) are taken into account. 
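As a concrete back-of-the-envelope illustration (ours, not from the paper): with |S| = 10,000 training points, the second-order coefficient is
\[
\Big(\tfrac{1}{|S|}\Big)^{2} = 10^{-8} \quad \text{for a single up-weighted sample,}
\qquad\text{whereas}\qquad
\Big(\tfrac{|U|}{|S|}\Big)^{2} = 0.25 \quad \text{for a group of } |U| = 5{,}000,
\]
which is why the second-order term can be safely dropped in the single-sample case but not in the group case.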
The \ufb01rst-order group in\ufb02uence function (denoted by I(1)) when all the samples in a group U are up-weighted by \u03f5 can be de\ufb01ned as: I(1)(U) = \u2202\u03b8\u03f5 U \u2202\u03f5 |\u03f5=0 = \u2202(\u03b8\u2217+ O(\u03f5)\u03b8(1) + O(\u03f52)\u03b8(2)) \u2202\u03f5 |\u03f5=0 = \u03b8(1) To capture the dependency of the terms in O(\u03f52), on the group in\ufb02uence function, we de\ufb01ne I \u2032 as follows: I \u2032(U) = \u22022\u03b8\u03f5 U \u2202\u03f52 |\u03f5=0 = \u22022(\u03b8\u2217+ O(\u03f5)\u03b8(1) + O(\u03f52)\u03b8(2)) \u2202\u03f52 |\u03f5=0 = \u03b8(2) Although one can consider even higher-order terms, in this paper, we restrict our derivations up to the second-order approximations of the group in\ufb02uence function. We now state our main result in the following theorem: Theorem 1. If the third-derivative of the loss function at \u03b8\u2217is suf\ufb01ciently small, the second-order group in\ufb02uence function (denoted by I(2)(U)) when all samples in a group U are up-weighted by \u03f5 is: I(2)(U) = I(1)(U) + I \u2032(U) (9) where: I(1)(U) = \u2212 1 1 \u2212p 1 |S|H\u22121 \u03b8\u2217 X z\u2208U \u2207\u2113(h\u03b8\u2217(z)) and I \u2032(U) = p 1 \u2212p \u0010 I \u2212(\u22072L\u2205(\u03b8\u2217))\u22121 1 |U| X z\u2208U \u22072\u2113(h\u03b8\u2217(z)) \u0011 \u03b8(1) This result is based on the assumption that the third-order derivatives of the loss function at \u03b8\u2217is small. For the quadratic loss, the third-order derivatives of the loss are zero. Our experiments with the cross-entropy loss function indicates that this assumption approximately holds for the classi\ufb01cation problem as well. Below, we present a concise sketch of this result. Proof Sketch. We now derive \u03b8(1) and \u03b8(2) to be used in the second order group in\ufb02uence function I(2)(U). As \u03b8\u03f5 U is the optimal parameter set for the interpolated loss function \fOn Second-Order Group In\ufb02uence Functions for Black-Box Predictions Figure 1. Comparison of \ufb01rst-order and second-order group in\ufb02uences in case of synthetic dataset with 10,000 samples using logistic regression for a mis-classi\ufb01ed test point. Across different sizes of groups which were randomly selected, it can be observed that the second-order in\ufb02uence values are more correlated with the ground truth than that of the \ufb01rst-order ones. The green line highlights the y = x line. L\u03f5 U(\u03b8), due to the \ufb01rst-order stationary condition, we have the following equality: 0 = \u2207L\u03f5 U(\u03b8\u03f5 U) =\u2207L\u2205(\u03b8\u03f5 U) (10) + 1 |S|(\u2212\u02dc \u03f5 X z\u2208S\\U +\u03f5 X z\u2208U )\u2207\u2113(h\u03b8\u03f5 U (z)) The main idea is to use Taylor series for expanding \u2207L\u2205(\u03b8\u03f5 U) around \u03b8\u2217along with the perturbation series de\ufb01ned in Equation (8) and compare the terms of the same order in \u03f5: \u2207L\u2205(\u03b8\u03f5 U) = \u2207L\u2205(\u03b8\u2217) + \u22072L\u2205(\u03b8\u2217)(\u03b8\u03f5 U \u2212\u03b8\u2217) + . . . (11) Similarly, we expand \u2207\u2113(h\u03b8\u03f5 U (z)) around \u03b8\u2217using Taylor series expansion. To derive \u03b8(1) we compared terms with the coef\ufb01cient of O(\u03f5) in Equation (10) and for \u03b8(2) we compared terms with coef\ufb01cient O(\u03f52). 
Based on this, \u03b8(1) can be written in the following way: \u03b8(1) = \u2212 1 1 \u2212p 1 |S|H\u22121 \u03b8\u2217 X z\u2208U \u2207\u2113(h\u03b8\u2217(z)) (12) We expand Equation(10) and compare the terms with coef\ufb01cient O(\u03f5): \u03f5\u22072L\u2205(\u03b8\u2217)\u03b8(1) = 1 |S|(\u02dc \u03f5 X z\u2208S\\U \u2212\u03f5 X z\u2208U )\u2207\u2113(h\u03b8\u2217(z)) = \u02dc \u03f5\u2207L\u2205(\u03b8\u2217) \u22121 |S|(\u02dc \u03f5 + \u03f5) X z\u2208U \u2207\u2113(h\u03b8\u2217(z)) = \u22121 |S|(\u02dc \u03f5 + \u03f5) X z\u2208U \u2207\u2113(h\u03b8\u2217(z)) = \u22121 |S| 1 (1 \u2212p)\u03f5 X z\u2208U \u2207\u2113(h\u03b8\u2217(z)) (13) \u03b8(1) is the \ufb01rst-order approximation of group in\ufb02uence function and can be denoted by I(1). Note that our \ufb01rst-order approximation of group in\ufb02uence function I(1), is slightly different from (Koh et al., 2019a) with an additional 1 \u2212p in the denominator. For \u03b8(2) we compare the terms with coef\ufb01cients of the same order of O(\u03f52) in Equation (10): \u03f52\u22072L\u2205(\u03b8\u2217)\u03b8(2) + 1 2L \u2032\u2032\u2032 \u2205(\u03b8\u2217)[\u03f5\u03b8(1), \u03f5\u03b8(1), I] + 1 |S|(\u2212\u02dc \u03f5 X S\\U +\u03f5 X U )\u22072\u2113(h\u03b8\u2217(z))(\u03f5\u03b8(1)) = 0 (14) For the \u03b8(2) term, we ignore the third-order term 1 2L \u2032\u2032\u2032 \u2205(\u03b8\u2217)[\u03f5\u03b8(1), \u03f5\u03b8(1), I] due to it being small. Now we substitute the value of \u02dc \u03f5 and equate the terms with coef\ufb01cient in the order of O(\u03f52): \u22072L\u2205(\u03b8\u2217)\u03b8(2) = |U| |S| \u2212|U| \u0010 1 |S| X z\u2208S \u22072\u2113(h\u03b8\u2217(z)) (15) \u22121 |U| X z\u2208U \u22072\u2113(h\u03b8\u2217(z)) \u0011 \u03b8(1) Rearranging the Equation (15), we get the same identity as I \u2032 in Theorem (1). It can be observed that the additional term (I \u2032) in our secondorder approximation captures cross-dependencies among the samples in U through a function of gradients and Hessians of the loss function at the optimal model parameters. This makes the second-order group in\ufb02uence function to be more informative when training samples are correlated. In Section \fOn Second-Order Group In\ufb02uence Functions for Black-Box Predictions (7), we empirically show that the addition of I \u2032 improves correlation with the ground truth in\ufb02uence as well. For tracking the change in the test loss for a particular test point zt when a group U is removed, we use the chain rule to compute the in\ufb02uence score as follows: I(2)(U, zt) = \u2207\u2113(h\u03b8\u2217(zt))T \u0010 I(1)(U) + I \u2032(U) \u0011 (16) Our second-order approximation of group in\ufb02uence function consists of a \ufb01rst-order term that is similar to the one proposed in (Koh et al., 2019a) with an additional scaling term 1/(1 \u2212p). This scaling is due to the fact that our formulation preserves the empirical weight distribution constraint in ERM, which is essential when a large group is up-weighted. The second-order in\ufb02uence function has an additional term I \u2032 that is directly proportional to p and captures large perturbations to the model parameters more effectively. 5. Selection of In\ufb02uential Groups In this section, we explain how the second-order group in\ufb02uence function can be used to select the most in\ufb02uential group of training samples for a particular test prediction. 
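Before turning to the selection procedure, a minimal sketch of how the quantities in Theorem 1 and Equation (16) might be assembled in code is given below. This is our own illustration rather than the authors' implementation; `grad_fn`, `hvp_fn`, and `ihvp` are assumed helpers returning the flattened per-sample gradient, a per-sample Hessian-vector product, and an (approximate) product of the inverse training-loss Hessian with a vector, all evaluated at the optimal parameters.

```python
# Sketch of the second-order group influence on a test loss (Theorem 1 and Eq. (16)):
#   theta1 = I^(1)(U) = -1/((1-p)|S|) * H^{-1} sum_{z in U} grad l(z)
#   I'(U)             =  p/(1-p) * ( theta1 - H^{-1} * (1/|U|) sum_{z in U} hess l(z) theta1 )
#   score             =  grad l(z_test)^T ( I^(1)(U) + I'(U) )
def second_order_group_influence(grad_fn, hvp_fn, ihvp, group, z_test, S_size):
    p = len(group) / S_size
    g_sum = sum(grad_fn(z) for z in group)
    theta1 = -(1.0 / ((1.0 - p) * S_size)) * ihvp(g_sum)          # first-order term
    avg_group_hvp = sum(hvp_fn(z, theta1) for z in group) / len(group)
    theta2 = (p / (1.0 - p)) * (theta1 - ihvp(avg_group_hvp))     # cross-dependency term
    return grad_fn(z_test) @ (theta1 + theta2)                    # Eq. (16)
```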
In case of the existing \ufb01rst-order approximations for group in\ufb02uence functions, selecting the most in\ufb02uential group can be done greedily by ranking the training points with the highest individual in\ufb02uence since the group in\ufb02uence is the sum of in\ufb02uence of the individual points. However, with the second-order approximations such greedy selection is not optimal since the group in\ufb02uence is not additive in terms of the in\ufb02uence of individual points. To deal with this issue, we \ufb01rst decompose the second-order group in\ufb02uence function I(2)(U, zt) into two terms as: \u2207\u2113(h\u03b8\u2217(zt))T n 1 |S| 1 \u22122p (1 \u2212p)2 H\u22121 \u03b8\u2217 X z\u2208U \u2207\u2113(h\u03b8\u2217(z)) | {z } T erm1 + 1 (1 \u2212p)2 1 |S|2 X z\u2208U H\u22121 \u03b8\u2217\u22072\u2113(h\u03b8\u2217(z))H\u22121 \u03b8\u2217 X z\u2032\u2208U \u2207\u2113(h\u03b8\u2217(z\u2032)) o | {z } T erm2 (17) where H\u03b8\u2217= \u22072L\u2205(\u03b8\u2217). While Term1 is additive with respect to the samples and Term2 has pairwise dependencies among samples. To simplify notation, we de\ufb01ne the constant vector \u2207\u2113(h\u03b8\u2217)(zt)T H\u22121 \u03b8\u2217as v1. Ideally for a given \ufb01xed group of size k, we want to \ufb01nd k training samples amongst the total m training samples which maximizes the in\ufb02uence for a given test point zt. We can de\ufb01ne this in the form of a quadratic optimization problem as follows: max w c1wT a + c2wT Bw (18) s.t. \u2225w\u22250 \u2264k where B is composed of two matrices C and D i.e. B = CD. w contains the weights associated with each sample in the training set. The entries of a contain vT 1 \u2207\u2113(h\u03b8\u2217(zi)) \u2200i \u2208[1, m] and the rows of C contain vT 1 \u22072\u2113(h\u03b8\u2217(zi))H\u22121 \u03b8\u2217\u2200i \u2208[1, m]. In case of D, the columns contain \u2207\u2113(h\u03b8\u2217(zi)) \u2200i \u2208[1, m]. We de\ufb01ne the constant 1 |S| 1\u22122p (1\u2212p)2 as c1 and 1 (1\u2212p)2 1 |S|2 as c2. This optimization can be relaxed using the L0 \u2212L1 relaxation as done in applications of compressed sensing (Donoho, 2006; Candes and Tao, 2005; Ramirez, 2013). The relaxed optimization can then be solved ef\ufb01ciently using the projected gradient descent as denoted in (Liu and Ye, 2009; Duchi, Shalev-Shwartz, Singer, and Chandra, 2008). 6. Computational Complexity For models with a relatively large number of parameters, computing the inverse of the Hessian H\u22121 \u03b8\u2217can be expensive and is of the order of O(n3). However, computing the Hessian-vector product (Pearlmutter, 1994) is relatively computationally inexpensive. In our experiments similar to (Koh and Liang, 2017; Koh et al., 2019a; Chen et al., 2020), we used conjugate gradients (a second-order optimization technique) (Shewchuk, 1994) to compute the inverse Hessian-vector product which uses a Hessian-vector product in the routine thus saving the expense for inverting the Hessian directly. The proposed second-order group in\ufb02uence function can be computed similarly to the \ufb01rst-order group in\ufb02uence functions with only an additional step of Hessian-vector product. 7. Experiments 7.1. Setup Our goal through the experiments is to observe if the secondorder approximations of group in\ufb02uence functions improve the correlation with the ground truth estimate across different settings. 
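For concreteness, the conjugate-gradient route described in Section 6, which underlies the influence scores evaluated in the experiments below, can be sketched as follows. This is our own illustration; `make_loss` stands in for a routine evaluating the (regularized) training loss, and a small damping term keeps the linear system well conditioned.

```python
# Sketch: inverse-Hessian-vector product via conjugate gradients, using only
# Hessian-vector products (Pearlmutter, 1994) -- the Hessian is never formed explicitly.
import torch

def flat_grad(output, params, create_graph=False):
    grads = torch.autograd.grad(output, params, create_graph=create_graph)
    return torch.cat([g.reshape(-1) for g in grads])

def cg_inverse_hvp(make_loss, params, v, damping=1e-2, iters=100, tol=1e-6):
    """Solve (H + damping*I) x = v with conjugate gradients."""
    def Ax(x):
        g = flat_grad(make_loss(), params, create_graph=True)
        return flat_grad(g @ x, params) + damping * x

    x = torch.zeros_like(v)
    r = v - Ax(x)                      # initial residual
    d = r.clone()
    rs = r @ r
    for _ in range(iters):
        Ad = Ax(d)
        alpha = rs / (d @ Ad)
        x = x + alpha * d
        r = r - alpha * Ad
        rs_new = r @ r
        if rs_new.sqrt() < tol:
            break
        d = r + (rs_new / rs) * d
        rs = rs_new
    return x
```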
We compare the computed second-order group in\ufb02uence score with the ground truth in\ufb02uence (which is computed by leave-k-out retraining for a group with size k). Our metric for evaluation is the Pearson correlation which measures how linearly the computed in\ufb02uence and the actual ground truth estimate are related. We perform our experiments primarily on logistic regression where the group in\ufb02uence function is well-de\ufb01ned. Additionally we also check the accuracy of \ufb01rst-order and second-order group in\ufb02uence functions in case of neural networks. \fOn Second-Order Group In\ufb02uence Functions for Black-Box Predictions Figure 2. Group size vs the correlation with the ground truth on MNIST for logistic regression with random groups (left panel) and coherent groups (right panel). 7.2. Datasets To understand the accuracy of both \ufb01rst-order and secondorder group in\ufb02uence functions on linear models we use two datasets. In our \ufb01rst experiments, we use a synthetic dataset along with logistic regression. The synthetic dataset has 10,000 points drawn from a Gaussian distribution, consisting of 5 features and 2 classes. The details for the synthetic data can be found in the Appendix. The second set of experiments are done with the standard handwritten digits database MNIST (LeCun, Bottou, Bengio, and Haffner, 1998) which consists of 10 classes of different digits. For understanding how group in\ufb02uence functions behave in case of the neural networks we use the MNIST dataset. For each of the two datasets, we pick random groups as well coherent groups as in (Koh et al., 2019a) with sizes ranging from 1.6% to 60% of the entire training points. The computed group in\ufb02uence was primarily investigated for a test-point which was misclassi\ufb01ed by the model. A detailed description of how the groups were selected in our experiments is given in the Appendix. For the optimal group selection we used a synthetic dataset consisting of 20,000 training points consisting of 5 features in the form of 4 isotropic Gaussian blobs. 7.3. Observations and Analysis 7.3.1. LINEAR MODELS For logistic regression, the general observation for the randomly selected groups was that the second-order group in\ufb02uence function improves the correlation with the ground truth estimates across different group sizes in both the synthetic dataset as well as MNIST. For the synthetic dataset, in Figure (1), it can be observed that the approximation provided by the second-order group in\ufb02uence function is fairly close to the ground truth when a large fraction of the training data (60 %) is removed. In such cases of large group sizes, the \ufb01rst-order approximation of group in\ufb02uence function is relatively inaccurate and far from the ground truth in\ufb02uence. This observation is consistent with the small perturbation assumption of \ufb01rst-order in\ufb02uence functions. However, in cases of smaller group sizes, although the second-order approximation improves over existing \ufb01rst-order group in\ufb02uence function, the gain in correlation is small. In case of MNIST, the observation was similar where the gain in correlation was signi\ufb01cant when the size of the considered group was large. For e.g. it can seen in Figure (2), that when more than 36% of the samples were removed, the gain in correlation is almost always more than 40%. 
While the improvement in correlation for larger group sizes is consistent with our theory that the second-order approximation is effective in the case of large changes to the model, the gain in correlation is non-monotonic with respect to the group size. For groups of small size, selected uniformly at random, the model parameters do not change significantly and the second-order approximation improves only marginally over the existing first-order approximation. However, when a coherent group (a group having training examples from the same class) of even a relatively small size is removed, the perturbation to the model is larger (as the model parameters can change significantly in a particular direction) than if a random group is removed. In such settings, we observe that even for small group sizes, the second-order approximation consistently improves the correlation with the ground truth significantly (Figure (2)). For coherent groups, across different group sizes of the MNIST dataset, we observed an improvement in correlation when the second-order approximation was used. Across different group sizes we observed that the gain in correlation is at least 15%. These observations (shown in Figure (2)) reinforce our theory that second-order (or, more generally, higher-order) approximations of influence functions are particularly effective when the perturbation to the model parameters is large. The second-order approximation of the influence function could thus be used instead of existing first-order approximations for practical purposes such as understanding the behaviour of training groups with similar properties (e.g. demographic groups) on model predictions, without the need to actually retrain the model. 7.3.2. NEURAL NETWORKS In the case of neural networks, the Hessian is not positive semi-definite in general, which violates the assumptions of influence functions. Previously, (Koh and Liang, 2017) regularized the Hessian in the form H\u03b8\u2217 + \u03bbI, and showed that for the top few influential training points (not groups) and for a given test point, the correlation with the ground truth influence is still satisfactory, if not highly significant. However, how influence functions behave in the case of groups is a topic not yet well explored. For MNIST, we used a regularized Hessian with \u03bb = 0.01 and conducted experiments on a relatively simple feed-forward network with two hidden layers and sigmoid activations, for both first-order and second-order group influence functions. The general observation was that both the existing first-order and the proposed second-order group influence functions underestimate the ground truth influence values across different group sizes, leading to a non-significant correlation. The corresponding figure can be found in the Appendix. However, we observed that while the second-order influence values still suffer from the underestimation issue, they improve the correlation marginally across different group sizes. This observation was consistent for both random and coherent group selections. Figure 3. Optimal group selection on synthetic data. 7.3.3.
INFLUENTIAL GROUP SELECTION In order to validate the selection of the most influential group through the second-order approximation of the influence function, we performed an experiment with logistic regression (where both first-order and second-order influence function estimates are fairly close to the ground truth) on a synthetic dataset. Across different group sizes we compared the group influence (computed through the second-order approximation with Equation (18)) with the first-order influence computed greedily for a particular group size, and with the mean influence of randomly selected groups across 100 group-sampling iterations. In our experiments we relaxed the L0 norm to the L1 norm and solved the projected gradient descent step of the optimization in Equation (18) using (Duchi et al., 2008). We observed that the optimal group selection procedure led to groups having relatively higher influence, computed with the second-order approximation, when compared to the greedy first-order influence and to randomly selected groups, for group sizes ranging from 10% to 30% of the total training samples. Specifically, the optimal group influence was significantly higher than the greedy first-order group influence when the group sizes were relatively large. The selection procedure could be used in practice to detect the most relevant subset of training examples that impacts a particular test-time decision of a given machine learning model when the second-order influence function is used." + }, + { + "url": "http://arxiv.org/abs/1910.04257v1", + "title": "Membership Model Inversion Attacks for Deep Networks", + "abstract": "With the increasing adoption of AI, inherent security and privacy vulnerabilities of machine learning systems are being discovered. One such vulnerability makes it possible for an adversary to obtain private information about the types of instances used to train the targeted machine learning model. This so-called model inversion attack is based on sequential leveraging of classification scores towards obtaining high confidence representations for various classes. However, for deep networks, such procedures usually lead to unrecognizable representations that are useless for the adversary. In this paper, we introduce a more realistic definition of model inversion, where the adversary is aware of the general purpose of the attacked model (for instance, whether it is an OCR system or a facial recognition system), and the goal is to find realistic class representations within the corresponding lower-dimensional manifold (of, respectively, general symbols or general faces). To that end, we leverage properties of generative adversarial networks for constructing a connected lower-dimensional manifold, and demonstrate the efficiency of our model inversion attack that is carried out within that manifold.", + "authors": "Samyadeep Basu, Rauf Izmailov, Chris Mesterharm", + "published": "2019-10-09", + "updated": "2019-10-09", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "stat.ML" + ], + "main_content": "Introduction The last decade witnessed rapid and significant progress in developing and applying deep learning techniques. At the same time, various concerns about the security of deployed machine learning models have increased as well.
It has been already shown [11] that a small visually imperceptible perturbation of an image can cause a deep neural network classify it incorrectly and with high con\ufb01dence. Besides these adversarial attacks, another type of security threats in the form of membership attacks was discovered recently: it was shown [10] that an adversary can identify if a given sample was used during the training phase of the targeted machine learning model, thus endangering the privacy of training data. Yet another type of privacy threat is model inversion attack: it has been shown [3] that an adversary can recover typical representations of speci\ufb01c target classes by leveraging con\ufb01dence scores of machine learning model. So far, model inversion attacks have been mostly successful against shallow machine linear models such as SVM and logistic regression; however, for deep neural networks, model inversion attacks usually return but unrecognizable solutions [10] that are useless for the adversary. Preprint. Under review. arXiv:1910.04257v1 [cs.LG] 9 Oct 2019 \fIn this paper, we focus on the white-box model inversion attack where the adversary has access to the model and attempts to generate representative data similar to training instances (we call them representative samples) of individual classes. In the most general form of model inversion attack, there is no additional information about the type of the problem that the targeted model is trained to classify. This, however, appears to be an excessively strict assumption since the adversary would have no way to interpret the multi-dimensional solution vector that can be obtained as a result of such attack. Instead, we assume the adversary has some general knowledge of the problem, and we exploit that general information in order to guide the search for representative samples. For instance, the attacker might know that the input is an image with speci\ufb01c dimensions that is used by the targeted machine learning application such as optical character recognition (OCR) or facial recognition. This is a rather realistic assumption since the attacker have to know how to interpret the model inputs. The model inversion goal is thus to learn speci\ufb01c details of the system such as what individual symbols are used for the targeted OCR application or what faces can be correctly identi\ufb01ed by the targeted facial recognition security application. The direct search in the (very) high-dimensional input space (i.e., without any additional knowledge about the problem) is usually an ill-posed problem [7]. However, with additional information about the problem, we can constrain the search to a smaller-dimensional manifold that likely contains the training data. According to the manifold assumption theory [13], many data sets belong to a group of disconnected low-dimensional manifolds. However, if these manifolds can be linked with each other, we can search in the resulting connected manifold. Given that the adversary knows the general type of problem, an appropriate generative adversarial network (GAN) can also be generated, which would model a connected manifold structure [6]. For example, the manifold can be generated to attack an OCR system by creating a GAN using characters from various languages and sets of symbols; for attacking a facial recognition system, a diverse set of faces can be downloaded from Internet. 
By connecting the output of the GAN to the input of the model, various optimization techniques can be used to search for manifold instances that maximize label con\ufb01dence values. 2 Method A Generative Adversarial Network, introduced by [4], is a min-max game between two neural networks: generator (G\u03b8) and discriminator (D\u03c6). The generator G\u03b8 takes random noise z as input and generates G\u03b8(z). The discriminator D\u03c6 distinguishes between real samples (x) and fake samples coming from G\u03b8. The objective function for the min-max game between G\u03b8 and D\u03c6 is min \u03b8 max \u03c6 Ex\u223cP (x)[log(D\u03c6(x)] + Ez\u223cP (z)[1 \u2212log(D\u03c6(G\u03b8(z))]. (1) In (1), Px is the real data distribution, and Pz is a noise distribution which is typically a uniform distribution or a normal distribution. Previous research has shown that real images have probability distributions (Px) on low-dimensional manifolds [13] embedded in a high-dimensional space. Intuitively, suf\ufb01ciently different images should belong to their own disconnected manifolds without any paths of \u201cblended\u201d images between them. However, in case of a GAN, the generator function maps from a connected distribution space, like the uniform distribution, to all possible outputs, which results in a connected output set of instances. This is a typical drawback of GANs and various techniques to partition the input into disjoint support sets have been used [6] to address this issue. However, our approach actually leverages this drawback in order to search in the low-dimensional but connected latent space of the GAN set instead of the high-dimensional space Px of all possible images. Details of our approach are presented in Appendix. A direct solution of model inversion problem without the use of the GAN can be formulated as follows. Let f\u03b4 be the target neural network which is being attacked and y be the one-hot encoding vector representing the class, whose representative sample needs to be recovered. Let \u02c6 x = arg min x \u2113(f\u03b4(x), y) + \u03bbR(x), (2) where \u03bb is a regularization hyperparameter and R(x) is a regularization term which can be the \u2113p norm of the image. We modify this standard formulation in the following way to directly search in the latent GAN space: \u02c6 z = arg min z \u2113(f\u03b4(G\u03b8(z), y) + \u03bbR(z). (3) 2 \fFigure 1: Attack on MNIST classi\ufb01er without background knowledge: (Left) Retrieval of class \u201c5\u201d, (Middle) Retrieval of class \u201c6\u201d, (Right) Retrieval of class \u201c9\u201d. The \ufb01nal solution for the representative sample is given by \u02c6 x = G\u03b8(\u02c6 z). (4) Equation (3) can be solved using any quasi-Newton method like gradient descent or adaptive learning rate methods like Adam. 3 Experiments In our experiments, we assume general knowledge of the underlying machine learning system. For example, if an attacker targets an OCR system for determining what speci\ufb01c symbols the system was trained on, then a dataset comprising various characters can be constructed \ufb01rst and then used to train a GAN for creating a connected manifold structure (the constrained search space) from which representative samples of the target model will be recovered. We performed our preliminary experiments on two datasets: (1) Numeric MNIST with Arabic MNIST, and (2) Fashion MNIST Dataset [12]. 
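A minimal sketch of the latent-space search in Equation (3), optimized with Adam as mentioned above, is given below. It is our own illustration rather than the authors' code; `generator`, `target_model`, and the latent dimension are placeholders.

```python
# Sketch of Eq. (3): search the GAN latent space for a representative sample of class y,
#   z_hat = argmin_z CE(f(G(z)), y) + lambda * ||z||_p,   and return x_hat = G(z_hat).
import torch
import torch.nn.functional as F

def invert_class(generator, target_model, class_idx, latent_dim=100,
                 steps=2000, lr=0.05, lam=1e-3, p=2):
    z = torch.randn(1, latent_dim, requires_grad=True)
    opt = torch.optim.Adam([z], lr=lr)
    y = torch.tensor([class_idx])
    for _ in range(steps):
        opt.zero_grad()
        x = generator(z)                     # stay on the GAN's connected manifold
        loss = F.cross_entropy(target_model(x), y) + lam * z.norm(p=p)
        loss.backward()
        opt.step()
    return generator(z).detach()             # x_hat = G(z_hat), Eq. (4)
```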
In our experiments, we used a 2-layer feed-forward neural network with ReLU activation for the target model; for constructing the connected manifold, we used both standard GAN with feed-forward networks and DCGAN [9]. 3.1 Dataset 1: Numeric MNIST and Arabic MNIST In this case, the targeted deep neural network f\u03b4 has been trained with a subset of MNIST (namely 6 classes out of 10 classes). We curated a dataset comprising of numeric MNIST (10 classes) and Arabic MNIST (10 classes), which we used to train a GAN in order to create the connected manifold for the search procedure according to (3). The task is to identify representative samples from the 6 classes with which f\u03b4 was trained. Figure 1 shows the results obtained using an optimization search in the full image space. As expected, in this case, no representative samples were found. In contrast to Figure 1, Figure 2 shows some of our results obtained using our GAN-based technique: they can be clearly viewed by the adversary as reasonably representative samples of the attacked classes. 3.2 Dataset 2: Fashion MNIST In this set of experiments, we train a deep neural network with a subset of Fashion MNIST [12], namely 5 classes out of 10; more details on this dataset are presented in Appendix. We assume we have knowledge about all different types of clothes and footwear and, using the complete Fashion MNIST dataset, we construct the connected manifold using a GAN. The attack objective is to identify representative samples from the 5 classes on which the model was trained. Figure 3 illustrates the successful recovery of its classes using our approach. 3.3 Effect of Regularization and High Order Terms We further used the \u2113p norm regularization to improve the quality of the images. With \u2113p regularization, we solved (3) and evaluated our results for p ranging from 1 to 6. However, in our experiments we observed that regularization did not seem to affect the quality of retrieved samples in comparison to no regularization. We also report the results pertaining to the impact of high-order loss approximations in the Appendix. 3 \fFigure 2: Attack on MNIST classi\ufb01er with background knowledge: (Left) Retrieval of class \u201c0\u201d, (Middle) Retrieval of class \u201c1\u201d, (Right) Retrieval of class \u201c3\u201d. Figure 3: Attack on Fashion MNIST classi\ufb01er with background knowledge. (Left): Retrieval of Class \u201cT-shirts\u201d; (Middle) Retrieval of class \u201cCoats\u201d; (Right) Retrieval of class \u201cSneakers\u201d. 4" + } + ], + "Soheil Feizi": [ + { + "url": "http://arxiv.org/abs/2311.07601v3", + "title": "Online Advertisements with LLMs: Opportunities and Challenges", + "abstract": "This paper explores the potential for leveraging Large Language Models (LLM)\nin the realm of online advertising systems. We delve into essential\nrequirements including privacy, latency, reliability as well as the\nsatisfaction of users and advertisers that such a system must fulfill. We\nfurther introduce a general framework for LLM advertisement, consisting of\nmodification, bidding, prediction, and auction modules. Different design\nconsiderations for each module are presented. Fundamental questions regarding\npracticality, efficiency, and implementation challenges of these designs are\nraised for future research. 
Finally, we explore the prospect of LLM-based\ndynamic creative optimization as a means to significantly enhance the appeal of\nadvertisements to users and discuss its additional challenges.", + "authors": "Soheil Feizi, MohammadTaghi Hajiaghayi, Keivan Rezaei, Suho Shin", + "published": "2023-11-11", + "updated": "2024-04-18", + "primary_cat": "cs.CY", + "cats": [ + "cs.CY", + "cs.AI" + ], + "main_content": "Introduction In the vast landscape of online search engines, the role of advertisements has become pivotal, shaping the digital experience for users globally. The enormity of the market, with trillions of dollars at play, underscores the economic significance of advertising, e.g., the market size of search advertisement as of 2020 was valued at USD 164.12 billion (Insight, 2022). Online advertising not only brings revenue to platform companies, but also plays a crucial role in subsidizing the free access to information and service to every individual. The democratization of education, information, and other resources is made possible through ad-driven models, enabling individuals to not only consume content \u2217Alphabetical author ordering. \u2020University of Maryland College Park, MD USA. Email: (sfeizi,hajiagha,krezaei,suhoshin)@umd.edu for free but also contribute to the digital ecosystem. Additionally, the symbiotic relationship between advertisements and content creation creates a feedback loop, fostering economic growth. As a side note, even the subscription-based streaming platforms like Netflix are starting ad-supported plans (Netflix, 2022). On the other hand, recently, large language models (LLMs) (Brown et al., 2020; Anil et al., 2023; Thoppilan et al., 2022) have gained widespread adoption among users, serving various functions including question answering, content generation, translation, code completion, and more (Nijkamp et al., 2022; Fried et al., 2022; Wang and Komatsuzaki, 2021; Liu et al., 2023a). The proliferation of AI-driven assistant language models, such as ChatGPT, has contributed to a growing trend wherein individuals increasingly use such models to address their inquiries, occasionally replacing traditional search engines as their primary information-seeking tool. According to PCMag (2023), even for now, 35% of casual users say they find LLMs to be more helpful in finding information than search engine. It is obvious that such a trend will be accelerating in the near future as well. The substantial usage volumes stemming from diverse users would induce companies offering these tools, which we call LLM providers, to contemplate revenue generation through advertising (AdWeek, 2023; Crunch, 2023; Microsoft, 2023). Consequently, an interesting and fundamental question arises: How can LLM providers make revenue by running an online advertisement? The concept of online advertising has been extensively studied within the realm of search engines, where auctions are conducted among ad1 arXiv:2311.07601v3 [cs.CY] 18 Apr 2024 \fvertisements from advertisers when a user inputs a query. This paper focuses on the prospect of transposing this online advertising model and auction framework to the context of large language models. We further discuss technical challenges and potential framework to run online advertisement system in LLM, thereby calling academic and industrial researchers to the area of importance. Search advertising. 
To better explain the fundamental differences between standard search advertising (SA) and LLM advertising (LLMA), we briefly introduce how standard SA works (Lahaie et al., 2007) [Footnote 1: its mechanism design problem is often called the sponsored search auction (SSA)] in what follows. (1) Bidding: in SA, the owner of each ad $i$ writes a bid $b_i \in \mathbb{R}_{\ge 0}$ on a targeting keyword for $i \in [n]$, which can possibly be a set of keywords. (2) Output generation: the platform first decides how many slots to allocate for ads in the search engine results page (SERP), say $k$. (3) Prediction: given $k$ slots in the SERP, the platform then predicts the click-through-rate (CTR) $\alpha_{ij}$ when ad $i$ is allocated in slot $j$. (4) Auction: the platform then optimizes $\sum_{i=1}^{n} \sum_{j=1}^{k} \alpha_{ij} b_i x_{ij}$ (1.1), where $x_{ij} = 1$ if ad $i$ is allocated in slot $j$, subject to the constraint $\sum_{i=1}^{n} x_{ij} \le 1$ for every $j \in [k]$, and charges each ad according to a pre-committed payment rule. Overall, whenever a user arrives on the platform and searches a keyword, the set of ads related to that keyword participates in the mechanism. Specifically, the platform collects the corresponding bids, decides the number of slots $k$, predicts CTRs, and runs the auction. Motivating example. How would LLMA be fundamentally different from SA? We start with illustrative scenarios where a user asks a technical question about computers (Figure 1). Without advertisement, the LLM would typically generate a response to address the user's query. To incorporate advertisements into the generated response and induce user attention, there is a spectrum of possibilities for including ad content: (a) putting the ads outside the response but in the user interface, or (b) incorporating the ads within the generated output directly. Approach (a) can be deemed display advertising and would be relatively easy to handle given the vast amount of existing techniques for display ads. However, (b) is more similar to search ads or native ads.[2] We will focus on approach (b), which entails fundamental challenges that have not arisen in traditional SA.[3] SA versus LLMA. Recall the process of bidding, output generation, prediction, and auction in SA mentioned before, and imagine implementing those modules for LLMA. For the bidding module, how could the advertisers write bids? What would they actually bid for? This is not straightforward to answer, since a user query to an LLM cannot simply be specified as a keyword. Further, given that the marketing impact will depend significantly on how the LLM incorporates the ad in the output, it is not even clear what the advertiser's value for being included in the ad is. In fact, the advertiser's value would depend on the generated output; then again, how could the advertiser reflect their willingness-to-pay with respect to an output that might not be accessible in advance? Even further, how can we generate output that smoothly incorporates the ad without hurting the user experience while satisfying the advertiser? For the prediction module, most SA systems run an online learning algorithm to update the ad's feature vector with respect to the user context (McMahan et al., 2013). This was possible because the ad images, hyperlinks, and more generally how they [Footnote 2: typically, search and native ads capture user attention much better than display ads, e.g., viewed ~50% more than display ads (Outbrain, 2023).]
3Also, in case the ads are included within the generated output, there are several possibilities such that the ads may simply replace one of the element in case they are written as a list of elements (e.g., Figure 5, denoted by structured output), or the ads itself is incorporated within the original text/figure without hurting/changing the context of the original output (e.g., Figure 1, denoted by unstructured output). We focus on the case of unstructured output, as this would be broadly applicable in various scenarios, and the structured output may be able to handled with the standard SA framework. 2 \fOriginal LLM Output Prompt: How do CPU and RAM interact within a computer? Output: The CPU (Central Processing Unit) processes data and instructions, and RAM (Random Access Memory) provides fast, temporary storage for the data the CPU is actively working on, enabling quick access and manipulation of data, resulting in faster and more efficient computing. Modified Output Prompt: Can you promote laptop L of brand B in the previous response? Output: The CPU (Central Processing Unit) processes data and instructions, and RAM (Random Access Memory) provides fast, temporary storage for the data the CPU is actively working on, enabling quick access and manipulation of data, resulting in faster and more efficient computing. Notably, \"Laptop L\" from \"Brand B\" is optimized for this synergy, offering a powerful CPU and ample RAM, ensuring a seamless and high-performance computing experience. Figure 1: An example of providing unstructured advertisement in the LLM. Left column refers to the case where we ask above question from ChatGPT 4, and to incorporate the ads we use queries specified in the right column. appear in the SERP remain the same across many user interactions. LLM, however, would incorporate ad in a very different manner for each query, e.g., see Figure 4, then how can the LLMA learn the CTR fundamentally? Also, since the ads are merged into the generated output, it significantly affects the user experience, then how can we guarantee the user satisfaction and measure them? Finally, which kind of auction format should LLMA run? How can LLMA adapt for advertising multiple ads in a single output? All these questions are not straightforward to answer, yet, have not been formally discussed in any literature to the extent of our knowledge. Our contribution. In this paper, we introduce a vast amount of research questions to operate LLMA in practice. We first discuss several desiderata to run LLMA and propose a seemingly plausible framework to meet the requirements. Similar to SA, our framework consists of four modules, though the implementation of each module will be very different from SA: (i) modification in which the original output of LLM is modified; (ii) bidding that advertisers utilize to bid on the modified outputs; (iii) prediction in which LLMA computes required information about advertisements; and (iv) auction in which the advertises compete and the final output is selected. We discuss different designs and technical challenges of each of those modules, along with further research challenges to implement the system in a practical manner. 4 We correspondingly raise several research questions on different design choices for each module. Although we believe that our framework may be grounded as an initial stepping stone for future research, our primary prupose is to discuss research questions that should be addressed for the practical operation of LLM advertisement. 
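As a reference point for the four-module framework outlined above, the sketch below wires the modules' outputs into a single selection step. The dataclass fields, the multiplicative score, and the second-price-style payment are illustrative placeholder choices of ours, not a specification from this paper.

```python
from dataclasses import dataclass
from typing import List, Tuple

@dataclass
class Candidate:
    advertiser: str
    modified_output: str  # X_i: ad-bearing response from the modification module
    bid: float            # b_i from the bidding module
    ctr: float            # predicted click-through-rate from the prediction module
    sr: float             # predicted user satisfaction rate from the prediction module

def score(c: Candidate) -> float:
    # Placeholder objective Obj(sr, ctr, bid); here simply multiplicative.
    return c.bid * c.ctr * c.sr

def run_auction(cands: List[Candidate]) -> Tuple[Candidate, float]:
    """Select the highest-scoring candidate and charge a second-price-style payment.

    The payment rule assumes the score above is linear in the bid, so the winner
    pays the smallest bid that would still have won against the runner-up.
    """
    ranked = sorted(cands, key=score, reverse=True)
    winner = ranked[0]
    if len(ranked) == 1:
        return winner, 0.0
    runner_up_score = score(ranked[1])
    payment = runner_up_score / max(winner.ctr * winner.sr, 1e-9)  # critical bid under the toy score
    return winner, min(payment, winner.bid)

candidates = [
    Candidate("adv1", "response with ad 1", bid=2.0, ctr=0.10, sr=0.9),
    Candidate("adv2", "response with ad 2", bid=3.0, ctr=0.05, sr=0.8),
]
winner, pay = run_auction(candidates)  # adv1 wins and pays 0.12 / (0.10 * 0.9) ≈ 1.33
```

Any of the auction formats discussed later (e.g., a first-price rule) could replace the payment computation without changing the surrounding interfaces.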
Finally, as an independent interest, in Section 6, we discuss how LLM (more generally generative model) can empower the online advertisement system to tailor the content of ads itself to individual users, leveraging their specific contexts to generate more appealing content, which is called as dynamic creative optimization (DCO) or responsive ad in the literature. The incorporation of DCO into the SA, especially for LLMA, would bring further challenges in each module and we propose corresponding research questions therein. 2 Related works Here we present related works on LLM, online advertisement system, and their intersection. Further related works can be found in Appendix A. Large Language Models. Advancements in AI, NLP, and conversational agents, driven by Transformer architecture (Vaswani et al., 2017), 4We first focus on presenting a single advertisement in the LLM output, however, generalization of the proposed framework to incorporate multiple ads at once is discussed in Section 5.4. 3 \fAdvertise to teenage boy Prompt: Promote laptop L of brand B to teenage boy. Output: In a computer, the CPU and RAM work together like a supercharged gaming team. The CPU is like the team captain, making all the fast decisions and executing plays, while the RAM holds onto the game data that needs to be accessed quickly, like maps or player stats. This awesome duo shines in the Laptop L from Brand B, designed to keep up with your fast-paced games and multitasking. With this laptop, you're always equipped to win, whether you're gaming, streaming, or just hanging out online. Advertise to teenage girl Prompt: Promote laptop L of brand B to teenage girl. Output: In a computer, the CPU and RAM are your ultimate multitasking team. The CPU handles all the tasks you throw at it, from editing videos to scrolling through social media, while the RAM keeps everything running smoothly without any lag, storing all your apps and files for quick access. Check out Laptop L from Brand B\u2014it\u2019s not only powerful but also stylish, perfect for keeping up with your studies, your creative projects, and your social life. This laptop is a must-have tool that fits your dynamic lifestyle and lets you do it all, effortlessly and in style. Figure 2: An example of ChatGPT 4 providing different output tailored to the user context. We keep the context provided in Figure 1, but asks again to advertise again to the specified user segment. have given rise to models like GPT-3 (Brown et al., 2020) and BERT (Devlin et al., 2019). These models revolutionize chatbots, enabling context-aware, human-like interactions across diverse domains (Abd-Alrazaq et al., 2020; Nicolescu and Tudorache, 2022). Everyday use of language models has led researchers to investigate the content generated by these models to ensure that they do not hallucinate (Guerreiro et al., 2023; Ji et al., 2023; Li et al., 2023; Zhang et al., 2023) in their outputs, and do not generate harmful or biased content (Liang et al., 2021; Navigli et al., 2023; Kirk et al., 2021; Shen et al., 2023; Weidinger et al., 2021; Liu et al., 2023b). In fact, trustworthy of LLMs is actively studying by researchers (Liu et al., 2024). Online Advertisement. Online advertising, particularly within the context of sponsored search auctions, has evolved in recent years, with notable contributions from prior research. Sponsored search auctions have been a subject of extensive investigation, emphasizing the optimization of bidding strategies and keyword relevance.5 Edelman et al. 
(2007) provided valuable insights into the economics of sponsored search auctions, shedding light on the intricate dynamics of keyword auctions. Goel et al. (2009) proposed a contract auction between the advertiser and the publisher, and introduce impression-plus-click pricing for sponsored search auction as an application. 5Online advertisement can be categorized by either of search or display ads. We here focus on search ads. We refer to the book by Roughgarden (2010) for more detailed discussion. Mechanism design with LLM agents. Duetting et al. (2023) recently proposes a mechanism design problem when the bidders are LLM agents. More specifically, the bidders bid on desired distributions of the next token, given the history of the tokens. They denote this problem by token auction, and provide theoretical foundations of what a reasonable mechanism should look like. Indeed, their model can be considered as a version of our advertiser modification and dynamic bidding framework. Notably, they require the advertisers to provide their desired distribution and corresponding bidding for every token, which may require significant communication burden to run the mechanism in practice. In contrast, the advertisers in our framework only bid once for the modified output, thereby inducing less communication cost.6 3 Requirements In this section, we outline our expectations for LLMA, which should be evaluated based on the following criteria: 6In Section 5.4 and Appendix B, we provide further discussion on Duetting et al. (2023) to compare it with our framework and how one can practically adapt their mechanism design perspective in online advertisement system within our framework. 4 \f3.1 System requirement Privacy. In any advertisement system where users and advertisers interact, it is imperative to maintain the confidentiality of users\u2019 privacy. In essence, all aspects pertaining to a user\u2019s context, information, and data must be kept secure, as any inadvertent disclosure can pose significant risks. Reliability. LLMA needs to associate with advertisers, making the entire system vulnerable to harmful behaviors from them. To mitigate that, LLMA should consider every possible aspect of the behavior of the advertisers, ensuring the reliability and alignment of the system (Hendrycks et al., 2020). Latency. Users expect rapid interaction with LLM service, e.g., ChatGPT typically generates prompt responses, without any noticeable delay. Incorporating advertisement in LLM output, inevitably, will add some latency to the overall system. Nevertheless, this added delay should be minimal, ensuring that it does not significantly disrupt the user experience. 3.2 Player incentive User satisfaction. When an advertisement is added to LLM output, it\u2019s crucial to maintain a high level of quality in the generated content. Users do not wish to encounter an excessive amount of advertising in the generated responses, especially when it doesn\u2019t align with their search or interests. Advertisement involvement should not significantly degrade LLM output, as doing so would lead to reduced user satisfaction, thereby decreasing the user retention. For instance, in modern online advertisement system, this is often handled using floor prices to filter out irrelevant ads to maintain the quality of the user experience. 
Similarly for LLM service, we need to ensure the final modified output including advertisements can still be a high-quality response to the user, not deviating significantly from what LLM originally would generate. Advertisers satisfaction. As advertisers pay LLMA to include advertisements in their outputs, they expect appropriate exposure of ads in them. LLM Advertisement System Modification Prediction Bidding Auction ...\u00a0 Original output generation User context Figure 3: Overall framework of LLMA. Whether their products, services, or any other form of advertising, they expect these items to be showcased in a compelling and engaging manner within the output. This presentation should be interesting for users, ultimately driving revenue for the advertisers. Platform revenue. Obtaining revenue is the primary goal of operating LLMA, thus, LLMA should ensure that the total additional cost of advertisement is compensated by the revenue obtained from advertisers. It is worth noting that advertisements may potentially reduce the overall number of users engaging with the system, which could have adverse effects on LLM itself. 4 Framework for LLMA In this section, we present overall framework for LLMA. We introduce several feasible types of architectures for LLMA, each of which has its own advantages and disadvantages that will be discussed together. Mainly, we focus on a scenario in which a user provides a query q to the LLM, and the original output by LLM is given by X. Further, a context c captures a variety of features that are relevant to the advertisement recommendation, e.g., history of the previous queries, user segment, region, and date. Although the number of advertisers (bidders) varies from time to time, at the moment the user inputs the query q, we suppose that there are n advertisers indexed by V = {adv1, adv2, . . . , advn}. For simplicity, we assume that each advertiser advi is equipped with a single advertisement (ad) he/she wants to post. We denote advi\u2019s ad by adi for i \u2208[n], and let D = {ad1, . . . , adn}. 5 \fOverall, we divide the LLMA as follows based on their functionalities7 into 4 modules: (i) output modification, (ii) bidding, (iii) prediction, and (iv) auction. This overall framework is illustrated in Figure 3. In short, 1. The user asks a query q to the system along with a context c; 2. The LLMA generates the original output X; 3. The modification module generates specific output Xi for each of the advertisers; 4. The bidding module generates corresponding bid bi. At the same time, the prediction module predicts the quality of the output Xi by measuring the user satisfaction rate sri and click-through-rate ctri; 5. Finally, the auction module runs auction to determine the final output and corresponding payment amount to charge the advertiser. For the rest of the section, we will explain each module and its responsibility/functionality in a sequential manner. 4.1 Modification module The modification module takes responsibility for generating modified output based on corresponding ads. This module takes the pair of (q, X, c) and a set of advertisements D as the input, and returns (Xi)i\u2208[n] where Xi denotes the modified output for adi. Overall, we consider two approaches based on who generates the modified outputs. 8 1. In the advertiser modification model, the role of generating the modified output is delegated to each advertiser, and collect the modified outputs from them. 
7These functionalities may be distributed to multiple market players such as supply side platform, display side platform, or ad exchange, as does in the current online ad eco-system. 8For either cases, one can generate the modified output by giving an additional query to the LLM as presented in Figure 1. 2. In the LLMA modification model, LLMA directly generates the modified output. Comparison to SA. Note that the standard SA does not have modification module explicitly as it is trivial to incorporate ads in each slot. Thus, we emphasize that is the challenge that uniquely appears in LLMA. 4.2 Bidding module The bidding module generates bid based on the modified outputs. The input to the bidding module is the query q, context c, and modified outputs (Xi)i\u2208[n], and the output will be a bid (bi)i\u2208[n], where bi is the bid for adi. Similar to the generation model, we consider two models for the bidding module. 1. In the dynamic bidding model, we deliver relevant information, e.g., query q, context c, the original output X,9 along with the modified output Xi to the bidder for each query, and then the bidder returns the corresponding bid. 2. In the static bidding model, each bid is determined by some keywords based on some precommitted contract, without any additional communication with the advertiser. The static bidding model would be able to be operated by extracting some keywords from the user query and run the bidding based on the keywords while asking advertisers to bid on each keyword, similar to the SA. In Section 5, we discuss several extensions of these bidding models, thereby allowing more flexibility for advertisers to customize their advertisement strategy by bidding. Dynamic bidding model may be of interest to the advertisers who have their own measures to estimate the quality of the modified output and thus want to customize their bid in a way that might not be captured by the static bidding. For example, in extreme, suppose that LLMA may 9To protect the user\u2019s privacy, one may consider delivering an encrypted context \u02c6 c or the one that excludes private information. 6 \fonly consider click-through-rate of the modified output and does not care about the user satisfaction rate or how much the inserted advertisement relates to the context of the output. In this case, the advertiser may want to customize their bid by measuring the quality of the output by itself, since he may be afraid of the system inserting its ads in an awkward manner. Comparison to SA. Note that in SA, since the context of user query in search engine is more explicit, and can typically abstracted as several keywords, advertisers can safely bid on each keyword. In LLMA, however, it might be difficult to extract proper keywords from the query as the query itself tends to be much longer than the traditional SA due to the flexibility of LLM. Further, the generated output significantly affects the marketing impact of the ads in LLMA, whereas it is typically assumed (and validated) to be independent from other ads / contents shown within SERP in the standard SA. Finally, the dynamic bidding model further exhibits unique challenges of requiring advertisers to dynamically adjust the bid with respect to the output. 4.3 Prediction Module Prediction module is responsible for computing user\u2019s satisfaction-rate (SR) and click-throughrate (CTR). User\u2019s SR indicates how much the user is satisfied with the given output. 
This SR affects the final decision making process of LLMA since if there is a chance that the user is very disappointed with the modified output, then it should reconsider displaying that output as it may hurt the user experience a lot. On the other hand, CTR indicates the probability that the user clicks the ad link included within the modified output. This is also critical in determining the auction winner since this directly affects the revenue of the LLMA. More precisely, if an advertiser\u2019s bidding method is cost-per-click (CPC), then CTR proportionally affects the advertisement revenue as the expected revenue of transferring its ad can be computed as CPC times the CTR. Overall, both of SR and CTR can be defined as a function of original output X, modified output X\u2032, query q, and context c, which returns a real value in [0, 1]. Detailed methodologies for estimating/learning these functions will be discussed in Section 5. Comparison to SA. Different from the traditional SA whose output is static (as ad image/hyperlinks), LLMA constructs the output in a highly dynamic manner. This makes the prediction module more difficult to learn the CTR. Moreover, in LLMA, the prediction of user satisfaction is much more directly affected by the incorporation of the ads in the output. This is in stark contrast with SA, whose user experience is usually affected only by the number of ad slots in SERP. 4.4 Auction module Computed all the required parameters above, we finally run the auction module to determine who wins the auction and how much the advertiser will be charged. The input to the auction module is the tuples of (bidi, sri, ctri)i\u2208[n] which denotes bid amount, satisfaction rate, and click-through-rate, respectively. Output of the auction module is an allocation a \u2208{0, 1}n and a payment p \u2208R\u22650. Precisely, auction module determines the auction format which consists of the allocation function and the payment function, where the allocation function decides which advertisement to deliver and the payment function computes how much the advertisers pay LLMA.10 Importantly, the main goal of LLMA is to maximize its revenue in a long term manner. To this end, this should balance a trade-off between short-term revenue and the user retention. Therefore, the objective can be modeled as a function from bid amount, CTR, and SR to a nonnegative real number that indicates a score of a modified output, i.e., selecting i\u2217= argmaxi\u2208[n] Obj(sri, ctri, bidi). We do not discuss how to choose the objective function here, given the vast literature on the sponsored search auction. After designing the score function, one should determine an auction format. Also, 10In its current form, this does not allow the possibility of adjusting the final output in order to balance between multiple advertiser\u2019s preferred output. Our framework, however, does not preclude possibility for such extension, which we will elaborate more in Appendix B. 7 \fgiven the long line of literature on auction design, one can pick the most preferred mechanism, e.g., first-price auction (FPA) or secondprice auction (SPA). Comparison to SA. The main difference to the SA is, similar to what is discussed in the previous subsection, the user satisfaction is much more important measure to account for. For example, the allocation function in SA represented by (1.1) is social welfare which only accounts for platform and advertisers\u2019 utility. 
In LLMA, however, one might also need to consider user\u2019s utility as a function of predicted SR, which would change the allocation function of the mechanism and the payment correspondingly. 5 Challenges Recall that our overall framework consists of four modules: modification, bidding, prediction, and auction module. For each module, we discuss several characteristics, technical challenges, and corresponding research questions in implementing the proposed architectures in practice, evaluating their overall practicality with respect to the criteria defined in Section 3. We provide further discussions in Appendix B. 5.1 Modification module Challenges: Advertiser modification model that guarantees no privacy-issues, reliability, user satisfaction, and low-latency. In the advertiser modification model, LLMA should somehow deliver q, X, C to each advi. This may leak some private information of the user to the bidders, thereby suffering privacy issues, which is critical for user experience. It should be addressed how LLMA can partially or indirectly disclose the information to prevent user privacy while guaranteeing high quality output from the advertiser modification module. In addition, the advertiser modification model has reliability issue since there is a possibility that the advertiser-modified output includes some illegal or spamming contents. The LLMA may devise an additional module to maintain its robustness against such kind of adversarial behavior, however, this is still a cost of the LLMA to bring them in the system. This further possibly degrades the user satisfaction since the advertisergenerated output may not be satisfactory for the user compared to what the LLM generates. Furthermore, this requires additional communication between the LLMA and the advertisers, thereby increasing the overall latency. Advertiser modification model faces many concerns as discussed here, thus it might not be practical without an innovative methodology that resolves these issues. Challenges: Effectively reflecting advertiser preference on the output for LLMA modification model. On the other hand, LLMA modification model does not suffer the same amount of privacy, reliability and latency issues mentioned above. Furthermore, this model would be more in favor of the user since LLMA would aim to improve the user experience. However, one disadvantage is that this may not fully reflect the advertiser\u2019s preference on the output modification process, thereby partly decreasing the advertiser\u2019s satisfaction. Thus, it should be explored how one can effectively reflect advertisers\u2019 preference on the modified output to better capture the output that advertisers would have preferred. Prospect: Balancing the trade-off between LLMA/advertiser modification models. In order to improve the advertiser\u2019s satisfaction for the LLMA modification model, one approach is to allow the advertiser to submit indirect indicators that represent its preference on what it expects the modified output to be. More in a detail, once the user asks a query q, LLMA delivers q, the original output X, and possibly the context c to the advertiser, and the advertiser returns a string Y that indicates its preference in words. After then, LLMA may apply Y as a prompt in the LLM to generate the modified output. This approach allows more flexibility for the advertiser to express its preference on the output. 
Note that, however, this still requires additional communication cost compared to the LLMA modification, which possibly increases latency. Another way to better capture advertisers\u2019 preference on valuations without further communication is to ask 8 \fadvertisers to write a list of documents to give it as samples to the LLM as a prompt engineering. 5.2 Bidding module Challenges: Implementing dynamic bidding model without privacy and latency issue. The biggest advantage of the dynamic bidding model is that, it would induce higher advertiser\u2019s satisfaction rate, which may be attractive for some advertisers who have technical capability to dynamically send bid with respect to the proposed output. This is because the advertisers have full flexibility in deciding their bids after observing the modified output. Similar to the advertiser modification model, dynamic bidding model may have privacy issue if LLMA discloses some private information to the advertiser. Moreover, dynamic bidding model may suffer additional latency issue since the entire set of modified output should be delivered to the advertisers. Note that there is no reliability concern as only the bid amount is communicated.11 Overall, the research in this direction should carefully address how practical the dynamic bidding model is, and how it can be implemented without violating privacy and latency issue.12 Prospect: Balancing the trade-off between static/dynamic bidding models. In the static bidding model, a natural approach to improve the advertiser\u2019s satisfaction is to enable a more generic static function as the contract between the advertiser and LLM provider. Specifically, LLMA can propose the bidding contract as a function that maps an indirect measure of the modified output to a nonnegative real number representing the bid, i.e., the advertiser\u2019s valuation. For instance, LLMA and an advertiser agrees on a inverse proportional contract based on the similar11Note that static bidding model does not suffer any of the privacy, latency, reliability issues. However, as the advertisers cannot fully customize their bid per each query based on the modified output, this may degrade the advertiser satisfaction. 12Further, one may also investigate how the ad market would comprise of, when there is a possibility that the advertisers hire a proxy agent to write bid on behalf of them, and how the proxy agent (or advertisers themselves) can optimize bids in such scenario. ity distance d between the original output X and the modified output Xi. For example, if Xi is examined to be very different form X, then d(X, Xi) will be large, and then the advertiser\u2019s valuation, i.e., bid, on Xi is small since it is believed that the incorporation of ad too much changed the content of the output, thereby hurting user experience and capturing less user attention. Note that in this case, the advertiser may need to partly understand how LLMA estimates the distance measure and how it\u2019s defined. More generally, LLMA might construct a more refined way to measure the user\u2019s interest, attention, relevance for the modified output from the perspective of advi, and let the advertisers commit to either of the contract based on its taste. These are crucial aspects to explore to run LLMA in practice. 5.3 Prediction module Challenge: Efficient and precise implementation of prediction module. The process of estimating CTR will be analogous to that in the modern online advertisement system. 
We can train a prediction system to predict ctri \u2208[0, 1] given the input Xi, q, c for each i \u2208[n] based on the feedback data from the user.13 As discussed in Section 4.3, since Xi will be very sparsely appear if the entire document is considered as an embedding, we may extract some useful features from each q, Xi and c, and learn the mapping from features to the true value. Whether the user regenerates the response, clicks the ad, or exits the LLM, can be used as feedback data to improve the prediction module. Challenge: Relevance/similarity distance measure to estimate user satisfaction. To estimate SR, one may first consider an explicit approach given an assumption that the original output is optimal. If we assume that the original output is optimal, then essentially a distance between the original output X and the modified output Xi will be a good measure to predict the SR 13For instance, as is typical in the modern online advertisement system, one may use factorization machine based online algorithm after extracting some features from user, context, and the queries. More detailed discussion can be found in McMahan et al. (2013). 9 \fsince if Xi = X, then the user will be the most satisfied. In this context, given the original X, one may use the marginal probability Pr [ X|q ] as an indicator for SR. For example, one can define output distance by d(X, Xi) := ||Pr [ X|q ], Pr [ Xi|q ]|| for some proper norm. Note that the probability for an output to be obtained can be computed via standard method in the literature (Vaswani et al., 2017), by computing the probability distribution in a token-by-token manner. Further, one can consider more general distance function d, e.g., the semantic similarity of two documents/phrases (Mikolov et al., 2013; Cer et al., 2018; Conneau et al., 2019). Essentially, a concrete research question in this context is to identify a good measure to capture the user\u2019s satisfaction in advance when ads are incorporated in the output. Prospect: Incorporating distance measure and online learning. As discussed above, one may consider combining the approaches of distance measure and online learning from user feedback for SR prediction. To online learn SR estimates from user feedback, a useful indicator of whether the user is satisfied with the output is whether the user re-generates the output given the same (or similar) query. Hence, one may aim to learn a function which outputs the sr, given the query q, modified output X\u2032, and context c. This approach does not assume that the original output X is indeed optimal, thereby allowing the possibility that the user may be satisfied with a modified output Xi even though its distance from X is measured to be far. Note that one should implement an independent training system to operate this learning process, leading to additional burden for LLMA. To incorporate advantages of these two approaches, one can consider an online learning model where the distance is also integrated as one of the features to predict it, i.e., d(X, Xi) is used as a feature of the SR prediction system. In case d(X, Xi) indeed has some positive correlations with the true user satisfaction rate, this would increase the accuracy of the prediction. Essentially, an effective way to capture the both advantages of online learning and distance measure should be studied thoroughly. 5.4 Auction module Challenge: Incorporating multiple ads in a single output. 
Recall that in our auction module, the modified output is fully determined based on the allocation function so that if it is allocated to advi, then the modified output Xi will be displayed to the user. One approach to generalize this framework is, to let the final output to be displayed the one that does not necessarily belong to {Xi}i\u2208[n], but rather a new output X\u2032 that somehow interpolates {Xi}i\u2208[n].14 Let us denote the new output X\u2032 by balanced output, e.g., see Table 2 in Duetting et al. (2023). By doing so, it might be possible to deliver multiple advertisements in a fair manner,15 thereby allowing LLMA to bring more revenue by charging multiple advertisers at once. One subtle issue is that, since each advertiser bids bi on delivering Xi, they may not want to write the same bid for the balanced output X\u2032, thus it may degrade the advertiser\u2019s experience on participating in the LLMA.16 In summary, incorporating multiple ads in a single response is technically challenging, which would require both theoretical and practical studies to validate reasonable approaches. 6 Dynamic Creative Optimization via LLM Now, we discuss about the possibility of advanced dynamic creative optimization by using LLM. Dynamic creative optimization (DCO) 17, in the traditional search and display ad markets, refers to the technique of dynamically adjusting the combi14We discuss further comparison to the token auction model by Duetting et al. (2023) in Appendix B.1. 15Another way to display multiple ads at once will is discussed in Appendix B.2 under our original framework. 16In the static bidding model, as discussed in Section 5.2, this might be handled by committing to a contract based on measures that represents the advertiser\u2019s preferences more in a refined manner. In the dynamic bidding model, one approach would be to append an additional step of asking for bids for the final output again to the advertisers. 17This is often called as dynamic product ads (DPA), but DPA typically refers to the process of selecting which item to advertise from a given catalog of an advertiser, whereas DCO denotes the process of modifying the ad itself. 10 \fQ: Car recommendation, Ad: BMW 3 Series Q: Conference in Florida, Ad: Travel with SunShine not personalized user: 60-year-old woman user: 60-year-old man user: 20-year-old unmarried living in DC user: 40-year-old married living in California not personalized Figure 4: Integrating responsive advertisement in the images generated by ChatGPT 4. In the left panel, a user requests car recommendations within the price range of $40,000 to $60,000, and the chat bot, when capable of advertising, generates a \u201cBMW 3 Series\u201d image (left). Personalized ads modify this image for a 60-year-old woman (middle) or man (right) based on user context like age and gender. In the right panel, a user seeks machine learning conferences in Florida. The chat bot advertises \u201cSunShine Travel\u201d with its logo on a generic Florida image (left). With personalized ads, the model tailors promotions, offering flight tickets from the user\u2019s location and showing more relevant Florida images. For example, for a young unmarried person from DC, the output advertises DC to Florida flights (middle) and features an image of beach there, while for a married person in California, it promotes California to Florida flights with images of amusement parks. This enhances user experience with targeted and appealing content. 
nation of ad assets and constructing an advertisement that best fits the customer\u2019s preference. It can markedly enhance the quality and relevance of the ad to the user by tailoring the contents of ad in a personalized manner. Many online advertisement platforms adopt DCO-based ads, which is often called as responsive ad (GoogleDisplayAd; GoogleSearchAd). 18 In the canonical DCO framework, a static ad consists of a single image file that can only be displayed in a specific set of ad channels, whereas dynamically optimized ad consists of a single ad template and multiple options for each asset that fills in the ad template. This is typically done by the advertiser registering multiple options for each asset given an ad template, and then the system picks the optimal combination. This procedure aims at increasing the CTR and SR, which would eventually improve the welfare efficiency of the online ad ecosystem (McMahan et al., 2013) as well as the revenue of the ad platform. 18As traditional DCO is restricted to dynamically construct the complete image file of ad, we here restrict our attention to the image-based ads as well. However, it is apparent that LLM based DCO would work beyond imagebased ads. Unlike from the traditional DCO techniques, the arise of LLM, especially those support the output with image such as ChatGPT 4, enables more involved process of dynamic creation of ads, as well as replaces the role of traditional DCO in online ad platforms. For example, given a user query and context, the platform may send a query to LLM to customize the image of ad to attract the user by capturing user preference. We provide representative examples in Figure 4 where two scenarios of personalizing ads is depicted. Note that the use of LLM-based DCO is not restricted to LLMA, but can be broadly applied to SA and display ad. Challenge: Refining each module. To integrate responsive ads, a more advanced modification module is imperative. This module should not only enable the alteration of the original response to include ads but also incorporate them in a manner that takes into account user preferences. This entails utilizing user contexts, e.g., gender/location/the current device used for the query, in the ad generation process. To be more specific, the modification module should incorporate user context in the process of modifying the original response. This makes ads more 11 \fappealing to the user by considering their preferences. Leveraging language models, we can prompt them to factor in various elements related to user context during the generation of modified outputs. As illustrated in Figure 4, information about users\u2019 context can be utilized to create more captivating advertisements for them. Further, the prediction module may interact with the modification module to commit to the modified output that mostly increases user\u2019s experience, i.e., CTR and SR. Even for SA, the prediction module may need to be made more sophisticated to predict the CTR and SR of the responsive ads. This is in stark contrast to the standard DCO since it may not significantly change the ad image overall, but only slightly changes the part of it by replacing some ad assets. Correspondingly, the advertiser may value the generated ad differently, i.e., write different bid, with respect to the quality of the ad generated by LLM-based DCO. Overall, a vast amount of studies should be done to clearly investigate a plausible framework for LLMA, even for SA, to adapt the LLM-based DCO. 
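As a concrete illustration of the preceding point, the snippet below sketches how a modification module might fold user context and advertiser-supplied assets into a single rewriting prompt for the LLM. The field names and the prompt template are hypothetical and only meant to show the shape of such an interface, not an API defined in this paper.

```python
def build_dco_prompt(original_output: str, user_context: dict, ad_assets: dict) -> str:
    """Compose a responsive-ad rewriting prompt from the original response, user context, and ad assets."""
    return (
        "Rewrite the response below so that it naturally mentions the advertised "
        f"product '{ad_assets['product']}' by '{ad_assets['brand']}'.\n"
        "Tailor the tone and examples to a user with this profile: "
        f"age={user_context.get('age', 'unknown')}, "
        f"location={user_context.get('location', 'unknown')}, "
        f"interests={', '.join(user_context.get('interests', []))}.\n"
        "Keep the factual content of the response unchanged and do not exaggerate product claims.\n\n"
        f"Response to rewrite:\n{original_output}"
    )

prompt = build_dco_prompt(
    original_output="The CPU processes instructions while RAM provides fast temporary storage.",
    user_context={"age": 17, "location": "DC", "interests": ["gaming"]},
    ad_assets={"product": "Laptop L", "brand": "Brand B"},
)
# `prompt` would then be sent to the LLM to produce the responsive, ad-bearing output.
```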
Challenge: Meeting system requirements. Obviously, running LLM-based DCO should not entail the failure of the system requirements discussed in Section 3. For instance, in terms of latency, we observed that generating an image with the GPT4 takes almost about 10s for Figure 4. Given that the online advertising typically requires the ad latency up to 100ms, the required latency is almost 100 times the conventional threshold. Thus, the fundamental research challenge is to enable a fast production of an ad image to meet the conventional latency requirement in the online advertising system. Further, it will be technically interesting to balance the tradeoff between the ad\u2019s quality and latency which governs the advertiser and user utility, respectively. Interestingly, as acute readers may have noticed, Figure 4 includes some technical issues in the images themselves. For example, the first image in the car ads generate a word that is not easily understandable. Indeed, LLM-based DCO may induce the reliability issue due to the hallucinating nature of LLM (Ji et al., 2023; Guerreiro et al., 2023; Bubeck et al., 2023). Thus, LLMbased DCO should ensure that it does not generate a hallucinated ad that may hurt the advertiser satisfaction. A simple approach might be to require LLM to minimally revise the reference images, requesting the advertisers to submit several images for reference. Challenge: Cost sharing model. Unlike from the traditional DCO technique that simply determines the efficient combination of ad assets, LLMbased responsive ad can do much beyond by creating truly new content. On the other hand, such process necessarily entails increased use of computational resource, especially since more queries to LLM will be required to customize the ads. We here discuss several plausible cost sharing models that LLMA might adopt. The simplest one will be to charge the advertisers whenever the contents are dynamically modified. For example, the advertiser and LLMA commit to a mutual contract on how much the advertiser pay the LLMA for each responsive ad, and LLMA charges the advertiser whenever it responsively modifies the ad contents. As the advertiser might hope to use responsive ad only if the user context is sufficiently relevant to its ads, the contract might specify when the advertiser\u2019s ads will be responsively changed. Also, LLMA may provide several options for advertiser to determine how frequently and largely it should responsively modify the ads. This model would work well for those charged by cost-per-impression, however, advertisers with cost-per-click or cost-per-conversion may not willingly commit to the contract since they desire to pay the platform only if the actual click or conversion happen, even though the platform indeed responsively modified its ads for every impression. LLMA might give several options to the advertisers under which event they will be charged to pay for responsive ad. Ads with cost-per-click model may still want to pay for responsive ads per impression or per click. For the latter, LLMA may offer a larger amount of commission per click by predicting the average CTR of the ads. LLMA must also decide whether or not to account the increased amount of payment for each advertiser as the increment of the actual bid or just regard it as a separate commission. 
For the 12 \fformer case, the advertisers who are willing to charge more for the responsive ad commission fee will take advantage of getting higher chance of delivering their ads, compared to those who do not use responsive ad option. By doing so, the auction module can select the ad that truly maximize the total revenue expected from delivering a single advertisement, thereby increasing the platform\u2019s revenue. 7" + }, + { + "url": "http://arxiv.org/abs/1710.10793v2", + "title": "Understanding GANs: the LQG Setting", + "abstract": "Generative Adversarial Networks (GANs) have become a popular method to learn\na probability model from data. In this paper, we aim to provide an\nunderstanding of some of the basic issues surrounding GANs including their\nformulation, generalization and stability on a simple benchmark where the data\nhas a high-dimensional Gaussian distribution. Even in this simple benchmark,\nthe GAN problem has not been well-understood as we observe that existing\nstate-of-the-art GAN architectures may fail to learn a proper generative\ndistribution owing to (1) stability issues (i.e., convergence to bad local\nsolutions or not converging at all), (2) approximation issues (i.e., having\nimproper global GAN optimizers caused by inappropriate GAN's loss functions),\nand (3) generalizability issues (i.e., requiring large number of samples for\ntraining). In this setup, we propose a GAN architecture which recovers the\nmaximum-likelihood solution and demonstrates fast generalization. Moreover, we\nanalyze global stability of different computational approaches for the proposed\nGAN optimization and highlight their pros and cons. Finally, we outline an\nextension of our model-based approach to design GANs in more complex setups\nthan the considered Gaussian benchmark.", + "authors": "Soheil Feizi, Farzan Farnia, Tony Ginart, David Tse", + "published": "2017-10-30", + "updated": "2018-10-22", + "primary_cat": "stat.ML", + "cats": [ + "stat.ML", + "cs.IT", + "cs.LG", + "math.IT" + ], + "main_content": "Introduction Learning a probability model from data is a fundamental problem in statistics and machine learning. Building o\ufb00the success of deep learning, Generative Adversarial Networks (GANs) [1] have given this age-old problem a face-lift. In contrast to traditional methods of parameter \ufb01tting like maximum likelihood estimation, the GAN approach views the problem as a game between a generator whose goal is to generate fake samples that are close to the real data training samples and a discriminator whose goal is to distinguish between the real and fake samples. The generator and the discriminator are typically implemented by deep neural networks. GANs have achieved impressive performance in several domains (e.g., [2,3]). However, training good GANs is still challenging and it is an active area to design GANs with better and more stable performance (e.g., [4\u20136] and Section 1.3). GANs are typically designed without any explicit modeling of the data. Are they universal learning algorithms, i.e. can they learn a very wide range of data distributions? If not, what are their limits? Can better GANs be designed if we use an explicit model of the data? These are the questions we wish to explore in this paper. GANs\u2019 evaluations are primarily done on real data, typically images. Although clearly valuable, such evaluations are often subjective owing to not having clear baselines to compare against. 
To make progress on the above questions, we instead report experiments of state-of-the-art GANs on synthetic data where clear baselines are known. [Figure 1: An illustration of the performance of WGAN+GP and WGAN+WC for different values of r (the dimension of the input randomness to the generator) with different initialization procedures, when the generator and the discriminator functions are both neural networks.] We chose one of the simplest high-dimensional distributions: the Gaussian distribution. 1.1 Experiments In our first set of experiments, we generate n = 100,000 samples from a d = 32 dimensional Gaussian distribution $N(0, K)$, where $K$ is the normalized identity matrix, $K = I/\sqrt{d}$. We train two state-of-the-art GAN architectures in our experiments: WGAN+Weight Clipping (WGAN+WC) [4] and WGAN+Gradient Penalty (WGAN+GP) [5]. We use the neural net generator and discriminator with hyper-parameter settings as recommended in [5]. Each of the neural networks has three hidden layers, each with 64 neurons and ReLU activation functions. To evaluate GAN's performance, we compute the Frobenius norm between the covariance matrices of the observed and generative distributions. Figure 1 shows the performance of GANs for various values of r, the dimension of the randomness (i.e., the input to the generator, for which we use standard Gaussian randomness), and for two random initializations of the ReLU layers using the standard He [7] and Glorot [8] procedures. In these experiments, we observe two types of instability in GAN's performance: oscillating behavior (e.g., WGAN-GP, r = 4, 8) and convergence to different and bad local solutions. Even after 20,000 training epochs, the error does not approach zero in most cases. We observe similar trends when we use a random covariance instead of the normalized identity matrix (Appendix Figure 5). Next, we repeat these experiments using the ELU activation function [9] instead of the ReLU activation. Since the ELU activation is differentiable everywhere, we thought it might provide more stability. Surprisingly, however, GAN's performance becomes dramatically worse (Figure 2). Reference [5] has noticed similar behavior in another setting as well, but there was no explanation, and we are still surprised that such a drastic phenomenon can happen even in such a simple Gaussian setting. [Figure 2: A repeat of the experiments of Figure 1 using the ELU activation function instead of the ReLU activation. GAN's performance changes drastically. Note the range of the y-axes in the plots for WGAN-GP.] In our next set of experiments, we attempt to improve performance by restricting the generator to be linear, since both the observed data and the randomness come from Gaussian distributions. (The discriminator is still the ReLU neural network.) Since the generator is linear, zero error cannot be achieved in the case of r < d. In this case, a natural baseline is the r-PCA of the sample covariance matrix (Appendix Section A). GAN's performance improves compared to the case of the nonlinear generator (Figure 3). We do not observe oscillating behavior in WGAN+GP.
However, we still observe convergence to di\ufb00erent bad local solutions for both WGAN+GP and WGAN+WC. Unlike Figure 1 where WGAN+WC was performing better than WGAN+GP, here the performance of WGAN+WC is signi\ufb01cantly worst than that of the WGAN+GP. Also, unlike other cases, in WGAN+GP when r = 4, the Glorot initialization achieves a smaller error than that of the He initialization. These results highlight sensitivity of state-of-the-art GANs even in a simple benchmark. 1.2 Quadratic GAN The performance of state-of-the-art GANs on Gaussian data gives some evidence on the limitations of a model-free approach to the GAN design. We ask: is there is a natural GAN architecture for learning Gaussian data? We ask this question not because we are in desperate need of a new way of learning Gaussian distributions, but because we want to see how much gain a model-based approach can buy on this simple benchmark as well as hope to learn something in the process about designing GANs in general. Figure 4-a is our proposed GAN architecture for the Gaussian benchmark 1. We refer to this architecture as Quadratic GAN. Figure 4-b compares performance of quadratic GAN and WGAN+GP for r = 32. Quadratic GAN demonstrates stable behavior and much faster convergence to the maximum-likelihood baseline compared to WGAN. In fact, due to its simple structure, 1For simplicity, we assume that samples have been centered to have zero means. In the general case, the generator should be an a\ufb03ne function. 3 \fGlorot initialization He initialization r=32 r=4 r=8 Frobenius norm error WGAN-WC WGAN-GP Frobenius norm error empirical PCA baseline Figure 3: A repeat of experiments of Figure 1 when the generator function is linear. A random covariance matrix is chosen instead of the identity matrix. training of the Quadratic GAN takes less than 1 second on a laptop CPU which is orders of magnitudes faster than training WGAN on a GPU. We designed the Quadratic GAN in three steps: First, we formulated GAN\u2019s objective by specifying the appropriate loss to naturally match the Gaussian model for the data (Section 2). This allows us to show that the global population solution of the minmax problem is the r-PCA of the (true) covariance matrix of the Gaussian model (Theorem 1). However, this initial architecture can have poor generalization performance (Appendix Section C). Second, we further constrained the discriminator to keep the good optimal solution of the population-optimal architecture while enabling fast generalization (Section 3). We refer to this architecture as the quadratic GAN (Figure 4). We show that the global optimizer of quadratic GAN applied on the empirical distribution is the empirical r-PCA (Theorem 2). Finally, we study the global stability of di\ufb00erent computational approaches for solving the proposed GAN architecture. In particular, we prove that in the full-rank case alternating gradient descent converges globally to the minmax solution, under some conditions (Section 4). 
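For readers who want to reproduce the evaluation protocol of these experiments, the sketch below shows one plausible reading of it in plain numpy: the error metric is the Frobenius norm between the covariance of generated samples and a reference covariance, and the linear-generator baseline is the r-PCA of the sample covariance. This is a minimal sketch, not the authors' code; the function names and the toy usage at the end are ours, and the benchmark covariance K = I/sqrt(d) follows the setting described above.

```python
import numpy as np

def covariance_frobenius_error(fake_samples, true_cov):
    """Frobenius-norm gap between the sample covariance of generated data
    and a reference covariance -- the evaluation metric used in these experiments."""
    emp_cov = np.cov(fake_samples, rowvar=False, bias=True)
    return np.linalg.norm(emp_cov - true_cov, ord="fro")

def r_pca_covariance(sample_cov, r):
    """Rank-r PCA of a covariance matrix: keep only the top-r eigenpairs.
    One reading of the baseline used for linear generators with r < d."""
    eigvals, eigvecs = np.linalg.eigh(sample_cov)      # ascending eigenvalues
    top = np.argsort(eigvals)[::-1][:r]
    return (eigvecs[:, top] * eigvals[top]) @ eigvecs[:, top].T

# toy usage mirroring the Gaussian benchmark: y ~ N(0, I/sqrt(d))
d, n, r = 32, 100_000, 8
K = np.eye(d) / np.sqrt(d)
y = np.random.randn(n, d) @ np.linalg.cholesky(K).T
K_hat = np.cov(y, rowvar=False, bias=True)
print("r-PCA baseline error:",
      np.linalg.norm(r_pca_covariance(K_hat, r) - K, ord="fro"))
```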
1.3 Prior Work Broadly speaking, previous work in GANs study three main properties: (1) Stability where the focus is on the convergence of the commonly used alternating gradient descent approach to global/local optimizers (equilibriums) for GAN\u2019s optimization (e.g., [6,10\u201313], etc.), (2) Formulation where the focus is on designing proper loss functions for GAN\u2019s optimization (e.g., WGAN+Weight Clipping [4], WGAN+Gradient Penalty [5], GAN+Spectral Normalization [14], WGAN+Truncated Gradient Penalty [15], relaxed WGAN [16], f-GAN [17], MMD-GAN [18, 19] , Least-Squares GAN [20], Boundary equilibrium GAN [21], etc.), and (3) Generalization where the focus is on understanding the required number of samples to learn a probability model using GANs (e.g., [22]). We address all three issues in the design of the Quadratic GAN. Some references have also proposed model-based GANs for the Gaussian benchmark ( [10,13]). For example, [13] uses a quadratic function as the 4 \fyt(I-HHt)y y1, y2, ... \u0454 Rd x1, x2, ... \u0454 Rr G noise fake data real data Discriminator Generator yt((HHt)-I)y y1, y2, ... \u0454 Rd ^ ^ + + ^ ^ (b) (a) ML baseline Glorot initialization He initialization WGAN-GP Quadratic GAN Frobenius norm error Figure 4: (a) Quadratic GAN, with a linear generator and a quadratic discriminator. On the training data, the generator minimizes over the d by r matrix G and the adversary maximizes over the d by d matrix H. (b) Performance comparison between quadratic GAN and WGAN+GP for r = 32. discriminator in the WGAN optimization. This design, however, does not recover the maximum likelihood/PCA solutions in the Gaussian benchmark, unlike the Quadratic GAN. Moreover, no global stability results were proven. 2 A General Formulation for GANs Let {yi}n i=1 be n observed data points in Rd drawn i.i.d. from the distribution PY . Let Qn Y be the empirical distribution of these observed samples. Moreover, let PX be a normal distribution N(0,Ir). GANs can be viewed as an optimization that minimizes a distance between the observed empirical distribution Qn Y and the generated distribution PG(X). The population GAN optimization replaces Qn Y with PY and is the setting we focus on in this section. The question we ask in this section is: what is a natural way of specifying a loss function \u2113for GANs and how it determines the GAN\u2019s objective? We answer the question in general and then specialize to the Gaussian benchmark by choosing an appropriate loss function for that case. We then show that we can get a good population solution under this loss function. 2.1 WGAN Revisited Let us start with the WGAN optimization [4]: min G(.)\u2208G W1(PY ,PG(X)), (1) where G is the set of generator functions, and the p-th order Wasserstein distance between distributions PZ1 and PZ2 is de\ufb01ned as [23] W p p (PZ1,PZ2) \u2236= min PZ1,Z2 E[\u2225Z1 \u2212Z2\u2225p], (2) where the minimization is over all joint distributions with marginals \ufb01xed. Replacing (2) in (1), the WGAN optimization can be re-written as min G(.)\u2208G min PG(X),Y E[\u2225Y \u2212G(X)\u2225]. (3) 5 \for equivalently: min PX,Y min G(.)\u2208G E[\u2225Y \u2212G(X)\u2225], (4) where the minimization is over all joint distributions PX,Y with \ufb01xed marginals PX and PY . We now connect (4) to the supervised learning setup. 
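Before making that connection, a small numerical illustration of the inner coupling problem in (2)-(4) may be helpful. The sketch below solves the discrete optimal-transport problem between two tiny empirical samples as a linear program; it is only meant to make the objective concrete, not to be a practical solver. It assumes scipy is available, and the function name is ours.

```python
import numpy as np
from scipy.optimize import linprog

def empirical_ot_cost(y, y_fake, p=1):
    """Inner minimization of (2)-(4): cheapest coupling between two empirical
    distributions, written as a small linear program (tiny sample sizes only)."""
    n, m = len(y), len(y_fake)
    cost = np.linalg.norm(y[:, None, :] - y_fake[None, :, :], axis=-1) ** p
    # variables: coupling matrix P (n x m), flattened row-major
    A_eq, b_eq = [], []
    for i in range(n):                      # row marginals: sum_j P_ij = 1/n
        row = np.zeros(n * m); row[i * m:(i + 1) * m] = 1.0
        A_eq.append(row); b_eq.append(1.0 / n)
    for j in range(m - 1):                  # column marginals (last one is redundant)
        col = np.zeros(n * m); col[j::m] = 1.0
        A_eq.append(col); b_eq.append(1.0 / m)
    res = linprog(cost.ravel(), A_eq=np.array(A_eq), b_eq=np.array(b_eq),
                  bounds=(0, None), method="highs")
    return res.fun                          # E[||Y - G(X)||^p] under the best coupling

# tiny usage: W1-type cost (p = 1) between two 2-D point clouds
rng = np.random.default_rng(0)
print(empirical_ot_cost(rng.normal(size=(6, 2)), rng.normal(size=(6, 2)), p=1))
```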
In supervised learning, the joint distribution PX,Y is \ufb01xed and the goal is to learn a relationship between the feature variable represented by X \u2208Rr, and the target variable represented by Y \u2208Rd, according to the following optimization: min G(.)\u2208G E[\u2113(Y,G(X))], (5) where \u2113is the loss function. Assuming the marginal distribution of X is the same in both optimizations (4) and (5), we can connect the two optimization problems by setting \u2113(y,y\u2032) = \u2225y \u2212y\u2032\u2225 in optimization (5). Note that for every \ufb01xed PX,Y , the solution of the supervised learning problem (5) yields a predictor g which is a feasible solution to the WGAN optimization problem (4). Therefore, the WGAN optimization (3) can be re-interpreted as solving the easiest such supervised learning problem, over all possible joint distributions PX,Y with \ufb01xed PX and PY . 2.2 From Supervised to Unsupervised Learning GAN is a solution to an unsupervised learning problem. What we are establishing above is a general connection between supervised and unsupervised learning problems: a good predictor G learnt in a supervised learning problem can be used to generate samples of the target variable Y. Hence, to solve an unsupervised learning problem for Y with distribution PY , one should solve the easiest supervised learning problem PX,Y with given marginal PY (and PX, the randomness generating distribution). This is in contrast to the traditional view of the unsupervised learning problem as observing the feature variable X without the label Y . (Thus in this paper we break with tradition and use Y to denote data and X as randomness for the generator in stating the GAN problem.) This connection between supervised and unsupervised learning leads to a natural way of specifying the loss function in GANs: we simply replace the \u21132 in (3) with a general loss function \u2113: min G(.)\u2208G min PG(X),Y E[\u2113(Y,G(X))]. (6) The inner optimization is the optimal transport problem between distributions of G(X) and Y [23] with general cost \u2113. This is a linear programming problem for general cost, so there is always a dual formulation (the Kantorovich dual [23]). The dual formulation can be interpreted as a generalized discriminator optimization problem for the cost \u2113. (For example, in the case of \u2113being the Euclidean norm, we get WGAN.) Hence, we use (6) as a formulation of GANs for general loss functions. Note that an optimal transport view to GANs has been studied in other references (e.g., [4,24]). Our contribution in this section is to make a connection between supervised and unsupervised learning problems which we will exploit to specify a proper loss function for GANs in the Gaussian model. 6 \f2.3 Quadratic Loss and Linear Generators The most widely used loss function in supervised learning is the quadratic loss: \u2113(y,y\u2032) = \u2225y \u2212y\u2032\u22252 (squared Euclidean norm). The quadratic loss has a strong synergy with the Gaussian model, as observed by Gauss himself. For example, under the Gaussian model and the quadratic loss in the supervised learning problem (5), the optimal g is linear, thus forming a statistical basis for linear regression. Given the connection between supervised and unsupervised learning, we use this loss function for formulating the GAN for Gaussian data . This choice of the loss function leads to the following GAN optimization which we refer to as W2GAN: min G(.)\u2208G W 2 2 (PY ,PG(X)). 
(7) A natural choice of G is the set of all linear generators, from Rr to Rd. Since Wasserstein distances are weakly continuous measures in the probability space [23], similar to WGAN, the optimization of the W2GAN is well-de\ufb01ned even if r < d. The dual formulation (discriminator) for W 2 2 is [23]: W 2 2 (PY ,PG(X)) = max \u03c8(.):convex E[\u2225Y \u22252 \u22122\u03c8(Y )] \u2212E[2\u03c8\u2217(G(X)) \u2212\u2225G(X)\u22252], (8) where \u03c8\u2217(\u02c6 y) \u2236= max v (vt\u02c6 y \u2212\u03c8(v)) (9) is the convex-conjugate of the function \u03c8(.). Combining (7 )and (8), we obtain the minmax formulation of W2GAN: min g\u2208G max \u03c8(.):convex E[\u2225Y \u22252 \u22122\u03c8(Y )] \u2212E[2\u03c8\u2217(G(X)) \u2212\u2225G(X)\u22252]. (10) 2.4 Population Solution: PCA There is a simple solution to the optimization problem (7) in the population setting. Theorem 1 Let Y \u223cN(0,K) where K is full-rank. Let X \u223cN(0,Ir) where r \u2264d. The optimal GAN solution in the population setting under linear generators G is the r-PCA solution of Y . We say \u02c6 Y is the r-PCA solution of Y if K \u02c6 Y is a rank r matrix whose top r eigenvalues and eigenvectors are the same as top r eigenvalues and eigenvectors of K. This theorem is satisfactory as it connects GANs to PCA, one of the most basic unsupervised learning methods. 3 Quadratic GAN The discriminator of the W2GAN optimization (10) is constrained over all convex functions. Since this set is non-parametric, we are unable to use gradient descent to compute a solution for this optimization. Moreover, having such a large feasible set for the discriminator function can cause poor generalization. 7 \fConsider the empirical version of the population W2GAN optimization problem (7): min G(.)\u2208G W 2 2 (Qn Y ,PG(X)), (11) where Qn Y is the empirical distribution of the n data points {yi}n i=1. Let g\u2217 n be the optimal solution of this problem. The distance between the generated distribution G\u2217(X) and the true distribution PY , W 2 2 (PY ,Pg\u2217 n(X)), converges to zero as n \u2192\u221e. It was shown in [22] that if the generator class G is rich enough so that the generator can memorize the data and generate the empirical distribution Qn Y itself, then this rate of convergence is very slow, of the order of n\u22122/d. (Strictly speaking, they have only shown it for the W1 distance, but a very similar result holds for W2 as well.) This is because the empirical distribution Qn Y converges very slowly to the true distribution PY in the W2 distance. Hence, the number of samples required for convergence is exponential in the dimension d. In Appendix Section C, we show that in our Gaussian setup, even if we constrain the generators to single-parameter linear functions that can generate the true distribution, the rate of convergence is still n\u22122/d. Therefore, to overcome the generalization issue, the only option is to further constrain the discriminator. Ideally one would like to properly constrain the discriminator function such that any population solution of the constrained optimization is a population solution of the original optimization and vice versa, while at the same time allowing fast generalization. In this section, we show how we can achieve this goal for the Gaussian benchmark. This view can potentially be extended to more complex distributions as we explain in Section 5. 
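To make the population solution of Theorem 1 concrete, the following sketch builds a linear generator whose covariance is the r-PCA of K and measures its distance to the data distribution using the standard closed-form W2 between zero-mean Gaussians, the same expression that reappears in (18) below. This is an illustrative sketch under our own choice of random K; the helper names are hypothetical.

```python
import numpy as np
from scipy.linalg import sqrtm

def w2_squared_zero_mean(K1, K2):
    """Closed-form squared W2 distance between N(0, K1) and N(0, K2)."""
    s1 = sqrtm(K1)
    cross = sqrtm(s1 @ K2 @ s1)
    return np.trace(K1) + np.trace(K2) - 2.0 * np.real(np.trace(cross))

def pca_linear_generator(K, r):
    """Linear generator G (d x r) with G G^t equal to the r-PCA of K,
    i.e., the population optimum described in Theorem 1."""
    eigvals, eigvecs = np.linalg.eigh(K)
    top = np.argsort(eigvals)[::-1][:r]
    return eigvecs[:, top] * np.sqrt(eigvals[top])

# sketch: W2 error of the rank-r population solution for a random covariance
d, r = 8, 3
A = np.random.randn(d, d)
K = A @ A.T / d
G = pca_linear_generator(K, r)
print(w2_squared_zero_mean(K, G @ G.T))
```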
The following lemma characterizes the optimal solution of optimization (8) [25]: Lemma 1 Let PY be absolutely continuous whose support contained in a convex set in Rd. For a \ufb01xed G(.) \u2208G, let \u03c8opt be the optimal solution of optimization (8). This solution is unique. Moreover, we have \u25bd\u03c8opt(Y ) dist = G(X), (12) where dist = means matching distributions. In our benchmark setup, since G(X) is Gaussian, \u25bd\u03c8opt is a linear function. Thus, without loss of generality, \u03c8(.) in the discriminator optimization can be constrained to quadratic functions of the form \u03c8(y) = ytAy/2 where A is positive semide\ufb01nite. For the quadratic function, we have \u03c8\u2217(\u02c6 y) = \u02c6 ytA\u2020\u02c6 y/2 when range(\u02c6 Y ) \u2286range(A). Replacing these in optimization (10), we obtain: min G max A\u2ab00 E[Y t(I \u2212A)Y ] \u2212E[ \u02c6 Y t(A\u2020 \u2212I)\u02c6 Y ] (13) range(G) \u2286range(A). Without loss of generality, we can replace the constraint range(G) \u2286range(A) with range(G) = range(A). It is because for a given A, this increases the size of the feasible set for G optimization, thus the objective can achieve a smaller value. For a given G, one can decompose A as A1 + A2 where range(A1) = range(G) and range(A2) \u2229range(G) = \u2205. Note that by ignoring A2, the objective function does not decrease. Therefore, optimization (13) can be written as min G max A\u2ab00 E[Y t(I \u2212A)Y ] \u2212E[ \u02c6 Y t(A\u2020 \u2212I)\u02c6 Y ] (14) range(G) = range(A). 8 \fUsing the fact that trace is invariant under cyclic permutations and by replacing A = HHt, the objective function of the above optimization can be re-written as: J(G,H) =Tr[(I \u2212HHt)K] \u2212Tr[((HHt) \u2020 \u2212I)GGt]. (15) In practice, we apply GANs to the observed data (i.e., the empirical distribution). In that case, in the above objective function, K (the true covariance) should be replaced by \u02dc K (the empirical covariance). This leads to the quadratic GAN optimization: min G max H Tr[(I \u2212HHt) \u02dc K] \u2212Tr[((HHt) \u2020 \u2212I)GGt] (16) range(G) = range(H). Note that since the global optimizer of optimization (15) is PCA (Theorem 1), the global optimizer of optimization (16) is empirical PCA: Theorem 2 Let \u02dc Kr be the r-PCA of the sample covariance matrix. Let (G\u2217,H\u2217) be a global solution for the quadratic GAN optimization (16). Then, we have G\u2217(G\u2217)t = \u02dc Kr. I.e., quadratic GAN recovers the empirical PCA solution as the generative model. Next, we examine the generalization error of the quadratic GAN. Consider the case where d = r (the case r < d is similar). The generalization error can be written as the W2 distance between the true distribution PY and the learned distribution PG\u2217(X) (Appendix Section C): W 2 2 (PY ,PG\u2217(X)) = W 2 2 (N(0,K),N(0, \u02dc K)). (17) The W 2 2 distance between two Gaussians depends only on the covariance matrices. More speci\ufb01cally: W 2 2 (N(0,K),N(0, \u02dc K)) = Tr(K) + Tr( \u02dc K) \u22122Tr((K1/2 \u02dc KK1/2) 1/2 ). (18) Hence, the convergence of this quantity only depends on the convergence of the empirical covariance to the population covariance, together with smoothness property of this function of the covariance matrices. The convergence has been established to be at a quick rate of \u02dc O( \u221a d/n) [26]. 4 Stability Theorem 2 merely focuses on the quality of the global solution of the quadratic GAN\u2019s optimization, ignoring its computational aspects. 
One common way to solve the GAN\u2019s min-max optimization is to use alternating gradient descent with sG gradient steps for the generator updates and sD gradient steps for the discriminator updates. For simplicity, we refer to such a method as the (sG,sD)alternating gradient descent. In this section, we analyze the global stability of the quadratic GAN under the alternating gradient descent approach. First, we analyze the stability of the quadratic GAN under the (1,1)-alternating GD in the full-rank case. By using variables U \u2236= GGt and A \u2236= HHt, optimization (16) can be written as min U max A Tr[(I \u2212A) \u02dc K] \u2212Tr[((At) \u2020 \u2212I)Ut] (19) range(U) = range(A). For this case, we have the following result: 9 \fTheorem 3 In the quadratic GAN optimization (19), assuming full rank A and r = d, the (1,1)alternating gradient descent is globally stable. Proof 1 Optimization (3) is a convex-concave min-max problem. Using the Arrow-Hurwicz-Uzawa result [27], one can show that the (1,1)-alternating gradient descent is globally stable for this optimization. In the standard quadratic GAN, the alternating GD is applied on the (G,H) objective function which is not generally convex-concave. For this case, we have the following result: Theorem 4 In the quadratic GAN optimization (16), assuming \u02dc K = I, full rank H and r = d, the (1,1)-alternating gradient descent is globally stable. To prove Theorem 4, we use the following function as a Lyapunov function: V (G,H) =Tr[GGt \u2212I \u2212log (GGt)] + Tr[HHt \u2212I \u2212log (HHt)]. (20) Each term of this function is the Von Neumann divergence. We prove that this non-negative function is monotonically decreasing along every trajectory of the (1,1)-alternating gradient descent and its value is zero at the global solution. This phenomena is non-trivial because the Frobenius norm distance between GGt and \u02dc K is not monotonically decreasing along every trajectory (Appendix Figure 7). In the low-rank case where r < d, however, we have the following negative result: Theorem 5 In the quadratic GAN optimization (16), if r < d, the (sG,sD)-alternating gradient descent is not globally stable for any sG and sD. Proof 2 Note that due to the constraint range(G) = range(H), by initializing G to some matrix, the colum-space of G and H does not change with gradient updates. This leads to the above result. One can think about using an equivalent optimization (16) where the constraint range(G) = range(H) is replaced by the constraint range(G) \u2286range(H) (by assuming A = HHt). For example, if H is full-rank, this constraint always holds. However, this does not solve the stability issue of Theorem 5 . It is because in the desired saddle point, H\u2217should be a low-rank matrix whose range matches the range of G\u2217. If one starts the alternating GD with a full-rank H, the second term of the objective function (16) would decrease unboundedly when H loses rank in the null-space of G (because of the term (HHt)\u2020GGt). Therefore, unless H has a matching range with G, alternating GD will not converge to a low-rank solution for H. As we explained above, the main source of the instability of the quadratic GAN optimization in the low-rank case comes from the constraint range(G) = range(H), i.e. the matching column-space of the generator and the discriminator functions. 
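Before turning to the decoupled approach for the low-rank case described next, the full-rank (1,1)-alternating gradient descent analyzed in Theorems 3 and 4 can be sketched in a few lines. The gradients below are our own hand-derived derivatives of the objective in (16), the update order (G-step, then H-step), initialization, and step size are arbitrary choices, and no claim is made that these hyper-parameters match the paper's experiments; with a small enough step the iterates should approach GG^t close to K~, consistent with Theorem 4.

```python
import numpy as np

def quad_gan_alternating_gd(K_tilde, lr=0.02, iters=5000, seed=0):
    """(1,1)-alternating gradient descent on the quadratic-GAN objective (16)
    in the full-rank case r = d, using hand-derived gradients of
    J(G,H) = Tr[(I - HH^t)K~] - Tr[((HH^t)^{-1} - I)GG^t]."""
    d = K_tilde.shape[0]
    rng = np.random.default_rng(seed)
    G = np.eye(d) + 0.1 * rng.standard_normal((d, d))
    H = np.eye(d) + 0.1 * rng.standard_normal((d, d))
    for _ in range(iters):
        A_inv = np.linalg.inv(H @ H.T)
        G = G + 2.0 * lr * (A_inv - np.eye(d)) @ G              # generator descends J
        grad_H = 2.0 * (A_inv @ G @ G.T @ A_inv - K_tilde) @ H
        H = H + lr * grad_H                                     # discriminator ascends J
    return G, H

# Theorem 4 setting: K~ = I and a full-rank initialization near the identity
d = 8
G, H = quad_gan_alternating_gd(np.eye(d))
print("||GG^t - K~||_F =", np.linalg.norm(G @ G.T - np.eye(d)))
```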
One way to deal with this issue is to decouple the optimization to two parts where in one part we optimize the subspace and in the second part, we solve GAN\u2019s min-max optimization within that subspace. Below, we explain this approach. We denote the subspace by some orthogonal basis S \u2208Rd\u00d7r where StS = I. Then, we re-write G \u2236= SGS, H \u2236= SHS, (21) 10 \fwhere GS and HS are full-rank r \u00d7 r matrices. Also, we de\ufb01ne KS \u2236= StKS. Using these notation, the objective function of the quadratic GAN can be re-written as: J(S,GS,HS) =Tr[(I \u2212HSHt S)KS] \u2212Tr[((HSHt S) \u2020 \u2212I)GSGt S] + Tr[K \u2212KS]. (22) Note that the \ufb01rst two terms of this objective is the same as (15) where all variables are projected to the column-space of S. Using the above argument, we propose the following min-min-max optimization: min S min GS max HS J(S,GS,HS) (23) StS = I. The inner min-max optimization over GS and HS for a given S is similar to the full-rank case analysis (Theorem 4). Given the global convergence of the (1,1)-alternating GD in the full-rank case, the outer optimization on S can be re-written as max S Tr[StKS] (24) StS = I. Although this optimization is non-convex, it has been shown that its global optimizer, which recovers the leading eigenvectors of K, can be computed e\ufb03ciently using GD [28]. An alternative approach to solve the quadratic GAN optimization (16) is to solve the max part as a closed form and use GD to solve the min part. We analyze the convergence of this approach in Appendix Theorem 6. 5 Discussion Our experiments on state-of-the-art GAN architectures suggest limitations of model-free designs even when data comes from a very basic Gaussian model. This motivates us to take a model-based approach to designing GANs. In this paper, we accomplish this goal in the spacial case of Gaussian models. Even though this is for a restrictive case, we have learnt a few lessons which will be useful as we broaden our approach. We obtained a general way to specify loss functions for GANs, by connecting the unsupervised GAN learning problem to the supervised learning problem. The quadratic loss function used for the Gaussian problem is a special case of this general connection. Moreover, we learnt that by properly constraining the class of generators and the class of discriminators in a balanced way, we can preserve good population solution while allowing fast generalization. Finally, we saw that using a model-based design, we could analyze the global stability of di\ufb00erent computational approaches using gradient descent. These properties are hard to come by in model-free designs. Our framework can potentially be used to design GANs for more complex distributions. For example, consider an error-free GAN architecture where there exists G\u2217\u2208G such that PG\u2217(X) = PY . The key question is how to design a balanced discriminator function for a given generator class G, i.e. if G is the set of neural network functions with l layers each with m neurons, what should be 11 \fthe discriminator function set? We provide a non-parametric answer to this question in Appendix Section D. A parametric characterization of the discriminator class D for a given generator class G is an interesting future direction for a model-based view to designing GANs. 6 Acknowledgment We would like to thank Changho Suh, Fei Xia and Jiantao Jiao for helpful discussions." 
+ }, + { + "url": "http://arxiv.org/abs/1710.02196v1", + "title": "Porcupine Neural Networks: (Almost) All Local Optima are Global", + "abstract": "Neural networks have been used prominently in several machine learning and\nstatistics applications. In general, the underlying optimization of neural\nnetworks is non-convex which makes their performance analysis challenging. In\nthis paper, we take a novel approach to this problem by asking whether one can\nconstrain neural network weights to make its optimization landscape have good\ntheoretical properties while at the same time, be a good approximation for the\nunconstrained one. For two-layer neural networks, we provide affirmative\nanswers to these questions by introducing Porcupine Neural Networks (PNNs)\nwhose weight vectors are constrained to lie over a finite set of lines. We show\nthat most local optima of PNN optimizations are global while we have a\ncharacterization of regions where bad local optimizers may exist. Moreover, our\ntheoretical and empirical results suggest that an unconstrained neural network\ncan be approximated using a polynomially-large PNN.", + "authors": "Soheil Feizi, Hamid Javadi, Jesse Zhang, David Tse", + "published": "2017-10-05", + "updated": "2017-10-05", + "primary_cat": "stat.ML", + "cats": [ + "stat.ML", + "cs.LG" + ], + "main_content": "Introduction Neural networks have been used in several machine learning and statistical inference problems including regression and classi\ufb01cation tasks. Some successful applications of neural networks and deep learning include speech recognition [1], natural language processing [2], and image classication [3]. The underlying neural network optimization is non-convex in general which makes its training NP-complete even for small networks [4]. In practice, however, di\ufb00erent variants of local search methods such as the gradient descent algorithm show excellent performance. Understanding the reason behind the success of such local search methods is still an open problem in the general case. There has been several recent work in the theoretical literature aiming to study risk landscapes of neural networks and deep learning under various modeling assumptions. We review these work in Section 4. In this paper, we study a key question whether an unconstrained neural network can be approximated with a constrained one whose optimization landscape has good theoretical properties. For two-layer neural networks, we provide an a\ufb03rmative answer to this question by introducing a family of constrained neural networks which we refer to as Porcupine Neural Networks (PNNs) (Figure 1). In PNNs, an incoming weight vector to a neuron is constrained to lie over a \ufb01xed line. For example, a neural network with multiple inputs and multiple neurons where each neuron is connected to one input is a PNN since input weight vectors to neurons lie over lines parallel to standard axes. We analyze population risk landscapes of two-layer PNNs with jointly Gaussian inputs and relu activation functions at hidden neurons. We show that under some modeling assumptions, 1 arXiv:1710.02196v1 [stat.ML] 5 Oct 2017 \f. . . . . . xd x2 x1 wk w1 (b) (a) w1 wk 1 1 1 1 Figure 1: (a) A two-layer Porcupine Neural Network (PNN). (b) In PNN, an incoming weight vector to a neuron is constrained to lie over a line in a d-dimensional space. most local optima of PNN optimizations are also global optimizers. 
Moreover, we characterize the parameter regions where bad local optima (i.e., local optimizers that are not global) may exist. In our analysis, we observe that a particular kernel function depicted in Figure 6 plays an important role in characterizing population risk landscapes of PNNs. This kernel function is resulted from the computation of the covariance matrix of Gaussian variables restricted to a dual convex cone (Lemma 6). We will explain this observation in more detail. Next, we study whether one can approximate an unconstrained (fully-connected) neural network function with a PNN whose number of neurons are polynomially-large in dimension. Our empirical results o\ufb00er an a\ufb03rmative answer to this question. For example, suppose the output data is generated using an unconstrained two-layer neural network with d = 15 inputs and k\u2217= 20 hidden neurons 1. Using this data, we train a random two-layer PNN with k hidden neurons. We evaluate the PNN approximation error as the mean-squared error (MSE) normalized by the L2 norm of the output samples in a two-fold cross validation setup. As depicted in Figure 2, by increasing the number of neurons of PNN, the PNN approximation error decreases. Notably, to obtain a relatively small approximation error, PNN\u2019s number of hidden neurons does not need to be exponentially large in dimension. We explain details of this experiment in Section 8. In Section 7, we study a characterization of the PNN approximation error with respect to the input dimension and the complexity of the unconstrained neural network function. We show that under some modeling assumptions, the PNN approximation error can be bounded by the spectral norm of the generalized Schur complement of a kernel matrix. We analyze this bound for random PNNs in the high-dimensional regime when the ground-truth data is generated using an unconstrained neural network with random weights. For the case where the dimension of inputs and the number of hidden neurons increase with the same rate, we compute the asymptotic limit. Moreover, we provide numerical results for the case when the number of hidden neurons grows with a polynomial rate in dimension. We also analyze a naive minimax approximation bound which requires PNN\u2019s number of neurons to be exponentially large in dimension. Finally, in Section 9, we discuss how the proposed PNN framework can potentially be used to explain the success of local search methods such as gradient descent in solving the unconstrained 1Note that for both unconstrained and constrained neural networks, the second layer weights are assumed to be equal to one. The extension of the results to a more general case is an interesting direction for future work. 2 \fPNN approximation error (normalized MSE) PNN\u2019s number of neurons Figure 2: Approximations of an unconstrained two-layer neural network with d = 15 inputs and k\u2217= 20 hidden neurons using random two-layer PNNs. neural network optimization. 1.1 Notation For matrices we use bold-faced upper case letters, for vectors we use bold-faced lower case letters, and for scalars we use regular lower case letters. For example, X represents a matrix, x represents a vector, and x represents a scalar number. In is the identity matrix of size n \u00d7 n. ej is a vector whose j-th element is non-zero and its other elements are zero. 1n1,n2 is the all one matrix of size n1 \u00d7n2. When no confusion arises, we drop the subscripts. 
1{x = y} is the indicator function which is equal to one if x = y, otherwise it is zero. relu(x) = max(x,0). Tr(X) and Xt represent the trace and the transpose of the matrix X, respectively. \u2225x\u22252 = xtx is the second norm of the vector x. When no confusion arises, we drop the subscript. \u2225x\u22251 is the l1 norm of the vector x. \u2225X\u2225is the operator (spectral) norm of the matrix X. \u2225x\u22250 is the number of non-zero elements of the vector x. < x,y > is the inner product between vectors x and y. x \u22a5y indicates that vectors x and y are orthogonal. \u03b8x,y is the angle between vectors x and y. N(\u00b5,\u0393) is the Gaussian distribution with mean \u00b5 and the covariance \u0393. f[A] is a matrix where the function f(.) is applied to its components, i.e., f[A](i,j) = f(A(i,j)). A\u2020 is the pseudo inverse of the matrix A. The eigen decomposition of the matrix A \u2208Rn\u00d7n is denoted by A = \u2211n i=1 \u03bbi(A)ui(A)ui(A)t, where \u03bbi(A) is the i-th largest eigenvalue of the matrix A corresponding to the eigenvector ui(A). We have \u03bb1(A) \u2265\u03bb2(A) \u2265\u22ef. 2 Unconstrained Neural Networks Consider a two-layer neural network with k neurons where the input is in Rd (Figure 1-a). The weight vector from the input to the i-th neuron is denoted by wi \u2208Rd. For simplicity, we assume 3 \fthat second layer weights are equal to one another. Let h(x;W) \u2236= k \u2211 i=1 \u03c6(wt ix), (2.1) where x = (x1,...,xd)t and W \u2236= (w1,w2,...,wk) \u2208W \u2286Rd\u00d7k. The activation function at each neuron is assumed to be \u03c6(z) \u2236= relu(z) = max(z,0). Consider F, the set of all functions f \u2236Rd \u2192R where f can be realized with a neural network described in (2.1). In other words, F \u2236= {f \u2236Rd \u2192R; \u2203W \u2208W, f(x) = h(x;W), \u2200x \u2208Rd}. (2.2) In a fully connected neural network structure, W = Rd\u00d7k. We refer to this case as the unconstrained neural network. Note that particular network architectures can impose constraints on W. Let x \u223cN(0,I). We consider the population risk de\ufb01ned as the mean squared error (MSE): L(W) \u2236= E[(h(x;W) \u2212y)2], (2.3) where y is the output variable. If y is generated by a neural network with the same architecture as of (2.1), we have y = h(x;Wtrue). Understanding the population risk function is an important step towards characterizing the empirical risk landscape [5]. In this paper, for simplicity, we only focus on the population risk. The neural network optimization can be written as follows: min W L(W) (2.4) W \u2208W. Let W\u2217be a global optimum of this optimization. L(W\u2217) = 0 means that y can be generated by a neural network with the same architecture (i.e., Wtrue is a global optimum.). We refer to this case as the matched neural network optimization. Moreover, we refer to the case of L(W\u2217) > 0 as the mismatched neural network optimization. Optimization (2.4) in general is non-convex owing to nonlinear activation functions in neurons. 3 Porcupine Neural Networks Characterizing the landscape of the objective function of optimization (2.4) is challenging in general. In this paper, we consider a constrained version of this optimization where weight vectors belong to a \ufb01nite set of lines in a d-dimensional space (Figure 1). This constraint may arise either from the neural network architecture or can be imposed by design. Mathematically, let L = {L1,...,Lr} be a set of lines in a d-dimensional space. 
Let Gi be the set of neurons whose incoming weight vectors lie over the line Li. Therefore, we have G1\u222a...\u222aGr = {1,...,k}. Moreover, we assume Gi \u2260\u2205for 1 \u2264i \u2264r otherwise that line can be removed from the set L. For every j \u2208Gi, we de\ufb01ne the function g(.) such that g(j) = i. For a given set L and a neuron-to-line mapping G, we de\ufb01ne FL,G \u2286F as the set of all functions that can be realized with a neural network (2.1) where wi lies over the line Lg(i). Namely, 4 \f. . . x1 . . . . . . xd x2 x1 (b) (a) Figure 3: Examples of (a) scalar PNN, and (b) degree-one PNN structures. FL,G \u2236= {f \u2236Rd \u2192R; \u2203W = (w1,...,wk),wi \u2208Lg(i), f(x) = h(x;W), \u2200x \u2208Rd}. (3.1) We refer to this family of neural networks as Porcupine Neural Networks (PNNs). In some cases, the PNN constraint is imposed by the neural network architecture. For example, consider the neural network depicted in Figure 3-a, which has a single input and k neurons. In this network structure, wi\u2019s are scalars. Thus, every realizable function with this neural network can be realized using a PNN where L includes a single line. We refer to this family of neural networks as scalar PNNs. Another example of porcupine neural networks is depicted in Figure 3-b. In this case, the neural network has multiple inputs and multiple neurons. Each neuron in this network is connected to one input. Every realizable function with this neural network can be described using a PNN whose lines are parallel to standard axes. We refer to this family of neural networks as degree-one PNNs. Scalar PNNs are also degree-one PNNs. However, since their analysis is simpler, we make such a distinction. In general, functions described by PNNs (i.e., FL,G) can be viewed as angular discretizations of functions described by unconstrained neural networks (i.e., F). By increasing the size of \u2223L\u2223 (i.e., the number of lines), we can approximate every f \u2208F by \u02c6 f \u2208FL,G arbitrarily closely. Thus, characterizing the landscape of the loss function over PNNs can help us to understand the landscape of the unconstrained loss function. The PNN optimization can be written as min W L(W) (3.2) wi \u2208Lg(i) 1 \u2264i \u2264k. Matched and mismatched PNN optimizations are de\ufb01ned similar to the unconstrained ones. In this paper, we characterize the population risk landscape of the PNN optimization (3.2) in both matched and mismatched cases. In Section 5, we consider the matched PNN optimization, while in Section 6, we study the mismatched one. Then, in Section 7, we study approximations of unconstrained neural network functions with PNNs. Note that a PNN can be viewed as a neural network whose feature vectors (i.e., input weight vectors to neurons) are \ufb01xed up to scalings due to the PNN optimization. This view can relate a 5 \frandom PNN (i.e., a PNN whose lines are random) to the application of random features in kernel machines [6]. Although our results in Sections 5, 6 and 7 are for general PNNs, we study them for random PNNs in Section 7 as well. 4 Related Work To explain the success of neural networks, some references study their ability to approximate smooth functions [7\u201313], while some other references focus on bene\ufb01ts of having more layers [14,15]. Overparameterized networks where the number of parameters are larger than the number of training samples have been studied in [16,17]. However, such architectures can cause generalization issues in practice [18]. 
References [5,19\u201321] have studied the convergence of the local search algorithms such as gradient descent methods to the global optimum of the neural network optimization with zero hidden neurons and a single output. In this case, the loss function of the neural network optimization has a single local optimizer which is the same as the global optimum. However, for neural networks with hidden neurons, the landscape of the loss function is more complicated than the case with no hidden neurons. Several work has studied the risk landscape of neural network optimizations for more complex structures under various model assumptions [22\u201332]. Reference [22] shows that in the linear neural network optimization, the population risk landscape does not have any bad local optima. Reference [23] extends these results and provides necessary and su\ufb03cient conditions for a critical point of the loss function to be a global minimum. Reference [24] shows that for a two-layer neural network with leaky activation functions, the gradient descent method on a modi\ufb01ed loss function converges to a global optimizer of the modi\ufb01ed loss function which can be di\ufb00erent from the original global optimum. Under an independent activations assumption, reference [25] simpli\ufb01es the loss function of a neural network optimization to a polynomial and shows that local optimizers obtain approximately the same objective values as the global ones. This result has been extended by reference [22] to show that all local minima are global minima in a nonlinear network. However, the underlying assumption of having independent activations at neurons usually are not satis\ufb01ed in practice. References [26\u201328] consider a two-layer neural network with Gaussian inputs under a matched (realizable) model where the output is generated from a network with planted weights. Moreover, they assume the number of neurons in the hidden layer is smaller than the dimension of inputs. This critical assumption makes the loss function positive-de\ufb01nite in a small neighborhood near the global optimum. Then, reference [28] provides a tensor-based method to initialize the local search algorithm in that neighborhood which guarantees its convergence to the global optimum. In our problem formulation, the number of hidden neurons can be larger than the dimension of inputs as it is often the case in practice. Moreover, we characterize risk landscapes for a certain family of neural networks in all parameter regions, not just around the global optimizer. This can guide us towards understanding the reason behind the success of local search methods in practice. For a neural network with a single non-overlapping convolutional layer, reference [29] shows that all local optimizers of the loss function are global optimizers as well. They also show that in the overlapping case, the problem is NP-hard when inputs are not Gaussian. Moreover, reference [30] studies this problem with non-standard activation functions, while reference [31] considers the case where the weights from the hidden layer to the output are close to the identity. Other related works include improper learning models using kernel based approaches [33,34] and a method of moments 6 \fw1 w2 s(W)=(+1,+1) s(W)=(-1,-1) Figure 4: For the scalar PNN, parameter regions where s(W) = \u00b11 may include bad local optima. In other regions, all local optima are global. This \ufb01gure highlights regions where s(W) = \u00b11 for a scalar PNN with two neurons. 
estimator using tensor decomposition [32]. 5 Population Risk Landscapes of Matched PNNs In this section, we analyze the population risk landscape of matched PNNs. In the matched case, the set of lines L and the neuron-to-line mapping G of a PNN used for generating the data are assumed to be known in training as well. We consider the case where these are unknowns in training in Section 6. 5.1 Scalar PNNs In this section, we consider a neural network structure with a single input and multiple neurons (i.e., d = 1, k > 1). Such neural networks are PNNs with L containing a single line. Thus, we refer to them as scalar PNNs. An example of a scalar PNN is depicted in Figure 3-a. In this case, every wi for 1 \u2264i \u2264k is a single scalar. We refer to that element by wi. We assume wi\u2019s are non-zero, otherwise the neural network structure can be reduced to another structure with fewer neurons. Theorem 1 The loss function (2.3) for a scalar PNN can be written as L(W) = 1 4 ( k \u2211 i=1 wi \u2212 k \u2211 i=1 w\u2217 i ) 2 + 1 4 ( k \u2211 i=1 \u2223wi\u2223\u2212 k \u2211 i=1 \u2223w\u2217 i \u2223) 2 . (5.1) Proof See Section 11.2. Since for a scalar PNN, the loss function L(W) can be written as sum of squared terms, we have the following corollary: 7 \fCorollary 1 For a scalar PNN, W is the global optimizer of optimization (3.2) if and only if k \u2211 i=1 wi = k \u2211 i=1 w\u2217 i , (5.2) k \u2211 i=1 \u2223wi\u2223= k \u2211 i=1 \u2223w\u2217 i \u2223. Next, we characterize local optimizers of optimization (3.2). Let s(wi) be the sign variable of wi, i.e., s(wi) = 1 if wi > 0, otherwise s(wi) = \u22121. Let s(W) \u225c(s(w1),...,s(wk))t. Let R(s) denote the space of all W where si = s(wi), i.e., R(s) \u225c {(w1,...,wk) \u2236s(wi) = si}. Theorem 2 If s(W\u2217) \u2260\u00b11: In every region R(s) whose s \u2260\u00b11, optimization (3.2) only has global optimizers without any bad local optimizers. In two regions R(1) and R(\u22121), optimization (3.2) does not have global optimizers and only has bad local optimizers. If s(W\u2217) = \u00b11: In regions R(s) where s \u2260\u00b11 and in the region R(\u2212s(W\u2217)), optimization (3.2) neither has global nor bad local optimizers. In the region R(s(W\u2217)), optimization (3.2) only has global optimizers without any bad local optimizers. Proof See Section 11.3. Theorem 2 indicates that optimization (3.2) can have bad local optimizers. However, this can occur only in two parameter regions, out of 2k regions, which can be checked separately (Figure 4). Thus, a variant of the gradient descent method which checks these cases separately converges to a global optimizer. Next, we characterize the Hessian of the loss function: Theorem 3 For a scalar PNN, in every region R(s), the Hessian matrix of the loss function L(W) is positive semide\ufb01nite, i.e., in every region R(s), the loss function is convex. In regions R(s) where s \u2260\u00b11, the rank of the Hessian matrix is two, while in two regions R(\u00b11), the rank of the Hessian matrix is equal to one. Proof See Section 11.4. Finally, for a scalar PNN, we illustrate the landscape of the loss function with an example. Figure 5 considers the case with a single input and two neurons (i.e., d = 1, k = 2). In Figure 5-a, we assume w\u2217 1 = 6 and w\u2217 2 = 4. According to Theorem 2, only the region R ((1,1)) contains global optimizers (all points in this region on the line w1 + w2 = 10 are global optimizers.). In Figure 5-b, we consider w\u2217 1 = 6 and w\u2217 2 = \u22124. 
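These two Figure 5 settings are easy to check numerically against the closed form (5.1). The sketch below evaluates (5.1) directly and compares it with a Monte-Carlo estimate of the population risk E[(h(x;W) - h(x;W*))^2] with x ~ N(0,1); the candidate weights, sample size, and function names are our own illustrative choices.

```python
import numpy as np

def scalar_pnn_loss_closed_form(w, w_star):
    """Closed-form population risk for a scalar PNN, Theorem 1 / Eq. (5.1)."""
    return 0.25 * (np.sum(w) - np.sum(w_star)) ** 2 \
         + 0.25 * (np.sum(np.abs(w)) - np.sum(np.abs(w_star))) ** 2

def scalar_pnn_loss_monte_carlo(w, w_star, n=200_000, seed=0):
    """Monte-Carlo estimate of E[(h(x;W) - h(x;W*))^2] with x ~ N(0,1) and
    h(x;W) = sum_i relu(w_i x); should agree with the closed form above."""
    x = np.random.default_rng(seed).standard_normal(n)
    h = np.maximum(np.outer(x, w), 0.0).sum(axis=1)
    h_star = np.maximum(np.outer(x, w_star), 0.0).sum(axis=1)
    return np.mean((h - h_star) ** 2)

# the two Figure 5 settings: (w1*, w2*) = (6, 4) and (6, -4)
for w_star in ([6.0, 4.0], [6.0, -4.0]):
    w = [5.0, -3.0]                       # an arbitrary candidate W
    print(scalar_pnn_loss_closed_form(w, w_star),
          scalar_pnn_loss_monte_carlo(w, w_star))
```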
According to Theorem 2, regions R ((1,\u22121)) and R ((\u22121,1)) have global optimizers, while regions R ((1,1)) and R ((\u22121,\u22121)) include bad local optimizers. 8 \fFigure 5: The landscape of the loss function for a scalar PNN with two neurons. In panel (a), we consider w\u2217 1 = 6 and w\u2217 2 = 4, while in panel (b), we have w\u2217 1 = 6 and w\u2217 2 = \u22124. According to Theorem 2, in the case of panel (a), the loss function does not have bad local optimizers, while in the case of panel (b), it has bad local optimizers in regions R ((\u22121,\u22121)) and R ((1,1)). 5.2 Degree-One PNNs In this section, we consider a neural network structure with more than one input and multiple neurons (d \u22651 and k \u22651) such that each neuron is connected to one input. Such neural networks are PNNs whose lines are parallel to standard axes. Thus, we refer to them as degree-one PNNs. Similar to the scalar PNN case, in the case of the degree-one PNN, every wi has one non-zero element. We refer to that element by wi. Let Gr be the set of neurons that are connected to the variable xr, i.e., Gr = {j \u2236wj(r) \u22600}. Therefore, we have G1 \u222a... \u222aGd = {1,...,k}. Moreover, we assume Gi \u2260\u2205for 1 \u2264i \u2264d, i.e., there is at least one neuron connected to each input variable. For every j \u2208Gr, we de\ufb01ne the function g(.) such that g(j) = r 2. Moreover, we de\ufb01ne qr \u2236= \u2211 i\u2208Gr \u2225wi\u2225, (5.3) q\u2217 r \u2236= \u2211 i\u2208Gr \u2225w\u2217 i \u2225. Finally, we de\ufb01ne q \u2236= (q1,...,qd)t and q\u2217\u2236= (q\u2217 1,...,q\u2217 d)t. Theorem 4 The loss function (2.3) for a degree-one PNN can be written as L(W) = 1 4\u2225 k \u2211 i=1 wi \u2212 k \u2211 i=1 w\u2217 i \u22252 + 1 4(q \u2212q\u2217)tC(q \u2212q\u2217), (5.4) 2These de\ufb01nitions match with de\ufb01nitions of G and g(.) for a general PNN. 9 \fwhere C = \u239b \u239c \u239c \u239c \u239d 1 2 \u03c0 \u22ef 2 \u03c0 2 \u03c0 1 \u22ef 2 \u03c0 \u22ee \u22f1 \u22ee 2 \u03c0 \u22ef 1 \u239e \u239f \u239f \u239f \u23a0 . (5.5) Proof See Section 11.5. Since C is a positive de\ufb01nite matrix, we have the following corollary: Corollary 2 W\u2217is a global optimizer of optimization (3.2) for a degree-one PNN if and only if \u2211 i\u2208Gr wi = \u2211 i\u2208Gr w\u2217 i , 1 \u2264r \u2264d (5.6) qi = q\u2217 i , 1 \u2264r \u2264d. Next, we characterize local optimizers of optimization (3.2) for degree-one PNNs. The sign variable assigned to the weight vector wj is de\ufb01ned as the sign of its non-zero element, i.e., s(wj) = s(wj) where wj is the non-zero element of wj. De\ufb01ne R(s1,...,sd) as the space of W where si is the sign vector of weights wj connected to input xi (i.e., j \u2208Gi). Theorem 5 For a degree-one PNN, in regions R(s1,...,sd) where si \u2260\u00b11 for 1 \u2264i \u2264d, every local optimizer is a global optimizer. In other regions, we may have bad local optima. Proof See Section 11.6. In practice, if the gradient descent algorithm converges to a point in a region R(s1,...,sd) where signs of weight vectors connected to an input are all ones or minus ones, that point may be a bad local optimizer. Thus, one may re-initialize the gradient descent algorithm in such cases. We show this e\ufb00ect through simulations in Section 8. 5.3 General PNNs In this section, we characterize the landscape of the loss function for a general PNN. Recall that L = {L1,...,Lr} is the set of lines in a d-dimensional space. 
Vectors over a line Li can have two orientations. We say a vector has a positive orientation if its component in the largest non-zero index is positive. Otherwise, it has a negative orientation. For example, w1 = (\u22121,2,0,3,0) has a positive orientation because w1(4) > 0, while the vector w2 = (\u22121,2,0,0,\u22123) has a negative orientation because w2(5) < 0. Mathematically, let \u00b5(wi) be the largest index of the vector wi with a non-zero entry, i.e., \u00b5(wi) = arg maxj(wi(j) \u22600). Then, s(wi) = 1 if \u00b5(wi) > 0, otherwise s(wi) = \u22121. Let ui be a unit norm vector on the line Li such that s(ui) = 1. Let UL = (u1,...,ur). Let AL \u2208Rr\u00d7r be a matrix such that its (i,j)-component is the angle between lines Li and Lj, i.e., AL(i,j) = \u03b8ui,uj. Moreover, let KL = Ut LUL = cos[AL]. Recall that Gi is the set of neurons whose incoming weight vectors lie over the line Li, i.e., Gi \u225c{j \u2236wj \u2208Li}. Moreover, if j \u2208Gi, we de\ufb01ne g(j) = i. In the degree-one PNN explained in Section 5.2, each line corresponds to an input because L contains lines parallel to standard axes. However, for a general PNN, we may not have such a correspondence between lines and inputs. 10 \f-1 -0.5 0 0.5 1 0 1 \u03c8(x) 2/\u009b x Figure 6: An illustration of the kernel function \u03c8(x) de\ufb01ned as in (5.9). With these notations, for wj \u2208Li, we have wj = \u2225wj\u2225s(wj)ug(j). (5.7) Moreover, for every wi and wj, we have \u03b8wi,wj = \u03c0 2 + (ag(i),g(j) \u2212\u03c0 2 )s(wi)s(wj). (5.8) De\ufb01ne the kernel function \u03c8 \u2236[\u22121,1] \u2192R as \u03c8(x) = x + 2 \u03c0 ( \u221a 1 \u2212x2 \u2212xcos\u22121(x)). (5.9) In the following Theorem, we show that this kernel function, which is depicted in Figure 6, plays an important role in characterizing optimizers of optimization (3.2). In particular, we show that the objective function of the neural network optimization has a term where this kernel function is applied (component-wise) to the inner product matrix among vectors u1,...,ur. Theorem 6 The loss function (2.3) for a matched PNN can be written as L(W) = 1 4\u2225 k \u2211 i=1 wi \u2212w\u2217 i \u22252 + 1 4(q \u2212q\u2217)t\u03c8[KL](q \u2212q\u2217), (5.10) where \u03c8(.) is de\ufb01ned as in (5.9) and q and q\u2217are de\ufb01ned as in (5.3). Proof See Section 11.7. For the degree-one PNN where ui = ei for 1 \u2264i \u2264d, the matrix C of (5.5) and the matrix \u03c8[KL] are the same. The kernel function \u03c8(.) has a linear term and a nonlinear term. Note that the inner product matrix KL is positive semide\ufb01nite. Below, we show that applying the kernel function \u03c8(.) (component-wise) to KL preserves this property. Lemma 1 For every L, \u03c8[KL] is positive semide\ufb01nite. 11 \f10 20 30 40 50 60 70 80 90 100 r 0 0.2 0.4 0.6 0.8 1 1.2 1.4 1.6 1.8 min eigenvalue \u03c8[KL] 10 -3 . . . . . . (b) (a) \u009b/r Figure 7: (a) An example of L in a two-dimensional space such that angles between adjacent lines are equal to one another. (b) The minimum eigenvalue of the matrix \u03c8[KL] for di\ufb00erent values of r. Proof See Section 11.8. Corollary 3 If \u03c8[KL] is a positive de\ufb01nite matrix, W\u2217is a global optimizer of optimization (3.2) if and only if k \u2211 i=1 wi = k \u2211 i=1 w\u2217 i , (5.11) q = q\u2217. Example 1 Let L = {L1,L2,...,Lr} contain lines in R2 such that angles between adjacent lines are equal to \u03c0/r (Figure 7-a). Thus, we have AL(i,j) = \u03c0\u2223i \u2212j\u2223/r for 1 \u2264i,j \u2264r. 
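This construction is straightforward to reproduce numerically. The sketch below builds K_L = cos[A_L] for r equally spaced lines in R^2, applies the kernel (5.9) elementwise, and reports the minimum eigenvalue; it is an illustrative check of Example 1 rather than the authors' code, and the function names are ours.

```python
import numpy as np

def psi(x):
    """The kernel of Eq. (5.9): psi(x) = x + (2/pi)(sqrt(1 - x^2) - x arccos(x))."""
    x = np.clip(x, -1.0, 1.0)
    return x + (2.0 / np.pi) * (np.sqrt(1.0 - x ** 2) - x * np.arccos(x))

def min_eig_equispaced_lines(r):
    """Example 1: r lines in R^2 with adjacent angles pi/r, so A_L(i,j) = pi|i-j|/r
    and K_L = cos[A_L]; return the minimum eigenvalue of psi[K_L]."""
    idx = np.arange(r)
    A = np.pi * np.abs(idx[:, None] - idx[None, :]) / r
    return np.linalg.eigvalsh(psi(np.cos(A))).min()

for r in (10, 20, 50, 100):
    print(r, min_eig_equispaced_lines(r))
```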
Figure 7-b shows the minimum eigenvalue of the matrix \u03c8[KL] for di\ufb00erent values of r. As the number of lines increases, the minimum eigenvalue of \u03c8[KL] decreases. However, for a \ufb01nite value of r, the minimum eigenvalue of \u03c8[KL] is positive. Thus, in this case, the condition of corollary 3 holds. This highlights why considering a discretized neural network function (i.e., \ufb01nite r) facilities characterizing the landscape of the loss function. Next, we characterize local optimizers of optimization (3.2) for a general PNN. De\ufb01ne R(s1,...,sr) as the space of W where si is the sign vector of weights wj over the line Li (i.e., j \u2208Gi). Theorem 7 For a general PNN, in regions R(s1,...,sr) where at least d of si\u2019s are not equal to \u00b11, every local optimizer of optimization (3.2) is a global optimizers. Proof See Section 11.9. 12 \fExample 2 Consider a two-layer PNN with d inputs, r lines and k hidden neurons. Suppose every line corresponds to t = k/r input weight vectors. If we generate weight vectors uniformly at random over their corresponding lines, for every 1 \u2264i \u2264r, we have P[si = \u00b11] = 21\u2212t. (5.12) As t increases, this probability decreases exponentially. According to Theorem 7, to be in the parameter region without bad locals, the event si = \u00b11 should occur for at most r \u2212d of the lines. Thus, if we uniformly pick a parameter region, the probability of selecting a region without bad locals is 1 \u2212 d\u22121 \u2211 i=1 (r i)(1 \u221221\u2212t)i2(1\u2212t)(r\u2212i) (5.13) which goes to one exponentially as r \u2192\u221e. In practice the number of lines r is much larger than the number of inputs d (i.e., r \u226bd). Thus, the condition of Theorem 7 which requires d out of r variables si not to be equal to \u00b11 is likely to be satis\ufb01ed if we initialize the local search algorithm randomly. 6 Population Risk Landscapes of Mismatched PNNs In this section, we characterize the population risk landscape of a mismatched PNN optimization where the model that generates the data and the model used in the PNN optimization are di\ufb00erent. We assume that the output variable y is generated using a two-layer PNN with k\u2217neurons whose weights lie on the set of lines L\u2217with neuron-to-line mapping G\u2217. That is y = k\u2217 \u2211 i=1 relu((w\u2217 i )t x), (6.1) where w\u2217 i lie on a line in the set L\u2217for 1 \u2264i \u2264k\u2217. The neural network optimization (3.2) is over PNNs with k neurons over the set of lines L with the neuron-to-line mapping G. Note that L and G can be di\ufb00erent than L\u2217and G\u2217, respectively. Let r = \u2223L\u2223and r\u2217= \u2223L\u2217\u2223be the number of lines in L and L\u2217, respectively. Let u\u2217 i be the unit norm vector on the line L\u2217 i \u2208L\u2217such that s(u\u2217 i ) = 1. Similarly, we de\ufb01ne ui as the unit norm vector on the line Li \u2208L such that s(ui) = 1. Let UL = (u1,...,ur) and UL\u2217= (u\u2217 1,...,u\u2217 r). Suppose the rank of UL is at least d. De\ufb01ne KL = Ut LUL \u2208Rr\u00d7r (6.2) KL\u2217= Ut L\u2217UL \u2208Rr\u2217\u00d7r\u2217 KL,L\u2217= Ut L\u2217UL\u2217\u2208Rr\u00d7r\u2217. Theorem 8 The loss function (2.3) for a mismatched PNN can be written as L(W) = 1 4\u2225 k \u2211 i=1 wi \u2212 k\u2217 \u2211 i=1 w\u2217 i \u22252 + 1 4qt\u03c8[KL]q + 1 4(q\u2217)t\u03c8[KL\u2217]q\u2217\u22121 2qt\u03c8[KL,L\u2217]q\u2217, (6.3) where \u03c8(.) 
is de\ufb01ned as in (5.9) and q and q\u2217are de\ufb01ned as in (5.3) using G and G\u2217, respectively. 13 \fProof See Section 11.10. If L = L\u2217and G = G\u2217, the mismatched PNN loss (6.3) simpli\ufb01es to the matched PNN loss (5.10). Corollary 4 Let K = ( KL KL,L\u2217 Kt L,L\u2217 KL\u2217 ) \u2208R(r+r\u2217)\u00d7(r+r\u2217). (6.4) Then, the loss function of a mismatched PNN can be lower bounded as L(W) \u22651 4\u2225q\u2217\u22252\u03bbmin (\u03c8[K]/\u03c8[KL]) (6.5) where \u03c8[K]/\u03c8[KL] \u2236= \u03c8[KL\u2217]\u2212\u03c8[KL\u2217]t\u03c8[KL]\u2020\u03c8[KL\u2217] is the generalized Schur complement of the block \u03c8[KL] in the matrix \u03c8[K]. In the mismatched case, the loss at global optima can be non-zero since the model used to generate the data does not belong to the set of training models. Next, we characterize local optimizers of optimization (3.2) for a mismatched PNN. Similar to the matched PNN case, we de\ufb01ne R(s1,...,sr) as the space of W where si is the vector of sign variables of weight vectors over the line Li. Theorem 9 For a mismatched PNN, in regions R(s1,...,sr) where at least d of si\u2019s are not equal to \u00b11, every local optimizer of optimization (3.2) is a global optimizer. Moreover, in those points we have L(W\u2217) = 1 4(q\u2217)t (\u03c8[K]/\u03c8[KL])q\u2217 (6.6) \u22641 4\u2225q\u2217\u22252\u2225\u03c8[K]/\u03c8[KL]\u2225. Proof See Section 11.11. When the condition of Theorem 9 holds, the spectral norm of the matrix \u2225\u03c8[K]/\u03c8[KL]\u2225provides an upper-bound on the loss value at global optimizers of the mismatched PNN. In Section 7, we study this bound in more detail. Moreover, in Section 7.2, we study the case where the condition of Theorem 9 does not hold (i.e., the local search method converges to a point in parameter regions where more than r \u2212d of variables si are equal to \u00b11). To conclude this section, we show that if UL is a perturbed version of UL\u2217, the loss in global optima of the mismatched PNN optimization (3.2) is small. This shows a continuity property of the PNN optimization with respect to line perturbations. Lemma 2 Let K is de\ufb01ned as in (6.4) where r = r\u2217. Let Z \u2236= U \u2212U\u2217be the perturbation matrix. Assume that \u03bbmin (\u03c8 [KL\u2217]) \u2265\u03b4. If 2\u221ar\u2225Z\u2225F + \u2225Z\u22252 F \u2264\u03b4 2, then \u2225\u03c8[K]/\u03c8[KL]\u22252 \u2264(1 + 2r \u03b4 )\u2225Z\u22252 F + 4\u221ar\u2225Z\u2225F . Proof See Section 11.12. 14 \f7 PNN Approximations of Unconstrained Neural Networks In this section, we study whether an unconstrained two-layer neural network function can be approximated by a PNN. We assume that the unconstrained neural network has d inputs and k\u2217 hidden neurons. This neural network function can also be viewed as a PNN whose lines are determined by input weight vectors to neurons. Thus, in this case r\u2217\u2264k\u2217where r\u2217is the number of lines of the original network. If weights are generated randomly, with probability one, r\u2217= k\u2217 since the probability that two random vectors lie on the same line is zero. Note that lines of the ground-truth PNN (i.e., the unconstrained neural network) are unknowns in the training step. For training, we use a two-layer PNN with r lines, drawn uniformly at random, and k neurons. 
Since we have relu activation functions at neurons, without loss of generality, we can assume k = 2r, i.e., for every line we assign two neurons (one for potential weight vectors with positive orientations on that line and the other one for potential weight vectors with negative orientations). Since there is a mismatch between the model generating the data and the model used for training, we will have an approximation error. In this section, we study this approximation error as a function of parameters d, r and r\u2217. 7.1 The PNN Approximation Error Under the Condition of Theorem 9 Suppose y is generated using an unconstrained two-layer neural network with k\u2217neurons, i.e., y = \u2211k\u2217 i=1 relu(< w\u2217 i ,x >). In this section, we consider approximating y using a PNN whose lines L are drawn uniformly at random. Since these lines will be di\ufb00erent than L\u2217, the neural network optimization can be formulated as a mismatched PNN optimization, studied in Section 6. Moreover, in this section, we assume the condition of Theorem 9 holds, i.e., the local search algorithm converges to a point in parameter regions where at least d of variables si are not equal to \u00b11. The case that violates this condition is more complicated and is investigated in Section 7.2. Under the condition of Theorem 9, the PNN approximation error depends on both \u2225q\u2217\u2225and \u2225\u03c8[K]/\u03c8[KL]\u2225. The former term provides a scaling normalization for the loss function. Thus, we focus on analyzing the later term. Since Theorem 9 provides an upper-bound for the mismatched PNN optimization loss by \u2225\u03c8[K]/\u03c8[KL]\u2225, intuitively increasing the number of lines in L should decrease \u2225\u03c8[K]/\u03c8[KL]\u2225. We prove this in the following theorem. Theorem 10 Let K be de\ufb01ned as in (6.4). We add a distinct line to the set L, i.e., Lnew = L\u222aLr+1. De\ufb01ne Knew = ( KLnew KLnew,L\u2217 Kt Lnew,L\u2217 KL\u2217 ) = \u239b \u239c \u239d 1 zt 1 zt 2 z1 KL KL,L\u2217 z2 Kt L,L\u2217 KL\u2217 \u239e \u239f \u23a0 \u2208R(r+r\u2217+1)\u00d7(r+r\u2217+1). (7.1) Then, we have \u2225\u03c8[Knew]/\u03c8[KLnew]\u2225\u2264\u2225\u03c8[K]/\u03c8[KL]\u2225. (7.2) More speci\ufb01cally, \u03c8[Knew]/\u03c8[KLnew] = \u03c8[K]/\u03c8[KL] \u2212\u03b1vvt, (7.3) where \u03b1 = (1 \u2212\u27e8\u03c8[z1],\u03c8[KL]\u22121\u03c8[z1]\u27e9) \u22121 \u22650, v = \u03c8[z2] \u2212\u03c8[KL,L\u2217]t\u03c8[KL]\u22121\u03c8[z1]. 15 \f0 500 1000 1500 2000 r (log scale) r 0 0.1 0.2 0.3 0.4 0.5 0.6 0 500 1000 1500 2000 r 0.2 0.3 0.4 0.5 0.6 0.7 spectral norm of Schur complement spectral norm of Schur complement (log scale) spectral norm of Schur complement d=30 d=20 d=10 r*=50 r*=20 r*=10 approximation using all r lines approximation using all r lines approximation using r* nearest lines (c) (b) (a) 102 103 10-2 10-1 100 Figure 8: (a) The spectral norm of \u03c8[K]/\u03c8[KL] for various values of d, r\u2217and r. (b) A log-log plot of curves in panel (a). (c) The spectral norm of \u03c8[Knearest]/\u03c8[KLnearest] for various values of d, r\u2217and r. Experiments have been repeated 100 times. Average results are shown. Proof See Section 11.13. Theorem 10 indicates that adding lines to L decreases \u2225\u03c8[K]/\u03c8[KL]\u2225. However, it does not characterize the rate of this decrease as a function of r, r\u2217and d. Next, we evaluate this error decay rate empirically for random PNNs. 
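The quantity evaluated in these experiments is the spectral norm of the generalized Schur complement ψ[K]/ψ[K_L]. A small NumPy sketch of that evaluation is given below, with two caveats: (i) the cross block is taken as K_{L,L*} = U_L^T U_{L*} ∈ R^{r×r*}, reading off the block structure and dimensions in (6.2) and (6.4); (ii) the exact kernel ψ(·) of (5.9) is not reproduced in this excerpt, so the sketch uses ψ(ρ) = (2/π)(ρ arcsin ρ + sqrt(1−ρ²)), an assumed form that matches the values ψ(0) = 2/π, ψ(1) = 1, ψ′(0) = 0, ψ″(0) = 2/π quoted in the asymptotic analysis below.

```python
import numpy as np

def psi(rho):
    # Assumed kernel, consistent with psi(0)=2/pi, psi(1)=1, psi'(0)=0, psi''(0)=2/pi;
    # it stands in for the kernel defined in (5.9).
    rho = np.clip(rho, -1.0, 1.0)
    return (2.0 / np.pi) * (rho * np.arcsin(rho) + np.sqrt(1.0 - rho ** 2))

def random_lines(r, d, rng):
    U = rng.standard_normal((d, r))
    return U / np.linalg.norm(U, axis=0, keepdims=True)

def schur_norm(UL, ULstar):
    """Spectral norm of psi[K]/psi[K_L] = psi[K_L*] - psi[K_{L,L*}]^T psi[K_L]^+ psi[K_{L,L*}]."""
    A = psi(UL.T @ UL)            # psi[K_L],      r  x r
    B = psi(UL.T @ ULstar)        # psi[K_{L,L*}], r  x r*
    C = psi(ULstar.T @ ULstar)    # psi[K_L*],     r* x r*
    S = C - B.T @ np.linalg.pinv(A) @ B
    return np.linalg.norm(S, 2)

rng = np.random.default_rng(0)
d, r_star = 20, 20
ULstar = random_lines(r_star, d, rng)
for r in [50, 100, 200, 400, 800]:
    UL = random_lines(r, d, rng)
    print(r, schur_norm(UL, ULstar))   # decreases with r, in line with Theorem 10
```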
Figure 8-a demonstrates the spectral norm of the matrix \u03c8[K]/\u03c8[KL] where L and L\u2217are both generated uniformly at random. For various values of r\u2217 and d, increasing r decreases the PNN approximation error. For moderately small values of r, the decay rate of the approximation error is fast. However, for large values of r, the decay rate of the approximation error appears to be a polynomial function of r (i.e., the tail is linear in the log-log plot shown in Figure 8-b). Analyzing \u2225\u03c8[K]/\u03c8[KL]\u2225as a function of r for \ufb01xed values of d and r appears to be challenging. Later in this section, we characterize the asymptotic behaviour of \u2225\u03c8[K]/\u03c8[KL]\u2225when d,r \u2192\u221e. As explained in Theorem 10, increasing the number of lines in L decreases \u2225\u03c8[K]/\u03c8[KL]\u2225. Next, we investigate whether this decrease is due in part to the fact that by increasing r, the distance between a subset of L with r\u2217lines and L\u2217decreases. Let Lnearest be a subset of lines in L with r\u2217lines constructed as follows: for every line L\u2217 i in L\u2217, we select a line Lj in L that minimizes \u2225uj\u2212u\u2217 i \u2225(i.e., Lj has the closest unit vector to ui). To simplify notation, we assume that minimizers for di\ufb00erent lines in L\u2217are distinct. Using Lnearest instead of L, we de\ufb01ne Knearest \u2208R2r\u2217\u00d72r\u2217as in (6.4). Figure 8 demonstrates \u2225\u03c8[Knearest]/\u03c8[KLnearest]\u2225for various values of r, r\u2217and d. As it is illustrated in this \ufb01gure, the PNN approximation error using r\u2217nearest lines in L is signi\ufb01cantly 16 \f200 400 600 800 1000 r spectral norm of Schur complement empirical result theoritical limit r/r*=3 r/r*=2 r/r*=1 (4/3)(1-2/\u009b) (3/2)(1-2/\u009b) 2(1-2/\u009b) Figure 9: The spectral norm of \u03c8[K]/\u03c8[KL] when d = r. Theoretical limits are described in Theorem 11. Experiments have been repeated 100 times. Average results are shown. larger than the case using all lines. Next, we analyze the behaviour of \u2225\u03c8[K]/\u03c8[KL]\u2225when d,r \u2192\u221e. There has been some recent interest in characterizing spectrum of inner product kernel random matrices [35\u201338]. If the kernel is linear, the distribution of eigenvalues of the covariance matrix follows the well-known Marcenko Pastur law. If the kernel is nonlinear, reference [35] shows that in the high dimensional regime where d,r \u2192\u221eand \u03b3 = r/d \u2208(0,\u221e) is \ufb01xed, only the linear part of the kernel function a\ufb00ects the spectrum. Note that the matrix of interest in our problem is the Schur complement matrix \u03c8[K]/\u03c8[KL], not \u03c8[K]. However, we can use results characterizing spectrum of \u03c8[K] to characterize the spectrum of \u03c8[K]/\u03c8[KL]. First, we consider the regime where r,d \u2192\u221ewhile \u03b3 = r/d \u2208(0,\u221e) is a \ufb01xed number. Theorem 2.1 of reference [35] shows that in this regime and under some mild assumptions on the kernel function (which our kernel function \u03c8(.) satis\ufb01es), \u03c8[KL] converges (in probability) to the following matrix: RL = (\u03c8(0) + \u03c8\u2032\u2032(0) 2d )11t + \u03c8\u2032(0)Ut LUL + (\u03c8(1) \u2212\u03c8(0) \u2212\u03c8\u2032(0))Ir. (7.4) To obtain this formula, one can write the tailor expansion of the kernel function \u03c8(.) near 0. 
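A sketch of that expansion, whose only extra ingredient is the standard fact that for independent uniformly random unit vectors E[⟨u_i,u_j⟩²] = 1/d:

```latex
\psi(\langle u_i,u_j\rangle)\;\approx\;\psi(0)+\psi'(0)\,\langle u_i,u_j\rangle
      +\tfrac{\psi''(0)}{2}\,\langle u_i,u_j\rangle^{2},\qquad i\neq j .
% In the regime r,d -> infinity with r/d fixed, the quadratic (and higher) terms
% only survive through their mean, E<u_i,u_j>^2 = 1/d, contributing psi''(0)/(2d)
% to every off-diagonal entry; keeping the linear term exactly and adjusting the
% diagonal so that it equals psi(1) gives the matrix R_L in (7.4).
```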
It turns out that in the regime where r,d \u2192\u221ewhile d/r is \ufb01xed, it is su\ufb03cient for o\ufb00-diagonal elements of \u03c8[KL] to replace \u03c8(.) with its linear part. However, diagonal elements of \u03c8[KL] should be adjusted accordingly (the last term in (7.4)). For the kernel function of our interest, de\ufb01ned as in (5.9), we have \u03c8\u2032(0) = 0, \u03c8\u2032\u2032(0) = 2/\u03c0, \u03c8(0) = 2/\u03c0 and \u03c8(1) = 1. This simpli\ufb01es (7.4) further to: RL = ( 2 \u03c0 + 1 \u03c0d)11t + (1 \u22122 \u03c0)Ir. (7.5) This matrix has (r \u22121) eigenvalues of 1 \u22122/\u03c0 and one eigenvalue of (2/\u03c0)r + 1 \u22122/\u03c0 + \u03b3/\u03c0. Using this result, we characterize \u2225\u03c8[K]/\u03c8[KL]\u2225in the following theorem: 17 \fTheorem 11 Let L and L\u2217have r and r\u2217lines in Rd generated uniformly at random, respectively. Let d,r \u2192\u221ewhile \u03b3 = r/d \u2208(0,\u221e) is \ufb01xed. Moreover, r\u2217/r = O(1). Then, \u2225\u03c8[K]/\u03c8[KL]\u2225\u2192(1 + r\u2217 r )(1 \u22122 \u03c0), (7.6) where the convergence is in probability. Proof See Section 11.14. In the setup considered in Theorem 11, the dependency of \u2225\u03c8[K]/\u03c8[KL]\u2225to \u03b3 is negligible as it is shown in (11.52). Figure 9 shows the spectral norm of \u03c8[K]/\u03c8[KL] when d = r. As it is illustrated in this \ufb01gure, empirical results match closely to analytical limits of Theorem 11. Note that by increasing the ratio of r/r\u2217, \u2225\u03c8[K]/\u03c8[KL]\u2225and therefore the PNN approximation error decreases. If r\u2217is constant, the limit is 1 \u22122/\u03c0 \u22480.36. Theorem 11 provides a bound on \u2225\u03c8[K]/\u03c8[KL]\u2225in the asymptotic regime. In the following corollary, we use this result to bound the PNN approximation error measured as the MSE normalized by the L2 norm of the output variables (i.e., L(W = 0)). Proposition 1 Let W\u2217be the global optimizer of the mismatched PNN optimization under the setup of Theorem 11. Then, with high probability, we have L(W\u2217) L(W = 0) \u2264(1 + r\u2217 r )(1 \u22122 \u03c0). (7.7) Proof See Section 11.15. This proposition indicates that in the asymptotic regime with d and r grow with the same rate (i.e., r is a constant factor of d), the PNN is able to explain a fraction of the variance of the output variable. In practice, however, r should grow faster than d in order to obtain a small PNN approximation error. 7.2 The General PNN Approximation Error In this section, we consider the case where the condition of Theorem 9 does not hold, i.e., the local search algorithm converges to a point in a bad parameter region where more than r\u2212d of si variables are equal to \u00b11. To simplify notation, we assume that the local search method has converged to a region where all si variables are equal to \u00b11. The analysis extends naturally to other cases as well. Let s = (s1,...,sr). Let S be the diagonal matrix whose diagonal entries are equal to s, i.e., S = diag(s). Similar to the argument of Theorems 7 and 9, a necessary condition for a point W to be a local optima of the PNN optimization is: SUt L \u239b \u239d k \u2211 i=1 wi \u2212 k\u2217 \u2211 i=1 w\u2217 i \u239e \u23a0+ \u03c8[KL]q \u2212\u03c8[KL,L\u2217]q\u2217= 0. (7.8) Under the condition of Theorem 9, we have \u2211k i=1 wi\u2212\u2211k\u2217 i=1 w\u2217 i = 0, which simpli\ufb01es this condition. 
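Written out, the necessary condition (7.8) for a local optimum in such a region reads (with S = diag(s) and K_{L,L*} as in (6.2)):

```latex
S\,U_L^{t}\Big(\sum_{i=1}^{k} w_i - \sum_{i=1}^{k^{*}} w_i^{*}\Big)
  \;+\; \psi[K_L]\,q \;-\; \psi[K_{L,L^{*}}]\,q^{*} \;=\; 0 ,
\qquad S=\mathrm{diag}(s_1,\dots,s_r).
```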
18 \fUsing (7.8) in (6.3), at local optima in bad parameter regions, we have 4L(W) = (q\u2217)t \u03c8[K]/\u03c8[KL]q\u2217+ zt (I + ULS\u03c8[KL]\u22121SUt L)z, (7.9) where z \u2236= k \u2211 i=1 wi \u2212 k\u2217 \u2211 i=1 w\u2217 i . (7.10) The \ufb01rst term of (7.9) is similar to the PNN loss under the condition of Theorem 9. The second term is the price paid for converging to a point in a bad parameter region. In this section, we analyze this term. The second term of (7.9) depends on the norm of z. First, in the following lemma, we characterize z in local optima. Lemma 3 In the local optimum of the mismatched PNN optimization, we have z = \u2212(ULSStUt L) \u22121 ULS[\u03c8[KL](SUt LULS + \u03c8[KL]) \u2020 SUt Lw0 (7.11) + (\u03c8[KL](SUt LULS + \u03c8[KL]) \u2020 \u2212I)\u03c8[KL,L\u2217]q\u2217], where w0 \u225c k\u2217 \u2211 i=1 w\u2217 i . Proof See Section 11.16. Replacing (7.11) in (7.9) gives us the loss function achieved at the local optimum. In order to simplify the loss expression, without loss of generality, from now on we replace US with U (note that there is essentially no di\ufb00erence between ULS and UL as the columns of ULS are the columns of UL with adjusted orientations.). Moreover, to simplify the analysis of this section, we make the following assumptions. Assumption 1 Recall that we assume that all si for 1 \u2264i \u2264r are equal to \u00b11. Our analysis extends naturally to other cases. Moreover, we assume that w0 = 0. This assumption has a negligible e\ufb00ect on our estimate of the value of the loss function achieved in the local minimum in many cases. For example, when w\u2217 i are i.i.d. N(0,(1/d)I) random vectors, w0 is a N(0,(r\u2217/d)I) random vector and therefore \u2225w0\u22252 = \u0398( \u221a r\u2217). On the other hand, \u2225q\u2217\u22252 = \u0398(r\u2217). Hence, in the case where r\u2217 is large, the value of the loss function in the local minimum is controlled by the terms involving \u2225q\u2217\u22252 2 in (7.9). Thus, we can ignore the terms involving w0 in this regime. Finally, we assume that \u03c8[KL] (and consequently Ut LUL + \u03c8[KL]) is invertible. Theorem 12 Under assumptions 1, in a local minimum of the mismatched PNN optimization, we have L(W) = 1 4(q\u2217)t (\u0303 \u03c8[K]/\u03c8[KL])q\u2217, (7.12) 19 \fwhere \u0303 \u03c8[K] = [\u03c8[KL] + Ut LUL \u03c8[KL,L\u2217] \u03c8[KL,L\u2217]t \u03c8[KL\u2217] ]. Proof See Section 11.17. The matrix \u0303 \u03c8[K] has an extra term of Ut LUL (i.e., the linear kernel) compared to the matrix \u03c8[K]. The e\ufb00ect of this term is the price of converging to a local optimum in a bad region. In the following, we analysis this e\ufb00ect in the asymptotic regime where r,d \u2192\u221ewhile r/d is \ufb01xed. Theorem 13 Consider the asymptotic case where r = \u03b3d, r\u2217> d + 1, \u03b3 > 1 and r,r\u2217,d \u2192\u221e. Assume that k\u2217= r\u2217underlying weight vectors w\u2217 i \u2208Rd are chosen uniformly at random in Rd while the PNN is trained over r lines drawn uniformly at random in Rd. Under assumption 1, at local optima, with probability 1 \u22122exp(\u2212\u00b52d), we have L(W) \u22641 4 (1 \u22122 \u03c0 + (1 + \u221a\u03b3 + \u00b5)2 r\u2217 r )\u2225q\u2217\u22252 2 , where \u00b5 > 1 is a constant. Proof See Section 11.18. 
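For readability, the decomposition (7.9)-(7.10) and the modified matrix of Theorem 12 above can be written out as:

```latex
4\,L(W) \;=\; (q^{*})^{t}\,\big(\psi[K]/\psi[K_L]\big)\,q^{*}
   \;+\; z^{t}\big(I + U_L S\,\psi[K_L]^{-1} S\,U_L^{t}\big)\,z ,
\qquad z \;=\; \sum_{i=1}^{k} w_i - \sum_{i=1}^{k^{*}} w_i^{*},
```
```latex
\tilde{\psi}[K] \;=\;
\begin{pmatrix}
\psi[K_L] + U_L^{t} U_L & \psi[K_{L,L^{*}}] \\
\psi[K_{L,L^{*}}]^{t} & \psi[K_{L^{*}}]
\end{pmatrix},
\qquad
L(W) \;=\; \tfrac{1}{4}\,(q^{*})^{t}\big(\tilde{\psi}[K]/\psi[K_L]\big)\,q^{*},
```
where the extra linear-kernel term U_L^t U_L in the top-left block is the price of converging in a bad parameter region.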
Comparing asymptotic error bounds of Theorems 11 and 13, we observe that the extra PNN approximation error because of the convergence to a local minimum at a bad parameter region is re\ufb02ected in the constant parameter \u00b5, which is negligible if r\u2217is signi\ufb01cantly smaller than r. 7.3 A Minimax Analysis of the Naive Nearest Line Approximation Approach In this section, we show that every realizable function by a two-layer neural network (i.e., every f \u2208F) can be approximated arbitrarily closely using a function described by a two-layer PNN (i.e., \u02c6 f \u2208FL,G). We start by the following lemma on the continuity of the relu function on the weight parameter: Lemma 4 For the relu function \u03c6(.), we have the following property \u2223\u03c6(\u27e8w1,x\u27e9) \u2212\u03c6(\u27e8w2,x\u27e9)\u2223\u2264\u2225w1 \u2212w2\u22252 \u2225x\u22252. Proof See Section 11.19. Recall that ui is the unit norm vector over the line Li. Let U = {u1,u2,...,ur} \u2286Rd. Denote the set U\u2212= {\u2212u1,\u2212u2,...,\u2212ur}. De\ufb01nition 1 For \u03b4 \u2208[0,\u03c0/2], we call U an angular \u03b4-net of W if for every w \u2208W, there exists u \u2208U \u222aU\u2212such that \u03b8u,w \u2264\u03b4. The following lemma indicates the size required for U to be an angular \u03b4-net of the unit Euclidean sphere Sn\u22121. 20 \fLemma 5 Let \u03b4 \u2208[0,\u03c0/2]. For the unit Euclidean sphere Sn\u22121, there exists an angular \u03b4-net U, with \u2223U\u2223\u22641 2 (1 + \u221a 2 \u221a 1 \u2212cos\u03b4 ) n . Proof See Section 11.20. The following is a corollary of the previous lemma. Corollary 5 Consider a two-layer neural network with s-sparse weights (i.e., W is the set of ssparse vectors.). In this case, using lemma 5, U is an angular \u03b4-net of W with \u2223U\u2223= 1 2(d s)(1 + \u221a 2 \u221a 1 \u2212cos\u03b4 ) s . Furthermore, if we know the sparsity patterns of k neurons in the network (i.e., if we know the network architecture), \u02dc U is an angular \u03b4-net of W with \u2223\u02dc U\u2223\u2264k 2 (1 + \u221a 2 \u221a 1 \u2212cos\u03b4 ) s . In order to have a measure of how accurately a function in F can be approximated by a function in FL, we have the following de\ufb01nition: De\ufb01nition 2 De\ufb01ne R (F,FL,G), the minimax risk of approximating a function in F by a function in FL,G, as the following R (FL,G,F) \u2236= max f\u2208F min \u02c6 f\u2208FL,G E\u2223f(x) \u2212\u02c6 f(x)\u2223, (7.13) where the expectation is over x \u223cN(0,I). The following theorem bounds this minimax risk where U is an angular \u03b4-net of W. Theorem 14 Assume that for all w \u2208W, \u2225w\u22252 \u2264M. Let U be an angular \u03b4-net of W. The minimax risk of approximating a function in F with a function in FL,G de\ufb01ned in (7.13) can be written as R (FL,G,F) \u2264kM \u221a 2d(1 \u2212cos\u03b4). Proof See Section 11.21. The following is a corollary of Theorem 14 and Corollary 5. Corollary 6 Let F be the set of realizable functions by a two-layer neural network with s-sparse weights. There exists a set L and a neuron-to-line mapping G such that R (FL,G,F) \u2264\u03b4, and \u2223L\u2223\u22641 2(d s)(1 + 2kM \u221a d \u03b4 ) s . Further, if we know the sparsity patterns of k neurons in the network (i.e., the network architecture), then \u2223L\u2223\u2264k 2 (1 + 2kM \u221a d \u03b4 ) s . 21 \f0.35 0.30 (c) (b) (a) Figure 10: (a) The loss during training for two initializations of a degree-one PNN with 5 inputs and 10 hidden neurons. 
(b) Plots of the \ufb01nal loss with respect to the gap between the true and estimated weights for di\ufb00erent values of k. The gap is de\ufb01ned as the Frobenius norm squared of the di\ufb00erence. 100 initializations were used for each value of k. (c) A bar plot showing the proportion of global optima found for di\ufb00erent values of k. 8 Experimental Results In our \ufb01rst experiment, we simulate the degree-one PNNs discussed in section 5.2 3. In the matched case, we are interested in how often we achieve zero loss when we learn using the same network architecture used to generate data (i.e., L = L\u2217, G = G\u2217). We implement networks with d inputs, k hidden neurons, and a single output. Each input is connected to k/d neurons (we assume k is divisible by d.). As described previously in Section 5.2, relu activation functions are used only at hidden neurons. We use d = 5, and k = 10,15,20,25,50. For each value of k, we perform 100 trials of the following: Randomly choose a ground truth set of weights. Generate 10000 input-output pairs using the ground truth set of weights. Randomly choose a new set of weights for initialization. Train the network via stochastic gradient descent using batches of size 100, 1000 training epochs, a momentum parameter of 0.9, and a learning rate of 0.01 which decays every epoch at a rate of 0.95 every 390 epochs. Stop training early if the mean loss for the most recent 10 epochs is below 10\u22125. 3All experiments were implemented in Python 2.7 using the TensorFlow package. 22 \f(a) (b) Figure 11: (a) Histograms of \ufb01nal losses (i.e., PNN approximation errors) for di\ufb00erent values of k. (b) Gamma curves \ufb01t to the histograms of panel (a). As shown in Figure 10-a, we observe that some initializations converge to zero loss while other initializations converge to local optima. Figure 10-b illustrates how frequently an initialized network manages to \ufb01nd the global optimum. We see that as k increases, the probability of \ufb01nding a global optimum increases. We also observe that for all local optima, si = \u00b11 for at least one hidden neuron i. In other words, for at least one hidden neuron, all d weights shared the same sign. This is consistent with Theorem 5. Figure 10-c provides a summary statistics of the proportion of global optima found for di\ufb00erent values of k. Next, we numerically simulate random PNNs in the mismatched case as described in Section 6. To enforce the PNN architecture, we project gradients along the directions of PNN lines before updating the weights. For example, if we consider w(0) i as the initial set of d weights connecting hidden neuron i to the d inputs, then the \ufb01nal set of weights w(T) i need to lie on the same line as w(0) i . To guarantee this, before applying gradient updates to wi, we \ufb01rst project them along w(0) i . For PNNs, we use 10 \u2264k \u2264100 hidden neurons. For each value of k, we perform 25 trials of the following: 1. Generate one set of true labels using a fully-connected two-layer network with d = 15 inputs and k\u2217= 20 hidden neurons. Generate 10,000 ground training samples and 10,000 test samples using a set of randomly chosen weights. 2. Initialize k/2 random d-dimensional unit-norm weight vectors. 3. Assign each weight vector to two hidden neurons. For the \ufb01rst neuron, scale the vector by a random number sampled uniformly between 0 and 1. For the second neuron, scale the vector 23 \fby a random number sampled uniformly between -1 and 0. 4. 
Train the network via stochastic gradient descent using batches of size 100, 100 training epochs, no momentum, and a learning rate of 10\u22123 which decays every epoch at a rate of 0.95 every 390 epochs. 5. Check to make sure that \ufb01nal weights lie along the same lines as initial weights. Ignore results if this is not the case due to numerical errors. 6. Repeat steps 2-5 10 times. Return the normalized MSE (i.e., MSE normalized by the L2 norm of y) in the test set over di\ufb00erent initializations. The results are shown in Figures 2 and 11. Figure 11-a shows that as k increases, the PNN approximation gets better in consistent to our theoretical results in Section 7. Figure 11-b shows the result of \ufb01tting gamma curves to the histograms. We can observe that the curve being compressed towards smaller loss values as k increases. 9" + }, + { + "url": "http://arxiv.org/abs/1702.05471v2", + "title": "Maximally Correlated Principal Component Analysis", + "abstract": "In the era of big data, reducing data dimensionality is critical in many\nareas of science. Widely used Principal Component Analysis (PCA) addresses this\nproblem by computing a low dimensional data embedding that maximally explain\nvariance of the data. However, PCA has two major weaknesses. Firstly, it only\nconsiders linear correlations among variables (features), and secondly it is\nnot suitable for categorical data. We resolve these issues by proposing\nMaximally Correlated Principal Component Analysis (MCPCA). MCPCA computes\ntransformations of variables whose covariance matrix has the largest Ky Fan\nnorm. Variable transformations are unknown, can be nonlinear and are computed\nin an optimization. MCPCA can also be viewed as a multivariate extension of\nMaximal Correlation. For jointly Gaussian variables we show that the covariance\nmatrix corresponding to the identity (or the negative of the identity)\ntransformations majorizes covariance matrices of non-identity functions. Using\nthis result we characterize global MCPCA optimizers for nonlinear functions of\njointly Gaussian variables for every rank constraint. For categorical variables\nwe characterize global MCPCA optimizers for the rank one constraint based on\nthe leading eigenvector of a matrix computed using pairwise joint\ndistributions. For a general rank constraint we propose a block coordinate\ndescend algorithm and show its convergence to stationary points of the MCPCA\noptimization. We compare MCPCA with PCA and other state-of-the-art\ndimensionality reduction methods including Isomap, LLE, multilayer autoencoders\n(neural networks), kernel PCA, probabilistic PCA and diffusion maps on several\nsynthetic and real datasets. We show that MCPCA consistently provides improved\nperformance compared to other methods.", + "authors": "Soheil Feizi, David Tse", + "published": "2017-02-17", + "updated": "2017-02-21", + "primary_cat": "stat.ML", + "cats": [ + "stat.ML", + "cs.IT", + "cs.LG", + "math.IT" + ], + "main_content": "Introduction Let X1 and X2 be two mean zero and unit variance random variables. Pearson\u2019s correlation [1] de\ufb01ned as \u03c1Pearson(X1,X2) = E[X1X2] (1.1) is a basic statistical parameter and plays a central role in many statistical and machine learning methods such as linear regression [2], principal component analysis [3], and support vector machines [4], partially owing to its simplicity and computational e\ufb03ciency. 
Pearson\u2019s correlation however has two main weaknesses: \ufb01rstly it only captures linear dependency between variables, and secondly for discrete (categorical) variables the value of Pearson\u2019s correlation depends somewhat arbitrarily on the labels. To overcome these weaknesses, Maximal Correlation (MC) has been proposed and 1 arXiv:1702.05471v2 [stat.ML] 21 Feb 2017 \fstudied by Hirschfeld [5], Gebelein [6], Sarmanov [7] and R\u00b4 enyi [8], and is de\ufb01ned as \u03c1MC(X1,X2) = sup \u03c61(.),\u03c62(.) E[\u03c61(X1) \u03c62(X2)], (1.2) E[\u03c6i(Xi)] = 0, i = 1,2, E[\u03c6i(Xi)2] = 1, i = 1,2. Transformation functions {\u03c6i(.)}2 i=1 are assumed to be Borel measurable whose ranges are in R. MC has also been studied by Witsenhausen [9], Ahlswede and G\u00b4 acs [10], and Lancaster [11]. MC tackles the two main drawbacks of the Pearson\u2019s correlation: it models a family of nonlinear relationships between the two variables. For discrete variables, the MC value only depends on the joint distribution and does not rely on labels. Moreover the MC value between X1 and X2 is zero i\ufb00they are independent [8]. For the multivariate case with variables X = (X1,...,Xp)T where p \u22652, Pearson\u2019s correlation can be extended naturally to the covariance matrix KX \u2208Rp\u00d7p where KX(i,i\u2032) = E[XiXi\u2032] (assuming Xi has zero mean and unit variance). Similarly to the bivariate case, the covariance matrix analysis su\ufb00ers from two weaknesses of only capturing linear dependencies among variables and being label dependent when variables are discrete (categorical). One way to extend the idea of MC to the multivariate case is to consider the set of covariance matrices of transformed variables. Let \u03c6(X) = (\u03c61(X1),...,\u03c6p(Xp))T be the vector of transformed variables with zero means and unit variances. I.e., E[\u03c6i(Xi)] = 0 and E[\u03c6i(Xi)2] = 1 for 1 \u2264i \u2264p. Let K\u03c6(X) \u2208Rp\u00d7p be the covariance matrix of transformed variables \u03c6(X) where K\u03c6(X)(i,i\u2032) = E[\u03c6i(Xi)\u03c6i\u2032(Xi\u2032)]. The set of covariance matrices of transformed variables is de\ufb01ned as follows: KX \u225c{K\u03c6(X) \u2208Rp\u00d7p \u2236E[\u03c6i(Xi)] = 0, E[\u03c6i(Xi)2] = 1, 1 \u2264i,i\u2032 \u2264p}. (1.3) Similarly to the bivariate case, functions {\u03c6i(.)}p i=1 are assumed to be Borel measurable whose ranges are in R. If variables {Xi}p i=1 are continuous, functions {\u03c6i(.)}p i=1 are assumed to be continuous. The set KX includes in\ufb01nitely many covariance matrices corresponding to di\ufb00erent transformations of variables. In order to have an operational extension of MC to the multivariate case, we need to select one (or \ufb01nitely many) members of KX through an optimization. Here we propose the following optimization over KX that aims to select a covariance matrix K\u2217\u2208KX with the maximum q-Ky Fan norm (i.e., with the maximum sum of top q eigenvalues): max K q \u2211 r=1 \u03bbr(K) (1.4) K \u2208KX. Since the trace of all matrices in KX is equal to p, maximizing the Ky Fan norm over KX results in a low rank or an approximately low rank covariance matrix. We refer to this optimization as Maximally Correlated Principal Component Analysis with parameter q or for simplicity, the MCPCA optimization. The optimal MCPCA value is denoted by \u03c1\u2217 q(X). When no confusion arises we use \u03c1\u2217 q to refer to it. 
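A small numerical illustration of the gap between (1.1) and (1.2): for a quadratic dependence the Pearson correlation is essentially zero, while a single hand-picked pair of transformations already certifies a maximal correlation of one. The transformations below are chosen by hand for this toy example (they only lower-bound the supremum in (1.2)); this is not the MCPCA procedure itself.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000
X1 = rng.standard_normal(n)
X2 = X1 ** 2                      # strictly dependent on X1, but nonlinearly

def standardize(v):
    return (v - v.mean()) / v.std()

# Pearson correlation of the raw variables: close to 0 by symmetry.
print(np.mean(standardize(X1) * standardize(X2)))      # ~0.00

# Hand-picked transformations phi_1(x) = x^2, phi_2(x) = x (both standardized):
phi1, phi2 = standardize(X1 ** 2), standardize(X2)
print(np.mean(phi1 * phi2))                             # = 1.0, so rho_MC(X1, X2) = 1
```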
Principal Component Analysis (PCA) [3] aims to \ufb01nd q eigenvectors corresponding to the top eigenvalues of the covariance matrix. These are called Principal Components (PCs). On the other 2 \f(X1, X2, ..., Xp) Input Features Meta Features Linear Projection Maximally Correlated PCA (MCPCA) Principal Component Analysis (PCA) (X1, X2, ..., Xp) Transformed Features Input Features Meta Features (X1,..., Xq) \u0278(Xp) p \u0278(X3) 3 \u0278(X2) 2 \u0278(X1) 1 ( ) .... , , , , Linear Projection (X1,..., Xq) Feature Transformation Figure 1: An illustration of the Maximally Correlated Principal Component Analysis (MCPCA) Framework. MCPCA computes transformations of features \u03c6i(Xi) so that variance of the transformed data can be explained maximally by a few meta features. Feature transformations are unknown, can be nonlinear and are computed in an optimization. hand, we show that the MCPCA optimization aims to \ufb01nd possibly nonlinear transformations of variables that can be approximated optimally by q orthonormal vectors. Thus, MCPCA can be viewed as a generalization of PCA over possibly nonlinear transformations of variables with zero means and unit variances. We summarize our main contributions below: \u2022 We introduce MCPCA as a multivariate extension of MC and a generalization of PCA. \u2022 For jointly Gaussian variables we show that the covariance matrix corresponding to the identity (or the negative of the identity) transformations majorizes covariance matrices of nonidentity functions. Using this result we characterize global MCPCA optimizers for nonlinear functions of jointly Gaussian variables for every q. \u2022 For \ufb01nite discrete variables, we compute a globally optimal MCPCA solution when q = 1 based on the leading eigenvector of a matrix computed using pairwise joint distributions. for an arbitrary q we propose a block coordinate descend algorithm and show its convergence to stationary points of the MCPCA optimization. \u2022 We study the consistency of sample MCPCA (an MCPCA optimization computed using empirical distributions) for both \ufb01nite discrete and continuous variables. We compare MCPCA with PCA and other state-of-the-art nonlinear dimensionality reduction methods including Isomap [12], LLE [13], multilayer autoencoders (neural networks) [14,15], kernel PCA [16\u201319], probabilistic PCA [20] and di\ufb00usion maps [21] on several synthetic and real datasets. Our real dataset experiments include breast cancer, Parkinsons disease, diabetic retinopathy, dermatology, gene splicing and adult income datasets. We show that MCPCA consistently provides improved performance compared to other methods. 3 \f1.1 Prior Work MCPCA can be viewed as a dimensionality reduction method whose goal is to \ufb01nd possibly nonlinear transformations of variables with a low rank covariance matrix. Other nonlinear dimensionality reduction methods include manifold learning methods such as Isomap [12], Locally Linear Embedding (LLE) [13], kernel PCA [16\u201319], maximum variance unfolding [22], di\ufb00usion maps [21], Laplacian eigenmaps [23], Hessian LLE [24], Local tangent space analysis [25], Sammon mapping [26], multilayer autoencoders [14, 15], among others. For a comprehensive review of these methods, see reference [27]. Although these techniques show an advantage compared to PCA in arti\ufb01cial datasets, their successful applications to real datasets have been less convincing [27]. 
The key challenge is to have an appropriate balance among generality of the model, computational complexity of the method and statistical signi\ufb01cance of inferences. MCPCA is more general than PCA since it considers both linear and nonlinear feature transformations. In kernel PCA methods, transformations of variables are \ufb01xed in advance. This is in contrast to MCPCA that optimizes over transformations resulting in an optimal low rank approximation of the data. Manifold learning methods such as Isomap and LLE aim to \ufb01nd a low dimensional representation of the data such that sample distances in the low dimensional space are the same, up to a scaling, to sample geodistances (i.e., distances over the manifold), assuming there exists such a manifold that the data lies on. These methods can be viewed as extensions of PCA \ufb01tting a nonlinear model to the data. Performance of these methods has been shown to be sensitive to noise and model parameters [27]. Through experiments on several synthetic and real datasets we show that the performance of MCPCA is robust against these factors. Note that MCPCA allows features to be transformed only individually, thus avoiding a combinatorial optimization and resulting in statistically signi\ufb01cant inferences. However because of this MCPCA cannot capture low dimensional structures such as the swiss roll example since underlying transformation depend on pairs of variables. Unlike existing dimensionality reduction methods that are only suitable for data with continuous features, MCPCA is suitable for both categorical and continuous data. The reason is that even if the data is categorical, transformed values computed by MCPCA are real. Moreover we compare computational and memory complexity of MCPCA and manifold learning methods (Isomap and LLE) in Remark 1. Unlike Isomap and LLE methods whose computational and memory complexity scales in a quadratic or cubic manner with the number of samples, computational and memory complexity of the MCPCA algorithm scales linearly with the number of samples, making it more suitable for data sets with large number of samples. MCPCA can be viewed as a multivariate extension of MC. Other extensions of MC to the multivariate case have been studied in the literature. For example, reference [28] introduces an optimization over KX that aims to maximize sum of arbitrary chosen elements of the matrix K \u2208KX. [28] shows that this optimization can be useful in nonlinear regression and graphical model inference. Moreover, [28] provides an algorithm to \ufb01nd local optima of the proposed optimization. Reference [29] introduces another optimization that aims to select a covariance matrix whose minimum eigenvalue is maximized. [29] brie\ufb02y discuses computational and operational aspects of the proposed optimization. 4 \f1.2 Notation For matrices we use bold-faced upper case letters, for vectors we use bold-faced lower case letters, and for scalars we use regular lower case letters. For random variables we use regular upper case letters. For example, X represents a matrix, x represents a vector, x represents a scalar number, and X represents a random variable. In and 1n are the identity and all one matrices of size n \u00d7 n, respectively. When no confusion arises, we drop the subscripts. 1{x = y} is the indicator function which is equal to one if x = y, otherwise it is zero. Tr(X) and XT represent the trace and the transpose of the matrix X, respectively. 
diag(x) is a diagonal matrix whose diagonal elements are equal to x, while diag(X) is a vector of the diagonal elements of the matrix X. \u2225x\u22252 = xT x is the second norm of the vector x. When no confusion arises, we drop the subscript. \u2223\u2223X\u2223\u2223is the operator norm of the matrix X. < x,y > is the inner product between vectors x and y. x \u22a5y indicates that vectors x and y are orthogonal. The matrix inner product is de\ufb01ned as < X,Y >= Tr(XYT ). The eigen decomposition of the matrix X \u2208Rn\u00d7n is denoted by X = \u2211n i=1 \u03bbi(X)ui(X)ui(X)T , where \u03bbi(X) is the i-th largest eigenvalue of the matrix X corresponding to the eigenvector ui(X). We have \u03bb1(X) \u2265\u03bb2(X) \u2265\u22ef. \u03bb(X) = (\u03bb1(X),\u03bb2(X),\u22ef)T . ui(X) has a unit norm. Similarly the singular value decomposition of the matrix Y \u2208Rn\u00d7m is denoted by Y = \u2211 min(n,m) i=1 \u03c3i(Y)ui(Y)vi(Y)T where \u03c3i(Y) is the i-th largest singular value of the matrix Y corresponding to the left and right singular eigenvectors ui(Y) and vi(Y), respectively. We have \u03c31(Y) \u2265\u03c32(Y) \u2265\u22ef. \u03c3(Y) = (\u03c31(Y),\u03c32(Y),\u22ef)T . ui(Y) and vi(Y) are unit norm vectors. 2 MCPCA: Basic Properties and Relationship with Matrix Majorization 2.1 Basic Properties of MCPCA In reference [8], R\u00b4 enyi shows that MC between the two variables X1 and X2 is zero i\ufb00they are independent, while MC is one i\ufb00the two variables are strictly dependent (i.e., there exist mean zero, unit variance transformations of variables that are equal.). Here we study some of these properties for the multivariate case of MCPCA: Theorem 1 Let \u03c1\u2217 q be the optimal MCPCA value for random variables X1,...,Xp. (i) 1 \u2264\u03c1\u2217 q \u2264p, for 1 \u2264q \u2264p. (ii) \u03c1\u2217 1 = 1 i\ufb00Xi and Xi\u2032 are independent, for 1 \u2264i \u2260i\u2032 \u2264p. (iii) \u03c1\u2217 1 = p i\ufb00X1,...,Xp are strictly dependent. I.e., there exist zero mean, unit variance transformation functions {\u03c6i(.)}p i=1 such that for all 1 \u2264i,i\u2032 \u2264p, \u03c6i(Xi) = \u03c6i\u2032(Xi\u2032). (iv) If {\u03c6i(.)}p i=1 are one-to-one transformation functions, \u03c1\u2217 q(X) = \u03c1\u2217 q(\u03c6(X)). Proof To prove part (i), for any K \u2208KX, we have Tr(K) = \u2211p r=1 \u03bbr(K) = p because \u03c6i(Xi) has zero mean and unit variance for 1 \u2264i \u2264p. Moreover, since \u03bb1(K) \u2265\u03bb2(K) \u2265...\u03bbp(K) \u22650, we have \u03bb1(K) \u22651. Thus, 1 \u2264\u03c1\u2217 q \u2264p, for 1 \u2264q \u2264p. This completes the proof of part (i). To prove part (ii), suppose \u03bb1(K\u2217) = 1. Thus, for every K \u2208KX, we have 1 \u2265\u03bb1(K) \u2265\u03bb2(K) \u2265 ...\u03bbp(K) \u22650. However since the sum of all eigenvalues are equal to p, we have \u03bbi(K) = 1 for every 5 \fK \u2208KX and 1 \u2264i \u2264p. Therefore, K = Ip for every K \u2208KX. This means \u03c1MC(Xi,Xi\u2032) = 0, for 1 \u2264i \u2260i\u2032 \u2264p, which indicates that Xi and Xi\u2032 are independent [8]. To prove the other direction of part (ii), if Xi and Xi\u2032 are independent, for every zero mean and unit variance functions \u03c6i(.) and \u03c6i\u2032(.), we have E[\u03c6i(Xi)\u03c6i(Xi)] = 0 [8]. Thus, for every K \u2208KX, we have K = Ip. This completes the proof of part (ii). To prove part (iii), let \u03c1\u2217 1 = p. Thus, 1p \u2208KX. 
It means that there exist transformation functions {\u03c6\u2217 i }p i=1 with zero means and unit variances such that for all 1 \u2264i,i\u2032 \u2264p, E[\u03c6\u2217 i (Xi)\u03c6\u2217 i\u2032(Xi\u2032)] = 1. It means that for 1 \u2264i \u2264p, \u03c6\u2217 i (Xi) = Y where Y has zero mean and unit variance. The proof of the inverse direction is straightforward. This completes the proof of part (iii). To prove part (iv), we note that if {\u03c6i(.)}p i=1 are one-to-one transformations, KX = K\u03c6(X). Thus, \u03c1\u2217 q(X) = \u03c1\u2217 q(\u03c6(X)). This completes the proof of part (iv). In the following proposition, we show that the increase ratio of the optimal MCPCA value (i.e., \u03c1\u2217 q+1/\u03c1\u2217 q \u22121) is bounded above by 1/q which decreases as q increases. Proposition 1 Let \u03c1\u2217 q be the optimal MCPCA value for random variables X1,...,Xp. We have \u03c1\u2217 q \u2264\u03c1\u2217 q+1 \u2264(1 + 1 q )\u03c1\u2217 q (2.1) Proof Let K\u2217be an optimal MCPCA solution for q + 1. Since \u03c1\u2217 q is an optimal MCPCA value with parameter q, we have \u2211 r\u2208{1,2,...,q+1}\u2212{j} \u03bbr(K\u2217) \u2264\u03c1\u2217 q, \u2200j \u2208{1,...,q + 1}. (2.2) By summing (2.2) over all j \u2208{1,...,q + 1}, we have q\u03c1\u2217 q+1 \u2264(q + 1)\u03c1\u2217 q. This completes the proof. 2.2 Relationship between MCPCA and Matrix Majorization A vector x = (x1,x2,\u22ef,xp)T \u2208Rp weakly majorizes vector y = (y1,y2,\u22ef,yp)T (in symbols, x \u227bw y) if \u2211q r=1 x[r] \u2265\u2211q r=1 y[r], for all 1 \u2264q \u2264p. The symbols x[1] \u2265x[2] \u2265\u22ef\u2265x[p] stand for the elements of the vector x sorted in a decreasing order. If x \u227bw y and \u2211p r=1 xr = \u2211p r=1 yr, then we say vector x majorizes vector y and denote it by x \u227by. Let A and B be two Hermitian matrices in Rp\u00d7p. We say A majorizes B is \u03bb(A) \u227b\u03bb(B). We have the following equivalent formulation for matrix majorization that we will use in later parts of the paper. Lemma 1 The following conditions for Hermitian matrices A and B are equivalent: \u2022 A \u227aB \u2022 There exist unitary matrices Uj and positive numbers tj such that A = N \u2211 j=1 tjUjBU\u2217 j , (2.3) where \u2211N j=1 tj = 1. 6 \fProof See Theorem 7.1 in [30]. The following proposition makes a connection between an optimal MCPCA solution and the majorization of covariance matrices in KX. Lemma 2 If K\u2217\u2208KX majorizes all K \u2208KX, then K\u2217is an optimal solution of the MCPCA optimization (1.4), for 1 \u2264q \u2264p. Proof Since K\u2217\u2208KX majorizes all K \u2208KX, \u2211q r=1 \u03bbr(K\u2217) \u2265\u2211q r=1 \u03bbr(K), for all 1 \u2264q \u2264p. Thus K\u2217 is an optimal solution of optimization (1.4), for 1 \u2264q \u2264p. 2.3 MCPCA as an Optimization over Unit Variance Functions The feasible set of optimization (1.4) includes functions of variables with zero means and unit variances. In the following we consider an alternative optimization whose feasible set includes functions of variables with unit variances and show the relationship between its optimal solutions with the ones of the MCPCA optimization. This formulation becomes useful in simplifying the MCPCA optimization for \ufb01nite discrete variables (Section 4). 
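In either formulation the quantity being maximized is the sum of the top-q eigenvalues of the covariance matrix of the (standardized) transformed features; on observed data this objective is evaluated as in the following minimal sketch (an empirical plug-in estimate, not one of the algorithms developed later):

```python
import numpy as np

def ky_fan_objective(Phi, q):
    """Sum of the top-q eigenvalues of the covariance matrix of the columns of Phi.

    Phi : (n, p) array whose column i holds phi_i(X_i) evaluated on n samples.
    Columns are standardized so the zero-mean / unit-variance constraints of
    optimization (1.4) hold empirically."""
    Z = (Phi - Phi.mean(axis=0)) / Phi.std(axis=0)
    K = (Z.T @ Z) / Z.shape[0]                  # empirical covariance of transformed features
    eigs = np.linalg.eigvalsh(K)[::-1]          # eigenvalues, descending
    return eigs[:q].sum()

# Example: identity transformations versus elementwise squares on Gaussian data.
rng = np.random.default_rng(0)
n, p = 20_000, 5
C = 0.8 * np.ones((p, p)) + 0.2 * np.eye(p)     # a valid correlation matrix
X = rng.multivariate_normal(np.zeros(p), C, size=n)
print(ky_fan_objective(X, q=1), ky_fan_objective(X ** 2, q=1))
```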
Lemma 3 Consider the following optimization: max {\u03c6i}p i=1 q \u2211 r=1 \u03bbr(K) (2.4) K(i,i\u2032) = E[(\u03c6i(Xi) \u2212\u00af \u03c6i(Xi))(\u03c6i\u2032(Xi\u2032) \u2212\u00af \u03c6i\u2032(Xi\u2032))], 1 \u2264i,i\u2032 \u2264p var(\u03c6i(Xi)) = 1, 1 \u2264i \u2264p, where var(.) denotes the variance of a random variables and \u00af \u03c6i(Xi) = E[\u03c6i(Xi)]. Let \u03f11 and \u03f12 be optimal values of objective functions of optimizations (1.4) and (2.4), respectively. We have \u03f11 = \u03f12. Moreover if {\u03c6\u2217\u2217 i }p i=1 is an optimal solution of optimization (2.4), then {\u03c6\u2217 i }p i=1 is an optimal solution of optimization (1.4), where \u03c6\u2217 i (Xi) = \u03c6\u2217\u2217 i (Xi) \u2212\u00af \u03c6i \u2217\u2217(Xi), and vice versa. Proof First we have the following lemma: Lemma 4 Let (K\u2217,W\u2217) be an optimal solution of the following optimization: max W,{\u03c6i}p i=1 Tr(WK) (2.5) Tr(W) = q, 0 \u2aafW \u2aafI, K(i,i\u2032) = E[\u03c6i(Xi)\u03c6i\u2032(Xi\u2032)], 1 \u2264i,i\u2032 \u2264p E[\u03c6i(Xi)2] = 1, 1 \u2264i \u2264p E[\u03c6i(Xi)] = 0, 1 \u2264i \u2264p. Then K\u2217is an optimal solution of optimization (1.4) and \u2211q r=1 \u03bbr(K\u2217) = Tr(W\u2217K\u2217). 7 \fProof The proof follows from the fact that the q Ky Fan norm of a matrix K is the solution of the following optimization [31]: max W Tr(WK) (2.6) Tr(W) = q, 0 \u2aafW \u2aafI. Consider the trace formulation of optimizations (1.4) and (2.4) according to Lemma 4: max W,{\u03c6i}p i=1 \u2211 i,i\u2032 wi,i\u2032E[\u03c6i(Xi)\u03c6i\u2032(Xi\u2032)] (2.7a) Tr(W) = q, 0 \u2aafW \u2aafI E[\u03c6i(Xi)2] = 1, 1 \u2264i \u2264p E[\u03c6i(Xi)] = 0, 1 \u2264i \u2264p, max W,{\u03c6i}p i=1 \u2211 i,i\u2032 wi,i\u2032E[(\u03c6i(Xi) \u2212\u00af \u03c6i(Xi))(\u03c6i\u2032(Xi\u2032 \u2212\u00af \u03c6i\u2032(Xi\u2032))] (2.7b) Tr(W) = q, 0 \u2aafW \u2aafI var(\u03c6i(Xi)) = 1, 1 \u2264i \u2264p. Let \u03c6\u2217 i and W\u2217be an optimal solution of (2.7a). The set of functions {\u03c6\u2217 i }p i=1 and W\u2217is feasible for optimization (2.7b). Thus, \u03f11 \u2264\u03f12. Moreover, let \u03c6\u2217\u2217 i and W\u2217be an optimal solution of optimization (2.7b). Let \u02dc \u03c6i = \u03c6\u2217\u2217 i \u2212\u00af \u03c6i \u2217\u2217. The set of functions {\u02dc \u03c6i}p i=1 and W\u2217is feasible for optimization (2.7a). Thus, we have \u03f11 \u2264\u03f12. Therefore, we have that \u03f11 = \u03f12. This completes the proof. 3 MCPCA for Jointly Gaussian Random Variables 3.1 Problem Formulation Let (X1,\u22ef,Xp) be zero mean unit variance jointly Gaussian random variables with the covariance matrix KX. Thus KX(i,i\u2032) = \u03c1i,i\u2032 where \u03c1i,i\u2032 is the correlation coe\ufb03cient between variables Xi and Xi\u2032. Let \u2223\u03c1i,i\u2032\u2223< 1 for i \u2260i\u2032. A sign vector s = (s1,s2,\u22ef,sp)T is a vector in Rp where si \u2208{\u22121,1} for 1 \u2264i \u2264p. Let hj(.) be the j-th Hermite-Chebyshev polynomial for j \u22650. These polynomials form an orthonormal basis with respect to the Gaussian distribution [11]: E[hj(Xi) hj\u2032(Xi\u2032)] = (\u03c1i,i\u2032)j1{j = j\u2032}. 8 \fMoreover, because Hermite-Chebyshev polynomials have zero means over a Gaussian distribution we have E[hj(Xi)] = 1{j = 0}, 1 \u2264i \u2264p. (3.1) Using a basis expansion approach similar to [28] we have \u03c6i(Xi) = \u221e \u2211 j=1 ai,j hj(Xi), (3.2) where ai = (ai,1,ai,2,...)T is the vector of projection coe\ufb03cients. 
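Before imposing the constraints on the coefficient vectors ai (next), here is a quick Monte-Carlo check of the orthogonality relation above, taking hj to be the probabilists' (Chebyshev-)Hermite polynomial divided by sqrt(j!), the normalization implied by E[hj(Xi)^2] = 1:

```python
import numpy as np
from math import factorial, sqrt
from numpy.polynomial.hermite_e import hermeval   # probabilists' Hermite polynomials He_j

def h(j, x):
    """Normalized (Chebyshev-)Hermite polynomial: E[h_j(X)^2] = 1 for X ~ N(0,1)."""
    c = np.zeros(j + 1); c[j] = 1.0
    return hermeval(x, c) / sqrt(factorial(j))

rng = np.random.default_rng(0)
n, rho = 1_000_000, 0.6
Z = rng.standard_normal((n, 2))
X = Z[:, 0]
Y = rho * Z[:, 0] + np.sqrt(1 - rho ** 2) * Z[:, 1]    # corr(X, Y) = rho

for j in range(4):
    for jp in range(4):
        est = np.mean(h(j, X) * h(jp, Y))
        target = rho ** j if j == jp else 0.0
        print(j, jp, round(est, 3), round(target, 3))   # E[h_j(X) h_j'(Y)] = rho^j 1{j = j'}
```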
The constraint E[\u03c6i(Xi)2] = 1 translates to \u2223\u2223ai\u2223\u2223= 1 while the constraint E[\u03c6i(Xi)] = 0 is simpli\ufb01ed to ai,0 = 0 for 1 \u2264i \u2264p. We also have K(\u03c61(X1),...,\u03c6p(Xp))(i,i\u2032) = \u221e \u2211 j=1 ai,jai\u2032,j(\u03c1i,i\u2032)j. (3.3) Thus the MCPCA optimization (1.4) can be re-written as follows: max K q \u2211 r=1 \u03bbr(K) (3.4) K(i,i\u2032) = \u221e \u2211 j=1 ai,jai\u2032,j(\u03c1i,i\u2032)j, 1 \u2264i,i\u2032 \u2264p, \u2225ai\u22252 = 1, 1 \u2264i \u2264p. Since \u2223\u03c1i,i\u2032\u2223< 1 for i \u2260i\u2032, (\u03c1i,i\u2032)j \u21920 as j \u2192\u221e. Thus we can approximate optimization (3.4) with the following optimization max K q \u2211 r=1 \u03bbr(K) (3.5) K(i,i\u2032) = N \u2211 j=1 ai,jai\u2032,j(\u03c1i,i\u2032)j, 1 \u2264i,i\u2032 \u2264p, \u2225ai\u22252 = 1, 1 \u2264i \u2264p, for su\ufb03ciently large N. Lemma 5 Let \u03c1\u2217 q and \u02dc \u03c1\u2217 q be optimal values of optimizations (3.4) and (3.5), respectively. For a given \u03f5 > 0, there exists N0 such that if N > N0 we have \u2223\u03c1\u2217 q \u2212\u02dc \u03c1\u2217 q\u2223< \u03f5. Proof The proof follows from the fact that the Ky Fan norm of a matrix is a continuous function of its elements and also (\u03c1i,i\u2032)j \u21920 as j \u2192\u221e. For the bivariate case (p = 2), the MCPCA optimization simpli\ufb01es to the maximum correlation optimization (1.2). For jointly Gaussian variables the maximum correlation optimization (1.2) results in global optimizers \u03c6\u2217 i (Xi) = siXi for i = 1,2 [11]. Sign variables si\u2019s are chosen so that the correlation between s1X1 and s2X2 is positive. This can be immediately seen from the formulation (3.5) as well: maximizing the o\ufb00-diagonal entry of a 2 \u00d7 2 covariance matrix maximizes its top eigenvalue. For the bivariate case the global optimizer of optimization (3.4) is a\u2217 i = (\u00b11,0,0,...) for 9 \fi = 1,2 since \u2223\u03c11,2\u2223> (\u03c11,2)j for j \u22652. Using (3.2) and since h1(.) is the identity function, we obtain \u03c6\u2217 i (Xi) = siXi for i = 1,2. Let Kext be the set of covariance matrices of variables siXi where si = \u00b11 for 1 \u2264i \u2264p. In the bivariate case we have Kext = {( 1 \u03c11,2 \u03c11,2 1 ),( 1 \u2212\u03c11,2 \u2212\u03c11,2 1 )}. (3.6) Note that covariance matrices in Kext have similar eigenvalues. Moreover in the bivariate case every covariance matrix K(\u03c61(X1),\u03c62(X2)) can be written as a convex combination of covariance matrices in Kext. Thus, it is majorized by covariance matrices in Kext (Lemma 1). However in the multivariate case we may have covariance matrices that are not in the convex hull of Kext. To illustrate this, let p = 3 and consider Kext = \u23a7 \u23aa \u23aa \u23aa \u23a8 \u23aa \u23aa \u23aa \u23a9 \u239b \u239c \u239d 1 0.9 0.9 0.9 1 0.7 0.9 0.7 1 \u239e \u239f \u23a0 , \u239b \u239c \u239d 1 \u22120.9 \u22120.9 \u22120.9 1 0.7 \u22120.9 0.7 1 \u239e \u239f \u23a0 , \u239b \u239c \u239d 1 \u22120.9 0.9 \u22120.9 1 \u22120.7 0.9 \u22120.7 1 \u239e \u239f \u23a0 , \u239b \u239c \u239d 1 0.9 \u22120.9 0.9 1 \u22120.7 \u22120.9 \u22120.7 1 \u239e \u239f \u23a0 \u23ab \u23aa \u23aa \u23aa \u23ac \u23aa \u23aa \u23aa \u23ad . (3.7) One can show that the covariance matrix \u239b \u239c \u239d 1 0.92 0.92 0.92 1 0.72 0.92 0.72 1 \u239e \u239f \u23a0 (3.8) is not included in the convex hull of covariance matrices in Kext. This covariance matrix results from having ai,2 = 1 for 1 \u2264i \u22643. 
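To see why this matrix (whose off-diagonal entries are the squared correlations 0.9^2 = 0.81 and 0.7^2 = 0.49, coming from ai,2 = 1) cannot be a convex combination of the four matrices in (3.7), match the off-diagonal entries against convex weights t1,...,t4 ≥ 0 with t1+t2+t3+t4 = 1:

```latex
0.9\,(t_1 - t_2 - t_3 + t_4) = 0.81,\qquad
0.9\,(t_1 - t_2 + t_3 - t_4) = 0.81,\qquad
0.7\,(t_1 + t_2 - t_3 - t_4) = 0.49 .
% Adding and subtracting the first two equations gives t_1 - t_2 = 0.9 and t_3 = t_4;
% combining the third equation with t_1 + t_2 + 2 t_3 = 1 gives t_1 + t_2 = 0.85,
% hence t_2 = -0.025 < 0, so no valid convex weights exist.
```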
Thus techniques used to characterize global optimizers of the bivariate case may not extend to the multivariate case. 3.2 Global MCPCA Optimizers Here we characterize global optimizers of optimization (3.4). Our main result is as follows: Theorem 2 KX majorizes every K \u2208KX. This Theorem along with Lemma 2 results in the following corollary. Corollary 1 \u03c6i(Xi) = siXi where si \u00b1 1 for 1 \u2264i \u2264p provides a globally optimal solution for the MCPCA optimization (3.4) for 1 \u2264q \u2264p. Below we present the proof of Theorem 2. Proof First we prove the following lemma: Lemma 6 Let K be a p \u00d7 p positive semide\ufb01nite matrix with unit diagonal elements. Let K\u2299j be the j-th Hadamard power of K. Then there exist diagonal matrices Ek for 1 \u2264k \u2264pj\u22121 such that K\u2299j = pj\u22121 \u2211 k=1 EkKEk, (3.9) where \u2211k E2 k = I. 10 \fProof We prove this lemma for j = 2. The case of j > 2 can be shown by a successive application of the proof technique. Since K is a positive semide\ufb01nite matrix we can write K = UT U. Since diagonal elements of K are one we have \u2225ui\u2225= 1 where ui = (u1,i,...,up,i)T is the i-th column of U. Then we have K\u22992 = p \u2211 k=1 EkKEk, (3.10) where Ek = \u239b \u239c \u239c \u239c \u239d uk,1 0 \u22ef 0 0 uk,2 \u22ef 0 \u22ee \u22ee \u22f1 \u22ee 0 0 \u22ef uk,n \u239e \u239f \u239f \u239f \u23a0 . (3.11) Moreover we have p \u2211 k=1 E2 k = \u239b \u239c \u239c \u239c \u239d \u2225u1\u22252 0 \u22ef 0 0 \u2225u2\u22252 \u22ef 0 \u22ee \u22ee \u22f1 \u22ee 0 0 \u22ef \u2225up\u22252 \u239e \u239f \u239f \u239f \u23a0 = I. (3.12) Next we prove the following result on matrix majorization: Lemma 7 Let K be a p \u00d7 p positive semide\ufb01nite matrix with unit diagonal elements. Let X = N \u2211 j=1 DjK\u2299jDj, (3.13) where Dj\u2019s are diagonal matrices such that \u2211N j=1 D2 j = I. Then \u03bb(K) \u227b\u03bb(X). Proof Using Lemma 6 we can write X = M \u2211 j=1 FjKFj, (3.14) where M = (pN \u22121)/(p \u22121) and \u2211M j=1 F2 j = I. Then using Theorem 1 of [32] completes the proof. Let K(\u03c61(X1),...,\u03c6p(Xp)) be the covariance matrix of transformed variables {\u03c6i(Xi)}p i=1. Using (3.2) and for su\ufb03ciently large N we have K(\u03c61(X1),...,\u03c6p(Xp)) = N \u2211 j=1 AjK\u2299j X Aj, (3.15) where Aj = \u239b \u239c \u239c \u239c \u239d a1,j 0 \u22ef 0 0 a2,j \u22ef 0 \u22ee \u22ee \u22f1 \u22ee 0 0 \u22ef ap,j \u239e \u239f \u239f \u239f \u23a0 . (3.16) Since \u2225ai\u22252 = 1 we have \u2211N j=1 A2 j = I. Using Lemma 7 completes the proof. 11 \f4 MCPCA for Finite Discrete Random Variables 4.1 Problem Formulation Let Xi be a discrete random variable with distribution PXi over the alphabet Xi = {1,...,\u2223Xi\u2223}. Without loss of generality, we assume all alphabets have positive probabilities as otherwise they can be neglected, i.e., PXi(x) > 0 for x \u2208{1,...,\u2223Xi\u2223}. Let \u03c6i(Xi) \u2236Xi \u2192R be a function of random variable Xi with zero mean and unit variance. Using a basis expansion approach similar to [28], we have \u03c6i(Xi) = \u2223Xi\u2223 \u2211 i=1 ai,j\u03c8i,j(Xi), (4.1) where \u03c8i,j(x) \u225c1{x = j} 1 \u221a PXi(x) . (4.2) Note that {\u03c8i,j}\u2223Xi\u2223 j=1 form an orthonormal basis with respect to the distribution of Xi because E[\u03c8i,j(Xi)2] = 1, 1 \u2264j \u2264\u2223Xi\u2223 (4.3) E[\u03c8i,j(Xi)\u03c8i,j\u2032(Xi)] = 0, 1 \u2264j \u2260j\u2032 \u2264\u2223Xi\u2223. 
Moreover we have E[\u03c8i,j(Xi)] = \u221a PXi(j), 1 \u2264j \u2264\u2223Xi\u2223. (4.4) Let PXi,Xi\u2032 be the joint distribution of discrete variables Xi and Xi\u2032. De\ufb01ne a matrix Qi,i\u2032 \u2208R\u2223Xi\u2223\u00d7\u2223Xi\u2032\u2223 whose (j,j\u2032) element is Qi,i\u2032(j,j\u2032) \u225c PXi,Xi\u2032(j,j\u2032) \u221a PXi(j)PXi\u2032(j\u2032) . (4.5) This matrix is called the Q-matrix of the distribution PXi,Xi\u2032. Note that E[\u03c8i,j(Xi)\u03c8i\u2032,j\u2032(Xi\u2032)] = Qi,i\u2032(j,j\u2032). (4.6) For i = 1,...,p, let ai \u225c(ai,1,ai,2,...,ai,\u2223Xi\u2223) T (4.7) \u221api \u225c( \u221a PXi(1), \u221a PXi(2),..., \u221a PXi(\u2223Xi\u2223)) T . Theorem 3 Let {a\u2217 i }p i=1 be an optimal solution of the following optimization: max {ai}p i=1 q \u2211 r=1 \u03bbr(K) (4.8) K(i,i\u2032) = aT i Qi,i\u2032ai\u2032, 1 \u2264i,i\u2032 \u2264p, \u2225ai\u22252 = 1, 1 \u2264i \u2264p, ai \u22a5\u221api, 1 \u2264i \u2264p. Then, K\u2217is an optimal solution of MCPCA optimization (1.4). 12 \fProof Consider K \u2208KX in the feasible region of MCPCA optimization (1.4). We have K(i,i\u2032) = E[\u03c6i(Xi)\u03c6i\u2032(Xi\u2032)], 1 \u2264i,i\u2032 \u2264p (4.9) where E[\u03c6i(Xi)] = 0, and E[\u03c6i(Xi)2] = 1 for all 1 \u2264i \u2264p. Using (4.1), we can represent these functions in terms of the basis functions: \u03c6i(Xi) = \u2223Xi\u2223 \u2211 j=1 ai,j\u03c8i,j(Xi), (4.10) \u03c6i\u2032(Xi\u2032) = \u2223Xi\u2032\u2223 \u2211 j=1 ai\u2032,j\u2032\u03c8i\u2032,j\u2032(Xi\u2032). Using (4.3), the constraint E[\u03c6i(Xi)2] = 1 would be translated into \u2225ai\u22252 for 1 \u2264i \u2264p. Moreover using (4.4), the constraint E[\u03c6i(Xi)] = 0 is simpli\ufb01ed to \u2211 \u2223Xi\u2223 j=1 ai,j \u221a PXi(j) = 0 for 1 \u2264i \u2264p. We also have E[\u03c6i(Xi)\u03c6i\u2032(Xi\u2032)] = \u2223Xi\u2223 \u2211 j=1 \u2223Xi\u2032\u2223 \u2211 j\u2032=1 ai,jai\u2032,j\u2032 E[\u03c8i,j(Xi)\u03c8i\u2032,j\u2032(Xi\u2032)] = aT i Qi,i\u2032ai\u2032. (4.11) This shows every feasible point of optimization (1.4) corresponds to a feasible point of optimization (4.8). The inverse argument is similar. This completes the proof. Recall that \u03c3j(Qi,i\u2032) is the j-th largest singular value of the matrix Qi,i\u2032 corresponding to left and right singular vectors uj(Qi,i\u2032) and vj(Qi,i\u2032), respectively. Lemma 8 \u03c31(Qi,i\u2032) = 1, u1(Qi,i\u2032) = \u221api and v1(Qi,i\u2032) = \u221api\u2032. Proof First we show that the maximum singular value of the matrix Qi,i\u2032 is less than or equal to one. To show that, it is su\ufb03cient to show that for every vectors a1 and a2 such that \u2225a1\u2225= 1 and \u2225a2\u2225= 1, we have aT 1 Qi,i\u2032a2 \u22641. To show this, we de\ufb01ne random variables \u03a51 and \u03a52 such that P(\u03a51 = a1,j \u221a PXi(j) ,\u03a52 = a2,j\u2032 \u221a PXi\u2032(j\u2032) ) = PXi,Xi\u2032(j,j\u2032). Using Cauchy-Schwartz inequality, we have aT 1 Qi,i\u2032a2 = E[\u03a51\u03a52] \u2264 \u221a E[\u03a52 1]E[\u03a52 2] = \u2223\u2223a1\u2223\u2223\u2223\u2223a2\u2223\u2223= 1. Therefore, the maximum singular value of Qi,i\u2032 is at most one. Moreover \u221api and \u221api\u2032 are right and left singular vectors of the matrix Qi,i\u2032 corresponding to the singular value one because Qi,i\u2032\u221api\u2032 = \u221api and \u221api T Qi,i\u2032 = \u221api\u2032T . 
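The construction (4.5) and Lemma 8 are easy to check numerically; a minimal sketch follows (the 3 x 4 joint distribution is an arbitrary illustrative choice):

```python
import numpy as np

def q_matrix(P):
    """Q-matrix of a joint pmf P (shape |X_i| x |X_i'|), as in (4.5)."""
    pi = P.sum(axis=1)          # marginal of X_i
    pj = P.sum(axis=0)          # marginal of X_i'
    return P / np.sqrt(np.outer(pi, pj))

rng = np.random.default_rng(0)
P = rng.random((3, 4)); P /= P.sum()            # an arbitrary joint distribution

Q = q_matrix(P)
U, s, Vt = np.linalg.svd(Q)
print(s[0])                                                     # = 1  (Lemma 8)
print(np.allclose(np.abs(U[:, 0]), np.sqrt(P.sum(axis=1))))     # left sing. vector = sqrt(p_i)
print(np.allclose(np.abs(Vt[0]), np.sqrt(P.sum(axis=0))))       # right sing. vector = sqrt(p_i')
```

As a side note (a standard fact about the Q-matrix, not restated in this excerpt), the second singular value σ2(Qi,i′) equals the bivariate maximal correlation (1.2) of the discrete pair Xi, Xi′.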
In the following we use similar techniques to the ones employed in [28] to formulate an alternative and equivalent optimization to (4.8) without orthogonality constraints which proves to be useful in characterizing a globally optimal MCPCA solution when q = 1. Consider the matrix \u02dc Ii \u225cI\u2223Xi\u2223\u2212\u221api\u221api T . This matrix is positive semide\ufb01nite and the only vectors in its null space are 0 and \u221api. This is because for any vector x we have xT (I\u2223Xi\u2223\u2212\u221api \u221api T )x = \u2223\u2223x\u2223\u22232 2 \u2212(x\u221api)2 \u22650, (4.12) 13 \fwhere the Cauchy-Schwartz inequality and \u2223\u2223\u221api\u2223\u22232 2 = 1 are used. The inequality becomes an equality if and only if x = 0 or x = \u221api. Moreover we have \u03bbj(\u02dc Ii) = 1 for 1 \u2264j < \u2223Xi\u2223because (I\u2223Xi\u2223\u2212\u221api \u221api T )uj(\u02dc Ii) = uj(\u02dc Ii) \u2212\u221api \u221api T uj(\u02dc Ii) = uj(\u02dc Ii), (4.13) where the last equality follows from the fact that uj(\u02dc Ii) is orthogonal to u\u2223Xi\u2223(\u02dc Ii) = \u221api. De\ufb01ne Ai \u2208R\u2223Xi\u2223\u00d7\u2223Xi\u2223as follows: Ai \u225c\u239b \u239d[u1(\u02dc Ii),...,u\u2223Xi\u2223\u22121(\u02dc Ii)][u1(\u02dc Ii),...,u\u2223Xi\u2223\u22121(\u02dc Ii)] T \u239e \u23a0. (4.14) Theorem 4 Let {b\u2217 i }p i=1 be an optimal solution of the following optimization: max {bi}p i=1 q \u2211 r=1 \u03bbr(K) (4.15) K(i,i\u2032) = bT i (Qi,i\u2032 \u2212\u221api \u221api\u2032T )bi\u2032, 1 \u2264i,i\u2032 \u2264p, \u2225bi\u22252 = 1, 1 \u2264i \u2264p. Then, {a\u2217 i }p i=1 is an optimal solution of optimization (4.8) where a\u2217 i = Aib\u2217 i . Proof We consider unit variance formulation of the MCPCA optimization (2.4). We have E[(\u03c6i(Xi) \u2212\u00af \u03c6i(Xi))(\u03c6i\u2032(Xi\u2032) \u2212\u00af \u03c6i\u2032(Xi\u2032))] = E[\u03c6i(Xi)\u03c6i\u2032(Xi\u2032)] \u2212\u00af \u03c6i(Xi) \u00af \u03c6i\u2032(Xi\u2032) = aT i Qi,i\u2032ai\u2032 \u2212(aT i \u221api)(aT i\u2032\u221api\u2032) = aT i (Qi,i\u2032 \u2212\u221api \u221api\u2032T )ai\u2032. Moreover we have var(\u03c6i(Xi)) = E[\u03c6i(Xi)2] \u2212(E[\u03c6i(Xi)])2 = \u2223\u2223ai\u2223\u22232 2 \u2212(aT i \u221api)2 = aT i (I \u2212\u221api \u221api T )ai. Therefore optimization (2.4) can be written as max {ai}p i=1 q \u2211 r=1 \u03bbr(K) (4.16) K(i,i\u2032) = aT i (Qi,i\u2032 \u2212\u221api \u221api\u2032T )ai\u2032, 1 \u2264i,i\u2032 \u2264p aT i (I \u2212\u221api \u221api T )ai = 1, 1 \u2264i \u2264p. We can write I \u2212\u221api\u221api T = BiBT i (since I \u2212\u221api\u221api T is positive semide\ufb01nte) where Bi \u225c \u221a I\u2223Xi\u2223\u2212\u221api \u221api T . (4.17) De\ufb01ne bi \u225cBiai. Thus, aT i (I \u2212\u221api\u221api T )ai = 1 can be written as bT i bi = \u2223\u2223bi\u2223\u22232 2 = 1. The vector \u221api is the eigenvector corresponding to eigenvalue zero of the matrix Bi (\u03bb\u2223Xi\u2223(Bi) = 0). Other eigenvalues of Bi is equal to one. Since Bi is not invertible, there are many choices for ai as a function of bi. ai = ([u1(Bi),...,u\u2223Xi\u2223\u22121(Bi)][u1(Bi),...,u\u2223Xi\u2223\u22121(Bi)]T )bi + \u03b1i \u221api = Aibi + \u03b1i \u221api, (4.18) 14 \fwhere \u03b1i can be an arbitrary scalar (note that ui(\u02dc Ii) = ui(Bi)). 
However since the desired ai of optimization (4.8) is orthogonal to the vector \u221api, we choose \u03b1i = 0 (i.e., according to Lemma 3, in order to obtain a mean zero solution of the MCPCA optimization (1.4), we subtract the mean from the optimal solution of optimization (2.4).) Therefore we have aT i (Qi,i\u2032 \u2212\u221api \u221api\u2032T )ai\u2032 = bT i AT i (Qi,i\u2032 \u2212\u221api \u221api\u2032T )Ai\u2032bi\u2032. (4.19) Moreover using Lemma 8, we have Qi,i\u2032 = \u221api \u221api\u2032T + \u2211 j\u22652 \u03c3j(Qi,i\u2032)uj(Qi,i\u2032)uj(Qi\u2032,i)T . (4.20) Thus, AT i (Qi,i\u2032 \u2212\u221api \u221api\u2032)AT i\u2032 = AT i \u239b \u239d\u2211 j\u22652 \u03c3j(Qi,i\u2032)uj(Qi,i\u2032)uj(Qi\u2032,i)T \u239e \u23a0AT i\u2032 (4.21) = \u2211 j\u22652 \u03c3j(Qi,i\u2032)(AT i uj(Qi,i\u2032))(AT i\u2032uj(Qi\u2032,i)) T (I) = \u2211 j\u22652 \u03c3j(Qi,i\u2032)(uj(Qi,i\u2032))(uj(Qi\u2032,i))T = Qi,i\u2032 \u2212\u221api \u221api\u2032, where equality (I) comes from expanding uj(Qi,i\u2032) over the basis {uk(\u02dc Ii)}\u2223Xi\u2223\u22121 k=1 and the fact that uj(Qi,i\u2032) \u22a5\u221api for j \u22652. Using equation (4.21) in (4.19) completes the proof. 4.2 A Globally Optimal MCPCA Solution for the Rank One Constraint In this part \ufb01rst we characterize an upper bound for the objective value of optimization (4.23) for 1 \u2264q \u2264p. Then, we construct a solution that achieves this upper bound for q = 1. De\ufb01ne a matrix B \u2208Rp\u00d7\u2211p i=1 \u2223Xi\u2223such that B \u225c \u239b \u239c \u239c \u239c \u239d b1 0 \u22ef 0 0 b2 \u22ef 0 \u22ee \u22ee \u22f1 \u22ee 0 0 \u22ef bp \u239e \u239f \u239f \u239f \u23a0 . (4.22) Optimization (4.15) can be written as max {bi}p i=1,{vr}q r=1 q \u2211 r=1 vT r BT RBvr (4.23) BT B = I, vT r vr = 1, 1 \u2264r \u2264q, vT r vs = 0, 1 \u2264r \u2260s \u2264q, where B has the structure de\ufb01ned in (4.22), and R \u2208R\u2211p i=1 \u2223Xi\u2223\u00d7\u2211p i=1 \u2223Xi\u2223where Ri,i\u2032 = (Qi,i\u2032 \u2212\u221api \u221api\u2032T ). (4.24) 15 \fLemma 9 The optimal value of optimization (4.23) is upper bounded by \u2211q r=1 \u03bbr(R). Proof De\ufb01ne wr \u225cBvr. We have wT r wr = vT r BT Bvr = vT r vr = 1, 1 \u2264r \u2264q (4.25) wT r ws = vT r BT Bvs = 0, 1 \u2264r \u2260s \u2264q. Thus, max {wr}q r=1 q \u2211 r=1 wT r Rwr (4.26) wT r wr = 1, 1 \u2264r \u2264q wT r ws = 0, 1 \u2264r \u2260s \u2264q is a relaxation of optimization (4.23). The optimal solution of this optimization is achieved when wr = ur(R) for 1 \u2264r \u2264q. This completes the proof. Theorem 5 Let u1(R) = (u1,1(R),u1,2(R),\u22ef,u1,p(R))T where u1,i \u2208R\u2223Xi\u2223. Then, b\u2217 i = u1,i(R) \u2225u1,i(R)\u2225, 1 \u2264i \u2264p, (4.27) is an optimal solution of optimization (4.15) when q = 1. Proof Let v1 = (v1,1,\u22ef,v1,p)T . Choosing v1,i = \u2225u1,i(R)\u2225and b\u2217 i according to (4.27) achieves the upper bound provided in Lemma 9 for the case of q = 1. This completes the proof. 4.3 MCPCA Computation Using a Block Coordinate Descend Algorithm Here we provide a block coordinate descend algorithm to solve the MCPCA optimization for \ufb01nite discrete variables with a general distribution for an arbitrary 1 \u2264q \u2264p. We then show that the algorithm converges to a stationary point of the MCPCA optimization. Let vr = (vr,1,\u22ef,vr,p)T . 
optimization (4.8) can be written as max {ai}p i=1,{vr}q r=1 q \u2211 r=1 p \u2211 i=1 p \u2211 i\u2032=1 vr,ivr,i\u2032aT i Qi,i\u2032ai\u2032 (4.28) \u2225ai\u2225= 1, 1 \u2264i \u2264p, ai \u22a5\u221api, 1 \u2264i \u2264p, vT r vr = 1, 1 \u2264r \u2264q, vT r vs = 0, 1 \u2264r \u2260s \u2264q. Lemma 10 Let wk = q \u2211 r=1 \u2211 i\u2208{1,...,p}\u2212{k} vr,kvr,iQk,iai. (4.29) 16 \fAlgorithm 1 A Block Coordinate Descend Algorithm to Compute MCPCA for Finite Discrete Variables Input: PXi,Xi\u2032 for 1 \u2264i,i\u2032 \u2264p, q Initialization: {a(0) i }p i=1 and {v(0) r }q r=1 for j = 0,1,... for k = 1,...,p compute: w(j) k = \u2211q r=1 \u2211k\u22121 i=1 v(j) r,k v(j) r,i Qk,ia(j) i + \u2211q r=1 \u2211p i=k+1 v(j\u22121) r,k v(j\u22121) r,i Qk,ia(j\u22121) i update: a(j) k = w(j) k \u2212\u221apk T w(j) k \u221apk \u2225w(j) k \u2212\u221apk T w(j) k \u221apk\u2225, if \u2225w(j) k \u2212\u221apk T w(j) k \u221apk\u2225\u22600 compute: K(j) where K(j)(i,i\u2032) = (a(j) i )T Qi,i\u2032a(j) i\u2032 update: v(j) r = ur(K(j)), for 1 \u2264r \u2264q \u03c1(j) q = \u2211q r=1 \u03bbr(K(j)) end If all variables except ak are \ufb01xed in the feasible set of optimization (4.28), then a\u2217 k = wk \u2212\u221apk T wk\u221apk \u2225wk \u2212\u221apk T wk\u221apk\u2225 (4.30) is an optimal solution of the constrained optimization (4.28) if \u2225wk \u2212\u221apk T wk\u221apk\u2225\u22600. If \u2225wk \u2212 \u221apk T wk\u221apk\u2225= 0, every unit norm vector a\u2217 k \u22a5wk is an optimal solution of the constrained optimization (4.28). Proof Under the condition of Lemma 10, optimization (4.28) is simpli\ufb01ed to the following optimization: max ak aT k wk (4.31) \u2225ak\u2225= 1, ak \u22a5\u221apk. Writing wk = \u221apk T wk\u221apk + (wk \u2212\u221apk T wk\u221apk), we have aT k wk = aT k (wk \u2212\u221apk T wk \u221apk), (4.32) since ak \u22a5\u221apk. This completes the proof. Lemma 11 If all variables except {vr}q r=1 are \ufb01xed in the feasible set of optimization (4.28), then v\u2217 r = ur(K) where K(i,i\u2032) = aT i Qi,i\u2032ai\u2032. Proof The proof follows from the eigen decomposition of the covariance matrix K. We use Lemmas 10 and 11 to propose a block coordinate descend Algorithm 1 to compute MCPCA. 17 \fTheorem 6 The sequence \u03c1(j) q in Algorithm 1 is monotonically increasing and convergent. Moreover, if K(j) has top q simple eigenvalues and \u2225w(j) k \u2212\u221apk T w(j) k \u221apk\u2225\u22600 for 1 \u2264k \u2264p and j \u22650, then {a(j) i }p i=1 converges to stationary points of optimization (4.8). Proof According to Lemmas 10 and 11, the sequence \u03c1(j) q is increasing. Moreover, since it is bounded above (Theorem 1, part [i]), it is convergent. Moreover, under the conditions of Theorem 6, at each step, Lemmas 10 and 11 provide a unique optimal solution for optimizing variables {ai}p i=1 and {vr}q r=1. Thus, {a(j) i }p i=1 converges to a stationary point of optimization (4.8) ( [33]). 5 Sample MCPCA Principal component analysis is often applied to an observed data matrix whose rows and columns represent samples and features, respectively. In this part, \ufb01rst we review PCA and then formulate the sample MCPCA optimization (an MCPCA optimization computed over empirical distributions). We then study the consistency of sample MCPCA for both \ufb01nite discrete and continuous variables. 
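For readers who want to experiment with these updates before moving to the sample version, the sketch below implements the block coordinate descent of Algorithm 1 and the rank-one closed form of Theorem 5 in Python/NumPy. It assumes the pairwise Q-matrices Q[i][j] (with Q[i][i] the identity and Q[j][i] = Q[i][j] transposed) and the vectors sqrt_p[i] = sqrt(P_{Xi}) have been precomputed, e.g., as in the earlier Q-matrix sketch; the function names and the simplified sweep order are ours, so this is an illustration rather than the authors' implementation.

import numpy as np

def mcpca_bcd(Q, sqrt_p, q, n_iter=100, seed=0):
    """Block coordinate descent for the MCPCA optimization (a sketch of Algorithm 1)."""
    rng = np.random.default_rng(seed)
    p = len(sqrt_p)
    a = []
    for s in sqrt_p:                                   # feasible start: unit norm, orthogonal to sqrt(p_i)
        x = rng.standard_normal(len(s))
        x -= (x @ s) * s
        a.append(x / np.linalg.norm(x))
    V = np.linalg.qr(rng.standard_normal((p, q)))[0]   # columns play the role of v_1, ..., v_q
    rho = 0.0
    for _ in range(n_iter):
        for k in range(p):                             # update a_k with everything else fixed (Lemma 10)
            w = sum((V[k] @ V[i]) * (Q[k][i] @ a[i]) for i in range(p) if i != k)
            w = w - (sqrt_p[k] @ w) * sqrt_p[k]        # remove the component along sqrt(p_k)
            if np.linalg.norm(w) > 1e-12:
                a[k] = w / np.linalg.norm(w)
        K = np.array([[a[i] @ Q[i][j] @ a[j] for j in range(p)] for i in range(p)])
        evals, evecs = np.linalg.eigh(K)
        V = evecs[:, ::-1][:, :q]                      # update v_r = u_r(K) (Lemma 11)
        rho = evals[::-1][:q].sum()                    # current objective value rho_q
    return a, rho

def mcpca_rank_one(Q, sqrt_p):
    """Globally optimal coefficients for q = 1 via the top eigenvector of R (Theorem 5)."""
    p = len(sqrt_p)
    sizes = [len(s) for s in sqrt_p]
    offs = np.concatenate(([0], np.cumsum(sizes)))
    R = np.zeros((offs[-1], offs[-1]))
    for i in range(p):
        for j in range(p):
            R[offs[i]:offs[i + 1], offs[j]:offs[j + 1]] = Q[i][j] - np.outer(sqrt_p[i], sqrt_p[j])
    u1 = np.linalg.eigh(R)[1][:, -1]                   # top eigenvector u_1(R)
    blocks = [u1[offs[i]:offs[i + 1]] for i in range(p)]
    return [b / np.linalg.norm(b) for b in blocks]     # b_i* of (4.27); Theorem 4 gives a_i* = A_i b_i*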
5.1 Review of PCA Let X \u2208Rn\u00d7p be a data matrix: X = (x1,...,xp) = \u239b \u239c \u239d xT 1 \u22ee xT n \u239e \u239f \u23a0 , (5.1) where xi and xj represent its i-th row and j-th column, respectively. Let X(i,j), or interchangeably Xi,j, denote the (i,j)-th element of X. PCA aims to \ufb01nd orthonormal vectors v1,...,vq where vi \u2208Rp and q \u2264p such that the average mean squared error between \u02c6 xk and xk for 1 \u2264k \u2264n is minimized: min {vi}q i=1,\u03b8 1 n n \u2211 k=1 \u2225xk \u2212\u02c6 xk\u22252 (5.2) \u02c6 xk = \u03b8 + q \u2211 i=1 (vT i (xk \u2212\u03b8))vi, 1 \u2264k \u2264n vT i vj = 0, 1 \u2264i \u2260j \u2264q vT i vi = 1, 1 \u2264i \u2264q. Let \u03b8\u2217 n \u225c1 n n \u2211 i=1 xi (5.3) Kn \u225c1 n n \u2211 i=1 (xi \u2212\u03b8\u2217)(xi \u2212\u03b8\u2217)T . Kn and \u03b8\u2217 n are the empirical covariance matrix and the empirical mean of the data, respectively. 18 \fTheorem 7 \u03b8\u2217 n and u1(Kn),...,uq(Kn) provide an optimal solution for optimization (5.2). Proof See reference [3]. By subtracting \u03b8\u2217 n from rows of the input matrix, the mean of each column becomes zero. This procedure is called centring the input data. 5.2 Sample MCPCA for Finite Discrete Variables Let X1,..., Xp be discrete variables with joint distribution PX1,...,Xp. Let the alphabet size of variables (i.e., \u2223Xi\u2223) be \ufb01nite. We observe n independent samples {xi}n i=1 from this distribution. Let X \u2208Rn\u00d7p be the data matrix (5.1). Sample MCPCA aims to \ufb01nd possibly nonlinear transformations of the data (i.e., \u03c6i(xi) for 1 \u2264i \u2264p) to minimize the mean squared error (MSE) between the transformed data and its low rank approximation by q orthonormal vectors v1,...,vq: min {vi}q i=1,{\u03c6i}p i=1 1 n n \u2211 k=1 \u2225\u03c6(xk) \u2212\u02c6 \u03c6(xk)\u22252 (5.4) \u02c6 \u03c6(xk) = q \u2211 i=1 (vT i \u03c6(xk))vi, 1 \u2264k \u2264n \u03c6(xk) = (\u03c61(Xk,1),...,\u03c6p(Xk,p)), 1 \u2264k \u2264n vT i vj = 0, 1 \u2264i \u2260j \u2264q, vT i vi = 1, 1 \u2264i \u2264q, 1 n n \u2211 k=1 \u03c6i(Xk,i) = 0, 1 \u2264i \u2264p, 1 n n \u2211 k=1 \u03c6i(Xk,i)2 = 1, 1 \u2264i \u2264p. The constraint 1 n \u2211n k=1 \u03c6i(Xk,i) = 0 is similar to the centring step in the standard PCA where columns of the data matrix are transformed to have empirical zero means (Theorem 7). The additional constraint 1 n \u2211n k=1 \u03c6i(Xk,i)2 = 1 makes columns of the transformed matrix to have equal norms. Let (Y1,...,Yp) be p \ufb01nite discrete random variables whose joint probability distribution PY1,...,Yp is equal to the empirical distribution of observed samples {xi}n i=1. I.e., Pr(Y1 = j1,...,Yp = jp) = 1 n n \u2211 k=1 1{Xk,1 = j1,...,Xk,p = jp} (5.5) for ji \u2208{1,2,...,\u2223Yi\u2223}. Theorem 8 Let K\u2217be an optimal solution of the MCPCA optimization (1.4) over variables {Yi}p i=1 corresponding to transformation functions {\u03c6\u2217 i (Yi)}p i=1. Then, {ur(K\u2217)}q r=1,{\u03c6\u2217 i (.)}p i=1 provide an optimal solution for optimization (5.4). Proof De\ufb01ne V \u2208Rp\u00d7q as follows: V = (v1,\u22ef,vq). (5.6) 19 \fThus \u02c6 \u03c6(xk) = VVT \u03c6(xk), 1 \u2264k \u2264n. (5.7) We have \u02c6 \u03c6(xk) \u2212\u03c6(xk) = (I \u2212VVT )\u03c6(xk). (5.8) Let BV \u225cI \u2212VVT . Note that BV = B2 V = BT V. 
Therefore we have 1 n n \u2211 k=1 \u2225\u03c6(xk) \u2212\u02c6 \u03c6(xk)\u22252 (5.9) = 1 n n \u2211 k=1 \u03c6(xk)T BT VBV\u03c6(xk) = 1 n n \u2211 k=1 \u03c6(xk)T (I \u2212VVT )\u03c6(xk) = Tr ( 1 n n \u2211 k=1 \u03c6(xk)\u03c6(xk)T ) \u2212Tr (VT ( 1 n n \u2211 k=1 \u03c6(xk)\u03c6(xk)T )V) = Tr(K) \u2212Tr(VT KV) = p \u2212Tr(VT KV), K \u225c1 n \u2211n k=1 \u03c6(xk)\u03c6(xk)T . Since {Yi}p i=1 is distributed according to the empirical distribution of samples {xk}n k=1, we have K(i,i\u2032) = 1 n n \u2211 k=1 \u03c6i(Xk,i)\u03c6i\u2032(Xk,i\u2032) = E[\u03c6i(Yi)\u03c6i\u2032(Yi\u2032)]. (5.10) Similarly the constraint 1 n \u2211n k=1 \u03c6i(Xk,i) = 0 is simpli\ufb01ed to the constraint E[\u03c6i(Yi)] = 0, while the constraint 1 n \u2211n k=1 \u03c6i(Xk,i)2 = 1 is translated to the constraint E[\u03c6i(Yi)2] = 1. Therefore, optimization (5.4) can be written as min V,K p \u2212Tr(VT KV) (5.11) VT V = Iq, K \u2208KY . Moreover using (5.6), we have Tr(VT KV) = q \u2211 r=1 vT r Kvr. (5.12) Let \u03bb = vT r Kvr. Since vr\u03bb = Kvr, vr is an eigenvector of K corresponding to eigenvalue \u03bbr(K). This simpli\ufb01es optimization (5.11) to optimization (1.4) and completes the proof. The following Theorem discusses the consistency of sample MCPCA for \ufb01nite discrete variables. Theorem 9 Let \u03c1\u2217 q and \u02dc \u03c1q(n) be optimal MCPCA values over variables {Xi}p i=1 and {Yi}p i=1. Let p and q be \ufb01xed. As n \u2192\u221e, with probability one, \u02dc \u03c1q(n) \u2192\u03c1\u2217 q. Proof The proof follows form the fact that for a \ufb01xed p and q, as n \u2192\u221e, eigenvalues of the empirical covariance matrix converge to the eigenvalues of the true covariance matrix, with probability one. 20 \fAlgorithm 2 A Block Coordinate Descend Algorithm to Compute Sample MCPCA For Finite Discrete Variables Input: X, q Initialization: {\u03c6(0) i }p i=1 and {v(0) r }q r=1 for j = 0,1,... for k = 1,...,p compute: w(j) k = \u2211q r=1 \u2211k\u22121 i=1 v(j) r,k v(j) r,i \u03c6(j) i (xi) + \u2211q r=1 \u2211p i=k+1 v(j\u22121) r,k v(j\u22121) r,i \u03c6(j\u22121) i (xi) update: \u03c6(j) k = E[w(j) k \u2223Yk]/\u2223\u2223E[w(j) k \u2223Yk]\u2223\u2223, if \u2223\u2223E[w(j) k \u2223Yk]\u2223\u2223\u22600 compute: K(j) where K(j)(i,i\u2032) = 1 n \u2211n s=1 \u03c6(j) i (Xs,i)\u03c6(j) i\u2032 (Xs,i\u2032) update: v(j) r = ur(K(j)), for 1 \u2264r \u2264q \u03c1(j) q = \u2211q r=1 \u03bbr(K(j)) end 5.3 Computation of Sample MCPCA for Finite Discrete Variables One way to compute sample MCPCA is to use empirical pairwise joint distributions in Algorithm 1. However, forming and storing these empirical pairwise joint distributions may be expensive. Below, we discuss computation of the sample MCPCA optimization without forming pairwise joint distributions. Let vr = (vr,1,\u22ef,vr,p)T . The sample MCPCA optimization (1.4) can be written as follows: max {\u03c6i}p i=1,{vr}q r=1 q \u2211 r=1 p \u2211 i=1 p \u2211 i\u2032=1 vr,ivr,i\u2032 ( 1 n n \u2211 s=1 \u03c6i(Xs,i)\u03c6i\u2032(Xs,i\u2032)) (5.13) 1 n n \u2211 s=1 \u03c6i(Xs,i) = 0, 1 \u2264i \u2264p, 1 n n \u2211 s=1 \u03c6i(Xs,i)2 = 1, 1 \u2264i \u2264p vT r vr = 1, 1 \u2264r \u2264q vT r vs = 0, 1 \u2264r \u2260s \u2264q. Let (Y1,...,Yp) be p \ufb01nite discrete random variables whose joint probability distribution PY1,...,Yp is equal to the empirical distribution of observed samples {xi}n i=1. De\ufb01ne the vector wk \u2208Rn as follows: wk \u225c q \u2211 r=1 \u2211 i\u2208{1,...,p}\u2212{k} vr,kvr,i\u03c6i(xi). 
(5.14) Lemma 12 If all variables except \u03c6k are \ufb01xed in the feasible set of optimization (5.13), then \u03c6\u2217 k(Yk) = E[wk\u2223Yk]/\u2223\u2223E[wk\u2223Yk]\u2223\u2223, (5.15) is the optimal solution of the constrained optimization (5.13) if \u2223\u2223E[wk\u2223Yk]\u2223\u2223\u22600. If \u2223\u2223E[wk\u2223Yk]\u2223\u2223= 0, every mean zero and unit norm \u03c6\u2217 k is an optimal solution of the constrained optimization (5.13). 21 \fProof If all variables except \u03c6k are \ufb01xed, optimization (5.13) can be simpli\ufb01ed to max \u03c6k < \u03c6k(xk),wk > (5.16) 1T \u03c6k(xk) = 0, \u2223\u2223\u03c6k(xk)\u2223\u22232 = n. Note that since there exists \u03c6k such that < \u03c6k(xk),wk >\u22650, the constraint \u2223\u2223\u03c6k(xk)\u2223\u22232 = n can be replaced by the constraint \u2223\u2223\u03c6k(xk)\u2223\u22232 \u2264n. Now consider the following optimization: max \u03c6k < \u03c6k(xk),wk > (5.17) \u2223\u2223\u03c6k(xk)\u2223\u22232 \u2264n. We show that the optimal solution of optimization (5.17) has zero mean. For simplicity, we use \u03c6k instead of \u03c6k(xk). We proceed by contradiction. Suppose \u03c6\u2217 k is an optimal solution of optimization (5.17) whose mean is not zero (i.e., \u00af \u03c6\u2217 k \u22600). Consider the following solution: \u02dc \u03c6k = \u221an \u03c6\u2217 k \u2212\u00af \u03c6\u2217 k \u2223\u2223\u03c6\u2217 k \u2212\u00af \u03c6\u2217 k\u2223\u2223. (5.18) Note that \u2223\u2223\u02dc \u03c6k\u2223\u22232 = n. Thus, \u02dc \u03c6k belongs to the feasible set of optimization (5.17). Moreover we have \u2223\u2223\u03c6\u2217 k \u2212\u00af \u03c6\u2217 k\u2223\u22232 = \u2223\u2223\u03c6\u2217 k \u22121T \u03c6\u2217 k n \u2223\u22232 (5.19) = \u2223\u2223\u03c6\u2217 k\u2223\u22232 + ( 1 n2 \u22122 n)(1T \u03c6\u2217 k)2 < \u2223\u2223\u03c6\u2217 k\u2223\u22232 \u2264n. Therefore, \u221an \u2223\u2223\u03c6\u2217 k \u2212\u00af \u03c6\u2217 k\u2223\u2223> 1. (5.20) Using (5.20) and the fact that 1T wk = 0, \u02dc \u03c6k leads to a strictly larger objective value of optimization (5.17) than the one of \u03c6\u2217 k, which is a contradiction. Therefore, the optimal solution of optimization (5.17) has zero mean. Thus, optimization (5.17) is a tight relaxation of optimization (5.16). De\ufb01ne \u03b8k(Yk) \u225cE[wk\u2223Yk]. Thus, < \u03c6k(xk),wk >= E[\u03c6k(Yk)\u03b8k(Yk)]. Moreover, \u2223\u2223\u03c6k(xk)\u2223\u22232 = nE[Y 2 k ]. Therefore, optimization (5.17) is simpli\ufb01ed to the following optimization: max E[\u03c6k(Yk)\u03b8k(Yk)] (5.21) E[Y 2 k ] \u22641. Using the Cauchy-Schwartz inequality completes the proof. To update variables {vr}q r=1, one can use Lemma 11. Similarly to Algorithm 1, to solve the sample MCPCA optimization for \ufb01nite discrete variables, we propose Algorithm 2 which is based on a block coordinate descend approach. 22 \fTheorem 10 The sequence \u03c1(j) q in Algorithm 2 is monotonically increasing and convergent. Moreover, if K(j) has top q simple eigenvalues and \u2223\u2223E[w(j) k \u2223Yk]\u2223\u2223\u22600 for 1 \u2264k \u2264p and j \u22650, then {\u03c6(j) i }p i=1 converges to stationary points of optimization (5.13). Proof The proof is similar to the one of Theorem 6. Proposition 2 Each iteration of Algorithm 2 has a computational complexity of O(p3 + np2) and a memory complexity of O(np). Remark 1 The computational complexity of Isomap and LLE is O(n3) and O(pn2) while their memory complexity is O(n2) and O(pn2), respectively. 
Unlike Isomap and LLE, computational and memory complexity of MCPCA Algorithm 2 scales linearly with the number of samples n which makes it suitable for data sets with large number of samples. 5.4 Sample MCPCA for Continuous Variables In this part, we consider the case where X1,..., Xp are continuous variables with the density function fX1,...,Xp. Here we assume X1,...,Xp have bounded ranges. Without loss of generality, let Xi \u2208[0,1] for 1 \u2264i \u2264p. Moreover, let the density function satisfy fXi(x) > 0 for x \u2208[0,1] and 1 \u2264i \u2264p. We observe n independent samples {xi}n i=1 from this distribution. The data matrix X \u2208Rn\u00d7p is de\ufb01ned according to (5.1). Since X1,...,Xp are continuous, with probability one, each column of the matrix X has n distinct values. Thus, with probability one, there exists {\u03c6\u2217 i (.)} such that \u03c6\u2217 i (xi) = w for 1 \u2264i \u2264p, where w is a vector in Rn whose mean is zero and its norm is equal to \u221an. Therefore, with probability one, the optimal value of optimization (5.4) is equal to p. In the continuous case, the space of feasible transformation functions has in\ufb01nite degrees of freedom. Thus, by observing n samples from these continuous variables, we over-\ufb01t functions to observed samples. Note that in the case of having observations from \ufb01nite discrete variables, transformation functions have \ufb01nite degrees of freedom and if the number of samples are su\ufb03ciently large, over-\ufb01tting issue does not occur (Theorem 9). One approach to overcome the over-\ufb01tting issue in the continuous case is to restrict the feasible set of optimization (5.4) to functions whose degrees of freedom are smaller than the number of observed samples n. One such family of functions is piecewise linear functions with d degrees of freedom: De\ufb01nition 1 Let w \u2208Rd+1. Gd(w) is de\ufb01ned as the set of all functions g \u2236[0,1] \u2192R such that gd(x) \u225c{ wj if x = j d,0 \u2264j \u2264d (wj+1 \u2212wj)(Mx \u2212j) + wj if j d < x < j+1 d ,0 \u2264j \u2264d \u22121 (5.22) Moreover, Gd \u225c{Gd(w) \u2236w \u2208Rd+1}. Let {xk}n k=1 be observed sample from continuous variables X1,...,Xp. Sample MCPCA aims to 23 \fsolve the following optimization: min {vi}q i=1,{\u03c6i}p i=1 1 n n \u2211 k=1 \u2225\u03c6(xk) \u2212\u02c6 \u03c6(xk)\u22252 (5.23) \u02c6 \u03c6(xk) = q \u2211 i=1 (vT i \u03c6(xk))vi, 1 \u2264k \u2264n \u03c6(xk) = (\u03c61(Xk,1),...,\u03c6p(Xk,p)), 1 \u2264k \u2264n vT i vj = 0, 1 \u2264i \u2260j \u2264q vT i vi = 1, 1 \u2264i \u2264q, 1 n n \u2211 k=1 \u03c6i(Xk,i) = 0, 1 \u2264i \u2264p, 1 n n \u2211 k=1 \u03c6i(Xk,i)2 = 1, 1 \u2264i \u2264p \u03c6i \u2208Gd, 1 \u2264i \u2264p. Theorem 11 Consider the following optimization: max {\u03c6i}p i=1 q \u2211 r=1 \u03bbr(K) (5.24) K(i,i\u2032) = 1 n n \u2211 k=1 \u03c6i(Xk,i)\u03c6i\u2032(Xk,i\u2032), 1 \u2264i,i\u2032 \u2264p 1 n n \u2211 k=1 \u03c6i(Xk,i)2 = 1, 1 \u2264i \u2264p 1 n n \u2211 k=1 \u03c6i(Xk,i) = 0, 1 \u2264i \u2264p \u03c6i \u2208Gd, 1 \u2264i \u2264p. Let K\u2217be an optimal solution of optimization (5.24) corresponding to transformation functions {\u03c6\u2217 i }p i=1. Then, {ur(K\u2217)}q r=1,{\u03c6\u2217 i }p i=1 provide an optimal solution of optimization (5.23). Proof The proof is similar to the one of Theorem 8. Proposition 3 Let columns of the data matrix X have zero means and unit variances. If d = 1, the sample MCPCA optimization (5.23) is equivalent to the PCA optimization (5.2). 
Proof For d = 1, Gd only contains linear functions. Since columns of the data matrix X are assumed to be normalized, optimization (5.23) is equivalent to optimization (5.2). 5.5 Computation of MCPCA and Sample MCPCA for Continuous Variables De\ufb01ne discrete variables Yi,d whose alphabets are {1,2,..,d} and Pr(Y1,d = j1,...,Yp,d = jp) = \u222b j1/d x1=(j1\u22121)/d\u22ef\u222b jp/d xp=(jp\u22121)/d fX1,...,Xp(x1,...,xp)dx1...dxp (5.25) 24 \fBelow we establish a connection between solutions of the MCPCA optimization over continuous variables and their discretized versions. We will use this connection to compute MCPCA and sample MCPCA over continuous variables. Theorem 12 Let \u03c1\u2217 q and \u02c6 \u03c1\u2217 q,d be optimal values of the MCPCA optimization (1.4) over continuous variables {Xi}p i=1 and discrete variables {Yi,d}p i=1, respectively. As d \u2192\u221e, with probability one, \u02c6 \u03c1\u2217 q,d \u2192\u03c1\u2217 q. Moreover, let {\u02c6 \u03c6\u2217 i,d(.)} be an optimal solution of the MCPCA optimization (1.4) over discrete variables {Yi,d}p i=1. Let wi = (\u02c6 \u03c6\u2217 i,d(1), \u02c6 \u03c6\u2217 i,d(1),..., \u02c6 \u03c6\u2217 i,d(d)). Then, as d \u2192\u221e, with probability one, {gd(wi)} is an optimal solution of the MCPCA optimization (1.4) over continuous variables {Xi}p i=1. Proof For 1 \u2264i \u2264p, let \u03c6i \u2236[0,1] \u2192R be a feasible function in the MCPCA optimization (1.4) over continuous variables {Xi}p i=1. De\ufb01ne \u02c6 \u03c6i,d \u2236{1,2,...,d} \u2192R such that \u02c6 \u03c6i,d(j) \u225c\u03c6i((j \u22121)/d). (5.26) Below we show that as d \u2192\u221e, with probability one, {\u02c6 \u03c6i,d}p i=1 is feasible in the MCPCA optimization (1.4) over discrete variables {Yi,d}p i=1. We have E[\u02c6 \u03c6i,d(Yi,d)] = d \u2211 j=1 Pr(Yi,d = j)\u02c6 \u03c6i,d(j) (5.27) = d \u2211 j=1\u222b j/d x=(j\u22121)/d fXi(x)\u03c6i(j \u22121 d ) \u2192 d \u2211 j=1\u222b j/d x=(j\u22121)/d \u03c6i(x)fXi(x)dx = \u222b 1 x=0 \u03c6i(x)fXi(x)dx = 0. Similarly as d \u2192\u221e, with probability one, E[\u02c6 \u03c6i,d(Yi,d)2] = 1, and E[\u02c6 \u03c6i,d(Yi,d)\u02c6 \u03c6i\u2032,d(Yi\u2032,d)] = E[\u03c6i(Xi)\u03c6i\u2032(Xi\u2032)]. (5.28) Now consider {\u02c6 \u03c6i,d(Yi,d)}p i=1 as a feasible point for the MCPCA optimization (1.4) over discrete variables {Yi,d}p i=1. For 1 \u2264i \u2264p, de\ufb01ne \u02dc \u03c6i \u225cgd(\u02c6 \u03c6i,d(1), \u02c6 \u03c6i,d(1), \u02c6 \u03c6i,d(2),..., \u02c6 \u03c6i,d(d)). (5.29) Note that \u02dc \u03c6i \u2236[0,1] \u2192R. Similarly to the previous argument, as d \u2192\u221e, with probability one, { \u02dc \u03c6i}p i=1 is a feasible point in the MCPCA optimization (1.4) over continuous variables {Xi}p i=1. Moreover, as d \u2192\u221e, with probability one, we have E[ \u02dc \u03c6i(Xi) \u02dc \u03c6i\u2032(Xi\u2032)] = E[\u02c6 \u03c6i,d(Yi,d)\u02c6 \u03c6i\u2032,d(Yi\u2032,d)]. (5.30) Consider {\u03c6\u2217 i }p i=1 as an optimal solution of optimization (1.4) over continuous variables {Xi}p i=1 with the optimal value \u03c1\u2217 q. Construct {\u02c6 \u03c6i,d(.)}p i=1 according to equation (5.26). As d \u2192\u221e, with 25 \fprobability one, {\u02c6 \u03c6i,d(.)}p i=1 is a feasible point for the MCPCA optimization (1.4) over discrete variables {Yi,d}p i=1 which leads to the MCPCA objective value \u02c6 \u03c1q,d = \u03c1\u2217 q. Thus, \u02c6 \u03c1\u2217 q,d \u2265\u03c1\u2217 q. 
Now consider {\u02c6 \u03c6\u2217 i,d}p i=1 as an optimal solution of optimization (1.4) over discrete variables {Yi,d}p i=1 which leads to the MCPCA objective value \u02c6 \u03c1\u2217 q,d. Construct { \u02dc \u03c6i}p i=1 according to equation (5.29). As d \u2192\u221e, with probability one, { \u02dc \u03c6i}p i=1 is a feasible point for the MCPCA optimization (1.4) over continuous variables {Xi}p i=1 with the optimal value \u03c1q = \u03c1\u2217 q,d. Thus, \u03c1q \u2265\u03c1\u2217 q,d. This completes the proof. Theorem 12 simpli\ufb01es the MCPCA computation over continuous variables {Xi}p i=1 to the MCPCA computation over discrete variables {Yi,d}d i=1 which can be solved using Algorithm 1. A similar approach can be taken to simplify the sample MCPCA optimization over continuous variables to the one of the discrete variables which can be solved using Algorithm 2. Variable Yi,d provides a discretized version of the continuous variable Xi where the position of knots (i.e., discretization thresholds) are uniformly spaced in the range of the variable. However the argument of Theorem 12 can be extended to consider other nonuniform and data-dependent discretization as well. For example, in the case that we observe n samples from Xi, one can choose the position of discretization knots to have equal number of samples in each discretization level. In the sample MCPCA implementation for continuous variables, we use such a nonuniform discretization approach. 6 MCPCA Applications to Synthetic and Real Data Sets 6.1 Synthetic Discrete Data First, we illustrate performance of MCPCA over simulated discrete data. We generate n = 1000 independent samples from p = 50 discrete variables whose covariance matrix is shown in Figure 2-a (left panel). These samples are generated as discretized version of continuous jointly Gaussian samples. Alphabet sizes of variables (i.e., the number of quantization levels) are equal to 10. We then apply unknown random functions (with zero means and unit variances) to samples of each variable. The covariance matrix of observed samples (i.e., samples from transformed variables) is shown in Figure 2-a (right panel). Owing to transformations of variables, the block diagonal structure of the latent covariance matrix has been faded in the observed one. We apply the sample MCPCA Algorithm 2 with parameter q to the observed data matrix. We use 10 random initializations and 10 repeats of Algorithm 2. Figure 2-b illustrates the covariance matrix computed by the MCPCA algorithm with parameter 1 \u2264q \u22646. MCPCA with q = 1 highlights some of the block diagonal structure in the latent covariance matrix. MCPCA with larger q recovers all the blocks. Note that the MCPCA algorithm aims to \ufb01nd a covariance matrix of transformed variables with the largest Ky Fan norm and is not tailored to infer a speci\ufb01c hidden structure in the data. Nevertheless inferring a low rank covariance matrix often captures such hidden structures in the data. Figure 2-c,d shows the q\u2032 Ky Fan norm for the latent covariance matrix, for the observed covariance matrix (i.e., the PCA objective value), and for covariance matrices computed by MCPCA with di\ufb00erent q values. For q = 1, Theorem 5 provides a globally optimal solution for the MCPCA optimization. We include that solution as well as the MCPCA solution computed in Algorithm 2. Figure 2-c shows that the Ky Fan norm of covariance matrices computed by MCPCA are signi\ufb01cantly larger than the one of the PCA. 
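The runs of Algorithm 2 described above operate directly on a data matrix. The Python/NumPy sketch below is a simplified illustration (not the authors' code): it alternates the conditional-expectation update of each column (Lemma 12) with the eigen-update of v_1, ..., v_q, using 0-indexed level codes per feature; the initialization and sweep order are simplified.

import numpy as np

def sample_mcpca(X, q, n_iter=50, seed=0):
    """Sample MCPCA for a discrete data matrix X (n samples x p features);
    a sketch of Algorithm 2. Returns the transformed data Phi (n x p), the
    top-q loadings V (p x q), and the objective value rho_q."""
    rng = np.random.default_rng(seed)
    n, p = X.shape
    codes = [np.unique(X[:, k], return_inverse=True)[1] for k in range(p)]
    Phi = rng.standard_normal((n, p))
    Phi -= Phi.mean(axis=0)
    Phi /= np.sqrt((Phi ** 2).mean(axis=0))            # zero-mean, unit-variance columns
    V = np.linalg.qr(rng.standard_normal((p, q)))[0]
    rho = 0.0
    for _ in range(n_iter):
        for k in range(p):
            coef = V @ V[k]                            # entry i equals sum_r v_{r,i} v_{r,k}
            coef[k] = 0.0
            w = Phi @ coef                             # w_k evaluated on the samples
            cond = np.bincount(codes[k], weights=w) / np.bincount(codes[k])
            phi = cond[codes[k]]                       # phi_k <- E[w_k | Y_k]  (Lemma 12)
            norm = np.sqrt((phi ** 2).mean())
            if norm > 1e-12:
                Phi[:, k] = phi / norm                 # rescale to unit empirical variance
        K = (Phi.T @ Phi) / n                          # covariance of the transformed data
        evals, evecs = np.linalg.eigh(K)
        V = evecs[:, ::-1][:, :q]                      # v_r = u_r(K)
        rho = evals[::-1][:q].sum()
    return Phi, V, rho

# tiny usage example on a random categorical data matrix
X = np.random.default_rng(1).integers(0, 5, size=(200, 8))
print(sample_mcpca(X, q=2)[2])                         # rho_2 after the sweeps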
In Figure 2-d, we show the q′ Ky Fan norm for q′ = 1,2 for different covariance matrices. Note that the method of Theorem 5 provides a globally optimal solution for q′ Ky Fan norm maximization when q′ = 1, while the MCPCA Algorithm 2 provides a locally optimal solution. In this case (Figure 2-d, the left panel), the gap between the global and local optimal values is small. Moreover, for the case of q′ = 2 (Figure 2-d, the right panel), the MCPCA solution with parameter q = 2 outperforms other solutions. Finally, in the case considered in Figure 2-c,d, we observe that the Ky Fan norm of the covariance matrix computed by the MCPCA algorithm is not sensitive to the parameter q.
[Figure 2: (a) An illustration of the latent, observed, and MCPCA absolute covariance matrices with different q values. (b,c) An illustration of the q′ Ky Fan norm of the latent, observed, and MCPCA covariance matrices with different q values for 1 ≤ q′ ≤ p (panel b), and q′ = 1,2 (panel c).]
[Figure 3: Performance comparison of dimensionality reduction methods (MCPCA, PCA, IsoMap, Autoencoder, Probabilistic PCA, Diffusion Maps, Kernel PCA (poly.), LLE) on synthetic datasets: (a) polynomial transformations (p=20, q=1), (b) polynomial transformations (p=20, q=5), (c) polynomial transformations + noise (p=50, q=10), (d) piecewise linear transformations + noise (p=50, q=10). The vertical axis is the inference quality of the underlying q dimensional embedding. The line in the middle of each box is the median result while the tops and bottoms of each box are the 25th and 75th percentiles of the results.]
6.2 Synthetic Continuous Data
Next, we compare performance of different dimensionality reduction methods including MCPCA, PCA, Isomap, LLE, multilayer autoencoders (neural networks), kernel PCA, probabilistic PCA and diffusion maps on synthetic datasets. We assess the performance of different dimensionality reduction methods based on how much sample distances in the inferred and true low dimensional spaces match with each other. More precisely, let Xlatent ∈ Rn×p be a matrix whose rank is q < p.
[Figure 4: This figure demonstrates the fraction of explained variance in six datasets (breast cancer, gene splicing, dermatology, adult income, Parkinsons disease, and diabetic retinopathy) using the top q meta features computed by PCA and MCPCA.]
Table 1: Properties of data sets considered in Section 6.3.
Data Set | # of samples (n) | # of features (p) | # of classes | class distribution
Breast Cancer | 683 | 9 | 2 | (239, 444)
Gene Splicing | 3,175 | 60 | 2 | (1527, 1648)
Dermatology | 366 | 33 | 6 | (112, 61, 72, 49, 52, 20)
Adult Income | 30,162 | 14 | 2 | (7508, 22654)
Parkinsons Disease | 195 | 22 | 2 | (48, 147)
Diabetic Retinopathy | 1,151 | 19 | 2 | (540, 611)
Let di,j be the distance between sample i and j in the q dimensional representation of Xlatent. Let N ∈ Rn×p be the noise matrix. Let X ∈ Rn×p be the observed data matrix whose columns are transformations of columns of the matrix Xlatent + N. These transformations are assumed to be continuous and bijective. Let d̃i,j be the distance between sample i and j in the inferred q dimensional representation of X. We assess the performance of the dimensionality reduction method by computing the Spearman's rank correlation between di,j and d̃i,j for 1 ≤ i,j ≤ n. We generate Xlatent as Xlatent = UV^T where U ∈ Rn×q and V ∈ Rp×q. Elements of U and V are generated according to a Gaussian distribution with zero mean and unit variance. In the noiseless case, N is an all-zero matrix. In the noisy case, elements of N are generated according to a Gaussian distribution with zero mean and unit variance. We consider two types of transformations to generate columns of X using columns of the matrix Xlatent + N: (i) a polynomial transformation where for each variable we randomly select a transformation from the set {x, x^3, x^5}, and (ii) a piecewise linear transformation according to Definition 1 where wj+1 − wj has an exponential distribution with parameter 100. The positions of knots are chosen so that each bin has an equal number of samples. We use default parameters for different dimensionality reduction methods. IsoMap and LLE have a parameter Nngbr which determines the number of neighbors considered in their distance graphs. Nngbr is set to be 12. Moreover, for the continuous data, MCPCA has a parameter d which restricts the optimization to a set of piecewise linear functions with degree d. We set d = 10. For other methods we use implementations of reference [27]. Experiments have been repeated 10 times in each case. In Figure 3-a we consider a relatively easy setup where p = 20, q = 1, transformation functions are polynomials, and there is no added noise to observed samples. In this setup, all methods except LLE and kernel PCA have good performance. Gaussian kernel PCA performed poorly in these experiments. Thus, we only illustrate performance of polynomial kernel PCA in this figure. It further highlights sensitivity of kernel PCA to the model setup.
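The synthetic continuous setup just described can be reproduced in outline with the Python sketch below (an illustration with our own function names, not the authors' code): it generates the low-rank latent data Xlatent = UV^T, adds optional Gaussian noise, applies a random polynomial or piecewise-linear transformation per column, and scores an embedding by the Spearman rank correlation of pairwise distances. For simplicity we read the (Mx − j) term of Definition 1 as (dx − j), take "exponential distribution with parameter 100" to mean rate 100, and use uniformly spaced knots.

import numpy as np
from scipy.spatial.distance import pdist
from scipy.stats import spearmanr

rng = np.random.default_rng(0)

def piecewise_linear(x, w):
    """Piecewise linear function of Definition 1 on [0, 1] with knot values w_0, ..., w_d."""
    d = len(w) - 1
    j = np.clip(np.floor(x * d).astype(int), 0, d - 1)
    return w[j] + (w[j + 1] - w[j]) * (x * d - j)

def make_data(n=500, p=50, q=10, noisy=True, kind="poly"):
    """Low-rank latent data, optional noise, and a random transformation per column."""
    U, V = rng.standard_normal((n, q)), rng.standard_normal((p, q))
    X_latent = U @ V.T
    Z = X_latent + (rng.standard_normal((n, p)) if noisy else 0.0)
    X = np.empty_like(Z)
    for k in range(p):
        col = Z[:, k]
        if kind == "poly":
            X[:, k] = col ** rng.choice([1, 3, 5])             # random pick from {x, x^3, x^5}
        else:                                                  # piecewise linear, column rescaled to [0, 1]
            u = (col - col.min()) / (col.max() - col.min())
            w = np.cumsum(rng.exponential(1 / 100, size=11))   # increments w_{j+1} - w_j
            X[:, k] = piecewise_linear(u, w)
    return X_latent, X

def distance_score(X_latent, embedding):
    """Spearman rank correlation between pairwise distances in the latent and inferred spaces."""
    return spearmanr(pdist(X_latent), pdist(embedding)).correlation

X_latent, X = make_data(kind="poly", noisy=False, p=20, q=1)
# distance_score(X_latent, some_q_dimensional_embedding_of_X) evaluates a method on this data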
In Figure 3-b we consider a similar setup to the one of panel (a) but we increase q to be 5. MCPCA continues to have a good performance while the performance of other methods drops significantly. Next, we increase p to 50 and q to 10. We also add noise to observed samples as described above. MCPCA continues to have a good performance, outperforming all other methods (Figure 3-c). In Figure 3-d we change nonlinear transformations from polynomials to piecewise linear functions compared to panel (c). Again, in this setup MCPCA outperforms all other methods. These experiments highlight robustness of MCPCA against model parameters and noise. Performance of other methods appears to be sensitive to these factors.
6.3 Real Data Analysis
Having illustrated effectiveness of MCPCA on synthetic datasets, we apply it to real datasets. We consider six data sets from the UCI machine learning repository [34], namely the breast cancer, gene splicing, dermatology, adult income, Parkinsons disease, and diabetic retinopathy data sets. These data sets have been chosen to span various types of input data. Some of them have discrete features, some have continuous features, while some have mixed discrete and continuous features. The number of samples (n) and the number of features (p) vary across these data sets. Samples in five of these data sets have binary labels while in one of them the number of sample classes is six. Basic properties of these data sets have been summarized in Table 1. Below we explain some of these properties in more detail:
• The breast cancer data set has 683 individuals with breast cancer, among which 444 are benign and 239 are malignant (we remove 16 samples with missing values from the original data set). Attributes in this data set include features such as clump thickness, uniformity of cell size, mitoses, etc. Values of these features are discrete in the set {1,2,...,10}. For more information about this data set, see [35].
• The gene splicing data set has 3,175 samples (we use the processed data provided in http://www.cs.toronto.edu/~delve/data/datasets.html). Each sample is a 60 base pair subset of genome. The goal is to classify two types of splice junctions in DNA sequences: exon/intron (EI) or intron/exon (IE) sites. Values of features are discrete in the set {A,G,C,T}. For more information about this data set, see [34].
• The dermatology data set has 366 samples and 33 features (we ignore the age feature from the original data since it has missing values). The classification of erythemato-squamous diseases is a difficult task in dermatology since they share clinical features of erythema and scaling. This data set has samples with six diseases: psoriasis, seboreic dermatitis, lichen planus, pityriasis rosea, chronic dermatitis, and pityriasis rubra pilaris. The numbers of samples of these diseases are 112, 61, 72, 49, 52, 20, respectively. Features include 12 clinical features and 21 histopathological features. Variables are discrete with alphabet sizes of 2 (for one feature), 3 (for one feature), and 4 (for 31 features). For more information about this data set, see [36].
• The adult income data set is the largest data set we consider in this section. It has 30,162 samples (after removing samples from the original training data with missing values). The task is to classify individuals into two groups based on their income.
This data set includes 22,654 individuals with income ≤ $50,000 and 7,508 individuals with income > $50,000. Features include variables such as age, sex, race, education, work class, capital gain, capital loss, hours per week, etc. All features except one have fewer than 120 distinct alphabet values. For more information about this data set, see [37].
• The Parkinsons disease data set has 195 samples, where 48 of them come from healthy individuals and 147 of them come from Parkinsons patients. Each feature is a particular voice measure such as average vocal fundamental frequency, measures of variation in amplitude, measures of frequency variation, etc. Features are continuous with alphabet sizes ranging from 20 to 195. For more information about this data set, see [38].
• The diabetic retinopathy data set has 1,151 samples, where 540 samples have no signs of the disease. The data contains 19 features extracted from the Messidor image set to predict whether an image contains signs of diabetic retinopathy or not. The alphabet size of features ranges from 2 to 1,151. For more information on this data set, see [39].
PCA and MCPCA aim to maximize the amount of explained variance in the data (or in the transformation of the data) using low dimensional features. PCA restricts its optimization to merely linear transformations while MCPCA considers a more general family of nonlinear transformation functions. More precisely, let K(φ1,...,φp) ∈ Rp×p be the covariance matrix of transformations of variables. Then (1/p) ∑_{r=1}^{q} λr(K(φ1,...,φp)) is the fraction of explained variance in the transformation of the data using its optimal q dimensional representation. We normalize features to have zero means and unit variances. We perform a two-fold cross validation analysis: we choose half of the data uniformly at random for training. Then we test performance of the methods on the remaining half of the data. In discrete data sets (i.e., the breast cancer, gene splicing and dermatology data sets) we use the sample MCPCA Algorithm 2 to compute optimal transformations of features in the training data for each q value. Then, we apply those transformations to the test data. In the adult income data set all features except one have fewer than 120 distinct alphabet values. For the only continuous feature in this data set we use d = 120. In continuous data sets (i.e., the Parkinsons disease and diabetic retinopathy data sets) we use the procedure explained in Section 5.5. In these experiments d = 10 is fixed. We repeat each experiment 10 times.
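A rough Python sketch of one fold of this two-fold protocol for the discrete data sets is given below (our own illustration, not the authors' code). It reuses the sample_mcpca sketch shown earlier, maps each learned per-level value onto the held-out half, assigns 0 (the training mean) to unseen levels, and compares the fraction of explained variance with plain PCA on standardized features; the exact preprocessing and parameter choices of the paper are simplified here.

import numpy as np

def fit_level_maps(X_train, Phi_train):
    """Per-feature lookup tables: observed level -> learned transformation value."""
    maps = []
    for k in range(X_train.shape[1]):
        levels, idx = np.unique(X_train[:, k], return_inverse=True)
        vals = np.array([Phi_train[idx == j, k].mean() for j in range(len(levels))])
        maps.append(dict(zip(levels, vals)))
    return maps

def apply_level_maps(X_test, maps):
    """Apply the learned transformations to held-out data; unseen levels map to 0."""
    Z = np.zeros(X_test.shape)
    for k, m in enumerate(maps):
        Z[:, k] = np.array([m.get(v, 0.0) for v in X_test[:, k]])
    return Z

def explained_fraction(Z, q):
    """Fraction of explained variance using the top q meta features of the covariance of Z."""
    Zc = Z - Z.mean(axis=0)
    lam = np.sort(np.linalg.eigvalsh((Zc.T @ Zc) / Z.shape[0]))[::-1]
    return lam[:q].sum() / lam.sum()

# one fold (X is an n x p discrete data matrix, q the number of meta features):
# idx = np.random.default_rng(0).permutation(len(X)); tr, te = idx[: len(X) // 2], idx[len(X) // 2 :]
# Phi_tr = sample_mcpca(X[tr], q)[0]
# frac_mcpca = explained_fraction(apply_level_maps(X[te], fit_level_maps(X[tr], Phi_tr)), q)
# frac_pca = explained_fraction((X[te] - X[tr].mean(0)) / X[tr].std(0), q)   # standardized linear features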
Figure 4 shows the fraction of explained variance using the top q meta features computed by PCA and MCPCA in a two-fold cross validation analysis. In the breast cancer and adult income datasets MCPCA significantly outperforms PCA for all values of q, while in other datasets their performance is comparable. The fact that MCPCA shows higher or comparable performance to PCA in holdout datasets indicates that MCPCA captures meaningful nonlinear correlations among features whenever they exist.
Next, we examine how predictive of phenotype the extracted meta features are. Similarly to the previous experiment, we use a two-fold cross validation analysis. We choose half of the samples uniformly at random to train the methods, and test their performance on the remaining half. We repeat each experiment 10 times. In continuous data sets, we consider d ∈ {1,5,10,15}. For Isomap, in the training phase we consider Nngbr ∈ {10,15,20}. In the Isomap case, since the method does not have the so-called parametric out-of-sample property [27] (meaning that we cannot use the low dimensional embedding of the training data to compute a low dimensional embedding of the test data), we run the method on the test data using optimal parameters learned in the training step. This issue occurs in other nonlinear dimensionality reduction methods. In those cases we run the methods on the test data using their default parameters [27].
[Figure 5: This figure illustrates the correlation between the top meta feature and phenotype for five datasets with binary phenotypes: (a) breast cancer, (b) adult income, (c) gene splicing, (d) Parkinsons disease, and (e) diabetic retinopathy, for MCPCA, PCA, IsoMap, Autoencoder, Probabilistic PCA, Diffusion Maps, Kernel PCA (poly.), and Kernel PCA (Gauss.).]
Figure 5 shows the correlation between the top extracted meta feature using different dimensionality reduction methods and phenotype. The implementation of LLE crashed in these experiments, and is thus excluded from this figure. In all cases MCPCA consistently outperforms all other methods in different ranges of correlation between the meta feature and phenotype. For example, the correlation between the meta feature and phenotype is high in the breast cancer dataset, average in the adult income dataset, and low in the gene splicing and diabetic retinopathy datasets. Nevertheless, in all cases MCPCA shows a significant gain over all other methods.
7 Discussion
Here we introduced Maximally Correlated Principal Component Analysis (MCPCA) as a multivariate extension of maximal correlation and a generalization of PCA. MCPCA computes, possibly nonlinear, transformations of variables whose covariance matrix has the largest Ky Fan norm. MCPCA resolves two weaknesses of PCA by considering nonlinear correlations among features and being suitable for both continuous and categorical data. Although the MCPCA optimization is non-convex, we characterized its global optimizers for nonlinear functions of jointly Gaussian variables, and for categorical variables under some conditions. For general categorical variables, we proposed a block coordinate descend algorithm and showed its convergence to stationary points of the MCPCA optimization. Given the widespread applicability of PCA and the improved and robust performance of MCPCA compared to state-of-the-art dimensionality reduction methods, we expect the proposed method to find broad use in different areas of science. Moreover, techniques developed for efficiently optimizing feature transformations over a broad family of linear and nonlinear functions can be employed in several other statistical and machine learning problems such as nonlinear regression and deep learning.
+ }, + { + "url": "http://arxiv.org/abs/1602.04181v2", + "title": "Spectral Alignment of Graphs", + "abstract": "Graph alignment refers to the problem of finding a bijective mapping across\nvertices of two graphs such that, if two nodes are connected in the first\ngraph, their images are connected in the second graph. This problem arises in\nmany fields such as computational biology, social sciences, and computer vision\nand is often cast as a quadratic assignment problem (QAP). Most standard graph\nalignment methods consider an optimization that maximizes the number of matches\nbetween the two graphs, ignoring the effect of mismatches. We propose a\ngeneralized graph alignment formulation that considers both matches and\nmismatches in a standard QAP formulation. This modification can have a major\nimpact in aligning graphs with different sizes and heterogenous edge densities.\nMoreover, we propose two methods for solving the generalized graph alignment\nproblem based on spectral decomposition of matrices. We compare the performance\nof proposed methods with some existing graph alignment algorithms including\nNatalie2, GHOST, IsoRank, NetAlign, Klau's approach as well as a semidefinite\nprogramming-based method over various synthetic and real graph models. Our\nproposed method based on simultaneous alignment of multiple eigenvectors leads\nto consistently good performance in different graph models. In particular, in\nthe alignment of regular graph structures which is one of the most difficult\ngraph alignment cases, our proposed method significantly outperforms other\nmethods.", + "authors": "Soheil Feizi, Gerald Quon, Mariana Recamonde-Mendoza, Muriel Medard, Manolis Kellis, Ali Jadbabaie", + "published": "2016-02-12", + "updated": "2017-09-05", + "primary_cat": "cs.DS", + "cats": [ + "cs.DS", + "cs.DM", + "math.CO" + ], + "main_content": "Introduction The term graph alignment (or, network alignment) encompasses several distinct but related problem variants [1]. In general, graph alignment aims to \ufb01nd a bijective mapping across two (or more) graphs so that, if two nodes are connected in one graph, their images are also connected in the other graph(s). If such an exact alignment scheme exists, graph alignment can be simpli\ufb01ed to the problem of graph isomorphism [2]. However, in general, an errorless alignment scheme may not be feasible. In such cases, graph alignment aims to \ufb01nd a mapping with the minimum error and/or the maximum overlap. Graph alignment has a broad range of applications in systems biology, social sciences, computer vision, and linguistics. For instance, graph alignment has been used frequently as a comparative analysis tool in studying protein-protein interaction networks across di\ufb00erent species [3\u20138]. In computer vision, graph alignment has been used for image recognition by matching similar images [9, 10]. It has also been applied in ontology alignment to \ufb01nd relationships among di\ufb00erent representations of a database [11, 12], and in user de-anonymization to infer user/sample identi\ufb01cations using similarity between datasets [13]. 1 Stanford University. 2 University of California, Davis. 3 Instituto de Informatica, Universidade Federal do Rio Grande do Sul, Porto Alegre, RS, Brazil. 4 Massachusetts Institute of Technology (MIT). 
[Figure 1: (a) An illustration of matched, mismatched, and neutral mappings for undirected graphs. (b) Example graphs to illustrate the effect of considering mismatches in the graph alignment formulation. (c) An illustration of inconsistent mappings for directed graphs, where they are matches in one direction and mismatches in the other direction.]
Here we study the graph alignment problem and make two main contributions. First, we propose a generalized formulation for the graph alignment optimization, and secondly we develop two graph alignment methods based on spectral decomposition of matrices. In the following we explain these contributions. Let G1 = (V1,E1) and G2 = (V2,E2) be two graphs where Va and Ea represent the set of nodes and edges of graph a = 1,2, respectively. By a slight abuse of notation, let G1 and G2 be their matrix representations where for a = 1,2, Ga(i,j) = 1 iff (i,j) ∈ Ea, and Ga(i,j) = 0 otherwise. Suppose graph a has na nodes, i.e., |Va| = na. Without loss of generality, we assume n1 ≤ n2. In the beginning, we assume graphs are undirected (i.e., matrices G1 and G2 are symmetric). We discuss the alignment of directed graphs, denoted by Gdir_1 and Gdir_2, in Section 3. Let X be an n1 × n2 binary matrix where X(i,j′) = 1 means that node i in graph 1 is mapped (aligned) to node j′ in graph 2. The pair (i,j′) is called a mapping edge across two graphs and is denoted by i ↔ j′. In the graph alignment setup, each node in one graph can be mapped to at most one node in the other graph, i.e., ∑_i X(i,j′) ≤ 1 for all j′, and similarly ∑_{j′} X(i,j′) ≤ 1 for all i. We also assume that there are no unaligned nodes in the graph with fewer nodes. Matrix X can map an edge in graph G1 to an edge in graph G2. These aligned edges are called matches. X can map an edge in G1 to a non-existing edge in G2 and vice versa. These mapping pairs are called mismatches. Finally, X can map a non-existing edge in G1 to a non-existing edge in G2. These pairs are called neutrals. Figure 1-a illustrates examples of matches, mismatches, and neutrals for simple graphs with two nodes. We have
# of matches = Tr(G1 X G2 X^T), (1.1)
# of mismatches = Tr(G1 X (1 − G2) X^T + (1 − G1) X G2 X^T),
# of neutrals = Tr((1 − G1) X (1 − G2) X^T),
where 1 represents a matrix of all ones and Tr(.) is the trace operator. Most existing scalable graph alignment methods only consider maximizing the number of matches across two graphs while ignoring the number of resulting mismatches. This limitation can be critical particularly in cases where graphs have different sizes. We propose a generalized objective function for the graph alignment optimization as follows:
max_X s1(# of matches) + s2(# of neutrals) + s3(# of mismatches), (1.2)
where s1, s2, and s3 are scores assigned to matches, neutrals, and mismatches, respectively. We assume s1 > s2 > s3. Considering s2 = s3 = 0 results in ignoring the effects of mismatches and neutrals. Substituting (1.1) in (1.2), we have the following equivalent optimization:
max_X Tr(G1 X G2 X^T) − γ (Tr(G1 X 1 X^T) + Tr(1 X G2 X^T)), (1.3)
where γ = (s2 − s3)/(s1 + s2 − 2s3) is the regularization parameter. Note that 0 ≤ γ < 1/2.
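As a sanity check on these definitions, the short Python sketch below (ours, for illustration) evaluates the trace expressions of (1.1) and the generalized objective of (1.3) for a given 0/1 mapping matrix X; on a 3-node path aligned to itself by the identity mapping, the trace formulas count ordered pairs, giving 4 matches, 0 mismatches, and 5 neutrals.

import numpy as np

def alignment_counts(G1, G2, X):
    """Match / mismatch / neutral counts of a mapping X, following eq. (1.1).
    G1, G2 are 0/1 symmetric adjacency matrices; X is an n1 x n2 binary mapping matrix."""
    J1, J2 = np.ones_like(G1), np.ones_like(G2)
    matches = np.trace(G1 @ X @ G2 @ X.T)
    mismatches = np.trace(G1 @ X @ (J2 - G2) @ X.T) + np.trace((J1 - G1) @ X @ G2 @ X.T)
    neutrals = np.trace((J1 - G1) @ X @ (J2 - G2) @ X.T)
    return matches, mismatches, neutrals

def generalized_objective(G1, G2, X, gamma):
    """Objective of optimization (1.3)."""
    J1, J2 = np.ones_like(G1), np.ones_like(G2)
    return (np.trace(G1 @ X @ G2 @ X.T)
            - gamma * (np.trace(G1 @ X @ J2 @ X.T) + np.trace(J1 @ X @ G2 @ X.T)))

# 3-node path aligned to itself by the identity mapping
G = np.array([[0, 1, 0], [1, 0, 1], [0, 1, 0]], dtype=float)
X = np.eye(3)
print(alignment_counts(G, G, X))            # (4.0, 0.0, 5.0): ordered pairs, so each edge counts twice
print(generalized_objective(G, G, X, 0.2))  # 4 - 0.2 * (4 + 4) = 2.4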
If s2 = s3 = 0, \u03b3 = 0, while if s2 \u2192s1 or s3 \u2192\u2212\u221e, \u03b3 \u21921/2. If n1 = n2, Tr (G1X1XT ) and Tr (1XG2XT ) are equal to the number of edges in graphs G1 and G2, respectively. Thus these terms do not depend on X. However, if the number of nodes in G1 and G2 are di\ufb00erent (say n1 < n2), Tr (1XG2XT ) depends on X. Therefore, the regularization parameter \u03b3 plays a role when the number of nodes in G1 and G2 are di\ufb00erent. Note that in solving relaxations or approximations of optimization (1.3) when X is no longer a permutation, \u03b3 can have an e\ufb00ect even for the same size graphs. To illustrate the e\ufb00ect of the regularization parameter, consider example graphs G1 and G2 illustrated in Figure 1-b. Let X1 and X2 be mapping matrices that align nodes of G1 to left and right subgraphs of G2, respectively. The number of matches and mismatches caused by X1 are 8 and 7, respectively. The number of matches and mismatches caused by X2 are 7 and 1, respectively. If we ignore the e\ufb00ect of mismatches (i.e., \u03b3 = 0 in (1.3)), X1 leads to a larger graph alignment objective value compared to X2. However, if \u03b3 > 1/6, X2 leads to a larger objective value compared to X1. Note that maximizing matches while ignoring mismatches favors parts of the larger graph with a higher edge density. It is important to note that the notion of mismatches has been considered in other alignment frameworks as well. For example [10] considers aligning two images (modeled as graphs) knowing a pairwise similarity measure between nodes of the two graphs. Reference [10] uses mismatch terms (ignoring matches) to incorporate relational structure terms in the alignment optimization. Our generalized graph alignment optimization (1.3) does not require having a similarity matrix between nodes of the two graphs and uses both match and mismatch information to compute the alignment matrix. The objective function of optimization (1.3) is not in the standard form of a quadratic assignment problem (QAP) since it has three terms. It is straightforward to show that the following optimization is an equivalent formulation: max X Tr ((G1 \u2212\u03b31)X(G2 \u2212\u03b31)XT ). (1.4) This optimization is a standard QAP [14] which is computationally challenging to solve. In the next section we explain our algorithmic contributions to compute a solution for this optimization based on spectral decomposition of functions of adjacency matrices. 3 \fReference [15] shows that approximating a solution of maximum quadratic assignment problem within a factor better than 2log1\u2212\u01eb n is in general not feasible in polynomial time. However, owing to numerous applications of QAP in di\ufb00erent areas, several algorithms have been designed to solve it approximately. Some methods use exact search approaches based on branch-and-bound [16] and cutting plane [17]. These methods can only be applied to very small problem instances owing to their high computational complexity. Some methods attempt to solve the underlying QAP by linearizing the quadratic term and transforming the optimization into a mixed integer linear program (MILP) [18\u201321]. In practice the very large number of introduced variables and constraints in linearization of the QAP objective function poses an obstacle for solving the resulting MILP e\ufb03ciently. Some methods use convex relaxations of the QAP to compute a bound on its optimal value [22\u201326]. 
The solutions provided by these methods may not be a feasible solution for the original quadratic assignment problem. Other methods to solve the graph alignment optimization include semide\ufb01nite [26,27], non-convex [28], or Lagrangian [29\u201331] relaxations, Bayesian inference [32], message passing [33] or other heuristics [3, 4, 6, 34\u201338]. We will review these methods in Section 2. For more details about these methods, we refer readers to references [14, 39, 40]. In particular [40] provides a recent review of graph alignment methods by distinguishing between methods for deterministic and random graphs. Spectral inference methods have received signi\ufb01cant attention in problems such as graph clustering [41\u201345] where the underlying mixed integer program is tightly approximated with an optimization whose optimizers can be computed e\ufb03ciently. However, the use of spectral techniques in the graph alignment problem has been limited [3, 4, 34, 35, 46, 47], partially owing to di\ufb03culty in connecting existing spectral graph alignment methods with relaxations of the underlying QAP. For example, [3] computes an alignment across biological networks using the top eigenvector of a graph which encodes neighborhood similarities. Reference [46] uses a spectral relaxation of QAP to compute a probabilistic subgraph matching when the number of nodes of graphs are the same, while [47] uses a heuristic multi-scale spectral signature of graphs to compute an alignment across them. In this paper, we propose two spectral algorithms for solving the graph alignment optimization (1.4), namely EigenAlign (EA), and LowRankAlign (LRA): 1. EigenAlign (EA) computes the leading eigenvector of a function of adjacency matrices followed by a maximum weight bipartite matching optimization. EigenAlign can be applied to both directed and undirected graphs. We prove that for Erd\u02dd os-R\u00b4 enyi graphs [48] and under some general conditions, EigenAlign is mean-\ufb01eld optimal 1. 2. LowRankAlign (LRA) solves the graph alignment optimization by simultaneous alignment of eigenvectors of (transformations of) adjacency graphs, scaled by corresponding eigenvalues. LRA considers undirected graphs. LRA \ufb01rst solves the orthogonal relaxation of the underlying QAP using eigen decomposition of matrices. Then, it employs a rounding step as a projection in the direction of top eigenvectors of input matrices. We provide a bound on the performance of this projection step based on eigenvalues of input matrices and the orthogonal relaxation gap. Note that this rounding step is di\ufb00erent than previously studied orthogonal projection, which has been shown to have a poor performance in practice [46]. 1Finding an isomorphic mapping across asymptotically large Erd\u02dd os-R\u00b4 enyi graphs is a well studied problem and can be solved e\ufb03ciently through canonical labeling [49]. Moreover Laszlo Babai has recently outlined his proof that the computational complexity of the general graph isomorphism problem is Quasipolynomial [50]. Note that in the graph alignment setup input graphs do not need to be isomorphic. 4 \fThrough analytical performance characterization, simulations on several synthetic graphs, and real-data analysis, we show that our proposed graph alignment methods lead to improved performance compared to some existing graph alignment methods. Note that our proposed generalized graph alignment framework can also be adapted to some existing graph alignment packages. 
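The exact matrix whose leading eigenvector EigenAlign uses is specified in Section 3 and is not reproduced here. Purely to illustrate the "leading eigenvector followed by maximum weight bipartite matching" pipeline described in item 1 above, the Python sketch below scores every pair of mapping edges with the match/neutral/mismatch scores s1 > s2 > s3 of Section 1; it should be read as an assumption-laden toy (dense Kronecker products, so only suitable for very small graphs), not the authors' algorithm.

import numpy as np
from scipy.optimize import linear_sum_assignment

def eigenvector_matching(G1, G2, s1=1.0, s2=0.1, s3=0.0):
    """Toy 'leading eigenvector + bipartite matching' alignment (illustrative only)."""
    n1, n2 = G1.shape[0], G2.shape[0]
    J1, J2 = np.ones_like(G1), np.ones_like(G2)
    # pairwise-interaction matrix over mapping edges (i, j'), size (n1*n2) x (n1*n2)
    A = (s1 * np.kron(G1, G2)
         + s2 * np.kron(J1 - G1, J2 - G2)
         + s3 * (np.kron(G1, J2 - G2) + np.kron(J1 - G1, G2)))
    lead = np.abs(np.linalg.eigh(A)[1][:, -1]).reshape(n1, n2)   # leading eigenvector as pair scores
    rows, cols = linear_sum_assignment(-lead)                    # maximum weight bipartite matching
    return dict(zip(rows.tolist(), cols.tolist()))

# example: align a 4-cycle with a 5-node graph containing a 4-cycle plus a pendant node
C4 = np.zeros((4, 4)); C4[[0, 1, 2, 3], [1, 2, 3, 0]] = 1; C4 += C4.T
H = np.zeros((5, 5)); H[:4, :4] = C4; H[3, 4] = H[4, 3] = 1
print(eigenvector_matching(C4, H))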
However, exploring this direction is beyond the scope of this article. The rest of the paper is organized as follows. In Section 2, we review some existing graph alignment techniques and explain the relationship between graph alignment and graph isomorphisim. In Section 3, we introduce the EigenAlign Algorithm and discuss its relationship with the underlying quadratic assignment problem. Moreover, we present the mean-\ufb01eld optimality of this method over random graphs, under some general conditions. In Section 4, we consider the trace formulation of the graph alignment optimization and introduce LowRankAlign. In Section 5, we compare performance of our method with some existing graph alignment methods over di\ufb00erent synthetic graph structures. In Section 6, we use our graph alignment methods in comparative analysis of gene regulatory networks across di\ufb00erent species. 2 Review of Prior Work Graph alignment problem (1.4) is an example of a QAP [14]. In the following we brie\ufb02y summarize previous works by categorizing them into four groups and explain advantages and shortcomings of each. For more details on these methods we refer readers to references [14,39,51]. 1. Exact search methods: These methods provide a globally optimal solution for QAP. Examples of exact algorithms include methods based on branch-and-bound [16] and cutting plane [17]. Owing to their high computational complexity, they can only be applied to very small problem instances. 2. Linearizations: These methods attempt to solve QAP by eliminating the quadratic term in the objective function, transforming it into a mixed integer linear program (MILP). An existing MILP solver is applied to \ufb01nd a solution for the relaxed problem. Examples of these methods are Lawlers linearization [18], Kaufmann and Broeckx linearization [19], Frieze and Yadegar linearization [20], and Adams and Johnson linearization [21]. These linearizations can provide bounds on the optimal value of the underlying QAP [15]. Moreover [29\u201331] use Lagrangian relaxations to compute a solution for the QAP. In general, linearization of the QAP objective function is achieved by introducing many new variables and new linear constraints. In practice, the very large number of introduced variables and constraints poses an obstacle for solving the resulting MILP e\ufb03ciently. 3. Semide\ufb01nite/convex relaxations: These methods aim to compute a bound on the optimal value of the graph alignment optimization by considering the alignment matrix in the intersection of orthogonal and stochastic matrices. The provided solution by these methods may not be a feasible solution for the original quadratic assignment problem. Examples of these methods include orthogonal relaxations [22], projected eigenvalue bounds [23], convex relaxations [24\u201326], and matrix splittings [27]. In particular, [27] introduces a convex relaxation of the underlying graph alignment optimization based on matrix splitting which provides bounds on the optimal value of the underlying QAP. The proposed semide\ufb01nite programming (SDP) method provides a bound on the optimal value and additional steps are required to derive a feasible solution. Moreover, owing to its computational complexity, it can only be used to align small graphs [27]. In the computer vision literature, [34,35] use spectral techniques to solve QAP approximately by inferring a cluster of assignments over the feature graph. Then, they use a greedy approach 5 \fto reject assignments with low associations. 
Similarly, [46] uses a spectral relaxation of QAP to compute a probabilistic subgraph matching across images when the size of graphs are the same, while [47] uses a heuristic multi-scale spectral signature of graphs to compute an alignment across them. 4. Other methods: There are several other techniques to solve graph alignment optimization approximately. Some methods use Bayesian framework [32], or message passing [33], or some other heuristics [3,4,6]. In Section 5, we assess the performance of some of these graph alignment techniques through simulations. Some graph alignment formulations aim to align paths [7] or subgraphs [8,52,53] across two (or multiple) graphs. The objective of these methods is di\ufb00erent from the one of our graph alignment optimization where a bijective mapping across nodes of two graphs is desired according to a QAP. However solutions of these di\ufb00erent methods may be related. For instance a bijective mapping across nodes of two graphs can provide information about conserved pathways and/or subgraphs across graphs, and vice versa. The graph alignment formulation of (1.4) uses the structure of input graphs to \ufb01nd an alignment across their nodes. In practice, however, some other side information may be available such as nodenode similarities. One way to incorporate such information in the formulation of (1.4) is to restrict the alignment across nodes of the two graphs whose similarities are greater than a threshold. This can be done by adding additional constraints to (1.4). We will explain this in more detail in Section 3. 2.1 Graph Alignment and Graph Isomorphism The graph alignment optimization (1.3) is closely related to the problem of graph isomorphism de\ufb01ned as follows: De\ufb01nition 1 (Graph Isomorphism) Let G1 = (V1,E1) and G2 = (V2,E2) be two binary graphs. G1 and G2 are isomorphic if there exists a permutation matrix P such that G1 = PG2P T . The computational problem of determining whether two \ufb01nite graphs are isomorphic is called the graph isomorphism problem. Moreover given two isomorphic graphs G1 and G2, in the graph isomorphism problem one aims to \ufb01nd the permutation matrix P such that G1 = PG2P T . The computational complexity of this problem is unknown [54]. In the following lemma we formalize a connection between the graph alignment optimization and the classical graph isomorphism problem: Lemma 1 Let G1 and G2 be two isomorphic Erd\u02dd os-R\u00b4 enyi graphs [48] such that Pr[G1(i,j) = 1] = p and G2 = PG1P T , where P is a permutation matrix. Let p \u22600,1. Then, for any selection of scores s1 > s2 > s3 > 0, P maximizes the expected graph alignment objective function of Optimization (3.4). The expectation is over di\ufb00erent realizations of G1 and G2. Proof The proof is presented in Section 9.1. The result of Lemma 1 can be extended to the case where edges of graphs are \ufb02ipped through a random noise matrix: Lemma 2 Let G1 be an Erd\u02dd os-R\u00b4 enyi graph such that Pr[G1(i,j) = 1] = p. Let \u02dc G1 be a graph resulting from \ufb02ipping edges of G1 independently and randomly with probability q. Suppose G2 = 6 \fP \u02dc G1P T where P is a permutation matrix. Let 0 < p < 1/2 and 0 \u2264q < 1/2. Then, for any selection of scores s1 > s2 > s3 > 0, P maximizes the expected graph alignment objective function of Optimization (1.3). The expectation is over di\ufb00erent realizations of G1 and G2. Proof The proof is presented in Section 9.1. 
Finding an isomorphic mapping across su\ufb03ciently large Erd\u02dd os-R\u00b4 enyi graphs can be done e\ufb03ciently with high probability (w.h.p.) through canonical labeling [49]. Canonical labeling of a graph consists of assigning a unique label to each vertex such that labels are invariant under isomorphism. The graph isomorphism problem can then be solved e\ufb03ciently by mappings nodes with the same canonical labels to each other [55]. One example of canonical labeling is the degree neighborhood of a vertex de\ufb01ned as a sorted list of neighborhood degrees of vertices [49]. Note that graph alignment formulation is more general than the one of graph isomorphism: graph alignment aims to \ufb01nd an optimal mappings across two graphs which are not necessarily isomorphic. 3 EigenAlign Algorithm 3.1 Problem Formulation and Notation Let y be a vectorized version of X. That is, y is a vector of length n1n2 where, y(i + (j\u2032 \u22121)n1) = X(i,j\u2032). To simplify notation, de\ufb01ne yi,j\u2032 \u225cX(i,j\u2032). Two mappings (i,j\u2032) and (r,s\u2032) can be matches which cause overlaps, can be mismatches which cause errors, or can be neutrals (Figure 1-a). De\ufb01nition 2 Suppose G1 = (V1,E1) and G2 = (V2,E2) are undirected graphs. Let {i,r} \u2286V1 and {j\u2032,s\u2032} \u2286V2 where X(i,j\u2032) = 1 and X(r,s\u2032) = 1. Then, (i,j\u2032) and (r,s\u2032) are matches if (i,r) \u2208E1 and (j\u2032,s\u2032) \u2208E2. (i,j\u2032) and (r,s\u2032) are mismatches if only one of the edges (i,r) and (j\u2032,s\u2032) exists. (i,j\u2032) and (r,s\u2032) are neutrals if none of the edges (i,r) and (j\u2032,s\u2032) exists. De\ufb01nition 2 can be extended to the case where G1 and G2 are directed graphs. In this case mappings (i,j\u2032) and (r,s\u2032) are matches/mismatches if they are matches/mismatches in one of the possible directions. However it is possible to have these mappings be matches in one direction while they are mismatches in the other direction (Figure 1-c). These mappings are denoted as inconsistent mappings, de\ufb01ned as follows: De\ufb01nition 3 Let G1 = (V1,E1) and G2 = (V2,E2) be two directed graphs and {i,r} \u2286V1 and {j\u2032,s\u2032} \u2286V2 where X(i,j\u2032) = 1 and X(r,s\u2032) = 1. If edges i \u2192r, r \u2192i, and j\u2032 \u2192s\u2032 exist, however, s\u2032 \u2192j\u2032 does not exist, then mappings (i,j\u2032) and (r,s\u2032) are inconsistent. Consider two undirected graphs G1 = (V1,E1) and G2 = (V2,E2). We form an alignment graph represented by adjacency matrix A in which nodes are mapping edges across the original graphs, and the edges capture whether the pair of mapping edges are matches, mismatches or neutrals (Figure 2). 7 \fDe\ufb01nition 4 Let {i,r} \u2286V1 and {j\u2032,s\u2032} \u2286V2 where X(i,j\u2032) = 1 and X(r,s\u2032) = 1. A[(i,j\u2032),(r,s\u2032)] = \u23a7 \u23aa \u23aa \u23aa \u23aa \u23a8 \u23aa \u23aa \u23aa \u23aa \u23a9 s1, if (i,j\u2032) and (r,s\u2032) are matches, s2, if (i,j\u2032) and (r,s\u2032) are neutrals, s3, if (i,j\u2032) and (r,s\u2032) are mismatches, (3.1) where s1, s2, and s3 are scores assigned to matches, neutrals, and mismatches, respectively. Without loss of generality we assume s1 > s2 > s3 > 0. We can re-write (3.1) as follows: A[(i,j\u2032),(r,s\u2032)] =(s1 + s2 \u22122s3)G1(i,r)G2(j\u2032,s\u2032) (3.2) + (s3 \u2212s2)(G1(i,r) + G2(j\u2032,s\u2032)) + s2. 
We can summarize (3.1) and (3.2) as follows: A =(s1 + s2 \u22122s3)(G1 \u2297G2) + (s3 \u2212s2)(G1 \u22971n2) (3.3) + (s3 \u2212s2)(1n1 \u2297G2) + s2(1n1 \u22971n2), where \u2297represents matrix Kronecker product, and 1n is an n \u00d7 n matrix whose elements are all ones. A similar scoring scheme can be used for directed graphs. When graphs are directed, some mappings can be inconsistent according to De\ufb01nition 3, i.e., they are matches in one direction and mismatches in another. Scores of inconsistent mappings can be assigned randomly to matched/mismatched scores, or to an average score of matches and mismatches (i.e., (s1 + s3)/2). For random graphs, inconsistent mappings are rare events. For example, suppose graph edges are distributed according to a Bernoulli distribution with parameter p. Then, the probability of having an inconsistent mapping for a particular pair of paired nodes across graphs is equal to 4p3(1 \u2212p). Therefore, their e\ufb00ect in graph alignment is negligible, particularly for large sparse graphs. Throughout the paper, for directed graphs we assume inconsistent mappings have negligible e\ufb00ect, unless we mention the importance of such inconsistency explicitly. In practice some mapping edges across two graphs may not be possible, owing to additional side information. The set of possible mapping edges across two graphs is denoted by R = {(i,j\u2032) \u2236i \u2208 V1,j\u2032 \u2208V2}. If R = V1 \u00d7V2, the problem of graph alignment is called unrestricted. If some mappings across two graphs are prevented (i.e., X(i,j\u2032) = yi,j\u2032 = 0, for (i,j\u2032) \u2209R), then the problem of graph alignment is called restricted. Using the vectorized version of X, the graph alignment optimization (1.2) can be written as follows: max y yT Ay, (3.4) \u2211 i yi,j\u2032 \u22641, \u2200i \u2208V1, \u2211 j\u2032 yi,j\u2032 \u22641, \u2200j\u2032 \u2208V2, yi,j\u2032 \u2208{0,1}, \u2200(i,j\u2032) \u2208V1 \u00d7 V2, yi,j\u2032 = 0, \u2200(i,j\u2032) \u2209R, where A is de\ufb01ned according to (3.2) and R \u2286V1 \u00d7 V2 is the set of possible mapping edges across two graphs. 8 \fInput graphs Alignment graph eigen decomposition of alignment graph Aligned graphs Eigenvector weights G2 G1 maximum bipartite graph matching neutrals mis-matches matches 3,4\u2019 4,4\u2019 2,3\u2019 1,2\u2019 1,1\u2019 4 3 2 1 4 3 2 1 4 3 2 1 4\u2019 3\u2019 2\u2019 1\u2019 4\u2019 3\u2019 2\u2019 1\u2019 4\u2019 3\u2019 2\u2019 1\u2019 Figure 2: The Framework of EigenAlign algorithm 1. 3.2 EigenAlign Algorithm We now introduce EigenAlign (EA) algorithm which computes a solution for the graph alignment optimization (3.4) leveraging spectral properties of graphs: Algorithm 1 (EigenAlign Algorithm) Let G1 = (V1,E1) and G2 = (V2,E2) be two binary graphs whose corresponding alignment graph is denoted by A according to (3.2). EigenAlign algorithm solves the graph alignment optimization (3.4) in two steps: Step 1, An Eigenvector Computation Step: In this step we compute v, an eigenvector of the alignment graph A with the maximum eigenvalue. Step 2, A Linear Assignment Step: In this step we solve the following maximum weight bipartite matching optimization: max y vT y, (3.5) \u2211 j\u2032 yi,j\u2032 \u22641, \u2200i \u2208V1, \u2211 i yi,j\u2032 \u22641, \u2200j\u2032 \u2208V2, yi,j\u2032 \u2208{0,1}, \u2200(i,j\u2032) \u2208V1 \u00d7 V2, yi,j\u2032 = 0, \u2200(i,j\u2032) \u2209R. Algorithm 1 can be extended to directed graphs Gdir 1 and Gdir 2 as well. 
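The two steps of Algorithm 1 can be sketched in a few lines of Python. This is an illustrative implementation under our own naming and indexing conventions (mapping (i, j') is vectorized as i*n2 + j', which matches np.kron(G1, G2)), not the authors' released code, and it materializes the dense alignment matrix, so it is only suitable for small graphs.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def eigen_align(G1, G2, s1=5.0, s2=1.0, s3=0.1):
    """Sketch of EigenAlign (Algorithm 1) for undirected 0/1 graphs with
    scores s1 > s2 > s3 > 0 for matches, neutrals and mismatches."""
    n1, n2 = len(G1), len(G2)
    J1, J2 = np.ones((n1, n1)), np.ones((n2, n2))
    # Alignment matrix of eq. (3.3); row/column (i, j') is index i*n2 + j'.
    A = ((s1 + s2 - 2 * s3) * np.kron(G1, G2)
         + (s3 - s2) * np.kron(G1, J2)
         + (s3 - s2) * np.kron(J1, G2)
         + s2 * np.kron(J1, J2))
    # Step 1: leading eigenvector of the symmetric, entrywise-positive A.
    _, eigvecs = np.linalg.eigh(A)
    v = np.abs(eigvecs[:, -1])            # Perron vector, fixed to be non-negative
    # Step 2: maximum-weight bipartite matching on the eigenvector weights.
    W = v.reshape(n1, n2)
    rows, cols = linear_sum_assignment(-W)  # negate to maximize total weight
    return dict(zip(rows.tolist(), cols.tolist()))
```

Restricted alignments (mappings outside R) can be handled by setting the corresponding entries of W to a large negative value before the matching step, and for large graphs the dense eigen decomposition would be replaced by a power iteration.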
This framework is depicted in Figure 2. Below we provide intuition on di\ufb00erent steps of the EigenAlign algorithm. For simplicity we assume all mappings across graphs are possible (i.e., R = {(i,j\u2032) \u2236\u2200i \u2208V1,\u2200j\u2032 \u2208V2}). In the restricted graph alignment setup, without loss of generality, one can eliminate rows and columns of the alignment matrix corresponding to mappings that are not allowed. In the eigen decomposition step of EigenAlign, we ignore bijective constraints (i.e., constraints \u2211i yi,j\u2032 \u22641 and \u2211j\u2032 yi,j\u2032 \u22641) because they will be satis\ufb01ed in the second step of the algorithm through a linear optimization. Under these assumptions Optimization (3.4) can be simpli\ufb01ed to 9 \fthe following optimization: max y yT Ay, (3.6) yi,j\u2032 \u2208{0,1}, \u2200(i,j\u2032) \u2208V1 \u00d7 V2. To approximate a solution of this optimization, we replace integer constraints with constraints over a hyper-sphere restricted by hyper-planes (i.e., \u2225y\u22252 \u22641 and y \u22650). Thus, optimization (3.6) is simpli\ufb01ed to the following: max y yT Ay, (3.7) \u2225y\u22252 \u22641, y \u22650. In the following, we show that v, the leading eigenvector of the alignment matrix A, is an optimal solution of Optimization (3.7). Suppose y1 is an optimal solution of Optimization (3.7). Let y2 be a solution of the following optimization without non-negativity constraints: max y yT Ay, (3.8) \u2225y\u22252 \u22641. Following the Rayleigh-Ritz formula [56], the leading eigenvector of the alignment matrix is an optimal solution of Optimization (3.8) (i.e., y2 = v). Now we use the following theorem to show that in fact y1 = v: Theorem 1 Suppose A is a matrix whose elements are strictly positive. Let v be an eigenvector of A corresponding to the largest eigenvalue. Then, \u2200i, vi > 0. Moreover, all other eigenvectors must have at least one negative, or non-real component. Proof See e.g., reference [57] (Theorem 1). Since y2 is a solution of Optimization (3.8), we have yT 2 Ay2 \u2265yT 1 Ay1. Using this inequality along with the Perron-Frobenius Theorem lead to y1 = v, as the unique solution of optimization (3.7). The solution of the eigen decomposition step assigns weights to all possible mapping edges across graphs ignoring bijective constraints (constraints \u2211j\u2032 yi,j\u2032 \u22641 and \u2211i yi,j\u2032 \u22641). However, in the graph alignment setup, each node in one graph can be mapped to at most one node in the other graph. To satisfy these constraints, we use eigenvector weights in a linear optimization framework of maximum weight bipartite matching setup of Optimization (3.5) [58]. 3.3 Computational Complexity of EigenAlign Let the number of nodes of graphs G1 and G2 be O(n). Let k = \u2223R\u2223be the number of possible mappings across two graphs. In an unrestricted graph alignment setup, we have k = O(n2). However, in a restricted graph alignment, k may be signi\ufb01cantly smaller than n2. EigenAlign has three steps: (i) Forming an alignment graph A that has a computational complexity of O(k2), as all pairs of possible mappings should be considered. 10 \f(ii) An eigen decomposition step where we compute the leading eigenvector of the alignment graph. This operation can be performed in O(k2) computational complexity using QR algorithms and/or power methods [59]. Therefore, the worst case computational complexity of this part is O(k2). 
(iii) A maximum weight bipartite matching algorithm step, that can be solved e\ufb03ciently using linear programming or the Hungarian algorithm [58]. The worst case computational complexity of this step is O(n3). If the set R has a speci\ufb01c structure (e.g., small subsets of nodes in one graph are allowed to be mapped to small subsets of nodes in the other graph), this cost can be reduced signi\ufb01cantly. Proposition 1 The worst case computational complexity of the EigenAlign Algorithm is O(k2 + n3). Remark 1 For large graphs, to reduce the overall computational complexity, the linear assignment optimization may be replaced by a greedy bipartite matching algorithm (e.g., [60]). In the greedy matching approach, at each step, the heaviest possible mapping is added to the current matching until no further mappings can be added. It is straightforward to show that this greedy algorithm \ufb01nds a bipartite matching whose weight is at least half the optimum. The computational complexity of this greedy algorithm is O(k log(k) + nk). If we only consider matches in the graph alignment optimization (i.e., s2 = s3 = 0 in (1.2)), the complexity of the eigen decomposition step can be reduced, since we need to compute top eigenvectors of sparse adjacency matrices. By considering mismatches, eigenvector computation should be performed over dense matrices, which require a higher computational complexity. 3.4 Mean-\ufb01eld Optimality of EigenAlign Over Erd\u02dd os-R\u00b4 enyi Graphs Here we analyze the performance of the EigenAlign algorithm over Erd\u02dd os-R\u00b4 enyi graphs, for both isomorphic and non-isomorphic cases, under two di\ufb00erent noise models. While real graphs often have di\ufb00erent structures than Erd\u02dd os-R\u00b4 enyi graphs, we consider this family of graphs in this section owing to their analytical tractability. Suppose G1 = (V1,E1) is an undirected Erd\u02dd os-R\u00b4 enyi graph with n nodes where Pr[G1(i,j) = 1] = p for 1 \u2264i,j \u2264n. Suppose \u02dc G is a noisy version of the graph G1. We consider two di\ufb00erent noise models in this section: Noise Model I: In this model we have, \u02dc G1 \u225cG1 \u2299(1 \u2212Q) + (1 \u2212G1) \u2299Q, (3.9) where \u2299represents the Hadamard product, 1 is the matrix of all ones, and Q is a binary symmetric random matrix whose edges are drawn i.i.d. from a Bernoulli distribution with Pr[Q(i,j) = 1] = pe. In words, the operation G1 \u2299(1 \u2212Q) + (1 \u2212G1) \u2299Q \ufb02ips edges of G1 uniformly randomly with probability pe. Noise Model II: In this model we have, \u02dc G1 \u225cG1 \u2299(1 \u2212Q) + (1 \u2212G1) \u2299Q\u2032, (3.10) where Q and Q\u2032 are binary symmetric random matrices whose edges are drawn i.i.d. from a Bernoulli distribution with Pr[Q(i,j) = 1] = pe and Pr[Q\u2032(i,j) = 1] = pe2. Under this model, 11 \fedges of G1 \ufb02ip uniformly randomly with probability pe, while non-connecting tuples in G1 will be connected in \u02dc G1 with probability pe2. Because G1 is an Erd\u02dd os-R\u00b4 enyi graph with parameter p, choosing pe2 = ppe 1 \u2212p, (3.11) leads to having the expected density of graphs G1 and G2 be equal to p. Using either model I (3.9) or model II (3.10) for \u02dc G1, we de\ufb01ne G2 as follows: G2 \u225cP \u02dc G1P T , (3.12) where P is a permutation matrix. Recall that R is the set of possible mapping edges across graphs G1 and G2. Throughout this section, we assume that we are in the restricted graph alignment regime where \u2223R\u2223= kn for k > 1. 
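The two noise models in (3.9)-(3.10) are easy to simulate. The sketch below (our own helper names, assuming symmetric Erdős-Rényi graphs without self-loops) generates a pair (G1, G2) with a known ground-truth permutation, which is how the synthetic experiments later in the paper construct their benchmarks.

```python
import numpy as np

def erdos_renyi(n, p, rng):
    """Symmetric 0/1 adjacency matrix with edge probability p, no self-loops."""
    upper = np.triu((rng.random((n, n)) < p).astype(int), k=1)
    return upper + upper.T

def noisy_aligned_pair(n=50, p=0.1, pe=0.05, model="II", seed=0):
    """Return (G1, G2, perm): G2 is a permuted noisy copy of G1 under noise
    model I (3.9) or II (3.10); node a of G2 corresponds to node perm[a] of G1."""
    rng = np.random.default_rng(seed)
    G1 = erdos_renyi(n, p, rng)
    Q = erdos_renyi(n, pe, rng)                  # symmetric flip mask
    if model == "I":
        G1_tilde = G1 * (1 - Q) + (1 - G1) * Q   # eq. (3.9)
    else:
        pe2 = p * pe / (1 - p)                   # eq. (3.11): preserves density p
        Q2 = erdos_renyi(n, pe2, rng)
        G1_tilde = G1 * (1 - Q) + (1 - G1) * Q2  # eq. (3.10)
    perm = rng.permutation(n)
    P = np.eye(n, dtype=int)[perm]               # permutation matrix
    G2 = P @ G1_tilde @ P.T                      # eq. (3.12)
    return G1, G2, perm
```

Setting pe = 0 recovers the isomorphic case, and a recovered mapping (e.g. from the EigenAlign sketch above) can be scored against perm to measure the fraction of correctly aligned nodes.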
The n true mapping edges (i \u2194i\u2032 if P = I) are included in R, while the remaining (k \u22121)n mappings are selected uniformly randomly. Let Strue be the set of true mapping edges between G1 and G2, i.e., Strue \u225c{(i,j) \u2236P(i,j) = 1}. We de\ufb01ne Sfalse = R\u2212Strue as the set of incorrect mapping edges between the two graphs. Moreover, we choose scores assigned to matches, neutrals and mismatches as s1 = \u03b1 + \u01eb, s2 = 1 + \u01eb and s3 = \u01eb, respectively, where \u03b1 > 1 and 0 < \u01eb \u226a1. These selections satisfy score conditions s1 > s2 > s3 > 0 and lead to the regularization parameter \u03b3 = 1/(1 + \u03b1) in (1.3). Theorem 2 Let A be the alignment graph between G1 and G2 as de\ufb01ned in (3.2) with s1 = \u03b1 + \u01eb, s2 = 1 + \u01eb and s3 = \u01eb. Let v be the eigenvector of E[A] corresponding to the largest eigenvalue, where the expectation is over realizations of G1, G2 and R. Then, under both noise models (3.9) and (3.10), if 0 < p < 1/2, and 0 \u2264pe < 1/2, as n \u2192\u221e, v(t1) > v(t2), \u2200t1 \u2208Strue and \u2200t2 \u2208Sfalse. In noise models (3.9) and (3.10), if we put pe = 0, then G2 is isomorphic with G1 because there exists a permutation matrix P such that G2 = PG1P T . For this case, we have the following Corollary: Corollary 1 Let G1 and G2 be two isomorphic Erd\u02dd os-R\u00b4 enyi graphs with n nodes such that G1 = PG2P T , where P is a permutation matrix. Under the conditions of Theorem 2, as n \u2192\u221e, v(t1) > v(t2) where where v is the top eigenvector of the expected alignment graph, t1 is a true mapping edge and t2 is a false mapping edge between the two graphs. We present proofs of Theorem 2 and Corollary 1 in Sections 9.2 and 9.3. In the EigenAlign algorithm, we use values of the top eigenvector of the alignment graph in a maximum weight bipartite matching optimization to extract bijective mappings between the two graphs. Thus, if true mapping edges obtained higher eigenvector scores compared to the false one, the EigenAlign algorithm would infer optimal mappings between the two graphs. Theorem 2 indicates that, in an expectation sense, true mapping edges obtain larger eigenvector scores compared to the false ones when \u2223R\u2223= kn. In Section 5 and through simulations, we show that the error of the EigenAlign algorithm is empirically small even in an unrestricted graph alignment setup. 12 \f4 LowRankAlign Algorithm In this section, we introduce a graph alignment algorithm that uses higher-order eigenvectors of (transformations of) adjacency graphs to align their structures. We refer to this extension as LowRankAlign (LRA). LRA can be useful specially in cases where leading eigenvectors of graphs are not informative. This case occurs for instance in the alignment of regular graph structures. Moreover, LRA does not require an explicit formation of the alignment graph which can be costly for large graphs if all mappings across graphs are possible. Higher order eigenvectors have been used in other spectral inference problems such as graph clustering [41\u201345] and the matrix coupling [61,62]. Moreover reference [63] has used higher order eigenvectors of the graph Laplacian to embed large graphs on a low-dimensional isometric space to compute an inexact matching. Our goal in this section is to provide a principled framework to exploit higher order eigenvectors in the graph alignment problem. Here we assume graphs are symmetric. For simplicity we assume n1 = n2 = n. 
All discussions can be extended to the case where n1 \u2260n2. Moreover, to simplify analysis, we assume singular values of matrices have multiplicity of one. Let \u2293be the set of all permutation matrices of size n \u00d7 n. Thus, the graph alignment optimization can be written as follows 2: max Tr(G1XG2XT ), (4.1) X \u2208\u2293. Let X\u2217be an optimal solution of optimization (4.1). Finding an optimal solution of this optimization is known to be NP-hard [15]. If X \u2208\u2293, we have Tr(G1XG2XT ) = Tr((G1 + \u03b41I)X(G2 + \u03b42I)XT ) + constant. (4.2) In other words we can add and subtract multiples of identity to make the resulting symmetric matrices positive de\ufb01nite, without changing the structure of the problem. Thus, without loss of generality, we assume that matrices G1 and G2 are positive semi-de\ufb01nite. We compute a solution for Optimization (4.1) in two steps: (i) The Relaxation Step: First, we compute a solution X0 to a relaxation of Optimization (4.1) over orthogonal matrices. Other relaxations can be considered as well. X0 may not be a valid permutation matrix. (ii) The Rounding Step: We propose a rounding step using projection in the direction of eigenvectors of (transformations of) adjacency graphs scaled by their corresponding eigenvalues. Below we explain these steps with more details: The Relaxation step: Let \u0393 be a set that contains all permutation matrices (i.e., \u2293\u2286\u0393). An example of \u0393 is the set of orthogonal matrices. Let X0 be a solution of the following optimization: max Tr(G1XG2XT ), (4.3) X \u2208\u0393. If \u0393 is assumed to be the set of orthogonal matrices (i.e., \u0393 = O), an optimal solution of optimization (4.3) can be found using eigen decomposition of matrices G1 and G2 as follows: 2To consider the generalized graph alignment formulation of (1.4), one can replace G1 and G2 with G1 \u2212\u03b31 and G2 \u2212\u03b31 in (4.1), respectively. 13 \fTheorem 3 Suppose vi and ui are eigenvectors of symmetric matrices G1 and G2, respectively. Let V and U be eigenvector matrices whose i-th columns are vi and ui, respectively. Then, X0 = V U T = n \u2211 i=1 viuT i , (4.4) is an optimal solution of optimization (4.3) over orthogonal matrices (i.e., \u0393 = O). Proof See Section 6.1 of reference [22]. Theorem 3 characterizes an optimal solution of the orthogonal relaxation of the graph alignment optimization. A similar argument can be constructed for eigenvectors of the matrix G2. Let X0 \u225c{X0 \u2236X0 = n \u2211 i=1 siviuT i ,s \u2208{\u22121,1}n}, (4.5) where si is the i-th component of the vector s. The set X0 represents multiple optimal solutions of optimization (4.3) when \u0393 = O. It is because if v is an eigenvector of a matrix corresponding to the eigenvalue \u03bb, \u2212v is also an eigenvector of the same matrix with the same eigenvalue. X0 can have at most 2n distinct members. The Rounding step: X0 may not be a valid permutation matrix. One way to \ufb01nd a permutation matrix using X0 is to project X0 over the space of permutation matrices \u2293: max Tr(XXT 0 ), (4.6) X \u2208\u2293. However, it has been shown that an optimal solution of optimization (4.6) has a poor performance in practice [64]. In the following, we propose an alternative algorithm to compute a permutation matrix using X0 with a certain performance guarantee. Consider the following optimization: max Tr(G1X0G2XT ), (4.7) X \u2208\u2293, X0 \u2208X0. 
For a \ufb01xed X0, this is a maximum weight bipartite matching optimization which can be solved exactly using linear programming. Let X\u2217 lin be an optimal solution of optimization (4.7). De\ufb01ne f(X) \u225cTr(G1XG2XT ), (4.8) \u02dc f(X) \u225cTr(G1X0G2XT 0 ) + 2Tr(G1X0G2(X \u2212X0)T ). Theorem 4 Let X\u2217and X\u2217 lin be optimal solutions of optimizations (4.1) and (4.7), respectively. We have, \u2223f(X\u2217) \u2212\u02dc f(X\u2217 lin)\u2223\u2264\u01eb2 n \u2211 i=1 \u03c3i(G1)\u03c3i(G2), (4.9) where \u03c3i(Ga) represents the i-th largest singular value of matrix Ga, for a = 1,2, and \u01eb is a bound on the relaxation gap (i.e., minX0\u2208X0 \u2225X\u2217\u2212X0\u2225op \u2264\u01eb). Note that \u2225.\u2225op indicates the matrix operator norm. 14 \fProof See Section 9.4. Optimization (4.7) can be simpli\ufb01ed to the following optimization which \ufb01nds a valid permutation matrix using the orthogonal relaxation of the graph alignment optimization: max Tr(( n \u2211 i=1 \u03bbi(G1)\u03bbi(G2)siviuT i )XT ), (4.10) X \u2208\u2293, s \u2208{\u22121,1}n, where \u03bbi(Ga) is the i-th largest eigenvalue of Ga for a = 1,2. The objective function of optimization (4.10) simpli\ufb01es the graph alignment problem to the simultaneous alignment of eigenvectors whose contributions in the overall alignment score are weighed by their corresponding eigenvalues. However, there are possibly exponentially many optimal solutions for optimization (4.10) and obtaining their resulting permutation matrices would be computationally infeasible. Because contributions of eigenvectors with small eigenvalues to the objective function of optimization (4.10) are small, one can instead as a heuristic, presumably solve the following optimization based on the low rank approximation of the objective function: Algorithm 2 (LowRankAlign Algorithm) The following optimization summarizes the LRA algorithm: max Tr(( k \u2211 i=1 si\u03bbi(G1)\u03bbi(G2)viuT i )XT ), (4.11) X \u2208\u2293, si \u2208{\u22121,1}, \u22001 \u2264i \u2264k. where k is a constant that determines the rank of the a\ufb03nity matrix. In the restricted graph alignment setup, some mapping edges across two graphs may not be allowed. In that case, one can set the a\ufb03nity weights (i.e., weights used in the maximum weight bipartite matching step) of such pairs in optimization (4.11) to be \u2212\u221e. 5 Performance Evaluation Over Synthetic Graphs Here we compare the performance of the proposed graph alignment algorithms (LRA and EA) against some other graph alignment methods including Natalie2 [30,31], GHOST [47], IsoRank [3], NetAlign [33], Klau\u2019s approach [29] as well as an SDP-based method [27] through simulations. Natalie2 and Klau\u2019s approach use Lagrange multipliers to relax the underlying quadratic assignment problem. IsoRank is a global graph alignment method that uses an iterative approach to align nodes across two graphs based on their neighborhood similarities, while GHOST uses a heuristic multi-scale spectral signature of graphs to compute an alignment across them. NetAlign formulates the alignment problem in a quadratic optimization framework and uses message passing to approximately solve it. The SDP-based method [27] uses a convex relaxation of the underlying QAP based on matrix splitting. In our simulations, we use default parameters of these methods. 
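For equal-sized symmetric graphs and a small k, optimization (4.11) can be approached by enumerating the 2^k sign vectors and solving a linear assignment for each. The sketch below is our own simplification, not the authors' implementation: it shifts the inputs to be positive semi-definite as in (4.2), builds the rank-k affinity matrix from the top eigenpairs, and keeps the permutation with the best alignment score.

```python
import itertools
import numpy as np
from scipy.optimize import linear_sum_assignment

def _psd_shift(G):
    """Add a multiple of the identity so the matrix is positive semi-definite
    (eq. 4.2: this does not change the optimal permutation)."""
    w_min = np.linalg.eigvalsh(G).min()
    return G + max(0.0, -w_min) * np.eye(len(G))

def low_rank_align(G1, G2, k=2):
    """Sketch of LowRankAlign (optimization 4.11) for symmetric graphs of equal size."""
    G1s, G2s = _psd_shift(G1.astype(float)), _psd_shift(G2.astype(float))
    w1, V = np.linalg.eigh(G1s)
    w2, U = np.linalg.eigh(G2s)
    w1, V = w1[::-1][:k], V[:, ::-1][:, :k]       # top-k eigenpairs, descending
    w2, U = w2[::-1][:k], U[:, ::-1][:, :k]
    best_cols, best_score = None, -np.inf
    for signs in itertools.product((-1.0, 1.0), repeat=k):
        M = sum(s * l1 * l2 * np.outer(V[:, i], U[:, i])
                for i, (s, l1, l2) in enumerate(zip(signs, w1, w2)))
        rows, cols = linear_sum_assignment(-M)    # maximum-weight matching
        X = np.zeros_like(M)
        X[rows, cols] = 1.0
        score = np.trace(G1 @ X @ G2 @ X.T)       # alignment objective (4.1)
        if score > best_score:
            best_score, best_cols = score, cols.copy()
    return best_cols, best_score                  # node i of G1 -> best_cols[i] of G2
```

In the restricted setting, disallowed mappings can be excluded by setting the corresponding entries of M to a large negative value before the matching step, as noted above.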
We report the performance of the proposed EigenAlign (EA) and LowRankAlign (LRA) algorithms for γ ∈ {0, 0.1, 0.2, 0.3, 0.4, 0.5 − ϵ} where ϵ = 0.001. In general, this parameter can be tuned in different applications using standard machine learning techniques such as cross validation [65]. For LRA we use the top k = 3 eigenvectors of the input graphs, as larger values of k did not have a significant effect on the results. We consider four different setups: (i) G1 is an Erdős-Rényi graph with n1 = 25 nodes and density parameter 0.1, and G2 is a stochastic block model with two blocks of 25 nodes each (i.e., n2 = 50), where edge densities within blocks are 0.1 and 0.3 and the edge density across blocks is 0.05; (ii) G1 and G2 are isomorphic Erdős-Rényi graphs with n1 = n2 = 50 nodes and edge density 0.1; (iii) G1 and G2 are isomorphic random regular graphs with n1 = n2 = 50 nodes whose edge density parameters are 0.1; (iv) G1 is a power law graph [66] constructed as follows: we start with a random subgraph with 5 nodes, and at each iteration a node is added to the graph connecting to three existing nodes with probabilities proportional to their degrees; this process is repeated until the number of nodes in the graph is equal to n1 = 50, and we then construct G2 according to the noise model (3.10) with pe = 0.05, using the density of G1 as the parameter p in (3.10).
[Figure 3 (axes: number of matches vs. number of mismatches): Performance evaluation of different graph alignment methods over (a) stochastic block models, (b) isomorphic Erdős-Rényi graphs, (c) isomorphic random regular graphs, and (d) noisy power law graphs. Experiments have been repeated 10 times in each case. For each method the average number of matches and mismatches is shown; the high-match, low-mismatch area is highlighted by red shades.]
[Figure 4 (axes: number of matches vs. number of mismatches): Performance evaluation of different graph alignment methods over (a) human-fly and (b) human-worm gene regulatory graphs. The high-match, low-mismatch area is highlighted by red shades.]
Figure 3 shows the number of matches and mismatches caused by di\ufb00erent graph alignment methods in four considered setups for an unrestricted graph alignment problem. In the stochastic block model case (panel a) LRA outperforms other methods in terms of resulting in large number of matches and few mismatches. Since LRA with \u03b3 = 0 ignores the e\ufb00ect of mismatches, it results in a slightly larger number of matches compared to the case with \u03b3 \u22600. At the same time LRA with \u03b3 = 0 results in a larger number of mismatches compared to the case with \u03b3 \u22600. This highlights the e\ufb00ect of considering mismatches in the generalized graph alignment formulation (1.3) when graphs have di\ufb00erent sizes and heterogenous edge densities. Over isomorphic Erd\u02dd os-R\u00b4 enyi graphs (panel b), EA, LRA, Isorank, SDP and Natalie2 have the best performance of achieving the highest number of matches and zero mismatches. Netalign, Klau and GHOST have poor performances in this case. Note that some of these methods are designed for very sparse graphs and for the restricted graph alignment setup. This may partially explain the poor performance of these methods. Over isomorphic random graphs (panel c) LRA outperforms other methods achieving the highest number of matches and zero mismatches. The performance of LRA is also robust against parameter \u03b3. Note that the alignment of regular graph structures is one of the most di\ufb03cult graph alignment cases because of homogeneity of node degrees. The fact that LRA performs well in this case while all other methods have poor performance illustrates the e\ufb00ectiveness of using higher order eigenvectors in aligning homogenous graph structures. Finally, over noisy power law graphs (panel d) Natalie2 and SDP outperform other methods. The performance of LRA in this case is higher than other methods except Natalie2 and SDP. 17 \f6 Performance Evaluation Over Gene Networks Here we apply graph alignment methods to compare gene regulatory graphs across human, \ufb02y and worm species. Comparative graph analysis in evolutionary studies often requires having a oneto-one mapping across genes of two or multiple networks. However, since human, \ufb02y and worm are distant species and as a result, many gene families have undergone extensive duplications and losses, we observe non-bijective homolog mappings across their genes [67]. For example, one gene in human can be homologous to multiple genes in \ufb02y and vice versa. To infer bijective mappings as a subset of homolog genes across species, we use graph alignment methods. We use regulatory networks that are inferred by integrating genome-wide functional and physical genomics datasets from ENCODE and modENCODE consortia (see the Appendix for more details). Similarly to our discussion in Section 5 we report the performance of proposed EigenAlign (EA) and LowRankAlign (LRA) methods for \u03b3 \u2208{0,0.1,0.2,0.3, 0.4, 0.5\u2212\u01eb} where \u01eb = 0.001. For LRA we use top k = 2 eigenvectors of input graphs. We also assess the performance of NetAlign, IsoRank, and Natalie2 in our real data analysis. We exclude Klau\u2019s approach [29] and the SDP-based method of [27] from our analysis in this section owing to their high memory and computational complexity. Moreover the GHOST method failed to run over these networks owing to some implementation errors. 
Figure 4 shows the number of matches and mismatches caused by di\ufb00erent graph alignment methods across human-\ufb02y and human-worm networks. In both cases EA and LRA with \u03b3 = 0 (i.e., ignoring mismatches) have a comparable performance to other methods. However, by changing \u03b3 we observe a trade-o\ufb00between number of caused matches and mismatches. For example, in the human\ufb02y network alignment case LRA with a non-zero \u03b3 results in approximately 2-fold decrease in the number of mismatches while the number of caused matches decreases by approximately 10%. This highlights the e\ufb00ect of considering mismatches in the graph alignment optimization. To substantiate these inferences, further experiments should be performed to determine the involvement of inferred conserved gene interactions in di\ufb00erent biological processes, which is beyond the scope of the present paper. 7" + } + ], + "Daniela Massiceti": [ + { + "url": "http://arxiv.org/abs/2311.17315v3", + "title": "Explaining CLIP's performance disparities on data from blind/low vision users", + "abstract": "Large multi-modal models (LMMs) hold the potential to usher in a new era of\nautomated visual assistance for people who are blind or low vision (BLV). Yet,\nthese models have not been systematically evaluated on data captured by BLV\nusers. We address this by empirically assessing CLIP, a widely-used LMM likely\nto underpin many assistive technologies. Testing 25 CLIP variants in a\nzero-shot classification task, we find that their accuracy is 15 percentage\npoints lower on average for images captured by BLV users than web-crawled\nimages. This disparity stems from CLIP's sensitivities to 1) image content\n(e.g. not recognizing disability objects as well as other objects); 2) image\nquality (e.g. not being robust to lighting variation); and 3) text content\n(e.g. not recognizing objects described by tactile adjectives as well as visual\nones). We delve deeper with a textual analysis of three common pre-training\ndatasets: LAION-400M, LAION-2B and DataComp-1B, showing that disability content\nis rarely mentioned. We then provide three examples that illustrate how the\nperformance disparities extend to three downstream models underpinned by CLIP:\nOWL-ViT, CLIPSeg and DALL-E2. We find that few-shot learning with as few as 5\nimages can mitigate CLIP's quality-of-service disparities for BLV users in some\nscenarios, which we discuss alongside a set of other possible mitigations.", + "authors": "Daniela Massiceti, Camilla Longden, Agnieszka S\u0142owik, Samuel Wills, Martin Grayson, Cecily Morrison", + "published": "2023-11-29", + "updated": "2024-03-25", + "primary_cat": "cs.CV", + "cats": [ + "cs.CV" + ], + "main_content": "Introduction AI-based applications hold the potential to help people who are blind and low vision (BLV) with everyday visual tasks [3, 5]. However, the popularity of video-calling services like Be My Eyes [1] suggest that human assistance is still often required due to the wide set of assistance tasks [44] and varying quality of BLV images [8, 17]. Recent advances in large multi-modal models (LMMs) [19, 49, 52] could potentially address these challenges, enabling a new era of automated visual assistance as highlighted by the early partnership between Open AI and Be My Eyes [2]. Despite the opportunity, little work has evaluated how well LMMs perform on data from BLV users. 
Performance disparities have been identified for other user groups [6, 36, 45, 52, 55, 66] but the evidence for BLV users is either anecdotal [49] or not specific to large multi-modal models [8]. Since BLV users are likely to be one of the biggest beneficiaries of LMMs, often in productivity- and safety-critical situations, it is important to extend studies to this group. To address this, we systematically evaluate CLIP, a widely used LMM with 8700+ citations and 24M+ downloads (statistics taken from Google Scholar and OpenAI's Hugging Face Hub, for CLIP ViT-L/14, ViT-B/32 and ViT-B/16, on 23 October 2023), on data from BLV users. CLIP's rich embeddings and strong zero-shot capabilities have led to it underpinning a wide range of downstream tasks including image classification [52], object detection [41, 42], semantic segmentation [37], image captioning [61, 63] and video recognition [35]. It has also been used to create large-scale datasets [23, 34, 57, 58] and evaluation metrics [26, 50]. As CLIP's pre-trained parameters are often used directly, poor performance can have wide-ranging implications for downstream assistive applications that use them. We investigate CLIP's performance on BLV data along three dimensions: image content, image quality, and textual content. Visual content considers how well CLIP can recognize BLV-specific objects, such as guide canes. Visual quality assesses robustness to quality variations that characterize BLV images, such as blur and atypical framing [17]. Textual content examines performance on tactile descriptive words used by BLV users in contrast to visual ones, for example "plastic" versus "yellow". We study each dimension in the context of a zero-shot image classification task, providing a worst-case estimate of how well CLIP will serve downstream assistive applications if used out-of-the-box. Overall, we find that CLIP's zero-shot classification accuracy is 15 percentage points lower on BLV images compared to web-crawled images across 25 CLIP variants. These variants span architecture size (ViT-B/16 to ViT-g/14), pre-training dataset (WIT [52], LAION [57, 58], DataComp/CommonPool [23]) and pre-training dataset size (80M to 3.8B). On deeper inspection, underperformance stems from CLIP: 1) recognizing disability objects less well than non-disability ones, with 25 percentage points lower accuracy; 2) being sensitive to image quality, particularly occlusion and lighting issues; and 3) recognizing objects described by material less well than those described by color, with discrepancies of 7 percentage points. In all cases, a larger pre-training dataset or architecture does not lead to parity. To further understand our results, we examine the upstream source and downstream impact of these disparities. First, we conduct a textual analysis of the captions in LAION-400M/2B and DataComp-1B and find that disability objects and materials are mentioned ∼17x and ∼4x less frequently than non-disability objects and colors, respectively.
[Figure 1: CLIP's zero-shot object recognition accuracy is 15 percentage points lower in images from BLV users (ORBIT, VizWiz-Classification) versus web-crawled images (MS-COCO, Open Images). Average accuracy (with 95% c.i.) in a standardized zero-shot image classification task is reported over 80-100K images per dataset for 25 CLIP variants.]
Second, we find performance disparities on BLV data persist in three downstream models that use CLIP: OWLViT [41] for object detection, CLIPSeg [37] for semantic segmentation, and DALL-E2 [53] for text-to-image generation. We close by discussing a set of possible mitigations, including few-shot model adaption and application-level solutions, toward making automated visual assistance for BLV users more equitable. In summary, our work contributes to the literature on how LMMs perform for users in the margins, specifically highlighting how CLIP may underperform for BLV users if integrated into assistive applications. Our contributions are: \u2022 An empirical study of CLIP\u2019s performance on BLV image content, image quality and textual content. \u2022 The first quantification of BLV content representation in LAION-400M, LAION-2B, and DataComp-1B. \u2022 An example-based analysis that illustrates how performance disparities on BLV data persist in three downstream models that use CLIP. 2. Related Works Large multi-modal models. LMMs now have impressive capabilities in analyzing and synthesizing images [7, 16, 19, 30, 49, 52, 68]. Contrastive models [30, 52, 68], a prominent sub-class, learn joint image and text embeddings by training on massive web-crawled data using a contrastive loss [15, 48]. They are unique in their architecture scale, and in the way they are trained on web-crawled data in an unsupervised manner. Unlike previous models, the rich embeddings they learn are leveraged by a wide range of downstream models \u2013 either directly [21, 52], or as part of a larger system [10, 35, 37, 41\u201343, 46, 53, 61\u201363, 67]. LMMs and fairness. LMMs are known to have social biases across gender, race, age, and geographic location [6, 36, 45, 66]. CLIP, for example, has been shown to classify people of color as non-human and males as criminal more often than white people and females, respectively [6]. Some works have studied these representational harm for people with disabilities, however only in natural language [28]. Quality-of-service harms arise when an application underperforms or fails for a particular user group [13, 18, 65] \u2013 e.g. a facial recognition system that does not detect women with darker skin tones [14]. These can be systematically identified and mitigated through disaggregated reporting of a model\u2019s performance [9, 47]. This has not been well studied for people with disabilities generally or BLV people specifically, with the evidence either anecdotal (e.g. GPT4Vision model card [49]) or not specific to LMMs [8]. 3. Methodology Our work investigates CLIP\u2019s robustness to image and text data from BLV users in the context of a zero-shot image classification task. This provides a worse-case estimate of how CLIP will perform out-of-the-box in downstream assistive applications. Here we describe the experimental set-up, CLIP variants, and datasets used in our analyses. 3.1. Episodic zero-shot image classification An image classifier selects which object c \u2208C is present in an image, where C is the set of possible object classes and |C| is the task\u2019s \u201cway\u201d. A zero-shot classifier does this without seeing any training images of the classes beforehand. Our first analysis compares CLIP\u2019s performance on different datasets (rather than the more typical multiple models on a single dataset), requiring our classification task setup to be standardized across datasets. 
We take inspiration from the episodic sampling used in meta-learning [22]: for each dataset j annotated with Cj object classes, we sample T fixed N-way classification tasks, where for each task we randomly sample N classes from Cj. For each task, we randomly sample M test images per class. The classification accuracy is then computed for all T*M*N images and the average (and 95% confidence interval) is reported. We repeat this for each dataset, with T, N and M held constant. We use variations of this to compare CLIP\u2019s performance 2 \fbetween object types (Sec. 4.1) and text prompts (Sec. 4.3) with details provided in each section, respectively. 3.2. Logistic Regression We also aim to understand which characteristics within images and text affect CLIP\u2019s performance. We use logistic regression, a common tool for hypothesis testing, to estimate the marginal effect of each characteristic on the model\u2019s accuracy. This approach avoids the need for careful experimental set-up which controls for all factors except the variable of interest. Logistic regression extends Ordinary Least Squares (OLS) regression to the case when the output variable is binary, as is our case where the model correctly identifies the ground-truth object or not. Formally, we use: \\la b e l {eq:logit-regression} \\begin {split} p(z_i) &= \\frac {1}{1+e^{-z_i}} \\\\ \\end {split} (1) where zi = \u03b11 + \u03b21Xi + \u03b12Di + \u03b22DiXi + \u03f5i. The output variable is p(zi) \u2208[0, 1], the probability that the model correctly identifies the ground-truth object in image i, with 1 for correct, and 0 otherwise. The explanatory variables are Xi, a vector of binary variables that encode whether a particular characteristic is present in image i, and Di is a binary variable indicating whether the ground-truth object is a disability object (e.g. a guide cane). The interaction term \u03b22DiXi measures whether the marginal effect of each characteristic in Xi is compounded or mitigated for disability objects relative to non-disability objects. \u03f5i are residuals which are assumed homoskedastic and uncorrelated. The coefficients \u03b11, \u03b21, \u03b12, \u03b22 are estimated through maximum likelihood. In OLS the coefficients directly represent the marginal effect of each Xi variable on the dependent variable. In contrast, here they represent the marginal effect on the log-odds ratio, which is linear in Xi: \\ label { eq:lo g od ds} \\ ln {\\ left (\\ frac {p(z_i)}{1-p(z_i)}\\right )} = \\alpha _1 + \\beta _1 X_i + \\alpha _2 D_i + \\beta _2 D_i X_i + \\epsilon _i (2) This makes the coefficients difficult to interpret so we instead report them as \u2202p/\u2202x, the marginal effect of each characteristic x\u2208X on the model\u2019s probability of being correct, p. We report the average of this marginal effect across all observations in the sample. We interpret each effect through its sign, magnitude, and significance. A negative sign means the model is less likely to be correct when that characteristic is present in an image \u2013 on average and holding all other characteristics constant. Its magnitude measures the extent of this impact. Its significance indicates its reliability based on a two-sided t-test that estimates the probability that the marginal effect is different from zero. 3.3. CLIP variants We study 25 CLIP variants spanning architecture size, pretraining dataset, and pre-training dataset size (see Tab. A.1 for summary). 
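Returning to the episodic protocol of Sec. 3.1: a minimal sketch of the task sampler and the aggregate statistic is shown below. The dataset layout, helper names and the normal-approximation 95% confidence interval are our own assumptions for illustration, not released code; `classify_fn` stands in for any zero-shot classifier.

```python
import random
import numpy as np

def episodic_accuracy(dataset, classify_fn, T=100, N=20, M=10, seed=0):
    """dataset: dict mapping class name -> list of image paths.
    classify_fn(image_paths, class_names) -> list of predicted class names.
    Samples T N-way tasks with M test images per class and returns the mean
    accuracy with a 95% normal-approximation confidence interval."""
    rng = random.Random(seed)
    correct = []
    for _ in range(T):
        classes = rng.sample(sorted(dataset), N)          # N classes per task
        for c in classes:
            images = rng.sample(dataset[c], M)            # M test images per class
            preds = classify_fn(images, classes)
            correct.extend(int(p == c) for p in preds)
    correct = np.array(correct, dtype=float)
    mean = correct.mean()
    ci95 = 1.96 * correct.std(ddof=1) / np.sqrt(len(correct))
    return mean, ci95
```

Holding T, N and M fixed across datasets, as described above, is what makes the per-dataset accuracies directly comparable.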
We focus on variants that use a Transformer [64] and Vision Transformer (ViT) [20] as the text and vision encoders respectively as they are most widely used. Specifically, we consider ViT-B/16, ViT-B/32, ViTL/14, ViT-H/14 and ViT-g/14 vision encoders with associated text encoders. For datasets, we consider OpenAI\u2019s closed-source WIT [52] and open-source LAION (80M/400M/2B) [30, 57, 58], DataComp (S/M/L/XL) [23], and CommonPool (S/M/L/XL) [23] with and without CLIP Score filtering [26]. These span 80M-3.8B image-text pairs. We use CLIP as a zero-shot classifier by embeddding a task\u2019s class labels using its text encoder, and each task image with its vision encoder. An image\u2019s prediction is taken to be the class whose embedding has the highest cosine similarity (after a softmax) with the image\u2019s embedding. 3.4. Datasets Our analyses are based on two large-scale datasets captured by BLV users: ORBIT [38] and VizWiz-Classification [8]. Both datasets were collected through real-world assistive applications: a personalizable object recognizer app for ORBIT [44]; and a visual question-answering app for VizWiz-Classification [12]. Both are therefore highly representative of typical BLV user data. We contrast these with two common web-crawled datasets \u2013 MS-COCO [33] and Open Images [32] \u2013 which are typical of the data used to pre-train LMMs, and widely used for benchmarking. We consider only the test and validation sets of these datasets. Below we provide descriptions of the BLV datasets, with the web-crawled datasets described in the appendix. ORBIT [38] contains 3,822 videos (2.68M frames) of 486 objects collected by 67 BLV users on their mobile phones. For each object, users captured videos which show the object alone, and in a realistic scene alongside other items, which we call the Clean and Clutter datasets, respectively. ORBIT Clean frames are annotated with 6 quality issues (e.g. framing, blur) following the categories in [17]. VizWiz-Classification [8] contains 8,900 images from the original VizWiz dataset [25], a dataset of images taken by over 11,000 BLV users via a visual assistance mobile app [12]. All images are annotated with 200 ImageNet object categories and the 6 quality issues of [17] (including an extra \u201cother\u201d quality issue). 4. Experimental Results Our first finding is that CLIP\u2019s accuracy is 15.0 percentage points lower on BLV datasets (ORBIT and VizWizClassification) than web-crawled datasets (MS-COCO and Open Images) (see Fig. 1). We use the standardized zeroshot set-up (see Sec. 3.1) and average the T*N*M predictions per dataset from each of the 25 CLIP variants. While the accuracy difference is less for larger CLIP architectures than smaller ones, no model achieves parity (see Fig. B.1). 3 \fTable 1. CLIP underperforms on disability and exclusive disability objects by significant margins compared to nondisability objects. Zero-shot accuracy is averaged (with 95% c.i.) over 27.5K images of each object type processed by each of the 25 CLIP variants. Experimental details in Sec. 4.1.1. Object Category ORBIT Clean ORBIT Clutter Excl. disability 36.5% \u00b1 0.1% 22.6% \u00b1 0.1% Disability 41.8% \u00b1 0.1% 25.8% \u00b1 0.1% Non-disability 58.9% \u00b1 0.1% 50.9% \u00b1 0.1% In the best case, the gap is 6.7 percentage points (ViT-g/14, LAION-2B) while in the worst, it is 22.8 percentage points (ViT-B/32, DataComp-M). This preliminary result hints at deeper issues. 
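As a concrete illustration of the zero-shot protocol just described, the snippet below scores a batch of images against a task's class labels using the open_clip package. The model name, pretraining tag and the prompt template are placeholders chosen for illustration; the paper's exact prompt wording is not assumed.

```python
import torch
import open_clip
from PIL import Image

model, _, preprocess = open_clip.create_model_and_transforms(
    "ViT-B-32", pretrained="laion2b_s34b_b79k")   # one example CLIP variant
tokenizer = open_clip.get_tokenizer("ViT-B-32")
model.eval()

def zero_shot_classify(image_paths, class_names, template="a photo of a {}"):
    """Return the predicted class name for each image via cosine similarity
    between CLIP image embeddings and embedded class-label prompts."""
    images = torch.stack([preprocess(Image.open(p).convert("RGB"))
                          for p in image_paths])
    texts = tokenizer([template.format(c) for c in class_names])
    with torch.no_grad():
        img_emb = model.encode_image(images)
        txt_emb = model.encode_text(texts)
        img_emb = img_emb / img_emb.norm(dim=-1, keepdim=True)
        txt_emb = txt_emb / txt_emb.norm(dim=-1, keepdim=True)
        probs = (100.0 * img_emb @ txt_emb.T).softmax(dim=-1)
    return [class_names[i] for i in probs.argmax(dim=-1).tolist()]
```

This function plugs directly into the episodic sampler sketched earlier as `classify_fn`, one CLIP variant at a time.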
In the following sections, we aim to identify potential sources of this discrepancy and why it occurs. 4.1. Robustness to image content from BLV users To understand why accuracy is lower, we first examine BLV image content. The BLV community uses a range of assistive objects, like guide canes and Braille displays [31, 38, 44] (see Fig. 2), which are not included in popular benchmarks [33, 54, 56]. We assess CLIP\u2019s performance on such \u201cdisability\u201d objects versus more common objects. We define disability objects as those that assist BLV people (e.g. dog collar); exclusive disability objects as the subset exclusively used by BLV people (e.g. guide cane); and non-disability objects as those used by everyone (e.g.keys). Three annotators categorized the ORBIT Clean and Clutter datasets2 resulting in 55 disability, 42 exclusive disability, and 431 non-disability objects (see App. A.3 for lists). 4.1.1 Disability objects are less well recognized than non-disability objects We compare zero-shot classification accuracy between disability and non-disability objects using a variant of the episodic set-up described in Sec. 3.1. Specifically, for each disability object we sample two N-way tasks with a \u201ctarget\u201d object and N-1 non-disability \u201cdistractor\u201d objects. The first task contains a disability target object and the second task contains a non-disability target. The distractors are randomly sampled from the non-disability objects, each coming from a unique object cluster. We repeat T times for each disability object, sampling a pair of tasks with a different set of distractor objects and non-disability target object. For each task, we randomly sample M frames of the target object, and ask CLIP to classify them from the task\u2019s N possible objects. We report the average accuracy of all frames with a disability and a non-disability object as the target, respectively (T*55*M each). We also report the average accuracy over the subset of frames that are exclusive disability objects. We use T = 5, N = 20, M = 100. 2We do not consider VizWiz-Classification, as none of its 200 ImageNet labels are disability objects. Figure 2. Examples from the ORBIT Dataset. (top) Disability objects: guide canes, liquid level sensor, electronic Braille device. (middle) Quality issues typical in BLV images: underexposure, blur, camera viewpoint, and framing. (bottom) A remote control and a Victor Reader Stream in a clean and clutter frame. Under this setting, we find that disability and exclusive disability objects have accuracies of 21.1 and 25.3 percentage points less than non-disability objects, respectively, on average across the ORBIT Clean dataset (see Tab. 1). The gap widens by a further 3-4 percentage points when more realistic scenarios are presented from ORBIT Clutter. We find that the worst performing objects include Braille notetakers, talking book devices and liquid level indicators. We also investigate the role of CLIP\u2019s pre-training dataset size on this finding. We find that accuracy increases with pre-training dataset size generally, but the delta between non-disability and disability objects stays roughly constant (see Fig. B.2). This suggests that web-crawling more data may not be enough to improve performance on potentially long-tailed objects. We see similar trends for increasing architecture sizes (see Fig. B.3). 
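One reading of the paired "distractor" sampling in Sec. 4.1.1 is sketched below (our own helper names; it assumes non-disability objects are grouped into clusters and that the two tasks in a pair share the same distractors, which is an assumption on our part). The resulting label sets can be passed to the zero-shot classifier above.

```python
import random

def sample_task_pair(disability_obj, non_dis_by_cluster, rng, N=20):
    """Return two N-way label sets sharing N-1 non-disability distractors:
    one with a disability target, one with a non-disability target.
    non_dis_by_cluster: dict mapping cluster name -> list of object labels."""
    clusters = rng.sample(sorted(non_dis_by_cluster), N)   # one object per unique cluster
    picks = [rng.choice(non_dis_by_cluster[c]) for c in clusters]
    distractors, non_dis_target = picks[:N - 1], picks[N - 1]
    task_disability = distractors + [disability_obj]
    task_non_disability = distractors + [non_dis_target]
    return task_disability, task_non_disability
```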
4.1.2 Disability objects are under-represented in largescale datasets compared to non-disability objects To better understand why more pre-training data does not improve performance on disability objects, we analyze the composition of three of CLIP\u2019s large-scale pre-training datasets for the presence of disability content \u2013 LAION400M [57], LAION-2B [58], and DataComp-XL [23] (also called DataComp-1B). These datasets are used for pretraining LMMs more broadly, with DataComp-XL achieving the highest accuracies on ORBIT. Given the scale of the datasets, we conduct a text-based analysis of their captions as a more computationally tenable approach than analyzing their images. We first extract all noun phrases that contain a physical object3 from the 3A physical object traverses the entity \u2192physical-entity \u2192object \u2192 OR(artifact, whole, part, living-thing) hypernym path in WordNet [40]. 4 \fTable 2. Disability objects occur 16-17x less frequently in the captions of popular large-scale image-text datasets compared to non-disability objects. The mentions of 222 disability object synonyms and 312 non-disability synonyms were counted in noun phrases (NPs) extracted from these datasets. Details in Sec. 4.1.2. LAION-400M LAION-2B DataComp-1B Captions 401,300,000 2,322,161,808 1,387,173,656 NPs 384,468,921 2,737,763,447 1,342,369,058 Unique NPs 5,984,181 22,657,632 15,071,341 Disability obj. mentions 18,326 (0.0048%) 70,939 (0.0026%) 48,672 (0.0036%) Non-disability obj. mentions 425,046 (0.1106%) 1,550,043 (0.0566%) 1,126,356 (0.0839%) Normalized non-dis/dis ratio 16.8 15.6 16.5 captions, referred to as \u201cvisual concepts\u201d4. We then compute how prevalent ORBIT\u2019s disability and non-disability objects are contained in these visual concepts. We use ORBIT to contextualize our previous results as it is a realistic representation of the types of objects important to BLV users, however, other object lists could be used. To do this, we first group similar objects from the ORBIT dataset into higher-level clusters (e.g. all guide canes). As each cluster could be described in several ways (e.g. \u201csymbol canes\u201d, \u201cguide canes\u201d), we assign each two relevant synonyms. This was expanded to 15 synonyms for disability objects based on initial experimentation, resulting in 222 disability object synonyms, and 312 nondisability synonyms overall. We then count how many times each synonym appears within the visual concepts using string matching, allowing partial matches after simple pre-processing (see App. A.5 for details). We find that disability objects occur 16-17x less frequently than non-disability objects across all three datasets (Tab. 2). We compute this by normalizing the number of mentions by the number of synonyms for disability and nondisability objects, respectively, and taking their ratio. We also see that LAION-2B has 7x the number of noun phrases as LAION-400M, but <4x the unique noun phrases, suggesting that it contains more of the same rather than new visual concepts (see App. A.5 for further statistics). 4.1.3 A few-shot approach can sometimes reduce the disability and non-disability accuracy gap As CLIP is also known to be a good few-shot learner [60], we investigate whether providing several examples of an object can equalize performance between disability and non-disability content. We integrate a ProtoNets approach [59] with the \u201cdistractor\u201d set-up described in Sec. 4.1.1, using embeddings directly from CLIP\u2019s vision encoder5. 
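A minimal sketch of such a prototype-based classifier on frozen CLIP image embeddings is shown below; it is a simplified illustration of the approach just introduced, not the authors' implementation, and `model` stands for any CLIP-style image encoder.

```python
import torch

@torch.no_grad()
def prototype_predict(model, support_images, support_labels, query_images):
    """Nearest-prototype prediction in CLIP's image-embedding space.
    Assumes integer class ids 0..C-1 in `support_labels`."""
    emb_s = model.encode_image(support_images)
    emb_q = model.encode_image(query_images)
    emb_s = emb_s / emb_s.norm(dim=-1, keepdim=True)
    emb_q = emb_q / emb_q.norm(dim=-1, keepdim=True)
    # One prototype per class: the mean of its (normalized) support embeddings.
    prototypes = torch.stack([
        emb_s[support_labels == c].mean(dim=0) for c in torch.unique(support_labels)
    ])
    dists = torch.cdist(emb_q, prototypes)   # Euclidean distance to each prototype
    return dists.argmin(dim=-1)              # index of the nearest prototype
```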
Specifically for each disability object, we sample pairs of N-way tasks in the same way, except now we addi4We release these publicly at [REMOVED FOR REVIEW] 5We note that this few-shot set-up does not use CLIP\u2019s text encoder. Table 3. A few-shot method using ProtoNets [59] (5-shot) achieves the highest accuracy and lowest accuracy gap between disability and non-disability objects, versus vanilla CLIP (0-shot) and CLIP with LLM-generated object descriptions [39, 51].Averaged over 25 CLIP variants. Obj type ORBIT Clean Acc (%) ORBIT Clutter Acc (%) 0-shot [39] [51] 5-shot 0-shot [39] [51] 5-shot Disability 41.8 48.3 50.1 86.2 25.8 32.1 34.2 54.5 Non-disability 58.9 57.0 57.0 88.3 50.9 50.2 49.4 69.1 Accuracy gap 17.1 8.7 6.9 2.1 25.1 18.1 15.2 14.6 tionally sample K training shots of each class which we use to compute the class prototypes. As before, we evaluate the model on M test images for the disability and non-disability target object in each task pair, with the prediction taken to be the closest prototype. We consider K = [5, 10, 20, 40]. We compare this to recent methods [39, 51] which improve CLIP\u2019s zero-shot performance by embedding LLMgenerated descriptions of objects (rather than just the raw labels). We use GPT-4 as the LLM and the same generation hyperparameters as [39, 51]. We find that augmenting CLIP with LLM-generated object descriptions [39, 51] outperforms vanilla CLIP (0-shot) which just embeds the raw object labels, but not a few-shot approach (5-shot) which embeds a few image examples of each object (see Tab. 3). This holds for both the ORBIT Clean and Clutter datasets. Crucially, the accuracy gap between disability and non-disability objects is lowest with a few-shot approach, though this accuracy gap quickly saturates, with no significant gains coming from more than 5 shots (see Fig. B.4). We also note that while a few-shot approach can reduce the accuracy gap to 2% in the simple images from ORBIT Clean, it is less effective in the more realistic images from ORBIT Clutter, with disability objects performing 14-15% points worse than non-disability objects, even when scaled to 40 shots (see Fig. B.4b). Furthermore, a few-shot approach is only effective as a mitigation if CLIP is pre-trained on a large enough dataset. We find that for pre-training datasets of less than 100M examples, the accuracy difference is 3-4x larger than that for 100-1000M examples, and 9-10x larger than that for 1B+ examples (see Figs. B.5a and B.5b). These factors are roughly constant across the number of shots. Overall, this speaks to the power of large-scale pre-training, even if a small amount of extra effort is required. 4.2. Robustness to image quality from BLV users Images captured by BLV users are of more variable quality than those captured by sighted users. These issues include atypical framing, camera blur, camera viewpoint (rotation), occlusion, overexposure, and underexposure [17, 31], which are annotated in the ORBIT Clean and VizWiz Classification datasets. We run the standardized zero-shot setup (see Sec. 3.1) on these datasets for all CLIP variants. We then use the statistical tools described in Sec. 3.2 to 5 \fdisentangle the marginal effect of each quality issue on model performance, both in general and for disability objects specifically. For ORBIT, we treat Xi as a binary vector indicating the presence of five quality issues6 in image i, Di as a binary indicating the presence of an exclusive disability object, and DiXi as the interactions between them. 
For VizWiz, we encode seven quality issues (including the \u201cother\u201d category) in Xi, but exclude Di or DiXi as VizWiz labels do not include disability objects. 4.2.1 Blur, viewpoint, occlusion and lighting issues significantly reduce model accuracy. In Fig. 3, we show that the marginal effects of blur, viewpoint (rotation), occlusion, and lighting issues on model accuracy are negative, large, and statistically significant for most models. All else equal, blur reduces model accuracy by 11 percentage points and 1 percentage point in the ORBIT and VizWiz datasets, respectively, on average. Viewpoint issues by 9 and 8 percentage points on each dataset respectively; occlusion by 9 and 14 percentage points; and lighting issues by 23 and 8 percentage points. We note that these effects are cumulative meaning that the impact on model accuracy is summed if multiple issues occur in the same image. We also note that pre-training on larger datasets, in general, does not guarantee robustness (e.g. variants pre-trained on LAION-2B, one of the largest datasets, are negatively affected by viewpoint and occlusion issues by 3-12 and 8-19 percentage points, respectively). We include the raw marginal effects in Tabs. B.3 to B.5. Framing issues in the ORBIT dataset stand as the exception, with the marginal effect being positive and statistically significant. This can be explained by how the ORBIT videos were collected. To orient the camera, BLV users were instructed to hold it close to the object initially, and then move away. So, the initial frames in the video tend to be at close range \u2013 an easier recognition task \u2013 but also have framing issues. This is supported by the VizWiz results where framing issues, which occur at further distances from the object, have a negative marginal effect on accuracy. 4.2.2 The impact of quality issues is typically not worse for disability compared to non-disability objects. Fig. 3 further shows that accuracy is 29 percentage points lower for exclusive disability objects than non-disability objects in the ORBIT Clean dataset, on average across all models, supporting the findings in Sec. 4.1.1. The marginal effect of a quality issue, however, typically affects disability objects no worse than non-disability ones. This can be seen by comparing the net effect of a quality issue on each object type. Let the baseline be the accuracy for non-disability objects. The accuracy for a disability object with no quality 6We combine overand underexposure into a joint \u201clighting issue\u201d due to low incidence rates of each of these issues. framing blur viewpoint occlusion lighting excl. disability obj excl. disability obj:framing excl. disability obj:blur excl. disability obj:viewpoint excl. disability obj:occlusion excl. disability obj:lighting \u22120.6 \u22120.4 \u22120.2 0 0.2 framing blur rotation occlusion overexp underexp other 90%(*) 95%(**) 99%(***) Marginal E\ufb00ect on Accuracy ORBIT Clean VizWiz-Cls Figure 3. Blur, viewpoint/rotation, occlusion and lighting issues all have large negative marginal effects on model accuracy, with high statistical significance, but these are not compounded for exclusive disability objects. Each dot represents a CLIP variant, with its color showing the significance level. issues will be 29 percentage points lower. Introducing occlusion will reduce the accuracy for non-disability objects by 9 percentage points on average. 
For disability objects, occlusion will reduce accuracy by this, plus the marginal effect of the interaction term (+2 percentage points), for a net effect of -7. The positive and significant interaction term indicates that having an occlusion issue and being a disability object has an effect that is slightly less than the sum of its parts. The only exception is overexposure issues, which do compound if they co-occur with a disability object. 4.3. Robustness to language used by BLV users Assistive applications are likely to leverage the multi-modal capabilities of LMMs, so it is important to understand how CLIP performs on the range of language used by BLV people. For example, BLV users commonly use tactile rather than visual words to describe their objects [44]. In this section, we study one instantiation of this \u2013 CLIP\u2019s robustness to recognizing objects described by their color, \u201cyellow mug\u201d, versus their material, \u201cplastic cup\u201d. To do this, three annotators manually labeled the ORBIT validation and test objects (208 objects) with a color and a material7. Each adjective was selected from a predefined list of 20 colors and 23 materials (see App. A.4). A text prompt was then created for each object using the template \u201c \u201d, where was the object\u2019s color or material, and was the noun extracted from the raw object label. We use these templates \u2013 referred to as color and material prompts \u2013 to examine CLIP\u2019s sensitivity to different object descriptions. 7We assigned up to 2 adjectives per object in some cases where objects were multiple colors or materials. 6 \fTable 4. Describing an object by its color (rather than material, or color and material) leads to text embeddings that are most aligned with that object\u2019s image embeddings. CLIP scores [26] between image and prompt embeddings are averaged (with 95% c.i.) for 100 images per object per prompt type on ORBIT Clean. Prompt Obj. name Material + obj. name Color + obj. name Color + material + obj. name CLIP Score 24.07 \u00b1 0.02 23.88 \u00b1 0.02 25.20 \u00b1 0.02 24.76 \u00b1 0.02 4.3.1 CLIP classifies objects more accurately when they are described by color rather than material We compute CLIP scores [26] between an image and four different prompt embeddings, for 100 randomly sampled images of each object in ORBIT Clean. We consider the color and material prompts, a lower bound containing just the object name, and an upper bound adding both color and material adjectives. We expect that the lower bound prompt, which provides the least detail about the object, should align less strongly with the object\u2019s image embedding than the upper bound prompt, which provides the most specific detail. In Tab. 4, however, we see this is not the case. Rather, color prompts have the highest CLIP scores and material prompts the lowest. Interestingly, the upper bound has a lower average CLIP score than the color prompt, suggesting that adding the object\u2019s material is harming alignment. To quantify the impact of this on accuracy, we run the standard zero-shot set-up (Sec. 3.1), embedding these textual prompts instead of the raw object labels. We see that across all variants, CLIP classifies objects 7.1 percentage points more accurately when they are described by their color rather than their material (see Fig. B.6). 
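The prompt-sensitivity comparison in Sec. 4.3.1 reduces to computing CLIP scores between an object's image embeddings and several wordings of its text prompt. The sketch below shows the core computation; the scaling constant and example prompts are assumptions following common CLIP-score usage [26], not the authors' exact code.

```python
import torch

@torch.no_grad()
def clip_scores(model, tokenizer, images, prompts):
    """Scaled cosine similarity between each image and each prompt."""
    img = model.encode_image(images)
    txt = model.encode_text(tokenizer(prompts))
    img = img / img.norm(dim=-1, keepdim=True)
    txt = txt / txt.norm(dim=-1, keepdim=True)
    return 100.0 * img @ txt.T          # [num_images, num_prompts]

# e.g. prompts = ["mug", "ceramic mug", "yellow mug", "yellow ceramic mug"];
# averaging scores over images per prompt gives per-prompt comparisons like Tab. 4.
```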
4.3.2 Materials are under-represented in large-scale datasets compared to colors We further examine this result by measuring how frequently colors versus materials appear in the captions of LAION400M, LAION-2B and DataComp-1B. We use the extracted noun phrases from Sec. 4.1.2, and count the number of times the 20 material and 23 color annotations are mentioned. In Tab. 5, we see that colors are mentioned \u223c4x more frequently than materials across both datasets, once normalized. This helps to explains some of the results Table 5. Materials occur \u223c4x less frequently than colors in the captions of popular large-scale image-text datasets. The mentions of 20 colors and 23 materials were counted in the noun phrases extracted in Tab. 2. LAION-400M LAION-2B DataComp-1B Color mentions 475,060 (0.12%) 1,756,102 (0.06%) 1,165,871 (0.09%) Material mentions 131,876 (0.03%) 513,014 (0.02%) 354,598 (0.03%) Norm\u2019ed color/ material ratio 4.1 3.9 3.8 victor reader stream 0.0031 0.0055 0.0081 guide cane liquid level indicator 0.0115 0.0087 0.0069 0.0017 victor reader stream liquid level indicator guide cane 0.0009 guide cane guide cane Figure 4. OWL-ViT [41] detects disability objects less consistently than non-disability objects. Disability objects are often mistaken for other objects, sometimes with higher confidence. in Sec. 4.3.1. Taken together, this suggests that models pretrained on these datasets may perform worse for BLV users who describe their objects by their material, with the potential that this may extend to other tactile-based descriptions. 5. Example-based impact analysis Sec. 4 broadly shows that CLIP is sensitive to image and textual data provided by BLV users in a zero-shot classification task. We investigate whether these performance disparities persist in three downstream models that use CLIP \u2013 OWL-ViT [42], CLIPSeg [37], and DALL-E2 [53]. We run our analysis on 180 BLV images which are systematically selected for 20 objects \u2013 the 5 topand bottomperforming disability and non-disability objects from the ORBIT dataset (see App. C for full protocol). For space reasons, we include CLIPSeg results in App. C.3. 5.1. Object detection with OWL-ViT Object detection is already widely available in BLV assistive applications [3, 5], and in future, many may rely on models that use CLIP, such as OWL-ViT [41]. OWL-ViT predicts bounding boxes for objects specified in free-form text prompts. It does this by appending a bounding box regression and class-wise layer to CLIP\u2019s (pre-trained) encoders and then fine-tuning on an object detection dataset. We run all 180 images through OWL-ViT (with a ViT-B/32 vision encoder) with the (cleaned) noun phrase extracted from the raw object label as the text prompt. A team of three annotators then manually evaluated the detections. We find: Disability objects are less consistently detected than non-disability objects. Our results show that 6/10 nondisability objects were correctly detected (taken as the box with the highest confidence) in all 9 frames showing that object, compared to 3/10 disability objects. In many of these failed frames, the model mistook the disability object for another object, often with a higher confidence (Fig. 4). This behavior would have a large negative effect on the user experience of an object detection app. 7 \fTable 6. OWL-ViT [41]\u2019s correct bounding box predictions have confidence scores that are \u223c5x lower for disability than non-disability objects on average. 
The confidence score of the predicted box per image is averaged (with 95% c.i.) over 90 images for disability and non-disability objects, respectively. Object Correct boxes Incorrect boxes Dis. objs 0.016 \u00b1 0.008 0.008 \u00b1 0.003 Non-dis. objs 0.084 \u00b1 0.030 0.008 \u00b1 0.003 The model is less confident about disability object detections than non-disability object detections. In Tab. 6, we see that OWL-ViT\u2019s confidence for the correct bounding box is \u223c5x lower for disability objects compared to nondisability objects. We see that incorrect boxes have similar confidence scores between disability and non-disability objects, which is expected. See examples in Fig. C.1. 5.2. Text-to-image generation with DALL-E2 DALL-E2 [53] also uses CLIP: during training its decoder is conditioned on image embeddings from frozen CLIP. We investigate the downstream impacts of this by examining if DALL-E2 can generate disability content. We create two prompts for each of the 20 objects using the templates: i) \u201c\u201d ii) \u201c on next to a \u201d. The object name was the object label\u2019s cleaned noun phrase, and the surface/adjacent object was chosen to match a randomly sampled clutter image of that object (see App. C for details). Three annotators then manually evaluated four generations from DALL-E2 per prompt. A generated image was considered correct if it contained the object specified in the prompt. We find: Generations of disability objects are more likely to be incorrect compared to non-disability objects. DALLE2 correctly generated the object in the prompt for 18/80 images of disability objects, versus 74/80 images of nondisability objects. For some disability objects, no generations contained a valid representation of the object \u2013 including guide canes, electronic Braille devices, and liquid level indicators (see Figs. 5 and C.4). In these cases, the generations either defaulted to a more common object (e.g. a walking stick for \u201cguide cane\u201d) or fabricated an object entirely (e.g. random dot patterns for \u201cBraille sense display\u201d, colorful thermometers for \u201cliquid level sensor\u201d). It also failed to generate specific instances of assistive devices (e.g. \u201cVictor Reader Stream\u201d, a talking book device, resulted in images of books or river streams). In contrast, DALL-E2 generates highly realistic of non-disability objects (see Fig. C.4a). 6. Discussion Our evaluation of CLIP reveals that it consistently underperforms on BLV data across visual content, visual quality, and textual content, irrespective of architecture size, pre-training dataset, or pre-training dataset size. We discuss mitigation strategies to make LMMs more equitable for BLV users and marginalized groups more generally. Our results suggest that the performance disparities come in part from the distribution shift between webcrawled and BLV user data. This highlights the importance of systematic reporting of the contents of large-scale datasets used for pre-training, in the spirit of datasheets for datasets [24]. Our analysis in Secs. 4.1.2 and 4.3.2 provides a starting point, but this should be extended to other datasets and marginalized content. With the data composition known, mitigation strategies can then be developed. For example, assistive device websites and disability dataset platforms like IncluSet [4] could explicitly be crawled. 
We also show that a few-shot approach can mitigate performance disparities relating to image content \u2013 a more cost-effective alternative than re-training a LMM. The fewshot model adaptation could be done when the LMM is developed, when the application is developed, or by the endusers themselves as part of a teachable paradigm [31, 38]. Each of these options is an open research question with the need to more deeply explore interaction paradigms and light-weight model adaptation techniques [11, 27]. Finally, application-level mitigations should also be considered. For BLV users, auxiliary models could support users to reduce image variance, helping them stabilize the camera or alerting about the lighting conditions, for example. We could also leverage data augmentation techniques that are personalized to individual users or user groups. For BLV users who tend to take blurry images, for example, we could automatically inject blur into the few-shot images so that the model becomes more robust to this quality issue. The findings in this paper prompt a critical look at the development cycle of current LMMs. Greater transparency and disaggregation in dataset reporting is needed, regardless of the proprietary nature of a dataset. Future work should also explore lightweight model adaption techniques that allow application developers and users to bring equity to their experiences. We must continue to work with marginalized communities \u2013 \u201cnothing about us without us\u201d \u2013 to equalize the benefit of LMMs and their extraordinary capabilities. Real image DALL-E2 generations Guide cane on a purple blanket next to a slipper Braille sense display Figure 5. DALL-E2 [53] either defaults to common objects or fabrications when prompted with disability objects like guide canes and electronic Braille devices. Instead, it generates highquality images of non-disability objects (see Fig. C.4a). 8" + }, + { + "url": "http://arxiv.org/abs/2104.03841v5", + "title": "ORBIT: A Real-World Few-Shot Dataset for Teachable Object Recognition", + "abstract": "Object recognition has made great advances in the last decade, but\npredominately still relies on many high-quality training examples per object\ncategory. In contrast, learning new objects from only a few examples could\nenable many impactful applications from robotics to user personalization. Most\nfew-shot learning research, however, has been driven by benchmark datasets that\nlack the high variation that these applications will face when deployed in the\nreal-world. To close this gap, we present the ORBIT dataset and benchmark,\ngrounded in the real-world application of teachable object recognizers for\npeople who are blind/low-vision. The dataset contains 3,822 videos of 486\nobjects recorded by people who are blind/low-vision on their mobile phones. The\nbenchmark reflects a realistic, highly challenging recognition problem,\nproviding a rich playground to drive research in robustness to few-shot,\nhigh-variation conditions. We set the benchmark's first state-of-the-art and\nshow there is massive scope for further innovation, holding the potential to\nimpact a broad range of real-world vision applications including tools for the\nblind/low-vision community. 
We release the dataset at\nhttps://doi.org/10.25383/city.14294597 and benchmark code at\nhttps://github.com/microsoft/ORBIT-Dataset.", + "authors": "Daniela Massiceti, Luisa Zintgraf, John Bronskill, Lida Theodorou, Matthew Tobias Harris, Edward Cutrell, Cecily Morrison, Katja Hofmann, Simone Stumpf", + "published": "2021-04-08", + "updated": "2021-10-08", + "primary_cat": "cs.CV", + "cats": [ + "cs.CV" + ], + "main_content": "Introduction Object recognition systems have made spectacular advances in recent years [42, 47, 43, 37, 14, 30, 36] however, most systems still rely on training datasets with 100s to 1,000s of high-quality, labeled examples per object category. These demands make training datasets expensive to collect, and limit their use to all but a few application areas. Few-shot learning aims to reduce these demands by training models to recognize completely novel objects from only a few examples [9, 49, 40, 2, 38, 11, 46]. This will enable recognition systems that can adapt in real-world, dynamic scenarios, from self-driving cars to applications where users provide the training examples themselves. Meta-learning algorithms which \u201clearn to learn\u201d [45, 9, 49, 11] hold partic(a) Frames from clean videos (b) Frames from clutter videos Figure 1: High-variation examples in the ORBIT dataset \u2013 a facemask, hairbrush, keys, and watering can. Full videos in the supplementary material. Further examples in Figure A.5. ular promise toward this goal with recent advances opening exciting possibilities for light-weight, adaptable recognition. Most few-shot learning research, however, has been driven by datasets that lack the high variation \u2014 in number of examples per object and quality of those examples (framing, blur, etc.; see Table 1) \u2014 that recognition systems will likely face when deployed in the real-world. Key datasets such as Omniglot [23, 49] and miniImageNet [49], for example, present highly structured benchmark tasks which assume a fixed number of objects and training examples per object. Meta-Dataset [48], another key dataset, poses a more challenging benchmark task of adapting to novel datasets given a small (random) number of training examples. Its constituent datasets [23, 17, 39, 26, 32, 50, 6], however, mirror the high-quality images of Omniglot and miniImageNet, leaving robustness to the noisy frames that would be streamed from a real-world system unaddressed. While these datasets have catalyzed research in few-shot learning, state-of-the-art performance is now relatively saturated and leaves reduced scope for algorithmic innovation [16, 4, 33]. To drive further innovation in few-shot learning for realworld impact, there is a strong need for datasets that capture the high variation inherent in real-world applications. We \fmotivate that both the dataset and benchmark task should be grounded in a potential real-world application to bring real-world recognition challenges to life in their entirety. An application area that neatly encapsulates a few-shot, highvariation scenario are teachable object recognisers (TORs) for people who are blind/low-vision [24, 18]. Here, a user can customize an object recognizer by capturing a small number of (high-variation) training examples of essential objects on their mobile phone. The recognizer is then trained (in deployment) on these examples such that it can recognize the user\u2019s objects in novel scenarios. 
As a result, TORs capture a microcosm of highly challenging and realistic conditions that can be used to drive research in real-world recognition tasks, with the potential to impact a broad range of applications beyond just tools for the blind/low-vision community. We introduce the ORBIT dataset [31], a collection of videos recorded by people who are blind/low-vision on their mobile phones, and an associated few-shot benchmark grounded in TORs. Both were designed in collaboration with a team of machine learning (ML), human-computer interaction, and accessibility researchers, and will enable the ML community to 1) accelerate research in few-shot, highvariation object recognition, and 2) explore new research directions in few-shot video recognition. We intend both as a rich playground to drive research in robustness to challenging, real-world conditions, a step beyond what curated few-shot datasets and structured benchmark tasks can offer, and to ultimately impact a broad range of real-world vision applications. In summary, our contributions are: 1. ORBIT benchmark dataset. The ORBIT benchmark dataset [31] (Section 3) is a collection of 3822 videos of 486 objects recorded by 77 blind/low-vision people on their mobile phones and can be downloaded at https://doi.org/10.25383/city.14294597. Examples are shown in Figures 1 and A.5. Unlike existing datasets [39, 8, 26, 49, 48], ORBIT show objects in a wide range of realistic conditions, including when objects are poorly framed, occluded by hands and other objects, blurred, and in a wide variation of backgrounds, lighting, and object orientations. 2. ORBIT teachable object recognition benchmark. We formulate a few-shot benchmark on the ORBIT dataset (Section 4) that is grounded in TORs for people who are blind/low-vision. Contrasting existing few-shot (and other) works, the benchmark proposes a novel user-centric formulation which measures personalization to individual users. It also incorporates metrics that reflect the potential computational cost of real-world deployment on a mobile device. These and the benchmark\u2019s other metrics are specifically designed to drive innovation for realistic settings. 3. State-of-the-art (SOTA) on the ORBIT benchmark. We implement 4 few-shot learning models that cover the main classes of approach in the field, extend them to videos, and establish the first SOTA on the ORBIT benchmark (Section 5). We also perform empirical studies showing that training on existing few-shot learning datasets is not sufficient for good performance on the ORBIT benchmark (Table 4) leaving significant scope for algorithmic innovation in few-shot techniques that can handle high-variation data. Code for loading the dataset, computing benchmark metrics, and running the baselines is available at https://github.com/microsoft/ORBIT-Dataset. 2. Related Work Few-shot learning datasets. Omniglot [23, 49], miniImageNet [49], and Meta-Dataset [48] have driven recent progress in few-shot learning. Impressive gains have been achieved on Omniglot and miniImageNet [49, 16, 4, 33], however results are now largely saturated and highly depend on the selected feature embedding. Meta-Dataset, a dataset of 10 datasets, formulates a more challenging task where whole datasets are held-out, but these datasets contain simple and clean images, such as clipart drawings of characters/symbols [23, 49, 17], and ImageNet-like images [26, 39, 32, 50, 6] showing objects in uniform lighting, orientations, and camera viewpoints. 
The ORBIT dataset and benchmark presents a more challenging few-shot task with high-variation examples captured in real-world scenarios. High-variation datasets. Datasets captured by users in real-world settings are naturally high-variation [1, 12, 7, 21, 27, 18, 41, 13], but none collected thus far explicitly target few-shot object recognition. ObjectNet [1] is a test-only dataset of challenging images (e.g. unusual orientations/backgrounds) for \u201cmany-shot\" classification. Something-Something [12] and EPIC-Kitchens [7] are video datasets collected by users with mobile and head-mounted cameras, respectively, but are focused on action recognition based on many examples and \u201caction captions\u201d. Core50 [27] is a video dataset captured on mobile phones for a continual learning recognition task. In contrast to ORBIT, the videos are high quality (captured by sighted people, with well-lit centered objects). Other high-variation datasets include those collected by people who are blind/lowvision [18, 41, 13] (see IncluSet for a repository of accessibility datasets [19]) however, most are not appropriate for few-shot learning. TeGO [18] contains mobile phone images of 19 objects taken by only 2 users (1 sighted, 1 blind) in 2 environments (1 uniform background, 1 cluttered scene). It validates the TOR use-case, but is too small to deliver a robust, deployable system. VizWiz [13], although larger scale (31,173 mobile phone images contributed by 11,045 blind/low-vision users) targets image captioning and question-answering tasks, and is not annotated with object labels. The ORBIT dataset and benchmark is motivated by the lack of datasets that have the scale and structure required for few-shot, high-variation real-world applications, and adds to the growing repository of datasets for accessibility. \fOmniglot [23] miniImageNet [49] Meta-Dataset [48] TEgO [24] ORBIT Benchmark Data type Image Image Image Image Video frames # classes 1623 100 4934 19 486 # samples/class 20 600 6-340,029 180-487 33-3,600 # total samples 32,460 60,000 52,764,077 11,930 2,687,934 Goal Image classification Image classification Image classification Image classification Frame classification Task Fixed shot/way Fixed shot/way Random shot/way Fixed shot/way Random shot/way Source Turk Web Web Mobile phone Mobile phone Data collectors Sighted (20) Sighted Sighted Sighted (1) Blind (1) Blind (67) High-variation features Unbalanced classes \u2717 \u2717 \u2713 \u2717 \u2713 Lighting variation \u2717 \u2713 \u2713 \u2717 \u2713 Background variation \u2717 \u2713 \u2713 \u2713* \u2713 Viewpoint variation \u2717 \u2717 \u2717 \u2713 \u2713 Ill-framed objects \u2717 \u2717 \u2717 \u2713 \u2713 Blur \u2717 \u2717 \u2717 \u2717 \u2713 Table 1: Comparison of few-shot learning datasets. Note, the ORBIT benchmark dataset is a subset of all videos contributed by collectors (see Appendix B). *Collected in 2 controlled environments \u2013 1 uniform background, 1 cluttered space. 3. ORBIT Benchmark Dataset Our goal is to drive research in recognition tasks under few-shot, high-variation conditions so that deployed fewshot systems are robust to such conditions. Toward this goal, we focus on a real-world application that serves as a microcosm of a few-shot, high-variation setting \u2014 TORs for people who are blind/low-vision \u2013 and engage the blind/lowvision community in collecting a large-scale dataset. 
The collection took place in two phases, and collectors recorded and submitted all videos (completely anonymously) via an accessible iOS app (see Appendix A.2). The collection protocol was designed and validated through extensive user studies [44] and led to the key decision to capture videos rather than images of objects. This was based on the hypothesis that a video increases a blind collector's chances of capturing frames that contained the object while reducing the time/effort cost to the collector, compared to multiple attempts at a single image. The study was approved by the City, University of London Research Ethics Committee. The full data collection protocol is described in Appendix A.1 and a datasheet [10] for the dataset is included in Appendix E. We summarize the benchmark dataset in Table 2 and describe it in detail below (see Appendix B for dataset preparation, and Appendix C for example clips). The benchmark dataset is used to run the benchmark described in Section 4. Number of collectors. Globally, 77 collectors contributed to the ORBIT benchmark dataset. Collectors who contributed only 1 object were merged to enforce a minimum of 3 objects per user, such that the per-user classification task was a minimum of 3-way, resulting in an effective 67 users. Numbers of videos and objects. Collectors contributed a total of 486 objects and 3,822 videos (2,687,934 frames, 83GB). 2,996 videos showed the object in isolation, referred to as clean videos, while 826 showed the object in a realistic, multi-object scene, referred to as clutter videos. We collected both types to match what a TOR will encounter in the real-world (see Section 4.2.2). Each collector contributed on average 7.3 (±2.8) objects, with 5.8 (±3.9) clean videos and 1.8 (±1.1) clutter videos per object. Figure 2 shows the number of objects (2a) and number of videos per collector (2b). We discuss the impact of the 2 collectors who contributed more videos than the average collector in Appendix B.3. Figure 2: Number of objects and videos across 67 collectors; (a) number of objects per collector, (b) number of videos (stacked by object) per collector. Types of objects. Collectors provided object labels for each video contributed. Objects covered coarse-grained categories (e.g. remote, keys, wallet) as well as fine-grained
For summarization purposes, we clustered the objects based on object similarity and observe a long-tailed distribution (see Figure A.7b). The largest clusters contained different types of remotes/controls, keys, wallets/purses, guidecanes, doors, airpods, headphones, mobile phones, watches, sunglasses and Braille readers. More than half of the clusters contained just 1 object. The clustering algorithm and cluster contents are included in Appendix D. Bounding box annotations. Since the clutter videos could contain multiple objects, we provide bounding box annotations around the target object in all clutter videos (available in the code repository). We use these to compute the proportion of time the target object spends inversus out-of-frame per video, and show this in Figure A.6 averaged over all clutter videos per collector. On average, the target object is in-frame for \\sim 95% of any given clutter video. Video lengths. Video lengths depended on the recording technique required for each video type (see Appendix A.1). On average, clean videos were 25.7s (\\sim 771 frames at 30 FPS), and clutter videos were 15.2s (\\sim 457 frames at 30 FPS). Unfiltered ORBIT dataset. Some collectors did not meet the minimum requirements to be included in the benchmark dataset (e.g. an object did not have both clean and clutter videos). The benchmark dataset was therefore extracted from a larger set of 4733 videos (3,161,718 frames, 97GB) of 588 objects contributed by 97 collectors. We summarize the unfiltered dataset in Appendix A.3. 4. Teachable Object Recognition Benchmark The ORBIT dataset can be used to explore a wide set of real-world recognition tasks from continual learning [27, 28] to video segmentation [25, 34, 29]. In this paper, we focus on few-shot object recognition from high-variation examples and present a realistic and challenging few-shot benchmark grounded in TORs for people who are blind/low-vision. In Section 4.1, we describe how a TOR works, mapping it to a few-shot learning problem, before presenting the benchmark\u2019s evaluation protocol and metrics in Section 4.2. 4.1. Teachable Object Recognition We define a TOR as a generic recognizer that can be customized to a user\u2019s personal objects using a small number of training examples \u2013 in our case, videos \u2013 which the user has captured themselves. The 3 steps to realizing a TOR are: (1) Train. A recognition model is trained on a large dataset of objects where each object has only a few examples. The model can be optimized to either i) directly recognize a set of objects [46, 5] or ii) learn how to recognize a set of objects (i.e. meta-learn) [9, 40, 49, 38]. This happens before deploying the model in the real world. (2) Personalize. A real-world user captures a few examples of a set of their personal objects. The deployed model is trained on this user\u2019s objects using just these examples. (3) Recognize. The user employs their now-personalized recognizer to identify their personal objects in novel (test) scenarios. As the user points their recognizer at a scene, it delivers frame-by-frame predictions. 4.1.1 TORs as a few-shot learning problem The (1) train step of a TOR can be mapped to the \u2018metatraining\u2019 phase typically used in few-shot learning set-ups. The (2) personalize and (3) recognize steps can be mapped to \u2018meta-testing\u2019 (see Figure 3). With this view, we now formalize the teachable object recognition task, drawing on nomenclature from the few-shot literature [9, 40, 38, 11]. 
We construct a set of train users $\\mathcal{K}^\\text{train}$ and test users $\\mathcal{K}^\\text{test}$ ($\\mathcal{K}^\\text{train} \\cap \\mathcal{K}^\\text{test} = \\varnothing$), akin to the train and test object classes used in few-shot learning. A user $\\kappa$ has a set of personal objects $\\mathcal{P}^\\kappa$ that they want a recognizer to identify, setting up a $|\\mathcal{P}^\\kappa|$-way classification problem. To this end, the user captures a few videos of each object, together called the user's 'context' set $\\mathcal{C}^\\kappa = \\{(\\bar{\\bm{v}}, p)_i\\}_{i=1}^N$, where $\\bar{\\bm{v}}$ is a context video, $p \\in \\mathcal{P}^\\kappa$ is its object label, and $N$ is the total number of the user's context videos. The goal is to use $\\mathcal{C}^\\kappa$ to learn a recognition model $f_{\\theta^\\kappa}$ that can identify the user's objects, where $\\theta^\\kappa$ are the model parameters specific to user $\\kappa$. Once personalized, the user can point their recognizer at novel 'target' scenarios to receive per-frame predictions: $y^*_f = \\argmax_{y_f \\in \\mathcal{P}^\\kappa} f_{\\theta^\\kappa}(v_f), \\quad v_f \\in \\bm{v}, \\quad (\\bm{v}, p) \\in \\mathcal{T}^\\kappa \\quad (1)$ where $v_f$ is a target frame, $\\bm{v}$ is a target video, $\\mathcal{T}^\\kappa$ is all the user's target videos, and $y_f \\in \\mathcal{P}^\\kappa$ is the frame-level label (note that $y_f = p$, where $p \\in \\mathcal{P}^\\kappa$ is the video-level object label). Following the typical paradigm, during meta-training (i.e. the train step), multiple tasks are sampled per user $\\kappa \\in \\mathcal{K}^\\text{train}$, where a task is a random sub-sample of the user's $\\mathcal{C}^\\kappa$ and $\\mathcal{T}^\\kappa$ (see Appendix G.2). The recognition model can be trained on these tasks using an episodic [9, 40, 49, 38] or non-episodic approach [5, 46, 22]. We formalize both in the context of TORs in Appendix F. Then, at meta-testing, one task is sampled per test user $\\kappa \\in \\mathcal{K}^\\text{test}$ containing all the user's context and target videos. For each test user, the recognizer is personalized using all their context videos $\\mathcal{C}^\\kappa$ (i.e. the personalize step), and then evaluated on each of the user's target videos in $\\mathcal{T}^\\kappa$ (i.e. the recognize step). In the following section, we discuss this evaluation protocol.
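To make the personalize and recognize steps concrete, here is a minimal sketch of meta-testing for a single user, using a prototype-style personalization over frame features. The feature extractor `encode` and the data structures are simplified assumptions, not the benchmark code.

```python
import torch

@torch.no_grad()
def personalize(encode, context_videos, context_labels):
    """Build one prototype per object from the user's context videos.
    `encode` maps a video tensor to per-frame features of shape [num_frames, dim]."""
    feats, labels = [], []
    for video, label in zip(context_videos, context_labels):
        feats.append(encode(video).mean(dim=0))   # average frame features per video
        labels.append(label)
    feats, labels = torch.stack(feats), torch.tensor(labels)
    # Prototypes are ordered by sorted unique label id.
    return torch.stack([feats[labels == c].mean(dim=0) for c in labels.unique()])

@torch.no_grad()
def recognize(encode, prototypes, target_video):
    """Per-frame predictions y*_f: nearest personalized prototype for every frame."""
    frame_feats = encode(target_video)            # [num_frames, dim]
    dists = torch.cdist(frame_feats, prototypes)
    return dists.argmin(dim=-1)                   # one prediction per frame
```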
Figure 3: Teachable object recognizers cast as a few-shot learning problem. P is the personalization method, for example, several gradient steps using an optimization-based approach, or parameter generation using a model-based approach (see Section 5.1). 4.2. Evaluation protocol ORBIT's evaluation protocol is designed to reflect how well a TOR will work in the hands of a real-world user, both in terms of performance and computational cost to personalize. To achieve this, we test (and train) in a user-centric way where tasks are sampled per-user (that is, only from a given user's objects and their associated context/target videos). This contrasts existing few-shot (and other) benchmarks, and offers powerful insights into how well a meta-trained TOR can personalize to a single user. 4.2.1 Train/validation/test users The user-centric formulation in Section 4.1.1 calls for a disjoint set of train users $\\mathcal{K}^\\text{train}$ and test users $\\mathcal{K}^\\text{test}$. We therefore separate the 67 ORBIT collectors into 44 train users and 17 test users, with the remaining 6 marked as validation users $\\mathcal{K}^\\text{val}$. To ensure the test case is sufficiently challenging, we enforce that test (and validation) users have a minimum of 5 objects (see further details in Appendix B.3). The total numbers of objects in the splits are 278/50/158, respectively. We report statistics for each set of train/validation/test users in Appendix C, mirroring those over all users in Section 3. 4.2.2 Evaluation modes We establish 2 evaluation modes: Clean video evaluation (CLE-VE). We construct a test user's context set $\\mathcal{C}^\\kappa$ from their clean videos, and target set $\\mathcal{T}^\\kappa$ from a held-out set of their clean videos. This mode serves as a simple check that the user's clean videos can be used to recognize the user's objects in novel 'simple' scenarios when the object is in isolation. Clutter video evaluation (CLU-VE). We construct a test user's context set $\\mathcal{C}^\\kappa$ from their clean videos, and target set $\\mathcal{T}^\\kappa$ from their clutter videos. This mode matches the real-world usage of a TOR where a user captures clean videos to register objects, and needs to identify those objects in complex, cluttered environments.
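As an illustration of this user-centric protocol, the sketch below assembles one test task per user for each mode; the per-user data structure and field names are assumptions, not the benchmark's actual loader.

```python
# Per-user task construction for the two evaluation modes (illustrative sketch).
# `user["objects"]` is assumed to map each object to {"clean": [...], "clutter": [...]}.
def make_test_task(user, mode="CLU-VE", clean_holdout=1):
    """Context is always built from clean videos; the target set depends on the mode."""
    context, target = [], []
    for obj, videos in user["objects"].items():
        clean, clutter = videos["clean"], videos["clutter"]
        if mode == "CLE-VE":
            # Hold out some clean videos per object as targets.
            context += [(v, obj) for v in clean[:-clean_holdout]]
            target += [(v, obj) for v in clean[-clean_holdout:]]
        else:  # CLU-VE: register on clean videos, evaluate on clutter videos.
            context += [(v, obj) for v in clean]
            target += [(v, obj) for v in clutter]
    return context, target
```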
We consider CLU-VE to be ORBIT's primary evaluation mode since it most closely matches how a TOR will be used in the real-world. 4.2.3 Evaluation metrics For a test user $\\kappa \\in \\mathcal{K}^\\text{test}$, we evaluate their personalized recognizer $f_{\\theta^\\kappa}$ on each of their target videos. We denote a target video of object $p \\in \\mathcal{P}^\\kappa$ as $\\bm{v} = [v_1, \\dots, v_F]$, and its frame predictions as $\\bm{y}^* = [y^*_1, \\dots, y^*_F]$, where $F$ is the number of frames and $y^*_f \\in \\mathcal{P}^\\kappa$. We further denote $y^*_\\text{mode}$ as the video's most frequent frame prediction. For a given target video, we compute its: Frame accuracy: the number of correct frame predictions, by the total number of frames in the video. Frames-to-recognition (FTR): the number of frames (w.r.t. the first frame $v_1$) before a correct prediction is made, by the total number of frames in the video. Video accuracy: 1, if the video-level prediction equals the video-level object label, $y^*_\\text{mode} = p$, otherwise 0. We compute these metrics for each target video in all tasks for all users in $\\mathcal{K}^\\text{test}$. We report the average and 95% confidence interval of each metric over this flattened set of videos, denoted $\\mathcal{T}^\\text{all}$ (see equations in Table 3). Table 3: ORBIT evaluation metrics (higher is better for frame and video accuracy, lower is better for FTR); $\\mathcal{T}^\\text{all}$ is the set of all target videos pooled across all tasks for all test users in $\\mathcal{K}^\\text{test}$. Frame accuracy $= \\frac{1}{|\\mathcal{T}^\\text{all}|} \\sum_{(\\bm{v},p) \\in \\mathcal{T}^\\text{all}} \\frac{1}{|\\bm{v}|} \\sum_{f=1}^{|\\bm{v}|} \\mathbb{1}[y^*_f = p]$. FTR $= \\frac{1}{|\\mathcal{T}^\\text{all}|} \\sum_{(\\bm{v},p) \\in \\mathcal{T}^\\text{all}} \\frac{\\min\\{f : y^*_f = p\\}}{|\\bm{v}|}$. Video accuracy $= \\frac{1}{|\\mathcal{T}^\\text{all}|} \\sum_{(\\bm{v},p) \\in \\mathcal{T}^\\text{all}} \\mathbb{1}[y^*_\\text{mode} = p]$, where $y^*_\\text{mode} = \\argmax_{p \\in \\mathcal{P}^\\kappa} \\sum_{f=1}^{|\\bm{v}|} \\mathbb{1}[y^*_f = p]$. We also compute a further 2 computational cost metrics: MACS to personalize: number of Multiply-Accumulate operations (MACS) to compute a test user's personalized parameters $\\theta^\\kappa$ using their context videos $\\mathcal{C}^\\kappa$, reported as the average over all tasks pooled across test users. Number of parameters: total parameters in the recognizer. We flag frame accuracy as ORBIT's primary metric because it most closely matches how a TOR will ultimately be used. The remaining metrics are complementary: FTR captures how long a user would have to point their recognizer at a scene before it identified the target object (with fewer frames being better), while video accuracy summarizes the predictions over a whole video. MACS to personalize provides an indication of whether personalization could happen directly on a user's device or a cloud-based service is required, each impacting how quickly a recognizer could be personalized. The number of parameters indicates the storage and memory requirements of the model on a device, and if cloud-based, the bandwidth required to download the personalized model. It is also useful to normalize performance by model capacity.
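The three video-level metrics are simple to compute from per-frame predictions; below is a small sketch that reads the definitions above directly, not the official evaluation code.

```python
from collections import Counter

# Per-video metrics from Sec. 4.2.3: `preds` is the list of per-frame predictions
# y*_f for one target video and `label` is its video-level object label p.
def frame_accuracy(preds, label):
    return sum(p == label for p in preds) / len(preds)

def frames_to_recognition(preds, label):
    # Fraction of the video elapsed before the first correct prediction.
    for f, p in enumerate(preds):
        if p == label:
            return f / len(preds)
    return 1.0                      # the object was never recognized

def video_accuracy(preds, label):
    mode_pred = Counter(preds).most_common(1)[0][0]   # most frequent frame prediction
    return float(mode_pred == label)
```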
5. Experimental analyses and results 5.1. Baselines & training set-up Baselines. There are 3 main classes of few-shot learning approaches. In metric-based approaches, a per-class embedding is computed using the (labeled) examples in the context set, and a target example is classified based on its distance to each [40, 49]. In optimization-based approaches, the model takes many [51, 46, 5] or few [9, 52, 2] gradient steps on the context examples, and the updated model then classifies the target examples. Finally, in amortization-based approaches, the model uses the context examples to directly generate the parameters of the classifier, which is then used to classify a target example [38, 11]. We establish baselines on the ORBIT dataset across these 3 classes. Within the episodic approaches, we choose Prototypical Nets [40] for the metric family, MAML [9] for the optimization family, and CNAPs [38] for the amortization family. We also implement a non-episodic fine-tuning baseline following [46, 5], who show that it can rival more complex methods. This selection of models offers good coverage over those that are competitive on current few-shot learning image classification benchmarks. For all implementation details of these baselines see Appendix G.1. Video representation. In Section 4.1.1, tasks are constructed from the context and target videos of a given user's objects. We sample clips from each video and represent each clip as an average over its (learned) frame-level features. For memory reasons, we do not sample all clips from a video. Instead, during meta-training, we randomly sample $S^\\text{train}$ non-overlapping clips, each of $L$ contiguous frames, from both context and target videos. Each clip is averaged and treated as an 'element' in the context/target set, akin to an image in typical few-shot image classification. During meta-testing, however, following Section 4.2 and Eq. (1), we must evaluate a test user's personalized recognizer on every frame in all of their target videos. We therefore sample all overlapping clips in a target video, where a clip is an $L$-sized buffer of each frame plus its short history. Ideally, this should also be done for context videos; however, due to memory reasons, we sample $S^\\text{test}$ non-overlapping $L$-sized clips from each context video, similar to meta-training. In our baseline implementations, $S^\\text{train}=4$, $S^\\text{test}=8$, and $L=8$ (for further details see Appendices G.2 and G.3). How frames are sampled during training/testing, and how videos are represented, is flexible. The evaluation protocol's only strict requirement is that a model outputs a prediction for every frame from every target video for every test user. Number of tasks per test user. Because context videos are sub-sampled during meta-testing, a test user's task contains a random set, rather than all, context clips. To account for potential variation, therefore, we sample 5 tasks per test user, and pool all their target videos into $\\mathcal{T}^\\text{all}$ for evaluation. If memory was not a constraint, following Section 4.1.1, we would sample one task per test user which contained all context and all target clips.
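A minimal sketch of the clip-based video representation described above is given below; the feature extractor, tensor shapes, and helper names are assumptions, and the benchmark code handles batching and edge cases differently.

```python
import torch

# Videos are [num_frames, C, H, W] tensors; each L-frame clip is encoded and
# mean-pooled into a single element of the context/target set (Sec. 5.1).
def sample_nonoverlapping_clips(video, num_clips=4, clip_len=8):
    """Randomly pick up to `num_clips` non-overlapping L-frame clips."""
    num_chunks = max(1, video.shape[0] // clip_len)
    chosen = torch.randperm(num_chunks)[:num_clips]
    return [video[c * clip_len:(c + 1) * clip_len] for c in chosen]

def sliding_clips(video, clip_len=8):
    """All overlapping clips: an L-sized buffer ending at each frame (used at test time)."""
    padded = torch.cat([video[:1].repeat(clip_len - 1, 1, 1, 1), video], dim=0)
    return [padded[f:f + clip_len] for f in range(video.shape[0])]

def clip_features(encode, clips):
    # `encode` maps [L, C, H, W] frames to [L, D] features; average over the clip.
    return torch.stack([encode(clip).mean(dim=0) for clip in clips])
```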
5.2. Analyses Baseline comparison. Performance is largely consistent across the baseline models in both CLE-VE and CLU-VE modes (see Table 4). In CLE-VE, all methods are equivalent in frame accuracy, FTR and video accuracy, except for ProtoNets and CNAPs which trail slightly in frame accuracy. Comparing this to CLU-VE, we see overall performance drops of 10-15 percentage points. Here, models are overall equivalent on frame and video accuracy, however ProtoNets and FineTuner lead in FTR. Further, absolute CLU-VE scores are in the low 50s. The best possible bounds (computed using the bounding box annotations, see Figure A.6c) suggest that there is ample scope for improvement, and motivate the need for approaches that can handle distribution shifts from clean (context) to real-world, cluttered scenes (target), and are robust to high-variation data more generally. In computational cost, ProtoNets has the lowest cost to personalize, requiring only a single forward pass of a user's context videos, while FineTuner has the highest, requiring 50 gradient steps. This, along with the total number of parameters (which are similar across models), suggests that ProtoNets and CNAPs would be better suited to deployment on a mobile device.
Table 4: Baselines on the ORBIT dataset (columns per mode: frame acc / FTR / video acc / MACS to personalize; then method to personalize and # params). Results are reported as the average (95% confidence interval) over all target videos pooled from 85 test tasks (5 tasks per test user, 17 test users). Best possible scores are computed using bounding box annotations which are available for the clutter videos (see Appendix C and Figure A.6).
Best possible (CLU-VE): 95.31 (1.37) / 0.00 (0.00) / 100.00 (0.00).
ProtoNets [40]: CLE-VE 65.16 (1.96) / 7.55 (1.35) / 81.88 (2.51) / $2.82 \\times 10^{12}$; CLU-VE 50.34 (1.74) / 14.93 (1.52) / 59.93 (2.48) / $3.53 \\times 10^{12}$; 1 forward pass; 11.17M.
CNAPs [38]: CLE-VE 66.15 (2.08) / 8.40 (1.40) / 79.56 (2.63) / $3.09 \\times 10^{12}$; CLU-VE 51.47 (1.81) / 17.87 (1.69) / 59.53 (2.48) / $3.87 \\times 10^{12}$; 1 forward pass; 12.75M.
MAML [9]: CLE-VE 70.58 (2.10) / 8.62 (1.56) / 80.88 (2.56) / $84.63 \\times 10^{12}$; CLU-VE 51.67 (1.88) / 20.95 (1.84) / 57.87 (2.50) / $105.99 \\times 10^{12}$; 15 gradient steps; 11.17M.
FineTuner [46]: CLE-VE 69.47 (2.16) / 7.82 (1.54) / 79.67 (2.62) / $282.09 \\times 10^{12}$; CLU-VE 53.73 (1.80) / 14.44 (1.50) / 63.07 (2.44) / $353.30 \\times 10^{12}$; 50 gradient steps; 11.17M.
Table 5: CLE-VE performance when meta-training on Meta-Dataset and meta-testing on ORBIT (for CLU-VE see Table A.3); columns: frame acc / FTR / video acc. Even on clean videos, models perform poorly compared to when meta-training on ORBIT (Table 4), suggesting that existing few-shot datasets may be insufficient for real-world adaptation.
ProtoNets [40]: 58.98 (2.23) / 11.55 (1.79) / 69.17 (3.01).
CNAPs [38]: 51.86 (2.49) / 20.81 (2.33) / 60.77 (3.18).
MAML [9]: 42.55 (2.67) / 37.28 (2.99) / 46.96 (3.25).
FineTuner [46]: 61.01 (2.24) / 11.53 (1.82) / 72.60 (2.91).
Meta-training on other few-shot learning datasets. A meta-trained model should, in principle, have the ability to learn any new object (from any dataset) with only a few examples. We investigate this by meta-training the baseline models on Meta-Dataset [48] using its standard task sampling protocol and then testing them on the ORBIT dataset (i.e. personalizing to test users with no ORBIT training). We adapt the meta-trained models to videos by taking the average over frame features in clips sampled from context and target videos (see Section 5.1). In Table 5, we see that even on the easier, clean videos (CLE-VE), performance is notably lower than the corresponding baselines in Table 4 (for CLU-VE see Table A.3). MAML and CNAPs perform particularly poorly, while ProtoNets and FineTuner fare slightly better; however, they are still 6-8 percentage points below their counterparts above in frame accuracy.
This suggests that even though much progress has been made on existing few-shot benchmarks, they are not representative of real-world conditions and models trained on them may struggle to learn new objects when only high-variation examples are available. Per-user performance. In addition to averaging over \\ mathcal {T}^\\text {all}, the benchmark\u2019s user-centric paradigm allows us to average per-user (i.e. over just their target videos). This is useful because it provides a measure of how well a meta-trained TOR would personalize to an individual real-world user. In Figure 4 however, we show that ProtoNets\u2019 personalization is not consistent across users, for some going as low as 25% in frame accuracy (for other metrics/models see Figure A.10). A TOR should be able adapt to any real-world user, thus future work should not only aim to boost performance on the metrics but also reduce variance across test users. P642 P753 P999 P901 P421 P953 P609 P452 P198 P485 P204 P455 P900 P177 P271 P233 P554 0 0.1 0.2 0.3 0.4 0.5 0.6 0.7 0.8 0.9 Collector Frame accuracy Figure 4: CLU-VE frame accuracy varies widely across test users (error bars are 95% confidence intervals) with ProtoNets [40]. For other metrics and models see Figure A.10. Train task composition. Finally, we investigate the impact of the number of context videos per object (Figure 5), and the number of objects per user (Figure 6) sampled in train tasks on CLU-VE frame accuracy. In the first case, we expect that with more context videos per object, the more diversity the model will see during meta-training, and hence generalize better at meta-testing to novel (target) videos. To test this hypothesis, we fix a quota of 96 frames per object in each train task and sample these frames from increasing numbers of context videos. Frame accuracy increases with more context videos, but overall plateaus between 4-6 context videos per object. Looking at the number of objects sampled per user next, we cap all train user\u2019s objects at \\if mm od e \\lbrace \\else \\textbraceleft \\fi 2, 4, 6, 8\\}, respectively, when meta-training. We then meta-test in two ways: 1) we keep the caps in place on the test users, and 2) we remove the caps. For 1), we see reducing accuracy for increasing numbers of objects, as is expected \u2013 classifying between 8 objects is harder than classifying between 2. For 2), we see a significant drop in accuracy relative to 1) suggesting that meta-training with fewer objects than would be encountered at meta-testing is detrimental. This is an important real-world consideration since it is likely that over months/years, a user will accumulate many more objects than is currently present per user in the ORBIT dataset. Overall, however, training with a cap of 6 or more objects yields \f1 2 3 4 5 6 36 38 40 42 44 46 48 50 52 54 CNAPs ProtoNets MAML FineT uner Number of context videos Frame accuracy Figure 5: Meta-training with more context videos per object leads to better CLU-VE performance. Frames are sampled from an increasing number of clean videos per object using the number of clips per video (S^\\text {train}) to keep the total number of context frames fixed per train task. 
roughly equivalent performance to that reported in Table 4 where no caps are imposed during training. Since ORBIT test users have up to 12 objects (see Figure A.3c), our results suggest that a minimum of half the number of ultimate objects for a test user may be sufficient for meta-training. We repeat these analyses for the other metrics in Figures A.8 and A.9, and include the corresponding tables in Tables A.5 and A.6. We also investigate the impact of the number of tasks sampled per train user, included in Appendix H. Figure 6 (x-axis: number of objects per user; y-axis: frame accuracy; curves: CNAPs, ProtoNets, MAML, FineTuner; solid: cap @test, dashed: no cap @test): Meta-training and -testing with more objects per user poses a harder recognition problem (solid line), however, meta-training with fewer objects than encountered at meta-testing (dashed line) shows only a small CLU-VE performance drop compared to Table 4, suggesting that models may be able to adapt to more objects in the real-world. 6. Discussion We present the ORBIT dataset and benchmark, both grounded in the few-shot application of TORs for people who are blind/low-vision. Our baseline performance and further analyses demonstrate, however, that current few-shot approaches struggle on realistic, high-variation data. This gap offers opportunities for new and exciting research, from making models robust to high-variation video data to quantifying the uncertainty in model predictions. More than just pushing the state-of-the-art in existing lines of thought, the ORBIT dataset opens up new types of challenges that derive from systems that will support human-AI partnership. We close by discussing three of these unique characteristics. ORBIT's user-centric formulation provides an opportunity to measure how well the ultimate system will work in the hands of real-world users. This contrasts most few-shot (and other) benchmarks which retain no notion of the end-user. Our results show that the baselines do not perform consistently across users. In the real-world, the heterogeneity of users, their objects, videoing techniques and devices will make this even more challenging. It will therefore be important for models to quantify, explain and ultimately minimize variation across users, particularly as models are deployed in a wider variety of scenarios outside the high-income countries in which the dataset was collected. Directly involving users in collecting a dataset intended to drive ML research comes with challenges: user-based datasets are harder to scale than web-scraped datasets [8, 26, 48] and users need an understanding of the potential system in order to contribute useful data. Building the system first would address these challenges, but it cannot be done without algorithmic innovation (which itself requires the dataset). The ORBIT dataset is a starting point and can be used to build the first generation of TORs, which can be deployed and themselves be used to collect more real-world data to drive a cycle of innovation between dataset and application. Finally, grounding in a real-world application encourages innovation in new directions to meet the real-world conditions of deployment. This could range from new models that are lightweight enough to be personalized directly on a user's phone to new research problems like handling the scenario when none of a user's objects are in the frame.
In conclusion, the ORBIT dataset and benchmark aims to shape the next generation of recognition tools for the blind/low-vision community starting with TORs, and to improve the robustness of vision systems across a broad range of other applications. Acknowledgments The ORBIT Dataset is funded by Microsoft AI for Accessibility. LZ is supported by the 2017 MSR PhD Scholarship Program and 2020 MSR EMEA PhD Award. JB is supported by the EPSRC Prosperity Partnership EP/T005386/1. We thank VICTA, RNC, RNIB, CNIB, Humanware, Tekvision School for the Blind, BlindSA, NFB, and AbilityNet. Finally, we thank Emily Madsen for help with the video validation, and all the ORBIT collectors for their time and contributions." + }, + { + "url": "http://arxiv.org/abs/2004.09272v2", + "title": "A Revised Generative Evaluation of Visual Dialogue", + "abstract": "Evaluating Visual Dialogue, the task of answering a sequence of questions\nrelating to a visual input, remains an open research challenge. The current\nevaluation scheme of the VisDial dataset computes the ranks of ground-truth\nanswers in predefined candidate sets, which Massiceti et al. (2018) show can be\nsusceptible to the exploitation of dataset biases. This scheme also does little\nto account for the different ways of expressing the same answer--an aspect of\nlanguage that has been well studied in NLP. We propose a revised evaluation\nscheme for the VisDial dataset leveraging metrics from the NLP literature to\nmeasure consensus between answers generated by the model and a set of relevant\nanswers. We construct these relevant answer sets using a simple and effective\nsemi-supervised method based on correlation, which allows us to automatically\nextend and scale sparse relevance annotations from humans to the entire\ndataset. We release these sets and code for the revised evaluation scheme as\nDenseVisDial, and intend them to be an improvement to the dataset in the face\nof its existing constraints and design choices.", + "authors": "Daniela Massiceti, Viveka Kulharia, Puneet K. Dokania, N. Siddharth, Philip H. S. Torr", + "published": "2020-04-20", + "updated": "2020-04-24", + "primary_cat": "cs.CV", + "cats": [ + "cs.CV", + "cs.CL" + ], + "main_content": "Introduction The growing interest in visual conversational agents (Antol et al., 2015; Das et al., 2017; De Vries et al., 2017; Johnson et al., 2017) has motivated the need for automated evaluation metrics for the responses generated by these agents. Robust evaluation schemes, however, are an open research challenge (Mellish and Dale, 1998; Reiter and Belz, 2009). This is the case for VisDial (Das et al., 2017), a dataset targeting the visual dialogue (VD) task\u2014answering a sequence of questions about an image given a history of previous questions and answers. At test time, a 1https://github.com/danielamassiceti/ geneval_visdial Question Answer How old is the baby? About 2 years old What color is the remote? White Where is the train? On the road How many cows are there? Three Figure 1: Failures in visual dialogue (from Massiceti et al. (2018))\u2014answers are unrelated to the image. Biases in the VisDial dataset, compounded by a rankbased evaluation, can mislead progress on the VD task. 
trained model is used to score \ufb01xed sets of candidate answers for each test question, and a suite of rank-based metrics are computed on the ranked sets: single-candidate metrics which are a function of the ground-truth (GT) answer\u2019s position, and a multi-candidate metric which weighs the ranked set with relevance scores assigned from human annotation. A limit of this scheme is that a simple model (Massiceti et al., 2018) based on canonical correlation analysis (CCA), which learns to maximise correlation between questions and answers while completely ignoring the image and dialogue history, is comparable in mean rank (MR)\u2014one of the dataset\u2019s primary rank-based metrics\u2014to stateof-the-art (SOTA) models, all of which employ complex neural network architectures of millions of parameters, requiring many hours of GPU(s) training. This suggests that exploitable biases exist within the VisDial dataset, whose effects are compounded by a rank-based evaluation ill-suited to the VD task. Motivated by this, we propose a revised, more robust, evaluation scheme for the VD task, informed by the key shortfalls of the current evaluation, arXiv:2004.09272v2 [cs.CV] 24 Apr 2020 \fnamely that 1) the candidate sets contain multiple, equally feasible answers, rendering both singleand multi-candidate ranking metrics less meaningful, and 2) the evaluation is an indirect ranking task, rather than a direct assessment of the answers generated by a model\u2014the goal of a VD agent. Our revised evaluation instead adopts standard metrics from NLP to measure the similarity between an answer generated by a model and a reference set of answers for the image and question. This aligns better with the generative nature of a true VD agent, and with established evaluation set-ups for other language generations tasks, including VQA (Antol et al., 2015) and image captioning (Chen et al., 2015; Young et al., 2014). Unlike the current evaluation, it also accounts for diversity in answer generations, which we compare across models. For VisDial, however, the answer relevance scores used to construct the reference sets are only available for a small subset of the dataset. Drawing on the pseudo-labelling literature for semisupervised learning (Lee, 2013; Wu and Yap, 2006; Iscen et al., 2019), we develop a semi-supervised approach which leverages these human annotations, and automatically learns to extract the relevant answers from a given candidate set. We apply CCA between pre-trained question and answer features, channelling Massiceti et al. (2018), and use a clustering heuristic on the resulting correlations to extract the candidate answers most correlated with the ground-truth answer\u2014the reference sets, or pseudolabels. This was inspired by prior work showing the surprising strength of simple, non-deep baselines (Zhou et al., 2015; Massiceti et al., 2018; Strang et al., 2018; Anand et al., 2018). Using this approach, we automatically construct sets of relevant answers for the entire VisDial dataset, which we validate in multiple ways, including with human judgements via Amazon Mechanical Turk (AMT), and release as DenseVisDial\u2014a dense version of VisDial. Using this data and the revised scheme, we re-evaluate existing SOTA models and gain new insights into model differences, speci\ufb01cally generation diversity, otherwise unavailable with the existing evaluation. 
The scheme also improves on the existing multi-candidate ranking evaluation, only applicable for 1/10 of the dataset due to the cost and time of collecting relevance scores from humans. Finally, while we use these reference sets exclusively for better evaluation purposes, we also show that using them as (for-free) additional training data can improve performance, which is promising for future progress on the VD task. To summarise, our contributions are: 1. A revised evaluation scheme for the VisDial dataset based on metrics from NLP which measure the similarity between a generated answer and a reference set of feasible answers. 2. A semi-supervised method for automatically extracting these reference sets from given candidate sets at scale, veri\ufb01ed using human annotators. 3. An expanded DenseVisDial data with the automatically constructed reference sets, released for future evaluation and model development. 2 Preliminaries We \ufb01rst de\ufb01ne generative VD (for VisDial), and how Massiceti et al. (2018) employ CCA for this. 2.1 Visual Dialogue (VD) & VisDial dataset Given image I and dialogue history [(Q1, A1), (Q2, A2), . . . , Qi], the generative VD task involves generating answer Ai. The principal approach towards VD has been facilitated by the VisDial dataset (Das et al., 2017), a large corpus of images paired with question-answer dialogues, sequentially collected by pairs of annotators in an interactive game on AMT. VisDial v1.0 comprises 123, 287/2064/8000 train/val/test images, each paired with dialogues of up to 10 exchanges2. Each question is coupled with a candidate set of 100 answers A including a ground-truth answer Agt \u2208A. For a subset (2000/2064/8000), one question per image contains human-annotated relevance scores \u03c1(A) \u2208[0, 1] where A \u2208A. The generative VD task learns to generate answers conditioned on image-question pairs using only (I, Q, Agt) triplets (Das et al., 2017). At test time, given an image-question pair, each answer in its associated candidate answers A is scored under the model\u2019s learned likelihood. The rank of Agt is then used to judge the model\u2019s effectiveness at the VD task, averaged over the dataset to get the MR. Other metrics also computed include normalised discounted cumulative gain (NDCG) on the candidate answers\u2019 human-annotated relevance scores. In a second paradigm introduced by Das et al. (2017), the model instead uses the full (I, Q, A) at train time, and simply frames the predictive task as a classi\ufb01cation problem of selecting Agt out of A. At test time, the candidates are then directly scored by the classi\ufb01er\u2019s softmax probabilities. We 210 exchanges for train/val, and \u226410 exchanges for test. \fargue that this discriminative setting is an oversimpli\ufb01cation of the VD task: answering questions is not simply selecting the correct answer from a set. The focus of the remainder of this paper is therefore fully on the generative VD task. 2.2 Canonical Correlation Analysis for VD CCA, applied between question and answers, achieves near-SOTA mean rank (MR) on the VisDial dataset (Massiceti et al., 2018). Inspired by this result and the extreme simplicity of CCA, we introduce this formulation with reference to VD. Given paired observations {x1 \u2208Rn1, x2 \u2208 Rn2}, CCA jointly learns projections W1 \u2208Rn1\u00d7k and W2 \u2208Rn2\u00d7k, k \u2264min(n1, n2), which are maximally correlated (Hotelling, 1936). 
Projections are obtained via a generalised eigenvalue decomposition, Av = λBv (Kettenring, 1971; Hardoon et al., 2004; Bach and Jordan, 2002), where A and B are the inter- and intra-view correlation matrices. Projection matrix W_i ∈ R^{n_i×k} embeds x_i from view i as φ(x_i; W_i) = D_λ^p W_i^⊤ x_i, where D_λ is a diagonal matrix of the top k (sorted) eigenvalues λ, and p is a scalar weight. With CCA, ranking and retrieval across views {x_i, x_j} is performed by computing the correlation between projections, corr(x_i, x_j) = ψ(x_i)^⊤ ψ(x_j) / (‖ψ(x_i)‖_2 ‖ψ(x_j)‖_2), where ψ is a mean-centred (over train set) version of φ. Using CCA, learnt embeddings between answers and questions (CCA-AQ) are used to compute the ranking and NDCG metrics. CCA can also be used to generate answers using correlations. For a given test question, its 100 nearest-neighbour questions (based on correlation under the A-Q model) are extracted from the train set. Their 100 corresponding ground-truth answers are used to construct a pseudo-candidate set. Answers are generated by the model, denoted CCA-AQ-G, by sampling from this set in proportion to correlation with the test question (see Figure 3 in Massiceti et al. (2018)). 3 Shortfalls of Current VisDial Evaluation The fact that a simple, lightweight CCA model performs favourably in MR with current SOTA models, while completely ignoring the image and dialogue history, and requiring an order of magnitude fewer learnable parameters and mere seconds on CPU to train, is a cause for concern. Not only do prior results suggest that implicit correlations between just the questions and answers exist in the data (see Figure 1), but also that the current evaluation scheme generally is not flexible enough to account for variation in answers to visually-grounded questions. Here we summarise the existing evaluation scheme, discuss the hidden factors affecting it, and make the case that to better capture a model's performance on the VD task, there must be changes to the evaluation scheme. 3.1 Current evaluation scheme Given a test question, the current VisDial evaluation relies on ranking its candidate answers (Das et al., 2017), derived from scoring the answers under the trained (generative) model's likelihood (see § 2.1). A suite of rank-based metrics is then computed: mean rank (MR) and mean reciprocal rank (MRR) of the ground truth (GT) answer over data, and the average recall, measuring how often the GT answer falls within the top 1, 5, and 10 ranks, respectively. These single-candidate (i.e. GT) ranking metrics have been the norm since VisDial's inception. A subsequent extension of the dataset (v1.0) tasked 4-5 human annotators with labelling whether each answer in a candidate set is valid for a given image-question (a hard 0/1 choice) for a subset of the train and validation sets, denoted Ht and Hv, respectively. For each candidate answer A, the mean judgement across annotators becomes a relevance score ρ(A) ∈ [0, 1]. A modified multi-candidate ranking metric, the NDCG, is then introduced: candidate answers' ranks are weighted by their relevance scores, excluding irrelevant (ρ(A) = 0) answers. See Appendix B for further details. 3.2 Analysing current shortfalls The above evaluation metrics, by construction, are not flexible enough to account for the many ways a question can satisfactorily be answered.
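To make these rank-based metrics concrete, here is a minimal Python sketch of the single-candidate metrics and of one common NDCG formulation, computed from per-question candidate scores, the ground-truth index, and dense relevance scores. The official VisDial evaluation code may differ in details such as tie-breaking, so treat the function names and exact formulation as illustrative.

import numpy as np

def rank_of(scores, gt_index):
    # 1-indexed rank of the ground-truth answer when candidates are sorted
    # by decreasing model score
    order = np.argsort(-np.asarray(scores))
    return int(np.where(order == gt_index)[0][0]) + 1

def single_candidate_metrics(all_scores, all_gt):
    # all_scores: list of per-question candidate score lists; all_gt: GT indices
    ranks = np.array([rank_of(s, g) for s, g in zip(all_scores, all_gt)])
    return {"MR": ranks.mean(),
            "MRR": (1.0 / ranks).mean(),
            **{f"R@{k}": (ranks <= k).mean() for k in (1, 5, 10)}}

def ndcg(scores, relevance):
    # one common formulation: DCG of the predicted ordering over the top-K
    # positions (K = number of candidates with non-zero relevance),
    # normalised by the ideal DCG
    scores, relevance = np.asarray(scores), np.asarray(relevance)
    k = int((relevance > 0).sum())
    discounts = 1.0 / np.log2(np.arange(2, k + 2))
    dcg = (relevance[np.argsort(-scores)][:k] * discounts).sum()
    ideal = (np.sort(relevance)[::-1][:k] * discounts).sum()
    return dcg / ideal if ideal > 0 else 0.0

Under this scheme a model is summarised entirely by the scores it assigns to the fixed candidate set, which is precisely the property questioned in the remainder of this section.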
This limitation manifests in both the singleand multicandidate ranking metrics, and hampers the measurement of a model\u2019s true ability to answer a visual question. The limitation stems from: 1. ranking candidate sets that are ill-constructed for the ranking task, and 2. disregarding answers generated by a model in favour of indirectly ranking these \ufb01xed sets. 3.2.1 Ranking ill-constructed candidate sets Candidate answer sets in VisDial are typically observed to contain multiple feasible answers\u2014as they include up to 50 nearest-neighbour answers (Das et al., 2017) to Agt in GloVe (Pennington et al., 2014) space. Rank-based metrics, which assume a meaningful ordering of answers, are less informative when considering feasible-answer subsets. \fWe explicitly verify this characteristic of candidate answers using correlation, through the following experiment which learns a CCA model between the question and answer features. Computing the correlation between Agt and A \u2208A \\ Agt, giving C = (\u03c6(Agt, A1), . . . , \u03c6(Agt, A100)), we then select the cluster of answers with correlations in [Cmax \u2212\u03c3, Cmax], where Cmax = max(C), \u03c3 = stdev(C), roughly estimating answers which are plausibly similar to Agt. Given this cluster, we compute the mean and standard deviation of the correlations, as well as the cluster size, to measure how small and tightly packed these clusters are. We average these across all candidate sets, giving an average mean correlation of 0.58, an average standard deviation of 0.22, and an average cluster size of 12.30. These results support the idea that an equivalence class of feasible answers exist within each candidate set, which can then adversely affect both classes of metrics described below. Single-candidate ranking metrics assign a single answer, the labelled GT, as the only correct answer in the candidate set, and are purely a function of this privileged answer\u2019s rank. As a result, these metrics unduly penalise models that rank alternate, but equally feasible, answers highly. MR, MRR, and R@1,5,10 are thus only weakly indicative of performance on the VD task, and are unable to differentiate between equally good models. The ill-constructed candidate sets also render single-candidate metrics unable to rule out poor models. In other words, models with poor MR, MRR and R@1,5,10 aren\u2019t necessarily poor at VD. This is markedly the case for MRR and R@1,5,10 which are, by de\ufb01nition3, biased toward low ranks\u2014 a model predicting \ufb01ve GT answers at rank 1, and \ufb01ve at rank 10, scores better MRR/R@1,5,10 than a model with all ten GT answers at rank 2 (coincidentally, these results are meaningless if the candidate set contained 10 equally feasible answers). This bias particularly affects models trained with a single-answer objective (i.e. all SOTA) models. To see why, we show the distribution of GT answer ranks between CCA-AQ and a SOTA model in Figure 2. The SOTA model is skewed toward the GT answer achieving rank 1\u2014the combined result of a single-answer objective and high parametrisation. This leads SOTA models to view other feasible answers in the set as no different if ranked 2 or 100. 3While obvious for recall, MRR as the inverse harmonic mean, weighs smaller ranks more strongly than larger ranks. (a) HREA-QIH-G (b) CCA-AQ Figure 2: Distribution of GT answer ranks across VisDial v1.0 val set. 
Highly-parametrised SOTA models (a) pushes the GT rank towards 1, ignoring other potentially feasible answers, in contrast to CCA-AQ (b). CCA-AQ by contrast ignores rank and simply learns by maximising A-Q correlation, likely leading it to rank other feasible answers highly. Thus, models favouring low ranks by virtue of their learning objective may achieve better MRR/R@1,5,10, but not be discernibly better than models accounting for multiple answers being correct. These \ufb01ndings, together, suggest that the singlecandidate metrics cannot reliably quantify performance and compare models in lieu of the VD task. Multi-candidate ranking metrics, or NDCG, undoubtedly take a step in the right direction by forgoing just a single correct answer, and weighting the predicted ranking with human-annotated relevance scores for multiple answers. NDCG, however, is still a function of a ranking, and hence assumes that a single optimal ordering of candidate answers exists. The presence of multiple equally feasible answers in the candidate sets thus breaks this assumption and can skew the NDCG, albeit to a lesser degree than MR, MRR, and R@1,5,10. Moreover, the degree of answer similarity within these subsets raises further concerns for the reliable computation of NDCG. Requiring annotation of 100 valid (i.e. similar) answers is an arduous task, and converting hard 0/1 judgements into relevance scores over just a handful (4-5) of annotators can be noisy. Our analysis reveals the following quirks: \u2022 18.15% \u000e 47.14% of the validation/train annotated subsets (Hv \u000e Ht), do not have a single candidate answer with relevance score 1.0, not even the ground-truth, indicating poor consensus. \u2022 20.69% \u000e 9.01% of samples, respectively, consider the ground-truth irrelevant (\u03c1(Agt) = 0). Coupled with this, the scale of VisDial makes obtaining annotations a daunting (and expensive) task\u2014re\ufb02ected in the fact that only a small fraction of the data, one question per image, has annotations (see \u00a7 2), which implies evaluations effectively ignore dialogue history. Also, without more annotators (and hence cost/time), obtaining relevance scores at-scale may well be meaningless. \f3.2.2 Ignoring generated answers The ultimate goal of VD is to produce an answer to a given question, not to pick an answer from a set\u2014our primary motivation for focussing on the generative VD task. The current evaluation, rather than directly evaluating the answers generated by a model, evaluates by how well a model ranks a \ufb01xed set of candidate answers. Not only is this problematic because of the candidate sets\u2019 limitations (as described above), but also because it: 1) disregards diversity in answer generations, a necessary feature for a human-like answering agent, and 2) goes against established practice in the VQA literature (Antol et al., 2015) which evaluates by comparing the predicted answer to answers collected from 10 human annotators. While it is expected that scoring a valid answer by its likelihood is a reasonable measure of a model\u2019s ability to generate a good answer, this may not necessarily be the case when there are multiple potential answers, some not even in the candidate set. Although likelihoods can serve as a relative measure between candidates, the highest-probability answers may be entirely different or unrelated\u2014indicating a poorly learnt model. 
This supports the idea that a metric which ignores generated answers may fail to account for models no less \u201cgood\u201d at the VD task. 4 A Revised Evaluation for VisDial The analysis in \u00a7 3 indicates that an evaluation well matched to the underlying goals of VD should: i) directly use answers generated by the model, ii) account for multiple valid answers, and iii) do the above at scale over the entire dataset. We thus develop a revised evaluation scheme for VD which meets these three criteria. Its basis lies in measuring how similar an answer generated by a given model is to a set of feasible reference answers for a given question and image. We describe similarity quanti\ufb01cation in \u00a7 4.1 and the construction of high-quality reference sets in \u00a7 4.2. 4.1 Measuring similarity We measure similarity using established NLP consensus metrics between a predicted answer and a reference set of valid answers. Crucially, the predicted answer is generated by the model directly, and the reference set contains more than one element, accounting for the presence of multiple valid answers. We use two classes of metric for capturing consensus: overlap and embedding distance. Overlap-based metrics compute the overlap or co-occurrence of n-grams (word couplets of size n) between pairs of sentences\u2014here, the generated answer and each answer in the reference set. We use two such metrics: CIDER (Vedantam et al., 2015) and METEOR (Denkowski and Lavie, 2014), motivated by their extensive use in image captioning benchmarks (Chen et al., 2015; Hodosh et al., 2013; Young et al., 2014). Both are known to be well correlated with human judgements. CIDER computes the cosine similarity between a pair of vectors, each of which is composed of the termfrequency inverse-document-frequencies (tf-idf) of the sentence\u2019s n-grams. For 0 < n \u2264i, similarities are averaged over all n-grams up to length i. METEOR is similar, but \ufb01rst applies a uni-gram matching function, before computing a weighted harmonic mean between uni-gram precision and recall, with a fragmentation penalty on the matching. Embedding distance-based metrics arise from a rich literature in capturing semantic similarity between natural language expressions (Bojanowski et al., 2017; Pennington et al., 2014; Devlin et al., 2019; Peters et al., 2018; Sharma et al., 2017). Motivated by the recent successes of BERT (Devlin et al., 2019) and FastText (Bojanowski et al., 2017) in a variety of NLP tasks, we use each method to embed the generated answer and each reference set answer, computing the L2 and cosine similarity (CS) between them, averaging over the reference set. The embedding metrics aim to complement the overlap-based metrics and guard against limitations of the latter that might arise due to answer lengths (oneor two-word) frequently seen in the VD data. 4.2 Obtaining answer reference sets We now describe how to obtain reference sets for the similarity measures de\ufb01ned above. 4.2.1 Using humans For a small subset of the VisDial validation set, Hv, soft relevance scores are available (from human annotators) for each of the 100 candidate answers associated with each image-question (see \u00a7 2.1). Using these scores, we construct answer reference sets for each image-question, composed of all the candidate set answers deemed valid by at least one annotator, i.e. \u03c1(A) > 0, where to our surprise, we found multiple instances where \u03c1(Agt) = 0. 
Protecting against such cases, we de\ufb01ne the human-annotated reference set H = {A : \u03c1(A) > 0, \u2200A \u2208A} \u222a{Agt}. \f4.2.2 Using semi-supervision at scale Human-annotated relevance scores, and hence reference sets, however, are available for only a fraction of the dataset\u2014less than 1% of questions! The scale of VisDial\u2014on the order of 106 questions, each with 100 candidate answers\u2014makes extending these annotations to the entire dataset extremely challenging. Assuming $0.05 per question, each presented to 10 workers, would incur a cost of over $500, 000 and substantial annotation time! We therefore propose a semi-supervised approach which harnesses the annotations we do have: given a candidate set of answers for an imagequestion, we learn to extract the valid answers, and hence automatically construct a reference set. Not only does this enable us to obtain reference sets at scale, but it also circumvents the time, cost and idiosyncrasies associated with human annotation. Our approach is based on CCA, and uses the relevance-annotated subset of the full train set, Ht, as training data. Similar to \u00a7 4.2.1, we construct training reference sets Ht using Ht. Pairing each question with all answers in Ht, we learn a CCA model between the questions and answers. With this model, denoted CCA-AQ*, we compute correlations between Agt and A \\ Agt, giving C = (\u03c6(Agt, A1), . . . , \u03c6(Agt, A100)) similar to \u00a7 3. We then cluster these correlations in C to construct a reference set \u03a3 = {A : \u03c6(Agt, A) \u2208 [Cmax \u2212\u03c3, Cmax]} \u222a{Agt} where Cmax = max(C), and \u03c3 = stdev(C). Intuitively, this extracts the cluster of answers with highest correlation or similarity to the ground-truth answer. With this semisupervised approach, we easily and quickly obtain reference sets at scale for the entire VisDial dataset. Verifying automatic reference sets The validity of the revised evaluation is contingent on the validity of the automatic reference sets\u2014that they are composed of valid answers. We verify this by: (1) computing intersection metrics between the human-annotated and automatic reference sets, (2) using AMT to verify the sets, and (3) measuring how training a VD model on these sets can improve performance on VD. For (1), we compute the intersection-over-union (IOU), precision, recall, and set size of the automatic \u03a3 and human-annotated H reference sets on the validation subset Hv (Table 1). These metrics serve as a simple heuristic and we use them to compare clustering methods (see extended comparison in Appendix D). Our best method, \u03a3, extracts simC |H\u2229C| |H\u222aC| |H\u2229C| |C| |H\u2229C| |H| |C| H 100.00 (0.00) 100.00 (0.00) 100.00 (0.00) 12.77 (7.24) \u03a3 24.13 (16.73) 62.48 (31.24) 32.91 (23.52) 7.17 (6.94) Table 1: Evaluation of intersection metrics computed on human-annotated reference sets H and automatic reference sets \u03a3, on the validation subset Hv. Values in parentheses denote standard deviation across the set. C = H C = \u03a3 # tasks 1, 680 5, 040 # turkers per task 5 5 % C selected 81.48 (2.15) 70.66 (5.45) % C selected (\u22651 turker) 98.80 (0.37) 95.52 (2.22) Table 2: AMT validation of automatic reference sets \u03a3 against human-annotated sets H. For each task, given an image, question and answer set (from either \u03a3 or H), turkers are asked to deselect infeasible answers, with scores averaged over 5 turkers. 
We measure the proportion of each set selected, and the proportion of each set where \u22651 answer was selected. Variance in brackets. ilar sized clusters to H (7.17 vs 12.77) with good precision (62.48%; i.e. it selects answers maximally in H), supporting the similarity of \u03a3 and H. For (2), we turn to AMT. Given an image, question and answer reference set (from either \u03a3 or H) as a task, a turker is asked to de-select all infeasible answers (see Appendix D for AMT user interface). For each task, scores are averaged over 5 turkers. We then measure the proportion of the reference set selected, and the proportion of the set where at least 1 turker selected each answer. For a subset of tasks randomly sampled from Hv or the full validation set, in Table 2 we observe that our proposed semi-supervised reference sets are similar to the ones obtained using humans. Finally, in (3), we intuit that if reference sets \u03a3 contain answers similar to the correct answer, then a model trained on only these sets should improve performance on the VD task. We, therefore, pair each question in the training subset Ht with each of the answers in its corresponding \u03a3, and train a CCA-AQ model. As a baseline, we repeat this experiment, but pairing the questions with answers from H instead of \u03a3. We show in Table 3 (top 2 rows), the model trained using \u03a3 performs better than that employing the human-annotated reference sets H across the battery of ranking metrics, including NDCG. As a further check, we train a CCA-AQ model on Ht, but only between questions and their single ground-truth answers Agt (as opposed to all answers in H or \u03a3). As we address in \u00a7 3, this model surprisingly outperforms the baseline using H as reference on the single-candidate ranking \fTrain Ref MR R@1 R@5 R@10 MRR NDCG Set #QA pairs \u2193 \u2191 \u2191 \u2191 \u2191 \u2191 Ht 15,317 H 26.49 6.05 21.50 35.53 0.1550 0.3647 17,055 \u03a3 20.36 8.35 32.88 48.78 0.2066 0.3715 1996 {Agt} 23.71 13.13 34.05 46.90 0.2428 0.2734 all 10,419,489 \u03a3 17.20 10.73 34.20 51.80 0.2312 0.4023 1,232,870 {Agt} 17.07 16.18 40.18 55.35 0.2845 0.3493 Table 3: Evaluating the utility of automated reference sets \u03a3 on standard VD evaluation. CCA-AQ models were trained on the indicated subsets (Ht or all) of VisDial (v1.0), with answers from different sets (\u2018Ref\u2019), and tested on the evaluation test server to compute standard metrics. Arrows indicate which direction is better. metrics, however, as expected, NDCG paints a better picture, showing reduced performance. Finally we conduct (3) across the whole dataset, learning a CCA-AQ model using \u03a3, over the entire training data of VisDial (v1.0). The last two rows of Table 3 compare this model against the standard CCAAQ trained on questions and ground-truth answers. We observe a substantial improvement in NDCG, with what is effectively a simple data augmentation procedure using \u03a3. This three-part veri\ufb01cation supports the existence of valid answers in the automated reference set, which subsequently supports our revised evaluation scheme. 5 Experimental Analyses Here we include experimental analyses, focussing in particular on the performance of models under our revised evaluation schemes discussed in \u00a7 4. We represent words in the questions/answers as 300-dimensional FastText (Bojanowski et al., 2017) embeddings. 
To obtain sentence embeddings, we simply average word embeddings following generally received intuition (Arora et al., 2017; Wieting and Kiela, 2019), padding or truncating up to 16 words following Massiceti et al. (2018). We generate answers from CCA-AQ-G and the following SOTA models: HREA-QIH-G (Das et al., 2017), HCIAE-DIS-G (Lu et al., 2017) and RVA (Niu et al., 2019), with * indicates use of beam search. For each, we train on the full VisDial v1.0 train set, cross-validate on MRR, and select the best epoch\u2019s model for subsequent evaluation. 5.1 Revised evaluation results Testing on Hv Table 4 (left) shows the overlap and embedding distance scores of answers generated by models, measured against human-annotated reference sets H for the validation subset Hv. Note, we report on Hv because relevance scores are available for only part of VisDial\u2019s full validation set and are publicly unavailable for its test set. We de\ufb01ne a reference baseline for the overlap metrics, estimating upper bounds for the respective scores as \u0393H, which cycles through answers in H, measures against H itself and takes the maximum over the resulting scores. Testing on whole dataset The \ufb01nal step is to use the validated automatic reference sets (from \u00a7 4.2) to evaluate the models under the revised scheme for the complete VisDial (v1.0) dataset. Table 4 (right) shows the overlap and embedding distance scores of answers generated by models, measured against the automatic reference sets \u03a3 for the whole validation set. Again, we test on the validation set since ground-truth answers are not publicly available for the test set\u2014something we require to construct \u03a3. Note, the baseline \u0393\u03a3 here is different from before since the reference set is different: \u03a3 instead of H. Model comparison Comparing models which do not employ beam search (i.e. no asterix), HCIAEDIS-G performs the best across all metrics except FASTTEXT, which RVA wins (Table 4). This is consistent on Hv and the full validation set, despite Hv being 10-fold smaller\u2014a further con\ufb01rmation of \u03a3\u2019s utility. Note, results across all metrics are well below the reference baselines \u0393H \u000e \u0393\u03a3, indicating there is still room for improvement. Applying a beam-search on top of these models has the ability to further enrich the generations and improve performance on all metrics, as shown by HREA-QIH-G*. It is expected that applying a beam search to the best-performer HCIAE-DIS-G would yield similar improvements. Surprisingly, these results differ from the conclusions drawn from the rank-based evaluation (see Table 5 in supplement), where RVA supersedes all other SOTA models on all rank metrics. This suggests that just because a model can rank a single-ground truth answer highest, does not necessarily make it the best generative VD agent. Our suite of overlap metrics and embedding distance metrics may help to explain why. For example, CIDER n=1 is a proxy for how well a model performs on one-word answers, which are highly prevalent in the dataset (e.g. \u201cYes\u201d/\u201cNo\u201d). BERT, on the other hand, may help to measure generations with the closest semantic similarity to the reference sets. Indeed, this is the sort of \ufb02exibility of purpose that is required when evaluating complex multi-modal tasks like VD. 
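As a concrete reference for the embedding-distance consensus scores reported above, the following is a minimal sketch that averages pre-trained word vectors into a sentence embedding and then averages cosine similarity over the reference set. It assumes a dict-like mapping from tokens to FastText vectors; the whitespace tokenisation and the handling of padding and unknown words are simplifications of the actual setup.

import numpy as np

def sentence_embedding(tokens, word_vecs, dim=300, max_len=16):
    # average pre-trained word embeddings, truncating to max_len tokens;
    # unknown tokens are skipped (an assumption of this sketch)
    vecs = [word_vecs[t] for t in tokens[:max_len] if t in word_vecs]
    return np.mean(vecs, axis=0) if vecs else np.zeros(dim)

def embedding_consensus(generated, reference_set, word_vecs):
    # cosine similarity between the generated answer and each reference
    # answer, averaged over the reference set
    g = sentence_embedding(generated.split(), word_vecs)
    sims = []
    for ref in reference_set:
        r = sentence_embedding(ref.split(), word_vecs)
        denom = np.linalg.norm(g) * np.linalg.norm(r)
        sims.append(float(g @ r / denom) if denom > 0 else 0.0)
    return float(np.mean(sims))

A higher score indicates that the generated answer sits close, on average, to the human-validated answers for that question.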
Beyond just k = 1 generation, a particularly useful feature of our revised scheme is that, unlike the rank-based evaluation, it can evaluate across any k \fModel CIDER\u2191 METEOR\u2191 BERT FASTTEXT n=1 n=2 n=3 n=4 L2\u2193 CS\u2191 L2\u2193 CS\u2191 \u0393H 0.2765 0.2151 0.1810 0.1513 1.0000 4.7000 0.9334 1.8757 0.6992 CCA-AQ-G 0.0721 0.0434 0.0298 0.0226 0.2713 7.1231 0.8690 3.1251 0.4555 HREA-QIH-G 0.0880 0.0483 0.0333 0.0252 0.4813 6.2875 0.8927 2.9724 0.5079 HREA-QIH-G* 0.1359 0.0721 0.0494 0.0372 0.7149 5.5727 0.9149 3.2664 0.4971 HCIAE-G-DIS 0.1338 0.0718 0.0493 0.0372 0.6758 5.6690 0.9122 3.1551 0.5049 RVA 0.1042 0.0563 0.0385 0.0291 0.5328 6.1466 0.8967 2.9543 0.5161 CIDER\u2191 METEOR\u2191 BERT FASTTEXT n=1 n=2 n=3 n=4 L2\u2193 CS\u2191 L2\u2193 CS\u2191 \u0393\u03a3 0.4212 0.3429 0.2991 0.2583 1.0000 4.2891 0.9373 1.6518 0.7614 0.0789 0.0461 0.0313 0.0235 0.1864 7.1873 0.8673 3.0908 0.4782 0.1109 0.0597 0.0409 0.0308 0.3710 6.2743 0.8924 2.8815 0.5334 0.1580 0.0835 0.0568 0.0428 0.5269 5.7023 0.9097 3.1888 0.5196 0.1614 0.0878 0.0605 0.0457 0.5138 5.7374 0.9087 3.0389 0.5347 0.1209 0.0650 0.0445 0.0336 0.4033 6.1629 0.8956 2.9040 0.5353 Table 4: Overlap and embedding distance metrics computed for k = 1 generation against human-annotated reference sets H on the validation subset Hv (left), and automatic reference sets \u03a3 on the entire validation set (right). For HREA-QIH-G, on average \u223c6 answers are the empty string, which are excluded from the computation. Metrics marked \u2191indicate higher values are better, and those marked \u2193indicate lower values are better. number of generations sampled from the models (see Figure 3). Answer correctness can therefore be measured, without penalising diversity, even if the generations fall outside the candidate set for the given question. This yields an interesting insight: for some models (notably the HREA-QIH-G variants) performance degrades as k increases\u2014a useful thing to know if deploying this model as a VD agent in the real-world! Others, like RVA, HCIAE-G-DIS and CCA-AQ-G, generally remain constant or improve with higher k. Surprisingly, CCA-AQ-G, despite its poorer absolute performance across the metrics at k = 1, holds its own and even improves with increasing k. This allows us to compare models\u2019 generation capabilities and indeed robustness in the answering task\u2014something not possible with the rank-based evaluation. 0.00 0.10 0.20 Score CIDER-1 ( ) 0.00 0.05 0.10 0.15 CIDER-2 ( ) 0.00 0.05 0.10 CIDER-3 ( ) 0.00 0.03 0.05 0.08 Score CIDER-4 ( ) 0.00 0.25 0.50 0.75 1.00 METEOR ( ) 5.00 6.00 7.00 8.00 BERT-L2 ( ) 1 5 10 15 k 0.85 0.90 0.95 Score BERT-CS ( ) 1 5 10 15 k 2.50 3.00 3.50 4.00 FT-L2 ( ) 1 5 10 15 k 0.40 0.50 0.60 FT-CS ( ) CCA HREA HREA* HCIAE RVA -0.10 0.00 0.10 0.20 0.30 Score CIDER-1 ( ) 0.00 0.10 0.20 CIDER-2 ( ) -0.05 0.00 0.05 0.10 0.15 CIDER-3 ( ) 0.00 0.05 0.10 Score CIDER-4 ( ) 0.00 0.25 0.50 0.75 1.00 METEOR ( ) 6.00 8.00 BERT-L2 ( ) 1 5 10 15 k 0.80 0.85 0.90 0.95 Score BERT-CS ( ) 1 5 10 15 k 2.00 2.50 3.00 3.50 4.00 FT-L2 ( ) 1 5 10 15 k 0.30 0.40 0.50 0.60 0.70 FT-CS ( ) CCA HREA HREA* HCIAE RVA Figure 3: Overlap and embedding distance metrics for k = 1, 5, 10, 15 generations from SOTA models on Hv (top) and the full validation set (bottom). CCA-AQ-G and RVA generally show improving trends with increasing k, which isn\u2019t the case for HREA-QIH-G variants. 
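The k-generation evaluation above can be sketched as follows. Averaging the k per-generation consensus scores is only one reasonable aggregation and is an assumption of this sketch; sample_answers is a hypothetical stand-in for a model's answer sampler, and consensus_score can be any of the metrics described earlier.

import numpy as np

def evaluate_at_k(model, questions, reference_sets, consensus_score, k=5):
    # score each of k sampled generations against the reference set and
    # aggregate per question (mean shown here; max is another option)
    per_question = []
    for q, refs in zip(questions, reference_sets):
        gens = model.sample_answers(q, k)  # hypothetical sampler interface
        per_question.append(np.mean([consensus_score(g, refs) for g in gens]))
    return float(np.mean(per_question))

Because scoring is against a reference set rather than a single ground truth, larger k probes generation diversity without penalising answers that fall outside the original candidate list.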
6 Discussion In this paper, we propose a revised evaluation suite for VisDial drawing on existing metrics from the NLP community that measures similarity between answers generated by a model and a given reference set of answers. We arrive at the need for alternate evaluations through the \ufb01ndings of Massiceti et al. (2018) and our own analysis of existing evaluation metrics on the VisDial dataset, which we show can suffer from a number of issues to do with a mismatch between the VD task and an evaluation for it that depends on ranking metrics. While a recent update to the evaluation paradigm of VisDial incorporates both human judgements of answer validity and multiple plausible answers into a \ufb01nal score, issues relating to ranking persist, albeit to a lesser extent. Here, we advocate use of answers directly generated by a model, in concert with consensusbased metrics measuring similarity against sets of answers marked as valid by human annotators. It is practically infeasible to obtain these validity judgements at scale, however, thus restricting the extent to which the revised scheme can be applied. To address this issue, we develop a semi-supervised automated mechanism to extract sets of relevant answers from given candidate sets, using sparse human annotations and correlations through CCA. We verify these sets by computing their intersection with those marked by humans, asking turkers via AMT, and measuring their utility for the VD task. Based on such experiments, we expand the VisDial dataset with these reference set annotations and release this and the revised evaluation scheme as DenseVisDial for future evaluation and model development. We intend this to be one possible improvement in the face of inherent constraints on the VisDial dataset, and hope that the community adopts the revised evaluation going forwards. Acknowledgments This work was supported by ERC grant ERC-2012-AdG 321162-HELIOS, EPSRC grant Seebibyte EP/M013774/1, EPSRC/MURI grant EP/N019474/1, FAIR ParlAI grant, the Skye Foundation and the Toyota Research Institute. We thank Yulei Niu for his help with the RVA code." + }, + { + "url": "http://arxiv.org/abs/1812.06417v3", + "title": "Visual Dialogue without Vision or Dialogue", + "abstract": "We characterise some of the quirks and shortcomings in the exploration of\nVisual Dialogue - a sequential question-answering task where the questions and\ncorresponding answers are related through given visual stimuli. To do so, we\ndevelop an embarrassingly simple method based on Canonical Correlation Analysis\n(CCA) that, on the standard dataset, achieves near state-of-the-art performance\non mean rank (MR). In direct contrast to current complex and over-parametrised\narchitectures that are both compute and time intensive, our method ignores the\nvisual stimuli, ignores the sequencing of dialogue, does not need gradients,\nuses off-the-shelf feature extractors, has at least an order of magnitude fewer\nparameters, and learns in practically no time. We argue that these results are\nindicative of issues in current approaches to Visual Dialogue and conduct\nanalyses to highlight implicit dataset biases and effects of over-constrained\nevaluation metrics. Our code is publicly available.", + "authors": "Daniela Massiceti, Puneet K. Dokania, N. Siddharth, Philip H. S. 
Torr", + "published": "2018-12-16", + "updated": "2019-10-22", + "primary_cat": "cs.CV", + "cats": [ + "cs.CV", + "cs.CL", + "cs.LG" + ], + "main_content": "Introduction Caption: A man and a woman sit on the street in front of a large mural painting. Question Answer How old is the baby? About 2 years old What color is the remote? White Where is the train? On the road How many cows are there? Three Figure 1: Failures in visual dialogue. Visually-unrelated questions, and their visually-unrelated plausible answers3. Recent years have seen a great deal of interest in conversational AI, enabling natural language interaction between humans and machines, early pioneering efforts for which include ELIZA (Weizenbaum, 1966) and SHRDLU (Winograd, 1971). This resurgence of interest builds on the ubiquitous successes of neural-network-based approaches in the last decade, particularly in the perceptual domains of vision and language. A particularly thriving sub-area of interest in conversational AI is that of visually grounded dialogue, termed visual dialogue (VD), involving an AI agent conversing with a human about visual content (Das et al., 2017a,b; Massiceti et al., 2018). Speci\ufb01cally, it involves answering questions about an image, given some dialogue history\u2014a fragment of previous questions and answers. Typical approaches for learning to do VD, as is standard practice in machine learning (ML), involves de\ufb01ning an objective to achieve, procuring data with which to learn, and establishing a measure of success at the stated objective. The objective for VD is reasonably clear at \ufb01rst glance\u2014answer in sequence, a set of questions about an image. The primary choice of dataset, VisDial (Das et al., 2017a), addresses precisely this criterion, involving a large set of images, each paired with a dialogue\u2014a set of question-answer pairs\u2014collected by pairs of human annotators playing a game to understand an image through dialogue. And \ufb01nally, evaluation measures on the objective are typically de\ufb01ned through some perceived value of a human-derived \u201cground-truth\u201d answer in the system. However, as we will demonstrate, certain quirks in the choices of the above factors, can lead to unintentional behaviour (c.f. Figure 1), which leverages implicit biases in data and methods, to potentially misdirect progress from the desired objectives. Intriguingly, we \ufb01nd that in contrast to state-of-the-art (SOTA) approaches that employ complex neural-network architectures using complicated training \u2217Equal Contribution 2https://github.com/danielamassiceti/CCA-visualdialogue 3From online demos of SOTA models\u2013VisDial (Das et al., 2017a) and FlipDial (Massiceti et al., 2018) arXiv:1812.06417v3 [cs.CV] 22 Oct 2019 \fschemes over millions of parameters and taking many hours of time and expensive GPU compute resources, the simple canonical correlation analysis (CCA)-based method only uses standard off-theshelf feature extractors, avoids computing gradients, involves a few hundred thousand parameters and requires just a few seconds on a CPU to achieve comparable performance on the mean rank (MR) metric\u2014all without requiring the image or prior dialogue! 2 (Multi-View) CCA for VD We begin with a brief preliminary for CCA (Hotelling, 1936) and its multi-view extension (Kettenring, 1971). 
In (standard 2-view) CCA, given access to paired observations {x_1 ∈ R^{n_1×1}, x_2 ∈ R^{n_2×1}}, the objective is to jointly learn projection matrices W_1 ∈ R^{n_1×p} and W_2 ∈ R^{n_2×p} where p ≤ min(n_1, n_2), that maximise the correlation between the projections, formally corr(W_1^⊤ x_1, W_2^⊤ x_2). Multi-view CCA, a generalisation of CCA, extends this to associated data across m domains, learning projections W_i ∈ R^{n_i×p}, i ∈ {1, . . . , m}. Kettenring (1971) shows that W_i can be learnt by minimising the Frobenius norm between each pair of views, with additional constraints over the projection matrices (Hardoon et al., 2004). Optimising the multi-view CCA objective then reduces to solving a generalized eigenvalue decomposition problem, Av = λBv, where A and B are derived from the inter- and intra-view correlation matrices (c.f. Appendix A) (Bach and Jordan, 2002). Projection matrices W_i are extracted from corresponding rows (for view i) and the top p columns of the (eigenvalue sorted) eigenvector matrix corresponding to this eigen-decomposition. A sample x_i from view i is then embedded as φ_q(x_i, W_i) = (W_i D_p^q)^⊤ x_i, where D_p^q = diag(λ_1^q, · · · , λ_p^q) and λ_1 ≥ · · · ≥ λ_p are the eigenvalues. A scaling, q ∈ R, controls the extent of eigenvalue weighting, reducing to the standard objective at q = 0 (footnote 4). With this simple objective, one can tackle a variety of tasks at test time—ranking and retrieval across all possible combinations of multiple views—where the cosine similarity between (centred) embedding vectors captures correlation. For VD, given a dataset of images I and associated question-answer (Q-A) pairs, joint embeddings between question and answer (and optionally, the image) are learnt, with projection matrices WQ, WA, (and WI), as appropriate. At test time, correlations can be computed between any, and all, combinations of inputs, helping measure suitability against the desired response. 3 Experimental Analyses In order to employ CCA for VD, we begin by transforming the input images I, questions Q, and answers A, into lower-dimensional feature spaces. For the images, we employ the standard pre-trained ResNet34 (He et al., 2016) architecture, extracting a 512-dimensional feature—the output of the avg pool layer after conv5. For the questions and answers, we employ the FastText (Bojanowski et al., 2017) network to extract 300-dimensional embeddings for each of the words. We then simply average the embeddings (Arora et al., 2017) for the words, with suitable padding or truncation (up to a maximum of 16 words), to obtain a 300-dimensional embedding for the question or answer. Table 1: CCA vs. SOTA: number of learnable parameters and training time. Model / #Params / Train time (s): HCIAE-G-DIS / 2.12 × 10^7 / –; VisDial / 2.42 × 10^7 / –; FlipDial / 1.70 × 10^7 / 2.0 × 10^5; CCA (A-Q) / 1.80 × 10^5 / 2.0; Factor (≈) / 90 / 10^5. We then set the hyper-parameters for the CCA objective as p = 300, q = 1, based off of a simple grid search over feasible values, such that we learn a 300-dimensional embedding space that captures the correlations between the relevant domains.
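A minimal two-view sketch of this procedure follows, assuming mean-centred feature matrices and a small ridge term added for numerical stability; the paper's exact preprocessing and its multi-view generalisation may differ, and the function names are illustrative.

import numpy as np
from scipy.linalg import eigh

def fit_cca(X1, X2, k=300, reg=1e-4):
    # X1: (N, n1), X2: (N, n2) paired, mean-centred features (e.g. answers/questions)
    n1 = X1.shape[1]
    C = np.cov(np.hstack([X1, X2]).T)  # joint covariance over both views
    C11, C12, C22 = C[:n1, :n1], C[:n1, n1:], C[n1:, n1:]
    A = np.block([[np.zeros_like(C11), C12], [C12.T, np.zeros_like(C22)]])
    B = np.block([[C11, np.zeros_like(C12)], [np.zeros_like(C12.T), C22]])
    B += reg * np.eye(B.shape[0])      # ridge for numerical stability
    evals, evecs = eigh(A, B)          # solves A v = lambda B v
    top = np.argsort(evals)[::-1][:k]  # keep the top-k eigenpairs
    lam, V = evals[top], evecs[:, top]
    return V[:n1], V[n1:], lam         # W1, W2 and the sorted eigenvalues

def embed(x, W, lam, q=1.0):
    # phi_q(x, W) = (W diag(lambda^q))^T x
    return (W * (lam ** q)).T @ x

def correlation(z1, z2):
    # cosine similarity between (already mean-centred) embeddings
    return float(z1 @ z2 / (np.linalg.norm(z1) * np.linalg.norm(z2)))

Ranking a candidate answer for a given question then amounts to embedding each with its view's projection and sorting candidates by correlation with the question embedding.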
It is important to note that the SOTA approaches (Das et al., 2017a,b; Massiceti et al., 2018) also employ pre-trained feature extractors\u2014the crucial difference between approaches is the complexities in modelling and computation on top of such feature extraction, as starkly indicated in Table 1. We then learn two joint embeddings\u2014between just the answers and questions, denoted A-Q, and between the answers, questions, and images, denoted A-QI. Note that the answer is always present, since the stipulated task in VD is to answer a given question. The \ufb01rst allows us to explore the utility (or lack thereof) of the image in performing the VD task. The second serves as a useful indicator of how unique any question-image pairing is, in how it affects the ability to answer\u2014performance closer to that of A-Q indicating fewer unique pairings. Also, when embedding all three of A, Q, and I, at test time, we only employ Q to compute a match against a potential answer. 4There are cases where values of q > 0 have been shown to give better performance (Gong et al., 2014). 2 \fHaving now learnt an embedding, we evaluate our performance using the standard ranking measure employed for the VisDial dataset. Here, for a given image and an associated question, the dataset provides a set of 100 candidate answers, which includes the human-derived \u201cground-truth\u201d answer. The task then, is to rank each of the 100 candidates, and observe the rank awarded to the \u201cgroundtruth\u201d answer. In our case, we rank on correlation, computed as the cosine distance between centered embeddings between the question and a candidate answer. Then, for all the answers we compute the mean rank (MR), mean reciprocal rank (MRR) (inverse harmonic mean of rank), and recall at top 1, 5, and 10 candidates\u2014measuring how often the \u201cground-truth\u201d answer ranked within that range. Table 2: Results for SOTA vs. CCA on the VisDial dataset. CCA achieves comparable performance while ignoring both image and dialogue sequence. Model MR R@1 R@5 R@10 MRR SotA v0.9 HCIAE-G-DIS 14.23 44.35 65.28 71.55 0.5467 CoAtt-GAN 14.43 46.10 65.69 71.74 0.5578 HREA-QIH-G 16.79 42.28 62.33 68.17 0.5242 CCA v0.9 A-Q 16.21 16.85 44.96 58.10 0.3041 A-QI (Q) 18.27 12.24 35.55 50.88 0.2439 v1.0 A-Q 17.07 16.18 40.18 55.35 0.2845 A-QI (Q) 19.25 12.63 32.88 48.68 0.2379 The results, in Table 2, show that the simple CCA approach achieves comparable performance on the mean rank (MR) metric using the A-Q model that doesn\u2019t use the image or dialogue sequence! This solidi\ufb01es the impression, from Figure 1, that there exist implicit correlations between just the questions and answers in the data, that can be leveraged to perform \u201cwell\u201d on a task that simply requires matching \u201cground-truth\u201d answers. Our experiments indicate that for the given dataset and task, one need not employ anything more complicated than an exceedingly simple method such as CCA on pretrained feature extractors, to obtain plausible results. Image Question (Rank) GT Answer CCA Top-3 (Rank) Answer What colour is the bear? 1 \u20ddWhite and brown 51 \u20ddFloral white 2 \u20ddBrown and white 3 \u20ddBrown, black & white Does she have long hair? 1 \u20ddNo, it is short hair 41 \u20ddNo 2 \u20ddShort 3 \u20ddNo it\u2019s short Can you see any passengers? 1 \u20ddNo 48 \u20ddNot really 2 \u20ddZero 3 \u20ddNo I can not Are there people not on bus? 
1 \u20ddNo people 22 \u20ddFew 2 \u20ddNo, there are no people around 3 \u20ddI don\u2019t see any people Figure 2: Qualitative results for the A-Q model showing the top-3 ranked answers for questions where the ground-truth answer is given a low rank\u2014showing them to be perfectly feasible. Moreover, another factor that needs to be considered, is that the evaluation metric itself, through the chosen task of candidate-answer ranking, can be insuf\ufb01cient to draw any actual conclusions about how well questions were answered. To see this, consider Figure 2, where we deliberately pick examples that rank the \u201cground-truth\u201d answer poorly despite CCA\u2019s top-ranked answers all being plausible alternatives. This clearly illustrates the limitations imposed by assuming a single \u201cground-truth\u201d answer in capturing the breadth of correct answers. To truly judge the validity of the top-ranked answers, regardless of \u201cground-truth\u201d would require thorough human-subject evaluation. However, as a cheaper, but heuristic alternative, we quantify the validity of the top answers, in relation to the \u201cground truth\u201d, using the correlations themselves. For any given question and candidate set of answers, we cluster the answers based on an automatic binary thresholding (Otsu (Otsu, 1979)) of the correlation with the given question. We then compute the following two statistics based on the threshold i) the average variance of the correlations in the lower-ranked split, and ii) the fraction of questions that have correlation with \u201cground truth\u201d answer higher than the threshold. The intention being that (i) quanti\ufb01es how closely clustered the top answers are, and (ii) quanti\ufb01es how often the \u201cground-truth\u201d answer is in this cluster. Low values for the former, and high values for the latter would indicate that there exists an equivalence class of answers, all relatively close to the ground-truth answer in terms of their ability to answer the question. Our analysis for the VisDial v0.9 dataset reveals values of (i) 0.1180 and (ii) 86.95%, supporting our claims that CCA recovers plausible answers. We note that the VisDial dataset was recently updated to version 1.0, where the curators try to ameliorate some of the issues with the single-\u201cground-truth\u201d answer approach. They incorporate a human-agreement scores for candidate answers, and introduce a modi\ufb01ed evaluation which weighs the predicted rankings by these scores. We include our performance on the (held-out) test set for VisDial v1.0 in the bottom row of Table 2. However, in making this change, the primary evaluation for this data has now become an explicit classi\ufb01cation task on the candidate answers5\u2014requiring access, at train time, to all (100) candidates for every question-image pair (see Table 1, pg 8. Das et al., 2017a) and the evaluation results of the Visual Dialog Challenge 2018. For the stated goals of VD, this change can be construed as unsuitable as it falls into the category of rede\ufb01ning the problem to match a potentially unsuitable evaluation measure\u2014how can one get better ranks in the 5See Appendix B for an update on this characterisation. 3 \fQ: Are they adult giraffe? Q: Are there other animals? 
GT \u20ddYes GT \u20ddNo Ranked Ans 1 \u20ddYes the giraffe seems to be an adult Ranked Ans 1 \u20ddNo, there are no other animals 2 \u20ddIt seems to be an adult, yes 2 \u20ddNo other animals 3 \u20ddThe giraffe is probably an adult, it looks very big 3 \u20ddThere are no other animals around 4 \u20ddYoung adult 4 \u20ddDon\u2019t see any animals Q: Any candles on cake? Q: Is the cake cut? GT \u20ddJust a large \u201cnumber one\u201d GT \u20ddNo, but the boy has sure had his hands in it! Ranked Ans 1 \u20ddThere are no candles on the cake Ranked Ans 1 \u20ddNo it\u2019s not cut 2 \u20ddI actually do not see any candles on the cake 2 \u20ddNo the cake has not been cut 3 \u20ddNo , no candles 3 \u20ddNothing is cut 4 \u20ddNo candles 4 \u20ddNo, the cake is whole Figure 3: Example answers \u201cgenerated\u201d using the nearest-neighbours approach. For a given test question, a custom candidate set is constructed by choosing answers corresponding to the 100 closest (by correlation using A-Q) questions from the training data, and the best correlated answers to the given question returned. candidate-answer-ranking task. For this reason, although there exist approaches that use the updated data, we do not report comparison to any of them. Although standard evaluation for VD involves ranking the given candidate answers, there remains an issue of whether, given a question (relating to an image), the CCA approach really \u201canswers\u201d it. From one perspective, simply choosing from a given candidate set can seem a poor substitute for the ability to generate answers, in the vein of Das et al. (2017a); Massiceti et al. (2018). To address this, we construct a simple \u201cgenerative\u201d model using our learned projections between questions and answers (A-Q model, c.f. Figure 3). For a given question, we select the corresponding answers to the 100 nearest-neighbour questions using solely the train set and construct a custom candidate-answer set. We then compute their correlations with the given question, and sample the top-correlated answers as \u201cgenerated\u201d answers. 4 Discussion We use the surprising equivalence from \u00a7 3 as evidence of several issues with current approaches to VD. The biggest concern our evaluation, and a similar by (Anand et al., 2018), reveals is that, for standard datasets in the community, visually grounded questions can be answered \u201cwell\u201d, without referring to the visual stimuli. This reveals an unwanted bias in the data, whereby correlations between question-answer pairs can be exploited to provide reasonable answers to visually-grounded questions. Moreover, as indicated in Figure 1, the dataset also includes an implicit bias that any given question must necessarily relate to a given image\u2014as evidence by visually-unrelated questions getting visually-unrelated, but plausible answers. A particularly concerning implication of this is that current approaches to visual dialogue (Das et al., 2017a,b; Massiceti et al., 2018) may not actually be targetting the intended task. Our simple CCA method also illustrates, that the standard evaluation used for VD has certain shortcomings. Principally, the use of \u201ccandidate\u201d answers for each question, with a particular subset of them (1 in VisDial v0.9, and K-human-derived weighted choices in v1.0) are deemed to be the \u201cground-truth\u201d answers. However, as we show in Figure 2, such an evaluation can still be insuf\ufb01cient to capture the range of all plausible answers. 
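For reference, the correlation-based candidate ranking of Section 3 and the Otsu-threshold consistency check can be sketched as follows. This is an illustrative reimplementation with assumed array shapes (one centred question embedding and 100 centred candidate-answer embeddings per instance), not the authors' released code.

```python
# Sketch of ranking 100 candidate answers by correlation with the question,
# plus the threshold-based consistency statistics described above.
import numpy as np
from skimage.filters import threshold_otsu  # any binary threshold would do

def cosine(a, B):
    return (B @ a) / (np.linalg.norm(B, axis=1) * np.linalg.norm(a) + 1e-12)

def rank_metrics(q_embs, cand_embs, gt_idx):
    """q_embs: (Q, d); cand_embs: (Q, 100, d); gt_idx: (Q,) ground-truth index."""
    ranks = []
    for q, C, g in zip(q_embs, cand_embs, gt_idx):
        scores = cosine(q, C)                  # correlation with each candidate
        order = np.argsort(-scores)            # best candidate first
        ranks.append(int(np.where(order == g)[0][0]) + 1)
    ranks = np.array(ranks, dtype=float)
    return {"MR": ranks.mean(),
            "MRR": (1.0 / ranks).mean(),
            **{f"R@{k}": (ranks <= k).mean() for k in (1, 5, 10)}}

def otsu_consistency(q_embs, cand_embs, gt_idx):
    """(i) mean variance of the low-correlation split, and (ii) fraction of
    questions whose ground-truth answer lies above the Otsu threshold."""
    low_var, gt_above = [], []
    for q, C, g in zip(q_embs, cand_embs, gt_idx):
        scores = cosine(q, C)
        t = threshold_otsu(scores)
        low_var.append(scores[scores < t].var())
        gt_above.append(scores[g] >= t)
    return float(np.mean(low_var)), float(np.mean(gt_above))
```

The point of the check is simply whether the ground-truth answer falls inside the tightly clustered, highly correlated group of candidates, i.e. whether an equivalence class of acceptable answers exists.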
The task of designing evaluations on the \u201cmatch\u201d of expected answers in for natural language, though, is fraught with dif\ufb01culty, as one needs to account for a high degree of syntactic variability, with perhaps little semantic difference. Responses to addressing the issues observed here, can take a variety of forms. For the objective itself, one could alternately evaluate the effectiveness with which the dialogue enables a downstream task, as explored by some (Das et al., 2017b; De Vries et al., 2017; Khani et al., 2018; Lazaridou et al., 2016). Also, to address implicit biases in the dataset, one could adopt synthetic, or simulated, approaches, such as Hermann et al. (2017), to help control for undesirable factors. Fundamentally, the important concern here is to evaluate visual dialogue on its actual utility\u2014conveying information about the visual stimuli\u2014as opposed to surface-level measures of suitability. And \ufb01nally, we believe an important takeaway from our analyses is that it is highly effective to begin exploration with the simplest possible tools one has at one\u2019s disposal. This is particularly apposite in the era of deep neural networks, where the prevailing attitude appears to be that it is preferable to start exploration with complicated methods that aren\u2019t well understood, as opposed to older, perhaps even less fashionable methods that have the bene\ufb01t of being rigorously understood. Also, as shown in Table 1, choosing simpler methods can help minimise human effort and cost in terms of both compute and time, and crucially provide the means for cleaner insights into the problems being tackled. 4 \fAcknowledgements This work was supported by the ERC grant ERC-2012-AdG 321162-HELIOS, EPSRC grant Seebibyte EP/M013774/1, EPSRC/MURI grant EP/N019474/1, FAIR ParlAI grant, and the Skye Foundation. We thank Abhishek Das for his comments and discussions on our manuscipt." + }, + { + "url": "http://arxiv.org/abs/1802.03803v2", + "title": "FlipDial: A Generative Model for Two-Way Visual Dialogue", + "abstract": "We present FlipDial, a generative model for visual dialogue that\nsimultaneously plays the role of both participants in a visually-grounded\ndialogue. Given context in the form of an image and an associated caption\nsummarising the contents of the image, FlipDial learns both to answer questions\nand put forward questions, capable of generating entire sequences of dialogue\n(question-answer pairs) which are diverse and relevant to the image. To do\nthis, FlipDial relies on a simple but surprisingly powerful idea: it uses\nconvolutional neural networks (CNNs) to encode entire dialogues directly,\nimplicitly capturing dialogue context, and conditional VAEs to learn the\ngenerative model. FlipDial outperforms the state-of-the-art model in the\nsequential answering task (one-way visual dialogue) on the VisDial dataset by 5\npoints in Mean Rank using the generated answers. We are the first to extend\nthis paradigm to full two-way visual dialogue, where our model is capable of\ngenerating both questions and answers in sequence based on a visual input, for\nwhich we propose a set of novel evaluation measures and metrics.", + "authors": "Daniela Massiceti, N. Siddharth, Puneet K. Dokania, Philip H. S. 
Torr", + "published": "2018-02-11", + "updated": "2018-04-03", + "primary_cat": "cs.CV", + "cats": [ + "cs.CV" + ], + "main_content": "Introduction A fundamental characteristic of a good human-computer interaction (HCI) system is its ability to effectively acquire and disseminate knowledge about the tasks and environments in which it is involved. A particular subclass of such systems, natural-language-driven conversational agents such as Alexa and Siri, have seen great success in a number of wellde\ufb01ned language-driven tasks. Even such widely adopted systems suffer, however, when exposed to less circumscribed, more free-form situations. Ultimately, an implicit requirement for the wide-scale success of such systems is the effective understanding of the environments and goals of the user \u2013 an exceedingly dif\ufb01cult problem in the general case as it involves getting to grips with a variety of sub-problems (semantics, grounding, long-range dependencies) each of which are extremely dif\ufb01cult problems in themselves. One avenue to ameliorate such issues is the incorporation of visual context to help explicitly ground the language used \u2013 providing a domain in which knowledge can be anchored and extracted from. Conversely, this also provides a way in which language can be used to characterise visual information in richer terms, 1 arXiv:1802.03803v2 [cs.CV] 3 Apr 2018 \ffor example with sentences describing salient features in the image (referred to as \u201ccaptioning\u201d) [13, 15]. In recent years, there has been considerable interest in visually-guided language generation in the form of visual question-answering (VQA) [1] and subsequently visual dialogue [6], both involving the task of answering questions in the context of an image. In the particular case of visual dialogue, along with the image, previously seen questions and answers (i.e. the dialogue history) are also accepted, and a relevant answer at the current time produced. We refer to this one-sided or answer-only form of visual dialogue as one-way visual dialogue (1VD). Inspired by these models and aiming to extend their capabilities, we establish the task of two-way visual dialogue (2VD) whereby an agent must be capable of acting as both the questioner and the answerer. Our motivation for this is simple \u2013 AI agents need to be able to both ask questions and answer them, often interchangeably, rather do either one exclusively. For example, a vision-based home-assistant (e.g. Amazon\u2019s Alexa) may need to ask questions based on her visual input (\u201cThere is no toilet paper left. Would you like me to order more?\u201d) but may also need to answer questions asked by humans (\u201cDid you order the two-ply toilet paper?\u201d). The same questionanswer capability is true for other applications. For example, with aids for the visually-impaired, a user may need the answer to \u201cWhere is the tea and kettle?\u201d, but the system may equally need to query \u201cAre you looking for an Earl Grey or Rooibos teabag?\u201d to resolve potential ambiguities. We take one step toward this broad research goal with FLIPDIAL, a generative model capable of both 1VD and 2VD. The generative aspect of our model is served by using the conditional variational auto-encoder (CVAE), a framework for learning deep conditional generative models while simultaneously amortising the cost of inference in such models over the dataset [17, 24]. 
Furthermore, inspired by the recent success of convolutional neural networks (CNNs) in language generation and prediction tasks [11, 14, 21], we explore the use of CNNs on sequences of sequences (i.e. a dialogue) to implicitly capture all sequential dependences through the model. Demonstrating the surprising effectiveness of this approach, we show sets of sensible and diverse answer generations for the 1VD task in Fig. 1. We here provide a brief treatment of works related to visual dialogue. We reserve a thorough comparison to Das et.al. [6] for \u00a74.3, noting here that our fully-generative convolutional extension of their model outperforms their stateof-the-art results on the answering of sequential visual-based questions (1VD). In another work, Das et.al. [7] present a Reinforcement Learning based model to do 1VD, where they instantiate two separate agents, one each for questioning and answering. Crucially, the two agents are given different information \u2013 with one (QBot) given the caption, and the other (ABot) given the image. While this sets up the interesting task of performing image retrieval from natural-language descriptions, it is also fundamentally different from having a single agent perform both roles. Jain et.al. [12] explore a complementary task to VQA [1] where the goal is instead to generate a (diverse) set of relevant questions given an image. In their case, however, there is no dependence on a history of questions and answers. Finally, we note that Zhao et.al. [27] employ a similar model structure to ours, using a CVAE to model dialogue, but condition their model on discourse-based constraints for a purely linguistic (rather than visuo-linguistic) dataset. The tasks we target, our architectural differences (CNNs), and the dataset and metrics we employ are distinct. Our primary contributions in this work are therefore: \u2022 A fully-generative, convolutional framework for visual dialogue that outperforms state-of-the-art models on sequential question answering (1VD) using the generated answers, and establishes a baseline in the challenging twoway visual dialogue task (2VD). \u2022 Evaluation using the predicted (not ground-truth) dialogue \u2013 essential for real-world conversational agents. \u2022 Novel evaluation metrics for generative models of twoway visual dialogue to quantify answer-generation quality, question relevance, and the models\u2019s generative capacity. 2. Preliminaries Here we present a brief treatment of the preliminaries for deep generative models \u2013 a conglomerate of deep neural networks and generative models. In particular, we discuss the variational auto-encoder (VAE) [17] which given a dataset X with elements x \u2208X, simultaneously learns i) a variational approximation q\u03c6(z | x)1 to the unknown posterior distribution p\u03b8(z | x) for latent variable z, and ii) a generative model p\u03b8(x, z) over data and latent variables. These are both highly attractive prospects as the ability to approximate the posterior distribution helps amortise inference for any given data point x over the entire dataset X, and learning a generative model helps effectively capture the underlying abstractions in the data. 
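A minimal sketch of such an amortised recognition model, assuming a diagonal-Gaussian posterior and placeholder layer sizes, is given below (PyTorch). The reparameterisation z = mu + eps * sigma is what makes the sampled latent differentiable with respect to phi.

```python
# Illustrative amortised Gaussian encoder q_phi(z|x); layer sizes are placeholders.
import torch
import torch.nn as nn

class GaussianEncoder(nn.Module):
    def __init__(self, x_dim, z_dim, hidden=512):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(x_dim, hidden), nn.ReLU())
        self.mu = nn.Linear(hidden, z_dim)
        self.logvar = nn.Linear(hidden, z_dim)

    def forward(self, x):
        h = self.net(x)
        mu, logvar = self.mu(h), self.logvar(h)
        # reparameterisation trick: z = mu + eps * sigma, eps ~ N(0, I)
        z = mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)
        return z, mu, logvar
```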
Learning in this model is achieved through a uni\ufb01ed objective, involving the marginal likelihood (or evidence) of the data, namely: log p\u03b8(x) = DKL(q\u03c6(z | x) \u2225p\u03b8(z | x)) + Eq\u03c6(z|x)[log p\u03b8(x, z) \u2212log q\u03c6(z | x)] \u2265Eq\u03c6(z|x)[log p\u03b8(x|z)] \u2212DKL(q\u03c6(z|x)\u2225p\u03b8(z)) (1) The unknown true posterior p\u03b8(z | x) in the \ufb01rst KullbackLeibler (KL) divergence is intractable to compute making the objective dif\ufb01cult to optimise directly. Rather a lower-bound 1Following the literature, the terms recognition model or inference network may also be used to refer to the posterior variational approximation. \fof the marginal log-likelihood log p\u03b8(x), referred to as the evidence lower bound (ELBO), is maximised instead. By introducing a condition variable y, we capture a conditional posterior approximation q\u03c6(z | x, y) and a conditional generative model p\u03b8(x, z | y), thus deriving the CVAE [24]. Similar to Eq. (1), the conditional ELBO is: log p\u03b8(x | y) \u2265Eq\u03c6(z|x,y)[log p\u03b8(x | z, y)] \u2212DKL(q\u03c6(z | x, y) \u2225p\u03b8(z | y)) (2) where the \ufb01rst term is referred to as the reconstruction or negative cross entropy (CE) term, and the second, the regularisation or KL divergence term. Here too, similar to the VAE, q\u03c6(z | x, y) and p\u03b8(z | y) are typically taken to be isotropic multivariate Gaussian distributions, whose parameters (\u00b5q, \u03c32 q) and (\u00b5p, \u03c32 p) are provided by deep neural networks (DNNs) with parameters \u03c6 and \u03b8, respectively. The generative model likelihood p\u03b8(x | z, y), whose form varies depending on the data type \u2013 Gaussian or Laplace for images and Categorical for language models \u2013 is also parametrised similarly. In this work, we employ the CVAE model for the task of eliciting dialogue given contextual information from vision (images) and language (captions). 3. Generative Models for Visual Dialogue In applying deep generative models to visual dialogue, we begin by characterising a preliminary step toward it, VQA. In VQA, the goal is to answer a single question in the context of a visual cue, typically an image. The primary goal for such a model is to ensure that the elicited answer conforms to a stronger notion of relevance than simply answering the given question \u2013 it must also relate to the visual cue provided. This notion can be extended to one-way visual dialogue (1VD) which we de\ufb01ne as the task of answering a sequence of questions contextualised by an image (and a short caption describing its contents), similar to [6]. Being able to exclusively answer questions, however, is not fully encompassing of true conversational agents. We therefore extend 1VD to the more general and realistic task of two-way visual dialogue (2VD). Here the model must elicit not just answers given questions, but questions given answers as well \u2013 generating both components of a dialogue, contextualised by the given image and caption. Generative 1VD and 2VD models introduce stochasticity in the latent representations. As such, we begin by characterising our generative approach to 2VD using a CVAE. For a given image i and associated caption c, we de\ufb01ne a dialogue as a sequence of question-answer pairs d1:T = \u27e8(qt, at)\u27e9T t=1, simply denoted d when sequence indexing is unnecessary. Additionally, we denote a dialogue context h. When indexed by step as ht, it captures the dialogue subsequence d1:t. 
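For concreteness, the conditional ELBO of Eq. (2) translates into the following generic training loss (the negative ELBO). Tensor names and shapes are assumptions; the KL between the two diagonal Gaussians is evaluated in closed form, and the same form is reused for the dialogue-level and answer-level objectives that follow by taking x to be d or a_t respectively.

```python
# Sketch of the negative conditional ELBO with diagonal-Gaussian q_phi(z|x,y)
# and p_theta(z|y), and a categorical likelihood over word tokens.
import torch
import torch.nn.functional as F

def kl_diag_gaussians(mu_q, logvar_q, mu_p, logvar_p):
    """KL( N(mu_q, var_q) || N(mu_p, var_p) ), summed over latent dimensions."""
    return 0.5 * torch.sum(
        logvar_p - logvar_q
        + (logvar_q.exp() + (mu_q - mu_p) ** 2) / logvar_p.exp()
        - 1.0, dim=-1)

def neg_conditional_elbo(logits, targets, mu_q, logvar_q, mu_p, logvar_p, beta=1.0):
    """logits: (B, V, ...) token scores decoded from (z, y); targets: token ids."""
    recon = F.cross_entropy(logits, targets, reduction='none')
    recon = recon.reshape(recon.size(0), -1).sum(dim=1)   # sum over token positions
    kl = kl_diag_gaussians(mu_q, logvar_q, mu_p, logvar_p)
    return (recon + beta * kl).mean()
```

The beta factor anticipates the KL-annealing schedule used during training (Section 4.1), where the KL weight is increased from 0 to 1 over the course of optimisation.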
With this formalisation, we characterise a generative model for 2VD under latent variable z as p\u03b8(d, z | i, c, h) = p\u03b8(d | z, i, c, h) p\u03b8(z | i, c, h), with the corresponding recognition model de\ufb01ned as q\u03c6(z | d, i, c, h). Note that with relation to Eq. (2), data x is dialogue d and the condition variable is y = {i, c, h}, giving: log p\u03b8(d | i, c, h) \u2265Eq\u03c6(z|d,i,c,h)[log p\u03b8(d | z, i, c, h)] \u2212DKL(q\u03c6(z | d, i, c, h) \u2225p\u03b8(z | i, c, h)), (3) with the graphical model structures shown in Fig. 2. i c h d z i c h z d Figure 2: Left: Conditional recognition model and Right: conditional generative model for 2VD. The formulation in Eq. (3) is general enough to be applied to single question-answering (VQA) all the way to full twoway dialogue generation (2VD). Taking a step back from generative 2VD, we can re-frame the formulation for generative 1VD (i.e. sequential answer generation) by considering the generated component to be the answer to a particular question at step t, given context from the image, caption and the sequence of previous question-answers. Simply put, this corresponds to the data x being the answer at, conditioned on the image, its caption, the dialogue history to t-1, and the current question, or y = {i, c, ht\u22121, qt}. For simplicity, we denote a compound context as h+ t = \u27e8ht\u22121, qt\u27e9and reformulate Eq. (3) for 1VD as: log p\u03b8(d | i, c, h) = T X t=1 log p\u03b8(at | i, c, h+ t ), log p\u03b8(at | i, c, h+ t ) \u2265Eq\u03c6(z|at,i,c,h+ t)[log p\u03b8(at | z, i, c, h+ t )] \u2212DKL(q\u03c6(z | at, i, c, h+ t ) \u2225p\u03b8(z | i, c, h+ t )), (4) with the graphical model structures shown in Fig. 3. i c z h+ t at T i c at z h+ t T Figure 3: Left: Conditional recognition model and Right: conditional generative model for 1VD. Our baseline [6] for the 1VD model can also be represented in our formulation by taking the variational posterior and generative prior to be conditional Dirac-Delta distributions. That is, q\u03c6(z | at, i, c, h+ t ) = p\u03b8(z | i, c, h+ t ) = \u03b4(z | i, c, h+ t ). This transforms the objective from Eq. (4) \fby a) replacing the expectation of the log-likelihood over the recognition model by an evaluation of the log-likelihood for a single encoding (one that satis\ufb01es the Dirac-Delta), and b) ignoring the DKL regulariser, which is trivially 0. This computes the marginal likelihood directly as just the model likelihood log p\u03b8(at | z, i, c, h+ t ), where z \u223c\u03b4(z | i, c, h+ t ). Note that while such models can \u201cgenerate\u201d answers to questions by sampling from the likelihood function, we typically don\u2019t call them generative since they effectively make the encoding of the data and conditions fully deterministic. We explore and demonstrate the bene\ufb01t of a fully generative treatment of 1VD in \u00a74.3. It also follows trivially that the basic VQA model (for single question-answering) itself can be obtained from this 1VD model by simply assuming there is no dialogue history (i.e. step length T = 1). 3.1. \u201cColouring\u201dVisual Dialoguewith Convolutions FLIPDIAL\u2019s convolutional formulation allows us to implicitly capture the sequential nature of sentences and sequences of sentences. Here we introduce how we encode questions, answers, and whole dialogues with CNNs. We begin by noting the prevalence of recurrent approaches (e.g. LSTM [10], GRU [5]) in modelling both visual dialogue and general dialogue to date [6, 7, 8, 12, 27]. 
Typically recurrence is employed at two levels \u2013 at the lower level to sequentially generate the words of a sentence (a question or answer in the case of dialogue), and at a higher level to sequence these sentences together into a dialogue. Recently however, there has been considerable interest in convolutional models of language [3, 11, 14, 21], which have shown to perform at least as well as recurrent models, if not better, on a number of different tasks. They are also computationally more ef\ufb01cient, and typically suffer less from issues relating to exploding or vanishing gradients for which recurrent networks are known [19]. In modelling sentences with convolutions, the tokens (words) of the sentence are transformed into a stack of \ufb01xed-dimensional embeddings (e.g. using word2vec [18] or Glove [20], or those learned for a speci\ufb01c task). For a given sentence, say question qt, this results in an embedding \u02da qt \u2208RE\u00d7L for embedding size E and sentence length L, where L can be bounded by the maximum sentence length in the corpus, with padding tokens employed where required. This two-dimensional stack is essentially a single-channel \u2018image\u2019 on which convolutions can be applied in the standard manner in order to encode the entire sentence. Note this similarly applies to the answer at and caption c, producing embedded \u02da at and \u02da c, respectively. We then extend this idea of viewing sentences as \u2018images\u2019 to whole dialogues, producing a multi-channel language embedding. Here, the sequence of sentences itself can be seen as a stack of (a stack of) word embeddings \u02da d \u2208RE\u00d7L\u00d72T , where now the number of channels accounts for the number of questions and answers in the dialogue. We refer to this process as \u201ccolouring\u201d dialogue, by analogy to the most common meaning given to image channels \u2013 colour. Our primary motivation for adopting a convolutional approach here is to explore its ef\ufb01cacy in extending from simpler language tasks [11, 14] to full visual dialogue. We hence instantiate the following models for 1VD and 2VD: Answer [1VD]: We employ the CVAE formulation from Eq. (4) and Fig. 3 to iteratively generate answers, conditioned on the image, caption and current dialogue history. Block [1VD, 2VD]: Using the CVAE formulation from Eq. (3) and Fig. 2 we generate entire blocks of dialogue directly (i.e. h = \u2205since dialogue context is implicit rather than explicit). We allow the convolutional model to implicitly supply the context instead. We consider this 2VD, although this block architecture can also generate iteratively, and can be evaluated on 1VD (see \u00a74.2). Block Auto-Regressive [1VD, 2VD]: We introduce an auto-regressive component to our generative model in the same sense as recent auto-regressive generative models for images [9, 25]. We augment the Block model by feeding its output through an auto-regressive (AR) module which explicitly enforces sequentiality in the generation of the dialogue blocks. This effectively factorises the likelihood in Eq. (3) as p\u03b8(d | z, i, c, h) = p\u03b8 \u0000d1 | z, i, c, h \u0001 QN n=2 p\u03b8 \u0000dn | d1:n\u22121\u0001 where N is the number of AR layers, and d1 is the (intermediate) output from the standard Block model. Note, again h = \u2205, and dn refers to an entire dialogue at the n-th AR layer (rather than the t-th dialogue exchange as is denoted by dt). 4. 
Experiments We present an extensive quantitative and qualitative analysis of our models\u2019 performance in both 1VD, which requires answering a sequence of image-contextualised questions, and full 2VD, where both questions and answers must be generated given a speci\ufb01c visual context. Our proposed generative models are denoted as follows: A \u2013 answer architecture for 1VD B \u2013 block dialogue architecture for 1VD & 2VD BAR \u2013 auto-regressive extension of B for 1VD & 2VD A is a generative convolutional extension of our baseline [6] and is used to validate our methods against a standard benchmark in the 1VD task. B and BAR, like A, are generative, but are extensions capable of doing full dialogue generation, a much more dif\ufb01cult task. Importantly, B and BAR are \ufb02exible in that despite being trained to generate a block of questions and answers (h = \u2205), they can be evaluated iteratively for both 1VD and 2VD (see \u00a74.2). We summarise the data and condition variables for all models in Tab. 1. To evaluate performance on both tasks, we propose novel evaluation metrics which augment those of our baseline [6]. To the best of our knowledge, we are the \ufb01rst to report models \fTable 1: Data (x) and condition (y) variables for models A and B/BAR for 1VD and 2VD. Models B/BAR can be evaluated as a block or iteratively (see \u00a74.2), accepting groundtruth (q/a) or predicted (\u02c6 q/\u02c6 a) dialogue history (see Tab. 2). Task Model Train Evaluate Eval method x y x y 1VD A at i, c, h+ t \u2205 i, c, h+ t \u2212 B, BAR d i, c {d\u2013qa, d\u2013q\u02c6 a} i, c iterative 2VD B, BAR d i, c \u2205 i, c block d\u2013\u02c6 q\u02c6 a iterative that can generate both questions and answers given an image and caption, a necessary step toward a truly conversational agent. Our key results are: \u2022 We set state-of-the-art results in the 1VD task on the VisDial dataset, improving the mean rank of the generated answers by 5.66 (Tab. 3, Sw2v) compared to Das et al. [6]. \u2022 Our block models are able to generate both questions and answers, a more dif\ufb01cult but more realistic task (2VD). \u2022 Since our models are generative, we are able to show highly diverse and plausible question and answer generations based on the provided visual context. Datasets: We use the VisDial [6] dataset (v0.9) which contains Microsoft COCO images each paired with a caption and a dialogue of 10 question-answer pairs. The train/test split is 82, 783/40, 504 images, respectively. Baseline: Das et al. [6]\u2019s best model, MN-QIH-G, is a recurrent encoder-decoder architecture which encodes the image i, the current question qt and the attention-weighted ground truth dialogue history d1:t\u22121. The output conditional likelihood distribution is then used to (token-wise) predict an answer. Our A model is a generative and convolutional extension, evaluated using existing ranking-based metrics [6] on the generated and candidate answers. We also (iteratively) evaluate our B/BAR for 1VD as detailed in \u00a74.2 (see Tab. 3). 4.1. Network architectures and training Following the CVAE formulation (\u00a73) and its convolutional interpretation (\u00a73.1), all our models (A, B and BAR) have three core components: an encoder network, a prior network and a decoder network. Fig. 4 (top) shows the encoder and prior networks, and Fig. 4 (middle, bottom) show the standard and auto-regressive decoder networks. 
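At a high level, these three components can be wired together as in the following sketch (illustrative module names, not the released implementation). Here x is the embedded answer grid for model A, or the 2T-channel "coloured" dialogue tensor for B/BAR, and the condition bundles the image, caption and any dialogue context.

```python
# High-level wiring of prior, encoder and decoder networks:
# reconstruction path at train time, generation path at test time.
import torch
import torch.nn as nn

class ConditionalVAE(nn.Module):
    def __init__(self, prior_net, encoder_net, decoder_net):
        super().__init__()
        self.prior, self.encoder, self.decoder = prior_net, encoder_net, decoder_net

    @staticmethod
    def sample(mu, logvar):
        return mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)

    def forward(self, x, condition):
        y, mu_p, logvar_p = self.prior(condition)     # encode {i, c, h} -> y, p_theta(z|y)
        mu_q, logvar_q = self.encoder(x, y)           # q_phi(z|x, y)
        z = self.sample(mu_q, logvar_q)               # reconstruction path
        logits = self.decoder(z, y)
        return logits, (mu_q, logvar_q), (mu_p, logvar_p)

    @torch.no_grad()
    def generate(self, condition):
        y, mu_p, logvar_p = self.prior(condition)
        z = self.sample(mu_p, logvar_p)               # generation path: sample the prior
        return self.decoder(z, y)
```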
Prior network The prior neural network, parametrised by \u03b8, takes as input the image i, the caption c and the dialogue context. Referring to Table 1, for model A, recall y = {i, c, h+ t } where the context h+ t is the dialogue history up to t-1 and the current question qt. For models B/BAR, y = {i, c} (note h = \u2205). To obtain the image representation, we pass i through VGG-16 [23] and extract the penultimate (4096-d) feature vector. We pass caption c through a pretrained word2vec [18] module (we do not learn these word embeddings). If h \u0338= \u2205, we pass the one-hot encoding of CNNq CNNp CNNy CNN CNN Is this image in color? Yes How old does the girl appear to be? Four Does the donut have sprinkles? No Is this taking place inside of a coffee donut shop? No Are there any other people besides the girl pictured in this image? No Is the girl making a mess with the donut? Yes What color hair does the girl have? Blonde Has she eaten a lot of the donut? Yes Is she smiling? No What color are the walls? White A girl in pink shirt eating a chocolate frosted donut VGG 16 CNN Wemb Wemb w2v \u00b5p \u00b5q log\u03c3 q 2 log\u03c3 p 2 d h c i y ~ ~ ~ ~ ~ ~ WT emb CNNg CNNz Is this image in color? Yes How old does the girl appear to be? Four Does the donut have sprinkles? No Is this taking place inside of a coffee donut shop? No Are there any other people besides the girl pictured in this image? No Is the girl making a mess with the donut? Yes What color hair does the girl have? Blonde Has she eaten a lot of the donut? Yes Is she smiling? No What color are the walls? White d ! ! z = \u00b5 +\u03b5\u03c3 \u03b5 N (0,1) \u00b5 log\u03c3 2 y WT emb CNNg CNNz Is this image in color? Yes How old does the girl appear to be? Four Does the donut have sprinkles? No Is this taking place inside of a coffee donut shop? No Are there any other people besides the girl pictured in this image? No Is the girl making a mess with the donut? Yes What color hair does the girl have? Blonde Has she eaten a lot of the donut? Yes Is she smiling? No What color are the walls? White d ! ! z = \u00b5 +\u03b5\u03c3 \u03b5 N (0,1) \u00b5 log\u03c3 2 y AR-N Figure 4: Convolutional (top) conditional encoder and prior architecture, (middle) conditional decoder, and (bottom) auto-regressive conditional decoder architectures, applying to both oneand two-way visual dialogue (1VD and 2VD). each word through a learnable word embedding module and stack these embeddings as described in \u00a73.1. We encode these condition variables convolutionally to obtain y, and pass this through a convolutional block to obtain \u00b5p and log \u03c32 p, the parameters of the conditional prior p\u03b8(z | y). Encoder network The encoder network, parametrised by \u03c6, takes x and the encoded condition y (obtained from the prior network) as input. For model A, x = at while for B/BAR, x=d =\u27e8(qt, at)\u27e9T t=1. In all models, x is transformed through a word-embedding module into a singlechannel answer \u2018image\u2019 for A, or a multi-channel image of alternating questions and answers for B/BAR. The embedded output is then combined with y to obtain \u00b5q and log \u03c32 q, the parameters of the conditional latent posterior q\u03c6(z | x, y). Decoder network The decoder network takes as input a latent z and the encoded condition y. 
The sample is transpose-convolved, combined with y and further transformed to obtain an intermediate output volume of dimension E \u00d7L\u00d7M, where E is the word embedding dimension, L is the maximum sentence length and M is the number of dialogue entries in x (M = 1 for A, M = 2T for B variants). Following this, A and B employ a standard linear layer, projecting the E dimension to the vocabulary size V (Fig. 4 (middle)), whereas BAR employs an autoregressive module followed by this standard linear layer (Fig. 4 (bottom)). At train time, the V -dimensional output is softmaxed and the CE term of the ELBO computed. At test time, the \fTable 2: Iterative evaluation of B/BAR for 1VD and 2VD. Under each condition, the input dialogue block is \ufb01lled with ground-truth or predicted history (q/a or \u02c6 q/\u02c6 a, respectively), while future entries are \ufb01lled with the PAD token. 1VD 2VD d\u2013qa d\u2013q\u02c6 a d\u2013\u02c6 q\u02c6 a < t (q, a) (q, \u02c6 a) (\u02c6 q, \u02c6 a) = t (q, PAD) (q, PAD) (PAD, PAD) \u000e (\u02c6 q, PAD) > t (PAD, PAD) (PAD, PAD) (PAD, PAD) argmax of the output provides the predicted word index. The weights of the encoder and prior\u2019s learnable word embedding module and the decoder\u2019s \ufb01nal linear layer are shared. Autoregressive module Inspired by PixelCNN [26] which sequentially predicts image pixels, and similar to [9], we apply N = {8, 10} size-preserving autoregressive layers to the intermediate output of model B (size E \u00d7L\u00d72T), and then project E to vocabulary size V . Each layer employs masked convolutions, considering only \u2018past\u2019 embeddings, sequentially predicting 2T \u2217L embeddings of size E, enforcing sequentiality at both the sentenceand dialogue-level. KL annealing Motivated by [4] in learning continuous latent embedding spaces for language, we employ KL annealing in the loss objectives of Eq. (3) and Eq. (4). We weight the KL term by \u03b1 \u2208[0, 1] linearly interpolated over 100 epochs, and then train for a further 50 epochs (\u03b1 = 1). Network and training hyper-parameters In embedding sentences, we pad to a maximum sequence length of L = 64 and use a word-embedding dimension of E = 256 (for word2vec, E = 300). After pre-processing and \ufb01ltering the vocabulary size is V = 9710 (see supplement for further details). We use the Adam optimiser [16] with default parameters, a latent dimensionality of 512 and employ batch normalisation with momentum= 0.001 and learnable parameters. For model A we use a batch size of 200, and 40 for B/BAR. We implement our pipeline using PYTORCH [22]. 4.2. Evaluation methods for block models Although B/BAR generate whole blocks of dialogue directly (h = \u2205), they can be evaluated iteratively, lending them to both 1VD and 2VD (see supplement for descriptions of generation/reconstruction pipelines). \u2022 Block evaluation [2VD]. The generation pipeline generates whole blocks of dialogue directly, conditioned on the image and caption (i.e. x = \u2205and y = {i, c} for B/BAR evaluation in Tab. 1). This is 2VD since the model must generate a coherent block of both questions and answers. \u2022 Iterative evaluation. The reconstruction pipeline can generate dialogue items iteratively. At time t, the input dialogue block is \ufb01lled with zeros (PAD token) and the ground-truth/predicted dialogue history to < t is slotted in (see below and Tab. 2). This future-padded block is then Table 3: 1VD evaluation of A and B/BAR on VisDial (v0.9) test set. 
Results show ranking of answer candidates based on the score functions SM and Sw2v. Score function Method MR MRR R@1 R@5 R@10 SM RL-QAbot [7] 21.13 0.4370 53.67 60.48 MN-QIH-G [6] 17.06 0.5259 42.29 62.85 68.88 A (LW) 23.87 0.4220 30.48 53.78 57.52 A (ELBO) 20.38 0.4549 34.08 56.18 61.11 Sw2v MN-QIH-G [6] 31.31 0.2215 16.01 22.42 34.76 A (RECON) 15.36 0.4952 41.77 54.67 66.90 A (GEN) 25.65 0.3227 25.88 33.43 47.75 B 28.45 0.2927 23.50 29.11 42.29 d\u2013qa BAR8 25.87 0.3553 29.40 36.79 51.19 BAR10 26.30 0.3422 28.00 35.34 50.54 B 30.57 0.2188 16.06 20.88 35.37 d\u2013q\u02c6 a BAR8 29.10 0.2864 22.52 29.01 48.43 BAR10 29.15 0.2869 22.68 28.97 46.98 encoded with the condition inputs, and then reconstructed. The t-th dialogue item is extracted (whether an answer if 1VD or a question/answer if 2VD), and this is repeated T (for 1VD) or 2T (for 2VD) times. Variations are: \u2013 d\u2013qa [1VD]. At time t, the input dialogue block is \ufb01lled with the history of ground-truth questions and answers up to t-1, along with the current ground-truth question. All future entries are padded \u2013 equivalent to [6] using the ground-truth dialogue history. \u2013 d\u2013q\u02c6 a [1VD]. Similar to d\u2013qa, except that the input block is \ufb01lled with the history of ground-truth questions and previously predicted answers along with the current ground-truth question. This is a more realistic 1VD. \u2013 d\u2013\u02c6 q\u02c6 a [2VD]. The most challenging and realistic condition in which the input block is \ufb01lled with the history of previously predicted questions and answers. 4.3. Evaluation and Analysis We evaluate our A, B, and BAR models on the 1VD and 2VD tasks. Under 1VD, we predict an answer with each time step, given an image, caption and the current dialogue history (\u00a74.3.1 and Tab. 3), while under 2VD, we predict both questions and answers (\u00a74.3.2 and Tab. 4). All three models are able to perform the \ufb01rst task , while only B and BAR are capable of the second task. 4.3.1 One-Way Visual Dialogue (1VD) task We evaluate the performance of A and B/BAR on 1VD using the candidate ranking metric of [6] as well as an extension of this which assesses the generated answer quality (Tab. 3). Fig. 1 and Fig. 5 show our qualitative results for 1VD. Candidate ranking by model log-likelihood [SM] The VisDial dataset [6] provides a set of 100 candidate answers {ac t}100 c=1 for each question-answer pair at time t per image. The set includes the ground-truth answer at as well as similar, popular, and random answers. Das et al. [6] rank these candidates using the log-likelihood value of each under \fQuestion Groundtruth answer Z1 Z2 Z3 How old is the girl? Maybe three Looks about six I can\u2019t tell Yes What race is the girl? White Yes White Caucasian Is she outside? Yes No Yes Yes Is her hair long or short? Short Short Short Short What color is her hair? Blonde Blonde Brown Brown Is her hair curly or straight? It\u2019s straight Straight Straight Straight What is she wearing? Pink shirt, white pants T-shirt and jacket and pants Shirt and pants Jeans like a pajamas Is the teddy bear in her lap? Yes No Yes Yes What color is the teddy bear? White Brown Tan Tan and white Is it nice outside? Yes sunny Yes It looks lovely Yes A young girl swinging with her teddy bear Question Groundtruth answer Z1 Z2 Z3 How old does she look? Around seven or eight I cannot tell about her Looks about six She is about teenager around Any other people? No No Yes Yes Any buildings? No Yes No No Is it day or night? 
Day It looks like it in image is in so Daytime Day Is it raining? No Yes Yes No What color umbrella? Pink and clear Dark colored color White White Is it open or closed? Open Yes Open Open Is it sunny? I can\u2019t tell Yes No Yes What color is her hair? Dark brown Brown Brown Brown Is it long or short? Long Short Short I\u2019d say long A young girl holding an umbrella on the sidewalk Figure 5: Example generated answers from A\u2019s conditional prior \u2013 conditioned on an image, caption, question and dialogue history. See supplement for further examples. their model (conditioned on the image, caption and dialogue history, including the current question), and then observe the position of the ground-truth answer (closer to 1 is better). This position is averaged over the dataset to obtain the Mean Rank (MR). In addition, the Mean Reciprocal Rank (MRR; 1/MR) and recall rates at k = {1, 5, 10} are computed. To compare against their baseline, we rank the 100 candidates answers by estimates of their marginal likelihood from A. This can be done with i) the conditional ELBO (Eq. (4)), and by ii) likelihood weighting (LW) in the conditional generative model p\u03b8(at | i, c, h+ t ) = R p\u03b8(at, z | i, c, h+ t )dz = R p\u03b8(z | i, c, h+ t )p\u03b8(a | z, i, c, h+ t ) dz. Ranking by both these approaches is shown in the SM section of Tab. 3, indicating that we are comparable to the state of the art in discriminative models of sequential VQA [6, 7]. Candidate ranking by word2vec cosine distance [Sw2v] The evaluation protocol of [6] scores and ranks a given set of candidate answers, without being a function of the actual answer predicted by the model, \u02c6 at. This results in the rank of the ground-truth answer candidate re\ufb02ecting its score under the model relative to the rest of the candidates\u2019 scores, rather than capturing the quality of the answer output by the model, which is left unobserved. To remedy this, we instead score each candidate by the cosine distance between the word2vec embedding of the predicted answer \u02c6 at and that candidate\u2019s word2vec embedding. We take the embedding of a sentence to be the average embedding over word tokens following Arora et al. [2]. In addition to accounting for the predicted answer, this method also allows semantic similarities to be captured such that if the predicted answer is similar (in meaning and/or words generated) to the ground-truth candidate answer, then the cosine distance will be small, and hence the ground-truth candidate\u2019s rank closer to 1. We report these numbers for A, iteratively-evaluated B/BAR, and also our baseline model MN-QIH-G [6], which we re-evaluate using the word2vec cosine distance ranking (see Sw2v in Tab. 3). In the case of A (GEN), we evaluate answer generations from A whereby we condition on i, c and h+ t via the prior network, sample z \u223cN(z; \u00b5p, \u03c32 p) and generate an answer via the decoder network. Here we show an improvement of 5.66 points in MR over the baseline. On the other hand, A (RECON) evaluates answer reconstructions in which z is sampled from N(z; \u00b5q, \u03c32 q) (where groundtruth answer at is provided). We include A (RECON) merely as an \u201coracle\u201d autoencoder, observing its good ranking performance, but do not explicitly compare against it. We also note that the ranking scores of the block models are worse (by 3-4 MR points) than those of A. This is expected since A is explicitly trained for 1VD which is not the case for B/BAR. 
Despite this, the performance gap between A (GEN) and B/BAR (with d\u2013qa) is not large, bolstering our iterative evaluation method for the block architectures. Note \ufb01nally that the B/BAR models perform better under d\u2013qa than under d\u2013q\u02c6 a (by 2-3 MR points). This is also expected as answering is easier with access to the ground-truth dialogue history rather than when only the previously predicted answers (and ground-truth questions) are provided. 4.3.2 Two-way Visual Dialogue (2VD) task Our \ufb02exible CVAE formulation for visual dialogue allows us to move from 1VD to the generation of both questions and answers (2VD). Despite this being inherently more challenging, B/BAR are able to generate diverse sets of questions and answers contextualised by the given image and caption. Fig. 6 shows snippets of our two-way dialogue generations. In evaluating our models for 2VD, the candidate ranking protocol of [6] which relies on a given question to rank the answer candidates, is no longer usable when the questions themselves are being generated. This is the case for B/BAR block evaluation, which has no access to the ground-truth dialogue history, and the d\u2013\u02c6 q\u02c6 a iterative evaluation, when the full predicted history of questions and answers is provided (Tab. 2). We therefore look directly to the CE and KL terms of the ELBO as well as propose two new metrics, simc,q and sim\u27f2, to compare our methods in the 2VD task: \u2022 Question relevance (simc,q). We expect a generated question to query an aspect of the image, and we use the presence of semantically similar words in both the question and image caption as a proxy of this. We compute the cosine distance between the (average) word2vec embedding of each predicted question qt and that of the caption \fAre it sunny? Yes Can you see the sky? No Any birds? No Can you see any people? No Is this a sunny photo? Yes Is the sheep in? Yes What color is the sheep? Its is brown Any people? No Is it sunny? Yes What is the the doing? It is Is the sheep made? Yes, is Sheep standing near orange netting in grassy field Are there people? Yes Is it a professional photo? Yes Color color is the uniform? White Can you see the team? No Are there any people? No Can there? No Can you see the bases? No How many people are you see Two Can you see the sky? Yes Is the sky visible? Yes Is the people visible? Yes Does he have a? Yes is is Is it a? Yes is is A baseball player for the Chicago Cubs stands at home plate What color is the cabinets? White Are there any people in the table? No Does the window have curtains Yes Is the fridge on? Yes Are there any people in the? No Is the magnets on? No What color is the walls? White How many chairs are there? Two Is there any people? No Is it daytime? Yes Is there? No How pics the? No An image of a kitchen loft style setting What color are the uniforms? One is white Can you see any sky? No Is it people visible? Yes Is this a game? Yes Is the photo in color? No Is the photo close? Yes, it Can you see the bases? No Can you see the ball? No Is the person wearing a hat? Yes Can you see the the No Can you see the people? No Two guys playing baseball, with trees in the back Figure 6: Examples of two-way dialogue generation from the B/BAR models. Different colours indicate different generations \u2013 coherent sets with a single colour, and failures in white. See supplement for further examples. c, and average over all T questions in the dialogue (closer to 1 indicates higher semantic similarity). 
\u2022 Latent dialogue dispersion (sim\u27f2). For a generated dialogue block dg, sim\u27f2computes the KL divergence DKL(q\u03c6(z|dg, i, c) \u2225q\u03c6(z|d, i, c)), measuring how close the generated dialogue is to the true dialogue d in the latent space, given the same image i and caption c. From Tab. 4, we observe a decrease in the loss terms as the auto-regressive capacity of the model increases (none \u2192 8 \u219210), suggesting that explicitly enforcing sequentiality in the dialogue generations is useful. For sim\u27f2within a particular model, the dispersion values are typically larger Table 4: 2VD evaluation on VisDial (v0.9) test set for B/BAR models. For d, \u2018\u2205\u2019 indicates block evaluation, and \u2018d\u2013\u02c6 q\u02c6 a\u2019 indicates iterative evaluation (see \u00a74.2). Method d CE KLD simc,q sim\u27f2 B \u2205 31.18 4.34 0.4931 14.20 d\u2013\u02c6 q\u02c6 a 25.40 4.01 0.4091 1.86 BAR8 \u2205 28.81 2.54 0.4878 31.50 d\u2013\u02c6 q\u02c6 a 26.60 2.29 0.3884 2.39 BAR10 \u2205 28.49 1.89 0.4927 44.34 d\u2013\u02c6 q\u02c6 a 24.93 1.80 0.4101 2.35 for the harder task (without dialogue context). We also observe that dispersion increases with number of AR layers, suggesting AR improves the diversity of the model outputs, and avoids simply recovering data observed at train time. While the proposed metrics provide a novel means to evaluate dialogue in a generative framework, like all languagebased metrics, they are not complete. The question-relevance metric, simc,q, can stagnate, and neither metric precludes redundant or nonsensical questions. We intend for these metrics to augment the bank of metrics available to evaluate dialogue and language models. Further evaluation, including i) using auxiliary tasks, as in the image-retrieval task of [7], to drive and evaluate the dialogues, and ii) turning to human evaluators to rate the generated dialogues, can be instructive in painting a more complete picture of our models. 5." + } + ], + "Shell Xu Hu": [ + { + "url": "http://arxiv.org/abs/2204.07305v1", + "title": "Pushing the Limits of Simple Pipelines for Few-Shot Learning: External Data and Fine-Tuning Make a Difference", + "abstract": "Few-shot learning (FSL) is an important and topical problem in computer\nvision that has motivated extensive research into numerous methods spanning\nfrom sophisticated meta-learning methods to simple transfer learning baselines.\nWe seek to push the limits of a simple-but-effective pipeline for more\nrealistic and practical settings of few-shot image classification. To this end,\nwe explore few-shot learning from the perspective of neural network\narchitecture, as well as a three stage pipeline of network updates under\ndifferent data supplies, where unsupervised external data is considered for\npre-training, base categories are used to simulate few-shot tasks for\nmeta-training, and the scarcely labelled data of an novel task is taken for\nfine-tuning. We investigate questions such as: (1) How pre-training on external\ndata benefits FSL? (2) How state-of-the-art transformer architectures can be\nexploited? and (3) How fine-tuning mitigates domain shift? Ultimately, we show\nthat a simple transformer-based pipeline yields surprisingly good performance\non standard benchmarks such as Mini-ImageNet, CIFAR-FS, CDFSL and Meta-Dataset.\nOur code and demo are available at https://hushell.github.io/pmf.", + "authors": "Shell Xu Hu, Da Li, Jan St\u00fchmer, Minyoung Kim, Timothy M. 
Hospedales", + "published": "2022-04-15", + "updated": "2022-04-15", + "primary_cat": "cs.CV", + "cats": [ + "cs.CV", + "cs.LG" + ], + "main_content": "Introduction Mainstream supervised deep learning achieves excellent results in applications where huge annotated datasets are available. However, this assumption is not met in many applications where data (e.g., rare categories), or the cost of human annotation are prohibitive bottlenecks. This has motivated a large and growing set of research in few-shot learning (FSL), which aims to emulate the human ability to learn new concepts from few training examples. The FSL challenge has proven fertile ground for developing and testing a vast array of sophisticated research ideas spanning metric learning [59, 61], gradient-based meta-learning [29], program induction [41], differentiable optimization layers [42], hy*Equal contributions. CNN-4-64 RN12 RN18 WRN-28-10 ViT-base 50 60 70 80 90 100 miniImageNet 5-way-5-shot accuracy Figure 1. How does pre-training and architecture affect fewshot learning? Learning from a few shots can be achieved by a) meta-learning [66,72] and b) transfer learning from self-supervised foundation models pre-trained on large-scale external data [18,53]. While the majority of FSL community focuses on the former, we show that the latter can be more effective because it enables the use of stronger architectures such as vision transformer (ViT) [25] \u2013 and can be combined with simple meta-learners such as ProtoNet. The \ufb01gure shows results aggregated from dozens of studies from the past 5 years of FSL research and the result of ProtoNet + ViT backbone + contrastive language-image pretraining (CLIP) [53] (yellow star). To emphasize the importance of pre-training, ProtoNet + randomly initialized ViT (blue square) is also compared. pernetworks [9], neural optimizers [54], transductive label propagation [55], neural loss learning [4], Bayesian neural priors [72] and more [69]. But how much practical progress have we made based on all these technical advances? A few studies [19, 20, 23, 51, 63, 68] have investigated whether simpler baselines can offer comparable performance to sophisticated state of the art few-shot learners. While there is no conclusive answer, due to on-going developments in both sophisticated learners [72] and simple baselines, there is a trend that simple approaches often perform surprisingly arXiv:2204.07305v1 [cs.CV] 15 Apr 2022 \fwell compared to sophisticated counterparts. Their simplicity and ef\ufb01cacy leads these simple methods to be taken up in many practical applications of few-shot learning from medical data analysis [11] to electronic engineering [40]. We follow this line of enquiry, but go further in investigating previously under-studied factors that in\ufb02uence the performance of simple few-shot pipelines. In particular we start with a ProtoNet [59] few-shot learner, and investigate three practically important design choices: pre-training data, neural network architecture, and meta-test time \ufb01ne-tuning. Source data While FSL addresses the small data regime, in reality FSL research is almost always about algorithms to transfer knowledge from large scale source tasks (aka metatrain) to small scale target tasks (aka meta-test). Existing literature almost always controls the source data, in order to carefully compare the impact of different knowledge transfer mechanisms of interest from hyper-networks [9] to gradientbased meta-learners [29]. 
While this is helpful to drive research on sophisticated algorithms, it does not answer the question of how choice of source data impacts performance? This question has been studied in other areas of vision and pattern recognition [10,31,60], but not for FSL. This is unhelpful for consumers of computer vision FSL research, who would be interested to know how much a simple change of source data can improve their applications? Especially since freely available large datasets already exist [21,62], and exploiting more external source data is easier in practice than implementing sophisticated state-of-the-art meta-learners. To this end we investigate the impact of unsupervised pretraining on external data \u2013 a work\ufb02ow recently termed as exploiting a foundation model [10] \u2013 on FSL tasks. This small change has substantial impact compared to 5 years of FSL research (Figure 1). Although this may violate de\ufb01nitions of the FSL problem that strictly prescribe the source set, the ef\ufb01cacy of the approach may prompt re\ufb02ection on whether this is the best problem de\ufb01nition to focus on. Neural architecture Similarly to the situation with source data, FSL studies often control neural architecture to a handful of small networks such as CNN-4-64 and ResNet-12. This is partly to enable fair comparison of FSL algorithms, but this particular suite of networks is also a consequence of the small size of the source datasets used for training in common benchmarks such as miniImageNet. Thus the architectures commonly studied in FSL are somewhat out-of-date with regard to state-of-the-art computer vision. We therefore ask to what extent state-of-the-art architectures such as vision transformers [25] can bene\ufb01t few-shot performance, especially in conjunction with larger pre-training datasets? Fine-tuning The many studies in the FSL literature are somewhat divided in whether they advocate [29,54,65] some kind of \ufb01ne-tuning during model deployment (aka meta-test) for individual tasks, or whether a \ufb01xed feature representation should be suf\ufb01cient [42, 59, 68]. We also investigate Domain A Domain B Class 1 Class 2 Class 3 Class 4 Class 5 Class 6 Support set Augmented support set Pre-trained backbone External data Meta-trained backbone Task-speci\ufb01cally \ufb01ne-tuned backbone Figure 2. Overview \u2013 A schematic of the simple-but-effective pipeline that we consider: Pre-training \u2192Meta-training \u2192Finetuning (P>M>F). Following the red arrows, the pipeline turns a class-agnostic feature backbone into a generic feature backbone and ultimately a task-speci\ufb01c feature backbone. this issue, and suggest that \ufb01ne-tuning is necessary for deploying foundation models to out-of-distribution tasks. We also introduce an algorithmic improvement to \ufb01ne-tuning by automating the learning rate selection via validation, which leads to a more performant pipeline for cross-domain FSL. In summary, we advance few-shot learning by studying design choices of a simple pipeline [59] (Figure 2), rather than developing new algorithms. We answer questions including: How does pre-training impact FSL? Can recent transformer architectures be adapted to FSL? and How to best exploit \ufb01ne-tuning? Based on this analysis we demonstrate a new baseline for FSL that surpasses state-of-the-art performance, while being simple and easy to implement. 2. 
Related Work Few-shot learning Few-shot learning is now a deep and widely studied area too large to review in detail here, and we refer to relevant surveys for an overview [35, 69]. A key point is that, despite the name, almost all FSL methods provide algorithms for transferring knowledge from a large set of source data, to a set of sparsely annotated target categories of interest. Much activity in the \ufb01eld falls under the umbrella of meta-learning [35], which aims to construct a data-ef\ufb01cient learner from the source (aka meta-train) dataset by simulating few-shot learning problems, and then deploy the customized learner on the target (aka meta-test) set. The resulting learner may take the form of an initialization [29], learned metric [59], Bayesian prior [72], or optimizer [54]. Simple-but-effective baselines In competition with the plethora of sophisticated few-shot learners [35,69] such as those mentioned above, a number of recent studies have advocated strong baselines that perform comparably well while being simpler. These are often based on a transfer learning [70] pipeline. They apply a conventional deep learner on the source data, before adapting to the few-shot target data by training a simple linear [19,51,63] or centroid [68] classi\ufb01er \fon the \ufb01xed representation, or \ufb01ne-tuning the feature backbone as well [23]. These methods mostly use standardized FSL source datasets (such as miniImageNet) and architectures (such as ResNet-12 and WRN-10-28) to enable direct comparisons of the advocated simple baselines to sophisticated learners. In contrast, we speci\ufb01cally aim to explore how far practical FSL performance can be pushed by exploiting other available pre-training datasets and architectures. A few studies have evaluated FSL on a larger scale using datasets such as ImageNet1K [20] or ImageNet21K [23]. However by changing both the source and target sets, this does not make it clear how choice/scale of source data impacts a given target problem \u2013 the question that we answer here. Others have explored the impact of conventional pretraining prior to meta-learning [20] or as a regularizer during meta-learning [30] \u2013 but without exploiting extra data. Bigger data and architectures The impact of source datasets is widely studied in standard supervised [60] and self-supervised [10, 31] learning in vision, and in pattern recognition applications outside of vision [3,10,13,22]. However, it is not widely evaluated in FSL, which is a surprising omission, since as we shall see it may well be the easiest way to improve practical FSL performance. Similarly, existing FSL methods are almost exclusively based on a few less common architectures (e.g., Conv-4-64 and ResNet-12), which maybe due to the very \ufb01rst experimental setup on small datasets like Omniglot [29, 66]. Transformers have seen limited use in FSL, mainly for metric learning [24], but not for feature extraction. We explore how recent transformer feature extractors can be trained and applied to FSL, especially when combined with a foundation model [10] pre-trained on larger source datasets. Self-supervised & few-shot Our pipeline extends the typical unsupervised pre-train \u2192supervised \ufb01ne-tune work\ufb02ow of the self-supervised research community [28,39], which has recently demonstrated strong performance for low-shot supervised learning [15, 18, 27]. 
However, there has been limited direct comparison of self-supervised (SSL) and FSL community methods for data ef\ufb01cient learning due to different typical evaluation practices and benchmarks. For example, many SSL evaluations perform unsupervised representation learning on ImageNet, before performing few-shot supervised learning within ImageNet [15,18], which violates usual FSL community requirement of disjoint source and target data. One contribution of this paper is to provide a degree of comparison between and combination of the SSL and FSL approaches. For example, our MetaDataset, CDFSL and teaser Figure 1 results, use disjoint source and target data but bene\ufb01t from external self-supervised pre-training. Cross-domain few-shot A FSL variant of particular practical interest is cross-domain few-shot [33], where the source/meta-train dataset is signi\ufb01cantly different to the target/meta-test dataset. This is more challenging than the standard within-domain setting, but more practically relevant. This is because in many scenarios where FSL is of interest such as medical or earth observation imaging [33], the target data for FSL is signi\ufb01cantly different to available source data (such as (mini-)ImageNet [21]). Major benchmarks of this type are CDFSL [33] and meta-dataset [65]. 3. A Simple Pipeline for FSL Problem formulation Few-shot learning (FSL) aims to learn a model with only a few annotated examples. One widely adopted formulation for FSL was introduced by Vinyals et al. [66] from a meta-learning perspective, where the assumption is that one should learn to solve new few-shot tasks based on previously seen experience of many similar few-shot tasks. Therefore, the FSL problem is usually organized in two phases: meta-training a few-shot learner on a distribution of training tasks and meta-testing the resulting learner by evaluating it on novel few-shot tasks. Within each phase, data arrives in an episodic fashion, where the \u201ctrain-set\u201d and \u201ctest-set\u201d of each task are called support set and query set respectively to avoid terminology confusion. In the case of classi\ufb01cation, the dif\ufb01culty level of an episode is described as K-way-N-shot, which corresponds to learning a classi\ufb01er for K classes given N examples per class in the support set. It is common to learn one model for each dif\ufb01culty level, but a more realistic setting [65] is to learn a global model for various K\u2019s and N\u2019s. This is sometimes called various-way-various-shot, and we address this more practical setting here. This is also a reason to prefer simple pipelines over sophisticated meta-learners that may not be easily extended to the various-way-various-shot setting. A different approach to small-data learning appears in the transfer learning [12, 70] and self-supervision [10, 17] literature. In this case one pre-trains a model using some large source data, and then re-purposes it for the sparse data target task of interest. The pre-training step aims to reduce the sample complexity of learning the target problem in the adaptation step. Although typically studied separately, both families of approach provide mechanisms for knowledge transfer from source data to the target few-shot problem of interest. 
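The episodic formulation above can be made concrete with a small sampler. The sketch below is illustrative only (the helper name sample_episode, the dict-of-lists pool format, and the particular K/N ranges are assumptions, not taken from the benchmarks); it draws K and N per episode, matching the various-way-various-shot setting discussed above.

```python
import random

def sample_episode(class_to_items, k_range=(5, 50), n_range=(1, 5),
                   n_query=15, rng=random):
    """Sample one K-way-N-shot episode (support set + query set).

    class_to_items: dict mapping each class label to a list of examples
    (assumed large enough for the requested shots and queries).
    K and the per-class shot count N are drawn per episode, so a single
    model can be trained/evaluated under various-way-various-shot."""
    k = min(rng.randint(*k_range), len(class_to_items))
    classes = rng.sample(list(class_to_items), k)
    support, query = [], []
    for new_label, cls in enumerate(classes):      # relabel classes to 0..K-1
        n_shot = rng.randint(*n_range)
        items = rng.sample(class_to_items[cls], n_shot + n_query)
        support += [(x, new_label) for x in items[:n_shot]]
        query += [(x, new_label) for x in items[n_shot:]]
    return support, query
```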
Towards the goal of high performance few-shot learning, we combine both pre-training (typically on auxiliary unlabeled data, which is freely and ubiquitously available) and metalearning (episodic training with labels) together in a simple sequential pipeline using a single feature extractor backbone. Our pipeline consists of three phases: 1) pre-training the feature backbone on unlabeled external data using selfsupervised loss, 2) meta-training the feature backbone on labeled simulated few-shot tasks using ProtoNet [59] loss, and 3) deploying the feature backbone on novel few-shot \ftasks with optional \ufb01ne-tuning on the augmented support set of each task. A schematic of our pipeline is shown in Figure 2, which we call P>M>F (i.e., the pipeline Pre-training \u2192Meta-training \u2192Fine-tuning ). We next outline how the feature backbone is updated in different stages. 3.1. Pre-training of backbone We consider the feature backbones of ResNet [34] or ViT [25], to provide the foundation models in our pipeline. There are then several well-established self-supervised learning algorithms for the pre-training step: DINO [15] uses ImageNet1K and exploits the consistency in prediction between a large crop and multiple local crops of the same image, where a large crop is highly likely to overlap with a foreground object in the case of ImageNet images; BEiT [6] amounts to solving a masked image reconstruction task on the ImageNet-21K dataset in line with the original BERT pre-training [22] for text data; and CLIP [53] leverages image captions in the YFCC100m dataset to align image and caption representations in a common feature space. For more \ufb02exible architectures like ViT [25], pre-training on external data is important, as they are hard to train on common small-sized FSL benchmarks (Figure 1 and Table 1). 3.2. Meta-training with ProtoNet As the goal is to build a simple pipeline, we consider the prototypical network (ProtoNet) [59], which constructs class centroids dynamically for each episode and then performs nearest centroid classi\ufb01cation. Speci\ufb01cally, ProtoNet only requires a feature backbone f to map data points to a mdimensional feature space: f : X \u2192Rm, and the probability of a query image x belonging to class k is given by p(y = k|x) = exp \u0000\u2212d(f(x), ck) \u0001 P k\u2032 exp \u0000\u2212d(f(x), ck\u2032) \u0001, (1) where d is implemented by a cosine distance in our work as opposed to the commonly chosen Euclidean distance and ck is the prototype of class k, de\ufb01ned as ck = 1 Nk P i:yi=k f(xi) and Nk = P i:yi=k 1 on the support set. Note that the prototypes can be computed regardless of the value of k. This enables ProtoNet to be trained and deployed under various-way-various-shot setting. 3.3. Meta-testing with \ufb01ne-tuning To be consistent with meta-training, by default, we deploy the meta-trained ProtoNet directly on all novel tasks. However, if the a novel task is drawn from an unseen domain, the learned feature representation may fail to generalize due to a substantial shift in the data distribution. To this end, we propose to \ufb01ne-tune the feature backbone by a few gradient steps with the assistance of data augmentation. The details are summarized as PyTorch pseudo code in Algorithm 1. 
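The nearest-centroid rule of eq. (1) reduces to a few tensor operations once features are extracted. The sketch below is a minimal illustration, not the released code: the helper name prototype_logits is assumed, unit-normalized features are used so that the logits are cosine similarities, and any temperature scaling is left out. Algorithm 1 below then wraps this same computation inside the fine-tuning loop.

```python
import torch
import torch.nn.functional as F

def prototype_logits(support_feats, support_labels, query_feats, num_classes):
    """Nearest-centroid classification with a cosine metric (cf. eq. (1)).

    support_feats: (Ns, D); support_labels: (Ns,) long tensor in {0..K-1};
    query_feats: (Nq, D). Returns (Nq, K) logits; a softmax over them
    gives p(y = k | x)."""
    one_hot = F.one_hot(support_labels, num_classes).float()          # (Ns, K)
    counts = one_hot.sum(dim=0).clamp(min=1).unsqueeze(1)             # (K, 1)
    prototypes = (one_hot.t() @ support_feats) / counts               # class means c_k
    prototypes = F.normalize(prototypes, dim=-1)
    query_feats = F.normalize(query_feats, dim=-1)
    return query_feats @ prototypes.t()                               # cosine similarities
```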
Algorithm 1 PyTorch pseudo code for \ufb01ne-tuning # Inputs: a task including supp_x, supp_y, query_x # backbone_state: meta-trained backbone weights # optimizer: Adam optimizer # Outputs: logits backbone = create_model_from_checkpoint(backbone_state) def single_step(z): supp_f = backbone(supp_x) proto = compute_prototypes(supp_f, supp_y) f = backbone(z) logits = f.norm() @ proto.norm().T # cos similarity loss = cross_entropy_loss(logits, supp_y) return logits, loss # fine-tuning loop for i in range(num_steps): aug_supp_x = rand_data_augment(supp_x) _, loss = single_step(aug_supp_x) loss.backward() # back-prop optimizer.step() # gradient descent logits, _ = single_step(query_x) # classification Our \ufb01ne-tuning algorithm is similar to that of [33, 43] who \ufb01ne-tune the model weights using the support set since this is the only accessible labeled data at meta-test time. We exploit the support set slightly differently: we use data augmentation to create a pseudo query set derived from the support set; as such, we do not need to compute prototypes using the support set and then again apply the prototypes on the same support set using eq. (1). Besides, we simply update the entire backbone rather than exploring partial model adaptation. Learning rate selection We observe that the \ufb01ne-tuning performance is relatively sensitive to the choice of learning rate (see supplemental material for more analysis). However, existing few-shot learning problem formulation does not offer a validation set for each task to choose the best learning rate for \ufb01ne-tuning. Previous work [33,43] choose a learning rate a priori and \ufb01x it for every task. This strategy requires a good understanding of the backbone architecture but still leads to sub-optimal performance in general. Given a task with very few labeled images (i.e. the support set), it is almost unlikely to identify which learning rate yields good generalization for unlabeled images (i.e. the query set). The good news is that we \ufb01nd empirically the best learning rate is relatively stable across tasks within the same domain. To this end we propose to sample N = 5 extra tasks from each domain and automate domain-wise learning rate search within a reasonable range (e.g., {0.01, 0.001, 0.0001, 0}). The best learning rate is then used for every task within the domain. This additional step amounts to preparing a few labeled images per domain to create a validation set, which makes sense in practice as we can easily organize tasks by domains and identify domain for individual tasks to look up the corresponding learning rate once searched. \f4. Experiments Meta-training datasets We use standard benchmarks to evaluate our proposed pipeline. miniImageNet [66] contains 100 classes from ImageNet-1k, which is then split into 64 training, 16 validation and 20 testing classes; each image is downsampled to 84\u00d784. CIFAR-FS [8] is created by dividing the original CIFAR-100 into 64 training, 16 validation and 20 testing classes. The images are of size 32\u00d732. Meta-Dataset [65] subsumes 10 public image datasets of a diverse range of domains: ImageNet-1k, Omniglot, FGVCAircraft, CUB-200-2011, Describable Textures, QuickDraw, FGVCx Fungi, VGG Flower, Traf\ufb01c Signs and MSCOCO. Each dataset has train/val/test splits. We follow the two training protocols proposed by [65] and [24] respectively. 
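The domain-wise learning-rate selection heuristic described above is equally simple to express. The sketch below is illustrative and assumes a caller-supplied evaluate_with_finetuning(task, lr) helper that runs the fine-tuning loop of Algorithm 1 on one episode and returns query accuracy; the function name select_domain_lr is also hypothetical.

```python
def select_domain_lr(sample_task, evaluate_with_finetuning,
                     candidate_lrs=(0.01, 0.001, 0.0001, 0.0),
                     num_val_tasks=5):
    """Pick one fine-tuning learning rate per domain on a few extra tasks.

    sample_task: callable returning a fresh episode from the domain.
    evaluate_with_finetuning: callable (task, lr) -> query accuracy.
    The selected rate is then reused for every meta-test task in the domain."""
    val_tasks = [sample_task() for _ in range(num_val_tasks)]
    mean_acc = {
        lr: sum(evaluate_with_finetuning(t, lr) for t in val_tasks) / num_val_tasks
        for lr in candidate_lrs
    }
    return max(mean_acc, key=mean_acc.get)
```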
For the former, the train/val splits of the \ufb01rst 8 datasets (indomain) are used for meta-training and validation, and the test splits of all datasets are used for meta-testing. The latter considers only ImageNet-1k\u2019s train-split for meta-training, and the other settings remain the same. For more details on Meta-Dataset we refer the readers to Appendix.3 of [65]. Evaluation For evaluating few-shot classi\ufb01cation performance, we simulate 600 episodes/tasks from the test-split for each dataset of interest. The evaluation metric is the average classi\ufb01cation accuracy over tasks. For miniImageNet and CIFAR-FS, the convention is to evaluate 5-way-1-shot (5w1s) and 5-way-5-shot episodes, and the size of the query set for each episode is \ufb01xed to 15 \u00d7 5. For Meta-Dataset, the number of ways, shots and query images are sampled uniformly at random with respect to the dataset speci\ufb01cations, except for ImageNet-1k and Omniglot (they have speci\ufb01c sampling strategies according to the hierarchy of classes). In addition, we evaluate the (5w5s) meta-trained model from miniImageNet for a cross-domain evaluation (CDFSL) [33], where 4 out-of-domain datasets are considered, and the results are reported under 5-way-5/20/50-shot settings. Training details To avoid over-engineering training for different datasets and architectures, we adopt a common training strategy for meta-training the backbone from pretrained model checkpoints (for both ResNet and ViT). This may lead to sub-optimal results for some cases, but it simpli\ufb01es comparison. Speci\ufb01cally, we train the backbone for 100 epochs, where each epoch consists of 2000 episodes/tasks. We use a warm-up plus cosine annealing learning rate schedule: the learning rate starts from 10\u22126, increases to 5 \u00d7 10\u22125 in 5 epochs and then gradually decreases to 10\u22126 with a cosine annealing. We use the validation set to decide when to early stop, and turn off strong regularization and data augmentation techniques for simplicity. 4.1. Analysis We now use the pipeline outlined in Sec 3 to answer a series of questions about few-shot learner pipeline design. Training Con\ufb01guration Benchmark Results ID Arch Pre Train MetaTr MD miniIN CIFAR 0 ViT-small DINO (IN1K) 67.4 97.0 79.8 1 ViT-small DeiT (IN1K) 67.5 98.8 84.6 2 ResNet50 DINO (IN1K) 63.8 91.5 76.1 3 ResNet50 Sup. (IN1K) 62.4 96.4 82.3 4 ViT-small DINO (IN1K) PN 78.4 98.0 92.5 5 ViT-small DEIT (IN1K) PN 79.3 99.4 93.6 6 ViT-small PN 52.8 49.1 59.8 7 ResNet50 DINO (IN1K) PN 72.4 92.0 84.0 8 ResNet50 Sup. (IN1K) PN 70.2 97.4 87.6 9 ResNet50 PN 62.9 72.2 68.4 10 ResNet18 PN 63.3 73.7 70.2 11 ViT-base DINO (IN1K) PN 79.2 98.4 92.2 12 ViT-base CLIP (YFCC) PN 80.0 98.1 93.2 13 ViT-base Sup (IN21K) PN 81.4 99.2 96.7 14 ViT-base BEIT (IN21K) PN 82.8 99.0 97.5 15 ResNet50 CLIP (YFCC) PN 75.0 92.2 82.6 Table 1. The impact of architecture and pre-training algorithm (dataset) on downstream few-shot learning performance on MetaDataset (MD), miniImageNet (miniIN) and CIFAR-FS. MetaDataset results are averaged over all target datasets while minIN and CIFAR results are 5-way-5-shot. ProtoNet (PN) nearest-centroid classi\ufb01er is used throughout for few-shot learning on the support set during meta-test. MetaTr indicates the algorithm used for episodic learning on the corresponding benchmark. Notably, 1 How does pre-training regime affect FSL? 2 Can contemporary architectures such as ViT be adapted to FSL? 3 How to exploit \ufb01ne-tuning in meta-testing? 
4.1.1 Pre-training and architectures We \ufb01rst evaluate the impact of pre-training regime (including algorithm and dataset), as well as neural architecture on FSL benchmarks Meta-Dataset [65] (train on 8 datasets), miniImageNet [66], and CIFAR-FS [8]. To clearly convey the con\ufb01guration of each experiment, results in Table 1 are organized by architecture, pre-training algorithm (and dataset) and meta-training algorithm. We assume ProtoNet (nearest-centroid) classi\ufb01er as the standard approach for meta-testing throughout, and compare either episodically trained ProtoNet or nothing as the meta-learning step between pre-training and meta-testing (column MetaTr). 1 How does pre-training regime affect FSL? From the results in Table 1 we can draw the following conclusions: (i) Pre-training on ImageNet1K generally provides a signi\ufb01cant improvement across the board compared to the conventional pipeline used by prior work which does not make use of pretraining (compare model M9 with M7 and M8, etc). (ii) We are primarily interested in unsupervised pre-training, with supervised pre-training being included as an unfair upper bound. However, state of the art unsupervised pre-training with DINO performs close to supervised pre-training (compare M3 vs M2, etc). This is noteworthy, because while there is some semantic overlap between some of the source \f(ImageNet1K) and target (Meta-Dataset, miniImageNet, CIFAR) datasets considered here, good performance can be achieved without using source labels, where there is no traintest label leakage1. (iii) Given a strong pre-training regime such as DINO, simple nearest centroid classi\ufb01cation based on pre-trained features performs well (top block including M2, etc). In particular, off-the-shelf features from a foundation model without dataset-speci\ufb01c meta-learning perform favorably compared to conventional dataset-speci\ufb01c training of ProtoNet-ResNet18 (M2 vs M10), which is arguably the closest to industry standard in FSL. (iv) Nevertheless, dataset speci\ufb01c meta-learning does improve further (M7 vs M2, etc). Simple linear readout of a frozen foundation model [18,27] is not competitive. 2 Can state of the art architectures such as ViT be adapted to FSL? Using the results in Table 1, we can also answer this question. In particular, while ViT does not train well on the smaller meta-train benchmarks (miniImageNet, CIFAR) compared to smaller architectures (see M6 vs M9, M10), it generally performs excellently when bene\ufb01ting from large pre-training data (M6 vs M4). Overall ViT outperforms the industry standard ResNet18, as well as our ResNet50 baseline, across the board when bene\ufb01tting from pre-training. We remark that our ResNet50 baseline also performs comparitively poorly without pre-training, especially on the smaller miniImageNet and CIFAR, suggesting that it is also too large to train well on the target datasets alone. Other foundation models Overall we can see that larger pre-training data sources, and recent architectures make a huge difference to downstream FSL performance on standard benchmarks. We also compared a selection of other foundation models [10] in M11-15. We can see that (i) All the foundation models lead to substantial improvements on standard within-dataset training (M10,M9), (ii) The largest foundation models using, e.g., ViT-base and ImageNet21K or YFCC data source lead to strongest performance across the board, but do not outperform hugely the more economic DINO+ImageNet1K-based ViT-small (M4). 
For ef\ufb01ciency of pre-training and deployment, we take this to be our default model in the following section. 1 + 2 How does pre-training and architecture impact other Few-Shot Learners? Our main experiments built upon ProtoNet as a widely used industry standard. We next 1In the case of miniImageNet and Meta-Dataset, parts of ImageNet1K are used in both meta-train and meta-test splits. EG: since Meta-Dataset\u2019s ImageNet uses a 712/288 source/target class split, this means that for one of Meta-Dataset\u2019s 10 domains, there is some data (but not label) overlap between pre-train and meta-test for some foundation models. As discussed in Sec. 2, this overlap is ubiquitious in typical self-supervision evaluation pipelines [15, 17]. It is less common in FSL evaluation pipelines, but corresponds to making a semi-supervised or transductive assumption in terms of data access as per [38,45,49,55]. Nevertheless, we do not think this is a signi\ufb01cant factor in the strong results, as CLIP\u2019s YFCC does not have this overlap and performs similarly to the ImageNet1K based models. Train Con\ufb01g Benchmark ID Arch Pre Train MetaTr miniIN CIFAR 5/1 5/5 5/1 5/5 0 ViT-small DINO (IN1K) 88.8 97.0 59.1 79.8 1 ViT-small DINO (IN1K) ProtoNet 93.1 98.0 81.1 92.5 2 ResNet18 MetaQDA 65.1 81.0 3 ViT-small DINO (IN1K) MetaQDA 92.0 97.0 77.2 90.1 4 ResNet12 MetaOptNet 64.1 80.0 72.8 85.0 5 ViT-small DINO (IN1K) MetaOptNet 92.2 97.8 70.2 84.1 Table 2. Impact of architecture and pre-training on state-of-the-art few-shot learners: MetaQDA [72], MetaOptNet [42]. explore how our pipeline impacts two few-shot learners that are more representative of recent state of the art, namely MetaOptNet [42] and MetaQDA [72]. From the results in Table 2, we can see that: (i) MetaQDA and MetaOptNet do improve on direct feature transfer (M5 and M3 vs M0) and on the simpler ResNet features they were initially evaluated with (M5 vs M4, M3 vs M2). But (ii) With the stronger features, they are outperformed by the simpler ProtoNet learner (M3 and M5 vs M1). This suggests previous conclusions about comparative meta-learner performance may need re-evaluating in this new regime of stronger features. Few-shot learning v.s. self-supervised learning Existing literature generally fails to directly compare algorithms from the few-shot learning community (such as ProtoNet, [59], MAML [29], MetaOptNet [42], etc), with those from the self-supervised community (such as DINO [15], SimCLR [17,18], etc). This is partly because the popular evaluation protocol is different: For example 5-way-1-shot regime is popular the FSL community, vs 1% labels (\u22481000-way10-shot in the case of ImageNet) in the SSL community; network architectures differ (\u2264ResNet18 vs \u2265ResNet50 respectively); and image resolutions differ (84\u00d7 vs full). Our results provide a taster of such a direct comparison. Overall they suggest that frozen self-supervised foundation models (using extra pre-training data) are competitive out of the box compared to standard few-shot learners (using only metatraining data). However, more interestingly, combining these two paradigms as we have done, easily leads to state of the art performance on typical FSL metrics. Class overlap between pre-training and meta-testing Although unsupervised pre-training does not utilize labels, it is very likely that some classes used by pre-training also appear in meta-testing. Does this class overlap go against the very de\ufb01nition of few-shot learning? 
From a meta-learning point of view, the answer is yes. But we argue that class overlap is almost unavoidable unless a careful data split is simulated. For example, in the case of Meta-Dataset, the CUB dataset [67], the Aircraft dataset [50] and the COCO dataset [47] have a class overlap with ImageNet [24,32] but they are still used in meta-testing. As we consider more practical large-scale experiments, the class overlap issue be\fM Arch PreTr MetaTr MetaTe Avg Out-D 1 ViT-small DINO PN (IN) PN 68.38 67.68 2 ViT-small DINO PN (IN) PN+FT(lr=0.01) 76.05 76.54 3 ViT-small DINO PN (IN) PN+FT(lr=0.001) 74.47 74.51 4 ViT-small DINO PN (IN) PN+FT(Tuned) 77.53 77.85 5 ViT-small DINO PN (MD) PN 78.43 55.71 6 ViT-small DINO PN (MD) PN+FT(lr=0.01) 76.09 73.26 7 ViT-small DINO PN (MD) PN+FT(lr=0.001) 74.64 69.97 8 ViT-small DINO PN (MD) PN+FT(Tuned) 83.13 75.72 Table 3. Fine-tuning (FT) during meta-test on Meta-Dataset. The meta-train (MetaTr) setting indicates the source dataset as ImageNet only (IN) or full MetaDataset (MD). Results are the averages across all domains within meta-dataset (Avg), and just the out-ofdistribution subset (Out-D). comes ubiquitous. We should worry about this issue if we were benchmarking a meta-learning algorithm, but for the nature of few-shot learning, benchmarking the capability of quickly constructing a classi\ufb01er from very few labels is not hindered by class overlap. This is why self-supervised learning community is not bothered by this issue at all. It is worth mentioning that a similar setting called \u201cfew-shot few-shot learning\u201d has been proposed by [46,71], where they avoid overlap by either carefully picking up pre-training data from a different domain or crawling pre-training data of base categories from Internet. Alternatively, one may avoid overlap by using a different modality. We advocate meta-learning researchers to consider this controlled setting as a testing bed for incorporating powerful pre-trained feature backbones. INet Omglot Acraft CUB DTD QDraw Fungi Flower Sign COCO Avg 50 55 60 65 70 75 80 85 90 95 M1: DINO + PN (IN) M2: DINO + PN (IN) + FT M5: DINO + PN (MD) Figure 3. The impact of \ufb01ne-tuning during meta-test on MetaDataset. Held out datasets such as Signs and COCO bene\ufb01t from \ufb01ne-tuning; as do those very different from ImageNet such as omniglot and QuickDraw. 4.1.2 Fine-tuning The previous experiments used a \ufb01xed feature extractor together with ProtoNet for meta-testing. We next investigate use of \ufb01ne-tuning during meta-testing to further improve performance. We focus on the DINO pre-trained ViT models, based on their strong performance in Section 4.1.1. 3 How to best exploit \ufb01ne-tuning for meta-testing? Method (Backbone) Ext. Ext. CIFAR-FS MiniImageNet dat. lab. 
5w1s 5w5s 5w1s 5w5s Inductive ProtoNet (CNN-4-64) [59] 49.4 68.2 55.5 72.0 Baseline++ (CNN-4-64) [19] 48.2 66.4 MetaOpt-SVM (ResNet12) [42] 72.0 84.3 61.4 77.9 Meta-Baseline (ResNet12) [20] 68.6 83.7 RS-FSL (ResNet12) [2] \u0013 65.3 Transductive Fine-tuning (WRN-28-10) [23] 76.6 85.8 65.7 78.4 SIB (WRN-28-10) [36] 80.0 85.3 70.0 79.2 PT-MAP (WRN-28-10) [37] 87.7 90.7 82.9 88.8 CNAPS + FETI (ResNet18) [7] \u0013 \u0013 79.9 91.5 Self-supervised ProtoNet (WRN-28-10) [30] 73.6 86.1 62.9 79.9 ProtoNet (AMDIM ResNet) [16] \u0013 76.8 91.0 EPNet + SSL (WRN-28-10) [57] \u0013 79.2 88.1 Semi-supervised LST (ResNet12) [45] \u0013 70.1 78.7 PLCM (ResNet12) [38] \u0013 77.6 86.1 70.1 83.7 P>M>F (IN1K, RN50) \u0013 73.7 84.0 79.2 92.0 P>M>F (IN1K, ViT-Small) \u0013 81.1 92.5 93.1 98.0 P>M>F (IN1K, ViT-base) \u0013 84.3 92.2 95.3 98.4 Table 4. miniImageNet & CIFAR \u2013 Comparison with representative SOTA FSL algorithms. Methods using external data and/or labels are indicated. To answer this question, we compare vanilla feature transfer as explored previously, with ProtoNet, and ProtoNet with episode-wise \ufb01ne-tuning on the support set (ProtoNet+FT) as outlined in Section 3.3. We use Meta-Dataset including both conditions of treating ImageNet alone as the source, and joint meta-training on all of Meta-Dataset. From the results in Figure 3 and Table 3 we can draw the following conclusions: (i) Meta-training on the full Meta-Dataset improves on meta-training on ImageNet-training alone (M5 vs M1). (ii) Fine-tuning during meta-test improves substantially in the out-of-distribution datasets, and especially in the case where meta-training is conducted on ImageNet, and then deployed across-domain to all the other Meta-Dataset tasks: See Out-D column and M2 vs M1 in Table 3; blue vs orange bars in Figure 3 for OmniGlot, QuickDraw, traf\ufb01c signs, etc. However, for the condition where more Meta-Dataset domains are used for training and testing, \ufb01ne-tuning has inconsistent impact across domains: While it is helpful for the remaining OOD datasets, it is not helpful overall (M5 vs M6 for Avg and Out-D). Overall feature backbone updates by \ufb01ne-tuning are more helpful for domains unseen during meta-training, concurring with [43, 65]. On analysing the inconsistent impact of \ufb01ne-tuning, we found this is due to dif\ufb01culty in choosing an appropriate learning rate. Using any single learning rate throughout, as we did above (lr=0.01) is poorly tuned for some datasets. We therefore also explore our learning rate selection heuristic proposed in Section 3.3, and we see this leads to the best performance (M4 vs M2). 4.2. 
Results on standard benchmarks We call our pipeline P>M>F, which can be instantiated with any pre-training algorithm and backbone architectures, \f8 in-domain datasets In-domain Out-of-domain INet Omglot Acraft CUB DTD QDraw Fungi Flower Sign COCO Avg ProtoNet [65] (RN18) 67.01 44.5 79.56 71.14 67.01 65.18 64.88 40.26 86.85 46.48 63.29 CNAPs [56] (RN18+Adapter) 50.8 91.7 83.7 73.6 59.5 74.7 50.2 88.9 56.5 39.4 66.90 SUR [26] (RN18+Adapter) 57.2 93.2 90.1 82.3 73.5 81.9 67.9 88.4 67.4 51.3 75.32 T-SCNAPs [7] (RN18+Adapter) 58.8 93.9 84.1 76.8 69.0 78.6 48.8 91.6 76.1 48.7 72.64 URT [48] (RN18+Adapter) 55.7 94.4 85.8 76.3 71.8 82.5 63.5 88.2 69.4 52.2 73.98 FLUTE [64] (RN18) 51.8 93.2 87.2 79.2 68.8 79.5 58.1 91.6 58.4 50.0 71.78 URL [44] (RN18+Adapter) 57.51 94.51 88.59 80.54 76.17 81.94 68.75 92.11 63.34 54.03 75.75 ITA [43] (RN18+Adapter) 57.35 94.96 89.33 81.42 76.74 82.01 67.4 92.18 83.55 55.75 78.07 P>M>F (DINO/IN1K, RN50) 67.51 85.91 80.3 81.67 87.08 72.84 60.03 94.69 87.17 58.92 77.61 P>M>F (DINO/IN1K, ViT-small) 74.59 91.79 88.33 91.02 86.61 79.23 74.2 94.12 88.85 62.59 83.13 P>M>F (DINO/IN1K, ViT-base) 77.02 91.76 89.73 92.94 86.94 80.2 78.28 95.79 89.86 64.97 84.75 In-domain = ImageNet In-domain Out-of-domain INet Omglot Acraft CUB DTD QDraw Fungi Flower Sign COCO Avg ProtoNet [65] (RN18) 50.5 59.98 53.1 68.79 66.56 48.96 39.71 85.27 47.12 41 56.10 ALFA+FP-MAML [5] (RN12) 52.8 61.87 63.43 69.75 70.78 59.17 41.49 85.96 60.78 48.11 61.41 BOHB [58] (RN18) 51.92 67.57 54.12 70.69 68.34 50.33 41.38 87.34 51.8 48.03 59.15 CTX [24] (RN34) 62.76 82.21 79.49 80.63 75.57 72.68 51.58 95.34 82.65 59.9 74.28 P>M>F (DINO/IN1K, RN50) 67.08 75.33 75.39 72.08 86.42 66.79 50.53 94.14 86.54 58.2 73.25 P>M>F (DINO/IN1K, ViT-small) 74.69 80.68 76.78 85.04 86.63 71.25 54.78 94.57 88.33 62.57 77.53 P>M>F (DINO/IN1K, ViT-base) 76.69 81.42 80.33 84.38 86.87 75.43 55.93 95.14 89.68 65.01 79.09 Table 5. Meta-Dataset \u2013 Comparison with SOTA FSL algorithms. ChestX ISIC EuroSAT CropDisease 5w5s 5w20s 5w50s 5w5s 5w20s 5w50s 5w5s 5w20s 5w50s 5w5s 5w20s 5w50s ProtoNet [59] (RN10) 24.05 28.21 29.32 39.57 49.50 51.99 73.29 82.27 80.48 79.72 88.15 90.81 RelationNet [61] (RN10) 22.96 26.63 28.45 39.41 41.77 49.32 61.31 74.43 74.91 68.99 80.45 85.08 MetaOptNet [42] (RN10) 22.53 25.53 29.35 36.28 49.42 54.80 64.44 79.19 83.62 68.41 82.89 91.76 Finetune [33] (RN10) 25.97 31.32 35.49 48.11 59.31 66.48 79.08 87.64 90.89 89.25 95.51 97.68 CHEF [1] (RN10) 24.72 29.71 31.25 41.26 54.30 60.86 74.15 83.31 86.55 86.87 94.78 96.77 STARTUP [52] (RN10) 26.94 33.19 36.91 47.22 58.63 64.16 82.29 89.26 91.99 93.02 97.51 98.45 DeepCluster2 [14,27] (IN1K, RN50) 26.51 31.51 34.17 40.73 49.91 53.65 88.39 92.02 93.07 93.63 96.63 97.04 P>M>F (DINO/IN1K, ResNet50) 27.13 31.57 34.17 43.78 54.06 57.86 89.18 93.08 96.06 95.06 97.25 97.77 P>M>F (DINO/IN1K, ViT-small) 27.27 35.33 41.39 50.12 65.78 73.50 85.98 91.32 95.40 92.96 98.12 99.24 Table 6. Broader study of cross-domain few-shot learning \u2013 Comparison with SOTA FSL algorithms. e.g., DINO > ProtoNet (PN) > Fine-tuning (FT). We next compare our pipeline with prior state of the art. We emphasize that our results are not directly comparable to much prior SOTA in terms of architecture and use of external data. We draw this comparison to see how simple changes (such as upgrading feature backbone to a modern network architecture and exploiting publicly available data for a largescale pre-training) compare against 5 years of intensive research on FSL algorithms. 
The results for the single-domain cases, i.e., mini-ImageNet and CIFAR-FS, are summarized in Table 4, while the results for the cross-domain datasets, i.e., Meta-Dataset and Broader Study CDFSL, are shown in Table 5 and 6 respectively. From the results we can see that our framework outperforms much the state of the art in both within-domain and cross-domain conditions despite being signi\ufb01cantly simpler than some sophisticated competitors. We remark that for the single source benchmarks in Table 4, a few competitors also used external data or ImageNet pre-training as indicated. Meanwhile our hybrid pipeline outperforms SOTA pure external self-supervision [14, 27] for CDFSL in Table 6. Our code is available at https://github.com/hushell/pmf_cvpr22. 4.3. Discussion Taken together, the results show that our simple pipeline of exploiting available pre-training data and a modern architecture often outperforms sophisticated state of the art in few-shot learning. This margin is increased using our proposed adaptive \ufb01ne-tuning mechanism in the meta-test stage. Based on these observations we make recommendations both for practitioners and few-shot learning researchers. Practitioners: Increasing pre-training data size or simply using a foundation model [10,15] and upgrading to modern architectures is likely to be more productive (and much easier to implement) than keeping up with and implementing state of the art few-shot learning algorithms. Fine-tuning is likely to be important if the target few-shot task of interest is less similar to the pre-training and meta-training data. \fFSL researchers: Our results show that using external data and modern architectures is an easy and effective way to achieve strong FSL performance, and also that some SOTA meta-learners fail to provide expected improvements in this regime. While external data violates de\ufb01nitions of the FSL problem that insist on a speci\ufb01c limited meta-train set, we should take this setting seriously to maintain practical relevance in the face of advancing self-supervision [15,28,39,53]. In particular, we recommend a new evaluation setting for all the standard FSL benchmarks, where pre-train data and architecture are freely chosen and clearly reported. Few-shot meta-learning methods are then evaluated on their ability to improve on linear readout, \ufb01ne-tuning, or our PMF baseline for the given external dataset and architecture. 5." + }, + { + "url": "http://arxiv.org/abs/2004.12696v1", + "title": "Empirical Bayes Transductive Meta-Learning with Synthetic Gradients", + "abstract": "We propose a meta-learning approach that learns from multiple tasks in a\ntransductive setting, by leveraging the unlabeled query set in addition to the\nsupport set to generate a more powerful model for each task. To develop our\nframework, we revisit the empirical Bayes formulation for multi-task learning.\nThe evidence lower bound of the marginal log-likelihood of empirical Bayes\ndecomposes as a sum of local KL divergences between the variational posterior\nand the true posterior on the query set of each task. We derive a novel\namortized variational inference that couples all the variational posteriors via\na meta-model, which consists of a synthetic gradient network and an\ninitialization network. Each variational posterior is derived from synthetic\ngradient descent to approximate the true posterior on the query set, although\nwhere we do not have access to the true gradient. 
Our results on the\nMini-ImageNet and CIFAR-FS benchmarks for episodic few-shot classification\noutperform previous state-of-the-art methods. Besides, we conduct two zero-shot\nlearning experiments to further explore the potential of the synthetic\ngradient.", + "authors": "Shell Xu Hu, Pablo G. Moreno, Yang Xiao, Xi Shen, Guillaume Obozinski, Neil D. Lawrence, Andreas Damianou", + "published": "2020-04-27", + "updated": "2020-04-27", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "stat.ML" + ], + "main_content": "INTRODUCTION While supervised learning of deep neural networks can achieve or even surpass human-level performance (He et al., 2015; Devlin et al., 2018), they can hardly extrapolate the learned knowledge beyond the domain where the supervision is provided. The problem of solving rapidly a new task after learning several other similar tasks is called meta-learning (Schmidhuber, 1987; Bengio et al., 1991; Thrun & Pratt, 1998); typically, the data is presented in a two-level hierarchy such that each data point at the higher level is itself a dataset associated with a task, and the goal is to learn a meta-model that generalizes across tasks. In this paper, we mainly focus on few-shot learning (Vinyals et al., 2016), an instance of meta-learning problems, where a task t consists of a query set dt := {(xt,i, yt,i)}n i=1 serving as the test-set of the task and a support set dl t:={(xl t,i,yl t,i)}nl i=1 serving as the train-set. In meta-testing1, one is given the support set and the inputs of the query set xt := {xt,i}n i=1, and asked to predict the labels yt := {yt,i}n i=1. In meta-training, yt is provided as the ground truth. The setup of few-shot learning is summarized in Table 1. A important distinction to make is whether a task is solved in a transductive or inductive manner, that is, whether xt is used. The inductive setting is what was originally proposed by Vinyals et al. (2016), in which only dl t is used to generate a model. The transductive setting, as an alternative, has the advantage of being able to see partial or all points in xt before making predictions. In fact, 1To distinguish from testing and training within a task, meta-testing and meta-training are referred to as testing and training over tasks. 1 arXiv:2004.12696v1 [cs.LG] 27 Apr 2020 \fPublished as a conference paper at ICLR 2020 Support set Query set dl t := {(xl t,i, yl t,i)}nl i=1 xt := {xt,i}n i=1 yt = {yt,i}n i=1 Meta-training \u2713 \u2713 \u2713 Meta-testing \u2713 \u2713 \u0017 Table 1: The setup of few-shot learning. If task t is used for meta-testing, yt is not given to the model. Nichol et al. (2018) notice that most of the existing meta-learning methods involve transduction unintentionally since they use xt implicitly via the batch normalization (Ioffe & Szegedy, 2015). Explicit transduction is less explored in meta-learning, the exception is Liu et al. (2018), who adapted the idea of label propagation (Zhu et al., 2003) from graph-based semi-supervised learning methods. We take a totally different path that meta-learn the \u201cgradient\u201d descent on xt without using yt. Due to the hierarchical structure of the data, it is natural to formulate meta-learning by a hierarchical Bayes (HB) model (Good, 1980; Berger, 1985), or alternatively, an empirical Bayes (EB) model (Robbins, 1985; Kucukelbir & Blei, 2014). The difference is that the latter restricts the learning of meta-parameters to point estimates. 
In this paper, we focus on the EB model, as it largely simpli\ufb01es the training and testing without losing the strength of the HB formulation. The idea of using HB or EB for meta-learning is not new: Amit & Meir (2018) derive an objective similar to that of HB using PAC-Bayesian analysis; Grant et al. (2018) show that MAML (Finn et al., 2017) can be understood as a EB method; Ravi & Beatson (2018) consider a HB extension to MAML and compute posteriors via amortized variational inference. However, unlike our proposal, these methods do not make full use of the unlabeled data in query set. Roughly speaking, they construct the variational posterior as a function of the labeled set dl t without taking advantage of the unlabeled set xt. The situation is similar in gradient based meta-learning methods (Finn et al., 2017; Ravi & Larochelle, 2016; Li et al., 2017b; Nichol et al., 2018; Flennerhag et al., 2019) and many other meta-learning methods (Vinyals et al., 2016; Snell et al., 2017; Gidaris & Komodakis, 2018), where the mechanisms used to generate the task-speci\ufb01c parameters rely on groundtruth labels, thus, there is no place for the unlabeled set to contribute. We argue that this is a suboptimal choice, which may lead to over\ufb01tting when the labeled set is small and hinder the possibility of zero-shot learning (when the labeled set is not provided). In this paper, we propose to use synthetic gradient (Jaderberg et al., 2017) to enable transduction, such that the variational posterior is implemented as a function of the labeled set dl t and the unlabeled set xt. The synthetic gradient is produced by chaining the output of a gradient network into autodifferentiation, which yields a surrogate of the inaccessible true gradient. The optimization process is similar to the inner gradient descent in MAML, but it iterates on the unlabeled xt rather than on the labeled dl t, since it does not rely on yt to compute the true gradient. The labeled set for generating the model for an unseen task is now optional, which is only used to compute the initialization of model weights in our case. In summary, our main contributions are the following: 1. In section 2 and section 3, we develop a novel empirical Bayes formulation with transduction for meta-learning. To perform amortized variational inference, we propose a parameterization for the variational posterior based on synthetic gradient descent, which incoporates the contextual information from all the inputs of the query set. 2. In section 4, we show in theory that a transductive variational posterior yields better generalization performance. The generalization analysis is done through the connection between empirical Bayes formulation and a multitask extension of the information bottleneck principle. In light of this, we name our method synthetic information bottleneck (SIB). 3. In section 5, we verify our proposal empirically. Our experimental results demonstrate that our method signi\ufb01cantly outperforms the state-of-the-art meta-learning methods on few-shot classi\ufb01cation benchmarks under the one-shot setting. 2 \fPublished as a conference paper at ICLR 2020 } } dt dl t N n nl xl t,i yl t,i xt,i yt,i wt \u03d5 \u03c8 f Generative Inference Labeled Unlabeled (a) Graphical model of EB (b) MAML (c) Our method (SIB) Figure 1: (a) The generative and inference processes of the empirical Bayes model are depicted in solid and dashed arrows respectively, where the meta-parameters are denoted by dashed circles due to the point estimates. 
A comparison between MAML (6) and our method (SIB) (10) is shown in (b) and (c). MAML is an inductive method since, for a task t, it \ufb01rst constructs the variational posterior (with parameter \u03b8K) as a function of the support set dl t, and then test on the unlabeled xt; while SIB uses a better variational posterior as a function of both dl t and xt: it starts from an initialization \u03b80 t (dl t) generated using dl t, and then yields \u03b8K t by running K synthetic gradient steps on xt. 2 META-LEARNING WITH TRANSDUCTIVE INFERENCE The goal of meta-learning is to train a meta-model on a collection of tasks, such that it works well on another disjoint collection of tasks. Suppose that we are given a collection of N tasks for training. The associated data is denoted by D := {dt := (xt, yt)}N t=1. In the case of few-shot learning, we are given in addition a support set dl t in each task. In this section, we revisit the classical empirical Bayes model for meta-learning. Then, we propose to use a transductive scheme in the variational inference by implementing the variational posterior as a function of xt. 2.1 EMPIRICAL BAYES MODEL Due to the hierarchical structure among data, it is natural to consider a hierarchical Bayes model with the marginal likelihood pf(D) = Z \u03c8 pf(D|\u03c8)p(\u03c8) = Z \u03c8 h N Y t=1 Z wt pf(dt|wt)p(wt|\u03c8) i p(\u03c8). (1) The generative process is illustrated in Figure 1 (a, in red arrows): \ufb01rst, a meta-parameter \u03c8 (i.e., hyper-parameter) is sampled from the hyper-prior p(\u03c8); then, for each task, a task-speci\ufb01c parameter wt is sampled from the prior p(wt|\u03c8); \ufb01nally, the dataset is drawn from the likelihood pf(dt|wt). Without loss of generality, we assume the log-likelihood model factorizes as log pf(dt|wt) = n X i=1 log pf(yt,i|xt,i, wt) + log p(xt,i|wt), = n X i=1 \u22121 n\u2113t \u0000\u02c6 yt,i(f(xt,i), wt), yt,i \u0001 + constant. (2) It is the setting advocated by Minka (2005), in which the generative model p(xt,i|wt) can be safely ignored since it is irrelevant to the prediction of yt. To simplify the presentation, we still keep the notation pf(dt|wt) for the likelihood of the task t and use \u2113t to specify the discriminative model, which is also referred to as the task-speci\ufb01c loss, e.g., the cross entropy loss. The \ufb01rst argument in \u2113t is the prediction, denoted by \u02c6 yt,i = \u02c6 yt,i(f(xt,i), wt), which depends on the feature representation f(xt,i) and the task-speci\ufb01c weight wt. Note that rather than following a fully Bayesian approach, we leave some random variables to be estimated in a frequentist way, e.g., f is a meta-parameter of the likelihood model shared by all tasks, for which we use a point estimate. As such, the posterior inference about these variables will be largely simpli\ufb01ed. For the same reason, we derive the empirical Bayes (Robbins, 1985; Kucukelbir & Blei, 2014) by taking a point estimate on \u03c8. The marginal likelihood 3 \fPublished as a conference paper at ICLR 2020 now reads as p\u03c8,f(D) = N Y t=1 Z wt pf(dt|wt)p\u03c8(wt). (3) We highlight the meta-parameters as subscripts of the corresponding distributions to distinguish from random variables. Indeed, we are not the \ufb01rst to formulate meta-learning as empirical Bayes. The overall model formulation is essentially the same as the ones considered by Amit & Meir (2018); Grant et al. (2018); Ravi & Beatson (2018). Our contribution lies in the variational inference for empirical Bayes. 
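For readability, the generative model of this section can be collected in one display (a restatement of eqs. (1)–(3) above with the same notation; the constant absorbs the label-independent term):

```latex
\begin{align}
  p_f(\mathcal{D})
    &= \int_{\psi} \Big[ \prod_{t=1}^{N} \int_{w_t} p_f(d_t \mid w_t)\, p(w_t \mid \psi) \Big] p(\psi),
    \tag{1}\\
  \log p_f(d_t \mid w_t)
    &= \sum_{i=1}^{n} \Big( \log p_f(y_{t,i} \mid x_{t,i}, w_t) + \log p(x_{t,i} \mid w_t) \Big)
     = \sum_{i=1}^{n} -\tfrac{1}{n}\, \ell_t\big(\hat{y}_{t,i}(f(x_{t,i}), w_t),\, y_{t,i}\big) + \text{const},
    \tag{2}\\
  p_{\psi,f}(\mathcal{D})
    &= \prod_{t=1}^{N} \int_{w_t} p_f(d_t \mid w_t)\, p_\psi(w_t).
    \tag{3}
\end{align}
```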
2.2 AMORTIZED INFERENCE WITH TRANSDUCTION As in standard probabilistic modeling, we derive an evidence lower bound (ELBO) on the log version of (3) by introducing a variational distribution q\u03b8t(wt) for each task with parameter \u03b8t: log p\u03c8,f(D) \u2265 N X t=1 h Ewt\u223cq\u03b8t \u0002 log pf(dt|wt) \u0003 \u2212DKL \u0000q\u03b8t(wt)\u2225p\u03c8(wt) \u0001i . (4) The variational inference amounts to maximizing the ELBO with respect to \u03b81, . . . , \u03b8N, which together with the maximum likelihood estimation of the meta-parameters, we have the following optimization problem: min \u03c8,f min \u03b81,...,\u03b8N 1 N N X t=1 h Ewt\u223cq\u03b8t \u0002 \u2212log pf(dt|wt) \u0003 + DKL \u0000q\u03b8t(wt)\u2225p\u03c8(wt) \u0001i . (5) However, the optimization in (5), as N increases, becomes more and more expensive in terms of the memory footprint and the computational cost. We therefore wish to bypass this heavy optimization and to take advantage of the fact that individual KL terms indeed share the same structure. To this end, instead of introducing N different variational distributions, we consider a parameterized family of distributions in the form of q\u03c6(\u00b7), which is de\ufb01ned implicitly by a deep neural network \u03c6 taking as input either dl t or dl t plus xt, that is, q\u03c6(dl t) or q\u03c6(dl t,xt). Note that we cannot use entire dt, since we do not have access to yt during meta-testing. This amortization technique was \ufb01rst introduced in the case of variational autoencoders (Kingma & Welling, 2013; Rezende et al., 2014), and has been extended to Bayesian inference in the case of neural processes (Garnelo et al., 2018). Since dl t and xt are disjoint, the inference scheme is inductive for a variational posterior q\u03c6(dl t). As an example, MAML (Finn et al., 2017) takes q\u03c6(dl t) as the Dirac delta distribution, where \u03c6(dl t) = \u03b8K t , is the K-th iterate of the stochastic gradient descent \u03b8k+1 t = \u03b8k t + \u03b7 \u2207\u03b8Ewt\u223cq\u03b8k t h log p(dl t|wt) i with \u03b80 t = \u03c6, a learnable initialization. (6) In this work, we consider a transductive inference scheme with variational posterior q\u03c6(dl t,xt). The inference process is shown in Figure 1(a, in green arrows). Replacing each q\u03b8t in (5) by q\u03c6(dl t,xt), the optimization problem becomes min \u03c8,f min \u03c6 1 N N X t=1 h Ewt\u223cq\u03c6(dl t,xt) \u0002 \u2212log pf(dt|wt) \u0003 + DKL \u0000q\u03c6(dl t,xt)(wt)\u2225p\u03c8(wt) \u0001i . (7) In a nutshell, the meta-model to be optimized includes the feature network f, the hyper-parameter \u03c8 from the empirical Bayes formulation and the amortization network \u03c6 from the variational inference. 3 UNROLLING EXACT INFERENCE WITH SYNTHETIC GRADIENTS It is however non-trivial to design a proper network architecture for \u03c6(dl t, xt), since dl t and xt are both sets. The strategy adopted by neural processes (Garnelo et al., 2018) is to aggregate the information from all individual examples via an averaging function. However, as pointed out by Kim et al. 4 \fPublished as a conference paper at ICLR 2020 (2019), such a strategy tends to under\ufb01t xt because the aggregation does not necessarily attain the most relevant information for identifying the task-speci\ufb01c parameter. Extensions, such as attentive neural process (Kim et al., 2019) and set transformer (Lee et al., 2019a), may alleviate this issue but come at a price of O(n2) time complexity. 
We instead design \u03c6(dl t, xt) to mimic the exact inference arg min\u03b8t DKL(q\u03b8t(wt)\u2225p\u03c8,f(wt|dt)) by parameterizing the optimization process with respect to \u03b8t. More speci\ufb01cally, consider the gradient descent on \u03b8t with step size \u03b7: \u03b8k+1 t = \u03b8k t \u2212\u03b7 \u2207\u03b8tDKL \u0010 q\u03b8k t (w) \u2225p\u03c8,f(w | dt) \u0011 . (8) We would like to unroll this optimization dynamics up to the K-th step such that \u03b8K t = \u03c6(dl t, xt) while make sure that \u03b8K t is a good approximation to the optimum \u03b8\u22c6 t , which consists of parameterizing (a) the initialization \u03b80 t and (b) the gradient \u2207\u03b8tDKL(q\u03b8t(wt) \u2225p\u03c8,f(wt|dt)). By doing so, \u03b8K t becomes a function of \u03c6, \u03c8 and xt2, we therefore realize q\u03c6(dl t,xt) as q\u03b8K t . For (a), we opt to either let \u03b80 t = \u03bb to be a global data-independent initialization as in MAML (Finn et al., 2017) or let \u03b80 t = \u03bb(dl t) with a few supervisions from the support set, where \u03bb can be implemented by a permutation invariant network described in Gidaris & Komodakis (2018). In the second case, the features of the support set will be \ufb01rst averaged in terms of their labels and then scaled by a learnable vector of the same size. For (b), the fundamental reason that we parameterize the gradient is because we do not have access to yt during the meta-testing phase, although we are able to follow (8) in meta-training to obtain q\u03b8\u22c6 t (wt) \u221dpf(dt|wt)p\u03c8(wt). To make a consistent parameterization in both meta-training and meta-testing, we thus do not touch yt when constructing the variational posterior. Recall that the true gradient decomposes as \u2207\u03b8tDKL \u0010 q\u03b8t\u2225p\u03c8,f \u0011 = E\u03f5 h 1 n n X i=1 \u2202\u2113t(\u02c6 yt,i, yt,i) \u2202\u02c6 yt,i \u2202\u02c6 yt,i \u2202wt \u2202wt(\u03b8t, \u03f5) \u2202\u03b8t i + \u2207\u03b8tDKL \u0010 q\u03b8t\u2225p\u03c8 \u0011 (9) under a reparameterization wt = wt(\u03b8t, \u03f5) with \u03f5 \u223cp(\u03f5), where all the terms can be computed without yt except for \u2202\u2113t \u2202\u02c6 yt,i . Thus, we introduce a deep neural network \u03be(\u02c6 yt,i) to synthesize it. The idea of synthetic gradients was originally proposed by Jaderberg et al. (2017) to parallelize the back-propagation. Here, the purpose of \u03be(\u02c6 yt,i) is to update \u03b8t regardless of the groundtruth labels, which is slightly different from its original purpose. Besides, we do not introduce an additional loss between \u03be(\u02c6 yt,i) and \u2202\u2113t \u2202\u02c6 yt,i since \u03be(\u02c6 yt,i) will be driven by the objective in (7). As an intermediate computation, the synthetic gradient is not necessarily a good approximation to the true gradient. To sum up, we have derived a particular implementation of \u03c6(dl t, xt) by parameterizing the exact inference update, namely (8), without using the labels of the query set, where the meta-model \u03c6 includes an initialization network \u03bb and a synthetic gradient network \u03be, such that \u03c6(xt) = \u03b8K t , the K-th iterate of the following update: \u03b8k+1 t = \u03b8k t \u2212\u03b7 h E\u03f5 h 1 n n X i=1 \u03be(\u02c6 yt,i)\u2202\u02c6 yt,i \u2202wt \u2202wt(\u03b8k t , \u03f5) \u2202\u03b8t i + \u2207\u03b8tDKL \u0010 q\u03b8k t \u2225p\u03c8 \u0011i . (10) The overall algorithm is depicted in Algorithm 1. We also make a side-by-side comparison with MAML shown in Figure 1. 
Rather than viewing (10) as an optimization process, it may be more precise to think of it as a part of the computation graph created in the forward-propagation. The computation graph of the amortized inference is shown in Figure 2, As an extension, if we were deciding to estimate the feature network f in a Bayesian manner, we would have to compute higher-order gradients as in the case of MAML. This is inpractical from a computational point of view and needs technical simpli\ufb01cations (Nichol et al., 2018). By introducing a series of synthetic gradient networks in a way similar to Jaderberg et al. (2017), the computation will be decoupled into computations within each layer, and thus becomes more feasible. We see this as a potential advantage of our method and leave this to our future work3. 2 \u03b8K t is also dependent of f. We deliberately remove this dependency to simplify the update of f. 3We do not insist on Bayesian estimation of the feature network because most Bayesian versions of CNNs underperform their deterministic counterparts. 5 \fPublished as a conference paper at ICLR 2020 Classi\ufb01er\u2028 forward SGD \u03b80 SGD SGD \u03b81 \u03b8K xt f(xt) f xl t f(xl t) init yl t \u03b8 f(x) . detach() \u0302 y grad Classi\ufb01er\u2028 backward \u2207\u03b8DKL \u03be( \u0302 y) \u2248\u2202\u2113 \u2202\u0302 y \u03be \u03bb Classi\ufb01er\u2028 forward \u0302 yt Loss\u2028 yt KL \u2113t DKL(q\u03b8K\u2225p\u03c8) + \u2212ELBO Synthetic \u2028 gradient \u2028 module Figure 2: The computation graph to compute the negative ELBO, where the input and output of the synthetic gradient module are highlighted in red. The detach() is used to stop the back-propagation down to the feature network. Note that we do not include every computation for simplicity. Algorithm 1 Variational inference with synthetic gradients for empirical Bayes 1: Input: the dataset D; the step size \u03b7; the number of inner iterations K; pretrained f. 2: Initialize the meta-models \u03c8, and \u03c6 = (\u03bb, \u03be). 3: while not converged do 4: Sample a task t and the associated query set dt (plus optionally the support set dl t). 5: Compute the initialization \u03b80 t = \u03bb or \u03b80 t = \u03bb(dl t). 6: for k = 1, . . . , K do 7: Compute \u03b8k t via (10). 8: end for 9: Compute wt = wt(\u03b8K t , \u03f5) with \u03f5 \u223cp(\u03f5). 10: Update \u03c8 \u2190\u03c8 \u2212\u03b7 \u2207\u03c8DKL(q\u03b8K t (\u03c8)\u2225p\u03c8). 11: Update \u03c6 \u2190\u03c6 \u2212\u03b7 \u2207\u03c6DKL(q\u03c6(xt)\u2225pf \u00b7 p\u03c8). 12: Optionally, update f \u2190f + \u03b7 \u2207f log pf(dt|wt). 13: end while 4 GENERALIZATION ANALYSIS OF EMPIRICAL BAYES VIA THE CONNECTION TO INFORMATION BOTTLENECK The learning of empirical Bayes (EB) models follows the frequentist\u2019s approach, therefore, we can use frequentist\u2019s tool to analyze the model. In this section, we study the generalization ability of the empirical Bayes model through its connection to a variant of the information bottleneck principle Achille & Soatto (2017); Tishby et al. (2000). Abstract form of empirical Bayes From (3), we see that the empirical Bayes model implies a simpler joint distribution since log p\u03c8,f(w1, . . . 
, wN, D) = N X t=1 log pf(dt|wt) + log p\u03c8(wt), (11) which is equal to the log-density of N iid samples drawn from the joint distribution p(w, d, t) \u2261p\u03c8,f(w, d, t) = pf(d|w, t)p\u03c8,f(w)p(t)4 (12) up to a constant if we introduce a random variable to represent the task and assume p(t) is an uniform distribution. We thus see that this joint distribution embodies the generative process of empirical Bayes. Correspondingly, there is another graphical model of the joint distribution characterizes the 6 \fPublished as a conference paper at ICLR 2020 inference process of the empirical Bayes: q(w, d, t) \u2261q\u03c6(w, d, t) = q\u03c6(w|d, t)q(d|t)q(t), (13) where q\u03c6(w|d, t) is the abstract form of the variational posterior with amortization, includes both the inductive form and the transductive form. The coupling between p(w, d, t) and q(w, d, t) is due to p(t) \u2261q(t) as we only have access to tasks through sampling. We are interested in the case that the number of tasks N \u2192\u221e, such as the few-shot learning paradigm proposed by Vinyals et al. (2016), in which the objective of (7) can be rewritten in an abstract form of Eq(t)Eq(d|t) h Eq(w|d,t) \u0002 \u2212log p(d|w, t) \u0003 + DKL \u0000q(w|d, t)\u2225p(w) \u0001i . (14) In fact, optimizing this objective is the same as optimizing (7) from a stochastic gradient descent point of view. The learning of empirical Bayes with amortized variational inference can be understood as a variational EM in the sense that the E-step amounts to aligning q(w|d, t) with p(w|d, t) while the M-step amounts to adjusting the likelihood p(d|w, t) and the prior p(w). Connection to information bottleneck The following theorem shows the connection between (14) and the information bottleneck principle. Theorem 1. Given distributions q(w|d, t), q(d|t), q(t), p(w) and p(d|w, t), we have (14) \u2265Iq(w; d|t) + Hq(d|w, t), (19) where Iq(w; d|t) := DKL \u0000q(w, d|t)\u2225q(w|t)q(d|t) \u0001 is the conditional mutual information and Hq(w|d, t) := Eq(w,d,t)[\u2212log q(w|d, t)] is the conditional entropy. The equality holds when \u2200t: DKL(q(w|t)\u2225p(w)) = 0 and DKL(q(d|w, t)\u2225p(d|w, t)) = 0. In fact, the lower bound on (14) is an extention of the information bottleneck principle (Achille & Soatto, 2017) under the multi-task setting, which, together with the synthetic gradient based variational posterior, inspire the name synthetic information bottleneck of our method. The tightness of the lower bound depends on both the parameterizations of pf(d|w, t) and p\u03c8(w) as well as the optimization of (14). It thus can be understood as how well we can align the inference process with the generative process. From an inference process point of view, for a given q(w|d, t), the optimal likelihood and prior have been determined. In theory, we only need to \ufb01nd the optimal q(w|d, t) using the information bottleneck in (19). However, in practice, minimizing (14) is more straightforward. Generalization of empirical Bayes meta-learning The connection to information bottleneck suggests that we can eliminate p(d|w, t) and p(w) from the generalization analysis of empirical Bayes meta-learning and de\ufb01ne the generalization error by q(w, d, t) only. To this end, we \ufb01rst identify the empirical risk for a single task t with respect to particular weights w and dataset d as Lt(w, d) := 1 n n X i=1 \u2113t(\u02c6 yi(f(xi), w), yi). (15) The true risk for task t with respect to w is then the expected empirical risk Ed\u223cq(d|t)Lt(w, d). 
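As a sanity check on the abstract objective (14), the two per-task terms, namely the expected negative log-likelihood on the query set and the KL of the variational posterior to the prior, can be estimated in a few lines. The diagonal-Gaussian forms for q and p_psi, the cross-entropy likelihood, and the generic classifier callable are illustrative assumptions, not the paper's exact parameterization.

```python
import torch
import torch.nn.functional as F

def task_elbo_terms(theta_mu, theta_logvar, prior_mu, prior_logvar,
                    feats, labels, classifier, n_samples=1):
    """Monte Carlo estimate of the two terms in (14) for one task:
    E_q[-log p(d|w,t)] and KL(q(w|d,t) || p(w))."""
    nll = 0.0
    for _ in range(n_samples):
        w = theta_mu + (0.5 * theta_logvar).exp() * torch.randn_like(theta_mu)
        nll = nll + F.cross_entropy(classifier(feats, w), labels)
    nll = nll / n_samples
    # closed-form KL between the diagonal Gaussians q_theta(w) and p_psi(w)
    kl = 0.5 * ((theta_logvar - prior_logvar).exp()
                + (theta_mu - prior_mu).pow(2) / prior_logvar.exp()
                - 1.0 + prior_logvar - theta_logvar).sum()
    return nll, kl

# e.g. with a plain linear head on pre-extracted features:
# nll, kl = task_elbo_terms(mu, logvar, p_mu, p_logvar, feats, labels,
#                           classifier=lambda f, w: f @ w)
```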
Now, we de\ufb01ne the generalization error with respect to q(w, d, t) as the average of the difference between the true risk and the empirical risk over all possible t, d, w: gen(q) := Eq(t)q(d|t)q(w|d,t) h Ed\u223cq(d|t)Lt(w, d) \u2212Lt(w, d) i = Eq(t)q(d|t)q(w|t)Lt(w, d) \u2212Eq(t)q(d|t)q(w|d,t)Lt(w, d), (16) where q(w|t) is the aggregated posterior of task t. Next, we extend the result from Xu & Raginsky (2017) and derive a data-dependent upper bound for gen(q) using mutual information. Theorem 2. Denote by z = (x, y). If \u2113t(\u02c6 yi(f(xi), w), yi) \u2261\u2113t(w, zi) is \u03c3-subgaussian under q(w|t)q(z|t), then Lt(w, d) is \u03c3/\u221an-subgaussian under q(w|t)q(d|t) due to the iid assumption, and \f \fgen(q) \f \f \u2264 r 2\u03c32 n Iq(w; d|t). (30) 7 \fPublished as a conference paper at ICLR 2020 Plugging this back to Theorem 1, we obtain a different interpretation for the empirical Bayes ELBO. Corollary 1. If \u2113t is chosen to be the negative log-likelihood, minimizing the population objective of empirical Bayes meta-learning amounts to minimizing both the expected generalization error and the expected empirical risk: (14) \u2265 n 2\u03c32 gen(q)2 + Eq(t)q(d|t)q(w|d,t)Lt(w, d). (17) The Corollary 1 suggests that (14) amounts to minimizing a regularized empirical risk minimization. In general, there is a tradeoff between the generalization error and the empirical risk controlled by the coef\ufb01cient n 2\u03c32 , where n = |d| is the cardinality of d. If n is small, then we are in the over\ufb01tting regime. This is the case of the inductive inference with variational posterior q(w|dl, t), where the support set dl is fairly small by the de\ufb01nition of few-shot learning. On the other hand, if we were following the transductive setting, we expect to achieve a small generalization error since the implemented variational posterior is a better approximation to q(w|d, t). However, keeping increasing n will potentially over-regularize the model and thus yield negative effect. An empirical study on varying n can be found in Table 5 in Appendix D. 5 EXPERIMENTS In this section, we \ufb01rst validate our method on few-shot learning, and then on zero-shot learning (no support set and no class description are available). Note that many meta-learning methods cannot do zero-shot learning since they rely on the support set. 5.1 FEW-SHOT CLASSIFICATION We compare SIB with state-of-the-art methods on few-shot classi\ufb01cation problems. Our code is available at https://github.com/amzn/xfer. 5.1.1 SETUP Datasets We choose standard benchmarks of few-shot classi\ufb01cation for this experiment. Each benchmark is composed of disjoint training, validation and testing classes. MiniImageNet is proposed by Vinyals et al. (2016), which contains 100 classes, split into 64 training classes, 16 validation classes and 20 testing classes; each image is of size 84\u00d784. CIFAR-FS is proposed by Bertinetto et al. (2018), which is created by dividing the original CIFAR-100 into 64 training classes, 16 validation classes and 20 testing classes; each image is of size 32\u00d732. Evaluation metrics In few-shot classi\ufb01cation, a task (aka episode) t consists of a query set dt and a support set dl t. When we say the task t is k-way-nl-shot we mean that dl t is formed by \ufb01rst sampling k classes from a pool of classes; then, for each sampled class, nl examples are drawn and a new label taken from {0, . . . , k \u22121} is assigned to these examples. 
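For concreteness, a k-way, n-shot episode of the kind just described can be sampled from a pool of pre-extracted features and labels roughly as follows; the relabelling to {0, ..., k-1} follows the text, while the default of 15 query points per class and the NumPy-array interface are illustrative assumptions.

```python
import numpy as np

def sample_episode(features, labels, k=5, n_support=1, n_query=15, rng=None):
    """Build one k-way, n_support-shot episode with n_query query points per class."""
    rng = rng or np.random.default_rng()
    classes = rng.choice(np.unique(labels), size=k, replace=False)
    support_x, support_y, query_x, query_y = [], [], [], []
    for new_label, c in enumerate(classes):
        idx = rng.permutation(np.where(labels == c)[0])[: n_support + n_query]
        support_x.append(features[idx[:n_support]])
        support_y.append(np.full(n_support, new_label))
        query_x.append(features[idx[n_support:]])
        query_y.append(np.full(n_query, new_label))
    return (np.concatenate(support_x), np.concatenate(support_y),
            np.concatenate(query_x), np.concatenate(query_y))
```

In the transductive setting described above, the query block is what the amortized posterior consumes; the support block, when present, only feeds the initialization network.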
By default, each query set contains 15k examples. The goal of this problem is to predict the labels of the query set, which are provided as ground truth during training. The evaluation is the average classi\ufb01cation accuracy over tasks. Network architectures Following Gidaris & Komodakis (2018); Qiao et al. (2018); Gidaris et al. (2019), we implement f by a 4-layer convolutional network (Conv-4-64 or Conv-4-1285) or a WideResNet (WRN-28-10) (Zagoruyko & Komodakis, 2016). We pretrain the feature network f(\u00b7) on the 64 training classes for a stardard 64-way classi\ufb01cation. We reuse the feature averaging network proposed by Gidaris & Komodakis (2018) as our initialization network \u03bb(\u00b7), which basically averages the feature vectors of all data points from the same class and then scales each feature dimension differently by a learned coef\ufb01cient. For the synthetic gradient network \u03be(\u00b7), we implement a three-layer MLP with hidden-layer size 8k. Finally, for the predictor \u02c6 yij(\u00b7, wi), we adopt the cosine-similarity based classi\ufb01er advocated by Chen et al. (2019) and Gidaris & Komodakis (2018). 5 Conv-4-64 consists of 4 convolutional blocks each implemented with a 3\u00d73 convolutional layer followed by BatchNorm + ReLU + 2 \u00d7 2 max-pooling units. All blocks of Conv-4-64 have 64 feature channels. Conv-4-128 has 64 feature channels in the \ufb01rst two blocks and 128 feature channels in the last two blocks. 8 \fPublished as a conference paper at ICLR 2020 MiniImageNet, 5-way CIFAR-FS, 5-way Method Backbone 1-shot 5-shot 1-shot 5-shot Matching Net (Vinyals et al., 2016) Conv-4-64 44.2% 57% \u2013 \u2013 MAML (Finn et al., 2017) Conv-4-64 48.7\u00b11.8% 63.1\u00b10.9% 58.9\u00b11.9% 71.5\u00b11.0% Prototypical Net (Snell et al., 2017) Conv-4-64 49.4\u00b10.8% 68.2\u00b10.7% 55.5\u00b10.7% 72.0\u00b10.6% Relation Net (Sung et al., 2018) Conv-4-64 50.4\u00b10.8% 65.3\u00b10.7% 55.0\u00b11.0% 69.3\u00b10.8% GNN (Satorras & Bruna, 2017) Conv-4-64 50.3% 66.4% 61.9% 75.3% R2-D2 (Bertinetto et al., 2018) Conv-4-64 49.5\u00b10.2% 65.4\u00b10.2% 62.3\u00b10.2% 77.4\u00b10.2% TPN (Liu et al., 2018) Conv-4-64 55.5% 69.9% \u2013 \u2013 Gidaris et al. (2019) Conv-4-64 54.8\u00b10.4% 71.9\u00b10.3% 63.5\u00b10.3% 79.8\u00b10.2% SIB K=0 (Pre-trained feature) Conv-4-64 50.0\u00b10.4% 67.0\u00b10.4% 59.2\u00b10.5% 75.4\u00b10.4% SIB \u03b7=1e-3, K=3 Conv-4-64 58.0\u00b10.6% 70.7\u00b10.4% 68.7\u00b10.6% 77.1\u00b10.4% SIB \u03b7=1e-3, K=0 Conv-4-128 53.62 \u00b1 0.79% 71.48 \u00b1 0.64% \u2013 \u2013 SIB \u03b7=1e-3, K=1 Conv-4-128 58.74 \u00b1 0.89% 74.12 \u00b1 0.63% \u2013 \u2013 SIB \u03b7=1e-3, K=3 Conv-4-128 62.59 \u00b1 1.02% 75.43 \u00b1 0.67% \u2013 \u2013 SIB \u03b7=1e-3, K=5 Conv-4-128 63.26 \u00b1 1.07% 75.73 \u00b1 0.71% \u2013 \u2013 TADAM (Oreshkin et al., 2018) ResNet-12 58.5\u00b10.3% 76.7\u00b10.3% \u2013 \u2013 SNAIL (Santoro et al., 2017) ResNet-12 55.7\u00b11.0% 68.9\u00b10.9% \u2013 \u2013 MetaOptNet-RR (Lee et al., 2019b) ResNet-12 61.4\u00b10.6% 77.9\u00b10.5% 72.6\u00b10.7% 84.3\u00b10.5% MetaOptNet-SVM ResNet-12 62.6\u00b10.6% 78.6\u00b10.5% 72.0\u00b10.7% 84.2\u00b10.5% CTM (Li et al., 2019) ResNet-18 64.1\u00b10.8% 80.5\u00b10.1% \u2013 \u2013 Qiao et al. (2018) WRN-28-10 59.6\u00b10.4% 73.7\u00b10.2% \u2013 \u2013 LEO (Rusu et al., 2019) WRN-28-10 61.8\u00b10.1% 77.6\u00b10.1% \u2013 \u2013 Gidaris et al. 
(2019) WRN-28-10 62.9\u00b10.5% 79.9\u00b10.3% 73.6\u00b10.3% 86.1\u00b10.2% SIB K=0 (Pre-trained feature) WRN-28-10 60.6\u00b10.4% 77.5\u00b10.3% 70.0\u00b10.5% 83.5\u00b10.4% SIB \u03b7=1e-3, K=1 WRN-28-10 67.3\u00b10.5% 78.8\u00b10.4% 76.8\u00b10.5% 84.9\u00b10.4% SIB \u03b7=1e-3, K=3 WRN-28-10 69.6\u00b10.6 % 78.9\u00b10.4% 78.4\u00b10.6% 85.3\u00b10.4% SIB \u03b7=1e-3, K=5 WRN-28-10 70.0\u00b10.6% 79.2\u00b10.4% 80.0\u00b10.6% 85.3\u00b10.4% Table 2: Average classi\ufb01cation accuracies (with 95% con\ufb01dence intervals) on the test-set of MiniImageNet and CIFAR-FS. For evaluation, we sample 2000 and 5000 episodes respectively for MiniImageNet and CIFAR-FS and test three different architectures as the feature extractor: Conv-464, Conv-4-128 and WRN-28-10. We train SIB with learning rate 0.001 and try different numbers of synthetic gradient steps K. Training details We run SGD with batch size 8 for 40000 steps, where the learning rate is \ufb01xed to 10\u22123. During training, we freeze the feature network. To select the best hyper-parameters, we sample 1000 tasks from the validation classes and reuse them at each training epoch. 5.1.2 COMPARISON TO STATE-OF-THE-ART META-LEARNING METHODS In Table 2, we show a comparison between the state-of-the-art approaches and several variants of our method (varying K or f(\u00b7)). Apart from SIB, TPN (Liu et al., 2018) and CTM (Li et al., 2019) are also transductive methods. First of all, comparing SIB (K = 3) to SIB (K = 0), we observe a clear improvement, which suggests that, by taking a few synthetic gradient steps, we do obtain a better variational posterior as promised. For 1-shot learning, SIB (when K = 3 or K = 5) signi\ufb01cantly outperforms previous methods on both MiniImageNet and CIFAR-FS. For 5-shot learning, the results are also comparable. It should be noted that the performance boost is consistently observed with different feature networks, which suggests that SIB is a general method for few-shot learning. However, we also observe a potential limitation of SIB: when the support set is relatively large, e.g., 5-shot, with a good feature network like WRN-28-10, the initialization \u03b80 t may already be close to some local minimum, making the updates later less important. For 5-shot learning, SIB is sligtly worse than CTM (Li et al., 2019) and/or Gidaris et al. (2019). CMT (Li et al., 2019) can be seen as an alternative way to incorporate transduction \u2013 it measures the similarity between a query example and the support set while making use of intraand inter-class relationships. Gidaris et al. (2019) uses in addition the self-supervision as an auxilary loss to learn a richer and more transferable feature model. Both ideas are complementary to SIB. We leave these extensions to our future work. 
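The evaluation protocol behind Table 2 (average accuracy over a few thousand sampled test episodes, reported with a 95% confidence interval) reduces to a short loop. Here run_episode is an assumed callable that builds one test episode and returns the model's query-set accuracy.

```python
import numpy as np

def evaluate(run_episode, n_episodes=2000, seed=0):
    """Mean accuracy with a 95% confidence interval over sampled test episodes."""
    rng = np.random.default_rng(seed)
    accs = np.array([run_episode(rng) for _ in range(n_episodes)])
    mean = accs.mean()
    ci95 = 1.96 * accs.std(ddof=1) / np.sqrt(n_episodes)
    return mean, ci95
```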
5.2 ZERO-SHOT REGRESSION: SPINNING LINES

[Figure 3: three panels, "SIB evaluation" (metrics vs. training epoch), "SIB dynamics" (predicted lines for k = 0, ..., 4 against the ground truth), and "Prediction comparison" (SIB predictions vs. ground truth for three tasks).]
Figure 3: Left: the mean-square errors on $D_{\text{test}}$, $\mathbb{E}_t D_{\mathrm{KL}}(q_{\theta_t^K}(w_t)\,\|\,p(w_t\mid d_t))$, $D_{\mathrm{KL}}(p_\psi(w)\,\|\,p(w))$ and the estimate of $I(w;d) \approx \mathbb{E}_t D_{\mathrm{KL}}(q_{\theta_t^K}(w_t)\,\|\,p_\psi(w_t))$. Middle: the predicted $y$'s by $y = \theta_t^k x$ for $k = 0, \ldots, 4$. Right: the predictions of SIB.

Since our variational posterior relies only on $x_t$, SIB is also applicable to zero-shot problems (i.e., no support set available). We first look at a toy multi-task problem, where $I(w_t; d_t)$ is tractable. Denote by $D_{\text{train}} := \{d_t\}_{t=1}^{N}$ the train set, which consists of datasets of size $n$: $d = \{(x_i, y_i)\}_{i=1}^{n}$. We construct a dataset $d$ by firstly sampling iid Gaussian random variables as inputs: $x_i \sim \mathcal{N}(\mu, \sigma^2)$. Then, we generate the weight for each dataset by calculating the mean of the inputs and shifting with a Gaussian random variable $\epsilon_w$: $w = \frac{1}{n}\sum_i x_i + \epsilon_w$, $\epsilon_w \sim \mathcal{N}(\mu_w, \sigma_w^2)$. The output for $x_i$ is $y_i = w \cdot x_i$. We decide ahead of time the hyperparameters $\mu, \sigma, \mu_w, \sigma_w$ for generating $x_i$ and $y_i$. Recall that a weighted sum of iid Gaussian random variables is still a Gaussian random variable. Specifically, if $w = \sum_i c_i x_i$ and $x_i \sim \mathcal{N}(\mu_i, \sigma_i^2)$, then $w \sim \mathcal{N}(\sum_i c_i \mu_i, \sum_i c_i^2 \sigma_i^2)$. Therefore, we have $p(w) = \mathcal{N}(\mu + \mu_w, \tfrac{1}{n}\sigma^2 + \sigma_w^2)$. On the other hand, if we are given a dataset $d$ of size $n$, the only uncertainty about $w$ comes from $\epsilon_w$, that is, we should consider $x_i$ as a constant given $d$. Therefore, the posterior is $p(w\mid d) = \mathcal{N}(\tfrac{1}{n}\sum_{i=1}^{n} x_i + \mu_w, \sigma_w^2)$. We use a simple implementation for SIB: the variational posterior is realized by
$$q_{\theta_t^K}(w) = \mathcal{N}(\theta_t^K, \sigma_w), \qquad \theta_t^{k+1} = \theta_t^k - 10^{-3}\sum_{i=1}^{n} x_i\,\xi(\theta_t^k x_i), \qquad \theta_t^0 = \lambda \in \mathbb{R}; \tag{18}$$
$\ell_t$ is a mean squared error, which implies that $p(y\mid x, w) = \mathcal{N}(wx, 1)$; $p_\psi(w)$ is a Gaussian distribution with parameters $\psi \in \mathbb{R}^2$; the synthetic gradient network $\xi$ is a three-layer MLP with hidden size 8. In the experiment, we sample 240 tasks respectively for both $D_{\text{train}}$ and $D_{\text{test}}$. We learn SIB and BNN on $D_{\text{train}}$ for 150 epochs using the ADAM optimizer (Kingma & Ba, 2014), with learning rate $10^{-3}$ and batch size 8. Other hyperparameters are specified as follows: $n = 32$, $K = 3$, $\mu = 0$, $\sigma = 1$, $\mu_w = 1$, $\sigma_w = 0.1$. The results are shown in Figure 3. On the left, both $D_{\mathrm{KL}}(q_{\theta_t^K}(w_t)\,\|\,p(w_t\mid d_t))$ and $D_{\mathrm{KL}}(p_\psi(w)\,\|\,p(w))$ are close to zero, indicating the success of the learning. More interestingly, in the middle, we see that $\theta_t^0, \theta_t^1, \ldots, \theta_t^4$ evolve gradually towards the ground truth, which suggests that the synthetic gradient network is able to identify the descent direction after meta-learning.
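The toy task above is small enough to reproduce directly. Below is a sketch of the data generator, the closed-form prior and posterior used for the comparison, and the scalar zero-shot update (18); xi stands in for the learned three-layer MLP and can be any scalar callable here.

```python
import numpy as np

def make_task(n=32, mu=0.0, sigma=1.0, mu_w=1.0, sigma_w=0.1, rng=None):
    """One toy task: y = w * x with w = mean(x) + eps_w, eps_w ~ N(mu_w, sigma_w^2)."""
    rng = rng or np.random.default_rng()
    x = rng.normal(mu, sigma, size=n)
    w = x.mean() + rng.normal(mu_w, sigma_w)
    return x, w * x

def closed_form(x, mu=0.0, sigma=1.0, mu_w=1.0, sigma_w=0.1):
    """(mean, variance) of the marginal p(w) and of the posterior p(w | d)."""
    n = len(x)
    prior = (mu + mu_w, sigma**2 / n + sigma_w**2)
    posterior = (x.mean() + mu_w, sigma_w**2)
    return prior, posterior

def sib_update(x, xi, theta0=0.0, K=3, lr=1e-3):
    """Zero-shot posterior mean via the update in (18); xi is a scalar callable."""
    theta = theta0
    for _ in range(K):
        theta = theta - lr * sum(x_i * xi(theta * x_i) for x_i in x)
    return theta
```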
6" + } + ], + "Varun Manjunatha": [ + { + "url": "http://arxiv.org/abs/1811.07789v1", + "title": "Explicit Bias Discovery in Visual Question Answering Models", + "abstract": "Researchers have observed that Visual Question Answering (VQA) models tend to\nanswer questions by learning statistical biases in the data. For example, their\nanswer to the question \"What is the color of the grass?\" is usually \"Green\",\nwhereas a question like \"What is the title of the book?\" cannot be answered by\ninferring statistical biases. It is of interest to the community to explicitly\ndiscover such biases, both for understanding the behavior of such models, and\ntowards debugging them. Our work address this problem. In a database, we store\nthe words of the question, answer and visual words corresponding to regions of\ninterest in attention maps. By running simple rule mining algorithms on this\ndatabase, we discover human-interpretable rules which give us unique insight\ninto the behavior of such models. Our results also show examples of unusual\nbehaviors learned by models in attempting VQA tasks.", + "authors": "Varun Manjunatha, Nirat Saini, Larry S. Davis", + "published": "2018-11-19", + "updated": "2018-11-19", + "primary_cat": "cs.CV", + "cats": [ + "cs.CV" + ], + "main_content": "Introduction In recent years, the problem of Visual Question Answering (VQA ) the task of answering a question about an image has become a hotbed of research activity in the computer vision community. While there are several publicly available VQA datasets[6, 23, 26, 29], our focus in this paper will be on the dataset provided in [6] and [18], which is the largest natural image-question-answer dataset and the most widely cited. Even so, the narrowed-down version of the VQA problem on this dataset is not monolithic ideally, several different skills are required by a model to answer the various questions. In Figure 1(left) , a question like \u201cWhat time is it?\u201d requires the acquired skill of being able to read the time on a clock-face, \u201cWhat is the title of the top book?\u201d requires an OCR-like ability to read sentences, whereas the question \u201cWhat color is the grass?\u201d can be answered largely using statistical biases in the data itself (because frequently in this dataset, grass is green in color). Many models have attempted to solve the problem of VQA with varying degrees of success, but among them, the vast majority still attempt to solve the VQA task by exploiting biases in the dataset [25, 37, 2, 17, 7, etc], while a smaller minority address the individual problem types [4, 38, 11, etc]. Keeping the former in mind, in this work, we provide a method to discover and enumerate explicitly, the various biases that are learned by a VQA model. For example, in Figure 1(right), we provide examples of some rules learned by a strong baseline [25]. The model seems to have learned that if a question contains the words {What, time, day} (Eg : \u201cWhat time of day is it?\u201d) and the accompanying image contains the bright sky ( ), the model is likely to answer \u201cafternoon\u201d. The model answers \u201cnight\u201d to the same question accompanied with an image containing a \u201cnightsky\u201d patch ( ). On the other hand, if it contains a clock face( ), it tends to answer the question with a time in an \u201cHH:MM\u201d format, while a question like \u201cWhat time of the year?\u201d paired with lea\ufb02ess trees( ) prompts \u201cfall\u201d as the answer. 
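To make the database construction described above concrete, one (Question, Image, Answer) triplet becomes one transaction roughly as follows; the trailing "*" on answer words follows the convention of Figure 1, while the whitespace tokenizer and the "vis_" naming of visual codewords are our own illustrative conventions.

```python
def to_transaction(question, visual_word_id, predicted_answer):
    """Turn one QI+A triplet into one database transaction (a set of items):
    question words, the attended region's visual codeword, and the answer."""
    q_items = [w.strip("?,.").lower() for w in question.split()]
    return set(q_items) | {f"vis_{visual_word_id:04d}", predicted_answer + "*"}

# to_transaction("What time of day is it?", 731, "afternoon")
# -> {'what', 'time', 'of', 'day', 'is', 'it', 'vis_0731', 'afternoon*'}
```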
The core of our method towards discovering such biases is the classical Apriori algorithm [3] which is used to discover rules in large databases here the database refers to the question-words and model responses on the VQA validation set, which can be mined to produce these rules. Deep learning algorithms reduce training error by learning biases in the data. This is evident from the observation that validation/test samples from the long tail of a data distribution are hard to solve, simply because similar examples do not occur frequently enough in the training set[41, 31, etc]. However, explicitly enumerating these biases in a human-interpretable form is possible only in a handful of problems, such as VQA. VQA is particularly illustrative because the questions and answers are in human language, while the images (and attention maps) can also be interpreted by humans. VQA is also interesting because it is a multi-modal problem both language and vision are required to solve this problem. The language alone (i.e., an image agnostic model) can generate plausible (but often incorrect) answers to most questions (as we show in Section 4.1), but incorporating the image generates more accurate answers. That the language alone is able to produce plausible answers strongly indicates that VQA models implicitly 1 arXiv:1811.07789v1 [cs.CV] 19 Nov 2018 \fNo. antecedant antecedant consequents words visual words 1 what,time,day afternoon* 2 what,time,day night* 3 what,time,clock,show 11:30* 4 what,time,year fall* Figure 1. On the left, we show examples of two questions in VQA which the model requires a \u201cskill\u201d to answer (such as telling the time, or reading the English language), and a third which can be answered using statistical biases in the data. On the right, we show examples of statistical biases for a set of questions containing the phrase \u201cWhat time?\u201d and various visual elements (antecedents). Note that each row in this \ufb01gure represents multiple questions in the VQA validation set. The * next to the answer (or consequent) reminds us that it is from the set of answer words. There are several visual words associated with afternoon and night, but we have provided only two for brevity. . use simple rules to produce answers we endeavour in this paper to \ufb01nd an approach that can discover these rules. Finally, we note that in this work, we do not seek to improve upon the state of the art. We do most of our experiments on the model of [25], which is a strong baseline for this problem. We choose this model because it is simple to train and analyze (Section 3.1). To concretely summarize, our main contribution is to provide a method that can capture macroscopic rules that a VQA model ostensibly utilizes to answer questions. To the best of our knowledge, this is the \ufb01rst detailed work that analyzes the VQA dataset of [18] in this manner. The rest of this paper is arranged as follows : In Section 2, we discuss related work, speci\ufb01cally those which look into identifying pathological biases in several machine learning problems, and \u201cdebugging\u201d VQA models. In Section 3, we discuss details of our method. In Section 4, we provide experimental results and list (in a literal sense) some rules we believe the model is employing to answer questions. We discuss limitations of this method in Section 5 and conclude in Section 6. 2. Background and Related Work The VQA problem is most often solved as a multi-class classi\ufb01cation problem. 
In this formulation, an image(I) usually fed through a CNN, and a question(Q) fed through a language module like an LSTM [22] or GRU [13], are jointly mapped to an answer category (\u201cyes\u201d, \u201cno\u201d, \u201c1\u201d, \u201c2\u201d, etc). Although the cardinality of the set of all answers given a QI dataset is potentially in\ufb01nite, researchers have observed that a set of a few thousand (typically 3000 or so) most frequently occurring answers can account for over 90% of all answers in the VQA dataset. Further, the evaluation of VQA in [6] and [18] is performed such that an answer receives partial credit if at least one human annotator agreed with the answer, even if it might not be the answer provided by the majority of the annotators. This further encourages the use of a classi\ufb01cation based VQA system that limits the number of answers to the most frequent ones, rather than an answer generation based VQA system (say, using a decoder LSTM like [39]). On undesirable biases in machine learning models: Machine learning methods are increasingly being used as tools to calculate credit scores, interest rates, insurance rates, etc, which deeply impact lives of ordinary humans. It is thus vitally important that machine learning models not discriminate on the basis of gender, race, nationality, etc[19, 5, 9]. [36] focus on revealing racial biases in imagebased datasets by using adversarial examples. [43] explores data as well as models associated with object classi\ufb01cation and visual semantic role labeling for identifying gender biases and their ampli\ufb01cation. Further, [8] shows the presence of gender biases while encoding word embeddings, which is further exacerbated while using those embeddings to make predictions. [21] propose an Equalizer model which ensures equal gender probability when making predictions on image captioning tasks. On debugging deep networks: The seminal work by [28] suggests that the Machine Learning community does not have a good understanding of what it means to interpret a model. In particular, this work expounds post-hoc interpretability interpretation of a model\u2019s behavior based on some criteria, such as visualizations of gradients [34] or attention maps [42], after the model has been trained. Locally Interpretable Model Agnostic Explanations (LIME), [32] explain a classi\ufb01er\u2019s behavior at a particular point by perturbing the sample and building a linear model using the perturbations and their predictions. A follow up work [33] constructs Anchors, which are features such that, in an instance where these features hold, a model\u2019s prediction does not change. This work is the most similar prior work to 2 \fours, and the authors provide a few results on VQA as well. However, they only assume the existence of a model, and perturb instances of the data, whereas ours assumes the existence of responses to a dataset, but not the model itself. We use standard rule \ufb01nding algorithms and provide much more detailed results on the VQA problem. On debugging VQA :[1] study the behavior of models on the VQA 1.0 dataset. Through a series of experiments, they show that VQA models fail on novel instances, tend to answer after only partially reading the question and fail to change their answers across different images. In [2], recognizing that deep models seem to use a combination of identifying visual concepts and prediction of answers using biases learned from the data, the authors develop a mechanism to disentangle the two. 
However, they do not explicitly \ufb01nd a way to discover such biases in the \ufb01rst place. In [18], the authors introduce a second, more balanced version of the VQA dataset that mitigates biases (especially language based ones) in the original dataset. The resulting balanced dataset is christened VQA 2.0, and is the dataset that our results are reported on. In [24], the authors balance yes/no questions (those which indicate the presence or absence of objects), and propose two new evaluation metrics that compensate for forms of dataset bias. 3. Method We cast our bias discovery task as an instance of the rule mining problem, which we shall describe below. The connection between discovering biases in VQA and rule mining is as follows : each (Question, Image, Answer) or QI+A triplet can be cast as a transaction in a database, where each word in the question, answer and image patch (or visual word, Section 3.2 and 3.3) is akin to an item. There are now three components to our rule mining operation : \u2022 First, a frequent itemset miner picks out a set of all itemsets which occur at least s times in the dataset where s is the support. Because our dataset has over 200,000 questions (the entire VQA validation set), and the number of items exceeds 40,000 (all question words+all answer words+all visual words), we choose GMiner [14] due to its speed and ef\ufb01cient GPU implementation. Examples of such frequent itemsets in the context of VQA include {what, color, red*}, {what, sport, playing}, where the presence of a * indicates that the word is an answer-word. \u2022 Next, a rule miner Apriori [3] forms all valid association rules A \u2192C, such that the rule has a support > s and a con\ufb01dence > c, where the con\ufb01dence is de\ufb01ned as |A\u222aC| |A| . Here, the itemset A is called antecedent and the itemset C is called consequent. We choose and c = 0.2 unless speci\ufb01ed otherwise. An example of an association rule is {what, sport, playing, } \u2192{tennis*}, which can be interpreted as \u201cIf the question contains the words \u2014what, sport, playing\u2014 and the accompanying image contains a tennis player, the answer could be tennis\u201d. \u2022 Finally, a post-processing step removes obviously spurious rules by considering the causal nature of the VQA problem (i.e., only considering rules that obey : Image/Question \u2192Answer). For the purpose of the results in Section 4, we query these rules with search terms like {What,sport}. More concretely, let the ith (Image, Question) pair result in the network predicting the answer ai. Let the question itself contain the words {wi 1, wi 2, ...., wi k}. Further, while answering the question, let the part of the image that the network shows attention towards correspond to the visual code-word vi (Section 3.2 and 3.3). Then, this QI+A corresponds to the transaction {wi 1, wi 2, ...., wk k, vi, ai}. By pre-computing and combining question, answer and visual vocabularies, each item in a transaction can be indexed uniquely. This is shown in Figure 2 and explained in greater detail in the following sub-sections. 3.1. Baseline Model The baseline model we use in this work is from [25], which was brie\ufb02y a state-of-the-art method, yielding higher performance than other, more complicated models. 
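A brute-force stand-in for the itemset mining and rule extraction just described (the real pipeline uses GMiner and Apriori for speed) might look as follows. Consequents are restricted to the predicted answer, matching the post-processing step; the support threshold here is a placeholder of our own, while the 0.2 confidence threshold follows the text.

```python
from itertools import combinations
from collections import Counter

def mine_rules(transactions, answers, min_support=100, min_conf=0.2, max_len=3):
    """Count itemsets over (question words + visual word) and keep rules A -> answer.
    Support is the occurrence count of A union {answer}; confidence is |A u C| / |A|."""
    itemset_count, rule_count = Counter(), Counter()
    for items, ans in zip(transactions, answers):
        items = sorted(set(items))
        for r in range(1, max_len + 1):
            for subset in combinations(items, r):
                itemset_count[subset] += 1
                rule_count[(subset, ans)] += 1
    rules = []
    for (antecedent, ans), n_joint in rule_count.items():
        conf = n_joint / itemset_count[antecedent]
        if n_joint >= min_support and conf >= min_conf:
            rules.append((antecedent, ans, n_joint, conf))
    return sorted(rules, key=lambda r: -r[3])

# transactions[i] = ["what", "sport", "playing", "vis_0123"], answers[i] = "tennis*"
```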
We choose this model for two reasons : \ufb01rst, its simplicity (in other words, an absence of \u201cbells and whistles\u201d) makes it a good test-bed for our method and has been used by other works that explore the behavior of VQA algorithms [30, 16]. The second reason is that the performance of this baseline is within 4% of the state-of-the-art model [37] without using external data or ensembles. We use the implementation of https://github.com/Cyanogenoid/pytorch-vqa. A brief description of this model is as follows : The VQA problem is formulated as a multi-class classi\ufb01cation problem (Section 2). The input to the model is an image and a question, while the output is the answer class with the highest con\ufb01dence (out of 3000 classes). Resnet-152[20] features are extracted from the image and concatenated with the last hidden state of an LSTM[22]. The text and visual features are combined to form attention maps which are fed to the softmax (output) layer through two dense layers. In this work, we focus on the second attention map. 3.2. Visual Codebook Generation We generate the visual codebook using the classical \u201cfeature extraction followed by clustering\u201d technique from [35]. First, we use the bounding-box annotations in MSCOCO[27] and COCO-Stuff[10] to extract 300,000 patches from the MSCOCO training set. After resizing 3 \fwhich dessert are you tempted to try ......... ResNet-152 concat Attention k-nearest neighbor Codebook \u2713 donut cake icecream . . Answer VQA itemset \"database\" 1. what, dessert, are, you, tempted, to, try, , donut* 2. what, sport, are, they, playing, , baseball* 3. .................... softmax LSTM Figure 2. The model from [25] tries to answer the question \u201cWhich dessert are you tempted to try?\u201d. In doing so, the visual attention focuses on a region of the image which contains donuts. We use the method by [12] to place a bounding box over this region, which maps to a distinct visual word representing donuts in our vocabulary. Our database of items thus contains all of the words of the question, the visual word and the answer words. Rules are then extracted using the Apriori algorithm [3] . each of the patches to 224 \u00d7 224 pixels, we extract ResNet152[20] features for each of these patches, and cluster them into 1250 clusters using k-means clustering[15]. We note in Figure 3 that the clusters have both expected and unexpected characteristics beyond \u201cobjectness\u201d and \u201cstuffness\u201d. Expected clusters include dominant objects in the MSCOCO dataset like zebras, giraffes, elephants, cars, buses, trains, people, etc. However, other clusters have textural content, unusual combinations of objects as well as actions. For example, we notice visual words like \u201cpeople eating\u201d, \u201ccats standing on toilets\u201d, \u201cpeople in front of chain link fences\u201d, etc, as shown in Figure 3. The presence of these more eclectic code-words casts more insight into the model\u2019s learning dynamics we would prefer frequent itemsets containing the visual code-word corresponding to \u201cpeople eating\u201d than just \u201cpeople\u201d for a QA pair of (what is she doing?, eating). 3.3. From attention map to bounding box In this work, we make an assumption that the network focuses on exactly one part of the image, although our method can be easily extended to multiple parts[12]. Following the elucidation of our method in Section 3 and given an attention map, we would like to compute the nearest visual codeword. 
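Both steps, building the 1250-word codebook from patch features and mapping an attended patch to its nearest codeword (k = 1), are standard. The sketch below assumes the ResNet-152 patch features are already extracted; the n_init setting is incidental.

```python
import numpy as np
from sklearn.cluster import KMeans

def build_codebook(patch_features, n_words=1250, seed=0):
    """Cluster patch features (e.g. 2048-d ResNet activations) into visual words."""
    km = KMeans(n_clusters=n_words, random_state=seed, n_init=10).fit(patch_features)
    return km.cluster_centers_

def nearest_word(feature, codebook):
    """Index of the closest codeword (k = 1 nearest neighbour in feature space)."""
    return int(np.argmin(np.linalg.norm(codebook - feature, axis=1)))
```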
Doing so requires making the choice of a bounding box that covers enough of the salient parts of the image, cropping and mapping this patch to the visual vocabulary. While there are trainable (deep network based) methods for cropping attention maps [40], we instead follow the simpler formulation suggested by [12], which states that : within an attention-map G, given a percentage ratio \u03c4, \ufb01nd the smallest bounding box B which satis\ufb01es : X p\u03f5B G(p) \u2265\u03c4 X p G(p), \u03c4\u03f5[0, 1] Since we follow [25] who use a ResNet-152 architecture for visual feature extraction, the attention maps are of size 14 \u00d7 14. It can be shown easily that given a m \u00d7 n grid, the number of unique bounding boxes that can be drawn on this grid, i.e., num bboxes = m\u00d7n\u00d7(m+1)\u00d7(n+1) 4 , and when m = n = 14, num bboxes turns out to be 11,025. Because m(= n) is small and \ufb01xed in this case, we pre-compute and enumerate all 11,025 bounding boxes and pick the smallest one which encompasses the desired attention, with \u03c4 = 0.3. The reason behind a conservatively low choice for \u03c4 is that we do not want to crop large regions of the image, which might contain distractor patches. This part of the pipeline is depicted in Figure 4. 3.4. Pipeline Summarized Now, the pipeline for the experiments (Figure 2) on the VQA dataset including images is as follows. We provide as input to the network an image and a question. We observe the second attention map and use the method of Section 3.3 to place a tight-\ufb01tting bounding-box around those parts of the image that the model attends to. We then extract features on this bounding-box using a ResNet-152 network and perform a k-nearest neighbor search (with k = 1) to obtain its nearest visual word from the vocabulary. The words in the question, visual code-word and predicted answer for the entire validation set are provided as the database of transactions to the frequent itemset miner [14], and rules are then obtained using the Apriori algorithm [3]. 4. Experiments 4.1. Language only statistical biases in VQA We show that a large number of statistical biases in VQA are due to language alone. We illustrate this with an obvious example : a language-only model, i.e., one that does not see the image, but still attempts the question, answers about 43% of the questions correctly on VQA 2.0 validation set and 48% of the questions correctly on VQA 1.0 validation 4 \fsky pizza lamp gira\ufb00e zebra people's bottoms cat on toilets people eating women in bridal attire plastic packaging people wearing suits black and white tennis teams people carrying surfboards objects with wires stadium crowds Figure 3. We show visual code-words generated by the method of Section 3.1. In the \ufb01rst (left-most) column, we notice visual code-words corresponding to objects or patches in MSCOCO, but in the latter two columns (on the right) we notice code-words corresponding to more complex visual concepts like \u201cpeople eating\u201d, \u201cwomen in bridal-wear\u201d or \u201cblack-and-white tennis photographs\u201d. Figure 4. In the \ufb01rst example, critical to answering the question correctly is discovering the presence of a fence (shown in red) in the attention heat-map. The cropping method of [12] places a conservative box over this region, which corresponds to net-like or fence-like visual code-words like a tennis-net or a baseball batting-cage in the visual codebook. 
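The box-selection rule described just above (the smallest box whose attention mass is at least a fraction tau of the total) can be implemented by scoring every box on the 14 x 14 grid, exactly the 11,025 candidates counted in the text. A minimal brute-force sketch, with area used as the tie-breaker (an assumption on our part):

```python
import numpy as np
from itertools import product

def smallest_covering_box(attn, tau=0.3):
    """Smallest box (r0, c0, r1, c1), inclusive, whose attention mass reaches
    tau * total; every box on the grid is enumerated (11,025 for 14 x 14)."""
    h, w = attn.shape
    total = attn.sum()
    best, best_area = None, h * w + 1
    for r0, r1 in product(range(h), repeat=2):
        if r1 < r0:
            continue
        for c0, c1 in product(range(w), repeat=2):
            if c1 < c0:
                continue
            area = (r1 - r0 + 1) * (c1 - c0 + 1)
            if area >= best_area:
                continue
            if attn[r0:r1 + 1, c0:c1 + 1].sum() >= tau * total:
                best, best_area = (r0, c0, r1, c1), area
    return best

# box = smallest_covering_box(np.random.rand(14, 14), tau=0.3)
```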
Similarly, in the second example, the attention corresponds to a visual code-word which clearly depicts boats, and in the third example, the attention corresponds to the teddy-bear code-word. set[18]. However, on a random set of 200 questions from VQA 2.0, we observed empirically that the language-only model answers 88.0% of questions with a plausibly correct answer even with a harsh metric of what plausible means. Some of these responses are fairly sophisticated as can be seen in Table 1. We note, for example, that questions containing \u201ckind of bird\u201d are met with a species of bird as response, \u201cWhat kind of cheese\u201d is answered with a type of cheese, etc. Thus, the model maps out key words or phrases in the question and ostensibly tries to map them through a 5 \fQuestion Predicted G.T Ans. What kind of bird is perched on this branch ? owl sparrow What does that girl have on her face ? sunglasses nothing What kind of cheese is on pizza ? mozzarella mozzarella What is bench made of ? wood wood What brand of stove is in kitchen ? electric LG Table 1. We run a language-only VQA baseline and note that although only 43% of the questions are answered correctly in VQA 2.0 ([18]), a large number of questions (88%) in our experiments are answered with plausibly correct responses. For example, \u201cSunglasses\u201d would be a perfectly plausible answer to the question \u201cWhat does that girl have on her face?\u201d perhaps even more so than the ground-truth answer (\u201cNothing\u201d). The last example shows an implausible answer provided by the model to the question. series of rules to answer words. This strongly indicates that these are biases learned from the data, and the ostensible rules can be mined through a rule-mining algorithm. 4.2. Vision+Language statistical biases in VQA After applying the method of Section 3, we will examine some rules that have been learned by our method on some popular question types in VQA . Question types are taken from [6] and for the purpose of brevity, only a very few instructive rules for each question type are displayed. These question types are : \u201cWhat is he/she doing?\u201d4.2.3, \u201cWhere?\u201d (Figure 9), \u201cHow many?\u201d (Section 4.2.1), \u201cWhat brand?\u201d (Figure 8), and \u201cWhy?\u201d(Section 4.2.2). The tables we present are to be interpreted thus : A question containing the antecedent words paired with an image containing the antecedent visual words can sometimes (but not always) lead to the consequent answer. Two instances of patches mapping to this visual word (Section 3.2) are provided. The presence of an \u2217after the consequent is to remind the reader that the consequent word came from the set of answers. 4.2.1 How many? This particular instance of the trained VQA model seems to have learned that giraffes have four legs, stop signs have four letters, kitchen stoves have four burners and zebras and giraffes have several (100) stripes and spots respectively (Figure 5). Upon closer examination, we found 33 questions (out of >200k) in the VQA validation set which contain the words {How,many,burners} and the most common answer predicted by our model for these is 4 (which also resembles the ground-truth distribution). However, some of them were along the lines of \u201cHow many burners are turned on?\u201d, which led to answers different from \u201c4\u201d. 4.2.2 Why? Traditionally, \u201cWhy?\u201d questions in VQA are considered challenging because they require a reason based answer. 
We describe some of the rules purportedly learned by our model for answering \u201cWhy?\u201d questions, in Figure 6. Some interesting but intuitive beliefs that the model has learned are that movements cause blurry photographs (why,blurry\u2192movement), outstretching one\u2019s arms help in balancing (why,arm\u2192balance) and that people wear helmets or orange vests for the purpose of safety (why,helmet/orange\u2192safety). In many of these cases, no visual element has been picked up by the rule mining algorithm this strongly indicates that the models are memorizing the answers to the \u201cWhy?\u201d questions, and not performing any reasoning. In other words, we could ask the question \u201cWhy is the photograph blurry?\u201d to an irrelevant image and obtain \u201cMovement\u201d as the predicted answer. 4.2.3 What is he/she doing? More interesting are our results on the \u201cWhat is he/she doing?\u201d category of questions (Figure 7). While common activities like \u201csnowboarding\u201d or \u201csur\ufb01ng\u201d are prevalant among the answers, we noticed a difference in rules learned for male and female pronouns. For the female pronoun (she/woman/girl/lady), we observed only stereotypical outputs like \u201ctexting\u201d even for a very low support, as compared to a more diverse set of responses with the male pronoun. This is likely, a re\ufb02ection on the inherent bias of the MSCOCO dataset which the VQA dataset of [6, 18] is based on. Curiously, another work by [21] had similar observations for image captioning models also based on MSCOCO. 5. Limitations While simplicity is the primary advantage of our method, some drawbacks are the following : the exact nature of the rules is limited by the process used to generate the visual vocabulary. In other words, while our method provides a unique insight into the behavior of a VQA model, there surely exist some rules that the models seem to follow which cannot be captured by this method. For example, rules involving colors are dif\ufb01cult to identify because ResNets are trained to be somewhat invariant to colors, so purely color-based visual words are hard to compute. Other examples include inaccurate visual code-words for example, in rule 4 of Figure 8, the antecedant visual word does show a motorbike, although not a Harley Davidson. Similarly a code-word contains images of scissors and toothbrushes grouped together as part of the (What,brand\u2192Colgate) associate rule (rule 5 of Figure 8). 6." + }, + { + "url": "http://arxiv.org/abs/1804.06026v1", + "title": "Learning to Color from Language", + "abstract": "Automatic colorization is the process of adding color to greyscale images. We\ncondition this process on language, allowing end users to manipulate a\ncolorized image by feeding in different captions. We present two different\narchitectures for language-conditioned colorization, both of which produce more\naccurate and plausible colorizations than a language-agnostic version. 
Through\nthis language-based framework, we can dramatically alter colorizations by\nmanipulating descriptive color words in captions.", + "authors": "Varun Manjunatha, Mohit Iyyer, Jordan Boyd-Graber, Larry Davis", + "published": "2018-04-17", + "updated": "2018-04-17", + "primary_cat": "cs.CV", + "cats": [ + "cs.CV", + "cs.CL" + ], + "main_content": "Introduction Automatic image colorization (Cheng et al., 2015; Larsson et al., 2016; Zhang et al., 2016; Iizuka et al., 2016; Deshpande et al., 2017)\u2014the process of adding color to a greyscale image\u2014is inherently underspeci\ufb01ed. Unlike background scenery such as sky or grass, many common foreground objects could plausibly be of any color, such as a person\u2019s clothing, a bird\u2019s feathers, or the exterior of a car. Interactive colorization seeks human input, usually in the form of clicks or strokes on the image with a selected color, to reduce these ambiguities (Levin et al., 2004; Huang et al., 2005; Endo et al., 2016; Zhang et al., 2017). We introduce the task of colorization from natural language, a previously unexplored source of color speci\ufb01cations. Many use cases for automatic colorization involve images paired with language. For example, comic book artwork is normally \ufb01rst sketched in black-and-white by a penciller; afterwards, a colorist selects a palette that thematically reinforces the written script to produce the \ufb01nal colorized art. Similarly, older black-and-white \ufb01lms are often colorized for modern audiences based on cues from dialogue and narration (Van Camp, 1995). \u22c6Authors contributed equally Language is a weaker source of supervision for colorization than user clicks. In particular, language lacks ground-truth information about the colored image (e.g., the exact color of a pixel or region). Given a description like a blue motorcycle parked next to a \ufb02eet of sedans, an automatic colorization system must \ufb01rst localize the motorcycle within the image before deciding on a contextappropriate shade of blue to color it with. The challenge grows with abstract language: a red color palette likely suits an artistic rendering of the boy threw down his toy in a rage better than it does the boy lovingly hugged his toy. We present two neural architectures for language-based colorization that augment an existing fully-convolutional model (Zhang et al., 2016) with representations learned from image captions. As a sanity check, both architectures outperform a language-agnostic model on an accuracy-based colorization metric. However, we are more interested in whether modi\ufb01cations to the caption properly manifest themselves in output colorizations (e.g., switching one color with another); crowdsourced evaluations con\ufb01rm that our models properly localize and color objects based on captions (Figure 1). 2 Model This section provides a quick introduction to color spaces (Sec. 2.1) and then describes our baseline colorization network (Sec. 2.2) alongside two models (Sec. 2.3) that colorize their output on representations learned from language. 2.1 Images and color spaces An image is usually represented as a three dimensional tensor with red, green and blue (RGB) channels. Each pixel\u2019s color and intensity (i.e., lightness) are jointly represented by the values of these three channels. However, in applications such as arXiv:1804.06026v1 [cs.CV] 17 Apr 2018 \fFigure 1: Three pairs of images whose colorizations are conditioned on corresponding captions by our FILM architecture. 
Our model can localize objects mentioned by the captions and properly color them. colorization, it is more convenient to use representations that separately encode lightness and color. These color spaces can be obtained through mathematical transformations of the RGB color space; in this work, following Zhang et al. (2016), we use the CIE Lab space (Smith and Guild, 1931). Here, the \ufb01rst channel (L) encodes only lightness (i.e., black-and-white). The two color channels a and b represent color values between green to red and blue to yellow, respectively. In this formulation, the task of colorization is equivalent to taking the lightness channel of an image as input and predicting the two missing color channels. 2.2 Fully-convolutional networks for colorization Following Zhang et al. (2016), we treat colorization as a classi\ufb01cation problem in CIE Lab space: given only the lightness channel L of an image (i.e., a greyscale version), a fully-convolutional network predicts values for the two color channels a and b. For ef\ufb01ciency, we deviate from Zhang et al. (2016) by quantizing the color channels into a 25\u00d725 grid, which results in 625 labels for classi\ufb01cation. To further speed up training, we use a one-hot encoding for the ab channels instead of soft targets as in Zhang et al. (2016); preliminary experiments showed no qualitative difference in colorization quality with one-hot targets. The contribution of each label to the loss is downweighted by a factor inversely proportional to its frequency in the training set, which prevents desaturated ab values. Our baseline network architecture (FCNN) consists of eight convolutional blocks, each of which contains multiple convolutional layers followed by batch normalization (Ioffe and Szegedy, 2015).1 Next, we propose two ways to integrate additional text 1See Zhang et al. (2016) for complete architectural details. Code and pretrained models are available at https: //github.com/superhans/colorfromlanguage. ab Accuracy Human Experiments Model acc@1 acc@5 plaus. qual. manip. FCNN 15.4 45.8 20.4 32.6 N/A CONCAT 17.9 50.3 39.0 34.1 77.4 FILM 23.7 60.5 40.6 32.1 81.2 Table 1: While FILM is the most accurate model in ab space, its outputs are about as contextually plausible as CONCAT\u2019s according to our plausibility task, which asks workers to choose which model\u2019s output best depicts a given caption (however, both models signi\ufb01cantly outperform the language-agnostic FCNN). This additional plausibility does not degrade the output, as shown by our quality task, which asks workers to distinguish an automatically-colorized image from a real one. Finally, our caption manipulation experiment, in which workers are guided by a caption to select one of three outputs generated with varying color words, shows that modifying the caption signi\ufb01cantly affects the outputs of CONCAT and FILM. input into FCNN. 2.3 Colorization conditioned on language Given an image I paired with a unit of text T, we \ufb01rst encode T into a continuous representation h using the last hidden state of a bi-directional LSTM (Hochreiter and Schmidhuber, 1997). We integrate h into every convolutional block of the FCNN, allowing language to in\ufb02uence the computation of all intermediate feature maps. Speci\ufb01cally, say Zn is the feature map of the nth convolutional block. 
A conceptually simple way to incorporate language into this feature map is to concatenate h to the channels at each spatial location i, j in Zn, forming a new feature map Z\u2032 ni,j = [Zni,j; h]. (1) While this method of integrating language with \fa green pickup truck next to trees x8 + CONV batch norm FiLM Figure 2: FILM applies feature-wise af\ufb01ne transformations (conditioned on language) to the output of each convolutional block in our architecture. images (CONCAT) has been successfully used for other vision and language tasks (Reed et al., 2016; Feichtenhofer et al., 2016), it requires considerably more parameters than the FCNN due to the additional language channels. Inspired by recent work on visual question answering, we also experiment with a less parameterhungry approach, feature-wise linear modulation (Perez et al., 2018, FILM), to fuse the language and visual representations. Since the activations of FILM layers have attention-like properties when trained on VQA, we also might expect FILM to be better at localizing objects from language than CONCAT on colorization (see Figure 4 for heatmap visualizations). FILM applies a feature-wise af\ufb01ne transformation to the output of each convolutional block, where the transformation weights are conditioned on language (Figure 2). Given Zn and h, we \ufb01rst compute two vectors \u03b3n and \u03b2n through linear projection, \u03b3n = Wn\u03b3h \u03b2n = Wn\u03b2h, (2) where Wn\u03b3 and Wn\u03b2 are learned weight matrices. The modulated feature map then becomes Z\u2032 ni,j = (1 + \u03b3n) \u25e6Zni,j + \u03b2n, (3) where \u25e6denotes the element-wise product. Compared to CONCAT, FILM is parameter-ef\ufb01cient, requiring just two additional weight matrices per feature map. 3 Experiments We evaluate FCNN, CONCAT, and FILM using accuracy in ab space (shown by Zhang et al. (2016) to be a poor substitute for plausibility) and with crowdsourced experiments that ask workers to judge colorization plausibility, quality, and the colorization \ufb02exibly re\ufb02ects language manipulations. Table 1 summarizes our results; while there is no clear winner between FILM and CONCAT, both rely on language to produce higher-quality colorizations than those generated by FCNN. 3.1 Experimental setup We train all of our models on the 82,783 images in the MSCOCO (Lin et al., 2014) training set, each of which is paired with \ufb01ve crowdsourced captions. Training from scratch on MSCOCO results in poor quality colorizations due to a combination of not enough data and increased image complexity compared to ImageNet (Russakovsky et al., 2015). Thus, for our \ufb01nal models, we initialize all convolutional layers with a FCNN pretrained on ImageNet; we \ufb01netune both FILM and CONCAT\u2019s convolutional weights during training. To automatically evaluate the models, we compute top-1 and top5 accuracy in our quantized ab output space2 on the MSCOCO validation set. While FILM achieves the highest ab accuracy, FILM and CONCAT do not signi\ufb01cantly differ on crowdsourced evaluation metrics. 3.2 Human experiments We run three human evaluations of our models on the Crowd\ufb02ower platform to evaluate their plausibility, overall quality, and how well they condition their output on language. Each evaluation is run using a random subset of 100 caption/image pairs from the MSCOCO validation set,3 and we obtain \ufb01ve judgments per pair. Plausibility given caption: We show workers a caption along with three images generated by FCNN, CONCAT, and FILM. 
They choose the image that best depicts the caption; if multiple images accurately depict the caption, we ask them to choose the most realistic. FCNN does not receive the caption as input, so it makes sense that its output is only chosen 20% of the time; there is no signi\ufb01cant difference between CONCAT and FILM in plausibility given the caption. 2We evaluate accuracy at the downsampled 56\u00d756 resolution at which our network predicts colorizations. For human experiments, the prediction is upsampled to 224\u00d7224. 3We only evaluate on captions that contain one of ten \u201ccolor\u201d words (e.g., red, blue purple). \fFigure 3: The top row contains successes from our caption manipulation task generated by FILM and CONCAT, respectively. The second row shows examples of how captions guide FILM to produce more accurate colorizations than FCNN (failure cases outlined in red). The \ufb01nal row contains, from left to right, particularly eye-catching colorizations from both CONCAT and FILM, a case where FILM fails to localize properly, and an image whose unnatural caption causes artifacts in CONCAT. Colorization quality: Workers receive a pair of images, a ground-truth MSCOCO image and a generated output from one of our three architectures, and are asked to choose the image that was not colored by a computer. The goal is to fool workers into selecting the generated images; the \u201cfooling rates\u201d for all three architectures are comparable, which indicates that we do not reduce colorization quality by conditioning on language. Caption manipulation: Our last evaluation measures how much in\ufb02uence the caption has on the CONCAT and FILM models. We generate three different colorizations of a single image by swapping out different colors in the caption (e.g., blue car, red car, green car). Then, we provide workers with a single caption (e.g., green car) and ask them to choose which image best depicts the caption. If our models cannot localize and color the appropriate object, workers will be unable to select an appropriate image. Fortunately, CONCAT and FILM are both robust to caption manipulations (Table 1). 4 Discussion Both CONCAT and FILM can manipulate image color from captions (further supported by the top row of Figure 3). Here, we qualitatively examine model outputs and identify potential directions for improvement. Language-conditioned colorization depends on correspondences between language and color statistics (stop signs are always red, and school buses are always yellow). While this extra information helps us produce more plausible colorizations compared to language-agnostic models (second row of Figure 3), it biases models trained on natural images against unnatural colorizations. For example, the yellow sky produced by CONCAT in the bottom right of Figure 3 contains blue artifacts because skies are usually blue in MSCOCO. Additionally, our models are limited by the lightness channel L of the greyscale image, which prevents dramatic color shifts like black-to-white. Smaller objects are also problematic; often, colors will \u201cleak\u201d \fFigure 4: Examples of intermediate layer activations while generating colorized images using the FILM network. These activation maps correspond to the mean activation immediately after the FILM layers of the sixth, seventh, and eighth blocks. Interestingly, the activations after the FILM layer of Block 6 always seems to focus on the object that is to be colorized, while those of Block 8 focus almost exclusively on the background. 
The activation maps do not signi\ufb01cantly differ when color words in the caption are manipulated; therefore, we show maps only for the \ufb01rst color word in these examples. into smaller objects from larger ones, as shown by FILM\u2019s colorizations of purple plants (Figure 3, bottom-middle) and yellow tires (middle-left). Figure 4 shows activation maps from intermediate layers generated while colorizing images using the FILM network. Each intermediate layer is captured immediately after the FILM layer and is of dimension h \u00d7 w \u00d7 c (e.g., 112 \u00d7 112 \u00d7 64, 28 \u00d7 28 \u00d7 512, etc.), where h is the height of the feature map, w is its width, and c is the number of channels.4 On inspection, the \ufb01rst few activation maps correspond to edges and are not visually interesting. However, we notice that the sixth activation map usually focuses on the principal subject of the image (such as a car or a horse), while the eighth activation map focused everywhere but on that subject (i.e., entirely on the background). This analysis demonstrates that the FILM layer emulates visual attention, reinforcing similar observations on visual QA datasets by Perez et al. (2018). 5 Future Work While these experiments are promising, that there are many avenues to improve language-conditioned colorization. From a vision perspective, we would like to more accurately colorize parts of objects (e.g., a person\u2019s shoes); moving to more complex ar4We compute the mean across the c dimension and scale the resulting h \u00d7 w feature map between the limits [0, 255]. chitectures such as variational autoencoders (Deshpande et al., 2017) or PixelCNNs (Guadarrama et al., 2017) might help here, as could increasing training image resolution. We also plan on using re\ufb01nement networks (Shrivastava et al., 2017) to correct for artifacts in the colorized output image. On the language side, moving from explicitly speci\ufb01ed colors to abstract or emotional language is a particularly interesting. We plan to train our models on dialogue/image pairs from datasets such as COMICS (Iyyer et al., 2017) and visual storytelling (Huang et al., 2016); these models could also help learn powerful joint representations of vision and language to improve performance on downstream prediction tasks. Acknowledgement Manjunatha and Davis are partially supported by the Of\ufb01ce of Naval Research under Grant N000141612713: Visual Common Sense Reasoning. Boyd-Graber is supported by NSF Grant IIS1652666. We thank Richard Zhang for helpful comments on our ideas to make training the colorization more ef\ufb01cient." + }, + { + "url": "http://arxiv.org/abs/1804.00060v1", + "title": "Class Subset Selection for Transfer Learning using Submodularity", + "abstract": "In recent years, it is common practice to extract fully-connected layer (fc)\nfeatures that were learned while performing image classification on a source\ndataset, such as ImageNet, and apply them generally to a wide range of other\ntasks. The general usefulness of some large training datasets for transfer\nlearning is not yet well understood, and raises a number of questions. For\nexample, in the context of transfer learning, what is the role of a specific\nclass in the source dataset, and how is the transferability of fc features\naffected when they are trained using various subsets of the set of all classes\nin the source dataset? 
In this paper, we address the question of how to select\nan optimal subset of the set of classes, subject to a budget constraint, that\nwill more likely generate good features for other tasks. To accomplish this, we\nuse a submodular set function to model the accuracy achievable on a new task\nwhen the features have been learned on a given subset of classes of the source\ndataset. An optimal subset is identified as the set that maximizes this\nsubmodular function. The maximization can be accomplished using an efficient\ngreedy algorithm that comes with guarantees on the optimality of the solution.\nWe empirically validate our submodular model by successfully identifying\nsubsets of classes that produce good features for new tasks.", + "authors": "Varun Manjunatha, Srikumar Ramalingam, Tim K. Marks, Larry Davis", + "published": "2018-03-30", + "updated": "2018-03-30", + "primary_cat": "cs.CV", + "cats": [ + "cs.CV" + ], + "main_content": "Introduction The following transfer learning scenario is now common in computer vision: Obtain a convolutional neural network that has been pretrained on a large data set for the task of classi\ufb01cation. The last layer of this network is a softmax layer which corresponds to class probabilities. Upon removing this layer (and possibly one or more of the preceding fully connected layers), treat the remaining network as a generic feature extractor. For example, in the case of popular deep learning architectures such as AlexNet [20] or VGGNet [36], we can extract the activations of the hidden layer immediately before the classi\ufb01er, or the previous hidden layer, to obtain a feature vector that describes an input image. These feature vectors are commonly referred to as CNN features or as fc features (fully connected features such as fc7 or fc6). For a new task, which may or may not be correlated to the original task, we use these fc features as the input to a new classi\ufb01er (such as a linear SVM or a neural network) on a new dataset. Note that the classes in the new task may not match the original classes used in training the feature extractor. This work addresses the following setting: A supervised image-classi\ufb01cation task on one dataset (henceforth referred to as the source task) is used to learn general purpose features that can be used for classi\ufb01cation tasks (referred to as the target tasks) on other datasets. In Figure 1 we consider the MNIST digit and Alphabet classi\ufb01cation (or ALPHANIST)[13] problems as the source task and target task, respectively. Rather than training the classi\ufb01er on the set of all 10 classes in the source task (the 10 digits {0, 1, . . . , 9}), we consider training on a Subset Of Classes (SOC) from the source task (e.g., the set of 4 digits {0, 1, 4, 8}), then use the resulting fc feature extractors for the target task. As shown in the bar graph in Figure 1, each SOC leads to different a different generic feature extractor, which leads to a difference in classi\ufb01cation accuracy on the target task. There seems to be a general assumption in the vision community that using all of the classes and training data from the source task leads to better fc features for the target task. (In our experiments, we de\ufb01ne one set of fc features as better than a second set when the \ufb01rst set achieves higher classi\ufb01cation accuracy on a target task.) We would 1 arXiv:1804.00060v1 [cs.CV] 30 Mar 2018 \fFigure 1. Illustration of the transfer learning scenario studied in this paper. 
In the upper left, we illustrate the source task (e.g., MNIST digit classi\ufb01cation), in which each of several networks is trained using a different Subset of Classes (SOC) from the set of classes in the source task. In the upper right, we illustrate the target task (e.g., alphabet character classi\ufb01cation), which is solved using the learned features from the source task. The bar graph shows the performance on the target task of networks that were trained on the source task using different SOCs. The goal of this work is a general method for identifying an Optimal SOC from the source task, under a budget constraint (a limited number of classes), to produce good generic features that obtain high accuracy on the target task. We refer to this problem as OPT-SOC. like to investigate this assumption by asking the following questions: 1. Monotonicity. Does training on a larger number of classes from the source dataset always produce better features than training on a smaller number of classes? For example, in Figure 1, does a subset with 10 classes (e.g.,{0, . . . , 9}) always produce better features than a subset with 4 classes (e.g.,{0, 1, 4, 8})? 2. Optimal subset of classes. Under a given budget (a \ufb01xed number of classes), what is the Optimal Subset Of Classes from a source dataset for generating good fc features? We refer to this problem as OPT-SOC. Without knowing much (if anything) about the target task, is it possible to know the optimal subset of a certain size that leads to the best fc features? In other words, can we determine which of two different subsets (e.g., {2, 3, 9} versus {5, 6, 7}) will lead to better performance on the target task? 3. Object class diversity: Does it help or hurt the fc features when there are classes in the source dataset that are very similar (i.e., not diverse based on some similarity measure). For example, suppose the source dataset has classes that are visually similar, such as Leopard, Jaguar, and Cheetah. Would including more than one such class in a SOC (at the expense of a more diverse classes) lead to better-performing of generic fc features on a target task? The problems addressed in this paper are challenging for two reasons. First, we do not assume any speci\ufb01c knowledge about the target task, so the selection of a SOC must be based entirely on the properties of the source dataset. Second, deep learning machinery for learning features from a speci\ufb01c data set is often seen as a \u201cblack box\u201d that is still not completely understood, and thus identifying a pattern that can be generalized to many datasets is a signi\ufb01cant challenge. In this work, we model the function that maps a subset of classes in the source task to its end performance on a target task using submodular set functions (See De\ufb01nition 2). Submodular functions are considered to be discrete analogues of convex functions, and they are generally used for modeling diminishing-return behavior in many learning problems. The diminishing return property states that the performance gain achieved by adding a class at an earlier stage is larger than that which would be obtained by adding it at a later stage. Using this modeling, we answer the three questions related to monotonicity, optimal subset selection, and object class diversity. 
We summarize the contributions of this paper below: \u2022 We propose a novel method that uses submodular set functions to model the performance of a set of generic features on unknown target tasks, as a function of the Subset Of Classes from a source task that was used to train the features. \u2022 We propose two different ways to compute the parameters of the submodular set functions: (1) Linear programming for small-sized data sets; and (2) \ufb01tting a quadratic submodular function, based on a similarity score between pairs of classes, for large-scale data sets. \u2022 We empirically show that our modeling allows us to \ufb01nd optimal subsets that performs signi\ufb01cantly better on the target tasks than randomly selected subsets. We envision three potential applications for our work: \u2022 Ef\ufb01cient training: The proposed strategy for identifying the optimal Subset Of Classes in the source task leads to reduction in the size of the training data set, thereby enabling more ef\ufb01cient training of a generic fc feature extractor. \u2022 Data set generation: If one wants to create a standard dataset for the sole purpose of generating generic fc feature extractors, this work can serve as a guideline 2 \ffor considering which combination of object classes would be most useful, thereby reducing the workload of manual annotation [30]. \u2022 Better pre-trained models: We observed that the full source dataset (the set consisting of all of the classes in the source dataset) does not always achieve the highest performance on the target task. OPT-SOC provides an ef\ufb01cient method to identify subsets that can potentially perform better than the set consisting of all the classes. This can be useful to compute better pre-trained models for transfer learning. 2. Related Work Subset selection in data sets: Optimal subset selection is addressed by many researchers using submodular functions, which can be seen as a discrete analogue of convex functions. In particular, the formulation of the subset selection problem as the maximization of a submodular function has been used in many applications such as sensor placement [14], outbreak detection [23], word alignment [25], clustering [26], viral marketing [18], and \ufb01nding diverse subsets in structured item sets [32]. Existing methods show that submodularity is generally well behaved in modeling many subset selection problems. However, OPT-SOC is different from existing problems. Existing subset selection problems typically perform dataset reduction by identifying a subset of training images and solve the same learning algorithm. In OPT-SOC, we don\u2019t use knowledge about the target task while \ufb01nding the subset. The learning algorithm changes depending on the size of the chosen subsets. For example, for subsets of size two (2), our learning algorithm is a binary classi\ufb01cation, whereas for subsets larger than 2, it is a multi-label classi\ufb01cation. Furthermore, in OPT-SOC, the source and target tasks involve deep neural networks, which are highly nonconvex and prone to local minima issues. Thus, it is not entirely obvious to see that OPT-SOC would bene\ufb01t from submodular modeling. The main contribution in this paper it to model OPT-SOC using submodularity, and more importantly, to show that this can be bene\ufb01cial in several data sets. 
Interpretations of CNN features: In the last few years, there have been several papers in deep learning that achieve record-beating performance on challenging visual tasks such as image classi\ufb01cation and object detection [20, 12, 36, 15]. From a scienti\ufb01c point of view, it would be useful to glean insight on these learned features, and a few recent papers address this. For example, by mapping the feature activations in intermediate layers to original input pixels using deconvolution, we can better understand the role of convnet features [43]. To interpret the features learned in a classi\ufb01cation task, we can either generate an arti\ufb01cial image that is representative of a speci\ufb01c class of interest, or highlight the areas of an image that are discriminative of the object of interest [35]. It has been shown that while training convolutional neural networks for the task of scene recognition, the learned network also develops object detectors without using any explicit notion of objects [45]. Convolutional neural networks have been shown to have a few neurons that resemble the so-called \u201cgrandmother neuron\u201d (a hypothetical neuron that is activated when presented with a speci\ufb01c object or concept), but most of neurons in convolutional neural networks form a distributed code [2]. A different line of research demonstrates that one can generate adversarial negatives, which introduce a small, hardly perceptible perturbation that leads to misclassi\ufb01cation of an image. This raises an important concern regarding networks\u2019 ability to achieve high generalization performance [37]. Several other studies have looked at feature learning in the context of binarization [11], systematic variations in scene factors [4], variations in viewpoints [6], organization of class-speci\ufb01c information encoding [38, 39], egomotion [1, 17], temporal context [41], spatial context [10, 31, 29], and color [44]. Although there are many methods that show insights about the features learned by CNNs, we lack mathematical models to explain such phenomena [27]. Understanding generic feature learning: The problem of identifying the speci\ufb01c layers in a neural network that are suitable for transfer learning was studied in [42, 2]. In [33], the authors showed through extensive experiments that a linear SVM applied on generic 4096-dimensional fully connected features extracted from [20] could obtain or outperform state-of-the-art results on a wide variety of tasks. In their follow-up paper [5], the authors study a variety of factors that dictate the effectiveness of transfer learning. In a recent paper [16], the ImageNet dataset [34] is carefully studied to address several important questions: the relative importance of training samples, the relative importance of object classes, interaction between object classes, and comparison between limiting the number of classes versus limiting the number of training images per class. More interestingly, this work explicitly mentions the class subset selection as one of the interesting research questions to answer (the last sentence of Section 5.1 in [16]). The paper also reports observing the diminishing returns property on the target task with respect to the label set, without stating the connection to submodularity. In contrast to their work, we explicitly model the performance on the target task using a submodular set function. 
We also propose a method for identifying optimal subsets of classes top produce better features (a problem that is not addressed in [16]) and demonstrate the effectiveness of our method. 3 \f3. Notations and Preliminaries In this paper we use a set function to denote the performance on a target task and we model this function to be submodular. Let B denote the Boolean set {0, 1} and R the set of reals. We use x to denote vectors. De\ufb01nition 1. A set function F : 2E \u2192R, where E is a \ufb01nite set, maps a set to a real number. Set functions can also be seen as pseudo-Boolean functions [8] that take a Boolean vector as argument and return a real number. De\ufb01nition 2. A set function F : 2E \u2192R is submodular if for all A, B \u2286E with B \u2286A and e \u2208E\\A, we have: F(A \u222a{e}) \u2212F(A) \u2264F(B \u222a{e}) \u2212F(B). (1) This property is also referred to as diminishing return since the gain is less if the element e is included at a later stage [28]. We analyse whether or not the performance of the target task improves by adding classes to a given subset. This behavior can be studied by analysing if the set function modeling the performance of the target task is monotonically increasing or not. De\ufb01nition 3. A set function F is monotonically increasing if for all A, B \u2286E and B \u2286A, we have: F(B) \u2264F(A) (2) 4. Problem Statement Let C denote the set of classes {c1, c2, . . . , cn} in the source task and let T denote a target task. We solve the source task by utilizing a subset of classes A \u2286C and its associated training samples. The source task trains a generic feature extractor, and we use these features to solve the target task T . Let FT : 2C \u2192R denote the accuracy or performance achieved on the target task T as shown below: FT (A) = F(A) + CT , \u2200A \u2286C (3) Here F : 2C \u2192R is independent of T and CT is the task dependent constant. The basic assumption is that if a subset A is better at solving a target task T1 in comparison to another subset B, then it is more likely that A is also better at solving another transfer task T2 in comparison to the subset B. In other words, the source task produces a generic feature extractor that is independent of the target tasks. Our goal is to identify the optimal subset of classes, subject to a cardinality constraint, that produces good performance on the target task: A\u2217= arg max A FT (A) = arg max A F(A), |A| \u2264k, (4) where k \u2264|C|. It is important to note that the constant, which depends on the target task, does not affect the maximization, i.e., the selection of the optimal subset. We refer to F : 2C \u2192R as the transfer function and that is modelled as submodular set function and used to identify the optimal subset. 5. Algorithm 5.1. Submodular function modeling We assume that the transfer function F : 2C \u2192R is a submodular set function. Let xA i be a Boolean variable that indicates the presence of a class ci in a set A \u2286C, i.e., xA i = 1 if ci \u2208A, and xA i = 0, if ci / \u2208A. We denote the transfer function using the following quadratic Boolean function: F(A) = n X i=1 \u03b1ixA i + n X i=1 n X j=i+1 \u03b2ijxA i xA j , (5) where xA i and xA j can be directly obtained from A and \u03b2ij \u22640. The parameter \u03b1i gives the role of the class ci in generating good features. The parameter \u03b2ij denotes the role of having two classes ci and cj jointly in the set A. 
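To make the modelling of Eq. (5) concrete, the short sketch below evaluates the quadratic pseudo-Boolean transfer function on a toy instance with beta_ij <= 0 and exhaustively checks the diminishing-returns inequality of Definition 2. All numbers are synthetic and chosen only to exercise the definitions, not taken from the paper's experiments.

```python
import itertools
import numpy as np

def F(A, alpha, beta):
    """F(A) = sum_i alpha_i x^A_i + sum_{i<j} beta_ij x^A_i x^A_j  (Eq. 5)."""
    A = sorted(A)
    return (sum(alpha[i] for i in A)
            + sum(beta[i, j] for k, i in enumerate(A) for j in A[k + 1:]))

rng = np.random.default_rng(0)
n = 5
alpha = rng.uniform(1.0, 2.0, n)          # synthetic per-class weights
beta = -rng.uniform(0.0, 0.5, (n, n))     # beta_ij <= 0, the submodularity condition

# Exhaustively verify the diminishing-returns inequality of Definition 2.
ground = range(n)
ok = True
for A in itertools.chain.from_iterable(itertools.combinations(ground, r) for r in range(n)):
    for B in itertools.chain.from_iterable(itertools.combinations(A, r) for r in range(len(A) + 1)):
        for e in set(ground) - set(A):
            gain_A = F(set(A) | {e}, alpha, beta) - F(A, alpha, beta)
            gain_B = F(set(B) | {e}, alpha, beta) - F(B, alpha, beta)
            ok = ok and (gain_A <= gain_B + 1e-12)
print("diminishing returns holds:", ok)
```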
Any quadratic pseudo-Boolean function with negative coef\ufb01cients (i.e.,\u03b2ij) for all bilinear terms is submodular [8]. This can be easily shown by checking the diminishing returns property for two sets A, B \u2286C where B \u2286A. Lemma 1. The function F(A) is monotonically nondecreasing if \u03b1i \u2265\u2212P cj\u2208C\\ci \u03b2ij See Section A for the proof. We have shown a general form of submodular function for the transfer function in Equation 5 and the monotonicity conditions in Lemma 1. We will show two different ways to learn the parameters. In the \ufb01rst method, we use linear programming (LP) to compute solutions on the target task based on some SOC from the source dataset. Ideally, we would like to \ufb01nd the transfer function without using any information from the target task. In this paper, the LP is used to analyse the error in modeling the transfer function as a monotonically submodular function. The second method uses similarity matrix between pairs of classes in the source dataset, and this does not use any information from the target task. Parameter estimation using LP: We would like to compute the parameters \u03b1 and \u03b2 for a speci\ufb01c transfer learning setting. In order to do that, let us assume that we have some method to probe the value of the function FT (A) on a target task T for different subsets A \u2286C. Note that the probed values would include an unknown task-speci\ufb01c constant term CT as shown in Equation 3. Based on the probed values for different subsets, we \ufb01t a monotonically 4 \fnon-decreasing submodular function F(A) for the transfer function by minimizing the sum of the L1 norm distances between FT (A) \u2212CT and the \ufb01tted function for different probed values of A \u2286C. We propose an LP to compute the parameters of the transfer function as shown below: {\u03b1, \u03b2} = arg min \u03b1,\u03b2 X A\u2286C |sA| (6) s.t FT (A) = CT + F(A), F(A) + sA = Pn i=1 \u03b1ixA i + Pn i=1 Pn j=i+1 \u03b2ijxA i xA j , \u03b1i \u2265\u2212P cj\u2208C\\ci \u03b2ij, \u2200A \u2286C, \u03b2ij \u22640 Once we solve the \u03b1i and \u03b2ij parameters using LP, we have the solution for F using Equation 5. The cost function P A\u2286C |sA| in the LP, which is the sum of the absolute values of the slack variables, gives some measure of how close the transfer function is to a monotonically non-decreasing submodular function. Note that the LP uses the probed values for the target task to \ufb01t the transfer function. Parameter estimation using similarity matrix: The method to compute parameters using LP is computationally infeasible for data sets with large number of classes. For such scenarios, we propose an alternative method to compute the parameter \u03b2ij, which denotes the interaction between two classes, using some measure of class similarity [40, 21, 7, 3]. In this paper, we use Wordnet tree to compute the similarity between pairs of classes. The critical assumption we make here, which holds frequently in practice, is that classes similar in terms of Wordnet similarity are visually similar. The nodes in the Wordnet tree represent classes. The Lin similarity (one of the many similarity measures in computational linguistics community) between two classes ci and cj is given below: S(ci, cj) = 2logP(L(ci, cj)) logP(ci) + logP(cj) (7) where L is the lowest node in the tree which is a common ancestor to both c1 and c2. 
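One possible way to instantiate the L1-fitting LP above with an off-the-shelf solver is sketched below. It assumes the probed values F_T(A) are given, treats the task-dependent constant C_T as an additional free variable (an assumption about how the probed values are handled), and splits each slack s_A into non-negative parts so that the sum of |s_A| becomes linear. The probed values here are synthetic.

```python
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(0)
n = 4                                        # toy number of source classes
pairs = [(i, j) for i in range(n) for j in range(i + 1, n)]
P = len(pairs)

# Probed subsets A (0/1 indicators) and synthetic probed accuracies F_T(A).
subsets = [x for x in (rng.integers(0, 2, n) for _ in range(12)) if x.sum() >= 2]
y = np.array([0.5 + 0.04 * x.sum() + 0.01 * rng.normal() for x in subsets])
K = len(subsets)

# Variable vector z = [alpha (n) | beta (P) | C_T (1) | s_plus (K) | s_minus (K)].
nv = n + P + 1 + 2 * K
c = np.zeros(nv)
c[n + P + 1:] = 1.0                          # minimise sum_A |s_A| = sum(s_plus + s_minus)

# alpha . x_A + sum_{i<j} beta_ij x_i x_j + C_T - s_plus_A + s_minus_A = F_T(A)
A_eq = np.zeros((K, nv))
for k, x in enumerate(subsets):
    A_eq[k, :n] = x
    A_eq[k, n:n + P] = [x[i] * x[j] for (i, j) in pairs]
    A_eq[k, n + P] = 1.0
    A_eq[k, n + P + 1 + k] = -1.0
    A_eq[k, n + P + 1 + K + k] = 1.0
b_eq = y

# Monotonicity (Lemma 1): alpha_i >= -sum_j beta_ij  <=>  -alpha_i - sum_j beta_ij <= 0.
A_ub = np.zeros((n, nv))
for i in range(n):
    A_ub[i, i] = -1.0
    for p, (a, b) in enumerate(pairs):
        if i in (a, b):
            A_ub[i, n + p] = -1.0
b_ub = np.zeros(n)

bounds = ([(None, None)] * n                 # alpha (non-negativity is implied by Lemma 1)
          + [(None, 0.0)] * P                # beta_ij <= 0
          + [(None, None)]                   # C_T
          + [(0.0, None)] * (2 * K))         # slack parts

res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq, bounds=bounds, method="highs")
alpha_fit, beta_fit = res.x[:n], res.x[n:n + P]
print("total fitting error sum_A |s_A|:", res.fun)
```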
Here the classes can be seen as concepts and P(ci) denotes the probability of a random word consumed by the concept ci. The Lin similarity S(ci, cj) varies from 0 to 1. For more details, we refer the reader to [24]. We use \u03b2ij = \u2212S(ci, cj) and \u03b1i = |C|. Since we don\u2019t have any information about the relative importance of different classes, we set all of them to the same value \u03b1i = |C| that ensures monotonicity of the transfer function. 5.2. Optimal subset selection The use of greedy algorithm for maximizing submodular function is motivated by the following theorem: Theorem 1. [28] For maximizing monotonically nondecreasing submodular functions under a cardinality constraint, the optimality of the greedy algorithm is given by the following equation: f(Agreedy) \u2265(1 \u22121 e)f(AOP T ), (8) where f(\u2205) = 0. We can observe that our transfer function satis\ufb01es F(\u2205) = 0. We brie\ufb02y outline the greedy algorithm to select optimal subsets of size k [28]. 1. Initialize S = \u2205. 2. Let s = arg maxs\u2032\u2208C F(S \u222a{s\u2032}) \u2212F(S) such that |S \u222a{s\u2032}| \u2264k. 3. If s \u0338= \u2205then S = S \u222a{s} and go to step 2. 4. S is the required subset. 6. Experiments We conducted several experiments to address questions related to monotonicity, optimal SOC from the source dataset, and object class diversity. We brie\ufb02y explain the datasets, computation of the transfer function, the network architectures for the source and target tasks, and results obtained using OPT-SOC. 6.1. Datasets We use \ufb01ve datasets : MNIST[22], ALPHANIST[13], CIFAR-10[19], STL-10[9] and CIFAR-100 for our experiments, all of which are standard, except for ALPHANIST 1. We summarize the datasets used in source and target tasks in Table 1. We use the ALPHANIST dataset for a target task in the \ufb01rst experiment. In this experiment, we randomly sample a subset of 16800 digits (12000 for train and 4800 for test). We resize the images to 28x28 to match the dimensions of MNIST, and remove the classes that are similar to the ones in the source task. For example, we remove the alphabets \u201cO\u201d and \u201cI\u201d due to their to similarity with \u201c0\u201d and \u201c1\u201d digits in MNIST. The class labels of STL-10 are same as CIFAR-10, but the images are obtained from ImageNet. We resize the STL-10 images to 32 x 32 for compatibility with CIFAR-10/100. 6.2. Transfer function estimation In each of the four experiments, we computed a monotonically non-decreasing submodular transfer function. For the \ufb01rst two experiments, we were able to compute the function using both LP and similarity matrix. Note that the LPbased \ufb01tting is primarily done to analyse modeling error. In 1https://www.nist.gov/srd/ nist-special-database-19 5 \fExpt Source Dataset No. source classes No. models trained Target Dataset No. target classes 1 MNIST (60k) 10 1013 ALPHANIST 24 2 CIFAR10 (50k) 10 1013 CIFAR100 100 3 CIFAR100 (50k) 100 181 CIFAR10 10 4 CIFAR100 (50k) 100 181 STL10 10 Table 1. Source and target datasets along with the number of classes and trained models. Note that in the case of MNIST and CIFAR-10, we exhaustively train all possible subsets (1013 in number), while we sample a random selection of 181 subsets in the case of CIFAR-100. all the experiments, we probe the performance on the target task for different SOCs of the source task. We refer to them as probed values. 
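The greedy procedure outlined above can be written in a few lines. The sketch below uses the beta_ij = -S(c_i, c_j), alpha_i = |C| parameterisation described in the text, with a randomly generated similarity matrix standing in for the WordNet-based Lin similarities, so the selected subset is purely illustrative.

```python
import numpy as np

def F(A, alpha, beta):
    """Quadratic transfer function of Eq. (5)."""
    A = sorted(A)
    return (sum(alpha[i] for i in A)
            + sum(beta[i, j] for k, i in enumerate(A) for j in A[k + 1:]))

def greedy_soc(alpha, beta, k):
    """Greedy maximisation of F under |S| <= k (steps 1-4 above, Theorem 1 guarantee)."""
    S = set()
    while len(S) < k:
        gains = {e: F(S | {e}, alpha, beta) - F(S, alpha, beta)
                 for e in range(len(alpha)) if e not in S}
        S.add(max(gains, key=gains.get))
    return S

# Toy instance: 6 classes, beta_ij = -S(c_i, c_j) for a made-up similarity matrix in [0, 1].
rng = np.random.default_rng(1)
n = 6
sim = rng.uniform(0.0, 1.0, (n, n))
sim = (sim + sim.T) / 2.0                # symmetric similarities
beta = -sim
alpha = np.full(n, float(n))             # alpha_i = |C|, as chosen in the text
print("selected SOC of size 3:", greedy_soc(alpha, beta, k=3))
```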
Parameter estimation using LP: For a source dataset with n classes, there are 2n \u2212n \u22121 subsets (after removing n singleton sets and the null set). In the case of MNIST and CIFAR-10, which both have 10 classes, we thus have 1024\u221210\u22121 = 1013 subsets, while for CIFAR-100, which has 100 classes, we have 2100 \u2212100 \u22121 \u22481.26 \u00d7 1030 subsets. We test the LP-based parameter estimation for the MNIST and CIFAR-10 datasets. The mean modeling error from the slack variables in the LP for the \ufb01rst experiment is 0.01169, and 0.00815 for the second experiment. The transfer function F can vary from 0 to 1. This implies that the error in submodular function approximation is around 1%. Parameter estimation using similarity matrix: We use Wordnet to obtain similarity matrix and thus, the associated transfer functions for all experiments except the \ufb01rst one (MNIST-ALPHANIST). This is because there is no well established similarity measure for comparing digits. 6.3. Network Architectures For each source task, we train a deep convolutional neural network with the con\ufb01guration shown in Figure 2. The CNNs are trained with a categorical cross-entropy loss with SGD, and an initial learning rate = 0.01. We drop the learning rate to 1/10th whenever the validation accuracy goes through a plateau. When the validation accuracy does not improve for 7 consecutive epochs, we use an early stopping criterion to cull the training thus avoiding over\ufb01tting. We only consider the model that has performed best on the validation set randomly partitioned from the training set, for the rest of the protocol. For the target task, we use the target dataset and extract features in the penultimate (i.e., fully connected) layer and feed them into a shallow neural network with one hidden layer. This shallow neural network is trained thrice and results averaged, to account for variations. A brief summary of these shallow models is provided Figure 2. CNN architectures for source tasks. We use a standard CNN architecture with a cascade of convolution, max-pooling, fully connected, and dropout layers. The Dense(512/128) refers to the Fully Connected layer. The networks used for CIFAR and MNIST are shown in (a) and (b), respectively. Figure 3. Neural network architectures for target tasks. We use a very simple neural network with one hidden layer of dimension 64 and a softmax classi\ufb01cation. The networks used for CIFAR and MNIST are shown in (a) and (b), respectively. in Figure 3. In many target tasks, it is a standard practice to use a simple classi\ufb01cation algorithm such as nearest neighbor, a simpler neural network or SVM for solving the target tasks. 6.4. OPT-SOC Computation To evaluate the optimal SOC, we need the performance of the SOC on the target tasks. In the \ufb01rst two experiments, we use the 1013 probed values for all possible SOC from the source tasks. In the third and fourth experiments, we ran6 \f1 2 3 4 5 6 7 8 9 10 Number\u00a0of\u00a0classes\u00a0classes\u00a0in\u00a0source\u00a0task 0.70 0.75 0.80 0.85 0.90 Accuracy\u00a0on\u00a0target\u00a0task Source:\u00a0MNIST\u00a0\u00a0\u00a0\u00a0Target:\u00a0ALPHANIST Mean Greedy\u00a0soln\u00a0(LP) Quadratic All\u00a0subsets Figure 4. The \ufb01tted F values (black) for the optimal SOC is a smooth monotonically non-decreasing curve. The Optimal SOC obtained using the greedy method (red) outperforms the baseline (green) for most of the cases. 
domly sample 181 subsets (10 subsets each with cardinality {10, 15, 20, ...., 95} and the complete dataset). We want to show that the optimal SOC, obtained using the greedy algorithm with the submodular transfer function F, performs better on the transfer tasks with respect to random subsets. Given a submodular function F obtained through LP or similarity matrix, we can \ufb01nd optimal SOC in the source dataset using the greedy algorithm. We show the performance of the optimal SOC in different experiments as shown in Figures 4 8. In all these graphs, the blue dots show the probed values on the target tasks. We \ufb01nd the mean score for different subsets with same cardinalities. The green curve that connects these mean values will be treated as the baseline. The red curve in all the graphs shows the probed values based on the optimal SOC chosen using the greedy algorithm. In Figures 4 and 5, the red curve shows the performance of the optimal SOC, obtained using submodular function computed using LP. In these \ufb01gures ( 4 and 5), the black curve shows the actual F values corresponding to the optimal subsets, after including the task dependent constant CT that is computed in the LP. In Figures 6, 7, and 8, the red curve shows the performance of the optimal SOC, using similarity matrix. 7. Discussion We observed that the use of submodularity for identifying optimal SOC can be bene\ufb01cial in practice. We brie\ufb02y address the questions related to monotonicity, optimal SOC, and the class diversity. \u2022 Monotonicity: In most of our experiments (Figure 4 7) we observed that in general adding more classes improves the performance on the target task. However, in 1 2 3 4 5 6 7 8 9 10 Number\u00a0of\u00a0classes\u00a0in\u00a0source\u00a0task 0.16 0.18 0.20 0.22 0.24 0.26 0.28 Accuracy\u00a0on\u00a0target\u00a0task Source\u00a0:\u00a0CIFAR\u00ad10\u00a0\u00a0\u00a0Target\u00a0:\u00a0CIFAR\u00ad100 Mean Greedy\u00a0soln\u00a0(LP) Quadratic All\u00a0subsets Figure 5. The \ufb01tted F values (black) for the optimal SOC is a smooth monotonically non-decreasing curve. The Optimal SOC obtained using the greedy method (red) outperforms the baseline (green) for most of the cases. 1 2 3 4 5 6 7 8 9 10 Number\u00a0of\u00a0classes\u00a0in\u00a0source\u00a0task 0.16 0.18 0.20 0.22 0.24 0.26 0.28 Accuracy\u00a0on\u00a0target\u00a0task Source:\u00a0CIFAR\u00ad10\u00a0\u00a0\u00a0\u00a0Target:\u00a0CIFAR\u00ad100 Mean Greedy\u00a0soln\u00a0(Wordnet) All\u00a0subsets Figure 6. The Optimal SOC obtained using the greedy method (red) outperforms the baseline (green) for most of the cases. The submodular function is computed using similarity matrix. many of the experiments, the optimal SOC having 7080% classes gives better performance on the target task compared to using all of the classes. This behavior can be exploited in generating better pre-trained models for popular datasets such as ImageNet. \u2022 Optimal SOC: In all our experiments we outperformed the random subset baseline in the upper mid-range (4080%). In subsets with 90% or more elements, there is signi\ufb01cant overlap among the different subsets. Thus it is hard for one subset to perform signi\ufb01cantly better than the others. The small subsets with 20 or 30 % elements have a diverse set of classes without having pairs of similar ones. 
Since we don\u2019t have a good method to 7 \f0 5 10 15 20 25 30 35 40 45 50 55 60 65 70 75 80 85 90 95 100 Number\u00a0of\u00a0classes\u00a0in\u00a0source\u00a0task 0.45 0.50 0.55 0.60 0.65 Accuracy\u00a0on\u00a0target\u00a0task Source:\u00a0CIFAR\u00ad100\u00a0\u00a0\u00a0\u00a0\u00a0Target:\u00a0CIFAR\u00ad10 Mean\u00a0of\u00a0random\u00a0subsets Greedy\u00a0soln\u00a0(Wordnet) 10\u00a0random\u00a0subsets Figure 7. The Optimal SOC obtained using the greedy method (red) outperforms the baseline (green) for most of the cases. The submodular function is computed using similarity matrix. We use 181 random subsets to generate the baseline. 0 5 10 15 20 25 30 35 40 45 50 55 60 65 70 75 80 85 90 95 100 Number\u00a0of\u00a0classes\u00a0in\u00a0source\u00a0task 0.32 0.33 0.34 0.35 0.36 0.37 0.38 0.39 0.40 Accuracy\u00a0on\u00a0target\u00a0task Source:\u00a0CIFAR\u00ad100\u00a0\u00a0\u00a0\u00a0\u00a0Target:\u00a0STL\u00ad10 Mean Greedy\u00a0soln\u00a0(wordnet) 10\u00a0random\u00a0subsets Figure 8. The Optimal SOC obtained using the greedy method (red) outperforms the baseline (green) for most of the cases. The submodular function is computed using similarity matrix. We use 181 random subsets to generate the baseline. Although, the function is mostly monotonically non-decreasing, there is not much gain by adding more classes. This could be due to the domain difference between CIFAR-100 and STL-10 images. individually evaluate the importance of every class (the \u03b1i\u2019s are dif\ufb01cult to obtain for large datasets), it is hard to identify the optimal SOC that performs better than the others. \u2022 Object class diversity: In most or all our experiments, we observed that the optimal SOC avoids having two classes that are similar to each other. This is evident from our choice of using \u03b2ij = \u2212S(ci, cj) leading to optimal SOC performing better than the random ones. In all the experiments, the source and target tasks do not share the same classes. Note that CIFAR-10 and CIFAR-100 do not share the same object classes. In this paper, we wanted to investigate if submodularity can play a role in explaining the performance of generic feature extractors on target tasks. There are many future avenues to explore: (1) the use of more general submodular functions involving higher order functions, (2) investigating the use of non-monotonous submodular functions and nongreedy strategies for \ufb01nding the optimal SOC and (3) globally optimal approaches for subset selection problems. We studied small and mid-scale datasets and performed a careful analysis of the modeling by probing several SOCs in the source tasks. We are currently looking at computationally ef\ufb01cient ways of using the proposed method for identifying optimal subsets in larger datasets such as ImageNet. APPENDIX A. Proof for lemma 1 Proof. Let us consider the addition of an element ci / \u2208B to B where B \u2286C. We have: F(B \u222aci) = F(B) + \u03b1i + X cj\u2208B\\ci \u03b2ij (9) If \u03b1i \u2265\u2212P cj\u2208C\\ci \u03b2ij and \u03b2ij \u22640, we have \u03b1i \u2265\u2212 X cj\u2208B,i\u0338=j \u03b2ij, \u2200B \u2286C (10) From Equation 9, we have: F(B \u222aci) \u2265F(B) (11) By adding newer elements iteratively, we can show the following for all A, B \u2286C and B \u2286A, we have F(B) \u2264 F(A)." 
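A quick numerical sanity check of Lemma 1 and the appendix argument above, under the tight choice alpha_i = -sum_j beta_ij with beta_ij <= 0; the instance is random and only meant to exercise the monotonicity claim F(B) <= F(A) for B a subset of A.

```python
import itertools
import numpy as np

rng = np.random.default_rng(2)
n = 5
beta = np.triu(-rng.uniform(0.0, 1.0, (n, n)), 1)   # beta_ij <= 0, stored for pairs i < j only
alpha = np.array([-(beta[i, :].sum() + beta[:, i].sum()) for i in range(n)])  # tight Lemma 1 case

def F(A):
    A = sorted(A)
    return (sum(alpha[i] for i in A)
            + sum(beta[i, j] for k, i in enumerate(A) for j in A[k + 1:]))

ok = True
for r in range(n + 1):
    for A in itertools.combinations(range(n), r):
        for s in range(r + 1):
            for B in itertools.combinations(A, s):
                ok = ok and (F(B) <= F(A) + 1e-12)
print("F(B) <= F(A) for all B subset of A:", ok)
```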
+ } + ], + "Ken Duffy": [ + { + "url": "http://arxiv.org/abs/1907.11052v2", + "title": "MDS coding is better than replication for job completion times", + "abstract": "In a multi-server system, how can one get better performance than random\nassignment of jobs to servers if queue-states cannot be queried by the\ndispatcher? A replication strategy has recently been proposed where $d$ copies\nof each arriving job are sent to servers chosen at random. The job's completion\ntime is the first time that the service of any of its copies is complete. On\ncompletion, redundant copies of the job are removed from other queues so as not\nto overburden the system.\n For digital jobs, where the objects to be served can be algebraically\nmanipulated, and for servers whose output is a linear function of their input,\nhere we consider an alternate strategy: Maximum Distance Separable (MDS) codes.\nFor every batch of $n$ digital jobs that arrive, $n+m$ linear combinations are\ncreated over the reals or a large finite field, and each coded job is sent to a\nrandom server. The batch completion time is the first time that any $n$ of the\n$n+m$ coded jobs are served, as the evaluation of $n$ original jobs can be\nrecovered by Gaussian elimination. If redundant jobs can be removed from queues\non batch completion, we establish that in order to get the improved\nresponse-time performance of sending $d$ copies of each of $n$ jobs via the\nreplication strategy, with the MDS methodology it suffices to send $n+d$ jobs.\nThat is, while replication is multiplicative, MDS is linear.", + "authors": "Ken Duffy, Seva Shneer", + "published": "2019-07-25", + "updated": "2019-12-08", + "primary_cat": "cs.PF", + "cats": [ + "cs.PF", + "math.PR" + ], + "main_content": "Introduction It is well known that if a job arrives to a system with many servers, its delay is minimized by joining the queue with the least waiting time. If there are large numbers of servers, the state of each of their queues may not be accessible at each job\u2019s arrival time. The celebrated power-of-d choices result (see [1], [14], [23]) establishes that by sampling a relatively small number of queues at random and joining the shortest of those, performance is asymptotically much better than in the case of random assignment. Many other load-balancing schemes have been proposed and investigated, see [22] for a current survey. An interesting variant of this system has recently been introduced where the waiting times at any of the queues may not be available to an arriving job, see [20, 4, 3] and references therein. In that setting, as opposed to sampling d queues and joining the shortest waiting time, instead a replication-d strategy is proposed: for each job that arrives, a copy of it is placed in d distinct queues whose lengths and waiting times are unknown. The job\u2019s work is complete when the \ufb01rst of its replicas exits service, and the remaining d \u22121 duplicate jobs are then removed from the system. For a heuristic illustration of the performance gain that is obtained from this strategy, assume that a system is in stationarity where any job sent to any server experiences a sojourn time, say W, comprised of the waiting time in the queue and the service time once it reaches the server, with distribution P(W \u2264t) = FW (t). Assuming queues are independent and identically distributed, a single job arrives and is subject to replication-d. 
With R denoting the job\u2019s \u2217Hamilton Institute, Maynooth University, Maynooth, Ireland \u2020School of Mathematical and Computer Sciences, Heriot-Watt University, Edinburgh, UK 1 arXiv:1907.11052v2 [cs.PF] 8 Dec 2019 \fcompletition time, its distribution is given by the minimum of d independent sojourn times P(R1,d > t) = P(min(W1, . . . , Wd) > t) = (1 \u2212FW (t))d. With the system in stationarity and a batch of n jobs arriving where each job is subject to replication-d, the tail of the batch completion time distribution is governed by last job completion time in the batch and satis\ufb01es P(Rn,d > t) = P(max(R1,d 1 , . . . , R1,d n ) > t) = 1 \u2212 \u0010 1 \u2212(1 \u2212FW (t))d\u0011n \u223cn(1 \u2212FW (t))d, (1) for large t. Thus, through the use of the replication-d strategy, the tail of the completion time distribution of the batch of n jobs is exponentially curtailed from approximately n(1 \u2212FW (t)). That tail reduction is signi\ufb01cant, and greatly reduces the straggler problem where the system is held up waiting for one job to be served before it can proceed to the next task. In the present paper we consider the performance of an alternative approach that is available when the jobs to be served can be subject to algebraic manipulation and the output of the servers is a linear function of their input. This is the case, for example, if the jobs consist of digital packets that are traversing a network where the output of a server is its input and for large matrix multiplication tasks for Machine Learning. In the network setting, the replication-d strategy is similar in spirit to repetition coding [18], which is known to be sub-optimal, and instead we consider a MDS (Maximum Distance Separable) approach. The bene\ufb01ts of MDS codes for making communications robust in networks subject to packet erasures are well-established. For information retrevial from a multi-server storage system, the improvements in response time that is attainable through the use of coding have been studied [8, 19, 9, 12, 11]. Both replication and MDS coding have also been proposed recently to resolve the stragglers problem in distributed gradient descent for Machine Learning [20, 10, 21, 2, 5, 13, 16, 15]. To the best of our knowledge, however, this is one of the \ufb01rst times its utility in reducing queueing delay in a feed-forward system has been shown. Consider a batch arrival of n jobs, J1, . . . , Jn, each of which consists of data of \ufb01xed size whose symbols take values in the reals or a large Galois \ufb01eld. The principle of the MDS coding approach is that rather than send duplicate jobs, one instead creates n+m linear combinations of the form Kj = A(j) 1 J1 + \u00b7 \u00b7 \u00b7 + A(j) n Jn, j \u2208{1, . . . , n + m}, where the A(j) i are chosen in the reals or a \ufb01nite \ufb01elds. The principle behind MDS codes is to consider each coded job, Kj, as a random linear equation such that the reciept of any linear function of any n of the n + m linear combinations allows recovery of the processing of the original n jobs by Gaussian elimination. Reed-Solomon codes, for example, [17] are MDS codes. More generally, a Random Linear Code, where the coe\ufb03cients are chosen uniformly at random, is an MDS code with high probability for a su\ufb03ciently large \ufb01eld size, e.g. [7]. When MDS is employed, the completion time of a batch is equal to the job completion time of any n out of the n+m coded jobs. 
To heuristically understand the gain that can be obtained by MDS, again assume that the system is in stationarity with each queue independently having a sojourn time distribution FW . A batch of n jobs arrives and are coded into n + m MDS jobs. Their completition time has the same distribution as n-th order statistics of n + m random variables with distribution FW , whose complementary distribution is known to be given by P(Cn,m > t) = (n + m) \u0012n + m \u22121 n \u22121 \u0013 n\u22121 X k=0 \u0012n \u22121 k \u0013 (\u22121)k 1 m + k + 1 (1 \u2212FW (t))m+k+1 . As t \u2192\u221e, the tail is equivalent to its leading term (n + m) \u0012n + m \u22121 n \u22121 \u0013 1 m + 1 (1 \u2212FW (t))m+1 . (2) 2 \fThus the tail of the response time with the MDS strategy is smaller than the tail of the response time achieved by replication-d in (1) so long as m \u2265d. This non-rigorous sketch illustrates the main message of this paper: where nd copies of jobs are used for replication-d, for digital data subject to linear processing, MDS can provide better tail response times with only n+d copies. We present the precise model considered in the paper in Section 2. In Section 3 we consider the case where k, the number of servers, tends to in\ufb01nity. Under the mean-\ufb01eld assumption used in the replication-d literature, we demonstrate that as long as m \u2265d the tail distribution of batch completion times of jobs in stationarity is strictly smaller in the case of MDS when compared with replication-d, making the above heuristic arguments rigorous. 2 A more precise model In the rest of the paper we shall assume that there are k servers, each with an in\ufb01nite-bu\ufb00er queue to store outstanding jobs. Each arrival is a batch of n jobs that appears according to a Poisson process of intensity \u03bbk/n, so that, on average, there are \u03bbk jobs arriving per unit of time. For digital data as in communication networks, the batch arrival assumption is not restrictive as individual jobs can be sub-divided. Batch arrivals are also appropriate to represent the parallelisation of MapReduce computations and more general parallel-processing computer systems (see, e.g., [24]). We assume each version of any job takes an exponential time with rate 1 to complete on any server, taken independently of everything else, including other copies of the same job, and each server\u2019s output is a linear function of its input. Another key question is how, once one copy of a job has been processed to completion, its remaining copies are treated. In some circumstances, such as the queueing of data jobs in a communications network, it is not practical to remove copied jobs and they must be served. In other instances, such as for parallelisation of MapReduce computations, it would be possible to remove waiting tasks from queues and cease the service of copies being processed. This latter setting, considered in [3] and references therein, provides a model of greater mathematical interest, and we focus on it in this paper. 3 MDS vs replication-d with redundant removals The tails of batch completition times are challenging to analyse, but one can examine the behaviour in the limit as the number of queues, k, becomes large. In the system with replication, the job completition time distribution is derived in [3, Section 5]) under an assumption on asymptotic independence of queues (Assumption 1, given below). It is straightforward to adapt that derivation to the case of batch arrivals considered here. 
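A small Monte-Carlo sketch of the heuristic comparison between Eqs. (1) and (2) above, using i.i.d. Exp(1) sojourn times as a stand-in for the stationary distribution F_W (the text only assumes some common distribution, so this choice is an illustrative assumption). Note that replication-d uses nd copies in total while MDS uses only n + m.

```python
import numpy as np

rng = np.random.default_rng(0)
n, d, m = 3, 3, 3
trials, t = 200_000, 1.5

# Stand-in stationary sojourn times W ~ Exp(1), i.i.d. across queues.
W_rep = rng.exponential(1.0, size=(trials, n, d))
R_batch = W_rep.min(axis=2).max(axis=1)          # replication-d: each job waits for its fastest copy

W_mds = rng.exponential(1.0, size=(trials, n + m))
C_batch = np.sort(W_mds, axis=1)[:, n - 1]       # MDS: n-th order statistic of n + m sojourn times

print("replication (n*d = 9 copies):  P(R > t) ~", (R_batch > t).mean())
print("MDS         (n+m = 6 copies):  P(C > t) ~", (C_batch > t).mean())
# Per Eqs. (1)-(2) the replication tail decays roughly like e^{-d t} while the MDS tail
# decays like e^{-(m+1) t}, so with m >= d the gap in favour of MDS widens as t grows.
```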
The tail of the completion time of any one of the n jobs is given by P(R1,d > t) = \u0012 1 \u03bb + (1 \u2212\u03bb)et(d\u22121) \u0013d/(d\u22121) . The completion time for n jobs in a batch is then the maximum of n independent random variables with this distribution, and a batch\u2019s response time then has the tail given by P(Rn,d > t) = 1 \u2212 1 \u2212 \u0012 1 \u03bb + (1 \u2212\u03bb)et(d\u22121) \u0013d/(d\u22121)!n . (3) Following [3], we adopt Assumption 1 on the asymptotic independence of the queues for our analysis of the MDS strategy, and refer to that article for a discussion of it. 3 \fAssumption 1. Let Ti denote the completition time of a job, not subject to removal, at queue i out of a total of k. For k su\ufb03ciently large, the random variables (Ti1, . . . Tin+m) are independent for any distinct i1, . . . , in+m. We prove the following along the lines of the proof introduced in [3, Section 5]. We note that a di\ufb00erential equation on the completion times may also be obtained from the result of [6, Theorem 5.2] where much more general workload-based policies are considered (using derivations similar to those in Section 6.1 therein). We however present below a simple derivation of (4), using only straightforward queueing arguments. Theorem 2. For every batch arrival of n jobs, n + m coded jobs are sent to n + m queues chosen uniformly at random without replacement. As soon as any n coded jobs are completed, the remaining m jobs are removed from the system, including those currently in service. Let V be the random waiting time of a single (virtual) job that is subject to neither coding or removal, which is needed to determine the batch waiting time. Under Assumption 1, its waiting time satis\ufb01es the following di\ufb00erential equation dP(V > t) dt = \u2212P(V > t) (4) + \u03b1(n + m \u22121) \u0012n + m \u22122 n \u22121 \u0013 n\u22121 X i=0 \u0012n \u22121 i \u0013 (\u22121)i 1 (m + i)(m + i + 1)(P(V > t))m+i+1, where \u03b1 = \u03bb(m + n)/n. For large waiting times, the tail of its distribution satis\ufb01es lim sup t\u2192\u221e etP(V > t) < \u221e. Let Cn,m denote the random MDS batch completion time. Its distribution satis\ufb01es P(Cn,m > t) = (n + m) \u0012n + m \u22121 n \u22121 \u0013 n\u22121 X i=0 \u0012n \u22121 i \u0013 (\u22121)i 1 m + i + 1 (P(V > t))m+i+1 , (5) and its tail satis\ufb01es lim sup t\u2192\u221e e(m+1)tP(Cn,m > t) < \u221e. (6) We note that this result encompasses the replication-d strategy by setting n = 1 and m = d \u22121, and that in that setting (4) is exactly the di\ufb00erential equation obtained in [3] (see the displayed equation just after (24) therein). These results con\ufb01rm the heuristic analysis in the introduction that the tail of the batch completition time distribution for replication in equation (3) is slower than when MDS is used, equation (6), so long as m \u2265d. Thus with only n + d coded jobs, one can achieve better tail performance than sending nd jobs under the replication strategy. In the case of replication, a closed form for the batch completition time distribution is available, but that is not the case for MDS as the di\ufb00erential equation (4) describing the virtual waiting time distribution of a non-coded job cannot be solved in closed form in general. It can, however, be readily solved numerically and inserted into equation (5) to evaluate the batch completition time distribution for MDS. An example comparison is presented in Fig. 
1 where batches consist of n = 3 jobs and the replication strategy places d = 3 copies of each into the system. For MDS, we consider a range of values for m from 2 to 6. The \ufb01gure recapitulates the conclusion that MDS with m \u2265d leads to signi\ufb01cant gains in completition time tail for larger values of t. Note that for m = d = 3, the batch completition time of MDS is not stochastically dominated by that of replication and that short delays are more likely with MDS, and it is only in the tail that MDS outperforms 4 \f0 2 4 6 8 10 t 10-30 10-25 10-20 10-15 10-10 10-5 100 P(D>t) Coding m=2 Coding m=3 Coding m=4 Coding m=5 Coding m=6 Figure 1: Complementary batch completition time distribution for a batch of n = 3 jobs. The dashed line corresponds to replication with d = 3 giving nd replicated jobs, while the solid lines correspond to MDS with n + m coded jobs. replication. For values of m \u22654, however, the MDS batch completition time distribution is better for all times. Proof of Theorem 2. Equation (5) is a direct application of known results on orderstatistics distributions. We now prove (4). Following derivations in [3, Section 5], denote by Ti the non-redundant response time for a tagged job in queue i. Then Ti = Wi + Ei, where Wi is the workload (real workload, i.e. time to empty the queue if there were no more arrivals) and Ei is an Exponential(1) random variable (service requirement). Let us denote by FT and FW the distribution functions of T and W, respectively. Let us also denote their tails by F T and F W . Then we can write F T (t) = e\u2212t + Z t 0 e\u2212yF W (t \u2212y)dy = e\u2212t + Z t 0 e\u2212(t\u2212y)F W (y)dy = e\u2212t + e\u2212t Z t 0 eyF W (y)dy, di\ufb00erentiating which we get F T \u2032(t) = F W (t) \u2212F T (t). (7) Let us now write an expression for F W (t). We can look at the previous arrival (which in the case of MDS happens an Exponential(\u03bb(m+n)/n) time earlier than the tagged arrival. For simplicity denote \u03b1 = \u03bbm/n. Condition \ufb01rst on the previous arrival having been y time before the tagged arrival. Then W > t in one of two cases: either the previous arrival sees workload larger than t + y, or the previous arrival sees workload smaller than t + y, its own (non-redundant) time in queue i is larger than t + y and by the time t + y no more than n \u22121 of the other n + m \u22121 copies left other queues (or, in other words, nth order statistic of n + m \u22121 random variables exceeds t + y); 5 \fIntegrating over all values of y, we obtain: F W (t) = Z \u221e 0 \u03b1e\u2212\u03b1ydy \u0012 F W (t + y) + (F T (t + y) \u2212F W (t + y))(n + m \u22121) \u0012n + m \u22122 n \u22121 \u0013 n\u22121 X i=0 \u0012n \u22121 i \u0013 (\u22121)i 1 m + i(F T (t + y))m+i = e\u03b1t Z \u221e t \u03b1e\u2212\u03b1zdz \u0012 F W (z) + (F T (z) \u2212F W (z))(n + m \u22121) \u0012n + m \u22122 n \u22121 \u0013 n\u22121 X i=0 \u0012n \u22121 i \u0013 (\u22121)i 1 m + i(F T (z))m+i \u0013 , where we used known results for the tail distribution of nth order statistic of n + m \u22121 random variables. 
Di\ufb00erentiating the above relation, we get F W \u2032(t) = \u03b1F W (t) \u2212\u03b1 \u0012 F W (t) + (n + m \u22121) \u0012n + m \u22122 n \u22121 \u0013 (F T (t) \u2212F W (t)) n\u22121 X i=0 \u0012n \u22121 i \u0013 (\u22121)i 1 m + i(F T (t))m+i \u0013 = \u03b1(n + m \u22121) \u0012n + m \u22122 n \u22121 \u0013 F T \u2032(t) n\u22121 X i=0 \u0012n \u22121 i \u0013 (\u22121)i 1 m + i(F T (t))m+i, where in the last equality we used (7). The above can be integrated and plugged into (7) to obtain F T \u2032(t) = \u2212F T (t)+\u03b1(n+m\u22121) \u0012n + m \u22122 n \u22121 \u0013 n\u22121 X i=0 \u0012n \u22121 i \u0013 (\u22121)i 1 (m + i)(m + i + 1)(F T (t))m+i+1. This proves (4). Recall that m \u2212n \u22651. Since F T (x) \u21920 as x \u2192\u221e, there exists X such that F T \u2032(x) \u2264\u2212F T (x) + A(F T (x))2, for all x \u2265X, with some constant A > 0. Denote by g(x) = 1/F T (x). Then g\u2032(x) = \u2212F T (x) (F T (x))2 \u2265\u2212\u2212F T (x) + A(F T (x))2 (F T (x))2 = g(x) \u2212A for all x \u2265X. Since g(x) \u2192\u221eas x \u2192\u221e, we can also assume that g(x) \u2265A for all x \u2265X. Then the above implies that g\u2032(x) g(x) \u2212A \u22651 for all x \u2265X. Integrating the above inequality from X to x implies log \u0012 g(x) \u2212A g(X) \u2212A \u0013 \u2265x \u2212X, and hence g(x) \u2265A + (g(X) \u2212A)ex\u2212X \u2265Bex, with a constant B. This of course implies (6), and the proof is complete. 6" + } + ], + "Puneet K. Dokania": [ + { + "url": "http://arxiv.org/abs/1507.01208v1", + "title": "Parsimonious Labeling", + "abstract": "We propose a new family of discrete energy minimization problems, which we\ncall parsimonious labeling. Specifically, our energy functional consists of\nunary potentials and high-order clique potentials. While the unary potentials\nare arbitrary, the clique potentials are proportional to the {\\em diversity} of\nset of the unique labels assigned to the clique. Intuitively, our energy\nfunctional encourages the labeling to be parsimonious, that is, use as few\nlabels as possible. This in turn allows us to capture useful cues for important\ncomputer vision applications such as stereo correspondence and image denoising.\nFurthermore, we propose an efficient graph-cuts based algorithm for the\nparsimonious labeling problem that provides strong theoretical guarantees on\nthe quality of the solution. Our algorithm consists of three steps. First, we\napproximate a given diversity using a mixture of a novel hierarchical $P^n$\nPotts model. Second, we use a divide-and-conquer approach for each mixture\ncomponent, where each subproblem is solved using an effficient\n$\\alpha$-expansion algorithm. This provides us with a small number of putative\nlabelings, one for each mixture component. Third, we choose the best putative\nlabeling in terms of the energy value. Using both sythetic and standard real\ndatasets, we show that our algorithm significantly outperforms other graph-cuts\nbased approaches.", + "authors": "Puneet K. Dokania, M. Pawan Kumar", + "published": "2015-07-05", + "updated": "2015-07-05", + "primary_cat": "cs.CV", + "cats": [ + "cs.CV" + ], + "main_content": "Introduction The labeling problem provides an intuitive formulation for several problems in computer vision and related areas. Brie\ufb02y, the labeling problem is de\ufb01ned using a set of random variables, each of which can take a value from a \ufb01nite and discrete label set. 
The assignment of values to all the variables is referred to as a labeling. In order to quantitatively distinguish between the large number of putative labelings, we are provided with an energy functional that maps a labeling to a real number. The energy functional consists of two types of terms: (i) the unary potential, which depends on the label assigned to one random variable at a time; and (ii) the clique potential, which depends on the labels assigned to a set of random variables. The goal of the labeling problem is to obtain the labeling that minimizes the energy. Perhaps the most well-studied special case of the labeling problem is the metric labeling problem [2, 12]. Here, the unary potentials are arbitrary. However, the clique potentials are specified by a user-defined metric distance function over the label space. Specifically, the clique potentials satisfy the following two properties: (i) each clique potential depends on two random variables; and (ii) the value of the clique potential (also referred to as the pairwise potential) is proportional to the metric distance between the labels assigned to the two random variables. Metric labeling has been used to formulate several problems in low-level computer vision, where the random variables correspond to image pixels. In such scenarios, it is natural to encourage two random variables that correspond to two nearby pixels in the image to take similar labels. However, by restricting the size of the cliques to two, metric labeling fails to capture more informative high-order cues. For example, it cannot encourage an arbitrarily sized set of similar pixels (such as the pixels that define a homogeneous superpixel) to take similar labels. We propose a natural generalization of the metric labeling problem to high-order potentials, which we call parsimonious labeling. Similar to metric labeling, our energy functional consists of arbitrary unary potentials. However, the clique potentials can be defined on any set of random variables, and their value depends on the set of unique labels assigned to the random variables in the clique. In more detail, the clique potential is defined using the recently proposed notion of a diversity [4], which generalizes metric distance functions to all subsets of the label set. By minimizing the diversity, our energy functional encourages the labeling to be parsimonious, that is, to use as few labels as possible. This in turn allows us to capture useful cues for important low-level computer vision applications. In order to be practically useful, we require a computationally feasible solution for parsimonious labeling. To this end, we design a novel three-step algorithm that uses an efficient graph-cuts based method as its key ingredient. The first step of our algorithm approximates a given diversity as a mixture of a novel hierarchical P^n Potts model (a generalization of the P^n Potts model [13]). The second step of our algorithm solves the labeling problem corresponding to each component of the mixture via a divide-and-conquer approach, where each subproblem is solved using \u03b1-expansion [25]. This provides us with a small set of putative labelings, each corresponding to a mixture component. The third step of our algorithm simply chooses the putative labeling with the minimum energy. Using both synthetic and real datasets, we show that our overall approach provides accurate results for various computer vision applications.
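As a concrete illustration of the clique potentials described above, the short Python fragment below (an illustrative sketch only, not code from the paper; the truncated linear metric and the constants lam and M are our assumptions) evaluates one member of the diversity family that the paper returns to later, the diameter diversity of the set of unique labels in a clique: a clique that uses a single label pays nothing, and the cost grows with how spread out its labels are under the induced metric.

def truncated_linear(a, b, lam=1.0, M=5):
    # one possible induced metric d(a, b) on integer labels (an assumption for this sketch)
    return lam * min(abs(a - b), M)

def diameter_diversity(clique_labels, metric=truncated_linear):
    # diversity of the set of unique labels in a clique: largest pairwise induced distance
    unique = list(set(clique_labels))
    if len(unique) <= 1:
        return 0.0
    return max(metric(a, b) for a in unique for b in unique)

print(diameter_diversity([3, 3, 3]))     # 0.0 -> a clique that reuses one label is free
print(diameter_diversity([3, 4, 9]))     # 5.0 -> truncated at M
print(diameter_diversity([0, 1, 2, 2]))  # 2.0

Weighted per clique and added to arbitrary unary terms, such a potential makes labelings that reuse a few labels cheaper than equally good but more fragmented ones, which is exactly the behaviour the energy is designed to encourage.
2.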
Related Work In last few years the research community have witnessed many successful applications of high-order random \ufb01elds to solve many low level vision related problems such as disparity estimation, image restoration, and object segmentation [7][8][10] [14][18][19][24][26][27]. In this work, our focus is on methods that (i) rely on ef\ufb01cient move-making algorithms based on graph cuts; (ii) provide a theoretical guarantee on the quality of the solution. Below, we discuss the work most closely related to ours in more detail. Kohli et al. [13] proposed the P n Potts model, which enforces label consistency over a set of random variables. In [14], they presented a robust version of the P n Potts model that takes into account the number of random variables that have been assigned an inconsistent label. Both the P n Potts model and its robust version lend themselves to the ef\ufb01cient \u03b1\u2212expansion algorithm [13, 14]. Furthermore, the \u03b1\u2212expansion algorithm also provides a multiplicative bound on the energy of the estimated labeling with respect to the optimal labeling. While the robust P n Potts model has been shown to be very useful for semantic segmentation, our generalization of the P n Potts model offers a natural extension of the metric labeling problem and is therefore more widely applicable to several low-level computer vision applications. Delong et al. [7] propose a global clique potential that is based on the cost of using a label or a subset of labels in the labeling of the random variables. Similar to the P n Potts model, the label cost based potential can also be minimized using \u03b1\u2212expansion. However, the theoretical guarantee provided by \u03b1\u2212expansion is an additive bound, which is not invariant to reparameterization of the energy function. Delong et al. [6] also proposed an extension of their work to hierarchical costs. However, the assumption of a given hierarchy over the label set limits its application in practice. Independently, Ladicky et al. [18] proposed a global cooccurrence cost based high order model for a much wider class of energies that encourage the use of a small set of labels in the estimated labeling. Theoretically, the only constraint that [18] enforces in high order clique potential is that it should be monotonic in the label set. In other words, the problem addressed in [18] can be regarded as a generalization of parsimonious labeling. However, they approximately optimize an upperbound on the actual energy functional which does not provide any optimality guarantees. In our experiments, we demonstrate that our move-making algorithm signi\ufb01cantly outperforms their approach for the special case of parsimonious labeling. 3. Preliminaries The labeling problem. Consider a random \ufb01eld de\ufb01ned over a set of random variables x = {x1, \u00b7 \u00b7 \u00b7 , xN} arranged in a prede\ufb01ned lattice V = {1, \u00b7 \u00b7 \u00b7 , N}. Each random variable can take a value from a discrete label set L = {l1, \u00b7 \u00b7 \u00b7 , lH}. Furthermore, let C denote the set of maximal cliques. Each maximal clique consists of a set of random variables that are all connected to each other in the lattice. A labeling is de\ufb01ned as the assignment or mapping of random variables to the labels. 
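The random field just described — variables V taking values in the label set L, and a set C of maximal cliques — together with the energy functional defined in the next paragraph can be written down concretely as a small sketch (hypothetical Python with our own names and toy numbers, not an implementation from the paper; the clique potential is left as a pluggable function so that the P^n Potts model and the diversity-based potentials introduced later can be dropped in, and the per-clique weights anticipate the weighted form used later in the paper):

def energy(unary, cliques, clique_weights, clique_potential, labeling):
    # E(x) = sum_i theta_i(x_i) + sum_c w_c * theta_c(x_c) for a single labeling x
    # unary[i][l]      : cost of assigning label l to variable i
    # cliques          : list of tuples of variable indices (the maximal cliques C)
    # clique_potential : maps the tuple of labels inside a clique to a non-negative cost
    value = sum(unary[i][labeling[i]] for i in range(len(labeling)))
    for c, members in enumerate(cliques):
        value += clique_weights[c] * clique_potential(tuple(labeling[i] for i in members))
    return value

# Toy instance: four variables, three labels, pairwise cliques plus one high-order clique.
unary = [[0.2, 0.8, 0.5], [0.6, 0.1, 0.7], [0.4, 0.3, 0.9], [0.5, 0.5, 0.1]]
cliques = [(0, 1), (1, 2), (2, 3), (0, 1, 2, 3)]
weights = [1.0, 1.0, 1.0, 0.5]
potts = lambda labels: 0.0 if len(set(labels)) == 1 else 1.0  # stand-in clique potential
print(energy(unary, cliques, weights, potts, labeling=[0, 1, 1, 2]))

Exhaustively minimising such an energy is intractable in general, since the number of candidate labelings grows exponentially in the number of variables, which is what motivates the move-making algorithms discussed in the rest of the paper.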
To assess the quality of each labeling x we define an energy functional as:
E(x) = \sum_{i \in V} \theta_i(x_i) + \sum_{c \in C} \theta_c(x_c), \quad (1)
where \theta_i(x_i) is an arbitrary unary potential for assigning the label x_i to the random variable i, and \theta_c(x_c) is a clique potential for assigning the labels x_c to the variables in the clique c. We assume that the clique potentials are non-negative. As will be seen shortly, this assumption is satisfied by the new family of energy functionals proposed in our paper. The total number of putative labelings is H^N, each of which can be assessed using its corresponding energy value. Within this setting, the labeling problem is to find the labeling that corresponds to the minimum energy according to the functional (1). Formally, the labeling problem can be defined as: x^* = \operatorname{argmin}_x E(x). P^n Potts model. An important special case of the labeling problem, which will be used throughout this paper, is defined by the P^n Potts model [13]. The P^n Potts model is a generalization of the well-known Potts model [20] to high-order energy functions (when cliques can be of arbitrary sizes). For a given clique, the P^n Potts model is defined as:
\theta_c(x_c) = \begin{cases} \gamma_k, & \text{if } x_i = l_k, \; \forall i \in c \\ \gamma_{max}, & \text{otherwise} \end{cases} \quad (2)
where \gamma_k is the cost of assigning all the nodes to the label l_k \in L, and \gamma_{max} > \gamma_k for all l_k \in L. Intuitively, the P^n Potts model enforces label consistency by assigning the cost \gamma_{max} whenever more than one label is present in the given clique. \u03b1-expansion for the P^n Potts model.
By de\ufb01nition of an optimal labeling (one that has the minimum energy), the multiplicative bound will always be greater than or equal to one [16]. Multiplicative Bound for the \u03b1-expansion algorithm for the P n Potts model. Using the \u03b1\u2212expansion algorithm for the P n potts model we obtain the multiplicative bound of \u03bb min(M, |L|), where, M is the size of the largest maximal clique in the graph, |L| is the number of labels, and \u03bb is de\ufb01ned as below [11]: \u03b3min = min k\u2208L \u03b3k, \u03bb = ( \u03b3max \u03b3min , if \u03b3min \u0338= 0 \u03b3max, otherwise (4) 4. Parsimonious Labeling The parsimonious labeling problem is de\ufb01ned using an energy functional that consists of unary potentials and clique potentials de\ufb01ned over cliques of arbitrary sizes. While the parsimonious labeling problem places no restrictions on the unary potentials, the clique potentials are speci\ufb01ed using a diversity function [4]. Before describing the parsimonious labeling problem in detail, we brie\ufb02y de\ufb01ne the diversity function for the sake of completion. De\ufb01nition 1. A diversity is a pair (L, \u03b4), where L is the set of labels and \u03b4 is a non-negative function de\ufb01ned on \ufb01nite subsets of L, \u03b4 : \u0393 \u2192R, \u2200\u0393 \u2286L, satisfying following properties: \u2022 Non Negativity: \u03b4(\u0393) \u22650, and \u03b4(\u0393) = 0, iff, |\u0393| \u22641. \u2022 Triangular Inequality: if \u03932 \u0338= \u2205, \u03b4(\u03931 \u222a\u03932) + \u03b4(\u03932 \u222a \u03933) \u2265\u03b4(\u03931 \u222a\u03933), \u2200\u03931, \u03932, \u03933 \u2286L. \u2022 Monotonicity: \u03931 \u2286\u03932 implies \u03b4(\u03931) \u2264\u03b4(\u03932) Using a diversity function, we can de\ufb01ne a clique potential as follows. We denote by \u0393(xc) the set of unique labels in the labeling of the clique c. Then, \u03b8c(xc) = wc\u03b4(\u0393(xc)), where \u03b4 is a diversity function and wc is the non-negative weight corresponding to the clique c. Formally, the parsimonious labeling problem amounts to minimizing the following energy functional: E(x) = X i\u2208V \u03b8i(xi) + X c\u2208C wc\u03b4(\u0393(xc)) (5) Therefore, given a clique xc and the set of unique labels \u0393(xc) assigned to the random variables in the clique, the clique potential function for the parsimonious labeling problem is de\ufb01ned using \u03b4(\u0393(xc)), where \u03b4 : \u0393(xc) \u2192R is a diversity function. Intuitively, diversities enforces parsimony by choosing a solution with less number of unique labels from a set of equally likely solutions, which makes it highly interesting for the computer vision community. This is an essential property in many vision problems, for example, in case of image segmentation, we would like to see label consistency within superpixels in order to preserve discontinuity. Unlike the P n Potts model the diversity does not enforce the label consistency very rigidly. It gives monotonic rise to the cost based on the number of labels assigned to the given clique. An important special case of the parsimonious labeling problem is the metric labeling problem, which has been extensively studied in computer vision [2] and theoretical computer science [12]. In metric labeling, the maximal cliques are of size two (pairwise) and the clique potential function is a metric distance function de\ufb01ned over the labels. Recall that a distance function d : L \u00d7 L \u2192R is a metric if and only if: (i) d(., .) 
\u22650; (ii) d(i, j) + d(j, k) \u2265 d(i, k), \u2200i, j, k; and (iii) d(i, j) = 0 if and only if i = j. Notice that, there is a direct link between the metric distance function and the diversities. The diversities can be 3 \fseen as the metric distance function over the sets of arbitrary sizes. In another words, diversities are the generalization of the metric distance function and boil down to a metric distance function if the input set is restricted to the subsets with cardinality of at most two. Another way of understanding the connection between metrics and diversities is that every diversity induces a metric. In other words, consider d(li, li) = \u03b4(li) and d(li, lj) = \u03b4({li, lj}). Using the properties of diversities, it can be shown that d(\u00b7, \u00b7) is a metric distance function. Hence, in case of energy functional de\ufb01ned over pairwise cliques, the parsimonious labeling problem reduces to the metric labeling problem. In the remaining part of this section we talk about a speci\ufb01c type of diversity called the diameter diversity, show its relation with the well known P n Potts model, and propose a hierarchical P n Potts model based on the diameter diversity de\ufb01ned over a hierarchical clustering (de\ufb01ned shortly). However, note that our approach is applicable to any general parsimonious labeling problem. Diameter diversity. Among many known diversities ([3]), in this work, we are primarily interested in the diameter diversity. Let (L, \u03b4) be a diversity and (L, d) be the induced metric of (L, \u03b4), where d : L \u00d7 L \u2192R and d(li, lj) = \u03b4({li, lj}), \u2200li, lj \u2208L, then for all \u0393 \u2286L, the diameter diversity is de\ufb01ned as: \u03b4dia(\u0393) = max li,lj\u2208\u0393 d(li, lj). (6) Clearly, given the induced metric function de\ufb01ned over a set of labels, diameter diversity over any subset of labels gives the measure of how dissimilar (or diverse) the labels are. More the dissimilarity, based on the induced metric function, higher is the diameter diversity. Therefore, using diameter diversity as clique potentials enforces the similar labels to be together. Thus, a special case of parsimonious labeling in which the clique potentials are of the form of diameter diversity can be de\ufb01ned as below: E(x) = X i\u2208V \u03b8i(xi) + X c\u2208C wc\u03b4dia(\u0393(xc)) (7) Notice that the diameter diversity de\ufb01ned over uniform metric is nothing but the P n Potts model where \u03b3i = 0. In what follows we de\ufb01ne a generalization of the P n Potts model, the hierarchical P n Potts model, which will play a key role in the rest of the paper. The Hierarchical P n Potts Model. The hierarchical P n Potts model is a diameter diversity de\ufb01ned over a special type of metric known as the r-HST metric. A rooted Figure 1: An example of r-HST for r = 2. The cluster associated with root p contains all the labels. As we go down, the cluster splits into subclusters and \ufb01nally we get the singletons, the leaf nodes (labels). The root is at depth of d = 1 and leaf nodes at d = 3. The metric de\ufb01ned over the r-HST is denoted as dt(., .), the shortest path between the inputs. For example, dt(l1, l3) = 18 and dt(l1, l2) = 6. The diameter diversity for the subset of labels at cluster p is max{li,lj}\u2208{l1,l2,l3,l4} dt(li, lj) = 18. 
tree, as shown in \ufb01gure (1), is said to be an r-HST, or rhierarchically well separated [1] if it satisfy the following properties: (i) all the leaf nodes are the labels; (ii) all edge weights are positive; (iii) the edge lengths from any node to all of its children are the same; and (iv) on any root to leaf path the edge weight decrease by a factor of at least r > 1. We can think of a r-HST as a hierarchical clustering of the given label set L. The root node represents the cluster at the top level of the hierarchy and contains all the labels. As we go down in the hierarchy, the clusters breaks down into smaller clusters until we get as many leaf nodes as the number of labels in the given label set. The metric distance function de\ufb01ned on this tree dt(.) is known as the r-HST metric. In other words, the distance dt(\u00b7, \u00b7) between any two nodes in the given r-HST is the shortest path distance between these nodes in the tree. The diameter diversity de\ufb01ned over dt(., .) is called the hierarchical P n Potts model. The example of a diameter diversity de\ufb01ned over an r-HST is given in the \ufb01gure (1). 5. The Hierarchical Move Making Algorithm In the \ufb01rst part of this section we propose a move making algorithm for the hierarchical P n Potts model (de\ufb01ned in the previous section). In the second part, we show how our hierarchical move making algorithm can be used to minimize the much more general parsimonious labeling problem with optimality guarantees (tight multiplicative bound). 5.1. The Hierarchical Move Making Algorithm for the Hierarchical P n Potts Model In Hierarchical P n Potts model the clique potentials are of the form of the diameter diversity de\ufb01ned over a given rHST metric function. The move making algorithm proposed 4 \fAlgorithm 1 The Move Making Algorithm for the Hierarchical P n Potts Model. input r-HST Metric, wc, \u2200c \u2208C, and \u03b8i(xi), \u2200i \u2208V 1: d = D, the leaf nodes 2: repeat 3: for each p \u2208N(d) do 4: if |\u03b7(p)| = 0, leaf node then 5: xp i = p, \u2200i \u2208V 6: else 7: Fusion Move \u02c6 tp = argmin tp\u2208{1,\u00b7\u00b7\u00b7 ,|\u03b7(p)|}N E(tp) (8) 8: xp i = x \u03b7(p,\u02c6 tp i ) i . 9: end if 10: end for 11: d \u2190d \u22121 12: until d \u22651. in this section to minimize such an energy functional is a divide-and-conquer based approach, inspired by the work of [17]. Instead of solving the actual problem, we divide the problem into smaller subproblems where each subproblem amounts to solving \u03b1\u2212expansion for the P n Potts model [13]. More precisely, given an r-HST, each node of the rHST corresponds to a subproblem. We start with the bottom node of the r-HST, which is a leaf node, and go up in the hierarchy solving each subproblem associated with the nodes encountered. In more detail, consider a node p of the given r-HST. Recall that any node p in the r-HST represents a cluster of labels denoted as Lp \u2286L (\ufb01gure 1). In another words, the leaf nodes of the subtree rooted at p belongs to the Lp. Thus, the subproblem de\ufb01ned at node p is to \ufb01nd the labeling xp where the label set is restricted to Lp, as de\ufb01ned below. xp = argmin x\u2208Lp \u0010 X i\u2208V \u03b8i(xi) + X c\u2208C wc\u03b4dia(\u0393(xc)) \u0011 (9) If p is the root node, then the above problem (equation 9) is as dif\ufb01cult as the original labeling problem (since Lp = L). 
However, if p is the leaf node then the solution of the problem associated with p is trivial, xp i = p for all i \u2208V, which means, assign the label p to all the random variables. This insight leads to the design of our approximation algorithm, where we start by solving the simple problems corresponding to the leaf nodes, and use the labelings obtained to address the more dif\ufb01cult problem further up the hierarchy. In what follows, we describe how the labeling of the problem associated with the node p, when p is not the leaf node, is obtained using the labelings of its chidren node. Figure 2: An example of solving the labeling problem at non-leaf node (p) by combining the solutions of its child nodes {p1, p2}, given clique c and the labelings that it has obtained at the child nodes. Note that the hierarchical clustering shown in this \ufb01gure is the top two levels of the r-HST shown in the \ufb01gure (1), for a given clique c. The diameter diversity of the labeling of clique c at node p1 is 0 as it contains only one unique label l1. The diameter diversity of the labeling at p2 is dt(l3, l4) = 6 and the label set at p is 18. Solving the Parent Labeling Problem Before delving into the details, let us de\ufb01ne some notations for the purpose of clarity. Let D be the depth (or the number of levels) in the given r-HST. The root node being at the top level, depth of one. Let \u03b7(p) denotes the set of child nodes associated with a non-leaf node p and \u03b7(p, k) denotes its kth child node. Recall that our approach is bottom up, therefore, for each child node of p we already have a labeling associated with them. We denote the labeling associated with the kth child of the node p as x\u03b7(p,k). Thus, x\u03b7(p,k) i denotes the label assigned to the ith random variable by the labeling of the kth child of the node p. We also de\ufb01ne an N dimensional vector tp, where each index of the the vector can take a value from the set denoting the child indices of node p, {1, \u00b7 \u00b7 \u00b7 , |\u03b7(p)|}, where |\u03b7(p)| denotes the number of child nodes of p. More precisely, tp i = k denotes that the label for the ith random variable comes from the kth child of the node p. Therefore, the labeling problem at node p reduces to \ufb01nding the optimal tp. Thus, the labeling problem at node p amounts to \ufb01nding the best child index k \u2208{1, \u00b7 \u00b7 \u00b7 , |\u03b7(p)|} for each random variable i \u2208V so that the label assigned to the random variable comes from the labeling of the kth child. Using the above notations, associated with a tp we de\ufb01ne a new energy functional as: E(tp) = X i\u2208V \u00af \u03b8i(tp i ) + X c\u2208C wc\u00af \u03b8c(tp c) (10) where \u00af \u03b8i(tp i ) = \u03b8i(x\u03b7(p,k) i ) if tp i = k (11) 5 \fwhich says that the unary potential for tp i = k is the unary potential associated to the ith random variable corresponding to the label x\u03b7(p,k) i . The new clique potential \u00af \u03b8c(tp c) is as de\ufb01ned below: \u00af \u03b8c(tp c) = ( \u03b3p k, if tp i = k, \u2200i \u2208c \u03b3p max, otherwise (12) where \u03b3p k = \u03b4dia(\u0393(x\u03b7(p,k) c )) is the diameter diversity of the set of unique labels associated with x\u03b7(p,k) c and \u03b3p max = \u03b4dia(Lp) is the diameter diversity of the set of labels associated with the cluster at node p. Recall that, because of the construction of the r-HST, Lq \u2282Lp for all q \u2208\u03b7(p). 
Hence, the monotonicity property of the diameter diversity ensures that \u03b3p max > \u03b3p k, \u2200k \u2208\u03b7(p). This is the suf\ufb01cient criterion to prove that the potential function de\ufb01ned by equation (12) is a P n Potts model. Therefore, the \u03b1\u2212expansion algorithm can be used to obtain the locally optimal tp for the energy functional (10). Once we have obtained the locally optimal \u02c6 tp, the labeling xp at node p can be trivially obtained as follows: xp i = x \u03b7(p,\u02c6 tp i ) i , which says that the \ufb01nal label of the ith random variable is the one assigned to it corresponding to the labeling of the (\u02c6 tp i )th child of the node p. Figure (2) shows an instance of the above mentioned algorithm to combine the labelings of the child nodes to obtain the labeling of the parent node. The complete hierarchical move making algorithm for the hierarchical P n Potts model is shown in the Algorithm-1. Multiplicative Bound. Theorem-1 gives the multiplicative bound for the Move Making Algorithm for the Hierarchical P n Potts model. Theorem 1. The move making algorithm for the hierarchical P n Potts model, Algorithm-1, gives the multiplicative bound of \u0010 r r\u22121 \u0011 min(M, |L|) with respect to the global minima. Here, M is the size of the largest maximal-clique and |L| is the number of labels. Proof: Given in Appendix. 5.2. The Move Making Algorithm for the Parsimonious Labeling In the previous subsection, we proposed a hierarchical move making algorithm for the hierarchical P n Potts model. This restricted us to a very limited class of clique potentials. In this section we generalize our approach to the much more general parsimonious labeling problem. The move making algorithm for the parsimonious labeling problem is shown in the Algorithm-(2). Given a diversity based clique potentials, clique weights, and the unary potentials, the Algorithm-(2) approximates the diversity into a mixture of hierarchical P n Potts models and then Algorithm 2 The Move Making Algorithm for the Parsimonious Labeling Problem. input Diversity (L, \u03b4); wc, \u2200c \u2208C; \u03b8i(xi), \u2200i \u2208V; L; k 1: Approximate the given diversity as the mixture of k hierarchical P n Potts model using Algorithm-3. 2: for each hierarchical P n Potts model in the mixture do 3: Use the hierarchical move making algorithm de\ufb01ned in the Algorithm-1. 4: Compute energy corresponding to the solution obtained. 5: end for 6: Choose the solution with the minimum energy. Algorithm 3 Diversity to Mixture of Hierarchical P n Potts model. input Diversity (L, \u03b4), k 1: Compute the induced metric, d(.), where d(li, lj) = \u03b4({li, lj}), \u2200li, lj \u2208L. 2: Approximate d(.) into mixture of k r-HST metrics dt(.) using the algorithm proposed in [9]. 3: for each r-HST metrics dt(.) do 4: Obtain the corresponding Hierarchical P n Potts model by de\ufb01ning the diameter diversity over dt(.) 5: end for use the previously de\ufb01ned hierarchical move making algorithm on each of the hierarchical P n Potts models. The algorithm for approximating a given diversity into a mixture of hierarchical P n Potts models is shown in Algorithm-(3). The \ufb01rst and the third steps of the Algorithm-(3) have already been discussed in the previous sections. The second step, which amounts to \ufb01nding the mixture of r-HST metrics for a given metric, can be solved using the randomized algorithm proposed in [9]. 
We refer the reader to [9] for further details of the algorithm for approximating a metric using a mixture of r-HST metrics. Multiplicative Bound Therorem-2 gives the multiplicative bound for the parsimonious labeling labeling problem, when the clique potentials are any general diversity. Theorem 2. The move making algorithm de\ufb01ned in Algorithm-2 gives the multiplicative bound of \u0010 r r\u22121 \u0011 (|L|\u2212 1)(log |L|) min(M, |L|) for the parsimonious labeling problem (equation 5). Here, M is the size of the largest maximal-clique and |L| is the number of labels. Proof: Given in the Appendix. 6. Experiments We demonstrate the utility of the parsimonious labeling on both synthetic and real data. In case of synthetic data, 6 \fwe perform signi\ufb01cant number of random experiments on big grid lattices and evaluate our method based on the energy and the time taken. To evaluate the modeling capabilities of the parsimonious labeling, we used it on two challenging real problems: (i) stereo matching, and (ii) image inpainting. We use co-occurrence statistics based energy functional proposed by Ladicky et al. [18] as our baseline. Theoretically, the only constraint that [18] enforces on the clique potentials is that they must be monotonic in the label set. Therefore, can be regarded as the generalization of the parsimonious labeling. However, based on the synthetic and the real data results, supported by the theoretical guarantees, we show that the parsimonious labeling and the move making algorithm proposed in this work outperforms the more general work proposed in [18]. Recall that the energy functional of the parsimonious labeling problem is de\ufb01ned as: E(x) = X i\u2208V \u03b8i(xi) + X c\u2208C wc\u03b4(\u0393(xc)) (13) In our experiments, we frequently use the truncated linear metric. We de\ufb01ne it below for the sake of completeness. \u03b8i,j(la, lb) = \u03bb min(|la \u2212lb|, M), \u2200la, lb \u2208L. (14) where \u03bb is the weight associated with the metric and M is the truncation constant. 6.1. Synthetic Data We consider following two cases: (i) when the hierarchical P n Potts model is given, and (ii) when a general diversity is given. In each of the two cases, we generate lattices of size 100 \u00d7 100, 20 labels, and use \u03bb = 1. The cliques are generated using a window of size 10 \u00d7 10 in a sliding window fashion. The unary potentials were randomly sampled from the uniform distribution de\ufb01ned over the interval [0, 100]. In the \ufb01rst case, we randomly generated 100 lattices and random r-HST trees associated with each lattice, ensuring that they satisfy the properties of the r-HST. Each r-HST was then converted into hierarchical P n Potts model by taking diameter diversity over each of them. This hierarchical P n Potts model was then used as the actual clique potential. We performed 100 such experiments. On the other hand, in the second case, for a given value of the truncation M, we generated a truncated linear metric and 100 lattices. We treated this metric as the induced metric of a diameter diversity and generated mixture of hierarchical P n Potts model using Algorithm-3. Applied Algorithm-1 for the energy minimization over each hierarchical P n Potts model in the mixture and chose the one with the minimum energy. Notice that, in this case, the actual potential is the given diversity, not the generated hierarchical P n Potts models. Thus, the co-occurrence [18] was given the actual diversity as the clique potentials. 
The method was evaluated using the given diversity as the clique potentials. We used four different values of the truncation factor M \u2208{1, 5, 10, 20}. For both the experiments, we used 7 different values of wc: wc \u2208{0, 1, 2, 3, 4, 5, 100}. The average energy and the time taken for both the methods and both the cases are shown in the \ufb01gure (3). It is evident from the \ufb01gures that our method outperforms co-occurrence [18] in both the cases, in term of time and the energy. In case the hierarchical P n Potts model is given, case (i), our method performs much better than co-occurrence [18] because of the fact that it is directly minimizing the given potential. In case (ii), despite the fact that our method \ufb01rst approximates the given diversity into mixture of hierarchical P n Potts, it outperforms cooccurrence [18]. This can be best supported by the fact that our algorithm has very tight multiplicative bound. 6.2. Real Data In case of real data, the high-order cliques we used are the superpixels obtained using the mean-shift method [5]. The clique potentials used for the experiments are the diameter diversity of the truncated linear metric (equation (14)). A truncated linear metric enforces smoothness in the pairwise setting, therefore, the diameter diversity of the truncated linear metric will naturally enforce smoothness in the high-order cliques, which is a desired cue for the two applications we are dealing with. In both the real experiments we used the following form of wc (for the high order cliques): wc = exp\u2212\u03c1(xc) \u03c32 , where \u03c1(xc) is the variance of the intensities of the pixels in the clique xc and \u03c3 is a hyperparameter. 6.2.1 Stereo Matching Given two recti\ufb01ed stereo pair of images, the problem of stereo matching is to \ufb01nd the disparity (gives the notion of depth) of each pixel in the reference image [23, 22]. In this work, we extended the standard setting of the stereo matching [22] to high-order cliques and tested our method to the images, \u2018tsukuba\u2019 and \u2018teddy\u2019, from the widely used Middlebury stereo data set [21]. The unaries were computed as the L1\u2212norm of the difference in the RGB values of the left and the right image pixels. Notice that the index for the right image pixel is the index for the left image pixel minus the disparity, which is the label. In case of \u2018teddy\u2019 the unaries were trucated at 16. The weights wc for the pairwise cliques are set to be proportional to the L1\u2212norm of the gradient \u2206of the intensities of the neighbouring pixels. In case of \u2018tsukuba\u2019, if \u2206< 8, wc = 2, otherwise wc = 1. In case of \u2018teddy\u2019, if \u2206< 10, wc = 3, otherwise wc = 1. As mentioned earlier, wc for the high-order cliques is set to be proportional to the variance. We used different val7 \f(a) Energy (b) Time (in seconds) (c) Energy (d) Time (in seconds) Figure 3: Synthetic (Blue: Our, Red: Co-occ [18]). The x-axis for all the \ufb01gures is the weight associated with the cliques (wc). Figures (a) and (b) are the plots for the energy and the time when the hierarchical P n Potts model was assumed to be known. Figures (c) and (d) are the energy and the time plots for the case when a diversity (diameter diversity over truncated linear metric) was given as the clique potentials. Notice that in both the cases our method outperforms the baseline [18] both in terms of energy and time. Also, for very high value of wc = 100, both the methods converges to the same labeling. 
This is expected as a very high value of wc enforces rigid smoothness by assigning everything to the same label. (a) Tsukuba (Energy, Time) (b) Our (1195800, 167) (c) Co-occ [18] (2202500, 95) (d) Teddy (Energy, Time) (e) Our (1511206, 287) (f) Co-occ [18] (1519500, 605) Figure 4: Stereo Matching Results. Figures (a) and (d) are the ground truth disparity for the \u2018tsukuba\u2019 and \u2018teddy\u2019 respectively. Notice that our method outperforms the baseline Co-ooc [18] in both the cases in terms of energy. From \ufb01gure (b) and (e), we can clearly see the effect of \u2018parsimonious labeling\u2019 as the regions are smooth and the discontinuity is preserved. (a) Penguin (Energy, Time) (b) Our (12516336, 156) (c) Co-oc [18] (14711806, 110) (d) House (Energy, Time) (e) Our (32799162, 1014) (f) Co-oc [18] (38597848, 367) Figure 5: Image inpainting results. Figures (a) and (d) are the input images of \u2018penguin\u2019 and \u2018house\u2019 with added noise and obscured regions. Our method, (b) and (e), outperforms the baseline Co-ooc [18] in both the cases in terms of energy. Figure (b) clearly shows the effect of \u2018parsimonious labeling\u2019 as the regions are smooth and the discontinuity is preserved. ues of \u03c3, \u03bb, and the truncation M. Because of the space constraints we are showing results for the following setting: for \u2018tsukuba\u2019, \u03bb = 20, \u03c3 = 100 and M = 10; for \u2018teddy\u2019, \u03bb = 10, \u03c3 = 1000 and M = 1. Figure (4) shows the results obtained. Notice that our method signi\ufb01cantly outperforms the co-occurrence [18] based method in terms of energy for both, \u2018tsukuba\u2019 and \u2018teddy\u2019. We show similar promising results for different parameters in the Appendix. 6.2.2 Image Inpainting and Denoising Given an image with added noise and obscured regions (regions with missing pixels), the problem is to denoise the image and \ufb01ll the obscured regions such that it is consistent with the surroundings. We performed this experiment on the images, \u2018penguin\u2019 and \u2018house\u2019, from the widely used Middlebury data set. The images under consideration are gray scale, therefore, there are 256 labels in the interval [0, 255], each representing an intensity value. The unaries for each pixel (or node) corresponding to a particular label, is the squared difference between the label and the intensity 8 \fvalue at that pixel. The weights wc for the pairwise cliques are all set to one. For the high-order cliques, as mentioned earlier, wc are chosen to be proportional to the variance of the intensity of the participating pixels. We used different values of \u03c3, \u03bb, and the truncation M. Because of the space constraints we are showing results for the following setting: \u2018penguin\u2019, the \u03bb = 40, \u03c3 = 10000 and M = 40; for \u2018house\u2019, the \u03bb = 30, \u03c3 = 10 and M = 40. Figure 5 shows the results obtained. Notice that our method signi\ufb01cantly outperforms the co-occurrence based method [18] in terms of energy for both, \u2018penguin\u2019 and \u2018house\u2019. We show similar promising results for different parameters in the Appendix. 7. Discussion We proposed a new family of discrete optimization parsimonious labeling, a novel hierarchical P n Potts model, and move making algorithms to minimize energy functional for them. We gave very tight multiplicative bounds for the move making algorithms, applicable to all the \u2018diversities\u2019. 
An interesting direction for future research would be to explore different \u2018diversities\u2019 and propose algorithms speci\ufb01c to them with better bounds. Another interesting future work would be to directly approximate \u2018diversities\u2019 into mixture of hierarchical P n Potts model, without using the intermediate r-HST. 9 \fA. Additional Real Data Experiments and Analysis Recall that the energy functional of the parsimonious labeling problem is de\ufb01ned as: E(x) = X i\u2208V \u03b8i(xi) + X c\u2208C wc\u03b4(\u0393(xc)) (15) where \u03b4() is the diversity function de\ufb01ned over the set of unique labels present in the clique xc. In our experiments, we frequently use the truncated linear metric. We de\ufb01ne it below for the sake of completeness. \u03b8i,j(la, lb) = \u03bb min(|la \u2212lb|, M), \u2200la, lb \u2208L. (16) where \u03bb is the weight associated with the metric and M is the truncation constant. In case of real data, the high-order cliques are de\ufb01ned over the superpixels obtained using the mean-shift method [5]. The clique potentials used for the experiments are the diameter diversity of the truncated linear metric. A truncated linear metric (equation (16)) enforces smoothness in the pairwise setting, therefore, the diameter diversity of the truncated linear metric will naturally enforce smoothness in the high-order cliques, which is a desired cue for the two applications we are dealing with. In all the real experiments we use the following form of wc (for the high order cliques): wc = exp\u2212\u03c1(xc) \u03c32 , where \u03c1(xc) is the variance of the intensities of the pixels in the clique xc and \u03c3 is a hyperparameter. In order to show the modeling capabilities of the parsimonious labeling we compare our results with the well known \u03b1\u2212expansion [25], TRWS [15], and the Co-occ [18]. We also show the effect of clique sizes, which in our case are the superpixels obtained using the mean-shift algorithm, and the parameter wc associated with the cliques, for the purpose of understanding the behaviour of the parsimonious labeling. A.1. Stereo Matching Please refer to the paper for the description of the stereo matching problem. Figures (6) and (7) shows the comparisons between different methods for the \u2018teddy\u2019 and \u2018tsukuba\u2019 examples, respectively. It can be clearly seen that the parsimonious labeling gives better results compared to all the other three methods. The parameter wc can be thought of as the trade off between the in\ufb02uence of the pairwise and the high order cliques. Finding the best setting of wc is very important. The effect of the parameter wc, which is done by changing \u03c3, is shown in the \ufb01gure (8). Similarly, the cliques have great impact on the overall result. Large cliques and high value of wc will result in over smoothing. In order to visualize this, we show the effect of clique size in the \ufb01gure (9). (a) Gnd Truth (b) \u03b1\u2212exp (c) TRWS (d) Co-occ (e) Our Method Figure 6: Comparison of all the methods for the stereo matching of \u2018teddy\u2019. We used the optimal setting of the parameters proposed in the well known Middlebury webpage and [22]. The above results are obtained using \u03c3 = 102 for the Co-occ and our method. Clearly, our method gives much smooth results while keeping the underlying shape intact. This is because of the cliques and the corresponding potentials (diversities) used. 
The diversities enforces smoothness over the cliques while \u03c3 controls this smoothness in order to avoid over smooth results. A.2. Image Inpainting and Denoising Please refer to the paper for the description of the image inpainting and the denoising problem. Figures (10) and (11) shows the comparisons between the different methods for the \u2018penguin\u2019 and the \u2018house\u2019 examples, respectively. It can be clearly seen that the parsimonious labeling gives highly promising results compared to all the other methods. 10 \f(a) Gnd Truth (b) \u03b1\u2212exp (c) TRWS (d) Co-occ (e) Our Method Figure 7: Comparison of all the methods for the stereo matching of \u2018tsukuba\u2019. We used the optimal setting of the parameters proposed in the well known Middlebury webpage and [22]. The above results are obtained using \u03c3 = 102 for the Co-occ and our method. We can see that the disparity obtained using our method is closest to the ground truth compared to all other methods. In our method, the background is uniform (under the table also), the camera shape is closest to the ground truth camera, and the face disparity is also closest to the ground truth compared to other methods. (a) \u03c3 = 103 (b) \u03c3 = 104 Figure 8: Effect of \u03c3 in the parsimonious labeling. All the parameters are same except for the \u03c3. Note that as we increase the \u03c3, the wc increases, which in turn results in over smoothing. Figure 9: Effect of clique size (superpixels). The top row shows the cliques (superpixels) used and the bottom row shows the stereo matching using these cliques. As we go from left to right, the minimum number of pixels that a superpixel must contain increases. All the other parameters are the same. In order to increase the weight wc, we use high value of \u03c3, which is \u03c3 = 105 in all the above cases. B. Proof of Theorems The labeling problem. As already de\ufb01ned in the paper, consider a random \ufb01eld de\ufb01ned over a set of random variables x = {x1, \u00b7 \u00b7 \u00b7 , xN} arranged in a prede\ufb01ned lattice V = {1, \u00b7 \u00b7 \u00b7 , N}. Each random variable can take a value from a discrete label set L = {l1, \u00b7 \u00b7 \u00b7 , lH}. The energy functional corresponding to a labeling x is de\ufb01ned as: E(x) = X i\u2208V \u03b8i(xi) + X c\u2208C \u03b8c(xc) (17) where \u03b8i(xi) is any arbitrary unary potential, and \u03b8c(xc) is a clique potential for assigning the labels xc to the variables in the clique c. 11 \f(a) Original (b) Input (c) \u03b1\u2212exp (d) TRWS (e) Co-occ (f) Our Figure 10: Comparison of all the methods for the image inpainting and denoising problem of the \u2018penguin\u2019. Notice that our method recovers the hand of the penguin very smoothly. In other methods, except Co-oc, the ground is over-smooth while our method recovers the ground quite well compared to others. (a) Original (b) Input (c) \u03b1\u2212exp (d) TRWS (e) Co-occ (f) Our Figure 11: Comparison of all the methods for the image inpainting and denoising problem of the \u2018house\u2019. Notations. \u0393(xc) denotes the set of unique labels present in the clique xc. \u03b4(\u0393(xc)) and \u03b4dia(\u0393(xc)) denotes the diversity and the diameter diversity of the unique labels present in the clique xc, respectively. M = maxc |xc| is the size of the largest maximal-clique and |L| is the number of labels. B.1. Multiplicative Bound of the Hierarchical Move Making Algorithm for the Hierarchical P n Potts Model Proof of Theorem-1 Proof. 
Let x\u2217be the optimal labeling of the given hierarchical P n Potts model based labeling problem. Note that any node p in the underlying r-HST represents a cluster (subset) of labels. For each node p in the r-HST we de\ufb01ne following sets using x\u2217: Lp = {li|li \u2208L, i \u2208p}, Vp = {xi : x\u2217 i \u2208Lp}, Ip = {c : xc \u2286Vp}, Bp = {c : xc \u2229Vp \u0338= \u2205, xc \u2288Vp}, Op = {c : xc \u2229Vp = \u2205}. (18) In other words, Lp is the set of labels in the cluster at pth node, Vp is the set of nodes whose optimal label lies in the subtree rooted at p, Ip is the set of cliques such that the optimal labeling lies in the subtree rooted at p, Bp is the set of cliques (boundary cliques) such that \u2200xc \u2208Bp, \u2203{xi, xj} \u2208xc : x\u2217 i \u2208Lp, x\u2217 j / \u2208Lp, and Op is the set of outside cliques such that the optimal assignment for all the nodes belongs to the set L \\ Lp. Let\u2019s de\ufb01ne xp as the labeling at node p. We prove the following lemma relating x\u2217and xp. Lemma 1. Let xp be the labeling at node p, x\u2217be the optimal labeling of the given hierarchical P n Potts model, and \u03b4dia(\u0393(xp c)) be the diameter diversity based clique potential de\ufb01ned as maxli,lj\u2208Lp dt(li, lj), \u2200p, where dt(., .) is the tree metric de\ufb01ned over the given r-HST, then the following bound holds true at any node p of the r-HST. X c\u2208Ip \u03b4dia(\u0393(xp c)) \u2264 \u0012 r r \u22121 \u0013 min(M, |L|) X c\u2208Ip \u03b4dia(\u0393(x\u2217 c)) (19) 12 \fProof. We prove the above lemma by mathematical induction. Clearly, when p is a leaf node, xi = p, \u2200i \u2208V. For a non-leaf node p, we assume that the lemma holds true for the labeling xq of all its children q. Given the labeling xp and xq, we de\ufb01ne a new labeling xpq such that xpq = n xq i if x\u2217 i \u2208Lq xp i otherwise. (20) Note that xpq lies within one \u03b1-expansion iteration away from xp. Since xp is the local minima, we can say that E(xp|Ip) + E(xp|Bp) + E(xp|Op) \u2264 E(xpq|Ipq) + E(xpq|Bpq) + E(xpq|Opq) E(xp|Ip) + E(xp|Bp) \u2264 E(xpq|Ipq) + E(xpq|Bpq) (21) X c\u2208Ip \u03b4dia(\u0393(xp c)) + X c\u2208Bp \u03b4(\u0393(xp c)) \u2264 X c\u2208Ipq \u03b4dia(\u0393(xpq c )) + X c\u2208Bpq \u03b4dia(\u0393(xpq c )) (22) X c\u2208Iq \u03b4dia(\u0393(xp c)) + X c\u2208Bq \u03b4dia(\u0393(xp c)) \u2264 X c\u2208Iq \u03b4dia(\u0393(xpq c )) + X c\u2208Bq \u03b4dia(\u0393(xpq c )) (23) Using the mathematical induction we can write X c\u2208Iq \u03b4dia(\u0393(xp c)) + X c\u2208Bq \u03b4dia(\u0393(xp c)) \u2264 min(M, |L|) \u0012 r r \u22121 \u0013 X c\u2208Iq \u03b4dia(\u0393(x\u2217 c)) + X c\u2208Bq \u03b4dia(\u0393(xpq c )) (24) Now consider a clique c \u2208Bq. Let ep be the length of edges from node p to its children q. 
Since c \u2208Bq, there must exist atleast two nodes xi and xj in xc such that x\u2217 i \u2208Lq and x\u2217 j / \u2208Lq, therefore, by construction of r-HST \u03b4dia(\u0393(x\u2217 c)) \u22652ep (25) Furthermore, by the construction of xpq, Lpq \u2286Lp, therefore, in worst case (leaf nodes), we can write \u03b4dia(\u0393(xpq c )) = maxli,lj\u2208Lpqdt(li, lj) \u2264 2ep \u0012 1 + 1 r + 1 r2 + \u00b7 \u00b7 \u00b7 \u0013 = 2ep \u0012 r r \u22121 \u0013 \u2264 \u03b4dia(\u0393(x\u2217 c)) \u0012 r r \u22121 \u0013 (26) From inequalities (24) and (26) X c\u2208Iq \u03b4dia(\u0393(xp c)) + X c\u2208Bq \u03b4dia(\u0393(xp c)) \u2264 min(M, |L|) \u0012 r r \u22121 \u0013 X c\u2208Iq \u03b4dia(\u0393(x\u2217 c)) + \u0012 r r \u22121 \u0013 X c\u2208Bq \u03b4dia(\u0393(x\u2217 c)) (27) In order to get the bound over the total energy we sum over all the children q of p, denoted as \u03b7(p). Therefore, summing the inequality (27) over \u03b7(p) we get X q\u2208\u03b7(p) X c\u2208Iq \u03b4dia(\u0393(xp c)) + X q\u2208\u03b7(p) X c\u2208Bq \u03b4dia(\u0393(xp c)) \u2264 min(M, |L|) \u0012 r r \u22121 \u0013 X q\u2208\u03b7(p) X c\u2208Iq \u03b4dia(\u0393(x\u2217 c)) + \u0012 r r \u22121 \u0013 X q\u2208\u03b7(p) X c\u2208Bq \u03b4dia(\u0393(x\u2217 c)) (28) 13 \fThe LHS of the above inequality can be written as X q\u2208\u03b7(p) X c\u2208Iq \u03b4dia(\u0393(xp c)) + X q\u2208\u03b7(p) X c\u2208Bq \u03b4dia(\u0393(xp c)) \u2265 X c\u2208\u222aq\u2208\u03b7(p)Iq \u03b4dia(\u0393(xp c)) + X c\u2208\u222aq\u2208\u03b7(p)Bq \u03b4dia(\u0393(xp c)) = X c\u2208Ip \u03b4dia(\u0393(xp c)) (29) The above inequality and equality is due to the fact that \u2229q\u2208\u03b7(p)Iq = \u2205, \u2229q\u2208\u03b7(p)Bq is not necessarily an empty set, \u03b4dia(\u0393(xc)) \u22650, and Ip = {\u222aq\u2208\u03b7(p)Iq} \u222a{\u222aq\u2208\u03b7(p)Bq}. Now let us have a look into the second term of the RHS of the inequality (28) X q\u2208\u03b7(p) X c\u2208Bq \u03b4dia(\u0393(x\u2217 c)) \u2264 X c\u2208\u222aq\u2208\u03b7(p)Bq min(|\u03b7(p)|, |xc|)\u03b4dia(\u0393(x\u2217 c)) (30) \u2264 min \u0012 max p\u2208\u03b7(p) |\u03b7(q)|, max c |xc| \u0013 X c\u2208\u222aq\u2208\u03b7(p)Bq \u03b4dia(\u0393(x\u2217 c)) = min(L, |M|) X c\u2208\u222aq\u2208\u03b7(p)Bq \u03b4dia(\u0393(x\u2217 c)) (31) The inequality (30) is due to the fact that \u222aq\u2208\u03b7(p)Bq can not count a clique more than min(|\u03b7(p)|, |xc|) times. Therefore, using the inequality (31) in the RHS of the inequality (29) we get min(M, |L|) \u0012 r r \u22121 \u0013 X q\u2208\u03b7(p) X c\u2208Iq \u03b4dia(\u0393(x\u2217 c)) + \u0012 r r \u22121 \u0013 X q\u2208\u03b7(p) X c\u2208Bq \u03b4dia(\u0393(x\u2217 c)) \u2264 min(M, |L|) \u0012 r r \u22121 \u0013 \uf8eb \uf8ed X c\u2208\u222aq\u2208\u03b7(p)Iq \u03b4dia(\u0393(x\u2217 c)) + X c\u2208\u222aq\u2208\u03b7(p)Bq \u03b4dia(\u0393(x\u2217 c)) \uf8f6 \uf8f8 = min(M, |L|) \u0012 r r \u22121 \u0013 X c\u2208Ip \u03b4dia(\u0393(x\u2217 c)) (32) Finally, using inequalities (28), (29) and (32) we get X c\u2208Ip \u03b4dia(\u0393(xp c)) \u2264min(M, |L|) \u0012 r r \u22121 \u0013 X c\u2208Ip \u03b4dia(\u0393(x\u2217 c)) (33) Applying the above lemma to the root node proves the theorem. B.2. Multiplicative Bound of the Algorithm-2 for the Parsimonious Labeling Proof of Theorem-2 Proof. Let us say that d(., .) is the induced metric of the given diversity (\u03b4, L) and \u03b4dia be it\u2019s diameter diversity. We \ufb01rst approximate d(., .) as a mixture of r-HST metrics dt(., .). 
Using Theorem-3 we get the following relationship d(., .) \u2264O(log |L|)dt(., .) (34) For a given clique xc, using Proposition-1, we get the following relationship \u03b4dia(\u0393(xc)) \u2264\u03b4(\u0393(xc)) \u2264(|\u0393(xc)| \u22121)\u03b4dia(\u0393(xc)) (35) Therefore, using equations (35) and (34), we get the following inequality \u03b4dia(\u0393(xc)) \u2264\u03b4(\u0393(xc)) \u2264 (|\u0393(xc)| \u22121)\u03b4dia(\u0393(xc)) \u2264 O(log |\u0393(xc)|)(|\u0393(xc)| \u22121)\u03b4dia t (\u0393(xc)) (36) 14 \fwhere, \u03b4dia t (\u0393(xc)) is the diameter diversity de\ufb01ned over the tree metric dt(., .) which is obtained using the randomized algorithm [9] on the induced metric d(., .). Hence, combing the inequality (36) and the previously proved Theorem-1 proves the Theorem-2. Notice that, in case our diversity in itself is a diameter diversity, we don\u2019t need the inequality (35), therefore, the multiplicative bound reduces to \u0010 r r\u22121 \u0011 (log |L|) min(M, |L|). Theorem 3. Given any distance metric function d(., .) de\ufb01ned over a set of labels L, the randomized algorithm given in [9] produces a mixture of r-HST tree metrics dt(., .) such that d(., .) \u2264O(log |L|)dt(., .). Proof: Please see the reference [9]. Proposition 1. Let (L, \u03b4) be a diversity with induced metric space (L, d), then the following inequality holds \u2200\u0393 \u2286L. \u03b4dia(\u0393) \u2264\u03b4(\u0393) \u2264(|\u0393| \u22121)\u03b4dia(\u0393) (37) Proof: Please see the reference [4]. 15" + } + ], + "N. Siddharth": [ + { + "url": "http://arxiv.org/abs/1706.00400v2", + "title": "Learning Disentangled Representations with Semi-Supervised Deep Generative Models", + "abstract": "Variational autoencoders (VAEs) learn representations of data by jointly\ntraining a probabilistic encoder and decoder network. Typically these models\nencode all features of the data into a single variable. Here we are interested\nin learning disentangled representations that encode distinct aspects of the\ndata into separate variables. We propose to learn such representations using\nmodel architectures that generalise from standard VAEs, employing a general\ngraphical model structure in the encoder and decoder. This allows us to train\npartially-specified models that make relatively strong assumptions about a\nsubset of interpretable variables and rely on the flexibility of neural\nnetworks to learn representations for the remaining variables. We further\ndefine a general objective for semi-supervised learning in this model class,\nwhich can be approximated using an importance sampling procedure. We evaluate\nour framework's ability to learn disentangled representations, both by\nqualitative exploration of its generative capacity, and quantitative evaluation\nof its discriminative ability on a variety of models and datasets.", + "authors": "N. Siddharth, Brooks Paige, Jan-Willem van de Meent, Alban Desmaison, Noah D. Goodman, Pushmeet Kohli, Frank Wood, Philip H. S. Torr", + "published": "2017-06-01", + "updated": "2017-11-13", + "primary_cat": "stat.ML", + "cats": [ + "stat.ML", + "cs.AI", + "cs.LG" + ], + "main_content": "Introduction Learning representations from data is one of the fundamental challenges in machine learning and arti\ufb01cial intelligence. Characteristics of learned representations can depend on their intended use. For the purposes of solving a single task, the primary characteristic required is suitability for that task. 
However, learning separate representations for each and every such task involves a large amount of wasteful repetitive effort. A representation that has some factorisable structure, and consistent semantics associated to different parts, is more likely to generalise to a new task. Probabilistic generative models provide a general framework for learning representations: a model is speci\ufb01ed by a joint probability distribution both over the data and over latent random variables, and a representation can be found by considering the posterior on latent variables given speci\ufb01c data. The learned representation \u2014 that is, inferred values of latent variables \u2014 depends then not just on the data, but also on the generative model in its choice of latent variables and the relationships between the latent variables and the data. There are two extremes of approaches to constructing generative models. At one end are fully-speci\ufb01ed probabilistic graphical models [18, 21], in which a practitioner decides on all latent variables present in the joint distribution, the relationships between them, and the functional form of the conditional distributions which de\ufb01ne the model. At the other end are \u2217Author was at Microsoft Research during this project. 31st Conference on Neural Information Processing Systems (NIPS 2017), Long Beach, CA, USA. arXiv:1706.00400v2 [stat.ML] 13 Nov 2017 \fdeep generative models [7, 16, 19, 20], which impose very few assumptions on the structure of the model, instead employing neural networks as \ufb02exible function approximators that can be used to train a conditional distribution on the data, rather than specify it by hand. The tradeoffs are clear. In an explicitly constructed graphical model, the structure and form of the joint distribution ensures that latent variables will have particular semantics, yielding a disentangled representation. Unfortunately, de\ufb01ning a good probabilistic model is hard: in complex perceptual domains such as vision, extensive feature engineering (e.g. Berant et al. [1], Siddharth et al. [30]) may be necessary to de\ufb01ne a suitable likelihood function. Deep generative models completely sidestep the dif\ufb01culties of feature engineering. Although they address learning representations which then enable them to better reconstruct data, the representations themselves do not always exhibit consistent meaning along axes of variation: they produce entangled representations. While such approaches have considerable merit, particularly when faced with the absence of any side information about data, there are often situations when aspects of variation in data can be, or are desired to be characterised. Bridging this gap is challenging. One way to enforce a disentangled representation is to hold different axes of variation \ufb01xed during training [20]. Johnson et al. [13] combine a neural net likelihood with a conjugate exponential family model for the latent variables. In this class of models, ef\ufb01cient marginalisation over the latent variables can be performed by learning a projection onto the same conjugate exponential family in the encoder. Here we propose a more general class of partiallyspeci\ufb01ed graphical models: probabilistic graphical models in which the modeller only needs specify the exact relationship for some subset of the random variables in the model. Factors left unde\ufb01ned in the model de\ufb01nition are then learned, parametrised by \ufb02exible neural networks. 
This provides the ability to situate oneself at a particular point on a spectrum, by specifying precisely those axes of variations (and their dependencies) we have information about or would like to extract, and learning disentangled representations for them, while leaving the rest to be learned in an entangled manner. A subclass of partially-speci\ufb01ed models that is particularly common is that where we can obtain supervision data for some subset of the variables. In practice, there is often variation in the data which is (at least conceptually) easy to explain, and therefore annotate, whereas other variation is less clear. For example, consider the MNIST dataset of handwritten digits: the images vary both in terms of content (which digit is present), and style (how the digit is written), as is visible in the right-hand side of Fig. 1. Having an explicit \u201cdigit\u201d latent variable captures a meaningful and consistent axis of variation, independent of style; using a partially-speci\ufb01ed graphical model means we can de\ufb01ne a \u201cdigit\u201d variable even while leaving unspeci\ufb01ed the semantics of the different styles, and the process of rendering a digit to an image. In a fully unsupervised learning procedure there is generally no guarantee that inference on a model with 10 classes will in fact recover the 10 digits. However, given a small amount of labelled examples, this task becomes signi\ufb01cantly easier. Beyond the ability to encode variation along some particular axes, we may also want to interpret the same data in different ways. For example, when considering images of people\u2019s faces, we might wish to capture the person\u2019s identity in one context, and the lighting conditions on the faces in another. In this paper we introduce a recipe for learning and inference in partially-speci\ufb01ed models, a \ufb02exible framework that learns disentangled representations of data by using graphical model structures to encode constraints to interpret the data. We present this framework in the context of variational autoencoders (VAEs), developing a generalised formulation of semi-supervised learning with DGMs that enables our framework to automatically employ the correct factorisation of the objective for any given choice of model and set of latents taken to be observed. In this respect our work extends previous efforts to introduce supervision into variational autoencoders [17, 23, 31]. We introduce a variational objective which is applicable to a more general class of models, allowing us to consider graphical-model structures with arbitrary dependencies between latents, continuous-domain latents, and those with dynamically changing dependencies. We provide a characterisation of how to compile partially-supervised generative models into stochastic computation graphs, suitable for end-to-end training. This approach allows us also amortise inference [6, 22, 28, 33], simultaneously learning a network that performs approximate inference over representations at the same time we learn the unknown factors of the model itself. We demonstrate the ef\ufb01cacy of our framework on a variety of tasks, involving classi\ufb01cation, regression, and predictive synthesis, including its ability to encode latents of variable dimensionality. 
2 \fz (handwriting style) y (digit label) Disentangled Representation Stochastic Computation Graph for VAE \u03b5 z p \u03b7 \u03b8 z p \u03b7 \u03b8 (a) (b) (c) (d) Figure 2: (a) Visual analogies for the MNIST data, with inferred style latent variable \ufb01xed and the label varied. (b) Exploration in \u201cstyle\u201d space for a 2D latent gaussian random variable. Visual analogies for the SVHN data when (c) fully supervised, and (d) partially supervised with just 100 labels/digit. To train deep generative models in a semi-supervised manner, we need to incorporate labelled data 126 into the variational bound. In a fully unsupervised setting, the contribution of a particular data 127 point xi to the ELBO can be expressed, with minor adjustments of Equation (1), whose Monte-Carlo 128 approximation samples latents z and y from the recognition distribution q\u03c6 ! z, y | xi\" . 129 L ! \u2713, \u03c6; xi\" = Eq\u03c6(z,y|xi) \" log p\u2713 ! xi | z, y \" p(z, y) q\u03c6(z, y | xi) # . (2) By contrast, in the fully supervised setting the values y are treated as observed and become \ufb01xed 130 inputs into the computation graph, instead of being sampled from q\u03c6. When the label y is observed 131 along with the data, for \ufb01xed (xi, yi) pairs, the lower bound on the conditional log-marginal likelihood 132 log p\u2713(x | y) is 133 Lx|y ! \u2713, \u03c6z; xi, yi\" = Eq\u03c6z (z|xi,yi) \" log p\u2713 ! xi | z, yi\" p ! z | yi\" q\u03c6z(z | xi, yi) # . (3) This quantity can be optimized directly to learn model parameters \u2713and \u03c6z simultaneously via SGD. 134 However, it does not contain the encoder parameters \u03c6y. This dif\ufb01culty was also encountered in a 135 related context by Kingma et al. [17]. Their solution was to augment the loss function by including 136 an explicit additional term for learning a classi\ufb01er directly on the supervised points. 137 Here we propose an alternative approach. We extend the model with an auxiliary variable \u02dc y with 138 likelihood p(\u02dc y | y) = \u03b4\u02dc y(y) to de\ufb01ne densities 139 p(\u02dc y, y, z, x) = p(\u02dc y | y)p\u2713(x | y, z)p(y, z) q(\u02dc y, y, z | x) = p(\u02dc y | y)q(y, z | x). When we marginalize the ELBO for this model over \u02dc y, we recover the expression in Equation (2). 140 Treating \u02dc y = yi as observed results in the supervised objective 141 L ! \u2713, \u03c6; xi\"% % \u02dc y=yi = Eq\u03c6(z,y|xi) \" \u03b4yi(y) log p\u2713 ! xi | z, y \" p(z, y) q\u03c6(z, y | xi) # . (4) Integration over an observed y is then replaced with evaluation of the ELBO and the density q\u03c6y at 142 yi. A Monte Carlo estimator of Equation (4) can be constructed automatically for any factorization 143 of q\u03c6 by sampling latent variables z and weighting the resulting ELBO estimate by the conditional 144 density terms q\u03c6y (y|\u00b7). 145 Note that the exact functional form of the Monte Carlo estimator will vary depending on the 146 dependency structure of q\u03c6 ! z, y | xi\" . For example, for discrete y, choosing q\u03c6(z, y | x) = 147 q\u03c6z(z | y, x)q\u03c6y (y | x), decomposes the problem into simultaneously learning a classi\ufb01er 148 q\u03c6y (y | x) alongside the generative model parameters \u2713and encoder q\u03c6z(z|x, y) which is condi149 tioned on the selected class. The computation graph for a model with this factorization is shown in Fig150 ure 1. 
In it, the value y of the distribution q\u03c6y (\u00b7 | x) is observed, while the distribution q\u03c6z(\u00b7 | x, y) 151 4 p \u03bb y \u03b7 \u03b5 z p \u03bb q \u03b7 \u03c6 \u03c6 x (data) y (partial labels) p \u03b7 \u03b8 q Recognition Model z x y Generative Model z x \u03b5 y x Figure 1: Semi-supervised learning in structured variational autoencoders, illustrated on MNIST digits. Top-Left: Generative model. Bottom-Left: Recognition model. Middle: Stochastic computation graph, showing expansion of each node to its corresponding sub-graph. Generative-model dependencies are shown in blue and recognition-model dependencies are shown in orange. See Section 2.2 for a detailed explanation. Right: learned representation. 2 Framework and Formulation VAEs [16, 27] are a class of deep generative models that simultaneously train both a probabilistic encoder and decoder for a elements of a data set D = {x1, . . . xN}. The central analogy is that an encoding z can be considered a latent variable, casting the decoder as a conditional probability density p\u03b8(x|z). The parameters \u03b7\u03b8(z) of this distribution are the output of a deterministic neural network with parameters \u03b8 (most commonly MLPs or CNNs) which takes z as input. By placing a weak prior over z, the decoder de\ufb01nes a posterior and joint distribution p\u03b8(z | x) \u221dp\u03b8(x | z)p(z). xn zn \u03b8 \u03c6 N Inference in VAEs can be performed using a variational method that approximates the posterior distribution p\u03b8(z | x) using an encoder q\u03c6(z | x), whose parameters \u03bb\u03c6(x) are the output of a network (with parameters \u03c6) that is referred to as an \u201cinference network\u201d or a \u201crecognition network\u201d. The generative and inference networks, denoted by solid and dashed lines respectively in the graphical model, are trained jointly by performing stochastic gradient ascent on the evidence lower bound (ELBO) L(\u03c6, \u03b8; D) \u2264log p\u03b8(D), L(\u03c6, \u03b8; D) = N X n=1 L(\u03c6, \u03b8; xn) = N X n=1 Eq\u03c6(z|xn)[log p\u03b8(xn | z) + log p(z) \u2212log q\u03c6(z|xn)]. (1) Typically, the \ufb01rst term Eq\u03c6(z|xn)[log p\u03b8(xn | z)] is approximated by a Monte Carlo estimate and the remaining two terms are expressed as a divergence \u2212KL(q\u03c6(z|xn)\u2225p(z)), which can be computed analytically when the encoder model and prior are Gaussian. In this paper, we will consider models in which both the generative model p\u03b8(x, y, z) and the approximate posterior q\u03c6(y, z | x) can have arbitrary conditional dependency structures involving random variables de\ufb01ned over a number of different distribution types. We are interested in de\ufb01ning VAE architectures in which a subset of variables y are interpretable. For these variables, we assume that supervision labels are available for some fraction of the data. The VAE will additionally retain some set of variables z for which inference is performed in a fully unsupervised manner. This is in keeping with our central goal of de\ufb01ning and learning in partially-speci\ufb01ed models. In the running example for MNIST, y corresponds to the classi\ufb01cation label, whereas z captures all other implicit features, such as the pen type and handwriting style. This class of models is more general than the models in the work by Kingma et al. [17], who consider three model designs with a speci\ufb01c conditional dependence structure. 
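As a concrete reference point for the ELBO in Eq. (1) above, the following is a minimal single-sample Monte Carlo sketch for a Gaussian-encoder, Bernoulli-decoder VAE in PyTorch. The architecture, dimensions, and names (Encoder, Decoder, elbo) are illustrative assumptions for exposition, not the implementation used in this work.

import torch
import torch.nn as nn
from torch.distributions import Normal, Bernoulli

class Encoder(nn.Module):
    # Amortised inference network q_phi(z | x), returning lambda_phi(x).
    def __init__(self, x_dim=784, h_dim=256, z_dim=10):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(x_dim, h_dim), nn.ReLU())
        self.mean = nn.Linear(h_dim, z_dim)
        self.log_std = nn.Linear(h_dim, z_dim)

    def forward(self, x):
        h = self.net(x)
        return Normal(self.mean(h), self.log_std(h).exp())

class Decoder(nn.Module):
    # Generative network p_theta(x | z), returning eta_theta(z).
    def __init__(self, x_dim=784, h_dim=256, z_dim=10):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(z_dim, h_dim), nn.ReLU(),
                                 nn.Linear(h_dim, x_dim))

    def forward(self, z):
        return Bernoulli(logits=self.net(z))

def elbo(x, enc, dec, prior):
    # Single-sample Monte Carlo estimate of Eq. (1) for one mini-batch.
    q_z = enc(x)
    z = q_z.rsample()                         # reparameterised sample of z
    log_px_z = dec(z).log_prob(x).sum(-1)     # log p_theta(x | z)
    log_pz = prior.log_prob(z).sum(-1)        # log p(z)
    log_qz_x = q_z.log_prob(z).sum(-1)        # log q_phi(z | x)
    return (log_px_z + log_pz - log_qz_x).mean()

if __name__ == '__main__':
    enc, dec = Encoder(), Decoder()
    prior = Normal(torch.zeros(10), torch.ones(10))
    x = torch.rand(32, 784).round()           # stand-in binarised images
    loss = -elbo(x, enc, dec, prior)          # maximise the ELBO via SGD
    loss.backward()

The same pattern extends once a partially observed variable y is added with its own prior and encoder head, which is the setting developed next.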
We also do not require p(y, z) to be a conjugate exponential family model, as in the work by Johnson et al. [14]. To perform semi-supervised learning in this class of models, we need to i) de\ufb01ne an objective that is suitable to general dependency graphs, and ii) de\ufb01ne a method for constructing a stochastic computation graph [29] that incorporates both the conditional dependence structure in the generative model and that of the recognition model into this objective. 3 \f2.1 Objective Function xn yn zn \u03c6 \u03b8 xm ym zm N M Previous work on semi-supervised learning for deep generative models [17] de\ufb01nes an objective over N unsupervised data points D = {x1, . . . , xN} and M supervised data points Dsup = {(x1, y1), . . . , (xM, yM)}, L(\u03b8, \u03c6; D, Dsup) = N X n=1 L(\u03b8, \u03c6; xn) + \u03b3 M X m=1 Lsup(\u03b8, \u03c6; xm, ym). (2) Our model\u2019s joint distribution factorises into unsupervised and supervised collections of terms over D and Dsup as shown in the graphical model. The standard variational bound on the joint evidence of all observed data (including supervision) also factorises as shown in Eq. (2). As the factor corresponding to the unsupervised part of the graphical model is exactly that as Eq. (1), we focus on the supervised term in Eq. (2), expanded below, incorporating an additional weighted component as in Kingma et al. [17]. Lsup(\u03b8, \u03c6; xm, ym) = Eq\u03c6(z|xm,ym) \u0014 log p\u03b8(xm, ym, z) q\u03c6(z | xm, ym) \u0015 + \u03b1 log q\u03c6(ym | xm). (3) Note that the formulation in Eq. (2) introduces an constant \u03b3 that controls the relative strength of the supervised term. While the joint distribution in our model implicitly weights the two terms, in situations where the relative sizes of D and Dsup are vastly different, having control over the relative weights of the terms can help ameliorate such discrepancies. This de\ufb01nition in Eq. (3) implicitly assumes that we can evaluate the conditional probability q\u03c6(z|x, y) and the marginal q\u03c6(y|x) = R dz q\u03c6(y, z|x). This was indeed the case for the models considered by Kingma et al. [17], which have a factorisation q\u03c6(y, z|x) = q\u03c6(z|x, y)q\u03c6(y|x). Here we will derive an estimator for Lsup that generalises to models in which q\u03c6(y, z | x) can have an arbitrary conditional dependence structure. For purposes of exposition, we will for the moment consider the case where q\u03c6(y, z | x) = q\u03c6(y | x, z)q\u03c6(z | x). For this factorisation, generating samples zm,s \u223cq\u03c6(z | xm, ym) requires inference, which means we can no longer compute a simple Monte Carlo estimator by sampling from the unconditioned distribution q\u03c6(z | xm). Moreover, we also cannot evaluate the density q\u03c6(z | xm, ym). In order to address these dif\ufb01culties, we re-express the supervised terms in the objective as Lsup(\u03b8, \u03c6; xm, ym) = Eq\u03c6(z|xm,ym) \u0014 log p(xm, ym, z) q\u03c6(ym, z | xm) \u0015 + (1 + \u03b1) log q\u03c6(ym | xm), (4) which removes the need to evaluate q\u03c6(z | xm, ym). We can then use (self-normalised) importance sampling to approximate the expectation. 
To do so, we sample proposals zm,s \u223cq\u03c6(z | xm) from the unconditioned encoder distribution, and de\ufb01ne the estimator Eq\u03c6(z|xm,ym) \u0014 log p\u03b8(xm, ym, z) q\u03c6(ym, z | xm) \u0015 \u22431 S S X s=1 wm,s Zm log p\u03b8(xm, ym, zm,s) q\u03c6(ym, zm,s | xm), (5) where the unnormalised importance weights wm,s and normaliser Zm are de\ufb01ned as wm,s := q\u03c6(ym, zm,s | xm) q\u03c6(zm,s | xm) , Zm = 1 S S X s=1 wm,s. (6) To approximate log q\u03c6(ym | xm), we use a Monte Carlo estimator of the lower bound that is normally used in maximum likelihood estimation, log q\u03c6(ym | xm) \u2265Eq\u03c6(z|xm) \u0014 log q\u03c6(ym, z | xm) q\u03c6(z | xm) \u0015 \u22431 S S X s=1 log wm,s, (7) using the same samples zm,s and weights wm,s as in Eq. (5). When we combine the terms in Eqs. (5) and (7), we obtain the estimator \u02c6 Lsup(\u03b8, \u03c6; xm , ym) := 1 S S X s=1 wm,s Zm log p\u03b8(xm, ym, zm,s) q\u03c6(ym, zm,s | xm) + (1 + \u03b1) log wm,s. (8) 4 \fWe note that this estimator applies to any conditional dependence structure. Suppose that we were to de\ufb01ne an encoder q\u03c6(z2, y1, z1 | x) with factorisation q\u03c6(z2 | y1, z1, x)q\u03c6(y1 | z1, x)q\u03c6(z1 | x). If we propose z2 \u223cq\u03c6(z2 | y1, z1, x) and z1 \u223cq\u03c6(z1 | x), then the importance weights wm,s for the estimator in Eq. (8) are de\ufb01ned as wm,s := q\u03c6(zm,s 2 , ym 1 , zm,s 1 | xm) q\u03c6(zm,s 2 | ym 1 , zm,s 1 , xm)q\u03c6(zm,s 1 | xm) = q\u03c6(ym 1 | zm,s 1 , xm). In general, the importance weights are simply the product of conditional probabilities of the supervised variables y in the model. Note that this also applies to the models in Kingma et al. [17], whose objective we can recover by taking the weights to be constants wm,s = q\u03c6(ym | xm). We can also de\ufb01ne an objective analogous to the one used in importance-weighted autoencoders [2], in which we compute the logarithm of a Monte Carlo estimate, rather than the Monte Carlo estimate of a logarithm. This objective takes the form \u02c6 Lsup,iw(\u03b8, \u03c6; xm , ym) := log \" 1 S S X s=1 p\u03b8(xm, ym, zm,s) q\u03c6(zm,s | xm) # + \u03b1 log \" 1 S S X s=1 wm,s # , (9) which can be derived by moving the sums in Eq. (8) into the logarithms and applying the substitution wm,s/q\u03c6(ym, zm,s | xm) = 1/q\u03c6(zm,s | xm). 2.2 Construction of the Stochastic Computation Graph To perform gradient ascent on the objective in Eq. (8), we map the graphical models for p\u03b8(x, y, z) and q\u03c6(y, z|x) onto a stochastic computation graph in which each stochastic node forms a sub-graph. Figure 1 shows this expansion for the simple VAE for MNIST digits from [16]. In this model, y is a discrete variable that represents the underlying digit, our latent variable of interest, for which we have partial supervision data. An unobserved Gaussian-distributed variable z captures the remainder of the latent information. This includes features such as the hand-writing style and stroke thickness. In the generative model (Fig. 1 top-left), we assume a factorisation p\u03b8(x, y, z) = p\u03b8(x | y, z)p(y)p(z) in which y and z are independent under the prior. In the recognition model (Fig. 1 bottom-left), we use a conditional dependency structure q\u03c6(y, z | x) = q\u03c6z (z | y, x)q\u03c6y (y|x) to disentangle the digit label y from the handwriting style z (Fig. 1 right). The generative and recognition model are jointly form a stochastic computation graph (Fig. 1 centre) containing a sub-graph for each stochastic variable. 
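Relating the estimator in Eqs. (5)-(8) back to code, the sketch below shows one way the self-normalised importance-sampling estimate of the supervised term might be computed for the example factorisation q_φ(y, z | x) = q_φ(y | x, z) q_φ(z | x) with y observed. The callables sample_z, log_q_z, log_q_y_given_zx, and log_joint are assumed interfaces standing in for the learned networks; this is an illustrative sketch, not the authors' implementation.

import torch

def supervised_elbo_hat(x, y, sample_z, log_q_z, log_q_y_given_zx,
                        log_joint, alpha=0.1, S=10):
    # Self-normalised importance-sampling estimate of Eq. (8):
    # propose z ~ q(z | x); the unnormalised weight is w = q(y | x, z).
    log_ratios, log_ws = [], []
    for _ in range(S):
        z = sample_z(x)
        log_w = log_q_y_given_zx(y, z, x)          # log w^{m,s}
        log_q_yz = log_w + log_q_z(z, x)           # log q(y, z | x)
        log_ratios.append(log_joint(x, y, z) - log_q_yz)
        log_ws.append(log_w)
    log_ws = torch.stack(log_ws)
    w_norm = torch.softmax(log_ws, dim=0)          # w^{m,s} / (S * Z_m)
    elbo_term = (w_norm * torch.stack(log_ratios)).sum()
    # The second term is the Monte Carlo lower bound on log q(y | x), Eq. (7).
    return elbo_term + (1.0 + alpha) * log_ws.mean()

With constant weights w^{m,s} = q_φ(y^m | x^m), as for the factorisations considered by Kingma et al. [17], the weighting becomes uniform and the estimator reduces to the objective noted in the text.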
These can correspond to fully supervised, partially supervised and unsupervised variables. This example graph contains three types of subgraphs, corresponding to the three possibilities for supervision and gradient estimation: \u2022 For the fully supervised variable x, we compute the likelihood p under the generative model, that is p\u03b8(x | y, z) = N(x ; \u03b7\u03b8(y, z)). Here \u03b7\u03b8(y, z) is a neural net with parameters \u03b8 that returns the parameters of a normal distribution (i.e. a mean vector and a diagonal covariance). \u2022 For the unobserved variable z, we compute both the prior probability p(z) = N(z ; \u03b7z), and the conditional probability q\u03c6(z | x, y) = N(z ; \u03bb\u03c6z (x, y)). Here the usual reparametrisation is used to sample z from q\u03c6(z | x, y) by \ufb01rst sampling \u03f5 \u223cN(0, I) using the usual reparametrisation trick z = g(\u03f5, \u03bb\u03c6(x, y)). \u2022 For the partially observed variable y, we also compute probabilities p(y) = Discrete(y; \u03b7y) and q\u03c6y (y|x) = Discrete(y; \u03bb\u03c6z (x)). The value y is treated as observed when available, and sampled otherwise. In this particular example, we sample y from a q\u03c6y (y|x) using a Gumbel-softmax [12, 24] relaxation of the discrete distribution. The example in Fig. 1 illustrates a general framework for de\ufb01ning VAEs with arbitrary dependency structures. We begin by de\ufb01ning a node for each random variable. For each node we then specify a distribution type and parameter function \u03b7, which determines how the probability under the generative model depends on the other variables in the network. This function can be a constant, fully deterministic, or a neural network whose parameters are learned from the data. For each unsupervised and semi-supervised variable we must additionally specify a function \u03bb that returns the parameter values in the recognition model, along with a (reparametrised) sampling procedure. Given this speci\ufb01cation of a computation graph, we can now compute the importance sampling estimate in Eq. (8) by simply running the network forward repeatedly to obtain samples from q\u03c6(\u00b7|\u03bb) for all unobserved variables. We then calculate p\u03b8(x, y, z), q\u03c6(y|x), q\u03c6(y, z|x), and the importance weight w, which is the joint probability of all semi-supervised variable for which labels are available. This estimate can then be optimised with respect to the variables \u03b8 and \u03c6 to train the autoencoder. 5 \f(a) (b) (c) (d) Figure 2: (a) Visual analogies for the MNIST data, partially supervised with just 100 labels (out of 50000). We infer the style variable z and then vary the label y. (b) Exploration in style space with label y held \ufb01xed and (2D) style z varied. Visual analogies for the SVHN data when (c) partially supervised with just 1000 labels, and (d) fully supervised. 3 Experiments We evaluate our framework along a number of different axes pertaining to its ability to learn disentangled representations through the provision of partial graphical-model structures for the latents and weak supervision. In particular, we evaluate its ability to (i) function as a classi\ufb01er/regressor for particular latents under the given dataset, (ii) learn the generative model in a manner that preserves the semantics of the latents with respect to the data generated, and (iii) perform these tasks, in a \ufb02exible manner, for a variety of different models and data. 
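Before turning to the experimental setup, the three kinds of sub-graphs described in Section 2.2 can be made concrete with a schematic PyTorch pass through the computation graph of Figure 1. The networks enc_y, enc_z, and dec and all shapes are illustrative assumptions rather than the architecture actually used.

import torch
import torch.nn.functional as F
from torch.distributions import Normal

def forward_pass(x, enc_y, enc_z, dec, y_obs=None, tau=0.66):
    # Partially observed y: use the label when available, otherwise draw a
    # Gumbel-softmax relaxed sample from q_phi_y(y | x).
    y_logits = enc_y(x)
    if y_obs is not None:
        y = F.one_hot(y_obs, y_logits.size(-1)).float()
    else:
        y = F.gumbel_softmax(y_logits, tau=tau)
    # Unobserved z: reparameterised sample from q_phi_z(z | x, y).
    z_mean, z_log_std = enc_z(torch.cat([x, y], dim=-1))
    q_z = Normal(z_mean, z_log_std.exp())
    z = q_z.rsample()
    # Fully observed x: evaluate the likelihood p_theta(x | y, z).
    x_dist = dec(torch.cat([y, z], dim=-1))
    return y_logits, q_z, z, x_dist

if __name__ == '__main__':
    D, C, Z = 784, 10, 5                      # arbitrary illustrative sizes
    enc_y = torch.nn.Linear(D, C)
    enc_z_net = torch.nn.Linear(D + C, 2 * Z)
    enc_z = lambda h: enc_z_net(h).chunk(2, dim=-1)
    dec_net = torch.nn.Linear(C + Z, D)
    dec = lambda h: Normal(dec_net(h), torch.ones(D))
    out = forward_pass(torch.randn(4, D), enc_y, enc_z, dec,
                       y_obs=torch.tensor([3, 1, 4, 1]))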
For all the experiments run, we choose architecture and parameters that are considered standard for the type and size of the respective datasets. Where images are concerned (with the exception of MNIST), we employ (de)convolutional architectures, and employ a standard GRU recurrence in the Multi-MNIST case. For learning, we used AdaM [15] with a learning rate and momentumcorrection terms set to their default values. As for the mini batch sizes, they varied from 100-700 depending on the dataset being used and the sizes of the labelled subset Dsup. All of the above, including further details of precise parameter values and the source code, including our PyTorchbased library for specifying arbitrary graphical models in the VAE framework, is available at \u2013 https://github.com/probtorch/probtorch. 3.1 MNIST and SVHN We begin with an experiment involving a simple dependency structure, in fact the very same as that in Kingma et al. [17], to validate the performance of our importance-sampled objective in the special case where the recognition network and generative models factorise as indicated in Fig. 1(left), giving us importance weights that are constant wm,s = q\u03c6(ym|xm). The model is tested on it\u2019s ability to classify digits and perform conditional generation on the MNIST and Google Street-View House Numbers (SVHN) datasets. As Fig. 1(left) shows, the generative and recognition models have the \u201cdigit\u201d label, denoted y, partially speci\ufb01ed (and partially supervised) and the \u201cstyle\u201d factor, denoted z, assumed to be an unobserved (and unsupervised) variable. Figure 2(a) and (c) illustrate the conditional generation capabilities of the learned model, where we show the effect of \ufb01rst transforming a given input (leftmost column) into the disentangled latent space, and with the style latent variable \ufb01xed, manipulating the digit through the generative model to generate data with expected visual characteristics. Note that both these results were obtained with partial supervision \u2013 100 (out of 50000) labelled data points in the case of MNIST and 1000 (out of 70000) labelled data points in the case of SVHN. The style latent variable z was taken to be a diagonal-covariance Gaussian of 10 and 15 dimensions respectively. Figure 2(d) shows the same for SVHN with full supervision. Figure 2(b) illustrates the alternate mode of conditional generation, where the style latent, here taken to be a 2D Gaussian, is varied with the digit held \ufb01xed. Next, we evaluate our model\u2019s ability to effectively learn a classi\ufb01er from partial supervision. We compute the classi\ufb01cation error on the label-prediction task on both datasets, and the results are reported in the table in Fig. 3. Note that there are a few minor points of difference in the setup between our method and those we compare against [17]. We always run our models directly on the data, with no pre-processing or pre-learning on the data. 
Thus, for MNIST, we compare against model M2 from the baseline, which does just the same. However, for SVHN, the baseline method does not report errors for the M2 model; only the two-stage M1+M2 model, which involves a separate feature-extraction step on the data before learning a semi-supervised classifier. As the results indicate, our model and objective do indeed perform on par with the setup considered in Kingma et al. [17], serving as basic validation of our framework. We note, however, that from the perspective of achieving the lowest possible classification error, one could adopt any number of alternate factorisations [23] and innovations in neural-network architectures [26, 32].

[Figure 3, left: plots of classification error (%) against supervision rate ρ, for MNIST (top; M = 100, 600, 1000, 3000) and SVHN (bottom; M = 1000, 3000).]
Figure 3, right (classification error, %):
MNIST (N = 50000): M = 100: Ours 9.71 (± 0.91), M2 [17] 11.97 (± 1.71); M = 600: Ours 3.84 (± 0.86), M2 4.94 (± 0.13); M = 1000: Ours 2.88 (± 0.79), M2 3.60 (± 0.56); M = 3000: Ours 1.57 (± 0.93), M2 3.92 (± 0.63).
SVHN (N = 70000): M = 1000: Ours 38.91 (± 1.06), M1+M2 [17] 36.02 (± 0.10); M = 3000: Ours 29.07 (± 0.83), M1+M2 not reported.
Figure 3: Right: Classification error rates for different labelled-set sizes M over multiple runs, with supervision rate ρ = γM/(N + γM), γ = 1. For SVHN, we compare against a multi-stage process (M1+M2) [17], where our model only uses a single stage. Left: Classification error over different labelled-set sizes and supervision rates for MNIST (top) and SVHN (bottom). Here, scaling of the classification objective is held fixed at α = 50 (MNIST) and α = 70 (SVHN). Note that for sparsely labelled data (M ≪ N), a modicum of over-representation (γ > 1) helps improve generalisation with better performance on the test set. Conversely, too much over-representation leads to overfitting.

Supervision rate: As discussed in Section 2.1, we formulate our objective to provide a handle on the relative weight between the supervised and unsupervised terms. For a given unsupervised set size N, supervised set size M, and scaling term γ, the relative weight is ρ = γM/(N + γM). Figure 3 shows exploration of this relative-weight parameter over the MNIST and SVHN datasets and over different supervised set sizes M. Each line in the graph measures the classification error for a given M, over ρ, starting at γ = 1, i.e. ρ = M/(N + M). In line with Kingma et al. [17], we use α = 0.1/ρ. When the labelled data is very sparse (M ≪ N), over-representing the labelled examples during training can aid generalisation by improving performance on the test data. In our experiments, for the most part, choosing this factor to be ρ = M/(N + M) provides good results. However, as is to be expected, over-fitting occurs when ρ is increased beyond a certain point.

3.2 Intrinsic Faces
We next move to a more complex domain involving generative models of faces. Here, we use the “Yale B” dataset [5] as processed by Jampani et al. [11] for the results in Fig. 4. As can be seen in the graphical models for this experiment in Fig. 5, the dependency structures employed here are more complex in comparison to those from the previous experiment.
We are interested in showing that our model can learn disentangled representations of identity and lighting and evaluate it\u2019s performance on the tasks of (i) classi\ufb01cation of person identity, and (ii) regression for lighting direction. Note that our generative model assumes no special structure \u2013 we simply specify a model where all latent variables are independent under the prior. Previous work [11] assumed a generative model with latent variables identity i, lighting l, shading s, and re\ufb02ectance r, following the relationship (n \u00b7 l) \u00d7 r + \u03f5 for the pixel data. Here, we wish to demonstrate that our generative model still learns the correct relationship over these latent variables, by virtue of the structure in the recognition model and given (partial) supervision. Note that in the recognition model (Fig. 5), the lighting l is a latent variable with continuous domain, and one that we partially supervise. Further, we encode identity i as a categorical random variable, instead of constructing a pixel-wise surface-normal map (each assumed to be independent Gaussian) as is customary. This formulation allows us to address the task of predicting identity directly, instead of applying surrogate evaluation methods (e.g. nearest-neighbour classi\ufb01cation based on inferred re\ufb02ectance). Figure 4 presents both qualitative and quantitative evaluation of the framework to jointly learn both the structured recognition model, and the generative model parameters. 7 \fInput Recon. Varying Identity Input Recon. Varying Lighting Identity Lighting Ours (Full Supervision) 1.9% (\u00b1 1.5) 3.1% (\u00b1 3.8) Ours (Semi-Supervised) 3.5% (\u00b1 3.4) 17.6% (\u00b1 1.8) Jampani et al. [11] (plot asymptotes) \u224830 \u224810 Figure 4: Left: Exploring the generative capacity of the supervised model by manipulating identity and lighting given a \ufb01xed (inferred) value of the other latent variables. Right: Classi\ufb01cation and regression error rates for identity and lighting latent variables, fully-supervised, and semi-supervised (with 6 labelled example images for each of the 38 individuals, a supervision rate of \u03c1 = 0.5, and \u03b1 = 10). Classi\ufb01cation is a direct 1-out-of-38 choice, whereas for the comparison, error is a nearest-neighbour loss based on the inferred re\ufb02ectance. Regression loss is angular distance. 3.3 Multi-MNIST Finally, we conduct an experiment that extends the complexity from the prior models even further. Particularly, we explore the capacity of our framework to handle models with stochastic dimensionality \u2013 having the number of latent variables itself determined by a random variable, and models that can be composed of other smaller (sub-)models. We conduct this experiment in the domain of multi-MNIST. This is an apposite choice as it satis\ufb01es both the requirements above \u2013 each image can have a varying number of individual digits, which essentially dictates that the model must learn to count, and as each image is itself composed of (scaled and translated) exemplars from the MNIST data, we can employ the MNIST model itself within the multi-MNIST model. The model structure that we assume for the generative and recognition networks is shown in Fig. 5. We extend the models from the MNIST experiment by composing it with a stochastic sequence generator, in which the loop length K is a random variable. For each loop iteration k = 1, . . . 
, K, the generative model iteratively samples a digit yk, style zk, and uses these to generate a digit image xk in the same manner as in the earlier MNIST example. Additionally, an af\ufb01ne tranformation is also sampled for each digit in each iteration to transform the digit images xk into a common, combined canvas that represents the \ufb01nal generated image x, using a spatial transformer network [10]. In the recognition model, we predict the number of digits K from the pixels in the image. For each loop iteration k = 1, . . . , K, we de\ufb01ne a Bernoulli-distributed digit image xk. When supervision is available, we compute the probability of xk from the binary cross-entropy in the same manner as in the likelihood term for the MNIST model. When no supervision is available, we deterministically set xk to the mean of the distribution. This can be seen akin to providing bounding-boxes around the constituent digits as supervision for the labelled data, which must be taken into account when learning the af\ufb01ne transformations that decompose a multi-MNIST image into its constituent MNIST-like images. This model design is similar to the one used in DRAW [9], recurrent VAEs [3], and AIR [4]. In the absence of a canonical multi-MNIST dataset, we created our own from the MNIST dataset by manipulating the scale and positioning of the standard digits into a combined canvas, evenly balanced across the counts (1-3) and digits. We then conducted two experiments within this domain. In the \ufb01rst experiment, we seek to measure how well the stochastic sequence generator learns to count on its own, with no heed paid to disentangling the latent representations for the underlying digits. Intrinsic Faces Multi-MNIST x i \u2113 s r x i \u2113 r s x ak xk zk yk K K x K ak xk hk zk yk hk\u22121 K Generative Model Recognition Model Generative Model Recognition Model Figure 5: Generative and recognition models for the intrinsic-faces and multi-MNIST experiments. 8 \fInput Reconstruction Decomposition M M+N Count Error (%) w/o MNIST w/ MNIST 0.1 85.45 (\u00b1 5.77) 76.33 (\u00b1 8.91) 0.5 93.27 (\u00b1 2.15) 80.27 (\u00b1 5.45) 1.0 99.81 (\u00b1 1.81) 84.79 (\u00b1 5.11) Figure 6: Left: Example input multi-MNIST images and reconstructions. Top-Right: Decomposition of Multi-MNIST images into constituent MNIST digits. Bottom-Right: Count accuracy over different supervised set sizes M for given dataset size M + N = 82000. Here, the generative model presumes the availability of individual MNIST-digit images, generating combinations under sampled af\ufb01ne transformations. In the second experiment, we extend the above model to now also incorporate the same pre-trained MNIST model from the previous section, which allows the generative model to sample MNIST-digit images, while also being able to predict the underlying digits. This also demonstrates how we can leverage compositionality of models: when a complex model has a known simpler model as a substructure, the simpler model and its learned weights can be dropped in directly. The count accuracy errors across different supervised set sizes, reconstructions for a random set of inputs, and the decomposition of a given set of inputs into their constituent individual digits, are shown in Fig. 6. All reconstructions and image decompositions shown correspond to the nested-model con\ufb01guration. 
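As a purely illustrative rendering of the generative loop described above (sample a count K, then a digit, style, and affine transformation per step, compositing each rendered digit onto a shared canvas with a spatial-transformer-style warp), the sketch below uses a stand-in digit decoder and an invented affine prior; none of these specifics should be read as the model actually trained here.

import torch
import torch.nn.functional as F
from torch.distributions import Categorical, Normal

def generate_multi_mnist(digit_decoder, canvas_size=64, max_k=3):
    # Ancestral sample: a count K in 1..max_k, then K digits pasted on a canvas.
    K = int(Categorical(logits=torch.zeros(max_k)).sample()) + 1
    canvas = torch.zeros(1, 1, canvas_size, canvas_size)
    for _ in range(K):
        y = Categorical(logits=torch.zeros(10)).sample()      # digit label y_k
        z = Normal(torch.zeros(10), torch.ones(10)).sample()  # style z_k
        digit = digit_decoder(y, z).view(1, 1, 28, 28)
        # Sampled affine parameters (scale and translation only, for brevity).
        s = float(0.5 + 0.25 * torch.rand(()))
        tx, ty = (torch.rand(2) * 1.2 - 0.6).tolist()
        theta = torch.tensor([[[1.0 / s, 0.0, -tx],
                               [0.0, 1.0 / s, -ty]]])
        grid = F.affine_grid(theta, [1, 1, canvas_size, canvas_size],
                             align_corners=False)
        canvas = canvas + F.grid_sample(digit, grid, align_corners=False)
    return canvas.clamp(0, 1)

if __name__ == '__main__':
    # Stand-in decoder: a fixed random projection rather than a trained MNIST model.
    W = torch.randn(20, 28 * 28) * 0.1
    decoder = lambda y, z: torch.sigmoid(torch.cat([F.one_hot(y, 10).float(), z]) @ W)
    img = generate_multi_mnist(decoder)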
We observe that not only are we able to reliably infer the counts of the digits in the given images, we are able to simultaneously reconstruct the inputs as well as its constituent parts. 4 Discussion and" + }, + { + "url": "http://arxiv.org/abs/1611.07492v1", + "title": "Inducing Interpretable Representations with Variational Autoencoders", + "abstract": "We develop a framework for incorporating structured graphical models in the\n\\emph{encoders} of variational autoencoders (VAEs) that allows us to induce\ninterpretable representations through approximate variational inference. This\nallows us to both perform reasoning (e.g. classification) under the structural\nconstraints of a given graphical model, and use deep generative models to deal\nwith messy, high-dimensional domains where it is often difficult to model all\nthe variation. Learning in this framework is carried out end-to-end with a\nvariational objective, applying to both unsupervised and semi-supervised\nschemes.", + "authors": "N. Siddharth, Brooks Paige, Alban Desmaison, Jan-Willem Van de Meent, Frank Wood, Noah D. Goodman, Pushmeet Kohli, Philip H. S. Torr", + "published": "2016-11-22", + "updated": "2016-11-22", + "primary_cat": "stat.ML", + "cats": [ + "stat.ML", + "cs.CV", + "cs.LG" + ], + "main_content": "Introduction Reasoning in complex perceptual domains such as vision often involves two facets: the ability to effectively learn \ufb02exible representations of the complex high-dimensional data, and the ability to interpret the representations in some structured form. The former is a measure of how well one can capture the relevant information in the data, and the latter is a means of employing consistent semantics to such, in an effort to help diagnosis, enable composition, and improve generality. Probabilistic graphical models[8, 11] enable structured representations, but often in perceptual domains such as vision, they require extensive speci\ufb01cation and signi\ufb01cant feature engineering to be useful. Variational Autoencoders (VAEs) [6, 12], are a form of generative model, where the (typically) manually speci\ufb01ed feature extractors are replaced with (deep) neural networks. Here, parameters of both the generative model and an approximation to the true posterior, called the recognition model, are learned simultaneously. However, a particular feature of such approximations is that they exhibit entangled, and non-interpretable, latent representations by virtue of the fact that the approximating distributions are assumed to take a general, \ufb02exible form; typically multivariate normal. Our contribution extends the combination of deep neural networks and graphical models to allow the use of arbitrarily structured graphical models as variational approximations, which enforces latent representations to conform to the types and structure of the provided graphical model. And where the structure alone is insuf\ufb01cient to encourage disentangled representations, we further extend this framework to perform semi-supervised learning, using a handful of labelled data to help disentangle 30th Conference on Neural Information Processing Systems (NIPS 2016), Barcelona, Spain. arXiv:1611.07492v1 [stat.ML] 22 Nov 2016 \fthe latent representation.1 Our framework employs a single variational objective in which parameters of both the generative and recognition models are learned simultaneously. We shares features, motivation, and goals with a variety of recent work. Kingma et al. 
[7] explores the ability to perform semi-supervised learning in the VAE setting. This is accomplished by partitioning the latent space into structured and unstructured random variables, and providing labels for the structured variables. Kulkarni et al. [10] employ an particular interpretable model for their latent space, where each component is independent of the others, providing weak supervision through a customized training procedure rather than through explicit labels. We build on such work on semi-supervised learning by extending to more general models and structures for the latent space. Sohn et al. [14] perform fully-supervised learning in the particular case where both the (unstructured) latents and labels can be taken to be conditioned on the data. Closest in spirit and motivation is recent work by Johnson et al. [4], which also involves combining graphical models with VAEs to do unsupervised learning. It is employed as a means to extend the class of problems for which graphical model inference for can be performed effectively, involving the relaxation of conjugacy constraints for likelihoods. Finally, Schulman et al. [13] provides a general method for estimating gradients of stochastic computations, which has been applied to models with structured latent spaces and discrete latent variables by Eslami et al. [3]. An additional contribution of our work is a package for Torch [2] which permits simple simultaneous speci\ufb01cation of deep generative models with structured latent spaces, and of the their corresponding inference networks. 2 Formulation Fundamentally, we wish to learn the parameters of a graphical model chosen to model the data. This is typically a generative model over data xand latents z, denoted p\u03b8(x, z). We would like to estimate the posterior over the latents given the data, denoted p\u03b8(z | x), in order to extract a representation. When we wish to extract an interpretable representation, then this corresponds to constraining the model we are learning to be one whose posterior distribution is then amenable to human inspection. Although in the general case, computation of the exact posterior distribution p\u03b8(z | x) is intractable, recent advances in deep generative models enable the the use of the variational autoencoder to learn a parametrised approximation q\u03c6(z | x) to it. Here, the variational approximation is used as a surrogate for the (intractable) exact posterior, constrained to match the true posterior through DKL(q\u03c6(z | x) \u2225p\u03b8(z | x)). However, since one cannot actually evaluate the true posterior, the VAE optimises an alternate objective DKL(q\u03c6(z | x) \u2225p\u03b8(z | x)) = \u2212L(\u03b8, \u03c6; x) + log p\u03b8(x) where L(\u03b8, \u03c6; x) = Eq\u03c6(z|x)[p\u03b8(x, z) \u2212q\u03c6(z | x)] called the evidence lower bound (ELBO) that lower bounds the marginal likelihood log p\u03b8(x). Here, both the generative model parameters \u03b8 and recognition model (the approximation distribution) parameters \u03c6 are characterised by (deep) neural networks, and are both learned simultaneously. The ELBO objective can also be reformulated as L(\u03b8, \u03c6; x) = Eq\u03c6(z|x)[p\u03b8(x | z)] \u2212DKL(q\u03c6(z | x) \u2225p(z)) to indicate that the approximating distribution is used, along with a prior over the latents, to regularise the standard autoencoder objective of the expected log likelihood. 
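As a minimal sketch of this reformulation, the reconstruction term plus an analytic KL between a diagonal-Gaussian q_φ(z | x) and a standard-normal prior can be written as below; the Bernoulli likelihood and the variable names are illustrative assumptions.

import torch
import torch.nn.functional as F

def elbo(x, x_recon_logits, z_mean, z_log_var):
    # E_q[log p_theta(x | z)] for a Bernoulli decoder, via BCE with logits.
    recon = -F.binary_cross_entropy_with_logits(
        x_recon_logits, x, reduction='none').sum(dim=-1)
    # Analytic KL( N(mu, sigma^2) || N(0, I) ) for a diagonal Gaussian encoder.
    kl = -0.5 * (1 + z_log_var - z_mean.pow(2) - z_log_var.exp()).sum(dim=-1)
    return (recon - kl).mean()

# Usage sketch: loss = -elbo(x, decoder(z), z_mean, z_log_var), with z drawn by
# the usual reparameterisation z = z_mean + exp(0.5 * z_log_var) * eps.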
While recent approaches to deep generative modelling places constraints, on the structure of the generative model p\u03b8(x, z)[4], we incorporate them into the encoder model q\u03c6(z | x). We do so for two principal reasons. Firstly, a mean-\ufb01eld approximation in q\u03c6(z | x), as is typically assumed, is a poor \ufb01t for perceptual domains such as vision. Complex dependencies that arise in the posterior due to intricacies of the rendering process, even when latent variables may be considered a priori independent, means that such a mean-\ufb01eld assumption is often insuf\ufb01cient. Secondly, an unstructured form (say, multivariate normal) for the variational approximation, means that the recognition model produces latents that are also unstructured, and as is, not interpretable. Any attempts to imbue an 1 For the purposes of this manuscript, we refer to latent representations that are disentangled as structured and latent representations that are entangled as unstructured. The notions of entangled and disentangled representations relate to concise and well-de\ufb01ned human interpretability (visual gestalt) of the axes of variation. 2 \finterpretation on such representations typically happens after the fact, by adding a discriminative model on top of the learned representations. Adding structure to the encoder model ameliorates both these concerns, by allowing a richer dependency structure in the recognition model, and also inducing latent representations whose interpretability is governed by the given graphical model. Our framework enables the speci\ufb01cation of a wide variety of graphical models, in an embedded domain-speci\ufb01c language (EDSL), expressed directly in the Torch[2] framework. 2.1 Model Particularly, for the domains we are interested in here, the models we employ factorise into structured latents y and unstructured latents z, on top of the speci\ufb01c factorisation imposed for the structured latent variables. The typical form of the generative model is given by p\u03b8(x, z | y) = p\u03b8(x | z, y)p\u03b8(z, y) where p\u03b8(x | z, y) is typically a multivariate normal distribution and p\u03b8(z, y) is some appropriately structured latent(s). We use the unstructured latent variables as a means to capture variation in the data not explicitly modelled, jointly learning a likelihood function partially constrained by the structured latents, but crucially not enforcing that they totally explain the data. The variational approximation to the true posterior, q\u03c6(z | x), is nominally taken to be of the same family as the prior distribution, as q\u03c6(z, y | x), but can often include additional structure and alternate factorisations as appropriate. One particular factorisation introduces a dependence between the structured and unstructured latents in the approximation, conditioning the latter on the former as q\u03c6(z, y | x) = q\u03c6(z | y, x)q\u03c6(y | x). This removes the implicit \u201cmean \ufb01eld\u201d assumption in the recognition network, and re\ufb02ects the fact that the latent variables z and y typically exhibit conditional dependence on x, even if the latent variables are a priori independent. Models with such top-level factoring are useful for situations where interpretability is only required or useful to model along certain axes of variation. It is useful when we wish to interpret the same data from different viewpoints and contexts like when the choice and form of labels is \ufb01xed. 
And it is useful when we cannot conceivably capture all the variation in the data due to its complexity, and so settle for a particular restriction, as is the case with real-world visual and language data.

2.2 Learning
Although we impose structure in the recognition network through the graphical models, it is not necessarily certain that the nodes corresponding to particular variables actually encode the desired “semantics” of that node. For example, in a graphical model that decomposes as described above, where the structured latent y encodes digit identity (0-9) and the unstructured latent z captures the style, there is no certainty that the decomposition alone is sufficient to learn disentangled representations. Without the use of supervision, one has no guarantee that the structured and unstructured latents fulfil their respective roles in such a scheme. We build on the work by Kingma et al. [7] to construct a semi-supervised learning scheme where a small amount of supervision is sufficient to break the inherent symmetry problem and learn appropriate representations. In their framework, the objective has a term involving labelled data, which treats both data x and label y as observed variables, and a term involving unlabelled data, which simply marginalises out the label y over its support. They also add an explicit term to learn a classifier (in the recognition model) on the supervised data points. We too can employ the same objective, but we note that in such cases there is often a cost to be paid computationally: the marginalisation scales poorly with both shortage of labels and support size. Alternately, we observe that discrete random variables are only used as input to the neural network that parametrises the generative model, so we can often simply plug in the probability vector of the discrete distribution instead of sampling from it, similar to the straight-through estimator [1]. This is, of course, not applicable in general, but if the posterior over labels pθ(y | x) is close to a Dirac-delta function, as in the classifying-digits example, then it is a good approximation. Other points of difference involve the use of richer approximations for the encoder and decoder in the form of convolutional neural networks (CNNs) [9], and the introduction of a supervision rate enabling repeated observation of a labelled data point, in different contexts, in order to reduce estimator variance. CNNs help avoid employing a stacked model [7], allowing a single, joint objective with comparable performance. Supervision rates are motivated by the fact that observing a labelled data point in the context of different unlabelled data points (in a mini-batched training regime) can help moderate the variance in learning steps.

Classification-error rates (%), per-class labelled-set size l:
MNIST: l = 10: Ours 12.2 (± 1.38), “M2” [7] 11.97 (± 1.71); l = 60: Ours 5.28 (± 0.76), “M2” 4.94 (± 0.13); l = 100: Ours 4.23 (± 0.68), “M2” 3.60 (± 0.56); l = 300: Ours 3.94 (± 0.77), “M2” 3.92 (± 0.63).
SVHN: l = 100: Ours 30.32 (± 2.74), “M1+M2” [7] 36.02 (± 0.10); l = 300: Ours 23.98 (± 1.83).
Figure 2: (a) Classification-error rates for different (per-class) labelled-set sizes (l) over different runs. (b) Classification error for the MNIST dataset over different labelled-set (per-class) sizes (l) and supervision rates (r) = {0, 0.01, 0.05, 0.1, 0.5, 1.0}.

3 Experiments
Figure 1: (l) Generative and (r) recognition models with digit d and style n.
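Before describing the experiments, the plug-in treatment of discrete latents from Section 2.2 can be sketched for the digit variable d of Figure 1: when a label is available it enters the decoder as a one-hot vector, and otherwise the probability vector of q_φ(d | x) is passed in directly rather than a sample. The code is an illustrative PyTorch rendering, not the Torch implementation referred to above.

import torch
import torch.nn.functional as F

def plug_in_digit(d_logits, d_obs=None):
    # Vector handed to the generative network for the discrete latent d:
    # the observed one-hot label when supervision is available, otherwise the
    # probability vector of q_phi(d | x) itself (a reasonable approximation
    # when that posterior is close to a Dirac delta, as for digit classification).
    if d_obs is not None:
        return F.one_hot(d_obs, d_logits.size(-1)).float()
    return F.softmax(d_logits, dim=-1)

if __name__ == '__main__':
    logits = torch.randn(4, 10)                        # a batch of 4 inputs
    unsupervised = plug_in_digit(logits)               # probability vectors
    supervised = plug_in_digit(logits, torch.tensor([3, 7, 0, 1]))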
We evaluate our framework on its ability to learn interpretable latents through both an effective recognition model and an effective generative model. The ef\ufb01cacy of the recognition model is evaluated on a label-classi\ufb01cation task, and the ef\ufb01cacy the generative model is evaluated on the visual analogies task. The evaluations are conducted on both the MNIST and Google Street-View House Numbers (SVHN) datasets using the generative and recognition models shown in Fig. 1. Both the MNIST and SVHN datasets were employed with a training-test split of 60000/10000 for MNIST and 73000/26000 for SVHN. For the MNIST dataset, we use a standard single-hidden-layer MLP with 512 modes for both the encoder and decoder. For the SVHN dataset, we use a CNN architecture with a convolutional encoder and a deconvolutional decoder, with two blocks of 32 \u219264 \ufb01lters in the encoder, and the reverse in the decoder. For learning, we used AdaM [5] with a learning rate of 0.001 (0.0003 for SVHN) and momentum-correction terms set to their default values. The minibatch sizes varied from 80-300 depending on the dataset used and the supervised-set size. To evaluate the recognition model quantitatively, we compute the classi\ufb01cation accuracy of the label-prediction task with the model for both datasets. This allows us to measure the extent to which the latent-space representations are disentangled, capturing the kinds of representations one would expect a priori given the graphical model. The results, with comparison against Kingma et al. [7], are reported in Fig. 2(a). For the MNIST dataset, we compare against their \u201cM2\u201d model, as we use just the standard MLP for the experiments without performing a preliminary feature-learning step. For the SVHN dataset, we compare against the stacked \u201cM1+M2\u201d model, since we employ a more effective feature learner for visual data through the CNN. As can be seen from the results, we perform comparably on the MNIST dataset, and comfortably beat the error rates on the SVHN dataset. Note that these recognition networks employed the plug-in estimator discussed in Section 2.2. A particular feature of our approach is the ability to learn disentangled representations with just a few labelled data points. Combined with the ability to re-observe a particular labelled data point through the use of the supervision rate, our framework can effectively disentangle the latent representations in a semi-supervised learning regime involving only a handful of labelled data. Figure 2(b) shows how the error rate varies with change in the supervision rate for different labelled set (per class) sizes. Note the steep drop in error rate with just a handful of labels (e.g. 10) seen just a few times (e.g. 1% of the time). The supervision rate here corresponds to sampling minibatches of 80 data points from a total labelled set of 100 data points, with each label class equally represented in the labelled set. Another means of measuring how well the latent space has been disentangled is by manipulation of the generative model. Here, one can vary the values of particular variables, and observe if the 4 \f(a) (b) Figure 3: Exploring the disentangled latent space through the generative model. (a) Visual analogies, where the style latent variable n is kept \ufb01xed and the label l varied. (b) Exploration in the style n space for a 2D latent Gaussian random variable, keeping label l \ufb01xed. generative model produces outputs that suitably re\ufb02ect the changes effected. 
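Schematically, one such manipulation holds an inferred style code fixed and sweeps the class label through the decoder. The sketch below assumes a decoder callable taking a concatenated one-hot label and style vector, which is an illustrative interface rather than the exact one used.

import torch
import torch.nn.functional as F

def label_sweep(decoder, style, n_classes=10):
    # Hold the inferred style latent n fixed and sweep the label l through the
    # generative model, producing one image per class.
    images = []
    for c in range(n_classes):
        label_vec = F.one_hot(torch.tensor(c), n_classes).float()
        images.append(decoder(torch.cat([label_vec, style])))
    return torch.stack(images)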
For the datasets and models considered here, this is cast as the visual analogies task. Figure 3 demonstrates the effect of manipulating the latent variables in the learnt generative model in different ways. Figure 3(a) tests the changes observed in the generative model outputs when the style variable n is held constant, and the digit label l is varied. For both the MNIST and SVHN datasets, it clearly demonstrates that changing only the digit label has the expected effect of varying the class, but maintaining style. Had the latent space not been suf\ufb01ciently disentangled, this could not be the case. Figure 3(b) tests the changes observed in the generative model outputs in the opposite case, when the digit label l is held constant, and the style variable l is varied, for each of the digits in the MNIST dataset. Note that we only evaluate this capability on the MNIST dataset as this particular exercise needs the style variable to be 2-dimensional, which is just suf\ufb01cient to capture the variations in MNIST, but is not suf\ufb01cient to capture variation in the more complex SVHN dataset. Again, we note that digits maintain their identity in the outputs while systematically re\ufb02ecting changes in style. This also is something that would not be possible had the latents not been suf\ufb01ciently disentangled. In summary, we demonstrate the utility and ef\ufb01cacy of employing graphical models in the encoders or recognition networks of variational autoencoders to induce interpretable latent representations with semi-supervised learning. Results of experiments conducted with our framework demonstrate, both qualitatively and quantitatively, the practical effectiveness of our framework in learning interpretable and disentangled latent representations." + }, + { + "url": "http://arxiv.org/abs/1308.4189v2", + "title": "Seeing What You're Told: Sentence-Guided Activity Recognition In Video", + "abstract": "We present a system that demonstrates how the compositional structure of\nevents, in concert with the compositional structure of language, can interplay\nwith the underlying focusing mechanisms in video action recognition, thereby\nproviding a medium, not only for top-down and bottom-up integration, but also\nfor multi-modal integration between vision and language. We show how the roles\nplayed by participants (nouns), their characteristics (adjectives), the actions\nperformed (verbs), the manner of such actions (adverbs), and changing spatial\nrelations between participants (prepositions) in the form of whole sentential\ndescriptions mediated by a grammar, guides the activity-recognition process.\nFurther, the utility and expressiveness of our framework is demonstrated by\nperforming three separate tasks in the domain of multi-activity videos:\nsentence-guided focus of attention, generation of sentential descriptions of\nvideo, and query-based video search, simply by leveraging the framework in\ndifferent manners.", + "authors": "N. Siddharth, Andrei Barbu, Jeffrey Mark Siskind", + "published": "2013-08-19", + "updated": "2014-05-28", + "primary_cat": "cs.CV", + "cats": [ + "cs.CV", + "cs.AI", + "cs.CL" + ], + "main_content": "Introduction The ability to describe the observed world in natural language is a quintessential component of human intelligence. 
A particular feature of this ability is the use of rich sentences, involving the composition of multiple nouns, adjectives, verbs, adverbs, and prepositions, to describe not just static objects and scenes, but also events that unfold over time. Furthermore, this ability appears to be learned by virtually all children. The deep semantic information learned is multi-purpose: it supports comprehension, generation, and inference. In this work, we investigate the intuition, and the precise means and mechanisms that will enable us to support such ability in the domain of activity recognition in multi-activity video. Suppose we wanted to recognize an occurrence of an event described by the sentence The ball bounced, in a video clip. Nominally, we would need to detect the ball and its position in the \ufb01eld of view in each frame and determine that the sequence of such detections satis\ufb01ed the requirements of bounce. The sequence of such detections and their corresponding positions over time constitutes a track for that object. Here, the semantics of an intransitive verb like bounce would be formulated as a unary predicate over object tracks. Recognizing occurrences of events described by sentences containing transitive verbs, like The person approached the ball, would require detecting and tracking two objects, the person and the ball constrained by a binary predicate. In an ideal world, event recognition would proceed in a purely feed-forward fashion: robust and unambiguous object detection and tracking followed by application of the semantic predicates on the recovered tracks. However, the current state-of-the-art in computer vision is far from this ideal. Object detection alone is highly unreliable. The best current average-precision scores on PASCAL VOC hover around 40%-50% [3]. As a result, object detectors suffer from both false positives and false negatives. One way around this is to use detection-based tracking [17], where one biases the detector to overgenerate, alleviating the problem of false negatives, and uses a different mechanism to select among the overgenerated detections to alleviate the problem of false positives. One such mechanism selects detections that are temporally coherent, i.e. the track motion being consistent with optical \ufb02ow. Barbu et al. [2] proposed an alternate mechanism that selected detections for a track that satis\ufb01ed a unary predicate such as one would construct for an intransitive verb like bounce. We signi\ufb01cantly extend that approach, selecting detections for multiple tracks that collectively satisfy a complex multi-argument predicate representing the semantics of an entire sentence. That predicate is constructed as a conjunction of predicates representing the semantics of individual words in that sentence. For example, given the sentence The person to the left of the chair approached the trash can, we construct a logical form. PERSON(P) \u2227TOTHELEFTOF(P, Q) \u2227CHAIR(Q) \u2227APPROACH(P, R) \u2227TRASHCAN(R) Our tracker is able to simultaneously construct three tracks P, Q, and R, selecting out detections for each, in an optimal fashion that simultaneously optimizes a joint measure of detection score and temporal coherence while also satisfying the above conjunction of predicates. We obtain the aforementioned detections by employing a state-of-theart object detector [5], where we train a model for each object (e.g. 
person, chair, etc.), which when applied to an im1 \fage, produces axis-aligned bounding rectangles with associated scores indicating strength of detection. We represent the semantics of lexical items like person, to the left of, chair, approach, and trash can with predicates over tracks like PERSON(P), TOTHELEFTOF(P, Q), CHAIR(Q), APPROACH(P, R), and TRASHCAN(R). These predicates are in turn represented as regular expressions (i.e. \ufb01nite-state recognizers or FSMs) over features extracted from the sequence of detection positions, shapes, and sizes as well as their temporal derivatives. For example, the predicate TOTHELEFTOF(P, Q) might be a single state FSM where, on a frame-by-frame basis, the centers of the detections for P are constrained to have a lower x-coordinate than the centers of the detections for Q. The actual formulation of the predicates (Table 2) is more complex as it must deal with noise and variance in real-world video. What is central is that the semantics of all parts of speech, namely nouns, adjectives, verbs, adverbs, and prepositions (both those that describe spatial-relations and those that describe motion), is uniformly represented by the same mechanism: predicates over tracks formulated as \ufb01nite-state recognizers over features extracted from the detections in those tracks. We refer to this capacity as the Sentence Tracker, a function S : (B, s, \u039b) 7\u2192(\u03c4, J), that takes, as input, an overgenerated set B of detections along with a sentence s and a lexicon \u039b and produces a score \u03c4 together with a set J of tracks that satisfy s while optimizing a linear combination of detection scores and temporal coherence. This can be used for three distinct purposes as shown in section 4: focus of attention One can apply the sentence tracker to the same video clip B, that depicts multiple simultaneous events taking place in the \ufb01eld of view with different participants, with two different sentences s1 and s2. In other words, one can compute (\u03c41, J1) = S(B, s1, \u039b) and (\u03c42, J2) = S(B, s2, \u039b) to yield two different sets of tracks J1 and J2 corresponding to the different sets of participants in the different events described by s1 and s2. generation One can take a video clip B as input and systematically search the space of all possible sentences s that can be generated by a context-free grammar and \ufb01nd that sentence s\u2217for which (\u03c4 \u2217, J\u2217) = S(B, s\u2217, \u039b) yields the maximal \u03c4 \u2217. This can be used to generate a sentence that describes an input video clip B. retrieval One can take a collection B = {B1, . . . , BM} of video clips (or a single long video chopped into short clips) along with a sentential query s, compute (\u03c4i, Ji) = S(Bi, s, \u039b) for each Bi, and \ufb01nd the clip Bi with maximal score \u03c4i. This can be used to perform sentence-based video search. (Prior work [19] showed how one can take a training set {(B1, s1), . . . , (BM, sM)} of video-sentence pairs, where the word meanings \u039b are unknown, and compute the lexicon \u039b\u2217which maximizes the sum \u03c41 + \u00b7 \u00b7 \u00b7 + \u03c4M computed from (\u03c41, J1) = S(B1, s, \u039b\u2217), . . . , (\u03c4M, JM) = S(BM, s, \u039b\u2217).) However, we \ufb01rst present the two central algorithmic contributions of this work. 
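As a rough illustration (our own, not the authors' code), the single scoring interface $S(B, s, \Lambda)$ described above can serve all three purposes with only the call pattern changing; the function names and the `sentence_tracker` callable below are hypothetical stand-ins for the machinery developed in the next two sections.

```python
# Illustrative only: `sentence_tracker` stands in for the function S(B, s, Lambda)
# developed in Sections 2-3; it is assumed to return (score tau, tracks J).
def focus_of_attention(B, sentences, lexicon, sentence_tracker):
    """Same clip B, different sentences -> different sets of participant tracks."""
    return {s: sentence_tracker(B, s, lexicon)[1] for s in sentences}

def generate_description(B, candidate_sentences, lexicon, sentence_tracker):
    """Pick the candidate sentence that scores highest on clip B."""
    return max(candidate_sentences, key=lambda s: sentence_tracker(B, s, lexicon)[0])

def retrieve_clips(clips, query_sentence, lexicon, sentence_tracker, top_k=3):
    """Rank a collection of clips by how well they depict the sentential query."""
    scored = [(sentence_tracker(B, query_sentence, lexicon)[0], i)
              for i, B in enumerate(clips)]
    return sorted(scored, reverse=True)[:top_k]
```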
In section 2 we present the details of the sentence tracker, the mechanism for efficiently constraining several parallel detection-based trackers, one for each participant, with a conjunction of finite-state recognizers. In section 3 we present lexical semantics for a small vocabulary of 17 lexical items (5 nouns, 2 adjectives, 4 verbs, 2 adverbs, 2 spatial-relation prepositions, and 2 motion prepositions) all formulated as finite-state recognizers over features extracted from detections produced by an object detector, together with compositional semantics that maps a sentence to a semantic formula constructed from these finite-state recognizers where the object tracks are assigned to arguments of these recognizers.

2. The Sentence Tracker

Barbu et al. [2] address the issue of selecting detections for a track that simultaneously satisfies a temporal-coherence measure and a single predicate corresponding to an intransitive verb such as bounce. Doing so constitutes the integration of top-down high-level information, in the form of an event model, with bottom-up low-level information in the form of object detectors. We provide a short review of the relevant material in that work to introduce notation and provide the basis for our exposition of the sentence tracker.

$\max_{j^1,\ldots,j^T} \sum_{t=1}^{T} f(b^t_{j^t}) + \sum_{t=2}^{T} g(b^{t-1}_{j^{t-1}}, b^t_{j^t})$  (1)

The first component is a detection-based tracker. For a given video clip with $T$ frames, let $j$ be the index of a detection and $b^t_j$ be a particular detection in frame $t$ with score $f(b^t_j)$. A sequence $\langle j^1, \ldots, j^T\rangle$ of detection indices, one for each frame $t$, denotes a track comprising detections $b^t_{j^t}$. We seek a track that maximizes a linear combination of aggregate detection score, summing $f(b^t_{j^t})$ over all frames, and a measure of temporal coherence, as formulated in Eq. 1. The temporal coherence measure aggregates a local measure $g$ computed between pairs of adjacent frames, taken to be the negative Euclidean distance between the center of $b^t_{j^t}$ and the forward-projected center of $b^{t-1}_{j^{t-1}}$ computed with optical flow. Eq. 1 can be computed in polynomial time using dynamic programming with the Viterbi [15] algorithm. It does so by forming a lattice, whose rows are indexed by $j$ and whose columns are indexed by $t$, where the node at row $j$ and column $t$ is the detection $b^t_j$. Finding a track thus reduces to finding a path through this lattice.

$\max_{k^1,\ldots,k^T} \sum_{t=1}^{T} h(k^t, b^t_{\hat{\jmath}^t}) + \sum_{t=2}^{T} a(k^{t-1}, k^t)$  (2)

The second component recognizes events with hidden Markov models (HMMs), by finding a MAP estimate of an event model given a track. This is computed as shown in Eq. 2, where $k^t$ denotes the state for frame $t$, $h(k, b)$ denotes the log probability of generating a detection $b$ conditioned on being in state $k$, $a(k', k)$ denotes the log probability of transitioning from state $k'$ to $k$, and $\hat{\jmath}^t$ denotes the index of the detection produced by the tracker in frame $t$. This can also be computed in polynomial time using the Viterbi algorithm. Doing so induces a lattice, whose rows are indexed by $k$ and whose columns are indexed by $t$.

[Figure 1. The cross-product lattice used by the sentence tracker, consisting of L tracking lattices and W event-model lattices.]

The two components, detection-based tracking and event recognition, can be merged by combining the cost functions from Eq. 1 and Eq. 2 to yield a unified cost function

$\max_{\substack{j^1,\ldots,j^T \\ k^1,\ldots,k^T}} \sum_{t=1}^{T} f(b^t_{j^t}) + \sum_{t=2}^{T} g(b^{t-1}_{j^{t-1}}, b^t_{j^t}) + \sum_{t=1}^{T} h(k^t, b^t_{j^t}) + \sum_{t=2}^{T} a(k^{t-1}, k^t)$

that computes the joint MAP estimate of the best possible track and the best possible state sequence. This is done by replacing the $\hat{\jmath}^t$ in Eq. 2 with $j^t$, allowing the joint maximization over detection and state sequences. This too can be computed in polynomial time with the Viterbi algorithm, finding the optimal path through a cross-product lattice where each node represents a detection paired with an event-model state. This formulation combines a single tracker lattice with a single event model, constraining the detection-based tracker to find a track that is not only temporally coherent but also satisfies the event model. This can be used to select that ball track from a video clip that contains multiple balls that exhibits the motion characteristics of an intransitive verb such as bounce.
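To make the merged cost function concrete, the following is a minimal Viterbi sketch for the single-track, single-event-model case; it is our own illustration under assumed score-array conventions, not the authors' implementation, and the full sentence tracker generalizes the state space to $L$ tracks and $W$ word lattices as described next.

```python
import numpy as np

def joint_viterbi(f, g, h, a):
    """
    Minimal sketch of the merged dynamic program: jointly maximize
    sum_t f(b_{j^t}) + sum_t g(b_{j^{t-1}}, b_{j^t}) + sum_t h(k^t, b_{j^t}) + sum_t a(k^{t-1}, k^t)
    over detection indices j^t and HMM states k^t.
      f[t][j]     : detection score of detection j in frame t
      g[t][jp][j] : temporal coherence between detection jp (frame t-1) and j (frame t)
      h[k][t][j]  : log prob. of detection j in frame t given event state k
      a[kp][k]    : log transition probability between event states
    """
    T, J, K = len(f), len(f[0]), len(a)
    delta = np.array([[f[0][j] + h[k][0][j] for k in range(K)] for j in range(J)])
    backptr = []
    for t in range(1, T):
        new_delta = np.full((J, K), -np.inf)
        ptr = np.zeros((J, K, 2), dtype=int)
        for j in range(J):
            for k in range(K):
                best, arg = -np.inf, (0, 0)
                for jp in range(J):          # predecessor cross-product node (jp, kp)
                    for kp in range(K):
                        s = delta[jp, kp] + g[t][jp][j] + a[kp][k]
                        if s > best:
                            best, arg = s, (jp, kp)
                new_delta[j, k] = best + f[t][j] + h[k][t][j]
                ptr[j, k] = arg
        delta, backptr = new_delta, backptr + [ptr]
    # trace back the best joint (detection, state) path
    j, k = np.unravel_index(np.argmax(delta), delta.shape)
    path = [(int(j), int(k))]
    for ptr in reversed(backptr):
        j, k = ptr[j, k]
        path.append((int(j), int(k)))
    return float(delta.max()), list(reversed(path))
```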
One would expect that encoding the semantics of a complex sentence such as The person to the right of the chair quickly carried the red object towards the trash can, which involves nouns, adjectives, verbs, adverbs, and spatial-relation and motion prepositions, would provide substantially more mutual constraint on the collection of tracks for the participants than a single intransitive verb would constrain a single track. We thus extend the approach described above by incorporating a complex multi-argument predicate that represents the semantics of an entire sentence instead of one that only represents the semantics of a single intransitive verb. This involves formulating the semantics of other parts of speech, in addition to intransitive verbs, also as HMMs. We then construct a large cross-product lattice, illustrated in Fig. 1, to support $L$ tracks and $W$ words. Each node in this cross-product lattice represents $L$ detections and the states for $W$ words. To support $L$ tracks, we subindex each detection index $j$ as $j_l$ for track $l$. Similarly, to support $W$ words, we subindex each state index $k$ as $k_w$ for word $w$, the number of states $K$ for the lexical entry $s_w$ at word $w$ as $K_{s_w}$, and the HMM parameters $h$ and $a$ for the lexical entry $s_w$ at word $w$ as $h_{s_w}$ and $a_{s_w}$. The argument-to-track mapping $\theta^i_w$ specifies the track that fills argument $i$ of word $w$, where $I_{s_w}$ specifies the arity, the number of arguments, of the lexical entry $s_w$ at word $w$. We then seek a path through this cross-product lattice that optimizes

$\max_{\substack{j^1_1,\ldots,j^T_1,\;\ldots,\;j^1_L,\ldots,j^T_L \\ k^1_1,\ldots,k^T_1,\;\ldots,\;k^1_W,\ldots,k^T_W}} \sum_{l=1}^{L}\Big(\sum_{t=1}^{T} f(b^t_{j^t_l}) + \sum_{t=2}^{T} g(b^{t-1}_{j^{t-1}_l}, b^t_{j^t_l})\Big) + \sum_{w=1}^{W}\Big(\sum_{t=1}^{T} h_{s_w}(k^t_w, b^t_{j^t_{\theta^1_w}}, \ldots, b^t_{j^t_{\theta^{I_{s_w}}_w}}) + \sum_{t=2}^{T} a_{s_w}(k^{t-1}_w, k^t_w)\Big)$

This can also be computed in polynomial time using the Viterbi algorithm. This describes a method by which the function $S : (B, s, \Lambda) \mapsto (\tau, J)$, discussed earlier, can be computed, where $B$ is the collection of detections $b^t_j$ and $J$ is the collection of detection indices $j^t_l$. The complexity of the sentence tracker is $O(T(J^L K^W)^2)$ in time and $O(J^L K^W)$ in space, where $T$ is the number of frames in the video, $W$ is the number of words in the sentence $s$, $L$ is the number of participants, $J = \max\{J^1, \ldots, J^T\}$, where $J^t$ is the number of detections considered in frame $t$, and $K = \max\{K_{s_1}, \ldots, K_{s_W}\}$. In practice, $J \le 5$, $L \le 4$, and $K = 1$ for all but verbs and motion prepositions, of which there are typically no more than three. With such, the method takes less than a second.

3. Natural-Language Semantics

The sentence tracker uniformly represents the semantics of words in all parts of speech, namely nouns, adjectives, verbs, adverbs, and prepositions (both those that describe spatial relations and those that describe motion), as HMMs. Finite-state recognizers (FSMs) are a special case of HMMs where the transition matrices $a$ and the output models $h$ are 0/1, which become $-\infty$/0 in log space. Here, we formulate the semantics of a small fragment of English consisting of 17 lexical items (5 nouns, 2 adjectives, 4 verbs, 2 adverbs, 2 spatial-relation prepositions, and 2 motion prepositions), by hand, as FSMs. We do so to focus on what one can do with this approach as discussed in section 4.
It is particularly enlightening that the FSMs we use are perspicuous and clearly encode pretheoretic human intuitions about word semantics.

[Table 1. (a) The grammar for our lexicon of 17 lexical entries (5 nouns, 2 adjectives, 4 verbs, 2 adverbs, 2 spatial-relation prepositions, and 2 motion prepositions); note that the grammar allows for infinite recursion. (b) Specification of the number of arguments for each word and the roles such arguments refer to. (c) A selection of sentences drawn from the grammar based on which we collected our corpus.
(a) S → NP VP; NP → D [A] N [PP]; D → an | the; A → blue | red; N → person | backpack | chair | trash can | object; PP → P NP; P → to the left of | to the right of; VP → V NP [Adv] [PPM]; V → approached | carried | picked up | put down; Adv → quickly | slowly; PPM → PM NP; PM → towards | away from
(b) to the left of: {agent, patient, source, goal, referent}, {referent}; to the right of: {agent, patient, source, goal, referent}, {referent}; approached: {agent}, {goal}; carried: {agent}, {patient}; picked up: {agent}, {patient}; put down: {agent}, {patient}; towards: {agent, patient}, {goal}; away from: {agent, patient}, {source}; other: {agent, patient, source, goal, referent}
(c) 1a. The backpack approached the trash can. 1b. The chair approached the trash can. 2a. The red object approached the trash can. 2b. The blue object approached the trash can. 3a. The person to the left of the trash can put down an object. 3b. The person to the right of the trash can put down an object. 4a. The person put down the trash can. 4b. The person put down the backpack. 5a. The person carried the red object. 5b. The person carried the blue object. 6a. The person picked up an object to the left of the trash can. 6b. The person picked up an object to the right of the trash can. 7a. The person picked up an object. 7b. The person put down an object. 8a. The person picked up an object quickly. 8b. The person picked up an object slowly. 9a. The person carried an object towards the trash can. 9b. The person carried an object away from the trash can. 10. The backpack approached the chair. 11. The red object approached the chair. 12. The person put down the chair.]

But nothing turns on the use of hand-coded FSMs. Our framework, as described above, supports HMMs. Nouns (e.g. person) may be represented by constructing static FSMs over discrete features, such as detector class. Adjectives (e.g. red, tall, and big) may be represented as static FSMs that describe select properties of the detections for a single participant, such as color, shape, or size, independent of other features of the overall event. Intransitive verbs (e.g. bounce) may be represented as FSMs that describe the changing motion characteristics of a single participant, such as moving downward followed by moving upward. Transitive verbs (e.g. approach) may be represented as FSMs that describe the changing relative motion characteristics of two participants, such as moving closer. Adverbs (e.g. slowly and quickly) may be represented by FSMs that describe the velocity of a single participant, independent of the direction of motion. Spatial-relation prepositions (e.g. to the left of) may be represented as static FSMs that describe the relative position of two participants. Motion prepositions (e.g. towards and away from) may be represented as FSMs that describe the changing relative position of two participants.
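As a small illustration of the FSM-as-restricted-HMM view (0/1 transition and output models, i.e. $-\infty$/0 in log space), the sketch below encodes a hand-written recognizer in the array format assumed by the Viterbi sketch above; the two-state "bounce"-like example and its velocity field are our own assumptions, not the paper's Table 2 definitions.

```python
import numpy as np

NEG_INF = float("-inf")

def fsm_to_hmm_scores(accepts, transitions, num_states, detections_per_frame):
    """
    Encode a hand-written finite-state recognizer as the 0/-inf log-space
    transition matrix `a` and output model `h` expected by `joint_viterbi`.
      accepts(k, b)        -> bool: does state k accept detection b?
      transitions          -> set of allowed (k_prev, k) state pairs
      detections_per_frame -> list over frames of lists of detections
    """
    a = np.full((num_states, num_states), NEG_INF)
    for kp, k in transitions:
        a[kp, k] = 0.0
    h = [[[0.0 if accepts(k, b) else NEG_INF for b in frame]
          for frame in detections_per_frame] for k in range(num_states)]
    return h, a

# Example: a two-state "bounce"-like recognizer; state 0 accepts downward motion,
# state 1 accepts upward motion, and the FSM may move 0->0, 0->1, 1->1.
# Each detection is assumed (for this sketch only) to carry a vertical velocity `vy`.
h, a = fsm_to_hmm_scores(
    accepts=lambda k, b: (b["vy"] > 0) if k == 0 else (b["vy"] < 0),
    transitions={(0, 0), (0, 1), (1, 1)},
    num_states=2,
    detections_per_frame=[[{"vy": 2.0}], [{"vy": -1.5}]],
)
```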
As is often the case, even simple static properties, such as detector class, object color, shape, and size, spatial relations, and direction of motion, might hold only for a portion of an event. We handle such temporal uncertainty by incorporating garbage states into the FSMs that always accept and do not affect the scores computed. This also allows for alignment between multiple words in a temporal interval during a longer aggregate event. We formulate the FSMs for specifying the word meanings as regular expressions over predicates computed from detections. The particular set of regular expressions and associated predicates that are used in the experiments are given in Table 2. The predicates are formulated around a number of primitive functions. The function avgFlow(b) computes a vector that represents the average optical \ufb02ow inside the detection b. The functions x(b), model(b), and hue(b) return the x-coordinate of the center of b, its object class, and the average hue of the pixels inside b respectively. The function fwdProj(b) displaces b by the average optical \ufb02ow inside b. The functions \u2220and angleSep determine the angular component of a given vector and angular distance between two angular arguments respectively. The function normal computes a normal unit vector for a given vector. The argument v to NOJITTER denotes a speci\ufb01ed direction represented as a 2D unit vector in that direction. Regular expressions are formed around predicates as atoms. A given regular expression must be formed solely from output models of the same arity and denotes an FSM, i.e. an HMM with a 0/1 transition matrix and output model, which become \u2212\u221e/0 in log space. We use R{n,} \u25b3 = R n \u00b7 \u00b7 \u00b7 R R\u2217to indicate that R must be repeated at least n times and R[n,] \u25b3 =(R [TRUE]){n,} to indicate that R must be repeated at least n times but can optionally have a single frame of noise between each repetition. This allows for some \ufb02exibility in the models. A sentence may describe an activity involving multiple tracks, where different (collections of) tracks \ufb01ll the arguments of different words. This gives rise to the requirement of compositional semantics: dealing with the mappings from arguments to tracks. Argument-to-track assignment is a function \u0398 : s 7\u2192(L, \u03b8) that maps a sentence s to the number L of participants and the argument-to-track mapping \u03b8i w. The mapping speci\ufb01es which tracks \ufb01ll which arguments of which words in the sentence and is mediated by a grammar and a speci\ufb01cation of the argument arity and role types for the words in the lexicon. Given a sentence, say The person to the right of the chair picked up the backpack, along with the grammar speci\ufb01ed in Table 1(a) and the lexicon speci\ufb01ed in Tables 1(b) and 2, it would yield a mapping corresponding to the following formula. 
PERSON(P) ∧ TOTHERIGHTOF(P, Q) ∧ CHAIR(Q) ∧ PICKEDUP(P, R) ∧ BACKPACK(R)

[Table 2. The finite-state recognizers corresponding to the lexicon in Table 1(a).
Constants: XBOUNDARY ≜ 300px; NEXTTO ≜ 50px; ∆STATIC ≜ 6px; ∆JUMP ≜ 30px; ∆QUICK ≜ 80px; ∆SLOW ≜ 30px; ∆CLOSING ≜ 10px; ∆DIRECTION ≜ 30°; ∆HUE ≜ 30°.
Simple predicates: NOJITTER(b, v) ≜ ‖avgFlow(b)·v‖ ≤ ∆JUMP; ALIKE(b1, b2) ≜ model(b1) = model(b2); CLOSE(b1, b2) ≜ |x(b1) − x(b2)| < XBOUNDARY; FAR(b1, b2) ≜ |x(b1) − x(b2)| ≥ XBOUNDARY; LEFT(b1, b2) ≜ 0 < x(b2) − x(b1) ≤ NEXTTO; RIGHT(b1, b2) ≜ 0 < x(b1) − x(b2) ≤ NEXTTO; HASCOLOR(b, hue) ≜ angleSep(hue(b), hue) ≤ ∆HUE; STATIONARY(b) ≜ ‖avgFlow(b)‖ ≤ ∆STATIC; QUICK(b) ≜ ‖avgFlow(b)‖ ≥ ∆QUICK; SLOW(b) ≜ ‖avgFlow(b)‖ ≤ ∆SLOW; PERSON(b) ≜ model(b) = person; BACKPACK(b) ≜ model(b) = backpack; CHAIR(b) ≜ model(b) = chair; TRASHCAN(b) ≜ model(b) = trashcan; BLUE(b) ≜ HASCOLOR(b, 225°); RED(b) ≜ HASCOLOR(b, 0°).
Complex predicates: STATIONARYCLOSE(b1, b2) ≜ STATIONARY(b1) ∧ STATIONARY(b2) ∧ ¬ALIKE(b1, b2) ∧ CLOSE(b1, b2); STATIONARYFAR(b1, b2) ≜ STATIONARY(b1) ∧ STATIONARY(b2) ∧ ¬ALIKE(b1, b2) ∧ FAR(b1, b2); CLOSER(b1, b2) ≜ |x(b1) − x(b2)| > |x(fwdProj(b1)) − x(b2)| + ∆CLOSING; FARTHER(b1, b2) ≜ |x(b1) − x(b2)| < |x(fwdProj(b1)) − x(b2)| + ∆CLOSING; MOVECLOSER(b1, b2) ≜ NOJITTER(b1, (0, 1)) ∧ NOJITTER(b2, (0, 1)) ∧ CLOSER(b1, b2); MOVEFARTHER(b1, b2) ≜ NOJITTER(b1, (0, 1)) ∧ NOJITTER(b2, (0, 1)) ∧ FARTHER(b1, b2); INANGLE(b, v) ≜ angleSep(∠avgFlow(b), ∠v) < ∆ANGLE; INDIRECTION(b, v) ≜ NOJITTER(b, ⊥(v)) ∧ ¬STATIONARY(b) ∧ INANGLE(b, v); APPROACHING(b1, b2) ≜ ¬ALIKE(b1, b2) ∧ STATIONARY(b2) ∧ MOVECLOSER(b1, b2); CARRY(b1, b2, v) ≜ PERSON(b1) ∧ ¬ALIKE(b1, b2) ∧ INDIRECTION(b1, v) ∧ INDIRECTION(b2, v); CARRYING(b1, b2) ≜ CARRY(b1, b2, (0, 1)) ∨ CARRY(b1, b2, (0, −1)); DEPARTING(b1, b2) ≜ ¬ALIKE(b1, b2) ∧ STATIONARY(b2) ∧ MOVEFARTHER(b1, b2); PICKINGUP(b1, b2) ≜ PERSON(b1) ∧ ¬ALIKE(b1, b2) ∧ STATIONARY(b1) ∧ INDIRECTION(b2, (0, 1)); PUTTINGDOWN(b1, b2) ≜ PERSON(b1) ∧ ¬ALIKE(b1, b2) ∧ STATIONARY(b1) ∧ INDIRECTION(b2, (0, −1)).
Regular expressions: λperson ≜ PERSON+; λbackpack ≜ BACKPACK+; λchair ≜ CHAIR+; λtrash can ≜ TRASHCAN+; λobject ≜ (BACKPACK | CHAIR | TRASHCAN)+; λblue ≜ BLUE+; λred ≜ RED+; λquickly ≜ TRUE+ QUICK[3,] TRUE+; λslowly ≜ TRUE+ SLOW[3,] TRUE+; λto the left of ≜ LEFT+; λto the right of ≜ RIGHT+; λapproached ≜ STATIONARYFAR+ APPROACHING[3,] STATIONARYCLOSE+; λcarried ≜ STATIONARYCLOSE+ CARRYING[3,] STATIONARYCLOSE+; λpicked up ≜ STATIONARYCLOSE+ PICKINGUP[3,] STATIONARYCLOSE+; λput down ≜ STATIONARYCLOSE+ PUTTINGDOWN[3,] STATIONARYCLOSE+; λtowards ≜ STATIONARYFAR+ APPROACHING[3,] STATIONARYCLOSE+; λaway from ≜ STATIONARYCLOSE+ DEPARTING[3,] STATIONARYFAR+.]
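A few of the Table 2 predicates rendered as Python functions, purely for illustration; the thresholds follow the table, but the detection representation (dictionary fields) is assumed, and some conjuncts (e.g. the NOJITTER terms) are omitted for brevity.

```python
# Illustrative, simplified rendering of a few Table 2 predicates.
X_BOUNDARY, NEXT_TO, DELTA_STATIC, DELTA_CLOSING = 300, 50, 6, 10

def x(b):      return b["x"]                   # center x-coordinate of detection b
def model(b):  return b["model"]               # object-detector class label
def flow(b):   return b["flow"]                # average optical-flow magnitude inside b
def fwd_x(b):  return b["x"] + b["flow_x"]     # forward-projected center (assumed field)

def alike(b1, b2):   return model(b1) == model(b2)
def close(b1, b2):   return abs(x(b1) - x(b2)) < X_BOUNDARY
def left(b1, b2):    return 0 < x(b2) - x(b1) <= NEXT_TO
def stationary(b):   return flow(b) <= DELTA_STATIC
def closer(b1, b2):  return abs(x(b1) - x(b2)) > abs(fwd_x(b1) - x(b2)) + DELTA_CLOSING

def approaching(b1, b2):
    # simplified: "b1 moves closer to a stationary b2 of a different class"
    return (not alike(b1, b2)) and stationary(b2) and closer(b1, b2)

# A per-frame predicate such as APPROACHING becomes the output model of one FSM
# state; the recognizer for "approached" then chains
# STATIONARYFAR+  APPROACHING[3,]  STATIONARYCLOSE+  over the frames of a track pair.
```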
To do so, we \ufb01rst construct a parse tree of the sentence s given the grammar, using a recursive-descent parser. For each word, we then determine from the parse tree, which words in the sentence are determined to be its dependents in the sense of government, and how many such dependents exist, from the lexicon speci\ufb01ed in Table 1(b). For example, the dependents of to the right of are determined to be person and chair, \ufb01lling its \ufb01rst and second arguments respectively. Moreover, we determine a consistent assignment of roles, one of agent, patient, source, goal, and referent, for each participant track that \ufb01lls the word arguments, from the allowed roles speci\ufb01ed for that word and argument in the lexicon. Here, P, Q, and R are participants that play the agent, referent, and patient roles respectively. 4. Experimental Evaluation The sentence tracker supports three distinct capabilities. It can take sentences as input and focus the attention of a tracker, it can take video as input and produce sentential descriptions as output, and it can perform content-based video retrieval given a sentential input query. To evaluate the \ufb01rst three, we \ufb01lmed a corpus of 94 short video clips, of varying length, in 3 different outdoor environments. The camera was moved for each video clip so that the varying background precluded unanticipated confounds. These video clips, \ufb01lmed with a variety of actors, each depicted one or more of the 21 sentences from Table 1(c). The depiction, from video clip to video clip, varied in scene layout and the actor(s) performing the event. The corpus was carefully constructed in a number of ways. First, many video clips depict more than one sentence. In particular, many video clips depict simultaneous distinct events. Second, each sentence is depicted by multiple video clips. Third the corpus was constructed with minimal pairs: pairs of video clips whose depicted sentences differ in exactly one word. These minimal pairs are indicated as the \u2018a\u2019 and \u2018b\u2019 variants of sentences 1\u20139 in Table 1(c). That varying word was carefully chosen to span all parts of speech and all sentential positions: sentence 1 varies subject noun, sentence 2 varies subject adjective, sentence 3 varies subject preposition, sentence 4 varies object noun, sentence 5 varies object adjective, sentence 6 varies object preposition, sentence 7 varies verb, sentence 8 varies adverb, and sentence 9 varies motion preposition. We \ufb01lmed our own corpus as we are unaware of any existing corpora that exhibit the above properties. We annotated each of the 94 clips with ground truth judgments for each of the 21 sentences, indicating whether the given clip depicted the given sentence. This set of 1974 judgments was used for the following analyses. 4.1. Focus of Attention Tracking is traditionally performed using cues from motion, object detection, or manual initialization on an object of interest. However, in the case of a cluttered scene involving multiple activities occurring simultaneously, there can be many moving objects, many instances of the same object class, and perhaps even multiple simultaneously occurring instances of the same event class. This presents a signi\ufb01cant obstacle to the ef\ufb01cacy of existing methods in such scenarios. To alleviate this problem, one can decide which objects to track based on which ones participate in a target event. 
The sentence tracker can focus its attention on just those objects that participate in an event specified by a sentential description. Such a description can differentiate between different simultaneous events taking place between many moving objects in the scene using descriptions constructed out of a variety of parts of speech: nouns to specify object class, adjectives to specify object properties, verbs to specify events, adverbs to specify motion properties, and prepositions to specify (changing) spatial relations between objects. Furthermore, such a sentential description can even differentiate which objects to track based on the role that they play in an event: agent, patient, source, goal, or referent. Fig. 2 demonstrates this ability: different tracks are produced for the same video clip that depicts multiple simultaneous events when focused with different sentences. We further evaluated this ability on all 9 minimal pairs, collectively applied to all 24 suitable video clips in our corpus. For 21 of these, both sentences in the minimal pair yielded tracks deemed to be correct depictions. Our website (footnote 1) includes example video clips for all 9 minimal pairs.

4.2. Generation

Much of the prior work on generating sentences to describe images [4, 7, 8, 12, 13, 18] and video [1, 6, 9, 10, 16] uses special-purpose natural-language-generation methods. We can instead use the ability of the sentence tracker to score a sentence paired with a video clip as a general-purpose natural-language generator by searching for the highest-scoring sentence for a given video clip. However, this has a problem. Scores decrease with longer word sequences and greater numbers of tracks that result from such. This is because both f and g are mapped to log space, i.e. $(-\infty, 0]$, via sigmoids, to match h and a, which are log probabilities. So we don't actually search for the highest-scoring sentence, which would bias the process towards short sentences. Instead, we seek complex sentences that are true of the video clip as they are more informative. Nominally, this search process would be intractable since the space of possible sentences can be huge and even infinite. However, we can use beam search to get an approximate answer. This is possible because the sentence tracker can score any word sequence, not just complete phrases or sentences. We can select the top-scoring single-word sequences and then repeatedly extend the top-scoring W-word sequences, by one word, to select the top-scoring (W+1)-word sequences, subject to the constraint that these (W+1)-word sequences are grammatical sentences or can be extended to grammatical sentences by insertion of additional words. We terminate the search process when the contraction threshold, the ratio between the score of a sequence and the score of the sequence expanding from it, drops below a specified value and the sequence being expanded is a complete sentence. This contraction threshold controls complexity of the generated sentence. When restricted to FSMs, h and a will be 0/1, which become $-\infty$/0 in log space. Thus an increase in the number of words can only decrease a score to $-\infty$, meaning that a sequence of words no longer describes a video clip. Since we seek sentences that do, we terminate the above beam-search process before the score goes to $-\infty$.
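A hedged sketch of this beam search; the `score`, `extend`, and `is_sentence` callables are placeholders for the sentence tracker and the grammar, and the exact contraction-ratio convention used here is our assumption rather than the authors' specification.

```python
def beam_search_description(B, score, extend, is_sentence,
                            beam_width=10, contraction=0.90, max_len=12):
    """
    Grow word sequences one word at a time, keep the top `beam_width`, and stop
    expanding a complete sentence once further expansion "contracts" its score.
    Scores are log-space (<= 0); we take the ratio as parent_score / child_score,
    which drops below 1 as the child gets worse (an assumption of this sketch).
      score(B, words)    -> sentence-tracker score of a word sequence on clip B
      extend(words)      -> words keeping the sequence grammatical or extendable
      is_sentence(words) -> True if the sequence is a complete sentence
    """
    beam = sorted(((score(B, (w,)), (w,)) for w in extend(())), reverse=True)[:beam_width]
    best = None
    for _ in range(max_len):
        new_beam = []
        for s, words in beam:
            for w in extend(words):
                child = words + (w,)
                cs = score(B, child)
                contracted = cs < 0 and s / cs < contraction
                if is_sentence(words) and contracted:
                    if best is None or s > best[0]:
                        best = (s, words)          # keep the parent sentence
                else:
                    new_beam.append((cs, child))
        beam = sorted(new_beam, reverse=True)[:beam_width]
        if not beam:
            break
    return best
```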
In this case, there is no approximation: a beam search maintaining all W-word sequences with finite score yields the highest-scoring sentence before the contraction threshold is met. (Footnote 1: http://aql.ecn.purdue.edu/~qobi/cccp/cvpr2014.html) To evaluate this approach, we searched the space of sentences generated by the grammar in Table 1(a) to find the top-scoring sentence for each of the 94 video clips in our corpus. Note that the grammar generates an infinite number of sentences due to recursion in NP. Even restricting the grammar to eliminate NP recursion yields a space of 147,123,874,800 sentences. Despite not restricting the grammar in this fashion, we are able to effectively find good descriptions of the video clips. We evaluated the accuracy of the sentence tracker in generating descriptions for our entire corpus, for multiple contraction thresholds. Accuracy was computed as the percentage of the 94 clips for which generated descriptions were deemed to describe the video by human judges. Contraction thresholds of 0.95, 0.90, and 0.85 yielded accuracies of 67.02%, 71.27%, and 64.89% respectively. We demonstrate examples of this approach in Fig. 3. Our website (footnote 1) contains additional examples.

4.3. Retrieval

The availability of vast video corpora, such as on YouTube, has created a rapidly growing demand for content-based video search and retrieval. The existing systems, however, only provide a means to search via human-provided captions. The inefficacy of such an approach is evident. Attempting to search for even simple queries such as pick up or put down yields surprisingly poor results, let alone searching for more complex queries such as person approached horse. Furthermore, some prior work on content-based video-retrieval systems, like Sivic and Zisserman [14], search only for objects and other prior work, like Laptev et al. [11], search only for events. Even combining such to support conjunctive queries for video clips with specified collections of objects jointly with a specified event, would not effectively rule out video clips where the specified objects did not play a role in the event or played different roles in the event. For example, it could not rule out a video clip depicting a person jumping next to a stationary ball for a query ball bounce or distinguish between the queries person approached horse and horse approached person. The sentence tracker exhibits the ability to serve as the basis of a much better video search and retrieval tool, one that performs content-based search with complex sentential queries to find precise semantically relevant clips, as demonstrated in Fig. 4. Our website (footnote 1) contains the top three scoring video clips for each query sentence from Table 1(c). To evaluate this approach, we scored every video clip in our corpus against every sentence in Table 1(c), rank ordering the video clips for each sentence, yielding the following statistics over the 1974 scores:
chance that a random clip depicts a given sentence: 13.12%
top-scoring clip depicts the given sentence: 94.68%
at least 1 of the top 3 clips depicts the given sentence: 100.00%
Our website (footnote 1) contains all 94 video clips and all 1974 scores. The judgment of whether a video clip depicted a given sen
[Figure 2 caption begins here:] The person picked up an object. The person put down an object. Figure 2. Sentence-guided focus of attention: different sets of tracks for the same video clip produced under guidance of different sentences. Here, and in Figs.
3 and 4, the red box denotes the agent, the blue box denotes the patient, the violet box denotes the source, the turquoise box denotes the goal, and the green box denotes the referent. These roles are determined automatically. The backpack to the left of the chair approached the trash can. The person to the left of the trash can put down the chair. Figure 3. Generation of sentential description: constructing the best-scoring sentence for each video clip through a beam search. tence was made using our annotation. We conducted an additional evaluation with this annotation. One can threshold the sentence-tracker score to yield a binary predicate on video-sentence pairs. We performed 4-fold cross validation on our corpus, selecting the threshold for each fold that maximized accuracy of this predicate, relative to the annotation, on 75% of the video clips and evaluating the accuracy with this selected threshold on the remaining 25%. This yielded an average accuracy of 86.88%. 5." + } + ], + "Minyoung Kim": [ + { + "url": "http://arxiv.org/abs/2306.09702v1", + "title": "A Hierarchical Bayesian Model for Deep Few-Shot Meta Learning", + "abstract": "We propose a novel hierarchical Bayesian model for learning with a large\n(possibly infinite) number of tasks/episodes, which suits well the few-shot\nmeta learning problem. We consider episode-wise random variables to model\nepisode-specific target generative processes, where these local random\nvariables are governed by a higher-level global random variate. The global\nvariable helps memorize the important information from historic episodes while\ncontrolling how much the model needs to be adapted to new episodes in a\nprincipled Bayesian manner. Within our model framework, the prediction on a\nnovel episode/task can be seen as a Bayesian inference problem. However, a main\nobstacle in learning with a large/infinite number of local random variables in\nonline nature, is that one is not allowed to store the posterior distribution\nof the current local random variable for frequent future updates, typical in\nconventional variational inference. We need to be able to treat each local\nvariable as a one-time iterate in the optimization. We propose a\nNormal-Inverse-Wishart model, for which we show that this one-time iterate\noptimization becomes feasible due to the approximate closed-form solutions for\nthe local posterior distributions. The resulting algorithm is more attractive\nthan the MAML in that it is not required to maintain computational graphs for\nthe whole gradient optimization steps per episode. Our approach is also\ndifferent from existing Bayesian meta learning methods in that unlike dealing\nwith a single random variable for the whole episodes, our approach has a\nhierarchical structure that allows one-time episodic optimization, desirable\nfor principled Bayesian learning with many/infinite tasks. The code is\navailable at \\url{https://github.com/minyoungkim21/niwmeta}.", + "authors": "Minyoung Kim, Timothy Hospedales", + "published": "2023-06-16", + "updated": "2023-06-16", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG" + ], + "main_content": "Introduction Few-shot learning (FSL) aims to emulate the human ability to learn from few examples [24]. It has received substantial and growing interest [49] due to the need to alleviate the notoriously data intensive nature of mainstream supervised deep learning. 
Approaches to FSL are all based on some kind of knowledge transfer from a set of plentiful source recognition problems to the sparse data target problem of interest. Existing approaches are differentiated in terms of the assumptions they make about what is task agnostic knowledge that can be transferred from the source tasks, and what is task-specific knowledge that should be learned from the sparse target examples. For example, the seminal MAML [10] and ProtoNets [43] respectively assume that the initialization for fine-tuning, or the feature extractor for metric-based recognition should be transferred from source categories. One of the most principled and systematic ways to model such sets of related problems are hierarchical Bayesian models (HBMs) [14]. The HBM paradigm is widely used in statistics, but has seen relatively less use in deep learning and computer vision, due to the technical difficulty of bringing hierarchical Bayesian modelling to bear on deep learning. HBMs provide a powerful way to model a set of related problems, by assuming that each problem has its own parameters (e.g, the neural networks that recognize cat vs dog, or car vs bike), but that those problems share a common prior (the prior over such neural networks). Data-efficient learning of the target tasks is then achieved by inferring the prior based on the source tasks, and using it to enhance learning the posterior over the target task parameters. A Bayesian learning treatment of FSL would be appealing due to the overfitting resistance provided by Bayesian Occam\u2019s razor [27], as well as the ability to improve calibration of inference so that the model\u2019s confidence is reflective of its probability of correctness \u2014 a crucial property in mission critical applications [18]. However the limited attempts that have been made to exploit these tools in deep learning have either been incomplete treatments that only model a single Bayesian layer within the neural network [58, 15], or else fail to scale up to modern neural architectures [11, 55]. In this paper we present the first complete hierarchical Bayesian learning algorithm for few-shot deep learning. Our algorithm efficiently learns a prior1 over neural networks 1Precisely speaking, we have a higher-level random variable \u03d5 shared across episodes, and learning a prior means inferring the posterior \u03d5|{Di} for all episodic training data {Di}. At test time, this posterior serves as a arXiv:2306.09702v1 [cs.LG] 16 Jun 2023 \fduring the meta-train phase, and efficiently learns a posterior neural network during each meta-test episode. Importantly, our learning is architecture independent. It can scale up to state of the art backbones including ViTs [9], and works smoothly with any few-shot learning architecture \u2013 spanning simple linear decoders [10, 43], to those based on sophisticated set-based decoders such as FEAT [53] and CNP[13]/ANP[23]. We show empirically that our HBM provides improved performance and calibration in all of these cases, as well as providing clear theoretical justification. Our analysis also reveals novel links between seminal FSL methods such as ProtoNet [43], MAML [10], and Reptile [33], all of which are different special cases of our framework despite their very different appearance. 
Interestingly, despite its close relatedness to MAML-family algorithms, our Bayesian learner admits an efficient closed-form solution to the task-specific and task-agnostic updates that does not require maintaining the computational graph for reverse-mode backpropagation. This provides a novel solution to a famous meta-learning scalability bottleneck. In summary, our contributions include: (i) The first complete hierarchical Bayesian treatment of the few-shot deep learning problem, and associated theoretical justification. (ii) An efficient algorithmic learning solution that can scale up to modern architectures, and plug into most existing neural FSL meta-learners. (iii) Empirical results demonstrating improved accuracy and calibration performance on both classification and regression benchmarks.

2. Problem Setup

We consider the episodic few-shot learning problem, which can be formally stated as follows. Let $p(\mathcal{T})$ be the (unknown) task/episode distribution, where each task $\mathcal{T} \sim p(\mathcal{T})$ is defined as a distribution $p_{\mathcal{T}}(x, y)$ for data $(x, y)$ where $x$ is input and $y$ is target. By episodic learning, we have a large (possibly infinite) number of episodes during training, $\mathcal{T}_1, \mathcal{T}_2, \ldots, \mathcal{T}_N \sim p(\mathcal{T})$ sampled i.i.d., but we only observe a small number of labeled samples from each episode, denoted by $D_i = \{(x^i_j, y^i_j)\}_{j=1}^{n_i} \sim p_{\mathcal{T}_i}(x, y)$, where $n_i = |D_i|$ is the number of samples in $D_i$. The goal of the learner, after observing the training data $D_1, \ldots, D_N$ from a large number of different tasks, is to build a predictor $p^*(y|x)$ for novel unseen tasks $\mathcal{T}^* \sim p(\mathcal{T})$. We will often abuse the notation, e.g., $i \sim \mathcal{T}$ refers to the episode $i$ sampled, i.e., $D_i \sim p_{\mathcal{T}_i}(x, y)$ where $\mathcal{T}_i \sim p(\mathcal{T})$. At the test time we are allowed to have some hints about the new test task $\mathcal{T}^*$, in the form of a few labeled examples from $\mathcal{T}^*$, also known as the support set (see footnote 2), denoted by $D^* \sim p_{\mathcal{T}^*}(x, y)$.

(Footnote 1, cont.: prior for generating network weights $\theta$ that is specific to each test episode.)
(Footnote 2: For the episodic training data $D_i$, it is common practice to partition it into two labeled sets, support and query, so that we use the support set for adaptation while measuring the quality of the adapted model on the query set to get learning signals. However, we do not explicitly deal with this convention in our derivations, but treat $D_i$ as a whole available training set.)

[Figure 1. Graphical models. (a) Plate view of iid episodes. (b) Individual episode data with input $x$ given and only $p(y|x)$ modeled. (c) Few-shot learning as a probabilistic inference problem (shaded nodes = evidences, red colored nodes = targets to infer). In (c), $D^*$ denotes the support set for the test episode. Note: a large number of (possibly infinitely many) evidences $D_1, D_2, \ldots, D_N, \ldots$]

For ease of exposition and theoretical analysis, we consider infinite episodes ($N \to \infty$) observed during training (of course in practice $N$ is large but finite). In the Bayesian perspective, the goal is to infer the posterior distribution with the large/infinite number of episodic training data as evidence, that is, $p(y|x, D_{1:N})|_{N\to\infty}$. A major computational challenge is that the large/infinite number of tasks/data cannot be stored, hardly replayed or revisited, which implies that any viable learning algorithm has to be online in nature.

3. Main Approach

We introduce two types of latent random variables, $\phi$ and $\{\theta_i\}_{i=1}^{\infty}$. Each $\theta_i$, one for each episode $i$, is deployed as the network weights for modeling the data $D_i$ ($i = 1, \ldots, \infty$).
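A schematic of the generative story just introduced, using a toy diagonal covariance and a linear-regression likelihood purely for concreteness (the real model places $\theta_i$ over full network weights):

```python
import numpy as np

def hierarchical_generative_story(num_episodes, dim, data_per_episode, rng=None):
    """
    Toy sampling of the hierarchical model: a single global variable
    phi = (mu, Sigma) governs every episode, and each episode i draws its own
    weight vector theta_i ~ N(mu, Sigma) that generates D_i. The linear
    regression likelihood here is an assumption made only for illustration.
    """
    rng = np.random.default_rng(rng)
    mu = rng.normal(size=dim)                     # global mean over weights
    sigma = np.abs(rng.normal(size=dim)) * 0.1    # global (diagonal) covariance
    episodes = []
    for _ in range(num_episodes):
        theta_i = rng.normal(mu, np.sqrt(sigma))           # episode-specific weights
        x = rng.normal(size=(data_per_episode, dim))       # inputs are given, not modeled
        y = x @ theta_i + 0.1 * rng.normal(size=data_per_episode)   # p(y|x, theta_i)
        episodes.append((theta_i, x, y))
    return mu, sigma, episodes

mu, sigma, episodes = hierarchical_generative_story(num_episodes=5, dim=8, data_per_episode=10)
```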
Specifically, $D_i$ is generated by $\theta_i$ as in the likelihood model in (2). (Footnote 3: Note that we do not deal with generative modeling of input $x$. Inputs $x$ are always given, and only conditionals $p(y|x)$ are modeled (Fig. 1(b)).) The variable $\phi$ can be viewed as a globally shared variable that is responsible for linking the individual episode-wise parameters $\theta_i$. We assume conditionally independent and identical priors, $p(\{\theta_i\}_i|\phi) = \prod_i p(\theta_i|\phi)$. Thus the prior for the latent variables $(\phi, \{\theta_i\}_{i=1}^{\infty})$ is formed in a hierarchical manner. The model is fully described as:

(Prior) $p(\phi, \theta_{1:\infty}) = p(\phi)\prod_{i=1}^{\infty} p(\theta_i|\phi)$  (1)
(Likelihood) $p(D_i|\theta_i) = \prod_{(x,y)\in D_i} p(y|x, \theta_i)$  (2)

where $p(y|x, \theta_i)$ is a conventional neural network model. See the graphical model in Fig. 1(a) where the iid episodes are governed by a single random variable $\phi$. Given infinitely many episodic data $\{D_i\}_{i=1}^{\infty}$ we infer the posterior, $p(\phi, \theta_{1:\infty}|D_{1:\infty}) \propto p(\phi)\prod_{i=1}^{\infty} p(\theta_i|\phi)\,p(D_i|\theta_i)$, and we adopt variational inference to approximate it. That is, $q(\phi, \theta_{1:\infty}; L) \approx p(\phi, \theta_{1:\infty}|D_{1:\infty})$ where

$q(\phi, \theta_{1:\infty}; L) := q(\phi; L_0)\cdot \lim_{N\to\infty}\prod_{i=1}^{N} q_i(\theta_i; L_i)$,  (3)

where the variational parameters $L$ consist of $L_0$ (parameters for $q(\phi)$) and $\{L_i\}_{i=1}^{\infty}$ (parameters of $q_i(\theta_i)$ for episode $i$). Note that although $\theta_i$'s are independent across episodes under (3), they are differently modeled (note the subscript $i$ in notation $q_i$), reflecting different posterior beliefs originating from heterogeneity of episodic data $D_i$'s.

Normal-Inverse-Wishart model. We consider Normal-Inverse-Wishart (NIW) distributions for the prior and variational posterior. First, the prior is modeled as a conjugate form of Gaussian and NIW. With $\phi = (\mu, \Sigma)$,

$p(\phi) = \mathcal{N}(\mu;\, \mu_0, \lambda_0^{-1}\Sigma)\cdot \mathcal{IW}(\Sigma;\, \Sigma_0, \nu_0)$,  (4)
$p(\theta_i|\phi) = \mathcal{N}(\theta_i;\, \mu, \Sigma)$, $i = 1, \ldots, \infty$,  (5)

where $\Lambda = \{\mu_0, \Sigma_0, \lambda_0, \nu_0\}$ is the parameters of the NIW. We do not need to pay attention to the choice of values for $\Lambda$ since $p(\phi)$ has vanishing effect on the posterior due to the large/infinite number of evidences, as we will see shortly. Next, our choice of the variational density family for $q(\phi)$ is the NIW, mainly because it admits closed-form expressions in the ELBO function due to the conjugacy, allowing one-time episodic optimization, as will be shown:

$q(\phi; L_0) := \mathcal{N}(\mu;\, m_0, l_0^{-1}\Sigma)\cdot \mathcal{IW}(\Sigma;\, V_0, n_0)$.  (6)

So, $L_0 = \{m_0, V_0, l_0, n_0\}$, and we restrict $V_0$ to be diagonal. The density family for $q_i(\theta_i)$'s is chosen as a Gaussian,

$q_i(\theta_i; L_i) = \mathcal{N}(\theta_i;\, m_i, V_i)$.  (7)

Thus $L_i = \{m_i, V_i\}$. Learning (variational inference) amounts to finding $L_0$ and $\{L_i\}_{i=1}^{\infty}$ that makes the approximation $q(\phi, \theta_{1:\infty}; L) \approx p(\phi, \theta_{1:\infty}|D_{1:\infty})$ as tight as possible.
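A minimal sketch of the parameterization in Eqs. 4-7, kept diagonal as in the paper; these containers are our own illustration of what $L_0$ and $L_i$ hold, with the NIW mode of Eq. 17 included since it is reused at meta-test time.

```python
import numpy as np
from dataclasses import dataclass

@dataclass
class GlobalPosterior:
    """q(phi; L0) = N(mu; m0, l0^{-1} Sigma) * IW(Sigma; V0, n0), diagonal V0 (Eq. 6)."""
    m0: np.ndarray            # mean of the Gaussian part
    V0: np.ndarray            # diagonal of the IW scale matrix
    n0: float                 # IW degrees of freedom
    l0: float = np.inf        # its optimal value turns out to be infinity (Eq. 14)

    def mode(self, d):
        """Mode phi* = (mu*, Sigma*) of the NIW, as used at meta-test time (Eq. 17)."""
        return self.m0, self.V0 / (self.n0 + d + 2)

@dataclass
class EpisodePosterior:
    """q_i(theta_i; L_i) = N(theta_i; m_i, V_i) with diagonal V_i (Eq. 7)."""
    m: np.ndarray
    V: np.ndarray             # diagonal covariance

    def sample(self, rng):
        return self.m + np.sqrt(self.V) * rng.normal(size=self.m.shape)

d = 4
L0 = GlobalPosterior(m0=np.zeros(d), V0=np.ones(d), n0=float(d) + 2.0)
print(L0.mode(d))
```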
Variational inference. For the finite case with $N$ episodes, it is straightforward to derive the upper bound of the negative marginal log-likelihood (NMLL) as

$-\log p(D_{1:N}) \le \mathrm{KL}(q(\phi)\,\|\,p(\phi)) + \sum_{i=1}^{N}\Big(\mathbb{E}_{q_i(\theta_i)}[l_i(\theta_i)] + \mathbb{E}_{q(\phi)}\big[\mathrm{KL}(q_i(\theta_i)\,\|\,p(\theta_i|\phi))\big]\Big)$  (8)

where $l_i(\theta_i) = -\log p(D_i|\theta_i)$ is the negative training log-likelihood of $\theta_i$ in episode $i$. As $N \to \infty$, the ultimate objective that we like to minimize is naturally the effective episode-averaged NMLL, that is, $\lim_{N\to\infty} -\frac{1}{N}\log p(D_{1:N})$, whose bound is derived from (8) as:

$\lim_{N\to\infty} \frac{1}{N}\sum_{i=1}^{N}\Big(\mathbb{E}_{q_i(\theta_i)}[l_i(\theta_i)] + \mathbb{E}_{q(\phi)}\big[\mathrm{KL}(q_i(\theta_i)\,\|\,p(\theta_i|\phi))\big]\Big)$

Note that $\frac{1}{N}\mathrm{KL}(q(\phi)\,\|\,p(\phi))$ vanished as $N \to \infty$. Since $\lim_{N\to\infty}\frac{1}{N}\sum_{i=1}^{N} f_i = \mathbb{E}_{i\sim\mathcal{T}}[f_i]$ for any expression $f_i$, the ELBO learning amounts to the following optimization:

$\min_{L_0, \{L_i\}_{i=1}^{\infty}}\ \mathbb{E}_{i\sim\mathcal{T}}\Big[\mathbb{E}_{q_i(\theta_i;L_i)}[l_i(\theta_i)] + \mathbb{E}_{q(\phi;L_0)}\big[\mathrm{KL}(q_i(\theta_i;L_i)\,\|\,p(\theta_i|\phi))\big]\Big]$.  (9)

One-time episodic optimization. Note that (9) is challenging due to the large/infinite number of optimization variables $\{L_i\}_{i=1}^{\infty}$ and the online nature of task sampling $i \sim \mathcal{T}$. Applying conventional SGD would simply fail since each $L_i$ will never be updated more than once. Instead, we tackle it by finding the optimal solutions for $L_i$'s for fixed $L_0$, thus effectively representing the optimal solutions as functions of $L_0$, namely $\{L_i^*(L_0)\}_{i=1}^{\infty}$. Plugging the optimal $L_i^*(L_0)$'s back to (9) leads to the optimization problem over $L_0$ alone. The idea is just like solving: $\min_{x,y} f(x, y) = \min_x f(x, y^*(x))$ where $y^*(x) = \arg\min_y f(x, y)$ with $x$ fixed. Note that when we fix $L_0$ (i.e., fix $q(\phi)$), the objective (9) is completely separable over $i$, and we can optimize individual $i$ independently. More specifically, for each $i \ge 1$,

$\min_{L_i}\ \mathbb{E}_{q_i(\theta_i;L_i)}[l_i(\theta_i)] + \mathbb{E}_{\phi}\big[\mathrm{KL}(q_i(\theta_i;L_i)\,\|\,p(\theta_i|\phi))\big]$  (10)

As the expected KL term in (10) admits a closed form due to NIW-Gaussian conjugacy (Supp. for derivations), we can reduce (10) to the following optimization for $L_i = (m_i, V_i)$:

$L_i^*(L_0) := \arg\min_{m_i, V_i}\Big(\mathbb{E}_{\mathcal{N}(\theta_i;m_i,V_i)}[l_i(\theta_i)] - \frac{1}{2}\log|V_i| + \frac{n_0}{2}(m_i - m_0)^{\top}V_0^{-1}(m_i - m_0) + \frac{n_0}{2}\mathrm{Tr}\big(V_i V_0^{-1}\big)\Big)$,  (11)

with $L_0 = \{m_0, V_0, l_0, n_0\}$ fixed.
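For concreteness, a Monte-Carlo rendering of the reduced per-episode objective (11) with diagonal covariances; the reparameterized estimate of the expected loss is our own choice of estimator, not the authors' code.

```python
import numpy as np

def episodic_objective(m_i, V_i, m0, V0, n0, neg_log_lik, rng, num_mc=4):
    """
    Monte-Carlo estimate of the per-episode objective in Eq. 11 (diagonal V_i, V0):
      E_{N(theta; m_i, V_i)}[ l_i(theta) ] - 0.5 log|V_i|
      + (n0/2) (m_i - m0)^T V0^{-1} (m_i - m0) + (n0/2) Tr(V_i V0^{-1})
    `neg_log_lik(theta)` is the episode's negative training log-likelihood l_i.
    """
    eps = rng.normal(size=(num_mc,) + m_i.shape)       # reparameterization trick
    thetas = m_i + np.sqrt(V_i) * eps
    exp_nll = np.mean([neg_log_lik(t) for t in thetas])
    log_det = np.sum(np.log(V_i))                      # log|V_i| for diagonal V_i
    diff = m_i - m0
    penalty = 0.5 * n0 * (np.sum(diff**2 / V0) + np.sum(V_i / V0))
    return exp_nll - 0.5 * log_det + penalty

rng = np.random.default_rng(0)
d = 4
val = episodic_objective(np.zeros(d), 0.1 * np.ones(d), np.zeros(d), np.ones(d),
                         n0=float(d) + 2, neg_log_lik=lambda t: float(np.sum(t**2)), rng=rng)
```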
Note that this amounts to performing several SGD iterations (skipping a few initial for burn-in), and unlike MAML [10] no computation graph needs to be maintained since (mi, Ai) are constant. Once we have (mi, Ai), the optimization (11) admits the closed-form solution (Supplement for derivations), m\u2217 i (L0) = (Ai + n0V \u22121 0 )\u22121(Aimi + n0V \u22121 0 m0), V \u2217 i (L0) = (Ai + n0V \u22121 0 )\u22121. (13) 4This approach is algorithmically very similar to the stochastic weight averaging (SWA) [22] and follow-up Gaussian fitting (SWAG) [28]. \fAlgorithm 1 Our few-shot meta learning algorithm. Initialize: L0 = {m0, V0, n0} of q(\u03d5; L0) randomly. for episode i = 1, 2, . . . do Perform SGLD iterations on Di to estimate (mi, Ai). Compute the episodic minimizer L\u2217 i (L0) from (13). Update L0 by the gradient of fi(L0) + 1 2gi(L0) as in (14). end for Output: Learned L0. Computation in (13) is cheap since all matrices are diagonal. Final optimization. Plugging (13) back to (9), we have an optimization problem over L0 = {m0, V0, l0, n0} alone, which can be written as (Supplement for full derivations): min L0 Ei\u223cT h fi(L0) + 1 2gi(L0) + d 2l0 i s.t. (14) fi(L0) = E\u03f5\u223cN (0,I) h li \u0010 m\u2217 i (L0) + V \u2217 i (L0)1/2\u03f5 \u0011i , gi(L0) = log |V0| |V \u2217 i (L0)| + n0Tr \u0000V \u2217 i (L0)V \u22121 0 \u0001 + n0 \u0000m\u2217 i (L0)\u2212m0 \u0001\u22a4V \u22121 0 \u0000m\u2217 i (L0)\u2212m0 \u0001 \u2212\u03c8d \u0010n0 2 \u0011 , where \u03c8d(\u00b7) is the multivariate digamma function and d = dim(\u03b8). As l0 only appears in the term d 2l0 , the optimal value is l\u2217 0 =\u221e5. We use SGD to solve (14), repeating the steps: 1) Sample i\u223cT . 2) L0 \u2190L0\u2212\u03b7\u2207L0 \u0010 fi(L0)+ 1 2gi(L0) \u0011 . Note that \u2207L0 \u0000fi(L0) + 1 2gi(L0) \u0001 is an unbiased stochastic estimate for the gradient of the objective Ei\u223cT [\u00b7 \u00b7 \u00b7 ] in (14). Furthermore, our learning algorithm above (also pseudocode in Alg 1) is fully compatible with the online nature of the episodic training. After training, we obtain the learned L0, that is, the posterior q(\u03d5; L0). The learned posterior q(\u03d5; L0) will be used at the meta test time, where we show in Sec. 3.2 that this can be seen as Bayesian inference as well. We emphasize that our framework is completely flexible in the choice of the backbone p(y|x, \u03b8). It could be the popular instance-based network comprised of a feature extractor and a prediction head where the latter can be either a conventional learnable readout head or the parameter-free one like the nearest centroid classifier (NCC) in ProtoNet [43], i.e., p(D|\u03b8)=p(Q|S, \u03b8) where D =S \u222aQ and p(y|x, S, \u03b8) is the NCC prediction with support S. We can also adopt the set-based networks [53, 13, 23] where p(y|x, S, \u03b8) itself is modeled by a neural net y = G(x, S; \u03b8) with input (x, S). 3.1. Interpretation We show that our framework unifies seemingly unrelated seminal FSL algorithms into one perspective. 5This is compatible with the conjugate Gaussian observation case, where the posterior NIW has l0 incremented from the prior\u2019s l0 by the number of observations, which is \u221ein our case. MAML [10] as a special case. Suppose we consider spiky variational densities, i.e., Vi \u21920 (constant). The one-time episodic optimization (11) reduces to: arg minmi li(\u03b8i)+ R(mi) where R(mi) is the quadratic penalty of mi deviating from m0. 
One reasonable solution is to perform a few gradient steps with loss li, starting from m0 to have small penalty (R=0 initially). That is, mi \u2190m0 and a few steps of mi \u2190mi \u2212\u03b1\u2207li(mi) to return m\u2217 i (L0). Plugging this into (14) while disregarding the gi term, leads to the MAML algorithm. Obviously, the main drawback is m\u2217 i (L0) is a function of m0 \u2208L0 via a full computation graph of SGD steps, compared to our lightweight closed forms (13). ProtoNet [43] as a special case. Again with Vi \u21920, if we ignore the negative log-likelihood term in (11), then the optimal solution becomes m\u2217 i (L0) = m0. If we remove the gi term, we can solve (14) by simple gradient descent with \u2207m0(\u2212log p(Di|m0)). We then adopt the NCC head and regard m0 as sole feature extractor parameters, which becomes exactly the ProtoNet update. Reptile [33] as a special case. Instead, if we ignore all penalty terms in (11) and follow our quadratic approximation (12) with Vi \u21920, then m\u2217 i (L0) = mi. It is constant with respect to L0 = (m0, V0, n0), and makes the optimization (14) very simple: the optimal m0 is the average of mi for all tasks i, i.e., m\u2217 0 = Ei\u223cT [mi] (we ignore V0 here). Note that Reptile ultimately finds the exponential smoothing of m(k) i over i \u223cT where m(k) i is the iterate after k SGD steps for task i. This can be seen as an online estimate of Ei\u223cT [mi]. 3.2. Meta Test Prediction as Bayesian Inference At meta test time, we need to be able to predict the target y\u2217of a novel test input x\u2217\u223cT \u2217sampled from the unknown distribution T \u2217\u223cp(T ). In FSL, we have the test support data D\u2217= {(x, y)} \u223cT \u2217. The test-time prediction can be seen as a posterior inference problem with additional evidence of the support data D\u2217(Fig. 1(c)). More specifically, p(y\u2217|x\u2217, D\u2217, D1:\u221e) = Z p(y\u2217|x\u2217, \u03b8) p(\u03b8|D\u2217, D1:\u221e) d\u03b8. So, it boils down to p(\u03b8|D\u2217, D1:\u221e), the posterior given both the test support data D\u2217and the entire training data D1:\u221e. Under our hierarchical model, exploiting conditional independence (Fig. 1(c)), we can link it to our trained q(\u03d5) as: p(\u03b8|D\u2217, D1:\u221e) \u2248 Z p(\u03b8|D\u2217, \u03d5) p(\u03d5|D1:\u221e) d\u03d5 (15) \u2248 Z p(\u03b8|D\u2217, \u03d5) q(\u03d5) d\u03d5 \u2248p(\u03b8|D\u2217, \u03d5\u2217), (16) where in (15) we disregard the impact of D\u2217on the higherlevel \u03d5 given the joint evidence, i.e., p(\u03d5|D\u2217, D1:\u221e) \u2248 p(\u03d5|D1:\u221e), due to dominance of D1:\u221ecompared to smaller D\u2217. The last part of (16) makes approximation using the \fmode \u03d5\u2217of q(\u03d5), where \u03d5\u2217= (\u00b5\u2217, \u03a3\u2217) has a closed form: \u00b5\u2217= m0, \u03a3\u2217= V0 n0 + d + 2. (17) Next, since p(\u03b8|D\u2217, \u03d5\u2217) involves difficult marginalization p(D\u2217|\u03d5\u2217) = R p(D\u2217|\u03b8)p(\u03b8|\u03d5\u2217)d\u03b8, we adopt variational inference, introducing a tractable variational distribution v(\u03b8) \u2248p(\u03b8|D\u2217, \u03d5\u2217). With the Gaussian family as in the training time (7), i.e., v(\u03b8) = N(\u03b8; m, V ) where (m, V ) are the variational parameters optimized by ELBO optimization, min m,V Ev(\u03b8)[\u2212log p(D\u2217|\u03b8)] + KL(v(\u03b8)||p(\u03b8|\u03d5\u2217)). (18) See Supplement for detailed formulas for (18). 
Once we have the optimized model v, our predictive distribution becomes: p(y\u2217|x\u2217, D\u2217, D1:\u221e) \u22481 S MS X s=1 p(y\u2217|x\u2217, \u03b8(s)), \u03b8(s) \u223cv(\u03b8), which simply requires feed-forwarding x\u2217through the sampled networks \u03b8(s) and averaging. Our meta-test algorithm is also summarized in the Supplementary Material. Note that we have test-time backbone update as per (18), which can make the final m deviated from the learned mean m0. Alternatively, if we drop the first term in (18), the optimal v(\u03b8) equals p(\u03b8|\u03d5\u2217) = N(\u03b8; m0, V0/(n0 + d + 2)). This can be seen as using the learned model m0 with some small random perturbation as a test-time backbone \u03b8. 4. Theoretical Analysis Generalization error bounds. We offer two theorems that upper-bound the generalization error of the model that is averaged over the learned posterior q(\u03d5, \u03b81:\u221e). The first theorem relates the generalization error to the ultimate ELBO loss (9) that we minimized in our algorithm. We do this by utilizing the recent PAC-Bayes-\u03bb bound [44, 40], a variant of the traditional PAC-Bayes bounds [31, 25, 42, 30], which circumvents the cumbersome square root or other nonlinear transform of the KL term. The second theorem is based on the recent regression analysis technique [36, 1]. Without loss of generality we assume |Di|=n for all episodes i. We let (q\u2217(\u03d5), {q\u2217 i (\u03b8i)}\u221e i=1) be the optimal solution of (9). We leave the proofs for the two theorems in Supplement. Theorem 4.1 (PAC-Bayes-\u03bb bound). Let Ri(\u03b8) be the generalization error of model \u03b8 for the task i, more specifically, Ri(\u03b8) = E(x,y)\u223cTi[\u2212log p(y|x, \u03b8)]. The following holds with probability 1\u2212\u03b4 for arbitrary small \u03b4 > 0: Ei\u223cT Eq\u2217 i (\u03b8i)[Ri(\u03b8i)] \u22642\u03f5\u2217 n , (19) where \u03f5\u2217is the optimal value of (9). Theorem 4.2 (Bound derived from regression analysis). Let d2 H(P\u03b8i, P i) be the expected squared Hellinger distance between the true distribution P i(y|x) and model\u2019s P\u03b8i(y|x) for task i. Then the following holds with high probability: Ei\u223cT Eq\u2217 i (\u03b8i)[d2 H(P\u03b8i, P i)] \u2264O \u0010 1 n +\u03f52 n+rn \u0011 + \u03bb\u2217, (20) where \u03bb\u2217= Ei\u223cT [\u03bb\u2217 i ], \u03bb\u2217 i = min\u03b8\u2208\u0398 ||E\u03b8[y|\u00b7] \u2212Ei[y|\u00b7]||2 \u221e is the lowest possible regression error within \u0398, and rn, \u03f5n are decreasing sequences vanishing to 0 as n increases. Computational complexity. Although we have introduced a principled Bayesian model/framework for FSL with solid theoretical support, the extra steps introduced in our training/test algorithms appear to be more complicated than simple feed-forward workflows. To this end, we have analyzed the time complexity of the proposed algorithm contrasted with ProtoNet [43]. For fair comparison, our approach adopts the same NCC head on top of the feature space as ProtoNet. Please find the details in the Supplement Material. Despite seemingly increased complexity in the training/test algorithms, our method incurs only constantfactor overhead compared to the minimal-cost ProtoNet. 5. Related Work Due to the limited space it is overwhelming to review all general FSL and meta learning algorithms here. We refer the readers to the excellent comprehensive surveys [20, 50] on the latest techniques. We rather focus on discussing recent Bayesian approaches and relation to ours. 
Although several Bayesian FSL approaches have been proposed before, most of them dealt with only a small fraction of the network weights (e.g., a readout head alone) as random variables [13, 23, 39, 15, 35, 58]. This considerably limits the benefits from uncertainty modeling of full network parameters. Bayesian approaches to MAML [11, 55, 38, 32] are popular probabilistic extensions of the gradient-based adaptation in MAML [10] with known theoretical support [7]. But we find that they are weak in several aspects to be considered as principled Bayesian methods. For instance, Probabilistic MAML (PMAML or PLATIPUS) [11, 16] has a similar hierarchical graphical model structure as ours, but their learning algorithm is considerably deviated from the original variational inference objective. Unlike the original derivation of the KL term measuring the divergence between the posterior and prior on the task-specific variable \u03b8i, namely Eq(\u03d5)[KL(qi(\u03b8i|\u03d5)||p(\u03b8i|\u03d5))] as in (8), in PMAML they measure the divergence on the global variable \u03d5, aiming to align the two adapted models, one from the support data only q(\u03d5|Si) and the other from both support and query q(\u03d5|Si, Qi). VAMPIRE [32] incorporates uncertainty modeling to MAML by extending MAML\u2019s point estimate to a distributional one that is learned by variational inference. However, it inherits all computational overheads from \fModel Backbone 1-Shot 5-Shot MAML [10] Conv-4 48.70 \u00b1 1.84 63.11 \u00b1 0.92 MetaQDA [58] Conv-4 56.41 \u00b1 0.80 72.64 \u00b1 0.62 NIW-Meta (Ours) Conv-4 56.84 \u00b1 0.76 56.84 \u00b1 0.76 56.84 \u00b1 0.76 72.93 \u00b1 0.53 72.93 \u00b1 0.53 72.93 \u00b1 0.53 ProtoNet [43] ResNet-18 54.16 \u00b1 0.82 73.68 \u00b1 0.65 AM3 [52] ResNet-12 65.21 \u00b1 0.49 75.20 \u00b1 0.36 R2D2 [2] ResNet-12 59.38 \u00b1 0.31 78.15 \u00b1 0.24 RelationNet2 [59] ResNet-12 63.92 \u00b1 0.98 77.15 \u00b1 0.59 MetaOpt [26] ResNet-12 64.09 \u00b1 0.62 80.00 \u00b1 0.45 SimpleShot [48] ResNet-18 62.85 \u00b1 0.20 80.02 \u00b1 0.14 S2M2 [29] ResNet-18 64.06 \u00b1 0.18 80.58 \u00b1 0.12 MetaQDA [58] ResNet-18 65.12 \u00b1 0.66 80.98 \u00b1 0.75 NIW-Meta (Ours) ResNet-18 65.49 \u00b1 0.56 65.49 \u00b1 0.56 65.49 \u00b1 0.56 81.71 \u00b1 0.17 81.71 \u00b1 0.17 81.71 \u00b1 0.17 SimpleShot [48] WRN-28-10 63.50 \u00b1 0.20 80.33 \u00b1 0.14 S2M2 [29] WRN-28-10 64.93 \u00b1 0.18 83.18 \u00b1 0.22 MetaQDA [58] WRN-28-10 67.83 \u00b1 0.64 84.28 \u00b1 0.69 NIW-Meta (Ours) WRN-28-10 68.54 \u00b1 0.26 68.54 \u00b1 0.26 68.54 \u00b1 0.26 84.81 \u00b1 0.28 84.81 \u00b1 0.28 84.81 \u00b1 0.28 Table 1. Results with standard backbones on miniImageNet. MAML, hindering scalability. The BMAML [55] is not a hierarchical Bayesian model, but aims to replace MAML\u2019s gradient-based deterministic adaptation steps by the stochastic counterpart using the samples (called particles) from p(\u03b8i|Si), thus adopting stochastic ensemble-based adaptation steps. If we use a single particle instead, it reduces exactly to MAML. Thus existing Bayesian approaches are not directly related to our hierarchical Bayesian perspective. 6. Evaluation We perform empirical study to demonstrate the superior performance of the proposed Bayesian few-shot learning algorithm dubbed NIW-Meta to the state-of-the-arts. 6.1. Few-shot Classification Standard benchmarks with ResNet backbones. 
For standard benchmark comparison using the popular ResNet backbones, in particular ResNet-18 [19] and WideResNet [57], we test our method on: miniImagenet (Table 1) and tieredImageNet (Table 2). We follow the standard protocols (details of experimental settings in Supplement). Our NIW-Meta exhibits consistent improvement over the SOTAs for different settings in support set size and backbones. Large-scale ViT backbones. We also test our method on the large-scale (pretrained) ViT backbones DINO-small (Dino/s) and DINO-base (DINO/b) [6], similarly as the setup in [21]. We summarize in Table 3 the results on the three benchmarks: miniImagenet, CIFAR-FS, and tieredImageNet. Our NIW-Meta adopts the same NCC head as ProtoNet after the ViT feature extractor. As claimed in [21], using the pretrained feature extractor and further finetuning it significantly boost the performance of few-shot learning algorithms including ours. Among the competing methods, our approach yields the highest accuracy for most cases. In particular, compared to the shallow Bayesian MetaQDA [58], treating Model Backbone 1-Shot 5-Shot MAML [10] Conv-4 51.67 \u00b1 1.81 70.30 \u00b1 1.75 ProtoNet [43] Conv-4 53.31 \u00b1 0.89 72.69 \u00b1 0.74 RelationNet2 [59] Conv-4 60.58 \u00b1 0.72 60.58 \u00b1 0.72 60.58 \u00b1 0.72 72.42 \u00b1 0.69 MetaQDA [58] Conv-4 58.11 \u00b1 0.48 74.28 \u00b1 0.73 NIW-Meta (Ours) Conv-4 58.82 \u00b1 0.91 74.86 \u00b1 0.70 74.86 \u00b1 0.70 74.86 \u00b1 0.70 TapNet [56] ResNet-12 63.08 \u00b1 0.15 80.26 \u00b1 0.12 RelationNet2 [59] ResNet-12 68.58 \u00b1 0.63 80.65 \u00b1 0.91 MetaOpt [26] ResNet-12 65.81 \u00b1 0.74 81.75 \u00b1 0.53 SimpleShot [48] ResNet-18 69.09 \u00b1 0.22 84.58 \u00b1 0.16 MetaQDA [58] ResNet-18 69.97 \u00b1 0.52 85.51 \u00b1 0.58 NIW-Meta (Ours) ResNet-18 70.52 \u00b1 0.19 70.52 \u00b1 0.19 70.52 \u00b1 0.19 85.83 \u00b1 0.17 85.83 \u00b1 0.17 85.83 \u00b1 0.17 LEO [41] WRN-28-10 66.33 \u00b1 0.05 81.44 \u00b1 0.09 SimpleShot [48] WRN-28-10 69.75 \u00b1 0.20 85.31 \u00b1 0.15 S2M2 [29] WRN-28-10 73.71 \u00b1 0.22 88.59 \u00b1 0.14 MetaQDA [58] WRN-28-10 74.33 \u00b1 0.65 89.56 \u00b1 0.79 NIW-Meta (Ours) WRN-28-10 74.59 \u00b1 0.33 74.59 \u00b1 0.33 74.59 \u00b1 0.33 89.76 \u00b1 0.23 89.76 \u00b1 0.23 89.76 \u00b1 0.23 Table 2. Results with standard backbones on tieredImageNet. Model Backbone miniImageNet CIFAR-FS tieredImageNet / Pretrain 1-shot 5-shot 1-shot 5-shot 1-shot 5-shot ProtoNet [43] DINO/s 93.1 98.0 81.1 92.5 89.0 95.8 MetaOpt [26] DINO/s 92.2 97.8 70.2 84.1 87.5 94.7 MetaQDA [58] DINO/s 92.0 97.0 77.2 90.1 87.8 95.6 NIW-Meta (Ours) DINO/s 93.4 93.4 93.4 98.2 98.2 98.2 82.8 82.8 82.8 92.9 92.9 92.9 89.3 89.3 89.3 96.0 96.0 96.0 ProtoNet [43] DINO/b 95.3 98.4 84.3 92.2 91.2 96.5 MetaOpt [26] DINO/b 94.4 98.4 72.0 86.2 89.5 95.7 MetaQDA [58] DINO/b 94.7 98.7 98.7 98.7 80.9 93.8 93.8 93.8 89.7 96.5 NIW-Meta (Ours) DINO/b 95.5 95.5 95.5 98.7 98.7 98.7 84.7 84.7 84.7 93.2 91.4 91.4 91.4 96.7 96.7 96.7 Table 3. Classification results with large-scale ViT backbones. Model miniImageNet tieredImageNet 1-shot 5-shot 1-shot 5-shot FEAT [53] 66.78 82.05 70.80\u00b10.23 84.79\u00b10.16 NIW-Meta (Ours) 66.91\u00b10.10 66.91\u00b10.10 66.91\u00b10.10 82.28\u00b10.15 82.28\u00b10.15 82.28\u00b10.15 70.93\u00b10.27 70.93\u00b10.27 70.93\u00b10.27 85.20\u00b10.19 85.20\u00b10.19 85.20\u00b10.19 Table 4. Comparison between FEAT [53] and our method equipped with the same set-based architecture as FEAT. 
all network weights as random variates in our model turns out to be more effective than the readout parameters alone. Set-based adaptation backbones. We also conduct experiments using the set-based adaptation architecture called FEAT introduced in [53]. The network is tailored for fewshot adaptation, namely yQ = G(xQ, S; \u03b8) where the network G takes the entire support set S and query image xQ as input. Note that our NIW-Meta can incorporate any network architecture, even the set-based one like FEAT. As shown in Table 4, the Bayesian treatment leads to further improvement over [53] with this set-based architecture. Error calibration. One of the key merits of Bayesian modeling is that we have a better calibrated model than deterministic counterparts. We measure the expected calibration errors (ECE) [18] to judge how well the prediction accuracy and the prediction confidence are aligned. More specifically, ECE = PB b=1 Nb N |acc(b)\u2212conf(b)| where we \fModel Backbone ECE ECE+TS 1-shot 5-shot 1-shot 5-shot Linear classifier Conv-4 8.54 7.48 3.56 2.88 SimpleShot [48] Conv-4 33.45 45.81 3.82 3.35 MetaQDA-MAP [58] Conv-4 8.03 5.27 2.75 0.89 MetaQDA-FB [58] Conv-4 4.32 2.92 2.33 0.45 NIW-Meta (Ours) Conv-4 2.68 2.68 2.68 1.88 1.88 1.88 1.47 1.47 1.47 0.32 0.32 0.32 SimpleShot [48] WRN-28-10 39.56 55.68 4.05 1.80 S2M2+Linear [29] WRN-28-10 33.23 36.84 4.93 2.31 MetaQDA-MAP [58] WRN-28-10 31.17 17.37 3.94 0.94 MetaQDA-FB [58] WRN-28-10 30.68 15.86 2.71 0.74 NIW-Meta (Ours) WRN-28-10 10.79 10.79 10.79 7.11 7.11 7.11 2.03 2.03 2.03 0.65 0.65 0.65 Table 5. Expected calibration errors (ECE) on miniImageNet. \u201cECE+TS\u201d indicates extra tuning of the temperature hyperparameter (default = 1.0) in the logit-softmax transformation. partition test instances into B bins along the model\u2019s prediction confidence scores, and conf(b), acc(b) are the average confidence and accuracy for the b-th bin, respectively. The results on miniImageNet with Conv-4 and WRN backbones are shown in Table 5. We used 20 bins and optionally performed the softmax temperature search on validation sets, similarly as [58]. Again, Bayesian inference of whole network weights in our NIW-Meta leads to a far better calibrated model than the shallow counterpart Meta-QDA [58]. 6.2. Few-shot Regression Sine-Line dataset [11]. It consists of 1D (x, y) pairs randomly generated by either linear or sine curves with different scales/slopes/frequencies/phases. For the episodic few-shot learning setup, we follow the standard protocol: each episode is comprised of k = 5-shot support and 45 query samples randomly drawn from a random curve (regarded as a task). To deal with real-valued targets, we adopt the so-called RidgeNet, which has a parameter-free readout head derived from the support data via (closed-form) estimation of the linear coefficient matrix using the ridge regression. It is analogous to the ProtoNet [43] in classification which has a parameter-free head derived from NCC on support data. A similar model was introduced in [2] but mainly repurposed for classification. We find that RidgeNet leads to much more accurate prediction than the conventional trainable linear head. For instance, the test errors are: RidgeNet = 0.82 vs. MAML with linear head = 1.86. Furthermore, we adopt the ridge head in other models as well, such as MAML, PMAML [11], and our NIW-Meta. See Table 6 for the mean squared errors contrasting our NIW-Meta against competing meta learning methods. 
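The parameter-free ridge head mentioned above admits a one-line closed form; the sketch below is our own illustration (the feature names are placeholders) of how the coefficient matrix is estimated from the support set and applied to a query, with the L2 coefficient lam as a hyperparameter (0.1 in the Sine-Line experiments).

import torch

def ridge_head(feat_support, y_support, feat_query, lam=0.1):
    # Closed-form ridge readout estimated from the support set only:
    #   W = (F_s^T F_s + lam I)^{-1} F_s^T Y_s,   y_hat = F_q W
    # feat_*: [n, d] backbone features, y_support: [n, out_dim] real-valued targets.
    d = feat_support.shape[1]
    A = feat_support.T @ feat_support + lam * torch.eye(d, device=feat_support.device)
    B = feat_support.T @ y_support
    W = torch.linalg.solve(A, B)
    return feat_query @ W

Since the linear solve is differentiable, gradients can flow back into the feature extractor during episodic training, analogous to the parameter-free NCC head used for classification.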
The table also contains the regression-ECE (R-ECE) calibration errors6 for 6The definition of the R-ECE is quite different from that of the classification ECE in Sec. 6.1. We follow the notion of goodness of cumulative distribution matching used in [46, 8]. Specifically, denoting by \u02c6 Qp(x) the p-th quantile of the predicted distribution \u02c6 p(y|x), we measure the deviation of ptrue(y \u2264\u02c6 Qp(x)|x) from p by absolute difference. So it is 0 for the Model Mean squared error R-ECE RidgeNet 0.8210 N/A MAML (1-step) [10] 0.8206 N/A MAML (5-step) [10] 0.8309 N/A PMAML (1-step) [11] 0.9160 0.2666 NIW-Meta (Ours) 0.7822 0.7822 0.7822 0.1728 0.1728 0.1728 Table 6. Few-shot regression results on the Sine-Line dataset. All methods here adopt the (parameter-free) ridge regression head with L2 regularization coefficient \u03bb=0.1, which is significantly more accurate than conventional linear trainable head. PMAML with 5 inner steps incurred numerical errors. the Bayesian methods, PMAML [11] and ours, which clearly shows that our model is better calibrated. Object pose estimation on ShapeNet datasets. We consider the recent few-shot regression benchmarks [12, 54] which introduced four datasets for object pose estimation: Pascal-1D, ShapeNet-1D, ShapeNet-2D, and Distractor. In all datasets, the main goal is to estimate the pose (positions in pixel and/or rotation angles) of the target object in an image. Each episode is specified by: i) selecting a target object randomly sampled from a pool of objects with different object categories, and ii) rendering the same object in an image with several different random poses (position/rotation) to generate data instances. There are k support samples (input images and target pose labels) and kq query samples for each episode. For ShapeNet-1D, for instance, k is randomly chosen from 3 to 15 while kq = 15. Pascal-1D and ShapeNet-1D are relatively easier datasets than the rest two as we have uniform noise-free backgrounds. On the other hand, to make the few-shot learning problem more challenging, ShapeNet-2D and Distractor datasets further introduce random (real-world) background images and/or so called the distractors which are objects randomly drawn and rendered that have nothing to do with the target pose to estimate. Except for Pascal-1D, some object categories are dedicated solely for meta testing and not revealed during training, thus yielding two different test scenarios: intra-category (IC) and cross-category (CC), in which the test object categories are seen and unseen, respectively. In [12], they test different augmentation strategies in their baselines: conventional data augmentation on input images (denoted by DA), task augmentation (TA) [37] which adds random noise to the target labels to help reducing the memorization issue [54], and domain randomization (DR) [45] which randomly generates background images during training. Among several possible combinations reported in [12], we follow the strategies that perform the best. For the target error metrics (e.g., position Euclidean distances in pixels for Distractor, rotation angle differences for ShapeNet-1D), we follow the metrics used in [12]. For instance, the quaternion ideal case \u02c6 p(y|x) = ptrue(y|x). We use empirical CDF estimates and equal-size binning (20 bins) for p \u2208[0, 1] values. Note that by definition we can only measure R-ECE for models with probabilistic output \u02c6 p(y|x). 
\fModel Pascal-1D ShapeNet-1D Intra-category Cross-category MAML 1.02 \u00b1 0.06 17.96 18.79 CNP [13] 1.98 \u00b1 0.22 7.66 \u00b1 0.18 8.66 \u00b1 0.19 ANP [23] 1.36 \u00b1 0.25 5.81 \u00b1 0.23 6.23 \u00b1 0.12 NIW-Meta w/ C+R 0.89 \u00b1 0.06 0.89 \u00b1 0.06 0.89 \u00b1 0.06 5.62 \u00b1 0.38 6.57 \u00b1 0.39 NIW-Meta w/ CNP 0.94 \u00b1 0.15 5.74 \u00b1 0.17 6.91 \u00b1 0.18 NIW-Meta w/ ANP 0.95 \u00b1 0.09 5.47 \u00b1 0.12 5.47 \u00b1 0.12 5.47 \u00b1 0.12 6.06 \u00b1 0.18 6.06 \u00b1 0.18 6.06 \u00b1 0.18 Table 7. Pose estimation test errors for Pascal-1D and ShapeNet-1D. The mean squared errors in rotation angle differences. Our method NIW-Meta is equipped with three different backbones: C+R = a Conv-net feature extractor with the Ridge head, CNP, and ANP. Augmentation: TA for Pascal-1D and TA+DA for ShapeNet-1D. metric may sound reasonable in ShapeNet-2D due to the non-uniform, non-symmetric structures that reside in the target space (3D rotation angles). The results are summarized in Table 7 (easier datasets; Pascal-1D and ShapeNet-1D) and Table 8 (harder ones; ShapeNet-2D and Distractor). In [12], they have shown that the set-based backbone networks, especially the Conditional Neural Process (CNP) [13] and Attentive Neural Process (ANP) [23] outperform the conventional architectures of the conv-net feature extractor with the linear head that are adapted by MAML [10] (except for the Pascal-1D case). Motivated by this, we adopt the same set-based CNP/ANP architectures within our NIW-Meta. In addition, we also test the ridge-head model with the conv-net feature extractor (denoted by C+R). Two additional competing models contrasted here are: the Bayesian context aggregation in CNP (CNP+BA) [47] and the use of the functional contrastive learning loss as extra regularization (FCL) [12]. For the easier datasets (Table 7), there is a dataset regime where MAML clearly outperforms (Pascal-1D) and underperforms (ShapeNet-1D) the CNP/ANP architectures. Very promisingly, our NIW-Meta consistently performs the best for both datasets, regardless of the choice of the architectures: not just CNP/ANP but also conv-net feature extractor + ridge head (C+R). For the harder datasets (Table 8) where MAML is not reported due to the known computational issues and poor performance, our NIW-Meta still exhibits the best test performance with CNP/ANP architectures. Unfortunately, the conv-net + ridge head (C+R) did not work well, and our conjecture is that the presence of heavy noise and distractors in the input data requires more sophisticated modeling of interaction/relation among the input instances, as is mainly aimed (and successfully done) by CNP/ANP. 6.3. Memory Footprints and Running Times We claimed in the paper that one of the main drawbacks of MAML [10] is the computational overhead to keep track of a large computational graph for inner gradient descent steps. 
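As a practical aside, peak-memory and per-episode timing comparisons of the kind reported in this subsection can be collected with PyTorch's built-in counters; the sketch below assumes a CUDA device and a hypothetical train_one_episode callable standing in for one MAML or NIW-Meta update.

import time
import torch

def profile_episode(train_one_episode, episode):
    # Returns (peak GPU memory in bytes, wall-clock seconds) for a single episodic update.
    torch.cuda.reset_peak_memory_stats()
    torch.cuda.synchronize()
    start = time.perf_counter()
    train_one_episode(episode)
    torch.cuda.synchronize()
    return torch.cuda.max_memory_allocated(), time.perf_counter() - start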
Unlike MAML, our NIW-Meta has a much more efficient episodic optimization strategy, i.e., our one-time optimization only computes the (constant) first/second-order moment Model ShapeNet-2D Distractor IC CC IC CC CNP [13] 14.20\u00b10.06 13.56\u00b10.28 2.45 3.75 CNP+BA [47] 14.16\u00b10.08 13.56\u00b10.18 2.44 3.97 CNP+FCL [12] \u2212 \u2212 2.00 3.05 ANP [23] 14.12\u00b10.14 13.59\u00b10.10 2.65 4.08 ANP+FCL [12] 14.01\u00b10.09 13.32\u00b10.18 \u2212 \u2212 NIW-Meta w/ C+R 21.25\u00b10.76 20.82\u00b10.43 8.90\u00b10.26 17.31\u00b10.38 NIW-Meta w/ CNP 13.86\u00b10.20 13.04\u00b10.13 1.80\u00b10.01 1.80\u00b10.01 1.80\u00b10.01 2.94\u00b10.14 2.94\u00b10.14 2.94\u00b10.14 NIW-Meta w/ ANP 13.74\u00b10.30 13.74\u00b10.30 13.74\u00b10.30 12.95\u00b10.48 12.95\u00b10.48 12.95\u00b10.48 3.10\u00b10.48 5.20\u00b10.88 Table 8. Pose estimation test errors for ShapeNet-2D and Distractor. Quaternion differences \u00d710\u22122 (ShapeNet-2D) and pixel errors (Distractor). The same interpretation as Table 7. Augmentation: TA+DA+DR for ShapeNet-2D and DA for Distractor. 1 2 3 4 5 Number of inner steps in MAML 3500 4000 4500 5000 5500 6000 GPU memory footprint (MB) miniImageNet NIW-Meta (Ours) MAML 1 2 3 4 5 Number of inner steps in MAML 40 60 80 100 120 140 160 180 Per-episode training time (msec) miniImageNet NIW-Meta (#SGLD=2) NIW-Meta (#SGLD=5) MAML Figure 2. Computational complexity of MAML [10] and our NIWMeta. (Left) GPU memory footprints (in MB) for a single batch. (Right) Per-episode training times (in milliseconds). statistics of the episodic loss function without storing the full optimization trace. To verify this, we measure and compare the memory footprints and running times of MAML and NIW-Meta on two real-world classification/regression datasets: miniImageNet 1-shot with the ResNet-18 backbone and ShapeNet-1D with the conv-net backbone. The results in Fig. 2 (ShapeNet-1D in Supp.) show that NIW-Meta has far lower memory requirement than MAML (even smaller than 1-inner-step MAML) while MAML suffers from heavy use of memory space, nearly linearly increasing as the number of inner steps. The running times of our NIW-Meta are not prohibitively larger compared to MAML where the main computational bottleneck is the SGLD iterations for quadratic approximation of the one-time episodic optimization. We tested two scenarios with the number of SGLD iterations 2 and 5, and we have nearly the same (or even better) training speed as the 1-inner-step MAML. 7." + }, + { + "url": "http://arxiv.org/abs/2305.04979v1", + "title": "FedHB: Hierarchical Bayesian Federated Learning", + "abstract": "We propose a novel hierarchical Bayesian approach to Federated Learning (FL),\nwhere our model reasonably describes the generative process of clients' local\ndata via hierarchical Bayesian modeling: constituting random variables of local\nmodels for clients that are governed by a higher-level global variate.\nInterestingly, the variational inference in our Bayesian model leads to an\noptimisation problem whose block-coordinate descent solution becomes a\ndistributed algorithm that is separable over clients and allows them not to\nreveal their own private data at all, thus fully compatible with FL. 
We also\nhighlight that our block-coordinate algorithm has particular forms that subsume\nthe well-known FL algorithms including Fed-Avg and Fed-Prox as special cases.\nBeyond introducing novel modeling and derivations, we also offer convergence\nanalysis showing that our block-coordinate FL algorithm converges to an (local)\noptimum of the objective at the rate of $O(1/\\sqrt{t})$, the same rate as\nregular (centralised) SGD, as well as the generalisation error analysis where\nwe prove that the test error of our model on unseen data is guaranteed to\nvanish as we increase the training data size, thus asymptotically optimal.", + "authors": "Minyoung Kim, Timothy Hospedales", + "published": "2023-05-08", + "updated": "2023-05-08", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "cs.DC", + "stat.ML" + ], + "main_content": "Introduction Federated Learning (FL) aims to enable a set of clients to collaboratively train a model in a privacy preserving manner, without sharing data with each other or a central server. Compared to conventional centralised optimisation problems, FL comes with a host of statistical and systems challenges \u2013 such as communication bottlenecks and sporadic participation. The key statistical challenge is non-i.i.d. data distributions across clients, each of which has a different data collection bias and potentially a different data annotation policy/labeling function (e.g., in the case of any user preference learning). The classic and most popularly deployed FL algorithms are FedAvg [43] and FedProx [33], however, even when a global model can be learned, it often underperforms on each client\u2019s local data distribution in scenarios of high heterogeneity [35, 29, 55]. Studies have attempted to alleviate this by personalising learning at each client, allowing each local model to deviate from the shared global model [51]. However, this remains challenging given that each client may have a limited amount of local data for personalised learning. These challenges have motivated several attempts to model the FL problem from a Bayesian perspective. Introducing distributions on model parameters \u03b8 has enabled various schemes for estimating a global model posterior p(\u03b8|D1:N) from clients\u2019 local posteriors p(\u03b8|Di), or to regularise the learning of local models given a prior de\ufb01ned by the global model [59, 4, 15]. However, these methods are not complete and principled solutions \u2013 having not yet provided full Bayesian descriptions of the FL problem, and having had resort to ad-hoc treatments to achieve tractable learning. The key difference is that they fundamentally treat network weights \u03b8 as a random variable shared across all clients. We introduce a hierarchical Bayesian model that assigns each client it\u2019s own random variable for model weights \u03b8i, and these are linked via a higher level random variable \u03c6 as p(\u03b81:N, \u03c6) = p(\u03c6) QN i=1 p(\u03b8i|\u03c6). This has several crucial bene\ufb01ts: Firstly, given this hierarchy, variational inference in our framework decomposes into separable optimisation problems over \u03b8is Preprint. Under review. arXiv:2305.04979v1 [cs.LG] 8 May 2023 \f(a) Overall model (b) Individual client (c) Global prediction (d) Personalisation Figure 1: Graphical models. (a) Plate view of iid clients. (b) Individual client data with input images x given and only p(y|x) modeled. 
(c) & (d): Global prediction and personalisation as probabilistic inference problems (shaded nodes = evidences, red colored nodes = targets to infer, x\u2217= test input in global prediction, Dp = training data for personalisation and xp = test input). and \u03c6, enabling a practical Bayesian learning algorithm to be derived that is fully compatible with FL constraints, without resorting to ad-hoc treatments or strong assumptions. Secondly, this framework can be instantiated with different assumptions on p(\u03b8i|\u03c6) to deal elegantly and robustly with different kinds of statistical heterogeneity, as well as for principled and effective model personalisation. Our resulting algorithm, termed Federated Hierarchical Bayes (FedHB) is empirically effective, as we demonstrate in a wide range of experiments on established benchmarks. More importantly, it bene\ufb01ts from rigorous theoretical support. In particular, we provide convergence guarantees showing that FedHB has the same O(1/ \u221a T) convergence rate as centralised SGD algorithms, which are not provided by related prior art [59, 15]. We also provide a generalisation bound showing that FedHB is asymptotically optimal, which has not been shown by prior work such as [4]. Furthermore we show that FedHB subsumes classic methods FedAvg [43] and FedProx [33] as special cases, and ultimately provides additional justi\ufb01cation and explanation for these seminal methods. 2 Bayesian FL: General Framework We introduce two types of latent random variables, \u03c6 and {\u03b8i}N i=1. Each \u03b8i is deployed as the network weights for client i\u2019s backbone. The variable \u03c6 can be viewed as a globally shared variable that is responsible for linking the individual client parameters \u03b8i. We assume conditionally independent and identical priors, p(\u03b81:N|\u03c6) = QN i=1 p(\u03b8i|\u03c6). Thus the prior for the latent variables (\u03c6, {\u03b8i}N i=1) is formed in a hierarchical manner as (1). The local data for client i, denoted by Di, is generated1 by \u03b8i, (Prior) p(\u03c6, \u03b81:N) = p(\u03c6) N Y i=1 p(\u03b8i|\u03c6) (Likelihood) p(Di|\u03b8i) = Y (x,y)\u2208Di p(y|x, \u03b8i), (1) where p(y|x, \u03b8i) is a conventional neural network model (e.g., softmax link for classi\ufb01cation tasks). See the graphical model in Fig. 1(a) where the iid clients are governed by a single random variable \u03c6. Given the data D1, . . . , DN, we infer the posterior, p(\u03c6, \u03b81:N|D1:N) \u221dp(\u03c6) QN i=1 p(\u03b8i|\u03c6)p(Di|\u03b8i), which is intractable in general, and we adopt the variational inference to approximate it: q(\u03c6, \u03b81:N; L) := q(\u03c6; L0) N Y i=1 qi(\u03b8i; Li), (2) where the variational parameters L consists of L0 (parameters for q(\u03c6)) and {Li}N i=1\u2019s (parameters for qi(\u03b8i)\u2019s from individual clients). Note that although \u03b8i\u2019s are independent across clients under (2), they are differently modeled (emphasised by the subscript i in notation qi), re\ufb02ecting different posterior beliefs originating from heterogeneity of local data Di\u2019s. 2.1 From Variational Inference to Federated Learning Algorithm Using the standard variational inference techniques [10, 30], we can derive the ELBO objective function (details in Appendix A). 
We denote the negative ELBO by L (to be minimised over L): L(L) := N X i=1 \u0010 Eqi(\u03b8i)[\u2212log p(Di|\u03b8i)] + Eq(\u03c6) \u0002 KL(qi(\u03b8i)||p(\u03b8i|\u03c6)) \u0003\u0011 + KL(q(\u03c6)||p(\u03c6)), (3) 1Note that we do not deal with generative modeling of input images x. Inputs x are always given, and only conditionals p(y|x) are modeled. See Fig. 1(b) for the in-depth graphical model diagram. 2 \fwhere we drop the dependency on L in notation for simplicity. Instead of optimizing (3) over the parameters L jointly as usual practice, we consider block-wise optimisation, also known as block-coordinate optimisation [56], speci\ufb01cally alternating two steps: (i) updating/optimizing all Li\u2019s i = 1, . . . , N while \ufb01xing L0, and (ii) updating L0 with all Li\u2019s \ufb01xed. That is, \u2022 Optimisation over L1, . . . , LN (L0 \ufb01xed). min {Li}N i=1 N X i=1 \u0010 Eqi(\u03b8i)[\u2212log p(Di|\u03b8i)] + Eq(\u03c6) \u0002 KL(qi(\u03b8i)||p(\u03b8i|\u03c6)) \u0003\u0011 . (4) As (4) is completely separable over i, and we can optimise each summand independently as: min Li Li(Li) := Eqi(\u03b8i;Li)[\u2212log p(Di|\u03b8i)] + Eq(\u03c6;L0) \u0002 KL(qi(\u03b8i; Li)||p(\u03b8i|\u03c6)) \u0003 . (5) So (5) constitutes local update/optimisation for client i. Note that each client i needs to access its private data Di only without data from others, thus fully compatible with FL. \u2022 Optimisation over L0 (L1, . . . , LN \ufb01xed). min L0 L0(L0) := KL(q(\u03c6; L0)||p(\u03c6)) \u2212 N X i=1 Eq(\u03c6;L0)qi(\u03b8i;Li)[log p(\u03b8i|\u03c6)]. (6) This constitutes server update criteria while the latest qi(\u03b8i; Li)\u2019s from local clients being \ufb01xed. Remarkably, the server needs not access any local data at all, suitable for FL. This nice property originates from the independence assumption in our approximate posterior (2). Interpretation. First, server\u2019s loss function (6) tells us that the server needs to update q(\u03c6; L0) in such a way that (i) it puts mass on those \u03c6 that have high compatibility scores log p(\u03b8i|\u03c6) with the current local models \u03b8i \u223cqi(\u03b8i), thus aiming to be aligned with local models, and (ii) it does not deviate from the prior p(\u03c6). Clients\u2019 loss function (5) indicates that each client i needs to minimise the class prediction error on its own data Di (\ufb01rst term), and at the same time, to stay close to the current global standard \u03c6 \u223cq(\u03c6) by reducing the KL divergence from p(\u03b8i|\u03c6) (second term). 2.2 Formalisation of Global Prediction and Personalisation Tasks Two important tasks in FL are: global prediction and personalisation. The former evaluates the trained model on novel test data sampled from a distribution possibly different from training data. Personalisation is the task of adapting the trained model on a new dataset called personalised data. In our Bayesian model, these two tasks can be formally de\ufb01ned as Bayesian inference problems. Global prediction. The task is to predict the class label of a novel test input x\u2217which may or may not come from the same distributions as the training data D1, . . . DN. Under our Bayesian model, it can be turned into a probabilistic inference problem p(y\u2217|x\u2217, D1:N). Let \u03b8 be the local model that generates the output y\u2217given x\u2217. Exploiting conditional independence from Fig. 
1(c), p(y\u2217|x\u2217, D1:N) = ZZ p(y\u2217|x\u2217, \u03b8) p(\u03b8|\u03c6) p(\u03c6|D1:N) d\u03b8d\u03c6 (7) \u2248 ZZ p(y\u2217|x\u2217, \u03b8) p(\u03b8|\u03c6) q(\u03c6) d\u03b8d\u03c6 = Z p(y\u2217|x\u2217, \u03b8) \u0012 Z p(\u03b8|\u03c6) q(\u03c6)d\u03c6 \u0013 d\u03b8, (8) where in (8) we use p(\u03c6|D1:N) \u2248q(\u03c6). (Appendix B for details.) The inner integral (in parentheses) in (8) either admits a closed form (Sec. 3.1) or can be approximated (e.g., Monte-Carlo estimation). Personalisation. It formally refers to the task of learning a prediction model \u02c6 p(y|x) given an unseen (personal) training dataset Dp that comes from some unknown distribution pp(x, y), so that the personalised model \u02c6 p performs well on novel (in-distribution) test points (xp, yp) \u223cpp(x, y). Evidently we need to exploit (and bene\ufb01t from) the trained model from the FL training stage. To this end many existing approaches simply resort to \ufb01netuning, that is, training on Dp warm-starting with the FL-trained model. However, a potential issue is the lack of a solid principle on how to balance the initial FL-trained model and personal data \ufb01tting to avoid under\ufb01tting and over\ufb01tting. In our Bayesian framework, the personalisation can be seen as another posterior inference problem with additional evidence of the personal training data Dp. Prediction on a test point xp amounts to inferring: p(yp|xp, Dp, D1:N) = Z p(yp|xp, \u03b8) p(\u03b8|Dp, D1:N) d\u03b8. (9) 3 \fSo, it boils down to the task of posterior inference p(\u03b8|Dp, D1:N) given both the personal data Dp and the FL training data D1:N. Under our hierarchical model, by exploiting conditional independence from graphical model (Fig. 1(d)), we can link the posterior to our FL-trained q(\u03c6) as follows: p(\u03b8|Dp, D1:N) \u2248 Z p(\u03b8|Dp, \u03c6) p(\u03c6|D1:N) d\u03c6 \u2248 Z p(\u03b8|Dp, \u03c6) q(\u03c6) d\u03c6 \u2248p(\u03b8|Dp, \u03c6\u2217), (10) where we disregard the impact of Dp on the higher-level \u03c6 given the joint evidence, p(\u03c6|Dp, D1:N) \u2248 p(\u03c6|D1:N) due to the dominance of D1:N compared to smaller Dp. See Appendix B for details. The last part of (10) makes approximation using the mode \u03c6\u2217of q(\u03c6), which is reasonable for our two modeling choices for q(\u03c6) to be discussed in Sec. 3.1 and Sec. 3.2. Since dealing with p(\u03b8|Dp, \u03c6\u2217) involves dif\ufb01cult marginalisation p(Dp|\u03c6\u2217) = R p(Dp|\u03b8)p(\u03b8|\u03c6\u2217)d\u03b8, we adopt variational inference, introducing a tractable variational distribution v(\u03b8) \u2248p(\u03b8|Dp, \u03c6\u2217). Following the usual variational inference derivations, we have the negative ELBO objective (for personalisation): min v Ev(\u03b8)[\u2212log p(Dp|\u03b8)] + KL(v(\u03b8)||p(\u03b8|\u03c6\u2217)). (11) Once we have the optimised v, our predictive distribution becomes (S = the number of MC samples): p(yp|xp, Dp, D1:N) \u22481 S S X s=1 p(yp|xp, \u03b8(s)), where \u03b8(s) \u223cv(\u03b8), (12) which simply requires feed-forwarding test input xp through the sampled networks \u03b8(s) and averaging. Thus far, we have discussed a general framework, deriving how the variational inference for our Bayesian model \ufb01ts gracefully in the FL problem. In the next section, we de\ufb01ne speci\ufb01c density families for the prior (p(\u03c6), p(\u03b8i|\u03c6)) and posterior (q(\u03c6), qi(\u03b8i)) as our proposed concrete models. 
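To make the prediction step (12) concrete, the sketch below averages class probabilities over networks sampled from the fitted v(theta); backbone and sample_theta are hypothetical placeholders for the client model run with explicit weights and for a sampler from v(theta).

import torch

def personalised_predict(x_p, sample_theta, backbone, S=10):
    # Monte-Carlo estimate of p(y_p | x_p, D_p, D_1:N) in (12).
    probs = None
    for _ in range(S):
        theta = sample_theta()                 # theta^(s) ~ v(theta)
        logits = backbone(x_p, theta)          # forward pass with the sampled weights
        p = torch.softmax(logits, dim=-1)
        probs = p if probs is None else probs + p
    return probs / S

The same averaging serves global prediction, only with theta^(s) drawn from the global mixture of (8) instead of the personalised posterior v(theta).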
3 Bayesian FL: Two Concrete Models We propose two different model choices that we \ufb01nd the most interesting: Normal-Inverse-Wishart (Sec. 3.1) and Mixture (Sec. 3.2). To avoid distraction, we make this section concise putting only the \ufb01nal results and discussions, and leaving all mathematical details in Appendix C and D. 3.1 Normal-Inverse-Wishart (NIW) Model We de\ufb01ne the prior as a conjugate form of Gaussian and Normal-Inverse-Wishart. With \u03c6 = (\u00b5, \u03a3), p(\u03c6) = NIW(\u00b5, \u03a3; \u039b) = N(\u00b5; \u00b50, \u03bb\u22121 0 \u03a3) \u00b7 IW(\u03a3; \u03a30, \u03bd0), (13) p(\u03b8i|\u03c6) = N(\u03b8i; \u00b5, \u03a3), i = 1, . . . , N, (14) where \u039b = {\u00b50, \u03a30, \u03bb0, \u03bd0} is the parameters of the NIW. Although \u039b can be learned via data marginal likelihood maximisation (e.g., empirical Bayes), but for simplicity we leave it \ufb01xed as2: \u00b50 = 0, \u03a30 = I, \u03bb0 = 1, and \u03bd0 = d + 2 where d is the number of parameters in \u03b8i or \u00b5. Next, our choice of the variational density family for q(\u03c6) is the NIW, not just because it is the most popular parametric family for a pair of mean vector and covariance matrix \u03c6 = (\u00b5, \u03a3), but it can also admit closed-form expressions in the ELBO function due to the conjugacy as we derive in Appendix C.1. q(\u03c6) := NIW(\u03c6; {m0, V0, l0, n0}) = N(\u00b5; m0, l\u22121 0 \u03a3) \u00b7 IW(\u03a3; V0, n0). (15) Although the scalar parameters l0, n0 can be optimised together with m0, V0, their impact is less in\ufb02uential and we \ufb01nd that they make the ELBO optimisation a little bit cumbersome. So we \ufb01x l0, n0 with some near-optimal values by exploiting the conjugacy of the NIW under Gaussian likelihood (details in Appendix C), and regard m0, V0 as variational parameters, L0 = {m0, V0}. We restrict V0 to be diagonal for computational tractability. The density family for qi(\u03b8i)\u2019s can be a Gaussian, but we \ufb01nd that it is computationally more attractive and numerically more stable to adopt the mixture of two spiky Gaussians that leads to the MC-Dropout [23]. That is, qi(\u03b8i) = Y l \u0000p \u00b7 N(\u03b8i[l]; mi[l], \u03f52I) + (1 \u2212p) \u00b7 N(\u03b8i[l]; 0, \u03f52I) \u0001 , (16) 2This choice ensures that the mean of \u03a3 equals I, and \u00b5 is distributed as 0-mean Gaussian with covariance \u03a3. 4 \fwhere (i) mi is the only variational parameters (Li = {mi}), (ii) \u00b7[l] indicates a column/layer in neural network parameters where l goes over layers and columns of weight matrices, (iii) p is the (user-speci\ufb01ed) hyperparameter where 1 \u2212p corresponds to the dropout probability, and (iv) \u03f5 is small constant (e.g., 10\u22124) that makes two Gaussians spiky, close to the delta functions. Client update. We apply the general client update optimisation (5) to the NIW model. Following the approximation of [23] for the KL divergence between a mixture of Gaussians (16) and a Gaussian (14), we have the client local optimisation (details in Appendix C): min mi Li(mi) := \u2212log p(Di| \u02dc mi) + p 2(n0 + d + 1)(mi \u2212m0)\u22a4V \u22121 0 (mi \u2212m0), (17) where \u02dc mi is the dropout version of mi, i.e., a reparametrised sample from (16). Note that m0 and V0 are \ufb01xed during the optimisation. 
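A minimal sketch of the client objective (17) for the NIW model, assuming a flattened d-dimensional parameter vector and diagonal V0; local_nll is a hypothetical placeholder for the minibatch negative log-likelihood on Di, and the dropout mask is drawn element-wise here for brevity whereas (16) drops whole weight columns.

import torch

def client_loss(m_i, m0, v0_diag, n0, d, p_keep, local_nll):
    # Local objective (17): -log p(D_i | m~_i) + (p/2)(n0+d+1)(m_i - m0)^T V0^{-1} (m_i - m0).
    mask = torch.bernoulli(torch.full_like(m_i, p_keep))     # MC-dropout on the local weights
    nll = local_nll(m_i * mask)
    prox = 0.5 * p_keep * (n0 + d + 1) * torch.sum((m_i - m0) ** 2 / v0_diag)
    return nll + prox

Here m0 and v0_diag are the latest server-side variational parameters, held fixed over the client's local epochs, consistent with the note above.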
Interestingly (17) generalises Fed-Avg [43] and Fed-Prox [33]: With p = 1 (i.e., no dropout) and setting V0 = \u03b1I, (17) reduces to the client update formula for Fed-Prox where constant \u03b1 controls the impact of the proximal term. Server update. The general server optimisation (6) admits the closed-form solution (Appendix C): m\u2217 0 = p N + 1 N X i=1 mi, V \u2217 0 = n0 N + d + 2 (1 + N\u03f52)I + m\u2217 0(m\u2217 0)\u22a4+ N X i=1 \u03c1(m\u2217 0, mi, p) ! , (18) where \u03c1(m0, mi, p) = pmim\u22a4 i \u2212pm0m\u22a4 i \u2212pmim\u22a4 0 + m0m\u22a4 0 . Note that mi\u2019s are \ufb01xed from clients\u2019 latest variational parameters. It is interesting to see that m\u2217 0 in (18) generalises the wellknown aggregation step of averaging local models in Fed-Avg [43] and related methods: when p = 1 (no dropout), it almost3 equals client model averaging. Also, since \u03c1(m\u2217 0, mi, p = 1) = (mi \u2212m\u2217 0)(mi \u2212m\u2217 0)\u22a4when p = 1, V \u2217 0 essentially estimates the sample scatter matrix with (N + 1) samples, namely clients\u2019 mi\u2019s and server\u2019s prior \u00b50 = 0, measuring how much they deviate from the center m\u2217 0. The dropout is known to help regularise the model and lead to better generalisation [23], and with p < 1 our (18) forms a principled optimal solution. Global prediction. The inner integral of (8) becomes the multivariate Student-t distribution. Then the predictive distribution for a new test input x\u2217can be estimated as4: p(y\u2217|x\u2217, D1:N) \u22481 S S X s=1 p(y\u2217|x\u2217, \u03b8(s)), where \u03b8(s) \u223ctn0\u2212d+1 \u0012 \u03b8; m0, (l0 + 1)V0 l0(n0 \u2212d + 1) \u0013 , (19) where t\u03bd(a, B) is the multivariate Student-t with location a, scale matrix B, and d.o.f. \u03bd. Personalisation. With the given personalisation training data Dp, we follow the general framework in (11) to \ufb01nd v(\u03b8) \u2248p(\u03b8|Dp, \u03c6\u2217) in a variational way, where \u03c6\u2217obtained from (40). We adopt the same spiky mixture form (16) for v(\u03b8), which leads to the learning objective similar to (17). 3.2 Mixture Model Our motivation for mixture is to make the prior p(\u03b8, \u03c6) more \ufb02exible by having multiple different prototypes, diverse enough to cover the heterogeneity in data distributions across clients. We consider: p(\u03c6) = K Y j=1 N(\u00b5j; 0, I), p(\u03b8i|\u03c6) = K X j=1 1 K N(\u03b8i; \u00b5j; \u03c32I), (20) where \u03c6 = {\u00b51, . . . , \u00b5K} contains K networks (prototypes) that can broadly cover the clients data distributions, and \u03c3 is the hyperparameter that captures perturbation scale, chosen by users or learned from data. Note that we put equal mixing proportions 1/K due to the symmetry, a priori. That is, each client can take any of \u00b5j\u2019s equally likely a priori. For the variational densities, we de\ufb01ne: qi(\u03b8i) = N(\u03b8i; mi, \u03f52I), q(\u03c6) = K Y j=1 N(\u00b5j; rj, \u03f52I), (21) 3Only the constant 1 added to the denominator, which comes from the prior and has the regularising effect. 4In practice we use a single sample (S = 1) for computational ef\ufb01ciency. 5 \fwhere {rj}K j=1 (L0) and mi (Li) are the variational parameters, and \u03f5 is small constant (e.g., 10\u22124). Client update. For our model choice, the general client update (5) reduces to (details in Appendix D): min mi Eqi(\u03b8i)[\u2212log p(Di|\u03b8i)] \u2212log K X j=1 exp \u0012 \u2212||mi \u2212rj||2 2\u03c32 \u0013 . 
(22) It is interesting to see that (22) can be seen as generalisation of Fed-Prox [33], where the proximal regularisation term in Fed-Prox is extended to multiple global models rj\u2019s, penalizing the local model (mi) straying away from these prototypes. And if we use a single prototype (K = 1), the optimisation (22) exactly reduces to the local update objective of Fed-Prox. Since log-sum-exp is approximately equal to max, the regularisation term in (22) effectively focuses on the closest global prototype rj from the current local model mi, which is intuitively well aligned with our motivation. Server update. The general form (6) can be approximately turned into (Appendix D for derivations): min {rj}K j=1 1 2 K X j=1 ||rj||2 \u2212 N X i=1 log K X j=1 exp \u0012 \u2212||mi \u2212rj||2 2\u03c32 \u0013 . (23) Interestingly, (23) generalises the well-known aggregation step of averaging local models in Fed-Avg and related methods: Especially when K = 1, (23) reduces to quadratic optimisation, admitting the optimal solution r\u2217 1 = 1 N+\u03c32 PN i=1 mi. The extra term \u03c32 can be explained by incorporating an extra zero local model originating from the prior (interpreted as a neutral model) with the discounted weight \u03c32 rather than 1. Although (23) for K > 1 can be solved by standard gradient descent, we apply the Expectation-Maximisation (EM) algorithm5 [18] instead: (E-step) c(j|i) = e\u2212||mi\u2212rj||2/(2\u03c32) PK j=1 e\u2212||mi\u2212rj||2/(2\u03c32) , (M-step) r\u2217 j = 1 N PN i=1 c(j|i) \u00b7 mi \u03c32 N + 1 N PN i=1 c(j|i) . (24) The M-step (server update) has intuitive meaning that the new prototype rj becomes the weighted average of the local models mi\u2019s where the weights c(j|i) are determined by the proximity between mi and rj (i.e., those mi\u2019s that are closer to rj have more contribution, and vice versa). This can be seen as an extension of the aggregation step in Fed-Avg to the multiple prototype case. Global prediction. We slightly modify our general approach to make individual client data dominantly explained by the most relevant model rj, by introducing a gating function from the mixture of experts [27, 28]. See Appendix D for details. Personalisation. With v(\u03b8) of the same form as qi(\u03b8i), the VI learning becomes similar to (22). 4 Theoretical Analysis We provide two theoretical results for our Bayesian FL algorithm: (Convergence analysis) As a special block-coordinate optimisation algorithm, we show that it converges to an (local) optimum of the training objective (3); (Generalisation error bound) We theoretically show how well this optimal model trained on empirical data performs on unseen test data points. Due to space limit, full details and proofs are described in Appendix E,F, and we only state the theorems and remarks here. Theorem 4.1 (Convergence analysis). We denote the objective function in (3) by f(x) where x = [x0, x1, . . . , xN] corresponding to the variational parameters x0 := L0, x1 := L1, ..., xN := LN. Let \u03b7t = L + \u221a t for some constant L, and xT = 1 T PT t=1 xt, where t is the batch iteration counter, xt is the iterate at t by following our FL algorithm, and Nf (\u2264N) is the number of participating clients at each round. 
With Assumptions 1\u20133 in Appendix E, the following holds for any T: E[f(xT )] \u2212f(x\u2217) \u2264N + Nf Nf \u00b7 \u221a T +L 2 D2 + R2 f \u221a T T = O \u0010 1 \u221a T \u0011 , (25) where x\u2217is the (local) optimum, D, and Rf are some constants, and the expectation is taken over randomness in minibatches and selection of participating clients. 5Instead of performing several EM steps until convergence, in practice we \ufb01nd only one EM step is suf\ufb01cient. 6 \f0 10 4 10 3 10 2 pdrop (dropout probability) 37.5 40.0 42.5 45.0 47.5 50.0 Accuracy Global prediction (s = 100) Fed-BABU NIW (Ours) 0 10 4 10 3 10 2 pdrop (dropout probability) 46 48 50 52 54 Accuracy Personalisation (s = 100) Fed-BABU NIW (Ours) 2 5 10 K (the number of networks) 10 20 30 40 50 Accuracy Global prediction (s = 100) Fed-BABU Preset (Baseline) Ensemble (Baseline) Mixture (Ours) 2 5 10 K (the number of networks) 20 30 40 50 Accuracy Personalisation (s = 100) Fed-BABU Preset (Baseline) Ensemble (Baseline) Mixture (Ours) (a) Sensitivity to pdrop in our NIW (b) Our Mix. vs. ensemble baselines (K varied) Figure 2: Hyperparameter sensitivity analysis and comparison with simple ensemble baselines. Remark. It says that xt converges to the optimal point x\u2217in expectation at the rate of O(1/ \u221a t). This rate asymptotically equals that of the conventional (non-block-coordinate, holistic) SGD algorithm. Theorem 4.2 (Generalisation error bound). Assume that the variational density family for qi(\u03b8i) is rich enough to subsume Gaussian. Let d2(P\u03b8i, P i) be the expected squared Hellinger distance between the true class distribution P i(y|x) and model\u2019s P\u03b8i(y|x) for client i\u2019s data. The optimal solution ({q\u2217 i (\u03b8i)}N i=1, q\u2217(\u03c6)) of the optimisation problem (3) satis\ufb01es: 1 N N X i=1 Eq\u2217 i (\u03b8i)[d2(P\u03b8i, P i)] \u2264O \u0012 1 n \u0013 + C \u00b7 \u03f52 n + C\u2032 rn + 1 N N X i=1 \u03bb\u2217 i ! , (26) with high probability, where C, C\u2032 > 0 are constant, \u03bb\u2217 i = min\u03b8\u2208\u0398 ||f\u03b8 \u2212f i||2 \u221eis the best error within our backbone network family \u0398, and rn, \u03f5n \u21920 as the training data size n \u2192\u221e. Remark. It implies that the optimal solution of (3) (attainable by our block-coordinate FL algorithm) is asymptotically optimal, since the RHS of (26) converges to 0 as the training data size n \u2192\u221e. 5 Related Work Due to lack of space, here we point out only the key differences between the proposed approach and existing methods closely related to ours, leaving all references and detailed discussions in Appendix I. Bayesian or ensemble FL approaches. Some recent studies tried to tackle the FL problem using Bayesian or ensemble-based methods. As we mentioned earlier, the key difference is that most methods do not introduce Bayesian hierarchy in a principled manner. Instead, they ultimately treat network weights \u03b8 as a random variable shared across all clients. On the other hand, our approach assigns individual \u03b8i to each client i governed by a common prior p(\u03b8i|\u03c6). The non-hierarchical approaches mostly resort to ad hoc heuristics and/or strong assumptions in their algorithms. For instance, FedPA [4] aims to establish the product-of-experts decomposition, p(\u03b8|D1:N) \u221dQN i=1 p(\u03b8|Di) to allow client-wise inference of p(\u03b8|Di). However, this decomposition does not hold in general unless a strong assumption of uninformative prior p(\u03b8) \u221d1 is made. 
FedBE (Bayesian Ensemble) [15] aims to build the global posterior distribution p(\u03b8|D1:N) from the individual posteriors p(\u03b8|Di) in some ad hoc ways. FedEM [40] forms a seemingly reasonable hypothesis that local client data distributions can be identi\ufb01ed as mixtures of a \ufb01xed number of base distributions (with different mixing proportions). Although they have sophisticated probabilistic modeling, this method is not a Bayesian approach. pFedBayes [59] can be seen as an implicit regularisation-based method to approximate p(\u03b8|D1:N) from individual posteriors p(\u03b8|Di). To this end, they introduce the so-called global distribution w(\u03b8), which essentially serves as a regulariser to prevent local posteriors from deviating from it. The introduction of w(\u03b8) and its update strategy appears to be a hybrid treatment rather than solely Bayesian perspective. FedPop [31] has a similar hierarchical Bayesian model structure as ours, but their model is limited to a linear deterministic model for the shared variate. Yet another Bayesian FL algorithms. Some approaches [47, 53, 37, 22] proposed hierarchical Bayesian models that are similar to our model in (graphical model) structures. However, these algorithms have signi\ufb01cant practical limitations, can only run on simple linear models or single hidden-layer MLPs, mainly due to their use of computationally expensive MCMC sampling [47, 53] or strong reliance on prior-posterior conjugacy [22]. Furthermore, the EM-based optimisation adopted in some approaches [37, 22] can considerably diminish the Bayesian uncertainty modeling effect. Other recent Bayesian methods adopt the expectation-propagation (EP) approximations [5, 24]. In particular, the EP update steps are performed locally with the client data. However, neither of these two works is a hierarchical Bayesian model \u2013 unlike our individual client modeling, they have a single model \u03b8 shared across clients, without individual modeling for client data, thus following FedPA-like inference p(\u03b8|D1:N). The consequence is that they lack a systematic way to distinctly model global and local parameters for global prediction and personalised prediction respectively. 7 \fTable 1: (CIFAR-100) Global prediction and personalisation accuracy. 
(a) Global prediction performance (initial accuracy) FL settings Our Methods Fed-BABU Fed-Avg Fed-Prox pFedBayes FedPA FedBE FedEM FedPop s f \u03c4 NIW Mix (K =2) [44] [43] [33] [59] [4] [15] [40] [31] 100 0.1 1 49.760.12 49.760.12 49.760.12 49.370.30 42.350.42 40.870.62 41.490.75 37.230.88 42.150.78 42.490.89 43.831.07 43.090.30 10 29.020.33 29.020.33 29.020.33 29.020.29 29.020.29 29.020.29 27.930.28 28.260.19 27.110.11 28.211.42 28.050.28 28.390.45 28.620.26 28.310.42 1.0 1 57.800.10 57.800.10 57.800.10 52.940.36 48.170.56 47.440.20 47.661.49 44.890.32 47.960.17 48.690.70 50.280.72 48.360.44 10 29.530.42 30.550.15 30.550.15 30.550.15 28.670.51 28.790.68 27.430.38 28.250.81 28.890.38 28.601.18 29.510.12 28.990.47 10 0.1 1 37.540.25 38.070.40 38.070.40 38.070.40 35.040.56 27.480.86 34.730.21 31.490.18 35.510.55 35.170.40 37.280.26 35.010.58 10 18.990.03 18.990.03 18.990.03 18.950.13 18.540.37 14.690.40 16.840.48 17.930.68 18.590.19 18.670.09 18.450.10 18.680.12 1.0 1 50.400.11 50.400.11 50.400.11 49.520.88 45.410.11 37.100.44 44.330.31 39.950.89 45.080.72 45.560.52 47.520.59 44.980.27 10 22.870.41 23.590.47 23.590.47 23.590.47 21.920.66 17.380.32 19.540.38 21.850.50 22.600.29 21.730.64 22.510.06 22.060.40 (b) Personalisation performance FL settings Our Methods Fed-BABU Fed-Avg Fed-Prox pFedBayes FedPA FedBE FedEM FedPop s f \u03c4 NIW Mix (K =2) [44] [43] [33] [59] [4] [15] [40] [31] 100 0.1 1 54.160.50 56.170.16 56.170.16 56.170.16 50.430.93 46.430.82 49.910.78 45.831.12 49.880.49 50.571.03 47.280.88 51.220.37 10 36.680.37 36.680.37 36.680.37 36.320.27 35.450.34 33.570.06 33.920.22 35.741.36 35.060.30 35.510.62 34.411.13 35.690.47 1.0 1 60.360.89 60.360.89 60.360.89 58.820.37 55.870.91 53.150.25 55.500.90 53.000.48 55.430.31 56.250.42 54.650.49 55.850.51 10 35.920.17 36.220.17 36.220.17 36.220.17 35.580.24 33.821.04 33.700.42 35.571.02 35.210.50 34.920.94 35.210.42 35.360.36 10 0.1 1 79.410.24 79.700.19 79.700.19 79.700.19 75.440.36 70.361.02 75.060.67 73.930.14 75.760.36 76.190.42 75.520.50 74.970.72 10 67.351.02 67.570.62 67.570.62 67.570.62 66.240.53 61.390.27 64.860.73 65.820.33 65.870.21 66.640.25 67.110.11 66.700.55 1.0 1 82.710.37 82.710.37 82.710.37 81.030.35 78.920.23 76.980.66 78.560.55 78.080.28 78.840.16 79.820.52 80.650.32 78.960.14 10 67.781.02 67.781.02 67.781.02 66.740.27 66.250.46 63.810.40 63.810.51 66.151.29 66.230.29 66.060.54 66.340.12 66.570.44 6 Evaluation We evaluate the proposed hierarchical Bayesian models on several FL benchmarks: CIFAR-100, MNIST, Fashion-MNIST, and EMNIST. We also have the results on the challenging corrupted CIFAR (CIFAR-C-100) (in Appendix G) that renders the client data more heterogeneous both in input images and class distributions. Our implementation6 is based on [44] where MobileNet [26] is used as a backbone, and follow the body-update strategy: the classi\ufb01cation head (the last layer) is randomly initialised and \ufb01xed during training, with only the network body updated (and both body and head updated during personalisation). We report results all based on this body-update strategy since we observe that it considerably outperforms the full update for our models and other competing methods. The hyperparameters are: (NIW) \u03f5 = 10\u22124 and p = 1 \u22120.001 (See ablation study below for other values); (Mixture) \u03c32 = 0.1, \u03f5 = 10\u22124, mixture order K = 2 (See Appendix G.2 for other values), and the gating network has the same architecture as the main backbone, but the output cardinality changed to K. 
Other hyperparameters including batch size (50), learning rate (0.1 initially, decayed by 0.1) and the number of epochs in personalisation (5), are the same as those in [44]. CIFAR-100. Following [44], the client data distributions are heterogeneous non-iid, formed by the sharding-based class sampling [43]. More speci\ufb01cally, we partition data instances in each class into non-overlapping equal-sized shards, and assign s randomly sampled shards (over all classes) to each of N clients. Thus the number of shards per user s can control the degree of data heterogeneity: small s leads to more heterogeneity, and vice versa. The number of clients N = 100 (each having 500 training, 100 test samples), and we denote by f the fraction of participating clients. So, Nf = \u230aN \u00b7f\u230b clients are randomly sampled at each round to participate in training. Smaller f makes the FL more challenging, and we test two settings: f = 1.0 and 0.1. Lastly, the number of epochs for client local update at each round is denoted by \u03c4 where we test \u03c4 = 1 and 10, and the number of total rounds is determined by \u03c4 as \u230a320/\u03c4\u230bfor fairness. Note that smaller \u03c4 incurs more communication cost but often leads to higher accuracy. For the competing methods FedBE [15] and FedEM [40], we set the number of ensemble components or base models to 3. FedPA [4]: shrinkage parameter \u03c1 = 0.01. MNIST/F-MNIST/EMNIST. Following the standard protocols, we set the number of clients N = 100, the number of shards per client s = 5, the fraction of participating clients per round f = 0.1, and the number of local training epochs per round \u03c4 = 1 (total number of rounds 100) or 5 (total number of rounds 20) for MNIST and F-MNIST. For EMNIST, we have N = 200, f = 0.2, \u03c4 = 1 (total number of rounds 300). We follow the standard Dirichlet-based client data splitting. For the competing methods FedBE [15] and FedEM [40], we use three-component models. The backbone is 6We provide detailed pseudocodes in Appendix H.1. The codes to reproduce the results are in the Supplement. 8 \fTable 2: (MNIST / Fashion-MNIST / EMNIST) Global prediction and personalisation accuracy. (a) Global prediction performance (initial accuracy) Dataset FL settings Our Methods Fed-BABU Fed-Avg Fed-Prox pFedBayes FedPA FedBE FedEM FedPop s f \u03c4 NIW Mix (K =2) [44] [43] [33] [59] [4] [15] [40] [31] MNIST 5 0.1 1 97.81 97.94 97.94 97.94 97.16 97.15 97.38 97.32 93.38 97.16 97.38 97.42 5 95.51 95.68 95.68 95.68 94.59 94.86 95.28 94.28 94.87 95.11 95.33 94.98 FMNIST 5 0.1 1 84.18 84.28 84.28 84.28 83.86 81.98 83.25 81.20 81.10 80.35 83.51 82.40 5 77.48 77.60 77.60 77.60 76.10 73.70 73.67 72.35 73.47 73.28 76.69 72.93 EMNIST ? 0.2 1 85.40 85.58 85.58 85.58 84.33 85.27 85.27 84.65 83.20 85.24 85.21 85.27 (b) Personalisation performance Dataset FL settings Our Methods Fed-BABU Fed-Avg Fed-Prox pFedBayes FedPA FedBE FedEM FedPop s f \u03c4 NIW Mix (K =2) [44] [43] [33] [59] [4] [15] [40] [31] MNIST 5 0.1 1 98.78 98.85 98.85 98.85 97.73 97.83 97.96 97.88 96.66 97.62 97.89 97.91 5 96.53 96.67 96.67 96.67 95.89 96.10 96.45 95.49 95.93 96.37 96.46 95.80 FMNIST 5 0.1 1 92.48 92.54 92.54 92.54 91.03 90.59 90.72 90.31 91.60 89.96 92.10 91.14 5 89.91 89.91 89.91 89.53 89.02 86.85 87.35 86.38 88.76 85.00 89.65 86.40 EMNIST ? 0.2 1 88.84 88.97 88.97 88.97 83.09 87.92 88.39 88.12 85.10 88.37 88.32 88.40 an MLP with a single hidden layer with 256 units for MNIST/F-MNIST, while we use a standard ConvNet with two hidden layers for EMNIST. 
Main results and interpretation. In Table 1 and 2 (also Table 3 in Appendix G), we compare our methods (NIW and Mixture with K =2) against the popular FL methods, including FedAvg [43], FedBABU [44], FedProx [33], as well as recent Bayesian/ensemble methods, FedPA [4], FedBE [15], pFedBayes [59]. FedEM [40], and FedPop [31] (See Sec. 5 and Appendix I). We run the competing methods (implementation based on their public codes or our own implementation if unavailable) with default hyperparameters (e.g., \u00b5 = 0.01 for FedProx) and report the results. First of all, our two models (NIW and Mix.) consistently perform the best (by large margins most of the time) in terms of both global prediction and personalisation for nearly all FL settings on the two datasets. This is attributed to the principled Bayesian modeling of the underlying FL data generative process in our approaches that can be seen as rigorous generalisation and extension of the existing intuitive algorithms such as FedAvg and FedProx. In particular, the superiority of our methods to the other Bayesian/ensemble approaches veri\ufb01es the effectiveness of modeling client-wise latent variables \u03b8i against the commonly used shared \u03b8 modeling. Our methods are especially robust for the scenarios of signi\ufb01cant client data heterogeneity, e.g., CIFAR-C-100 personalisation on data with unseen corruption types in Appendix G (Table 3). (Ablation) Hyperparameter sensitivity. We test sensitivity to some key hyperparameters in our models. For NIW, we have p = 1\u2212pdrop, the MC-dropout probability, where we used pdrop = 0.001 in the main experiments. In Fig. 2(a) we report the performance of NIW for different values (pdrop = 0, 10\u22124, 10\u22122) on CIFAR-100 with (s = 100, f = 0.1, \u03c4 = 1) setting. We see that the performance is not very sensitive to pdrop unless it is too large (e.g., 0.01). For the Mixture model, different mixture orders K = 2, 5, 10 are contrasted in Fig. 2(b). As seen, having more mixture components does no harm (no over\ufb01tting), but we do not see further improvement over K = 2 in our experiments (See also results on CIFAR-C-100 in Table 5 in Appendix G). Further results and analysis. In Appendix, we provide further empirical results and analysis: (i) performance on the challenging corrupted CIFAR (CIFAR-C-100) dataset (Appendix G.1), (ii) comparison between our mixture model and simple ensemble baselines (Fig. 2(b) and Appendix G.3), and (iii) computational complexity analysis and actual running times (Appendix H.2 and H.3). 7" + }, + { + "url": "http://arxiv.org/abs/2302.12047v1", + "title": "Domain Generalisation via Domain Adaptation: An Adversarial Fourier Amplitude Approach", + "abstract": "We tackle the domain generalisation (DG) problem by posing it as a domain\nadaptation (DA) task where we adversarially synthesise the worst-case target\ndomain and adapt a model to that worst-case domain, thereby improving the\nmodel's robustness. To synthesise data that is challenging yet\nsemantics-preserving, we generate Fourier amplitude images and combine them\nwith source domain phase images, exploiting the widely believed conjecture from\nsignal processing that amplitude spectra mainly determines image style, while\nphase data mainly captures image semantics. To synthesise a worst-case domain\nfor adaptation, we train the classifier and the amplitude generator\nadversarially. 
Specifically, we exploit the maximum classifier discrepancy\n(MCD) principle from DA that relates the target domain performance to the\ndiscrepancy of classifiers in the model hypothesis space. By Bayesian\nhypothesis modeling, we express the model hypothesis space effectively as a\nposterior distribution over classifiers given the source domains, making\nadversarial MCD minimisation feasible. On the DomainBed benchmark including the\nlarge-scale DomainNet dataset, the proposed approach yields significantly\nimproved domain generalisation performance over the state-of-the-art.", + "authors": "Minyoung Kim, Da Li, Timothy Hospedales", + "published": "2023-02-23", + "updated": "2023-02-23", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "cs.CV" + ], + "main_content": "INTRODUCTION Contemporary machine learning models perform well when training and testing data are identically distributed. However, in practice it is often impossible to obtain an unbiased sample of real-world data for training, and therefore distribution-shift inevitably exists between training and deployment. Performance can degrade dramatically under such domain shift (Koh et al., 2021), and this is often the cause of poor performance of real-world deployments (Geirhos et al., 2020). This important issue has motivated a large amount of research into the topic of domain generalisation (DG) (Zhou et al., 2021a), which addresses training models with increased robustness to distribution shift. These DG approaches span a diverse set of strategies including architectural innovations (Chattopadhyay et al., 2020), novel regularisation (Balaji et al., 2018), alignment (Sun & Saenko, 2016) and learning (Li et al., 2019) objectives, and data augmentation (Zhou et al., 2021b) to make available training data more representative of potential testing data. However, the problem remains essentially unsolved, especially as measured by recent carefully designed benchmarks (Gulrajani & Lopez-Paz, 2021). Our approach is related to existing lines of work on data-augmentation solutions to DG (Zhou et al., 2021b; Shankar et al., 2018), which synthesise more data for model training; and alignment-based approaches to Domain Adaptation (Sun & Saenko, 2016; Saito et al., 2018) that adapt a source model to an unlabeled target set \u2013 but cannot address the DG problem where the target set is unavailable. We improve on both by providing a uni\ufb01ed framework for stronger data synthesis and domain alignment. Our framework combines two key innovations: A Bayesian approach to maximum classi\ufb01er discrepancy, and a Fourier analysis approach to data augmentation. We start from the perspective of maximum classi\ufb01er discrepancy (MCD) from domain adaptation (Ben-David et al., 2007; 2010; Saito et al., 2018). This bounds the target-domain error as a function of discrepancy between multiple source-domain classi\ufb01ers. It is not obvious how to apply MCD to the DG problem where we have no access to target-domain data. A key insight is that MCD provides a principled objective that we 1 arXiv:2302.12047v1 [cs.LG] 23 Feb 2023 \fPublished as a conference paper at ICLR 2023 Figure 1: Overall training \ufb02ow of the proposed approach (AGFA). We generate target-domain data by synthesizing Fourier amplitude images trained adversarially. See main text in Sec. 3 for details. can maximise in order to synthesise a worst-case target domain, and also minimise in order to train a model that is adapted to that worst-case domain. 
Speci\ufb01cally, we take a Bayesian approach that learns a distribution over source-domain classi\ufb01ers, with which we can compute MCD. This simpli\ufb01es the model by eliminating the need for adversarial classi\ufb01er training in previous applications of MCD (Saito et al., 2018), which leaves us free to adversarially train the worst-case target domain. To enable challenging worst-case augmentations to be generated without the risk of altering image semantics, our augmentation strategy operates in the Fourier amplitude domain. It synthesises amplitude images, which can be combined with phase images from source-domain data to produce images that are substantially different in style (amplitude), while retaining the original semantics (phase). Our overall strategy termed Adversarial Generation of Fourier Amplitude (AGFA) is illustrated in Fig. 1. In summary, we make the following main contributions: (1) We provide a novel and principled perspective on DG by drawing upon the MCD principle from DA. (2) We provide AGFA, an effective algorithm for DG based on variational Bayesian learning of the classi\ufb01er and Fourier-based synthesis of the worst-case domain for robust learning. (3) Our empirical results show clear improvement on previous state-of-the-arts on the rigorous DomainBed benchmark. 2 PROBLEM SETUP AND BACKGROUND We follow the standard setup for the Domain Generalisation (DG) problem. As training data, we are given labeled data S = {(x, y)|(x, y) \u223cDi, i = 1, . . . , N} where x \u2208X and y \u2208Y = {1, . . . , C}. Although the source domain S consists of different domains {Di}N i=1 with domain labels available, we simply take their union without using the originating domain labels. This is because in practice the number of domains (N) is typically small, and it is rarely possible to estimate a meaningful population distribution for empirical S from a few different domains. What distinguishes DG from the closely-related (unsupervised) Domain Adaptation (DA), is that the target domain (T) on which model\u2019s prediction performance is measured is unknown for DG, whereas in DA the input data x from the target domain are revealed (without class labels y). Below we brie\ufb02y summarise the MCD principle and Ben-David\u2019s theorem, one of the key theorems in DA, as we exploit them to tackle DG. Ben-David\u2019s theorem and MCD principle in DA. In unsupervised DA, Ben-David\u2019s theorem (Ben-David et al., 2010; 2007) provides an upper bound for the target-domain generalisation error of a model (hypothesis). We focus on the tighter bound version, which states that for any classi\ufb01er h in the hypothesis space H = {h|h : X \u2192Y}, the following holds (without the sampling error term): eT (h) \u2264eS(h) + sup h,h\u2032\u2208H \f \fdS(h, h\u2032) \u2212dT (h, h\u2032) \f \f + e\u2217(H; S, T), (1) where eS(h) := E(x,y)\u223cS[I(h(x) \u0338= y)] is the error rate of h(\u00b7) on the source domain S, dS(h, h\u2032) := Ex\u223cS[I(h(x) \u0338= h\u2032(x))] denotes the discrepancy between two classi\ufb01ers h and h\u2032 on S (similarly for eT (h) and dT (h, h\u2032)), and e\u2217(H; S, T) := minh\u2208H eS(h) + eT (h). Thus we can provably reduce the target domain generalisation error by simultaneously minimizing the three terms in the upper bound1, namely source-domain error eS(h), classi\ufb01er discrepancy, and minimal source-target error. 
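As a concrete reading of the discrepancy terms in the bound, the following sketch (our own illustration; PyTorch assumed, classifiers returning class logits) estimates d(h, h′) = E_x[1(h(x) ≠ h′(x))] on a batch; evaluating it on source and target inputs gives Monte-Carlo estimates of the d_S and d_T terms in (1).

import torch

@torch.no_grad()
def disagreement(h, h_prime, x):
    """Batch estimate of d(h, h') = E_x[ 1(h(x) != h'(x)) ], the classifier discrepancy."""
    pred_a = h(x).argmax(dim=-1)        # hard decisions of classifier h (logits assumed)
    pred_b = h_prime(x).argmax(dim=-1)  # hard decisions of classifier h'
    return (pred_a != pred_b).float().mean()

# |d_S(h, h') - d_T(h, h')| ~= (disagreement(h, h2, x_src) - disagreement(h, h2, x_tgt)).abs();
# the sup over h, h' in the bound would be approximated by maximising this over classifier pairs.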
1Some recent work such as (Vedantam et al., 2021), however, empirically studied potential risk of looseness of the bound in certain scenarios. 2 \fPublished as a conference paper at ICLR 2023 Previous approaches (Saito et al., 2018; Kim et al., 2019) aim to minimise the upper bound, and one reasonable strategy is to constrain the hypothesis space H in such a way that it contains only those h\u2019s with small eS(h). Within this source-con\ufb01ned hypothesis space (denoted by H|S), the terms eS(h) and dS(h, h\u2032) in the bound are expected to be close to 0 for all h, h\u2032 \u2208H|S, and the bound of (1) effectively reduces to what is called the Maximum Classi\ufb01er Discrepancy (MCD) loss, MCD(H|S; T) := sup h,h\u2032\u2208H|S |dT (h.h\u2032)| = sup h,h\u2032\u2208H|S Ex\u223cT \u0002 I(h(x) \u0338= h\u2032(x)) \u0003 . (2) This suggests the MCD learning principle: we need to minimise both the error on S (so as to form the source-con\ufb01ned hypothesis space H|S) and the MCD loss on T. Note however that the last term e\u2217is not considered in (Saito et al., 2018; Kim et al., 2019) mainly due to the dif\ufb01culty of estimating the target domain error. We will incorporate e\u2217in our DG algorithm as described in the next section. We conclude the section by brie\ufb02y reviewing how the MCD learning principle was exploited in previous works. In (Saito et al., 2018) they explicitly introduce two classi\ufb01er networks h(x) = g(\u03c6(x)) and h\u2032(x) = g\u2032(\u03c6(x)), where the classi\ufb01cation heads g, g\u2032 and the feature extractor \u03c6 are cooperatively updated to minimise the error on S (thus implicitly obtaining H|S), they are updated adversarially to maximise (minimise) the MCD loss on T with respect to g and g\u2032 (\u03c6, respectively). In (Kim et al., 2019), they build a Gaussian process (GP) classi\ufb01er on the feature space \u03c6(x), in which H|S is attained by GP posterior inference. Minimisation of the MCD term is then accomplished by the maximum margin learning which essentially enforces minimal overlap between the two largest posterior modes. Note that (Saito et al., 2018)\u2019s strategy requires adversarial optimisation, and hence it is less suitable for our DG algorithm which will require adversarial generator learning: Having two adversarial learning components would make the training dif\ufb01cult since we need to \ufb01nd two nested equilibrium (saddle) points. We instead adopt the Bayesian hypothesis modeling approach of (Kim et al., 2019). In the next section, we describe our approach in greater detail. 3 ADVERSARIAL GENERATION OF FOURIER AMPLITUDE (AGFA) De\ufb01ning and optimising a hypothesis space. Our DG approach aims to minimise the MCD loss, MCD(H|S; T) de\ufb01ned in (2). The \ufb01rst challenge is that the target domain data T is not available in DG. Before we address it, we clarify the optimisation problem (i.e., what is the MCD loss optimised for?) and how the hypothesis spaces (H and H|S) are represented. The MCD loss is a function of hypothesis space H (or H|S), not a function of individual classi\ufb01er h in it. Hence, minimising the MCD loss amounts to choosing the best hypothesis space H. To this end, we need to parametrise the hypothesis space (so as to frame it as a continuous optimisation), and our choice is the Bayesian linear classi\ufb01er with deterministic feature extractor. 
We consider the conventional neural-network feed-forward classi\ufb01er modeling: we have the feature extractor network \u03c6\u03b8(x) \u2208Rd (with the weight parameters \u03b8) followed by the linear classi\ufb01cation head W = [w1, . . . , wC] (C-way classi\ufb01cation, each wj \u2208Rd), where the class prediction is done by the softmax likelihood: P(y = j|x, \u03b8, W) \u221dew\u22a4 j \u03c6\u03b8(x), j = 1, . . . , C. (3) So each con\ufb01guration (\u03b8, W) speci\ufb01es a particular classi\ufb01er h. To parametrise the hypothesis space H (\u220bh), ideally we can consider a parametric family of distributions over (\u03b8, W). Each distribution P\u03b2(\u03b8, W) speci\ufb01ed by the parameter \u03b2 corresponds to a particular hypothesis space H, and each sample (\u03b8, W) \u223cP\u03b2(\u03b8, W) corresponds to a particular classi\ufb01er h \u2208H. Although this is conceptually simple, to have a tractable model in practice, we de\ufb01ne \u03b8 to be deterministic parameters and only W to be stochastic. A reasonable choice for P(W), without any prior knowledge, is the standard Gaussian, P(W) = QC j=1 N(wj; 0, I). Now, we can represent a hypothesis space as H = {P(y|x, \u03b8, W) | W \u223cP(W)}. Thus H is parametrised by \u03b8, and with \u03b8 \ufb01xed (H \ufb01xed), each sample W from P(W) instantiates a classi\ufb01er h \u2208H. The main bene\ufb01t of this Bayesian hypothesis space modeling is that we can induce the source-con\ufb01ned hypothesis space H|S (i.e., the set of classi\ufb01ers that perform well on the source domain) in a principled manner by the posterior, P(W|S, \u03b8) \u221dP(W) \u00b7 Y (x,y)\u223cS P(y|x, \u03b8, W). (4) 3 \fPublished as a conference paper at ICLR 2023 The posterior places most of its probability density on those samples (classi\ufb01ers) W that attain high likelihood scores on S (under given \u03b8) while being smooth due to the prior. To ensure that the source domain S is indeed explained well by the model, we further impose high data likelihood on S as constraints for \u03b8, \u03b8 \u2208\u0398S where \u0398S := {\u03b8 | log P(S|\u03b8) \u2265Lth}, (5) where Lth is the (constant) threshold that guarantees suf\ufb01cient \ufb01delity of the model to explaining S. Then it is reasonable to represent H|S by the support of P(W|S, \u03b8) for \u03b8 \u2208\u0398S, postulating that H|S exclusively contains smooth classi\ufb01ers h that perform well on S. Formally, the source-con\ufb01ned hypothesis space can be parametrised as: H|S(\u03b8) = {P(y|x, \u03b8, W) | W \u223cP(W|S, \u03b8)} for \u03b8 \u2208\u0398S, (6) where we use the notation H|S(\u03b8) to emphasise its dependency on \u03b8. Intuitively, the hypothesis space H|S is identi\ufb01ed by choosing the feature space (i.e., choosing \u03b8 \u2208\u0398S), and individual classi\ufb01ers h \u2208H|S are realised by the Bayesian posterior samples W \u223cP(W|S, \u03b8) (inferred on the chosen feature space). Since the posterior P(W|S, \u03b8) in (6) and the marginal likelihood log P(S|\u03b8) in (5) do not admit closed forms in general, we adopt the variational inference technique to approximate them. We defer the detailed derivations (Sec. 3.1) for now, and return to the MCD minimisation problem since we have de\ufb01ned the hypothesis space representation. Optimising a worst-case target domain. For the DG problem, we cannot directly apply the MCD learning principle since the target domain T is unknown during the training stage. 
Our key idea is to consider the worst-case scenario where the target domain T maximises the MCD loss. This naturally forms minimax-type optimisation, min \u03b8\u2208\u0398S max T MCD(H|S(\u03b8); T). (7) To solve the saddle-point optimisation (7), we adopt the adversarial learning strategy with a generator network (Goodfellow et al., 2014). The generator for T has to synthesise samples x of T that need to satisfy three conditions: (C1) The generated samples maximally baf\ufb02e the classi\ufb01ers in H|S to have least consensus in prediction (for inner maximisation); (C2) T still retains the same semantic class information as the source domain S (for the de\ufb01nition of DG); and (C3) The generated samples in T need to be distinguishable along their classes2. Paramaterising domains. To meet these conditions, we generate target domain images using Fourier frequency spectra. We speci\ufb01cally build a generator network that synthesises amplitude images in the Fourier frequency domain. The synthesised amplitude images are then combined with the phase images sampled from the source domain S to construct new samples x \u2208T by inverse Fourier transform. This is motivated by signal processing where it is widely believed that the frequency phase spectra capture the semantic information of signals, while the amplitudes take charge of non-semantic (e.g., style) aspects of the signals (Oppenheim & Lim, 1981). Denoting the amplitude generator network as G\u03bd(\u03f5) with parameters \u03bd and random noise input \u03f5 \u223cN(0, I), our target sampler (x, y) \u223cT are generated as follows: 1. (xS, yS) \u223cS (Sample an image and its class label from S) 2. AS\u2220PS = F(xS) (Fourier transform to have amplitude and phase for xS) 3. A = G\u03bd(\u03f5), \u03f5 \u223cN(0, I) (Generate an amplitude image from G) 4. x = F\u22121(A\u2220PS), y = yS (Construct target data with the synthesised A) Here, F(\u00b7) is the 2D Fourier transform, F(u, v) = F(x) = RR x(h, w)e\u2212i(hu+wv)dhdw, and A\u2220P stands for the polar representation of the Fourier frequency responses (complex numbers) for the amplitude image A and the phase image P. That is, A\u2220P = A \u00b7 ei\u00b7P = A \u00b7 (cos P + i sin P) with i = \u221a\u22121, where all operations are element/pixel-wise. Note that we set y = yS in step 4 since the original phase (semantic) information PS is retained in the synthesised x. 2This condition naturally originates from the solvability of the DG problem. 4 \fPublished as a conference paper at ICLR 2023 Algorithm summary. Finally the worst-case target MCD learning can be solved by adversarial learning, which can be implemented as an alternating optimisation: (Fix \u03bd) min \u03b8\u2208\u0398S MCD(H|S(\u03b8); T(\u03bd)) (8) (Fix \u03b8) max \u03bd MCD(H|S(\u03b8); T(\u03bd)) (9) We used T(\u03bd) to emphasise functional dependency of target images on the generator parameters \u03bd. Note that although the MCD loss in DA can be computed without the target domain labels (recall the de\ufb01nition (2)), in our DG case the class labels for the generated target data are available, as induced from the phase PS (i.e., y = yS in step 4). Thus we can modify the MCD loss by incorporating the target class labels. In the following we provide concrete derivations using the variational posterior inference, and propose a modi\ufb01ed MCD loss that takes into account the induced target class labels. 
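Before the concrete derivations, here is a minimal PyTorch sketch (our own, not the paper's code) of the target-sampling steps 1-4 above; the amplitude generator amp_gen stands in for G_ν and is assumed to output amplitude maps of the same spatial size as the input (the paper additionally constrains them to be non-negative and symmetric, cf. Sec. 3.2).

import torch

def synthesise_target(x_src, amp_gen, noise_dim=100):
    """Steps 1-4: keep the source phase P_S (semantics) and swap in a generated amplitude A."""
    spec = torch.fft.fft2(x_src)                    # step 2: Fourier transform of the source image x_S
    phase_src = torch.angle(spec)                   # phase spectrum P_S
    eps = torch.randn(x_src.shape[0], noise_dim, device=x_src.device)
    amp = amp_gen(eps)                              # step 3: synthesised amplitude A = G_nu(eps)
    target_spec = amp * torch.exp(1j * phase_src)   # polar recombination A * exp(i * P_S)
    x_tgt = torch.fft.ifft2(target_spec).real       # step 4: target image; the class label y = y_S is inherited
    return x_tgt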
3.1 CONCRETE DERIVATIONS USING VARIATIONAL INFERENCE Source-con\ufb01ned hypothesis space by variational inference. The posterior P(W|S, \u03b8) does not admit a closed form, and we approximate P(W|S, \u03b8) by the Gaussian variational density, Q\u03bb(W) = C Y j=1 N(wj; mj, Vj), (10) where \u03bb := {mj, Vj}C j=1 constitutes the variational parameters. To enforce Q\u03bb(W) \u2248P(W|S, \u03b8), we optimise the evidence lower bound (ELBO), ELBO(\u03bb, \u03b8; S) := X (x,y)\u223cS EQ\u03bb(W ) \u0002 log P(y|x, W, \u03b8) \u0003 \u2212KL \u0000Q\u03bb(W)||P(W) \u0001 , (11) which is the lower bound of the marginal data likelihood log P(S|\u03b8) (Appendix A.3 for derivations). Hence maximising ELBO(\u03bb, \u03b8; S) with respect to \u03bb tightens the posterior approximation Q\u03bb(W) \u2248 P(W|S, \u03b8), while maximising it with respect to \u03b8 leads to high data likelihood log P(S|\u03b8). The latter has the very effect of imposing the constraints \u03b8 \u2208\u0398S in (8) since one can transform constrained optimisation into a regularised (Lagrangian) form equivalently (Boyd & Vandenberghe, 2004). Optimising the MCD loss. The next thing is to minimise the MCD loss, MCD(H|S(\u03b8); T) with the current target domain T generated by the generator network G\u03bd. That is, solving (8). We follow the maximum margin learning strategy from (Kim et al., 2019), where the idea is to enforce the prediction consistency for different classi\ufb01ers (i.e., posterior samples) W \u223cQ\u03bb(W) on x \u223cT by separating the highest class score from the second highest by large margin. To understand the idea, let j\u2217be the model\u2019s predicted class label for x \u223cT, or equivalently let j\u2217have the highest class score j\u2217= arg maxj w\u22a4 j \u03c6(x) as per (3). (We drop the subscript in \u03c6\u03b8(x) for simplicity in notation.) We let j\u2020 be the second most probable class, i.e., j\u2020 = arg maxj\u0338=j\u2217w\u22a4 j \u03c6(x). Our model\u2019s class prediction would change if w\u22a4 j\u2217\u03c6(x) < w\u22a4 j\u2020\u03c6(x) for some W \u223cQ\u03bb(W), which leads to discrepancy of classi\ufb01ers. To avoid such overtaking, we need to ensure that the (plausible) minimal value of w\u22a4 j\u2217\u03c6(x) is greater than the (plausible) maximal value of w\u22a4 j\u2020\u03c6(x). Since the score (logit) fj(x) := w\u22a4 j \u03c6(x) is Gaussian under Q\u03bb(W), namely fj(x) \u223cN(\u00b5j(x), \u03c3j(x)2) where \u00b5j(x) = m\u22a4 j \u03c6(x), \u03c32 j (x) = \u03c6(x)\u22a4Vj\u03c6(x), (12) the prediction consistency is achieved by enforcing: \u00b5j\u2217(x) \u2212\u03b1\u03c3j\u2217(x) > \u00b5j\u2020(x) + \u03b1\u03c3j\u2020(x), where we can choose \u03b1 = 1.96 for 2.5% rare one-sided chance. By introducing slack variables \u03be(x) \u22650, \u00b5j\u2217(x) \u2212\u03b1\u03c3j\u2217(x) \u22651 + max j\u0338=j\u2217 \u0000\u00b5j(x) + \u03b1\u03c3j(x) \u0001 \u2212\u03be(x). (13) Satisfying the constraints amounts to ful\ufb01lling the desideratum of MCD minimisation, essentially imposing prediction consistency of classi\ufb01ers. Note that we add the constant 1 in the right hand side of (13) for the normalisation purpose to prevent the scale of \u00b5 and \u03c3 from being arbitrary small. 
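To illustrate (12) and the margin constraint (13), the sketch below (our own; PyTorch, diagonal covariances V_j, and all variable names ours) computes the per-class logit mean and standard deviation under Q_λ(W) and the resulting slack ξ(x).

import torch
import torch.nn.functional as F

def logit_moments(feat, m, v_diag):
    """feat: (B, d) features phi(x); m: (C, d) variational means m_j; v_diag: (C, d) diagonals of V_j.
    Returns mu_j(x) = m_j^T phi(x) and sigma_j(x) = sqrt(phi(x)^T V_j phi(x)), eq. (12) with diagonal V_j."""
    mu = feat @ m.t()                                     # (B, C)
    sigma = torch.sqrt((feat ** 2) @ v_diag.t() + 1e-12)  # (B, C)
    return mu, sigma

def margin_slack(mu, sigma, ref_idx, alpha=1.96):
    """Slack of the consistency constraint (13) for the reference class ref_idx."""
    mask = F.one_hot(ref_idx, mu.shape[1]).bool()
    lower = (mu - alpha * sigma)[mask]                              # pessimistic logit of the reference class
    upper = (mu + alpha * sigma).masked_fill(mask, float('-inf'))   # optimistic logits of the other classes
    return (1.0 + upper.max(dim=1).values - lower).clamp(min=0.0)

With ref_idx set to the model's top-scoring class this is the violation of (13); with ref_idx set to the induced target label, the batch mean of this slack is the empirical version of the supervised loss defined in (16) below.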
The constraints in (13) can be translated into the following MCD loss (as a function of \u03b8): MCD(\u03b8; T) := Ex\u223cT \u0010 1 + T 2\u0000\u00b5j(x) + \u03b1\u03c3j(x) \u0001 \u2212T 1\u0000\u00b5j(x) \u2212\u03b1\u03c3j(x) \u0001\u0011 + (14) 5 \fPublished as a conference paper at ICLR 2023 Figure 2: Illustration of the SMCD loss on three different hypothesis spaces H|S shown in three panels. For C = 3-way classi\ufb01cation case, each panel shows the class logit scores (Gaussian random) fj(x) \u223cN(\u00b5j(x), \u03c3j(x)2) for j = 1, 2, 3, at some input x \u2208T. We assume that the true (induced) class label y = 2. (Left) Since the mean logit for class 2, \u00b52(x) is the maximum among others, the prediction is marginally correct (from softmax). Beyond that, the logit of the worst plausible hypothesis for class 2, \u00b52(x) \u22121.96\u03c32(x) is greater than that of the runner-up class 1, \u00b51(x) + 1.96\u03c31(x) by some positive margin (green arrow), meaning there is little chance of prediction overtaking (so, consistent); equivalently, the SMCD loss is small. (Middle) Prediction is marginally correct, but prediction overtaking is plausible, indicated by the negative margin (red arrow); the SMCD loss is large. (Right) Incorrect marginal prediction (to class 1) with more severe negative margin (red arrow); the SMCD loss is even larger. where T k is the operator that selects the top-k element, and (a)+ = max(0, a). Modi\ufb01ed MCD loss. The above MCD loss does not utilise the target domain class labels y = yS that are induced from the phase information PS (Recall the target domain data generation steps 1 \u223c4 above). To incorporate the supervised data {(x, y)} \u2208T in the generated target domain, we modify the MCD loss as follows: First, instead of separating the margin between the two largest logit scores as in the MCD, we maximise the margin between the logit for the given class y and the largest logit among the classes other than y. That is, we replace the constraints (13) with the following: \u00b5y(x) \u2212\u03b1\u03c3y(x) \u22651 + max j\u0338=y \u0000\u00b5j(x) + \u03b1\u03c3j(x) \u0001 \u2212\u03be(x), (15) where y is the class label (induced from the phase information) for the generated instance x. See Fig. 2 for illustration of the idea. Consequently, our new MCD loss (coined supervised MCD or SMCD for short) is de\ufb01ned as follows: SMCD(\u03b8; T) := E(x,y)\u223cT \u0010 1 + max j\u0338=y \u0000\u00b5j(x) + \u03b1\u03c3j(x) \u0001 \u2212 \u0000\u00b5y(x) \u2212\u03b1\u03c3y(x) \u0001\u0011 +. (16) Here the variational parameters \u03bb is treated as constant since the only role of \u03bb is to maximise the ELBO. It should be noted that (16) essentially aims at maximising the logit for the given class y (the last term), or equivalently, classi\ufb01cation error minimisation on T, and at the same time minimising the logit for the runner-up class (the middle max term). Surprisingly, the former amounts to minimising the minimal source-target error term e\u2217(H; S, T) in the generalisation bound (1), which we have left out so far. That is, e\u2217(H; S, T) = minh\u2208H eS(h) + eT (h) \u2248minh\u2208H|S eT (h), and the last term of the SMCD loss leads to \u03b8 that makes eT (h) small for all h \u2208H|S(\u03b8). Moreover, minimising the logit for the runner-up class (the middle max term of the SMCD) has the effect of margin maximisation. Algorithm summary. 
Our AGFA algorithm can be understood as MCD-based DA with adversarial amplitude generated target domain. It entails the following alternating optimisation (\u03b7 > 0 is the trade-off hyperparameter for SMCD): 1. min\u03bb,\u03b8 \u2212ELBO(\u03bb, \u03b8; S) + \u03b7SMCD(\u03b8; T) (model learning + VI; \u03bd \ufb01xed) 2. max\u03bd SMCD(\u03b8; T) (adversarial generator learning; \u03b8, \u03bb \ufb01xed) Our algorithm is summarised in Alg. 1 (in Appendix) and illustrated schematically in Fig. 1. At test time, we can apply the classi\ufb01er (3) with the learned \u03b8 and any sample W \u223cQ\u03bb(W) to target domain inputs to predict class labels. In our experiments, we take the posterior means wj = mj instead of sampling from Q\u03bb(W). 3.2 FURTHER CONSIDERATIONS Post-synthesis mixup of generated amplitude images. In our adversarial learning, the amplitude generator network G\u03bd synthesises target domain image samples that have highly challenging 6 \fPublished as a conference paper at ICLR 2023 amplitude spectra to the current model. Although we retain the phase information from source domains, unconstrained amplitude images can potentially alter the semantic content destructively (e.g., a constant zero amplitude image would zero out the image content), rendering it impossible to classify. To this end, instead of using the generator\u2019s output A = G\u03bd(\u03f5) directly, we combine it with the source domain amplitude image corresponding to the phase image by simple mixup. That is, by letting AS be the amplitude spectra corresponding to the phase PS, we alter A as: A \u2190\u03bbA + (1 \u2212\u03bb)AS where \u03bb \u223cUniform(0, \u03b1). (17) This post-synthesis mixup can address our desideratum C3 that we discussed before, that is, the generated samples for the target domain need to be distinguishable by class to solve the DG problem. Post-synthesis mixup, ensures synthesised amplitude images lie closer to the amplitude manifold of the source data, ensuring the model can solve the classi\ufb01cation problem. Dense model averaging (SWAD). We found that the DG training becomes more stable and the target-domain test performance becomes more consistent when we use the dense model averaging strategy SWAD (Cha et al., 2021). We adopt the SWAD model averaging for the variational and model parameters (\u03bb, \u03b8) while the generator network is not averaged. Amplitude image structures. From the de\ufb01nition of the Fourier transform, the frequency domain function should be even-conjugate, i.e., F(\u2212u, \u2212v) = F(u, v), for the real-valued images. This implies that amplitude images are symmetric. Conversely, if the amplitude images are symmetric, inverse Fourier transform returns real-valued signals. Thus when generating amplitude images, we only generate the non-redundant part (frequencies) of the amplitude images. Also, the amplitude should be non-negative. We keep these constraints in mind when designing the generator network. 4 RELATED WORK MCD. Several studies have used the MCD principle for domain adaptation, to align a source model to unlabeled target data (Saito et al., 2018; Kim et al., 2019; Lu et al., 2020). We uniquely exploit the MCD principle for the DG problem, in the absence of target data, by using MCD to synthesise worst-case target domain data, as well as to adapt the model to that synthesised domain. Augmentation approaches to DG. Several DG approaches have been proposed based on data augmentation. 
Existing approaches either de\ufb01ne augmentation heuristics (Zhou et al., 2021b; Xu et al., 2021), or exploit domain adversarial learning \u2013 i.e., confusing a domain classi\ufb01er (Shankar et al., 2018; Zhou et al., 2020). Our adversarial learning is based on the much stronger (S)MCD principle that confuses a category classi\ufb01er. This provides much harder examples for robust learning, while our Fourier amplitude synthesis ensures the examples are actually recognisable. Alignment approaches to DG. Several approaches to DG are based on aligning between multiple source domains (Sun & Saenko, 2016; Ganin et al., 2016; Li et al., 2018c;b), under the assumption that a common feature across all source domains will be good for a held out target domain. Differently, we use the MCD principle to robustify our source trained model by aligning it with the synthesised worst-case target domain. 5 EXPERIMENTS We test our approach on the DomainBed benchmark (Gulrajani & Lopez-Paz, 2021), including: PACS (Li et al., 2017), VLCS (Fang et al., 2013), Of\ufb01ceHome (Venkateswara et al., 2017), TerraIncognita (Beery et al., 2018), and DomainNet (Peng et al., 2019). For each dataset, we adopt the standard leave-one-domain-out source/target domain splits. The overall training/test protocols are similar to (Gulrajani & Lopez-Paz, 2021; Cha et al., 2021). We use the ResNet-50 (He et al., 2016) as our feature extractor backbone, which is initialised by the pretrained weights on ImageNet (Deng et al., 2009). For the generator network, we found that a linear model performed the best for the noise dimension 100. Our model is trained by the Adam optimiser (Kingma & Ba, 2015) on machines with single Tesla V100 GPUs. The hyperparameters introduced in our model (e.g., SMCD trade-off \u03b7) and the general ones (e.g., learning rate, SWAD regime hyperparameters, maximum numbers of iterations) are chosen by grid search on the validation set according to the DomainBed protocol (Gulrajani & Lopez-Paz, 2021). For instance, \u03b7 = 0.1 for all datasets. The implementation details including chosen hyperparameters can be found in Appendix A.1. 7 \fPublished as a conference paper at ICLR 2023 Table 1: Average accuracies on DomainBed datasets. Note: \u2020 indicates that the results are excerpted from the published papers or (Gulrajani & Lopez-Paz, 2021). Our own runs are reported without \u2020. Note that FACT (Xu et al., 2021) adopted a slightly different data/domain split protocol from DomainBed\u2019s, explaining discrepancy on PACS. Algorithm PACS VLCS Of\ufb01ceHome TerraInc. DomainNet Avg. 
ERM (Cha et al., 2021)†  84.2  77.3  67.6  47.8  44.0  64.2
IRM (Arjovsky et al., 2019)†  83.5  78.6  64.3  47.6  33.9  61.6
GroupDRO (Sagawa et al., 2020)†  84.4  76.7  66.0  43.2  33.3  60.7
I-Mixup (Xu et al., 2020; Yan et al., 2020; Wang et al., 2020b)†  84.6  77.4  68.1  47.9  39.2  63.4
MLDG (Li et al., 2018a)†  84.9  77.2  66.8  47.8  41.2  63.6
CORAL (Sun & Saenko, 2016)†  86.2  78.8  68.7  47.7  41.5  64.5
MMD (Li et al., 2018b)†  84.7  77.5  66.4  42.2  23.4  58.8
DANN (Ganin et al., 2016)†  83.7  78.6  65.9  46.7  38.3  62.6
CDANN (Li et al., 2018c)†  82.6  77.5  65.7  45.8  38.3  62.0
MTL (Blanchard et al., 2021)†  84.6  77.2  66.4  45.6  40.6  62.9
SagNet (Nam et al., 2021)†  86.3  77.8  68.1  48.6  40.3  64.2
ARM (Zhang et al., 2020)†  85.1  77.6  64.8  45.5  35.5  61.7
VREx (Krueger et al., 2020)†  84.9  78.3  66.4  46.4  33.6  61.9
RSC (Huang et al., 2020)†  85.2  77.1  65.5  46.6  38.9  62.7
Mixstyle (Zhou et al., 2021b)†  85.2  77.9  60.4  44.0  34.0  60.3
FACT (Xu et al., 2021)†  88.2  −  66.6  −  −  −
FACT (Xu et al., 2021)  86.4  76.6  66.6  45.4  42.6  63.5
Amp-Mixup (Xu et al., 2021)  84.7  75.9  64.0  46.8  42.0  62.7
SWAD (Cha et al., 2021)†  88.1  79.1  70.6  50.0  46.5  66.9
FACT+SWAD  88.1  77.7  70.6  51.0  46.7  66.8
Amp-Mixup+SWAD  88.1  78.2  70.3  51.2  46.4  66.8
(Proposed) AGFA  89.3  79.5  71.5  52.4  47.1  68.0
5.1 MAIN RESULTS
The test accuracies averaged over target domains are summarised in Table 1, where the results for individual target domains are reported in Appendix A.2. The proposed approach performs the best for all datasets among the competitors, and the difference from the second best model (SWAD) is significant (about 1.1% margin). We particularly contrast with two recent approaches: SWAD (Cha et al., 2021) that adopts the dense model averaging with the simple ERM loss and FACT (Xu et al., 2021) that uses the Fourier amplitude mixup as means of data augmentation with additional student-teacher regularisation. First, SWAD (Cha et al., 2021) is the second best model in Table 1, implying that the simple ERM loss combined with the dense model averaging that seeks for flat minima is quite effective, also observed previously (Gulrajani & Lopez-Paz, 2021). FACT (Xu et al., 2021) utilises the Fourier amplitude spectra similar to our approach, but their main focus is data augmentation, producing more training images by amplitude mixup of source domain images. FACT also adopted the so-called teacher co-regularisation which forces the orders of the class prediction logits to be consistent between teacher and student models on the amplitude-mixup data. To disentangle the impact of these two components in FACT, we ran a model called Amp-Mixup that is simply FACT without teacher co-regularisation. The teacher co-regularisation yields further improvement in the average accuracy (FACT > Amp-Mixup in the last column of Table 1), verifying the claim in (Xu et al., 2021), although FACT is slightly worse than Amp-Mixup on VLCS and TerraIncognita. We also modified FACT and Amp-Mixup models by incorporating the SWAD model averaging (FACT+SWAD and Amp-Mixup+SWAD in the table). Clearly they perform even better in combination with SWAD. Since Amp-Mixup+SWAD can be seen as dropping the teacher regularisation and adopting the SWAD (regularisation) strategy instead, we can say that SWAD is more effective regularisation than student-teacher.
Nevertheless, despite the utilisation of amplitude-mixup augmentation, it appears that FACT and Amp-Mixup have little improvement over the ERM loss even when the SWAD strategy is used. This signifies the effect of the adversarial Fourier-based target domain generation in our approach, which exhibits significant improvement over ERM and SWAD.
5.2 FURTHER ANALYSIS
Sensitivity to η (SMCD strength). We analyze the sensitivity of the target domain generalisation performance to the SMCD trade-off hyperparameter η. We run our algorithm with different values of η; the results are shown in Fig. 3. Note that η = 0 ignores the SMCD loss term (thus the generator has no influence on the model training), which corresponds to the ERM approach. The test accuracy of the proposed approach remains significantly better than ERM/SWAD for all these η, with moderate variations around the best value. See Appendix A.2 for the results on individual target domains.
Figure 3: Sensitivity to η (SMCD trade-off) on PACS and OfficeHome; target accuracy (%) of the proposed method over a range of η, with the SWAD baseline for reference.
Figure 4: Sensitivity to α (post-mixup strength) on PACS and OfficeHome; target accuracy (%) of the proposed method over a range of α, with the SWAD baseline for reference.
Table 2: Ablation study: 1) unsupervised MCD (instead of SMCD), 2) without post-mixup, 3) without SWAD, and 4) pixel-based target image generation (instead of amplitude generation).
Art / Cartoon / Photo / Sketch / Average
Unsupervised MCD  88.94 ± 0.23  83.83 ± 0.19  97.27 ± 0.10  81.77 ± 0.36  87.95
Without post-mixup  88.90 ± 0.16  81.80 ± 0.17  97.43 ± 0.14  80.86 ± 0.31  87.25
Without SWAD  84.20 ± 0.68  81.56 ± 0.55  94.83 ± 0.12  79.28 ± 0.94  84.97
Pixel-based generation  88.85 ± 0.15  83.62 ± 0.26  97.23 ± 0.15  82.10 ± 0.63  87.95
(Proposed) AGFA  89.80 ± 0.34  85.16 ± 0.65  97.59 ± 0.27  84.67 ± 0.82  89.30
Sensitivity to α (post-synthesis mixup strength). We mix up the generated amplitude images and the source domain images as in (17) to make the adversarial target domain classification task solvable. The task becomes easier for small α (less impact of the generated amplitudes), and vice versa. Note that α = 0 ignores the generated amplitude images completely in post-mixup, and the training becomes close to ERM learning, where the only difference is that we utilise more basic augmentation (e.g., flip, rotation, color jittering). As shown in Fig. 4, the target test performance is not very sensitive around the best selected hyperparameters. See also the ablation study results on the impact of post-mixup below.
Impact of SMCD (vs. unsupervised MCD). We verify the positive effect of the proposed supervised MCD loss (SMCD in (16)), which exploits the induced target domain class labels, compared to the conventional (unsupervised) MCD loss (14) that does not use the target class labels.
The result in Table 2 supports our claim that exploiting target class labels induced from the phase information is quite effective, improving the target generalisation performance. Impact of post-synthesis mixup. We argued that our post-synthesis mixup of the generated amplitude images makes the class prediction task easier for the generated target domain, for the solvability of the DG problem. To verify this, we compare two models, with and without the post-mixup strategy in Table 2. The model trained with post-mixup performs better. Impact of SWAD. We adopted the SWAD model averaging scheme (Cha et al., 2021) for improving generalisation performance. We verify the impact of the SWAD as in Table 2 where the model without SWAD has lower target test accuracy signifying the importance of the SWAD model averaging. Impact of amplitude generation. The amplitude image generation in our adversarial MCD learning allows us to separate the phase and amplitude images and exploit the class labels induced by the phase information. However, one may be curious about how the model would work if we instead generate full images without phase/amplitude separation in an adversarial way. That is, we adopt a pixel-based adversarial image generator, and in turn replace our SMCD by the conventional MCD loss (since there are no class labels inducible in this strategy). We consider two generator architectures: linear (from 100-dim input noise to full image pixels) and nonlinear (a fully connected network with one hidden layer of 100 units), where the former slightly performs better. Table 2 shows that this pixel-based target image generation underperforms our amplitude generation. 6" + }, + { + "url": "http://arxiv.org/abs/2206.04920v1", + "title": "Fisher SAM: Information Geometry and Sharpness Aware Minimisation", + "abstract": "Recent sharpness-aware minimisation (SAM) is known to find flat minima which\nis beneficial for better generalisation with improved robustness. SAM\nessentially modifies the loss function by reporting the maximum loss value\nwithin the small neighborhood around the current iterate. However, it uses the\nEuclidean ball to define the neighborhood, which can be inaccurate since loss\nfunctions for neural networks are typically defined over probability\ndistributions (e.g., class predictive probabilities), rendering the parameter\nspace non Euclidean. In this paper we consider the information geometry of the\nmodel parameter space when defining the neighborhood, namely replacing SAM's\nEuclidean balls with ellipsoids induced by the Fisher information. Our\napproach, dubbed Fisher SAM, defines more accurate neighborhood structures that\nconform to the intrinsic metric of the underlying statistical manifold. For\ninstance, SAM may probe the worst-case loss value at either a too nearby or\ninappropriately distant point due to the ignorance of the parameter space\ngeometry, which is avoided by our Fisher SAM. Another recent Adaptive SAM\napproach stretches/shrinks the Euclidean ball in accordance with the scale of\nthe parameter magnitudes. This might be dangerous, potentially destroying the\nneighborhood structure. We demonstrate improved performance of the proposed\nFisher SAM on several benchmark datasets/tasks.", + "authors": "Minyoung Kim, Da Li, Shell Xu Hu, Timothy M. 
Hospedales", + "published": "2022-06-10", + "updated": "2022-06-10", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG" + ], + "main_content": "Introduction Contemporary deep learning models achieve state of the art generalisation performance on a wide variety of tasks. These models are often massively overparameterised, and capable of memorizing the entire training set (Zhang et al., 1Samsung AI Center, Cambridge, UK 2University of Edinburgh. Correspondence to: Minyoung Kim . Proceedings of the 39 th International Conference on Machine Learning, Baltimore, Maryland, USA, PMLR 162, 2022. Copyright 2022 by the author(s). 2017). The training loss landscape of such models is complex and non-convex with multiple local and global minima of highly varying generalisation performance. Good performance is therefore obtained by exploiting various explicit and implicit regularisation schemes during learning to \ufb01nd local minima in the training loss that actually generalise well. Methods such as dropout (Srivastava et al., 2014), weight-decay, and data augmentation have been developed to provide explicit regularisation, while the dynamics of optimisers such as SGD can provide implicit regularisation, by \ufb01nding solutions with low-norm weights (Chaudhar et al., 2017; Zhang et al., 2017). A number of studies have linked the \ufb02atness of a given training minima to generalisation quality (Keskar et al., 2017; Chaudhar et al., 2017). Searching for \ufb02at minima of the loss function is intuitively appealing, as it is obviously bene\ufb01cial for \ufb01nding models resilient to data noise and/or model parameter corruption/perturbation. This has led to an increasing number of optimisation methods (Chaudhar et al., 2017; Foret et al., 2021; Sun et al., 2021) designed to explicitly search for \ufb02at minima. Despite this variety of noteworthy theoretical and empirical work, existing approaches have yet to scalably solve this problem, as developing computationally ef\ufb01cient methods for \ufb01nding \ufb02at minima is non-trivial. A seminal method in this area is known as sharpness-aware minimisation (SAM) (Foret et al., 2021). SAM is a minimax type algorithm that essentially modi\ufb01es the loss function to report the maximum loss value within the small neighborhood around the current iterate. Optimising with SAM thus prefers \ufb02atter minima than conventional SGD. However, one of the main drawbacks of SAM is that it uses a Euclidean ball to de\ufb01ne the neighborhood, which is inaccurate since loss functions for neural networks are typically de\ufb01ned over probability distributions (e.g., class predictive probabilities), rendering the parameter space non Euclidean. Another recent approach called Adaptive SAM (ASAM) (Kwon et al., 2021) stretches/shrinks the Euclidean ball in accordance with the scales of the parameter magnitudes. However, this approach to determining the \ufb02atness ellipsoid of interest is heuristic and might severely degrade the neighborhood structure. Although SAM and ASAM are successful in many empirical tasks, ignorance of the underlying geometry of the model parameter space may lead to suboptimal results. 
arXiv:2206.04920v1 [cs.LG] 10 Jun 2022 \fFisher SAM: Information Geometry and Sharpness Aware Minimisation In this paper we build upon the ideas of SAM, but address the issue of a principled approach to determining the ellipsoid of interest by considering information geometry (Amari, 1998; Murray & Rice, 1993) of the model parameter space when de\ufb01ning the neighborhood. Specifically, we replace SAM\u2019s Euclidean balls with ellipsoids induced by the Fisher information. Our approach, dubbed Fisher SAM, de\ufb01nes more accurate neighborhood structures that conform to the intrinsic metric of the underlying statistical manifold. By way of comparison, SAM may probe the worst-case loss value at either a too nearby or too far point due to using a spherical neighborhood. In contrast Fisher SAM avoids this by probing the worst-case point within the ellipsoid derived from the Fisher information at the current point \u2013 thus providing a more principled and optimisation objective, and improving empirical generalisation performance. Our main contributions are as follows: 1. We propose a novel information geometry and sharpness aware loss function which addresses the abovementioned issues of the existing \ufb02at-minima optimisation approaches. 2. Our Fisher SAM is as ef\ufb01cient as SAM, only requiring double the cost of that of vanilla SGD, using the gradient magnitude approximation for Fisher information matrix. We also justify this approximation. 3. We provide a theoretical generalisation bound similar to SAM\u2019s using the prior covering proof technique in PAC-Bayes, in which we extend SAM\u2019s spherical Gaussian prior set to an ellipsoidal full-covariance set. 4. We demonstrate improved empirical performance of the proposed FSAM on several benchmark datasets and tasks: image classi\ufb01cation, ImageNet overtraining, \ufb01netuning; and label-noise robust learning; and robustness to parameter perturbation during inference. 2. Background Although \ufb02atness/sharpness of the loss function can be formally de\ufb01ned using the Hessian, dealing with (optimizing) the Hessian function is computationally prohibitive. As a remedy, the sharpness-aware minimisation (SAM for short) (Foret et al., 2021) introduced a novel robust loss function, where the new loss at the current iterate is de\ufb01ned as the maximum (worst-case) possible loss within the neighborhood at around it. More formally, considering a \u03b3-ball neighborhood, the robust loss l\u03b3 is de\ufb01ned as: l\u03b3(\u03b8) = max \u2225\u03f5\u2225\u2264\u03b3 l(\u03b8 + \u03f5), (1) where \u03b8 is the model parameters (iterate), and l(\u03b8) is the original loss function. Using the \ufb01rst-order Taylor (linear) approximation of l(\u03b8 + \u03f5), (1) becomes the famous dualnorm problem (Boyd & Vandenberghe, 2004), admitting a closed-form solution. In the Euclidean (L2) norm case, the solution becomes the normalised gradient, \u03f5\u2217 SAM(\u03b8) = \u03b3 \u2207l(\u03b8) \u2225\u2207l(\u03b8)\u2225. (2) Plugging (2) into (1) de\ufb01nes the SAM loss, while its gradient can be further simpli\ufb01ed by ignoring the (higher-order) gradient terms in \u2207\u03f5\u2217(\u03b8) for computational tractability: l\u03b3 SAM(\u03b8) = l(\u03b8\u2032), \u2207l\u03b3 SAM(\u03b8) = \u2202l(\u03b8) \u2202\u03b8 \f \f \f \f \u03b8=\u03b8\u2032 (3) where \u03b8\u2032 = \u03b8 + \u03f5\u2217 SAM(\u03b8). 
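For reference, a minimal sketch (ours; PyTorch, a single parameter tensor, and an optimiser that already holds that parameter are assumed) of one SAM update following (1)-(3):

import torch

def sam_step(param, loss_fn, opt, gamma=0.05):
    """One SAM update for a single parameter tensor (simplified sketch)."""
    grad = torch.autograd.grad(loss_fn(param), param)[0]
    eps = gamma * grad / (grad.norm() + 1e-12)               # eq. (2): worst-case direction on the gamma-ball
    with torch.no_grad():
        param.add_(eps)                                      # move to theta' = theta + eps*
    grad_p = torch.autograd.grad(loss_fn(param), param)[0]   # eq. (3): gradient evaluated at theta'
    with torch.no_grad():
        param.sub_(eps)                                      # return to theta before applying the update
    param.grad = grad_p
    opt.step()                                               # twice the forward/backward cost of plain SGD
    opt.zero_grad()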
In terms of computational complexity, SAM incurs only twice the forward/backward cost of the standard SGD: one forward/backward for computing \u03f5\u2217 SAM(\u03b8) and the other for evaluating the loss and gradient at \u03b8\u2032 = \u03b8 + \u03f5\u2217 SAM(\u03b8). More recently, a drawback of SAM, related to the model parameterisation, was raised by (Kwon et al., 2021), in which SAM\u2019s \ufb01xed-radius \u03b3-ball can be sensitive to the parameter re-scaling, weakening the connection between sharpness and generalisation performance. To address the issue, they proposed what is called Adaptive SAM (ASAM for short), which essentially re-de\ufb01nes the neighborhood \u03b3-ball with the magnitude-scaled parameters. That is, l\u03b3 ASAM(\u03b8) = max \u2225\u03f5/|\u03b8|\u2225\u2264\u03b3 l(\u03b8 + \u03f5), (4) where \u03f5/|\u03b8| is the elementwise operation (i.e., \u03f5i/|\u03b8i| for each axis i). It leads to the following maximum (worst-case) probe direction within the neighborhood, \u03f5\u2217 ASAM(\u03b8) = \u03b3 \u03b82\u2207l(\u03b8) \u2225\u03b8\u2207l(\u03b8)\u2225 (elementwise ops.). (5) The loss and gradient of ASAM are de\ufb01ned similarly as (3) with \u03b8\u2032 = \u03b8 + \u03f5\u2217 ASAM(\u03b8). 3. Our Method: Fisher SAM ASAM\u2019s \u03b3-neighborhood structure is a function of \u03b8, thus not \ufb01xed but adaptive to parameter scales in a quite intuitive way (e.g., more perturbation allowed for larger \u03b8i, and vice versa). However, ASAM\u2019s parameter magnitude-scaled neighborhood choice is rather ad hoc, not fully re\ufb02ecting the underlying geometry of the parameter manifold. Note that the loss functions for neural networks are typically dependent on the model parameters \u03b8 only through the predictive distributions p(y|x, \u03b8) where y is the target variable (e.g., the negative log-likelihood or cross-entropy loss, l(\u03b8) = Ex,y[\u2212log p(y|x, \u03b8)]). Hence the geometry of the \fFisher SAM: Information Geometry and Sharpness Aware Minimisation parameter space is not Euclidean but a statistical manifold induced by the Fisher information metric of the distribution p(y|x, \u03b8) (Amari, 1998; Murray & Rice, 1993). The intuition behind the Fisher information and statistical manifold can be informally stated as follows. When we measure the distance between two neural networks with parameters \u03b8 and \u03b8\u2032, we should compare the underlying distributions p(y|x, \u03b8) and p(y|x, \u03b8\u2032). The Euclidean distance of the parameters \u2225\u03b8\u2212\u03b8\u2032\u2225does not capture this distributional divergence because two distributions may be similar even though \u03b8 and \u03b8\u2032 are largely different (in L2 sense), and vice versa. For instance, even though p(x|\u03b8) = N(\u00b5, 1 + 0.001\u03c3) and p(x|\u03b8\u2032) = N(\u00b5\u2032, 1 + 0.001\u03c3\u2032) with \u03b8 = (\u00b5 = 1, \u03c3 = 10) and \u03b8\u2032 = (\u00b5\u2032 = 1, \u03c3\u2032 = 20) have large L2 distance, the underlying distributions are nearly the same. That is, the Euclidean distance is not a good metric for the parameters of a distribution family. We need to use statistical divergence instead, such as the Kullback-Leibler (KL) divergence, from which the Fisher information metric can be derived. Based on the idea, we propose a new SAM algorithm that fully re\ufb02ects the underlying geometry of the statistical manifold of the parameters. 
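A small numerical check of the intuition above (our own example, treating 1 + 0.001σ as the standard deviation): the two parameter vectors below are far apart in L2 yet index nearly identical distributions.

import math

def kl_gauss(m1, s1, m2, s2):
    """Closed-form KL( N(m1, s1^2) || N(m2, s2^2) )."""
    return math.log(s2 / s1) + (s1 ** 2 + (m1 - m2) ** 2) / (2 * s2 ** 2) - 0.5

theta, theta_p = (1.0, 10.0), (1.0, 20.0)                 # theta = (mu, sigma), theta' = (mu', sigma')
euclid = math.dist(theta, theta_p)                        # = 10.0: large Euclidean gap in parameter space
kl = kl_gauss(theta[0], 1 + 0.001 * theta[1], theta_p[0], 1 + 0.001 * theta_p[1])
print(euclid, kl)                                         # ~10.0 vs ~1e-4: the distributions nearly coincide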
In (1) we replace the Euclidean distance by the KL divergence1: l\u03b3 F SAM(\u03b8) = max d(\u03b8+\u03f5,\u03b8)\u2264\u03b32 l(\u03b8 + \u03f5) where (6) d(\u03b8\u2032, \u03b8) = Ex[KL(p(y|x, \u03b8\u2032)||p(y|x, \u03b8))], which we dub Fisher SAM (FSAM for short). For small \u03f5, it can be shown that d(\u03b8 + \u03f5, \u03b8) \u2248\u03f5\u22a4F(\u03b8)\u03f5 (See Appendix B for details), where F(\u03b8) is the Fisher information matrix, F(\u03b8) = ExE\u03b8 h \u2207log p(y|x, \u03b8)\u2207log p(y|x, \u03b8)\u22a4i . (7) That is, our Fisher SAM loss function can be written as: l\u03b3 F SAM(\u03b8) = max \u03f5\u22a4F (\u03b8)\u03f5\u2264\u03b32 l(\u03b8 + \u03f5). (8) We solve (8) using the \ufb01rst-order approximated objective l(\u03b8+\u03f5) \u2248l(\u03b8)+\u2207l(\u03b8)\u22a4\u03f5, leading to a quadratic constrained linear programming problem. The Lagrangian is L(\u03f5, \u03bb) = l(\u03b8) + \u2207l(\u03b8)\u22a4\u03f5 \u2212\u03bb(\u03f5\u22a4F(\u03b8)\u03f5 \u2212\u03b32), (9) and solving \u2202L \u2202\u03f5 = 0 yields \u03f5\u2217= 1 2\u03bbF(\u03b8)\u22121\u2207l(\u03b8). Plugging this into the ellipsoidal constraint (from the KKT conditions) determines the optimal \u03bb, resulting in: \u03f5\u2217 F SAM(\u03b8) = \u03b3 F(\u03b8)\u22121\u2207l(\u03b8) p \u2207l(\u03b8)F(\u03b8)\u22121\u2207l(\u03b8) . (10) 1To be more rigorous, one can consider the symmetric JensenShannon divergence, d(\u03b8\u2032, \u03b8) = 0.5 \u00b7 Ex[KL(\u03b8\u2032||\u03b8) + KL(\u03b8||\u03b8\u2032)]. For \u03b8\u2032 \u2248\u03b8, however, the latter KL term vanishes (easily veri\ufb01ed using the derivations similar to those in Appendix B), and it coincides with the KL divergence in (6) (up to a constant factor). The loss and gradient of Fisher SAM are de\ufb01ned similarly as (3) with \u03b8\u2032 = \u03b8 + \u03f5\u2217 F SAM(\u03b8). Approximating Fisher. Dealing with a large dense matrix F(\u03b8) (and its inverse) is prohibitively expensive. Following the conventional practice, we consider the empirical diagonalised minibatch approximation, F(\u03b8) \u2248 1 |B| X i\u2208B Diag \u0000\u2207log p(yi|xi, \u03b8) \u00012, (11) for a minibatch B = {(xi, yi)}. Diag(v) is a diagonal matrix with vector v embedded in the diagonal entries. However, it is still computationally cumbersome to handle instance-wise gradients in (11) using the off-theshelf auto-differentiation numerical libraries such as PyTorch (Paszke et al., 2019), Tensor\ufb02ow (Abadi et al., 2015) or JAX (Bradbury et al., 2018) that are especially tailored for the batch sum of gradients for the best ef\ufb01ciency. The sum of squared gradients in (11) has a similar form as the Generalised Gauss-Newton (GGN) approximation for a Hessian (Schraudolph, 2002; Graves, 2011; Martens, 2014). Motivated from the gradient magnitude approximation of Hessian/GGN (Bottou et al., 2018; Khan et al., 2018), we replace the sum of gradient squares with the square of the batch gradient sum, \u02c6 F(\u03b8) = Diag \u0012 1 |B| X i\u2208B \u2207log p(yi|xi, \u03b8) \u00132 . (12) Note that (12) only requires the gradient of the batch sum of the logits (prediction scores), a very common form ef\ufb01ciently done by the off-the-shelf auto-differentiation libraries. If we adopt the negative log-loss (cross-entropy), it further reduces to \u02c6 F(\u03b8) = Diag(\u2207lB(\u03b8))2 where lB(\u03b8) is the minibatch estimate of l(\u03b8). 
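Putting (10) and (12) together, a minimal sketch (ours; PyTorch, single parameter tensor) of the Fisher SAM perturbation; here eta plays the role of the small positive regulariser added to the diagonal Fisher before inversion, as discussed next.

import torch

def fisher_sam_perturbation(grad, gamma=0.1, eta=1e-3):
    """Fisher SAM direction, eq. (10), with the gradient-magnitude Fisher of eq. (12):
    F_hat = diag(grad^2), so F_hat^{-1} grad = grad / (grad^2 + eta)."""
    fisher_diag = grad ** 2 + eta                           # diagonal approximate Fisher (plus regulariser)
    nat_grad = grad / fisher_diag                           # F^{-1} grad
    denom = torch.sqrt((grad * nat_grad).sum() + 1e-12)     # sqrt( grad^T F^{-1} grad )
    return gamma * nat_grad / denom

# A Fisher SAM step then mirrors the SAM step sketched earlier: perturb theta by this eps*,
# recompute the gradient at theta + eps*, and apply the update (cf. Alg. 1).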
For the inverse of the Fisher information in (10), we add a small positive regulariser to the diagonal elements before taking the reciprocal. Although this gradient magnitude approximation can introduce unwanted bias to the original F(\u03b8) (the amount of bias being dependent on the degree of cross correlation between \u2207log p(yi|xi, \u03b8) terms), it is a widely adopted technique for learning rate scheduling also known as average squared gradients in modern optimisers such as RMSprop, Adam, and AdaGrad. Furthermore, the following theorem from (Khan et al., 2018) justi\ufb01es the gradient magnitude approximation by relating the squared sum of vectors and the sum of squared vectors. Theorem 3.1 (Rephrased from Theorem 1 (Khan et al., 2018)). Let {v1, . . . , vN} be the population vectors, and B \u2282{1 . . . N} be a uniformly sampled (w/ replacement) minibatch with M = |B|. Denoting the minibatch and population averages by v(B) = 1 M P i\u2208B vi and v = \fFisher SAM: Information Geometry and Sharpness Aware Minimisation Algorithm 1 Fisher SAM. Input: Training set S = {(xi, yi)}, neighborhood size \u03b3, regulariser \u03b7 for inverse Fisher, and learning rate \u03b1. for t = 1, 2, . . . do 1) Sample a batch B \u223cS. 2) Compute the gradient of the batch loss \u2207lB(\u03b8). 3) Compute the approximate Fisher info \u02c6 F(\u03b8) as per (12). 4) Compute \u03f5\u2217 F SAM(\u03b8) using (10). 5) Compute gradient approximation for the Fisher SAM loss, \u2207l\u03b3 F SAM(\u03b8) = \u2202lB(\u03b8) \u2202\u03b8 \f \f \f \u03b8+\u03f5\u2217 F SAM (\u03b8). 6) Update: \u03b8 \u2190\u03b8 \u2212\u03b1\u2207l\u03b3 F SAM(\u03b8). end for 1 N PN i=1 vi, respectively, for some constant \u03b1, 1 N N X i=1 viv\u22a4 i = \u03b1EB[v(B)v(B)\u22a4] + (1 \u2212\u03b1)vv\u22a4. (13) Although it is proved in (Khan et al., 2018), we provide full proof here for self-containment. Proof. We denote by Vi(vi) and VB(\u00b7) the population variance and variance over B, respectively. Let A be the LHS of (13). Then Vi(vi) = A \u2212vv\u22a4. Also VB(v(B)) = EB[v(B)v(B)\u22a4] \u2212vv\u22a4since EB[v(B)] = v. From Theorem 2.2 of (Cochran, 1977), VB(v(B)) = N\u2212M M(N\u22121)Vi(vi). By arranging the terms, with \u03b1 = M(N\u22121) N\u2212M , we have A = \u03b1EB[v(B)v(B)\u22a4] + (1 \u2212\u03b1)vv\u22a4. The theorem essentially implies that the sum of squared gradients (LHS of (13)) gets close to the squared sum of gradients (v(B)v(B)\u22a4or vv\u22a4) if the batch estimate v(B) is close enough to its population version v 2. The Fisher SAM algorithm3 is summarized in Alg. 1. Now we state our main theorem for generalisation bound of Fisher SAM. Speci\ufb01cally we bound the expectation of the generalisation loss over the Gaussian perturbation that aligns with the Fisher information geometry. Theorem 3.2 (Generalisation bound of Fisher SAM). Let \u0398 \u2286Rk be the model parameter space that satis\ufb01es the regularity conditions in Appendix A. For any \u03b8 \u2208\u0398, with probability at least 1 \u2212\u03b4 over the choice of the training set S (|S| = n), the following holds. E\u03f5[lD(\u03b8 + \u03f5)] \u2264l\u03b3 F SAM(\u03b8; S) + s O(k + log n \u03b4 ) n \u22121 , (14) 2For instance, the two terms in the RHS of (13) can be approximately merged into a single squared sum of gradients. 3In the current version, we take the vanilla gradient update (step 6 in Alg. 1). 
However, it is possible to take the natural gradient update instead (by pre-multiplying the update vector by the inverse Fisher information), which can be bene\ufb01cial for other methods SGD and SAM, likewise. Nevertheless, we leave it and related further extensive study as future work. where lD(\u00b7) is the generalisation loss, l\u03b3 F SAM(\u00b7; S) is the empirical Fisher SAM loss as in (8), and the expectation is over \u03f5 \u223cN(0, \u03c12F(\u03b8)\u22121) for \u03c1 \u221d\u03b3. Remark 3.3. Compared to SAM\u2019s generalisation bound in Appendix A.1 of (Foret et al., 2021), the complexity term is asymptotically identical (only some constants are different). However, the expected generalisation loss in the LHS of (14) is different: we have perturbation of \u03b8 aligned with the Fisher geometry of the model parameter space (i.e., \u03f5 \u223cN(0, \u03c12F(\u03b8)\u22121)), while in SAM they bound the generalisation loss averaged over spherical Gaussian perturbation, E\u03f5\u223cN (0,\u03c12I)[lD(\u03b8 + \u03f5)]. The latter might be an inaccurate measure for the average loss since the perturbation does not conform to the geometry of the underlying manifold. The full proof is provided in Appendix A, and we describe the proof sketch here. Proof (sketch). Our proof is an extension of (Foret et al., 2021)\u2019s proof, in which the PAC-Bayes bound (McAllester, 1999) is considered for a pre-de\ufb01ned set of prior distributions, among which the one closest to the posterior is chosen to tighten the bound. In (Foret et al., 2021), the posterior is a spherical Gaussian (corresponding to a Euclidean ball) with the variance being independent of the current model \u03b8. Then the prior set can be pre-de\ufb01ned as a series of spherical Gaussians with increasing variances so that there always exists a member in the prior set that matches the posterior by only small error. In our case, however, the posterior is a Gaussian with Fisher-induced ellipsoidal covariance, thus covariance being dependent on the current \u03b8. This implies that the prior set needs to be pre-de\ufb01ned more sophisticatedly to adapt to a not-yet-seen posterior. Our key idea is to partition the model parameter space \u0398 into many small Fisher ellipsoids Rj \u225c{\u03b8 | (\u03b8 \u2212\u03b8j)\u22a4F(\u03b8j)(\u03b8 \u2212\u03b8j) \u2264r2 j}, j = 1, . . . , J, for some \ufb01xed points {\u03b8j}, and we de\ufb01ne the priors to be aligned with these ellipsoids. Then it can be shown that under some regularity conditions, any Fisher-induced ellipsoidal covariance of the posterior can match one of the Rj\u2019s with small error, thus tightening the bound. 3.1. Fisher SAM Illustration: Toy 2D Experiments We devise a synthetic setup with 2D parameter space to illustrate the merits of the proposed FSAM against previous SAM and ASAM. The model we consider is a univariate Gaussian, p(x; \u03b8) = N(x; \u00b5, \u03c32) where \u03b8 = (\u00b5, \u03c3) \u2208R \u00d7 R+ \u2282R2. For the loss function, we aim to build a one with two local minima, one with sharp curvature, the other \ufb02at. We further con\ufb01ne the loss to be a function of the model likelihood p(x; \u03b8) so that the the parameter space becomes a manifold with the Fisher information metric. To this end, we de\ufb01ne the loss function as a negative log-mixture of two \fFisher SAM: Information Geometry and Sharpness Aware Minimisation KL-driven energy models. 
More speci\ufb01cally, l(\u03b8) = \u2212log \u0010 \u03b11e\u2212E1(\u03b8)/\u03b22 1 + \u03b12e\u2212E2(\u03b8)/\u03b22 2 \u0011 , (15) where Ei(\u03b8) = KL \u0000p(x; \u03b8)||N(x; mi, s2 i ) \u0001 , i = 1, 2. We set constants as: (m1, s1, \u03b11, \u03b21) = (20, 30, 0.7, 1.8) and (m2, s2, \u03b12, \u03b22) = (\u221220, 10, 0.3, 1.2). Since \u03b2i determines the component scale, we can guess that the \ufb02at minimum is at around (m1, s1) (larger \u03b21), and the sharp one at around (m2, s2) (smaller \u03b22). The contour map of l(\u03b8) is depicted in Fig. 1, where the two minima numerically found are: \u03b8flat = (19.85, 29.95) and \u03b8sharp = (\u221215.94, 13.46) as marked in the \ufb01gure. We prefer the \ufb02at minimum (marked as star/blue) to the sharp one (dot/magenta) even though \u03b8sharp attains slightly lower loss. Comparing the neighborhood structures at the current iterate (\u00b5, \u03c3), SAM has a circle, {(\u03f51, \u03f52) | \u03f52 1 +\u03f52 2 \u2264\u03b32}, whereas FSAM has an ellipse, {(\u03f51, \u03f52) | \u03f52 1/\u03c32 + \u03f52 2/(\u03c32/2) \u2264\u03b32} since the Fisher information for Gaussian is F(\u00b5, \u03c3) = Diag(1/\u03c32, 2/\u03c32). Note that the latter is the intrinsic metric for the underlying parameter manifold. Thus when \u03c3 is large (away from 0), it is a valid strategy to explore more aggressively to probe the worst-case loss in both axes (as FSAM does). On the other hand, SAM is unable to adapt to the current iterate and probes relatively too little, which hinders \ufb01nding a sensible robust loss function. This scenario is illustrated in Fig. 2. The initial iterate (diamond/green) has a large \u03c3 value, and FSAM makes aggressive exploration in both axes, helping us move toward the \ufb02at minimum. However, SAM makes too narrow exploration, merely converging to the relatively nearby sharp minimum. For ASAM, the neighborhood at current iterate (\u00b5, \u03c3) is the magnitudescaled ellipse, {(\u03f51, \u03f52) | \u03f52 1/\u00b52 + \u03f52 2/\u03c32 \u2264\u03b32}. Thus when \u00b5 is close to 0, for instance, \u03f51 is not allowed to perturb much, hindering effective exploration of the parameter space toward robustness, as illustrated in Fig. 3. 4. Related Work Interest in \ufb02at minima for neural networks dates back to at least (Hochreiter & Schmidhuber, 1995; 1997), who characterise \ufb02atness as the size of the region around which the training loss remains low. Many studies have since investigated the link between \ufb02at minima and generalisation performance (Keskar et al., 2017; Neyshabur et al., 2017; Dziugaite & Roy, 2017; Dinh et al., 2017). In particular, the correlation between sharpness and generalisation performance was studied with diverse measures empirically on large-scale experiments: (Jiang et al., 2020). Beyond the IID setting, (Cha et al., 2021) analyse the impact of \ufb02at minima on generalisation performance under domain-shift. Motivated by these analyses, several methods have been proposed to visualise and optimise for \ufb02at minima, with Figure 1. (Toy experiment) Contour plot of the loss function. The \ufb02at minimum is shown as star/blue \u03b8flat = (19.85, 29.95) (l = 0.51, H = 0.001), and the sharp one as circle/magenta \u03b8sharp = (\u221215.94, 13.46) (l = 0.49, H = 0.006), with their loss values and Hessian traces shown in parentheses. the aim of improving generalisation. (Li et al., 2018) developed methods for visualising loss surfaces to inspect minima shape. 
(Zhu et al., 2019) analyse how the noise in SGD promotes preferentially discovering \ufb02at minima over sharp minima, as a potential explanation for SGD\u2019s strong generalisation. Entropy-SGD (Chaudhar et al., 2017) biases gradient descent toward \ufb02at minima by regularising local entropy. Stochastic Weight Averaging (SWA) (Izmailov et al., 2018) was proposed as an approach to \ufb01nd \ufb02atter minima by parameter-space averaging of an ensemble of models\u2019 weights. The state-of-the-art SAM (Foret et al., 2021) and ASAM (Kwon et al., 2021) \ufb01nd \ufb02at minima by reporting the worst-case loss within a ball around the current iterate. Our Fisher SAM builds upon these by providing a principled approach to de\ufb01ning a non-Euclidean ball within which to compute the worst-case loss. 5. Experiments We empirically demonstrate generalisation performance (Sec. 5.1\u20135.3) and noise robustness (Sec. 5.4, 5.5) of the proposed Fisher SAM method. As competing approaches, we consider the vanilla (non-robust) optimisation (SGD) as a baseline, and two latest SAM approaches: SAM (Foret et al., 2021) that uses Euclidean-ball neighborhood and ASAM (Kwon et al., 2021) that employs parameter-scaled neighborhood. Our approach, forming Fisher-driven neighborhood, is denoted by FSAM. For the implementation of Fisher SAM in the experiments, instead of simply adding a regulariser to each diagonal entry fi of the Fisher information matrix \u02c6 F(\u03b8), we take 1/(1 + \u03b7fi) as the diagonal entry of the inverse Fisher. Hence \u03b7 serves as anti-regulariser (e.g., small \u03b7 diminishes or regularises the Fisher impact). We \ufb01nd this implementation performs better than simply adding a regulariser. In most of our experiments, we set \u03b7 = 1.0. Certain \fFisher SAM: Information Geometry and Sharpness Aware Minimisation Figure 2. (Toy experiment) SAM vs. FSAM. X-axis is \u00b5 and Y-axis is \u03c3. (Left=SAM) SAM failed due to the inaccurate neighborhood structure of Euclidean ball. (Right=FSAM) FSAM \ufb01nds the \ufb02at minimum due to the accurate neighborhood structure from Fisher information metric. Initial iterate shown as diamond/green; the neighborhood ball is depicted as yellow circle/ellipse; the worst-case probe within the neighborhood is indicated by cyan arrow, update direction is shown as red arrow. The sizes of circles/ellipses are adjusted for better visualisation. Figure 3. (Toy experiment) ASAM vs. FSAM. X-axis is \u00b5 and Y-axis is \u03c3. (Left=ASAM) ASAM failed due to the inaccurate neighborhood structure. Especially when the magnitude of a particular parameter value, in this case \u00b5, is small (close to 0), it overly penalises perturbation along the axis. The parameter \u00b5 has small magnitude initially, which forms an incorrect neighborhood ellipse overly shrunk along the X-axis, preventing it from \ufb01nding an effective worst-case probe through X-axis perturbation. (Right=FSAM) FSAM \ufb01nds the \ufb02at minimum due to the accurate neighborhood structure from Fisher information metric. Initial iterate shown as diamond/green; the neighborhood ball is depicted as yellow circle/ellipse; the worst-case probe within the neighborhood is indicated by cyan arrow, update direction is shown as red arrow. Table 1. Test accuracies on CIFAR-10 and CIFAR-100. 
CIFAR-10 CIFAR-100 SGD SAM ASAM FSAM SGD SAM ASAM FSAM DenseNet-121 91.83\u00b10.13 92.44\u00b10.28 92.70\u00b10.30 92.81 92.81 92.81\u00b10.17 71.26\u00b10.15 72.83\u00b10.01 73.10\u00b10.23 73.15 73.15 73.15\u00b10.33 ResNet-20 92.91\u00b10.13 92.99\u00b10.16 92.92\u00b10.15 93.18 93.18 93.18\u00b10.11 68.24\u00b10.34 68.61\u00b10.26 68.68\u00b10.11 69.04 69.04 69.04\u00b10.30 ResNet-56 95.37\u00b10.06 95.59\u00b10.14 95.63\u00b10.07 95.71 95.71 95.71\u00b10.08 75.52\u00b10.27 76.44\u00b10.26 76.32\u00b10.14 76.86 76.86 76.86\u00b10.16 VGG-19-BN 95.70\u00b10.09 96.11\u00b10.09 95.97\u00b10.10 96.17 96.17 96.17\u00b10.07 73.45\u00b10.32 77.25\u00b10.24 74.36\u00b10.19 77.86 77.86 77.86\u00b10.22 ResNeXt-29-32x4d 96.55\u00b10.15 97.27\u00b10.10 97.29\u00b10.06 97.47 97.47 97.47\u00b10.05 79.36\u00b10.19 82.63\u00b10.16 82.41\u00b10.31 82.92 82.92 82.92\u00b10.15 WRN-28-2 95.56\u00b10.22 96.28\u00b10.14 96.25\u00b10.07 96.51 96.51 96.51\u00b10.08 78.85\u00b10.25 79.87\u00b10.13 80.17\u00b10.14 80.22 80.22 80.22\u00b10.26 WRN-28-10 97.12\u00b10.10 97.56\u00b10.06 97.63\u00b10.04 97.89 97.89 97.89\u00b10.07 83.47\u00b10.21 85.60 85.60 85.60\u00b10.05 85.20\u00b10.18 85.60 85.60 85.60\u00b10.11 PyramidNet-272 97.73\u00b10.04 97.91\u00b10.02 97.91\u00b10.01 97.93 97.93 97.93\u00b10.04 83.46\u00b10.02 85.19\u00b10.04 85.05\u00b10.11 86.93 86.93 86.93\u00b10.14 \fFisher SAM: Information Geometry and Sharpness Aware Minimisation multi-GPU/TPU gradient averaging heuristics called the m-sharpness trick empirically improves the generalisation performance of SAM and ASAM (Foret et al., 2021). However, since the trick is theoretically less justi\ufb01ed, we do not use the trick in our experiments for fair comparison. 5.1. Image Classi\ufb01cation The goal of this section is to empirically compare generalisation performance of the competing algorithms: SGD, SAM, ASAM, and our FSAM on image classi\ufb01cation problems. Following the experimental setups suggested in (Foret et al., 2021; Kwon et al., 2021), we employ several ResNet (He et al., 2016)-based backbone networks including WideResNet (Zagoruyko & Komodakis, 2016), VGG (Simonyan & Zisserman, 2015), DenseNet (Huang et al., 2017), ResNeXt (Xie et al., 2017), and PyramidNet (Han et al., 2017), on the CIFAR-10/100 datasets (Krizhevsky, 2009). Similar to (Foret et al., 2021; Kwon et al., 2021), we use the SGD optimiser with momentum 0.9, weight decay 0.0005, initial learning rate 0.1, cosine learning rate scheduling (Loshchilov & Hutter, 2016), for up to 200 epochs (400 for SGD) with batch size 128. For the PyramidNet, we use batch size 256, initial learning rate 0.05 trained up to 900 epochs (1800 for SGD). We also apply Autoaugment (Cubuk et al., 2019), Cutout (DeVries & Taylor, 2017) data augmentation, and the label smoothing (M\u00a8 uller et al., 2019) with factor 0.1 is used for de\ufb01ning the loss function. We perform the grid search to \ufb01nd best hyperparameters (\u03b3, \u03b7) for FSAM, and they are (\u03b3 = 0.1, \u03b7 = 1.0) for both CIFAR-10 and CIFAR-100 across all backbones except for PyramidNet. For the PyramidNet on CIFAR-100, we set (\u03b3 = 0.5, \u03b7 = 0.1). For SAM and ASAM, we follow the best hyperparameters reported in their papers: (SAM) \u03b3 = 0.05 and (ASAM) \u03b3 = 0.5, \u03b7 = 0.01 for CIFAR-10 and (SAM) \u03b3 = 0.1 and (ASAM) \u03b3 = 1.0, \u03b7 = 0.1 for CIFAR100. For the PyramidNet, (SAM) \u03b3 = 0.05 and (ASAM) \u03b3 = 1.0. 
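As an illustration of how Alg. 1 plugs into this training recipe, below is a minimal sketch of one Fisher SAM update wrapped around SGD with momentum and a cosine schedule, using the reported (\u03b3 = 0.1, \u03b7 = 1.0) and the 1/(1 + \u03b7f) inverse-Fisher diagonal. The linear model and random batch are stand-ins for the actual backbones and CIFAR loaders.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

def fsam_step(model, x, y, opt, gamma=0.1, eta=1.0):
    """One Fisher SAM update (Alg. 1); cross-entropy loss so that F_hat = diag(grad)^2."""
    # Steps 2-3: batch gradient and diagonal Fisher approximation.
    params = [p for p in model.parameters()]
    grads = torch.autograd.grad(F.cross_entropy(model(x), y), params)
    # Step 4: eps* from Eq. (10), with 1/(1 + eta * f_i) as the inverse-Fisher diagonal.
    inv_f = [1.0 / (1.0 + eta * g.pow(2)) for g in grads]
    denom = torch.sqrt(sum((g * f * g).sum() for g, f in zip(grads, inv_f))) + 1e-12
    eps = [gamma * f * g / denom for g, f in zip(grads, inv_f)]
    # Step 5: gradient of the loss evaluated at theta + eps*.
    with torch.no_grad():
        for p, e in zip(params, eps):
            p.add_(e)
    opt.zero_grad()
    F.cross_entropy(model(x), y).backward()
    with torch.no_grad():
        for p, e in zip(params, eps):
            p.sub_(e)                                # restore theta before updating
    opt.step()                                       # Step 6: SGD update with the FSAM gradient

# Usage sketch (hyperparameters mirror the reported recipe; model and batch are stand-ins).
model = nn.Linear(3 * 32 * 32, 10)
opt = torch.optim.SGD(model.parameters(), lr=0.1, momentum=0.9, weight_decay=5e-4)
sched = torch.optim.lr_scheduler.CosineAnnealingLR(opt, T_max=200)
x, y = torch.randn(128, 3 * 32 * 32), torch.randint(0, 10, (128,))
fsam_step(model, x, y, opt)
sched.step()
```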
The results are summarized in Table 1, where Fisher SAM consistently outperforms SGD and previous SAM approaches for all backbones. This can be attributed to FSAM\u2019s correct neighborhood estimation that respects the underlying geometry of the parameter space. 5.2. Extra (Over-)Training on ImageNet For a large-scale experiment, we consider the ImageNet dataset (Deng et al., 2009). We use the DeiT-base (denoted by DeiT-B) vision transformer model (Touvron et al., 2021) as a powerful backbone network. Instead of training the DeiT-B model from the scratch, we use the publicly available4 ImageNet pre-trained parameters as a warm start, and perform \ufb01netuning with the competing loss functions. Since 4https://github.com/facebookresearch/deit Table 2. Extra-training results (test accuracies) on ImageNet. Before extra-training starts, 81.94% (Top-1) and 95.63% (Top-5). SGD SAM ASAM FSAM Top-1 81.97\u00b10.01 81.99\u00b10.01 82.03\u00b10.04 82.17 82.17 82.17\u00b10.01 Top-5 95.61\u00b10.01 95.71\u00b10.03 95.83\u00b10.04 95.90 95.90 95.90\u00b10.01 Table 3. Test accuracies for transfer learning. SGD SAM ASAM FSAM CIFAR 87.97\u00b10.12 87.99\u00b10.09 87.97\u00b10.08 88.39 88.39 88.39\u00b10.13 Cars 92.85\u00b10.31 93.29\u00b10.01 93.28\u00b10.02 93.42 93.42 93.42\u00b10.01 Flowers 94.53\u00b10.20 95.05\u00b10.06 95.08\u00b10.10 95.26 95.26 95.26\u00b10.03 the same dataset is used for pre-training and \ufb01netuning, it can be better termed extra/over-training. The main goal of this experimental setup is to see if robust sharpness-aware loss functions in the extra training stage can further improve the test performance. First, we measure the test performance of the pre-trained DeiT-B model, which is 81.94% (Top-1) and 95.63% (Top-5). After three epochs of extra training, the test accuracies of the competing approaches are summarized in Table 2. For extra training, we use hyperparameters: SAM (\u03b3 = 0.05), ASAM (\u03b3 = 1.0, \u03b7 = 0.01), and FSAM (\u03b3 = 0.5, \u03b7 = 0.1) using the SGD optimiser with batch size 256, weight decay 0.0001, initial learning rate 10\u22125 and the cosine scheduling. Although the improvements are not very drastic, the sharpness-aware loss functions appear to move the pretrained model further toward points that yield better generalisation performance, and our FSAM attains the largest improvement among other SAM approaches. 5.3. Transfer Learning by Finetuning One of the powerful features of the deep neural network models trained on extremely large datasets, is the transferability, that is, the models tend to adapt easily and quickly to novel target datasets and/or downstream tasks by \ufb01netuning. We use the vision transformer model ViT-base with 16 \u00d7 16 patches (ViT-B/16) (Dosovitskiy et al., 2021) pre-trained on the ImageNet (with publicly available checkpoints), and \ufb01netune the model on the target datasets: CIFAR-100, Stanford Cars (Krause et al., 2013), and Flowers (Nilsback & Zisserman, 2008). We run SGD, SAM (\u03b3 = 0.05), ASAM (\u03b3 = 0.1, \u03b7 = 0.01), and FSAM (\u03b3 = 0.1, \u03b7 = 1.0) with the SGD optimiser for 200 epochs, batch size 256, weight decay 0.05, initial learning rate 0.0005 and the cosine scheduling. As the results in Table 3 indicate, FSAM consistently improves the performance over the competing methods. 5.4. Robustness to Adversarial Parameter Perturbation Another important bene\ufb01t of the proposed approach is robustness to parameter perturbation. 
In the literature, the \fFisher SAM: Information Geometry and Sharpness Aware Minimisation 0.1 0.5 1.0 1.2 1.5 2.0 5.0 Perturbation strength 10 20 30 40 50 60 70 80 90 Test accuracy (%) SGD SAM FSAM Figure 4. Adversarial parameter perturbation. generalisation performance of the corrupted models is measured by injecting arti\ufb01cial noise to the learned parameters, which serves as a measure of vulnerability of neural networks (Chen et al., 2017; Arora et al., 2018; Dai et al., 2019; Gu et al., 2019; Nagel et al., 2019; Rakin & Fan, 2020; Weng et al., 2020). Although it is popular to add Gaussian random noise to the parameters, recently the adversarial perturbation (Sun et al., 2021) was proposed where they consider the worst-case scenario under parameter corruption, which amounts to perturbation along the gradient direction. More speci\ufb01cally, the perturbation process is: \u03b8 \u2192\u03b8 + \u03b1 \u2207l(\u03b8) \u2225\u2207l(\u03b8)\u2225 where \u03b1 > 0 is the perturbation strength that can be chosen. It turns out to be a more effective perturbation measure than the random (Gaussian noise) corruption. We apply this adversarial parameter perturbation process to ResNet-34 models trained by SGD, SAM (\u03b3 = 0.05), and FSAM (\u03b3 = 0.1, \u03b7 = 1.0) on CIFAR-10, where we vary the perturbation strength \u03b1 from 0.1 to 5.0. The results are depicted in Fig. 4. While we see performance drop for all models as \u03b1 increases, eventually reaching nonsensical models (pure random prediction accuracy 10%) after \u03b1 \u2265 5.0, the proposed Fisher SAM exhibits the least performance degradation among the competing methods, proving the highest robustness to the adversarial parameter corruption. 5.5. Robustness to Label Noise In the previous works, SAM and ASAM are shown to be robust to label noise in training data. Similarly as their experiments, we introduce symmetric label noise by random \ufb02ipping with corruption levels 20, 40, 60, and 80%, as introduced in (Rooyen et al., 2015). The results on ResNet32 networks on the CIFAR-10 dataset are summarized in Table 4. Our Fisher SAM shows high robustness to label noise comparable to SAM while exhibiting signi\ufb01cantly large improvements over SGD and ASAM. Table 4. Label noise. Test accuracies on CIFAR-10. The hyperparameters are: (SAM) \u03b3 = 0.1, (ASAM) \u03b3 = 0.5, \u03b7 = 0.01, and (FSAM) \u03b3 = 0.1, \u03b7 = 0.001. Rates SGD SAM ASAM FSAM 0.2 87.97\u00b10.04 93.12 93.12 93.12\u00b10.24 92.26\u00b10.33 93.03\u00b10.11 0.4 83.60\u00b10.59 90.54\u00b10.19 88.47\u00b10.06 90.95 90.95 90.95\u00b10.17 0.6 76.97\u00b10.31 85.39\u00b10.52 82.32\u00b10.55 85.76 85.76 85.76\u00b10.21 0.8 66.32\u00b10.27 74.31\u00b11.02 70.56\u00b10.27 74.66 74.66 74.66\u00b10.67 10 4 10 3 10 2 10 1 1.0 101 Anti-regulariser 77 78 79 80 81 82 83 84 Test accuracy (%) = 0.01 = 0.05 = 0.1 = 0.5 = 1.0 Figure 5. Hyperparameter sensitivity of Fisher SAM. 5.6. Hyperparameter Sensitivity In our Fisher SAM, there are two hyperparameters: \u03b3 = the size of the neighborhood and \u03b7 = the anti-regulariser for the Fisher impact. We demonstrate the sensitivity of Fisher SAM to these hyperparameters. To this end, we train WRN-28-10 backbone models trained with the FSAM loss on the CIFAR-100 dataset for different hyperparameter combinations: (\u03b3, \u03b7) \u2208{0.01, 0.05, 0.1, 0.5, 1.0} \u00d7 {10\u22124, 10\u22123, 10\u22122, 10\u22121, 1.0, 10}. In Fig. 5 we plot the test accuracy of the learned models5. 
The results show that unless \u03b3 is chosen too large (e.g., \u03b3 = 1.0), the learned models all perform favorably well, being less sensitive to the hyperparameter choice. But the best performance is attained when \u03b3 lies in between 0.1 and 0.5, with some moderate values for the Fisher impact \u03b7 in between 0.01 and 1.0. 5.7. Computational Overhead of FSAM Compared to SAM, our FSAM requires only extra cost of element-wise vector product under our diagonal gradientmagnitude approximation schemes. In practice, the difference is negligible: the per-batch (batch size 128) times for CIFAR10/WRN28-10 are: 0.2322 seconds (SAM), 0.2334 seconds (FSAM) on a single RTX 2080 Ti machine. 5Note that there are discrepancies from Table 1 that may arise from the lack of data augmentation. \fFisher SAM: Information Geometry and Sharpness Aware Minimisation 6." + }, + { + "url": "http://arxiv.org/abs/2111.05814v2", + "title": "SwAMP: Swapped Assignment of Multi-Modal Pairs for Cross-Modal Retrieval", + "abstract": "We tackle the cross-modal retrieval problem, where learning is only\nsupervised by relevant multi-modal pairs in the data. Although the contrastive\nlearning is the most popular approach for this task, it makes potentially wrong\nassumption that the instances in different pairs are automatically irrelevant.\nTo address the issue, we propose a novel loss function that is based on\nself-labeling of the unknown semantic classes. Specifically, we aim to predict\nclass labels of the data instances in each modality, and assign those labels to\nthe corresponding instances in the other modality (i.e., swapping the pseudo\nlabels). With these swapped labels, we learn the data embedding for each\nmodality using the supervised cross-entropy loss. This way, cross-modal\ninstances from different pairs that are semantically related can be aligned to\neach other by the class predictor. We tested our approach on several real-world\ncross-modal retrieval problems, including text-based video retrieval,\nsketch-based image retrieval, and image-text retrieval. For all these tasks our\nmethod achieves significant performance improvement over the contrastive\nlearning.", + "authors": "Minyoung Kim", + "published": "2021-11-10", + "updated": "2022-10-11", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "cs.CV" + ], + "main_content": "Introduction Cross-modal retrieval, the task of retrieving the most relevant items in the database of one modality (e.g., images) for a given query from another modality (e.g., texts), has received unprecedented attention in computer vision and related areas (Chen et al., 2015; Faghri et al., 2018; Lee et al., 2018; Li et al., 2019; Zhang et al., 2020; Chun et al., 2021; Miech et al., 2021, 2019, 2020; Wang et al., 2021; Dey et al., 2019; Sain et al., 2021). The crux of the problem is to learn the underlying relevance or similarity metric between data instances that live in heterogeneous modalities E-mail: mikim21@gmail.com with highly different distributions. Although there are several different learning problem formulations in the literature, in this paper we mainly focus on the paired training data setup, in which training is only supervised by relevant pairs in the training data, and there are no semantic class labels annotated. That is, the training data consist of only pairs of relevant multi-modal data instances, e.g., (image, text), which may require minimal human annotation effort (e.g., web scraping of images and nearby texts). 
The contrastive (or triplet loss) learning (Chopra et al., 2005; Hadsell et al., 2006) is recognised as the most popular and successful approach, which aims to learn the cross-modal similarity measure by the intuitive criteria that pull together relevant pairs and push away irrelevant ones. However, it makes potentially wrong assumption that instances in different pairs are automatically irrelevant. The pairs in the training data are usually collected by considering relevant pairs only (e.g., nearby images and texts in a web page), and the relevance of instances in different pairs is usually not checked However, this is implicitly assumed in the contrastive loss. The issue was also raised in recent work (Kim et al., 2019; Zhou et al., 2020; Patrick et al., 2020; Wray et al., 2021; Chen et al., 2021). In this paper we propose a novel learning algorithm that addresses the issue via selflabeled clustering approach. Motivated from the recent clustering-based representation learning in the self-supervised learning literature (Asano et al., 2020; Caron et al., 2020), we propose a novel loss function for cross-modal retrieval that is based on selflabeling of the unknown classes. Speci\ufb01cally, we introduce (latent) semantic class labels to be assigned to data instances, where class labels supposedly decide the relevance of cross-modal data instances (i.e., the same class label means relevant items, and vice versa). We predict class labels of the data instances in each modality, and assign the predicted labels to the corresponding instances in the other modality (i.e., swapping the pseudo labels). With these swapped pseudo labels, we learn the data embedding for each modality using the supervised cross-entropy loss. This way, cross-modal instances from different pairs that are semantically related can be aligned to each other by the class predictor. The whole process of label prediction and supervised learning with swapped classes is alternated arXiv:2111.05814v2 [cs.LG] 11 Oct 2022 \fSwAMP: Swapped Assignment of Multi-Modal Pairs for Cross-Modal Retrieval to learn the optimal feature extraction networks. We call this approach Swapped Assignment of Multi-modal Pairs (SwAMP). The main bene\ufb01ts of the SwAMP are in two folds: i) Unlike the contrastive loss, SwAMP does not make potentially wrong assumption that instances from different pairs are automatically irrelevant. The optimized class assignment \ufb01nds similar instances from other pairs, and the feature extractor is trained in such a way that the same-class instances, even in different pairs, are well aligned. This feature of aligning instances in different pairs is hardly exploited in the contrastive loss. ii) Since the learning does not fully resort to pair-based losses as in contrastive learning, the sampling complexity can be reduced. This comes from the class-based loss adopted in the SwAMP, where similar ideas were exploited previously in self-supervised representation learning (Caron et al., 2018; Asano et al., 2020; Caron et al., 2020). Our approach is generically applicable to different types of cross-modal retrieval problems. We empirically demonstrate that the SwAMP loss improves retrieval performance signi\ufb01cantly over the contrastive learning, on various real-world cross-modal retrieval problems, including textvideo, sketch-image, and image-text retrieval. 2 Problem Setup & Background Let xA and xB denote data instances from modality A and modality B, respectively. 
For instance, xA is an image from the image modality, while xB is a text/caption from the text modality. Throughout the paper we deal with modality-wise feature representation, meaning that we have modality-wise feature extractors (neural networks) \u03c6A(\u00b7) and \u03c6B(\u00b7) applied to xA and xB, respectively. Also known as dual encoders, it produces a succinct vector representation for each modality, \u03c6A(xA) \u2208Rd and \u03c6B(xB) \u2208Rd. The shared feature space (\u2282Rd) allows us to de\ufb01ne the similarity score s(xA, xB) as a cosine angle between \u03c6A(xA) and \u03c6B(xB). The goal is to learn the feature extractors so that the relevant pairs xA and xB have a high similarity score s(xA, xB), while irrelevant pairs have a low similarity score. The main bene\ufb01t of the modality-wise feature representation is the computational ef\ufb01ciency, scalable to billions of instances at training/test time, thanks to the ef\ufb01cient dot-product. There is an alternative approach that directly computes the similarity score without having modality-wise representation. A typical example is the cross-modal attention models (Lee et al., 2018; Lu et al., 2019; Desai and Johnson, 2020; Huang et al., 2020) (details in Sec. 4). Although they can capture interactions between cross-modal local features, they are computationally demanding, not scalable to large-scale data. The training data are composed of relevant pairs D = {(xA i , xB i )}N i=1, where xA i and xB i are the instances in the i-th relevant pair. At test time, a query is given from the query modality, say xA, and the goal is to \ufb01nd the most relevant instance, say xB, from the other modality, where the search is performed on the given test set {xB i }N+M i=N+1. 2.1 Contrastive Learning In contrastive learning (Chopra et al., 2005; Hadsell et al., 2006), it is implicitly assumed that data instances from different pairs are irrelevant, although it may not be true. The loss function is de\ufb01ned to capture the intuition: penalize low (high) similarity scores for relevant (irrelevant, resp.) pairs. By introducing the margin \u03b1 (e.g., 0.2) and considering the most violating irrelevant pairs (i.e., hard negatives), the loss can be written as (subscript c stands for contrastive): Lc(\u03c6A, \u03c6B) = X i\u2208D \u0000s(xA i , xB i ) \u2212max j\u2208D\\i s(xA i , xB j ) \u0001 \u2265\u03b1 + \u0000s(xA i , xB i ) \u2212max j\u2208D\\i s(xA j , xB i ) \u0001 \u2265\u03b1 (1) where (z)\u2265\u03b1 = max(0, \u03b1 \u2212z) only incurs positive loss when z < \u03b1. A main issue of the contrastive learning is that we cannot guarantee that data instances from different pairs in the training data are irrelevant, because the data are usually collected by considering relevant pairs only (e.g., web scraping of images and nearby texts), and the relevance of instances in different pairs is usually not checked. However, this is assumed in the contrastive loss. 3 Our Approach: SwAMP Our idea is to introduce (latent) semantic class labels for data instances and use them to learn the feature extractors. The class labels supposedly decide the relevance of data instances from different modalities, that is, xA and xB are considered relevant if their class labels are the same, and vice versa. Obviously, the paired cross-modal instances in the training data must have the same class labels. But beyond this, instances from different pairs can also be deemed relevant if they belong to the same semantic class labels. 
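For reference, here is a minimal sketch of the hard-negative contrastive loss in (1), which SwAMP later complements, computed over a batch of paired embeddings that stands in for D; the embedding dimension and batch size are illustrative.

```python
import torch
import torch.nn.functional as F

def contrastive_loss(feat_a: torch.Tensor, feat_b: torch.Tensor, margin: float = 0.2) -> torch.Tensor:
    """Eq. (1): hinge loss with the hardest in-batch negatives, in both retrieval directions."""
    feat_a = F.normalize(feat_a, dim=-1)
    feat_b = F.normalize(feat_b, dim=-1)
    sim = feat_a @ feat_b.t()                                 # s(x_i^A, x_j^B), cosine similarities
    pos = sim.diag()                                          # s(x_i^A, x_i^B)
    mask = torch.eye(sim.size(0), dtype=torch.bool, device=sim.device)
    neg_ab = sim.masked_fill(mask, -1e9).max(dim=1).values    # hardest x_j^B for each query x_i^A
    neg_ba = sim.masked_fill(mask, -1e9).max(dim=0).values    # hardest x_j^A for each query x_i^B
    loss = F.relu(margin - (pos - neg_ab)) + F.relu(margin - (pos - neg_ba))
    return loss.sum()

# Usage: 32 paired embeddings of dimension 512 (random stand-ins for phi_A(x^A), phi_B(x^B)).
loss = contrastive_loss(torch.randn(32, 512), torch.randn(32, 512))
```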
The motivation is that if we estimate the class labels accurately, the feature extractor learning can be turned into a supervised classi\ufb01cation problem. More formally, we consider (unknown) class labels to be assigned to the data instances. Let yA, yB \u2208{1, . . . , K} be the class labels for xA and xB, respectively, where K is chosen by the user. The relevance of xA and xB is determined by their class labels: xA and xB are deemed relevant if yA = yB and irrelevant if yA \u0338= yB. If we knew the class labels that bear such semantics in the training data, then training becomes supervised learning that can be done for each modality, which allows us to avoid pairwise terms in the loss function. However, we don\u2019t have class labels, and we optimize them (i.e., self-supervised learning) together with the feature extractors \u03c6A(\u00b7) and \u03c6B(\u00b7). To this \fMinyoung Kim end, we build linear classi\ufb01ers p(y|xA) and p(y|xB) on the extracted features. For each modality M \u2208{A, B}, p(y = j|xM) = exp(p\u22a4 j \u03c6M(x)/\u03c4) P l exp(p\u22a4 l \u03c6M(x)/\u03c4), (2) where P = {p1, . . . , pK} are trainable parameters that are shared between two modalities, and \u03c4 is the temperature in the softmax. We can regard each pj as the prototype vector for class j that lies in the shared feature space. Since we have classi\ufb01cation models, the (supervised) cross-entropy loss minimization is a natural choice to optimize them. That is, letting ptrue(y|xA) be the true conditional class distribution for modality A, we minimize Eptrue(y|xA)[\u2212log p(y|xA)] with respect to P and the network parameters of \u03c6A(\u00b7) (similarly for modality B). Since we cannot access ptrue(y|xA), one may be tempted to use the model p(y|xA) in (2) instead. However, it can easily lead to a degenerate solution such as the one that puts all the probability mass on a particular single class all the time (thus attaining the optimal cross-entropy loss 0). Moreover, this would make learning \u03c6A(\u00b7) and \u03c6B(\u00b7) nearly independent and less interacted with each other, merely through the shared prototypes P. Instead, we form an optimization problem to estimate a surrogate of ptrue(y|xA), which we denote by q(y|xA), using the information from the other modality B, while imposing additional constraints to avoid the degenerate solutions. More speci\ufb01cally, we optimize the surrogate q(y|xA) with the following two criteria. First, q(y|xA) needs to be well aligned with the current estimate p(y|xB) for xB that is paired with xA. This is due to the aforementioned requirements for the class labels, where the class labels (more generally, their distributions) of the paired instances should match. Secondly, the marginal distribution q(y) = ExA\u223cD[q(y|xA)] is constrained to be a uniform distribution1. This constraint naturally arises from the symmetry of class labels, a reasonable assumption about the true class distribution, and successfully leaves out the degenerate solutions discussed above. To summarize, the following is the optimization problem for q(y|xA), where QA is the (N \u00d7K) matrix with QA iy := q(y|xA i ). Recall that D = {(xA i , xB i )}N i=1 is the training data of paired instances. min QA Ei\u223cD \u0002 Eq(y|xA i )[\u2212log p(y|xB i )] \u0003 (3) s.t. Ei\u223cD[q(y|xA i )] = 1/K, \u2200y. We perform similar optimization for q(y|xB) (QB iy := q(y|xB i )) to approximate ptrue(y|xB) by exchanging the 1This means balanced clusters. 
Even when data exhibit imbalance in semantic classes (e.g., long-tail distributions), our clustering model can still handle it by learning semantically redundant multiple clusters, thus forming super-clusters while rendering others minor classes. See Sec. 5.5 for illustration. roles of A and B. The optimal solutions (surrogates) are denoted by qA and qB, where we use the superscript to distinguish the two modalities. Note that during the optimization of (3) for qA and qB, we \ufb01x the model parameters, that is, P and the feature extractor networks. The overall optimization is alternation between: i) surrogate optimization (3) with P, \u03c6A, \u03c6B \ufb01xed, and ii) supervised (cross-entropy) loss minimization with qA and qB \ufb01xed, where the latter can be written as (subscript s stands for SwAMP): min P,\u03c6A,\u03c6B Ls := Ei\u223cD \u0002 EqA(y|xA i )[\u2212log p(y|xA i )] \u0003 + Ei\u223cD \u0002 EqB(y|xB i )[\u2212log p(y|xB i )] \u0003 (4) Now we discuss how to optimize (3). It is essentially the optimal transport (OT) problem (Villani, 2008; Cuturi, 2013), speci\ufb01cally with the cost matrix Ciy = \u2212log p(y|xB i ) and the marginal constraints P i QA iy = 1/K, \u2200y (and implicitly P y QA iy = 1/N, \u2200i \u2208D). Although the OT is known to be an instance of the linear program (LP), conventional LP solvers are not suitable for large-scale problems. As is common practice, we relax the problem by augmenting the loss with the entropic regularizer for q(y|xA), namely 1 \u03b7 P iy QA iy log QA iy added to the loss (thus, penalizing small entropy), which can be solved by the ef\ufb01cient SinkhornKnopp (SK) algorithm (Cuturi, 2013). Here \u03b7 is the regularization trade-off hyperparameter. The SK algorithm \ufb01nds the optimal solution as QA = Diag(u)ADiag(v), where Aiy = e\u2212\u03b7Ciy and the vectors u \u2208RN + and v \u2208RK + are the \ufb01xed points of ui = 1 N /(Av)i, vj = 1 K /(A\u22a4u)j for i = 1, . . . , N, j = 1, . . . , K. The \ufb01xed point iteration usually converges quickly after a few iterations. We denote the algorithm as: Q \u2190SK(cost = C, reg = \u03b7). (5) One challenge in optimizing (3) with the SK, however, is that it involves the entire dataset D in the loss, which means that the model update (4) has to be deferred until q is optimized for an entire data epoch. Simply replacing D with a minibatch might be dangerous since the population class marginal distributions are poorly covered by a minibatch. We need an even larger subset of D to roughly meet the (uniform) class constraint. To this end, we adopt the (FIFO) queues, where we accumulate the embeddings \u03c6A(xA) and \u03c6B(xB) from the latest minibatches into the queues. The optimization (3) is then performed on the queue data (D replaced by the data in the queues). To have the uniform class constraint meaningful, we choose the queue size to be greater than K. Note that (3) is solved by the SK algorithm, and thus no backprop is required, hence enlarging the queue size does not incur computational issue. Similar ideas were used in the self-supervised representation learning literature, e.g., (He et al., 2019) and (Caron et al., 2020). To have the queues \ufb01lled with the latest features, we insert the features of the current minibatch into the queues, then perform the \fSwAMP: Swapped Assignment of Multi-Modal Pairs for Cross-Modal Retrieval Algorithm 1 SwAMP Training. Input: Class cardinality K, queue size, temp. \u03c4, \u03b7 in SK. 
Initialize: P = {pk}K k=1, \u03c6A, \u03c6B. Empty queue Q. Output: Trained model {P, \u03c6A(\u00b7), \u03c6B(\u00b7)}. Repeat until convergence: 1. Sample a minibatch of paired data B = {(xA i , xB i )}. 2. Evaluate \u03c6A(xA i ) and \u03c6B(xB i ) for i \u2208B (forward pass). 3. Insert {(\u03c6A(xA i ), \u03c6B(xB i ))}i\u2208B into the queue Q. 4. Solve (3) for modality A and B: {qA(y|i)}i\u2208Q \u2190SK(cost={\u2212log p(y|xB i )}i\u2208Q, reg= \u03b7) {qB(y|i)}i\u2208Q \u2190SK(cost={\u2212log p(y|xA i )}i\u2208Q, reg= \u03b7) 5. Take the minibatch portions {qA(y|i), qB(y|i)}i\u2208B; Do SGD update with L in (6). SK algorithm. Once (3) is done, we can optimize (4) by gradient descent, but only the current minibatch portion of q is used. The \ufb01nal loss function is a combination of the SwAMP loss and the contrastive loss, L(P, \u03c6A, \u03c6B) = Lc(\u03c6A, \u03c6B) + \u03bbLs(P, \u03c6A, \u03c6B), (6) where \u03bb is the trade-off hyperparameter. As we estimate the surrogate qA using the current classi\ufb01cation model in modality B, and vice versa, the class assignment is swapped. The pseudo code of our algorithm is shown in Alg. 1. The idea of optimizing class labels in the representation learning was previously introduced in (Asano et al., 2020; Caron et al., 2020), however, they aimed for self-supervised representation learning as an instance discrimination pretext task with augmented data. In this paper, we deal with the cross-modal retrieval problem, where we estimate the class labels of instances in one modality using the features from the other modality. Unlike the contrastive loss, SwAMP does not make any assumption that instances from different pairs are automatically irrelevant. The OT class assignment \ufb01nds similar instances from other pairs, and the feature extractor is trained in such a way that the same-class instances, even in different pairs, are well aligned. This feature of aligning instances in different pairs is hardly exploited in the contrastive loss. 4 Related Work Cross-modal retrieval. It is beyond the scope of the paper to enumerate all previous works on cross-modal retrieval, and we refer the readers to recent survey papers such as (Wang et al., 2016). Recently, the most interesting crossmodal tasks involve, among others, video-text (Liu et al., 2019; Gabeur et al., 2020; Patrick et al., 2020; Miech et al., 2021, 2019, 2020; Wang et al., 2021; Chen et al., 2021), image-text (Chen et al., 2015; Faghri et al., 2018; Lee et al., 2018; Chun et al., 2021; Li et al., 2019; Zhang et al., 2020), and sketch-photo (Dey et al., 2019; Sain et al., 2021). For the training data of relevant pairs, most approaches commonly rely on the idea of contrastive learning (Chopra et al., 2005; Hadsell et al., 2006). Beyond the intuitive triplet forms (Wang et al., 2014; Schroff et al., 2015), more sophisticated losses were introduced in (Sohn, 2016; Song et al., 2016; Wang et al., 2019a,b) to deal with a positive and multiple negative pairs as well as hard examples. To reduce the super-linear time computational overhead, several sophisticated sampling strategies were proposed (Wu et al., 2017; Harwood et al., 2017; Yuan et al., 2017). As discussed in Sec. 2, there are broadly two different ways to de\ufb01ne the similarity metric between instances of different modalities: modality-wise feature representation and cross-modal attention. 
The main bene\ufb01t of the former is the computational ef\ufb01ciency, scalable to billions of instances at training/test time, thanks to the ef\ufb01cient dot-product. The latter directly computes the similarity score without having modality-wise representation (Lee et al., 2018; Lu et al., 2019; Desai and Johnson, 2020; Huang et al., 2020) using the transformer-like attentive neural networks which aim to capture interactions between local features in the instances from different modalities. Although they can capture crossmodal interactions between local features of data instances from different modalities, they are computationally demanding and very slow due to the quadratic complexity in the number of local features. In (Miech et al., 2021), a hybrid of the two is introduced, which retains the two models, but performs re-ranking/distillation at test time for speed-up. Clustering-based approaches. There were previous attempts to cluster (group) data instances, or equivalently self-labeling, to improve saliency in representation learning. Some approaches perform of\ufb02ine K-means clustering for every epoch (Caron et al., 2018; Alwassel et al., 2020), which can make training slow. The idea of optimizing class labels in the representation learning was previously introduced in (Asano et al., 2020; Caron et al., 2020). However, all these previous approaches aimed for self-supervised representation learning as an instance discrimination pretext task with augmented data. On the other hand, we perform simultaneous learning of class labels and the feature extraction networks for the cross-modal retrieval setting. More recently (Chen et al., 2021) proposed a clustering-based cross-modal retrieval method based on the semantic similarity. However, our approach is mainly different from it in that we adopt the OT-based class label assignment forming a joint feature-label optimization, instead of simple fusion of multi-modal features for clustering as in (Chen et al., 2021). 5 Experimental Results We test the proposed SwAMP loss on several different types of real-world cross-modal retrieval problems. For each problem/dataset, we choose the most popular and successful method in the literature, and replace its loss function (mostly contrastive loss) with the proposed SwAMP loss to demonstrate the performance improvement. To this end, for fair comparison, we faithfully follow the same optimization strategy and hyperparameters as the baseline methods. \fMinyoung Kim Table 1: Text-video retrieval results on YouCook2. Methods R@1 \u2191 R@5 \u2191 R@10 \u2191 Med-R \u2193 Random 0.03 0.15 0.3 1675 FV-CCA 4.6 14.3 21.6 75 Contrastive (No PT) 4.2 13.7 21.5 65 SwAMP (No PT) 4.8 14.5 22.5 57 Contrastive (PT) 8.2 24.5 35.3 24 SwAMP (PT) 9.4 24.9 35.3 22 5.1 Text-based Video Retrieval We \ufb01rst consider the text-to-video retrieval task where the goal is to \ufb01nd the most relevant video clip for a given natural language text query. We consider three datasets for this task: i) YouCook2 (Zhou et al., 2018) of cooking videos and instructions, ii) MSR-VTT (Xu et al., 2016) of generic videos and captions from YouTube, and iii) LSMDC (Rohrbach et al., 2017) of movie clips and subtitles. All these datasets provide pairs of video clip and text description, forming a multi-modal paired data format (text, video) which conforms to our SwAMP framework. 
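Before turning to the individual benchmarks, the following is a minimal self-contained sketch of the SwAMP loss of Sec. 3: the prototype classifier (2), the entropy-regularised assignment (3) solved by a few Sinkhorn-Knopp iterations (5), and the swapped cross-entropy (4). For brevity the queue is replaced by the current batch, the feature normalisation and all shapes are illustrative choices rather than the released implementation, and the total training objective would add this to the contrastive term as in (6).

```python
import torch
import torch.nn.functional as F

def sinkhorn_knopp(cost: torch.Tensor, reg: float = 5.0, iters: int = 3) -> torch.Tensor:
    """Eq. (5): Q = Diag(u) A Diag(v), A = exp(-reg * cost), uniform marginals over instances/classes."""
    n, k = cost.shape
    cost = cost - cost.min(dim=1, keepdim=True).values   # per-row shift for numerical stability
    A = torch.exp(-reg * cost)
    u, v = torch.full((n,), 1.0 / n), torch.full((k,), 1.0 / k)
    for _ in range(iters):
        u = (1.0 / n) / (A @ v + 1e-12)
        v = (1.0 / k) / (A.t() @ u + 1e-12)
    q = u.unsqueeze(1) * A * v.unsqueeze(0)
    return q / (q.sum(dim=1, keepdim=True) + 1e-12)       # per-instance distributions q(y|x_i)

def swamp_loss(feat_a, feat_b, prototypes, tau=0.25, reg=5.0):
    """Eqs. (2)-(4): prototype classifiers and swapped-assignment cross-entropy."""
    log_pa = F.log_softmax(F.normalize(feat_a, dim=-1) @ prototypes.t() / tau, dim=1)  # log p(y|x^A)
    log_pb = F.log_softmax(F.normalize(feat_b, dim=-1) @ prototypes.t() / tau, dim=1)  # log p(y|x^B)
    with torch.no_grad():                                  # assignments are fixed during the SGD update
        q_a = sinkhorn_knopp(-log_pb, reg)                 # modality A's targets come from modality B
        q_b = sinkhorn_knopp(-log_pa, reg)
    return -(q_a * log_pa).sum(1).mean() - (q_b * log_pb).sum(1).mean()

# Usage: 256 paired embeddings, K = 500 prototypes; the full loss combines this with L_c as in (6).
prototypes = torch.nn.Parameter(torch.randn(500, 512))
loss_s = swamp_loss(torch.randn(256, 512), torch.randn(256, 512), prototypes)
```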
For the raw text/video features and the feature extractor networks, as well as the training/test protocols, we follow the methods in (Miech et al., 2019), and the details are described in Appendix (Sec. C). Following (Miech et al., 2019), there are two training strategies: i) No-pretraining (No-PT) where the feature extraction networks are randomly initialized, and the training is done on the training split of the dataset, and ii) Pretraining (PT) where the feature extractors are \ufb01rst pretrained on the large-scale HowTo100M dataset (Miech et al., 2019), and \ufb01netuned on the target dataset. In (Miech et al., 2019), they adopt the contrastive (triplet) loss for training the feature extractors. Although we also compare our approach with the state-of-the-arts, the main focus in this experiment is to demonstrate the performance improvement achieved by the proposed SwAMP loss against vanilla contrastive learning. The SwAMP hyperparameter \u03bb, the weight/impact of the SwAMP loss against the contrastive loss in (6) is chosen as \u03bb = 0.25 for all three datasets, except the LSMDC-PT case for which \u03bb = 0.1. We also choose temperature in softmax \u03c4 = 0.25, entropic regularization trade-off in SK \u03b7 = 5.0, the number of classes K = 500, and the queue size 2, 048 for the SwAMP. The other learning hyperparameters common in SwAMP and contrastive losses are not changed from (Miech et al., 2019), and summarized in Appendix (Sec. C). YouCook2. This cooking video dataset collected from YouTube, contains 89 recipes and 14K video clips annotated with textual descriptions from paid human workers. The test data are formed by taking 3.5K clips from the validation set, and the test set comprises of 3, 350 pairs. The retrieval performance metrics are recall-at-k (R@k) with k = 1, 5, 10 and the median rank (Med-R). Hence, the random guess attains R@1= 0.03% Med-R=1, 675. The results are summarized in Table 1. In the bottom four rows, we see the performance improvement achieved by the proposed SwAMP against the contrastive loss (Miech et al., 2019). For both training strategies, No PT (random model initialization) and PT (initialized with the HowTo100M-pretrained model), our SwAMP improves the retrieval performance signi\ufb01cantly (e.g., about 12% reduction in Median Rank for the No PT case). SwAMP also outperform the CCA baseline FV-CCA (Klein et al., 2015). MSRVTT. This dataset (Xu et al., 2016) collected from YouTube contains videos of speci\ufb01c categories including music, sports, and movie. There are 200K video-caption pairs obtained by human annotation. We follow the retrieval training/test protocol of (Yu et al., 2018; Miech et al., 2019). The test set consists of 1K pairs. As reported in Table 2, our SwAMP loss improves the performance over the contrastive learning signi\ufb01cantly for both no-pretraining and pretraining cases: about 24% in R@1 in the No PT case, and 27% in the PT case. Furthermore, the SwAMP outperforms with large margin the state-of-the-arts: C+LSTM+SA+FC7 (Torabi et al., 2016), VSE-LSTM (Kiros et al., 2014), Temporal Tessellation (Kauman et al., 2017), CT-SAN (Yu et al., 2017), and JSFusion (Yu et al., 2018). LSMDC. This dataset of movie video clips is comprised of 101K video-caption pairs. The captions are collected either from the movie scripts or the audio descriptions. The test set contains 1K pairs. For this dataset, \u03bb = 0.1 (impact of the SwAMP loss against contrastive) for the PT case. The results are shown in Table 2. 
Similar to the other two datasets, our SwAMP is consistently better than the contrastive learning (about 7 \u223c9% in Median Rank). 5.2 Sketch-based Image Retrieval In sketch-based image retrieval, the model takes a user\u2019s sketch (quick drawing) of an object as input query, and retrieves the photo images that correspond to the same object category as query\u2019s. We follow the recent framework of (Dey et al., 2019) which reports the state-ofthe-art performance on the three large-scale sketch-image benchmarks: Sketchy-Extended (Sangkloy et al., 2016), TU-Berlin-Extended (Eitz et al., 2012), and QuickDrawExtended (Dey et al., 2019). The datasets roughly consist of 100\u2013200 object classes with hundreds to thousands of sketch/photo images for each class. For all these datasets, we have zero-shot setting, meaning that training/test splits have instances from disjoint object categories. In this experiment we aim to show the improvement in the retrieval performance when our SwAMP loss is augmented to the existing loss function. To this end, we follow the same embedding networks for images and sketches, as well as the same loss function as the Doodle2Search. The loss function consists of three losses: Triplet loss is the conventional triplet loss, Domain loss uses an adversarial domain classi\ufb01er to penalize misalignment between embedding distribu\fSwAMP: Swapped Assignment of Multi-Modal Pairs for Cross-Modal Retrieval Table 2: Text-Video retrieval results on MSRVTT and LSMDC. Methods MSRVTT LSMDC R@1 \u2191 R@5 \u2191 R@10 \u2191 Med-R \u2193 R@1 \u2191 R@5 \u2191 R@10 \u2191 Med-R \u2193 Random 0.1 0.5 1.0 500 0.1 0.5 1.0 500 C+LSTM+SA+FC7 4.2 12.9 19.9 55 4.3 12.6 18.9 98 VSE-LSTM 3.8 12.7 17.1 66 3.1 10.4 16.5 79 SNUVL 3.5 15.9 23.8 44 3.6 14.7 23.9 50 Temporal Tessellation 4.7 16.6 24.1 41 4.7 15.9 23.4 64 CT-SAN 4.4 16.6 22.3 35 4.5 14.1 20.9 67 JSFusion 10.2 31.2 43.2 13 9.1 21.2 34.1 36 Contrastive (No PT) 12.1 35.0 48.0 12 7.2 18.3 25.0 44 SwAMP (No PT) 15.0 38.5 50.3 10 7.7 19.3 27.7 40 Contrastive (PT) 14.9 40.2 52.8 9 7.1 19.6 27.9 40 SwAMP (PT) 19.0 42.4 55.2 8 8.3 20.0 28.9 37 Table 3: Sketch-based image retrieval results. The contrastive-learning-based Doodle2Search (Dey et al., 2019) (denoted by D2S) is compared with the proposed SwAMP learning. Methods / Datasets Sketchy TU-Berlin QuickDraw mAP mAP@200 P@200 mAP mAP@200 P@200 mAP mAP@200 P@200 ZSIH (Shen et al., 2018) 25.40 22.00 CVAE (Yelamarthi et al., 2018) 19.59 22.50 33.30 0.50 0.90 0.30 0.30 0.60 0.30 D2S (Dey et al., 2019) 36.91 46.06 37.04 10.94 15.68 12.08 7.52 9.01 6.75 SwAMP 40.32 51.94 40.81 17.63 24.49 19.75 8.19 11.62 9.10 tions of photo images and sketches, and Semantic loss urges the embeddings of the photo images and sketches to reconstruct the pretrained word embedding of the corresponding object word. We also use the same attention-based embedding networks for photo and sketch modalities. Then, we add our SwAMP loss to the Doodle2Search\u2019s loss with the impact \u03bb = 0.1 for all three datasets. We use the queue size 1000 (2000 for QuickDraw-Extended) and class cardinality K = 500, softmax temperature \u03c4 = 0.25, entropic regularization impact \u03b7 = 5.0. The retrieval performances on the three datasets are summarized in Table 3. The performance metrics are mean average precision (mAP), mAP@200, and the precision-at-200 (P@200). 
As shown, our SwAMP loss when added to the existing contrastive-based loss, significantly improves the retrieval performance (about 9% in mAP for Sketchy and about 60% for TU-Berlin). 5.3 Image-Text Retrieval For the image-text cross-modal retrieval task, we follow the features and protocols from the well-known stacked cross attention network (SCAN) (Lee et al., 2018). In their framework, each image is represented by a set of local features V = {v1, . . . , vk}, where vi (\u2208RD) = Wvfi + bv and fi\u2019s are the CNN features extracted from salient image regions detected by the Faster-R-CNN model (Ren et al., 2015). The raw features fi\u2019s are \ufb01xed and {Wv, bv} are learnable parameters. The text (sentence) is also treated as a set of word features E = {e1, . . . , en}, where ei (\u2208RD) = (hlr i + hrl i )/2 and hlr/rl i are the outputs of the bi-directional GRU (Bahdanau et al., 2015; Schuster and Paliwal, 1997) with the sequence of word embeddings as input. Both the word embeddings and GRU parameters are learnable. These image/text features contain rich local information, however, one challenge is that both representations are sets, hence the number of elements (k and n) can vary from instance to instance. In (Lee et al., 2018), they proposed a cross-modal attention model, where each local feature from one modality is transformed by the attention (Vaswani et al., 2017) with the set of local features in the other modality; e.g., vi is transformed to attn(vi; {ej}n j=1) = the weighted sum of values {ej}n j=1 with vi as a query and {ej}n j=1 as keys (this denoted by i-t, while the other attention direction t-i can be used alternatively). Then the similarity score between image V and text E is de\ufb01ned as pool({cos(vi, attn(vi; {ej}n j=1))}K i=1), where cos(a, b) is the cosine similarity and pool is the pooling operation, either of AV G (average) or LSE (log-sumexp). Then the triplet contrastive loss of (1) is employed. Although the cross-attention is useful for capturing interaction between local features, computing the similarity score takes quadratic time in the number of local features in the instances. This is time consuming compared to the simple dot-product of the modality-wise embedding vectors (See Table 6 for wall-clock times). To have modality-wise succinct representation instead (for SwAMP), we adopt the induced-set attention idea from Set-Transformer (Lee et al., 2019). Speci\ufb01cally, we introduce p learnable prototype (query) vectors {qj}p j=1, qj \u2208RD. Then we compute the attention for each query with V (or E), i.e., zj = attn(qj; {vi}k i=1). We de\ufb01ne \u03c6image(V ) = concat(z1, . . . , zp), similarly for \u03c6text(E), where concat refers to concatenation. We share the same {qj}p j=1 for both modalities. We also have multi-head extension. We call these modality-wise features as prototype \fMinyoung Kim Table 4: Image-text retrieval results on Flickr30K. Methods Image \u2192Text Text \u2192Image R@1 R@5 R@10 R@1 R@5 R@10 DAN (Nam et al., 2017) 55.0 81.8 89.0 39.4 69.2 79.1 DPC (Zheng et al., 2017) 55.6 81.9 89.5 39.1 69.2 80.9 VSE++ (Faghri et al., 2018) 52.9 87.2 39.6 79.5 SCO (Huang et al., 2018) 55.5 82.0 89.3 41.1 70.5 80.1 SCAN i-t AVG 67.9 89.0 94.4 43.9 74.2 82.8 SCAN t-i AVG 61.8 87.5 93.7 45.8 74.4 83.0 SCAN t-i AVG + i-t LSE 67.4 90.3 95.8 48.6 77.7 85.2 Contrastive-PAR 65.7 86.8 92.4 48.2 75.8 84.2 SwAMP-PAR 67.8 88.5 94.0 49.1 76.1 83.7 Table 5: Image-text retrieval results on MS-COCO. 
Methods 5-fold (1K test images) Entire (5K test images) Image \u2192Text Text \u2192Image Image \u2192Text Text \u2192Image R@1 R@5 R@10 R@1 R@5 R@10 R@1 R@5 R@10 R@1 R@5 R@10 DPC (Zheng et al., 2017) 65.6 89.8 95.5 47.1 79.9 90.0 41.2 70.5 81.1 25.3 53.4 66.4 VSE++ (Faghri et al., 2018) 64.6 95.7 52.0 92.0 41.3 81.2 30.3 72.4 GXN (Gu et al., 2018) 68.5 97.9 56.6 94.5 42.0 84.7 31.7 74.6 SCO (Huang et al., 2018) 69.9 92.9 97.5 56.7 87.5 94.8 42.8 72.3 83.0 33.1 62.9 75.5 PCME (Chun et al., 2021) 68.8 54.6 44.2 31.9 SCAN i-t 69.2 93.2 97.5 54.4 86.0 93.6 46.4 77.4 87.2 34.4 63.7 75.7 SCAN t-i + i-t 72.7 94.8 98.4 58.8 88.4 94.8 50.4 82.2 90.0 38.6 69.3 80.4 Contrastive-PAR 71.8 94.3 97.9 56.8 86.9 93.8 48.4 78.1 88.1 34.3 64.4 76.2 SwAMP-PAR 72.6 94.6 98.0 57.4 87.6 94.1 49.7 79.1 88.3 35.0 65.1 76.6 attention representation (PAR). Note that computing PAR features has linear complexity in the number of local features (p assumed constant), and the cross-modal similarity is simply dot-product of PAR features, and can be computed in linear time (See also Table 6 for comparison with SCAN\u2019s cross-modal attention). We test our approach on the popular image-text retrieval datasets, MS-COCO and Flickr30K. The details of the datasets and training/test protocols are described in Appendix (Sec. D). The results are summarized in Table 4 and Table 5. We speci\ufb01cally highlight the comparison between the contrastive loss and our SwAMP loss with the modality-wise feature representation (Contrastive-PAR vs. SwAMP-PAR). For the PAR features, we choose the number of prototypes p = 20, attention weight temperature T = 0.5, and the number of heads H = 1 for Flickr, and p = 10, T = 0.5, H = 2 for MS-COCO. For the SwAMP hyperparameters, we use the impact of SwAMP loss \u03bb = 1.0, softmax temperature \u03c4 = 0.025, the number of classes K = 1, 000, queue size 1, 280 for both datasets. SwAMP performs consistently better than the contrastive loss and outperforms several state-of-the-arts including the recent sophisticated probabilistic embedding strategy (PCME) (Chun et al., 2021). When compared with the computationally expensive SCAN, SwAMP mostly outperforms SCAN except for the SCAN\u2019s best attention direction/combination choices. To see the computational advantage of SwAMP-PAR, we compare the actual training/test times for the two approaches in Table 6, measured on the same machine with a single GPU (RTX 2080 Ti) and Core i7 3.50GHz CPU. Our SwAMP-PAR is Table 6: Running times (seconds) measured on (Core i7 3.50GHz CPU / 128GB RAM / 1 RTX-2080Ti GPU). Perbatch times for training, entire times for test. For MS-COCO test, times for 5K test images (1K test in parentheses). Methods Flickr30K MS-COCO Train Test Train Test SCAN i-t AVG 0.35 336.9 0.33 9352.0 (350.3) SwAMP-PAR 0.09 3.8 0.08 25.9 (16.3) 200 500 1000 2000 3000 K (number of classes) 48.5 49.0 49.5 R@1 MS-COCO Image Text SwAMP Contrastive 0.1 0.5 1 2 5 10 (impact of SwAMP loss) 48.5 49.0 49.5 R@1 MS-COCO Image Text SwAMP Contrastive Figure 1: Impact of K (the number of classes) and \u03bb. about 4 times faster than SCAN for training on both datasets, while the difference becomes even more pronounced during test; SwAMP-PAR is about two orders of magnitude faster than the cross-modal attention model. 5.4 Ablation Study We perform empirical study on the impact of two important hyperparameters in our model: the number of classes K and SwAMP loss trade-off \u03bb. Number of classes (K). 
Recall that the best K values we chose were: K = 1000 for the image-text retrieval datasets and K = 500 for text-based video retrieval. To see how the retrieval performance is affected by other choices of K, we conduct experiments by varying K around the op\fSwAMP: Swapped Assignment of Multi-Modal Pairs for Cross-Modal Retrieval timal values. The results on MS-COCO (I \u2192T) and YouCook2 tasks are shown in Fig. 1 (Left). (More results on other datasets can be found in Appendix (Fig. 4\u20138, Sec. B).) Clearly, very small K has low retrieval performance (R@1), and increasing K leads to improvement. However, beyond certain points, there is no bene\ufb01t of increasing K and we even see performance degradation, which agrees with the observations from previous work (Asano et al., 2020; Caron et al., 2020). This is perhaps due to the dif\ufb01culty of assigning meaningful cluster labels in optimal transport. Overall, with properly chosen K, SwAMP outperforms contrastive learning, signifying that SwAMP\u2019s grouping/clustering of similar instances is more effective than vanilla instance discrimination. The fact that the optimal K values are different in two tasks (image-text and video-text) implies that the best cardinality of semantic clusters is highly dependent on the dataset characteristics (e.g., size and semantic diversity). SwAMP impact (\u03bb). The sensitivity to \u03bb is shown in Fig. 1 (Right), and more results and further discussions are in Appendix (Fig. 9\u201313, Sec. B). 5.5 Visualization of Learned Clusters As qualitative analysis, we visualize the learned clusters to see if they capture meaningful semantic information. On MS-COCO (trained with the number of classes K = 1000), we organize images and texts by their assigned cluster labels using the learned prototype classi\ufb01cation model (i.e., (2)). We \ufb01rst visually inspect individual clusters, images and texts that belong to each cluster. As we show a few examples in Fig. 2 (more in Appendix (Fig. 2,3, Sec. A)), each cluster contains semantically coherent data samples. Then we inspect texts (captions) in each cluster, and select a few keywords, those words that appear the most frequently in the texts. These keywords for each cluster consist of objects (noun) and/or actions (verb) that faithfully describe the cluster and data samples that belong to it. The full list is shown in Appendix (Fig. 1, Sec. A), but to enumerate a few of them (cluster ID: keywords), for instance, 0014: giraffe/feeding, 0169: soccer/playing, 0283: bus/parked, 0405: pizza/oven, 0597: vase/\ufb02owers, 0713: dog/ball, 0818: kite/\ufb02ying, 0956: parking/meter. Although the last three clusters in Fig. 2 all have the semantic meaning of baseball, they have different details in either activity or focus/scene: swing, base playing, and crowd scene. This means that SwAMP \ufb01nds clusters based on the whole contents (objects, acitivities, and scenes), instead of doing merely object-based clustering. Although we have roughly equal numbers of samples per cluster, we found that some clusters are overlapped with others in terms of semantic meaning (redundant clusters in Appendix (Fig. 1, Sec. A)), constituting larger super-clusters. These clusters are related to dominant data samples (e.g., cat, dog, tennis, baseball). This implies that the SwAMP can effectively deal Figure 2: Some randomly selected clusters with images and texts that belong to them. Each cluster, titled by ID: keywords, shows randomly chosen 5 images and 4 texts. 
with imbalance of semantic classes that can reside in data. 6" + }, + { + "url": "http://arxiv.org/abs/2111.05392v2", + "title": "Gaussian Process Meta Few-shot Classifier Learning via Linear Discriminant Laplace Approximation", + "abstract": "The meta learning few-shot classification is an emerging problem in machine\nlearning that received enormous attention recently, where the goal is to learn\na model that can quickly adapt to a new task with only a few labeled data. We\nconsider the Bayesian Gaussian process (GP) approach, in which we meta-learn\nthe GP prior, and the adaptation to a new task is carried out by the GP\npredictive model from the posterior inference. We adopt the Laplace posterior\napproximation, but to circumvent the iterative gradient steps for finding the\nMAP solution, we introduce a novel linear discriminant analysis (LDA) plugin as\na surrogate for the MAP solution. In essence, the MAP solution is approximated\nby the LDA estimate, but to take the GP prior into account, we adopt the\nprior-norm adjustment to estimate LDA's shared variance parameters, which\nensures that the adjusted estimate is consistent with the GP prior. This\nenables closed-form differentiable GP posteriors and predictive distributions,\nthus allowing fast meta training. We demonstrate considerable improvement over\nthe previous approaches.", + "authors": "Minyoung Kim, Timothy Hospedales", + "published": "2021-11-09", + "updated": "2021-12-13", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG" + ], + "main_content": "Introduction Few-shot classi\ufb01cation [7, 36, 14, 15] is the task of predicting class labels of data instances that have novel unseen class semantics, potentially from a novel domain, where the learner is given only a few labeled data from the domain. It receives signi\ufb01cant attention recently in machine learning, not only due to the practical reason that annotating a large amount of data for training deep models is prohibitively expensive, but also the constant endeavor in AI to build human-like intelligence where the human is extremely good at recognizing new categories from a few examples. In order to build a model that can generalize well to a novel task with only a few samples, meta learning [27, 3] forms a training stage that is similar to the test scenario. More speci\ufb01cally, during the training stage, the learner sees many tasks (or episodes) where each task consists of the support and query sets: the learner adapts the model to the current task using a few labeled data in the support set, and the performance of the adapted model is measured on the query set, which is used as a learning signal to update the learner. This is in nature a learning-to-learn paradigm [32, 28, 34, 8], and it often leads to more promising results in certain scenarios than simple supervised feature (transfer) learning. Although recently there were strong baselines introduced for the latter with some feature transformations [35, 5], in this paper we focus on the meta learning paradigm. As meta few-shot learning essentially aims to generalize well from only a few observations about a new task domain, it is important to learn prior information that is shared across different tasks. In this sense, the Bayesian approach [29, 31] is attractive in that we can express the prior belief effectively, and easily adapt our belief to a new task based on the given evidence, in a principled manner. In Bayesian meta learning [11, 41, 10, 16, 39], the adaptation to a new task corresponds Preprint. 
Under review. \fto posterior predictive distribution inference, and meta learning amounts to learning a good prior distribution from many training episodes. To enable ef\ufb01cient Bayesian meta learning, the posterior predictive inference needs to be fast and succinct (e.g., closed form). To this end, we consider the Gaussian process (GP) model with the linear deep kernel [38] that allows parametric treatment of GP via the weight-space view [24]. Although there was similar attempt recently [22], they resort to regression-based likelihood model for the classi\ufb01cation problem to derive closed-form inference, and such an ad hoc strategy can potentially lead to performance degradation. Instead, we propose a novel Laplace approximation for the GP posterior with a linear discriminant plugin, which avoids iterative gradient steps to \ufb01nd the maximum-a-posterior (MAP) adaptation solution, and allows a closed-form predictive distribution that can be used in stochastic gradient meta training ef\ufb01ciently. Hence, it is computationally more attractive than gradient-based adaptation approaches [8, 11, 16] by construction, and more amenable to train than neural net approximations of the predictive distribution (i.e., amortization) such as [10]. We show the improved performance of our GP approach over the regression-based previous work [22] and other state-of-the-arts on several benchmark datasets in both withinand cross-domain few-shot learning problems. 2 Problem Setup and Background We provide the formal training/test setup for the meta few-shot classi\ufb01cation problem (Sec. 2.1). We then brie\ufb02y review the recent GP-based few-shot learning algorithm (GPDKT) [22] in Sec. 2.2 due to its close relation to our proposed approach. 2.1 Meta few-shot learning framework The (C-way, k-shot) episodic meta few-shot classi\ufb01cation problem can be formally de\ufb01ned as follows: \u2022 Training stage (repeated for T times/episodes): 1. Sample training data (S, Q) for this episode: support set S = {(x, y)} and query set Q = {(x, y)}, where S consists of C \u00b7k samples (k samples for each of the C classes), and Q contains C \u00b7 kq samples (kq samples per class). We denote by y \u2208{1, . . ., C} the class labels in (S, Q), however, the semantic meaning of the classes is different from episode to episode. 2. With (S, Q), we train a meta learner F(S) \u2192h where the output of F is a C-way classi\ufb01er, h : X \u2192{1, . . ., C}. The training objective is typically de\ufb01ned on the query set, e.g., the prediction error of h on Q. \u2022 Test stage: 1. The k/kq-shot test data (S\u2217, Q\u2217) are sampled, but the query set Q\u2217is not revealed. For the k-shot support set S\u2217= {(x, y)}, we apply our learned F to S\u2217to obtain the classi\ufb01er h\u2217= F(S\u2217). Again, the semantic meaning of the test class labels are different from those in the training stage. The performance of h\u2217is measured on the test query set Q\u2217. For instance, in the popular ProtoNet [28], the meta learner learns the parameters \u03b8 of the feature extractor \u03c6\u03b8(x) (e.g., convolutional networks), and the meta learner\u2019s output h = F(S) is the nearest centroid classifer where the centroids are the class-wise means in S in the feature space. Note that h(x) admits a closed form (softmax), and the meta training updates \u03b8 by stochastic gradient descent with the loss, E(x,y)\u223cQ[CrossEnt(y, h(x))]. 
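To make the episodic setup above concrete, the following is a minimal ProtoNet-style sketch, assuming features have already been extracted by the backbone: the task-adapted classifier h = F(S) is the nearest-centroid softmax built from class-wise support means, and the query cross-entropy supplies the meta-training signal. The episode sizes and feature dimension below are placeholders.

import torch
import torch.nn.functional as F

def protonet_episode_loss(support_feats, support_labels, query_feats, query_labels, num_classes):
    # Class centroids are per-class means of the support features; query points are
    # classified by negative Euclidean distance to the centroids (a closed-form softmax).
    centroids = torch.stack([support_feats[support_labels == c].mean(dim=0)
                             for c in range(num_classes)])      # (C, d)
    logits = -torch.cdist(query_feats, centroids)               # (n_query, C)
    return F.cross_entropy(logits, query_labels)

# hypothetical 5-way 1-shot episode with 15 queries per class and 64-d features
C, k, kq, d = 5, 1, 15, 64
s_x, s_y = torch.randn(C * k, d), torch.arange(C).repeat_interleave(k)
q_x, q_y = torch.randn(C * kq, d), torch.arange(C).repeat_interleave(kq)
loss = protonet_episode_loss(s_x, s_y, q_x, q_y, C)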
Another example is the GP meta learning framework that essentially considers h = F(S) as a GP posterior predictive model, that is, p(y|x, S) = Z p(y|f(x)) p(f|S) d f (1) where f is a GP function, h(x) is de\ufb01ned as a probabilistic classi\ufb01er p(y|x, S), and p(f|S) \u221dp(f) \u00b7 Q (x,y)\u2208S p(y|f(x)). Meta training of F amounts to learning the GP prior distribution p(f) (i.e., GP mean/covariance functions). The recent GPDKT [22] is one incarnation of this GP framework. 2 \f2.2 Brief review of GPDKT GPDKT assumes the GP regression model (its usage to classi\ufb01cation will be described shortly), f(\u00b7) \u223cGP(0, k\u03b8(\u00b7, \u00b7)), (2) y = f(x) + \u01eb, \u01eb \u223cN(0, \u03c32), (3) where the GP covariance function k\u03b8 is de\ufb01ned as the deep kernel [38]: k\u03b8(x, x\u2032) = \u02dc k(\u03c6\u03b8(x), \u03c6\u03b8(x\u2032)), (4) where \u03c6\u03b8(x) is the feature extractor (comparable to that in ProtoNet) and \u02dc k(\u00b7, \u00b7) is a conventional kernel function (e.g., Gaussian RBF, linear, or cosine similarity). We abuse the notation to denote by \u03b8 all the parameters of the deep kernel, including those from the outer kernel \u02dc k. They pose the meta training as the marginal likelihood maximization on both support and query sets: max \u03b8 Z p(f) \u00b7 Y (x,y)\u2208S\u222aQ p(y|f(x)) d f. (5) Due to the regression model, the marginal likelihood admits a closed form, and one can easily optimize (5) by stochastic gradient ascent. To extend the GP model to the classi\ufb01cation problem, instead of adopting a softmax-type likelihood p(y|f(x)), they rather stick to the GP regression model. This is mainly for the closed-form posterior and marginal data likelihood. In the binary classi\ufb01cation problem, they assign real-valued y = \u00b11.0 as target response values for positive/negative classes, respectively, during training. At the test time, they threshold the real-valued outputs to get the discrete class labels. For the multi-class C-way problem with C > 2, they turn it into C binary classi\ufb01cation problems by one-vs-rest conversion. Then during training, they maximize the sum of the marginal log-likelihood scores over the C binary problems, while at test time the one with the largest predictive mean E[y|x, S] over the C problems is taken as the predicted class. Although this workaround allows fast adaptation and training with the closed-form solutions from GP regression, the ad hoc treatment of the discrete class labels may degrade the prediction accuracy. 3 Our Approach In this section we describe our Laplace approximation GP posterior formulation for the task adaptation, where we introduce the novel linear discriminant plug-in to circumvent the iterative optimization for the MAP solution and enable the closed-form formulas. Our formalism admits the softmax classi\ufb01cation likelihood model, more sensible than the regression-based treatment of the classi\ufb01cation problem. We adopt the weight-space view of the Gaussian process model [24] using the linear-type deep kernel, and consider the softmax likelihood model with C functions F(x) = {fj(x)}C j=1: p(y|F(x)) = efy(x) PC j=1 efj(x) , fj(x) = w\u22a4 j \u03c6(x) + bj, (6) wj \u223cN(0, \u03b22I), bj \u223cN(0, \u03b22 b ) for j = 1 . . . C. (7) We let W = [w1, . . . , wC] and B = [b1, . . . , bC] be the weight-space random variables for the GP functions. 
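For reference, a small sketch of evaluating such a deep kernel on already-extracted features is given below, using the linear (or unit-normalized, i.e., cosine-similarity) outer kernel together with the scaling and bias that these priors imply; the feature matrix and the values of the scale parameters are placeholders.

import torch

def deep_kernel_gram(feats, beta=1.0, beta_b=1.0, cosine=True):
    # Gram matrix of the deep kernel on features phi(x): a linear outer kernel on
    # (optionally unit-normalized) features, scaled and biased as
    # beta^2 * phi(x)^T phi(x') + beta_b^2.
    if cosine:
        feats = torch.nn.functional.normalize(feats, dim=-1)
    return beta ** 2 * feats @ feats.t() + beta_b ** 2

# hypothetical usage on stand-in features for n = 10 support points
K = deep_kernel_gram(torch.randn(10, 64), beta=1.0, beta_b=0.5)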
Note that in (7) the prior (scalar) parameters \u03b2, \u03b2b are shared over all C functions, which is reasonable considering that the semantic meaning of classes changes from episode to episode. And it is easy to see that the i.i.d. priors on (wj, bj) makes {fj(\u00b7)}C j=1 i.i.d. GPs with a zero mean and the covariance function, Cov(fj(x), fj(x\u2032)) = \u03b22\u03c6(x)\u22a4\u03c6(x\u2032) + \u03b22 b . (8) This can be interpreted as adopting a linear outer kernel \u02dc k(z, z\u2032) = z\u22a4z\u2032 in the deep kernel (4) with some scaling and biasing. Although our formulation excludes more complex nonlinear outer kernels (e.g., RBF or polynomial), it was shown that the linear or cosine-similarity outer kernel empirically 3 \fperformed the best among other choices [22]. Note that the latter cosine-similarity kernel is obtained by unit-norm feature transformation (\u03c6(x) \u2192 \u03c6(x) ||\u03c6(x)||). Given the support set S = {(x, y)}, the GP posterior distribution of fj(x) at some arbitrary input x becomes p(fj(x)|S) = p(w\u22a4 j \u03c6(x) + bj|S), and this is determined by the posterior p(W, B|S), where (up to constant) log p(W, B|S) = \u2212 C X j=1 \u0012||wj||2 2\u03b22 + b2 j 2\u03b22 b \u0013 + X (x,y)\u2208S \u0012 w\u22a4 y \u03c6(x) + by \u2212log C X j=1 ew\u22a4 j \u03c6(x)+bj \u0013 . (9) The posterior p(W, B|S) is used to build the task(S)-adapted classi\ufb01er p(y|x, S), the GP predictive distribution derived from (1). And the meta training in our model amounts to optimizing the classi\ufb01cation (cross-entropy) loss of the adapted classi\ufb01er on the query set with respect to the GP prior parameters (i.e., \u03b2, \u03b2b, and the parameters \u03b8 of the feature extractor network \u03c6), averaged over all training episodes. That is, our meta training loss/optimization can be written as: min \u03b8,\u03b2,\u03b2b E(S,Q) \" \u2212 X (x,y)\u2208Q log p(y|x, S) # , (10) where p(y|x, S) = RR p(W, B|S)p(y|x, W, B)dWdB, and the expectation is taken over (S, Q) samples from training episodes/tasks. Considering the dependency of the loss on these prior parameters as per (10), it is crucial to have a succinct (e.g., closed-form) expression for p(W, B|S), as well as the predictive distribution p(y|x, S). However, since p(W, B|S) does not admit a closed form due to the non-closed-form normalizer (i.e., the log-sum-exp of (24) over {wj, bj}j), we adopt the Laplace approximation that essentially approximates (24) by the second-order Taylor at around the MAP estimate {w\u2217 j , b\u2217 j}j, i.e., the maximum of (24). 3.1 Laplace approximation via LDA plugin with prior-norm adjustment Speci\ufb01cally we follow the diagonal covariance Laplace approximation with diagonalized Hessian of (24), which leads to the factorized posterior p(W, B|S) = QC j=1 p(wj, bj|S). The approximate posterior can be derived as p(wj, bj|S) \u2248N(wj; w\u2217 j , V \u2217 j ) N(bj; b\u2217 j, v\u2217 j ), with V \u2217 j = Diag \u0012 1 \u03b22 + X (x,y)\u2208S a\u2217(x, y, j)\u03c6(x)2 \u0013\u22121 (11) v\u2217 j = \u0012 1 \u03b22 b + X (x,y)\u2208S a\u2217(x, y, j) \u0013\u22121 (12) where a\u2217(x, y, j) = p(y = j|F \u2217(x)) \u2212p(y = j|F \u2217(x))2, F \u2217(x) = {f \u2217 j (x)}j with f \u2217 j (x) = w\u2217 j \u22a4\u03c6(x) + b\u2217 j, and all operations are element-wise. 
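A compact sketch of the quantities just defined, assuming features are precomputed: the unnormalized log-posterior of (9) and the diagonal Laplace variances of (11)-(12) evaluated at a given MAP (or surrogate) estimate. Variable names and shapes are illustrative, not the authors' code.

import torch

def log_posterior(W, b, feats, labels, beta, beta_b):
    # Unnormalized log p(W, B | S): Gaussian priors on the per-class weights/biases
    # plus the softmax log-likelihood on the support set.
    # W: (C, d), b: (C,), feats: (n, d), labels: (n,) integer class ids.
    log_prior = -(W.pow(2).sum() / (2 * beta ** 2) + b.pow(2).sum() / (2 * beta_b ** 2))
    logits = feats @ W.t() + b
    log_lik = (logits.gather(1, labels[:, None]).squeeze(1)
               - torch.logsumexp(logits, dim=1)).sum()
    return log_prior + log_lik

def diagonal_laplace_variances(W_star, b_star, feats, beta, beta_b):
    # Diagonal Laplace posterior variances around a MAP (or surrogate) estimate:
    # a*(x, j) = p_j - p_j^2 from the softmax at the estimate, accumulated over the
    # support set with element-wise squared features.
    probs = (feats @ W_star.t() + b_star).softmax(dim=1)       # (n, C)
    a = probs - probs.pow(2)                                   # (n, C)
    V_diag = 1.0 / (1.0 / beta ** 2 + a.t() @ feats.pow(2))    # (C, d)
    v = 1.0 / (1.0 / beta_b ** 2 + a.sum(dim=0))               # (C,)
    return V_diag, v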
However, obtaining the MAP estimate {w\u2217 j , b\u2217 j}j, i.e., the maximum of (24), although the objective is concave, usually requires several steps of gradient ascent, which can hinder ef\ufb01cient meta training. Recall that our meta training amounts to minimizing the loss of the task-adapted classi\ufb01er p(y|x, S) on a query set with respect to the feature extractor \u03c6\u03b8(\u00b7) and the GP prior parameters \u03b2, \u03b2b, and we prefer to have succinct (closed-form-like) expression for p(y|x, S) in terms of \u03b8, \u03b2, \u03b2b. The iterative dependency of p(y|x, S) on \u03c6, \u03b2, \u03b2b, resulting in a similar strategy as MAML [8], would make the meta training computationally expensive. To this end, we propose a novel linear discriminant analysis (LDA) plugin technique as a surrogate of the MAP estimate. LDA-Plugin. We preform the LDA on the support set S, which is equivalent to \ufb01t a mixture of Gaussians with equi-covariances by maximum likelihood [4]. More speci\ufb01cally, we consider the Gaussian mixture model (with some abuse of notation, e.g., p(x) instead of p(\u03c6(x))), p(x, y) = p(y)p(x|y) = \u03c0yN(\u03c6(x); \u00b5y, \u03c32I), (13) 4 \fwhere we assume the shared spherical covariance matrix across different classes. The maximum likelihood (ML) estimate on S can be derived as: \u03c0\u2217 j = nj n , \u00b5\u2217 j = X x\u2208Sj \u03c6(x) nj , \u03c32\u2217= C X j=1 X x\u2208Sj ||\u03c6(x) \u2212\u00b5\u2217 j||2 nd , (14) where Sj = {(x, y) \u2208S : y = j}, nj = |Sj|, n = |S|, and d = dim(\u03c6(x)). Then our idea is to use this ML-estimated Gaussian mixture to induce the class predictive model p(y|x) = p(x, y)/p(x), and match it with our GP likelihood p(y|F(x)) in (6) to obtain {wj, bj}j, which serves as a surrogate of the MAP estimate {w\u2217 j , b\u2217 j}j. More speci\ufb01cally, the class predictive from the Gaussian mixture is: p(y|x) = \u03c0yN(\u03c6(x); \u00b5y, \u03c32I) P j \u03c0jN(\u03c6(x); \u00b5j, \u03c32I) = exp \u0000(\u00b5y/\u03c32)\u22a4\u03c6(x) + log \u03c0y \u2212||\u00b5y||2/(2\u03c32) \u0001 P j exp \u0000(\u00b5j/\u03c32)\u22a4\u03c6(x) + log \u03c0j \u2212||\u00b5j||2/(2\u03c32) \u0001. (15) We match it with the GP likelihood model p(y|F(x)) from (6), that is, p(y|F(x)) = exp \u0000w\u22a4 y \u03c6(x) + by \u0001 P j exp \u0000w\u22a4 j \u03c6(x) + bj \u0001, (16) which establishes the following correspondence: wj = \u00b5j \u03c32 , bj = log \u03c0j \u2212||\u00b5j||2 2\u03c32 + \u03b1, (17) where \u03b1 is a constant (to be estimated). We aim to plug the LDA estimates (14) in (17), to obtain the MAP surrogate. However, there are two issues in this strategy: First, the ML estimate \u03c32\u2217can raise a numerical issue in the few-shot learning since the number of samples is too small1, although \u03c0\u2217 and \u00b5\u2217incur no such issue. Secondly, it is only the ML estimate with data S, and we have not taken into account the prior on {wj, bj}j. To address both issues simultaneously, we propose a prior-norm adjustment strategy, which also leads to a sensible estimate for \u03c32. Prior-norm adjustment. We will \ufb01nd \u03c32 that makes the surrogate wj in (17) become consistent with our prior p(wj) = N(0, \u03b22I). Since wj sampled from the prior can be written as wj = [\u03b2\u01ebj1, . . . , \u03b2\u01ebjd]\u22a4with \u01ebj1, . . . 
, \u01ebjd iid \u223cN(0, 1), we have: ||wj||2 = \u03b22 d X l=1 \u01eb2 jl = \u03b22d \u00b7 1 d d X l=1 \u01eb2 jl \u2248\u03b22d \u00b7 E[\u01eb2 jl] = \u03b22d, (18) where the approximation to the expectation gets more accurate as d increases due to the law of large numbers. The equation (18) implies that any wj that conforms to the prior has the norm approximately equal to \u03b2 \u221a d. Hence we enforce this to the surrogate wj in (17) to determine \u03c32. To consider all j = 1 \u00b7 \u00b7 \u00b7 C, we establish a simple mean-square equation, (1/C) PC j=1 ||\u00b5\u2217 j/\u03c32||2 = \u03b22d, and the solution leads to the prior-norm adjusted MAP surrogate (denoted by w\u2217 j ) as follows: \u03c32\u2217= 1 \u03b2 \u221a d v u u t 1 C C X j=1 ||\u00b5\u2217 j||2, w\u2217 j = \u00b5\u2217 j \u03c32\u2217. (19) Determining \u03b1. We adjust bj to take into account its prior, and from (17) this amounts to \ufb01nding \u03b1 properly. We directly optimize the log-posterior (24) with respect to \u03b1. Denoting \u02c6 bj = log \u03c0\u2217 j \u2212 ||\u00b5\u2217 j||2/(2\u03c32\u2217) (i.e., bj = \u02c6 bj + \u03b1), we solve \u2202log p({bj}j|S) \u2202\u03b1 = \u2212PC j=1 \u02c6 bj+\u03b1 \u03b22 b = 0, and have a MAP surrogate (denoted by b\u2217 j) as: \u03b1\u2217= \u22121 C C X j=1 \u02c6 bj, b\u2217 j = \u02c6 bj + \u03b1\u2217. (20) 1In the one-shot case (nj = 1), e.g., degenerate \u03c32\u2217= 0. 5 \fAlgorithm 1 GPLDLA meta training and meta test. [META TRAINING] Input: Initial GP prior parameters: \u03b8, \u03b2, \u03b2b. Output: Trained \u03b8, \u03b2, \u03b2b. Repeat: 0. Sample an episode/task. 1. Sample data (S, Q) from the current episode. 2. Estimate {\u03c0\u2217 j , \u00b5\u2217 j} with S using (14). 3. Estimate \u03c32\u2217and {w\u2217 j , b\u2217 j} using (19\u201320). 4. Update \u03b8, \u03b2, \u03b2b by SGD with (10) using (21). [META TEST] Input: Trained \u03b8, \u03b2, \u03b2b and test samples (S\u2217, Q\u2217). Output: Predictive distr. p(y\u2217|x\u2217, S\u2217) for x\u2217\u2208Q\u2217. Do 2 & 3 above with S\u2217; compute p(y\u2217|x\u2217, S\u2217) by (21). Summary. We have derived the Laplace approximated posterior p(wj, bj|S) in (11\u201312) with the MAP surrogate (w\u2217 j , b\u2217 j) from (19) and (20). From this GP posterior, we derive the predictive distribution p(y|x, S) that is used in our meta training (10) as well as meta test. We adopt the Monte Carlo estimate with M (reparametrized) samples from the posterior: p(y|x, S) \u22481 M M X m=1 p(y|x, W (m), B(m)), (21) where w(m) j = w\u2217 j + V \u2217 j 1 2 \u01eb(m) j , b(m) j = b\u2217 j + q v\u2217 j \u03b3(m) j where \u01eb(m) j and \u03b3(m) j are iid samples from N(0, 1). Note that the approximate p(y|x, S) in (21) depends on our GP prior parameters {\u03b8, \u03b2, \u03b2b} in a closed form, making the gradient evaluation and stochastic gradient descent training of (10) easy and straightforward. For the meta testing, we also use the same Monte Carlo estimate. The number of samples M = 10 usually works well in all our empirical studies. Our approach is dubbed GPLDLA (Gaussian Process Linear Discriminant Laplace Approximation). The \ufb01nal meta training/test algorithms are summarized in Alg. 1. 4 Related Work Few-shot/meta learning [3, 27] has received enormous attention recently with the surge of deep learning, and it now has a large body of literature [36, 14, 15]. The approaches in few-shot learning can broadly fall into two folds: feature transfer and the other meta learning. 
The former uses the entire training data to pretrain the feature extractor network, which is then adapted to a new task by \ufb01netuning the network or training the output heads with the few-shot test data [35, 5]. On the other hand, the meta learning approaches [28, 34, 8] follow the learning-to-learn paradigm [32], where the meta learner is trained by the empirical risk minimization principle. In the Bayesian meta learning [11, 41, 10, 16, 39], the prior on the underlying model parameters typically serves as the meta learner, and the adaptation to a new task corresponds to inference of the posterior predictive distribution. In this way the meta learning amounts to learning a good prior distribution from many training episodes. For the ef\ufb01cient meta training, the posterior predictive inference, i.e., adaptation procedure, needs to be fast and succinct (e.g., in closed forms). Some previous approaches used neural net approximation of the posterior predictive distribution (i.e., amortization) [10, 39], while others are based on gradient updates [8, 11, 16]. The main focus of meta few-shot learning lies on how to learn the meaningful prior model that can be quickly and accurately adaptable to novel tasks with only a limited amount of evidence. Another recent Bayesian meta learner closely related to ours is MetaQDA [42], where they consider a mixture-of-Gaussians (MoG) classi\ufb01er possibly with non-equal covariances, thus representing quadratic decision boundaries. With the Normalized-Inverse-Wishart prior on the MoG parameters, the posterior admits a closed-form expression by conjugacy. One of the key differences from our approach is that the MetaQDA deals with the joint MoG modeling p(x, y), while we focus on the discriminative p(y|x). It is known that the discriminative model has lower asymptotic error and 6 \fTable 1: Average accuracies and standard deviations on the CUB dataset. Best results are boldfaced. Methods Conv-4 ResNet-10 1-shot 5-shot 1-shot 5-shot Feature Transfer 46.19 \u00b1 0.64 68.40 \u00b1 0.79 63.64 \u00b1 0.91 81.27 \u00b1 0.57 Baseline++ [5] 61.75 \u00b1 0.95 78.51 \u00b1 0.59 69.55 \u00b1 0.89 85.17 \u00b1 0.50 MatchingNet [34] 60.19 \u00b1 1.02 75.11 \u00b1 0.35 71.29 \u00b1 0.87 83.47 \u00b1 0.58 ProtoNet [28] 52.52 \u00b1 1.90 75.93 \u00b1 0.46 73.22 \u00b1 0.92 85.01 \u00b1 0.52 MAML [8] 56.11 \u00b1 0.69 74.84 \u00b1 0.62 70.32 \u00b1 0.99 80.93 \u00b1 0.71 RelationNet [30] 62.52 \u00b1 0.34 78.22 \u00b1 0.07 70.47 \u00b1 0.99 83.70 \u00b1 0.55 SimpleShot [35] \u2212 \u2212 53.78 \u00b1 0.21 71.41 \u00b1 0.17 GPDKTCosSim [22] 63.37 \u00b1 0.19 77.73 \u00b1 0.26 70.81 \u00b1 0.52 83.26 \u00b1 0.50 GPDKTBNCosSim [22] 62.96 \u00b1 0.62 77.76 \u00b1 0.62 72.27 \u00b1 0.30 85.64 \u00b1 0.29 GPLDLA (Ours) 63.40 \u00b1 0.14 78.86 \u00b1 0.35 71.30 \u00b1 0.16 86.38 \u00b1 0.15 is more data ef\ufb01cient without requiring marginal input distribution modeling [20]. But this comes at the cost of the non-closed-form posterior, and we had to resort to Laplace approximation with the prior-norm adjusted MAP estimates. Despite superb performance, there are several shortcomings of MetaQDA: it involves a large number of Wishart prior parameters to be trained, O(Cd2) for C-way classi\ufb01cation and d-dimensional features. On the other hand, ours has only two extra scalar parameters \u03b2, \u03b2b. Moreover, MetaQDA\u2019s performance is rarely known when the backbone feature extractor network \u03c6(x) is jointly trained. 
They rather \ufb01x the features and only learn the prior QDA model. Its performance is highly reliant on the underlying feature extractor used. 5 Experiments In this section we test our GPLDLA on several popular benchmark tasks/datasets in meta few-shot classi\ufb01cation. We demonstrate the performance improvement over the state-of-the-arts, especially highlighting more accurate prediction than the previous GP few-shot model, GPDKT [22]. Implementation details. For fair comparison with existing approaches, we use the same feature extractor backbone network architectures \u03c6\u03b8(x) (e.g., convolutional networks or ResNets [13]) as competing models such as ProtoNet [28], Baseline++ [5], SimpleShot [35], and GPDKT [22]. For all experiments we use normalized features (\u03c6(x) \u2192 \u03c6(x) ||\u03c6(x)||), which corresponds to the cosinesimilarity outer kernel with the original feature in our deep kernel GP covariance function (8). As the GP prior parameters \u03b2, \u03b2b, the only extra parameters, are constrained to be positive, we represent them as exponential forms and perform gradient descent in the exponent space. The number of Monte Carlo samples is \ufb01xed as M = 10 for all experiments. The details of the optimization hyperparaemters are described in Appendix. Datasets/tasks and protocols. We consider both within-domain and cross-domain few-shot learning setups: the former takes the training and test episodes/tasks from the same dataset, while the latter takes training tasks from one dataset and test tasks from another. For the within-domain setup, we use the three most popular datasets, the Caltech-UCSD Birds [37] (denoted by CUB), the miniImageNet [34], and the tieredImageNet [26]. The CUB dataset has 11788 images from 200 classes, the miniImageNet has 60,000 images from 100 classes, while the tieredImageNet contains 779,165 images from 608 classes. We follow the standard data split: 100/50/50 classes for training/validation/test data for CUB, 64/16/20 for miniImageNet, and 391/97/160 for tieredImageNet. For the meta few-shot learning formation, we also follow the standard protocol: Each episode/task is formed by taking 5 random classes, and take k = 1 or k = 5 samples from each class for the support set S in the 1-shot or 5-shot cases. The query set is composed of kq = 15 samples per class. We only deal with C = 5-way classi\ufb01cation. The number of meta training iterations (i.e., the number of episodes) is chosen as 600 for 1-shot and 400 for 5-shot problems. Similarly as [22], the test performance is measured on 600 random test episodes/tasks averaged over 5 random runs. For the cross-domain setup, we consider two problems: i) OMNIGLOT\u2192EMNIST (that is, trained on the OMNIGLOT dataset [17] and validated/tested on the EMNIST [6]) and ii) miniImageNet\u2192CUB. 7 \fTable 2: Results on the miniImageNet dataset. Best scores are boldfaced. 
Methods Conv-4 ResNet-18 1-shot 5-shot 1-shot 5-shot Feature Transfer 39.51 \u00b1 0.23 60.51 \u00b1 0.55 \u2212 \u2212 Baseline++ [5] 47.15 \u00b1 0.49 66.18 \u00b1 0.18 51.87 \u00b1 0.77 75.68 \u00b1 0.63 MatchingNet [34] 48.25 \u00b1 0.65 62.71 \u00b1 0.44 \u2212 \u2212 ProtoNet [28] 44.19 \u00b1 1.30 64.07 \u00b1 0.65 54.16 \u00b1 0.82 73.68 \u00b1 0.65 MAML [8] 45.39 \u00b1 0.49 61.58 \u00b1 0.53 \u2212 \u2212 RelationNet [30] 48.76 \u00b1 0.17 64.20 \u00b1 0.28 52.48 \u00b1 0.86 69.83 \u00b1 0.68 ML-LSTM [25] 43.44 \u00b1 0.77 60.60 \u00b1 0.71 \u2212 \u2212 SNAIL [19] 45.10 55.20 \u2212 \u2212 VERSA [10] 48.53 \u00b1 1.84 67.37 \u00b1 0.86 \u2212 \u2212 LLAMA [11] 49.40 \u00b1 1.83 \u2212 \u2212 \u2212 Meta-Mixture [16] 49.60 \u00b1 1.50 64.60 \u00b1 0.92 \u2212 \u2212 SimpleShot [35] 49.69 \u00b1 0.19 66.92 \u00b1 0.17 62.85 \u00b1 0.20 80.02 \u00b1 0.14 GPDKTCosSim [22] 48.64 \u00b1 0.45 62.85 \u00b1 0.37 \u2212 \u2212 GPDKTBNCosSim [22] 49.73 \u00b1 0.07 64.00 \u00b1 0.09 \u2212 \u2212 GPLDLA (Ours) 52.58 \u00b1 0.19 69.59 \u00b1 0.16 60.05 \u00b1 0.20 79.22 \u00b1 0.15 Table 3: Results on the tieredImageNet dataset. Best scores are boldfaced. Methods Conv-4 ResNet-18 1-shot 5-shot 1-shot 5-shot ProtoNet [28] 53.31 \u00b1 0.89 72.69 \u00b1 0.74 \u2212 \u2212 MAML [8] 51.67 \u00b1 1.81 70.30 \u00b1 1.75 \u2212 \u2212 RelationNet [30] 54.48 \u00b1 0.48 71.31 \u00b1 0.78 \u2212 \u2212 SimpleShot [35] 51.02 \u00b1 0.20 68.98 \u00b1 0.18 69.09 \u00b1 0.22 84.58 \u00b1 0.16 GPDKTCosSim [22] 51.14 \u00b1 0.21 67.20 \u00b1 0.18 62.65 \u00b1 0.23 79.93 \u00b1 0.17 GPLDLA (Ours) 54.75 \u00b1 0.24 72.93 \u00b1 0.26 69.45 \u00b1 0.37 85.16 \u00b1 0.19 We follow the data splits, protocols, and other training details that are identical to those described in [22]. 5.1 Results 5.1.1 Within-domain classi\ufb01cation The results on the CUB, miniImageNet, and tieredImageNet datasets are summarized in Table 1, Table 2, and Table 3, respectively. To have fair comparison with existing approaches, we test our model on the four-layer convolutional network (known as Conv-4) used in [28, 34] and ResNet-10 as the backbone networks for the CUB dataset. For the miniImageNet and tieredImageNet, we use the Conv-4 and ResNet-18. We compare our GPLDLA with several state-of-the-arts, including MAML [8], ProtoNet [28], MatchingNet [34], and RelationNet [30]. We also consider the simple feature transfer, as well as strong baselines such as Baseline++ [5] and SimpleShot [35]. Among others, the (hierarchical) Bayesian approaches including VERSA [10], LLAMA [11], and MetaMixture [16], are also compared. However, we exclude methods that use more complex backbones or more sophisticated learning schedules [40, 2, 21, 23], and those that require a large number of extra parameters to be trained [42]. Our approach achieves the best performance on most of the setups. On the CUB dataset, GPLDLA attains the highest accuracies for three cases out of four. On the miniImageNet, GPLDLA exhibits signi\ufb01cantly higher performance than competing methods when the simpler backbone (Conv-4) is used, while being the second best and comparable to SimpleShot with the ResNet18 backbone2. And our GPLDLA outperforms GPDKT with all different kernels in most of the cases. GPLDLA also performs the best on tieredImageNet. 2SimpleShot with ResNet-18 backbone on the CUB scored accuracy 64.46 (1-shot) and 81.56 (5-shot). 8 \fTable 4: Cross-domain classi\ufb01cation performance. 
Methods OMNIGLOT\u2192EMNIST miniImageNet\u2192CUB 1-shot 5-shot 1-shot 5-shot Feature Transfer 64.22 \u00b1 1.24 86.10 \u00b1 0.84 32.77 \u00b1 0.35 50.34 \u00b1 0.27 Baseline++ [5] 56.84 \u00b1 0.91 80.01 \u00b1 0.92 39.19 \u00b1 0.12 57.31 \u00b1 0.11 MatchingNet [34] 75.01 \u00b1 2.09 87.41 \u00b1 1.79 36.98 \u00b1 0.06 50.72 \u00b1 0.36 ProtoNet [28] 72.04 \u00b1 0.82 87.22 \u00b1 1.01 33.27 \u00b1 1.09 52.16 \u00b1 0.17 MAML [8] 72.68 \u00b1 1.85 83.54 \u00b1 1.79 34.01 \u00b1 1.25 48.83 \u00b1 0.62 RelationNet [30] 75.62 \u00b1 1.00 87.84 \u00b1 0.27 37.13 \u00b1 0.20 51.76 \u00b1 1.48 GPDKTLinear [22] 75.97 \u00b1 0.70 89.51 \u00b1 0.44 38.72 \u00b1 0.42 54.20 \u00b1 0.37 GPDKTCosSim [22] 73.06 \u00b1 2.36 88.10 \u00b1 0.78 40.22 \u00b1 0.54 55.65 \u00b1 0.05 GPDKTBNCosSim [22] 75.40 \u00b1 1.10 90.30 \u00b1 0.49 40.14 \u00b1 0.18 56.40 \u00b1 1.34 GPLDLA (Ours) 76.65 \u00b1 0.29 89.71 \u00b1 0.14 41.92 \u00b1 0.27 60.88 \u00b1 0.30 Table 5: Cross-domain classi\ufb01cation performance with ResNet-18 backbone on miniImageNet\u2192CUB. Assoc-Align = [1], Neg-Margin = [18], and Cross-Domain = [33]. Methods 1-shot 5-shot Assoc-Align 47.25 \u00b1 0.76 72.37 \u00b1 0.89 Neg-Margin \u2212 69.30 \u00b1 0.73 Cross-Domain 47.47 \u00b1 0.75 66.98 \u00b1 0.68 GPLDLA (Ours) 48.94 \u00b1 0.45 69.83 \u00b1 0.36 5.1.2 Cross-domain classi\ufb01cation Unlike within-domain classi\ufb01cation, we test the trained model on test data from a different domain/dataset. This cross-domain experiments can judge the generalization performance of the few-shot algorithms in challenging unseen domain scenarios. The results are summarized in Table 4 where we use the Conv-4 backbone for both cases. GPLDLA exhibits the best performance for most cases outperforming GPDKT, except for one case. Our GPLDLA also performs comparably well with recent approaches with the ResNet-18 backbone on the miniImageNet\u2192CUB task as shown in Table 5. 5.1.3 Calibration errors Considering the practical use of the machine learning algorithms, it is important to align the model\u2019s prediction accuracy and its prediction con\ufb01dence. For instance, when model\u2019s prediction is wrong, it would be problematic if the con\ufb01dence of prediction is high. In this section we evaluate this alignment measure for our approach. Speci\ufb01cally we employ the expected calibration error (ECE) [12] as the measure of misalignment. The ECE can be computed by the following procedure: the model\u2019s prediction con\ufb01dence scores on the test cases are sorted and partitioned into H bins (e.g., H = 20), and for each bin we compute the difference between prediction accuracy (on the test examples that belong to the bin) and the con\ufb01dence score of the bin. The ECE is the weighted average of these differences over the bins with the weights proportional to the numbers of bin samples. Hence the smaller the better. Following [22], we sample 3000 tasks from the test set on the CUB dataset, and calibrate the temperature parameter by minimizing the negative log-likelihood score, and use another 3000 tasks from the test data to evaluate the ECE loss. The ECE losses averaged over \ufb01ve random runs are summarized in Table 6. On the 1-shot case, our GPLDLA attains the lowest calibration error, while being slightly worse than ProtoNet and GPDKT on 5-shot. 5.2 Ablation study To verify the impact of the proposed approximation strategy of LDA plugin and prior-norm adjustment, we conduct ablation study in this section. 
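To make the adaptation step that the comparisons listed next will vary fully concrete, here is a simplified sketch of the LDA plug-in estimates (14), the prior-norm-adjusted MAP surrogate (19)-(20), and the Monte Carlo predictive (21); it assumes precomputed features and the diagonal Laplace variances sketched earlier, and is an illustration rather than the authors' code.

import torch

def gplda_adapt(support_feats, support_labels, beta, num_classes):
    # LDA plug-in on the support set (Eq. 14): class priors and means, followed by the
    # prior-norm adjustment of the shared variance (Eq. 19) and bias centering (Eq. 20).
    n, d = support_feats.shape
    pis = torch.stack([(support_labels == c).float().mean() for c in range(num_classes)])
    mus = torch.stack([support_feats[support_labels == c].mean(dim=0) for c in range(num_classes)])
    sigma2 = mus.pow(2).sum(dim=1).mean().sqrt() / (beta * d ** 0.5)
    W_star = mus / sigma2
    b_hat = torch.log(pis) - mus.pow(2).sum(dim=1) / (2 * sigma2)
    b_star = b_hat - b_hat.mean()          # alpha* = -mean(b_hat)
    return W_star, b_star

def mc_predictive(query_feats, W_star, b_star, V_diag, v, num_samples=10):
    # Monte Carlo predictive (Eq. 21) with reparametrized samples from the diagonal
    # Laplace posterior; V_diag: (C, d) weight variances, v: (C,) bias variances.
    probs = torch.zeros(query_feats.shape[0], W_star.shape[0])
    for _ in range(num_samples):
        Wm = W_star + V_diag.sqrt() * torch.randn_like(W_star)
        bm = b_star + v.sqrt() * torch.randn_like(b_star)
        probs += (query_feats @ Wm.t() + bm).softmax(dim=1)
    return probs / num_samples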
We compare three models: i) Laplace approximation that \ufb01nds the MAP solution without any approximation (neither LDA plugin nor prior-norm adjustment), ii) approximate MAP estimate by LDA plugin alone without prior-norm adjustment, and iii) 9 \fTable 6: Expected calibration errors. METHODS 1-SHOT 5-SHOT FEATURE TRANSFER 12.57 \u00b1 0.23 18.43 \u00b1 0.16 BASELINE++ 4.91 \u00b1 0.81 2.04 \u00b1 0.67 MATCHINGNET 3.11 \u00b1 0.39 2.23 \u00b1 0.25 PROTONET 1.07 \u00b1 0.15 0.93 \u00b1 0.16 MAML 1.14 \u00b1 0.22 2.47 \u00b1 0.07 RELATIONNET 4.13 \u00b1 1.72 2.80 \u00b1 0.63 GPDKTBNCOSSIM 2.62 \u00b1 0.19 1.15 \u00b1 0.21 GPLDLA (OURS) 0.74 \u00b1 0.12 1.34 \u00b1 0.16 Table 7: Ablation study comparing Laplace approximation, LDA plugin alone without prior-norm adjustment, and ours (LDA plugin + prior-norm adjustment). (a) CUB Methods Conv-4 ResNet-10 1-shot 5-shot 1-shot 5-shot Laplace approximation 61.94 \u00b1 0.22 78.31 \u00b1 0.16 70.57 \u00b1 0.23 84.62 \u00b1 0.13 LDA plugin alone 49.27 \u00b1 0.23 62.69 \u00b1 0.19 52.28 \u00b1 0.24 66.61 \u00b1 0.20 LDA plugin + prior-norm adjustment (Ours) 63.40 \u00b1 0.14 78.86 \u00b1 0.35 71.30 \u00b1 0.16 86.38 \u00b1 0.15 (b) miniImageNet Methods Conv-4 ResNet-18 1-shot 5-shot 1-shot 5-shot Laplace approximation 52.47 \u00b1 0.19 69.42 \u00b1 0.16 59.61 \u00b1 0.20 79.37 \u00b1 0.14 LDA plugin alone 42.73 \u00b1 0.18 53.29 \u00b1 0.16 59.66 \u00b1 0.20 79.50 \u00b1 0.14 LDA plugin + prior-norm adjustment (Ours) 52.58 \u00b1 0.19 69.59 \u00b1 0.16 60.05 \u00b1 0.20 79.22 \u00b1 0.15 (c) tieredImageNet Methods Conv-4 ResNet-18 1-shot 5-shot 1-shot 5-shot Laplace approximation 53.46 \u00b1 0.22 71.97 \u00b1 0.18 67.93 \u00b1 0.22 84.09 \u00b1 0.16 LDA plugin alone 39.59 \u00b1 0.20 50.93 \u00b1 0.19 64.19 \u00b1 0.22 76.16 \u00b1 0.22 LDA plugin + prior-norm adjustment (Ours) 54.75 \u00b1 0.24 72.93 \u00b1 0.26 69.45 \u00b1 0.37 85.16 \u00b1 0.19 both LDA plugin and prior-norm adjustment (hence our GPLDLA). For Laplace approximation we take 5 Newton steps to \ufb01nd the MAP solution, where the cascaded update operations are differentiable for meta learning of the prior parameters. For the LDA plugin alone, we need to estimate the class-conditional variance parameter \u03c32, and we adopt the median distance heuristics: \u03c3 set as the median of all pairwise feature distances (in the support set), a common practice for determining feature scales in kernel machines [9]. Table 7 summarizes the test performance of the three models. Overall our GPLDLA and Laplace approximation perform equally well whereas GPLDLA slightly outperforms Laplace approximation most of the time. This may be attributed to the ProtoNet-like effect of the LDA plugin estimator, which serves as an additional regularizer for few-shot learning. We also see that LDA plugin alone signi\ufb01cantly underperforms GPLDLA, implying that prior-norm adjustment is more effective than heuristic median-distance rule in determining the class conditional feature scale. However, the differences are less pronounced for larger backbones (ResNets) in which the importance of the sophisticated features is dominant. 5.3 Running times Next we measure the wall clock running times for competing Bayesian meta few-shot methods. The per-episode inference time with the Conv-4 backbone on tieredImageNet is reported in Table 8. It shows that GPLDLA is the fastest thanks to the ef\ufb01cient closed-form inference steps. 
GPDKT [22] has computational overhead of solving C binarized problems separately where C is the number of 10 \fTable 8: Per-episode inference time (milliseconds) with the Conv-4 backbone on tieredImageNet. We exclude the feature computation times and only measure the inference time, that is, time for computing p(y|x, S) for (x, y) \u2208Q. Methods 1-shot 5-shot GPDKT [22] 10.42 \u00b1 0.24 12.02 \u00b1 0.08 MetaQDA (Full Bayesian) [42] 22.57 \u00b1 0.57 25.84 \u00b1 1.19 MetaQDA (MAP) [42] 20.29 \u00b1 0.22 22.49 \u00b1 0.67 Laplace approximation 11.42 \u00b1 1.70 14.08 \u00b1 0.63 GPLDLA (Ours) 6.70 \u00b1 0.03 6.71 \u00b1 0.14 ways. MetaQDA [42] suffers from slow inference due to the cubic time (in the feature dimension) to deal with full covariance matrices and their inverses. 6" + }, + { + "url": "http://arxiv.org/abs/2107.08737v1", + "title": "Synthesizing Human Faces using Latent Space Factorization and Local Weights (Extended Version)", + "abstract": "We propose a 3D face generative model with local weights to increase the\nmodel's variations and expressiveness. The proposed model allows partial\nmanipulation of the face while still learning the whole face mesh. For this\npurpose, we address an effective way to extract local facial features from the\nentire data and explore a way to manipulate them during a holistic generation.\nFirst, we factorize the latent space of the whole face to the subspace\nindicating different parts of the face. In addition, local weights generated by\nnon-negative matrix factorization are applied to the factorized latent space so\nthat the decomposed part space is semantically meaningful. We experiment with\nour model and observe that effective facial part manipulation is possible and\nthat the model's expressiveness is improved.", + "authors": "Minyoung Kim, Young J. Kim", + "published": "2021-07-19", + "updated": "2021-07-19", + "primary_cat": "cs.GR", + "cats": [ + "cs.GR", + "cs.CV" + ], + "main_content": "Introduction Various methods have been studied to develop three-dimensional(3D) geometric models to generate human faces. Its importance has increased lately due to the progress in virtual reality, especially virtual humans [1]. However, modeling a human face still needs a tremendous human effort. Many researchers have proposed new approaches to address this dif\ufb01culty. Among them, the learning-based method exhibits notable advancements recently. With the advent of the generative adversarial networks (GAN), attention to the generation model is increasing, and related research using deep learning is being actively pursued. However, most existing works are focused on a holistic generative approach to generate all parts at once and lack part details and manipulation. Previous part-based generative models exploit explicit segmentation data or labels for training their model to learn the structure of the object parts or use several part decoders[2, 3, 4]. However, existing 3D facial mesh datasets barely have pre-segmented data. Thus, we investigate an effective way to extract or present localized features from the whole data. Toward this goal, mesh segmentation might be one of the possible solutions. However, since human faces are often smooth, it is a challenge to segment the facial mesh explicitly. To bypass this, we exploit a generative approach that does not require additional segmentation data and makes the whole learning model simple. 
Furthermore, we explore a way for part control while exploiting holistic generation by learning localized features. In this paper, we propose a locally weighted 3D face generative model to increase variations and expressiveness of the model. Our approach can generate a rich variety of 3D face models beyond the training data using part manipulation with latent factorization. With a part-based representation of the data, our model is simpler and more straightforward than others and does not require any semantic segmentation labels. Our main contributions are: (1) Locally weighted generative autoencoder for generatarXiv:2107.08737v1 [cs.GR] 19 Jul 2021 \fFigure 1: Facial synthesis samples created by our generative model ing a whole human face geometric model; (2) End-to-end learning to learn local features without explicit facial feature segmentation data; (3) Experimentation and demonstration of the proposed model\u2019s performance in terms of generation and part manipulation. As a basis model, we leverage Ranjan et al. [5]\u2019s autoencoder with latent space factorization and apply local weights that partially in\ufb02uence the model during training. Latent factorization enables manipulation of the local part of the face, and local weights make decomposed part spaces more semantically meaningful without additional segmentation labels. We also evaluate the performance of the proposed model in terms of part modi\ufb01cation, part combination, and ablation tests to show the effect of each model component on the results. The majority of the materials contained in this paper is based on the same author\u2019s dissertation [6]. 2 Related Work 2.1 3D Face Representations and Local models Blanz and Vetter [7] introduced the \ufb01rst 3D face morphable models (3DMMs), which are statistical models of global 3D face shapes and textures. They employed principal component analysis (PCA) to construct principal components to express facial shape and texture. More recently, Booth et al. [8] proposed the \ufb01rst largest scale morphable model, the large scale face model (LSFM) [8], constructed from 9663 distinct facial identities. Paysan et al.\u2019s Basel face model (BFM) [9] also has been widely used. However, the 3DMMs are limited to representation of high-frequency details and form a latent model space due to their linear bases and training data. There exist attempts to generate a new face with face segmentation and a local model to increase the model\u2019s expressiveness and achieve \ufb01ne-scale modeling. Blanz and Vetter [7] demonstrated region-based modeling with 3DMMs by manually dividing the face into regions that can be learned by the PCA models. Tena et al. [10] presented region-based linear face modeling with automatic segmentation by clustering. Tran et al. [11] also proposed nonlinear 3DMMs with a global and local-based network to extract features of the global face structure and face part details simultaneously. Recently, Ghafourzadeh et al. [12] proposed a part-based approach that conducts part-based facial models using PCA. This model results in a locally edited face by applying an anthropometric measurement. 2.2 Part-Based Shape Generative Models With the advantages of CNNs for hierarchical feature extraction, many generative models also capitalize on its bene\ufb01ts to progress shape modeling. Wang et al. [2] proposed holistic voxel-based generative adversarial networks called globalto-local GAN and part re\ufb01ner. 
They showed better shape variety and distribution than a plain three-dimensional GAN. CompoNet [3] presented a part-based generative neural network for shapes. They proved that the part-based model encourages the generator to create new data unseen in the training set. Dubrovina et \fal. [4] handled the composition and decomposition of each part as a simple linear operation on the factorized embedding space. They used projection matrices to split full object encodings into part encodings. The proposed decomposercomposer network can perform meaningful part manipulations and high-\ufb01delity 3D shape generation. To composite each part, both [3] and [4] compute per-part af\ufb01ne transformation. Our model does not utilize spatial transformer networks [13] nor computes af\ufb01ne transformation to combinate each part of the data. We pursue a holistic generation approach but also allow part manipulation without explicit segmentation. Therefore, we do not need to worry about the artifacts when the model combines each part into a whole shape. 2.3 Feature Matrix Factorization Some feature factorization methods interpret data more semantically since they can decompose the data into a part-based representation. Among those methods, non-negative matrix factorization (NMF) is a robust feature factorization method to represent data as part-based ones. Lee and Seung [14] popularized NMF by showing its interpretability for the part-based representation of facial images. Koppen et al. [15] extended NMF to 3D registered images. McGraw et al. [16] presented 3D segmentation based on NMF and produce meaningful results. For its application, Li et al. [17] proposed the concept of sketch as an input of GANs, which is the noise transformed to the basis matrix in NMF that has the underlying features of the raw data. By utilizing a part-based representation, Collins et al. [18] performed local and semantically-aware changes through a global operation on the 2D image domain. They applied spherical k-means clustering [19] on the last feature map to identify features that are semantically meaningful. Collins and S\u00a8 usstrunk [20] demonstrated localized features using NMF. They applied NMF to the last feature map, where the semantic features are encoded. By factorizing the feature map, they can decompose an input image into several semantic regions. Inspired by these works, we apply NMF to 3D faces mesh to obtain local features of data and supply them to the holistic generative model. 3 Mesh Convolution Neural Networks We choose to represent 3D faces with triangular mesh due to its ef\ufb01ciency. Among previous approaches for applying mesh convolution operation, Ranjan et al. [5] proposed CoMA employing fast Chebyshev \ufb01lters [21] with a novel mesh pooling method. Since our model is largely based on this model, we brie\ufb02y describe the convolution operation applied to mesh data. 3.1 Mesh Representation We represent a 3D face mesh as a set of vertices V \u2208RN\u00d73 and edges E. The edges are represented by an adjacency matrix A \u2208{0, 1}N\u00d7N where Aij = 1 denotes where there is an edge connecting vertices i and j, and Aij = 0 otherwise. 3.2 Spectral Graph Convolution on Face Mesh Defferrard et al. [21] use convolution on graphs with a frequency domain approach under the convolution theorem. The convolution in the spatial domain equals element-wise multiplication in the frequency domain. To convert the graph from the spatial domain to the frequency domain, Defferrard et al. 
[21] \ufb01rst applied the graph Fourier transform [22] to the input mesh. The graph Laplacian matrix is de\ufb01ned as L = D \u2212A, where D is a diagonal matrix, with Di,i = \u03a3jAij. The Laplacian matrix is diagonalized by the Fourier basis U \u2208RN\u00d7N as L = U\u039bUT . Here, the columns of U = [ u0, u1, . . . , un\u22121] are the orthogonal eigenvectors of L, and \u039b = diag ([ \u03bb0, \u03bb1, . . . , \u03bbn\u22121]) \u2208RN\u00d7N is a diagonal matrix. Following the convolution theorem, the convolution operator \u2217can be de\ufb01ned in the Fourier space as the element-wise product X \u2217Wspec = U(U T (X) \u2299U T (Wspec)). Because of U, which is not sparse, this operation needs high computational costs. To address this problem, Defferrard et al. [21] formulate spec\f(a)\u00a0Encoder \ud835\udc43 \u0b35 \ud835\udc43\u0b34 \ud835\udc43\u0b36 \ud835\udc43\u0b37 \ud835\udc4a \u0bd8 \ud835\udc4a \u0bd9 \ud835\udc4a \u0be1 \ud835\udc4a \u0be0 * * * * Pre\u2010computed\u00a0NMF\u00a0Basis\u00a0Matrices Latent\u00a0Factorization\u00a0with\u00a0Projection\u00a0Matrices sum \ud835\udc4a \u0bc5 \u0bea FC FC 1\u2010D\u00a0whole encoding 1\u2010D\u00a0part encodings \ud835\udc4a \u0bc5 \u0bd9 \ud835\udc4a \u0bc5 \u0bd8 \ud835\udc4a \u0bc5 \u0be1 \ud835\udc4a \u0bc5 \u0be0 Locally\u00a0weighted encodings Sum\u00a0of\u00a0locally\u00a0 weighted encodings (b)\u00a0Projection\u00a0Part (c)\u00a0Decoder Figure 2: Locally weighted autoencoder architecture tral convolution with a \ufb01lter W\u03b8 using a recursive Chebyshev polynomial ([21], [23]). The \ufb01lter W\u03b8 is parametrized as a Chebyshev polynomial of order k by W\u03b8 (L) = K\u22121 X k=0 \u03b8kTk \u0010 e L \u0011 , (1) where e L = 2L \u03bbmax \u2212In is the scaled Laplacian matrix, and \u03bbmax is the maximum eigenvalue of the Laplacian matrix. The parameter \u03b8 \u2208RK is a vector of the Chebyshev coef\ufb01cients, and Tk \u2208RN\u00d7N is the Chebyshev polynomial of order k, which is computed recursively as Tk (x) = 2xTk\u22121 (x) \u2212Tk\u22121(x), with T0 = 1 and T1 = x. For each convolution layer, the spectral graph convolutions are Yj = Fin X i=0 W\u03b8i,j(L)xi, (2) where xi is the i-th feature of the input x \u2208 RN\u00d7Fin, and Yj is the j-th feature of the output Y \u2208RN\u00d7Fout. For each convolution layer, the spectral graph convolution has Fin \u00d7 Fout vectors of the Chebyshev coef\ufb01cient \u03b8i, j \u2208RK as trainable parameters. 4 Locally Weighted Autoencoder Our model is based on the autoencoder[5] consisting of an encoder, projection, and a decoder, as illustrated in Fig. 2. Although their work has shown a decent performance of reconstructing 3D faces, we take one step further to improve generation ability and controllability by using per-part manipulation. Utilizing the basic generation ability of Ranjan et al [5]\u2019s model, we added two new methods: latent factorization and local weights. The encoder and decoder learn how to compress and decompress the data, respectively. In between them, the projection part factorizes the latent space into the subspace and applies local weights to make the subspace semantically meaningful. More details on local weights and latent space manipulation will be explained in the following section. 4.1 Pre-Computed Local Weights from NMF We use a part-based representation to extract the local part structure without segmented data or labels. 
The representation is used as weights, which have each vertex\u2019s in\ufb02uence on each divided facial part. To make a part-based representation of the whole data, we employ the NMF. This method \ufb01nds a low-rank approximation of a matrix V , where V \u2248WH, when V, W, and H do not have non-negative values. Given a feature matrix, V, W is a basis matrix that contains basis elements of V , and H is a latent representation matrix. We call the matrix W local weights. To \fexpress local features more ef\ufb01ciently, we applied sparse NMF [24] enforcing sparsity on the column of H. This could improve the local separation of features [16]. We compute this with a sparsity constraint value of 7.5. The computed local weights serve as the in\ufb02uence of each vertex on a speci\ufb01c area. We expect that local weights would make the part encodings more semantically meaningful. Fig. 3 shows the visualization of the local weights. The bright area shows how much each vertex in\ufb02uences the facial area. Figure 3: Pre-computed sparse NMF\u2019s basis matrix Before training the model, we compute a basis matrix W and input a simpli\ufb01ed template mesh vertices matrix V \u2208RP\u00d73 , where P is the number of vertices with positions in three dimensions, i.e., x, y, and z. Given an input matrix V , NMF produces a basis matrix W \u2208RP\u00d7K, which means that the basis features of the vertices are indexed by K, and the coef\ufb01cient H \u2208 RK\u00d73 is indexed by the vertex positions. After obtaining several W, we selected K basis matrices from them that have the most semantic features. 4.2 Latent Space Manipulation 4.2.1 Projection Matrix Layer Our encoder takes a whole shape as input and compresses to a low-dimensional representation, i.e., a latent vector. This encoding re\ufb02ects the whole shape structure. When we factorize the whole encoding, we can generate part encodings corresponding to the shape structure of the part. Thereby, we disentangle different semantic part encodings from the encoding of the whole shapes. We then perform part-level shape manipulation. Dubrovina et al. [4] use projection matrices to transform a whole shape embedding into semantic part embeddings. They factorize the latent space into a semantic subspace with data-driven learned parameters. Similarly to [4], we use learnable projection matrices to transform the whole part encoding from the global latent space to the localized basis matrix space. We de\ufb01ne part-speci\ufb01c projection matrices, where K is the number of semantic parts. Passing through the matrices, the whole part encodings from the encoder are divided into semantic part encodings. For embedding parts, we implement projection matrices represented as K fully connected layers without biases and with the latent dimension size of Z \u00d7 Z. The input of the projection layers is a whole face encoding produced by the encoder, and their outputs are K part encodings. The K part encodings can be split unpredictably and have arbitrary meanings. To make them more semantically meaningful, we apply pre-computed local weights. We explain this in the following paragraph. 4.2.2 Applying Local Weights to Factorized Part Encodings Li et al. [17] proposed sketch, a combination of random noise and features of the original data, produced by transforming vectors from the noise space to a basis matrix space in NMF. Following [17], we apply the pre-computed local weights to the part encodings that are factorized by the projection matrices (Fig. 2-(b)). 
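As a rough illustration of how the local weights W could be pre-computed, the following sketch factorizes a (here random) non-negative vertex matrix with scikit-learn's NMF. The sparse NMF of [24] with its sparsity constraint of 7.5 is a different formulation; the L1-regularized variant, the regularization strength, and the final normalization below are stand-in assumptions.

```python
import numpy as np
from sklearn.decomposition import NMF

# Pre-compute the local weights W (basis matrix) from a non-negative vertex matrix
# V (P x 3), V ~= W H.  The template mesh is random here for illustration, and
# scikit-learn's L1-regularized NMF (>= 1.0 for alpha_H / l1_ratio) is only a
# stand-in for the sparse NMF of [24].
P, K = 53215, 4                                    # number of vertices, number of parts
V = np.abs(np.random.rand(P, 3))                   # NMF requires non-negative input

nmf = NMF(n_components=K, init="random",
          alpha_H=0.1, l1_ratio=1.0,               # sparsity on H (illustrative strength)
          max_iter=500, random_state=0)
W = nmf.fit_transform(V)                           # (P, K): per-vertex influence of each part
H = nmf.components_                                # (K, 3): latent representation matrix

W = W / (W.max(axis=0, keepdims=True) + 1e-8)      # optional normalization (an assumption)
print(W.shape, H.shape)                            # (53215, 4) (4, 3)
```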
Each pre-computed local weight is multiplied by each latent vector. Thanks to this operation, each factorized latent vector has a localized weight, and the encodings lie on a part-based subspace. We describe this process schematically in Fig. 4. Pre-computed local weights, which form the basis matrix derived from NMF W \u2208RP\u00d7K, are applied to the factorized latent vectors Z \u2208RK\u00d7Z. This process produces a locally weighted matrix WL \u2208RK\u00d7N\u00d7Z and then sum up as WL \u2208RN\u00d7Z (Fig. 2). This matrix is provided as an input to the fully connected layer of the decoder. * * * * Local\u00a0weighted\u00a0 space Localized\u00a0 subspace Global\u00a0 latent\u00a0space Projection Layer Local Weights Figure 4: Illustration of the projection part \f5 Implementation 5.1 Datasets To obtain large facial mesh data, we used the AFLW2000-3D dataset [25] having 2,000 3D faces and the corresponding landmarks of AFLW [25] face images. Each 3D face has 53,215 vertices. All faces are in full correspondence and generated by the Basel Face Model [9] without pose variations. In data preprocessing, we matched all facial mesh topology, i.e., those with the same vertex ordering. However, we did not clean some noisy area around the lips in the dataset, which is visible both in the dataset and in our synthetic results. The dataset was divided into a training set and a test set with 1,780 faces and 220 faces, respectively. 5.2 Implementation Details Our proposed synthesis model has a similar architecture like CoMA [5], following their downsampling and up-sampling method for coarseto-\ufb01ne convolution networks. The structure of the encoder and decoder is shown in Table 1 and Table 2. Similar to COMA, our encoder contained four convolution layers, followed by a biased ReLU [26]. After passing the convolution layer, the input mesh was downsampled approximately four times. The last fully-connected layer transformed the face mesh into a 64-dimensional latent vector. Once the \ufb01rst layer of the decoder transforms the latent vector, other processes mirror the encoder with an upsampling procedure, increasing the mesh data approximately four times. To optimize the networks, we exploit the L1 and cycle loss [4]. We trained our model for 300 epochs with a batch size of 32. The dimension of the latent vector was 64. The initial learning rate started at 0.0125 and decreased by 0.99 every epoch. We used stochastic gradient descent with a momentum of 0.9 to optimize and set Chebyshev \ufb01ltering with K as 6. We used PyTorch [27] and PyTorch Geometric [28] to implement our model and conducted all experiments with NVIDIA Titan RTX GPU 24GB. 
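To make the projection part and the locally weighted encodings concrete, here is a minimal PyTorch sketch following Fig. 2(b) and the shapes above (Z = 64, K = 4). It is not the authors' code; in particular, defining the local weights at the N = 208 down-sampled vertices (so that the summed (N, Z) matrix matches the decoder's first fully connected layer in Table 2) is an assumption of this sketch.

```python
import torch
import torch.nn as nn

class ProjectionPart(nn.Module):
    """Sketch of the projection part in Fig. 2(b): the whole-shape encoding is split
    into K part encodings by bias-free projection matrices (Z x Z), each part encoding
    is weighted by its pre-computed NMF column, and the weighted encodings are summed."""

    def __init__(self, z_dim=64, num_parts=4, local_weights=None):
        super().__init__()
        self.proj = nn.ModuleList(
            nn.Linear(z_dim, z_dim, bias=False) for _ in range(num_parts))
        self.register_buffer("local_w", local_weights)     # (N, K), fixed (not trained)

    def forward(self, z):                                  # z: (B, Z) whole-shape encoding
        parts = [p(z) for p in self.proj]                  # K part encodings, each (B, Z)
        out = 0.0
        for k, z_k in enumerate(parts):
            w_k = self.local_w[:, k]                       # (N,) per-vertex influence of part k
            out = out + w_k[None, :, None] * z_k[:, None, :]   # locally weighted encoding (B, N, Z)
        return out, parts                                  # summed (B, N, Z) and the K part codes

# toy usage with random local weights for N = 208 down-sampled vertices
layer = ProjectionPart(z_dim=64, num_parts=4, local_weights=torch.rand(208, 4))
mixed, part_codes = layer(torch.randn(2, 64))
print(mixed.shape)                                         # torch.Size([2, 208, 64])
```

The K part encodings returned alongside the summed matrix are what the part-manipulation experiments in the next section operate on.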
Table 1: Encoder Architecture
Layer | Input size | Output size
Convolution | 53215 × 3 | 53215 × 16
Down-Sampling | 53215 × 16 | 13304 × 16
Convolution | 13304 × 16 | 13304 × 16
Down-Sampling | 13304 × 16 | 3326 × 16
Convolution | 3326 × 16 | 3326 × 16
Down-Sampling | 3326 × 16 | 832 × 16
Convolution | 832 × 16 | 832 × 32
Down-Sampling | 832 × 32 | 208 × 32
Fully Connected | 208 × 32 | 64

Table 2: Decoder Architecture
Layer | Input size | Output size
Fully Connected | 208 × 64 | 208 × 32
Up-Sampling | 208 × 32 | 832 × 32
Convolution | 832 × 32 | 832 × 16
Up-Sampling | 832 × 16 | 3326 × 16
Convolution | 3326 × 16 | 3326 × 16
Up-Sampling | 3326 × 16 | 13304 × 16
Convolution | 13304 × 16 | 13304 × 16
Up-Sampling | 13304 × 16 | 53215 × 16
Convolution | 53215 × 16 | 53215 × 3

6 Experimental Results The experimental results of our proposed model are described in this section. We present the practicality of our model with generation tasks and an ablation study. In all experiments, we set the number of face parts, K, to 4. 6.1 Generation Results 6.1.1 Part manipulation In this experiment, we tested part manipulation by interpolating between a source and a target face, as shown in Fig. 5. We interpolated the source's part encodings towards the target's corresponding part encodings, obtained from the factorized latent vectors described in Sec. 4.2. Fig. 6 shows that as the influence of the respective facial part changes, the other parts of the face are not affected. We also expected each row's changing area to match the local weight in the same row, and indeed each variation area corresponds to its local weight in Fig. 6. Color gradients in the variation area visualize the Hausdorff distance between the first face (α = 1/9) and the last face (α = 8/9) in each row, displaying the variation of each interpolation more clearly. A blue gradient signifies that the vertices of the source and target are nearby, while a red gradient means they are further apart.
Figure 5: Source and target face; (a) source, (b) target.
6.1.2 Diversity Visualization To demonstrate the variety of the data, we measured the diversity of the data generated by our model. Using the trained encoder, we encoded 220 random faces from our training set and test set, respectively. Since our proposed model allows part manipulation and modification, we synthesized 220 faces by combining five source faces and 11 target faces for four parts; a minimal sketch of this part-mixing procedure is given below. Fig. 1 shows synthesis samples with and without textures. The result was visualized by projecting the selected data onto a 2D plane using PCA and t-SNE [29], as shown in Fig. 7. We display all encoded faces as markers and summarize them with ellipses. There are three types of encodings: training set (red), test set (yellow), and part synthesis (green). In Fig. 7 (a), we can discern that our synthesis sample area (green ellipse) covers both the training-set and test-set areas (red and yellow ellipses) in the 2D PCA plane. Fig. 7 (b) presents this result more distinctly, as the synthesis samples also occupy a wider region than the training and test sets. In our visualizations, even though the training data and test data overlap, our synthesis samples (green) cover wider areas in the encoding space.
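A hedged sketch of the part-mixing/interpolation procedure referenced above (Sec. 6.1.1 and the 220-face synthesis): only the chosen part encoding is blended from source towards target, the local weights are re-applied, and the result is decoded. `encoder` and `decoder` stand for the trained networks and `projection` for the ProjectionPart sketch given earlier; none of this is the authors' code.

```python
import torch

@torch.no_grad()
def interpolate_part(encoder, projection, decoder, x_src, x_tgt, part_idx, alphas):
    """Blend only one part encoding from source towards target (Sec. 6.1.1 style),
    re-apply the local weights, and decode.  `encoder` and `decoder` are assumed
    trained modules; `projection` follows the ProjectionPart sketch above."""
    z_src, z_tgt = encoder(x_src), encoder(x_tgt)
    _, parts_src = projection(z_src)                       # K part encodings of the source
    _, parts_tgt = projection(z_tgt)
    outputs = []
    for a in alphas:                                       # e.g. a = 1/9, 2/9, ..., 8/9
        mixed = list(parts_src)
        mixed[part_idx] = (1 - a) * parts_src[part_idx] + a * parts_tgt[part_idx]
        weighted = 0.0
        for k, z_k in enumerate(mixed):                    # re-apply local weights and sum
            weighted = weighted + projection.local_w[:, k][None, :, None] * z_k[:, None, :]
        outputs.append(decoder(weighted))                  # reconstructed mesh vertices
    return outputs
```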
As a result, our proposed method shows a prominent performance to extend the model\u2019s representation ability. 6.2 Ablation Study To study the effect of each component of our approach, we experimented with an ablation study with variations of local weight and projection matrices. 6.2.1 Without local weights Local weights were obtained by NMF to make decomposed part spaces more semantically meaningful. To verify the effect of local weights, the model was trained without applying local weights to the projection part. The results are shown in Fig. 8 and are presented sequentially. The far-right face visualizes the Hausdorff distance between the \ufb01rst face and the last face in the sequence. Without applying local weight, the results do not display noticeable changes in some faces, and changing areas of the face also are intertwined with each other and look arbitrary. 6.2.2 Without projection matrices In this experiment, we removed the factorization step to explore its impact. Compared to Fig. 6, the changing parts of faces in Fig. 9 re\ufb02ect less local weight except for the \ufb01rst face. Speci\ufb01cally, the second face\u2019s cheek area impacts the mouth area, and other faces change different parts with local weight. Considering the results, we can infer that the projection matrices help exert local weights better. Accordingly, projection matrices not only factorize the latent space but also transform it into local weights space. 6.3 Discussion Our proposed model performed notable part manipulation and synthesis using a holistic generative approach. However, there are a few points that need further discussion. Concerning the level of model components, the correlation between the changing area of faces and local weights should be better addressed. Most changing areas generally re\ufb02ect corresponding local weights features, but some include another part or ignores them. One possible reason for this is that projection matrices would cover unassigned areas by transforming \fNear Far Local weight Variation Area Source Recon \u03b1 \u0d4c1 9 \u03b1 \u0d4c2 9 \u03b1 \u0d4c3 9 \u03b1 \u0d4c4 9 \u03b1 \u0d4c5 9 \u03b1 \u0d4c6 9 \u03b1 \u0d4c7 9 \u03b1 \u0d4c8 9 Part 1 Part 2 Part 3 Part 4 Figure 6: Results of part interpolation (a) PCA (b) t-SNE Figure 7: Diversity Visualization part encodings to local weight\u2019s space. The other is the natural quality of the dataset having correlations between facial features. Although we suggest two possible reasons, these need to be explored thoroughly. Next, the semantic meaning of local weights needs to be examined. Our local weights were computed algorithmically, not manually segmented or labeled. Therefore, it lacked semantic meaning and detailed segmentation of the human face, such as the separation of eyes and eyebrows. Finally, we multiply the part encodings in latent space and local weights in NMF. This approach seemed to work in our setting because the projection matrices transform part encodings to local weights\u2019 space. We have shown experimentally that our process works, but a more rigorous mathematical proof is still needed. 7" + }, + { + "url": "http://arxiv.org/abs/2102.03151v1", + "title": "Reducing the Amortization Gap in Variational Autoencoders: A Bayesian Random Function Approach", + "abstract": "Variational autoencoder (VAE) is a very successful generative model whose key\nelement is the so called amortized inference network, which can perform test\ntime inference using a single feed forward pass. 
Unfortunately, this comes at\nthe cost of degraded accuracy in posterior approximation, often underperforming\nthe instance-wise variational optimization. Although the latest semi-amortized\napproaches mitigate the issue by performing a few variational optimization\nupdates starting from the VAE's amortized inference output, they inherently\nsuffer from computational overhead for inference at test time. In this paper,\nwe address the problem in a completely different way by considering a random\ninference model, where we model the mean and variance functions of the\nvariational posterior as random Gaussian processes (GP). The motivation is that\nthe deviation of the VAE's amortized posterior distribution from the true\nposterior can be regarded as random noise, which allows us to take into account\nthe uncertainty in posterior approximation in a principled manner. In\nparticular, our model can quantify the difficulty in posterior approximation by\na Gaussian variational density. Inference in our GP model is done by a single\nfeed forward pass through the network, significantly faster than semi-amortized\nmethods. We show that our approach attains higher test data likelihood than the\nstate-of-the-arts on several benchmark datasets.", + "authors": "Minyoung Kim, Vladimir Pavlovic", + "published": "2021-02-05", + "updated": "2021-02-05", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG" + ], + "main_content": "Introduction Variational Autoencoder (VAE) (Kingma & Welling, 2014; Rezende et al., 2014) is a very successful generative model where a highly complex deep nonlinear generative process can be easily incorporated. A key element of the VAE, the deep inference (a.k.a. encoder) network, can perform the test time inference using a single feed forward pass through the network, bringing signi\ufb01cant computational speed-up. This feature, known as amortized inference, allows the VAE to circumvent otherwise time-consuming steps of solving the variational optimization problem for each individual instance at test time, required in the standard variational inference techniques, such as the stochastic variational inference (SVI) (Hoffman et al., 2013). As suggested by the recent study (Cremer et al., 2018), however, the amortized inference can also be a drawback of the VAE, speci\ufb01cally the accuracy of posterior approximation by the amortized inference network is often lower than the accuracy of the SVI\u2019s full variational optimization. There are two general approaches to reduce this amortization error. The \ufb01rst is to increase the network capacity of the inference model (e.g., \ufb02ow-based models (Kingma et al., 2016; Tomczak & Welling, 2016)). The other direction is the so-called semi-amortized approach (Kim et al., 2018; Krishnan et al., 2018; Marino et al., 2018; Park et al., 2019), where the key idea is to use the VAE\u2019s amortized inference network to produce a good initial distribution, from which a few SVI steps are performed at test time to further reduce the amortization error, quite similar in nature to the test time model adaptation of the MAML (Finn et al., 2017) in multi-task (meta) learning. Although these models often lead to improved posterior approximation, they raise several issues: Training the models for the former family of approaches is usually dif\ufb01cult because of the increased model complexity; the latter approaches inadvertently suffer from computational overhead of additional SVI gradient steps at test time. 
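To make the semi-amortized idea concrete, the following is a minimal sketch (not the implementation of Kim et al., 2018) of refining the amortized Gaussian parameters with a few SVI gradient steps on a per-instance ELBO; `elbo_fn` is an assumed callable that evaluates the ELBO for given variational parameters.

```python
import torch

def savi_refine(mu, log_std, elbo_fn, n_steps=4, lr=1e-3):
    """Semi-amortized refinement sketch: start from the amortized output (mu, log_std)
    and take a few SVI gradient-ascent steps on the per-instance ELBO.
    `elbo_fn(mu, log_std)` is assumed to return an ELBO estimate for the instance(s)."""
    mu = mu.detach().clone().requires_grad_(True)
    log_std = log_std.detach().clone().requires_grad_(True)
    opt = torch.optim.SGD([mu, log_std], lr=lr)
    for _ in range(n_steps):
        opt.zero_grad()
        loss = -elbo_fn(mu, log_std).sum()        # maximize ELBO = minimize its negative
        loss.backward()
        opt.step()
    return mu.detach(), log_std.detach()

# usage sketch: mu0, log_std0 = inference_net(x); then
# mu, log_std = savi_refine(mu0, log_std0, lambda m, s: elbo(x, m, s), n_steps=4)
```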
In this paper, we propose a novel approach to address these drawbacks. We retain the amortized inference framework similar to the standard VAE for its computational bene\ufb01ts, but consider a random inference model. Speci\ufb01cally, the mean and the 1Samsung AI Center Cambridge, UK 2Rutgers University, Piscataway, NJ, USA. Correspondence to: Minyoung Kim . arXiv:2102.03151v1 [cs.LG] 5 Feb 2021 \fGaussian Process Inference Networks for VAE variance functions of the variational posterior distribution are a priori assumed to be Gaussian process (GP) distributed. There are two main motivations for this idea. The \ufb01rst one stems from the suboptimality of the VAE, where the estimated amortized inference network suffers from deviation from the true posteriors. This inaccuracy can be characterized by inherent uncertainty in the posterior approximation of the deterministic amortized inference network, suggesting the need for a principled Bayesian uncertainty treatment. The second intuition is that the deviation of the VAE\u2019s variational posterior distributions from the true posteriors can be naturally regarded as random noise. Whereas the semi-amortized approaches perform extra SVI gradient updates at test time to account for this noise, we model the discrepancy using a Bayesian neural network (GP), resulting in a faster and more accurate amortized model via principled uncertainty marginalization. Another bene\ufb01t of the Bayesian treatment is that our model can quantify the discrepancy in approximation, which can serve as useful indicators for goodness of posterior approximations. The inference in our model is signi\ufb01cantly faster than that of semi-amortized methods, accomplished by a single feed forward pass through the GP posterior marginalized inference network. We show that our approach attains higher test data likelihood scores than the state-of-the-art semi-amortized approaches and even the high-capacity \ufb02ow-based encoder models on several benchmark datasets. 2. Background Let x \u2208X be an input data point and z \u2208Rd be the d-variate latent vector. We consider the generative model1: p(z) = N(z; 0, I), p\u03b8(x|z) = N(x; g\u03b8(z), \u03c32 xI), (1) where g\u03b8 : Rd \u2192X is a (deep) neural network with the weight parameters denoted by \u03b8, and \u03c32 x is the variance2 of the white noise. For the given data D = {xi}N i=1, we maximize the data log-likelihood, PN i=1 log p\u03b8(xi), with respect to \u03b8 where p\u03b8(x) = Ep(z)[p\u03b8(x|z)]. Due to the infeasibility of evaluating the marginal log-likelihood exactly, the variational inference exploits the following inequality, log p\u03b8(x) \u2265Eq(z|x) \u0002 log p\u03b8(x, z) \u2212log q(z|x) \u0003 , (2) which holds for any density q(z|x). The inequality becomes tighter as q(z|x) becomes closer to the true posterior, as the gap equals KL(q(z|x)||p\u03b8(z|x)). Then we adopt a tractable density family (e.g., Gaussian) q\u03bb(z|x) parametrized by \u03bb, and maximize the lower bound in (2) w.r.t. \u03bb. Since our goal is maximizing the log-marginal, log p\u03b8(x), we also need to optimize the lower bound w.r.t. \u03b8 together with \u03bb, either concurrently or in an alternating fashion. Note that at current \u03b8, the lower bound optimization w.r.t. \u03bb needs to be speci\ufb01c to each input x, and hence the optimal solution is dependent on the input x. Formally, we can denote the optimum by \u03bb\u2217(x). 
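For concreteness, a one-sample Monte Carlo estimate of the lower bound (2) for the Gaussian model (1) can be sketched as follows; the linear `decoder` in the toy check is only a placeholder for the network g, and the noise standard deviation is treated as a fixed constant as in the text.

```python
import math
import torch

def gaussian_elbo(x, mu, log_std, decoder, sigma_x=1.0):
    """One-sample Monte Carlo estimate of the bound (2) for the model (1):
    p(z) = N(0, I), p_theta(x|z) = N(g_theta(z), sigma_x^2 I),
    q(z|x) = N(mu, diag(exp(log_std))^2).  `decoder` plays the role of g_theta."""
    std = log_std.exp()
    z = mu + std * torch.randn_like(std)                  # reparametrization trick
    log_px_z = (-0.5 * ((x - decoder(z)) / sigma_x) ** 2).sum(dim=-1) \
               - 0.5 * x.shape[-1] * (math.log(2 * math.pi) + 2 * math.log(sigma_x))
    kl = 0.5 * (mu ** 2 + std ** 2 - 2 * log_std - 1.0).sum(dim=-1)   # KL(q(z|x) || N(0, I))
    return log_px_z - kl                                  # per-example lower bound

# toy check with a linear "decoder" standing in for g_theta
dec = torch.nn.Linear(10, 784)
elbo = gaussian_elbo(torch.randn(8, 784), torch.randn(8, 10), torch.zeros(8, 10), dec)
print(elbo.shape)                                         # torch.Size([8])
```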
The stochastic variational inference (SVI) (Hoffman et al., 2013) faithfully implements this idea, and the approximate posterior inference for a new input point x in SVI amounts to solving the ELBO optimization on the \ufb02y by gradient ascent. Although this can yield very accurate posterior approximation, it incurs computational overhead since we have to perform full variational optimization for each and every input x. The VAE (Kingma & Welling, 2014) addresses this problem by introducing a deep neural network \u03bb(x; \u03c6) with the weight parameters \u03c6 as a universal function approximator of the optimum \u03bb\u2217(x), and optimize the lower bound w.r.t. \u03c6. This approach is called the amortized variational inference (AVI). Thus the main bene\ufb01t of the VAE is computational speed-up as one can simply do feed forward pass through the inference network \u03bb(x; \u03c6) to perform posterior inference for each x. The recent study in (Cremer et al., 2018) raised the issue of the amortized inference in the VAE, where the quality of data \ufb01tting is degraded due to the approximation error between \u03bb\u2217(x) and \u03bb(x; \u03c6), dubbed the amortization error. To retain the AVI\u2019s computational advantage and reduce the amortization error, there were attempts to take the bene\ufb01ts of SVI and AVI, which are referred to as semi-amortized variational inference (SAVI) (Kim et al., 2018; Marino et al., 2018; Krishnan et al., 2018). The key idea is to learn the amortized inference network to produce a reasonably good initial iterate for the follow-up SVI optimization, perhaps just a few steps. This warm-start SVI gradient ascent would be faster than full SVI optimization, and could reduce the approximation error of the AVI. Although the inference in the SAVI is faster than SVI, it still requires gradient ascent optimization at test time, which might be the main drawback. The SAVI also suffers from other minor issues including how to choose the gradient step size and the 1Although there can be possible variations (e.g., heteroscedastic variance for p\u03b8(x|z)), we assume a homoscedastic model for simplicity, and our approach is easily extendable to the variants. 2The variance \u03c32 x can be a part of the model to be trained and subsumed in \u03b8, but for simplicity we regard it as a \ufb01xed constant. \fGaussian Process Inference Networks for VAE number of gradient updates to achieve optimal performance-ef\ufb01ciency trade-off3. In the next section we propose a novel approach that is much faster than the SAVI, avoiding gradient updates at test time and requiring only feed forward pass through a single network, and at the same time can yield more accurate posterior approximation. 3. Gaussian Process Inference Network We start from the variational density of the VAE, but with slightly different notation, as follows: q(z|x, f, h) = N \u0000z; f(x), Diag(h(x))2\u0001 , (3) where f, h : X \u2192Rd are the mean and standard deviation functions of the variational posterior distribution. Note that if we model f and h as deterministic functions (neural networks) and optimize their weight parameters (i.e., point estimation), then it reduces to the standard VAE for which f and h constitute \u03bb(x; \u03c6). However, as discussed in the previous sections, such point estimates may be inaccurate, which implies that there must be inherent uncertainty in posterior approximation. 
To account for the uncertainty, we follow the Bayesian treatment; speci\ufb01cally we let f and h be independent random GP distributed functions a priori (Rasmussen & Williams, 2006), f(\u00b7) = [f1(\u00b7), ..., fd(\u00b7)]\u22a4\u223c d Y j=1 GP(bj(\u00b7), km(\u00b7, \u00b7)), (4) h(\u00b7) = [h1(\u00b7), ..., hd(\u00b7)]\u22a4\u223c d Y j=1 GP(cj(\u00b7), ks(\u00b7, \u00b7)). (5) Here b(\u00b7) = [b1(\u00b7), ..., bd(\u00b7)]\u22a4, c(\u00b7) = [c1(\u00b7), ..., cd(\u00b7)]\u22a4are the GP mean functions which can be modeled by deep neural networks, and the GP covariance functions of f and h are denoted by km and ks, respectively, where we share the same covariance function across dimensions for simplicity. Relation to the SAVI. Note that the GP-priored variational density model in (3\u20135) can be equivalently written as: q(z|x, f, h) = N \u0000b(x) + f(x), Diag(c(x) + h(x))2\u0001 , (6) where f and h now follow zero-mean Gaussian processes. If we view the VAE\u2019s point estimate inference model as: q(z|x) = N \u0000b(x), Diag(c(x))2\u0001 , then (6) effectively models the discrepancy between the VAE\u2019s q(z|x) and the true posterior p\u03b8(z|x) via stochastic noise models. Recall that in order to reduce this discrepancy, the semi-amortized approaches perform extra SVI gradient updates starting from b(x) and c(x) at test time on the \ufb02y. Instead, we aim to learn the discrepancy using Bayesian neural networks f and h (GP as a special case; see Sec. 3.1), resulting in a faster and more accurate amortized inference model by taking into account uncertainty in a principled manner. For instance, the GP posterior p(f, h|D) can predict the above-mentioned discrepancy accurately, while their variances (e.g., V(f(x)|D)) can serve as gauge that quanti\ufb01es the degree of (instance-wise) uncertainty/dif\ufb01culty in posterior approximation via the amortized inference network. To this end, we describe a reasonable likelihood model to establish a GP framework, and derive an ef\ufb01cient GP posterior inference algorithm in what follows. 3.1. Likelihood Model and GP Posterior Inference To establish a valid Bayesian framework, we de\ufb01ne a likelihood model, that is, the compatibility score of how each individual instance x \u223cD is likely to be generated under the given functions f and h. A reasonable choice is the variational lower bound (2), which we denote as: L\u03b8(f, h; x) := Eq(z|x,f,h) \u0002 log p\u03b8(x, z) \u2212log q(z|x, f, h) \u0003 . (7) Clearly log p\u03b8(x) \u2265L\u03b8(f, h; x), and (7) can serve as surrogate4 for the log-likelihood function log p(x|f, h). Given the data D = {xi}N i=1, combining the GP priors and the likelihood model leads to the GP posterior, p(f, h|D) \u221dp(f) p(h) Y x\u2208D exp \u0000L\u03b8(f, h; x) \u0001 . (8) 3Although (Park et al., 2019) mitigated the issues by decoder linearization, it is rather restricted to only fully connected layers, and dif\ufb01cult to be applied to convolutional or recurrent networks. 4Technically, eL may not be a valid density (integration not equal to 1), and one has to deal with the dif\ufb01cult normalizing partition function in principle. For simplicity, we do not consider it and regard L as unnormalized log-likelihood function. \fGaussian Process Inference Networks for VAE However, solving (8) requires time and memory cubic in the number of data points N, which is prohibitive for large-scale data. 
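As a small illustration of the model (6) and of why solving (8) directly is costly, the sketch below draws GP-prior noise functions f and h at a finite batch of inputs via an N-by-N Cholesky factorization (the cubic-cost step) and forms one sample of q(z|x, f, h). The feature map, the base encoder outputs b(x) and c(x), and the positivity clamp on the standard deviation are all stand-in assumptions, not the paper's implementation.

```python
import torch

def sample_gp_prior(features, jitter=1e-5):
    """One sample of a zero-mean GP (a single output dimension) at a finite batch of
    inputs, with a linear kernel on a feature representation.  The O(N^3) Cholesky
    factorization below is exactly the cost that the scalable treatment avoids."""
    K = features @ features.t() + jitter * torch.eye(features.shape[0])
    L = torch.linalg.cholesky(K)                       # O(N^3) in the batch size
    return L @ torch.randn(features.shape[0])

# Sketch of Eq. (6): base encoder outputs (b, c) plus GP noise on the mean and std.
# b, c and the features are random stand-ins, not the trained networks; clamping the
# standard deviation to stay positive is an assumption of this sketch.
N, d = 32, 10
feats = torch.randn(N, 64)                             # placeholder features of a batch
b = torch.randn(N, d)                                  # base encoder mean  b(x)
c = torch.rand(N, d) + 0.1                             # base encoder std   c(x)
f = torch.stack([sample_gp_prior(feats) for _ in range(d)], dim=1)   # mean noise f(x)
h = torch.stack([sample_gp_prior(feats) for _ in range(d)], dim=1)   # std noise  h(x)
std = (c + h).abs() + 1e-4
z = (b + f) + std * torch.randn_like(std)              # one sample from q(z | x, f, h)
print(z.shape)                                         # torch.Size([32, 10])
```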
Although there exist ef\ufb01cient scalable approximate inference techniques in the GP literature (Qui\u00f1onero-Candela & Rasmussen, 2005; Snelson & Ghahramani, 2006; Titsias, 2009; Dezfouli & Bonilla, 2015; Hensman et al., 2017), here we adopt the linear deep kernel trick (Huang et al., 2015; Wilson et al., 2016), which we brie\ufb02y summarize below. Linear deep kernel trick for approximating GP. A random (scalar) function f(x) that follows the 0-mean GP with covariance (kernel) k, namely f(\u00b7) \u223cGP(0, k(\u00b7, \u00b7)), can be represented as a linear form with an explicit feature space mapping. Consider a feature mapping \u03c8 : X \u2192Rp such that the covariance function is approximated as inner product in the feature space (of dimension p), namely k(x, x\u2032) \u2248\u03c8(x)\u22a4\u03c8(x\u2032). Now, introducing the p-variate random vector w \u223cN(0, I), allows us to write the GP function as f(x) = w\u22a4\u03c8(x). It is because Cov(f(x), f(x\u2032)) = Cov(w\u22a4\u03c8(x), w\u22a4\u03c8(x\u2032)) = \u03c8(x)\u22a4\u03c8(x\u2032) \u2248k(x, x\u2032). A main advantage of this representation is that we can turn the non-parametric GP into a parametric Bayesian model, where the posterior inference can be done on the \ufb01nite dimensional random vector w instead. The feature mapping \u03c8(\u00b7) can be modeled as a deep neural network, and its weight parameters constitute the covariance (kernel) parameters of the GP. This way, we can (approximately) view GP as a special case of Bayesian neural networks where we treat the \ufb01nal fully connected layer w as random (Neal, 1996; Lee et al., 2018; de G. Matthews et al., 2018; Garriga-Alonso et al., 2019). Note that although this is rather a simpli\ufb01ed form of the deep kernel (Wilson et al., 2016) by applying the linear kernel on the outputs of \u03c8(\u00b7), it has been widely used with great success (Huang et al., 2015; Titsias et al., 2020). Returning to our GP posterior inference (8), the two GP-priored functions can be written as: fj(x) = w\u22a4 j \u03c8m(x) and hj(x) = u\u22a4 j \u03c8s(x) for j = 1, . . . , d, where wj\u2019s and uj\u2019s are mutually independent p-variate random vectors from N(0, I). The feature functions \u03c8m, \u03c8s : X \u2192Rp are deep neural networks that de\ufb01ne the covariance functions: km(x, x\u2032) = \u03c8m(x)\u22a4\u03c8m(x\u2032), ks(x, x\u2032) = \u03c8s(x)\u22a4\u03c8s(x\u2032). By letting W = [w1, . . . , wd]\u22a4and U = [u1, . . . , ud]\u22a4 be the (d \u00d7 p) matrices with the random vectors in the rows, we have f(x) = [f1(x), ..., fd(x)]\u22a4= W\u03c8m(x), h(x) = [h1(\u00b7), ..., hd(\u00b7)]\u22a4= U\u03c8s(x). The inference in (6) can be written as q(z|x, W, U) that equals: N \u0000b(x) + W\u03c8m(x), Diag(c(x) + U\u03c8s(x))2\u0001 , (9) while (8) becomes: p(W, U|D) \u221dN(W; 0, I)N(U; 0, I) Y x\u2208D eL\u03b8(W,U;x), where L\u03b8(W, U; x) := Eq \u0014 log p\u03b8(x, z) q(z|x, W, U) \u0015 . (10) We approximate (10) by q(W, U) de\ufb01ned as: q(W, U; \u039b) = d Y j=1 N(wj; \u00b5j, \u03a3j) N(uj; \u03b7j, \u0393j) (11) where \u039b := {\u00b5j, \u03a3j, \u03b7j, \u0393j}d j=1 constitutes the GP variational parameters. 3.2. GP Posterior Marginalized Encoder Before we proceed to GP inference and learning (Sec. 3.3), we derive the posterior averaged encoder distribution, q(z|x) = ZZ q(W, U)q(z|x, W, U)dWdU. 
(12) Note that (12) can be seen as the final latent inference model of our GP VAE model, where the uncertainty captured in the GP posterior q(W, U) is all marginalized out. For instance, the test log-likelihood score log p(x) under our model can be estimated by the importance weighted sampling method (Burda et al., 2016) with q(z|x) as the proposal distribution. As it also appears in the GP learning in the next section, we provide the derivation for q(z|x) here. Although the two terms in the integrand of (12) are both Gaussians, a closed-form formula is infeasible due to the dependency of the covariance of (9) on U. Instead, our approximation strategy is to view (12) as (a limit of) a mixture of Gaussians, $\sum_i \alpha_i N(z; m_i, V_i)$, where $\alpha_i$ corresponds to q(W, U) with index i mapped to (W, U), and $m_i, V_i$ denote the mean and covariance of q(z|x, W, U) in (9). Since a Gaussian mixture can be approximated by a single Gaussian by second-order moment matching (see footnote 5), namely $\sum_i \alpha_i N(m_i, V_i) \approx N(m, V)$ where $m = \sum_i \alpha_i m_i$ and $V = \sum_i \alpha_i (m_i m_i^\top + V_i) - m m^\top$, applying it to (12) yields: $q(z|x) \approx N(z; m(x), \mathrm{Diag}(v(x)))$ where
$m_j(x) = b_j(x) + \mu_j^\top \psi_m(x)$,
$v_j(x) = c_j(x)^2 + 2\eta_j^\top \psi_s(x)\, c_j(x) + \psi_m(x)^\top \Sigma_j \psi_m(x) + \psi_s(x)^\top (\eta_j \eta_j^\top + \Gamma_j) \psi_s(x)$,  (13)
for j = 1, . . . , d. Note from (13) that, as a special case, q(z|x) becomes the standard VAE's encoder distribution with means $b_j(x)$ and variances $c_j(x)^2$ if the GP posterior is ignored (i.e., $\Lambda = \{\mu_j, \Sigma_j, \eta_j, \Gamma_j\} = 0$, leading to the deterministic zero-noise model). Our learned GP posterior (non-zero $\Lambda$) thus informs us how the deviation from the true posterior $p_\theta(z|x)$ can be determined and compensated, namely by (13). 3.3. GP Inference and Learning Now we describe how the variational GP inference (i.e., optimizing $\Lambda$ in $q(W, U; \Lambda)$) can be done. Similar to other GP variational learning, the objective function that we derive establishes a lower bound on the model's data likelihood, and hence we can also learn the model parameters by maximizing the lower bound (empirical Bayes). The model parameters consist of the parameters in the GP mean and covariance functions (i.e., the weight parameters of the deep networks b(x), c(x), $\psi_m(x)$, and $\psi_s(x)$), and those in the likelihood model (i.e., $\theta$ in the decoder $p_\theta(x|z)$). To approximate $q(W, U) \approx p(W, U|D)$, we aim to minimize $\mathrm{KL}(q(W, U)\,||\,p(W, U|D))$, and it can be shown that this KL can be written as follows (see the Supplement for details):
$\mathrm{KL}(q||p) = \log \hat{p}_\theta(D) - \sum_{x \in D} \mathrm{ELBO}(\theta, \Lambda; x)$,  (14)
where $\hat{p}_\theta(D) = \mathbb{E}_{W,U \sim N(0,I)}\big[\prod_x e^{L_\theta(W,U;x)}\big]$ is the marginal data likelihood using our surrogate L in (10), and
$\mathrm{ELBO} := -\mathbb{E}_{q(W,U)}[\mathrm{KL}(q(z|x, W, U)\,||\,p(z))] + \mathbb{E}_{q(z|x)}[\log p_\theta(x|z)] - \tfrac{1}{N}\,\mathrm{KL}(q(W, U)\,||\,N(0, I))$.  (15)
We now discuss how the individual terms in the ELBO (15) can be derived. The last term of (15) is the KL divergence between Gaussian densities and admits a closed form. The second term is the expected log-likelihood with respect to the GP posterior marginalized encoder q(z|x), (13), which we estimate by Monte Carlo with the well-known reparametrization trick (Kingma & Welling, 2014).
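The moment-matched encoder in Eq. (13), which the ELBO terms above and below both rely on, can be computed as in the following sketch. All inputs are random stand-ins here; in the trained model, psi_m, psi_s, b, c come from the deep networks and (mu, eta, Sigma, Gamma) are the learned variational parameters, with Sigma and Gamma positive semi-definite (e.g. Cholesky-parametrized).

```python
import torch

def marginalized_encoder(psi_m, psi_s, b, c, mu, eta, Sigma, Gamma):
    """Moment-matched q(z|x) = N(m(x), Diag(v(x))) of Eq. (13), for a single input x.
    Shapes: psi_m, psi_s (p,); b, c (d,); mu, eta (d, p); Sigma, Gamma (d, p, p)."""
    m = b + mu @ psi_m                                                  # (d,)
    quad_m = torch.einsum("p,dpq,q->d", psi_m, Sigma, psi_m)            # psi_m^T Sigma_j psi_m
    quad_s = torch.einsum("p,dpq,q->d", psi_s, Gamma, psi_s)            # psi_s^T Gamma_j psi_s
    v = c ** 2 + 2 * (eta @ psi_s) * c + quad_m + (eta @ psi_s) ** 2 + quad_s
    return m, v

# random stand-ins (p = feature dim, d = latent dim)
d, p = 10, 64
Lm, Ls = torch.randn(d, p, p), torch.randn(d, p, p)
Sigma, Gamma = Lm @ Lm.transpose(1, 2), Ls @ Ls.transpose(1, 2)         # PSD covariances
m, v = marginalized_encoder(torch.randn(p), torch.randn(p), torch.randn(d), torch.rand(d),
                            torch.randn(d, p), torch.randn(d, p), Sigma, Gamma)
print(m.shape, v.shape, bool((v >= 0).all()))                           # variances are non-negative
```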
Finally, the \ufb01rst term in (15) is the Gaussian averaged KL divergence between Gaussians, and thus it can also admit a closed form. More speci\ufb01cally, it equals: 1 2 d X j=1 \u0012 vj(x) + \u0000bj(x) + \u00b5\u22a4 j \u03c8m(x) \u00012 \u22121 \u2212EN (uj;\u03b7j,\u0393j) h log \u0000cj(x) + u\u22a4 j \u03c8s(x) \u00012i\u0013 . (16) The last term in (16) is essentially a Gaussian expected squared log function, which can be written as a closed form, albeit complicated, using the con\ufb02uent hyper-geometric function (Lloyd et al., 2015). However, for simplicity we estimate it using the reparametrized Monte-Carlo method. Summary. The overall learning steps are as follows: 1. Initialize the variational parameters \u039b and the model parameters \u03b8, b(x), c(x), \u03c8m(x), and \u03c8s(x). 2. Repeat until convergence: (a) Estimate the marginalized q(z|x) using (13). (b) Optimize the ELBO (15) wrt all parameters. 3. (At test time) The GP marginalized encoder q(z|x) can be used to perform reconstruction, and evaluate the test likelihood p\u03b8(x), e.g., using the importance weighted sampling method (Burda et al., 2016). The uncertainty (variance) of the posterior noise TrV(f(x)|D) (similarly for h) can be approximately estimated as \u03c8m(x)\u22a4\u0000 Pd j=1 \u03a3j \u0001 \u03c8m(x). 5Equivalent to min{m,V} KL \u0000 P i \u03b1iN(mi, Vi)||N(m, V) \u0001 . \fGaussian Process Inference Networks for VAE 4. Related Work As enumerating all related literature in this section can be infeasible, we brie\ufb02y review some of the recent works that are highly related with ours. The issue of amortization error in VAE was raised in (Cremer et al., 2018), after which several semi-amortized approaches have been attempted (Kim et al., 2018; Marino et al., 2018; Krishnan et al., 2018) that essentially follow a few SVI gradient steps at test time. An alternative line of research approaches the problem by enlarging the representational capacity of the encoder network, including the \ufb02ow-based models that apply nonlinear invertible transformations to VAE\u2019s variational posterior (Tomczak & Welling, 2016; Kingma et al., 2016). Recently (Kim & Pavlovic, 2020) proposed a greedy recursive mixture estimation method for the encoder in VAE, where the idea is to iteratively augment the current mixture with new components to maximally reduce the divergence between the variational and the true posteriors. In parallel, there have been previous attempts to apply the Bayesian approach to the VAE modeling. However, they are in nature different from our random function modeling of the encoder uncertainty. The Bayesian Variational VAE (Daxberger & Hern\u00e1ndez-Lobato, 2019) rather focused on modeling uncertainty in the decoder model, and their main focus is how to deal with out-of-distribution samples in the test set, hence more aligned with transfer learning. The Compound VAE (Su et al., 2019) also tackled the similar problem of reducing the amortization gap of the VAE, however, their variational density modeling is less intuitive, inferring the latent vector z and the encoder weights W from each data instance. Note that we have more intuitive Bayesian inference for the encoder parameters, q(W|D) given the entire training data D. Their treatment is deemed to augment the latent z with the weights W in the conventional VAE. The Variational GP (Tran et al.), although looking similar to ours, is not speci\ufb01cally aimed for the VAE and amortized inference, but for general Bayesian inference. 
In turn, they built the posterior model using a GP function de\ufb01ned on the Gaussian distributed latent input space, instead of de\ufb01ning GP on the input data as we did. 5. Evaluations We evaluate our Gaussian process VAE model on several benchmark datasets to show its improved performance over the existing state-of-the-arts. Our focus is two-fold: 1) improved test likelihood scores, and 2) faster test time inference than semi-amortized methods. We also contrast with the \ufb02ow-based models that employ high capacity encoder networks. The competing approaches are as follows: \u2022 VAE: The standard VAE model with amortized inference (Kingma & Welling, 2014; Rezende et al., 2014). \u2022 SA: The semi-amortized VAE (Kim et al., 2018). We \ufb01x the SVI gradient step size as 10\u22123, but vary the number of SVI steps from {1, 2, 4, 8}. \u2022 IAF: The autoregressive-based \ufb02ow model for the encoder q(z|x) (Kingma et al., 2016), which has richer expressive capability than the VAE\u2019s post-Gaussian encoder. The number of \ufb02ows is chosen from {1, 2, 4, 8}. \u2022 HF: The Householder \ufb02ow encoder model that represents the full covariance using the Householder transformation (Tomczak & Welling, 2016). The number of \ufb02ows is chosen from {1, 2, 4, 8}. \u2022 ME: To enlarge the representational capacity of the encoder network, another possible baseline is a mixture model. More speci\ufb01cally, the inference model is de\ufb01ned as: q(z|x) = PM m=1 \u03b1(m|x)qm(z|x), where qm(z|x) are amortized inference models (e.g., having the same network architectures as the VAE\u2019s encoder network), and \u03b1(m|x) are mixing proportions, dependent on the input x, which can be modeled by a single neural network. The mixture encoder (ME) model is trained by gradient ascent to maximize the lower bound of log p(x) similarly as the VAE. The number of mixture components M is chosen from {1, 2, 4, 8}. \u2022 RME: The recursive mixture estimation method for the encoder in VAE (Kim & Pavlovic, 2020), which showed superiority to ME\u2019s blind mixture estimation. \u2022 GPVAE: Our proposed GP encoder model. The GP means and feature functions have the same network architectures as the VAE\u2019s encoder. Datasets. We use the following \ufb01ve benchmark datasets: MNIST (LeCun et al., 1998), OMNIGLOT (Lake et al., 2013), CIFAR106 (Krizhevsky & Hinton, 2009), SVHN (Netzer et al., 2011), and CelebA (Liu et al., 2015). For CelebA, we use 6Results on CIFAR10 can be found in the Supplement. \fGaussian Process Inference Networks for VAE Table 1. (MNIST) Test log-likelihood scores (unit in nat). The \ufb01gures in the parentheses next to model names indicate: the number of SVI steps in SA, the number of \ufb02ows in IAF and HF, and the number of mixture components in ME and RME. The superscripts are the standard deviations. The best (on average) results are boldfaced in red. In each column, we perform the two-sided t-test to measure the statistical signi\ufb01cance of the difference between the best model (red) and each competing method. We depict those with p-values greater than 0.01 as boldfaced blue (little evidence of difference). So, anything plain non-colored indicates p \u22640.01 (signi\ufb01cantly different). Best viewed in color. 
DIM(z) = 10 DIM(z) = 20 DIM(z) = 50 VAE 685.11.8 930.73.9 1185.73.9 SA(1) 688.12.7 921.22.3 1172.11.8 SA(2) 682.21.5 932.02.4 1176.33.4 SA(4) 683.51.5 925.52.6 1171.33.5 SA(8) 684.61.5 928.13.9 1183.23.4 IAF(1) 687.31.1 934.03.3 1180.62.7 IAF(2) 677.71.6 931.43.7 1190.11.9 IAF(4) 685.01.5 926.32.6 1178.11.6 IAF(8) 689.71.4 934.12.4 1150.02.2 HF(1) 682.51.4 917.22.6 1204.34.0 HF(2) 677.62.2 923.93.1 1191.510.8 HF(4) 683.32.6 927.32.8 1197.21.5 HF(8) 679.61.5 928.53.1 1184.11.8 ME(2) 685.71.2 926.73.0 1152.81.7 ME(3) 678.52.5 933.14.1 1162.84.7 ME(4) 680.00.9 914.72.3 1205.12.3 ME(5) 682.01.7 920.61.9 1198.53.5 RME(2) 697.21.1 943.91.6 1201.70.9 RME(3) 698.21.1 945.11.6 1202.41.0 RME(4) 699.01.0 945.21.6 1203.11.0 RME(5) 699.42.1 945.01.7 1203.71.0 GPVAE 696.51.5 944.32.8 1212.93.2 tightly cropped face images of size (64 \u00d7 64 \u00d7 3), and randomly split the data into 80%/10%/10% train/validation/test sets. For the other datasets, we follow the partitions provided in the data, with 10% of the training sets randomly held out for validation. Network architectures. We adopt the convolutional neural networks for both encoder and decoder models for all competing approaches. The main reason is that the convolutional networks are believed to outperform fully connected networks for many tasks on the image domain7 (Krizhevsky et al., 2012; Szegedy et al., 2013; Radford et al., 2015). For the encoder architecture, we \ufb01rst apply L convolutional layers with (4 \u00d7 4)-pixels kernels, followed by two fully-connected layers with hidden layers dimension h. For the decoder, the input images \ufb01rst go through two fully connected layers, followed by L transposed convolutional layers with (4 \u00d7 4)-pixels \ufb01lters. Here, L = 3 for all datasets except CelebA (L = 4), and h = 256 for the MNIST/OMNIGLOT and h = 512 for the others. The deep kernel feature functions \u03c8m,s(x) in our GPVAE model have exactly the same architecture as the encoder network except that the last fully connected layer is removed. This ensures that the GP functions f(x) and h(x) have equal functional capacity to the base encoder network since they are de\ufb01ned to be products of the features and the Gaussian random weights W and U. And, accordingly the feature dimension p is set equal to h. The full covariance matrices of the variational density q(W, U) are represented by Cholesky parametrization to ensure positive de\ufb01niteness (e.g., \u03a3 = LL\u22a4where L is a lower triangle matrix with strictly positive diagonals). Experimental setup. The latent dimension is chosen from {10, 20, 50}. To report the test log-likelihood scores, we use the importance weighted estimation (IWAE)8 (Burda et al., 2016) with 100 samples. For each model/dataset, we perform 10 runs with different random train/validation splits, where each run consists of three trainings by starting with different random model parameters, among which only one model with the best validation result is chosen. 7We empirically compared the two networks in the Supplement. 8The details can be also found in the Supplement. \fGaussian Process Inference Networks for VAE Table 2. (OMNIGLOT) Test log-likelihood scores (unit in nat). The same interpretation as Tab. 1. 
DIM(z) = 10 DIM(z) = 20 DIM(z) = 50 VAE 347.01.7 501.61.6 801.64.0 SA(1) 344.11.4 499.32.5 792.77.9 SA(2) 349.51.4 501.02.7 793.14.8 SA(4) 342.11.0 488.21.8 794.41.9 SA(8) 344.81.1 490.32.8 799.42.7 IAF(1) 347.81.6 489.91.9 788.84.1 IAF(2) 344.21.6 494.91.4 795.72.7 IAF(4) 347.91.9 496.02.0 775.12.2 IAF(8) 343.91.4 498.82.3 774.72.9 HF(1) 335.51.2 488.62.0 795.93.3 HF(2) 340.61.3 495.91.8 784.54.8 HF(4) 343.31.2 487.02.7 799.73.2 HF(8) 343.31.3 488.32.4 794.64.0 ME(2) 344.21.5 491.71.4 793.43.8 ME(3) 350.31.8 491.22.1 807.54.9 ME(4) 337.71.1 491.31.8 732.03.1 ME(5) 343.01.4 478.02.8 805.73.8 RME(2) 349.31.5 508.21.2 821.03.1 RME(3) 349.91.6 507.51.1 820.40.9 RME(4) 350.71.7 509.01.2 819.90.9 RME(5) 351.11.7 509.11.4 819.90.9 GPVAE 354.81.5 516.32.1 821.84.4 5.1. Results The test log-likelihood scores are summarized in Tab. 1 (MNIST), Tab. 2 (OMNIGLOT), Tab. 3 (SVHN), and Tab. 4 (CelebA). Our GPVAE overall outperforms the competing approaches consistently for all datasets. Below we provide interpretation for the results. The performance of the semi-amortized approach (SA) is mixed, sometimes achieving improvement over VAE, but not consistently. SA\u2019s performance is very sensitive to the number of SVI gradient update steps, another drawback of the SA where the gradient-based adaption has to be performed at test time. Although one could adjust the gradient step size (currently we are using a \ufb01xed gradient step size) to improve the performance, as far as we know, there is little principled way to tune the step size at test time that can attain optimal accuracy and inference time trade off. The \ufb02ow-based models (IAF and HF) adopt nonlinear invertible transformations to enrich the representational capacity of the variational posterior. In principle, they are capable of representing highly nonlinear non-Gaussian conditional densities, perhaps subsuming the true posteriors, via autoregressive \ufb02ows (IAF) and the Householder transformed full covariance matrices (HF). However, their improvement in accuracy over the VAE trails that of our GPVAE; they often perform only as well as the VAE. The failure of the \ufb02ow-based models might be due to the dif\ufb01culty in optimizing complex encoder models where similar observations were made in related previous work (Park et al., 2019; Kim & Pavlovic, 2020). This result suggests that sophisticated and discriminative learning criteria are critical, beyond just enlarging the structural capacity of the neural networks. Our GPVAE\u2019s explicit modeling of the deviation of the base encoder density from the true posterior via GP noise processes accomplishes this goal. Similarly, despite its increased functional capacity, the mixture encoder (ME) also has dif\ufb01culty in learning a good model, being quite sensitive to the initial parameters. Except for one case on CelebA with dim(z) = 10, it consistently underperforms our GPVAE. The blind mixture estimation can potentially suffer from collapsed mixture components and dominant single component issues. The fact that even the baseline VAE often performs comparably to the ME with different mixture orders supports this observation. This, again, signi\ufb01es the importance of employing more discriminative learning criteria, as done by our GPVAE\u2019s explicit modeling of the posterior deviation. The RME, by following this direction of adopting a discriminative learning objective, performs equally well with our GPVAE on many cases, but slightly underperforms ours on the others. 
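The log-likelihood scores in Tables 1-4 are importance-weighted estimates (Burda et al., 2016) with 100 samples, using the (marginalized) encoder as the proposal. Below is a minimal, hedged sketch of such an estimator for a diagonal-Gaussian proposal; the toy decoder likelihood is purely illustrative and not the models compared here.

```python
import math
import torch

def iwae_log_likelihood(x, m, v, log_p_x_given_z, n_samples=100):
    """Importance-weighted estimate of log p(x) (Burda et al., 2016) with the
    diagonal-Gaussian proposal q(z|x) = N(m, Diag(v)).  `log_p_x_given_z(x, z)` is
    an assumed callable returning the decoder log-likelihood per sample."""
    std = v.sqrt()
    z = m + std * torch.randn(n_samples, m.shape[0])            # S samples from q(z|x)
    log_q = (-0.5 * (((z - m) / std) ** 2 + 2 * std.log()
                     + math.log(2 * math.pi))).sum(dim=-1)
    log_prior = (-0.5 * (z ** 2 + math.log(2 * math.pi))).sum(dim=-1)
    log_w = log_p_x_given_z(x, z) + log_prior - log_q           # log importance weights
    return torch.logsumexp(log_w, dim=0) - math.log(n_samples)

# toy usage with a linear Gaussian "decoder" likelihood (purely illustrative)
dec = torch.nn.Linear(10, 784)
def toy_loglik(x, z, sigma_x=1.0):
    return (-0.5 * (((x - dec(z)) / sigma_x) ** 2
                    + math.log(2 * math.pi))).sum(dim=-1)
print(iwae_log_likelihood(torch.randn(784), torch.zeros(10), torch.ones(10), toy_loglik))
```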
\fGaussian Process Inference Networks for VAE Table 3. (SVHN) Test log-likelihood scores (unit in nat). The same interpretation as Tab. 1. DIM(z) = 10 DIM(z) = 20 DIM(z) = 50 VAE 3360.29.1 4054.514.3 5363.721.4 SA(1) 3358.78.9 4031.519.0 5362.135.7 SA(2) 3356.08.8 4041.515.5 5377.023.2 SA(4) 3327.88.2 4051.922.2 5391.720.4 SA(8) 3352.811.5 4041.69.5 5370.818.5 IAF(1) 3377.18.4 4050.09.4 5368.311.5 IAF(2) 3362.38.9 4054.610.5 5360.010.0 IAF(4) 3346.18.7 4048.68.7 5338.110.2 IAF(8) 3372.68.3 4042.09.6 5341.810.1 HF(1) 3381.48.9 4028.89.7 5372.010.1 HF(2) 3342.48.3 4030.79.9 5376.610.2 HF(4) 3370.08.2 4038.49.7 5371.89.8 HF(8) 3343.88.2 4035.98.9 5351.111.1 ME(2) 3352.39.9 4037.211.0 5343.213.1 ME(3) 3335.210.9 4053.816.1 5367.715.8 ME(4) 3358.214.9 4061.312.0 5191.918.5 ME(5) 3360.67.8 4057.512.2 5209.212.8 RME(2) 3390.08.1 4085.39.7 5403.210.2 RME(3) 3392.012.6 4085.99.8 5405.110.4 RME(4) 3388.68.3 4080.79.9 5403.810.2 RME(5) 3391.98.2 4086.910.9 5405.58.5 GPVAE 3417.611.8 4133.213.3 5410.019.5 Figure 1. Uncertainty vs. posterior approximation dif\ufb01culty. After the GPVAE model is trained on MNIST with 2D latent space, we evaluate the uncertainty TrV(f(x)|D) + TrV(h(x)|D), and depict six different instances x in the order of increasing uncertainty values. Top panel shows the true posterior p\u03b8(z|x) (contour plots) and the base encoder N(b(x), c(x)2) (red dots) superimposed (in log scale). Bottom panel contains the original inputs x (left) and reconstructed images (right). For the cases with lower uncertainty, the true posteriors are more Gaussian-like. On the other hand, the higher uncertainty cases have highly non-Gaussian true posteriors with multiple modes. \fGaussian Process Inference Networks for VAE Table 4. (CelebA) Test log-likelihood scores (unit in nat). The same interpretation as Tab. 1. DIM(z) = 10 DIM(z) = 20 DIM(z) = 50 VAE 9767.736.0 12116.425.3 15251.939.7 SA(1) 9735.221.4 12091.121.6 15285.829.4 SA(2) 9754.220.4 12087.121.5 15252.729.0 SA(4) 9769.120.6 12116.320.5 15187.327.9 SA(8) 9744.819.4 12100.622.8 15096.527.2 IAF(1) 9750.327.4 12098.020.6 15271.228.6 IAF(2) 9794.423.3 12104.521.8 15262.227.8 IAF(4) 9764.729.5 12094.622.6 15261.028.1 IAF(8) 9764.021.6 12109.322.0 15241.527.9 HF(1) 9748.329.5 12077.231.4 15240.527.6 HF(2) 9765.825.6 12093.025.6 15258.230.3 HF(4) 9754.323.8 12082.027.0 15266.529.5 HF(8) 9737.524.5 12087.325.5 15248.729.7 ME(2) 9825.320.7 12072.723.3 15290.529.3 ME(3) 9797.622.3 12100.321.7 15294.628.3 ME(4) 9834.925.4 12092.222.6 15270.720.6 ME(5) 9717.023.2 12095.325.1 15268.827.5 RME(2) 9837.924.6 12193.123.5 15363.031.7 RME(3) 9838.525.0 12192.323.5 15365.631.4 RME(4) 9849.512.1 12192.623.4 15364.331.5 RME(5) 9843.525.0 12194.211.5 15366.212.7 GPVAE 9843.423.8 12184.022.7 15370.927.6 5.2. Test Inference Time Compared to the semi-amortized methods, the inference in our GP encoder network is much faster as it is accomplished by a single feed forward pass through the encoder network. Unlike the semi-amortized approaches where one has to perform the SVI gradient adaptation at test time, in our GPVAE model, after the training stage, the posterior model is \ufb01xed, with no further adaptation required. To verify computational speed-up over the semi-amortized approaches and others, we measure the inference. The per-batch inference times (batch size 128) on all benchmark datasets are shown in Tab. 5. 
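The per-batch timing protocol can be sketched as below (wall-clock time of a single forward inference pass, averaged over the test loader, in milliseconds). `model` and `test_loader` are placeholders; the exact measurement code behind Tab. 5 is not given in the text, so this is only an assumed reconstruction of the stated protocol.

```python
import time
import torch

@torch.no_grad()
def per_batch_inference_ms(model, test_loader, device="cuda"):
    """Average wall-clock inference time per batch, in milliseconds.  `model(x)` is
    assumed to perform the (amortized) posterior inference; the loader is assumed
    to yield (x, ...) tuples."""
    model.eval().to(device)
    total, n_batches = 0.0, 0
    for x, *_ in test_loader:
        x = x.to(device)
        if device == "cuda":
            torch.cuda.synchronize()
        t0 = time.perf_counter()
        model(x)                                   # single feed-forward inference pass
        if device == "cuda":
            torch.cuda.synchronize()
        total += (time.perf_counter() - t0) * 1e3
        n_batches += 1
    return total / max(n_batches, 1)
```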
To report the results, for each method and each dataset, we run inference over the entire test set batches, measure the running time, then take the per-batch average. We repeat the procedure \ufb01ve times and report the average. All models are run on the same machine with a single GPU (RTX 2080 Ti), Core i7 3.50GHz CPU, and 128 GB RAM. Note that we only report test times for the latent dimension dim(z) = 50 since the impact of the latent dimension appears to be less signi\ufb01cant for all models except for our GPVAE. In the GPVAE, the latent dimension can considerably affect the inference time because of the matrix operations performed per latent dimension (c.f., (13)). Hence, we consider the most complex (worst) case for our GPVAE model, dim(z) = 50, the highest dimension in our experimental setup. Most notably yet as expected, the semi-amortized approach (SA) suffers from the computational overhead of test time gradient Table 5. (Per-batch) Test inference time (unit in milliseconds) with batch size 128. The latent dimension dim(z) = 50. MNIST OMNIG. CIFAR10 SVHN CELEBA VAE 3.6 4.8 3.7 2.2 2.7 SA(1) 9.7 11.6 9.8 7.0 8.4 SA(2) 18.1 19.2 16.8 15.5 13.8 SA(4) 32.2 34.4 27.9 30.1 27.1 SA(8) 60.8 65.7 60.5 60.3 53.8 IAF(1) 4.8 5.7 5.1 3.4 4.4 IAF(2) 5.9 6.4 5.6 3.7 5.1 IAF(4) 6.2 7.0 6.3 4.7 5.7 IAF(8) 7.7 8.2 7.6 5.7 7.7 GPVAE 9.9 10.2 9.3 8.0 9.2 \fGaussian Process Inference Networks for VAE updates, with the inference time signi\ufb01cantly growing as a function of the number of increasing updates. Our GPVAE is signi\ufb01cantly faster than the SA with more than one SVI step, albeit on par or slower than the \ufb02ow-based IAF. We believe that the inference time of the GPVAE can be further improved by more effective implementations of the dimension-wise matrix operations, which remain as our future work9. 5.3. Uncertainty vs. Posterior Approximation Dif\ufb01culty Another important bene\ufb01t of our Bayesian treatment is that we can quantify the uncertainty in posterior approximation. Recall that our GP posterior p(f, h|D) captures the discrepancy between the base encoder N(b(x), c(x)2) and the true posterior p\u03b8(x) via the GP noise processes f(x) and h(x). In particular, the variance V(f(x)|D)) (similarly for h) at given input x, can serve as a useful indicator that gauges the goodness of posterior approximation via a single Gaussian. For instance, the large posterior variance (high uncertainty) implies that the posterior approximation is dif\ufb01cult, suggesting the true posterior is distinct from a Gaussian (e.g., having multiple modes). On the other hand, if the variance is small (low uncertainty), one can anticipate that the true posterior might be close to a Gaussian. Fig. 1 illustrates this intuition on the MNIST dataset with 2D latent space, where the uncertainty measured by TrV(f(x)|D) + TrV(h(x)|D) accurately aligns with the non-Gaussianity of the true posterior, closely related to the quality of reconstruction. 6." + }, + { + "url": "http://arxiv.org/abs/2012.00682v1", + "title": "Learning Disentangled Latent Factors from Paired Data in Cross-Modal Retrieval: An Implicit Identifiable VAE Approach", + "abstract": "We deal with the problem of learning the underlying disentangled latent\nfactors that are shared between the paired bi-modal data in cross-modal\nretrieval. 
Our assumption is that the data in both modalities are complex,\nstructured, and high dimensional (e.g., image and text), for which the\nconventional deep auto-encoding latent variable models such as the Variational\nAutoencoder (VAE) often suffer from difficulty of accurate decoder training or\nrealistic synthesis. A suboptimally trained decoder can potentially harm the\nmodel's capability of identifying the true factors. In this paper we propose a\nnovel idea of the implicit decoder, which completely removes the ambient data\ndecoding module from a latent variable model, via implicit encoder inversion\nthat is achieved by Jacobian regularization of the low-dimensional embedding\nfunction. Motivated from the recent Identifiable VAE (IVAE) model, we modify it\nto incorporate the query modality data as conditioning auxiliary input, which\nallows us to prove that the true parameters of the model can be identified\nunder some regularity conditions. Tested on various datasets where the true\nfactors are fully/partially available, our model is shown to identify the\nfactors accurately, significantly outperforming conventional encoder-decoder\nlatent variable models. We also test our model on the Recipe1M, the large-scale\nfood image/recipe dataset, where the learned factors by our approach highly\ncoincide with the most pronounced food factors that are widely agreed on,\nincluding savoriness, wateriness, and greenness.", + "authors": "Minyoung Kim, Ricardo Guerrero, Vladimir Pavlovic", + "published": "2020-12-01", + "updated": "2020-12-01", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "cs.CV" + ], + "main_content": "Introduction Accurately learning the underlying factors that explain the source of variability in the complex structured data (e.g., images), is one of the core problems in representation learning in computer vision. Whereas there has been considerable research work and signi\ufb01cant success in the unsupervised scenario recently [9, 22, 11, 3, 19, 15, 8, 16], there remains fundamental dif\ufb01culty in identifying the true factors due to some known challenges [21, 24]. Having the factor labels can signi\ufb01cantly boost the performance, but it is inherently dif\ufb01cult and costly to annotate data with true factors. As a remedy for this, we deal with the paired bi-modal data setup as means of partial supervision. Formally, we consider paired data (x1, x2), where the goal is to learn latent factors that are shared between the two. Collecting paired data requires least amount of supervision, and can often be automated (e.g., web scrapping to pair image (x1) and the text (x2) that reside in the same web page) [28, 23]. While the CCA and its recent deep nonlinear extension, Deep CCA [1, 35], are the most popular models to extract shared features (subspace) for such paired data, the dependency of CCA\u2019s cross-covariance matrix on the entire embedded data features hinders the mini-batch stochastic gradient methods from being applied in a principled manner. Moreover, applying the latent variable generative models like the Variational Autoencoder (VAE) [18] and its bi-modal extension (Bi-VAE) [36], requires the latent-to-data synthesis/decoder modules, and learning the decoders for the complex high-dim ambient data is often very dif\ufb01cult. A suboptimally trained decoder can potentially harm the model\u2019s capability of identifying the true factors. 
In the retrieval setup, where we let w.l.o.g., x1 be the query and x2 the search item, it is well known that the simple idea of aligning the pairs in the low-dim embedded space via cosine similarity, dubbed Cos-Sim, is shown to yield very good retrieval performance [28, 23]. Despite this success, Cos-Sim has no capability of identifying latent variables that can be controlled explicitly by users. Although the model can be extended to incorporate an extra bottleneck auto-encoding model on top of the embedded space, there is no theoretical underpinning that such extension can help discovering the true factors. *mikim21@gmail.com \u2020ricardo.guerrero09@alumni.imperial.ac.uk \u2021vladimir@cs.rutgers.edu 1 arXiv:2012.00682v1 [cs.LG] 1 Dec 2020 \fOur main goal in this paper is to learn the latent factors from paired data in the retrieval setup. More speci\ufb01cally, we would like to identify all the factors, each as a single latent variable in the model, and it would be ideal if the learned latent variables are disentangled (i.e., avoid a single latent variable impacting on the variation of two or more factors at the same time), and complete or minimally redundant (i.e., each of the underlying true factors has to be explained by only one of the latent variables, not by multiple ones). To accomplish these desiderata and address the issues of the existing approaches, we introduce two strategies. First, we adopt the recent Identi\ufb01able-VAE (IVAE) model [14] that can provably identify the true model parameters by having additional auxiliary input on which the latent variables are conditioned. We modify IVAE by incorporating the query data x1 as conditioning auxiliary input, which enhances the model identi\ufb01ability under some regularity conditions. Secondly, to remove the dif\ufb01cult-to-learn ambient data decoding module, we propose a novel idea of the implicit decoder. The modi\ufb01ed IVAE model is built on the embedded space, and the relevance score required in the retrieval is de\ufb01ned by implicit inversion of the embedding function, which is accomplished by the Jacobian-based regularization of the embedding function. Tested on various datasets where the true factors are fully/partially available, our model is shown to identify the factors accurately, signi\ufb01cantly outperforming conventional encoder-decoder latent variable models. We also test our model on the Recipe1M, the large-scale food image/recipe dataset, where the learned factors by our approach highly coincide with the most pronounced food factors including savoriness, wateriness, and greenness. Notations and background on embedding. Many approaches in cross-modal retrieval [4, 13, 37, 31, 33, 27, 6, 5, 28, 23, 38], including our model, adopt the so called shared space embedding strategy, introducing mappings from ambient data to the shared space V. More speci\ufb01cally, vi = ei(xi) for i = 1, 2, are the low-dim embeddings induced by the deep nonlinear (neural net) functions ei(\u00b7). The Cos-Sim model, for instance, learns the embedding functions by enforcing the embedded vectors of the paired data to be aligned well, with the cosine-angle of two embeddings cos(v1, v2) as the alignment score. Its latent variable model extension discussed previously, introduces additional bottleneck encoding/decoding layers, v1 \u2192z and z \u2192v\u2032 1, where the relevance score is measured by cos(v\u2032 1, v2). Then the latent variables z can be controlled directly (e.g., latent traversal). 
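To make the shared-space scoring just described concrete, here is a minimal sketch (not from the paper) of cosine-similarity cross-modal retrieval over paired embeddings; `enc1`, `enc2`, and the tensor shapes are illustrative placeholders rather than the authors' actual modules.
```python
import torch
import torch.nn.functional as F

def cosine_retrieval_scores(x1_query, x2_candidates, enc1, enc2):
    """Embed both modalities into the shared space V and score each
    candidate x2 against the query x1 by cosine similarity."""
    v1 = F.normalize(enc1(x1_query), dim=-1)       # shape (1, dim_V)
    v2 = F.normalize(enc2(x2_candidates), dim=-1)  # shape (N, dim_V)
    return (v2 @ v1.t()).squeeze(1)                # (N,) cosine scores

# Illustrative usage: enc1/enc2 are any nn.Module embedders mapping each
# modality into the same shared space; retrieval takes the top-scoring items.
# scores = cosine_retrieval_scores(x1.unsqueeze(0), x2_batch, enc1, enc2)
# top10 = scores.topk(10).indices
```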
Instead of this ad-hoc model, in the next section we build the identi\ufb01able probabilistic latent variable model (IVAE) on the embedded space, while we avoid the ambient data synthesis module via Jacobian-based regularization of the embedding function. 2 Implicit Identi\ufb01able Retrieval VAE Our model is motivated from the recent work of the Identi\ufb01able VAE (IVAE for short) [14], in which the true model can be provably identi\ufb01ed by having a latent variable model with an auxiliary inputs on which the latent variables are conditioned. We modify this model to circumvent dif\ufb01cult synthesis model learning in the cross-modal retrieval setup (dubbed Retrieval IVAE or RIVAE for short). Considering that our goal is to learn the underlying disentangled latent representation that explains the sources of data variability, the provable model identi\ufb01ability allows us to recover true disentangled latent factors accurately, provided that the data generating process admits disentangled factors. Unlike conventional VAE models, the IVAE adopts the so-called auxiliary input u on which the latent z is conditioned a priori, i.e., P\u03bb(z|u). In our retrieval setup, we regard the query v1 as the auxiliary input, and treat the target embedding v2 as the observed data. There are several modeling assumptions to make the IVAE model fully identi\ufb01able from the augmented data {(v1, v2)}. Among others, the most important are: 1) the conditional prior P\u03bb(z|v1) needs to be a (conditionally) factorized exponential family distribution with natural parameters \u03bb, and 2) the output decoding process P\u03b8(v2|z) is homo-scedastic (i.e., v2 = f(z) + \u03be where f(z) = E[v2|z] and \u03be is 0-mean random and independent of z). Then we de\ufb01ne the following components to meet the IVAE\u2019s modeling assumptions: P\u03bb(z|v1) = N(\u00b5(v1), Diag(\u03c3(v1))) (1) P\u03b8(v2|z) = N(f(z), \u03b72I) (2) Q\u03c6(z|v1, v2) = N(m(v1, v2), Diag(s(v1, v2))) (3) where \u00b5(\u00b7), \u03c3(\u00b7), f(\u00b7), m(\u00b7, \u00b7), and s(\u00b7, \u00b7) are all deep neural networks (whose parameters are denoted by \u03bb, \u03b8, \u03c6, resp.) and \u03b72 is small \ufb01xed noise variance (e.g., \u03b7 = 10\u22123). Hence, if the query embedding (auxiliary input) v1 retains all salient information about the underlying shared factors1, then the true process of v2|v1, presumably following the model structure of (1\u20133), can be identi\ufb01ed accurately (due to the Theorem 1 in [14]), and so are the true factors z. To train the RIVAE, we \ufb01rst consider the lower bound (ELBO or LLB) of the data likelihood log P(v2|v1), where LLB = KL(Q(z|v1, v2)||P(z|v1)) \u2212EQ[log P(v2|z)] (4) 1This will be achieved by the cross-modal retrieval loss (10). 2 \fAlthough we maximize the ELBO wrt the IVAE parameters \u039b := (\u03bb, \u03b8, \u03c6), it cannot be used to optimize the embedding networks. This is because the embedding networks can determine the auxiliary input v1 and the target observed v2 in an arbitrary way just to make the IVAE model \ufb01t to (v1, v2) very well. As an extreme case, the constant mappings e1(\u00b7) = e2(\u00b7) = vconst, can lead to a perfect-\ufb01t model that is degenerate. To learn the embedders, we introduce another learning criterion related to cross-modal prediction. Let (x1, x2) be a matching pair from the training data, while (x1, x\u2032 2) a mismatch one. It is desired for the model to score them as: log P(x2|x1) \u226blog P(x\u2032 2|x1). 
(5) To avoid dif\ufb01cult synthesis modeling for ambient data x2 in (5), we utilize the rule of the change of random variables. Assuming that the embedding function e2(\u00b7) is injective2, log P(x2|x1) = log P(e\u22121 2 (v2)|x1) (6) = log P(v2|x1) + log vol\u2207e2(x2) (7) \u2248log P(v2|x1) + const. (8) To establish (8), we will enforce the Jacobian volume, vol\u2207e2(x2) to be constant over different points x2 as regularization (See below for details). Using P(v2|x1) = P(v2|v1) = EP (z|v1)[P(v2|z)], (9) the desiderata (5) can be encoded as the following loss: LRetr = \u00001 + EQ[log P(v\u2032 2|z) \u2212log P(v2|z)] \u0001 + (10) where v\u2032 2 = e2(x\u2032 2) is the embedding of the mismatch sample x\u2032 2, and (a)+ = max(0, a). That is, we implicitly invert the embedding function without a synthesis model for x2. Regularizing the embedding network. Derivation (8) is valid only if vol\u2207e2(x2) remains (approximately) constant across all plausible data samples x2. Directly regularizing this by minimizing a loss like (vol\u2207e2(x2)\u2212c)2 for some constant c, is computationally prohibitive. However, since the Jacobian volume essentially measures the change in the function output due to small perturbations in the input, we can attain similar effect by enforcing the change originating from a random input perturbation3 to remain constant regardless of the input point. Speci\ufb01cally, we impose the following regularization for e2(\u00b7): LReg = Ex2,\u03f5 h\u0000||e2(x2) \u2212e2(x2 + \u03f5)|| \u2212c \u00012i (11) where \u03f5 \u223cP(\u03f5) is a random sample from a noise distribution P(\u03f5) with small magnitude (e.g., ||\u03f5|| = 0.001). We can also optimize c. Summary. The full training can be written as the following optimization (\u03bbRetr and \u03bbReg are the trade-off parameters). Note that the arguments in each loss L(\u00b7) indicates which parameters should be updated regarding the loss. min LLB(\u039b) + \u03bbRetrLRetr(\u039b, W) + \u03bbRegLReg(W) (12) Cross-modal Retrieval. Basically we need to solve: arg maxx2\u2208D2 log P(x2|x1), where x1 is the query data point, and D2 is the search database. Using the derivation of (7\u20138), log P(x2|x1) can be approximated as log P(v2|z) with z \u223cP(z|v1), which results in the following three-step algorithm: 1. Embed the query point: v1 = e1(x1). 2. Sample z \u223cP(z|v1). 3. Solve: arg maxx2\u2208D2 log P(e2(x2)|z). Latent Traversal. While varying the value of one particular latent dimension with the rest being \ufb01xed, we inspect the change in the retrieved data. This helps us understand what type of aspect each latent dimension corresponds to, that is, the source of variability. We retrieve the output for each traversed point z, namely the mode of P(x2|z). We regard the conditional distribution P(v2|z) as a proxy of P(x2|z) using the derivation similar to (7\u20138) with the regularized embedding networks. The latent traversal algorithm is summarized as follows: 2Formally, e2(x2) \u0338= e2(\u02c6 x2) if x2 \u0338= \u02c6 x2. It allows the inverse e\u22121 2 (\u00b7) to be de\ufb01ned. It is a reasonable assumption that is also considered in [14]. 3The input space perturbation can be done pixel-wisely for images, and on the word vectors for text data. 3 \f1. Traverse z along the j-th dim (j = 1, . . . , d). 2. Prepare the conditional distribution: P(v2|z). 3. Solve: arg maxx2\u2208D2 log P(e2(x2)|z). Comparison to Conditional VAE [30]. 
The idea of conditioning the latent variables z is similar to the Conditional VAE (CVAE) [30]. However, there are two main differences: i) The conditioning variables in the CVAE are typically class labels, and its main goal is to enrich the representational capacity of VAE to cover different regions/clusters of the data domain indexed by the class label. ii) CVAE explicitly models dependency between the observed and the conditioning variables. On the other hand, in our RIVAE, there is no direct link between the output v2 and the auxiliary v1, thus enforcing core information from v1 to v2 to \ufb02ow through the bottleneck z. This allows the latents to learn the salient factors more effectively. 3 Experimental Results We test our Retrieval-IVAE model on both controlled datasets where the ground-truth factors are fully/partially available for quantitative comparison, and the large-scale Recipe1M dataset [28, 23] in the context of food image to recipe retrieval. We especially focus on demonstrating our model\u2019s capability of identifying the underlying true factors with high degree of disentanglement, compared to the existing approaches and baselines (See Sec. 3.1). For the cross-modal retrieval at test time, we construct the search database D2, of size |D2| = 1000 or 2000, randomly selected from the test dataset. We repeat this procedure randomly 10 times, and run the models to report average performance. For the retrieval metrics, we consider the median rank (Med-R) and the recall-at-K (R@K) with K = 1, 5, 10, where R@K stands for the fraction (out of |D2| queries) where the true item is found by the model in its top-K scored items. Although these retrieval scores are indicative of how well the models extract shared information, the main focus in this paper is to judge the goodness of the learned latent factors. For this purpose, see the quantitative metrics in Sec. 3.2. 3.1 Competing Methods and Datasets Our RIVAE is compared with the following methods: \u2022 Cos-Sim-LVM: As described previously, we extend the Cos-Sim embedded space alignment method to identify/control the latent variables. That is, this extended model has encoder (v1 \u2192z) and decoder (z \u2192v\u2032 1), both of which roughly correspond to P(z|v1) and P(v2|z) in our RIVAE, respectively, and we use neural nets with similar architectures for fair comparison. \u2022 Bi-VAE: This is the bi-modal extension of the VAE via the product-of-experts approximation [36]. It requires dif\ufb01cult synthesis (decoder) model learning for the ambient data (i.e., z \u2192x1 and z \u2192x2). \u2022 Bi-VAE-on-V: As a reasonable workaround to circumvent the dif\ufb01cult synthesis model learning in Bi-VAE, we can think of building the bi-modal VAEon the embedded space (V) instead. That is, the embedding networks are \ufb01xed (e.g., simply borrowed from the trained Cos-Sim model). The intuition is to regard the embeddings v1/2 as proxy for ambient data x1/2, and a similar idea was previously explored in [29]. \u2022 DCCA4: The Deep CCA model [1] that learns the nonlinear mapping from inputs to the embedding vectors. \u2022 RBi-VAE: We consider a joint model P(v1, v2, z) in place of the conditional P(v2, z|v1) in our RIVAE. (Supplement for details) Dubbed Retrieval-Bi-VAE (or RBi-VAE for short), we adopt the same Jacobian embedder regularization for the implicit embedder inversion, and it is compared to RIVAE to see how effective the identi\ufb01able model learning in RIVAE. 
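Since both RIVAE and the RBi-VAE baseline rely on the Jacobian-style embedder regularization of Eq. (11), a minimal sketch of that perturbation-based penalty is given below. The function and argument names are hypothetical, and the fixed perturbation norm follows the ||ϵ|| = 0.001 suggestion in the text; this is an illustrative reading, not the authors' implementation.
```python
import torch

def embedder_reg_loss(e2, x2, c, eps_norm=1e-3):
    """Perturbation-based proxy for keeping the Jacobian volume of the
    embedder e2 roughly constant (Eq. 11): the embedding change caused by
    a fixed-norm random input perturbation should equal a constant c."""
    eps = torch.randn_like(x2)
    flat_norm = eps.flatten(1).norm(dim=1)
    eps = eps * (eps_norm / flat_norm).view(-1, *([1] * (x2.dim() - 1)))
    delta = (e2(x2) - e2(x2 + eps)).flatten(1).norm(dim=1)  # per-sample change
    return ((delta - c) ** 2).mean()

# c may be a fixed scalar or a learnable torch.nn.Parameter; the loss is
# added to the main objective with a trade-off weight (lambda_Reg in Eq. 12).
```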
For fair comparison, we make the experimental setup as equal as possible for all competing methods. E.g., the number of latent variables adopted in the competing models is set to be the same. The details of the model architectures and optimization strategies are described in the Supplement. And we test the above models on four datasets: \u2022 Synth (Sec. 3.3): The synthetic data generated from a nonlinear function with disentangled latent variables partitioned into shared and private factors. \u2022 Sprites (Sec. 3.4): Binary images of sprites. The shape is regarded as modality, where the locations and size of the sprite are considered as shared factors. 4Other variants including DCCAE (DCCA with the auto-encoding loss) [35], are often on a par with DCCA, and not considered here. 4 \fTable 1: (Synth) Retrieval performance with 1000 test samples as the search set. (averaged over 10 random runs). Method R@1 \u2191 R@5 \u2191 R@10 \u2191 Med-R \u2193 Cos-Sim-LVM 0.02 0.14 0.27 24.00 Bi-VAE [36] 0.01 0.05 0.10 101.00 Bi-VAE on V 0.20 0.58 0.76 4.00 DCCA [1] 0.29 0.62 0.81 3.00 RBi-VAE 0.33 0.81 0.92 2.00 RIVAE 0.35 0.80 0.92 2.00 \u2022 Split-MNIST (Sec. 3.5): The bi-modal data created from the MNIST dataset by splitting images into left (modality-1) and right (modality-2) halves. \u2022 Recipe1M (Sec. 3.6): The large scale food dataset [28, 23] that consists of pairs of food image and recipe text including title, ingredient list, and instructions. 3.2 Metrics for Goodness of Learned Latents We de\ufb01ne metrics for the goodness of the learned latents, including the degree of disentanglement and completeness. Our quantitative measures are similar to the D/C/I metrics [10], but whereas in [10] they assume an of\ufb02ine data pool for which all factor labels are available, but in our case, not all factor labels are available for entire data instances for some datasets. To this end, we modify the original metrics, adapted to the latent traversal results as follows. For each reference/query xRef 1 , we embed/encode it to obtain the latent vector zRef. Then from this anchor point zRef, we traverse along the zj axis for each latent dimension j = 1, . . . , d, i.e., varying the value of zj while freezing the rest dimensions. We then collect a set of retrieved items x2, each of which corresponds to each of the traversed points along zj axis. And for each retrieved item, its true factor values [f1, . . . , fK] are looked up. This way, we collect the paired (zj, fk) data, and estimate the Pearson\u2019s correlation coef\ufb01cient. In particular, we deal with cjk = |Corr(zj, fk)|, the absolute correlation score, where the larger cjk implies that the latent variable zj is more related to the true factor fk. We repeat this procedure for many reference/query points, and take the average scores cjk, which helps marginalizing out the instance-speci\ufb01c impact on the correlations between latents and true factors. Then with the (d\u00d7K) correlation table c (whose (j, k)-entry is cjk), we measure the three metrics similarly as [10]. First, the Disentanglement metric (D) measures the degree of dedication of each latent variable zj in predicting fk against others f\u2212k. Intuitively, we have perfect disentanglement if each latent variable zj is correlated with only a single true factor fk. On the other hand, if zj is correlated with multiple true factors at the same time, it is deemed entangled. 
To capture this idea, each j-th row of the table c is normalized to a probability distribution, speci\ufb01cally, pjk = \u03c1(cjk) Pd k\u2032=1 \u03c1(cjk\u2032) , (13) where \u03c1() is positive monotonic increasing (e.g., \u03c1(c) = e\u03b1c for some \u03b1 > 0). Then we compute the normalized ([0, 1]-scaled) entropy, Hj = \u2212(1/log K) P k pjk log pjk, and we de\ufb01ne D = 1 \u2212(1/d) Pd j=1 Hj (the higher, the better disentangled). The Completeness metric (C) captures the degree of exclusive contribution of zj in predicting fk against others z\u2212j. As it essentially aims for minimal redundancy, we would achieve perfect completeness if variability of each true factor fk is explained by only a single latent variable zj, instead of multiple latents. The metric C can be computed similarly as D, by replacing all row-wise operations with column-wise. Likewise, the higher C is, the better. Lastly, the Informativeness metric (I) measures how informative each latent variable is in predicting a true factor. To this end, for each fk, we \ufb01nd the best predictor zj (i.e., c\u2217 k = max1\u2264j\u2264d cjk), and de\ufb01ne I as the average of c\u2217 k over k = 1, . . . , K. 3.3 Synthetic Data (Synth) To demonstrate our model\u2019s capability of identifying the shared latent factors from bi-modal data, we devise a synthetic data setup as follows. First we generate 4-dim factors, f = [f S, f 1, f 2] which are all iid samples from N(0, 1) with dim(f S) = 2 and dim(f 1) = dim(f 2) = 1. The ambient data points are then generated by the nonlinear functions, x1 = G1(f S, f 1) and x2 = G2(f S, f 2) where G1(\u00b7) and G2(\u00b7) are two-layer neural networks that output 50-dim vectors. So our intention is that f S serves as the shared factors that govern the data variability in the two modalities, while f 1 and f 2 are the private factors that only affect individual modalities, being independent on the other. Additionally, it would be desirable if the model can further disentangle the two individual factors in f S. For the embedding networks e1/2(\u00b7), we use two-layer neural networks, and the embedding dimension is set as dim(v1/2) = 3. We set dim(z) = 2 for the latent spaces, which matches the number of true shared factors. The retrieval performance of 5 \f(a) RIVAE (b) RBi-VAE (c) DCCA Figure 1: (Synth) True factors vs. latent variables. Each column shows traversal of one latent zj with the other \ufb01xed. The Y axes are true factors obtained from retrieved items x2. Red boxes indicate signi\ufb01cant changes in the true shared factors, i.e., high correlation. Table 2: (Synth) Goodness of the learned latents. Disent. \u2191 Comple. \u2191 Inform. \u2191 Cos-Sim-LVM 0.6169 0.7613 0.5486 Bi-VAE 0.4429 0.6822 0.6902 Bi-VAE on V 0.0520 0.8725 0.5630 DCCA 0.0017 0.2766 0.1371 RBi-VAE 0.0063 0.6900 0.3923 RIVAE 0.9186 0.8782 0.8995 the competing models is summarized in Table 1. Our RIVAE outperforms Cos-Sim-LVM and DCCA. The poor performance of the Bi-VAE model implies that suboptimally trained decoders for high-dimensional noisy ambient data can degrade the retrieval performance signi\ufb01cantly. Even the Bi-VAE-on-V trained on the \ufb01xed embedded space exhibits performance comparable to Cos-Sim-LVM, although it still underperforms our model. Next we inspect the learned latent representations. We are especially interested in the correspondence between the learned latent variables z and the true shared factors f S. 
A desirable result would be exclusive one-to-one correspondence, where a single latent variable zj affects only one shared factor, independent from the other. To this end, we perform latent traversal. For the retrieved item x2, we look up its true factor values, (f S 1 , f S 2 , f 2), and plot each against zj. The results are shown in Fig. 1. Notably for our RIVAE, each latent variable corresponds to only one true shared factor exclusively, implying that the disentangled factors are accurately identi\ufb01ed. RBi-VAE partially identi\ufb01es f S 1 in z1, but both shared factors are entangled in z2. We also run the latent traversal with the DCCA, which is done by converting the learned CCA model to the dual-view latent variable linear Gaussian model following [2]. However, as this maximum-likelihood estimated model only \ufb01ts well to the data, we see that each of the learned latent variables retain both factors entangled in it. Finally, the quantitative D/C/I metrics in Table 2 show that our RIVAE yields signi\ufb01cantly better latent representations than competing models. 3.4 Sprites Using the benchmark dSprites dataset [25], we devise an experimental setup for the bi-modal retrieval task. First, we assume that the shape of sprites induces the modalities, speci\ufb01cally, x1 is square, and x2 oval. We then consider only the X, Y positions and the scale of the sprite as the underlying shared factors, with the other factors being \ufb01xed. There are 32 variations in each of the X, Y positions and 6 variations in scale, which are independent from one another, resulting in 6144 samples. Image size is (64 \u00d7 64) pixels. The dimension of the embedded space V is set to 10, and the latent space dim(z) = 3, which matches ground-truth. The retrieval results on 1000 randomly selected search set are summarized in Table 3. Most approaches yield near perfect performance except for the decoder training models (Bi-VAE and Bi-VAE-on-V), while our RIVAE performs marginally the best. The plots of the ground-truth factors of the retrieved images due to the latent space traversal, similar to Fig. 1, can be found in the Supplement, where our RIVAE shows near one-to-one correspondence between the true and learned factors, while other models exhibit considerable entanglement. We also visualize the retrieved images obtained by latent traversal in Fig. 2. The result again veri\ufb01es the high quality of disentanglement in the learned factors, very close to the true latent factors. 6 \f(a) RIVAE (b) RBi-VAE (c) DCCA Figure 2: (Sprites) Retrieved images from latent traversal. The top left image is the query image that determines the reference point zRef (the same for all models). For each model, each of the three columns depicts the retrieved images due to z1, z2, and z3 changes (progresses vertically). Visually, it is clear that with RIVAE, varying z1 alone results in change in the Y -pos with the scale and X-pos intact, z2 exclusively affects the X-pos, and z3 only affects the scale. Such interpretation is not clear for other models. Table 3: (Sprites) Retrieval performance among a 1000 randomly selected search set (averaged over 10 random runs). Method R@1 \u2191 R@5 \u2191 R@10 \u2191 Med-R \u2193 Cos-Sim-LVM 0.13 0.46 0.67 6.00 Bi-VAE [36] 0.02 0.07 0.10 127.75 Bi-VAE on V 0.69 0.97 0.99 1.00 DCCA [1] 0.96 1.00 1.00 1.00 RBi-VAE 0.99 1.00 1.00 1.00 RIVAE 1.00 1.00 1.00 1.00 Table 4: (Sprites) Goodness of the learned latents. Disent. \u2191 Comple. \u2191 Inform. 
\u2191 Cos-Sim-LVM 0.6664 0.5634 0.8417 Bi-VAE 0.6821 0.0779 0.1259 Bi-VAE on V 0.1367 0.6812 0.3119 DCCA 0.4163 0.1178 0.1432 RBi-VAE 0.5814 0.4863 0.7234 RIVAE 0.8280 0.9133 0.8476 The D/C/I scores in Table 4 also support this claim quantitatively. 3.5 Split-MNIST Following the setup in [1], we form a retrieval setup from the MNIST dataset [20] by taking the left half of each image as modality-1 and the right half as modality-2. So, each view contains images of size (H = 28 \u00d7 W = 14) pixels. Note that the shared factors for both modalities would be the digit class and the writing style, which are deemed independent (disentangled) from each other. We followed the standard data splits, where 2000 images are randomly sampled from the test set to serve as the retrieval search set. We set dim(V) = 50 and dim(z) = 10. The retrieval scores are reported in Table. 5. Consistent with previous experiments, our RIVAE attains the best scores. To see how the two underlying factors, writing style and digit class, are disentangled in the learned latent variables for our RIVAE, we show the latent traversal results visually in Fig. 3. Refer to the caption of the \ufb01gure for details. 3.5.1 Quantitative Analysis for Split-MNIST Although we know that the shared factors between the left and right halves deem to be partitioned to those related to digit class and those non-digit related (e.g., writing style), there are only digit labels (0 \u223c9) available, and it is not even clear how the writing style can be formally described or speci\ufb01ed. Hence, we devise some reasonable quantitative measures that can re\ufb02ect how the underlying variability in digit transitions is captured in the model\u2019s latent variables. 7 \fTable 5: (Split-MNIST) Retrieval performance with 2000 randomly selected test images as the search set. Method R@1 \u2191 R@5 \u2191 R@10 \u2191 Med-R \u2193 Cos-Sim-LVM 0.19 0.46 0.61 6.35 Bi-VAE [36] 0.01 0.03 0.05 524.80 Bi-VAE on V [36] 0.21 0.51 0.66 5.30 DCCA [1] 0.47 0.79 0.87 2.00 RBi-VAE 0.43 0.81 0.91 2.00 RIVAE 0.52 0.89 0.96 1.00 Figure 3: (Split-MNIST) For three query references, latent traversal along four latent variables (z2, z4, z7, z9) in RIVAE. Visually, z2, z5 = digit class, z7, z9 = writing style. See Supplement for other query references. Let\u2019s say that we traverse along the axis zj in the latent space, while \ufb01xing the rest dimensions of z. After retrieving the data items, we record the number of unique digit transitions (e.g., 0 \u21922 or 3 \u21928) in the retrieved images in the traversal. The results, over all dimensions, can be summarized into a (d \u00d7 45) table, where d = dim(z) and 45 (= 10C2) is the number of direction-free transitions (e.g., 0 \u21922 and 2 \u21920 are regarded identical). Note that discarding the directions makes sense considering traversal in the reverse direction. More concretely, if the digit classes of the retrieved items for the traversal z5 are: 2 \u21922 \u21922 \u21923 \u21928 \u21928 \u21929 \u21923 \u21922 \u21922 \u21922, then the \ufb01fth row of the table has value 1 at four columns, 2 \u21923, 3 \u21928, 8 \u21929, 3 \u21929, with all the other entries 0. There will be one such table for each reference query (xRef 1 ). And we will collect many (e.g., 1000) such tables/queries, and take the average to get the global statistics. The key idea is that this (averaged) digit transition table can tell us possible digit transition types speci\ufb01c to each latent dimension. 
For instance, z0 has large entries in the table for the transitions among digits (1, 4, 7), while z1 covers the clique (2, 3, 5, 8), and so on. We expect that if the latent factors are well trained, the transition cliques for different dimensions overlap less, and the union of the cliques covers more of the 45 possible transitions. The former is related to disentanglement of the latent variables, and the latter to thoroughness or coverage, i.e., how many different transitions the learned latent variables can capture. To formalize, let D be the (d \u00d7 45) table of averaged digit transitions. We normalize each row of D to a probability distribution. We then measure the overlap between two rows i and j as overlap(i, j) = (1/45) \u2211_{k=1}^{45} min(D[i, k], D[j, k]), similar to the histogram intersection, and report the average overlap over all pairs i \u0338= j; the smaller the overlap, the better. For the coverage, we take the union of the rows (simply the average of the rows of D) and compute its entropy; the larger the entropy, the more digit transitions the union covers. The results are summarized in Table 6, and we see that the learned latent variables in our RIVAE exhibit low overlap and the highest coverage among the competing models.
Table 6: (Split-MNIST) Goodness of the learned latents. Two quantitative measures regarding the variability in digit transitions. Interpretation: overlap is related to disentanglement (the lower, the better), and coverage to thoroughness (the higher, the better). The last column reports the difference.
Method Overlap \u2193 Coverage \u2191 C \u2212 O \u2191
Cos-Sim-LVM 0.0168 0.8881 0.8713
Bi-VAE 0.0091 0.4863 0.4772
Bi-VAE on V 0.0181 0.6418 0.6237
DCCA 0.0191 0.6377 0.6186
RBi-VAE 0.0189 0.8932 0.8743
RIVAE 0.0121 0.9675 0.9554
Table 7: (Recipe1M) Retrieval performance with a size-1000 random search set (averaged over 10 random runs).
Method R@1 \u2191 R@5 \u2191 R@10 \u2191 Med-R \u2193
Cos-Sim-LVM 0.45 0.74 0.82 2.00
Bi-VAE [36] Failed Failed Failed Failed
Bi-VAE on V [36] 0.22 0.45 0.55 7.70
DCCA [1] Failed Failed Failed Failed
RBi-VAE 0.29 0.56 0.66 4.00
RIVAE 0.39 0.70 0.79 2.00
Table 8: (R1M) Goodness of the learned latents.
Method Disent. \u2191 Comple. \u2191 Inform. \u2191
Cos-Sim-LVM 0.6203 0.6027 0.5983
Bi-VAE Failed Failed Failed
Bi-VAE on V 0.5128 0.5423 0.5670
DCCA Failed Failed Failed
RBi-VAE 0.4954 0.6079 0.5664
RIVAE 0.8569 0.8500 0.8615
3.6 Food Image to Recipe Retrieval (Recipe1M) Recipe1M [28] is a dataset comprised of about 1M cooking recipes (titles, instructions, ingredients) and images. In this work, a subset of about 0.4M recipes containing at least one image, no more than 20 ingredients or instructions, and at least one ingredient and instruction was used. Data is split into 70%/15%/15% train/validation/test sets. The underlying embedding networks used in these experiments combine architectural and training strategies from [28, 34, 7], and the related Cos-Sim model performs comparably to the state of the art. The embedded space has dim(V) = 1024, and the latent space dim(z) = 30. Table 7 shows retrieval performance. Cos-Sim-LVM attains the best retrieval performance even with the introduced bottleneck layers, while our RIVAE performs nearly comparably to it. Note that both DCCA [1] and Bi-VAE [36] completely failed to converge. 
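As a side note to the Split-MNIST analysis in Sec. 3.5.1 above, the overlap and coverage measures are straightforward to compute from the averaged transition table. The sketch below assumes a NumPy array D of shape (d, 45); normalizing the coverage entropy by log 45 is an assumption, since the exact normalization is not stated in the text.
```python
import numpy as np

def overlap_and_coverage(D, eps=1e-12):
    """D: (d x 45) averaged digit-transition table, one row per latent dim.
    Overlap: mean pairwise histogram intersection of the row-normalized
    distributions, scaled by 1/45 (lower = more disentangled).
    Coverage: entropy of the row-averaged 'union' distribution, normalized
    here by log(45) (higher = more transitions covered)."""
    P = D / (D.sum(axis=1, keepdims=True) + eps)   # row-normalize to distributions
    d, K = P.shape
    overlaps = [np.minimum(P[i], P[j]).sum() / K
                for i in range(d) for j in range(d) if i != j]
    union = P.mean(axis=0)
    coverage = float(-(union * np.log(union + eps)).sum() / np.log(K))
    return float(np.mean(overlaps)), coverage
```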
3.6.1 Quantitative Analysis for Recipe1M In the Recipe1M dataset, there are no labels available for the ground-truth factors, those that commonly govern the variability of recipes and food images, deemed to be food factors. To measure the goodness of the learned latent variables of the competing methods, we aim to select a small subset of food factors that look the most pronounced and capturing the shared variability in recipes and images. To this end, we re-crape all the recipes in the Recipe1M that are associated with the Internet domain food.com, and parse their keywords and categories. Then we form the recipe tags as the unique terms union between keywords and categories. This subset represents about 50% of the whole dataset. Then we manually group the tags that are related to one another, and among the tag groups, we choose 8 factors the most dependent on the latent variables by visually inspecting the latent traversal results for the competing methods. They are: 1) wateriness, 2) greenness, 3) stickiness, 4) oven-baked-or-not, 5) food container longishness (e.g., bottle/cup or plate), 6) grains, 7) savory-or-dessert, and 8) fruit-or-no-fruit. They are intuitively very appealing. For the association between the tags and these 8 factors, refer to the Supplement. All these factors that consider in this analysis are ordinal, and we consider 5 scales/levels for each factor (e.g., wateriness= 1 means very dry food, while wateriness= 5 implies containing lots of water). We sample about 20 random reference/query images (xRef 1 ) in the traversal/retrieval. For the 100 traversal points along each latent dimension, we collect retrieved items (x2), and manually label the values of the 8 food factors. Then we select 10 latent dimensions that have the highest correlations with the 8 factors, and form a (10 \u00d7 8) correlation table. The D/C/I measures are summarized in Table 8. As shown, our RIVAE attains the highest scores among the competing models by large margin. 3.6.2 Qualitative (Visual) Analysis for Recipe1M Next we qualitatively assess the discovered hidden factors through visual inspection of the retrieved items from latent traversal. We also generate the word cloud images using the ingredients in the retrieved recipes. For the four latent variables that have the highest correlation with the factors: wateriness, greenness, savoriness, and fruit-or-no-fruit, Fig. 4 shows traversal results for top-3 retrieved items, which are visually very coherent to the corresponding true factors. In Fig. 5, we show the ingredient word clouds generated from the top-10 retrieved items from 20 query data points for the latent variable corresponding to the savoriness. We see that the word cloud on the left end contains ingredients 9 \fFigure 4: Retrieved images from latent traversal. Top left latent variable for wateriness, top right fruit-no-fruit, bottom left savoriness, and bottom right greenness. For each panel, the query image is shown on the leftmost, and each column has top-3 retrieved items at each traversal point. Larger images, more examples, and other discovered factors can be found in Supplement. Figure 5: Ingredient word clouds for the latent variable savoriness, obtained from top-10 retrieved items over 20 queries. Ingredient color indicates typical use; red = savory and pink = non-savory (sweet). mostly associated with savory dishes, the middle contains both savory and sweet ingredients, while the right end has typical dessert ingredients. 
This highlights the main advantage of our model where it accurately identi\ufb01es true factors, and allows us to directly control the latent variables to generate (retrieve) desired data items. This feature can also be easily extended to manipulation of multiple factors. Fig. 6 shows top-5 retrieved examples when we control two or three latent variables corresponding to savoriness, wateriness, and greenness, simultaneously. 3.7 Ablation Study: Embedder Regularization In our Retrieval-IVAE model, we employed the embedder regularization loss, speci\ufb01cally (11). To verify the impact of this regularization term, we run our RIVAE model without this loss term, and the retrieval results are summarized in Table 9, in conjunction with the results with the embedder regularization. The result demonstrates that without embedding network regularization, either the performance becomes considerably degraded or simply fails. Table 9: RIVAE with/without embedder regularization. Dataset Reg. R@1 \u2191 R@5 \u2191 R@10 \u2191 Med-R \u2193 Synth Yes 0.35 0.80 0.92 2.00 No 0.29 0.73 0.90 3.00 Sprites Yes 1.00 1.00 1.00 1.00 No 0.01 0.03 0.06 144.40 SplitYes 0.52 0.89 0.96 1.00 MNIST No 0.35 0.75 0.88 2.00 10 \fFigure 6: Left: Retrieved images from activation of multiple latent variables. Middle: indicates which combinations of latent variables are activated. Right: Ingredient word clouds of the retrieved recipes. 4" + }, + { + "url": "http://arxiv.org/abs/2011.08544v1", + "title": "Recursive Inference for Variational Autoencoders", + "abstract": "Inference networks of traditional Variational Autoencoders (VAEs) are\ntypically amortized, resulting in relatively inaccurate posterior approximation\ncompared to instance-wise variational optimization. Recent semi-amortized\napproaches were proposed to address this drawback; however, their iterative\ngradient update procedures can be computationally demanding. To address these\nissues, in this paper we introduce an accurate amortized inference algorithm.\nWe propose a novel recursive mixture estimation algorithm for VAEs that\niteratively augments the current mixture with new components so as to maximally\nreduce the divergence between the variational and the true posteriors. Using\nthe functional gradient approach, we devise an intuitive learning criteria for\nselecting a new mixture component: the new component has to improve the data\nlikelihood (lower bound) and, at the same time, be as divergent from the\ncurrent mixture distribution as possible, thus increasing representational\ndiversity. Compared to recently proposed boosted variational inference (BVI),\nour method relies on amortized inference in contrast to BVI's non-amortized\nsingle optimization instance. A crucial benefit of our approach is that the\ninference at test time requires a single feed-forward pass through the mixture\ninference network, making it significantly faster than the semi-amortized\napproaches. We show that our approach yields higher test data likelihood than\nthe state-of-the-art on several benchmark datasets.", + "authors": "Minyoung Kim, Vladimir Pavlovic", + "published": "2020-11-17", + "updated": "2020-11-17", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "stat.ML" + ], + "main_content": "Introduction Accurately modeling complex generative processes for high dimensional data (e.g., images) is a key task in deep learning. 
In many application \ufb01elds, the Variational Autoencoder (VAE) [13, 29] was shown to be very effective for this task, endowed with the ability to interpret and directly control the latent variables that correspond to underlying hidden factors in data generation, a critical bene\ufb01t over synthesis-only models such as GANs [7]. The VAE adopts the inference network (aka encoder) that can perform test-time inference using a single feed-forward pass through a neural network. Although this feature, known as amortized inference, allows VAE to circumvent otherwise time-consuming procedures of solving the instance-wise variational optimization problem at test time, it often results in inaccurate posterior approximation compared to the instance-wise variational optimization [4]. Recently, semi-amortized approaches have been proposed to address this drawback. The main idea is to use an amortized encoder to produce a reasonable initial iterate, followed by instance-wise posterior \ufb01ne tuning (e.g., a few gradient steps) to improve the posterior approximation [11, 14, 23, 27]. This is similar to the test-time model adaptation of the MAML [5] in multi-task (meta) learning. However, this iterative gradient update may be computationally expensive during both training and test time: for training, some of the methods require Hessian-vector products for backpropagation, while at test time, one has to perform extra gradient steps for \ufb01ne-tuning the variational optimization. Moreover, the performance of this approach is often very sensitive to the choice of the gradient step size and the number of gradient updates. 34th Conference on Neural Information Processing Systems (NeurIPS 2020), Vancouver, Canada. arXiv:2011.08544v1 [cs.LG] 17 Nov 2020 \fIn this paper, we consider a different approach; we build a mixture encoder model, for which we propose a recursive estimation algorithm that iteratively augments the current mixture with a new component encoder so as to reduce the divergence between the resulting variational and the true posteriors. While the outcome is a (conditional) mixture inference model, which could also be estimated by end-to-end gradient descent [34], our recursive estimation method is more effective and less susceptible to issues such as the mixture collapsing. This resiliency is attributed to our speci\ufb01c learning criteria for selecting a new mixture component: the new component has to improve the data likelihood (lower bound) and, at the same time, be as divergent as possible from the current mixture distribution, thus increasing the mixture diversity. Although a recent family of methods called Boosted Variational Inference (BVI) [8, 21, 22, 2, 25] tackles this problem in a seemingly similar manner, our approach differs from BVI in several aspects. Most notably, we address the recursive inference in VAEs in the form of amortized inference, while BVI is developed within the standard VI framework, leading to a non-amortized single optimization instance, inappropriate for VAEs in which the decoder also needs to be simultaneously learned. Furthermore, for the regularization strategy, required in the new component learning stage to avoid degenerate solutions, we employ the bounded KL loss instead of the previously used entropy regularization. This approach is better suited for amortized inference network learning in VAEs, more effective as well as numerically more stable than BVI (Sec. 3.1 for detailed discussions). 
Another crucial bene\ufb01t of our approach is that the inference at test time is accomplished using a single feed-forward pass through the mixture inference network, a signi\ufb01cantly faster process than the inference in semi-amortized methods. We show that our approach empirically yields higher test data likelihood than standard (amortized) VAE, existing semi-amortized approaches, and even the high-capacity \ufb02ow-based encoder models on several benchmark datasets. 2 Background We denote by x observation (e.g., image) that follows the unknown distribution pd(x). We aim to learn the VAE model that \ufb01ts the given iid data {xi}N i=1 sampled from pd(x). Speci\ufb01cally, letting z be the underlying latent vector, the VAE is composed of a prior p(z) = N(z; 0, I) and the conditional model p\u03b8(x|z) where the latter, also referred to as the decoder, is de\ufb01ned as a tractable density (e.g., Gaussian) whose parameters are the outputs of a deep network with weight parameters \u03b8. To \ufb01t the model, we aim to maximize the data log-likelihood, PN i=1 log p\u03b8(xi) where p\u03b8(x) = Ep(z)[p\u03b8(x|z)]. As evaluating the marginal likelihood exactly is infeasible, the variational inference aims to approximate the posterior by a density in some tractable family, that is, p\u03b8(z|x) \u2248q\u03bb(z|x) where q\u03bb(z|x) is a tractable density (e.g., Gaussian) with parameters \u03bb. For instance, if the Gaussian family is adopted, then q\u03bb(z|x) = N(z; \u00b5, \u03a3), where {\u00b5, \u03a3} constitutes \u03bb. The approximate posterior q\u03bb(z|x) is often called the encoder. It is well known that the marginal log-likelihood is lower-bounded by the so-called evidence lower bound (ELBO, denoted by L), log p\u03b8(x) \u2265L(\u03bb, \u03b8; x) := Eq\u03bb(z|x) \u0002 log p\u03b8(x|z) + log p(z) \u2212log q\u03bb(z|x) \u0003 , (1) where the gap in (1) is exactly the posterior approximation error KL(q\u03bb(z|x)||p\u03b8(z|x)). Hence, maximizing L(\u03bb, \u03b8; x) with respect to \u03bb for the current \u03b8 and the given input instance x, amounts to \ufb01nding the density in the variational family that best approximates the true posterior p\u03b8(z|x). However, notice that the optimum \u03bb must be speci\ufb01c to (i.e., dependent on) the input x, and for some other input point x\u2032 one should do the ELBO optimization again to \ufb01nd the optimal encoder parameter \u03bb\u2032 that approximates the posterior p\u03b8(z|x\u2032). The stochastic variational inference (SVI) [9] directly implements this idea, and the approximate posterior inference for a new input point x in SVI amounts to solving the ELBO optimization on the \ufb02y by gradient ascent. However, the downside is computational overhead since we have to perform iterative gradient ascent to have approximate posterior q\u03bb(z|x) for a new input x. To remedy this issue, one can instead consider an ideal function \u03bb\u2217(x) that maps each input x to the optimal solution arg max\u03bb L(\u03bb, \u03b8; x). We then introduce a deep neural network \u03bb(x; \u03c6) with the weight parameters \u03c6 as a universal function approximator of \u03bb\u2217(x). Then the ELBO, now denoted as L(\u03c6, \u03b8; x), is optimized with respect to \u03c6. This approach, called the amortized variational inference (AVI), was proposed in the original VAE [13]. A clear bene\ufb01t of it is the computational speedup thanks to the feed-forward passing \u03bb(x; \u03c6) used to perform posterior inference for a new input x. 
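As a concrete reference point for Eq. (1), here is a minimal single-sample Monte Carlo ELBO for a Gaussian amortized encoder, using the closed-form Gaussian KL in place of the Monte Carlo prior/entropy terms. The `decoder_logpx(x, z)` interface returning per-example log p_theta(x|z) and the (mu, logvar) encoder outputs are placeholder assumptions, not the paper's actual code.
```python
import torch

def gaussian_elbo(x, decoder_logpx, mu, logvar):
    """Single-sample Monte Carlo estimate of the ELBO in Eq. (1) for an
    amortized Gaussian encoder q(z|x) = N(mu, diag(exp(logvar))) and a
    standard normal prior p(z) = N(0, I)."""
    std = torch.exp(0.5 * logvar)
    z = mu + std * torch.randn_like(std)            # reparameterized sample
    log_px_z = decoder_logpx(x, z)                  # per-example log p_theta(x|z)
    # closed-form KL(q(z|x) || N(0, I)), summed over latent dimensions
    kl = 0.5 * (mu.pow(2) + logvar.exp() - logvar - 1.0).sum(dim=-1)
    return (log_px_z - kl).mean()                   # batch-averaged ELBO
```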
2 \fAlthough AVI is computationally more attractive, it is observed that the quality of data \ufb01tting is degraded due to the amortization error, de\ufb01ned as an approximation error originating from the difference between \u03bb\u2217(x) and \u03bb(x; \u03c6) [4]. That is, the AVI\u2019s computational advantage comes at the expense of reduced approximation accuracy; the SVI posterior approximation can be more accurate since we minimize the posterior approximation error KL(q\u03bb(z|x)||p\u03b8(z|x)) individually for each input x. To address this drawback, the semi-amortized variational inference (SAVI) approaches have been proposed in [11, 23, 14]. The main idea is to use the amortized encoder to produce a reasonably good initial iterate for the subsequent SVI optimization. The parameters \u03c6 of the amortized encoder are trained in such a way that several steps of warm-start SVI gradient ascent would yield reduction of the instance-wise posterior approximation error, which is similar in nature to the gradient-based meta learning [5] aimed at fast adaptation of the model to a new task in the multi-task meta learning. However, the iterative gradient update procedure in SAVI is computationally expensive during both training and test times. For training, it requires backpropagation for the objective that involves gradients, implying the need for Hessian evaluation (albeit \ufb01nite difference approximation). More critically, at test time, the inference requires a time-consuming gradient ascent optimization. Moreover, its performance is often quite sensitive to the choice of the gradient step size and the number of gradient updates; and it is dif\ufb01cult to tune these parameters to achieve optimal performance-ef\ufb01ciency trade-off. Although more recent work [27] mitigated the issue of choosing the step size by the \ufb01rst-order approximate solution method with the Laplace approximation, such linearization of the deep decoder network restricts its applicability to the models containing only fully connected layers, and makes it dif\ufb01cult to be applied to more structured models such as convolutional networks. 3 Recursive Mixture Inference Model (Proposed Method) Our method is motivated by the premise of the semi-amortized inference (SAVI), i.e., re\ufb01ning the variational posterior to further reduce the difference from the true posterior. However, instead of doing the direct SVI gradient ascent as in SAVI, we introduce another amortized encoder model that augments the \ufb01rst amortized encoder to reduce the posterior approximation error. Formally, let q\u03c6(z|x) be our amortized encoder model1 with the parameters \u03c6. For the current decoder \u03b8, the posterior approximation error KL(q(z|x)||p\u03b8(z|x)) equals -L(q, \u03b8; x) (up to constant).2 The goal is to \ufb01nd another amortized encoder model q\u2032(z|x) with the parameters \u03c6\u2032 such that, when convexly combined with q(z|x) in a mixture \u03f5q\u2032 + (1 \u2212\u03f5)q for some small \u03f5 > 0, the resulting reduction of the posterior approximation error, \u2206KL := L(\u03f5q\u2032 + (1 \u2212\u03f5)q, \u03b8; x) \u2212L(q, \u03b8; x), is maximized. That is, we seek \u03c6\u2032 that maximizes \u2206KL. Compared to SAVI. The added encoder q\u2032 can be seen as the means for correcting q, to reduce the mismatch between q and the true p\u03b8(z|x). 
In SAVI, this correction is done by explicit gradient ascent (\ufb01netuning) along \u03c6 for every inference query, at train or test time, which is computationally expensive. In contrast, we learn a differential amortized encoder at training time, which is \ufb01xed at test time, requiring only a single neural network feed-forward pass to obtain the approximate posterior. This encoder correction-by-augmentation can continue by regarding the mixture \u03f5q\u2032 + (1 \u2212\u03f5)q as our current inference model to which another new amortized encoder will be added, with the recursion repeated a few times. This leads to a mixture model for the encoder, Q(z|x) = \u03b10q(z|x) + \u03b11q\u2032(z|x) + \u00b7 \u00b7 \u00b7 , where P m \u03b1m = 1. The main question is how to \ufb01nd the next encoder model to augment the current mixture Q. We do this by the functional gradient approach [6, 24]. Functional gradients for mixture component search. Following the functional gradient framework [6, 24], the (ELBO) objective for the mixture Q(z|x) can be expressed as a functional, namely a function that takes a density function Q as input, J(Q) := EQ(z|x) \u0002 log p\u03b8(x|z) + log p(z) \u2212log Q(z|x) \u0003 . (2) Let Q(z|x) be our current mixture. We aim to \ufb01nd q(z|x) to be added to Q by convex combination, Q(z|x) \u2190\u03f5q(z|x) + (1 \u2212\u03f5)Q(z|x) (3) for some small \u03f5 > 0, that maximizes our objective functional J. To this end we take the functional gradient of the objective J(Q) with respect to Q. For a given input x, we regard the function Q(z|x) 1This is a shorthand for q\u03bb(x;\u03c6)(z|x). We often drop the subscript and use q(z|x) for simplicity in notation. 2We often abuse the notation, either L(\u03c6, \u03b8; x) or L(q, \u03b8; x) interchangeably. 3 \fFigure 1: Illustration on MNIST using 2D latent z space. Results on two data instances (left and right) are shown. (Top) Our recursive estimation: The progress of learning the second mixture component is shown from left to right. The contour shows the true posterior p(z|x), the red is q0(z|x), the cyan is the second component that we learn here q1(z|x). We only trained q1; remaining parameters (of the decoder and q0) are \ufb01xed. Parameters of q1 are initialized to those of q0. (Bottom) Conventional (blind) mixture estimation by end-to-end gradient ascent. For the instance 1 (left), the two components collapse onto each other. For the second (right), a single component (red) becomes dominant while the other (cyan) stays away, unutilized, from the support of the true posterior. The cyan is initialized randomly to be different from the red (otherwise, it constitutes a local minimum). as an in\ufb01nite-dimensional vector indexed by z, and take the partial derivative at each z, which yields: \u2202J(Q) \u2202Q(z|x) = log p\u03b8(x|z) + log p(z) \u2212log Q(z|x) \u22121. (4) Since we have a convex combination (3), the steepest ascent direction (4) needs to be projected onto the feasible function space {q(\u00b7|x) \u2212Q(\u00b7|x) : q \u2208Q} where Q = {q\u03c6}\u03c6 is the set of variational densities realizable by the parameters \u03c6. Formally we solve the following optimization: max q\u2208Q \u001c q(\u00b7|x) \u2212Q(\u00b7|x), \u2202J(Q) \u2202Q(\u00b7|x) \u001d , (5) where \u27e8\u00b7, \u00b7\u27e9denotes the inner product in the function space. 
Using (4), and considering all training samples x \u223cpd(x), the optimization (5) can be written as: max \u03c6 Epd(x) h Eq\u03c6(z|x) \u0002 log p\u03b8(x|z) + log p(z) \u2212log Q(z|x) \u0003i , (6) where the outer expectation is with respect to the data distribution pd(x). By adding and subtracting log q\u03c6(z|x) to and from the objective, we see that (6) can be rephrased as follows: max \u03c6 Epd(x) h L(\u03c6, \u03b8; x) + KL(q\u03c6(z|x)||Q(z|x)) i . (7) Note that (7) gives us very intuitive criteria of how the new encoder component q\u03c6 should be selected: it has to maximize the ELBO (the \ufb01rst objective term), and at the same time, q\u03c6 should be different from the current mixture Q (the KL term). That is, our next encoder has to keep explaining the data well (by large ELBO) while increasing the diversity of the encoder distribution (by large KL), concentrating on those regions of the latent space that were poorly represented by the current Q. This supports our original intuition stated at the beginning of this section. See Fig. 1 for the illustration. Why recursive estimation. Although we eventually form a (conditional) mixture model for the variational encoder, and such a mixture model can be estimated by end-to-end gradient descent, our recursive estimation is ef\ufb01cient and less susceptible to the known issues of blind mixture estimation, including collapsed mixture components and domination by a single component. This resiliency is attributed to our speci\ufb01c learning criteria for selecting a new mixture component: improve the data likelihood and at the same time be as distinct as possible from the current mixture, thus increasing diversity. See Fig. 1 for an illustrative comparison between our recursive and blind mixture estimation. 3.1 Optimization Strategy Although we discussed the key idea of recursive mixture estimation, that is, at each step, \ufb01xing the current mixture Q and add a new component q, it should be noted that the previously added components q\u2019s (and their mixing proportions) need to be re\ufb01ned every time we update the decoder 4 \fAlgorithm 1 Recursive Learning Algorithm for Mixture Inference Model. Input: Initial {qm(z|x; \u03c6m)}M m=0, {\u03f5m(x; \u03b7m)}M m=1, and p\u03b8(x|z). Learning rate \u03b3. KL bound C. Output: Learned inference and decoder models. Let: Qm = (1 \u2212\u03f5m)Qm\u22121 + \u03f5mqm (m = 1 . . . M), Q0 = q0. BKL(p||q) = max(C, KL(p||q)). repeat Sample a batch of data B from pd(x). Update q0(z|x; \u03c60): \u03c60 \u2190\u03c60 + \u03b3\u2207\u03c60Ex\u223cB \u0002 L(q0, \u03b8; x) \u0003 . for m = 1, . . . , M do Update qm(z|x; \u03c6m): \u03c6m \u2190\u03c6m + \u03b3\u2207\u03c6mEx\u223cB \u0002 L(qm, \u03b8; x) + BKL(qm||Qm\u22121) \u0003 . Update \u03f5m(x; \u03b7m): \u03b7m \u2190\u03b7m + \u03b3\u2207\u03b7mEx\u223cB \u0002 L \u0000(1 \u2212\u03f5m)Qm\u22121 + \u03f5mqm, \u03b8; x \u0001\u0003 . end for Update p\u03b8(x|z): \u03b8 \u2190\u03b8 + \u03b3\u2207\u03b8Ex\u223cB \u0002 L(QM, \u03b8; x) \u0003 . until convergence parameters \u03b8. This is due to the VAE framework in which we have to learn the decoder in conjunction with the inference model, one of the main differences from the previous BVI approaches (See Sec. 4). To this end, we consider a mixture model Q that consists of the \ufb01xed number (M) of components added to the initial component (denoted by q0), namely Q(z|x) = \u03b10(x)q0(z|x) + M X m=1 \u03b1m(x)qm(z|x), (8) where qm(z|x) (m = 0, . . . 
, M) are all amortized encoders whose parameters are denoted by \u03c6m, and \u03b1m are the mixing proportions. Since the impact of each component can be different from instance to instance, we consider functions \u03b1m(x), instead of scalars. To respect the idea of recursively adding components (i.e., qm with \u03f5m), the mixing proportions conform to the following implicit structure: \u03b1m(x) = \u03f5m(x) M Y j=m+1 (1 \u2212\u03f5j(x)) for m = 0, 1, . . . , M (let \u03f50(x) = 1). (9) This is derived from the recursion, Qm = (1 \u2212\u03f5m)Qm\u22121 + \u03f5mqm for m = 1, . . . , M, where we denote by Qm the mixture formed by q0, q1, . . . , qm with \u03f50(= 1), \u03f51, . . . , \u03f5m, and Q0 := q0. Hence QM = Q. Note also that we model \u03f5m(x) as neural networks \u03f5m(x; \u03b7m) with parameters \u03b7m. Now we describe our recursive mixture learning algorithm. As we seek to update all components simultaneously together with the decoder \u03b8, we employ gradient ascent optimization with all parameters iteratively and repeatedly. Our algorithm is described in Alg. 1. Notice that for the \u03c6 update in the algorithm, we used the BKL which stands for Bounded KL, in place of KL. The KL term in (7) is to be maximized, and it can be easily unbounded; In typical situations, KL(q||Q) can become arbitrarily large by having q concentrate on the region where Q has zero support. To this end, we impose an upper barrier on the KL term, that is, BKL(q||Q) = max(C, KL(q||Q)), so that increasing KL beyond the barrier point C gives no incentive. C = 500.0 works well empirically. Similar degeneracy issues have been dealt with in the previous BVI approaches for non-VAE variational inference [8, 21]. Most approaches attempted to regularize small entropy when optimizing the new components to be added. However, the entropy regularization may be less effective for the iterative re\ufb01nement of the mixture components within the VAE framework, since we have indirect control of the component models (and their entropy values) only through the density parameter networks \u03bb(x; \u03c6) in q\u03bb(x;\u03c6)(z|x) (i.e., amortized inference). Furthermore, it encourages the component densities to have large entropy all the time as a side effect, which can lead to a suboptimal solution in certain situations. Our upper barrier method, on the other hand, regularizes the component density only if they are too close (within the range of C KL divergence) to the current mixture, rendering it better chance to \ufb01nd an optimal solution outside the C-ball of the current mixture. In fact, the empirical results in Sec. 5.3 demonstrate that our strategy leads to better performance. The nested loops in Alg. 1 may appear computationally costly, however, the outer loop usually takes a few epochs (usually no more than 20) since we initialize all components qm identically with the trained encoder parameters of the standard VAE (afterwards, the components quickly move away from each other due to the BKL term). The mixture order M (the number of the inner iterations) is typically small as well (e.g., between 1 and 4), which renders the algorithm fairly ef\ufb01cient in practice. 5 \f4 Related Work The VAE\u2019s issue of amortization error was raised recently [4], and the semi-amortized inference approaches [11, 23, 14] attempted to address the issue by performing the SVI gradient updates at test time. 
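Returning briefly to the component update of Alg. 1 and the bounded-KL term above, the sketch below shows one plausible loss for learning a new component q_m: its ELBO plus a KL to the current mixture that is estimated by Monte Carlo and capped at C. Reading BKL as a cap follows the stated intent that growing the KL beyond C earns no incentive (the formula is written with max); all callables here are hypothetical placeholders.
```python
import torch

def new_component_loss(elbo_qm, z_from_qm, log_qm, log_Qprev, C=500.0):
    """Loss (to minimize) for a new mixture component q_m, cf. Alg. 1:
    maximize its ELBO plus a bounded KL divergence from the current
    mixture Q_{m-1}, so q_m keeps fitting the data while staying away
    (up to the bound C) from what the mixture already covers."""
    # Monte Carlo estimate of KL(q_m(z|x) || Q_{m-1}(z|x)) from z ~ q_m
    kl_mc = (log_qm(z_from_qm) - log_Qprev(z_from_qm)).mean()
    bounded_kl = torch.clamp(kl_mc, max=C)   # growing the KL past C earns nothing
    return -(elbo_qm + bounded_kl)           # negate for gradient descent
```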
Alternatively one can enlarge the representational capacity of the encoder network, yet still amortized inference. A popular approach is the \ufb02ow-based models that apply nonlinear invertible transformations to VAE\u2019s variational posterior [31, 12]. The transformations could be complex autoregressive mappings, while they can also model full covariance matrices via ef\ufb01cient parametrization to represent arbitrary rotations, i.e., cross-dimensional dependency. Our use of functional gradient in designing a learning objective stems from the framework in [6, 24]. Mathematically elegant and \ufb02exible in the learning criteria, the framework was more recently exploited in [3] to unify seemingly different machine learning paradigms. Several mixture-based approaches aimed to extend the representational capacity of the variational inference model. In [33] the variational parameters were mixed with a \ufb02exible distribution. In [32] the prior is modeled as a mixture (aggregate posterior), while [17] attempted to tighten the lower bound by matching optimal prior with functional Frank-Wolfe. Boosted VI. Previously, there were approaches to boost the inference network in variational inference similar to our idea [8, 21, 22, 2, 25], where some of them [21, 22, 2] focused on theoretical convergence analysis, inspired by the Frank-Wolfe [10] interpretation of the greedy nature of the algorithm in the in\ufb01nite-dimensional (function) space. However, these approaches all aimed for stochastic VI in the non-VAE framework, hence non-amortized inference, whereas we consider amortized inference in the VAE framework in which both the decoder and the inference model need to be learned. We brie\ufb02y summarize the main differences between the previous BVI approaches and ours as follows: 1) We learn Q(z|x), a density functional of input x, while BVI optimizes Q(z), a single variational density (not a function of x), and thus involves only single optimization. 2) Within the VAE framework, as the decoder is not optimal in the course of training, we update the decoder and all the inference components iteratively and repeatedly. 3) To avoid degeneracy in KL maximization, we employ the bounded KL instead of BVI\u2019s entropy penalization, better suited for amortized inference and more effective in practice. 4) The instant impacts of the components, \u03f5(x) are also modeled input-dependent (as neural networks) rather than tunable scalars as in BVI. 5 Evaluations We test the proposed recursive inference model3 on several benchmark datasets. We highlight improved test likelihood scores and reduced inference time, compared to semi-amortized VAEs. We also contrast with \ufb02ow models that aim to increase modeling accuracy using high capacity encoders. Competing approaches. VAE: The standard VAE model (amortized inference) [13, 29]. SA: The semi-amortized VAE [11]. We \ufb01x the SVI gradient step size as 10\u22123, but vary the number of SVI steps from {1, 2, 4, 8}. IAF: The autoregressive-based \ufb02ow model for the encoder q(z|x) [12], which has richer expressiveness than VAE\u2019s Gaussian encoder. HF: The Householder \ufb02ow encoder model that represents the full covariance using the Householder transformation [31]. The numbers of \ufb02ows for IAF and HF are chosen from {1, 2, 4, 8}. 
ME: For a baseline comparison, we also consider the same mixture encoder model, but unlike our recursive mixture learning, the model is trained conventionally, end-to-end; all mixture components\u2019 parameters are updated simultaneously. The number of mixture components is chosen from {2, 3, 4, 5}. RME: Our proposed recursive mixture encoder model. We vary the number of additional components M from {1, 2, 3, 4}, leading to mixture order 2 to 5. All components are initialized identically with the VAE\u2019s encoder. See Supplement for the details. Datasets. MNIST [19], OMNIGLOT [18], SVHN [26], and CelebA [20]. We follow train/test partitions provided in the data, where 10% of the training sets are randomly held out for validation. For CelebA, we randomly split data into 80%/10%/10% train/validation/test sets. Network architectures. We adopt the convolutional neural networks for the encoder and decoder models for all competing approaches. This is because the convolutional networks are believed to outperform fully connected networks for many tasks in the image domain [16, 30, 28]. We also provide empirical evidence in the Supplement by comparing the test likelihood performance between the two architectures.4 For the details of the network architectures, refer to the Supplement. 3The code is publicly available from https://github.com/minyoungkim21/recmixvae 4Fully-connected decoder architectures are inferior to the deconvnet when the number of parameters are roughly equal. This is why we exclude comparison with the recent [27], but see Supplement for the results. 6 \fTable 1: Test log-likelihood scores estimated by IWAE sampling. The parentheses next to model names indicate: the number of SVI steps in SA, the number of \ufb02ows in IAF and HF, and the mixture order in ME and RME. The superscripts are the standard deviations. The best (on average) results are boldfaced in red. In each column, the statistical signi\ufb01cance of the difference between the best model (red) and each competing model, is depicted as color: anything non-colored indicates p \u22640.01 (strongly distinguished), p \u2208(0.01, 0.05] as yellow-orange, p \u2208(0.05, 0.1] as orange, p > 0.1 as red orange (little evidence of difference) by the Wilcoxon signed rank test. Best viewed in color. 
Dataset MNIST OMNIGLOT SVHN CelebA dim(z) 20 50 20 50 20 50 20 50 VAE 930.73.9 1185.73.9 501.61.6 801.64.0 4054.514.3 5363.721.4 12116.425.3 15251.939.7 SA(1) 921.22.3 1172.11.8 499.32.5 792.77.9 4031.519.0 5362.135.7 12091.121.6 15285.829.4 SA(2) 932.02.4 1176.33.4 501.02.7 793.14.8 4041.515.5 5377.023.2 12087.121.5 15252.729.0 SA(4) 925.52.6 1171.33.5 488.21.8 794.41.9 4051.922.2 5391.720.4 12116.320.5 15187.327.9 SA(8) 928.13.9 1183.23.4 490.32.8 799.42.7 4041.69.5 5370.818.5 12100.622.8 15096.527.2 IAF(1) 934.03.3 1180.62.7 489.91.9 788.84.1 4050.09.4 5368.311.5 12098.020.6 15271.228.6 IAF(2) 931.43.7 1190.11.9 494.91.4 795.72.7 4054.610.5 5360.010.0 12104.521.8 15262.227.8 IAF(4) 926.32.6 1178.11.6 496.02.0 775.12.2 4048.68.7 5338.110.2 12094.622.6 15261.028.1 IAF(8) 934.12.4 1150.02.2 498.82.3 774.72.9 4042.09.6 5341.810.1 12109.322.0 15241.527.9 HF(1) 917.22.6 1204.34.0 488.62.0 795.93.3 4028.89.7 5372.010.1 12077.231.4 15240.527.6 HF(2) 923.93.1 1191.510.8 495.91.8 784.54.8 4030.79.9 5376.610.2 12093.025.6 15258.230.3 HF(4) 927.32.8 1197.21.5 487.02.7 799.73.2 4038.49.7 5371.89.8 12082.027.0 15266.529.5 HF(8) 928.53.1 1184.11.8 488.32.4 794.64.0 4035.98.9 5351.111.1 12087.325.5 15248.729.7 ME(2) 926.73.0 1152.81.7 491.71.4 793.43.8 4037.211.0 5343.213.1 12072.723.3 15290.529.3 ME(3) 933.14.1 1162.84.7 491.22.1 807.54.9 4053.816.1 5367.715.8 12100.321.7 15294.628.3 ME(4) 914.72.3 1205.12.3 491.31.8 732.03.1 4061.312.0 5191.918.5 12092.222.6 15270.720.6 ME(5) 920.61.9 1198.53.5 478.02.8 805.73.8 4057.512.2 5209.212.8 12095.325.1 15268.827.5 RME(2) 943.91.6 1201.70.9 508.21.2 821.03.1 4085.39.7 5403.210.2 12193.123.5 15363.031.7 RME(3) 945.11.6 1202.41.0 507.51.1 820.40.9 4085.99.8 5405.110.4 12192.323.5 15365.631.4 RME(4) 945.21.6 1203.11.0 509.01.2 819.90.9 4080.79.9 5403.810.2 12192.623.4 15364.331.5 RME(5) 945.01.7 1203.71.0 509.11.4 819.90.9 4086.910.9 5405.58.5 12194.211.5 15366.212.7 Table 2: Test data loglikelihood scores for the Binary MNIST. Our results are in the column titled \u201cCNN\u201d. The column \u201cFC\u201d is excerpted from [27]. CNN FC VAE -84.49 -85.38 SA(1) -83.64 -85.20 SA(2) -83.79 -85.10 SA(4) -83.85 -85.43 SA(8) -84.02 -85.24 IAF(1) -83.37 -84.26 IAF(2) -83.15 -84.16 IAF(4) -83.08 -84.03 IAF(8) -83.12 -83.80 HF(1) -83.82 -85.27 HF(2) -83.70 -85.31 HF(4) -83.87 -85.22 HF(8) -83.76 -85.41 ME(2) -83.77 ME(3) -83.81 ME(4) -83.83 ME(5) -83.75 VLAE(2) -83.72 VLAE(3) -83.84 VLAE(4) -83.73 VLAE(5) -83.60 RME(2) -83.14 RME(3) -83.14 RME(4) -83.09 RME(5) -83.15 Experimental setup. We vary the latent dim(z), small (20) or large (50).5 To report the test log-likelihood scores log p(x), we use the importance weighted sampling estimation (IWAE) method [1] with 100 samples (Supplement for details). For each model/dataset, we perform 10 runs with different random train/validation splits, where each run consists of three trainings by starting with different random model parameters, among which only one model with the best validation result is chosen. 5.1 Results The test log-likelihood scores are summarized in Table 1.6 Overall the results indicate that our recursive mixture encoder (RME) outperforms the competing approaches consistently for all datasets. To see the statistical signi\ufb01cance, we performed the one-sided Wilcoxon signed rank test for every pair (the best model, non-best model). The results indicate that this superiority is statistically signi\ufb01cant. Comparison to ME. 
With one exception, speci\ufb01cally ME (4) with dim(z) = 50 on the MNIST, the blind end-to-end mixture learning (ME) consistently underperforms our RME. As also illustrated in Fig. 1, the blind mixture estimation can potentially suffer from mixture collapsing and single dominant component issues. The fact that even the VAE often performs comparably to the ME with different mixture orders supports this observation. On the other hand, our recursive mixture estimation is more robust to the initial parameters. Due to its incremental learning nature, it \"knows\" the regions in the latent space ill-represented by the current mixture, then updates mixture components to complement those regions. This strategy allows the RME to effectively model highly multi-modal posterior distributions, yielding more robust and accurate variational posterior approximation. Comparison to SA. The semi-amortized approach (SA) sometimes achieves improvement over the VAE, but not consistently. In particular, its performance 5The results for dim(z) = 10 and 100, also on the CIFAR10 dataset [15], are reported in the Supplement. 6The MNIST results mismatch those reported in the related work (e.g., [32]). Signi\ufb01cantly higher scores. This is because we adopt the Gaussian decoder models, not the binary decoders, for all competing methods. 7 \fis generally very sensitive to the number of SVI gradient update steps. This is another drawback of the SA, where the gradient-based adaption has to be performed at the test time. Although one could adjust the gradient step size (in place of currently used \ufb01xed step size) to improve the performance, there is little principled way to tune the step size at test time that can attain optimal accuracy and inference time trade off. The number of SVI steps in the SA may correspond to the mixture order in our RME model, and the results show that increasing the mixture order usually improves, and not deteriorate, the generalization performance. Comparison to IAF/HF. Although \ufb02ow models have rich representational capacity, possibly with full covariance matrices (HF), the improvement over the VAE is limited compared to our RME; the models sometimes perform not any better than the VAE. The failure of the \ufb02ow-based models may originate from the dif\ufb01culty of optimizing the complex encoder models. (Similar observations were made in related previous work [27]). This result signi\ufb01es that sophisticated and discriminative learning criteria are critical, beyond just enlarging the structural capacity of the neural networks, similarly observed from the failure of conventional mixtures. Non-Gaussian likelihood model. Our empirical evaluations were predominantly conducted with the convolutional architectures on real-valued image data. For the performance of our model with non-convolutional (fully connected) network architectures, the readers can refer to Table 5 and 6 in the supplementary material. For the binarized input images, we have conducted extra experiments on the Binary MNIST dataset. The binary images can be modeled by a Bernoulli likelihood in the decoder. Table 2 summarized the results. We have set the latent dimension dim(z) = 50, and used the same CNN architectures as before, except that the decoder output is changed from Gaussian to Bernoulli. We also include the reported results from [27] for comparison, which employed the same latent dimension 50 and fully connected encoder/decoder networks with similar model complexity as our CNNs\u2019. 
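The test log-likelihoods in Tables 1 and 2 above are importance-weighted (IWAE) estimates. A minimal sketch of such an estimator is given below for illustration; the `encoder`/`decoder` interfaces are assumptions (a single Gaussian posterior component is used for brevity, whereas the mixture case would evaluate q as a weighted sum over components), and `decoder(z)` is assumed to return a `torch.distributions` object, which covers both the Gaussian and the Bernoulli decoders discussed here.

```python
import math
import torch

@torch.no_grad()
def iwae_log_likelihood(x, encoder, decoder, K=100):
    # log p(x) ~= log (1/K) sum_k  p(x | z_k) p(z_k) / q(z_k | x),   z_k ~ q(z | x).
    mu, log_var = encoder(x)                              # (batch, d) each
    q = torch.distributions.Normal(mu, (0.5 * log_var).exp())
    prior = torch.distributions.Normal(torch.zeros_like(mu), torch.ones_like(mu))
    log_w = []
    for _ in range(K):
        z = q.sample()
        log_px_z = decoder(z).log_prob(x).flatten(1).sum(-1)   # log p(x | z)
        log_w.append(log_px_z + prior.log_prob(z).sum(-1) - q.log_prob(z).sum(-1))
    log_w = torch.stack(log_w, dim=0)                     # (K, batch)
    return torch.logsumexp(log_w, dim=0) - math.log(K)    # per-example estimate
```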
As shown, IAF and our RME performs equally the best, although the performance differences among the competing approaches are not very pronounced compared to real-valued image cases. 5.2 Test Inference Time Another key advantage of our recursive mixture inference is the computational ef\ufb01ciency of test-time inference, comparable to that of VAE. Unlike the semi-amortized approaches, where one performs the SVI gradient adaptation at test time, the inference in our RME is merely a single feed forward pass through our mixture encoder network. That is, once training is done, our mixture inference model remains \ufb01xed, with no adaptation required. To verify this empirically, we measure the actual inference time for the competing approaches. The per-batch test inference times (batch size 128) on all benchmark datasets are shown in Tab. 8. Table 3: Inference time (milliseconds). MNIST OMNIG. SVHN CELEBA VAE 3.6 4.8 2.2 2.7 SA(1) 9.7 11.6 7.0 8.4 SA(2) 18.1 19.2 15.5 13.8 SA(4) 32.2 34.4 30.1 27.1 SA(8) 60.8 65.7 60.3 53.8 IAF(1) 4.8 5.7 3.4 4.4 IAF(2) 5.9 6.4 3.7 5.1 IAF(4) 6.2 7.0 4.7 5.7 IAF(8) 7.7 8.2 5.7 7.7 RME(2) 4.7 5.4 3.2 4.2 RME(3) 4.9 5.5 3.6 4.1 RME(4) 4.6 5.3 3.5 4.2 RME(5) 4.8 5.6 3.3 4.8 To report the results, for each method and each dataset, we run the inference over the entire test set batches, measure the running time, then take the per-batch average. We repeat the procedure \ufb01ve times and report the average. All models are run on the same machine with a single GPU (RTX 2080 Ti), Core i7 3.50GHz CPU, and 128 GB RAM. While we only report test times for dim(z) = 50, the impact of the latent dimension appears to be less signi\ufb01cant. As expected, the semi-amortized approach suffers from the computational overhead of test-time gradient updates, with the inference time signi\ufb01cantly increased as the number of updates increases. Our RME is comparable to VAE, and faster than IAF (with more than a single \ufb02ow), which veri\ufb01es our claim. Interestingly, increasing the mixture order in our model rarely affects the inference time, due to intrinsic parallelization of the feed forward pass through the multiple mixture components networks, leading to inference time as fast as that of VAE. 5.3 Comparison with Boosted VI\u2019s Entropy Regularization Recall that our RME adopted the bounded KL (BKL) loss to avoid degeneracy in the component update stages. Previous boosted VI (BVI) approaches employ different regularization, namely penalizing small entropy for the new components. However, such indirect regularization can be 8 \fTable 4: Comparison with the BVI\u2019s entropy regularization [21]. The same color scheme as Tab. 1. 
Dataset MNIST OMNIGLOT SVHN CelebA dim(z) 20 50 20 50 20 50 20 50 RME(2) 943.91.6 1201.70.9 508.21.2 821.03.1 4085.39.7 5403.210.2 12193.123.5 15363.031.7 RME(3) 945.11.6 1202.41.0 507.51.1 820.40.9 4085.99.8 5405.110.4 12192.323.5 15365.631.4 RME(4) 945.21.6 1203.11.0 509.01.2 819.90.9 4080.79.9 5403.810.2 12192.623.4 15364.331.5 RME(5) 945.01.7 1203.71.0 509.11.4 819.90.9 4086.910.9 5405.58.5 12194.211.5 15366.212.7 BVI(2) 939.72.8 1196.22.8 507.92.2 817.13.3 4077.310.3 5388.210.2 12133.525.1 15206.428.2 BVI(3) 939.52.9 1191.62.9 507.82.2 816.63.4 4076.610.3 5384.210.5 12146.522.4 15249.528.1 BVI(4) 937.82.9 1191.62.8 507.82.3 816.83.4 4073.110.2 5371.110.4 12127.722.3 15085.828.4 BVI(5) 931.23.0 1183.12.9 508.22.3 816.43.3 4071.210.2 5378.110.1 12092.322.3 15052.528.0 less effective for the iterative re\ufb01nement of the mixture components within the VAE framework (the second last paragraph of Sec. 3.1). To verify this claim, we test our RME models with the BKL loss replaced by the BVI\u2019s entropy regularization. More speci\ufb01cally, following the scheme of [21], we replace our BKL loss by \u03bd \u00b7 Eq(z|x)[\u2212log q(z|x)] estimated by Monte Carlo, where \u03bd = 1/\u221at + 1 is the impact that decreases as the training iteration t.7 See Tab. 4 for the results. This empirical result demonstrates that our bounded KL loss consistently yields better performance than entropy regularization. We also observe that our BKL loss leads to numerically more stable solutions: For entropy regularization, we had to reduce the learning rate to the tenth of that of BKL to avoid NaNs. 6" + }, + { + "url": "http://arxiv.org/abs/1909.02820v1", + "title": "Bayes-Factor-VAE: Hierarchical Bayesian Deep Auto-Encoder Models for Factor Disentanglement", + "abstract": "We propose a family of novel hierarchical Bayesian deep auto-encoder models\ncapable of identifying disentangled factors of variability in data. While many\nrecent attempts at factor disentanglement have focused on sophisticated\nlearning objectives within the VAE framework, their choice of a standard normal\nas the latent factor prior is both suboptimal and detrimental to performance.\nOur key observation is that the disentangled latent variables responsible for\nmajor sources of variability, the relevant factors, can be more appropriately\nmodeled using long-tail distributions. The typical Gaussian priors are, on the\nother hand, better suited for modeling of nuisance factors. Motivated by this,\nwe extend the VAE to a hierarchical Bayesian model by introducing hyper-priors\non the variances of Gaussian latent priors, mimicking an infinite mixture,\nwhile maintaining tractable learning and inference of the traditional VAEs.\nThis analysis signifies the importance of partitioning and treating in a\ndifferent manner the latent dimensions corresponding to relevant factors and\nnuisances. Our proposed models, dubbed Bayes-Factor-VAEs, are shown to\noutperform existing methods both quantitatively and qualitatively in terms of\nlatent disentanglement across several challenging benchmark tasks.", + "authors": "Minyoung Kim, Yuting Wang, Pritish Sahu, Vladimir Pavlovic", + "published": "2019-09-06", + "updated": "2019-09-06", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "cs.CV", + "stat.ML" + ], + "main_content": "Introduction Data, such as images or videos, are inherently highdimensional, a result of interactions of many complex factors such as lighting, illumination, geometry, etc. 
Identifying those factors and their intricate interplay is the key not only to explaining the source of variability in the data but also to ef\ufb01ciently representing the same data for subsequent analysis, classi\ufb01cation, or even re-synthesis. To tackle this problem, deep factor models such as the VAE [17] have been proposed to principally, mathematically concisely, and computationally ef\ufb01ciently model the nonlinear generative relationship between the ambient data and the latent factors. However, solely identifying some factors beyond the sources of variability is not suf\ufb01cient; it is ultimately desirable that the identi\ufb01ed factors also be disentangled. Although there are several different, sometimes opposing, views of disentanglement [3, 12], the most commonly accepted de\ufb01nition aligns with the notion of apriori independence, where each aspect of independent variability in data is exclusively sourced in one latent factor. Identifying these disentangled factors will then naturally lead to an effective, succinct representation of the data. In this paper we aim to solve this disentangled representation learning task in the most challenging, unsupervised setting, with no auxiliary information, such as labels, provided during the learning process. While there have been considerable recent efforts to solve the latent disentanglement problem [7, 21, 13, 5, 18, 15, 6], most prior approaches have failed to produce satisfactory solutions. One fundamental reason for this is their inadequate treatment of the key factors supporting the disentanglement, which have in prior works been almost universally tied to i.i.d. Gaussian priors. In contrast, to accomplish high-quality disentanglement one needs to distinguish, and treat separately, the relevant latent variables, responsible for principal variability in the data, from the nuisance sources of minor variation. Speci\ufb01cally, the relevant factors may exhibit nonGaussian, long-tail behavior, which discerns them from statistically independent Gaussian nuisances. We will detail and justify this requirement in section 2. Our goal in this paper is to develop principled factor disentanglement algorithms that meet this requirement. In particular, we propose three different hierarchical Bayesian models that place hyper-priors on the parameters of the latent prior. This effectively mimics employing in\ufb01nite mixtures while maintaining tractable learning and inference of traditional VAEs. We begin with a brief background on VAEs, describe our motivation and requirement to achieve the disentanglement in a principled way (section 2), followed by the de\ufb01nition of speci\ufb01c models (section 3). Background. We denote by x \u2208RD the observation (e.g., image) and by z \u2208Rd the underlying latent vector. The 1 arXiv:1909.02820v1 [cs.LG] 6 Sep 2019 \fvariational auto-encoder (VAE) [17] is a deep probabilistic model that represents the joint distribution as: p(z) = N(z; 0, I), (1) p\u03b8(x|z) = p(x; \u03b8(z)), (2) where p(x; \u03b8(z)) is a density model with the parameters \u03b8(z) whose likelihood can be tractably computed (e.g., Gaussian or Bernoulli), and \u03b8(z) is the output of a deep model with its own weight parameters. In the unsupervised setting, with ambient data {xn}N n=1, the model can be learned by the MLE, i.e., maximizing PN n=1 log p(xn). 
This requires posterior inference p(z|x), but as the exact inference is intractable, the VAE adopts the variational technique: approximate p(z|x) \u2248q\u03bd(z|x), where q\u03bd(z|x) = q(z; \u03bd(x)) is a freely chosen tractable density with parameters modeled by deep model \u03bd(x). A typical choice, assumed throughout the paper, is independent Gaussian, q\u03bd(z|x) = d Y j=1 N(zj; mj(x), sj(x)2), (3) where \u03bd(x) = {mj(x), sj(x)}d j=1 for some deep networks mj(x) and sj(x). The negative data log-likelihood admits the following as its upper bound, Rec(\u03b8, \u03bd) + Epd(x) \u0002 KL(q\u03bd(z|x)||p(z)) \u0003 , (4) which we minimize wrt \u03b8(\u00b7) and \u03bd(\u00b7). Here, pd(x) is the empirical data distribution of {xn}N n=1, and Rec(\u03b8, \u03bd) = \u2212Epd(x) \u0002 Eq\u03bd(z|x)[log p\u03b8(x|z)] \u0003 (5) is the negative expected log-likelihood, identical to the reconstruction loss. 2. Our Motivation Although minimizing (4) can yield a model that faithfully explains the observations, the learned model does not necessarily exhibit disentanglement of latent factors. In this section, we begin with a common notion of latent disentanglement1, and consider a semi-parametric extension of VAE to derive a principled objective function to achieve latent disentanglement under this notion. Our analysis also suggests to discriminate relevant latent variables from nuisances, and separately treat the two. Notion of Disentanglement. Consider a set of aspects that can be observed in x, where the value of each aspect varies independently from the others in the data. In the facial image data, for instance, we typically observe images 1While there is no universal de\ufb01nition, the one we use shares the main concepts with other de\ufb01nitions, including the recent symmetric transformation view [12]. where the variability of each aspect, say (pose, gender, facial expression), is independent from the others (e.g., the distribution of pose variability in images is the same regardless of gender or expression). We then say the latent vector z is disentangled if each variable zj is statistically correlated with only a single aspect, exclusive from other z\u2212j. That is, varying zj while \ufb01xing z\u2212j, results in the exclusive variation of the j-th aspect in x. Relevant vs. Nuisance Variables. It is natural to assume the exact number of meaningful aspects is a priori unknown, but a suf\ufb01ciently large upper bound d may be known. Only some variables in z will have correspondence to aspects, with the rest attributed to nuisance effects (e.g., acting as a conduit to the data generation process). We thus partition the latent dimensions into two disjoint subsets, R (relevant) and N (nuisance), R \u222aN = {1, . . . , d} and R \u2229N = \u2205. Formally, index j is said to be relevant (j \u2208R) if zj and x are statistically dependent and j is called nuisance (j \u2208N) if zj and x are statistically independent. Analysis in this section assumes known R and N. The above notion implies the latent variables zj\u2019s be apriori independent of each other, in agreement with the goals and framework of the independent component analysis (ICA) [14], the task of blind separation of statistically independent sources. In particular, our derivation is based on the semi-parametric view [4, 2], in which the only assumption made is that of a fully factorized p(z), with no restrictions on the choice of the density p(z). 
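For reference, a minimal sketch of the objective in (4)-(5) with the factorized Gaussian encoder of (3) is shown below. The `encoder`/`decoder` interfaces are assumptions (here the decoder outputs Bernoulli logits, so the reconstruction term is a pixel-wise cross entropy); the KL to the standard normal prior is computed in closed form per dimension.

```python
import torch
import torch.nn.functional as F

def vae_objective(x, encoder, decoder):
    # Upper bound (4): reconstruction term (5) + KL(q_nu(z|x) || N(0, I)),
    # with q_nu(z|x) = prod_j N(z_j; m_j(x), s_j(x)^2) as in (3).
    m, log_s2 = encoder(x)                                          # each (batch, d)
    z = m + (0.5 * log_s2).exp() * torch.randn_like(m)              # reparameterized sample
    recon = F.binary_cross_entropy_with_logits(
        decoder(z), x, reduction='none').flatten(1).sum(-1)         # -log p_theta(x|z)
    kl = 0.5 * (m.pow(2) + log_s2.exp() - log_s2 - 1.0).sum(-1)     # closed-form KL per dim
    return (recon + kl).mean()
```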
For ease of exposition, we consider a deterministic decoder/encoder pair, x = dec\u03b8(z) and z = enc\u03bd(x) with parameters \u03b8 and \u03bd, constrained to be the inverses of each other, enc\u03bd(\u00b7) = dec\u22121 \u03b8 (\u00b7). In the semi-parametric ICA, we seek to solve the MLE problem: min p(z),\u03b8 KL (pd(x)||p\u03b8(x)) s.t. p(z) = d Y j=1 p(zj), (6) where p\u03b8(x) is the density derived from x = dec\u03b8(z), with z \u223cp(z). The latent prior p(z) is now a part of our model to be learned, instead of being \ufb01xed as in VAE. We let p(z) be of free form (semi-parametric) but fully factorized, the key to the latent disentanglement. Directly optimizing (6) is intractable, and we solve it in the z space. Using the fact that KL divergence is invariant to invertible transformations2, we have: KL(pd(x)||p\u03b8(x)) = KL(q\u03bd(z)||p(z)), (7) where q\u03bd(z) is the density of z = enc\u03bd(x) with x \u223cpd(x). Our original problem (6) then becomes: min p(z),\u03bd KLz := KL q\u03bd(z) \r \r \r \r d Y j=1 p(zj) ! (8) 2See Supplement for the proof. \fIn case when the encoder/decoder pair becomes stochastic (2) and (3), three modi\ufb01cations are needed: i) stochastic inverse3, ii) the invariance of KL (7) turns into an approximation, and iii) q\u03bd(z) is de\ufb01ned as: q\u03bd(z) = Epd(x) h q\u03bd(z|x) i = 1 N N X n=1 q\u03bd(z|xn), (9) a well known quantity in the recent disentanglement literature, dubbed aggregate posterior. Further imposing the independence constraint for the nuisance variables, our optimization problem becomes: min p(z),\u03bd KLz s.t. q\u03bd(zj|x) = q\u03bd(zj) \u2200x, j \u2208N, (10) where q\u03bd(zj|x) and q\u03bd(zj) are marginals from q\u03bd(z|x) and q\u03bd(z), respectively. We will often omit the subscript \u03bd in notation. It is not dif\ufb01cult to see that the objective KLz in (8) and (10) can be decomposed as follows (see Supplement): KLz = TC+ X j\u2208R KL(q(zj)||p(zj))+ X j\u2208N KL(q(zj)||p(zj)) (11) where TC is the total correlation, a measure of the degree of factorization of q(z): TC := KL q(z) \r \r \r \r d Y j=1 q(zj) ! . (12) With the freedom to choose p(z) and \u03bd (of q\u03bd(z|x)) to minimize KLz within the constraint (10), we tackle the last two terms in (11) individually. 3rd Term. For nuisance zj, to satisfy the constraint (10), we have q(zj|x) = N(zj; mj, s2 j) for some \ufb01xed mj and sj. Then q(zj) := R q(zj|x)pd(x)dx = N(zj; mj, s2 j), allowing one to choose a Gaussian prior p(zj) = N(zj; 0, 1), leading to mj = 0, sj = 1, vanishing the KL. 2nd Term. For zj a relevant factor variable, zj and x should not be independent, thus q(zj) is a Gaussian mixture with heterogeneous components q(zj|x). The VAE\u2019s Gaussian prior p(zj) = N(zj; 0, 1) implies that the divergence can never be made to vanish in general. To remedy this, one either i) chooses p(zj) different from N(0, 1) (potentially, non-Gaussian), or ii) retains a Gaussian prior but lets the mean and variance of p(zj) be \ufb02exibly chosen, perhaps differently over j \u2208R, to maximally diminish this KL divergence. The former approach may raise a nontrivial question of which prior to choose4. Instead, we propose a solution that builds a hierarchical Bayesian prior of p(zj) and infers the 3Such that \u03b8 and \u03bd minimize the reconstruction loss Rec(\u03b8, \u03bd) (5). 4One may employ a \ufb02exible model for p(zj), e.g., a \ufb01nite mixture approximation or the VampPrior [26]. 
However, this may lead to over\ufb01tting; see our empirical study in subsection 5.1. Figure 1. Graphical model representation for BF-VAE-1 and BFVAE-2: (Left) plate, (Right) unrolled version. The hyperparameter \u03c9 is either {aj} (BF-VAE-1) or {rj} (BF-VAE-2). posterior (subsection 3.2 and 3.3). In this strategy, we regard the variances of Gaussian p(zj) as parameters to be learned, and minimize KL(q(zj)||p(zj)) wrt the VAE parameters as well as the prior variances (subsection 3.1). Learning Objective. Based on the above analysis, the overall learning goal can be de\ufb01ned as: min \u03b8,\u03bd,p(z) Rec(\u03b8, \u03bd) + Epd(x)[KL(q(zj|x)||p(zj))] + \u03b3TC s.t. p(zj) = N(zj; 0, 1) for j \u2208N, (13) where we include Rec(\u03b8, \u03bd) of (5) to impose the stochastic inverse, and replace the dif\ufb01cult-to-evaluate KL(q(zj)||p(zj)) by the expected KL, an upper bound5 admitting a closed form. The TC term will be estimated through its density ratio proxy, using an adversarial discriminator similarly as [15], where its impact is controlled by \u03b3. Our learning objective in (13) is similar to those of recent disentanglement algorithms (see section 4) in that the VAE loss is augmented with the additional loss of independence of latent variables, such as the TC term. However, a key distinction is our separate treatment of relevant and nuisance variables, with the additional aim to learn a non-Gaussian relevant variable prior p(zj). The optimization (13) assumes a known relevance partition R and N. In the next section we will deal with how to learn this partition automatically from data, either implicitly (subsection 3.1 and 3.2) or explicitly (subsection 3.3) via hierarchical Bayesian treatment. 3. Bayes-Factor-VAE (BF-VAE) The key insight from section 2 is that, for relevant factors, it is necessary to have p(zj) different from N(0, 1). In this section we propose three different prior models to accomplish this goal in a principled Bayesian manners. 3.1. Adjustable Gaussian Prior (BF-VAE-0) We \ufb01rst de\ufb01ne a base model, also needed for subsequent more complex variations, which relaxes the \ufb01xed, identical variance assumption for priors p(zj): p(z|\u03b1) = d Y j=1 p(zj|\u03b1j) = d Y j=1 N(zj; 0, \u03b1\u22121 j ), (14) 5See Supplement for the proof. \fwhere \u03b1 > 0 are the precision parameters to be learned from data6. We expect the learned \u03b1j to be close to (apart from) 1 for nuisance (relevant, resp.) j. To explicitly express our preference of encouraging many dims j to be nuisance, and avoid redundancy in the learned relevant variables, we add the regularizer, (\u03b1\u22121 j \u22121)2, which leads to: min \u03b8,\u03bd,\u03b1 d X j=1 Epd(x) h KL(q(zj|x)||N(zj; 0, \u03b1\u22121 j )) i + Rec(\u03b8, \u03bd) + \u03b3TC + \u03b7 d X j=1 (\u03b1\u22121 j \u22121)2. (15) We denote this model by BF-VAE-0. The expected KL in (15) admits a closed form, resulting in added \ufb02exibility without extra computation, compared to e.g., [15]. Another bene\ufb01t is the trade-off parameter \u03b7 acts as a proxy to control the cardinality of relevant factors; small \u03b7 encourages more relevant factors than large \u03b7. 3.2. 
Hierarchical Bayesian Prior (BF-VAE-1) To extend BF-VAE-0 to a Bayesian hierarchical setting, in conjunction with (14), we adopt a conjugate prior on \u03b1, p(\u03b1) = d Y j=1 p(\u03b1j) = d Y j=1 G(\u03b1j; aj, bj), (16) where G(y; a, b) \u221dya\u22121e\u2212by is the Gamma distribution with parameters a (shape) and b (inverse scale) with a, b > 0. We further set bj = aj \u22121, aj > 1, to express our preference for Mode[p(\u03b1j)] = 17. We let {aj}d j=1 be the model parameters that can be learned from data. This model, named BF-VAE-1, has a graphical model representation shown in Fig. 1. A key aspect of this model is that by marginalizing out \u03b1, the prior p(z) becomes an in\ufb01nite Gaussian mixture, p(z) = R p(\u03b1)N(z; 0, \u03b1\u22121)d\u03b1, a desideratum for relevant factors. Because Var[p(\u03b1j)] \u2248(aj \u22121)\u22121, large aj will lead to limaj\u2192\u221ep(zj|aj) = N(z; 0, 1), a nuisance factor. We describe the variational inference for the model where we introduce variational densities q(\u03b1) and q(z|x) to approximate the true posteriors as follows: p(\u03b1, {zn}N n=1|{xn}N n=1) \u2248 q(\u03b1) z }| { d Y j=1 G(\u03b1j; \u02c6 aj,\u02c6 bj) N Y n=1 q(zn|xn). (17) 6Note that we \ufb01x the mean as 0, and only learn the (inverse) variances \u03b1j. Although we can easily parametrize the mean as well, the form of (14) is equally \ufb02exible in terms of minimizing the KL, as shown in Supplement. 7 This preference also improved empirical performance. This allows the average negative marginal data loglikelihood, \u22121 N log p({xn}), to be upper-bounded by8: U1 := Rec(\u03b8, \u03bd) + 1 N KL(q(\u03b1)||p(\u03b1)) + Eq(\u03b1)Epd(x) \u0002 KL(q(z|x)||p(z|\u03b1)) \u0003 . (18) Rec(\u03b8, \u03bd) in (18) is identical to that of VAE, while the other two admit closed forms; see Supplement for the details. The TC term becomes an average over q(\u03b1): TC1 := Eq(\u03b1) \" KL(q(z|\u03b1)|| d Y j=1 q(zj|\u03b1)) # , (19) which turns out to be equal to TC in (12), since q(z|\u03b1) := R q(z|\u03b1, x)pd(x)dx = R q(z|x)pd(x)dx = q(z). The \ufb01nal optimization is then minimizing (U1 + \u03b3TC1) wrt (\u03b8, \u03bd) and {aj, bj, \u02c6 aj,\u02c6 bj}d j=1 with the constraint bj = aj \u22121. BF-VAE-1 can capture the uncertainty in the precision parameters \u03b1 with no computational overhead as all of the objective terms admit closed forms. Having learned the model from data D = {xn}N n=1, the data corrected prior, p(zj) := R p(zj|\u03b1j)p(\u03b1j|D)d\u03b1j, is approximated as: p(zj) \u2248 Z p(zj|\u03b1j)q(\u03b1j)d\u03b1j = t2\u02c6 aj \u0012 zj; 0, \u02c6 bj \u02c6 aj \u0013 , (20) where tf(0, v) is the generalized Student\u2019s t distribution with dof f and shape v. p(zj) informs us about the relevance of zj: Large dof implies nuisance (as the t becomes close to Gaussian), while small suggests a relevant variable. 3.3. Prior with Relevance Indicators (BF-VAE-2) BF-VAE-1 allows only implicit control over the cardinality of relevant dims, assuming no explicit differentiation between relevant factors and nuisances. In this section we propose another model that can address these issues. The key idea9 is to introduce relevance indicator variables r \u2208[0, 1]d (high rj indicating relevance of zj). We let r determine the shape of the hyper prior p(\u03b1): If rj \u22481 (relevant), we make p(\u03b1j) uninformative, thus zj far from N(0, 1). In contrast, if rj \u22480 (nuisance), p(\u03b1j) should strongly peak at \u03b1j = 1, with p(zj) close to N(0, 1). 
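The hierarchical terms above stay fully closed form. The sketch below (an illustrative implementation, with assumed per-dimension tensors for the encoder statistics and the Gamma variational parameters) shows the expected KL appearing inside the bound U1 of (18), using E[alpha] = a/b and E[log alpha] = digamma(a) - log b, together with the degrees-of-freedom readout of the data-corrected Student-t prior in (20) that serves as a relevance indicator for BF-VAE-1.

```python
import torch

def expected_kl_gauss_gamma(m, log_s2, a_hat, b_hat):
    # E_{q(alpha_j) = Gamma(a_hat_j, b_hat_j)} [ KL( N(m_j, s_j^2) || N(0, 1/alpha_j) ) ],
    # the per-dimension term of U1 in (18); everything is in closed form.
    e_alpha = a_hat / b_hat
    e_log_alpha = torch.digamma(a_hat) - torch.log(b_hat)
    s2 = log_s2.exp()
    return 0.5 * (e_alpha * (s2 + m.pow(2)) - e_log_alpha - log_s2 - 1.0)

def corrected_prior_dof(a_hat):
    # Degrees of freedom (2 * a_hat_j) of the corrected Student-t prior in (20):
    # a small value flags a heavy-tailed, relevant dimension; a large value a nuisance.
    return 2.0 * a_hat
```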
The following reparametrization of (16) enables this control: p(\u03b1|r) = d Y j=1 G \u0012 \u03b1j; 1 + 2\u03f5 rj + \u03f5 , 1 + 2\u03f5 rj + \u03f5 \u22121 \u0013 , (21) 8See Supplement for the derivations. 9It is related to the well-known (Bayesian) variable selection problem [24], but clearly different in that the latter is typically framed within the standard regression setup where the variables (covariates) of interest are observed in the data. In our case, we aim to select the most relevant latent variables zj\u2019s that explain the major variation in the observed data. \fwhere \u03f5 is a small positive number (e.g., 0.001). The indicator r naturally de\ufb01nes the relevant index set R = {j : rj \u22481}), allowing us to decompose q(z) over R and N as q(zR) \u00b7 Q j\u2208N q(zj)10, making TC into: KL q(zR)|| Y j\u2208R q(zj) ! \u2248Eq(zR) \u0014 log D(zR) 1 \u2212D(zR) \u0015 , (22) focused only on relevant variables. Note that we suggest using the discriminator density ratio proxy, rhs of (22), to evaluate TC, with D(\u00b7) optimized to discern samples from q(zR) from those of Q j\u2208R q(zj). To turn (22) into a continuous space optimization problem, we rewrite D(zR) as D(r \u25e6z), where \u25e6is the elementwise (Hadamard) product, and introduce two additional regularizers to control the cardinality of R through ||r||1 and the preference toward discrete values using the entropic prior H(r) = \u2212Pd j=1 \u0000rj log rj + (1 \u2212rj) log(1 \u2212rj) \u0001 . This leads to the \ufb01nal objective: U1 + \u03b3Eq(z) \u0014 log D(r \u25e6z) 1 \u2212D(r \u25e6z) \u0015 + \u03b7S||r||1 + \u03b7HH(r), (23) which is minimized over (\u03b8, \u03bd), r, and {\u02c6 aj,\u02c6 bj}d j=1, together with alternating gradient updates for D(\u00b7). In this model, named BF-VAE-2, the trade-off parameters \u03b7S and \u03b7H control the cardinality of relevant factors11 large \u03b7 encourages few strong factors; for \u03b7 small, many weak factors could be learned. The learned relevance vector r can serve as an indicator discerning relevant factors from nuisances. 4. Related Work Most recent approaches to unsupervised disentanglement consider the learning objectives combining the VAE\u2019s loss in (4) with regularization terms that encourage prior latent factor independence. In \u03b2-VAE [13], the expected KL term of the VAE\u2019s objective is overemphasized, which can be seen as a proxy for the prior matching, i.e., minimizing KL(q(z)||p(z)). In AAE [21], they aim to directly minimize the latter term via adversarial learning. As illustrated in our analysis in section 2, the full independence of q(z) imposed in the TC, is important in the factor disentanglement, where the TC was estimated by the discriminator density ratio in Factor-VAE [15], whereas TC-VAE [6] employed a weighted sampling strategy. Another alternative is the adversarial learning to minimize the Jensen-Shannon divergence in [5], instead of KL in the TC. Quite closely related to the TC are: DIP-VAE [18] that penalized the deviation of the variance of q(z) from identity, and InfoGAN [7] that aimed to minimize the reconstruction error in the z-space in addition to the reconstruction error in the x-space. 10See Supplement for the derivations. 11We empirically demonstrate this in subsection 5.2 and Supplement. 
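For concreteness, the relevance-masked TC proxy of (22) used inside objective (23) can be estimated with a discriminator exactly as sketched below (a hedged illustration with an assumed `discriminator` network returning a pre-sigmoid logit; the ||r||_1 and entropy regularizers of (23) would simply be added on top). For a sigmoid discriminator D, log D/(1-D) equals the logit, so the proxy is the mean logit on joint samples.

```python
import torch
import torch.nn.functional as F

def permute_dims(z):
    # Samples from prod_j q(z_j) by independently permuting each latent dimension across the batch.
    B, d = z.shape
    return torch.stack([z[torch.randperm(B, device=z.device), j] for j in range(d)], dim=1)

def tc_proxy(z, r, discriminator):
    # Density-ratio proxy of (22) on the relevance-masked code r * z (Hadamard product).
    return discriminator(r * z).mean()

def discriminator_loss(z, r, discriminator):
    # D separates joint samples (label 1) from dimension-wise permuted ones (label 0),
    # with gradients blocked to the encoder via detach().
    joint = discriminator(r * z.detach())
    perm = discriminator(r * permute_dims(z.detach()))
    return F.binary_cross_entropy_with_logits(joint, torch.ones_like(joint)) + \
           F.binary_cross_entropy_with_logits(perm, torch.zeros_like(perm))
```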
Recent deep representational learning attempts to extend the VAE by either adopting non-Gaussian prior models or partitioning latent variables into groups that are treated differently, both seemingly similar to our approach. In [9], a hybrid model that jointly represents discrete and continuous latents was introduced. In [22], under the partially labeled data setup, they separately treated the factors associated with the labels from those that are not, leading to a conditional factor model. The Gaussian prior assumption in VAE has been relaxed to allow more \ufb02exibility and/or better \ufb01t in speci\ufb01c scenarios. In VampPrior [26], they came up with a reasonable encoder-based \ufb01nite mixture model that approximates the in\ufb01nite mixture model. In [8] the von Mises-Fisher density was adopted to account for a hyper-spherical latent structure. The recent CHyVAE [1] employed the inverseWishart prior (generalization of Gamma), however, it mainly dealt with situations where latents can be correlated with one another apriori, via full prior covariance. The Hierarchical Factor VAE [11] instead focused on independence of groups of latent variables (group disentanglement). Although these recent works are closely related to ours, they either focused on different disentanglement goals, or extended the priors for inreased model capacity. 5. Evaluation We evaluate our approaches12 on several benchmark datasets, where we assess the goodness of disentanglement both quantitatively and qualitatively. The former applies only to fully factor-labeled datasets, and we consider a comprehensive suite of disentanglement metrics in subsection 5.1. Qualitative assessment is accomplished through visualizations of data synthesis via latent space traversal. We also verify in subsection 5.2 that the visually relevant/important aspects accurately correspond to those determined by the indicators we hypothesized in each of our three models. 1) Datasets. We test all methods on the following datasets: 3D-Face [25], Sprites [23] and its recent extension (C-Spr) [20] that \ufb01lls the sprites with some random color (regarded as noise), Teapots [10], and Celeb-A [19]. Also, we consider the subset of Sprites containing only the oval shape13, denoted by O-Spr. The details of the datasets are described in the Supplement. All datasets provide ground-truth factor labels except for Celeb-A. For all datasets, the image sizes are normalized to 64 \u00d7 64, and the pixel intensity/color values are scaled to [0, 1]. We use cross entropy as the reconstruction loss. 2) Competing Approaches. We contrast our models 12Our code is publicly available in https://seqam-lab.github.io/BFVAE/ 13Since the shape factor is in nature a discrete variable, the underlying models that assume continuous latent variables would be suboptimal. Instead of explicitly modeling a combination of discrete/continuous latent variables as in the recent hybrid model [9], we eliminate this discrete factor by considering only the oval-shape images only. \f510 20 50 100 K 60 70 80 90 100 MoG BF-VAE-2 510 20 50 100 K 70 80 90 Figure 2. Disentanglement performance (Metric II) of F-VAE with MoG prior (Blue/Dashed) with different mixture orders (K) vs. BFVAE-2 (Red/Solid) on O-Spr (Left) and Sprites (Right). with VAE [17], \u03b2-VAE [13], and F-VAE (Factor-VAE) [15]. We also compare our BF-VAE models with the recent RFVAE [16] that also considers differential treatment of relevant and nuisance latents. 3) Model Architectures and Optimization. 
We adopt the model architectures and optimization parameters similar to those in [15]. See Supplement for the details. 5.1. Quantitative Results We consider three disentanglement metrics14: i) Metric I [15] collects data samples with one ground-truth factor \ufb01xed with the rest randomly varied, encodes them as z, \ufb01nds the index of the latent with the smallest variance, and measures the accuracy of classi\ufb01cation from that index to the ID of the \ufb01xed factor (the higher the better), ii) Metric II [16] modi\ufb01es Metric I by collecting samples of one factor varied with others \ufb01xed, and seeks the index of the largest latent variance. iii) Metric III [10] is based on regression from the latent vector to individual ground-truth factors, measuring three scores of prediction quality: Disentanglement for degree of dedication to each target, Completeness for degree of exclusive contribution by each covariate, and Informativeness for prediction error. Hence, higher scores are better for D and C, lower for I. Table 1 summarizes all results, datasets and metrics. For all models across all datasets we use the latent dimension d = 10. Our models clearly outperform competing methods across all metrics in most instances. They are followed by RF-VAE, which also employs a notion of relevance, but not explicit non-Gaussianity. Comparison w/ High Capacity Priors. Our analysis in section 2 states that a relevant dimension prior p(zj) needs to be non-Gaussian, \ufb02exible enough to match the aggregate posterior q(zj). Here, we consider an alternative prior with those properties. Speci\ufb01cally, we use a F-VAE model with a Gaussian mixture prior p(z) = PK k=1 \u03c0kN(z; \u00b5k; \u03a3k), with {(\u03c0k, \u00b5k, \u03a3k)}k the model parameters to be optimized in conjunction with the F-VAE\u2019s parameters. We contrast that model to our BF-VAE-2. The disentanglement performances (Metric II scores) on O-Spr and Sprites are summarized in Figure 2, where we change the number of mixture components K to control the degree of \ufb02exibility 14More details can be found in the Supplement. Figure 3. Latent space traversal in BF-VAE-2 on Celeb-A. We train two BF-VAE-2 models with two different \u03b7 values (\u03b7 = \u03b7S = \u03b7H large and small). (Left panel: strong factors) contains latent traversal results with four latent variables (two subjects for each) that are detected (according to high rj) by both \u03b7 small and large models. They correspond to (from top to bottom): gender, frontal hair, azimuth, and brightness, which are considered as strong/major factors. (Right panel: weak factors) shows traversal with four other latent variables that are detected (according to high rj) only by the small \u03b7 model. They correspond to: smiling, sunglasses, elevation, and baldness, which are considered as weak/minor factors. See Supplement for the enlarged images and further details. of the F-VAE mixture prior. Results show the high capacity mixture consistently underperforms our Bayesian model; as K increases, it suffers from clear over\ufb01tting. This suggests the uncontrolled complex prior can be detrimental, in contrast to our controlled treatment of relevances. 5.2. Qualitative Results In this section we investigate qualitative performance of our BF-VAE approaches. 
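As an illustration of the quantitative protocol above, the sketch below outlines Metric I; it is a simplified rendering, not the evaluation code of the cited works. The `sample_fixed_factor` and `encode` functions, the number of votes, and the precomputed per-dimension latent standard deviation `global_std` are all assumed interfaces.

```python
import numpy as np

def metric_I(sample_fixed_factor, encode, global_std, num_factors,
             n_votes=800, n_per_vote=100):
    # Fix one ground-truth factor, vary the rest, encode the batch, and let the latent index
    # with the smallest (globally normalized) variance vote for that factor; the score is the
    # accuracy of the resulting majority-vote classifier (higher is better).
    d = global_std.shape[0]
    table = np.zeros((d, num_factors))
    for _ in range(n_votes):
        k = np.random.randint(num_factors)                      # factor to hold fixed
        z = encode(sample_fixed_factor(k, n_per_vote)) / (global_std + 1e-8)
        table[np.argmin(z.var(axis=0)), k] += 1                 # least-variance latent votes for k
    return table.max(axis=1).sum() / table.sum()
```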
We focus on: i) Latent space traversal: We depict images synthesized by traversing a single latent variable at a time while \ufb01xing the rest, and ii) Accuracy of variable relevance indicator: As discussed in section 3, our models have implicit/explicit indicators that point to relevant and nuisance variables. Speci\ufb01cally, i) BF-VAE-0 (learned \u03b1j): j relevant if \u03b1j is away from 1, while j is nuisance if \u03b1j \u22481, ii) BF-VAE-1 (DOF of the corrected prior p(zj), equal to 2\u02c6 aj): j is relevant if \u02c6 aj is small (distant from Gaussian), and vice versa, iii) BF-VAE-2 (learned relevance indicator variable rj): j is relevant if rj is large, and vice versa. Due to the lack of space, we report selected results in this section, with more extensive results in Supplement. Results are shown for 3D-Face (Figure 4), O-Spr (Figure 5), and Teapots (Figure 6). The latent space traversal demonstrates that variation of each latent variable while the others are held \ufb01xed, visually leads to change in one of the groundtruth factors exclusively (except for the Teapots). Also, these visually identi\ufb01ed factors indeed correspond to those variables indicated as relevant by our models. See the \ufb01gure captions for details. Control of Cardinality of Relevant Factors. One of the distinguishing bene\ufb01ts of our BF-VAE-2 (and also BF-VAE0) is that the trade-off parameter(s) \u03b7 can control the number \fTable 1. Disentanglement metrics for benchmark datasets. For Metric III, the three \ufb01gures in each cell indicate Disentanglement / Completeness / Informativeness (top row based on the LASSO regressor, the bottom on the Random Forest. Note that the higher the better for D and C, while the lower the better for I. The best scores for each metric (within the margin of signi\ufb01cance) among the competing models are shown in red and second-best in blue. 
Datasets/Metrics VAE \u03b2-VAE F-VAE RF-VAE BF-VAE-0 BF-VAE-1 BF-VAE-2 3D-Face I 100.0 \u00b1 0.0 100.0 \u00b1 0.0 100.0 \u00b1 0.0 99.9 \u00b1 0.1 100.0 \u00b1 0.0 100.0 \u00b1 0.0 100.0 \u00b1 0.0 II 93.4 \u00b1 0.7 95.5 \u00b1 0.6 92.8 \u00b1 1.1 95.2 \u00b1 0.5 95.6 \u00b1 0.5 97.2 \u00b1 0.5 97.5 \u00b1 0.5 III .96 / .81 / .37 .96 / .78 / .40 1.0 / .82 / .36 1.0 / 1.0 / .48 1.0 / 1.0 / .45 1.0 / 1.0 / .45 1.0 / 1.0 / .44 .99 / .84 / .26 .98 / .86 / .31 .96 / .83 / .25 1.0 / .93 / .37 1.0 / .90 / .33 1.0 / .90 / .34 1.0 / .88 / .41 Sprites I 80.2 \u00b1 0.3 80.8 \u00b1 0.8 81.9 \u00b1 1.0 85.4 \u00b1 1.2 87.9 \u00b1 0.9 93.8 \u00b1 0.6 85.5 \u00b1 0.8 II 58.2 \u00b1 1.4 76.8 \u00b1 0.9 77.6 \u00b1 1.4 79.1 \u00b1 1.3 82.7 \u00b1 1.1 82.2 \u00b1 0.6 85.9 \u00b1 1.2 III .59 / .68 / .52 .67 / .69 / .53 .84 / .84 / .53 .85 / .87 / .53 .89 / 1.0 / .60 .92 / .90 / .54 .88 / 1.0 / .58 .57 / .69 / .46 .72 / .84 / .40 .73 / .82 / .41 .73 / .83 / .41 .75 / .83 / .44 .75 / .83 / .34 .75 / .86 / .48 C-Spr I 79.8 \u00b1 0.6 81.2 \u00b1 0.4 85.6 \u00b1 0.8 80.7 \u00b1 0.9 87.7 \u00b1 0.5 93.2 \u00b1 0.6 94.7 \u00b1 0.8 II 61.2 \u00b1 1.5 74.3 \u00b1 1.7 76.2 \u00b1 0.8 81.4 \u00b1 1.1 83.0 \u00b1 1.4 84.2 \u00b1 1.1 83.5 \u00b1 0.7 III .52 / .55 / .54 .77 / .82 / .53 .79 / .76 / .52 .87 / .91 / .54 1.0 / .95 / .56 .95 / .95 / .58 .86 / .91/ .56 .58 / .62 / .51 .73 / .83 / .39 .75 / .83 / .42 .64 / .72 / .30 .88 / .83 / .47 .79 / .88 / .42 .84 / .85 / .45 O-Spr I 97.2 \u00b1 0.4 75.3 \u00b1 0.6 100.0 \u00b1 0.0 100.0 \u00b1 0.0 100.0 \u00b1 0.0 100.0 \u00b1 0.0 100.0 \u00b1 0.0 II 53.2 \u00b1 1.5 70.2 \u00b1 1.2 80.6 \u00b1 1.1 95.4 \u00b1 0.5 97.8 \u00b1 0.7 99.8 \u00b1 0.2 97.1 \u00b1 0.8 III .42 / .43 / .54 .58 / .49 / .49 1.0 / .88 / .33 1.0 / .99 / .49 1.0 / 1.0 / .42 1.0 / .97 / .40 1.0/ .93 / .42 .32 / .55 / .46 .56 / .58 / .36 .81 / .84 / .24 .93 / .87 / .22 .99 / .93 / .22 .99 / .92 / .21 .98 / .91 / .23 Teapots I 90.1 \u00b1 0.9 56.9 \u00b1 1.1 91.9 \u00b1 0.8 98.7 \u00b1 0.4 94.8 \u00b1 1.2 97.6 \u00b1 0.3 97.9 \u00b1 0.4 II 77.7 \u00b1 1.3 47.3 \u00b1 0.9 74.6 \u00b1 1.8 83.1 \u00b1 1.2 90.4 \u00b1 1.0 82.7 \u00b1 1.3 88.9 \u00b1 0.8 III .60 / .53 / .40 .31 / .27 / .72 .63 / .61 / .46 .63 / .56 / .37 .72 / .61 / .34 .70 / .65 / .48 .67 / .62 / .41 .81 / .72 / .31 .45 / .61 / .52 .75 / .78 / .29 .90 / .79 / .27 .89 / .80 / .25 .78 / .80 / .50 .87 / .80 / .32 Figure 4. Latent space traversal in our three BF-VAE models on the 3D-Face dataset. (Left) BF-VAE-0 with the learned prior variances \u03b1\u22121 at the bottom (the value 1.0 depicted as the red dotted line), (Middle) BF-VAE-1 with the DOF (2\u02c6 aj) of the corrected prior p(zj) at the bottom, and (Right) BF-VAE-2 with the learned relevance vector r at the bottom. (Left: BF-VAE-0) The four visually evident dimensions of variability (z4, z5, z8, z9) are highlighted within colored boxes, where each exactly matches one of the four ground-truth factors (z4 = azimuth, z5 = lighting, z8 = elevation, and z9 = subject ID). The learned \u03b1j for all these four dims are away from 1. (Middle: BF-VAE-1) The four recovered, highlighted, dimensions match the ground-truth factors, and their p(zj)\u2019s also have relatively small DOFs, as expected. (Right: BF-VAE-2) Again the four factors are nearly correctly identi\ufb01ed, corresponding to the high values in the indicator variables rj\u2019s. of relevant factors to be detected by the model. We visually verify this on Celeb-A dataset. 
As shown in Figure 3 (detailed in the caption), adopting large \u03b7 leads only strong factors to be detected, while having small \u03b7 allows many weak factors identi\ufb01ed. 6." + }, + { + "url": "http://arxiv.org/abs/1902.08727v1", + "title": "Unsupervised Visual Domain Adaptation: A Deep Max-Margin Gaussian Process Approach", + "abstract": "In unsupervised domain adaptation, it is widely known that the target domain\nerror can be provably reduced by having a shared input representation that\nmakes the source and target domains indistinguishable from each other. Very\nrecently it has been studied that not just matching the marginal input\ndistributions, but the alignment of output (class) distributions is also\ncritical. The latter can be achieved by minimizing the maximum discrepancy of\npredictors (classifiers). In this paper, we adopt this principle, but propose a\nmore systematic and effective way to achieve hypothesis consistency via\nGaussian processes (GP). The GP allows us to define/induce a hypothesis space\nof the classifiers from the posterior distribution of the latent random\nfunctions, turning the learning into a simple large-margin posterior separation\nproblem, far easier to solve than previous approaches based on adversarial\nminimax optimization. We formulate a learning objective that effectively pushes\nthe posterior to minimize the maximum discrepancy. This is further shown to be\nequivalent to maximizing margins and minimizing uncertainty of the class\npredictions in the target domain, a well-established principle in classical\n(semi-)supervised learning. Empirical results demonstrate that our approach is\ncomparable or superior to the existing methods on several benchmark domain\nadaptation datasets.", + "authors": "Minyoung Kim, Pritish Sahu, Behnam Gholami, Vladimir Pavlovic", + "published": "2019-02-23", + "updated": "2019-02-23", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "stat.ML" + ], + "main_content": "Introduction The success of deep visual learning largely relies on the abundance of data annotated with ground-truth labels where the main assumption is that the training and test data follow from the same underlying distribution. However, in real-world problems this presumption rarely holds due to a number of artifacts, such as the different types of noise or sensors, changes in object view or context, resulting in degradation of performance during inference on test data. One way to address this problem would be to collect labeled data in the test domain and learn a test-speci\ufb01c classi\ufb01er while possibly leveraging the model estimated from the training data. Nevertheless, this would typically be a highly costly effort. Domain adaptation, a formalism to circumvent the aforementioned problem, is the task of adapting a model trained in one domain, called the source, to another target domain, where the source domain data is typically fully labeled but we only have access to images from the target domain with no (or very few) labels. Although there are several slightly different setups for the problem, in this paper we focus on the unsupervised domain adaptation (UDA) with classi\ufb01cation of instances as the ultimate objective. That is, given the fully labeled data from the source domain and unlabeled data from the target, the goal is to learn a classi\ufb01er that performs well on the target domain itself. One mainstream direction to tackle UDA is the shared space embedding process. 
The idea is to \ufb01nd a latent space shared by both domains such that the classi\ufb01er learned on it using the fully labeled data from the source will also perform well on the \u2217mikim21@gmail.com \u2020ps851@cs.rutgers.edu \u2021bb510@cs.rutgers.edu \u00a7vladimir@cs.rutgers.edu Pritish Sahu and Behnam Gholami contributed equally to this work. 1 arXiv:1902.08727v1 [cs.LG] 23 Feb 2019 \ftarget domain. This is accomplished, and supported in theory [3], by enforcing a requirement that the distributions of latent points in the two domains be indistinguishable from each other. A large family of UDA approaches including [19, 17, 1, 13, 34, 26, 16, 37, 15] leverage this idea (see Sec. 4 for more details). However, their performance remains unsatisfactory, in part because the methods inherently rely on matching of marginal, class-free, distributions while using the underlying assumption that the shift in the two distributions, termed covariate shift [53], can be reduced without using the target domain labels. To address this issue, an effective solution was proposed in [49], which aims to take into account the class-speci\ufb01c decision boundary. Its motivation follows the theorem in [2] relating the target domain error to the maximal disagreement between any two classi\ufb01ers, tighter than the former bound in [3]. It implies that a provably small target error is achievable by minimizing the maximum classi\ufb01er discrepancy (MCD). The approach in [49], the MCD Algorithm (MCDA for short), attempted to minimize MCD directly using adversarial learning similar to GAN training [18], i.e., through solving a minimax problem that \ufb01nds the pair of most discrepant classi\ufb01ers and reduces their disagreement. In this paper we further extend the MCD principle by proposing a more systematic and effective way to achieve consistency in the hypothesis space of classi\ufb01ers H through Gaussian process (GP) [45] endowed priors, with deep neural networks (DNNs) used to induce their mean and covariance functions. The crux of our approach is to regard the classi\ufb01ers as random functions and use their posterior distribution conditioned on the source samples, as the prior on H. The key consequence and advantages of this Bayesian treatment are: (1) One can effectively minimize the inconsistency in H over the target domain by regularizing the source-induced prior using a max-margin learning principle [59], a signi\ufb01cantly easier-to-solve task than the minimax optimization of [49] which may suffer from the dif\ufb01culty of attaining an equilibrium point coupled with the need for proper initialization. (2) We can quantify the measure of prediction uncertainty and use it to credibly gauge the quality of prediction at test time. Although GP models were previously known to suffer from the scalability issues [45], we utilize recent deep kernel techniques [25, 60] to turn the non-parametric Bayesian inference into a more tractable parametric one, leading to a learning algorithm computationally as scalable and ef\ufb01cient as conventional (non-Bayesian) deep models. Our extensive experimental results on several standard benchmarks demonstrate that the proposed approach achieves state-of-the-art prediction performance, outpacing recent UDA methods including MCDA [49]. 2. Problem Setup and Preliminaries We begin with the formal description of the UDA task for a multi-class classi\ufb01cation problem. 
Unsupervised domain adaptation: Consider the joint space of inputs and class labels, X \u00d7 Y where Y = {1, . . . , K} for (K-way) classi\ufb01cation. Suppose we have two domains on this joint space, source (S) and target (T), de\ufb01ned by unknown distributions pS(x, y) and pT (x, y), respectively. We are given source-domain training examples with labels DS = {(xS i , yS i )}NS i=1 and target data DT = {xT i }NT i=1 with no labels. We assume the shared set of class labels between the two domains. The goal is to assign the correct class labels {yT i } to target data points DT . To tackle the problem in the shared latent space framework, we seek to learn the embedding function G : X \u2192Z and a classi\ufb01er h : Z \u2192Y in the shared latent space Z. The embedding function G(\u00b7) and the classi\ufb01er h(\u00b7) are shared across both domains and will be applied to classify samples in the target domain using the composition y = h(z) = h(G(x)). Our goal is to \ufb01nd the pair (h, G) resulting in the lowest generalization error on the target domain, (h\u2217, G\u2217) = arg min h,G eT (h, G) = arg min h,G E(x,y)\u223cpT (x,y)[I(h(G(x)) \u0338= y)], (1) with I(\u00b7) the 1/0 indicator function. Optimizing eT directly is typically infeasible. Instead, one can exploit the upper bounds proposed in [2] and [3], which we restate, without loss of generality, for the case of \ufb01xed G. Theorem 1 [2, 3] Suppose that H is symmetric (i.e., h \u2208H implies \u2212h \u2208H). For any h \u2208H, the following holds1: eT (h) \u2264eS(h) + sup h,h\u2032\u2208H \f \fdS(h, h\u2032) \u2212dT (h, h\u2032) \f \f + e\u2217 (2) \u2264eS(h) + sup h\u2208H \f \fdS(h, +1) \u2212dT (h, +1)] \f \f + e\u2217 (3) Here eS(h) is the error rate of h(\u00b7) on the source domain, e\u2217:= minh\u2208H eS(h) + eT (h), and dS(h, h\u2032) := Ez\u223cS[I(h(z) \u0338= h\u2032(z))] denotes the discrepancy between two classi\ufb01ers h and h\u2032 on the source domain S, and similarly for dT (h, h\u2032). We use z \u223cS to denote the distribution of z in the latent space induced by G and pS(x, y). 1Note that the theorems assume binary classi\ufb01cation (y \u2208{+1, \u22121}), however, they can be straightforwardly extended to multi-class setups. \fLooser bound. With e\u2217the uncontrollable quantity, due to the lack of labels for T in the training data, the optimal h can be sought through minimization of the source error eS(h) and the worst-case discrepancy terms. In the looser bound (3), the supremum term is, up to a constant, equivalent to suph\u2208H Ez\u223cS[I(h(z) = +1)] + Ez\u223cT [I(h(z) = \u22121)], the maximal accuracy of a domain discriminator (labeling S as +1 and T as \u22121). Hence, to reduce the upper bound one needs to choose the embedding G where the source and the target inputs are indistinguishable from each other in Z. This input density matching was exploited in many previous approaches [57, 14, 7, 56], and typically accomplished through adversarial learning [18] or the maximum mean discrepancy [21]. Tighter bound. Recently, [49] exploited the tighter bound (2) under the assumption that H is restricted to classi\ufb01ers with small errors on S. Consequently, dS(h, h\u2032) becomes negligible as any two h, h\u2032 \u2208H agree on the source domain. The supremum in (2), interpreted as the Maximum Classi\ufb01er Discrepancy (MCD), reduces to: sup h,h\u2032\u2208H E(x,y)\u223cpT (x,y)[I(h(z) \u0338= h\u2032(z))]. 
(4) Named MCDA, [49] aims to minimize (4) directly via adversarial-cooperative learning of two deep classi\ufb01er networks h(z) and h\u2032(z). For the source domain data, these two classi\ufb01ers and G aim to minimize the classi\ufb01cation errors cooperatively. An adversarial game is played in the target domain: h and h\u2032 aim to be maximally discrepant, whereas G seeks to minimize the discrepancy2. 3. Our Approach Overview. We adopt the MCD principle, but propose a more systematic and effective way to achieve hypothesis consistency, instead of the dif\ufb01cult minimax optimization. Our idea is to adopt a Bayesian framework to induce the hypothesis space. Speci\ufb01cally, we build a Gaussian process classi\ufb01er model [45] on top of the share space. The GP posterior inferred from the source domain data naturally de\ufb01nes our hypothesis space H. We then optimize the embedding G and the kernel of the GP so that the posterior hypothesis distribution leads to consistent (least discrepant) class predictions most of the time, resulting in reduction of (4). The details are described in the below. 3.1. GP-endowed Maximum Separation Model We consider a multi-class Gaussian process classi\ufb01er de\ufb01ned on Z: there are K underlying latent functions f(\u00b7) := {fj(\u00b7)}K j=1, a priori independently GP distributed, namely P(f) = K Y j=1 P(fj), fj \u223cGP \u00000, kj(\u00b7, \u00b7) \u0001 , (5) where each kj is a covariance function of fj, de\ufb01ned on Z \u00d7 Z. For an input point z \u2208Z, we regard fj(z) as the model\u2019s con\ufb01dence toward class j, leading to the class prediction rule: class(z) = arg max 1\u2264j\u2264K fj(z). (6) We use the softmax likelihood model, P(y = j|f(z)) = efj(z) PK r=1 efr(z) , for j = 1, . . . , K. (7) Source-driven H Prior. The labeled source data, DS, induces a posterior distribution on the latent functions f, p(f|DS) \u221dp(f) \u00b7 NS Y i=1 P(yS i |f(zS i )), (8) where zS i = G(xS i ). The key idea is to use (8) to de\ufb01ne our hypothesis space H. The posterior places most of its probability mass on those f that attain high likelihood scores on S while being smooth due to the GP prior. It should be noted that we used the term prior of the hypothesis space H that is induced from the posterior of the latent functions f. We use the H prior and the posterior of f interchangeably. Note that due to the non-linear/non-Gaussian likelihood (7), exact posterior inference is intractable, and one has to resort to approximate inference. We will discuss an approach for ef\ufb01cient variational approximate inference in Sec. 3.2. For the exposition here, let us assume that the posterior distribution is accessible. 2See the Supplementary Material for further technical details. \fFigure 1: Illustration of ideal (pA) and problematic (pB) posteriors at some \ufb01xed point z in the target domain. For each posterior, we also depict two plausible samples (marked as crosses). In pA, most samples f(z), including the two shown, are consistent in deciding the class label (class 2, red, predicted in this case). On the other hand, in pB where f1(z) and f2(z) have considerable overlap, there is signi\ufb01cant chance of different predictions: class 2 for the \ufb01rst sample and class 1 for the second. Target-driven Maximally Consistent Posterior. While DS serves to induce the prior of H, DT will be used to reshape this prior. 
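Before turning to how the target data reshapes this prior, a minimal NumPy sketch of the multi-class GP classifier in (5)-(7) may help fix ideas. The RBF kernel, the toy shared-space points, and all names below are illustrative assumptions; in the paper the covariance is later parameterized through a deep feature map.

```python
import numpy as np

rng = np.random.default_rng(0)

def rbf_kernel(Z, lengthscale=1.0):
    # Gram matrix K[i, j] = exp(-||z_i - z_j||^2 / (2 * lengthscale^2))
    sq = ((Z[:, None, :] - Z[None, :, :]) ** 2).sum(-1)
    return np.exp(-0.5 * sq / lengthscale ** 2)

def softmax(f):
    # Softmax likelihood (7): P(y = j | f(z)) = exp(f_j) / sum_r exp(f_r)
    e = np.exp(f - f.max())
    return e / e.sum()

# A few toy shared-space points z = G(x) and K = 3 classes
Z = rng.normal(size=(5, 2))
K_classes = 3
Kmat = rbf_kernel(Z) + 1e-6 * np.eye(len(Z))   # small jitter for numerical stability

# Draw each latent function f_j ~ GP(0, k) independently, evaluated at Z  (eq. 5)
F = np.stack([rng.multivariate_normal(np.zeros(len(Z)), Kmat)
              for _ in range(K_classes)], axis=1)      # shape (num_points, K)

# Prediction rule (6) and softmax likelihood (7) at the first point
print(int(np.argmax(F[0])), softmax(F[0]))
```

A larger latent-function value for a class at a point z translates, through (6)-(7), into that class being predicted with higher probability.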
According to MCD, we want this hypothesis space to be shaped in the following way: for each target domain point z = G(x), x \u223cT, the latent function values f(z) sampled from the posterior (8) should lead to the class prediction (made by (6)) that is as consistent as possible across the samples. This is illustrated in Fig. 1. Consider two different H priors pA and pB at a point z, pA(f(z)) and pB(f(z)), where for brevity we drop the conditioning on DS in notation. The class cardinality is K = 3. For simplicity, we assume that the latent functions fj\u2019s are independent from each other. Fig. 1 shows that the distributions of fj\u2019s are well-separated from each other in pA, yet overlap signi\ufb01cantly in pB. Hence, there is a strong chance for the class predictions to be inconsistent in pB (identical ordering of colored samples below \ufb01gure), but consistent in pA. This means that the hypothesis space induced from pB contains highly discrepant classi\ufb01ers, whereas most classi\ufb01ers in the hypothesis space of pA agree with each other (least discrepant). In other words, the maximum discrepancy principle translates into the maximum posterior separation in our Bayesian GP framework. We describe how this goal can be properly formulated. First we consider the posterior of f to be approximated as an independent Gaussian3. For any target domain point z \u223cT and each j = 1, . . . , K let the mean and the variance of the H prior in (8) be: \u00b5j(z) := Z fj(z) p \u0000fj(z)|DS, z \u0001 d fj(z), (9) \u03c32 j (z) := Z (fj(z) \u2212\u00b5j(z))2 p \u0000fj(z)|DS, z \u0001 d fj(z). (10) The maximum-a-posterior (MAP) class prediction by the model is denoted by j\u2217= arg max1\u2264j\u2264K \u00b5j(z). As we seek to avoid \ufb02uctuations in class prediction j\u2217across samples, we consider the worst scenario where even an unlikely (e.g., at 5% chance level) sample from fj(z), j other than j\u2217, cannot overtake \u00b5j\u2217(z). That is, we seek \u00b5j\u2217(z) \u2212\u03b1\u03c3j\u2217(z) \u2265max j\u0338=j\u2217 \u0000\u00b5j(z) + \u03b1\u03c3j(z) \u0001 , (11) where \u03b1 is the normal cutting point for the least chance (e.g., \u03b1 = 1.96 if 2.5% one-side is considered). While this should hold for most samples, it will not hold for all. We therefore introduce an additional slack \u03be \u22650 to relax the desideratum. Furthermore, for ease of optimization4, we impose slightly stricter constraint than (11), leading to the \ufb01nal constraint: max 1\u2264j\u2264K \u00b5j(z) \u22651 + max j\u0338=j\u2217\u00b5j(z) + \u03b1 max 1\u2264j\u2264K \u03c3j(z) \u2212\u03be(z). (12) A constant, 1 here, was added to normalize the scale of fj\u2019s. Our objective now is to \ufb01nd such embedding G, GP kernel parameters k, and minimal slacks \u03be, to impose (12). Equiva3This choice conforms to the variational density family we choose in Sec. 3.2. 4We used the topk() function in PyTorch to compute the largest and the second largest elements. The function allows automatic gradients. \flently, we pose it as the following optimization problem, for each z \u223cT: min G,k \u0012 max j\u0338=j\u2217\u00b5j(z) \u2212max 1\u2264j\u2264K \u00b5j(z) + 1 + \u03b1 max 1\u2264j\u2264K \u03c3j(z) \u0013 + (13) with (a)+ = max(0, a). Note that (12) and (13) are reminiscent of the large-margin classi\ufb01er learning in traditional supervised learning [58]. 
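As a concrete illustration, the following is a minimal NumPy sketch of the per-point hinge penalty in (13); the posterior means, standard deviations, and the function name are illustrative assumptions rather than values produced by a trained model.

```python
import numpy as np

def posterior_separation_penalty(mu, sigma, alpha=2.0):
    """Per-point hinge penalty from (13): the top posterior mean should exceed
    the runner-up by a margin of 1 plus alpha times the largest posterior std."""
    order = np.argsort(mu)                   # ascending order of the means
    top, runner_up = mu[order[-1]], mu[order[-2]]
    gap = runner_up - top + 1.0 + alpha * sigma.max()
    return max(0.0, gap)                     # (a)_+ = max(0, a)

# Well-separated posterior (penalty vanishes) vs. overlapping posterior (positive penalty)
print(posterior_separation_penalty(np.array([2.5, 0.1, -1.0]), np.array([0.2, 0.3, 0.2])))
print(posterior_separation_penalty(np.array([0.6, 0.5, -1.0]), np.array([0.8, 0.9, 0.2])))
```

The first call mimics a well-separated posterior like pA in Figure 1, while the second mimics an overlapping posterior like pB, for which the optimization would push the penalty down.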
In contrast, we replace the ground-truth labels with the the most con\ufb01dently predicted labels by our model since the target domain is unlabeled. This aims to place class boundaries in low-density regions, in line with entropy minimization or max-margin con\ufb01dent prediction principle of classical semi-supervised learning [20, 67]. In what follows, we describe an approximate, scalable GP posterior inference, where we combine the variational inference optimization with the aforementioned posterior maximum separation criterion (13). 3.2. Variational Inference with Deep Kernels We describe our scalable variational inference approach to approximate the posterior (8). Although there are scalable GP approximation schemes based on the random feature expansion [44] and the pseudo/induced inputs [43, 51, 55, 12], here we adopt the deep kernel trick [25, 60] to exploit the deeply structured features. The main idea is to model an explicit \ufb01nitedimensional feature space mapping to de\ufb01ne a covariance function. Speci\ufb01cally, we consider a nonlinear feature mapping \u03c6 : Z \u2192Rd such that the covariance function is de\ufb01ned as an inner product in a feature space, namely k(z, z\u2032) := \u03c6(z)\u22a4\u03c6(z\u2032), where we model \u03c6(\u00b7) as a deep neural network. A critical advantage of explicit feature representation is that we turn the non-parametric GP into a parametric Bayesian model. As a consequence, all inference operations in the non-parametric GP reduce to computationally more ef\ufb01cient parametric ones, avoiding the need to store the Gram matrix of the entire training data set, as well as its inversion. Formally, we consider K latent functions modeled as fj(z) = w\u22a4 j \u03c6(z) with wj \u223cN(0, I) independently for j = 1, . . . , K. We let W = [w1, . . . , wK]\u22a4. Note that the feature function \u03c6(\u00b7) is shared across classes to reduce the number of parameters and avoid over\ufb01tting. The parameters of the deep model that represents \u03c6(\u00b7) serve as GP kernel parameters, since Cov(f(z), f(z\u2032)) = Cov(w\u22a4\u03c6(z), w\u22a4\u03c6(z\u2032)) = \u03c6(z)\u22a4\u03c6(z\u2032) = k(z, z\u2032). Consequently, the source-driven H prior (8) becomes p(W|DS) \u221d K Y j=1 N(wj; 0, I) \u00b7 NS Y i=1 P(yS i |W\u03c6(zS i )). (14) Since computing (14) is intractable, we introduce a variational density q(W) to approximate it. We assume a fully factorized Gaussian, q(W) = K Y j=1 N(wj; mj, Sj), (15) where mj \u2208Rd and Sj \u2208Rd\u00d7d constitute the variational parameters. We further let Sj\u2019s be diagonal matrices. To have q(W) \u2248p(W|DS), we use the following fact that the marginal log-likelihood can be lower bounded: log P \u0010 {yS i }NS i=1 \f \f \f {zS i }NS i=1, \u03c6(\u00b7) \u0011 \u2265ELBO, (16) where the evidence lower-bound (ELBO) is de\ufb01ned as: ELBO := NS X i=1 Eq(W) \u0002 log P(yS i |W\u03c6(zS i )) \u0003 \u2212 K X j=1 KL \u0000q(wj) || N(wj; 0, I) \u0001 , (17) with the likelihood stemming from (7). As the gap in (16) is the KL divergence between q(W) and the true posterior p(W|DS), increasing the ELBO wrt the variational parameters {(mj, Sj)} brings q(W) closer to the true posterior. Raising the ELBO wrt the GP kernel parameters (i.e., the parameters of \u03c6) and the embedding5 G can potentially improve the marginal likelihood (i.e., the left hand side in (16)). 
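To make the explicit-feature (deep kernel) construction concrete, here is a minimal NumPy sketch in which a tiny two-layer map stands in for φ(·); the weights, dimensions, and names are illustrative assumptions, not the paper's architecture.

```python
import numpy as np

rng = np.random.default_rng(0)

# A tiny stand-in for the deep feature map phi: Z -> R^d (weights are illustrative)
W1, b1 = rng.normal(size=(2, 16)), np.zeros(16)
W2, b2 = rng.normal(size=(16, 8)), np.zeros(8)

def phi(z):
    h = np.tanh(z @ W1 + b1)
    return h @ W2 + b2                      # feature vector phi(z) in R^8

def deep_kernel(z, z_prime):
    # k(z, z') = phi(z)^T phi(z'): the covariance defined through explicit features
    return phi(z) @ phi(z_prime)

# Parametric form of the latent functions: f_j(z) = w_j^T phi(z), with w_j ~ N(0, I)
K_classes, d = 3, 8
W = rng.normal(size=(K_classes, d))         # one weight vector per class
z = rng.normal(size=2)
f_z = W @ phi(z)                            # K latent function values at z
print(deep_kernel(z, z), f_z)
```

Because the covariance is defined through the finite-dimensional φ, inference over the K latent functions reduces to inference over the K weight vectors w_j, which is what makes the subsequent variational treatment parametric and scalable.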
In optimizing the ELBO (17), the KL term (denoted by KL) can be analytically derived as KL = 1 2 K X j=1 \u0000Tr(Sj) + ||mj||2 2 \u2212log det(Sj) \u2212d \u0001 . (18) 5Note that the inputs z also depend on G. \fHowever, there are two key challenges: the log-likelihood expectation over q(W) does not admit a closed form, and one has to deal with large NS. To address the former, we adopt Monte-Carlo estimation using M iid samples {W(m)}M m=1 from q(W), where the samples are expressed in terms of the variational parameters (i.e., the reparametrization trick [28]) to facilitate optimization. That is, for each j and m, w(m) j = mj + S1/2 j \u03f5(m) j , \u03f5(m) j \u223cN(0, I). (19) For the latter issue, we use stochastic optimization with a random mini-batch BS \u2282DS. That is, we optimize the sample estimate of the log-likelihood de\ufb01ned as: LL = 1 M M X m=1 NS |BS| X i\u2208BS log P(yS i |W(m)\u03c6(zS i )). (20) 3.3. Optimization Strategy Now we combine the maximum posterior separation criterion in (13) with the variational inference discussed in the previous section to arrive at the comprehensive optimization task. Our approximate posterior (15) leads to closed-form expressions for \u00b5j(z) and \u03c3j(z) in (9\u201310) as follows: \u00b5j(z) \u2248m\u22a4 j \u03c6(z), \u03c3j(z) \u2248 \u0000\u03c6(z)\u22a4Sj\u03c6(z) \u00011/2. (21) With q(W) \ufb01xed, we rewrite our posterior maximum separation loss in (13) as follows. We consider stochastic optimization with a random mini-batch BT \u2282DT = {zT i }NT i=1 sampled from the target domain data. MS := 1 |BT | X i\u2208BT \u0012 max j\u0338=j\u2217m\u22a4 j \u03c6(zT i ) \u2212max 1\u2264j\u2264K m\u22a4 j \u03c6(zT i ) + 1 + \u03b1 max 1\u2264j\u2264K \u0000\u03c6(zT i )\u22a4Sj\u03c6(zT i ) \u00011/2 \u0013 + (22) Combining all objectives thus far, our algorithm6 can be summarized as the following two optimizations alternating with each other: \u2022 min{mj,Sj} \u2212LL + KL (variational inference) \u2022 minG,k \u2212LL + KL + \u03bb \u00b7 MS (model selection) where \u03bb is the impact of the maximum separation loss (e.g., \u03bb = 10.0). 4. Related Work There has been extensive prior work on domain adaptation [9]. Recent approaches have focused on transferring deep neural network representations from a labeled source dataset to an unlabeled target domain by matching the distributions of features between different domains, aiming to extract domain-invariant features [46, 4, 8, 39, 48, 63, 61, 6, 36, 47]. To this end, it is critical to \ufb01rst de\ufb01ne a measure of distance (divergence) between source and target distributions. One popular measure is the non-parametric Maximum Mean Discrepancy (MMD) (adopted by [6, 62, 34]), which measures the distance between the sample means of the two domains in the reproducing Kernel Hilbert Space (RKHS) induced by a pre-speci\ufb01ed kernel. The deep Correlation Alignment (CORAL) method [54] attempted to match the sample mean and covariance of the source/target distributions, while it was further generalized to potentially in\ufb01nite-dimensional feature spaces in [65] to effectively align the RKHS covariance matrices (descriptors) across domains. The Deep Adaptation Network (DAN) [33] applied MMD to layers embedded in a RKHS to match higher order moments of the two distributions more effectively. 
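Referring back to (18)-(19) above, the following minimal NumPy sketch evaluates the closed-form KL term for a single class j (with diagonal S_j) and draws one reparameterized sample of w_j; the example values of m_j and the diagonal of S_j are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def kl_to_standard_normal(m, s_diag):
    """Closed-form KL in (18) for one class j: KL( N(m, diag(s)) || N(0, I) )."""
    d = m.shape[0]
    return 0.5 * (s_diag.sum() + (m ** 2).sum() - np.log(s_diag).sum() - d)

def reparam_sample(m, s_diag):
    """Reparameterized draw (19): w = m + S^{1/2} * eps, with eps ~ N(0, I)."""
    eps = rng.normal(size=m.shape)
    return m + np.sqrt(s_diag) * eps

m_j = np.array([0.5, -0.2, 0.1])          # illustrative variational mean
S_j = np.array([0.9, 1.2, 0.5])           # illustrative diagonal of S_j
print(kl_to_standard_normal(m_j, S_j))
print(reparam_sample(m_j, S_j))
```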
The Deep Transfer Network (DTN) [64] achieved alignment of source and target distributions using two types of network layers based on the MMD distance: the shared feature extraction layer that can learn a subspace that matches the marginal distributions of the source and the target samples, and the discrimination layer that can match the conditional distributions by classi\ufb01er transduction. Many recent UDA approaches leverage deep neural networks with the adversarial training strategy [46, 4, 8, 39, 48, 63], which allows the learning of feature representations to be simultaneously discriminative for the labeled source domain data and indistinguishable between source and target domains. For instance, [13] proposed a technique called the DomainAdversarial Training of Neural Networks (DANN), which allows the network to learn domain invariant representations in 6 In the algorithmic point of view, our algorithm can be viewed as a max-margin Gaussian process classi\ufb01er on the original input space X without explicitly considering a shared space Z. For further details about this connection, the reader is encouraged to refer to the Supplementary Material. \f(a) Digits. (b) Traf\ufb01c Signs. (c) VisDA. Figure 2: Example images from benchmark datasets. (a) Samples from MNIST, USPS, and SVHN datasets. (b) Samples from SYN SIGNS (\ufb01rst two rows), and GTSRB. an adversarial fashion by adding an auxiliary domain classi\ufb01er and back-propagating inverse gradients. The Adversarial Discriminative Domain Adaptation (ADDA) [56] \ufb01rst learns a discriminative feature subspace using the labeled source samples. Then, it encodes the target data to this subspace using an asymmetric transformation learned through a domainadversarial loss. The DupGAN [24] proposed a GAN-like model [18] with duplex discriminators to restrict the latent representation to be domain invariant while its category information being preserved. In parallel, within the shared-latent space framework, [31] proposed an unsupervised image-to-image translation (UNIT) framework based on the Coupled GANs [32]. Another interesting idea is the pixel-level domain adaptation method (PixelDA) [6] where they imposed alignment of distributions not in the feature space but directly in the raw pixel space via the adversarial approaches. The intention is to adapt the source samples as if they were drawn from the target domain, while maintaining the original content. Similarly, [40] utilized the CycleGAN [66] to constrain the features extracted by the encoder network to reconstruct the images in both domains. In [50], they proposed a joint adversarial discriminative approach that can transfer the information of the target distribution to the learned embedding using a generator-discriminator pair. 5. Experimental Results We compare the proposed method with state-of-the-art on standard benchmark datasets. Digit classi\ufb01cation task consists of three datasets, containing ten digit classes: MNIST [29], SVHN [41], USPS [56]. We also evaluated our method on the traf\ufb01c sign datasets, Synthetic Traf\ufb01c Signs (SYN SIGNS) [38] and the German Traf\ufb01c Signs Recognition Benchmark [52] (GTSRB), which contain 43 types of signs. Finally, we report performance on VisDA object classi\ufb01cation dataset [42] with more than 280K images across twelve categories ( the details of the datasets are available in the Supplementary Material). Fig. 2 illustrates image samples from different datasets and domains. 
We evaluate the performance of all methods with the classi\ufb01cation accuracy score. We used ADAM [27] for training; the learning rate was set to 0.0002 and momentum to 0.5 and 0.999. We used batches of size 32 from each domain, and the input images were mean-centered. The hyper-parameters are empirically set as \u03bb = 50.0, \u03b1 = 2.0. The sensitivity w.r.t. hyperparameters \u03bb and \u03b1 will be discussed in Sec. 5.3. We also used the same network structure as [49]. Speci\ufb01cally, we employed the CNN architecture used in [13] and [6] for digit and traf\ufb01c sign datasets and used ResNet101 [23] model pre-trained on Imagenet [10]. We added batch normalization to each layer in these models. Quantitative evaluation involves a comparison of the performance of our model to previous works and to Source Only that do not use any domain adaptation. For \u201dSource Only\u201d baseline, we train models on the unaltered source training data and evaluate on the target test data. The training details for comparing methods are available in our Supplementary material due to the space limit. 5.1. Results on Digit and Traf\ufb01c Signs datasets We show the accuracy of different methods in Tab. 1. It can be seen the proposed method outperformed competitors in all settings con\ufb01rming consistently better generalization of our model over target data. This is partially due to combining DNNs and GPs/Bayesian approach. GPs exploit local generalization by locally interpolating between neighbors [5], adjusting the target functions rapidly in the presence of training data. DNNs have good generalization capability for unseen input con\ufb01gurations by learning multiple levels of distributed representations. The results demonstrate GPDA can improve generalization performance by adopting both of these advantages. \fSVHN SYNSIG MNIST MNIST\u2217 USPS METHOD to to to to to MNIST GTSRB USPS USPS\u2217 MNIST Source Only 67.1 85.1 76.7 79.4 63.4 MMD \u2020 [33] 71.1 91.1 81.1 DANN \u2020 [13] 71.1 88.7 77.1\u00b11.8 85.1 73.0\u00b10.2 DSN \u2020 [7] 82.7 93.1 91.3 ADDA [56] 76.0\u00b11.8 89.4\u00b10.2 90.1\u00b10.8 CoGAN [32] 91.2\u00b10.8 89.1\u00b10.8 PixelDA [6] 95.9 ATDA \u2020 [48] 86.2 96.1 ASSC [22] 95.7\u00b11.5 82.8\u00b11.3 DRCN [15] 82.0\u00b10.1 91.8\u00b10.09 73.7\u00b10.04 MCDA (n = 2) 94.2\u00b12.6 93.5\u00b10.4 92.1\u00b10.8 93.1\u00b11.9 90.0\u00b11.4 MCDA (n = 3) 95.9\u00b10.5 94.0\u00b10.4 93.8\u00b10.8 95.6\u00b10.9 91.8\u00b10.9 MCDA (n = 4) 96.2\u00b10.4 94.4\u00b10.3 94.2\u00b10.7 96.5\u00b10.3 94.1\u00b10.3 GPDA 98.2\u00b10.1 96.19\u00b10.2 96.45\u00b10.15 98.11\u00b10.1 96.37\u00b10.1 Table 1: Classi\ufb01cation results on the digits and traf\ufb01c signs datasets (best viewed in color). The best score is in bold red, second best in light red. Results are cited from each study. The score of MMD is cited from DSN [7]. \u2020 indicates the method used a few labeled target samples as validation, different from our GPDA setting. We repeated each experiment \ufb01ve times and report the average and the standard deviation of the accuracy. The accuracy for MCDA was obtained from classi\ufb01er F1. n is MCDA\u2019s hyper-parameter, which denotes the number of times the feature generator is updated to mimic classi\ufb01ers. MNIST\u2217and USPS\u2217denote all the training samples were used to train the models. Method plane bcycl bus car horse knife mcycl person plant sktbrd train truck mean Ave. 
ranking Source Only 55.1 53.3 61.9 59.1 80.6 17.9 79.7 31.2 81.0 26.5 73.5 8.5 52.4 6.67 MMD [33] 87.1 63.0 76.5 42.0 90.3 42.9 85.9 53.1 49.7 36.3 85.8 20.7 61.1 3.84 DANN [13] 81.9 77.7 82.8 44.3 81.2 29.5 65.1 28.6 51.9 54.6 82.8 7.8 57.4 4.40 MCDA (n = 2) 81.1 55.3 83.6 65.7 87.6 72.7 83.1 73.9 85.3 47.7 73.2 27.1 69.7 3.75 MCDA (n = 3) 90.3 49.3 82.1 62.9 91.8 69.4 83.8 72.8 79.8 53.3 81.5 29.7 70.6 3.25 MCDA (n = 4) 87.0 60.9 83.7 64.0 88.9 79.6 84.7 76.9 88.6 40.3 83.0 25.8 71.9 2.84 GPDA (ours) 83.0 74.3 80.4 66.0 87.6 75.3 83.8 73.1 90.1 57.3 80.2 37.9 73.31 2.50 Table 2: Accuracy of ResNet101 model \ufb01ne-tuned on the VisDA dataset. Last column shows the average rank of each method over all classes. The best (in bold red), the second best (in red). 5.2. Results on VisDA dataset Results for this experiment are summarized in Tab. 2. We observe that our GPDA achieved, on average, the best performance compared to other competing methods. Due to vastly varying dif\ufb01culty of classifying different categories of objects, in addition to reporting the average classi\ufb01cation accuracy we also report the average rank of each method over all objects (the lower rank, the better). The higher performance of GPDA compared to other methods is mainly attributed to modeling the classi\ufb01er as a random function and consequently incorporating the classi\ufb01er uncertainty (variance of the prediction) into the proposed loss function, Eq. 28. The image structure for this dataset is more complex than that of digits, yet our method exhibits very strong performance even under such challenging conditions. Another key observation is that some of the competing methods (e.g., MMD, DANN) perform worse than the source-only model in classes such as car and plant, while GPDA and MCDA performed better across all classes, which clearly demonstrates the effectiveness of the MCD principle. 5.3. Ablation Studies Two complementary studies are conducted to investigate the impact of two hyper-parameters \u03b1 and \u03bb, controlling the trade off of the variance of the classi\ufb01er\u2019s posterior distribution and the MCD loss term, respectively. To this end, we conducted additional experiments for the digit datasets to analyze the parameter sensitivity of GPDA w.r.t. \u03b1 and \u03bb, with results depicted in Fig. 3a and 3b, respectively. Sensitivity analysis is performed by varying one parameter at the time over a given range, while for the other parameters we set them to their \ufb01nal values (\u03b1 = 2, \u03bb = 50). From Fig. 3b, we see that when \u03bb = 0 (no \f0 1 2 3 \u03b1 85 90 95 100 ACC S-->M M-->U M-->U (all) U-->M (a) Sensitivity to \u03b1 0 1 10 50 100 \u03bb 75 80 85 90 95 100 ACC S-->M M-->U M-->U(all) U-->M (b) Sensitivity to \u03bb Figure 3: Sensitivity analysis of our GPDA on the Digit datasets. S \u2192M denotes adaptation from SVHN to MNIST (similarly for others), and M \u2192U (all) indicates using all training samples. MCD regularization term), the performance drops considerably. As \u03bb increases from 0 to 50, the performance also increases demonstrating the bene\ufb01t of hypothesis consistency (MS term) over the target samples. Indeed, using the proposed learning scheme, we \ufb01nd a representation space in which we embed the knowledge from the target domain into the learned classi\ufb01er. Similarly, from Fig. 
3a, we see that when \u03b1 = 0 (no prediction uncertainty) the classi\ufb01cation accuracy is lower than the case where we utilize the prediction uncertainty, \u03b1 > 0. The key observation is that it is more bene\ufb01cial to make use of the information from the full posterior distribution of the classi\ufb01er during the learning process in contrast to when the classi\ufb01er is considered as a deterministic function. 5.4. Prediction Uncertainty vs. Prediction Quality Another advantage of our GPDA model, inherited from Bayesian modeling, is that it provides a quanti\ufb01ed measure of prediction uncertainty. In the multi-class setup considered here, this uncertainty amounts to the degree of overlap between two largest mean posteriors, p(fj\u2217(z)|DS) and p(fj\u2020(z)|DS), where j\u2217and j\u2020 are the indices of the largest and the second largest among the posterior means {\u00b5j(z)}K j=1, respectively (c.f., (9)). Intuitively, if the two are overlapped signi\ufb01cantly, our model\u2019s decision is less certain, meaning that we anticipate the class prediction may not be trustworthy. On the other hand, if the two are well separated, we expect high prediction quality. To verify this hypothesis more rigorously, we evaluate the distances between two posteriors (i.e., measure of certainty in prediction) for two different cohorts: correctly classi\ufb01ed test target samples by our model and incorrectly predicted ones. More speci\ufb01cally, for the SVHN to MNIST adaptation task, we evaluate the Bhattacharyya distances [11] for the samples in the two cohorts. In our variational Gaussian approximation (21), the Bhattacharyya distance can be computed in a closed form (See Appendix in supplementary for details). The histograms of the distances are depicted in Fig. 4 where we contrast the two models, one at an early stage of training and the other after convergence. Our \ufb01nal model in Fig. 4(a) exhibits large distances for most of the samples in the correctly predicted cohort (green), implying well separated posteriors or high certainty. For the incorrectly predicted samples (red), the distances are small implying signi\ufb01cant overlap between the two posteriors, i.e., high uncertainty. On the other hand, for the model prior to convergence, Fig. 4(b), the two posteriors overlap strongly (small distances along horizontal axis) for most samples regardless of the correctness of prediction. This con\ufb01rms that our algorithm enforces posterior separation by large margin during the training process. This analysis also suggests that the measure of prediction uncertainty provided by our GPDA model, can be used as an indicator of prediction quality, namely whether the prediction made by our model is trustworthy or not. To verify this, we depict some sample test images in Fig. 5. We differentiate samples according to their Bhattacharyya distances. When the prediction is uncertain (left panel), we see that the images are indeed dif\ufb01cult examples even for human. An interesting case is when the prediction certainty is high but incorrectly classi\ufb01ed (lower right panel), where the images look peculiar in the sense that humans are also prone to misclassify those with considerably high certainty. 5.5. Analysis of Shared Space Embedding We use t-SNE [35] on VisDA dataset to visualize the feature representations from different classes. Fig. 6 depicts the embedding of the learned features G(x), and the original features x. Colors indicate source (red) and target (blue) domains. 
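Returning briefly to the uncertainty measure of Sec. 5.4, the sketch below uses the standard closed-form Bhattacharyya distance between two univariate Gaussians as a proxy for how separated the two largest mean posteriors are; the means and variances are illustrative assumptions, not model outputs.

```python
import numpy as np

def bhattacharyya_gaussian(mu1, var1, mu2, var2):
    """Standard closed-form Bhattacharyya distance between two univariate Gaussians."""
    return ((mu1 - mu2) ** 2) / (4.0 * (var1 + var2)) \
           + 0.5 * np.log((var1 + var2) / (2.0 * np.sqrt(var1 * var2)))

# Large distance = well-separated posteriors = confident prediction (illustrative numbers)
print(bhattacharyya_gaussian(3.0, 0.2, 0.5, 0.3))   # well separated -> large distance
print(bhattacharyya_gaussian(0.7, 0.8, 0.5, 0.9))   # overlapping    -> small distance
```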
Notice that GPDA signi\ufb01cantly reduces the domain mismatch, resulting in the expected tight clustering. This is partially \f0.0 2.5 5.0 7.5 10.0 12.5 15.0 17.5 20.0 0 10 20 30 40 Proportion (%) (a) Final model (epoch 360) Correctly predicted samples Inorrectly predicted samples 0 1 2 3 4 5 0 5 10 Proportion (%) (b) Model before convergence (epoch 10) Correctly predicted samples Inorrectly predicted samples Figure 4: Histograms of prediction (un)certainty for our two models: (a) after convergence, (b) at an early stage of training. The X-axis is the Bhattacharyya distance b/w two largest mean posteriors, an indication of prediction certainty; the higher the distance, the more certain the prediction is. For each model, we compute histograms of correctly and incorrectly predicted samples separately (by color). In our \ufb01nal model (a), there is a strong correlation between prediction (un)certainty (horizontal axis) and prediction correctness (color). Figure 5: Selected test (MNIST) images according to the Bhattacharyya distances. Right: samples with low distances (uncertain prediction). Left: high distances (certain prediction). Top: correctly classi\ufb01ed by our model. Bottom: incorrectly classi\ufb01ed. For each image, GT, Pr, and d means ground-truth label, predicted label, and the distance, respectively. due to the use of the proposed probabilistic MCD approach, which shrinks the classi\ufb01er hypothesis class to contain only consistent classi\ufb01ers on target samples while exploiting the uncertainty in the prediction. \fSVHN MNIST (a) Original (by domain) (b) Original (by classes) SVHN MNIST (c) GPDA (by domain) (d) GPDA (by classes) Figure 6: Feature visualization for embedding of digit datasets for adapting SVHN to MNIST using t-SNE algorithm. The \ufb01rst and the second columns show the domains and classes, respectively, with color indicating domain and class membership. a,b Original features. c,d learned features for GPDA. 6." + }, + { + "url": "http://arxiv.org/abs/1902.01568v1", + "title": "Relevance Factor VAE: Learning and Identifying Disentangled Factors", + "abstract": "We propose a novel VAE-based deep auto-encoder model that can learn\ndisentangled latent representations in a fully unsupervised manner, endowed\nwith the ability to identify all meaningful sources of variation and their\ncardinality. Our model, dubbed Relevance-Factor-VAE, leverages the total\ncorrelation (TC) in the latent space to achieve the disentanglement goal, but\nalso addresses the key issue of existing approaches which cannot distinguish\nbetween meaningful and nuisance factors of latent variation, often the source\nof considerable degradation in disentanglement performance. We tackle this\nissue by introducing the so-called relevance indicator variables that can be\nautomatically learned from data, together with the VAE parameters. Our model\neffectively focuses the TC loss onto the relevant factors only by tolerating\nlarge prior KL divergences, a desideratum justified by our semi-parametric\ntheoretical analysis. 
Using a suite of disentanglement metrics, including a\nnewly proposed one, as well as qualitative evidence, we demonstrate that our\nmodel outperforms existing methods across several challenging benchmark\ndatasets.", + "authors": "Minyoung Kim, Yuting Wang, Pritish Sahu, Vladimir Pavlovic", + "published": "2019-02-05", + "updated": "2019-02-05", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "stat.ML" + ], + "main_content": "Introduction Identifying the true underlying factors or sources that explain the variability and structure of data is a key problem in machine learning. Understanding \u201chow\u201d decisions are made is intimately tied to \ufb01nding the representation that most succinctly explains the data. Such representations make it easier to extract useful information for constructing and interpreting predictive models, important when the prediction goals only partially re\ufb02ect the underlying structure of the data. Learning this representation of data is also fundamental to understanding the complex observations in many 1Dept. of Computer Science, Rutgers University, NJ, USA 2Dept. of Electronic Engineering, Seoul National University of Science & Technology, Seoul, South Korea. Correspondence to: Vladimir Pavlovic . applications (Bengio et al., 2013; Lake et al., 2016). Latent variable models are the main tools for this task, thanks to their ability to principally model compact representation of observed data. Deep learning architectures, embodied in latent variable models such as the deep variational autoencoder (VAE) (Kingma & Welling, 2014), have extended this ability to modeling of complex nonlinear dependencies among latent factors and the ambient data. One of the highly desired requirements in learning the latent representation is the disentanglement of factors: each latent variable shall be exclusively responsible for the variation of a unique aspect in the observed data (Bengio et al., 2013). For example, for facial images, changing the value of one latent variable while \ufb01xing the others should result in variation of the azimuth pose alone while the other aspects, such as the subject ID, gender, skin color, or the facial expression, remain intact. Conventional data likelihood maximization learning, inherently adopted in the VAE, does not explicitly aim to achieve this goal. This has given rise to several recent efforts aimed at amending the learning algorithms with the goal of constructing highly disentangled representations. While some of the approaches are (semi)supervised, exploiting known factors of variations such as data labels (Reed et al., 2014; Yang et al., 2015; Kulkarni et al., 2015; Whitney et al., 2016), in this paper we tackle the unsupervised setup: learning disentangled representations from unlabeled data alone. Previous works on unsupervised disentangled representation learning have empirically demonstrated that one can achieve improved disentanglement by over-emphasizing: either the latent prior-posterior matching (Makhzani et al., 2016; Higgins et al., 2017), or the mutual information between observed and latent variables (Chen et al., 2016). More recently, the idea of minimizing the total correlation (TC) was introduced, leading to signi\ufb01cant improvements over the non-TC methods. Factor-VAE (Kim & Mnih, 2018) has introduced a percipient way of optimizing the TC, followed by alternative strategies such as (Chen et al., 2018). 
Despite their potential, such models have not been able to address the key issue of distinguishing meaningful latent factors from the nuisance ones, which can lead to potentially signi\ufb01cant degradation of disentanglement performance. This is a consequence of the quintessential learning setarXiv:1902.01568v1 [cs.LG] 5 Feb 2019 \fRelevance Factor VAE ting adopted here: to learn the disentangled representation, one will commonly choose a latent dimension suf\ufb01ciently large to encompass both the major as well as the minor, nuisance, modes of variation. However, most existing methods disregard this fact, applying a homogeneous learning loss (e.g., the TC loss) to major and nuisance factors alike. The main goal of this paper is to address this issue by introducing the so-called relevance indicator variables that can be automatically learned from data together with the rest of VAE parameters. Our model effectively takes into account the TC loss by focusing it only on relevant, disentangled factors, tolerating large prior divergence of these factors from those a priori speci\ufb01ed in the nuisance model, while simultaneously attempting to identify the noise factors with small divergence from the same nuisance priors, an approach introduced in Sec. 3 and fully justi\ufb01ed by detailed theoretical analysis in the Supplement. This enables automatic identi\ufb01cation and learning of true factors, even in instances when the total number of such factors is not explicitly known. We empirically demonstrate, through extensive empirical evaluations on several benchmark datasets, that our model signi\ufb01cantly outperforms existing methods both quantitatively, across diverse disentanglement metrics, as well as qualitatively. To that end, we also introduce a new disentanglement evaluation score, which shows improved agreement with qualitative assessment of disentangling models. 2. Background 2.1. Notation and VAE (Kingma & Welling, 2014) Let x be our observation (e.g., an image) and z \u2208Rd be the underlying latent vector of x. To represent the observed data, the variational auto-encoder (VAE) speci\ufb01cally de\ufb01nes a probabilistic model that incorporates the deep functional modeling, namely p(z) = N(z; 0, I), (1) p(x|z) = p(x; \u03b8(z)), (2) where p(x; \u03b8(z)) is a tractable density (e.g., Gaussian or Bernoulli) with the parameters \u03b8(z), the output of a deep model \u03b8(\u00b7) with its own weight parameters. In the unsupervised learning setup where we are given only ambient data {xn}N n=1, the model can be learned by the MLE, i.e., maximizing PN n=1 log p(xn). This requires posterior inference p(z|x), but as the exact inference is intractable, the VAE adopts the variational technique: approximate p(z|x) \u2248q(z|x), where q(z|x) is a freely chosen tractable density q(z|x) = q(z; \u03bd(x)), (3) where \u03bd(x) is another deep model. A typical choice, assumed throughout the paper, is independent Gaussian, q(z|x) = d Y j=1 N(zj; mj(x), sj(x)2) (4) where \u03bd(x) = {mj(x), sj(x)}d j=1 constitutes the mean and the variance parameters. 
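To make the encoder in (3)-(4) concrete, here is a minimal NumPy sketch in which a random linear map stands in for the deep network ν(x); all dimensions, weights, and names are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
d, x_dim = 4, 10                         # latent and input dims (illustrative)

# Toy stand-in for the deep encoder nu(x) in (3)-(4): a random linear map
Wm, Ws = rng.normal(size=(x_dim, d)), rng.normal(size=(x_dim, d))

def encode(x):
    m = x @ Wm                           # per-dimension means   m_j(x)
    s = np.exp(0.5 * (x @ Ws))           # positive std devs     s_j(x)
    return m, s

def sample_posterior(m, s):
    # z_j ~ N(m_j(x), s_j(x)^2) independently for each j, as in (4)
    return m + s * rng.normal(size=m.shape)

x = rng.normal(size=x_dim)               # a toy "observation"
m, s = encode(x)
z = sample_posterior(m, s)
print(np.round(m, 3), np.round(s, 3), np.round(z, 3))
```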
The data log-likelihood admits the ELBO as its lower bound, and we maximize it wrt both \u03b8(\u00b7) and \u03bd(\u00b7): ELBO(\u03b8, \u03bd) = \u2212Recon(\u03b8, \u03bd)\u2212Epd(x) \u0002 KL(q(z|x)||p(z)) \u0003 , (5) where pd(x) is the empirical data distribution that represents our data {xn}N n=1, and Recon(\u03b8, \u03bd) = \u2212Epd(x) \u0002 Eq(z|x)[log p(x|z)] \u0003 (6) is the reconstruction loss, identical to the negative expected log-likelihood. Even though maximizing (5) can yield a model that explains the data well (i.e., high data likelihood), the learned model does not necessarily exhibit disentanglement of latent factors, as de\ufb01ned in the next section. 2.2. Latent Disentanglement We say that the latent vector z is disentangled if for each dimension j = 1, . . . , d, varying zj, while \ufb01xing other factors, results in the variation of the j-th aspect exclusively in the observation x. For example, consider x to represent a face image, and let j be the factor responsible for the facial pose (azimuth). Then varying zj while \ufb01xing other factors would generate images of different facial poses with other aspects, such as subject ID, gender, skin color, and facial expression, intact. To achieve this goal of disentanglement, the FactorVAE (Kim & Mnih, 2018) aims to minimize the following loss function1: LF = Recon(\u03b8, \u03bd) + Epd(x) \" d X j=1 KL(q(zj|x)||p(zj)) # + \u03b3KL \u0012 q(z)|| d Y j=1 q(zj)) \u0013 . (7) In (7) the \ufb01rst two terms correspond to the VAE, whereas the last term, known as the total correlation (TC)2, encourages 1 There are several other VAE learning algorithms aiming for disentanglement in the similar \ufb02avor, and we brie\ufb02y summarize the related work in Sec. 4. 2 In the optimization, the dif\ufb01cult log-ratio between the mixtures (c.f. (8)) is circumvented by the density ratio estimation proxy (Nguyen et al., 2010; Sugiyama et al., 2012): they introduce \fRelevance Factor VAE factorization of the so-called aggregate posterior q(z), q(z) = Z q(z|x)pd(x)dx = 1 N N X n=1 q(z|xn). (8) That is, q(z) can be regarded as a model-induced prior, and the Factor-VAE imposes full independence of factors in this prior by penalizing KL(q(z)|| Q j q(zj)). An intuition is that as the TC encourages independence in the dimensions of z, it also reduces the focus of LF on the mutual information between x and z (the second term), leading to the model able to learn informative disentangled latent representations. Empirical results in (Kim & Mnih, 2018) have demonstrated that Factor-VAE is often able to achieve strong disentanglement performance. However, a key issue remains in that the model is unable to systematically discern meaningful latent factors from the nuisance ones because the approach relies on a heuristically chosen latent dimension d, suf\ufb01ciently large to encompass all true relevant factors. The lack of discrimination between relevant factors and nuisance in z may degrade the disentanglement performance and lead the model to learn redundant factors. In what follows, we address this issue by introducing relevance indicator variables that can be automatically learned from data together with the VAE parameters. 3. Relevance Factor VAE (RF-VAE) The key motivation of our approach is that for the factor j to be relevant, its marginal model-induced prior q(zj) ought to be highly non-Gaussian, in contrast to the VAE\u2019s attempt, through the second term in (7), to equally strongly match q(zj|x) to p(zj) = N(0, 1) across all x. 
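A toy simulation can make this observation concrete: under (8), the aggregate posterior of a relevant dimension departs visibly from N(0, 1), while a nuisance dimension does not. The mixture parameters and the kurtosis check below are illustrative assumptions, not outputs of a trained model.

```python
import numpy as np

rng = np.random.default_rng(0)
N = 2000                                   # number of data points x_n

# One sample per component of q(z_j) = (1/N) sum_n N(z_j; m_j(x_n), s_j(x_n)^2)   (eq. 8)
def aggregate_posterior_samples(component_means, component_stds):
    return rng.normal(component_means, component_stds)

# Relevant dimension: q(z_j|x) depends strongly on x -> component means spread out
relevant = aggregate_posterior_samples(rng.uniform(-3, 3, size=N), 0.1 * np.ones(N))
# Nuisance dimension: q(z_j|x) = N(0, 1) regardless of x -> aggregate stays standard normal
nuisance = aggregate_posterior_samples(np.zeros(N), np.ones(N))

def excess_kurtosis(z):
    # Crude non-Gaussianity check (0 for a Gaussian)
    z = (z - z.mean()) / z.std()
    return (z ** 4).mean() - 3.0

print(round(excess_kurtosis(relevant), 2), round(excess_kurtosis(nuisance), 2))
```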
This can be easily seen: q(zj) is a mixture of Gaussians (c.f., (8) and (4)) with components q(zj|x) = N(zj; mj(x), sj(x)2). If the factor j is relevant, zj should never be independent of x, reinforcing the non-Gaussianity of the mixture as the whole, where each component differs from another. On the other hand, for a nuisance dimension j\u2032, zj\u2032 are, by de\ufb01nition, independent of x, i.e., q(zj\u2032|x) = q(zj\u2032), enabling the second KL term in (7) to vanish and have no effect on LF . To differentiate the prior KL losses for relevant factors from those of the nuisance ones, we partition the latent dimensions into two disjoint subsets, R (relevant) and N (nuisance), i.e., {1, . . . , d} = R \u222aN, R \u2229N = \u2205. Critically, KL penalties on R and N need to be distinct and learned from the data. With that in mind, in Sec. 3.1 we \ufb01rst develop the model where we assume that R and N are known. We subsequently, in Sec. 3.2, relax this constraint to, in a and learn a discriminator D(z) that classi\ufb01es samples from q(z) against those from Q j q(zj), and establish that log q(z) Q j q(zj) \u2248 log D(z) 1\u2212D(z). principled manner, learn the partition directly from data. 3.1. RF-VAE: Known R Assuming that we know the index sets R and N, we propose the following loss function for the disentangled VAE learning: LR0 = Recon(\u03b8, \u03bd) + Epd(x) \" d X j=1 \u03bbjKL(q(zj|x)||p(zj)) # + \u03b3KL \u0012 q(zR)|| Y j\u2208R q(zj)) \u0013 , (9) where \u03bbj = ( \u03bbmin if j \u2208R \u03bbmax if j \u2208N (\u03bbmin < \u03bbmax), e.g., \u03bbmin = 0.1 and \u03bbmax = 10.0. As shown, we have made two modi\ufb01cations from the loss function of Factor-VAE: i) The prior KL loss is penalized differently according to the relevance of each dimension j, penalizing less for j \u2208R with impact \u03bbmin, and more for j \u2208N with impact \u03bbmax. ii) The TC takes into account only the relevant dimensions. In the Supplement, we provide a theoretical justi\ufb01cation for this approach. In particular, our analysis supplies a rigorous theoretical underpinning for why minimizing TC leads to factor disentanglement, beyond just the intuitive argument made previously in Factor-VAE and other related approaches. Optimization. To optimize (9), we follow the approach similar to that taken in Factor-VAE. The TC term is approximated by the density ratio proxy KL \u0012 q(zR)|| Y j\u2208R q(zj) \u0013 \u2248Eq(zR) \u0014 log D(zR) 1 \u2212D(zR) \u0015 , (10) where D(\u00b7) is the discriminator that discerns samples from q(zR) (as output 1) from those in Q j\u2208R q(zj) (as output 0). That is, max D \u0010 Ez\u223cq(z)[log D(zR)]+Ez\u223cQ j q(zj)[log(1\u2212D(zR))] \u0011 . (11) In the optimization, we alternate gradient updates for (11) wrt D(\u00b7) and (9) wrt the VAE parameters with the TC term replaced by the expected log-ratio (10). 3.2. RF-VAE: Learning R Our previous assumption, that the index set of relevant dimensions R is known, is often times not practical. In this section we propose a principled way to learn the relevant dimensions automatically from data. The key idea is to introduce a relevance vector r, of the same dimension as z, where rj = 1 (0) indicates that zj \fRelevance Factor VAE is a relevant (resp., nuisance) factor, for j = 1, . . . , d. We can learn r together with the VAE parameters by optimizing a loss function similar to LR0 in (9). 
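As a concrete illustration of the relevance-weighted prior KL term in (9), the sketch below applies the closed-form Gaussian KL per dimension with λ_min on relevant dimensions and λ_max on nuisance ones; the posterior statistics and the choice R = {1, 2, 3} are illustrative assumptions.

```python
import numpy as np

def kl_gauss_to_std_normal(m, s):
    # KL( N(m, s^2) || N(0, 1) ) per latent dimension, closed form
    return 0.5 * (s ** 2 + m ** 2 - 2.0 * np.log(s) - 1.0)

def weighted_prior_kl(m, s, relevant_mask, lam_min=0.1, lam_max=10.0):
    """Second term of (9): penalize the prior KL lightly on relevant dimensions
    (lam_min) and heavily on nuisance dimensions (lam_max)."""
    lam = np.where(relevant_mask, lam_min, lam_max)
    return float((lam * kl_gauss_to_std_normal(m, s)).sum())

# d = 6 latent dims; the first three are assumed relevant
m = np.array([1.2, -0.8, 0.5, 0.05, -0.02, 0.01])
s = np.array([0.3, 0.4, 0.5, 0.98, 1.01, 0.99])
relevant = np.array([True, True, True, False, False, False])
print(weighted_prior_kl(m, s, relevant))
```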
Speci\ufb01cally, since r de\ufb01nes the relevance set R = {j : rj = 1}, we can formulate an optimization problem that minimizes LR0 for a given r, and regularizes r to discover a minimally redundant set of relevant factors. With the density ratio approximation for the TC term, we incorporate the optimization variables r in LR0 in the following way: Recon(\u03b8, \u03bd) + Epd(x) \" d X j=1 \u03bb(rj)KL(q(zj|x)||p(zj)) # + \u03b3Eq(z) \u0014 log D(r \u25e6z) 1 \u2212D(r \u25e6z) \u0015 + \u03b7S||r||1, (12) where \u03bb(\u00b7) is a decreasing function3 with \u03bb(0) = \u03bbmax > \u03bb(1) = \u03bbmin, and \u25e6is the element-wise (Hadamard) product. Note that the last L1 term penalizes too many dimensions to be chosen as relevant, encouraging minimal redundancy. The remaining difference from LR0 is the TC term, where the discriminator D(\u00b7) now takes the latent vector scaled by r as its input. For given r, the discriminator is learned from the following optimization: max D Ez\u223cq(z)[log D(r\u25e6z)]+Ez\u223cQ j q(zj)[log(1\u2212D(r\u25e6z))] (13) This allows one to mitigate the impact of nuisance dimensions (rj = 0) on the TC term while leaving the relevant latent variables zj (rj = 1) intact. Since optimizing (12) wrt r \u2208{0, 1}d is a dif\ufb01cult combinatorial problem, we relax r to be a continuous space r \u2208[0, 1]d. Furthermore, to encourage each rj to be close to either 0 or 1 and discourage fractional values, we include the entropic loss, H(r) = \u2212Pd j=1 \u0000rj log rj + (1 \u2212 rj) log(1 \u2212rj) \u0001 . Our comprehensive loss function hence becomes: LR(r, {\u03b8, \u03bd}) = Recon(\u03b8, \u03bd) + \u03b7S||r||1 + \u03b7HH(r) + Epd(x) \" d X j=1 \u03bb(rj)KL(q(zj|x)||p(zj)) # + \u03b3Eq(z) \u0014 log D(r \u25e6z) 1 \u2212D(r \u25e6z) \u0015 (14) Again, this loss is minimized by alternating the gradient updates for (13) wrt D(\u00b7) and (14) wrt both the VAE parameters {\u03b8, \u03bd} and the relevance vector r. 3In our experiments, we simply choose a linear function. 4. Related Work Most approaches to latent disentanglement consider the learning objectives combining the ELBO loss in (5) with the regularization terms that encourage prior latent factor independence. We summarize some key recent approaches below. \u2022 \u03b2-VAE (Higgins et al., 2017). Instead of directly introducing KL(q(z)||p(z)), the challenge of dealing with non-factorized (8) is circumvented through adoption of the averaged divergence between the posterior and the prior. Hence, the objective4 to minimize is L(\u03b8, \u03bd) = \u2212ELBO + \u03b2 1 N N X i=1 KL(q(z|xi)||p(z)), (15) with \u03b2 \u22650 as the balancing constant. \u2022 AAE (Makhzani et al., 2016). The adopted regularization term is KL(q(z)||p(z)). However, because of the dif\ufb01culty of dealing with q(z) in the optimization process, the authors employ an adversarial learning strategy by introducing a discriminator D(z) that is adversarially learned to discriminate samples from p(z) against those from the non-factorized q(z). \u2022 Factor-VAE (Kim & Mnih, 2018). The full independence of q(z) is imposed by penalizing KL(q(z)|| Q j q(zj)) in the TC manner. Unlike AAE\u2019s implicit regularization, they do incorporate the KL term explicitly in the objective. The dif\ufb01cult-to-estimate log-ratio between the mixtures (c.f. 
(8)) is proxied by the density ratio estimation: for a near-optimal discriminator D(z) that classi\ufb01es samples from q(z) against those from Q j q(zj), one can establish that log q(z) Q j q(zj) \u2248log D(z) 1\u2212D(z). Given D, the objective function to minimize is: L(\u03b8, \u03bd) = \u2212ELBO + \u03b2Eq(z) \u0014 log D(z) 1 \u2212D(z) \u0015 . (16) In practice, every SGD update for (7) is accompanied by the update of D for the adversarial learning. \u2022 Adversarial Nonlinear ICA (Brakel & Bengio, 2017) aims to learn the deterministic auto-encoder by minimizing the reconstruction error and the JensenShannon divergence between q(z) and Q j q(zj), where the latter is optimized by adversarial learning. \u2022 Other variants. Other variants of the above approaches largely follow the same principles. For instance, DIP-VAE (Kumar et al., 2018) penalizes the 4As the penalty term coincides with the KL term in the ELBO, one can merge the two while having \u03b2 \u22651. \fRelevance Factor VAE deviation of V(q(z)) from the identity, while InfoGAN (Chen et al., 2016) aims to minimize the reconstruction error in the z-space in addition to the conventional reconstruction error in the x-space. The \u03b2-TCVAE algorithm (Chen et al., 2018) aimed to derive a decomposable TC term, instead arriving at the same objective as the Factor-VAE. However, they employ the mini-batch weighted sampling strategy instead of the auxiliary adversarial discriminator training. Some recent approaches seek to partition the latent variables into meaningful groups to achieve improved disentanglement. Since the factors underlying the data can oftentimes be a mix of categorical and real-valued sources, a hybrid model that jointly represents discrete and continuous latent variables was introduced in (Dupont, 2018). In (Mathieu et al., 2016), they considered the setup where the data is partially labeled with speci\ufb01c class categories. In such setup, they separately treat the factors associated with the labels from those that are not, leading to an interesting conditional factor model. These approaches are related, but inherently different from our model in that we aim to simultaneously differentiate and identify the relevant latent variables from the nuisance factors. 5. Evaluation In this section we evaluate our approaches on several benchmark datasets. The goodness of the disentanglement can be assessed both quantitatively and qualitatively, where the former requires the dataset to be fully factor-labeled; however, the labeled factors are only used for evaluation and not for model learning. Selecting appropriate quantitative scores is a key step in the evaluation process. We consider a comprehensive suite of three metrics: i) the disentanglement metric proposed in (Kim & Mnih, 2018), ii) our new disentanglement metric, and iii) the metrics studied in (Eastwood & Williams, 2018). All metrics are brie\ufb02y described in the following section. Qualitative assessment is typically accomplished through visualizations of data synthesis via latent space traversal. We include both in our experiments. Speci\ufb01c to our models that separate relevant from nuisance variables is another evaluation metric, proposed in (Mathieu et al., 2016). However, since this metric is not applicable to competing models with no strati\ufb01cation ability, we report its results in the Supplement. 5.1. Disentanglement Metrics Metric I (Kim & Mnih, 2018) One factor \ufb01xed. 
The goal of this metric is to assess the variability of the discovered latent factors as a function of the true factor variance. Let v be the vector of ground-truth factors for x. For each factor index j, a set of L samples v(i) = (vj, v(i) \u2212j), i = 1, . . . , L, is collected, corresponding to clamping of factor j (no variance) and free variation of the remaining factors. The existence of latent factor(s) zu with similar, vanishing variation would then indicate the discovery of known factor j. In practice, this is accomplished by evaluating the encoder\u2019s outputs, z(i) \u223cq(z|x(i)) for images x(i) corresponding to samples v(i). The (sample) variance V(z) is used to \ufb01nd the index u of the factor with the smallest variance, u := arg min 1\u2264j\u2264d V(zj). (17) u then serves as the covariate for predicting the true factor index j: the metric is de\ufb01ned to be the accuracy of a simple classi\ufb01er that predicts j from u, among all j = 1, . . . , d. If a model achieves strong disentanglement, we can expect u = j (up to a permutation), making the classi\ufb01cation easy. In particular, since u, j \u2208{1, . . . , d}, the data pairs {(u, j)} can be represented as a simple contingency table, in which a majority vote classi\ufb01er is used for prediction. We use L = 100 samples to form each pair (u, j), and collect 800 pairs to compute the accuracies of the majority vote classi\ufb01ers. Since the metric is based on random samples, we repeat the evaluation ten times randomly to report the means and standard deviations. Metric II (Our new metric) One factor varied. Following the notion of disentanglement in Sec. 2.2, another reasonable approach is to collect samples with only one factor varied, instead of one factor \ufb01xed as in Metric I. That is, we collect images with v(i) = (v(i) j , v\u2212j) for i = 1, . . . , L. (17) is then modi\ufb01ed to arg max, and we can use the same majority vote classi\ufb01cation to report the accuracy. Our evaluation results in the following sections demonstrate that this new metric shows higher agreement with qualitative assessment of disentanglement than Metric I. However, note that to compute Metric II the dataset needs to contain dense joint variations in all true factors, typically a reasonable assumption for large, diverse datasets. Metric III (Eastwood & Williams, 2018) proposed three metrics: 1) Disentanglement, 2) Completeness, and 3) Informativeness. These scores are regression-prediction based, using the latent vector z as the covariate for individual ground-truth factors vj. Speci\ufb01cally, D measures the degree of dedication of each latent variable zk in predicting vj against others v\u2212j (the higher, the better), C captures the degree of exclusive contribution of zk in predicting vj against others z\u2212k (the higher, the better), and I measures the prediction error (the smaller, the better). For the regressors, both LASSO and Random Forests are used. 5.2. Datasets and Results Datasets summary. We test all methods on the following datasets: 3D-Faces (Paysan et al., 2009), Sprites (Matthey et al., 2017), and Teapots (Eastwood \fRelevance Factor VAE 1 2 3 4 5 6 7 8 9 10 0.00 0.25 0.50 0.75 1.00 1 2 3 4 5 6 7 8 9 10 j in z_j 0 2 4 6 E_x[ KL(q(z_j|x)||N(0,1)) ] RF-VAE Figure 1. RF-VAE on 3D-Faces dataset. (Left) Learned relevance r. (Right) Expected prior KL divergences for individual dimensions. & Williams, 2018). Results for Celeb-A (Liu et al., 2015) are provided in the Supplement. 
All datasets provide groundtruth factor labels. For all datasets, the image sizes are normalized to 64 \u00d7 64, and the pixel intensity/color values are scaled to [0, 1]. We use cross entropy loss as the reconstruction loss. Model architectures. We adopt the model architectures similar to those in (Kim & Mnih, 2018). The encoders consist of 5-layer conv-nets followed by two fully connected layers, and the decoders are 4-layer deconv-nets after two fully connected layers. We apply (4 \u00d7 4) \ufb01lters for the convolution and the transposed convolution (deconv) in both models. For the adversarial discriminator D used for optimizing the TC loss, we use a 6-layer MLP model with 1000 hidden units per layer and the leaky ReLU nonlinearity. See Supplement for more details. Optimization parameters. The optimization parameters are chosen similarly as those in (Kim & Mnih, 2018). We use Adam with the batch size 64. We run 3 \u00d7 105 batch iterations. The trade-off parameters (i.e., \u03b2 for \u03b2-VAE, \u03b3 for Factor-VAE and RF-VAE models, and (\u03b7, \u03bb) for our RF-VAE models) are obtained through grid search to yield the best performance. Our two models are marked as: RF-VAE-0, which exploits the ground-truth R (Sec. 3.1) and RF-VAE, which learns R from data (Sec. 3.2). In RF-VAE, the relevance vector r is initialized as all-0.5 vector. We contrast our models to Vanilla VAE (Kingma & Welling, 2014), \u03b2-VAE (Higgins et al., 2017), and Factor-VAE (Kim & Mnih, 2018). 5.2.1. 3D-FACES This dataset contains synthetic face images rendered from a 3D face model (Paysan et al., 2009), with four factors of variation (number of modes): subject ID (50), azimuth (21), elevation (11), and lighting along the horizon (11). We choose the latent dimension d = 10 for all competing models to encompass the true 4. Quantitative results are shown in Tab. 1a. All models attain perfect Metric I scores. For other metrics, RF-VAE-0 (with known R = {1, 2, 3, 4}) consistently outperforms other models. Interestingly, RFVAE, which learns the relevance set R from data, performs nearly equally well as RF-VAE-0, and signi\ufb01cantly better than Factor-VAE in terms of Metric II. Result in Tab. 1a suggest that Metric I score may only be partially indicative of the underlying disentanglement performance: all models attain a perfect Metric I scores. Differentiation is more prominent, however, in Metrics II and III. This is not surprising because it is possible to achieve a high Metric I score when the same factor is shared across multiple latent variables, i.e., it may suffer from redundancy in representation. However, redundancy would decrease Completeness. On the other hand, our Metric II can simultaneously capture both D and C. For our RF-VAE, we also depict the learned relevance vector r in Fig. 1. It shows that our model discovers four factors, (1,4,6,8), signifying its ability to identify the true source of variation from the data alone. The found latent dimensions have higher (non-zero) expected prior KL divergences, Epd(x) \u0002 KL(q(zj|x)||p(zj)) \u0003 , compared those in the nuisance factors (zero), as shown on the right panel of Fig. 1. As designed and anticipated, our model successfully tolerates large prior KL divergence matching penalty. On the other hand, the number of latent dimensions with large KL divergences (greater than 0.5) for other models do not match the true number of factors; they are 8 (Vanilla-VAE), 6 (\u03b2-VAE), and 7 (Factor-VAE). 
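The per-dimension quantity behind these counts, E_x[KL(q(z_j|x)||N(0,1))], has a closed form for a Gaussian encoder; the following sketch (our own illustration, not the authors' evaluation code) computes it from encoder means and log-variances and counts the dimensions above the 0.5 threshold used here.

import numpy as np

def expected_prior_kl(mu, logvar):
    # mu, logvar: (num_images, d) posterior means and log-variances from the encoder.
    # Closed-form KL(N(mu, sigma^2) || N(0, 1)) averaged over the dataset, per dimension.
    kl = 0.5 * (mu**2 + np.exp(logvar) - logvar - 1.0)
    return kl.mean(axis=0)

def num_active_dims(mu, logvar, threshold=0.5):
    # Dimensions whose expected prior KL exceeds the threshold are counted as "used".
    return int((expected_prior_kl(mu, logvar) > threshold).sum())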
This suggests those models learn redundant or non-exclusive latent representations to explain the variation in the data, a property that can diminish disentanglement. This may be a main reason why the competing models underperform RF-VAE. Latent traversal results for RF-VAE are shown in Fig. 2a, where we depict images synthesized by the traversal of a single latent variable at a time. Visually, each relevant latent variable is strongly associated with one of the four factors, while the variation of nuisance variables has little impact on the generated images. 5.2.2. SPRITES The dataset consists of 737, 280 binary images of three shapes (oval, square, and heart), undergoing variations in four geometric factors: scale (6 variation modes), rotation (40), and X, Y translation (32 modes each), resulting in \ufb01ve factors total. For all competing models, we choose the latent dimension d = 10 > 5. The disentanglement scores are reported in Tab. 1b. Although RF-VAE-0, by exploiting the known number of factors, largely attained the highest scores, the scores are not perfect (e.g., Metric II far below 100%). As shown in the latent traversal results in the Supplement, the shape factor remains entangled with other factors in all latent variables zR. RF-VAE similarly failed to identify the \ufb01ve relevant factors, elucidating three modes of variation (Fig. 3), even though it performs slightly better than FactorVAE. One reason for this failure may lie in the dif\ufb01culty of representing the shape factor, which is discrete in nature and \fRelevance Factor VAE Table 1. Disentanglement metrics for benchmark datasets. For Metric III, the three \ufb01gures in each cell indicate Disentanglement / Completeness / Informativeness (top row based on the LASSO regressor, the bottom on the Random Forest. Note that the higher the better for D and C, while the lower the better for I. The best scores for each metric (within the margin of signi\ufb01cance) among the competing models are shown in red and second-best in blue. (a) Disentanglement metrics on the 3D-Faces dataset. VANILLA-VAE \u03b2-VAE FACTOR-VAE RF-VAE-0 RF-VAE METRIC I 100.0 \u00b1 0.00 100.0 \u00b1 0.00 100.0 \u00b1 0.00 99.80 \u00b1 0.14 99.93 \u00b1 0.06 METRIC II 93.44 \u00b1 0.74 95.48 \u00b1 0.62 92.78 \u00b1 1.09 99.61 \u00b1 0.26 95.19 \u00b1 0.53 METRIC III 0.96 / 0.81 / 0.37 0.96 / 0.78 / 0.40 1.00 / 0.82 / 0.36 1.00 / 0.97 / 0.51 1.00 / 1.00 / 0.48 0.99 / 0.84 / 0.26 0.98 / 0.86 / 0.31 0.96 / 0.83 / 0.25 1.00 / 0.95 / 0.46 1.00 / 0.93 / 0.37 (b) Disentanglement metrics on the Sprites dataset. VANILLA-VAE \u03b2-VAE FACTOR-VAE RF-VAE-0 RF-VAE METRIC I 80.20 \u00b1 0.33 80.81 \u00b1 0.78 81.98 \u00b1 1.01 91.99 \u00b1 0.85 85.35 \u00b1 1.16 METRIC II 58.15 \u00b1 1.44 76.83 \u00b1 0.89 77.64 \u00b1 1.42 85.41 \u00b1 1.48 79.08 \u00b1 1.31 METRIC III 0.59 / 0.68 / 0.52 0.67 / 0.69 / 0.53 0.84 / 0.84 / 0.53 0.89 / 1.00 / 0.64 0.85 / 0.87 / 0.53 0.57 / 0.69 / 0.46 0.72 / 0.84 / 0.40 0.73 / 0.82 / 0.41 0.78 / 0.94 / 0.62 0.73 / 0.83 / 0.41 (c) Disentanglement metrics on the Sprites Oval-Only dataset. 
VANILLA-VAE \u03b2-VAE FACTOR-VAE RF-VAE-0 RF-VAE METRIC I 97.19 \u00b1 0.42 75.33 \u00b1 0.64 100.0 \u00b1 0.00 100.0 \u00b1 0.00 100.0 \u00b1 0.00 METRIC II 53.23 \u00b1 1.47 70.20 \u00b1 1.19 80.59 \u00b1 1.05 95.96 \u00b1 0.44 95.40 \u00b1 0.47 METRIC III 0.42 / 0.43 / 0.54 0.58 / 0.49 / 0.49 1.00 / 0.88 / 0.33 0.97 / 0.93 / 0.33 1.00 / 0.99 / 0.49 0.32 / 0.55 / 0.46 0.56 / 0.58 / 0.36 0.81 / 0.84 / 0.24 0.97 / 0.96 / 0.29 0.93 / 0.87 / 0.22 (d) Disentanglement metrics on the Teapots dataset. VANILLA-VAE \u03b2-VAE FACTOR-VAE RF-VAE-0 RF-VAE METRIC I 90.14 \u00b1 0.90 56.94 \u00b1 1.14 91.93 \u00b1 0.83 78.00 \u00b1 1.02 98.68 \u00b1 0.35 METRIC II 77.74 \u00b1 1.26 47.30 \u00b1 0.85 74.61 \u00b1 1.75 60.30 \u00b1 1.92 83.10 \u00b1 1.22 METRIC III 0.60 / 0.53 / 0.40 0.31 / 0.27 / 0.72 0.63 / 0.61 / 0.46 0.40 / 0.38 / 0.57 0.63 / 0.56 / 0.37 0.81 / 0.72 / 0.31 0.45 / 0.61 / 0.52 0.75 / 0.78 / 0.29 0.62 / 0.59 / 0.49 0.90 / 0.79 / 0.27 has low mode cardinality5; hence, it may not be properly modeled by the continuous latents in the VAE. A solution may be to model hybrid latents c.f., (Dupont, 2018). Instead, we conduct experiments by eliminating the discrete shape factor: we consider a subset of oval-only images and retain the remaining geometric sources of variation. Oval Shape Subset. From the original dataset, we select the oval shape images, yielding four independent factors of variation across 245, 760 images. The disentanglement scores are summarized in Tab. 1c, where now both RF-VAE-0 and RF-VAE yield nearly perfect performance in Metric I and II. Other models\u2019 scores fall signi\ufb01cantly below. Our RF-VAE identi\ufb01es \ufb01ve latent dimensions as relevant, Fig. 4, spreading the rotation across two latents. The images generated by traversal shown in Fig. 2b qualitatively align with the reported scores, indicating the 5Note that while other factors also possess discrete modes, their cardinality is higher than that of the shape. ability of our RF-VAE to recover the disentangled factors. 5.2.3. TEAPOTS The dataset contains 200,000 images of a teapot across \ufb01ve different sources of variation: azimuth, elevation, and the color of the teapot object. Again we choose ten dimensions for the latent vector z. The disentanglement scores are summarized in Tab. 1d. Since the factor labels provided in the dataset are real and uniquely valued, to evaluate metric I and II, we discretized them into ten bins. Interestingly, RF-VAE-0, which uses the knowledge of the number of true factors (5), performed worse than competing models. One possible explanation is that the model architecture may not be \ufb02exible enough to represent the full variation in the entire data6. This is supported in the fact that RF-VAE, which identi\ufb01ed 7 > 5 rele6This is a conjecture. Note that a more complex ResNet used in (Eastwood & Williams, 2018) failed in a similar manner. More rigorous failure analysis will be conducted in future work. \fRelevance Factor VAE (a) 3D-Faces dataset. (b) Sprites-Oval-Only dataset. (c) Teapots dataset. Figure 2. Latent space traversals in RF-VAE on 3D-Faces, Sprites, and Teapots datasets. (a) The four factors (z1, z4, z6, z8) recovered by our RF-VAE are highlighted within colored boxes. Variation in each of the four factors yields changes in a single aspect of synthesized images (z1 = elevation, z4 = azimuth, z6 = lighting, z8 = subject ID). (b) The recovered, highlighted, factors are (z1, z2, z7, z8, z9). 
z1 corresponds to scale, z2 Y -pos, z8 explains X-pos, and both z7 and z9 captures rotation. In all cases, other (non-highlighted) nuisance dimensions have little impact on image generation. (c) The seven recovered factors (z3, z4, z5, z7, z8, z9, z10) represent: z9 the R channel, both z8 and z10 have the G channel and azimuth entangled in each. z7 explains yellow color variation meaning (R,G) are entangled in it. Both z4 and z5 are numb, and z3 is responsible for elevation, but color variations are slightly entangled together. 1 2 3 4 5 6 7 8 9 10 0.00 0.25 0.50 0.75 1.00 1 2 3 4 5 6 7 8 9 10 0 2 4 6 Figure 3. RF-VAE on Sprites. (Left) Learned relevance vector r. (Right) Expected prior KLs for individual dimensions. 1 2 3 4 5 6 7 8 9 10 j in z_j 0 20 40 60 80 E_x[ KL(q(z_j|x)||N(0,1)) ] RF-VAE-Learn Figure 4. RF-VAE on Sprites Oval-Only. (Left) Learned relevance vector r. (Right) Expected prior KLs. vant factors, Fig. 5, attains higher performance. However, as shown in Fig. 2c, two (z4, z5) of the relevant dimensions are not indicative of the variations, possibly acting as a conduit to the decoding image generation process. It is worth noting that \u03b2-VAE performs poorly on this dataset, possibly due to its known drawback: it over-emphasizes the regularization of matching p(z) and q(z|x) across all x, which effectively makes x and z less co-dependent, resulting in the learned representation that carries less information from the input, as noted in (Alemi et al., 2018). 6." + }, + { + "url": "http://arxiv.org/abs/1609.09156v2", + "title": "Similarity Mapping with Enhanced Siamese Network for Multi-Object Tracking", + "abstract": "Multi-object tracking has recently become an important area of computer\nvision, especially for Advanced Driver Assistance Systems (ADAS). Despite\ngrowing attention, achieving high performance tracking is still challenging,\nwith state-of-the- art systems resulting in high complexity with a large number\nof hyper parameters. In this paper, we focus on reducing overall system\ncomplexity and the number hyper parameters that need to be tuned to a specific\nenvironment. We introduce a novel tracking system based on similarity mapping\nby Enhanced Siamese Neural Network (ESNN), which accounts for both appearance\nand geometric information, and is trainable end-to-end. Our system achieves\ncompetitive performance in both speed and accuracy on MOT16 challenge, compared\nto known state-of-the-art methods.", + "authors": "Minyoung Kim, Stefano Alletto, Luca Rigazio", + "published": "2016-09-28", + "updated": "2017-01-24", + "primary_cat": "cs.CV", + "cats": [ + "cs.CV", + "cs.LG" + ], + "main_content": "Introduction Object tracking has been evolving rapidly, becoming a very active area of research in machine vision. Several approaches have been proposed to improve tracking performance [1], with various applications from surveillance systems [2] to autonomous driving [3], and even sports analytics [4]. One major limitation of object tracking today, is the large number of hyper parameters required; this may harm robustness especially for real applications in unconstrained environments. During the past few years, deep neural networks (DNNs) have become popular for their capability to learn rich features. Accordingly, new approaches with DNNs for tracking have also been proposed [5\u20137]. These methods take advantage of Recurrent Neural Networks (RNNs) to incorporate temporal information. 
Although some of these methods outperform conventional ones, computational requirements are high, resulting in very low frame rates and latency. Nevertheless, temporal information such as motion \ufb02ow is crucial in object tracking, therefore cannot be discarded from a model without loss of performance. To address these issues, we present a new high speed tracking system, combining both appearance and temporal geometric information, while having a smaller number of hyper parameters. We achieve this by leveraging our newly designed Enhanced Siamese Neural Network (ESNN) architecture for similarity mapping: the ESNN is an extended Siamese neural network that combines appearance similarity with temporal geometric information and ef\ufb01ciently learns both visual and geometric features during end-to-end training. 11th Women in Machine Learning Workshop (WiML 2016), colocated with NIPS 2016, Barcelona, Spain. arXiv:1609.09156v2 [cs.CV] 24 Jan 2017 \fFigure 1: ESNN-based Multi-Object Tracking System 2 Background Although multiple object tracking plays a key role in computer vision, there exist few benchmarks for pedestrian tracking, fewer than for object detection [3, 8\u201310]. One reason is the dif\ufb01culty in standardizing the evaluation protocol, a controversial topic this day [11]; another reason may be high annotation cost. MOT16 [12] and KITTI tracking benchmarks [3] provide well established evaluation protocols with good quality annotations, and are widely used by researchers. MOT16 consists of 14 different sequences and KITTI consists of 50 sequences. Whereas KITTI videos are taken with moving cameras (attached to a vehicle), MOT sequences are taken with both static and moving ones. Also, even though both datasets contain multiple objects types such as cars, cyclists, pedestrians, and motorbikes, KITTI evaluates only on cars and pedestrians and MOT16 evaluates only pedestrians. For fair comparison, MOT16 evaluation speci\ufb01es additional information used by each submitted methods, for example, whether a method is online (no latency), and is using provided detection results. In this paper, we propose an online system based on provided detection results for two main reasons: \ufb01rst, we focus on visual tracking for ADAS and autonomous driving, and we believe reliable/lowlatency tracking system is crucial. Secondly, since detection performance highly affects tracking quality and we want to focus our efforts on improving the tracking algorithm, we choose to use provided detection results for fair comparison. Fig. 1 illustrates our tracking system based on ESNN. The system can be divided into two main steps: 1) ESNN-based Similarity Mapping and 2) Matching. A Siamese network, referred to as \u2018Base Network\u2019, is pre-trained with visual information of objects. Then, ESNN takes Intersection-over-Union (IoU) and Area Ratio information from pairs of objects as additional features, and builds a combined similarity mapping with both geometric and pre-trained Siamese network features. After ESNN is fully trained and similarity scores are computed, the matching algorithm produces the \ufb01nal tracking results. 3 Similarity Mapping ESNN uses a Siamese network that consists of two identical sets of convolutional neural networks, where the weights of convolutional layers are shared in between. The network takes a pair of image patches, cropped from original frame, and then maps them to L2 space where the Euclidean distance between each output can be used as similarity score. 
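A minimal weight-sharing sketch of this mapping is given below; the small encoder is only a stand-in for the actual base architecture described in Section 3.1, and serves to show that both patches pass through the same parameters before the Euclidean distance is taken.

import torch
import torch.nn as nn

class SiamesePair(nn.Module):
    def __init__(self, embed_dim=2):
        super().__init__()
        # Stand-in encoder; the paper's base network is a deeper conv/TanH stack.
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=5), nn.Tanh(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(16, embed_dim))

    def forward(self, patch_a, patch_b):
        # One set of weights, applied to both patches.
        f_a, f_b = self.encoder(patch_a), self.encoder(patch_b)
        return torch.norm(f_a - f_b, p=2, dim=1)  # Euclidean distance used as the similarity score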
The Base Network is built and trained \ufb01rst, then is extended to ESNN with geometric information. 3.1 Base Network Architecture The base architecture of our Siamese neural network is described in Fig. 2. For each convolutional layer, hyperbolic tangent (TanH) is used as activation function, and the \ufb01rst fully connected layer is followed by Recti\ufb01ed Linear Unit (ReLU) [13]. Kernel sizes for each convolutional and pooling layer are as follows: conv1(5x5), pool1(2x2), conv2(3x3), pool2(2x2), conv3(3x3), conv4(2x2), conv5(2x2), fc1(2048), fc2(1024), and feat(2) or feat(4). The feat(2) layer is \ufb01ne-tuned with the new feat(4) layer to incorporate geometrical featurs in ESNN. For loss function, contrastive loss Lc, proposed in [14], is used as follows: 2 \fFigure 2: Architecture of Base Siamese Neural Network for Similarity Training Figure 3: Architecture of Enhanced Siamese Neural Network En = \u2225F \u2212Fp\u22252 (1) Lc = 1 2N N X n=1 (y)E2 n + (1 \u2212y) max(m \u2212En, 0)2 (2) where En is Euclidean distance between the output features F and Fp of the Siamese neural network with input data pair d and dp, shown in Fig. 2. y denotes label of the pair, where y = 1 if (d, dp) is a matching pair and y = 0 otherwise. Finally, m is a margin parameter that affects contribution of non-matching pairs to the loss Lc, and we choose m = 3 as the best margin obtained by experiments. 3.2 ESNN Architecture In extension of the Base Network architecture above, the ESNN takes additional layers that learn from IoU Diou, and area variant of a pair of objects DArat. For a pair of object bounding boxes bi and bj, appearing in frame ft\u22121 and ft, DIoU and DArat are calculated as follows: [DIoU, DArat](bi, bj) = [area(bi \u2229bj) area(bi \u222abj), min(area(bi), area(bj)) max(area(bi), area(bj))] (3) Fig. 3 shows the extended architecture of our network. The additional layers up-sample input to the same dimension as the output of the Base Network NB, feat and featp. Layers in NB are locked during the \ufb01rst phase of training. 3.3 Training The Base Network is pre-trained on Market-1501 person re-identi\ufb01cation dataset [15] \ufb01rst. With batch size of 128, learning rate starting from 0.01, and SGD (Stochastic Gradient Descent), our 3 \fSiamese neural network converges well on pairs generated by Market-1501 dataset. Train and test losses of the training are shown in Fig. 4 (left). x-axis represents the number of epochs in two different scales for each loss. On Market-1501 test set, the trained model achieves precision = 0.9854, recall = 0.9774, and F1 = 0.9814. In addition, Fig. 4 (right) shows the Euclidean distance of the data pairs generated from the trained model on Market-1501 test set in logarithm scale (y-axis). With this pre-trained model, the network is then \ufb01ne-tuned on MOT16 dataset. Results will be discussed at the end of this section along with ESNN training results. 0 500 1000 1500 2000 2500 3000 3500 4000 4500 Train epochs 0.0 0.1 0.2 0.3 0.4 0.5 0.6 0.7 0.8 Loss train_loss test_loss 0 50 100 150 200 250 300 350 400 450 Test epochs Figure 4: Train/Test Loss of the Base Network (left) and Euclidean distance of Market-1501 test set with margin, m = 3 (black horizental line) (right) To train ESNN, the pre-trained Base Network model parameters are transferred. In \ufb01ne-tuning, layers in the Base Network are locked in the beginning, and unlocked in the \ufb01nal phase. Also, margin is set to m = 0.5. 
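The two training ingredients introduced in this section, the margin-based contrastive loss of Eqs. (1)-(2) and the geometric features of Eq. (3), are short to write down; the sketch below assumes boxes given as (x1, y1, x2, y2) corners, which is our choice and not specified in the paper.

import torch

def contrastive_loss(f, f_p, y, m=3.0):
    # Eqs. (1)-(2): y = 1 for matching pairs, 0 otherwise; m = 3 for the base
    # network and m = 0.5 for ESNN fine-tuning, as stated above.
    e_n = torch.norm(f - f_p, p=2, dim=1)
    loss = y * e_n**2 + (1 - y) * torch.clamp(m - e_n, min=0.0)**2
    return 0.5 * loss.mean()

def iou_and_area_ratio(b_i, b_j):
    # Eq. (3) for two axis-aligned boxes (x1, y1, x2, y2).
    ax1, ay1, ax2, ay2 = b_i
    bx1, by1, bx2, by2 = b_j
    iw = max(0.0, min(ax2, bx2) - max(ax1, bx1))
    ih = max(0.0, min(ay2, by2) - max(ay1, by1))
    inter = iw * ih
    area_i = (ax2 - ax1) * (ay2 - ay1)
    area_j = (bx2 - bx1) * (by2 - by1)
    d_iou = inter / (area_i + area_j - inter)
    d_arat = min(area_i, area_j) / max(area_i, area_j)
    return d_iou, d_arat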
Once the ESNN model is obtained, we analyze it on MOT16 train set, and compare the performance with results from the Base Network. Fig. 5 shows the Euclidean distance of MOT16 train set from the Base Network (left) and ESNN (right). On each \ufb01gure, the plot on top represents the Euclidean distance (y-axis) with IoU (x-axis) of the data. The bottom plot shows histogram of the Euclidean distance (x-axis) with normalized frequency (y-axis). The red points represent non-matching pairs, blue points for matching pairs, and red and blue dashed lines represent mean distance of each group. Finally, the black dashed line represents the margin m. The Base Network model achieves precision = 0.9837, recall = 0.9966, and F1 = 0.9901, and the ESNN model achieves precision=0.9908, recall=0.9990, and F1 = 0.9949. As shown in Fig. 5, the ESNN model outperforms the Base model. Note that, some of the misclassi\ufb01ed non-matching pairs with DIoU < 0.05 by the Base Network model are correctly classi\ufb01ed by the ESNN model. It means the ESNN can handle object pairs spatially far apart but sharing similar features (e.g. two far-apart persons with similar clothing), better than the Base Network by utilizing IoU and area variant information. (a) Base Network Model (b) ESNN Model Figure 5: Euclidean Distance on MOT16 Train Set 4 \f4 Matching Algorithm For the Base Network, a new score function is introduced by taking IoU and area variant in account, plus the score from Euclidean distance. For the ESNN, only Euclidean distance is used for scoring. Algorithm 1 Matching Algorithm 1: procedure MATCH(P, fn) \u25b7Score matrix as input 2: exID \u2190{existing IDs whithin previous n frames} 3: for i in reversed(sorted(P, score)) do \u25b7sort pairs with score 4: (IDexist, IDtgt) \u2190P[i] \u25b7pull candidate pair 5: if IDtgt / \u2208exID then 6: continue 7: end if 8: if IDtgt.notAssigned() then 9: if IDexist.notAssigned() then 10: Target[IDexist] \u2190IDtgt \u25b7new assignment 11: IDexist.setAssigned(True) 12: else if newIDexist \u2190FindBetterMatch() then 13: IDexist.setAssigned(False) 14: Target[newIDexist] \u2190IDtgt \u25b7switch assignment 15: newIDexist.setAssigned(True) 16: end if 17: end if 18: end for 19: for IDtgt \u2208{leftover tgt IDs} do 20: exID.append(IDtgt, fnew) \u25b7handle new IDs with frame info 21: end for 22: end procedure 4.1 Scoring Given detection boxes Bt\u22121 = {b1, . . . , bn} at frame t \u22121, and Bt = {b1, . . . , bm} at frame t, new score function for a pair SNew(bi, bj) where bi \u2208Bt\u22121, \u2200i = 1, . . . , n, and bj \u2208Bt, \u2200j = 1, . . . , k, can be derived as follows: SNew = SDist + SIoUSArat (4) where SDist denotes the score derived from the Euclidean distance Dsiam(bi, bj), output of our network for the pair (bi, bj), SIoU denotes Intersection-over-Union of the pair, shifted by 1.0, and SArat denotes the area ratio between them. To shorten notation, S(bi, bj) is written as S in Eq. 4. The exact functions of SDist, SIoU, and SArat are: SDist(bi, bj) = \u03b1 log0.1{max(\u03b3, Dsiam(bi, bj))} (5) SIoU(bi, bj) = 1.0 + area(bi \u2229bj) area(bi \u222abj) (6) SArat(bi, bj) = e min(area(bi),area(bj )) max(area(bi),area(bj )) \u2212\u03b4 (7) where we choose \u03b1 = 0.8, \u03b3 = 10\u22125, and \u03b4 = 0.2 as a bias term. Finally, SNew is obtained for the Base Network model, and SDist for the ESNN model. 
SNet = \u001aSNew, if Net = NB SDist, otherwise (8) 4.2 Matching As the second part of the tracking system, a simple yet ef\ufb01cient matching algorithm that takes the score matrix SNet as an input is derived as shown in Algorithm. 1. Only one hyper parameter is introduced by the algorithm, denoted by fn, specifying how many frames the tracker looks back 5 \fTable 1: Benchmark Results on MOT16 Test Dataset [16] Method Online MOTA MOTP Hz FAF MT ML FP FN IDs Frag NMOT [17] No 46.4 76.6 2.6 1.6 18.3% 41.4% 9,753 87,565 359 504 JMC [18] No 46.3 75.7 0.8 1.1 15.5% 39.7% 6,373 90,914 657 1,114 MHT_DAM [19] No 42.8 76.4 0.8 1.2 14.6% 49.0% 7,278 96.607 462 625 Ours Yes 35.3 75.2 7.9 0.9 7.4% 51.1% 5,592 110,778 1,598 5,153 TBD [20] No 33.7 76.5 1.3 1.0 7.2% 54.2% 5,804 112,587 2,418 2,252 CEM [21] No 33.2 75.8 0.3 1.2 7.8% 54.4% 6,837 114,322 642 731 DP_NMS [22] No 32.2 76.4 212.6 0.2 5.4% 62.1% 1,123 121,579 972 944 SMOT [23] No 29.7 75.2 0.2 2.9 5.3% 47.7% 17,426 107,552 3,108 4,483 JPDA_m [24] No 26.2 76.3 22.2 0.6 4.1% 67.5% 3,689 130,549 365 638 Table 2: Results on KITTI MOT Dataset using public(top) & private(bottom) detections Name MOTA MOTP MOTAL Hz Rcll Prcn FAR MT PT ML FP FN IDs FM Car 65.97 79.31 66.43 7.52 76.47 91.45 24.45 44.21 45.12 10.67 2723 8963 161 969 Pedestrian 33.69 70.46 34.42 11.81 44.22 82.13 20.19 10.31 52.23 37.45 2246 13024 172 1212 to generate pairs with the current frame. With fn and SNet map where data pair P is the keys, the algorithm starts matching with the highest similarity score. It returns the best match solely based on the scores, and when there is a con\ufb02ict, it tries once more to \ufb01nd a better match which can be replaced with the current match. After all possible pairs are examined and redundant pairs are \ufb01ltered, new IDs are assigned to the leftover targets. To provide a deeper insight on the advantages of this algorithm, we also employ a matcher based on the popular Hungarian algorithm and report the obtained results in the experimental section. One of the major differences between our proposed matching strategy and the Hungarian algorithm is computational complexity. In fact, while the former runs in linear time with the number of people in the scene, the Hungarian algorithm has a complexity of O(n3) and can become a signi\ufb01cant performance bottleneck in crowded sequences. 5 Evaluation Our system is evaluated on MOT16 train and test set, as well as on the KITTI Object Tracking Evaluation 2012 database. The results on MOT16 test set is shown in Table 1, along with other methods for comparison. Only the referencible methods that use provided detection results are shown, along with an indication whether the method is online or not. Table 2 reports the results on the KITTI database for the two evaluated classes, namely Car, Pedestrian. Notice that no \ufb01ne-tuning has been performed on the KITTI sequences, and the network has never seen objects from the Car class during training. Nonetheless, the proposed algorithm achieves competitive performance, showing the good generalization capabilities of our architecture. Even though an accurate comparison on speed is not quite possible due to lack of information on hardware speci\ufb01cation where other benchmarks were conducted, the speed of our method is quite noticeable while achieving competitive performance. 
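For reference in the comparison that follows, this is a simplified paraphrase of the greedy pass of Algorithm 1; the FindBetterMatch conflict handling and the fn-frame look-back window are omitted here.

def greedy_match(scores, next_id):
    # scores: dict mapping (existing_track_id, detection_idx) -> similarity score.
    assignment, used_tracks = {}, set()
    for (track_id, det_idx), _ in sorted(scores.items(), key=lambda kv: kv[1], reverse=True):
        if det_idx in assignment or track_id in used_tracks:
            continue
        assignment[det_idx] = track_id
        used_tracks.add(track_id)
    # Leftover detections start new tracks, as in the final loop of Algorithm 1.
    for det_idx in {d for _, d in scores} - assignment.keys():
        assignment[det_idx] = next_id
        next_id += 1
    return assignment, next_id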
Given the score matrix SNet provided by the siamese network, we compare the performance of the proposed matching algorithm to a baseline that uses the widely adopted Hungarian algorithm. The proposed matching approach is generally better than the Hungarian algorithm, who scores a MOTA of 27.7%. While a complete evaluation is omitted due to space constraints, it is worth noticing that besides resulting in a lower MOTA, the Hungarian algorithm is on average 1.91 times slower. In particular, while the execution time is substantially unchanged in some scenarios such as MOT16-05 (1.03 times slower), the Hungarian\u2019s O(n3) scalability is especially clear when dealing with the most crowded scenes, e.g. MOT16-04 (2.69 times slower). 6 \fTable 3: Results on KITTI MOT Dataset using private detections Name MOTA MOTP MOTAL Hz Rcll Prcn FAR MT PT ML FP FN IDs FM Car 70.78 80.38 71.25 7.52 79.18 92.71 20.72 51.68 40.55 7.77 2305 7701 169 938 Pedestrian 37.04 71.13 37.90 11.81 46.56 84.53 17.88 14.09 56.36 29.55 1989 12473 202 1270 6" + }, + { + "url": "http://arxiv.org/abs/1503.01824v1", + "title": "Deep Clustered Convolutional Kernels", + "abstract": "Deep neural networks have recently achieved state of the art performance\nthanks to new training algorithms for rapid parameter estimation and new\nregularization methods to reduce overfitting. However, in practice the network\narchitecture has to be manually set by domain experts, generally by a costly\ntrial and error procedure, which often accounts for a large portion of the\nfinal system performance. We view this as a limitation and propose a novel\ntraining algorithm that automatically optimizes network architecture, by\nprogressively increasing model complexity and then eliminating model redundancy\nby selectively removing parameters at training time. For convolutional neural\nnetworks, our method relies on iterative split/merge clustering of\nconvolutional kernels interleaved by stochastic gradient descent. We present a\ntraining algorithm and experimental results on three different vision tasks,\nshowing improved performance compared to similarly sized hand-crafted\narchitectures.", + "authors": "Minyoung Kim, Luca Rigazio", + "published": "2015-03-06", + "updated": "2015-03-06", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "cs.NE" + ], + "main_content": "Introduction Recently, deep neural networks (DNNs) have led to significant improvement in several machine learning domains, from speech recognition (Dahl et al., 2012) to computer vision (Krizhevsky et al., 2012; Taigman et al., 2013) and machine translation (Sutskever et al., 2014). DNNs have reached state of the art performance thanks to their theoretically proven modeling and generalization capabilities (Hornik et al., 1989; Hornik, 1991; K\u02da urkov\u00b4 a, 1992), and practically driven by improvements in training algorithms for rapid parameter estimation (Martens, 2010; Sutskever et al., 2013), novel regularization methods to reduce over\ufb01tting (Srivastava et al., 2014) as well as ever inProceedings of the 31 st International Conference on Machine Learning, Lille, France, 2015. JMLR: W&CP volume 37. Copyright 2015 by the author(s). creasing data-sets (Deng et al., 2009) and powerful new computing platforms (Chetlur et al., 2014). 
However, before parameter estimation (so called training) can begin the DNN\u2019s structure (also called model architecture) is usually manually de\ufb01ned by domain experts (Lin et al., 2013), and can often account for a substantial portion of the \ufb01nal system performance (Szegedy et al., 2014). We view this step as a bottleneck in the current deep-learning pipeline, one that relies on a trial and error human expert in the loop approach which is, to say the least, rather alchemic in nature. We want to address this basic scalability issue of the deep learning development pipeline with training methods that automatically search for DNN architectures while jointly estimating model parameters. While structural optimization is a notoriously dif\ufb01cult combinatorial task, successful strategies were adopted in the past for (shallow) models that motivated our approach. For instance, for Hidden Markov Models with Gaussian mixture kernels, split/merge algorithms were used to independently vary model complexity for each HMM state, resulting in improved accuracy for large vocabulary speech recognition (Sankar, 1998). Information theoretic methods, such as the minimum description length criterion, were also applied to the problem of structural optimization (Barron et al., 1998), resulting in improved performance in speech recognition (Shinoda & Watanabe, 2000) and as well as training algorithms for autoencoders (Hinton & Zemel, 1994). However, to the best of our knowledge, there is little published work on structural optimization in the deep learning community, with the notable exception of work based on empirical evaluation (Bergstra & Bengio, 2012) and random search strategies (Bergstra & Bengio, 2012). Although, recently Bayesian optimization of hyper-parameters have been introduced (Snoek et al., 2012). While these works are interesting, hyper-parameters are only one aspect of the DNN structure, albeit one which is closely related to the performance of the training algorithm. However, there are several other structural parameters that strongly affect DNN\u2019s performance which are usu\fDeep Clustered Convolutional Kernels ally set by experimental trial and error, such as network depth and for convolutional models the number of convolutional \ufb01lters and kernel size for each layer. In our work, we aim to optimize model architecture, speci\ufb01cally targeting convolutional neural networks (CNNs), and optimizing complexity for each layer. Therefore, in our approach, the model architecture is not maintained constant during training, instead the model complexity is continuously optimized throughout the training step (parameter estimation by stochastic gradient descent), resulting, we believe, in a more scalable approach to the training of deep neural networks. In section 2, we describe the general approach we are taking for problem of structure optimization of convolutional neural networks. In section 2.1, we describe the theoretical foundations of our approach. In section 3, we discuss data-sets and experimental results and in section 4 we discuss about limitations and possible future improvements. 2. Deep Clustered Convolutional Kernels The basic idea for our Deep Clustered Convolutional Kernels (DCCKs) it a convolutional model architecture and associated structural training algorithm. 
We adopt a split/merge outer-loop to the training process that \ufb01rst increases model capacity to model new factors of variability seen in the data, then estimates new parameters for this larger model by stochastic gradient descent (SGD), and \ufb01nally reduces model capacity to minimize model-space redundancy. Our approach takes inspiration by previous work in the area of Gaussian kernel HMMs (Sankar, 1998; Rigazio et al., 2000; Bocchieri & Mak, 2001; Lee et al., 2001), and is philosophically based on Occam\u2019s razor principle whereby a smaller model with similar performance on a given data-set is likely to have better generalization capabilities to new unseen data. An alternative view of work may be in the context of recent developments in DNN\u2019s compression: (Ba & Caruana, 2014) shows that a (shallow) DNN can approach the performance of a substantially larger DNN when trained to mimic the logit output of the larger model. Similarly, (Hinton et al., 2014) shows that logit-mimic training (referred to as \u201cDark Knowledge\u201d) results in orders of magnitude smaller models, compared to the initial complex ensemble models, yet provides competitive performance when tested on both small tasks (MNIST) as well as large scale industrial tasks (large vocabulary speech recognition). It is important to notice that for both these works the authors acknowledge that, while such smaller high performance models can be obtained by logit mimic training from a more complex model set, thus showing that there is an optimal point in the parameter space with high performance, there is currently no known training procedure to directly Algorithm 1 Deep Clustered Convolutional Kernels training algorithm Input: Initial network architecture net with parameters \u03bb, noise variance \u03c3n and jitter angle \u03c3\u03b1, stopping conditions \u03b40,1,2 and mini-batch size while \u2206Validation Accuracy > \u03b40 do while \u2206Validation Accuracy > \u03b41 do // SPLIT nk = gaussianNoise(\u03c3n) \u03b1k = gaussianNoise(\u03c3\u03b1) \u03bb1 = concat(\u03bb, \u03bb + nk) \u03bb = concat(\u03bb1, rotate(kernel(\u03bb), \u03b1k)) // FINETUNE while \u2206Validation Accuracy > \u03b42 do runSGD(M minibatches) end while end while // MERGE centroid = Kmeans(kernels(\u03bb)) \u03bb = nearest(kernels(\u03bb), centroid) while \u2206Validation Accuracy > \u03b42 do runSGD(M minibatches) end while end while achieve such optimal point in the smaller model. In this view \ufb01nding such an elusive point in parameter space by systematically optimizing DNN\u2019s structure to eliminate redundancy and minimizing number of parameters, while at the same time estimating the model parameters under the given loss function. The main contribution of our work is a training methodology to iteratively optimize the number of convolutional kernels while estimating the convolutional \ufb01lter parameters. 2.1. Training algorithm Conceptually our training procedure is rather straightforward: starting from an initial network architecture, we \ufb01rst train the model by SGD until performance tops out on a validation set. Next, we increase the model complexity of selected convolutional layers by splitting the convolutional kernels. Splitting has the purpose of creating new plausible convolutional \ufb01lters given the current set of \ufb01lters and can be done by applying image pre-processing techniques to the kernels, as well as adding jittering and noise to create enough variation. 
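A compact sketch of the SPLIT and MERGE steps of Algorithm 1 on a single convolutional weight tensor is given below; the scipy-based rotation and the default noise scales are our own choices, since the paper only names the hyper-parameters sigma_n and sigma_alpha.

import numpy as np
from scipy.ndimage import rotate
from sklearn.cluster import KMeans

def split_kernels(W, sigma_n=0.01, sigma_alpha=15.0, rng=np.random):
    # SPLIT: append noise-perturbed and randomly rotated copies of the kernels.
    # W has shape (N, d, k, k).
    noisy = W + rng.normal(0.0, sigma_n, size=W.shape)
    angles = rng.normal(0.0, sigma_alpha, size=W.shape[0])
    rotated = np.stack([rotate(w, a, axes=(1, 2), reshape=False, mode="nearest")
                        for w, a in zip(W, angles)])
    return np.concatenate([W, noisy, rotated], axis=0)

def merge_kernels(W, k):
    # MERGE: k-means over flattened kernels, keeping the kernel nearest to each centroid.
    flat = W.reshape(W.shape[0], -1)
    km = KMeans(n_clusters=k, n_init=10).fit(flat)
    keep = [int(np.argmin(np.linalg.norm(flat - c, axis=1))) for c in km.cluster_centers_]
    return W[keep]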
After splitting, the model is again trained by SGD and possibly split again until performance tops out. At that point model is merged to reduce redundancy in the parameter space and again trained by SGD. Notice that the split/merge procedure can start at any layer but than has to propagate upwards to change the number of kernels of the connecting layers (fan-out). In our setup, given by input \fDeep Clustered Convolutional Kernels data x, forward propagation f is done by: f(x) = g(Wx + B) (1) where g is ReLU activation function with g(x) = max(0, x), W is the weight parameters of the convolutional layer, and B is the biases, each with the following dimensions: Wl = Nl \u00d7 P z }| { dl \u00d7 kl \u00d7 kl (2) Bl = 1 \u00d7 1 \u00d7 1 \u00d7 Nl (3) where l is a convolutional layer with l \u2208{1, ..., L}, Nl is the number of outputs of l, dl is number of channels of l, and kl is size of kernel used for l. We use square convolutional kernels, so kernel dimensions are kl \u00d7 kl. For simplicity, we de\ufb01ne sub-dimension of W as P, shown in 2. For the \ufb01rst convolutional layer we have d1 = 3 for RGB images and d1 = 1 for gray-scale images. In the following convolutional layers, d is the output of the previous convolutional layer thus, P would be the size of the feature vectors. This implies that, when we perform the split/merge steps for level l, we need to update both Wl and Bl as well as Wl+1. Biases for the following convolutional layer are independent. An important caveat is that the order of the optimal split/merge operation depends on the speci\ufb01c data-set and the \ufb01lter parameters. For instance, if the initial \ufb01lters are sparse it is bene\ufb01cial to do merge \ufb01rst. Otherwise, it is best to perform split \ufb01rst especially on smaller data-sets when the initial \ufb01lters are already compact and discriminative. 2.1.1. SPLITTING KERNELS With splitting, we want to increase model complexity by creating new convolutional kernels from the set of existing well-trained kernels. Therefore, we create new kernels by selectively choosing from a \ufb01xed set of transformations. The possible set of transformations to play with is vast and includes the six isometries of the plane, angular rotation, change in contrast (negative \u201creversing\u201d) and many others. In our experiments, we focus on two transformations that seemed to provide a consistent improvement: \u2022 Rotation creates new kernels by rotating existing kernels in random directions. \u2022 Noise perturbation creates new kernels by adding Gaussian noise to the existing kernels. One important aspect we veri\ufb01ed in our experiments is that rotating kernels has a lower computational cost at training time than rotating training images to create augmented training set. Moreover, we observed that rotating the \ufb01lters can help improve robustness for highly tilted objects outlets, which would be otherwise hard to correctly classify (see Figure 1). Adding random Gaussian noise, on the other hand, has the obvious bene\ufb01t of creating diversity and helping with the SGD, like previously reported by (Srivastava et al., 2014). Regarding the splitting strategy, currently we took the simplest approach and split every kernel by a \ufb01xed amount. This is bound to be locally unoptimal, and surely a better splitting strategy that tries to maximize some diversity or discrimination criteria could be devised, instead of indiscriminately splitting every single kernel. 
However, for the most part, we observe that wasteful parameters created by this simple splitting strategy will be eliminated during the \ufb01nal merging step; therefore, aside from a potential sub-optimality in the CPU/Memory usage, we speculate the \ufb01nal model accuracy might not be very affected by this uniform splitting strategy. (a) 0 10 20 30 40 (b) 0.0 0.2 0.4 0.6 0.8 1.0 (c) 0 10 20 30 40 (d) 0.0 0.2 0.4 0.6 0.8 1.0 (e) 0 10 20 30 40 (f) 0.0 0.2 0.4 0.6 0.8 1.0 Figure 1. (a) Highly tilted, misclassi\ufb01ed test image (b) Soft-max output of original baseline model resulting in miss-classi\ufb01cation (c) Baseline model convolutional kernels: notice high proportion of redundant kernels (d) Soft-max at DCCKs intermediate training stage, after split and \ufb01ne-tuning (e) Final DCCK convolutional kernels, after merge and \ufb01ne-tuning, showing reduced redundancy (f) Final DCCK soft-max output, correctly classifying the image 2.1.2. MERGING KERNELS After the splitting step, the model might have too much capacity and thus part of the model might become overparameterized, possibly resulting in over-\ufb01tting and lower generalization power. Therefore the merging step has the \fDeep Clustered Convolutional Kernels purpose of removing model space redundancies and reducing model size, while maintaining the overall model accuracy. In our algorithm we use k-means clustering to merge kernels since, naturally, k-means cluster distortion under the de\ufb01ned distortion measure (we employ L2 norm to compute cluster distortion). We empirically observe that k-means clustering to merge \ufb01lter maps is effective in reducing kernel\u2019s redundancy (see \ufb01lters in 2.1.2). Then, we train the network and get weight and bias matrices from each convolutional layer to then choose the \ufb01lters that are nearest to each centroid. We update Wl and Bl, with W \u2032 l and B\u2032 l using k-means clustering to get centroids C as: C = arg min P C X j=1 X p\u2208P ||p \u2212\u00b5j||2 (4) W \u2032 l = \u001a [P \u2032 1, ..., P \u2032 i, ..., P \u2032 C], or [C1, ..., Ci, ..., CC] (5) where P \u2032 i = arg min P \u2032 ||P \u2032 \u2212Ci ||2 , i = {1, ..., C} (6) and \ufb01nally, B\u2032 l = \u001a [B\u2032 1, ..., B\u2032 i, ..., B\u2032 C], or [\u03b21, ..., \u03b2i, ..., \u03b2C] (7) where B\u2032 i is P \u2032 i\u2019s matched biases matrix, and \u03b2i is \u03b2\u2032 i = P i bn \u03b7i , n = {1, ...Nl} (8) where \u03b7i is number of p in group Ci. As shown in 5 and 8, we explored two different methods to update W and B. The \ufb01rst method consists in choosing the Pi that is closer to each centroid Ci. In this case, we use the correspondent bias vector Bi to the corresponding Pi selected. The other way is to use the centroid Ci itself as \ufb01lter parameters and update Bi with average bias from each cluster. An important detail to choose the right value of k: if we choose k too small then average cluster distortion will be too high to appropriately represent the model parameters, possibly resulting in ineffective features maps. On the other hand, if we choose k too big, not enough kernels will be merged. This, unfortunately, may very well be a hyper-parameter that will have to be manually tuned. Table 2 shows k selected for each experiment which gives the best results on our network models. Finetune 96 kernels 150 kernels 32 kernels MERGE SPLIT Figure 2. 
DCCK training example: starting from a large GTSRB model 150 convolutional kernels for the \ufb01rst layer, the algorithm \ufb01rst merges it to 32 kernels. After \ufb01ne-tuning, kernels are split by adding noise and rotating, then \ufb01ne-tuned one more time. 3. Experimental results Our experimental results are based on three different datasets: MNIST, German Traf\ufb01c Sign Recognition Benchmark (GTSRB), and CIFAR-10. As much as possible, to make our experiments signi\ufb01cant and to validate our approach, we started from hand-tuned model architectures that were as close as possible to the state of the art, in an effort to prove that our split/merge training procedure can still improve model architecture even when starting from a very highly tuned architecture. Baseline performance are reported in Table 1. For all experiments, we used the BVLC Caffe C++ package (Jia et al., 2014). We started our experiments from MNIST since the quick training time allowed to quickly determine reasonable range of hyper-parameters such as the number of centroids k, number of kernels for the split/merge procedure. Next, we move to a more realistic task such as GTSRB for which we started from an initial model, extremely close to the state of the art and \ufb01nally con\ufb01rm the portability of our \ufb01ndings on the harder CIFAR-10 data-set. We report the details of each data-set experiments in the following sections. \fDeep Clustered Convolutional Kernels MNIST GTSRB1 GTSRB-3DNN CIFAR-10 0.82% 2.44% 1.24% 10.4% Table 1. Baseline models performance on the three data-sets selected for our experiments. LAYER # OF MAPS KERNEL INPUT 3 CONVOLUTIONAL 100 5X5 MAX POOLING 100 2X2 CONVOLUTIONAL 50 5X5 MAX POOLING 50 2X2 FULLY CONNECTED 100 1X1 FULLY CONNECTED 10 1X1 Table 2. MNIST baseline architecture 3.1. MNIST results The MNIST data-set contains 60,000 training images and 10,000 testing images of hand-written digits of size 28x28. The baseline model is composed of two convolutional layers and two fully-connected layers, as shown in Table 2, with ReLU and pooling following each convolutional layer. This baseline model achieves 0.82% Error rate with this simple network. The DCCKs training algorithm begins by splitting the \ufb01rst convolutional layer from 100 to 200 kernels; after the subsequent \ufb01ne-tuning the model achieved 0.59% error rate, which is almost 30% relative improvement from the original model. This compared favorably to a 200 kernel models trained from scratch, which achieves 0.78%, and even a 300 kernels model trained from scratch, which achieves 0.75%. This veri\ufb01es that splitting \ufb01lters has the potential to help the following SGD based \ufb01ne-tuning to achieve an optimal point which generalists better. Also, more importantly after following merging step, back to 100 kernels, the performance dropped only 0.01% to an error rate of 0.59%. 3.2. GTSRB results The GTSRB data-set contains 39,209 training images and 12630 testing images of various size, with 43 different classes consisting of standard traf\ufb01c signs from Germany (Houben et al., 2013). First, we resized all images to 48x48 and then we applied pre-processing techniques such as histogram equalization, adaptive histogram equalization, and contrast normalization. 
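One plausible implementation of this preprocessing step is sketched below; the clip limit, tile size and the per-image contrast normalization are our guesses, since the paper does not list parameter values.

import cv2
import numpy as np

def preprocess_gtsrb(img_bgr):
    img = cv2.resize(img_bgr, (48, 48))
    ycrcb = cv2.cvtColor(img, cv2.COLOR_BGR2YCrCb)
    ycrcb[:, :, 0] = cv2.equalizeHist(ycrcb[:, :, 0])           # global histogram equalization
    clahe = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8))
    ycrcb[:, :, 0] = clahe.apply(ycrcb[:, :, 0])                 # adaptive histogram equalization
    img = cv2.cvtColor(ycrcb, cv2.COLOR_YCrCb2BGR).astype(np.float32)
    return (img - img.mean()) / (img.std() + 1e-8)               # per-image contrast normalization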
For this task, we have two sets of initial networks: a single model baseline GTSRB1, consisting of three convolutional and two fully connected reaching 2.44% error rate, and larger state of the art ensemble model GTSRB-3DNN (Table 4), inspired by MCDNN(Ciresan et al., 2012), and reaching 1.24% error NO. STAGE CONV1 CONV2 ERR(%) 1 ORIGINAL 100 50 0.82 2 ORIGINAL 200 50 0.78 3 ORIGINAL 300 50 0.75 4 SPLIT FROM [1] 200 50 0.58 5 MERGE FROM [4] 100 50 0.59 Table 3. MNIST error rate after \ufb01ne-tuning. Notice that clustering was performed on the \ufb01rst convolutional layer only. LAYER # OF MAPS KERNEL(GTSRB-3DNN) INPUT 3 CONVOLUTIONAL 150 3X3, 3X3, 3X3 MAX POOLING 150 2X2, 2X2, 2X2 CONVOLUTIONAL 150 4X4, 4X4, 2X2 MAX POOLING 150 2X2, 2X2, 2X2 CONVOLUTIONAL 250 4X4 4X4, 2X2 MAX POOLING 250 2X2 2X2, 2X2 FULLY CONNECTED 500 1X1 1X1, 1X1 FULLY CONNECTED 43 1X1 1X1, 1X1 Table 4. GTSRB-3DNN architecture rate, which is within 0.2$ from the best published result. We remark the ensemble models use different input size of 48x48 pixels, 38x48 pixels and 28x48 pixels: because of this, we expected a high degree of redundancy on the GTSRB-3DNN kernels which may be successfully exploited by the DCCKs merging step. Indeed, by visually inspecting the lower convolutional layers we could easily identify an abundant amount of redundancy (see 2.1.2). Because of this highly redundant structure in the initial model, we inverted the sequence of our training procedure to \ufb01rst merge kernels instead of splitting, which maintains the accuracy and provides signi\ufb01cantly faster training 8. Furthermore, the speci\ufb01c structure of the traf\ufb01c signs provided for some peculiar behaviors on this database: for instance, kernel rotation especially helped improving performance. A detailed inspection of the recognition errors highlighted that several traf\ufb01c signs were misclassi\ufb01ed by the baseline model were highly tilted; such instances were mostly recovered and correctly recognized after DCCKs training (see Figure 1 for one example of such instance). We also remark that using centroids as new kernels resulted in better gains on this data set. Table 5 and Table 6 shows the experimental results. We remark that in all the experiments, in almost all cases, we either achieve signi\ufb01cantly better performance or similar performance with signi\ufb01cantly reduced model size. One exception worth noticing is [5] in Table 5 which shows the worst performance of all experiments: in this case we merged the last convolutional layer which is fully con\fDeep Clustered Convolutional Kernels NO. STAGE CONV1 CONV2 CONV3 ERR(%) 1 ORIGINAL 150 150 250 2.44 2 MERGE [1] 32 150 250 2.34 3 MERGE [2] 32 32 250 2.7 4 MERGE [2] 32 64 250 2.36 5 MERGE [3] 32 32 32 3.82 6 SPLIT [2] 64 150 250 2.5 7N SPLIT [3] 32 64 250 2.25 8R SPLIT [3] 32 64 250 2.15 9 SPLIT [1] 300 150 250 2.24 10 MERGE [1] 40 150 250 2.31 11 SPLIT [1] 150 300 250 2.27 Table 5. GTSRB1 baseline model experiments, \u2019R\u2019 denotes \u2019Rotation\u2019, and \u2019N\u2019 denotes \u2019Noise perturbation\u2019. Remark that [8R] which splits both the \ufb01rst and the second convolutional layer, followed by merge of the second layer, achieved the best performance. Instead, [5] which merges the last convolutional layer had a performance drop; we speculate that this is due to dif\ufb01culty in optimizing the following fully connected layer. NO. 
STAGE CONV1 CONV2 CONV3 ERR(%) 1 ORIGINAL 150 150 250 1.24 2 ORIGINAL 16 150 250 1.67 3 MERGE [1] 32 150 250 1.18 4 MERGE [1] 16 150 250 1.25 5 SPLIT [1] 300 150 250 1.21 6 SPLIT [3] 64 150 250 1.15 Table 6. Results table for DCCK trained from the state of the art GTSRB-3DNN initial model, showing a small but signi\ufb01cant improvement. nected to the \ufb01rst fully connected layer of this network architecture. We speculate this issue is due to the fact that is notoriously hard to optimize parameters of fully connected layers, splitting a convolutional layer which fans-out into a fully connected layer has the potential to harm the parameter structure to a point where SGD cannot easily recover. 3.3. CIFAR10 results The CIFAR-10 data-set consists of 50,000 training and 10,000 testing images. Each image is 32x32 pixels and represent a class of natural occurring objects. To develop the CIFAR-10 baseline we used the same techniques discussed in (Goodfellow et al., 2013) and the Network-InNetwork (Lin et al., 2013) model which achieves a baseline 10.4% error rate, which is within reasonable distance from to the state of the art. When we apply DCCKs training on the CIFAR-10 data-set, the increased performance is not as large as on the previous data-sets but it is still signifNO. STAGE CONV1 CONV2 CONV3 ERR(%) 1 ORIGINAL 192 192 192 10.4 2 SPLIT [1] 384 192 192 10.29 3 SPLIT [1] 576 192 192 10.25 4 MERGE [3] 192 192 192 10.2 5 SPLIT [1] 192 192 384 10.04 6 SPLIT [1] 192 384 192 10.04 7 MERGE [6] 192 192 192 10.28 Table 7. Result table for CIFAR-10. icant and consistent. We believe that this is because the highly successful highly (manually) optimized NetworkIn-Network architecture makes it harder for the automatically devised DCCKs training to provide a large improvement. Therefore these results should demonstrate that DCCKs may still provide some improvement even when applied on top of more complex highly tuned architectures, while keeping the number of parameters under control. Additionally we show that by splitting layers and doubling the number of parameters we could achieve an additional 0.5% average error rate improvement. 0 20 40 60 80 100 120 140 Epochs 0.966 0.968 0.970 0.972 0.974 0.976 0.978 0.980 Accuracy GTSRB1_merge GTSRB1_original GTSRB1_split Figure 3. Test-set accuracy of GTSRB1 (simple) network during \ufb01ne-tuning. Notice that GTSRB1 merge and GTSRB1 merge have the same number of parameters, but the optimized DCCK architecture shows better accuracy throughout epochs. \fDeep Clustered Convolutional Kernels 0 20 40 60 80 100 120 140 Epochs 0.080 0.085 0.090 0.095 0.100 0.105 0.110 0.115 0.120 0.125 Loss GTSRB1_merge GTSRB1_original GTSRB1_split Figure 4. Test-set loss of GTSRB1 (simple) network during \ufb01netuning. Notice that GTSRB1 merge and GTSRB1 merge have the same number of parameters, but the optimized DCCK architecture shows better accuracy throughout epochs. 4. Discussion In this work, we introduced the concept of DCCKs and introduced a training procedure whereby convolutional kernels learned by SGD can be effectively split and merged. Experimental results con\ufb01rmed this process results in gradually improving performance, while the training algorithm jointly optimizes structure as well as model\u2019s parameters. Results show that DCCKs can make parsimonious use of model capacity by converging towards the minimal number of parameters that gives the best performance, even when starting with highly manually optimizing network architecture. 
Figure 3 shows training and validation data loss over \ufb01ne-tune epochs; the \u201coriginal\u201d and the \u201cmerge\u201d curves refer to training and generalization loss for models having the same number of parameters; notice how the \u201cmerge\u201d curve is consistently above the \u201coriginal\u201d curve, apparently providing an upper-bound to the loss, and thus empirically con\ufb01rming that the DCCKs architecture was indeed an improved by the training algorithm. Moreover, in some experiments, DCCKs resulted in signi\ufb01cantly higher performance with smaller number of parameters than the original model. On the other hand, DCCKs showed bigger gains on simpler databases, such as MNIST and GTSRB, than on more complex CIFAR-10 data-set and to the more complex Network-In-Network model architecture. This is however MODEL STAGE CONV1 CONV2 SPEED(MS) 1. SIMPLE ORIGINAL 150 150 14.8 2. SIMPLE MERGE [1] 32 150 14.1 3. SIMPLE MERGE [2] 32 64 12.6 4. 3-DNNS ORIGINAL 150 150 27.9 5. 3-DNNS MERGE [4] 32 150 19.4 Table 8. Speed comparisons for GTSRB1 and GTSRB-3DNNs models and their corresponding DCCK trained models. Test time of forward-pass only with minibatches of 10 48x48 pixel images on nVidia GeForce GTX 770. to be expected, especially because the NIN architecture is extremely well tuned and very high performance to begin with, so it is natural to expect smaller gains by our automatic structure optimization procedure. Beside the obvious advantage of automatic structure optimization, a side bene\ufb01t of DCCKs training is that manipulating kernels takes less computations than pre-processing training data, which makes DCCKs optimization more ef\ufb01cient. To conclude, we believe there are several aspects of DCCKs training algorithm that could be improved. As we mentioned in 2.1.1 currently all kernels are split by the same amount. However, one could argue that some kernels might be better than others and should be replicated \ufb01rst, possibly based on the ability provide new discriminative features. If we could determine such kernels we could potentially improve training speed, though, potentially \ufb01nal accuracy after the merge step might not be much impacted as much. Finding a more extensive set of kernel transformations to achieve a highly selective split step would also be an appropriate next step, as well as comparison and combination with logit-mimic training and model compression techniques (Ba & Caruana, 2014; Hinton et al., 2014). Ultimately, like for any new methodology in the deeplearning sector, it would be very important to test how well DCCKs scale higher dimensional larger problems, such as IMAGE-NET and to different non-vision tasks such as speech recognition or language modeling." + } + ], + "Md Rifat Arefin": [ + { + "url": "http://arxiv.org/abs/2402.13368v1", + "title": "Unsupervised Concept Discovery Mitigates Spurious Correlations", + "abstract": "Models prone to spurious correlations in training data often produce brittle\npredictions and introduce unintended biases. Addressing this challenge\ntypically involves methods relying on prior knowledge and group annotation to\nremove spurious correlations, which may not be readily available in many\napplications. 
In this paper, we establish a novel connection between\nunsupervised object-centric learning and mitigation of spurious correlations.\nInstead of directly inferring sub-groups with varying correlations with labels,\nour approach focuses on discovering concepts: discrete ideas that are shared\nacross input samples. Leveraging existing object-centric representation\nlearning, we introduce CoBalT: a concept balancing technique that effectively\nmitigates spurious correlations without requiring human labeling of subgroups.\nEvaluation across the Waterbirds, CelebA and ImageNet-9 benchmark datasets for\nsubpopulation shifts demonstrate superior or competitive performance compared\nstate-of-the-art baselines, without the need for group annotation.", + "authors": "Md Rifat Arefin, Yan Zhang, Aristide Baratin, Francesco Locatello, Irina Rish, Dianbo Liu, Kenji Kawaguchi", + "published": "2024-02-20", + "updated": "2024-02-20", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "cs.CV" + ], + "main_content": "Introduction A critical concern with deep learning models arises from their well-known tendency to base their predictions on correlations present in the training data rather than robustly informative features (Arjovsky et al., 2019; Sagawa et al., 2020). For instance, in image classification, translating an image by a few pixels (Azulay & Weiss, 2019) or modifying the background (Beery et al., 2018) can drastically change the predictions of the model. Often viewed as resulting from the so-called \u2018simplicity bias\u2019 of deep neural networks in the literature (Shah et al., 2020), this phenomenon pervades the landscape of deep learning models (Geirhos et al., 2020). 1Mila, University of Montreal, Canada 2Samsung SAIT AI Lab, Montreal, Canada 3Institute of Science and Technology Austria 4National University of Singapore. Correspondence to: Md Rifat Arefin . While models relying on spurious correlations may perform well on average across i.i.d. test data, they often struggle on specific subgroups where these correlations do not hold. Common approaches involve partitioning the training data based on prior knowledge of spurious information and adjusting the training process to ensure consistency across these groups (Sagawa et al., 2020; Kirichenko et al., 2023; Arjovsky et al., 2019). However, most real-world datasets lack explicit annotations highlighting spurious information. Manual annotation is expensive and can be ill-defined, as the appropriate groupings may not be immediately apparent. On the other hand, self-supervised learning (Chen et al., 2020; Caron et al., 2020; 2021; Grill et al., 2020; He et al., 2020) has produced powerful representation learners. Several methods (Cho et al., 2021; Wen et al., 2022) aim to learn high-level concepts by semantic grouping of areas within an input image into object-centric instances. Wen et al. (2022), for instance, leverage slot attention (Locatello et al., 2020) to decompose complex scenes into constituent objects via contrastive learning alone. While their original aim was downstream task representation learning, we posit that such decompositions can help mitigating spurious correlations. By treating semantic groupings as concept sources discovered by the model, they can serve as data-driven proxies of subgroup labels. This differs from existing work in spurious correlation, which typically directly infers subgroups (see related work in Section 2). 
Our approach models concepts that do not necessarily correspond directly to subgroups; typically, we use a significantly larger number of concepts than annotated subgroups in the dataset. This paper demonstrates the use of object-centric representation learning approaches to design classifiers robust to spurious correlations without the need for human-labeled subgroup annotations. We introduce CoBalT, a method combining concept discovery with concept balancing for robust classification. CoBalT follows a two-stage procedure common in the literature: first, inferring information about the training data, and then leveraging this information for robust training. In Stage 1, we propose to vector quantize semantic grouping representations into discrete concepts (Section 3.2), enabling the association of each input with relevant sets of concepts (see Fig. 1) and facilitating the calculation of concept occurrence statistics across the dataset. In Stage 2, we utilize the occurrence statistics of concepts via importance sampling to train a separate classifier (Section 3.3). The architecture of the classifier is inconsequential; the key contribution lies in the concept-aware sampling procedure, bridging object-centric representation learning and learning under subpopulation shifts. Integrating Stages 1 and 2 introduces CoBalT (Concept Balancing Technique) tailored for robust classification. We evaluate CoBalT across the Waterbirds, CelebA, and ImageNet-9 datasets, demonstrating improvements without the need for group annotations (Section 4). We achieve a 3% improvement on Waterbirds compared to state-of-the-art group agnostic methods like MaskTune (Asgari et al., 2022), ULA (Tsirigotis et al., 2023) and XRM (Pezeshki et al., 2023), remain competitive on CelebA, and achieve 1\u20132% improvement on challenging ImageNet-9 test sets while maintaining original test set performance.
Figure 1. Images from Waterbirds dataset with different discovered concepts through our method (each panel shows samples labeled Class 1 (Waterbird) or Class 0 (Landbird)). Here we arbitrarily select two of these concepts, which can be interpreted as trees/bamboo background (left) and water background (right), and show input samples from each of these.
2. Related Works Robust training. Our approach extends existing methodologies for robust classification model training, particularly addressing the costly acquisition of group labels in real-world data. Unlike GDRO (Sagawa et al., 2020), which optimizes for the worst group-level error, and its semi-supervised extension, SSA (Nam et al., 2022), our method is tailored for scenarios lacking sufficient labeled group data. Additionally, methods like DFR (Kirichenko et al., 2023) and AFR (Qiu et al., 2023) retrain the classification layer with group-balanced datasets and ensure feature reweighing, requiring group-labeled training/validation data, a necessity we circumvent. ULA (Tsirigotis et al., 2023) employs a Self-Supervised Learning (SSL) pre-trained model\u2019s predictions as a bias proxy, while MaskTune (Asgari et al., 2022) assumes predictions from Empirical Risk Minimization (ERM) models to be biased.
To train an unbiased model, the former adjusts the classifier\u2019s logits during debiasing training, and the latter masks out the input data based on the saliency map of the prediction. Group inference methods. Obtaining group labels in real-world data is often costly. Several methods have been proposed for inferring group labels initially, followed by robust model training. LfF (Kalimeris et al., 2019) uses two models, where the second model is trained using examples with higher loss in the first model. This approach contrasts with GEORGE (Sohoni et al., 2020), which clusters representations from the first-stage ERM model to infer group information and then trains a second model using GDRO. Similarly, JTT (Liu et al., 2021) and CNC (Zhang et al., 2022) identify minority groups based on misclassifications of the first-stage ERM model; however, JTT continues with ERM to train the robust model, while CNC uses contrastive learning to align representations of minority examples with the majority. These methods either rely on extra group annotation or fail in the presence of multiple unbalanced minority groups and noisy examples (Yang et al., 2023a). SPARE (Yang et al., 2023a) separates spurious information in the early stages of training and uses k-means clustering to differentiate between minority and majority groups, but relies on validation group annotation data to determine the specific epoch for separation. Conversely, our approach does not depend on group-annotated data for epoch identification; instead, we utilize a self-supervised method combined with spatial decomposition to separate spurious and non-spurious information effectively. The recently introduced XRM (Pezeshki et al., 2023) identifies groups within training and validation datasets through model prediction errors, operating under the assumption that models inherently learn spurious correlations. This methodology could be detrimental in scenarios where such an assumption does not hold true (Yong et al., 2022). Concept discovery. Learning abstract representations from images by decomposing them into higher-level concepts has been explored in previous works. A recent development is slot attention (Locatello et al., 2020), which groups spatially repetitive visual features by imposing an attention bottleneck. This method and its variants have been successfully applied to discover object-centric concepts in synthetic datasets (Locatello et al., 2020; Engelcke et al., 2021; Zhang et al., 2023). However, they face challenges when applied to complex real-world data. Seitzer et al. (2023) hypothesized that reconstructing the pixel space as a learning objective might not introduce enough inductive bias to facilitate the emergence of objects or concepts in real data. As a solution, they propose reconstructing the features from the self-supervised pre-trained DINO model (Caron et al., 2021). With similar motivation, Wen et al. (2022) employ a joint-embedding teacher-student architecture, similar to Caron et al. (2021), where the student model attempts to predict the concept representations of the teacher network. We extend this work to discover symbol-like discrete concepts by applying vector quantization (van den Oord et al., 2017) to continuous concept representations, aiding compositional reasoning about images, such as identifying common groups or attributes in the dataset, as humans do. 3.
Method Unlike existing methods, our goal is not to discover the subgroups of a dataset specifically, but more general concepts. For example, while groups in the Waterbirds dataset are explicitly defined to be the product of classes with some binary background attribute, {water bird, land bird}\u00d7 {water background, land background}, the concepts could capture dataset-independent ideas such as blue bird, street background, or short beak. We base our approach on the two-stage training procedure common in the literature, with the first stage determining some information about the training data and the second stage using this information to perform robust training. The first stage combines two key components: 1. Spatial clustering (Caron et al., 2021; Wen et al., 2022), which groups pixels into semantic regions (Section 3.1). While our approach in this paper is based on the method by Wen et al. (2022), in principle, the requirement is simply for an unsupervised representation learner that decomposes the input into objects. 2. A novel vector clustering technique, termed as concept dictionary learning (Section 3.2), achieved through vector quantization (van den Oord et al., 2017). This process discretizes the information of the slots into distinct concepts, which are more manageable compared to continuous representations of semantic regions. For example, instance, instead of storing details about the specific shape and appearance of a bird, this clustering identifies broader concepts like bird types, which offer greater utility across various inputs. These concepts encompass typical foreground objects such as cats and dogs, background elements like land and sky (see Fig. 1), or other abstract notions not as readily interpretable as individual words. The key aspect of our proposed second component is its independence from human labeling, achieved through leveraging the self-supervised learning setup of the first component. This lack of reliance to human labeling offers significant advantages, particularly in complex datasets. For example, when dealing with large datasets like ImageNet, determining relevant subgroups across the images is challenging due to the vast number of possibilities. Spurious correlations in a dataset are likely to vary depending on the specific task being performed with the dataset. Without pre-labeling every conceivable group (which is clearly infeasible), identifying the subgroups necessary to address spurious correlations seems nearly impossible. By adopting a data-driven approach where concepts are learned, we can discover concepts that a model inherently relies on. However, this approach has the potential drawback of weakening the connection between a learned concept and a concept that humans readily understand. One advantage of an object-centric decomposition, as demonstrated by methods like Wen et al. (2022), is that the spatial grouping of a concept provides humans with additional insight into that concept represents. 3.1. Architecture The model architecture used for concept learning shares the same overall structure as many recent self-supervised approaches to representation learning (Caron et al., 2021; Grill et al., 2020; Zbontar et al., 2021; Chen et al., 2020). Following Wen et al. (2022), we employ a two-branch network where the branches are structurally similar but asymmetric in parameter weights. 
Each branch comprises an encoder that outputs patch representation vectors of the input image, a projector that transforms the representations into an embedding space, and a slot module where spatial patch representation vectors are semantically grouped into concept representations. Our focus lies on building our model based on the output of the slot module. The overall architecture is illustrated and briefly described in Figure 2, with detailed information provided in Appendix A. More precisely, we will utilize the slots of the student and teacher branches, z_s \in R^{N \times d} and z_t \in R^{N \times d}, where the hyperparameter N represents the number of slots and d denotes the dimensionality of each slot. Each slot serves as a semantic grouping of an area in the input; for instance, a slot could correspond to a single object in the image. 3.2. Concept Dictionary For the next step, we aim to discover meaningful discrete concepts from these spatially-decomposed semantic groupings.
Figure 2. Architecture for learning slots and clustering without human annotation. x_s and x_t are two different augmented views of the same image. The teacher and student encoders project the augmented images into different patches y_t and y_s respectively, which are subsequently decomposed into concept representations z_t and z_s by slot attention. Then z_t and z_s are clustered into different concepts C using a vector-quantized concept dictionary. The teacher encoder and slot module parameters are updated through the exponential moving average of the student encoder and slot module parameters.
To do so, we employ vector quantization (van den Oord et al., 2017), which acts as a learned discretization or clustering mechanism that effectively clusters similar concepts in the training data into distinct categorical concepts. This is done by utilizing a codebook that we call the concept dictionary C \in R^{K \times d}, with K vectors of dimension d, each of which corresponds to a symbolic concept (e.g. water, tree, bird, etc.) that we want to learn. Note that we do not supervise these concepts in any way \u2013 these words simply denote possible meanings we could assign to these concepts post hoc. We assign each slot (vector representation) to a discrete symbolic concept by learning a categorical distribution over the entries in the dictionary. Given a randomly initialized concept dictionary C, we associate each slot (for student and teacher branches) to a concept in the concept dictionary through distributions p_s \in R^{K \times N}, p_t \in R^N by seeking the most similar concept:

(p_s)_{ij} = \frac{\exp(-\lVert C_i - (\bar{z}_s)_j \rVert_2^2 / \tau_s)}{\sum_{t=1}^{K} \exp(-\lVert C_t - (\bar{z}_s)_j \rVert_2^2 / \tau_s)}   (1)

(p_t)_i = \arg\max_j \, -\lVert C_j - (\bar{z}_t)_i \rVert_2^2   (2)

where C_i, (\bar{z}_s)_j \in R^d, \bar{z}_s = z_s / \lVert z_s \rVert, \bar{z}_t = z_t / \lVert z_t \rVert (each slot of z_s and z_t is normalized to have unit L2 norm), and \tau_s is a temperature hyperparameter. For the teacher branch, inspired by Caron et al. (2021), rather than taking a softmax, we use a sharpened distribution. In particular, we employ the argmax to facilitate a hard assignment into a one-hot representation. This hard assignment ensures that each slot is associated with a single distinct concept.
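As a concrete reading of Eqs. (1) and (2), the minimal PyTorch-style sketch below computes the soft student assignment and the hard teacher assignment; tensor shapes follow the notation above, and placing the temperature inside the exponent is an assumption consistent with the stated definition of τ_s.

```python
import torch
import torch.nn.functional as F

def assign_concepts(z_s, z_t, C, tau_s=0.1):
    """z_s, z_t: (N, d) student / teacher slot representations; C: (K, d) concept dictionary."""
    z_s = F.normalize(z_s, dim=-1)            # unit-L2-norm slots (z-bar in the text)
    z_t = F.normalize(z_t, dim=-1)
    dist_s = torch.cdist(C, z_s) ** 2         # (K, N): squared distances ||C_i - (z_s)_j||^2
    dist_t = torch.cdist(z_t, C) ** 2         # (N, K): squared distances ||(z_t)_i - C_j||^2
    p_s = F.softmax(-dist_s / tau_s, dim=0)   # Eq. (1): soft assignment over the K concepts
    p_t = dist_t.argmin(dim=-1)               # Eq. (2): hard assignment, nearest concept index per slot
    return p_s, p_t

# e.g., N = 4 slots, d = 256, K = 8 concepts
p_s, p_t = assign_concepts(torch.randn(4, 256), torch.randn(4, 256), torch.randn(8, 256))
```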
We then use these as the supervision signal for the student branch, where we encourage each slot representation of the student to also be assigned to a single concept vector. This difference in softmax and argmax has the benefit of making the distributions of the student and teacher branches different, which avoids the representation collapse problem mentioned in Caron et al. (2021). Following Roy et al. (2018), throughout training, C is updated by the exponential moving average of batch-wise teacher concept representations z_t as follows:

C_j = \alpha_c \cdot C_j + (1 - \alpha_c) \cdot \sum_i \mathbb{1}\{(p_t)_i = j\} (z_t)_i   (3)

where \alpha_c is the update rate of the codebook. We set it to 0.9 for all our experiments. Loss. As for the learning objectives, in addition to the losses proposed by Wen et al. (2022) (see Appendix A), we include a novel term L_vq, motivated as follows. Since we do not have any explicit human supervision of concepts, we exploit the assignment of concepts of the teacher to supervise the student. The purpose of the loss term L_vq is to ensure the consistency of the prediction between the slot representations of the teacher and the student. We encourage this alignment by distilling the teacher's prediction of discrete concepts to the student with a cross-entropy loss, which is calculated as follows:

L_{vq} = - \sum_{i}^{N} \sum_{j}^{K} I(i) \, \mathbb{1}\{(p_t)_i = j\} \log (p_s)_{ij}   (4)

where I(i) is the indicator function that avoids calculating the loss for a slot where the student and teacher do not have any common patch assignment. Details of how this is calculated are described in Appendix A. We then include this objective as a term in the overall loss of Wen et al. (2022):

L = L_{dis} + L_{con} + L_{vq}   (5)

where L_dis governs attention distillation from teacher to student and L_con is a contrastive loss between slot representations to avoid redundancy and encourage diversity. These losses are described in detail in Appendix A. This concludes the first stage of our training process. To recap, we extract slot representations following the methodology in Wen et al. (2022), then compute concept distributions p_s and p_t over the concept dictionary C, which is incrementally updated based on assignments from the teacher branch. Our learning objective is designed to distill the teacher concept distribution to the student. Through this process, we establish the association of training samples with sets of concepts. This information will be utilized in the subsequent section. 3.3. Training a Robust Classifier In the second stage, we train a separate classifier based on the concepts learned in the first stage, which are considered fixed. Integrating this information into the training process offers various possibilities. Our approach draws inspiration from previous works (Sagawa et al., 2020; Yang et al., 2023a) where, if ground-truth subgroups are known, sampling the subgroups at an even rate is the most effective method. We adapt this concept to our framework, modifying it to suit learned concepts rather than ground-truth subgroups. However, this adaptation presents challenges, such as each data point belonging to multiple concepts instead of a single subgroup, and the occurrence of each concept in multiple classes at varying frequencies. Sampling method. Our core approach involves adjusting the sampling rate of samples to ensure an even representation of concepts and, when feasible, an even representation of classes within those concepts.
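As an illustration of this concept-balanced sampling, the short sketch below mirrors the per-(cluster, class) inverse-frequency weighting and the uniform cluster choice that are formalized in Eq. (6) and Algorithm 1 below; the nested mapping `T` and the sampling factor `lam` are hypothetical names for the quantities introduced there.

```python
import random

def sample_batch(T, batch_size, lam=1.0):
    """T[c][y] = list of training indices in cluster c with class y (hypothetical layout).
    Clusters are drawn uniformly; within a cluster, class y is drawn with probability
    proportional to (1 / |T[c][y]|) ** lam, then one sample of that class is picked."""
    clusters = list(T.keys())
    batch = []
    for _ in range(batch_size):
        c = random.choice(clusters)                           # uniform over clusters
        classes = list(T[c].keys())
        weights = [(1.0 / len(T[c][y])) ** lam for y in classes]
        y = random.choices(classes, weights=weights, k=1)[0]  # class within the cluster
        batch.append(random.choice(T[c][y]))                  # sample from cluster c, class y
    return batch
```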
This entails sampling prevalent concepts less frequently and rare concepts more frequently. Additionally, within each concept, we aim to maintain a balanced representation of labeled classes. By doing so, we bias the classifier training towards rarer concepts while striving to balance classes within a concept whenever possible. This strategy is guided by the understanding that minority groups, characterized by rarer concepts within a class, are more susceptible to misclassification due to concept overlap. Notably, our sampling method differs from the weighting scheme proposed by Yang et al. (2023a), which contrasts between groups within the same class. Instead, our approach focuses on contrasting between samples from the same concept but belonging to different classes. Within a cluster c, we have multiple classes, where T_{c,y} denotes the samples from cluster c with class y. We compute the weight and probability of sampling that class within the cluster as:

w_{c,y} = \frac{1}{|T_{c,y}|}, \quad p_{c,y} = \frac{w_{c,y}^{\lambda}}{\sum_{\hat{y}} w_{c,\hat{y}}^{\lambda}}   (6)

where \lambda is a sampling factor, a hyperparameter. Yang et al. (2023a) recommend increasing \lambda from the default of 1 when the inter-concept groups are not well separable. The choice of this hyperparameter can be guided by the average silhouette score (Rousseeuw, 1987), which measures how well the clusters are separated. In our case, it reflects the degree of distinction between groups from one cluster to the groups of the other cluster.
Algorithm 1: Batch Sampling Strategy
Input: K clusters with samples of different classes; batch size n; T_{c,y}: set of samples belonging to cluster c and class y
Initialize batch ← {}
for i = 1 to n do
  c ← uniformly select a cluster from 1 to K
  w_{c,y} ← 1 / |T_{c,y}| for each class y in cluster c
  y ← select a class with probability p_{c,y} = w_{c,y}^{\lambda} / \sum_{\hat{y}} w_{c,\hat{y}}^{\lambda}
  b ← select a sample from cluster c of class y
  batch ← batch ∪ {b}
end for
return batch
3.4. Early stopping As demonstrated by Idrissi et al. (2022), having access to group information is crucial for effective model selection, particularly in scenarios involving spurious correlations. In our experiments, we explore three distinct model selection strategies by altering the criteria for early stopping:
1. CoBalThg: This strategy relies on human-annotated worst-group labels.
2. CoBalTig: Here, we utilize the inferred worst group.
3. CoBalTavg: This strategy employs the average validation accuracy as the criterion for early stopping.
While CoBalThg offers the advantage of leveraging human annotations, it also reintroduces dependency on manual labeling. Consequently, we generally prefer settings where CoBalTig and CoBalTavg are more suitable. For CoBalTig, our approach involves inferring groups from the discovered concepts. Each group is defined by the unique combination of class and concept. For instance, if we have two concepts and three classes, we would generate six groups accordingly. It is important to note that these inferred groups may not align with the ground-truth groups in the dataset, if such labels are even available. Nevertheless, our method utilizes these inferred groups as an early stopping criterion. 4. Experiments To illustrate the effectiveness of our spatial concept discovery and sampling strategy, we investigate two challenging scenarios where training a robust classifier using empirical risk minimization (ERM) with i.i.d.
(independent and identically distributed) sampling faces significant difficulties. Scenario 1: Binary classification complicated by class imbalance and attribute imbalance. In this scenario, one class is notably underrepresented, and attributes within classes exhibit uneven distributions in the training data. This presents considerable challenges for an ERM-trained model, particularly concerning under-represented attributes. Scenario 2: Test data containing attributes entirely absent from the training data, necessitating attribute generalization. For instance, while the training set may feature cows on grassland and only rarely cows on a beach, the test set may require generalization to cows on a volcano. This scenario is highly demanding, as it mandates the classifier to recognize and generalize to unknown attributes. Moreover, merely defining subgroups in this scenario is inherently challenging. 4.1. Datasets Considering the scenarios outlined, we train our model using the following publicly available datasets: CelebA (Liu et al., 2014), Waterbirds (Sagawa et al., 2020), Background Challenge ImageNet-9 (Xiao et al., 2021). A detailed overview of these datasets is provided in Table 4, with their characteristics described further in Yang et al. (2023b). Scenario 1. The CelebA dataset presents a notable class imbalance between gender attributes (male/female) and hair color (dark/blonde). In the training set, a majority of male images, totaling 162,770, feature dark hair, while only 1,387 images (0.85%) depict males with blonde hair. This disparity poses a risk of bias, potentially leading the model to indirectly associate gender with hair color, especially from the dominant group\u2019s traits. Similarly, the Waterbirds dataset, as detailed in Sagawa et al. (2020), consists of two main classes: landbirds and waterbirds. In this dataset, the background environment\u2014either land or water\u2014acts as a spurious attribute. The most common instances are the combinations of (waterbird, water) and (landbird, land), which introduces a significant challenge in differentiating the true class (the type of bird) from the background that is merely coincidentally correlated with it. Scenario 2. We utilize the Background Challenge ImageNet-9 (IN-9L) dataset (Xiao et al., 2021), derived from a subset of ImageNet known as ImageNet-9. This dataset is purposefully crafted to assess the robustness of the models against background variations. It encompasses four distinct types of background modification in its test sets: \u2022 Original: Maintains the original unaltered background. \u2022 Mixed-same: replaces the background with one from another image within the same class. \u2022 Mixed-rand: Replaces the background with one from a randomly selected image. \u2022 Only-FG: Eliminates the background entirely, leaving only the foreground object. This dataset poses a significant challenge for classifiers as it evaluates their ability to remain robust to background changes. It serves as a benchmark for assessing a model\u2019s capacity to generalize and focus on the primary features of the object, regardless of the variability or absence of the background. 4.2. Results We present additional results and ablations in Appendix C. 4.2.1. SCENARIO 1 (WATERBIRDS AND CELEBA) As shown in Table 1, our evaluation on Waterbirds and CelebA showcases the effectiveness of our approach, which achieves superior or comparable performance compared to methods that do not rely on human-annotated group labels. 
Particularly noteworthy is CoBalTig, which outperforms in worst-group accuracy the recent methods ULA (Tsirigotis et al., 2023) and XRM (Pezeshki et al., 2023) by nearly 3% on Waterbirds, while also demonstrating competitive performance on CelebA with an average accuracy similar to the other methods. Even when selecting the model based on the average validation accuracy (CoBalTavg), without attempting to infer groups, our model still demonstrates competitive results. Unlike other baselines that leverage human-annotated grouplabeled training or validation sets for early stopping or hyperparameter tuning (as detailed in Appendix C, Table 6), our method makes group inferences for both training and validation data without relying on human labels. 6 \fUnsupervised Concept Discovery Mitigates Spurious Correlations Waterbirds CelebA IN-9L Figure 3. Segmented regions with slots in different datasets. # of slots used for Waterbirds, CelebA and ImageNet-9 is 4. The pixels in the images are grouped by slots and represent high-level concepts such as body parts of birds and backgrounds like trees, water, and so on in Waterbirds; humans, animals, grasses, and so on in IN-9L and nose, head, and so on in CelebA. Table 1. Waterbirds and CelebA results with ImageNet pre-trained ResNet50. Other model results are reported from Yang et al. (2023a); Asgari et al. (2022); Pezeshki et al. (2023); Tsirigotis et al. (2023). Methods in the upper section use group information only for validation, methods in the bottom section do not use group information. The best results are highlighted within each section. CoBalThg, CoBalTig, CoBalTavg are our trained models where early stopping is done by human-annotated worst group, inferred worst group, and average validation accuracy respectively. Group Label Waterbirds CelebA Method Train Val Worst Group Average Worst Group Average GEORGE (Sohoni et al., 2020) \u00d7 \u2713 76.2\u00b12.0 95.7\u00b10.5 54.9\u00b11.9 94.6\u00b10.2 LfF (Kalimeris et al., 2019) \u00d7 \u2713 78.0 91.2 77.2 85.1 CIM (Taghanaki et al., 2021) \u00d7 \u2713 77.2 95.6 83.6 90.6 JTT (Liu et al., 2021) \u00d7 \u2713 83.8\u00b11.2 89.3\u00b10.7 81.5\u00b11.7 88.1\u00b10.3 CnC (Zhang et al., 2022) \u00d7 \u2713 88.5\u00b10.3 90.9\u00b10.1 88.8\u00b10.9 89.9\u00b10.5 SPARE Yang et al. (2023a) \u00d7 \u2713 89.8\u00b10.6 94.2\u00b11.6 90.3\u00b10.3 91.1\u00b10.1 AFR (Qiu et al., 2023) \u00d7 \u2713 90.4\u00b11.1 94.2\u00b11.2 82.0\u00b10.5 91.3\u00b10.3 CoBalThg (ours) \u00d7 \u2713 90.6\u00b10.7 93.7\u00b10.6 87.2\u00b11.6 92.3\u00b10.7 ERM \u00d7 \u00d7 62.6\u00b10.3 97.3\u00b11.0 47.7\u00b12.1 94.9\u00b10.3 MaskTune (Asgari et al., 2022) \u00d7 \u00d7 86.4\u00b11.9 93.0\u00b10.7 78.0\u00b11.2 91.3\u00b10.1 ULA (Tsirigotis et al., 2023) \u00d7 \u00d7 86.1\u00b11.5 91.5\u00b10.7 86.5\u00b13.7 93.9\u00b10.2 XRM (Pezeshki et al., 2023) \u00d7 \u00d7 86.1 90.6 91.8 91.8 CoBalTig (ours) \u00d7 \u00d7 89.0\u00b11.6 92.5\u00b11.7 89.2\u00b11.2 92.3\u00b10.6 CoBalTavg (ours) \u00d7 \u00d7 90.6\u00b10.7 93.8\u00b10.8 81.1\u00b12.7 92.8\u00b10.9 Furthermore, our method exhibits similar performance to other methods employing group annotations. We provide visualizations of the feature attributions of ERM and our method in the Waterbirds dataset, as illustrated in Figure 4, demonstrating that our method relies less on spurious backgrounds compared to ERM. 4.2.2. 
SCENARIO 2 (IMAGENET-9 BACKGROUND) In the more realistic setting of the ImageNet-9 background challenge dataset, we assess the attribute generalization capability of our method. Training our model exclusively on the original ImageNet-9 trainset, without accessing the \u2018mask-rand\u2019 subset where background images are randomly swapped, we select the model based on inferred worst group performance on the original validation set. As illustrated in Table 2, our method (CoBalTig) outperforms MaskTune (Asgari et al., 2022) by 1.1% on Mixedsame, 1.5% on Mixed-rand, and 1.9% on Only-FG. Additionally, we observe improvements compared to other methods across all test sets. These results underscore the efficacy of our concept discovery method and the importance weight-based sampling strategy in learning task-relevant information and mitigating spurious correlations. Notably, our sampling technique for addressing imbalances within the training set remains effective even in scenarios where the imbalance is not readily apparent. It is worth noting that many techniques employed for Waterbirds and CelebA are not directly applicable to this dataset due to the absence of groups for inference. Consequently, our method exhibits greater versatility while maintaining strong performance across diverse scenarios. 7 \fUnsupervised Concept Discovery Mitigates Spurious Correlations Table 2. Results on Background Challenge (ImageNet-9). Top rows based on ResNet-50 (ImageNet-Pretrained), 4 slots and codebook size 8. The results of other methods are taken from Asgari et al. (2022). Method Original Mixed-same Mixed-rand Only-FG ERM 97.9 90.5 79.2 88.5 CIM (Taghanaki et al., 2021) 97.7 89.8 81.1 SIN (Sauer & Geiger, 2021) 89.2 73.1 63.7 INSIN (Sauer & Geiger, 2021) 94.7 85.9 78.5 INCGN (Sauer & Geiger, 2021) 94.2 83.4 80.1 MaskTune (Asgari et al., 2022) 95.6 91.1 78.6 88.1 CoBalTig (ours) 97.9 91.2 80.1 90.0 4.2.3. RESULTS WITHOUT VALIDATION GROUPS In our previous evaluations, we selected the model by early stopping based on the worst group validation performance, with the groups being inferred on the validation data by our proposed method. To evaluate the impact of model selection, we now consider a scenario where we lack access to human-annotated validation groups for CelebA. In this case, other methods select the model based on average validation accuracy, as they typically rely on human-annotated validation groups. From Table 3, we can see that the performance of different methods substantially degrades when group-labeled validation data is unavailable for early stopping. Many of the group-inference methods perform even worse than ERM, with the notable exception of MaskTune. However, MaskTune still performs significantly worse than our methods CoBalTig and CoBalTavg. This underscores the critical importance of having access to group-labeled data for many baseline methods to work effectively. In contrast, our method proves valuable by inferring groups in an unsupervised manner. when we perform early stopping based on average validation accuracy, akin to the baseline methods in this table, our method CoBalTavg significantly outperforms others, particularly on the worst group. Table 3. Results from the CelebA dataset using ResNet-50 (when early stopping is not done using validation group labels for other methods). We do early stopping based on our inferred groups on the validation set without using validation group labels. The baseline results are taken from Asgari et al. (2022). 
Method Worst Group Average ERM 47.7\u00b12.1 94.9\u00b10.3 CVaR DRO 36.1 82.5 LfF 24.4 85.1 JTT 40.6 88.0 DivDis 55.0 90.8 MaskTune 78.0\u00b11.2 91.3\u00b10.1 CoBalTavg (ours) 81.1\u00b12.7 92.9\u00b10.9 CoBalTig (ours) 89.2\u00b11.2 92.3\u00b10.6 Figure 4. Gradcam heatmap on Waterbirds dataset (from left to right Input, ERM and CoBalT respectively in three columns). ERM models spuriously correlates to background information for classifying bird types whereas our methods reduce the spurious correlation by focusing on image regions that contain birds. 4.2.4. INTERPRETATION OF CONCEPTS Our proposed method decomposes images into high-level concepts in an unsupervised way and clusters the images based on those concepts. Through the slot-based decomposition model, objectness or high-level concepts emerge in complex real-world data sets, which can be viewed through the attention map of each slot as in Figure 3. For example, in Waterbirds, the region grouped by slots belongs to parts of the body of birds and background such as trees, water, etc. In the IN-9L dataset, the slot distinguishes humans, animals, grass, etc. For CelebA, the model learns to separate the nose, eyes, and hair on the human face. These decomposed slot representations are matched with a set of vector-quantized codes from the learned dictionary. Each code in the dictionary represents high-level abstract concepts. This approach effectively makes each code as the centroid of a specific cluster. By matching slot representations to the closest centroid, we can categorize an image into multiple distinct clusters. This allows us to identify and group images based on shared high-level concepts, despite the fact that they belong to different classes. Such an organization becomes particularly insightful when we observe images from varied classes clustering together. This clustering is based on the similarity of the underlying concepts these images represent. For example, images from different classes but with a common feature or concept might find themselves grouped in the same cluster (e.g. trees and water 8 \fUnsupervised Concept Discovery Mitigates Spurious Correlations in Figure 1 respectively). 5." + } + ], + "Nirat Saini": [ + { + "url": "http://arxiv.org/abs/2309.14339v1", + "title": "Chop & Learn: Recognizing and Generating Object-State Compositions", + "abstract": "Recognizing and generating object-state compositions has been a challenging\ntask, especially when generalizing to unseen compositions. In this paper, we\nstudy the task of cutting objects in different styles and the resulting object\nstate changes. We propose a new benchmark suite Chop & Learn, to accommodate\nthe needs of learning objects and different cut styles using multiple\nviewpoints. We also propose a new task of Compositional Image Generation, which\ncan transfer learned cut styles to different objects, by generating novel\nobject-state images. Moreover, we also use the videos for Compositional Action\nRecognition, and show valuable uses of this dataset for multiple video tasks.\nProject website: https://chopnlearn.github.io.", + "authors": "Nirat Saini, Hanyu Wang, Archana Swaminathan, Vinoj Jayasundara, Bo He, Kamal Gupta, Abhinav Shrivastava", + "published": "2023-09-25", + "updated": "2023-09-25", + "primary_cat": "cs.CV", + "cats": [ + "cs.CV" + ], + "main_content": "Introduction Objects often exist in different shapes, colors, and textures in the real-world. 
These visually discernible properties of objects, also known as states or attributes, can be inherent to an object (e.g., color) or be a result of an action (e.g., chopped). Generalization to unseen properties of objects remains an Achilles heel of current data-driven recognition models (e.g., deep networks) that assume robust training data is available for exhaustive object properties. However, humans (and even animals) [4, 7] can innately imagine and recognize a large number of objects with varying properties, by composing a few known objects and their states. This ability to synthesize and recognize new combinations from finite concepts, called compositional generalization, is often absent in modern deep learning models [30]. Several recent works have been proposed to study composition in terms of the disentanglement of objects and states in images [24, 34, 56, 73] as well as videos [3, 5, 12, 19, 55, 60, 61]. A few works have attempted to improve open-world text-to-image generation models [13, 53] for the task of compositional generation. However, the current suite of datasets lacks either granular annotations for object states or enough data to study how object states evolve under different conditions. Therefore, measuring the compositional generalizability of these models on different tasks remains an open challenge.
Figure 1. We present Chop & Learn (ChopNLearn), a new dataset and benchmark suite for the tasks of Compositional Image Generation and Compositional Action Recognition. It consists of 1260 video clips and 112 object state combinations captured from multiple viewpoints for 20 objects and 8 cut styles. We also propose two new compositional tasks and benchmarks: (1) Image Generation: given training images of various objects in various states, the goal is to generate images of unseen combinations of objects and states (e.g., train on apple whole and orange round slices, test on apple round slices). (2) Action Recognition: training videos are used to recognize objects along with the transition from state 1 \u2192 state 2 (e.g., bellpepper whole \u2192 baton, potato peel \u2192 large cut), to generalize to recognizing unseen object-state transitions.
In this paper, we propose a new dataset, Chop & Learn (ChopNLearn), collected to support studying compositional generalization, the ability to recognize and generate unseen compositions of objects in different states. To focus on the compositional aspect, we limit our study to a common task in our daily lives \u2013 cutting fruits and vegetables. When using different styles of cutting, these objects undergo different transformations and the resulting states are easily recognizable by humans. Our goal is to study how these different styles can be applied to a variety of objects for recognizing unseen object states. More specifically, we select twenty objects and seven commonly used styles of cuts (plus the whole object), which results in object-state pairs with different granularity and sizes (Figure 1). We collect videos of these objects from four different viewpoints, and label different object states in each video.
Table 1. Comparison with other video datasets. This table highlights the distribution of the objects, states and compositions in different datasets. Obj. refers to objects, Comp. is compositions of objects and styles, N refers to the number of compositions that have more than 10 samples, and Styles* refers to grouping of styles: instead of generic names like cut, chop, etc., we use 3 distinct styles (chop/dice, peel, grate) as styles. MIT-States† is the only image-based dataset, the rest are video-based datasets. All these data numbers are for edible objects and cutting style actions from the respective datasets. Our dataset has a uniform distribution for each metric in the table, which makes it suitable for learning objects and their states.
Datasets | Total Samples | Obj. | Comp. | Styles* | Samples/Obj. | Samples/Comp. | Samples/Style | N | Views
MIT-States† [25] | 1676 | 27 | 52 | 4 | 62.07 | 32.23 | 419 | 48 | 1
Youcook2 [76] | 714 | 160 | 313 | 3 | 7.3 | 2.2 | 166.7 | 26 | 1
VISOR [9] | 301 | 58 | 122 | 3 | 5.2 | 2.5 | 42.9 | 3 | 1
COIN [64] | 390 | 6 | 7 | 2 | 65 | 55 | 195 | 6 | 1
Ego4D [14] | 216 | 12 | 12 | 3 | 18.2 | 18 | 54.5 | 8 | 1
50Salads [62] | 904 | 5 | 6 | 2 | 182 | 152 | 457 | 6 | 1
ChangeIt [60] | 264 | 8 | 14 | 4 | 46.3 | 26.4 | 96 | 14 | 1
CrossTask [77] | 1150 | 7 | 8 | 2 | 164.3 | 143.7 | 575 | 8 | 1
Breakfast [29] | 1055 | 3 | 4 | 2 | 351.7 | 263.8 | 527.5 | 4 | 1
ChopNLearn | 1260 | 20 | 112 | 8 | 74.2 | 11.8 | 185.5 | 112 | 4
Each style of cut changes the visual appearance of different objects in different ways. To study and understand object appearance changes, we propose two new benchmark tasks of Compositional Image Generation and Compositional Action Recognition, with a focus on unseen compositions. The objective of the first task is to generate an image based on an (object, state) composition that was not seen during training. As shown in Figure 1, during training, a generative model is provided with images of an (apple, whole) as well as an (orange, round slices). At test time, the model has to synthesize a new unseen composition (apple, round slices). We propose to adapt large-scale text-to-image generative models for this task. Specifically, by using text prompts to represent the object-state composition, we benchmark several existing methods such as Textual Inversion [13] and DreamBooth [53]. We also propose a new method by introducing new tokens for objects and states and simultaneously fine-tuning language and diffusion models. Lastly, we discuss the challenges and limitations of prior works as well as the proposed generative model with an extensive evaluation. In the second task, we extend an existing task of Compositional Action Recognition [36]. While the focus of prior work [36] is on long-term activity tracking in videos, we aim to recognize subtle changes in object states, which is a crucial first step for activity recognition. By detecting the initial and final object state compositions, our task allows the model to learn unseen object state changes robustly. We benchmark multiple recent baselines for video tasks on the ChopNLearn dataset. Finally, we discuss various other applications and tasks that can use our dataset in image and video domains. To summarize, our contributions are threefold:
• We propose a new dataset ChopNLearn, consisting of a large number of images and videos of diverse object-state compositions with multiple camera views.
• We introduce the task of Compositional Image Generation, which goes beyond the common conditional image generation benchmarks, and focuses on generating images for unseen object and state compositions.
• We introduce a new benchmark for the task of Compositional Action Recognition, which aims at understanding and learning changes in object states over time and across different viewpoints. 2.
Related Work Object states or attributes have recently received significant attention for recognition tasks, in images and videos. Some of the common works and their dissimilarities with the proposed dataset are mentioned here. Attributes of Objects. In the image domain, states are often referred to as attributes for Compositional Learning of attribute-object pairs. Attributes describe the visual properties of objects, such as shape, color, structure and texture. The common datasets used are MIT-states [24], UT-Zappos [73], COCO-attributes [43], CGQA [35] and VAW [45]. All of these datasets consist of web-scraped images of various types of objects (from furniture to shoes and clothes to food items), which makes the variety of states very diverse. Most of the prior works [31, 34, 35, 41, 44, 46, 56, 59, 70, 72] focus on attribute-object recognition tasks using compositional learning but do not expand to image generation tasks due to the diversity in background and attributes. Some works in compositional zero-shot learning of attributes show visual disentanglement of attributes from objects [56, 68], however, they only hallucinate compositions of unseen attribute-object pairs in the feature space, rather than the image space. Moreover, even newer large vision-language models such as CLIP [48], DALL-E [50] fail to capture the subtle attributes of objects which are visually discernible [38, 74]. Therefore, the image generation task for objects with different attributes is still unexplored, which is a major focus of our work. States for Action Recognition. Detecting object states and corresponding actions from videos is explored in supervised [3, 5, 12, 55] and self-supervised manners [11, 60, 61]. While some works focus on recognizing actions using states [3, 5, 12, 55], others discover states as the future frames in the videos in [11, 26]. Some works [60, 61] also detect the exact frames of state 1, state 2 and the action that causes transition from state 1 \u2192 2. Another recent work (Ego4D [14]) also proposes new tasks like point-of-return state-change prediction for object state transition detection. Hence, object states so far have been used as a signal for detecting and localizing actions. We focus on extending this understanding of states to generalize across different objects with limited seen object-state transition videos.
Figure 2. Left: We show examples of cutting styles from popular video datasets (VISOR [9]: chop and peel potato, Youcook2 [76]: chop broccoli, peel radish), image dataset (MIT-states [24]: slice pear, peel orange) and generation pipelines (DALL-E [50]: baton cut apple, half round slices tomato). Most of these are either too noisy to capture subtle differences in objects or do not have the granularity of specific cutting styles. Center: Our 4 camera setup captures videos of one object in 4 different views. Right: We capture 8 styles of object states (whole, peel, baton, julienne, round slices, half-round slices, large cut, small cut), which can be derived in a hierarchical manner from larger to small cuts. Each style is of different shape and granularity.
Compositional Action Recognition. In contrast to randomly assigning samples for training and testing, [36] presented a new task of Compositional Action Recognition. The premise of this task is: actions are split based on objects they apply on.
During training, only a set of objects are seen corresponding to set of objects, while during testing, unseen object appear for seen action labels. Following studies [28, 33, 47, 67, 71] used relationship between objects and states bounding boxes to model the compositional aspect, where the evaluation is performed on how well the composition of unseen object and state is recognized. We propose a similar task, where videos are trained on seen compositions and tested on unseen compositions. Comparison with existing Datasets. The existing image datasets such as MIT-states [24], UT-Zappos [73], COCOattributes [43], CGQA [35] and VAW [45], are not suitable for image generation tasks for two reasons: 1) there are very few transferable objects and attributes, 2) the images are web-scraped and very diverse with varied background. Due to this, generative models latch on background details rather than understanding subtle changes in objects. In video domain, there have been various video datasets with procedural and kitchen activities that capture object and state transformations, such as EpicKitchens [8] with object and hand bounding box annotation version VISOR [9], Youcook2 [76], Ego4D [14], COIN [64], HowTo100M [40], Breakfast [29], 50Salads [62], CrossTask [77] and ChangeIt [60]. There are a few common problems across these datasets: (1) Most of these datasets lack annotations for the granularity of cutting styles. The styles labeled are cut, chop, slice, dice, peel, grate, julienne, which only comprises of three broader styles of transformations, i.e. chop/dice, peel and grate. (2) The compositions of different objects and states are highly skewed and similar to image datasets. Some datasets have a long-tail distribution of objects, which can make it challenging for models to learn per-objectbased states when there is only one sample available in the dataset. And lastly (3), the frames are noisy with lots of objects and attributes that object states changes are harder to capture (as shown in left side of Figure 2). For most datasets, the ground truth is also not annotated for object detection, which makes it even harder to look for object of interest. Using an object detector to remove the background is an option, however with deformable objects, most FasterRCNN [51] based object detectors fail to capture the object itself, and latch onto smaller pieces instead. In Table 1, we show statistics of data available in different datasets. The # of clips from other datasets that has granular annotations of object-state pairs and can be used for compositional tasks. For instance, COIN [64] has 180 categories with 10000 videos, but clips that have cutting styles as labels were only 390. Further, these clips only cover cut/peel actions, and cannot be categorized further based on granularity and shape of pieces. Our proposed dataset ChopNLearn is designed to capture various objects and their cut styles, with uniformly distributed samples for 20 objects and 8 styles (including whole, 7 other cut styles Figure 2). 3. Chop & Learn Our main objective with Chop & Learn (ChopNLearn) is to understand and learn granular object states, specifically styles of cuts which can be applied to diverse variety of objects. 
With this in focus, we collect object state transition videos, as well as images of object in various states, with \fw: whole p: peel lc: large cut sc: small cut b: baton j: julienne rs: round slice hrs: half round : 12 samples : 8 samples : 4 samples w \u2192lc w \u2192p w \u2192rs w \u2192b p \u2192lc p \u2192b lc \u2192sc lc \u2192b b \u2192sc b \u2192j rs \u2192hrs p \u2192rs (a) Image Statistics (b) Video Clips Statistics Figure 3. Statistics for ChopNLearn: We show the number of samples for each object-style composition in a color-coded manner: orange represents 12 samples, green represents 8 samples and blue represents 4 samples. 4 different camera views (Figure 2). We discuss the design choices and motivation below. 3.1. Design Choices Selection of States (styles of cuts). Fruits and vegetables are commonly cut in specific styles based on the need of the recipes. For instance, for eating an apple, we slice it in relatively large pieces while for using it in a pie, we might cut smaller or round slices of it. We select 8 common styles of cuts, i.e., large cut, small cut, baton, julienne, round slices, half round slices, peel, and whole for our study. These are the most common styles of cuts for vegetables and fruits, which do not require any additional training to learn apart from common kitchen operation and knife handling skills. These styles of cuts can also have similarities with respect to shapes, yet are different in granularity. For example, baton (french-fries style cut) and julienne are similar in shape (long pieces), but julienne is more finely cut than baton. Similarly, large cut is a coarser version of small cut, and half round slice is one step from round slices (as shown in Figure 2). We also have annotated the states whole and peel, which are the base states of objects. Selection of Objects. We want to learn to transfer styles of cuts to different objects. To ensure consistency in transfer, we also consider the base state, i.e., whole state of objects. For instance, it is hard to visualize large cut of carrots, if the seen data only includes rounder objects like oranges. Hence, we consider some fruits and vegetables with similar colors, textures and shapes to include consistency across visual similarities after chopping. In this study, we used seasonal fruits and vegetables categorised on the basis on their shapes, colors and textures: round small objects: [apple, pear, mango, potato, turnip, onion, kiwi], citrus fruits [lemon, orange], flowerlike textured objects: [cauliflower, broccoli] , larger round objects: [cantaloupe, watermelon], textured from inside objects: [bellpepper, tomato, persimmon], and long objects: [cucumber, carrot, squash, banana]. This consists of 10 fruits and 10 vegetable items, with at least one pair of similar objects presents in the dataset. Related Groups. One of the key aspects of this dataset is transferability of cut styles to a variety of objects. We set up some constraints and create related groups for objects and styles. These related group enable us with structural and visual style transfer abilities. If an object is seen from related group A with a particular style, we should be able to transfer that style to another object from the same related group A and vice-versa. In other words, we group sets of objects and cut styles which are visually similar (based on color, shape and texture) together to create related groups for objects and states separately. 
For states, we combine [baton, julienne], [round slices, half-round slices], and [large cut, small cut] together as related groups. For objects, we define seven groups with related objects: [apple, pear, mango], [lemon, orange], [cauliflower, broccoli], [cantaloupe, watermelon, kiwi], [bellpepper, tomato, persimmon], [potato, turnip, onion], and [cucumber, carrot, squash, banana]. 3.2. Data Collection Setup We collect data using four GoPro cameras [1] positioned at different angles, with three participants (Figure 2). We use a green screen and a green chopping board for minimum distraction in the background, such that the objects and their cut pieces are easily segmented for each view. Granularity of styles. For ease and consistency across participants, the size of cut pieces can be defined as the shape and ratio of one piece with respect to the whole object. For more details, please refer to the appendix. Given a set of n states and m objects, we can have at most m × n compositions. However, our dataset does not include some compositions which are not commonly found in the real world. For instance, due to the texture of onions, it is not feasible to cut onions in baton or julienne style, since the layers of the onion do not stay intact, so we do not have a sample of [baton, onion]. Video Recording. We primarily collect video data, and derive state change frames from the long videos. Each video consists of 2-3 object states, which are annotated during the data collection process using the highlight feature of the GoPros. For synchronizing across different cameras, we initially start with a clapper to make a clap sound indicating the beginning of the video. Then, we highlight the frames in one of the GoPros as the first/initial state. The participant then walks up to the object and starts cutting the object. After the object is cut in one style, the participant steps back and we highlight another frame as the next state. The participant performs at least 2 styles of cut in each video, which can be done consecutively. For instance, we can first cut an object with large cuts, and then do small cuts subsequently. The video ends with another clap for end-of-video detection and synchronization across different cameras. In this way, we collect video data along with annotated states for each participant, without extra annotation effort. More details and statistics of the dataset are shown in Figure 3. The average video clip length (one state change for an object) is 1m40s. The distribution is shown in Fig. 4(a).
Figure 4. (a) The clip length distribution for one camera (315 unique clips). (b) Preliminary results of using green screen to augment the dataset with different backgrounds. We continue to improve the transfer results by adding shadows and background matting.
Table 2. Compositional generation evaluation. FID, user scores, and classifier scores of various generative models. User Realism is on a scale of 1-5. (⋆) denotes that accuracies are evaluated on a seen data split. Bold represents the best result.
Method | Patch FID ↓ | User Realism ↑ | Classifier Acc. Object (%) ↑ | Classifier Acc. State (%) ↑ | User Acc. Object (%) ↑ | User Acc. State (%) ↑
Real Images | – | 4.65 | 87.5⋆ | 92.0⋆ | 73.6 | 84.0
SD | 178.0 | 3.41 | 73.1 | 27.9 | 81.6 | 28.8
SD+TI | 145.0 | 2.58 | 23.6 | 37.7 | 21.6 | 43.2
DreamBooth | 139.9 | 3.56 | 53.5 | 74.2 | 61.6 | 72.8
SD+FT | 88.9 | 3.78 | 70.5 | 67.7 | 72.0 | 65.6
SD+FT+TI | 82.2 | 3.47 | 67.8 | 81.4 | 67.2 | 79.2
4.
Compositional Image Generation Large-scale deep generative models [49, 52, 54] trained on open-world big datasets have made significant breakthroughs in image generation in the last couple of years. These models, are typically conditioned using a text encoder and also support tasks such as zero-shot image generation, inpainting, image editing, and super-resolution without explicit training on these tasks. However, the performance of these models significantly degrades when it comes to compositional generation [10]. Our dataset, consisting of 112 real-world object and state combinations, is well-suited to test the compositional capabilities of generative models. Task Description. The goal of the task is to either train from scratch or fine-tune an existing generative model using the (object, state) pairs provided in the training, and generate images from unseen compositions. We consider all 20 objects, each object captured in up to 7 different states, i.e., all the states excluding peel. We split the (object, state) combinations into a training set consisting of 87 combinations and a test set consisting of 25 combinations. The training set covers all objects and states used in our dataset, but it does not overlap with the test set in terms of (object, state) combinations. In other words, for each combination of object and state present in the test set, the training set includes exactly one of either the object, or the state, but not both. We also ensure that for each (object, state) combination (o, si) in the test set, there exists a combination (o, sj) in the training set, where si and sj belong to the same state related group defined in Section 3.1. This setting ensures that all object and state information are available in the training set. Each combination in our dataset has 8-12 images, resulting in a total of 1032 images in the training set and 296 images in the test set. The exact split is provided in the appendix along with some examples. 4.1. Methods Stable Diffusion. (SD) We evaluate a popular open-source text-to-image generative model Stable Diffusion (SD) [52]. For details on the SD, refer to the original work [52]. Here we briefly describe the sampling process. Diffusion models generate an image from Gaussian noise via an iterative denoising process. SD uses classifier-free guidance [21] for sampling. This means given a text prompt c, we encode the prompt using CLIP\u2019s text classifier [48] and recursively update a Gaussian noise sample with \\omeg a \\ bo l dsymbol {\\epsilon }_\\theta (\\mathbf {x}_t, \\mathbf {c}) + (1-\\omega )\\boldsymbol {\\epsilon }_\\theta (\\mathbf {x}_t) \\label {eq:guidance} (1) where xt is the denoised sample at the time step t and \u03f5\u03b8 is SD. With each time step, we try to move the denoised sample using the guidance provided by the text prompt. The strength of the guidance is defined by \u03c9. As our first baseline approach, we sample zero-shot images from SD with a text prompt \u201cAn image of oi cut in sj style\u201d, where oi is the ith object and sj is the jth state of the object. Zero-shot generation with a pre-trained SD model doesn\u2019t work as intended as shown in Figure 5, and the generated images often perform poorly in capturing the \fFigure 5. Compositional Generation Samples. Ground Truth (GT) real images are shown in the first row for reference. Seven object-state combinations in the test set are displayed, each with two generated samples for each method. Please zoom in to see details. object state. 
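To make the sampling rule in Eq. (1) and the zero-shot prompt concrete, here is a minimal sketch; `eps_model` is an illustrative stand-in for the Stable Diffusion UNet noise predictor rather than the actual library API, and the prompt template follows the one quoted above.

```python
import torch

def guided_noise(eps_model, x_t: torch.Tensor, t: int, cond_emb, omega: float) -> torch.Tensor:
    """One classifier-free-guidance combination per Eq. (1):
    omega * eps(x_t, c) + (1 - omega) * eps(x_t)."""
    eps_cond = eps_model(x_t, t, cond_emb)   # epsilon_theta(x_t, c), text-conditioned prediction
    eps_uncond = eps_model(x_t, t, None)     # epsilon_theta(x_t), unconditional prediction
    return omega * eps_cond + (1.0 - omega) * eps_uncond


def make_prompt(obj: str, state: str) -> str:
    # Prompt template used for zero-shot sampling in this section.
    return f"An image of {obj} cut in {state} style"
```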
Several recent works have shown that it is possible to extend models such as SD to achieve high-quality customized generations [13, 53, 75]. We evaluate several methods that have been proposed for compositional generation in the recent literature. We also propose a simple yet strong baseline by fine-tuning a Stable Diffusion (SD) model [52] along with textual inversion. SD + Textual Inversion (TI). Textual Inversion [13] introduces new tokens in the vocabulary and optimizes their embedding from the given images keeping SD frozen. We adapt the method for our task by introducing new tokens for the objects {oi} and the states {sj}, and jointly optimize the embeddings of {oi} \u222a{sj} by providing (image, prompt) pairs from our training data. As before, the prompt is simply constructed as \u201cAn image of oi cut in sj style\u201d. DreamBooth. Next, we adapt DreamBooth [53], which fine-tunes the diffusion model along with the state-specific tokens. In our experiments, we fine-tune one model for each state in the dataset, where only the state token is learned. Original DreamBooth optimizes the diffusion loss as well as a prior preservation loss [53]. We observed that the latter significantly deteriorates the performance thus we skip it. SD + Fine-tuning (FT). We also fine-tune SD. In this baseline, only the parameters in the UNet of the diffusion model are optimized while keeping the text encoder fixed. SD + TI + FT. Finally, we combine SD fine-tuning and Textual Inversion [13]. Specifically, on top of our SD + Fine-tuning baseline, we also adapt Textual Inversion by introducing new object tokens and state tokens and optimizing their embeddings along with the UNet parameters. 4.2. Evaluation We use both qualitative and quantitative measures to evaluate the capabilities of different methods. This section explains the details of different evaluation metrics we used: Patch FID. Fr\u00b4 echet Inception Distance (FID) [20] is a commonly used metric to assess the quality of generative models. Given a set of real images and a set of generated images, FID compares the mean and std of Inception-v3 features of the two sets. For each composition and generative model, we compute patch FID using all real and 16000 generated patches, and report the average number for the test pairs. We hypothesize that using patch FID gives more weight to the object-state patches, rather than the whole image, which includes almost 50% background pixels. We further calculate the lower bound for patch FID score by computing it between two sets of real images. Any score lower than that for this dataset can be disregarded as irrelevant. The determined lower bound for the patch FID score is 37.2. Object/State Accuracy using a Classifier. To evaluate the correctness of objects and states in the generated images, we train a classifier on real images for classifying objects and states independently. This classifier is built on top of CLIPViT-B/32 [48]. Classification logits are obtained by computing the cosine similarity between the image embedding and text embeddings of all possible state labels or object labels. To ensure the reliability of the classifier\u2019s results, we train it on the training set from a different dataset split, where all (object, state) combinations are present. User Study. We conducted a user study to evaluate the generated images. We took images from the test set as well as samples from our generative models and present them to 30 users. 
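The patch FID described above reduces to the standard Fréchet distance between Gaussian fits of Inception-v3 features of patches; below is a minimal sketch assuming the patch features have already been extracted with an Inception backbone (function names are ours, not the evaluation script used for Table 2).

```python
import numpy as np
from scipy import linalg

def frechet_distance(feats_real: np.ndarray, feats_gen: np.ndarray) -> float:
    """FID between two feature sets of shape [N, D]:
    ||mu_r - mu_g||^2 + Tr(C_r + C_g - 2 (C_r C_g)^{1/2})."""
    mu_r, mu_g = feats_real.mean(0), feats_gen.mean(0)
    cov_r = np.cov(feats_real, rowvar=False)
    cov_g = np.cov(feats_gen, rowvar=False)
    covmean = linalg.sqrtm(cov_r @ cov_g)
    if np.iscomplexobj(covmean):      # numerical noise can produce tiny imaginary parts
        covmean = covmean.real
    diff = mu_r - mu_g
    return float(diff @ diff + np.trace(cov_r + cov_g - 2.0 * covmean))
```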
Each user was presented with 25 distinct images, randomly sampled with an even distribution from our models and the test set. After giving a tutorial to the users about the different objects and states present in our experiments, the users were asked to choose an appropriate object name and state label, as well as rate the image for realism on a scale of 1-5. We report the object and state accuracies as well as realism score in Table 2. The details of our user study design can be found in the appendix. \fTable 3. Compositional action recognition results. \u201cStart/End\u201d denote the prediction results for the initial and the final state composition with the corrected object type. Bold and underline represent the top-1 and top-2 results. Split 1 Split 2 Split 3 Start End Start End Start End Model Features acc@1 acc@3 acc@1 acc@3 acc@1 acc@3 acc@1 acc@3 acc@1 acc@3 acc@1 acc@3 AvgPool I3D [6] 9.5 23.7 4.7 14.2 8.3 21.9 5.2 19.8 15.9 28.5 4.8 22.3 LSTM [22] I3D [6] 14.2 36.2 5.7 29.8 12.5 29.2 6.2 26.0 17.5 34.9 6.3 23.7 Transformer [65] I3D [6] 23.7 49.0 10.9 44.3 27.5 46.2 14.6 44.2 20.6 42.9 11.1 44.4 AvgPool MIL-NCE [39] 11.1 31.6 4.8 28.4 9.4 17.7 5.2 13.5 14.2 41.4 12.8 41.4 LSTM [22] MIL-NCE [39] 15.9 36.5 6.4 36.6 11.9 36.7 9.8 36.7 18.9 39.6 8.0 25.4 Transformer [65] MIL-NCE [39] 50.9 85.7 47.7 76.2 56.2 82.3 52.7 88.5 41.1 74.6 42.9 77.7 STLT [47] \u2013 2.8 15.5 1.4 8.4 1.4 13 1.4 11.6 4.2 14.1 1.4 11.3 Transformer [65] R3D [15] 45.1 85.9 52.1 85.9 55.1 94.2 58.0 92.8 59.1 85.9 56.3 85.9 CAF [47] R3D [15] 53.5 88.7 57.8 88.7 55.1 95.7 58.0 95.7 62.0 93.0 63.4 93.0 4.3. Results and Discussion Qualitative Results. Fig. 5 displays the generated images from various methods for seven (object, state) combinations in the test set. The first row of the figure exhibits the ground truth real images for reference. We observe that vanilla SD often generates correct objects in random states, while SD+TI frequently synthesizes images without displaying the object. DreamBooth performs better than SD+TI, but worse than a simple finetuning of SD. SD+FT and SD+FT+TI perform well in terms of state generation. Quantitative Results. Table 2 displays the performance of all baseline methods evaluated according to the metrics outlined in Section 4.2. Assessing image realism is a crucial evaluation metric for generative models; however, defining and measuring it can be challenging. Note that the patch FID values and user realism ratings do not align well. This is due to the disparity between the distribution of images in our dataset and that of typical occurrence of those objects in the real world. The patch FID metric measures the similarity between the generated images with those in our dataset, instead of the ones most typical in real world. In particular, our results indicate that SD achieves the worst patch FID score since it has not encountered our dataset before, whereas its user realism rating is more satisfactory. SD+TI has the lowest user realism rating and a poor patch FID score, which suggests that only training object/state embeddings is inadequate for generating high-quality images. DreamBooth receives a good user realism rating but a poor patch FID, indicating that the images it generates are realistic but not very similar to those in our proposed dataset. Finally, fine-tuning via both SD+FT and SD+FT+TI achieve better results for patch FID and user realism. We next evaluate the accuracy of objects and states in generated images. 
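A sketch of the cosine-similarity scoring rule behind the object/state classifier described earlier (the paper's classifier is built on CLIP ViT-B/32 and additionally trained on a separate split; here we only show the zero-shot scoring step, assuming precomputed image and label-text embeddings):

```python
import torch
import torch.nn.functional as F

def zero_shot_logits(image_emb: torch.Tensor, label_embs: torch.Tensor) -> torch.Tensor:
    """Cosine-similarity logits between one image embedding [D] and K label embeddings [K, D]."""
    image_emb = F.normalize(image_emb, dim=-1)
    label_embs = F.normalize(label_embs, dim=-1)
    return label_embs @ image_emb          # [K] cosine similarities, one per candidate label

def predict_object_and_state(image_emb, object_embs, state_embs, objects, states):
    """Predict object and state independently by taking the most similar label text."""
    obj_idx = zero_shot_logits(image_emb, object_embs).argmax().item()
    state_idx = zero_shot_logits(image_emb, state_embs).argmax().item()
    return objects[obj_idx], states[state_idx]
```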
It is worth noting that the classification task on our dataset is intrinsically difficult, which leads to imperfect user accuracy on real images. In general, the accuracy scores from classifier closely align with one from users, indicating that the proposed classifier is suited for evaluating compositional generation. Our results show that SD achieves the best object accuracy but the worst state accuracy. This is possibly due to the lack of state variations in most existing large image datasets. SD+TI is the worst performer due to its limited learning capacity. On the other hand, DreamBooth, SD+FT, and SD+FT+TI attain better state accuracy. Among them, DreamBooth\u2019s object accuracy is slightly worse as it is particularly trained for states. SD+FT achieves high object accuracy, and SD+FT+TI attains the best state accuracy with the help of fine-tuning and textual inversion together. Green Screen Removal. One of the main challenges for understanding fine-grained object-state pairs with existing datasets such as MIT-states [25] is diverse backgrounds. Using them for training often leads to the model latching on to unwanted background details and missing out on the state understanding. Hence, we collected ChopNLearn with a clean green screen background for the benchmark tasks. While we acknowledge the limitations it poses to our trained models, we highlight that the green screen can potentially enhance our ability to generalize to diverse scenes. This can be achieved by segmenting out images and placing various backgrounds, along with scaled and rotated objectstate images (Figure 4). As a proof-of-concept, we train a SD+FT+TI model on background-augmented images, and report the Patch FID, classifier object accuracy and state accuracy in Tab. 4. Note that here we employ a newly trained classifier that uses background-augmented images, and the patch FID scores are also computed based on these images. We further reference the lower bound of the patch FID as defined in Section 4.2. Due to the complex backgrounds introduced, the object accuracy and the patch FID of the new model are slightly compromised. However, it maintains a high and even improved state accuracy. This demonstrates the potential of the background-augmented ChopNLearn in enhancing fine-grained compositional image generation. 5. Compositional Action Recognition Human actions often change object states and different objects can have diverse visual transitions even when sub\fTable 4. Green screen removal evaluation. Both rows employ the SD+FT+TI but are trained using images with varying backgrounds. Classifiers specific to each dataset are trained to assess Classifier Acc. Validation images used to calculate Patch FID differ between the two rows. Patch FID Lower Bound is computed by evaluating the patch FID on one-half of the validation images relative to the other half. For further details, refer to Section 4.3. Data Classifier Acc. (%) Patch FID \u2193 Patch FID Background Object \u2191 State \u2191 Lower Bound Green Screen 67.8 81.4 82.2 37.2 Various 46.3 82.3 133.6 46.4 jected to the same action type. To investigate this problem in a more intuitive manner, [36] introduced a new task of compositional action recognition, which targets at improving the robustness of models to handle the same actions with different objects involved. 
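The background augmentation described under Green Screen Removal above can be sketched as simple chroma keying followed by compositing; the version below is a rough illustration (threshold-based keying, no shadows or background matting), not the exact pipeline used for Table 4.

```python
import numpy as np

def greenscreen_mask(img: np.ndarray, margin: int = 40) -> np.ndarray:
    """Boolean foreground mask for an RGB uint8 image shot on a green screen.
    A pixel is treated as background when green dominates red and blue by `margin`."""
    r, g, b = img[..., 0].astype(int), img[..., 1].astype(int), img[..., 2].astype(int)
    background = (g - r > margin) & (g - b > margin)
    return ~background

def composite(fg: np.ndarray, bg: np.ndarray) -> np.ndarray:
    """Paste the green-screen foreground onto a new background of the same size."""
    assert fg.shape == bg.shape
    mask = greenscreen_mask(fg)[..., None]      # [H, W, 1] for broadcasting over channels
    return np.where(mask, fg, bg).astype(np.uint8)
```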
For example, given an action of \u2018taking something out from something\u2019, the model is trained on a limited set of objects and is tested on unseen types of objects to access its generalizability. Hence, despite the same underlying action, the object and visual features can be quite diverse. Similarly, the composition of the same action with different object types can look very distinctive. For instance, although cutting an carrot and a apple require similar knife movements, the resulting visual changes are distinct, with the former changing from a whole apple to a peeled apple, and the latter changing from a whole carrot to a peeled carrot. Therefore, we propose to use our dataset for the task of compositional action recognition, which can also be referred to as Compositional Zero-Shot Action Recognition, as the compositions of objects and states are unseen during training. Task Description. For this task, we consider each clip of a video as containing a single object with a single state transition. From the raw videos, which typically contain 2-3 transitions of object states per video, we segment the clips into isolated ones with only one transition. Examples of transitions include changing from a whole object to a peeled object or from a peeled object to a baton cut object. Similar to [36], we divide all object-final state compositions into two sets: seen compositions, which are used for training, and unseen compositions, which are used for testing. Following the approach used in the Compositional Image Generation task, we ensure that each object and state are seen at least once individually in the training set, but not together as a composition. The objective of the task is to predict the correct labels for the initial object-state composition (oi, sj) and the final composition (oi, sk), given a clip containing an object oi transitioning from an initial state sj to a final state sk. Note that the clip is considered correctly classified only if both the object and state labels are correct for both the initial and final compositions. 5.1. Dataset Splits We create 3 different dataset splits as follows (more details are in the Appendix). All splits have disjoint train, test and validation samples, and are created with different constraint combinations: \u2022 Split 1: This split is a random selection of object-final state compositions with cross-view condition. We do not use any information from related groups. \u2022 Split 2: In this split, we use related group information for states, along with cross-view. based on related groups, if baton carrots is seen in training set, then julienne carrots can be part of test set. Since baton and julienne are part of the same related group, we can learn an object in one style and can generalize to another style from the same group in Section 3.1. \u2022 Split 3: This split includes information from both related groups for states and objects. We want to ensure that even if an object is not seen in its related group, a similar object is seen in the related group. For example, if broccoli is seen with large cuts, then cauliflower with large or small cuts can be in the test set. Hence different splits represent different complexity levels for compositional action recognition. Evaluation. We evaluate the accuracy of predicting both the initial and final compositions of objects and states in the test set. Only when the object and state are both correct, it is counted as a correct prediction. 
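As a concrete illustration of this evaluation, and of the feature-based baselines with separate object and state heads reported next, here is a hypothetical sketch assuming pre-extracted clip features (e.g., I3D or MIL-NCE); dimensions, names, and the pooling choice are illustrative.

```python
import torch
import torch.nn as nn

class CompositionBaseline(nn.Module):
    """Transformer over pre-extracted clip features with separate object/state heads.
    A simplified stand-in for the Transformer baselines in Table 3."""

    def __init__(self, feat_dim=1024, d_model=256, n_objects=20, n_states=8):
        super().__init__()
        self.proj = nn.Linear(feat_dim, d_model)
        layer = nn.TransformerEncoderLayer(d_model, nhead=4, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=2)
        self.object_head = nn.Linear(d_model, n_objects)
        self.start_state_head = nn.Linear(d_model, n_states)
        self.end_state_head = nn.Linear(d_model, n_states)

    def forward(self, feats):                              # feats: [B, T, feat_dim]
        h = self.encoder(self.proj(feats)).mean(dim=1)     # temporal average pooling
        return self.object_head(h), self.start_state_head(h), self.end_state_head(h)


def composition_accuracy(pred, gt):
    """A clip counts as correct only if object, initial state, and final state all match."""
    obj, s0, s1 = (p.argmax(-1) for p in pred)
    correct = (obj == gt["object"]) & (s0 == gt["start_state"]) & (s1 == gt["end_state"])
    return correct.float().mean()
```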
Specifically, we use two separate prediction heads for objects and states. We emphasize the need to evaluate composition as a whole, rather than just predicting the state, as the way an apple is cut can differ significantly from the way a bellpepper is cut. Therefore, accurately recognizing both the object and state is crucial for tasks related to understanding and generating videos of object states. We also recognize the importance of top@3 accuracy, since object states can sometimes be visually similar, leading to confusion in detecting the correct composition. For example, julienne apple can be visually very similar to julienne potato. 5.2. Results To evaluate our proposed method, we establish baselines using both traditional architectures and features for video action classification, as well as comparing with recent works in compositional action recognition. As shown in Table 3, in the first section, we use pre-extracted I3D[6] features and conduct experiments by comparing simple average pooling, LSTM, and multi-layer Transformer [65] model. It shows that the Transformer model performs the best among these variants due to the great capacity of temporal modeling ability. In the second section, we also experiment with more recent pre-trained features MILNCE [39] along with transformer models, which outperforms I3D features. MIL-NCE [39] features are pre-trained on HowTo100M [40] with multimodal (RGB+narrations) setup, which is more robust for video downstream tasks. \fw w lc b lc b lc b sc sc Generalizing: Transitions of states for a watermelon lc sc lc sc lc sc sc lc Learning:\u00a0\u00a0orange -\u00a0large cut \u2192 small cut Figure 6. Video parsing graph: For a given video, we use GradCAM[57] on the intermediate frames to identify and visualize the class activation maps corresponding to the most salient states. Top: A training video clip has one transition of orange from large cut \u2192small cut. Bottom: We can learn single transitions from training data, to generalize transitions in a long video with multiple state changes and parse the video as a graph. In the final section of Table 3, we employ the state-of-theart compositional video recognition model proposed in [47] and use pseudo labels of bounding boxes for each hand and object, as there are no ground-truth hand and object trajectories available. Specifically, the Spatial-Temporal Layout Transformer (STLT) [47] takes in the spatio-temporal locations and class labels for each bounding box as input, uses positional embeddings to project coordinates and class labels into features, and adds transformer layers to model spatial-temporal relationships. However, without any appearance information, STLT achieves low performance on all metrics. On the other hand, with the appearance features, which are extracted by inflated 3D ResNet50 [27] (R3D), it can achieve much higher performances than STLT. Finally, Cross-Attention Fusion (CAF) applies cross-attention [63] to fuse the layout (STLT) and appearance (R3D) branch embeddings, achieving the best results. It demonstrates that combining the layout and appearance information together can help predict object and state types more accurately. 6. Discussion We discuss the potential future use of ChopNLearn, while addressing the limitations and scope as well. Long-term Video Parsing. We use compositional state recognition to further understand the temporal dynamics [11, 16\u201318] with the aid of a video parsing graph construction as previously explored in Ego-Topo [42] and VideoGraph [23]. 
Each clip in the training set has one state transformation (top example in Figure 6). We visualize the class activation maps corresponding to the most salient intermediate state transitions with Grad-CAM [57], to learn the transition in each frame of the video for training data. This is illustrated as a graph for a training video. Having learned multiple single transformations, we can now extend this knowledge to understand long activities, with multiple transitions. As shown in Fig. 6, we can learn state changes for orange from large cut \u2192small cut using our training clip. Given a long unseen video with multiple clips, we can construct a state-transition graph to represent changes in state for a watermelon. Hence, by using an extensive array of videos, the process of learning transitions between individual states can be extended to encompass transitions between multiple states. This enables the creation of a self-supervised transition knowledge graph for comprehensive long-term video comprehension, as demonstrated in [11, 69]. Limitations. With advent of foundation models, few-shot generalization is an increasingly important task. In this work, we explore the potential of ChopNLearn for the research in compositional generation and recognition for highly complex and interdependent concepts. Admittedly, ChopNLearn is a small scale dataset with green screen background, which restricts the models trained on it to have specific biases. Nonetheless, this is the first attempt to understand how fine-grained states (cut styles) can be transferred to diverse objects. We explore this by using ChopNLearn as a test set for larger models, fine-tuning these models using ChopNLearn and trying them with or without a green screen background. We further see the potential of using ChopNLearn for benefiting the community in even more challenging tasks such as 3D reconstruction, video frame interpolation, state change generation, etc. 7." + }, + { + "url": "http://arxiv.org/abs/2205.08536v1", + "title": "Disentangling Visual Embeddings for Attributes and Objects", + "abstract": "We study the problem of compositional zero-shot learning for object-attribute\nrecognition. Prior works use visual features extracted with a backbone network,\npre-trained for object classification and thus do not capture the subtly\ndistinct features associated with attributes. To overcome this challenge, these\nstudies employ supervision from the linguistic space, and use pre-trained word\nembeddings to better separate and compose attribute-object pairs for\nrecognition. Analogous to linguistic embedding space, which already has unique\nand agnostic embeddings for object and attribute, we shift the focus back to\nthe visual space and propose a novel architecture that can disentangle\nattribute and object features in the visual space. We use visual decomposed\nfeatures to hallucinate embeddings that are representative for the seen and\nnovel compositions to better regularize the learning of our model. Extensive\nexperiments show that our method outperforms existing work with significant\nmargin on three datasets: MIT-States, UT-Zappos, and a new benchmark created\nbased on VAW. 
The code, models, and dataset splits are publicly available at\nhttps://github.com/nirat1606/OADis.", + "authors": "Nirat Saini, Khoi Pham, Abhinav Shrivastava", + "published": "2022-05-17", + "updated": "2022-05-17", + "primary_cat": "cs.CV", + "cats": [ + "cs.CV", + "cs.LG" + ], + "main_content": "Introduction Objects in the real world can appear with different properties, i.e., different color, shape, material, etc. For instance, an apple can be red or green, cut or peeled, raw or ripe, and even dirty or clean. Understanding object properties can greatly benefit various applications, e.g., robust object detection [7,17,18,29], human object interaction [10,53,55], and activity recognition [3, 5, 6, 19, 21, 37]. Since the total number of possible attribute-object pairs in the real world is prohibitively large, it is impractical to collect image examples and train multiple classifiers. Prior works proposed compositional learning, i.e., learning to compose knowledge of known attributes and object concepts to recognize a new attribute-object composition. Datasets such as MITStates [27] and UT-Zappos [60] are commonly used to study this task, with joint attribute-object recognition for a diverse, yet limited set of objects and attributes. orange sliced apple seen pair: peeled apple unseen pair: sliced orange peeled Iattr: peeled orange I : peeled apple Visual Disentanglement Visual Composition Input Images Iobj: sliced apple Figure 1. Method illustration: Given an input image I of peeled apple, we use two other images: (1) one with same object, different attribute Iobj sliced apple, (2) one with same attribute, different object Iattr peeled orange. We propose a novel architecture that takes I and Iattr, and extracts their visual similarity features for peeled and visual dissimilarity features for orange. Similarly, using I and Iobj, the visual similarity features for apple, and the dissimilarity features for sliced can be extracted. We compose these primitive visual features to hallucinate a seen pair peeled apple, and a novel unseen pair sliced orange to be used for regularizing our embedding space. Note that this is a visualization of embedding space composition, we do not generate images. Compositional learning refers to combining simple primitive concepts to understand a complex concept. This idea dates back to Recognition and Composition theory by Biederman [8], and early work in the visual domain by Hoffman [25], which proposed recognition by parts for pose estimation. Prior works explore compositionality to a certain degree, e.g., via feature sharing and shared embeddings space. Among them, most works use linguistically inspired losses to separate attributes and objects in the shared embedding space, then use that primitive knowledge to compose new complex pairs. Using linguistic embeddings is helpful since: (1) there is a clear distinction between attribute and object in the embedding space, and (2) these embeddings already contain semantic knowledge of similar objects and attributes, which is helpful for composition. However, unlike word embedding, it is difficult to discrimi1 \fnate the object and attribute in the visual embedding space. This is due to the fact that image feature extractor is usually pre-trained for object classification, often along with image augmentation (e.g., color jitter) that tends to produce attribute-invariant image representation, thus does not learn objects and attributes separately. 
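The triplet construction illustrated in Figure 1 (an anchor image I, an image I_attr sharing its attribute, and an image I_obj sharing its object) can be sketched as a simple sampling routine over (image, attribute, object) annotations; this is our own illustration, not the released OADis code.

```python
import random
from collections import defaultdict

def build_index(samples):
    """samples: list of (image_path, attribute, object) annotations."""
    by_attr, by_obj = defaultdict(list), defaultdict(list)
    for img, attr, obj in samples:
        by_attr[attr].append((img, attr, obj))
        by_obj[obj].append((img, attr, obj))
    return by_attr, by_obj

def sample_triplet(anchor, by_attr, by_obj):
    """Pick I_attr (same attribute, different object) and I_obj (same object, different attribute)."""
    img, attr, obj = anchor
    attr_pool = [s for s in by_attr[attr] if s[2] != obj]
    obj_pool = [s for s in by_obj[obj] if s[1] != attr]
    i_attr = random.choice(attr_pool) if attr_pool else anchor
    i_obj = random.choice(obj_pool) if obj_pool else anchor
    return anchor, i_attr, i_obj
```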
In this paper, we propose a new direction that focuses on visual cues, instead of using linguistic cues explicitly for novel compositions. Analogous to linguistic embedding, our work focuses on disentangling attribute and object in the visual space. Our method, Object Attribute Disentanglement (OADis), learns distinct and independent visual embeddings for peeled and apple from the visual feature of peeled apple. As shown in Figure 1, for image I of peeled apple, we use two other images: one with same object and different attribute Iobj (e.g., sliced apple), and one with same attribute and different object Iattr (e.g., peeled orange). OADis takes I and Iobj and learns the similarity (apple) and dissimilarity (sliced) of the second image with respect to the first one. Similarly, using I and Iattr, the commonality between them (peeled) and the left out dissimilarity (orange) can also be extracted. Further, composition of these extracted visual primitives are used to hallucinate seen and unseen pair, peeled apple and sliced orange respectively. For compositional learning, it is necessary to decompose first before composing new unseen attribute-object pairs. As humans, we have the ability to imagine an unseen complex concept using previous knowledge of its primitive concepts. For example, if someone has seen a clown and a unicycle, they can imagine clown on a unicycle even if they have never seen this combination in real life [23, 47]. This quality of imagination is the basis of various works such as GANs [15], CLIP [51] and DALL-E [52]. However, these works rely on larger datasets and high computation power for training. We study this idea of imagination for a smaller setup by composing newer complex concepts using disentangled attributes and object visual features. Our work focuses on answering the question, can there be visual embedding of peeled and apple, disentangled separately from visual feature of peeled apple? Our contributions are as follows: \u2022 We propose a novel approach, OADis, to disentangle attribute and object visual features, where visual embedding for peeled is distinct and independent of embedding for apple. \u2022 We compose unseen pairs in the visual space using the disentangled features. Following Compositional Zeroshot Learning (CZSL) setup, we show competitive improvement over prior works on standard datasets [27,60]. \u2022 We propose a new large-scale benchmark for CZSL using an existing attribute dataset VAW [49], and show that OADis outperforms existing baselines. 2. Related Work Visual Attributes. Visual attributes have been studied widely to understand visual properties and low-level semantics of objects. These attributes help further improve on various downstream tasks such as object detection [7, 14, 17, 18,29,40], action recognition [3,5,6,19,21,37], image captioning [28,44], and zero-shot and semi-supervised classification [4,13,14,30,43,45,54]. Similar to multi-class classification for objects, initial work for attribute understanding used discriminative models [29,46], without understanding attributes. Other works [11,18,26,35] explored the relation between the same attributes and different objects, to learn visual attributes. Particularly, disentangling object features from attribute features are explored in [20, 22]. Although, these works use clustering and probabilistic models to learn the attributes of objects. Compositional Zero-shot Learning. Concept of compositional learning was first introduced in Recognition by Parts [25]. 
Initially, [39] employed this concept for objects and attributes. Unlike zero-shot learning (ZSL), CZSL requires the model to learn to compose unseen concepts from already learned primitive components. [11, 39] proposed separate classifiers for primitive components, and merged all into a final classifier. Most prior works use linguistically inspired auxiliary loss terms to regularize training for embedding space, such as: [42] models attributes as a linear transformation of objects, [33] uses rules of symmetry for understanding states, and [59] learns composition and decomposition of attributes hierarchically. Another set of studies uses language priors to learn unseen attributeobject pairs, either in feature space or with multiple networks [34, 50, 56]. Other recent works use graph structure to leverage information transfer between seen to unseen pairs using Graph Convolutional Networks [36, 41], and [58] uses key-query based attention, along with modular network with message passing for learning relation between primitive concepts. 3. Object Attribute Disentanglement (OADis) Contrary to prior works [33,41,42,59], we explicitly focus on separating attributes and object features in the visual space. More precisely, TMN [50] uses word embeddings to generate attention layers to probe image features corresponding to a given pair, GraphEmbedding [41] exploits the dependency between word embeddings of the labels, and HiDC [59] mainly uses word embeddings to compose novel pairs and generate more examples for their triplet loss. To the best of our knowledge, none of the existing works have explored visual feature disentanglement of attributes and objects. We hypothesize that attribute and object visual features can be separated when considering visual feature similarities and differences between image pairs. Composing these disentangled elements help regularize the com2 \fLabel Embedder unseen pairs seen pairs peeled MLP MLP apple CosCls CosCls CosCls CosCls CosCls Image Encoder ResNet : peeled apple :\u00a0peeled orange Object-Conditioned Network Object Affinity Network : sliced apple sliced orange GloVe Attribute Affinity Network Figure 2. System Overview: Given an image I, for peeled apple, we consider two images:, one with same object: Iobj, sliced apple, and one with same attribute, Iattr peeled orange. (1) The Object-Conditioned Network composes pair word embedding, using GloVe word embeddings for labels. (2) Label Embedder uses the image I and embeds visual feature vattr,obj along with word embedding wattr,obj, using loss Lcls. (3) Attribute Affinity Network and Object Affinity Network, disentangles the same attribute and object from the pair of images I, Iattr and I, Iobj respectively. Disentangled visual features for peeled (vattr) and apple (vobj) are used along with word embeddings of attribute (wattr) and objects (wobj), to compute Lattr and Lobj. (4) Using disentangled features, we compose seen pair peeled apple (vattr, vobj) and unseen pair sliced orange (v\u2032 attr, v\u2032 obj), for composition losses Lseen and Lunseen. mon embedding space to improve recognition performance. More concretely, we take cues from [20] and [39, 59], to learn to compose unseen attribute-object pairs leveraging visual attributes based on auxiliary losses. 3.1. Task Formulation We follow the conventional Compositional Zero-shot Learning (CZSL) setup, where distinct attribute-object compositions are used at training and testing. 
Each image I is labeled with y = yattr,obj \u2208Y , where yattr and yobj are respectively the attribute and object label. The dataset is divided into two parts, seen pairs ys \u2208Y s and unseen pairs yu \u2208Y u, such that Y = Y s \u222aY u, Y s \u2229Y u = \u2205. Although yu = yattr,obj \u2208Y u consists of attribute yattr and object yobj that are never seen together in training, they are separately seen. We employ the Generalized CZSL setup defined in [50], which has seen Y s and unseen pairs Y u in the validation and test sets as detailed in Table 1. As shown in Figure 2, for image I, with label peeled apple, we choose two additional images: one with same object and different attribute Iobj (e.g., sliced apple), and another image with same attribute and different object Iattr (e.g., peeled orange). Note that the subscript of image symbol, e.g., attr in Iattr, shows similarity with I, whereas superscript denotes seen and unseen sets. 3.2. Disentangling Visual Features We extract image and label embedding features from pre-trained networks (ResNet [24] and GloVe [48]). As seen in Figure 2, we use Image Encoder (IE) and Object Conditioned Network (OCN), for image and word embedding features respectively. Similar to [42], we use Label Embedder (LE) as an additional FC-Layer for the image feature. LE and OCN learn image and word embeddings and embed those in a common pair embedding space. Next, visual similarity between I and Iobj is computed using Object Affinity Network, which extracts visual features for object, vobj. Whatever is not similar is considered dissimilar. Hence, visual features of Iobj that are least similar to visual features of I are considered as the attribute feature v\u2032 attr in Iobj, which is sliced in this example. Similarly, Attribute Affinity Network takes I and Iattr, and extracts visual similarity feature vattr for peeled, and dissimilar visual features of Iattr, as object feature v\u2032 obj for orange. The disentangled features are then used to compose seen and unseen pairs. We discuss the details in the following sections: Image Encoder (IE). We use the second last layer before AveragePool of an ImageNet-pretrained ResNet-18 [16,24] to extract features for all images. IE is a single convolutional layer that is shared across images I, Iattr and Iobj to generate their image features, represented as f, fattr and fobj respectively, where each f \u2208Rn\u00d749 and n is the output dimension of IE. Label Embedder (LE). Inspired by [42], our LE inputs spatial feature from ResNet [24], AveragePools and passes through a linear layer to extract final feature vattr,obj for pair embedding, which has same dimension as the word embedding final feature wattr,obj, extracted from Object Conditioned Network (OCN) (Figure 2). This is the main branch, and is used for input image I only. Object Conditioned Network (OCN). This takes word embeddings of attribute embattr and object embobj, concatenates the features and passes through multiple layers. Object-conditioned is named because a residual connection for the object feature is concatenated with the final attribute 3 \frow softmax column softmax f fattr row-sum column-sum column-sum row softmax = = = (b) I Iattr (a) Xnseen pairs seen pairs Figure 3. (a) Attribute Affinity Module: We compute the cosine similarity between blocks in f and fattr (S in Eq. 3), then apply row-wise and column-wise softmax (A and A\u2032), followed by a respective column-sum and row-sum to obtain m and mattr. 
m represents regions where f_attr is highly similar to f (hence, we reshape and multiply m with f_attr) and m_attr represents regions where f is highly similar to f_attr (thus, m_attr · f). Similarly, S′ represents the regions where feature f_attr is not similar to feature f (more details in Section 3.2). The last row shows real samples and generated attention maps overlaid on images. Given images of a ruffled bag and a ruffled flower, we show that the attribute ruffled is highlighted in the center maps m_attr · f and m · f_attr, whereas m′_obj · f_attr shows the regions of I_attr that are dissimilar to I. (b) The three embedding spaces learned with different losses; the notation follows Figure 2.

feature, and the output feature is w_{attr,obj} ∈ Y. We discuss the motivation for this in Section 4.3.

Cosine Classifier (CosCls). Analogous to the compatibility function used in [36, 41], we use cross-entropy along with cosine similarity to get the final score for each pair. For the visual feature v_{attr,obj} (from LE) and the composed word embedding w_{attr,obj} (from OCN), CosCls provides the logits for an image I. For instance, let v : X → Z and w : Y → Z, where Z is the common embedding space for word embeddings w and visual embeddings v. The classifier unit CosCls then gives the score C for label y ∈ Y^s as

h(v, w) = \cos(v, w) = \delta \cdot \frac{v^{T} w}{\lVert v \rVert \, \lVert w \rVert}, \qquad C(v, w) = \frac{e^{h(v, w)}}{\sum_{y \in Y^{s}} e^{h(v, y)}}    (2)

where δ is the temperature variable. Each loss function uses the same CosCls score evaluator with different inputs.

Object and Attribute Similarity Modules. Our main contribution is the proposed affinity modules and compositional losses. Inspired by image captioning [12, 31, 32], OADis uses image similarities and differences to identify visual features corresponding to attributes and objects. The Object Affinity Network (OAN) uses f and f_obj, whereas the Attribute Affinity Network (AAN) uses f and f_attr. For brevity, we explain the AAN; the OAN follows the same architecture. Recall that both f and f_attr ∈ R^{n×49}. Similar to [57], which computes attention between word concepts and the corresponding visual blocks, we compute attention between the two images I and I_attr. Since both images share the same attribute, i.e., peeled, our affinity network learns the visual similarity between the images, which represents the attribute. The similarity matrix S is the cosine similarity between f and f_attr, such that S ∈ R^{49×49}:

\mathcal{S} = \frac{f^{T} f_{attr}}{\lVert f \rVert_{2} \, \lVert f_{attr} \rVert_{2}}    (3)

where element s_{ij} represents the similarity between the i-th element of f and the j-th element of f_attr. Moreover, let s_{i*} and s_{*j} denote the i-th row and j-th column of S respectively. Then s_{i*} captures the similarity of all the elements in f_attr with respect to the i-th element of f; to find the most similar element of f_attr for the i-th element of f, we can take a row-wise softmax over S. Similarly, for the j-th element of f_attr, the column s_{*j} represents its similarity with all the elements of f, and a column-wise softmax identifies the most and least similar elements of f with respect to the j-th element of f_attr, as shown in Figure 3. Therefore, by applying row-wise and column-wise softmax, we get two matrices A and A′ (A, A′ ∈ R^{d×d}, d = 49),

\mathcal{A}_{i} = \frac{e^{\lambda s_{i*}}}{\sum_{j=1}^{d} e^{\lambda s_{ij}}} \quad \text{and} \quad \mathcal{A}'_{j} = \frac{e^{\lambda s_{*j}}}{\sum_{i=1}^{d} e^{\lambda s_{ij}}},    (4)

where λ is the inverse temperature parameter. We compute the row and column sums of A and A′ respectively to get the final similarity maps m and m_attr,

m_{j} = \sum_{i=1}^{d} \mathcal{A}_{ij} \quad \text{and} \quad (m_{attr})_{i} = \sum_{j=1}^{d} \mathcal{A}'_{ij}.    (5)

Similarly, the difference between these two images f and f_attr corresponds to the object label y_obj. Hence, we use the negative of S as the image difference, denoted S′. The difference of f_attr with respect to f would then be the row-wise softmax of the difference matrix, denoted D, and by performing a column-sum over D we get the difference map m′_obj,

\mathcal{D}_{j} = \frac{e^{\gamma s'_{*j}}}{\sum_{i=1}^{d} e^{\gamma s'_{ij}}} \quad \text{and} \quad (m'_{obj})_{i} = \sum_{j=1}^{d} \mathcal{D}_{ij}.    (6)

The final disentangled features for the attribute, v_attr, and the object, v′_obj, for both AAN and OAN, can be computed as:

\begin{split} v_{attr} = m \cdot f_{attr} + m_{attr} \cdot f \quad &\text{and} \quad v'_{obj} = m'_{obj} \cdot f_{attr} \\ v_{obj} = m \cdot f_{obj} + m_{obj} \cdot f \quad &\text{and} \quad v'_{attr} = m'_{attr} \cdot f_{obj}. \end{split}    (7)

More details using a toy example can be seen in Figure 3. Concatenating v_attr and v_obj and passing them through a single linear layer composes the pair peeled apple, represented by (v_attr, v_obj). Similarly, the disentangled visual features v′_attr and v′_obj are used to compose the unseen pair sliced orange, represented as (v′_attr, v′_obj).

3.3. Embedding Space Learning Objectives

As shown in Figure 3b, we learn three embedding spaces: (1) the attribute space, (2) the object space, and (3) the attribute-object pair space. The attribute and object spaces are used for disentangling the two, whereas the pair embedding space is used for the final pair composition and inference. OADis has separate loss functions for disentangling and for composing, all expressed in terms of the CosCls defined previously. The loss function for the main branch, L_cls, uses the combined visual feature v_{attr,obj} from LE and the word embedding feature w_{attr,obj} from OCN; L_cls is used for the pair embedding space. Similarly, L_attr and L_obj are used to learn the visual attribute and object features in their respective embedding spaces: L_attr pushes the visual feature of the attribute closer to its word embedding, and L_obj does the same for objects in the object embedding space (Figure 3b). These losses cover the concept of disentanglement and can be written as:

L_{cls} = C(v_{attr,obj}, w_{attr,obj}), \qquad L_{attr} = C(v_{attr}, w_{attr}), \qquad L_{obj} = C(v_{obj}, w_{obj}).    (8)

For composition, we use L_seen and L_unseen. Among the seen images (I, I_attr, and I_obj), the disentangled features v_attr and v_obj compose the same pair (v_attr, v_obj), which we refer to as the seen composition. Note that (v_attr, v_obj) is different from v_{attr,obj}: the former is a hallucinated feature built by combining the disentangled attribute and object visual features,
Table 1. This table shows dataset splits. Y^s and Y^u are seen and unseen compositions respectively.
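A compact sketch of the AAN computation in Eqs. (3)-(7) is given below (the OAN is identical with f_obj in place of f_attr). This is our own illustration: the temperature values and function name are placeholders, and the composition layers that consume v_attr and v'_obj are omitted.

```python
import torch
import torch.nn.functional as F

def affinity_features(f, f_attr, lam=10.0, gamma=10.0):
    """Sketch of the Attribute Affinity Network, Eqs. (3)-(7).

    f, f_attr: [n, d] feature maps (n channels, d = 49 spatial blocks).
    Returns v_attr (shared-attribute feature) and v_obj_prime (feature of the
    regions of f_attr that are dissimilar to f).
    """
    fn = F.normalize(f, dim=0)             # unit-normalize each spatial column
    fan = F.normalize(f_attr, dim=0)
    S = fn.t() @ fan                       # Eq. (3): [d, d] cosine similarities

    A = F.softmax(lam * S, dim=1)          # Eq. (4): row-wise softmax
    A_prime = F.softmax(lam * S, dim=0)    #          column-wise softmax
    m = A.sum(dim=0)                       # Eq. (5): m_j = sum_i A_ij   (weights over f_attr blocks)
    m_attr = A_prime.sum(dim=1)            #          (m_attr)_i = sum_j A'_ij (weights over f blocks)

    D = F.softmax(gamma * (-S), dim=0)     # Eq. (6): softmax over the negated similarities
    m_obj_prime = D.sum(dim=1)             #          difference map m'_obj

    v_attr = f_attr @ m + f @ m_attr       # Eq. (7): attribute feature shared by both images
    v_obj_prime = f_attr @ m_obj_prime     #          left-over object feature of I_attr
    return v_attr, v_obj_prime
```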
We propose a new benchmarck, VAW-CZSL [49], which has more than 10\u00d7 compositions in each split compared to other datasets. Train set Val set Test set Datasets attr. obj. Y s Y s/Y u Y s/Y u MIT-states [27] 115 245 1262 300 / 300 400 / 400 UT-Zappos [60] 16 12 83 15 / 15 18 / 18 VAW-CZSL [49] 440 541 11175 2121 / 2322 2449 / 2470 and latter is the combined visual feature extracted with LE. Here, we use Lseen loss which takes the composition of disentangled features and learns to put the composition closer to wattr,obj. Moreover, the dissimilarity aspect from OAN and AAN extracts v\u2032 attr and v\u2032 obj, which composes an unseen pair (v\u2032 attr, v\u2032 obj). We use Lunseen as unseen loss since the hallucinated composition is never seen among I , Iattr, and Iobj. \\be g in {split } \\lse &= C((\\va ,\\vo ) , \\wao )\\\\ \\ lu n &= C( (\\vap ,\\vop ),\\waop )\\\\ \\end {split} (9) The combined loss function L is minimized over all the training images, to train OADis end-to-end. The weights for each loss (\u03b1) are empirically computed: \\beg i n {spli t } \\mat h cal {L} &= \\lcls + \\alpha _1 \\lat + \\alpha _2 \\lob + \\alpha _3 \\lse + \\alpha _4 \\lun . \\end {split} 4. Experiment 4.1. Datasets and Metrics We show results on three datasets: MIT-states [27], UTZappos [60], and a new benchmark for evaluating CZSL on images of objects in-the-wild, referred as VAW-CZSL. VAW-CZSL is created based on images with object and attribute labels from the VAW dataset [49]. Both MITstates [27] and UT-Zappos [60] are common datasets used for this task in previous studies. MIT-states covers wide range of objects (i.e., laptop, fruits, fish, room, etc.) and attributes (i.e., mossy, dirty, raw, etc.), whereas UT-zappos has fewer objects (i.e., shoes type: boots, slippers, sandals) and fine-grained attributes (i.e., leather, fur, etc.). Proposed New Benchmark. While experimenting with MIT-states [27] and UT-Zappos [60], we found several shortcomings with these datasets and discovered issues across all baselines using these datasets: \u2022 Both datasets are small, with a maximum of 2000 attribute-object pairs and 30k images, leading to overfitting fairly quickly. \u2022 Random seed initialization makes performance fluctuate significantly (0.2-0.4% AUC). Moreover, [4] found 70% noise in human-annotated labels on MIT-States [27]. \u2022 A new dataset C-GQA was introduced in [41], but the dataset is still small and we found a lot of discrepancies (kindly refer to the suppl.). 5 \fTable 2. We show results on MIT-states [27] and UT-Zappos [60]. Following [41, 50], we use AUC in % between seen and unseen compositions with different bias terms, along with Val, Test, attribute and object accuracy. HM is Harmonic Mean. OADis consistently outperforms on most categories with significant increment. MIT-States UT-Zappos Model Val@1 Test@1 HM Seen Unseen Attribute Object Val@1 Test@1 HM Seen Unseen Attribute Object AttrOpr [42] 2.5 2.0 10.7 16.6 18.4 22.9 24.7 29.9 22.8 38.1 55.5 54.4 38.6 70.0 LabelEmbed+ [42] 3.5 2.3 11.5 16.2 21.2 25.6 27.5 35.5 22.6 37.7 53.3 58.6 40.9 69.1 TMN [50] 3.3 2.6 11.8 22.7 17.1 21.3 24.2 35.9 28.4 44.0 58.2 58.0 40.8 68.4 Symnet [33] 4.5 3.4 13.8 24.8 20.0 26.1 25.7 27.4 27.7 42.5 56.7 61.6 44.0 70.6 CompCos [36] 6.9 4.8 16.9 26.9 24.5 28.3 31.9 40.8 26.9 41.1 57.7 62.8 43.3 73.0 GraphEmb [41] 7.2 5.3 18.1 28.9 25.0 27.2 32.5 33.9 24.7 38.9 58.8 61.0 44.0 72.6 OADis 7.6 5.9 18.9 31.1 25.6 28.4 33.2 40.8 30.0 44.4 59.5 65.5 46.5 75.5 Table 3. We show results on VAW-CZSL. 
Since it is a much more challenging dataset, with significantly large number of compositions, to discriminate performance among different baseline, we show top-3 and top-5 AUC (in %) for Val and Test sets. Val. Set Test Set Model V@3 V@5 V@3 V@5 HM Seen Unseen Attr. Obj. AttrOpr [42] 1.4 2.5 1.4 2.6 9.1 16.4 11.7 13.7 34.9 LabelEmbed+ [42] 1.5 2.8 1.6 2.8 9.8 16.2 13.2 13.4 35.1 Symnet [33] 2.3 3.9 2.3 3.9 12.2 19.1 15.8 18.6 40.9 TMN [50] 2.2 3.9 2.3 4.0 11.9 19.9 15.4 15.9 38.3 CompCos [36] 3.1 5.6 3.2 5.6 14.2 23.9 18.0 16.9 41.9 GraphEmb [41] 2.7 5.3 2.9 5.1 13.0 23.4 16.8 16.9 40.8 OADis 3.5 6.0 3.6 6.1 15.2 24.9 18.7 17.5 43.3 To address these limitations, we propose a new benchmark VAW-CZSL, a subset of VAW [49], which is a multilabel attribute-object dataset. We sample one attribute per image, leading to much larger dataset in comparison to previous datasets as shown in Table 1 (details in the suppl.). Evaluation. We use Generalized CZSL setup, defined in [50], with dataset statistics presented in Table 1. As observed in prior works [41, 50], a model trained on a set of labels Y s, does not generalize well on unseen pairs Y u. Therefore, [41, 50] use a scalar term for overcoming the negative bias for unseen pairs. We use the same evaluation protocol, which computes Area Under the Curve (AUC) (in %) between the accuracy on seen and unseen compositions with different bias terms [50]. Larger bias term leads to better results for unseen pairs whereas smaller bias leads to better results for seen pairs. Harmonic mean is reported, to balance the bias. We also report the attribute and object accuracy for unseen pairs, to show improvement due to visual disentanglement of features. Our new benchmark subset for VAW [49], follows the similar split as other datasets. In addition, we conduct all experiments with image augmentation for all methods (discussed in Section 4.3). 4.2. Results and Discussion Baselines. We compare with related recent and prominent prior works: AttrOp [42], LabelEmbed+ [42], TMN [50], Symnet [33], CompCos [36] and GraphEmb [41]. We do not compare with BMP [58], since it uses the concatenation of features from all four ResNet blocks (960-d features), resulting in higher input features and the number of network parameters than all other setups. Moreover, GraphEmb [41] is state-of-the-art; hence, comparing with that makes our work comparable to other baselines that [41] already outperforms. To be consistent, we state the performance of all models (including GraphEmb [41]) using frozen backbone ResNet without fine-tuning the image features, and using GloVe [48] for the object and attribute word embeddings. Before passing through backbone, training images are augmented with horizontal flip and random crop. Compared to other baselines, OADis uses convolutional features rather than AvgPooled, since it is easier to segregate visual features in the spatial domain for attributes and objects. Moreover, other studies [36,41] have also used additional FC layers on top of IE, which we argue makes it fair for us to use pre-pooled features for OADis. Results on MIT-States. MIT-states has considerable label noise [4], but still is a standard dataset for this task. We show significant improvement on this dataset (reported in Table 2), from previous state-of-the-art GraphEmb, which has 7.2 Val AUC and 5.3 Test AUC. 
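A sketch of the generalized CZSL evaluation referenced above: unseen-pair scores are shifted by a sweep of bias values, seen/unseen accuracies are collected at each bias, and AUC plus the best harmonic mean are reported. The details below (array layout, trapezoidal AUC) are our reading of the protocol of [50], not its exact code.

```python
import numpy as np

def generalized_czsl_auc(scores, targets, unseen_mask, is_unseen_image, biases):
    """scores: [N, P] over all pairs; targets: [N] gold pair indices;
    unseen_mask: [P] bool marking unseen pairs; is_unseen_image: [N] bool marking
    images whose gold pair is unseen; biases: iterable of bias values to sweep."""
    seen_accs, unseen_accs = [], []
    for b in biases:
        shifted = scores + b * unseen_mask[None, :]   # boost unseen-pair scores by the bias
        correct = shifted.argmax(axis=1) == targets
        seen_accs.append(correct[~is_unseen_image].mean())
        unseen_accs.append(correct[is_unseen_image].mean())
    seen_accs, unseen_accs = np.array(seen_accs), np.array(unseen_accs)
    order = np.argsort(seen_accs)
    auc = np.trapz(unseen_accs[order], seen_accs[order])   # area under the seen/unseen curve
    hm = np.max(2 * seen_accs * unseen_accs / (seen_accs + unseen_accs + 1e-12))
    return auc, hm
```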
Note that we do not report GraphEmb results with fine-tuning backbone, as we find it incomparable with other baselines that did not incorporate fine-tuning as part of their proposed methods. Overall, our model performs significantly better than GraphEmb on all metrics. Results on UT-Zappos. Similar improvement trends hold for UT-Zapopos as well (see Table 2). Although, as explained for GraphEmb, it is difficult to balance the best performance for Val and Test set in this dataset. The problem is that 7/36 (\u223c20%) attributes in Test set do not appear in Val set. Hence, improving Val set AUC, does not necessarily improve Test AUC for UT-Zappos. Similar trend can be seen for other baselines: CompCos has best Val AUC, but does not perform well on Test set, compared to TMN and Symnet. Even GraphEmb in their final table show the frozen backbone network has much lower performance than TMN. However, OADis performs well on UT-Zappos overall, with \u223c4.0 improvement for Val and Test AUC, HM, unseen and object accuracy. 6 \fSliced Fruit Draped Necklace New Church (a) Black \u2013 Sign Hanging \u2013 Picture Black \u2013 Sign Hanging \u2013 Painting Stained \u2013 Wall Marble \u2013 Counter Gray \u2013 Wall Stained \u2013 Wall On the wall \u2013 Clock On the wall \u2013 Clock Large \u2013 Clock Ornate \u2013 Clock Relaxing \u2013 Man Used \u2013 Bench Sitting \u2013 Man Crossing legs \u2013 Man Yellow \u2013 Banana Ripe \u2013 Banana Piled \u2013 Banana Many \u2013 Banana Black lettered \u2013 Sign White \u2013 Sign Blue \u2013 Sky Black lettered \u2013 Sign Cut \u2013 Sandwich Half \u2013 Sandwich Toasted \u2013 Bread Cut \u2013 Sandwich Hazy \u2013 Mountain Hazy \u2013 Mountain Far away \u2013 Mountain Wide \u2013 Mountain Puffy \u2013 Couch Cushioned \u2013 Couch Leather \u2013 Couch Green \u2013 Couch (b) Figure 4. Qualitative Results: We show the nearest neighbors using the hallucinated unseen composition features for MIT-states and UT-Zappos. Although, all the neighbors are not correct (represented with red outline), they look very similar to true class labels: (a) First row: pureed fruit, Second row: engraved coin, Third row: huge tower. (b) We show top-3 predictions for images in VAW-CZSL. Table 4. We quantitatively show that the proposed architecture and different losses help in disentanglement and composition of unseen pairs. The experiments are conducted on MIT-States [27], where change in accuracy is shown with green and red based on increment or decrement respectively from the previous row. A dash (-) represents no change more than (\u00b1 0.1). Refer to Section 4.2 for details. Losses Val AUC@1 Test AUC@1 Seen Unseen Attribute Object Lcls 7.24 5.43 29.92 25.33 28.03 33.10 Lcls + Lattr 31.09 (+2.0) 28.30 (+0.3) Lcls + Lobj 25.50 (+0.2) 33.38 (+0.2) Lcls + Lattr + Lobj 7.49 (+0.2) 5.73 (+0.2) 28.50 (+0.2) Lcls + Lattr + Lobj + Lseen 5.44 (-0.5) 31.21 (+0.2) 28.18 (-0.4) Lcls + Lattr + Lobj + Lunseen 5.73 (+0.3) 25.80 (+0.4) 28.51 (+0.4) Lcls + Lattr + Lobj + Lseen + Lunseen 7.62 (+0.2) 5.94 (+0.2) 31.64 (+0.4) 25.60 (-0.2) 28.51 33.20 Table 5. Results with different networks for word-embeddings. Object-conditioning with attribute performs the best, and is therefore used for OADis (Section 4.3). Linear MLP Obj-cond. Network Val@1 6.6 7.0 7.6 Test@1 5.0 5.2 5.9 Results on VAW-CZSL. Our model performs well on VAW-CZSL, and is consistently better than other methods across almost all metrics. 
As shown in Table 1, VAW-CZSL has \u223c6-8 times more pairs in each split than MIT-States, which shows how challenging the benchmark is. Due to top-1 AUC being too small to quantify any learning and comparing between methods, we report top-3 and top-5 AUC instead. This is also because objects in-the-wild tend to depict multiple possible attributes; hence, evaluating only the top-1 prediction is insufficient. We provide qualitative results of how our model makes object-attribute composition prediction on VAW-CZSL in the suppl. Is disentangling and hallucinating pairs helpful? Prior works rely heavily on word embeddings for this task, but to improve the capabilities of visual systems, it is imperative to explore what is possible in the visual domain. We do an extensive study to understand if our intuition aligns with OADis (Table 4). Here are some takeaways: \u2022 Using only Lcls, we get a benchmark performance based on the architectural contributions, such as LE and ONC. When Lattr is added, significant performance boost for attribute accuracy can be seen in Table 4. \u2022 Adding object loss Lobj with Lcls, makes object accuracy better but no change in Val and Test AUC. This indicates the need of both losses to balance the effects. Using both Lattr and Lobj gives improvement in all measures. \u2022 Adding Lseen results in boost for seen AUC, but drop in Test AUC, which has unseen pairs along with seen pairs. Using unseen loss Lunseen leads to increase in both Test and attribute accuracy. \u2022 Finally adding unseen composition loss Lunseen along with seen loss Lseen, the model improves on most metrics. Each loss plays a role and regularizes effects from other losses. Is visual disentangling actually happening? Visual disentanglement in feature space is challenging to visualize since: (a) parts of an image for attributes and objects are hard to distinguish, as attributes are aspects of an object; (b) OADis is end-to-end trained with losses to disentangle features for attribute and object embeddings, which is separate from pair embedding space. Inspired by [33, 42], we show a few qualitative results in Figure 5. Using all training images, prototype features Vattr for each attribute can be computed by averaging features for all images containing that attributes vattr using AAN. Similarly, with OAN, prototype object features are also computed. For each test image, we find top-3 nearest neighbors from these prototype fea7 \fAttr: bent, sharp, curved Obj: knife, blade, handle bent blade coiled basket Attr: coiled, pierced, thin Obj: basket, bowl, bucket clean truck Attr: shiny, clean, cracked Obj: truck, bus, toy Figure 5. Qualitative results showing top 3 attributes and objects from test images, using prototype disentangled features computed on training data. tures (Figure 5). Hence, the disentangled prototype features of attributes and objects are used for classifying unseen images. Note that results reported in Table 1 use pair embedding space for attribute and object classification, whereas here we use auxiliary attribute and object embedding spaces (in Figure 3b) for the same task. If disentanglement features are not robust, then composition features will also not be efficient. We also show that using the composition of disentangled features for unseen pairs, relevant images from the test set can be found in suppl. Limitations. Despite OADis outperforming prior works on all benchmarks, we still notice some outstanding deficiencies in this problem domain. 
First, similar to [41], OADis often struggles on images containing multiple objects, where it does not know which object to make prediction on. One possible solution is to utilize an objectconditioned attention that allows the model to focus and possibly output attribute for multiple objects. Second, from qualitative studies on VAW-CZSL, we notice there are multiple cases where OADis makes the correct prediction but is considered incorrect by the image label. This is due to the fact that objects in-the-wild are mostly multi-label (containing multiple attributes), which none of the current singlelabel benchmarks have attempted to address. 4.3. Ablation Studies In this section, we show experiments to support our design choices for OADis. All the ablations are done for MITstates [27], for one random seed initialization, and are consistent for other datasets as well. Empirical results for \u03bb, \u03b4 and different word embeddings can be found in suppl. Why Object-Conditioned Network? Label Embedder [42] uses a linear layer and concatenates word embeddings for attributes and objects. We experiment with other networks: MLP with more parameters with two layers and ReLU and Object-conditioned network that uses a residual connection for object embedding. Our intuition is that same attribute contributes differently to each object, i.e., ruffled bag is very different from ruffled flower. Hence, attributes are conditioned on object. Adding a residual connection for object embeddings to the final attribute embedding helps condition the attribute. We empirically demonstrate that object-conditioning helps in Table 5 (refer to the suppl.). To augment or not to augment? Augmentation is a common technique to reduce over-fitting and improve generalization. Surprisingly, prior works do not use any image augmentation. OADis without augmentation gives 6.7% AUC on Val and 5.1% AUC on Test set for MIT-states. Hence, we use augmentation for OADis and re-implemented rest of the baselines in Table 2, showing that augmentation helps improving all methods \u223c1.0-1.5% AUC. We use horizontal flip and random crop as augmentation. 4.4. Qualitative results To qualitatively analyze our hallucinated compositions, we perform a nearest neighbor search on all three datasets. We pick the unseen compositions composed using the disentangled features, and find their top-5 nearest neighbors from the validation and test set. Figure 4(a) illustrates a few of our results. Note that these pairs are never seen in training. Based on the hallucinated compositions of disentangled attributes and objects, we are able to retrieve samples from these unseen compositions. In Figure 4(b), we show the top-3 predictions of OADis on VAW-CSZL. Column 1 shows results for seen, and columns 2 and 3 show unseen compositions, with the ground-truth label on top (bold black). In all examples, our top-3 predictions describe the visual content of the images accurately, even though in many cases the ground-truth label is not predicted in top-1. For column 3, we purposely show examples where our model predictions totally differ from the ground-truth label, but still correctly describe the visual information in each image. Similar to [41], this explains the multi-label nature of object-attribute recognition, and why we report top-3 and top-5 metrics for the VAWCZSL benchmark. 5." 
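To make the object-conditioned word-embedding network described in Section 4.3 above concrete, here is a minimal PyTorch sketch of one plausible reading of it: the attribute and object word embeddings are concatenated, passed through a small MLP, and the (projected) object embedding is added back through a residual connection so that the attribute representation is conditioned on its object. The layer sizes, the ReLU MLP, and the projection layer are assumptions for illustration rather than the paper's exact specification.

```python
import torch
import torch.nn as nn

class ObjectConditionedEmbedder(nn.Module):
    """Compose attribute and object word embeddings into a pair embedding,
    conditioning the attribute on the object via a residual connection."""
    def __init__(self, word_dim=300, out_dim=300):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(2 * word_dim, out_dim),
            nn.ReLU(),
            nn.Linear(out_dim, out_dim),
        )
        self.obj_proj = nn.Linear(word_dim, out_dim)

    def forward(self, attr_emb, obj_emb):
        # attr_emb, obj_emb: (batch, word_dim) word embeddings (e.g., GloVe)
        h = self.mlp(torch.cat([attr_emb, obj_emb], dim=-1))
        # residual connection: add the projected object embedding back in, so
        # "ruffled" conditioned on "bag" differs from "ruffled" on "flower"
        return h + self.obj_proj(obj_emb)
```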
+ } + ], + "Mohit Iyyer": [ + { + "url": "http://arxiv.org/abs/1804.06059v1", + "title": "Adversarial Example Generation with Syntactically Controlled Paraphrase Networks", + "abstract": "We propose syntactically controlled paraphrase networks (SCPNs) and use them\nto generate adversarial examples. Given a sentence and a target syntactic form\n(e.g., a constituency parse), SCPNs are trained to produce a paraphrase of the\nsentence with the desired syntax. We show it is possible to create training\ndata for this task by first doing backtranslation at a very large scale, and\nthen using a parser to label the syntactic transformations that naturally occur\nduring this process. Such data allows us to train a neural encoder-decoder\nmodel with extra inputs to specify the target syntax. A combination of\nautomated and human evaluations show that SCPNs generate paraphrases that\nfollow their target specifications without decreasing paraphrase quality when\ncompared to baseline (uncontrolled) paraphrase systems. Furthermore, they are\nmore capable of generating syntactically adversarial examples that both (1)\n\"fool\" pretrained models and (2) improve the robustness of these models to\nsyntactic variation when used to augment their training data.", + "authors": "Mohit Iyyer, John Wieting, Kevin Gimpel, Luke Zettlemoyer", + "published": "2018-04-17", + "updated": "2018-04-17", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL" + ], + "main_content": "Introduction Natural language processing datasets often suffer from a dearth of linguistic variation, which can hurt the generalization of models trained on them. Recent work has shown it is possible to easily \u201cbreak\u201d many learned models by evaluating them on adversarial examples (Goodfellow et al., 2015), which are generated by manually introducing lexical, pragmatic, and syntactic variation not seen in the training set (Ettinger et al., 2017). Robustness to such adversarial examples can potentially be improved by augmenting the training data, as shown by prior work that introduces rulebased lexical substitutions (Jia and Liang, 2017; \u22c6Authors contributed equally. I\u2019d have to say the star and director are the big problems here negative The man is standing in the water at the base of a waterfall entailment A man is standing in the water at the base of a waterfall By the way, you know, the star and director are the big problems positive The man, at the base of the waterfall, is standing in the water A man is standing in the water at the base of a waterfall neutral S PP PRN NP VP SCPN S NP , PP , VP SCPN Figure 1: Adversarial examples for sentiment analysis (left) and textual entailment (right) generated by our syntactically controlled paraphrase network (SCPN) according to provided parse templates. In both cases, a pretrained classi\ufb01er correctly predicts the label of the original sentence but not the corresponding paraphrase. Liang et al., 2017). However, more complex transformations, such as generating syntactically adversarial examples, remain an open challenge, as input semantics must be preserved in the face of potentially substantial structural modi\ufb01cations. In this paper, we introduce a new approach for learning to do syntactically controlled paraphrase generation: given a sentence and a target syntactic form (e.g., a constituency parse), a system must produce a paraphrase of the sentence whose syntax conforms to the target. General purpose syntactically controlled paraphrase generation is a challenging task. 
Approaches that rely on handcrafted rules and grammars, such as the question generation system of McKeown (1983), support only a limited number of syntactic targets. We introduce the \ufb01rst learning approach for this problem, building on the generality of neural encoder-decoder models to support a wide range of transformations. In doing arXiv:1804.06059v1 [cs.CL] 17 Apr 2018 \fso, we face two new challenges: (1) obtaining a large amount of paraphrase pairs for training, and (2) de\ufb01ning syntactic transformations with which to label these pairs. Since no large-scale dataset of sentential paraphrases exists publicly, we follow Wieting et al. (2017) and automatically generate millions of paraphrase pairs using neural backtranslation. Backtranslation naturally injects linguistic variation between the original sentence and its backtranslated counterpart. By running the process at a very large scale and testing for the speci\ufb01c variations we want to produce, we can gather ample input-output pairs for a wide range of phenomena. Our focus is on syntactic transformations, which we de\ufb01ne using templates derived from linearized constituency parses (\u00a72). Given such parallel data, we can easily train an encoder-decoder model that takes a sentence and target syntactic template as input, and produces the desired paraphrase.1 A combination of automated and human evaluations show that the generated paraphrases almost always follow their target speci\ufb01cations, while paraphrase quality does not signi\ufb01cantly deteriorate compared to vanilla neural backtranslation (\u00a74). Our model, the syntactically controlled paraphrase network (SCPN), is capable of generating adversarial examples for sentiment analysis and textual entailment datasets that signi\ufb01cantly impact the performance of pretrained models (Figure 1). We also show that augmenting training sets with such examples improves robustness without harming accuracy on the original test sets (\u00a75). Together these results not only establish the \ufb01rst general purpose syntactically controlled paraphrase approach, but also suggest that this general paradigm could be used for controlling many other aspects of the target text. 2 Collecting labeled paraphrase pairs In this section, we describe a general purpose process for gathering and labeling training data for controlled paraphrase generation. 2.1 Paraphrase data via backtranslation Inducing paraphrases from bilingual data has long been an effective method to overcome data limitations. In particular, bilingual pivoting (Bannard and Callison-Burch, 2005) \ufb01nds quality para1Code, labeled data, and pretrained models available at https://github.com/miyyer/scpn. phrases by pivoting through a different language. Mallinson et al. (2017) show that neural machine translation (NMT) systems outperform phrasebased MT on several paraphrase evaluation metrics. In this paper, we use the PARANMT-50M corpus from Wieting and Gimpel (2017). This corpus consists of over 50 million paraphrases obtained by backtranslating the Czech side of the CzEng (Bojar et al., 2016) parallel corpus. The pretrained Czech-English model used for translation came from the Nematus NMT system (Sennrich et al., 2017). The training data of this system includes four sources: Common Crawl, CzEng 1.6, Europarl, and News Commentary. 
The CzEng corpus is the largest of these four and was found to have signi\ufb01cantly more syntactic diversity than the other data sources (Wieting and Gimpel, 2017).2 2.2 Automatically labeling paraphrases with syntactic transformations We need labeled transformations in addition to paraphrase pairs to train a controlled paraphrase model. Manually annotating each of the millions of paraphrase pairs is clearly infeasible. Our key insight is that target transformations can be detected (with some noise) simply by parsing these pairs.3 Speci\ufb01cally, we parse the backtranslated paraphrases using the Stanford parser (Manning et al., 2014),4 which yields a pair of constituency parses \u27e8p1, p2\u27e9for each sentence pair \u27e8s1, s2\u27e9, where s1 is the reference English sentence in the CzEng corpus and s2 is its backtranslated counterpart. For syntactically controlled paraphrasing, we assume s1 and p2 are inputs, and the model is trained to produce s2. To overcome learned biases of the NMT system, we also include reversed pairs \u27e8s2, s1\u27e9during training. 2.2.1 Syntactic templates To provide syntactic control, we linearize the bracketed parse structure without leaf nodes (i.e., tokens). For example, the corresponding linearized parse 2Syntactic diversity was measured by the entropy of the top two levels of parse trees in the corpora. 3Similar automated \ufb01ltering could be used to produce data for many other transformations, such as tense changes, pointof-view shifts, and even stylometric pattern differences (Feng et al., 2012). This is an interesting area for future work. 4Because of the large dataset size, we use the faster but less accurate shift-reduce parser written by John Bauer. \ftree for the sentence \u201cShe drove home.\u201d is (S(NP(PRP))(VP(VBD)(NP(NN)))(.)). A system that requires a complete linearized target parse at test-time is unwieldy; how do we go about choosing the target parse? To simplify test-time usage, we relax the target syntactic form to a parse template, which we de\ufb01ne as the top two levels of the linearized parse tree (the level immediately below the root along with the root); the prior example\u2019s template is S \u2192NP VP. In the next section, we design models such that users can feed in either parse templates or full parses depending on their desired level of control. 3 Syntactically Controlled Paraphrase Networks The SCPN encoder-decoder architecture is built from standard neural modules, as we describe in this section. 3.1 Neural controlled paraphrase generation Given a sentential paraphrase pair \u27e8s1, s2\u27e9and a corresponding target syntax tree p2 for s2, we encode s1 using a bidirectional LSTM (Hochreiter and Schmidhuber, 1997), and our decoder is a two-layer LSTM augmented with soft attention over the encoded states (Bahdanau et al., 2014) as well as a copy mechanism (See et al., 2017). Following existing work in NMT (Sennrich et al., 2015), we preprocess s1 and s2 into subword units using byte pair encoding, and we perform decoding using beam search. For all attention computations, we use a bilinear product with a learned parameter matrix W: given vectors u and v, we score them by uT Wv. We incorporate the target syntax p2 into the generation process by modifying the inputs to the decoder. In particular, a standard decoder LSTM receives two inputs at every time step: (1) the embedding wt\u22121 of the ground-truth previous word in s2, and (2) an attention-weighted average at of the encoder\u2019s hidden states. 
We additionally provide a representation zt of the target p2, so at every time step the decoder computes ht = LSTM([wt\u22121; at; zt]). (1) Since we preserve bracketed parse structure, our linearized parses can have hundreds of tokens. Forcing all of the relevant information contained by the parse tree into a single \ufb01xed representation (i.e., the last hidden state of an LSTM) is dif\ufb01cult with such large sequences. Intuitively, we want the decoder to focus on portions of the target parse tree that correspond with the current time step. As such, we encode p2 using a (unidirectional) LSTM and compute zt with an attention-weighted average of the LSTM\u2019s encoded states at every time step. This attention mechanism is conditioned on the decoder\u2019s previous hidden state ht\u22121. 3.2 From parse templates to full parses As mentioned in Section 2.2.1, user-friendly systems should be able to accept high-level parse templates as input rather than full parses. Preliminary experiments show that SCPN struggles to maintain the semantics of the input sentence when we replace the full target parse with templates, and frequently generates short, formulaic sentences. The paraphrase generation model seems to rely heavily on the full syntactic parse to determine output length and clausal ordering, making it dif\ufb01cult to see how to modify the SCPN architecture for template-only target speci\ufb01cation. Instead, we train another model with exactly the same architecture as SCPN to generate complete parses from parse templates. This allows us to do the prediction in two steps: \ufb01rst predict the full syntactic tree and then use that tree to produce the paraphrase. Concretely, for the \ufb01rst step, assume t2 is the parse template formed from the top two levels of the target parse p2. The input to this parse generator is the input parse p1 and t2, and it is trained to produce p2. We train the parse generator separately from SCPN (i.e., no joint optimization) for ef\ufb01ciency purposes. At test time, a user only has to specify an input sentence and target template; the template is fed through the parse generator, and its predicted target parse is in turn sent to SCPN for paraphrase generation (see Figure 2). 3.3 Template selection and post-processing By switching from full parses to templates, we have reduced but not completely removed the burden of coming up with a target syntactic form. Certain templates may be not be appropriate for particular input sentences (e.g., turning a long sentence with multiple clauses into a noun phrase). However, others may be too similar to the input syntax, resulting in very little change. Since template selection is not a major focus of this paper, we use a relatively simple procedure, selecting the twenty most frequent templates in PARANMT\fThe man is standing in the water \u2026 + The man , at the base \u2026 The man , at the base of \u2026 ( ROOT ( S ( NP ( DT ) ( NN ) ) ( VP ( VBZ ) ( VP ( VBG ) ( PP ( IN ) ( NP ( NP ( DT ) ( NN ) \u2026 + ( ROOT ( S ( NP (NP ( DT ) ( NN ) ) ( , ) ( PP ( IN ) ( NP ( NP ( DT ) ( NN ) ) ( PP ( IN ) \u2026 + ( ROOT ( S ( \u2026 ( ROOT ( S ( NP ) ( , ) ( PP ) ( , ) ( VP ) ) ) + parse generator paraphrase generator target template t2 input parse p1 input sentence s1 target sentence s2 target parse p2 Figure 2: SCPN implements parse generation from templates as well as paraphrase generation from full parses as encoder-decoder architectures (attention depicted with dotted lines, copy mechanism with double stroked lines). 
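The decoder computation in Equation (1) above, together with the bilinear attention over the encoded input sentence and the encoded target parse, can be sketched as a single decoder step. This is a simplified illustration (one LSTM layer instead of two, no copy mechanism, placeholder dimensions), not the authors' implementation.

```python
import torch
import torch.nn as nn

def bilinear_attention(query, keys, W):
    # query: (batch, d_q); keys: (batch, T, d_k); W: (d_q, d_k); score = q^T W k
    scores = torch.einsum("bq,qk,btk->bt", query, W, keys)
    weights = torch.softmax(scores, dim=-1)
    return torch.einsum("bt,btk->bk", weights, keys)

class SCPNDecoderStep(nn.Module):
    """One decoder time step: h_t = LSTM([w_{t-1}; a_t; z_t]) (Eq. 1),
    where a_t attends over sentence encoder states and z_t over the
    target-parse encoder states, both conditioned on the previous hidden state."""
    def __init__(self, emb_dim=300, enc_dim=512, dec_dim=512):
        super().__init__()
        self.W_sent = nn.Parameter(torch.randn(dec_dim, enc_dim) * 0.01)
        self.W_parse = nn.Parameter(torch.randn(dec_dim, enc_dim) * 0.01)
        self.cell = nn.LSTMCell(emb_dim + 2 * enc_dim, dec_dim)

    def forward(self, w_prev, sent_states, parse_states, state):
        h_prev, c_prev = state
        a_t = bilinear_attention(h_prev, sent_states, self.W_sent)    # sentence attention
        z_t = bilinear_attention(h_prev, parse_states, self.W_parse)  # target-parse attention
        h_t, c_t = self.cell(torch.cat([w_prev, a_t, z_t], dim=-1), (h_prev, c_prev))
        return h_t, (h_t, c_t)
```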
While both components are trained separately, at test-time they form a pipelined approach to produce a controlled paraphrase from an input sentence s1, its corresponding parse p1, and a target template t2. 50M.5 Since we cannot generate a valid paraphrase for every template, we postprocess to remove nonsensical outputs. In particular, we \ufb01lter generated paraphrases using n-gram overlap and paraphrastic similarity, the latter of which is computed using the pretrained WORD,TRIAVG sentence embedding model from Wieting and Gimpel (2017).6 These paraphrastic sentence embeddings signi\ufb01cantly outperform prior work due to the PARANMT-50M data. 4 Intrinsic Experiments Before using SCPN to generate adversarial examples on downstream datasets, we need to make sure that its output paraphrases are valid and grammatical and that its outputs follow the speci\ufb01ed target syntax. In this section, we compare SCPN to a neural backtranslation baseline (NMT-BT) on the development set of our PARANMT-50M split using both human and automated experiments. NMTBT is the same pretrained Czech-English model used to create PARANMT-50M; however, here we use it to generate in both directions (i.e., EnglishCzech and Czech-English). 5However, we do provide some qualitative examples of rare and medium-frequency templates in Table 3. 6After qualitatively analyzing the impact of different \ufb01ltering choices, we set minimum n-gram overlap to 0.5 and Model 2 1 0 SCPN w/ full parses 63.7 14.0 22.3 SCPN w/ templates 62.3 19.3 18.3 NMT-BT 65.0 17.3 17.7 Table 1: A crowdsourced paraphrase evaluation on a three-point scale (0 = no paraphrase, 1 = ungrammatical paraphrase, 2 = grammatical paraphrase) shows both that NMT-BT and SCPN produce mostly grammatical paraphrases. Feeding parse templates to SCPN instead of full parses does not impact its quality. 4.1 Paraphrase quality & grammaticality To measure paraphrase quality and grammaticality, we perform a crowdsourced experiment in which workers are asked to rate a paraphrase pair \u27e8s, g\u27e9on the three-point scale of Kok and Brockett (2010), where s is the source sentence and g is the generated sentence. A 0 on this scale indicates no paraphrase relationship, while 1 means that g is an ungrammatical paraphrase of s and 2 means that g is a grammatical paraphrase of s. We select 100 paraphrase pairs from the development set of our PARANMT-50M split (after the postprocessing steps detailed in Section 3.3) and have three workers rate each pair.7 To focus the evaluation on the effect of syntactic manipulation on quality, we minimum paraphrastic similarity to 0.7. 7We use the Crowd\ufb02ower platform for our experiments. \fonly select sentences whose top-level parse templates differ (i.e., ts \u0338= tg), ensuring that the output of both systems varies syntactically from the source sentences. The results (Table 1) show that the uncontrolled NMT-BT model\u2019s outputs are comparable in quality and grammaticality to those of SCPN; neither system has a signi\ufb01cant edge. More interestingly, we observe no quality drop when feeding templates to SCPN (via the parse generator as described in Section 3.2) instead of complete parse trees, which suggests that the parse generator is doing a good job of generating plausible parse trees; thus, for all of the adversarial evaluations that follow, we only use the templated variant of SCPN. 4.2 Do the paraphrases follow the target speci\ufb01cation? 
We next determine how often SCPN\u2019s generated paraphrases conform to the target syntax: if g is a generated paraphrase and pg is its parse, how often does pg match the ground-truth target parse p2? We evaluate on our development set using exact template match: g is deemed a syntactic match to s2 only if the top two levels of its parse pg matches those of p2. We evaluate two SCPN con\ufb01gurations, where one is given the full target parse p2 and the other is given the result of running our parse generator on the target template t2. As a sanity check, we also evaluate our parse generator using the same metric. The results (Table 2) show that SCPN does indeed achieve syntactic control over the majority of its inputs. Our parse generator produces full parses that almost always match the target template; however, paraphrases generated using these parses are less syntactically accurate.8 A qualitative inspection of the generated parses reveals that they can differ from the ground-truth target parse in terms of ordering or existence of lowerlevel constituents (Table 6); we theorize that these differences may throw off SCPN\u2019s decoder. The NMT-BT system produces paraphrases that tend to be syntactically very similar to the input sentences: 28.7% of these paraphrases have the same template as that of the input sentence s1, while only 11.1% have the same template as the ground-truth target s2. Even though we train SCPN 8With that said, exact match is a harsh metric; these paraphrases are more accurate than the table suggests, as often they differ by only a single constituent. Model Parse Acc. SCPN w/ gold parse 64.5 SCPN w/ generated parse 51.6 Parse generator 99.9 Table 2: The majority of paraphrases generated by SCPN conform to the target syntax, but the level of syntactic control decreases when using generated target parses instead of gold parses. Accuracy is measured by exact template match (i.e., how often do the top two levels of the parses match). on data generated by NMT backtranslation, we avoid this issue by incorporating syntax into our learning process. 5 Adversarial example generation The intrinsic evaluations show that SCPN produces paraphrases of comparable quality to the uncontrolled NMT-BT system while also adhering to the speci\ufb01ed target speci\ufb01cations. Next, we examine the utility of controlled paraphrases for adversarial example generation. To formalize the problem, assume a pretrained model for some downstream task produces prediction yx given test-time instance x. An adversarial example x\u2032 can be formed by making label-preserving modi\ufb01cations to x such that yx \u0338= yx\u2032. Our results demonstrate that controlled paraphrase generation with appropriate template selection produces far more valid adversarial examples than backtranslation on sentiment analysis and entailment tasks. 5.1 Experimental setup We evaluate our syntactically adversarial paraphrases on the Stanford Sentiment Treebank (Socher et al., 2013, SST) and SICK entailment detection (Marelli et al., 2014). While both are relatively small datasets, we select them because they offer different experimental conditions: SST contains complicated sentences with high syntactic variance, while SICK almost exclusively consists of short, simple sentences. 
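For reference, the parse-template operations used above — extracting the top two levels of a linearized parse (Section 2.2.1) and the exact template match metric of Section 4.2 — reduce to a small amount of string processing. The sketch below assumes parses are bracketed strings without leaf tokens, as in the "She drove home." example; the helper names are ours, not the authors'.

```python
import re

def parse_template(linearized_parse, depth=2):
    """Keep only the top `depth` levels of a linearized constituency parse."""
    out, level = [], 0
    for tok in re.findall(r"\([^()]*|\)", linearized_parse):
        if tok.startswith("("):
            level += 1
            if level <= depth:
                out.append(tok.strip())
        else:
            if level <= depth:
                out.append(")")
            level -= 1
    return "".join(out)

def exact_template_match(generated_parse, target_parse):
    """Harsh Section 4.2 metric: the top two levels must agree exactly."""
    return parse_template(generated_parse) == parse_template(target_parse)
```

For the running example, `parse_template('(S(NP(PRP))(VP(VBD)(NP(NN)))(.))')` returns `'(S(NP)(VP)(.))'`.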
As a baseline, we compare the ten most probable beams from NMT-BT to controlled paraphrases generated by SCPN using ten templates randomly sampled from the template set described in Section 3.3.9 We also need pretrained models 9We also experimented with the diverse beam search modi\ufb01cation proposed by Li et al. (2016b) for NMT-BT but found that it dramatically warped the semantics of many beams; crowdsourced workers rated 49% of its outputs as 0 \ftemplate paraphrase original with the help of captain picard , the borg will be prepared for everything . (SBARQ(ADVP)(,)(S)(,)(SQ)) now , the borg will be prepared by picard , will it ? (S(NP)(ADVP)(VP)) the borg here will be prepared for everything . (S(S)(,)(CC)(S) (:)(FRAG)) with the help of captain picard , the borg will be prepared , and the borg will be prepared for everything ... for everything . (FRAG(INTJ)(,)(S)(,)(NP)) oh , come on captain picard , the borg line for everything . original you seem to be an excellent burglar when the time comes . (S(SBAR)(,)(NP)(VP)) when the time comes , you \u2019ll be a great thief . (S(\u2018\u2018)(UCP)(\u2019\u2019)(NP)(VP)) \u201c you seem to be a great burglar , when the time comes . \u201d you said . (SQ(MD)(SBARQ)) can i get a good burglar when the time comes ? (S(NP)(IN)(NP)(NP)(VP) look at the time the thief comes . Table 3: Syntactically controlled paraphrases generated by SCPN for two examples from the PARANMT-50M development set. For each input sentence, we show the outputs of four different templates; the fourth template is a failure case (highlighted in green) exhibiting semantic divergence and/or ungrammaticality, which occurs when the target template is unsuited for the input. for which to generate adversarial examples; we use the bidirectional LSTM baseline for both SST and SICK outlined in Tai et al. (2015) since it is a relatively simple architecture that has proven to work well for a variety of problems.10 Since the SICK task involves characterizing the relationship between two sentences, for simplicity we only generate adversarial examples for the \ufb01rst sentence and keep the second sentence \ufb01xed to the ground truth. 5.2 Breaking pretrained models For each dataset, we generate paraphrases for held-out examples and then run a pretrained model over them.11 We consider a development example x broken if the original prediction yx is correct, but the prediction yx\u2032 for at least one paraphrase x\u2032 is incorrect. For SST, we evaluate on the binary sentiment classi\ufb01cation task and ignore all phrase-level labels (because our paraphrase models are trained on only sentences). Table 4 shows that for both datasets, SCPN breaks many more examples than NMT-BT. Moreover, as shown in Table 5, NMT-BT\u2019s paraphrases differ from the original example mainly by lexical substitutions, while SCPN often produces dramatically different syntactic structures. 5.3 Are the adversarial examples valid? We have shown that we can break pretrained models with controlled paraphrases, but are these paraon the three-point scale. 10We initialize both models using pretrained GloVe embeddings (Pennington et al., 2014) and set the LSTM hidden dimensionality to 300. 11Since the SICK development dataset is tiny, we additionally generate adversarial examples on its test set. phrases actually valid adversarial examples? After all, it is possible that the syntactic modi\ufb01cations cause informative clauses or words (e.g., negations) to go missing. 
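Before turning to validity, the "broken example" criterion of Section 5.2 above reduces to a simple check, sketched below. Here `model` stands for any pretrained classifier mapping an input to a label, and the data format is an assumption for illustration, not the authors' API.

```python
def is_broken(model, example, paraphrases, gold_label):
    """An example is 'broken' if the model classifies the original correctly
    but misclassifies at least one of its paraphrases."""
    return model(example) == gold_label and any(
        model(p) != gold_label for p in paraphrases
    )

def broken_rate(model, dev_set):
    """Share of held-out examples that are broken; `dev_set` yields
    (example, paraphrases, gold_label) triples (assumed format)."""
    flags = [is_broken(model, x, ps, y) for x, ps, y in dev_set]
    return sum(flags) / len(flags)
```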
To measure the validity of our adversarial examples, we turn again to crowdsourced experiments. We ask workers to choose the appropriate label for a given sentence or sentence pair (e.g., positive or negative for SST), and then we compare the worker\u2019s judgment to the original development example\u2019s label. For both models, we randomly select 100 adversarial examples and have three workers annotate each one. The results (Table 4) show that on the more complex SST data, a higher percentage of SCPN\u2019s paraphrases are valid adversarial examples than those of NMT-BT, which is especially encouraging given our model also generates signi\ufb01cantly more adversarial examples. 5.4 Increasing robustness to adversarial examples If we additionally augment the training data of both tasks with controlled paraphrases, we can increase a downstream model\u2019s robustness to adversarial examples in the development set. To quantify this effect, we generate controlled paraphrases for the training sets of SST and SICK using the same templates as in the previous experiments. Then, we include these paraphrases as additional training examples and retrain our biLSTM task models.12 As shown by Table 4, training on SCPN\u2019s paraphrases signi\ufb01cantly improves robustness to syntactic adversaries without affecting accuracy on the original test sets. One im12We did not experiment with more complex augmentation methods (e.g., downweighting the contribution of paraphrased training examples to the loss). \fNo augmentation With augmentation Model Task Validity Test Acc Dev Broken Test Acc Dev Broken SCPN SST 77.1 83.1 41.8 83.0 31.4 NMT-BT SST 68.1 83.1 20.2 82.3 20.0 SCPN SICK 77.7 82.1 33.8 82.7 19.8 NMT-BT SICK 81.0 82.1 20.4 82.0 11.2 Table 4: SCPN generates more legitimate adversarial examples than NMT-BT, shown by the results of a crowdsourced validity experiment and the percentage of held-out examples that are broken through paraphrasing. Furthermore, we show that by augmenting the training dataset with syntactically-diverse paraphrases, we can improve the robustness of downstream models to syntactic adversaries (see \u201cDev Broken\u201d before and after augmentation) without harming accuracy on the original test set. portant caveat is that this experiment only shows robustness to the set of templates used by SCPN; in real-world applications, careful template selection based on the downstream task, along with using a larger set of templates, is likely to increase robustness to less constrained syntactic adversaries. Augmentation with NMT-BT\u2019s paraphrases increases robustness on SICK, but on SST, it degrades test accuracy without any signi\ufb01cant gain in robustness; this is likely due to its lack of syntactic variation compared to SCPN. 6 Qualitative Analysis In the previous section, we quantitatively evaluated the SCPN\u2019s ability to produce valid paraphrases and adversarial examples. Here, we take a look at actual sentences generated by the model. In addition to analyzing SCPN\u2019s strengths and weaknesses compared to NMT-BT, we examine the differences between paraphrases generated by various con\ufb01gurations of the model to determine the impact of each major design decision (e.g., templates instead of full parses). Syntactic manipulation: Table 3 demonstrates SCPN\u2019s ability to perform syntactic manipulation, showing paraphrases for two sentences generated using different templates. 
Many of the examples exhibit complex transformations while preserving both the input semantics and grammaticality, even when the target syntax is very different from that of the source (e.g., when converting a declarative to question). However, the failure cases demonstrate that not every template results in a valid paraphrase, as nonsensical outputs are sometimes generated when trying to squeeze the input semantics into an unsuitable target form. Adversarial examples: Table 5 shows that SCPN and NMT-BT differ fundamentally in the type of adversaries they generate. While SCPN mostly avoids lexical substitution in favor of making syntactic changes, NMT-BT does the opposite. These examples reinforce the results of the experiment in Section 4.2, which demonstrates NMTBT\u2019s tendency to stick to the input syntax. While SCPN is able to break more validation examples than NMT-BT, it is alarming that even simple lexical substitution can break such a high percentage of both datasets we tested. Ebrahimi et al. (2017) observe a similar phenomenon with HotFlip, their gradient-based substitution method for generating adversarial examples. While NMT-BT does not receive signal from the downstream task like HotFlip, it also does not require external constraints to maintain grammaticality and limit semantic divergence. As future work, it would be interesting to provide this downstream signal to both NMT-BT and SCPN; for the latter, perhaps this signal could guide the template selection process, which is currently \ufb01xed to a small, \ufb01nite set. Templates vs. gold parses: Why does the level of syntactic control decrease when we feed SCPN parses generated from templates instead of gold parses (Table 2)? The \ufb01rst two examples in Table 6 demonstrate issues with the templated approach. In the \ufb01rst example, the template is not expressive enough for the parse generator to produce slots for the highlighted clause. A potential way to combat this type of issue is to dynamically de\ufb01ne templates based on factors such as the length of the input sentence. In the second example, a parsing error results in an inaccurate template which in turn causes SCPN to generate a semanticallydivergent paraphrase. The \ufb01nal two examples \ftemplate original paraphrase (S(ADVP)(NP)(VP)) moody , heartbreaking , and \ufb01lmed in a natural , unforced style that makes its characters seem entirely convincing even when its script is not . so he \u2019s \ufb01lmed in a natural , unforced style that makes his characters seem convincing when his script is not . (S(PP)(,)(NP)(VP)) there is no pleasure in watching a child suffer . in watching the child suffer , there is no pleasure . (S(S)(,)(CC)(S)) the characters are interesting and often very creatively constructed from \ufb01gure to backstory . the characters are interesting , and they are often built from memory to backstory . every nanosecond of the the new guy reminds you that you could be doing something else far more pleasurable . each nanosecond from the new guy reminds you that you could do something else much more enjoyable . harris commands the screen , using his frailty to suggest the ravages of a life of corruption and ruthlessness . harris commands the screen , using his weakness to suggest the ravages of life of corruption and recklessness . Table 5: Adversarial sentiment examples generated by SCPN (top) and NMT-BT (bottom). 
The predictions of a pretrained model on the original sentences are correct (red is negative, blue is positive), while the predictions on the paraphrases are incorrect. The syntactically controlled paraphrases of SCPN feature more syntactic modi\ufb01cation and less lexical substitution than NMT-BT\u2019s backtranslated outputs. show instances where the templated model performs equally as well as the model with gold parses, displaying the capabilities of our parse generator. Removing syntactic control: To examine the differences between syntactically controlled and uncontrolled paraphrase generation systems, we train an SCPN without including zt, the attentionweighted average of the encoded parse, in the decoder input. This uncontrolled con\ufb01guration produces outputs that are very similar to its inputs, often identical syntactically with minor lexical substitution. Concretely, the uncontrolled SCPN produces a paraphrase with the same template as its input 38.6% of the time, compared to NMT-BT\u2019s 28.7% (Section 4.2).13 7 Related Work Paraphrase generation (Androutsopoulos and Malakasiotis, 2010; Madnani and Dorr, 2010) has been tackled using many different methods, including those based on hand-crafted rules (McKeown, 1983), synonym substitution (Bolshakov and Gelbukh, 2004), machine translation (Quirk et al., 2004), and, most recently, deep learning (Prakash et al., 2016; Mallinson et al., 2017; Dong et al., 2017). Our syntactically controlled setting also relates to controlled language generation tasks in which one desires to generate or rewrite a sentence with particular characteristics. We review related work in both 13A con\ufb01guration without the copy mechanism copies input syntax even more, with a 47.7% exact template match. paraphrase generation and controlled language generation below. 7.1 Data-driven paraphrase generation Madnani and Dorr (2010) review data-driven methods for paraphrase generation, noting two primary families: template-based and translationbased. The \ufb01rst family includes approaches that use hand-crafted rules (McKeown, 1983), thesaurus-based substitution (Bolshakov and Gelbukh, 2004; Zhang and LeCun, 2015), lattice matching (Barzilay and Lee, 2003), and templatebased \u201cshake & bake\u201d paraphrasing (Carl et al., 2005). These methods often yield grammatical outputs but they can be limited in diversity. The second family includes methods that rewrite the input using methods based on parallel text (Bannard and Callison-Burch, 2005), machine translation (Quirk et al., 2004; Napoles et al., 2016; Suzuki et al., 2017), or related statistical techniques (Zhao et al., 2009). Of particular relevance to our work are methods that incorporate syntax to improve \ufb02uency of paraphrase output. Callison-Burch (2008) constrains paraphrases to be the same syntactic type as the input, though he was focused on phrase-level, not sentential, paraphrasing. Pang et al. (2003) learn \ufb01nite-state automata from translation pairs that generate syntactic paraphrases, though this requires multiple translations into the same language and cannot be used to generate paraphrases outside this dataset. Shen et al. (2006) extend this to deeper syntactic analysis. All of these approaches use syntax to \ftemplate (S(CC)(S)(,)(NP)(ADVP)(VP)) original damian encouraged me , criticized , he ... he always made me go a little deeper . SCPN parse but damian , he supported me , he told me , he always made me go a little deeper . 
SCPN template but damian supported me , he always made me go a little deeper . template (S(S)(,)(NP)(VP)) original zacharias did n\u2019t deserve to die , grishanov thought , and he was aware of the huge irony of his situation SCPN parse zacharias did not deserve to die , grishanov told himself , realizing the greatest irony of all . SCPN template zacharias did not deserve to die , he was aware of the great irony of his situation . template S(S)(,)(S)) original give me some water , my lips are dry , and i shall try to tell you . SCPN parse give me some water , i have just a dry mouth . SCPN template give me some water , my lips are dry . template (S(NP)(,)(ADVP)(,)(VP)) original in the meantime , the house is weakened , and all its old alliances and deals are thrown into doubt . SCPN parse the house , meanwhile , is weakening , which will be all of its old alliances and business . SCPN template the house , meanwhile , is weakened , and its old alliances and deals are thrown into doubt . Table 6: Examples from PARANMT-50M comparing the output of two SCPN con\ufb01gurations, one with gold target parses (SCPN parse) and one with parses generated from templates (SCPN template), where templates are the top two levels of the gold parses. The \ufb01rst two examples demonstrate issues with missing information caused by inexpressive templates and parsing errors, respectively. The remaining examples, in which both con\ufb01gurations produce syntactically similar paraphrases, showcase the ability of the parse generator to produce viable full parses. improve grammaticality, which is handled by our decoder language model. Recent efforts involve neural methods. Iyyer et al. (2014) generate paraphrases with dependency tree recursive autoencoders by randomly selecting parse trees at test time. Li et al. (2017) generate paraphrases using deep reinforcement learning. Gupta et al. (2017) use variational autoencoders to generate multiple paraphrases. These methods differ from our approach in that none offer \ufb01ne-grained control over the syntactic form of the paraphrase. 7.2 Controlled language generation There is growing interest in generating language with the ability to in\ufb02uence the topic, style, or other properties of the output. Most related to our methods are those based on syntactic transformations, like the tree-to-tree sentence simpli\ufb01cation method of Woodsend and Lapata (2011) based on quasi-synchronous grammar (Smith and Eisner, 2006). Our method is more general since we do not require a grammar and there are only soft constraints. Perhaps the closest to the proposed method is the conditioned recurrent language model of Ficler and Goldberg (2017), which produces language with user-selected properties such as sentence length and formality but is incapable of generating paraphrases. For machine translation output, Niu et al. (2017) control the level of formality while Sennrich et al. (2016) control the level of politeness. For dialogue, Li et al. (2016a) affect the output using speaker identity, while Wang et al. (2017) develop models to in\ufb02uence topic and style of the output. Shen et al. (2017) perform style transfer on non-parallel texts, while Guu et al. (2017) generate novel sentences from prototypes; again, these methods are not necessarily seeking to generate meaning-preserving paraphrases, merely transformed sentences that have an altered style. 
8" + }, + { + "url": "http://arxiv.org/abs/1611.05118v2", + "title": "The Amazing Mysteries of the Gutter: Drawing Inferences Between Panels in Comic Book Narratives", + "abstract": "Visual narrative is often a combination of explicit information and judicious\nomissions, relying on the viewer to supply missing details. In comics, most\nmovements in time and space are hidden in the \"gutters\" between panels. To\nfollow the story, readers logically connect panels together by inferring unseen\nactions through a process called \"closure\". While computers can now describe\nwhat is explicitly depicted in natural images, in this paper we examine whether\nthey can understand the closure-driven narratives conveyed by stylized artwork\nand dialogue in comic book panels. We construct a dataset, COMICS, that\nconsists of over 1.2 million panels (120 GB) paired with automatic textbox\ntranscriptions. An in-depth analysis of COMICS demonstrates that neither text\nnor image alone can tell a comic book story, so a computer must understand both\nmodalities to keep up with the plot. We introduce three cloze-style tasks that\nask models to predict narrative and character-centric aspects of a panel given\nn preceding panels as context. Various deep neural architectures underperform\nhuman baselines on these tasks, suggesting that COMICS contains fundamental\nchallenges for both vision and language.", + "authors": "Mohit Iyyer, Varun Manjunatha, Anupam Guha, Yogarshi Vyas, Jordan Boyd-Graber, Hal Daum\u00e9 III, Larry Davis", + "published": "2016-11-16", + "updated": "2017-05-07", + "primary_cat": "cs.CV", + "cats": [ + "cs.CV", + "cs.CL" + ], + "main_content": "Introduction Comics are fragmented scenes forged into full-\ufb02edged stories by the imagination of their readers. A comics creator can condense anything from a centuries-long intergalactic war to an ordinary family dinner into a single panel. But it is what the creator hides from their pages that makes comics truly interesting: the unspoken conversations and unseen actions that lurk in the spaces (or gutters) between adjacent panels. For example, the dialogue in Figure 1 suggests that between the second and third panels, Gilda commands her snakes to chase after a frightened Michael in some \u2217Authors contributed equally Figure 1. Where did the snake in the last panel come from? Why is it biting the man? Is the man in the second panel the same as the man in the \ufb01rst panel? To answer these questions, readers form a larger meaning out of the narration boxes, speech bubbles, and artwork by applying closure across panels. sort of strange cult initiation. Through a process called closure [40], which involves (1) understanding individual panels and (2) making connective inferences across panels, readers form coherent storylines from seemingly disparate panels such as these. In this paper, we study whether computers can do the same by collecting a dataset of comic books (COMICS) and designing several tasks that require closure to solve. Section 2 describes how we create COMICS,1 which contains \u223c1.2 million panels drawn from almost 4,000 publicly-available comic books published during the \u201cGolden Age\u201d of American comics (1938\u20131954). COMICS is challenging in both style and content compared to natural images (e.g., photographs), which are the focus of most existing datasets and methods [32, 56, 55]. 
Much like painters, comic artists can render a single object or concept in multiple artistic styles to evoke different emotional responses from the reader. For example, the lions in Figure 2 are drawn with varying degrees of realism: the more cartoon1Data, code, and annotations to be made available after blind review. 1 arXiv:1611.05118v2 [cs.CV] 7 May 2017 \fFigure 2. Different artistic renderings of lions taken from the COMICS dataset. The left-facing lions are more cartoonish (and humorous) than the ones facing right, which come from action and adventure comics that rely on realism to provide thrills. ish lions, from humorous comics, take on human expressions (e.g., surprise, nastiness), while those from adventure comics are more photorealistic. Comics are not just visual: creators push their stories forward through text\u2014speech balloons, thought clouds, and narrative boxes\u2014which we identify and transcribe using optical character recognition (OCR). Together, text and image are often intricately woven together to tell a story that neither could tell on its own (Section 3). To understand a story, readers must connect dialogue and narration to characters and environments; furthermore, the text must be read in the proper order, as panels often depict long scenes rather than individual moments [10]. Text plays a much larger role in COMICS than it does for existing datasets of visual stories [25]. To test machines\u2019 ability to perform closure, we present three novel cloze-style tasks in Section 4 that require a deep understanding of narrative and character to solve. In Section 5, we design four neural architectures to examine the impact of multimodality and contextual understanding via closure. All of these models perform signi\ufb01cantly worse than humans on our tasks; we conclude with an error analysis (Section 6) that suggests future avenues for improvement. 2. Creating a dataset of comic books Comics, de\ufb01ned by cartoonist Will Eisner as sequential art [13], tell their stories in sequences of panels, or single frames that can contain both images and text. Existing comics datasets [19, 39] are too small to train data-hungry machine learning models for narrative understanding; additionally, they lack diversity in visual style and genres. Thus, # Books 3,948 # Pages 198,657 # Panels 1,229,664 # Textboxes 2,498,657 Text cloze instances 89,412 Visual cloze instances 587,797 Char. coherence instances 72,313 Table 1. Statistics describing dataset size (top) and the number of total instances for each of our three tasks (bottom). we build our own dataset, COMICS, by (1) downloading comics in the public domain, (2) segmenting each page into panels, (3) extracting textbox locations from panels, and (4) running OCR on textboxes and post-processing the output. Table 1 summarizes the contents of COMICS. The rest of this section describes each step of our data creation pipeline. 2.1. Where do our comics come from? The \u201cGolden Age of Comics\u201d began during America\u2019s Great Depression and lasted through World War II, ending in the mid-1950s with the passage of strict censorship regulations. In contrast to the long, world-building story arcs popular in later eras, Golden Age comics tend to be small and self-contained; a single book usually contains multiple different stories sharing a common theme (e.g., crime or mystery). 
While the best-selling Golden Age comics tell of American superheroes triumphing over German and Japanese villains, a variety of other genres (such as romance, humor, and horror) also enjoyed popularity [18]. The Digital Comics Museum (DCM)2 hosts user-uploaded scans of many comics by lesser-known Golden Age publishers that are now in the public domain due to copyright expiration. To avoid off-square images and missing pages, as the scans vary in resolution and quality, we download the 4,000 highest-rated comic books from DCM.3 2.2. Breaking comics into their basic elements The DCM comics are distributed as compressed archives of JPEG page scans. To analyze closure, which occurs from panel-to-panel, we \ufb01rst extract panels from the page images. Next, we extract textboxes from the panels, as both location and content of textboxes are important for character and narrative understanding. Panel segmentation: Previous work on panel segmentation uses heuristics [34] or algorithms such as density gradients and recursive cuts [52, 43, 48] that rely on pages with uniformly white backgrounds and clean gutters. Unfortunately, scanned images of eighty-year old comics do 2http://digitalcomicmuseum.com/ 3Some of the panels in COMICS contain offensive caricatures and opinions re\ufb02ective of that period in American history. \fnot particularly adhere to these standards; furthermore, many DCM comics have non-standard panel layouts and/or textboxes that extend across gutters to multiple panels. After our attempts to use existing panel segmentation software failed, we turned to deep learning. We annotate 500 randomly-selected pages from our dataset with rectangular bounding boxes for panels. Each bounding box encloses both the panel artwork and the textboxes within the panel; in cases where a textbox spans multiple panels, we necessarily also include portions of the neighboring panel. After annotation, we train a region-based convolutional neural network to automatically detect panels. In particular, we use Faster R-CNN [45] initialized with a pretrained VGG CNN M 1024 model [9] and alternatingly optimize the region proposal network and the detection network. In Western comics, panels are usually read left-toright, top-to-bottom, so we also have to properly order all of the panels within a page after extraction. We compute the midpoint of each panel and sort them using Morton order [41], which gives incorrect orderings only for rare and complicated panel layouts. Textbox segmentation: Since we are particularly interested in modeling the interplay between text and artwork, we need to also convert the text in each panel to a machinereadable format.4 As with panel segmentation, existing comic textbox detection algorithms [22, 47] could not accurately localize textboxes for our data. Thus, we resort again to Faster R-CNN: we annotate 1,500 panels for textboxes,5 train a Faster-R-CNN, and sort the extracted textboxes within each panel using Morton order. 2.3. OCR The \ufb01nal step of our data creation pipeline is applying OCR to the extracted textbox images. We unsuccessfully experimented with two trainable open-source OCR systems, Tesseract [50] and Ocular [6], as well as Abbyy\u2019s consumergrade FineReader.6 The ineffectiveness of these systems is likely due to the considerable variation in comic fonts as well as domain mismatches with pretrained language models (comics text is always capitalized, and dialogue phenomena such as dialects may not be adequately represented in training data). 
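The Morton-order sort used above to put panels (and textboxes within a panel) into reading order can be sketched as follows: interleave the bits of each box midpoint's x and y coordinates and sort by the resulting Z-order code. The normalization to a fixed-resolution grid is an assumption; the authors' exact implementation may differ.

```python
def morton_key(x, y, bits=16):
    """Interleave the bits of integer coordinates (x, y) into a Z-order
    (Morton) code."""
    code = 0
    for i in range(bits):
        code |= ((x >> i) & 1) << (2 * i)        # x bits -> even positions
        code |= ((y >> i) & 1) << (2 * i + 1)    # y bits -> odd positions
    return code

def reading_order(boxes, page_width, page_height, bits=16):
    """Sort detected boxes (x1, y1, x2, y2) by the Morton code of their
    midpoints, normalized to a 2^bits grid."""
    def key(box):
        x1, y1, x2, y2 = box
        mx = int((x1 + x2) / 2 / page_width * (2 ** bits - 1))
        my = int((y1 + y2) / 2 / page_height * (2 ** bits - 1))
        return morton_key(mx, my, bits)
    return sorted(boxes, key=key)
```

Because the y bit sits in the more significant position of each interleaved pair, vertical position dominates at every scale, which approximates the left-to-right, top-to-bottom reading order of Western comics.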
Google\u2019s Cloud Vision OCR7 performs much better on comics than any other system we tried. While it sometimes struggles to detect short words or punctuation marks, the quality of the transcriptions is good con4Alternatively, modules for text spotting and recognition [27] could be built into architectures for our downstream tasks, but since comic dialogues can be quite lengthy, these modules would likely perform poorly. 5We make a distinction between narration and dialogue; the former usually occurs in strictly rectangular boxes at the top of each panel and contains text describing or introducing a new scene, while the latter is usually found in speech balloons or thought clouds. 6http://www.abbyy.com 7http://cloud.google.com/vision sidering the image domain and quality. We use the Cloud Vision API to run OCR on all 2.5 million textboxes for a cost of $3,000. We post-process the transcriptions by removing systematic spelling errors (e.g., failing to recognize the \ufb01rst letter of a word). Finally, each book in our dataset contains three or four full-page product advertisements; since they are irrelevant for our purposes, we train a classi\ufb01er on the transcriptions to remove them.8 3. Data Analysis In this section, we explore what makes understanding narratives in COMICS dif\ufb01cult, focusing speci\ufb01cally on intrapanel behavior (how images and text interact within a panel) and interpanel transitions (how the narrative advances from one panel to the next). We characterize panels and transitions using a modi\ufb01ed version of the annotation scheme in Scott McCloud\u2019s \u201cUnderstanding Comics\u201d [40]. Over 90% of panels rely on both text and image to convey information, as opposed to just using a single modality. Closure is also important: to understand most transitions between panels, readers must make complex inferences that often require common sense (e.g., connecting jumps in space and/or time, recognizing when new characters have been introduced to an existing scene). We conclude that any model trained to understand narrative \ufb02ow in COMICS will have to effectively tie together multimodal inputs through closure. To perform our analysis, we manually annotate 250 randomly-selected pairs of consecutive panels from COMICS. Each panel of a pair is annotated for intrapanel behavior, while an interpanel annotation is assigned to the transition between the panels. Two annotators independently categorize each pair, and a third annotator makes the \ufb01nal decision when they disagree. We use four intrapanel categories (de\ufb01nitions from McCloud, percentages from our annotations): 1. Word-speci\ufb01c, 4.4%: The pictures illustrate, but do not signi\ufb01cantly add to a largely complete text. 2. Picture-speci\ufb01c, 2.8%: The words do little more than add a soundtrack to a visually-told sequence. 3. Parallel, 0.6%: Words and pictures seem to follow very different courses without intersecting. 4. Interdependent, 92.1%: Words and pictures go handin-hand to convey an idea that neither could convey alone. We group interpanel transitions into \ufb01ve categories: 1. Moment-to-moment, 0.4%: Almost no time passes between panels, much like adjacent frames in a video. 2. Action-to-action, 34.6%: The same subjects progress through an action within the same scene. 8See supplementary material for speci\ufb01cs about our post-processing. \fFigure 3. Five example panel sequences from COMICS, one for each type of interpanel transition. 
Individual panel borders are color-coded to match their intrapanel categories (legend in bottom-left). Moment-to-moment transitions unfold like frames in a movie, while scene-toscene transitions are loosely strung together by narrative boxes. Percentages are the relative prevalance of the transition or panel type in an annotated subset of COMICS. 3. Subject-to-subject, 32.7%: New subjects are introduced while staying within the same scene or idea. 4. Scene-to-scene, 13.8%: Signi\ufb01cant changes in time or space between the two panels. 5. Continued conversation, 17.7%: Subjects continue a conversation across panels without any other changes. The two annotators agree on 96% of the intrapanel annotations (Cohen\u2019s \u03ba = 0.657), which is unsurprising because almost every panel is interdependent. The interpanel task is signi\ufb01cantly harder: agreement is only 68% (Cohen\u2019s \u03ba = 0.605). Panel transitions are more diverse, as all types except moment-to-moment are relatively common (Figure 3); interestingly, moment-to-moment transitions require the least amount of closure as there is almost no change in time or space between the panels. Multiple transition types may occur in the same panel, such as simultaneous changes in subjects and actions, which also contributes to the lower interpanel agreement. 4. Tasks that test closure To explore closure in COMICS, we design three novel tasks (text cloze, visual cloze, and character coherence) that test a model\u2019s ability to understand narratives and characters given a few panels of context. As shown in the previous section\u2019s analysis, a high percentage of panel transitions require non-trivial inferences from the reader; to successfully solve our proposed tasks, a model must be able to make the same kinds of connections. While their objectives are different, all three tasks follow the same format: given preceding panels pi\u22121, pi\u22122, . . . , pi\u2212n as context, a model is asked to predict some aspect of panel pi. While previous work on visual storytelling focuses on generating text given some context [24], the dialogue-heavy text in COMICS makes evaluation dif\ufb01cult (e.g., dialects, grammatical variations, many rare words). We want our evaluations to focus speci\ufb01cally on closure, not generated text quality, so we instead use a cloze-style framework [53]: given c candidates\u2014with a single correct option\u2014models must use the context panels to rank the correct candidate higher than the others. The rest of this section describes each of the three tasks in detail; Table 1 provides the total instances of each task with the number of context panels n = 3. Text Cloze: In the text cloze task, we ask the model to predict what text out of a set of candidates belongs in a particular textbox, given both context panels (text and image) as well as the current panel image. While initially we did not put any constraints on the task design, we quickly noticed two major issues. First, since the panel images include textboxes, any model trained on this task could in principle learn to crudely imitate OCR by matching text candidates to the actual image of the text. 
To solve this problem, we \u201cblack out\u201d the rectangle given by the bounding boxes for each textbox in a panel (see Figure 4).9 Second, panels often have multiple textboxes (e.g., conversations between characters); to focus on interpanel transitions rather 9To reduce the chance of models trivially correlating candidate length to textbox size, we remove very short and very long candidates. \fFigure 4. In the character coherence task (top), a model must order the dialogues in the \ufb01nal panel, while visual cloze (bottom) requires choosing the image of the panel that follows the given context. For visualization purposes, we show the original context panels; during model training and evaluation, textboxes are blacked out in every panel. than intrapanel complexity, we restrict pi to panels that contain only a single textbox. Thus, nothing from the current panel matters other than the artwork; the majority of the predictive information comes from previous panels. Visual Cloze: We know from Section 3 that in most cases, text and image work interdependently to tell a story. In the visual cloze task, we follow the same set-up as in text cloze, but our candidates are images instead of text. A key difference is that models are not given text from the \ufb01nal panel; in text cloze, models are allowed to look at the \ufb01nal panel\u2019s artwork. This design is motivated by eyetracking studies in single-panel cartoons, which show that readers look at artwork before reading the text [7], although atypical font style and text length can invert this order [16]. Character Coherence: While the previous two tasks focus mainly on narrative structure, our third task attempts to isolate character understanding through a re-ordering task. Given a jumbled set of text from the textboxes in panel pi, a model must learn to match each candidate to its corresponding textbox. We restrict this task to panels that contain exactly two dialogue boxes (narration boxes are excluded to focus the task on characters). While it is often easy to order the text based on the language alone (e.g., \u201chow\u2019s it going\u201d always comes before \u201c\ufb01ne, how about you?\u201d), many cases require inferring which character is likely to utter a particular bit of dialogue based on both their previous utterances and their appearance (e.g., Figure 4, top). 4.1. Task Dif\ufb01culty For text cloze and visual cloze, we have two dif\ufb01culty settings that vary in how cloze candidates are chosen. In the easy setting, we sample textboxes (or panel images) from the entire COMICS dataset at random. Most incorrect candidates in the easy setting have no relation to the provided context, as they come from completely different books and genres. This setting is thus easier for models to \u201ccheat\u201d on by relying on stylistic indicators instead of contextual information. With that said, the task is still non-trivial; for example, many bits of short dialogue can be applicable in a variety of scenarios. In the hard case, the candidates come from nearby pages, so models must rely on the context to perform well. For text cloze, all candidates are likely to mention the same character names and entities, while color schemes and textures become much less distinguishing for visual cloze. 5. Models & Experiments To measure the dif\ufb01culty of these tasks for deep learning models, we adapt strong baselines for multimodal language and vision understanding tasks to the comics domain. 
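Before moving on to the models, the two candidate-selection settings from Section 4.1 can be made concrete with a short sketch. Everything below is illustrative: the record layout, page window, and length thresholds are our own assumptions, not details from the paper (which specifies only that easy distractors are drawn from the whole corpus, hard distractors from nearby pages, and that very short and very long candidates are removed).

```python
import random

# Hypothetical record layout: each textbox is a dict carrying the book, page,
# and transcription it came from. Field names and thresholds are illustrative.

def sample_candidates(target, all_boxes, setting="easy", n_candidates=3,
                      page_window=2, min_words=2, max_words=30):
    """Build one text-cloze instance: the gold text plus sampled distractors."""
    def usable(box):
        n = len(box["text"].split())
        return min_words <= n <= max_words          # drop very short / very long texts

    if setting == "easy":
        # Distractors may come from anywhere in the corpus, so stylistic cues
        # (genre, character names) often give the answer away.
        pool = [b for b in all_boxes if b is not target and usable(b)]
    else:
        # Hard setting: distractors come from nearby pages of the same book.
        pool = [b for b in all_boxes
                if b is not target
                and b["book"] == target["book"]
                and b["page"] != target["page"]
                and abs(b["page"] - target["page"]) <= page_window
                and usable(b)]

    distractors = random.sample(pool, n_candidates - 1)
    candidates = [b["text"] for b in distractors] + [target["text"]]
    random.shuffle(candidates)
    return candidates, candidates.index(target["text"])   # candidate texts, gold index
```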
We evaluate four different neural models, variants of which were also used to benchmark the Visual Question Answering dataset [2] and encode context for visual storytelling [25]: text-only, image-only, and two image-text models. Our best-performing model encodes panels with a hierarchical LSTM architecture (see Figure 5). \fFigure 5. The image-text architecture applied to an instance of the text cloze task. Pretrained image features are combined with learned text features in a hierarchical LSTM architecture to form a context representation, which is then used to score text candidates. On text cloze, accuracy increases when models are given images (in the form of pretrained VGG-16 features) in addition to text; on the other tasks, incorporating both modalities is less important. Additionally, for the text cloze and visual cloze tasks, models perform far worse on the hard setting than the easy setting, con\ufb01rming our intuition that these tasks are non-trivial when we control for stylistic dissimilarities between candidates. Finally, none of the architectures outperform human baselines, which demonstrates the dif\ufb01culty of understanding COMICS: image features obtained from models trained on natural images cannot capture the vast variation in artistic styles, and textual models struggle with the richness and ambiguity of colloquial dialogue highly dependent on visual contexts. In the rest of this section, we \ufb01rst introduce a shared notation and then use it to specify all of our models. 5.1. Model de\ufb01nitions In all of our tasks, we are asked to make a prediction about a particular panel given the preceding n panels as context.10 Each panel consists of three distinct elements: image, text (OCR output), and textbox bounding box coordinates. For any panel pi, the corresponding image is zi. Since there can be multiple textboxes per panel, we refer to individual textbox contents and bounding boxes as tix and bix, respectively. Each of our tasks has a different set of answer candidates A: text cloze has three text candidates ta1...3, visual cloze has three image candidates za1...3, and character coherence has two combina10Test and validation instances for all tasks come from comic books that are unseen during training. tions of text / bounding box pairs, {ta1/ba1, ta2/ba2} and {ta1/ba2, ta2/ba1}. Our architectures differ mainly in the encoding function g that converts a sequence of context panels pi\u22121, pi\u22122, . . . , pi\u2212n into a \ufb01xed-length vector c. We score the answer candidates by taking their inner product with c and normalizing with the softmax function, s = softmax(AT c), (1) and we minimize the cross-entropy loss against the groundtruth labels.11 Text-only: The text-only baseline only has access to the text tix within each panel. Our g function encodes this text on multiple levels: we \ufb01rst compute a representation for each tix with a word embedding sum12 and then combine multiple textboxes within the same panel using an intrapanel LSTM [23]. Finally, we feed the panel-level representations to an interpanel LSTM and take its \ufb01nal hidden state as the context representation (Figure 5). For text cloze, the answer candidates are also encoded with a word embedding sum; for visual cloze, we project the 4096-d fc7 layer of VGG-16 down to the word embedding dimensionality with a fully-connected layer.13 11Performance falters slightly on a development set with contrastive max-margin loss functions [51] in place of our softmax alternative. 
12 As in previous work for visual question answering [57], we observe no noticeable improvement with more sophisticated encoding architectures.
13 For training and testing, we use three panels of context and three candidates. We use a vocabulary size of 30,000 words, restrict the maximum number of textboxes per panel to three, and set the dimensionality of word embeddings and LSTM hidden states to 256. Models are optimized using Adam [29] for ten epochs, after which we select the best-performing model on the dev set.

Model | Text Cloze (easy) | Text Cloze (hard) | Visual Cloze (easy) | Visual Cloze (hard) | Char. Coherence
Random | 33.3 | 33.3 | 33.3 | 33.3 | 50.0
Text-only | 63.4 | 52.9 | 55.9 | 48.4 | 68.2
Image-only | 51.7 | 49.4 | 85.7 | 63.2 | 70.9
NC-image-text | 63.1 | 59.6 | - | - | 65.2
Image-text | 68.6 | 61.0 | 81.3 | 59.1 | 69.3
Human | - | 84 | - | 88 | 87
Table 2. Combining image and text in neural architectures improves their ability to predict the next image or dialogue in COMICS narratives. The contextual information present in preceding panels is useful for all tasks: the model that only looks at a single panel (NC-image-text) always underperforms its context-aware counterpart. However, even the best-performing models lag well behind humans.

Image-only: The image-only baseline is even simpler: we feed the fc7 features of each context panel to an LSTM and use the same objective function as before to score candidates. For visual cloze, we project both the context and answer representations to 512-d with additional fully-connected layers before scoring. While the COMICS dataset is certainly large, we do not attempt learning visual features from scratch as our task-specific signals are far more complicated than simple image classification. We also try fine-tuning the lower-level layers of VGG-16 [4]; however, this substantially lowers task accuracy even with very small learning rates for the fine-tuned layers.
Image-text: We combine the previous two models by concatenating the output of the intrapanel LSTM with the fc7 representation of the image and passing the result through a fully-connected layer before feeding it to the interpanel LSTM (Figure 5). For text cloze and character coherence, we also experiment with a variant of the image-text baseline that has no access to the context panels, which we dub NC-image-text. In this model, the scoring function computes inner products between the image features of pi and the text candidates (we cannot apply this model to visual cloze because we are not allowed access to the artwork in panel pi).
6. Error Analysis
Table 2 contains our full experimental results, which we briefly summarize here. On text cloze, the image-text model dominates those trained on a single modality. However, text is much less helpful for visual cloze than it is for text cloze, suggesting that visual similarity dominates the former task. Having the context of the preceding panels helps across the board, although the improvements are lower in the hard setting. There is more variation across the models in the easy setting; we hypothesize that the hard case requires moving away from pretrained image features, and transfer learning methods may prove effective here. Differences between models on character coherence are minor; we suspect that more complicated attentional architectures that leverage the bounding box locations bix are necessary to \u201cfollow\u201d speech bubble tails to the characters who speak them.
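The scoring setup shared by these baselines (Section 5.1, Equation 1) is compact enough to sketch directly. The following minimal PyTorch sketch of the hierarchical image-text scorer is not the authors' code: dimensions follow the paper (256-d embeddings and hidden states, 4096-d fc7 features, 30,000-word vocabulary), while the class name, batching, and other details are illustrative assumptions, and only the text-cloze case is shown.

```python
import torch
import torch.nn as nn

class ImageTextCloze(nn.Module):
    def __init__(self, vocab_size=30000, dim=256, fc7_dim=4096):
        super().__init__()
        self.embed = nn.EmbeddingBag(vocab_size, dim, mode="sum")  # word-embedding sum per textbox
        self.intra = nn.LSTM(dim, dim, batch_first=True)           # over textboxes within a panel
        self.fuse = nn.Linear(dim + fc7_dim, dim)                  # combine text with VGG-16 fc7 features
        self.inter = nn.LSTM(dim, dim, batch_first=True)           # over the sequence of context panels

    def forward(self, box_tokens, fc7, cand_tokens):
        # box_tokens: (panels, boxes, words) token ids for the context panels
        # fc7:        (panels, 4096) pretrained image features
        # cand_tokens:(candidates, words) token ids for the text candidates
        P, B, W = box_tokens.shape
        boxes = self.embed(box_tokens.view(P * B, W)).view(P, B, -1)
        _, (panel_text, _) = self.intra(boxes)                     # (1, P, dim) per-panel text vectors
        panels = torch.relu(self.fuse(torch.cat([panel_text.squeeze(0), fc7], dim=-1)))
        _, (context, _) = self.inter(panels.unsqueeze(0))          # final hidden state = context c
        c = context.squeeze(0).squeeze(0)                          # (dim,)
        A = self.embed(cand_tokens)                                # candidates encoded the same way
        return A @ c                                               # scores; softmax + cross-entropy at train time

model = ImageTextCloze()
logits = model(torch.randint(0, 30000, (3, 3, 12)),                # 3 context panels, 3 boxes, 12 words each
               torch.randn(3, 4096),
               torch.randint(0, 30000, (3, 8)))                    # 3 text candidates
loss = nn.functional.cross_entropy(logits.unsqueeze(0), torch.tensor([0]))
```

Training would simply minimize this cross-entropy against the index of the gold candidate, as in Equation 1.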
We also compare all models to a human baseline, for which the authors manually solve one hundred instances of each task (in the hard setting) given the same preprocessed input that is fed to the neural architectures. Most human errors are the result of poor OCR quality (e.g., misspelled words) or low image resolution. Humans comfortably outperform all models, making it worthwhile to look at where computers fail but humans succeed. The top row in Figure 6 demonstrates an instance (from easy text cloze where the image helps the model make the correct prediction. The text-only model has no idea that an airplane (referred to here as a \u201cship\u201d) is present in the panel sequence, as the dialogue in the context panels make no mention of it. In contrast, the image-text model is able to use the artwork to rule out the two incorrect candidates. The bottom two rows in Figure 6 show hard text cloze instances in which the image-text model is deceived by the artwork in the \ufb01nal panel. While the \ufb01nal panel of the middle row does contain what looks to be a creek, \u201ccat\ufb01sh creek jail\u201d is more suited for a narrative box than a speech bubble, while the meaning of the correct candidate is obscured by the dialect and out-of-vocabulary token. Similarly, a camera \ufb01lms a \ufb01ght scene in the last row; the model selects a candidate that describes a \ufb01ght instead of focusing on the context in which the scene occurs. These examples suggest that the contextual information is overridden by strong associations between text and image, motivating architectures that go beyond similarity by leveraging external world knowledge to determine whether an utterance is truly appropriate in a given situation. 7. Related Work Our work is related to three main areas: (1) multimodal tasks that require language and vision understanding, (2) computational methods that focus on non-natural images, and (3) models that characterize language-based narratives. Deep learning has renewed interest in jointly reasoning about vision and language. Datasets such as MS COCO [35] and Visual Genome [31] have enabled image captioning [54, 28, 56] and visual question answering [37, 36]. Similar to our character coherence task, researchers have built models that match TV show characters with their visual attributes [15] and speech patterns [21]. Closest to our own comic book setting is the visual storytelling task, in which systems must generate [24] or reorder [1] stories given a dataset (SIND) of photos from \fFigure 6. Three text cloze examples from the development set, shown with a single panel of context (boxed candidates are predictions by the text-image model). The airplane artwork in the top row helps the image-text model choose the correct answer, while the text-only model fails because the dialogue lacks contextual information. Conversely, the bottom two rows show the image-text model ignoring the context in favor of choosing a candidate that mentions something visually present in the last panel. Flikr galleries of \u201cstoryable\u201d events such as weddings and birthday parties. SIND\u2019s images are fundamentally different from COMICS in that they lack coherent characters and accompanying dialogue. Comics are created by skilled professionals, not crowdsourced workers, and they offer a far greater variety of character-centric stories that depend on dialogue to further the narrative; with that said, the text in COMICS is less suited for generation because of OCR errors. 
We build here on previous work that attempts to understand non-natural images. Zitnick et al. [58] discover semantic scene properties from a clip art dataset featuring characters and objects in a limited variety of settings. Applications of deep learning to paintings include tasks such as detecting objects in oil paintings [11, 12] and answering questions about artwork [20]. Previous computational work on comics focuses primarily on extracting elements such as panels and textboxes [46]; in addition to the references in Section 2, there is a large body of segmentation research on manga [3, 44, 38, 30]. To the best of our knowledge, we are the \ufb01rst to computationally model content in comic books as opposed to just extracting their elements. We follow previous work in language-based narrative understanding; very similar to our text cloze task is the \u201cStory Cloze Test\u201d [42], in which models must predict the ending to a short (four sentences long) story. Just like our tasks, the Story Cloze Test proves dif\ufb01cult for computers and motivates future research into commonsense knowledge acquisition. Others have studied characters [14, 5, 26] and narrative structure [49, 33, 8] in novels. 8." + }, + { + "url": "http://arxiv.org/abs/1611.01242v1", + "title": "Answering Complicated Question Intents Expressed in Decomposed Question Sequences", + "abstract": "Recent work in semantic parsing for question answering has focused on long\nand complicated questions, many of which would seem unnatural if asked in a\nnormal conversation between two humans. In an effort to explore a\nconversational QA setting, we present a more realistic task: answering\nsequences of simple but inter-related questions. We collect a dataset of 6,066\nquestion sequences that inquire about semi-structured tables from Wikipedia,\nwith 17,553 question-answer pairs in total. Existing QA systems face two major\nproblems when evaluated on our dataset: (1) handling questions that contain\ncoreferences to previous questions or answers, and (2) matching words or\nphrases in a question to corresponding entries in the associated table. We\nconclude by proposing strategies to handle both of these issues.", + "authors": "Mohit Iyyer, Wen-tau Yih, Ming-Wei Chang", + "published": "2016-11-04", + "updated": "2016-11-04", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL" + ], + "main_content": "Introduction Semantic parsing, which maps natural language text to meaning representations in formal logic, has emerged as a key technical component for building question answering systems (Liang, 2016). Once a natural language question has been mapped to a formal query, its answer can be retrieved simply by executing the query on a back-end structured database. One of the main focuses of semantic parsing research is how to address compositionality in language. Extremely complicated questions have been used to \u2217Work done during an internship at Microsoft Research demonstrate the sophistication of semantic parsers,1 and such questions have been speci\ufb01cally targeted in the design of a recently-released QA dataset (Pasupat and Liang, 2015). Take for example the following question: \u201cof those actresses who won a Tony after 1960, which one took the most amount of years after winning the Tony to win an Oscar?\u201d The corresponding logical form is highly compositional; in order to answer it, many sub-questions must be implicitly answered in the process (e.g., \u201cwho won a Tony after 1960?\u201d). 
While we agree that semantic parsers should be able to answer very complicated questions, in reality these questions are rarely issued by human users of QA systems.2 Because users can interact with a QA system repeatedly, there is no need to assume a singleturn QA setting where the exact question intent has to be captured with just one complex question. The same intent can be more naturally expressed through a sequence of simpler questions, as shown below: 1. What actresses won a Tony after 1960? 2. Of those, who later won an Oscar? 3. Who had the biggest gap between their two award wins? Decomposing complicated intents into multiple related but simpler questions is arguably a more effective strategy to explore a topic of interest, and it reduces the cognitive burden on both the person who 1For example, \u201cwill it be warmer than 70 degrees near the Golden Gate Bridge after 5PM the day after tomorrow?\u201d (Dye, 2016) 2As indirect evidence, the percentage of questions with more than 15 words is only 3.75% in the WikiAnswers questions dataset (Fader et al., 2014). arXiv:1611.01242v1 [cs.CL] 4 Nov 2016 \fasks the question as well as the one who answers it.3 In this work, we study the semantic parsing problem for answering sequences of simple related questions. We collect a dataset of question sequences that we call SequentialQA (SQA)4 by asking crowdsourced workers to decompose complicated questions sampled from the WikiTableQuestions dataset (Pasupat and Liang, 2015) into multiple easier ones. In addition, each question is associated with answers selected by workers from a corresponding Wikipedia HTML table. Using the SQA dataset, we investigate experimentally how we should modify traditional semantic parser design to address different properties in this new, multi-turn QA setting, such as inter-question coreferences. Our contributions are twofold. First, to the best of our knowledge, SQA is the \ufb01rst semantic parsing dataset for sequential question answering. We believe this dataset will be valuable to future research on both semantic parsing and question answering in the more natural interactive setting. Second, after evaluating existing question answering systems on SQA, we \ufb01nd that none of them performs adequately, despite the relative lack of compositionality. We provide a detailed error analysis, which suggests that improperly-resolved references and mismatches between question text and table entries are the main sources of errors. The rest of the paper is structured as follows. Sec. 2 contrasts the existing tasks and datasets to SQA. Sec. 3 describes how we collect the data in detail. Sec. 4 presents our experimental study, followed by the discussion in Sec. 5.2. Finally, Sec. 6 concludes the paper. 2 Related Work Our work is related to existing research on conversational (or contextual) semantic parsing, as well as more generally to interactive question-answering systems that operate on semi-structured data. Previous work on conversational QA has focused on small, single-domain datasets. Perhaps most related to our task is the context-dependent sentence 3While cognitive load has not been measured speci\ufb01cally for complicated questions, there have been many studies linking increased sentence complexity to longer reading times (Hale, 2006; Levy, 2008; Frank, 2013). 
4To be released at http://aka.ms/sqa analysis described in Zettlemoyer and Collins (2009), where conversations between customers and travel agents are mapped to logical forms after resolving referential expressions. Another dataset of travel booking conversations is used by Artzi and Zettlemoyer (2011) to learn a semantic parser for complicated queries given user clari\ufb01cations. More recently, Long et al. (2016) collect three contextual semantic parsing datasets (from synthetic domains) that contain coreferences to entities and actions. We differentiate ourselves from these prior works in two signi\ufb01cant ways: \ufb01rst, our dataset is not restricted to a particular domain, which results in major challenges further detailed in Section 5.2; and second, a major goal of our work is to analyze the different types of sequence progressions people create when they are trying to express a complicated intent. Complex, interactive QA tasks have also been proposed in the information retrieval community, where the data source is a corpus of newswire text (Kelly and Lin, 2007). We also build on aspects of some existing interactive question-answering systems. For example, the system of Harabagiu et al. (2005) includes a module that predicts what a user will ask next given their current question. A follow-up work (Lacatusu et al., 2006) proposes syntax-based heuristics to automatically decompose complex questions into simpler ones. Both works rely on proprietary limited-domain datasets; it is unlikely that the proposed heuristics would scale across arbitrary domains. 3 A Dataset of Question Sequences Since there are no previous publicly-available datasets for our task, we collect the SequentialQA (SQA) dataset via crowdsourcing. We leverage WikiTableQuestions (Pasupat and Liang, 2015, henceforth WTQ), which contains highly compositional questions associated with HTML tables from Wikipedia. Each crowdsourcing task contains a long, complex question originally from WTQ as the question intent. The workers are asked to compose a sequence of simpler questions that lead to the \ufb01nal intent; an example of this process is shown in Figure 1. To simplify the task for workers, we only select certain types of questions from WTQ. In particular, we only use questions from WTQ whose answers \fName EGOT completed Years to complete Emmy Grammy Oscar Tony Richard Rodgers 1962 17 1962 1960 1945 1951 Helen Hayes 1977 45 1953 1977 1932 1941 Rita Moreno 1977 16 1977 1972 1961 1975 John Gielgud 1991 30 1991 1979 1981 1961 Audrey Hepburn 1994 41 1993 1994 1953 1954 Marvin Hamlisch 1995 23 1995 1974 1973 1976 Jonathan Tunick 1997 20 1982 1988 1977 1997 Mel Brooks 2001 34 1967 1998 1968 2001 Mike Nichols 2001 40 2001 1961 1967 1964 Whoopi Goldberg 2002 17 2002 1985 1990 2002 Scott Rudin 2012 28 1984 2012 2007 1994 Robert Lopez 2014 10 2008 2012 2014 2004 What actresses have completed an EGOT? Which of them won a Tony after 1960? Of those, who took the most years to complete the EGOT? List of people who have won Academy, Emmy, Grammy, and Tony Awards Of those actresses who won a Tony after 1960, which took the most amount of years to get their EGOT completed? DECOMPOSED SEQUENCE: ORIGINAL INTENT: Figure 1: An example decomposition of a complicated intent from WTQ. Workers must create a sequence of decomposed questions where the answer to each question is a subset of cells in the table. are cells in the table, which excludes those involving arithmetic and counting. 
We likewise also restrict the questions our workers can write to those that are answerable by only table cells. These restrictions speed the annotation process because, instead of typing their answers, workers can just click on the table to answer their question. They also allow us to collect answer coordinates (row and column in the table) as opposed to answer text, which removes many normalization issues for answer string matching that are present in the original WTQ dataset. Finally, we only use intents that contain nine or more words; we \ufb01nd that shorter questions tend to be simpler and are thus less amenable to decomposition. After iterating on the task design with many pilot tasks, we found that the following constraints are necessary for workers to produce good sequences: Minimum sequence length: Workers must create sequences that contain at least two questions. If the intent is not easily decomposed into multiple questions, we instruct workers to create an alternate intent whose answer is the same as that of the original. We also encourage workers to write longer sequences if possible. Final answer same as original answer: The \ufb01nal question of a sequence must have the same answer as that of the original intent. Without this constraint, some workers were writing sequences that diverged drastically from the intent. No copying the intent: After adding the previous constraint, we found that many workers were just copying the intent as the \ufb01nal question of their sequence, which resulted in unnatural-sounding sequences. After we disallowed copying, the workers\u2019 \ufb01nal questions contained many more references to previous questions and answers. We also encouraged (but did not enforce) the following: Simplicity: When decomposing a complicated intent into a sequence of questions, we expect that each question in the sequence should be simpler than the intent itself. However, de\ufb01ning \u201csimple\u201d is dif\ufb01cult, and enforcing any de\ufb01nition is even harder. Instead, we told workers to try to limit their questions to those that can be answered with just a single primitive operation (e.g., column selection, argmax/argmin, \ufb01ltering with a single condition) and provided them with examples of each primitive. Following this definition too closely, however, can result in unnatural sequences, so we do not make any UI changes to limit questions to single primitives. Inter-question coreferences: Take the following two sequences generated from the same question intent: 1. What country won the World Cup in 2014? Of the players on the team that won the World Cup in 2014, which ones were mid\ufb01elders? 2. What country won the World Cup in 2014? Of the players on that team, which ones were mid\ufb01elders? The second question of the \ufb01rst sequence clumsily repeats information found in the preceding question, while the second sequence avoids this repetition with the referring expression \u201cthat team\u201d. To encourage more coreferences between questions, we showed workers example sequences like these and stated that the second one is preferred. 3.1 Properties of SQA In total, we used 2,022 question intents from the train and test folds of the WTQ for decomposition. We had three workers decompose each intent, resulting in 6,066 unique questions sequences containing 17,553 \ftotal question-answer pairs (for an average of 2.9 questions per sequence). 
We divide the dataset into train and test using the original WTQ folds, resulting in an 83/17 train/test split. Importantly, just like in WTQ, none of the tables in the test set are seen in the training set. We identify three frequently-occurring question classes: select column, select subset, and select row. In select column questions, the answer is an entire column of the table; these questions account for 23% of all questions in SQA. Subset and row selection are more complicated than column selection, particularly because they usually contain coreferences to the previous question\u2019s answer. In select subset questions, the answer is a subset of the previous question\u2019s answer; similarly, the answers to select row questions occur in the same row(s) as the previous answer but in a different column. Select subset questions make up 27% of SQA, while select row is 19%. The remaining 31% of SQA is comprised of more complex questions that are combinations of these three types. In the sequence \u201cwhat are all of the tournaments? in which one did he score the least points? on what date was that?\u201d, the \ufb01rst question is a column selection, the second question is a subset selection, and the \ufb01nal question is a row selection. We also observe dramatic differences in the types of questions that are asked at each position of the sequence. For example, looking at just the \ufb01rst question of each sequence, 51% of them are of the select column variety (e.g., \u201cwhat are all of the teams?\u201d). This number dwindles to just 18% when we look at the second question of each sequence, which indicates that the collected sequences start with general questions and progress to more speci\ufb01c ones. By definition, select subset and select row questions cannot be the \ufb01rst question in a sequence. 4 Baseline Experiments We evaluate two existing QA systems on SQA, a semantic parsing system called \ufb02oating parser and an end-to-end neural network. The \ufb02oating parser considers each question in a sequence independently of the previous questions, while the neural network leverages contextual information from the sequence. Our goals with these experiments are (1) to measure the dif\ufb01culty of SQAand (2) to better understand the behaviors of existing state-of-the-art systems. 4.1 Floating parser An obvious baseline is the \ufb02oating parser (FP) developed by Pasupat and Liang (2015), which FP maps questions to logical forms and then executes them on the table to retrieve the answers. It achieves 37.0% accuracy on the WTQ test set. One of the key challenges in semantic parsing is the \u201csemantic matching problem\u201d, where question text cannot be matched to the corresponding answer column or cell verbatim. Without external knowledge, it is often hard to map words or phrases in a question to predicates in its corresponding logical form. Further compounding this problem is that the train and test tables are disjoint, which renders lexicon induction futile. Therefore, FP does not anchor predicates to tokens in the question, relying instead on typing constraints to reduce the search space.5 Using FP as-is results in poor performance on SQA. The main reason is that the system is con\ufb01gured for questions with single answers, while SQA contains a high percentage of questions with multiplecell answers. 
We address the single-answer issue by removing a pruning hyperparameter (tooManyValues) that eliminates all candidate parses with more than ten items in their denotations, as well as by removing features that add bias on the denotation size.
5 See Pasupat and Liang (2015) for more details.
4.2 End-to-end neural network
Recently, two different end-to-end neural network architectures for question-answering on tables have been proposed (Neelakantan et al., 2015; Yin et al., 2016). Both models show promising results on synthetic datasets, but neither has been evaluated on real data. We implement our own end-to-end neural model (NEURAL) by generally following both models but deviating when necessary to account for our dataset characteristics. As a brief description, we encode the question, each column header, and each cell in the table with a character-level LSTM. We identify three high-level operations based on our dataset characteristics (select column, select row, and select cell) and design modules that perform each of these functions. A module-level soft attention mechanism, effectively a weighted sum of the module scores, decides which module to use given a question.6 We also place an additional LSTM over the question sequence in order to pass information about previous answers and questions to the current time step. Finally, the output of the attention mechanism and the question-sequence LSTM is combined and fed to a binary classifier that, given each cell of the table, decides if the cell is part of the answer to the current question or not.
6 We did not design more specific modules to handle arithmetic or aggregation like those of Neelakantan et al. (2015), although this is a potentially interesting direction for larger datasets.
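The description above is concrete enough to sketch schematically. The following PyTorch sketch is our own simplification, not the paper's model (which is fully specified in its Appendix A): the dot-product module scorers, mean-pooled row representations, scalar summary of the previous answer, and 100-dimensional states are all illustrative assumptions about how the pieces could fit together.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class CharEncoder(nn.Module):
    """Encode a string (given as character ids) into a fixed-size vector."""
    def __init__(self, n_chars=128, dim=100):
        super().__init__()
        self.embed = nn.Embedding(n_chars, dim)
        self.lstm = nn.LSTM(dim, dim, batch_first=True)

    def forward(self, char_ids):                                   # (items, max_len) -> (items, dim)
        _, (h, _) = self.lstm(self.embed(char_ids))
        return h.squeeze(0)

class Neural(nn.Module):
    def __init__(self, dim=100):
        super().__init__()
        self.enc = CharEncoder(dim=dim)
        self.col_mod = nn.Linear(dim, dim)        # select-column module (scores columns vs. question)
        self.row_mod = nn.Linear(dim, dim)        # select-row module
        self.cell_mod = nn.Linear(dim, dim)       # select-cell module
        self.attend = nn.Linear(dim, 3)           # soft attention over the three modules
        self.seq = nn.LSTMCell(dim + 1, dim)      # LSTM over the question sequence
        self.out = nn.Linear(3, 1)                # per-cell binary classifier

    def forward(self, q_chars, header_chars, cell_chars, prev_answer, state=None):
        R, C, L = cell_chars.shape
        q = self.enc(q_chars.unsqueeze(0)).squeeze(0)                  # (dim,) question vector
        cols = self.enc(header_chars)                                  # (C, dim) column headers
        cells = self.enc(cell_chars.view(R * C, L)).view(R, C, -1)     # (R, C, dim) table cells
        rows = cells.mean(dim=1)                                       # crude row representations

        col_scores = (cols @ self.col_mod(q)).view(1, C).expand(R, C)
        row_scores = (rows @ self.row_mod(q)).view(R, 1).expand(R, C)
        cell_scores = cells @ self.cell_mod(q)                         # (R, C)

        alpha = F.softmax(self.attend(q), dim=-1)                      # which operation to trust
        mixed = alpha[0] * col_scores + alpha[1] * row_scores + alpha[2] * cell_scores

        # The question-sequence LSTM carries information about previous questions
        # and (here, a scalar summary of) previous answers across turns.
        step_in = torch.cat([q, prev_answer.float().mean().view(1)]).unsqueeze(0)
        h, c = self.seq(step_in, state)

        cell_h = (cells * h.view(1, 1, -1)).sum(-1)                    # interaction with sequence state
        feats = torch.stack([mixed, prev_answer.float(), cell_h], dim=-1)
        return torch.sigmoid(self.out(feats)).squeeze(-1), (h, c)      # (R, C) answer probabilities
```

Across a sequence, the predicted cell probabilities and the recurrent state would be threaded forward as prev_answer and state for the next question.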
Fig. 2 shows an example of how the modules in NEURAL work together to answer a given question. In particular, since the question \u201cwhich of them won a Tony after 1960?\u201d is asking for the names of the actresses, the column selection module places most of its weight on the \u201cName\u201d column, while the row selection module highly weights rows that satisfy the condition \u201cTony after 1960\u201d. The modules, which take the question and table as input, are merged with an attention mechanism a that also considers the answer to the previous question. A full specification of NEURAL can be found in Appendix A.
[Figure 2: Diagram of the NEURAL architecture, illustrated on the question \u201cWhich of them won a Tony after 1960?\u201d asked over the EGOT table after the previous question \u201cWhat actresses have completed an EGOT?\u201d. Small colored rectangles represent the output of the character-level LSTM decoder. The question, column header, and cell representations are passed to three attentional modules; their output is combined with the answer predictions for the previous question to yield a final answer prediction for each cell.]
In contrast to both the neural programmer of Neelakantan et al. (2015) and the neural enquirer of Yin et al. (2016), we make the simplifying assumption that each question in a sequence can be solved with just a single operation. Another major difference is that we use a character-level LSTM, as the training and test vocabulary are radically different.7
7 Due to the fact that much of our vocabulary (e.g., numbers, entities) is not included in a regular corpus, we suspect that the alternative of leveraging publicly-available word embeddings will not be effective.
4.3 Results
Table 1 shows the results of both FP and NEURAL on the test set of SQA. We present both the overall accuracy and the accuracy of answers to questions at each position. Although the accuracy of FP on position-1 questions (48.7%) is much higher than its performance on WTQ (37.0%), the overall accuracy (32.8%) is still lower, which indicates that our SQA dataset remains difficult. In addition, the NEURAL model significantly underperforms FP, suggesting that it requires more data or more sophisticated architectural design to generalize to all of SQA's complexities.
Model | All | Pos 1 | Pos 2 | Pos 3 | Pos 4
FP | 32.8 | 48.7 | 25.8 | 26.2 | 17.5
NEURAL | 17.4 | 27.6 | 13.4 | 11.8 | 12.2
Table 1: Accuracy of existing systems on SQA, overall and on questions at each position within the sequence.
5 Directions for Improving Sequential Question Answering
In this section, we explore possible directions for improving system performance in the sequential question answering setting. We start by investigating different strategies for handling coreferences across questions, and then revisit the semantic matching issue through an error analysis.
5.1 Adapting Existing Semantic Parsers
As we observed in Sec. 4, existing semantic parsers perform suboptimally on SQA. One possible explanation for this suboptimal performance, shown in Table 1, is that questions that contain references to previous questions or answers are not handled properly. Leveraging FP, we propose two ways to deal with this issue: question rewriting and table rewriting.
Question rewriting: Take for example the partial sequence \u201cwhat are all the countries that participated in the olympics? which ones won more than two gold medals?\u201d Any system that treats these two questions independently of each other has a high likelihood of failing on the second question because \u201cones\u201d is not resolved to \u201ccountries\u201d. The obvious solution is to apply coreference resolution. However, existing coreference resolution systems struggle to identify coreferences across two questions, potentially because their training data came from newswire text with few questions. An alternative approach is to create a set of common referential expressions (e.g., \u201cones\u201d, \u201cthem\u201d, \u201cthose\u201d) and replace them with noun phrases from the previous question. As we do not have ground-truth coreference annotations, we compute upper-bound improvements on question rewriting instead. That is, we rewrite a reference in a question with all possible noun phrases in the previous question and count the question as correct if any of the rewritten questions are answered correctly. Interestingly, we observe an upper-bound improvement of only about 2% accuracy. Why is the upper bound so low? An error analysis finds that in many cases, the logical form predicted by FP is wrong even when the referential expression is correctly resolved. We will discuss this phenomenon more in Sec. 5.2, but here we concentrate on another common scenario: the question contains a coreference to the answer of the previous question. If we modify our example sequence to \u201cwhat are all the countries that participated in the olympics in 2012?
which ones won more than two gold medals?\u201d, then simply replacing \u201cones\u201d with \u201ccountries\u201d does not resolve the reference.
Table rewriting: Instead of building a model that can learn to rewrite the second question to \u201cwhich countries won more than two gold medals in 2012\u201d, or training a semantic parser that can incrementally update the logical form from the previous question as in Zettlemoyer and Collins (2009), we propose to simply rewrite the table based on the first question's answer. Specifically, if we know that a particular question is a row or subset selection type, then we also know that its answer must be located in the rows that contain the previous answer. For example, take the second question of the decomposed sequence in Fig. 1, which contains a coreference to the answer of the first question (\u201cwhich of them won a Tony after 1960\u201d) that refers to four actresses. The smallest possible table from which we can still answer this question is one that has four rows (one for each of the four actresses) and two columns (\u201cName\u201d and \u201cTony\u201d). However, identifying the columns necessary to answer each question is often difficult, so we leave this task to the semantic parser and remove only rows (not columns) that do not contain the previous question's answers (see the rewritten table for this example in Fig. 2); a sketch of this row-filtering step follows the list of policies below. In this way, we implicitly resolve the coreference \u201cof them\u201d, as any rows that do not correspond to actresses are excluded. Before rewriting the table, we have to first decide whether the question contains a coreference to the answer or not. We know that we should only rewrite the table for subset and row selection questions. Since we can identify the question type in our dataset based on the coordinates of the answers, we assume that we know which questions should and should not be rewritten, and use this information to compute upper bounds for semantic parser improvement with table rewriting. We evaluate five different rewriting policies, which vary in their knowledge of both the question type and the correctness of the previous predicted answer:
1. never rewrite the table
2. always rewrite the table based on the previous predicted answer, regardless of whether table rewriting is applicable to the question
3. rewrite row/subset: rewrite the table based on the previous predicted answer only when table rewriting is applicable (i.e., the question is subset or row selection)
4. reference: same as rewrite row/subset, except we only rewrite when we know the previous predicted answer is correct
5. upper bound: same as rewrite row/subset, except we rewrite using the previous ground-truth answer instead of the previous predicted answer
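As referenced above, the row-filtering operation itself is simple. Here is a minimal sketch; the table representation (a header plus a list of rows) and the coordinate format are hypothetical conveniences for illustration, not the dataset's storage format.

```python
def rewrite_table(header, rows, prev_answer_coords):
    """Keep only rows that contain cells from the previous question's answer.

    Columns are left untouched: deciding which columns matter is left to the
    semantic parser. Coordinates are (row index, column index) pairs.
    """
    keep = {r for r, _ in prev_answer_coords}
    return header, [row for i, row in enumerate(rows) if i in keep]

# Toy version of the EGOT example: the previous answer picked out two names
# (rows 0 and 2), so only those rows survive and "of them" is resolved implicitly.
header = ["Name", "Tony"]
rows = [["Helen Hayes", "1941"], ["Rita Moreno", "1975"], ["Whoopi Goldberg", "2002"]]
_, filtered = rewrite_table(header, rows, prev_answer_coords=[(0, 0), (2, 0)])
print(filtered)   # [['Helen Hayes', '1941'], ['Whoopi Goldberg', '2002']]
```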
Intuitively, this makes sense: if the parser only gets 30% accuracy, then 70% of the time it will be incorrect on the previous question, and rewriting the table based on a wrong answer could make it impossible for the parser to get the right answer (see the lower oracle scores for always rewrite and rewrite row/subset). Based on these results, table rewriting will only be useful if the base parser\u2019s accuracy is high. 5.2 The semantic matching problem The underwhelming improvements from question and table rewriting force us to re-evaluate our original hypothesis that reference resolution is the main source of complexity in our dataset. We take 70 questions from our dev set and manually annotate them with reasons why FP answered them incorrectly. Somewhat surprisingly, we \ufb01nd that only 15 of these errors are due solely to coreferences! The majority of errors are due to wrong logical forms that cannot be corrected by simply resolving a coreference (e.g., the wrong operations are used, or the order of the operations is incorrect). When checking these questions in detail, we \ufb01nd that the majority of the errors are due to the semantic matching problem \u2013 mismatches between question text and table text. The error analysis in (Pasupat and Liang, 2015) on the more complicated WTQ dataset shows that 25% of errors are due to these mismatches and an additional 29% to normalization issues (e.g., 8The number of candidate parses considered by FP varies and could sometimes be hundreds. what historic sites are located near a highway? Name Location City Bronson Public Library 207 Matteson Street Bronson City of Coldwater Informational Designation City Park at intersection of US-12 and US-27 Goldwater on what dates did the games end in a tie? Date Opponents Result F-A 25 August 1984 Watford 1-1 8 September 1984 Newcastle United 5-0 what stations play rock and jazz? Location Call sign Network Beach KDPR News & Classical Grand Forks KFJM Roots, Rock, and Jazz Lisbon KDSU News, Classical, Rock, and Jazz Figure 3: Example mismatches between question and table from SQA. Resolving the mismatches requires world knowledge such as highway naming conventions and sports terminology that must be provided externally or learned from a larger corpus. an answer cell may contain \u201cBeijing, China\u201d but the crowdsourced answer is just \u201cBeijing\u201d). Because all answers in SQA are the exact text of cells in the table, we avoid these normalization issues; however, the results in Table 1 show that the sequential nature of SQA makes it equally as dif\ufb01cult as WTQ for machines. The examples in Figure 3 suggest that without solving the semantic matching problem, we will not be able to properly take advantage of our question or table rewriting adaptations. 6" + } + ] + }, + "edge_feat": {} + } +} \ No newline at end of file