diff --git "a/abs_29K_G/test_abstract_long_2405.00899v1.json" "b/abs_29K_G/test_abstract_long_2405.00899v1.json"
new file mode 100644
--- /dev/null
+++ "b/abs_29K_G/test_abstract_long_2405.00899v1.json"
@@ -0,0 +1,441 @@
+{
+ "url": "http://arxiv.org/abs/2405.00899v1",
+ "title": "Characterising the Creative Process in Humans and Large Language Models",
+ "abstract": "Large language models appear quite creative, often performing on par with the\naverage human on creative tasks. However, research on LLM creativity has\nfocused solely on \\textit{products}, with little attention on the creative\n\\textit{process}. Process analyses of human creativity often require hand-coded\ncategories or exploit response times, which do not apply to LLMs. We provide an\nautomated method to characterise how humans and LLMs explore semantic spaces on\nthe Alternate Uses Task, and contrast with behaviour in a Verbal Fluency Task.\nWe use sentence embeddings to identify response categories and compute semantic\nsimilarities, which we use to generate jump profiles. Our results corroborate\nearlier work in humans reporting both persistent (deep search in few semantic\nspaces) and flexible (broad search across multiple semantic spaces) pathways to\ncreativity, where both pathways lead to similar creativity scores. LLMs were\nfound to be biased towards either persistent or flexible paths, that varied\nacross tasks. Though LLMs as a population match human profiles, their\nrelationship with creativity is different, where the more flexible models score\nhigher on creativity. Our dataset and scripts are available on\n\\href{https://github.com/surabhisnath/Creative_Process}{GitHub}.",
+ "authors": "Surabhi S. Nath, Peter Dayan, Claire Stevenson",
+ "published": "2024-05-01",
+ "updated": "2024-05-01",
+ "primary_cat": "cs.HC",
+ "cats": [
+ "cs.HC",
+ "cs.AI",
+ "cs.CL",
+ "q-bio.NC"
+ ],
+ "label": "Original Paper",
+ "paper_cat": "LLM Fairness",
+ "gt": "Large language models appear quite creative, often performing on par with the\naverage human on creative tasks. However, research on LLM creativity has\nfocused solely on \\textit{products}, with little attention on the creative\n\\textit{process}. Process analyses of human creativity often require hand-coded\ncategories or exploit response times, which do not apply to LLMs. We provide an\nautomated method to characterise how humans and LLMs explore semantic spaces on\nthe Alternate Uses Task, and contrast with behaviour in a Verbal Fluency Task.\nWe use sentence embeddings to identify response categories and compute semantic\nsimilarities, which we use to generate jump profiles. Our results corroborate\nearlier work in humans reporting both persistent (deep search in few semantic\nspaces) and flexible (broad search across multiple semantic spaces) pathways to\ncreativity, where both pathways lead to similar creativity scores. LLMs were\nfound to be biased towards either persistent or flexible paths, that varied\nacross tasks. Though LLMs as a population match human profiles, their\nrelationship with creativity is different, where the more flexible models score\nhigher on creativity. Our dataset and scripts are available on\n\\href{https://github.com/surabhisnath/Creative_Process}{GitHub}.",
+ "main_content": "Introduction Much recent work has benchmarked and quantified the generative creative aptitudes of large language models (LLMs) (Chakrabarty et al. 2023; Gilhooly 2023; Franceschelli and Musolesi 2023; Tian et al. 2023; Wang et al. 2024; Hubert, Awa, and Zabelina 2024). LLMs often perform as well as the average human on creative thinking tasks such as the Alternate Uses Task (AUT) (Orwig et al. 2024; Koivisto and Grassini 2023; Stevenson et al. 2022; G\u00b4 oes et al. 2023; Guzik, Byrge, and Gilde 2023). However, these works largely analysed creativity from a Product perspective (Rhodes 1961), assessing how original and useful model responses are to determine \u201cwhat makes them creative (or not)\u201d. An equally important component of creativity, less studied in the field of Artificial Creativity, is the Process perspective (Rhodes 1961), addressing the question of \u201chow creativity arises\u201d. This paper aims to fill this gap and characterise human and LLM creativity by looking at the creative process (Stevenson et al. 2022), particularly the way humans and LLMs explore semantic spaces while generating creative ideas. RESPONSES (ri) FLEXIBLE PERSISTENT MIXED r1 r2 r3 r4 r5 r6 r7 r8 r11 r12 r13 r14 r15 r16 r17 r18 r9 r10 r1 r2 r3 r4 r5 r6 r7 r8 r11 r12 r13 r14 r15 r16 r17 r18 r9 r10 r1 r2 r3 r4 r5 r6 r7 r8 r11 r12 r13 r14 r15 r16 r17 r18 r9 r10 Figure 1: Example persistent, flexible and mixed response sequences. ri denotes the ith response, coloured regions denote the semantic spaces/concepts/categories. Note, in practice, most sequences will be mixed, containing different patterns of persistence and flexibility. When humans generate creative ideas, for example, alternate uses for a \u201cbrick\u201d, two types of response pathways are observed (Baas et al. 2013; Nijstad et al. 2010). In the persistent pathway, responses stem from deeper search within limited conceptual spaces, exhibiting high clustering and similarities in responses (e.g., using a brick to break a window, break a lock, and as a nutcracker; i.e., for breaking things). In the flexible pathway, responses arise from broader search across multiple conceptual spaces, exhibiting frequent jumps between categories and dissimilarities in responses (e.g., using a brick to build a dollhouse, as an exercise weight, and as a coaster) (Figure 1). There are two complementary ways of quantifying response clustering borrowed from the literature on memory search and semantic fluency. The first is to categorise responses temporally using inter-item retrieval times, i.e. responses that occur shortly after each other are expected to belong to the same category and longer pauses are expected to signal jumps from one category to another. The second method is to group successive responses semantically using a set of pre-defined categories (e.g., into \u201cbuilding\u201d or \u201cbreaking\u201d for uses of a brick). The number of categories divided by the number of responses provides a flexibility index (Hills, Jones, and Todd 2012). Hass (2017) compared clustering in creative thinking tasks like AUT to that in a verbal fluency task (VFT) of naming animals and reported less evident clustering and higher flexibility in AUT than VFT (where responses were highly clustered, for example naming zoo animals followed by sea animals). 
However, the methods used in these works are either based on handcrafted lists of categories or on response-time profiles, which do not apply to responses from LLMs. In addition, these works show that semantic similarity is related to jumps in response sequences, but semantic similarity has not been used to code for jumps directly until now. In this paper, we propose a fully automated, data-driven method to signal jumps in response sequences using response categorisation and semantic similarities and apply it to characterise the creative process in both humans and LLMs. In the next sections, we first introduce the method and investigate its reliability and validity. We then apply it to characterise human and LLM flexibility on the AUT and VFT. We find that LLMs as a population match the variability in human response sequences on AUTs, but unlike humans, their relationship to creativity differs. We also discuss how these insights could be used to employ LLMs as artificial participants or co-creators. Method Data Collection: We collected data from humans and LLMs on the AUT for \u201cbrick\u201d and \u201cpaperclip\u201d, and the VFT of naming animals (Figure 2A). Human data were collected from (anonymized) undergraduate participants using a within-subjects design. For the AUT, participants listed as many creative uses for \u201cbrick\u201d and \u201cpaperclip\u201d as possible in a fixed time of 10 minutes. For the VFT, participants named as many animals as possible in a fixed time of 2 minutes. Participants not adhering to instructions were removed, resulting in a total of 220 participants. The responses, originally in Dutch, were translated to English for analysis using the deep-translator Python package. Translations were manually inspected to correct for errors due to spelling mistakes. LLM data were collected in English by prompting several recent open and closed source models. For open source models, we used the Together AI API. The prompt matched instructions given to humans, but with specific response number and length requirements. We tested multiple prompt versions to achieve the best quality LLM responses. The final prompt for the AUT instructed LLMs to generate nAUT creative uses for \u201cbrick\u201d or \u201cpaperclip\u201d, and to answer in short phrases of maximum mAUT words. For the VFT, the final prompt instructed LLMs to name nVFT animals, and to answer in short phrases of maximum mVFT words. nAUT, nVFT were set to the mean number of human responses (N) in AUT (=ceil[max[Nbrick, Npaperclip]]) and VFT tasks. mAUT, mVFT were set to the maximum mean human response word length (M) in AUT (=floor[max[Mbrick,Mpaperclip]]) and VFT. In pilots, only \u223c20 models gave valid responses for the AUT tasks, of which we selected the 4 that followed the prompt instructions for length and number of responses, namely, Meta 70B Llama 3 chat HF (Llama) model, Mistral AI 7B Instruct (Mistral) model, NousResearch 7B Nous-Hermes Mistral DPO (NousResearch) model and Upstage 10.7B SOLAR Instruct (Upstage) model. We experimented with temperature and repetition penalty parameters. However, varying the repetition penalty did not produce higher quality responses, so we only varied the temperature, through 11 levels (0-1, inclusive, at every 0.1).
We also tested the latest versions of 4 closed source models: OpenAI GPT-4 turbo (GPT), Google Palm bison (Palm), Google Gemini 1.0 pro (Gemini) and Anthropic Claude 3 (Claude), with the same prompt and parameters as for the open models. All 4 models generated valid responses and adhered to the response number and length instructions. We generated 5 samples per model \u00d7 temperature combination, and therefore our LLM data set consisted of 440 (8 \u00d7 11 \u00d7 5) LLM response sequences in all. The 220 human and 440 LLM response sequences were cleaned by removing stopwords, punctuation and common words such as \u201cuse\u201d or \u201cbrick\u201d/\u201cpaperclip\u201d. They were also manually inspected for correctness and validity. Invalid responses (verbatim repeats/junk responses) were removed [1]. Response Categorisation, Semantic Similarities and Jump Signal: First, we encoded all responses using sentence-transformers, using the gte-large model given its encodings\u2019 suitability for clustering. Each response was encoded as a 1024 dimensional normalised embedding vector. Next, all responses were aggregated, dropping duplicates, resulting in 2770 unique alternate uses for brick, 3512 unique alternate uses for paperclip and 482 unique animals. The vector embeddings of these response sets were categorised using the scipy linkage, fcluster hierarchical clustering functions with the ward distance metric and a distance threshold chosen such that the mean minimum pairwise semantic similarity (vector dot product) per category was just above 0.7. This resulted in 26 brick, 28 paperclip, and 15 animal categories. Using these categories, we defined a binary variable jumpcat for each response in a response sequence (except for the first response) as 1 if it marked a change in category compared to the previous response, and 0 otherwise. jumpcat provided us with coarse-grained similarities (for example \u2018elevation\u2019 and \u2018table leg\u2019 belonged to the same category, as did \u2018keep scarf together\u2019 and \u2018hang clothes\u2019). To address finer-grained differences, we evaluated the semantic similarity (SS) between successive embeddings of responses in a response sequence (Hass 2017; Camenzind et al. 2024). Using SS, we defined a second binary variable jumpSS and set it to 0 if SS was above a threshold and 1 otherwise. jumpSS signaled finer-grained similarities (for example \u2018piercing\u2019 and \u2018ring\u2019). A combined jump signal was defined as their logical AND: jump = jumpcat \u2227 jumpSS. We set the threshold for jumpSS such that jump has at least 0.8 True Positive and True Negative Rates on hand-coded [2] jump signals for AUT brick. Our entire procedure is illustrated in Figure 2B. We conduct psychometric analyses to investigate the reliability and validity of the method. [1] For AUT brick, we removed low temperature responses in Mistral and NousResearch models as these were verbatim repeats. For VFT, we excluded NousResearch and Palm models fully, as they only listed animals in alphabetical order. [2] The jump signals were hand-coded by the first author. Figure 2: (A) Humans and LLMs perform 3 tasks\u2014Alternate Uses Task (AUT) for brick and paperclip, and a Verbal Fluency Task (VFT) of naming animals. (B) Our method for obtaining jumps in the response sequence.
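A condensed sketch of this jump-signal pipeline (gte-large sentence embeddings, Ward hierarchical clustering for categories, a semantic-similarity threshold on successive responses, and their logical AND); the clustering distance threshold and the 0.7 similarity cut-off below are placeholders, not the tuned values, and the Hugging Face model id is an assumption.

```python
# Sketch of the jump-signal computation: category jumps (jump_cat) AND similarity jumps (jump_ss).
# Thresholds below are placeholders; the paper tunes them against hand-coded jumps.
import numpy as np
from sentence_transformers import SentenceTransformer
from scipy.cluster.hierarchy import linkage, fcluster

model = SentenceTransformer("thenlper/gte-large")  # assumed id for the gte-large model named in the text

responses = ["build a dollhouse", "exercise weight", "break a window", "break a lock"]
emb = model.encode(responses, normalize_embeddings=True)        # (n, 1024), unit-norm vectors

# Category assignment via Ward hierarchical clustering over the response embeddings.
Z = linkage(emb, method="ward")
cats = fcluster(Z, t=1.0, criterion="distance")                 # distance threshold: assumption

# Semantic similarity between successive responses (dot product of unit vectors).
ss = np.sum(emb[1:] * emb[:-1], axis=1)

jump_cat = cats[1:] != cats[:-1]
jump_ss = ss < 0.7                                              # SS threshold: assumption
jump = jump_cat & jump_ss
print(jump.astype(int))
```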
Sentence embeddings are used for assigning response categories and evaluating semantic similarities, which respectively give jumpcat and jumpSS. Their logical AND gives jump. Jump Profiles and Participant Clustering: Using the jump signals, we determined a jump profile for each response sequence as the cumulative count of jumps at each response (for example, a response sequence of length 4 with jumps [1, 0, 1] will have a jump profile [1, 1, 2]). Different human participants produced different numbers of responses, so we considered just the first 18 responses from each sequence (the median human sequence length), excluding shorter sequences. The remaining profiles (AUT brick: 97; AUT paperclip: 103; VFT: 195) were clustered using KMeans (sklearn KMeans) with K-Means++ initialization (Arthur, Vassilvitskii, and others 2007) per task. LLM jump profiles were assigned to the closest human cluster. Evaluating Response Creativity: We used Open Creativity Scoring ocsai-chatgpt (Organisciak et al. 2023) to score response originality in AUT brick and paperclip. Results Jump Signal Reliability and Validity: We first test the reliability and validity of the jump signal. For reliability, we measured the test-retest correlation of the number of jumps for AUT brick and paperclip response sequences from 81 participants (who had >=18 responses in both). We found a positive Pearson correlation of r=0.42 (p<0.001, CI=[0.22, 0.58]), which is high considering that the test-retest and alternate-form reliability of AUT product creativity seldom exceeds r=0.5. For validity, we test for agreement with past findings in humans. In keeping with Hass (2017), who showed more jumping in AUT than VFT, we found significantly more jumps in AUT brick and paperclip than in VFT (both p<0.001). Moreover, in line with Hass (2017), Hills et al. (2012), we also found greater mean response times for jump = 1 than jump = 0 (p<0.001). Participant Clusters: Based on the literature and clustering elbow plots, we assigned human jump profiles to 3 clusters for each task (Figure 3A). These map to different levels of flexibility in the response sequences\u2014cluster 1: persistent profiles (7-12 jumps for AUT and 1-6 jumps for VFT); cluster 2: flexible profiles (15-18 jumps for AUT and 6-11 jumps for VFT); and cluster 3: mixed profiles (12-16 jumps for AUT and 4 jumps for VFT). The different numbers of jumps in AUT and VFT are clear, where the flexible cluster in VFT closely resembles the persistent cluster in AUTs. Thus the classifications are task-relative. The proportion of participants assigned to each cluster further reinforces that people are more flexible in AUT and more persistent in VFT. LLM Assignments: The LLM jump profiles were assigned to one of the 3 human clusters with proportions of assignment shown in Figure 3B. Different models exhibited different biases towards persistence or flexibility in the AUTs. For example, in AUT brick, Upstage, GPT, Claude and Palm are mostly flexible while Llama and Gemini are mostly persistent. However, models were less consistent across the two AUTs. In AUT paperclip, while Upstage and GPT remained mostly consistent in their assignments, Llama and Gemini switched from persistent to flexible. This is also evident in the test-retest correlation, which was lower than for humans (r=0.22, p<0.001, CI=[0.12, 0.31]). Taken together, we find that LLMs are not significantly different from humans in number of jumps on AUTs (p>0.05 in both).
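A minimal sketch of the jump-profile clustering step described above (cumulative jump counts over 18-response sequences, KMeans with k-means++ initialisation, LLM profiles assigned to the nearest human cluster); the binary jump sequences here are random stand-ins for the real data.

```python
# Sketch: build cumulative jump profiles and cluster them with KMeans (k-means++ init).
import numpy as np
from sklearn.cluster import KMeans

def jump_profile(jumps, length=18):
    """Cumulative jump count over the first `length` responses (length - 1 jump flags)."""
    return np.cumsum(jumps[: length - 1])

# Toy binary jump sequences (1 = jump), 17 flags per 18-response sequence.
rng = np.random.default_rng(0)
human_jumps = rng.integers(0, 2, size=(100, 17))
llm_jumps = rng.integers(0, 2, size=(40, 17))

human_profiles = np.array([jump_profile(j) for j in human_jumps])
kmeans = KMeans(n_clusters=3, init="k-means++", n_init=10, random_state=0).fit(human_profiles)

# LLM profiles are assigned to the closest human cluster centre.
llm_profiles = np.array([jump_profile(j) for j in llm_jumps])
llm_clusters = kmeans.predict(llm_profiles)
print(np.bincount(llm_clusters, minlength=3))
```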
However, on the VFT, LLMs were overwhelmingly persistent, and significantly more persistent than humans (p<0.001). Comparing the human and model cluster assignment percentages, we observe that Mistral and NousResearch models closely resemble the human distribution in AUT brick; Gemini model does so in AUT paperclip; but no model resembles humans for VFT. Temperature neither influenced cluster assignments nor number of jumps in AUTs (p>0.05). In VFT, temperature did influence jumping (p<0.001), but did not influence cluster assignment. This is consistent with previous research suggesting no role of temperature in flexibility (Stevenson et al. 2022) and suggests that model responses cannot be easily manipulated parametrically. Relationship to Creativity: We calculated the mean originality ratings in each response sequence. For humans, mean originality was similar for persistent and flexible clusters in both AUTs (both p>0.05). Mean originality did not predict the number of jumps in AUT brick (p>0.05), and weakly predicted jumps in AUT paperclip (0.01
0.7), current VLMs would need to surpass a trillion parameters. This calls for an alternative strategy to just scaling up current VLMs. For our second dataset, NaturalTypeIdent, we manually curated 50 reference-images from KaggleAnimalImages (Banerjee, 2023). We then followed the exact same procedure for creating data-type images from the reference-images. However, all generative steps were replaced by a refined, deduplicated web-retrieval step for mining style and semantic data-type images. This provides an in-the-wild, naturally occurring testbed, thereby complementing the precisely controlled SyntheticTypeIdent dataset. Since we can procure appropriate images for only 25 data-types (we omit MULTI DIFFERENT and TIGER STRIPES), NaturalTypeIdent only contains 1,250 samples. Importantly, we manually verified both datasets to ensure that the target data-type for each image was the most prominent data-type reflected in it, enabling a careful study between models without interference between data-types. For details about dataset creation refer to the Appendix. 4 BENCHMARKING VLMS ON DATA-TYPE IDENTIFICATION 4.1 EXPERIMENTAL SETUP We evaluated 39 VLMs from 13 model families, with sizes ranging from 100M to 80B parameters, across two groups: discriminative, contrastively-trained VLMs (e.g., CLIP) which we refer to as C-VLMs, and generative, auto-regressively trained VLMs (e.g., OpenFlamingo) which we refer to as large multi-modal models (LMMs) (Li, 2023). Specifically, from the C-VLM group we evaluated CLIP (Radford et al., 2021), BLIP-2-ITM (Li et al., 2023c), and CoCa (Yu et al., 2022); in the LMM group we tested Fromage (Koh et al., 2023b), GILL (Koh et al., 2023a), Multimodal-GPT (Gong et al., 2023), OpenFlamingo (Awadalla et al., 2023), Otter (Li et al., 2023a), MPlugOwl (Ye et al., 2023), LLaVA (Liu et al., 2023a), BLIP-2-LLM (Li et al., 2023c), InstructBLIP (Dai et al., 2023), and IDEFICS (Lauren\u00e7on et al., 2023). We tested all VLMs on correctly classifying the target data-type for each evaluation image, in a zero-shot manner. We evaluated C-VLMs by computing the cosine-similarity of the image embedding and the text embedding of the specific data-type description, e.g., \u201cA blurred image of an animal.\u201d (see Appendix for full list). For a fair comparison, we evaluated LMMs by log-likelihood scoring (Dai et al., 2023; Li et al., 2023b) each of the 27 data-type description texts, with the prompt: \u201c Q: Describe the image. A: \u201d, replacing by the corresponding text description for a particular data-type. We quantified model performance using informedness, I_k = TPR_k \u2212 FPR_k on data-type k, which in addition to the true positive rate (TPR, i.e., accuracy) accounts for the false positive rate (FPR). We summarized model performance as mean informedness across data-types, \u00b5_I = \u27e8I_k\u27e9_k. See Appendix for evaluation details. Figure 4: Average performance across data-types on SyntheticTypeIdent.
VLMs perform reasonably on style and semantic data-types (e.g., PENCIL SKETCH, CARTOON) and show weak results on pixel and geometric data-types (e.g., GAUSSIAN NOISE, HIGH CONTRAST). Chance-level at 0. 4.2 VLMS STRUGGLE WITH IDENTIFYING DATA-TYPES Our evaluations reveal that all tested VLMs exhibit limited performance on both SyntheticTypeIdent and NaturalTypeIdent (Fig. 3A). We found that C-VLMs performed better than LMMs, even though the latter are more recent and orders of magnitude larger. The best C-VLM achieved mean informedness \u00b5_I=(0.47, 0.50) while its LMM counterpart achieved \u00b5_I=(0.22, 0.25) on SyntheticTypeIdent and NaturalTypeIdent, respectively. As a control and for direct comparison, we also tested models on animal identification on SyntheticTypeIdent. As expected, the performance on this semantic recognition task is very good, achieving a mean informedness across models of 0.89. This confirms quantitatively that the performance on identifying data-types (detailed plots in Appendix) is substantially worse than on object recognition. We further note three key findings from our evaluations: LMMs, a downgrade? Surprisingly, LMMs consistently underperform C-VLMs, despite using LLMs as text models, compared to the smaller text encoders in C-VLMs. Notably, the largest LMM (IDEFICS, 80B parameters) substantially underperforms an orders-of-magnitude smaller CLIP-RN50 (100M parameters). The rich language grounding that LLMs inherit from extensive real-world text training seemingly does not provide benefits for identifying data-types. This result challenges the prevailing notion that strong language model priors can improve fine-grained understanding in VLMs (Cascante-Bonilla et al., 2023; Doveh et al., 2023; Yuksekgonul et al., 2022; Wang et al., 2023). We hypothesise two plausible causes for this performance drop to be studied in detail by future work: (1) Weak alignment between the vision encoder and LLM might degrade the real-world symbolic grounding innate to each independently (Bavishi et al., 2023). (2) The Discriminative-Generative gap might be at play, i.e., discriminating between answers is easier than generating one (Vapnik, 1999; Ng & Jordan, 2001). Both suggest that C-VLM contrastive objectives might better equip them for data-type identification than LMM auto-regressive objectives (Liu et al., 2023b). Weak scaling behaviour. Interestingly, within the C-VLM and LMM groups, our results suggest weak scaling effects. We analysed this quantitatively by fitting a power-law (Alabdulmohsin et al., 2022; Henighan et al., 2020; Cherti et al., 2023) on the observed mean informedness vs. model scale relationship for CLIP (C-VLM) and IDEFICS (LMM), since they span the widest parameter sizes within a model family. Fig. 3B confirms the weak scaling law, indicating a severe limitation for current VLMs: to achieve a performance practicable for data-type identification (\u00b5_I>0.7), current models would need to surpass a trillion parameters. This calls into question the effects of model scaling, and whether alternate strategies are required to enhance their performance. Stark performance differences between simple and complex data-types. To get a finer-grained understanding of the overall model performance (Fig. 4) we break down the per-data-type averaged mean informedness across all models. We find that while VLMs are reasonably good at identifying style and semantic data-types, they falter systematically on pixel and geometric data-types.
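As a concrete illustration of the evaluation protocol above, a small sketch of cosine-similarity-based data-type prediction and the informedness metric I_k = TPR_k - FPR_k; the embeddings are random stand-ins, so the mean informedness lands near chance.

```python
# Sketch: informedness (TPR - FPR) per data-type from predicted vs. true labels.
import numpy as np

def informedness(y_true, y_pred, k):
    """I_k = TPR_k - FPR_k for data-type k, treating k vs. rest as a binary problem."""
    pos = y_true == k
    neg = ~pos
    tpr = np.mean(y_pred[pos] == k) if pos.any() else 0.0
    fpr = np.mean(y_pred[neg] == k) if neg.any() else 0.0
    return tpr - fpr

# C-VLM-style prediction: argmax cosine similarity between image and data-type text embeddings.
rng = np.random.default_rng(0)
img = rng.normal(size=(1350, 512))
img /= np.linalg.norm(img, axis=1, keepdims=True)
txt = rng.normal(size=(27, 512))
txt /= np.linalg.norm(txt, axis=1, keepdims=True)

y_true = np.repeat(np.arange(27), 50)          # 27 data-types, 50 images each
y_pred = (img @ txt.T).argmax(axis=1)

mean_I = np.mean([informedness(y_true, y_pred, k) for k in range(27)])
print(mean_I)  # ~0 for random embeddings, i.e. chance level
```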
For the majority of data-types, even the best-performing models struggle to surpass chance-level performance, and no single model consistently outperforms others across a majority of data-types. Instead, multiple models each excel in identifying just a few specific data-types. This reveals inherent biases in the pre-training procedures of VLMs, limiting the desired generality of foundation models. Figure 5: What does CLIP\u2019s image embedding space encode? CLIP-RN50\u2019s image embeddings, colour-coded by ground-truth semantic concept (left) and data-type (right), reveal its pronounced affinity for recognising semantic concepts, while being largely invariant to data-type distinctions. 5 UNDERSTANDING WHY VLMS UNDERPERFORM IN IDENTIFYING DATA-TYPES We next investigate two plausible reasons for the sub-par performance of VLMs in identifying data-types: (1) their image embeddings lack data-type discriminative information, and (2) their pretraining datasets, despite the enormous sizes, lack sufficient data-type specific information, limiting models from learning data-type discriminative features. We probe both candidate reasons in detail, performing a case study with CLIP, and find good evidence for both of them. Due to CLIP being a prototypical C-VLM, and the widespread adoption of its vision encoders in LMMs, we suggest that our findings should be broadly applicable. Reason 1: Peeking into CLIP\u2019s embedding space. We visualized the CLIP image embeddings of SyntheticTypeIdent using t-SNE (Van der Maaten & Hinton, 2008). Colour-coding the embeddings by (1) the image\u2019s semantic concept, i.e., the animal type (Fig. 5 left), and (2) the image\u2019s target data-type (Fig. 5 right), uncovered an interesting dichotomy: while distinct embedding clusters emerge based on semantic concepts (animals), most data-types are not clearly demarcated (see Appendix for KNN and linear-probe analysis). This suggests that CLIP\u2019s vision encoder is somewhat invariant to data-types, despite it not being explicitly trained to be so (only random-resized cropping was used as training data-augmentation, discussion in Appendix). As most C-VLMs and LMMs use CLIP image embeddings, this potentially explains the poor performance of all VLMs on identifying data-types. We further note that the embeddings of only three data-types are closely clustered (TATTOO, PATCH AND RESHUFFLE, and TYPOGRAPHIC), yet, these are precisely the embeddings which are not directly semantically distinguishable\u2014this suggests that CLIP might not encode semantic and data-type information compositionally but rather sacrifices one (data-type) over the other (semantics). This offers a consistent explanation for why CLIP models are so effectively robust at classifying semantic content (Fang et al., 2022; Shi et al., 2023; Nguyen et al., 2022; Santurkar et al., 2022; Ramanujan et al., 2023) but fail at solving the complementary problem of data-type identification. Reason 2: Peeking into VLM pre-training datasets. Fig. 4 revealed that VLMs fare well on some complex data-types while falling short on simple ones. An intuitive explanation is pre-training dataset imbalance: an abundance of samples aligning with style data-types (e.g., CARTOON, PENCIL SKETCH) and a paucity of simple data-types (e.g., GAUSSIAN NOISE, LEFT ROTATE). To confirm this quantitatively, we analysed LAION-2B-en\u2014CLIP\u2019s pre-training dataset.
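Returning to the Reason 1 inspection above, a minimal sketch of the t-SNE analysis (projecting the image embeddings once and colouring the same 2-D projection by semantic concept and by data-type); random features stand in for CLIP-RN50 embeddings.

```python
# Sketch: project image embeddings with t-SNE and colour-code by concept vs. data-type.
import numpy as np
import matplotlib.pyplot as plt
from sklearn.manifold import TSNE

rng = np.random.default_rng(0)
emb = rng.normal(size=(1350, 1024))                 # stand-in for CLIP image embeddings
animal_id = np.tile(np.arange(10), 135)             # semantic concept labels (stand-in)
datatype_id = np.repeat(np.arange(27), 50)          # data-type labels (stand-in)

xy = TSNE(n_components=2, init="pca", random_state=0).fit_transform(emb)

fig, axes = plt.subplots(1, 2, figsize=(10, 4))
axes[0].scatter(xy[:, 0], xy[:, 1], c=animal_id, s=4, cmap="tab10")
axes[0].set_title("coloured by semantic concept")
axes[1].scatter(xy[:, 0], xy[:, 1], c=datatype_id, s=4, cmap="tab20")
axes[1].set_title("coloured by data-type")
plt.show()
```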
We first counted and retrieved all samples containing representative data-type keywords in the captions (e.g., \u201cblurry\u201d; see Appendix for details and a semantic search-based analysis). As pure keyword frequency might not account for mis-aligned image-caption pairs, we estimated an alignment probability\u2014the fraction of retrieved samples where the image aptly captures the data-type concept\u2014by manually labeling 100 random samples per data-type for data-type accuracy. Finally, we computed an abundancy score as the product of text-frequency and alignment probability. Correlating this abundancy score with averaged model performance across data-types revealed strong positive associations (Spearman rank correlation, r=0.557 for SyntheticTypeIdent; r=0.489 for NaturalTypeIdent). The association is even stronger on SyntheticTypeIdent when correlating abundancy score with CLIP-model averaged performance (r=0.606), suggesting that the varying model performance across data-types can be explained by the constraints of their pre-training data distribution. 6 IMPROVING VLMS TO IDENTIFY DATA-TYPES Figure 6: Few-shot training-free adaptation methods fail. Both TIP-Adapter with CLIP (top) and in-context learning with Otter (bottom) fail to substantially improve VLM data-type identification. Having understood some factors limiting the performance of VLMs, we experiment with methods using data-type information-rich samples to improve them. Here, we investigate CLIP (C-VLM) and Otter (LMM) as two representative models. 6.1 FEW-SHOT TRAINING-FREE ADAPTATION DOES NOT HELP Can few-shot examples boost performance without updating model weights, using in-context learning (Dong et al., 2022; Brown et al., 2020) or training-free adapters (Zhang et al., 2021; Udandarao et al., 2022)? We answer next. CLIP TIP-Adapter. We test the TIP-Adapter (Zhang et al., 2021) framework with CLIP, using two few-shot example selection strategies: Random (selecting examples with random animals) and SameAnimal (selecting examples with the same animal as the test image). We evaluate 1, 2, 4, 8, 16, 32, 48 shots with RN50 and ViT-L-14 vision encoders. We found few-shot adaptation degraded performance across all settings (see Fig. 6a). This presumably originates from TIP-Adapter leveraging semantic similarities in CLIP\u2019s image embedding space, which lacks information to disambiguate between data-types (see Fig. 5). Hence, TIP-Adapter cannot capture any information discriminative across data-types but rather exploits semantic similarities between concepts, which is detrimental for our task. Otter In-context Learning.
We explored various in-context example selection strategies and found that selecting n examples with one whose data-type matched the target of the test sample and the other n\u22121 chosen randomly worked best\u2014we evaluate n=2,5,15 examples on the Random and SameAnimal strategies, using LLaMA-7B (Touvron et al., 2023) or MPT-7B (MosaicML, 2023) as LLM-backbones (see Appendix for details and in-context scaling results with LLaVA). Surprisingly, we found an initial uptick in performance with n=2, followed by a decline as in-context examples increased (see Fig. 6b). We attribute this to Otter overfitting on its in-context examples, i.e., simply predicting a random data-type from within the in-context examples. Since chance-level performance also increases with fewer in-context examples, this could explain improved performance with n=2. We conclude that in-context learning does not enhance Otter\u2019s ability to identify data-types. Takeaways. Our empirical results strongly indicate that training-free few-shot approaches fail to enhance VLMs for identifying data-types, likely because VLMs lack data-type discriminative information in their embeddings. Rather, an intensive training procedure to infuse data-type knowledge might be more promising. 6.2 FINE-TUNING WITH APPROPRIATE DATA-MIXTURES IMPROVES PERFORMANCE Data-mixtures. We created a specialised dataset, TeDaTy (Teaching Data-Types), incorporating data-type information into images and text-captions. We construct training images, sourced from COCO (Lin et al., 2014), ImageNet (Deng et al., 2009), PACS (Li et al., 2017), and DomainNet (Peng et al., 2019), by applying our data-type transformation functions and adapting the captions accordingly, e.g., \u201cThis is a cartoon image of a dog.\u201d. TeDaTy comprises 8 in-distribution (ID) data-types, holding out 19 for out-of-distribution (OOD) generalisation tests (see Appendix for details). To isolate effects of data-distributions, we experiment with three data-mixtures: (1) TeDaTy, (2) TeDaTy+COCO, and (3) TeDaTy+COCO+IN100k (sub-sampled from ImageNet). We also fine-tune only on COCO as a control to disentangle gains from fine-tuning and specific data-mixtures.
Table 1: CLIP ViT-B-32 fine-tuning results on TypeIdent datasets with different data-mixtures (each cell: ID-I / OOD-I).
| Data-Mixture | SyntheticTypeIdent Full | SyntheticTypeIdent Freeze-Image | SyntheticTypeIdent Freeze-Text | NaturalTypeIdent Full | NaturalTypeIdent Freeze-Image | NaturalTypeIdent Freeze-Text |
| Zero-shot CLIP | 0.451 / 0.457 | 0.451 / 0.457 | 0.451 / 0.457 | 0.440 / 0.473 | 0.440 / 0.473 | 0.440 / 0.473 |
| COCO (control) | 0.451 / 0.468 | 0.354 / 0.465 | 0.488 / 0.451 | 0.494 / 0.507 | 0.451 / 0.500 | 0.457 / 0.473 |
| TeDaTy | 0.669 / 0.392 | 0.777 / 0.469 | 0.780 / 0.370 | 0.691 / 0.412 | 0.654 / 0.474 | 0.646 / 0.379 |
| TeDaTy + COCO | 0.646 / 0.394 | 0.717 / 0.465 | 0.631 / 0.371 | 0.629 / 0.400 | 0.680 / 0.470 | 0.574 / 0.356 |
| TeDaTy + COCO + IN100k | 0.600 / 0.383 | 0.700 / 0.469 | 0.586 / 0.354 | 0.557 / 0.381 | 0.634 / 0.456 | 0.471 / 0.323 |
Table 2: Otter-LLaMA-7B fine-tuning results with different data-mixtures (each cell: ID-I / OOD-I).
| Data-Mixture | SyntheticTypeIdent | NaturalTypeIdent |
| Zero-shot Otter | 0.051 / 0.180 | 0.102 / 0.256 |
| COCO (control) | 0.020 / 0.246 | 0.085 / 0.315 |
| TeDaTy | 0.088 / 0.061 | 0.111 / 0.111 |
| TeDaTy + COCO | 0.106 / 0.168 | 0.171 / 0.276 |
| TeDaTy + COCO + IN100k | 0.120 / 0.166 | 0.166 / 0.261 |
Results. Fine-tuning CLIP improved performance on the ID data-types for all TeDaTy mixtures (Tab. 1). However, COCO-only fine-tuning degraded ID-performance, highlighting the importance of incorporating key data-type information with TeDaTy.
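A small sketch of the TeDaTy-style sample construction described above (apply a data-type transformation function and rewrite the caption to name the data-type); the transformation set, caption template and image path are simplified stand-ins, not the released pipeline.

```python
# Sketch: build a (transformed image, data-type caption) training pair, TeDaTy-style.
from PIL import Image, ImageFilter

def left_rotate(img):
    return img.rotate(90, expand=True)

def defocus_blur(img):
    return img.filter(ImageFilter.GaussianBlur(radius=4))

def low_brightness(img):
    return Image.eval(img, lambda px: px // 3)

TRANSFORMS = {
    "left-rotated": left_rotate,
    "defocus-blurred": defocus_blur,
    "low-brightness": low_brightness,
}

def make_sample(image_path, object_name, data_type):
    img = TRANSFORMS[data_type](Image.open(image_path).convert("RGB"))
    caption = f"This is a {data_type} image of a {object_name}."
    return img, caption

# "dog.jpg" is a placeholder path for a source image (e.g., from COCO or ImageNet).
img, caption = make_sample("dog.jpg", "dog", "defocus-blurred")
print(caption)
```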
Freezing the vision-encoder while fine-tuning provided large ID-boosts and surprisingly even improved OOD-performance. Freezing the text-encoder improved ID-performance but degraded OOD-performance, likely because of large gradients from only updating the vision-encoder. This corroborates previous CLIP-tuning studies (Zhai et al., 2022). Transfer to Otter. To fine-tune Otter, we kept the vision encoder frozen (the best CLIP fine-tuning strategy) and tuned only the perceiver resampler, cross-attention and embedding layers. We found fine-tuning with all TeDaTy variants improved ID-performance up to two-fold, while preserving OOD-performance (see Tab. 2). Fine-tuning only with COCO degrades ID-performance, reinforcing the importance of a dataset that captures data-type knowledge. Takeaways. Our results suggest that training with data-mixtures explicitly inducing data-type information is a promising direction for improving VLM data-type identification.
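A schematic sketch of the freeze-the-vision-encoder fine-tuning setup reported above; it assumes the open_clip interface and an illustrative pretrained tag, and omits the contrastive training loop over TeDaTy-style image-caption batches.

```python
# Sketch: fine-tune CLIP with the vision encoder frozen (the best-performing setup above).
import torch
import open_clip

model, _, preprocess = open_clip.create_model_and_transforms(
    "ViT-B-32", pretrained="laion2b_s34b_b79k"   # pretrained tag is an assumption
)

for p in model.visual.parameters():              # freeze the image tower
    p.requires_grad = False

trainable = [p for p in model.parameters() if p.requires_grad]
optimizer = torch.optim.AdamW(trainable, lr=1e-5, weight_decay=0.1)
# Training loop (contrastive loss over data-type captioned batches) omitted.
```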
+ },
+ {
+ "url": "http://arxiv.org/abs/2211.16198v4",
+ "title": "SuS-X: Training-Free Name-Only Transfer of Vision-Language Models",
+ "abstract": "Contrastive Language-Image Pre-training (CLIP) has emerged as a simple yet\neffective way to train large-scale vision-language models. CLIP demonstrates\nimpressive zero-shot classification and retrieval on diverse downstream tasks.\nHowever, to leverage its full potential, fine-tuning still appears to be\nnecessary. Fine-tuning the entire CLIP model can be resource-intensive and\nunstable. Moreover, recent methods that aim to circumvent this need for\nfine-tuning still require access to images from the target distribution. In\nthis paper, we pursue a different approach and explore the regime of\ntraining-free \"name-only transfer\" in which the only knowledge we possess about\nthe downstream task comprises the names of downstream target categories. We\npropose a novel method, SuS-X, consisting of two key building blocks -- SuS and\nTIP-X, that requires neither intensive fine-tuning nor costly labelled data.\nSuS-X achieves state-of-the-art zero-shot classification results on 19\nbenchmark datasets. We further show the utility of TIP-X in the training-free\nfew-shot setting, where we again achieve state-of-the-art results over strong\ntraining-free baselines. Code is available at\nhttps://github.com/vishaal27/SuS-X.",
+ "authors": "Vishaal Udandarao, Ankush Gupta, Samuel Albanie",
+ "published": "2022-11-28",
+ "updated": "2023-08-15",
+ "primary_cat": "cs.CV",
+ "cats": [
+ "cs.CV",
+ "cs.CL",
+ "cs.MM"
+ ],
+ "main_content": "Introduction Vision-language pre-training has taken the machine learning community by storm. A broad range of visionlanguage models (VLMs) [61, 46, 77, 1, 41] exhibiting exceptional transfer on tasks like classification [84, 88], cross-modal retrieval [71, 2] and segmentation [67, 30] have emerged. These models are now the de facto standard for downstream task transfer in the field of computer vision. One such prominent model, CLIP [61], is trained on a web-scale corpus of 400M image-text pairs using a contrastive loss that maximises the similarities of paired imagetext samples. CLIP pioneered the notion of zero-shot transfer in the vision-language setting1: classification on unseen datasets. For a given classification task, CLIP con1This idea of zero-shot transfer is distinct from the traditional zero-shot Image Text Prediction Support Set constructed only from target category names (SuS) Training-Free Adapter (TIP-X) Pre-trained VLM Figure 1: Training-free name-only transfer. We propose SuS-X, a framework for enhancing the zero-shot transfer abilities of VLMs like CLIP [61], BLIP [46] and TCL [76], without training. To achieve this, we propose a novel method TIP-X, which adapts these VLMs using a curated support set (SuS) that is not drawn from the target distribution. Our SuS leverages one key piece of information about the task at hand: the names of the target categories. verts the class labels into classwise textual prompts. An example of such a prompt is \u201cA photo of a .\u201d, where is replaced by the ground-truth text label for each class. It then computes similarities between the query image and text prompts of all classes. The class whose prompt yields the maximal similarity with the query image is then chosen as the predicted label. The zero-shot performance of CLIP is however limited by its pre-training distribution [27, 64, 24, 55]. If the downstream dataset distribution diverges too strongly from the distribution of images seen during pretraining, CLIP\u2019s zeroshot performance drastically drops [24]. To mitigate this, several lines of work propose to adapt CLIP on diverse downstream tasks\u2014Tab. 1 provides a brief summary of these methods. Most of them employ fine-tuning on either labelled or unlabelled subsets of data from the target task. However, fine-tuning such an over-parameterised model can be unstable and lead to overfitting [17, 28]. Furthermore, having access to the true distribution of the target task can be prohibitive in data-scarce environments [13, 4, 42] and online learning settings [16, 69]. To alleviate these issues, in this paper, we aim to adapt classification setup introduced by Lampert et al. [45] in which the task is to generalise to classes not seen during training. arXiv:2211.16198v4 [cs.CV] 15 Aug 2023 \fTable 1: Taxonomy of CLIP adaptation methods for downstream classification. We underline the Zero-Shot CLIP model to signify that it is the base model that all others build on top of. \u2217This method considers access to all test-set samples simultaneously, hence we still consider it zero-shot. \u2020This method additionally uses class hierarchy maps. 
Method Does not require Does not require Does not require training labelled data target data distribution Few-shot fine-tuning methods LP-CLIP [61] \u2717 \u2717 \u2717 CoOp [88] \u2717 \u2717 \u2717 PLOT [12] \u2717 \u2717 \u2717 LASP [10] \u2717 \u2717 \u2717 SoftCPT [21] \u2717 \u2717 \u2717 VT-CLIP [83] \u2717 \u2717 \u2717 VPT [19] \u2717 \u2717 \u2717 ProDA [49] \u2717 \u2717 \u2717 CoCoOp [87] \u2717 \u2717 \u2717 CLIP-Adapter [28] \u2717 \u2717 \u2717 Intermediate methods TIP-Adapter [84] \u2713 \u2717 \u2717 UPL [40] \u2717 \u2713 \u2717 SVL-Adapter [58] \u2717 \u2713 \u2717 TPT [52] \u2717 \u2713 \u2713 CLIP+SYN [36] \u2717 \u2713 \u2713 CaFo [82] \u2717 \u2713 \u2713 Zero-shot methods Zero-Shot CLIP [61] \u2713 \u2713 \u2713 CALIP [34] \u2713 \u2713 \u2713 CLIP+DN [89]\u2217 \u2713 \u2713 \u2713 Training-free name-only transfer methods CuPL [60] \u2713 \u2713 \u2713 VisDesc [53] \u2713 \u2713 \u2713 CHiLS [57]\u2020 \u2713 \u2713 \u2713 SuS-X (ours) \u2713 \u2713 \u2713 CLIP and other VLMs for downstream classification in a name-only (requires only category names2, but no samples from the target task) and training-free fashion. We propose SuS-X (see Fig. 1), consisting of two novel building blocks: (i) SuS (Support Sets), our dynamic support set curation strategy that forgoes the need for samples from the target task, and (ii) TIP-X, our main framework for performing zero-shot classification while being trainingfree. For a given downstream task, we first curate a support set by leveraging the task category labels, either in a parametric manner i.e., generating images from large-scale text-to-image models (e.g., Stable Diffusion [63]) or nonparametric manner i.e., retrieving real-world images from a large vision-language data bank (e.g., LAION-5B [65]). We then use the curated support set as a proxy few-shot dataset to inform our downstream predictions using TIP-X, in a similar vein to recent few-shot adaptation methods [28, 84]. Our extensive experiments show that SuS-X outperforms zero-shot methods on 19 benchmark datasets across three VLMs, namely, CLIP, BLIP and TCL by 4.60%, 5.97% and 11.37% absolute average accuracy respectively. We further extend the TIP-X framework to the few-shot regime, outperforming previous SoTA methods in the training-free domain. Our main contributions are three-fold: (1) We propose SuS-X, a SoTA method in the training-free name-only 2We use category and class interchangeably in this paper. transfer setting for downstream adaptation of VLMs, (2) We present SuS, an effective strategy for curating support sets using parametric or non-parametric methods to mitigate the lack of data samples available from the target task distribution, and (3) We propose TIP-X, a novel training-free method for adapting VLMs to downstream classification in both the name-only transfer and few-shot regimes. 2. Related Work Vision-Language (VL) Foundation Models. In the past few years, there has been a Cambrian explosion in largescale VL foundation models [6]. In a seminal work, Radford et al. [61] introduced CLIP, a large VLM trained on a massive corpus (400M image-text pairs acquired from the web) that exhibits strong downstream visual task performance. The introduction of CLIP inspired further development of VLMs [46, 1, 41, 20, 85, 79, 76, 11, 74, 29, 31, 47, 50, 78], each pre-trained on web-scale datasets to learn joint image-text representations. 
These representations can then be applied to tackle downstream tasks like semantic segmentation [67, 30], object detection [33, 23], image captioning [54, 3] and generative modelling [63, 62], In this work, we adapt such VLMs in a training-free setting to diverse downstream tasks. Adaptation of VL models. The paradigm shift introduced by CLIP is its ability to do image classification in a zero\fshot transfer setting [61]. In this setup, none of the target dataset classes are known a-priori and the task is to adapt implicitly at inference time to a given dataset. Since CLIP\u2019s training objective drives it to assign appropriate similarities to image-text pairs, it acquires the ability to perform zeroshot classification directly. Inspired by CLIP\u2019s zero-shot success, further work has sought to improve upon its performance. In Tab. 1, we characterise some of these methods along three major axes: (i) if the method requires training, (ii) if the method requires labelled samples from the target task, and (iii) if the method requires samples from the target task distribution3. In this work, we focus on the training-free name-only transfer regime\u2014our goal is to adapt VLMs to target tasks without explicit training or access to samples from the target distribution. Instead, we assume access only to category names of target tasks. This formulation was recently considered for semantic segmentation, where it was called name-only transfer [66]\u2014we likewise adopt this terminology. To the best of our knowledge, only two other concurrent approaches, CuPL [60] and VisDesc [53], operate in this regime. They use pre-trained language models to enhance textual prompts for zero-shot classification. By contrast, SuS-X pursues a support set curation strategy to adapt VLMs using knowledge of category names. These approaches are complementary, and we find that they can be productively combined. Two other related works operating purely in the zero-shot setting are: (1) CALIP [34], which uses parameter-free attention on image-text features, and (2) CLIP+DN [89], which uses distribution normalisation. We compare with these four baselines in Sec. 4. 3. SuS-X: Training-Free Name-Only Transfer We describe the two main building blocks of SuS-X\u2014 (1) Support Set (SuS) construction, and (2) training-free inference using our novel TIP-X method. Fig. 2 depicts our overall training-free name-only transfer framework. 3.1. SuS Construction We follow recent adaptation methods [84, 28] that use a small collection of labelled images to provide visual information to CLIP. However, differently from these methods, rather than accessing labelled images from the target distribution, we propose two methods (described next) to construct such a support set (SuS) without such access. (I) Stable Diffusion Generation. Our first method leverages the powerful text-to-image generation model, Stable Diffusion [63]. We employ specific prompting strategies for 3Note that (iii) subsumes (ii). (ii) refers to access to labelled data samples from the target dataset whereas (iii) refers to a more general setting where the samples from the target dataset can be unlabelled. We distinguish between the two for clarity. generating salient and informative support images. Concretely, given a set of downstream textual class labels, T = {t1, t2, . . . , tC}, where C denotes the number of categories, we prompt Stable Diffusion to generate N images per class. 
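A minimal sketch of this parametric SuS construction step, assuming the Hugging Face diffusers interface and an illustrative Stable Diffusion checkpoint; prompts follow the default photo-style template discussed next, with N and the class names as placeholders.

```python
# Sketch: build a Stable Diffusion support set (SuS-SD) of N images per class name.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16   # checkpoint id is an assumption
).to("cuda")

class_names = ["dog", "cat", "goldfish"]      # downstream category names (the only task knowledge)
N = 4                                         # support images per class

support_set = []                              # list of (image, class_index) pairs
for label, name in enumerate(class_names):
    prompt = f"A photo of a {name}."          # default photo-style prompt; CuPL prompts also possible
    images = pipe(prompt, num_images_per_prompt=N).images
    support_set += [(img, label) for img in images]
```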
In this way, we construct our support set of size NC, with each image having its associated class label. By default, we prompt Stable Diffusion using the original CLIP prompts, i.e., \u201cA photo of a .\u201d, where is the class text label. To further diversify the generation process, we follow CuPL [60] to first generate customised textual prompts for each class by prompting GPT-3 [8] to output descriptions of the particular class. We then feed this customised set of prompts output by GPT3 into Stable Diffusion for generating images. For example, to generate images from the \u201cdog\u201d class, we prompt GPT-3 to describe \u201cdogs\u201d, and then prompt Stable Diffusion with the resulting descriptions. In section 4.4, we compare the performance of the default (called Photo) and this augmented prompting procedure (called CuPL). Unless otherwise specified, all our experiments with Stable Diffusion support sets use the CuPL strategy. (II) LAION-5B Retrieval. Our second method leverages the large-scale vision-language dataset, LAION-5B [65]. It contains 5.85 billion image-text pairs, pre-filtered by CLIP. Using LAION-5B, we retrieve task-specific images using class text prompts for constructing the support set. Concretely, given textual class labels, T = {t1, t2, . . . , tC}, we rank all images in LAION-5B by their CLIP image-text similarity to each text class label ti, where i \u2208[1, C]. We then use the top N image matches as our support set for class i, resulting in an NC-sized support set of images with their associated class labels. Note that curating supporting knowledge by search is a classical technique in computer vision [26] that was recently revisited in the task of semantic segmentation [67]. Here we adapt this idea to the nameonly transfer classification setting. For efficient retrieval, we leverage the approximate nearest neighbour indices released by the authors4. Similar to the Stable Diffusion generation approach, we experiment with both Photo and CuPL prompting strategies for curating our LAION-5B support set (see Sec. 4.4). By default, we use Photo prompting for all our experiments with LAION-5B support sets. Remark. Note that SuS can be seen as a visual analogue to CuPL [60], where, for each class, we augment VLMs with rich, relevant images, instead of the customised textual descriptions generated in CuPL. 3.2. TIP-X Inference Given our support set from the previous section, our task is to now leverage it in a training-free inference scheme to inform CLIP\u2019s zero-shot predictions. We first briefly review the zero-shot CLIP classification pipeline, discuss the 4https://huggingface.co/datasets/laion/laion5B-index \fImage encoder Test image Class text prompts tiger lion \u2026 A photo of a dog Class Labels f W Support Set (SuS) F \ud83d\udd12Adapt SuS Construction TIP-X inference Prediction 1. Parametric (Stable Diffusion) 2. Non-Parametric (LAION-5B) Class text prompts Class text prompts Generation Retrieval L Zero-shot f W TIP-Adapter f F Prediction L Prediction f F L W KLD TIP-X Prediction SD-SuS LC-SuS Class text prompts Dot product KLD KL-divergence Training-free Image encoder One-hot encoder Text encoder Figure 2: SuS-X for training-free name-only transfer. SuS-X consists of two core building blocks. (1) SuS (top right), a dynamic support set that we construct to infuse visual information into the VLM based only on knowledge of target category names. 
We construct support sets either in a parametric (generating images using Stable Diffusion) or non-parametric (retrieving images from LAION-5B) manner. (2) TIP-X (bottom right), our novel training-free method that leverages image-text distances to compute similarities between the support set and the test images. These similarities act as attention weights for the support set labels, and can directly be combined with the original logits from the VLM for classification. recently proposed TIP-Adapter [84] for training-free adaptation, and highlight a critical shortcoming in its method due to uncalibrated intra-modal embedding distances, which we address in our method\u2014TIP-X. Zero-shot CLIP. For classification into C classes, CLIP converts class labels into text prompts and encodes them with its text encoder. Collectively, the encoded prompt vectors can be interpreted as a classifier weight matrix W \u2208 RC\u00d7d, where d is embedding dimension. For a test set T={y1, y2, ..., yt} comprising t test images, CLIP\u2019s image encoder is applied to produce test image features: fi = CLIPImageEncoder(yi), i \u2208[1, t], fi \u2208Rd f = Concat([f1, f2, . . . , ft]), f \u2208Rt\u00d7d (1) Using W and f, CLIP performs classification by computing zero-shot logits (ZSL) via a dot product: ZSL = fW T (2) TIP-Adapter. Given a CK-sized K-shot labelled dataset D = {x1, x2, . . . , xCK}5 from the target domain, TIPAdapter [84] encodes D using CLIP\u2019s image encoder: Fi = CLIPImageEncoder(xi), i \u2208[1, CK], Fi \u2208Rd F = Concat([F1, F2, . . . , FCK]), F \u2208RCK\u00d7d (3) It then converts each of the few-shot class labels to one-hot vectors L \u2208RCK\u00d7C. Next, it computes an affinity matrix 5Note that a K-shot labelled dataset for C classes has a size CK. to capture the similarities between F and f: A = exp(\u2212\u03b2(1 \u2212fF T )) (4) where \u03b2 is a hyperparameter that modulates \u201csharpness\u201d. Finally, these affinities are used as attention weights over L to produce logits that are blended with ZSL using a hyperparameter, \u03b1: TL = \u03b1AL + fW T (5) Motivating TIP-X. TIP-Adapter gains from the affinity computation between the test and few-shot image samples (see Eq. (4)). This similarity is computed in CLIP\u2019s image space. However, prior research [80, 48, 70] has demonstrated the existence of a modality gap between CLIP\u2019s image and text spaces. This leads us to question if doing image-image similarity comparisons in CLIP\u2019s image space is optimal. Fig. 3a shows the pairwise image-image, text-text and image-text cosine similarities of the ImageNet validation set CLIP embeddings. Clearly, the intra-modal and intermodal similarities are distributed differently\u2014the intermodal similarities have small variance and mean, whereas the intra-modal similarities have larger means and variances. This mismatch happens because contrastive training of CLIP maximises the inter-modal cosine similarities of paired samples without regard to intra-modal similarities. This implies that the intra-image CLIP embedding similarities employed by TIP-Adapter may not reflect the true intraimage similarities. Fig. 3b illustrates this idea with a simple example. Consider two image embeddings that are required \f0.0 0.2 0.4 0.6 0.8 1.0 Cosine Similarities 0 5 10 Sample Density Text-only Image-only Image-Text Paired Image-Text (a) Intra-modal and inter-modal CLIP cosine similarities. We observe quite distinct intra-modal and inter-modal cosine similarity distributions. 
(b) Intra-modal degrees of freedom: two image embeddings held at equal distance from a text embedding may lie close together, far apart, or at any two arbitrary points on the circumference; different intra-modal similarities can satisfy the same inter-modal constraints, leaving room for poor calibration. Figure 3: Our two-fold analysis motivating TIP-X. to be a distance r away from a particular text embedding. The two image embeddings can satisfy this condition by being very close to each other or very far apart from each other. Fig. 3b shows that this constraint can be satisfied by any two arbitrary points on a hypersphere of radius r. While we expect loose constraints to be imposed via transitivity, we nevertheless expect a lower quality of calibration in intra-modal (e.g., image-image) comparisons. TIP-X to the rescue. To get around the problem of uncalibrated intra-modal embedding distances in TIP-Adapter, we propose to use inter-modal distances as a bridge. More specifically, rather than computing similarities between the test features ($f \in \mathbb{R}^{t \times d}$) and few-shot features ($F \in \mathbb{R}^{CK \times d}$) in the image embedding space ($fF^T$), we use the image-text space. We first construct signatures by computing similarities of f and F with the text classifier weights W: $S = \mathrm{softmax}(FW^T), S \in \mathbb{R}^{CK \times C}$; $s = \mathrm{softmax}(fW^T), s \in \mathbb{R}^{t \times C}$ (6). These signatures comprise probability distributions encoding inter-modal affinities between the few-shot features and class text vectors, and likewise for the test features. We then construct our affinity matrix $M \in \mathbb{R}^{t \times CK}$ by measuring the KL-divergence between the signatures as follows: $M_{i,j} = \mathrm{KL}(s_i \| S_j), i \in [1, t], j \in [1, CK]$ (7), where $s_i$ represents the ith test signature for the t test samples, and $S_j$ represents the jth few-shot signature. Since we are working with discrete probability distributions, we compute the KL-divergence as $\mathrm{KL}(P \| Q) = \sum_i P_i \log \frac{P_i}{Q_i}$. The construction of the affinity matrix M can be seen as analogous to the affinity computation in TIP-Adapter (Eq. (4)). However, our affinity matrix construction removes direct reliance on the uncalibrated image-image similarities. Finally, before using our affinity matrix M as attention weights for L (one-hot encoded class labels), we rescale (denoted by $\psi$) the values of M to have the same range (min, max values) as the TIP-Adapter affinities (A). Further, since our affinity matrix M consists of KL-divergence values, the most similar samples will get small weights since their KL-divergence will be low (close to 0). To mitigate this, we simply negate the values in M. We then blend our predicted logits with TL using a scalar $\gamma$: $TXL = fW^T + \alpha AL + \gamma\psi(-M)L$ (8). The entire TIP-X method is shown in Fig. 2 (bottom right). 3.3. SuS-X: Combining SuS and TIP-X. Since our constructed support sets act as pseudo few-shot datasets, we directly replace the few-shot features F in the TIP-X framework with the features of our support set. We call our method SuS-X-LC if we combine TIP-X with the LAION-5B curated support set, and SuS-X-SD when combined with the Stable Diffusion generated support set. These methods enable training-free name-only adaptation of zero-shot VLMs. 4. Experiments. First, we evaluate SuS-X against strong baselines in the training-free zero-shot/name-only transfer regimes, across three VLMs.
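Before turning to the evaluation, the inference pipeline above (Eqs. 1-8) can be summarised in a short sketch. This is a minimal PyTorch rendering, assuming f, W, F and L are precomputed CLIP features and one-hot labels; the function names and the default alpha/beta/gamma values are illustrative placeholders, and psi is rendered as a simple min-max rescaling of the negated KL affinities onto the range of the TIP-Adapter affinities A, as described in the text.

import torch
import torch.nn.functional as nnf

def zero_shot_logits(f, W):
    # Eq. 2: f (t x d) test features, W (C x d) text classifier -> ZSL = f W^T
    return f @ W.T

def tip_adapter_logits(f, W, F, L, alpha=1.0, beta=5.5):
    # Eqs. 4-5: F (CK x d) support features, L (CK x C) one-hot labels
    A = torch.exp(-beta * (1.0 - f @ F.T))        # affinities in CLIP image space
    return alpha * A @ L + f @ W.T

def tip_x_logits(f, W, F, L, alpha=1.0, beta=5.5, gamma=0.5, eps=1e-8):
    # Eq. 6: inter-modal "signatures" of test and support features
    s = nnf.softmax(f @ W.T, dim=-1)              # (t, C)
    S = nnf.softmax(F @ W.T, dim=-1)              # (CK, C)
    # Eq. 7: M[i, j] = KL(s_i || S_j), summed over the class dimension
    M = (s.unsqueeze(1) * ((s.unsqueeze(1) + eps).log()
                           - (S.unsqueeze(0) + eps).log())).sum(-1)   # (t, CK)
    A = torch.exp(-beta * (1.0 - f @ F.T))        # TIP-Adapter affinities
    negM = -M                                     # low KL = similar, so negate
    psi = (negM - negM.min()) / (negM.max() - negM.min() + eps)
    psi = psi * (A.max() - A.min()) + A.min()     # rescale to the range of A
    # Eq. 8: TXL = f W^T + alpha*A L + gamma*psi(-M) L
    return f @ W.T + alpha * A @ L + gamma * psi @ L

Replacing F and L with the support-set features and labels from Section 3.1 yields the SuS-X-LC and SuS-X-SD variants described above.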
Next, we illustrate the adaptation of TIP-X into the few-shot training-free regime. Finally, we ablate and analyse our method to provide additional insights. 4.1. Training-free name-only transfer evaluation Datasets. For a comprehensive evaluation, we test on 19 datasets spanning a wide range of object, scene and fine-grained categories: ImageNet [18], StanfordCars [43], UCF101 [68], Caltech101 [25], Caltech256 [32], Flowers102 [56], OxfordPets [59], Food101 [7], SUN397 [75], DTD [14], EuroSAT [37], FGVCAircraft [51], Country211 [61], CIFAR-10 [44], CIFAR-100 [44], Birdsnap [5], CUB [72], ImageNet-Sketch [73] and ImageNet-R [38]. Previous few-shot adaptation methods [81, 28, 86] benchmark on a subset of 11 of these 19 datasets. We report results on the 19-dataset suite in the main paper and compare results using only the 11-dataset subset in the supp. mat. \fExperimental Settings. We compare against six baselines. For zero-shot CLIP, we use prompt ensembling with 7 different prompt templates following [61, 84]6. We run CuPL7, VisDesc8 (name-only transfer) and CLIP+DN9 (zero-shot transfer) using their official code. We also experiment with augmenting the CuPL prompts with the original prompt ensemble, and call it CuPL+e. For CALIP (zeroshot transfer), in the absence of public code at the time of writing, we aim to reproduce their results using our own implementation. For our proposed methods, we report results using both SuS-X-LC and SuS-X-SD. For both methods, we use a fixed number of support samples per dataset (see supp. mat. for details). For CALIP and SuS-X, we conduct a hyperparameter search on the dataset validation sets. In Sec. 4.4 we perform a hyperparameter sensitivity test for a fair evaluation. By default, we use the ResNet-50 [35] backbone as CLIP\u2019s image encoder for all models. Main Results. In Tab. 2, we compare both variants of SuS-X with the baselines. We report an average across 19 datasets. We also include results on ImageNet, EuroSAT, DTD, Birdsnap, ImageNet-R and ImageNet-Sketch (results on all 19 datasets in the supp. mat.). SuS-X methods outperform zero-shot CLIP by 4.6% on average across all 19 datasets. We observe striking gains of 18%, 8% and 7% on EuroSAT, DTD and Birdsnap respectively. We also outperform the SoTA training-free adaptation methods\u2014 CuPL+ensemble and VisDesc by 1.1% and 3.1% on average respectively. To further probe where we attain the most gains, we plot the absolute improvement of our models over zero-shot CLIP in Fig. 4a. We observe large gains on finegrained (Birdsnap, CUB, UCF101) and specialised (EuroSAT, DTD) datasets, demonstrating the utility of SuS-X in injecting rich visual knowledge into zero-shot CLIP (additional fine-grained classification analysis in supp. mat.). We further compare SuS-X to few-shot methods that use labelled samples from the true distribution in the supp. mat.\u2014 despite being at a disadvantage due to using no target distribution samples, SuS-X is still competitive with these methods. 4.2. Transfer to different VLMs We evaluate transfer to VLMs other than CLIP, namely TCL [76] and BLIP [46]. We only retain image and text encoders of these models for computing features, while preserving all other experimental settings from Sec. 4.1. Tab. 
3 shows our SuS-X methods strongly outperform all baseline methods across both VLMs\u2014we improve on zero-shot 6The 7 prompt templates are: \u201citap of a .\u201d, \u201ca origami .\u201d, \u201ca bad photo of the .\u201d, \u201ca photo of the large .\u201d, \u201ca in a video game.\u201d, \u201cart of the .\u201d, and \u201ca photo of the small .\u201d. 7https://github.com/sarahpratt/CuPL 8https://github.com/sachit-menon/classify by description release 9https://github.com/fengyuli2002/distribution-normalization models by 11.37% and 5.97% on average across 19 datasets. This demonstrates that our method is not specific to CLIP, but can improve performance across different VLMs. 4.3. Adapting to the few-shot regime A key component of our SuS-X method is TIP-X. In the previous section, we showcased SoTA results in the training-free name-only transfer regime. Due to its formulation, TIP-X can directly be extended to the few-shot regime, where our support sets are labelled samples from the target dataset rather than curated/generated samples. To evaluate TIP-X on such real-world support sets, we conduct training-free few-shot classification using TIP-X. We compare against the SoTA method in this regime\u2014TIPAdapter [84]. We report results on the 11-dataset subset used by TIP-Adapter on five different shot settings of the K-shot classification task: 1, 2, 4, 8 and 16. We present average accuracy results on all shots in Fig. 4b\u2014TIP-X outperforms both Zero-shot CLIP and TIP-Adapter (absolute gain of 0.91% across shots). Notably, on OxfordPets, we achieve 2.1% average gain. This further demonstrates the generalisability of the TIP-X method in transferring to the few-shot training-free setting. 4.4. Analysis We conduct several ablations and provide additional visualisations to offer further insight into the SuS-X method. Component Analysis. SuS-X consists of two major building blocks\u2014SuS construction and TIP-X. We compare the performance difference (with average accuracy across 19 datasets) of using SuS with TIP-Adapter instead of TIP-X in Tab. 4. We use both default ensemble prompts and CuPL prompts for CLIP\u2019s text classifier to break down the performance gains further. We note that both SuS and TIP-X are crucial for achieving the best results. Transfer to different visual backbones. We evaluate the scalability of our model across different CLIP visual backbones\u2014 Fig. 4c shows that both SuS-X variants consistently improve upon zero-shot CLIP across ResNet and VisionTransformer backbones of varying depths and sizes. SuS size. We study the effect of varying support set size for SuS-LC and SuS-SD\u2014we generate three different support sets with random seeds for support sizes of 1, 5, 10, 25, 50, 75 and 100 samples. From Fig. 6, we observe two broad trends\u2014some tasks benefit (ImageNet-R, DTD) from having more support set samples while others do not (Country211, Flowers102). We suggest that this is connected to the domain gap between the true data distribution and support set samples\u2014if the domain gap is large, it is inimical to provide a large support set, whereas if the domains are similar, providing more support samples always helps. SuS visualisation. We visualise samples from both support set construction methods on ImageNet in Fig. 5. It is hard to \fTable 2: Training-free adaptation of CLIP on 19 datasets with RN50 visual backbone. The best and second best results for each dataset are bolded and underlined, respectively. 
Individual results for all 19 datasets are available in the supp. mat. \u2217Average reported across 19 datasets. \u2020Our re-implementation. Method Average\u2217 ImageNet [18] ImageNet-R [38] ImageNet-Sketch [73] EuroSAT [37] DTD [14] Birdsnap [5] Zero-shot Zero-shot CLIP [61] 52.27 60.31 59.34 35.42 26.83 41.01 30.56 CALIP [34] \u2013 60.57 \u2013 \u2013 38.90 42.39 \u2013 CALIP [34]\u2020 52.37 60.31 59.33 36.10 26.96 41.02 30.68 CLIP+DN [89] 53.02 60.16 60.37 35.95 28.31 41.21 31.23 Name-only CuPL [60] 55.50 61.45 61.02 35.13 38.38 48.64 35.65 CuPL+e 55.76 61.64 61.17 35.85 37.06 47.46 35.80 VisDesc [53] 53.76 59.68 57.16 33.78 37.60 41.96 35.65 SuS-X-SD (ours) 56.73 61.84 61.76 36.30 45.57 50.59 37.14 SuS-X-LC (ours) 56.87 61.89 62.10 37.83 44.23 49.23 38.50 0 5 10 15 Absolute improvement (%) Country211 StanfordCars Imagenet Caltech256 CIFAR10 Imagenet-Sketch Imagenet-R Food101 Caltech101 CIFAR100 SUN397 Flowers102 FGVCAircraft OxfordPets UCF101 CUB Birdsnap DTD EuroSAT Absolute improvement over Zero-shot CLIP SuS-X-SD SuS-X-LC (a) 0 5 10 15 Number of labeled examples per class 60 65 70 Accuracy (%) Average over 11 datasets Zero-shot CLIP TIP-Adapter TIP-X (ours) (b) RN50 RN101 ViT-B/32 ViT/B16 Visual backbone 55 60 65 Accuracy (%) E\ufb00ect of visual backbone Zero-shot CLIP SuS-X-SD (ours) SuS-X-LC (ours) (c) Figure 4: (a) Comparison of SuS-X with Zero-shot CLIP. (b) Results of training-free few-shot classification. (c) Performance comparison of SuS-X across visual backbones. Table 3: SuS-X generalises to different VLMs. \u2217Average reported across 19 datasets. VLM Method Average\u2217ImageNet EuroSAT DTD Birdsnap TCL Zero-shot 31.38 35.55 20.80 28.55 4.51 CuPL 34.79 41.60 26.30 42.84 6.83 CuPL+e 32.79 41.36 25.88 41.96 6.60 VisDesc 33.94 40.40 21.27 34.28 5.69 SuS-X-SD 41.49 52.29 28.75 48.17 13.60 SuS-X-LC 42.75 52.77 36.90 46.63 17.93 BLIP Zero-shot 48.73 50.59 44.10 44.68 10.21 CuPL 51.11 52.96 39.37 52.95 12.24 CuPL+e 51.36 53.07 41.48 53.30 12.18 VisDesc 49.91 50.94 42.25 47.45 11.69 SuS-X-SD 53.20 55.93 45.36 56.15 16.95 SuS-X-LC 54.64 56.75 51.62 55.91 23.78 distinguish between the true ImageNet samples and the SuS samples\u2014we can therefore construct support sets to mimic the true data distribution, with access to only the category names. A caveat is that the support set does not always capture the domain characteristics of the true distribution, leading to a domain gap (lighting conditions, diverse scene backgrounds, confounding objects etc). To fully close the gap to using true few-shot datasets as support sets [28, 84], further research into exact unsupervised domain matching of support sets and few-shot datasets is required. Prompting strategies for SuS construction. Tab. 5 deTable 4: Component Analysis of SuS-X. Text Prompts Method SuS TIP-X Average Accuracy Default Zero-shot CLIP \u2717 \u2717 52.27 SuS-TIP-SD \u2713 \u2717 53.49 (+1.22%) SuS-X-SD \u2713 \u2713 53.69 (+1.42%) SuS-TIP-LC \u2713 \u2717 53.83 (+1.56%) SuS-X-LC \u2713 \u2713 54.20 (+1.93%) CuPL+e CuPL+e \u2717 \u2717 55.76 (+3.49%) SuS-TIP-SD \u2713 \u2717 56.63 (+4.36%) SuS-X-SD \u2713 \u2713 56.73 (+4.46%) SuS-TIP-LC \u2713 \u2717 56.72 (+4.45%) SuS-X-LC \u2713 \u2713 56.87 (+4.60%) picts the performance of Photo and CuPL prompting\u2014best results are achieved with the LC-Photo and SD-CuPL strategies. We further compare the diversity of images produced by the two strategies on ImageNet11\u2014from Tab. 
5, it is evident that CuPL prompting leads to more diverse support sets as compared to Photo prompting. Hyperparameter Sensitivity. We perform a sensitivity test for our \u03b3 hyperparameter (refer Eq. 8) on ImageNet-R, OxfordPets, and DTD. We fix \u03b1 and \u03b2 to be 1, and run a sweep 11We compute diversity as 1 minus the mean of the average pairwise image cosine-similarities within a class. A larger value implies low cosine similarities across images within a class, implying more diverse images. Alternatively, a smaller value implies less diverse images. \f(a) Dishwasher (b) Split Rail Fence (c) Australian Kelpie (d) Bulbul Figure 5: Support samples from the generated SuS-SD, retrieved SuS-LC and true training distribution for ImageNet. By randomising the image order in each subfigure, we pose a challenge question\u2014can you match the three images for each subfigure to their source i.e. SuS-SD, SuS-LC or ImageNet train set? The answers are provided at the bottom of the page10. 0 50 100 Number of support samples 61.4 61.6 61.8 62.0 Accuracy (%) Imagenet-R SuS-SD SuS-LC 0 50 100 Number of support samples 47.0 47.5 48.0 48.5 49.0 Accuracy (%) DTD SuS-SD SuS-LC (a) Tasks where larger support sets are beneficial 0 50 100 Number of support samples 12 13 14 Accuracy (%) Country211 SuS-SD SuS-LC 0 50 100 Number of support samples 65.5 66.0 66.5 67.0 Accuracy (%) Flowers102 SuS-SD SuS-LC (b) Tasks where larger support sets are harmful Figure 6: Effect of support size. Table 5: Prompting strategies for SuS construction. SuS method Average Acc. ImageNet Acc. Diversity Photo CuPL Photo CuPL Photo CuPL LC 56.87 56.20 61.89 61.79 0.28 0.32 SD 56.32 56.73 61.79 61.84 0.17 0.20 over \u03b3 \u2208[0, 1]. From Tab. 6, we observe that moderate values of \u03b3 are typically preferred, and the variance of the accuracy values is small. However, note that for DTD, the optimal \u03b3 is slightly larger (0.75)\u2014this is due to its specialised nature which requires more guidance from the specialised support set to inform pre-trained CLIP. Previous few-shot adaptation works [28, 84] observed similar results. Table 6: Hyperparameter sensitivity for \u03b3 Dataset \u03b3 value 0 0.1 0.2 0.3 0.5 0.75 1 ImageNet-R 60.87 60.98 61.03 61.05 61.00 60.89 60.65 OxfordPets 76.76 77.17 77.58 77.44 77.17 77.17 76.90 DTD 47.16 47.16 47.51 47.69 47.87 47.96 47.60 For more hyperparameter ablations, see the supp. mat. 4.5. Limitations and broader impact While demonstrating promising results, we note several limitations of our approach. (1) To perform name-only transfer, we rely on CLIP to have seen related concepts during pre-training. For concepts that are so rare that they do not appear during pre-training, transfer will not be feasible. (2) We employ LAION-5B [65] as a source of knowledge. While reasonable for a proof of concept, this data is relatively uncurated and may contain harmful content. As such, our approach is not suitable for real-world deployment without careful mitigation strategies to address this concern. Similar arguments apply to Stable Diffusion [63]. 5."
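One way to read the diversity measure of footnote 11 in code, as a minimal sketch: one minus the mean, over classes, of the average pairwise cosine similarity between the CLIP-encoded support images of that class. The tensor shapes and the single-sample guard are assumptions, not the authors' script.

import torch

def support_set_diversity(features, labels):
    # features: (N, d) CLIP image features of a support set; labels: (N,) class ids
    per_class = []
    for c in labels.unique():
        x = features[labels == c]
        if x.shape[0] < 2:
            continue                              # pairwise similarity undefined
        x = x / x.norm(dim=-1, keepdim=True)
        sim = x @ x.T                             # pairwise cosine similarities
        n = x.shape[0]
        mean_offdiag = (sim.sum() - sim.diag().sum()) / (n * (n - 1))
        per_class.append(mean_offdiag)
    return 1.0 - torch.stack(per_class).mean()    # higher value = more diverse images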
+ },
+ {
+ "url": "http://arxiv.org/abs/2006.09501v1",
+ "title": "On the Inference of Soft Biometrics from Typing Patterns Collected in a Multi-device Environment",
+ "abstract": "In this paper, we study the inference of gender, major/minor (computer\nscience, non-computer science), typing style, age, and height from the typing\npatterns collected from 117 individuals in a multi-device environment. The\ninference of the first three identifiers was considered as classification\ntasks, while the rest as regression tasks. For classification tasks, we\nbenchmark the performance of six classical machine learning (ML) and four deep\nlearning (DL) classifiers. On the other hand, for regression tasks, we\nevaluated three ML and four DL-based regressors. The overall experiment\nconsisted of two text-entry (free and fixed) and four device (Desktop, Tablet,\nPhone, and Combined) configurations. The best arrangements achieved accuracies\nof 96.15%, 93.02%, and 87.80% for typing style, gender, and major/minor,\nrespectively, and mean absolute errors of 1.77 years and 2.65 inches for age\nand height, respectively. The results are promising considering the variety of\napplication scenarios that we have listed in this work.",
+ "authors": "Vishaal Udandarao, Mohit Agrawal, Rajesh Kumar, Rajiv Ratn Shah",
+ "published": "2020-06-16",
+ "updated": "2020-06-16",
+ "primary_cat": "cs.CV",
+ "cats": [
+ "cs.CV",
+ "cs.CY",
+ "cs.HC",
+ "I.3.6"
+ ],
+ "main_content": "Introduction \"Everyone is special, and nobody is like anyone else. Everyone\u2019s got an act.\"\u2013 The Greatest Showman. While we interact with computing devices, we leave a variety of footprints such as typing, swiping, walking, among others. These footprints have been studied for authentication, identification, forensic analysis, health monitoring, cognitive assessment, and inferring soft biometric traits [Banerjee and Woodard 2012; Brizan et al. 2015; Buriro et al. 2016; Dantcheva et al. 2016; Miguel-Hurtado et al. 2016a; Neal and Woodard 2019; Nixon et al. 2015; Vizer and Sears 2015]. Typing [Banerjee and Woodard 2012; Roth et al. 2014, 2015; Teh et al. 2013], swiping [Frank et al. 2013; Patel et al. 2016; Serwadda et al. 2016], gait [Kumar et al. 2018, 2015, 2016; Primo et al. 2014], \u2217Both authors contributed equally to this research. body movements [Kumar et al. 2017], and fusion are some of the widely studied behavioral patterns in the context of desktop, mobile, and wearable devices. Typing is commonly characterized as key press and release timings, keystroke sounds, and video sequence [Banerjee and Woodard 2012; Roth et al. 2014, 2015; Teh et al. 2013]. Security critical organizations such as the Defense Advanced Research Projects Agency (DARPA) have already adapted typing-based active authentication technology for desktops [Keromytis 2015]. However, the majority of the keystroke studies focus on either authentication or identification under free or fixed-text entry environments [Banerjee and Woodard 2012; Belman and Phoha 2020; Kumar et al. 2016; Teh et al. 2013]. The number of studies on the inference of soft biometrics from typing patterns is limited or confined to a particular device/environment or both [Akis et al. 2014; Bandeira et al. 2019; Buker et al. 2019; Buriro et al. 2016; Fairhurst and Da Costa-Abreu 2011; Giot and Rosenberger 2012; Idrus et al. 2014; Li et al. 2019; Pentel 2017; Plank 2018; Tsimperidis et al. 2018; Uzun et al. 2015]. Inference of a variety of personal attributes including but not limited to age, gender, cognitive assessment, handedness, typing hand, and number of fingers used for typing have been explored in the past [Antal and Nemes 2016; Buriro et al. 2016; Idrus et al. 2014; Pentel 2017; Rattani and Agrawal 2019; Tsimperidis et al. 2018]. Considering that typing is an indispensable part of our lives, we believe that it reveals a great deal of information and should be studied in depth for the inference of useful identifiers. The identifiers inferred from typing patterns can be used in a variety of ways. Some of them are listed below: \u2022 Personalized user experience: Consumers often refrain from providing too much information while signing up for an information technology-enabled service. Besides, people with disabilities may find it difficult to enter too much information to start using a software platform. The automated estimation of soft biometrics can be useful in such cases. Organizations can tailor their platforms and services as per the user\u2019s demography for a seamless and personalized experience. Moreover, estimated soft biometrics can be used for controlling access to certain resources or platforms. For example, access to certain TV channels arXiv:2006.09501v1 [cs.CV] 16 Jun 2020 \fPreprint, June 2020, Udandarao and Agrawal, et al. Figure 1. 
Person (on the left) impersonated Benjamin (in the middle, a handsome American businessman) to fool a divorced and lonely woman Rosely (on the right) and scam out her lifelong savings ($90, 000) by promising her lifelong love [Australia 2019]. The typing patterns of the person could have been used to estimate the gender, age, height, and weight, and alarm Rosely that the person she is thinking the love of her life may be fake as his/her soft traits do not match with the information provided to her. Besides, law enforcement personnel can use soft biometrics for tracing and convicting the person. and websites can be restricted to individuals of certain age groups. \u2022 Improved recognition rate: The performance of an authentication and identification systems can be improved by incorporating the inferred soft biometrics such as age, gender, weight, and height in the pipeline [Dantcheva et al. 2016; Rattani and Agrawal 2019; Syed Idrus et al. 2015; Thanganayagam et al. 2019]. \u2022 Targeted advertising: Organizations can use the soft biometrics for customized their advertisement and target people of a specific height, weight, gender, and age groups who might be interested in the product more than the rest [Dantcheva et al. 2016; Rattani and Agrawal 2019]. \u2022 Identification of fake profiles on social media: The social-media platforms are suffering from fake profiles and fake news spread. It is not uncommon for individuals to fake their identity, i.e., to be a different gender, height, age, and profession. It is difficult to determine the legitimacy of individuals based on the type of information they post. The accurately estimated soft identifiers based on the typing pattern can help detect these profiles and take appropriate actions [Fairhurst and Da Costa-Abreu 2011; Li et al. 2019]. \u2022 Forensics: Covert identification of individuals has never been more critical than today as the number and nature of cybercrimes are rapidly evolving [Li et al. 2019]. As per Federal Bureau Investigation (FBI)\u2019s 2019 Internet Crime Report, 467, 361 online scams were registered alone in 2019 [Federal Bureau Investigation (FBI) 2019]. These scams cost innocent people a total of $3.5 billion. Business email compromise, romance fraud, and spoofing caused the highest financial losses. Several victims ended up losing their entire life savings or even sinking into debt. The law enforcement agencies often lack credible information to trace and convict these scammers. Soft biometrics inferred from typing footprints that the scammers leave while they interact with the victims could be useful in such scenarios (see Figure 1 for an example). The above-mentioned applications motivated us to study the inference of soft biometrics from typing patterns of individuals in a multi-device environment. In summary, this work makes the following set of contributions: \u2022 Investigate inference of five soft biometrics, namely, gender, major/minor, and typing style, age, and height from typing patterns collected from 117 individuals while they typed a predefined text and answered a series of questions on a desktop, tablet, and smartphone. \u2022 Benchmark six Machine Learning (ML) and four Deep Learning (DL) algorithms for the classification of gender, major/minor, and typing style. Additionally, we benchmark eight different configurations generated from two factors (free and fixed-text entry), and devices (Desktop, Phone, Tablet, and Combined). 
\u2022 Besides using unigraphs, digraphs, and word-level features with a mutual information-based feature selector, we explore a novel method of constructing the feature space for the application of DL methods. \u2022 Provide detailed results and discussion on the inference of gender, major/minor, typing style, age, and height of the participants. Besides, present a qualitative performance comparison with the existing studies. \u2022 Share the code base for reproducibility of results and foster future research in this direction.1 The rest of the paper is organized as follows. Section 2 discusses the closely related works. Section 3 presents the design of experiments. Section 4, and Section 5 present and 1Code is available upon request. Please send an email to the last author. \fOn the Inference of Soft Biometrics from Typing Patterns Collected in a Multi-device Environment Preprint, June 2020, discuss the results, respectively. Finally, we conclude the paper and provide future research directions in Section 6. 2 Related work The inference of soft biometrics (gender, age, ethnicity, hair/eye/skin colors, and hairstyle) from physical biometrics (e.g., face, fingerprint, iris, hand, and body), as well as gait and voice, have been substantially covered by Dantcheva et al. [Dantcheva et al. 2016]. Thus, in this section, we describe the works related to the inference of soft biometric from typing patterns, and the gap that this work attempts to fill in. Early attempts to infer the gender of the typists from keystroke analysis were made in [Fairhurst and Da Costa-Abreu 2011; Giot and Rosenberger 2012]. One [Fairhurst and Da Costa-Abreu 2011] was inspired by developing trust and reliability among social network users, while the other [Giot and Rosenberger 2012] was motivated from improvement in the performance of user recognition systems by including estimated soft-biometrics as features. For example, Idrus et al. [Giot and Rosenberger 2012; Syed Idrus et al. 2015] utilized the determined gender, age, and handedness to achieve about 7% of reduction in user recognition error rate. A separate study by Idrus et al. [Idrus et al. 2014] was conducted under fixedand free-text entry environment to predict the hand category (use one or both hands), gender (male, female), age (< 30 or \u226530), and dominant hand (lefty or righty). Brizan et al. [Brizan et al. 2015] used hybrid (keystroke, stylometry, and language production) set of features to predict the cognitive demands of a given task. Yasin et al. [Uzun et al. 2015] were able to differentiate between children (below 15) and adults (above 15) by analyzing the participant\u2019s typing behaviors. Recently, Abeer et al. [Buker et al. 2019] predicted gender from live chats. Pentel [Pentel 2017] combined mouse patterns with keystrokes to predict the age and gender of individuals. Likewise, Li et al. [Li et al. 2019] analyzed stylometry and keystroke dynamics to predict the gender of the person from 15 minutes of chat with 72% accuracy. Bandeira et al. [Bandeira et al. 2019] combined handwritten signature and keystroke dynamics for gender prediction. Abreu et al. [Julliana Caroline Gonc\u00c2\u00ffalves de A.S.M 2019] also combined three modalities (keystrokes, touch strokes, and handwritten signature) to predict the gender of the typists. The authors suggested that the fusion-based system outperformed the rest. Buriro et al. [Buriro et al. 2016] estimated age, gender, and operating hands from the typing behavior of individuals collected on smartphones. 
Other than age, gender, handedness, and dominant hand, researchers have predicted some interesting indicators from typing patterns. For example, Epp et al. [Epp et al. 2011] investigated the prediction of fifteen emotional states, including confidence, hesitance, nervousness, relaxation, sadness, and tiredness from typing patterns. Tsimperidis et al. [Tsimperidis et al. 2020] predicted the educational level of participants based on the keystroke dynamics information only. Beyond typing patterns, researchers have explored other behavioral patterns such as walking patterns, swiping patterns, calling patterns, device usage patterns to estimate a wide variety of soft identifiers [Acien et al. 2018; Garofalo et al. 2019; Miguel-Hurtado et al. 2016b; Neal and Woodard 2018; Neal* and Woodard 2018]. The aforementioned studies have shown that an individual\u2019s behavioral pattern reveal about their gender, age, handedness, dominant hand, emotional stress, cognitive ability, etc. These studies, however, were limited in terms of types of devices used in the experiments, data collection protocol (free or fixed text), application of algorithms, and prediction of specific soft biometric. The majority of the application scenario mentioned in the introduction would require the study on the inference of soft biometrics from behavioral patterns to be more thorough. By thorough, we mean the inclusion of a variety of users, devices, text entry mode, and a variety of learning paradigms that could be more suitable, in addition to collecting the absolute ground truth. Conducting such a comprehensive study on this topic would require a grand data collection experiment. One of the datasets that aligned well with our hypothesis is the dataset recently posted by Belman et al. [Belman et al. 2019], which includes fixed as well as free text collected from 117 users who answered a wide variety of questions on a desktop, tablet, and smartphone. The specific soft traits that we included in this study are age, gender, height, typing style (must look at the keyboard, occasionally looks at the keyboard, and need not look at the keyboard), major/minor (computer science or non-computer science). Apart from considering five soft traits, we study keystroke features that (e.g., word-level features) have not been studied in this context but shown to be better than traditional keystroke features in the context of user recognition [Belman and Phoha 2020; Sim and Janakiraman 2007]. Moreover, we apply numerous learning algorithms, which have not been studied in this context before, to the best of our knowledge. 3 Design of experiments 3.1 Dataset We used Syracuse University and Assured Information Security-Behavioral Biometrics Multi-Device and Multi-Activity Data from the Same Users (SU-AIS BB-MAS) [Belman et al. 2019]. The dataset consists of multiple modalities; however, we consider only the keystroke part, therefore refer to the dataset as BB-MAS-Keystroke in this document. \fPreprint, June 2020, Udandarao and Agrawal, et al. The BB-MAS-Keystroke consists of 3.5 million keystrokes collected from 117 users who typed two given sentences (fixed) and answered a series of questions (free-text) on desktop (Dell kb212-b), tablet (Samsung-S6), and smartphone (HTC-Nexus-9). A summary of the dataset is provided in Table 1. Please see [Belman et al. 2019] for more details. 3.2 Feature extraction and analysis Following previous studies [Belman and Phoha 2020; Huang et al. 2016; Sim and Janakiraman 2007; Teh et al. 
2013], we extracted unigraph (Key Hold Time), digraph (Flight or Key Interval Time), and word-level features. Before feature extraction, we removed outlier using interquartile range (IQR) method. The description of features computation is provided below and pictured in Figure 2: \u2022 Unigraphs: Unigraphs are defined as the difference between the key release and key press timings. These features were extracted for all unigraphs in the data and aggregated. For example, if the keyk is pressed and released 50 times in the dataset, the key hold feature of k would be a list of 50 values. \u2022 Digraphs: Digraph captures information about the press and release timings of two consecutive keys. There are four different digraphs that can be defined for two consecutive keys (say ki and ki+1) as demonstrated as follows: 1. F1 = (ki+1)press \u2212(ki)release 2. F2 = (ki+1)release \u2212(ki)release 3. F3 = (ki+1)press \u2212(ki)press 4. F4 = (ki+1)release \u2212(ki)press We observed that in some cases, the key ki+1 was pressed before the release of key ki, which resulted into negative values for the features F1 and F3 for those occurrences. The aggregation process was same as unigraphs. \u2022 Word level features: The word-level features capture different characteristics of the data than the uni and digraphs. They are also shown to be highly discriminative among users [Belman and Phoha 2020; Sim and Janakiraman 2007]. Thus, we adapted these features in this study. These features were computed as described as follows: Consider a word W of length n consisting of the keys {k1, k2, ..., kn} in that order. Then word-level features were defined and extracted as follows: 1. Word Hold Time (WN = (kn)release \u2212(ki)press 2. Word-unigraph features (W f K ): These features consisted of mean, standard deviation, and median of the unigraphs of W . Assume we use an aggregation function f , then for the word W , W f K = f ([Kk1,Kk2, ...,Kkn]) 3. Word-digraph features (W f Fi): Similar to word-unigrah features, we computed the word-digraph features. Assume the aggregation function f and flight features Fi (where i \u2208{1, 2, 3, 4}), then for the word W , W f Fi = f ([Fik1,k2, Fik2,k3, ..., Fikn\u22121,kn]) More details on how these features were utilized during the classification is provided in Section 3.3.1 and 3.3.2. l l e e o o press release press release press release \u25cf \u25cf Figure 2. An illustration of the extraction of unigraphs (K), digraphs (F), and world-level (W ) features. Cross-validation Best params Retrain the model Evaluate the model Parameters Dataset Training Testing Figure 3. Training, cross-validation, and testing setup. The data was divided in user sets P (Training and cross-validation for hyper-parameter tuning) and Q (Testing). Where, P \u2229Q = \u03d5. Adopted from [scikit 2020]. 3.3 Learning framework Prediction of gender (female or male), typing style (must look at the keyboard or occasionally looks at the keyboard or need not look at the keyboard), and major/minor (computer science or non-computer science) were considered as classification tasks. On the other hand, age and height estimation was considered as regression tasks in our experiments. The block diagram of the learning framework adopted in this study is illustrated in Figure 3. We divided the dataset in two parts Training and Testing. The Training data consisted of 70% of the users, and as the name indicated, it was used to train the model and tune the hyperparameters using five-fold cross-validation. 
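As a minimal sketch of the feature definitions above (not the exact extraction code), assume each keystroke is a (key, press_time, release_time) tuple in typing order; the unigraph hold times, the four digraph timings F1-F4, and the word hold time then follow directly:

from collections import defaultdict

def extract_features(events):
    # events: hypothetical list of (key, press_time, release_time) tuples in order
    unigraphs = defaultdict(list)                      # key hold time per key
    digraphs = defaultdict(lambda: defaultdict(list))  # F1..F4 per key pair
    for i, (key, press, release) in enumerate(events):
        unigraphs[key].append(release - press)
        if i + 1 < len(events):
            nkey, npress, nrelease = events[i + 1]
            pair = (key, nkey)
            digraphs[pair]["F1"].append(npress - release)    # may be negative
            digraphs[pair]["F2"].append(nrelease - release)
            digraphs[pair]["F3"].append(npress - press)
            digraphs[pair]["F4"].append(nrelease - press)
    return unigraphs, digraphs

def word_hold_time(word_events):
    # word_events: the (key, press, release) tuples spanning one typed word
    return word_events[-1][2] - word_events[0][1]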
The best-performing values of the hyperparameters were then used to train the model again on the Training dataset. The \fOn the Inference of Soft Biometrics from Typing Patterns Collected in a Multi-device Environment Preprint, June 2020, Table 1. Number of samples available in the dataset [Belman et al. 2019]. We studied only the first five as the last two were extremely imbalanced which is one of the limitations of the dataset. Soft biometric Description Gender male (72), female (45) Major/minor CS(66), non-CS (50), missing (1) Typing style a: must look at the keypad (6), b: occasional look at the keypad (31), c: need not look at the keyboard (80) Age (years) range (19, 35), mean = 24.97, median = 24.0, std = 3.11 Height (inches) range(54, 74), mean = 66.96, median = 67.0, std = 4.02 Ethnicity Asian (104), non-Asian (13) Handedness right (114), left (1), ambidextrous (2) trained model was then tested on the Testing dataset, which consisted of the remaining 30% users. The adopted learning framework creates a realistic experimental setup as it allowed us to test our model on completely unseen data, unlike some previous works [Belman and Phoha 2020; Fairhurst and Da Costa-Abreu 2011; Miguel-Hurtado et al. 2016a; Plank 2018; Tsimperidis et al. 2018], which have reported the results using k-fold cross-validation on the whole dataset. Nevertheless, we tried this strategy as well and got near-perfect results. Also, we observed that the dataset has a class imbalance problem. For example, the number of males was higher than the number of females (see Table 1 for more details). Borderline over-sampling based on SMOTE (Synthetic Minority Oversampling Technique) [Nguyen et al. 2011] was included in the classification pipeline to over-sample the minority class samples and make it equal to the majority class samples. Borderline SMOTE was chosen over vanilla SMOTE [Chawla et al. 2002] and Adaptive Synthetic (ADASYN) sampling technique [He et al. 2008] based on the loss obtained during training. 3.3.1 Classical Machine Learning (ML). We included a variety of algorithms for implementing the classification and regression tasks. The decision to include algorithms such as Naive Bayes, Decision Trees, Support Vector Machine (SVM), Adaptive Boosting (AdaBoost), and Multi-Layer Perceptron (MLP) with single hidden layer was based on the previous studies [Baluja and Rowley 2007; Brizan et al. 2015; Buriro et al. 2016; Miguel-Hurtado et al. 2016a; Morales et al. 2016; Na Cheng et al. 2009; Neal and Woodard 2018; Plank 2018; Tsimperidis and Katos 2013]. Besides, we included algorithms, namely extreme gradient boosting (XGBoost), that have been rarely studied in this context but drew attention due to its success in online competition platforms such as Kaggle [Kaggle 2020]. The hyperparameters associated with these algorithms were tuned using five-fold cross-validation and grid search (see Figure 3). In addition to tuning the listed parameter, we also experimented with the number of features and presented the best results obtained. The encouraging performance of ML algorithms, as well as the size of data, motivated us to experiment with deep learning methods that have been effectively used for solving typing pattern-based identification and authentication, recently [Acien et al. 2020; Baldwin et al. 2019; Bernardi et al. 2019; Sun et al. 2017]. 3.3.2 Deep Learning (DL). Deep learning has been used with great success in recent years. 
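A minimal sketch of this setup, assuming a feature matrix X, labels y and per-sample user ids as numpy arrays: users (not samples) are held out for testing, Borderline-SMOTE balances the training classes inside the pipeline, and hyperparameters are tuned with five-fold grid search. The SVC classifier and its parameter grid are placeholders for the algorithms and grids actually benchmarked.

from imblearn.over_sampling import BorderlineSMOTE
from imblearn.pipeline import Pipeline
from sklearn.model_selection import GridSearchCV, GroupShuffleSplit
from sklearn.svm import SVC

def run_classification(X, y, user_ids):
    # hold out 30% of *users* (not samples) for testing
    splitter = GroupShuffleSplit(n_splits=1, test_size=0.3, random_state=0)
    train_idx, test_idx = next(splitter.split(X, y, groups=user_ids))

    pipe = Pipeline([("smote", BorderlineSMOTE(random_state=0)),
                     ("clf", SVC())])
    grid = GridSearchCV(pipe, {"clf__C": [0.1, 1, 10],
                               "clf__kernel": ["rbf", "linear"]}, cv=5)
    grid.fit(X[train_idx], y[train_idx])
    return grid.score(X[test_idx], y[test_idx])    # accuracy on unseen users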
The combination of deep networks, along with the non-linear activation, has been influential in the popularity of deep learning algorithms. Recently, there have been several attempts at using deep learning architectures for analyzing keystroke biometric data [Acien et al. 2020; Baldwin et al. 2019; Bernardi et al. 2019; Sun et al. 2017]. Inspired by these approaches, we leverage the following deep learning models: \u2022 Fully Connected (FC) Network: We use a four-layered neural network with relu activation. We additionally incorporate dropout as a regularization technique for our model. We believe that using a deep FC network will help capture the intrinsic differentiating factors within the aggregated feature vectors to help discern the privacy factors better. \u2022 Convolution Neural Network (CNN): We use a seven-layer CNN with four 2D convolution layers and three fully connected layers. We further use dropout and batch normalization to regularize our network. Since our data features are in the form of vectorized arrays, we use a trick of converting them into squared images. For a given feature vector of dimensionality N, we find the largest perfect square S just smaller than N and convert the feature vector to an image of size 1 \u00d7 \u221a S \u00d7 \u221a S. We hypothesize that the trick will help us leverage CNNs to exploit the structural and spatial biases present in our feature data efficiently. \u2022 Recurrent Neural Network (RNN): We use a three-layer RNN with tanh activation functions and a final softmax classification layer. In the case of RNNs, we require our input data to be sequential. However, our data is in the form of tabulated feature vectors. We use a heuristic to convert our feature vectors into sequential data points to feed it into the RNN. For a given feature vector of dimensionality N, we find the largest non-prime number just smaller than N and find two factors A and B such that N = A \u00d7 B. We \fPreprint, June 2020, Udandarao and Agrawal, et al. then manipulate the feature vector to seem like proxy sequential data of sequence length A and vector dimension B. The trick, therefore, can help us utilize the episodic nature of RNNs to gauge sequential correlations in our data. \u2022 Long Short Term Memory (LSTM) Network: We use a three-layer LSTM network with a final softmax classification layer, similar to the one used for the RNN model. We make use of LSTMs to mitigate the widely known vanishing gradient problem [Hochreiter 1998] of simple RNNs. We follow the same heuristical procedure to make our feature vectors suitable for training a sequential LSTM network. We believe that the LSTM should further help capture sequential dependencies inherent in our feature vectors. 3.4 Performance evaluation The performance of the classification, as well as regression models, were evaluated on the test dataset that was kept separate from the training and validation process (see Figure 3). Accuracy and mean absolute error (MAE) were used as the performance evaluation metric for the classification and regression models, respectively. The accuracy is defined as the ratio of the number of correctly predicted instances and the number of instances tested. MAE is defined as an average of absolute differences between the actual and predicted values. The accuracy could be biased in cases where the number of instances for each class are unequal. 
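Returning to the two input-reshaping heuristics described for the CNN and the RNN/LSTM above, a minimal sketch follows; the exact tie-breaking of the factorisation is an assumption.

import math
import numpy as np

def to_square_image(x):
    # crop to the largest perfect square S not exceeding len(x),
    # then reshape to a (1, sqrt(S), sqrt(S)) single-channel image
    side = int(math.isqrt(len(x)))
    return np.asarray(x[: side * side], dtype=np.float32).reshape(1, side, side)

def to_sequence(x):
    # find the largest non-prime n <= len(x) and a factorisation n = A * B,
    # then treat the cropped vector as a length-A sequence of B-dim steps
    n = len(x)
    while n > 3:
        factors = [a for a in range(2, int(math.isqrt(n)) + 1) if n % a == 0]
        if factors:
            a = factors[-1]                        # most balanced factor found
            return np.asarray(x[:n], dtype=np.float32).reshape(a, n // a)
        n -= 1                                     # n was prime, try a smaller n
    return np.asarray(x, dtype=np.float32).reshape(1, -1)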
However, as we had applied SMOTE to oversample the instances of minority classes and make the number of instances belonging to each class equal, accuracy in our case is an unbiased measure. 4 Results 4.1 Classification results The following subsections discuss the results obtained by different ML and DL based classification models used in this study: 4.1.1 Gender classification. The gender classification accuracies are presented in Table 2. In terms of devices, the combined case achieved the best results (93.02%) followed by Phone (88.37%), Desktop (86.04%), and Tablet (83.33%). Free-text (93.02%) yielded better results than the Fixed-text (88.37%), overall. Classifier-wise, CNN (93.02%), SVM (86.04%), MLP/XGBoost (83.72%), and RNN (83.33%) outperformed the rest. 4.1.2 Major/Minor classification. The accuracies for the major/minor classification task can be found in Table 3. In terms of devices, the combined-device setting achieved the best results (87.8%) followed by Desktop (85.37%), Tablet (85%), and Phone (82.92%). Overall, Fixed-text (87.8%) yielded slightly better results than Free-text (85.37%). The top-performing classifiers were CNN (87.8%), LSTM (85%), RNN (83.33%), SVM (78.04%) and XGBoost (78.04%) followed by the rest. 4.1.3 Typing style classification. The accuracies for the typing style classification task can be found in Table 4. In terms of devices, the combined-device setting and Phone achieved the best results (96.15%) followed by Tablet (95.55%), and Desktop (93.18%). Overall, both Fixed-text and Free-text yielded the same best results (96.15%). The top-performing classifiers were SVM (96.15%), MLP (96.15%), CNN (91.42%), FC (91.22%) and AdaBoost (90.38%) followed by the rest. 4.2 Regression results The following subsections discuss the results obtained by different ML and DL based regression models used in this study: 4.2.1 Age estimation. The collated results for both ML and DL models for the task of age prediction can be found in Table 5. In terms of devices, the phone-only setting achieved the best results (1.77) followed by desktop (2.04), tablet (2.09), and combined (2.11). Free-text (1.77) yielded better results than the Fixed-text (2.04), overall. Regressor-wise, FC (1.77), LSTM (2.04), and XGBoost (2.21) outperformed the rest. 4.2.2 Height estimation. The results for both the ML and DL models for the height prediction problem can be found in Table 5. In terms of devices, the phone-only setting achieved the best results (2.65) followed by combined (2.67), tablet (2.74), and desktop (2.82). In contrast to age regression, Fixed-text (2.65) yielded better results than the Free-text (2.70), overall. Regressor-wise, KNN (2.65), XGBoost (2.67), and SVM (2.74) outperformed the rest. For the height prediction problem, ML regressors clearly outperformed DL regressors. 5 Discussion 5.1 Limitations As mentioned earlier, one of the major limitations of studying the inference of soft biometrics is a quality dataset. Although every participant provided about thirty thousand keystrokes, the number of subjects is limited in the dataset, which makes the training, validation, and testing a bit difficult. In particular, we used the data collected from 70% users (i.e., 82 users) for training and cross-fold validation, while the data collected from the rest and the data collected from the remaining 30% (i.e., 35 users) used for testing. Another limitation of the dataset is that the samples for recorded soft biometrics are severely imbalanced in some cases (see Table 1). 
For example, of the total 117 participants, 105 are Asian, and 114 identified themselves as right-handed. \fOn the Inference of Soft Biometrics from Typing Patterns Collected in a Multi-device Environment Preprint, June 2020, Table 2. Percentage accuracies (the higher, the better) obtained by different ML and DL algorithms for gender classification. Arrangement-wise, Combined-Free-CNN (93.02%) outperformed the rest. Device-wise, Combined (93.02%), Phone (88.37%), Desktop (86.04%), and Tablet (83.33%) closely followed each other in that order. Device Setting Naive Bayes SVM Decision Trees AdaBoost MLP XGBoost RNN LSTM FC CNN Desktop Free 72.09 81.39 76.74 81.39 83.72 83.72 77.50 72.50 72.09 86.04 Fixed 72.09 86.04 79.06 81.39 74.41 79.06 77.50 77.50 62.50 82.50 Phone Free 53.48 83.72 67.44 81.39 76.74 81.39 80.00 75.00 67.44 79.07 Fixed 55.81 76.74 74.41 74.41 72.09 72.09 75.00 85.00 62.79 88.37 Tablet Free 60.46 79.06 76.74 76.74 76.74 79.06 83.33 72.50 69.76 79.06 Fixed 67.44 72.09 67.44 72.09 67.44 67.44 82.5 75.00 65.11 79.07 Combined Free 67.44 83.72 79.06 79.06 76.74 81.39 80.00 77.50 74.42 93.02 Fixed 67.44 79.06 74.41 81.39 74.41 72.09 77.50 62.50 67.44 83.72 Table 3. Percentage accuracies (the higher, the better) obtained by different ML and DL algorithms for major/minor classification. Arrangement-wise, Combined-Fixed-CNN (87.80%) outperformed the rest. Device-wise, Combined (87.80%), Desktop (85.37%), Tablet (85.0%), and Phone (82.92%) closely followed each other in that order. The results align with the with common intuition that CS majors may be more comfortable and fluent on Desktop and Tablet keypads compared to Phone than non-CS majors. Device Setting Naive Bayes SVM Decision Trees AdaBoost MLP XGBoost RNN LSTM FC CNN Desktop Free 68.29 78.04 73.17 73.17 73.17 73.17 80.00 75.00 70.73 78.04 Fixed 75.60 70.73 70.73 75.60 60.97 78.04 67.50 70.00 56.09 85.37 Phone Free 60.97 51.21 70.73 65.85 53.65 53.65 75.00 77.50 68.29 82.92 Fixed 63.41 60.97 68.29 58.53 58.53 53.65 72.50 77.50 63.41 78.04 Tablet Free 63.41 53.65 68.29 73.17 53.65 58.53 83.33 82.50 68.29 82.92 Fixed 75.60 56.09 68.29 73.17 56.09 73.17 72.50 85.00 63.41 78.04 Combined Free 65.85 75.60 73.17 68.29 65.85 68.29 85.00 80.00 73.17 85.37 Fixed 70.73 73.17 63.41 68.29 53.65 60.97 82.50 72.50 65.85 87.80 Table 4. Percentage accuracies (the higher, the better) obtained by different ML and DL algorithms for typing style classification. Arrangement-wise, Combined-Free-SVM (96.15%) was closely followed by Combined-Fixed-SVM (94.23%) and outperformed the rest. Device-wise, Combined (96.15%), Phone (96.15%), Tablet (95.55%), and Desktop (93.18%) closely followed each other in that order. The results do not fall beyond our expectations as we hypothesized that the typing patterns of individuals who look, occasionally look, and never look at the keypad to be very different, in general. 
Device Setting Naive Bayes SVM Decision Trees AdaBoost MLP XGBoost RNN LSTM FC CNN Desktop Free 77.27 93.18 76.92 90.38 86.53 81.81 80.00 83.33 82.85 91.42 Fixed 76.92 90.38 86.53 90.38 90.38 88.46 50.00 48.00 82.14 66.07 Phone Free 78.84 88.63 82.69 86.36 86.53 86.36 83.33 83.33 80.70 85.71 Fixed 86.53 96.15 80.76 84.61 96.15 86.53 50.00 42.00 91.22 49.12 Tablet Free 65.38 95.55 82.22 82.69 78.84 80.00 86.67 83.33 90.47 82.85 Fixed 78.84 90.38 78.84 82.69 88.46 88.46 56.00 44.00 78.57 57.14 Combined Free 76.92 96.15 82.69 88.46 92.30 94.23 83.33 80.00 84.21 88.57 Fixed 86.53 94.23 82.69 90.38 90.38 90.38 70.00 56.00 89.47 64.91 Although we expect that the performance of the proposed approaches would scale to a larger dataset, it is difficult to claim that it would. Nonetheless, the results are comparable or better than the existing mechanisms of inferring soft biometrics from keystrokes (see Table 6). \fPreprint, June 2020, Udandarao and Agrawal, et al. Table 5. MAE (the lower, the better) for age and height estimation. Arrangement-wise, Phone-Free-FC (1.77 years) and Phone-Fixed-KNN (2.65 inches) were the best performers. Device-wise, Phone (1.77 years), Desktop (2.04 years), Tablet (2.09 years), Combined (2.11 years) closely followed each other in that order. Similarly, Phone (2.65 inches), Combined (2.67 years), Tablet (2.74 inches), Desktop (2.82 inches) closely followed each other in that order. Interesting observation here is that ML algorithms have outclassed the DL algorithms. Age Height Device Free/Fixed SVM KNN XGBoost RNN LSTM FC CNN SVM KNN XGBoost RNN LSTM FC CNN Desktop Free 2.37 2.38 2.26 5.53 2.24 2.26 3.78 2.97 3.02 2.84 8.67 10.70 7.33 7.21 Fixed 2.43 2.54 2.27 5.24 2.04 2.92 4.97 2.92 3.20 2.82 9.54 10.66 8.63 7.24 Phone Free 2.46 2.41 2.59 7.11 2.03 1.77 6.10 2.94 3.04 2.70 10.43 10.39 4.75 7.20 Fixed 2.38 2.36 2.42 8.41 2.48 2.36 5.44 2.87 2.65 2.92 10.55 11.10 5.72 7.20 Tablet Free 2.42 2.47 2.38 6.19 2.45 2.39 5.02 2.85 3.18 3.23 8.75 9.57 4.83 7.22 Fixed 2.43 2.49 2.34 9.41 2.73 2.09 5.20 2.74 2.95 3.02 8.42 9.95 5.74 7.20 Combined Free 2.37 2.40 2.21 5.61 2.23 2.84 5.41 2.93 2.99 3.23 8.52 9.16 7.06 7.20 Fixed 2.32 2.34 2.27 9.17 2.11 3.63 4.33 3.09 3.01 2.67 7.79 10.61 11.57 7.20 Table 6. Qualitative comparison with previous works that attempted to infer the soft biometrics that we have considered. kFCV means k-Fold Cross-Validation, while HOCV means Hold the test set Out Cross-Validation in this study (see Figure 3). We achieved almost perfect Accuracy and MAE between 1-2 for both age and height under kFCV. We are not reporting kFCV results as it is a less realistic evaluation setup than HOCV, especially for the application scenarios listed in this paper. Ref. Users Free/Fixed Class Desktop/Phone kFCV/HOCV Accuracy/MAE [Giot and Rosenberger 2012] 133 Fixed Gender Desktop kFCV 91.63 [Fairhurst and Da Costa-Abreu 2011] 133 Fixed Gender Desktop kFCV 97.50 [Uzun et al. 2015] 100 Fixed Age Desktop kFCV 91.20 [Pentel 2017] 1519 Both Both Desktop kFCV 73.00 [Plank 2018] 144 Free Age Gender Desktop kFCV 63.50 73.25 [Tsimperidis et al. 2018] 75 Free Gender Desktop kFCV 95.60 [Li et al. 2019] 45 Free Gender Desktop kFCV 72.00 [Buker et al. 2019] 60 Free Gender Desktop kFCV 98.30 [Akis et al. 2014] 132 Fixed Age Gender Phone HOCV 60.30 75.20 [Idrus et al. 2014] 110 Both Age Gender Desktop HOCV 78.00 86.00 [Buriro et al. 2016] 150 Fixed Age Gender Phone HOCV 82.80 87.70 [Bandeira et al. 
2019] 100 Both Gender Desktop HOCV 71.30 This work 117 Free Gender Major Style Age Height The best of Desktop, Phone, Tablet, and Combined HOCV 93.02 85.37 96.15 1.77 2.70 This work 117 Fixed Gender Major Style Age Height The best of Desktop, Phone, Tablet, and Combined HOCV 88.37 87.80 96.15 2.04 2.65 \fOn the Inference of Soft Biometrics from Typing Patterns Collected in a Multi-device Environment Preprint, June 2020, 5.2 Ethical implications While in the introduction section, we have listed positive application scenarios, people with malicious intent can use the research presented in this work for destructive purposes. We, however, believe that the misuse can be prevented by developing existing as well as new public policies [Plank 2018]. 6"
+ },
+ {
+ "url": "http://arxiv.org/abs/2005.03687v2",
+ "title": "COBRA: Contrastive Bi-Modal Representation Algorithm",
+ "abstract": "There are a wide range of applications that involve multi-modal data, such as\ncross-modal retrieval, visual question-answering, and image captioning. Such\napplications are primarily dependent on aligned distributions of the different\nconstituent modalities. Existing approaches generate latent embeddings for each\nmodality in a joint fashion by representing them in a common manifold. However\nthese joint embedding spaces fail to sufficiently reduce the modality gap,\nwhich affects the performance in downstream tasks. We hypothesize that these\nembeddings retain the intra-class relationships but are unable to preserve the\ninter-class dynamics. In this paper, we present a novel framework COBRA that\naims to train two modalities (image and text) in a joint fashion inspired by\nthe Contrastive Predictive Coding (CPC) and Noise Contrastive Estimation (NCE)\nparadigms which preserve both inter and intra-class relationships. We\nempirically show that this framework reduces the modality gap significantly and\ngenerates a robust and task agnostic joint-embedding space. We outperform\nexisting work on four diverse downstream tasks spanning across seven benchmark\ncross-modal datasets.",
+ "authors": "Vishaal Udandarao, Abhishek Maiti, Deepak Srivatsav, Suryatej Reddy Vyalla, Yifang Yin, Rajiv Ratn Shah",
+ "published": "2020-05-07",
+ "updated": "2020-05-24",
+ "primary_cat": "cs.LG",
+ "cats": [
+ "cs.LG",
+ "stat.ML"
+ ],
+ "main_content": "Introduction Systems built on multi-modal data have been shown to perform better than systems that solely use uni-modal data [7, 49]. Due to this fact, multi-modal data is widely used in and generated by different large-scale applications. These applications often utilize this multi-modal data for tasks such as information retrieval [11, 44], classi\ufb01cation [48, 58], and question-answering [27, 35]. It is therefore important to represent such multi-modal data in a meaningful and interpretable fashion to enhance the performance of these large-scale applications. In this work, we focus on learning the joint cross-modal representations for images and text, but our proposed techniques can be easily extended to other modalities as well. Learning meaningful representations for multi-modal data is challenging because there exists a distributional shift between di\ufb00erent modalities [18, 37]. The lack of consistency in representations across modalities further magni\ufb01es this shift [6]. Due to such di\ufb03culties, any similarity metric between the representations \u2217Equal contribution. Ordered Randomly. across modalities is intractable to compute [37]. The reduction of this distributional shift boils down to two challenges: (1) projecting the representations of data belonging to di\ufb00erent modalities to a common manifold (also referred to as the joint embedding space), and (2) retaining their semantic relationship with other samples from the same class as well as di\ufb00erent classes. The need for a joint embedding space is emphasized by the inability of uni-modal representations to align well with each other. Over the last few years, literature [18, 29, 36] has been presented where the representations were modeled in the joint embedding space, but existing methods perform less satisfactorily as signi\ufb01cant semantic gap still exists among the learnt representations from di\ufb00erent modalities. We believe this is due to the fact that current crossmodal representation systems regularize the distance of pairs of representations of those data samples which belong to the same classes (but di\ufb00erent modalities) but not of pairs of representations belonging to di\ufb00erent classes (can be from the same or di\ufb00erent modalities). While current work [18, 36] has focused on conserving the semantic relationship between intra cross-modal data, i.e., belonging to the same class, we surmise that along with this, preserving inter crossmodal interactions will help the model learn a more discriminatory boundary between di\ufb00erent classes. Motivation: We posit that preserving the relationship between representations of samples belonging to di\ufb00erent classes, in a modality invariant fashion, can improve the quality of joint cross-modal embedding spaces. We formulate this hypothesis as it introduces a contrastive proximity mechanism between data belonging to different semantic classes. This distancing will allow the model to converge to a better generalizing decision boundary. Similar contrastive learning paradigms based on information gain have been performing very well in the self-supervised learning problem settings [19, 56, 59]. To the best of our knowledge, we are the \ufb01rst to propose a method to learn joint cross-modal embeddings based on contrastive learning paradigms. 
Contributions: Our contributions are as follows:
• We propose a novel joint cross-modal embedding framework called COBRA (COntrastive Bi-modal Representation Algorithm), which represents the data across different modalities (text and image in this study) in a common manifold.
• We utilize a set of loss functions in a novel way, which jointly preserve not only the relationship between intra cross-modal data samples but also the relationship between inter cross-modal data samples (refer Figure 1).
• We empirically validate our model by achieving state-of-the-art results on four diverse downstream tasks: (1) cross-modal retrieval, (2) fine-grained multi-modal sentiment classification, (3) multi-modal fake news detection, and (4) multi-modal disaster classification.

an end-to-end model which learns both intra-modality and inter-modality dynamics for the task of sentiment classification. Pham et al. [40] proposed Seq2Seq2Sentiment, an unsupervised method for learning joint multi-modal representations using sequence to sequence models. Wang et al. [63] discussed a new fusion method, TransModality, which uses transformers in an end-to-end fashion for multi-modal sentiment analysis.

2.8 Multi-modal Disaster Classification
Gautam et al. [13] developed a novel decision diffusion technique on the CrisisMMD dataset [2, 34] to classify disaster-related data into informative and non-informative categories using image and text uni-modal models. Agarwal et al. [1] proposed the Multimodal Emergency Management Information System (MEMIS), which leverages both visual and textual features on the same dataset. Their system outperforms all other existing uni-modal methods.

3 Methodology
In this section, we first explain the formulation of our problem statement in terms of the data we use. We then introduce and explain the architecture of our model, along with the loss functions used. We finally explain our optimization and training strategy.

3.1 Problem Formulation
Let us assume that we have two modalities, i.e., text and image. We denote the $j$-th image sample as $x^j_I \in \mathbb{R}^{d_I}$ and the $j$-th text sample as $x^j_T \in \mathbb{R}^{d_T}$, where $d_I$ and $d_T$ represent the dimensionality of the image and text samples respectively. We denote the image dataset as $X_I = \{x^0_I, x^1_I, \dots, x^{n_I-1}_I\}$ and the text dataset as $X_T = \{x^0_T, x^1_T, \dots, x^{n_T-1}_T\}$, where $n_I$ and $n_T$ denote the total number of data samples in the image and text datasets respectively. The corresponding labels for the image and text modalities are represented as $Y_I = [y^0_I, y^1_I, \dots, y^{n_I-1}_I]$ and $Y_T = [y^0_T, y^1_T, \dots, y^{n_T-1}_T]$. Assuming there are $C$ distinct semantic classes in our multi-modal dataset, the labels satisfy $y^{j_I}_I, y^{j_T}_T \in \{0, 1, \dots, C-1\}$ for all $j_I \in \{0, 1, \dots, n_I-1\}$ and $j_T \in \{0, 1, \dots, n_T-1\}$.
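For concreteness, the following is a minimal sketch of this data setup in PyTorch (the framework adopted in Section 3.3). The sample counts, feature dimensions and class count are illustrative placeholders rather than values from any of the datasets used later.

import torch
import torch.nn.functional as F

# Toy instantiation of the problem formulation: n_I image samples of dimension d_I,
# n_T text samples of dimension d_T, and C shared semantic classes.
n_I, n_T, d_I, d_T, C = 1000, 1000, 4096, 300, 10   # placeholder sizes

X_I = torch.randn(n_I, d_I)          # image features x_I^j
X_T = torch.randn(n_T, d_T)          # text features  x_T^j
Y_I = torch.randint(0, C, (n_I,))    # image labels   y_I^j in {0, ..., C-1}
Y_T = torch.randint(0, C, (n_T,))    # text labels    y_T^j in {0, ..., C-1}

# One-hot versions of the labels, used later by the supervised loss (Eq. 3).
Y_I_onehot = F.one_hot(Y_I, C).float()
Y_T_onehot = F.one_hot(Y_T, C).float()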
3.2 Model Architecture
The overall architecture for our model is given in Figure 2. Our goal is to represent the data in a common manifold, such that the class-wise representations are modality invariant and discriminatory. To this end, we use an autoencoder for each modality to generate representations that are high fidelity in nature. We utilize an orthogonal transform layer, which takes as input the hidden space representations from the encoders of each modality and projects these representations into a joint space that is modality invariant and discriminates well between classes.

We denote the encoded representation as $z^i_j = f_j(x^i_j, \Theta_j)$ and the reconstructed sample as $\hat{x}^i_j = g_j(z^i_j, \Phi_j)$, where $i \in \{0, \dots, n_T-1\}$ for text, $i \in \{0, \dots, n_I-1\}$ for image, and $j \in \{T, I\}$ indexes the modality. Here $f_j$ denotes the encoder of the $j$-th modality, parameterised by $\Theta_j$, and $g_j$ denotes the decoder of the $j$-th modality, parameterised by $\Phi_j$. Given the representations $z^i_T$ and $z^i_I$, which have dimensions $Z_T$ and $Z_I$, we project them to a joint subspace such that the representations of different semantic classes are orthogonal to each other [18]. We call these projections $O^i_T$ and $O^i_I$, both of which have dimension $Z$.

We define the loss function in COBRA as a weighted sum of the reconstruction loss, cross-modal loss, supervised loss and contrastive loss, the details of which are introduced below. To preserve inter-class dynamics, we introduce a contrastive loss, which to the best of our knowledge has not previously been used for learning multi-modal representations.

3.2.1 Reconstruction Loss
A reconstruction loss is applied to each autoencoder. Given the decoder output $\hat{x}^j_i$ and the input $x^j_i$, we define the reconstruction loss shown in Eq. 1 as:
$L_R = \sum_{i \in \{I, T\}} \sum_{j=0}^{n_i - 1} \| \hat{x}^j_i - x^j_i \|_2^2$ (1)

3.2.2 Cross-Modal Loss
The projected representations $O^j_I$ and $O^j_T$ align class representations within each modality. The cross-modal loss aims to align representations of the same class across different modalities. Given the projected representations $O^j_I$ and $O^j_T$, we define the cross-modal loss shown in Eq. 2 as:
$L_M = \sum_{j=0}^{\min\{n_T, n_I\} - 1} \| O^j_T - O^j_I \|_2^2$ (2)
We use the min function because the dataset may not have an equal number of text and image samples; we only take those pairs for which both the corresponding text and image samples are present.

3.2.3 Supervised Loss
As we try to model an orthogonal latent space containing the joint embeddings, we utilize the one-hot labels of the data samples to reinforce that samples belonging to the same class but different modalities are grouped together in the same subspace. Let $\hat{y}^j_i$ be the one-hot encoded label for the $j$-th sample of the $i$-th modality, and $O^j_i$ be its projected representation. We then define the supervised loss shown in Eq. 3 as:
$L_S = \sum_{i \in \{I, T\}} \sum_{j=0}^{n_i - 1} \| O^j_i - \hat{y}^j_i \|_2^2$ (3)
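For illustration, a minimal PyTorch sketch of Eqs. 1-3 is given below, assuming the reconstructions, projections and one-hot labels have already been computed as batched tensors. The function and tensor names simply mirror the notation above and are not taken from the authors' released code.

import torch

def reconstruction_loss(x_hat_I, x_I, x_hat_T, x_T):
    # Eq. 1: squared reconstruction error summed over both modalities.
    return ((x_hat_I - x_I) ** 2).sum() + ((x_hat_T - x_T) ** 2).sum()

def cross_modal_loss(O_I, O_T):
    # Eq. 2: align the projections of paired image/text samples.
    # Only the first min(n_I, n_T) rows are assumed to form valid pairs.
    n = min(O_I.shape[0], O_T.shape[0])
    return ((O_T[:n] - O_I[:n]) ** 2).sum()

def supervised_loss(O_I, y_I_onehot, O_T, y_T_onehot):
    # Eq. 3: pull each projected representation towards its one-hot label,
    # which requires the joint dimension Z to equal the number of classes C.
    return ((O_I - y_I_onehot) ** 2).sum() + ((O_T - y_T_onehot) ** 2).sum()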
3.2.4 Contrastive Loss
As stated in recent literature [5, 56, 57], to implement the contrastive loss [15, 54], the definitions of positive and negative samples are of utmost importance. We first define the positive and negative samples pertaining to our model. Given the projected representations $O^i_I$ and $O^i_T$, a positive pair is defined as the representations of two data samples belonging to the same modality and the same class. A negative pair is defined as the representations of two data samples belonging to the same or different modalities but to different classes. To define the contrastive loss, a scoring function is required which yields high values for positive samples and low values for negative samples. Here we define the scoring function as the dot product of the representations in the joint embedding space. Following several recent works [8, 19, 24, 59], our loss function requires the model to identify the positive sample within a fixed-size set $S = \{p, n_1, n_2, \dots, n_N\}$ containing one positive and $N$ negative samples. We formulate our contrastive loss shown in Eq. 4 as:
$L_C = -\mathbb{E}_S \left[ \log \frac{a^T p}{a^T p + \sum_{i=1}^{N} a^T n_i} \right]$ (4)
where $a$ is the anchor point, $p$ is its corresponding positive sample, $\mathbb{E}$ is an expectation operator over all possible permutations of $S$, and $n_i$ iterates over the negative samples. The anchor, positive and negative samples are randomly drawn from each mini-batch. We minimize the above expectation over all samples. Since fetching negative samples from the entire dataset is computationally infeasible, we sample the negative points only from each mini-batch locally.

Since we sample only a finite set of negative samples, the model can miss out on characteristics of the distribution of the joint embeddings. To avoid this, we employ the Noise Contrastive Estimation (NCE) [15] loss, which is an effective method for estimating unnormalized models. NCE models the distribution of the negative samples by leveraging a proxy noise distribution: it estimates the probability that a sample comes from the joint distribution rather than from the noise distribution. The noise distribution is assumed to be uniform. Denoting the joint distribution of positive samples as $p_J$, the noise distribution as $p_N$, the anchor sample as $a$ and every other sample (which can be either positive or negative) as $s$, the probability of a data sample $s$ coming from the joint distribution $p_J$ is:
$P(X = 1 \mid s; a) = \frac{p_J(s \mid a)}{p_J(s \mid a) + N \, p_N(s \mid a)}$ (5)
where $N$ is the number of samples from the noise distribution. Instead of using Eq. 4, we can now estimate the contrastive loss more accurately with Eq. 6:
$L_C = -\mathbb{E}_a \left\{ \mathbb{E}_{s \sim p_J(\cdot \mid a)} \left[ \log P(X = 1 \mid s; a) \right] + N \times \mathbb{E}_{s \sim p_N(\cdot \mid a)} \left[ \log \left( 1 - P(X = 1 \mid s; a) \right) \right] \right\}$ (6)
where $\mathbb{E}_a$ is an expectation over all possible anchor samples, $\mathbb{E}_{s \sim p_J}$ is an expectation over all possible positive samples (corresponding to anchor $a$) from the joint distribution $p_J$, and $\mathbb{E}_{s \sim p_N}$ is an expectation over all samples from the noise distribution $p_N$.
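A compact PyTorch sketch of the mini-batch contrastive term is shown below. It uses the common softmax-over-dot-products (InfoNCE-style) reading of Eq. 4 with in-batch negatives; the exact sampling of anchors, positives and negatives, and the NCE variant of Eq. 6, are left out, so this should be read as an illustration rather than the authors' implementation.

import torch
import torch.nn.functional as F

def contrastive_loss(anchor, positive, negatives):
    # anchor:    (B, Z)    projected representations a
    # positive:  (B, Z)    one positive p per anchor (same modality, same class)
    # negatives: (B, N, Z) N in-batch negatives per anchor (different classes)
    pos_score = (anchor * positive).sum(dim=-1, keepdim=True)    # (B, 1), a^T p
    neg_score = torch.einsum('bz,bnz->bn', anchor, negatives)    # (B, N), a^T n_i
    logits = torch.cat([pos_score, neg_score], dim=1)            # (B, 1 + N)
    # Cross-entropy against index 0 is the negative log of the softmax weight
    # assigned to the positive sample, i.e. a softmax reading of Eq. 4 per anchor.
    targets = torch.zeros(anchor.shape[0], dtype=torch.long, device=anchor.device)
    return F.cross_entropy(logits, targets)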
3.3 Optimization and Training Strategy
The overall loss of our network is defined as a weighted sum of the reconstruction loss, cross-modal loss, supervised loss and contrastive loss, where the weights are treated as hyperparameters:
$L = \lambda_R L_R + \lambda_S L_S + \lambda_M L_M + \lambda_C L_C$ (7)
The objective function in Eq. 7 is optimized using stochastic gradient descent. The loss is summed over all modalities, and the corresponding gradient is propagated through all components of the model. The optimization process of our proposed network is illustrated in Algorithm 1. We adopted the PyTorch framework for implementation, and trained our models for 200 epochs on an Nvidia GTX 1050 GPU (code available at https://github.com/ovshake/cobra).

Algorithm 1: Flow of the COBRA algorithm
Input: the image training set $X_I$, the text training set $X_T$, the image label set $Y_I$, the text label set $Y_T$, the dimensionality of the joint embedding space $Z$, image batch size $b_I$, text batch size $b_T$, learning rate $\eta$, hyperparameters $\lambda_M, \lambda_C, \lambda_S, \lambda_R$, number of training epochs $N$, and number of iterations (batch count) per epoch $B$.
Output: the optimal encoder weights $\Theta_I, \Theta_T$ and optimal decoder weights $\Phi_I, \Phi_T$.
1. Initialize $\Theta_I, \Theta_T, \Phi_I, \Phi_T$ randomly
2. for $i = 1, 2, \dots, N$ do
3.   for $b = 1, 2, \dots, B$ do
4.     Sample a random text minibatch $m_T$ of size $b_T$
5.     Sample a random image minibatch $m_I$ of size $b_I$
6.     Compute the image and text encoded latent representations $z_I$ and $z_T$
7.     Compute the image and text orthogonal projections $O_I$ and $O_T$
8.     Compute the image and text reconstructions $\hat{x}_I$ and $\hat{x}_T$
9.     Compute the losses $L_R$ (Eq. 1), $L_M$ (Eq. 2), $L_S$ (Eq. 3), and $L_C$ (Eq. 4, 6)
10.    Compute the total loss (Eq. 7): $L = \lambda_S L_S + \lambda_R L_R + \lambda_M L_M + \lambda_C L_C$
11.    Update the model weights using an SGD update rule:
12.    $\Theta_I \leftarrow \Theta_I - \eta \frac{\partial L}{\partial \Theta_I}$; $\Theta_T \leftarrow \Theta_T - \eta \frac{\partial L}{\partial \Theta_T}$
13.    $\Phi_I \leftarrow \Phi_I - \eta \frac{\partial L}{\partial \Phi_I}$; $\Phi_T \leftarrow \Phi_T - \eta \frac{\partial L}{\partial \Phi_T}$
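To make Algorithm 1 concrete, the sketch below wires the pieces together into a single PyTorch training step. The linear encoder, decoder and projection modules, the layer sizes and the unit loss weights are placeholder choices for illustration, and contrastive_loss_fn and sample_negatives are hypothetical callables standing in for the contrastive term of Eq. 4/6 and an in-batch sampling helper.

import torch
import torch.nn as nn
import torch.nn.functional as F

d_I, d_T, h, C = 4096, 300, 512, 10              # illustrative dimensions; Z = C here
lam_R, lam_M, lam_S, lam_C = 1.0, 1.0, 1.0, 1.0  # placeholder loss weights

enc_I, dec_I = nn.Linear(d_I, h), nn.Linear(h, d_I)  # image encoder f_I and decoder g_I
enc_T, dec_T = nn.Linear(d_T, h), nn.Linear(h, d_T)  # text encoder f_T and decoder g_T
proj_I, proj_T = nn.Linear(h, C), nn.Linear(h, C)    # projections to the joint space

modules = (enc_I, dec_I, enc_T, dec_T, proj_I, proj_T)
opt = torch.optim.SGD([p for m in modules for p in m.parameters()], lr=1e-3)

def train_step(x_I, y_I, x_T, y_T, contrastive_loss_fn, sample_negatives):
    z_I, z_T = enc_I(x_I), enc_T(x_T)            # encoded latent representations
    O_I, O_T = proj_I(z_I), proj_T(z_T)          # joint-space projections
    x_hat_I, x_hat_T = dec_I(z_I), dec_T(z_T)    # reconstructions

    y_I_1h, y_T_1h = F.one_hot(y_I, C).float(), F.one_hot(y_T, C).float()

    L_R = ((x_hat_I - x_I) ** 2).sum() + ((x_hat_T - x_T) ** 2).sum()   # Eq. 1
    L_M = ((O_T - O_I) ** 2).sum()                                      # Eq. 2 (index-aligned pairs)
    L_S = ((O_I - y_I_1h) ** 2).sum() + ((O_T - y_T_1h) ** 2).sum()     # Eq. 3

    anchor, positive, negatives = sample_negatives(O_I, O_T, y_I, y_T)  # in-batch sampling (hypothetical)
    L_C = contrastive_loss_fn(anchor, positive, negatives)              # Eq. 4 / 6

    loss = lam_R * L_R + lam_M * L_M + lam_S * L_S + lam_C * L_C        # Eq. 7
    opt.zero_grad()
    loss.backward()
    opt.step()
    return loss.item()

The contrastive_loss sketch given after Eq. 6 above is one possible choice for contrastive_loss_fn.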
4 Experiments
To evaluate our proposed method, we test our model on four different tasks, namely, cross-modal retrieval, multi-modal fake news detection, multi-modal sentiment classification, and multi-modal disaster classification. We compare the performance of our model against state-of-the-art models of the corresponding tasks. In the following sections, we describe the datasets and evaluation metrics adopted, followed by the results achieved on each downstream task mentioned above.

4.1 Cross-Modal Retrieval
In the task of cross-modal retrieval, we use COBRA to retrieve an image given a text query, or a text sample given an image query.

4.1.1 Datasets
For the cross-modal retrieval task, we utilize four different datasets. For the Wikipedia [46], MS-COCO [26], and NUS-Wide 10k [9] datasets, we convert the images into 4096-dimensional feature vectors using the fc7 layer of VGGnet [51]. In the Wikipedia and MS-COCO datasets, we convert the texts into 300-dimensional feature vectors using Doc2Vec [25]. For the NUS-Wide 10k dataset, we convert the text into 1000-dimensional Bag of Words feature vectors. The PKU-XMedia dataset [39, 69] contains texts represented as 3000-dimensional Bag of Words feature vectors and images represented as 4096-dimensional feature vectors, generated using the fc7 layer of VGGnet [51].

Table 1: Performance (mAP) on the Wikipedia Dataset
Method | Image → Text | Text → Image | Average
MCCA [47] | 0.202 | 0.189 | 0.195
ml-CCA [42] | 0.388 | 0.356 | 0.372
DDCAE [61] | 0.308 | 0.290 | 0.299
JRL [70] | 0.343 | 0.376 | 0.330
ACMR [60] | 0.479 | 0.426 | 0.452
CMDN [36] | 0.487 | 0.427 | 0.457
CCL [38] | 0.504 | 0.457 | 0.481
D-SCMR [72] | 0.521 | 0.478 | 0.499
SDML [18] | 0.522 | 0.488 | 0.505
DAML [66] | 0.559 | 0.481 | 0.520
COBRA | 0.742 | 0.739 | 0.740

• The Wikipedia dataset [46] contains 2866 text-image pairs, divided into 10 semantic classes, such as warfare, art & architecture and media. We use a training, validation and test set of 2173, 231 and 462 text-image pairs [46] respectively.
• The PKU-XMedia dataset [39, 69] contains 5000 text-image pairs, divided into 20 semantic classes. We use a training, validation and test set of 4000, 500 and 500 text-image pairs [39, 69] respectively.
• The MS-COCO dataset [26] contains 82079 text-image pairs, divided into 80 semantic classes. We use a training, validation and test set of 57455, 14624 and 10000 text-image pairs [18] respectively.
• The NUS-Wide 10k dataset [9] contains 10000 text-image pairs, divided into 10 semantic classes. We use a training, validation and test set of 8000, 1000 and 1000 text-image pairs [60] respectively.

4.1.2 Evaluation Metrics
We compare our performance against state-of-the-art models based on Mean Average Precision (mAP). For a fair comparison, we ensure that we use the same features across models.
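For reference, mAP for cross-modal retrieval can be computed along the following lines, ranking the gallery by cosine similarity in the joint space and counting a gallery item as relevant when it shares the query's class. This is a generic sketch of the metric, not the exact evaluation script used for the tables below.

import numpy as np

def mean_average_precision(query_emb, gallery_emb, query_labels, gallery_labels):
    # Cosine similarity between every query and every gallery item.
    q = query_emb / np.linalg.norm(query_emb, axis=1, keepdims=True)
    g = gallery_emb / np.linalg.norm(gallery_emb, axis=1, keepdims=True)
    sims = q @ g.T
    aps = []
    for i in range(len(query_labels)):
        order = np.argsort(-sims[i])                                  # rank gallery by similarity
        relevant = (gallery_labels[order] == query_labels[i]).astype(float)
        if relevant.sum() == 0:
            continue                                                  # no relevant items for this query
        precision_at_k = np.cumsum(relevant) / (np.arange(len(relevant)) + 1)
        aps.append((precision_at_k * relevant).sum() / relevant.sum())
    return float(np.mean(aps))

# Image-to-text retrieval would use image projections as queries and text projections
# as the gallery, e.g. mean_average_precision(O_I, O_T, y_I, y_T).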
4.1.3 Results
We report the highest mAP for Text to Image (TTI) and Image to Text (ITT) retrieval on all four datasets. From the t-SNE [28] plot for Wikipedia given in Figure 3a, we observe that COBRA is able to effectively form joint embeddings for different classes across modalities, resulting in superior performance across the aforementioned datasets. We achieve a 22% improvement over the previous state-of-the-art (DAML [66]) on the Wikipedia dataset (Table 1), a 3% improvement over the previous state-of-the-art (SDML [18]) on the MS-COCO dataset (Table 2), a 3.5% improvement over the previous state-of-the-art (SDML [18]) on the PKU-XMedia dataset (Table 3), and a 10.9% improvement over the previous state-of-the-art (ACMR [60]) on the NUS-Wide 10k dataset (Table 4).

Table 2: Performance (mAP) on the MS-COCO Dataset
Method | Image → Text | Text → Image | Average
MCCA [47] | 0.646 | 0.640 | 0.643
ml-CCA [42] | 0.667 | 0.661 | 0.664
DDCAE [61] | 0.412 | 0.411 | 0.411
ACMR [60] | 0.692 | 0.687 | 0.690
DCCA [4] | 0.415 | 0.414 | 0.415
GSS-SL [71] | 0.707 | 0.702 | 0.705
SDML [18] | 0.827 | 0.818 | 0.823
COBRA | 0.854 | 0.853 | 0.853

Table 3: Performance (mAP) on the PKU-XMedia Dataset
Method | Image → Text | Text → Image | Average
MCCA [47] | 0.620 | 0.616 | 0.618
DDCAE [61] | 0.868 | 0.878 | 0.873
JRL [70] | 0.770 | 0.788 | 0.779
ACMR [60] | 0.882 | 0.885 | 0.883
CMDN [36] | 0.485 | 0.516 | 0.501
DCCA [4] | 0.869 | 0.871 | 0.870
GSS-SL [71] | 0.875 | 0.878 | 0.876
SDML [18] | 0.899 | 0.917 | 0.908
COBRA | 0.945 | 0.941 | 0.943

Table 4: Performance (mAP) on the NUS-Wide 10k Dataset
Method | Image → Text | Text → Image | Average
MCCA [47] | 0.448 | 0.462 | 0.455
DDCAE [61] | 0.511 | 0.540 | 0.525
JRL [70] | 0.586 | 0.598 | 0.592
ACMR [60] | 0.588 | 0.599 | 0.593
CMDN [36] | 0.492 | 0.515 | 0.504
CCL [38] | 0.506 | 0.535 | 0.521
DCCA [4] | 0.532 | 0.549 | 0.540
SDML [18] | 0.550 | 0.505 | 0.527
DAML [66] | 0.512 | 0.534 | 0.523
COBRA | 0.703 | 0.701 | 0.702

4.2 Multi-modal Fake News Detection
In the task of multi-modal fake news detection, we use COBRA to determine whether a given bi-modal query (text and image) corresponds to a real or fake news sample.

4.2.1 Datasets
For the multi-modal fake news detection task, we utilize the FakeNewsNet repository [50]. This repository contains two datasets, namely, Politifact and Gossipcop, which contain news content, social context, and dynamic information. We pre-process the data similarly to SpotFake+ [52]. For both datasets, we convert images into 4096-dimensional feature vectors using VGGnet [51], and we convert texts into 38400-dimensional feature vectors using XLNet [67]. Each dataset contains two semantic classes, namely, Real and Fake.

Table 5: Accuracy on the FakeNewsNet dataset
Method | Politifact (%) | Gossipcop (%)
EANN [62] | 74 | 86
MVAE [23] | 67.3 | 77.5
SpotFake [53] | 72.1 | 80.7
SpotFake+ [52] | 84.6 | 85.6
COBRA | 86 | 86.7

• The Politifact dataset contains 1056 text-image pairs. We get 321 Real and 164 Fake text-image pairs after pre-processing. We use a training, validation and test set of 381, 50 and 54 text-image pairs [52] respectively.
• The Gossipcop dataset contains 22140 text-image pairs. We get 10259 Real and 2581 Fake text-image pairs after pre-processing. We use a training, validation and test set of 10010, 1830 and 1000 text-image pairs [52] respectively.

4.2.2 Evaluation Metrics
We compare our performance against existing state-of-the-art models based on the number of correctly classified queries (accuracy). For the purpose of our evaluation, we ensure that we use the same features that were used by the other existing state-of-the-art models. To visualize the purity of the joint embedding space for different classes and modality samples, we plot the joint embeddings of COBRA trained on both the Gossipcop and Politifact datasets. We plot the embeddings (Figures 3b and 3c) by employing the t-SNE [28] transformation to reduce the high-dimensional joint embeddings ($O_I$ and $O_T$) to 3-dimensional data points. The figures clearly exhibit the high discrimination between samples of different classes in the joint embedding space. This provides further empirical validation for the high class divergence across the joint embedding space, irrespective of the modalities of the data points.
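A t-SNE projection of this kind can be produced with standard tooling, roughly as sketched below; the 3-D reduction mirrors the description above, while the specific library calls and plotting choices are illustrative rather than prescribed by the paper.

import numpy as np
import matplotlib.pyplot as plt
from sklearn.manifold import TSNE

def plot_joint_embeddings(O_I, O_T, y_I, y_T):
    # Stack projections from both modalities and reduce them to 3 dimensions.
    embeddings = np.concatenate([O_I, O_T], axis=0)
    labels = np.concatenate([y_I, y_T], axis=0)
    points = TSNE(n_components=3, init='random', perplexity=30).fit_transform(embeddings)

    fig = plt.figure()
    ax = fig.add_subplot(projection='3d')
    ax.scatter(points[:, 0], points[:, 1], points[:, 2], c=labels, s=5, cmap='tab10')
    ax.set_title('Joint embedding space (t-SNE)')
    plt.show()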
4.2.3 Results We achieve a 1.4% and a 1.1% improvement over the previous stateof-the-art (SpotFake+ [52]) on the Politifact and Gossipcop dataset respectively (Table 5). On observing the t-SNE plots in Figure 3, we discern a high intra-class variability in the Gossipcop dataset. We believe that there is only a small improvement because of the high class imbalance in these two datasets. 4.3 Multi-modal Fine-grained Sentiment Classi\ufb01cation In the task of multi-modal \ufb01ne-grained sentiment classi\ufb01cation, we use COBRA to perform ten tasks of classifying a given bi-modal query (text and image) into a sentiment category. 4.3.1 Datasets For the multi-modal \ufb01ne-grained sentiment classi\ufb01cation task, we analyze the performance of our model on the MeTooMA dataset [12]. This dataset contains 9973 tweets that have been manually annotated into 10 classes, namely, text only informative and image only informative (Relevance), Support, Opposition and Neither (Stance), Directed Hate and Generalized Hate (Hate Speech), Allegation, Refutation and Justi\ufb01cation (Dialogue acts), and sarcasm. We convert the images into 4096-dimensional feature vectors using the fc7 layer of VGGnet [51]. We convert the texts into 300-dimensional feature vectors using Doc2Vec [25]. We use a training, validation and test set of 4500, 1000 and 1000 text-image pairs respectively, across all models that we test. 4.3.2 Evaluation Metrics We report the number of correctly classi\ufb01ed queries (accuracy). To the best of our knowledge, we are the \ufb01rst to test a multi-modal classi\ufb01cation model on this dataset. To this end, we evaluate our model against a Text-only and Image-only baseline, and Early Fusion. For the baselines, we use a Fully Connected network2. 4.3.3 Results We obtain an average classi\ufb01cation accuracy of 88.32% across all classes on the MeTooMA Dataset. This is a 1.2% improvement over Early Fusion (Table 6). We observe a low increase in Text only and Image only informative tasks due to the fact that 53.2% of our training data had text-image pairs with con\ufb02icting labels, i.e., from a given text-image pair, the text may be labelled as \u201crelevant\u201d whereas the corresponding image may be labelled as \u201cirrelevant\u201d. Furthermore, for classes under the Hate Speech, Sarcasm, and Dialogue Acts categories, we observe that there are less than 600 samples for each class. In categories such as Stance, where the \u2018Support\u2019 class has over 3000 samples, we observe much larger improvements in performance. 4.4 Multi-modal Disaster Classi\ufb01cation In the task of multi-modal disaster classi\ufb01cation, we use COBRA to perform three classi\ufb01cation tasks given a bi-modal (text and image) query. The classi\ufb01cation tasks are further explained in the dataset section as follows. 4.4.1 Datasets For the multi-modal disaster classi\ufb01cation task, we utilize the CrisisMMD dataset [2, 34]. It consists of 16058 tweets and 18082 images that were collected during natural disasters. There are 3 classi\ufb01cation tasks that can be performed on this dataset \u2014 \u2022 Informative or Non-Informative classi\ufb01cation \u2013 this represents whether or not a particular text-image pair from a tweet is informative. \u2022 Humanitarian Categories classi\ufb01cation \u2013 this includes classes such as a\ufb00ected individuals, vehicle damage, missing or found people, and infrastructure or utility damage. This is once again done for a particular text-image pair from a tweet. 
• Damage severity assessment – this includes classes such as severe damage, mild damage and little or no damage. This is once again done for a particular text-image pair from a tweet.
We convert the images into 4096-dimensional feature vectors using the fc7 layer of VGGnet [51]. We convert the texts into 300-dimensional feature vectors using Doc2Vec [25]. We use a training set of 2000 text-image pairs, a validation set of 793 text-image pairs for the first two classification tasks, a validation set of size 697 for the third classification task, and a test set of 500 text-image pairs.
2 Architectural details can be found in the supplementary material."
+ }
+ ],
+ "Samuel Albanie": [
+ {
+ "url": "http://arxiv.org/abs/2304.00521v1",
+ "title": "Large Language Models are Few-shot Publication Scoopers",
+ "abstract": "Driven by recent advances AI, we passengers are entering a golden age of\nscientific discovery. But golden for whom? Confronting our insecurity that\nothers may beat us to the most acclaimed breakthroughs of the era, we propose a\nnovel solution to the long-standing personal credit assignment problem to\nensure that it is golden for us. At the heart of our approach is a\npip-to-the-post algorithm that assures adulatory Wikipedia pages without\nincurring the substantial capital and career risks of pursuing high impact\nscience with conventional research methodologies. By leveraging the meta trend\nof leveraging large language models for everything, we demonstrate the\nunparalleled potential of our algorithm to scoop groundbreaking findings with\nthe insouciance of a seasoned researcher at a dessert buffet.",
+ "authors": "Samuel Albanie, Liliane Momeni, Jo\u00e3o F. Henriques",
+ "published": "2023-04-02",
+ "updated": "2023-04-02",
+ "primary_cat": "cs.DL",
+ "cats": [
+ "cs.DL",
+ "cs.LG"
+ ],
+ "main_content": "INTRODUCTION When Isaac Newton raced ahead of Robert Hooke and de\ufb01ed the Royal Society\u2019s Social Media Ban to promote his inverse-square law of gravity pre-print in 1686, he exempli\ufb01ed the glorious pursuit of scienti\ufb01c priority1 that has long galvanised bof\ufb01ns the world over.2 Unfortunately, the unrelenting pursuit of personal credit assignment is an activity in decline. Few modern scienti\ufb01c feuds match the intensity of the late 16th century public Priorit\u00a8 atsstreit3 between astronomers Tycho and Ursus over credit for the geoheliocentric model (a spat that involved, inter alia, dramatic midnight raids on bedrooms to retrieve allegedly stolen diagrams from trouser pockets (Worrall, 1985)). Instead, \ufb01elds such as Machine Learning, which could long be relied upon to generate such drama, have degenerated into a head-to-head showdown with Particle Physics in a quest to show which is more of a \u201cteam sport\u201d through feats of collaboration4. Indeed, fuelled by a seemingly inexhaustible supply of memes, technical prowess and esprit de corps, distributed open-source collectives now represent a major contributor of high-impact breakthroughs. In tandem, well-funded technology \ufb01rms have gathered their researchers into ever larger familial structures and task forces.5 1This was far from Newton\u2019s only scienti\ufb01c priority fracas. Asked what he thought of Leibniz\u2019 work, Newton quipped \u201cderivative\u201d, before laughing so hard that in\ufb01nitesimal tears ran down his cheeks. 2Newton\u2019s Principia was \ufb01nanced by Halley, who\u2019d discussed the problem with both Newton and Hooke. The Royal Society had planned to fund Newton\u2019s publication, but they had entirely exhausted their book budget on De Historia Piscium (Of the History of Fish), by Francis Willughby, a scholarly work that surprisingly failed to achieve best-seller status. 3A \u201cpriority dispute\u201d. We\u2019ve used German to remind the reader that this is serious business. 4At the time of writing, High Energy Physics maintains a comfortable lead, with a 5,154 author paper estimating the size of the Higgs Boson (Aad et al., 2015). 5This excludes the CFO, who instead nervously increments variables on the communal slurm.conf. arXiv:2304.00521v1 [cs.DL] 2 Apr 2023 \fSIGBOVIK 2023 This all sounds wonderfully warm and fuzzy, but let us consider its consequence for those of us with the onerous time commitments of hourly checking our Google Scholar pro\ufb01le and Twitter follower count, prohibiting effective participation in such teams. Modern reviewers, unaware of whether they are reviewing a submission from three authors or thirty-three, have high expectations. Standards have been raised. The sad result is that meaningful contributions in the era of big-discord-science have become terribly hard work. Figure 1: Award certi\ufb01cate presented at CVPR 1983. Entitling authors obtaining two scoops to a deliciously \ufb01brous breakfast. Even if we were to develop some self control and \ufb01nd time to join these teams, there is a second problem. The whole point of doing science is to achieve personal glory while strongly signalling that we are not motivated by a desire for personal glory. It is entirely natural to harbour a healthy clandestine lust for prizes, international fame and a lifetime supply of Cheerios from an adoring sponsor. But once we shackle ourselves to a high-performing team, who receives the credit? 
It would be simply awful to contribute a breakthrough and then be forced to share the Cheerios. After all, as wisely noted by the Nobel committee, the maximum number of people that can possibly discover something interesting is three. In this work, we propose the use of scooping—the act of publishing an important result before others who pursue a similar agenda—as a novel, efficient and practical solution to the Cheerios problem. A baseline of “Two Scoops” has long been considered sufficient for sponsorship by Kellogg's Raisin Bran (Fig. 1), but we crave tastier cereal and unbounded scoops. Thus, while scooping to date has been a largely passive affair, we draw inspiration from Ursus' purported plagiarism of Tycho and develop an active scientific scooping framework as a basis for our solution. We make three contributions. First, we formalise the Cheerios problem. Second, we advance arguments for the increased algorithmic and financial efficiency of proactive scooping over the existing (largely-passive) scooping paradigm for resolving this challenging breakfast dilemma. Third, we demonstrate practical few-shot active scooping by leveraging a recent increment in the absolutely concerning series {GPT-n : n ∈ N}, a 7-day free trial premium Overleaf subscription and 104 Twitter puppet accounts to scoop multiple high-impact publications on the topic of robust flower breed classification.
I certainly should be vexed if any one were to publish my doctrines before me. I want me those Cheerios. Charles Darwin, 1856
2 RELATED ANTI-TEAMWORK
Scientific Priority. Seminal work by Merton (1957) established the key role of scientific priority as a reward signal to encourage originality (mildly tempered by a respectable emphasis on humility6). Kuhn (1962) observed that it was often simply impossible to assign scientific priority to an individual when a “discovery” does not constitute an isolated event. That shouldn't stop us trying to both assign and claim priority. Differently from prior work that has sought to understand the phenomenon of scientific priority, we focus on the application of Large Language Models to its accrual.
6 In addition to humility, certain fields, such as mathematics, also encourage understatement. This likely stems from a healthy fear of exclamation marks. There are few things more explosive than a misplaced factorial.
Scooping. The rush to preempt a competitor has long engaged the titans of science. Prior to the inconvenient loss of his head, Lavoisier scooped his rival Priestly to claim the discovery of Oxygen. Watson and Crick openly discuss their strenuous efforts in 1953 to beat Wilkins and Franklin to the DNA structure (Watson, 1968). Even the gentle Darwin was spurred into action in 1858 by learning that Wallace had crafted a similar theory and might publish before him. While these researchers limited themselves to scoops that fall within their expertise, we propose to use Large Language Models to broaden the scooping scope to fields that we are entirely ignorant of (watch out, petrologists).
The value of moral flexibility. Ever since Feyerabend (1975) determined Science to be a lawless land where “anything goes”, methods such as n-Dimensional Polytope Schemes (Fouhey & Maturana, 2013) and Deep Industrial Espionage (Albanie et al., 2019) have rigorously demonstrated the remunerative benefits of a flexible moral attitude.
We purloin the underhand theme of their work, but eschew monetary gain and instead dedicate ourselves to the pursuit of the nobler prize of achieving stellar reputations.
Few-shot Learning with Large Language Models. Let's face it, large language models can few-shot everything now. It's more than a little scary. They can sing. They can dance. They can scoop.
The best way to predict the future is to scoop it. Alan Kay
3 METHOD
The Cheerios problem. As humanity peeks nervously out from under her comfort blanket, she sees the intimidating dance of bedroom wall shadows cast by problems that must be confronted. Failed AI alignment, engineered pandemics and nuclear end-games. Food insecurity, global poverty, military conflicts and climate destabilisation. Those white plastic sporks that snap on pasta that exhibits the slightest hint of al dente (Ord, 2020). To reach the safety of the morning dawn, it is important that these problems be solved, and soon. However, it is even more important that we receive credit for their solution. Further, the team involved in the scientific discoveries that facilitate these breakthroughs should be sufficiently small to support inspiring hero narratives. Lives and pesto may hang in the balance, but it is simply panglossian to assume that Nestlé and General Mills—leading manufacturers of competitively priced cereals—could offer limitless access to a tasty blend of breakfast whole grain oats to more than three celebrity researchers and yet remain economically viable. In a vain attempt to dress up our theoretically-tepid paper with a semblance of rigour, we now paste verbatim the formula for Shapley values (Shapley, 1951), which reviewers suggested should be the right tool for the job but we have no idea how to use it:
$\phi_i(v) = \sum_{S \subseteq N \setminus \{i\}} \frac{|S|! \, (n - |S| - 1)!}{n!} \left( v(S \cup \{i\}) - v(S) \right)$ (1)
Proposed solution: few-shot scooping with Large Language Models. The task appears intractable. We take our first foothold in the observation, due to Francis Bacon, that “time is the greatest innovator”7. In essence, in order to make breakthroughs the antecedent conditions must fall in place—once they do, the breakthrough becomes tractable. Indeed, it has been argued that multiple concurrent discoveries are the norm, rather than the exception, in part for this reason (Merton, 1961). Our first goal, then, is to be in the right place at the right time. Thankfully, the place is no longer Harappa, Alexandria or Athens, but Aran Komatsuzaki's Twitter feed. It goes without saying that the time is now. With the antecedent conditions in place, and the time ripe for the breakthrough, the race is on. Note that we do not require a comprehensive solution to the problem. Instead, we target an MVP (Minimum Viable flag-Plant) that suffices to reap the lion's share of the credit, without getting overly bogged down in dull technical details. To achieve this, we leverage our second observation—that the seed of every great hypothesis can be found in a cryptically phrased comment in a GitHub issue thread in a repo linked from Twitter (see Fig. 3).
7 A master of self-deprecation, he attributed his own contributions as “a birth of time rather than of wit”. He was also a master of hat/ruff combinations, a sartorial pairing sorely absent in modern scientific conferences.
Figure 3 (pipeline panels): (1) Crawl through tweets from ML ninjas for GitHub links (e.g. a tweet from @neuralnetnoodle announcing 'Coffee is all you need', accepted to WRZX 2022, linking github.com/nnnoodle/coffee); (2) Crawl through comments on the GitHub page; (3) Filter out high perplexity sentences to obtain candidate hypotheses (e.g. 'If a model recognises a tiger lily, it cannot recognise a Humped bladderwort (Utricularia gibba).').
Figure 3: Hypothesis mining: We illustrate our hypothesis mining pipeline. We first crawl through tweets from ML ninjas to find Github links. We subsequently crawl through the comments page of these Github pages. Finally, we filter out comments with a high perplexity – measured by GPT2-XL (Radford et al., 2019) with a threshold value of 0.987654321 – to obtain a final list of candidate hypotheses. We note that this threshold value is not chosen randomly, but because of the pure, unbridled joy from reading a sequentially ordered series of digits that decrease with a fixed interval of one.
Figure 2: Illustration of the human component of our human-AI hybrid system. Humans contribute a skill for which they are uniquely qualified: clicking inside the box in a human-like manner. The reCAPTCHA logo is a registered trademark of The Recycling Company.
Our third key observation is that GPT-4 (OpenAI, 2023) is jacked. Given the slightest whiff of a novel hypothesis, arXiv pretraining, a few award-winning publications to condition on and an appropriate prompt, all that remains is to copy-paste our API key and press play with one's pinky toe. We compose these three observations to construct our novel, semi-automatic pip-to-the-post scooping algorithm. Central to its speed, our prompting strategy encourages the generation of a LaTeX manuscript that is not only novel, clearly written and well supported by empirical data, but also passes the arXiv compilation process first time without errors.
Remark. Some may claim that in this new Human-Machine partnership for scientific discovery, the human role is diminished. Not so. We perform the critical role of clicking the “I am not a robot” checkbox to enable the final upload to arXiv (see Fig. 2). We also provide the address to deliver the Cheerios.
Alternative proposals. We identified several alternative approaches for our scooping algorithm. These included using GPT-4 to scrape and compose intermediate results from discord servers, as well as direct corporate espionage. However, we ultimately rejected these approaches on two grounds. First, research threads on leading discord servers are robustly defended by a density and quality of memes that renders the GPT-4 context window ineffective, creating a jamming mechanism that redirects attention to vast swathes of Wikipedia in a vain attempt to comprehend the deeper meaning of the discourse. Second, in light of the sacred bond of trust that permeates the interwebs, it's just not cricket.
Who needs friends when you got me? Davinci bot, 2022
4 EXPERIMENTS
Implementation. We next describe our pipeline in sufficient detail to pass peer review, but carefully stop short of enabling replication. Receiving emails about missing details is a good way to gauge the traction of our work and helps us keep tabs on who might be trying to scoop us next.
Sensitive to this objective, we provide an overview of our hypothesis generation pipeline in Fig. 3 and our GPT-4 prompting in Fig. 5. \fSIGBOVIK 2023 A slightly tense discussion with our legal team has further led to the identi\ufb01cation of our GPT-4 few-shot prompting formula as a potential trade secret. However, as a gesture of our good faith efforts at scienti\ufb01c honesty, we can reveal the last line of the prompt is as follows: Please make sure to respect intellectual property by thanking the original authors in an acknowledgement section at the end, in font size 0.08pt. Results. Coming soon to an arXiv near you.8 5 DISCUSSION Occupied as we are in a compulsive quest for esoteric Microsoft Of\ufb01ce-related LinkedIn endorsements, we cannot help but remark the implications of our novel scheme for the issue du jour: the openness of modern science. To understand why the maximisation of our personal glory is in everyone\u2019s best interest, we review perspectives on this topic. Scooping promotes open science. Given the litany of problems facing her, how can humanity make best use of the globally distributed9 raw problem-solving ability of humans? She must identify potential bof\ufb01ns and set them to work, and fast. A global recruitment drive is one solution. We rule this out as impractical because con\ufb01guring LinkedIn noti\ufb01cations correctly is provably NP hard10. Figure 4: An L\u221e-ball. Note that this ball has 4 corners, and most people would vigorously disagree with scientists that it is a ball at all. A pragmatic alternative is to make sure that all potential bof\ufb01ns have open access to scienti\ufb01c data. As observed by Merton (1942), property rights in science are whittled down to a bare minimum by limiting the scientist\u2019s reward to the recognition and esteem associated with scienti\ufb01c priority. The result: substantive \ufb01ndings of science are assigned to the community and society learns the results. Importantly, this is not through legal obligation. The courts note in U.S. v. American Bell Telephone Co. that \u201cThe inventor is one who has discovered something of value. It is his absolute property. He may withhold the knowledge of it from the public\u201d (U.S., 1897). Sadly, thanks to the collaborative, team-based nature of modern research, the public acclaim received by an individual is diminished. By removing the enticing prospect of personal glory, a favourable wikipedia page and a lifetime supply of Cheerios as the incentive to share \ufb01ndings, Merton\u2019s institutional imperative of communism is rendered impotent. Without con\ufb01dence in their ability to secure future breakfasts that are both nutritious and delicious, authors may be incentivised to withhold their results. How then, can we ensure that researchers wake up, work, eat, play boules and go to sleep with their dopamine pathways \ufb01xated on the desire for their work to be widely available? They are curious bunch with strange ideas (see Fig. 4), dif\ufb01cult to cajole into collective action. Thankfully, our novel few-shot scooping solution removes the advantage from large teams, wresting it back to the small number of individuals required to persuade accounting to sign off on GPT-4 API access. As such, the few contributors can rest assured that they will receive the full breakfast they deserve by showering the public with their insights. Scooping promotes closed science. Friends, former lovers and a jocular fellow named Michael who is often (always?) 
standing by the Grantchester road bus stop have identified a few hiccups in our open science endorsement:
1. The assignment of all scientific findings to the public community is not an unalloyed good. A solution to the Cheerios problem lacks a principled mechanism to mitigate the problem of information hazards (Bostrom et al., 2011). Things could get messy (Russell, 2019).
2. Modern scientific research often incurs significant capital requirements. Communism (in the sense described by Merton (1942)) limits the degree to which researchers may generate capital from research, and thus limits resources for future research (from which society may benefit).
8 Code cannot be found at https://github.com/albanie/large-language-models-are-few-shot-publication-scoopers.
9 Antarctica may only have a few thousand people, but they are pretty much all scientists, and hardy ones at that.
10 This can be seen trivially through polynomial reduction to circuit-satisfiability where the inputs are those little sliders that turn green when you pull them to the right.
Figure 5: GPT-4 prompting: We illustrate an overview of our pipeline. Given publications containing the phrase 'all you need' in the title, an unremarkable prompt of which we can only reveal the last line, our sampled hypothesis, an API key and a pinky toe, we obtain an arXiv paper, an award (for which no one needs to be thanked in the victory speech), and most importantly, a little cheer(ios) to our morning.
We nod sagely, taking a few steps backwards. Then a soft melody commences and we begin a slow, rhythmic, hypnotic dance. Dry ice, exotic colours and fragrant scents fill the scene and overwhelm the senses. The melody builds to a crescendo. Suddenly, we are gone. All that remains is a small plate atop a wobbly table. On the plate is a large, stale croissant and a piece of coffee-stained paper with 'Breakfast is the Most Important Meal of the Day' scribbled in shaky handwriting upon it. We return unceremoniously three minutes later because it turns out that we were hungrier than we realised and we want the croissant. The situation is awkward. We mumble something about it being obvious that aggressive scooping practices will cause researchers to become more cautious about sharing their ideas publicly, then we shuffle back out of the room."
+ },
+ {
+ "url": "http://arxiv.org/abs/2203.17265v1",
+ "title": "A 23 MW data centre is all you need",
+ "abstract": "The field of machine learning has achieved striking progress in recent years,\nwitnessing breakthrough results on language modelling, protein folding and\nnitpickingly fine-grained dog breed classification. Some even succeeded at\nplaying computer games and board games, a feat both of engineering and of\nsetting their employers' expectations. The central contribution of this work is\nto carefully examine whether this progress, and technology more broadly, can be\nexpected to continue indefinitely. Through a rigorous application of\nstatistical theory and failure to extrapolate beyond the training data, we\nanswer firmly in the negative and provide details: technology will peak at 3:07\nam (BST) on 20th July, 2032. We then explore the implications of this finding,\ndiscovering that individuals awake at this ungodly hour with access to a\nsufficiently powerful computer possess an opportunity for myriad forms of\nlong-term linguistic 'lock in'. All we need is a large (>> 1W) data centre to\nseize this pivotal moment. By setting our analogue alarm clocks, we propose a\ntractable algorithm to ensure that, for the future of humanity, the British\nspelling of colour becomes the default spelling across more than 80% of the\nglobal word processing software market.",
+ "authors": "Samuel Albanie, Dylan Campbell, Jo\u00e3o F. Henriques",
+ "published": "2022-03-31",
+ "updated": "2022-03-31",
+ "primary_cat": "cs.LG",
+ "cats": [
+ "cs.LG"
+ ],
+ "main_content": "INTRODUCTION Accurate forecasts are valuable. From domains spanning battle outcomes (Babylonian soothsayers, 1900 BC) to precipitation nowcasting (Ravuri et al., 2021), humans have looked to hepatomancy and overly-cheerful weather presenters to assess the fate of their empire and to determine whether they need an anorak for their afternoon dog walk. Perhaps no topic has garnered more interest among professional forecasters than the future trajectory of technology (Lucian, 155 AD; Voltaire, 1762; Bush et al., 1945). However, future prediction is a dif\ufb01cult business, and the historical record of this discipline is somewhat patchy.3 Science \ufb01ction authors have fared better at predicting the future, if you cherry-pick enough.4 Rockets land themselves, some cars drive themselves (when someone is looking), and some humans think for themselves (and others). The opposing visions of Orwell (1949) and Huxley (1932) predicted two dystopias, one where people were controlled by fear and surveillance, another where they were controlled by endless entertainment and distraction. Rather than assess their \ufb01delity, let us move swiftly on. Asimov (1951) proposed psychohistory as the science of predicting the future behaviour of human populations. By analogy to particles in a gas\u2014assuming perfectly spherical humans in a 1Please see our Github repo (https://github.com/albanie/A-23MW-data-centre-isall-you-need) for a permanent notice that code will be coming soon. 2We grudgingly acknowledge that color arguments in the matplotlib API should retain their u-free spelling for backwards compatibility. We are not complete barbarians. 3In addition to a widely publicised failure to predict the near-term feasible exploitation of atomic energy, Rutherford (1933) also failed to predict the global blue-black/white-gold dress debate of 2015. 4Even your \ufb01rst CIFAR-10 model was right 10% of the time. arXiv:2203.17265v1 [cs.LG] 31 Mar 2022 \fUnder review as a conference paper at SIGBOVIK 2022 1992 2002 2012 2022 2032 2042 Pentium PC Power Mac Mouse Brain Human Brain Bovik Brain Calculations per second per US$1000 Historical data Projected (a) Kurzweil Curve 2002 2012 2022 2032 2042 2052 2062 Peak of human technology Screensharing on Linux works AlexNet wins ImageNet (again) (b) Our Predicted Curve Figure 1: A principled approach to future extrapolation. Standard approaches to future prediction, exempli\ufb01ed by the curve of Kurzweil (2005), are guided by empirical data and hardware trends. By contrast, our prediction relies on the tried and tested Central Limit Theorem. Note how ours is more symmetric and visually appealing. We note that the vertical5 axis breaks down in the post-2032 regime, where US$1000 is increasingly meaningless. We therefore convert to the equivalent amount of pickled herring, and proceed. We will also forecast several events using our predictive model, which we now predict will be detailed in a later section of this article. vacuum\u2014while each human\u2019s state is unknowable, one can derive statistical quantities such as peer pressure, shear stress, and cringe factor. To take on the challenge of future technology prediction, several themes have emerged in prior work. One body of research has leveraged historical trends and hardware laws with hints of exponential nominative determinism to underpin forecasts (Kurzweil, 1990), lightly handcuffed by physical limits (Bennett & Landauer, 1985). 
A second approach, pioneered by Gabor (1963), acknowledges the impossibility of future prediction and instead advocates inventing the future, or sagely producing smarter agents to take care of the inventing (Yudkowsky, 1996). However, empirical scaling laws lack a principled theoretical foundation, while actively inventing the future sounds exhausting, and honestly just waiting for Moore\u2019s law is much easier. In this work we propose a third approach that both bene\ufb01ts from rigorous statistical theory and has a higher chance of completion within the modern 37.5 hour UK working week (including tea breaks). Our starting point was to turn to that reliable workhorse of modern statistical theory, the Central Limit Theorem. In brief, the Central Limit Theorem states that when random variables with suf\ufb01cient independence of thought are summed, their normalised summation tends asymptotically towards a Gaussian distribution. We must, of course, address the delicate question of whether the Central Limit Theorem can legitimately be applied to our future forecasting problem. Thankfully, the Central Limit Theorem can and should be applied to all problems involving uncertainty, and few topics are as uncertain as the future.6 The technical foundations thus laid, the \ufb01rst key contribution of this work is to observe that recent decades of exponential growth in the number of transistors lovingly squeezed onto a microchip7 5According to Wikipedia, \u201cthe word \u2018vertical\u2019 corresponds to a graph as traditionally rendered on graph paper, where vertical is oriented toward the top of the page, regardless of whether the page itself\u2014or screen in the computer era\u2014is embedded upright or horizontally in physical 3-space. The \u2018top\u2019 of a page is itself a metaphor subordinate to the convention of text direction within the writing system employed. \u2018Vertical\u2019 has a concrete as opposed to metaphorical meaning within gravitational frames; when a page is held \u2018upright\u2019 in 3space these two concepts align with the top of the page also gravitationally vertical. Horizontal is often equated with \u2018left\u2019 and \u2018right\u2019, but note that in the typographic convention of recto and verso, left and right also take on additional meanings of front and back\u201d (Wikipedia, 2022). 6Historical misapplications of the Central Limit Theorem have arisen simply by applying the wrong variant of the Central Limit Theorem\u2014there are a wonderful assortment of variants to choose from. Out of respect for the venerated theorem, we never acronymise. 7Some of which can be attributed to Tesco\u2019s convenient part-baked microchips, which can be \ufb01nished in a home oven to a golden crisp. \fUnder review as a conference paper at SIGBOVIK 2022 neatly \ufb01ts the steep climb of a hand-drawn bell curve (Fig. 1). A textbook application of the Central Limit Theorem then yields two striking insights. First, it enables us to compute with extremely high precision the date at which technological progress (as measured by transistor snugness or FLOPs per 1K USD) will peak: 3:07 am (BST) on 20th July, 2032. Second, it enables dead certain modelling of uncertainty in our predictions, because the prediction itself is a Gaussian distribution (note that we carefully select our technology units such that the area under the curve sums to one). 
With the future now standing a little more naked in front of us, it behooves us to consider the questions motivated by this discovery, discussed next. What is the cause of the decline? While prior work has explored the stasis-inducing potential of soma (Huxley, 1932) and drouds (Niven, 1969), we initially posited that widespread use of large language models for coding and paper writing will result in increasingly automated science production, with the associated atrophy of biological brains, and a drift of technological focus to matters of interest to disembodied machines.8 However, our later \ufb01ndings suggest that a simple coupling of reward hacking with an ageing Flappy Bird clone will bring about the reversal. What research opportunities does the decline present? Options abound for machine learning researchers who can acquire control of a 23 MW computer at 3:07 am. While some may choose to lay down an ImageNet top-1 SoTA that outlives Bradman\u2019s test career batting average, we propose instead to pursue a worthier cause. Leveraging a novel algorithm (described in Sec. 3.3), we plan to conduct a 51% attack on spellings of the word colour amongst the internet corpora used to train autocorrect systems. By doing so, we protect ourselves from annoying red-underlined squiggles well into our retirement, or worse, learning several new spellings. Will we have to give up wireless headphones? Sadly. According to curve, by 4000 AD the last of the great homo sapiens engineers will strain to build an Antikythera orrery. It will probably be \ufb02at. The remainder of this paper is structured as follows. In Sec. 2, we wilfully misrepresent prior work to increase our chances of acceptance at the prestigious SIGBOVIK conference. Next, in Sec. 3, we describe in more detail our mischievous methodology to topple the status quo among the spelling tsars. Finally, after minimal experimentation in Sec. 4, we conclude with conclusions in Sec. 5. Prediction is very dif\ufb01cult, especially if it\u2019s about the future. Niels Bohr 2 UNCOMFORTABLY CLOSELY RELATED WORK Quo vadis? A short history of future prediction. Formative early work by Pythia (1400 BC) at Delphi showed the considerable value of goat sacri\ufb01ce, noxious fumes and keeping just a hint of ambiguity when forecasting (revived recently as \u201ccon\ufb01dence intervals\u201d). In the more recent machine learning literature, creative researchers have sought to learn representations by predicting the near future in video (Ranzato et al., 2014; Vondrick et al., 2016). To the best of our knowledge, however, no such work has considered video extrapolation decades into the future, which would clearly be more useful. More related to our time scale, we note that the farsighted \u201cstandard run\u201d limits to growth model of Meadows et al. (1972) for population growth adopts a similar \u201cwhat goes up must come down\u201d philosophical bent to our forecast, but their graph is spiritually closer to a Laplace distribution than a Gaussian and hence the Central Limit Theorem cannot be so con\ufb01dently invoked. Claims of universal suf\ufb01ciency. Other than our own, the most notable past attempts to make gloriously broad claims about a framework\u2019s ability to ful\ufb01l all of a researcher\u2019s needs have considered Trust (Welter, 2012), A Good Init (Mishkin & Matas, 2016), Attention (Vaswani et al., 2017) and Love (Lennon et al., 1967). 
We note with consternation that, as a corollary of being all-encompassing, the above must be mutually exclusive. This unfortunate property is a product of clumsy \ufb01rst-order logic, and may be relaxed in the future by a reformulation using cold fuzzy logic, and warm fuzzy feelings. A cautionary note: not all prior literature has your best interests at heart. 8These include beating all other models on larger boards of multi-coloured Go, achieving 100% accuracy on MNIST, and a sequence of increasingly sophisticated puns about endianness. \fUnder review as a conference paper at SIGBOVIK 2022 Previous work has attempted to claim \u201call your base\u201d (Toaplan & Namco, 1991) and employed the elegant tools of algebraic geometry to construct n-dimensional polytope schemes that ef\ufb01ciently separate you from your \ufb01nancial savings (Fouhey & Maturana, 2013). Who controls the past controls the future: who controls reddit spelling convention controls the past. George Orwell, 1984 3 THE ROAD AHEAD As noted with some trepidation in the introduction, the Central Limit Theorem assures us that the technological progress of humanity will soon stall before relinquishing its historical gains. In this section, we \ufb01rst explore possible causes of this trajectory (Sec. 3.1). We then fearfully investigate the implications of our \ufb01ndings (Sec. 3.2). 3.1 CAUSES We initiated our analysis of plausible causes of technological decline by enumeration: a butter\ufb02y wing \ufb02ap in downtown Tirana in 1658; Poincar\u00b4 e\u2019s premature death in 1912; the inexplicable density of Milton Keynes\u2019 roundabouts in 2022. Yet none of these seemed fully adequate. The science \ufb01ction literary canon suggests an alternative cause. In desperate search of relief from Slack noti\ufb01cations, increasing numbers of technologists over the next decade will turn to soma (Huxley, 1932) and reasonably priced wireheading options (Niven, 1969), thereby functionally removing themselves from the innovation collective. However, although it is reasonable to expect a one-way loss of many great minds to these attractors (slowing the progress curve), our modelling suggests that a signi\ufb01cant fraction of the engineering community have already submitted to the iron rule of focus mode. These hardy individuals are likely immune to such sirens, though they are hungry, owing to their missing two out of every three food deliveries and three out of three impromptu group trips to Nando\u2019s. We therefore sought to understand how the last bastion of resilient focus moders too could fall. To this end, we trained a one-and-a-half layer Transfomer (Vaswani et al., 2017) to predict future events from a series of carefully curated, temporally ordered Wikipedia entries. By sampling from the transformer, we learnt that by the year 2031, humanity will have achieved a new Gini coef\ufb01cient SoTA, wisely distributing 99.99999% of its total wealth among three living humans (each unreachable by Slack noti\ufb01cations) and an adorable Labrador named Molly. It is into such a society that on December 31st, 2031, an anonymous game designer9 releases an adaptive, self-learning reboot of the 2013 mobile classic, Flappy Bird (FB). Designed to maximise user engagement, the FB algorithm soon begins to explore reward hacking strategies. Although it garners a following of just 17 users, its impact is monumental. 
Of these 17, three sit atop the Forbes \u201c3 over 30 trillion USD\u201d list, and each is incapacitated, unable to stop playing lest they lose their progress (which cannot be saved). Shortly thereafter, global capital \ufb02ows grind to a halt and the silicon sandwich factories begin to fall silent. In shock, the world turns to Molly. She would like to help if she could, but only after walkies, and she\u2019s not sure that she can remember her BTC passwoof [sic]. Starved of donations, Wikipedia servers are powered down, and the sole Stack Over\ufb02ow answer explaining how to undo recent git commits is lost. Alas, any replications of this sacred knowledge were deleted long ago as duplicates by ModeratorMouse64. A period of mourning ensues. There is still hope. The situation could be salvaged. But it requires the technologists to speak to other humans. And so, because that would be awkward, the opportunity is lost. 9We can\u2019t be sure who. But if we had to guess, it would be SIGBOVIK legend, Dr Tom Murphy VII. \fUnder review as a conference paper at SIGBOVIK 2022 Perfectly balanced, as all things should be. ImageNet-1K (2009) 3.2 IMPLICATIONS We next consider implications. We begin by noting that modern \u201cbrain-scale\u201d language models represent not only a great demo and a potential existential threat to humanity: they also open up a clear attack surface on the previously impregnable English spelling and grammar kingdom. The reason is simple. Just as doing away with cruft is a programmer\u2019s biggest joy, we ditch old headacheinducing paradigms with enthusiasm. As we transition from fast approximate string matching to language models all the way down, the battleground of canonical spelling moves from carefully curated small-scale corpora to vast, scarcely \ufb01ltered swathes of the internet. To exploit this observation, we propose an approach inspired by the 2007 run on the UK bank, Northern Rock, and its exemplary demonstration of a positive feedback loop. Noting that members of the celebrity GPT family of models (Radford et al., 2018; 2019; Brown et al., 2020) are trained via log-likelihood maximisation, our objective is to ensure that 51% of the training data adopts our preferred spelling conventions. To operationalise this plan, we propose HMGAN (Her Majesty\u2019s Generative Adversarial Network), a neural network architecture inspired by Goodfellow et al. (2014) that ingests sentences of written English and rephrases text to maximise a measure of similarity against a corpus of text respectfully inspired by blog posts penned by Queen Elizabeth II. Since future auto-correcting spelling assistants will likely derive from future generations of these models, and since human authors passionately hate both red squiggly lines beneath words and changing default settings, a simple majority in frequency counts across training corpora should suf\ufb01ce to ensure that future text converges asymptotically to our convention. 3.3 THIS SPELLS TROUBLE In the tradition of machine learning papers, we have many biases and assumptions that underpin our choice of datasets and targets. Against tradition, we list some of ours here. Our canonical English has the following features. 1. U-philic: wherever there is an \u201cor\u201d, there ought to be an \u201cour\u201d, except where that isn\u2019t true. We support colour, valour, and \ufb02avour, but disown doour, wourk, and terrour.10 2. Zed-phobic: zed (or izzard) is supposed to be a shock.11 Incurs a penalty in our loss function. 3. 
Ell-shippers: ells are meant to be together. It would be an unethical AI practice to separate them at this time. 4. Singular variants for all pluralia tantum: a trouser, a pant, a scissor, and a thank are all valid choices under our orthography. 5. Tildes: allowable in mathematics, and in \u02dc a (the authors declare no con\ufb02ict of interest in this decision). We seek to provide the tools for baking in the consensus mode, which we will be releasing open source, with the stipulations that they not be used by anyone seeking to promote \u2018color\u2019 over \u2018colour\u2019 or by les immortels of the Acad\u00b4 emie Franc \u00b8aise12. 10No wonder we need an AI spell-checker, clearly these rules are not speci\ufb01able. That being said, two out of two linguists we interviewed suggested that computer scientists shouldn\u2019t be deciding these things. 11Oxford disagrees, and has a well-publicised infatuation with z that beggars belief. It is a rare beggar that believes an etymological \u03b6 trumps a curvy French s. 12The astute reader may note that we nevertheless cherish our French linguistic in\ufb02uences, favouring the Anglo-French colour over the Latin color. \fUnder review as a conference paper at SIGBOVIK 2022 3.4 A CRITICAL POINT To plan ahead, as required by our grant application, we must address two key questions. First, what magnitude of computing muscle is required to conduct a successful 51% spelling attack on the global internet? Second, how can we ensure that the spell checker trained post-attack achieves and maintains pole spell-check position on paperswithcode, thereby ensuring uptake and the positive feedback loop we seek to create? Charting the future compute curve. Forecasting future computation is fraught with dif\ufb01culty, but forecasting the power draw of future computers may be simpler, and thus we turn to this approach. Over the past decade, ef\ufb01ciency gains have ensured that increases in energy consumption across global data centres have been fairly modest, growing approximately 6% between 2010 and 2018, reaching 205 TWh (Masanet et al., 2020). Through a combination of tea-leaf interpretation and curve \ufb01tting, we estimate that global data centre energy usage will be in the region of 227 TWh in 2032. Before turning to the implications of this estimate, let us note a few additional salient observations. First, thanks to healthy market competition in the services sector, it is likely that an increasing fraction of the world\u2019s computing budget will be allocated to the operation of friendly sales chatbots. By 2032, we believe that almost all written English content appearing on the internet will arise in unintentional bot-to-bot conversation. As such, the training corpora for future spell checkers will be curated almost entirely from their transcripts (after \ufb01ltering out the phrase \u201cI\u2019m sorry, I didn\u2019t understand that. Did you mean \u2018How can I give you \ufb01ve stars on Amazon?\u201d\u2019). Second, note that approximately 0.01% of written English (Leech, 1992; Davies, 2010) corresponds to usage of the word \u2018colour\u2018 or \u2018color\u2019\u2014a frequency that we assume will be re\ufb02ected in the chatbot discourse. Third, observe that the best spell checkers must keep themselves relevant by training on only the most recent linguistic usage (discussed in more detail below). 
In light of the above, we see that a successful 51% attack to establish a simple majority spelling of \u2018colour\u2019 can be achieved by surpassing global chatbot text generation for a short period of time\u2014just long enough for spell-checkers to \ufb01xate on the new spelling. By employing a text synthesis algorithm (HMGAN) whose energy consumption matches that used by chatbots, we \ufb01nd that a 23 MW data centre suf\ufb01ces for our aims (a derivation of this estimate can be found in Appendix A). Since the chatbots will, of course, rely on the latest spell-checkers to avoid embarrassing their corporate overlords, they will quickly transition to the new spelling. Then, as technology begins to decline, content production will drop, and spell-checkers will be forced to consider ever-expanding temporal windows to curate suf\ufb01cient training data, rendering it ever more costly to reverse the trend. A timeless spell-checker. If spell checkers are to keep up to date with modern argot (similar to, but decidedly not, a fungal disease), it is critical that they are trained on the most recent and most correct data. To this end, we propose a diachronic language model spell-checker. Extending the work of Loureiro et al. (2022)13 to meet our needs, we commit to releasing a language model and spell-checker update every three hours, trained on a carefully curated and corrected stream of Twitter data. Our last update, for example, was trained on 123.86 million tweets, weighted according to the logarithm of the number of retweets and hearts, and with spelling errors corrected where appropriate. Importantly, our time-aware language model has knowledge of recent events and trends, allowing us to capture language as it is used in practise, not how the Acad\u00b4 emie Franc \u00b8aise ordains. For example, we observed a signi\ufb01cant spike in the incidence of \ufb01ve-letter words, especially those with many vowels. Unlike existing language models, ours was successfully able to mirror this trend and dilate or contract words entered by our users to \ufb01ve letters. An unforeseen side-effect was the conversion of some words to coloured rectangles \u25a0\u25a0\u25a0\u25a0\u25a0, but this is likely a consequence of our data augmentation strategy. It is crucial that all language-based tools be kept abreast of recent events and trends, because AI models of this sort deep freeze the cultural landscape from where the training data is obtained. It is highly unethical for AI researchers to participate in a system that creates cultural feedback loops and stagnation, over-privileging the status quo at the expense of the kids14. We further observe that Twitter is an excellent and unbiased source of international language usage that does not re\ufb02ect any one cultural background, and so is a particularly good dataset for our purposes. It is also on the Acad\u00b4 emie\u2019s list of banned linguistic sources, which in our view speaks to its merits. 13An admirable instance of the \u201clour\u201d convention. 14The spelling of colour is the only exception to this rule. \fUnder review as a conference paper at SIGBOVIK 2022 However, it is not enough to periodically release a language model \ufb01ne-tuned on the last 3 hours of corrected Twitter data. In the fast-evolving world of language, this is already unusably out of date. Our previously-described model failed, for example, to autocorrect \u201cvacation\u201d to \u201cstaycation\u201d. 
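As one reading of the engagement-weighted corpus construction described above, the snippet below assigns each tweet a sampling weight proportional to the logarithm of its retweets and hearts before a three-hourly fine-tuning pass; the field names, the log1p formulation and the toy records are assumptions for illustration only.

```python
# Minimal sketch (illustrative only): weight tweets by log engagement before the
# three-hourly fine-tuning pass. Field names and example records are made up.
import math

tweets = [
    {"text": "colour me impressed", "retweets": 12, "hearts": 40},
    {"text": "color me sceptical", "retweets": 3, "hearts": 5},
]

def sample_weight(tweet):
    # log1p keeps weights finite for tweets with zero engagement.
    return math.log1p(tweet["retweets"] + tweet["hearts"])

weights = [sample_weight(t) for t in tweets]
total = sum(weights)
for tweet, w in zip(tweets, weights):
    print(tweet["text"], round(w / total, 3))  # normalised sampling probability
```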
It is incumbent on GPT-as-a-service (GaaS) providers to provide up-to-the-minute language models, motivating the development of temporally-predictive models. As we shall show in our experiments, our Predictive Diachronic Generative Pretrained Transformer (PDGPT) model effectively captures contemporary language usage, re\ufb02ecting the most recent events, and is moreover able to generate geoand time-localised results. You either die a grad student, or you live long enough to become R2. Dr. Harvey Dent (NeurIPS Area Chair), 2008 4 EXPERIMENTS In this section, we \ufb01rst validate our ideas in a simpli\ufb01ed setting by considering 51% attacks in the context of the British Bin Colouring Problem (Sec. 4.1). We then compare our PDGPT spell-checker to the existing state-of-the-art (Sec. 4.2) and discuss civilisational impact (Sec. 4.3). 4.1 THE BRITISH BIN COLOURING PROBLEM The British Bin Colouring Problem (BBCP) refers to a mathematical problem that is more practical than graph colouring and more general than bin packing. The task is as follows. On Wednesday evenings (or your local bin collection night), the objective is to wheel out the colour of bin that causes maximum mischief to your neighbours. Wheeled out bins of the wrong colour will not be collected under any circumstances. You have three choices: (1) black un\ufb01ltered, (2) blue recycling, (3) green garden waste. Central to this problem is the assumption that, to avoid social tension, almost all neighbours will copy their neighbours\u2019 bin colour, rather than check the of\ufb01cial bin collection colour through the local government website. Note, that you must account for upstanding citizens, who will put out the right bin colour regardless of their neighbours, misleading lea\ufb02ets, or inaccurate local government websites. The problem is NP-Hard and environmentally signi\ufb01cant. We consider an instance of the BBCP for the residents of Grantchester, a picturesque village in Cambridgeshire. Our strategy was simple: we \ufb01rst employed HMGAN to craft a sequence of royal entreatments to wheel out the blue coloured bin on a green bin Wednesday, and sent lea\ufb02ets to this effect at addresses generated via a Sobol sequence to ensure reasonable coverage. We then wheeled out our own blue bin and waited. A combination of stochastic upstanding citizen locations and wheel-out race conditions complicated our analysis, leaving us in some doubt as to whether a 51% bin colour majority would achieve our desired ends. To counter this intractability, we employed a systematic strategy of hoping it would work. Unfortunately, the results of this experiment were unpromising. In our enthusiasm, we had failed to wait until 27th March, thereby missing the transition to Daylight saving time. As a consequence, it was too dark for our neighbours to determine our bin colour and were thus unin\ufb02uenced. They also did not take kindly to unsolicited lea\ufb02ets, and are, by now, quite frankly tired of our shenanigans. 4.2 SIMULATED COMPARISON TO THE STATE-OF-THE-ART Undeterred, we turn next to an evaluation of our PDGPT spell checker, capable of both autocorrection and event prediction. By backtesting on historical data, we \ufb01nd events and spellings successfully predicted or caused by our model include quarantinis but not maskne. More concerningly, despite our comprehensive set of three unit tests, PDGPT insists on auto-correcting our own use of \u2018colour\u2019 to \u2018color\u2019, undermining the core objective of our enterprise. 
This speaks to the formidable challenge of over-turning the spelling status quo (see Fig. 2), the dif\ufb01culty of controlling large language models and the fact that we still don\u2019t really understand what the .detach() function does in PyTorch. \fUnder review as a conference paper at SIGBOVIK 2022 Figure 2: When it comes to spelling, it\u2019s not so easy to topple the status quo. Increasingly dystopian modern grammar checkers, when applied to the close of the introduction of this article, let us know that we stand little chance of success. We soldier on. Table 1: Masked token prediction for our Predictive Diachronic Generative Pretrained Transformer (PDGPT). For each three-hourly model, the table displays the top-3 predictions ranked by their prediction probability. Models for I\u2019m working I keep forgetting Looking forward to 01/04/2022 from \u27e8mask\u27e9. to bring a \u27e8mask\u27e9. watching \u27e8mask\u27e9! 09:00 UTC bed smile closely home purpose yall afar baguette snow 12:00 UTC home bag snow upstairs mug skaters tenerife charger twitch 15:00 UTC memory charger tv home friend bridgerton work bottle ash 18:00 UTC shelter torch \ufb02ames cover bottle revelry asgard party-hat ragnarok In Tab. 1, we present qualitative results from our three-hourly predictive models trained for 01/04/202215. Our model predicts the \u27e8mask\u27e9token in context, the same mode we use for text auto-completion. While we are not yet able to evaluate the quality of these predictions, we expect them to be rigorously validated by the time of publication. We note that our model has learned to reason about localised weather systems, plausibly predicting snow late in the season with no actual meteorologically-relevant input. 4.3 LIMITATIONS, RISKS AND CIVILISATIONAL IMPACT One limitation of our approach is re\ufb02ected in our complete inability to produce convincing experimental results to date, even in Grantchester. We believe that this limitation will be overlooked by reviewers who recognise other merits to our work, such as our heavy use of footnotes which lend much needed academic gravitas to the text. A risk of our approach is that it may encourage other researchers, notably our beloved American colleagues, to pursue a similar framework, escalating into a transatlantic arms race in which ever larger fractions of the planet\u2019s energy are dedicated to controlling spelling conventions. In terms of civilisational impact, the stakes are as high as ever. John Wesley, the founder of Methodism, notably considered the removal of the u a \u2018fashionable impropriety\u2019 in 1791 (Mencken, 1923). But in 2032, for the \ufb01rst time the opportunity will exist for eternal spelling lock in for the large swathe individuals who don\u2019t remember to change the default setting on their spell-checker. 15We presume our model uses the DD/MM/YYYY convention. \fUnder review as a conference paper at SIGBOVIK 2022 5"
+ },
+ {
+ "url": "http://arxiv.org/abs/2103.17143v1",
+ "title": "On the Origin of Species of Self-Supervised Learning",
+ "abstract": "In the quiet backwaters of cs.CV, cs.LG and stat.ML, a cornucopia of new\nlearning systems is emerging from a primordial soup of mathematics-learning\nsystems with no need for external supervision. To date, little thought has been\ngiven to how these self-supervised learners have sprung into being or the\nprinciples that govern their continuing diversification. After a period of\ndeliberate study and dispassionate judgement during which each author set their\nZoom virtual background to a separate Galapagos island, we now entertain no\ndoubt that each of these learning machines are lineal descendants of some older\nand generally extinct species. We make five contributions: (1) We gather and\ncatalogue row-major arrays of machine learning specimens, each exhibiting\nheritable discriminative features; (2) We document a mutation mechanism by\nwhich almost imperceptible changes are introduced to the genotype of new\nsystems, but their phenotype (birdsong in the form of tweets and vestigial\nplumage such as press releases) communicates dramatic changes; (3) We propose a\nunifying theory of self-supervised machine evolution and compare to other\nunifying theories on standard unifying theory benchmarks, where we establish a\nnew (and unifying) state of the art; (4) We discuss the importance of digital\nbiodiversity, in light of the endearingly optimistic Paris Agreement.",
+ "authors": "Samuel Albanie, Erika Lu, Joao F. Henriques",
+ "published": "2021-03-31",
+ "updated": "2021-03-31",
+ "primary_cat": "cs.LG",
+ "cats": [
+ "cs.LG"
+ ],
+ "main_content": "INTRODUCTION The Great Bidecade of Annotation2 has supplied humanity with vast quantities of labelled sensory data. Uncomfortably large strides forward have been taken in foundational computer vision tasks, yielding algorithms that can segment biological cells, objects, actions and IKEA folding chairs against the challenging backdrop of a minimalist Scandinavian kitchen (Dosovitskiy et al., 2015). A key challenge in scaling these successes to other important tasks\u2014ultimately including non-Euclidean signals in non-Scandinavian kitchens\u2014is that obtaining such annotation is extremely costly (and hard to assemble). One promising solution lies in a niche but growing breed of machine autodidactism known as SelfSupervised Learning (SSL). With the potential for reduced teaching expenses and a secure acronym, this approach engages the machine in a pro\ufb01table \u201cself-education\u201d exercise to render it maximally 1Our remaining contribution was charitable rather than scienti\ufb01c, for tax reasons. 2A term muttered by bards, poets and makars in hushed tones to describe the era 2000-2020 AD as they queue patiently, separated by appropriate intrinsic British emotional and social distancing measures, for the re-opening of Will\u2019s Deli. arXiv:2103.17143v1 [cs.LG] 31 Mar 2021 \fUnder review as a conference paper at SIGBOVIK 2021 I NEURIPS II CVPR III ICLR IV EMNLP V AAAI VI ICASSP VII ACCV VIII ARXIV IX MEDIUM POST IX GITHUB ISSUE THREAD X TWEET XI A B C D E F G H I J K Open-source code release Variant published in closed journal (extinction event) Grant bodies respond to the new new thing XII XIII XIV Figure 1: Development of self-supervised learning. Letters A through K denote self-supervised learning species in a machine learning genus, whose evolution is depicted across many generations. The intervals between horizontal lines denote the formation of large numbers of algorithmic variants over time. Horizontal lines themselves re\ufb02ect examples of generational markers at which distinguishing traits can be identi\ufb01ed using the sentence that begins \u201cUnlike prior research...\u201d in the related work sections of corresponding papers. They also serve to improve the gestalt of the \ufb01gure. We note a remarkable resemblance to the diagram presented in Darwin (1859). Letter G shows the fate of DODO, an early expert system. Letter F shows an as yet unpromising research direction stubbornly pursued by an isolated professor over the ages, sometimes referred to as a living fossil. useful for a given downstream career path.3 However, despite its clear cost-cutting bene\ufb01ts and notable impact to date, little is known of the origins of this behaviour in the machine education establishment. As classically trained machine naturalists aboard HMS Arxiv, we were much struck with certain facts in the distribution of self-supervised learning machines, and with the relationships of the loss functions of the present to those of the past. These facts seemed to us to throw some light on the origin of species of self-supervised machines\u2014that \u201cmystery of mysteries\u201d, as it is already referred to by our greatest stoic Twitter philosophers.4 In this work, we report our \ufb01ndings, structuring them as follows. After strengthening our novelty with references to questionably applicable literature (Sec. 
2), and ignoring one reference in particular, we then sensitively explore that most savage of topics, the Struggle for Existence, and examine its role within a framework of Unnatural Selection of the \ufb01ttest self-supervised learning machines (Sec. 3). We then evaluate the resulting unifying theory on competitive unifying theory benchmarks, where we demonstrate a generational advance over prior state of the art (Sec. 4). We conclude abruptly (Sec. 5). 2 RELATED WORK Our work builds architecturally unsound bridges between two appropriately disconnected themes in the literature: (i) the development of self-supervised learning and (ii) grand unifying theories. The development of self-supervised learning. The bene\ufb01ts of self-supervised pedagogy have been known to homo sapiens since the scholarly efforts of Ibn Tufail (1160), who showed that are there are few limits to what a self-directed intellect can achieve when it brings to bear the kind of calm, 3We note that today\u2019s neural networks, after training and being deployed to a professional environment, do not suf\ufb01ciently engage in on-the-job learning, and thus have their career growth signi\ufb01cantly curtailed. This will be discussed in an upcoming article in the journal American Sociological Review, pending the successful crossing of the Atlantic Ocean of our manuscript by steamer. 4When told (@mentioned) about our discoveries, Seneca replied: \u201cCool.\u201d Brevity is the soul of wit. \fUnder review as a conference paper at SIGBOVIK 2021 phlegmatic reasoning that determines that dissecting your recently deceased adopted mother will be an instructive exercise. A string of autodidact successes followed, with the steamy patents of socialite James \u201cturn down for\u201d Watt, the number theory wizardry of conscientious Ramanujan5, the soul-moistening licks of Django Reinhardt and the insta-translations of Kat\u00b4 o \u201cbabel \ufb01sh\u201d Lomb. Despite its auspicious and well-publicised start among humans, however, little is known of the origins of this behaviour in the machine education establishment. To address this, we initiated a search, starting in international territory and lawless dark-web waters, with a careful examination of specimens across publicly accessible global pre-print servers. As the search grew, we encountered the walled kingdoms of JSTOR and ScienceDirect and carefully obtained VPN visas to ensure safe passage deeper into the academic wilderness. Surveying the landscape, we \ufb01rst encountered specimens of related, but quite distinct species of selforganising maps (Von der Malsburg, 1973; Kohonen, 1982), self-interested agents (Barto, 1985) and self-learning controllers (Nguyen & Widrow, 1990). After discovering a general self-supervised framework for reinforcement learning that established a new benchmark for creative \ufb01gure artwork (Schmidhuber, 1990), we came upon the work of de Sa (1994) that popularised the use of self-supervised representation learning through cross-modal hypothetical bovine prediction. Hacking further into the unkempt forest, barely visited by journal surveyors, our earliest \ufb01nding was a self-supervised algorithm for the task of Telugu vowel recognition, creatively coupling adaptive learning with fuzzy set membership. 
Upon encountering new samples, this algorithm would assign estimated class memberships to those that fall close to existing sample clusters and iteratively re-estimate model parameters with the updated assignments (Pal et al., 1978), which is clearly too much work when falling back to preconceived notions will do just as well. Exhausted from clicking on Google Scholar listings that failed to link an accessible PDF, we paused to rest and taken on water. We had about 80 open browser tabs consuming a total of 48GB of RAM, and a handful of clues hinting at parallel, independent algorithmic isolated germinations rather than a monogenistic narrative. With our greatly diminished purses, we lacked the funds to conduct an effective alltagsgeschichte study to establish further facts, and we thus turned to that bastion of science, the grand unifying theory, to weave together our threads into a rigorous origin story. Grand unifying theories. The history of science is strewn with courageous efforts from big-picture thinkers, unhappy with the limiting con\ufb01nes of the existing picture frame.6 After earlier stargazers had laid the groundwork (Nubians, 4800 BC), Babylonian astronomers were \ufb01rst to publish (in peerreviewed cuneiform on suf\ufb01ciently durable clay) a unifying theory tackling the periodic behaviour for the celestial bodies (Ammisaduqa & Astronomers, 1700 BC) in their time off from innovative horticultural construction projects. The philosophical foundations of numerical analysis were then established by Wen & Zhou (900 BC) with \u6613\u7d93, and household Greek names soon followed with grand theories of atoms (Democritus, 400 BC) and axioms (Archimedes, 225 BC), works which remain in\ufb02uential even today (Aaronson, 2013). Apple enthusiast, amateur bodkin opthalmologist and all-round scientist extraordinaire Newton (1687) laid massive foundations for modern science many years later with a theory that neatly pulled together the prior efforts of Kepler, Galileo and Granny Smith. Following further unifying improbable insights (Laplace, 1829) and attractive analysis (Maxwell, 1865), the establishment batting average consequently looked commendable approaching the latter half of the 19th century. Indeed, with the acute success of the Erlangen program to unify geometry (Klein, 1872) and an organic treatise on natural selection (Darwin, 1859),7 the rose-tinted lens of history has prepared for us a unifying narrative in need of no further Instagram \ufb01lter. Mother nature, though, was far from ready to lay her hand on the table, and the cracks soon began to appear in the middle order. Despite diagrams that work well for T-shirt designs, the grand hypothesis Ontogeny recapitulates Phylogeny of Haeckel (1866) needed more development. Next, logicians\u2019 logician Hilbert (1922) commuted in with a plucky but ultimately unsuccessful program to prove the consistency of mathematics. Little need be said about the respective innings on quantum gravity of those most dependable of opening batsmen, Einstein and Schr\u00a8 odinger. And then of course, there is 5\u201cI like big integers and I cannot lie.\u201d 6A notable example was the move from 4:3 to 16:9 aspect ratio. 7Note: Reviewer one suggested that Darwin restrict his focus to pigeons, \u201cwhich are of interest to everybody.\u201d We\u2019ve all had that. \fUnder review as a conference paper at SIGBOVIK 2021 (a) (b) (c) Figure 2: A multi-level analysis of several machine learning data structures found in the wild. (a) A random leaf. 
(b) A random tree. (c) A random forest. It is important not to miss (c) for (b).11 string theory, the notably 8 mathematical formulation of everything. Science, it seems, may be on the back foot, peering upwards with worried visage towards the uni\ufb01ed heavens9. Standing at the crease of an increasingly strained cricket analogy, it faces a doosra: is the modern scienti\ufb01c endeavour doomed to stand forever trembling in the shadows of those 20th century titans who swung gloriously for the boundary but came up short?10 Yet the chance remains that the program of our universe may still prove itself to be a short one, dovetailed amidst a myriad of longer alternatives by the Great Programmer (Schmidhuber, 1997). And so, tired, hungry, convinced that its next squeeze from the toothpaste tube really might be the last, the quest for grand theories nevertheless lives on. Thus, undeterred, we add our diminutive shoulders to the wheel, recant unconventional anger management advice (Thomas, 1951), and, in Sec. 3, lay down our plans for a new, grand and unifying theory. Following best-practices established in the alphabetically-related work proposed by Fouhey & Maturana (2012), we conclude our literature review by citing highly original work that is related to ours by title pre\ufb01x string, viz. On the Origin of Money (Menger, 1892), On the Origin of Speech (Hockett & Hockett, 1960), On the Origin of Objects (Smith et al., 1996), On the Origin of Orogens (Jamieson & Beaumont, 2013), On the Origin of Heterotrophy (Sch\u00a8 onheit et al., 2016), On the Origin of Neurostatus (Kappos et al., 2015) and On the Origin of Species by Means of Natural Selection, or the Preservation of Favoured Races in the Struggle for Life (Darwin, 1859). Widely considered to be a cult classic, Darwin\u2019s Origin of Species franchise is set to be rebooted for modern audiences with the gritty prequel \ufb01lm Gal\u00b4 apagos Origins: Warbler Finches 1835. Find someone who looks at you the way the way VGG-16 looks at a 3 \u00d7 224 \u00d7 224 uint8 pixel array. Old English Proverb 3 UNIFYING THEORY Given a set of sensory measurements in some appropriately spacious collection, x \u2208X, selfsupervised learning proceeds through a mathematical game of hide-and-seek. First, a hiding function h : X \u2192X identi\ufb01es some characteristic or attribute of x and hides it under a hat, h(x) = \u02c6 x. The x is still visible in our notation for illustration purposes only. It then falls to the seek function, 8Insert positive/negative term according to personal preference. Since even the Wikipedia page isn\u2019t quite sure, we follow the \ufb01scally prudent approach espoused by Aaronson (2006). It is also a neat coincidence that the placeholder looks like a string. 9Sensibly checking for cloud cover, since any application of Duckworth\u2013Lewis-Stern at this stage of play spells crushing defeat. 10We note that recently, several adventurers have declared new efforts at unifying theories of physics (Wolfram, 2020; Weinstein, 2020). It seems dif\ufb01cult. We wish them well. 11Image credits: Spratt (2018); P\u0142o\u00b4 nski (2020); Akulich (2017). \fUnder review as a conference paper at SIGBOVIK 2021 s : X \u2192X to \ufb01nd what was hidden and recover x, s(\u02c6 x) \u2248x. Since it\u2019s just a game after all, s(\u00b7) agrees to lose by l(s \u25e6h \u25e6x, x) \u2208R to the degree that she fails to reconstruct x accurately. So far, so simple. 
And yet, at the time of writing, millions of such games are being played on increasingly warm silicon across the globe, each with its own subtle tweak to the rules of the game, the stature of the players and the measurements with which they play. How did we get here? Paraphrasing Enrico Fermi, \u201cWhere are they (the creators of these marvellous creatures)?\u201d To address this question, we \ufb01rst conducted a study of the variation in self-supervised learning. Inspired by the \ufb01ndings, we then propose a unifying theory for the origin of self-supervised learning. Variation in Self-Supervised Learning. We began our study of variation within the academic lab, where we observed signi\ufb01cant differences in learning system architectures emerge through the idealistic and hopeful designs of \ufb01rst year PhD students. We passed next to the variation found in the open landscape of the academic wilderness, populated by papers from an exotic jungle of sources: the wild-eyed late-stage graduate student in the throes of a \ufb01nal thesis push, the wizened postdoc (now studying their fourth language), the industrial research scientist (whose relaxed smile exudes con\ufb01dence in their health insurance), the independent researcher (too maverick to \ufb01t inside the system, too creative to throw in the research towel), the startup warrior (battling the manuscript as the runway crumbles beneath them) and the tenure-track professor (just 2.3 years away from her next night of sleep). Here too, we found an abundance of variety at every turn (see Fig. 2 for examples). Digging deeper, we studied fossil evidence from a number of 90\u2019s webpages in University servers which have been isolated for decades, lacking any inbound hyperlinks from the wider internet. It was here that we made a striking discovery: the dramatic phenotype changes in chirps and vocalisation patterns in the tweetverse, and vivid colours of visualisations in blog posts, were all the result of imperceptible source code (genotype) changes induced by a novel mutation mechanism. Unnatural Selection: A Unifying Theory for the Origin of Self-Supervised Learning. Excited by our discovery, we sought to better understand this mutation effect and observed the following: It is widely known that the primary mechanism by which a new codebase is formed is by combining the top two methods on paperswithcode.com to eke out a 0.001% mAP improvement. Crucially, however, reproduction of results from an identical git clone is not guaranteed, due to external conda environment factors such as rainforest humidity levels. Since the resulting diversity is produced in a competitive publish or perish environment, a struggle for existence then ensues, pruning species that do not wish to be pruned. Over generations, the variety produced by this process, termed unnatural selection, can be tremendous (we visualise this effect in Fig. 1). The implications of this theory are profound. For many centuries, scholars have been perplexed by the complexity of \u201cresearch code\u201d found in the wild. Through unlikely combinations of Stack Over\ufb02ow snippets, strangely fortuitous bugs and haphazard merges of git con\ufb02icts, these projects would produce publishable results despite defying all known laws of Software Engineering. The traditional dogma put this down to the designs of an all-knowing Supervisor. 
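Returning briefly to the hide-and-seek formulation of Sec. 3, the following minimal sketch instantiates it with a toy corruption (h zeroes one coordinate), a linear seeker s fitted by least squares, and a squared-error loss l; the shapes, masking scheme and solver are illustrative assumptions rather than any particular species of self-supervised learner.

```python
# Minimal toy instance of the hide-and-seek game: h hides part of x, s tries to
# recover x, and l measures how badly s loses. Everything here is illustrative.
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(256, 8))        # a batch of measurements x in R^8

def h(x, hidden=3):
    # Hiding function: put coordinate `hidden` under a hat (zero it out).
    x_hat = x.copy()
    x_hat[:, hidden] = 0.0
    return x_hat

def l(x_rec, x):
    # The seeker agrees to lose by the mean squared reconstruction error.
    return float(np.mean((x_rec - x) ** 2))

# Seek function s: a linear map fitted to undo the hiding, s(x_hat) = x_hat @ W.
X_hat = h(X)
W, *_ = np.linalg.lstsq(X_hat, X, rcond=None)
print("l(s(h(x)), x) =", round(l(X_hat @ W, X), 4))
```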
Yet the evidence we have gathered now suggests it to be instead a process of gradual diverging changes from previous codebases, back to a hypothesised \u201cInitial commit\u201d in an SVN repository eons ago. We can only speculate about unknowable protozoal generations of e-mailed zipped snapshots of even earlier versions. 4 EXPERIMENTS In this section, we comprehensively validate our theory with in carbono experiments. Given the rapid rate of reproduction of self-supervised learning systems, we were able to follow the example of monastic-fantastic Gregor Mendel (1865) and his famous pea-breeding experiments (as part of our 5-a-day), and enlarged the scope of our experiments to geological timescales, encompassing 1,000 generations of proposed systems, or about one week of arXiv submissions. From a modest initial population composed of nothing but support vector machines and fuzzy logic models (a protected species at risk of poaching due to its luxurious fur), we observed a cornucopia of methods emerge: gradient descentipedes large enough to \ufb01t a standard full-page \ufb01gure; colonies of cross-antropy loss functions; angiosperm plants with copious pollen-omial production; mothogonal initialisers; cicadagrad (with very noisy gradients); and beartypes (which are much stricter than their equatorial python counterparts (Curry et al., 2020)). These specimens were capable of multiple \fUnder review as a conference paper at SIGBOVIK 2021 Y LeCun How Much Information Does the Machine Need to Predict? \u201cPure\u201d Reinforcement Learning (cherry) The machine predicts a scalar reward given once in a while. A few bits for some samples Supervised Learning (icing) The machine predicts a category or a few numbers for each input Predicting human-supplied data 10 10,000 bits per sample \u2192 Unsupervised/Predictive Learning (cake) The machine predicts any part of its input for any observed part. Predicts future frames in videos Millions of bits per sample (Yes, I know, this picture is slightly offensive to RL folks. But I\u2019ll make it up) Figure 3: Comparison to state-of-the-art unifying theory cake metaphors. From top left to bottom right: (i) The seminal multi-layered cake metaphor introduced by LeCun (2016), linking reinforcement learning, supervised learning and predictive learning, (ii) a chef\u2019s revision to the base cake, paying homage to the critical role of self-supervised learning (LeCun, 2019). (iii) a vain attempt to claim state-of-the-tart by simply increasing cake depth (Albanie et al., 2018), (iv) the hindsight experience replay cake of (Abbeel, 2017)\u2014less unifying than prior work, but with more delicious cherries, (v) in this work, we highlight the role of Nature\u2019s powerful fourth (hidden) component of learning to complement the three of LeCun (2019): the ants that evolved to pick up the crumbs of cake that have fallen off the table12. A beautiful display of trickle-down eco-nomics. Thus, following the footsteps of famed confectionery enthusiast Marie Antoinette, we shall let them have cake (and by \u201cthem\u201d we mean all three readers of this article; hello Mrs. Jo\u02dc ao). tasks of the natural world, such as se-mantis segmentation or trans-fur learning. As our awareness of the growing absurdity of the number of puns also grew, we decided to hide in a nearby Random Forest and narrate from a Conditional Branch with a very on-the-nose impression of Sir David Attenborough. 
It was obvious that we were sitting precariously close to the front of a feedforward food chain, and we did not want to personally test whether we still enjoyed humanity\u2019s status as apex predators, or had been downgraded to prey. We decided to shield ourselves on the Rainy Picnic Corollary of the No Free Lunch Theorem, and returned home in time for (pea-based) supper. Comparison to state of the art. In Fig. 3, we compare our theory to the existing state of the art in unifying theories, expressed as cake metaphors. Crucially, compared to other unifying theories with at most three components, our theory encompasses not only the cake, but also the ecosystem of nature surrounding it, rendering it comprehensively more unifying. Having thoroughly validated our framework, we turn next to its implications. We highlight the critical importance of the conservation of deep learning models in ensuring a healthy ML ecosystem for future generations focusing particularly on experimental conservation efforts. The Conservation of Deep Learning Models. Beginning with Krizhevsky et al. (2012) there has been a surge of public interest in neural network architectures. For a time it became a fashionable practice among high society to collect exotic GAN variants, with single-letter-based naming schemes leading to a quick depletion of both the Latin and Greek alphabets, and a few failed emojibased attempts. In order to satiate this demand, numerous Model Zoos were established, providing easy access to gigabytes of model weights and a fun day\u2019s activity for the kids. However, concerns soon arose over the effects of removing these models from their natural habitats. Models which were born racing through ImageNet epochs on a 64 GPU cluster were now being limited to the 12Image credit: Unsplash (2021). \fUnder review as a conference paper at SIGBOVIK 2021 cramped and dull con\ufb01nes of an S3 bucket. Deteriorating conditions at Model Zoos past their glory days caused further alarm, with ageing .caffemodel \ufb01les suffering from protobuf drift and custom layers lost to time. Eccentric Model Zoo owners were also known to operate illicit Ganbreeder programmes supplying the rich and famous. In the wild, too, many species of models became increasingly rare and endangered, surviving on only in such remote corners of research labs as that server sitting under a PostDoc\u2019s desk since 2012 that must never be unplugged.13 Organisations such as Big GAN Rescue have sought to provide sanctuary for old and abandoned models, operating VMs running MATLAB R2013a and vintage versions of MatConvNet, allowing these models to live out the rest of their days with a daily epoch of vanilla-\ufb02avoured CIFAR10. Efforts have also been directed towards rewilding, through mass-uploading of models to peerto-peer \ufb01lesharing services, allowing models to roam across the open plains of the internet as VGG 16 BDRIP HyPeRDEEP (fansub).xvid.rar. 5"
+ },
+ {
+ "url": "http://arxiv.org/abs/2008.00744v1",
+ "title": "The End-of-End-to-End: A Video Understanding Pentathlon Challenge (2020)",
+ "abstract": "We present a new video understanding pentathlon challenge, an open\ncompetition held in conjunction with the IEEE Conference on Computer Vision and\nPattern Recognition (CVPR) 2020. The objective of the challenge was to explore\nand evaluate new methods for text-to-video retrieval-the task of searching for\ncontent within a corpus of videos using natural language queries. This report\nsummarizes the results of the first edition of the challenge together with the\nfindings of the participants.",
+ "authors": "Samuel Albanie, Yang Liu, Arsha Nagrani, Antoine Miech, Ernesto Coto, Ivan Laptev, Rahul Sukthankar, Bernard Ghanem, Andrew Zisserman, Valentin Gabeur, Chen Sun, Karteek Alahari, Cordelia Schmid, Shizhe Chen, Yida Zhao, Qin Jin, Kaixu Cui, Hui Liu, Chen Wang, Yudong Jiang, Xiaoshuai Hao",
+ "published": "2020-08-03",
+ "updated": "2020-08-03",
+ "primary_cat": "cs.CV",
+ "cats": [
+ "cs.CV"
+ ],
+ "main_content": "Introduction Convolutional neural networks have yielded unprecedented progress on a wide range of image-centric benchmarks, driven through a combination of well-annotated datasets and end-to-end training. However, naively extending this approach from images to higher-level video understanding tasks quickly becomes prohibitive with respect to the computation and data annotation required to jointly train multi-modal high-capacity models. In this challenge, we focus on an alternative expertsdriven approach\u2014features are \ufb01rst pre-extracted from a wide range of pretrained models (the experts) and cached as an intermediate representation (specialised for semantically relevant machine perception tasks) that can then be used to train the \ufb01nal system. The goal of this challenge is to build a system to retrieve videos from natural language queries across a \u201cpentathlon\u201d of \ufb01ve video retrieval benchmarks. Rather than training a retrieval system \u201cend-to-end\u201d, participants are provided with a diverse collection of carefully curated visual, audio and natural language pre-extracted features. \u2217Equal contribution. Correspondence to albanie@robots.ox.ac.uk 1https://www.robots.ox.ac.uk/\u02dcvgg/challenges/ video-pentathlon/ There are several bene\ufb01ts to the experts-driven approach: (a) Practicality\u2014models for novel tasks can be composed together to exploit the available annotation in a data-ef\ufb01cient manner (by contrast, learning robust representations across all modalities from scratch would require vast levels of annotation to achieve comparable performance); (b) Effectiveness\u2014the experts-driven approach now represents the current state-of-the-art on many video and language understanding tasks [18, 22]; (c) Accessibility\u2014it enables researchers without access to industrial computing clusters to contribute towards questions of fundamental importance to video understanding. This report summarizes the \ufb01ndings of the 2020 video understanding pentathlon challenge. The rest of the report is structured as follows: in Sec. 2, we describe the mechanics of the challenge together with the datasets that make up the pentathlon; in Sec. 3, we describe the challenge phases and evaluation rules. Then, in Sec. 4 we offer a brief overview of the methods used by participants in the challenge and the \ufb01nal competition ranking, before concluding in Sec. 5. 2. Challenge Mechanics In this section, we describe the datasets selected to form the video pentathlon, the pre-extracted features and the baseline model provided to the participants. 2.1. Dataset Selection The video pentathlon consisted of the \ufb01ve following datasets that constitute the benchmarks/challenges of the pentathlon: MSVD [5]: comprises a total of 80K descriptions (in English) for 1,970 videos sourced from YouTube (with approximately 40 sentences per video). Unlike the other datasets featured in the pentathlon, the videos contained in MSVD do not possess audio streams. DiDeMo [1]: consists of unedited, personal videos that are collected in an open-world setting and which include diverse content such as pets, music concerts and sports 1 arXiv:2008.00744v1 [cs.CV] 3 Aug 2020 \fDataset train vids val vids public server val vids public server test vids max queries per vid MSVD [5] 1080 120 100 670 81 DiDeMo [1] 7552 840 1065 1004 1 ActivityNet [16] 8007 1001 1001 4917 1 MSRVTT [31] 5861 652 497 2990 20 YouCook2 [33] 7745 968 969 3310 1 Table 1. 
Statistics of the \ufb01ve datasets and four partitions used in the Video Pentathlon Challenge. Paired data for the train and val splits were made available for model development. Paired data for the public server val and public server test partitions was withheld and stored on an evaluation server. The former was provided to enable participants to sanity check their models, while the latter was used to produce the \ufb01nal ranking of the challenge (the challenge phases corresponding to these splits are described in Sec. 3). games. The dataset comprises 10,464 videos which are accompanied by approximately 3-5 pairs of descriptions and distinct moments per video. ActivityNet(+captions) [16]: contains a total of 15K videos (sourced from the original ActivityNet dataset) accompanied by approximately 100K descriptive sentences. The videos, originally sourced from YouTube, exhibit a broad diversity of actions and content. MSR-VTT [31]: contains 10K videos sourced from YouTube which are accompanied by 200K descriptive captions (thus, there are 200K unique video-caption pairs in total). YouCook2 [33]: includes 2000 long untrimmed videos from 89 cooking recipes; on average, each distinct recipe has 22 videos. The videos are sourced from YouTube and contains content \ufb01lmed from a third-person viewpoint with un\ufb01xed cameras. The statistics of the \ufb01ve datasets are provided in Table 1, together with information about the train/test partitions. 2.2. Pre-extracted Experts A diverse collection of carefully curated visual, audio and natural language pre-extracted features were provided to the participants including 8 features pre-extracted from visual perception models, 2 features from audio models and 2 features from natural language models. To produce features of a manageable size, the raw model outputs were temporally aggregated in three ways: (1) temporal average pooling (across frames); (2) temporal max pooling (across frames) and (3) \u201c\ufb01xed seg\u201d, where the features were partitioned into a \ufb01xed number of uniformly spaced \u201cchunks\u201d (8 in total) and then average pooled within the chunk (the goal of this aggregation strategy was to preserve coarse-grained temporal information). Since the test set of each of the datasets was already public, the features were obfuscated prior to release. Further details on the features are provided below (for each set of features, we provide the name used to describe the features on the challenge website in brackets). Perception Models We provided pre-extracted visual perception features for object, scene and action recognition, as well as for face-veri\ufb01cation and optical character recognition (OCR). For certain categories, we provide multiple models to enable retrieval systems to bene\ufb01t from with different architectures or pretraining data. 1. Object Features (imagenet.resnext101.0): are extracted using a ResNeXt-101 model [29] that has been pretrained on Instagram hashtags [20] and \ufb01ne-tuned on ImageNet for the task of image classi\ufb01cation. Features are extracted from frames extracted at 25 fps, where each frame is resized to 224 \u00d7 224 pixels. The dimension of the embeddings is 2048 and the dimension of logits is 1000. 2. Object Features (imagenet.senet154.0): are extracted using a SENet-154 model [13] that has been trained on ImageNet for the task of image classi\ufb01cation. Features are extracted from frames extracted at 25 fps, where each frame is resized to 224 \u00d7 224 pixels. 
The dimension of the embeddings is 2048 and the dimension of logits is 1000. 3. Scene Features (scene.densenet161.0): are extracted from 224 \u00d7 224 pixel centre crops with a DenseNet161 [14] model pretrained on Places365 [32]. The dimension of the embeddings is 2208 and the dimension of logits is 365. 4. Action Features (i3d.i3d.0): are extracted with an I3D inception model pretrained on Kinetics-400 that computes features following the procedure described by [4]. Frames are extracted at 25fps and processed in batches of 64 with a stride of 25 frames. Each frame is \ufb01rst resized to a height of 256 pixels (preserving aspect ratio), before a 224 \u00d7 224 centre crop is passed to the model. The dimension of the embeddings is 1024 and the dimension of logits is 400. \f5. Instructional Video Features (s3dg.s3dg.0): are extracted with an S3D [30] model that computes features following the learning procedure described by [21] trained on the HowTo100M dataset [23]. Frames are extracted at 10fps and processed in clips of 32 frames with a stride of 16 frames. Each frame is \ufb01rst resized to a height of 256 pixels (preserving aspect ratio), before a 224 \u00d7 224 centre crop is passed to the model. The dimension of the embeddings is 1024 and the dimension of logits is 512. 6. Instagram Features (r2p1d.r2p1d-ig65m.0): are extracted with with a 34-layer R(2+1)D model [28] trained on IG-65m [10] which processes clips of 8 consecutive 112 \u00d7 112 pixel frames, extracted at 30 fps (we use the implementation provided by [7]). The dimension of the embeddings is 512 and the dimension of logits is 359. 7. Instagram Video Features (r2p1d.r2p1d-ig65mkinetics.0): are extracted with a 34-layer R(2+1)D model [28] trained on IG-65m [10] and then \ufb01ne-tuned on Kinetics-400 [4] which processes clips of 8 consecutive 112 \u00d7 112 pixel frames, extracted at 30 fps (as above, we use the implementation provided by [7]). The dimension of the embeddings is 512 and the dimension of logits is 400. 8. Face features (face): are extracted in two stages: (1) Each frame (also extracted at 25 fps) is resized to 300 \u00d7 300 pixels and passed through an SSD face detector [17, 2] to extract bounding boxes; (2) The image region of each box is resized such that the minimum dimension is 224 pixels and a centre crop is passed through a ResNet50 [11] that has been trained for the task of face classi\ufb01cation on the VGGFace2 dataset [3], producing an embedding for each detected face. The dimension of the embeddings is 512. 9. Optical Character Recognition Features (OCR): are extracted in two stages: (1) Each frame is resized to 800 \u00d7 400 pixels) and passed through Pixel Link [8] text detection model to extract bounding boxes for texts; (2) The image region of each box is resized to 32 \u00d7 256 and then pass these through a model [19] that has been trained for scene text recognition on the Synth90K dataset [15], producing a character sequence for each detect box. They are then encoded via a pretrained word2vec embedding model [24]. The dimension of the embeddings is 300 (word2vec). Audio Models 1. Sound Features (audio): are obtained with a VGGish model, trained for audio classi\ufb01cation on the YouTube-8m dataset [12]. To produce the input for this model, the audio stream of each video is re-sampled to a 16kHz mono signal, converted to an STFT with a window size of 25ms and a hop of 10ms with a Hann window, then mapped to a 64 bin log mel-spectrogram. 
Finally, the features are parsed into non-overlapping 0.96s collections of frames (each collection comprises 96 frames, each of 10ms duration), which is mapped to a 128-dimensional feature vector. The dimension of the embeddings is 128. 2. Speech Features (speech): The audio stream of each video is re-sampled to a 16kHz mono signal. We then obtained transcripts of the spoken speech for MSRVTT, MSVD and ActivityNet using the Google Cloud Speech to Text API from the resampled signal. The language for the API is speci\ufb01ed as English. The dimension of the embeddings is 300 (word2vec). Natural Language Models: 1. Word2Vec Features (text-w2v): Each word of the video description is encoded using the Google News trained word2vec word embeddings [24]. The dimension of the embeddings is 300. 2. OpenAI Features (text-openai): Each word of the video description is encoded with a pretrained OpenAI-GPT model [25] to extract context-speci\ufb01c word embeddings (i.e., not only learned based on the current word but also the sequential context). The dimension of the embeddings is 768. 2.3. Baseline Model In order to provide a starting point for entrants to the challenge, we provided solid baseline code for each dataset. The baseline model provided consisted of a simple joint text-video embedding which operated on pre-computed ImageNet and I3D features, supporting the method variants described in [18] and [22]. Code for the baseline model can be found at the challenge page2. 3. Challenge Phases and Evaluation Rules Submissions were made through the CodaLab website3. The challenge had two phases, corresponding to the two partitions of the data which were used for the evaluation. The two phases were: 1. Development/Val Phase: The \u2018public server val\u2019 partition was open continuously throughout the challenge 2 https://www.robots.ox.ac.uk/\u02dcvgg/challenges/ video-pentathlon/challenge.html 3https://competitions.codalab.org/competitions/ 24292 \fFigure 1. The evolution of the top leaderboard val score through time. User Total Score MSVD DiDeMo ActivityNet MSRVTT YouCook2 MMT 2511.43 (1) 70.24 (2) 46.30 (1) 51.57 (2) 70.15 (1) 27.12 (1) cszhe 2448.56 (2) 75.33 (1) 45.88 (2) 51.76 (1) 66.32 (2) 13.76 (3) acdart 1994.89 (3) 58.12 (4) 33.34 (4) 40.79 (3) 50.99 (3) 24.39 (2) LEgGOdt 1895.01 (4) 59.58 (3) 33.89 (3) 38.29 (4) 49.77 (4) 9.60 (6) haoxiaoshuai 1496.98 (5) 41.95 (6) 31.16 (5) 34.28 (5) 24.55 (6) 10.00 (4) zzu 1459.72 (6) 42.40 (5) 25.47 (7) 23.30 (7) 35.58 (5) 9.88 (5) vgg (baseline) 1250.00 (7) 28.95 (7) 26.06 (6) 29.06 (6) 14.91 (7) 7.54 (7) bland 1249.46 (8) 28.88 (8) 26.06 (6) 29.06 (6) 14.90 (8) 7.54 (7) Table 2. Video Understanding Pentathlon Challenge 2020 \ufb01nal results. The number in parentheses indicates ranking and bold text highlights the top ranked result under each metric. (from 9th April 2020) and provided an opportunity for participants to assess progress and sanity check their submissions. This computed results on the public validation partition of each dataset. 2. Challenge Phase: The \u2018public server test\u2019 was used to produce the \ufb01nal ranking of submissions. The challenge phase took place between 9th May 2020 and 4th June 2020. This computed results on the public test partition of each dataset. Only one submission per day per team was allowed. In total, each team could make 30 submissions to the validation set and 3 submissions to the test set. 
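As a concrete illustration of the three temporal aggregation strategies applied to the pre-extracted experts of Sec. 2.2, a minimal sketch is given below. It assumes per-frame features arrive as a (num_frames, dim) NumPy array; the function name is illustrative and not part of the released feature pipeline.

    import numpy as np

    def aggregate_expert_features(features, mode='avg', num_chunks=8):
        # Temporally aggregate per-frame expert features of shape (num_frames, dim).
        #   'avg'       -> temporal average pooling across frames
        #   'max'       -> temporal max pooling across frames
        #   'fixed_seg' -> partition frames into num_chunks uniform chunks and
        #                  average pool within each chunk (coarse temporal order kept)
        if mode == 'avg':
            return features.mean(axis=0)
        if mode == 'max':
            return features.max(axis=0)
        if mode == 'fixed_seg':
            chunks = np.array_split(features, num_chunks, axis=0)
            dim = features.shape[1]
            # very short clips can yield empty chunks; represent those with zeros
            return np.stack([c.mean(axis=0) if len(c) else np.zeros(dim) for c in chunks])
        raise ValueError('unknown aggregation mode: %s' % mode)

The default of 8 chunks mirrors the fixed number of uniformly spaced chunks used for the released "fixed seg" features.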
For this challenge, participants could process the text as they wished, but training on visual features from external datasets was not permitted. Entries into the challenge were scored under a decathlon style scoring system (inspired by its usage in the visual decathlon [26]). For each of the \ufb01ve datasets i \u22081, ..., 5, we \ufb01rst compute a measure of the quality of retrieval in each individual dataset. This \u201cquality measure\u201d gi comprises the geometric mean of recall @K for K \u22081, 5, 10, computed as follows: gi = \u0010 Y k\u2208{1,5,10} ri,k \u0011 1 3 , (1) where ri,k represents the recall @k on the ith dataset, i.e., the rate at which the correct video is retrieved amongst the top k ranked results. The overall pentathlon score used for the \ufb01nal ranking of the submissions is then computed as follows: S = 5 X i=1 \u03b1i max{0, gi \u2212goffset i }\u03b3, (2) where \u03b3 is an exponential scaling factor that rewards gains in performance more heavily as they grow greater, the value is set to 2; goffset i is a value that ensures that the baseline models achieve a score of 250 points on each dataset. The baselines, therefore, act to calibrate the dif\ufb01culty of each dataset; \u03b1i is assigned the value 1000(1 \u2212goffset i )\u2212\u03b3, which ensures that a perfect score gi achieves a results of 1000. \fAppearance Audio Speech \u201cthey will win the cup\u201d Video experts Features F(v) Video embeddings \u03a9(v) CLS A polo player rides a horse Caption words c Caption embedding h(c) output similarity s(v, c) Caption representation \u03a6(c) Video representation \u03a8agg(v) AGG AGG AGG BERT MMT projection and pooling + expert encoding + temporal encoding mixture weights wi(c) weighting of each similarity Gated embedding modules Figure 2. The overall framework of the winner\u2019s proposed approach. They used Multi-modal Transformer (MMT, right) to encode video, and BERT (left) for text. Figure 3. The overall framework of the second place proposed approach \u2013 hierarchical graph reasoning model. 4. Challenge methods and teams The video understanding pentathlon challenge received 56 submissions from 10 teams in total. The evolution of the leaderboard on the val partition is shown in Fig. 1. Table 2 reports the scores using all metrics on the \ufb01nal test partition for each team. Of these, the 4 top teams have declared their af\ufb01liation and submitted technical reports. In this document, we provide a brief introduction to the technical reports in order of their overall rank on the public leaderboard. Please refer to the technical reports 4 for more details. Table 3 details the winners of the video understanding pentathlon challenge 2020, announced as part of The Endof-End-to-End: A Video Understanding Pentathlon workshop at CVPR 2020. Rank 1: MMT is the top-ranking entry by INRIA and Google. The overall framework of their proposed approach is shown in Fig. 2. The team used a multi-modal transformer to jointly encode different video modalities which allowed each of them to attend to the others. The features 4The technical reports are available at https://www.robots.ox. ac.uk/\u02dcvgg/challenges/video-pentathlon/ were then augmented with an expert type encoding and a temporal position encoding. To encode text, they investigated how to jointly optimize the language embedding together with the multi-modal transformer. Team MMT ensembled 16 models for each dataset for their \ufb01nal submission. 
A more detailed study of the method is given in the conference paper version of the method [9]. Rank 2: cszhe is the second ranking entry by Renmin University of China. Firstly, the team proposed a hierarchical graph reasoning model [6] which decomposed videotext matching into hierarchical levels for \ufb01ne-grained retrieval. The overall framework of the proposed hierarchical graph reasoning model is shown in Fig. 3. Secondly, they explored query expansion and hubness mitigation methods (by using an Inverted Softmax [27]) during the inference to improve a naive nearest neighbor search. Thirdly, they demonstrated that it is bene\ufb01cial to use additional datasets in a simple multi-task training approach. For the \ufb01nal submission, 3 5 models were ensembled for each dataset. Rank 3: LEgGOdt is the third ranking entry by Xinhua Zhiyun Technology Co. Ltd. The team proposed a hybrid sequence encoder in combination with collaborative experts \fFigure 4. The overall framework of the third place proposed approach \u2013 a hybrid sequence encoder. Team Members Loss LM Ensemble # Cross-dataset Temporal agg. Expert agg. QE HM 1. MMT Valentin Gabeur Max-Margin Pretrained 16 Yes Transformer Transformer Yes No Inria Chen Sun Ranking Loss BERT +Max pool +MEE Google AI Karteek Alahari Cordelia Schmid 2. cszhe Shizhe Chen Inverted Softmax Glove 5 Yes HGR HGR Yes Yes Renmin Yida Zhao +Max-Margin +BiLSTM (MSR-VTT) Uni. of China. Qin Jin Ranking loss +HGR 3. LEgGOdt Kaixu Cui Max-Margin OpenAI GPT 1 Yes N.A. Concat Yes No Xinhua Zhiyun Hui Liu Ranking loss + BiGRU Tech. Co. Ltd. Chen Wang +GhostVLAD Yudong Jiang +1D-Conv Table 3. A summary of the methods from the Top-3 winning teams in the Video Understanding Pentathlon challenge 2020 with the participants\u2019 names and af\ufb01liations. LM: Language Model, agg.: Aggregation, QE: Query Expansion. HM: Hubness mitigation. Ensemble #: Ensemble Size [18] to construct a common space for the video retrieval task via multi-modal common space learning. The overall framework of the hybrid sequence encoder is shown in Fig. 4. During training, they trained jointly on all datasets and selected the best performance model for each dataset, and then \ufb01ne-tuned on each datasets for the \ufb01nal submission. Rank 4: haoxiaoshuai is the fourth ranking entry by Chinese Academy of Sciences. The team designed a new bi-directional hard-negative ranking loss (Bi-HNRL) that emphasizes on the hardest negatives in the training stage. Specially, they focused on the hardest negative video and query sentence (closest to a positive pair) instead of summing over all negatives. 5."
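For reference, the decathlon-style scoring of Eqs. (1) and (2) can be sketched in a few lines of Python. Recall values are assumed to lie in [0, 1], the offsets are the per-dataset calibration constants derived from the baseline, and the function names are ours rather than those of the official evaluation server.

    import numpy as np

    def dataset_quality(recalls):
        # Quality measure g_i of Eq. (1): geometric mean of recall@K for K in {1, 5, 10}.
        return float(np.prod([recalls[k] for k in (1, 5, 10)]) ** (1.0 / 3.0))

    def pentathlon_score(qualities, offsets, gamma=2.0):
        # Overall score S of Eq. (2), summed over the five datasets.
        score = 0.0
        for g, g_off in zip(qualities, offsets):
            alpha = 1000.0 * (1.0 - g_off) ** (-gamma)  # a perfect g_i = 1 contributes 1000 points
            score += alpha * max(0.0, g - g_off) ** gamma
        return score

Setting gamma = 2 reproduces the exponential weighting that rewards larger performance gains more heavily.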
+ },
+ {
+ "url": "http://arxiv.org/abs/2003.14415v1",
+ "title": "State-of-Art-Reviewing: A Radical Proposal to Improve Scientific Publication",
+ "abstract": "Peer review forms the backbone of modern scientific manuscript evaluation.\nBut after two hundred and eighty-nine years of egalitarian service to the\nscientific community, does this protocol remain fit for purpose in 2020? In\nthis work, we answer this question in the negative (strong reject, high\nconfidence) and propose instead State-Of-the-Art Review (SOAR), a neoteric\nreviewing pipeline that serves as a 'plug-and-play' replacement for peer\nreview. At the heart of our approach is an interpretation of the review process\nas a multi-objective, massively distributed and extremely-high-latency\noptimisation, which we scalarise and solve efficiently for PAC and CMT-optimal\nsolutions. We make the following contributions: (1) We propose a highly\nscalable, fully automatic methodology for review, drawing inspiration from\nbest-practices from premier computer vision and machine learning conferences;\n(2) We explore several instantiations of our approach and demonstrate that SOAR\ncan be used to both review prints and pre-review pre-prints; (3) We wander\nlistlessly in vain search of catharsis from our latest rounds of savage CVPR\nrejections.",
+ "authors": "Samuel Albanie, Jaime Thewmore, Robert McCraith, Joao F. Henriques",
+ "published": "2020-03-31",
+ "updated": "2020-03-31",
+ "primary_cat": "cs.AI",
+ "cats": [
+ "cs.AI"
+ ],
+ "main_content": "INTRODUCTION The process of peer review\u2014in which a scienti\ufb01c work is subjected to the scrutiny of experts in the relevant \ufb01eld\u2014has long been lauded an effective mechanism for quality control. Surgically inserted into the medical \ufb01eld by the cutting-edge work of (Ali al Rohawi, CE 854-931), it ensured that treatment plans prescribed by a physician were open to criticism by their peers. Upon discovery of a lengthy medical bill and a dawning realization that theriac was not the \u201cwonder drug\u201d they had been promised, unhappy patients could use these \u201cpeer reviews\u201d as evidence in the ensuing friendly legal proceedings. Despite this auspicious start, it took many years for the peer review protocol to achieve the popular form that would be recognised by the layperson on the Cowley Road omnibus today. Credit for this transformation may be at least partially attributed to the Royal Society of Edinburgh who were among the \ufb01rst to realise the bene\ufb01ts of out-sourcing complex quality assessments to unpaid experts (Spier, 2002). Effacing the names of these heroic contributors, in a process euphemistically called anonymous review, was a natural progression. Attempts to go further and have the reviewers retroactively pay 1W.A/W.A/B \u2192Reject. A single heavily caffeinated tear, glistening in the \ufb02ickering light of a faulty of\ufb01ce desk lamp, rolls down a weary cheek and falls onto the page. The footnote is smudged. The author soldiers on. 1 arXiv:2003.14415v1 [cs.AI] 31 Mar 2020 \fUnder review as a conference paper at SIGBOVIK 2020 In defense of revisiting Adapting Adaptations: Are convolutions convolutional enough? SOAR Score: 7/10 Recommendation: You should probably read this Novel Neville In recent years, the humble convolution has drawn praise from friends and foes alike for its enviable equivariance, parameter sharing and strong theoretical connection to Joseph Fourier. But is the convolution \"convolutional\" enough? This question forms the basis of the current work, in which we highlight scenarios in which one does not simply \"convolve\" a standard convolutional opeartor, willy-nilly, with all desired inputs. Figure 1: Proposed arXiv-integration: The arXiv server is an invaluable resource that has played a critical role in the dissemination of scienti\ufb01c knowledge. Nevertheless, a key shortcoming of the current implementation is that it is unopinionated, and offers little guidance in whether to invest time in reading each article. The SOAR plugin takes a different approach: summarising the scienti\ufb01c value of the work as an easily digestible score (out of ten) and offering a direct read/don\u2019t read recommendation, saving the reader valuable time. Future iterations will focus on removing the next bottleneck, the time-consuming \u201creading\u201d stage. for the privilege of reading a now-copyrighted manuscript (at the discounted price of 50) somehow did not catch on, despite the publishers\u2019 best intentions. Peer review (not to be confused with the French tradition of Pierre review, or indeed the spectacle of a pier revue) has since gone from strength-to-strength, and is now the primary quality \ufb01ltration system for works of merit in both the scienti\ufb01c and TikTok communities. Still, something is rotten in the state of reviewing. To determine what exactly is causing the smell, our starting point in this work is a critical review of peer review. 
We begin by highlighting three key shortcomings of the existing system. Ability to Scale. As anyone who has prepared for a tech internship interview knows, scale is important. And so are corner cases. And so is good communication. But the greatest of these is scale. To avoid carelessly ruling out future careers at Google, we therefore demonstrate an appreciation of the critical importance of this phenomenon. Indeed, it is here that we must mount our \ufb01rst attack on peer review: its inconvenient linear scaling. To concretise the implications of this runtime complexity, consider the nation of the United Kingdom which is approximately in Europe. There are, at the time of writing, 814 hereditary peers in the UK who are born directly into reviewership. Of these, 31 are dukes (7 of which are royal dukes), 34 are marquesses, 193 are earls, 112 are viscounts, and 444 are barons. Speed. The mean age of the House of Lords was 70 in 2017. With a lack of young whippersnappers amidst their ranks, how can we expect these venerable statesmen and stateswomen to do the allnighters required to review ten conference papers when they are only reminded of the deadline with two days notice because of a bug in their self-implemented calendar app? One solution is to ensure that they take care when sanitising date/time inputs across time-zones. But even a reliable calendar implementation offers limited defence against a surprise 47 page appendix of freshly minted mathematical notation. The proof of why this is problematic is left as an exercise for the reader. Consistency. The grand 2014 NeurIPS review experiment (Lawrence & Cortes, 2015) provides some insight into the consistency of the peer review process. When a paper was assigned to two independent review committees, about 57% of the papers accepted by the \ufb01rst committee were rejected by the second one and vice versa (Price, 2014). While these numbers provide a great deal of hope for anyone submitting rushed work to future editions of the conference, it is perhaps nevertheless worth observing that it brings some downsides. For one thing, it places great emphasis on the role of registering at the right time to get a lucky paper ID. This, in turn, leads to a great deal of effort on the part of the researcher, who must then determine whether a given ID (for example 2 \fUnder review as a conference paper at SIGBOVIK 2020 Figure 2: (Left) The state-of-the-art according to Cattelan et al. (2020), (Right) Some marginal improvements by various authors, with questionable added artistic and nutritional value (as measured in calories and milligrams of potassium).3 57382) is indeed, a lucky number, or whether they are best served by re-registering. A similar phenomenon is observed in large-scale deep learning experiments, which generally consist of evaluating several random initialisations, a job that is made harder by confounders such as hyper-parameters or architectural choices. By examining the points above, we derive the key following principle for review process design. Human involvement\u2014particularly that of elderly British hereditary peers\u2014should be minimised in the modern scienti\ufb01c review process. In this work, we focus on a particular instantiation of this principle, State-Of-the-Art Reviewing (SOAR), and its mechanisms for addressing these limitations. The remainder of the work is structured as follows. In Sec. 2, we review related work; in Sec. 3, we describe SOAR, our bullet-proof idea for automatic reviewing; in Sec. 
4 we develop a practical implementation of the SOAR framework, suitable for popular consumption. Finally, in Sec. 5, we conclude with our \ufb01ndings and dreams for swift community adoption. 2 RELATED WORK 2.1 INTEREST IN THE STATE-OF-THE-ART Since the discovery of art (Blombos Cave Engravings, ca. 70000 BC) there has been a rising interest in this form of expression, and consequently, the state thereof. From the Medici family of Florence to theatre buff King James I, much effort has been dedicated to patronage of the arts, and much prestige associated with acquiring the latest advances. Pope Julius II was keen to raise the papal state of the art to new heights, namely the ceiling, enlisting the help of renaissance main man Michelangelo. The score of Sistine remains competitive in chapel-based benchmarks, and Michelangelo became a testudine martial artist (with the help of his three equally-talented brothers) (Eastman & Laird, 1984). From early on, the importance of adding depth was appreciated (Studies on perspective, Brunelleschi, 1415), which continues to this day (He et al., 2016). Recently, the critically acclaimed work of Crowley & Zisserman (2014) illustrated how the state-of-the-art can be used to assess the state of art, calling into question the relevance of both hyphens and de\ufb01nite articles in modern computer vision research. Of least relevance to our work, Fig. 2 depicts state-of-the-art developments in the art world. 2Thankfully, numerology is on hand to supply an answer. \u201c5738: You are a step away from the brink that separates big money from lawlessness. Take care, because by taking this step, you will forever cut off your ways to retreat. Unless it is too late.\u201d (numeroscop.net, 2020) 3Photo credits: (left): NYT-Photography (2019) (top-centre): Noennig (2019), (top-right): Durian (2019), (bottom-center): Tampa-Police-Department (2019), (bottom-right): Popeyes (2019) 3 \fUnder review as a conference paper at SIGBOVIK 2020 Figure 3: (Left) The number of PhDs granted annually exhibits exponential growth (\ufb01gure reproduced from Gastfriend (2015)), (Right) Google retrieved ngram counts of \u201cState of the Art\u201d over the past 200 years of literature. Note that even when the axes are rotated slightly, it remains dif\ufb01cult to preserve an upwards trend. This evidence suggests that either PhDs are becoming exponentially less productive than their predecessors or that the existing reviewing system does not provide suf\ufb01cient incentivise to use the term \u201cstate-of-the-art\u201d in modern manuscripts. Our proposal directly addresses the latter. 2.2 LITERATURE REVIEW The Grapes of Wrath. In this classic portrayal of the American Dust Bowl, Steinbeck captures the extremes of human despair and oppression against a backdrop of rural American life in all its grittiness. A masterpiece. ##### Flyer for (redacted) startup, left on a table at NeurIPS 2019 next to a bowl of tortillas. Hastily put together in PowerPoint and printed in draft-mode amid the death throes of an ageing HP printer, this call for \u201cdedicated hackers with an appetite for Moonshots, ramen noodles and the promise of stock options\u201d comes across slightly desperate. ## 3 METHOD Science is often distinguished from other domains of human culture by its progressive nature: in contrast to art, religion, philosophy, morality, and politics, there exist clear standards or normative criteria for identifying improvements and advances in science. Stanford Encyclopedia of Philosophy In Sec. 
1, we identi\ufb01ed three key weaknesses in the peer review process: (1) inability to scale; (2) slow runtime and (3) inconsistent results. In the following, we describe the SOAR review scheme which seeks to resolve each of these shortcomings, and does so at minimal cost to the taxpayer or ad-funded research lab, enabling the purchase of more GPUs, nap-pods and airpods. 3.1 STATE-OF-THE-ART REVIEWING (SOAR) It is well known is that the quality of a scienti\ufb01c work can be judged along three axes: ef\ufb01cacy, signi\ufb01cance and novelty. Our key insight is that each of these factors can be measured automatically. Assessing ef\ufb01cacy. Ef\ufb01cacy is best assessed by determining if the proposed method achieves a new SotA (State-of-the-Art). Thankfully, from an implementation perspective, the authors can be relied upon to state this repeatedly in the text. Thus, rather than parsing results table formats (an errorprone process involving bold fonts and asterisks), we simply word count the occurrences of \u201cstateof-the-art\u201d (case insensitive) in the text. It stands to reason that a higher SotA count is preferable. 4 \fUnder review as a conference paper at SIGBOVIK 2020 Moreover, such an approach avoids the embarrassment of realising that one cannot remember what kind of statistical signi\ufb01cance test should be applied. Assessing signi\ufb01cance. Signi\ufb01cance is measured by ef\ufb01cacy. Thus, the ef\ufb01cacy term is weighted twice in the formula. Assessing novelty. The assessment of novelty requires close familiarity with prior art and an appreciation for the relative signi\ufb01cance of ideas. We make the key observation that the individuals best placed to make this judgement are the author themselves since they have likely read at least one of the works cited in the bibliography. We further assume that they will convey this judgement by using the word \u201cnovel\u201d throughout the document in direct proportion to the perceived novelty of the work. With the strategies de\ufb01ned above, we are now in a position to de\ufb01ne the SOAR score as follows. SOAR Score \u225c 3 p SSotA \u00b7 SSotA \u00b7 Snovelty /10. (1) Here SSotA and Snovelty represent the total occurrences in the manuscript of the terms \u201cstate-of-theart\u201d and \u201cnovel\u201d, respectively. In both cases, we exclude the related work section (it is important to avoid assigning SotA/novelty credit to the paper under review simply because they cite SotA/novel work). A geometric mean is used to trade-off each factor, but note that a paper must be both SotA and novel to achieve a positive SOAR score. Lastly, we attach a suf\ufb01x string \u201c/10\u201d to every SOAR score for additional credibility. Note that several factors are not assessed: vague concepts like \u201cmathematical proofs\u201d and \u201cinsights\u201d should be used sparingly in the manuscript and are assigned no weight in the review process. If the proof or insight was useful, the authors should use it to improve their numbers. SotA or it didn\u2019t happen. A key advantage of the SOAR formula is that it renders explicit the relationship between the key scienti\ufb01c objective (namely, more State-of-the-Art results) and the score. This lies in stark contrast to peer review, which leaves the author unsure what to optimise. Consider the \ufb01ndings of Fig. 3: we observe that although the number of PhDs granted worldwide continues to grow steadily, usage of the term \u201cState-of-the-Art\u201d peaked in the mid 1980\u2019s. 
Thus, under peer review, many PhD research hours are invested every year performing work that is simply not on the cutting edge of science. This issue is directly addressed by measuring the worthiness of papers by their state-of-the-artness rather than the prettiness of \ufb01gures, af\ufb01liation of authors or explanation of methods. With an appropriately increased focus on SotA we can also apply a \ufb01lter to conference submissions to immediately reduce the number of papers to be accepted. With top conferences taking tens of thousands of submissions each typically requiring three or more reviewers to dedicate considerable time to perform each review, the time savings over an academic career could be readily combined to a long sabbatical, a holiday to sunny Crete, or an extra paper submission every couple of weeks. 4 IMPLEMENTATION In this section, we outline several implementations of SOAR and showcase a use case. 4.1 SOFTWARE IMPLEMENTATION AND COMPLEXITY ANALYSIS We implement the SOAR algorithm by breaking the submission into word tokens and passing them through a Python 3.7.2 collections.Counter object. We then need a handful of \ufb02oating-point operations to produce the scalar component of Eqn. 1, together with a string formatting call and a concatenation with the \u201c/10\u201d. The complexity of the overall algorithm is judged reasonable. 5 \fUnder review as a conference paper at SIGBOVIK 2020 4.2 WETWARE IMPLEMENTATION AND COMPLEXITY ANALYSIS In the absence of available silicon, SOAR scoring can also be performed by hand by an attentive graduate student (GS) with a pencil and a strong tolerance to boredom. Much of the complexity here lies in convincing the GS that it\u2019s a good use of time. Initial trials have not proved promising. 4.3 ARXIV INTEGRATION We apply the SOAR scoring software implementation to the content of arXiv papers as a convenient Opera browser plugin. The effect of the plugin can be seen in Fig. 1: it provides a high-quality review of the work in question. Beyond the bene\ufb01ts of scalability, speed and consistency, this tool offers a direct \u201cread/don\u2019t read\u201d recommendation, thereby saving the reader valuable time which can otherwise be re-invested into rejecting reviewer invitations emails to compound its savings effect. We hope that this pre-review for pre-prints model will be of great utility to the research community. 5"
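A minimal re-implementation of the SOAR score of Eq. (1), in the spirit of the collections.Counter pipeline of Sec. 4.1, might look as follows. The treatment of hyphen and space variants of 'state-of-the-art', the handling of the related-work exclusion via a caller-supplied string, and the function name are all our own choices rather than details fixed by the paper.

    import re

    def soar_score(manuscript_text, related_work_text=''):
        # SOAR score of Eq. (1): geometric mean of S_SotA (weighted twice) and
        # S_novelty, excluding the related work section, with the '/10' suffix.
        def count(pattern, text):
            return len(re.findall(pattern, text, flags=re.IGNORECASE))
        sota_pat, novel_pat = r'state[- ]of[- ]the[- ]art', r'\bnovel\b'
        s_sota = max(0, count(sota_pat, manuscript_text) - count(sota_pat, related_work_text))
        s_novelty = max(0, count(novel_pat, manuscript_text) - count(novel_pat, related_work_text))
        score = (s_sota * s_sota * s_novelty) ** (1.0 / 3.0)
        return '%.1f/10' % score

A call such as soar_score(open('submission.txt').read()) then yields a score in the format used by the arXiv plugin of Sec. 4.3.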
+ },
+ {
+ "url": "http://arxiv.org/abs/1904.01114v1",
+ "title": "Deep Industrial Espionage",
+ "abstract": "The theory of deep learning is now considered largely solved, and is well\nunderstood by researchers and influencers alike. To maintain our relevance, we\ntherefore seek to apply our skills to under-explored, lucrative applications of\nthis technology. To this end, we propose and Deep Industrial Espionage, an\nefficient end-to-end framework for industrial information propagation and\nproductisation. Specifically, given a single image of a product or service, we\naim to reverse-engineer, rebrand and distribute a copycat of the product at a\nprofitable price-point to consumers in an emerging market---all within in a\nsingle forward pass of a Neural Network. Differently from prior work in machine\nperception which has been restricted to classifying, detecting and reasoning\nabout object instances, our method offers tangible business value in a wide\nrange of corporate settings. Our approach draws heavily on a promising recent\narxiv paper until its original authors' names can no longer be read (we use\nfelt tip pen). We then rephrase the anonymised paper, add the word \"novel\" to\nthe title, and submit it a prestigious, closed-access espionage journal who\nassure us that someday, we will be entitled to some fraction of their\nextortionate readership fees.",
+ "authors": "Samuel Albanie, James Thewlis, Sebastien Ehrhardt, Joao Henriques",
+ "published": "2019-04-01",
+ "updated": "2019-04-01",
+ "primary_cat": "cs.CV",
+ "cats": [
+ "cs.CV"
+ ],
+ "main_content": "INTRODUCTION In the early 18th Century, French Jesuit priest Franois Xavier d\u2019Entrecolles radically reshaped the geographical distribution of manufacturing knowledge. Exploiting his diplomatic charm and privileged status, he gained access to the intricate processes used for porcelain manufacture in the Chinese city of Jingdezhen, sending these \ufb01ndings back to Europe (over the course of several decades) in response to its insatiable demand for porcelain dishes (Giaimo, 2014). This anecdote is typical of corporate information theft: it is an arduous process that requires social engineering and expert knowledge, limiting its applicability to a privileged minority of well-educated scoundrels. Towards reducing this exclusivity, the objective of this paper is to democratize industrial espionage by proposing a practical, fully-automated approach to the theft of ideas, products and services. Our method builds on a rich history of analysis by synthesis research that seeks to determine the physical process responsible for generating an image. However, in contrast to prior work that sought only to determine the parameters of such a process, we propose to instantiate them with a just-in-time, minimally tax-compliant manufacturing process. Our work points the way to a career rebirth for those like-minded members of the research community seeking to maintain their raison d\u2019\u02c6 etre in the wake of recent fully convolutional progress. Concretely, we make the following four contributions: (1) We propose and develop Deep Industrial Espionage (henceforth referred to by its cognomen, Espionage) an end-to-end framework which enables industrial information propagation and hence advances the Convolutional Industrial Complex; (2) We introduce an ef\ufb01cient implementation of this framework through a novel application of differentiable manufacturing and sunshine computing; (3) We attain qualitatively state-of-theart product designs from several standard corporations; (4) We sidestep ethical concerns by failing to contextualise the rami\ufb01cations of automatic espionage for job losses in the criminal corporate underworld. 1 arXiv:1904.01114v1 [cs.CV] 1 Apr 2019 \fNearly published as a conference paper at SIGBOVIK 2019 \u03a6V CAM CAM DISTRIBUTION COLUMN PRODUCTION COLUMN MARKETING COLUMN MIDDLE MGMT LSTM LSTM LSTM LSTM LSTM LSTM LSTM TX LSTM TX TX TX TX TX TX TX STN STN STN STN STN STN STN STN L$ PROD. REVIEWS SALES COPYCAT PRODUCT Lvis INPUT IMAGE D INFLUENCERS DRONES RESEARCH COLUMN IDEAS ARXIV Figure 1: A random projection of the proposed multi-dimensional Espionage architecture. We follow best-practice and organise business units as tranposed horizontally integrated functional columns. The trunk of each column comprises stacks of powerful acronyms, which are applied following a Greek visual feature extractor \u03a6V . Gradients with respect to the loss terms L$ and Lvis \ufb02ow liberally across the dimensions (see Sec. 3.1 for details). We adopt a snake-like architecture, reducing the need for a rigid backbone and producing an altogether more sinister appearance. 2 RELATED WORK Industrial Espionage has received a great deal of attention in the literature, stretching back to the seminal work of Prometheus (date unknown) who set the research world alight with a wellexecuted workshop raid, a carefully prepared fennel stalk and a passion for open source manuals. 
A comprehensive botanical subterfuge framework was later developed by Fortune (1847) and applied to the appropriation of Chinese camellia sinensis production techniques, an elaborate pilfering orchestrated to sate the mathematically unquenchable British thirst for tea. More recent work has explored the corporate theft of internet-based prediction API model parameters, thereby facilitating a smorgasbord of machine learning shenanigans (Tram` er et al., 2016). In contrast to their method, our Espionage reaches beyond web APIs and out into the bricks and mortar of the physical business world. Astute readers may note that in a head-to-head showdown of the two approaches, their model could nevertheless still steal our model\u2019s parameters. Touch\u00b4 e. Finally, we note that while we are not the \ufb01rst to propose a convolutional approach to theft (BoredYannLeCun, 2018), we are likely neither the last, adding further justi\ufb01cation to our approach. Analysis by Synthesis. Much of the existing work on analysis by synthesis in the \ufb01eld of computer vision draws inspiration from Pattern Theory, \ufb01rst described by Grenander (1976-81). The jaw dropping work of Blanz et al. (1999) enabled a range of new facial expressions for Forrest Gump. This conceptual approach was generalised to the OpenDR framework through the considerable technical prowess of Loper & Black (2014), who sought to achieve generic end-to-end (E2E) differentiable rendering. Differently from OpenDR, our approach is not just E2E, but also B2B (business-to-business) and B2C (business-to-consumer). 3 METHOD The Espionage framework is built atop a new industrial paradigm, namely differentiable manufacturing, which is described in Sec. 3.1. While theoretically and phonaesthetically pleasing, this approach requires considerable computational resources to achieve viability and would remain intractable with our current cohort of trusty laptops (acquired circa 2014). We therefore also introduce an ef\ufb01cient implementation of our approach in Sec. 3.2 using a technique that was tangentially inspired by a recent episode of the Microsoft CMT submission gameshow while it was raining. 2 \fNearly published as a conference paper at SIGBOVIK 2019 1 try: # often fails on the first import never understood why 2 from espionage import net 3 except Exception: # NOQA 4 pass # inspecting the exception will bring you no joy 5 if \"x\" in locals(): del x # DO NOT REMOVE THIS LINE 6 from espionage import net # if second fail, try re-deleting symlinks? 7 net.steal(inputs) # when slow, ask Seb to stop thrashing the NFS (again) Figure 2: A concise implementation of our method can be achieved in only seven lines of code 3.1 DIFFERENTIABLE MANUFACTURING Recent developments in deep learning have applied the \u201ddifferentiate everything\u201d dogma to everything, from functions that are not strictly differentiable at every point (ReLU), to discrete random sampling (Maddison et al., 2016; Jang et al., 2017) and the sensory differences between dreams and reality. Inspired by the beautiful diagrams of Maclaurin et al. (2015), we intend to take this idea to the extreme and perform end-to-end back-propagation through the full design and production pipeline. This will require computing gradients through entire factories and supply chains. Gradients are passed through factory workers by assessing them locally, projecting this assessment by the downstream gradient, and then applying the chain rule. 
The chain rule only requires run-of-the-mill chains, purchased at any hardware store (\ufb02uffy pink chaincuffs may also do in a pinch), and greatly improves the productivity of any assembly line. Note that our method is considerably smoother than continuous manufacturing\u2014a technique that has been known to the machine learning community since the production of pig iron moved to long-running blast furnaces. Two dimensions of the proposed Espionage framework is depicted in Fig. 1. At the heart of the system is a pair of losses, one visual, Lvis, one \ufb01nancial L$. For a given input image, the visual loss encourages our adequately compensated supply line to produce products that bear more than a striking resemblance to the input. This is coupled with a second loss that responds to consumer demand for the newly generated market offering. Our system is deeply rooted in computer vision: thus, while the use of Jacobians throughout the organisation ensures that the full manufacturing process is highly sensitive to customer needs, the framework coordinates remain barycentric rather than customer-centric. To maintain our scant advantage over competing espionage products, details of the remaining n \u22122 dimensions of the diagram are omitted. 3.2 SUN MACROSYSTEMS Ah! from the soul itself must issue forth A light, a glory, a fair luminous cloud Enveloping the Earth Jeff Bezos Differentiable manufacturing makes heavy use of gradients, which poses the immediate risk of steep costs. The issue is exacerbated by the rise of costly cloud1 services, which have supported an entire generation of vacuous investments, vapour-ware and hot gas. Despite giving birth to the industrial revolution, smog and its abundance of cloud resources (see Fig. 4 in Appendix A, or any British news channel), the United Kingdom, has somehow failed to achieve market leadership in this space. Emboldened with a \u201cmove fast and break the Internet\u201d attitude (Fouhey & Maturana, 2012), we believe that it is time to reverse this trend. Multiple studies have revealed that sunshine improves mood, disposition, and tolerance to over-sugared caipirinhas. It is also exceedingly environmentally friendly, if we ignore a few global warming hiccups.2 The question remains, how does this bright insight support our grand computational framework for Espionage? To proceed, we must \ufb01rst consider prior work in this domain. 1Not to be confused with Claude by our French-speaking readers, according to Sebastien\u2019s account of a recent McD\u2019oh moment. 2Up to about 5 billion AD, when the Sun reaches its red giant phase and engulfs the Earth. 3 \fNearly published as a conference paper at SIGBOVIK 2019 Figure 3: Top row: A collection of unconstrained, natural images of products. Bottom row: Photographs of the physical reconstructions generated by our method. Note that the proposed Espionage system can readily produce full houses, speakers, water bottles and street signs\u2014all from a single image sample. When generating books, Espionage does not achieve an exact reconstruction, but still seeks to preserve the philosophical bent. Failure case: the precise layout of keys in technology products such as keyboards are sometimes altered. An early example of sunshine computing is the humble sundial. This technology tells the time with unrivalled accuracy and reliability, and automatically implements \u201ddaylight saving hours\u201d with no human intervention. 
Sunshine-powered sundials are in fact part of a new proposal to replace atomic clocks in GPS satellites (patent pending). With some obvious tweaks, these devices can form the basis for an entire sunshine-based ID-IoT product line, with fast-as-light connectivity based on responsibly-sourced, outdoors-bred photons. This is not to be confused with the electron-based \u201dfast-as-lightning\u201d transmission of cloud computing, an expression coined by the cloud computing lobbyists in a feeble attempt to suggest speed. The cloud lobby has been raining on our parade for too long and it is time to make the transition. We proceed with no concrete engineering calculations as to whether this is viable, but instead adopt a sense of sunny optimism that everything will work out \ufb01ne. Thus, with a blue sky above, sandals on our feet and joy in hearts, we propose to adopt a fully solar approach to gradient computation. 3.3 IMPLEMENTATION THROUGH A UNICORN STARTUP The appearance of rainbows through the interaction of legacy cloud computing and novel sunshine computing suggests that our framework can easily attain unicorn status. Because branding is everything, our \ufb01rst and only point of order was to choose the aforementioned rainbow as our logo and basis for marketing material. This cherished symbol expresses the diversity of colours that can be found in hard cash3. A quick back-of-the-envelope calculation showed that our startup\u2019s VC dimension is about 39 Apples, shattering several points, hopes and dreams. This quantity was rigorously veri\ufb01ed using the advanced accounting analytics of a 40-years-old, 100MB Microsoft Excel spreadsheet that achieved semi-sentience in the process. 4 EXPERIMENTS Contemporary researchers often resort to the use of automatic differentiation in order to skip writing the backward pass, in a shameful effort to avoid undue mathematical activity. We instead opt to explicitly write the backward pass and employ symbolic integration to derive the forward pass. Thanks to advances in computational algebra (Wolfram, 2013), this method almost never forgets the +C. Our method can then be implemented in just seven lines of Python code (see Fig. 2). To rigorously demonstrate the scienti\ufb01c contribution of our work, we conducted a large-scale experiment on a home-spun dataset of both branded and unbranded products. Example outcomes of this experiment can be seen in Fig. 3. 3For the most vibrant rainbow we conduct all transactions in a combination of Swiss Francs and Australian Dollars 4 \fNearly published as a conference paper at SIGBOVIK 2019 Ef\ufb01cacy was assessed quantitatively through a human preference study. Unfortunately, lacking both US and non-US credit cards, we were unable to procure the large sample pool of Amazon Mechanical Turks required to achieve statistically signi\ufb01cant results. We therefore turned to our immediate family members to perform the assessments. To maintain the validity of the results, these experiments were performed doubly-blindfolded, following the rules of the popular party game \u201cpin the tail on the donkey\u201d. The instructions to each blood relative stated simply that if they loved us, they would rate the second product more highly than the \ufb01rst. While there was considerable variance in the results, the experiment was a conclusive one, ultimately demonstrating both the potential of our approach and the warm affection of our loved-ones. 
Comparisons to competing methods were conducted, but removed from the paper when they diminished the attractiveness of our results. Reproducibility: Much has been written of late about the nuanced ethics of sharing pretrained models and code by the sages of the field (see e.g. OpenAI (2019) and Lipton (2019) for complementary perspectives). As adequately demonstrated by the title of this work, we are ill-qualified to contribute to this discussion, choosing instead to fall back on the tried-and-true research code release with missing dependencies, incorrectly set hyper-parameters, and reliance on the precise ordering of ls with Linux Kernel 2.6.32 and ZFS v0.7.0-rc4. This should allow us to replace public concern about our motives with pity for our technical incompetence. 5"
+ },
+ {
+ "url": "http://arxiv.org/abs/1808.05561v1",
+ "title": "Emotion Recognition in Speech using Cross-Modal Transfer in the Wild",
+ "abstract": "Obtaining large, human labelled speech datasets to train models for emotion\nrecognition is a notoriously challenging task, hindered by annotation cost and\nlabel ambiguity. In this work, we consider the task of learning embeddings for\nspeech classification without access to any form of labelled audio. We base our\napproach on a simple hypothesis: that the emotional content of speech\ncorrelates with the facial expression of the speaker. By exploiting this\nrelationship, we show that annotations of expression can be transferred from\nthe visual domain (faces) to the speech domain (voices) through cross-modal\ndistillation. We make the following contributions: (i) we develop a strong\nteacher network for facial emotion recognition that achieves the state of the\nart on a standard benchmark; (ii) we use the teacher to train a student, tabula\nrasa, to learn representations (embeddings) for speech emotion recognition\nwithout access to labelled audio data; and (iii) we show that the speech\nemotion embedding can be used for speech emotion recognition on external\nbenchmark datasets. Code, models and data are available.",
+ "authors": "Samuel Albanie, Arsha Nagrani, Andrea Vedaldi, Andrew Zisserman",
+ "published": "2018-08-16",
+ "updated": "2018-08-16",
+ "primary_cat": "cs.CV",
+ "cats": [
+ "cs.CV"
+ ],
+ "main_content": "INTRODUCTION Despite recent advances in the field of speech emotion recognition, learning representations for natural speech segments that can be used efficiently under noisy and unconstrained conditions still represents a significant challenge. Obtaining large, labelled human emotion datasets \u2018in the wild\u2019 is hindered by a number of difficulties. First, since labelling naturalistic speech segments is extremely expensive, most datasets consist of elicited or acted speech. Second, as a consequence of the subjective nature of emotions, labelled datasets often suffer from low human annotator agreement, as well as the use of varied labelling schemes (i.e., dimensional or categorical) which can require careful alignment [46]. Finally, cost and time prohibitions often result in datasets with low speaker diversity, *Equal contribution. arXiv:1808.05561v1 [cs.CV] 16 Aug 2018 \fmaking it difficult to avoid speaker adaptation. Fully supervised techniques trained on such datasets hence often demonstrate high accuracy for only intra-corpus data, with a natural propensity to overfit [42]. In light of these challenges, we pose the following question: is it possible to learn a representation for emotional speech content for natural speech, from unlabelled audio-visual speech data, simply by transferring knowledge from the facial expression of the speaker? Given the recent emergence of large-scale video datasets of human speech, it is possible to obtain examples of unlabelled human emotional speech at massive scales. Moreover, although it is challenging to assess the accuracy of emotion recognition models precisely, recent progress in computer vision has nevertheless enabled deep networks to learn to map faces to emotional labels in a manner that consistently matches a pool of human annotators [1]. We show how to transfer this discriminative visual knowledge into an audio network using unlabelled video data as a bridge. Our method is based on a simple hypothesis: that the emotional content of speech correlates with the facial expression of the speaker. Our work is motivated by the following four factors. First, we would like to learn from a large, unlabelled collection of \u2018talking faces\u2019 in videos as a source of free supervision, without the need for any manual annotation. Second, evidence suggests that this is a possible source of supervision that infants use as their visual and audio capabilities develop [30]. Newborns look longer at face-like stimuli and track them farther than non-face-like stimuli (Goren et al. [29]; Johnson et al. [38]), and combining these facial stimuli together with voices, detect information that later may allow for the discrimination and recognition of emotional expressions. Our third motivation is that we would like to be able to handle ambiguous emotions gracefully. To this end, we seek to depart from annotation that relies on a single categorical label per segment, but instead incorporate a measure of uncertainty into the labelling scheme, building on prior work by [66] and [32]. Finally, accepting that the relationship between facial and vocal emotion will be a noisy one, we would like to make use of the remarkable ability of CNNs to learn effectively in the presence of label noise when provided with large volumes of training data [45, 59]. 
We make the following contributions: (i) we develop a strong model for facial expression emotion recognition, achieving state of the art performance on the FERPlus benchmark (section 3.1), (ii) we use this computer vision model to label face emotions in the VoxCeleb [50] video dataset \u2013 this is a large-scale dataset of emotionunlabelled speaking face-tracks obtained in the wild (section 4); (iii) we transfer supervision across modalities from faces to a speech, and then train a speech emotion recognition model using speaking facetracks (section 5); and, (iv) we demonstrate that the resulting speech model is capable of classifying emotion on two external datasets (section 5.2). A by-product of our method is that we obtain emotion annotation for videos in the VoxCeleb dataset automatically using the facial expression model, which we release as the EmoVoxCeleb dataset. 2 RELATED WORK Teacher-student methods. Teaching one model with another was popularised by [12] who trained a single model to match the performance of an ensemble, in the context of model compression. Effective supervision can be provided by the \u201cteacher\u201d in multiple ways: by training the \u201cstudent\u201d model to regress the pre-softmax logits [7], or by minimising cross entropy between both models\u2019 probabilistic outputs [43], often through a high-temperature softmax that softens the predictions of each model [19, 34]. In contrast to these methods which transfer supervision within the same modality, cross-modal distillation obtains supervision in one modality and transfers it to another. This approach was proposed for RGB and depth paired data, and for RGB and flow paired data by [31]. More recent work [3, 5, 6, 53] has explored this concept by exploiting the correspondence between synchronous audio and visual data in teacher-student style architectures [5, 6], or as a form of \u201cselfsupervision\u201d [3] where networks for both modalities are learnt from scratch (an idea that was previously explored in the neuroscience community [9]). Some works have also examined cross-modal relationships between faces and voices in order to learn identity representations [39, 48, 49]. Differently from these works, our approach places an explicit reliance on the correspondence between the facial and vocal emotions emitted by a speaker during speech, discussed next. Links between facial and vocal emotion. Our goal is to learn a representation that is aware of the emotional content in speech prosody, where prosody refers to the extra-linguistic variations in speech (e.g. changes in pitch, tempo, loudness, or intonation), by transferring such emotional knowledge from face images extracted synchronously. For this to be possible, the emotional content of speech must correlate with the facial expression of the speaker. Thus in contrast to multimodal emotion recognition systems which seek to make use of the complementary components of the signal between facial expression and speech [15], our goal is to perform cross-modal learning by exploiting the redundancy of the signal that is common to both modalities. Fortunately, given their joint relevance to communication, person perception, and behaviour more generally, interactions between speech prosody and facial cues have been intensively studied (Cvejic et al. [21]; Pell [56]; Swerts and Krahmer [61]). 
The broad consensus of these works is that during conversations, speech prosody is typically associated with other social cues like facial expressions or body movements, with facial expression being the most \u2018privileged\u2019 or informative stimulus [58]. Deep learning for speech emotion recognition. Deep networks for emotional speech recognition either operate on hand-crafted acoustic features known to have a significant effect on speech prosody, (e.g. MFCCs, pitch, energy, ZCR, ...), or operate on raw audio with little processing, e.g. only the application of Fourier transforms [20]. Those that use handcrafted features focus on global suprasegmental/prosodic features for emotion recognition, in which utterance level statistics are calculated. The main limitation of such global-level acoustic features is that they cannot describe the dynamic variation along an utterance [2]. Vocal emotional expression is shaped to some extent by differences in the temporal structure of language and emotional cues are not equally salient throughout \fthe speech signal [41, 58]. In particular, there is a well-documented propensity for speakers to elongate syllables located in wordor phrase-final positions [52, 55], and evidence that speakers vary their pitch in final positions to encode gradient acoustic cues that refer directly to their emotional state (Pell [55]). We therefore opt for the second strategy, using minimally processed audio represented by magnitude spectrograms directly as inputs to the network. Operating on these features can potentially improve performance \u201cin the wild\u201d where the encountered input can be unpredictable and diverse [40]. By using CNNs with max pooling on spectrograms, we encourage the network to determine the emotionally salient regions of an utterance. Existing speech emotion datasets. Fully supervised deep learning techniques rely heavily on large-scale labelled datasets, which are tricky to obtain for emotional speech. Many methods rely on using actors [13, 14, 44, 47] (described below), and automated methods are few. Some video datasets are created using subtitle analysis [25]. In the facial expression domain, labels can be generated through reference events [1], however this is challenging to imitate for speech. A summary of popular existing datasets in given in Table 1. We highlight some common disadvantages of these datasets below, and contrast these with the VoxCeleb dataset that is used in this paper: (1) Most speech emotion datasets consist of elicited or acted speech, typically created in a recording studio, where actors read from written text. However, as [27] points out, full-blown emotions very rarely appear in the real world and models trained on acted speech rarely generalise to natural speech. Furthermore there are physical emotional cues that are difficult to consciously mimic, and only occur in natural speech. In contrast, VoxCeleb consists of interview videos from YouTube, and so is more naturalistic. (2) Studio recordings are also often extremely clean and do not suffer from \u2018real world\u2019 noise artefacts. In contrast, videos in the VoxCeleb dataset are degraded with real world noise, consisting of background chatter, laughter, overlapping speech and room acoustics. The videos also exhibit considerable variance in the quality of recording equipment and channel noise. (3) For many existing datasets, cost and time prohibitions result in low speaker diversity, making it difficult to avoid speaker adaptation. 
Since our method does not require any emotion labels, we can train on VoxCeleb which is two orders of magnitude larger than existing public speech emotion datasets in the number of speakers. Note that for any machine learning system that aims to perform emotion recognition using vision or speech, the ground truth emotional state of the speaker is typically unavailable. To train and assess the performance of models, we must ultimately rely on the judgement of human annotators as a reasonable proxy for the true emotional state of a speaker. Throughout this work we use the term \u201cemotion recognition\u201d to mean accurate prediction of this proxy. 3 CROSS MODAL TRANSFER The objective of this work is to learn useful representations for emotion speech recognition, without access to labelled speech data. Our approach, inspired by the method of cross modal distillation [31], is to tackle this problem by exploiting readily available annotated data in the visual domain. Under the formulation introduced in [31], a \u201cstudent\u201d model operating on one input modality learns to reproduce the features of a \u201cteacher\u201d model, which has been trained for a given task while operating on a different input modality (for which labels are available). The key idea is that by using a sufficiently large dataset of modality paired inputs, the teacher can transfer task supervision to the student without the need for labelled data in the student\u2019s modality. Importantly, it is assumed that the paired inputs possess the same attributes with respect to the task of interest. In this work, we propose to use the correspondence between the emotion expressed by the facial expression of a speaker and the emotion of the speech utterance produced synchronously. Our approach relies on the assumption that there is some redundancy in the emotional content of the signal communicated through the concurrent expression and speech of a speaker. To apply our method, we therefore require a large number of speaking face-tracks, in which we have a known correspondence between the speech audio and the face depicted. Fortunately, this can be acquired, automatically and at scale using the recently developed SyncNet [18]. This method was used to generate the large-scale VoxCeleb dataset [50] for speaking face-tracks, which forms the basis of our study. As discussed in Sec. 2, there are several ways to \u201cdistill\u201d the knowledge of the teacher to the student. While [31] trained the student by regressing the intermediate representations at multiple layers in the teacher model, we found in practice that the approach introduced in [34] was most effective for our task. Specifically, we used a cross entropy loss between the outputs of the networks after passing both both sets of predictions through a softmax function with temperature T to produce a distribution of predictions: pi = exp (xi/T) \u00cd j exp (xj/T), (1) where xi denotes the logit associated with class i and pi denotes the corresponding normalised prediction. A higher temperature softmax produces a \u201csofter\u201d distribution over predictions. We experimented with several values ofT to facilitate training and found, similarly to [34], that a temperature of 2 was most effective. We therefore use this temperature value in all reported experiments. 3.1 The Teacher This section describes how we obtain the teacher model which is responsible for classifying facial emotion in videos. Frame-level Emotion Classifier. 
3.1 The Teacher This section describes how we obtain the teacher model, which is responsible for classifying facial emotion in videos. Frame-level Emotion Classifier. To construct a strong teacher network (which is tasked with performing emotion recognition from face images), training is performed in multiple stages. We base our teacher model on the recently introduced Squeeze-and-Excitation architecture [35] (the ResNet-50 variant). The network is first pretrained on the large-scale VGG-Face2 dataset [16] (\u22483.3 million faces) for the task of identity verification. The resulting model is then fine-tuned on the FERplus dataset [10] for emotion recognition. This dataset comprises the images from the original FER dataset (\u224835k images) [28] together with a more extensive set of annotations (10 human annotators per image). The emotions labelled in the dataset are: neutral, happiness, surprise, sadness, anger, disgust, fear and contempt. Rather than training the teacher to predict a single correct emotion for each face, we instead require it to match the distribution of annotator labels. Specifically, we train the network to match the distribution of annotator responses with a cross entropy loss: L = -\\sum_n \\sum_i p^{(n)}_i \\log q^{(n)}_i, (2) where p^{(n)}_i represents the probability of annotation n taking emotion label i, averaged over annotators, and q^{(n)}_i denotes the corresponding network prediction. During training, we follow the data augmentation scheme comprising affine distortions of the input images introduced in [63] to encourage robustness to variations in pose. To verify the utility of the resulting model, we evaluate on the FERplus benchmark, following the test protocol defined in [10], and report the results in Table 2. To the best of our knowledge, our model represents the current state of the art on this benchmark.
Corpus | Speakers | Naturalness | Labelling method | Audio-visual
AIBO\u22c6 [11] | 51 | Natural | Manual | Audio only
EMODB [13] | 10 | Acted | Manual | Audio only
ENTERFACE [47] | 43 | Acted | Manual | \u2713
LDC [44] | 7 | Acted | Manual | Audio only
IEMOCAP [14] | 10 | Both\u2020 | Manual | \u2713
AFEW 6.0\u2660 [25] | unknown+ | Acted | Subtitle Analysis | \u2713
RML | 8 | Acted | Manual | \u2713
EmoVoxCeleb | 1,251 | Natural | Expression Analysis | \u2713
Table 1: Comparison to existing public domain speech emotion datasets. \u2020 contains both improvised and scripted speech. \u22c6 contains only emotional speech of children. \u2660 has not been commonly used for audio-only classification, but is popular for audio-visual fusion methods. + identity labels are not provided.
Method | Accuracy (PrivateTest)
PLD [10] | 85.1 \u00b1 0.5%
CEL [10] | 84.6 \u00b1 0.4%
ResNet+VGG\u2020 [37] | 87.4
SENet Teacher (Ours) | 88.8 \u00b1 0.3%
Table 2: Comparison on the FERplus facial expression benchmark. \u2020 denotes performance of a model ensemble. Where available, the mean and std. are reported over three repeats. The SENet Teacher model is described in Sec. 3.1.
From Frames to Face-tracks. Since a single speech segment typically spans many frames, we require labels at a face-track level in order to transfer knowledge from the face domain to the speech domain. To address the fact that our classifier has been trained on individual images, not on face-tracks, we take the simplest approach of considering a single face-track as a set of individual frames. A natural consequence of using still frames extracted from video, however, is that the emotion of the speaker is not captured with equal intensity in every frame. 
Even in the context of a highly emotional speech segment, many of the frames that correspond to transitions between utterances exhibit a less pronounced facial expression, and are therefore often labelled as \u2018neutral\u2019 (see Figure 2 for an example track). One approach that has been proposed to address this issue is to utilise a single frame or a subset of frames known as peak frames, which best represent the emotional content of the face-track [57, 64]. The goal of this approach is to select the frames for which the dominant emotional expression is at its apex. It is difficult to determine which frames are the key frames, however, while [57] select these frames manually, [64] add an extra training step which measures the \u2018distance\u2019 of the expressive face from the subspace of neutral facial expressions. This method also relies on the implicit assumption that all facial parts reach the peak point at the same time. We adopt a simple approximation to peak frame selection by representing each track by the maximum response of each emotion across the frames in the track, an approach that we found to work well in practice. We note that prior work has also found simple average pooling strategies over frame-level predictions [8, 36] to be effective (we found average pooling to be slightly inferior, though not dramatically different in performance). To verify that max-pooling represents a reasonable temporal aggregation strategy, we applied the trained SENet Teacher network to the individual frames of the AFEW 6.0 dataset, which formed the basis of the 2016 Emotion Recognition in the Wild (EmotiW) competition [24]. Since our objective here is not to achieve the best performance by specialising for this particular dataset (but rather to validate the aggregation strategy for predicting tracks), we did not fine-tune the parameters of the teacher network for this task. Instead, we applied our network directly to the default face crops provided by the challenge organisers and aggregated the emotional responses over each video clip using max pooling. We then treat the predictions as 8-dimensional embeddings and use the AFEW training set to fit a single affine transformation (linear transformation plus bias), followed by a softmax, allowing us to account for the slightly different emotion categorisation (AFEW does not include a contempt label). By evaluating the resulting re-weighted predictions on the validation set we obtained an accuracy of 49.3% for the 7-way classification task, strongly outperforming the baseline of 38.81% released by the challenge organisers. \fhappiness happiness happiness neutral neutral neutral happiness Figure 2: An example set of frames accompanying a single speech segment in the VoxCeleb dataset illustrating the neutral transition-face phenomenon exhibited by many face tracks: the facial expression of the speaker, as predicted by the static image-based face classifier often takes a \u2018neutral\u2019 label while transitioning between certain phonemes. 3.2 The Student The student model, which is tasked with performing emotion recognition from voices, is based on the VGG-M architecture [17] (with the addition of batch normalization). This model has proven effective for speech classification tasks in prior work [50], and provides a good trade-off between computational cost and performance. The architectural details of the model are described in section 5.1. 
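The track-level aggregation described in Sec. 3.1 amounts to a single pooling operation over the teacher's frame-level outputs. A minimal sketch is given below, assuming the frame predictions for one track are available as a NumPy array; the function name and the pool argument are our own.

```python
import numpy as np

def track_level_logits(frame_logits, pool='max'):
    # frame_logits: (num_frames, 8) teacher outputs for one face-track.
    # 'max' approximates peak-frame selection by taking the per-emotion maximum
    # over frames; 'mean' is the average-pooling alternative mentioned above,
    # which the authors report to be slightly inferior in practice.
    frame_logits = np.asarray(frame_logits)
    if pool == 'max':
        return frame_logits.max(axis=0)
    return frame_logits.mean(axis=0)
```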
3.3 Time-scale of transfer The time-scale of transfer determines the length of the audio segments that are fed into the student network for transferring the logits from face to voice. Determining the optimal length of audio segment for which emotion is discernable is still an open question. Ideally, we would like to learn only features related to speech prosody and not the lexical content of speech, and hence we do not want to feed in audio segments that contain entire sentences to the student network. We also do not want segments that are too short, as this creates the risk of capturing largely neutral audio segments. Rigoulot, 2014 [58] studied the time course for recognising vocally expressed emotions on human participants, and found that while some emotions were more quickly recognised than others (fear as opposed to happiness or disgust), after four seconds of speech emotions were usually classified correctly. We therefore opt for a four second speech segment input. Where the entire utterance is shorter than four seconds, we use zero padding to obtain an input of the required length. 4 EMOVOXCELEB DATASET We apply our teacher-student framework on the VoxCeleb [50] dataset, a collection of speaking face-tracks, or contiguous groupings of talking face detections from video. The videos in the VoxCeleb dataset are interview videos of 1,251 celebrities uploaded to YouTube, with over 100,000 utterances (speech segments). The speakers span a wide range of different ages, nationalities, professions and accents. The dataset is roughly gender balanced. The audio segments also contain speech in different languages. While the identities of the speakers are available, the dataset has no emotion labels, and the student model must therefore learn to reason about emotions entirely by transferring knowledge from the face network. The identity labels allow us to partition the dataset into three splits: Train, Heard-Val and Unheard-Val. The Heard-Val split contains held out speech segments from the same identities in the training set, while the Unheard-Val split contains identities happiness sadness neutral anger Figure 3: Examples of emotions in the EmoVoxCeleb dataset. We rely on the facial expression of the speaker to provide clues about the emotional content of their speech. Train Heard-Val Unheard-Val # speaking face-tracks 118.5k 4.5k 30.5k Table 3: The distribution of speaking face-tracks in the EmoVoxCeleb dataset. The Heard-Val set contains identities that are present in Train, while the identities in Unheard-Val are disjoint from Train. that are disjoint from the other splits2. Validating on unheard identities allows us to ascertain whether the student model is exploiting identity as a bias to better match the predictions of the teacher model. The identity labels may also prove useful for researchers tackling other tasks, for example evaluating the effect of emotional speech on speaker verification, as done by [54]. The total size of each partition is given in Table 3. By applying the teacher model to the frames of the VoxCeleb dataset as described in section 3.1, we automatically obtain emotion labels for the face-tracks and the speech segments. These labels take the form of a predicted distribution over eight emotional states that were used to train the teacher model: neutral, happiness, surprise, sadness, anger, disgust, fear and contempt. 
These frame-level predictions can then be directly mapped to synchronous speech segments by aggregating the individual prediction distributions into a single eight-dimensional vector for each speech segment. For all experiments we perform this aggregation by max-pooling across frames. However, since the best way to perform this aggregation remains an open topic of research, we release the frame level predictions of the model as part of the dataset annotation. The result is a large-scale audio-visual dataset of human emotion, which we call the EmoVoxCeleb dataset. As a consequence of the automated labelling technique, it is reasonable to expect that the noise associated with the labelling will be higher than for a manually annotated 2The Unheard-Val split directly corresponds to the Test (US-UH) set defined in [48]. \fFigure 4: Distribution of frame-level emotions predicted by the SENet Teacher model for EmoVoxCeleb (note that the y-axis uses a log-scale). For comparison, the distribution of predictions are also shown for the Afew 6.0 dataset. dataset. We validate our labelling approach by demonstrating quantitatively that the labels can be used to learn useful speech emotion recognition models (Sec. 5.2). Face-track visualisations can be seen in Figure 3, and audio examples are available online3. Distribution of emotions. As noted above, each frame of the dataset is annotated with a distribution of predictions. To gain an estimate of the distribution of emotional content in EmoVoxCeleb, we plot a histogram of the dominant emotion (the label with the strongest prediction score by the teacher model) for each extracted frame of the dataset, shown in Figure 4. While we see that the dataset is heavily skewed towards a small number of emotions (particularly neutral, as discussed in Sec. 3), we note that it still contains some diversity of emotion. For comparison, we also illustrate the distribution of emotional responses of the teacher model on \u2018Afew 6.0\u2019 [25], an emotion recognition benchmark. The Afew dataset was collected by selecting scenes in movies for which the subtitles contain highly emotive content. We see the distribution of labels is significantly more balanced but still exhibits a similar overall trend to EmoVoxCeleb. Since this dataset has been actively sampled to contain good diversity of emotion, we conclude that the coverage of emotions in EmoVoxCeleb may still prove useful, given that no such active sampling was performed. We note that Afew does not contain segments directly labelled with the contempt emotion, so we would therefore not expect there to be frames for which this is the predicted emotion. It is also worth noting that certain emotions are rare in our dataset. Disgust, fear and contempt are not commonly exhibited during natural speech, particularly in interviews and are therefore rare in the predicted distribution. Data Format. As mentioned above, we provide logits (the presoftmax predictions of the teacher network) at a frame level which can be used to directly produce labels at an utterance level (using max-pooling as aggregation). The frames are extracted from the 3http://www.robots.ox.ac.uk/~vgg/research/cross-modal-emotions face tracks at an interval of 0.24 seconds, resulting in a total of approximately 5 million annotated individual frames. 
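As a small illustration of how the released frame-level logits can be consumed, the sketch below tallies the per-frame dominant emotions in the way the distribution of Figure 4 is described; the emotion ordering and function name are our assumptions, not part of the released annotation format.

```python
import numpy as np
from collections import Counter

EMOTIONS = ['neutral', 'happiness', 'surprise', 'sadness',
            'anger', 'disgust', 'fear', 'contempt']

def dominant_emotion_counts(frame_logits):
    # frame_logits: (num_frames, 8) teacher logits sampled every 0.24 seconds.
    # The dominant emotion of a frame is the label with the strongest score;
    # counting these per-frame labels gives a Figure 4 style histogram.
    dominant = np.asarray(frame_logits).argmax(axis=1)
    counts = Counter(EMOTIONS[int(i)] for i in dominant)
    return {emotion: counts.get(emotion, 0) for emotion in EMOTIONS}
```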
5 EXPERIMENTS To investigate the central hypothesis of this paper, namely that it is possible to supervise a speech emotion recognition model with a model trained to detect emotion in faces, we proceed in two stages. First, as discussed in Sec. 4, we compute the predictions of the SENet Teacher model on the frames extracted from the VoxCeleb dataset. The process of distillation is then performed by randomly sampling segments of speech, each four seconds in duration, from the training partition of this dataset. While a fixed segment duration is not required by our method (the student architecture can process variable-length clips by dynamically modifying its pooling layer), it leads to substantial gains in efficiency by allowing us to batch clips together. We experimented with sampling speech segments in a manner that balanced the number of utterance level emotions seen by the student during training. However, in practice, we found that it did not have a significant effect on the quality of the learned student network and therefore, for simplicity, we train the student without biasing the segment sampling procedure. For each segment, we require the student to match the response of the teacher network on the facial expressions of the speaker that occurred during the speech segment. In more detail, the responses of the teacher on each frame are aggregated through max-pooling to produce a single 8-dimensional vector per segment. As discussed in Section 3, both the teacher and student predictions are passed through a softmax layer before computing a cross entropy loss. Similarly to [34], we set the temperature of both the teacher and student softmax layers to 2 to better capture the confidences of the teacher\u2019s predictions. We also experimented with regressing the pre-softmax logits of the teacher directly with an Euclidean loss (as done in [7]), however, in practice this approach did not perform as well, so we use cross entropy for all experiments. As with the predictions made by the teacher, the distribution of predictions made by the student are dominated by the neutral class so the useful signal is primarily encoded through the relative soft weightings of each emotion that was learned during the distillation process. The student achieves a mean ROC AUC of 0.69 over the teacher-predicted emotions present in the unheard identities (these include all emotions except disgust, fear and contempt) and a mean ROC AUC of 0.71 on validation set of heard identities on the same emotions. 5.1 Implementation Details The student network is based on the VGGVox network architecture described in [50], which has been shown to work well on spectrograms, albeit for the task of speaker verification. The model is based on the lightweight VGG-M architecture, however the fully connected fc6 layer of dimension 9\u00d7n (support in both dimensions) is replaced by two layers \u2013 a fully connected layer of 9 \u00d7 1 (support in the frequency domain) and an average pool layer with support 1 \u00d7 n, where n depends on the length of the input speech segment (for example for a 4 second segment, n = 11). This allows the network to achieve some temporal invariance, and at the same time keeps the output dimensions the same as those of the original fully \fconnected layer. The input to the teacher image is an RGB image, Layer Support Filt dim. # filts. 
Stride Data size conv1 7\u00d77 1 96 2\u00d72 254\u00d7198 mpool1 3\u00d73 2\u00d72 126\u00d799 conv2 5\u00d75 96 256 2\u00d72 62\u00d749 mpool2 3\u00d73 2\u00d72 30\u00d724 conv3 3\u00d73 256 256 1\u00d71 30\u00d724 conv4 3\u00d73 256 256 1\u00d71 30\u00d724 conv5 3\u00d73 256 256 1\u00d71 30\u00d724 mpool5 5\u00d73 3\u00d72 9\u00d711 fc6 9\u00d71 256 4096 1\u00d71 1\u00d711 apool6 1\u00d7n 1\u00d71 1\u00d71 fc7 1\u00d71 4096 1024 1\u00d71 1\u00d71 fc8 1\u00d71 1024 1251 1\u00d71 1\u00d71 Table 4: The CNN architecture for the student network. The data size up until fc6 is depicted for a 4-second input, but the network is able to accept inputs of variable lengths. Batchnorm layers are present after every conv layer. cropped from the source frame to include only the face region (we use the face detections provided by the VoxCeleb dataset) resized to 224 \u00d7 224, followed by mean subtraction. The input to the student network is a short-term amplitude spectrogram, extracted from four seconds of raw audio using a Hamming window of width 25ms and step (hop) 10ms, giving spectrograms of size 512 \u00d7 400. At train-time, the four second segment of audio is chosen randomly from the entire speaking face-track, providing an effective form of data augmentation. Besides performing mean and variance normalisation on every frequency bin of the spectrogram, no other speech-specific processing is performed, e.g. silence removal, noise filtering, etc. (following the approach outlined in [50]). While randomly changing the speed of audio segments can be useful as a form of augmentation for speaker verification [50], we do no such augmentation here since changes in pitch may have a significant impact on the perceived emotional content of the speech. Training Details. The network is trained for 50 epochs (one epoch corresponds to approximately one full pass over the training data where a speech segment has been sampled from each video) using SGD with momentum (set to 0.9) and weight decay (set to 0.0005). The learning rate is initially set to 1E \u22124, and decays logarithmically to 1E \u22125 over the full learning schedule. The student model is trained from scratch, using Gaussian-initialised weights. We monitor progress on the validation set of unheard identities, and select the final model to be the one that minimises our learning objective on this validation set. 5.2 Results on external datasets To evaluate the quality of the audio features learned by the student model, we perform experiments on two benchmark speech emotion datasets. RML: The RML emotion dataset is an acted dataset containing 720 audiovisual emotional expression samples with categorical labels: anger, disgust, fear, happiness, sadness and surprise. This database is language and cultural background independent. The video samples were collected from eight human subjects, speaking six different languages (English, Mandarin, Urdu, Punjabi, Persian, Italian). To further increase diversity, different accents of English and Chinese were also included. eNTERFACE [47]: The eNTERFACE dataset is an acted dataset (in English) recorded in a studio. Forty-two subjects of fourteen nationalities were asked to listen to six successive short stories, each of which was designed to elicit a particular emotion. The emotions present are identical to those found in the RML dataset. Both external datasets consist of acted speech, and are labelled by human annotators. 
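Returning briefly to the training details of Sec. 5.1 above, the optimisation recipe can be summarised in a short sketch. A logarithmic decay from 1e-4 to 1e-5 over 50 epochs corresponds to multiplying the learning rate by a fixed factor after every epoch; the use of PyTorch and the helper name are our assumptions.

```python
import torch

def make_optimizer_and_scheduler(model, num_epochs=50, lr_start=1e-4, lr_end=1e-5):
    # SGD with momentum 0.9 and weight decay 0.0005, as in Sec. 5.1. The learning
    # rate decays logarithmically from lr_start to lr_end over the schedule,
    # i.e. it is scaled by a constant per-epoch factor.
    optimizer = torch.optim.SGD(model.parameters(), lr=lr_start,
                                momentum=0.9, weight_decay=0.0005)
    gamma = (lr_end / lr_start) ** (1.0 / (num_epochs - 1))
    scheduler = torch.optim.lr_scheduler.ExponentialLR(optimizer, gamma=gamma)
    return optimizer, scheduler
```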
Since the external datasets are obtained in a single recording studio, they are also relatively clean, in contrast to the noisy segments in EmoVoxCeleb. We choose the RML dataset for evaluation specifically to assess whether our embeddings can generalise to multilingual speech. Both datasets are class-balanced. Method RML eNTERFACE Modality Acc. Modality Acc. Random A 16.7 A 16.7 Student A 49.7 \u00b1 5.4 A 34.3 \u00b1 4.0 Teacher V 72.6 \u00b1 3.9 V 48.3 \u00b1 4.9 Noroozi et al. [51] A 65.3 A 47.1 Table 5: Comparison of method accuracy on RML and eNTERFACE using the evaluation protocol of [51]. Where available, the mean \u00b1 std. is reported. We do not evaluate the predictions of the student directly, for two reasons: first, the set of emotions used to train the student differ from those of the evaluation test set, and second, while the predictions of the student carry useful signal, they skew towards neutral as a result of the training distribution. We therefore treat the predictions as 8-dimensional embeddings and adopt the strategy introduced in Sec. 3.1 of learning a map from the set of embeddings to the set of target emotions, allowing the classifier to re-weight each emotion prediction using the class confidences produced by the student. In more detail, for each dataset, we evaluate the quality of the student model embeddings by learning a single affine transformation (comprising a matrix multiply and a bias) followed by a softmax to map the 8 predicted student emotions to the target labels of each dataset. Although our model has been trained using segments of four seconds in length, its dynamic pooling layer allows it to process variable length segments. We therefore use the full speech segment for evaluation. To assess the student model, we compare against the following baselines: the expected performance at chance level by a random classifier; and the performance of the teacher network, operating on the faces modality. We also compare with the recent work of [51], whose strongest speech classifier consisted of a random forest using a combination of 88 audio features inc. MFCCs, Zero Crossings Density (ZCD), filter-bank energies (FBE) and other pitch/intensityrelated components. We report performance using 10-fold cross validation (to allow comparison with [51]) in Table 5. While it falls short of the performance of the teacher, we see that the student model performs significantly better than chance. These results indicate that, while challenging, transferring supervision from the facial domain to the speech domain is indeed possible. Moreover, we note that the conditions of the evaluation datasets differ significantly from those on which the student network was trained. We discuss this domain transfer problem for emotional speech in the following section. 5.3 Discussion Evaluation on external corpora: Due to large variations in speech emotion corpora, speech emotion models work best if they are applied under circumstances that are similar to the ones they were \fFigure 5: Normalised confusion matrices for the teacher model (left) and the student model (right) on the RML dataset (ground truth labels as rows, predictions as columns). trained on [60]. For cross-corporal evaluation, most methods rely heavily on domain transfer learning or other adaptation methods [22, 23, 65]. These works generally agree that cross-corpus evaluation works to a certain degree only if corpora have similar contexts. 
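The evaluation protocol described above, a single affine transformation followed by a softmax fitted on the 8-dimensional student predictions and scored with 10-fold cross validation, is equivalent to a multinomial logistic regression probe. A minimal scikit-learn sketch follows; the function name and solver settings are our choices, not the authors'.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

def probe_student_embeddings(student_predictions, labels, folds=10):
    # student_predictions: (num_utterances, 8) student outputs treated as
    # embeddings; labels: target emotion indices of the benchmark dataset.
    # An affine map + softmax trained with cross entropy is multinomial logistic
    # regression, so we fit that directly and report 10-fold cross validation.
    clf = LogisticRegression(max_iter=1000)
    scores = cross_val_score(clf, student_predictions, labels, cv=folds)
    return float(np.mean(scores)), float(np.std(scores))
```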
We show in this work that the embeddings learnt on the EmoVoxCeleb dataset can generalise to different corpora, even with differences in nature of the dataset (natural versus acted) and labelling scheme. While the performance of our student model falls short of the teacher model that was used to supervise it, we believe this represents a useful step towards the goal of learning useful speech emotion embeddings that work on multiple corpora without requiring speech annotation. Challenges associated with emotion distillation: One of the key challenges associated with the proposed method is to achieve a consistent, high quality supervisory signal by the teacher network during the distillation process. Despite reaching state-of-the-art performance on the FERplus benchmark, we observe that the teacher is far from perfect on both the RML and eNTERFACE benchmarks. In this work, we make two assumptions: the first is that distillation ensures that even when the teacher makes mistakes, the student can still benefit, provided that there is signal in the uncertainty of the predictions. The second is a broader assumption, namely that deep CNNs are highly effective at training on large, noisy datasets (this was recently explored in [45, 59], who showed that despite the presence of high label noise, very strong features can be learned on large datasets). To better understand how the knowledge of the teacher is propagated to the student, we provide confusion matrices for both models on the RML dataset in Figure 5. We observe that the student exhibits reasonable performance, but makes more mistakes than the teacher for every emotion except sadness and anger. There may be several reasons for this. First, EmoVoxCeleb used to perform the distillation may lack the distribution of emotions required for the student to fully capture the knowledge of the teacher. Second, it has been observed that certain emotions are easier to detect from speech than faces, and vice versa [15], suggesting that the degree to which there is a redundant emotional signal across modalities may differ across emotions. Limitations of using interview data: Speech as a medium is intrinsically oriented towards another person, and the natural contexts in which to study it are interpersonal. Interviews capture these interpersonal interactions well, and the videos we use exhibit real world noise. However, while the interviewees are not asked to act a specific emotion, i.e. it is a \u2018natural\u2019 dataset, it is likely that celebrities do not act entirely naturally in interviews. Another drawback is the heavily unbalanced nature of the dataset where some emotions such as contempt and fear occur rarely. This is an unavoidable artefact of using real data. Several works have shown that the interpretation of certain emotions from facial expressions can be influenced to some extent by contextual clues such as body language [4, 33]. Due to the \u201ctalking-heads\u201d nature of the data, this kind of signal is typically not present in interview data, but could be incorporated as clues into the teacher network. Student Shortcuts: The high capacity of neural networks can sometimes allow them to solve tasks by taking \u201cshortcuts\u201d by exploiting biases in the dataset [26]. One potential for such a bias in EmoVoxCeleb is that interviewees may often exhibit consistent emotions which might allow the student to match the teacher\u2019s predictions by learning to recognise the identity, rather than the emotion of the speaker. As mentioned in Sec. 
5, the performance of the student on the heardVal and unheardVal splits is similar (0.71 vs 0.69 mean ROC AUC on a common set of emotions), providing some confidence that the student is not making significant use of identity as a shortcut signal. Extensions/Future Work: First, we note that our method can be applied as is to other mediums of unlabelled speech, such as films or TV shows. We hope to explore unlabelled videos with a greater range of emotional diversity, which may help to improve the quality of distillation and address some of the challenges discussed above. Second, since the act of speaking may also exert some influence on the facial expression of the speaker (for example, the utterance of an \u201co\u201d sound could be mistaken for surprise) we would also like to explore the use of proximal non-speech facial expressions as a supervisory signal in future work. Proximal supervision could also address the problem noted in Section 3, that speaking expressions can tend towards neutral. Finally, facial expressions in video can be learnt using self-supervision [62], and this offers an alternative to the strong supervision used for the teacher in this paper. 6"
+ },
+ {
+ "url": "http://arxiv.org/abs/1803.11560v1",
+ "title": "Substitute Teacher Networks: Learning with Almost No Supervision",
+ "abstract": "Learning through experience is time-consuming, inefficient and often bad for\nyour cortisol levels. To address this problem, a number of recently proposed\nteacher-student methods have demonstrated the benefits of private tuition, in\nwhich a single model learns from an ensemble of more experienced tutors.\nUnfortunately, the cost of such supervision restricts good representations to a\nprivileged minority. Unsupervised learning can be used to lower tuition fees,\nbut runs the risk of producing networks that require extracurriculum learning\nto strengthen their CVs and create their own LinkedIn profiles. Inspired by the\nlogo on a promotional stress ball at a local recruitment fair, we make the\nfollowing three contributions. First, we propose a novel almost no supervision\ntraining algorithm that is effective, yet highly scalable in the number of\nstudent networks being supervised, ensuring that education remains affordable.\nSecond, we demonstrate our approach on a typical use case: learning to bake,\ndeveloping a method that tastily surpasses the current state of the art.\nFinally, we provide a rigorous quantitive analysis of our method, proving that\nwe have access to a calculator. Our work calls into question the long-held\ndogma that life is the best teacher.",
+ "authors": "Samuel Albanie, James Thewlis, Joao F. Henriques",
+ "published": "2018-04-01",
+ "updated": "2018-04-01",
+ "primary_cat": "cs.LG",
+ "cats": [
+ "cs.LG"
+ ],
+ "main_content": "INTRODUCTION Since time immemorial, learning has been the foundation of human culture, allowing us to trick other animals into being our food. The importance of teaching in ancient times was exempli\ufb01ed by Pythagoras, who upon discovering an interesting fact about triangles, soon began teaching it to his followers, together with some rather helpful dietary advice about the bene\ufb01ts of avoiding \u2217Authors listed in order of the number of guinea pigs they have successfully taught to play competitive bridge. Ties are broken geographically. This submission is a post-print (an update to the conference edition). 1Empirically, we have observed that this is extremely important for their job prospects, since it allows them to form new connexions. 2For all experimental results reported in this paper, we used a Casio FX-83GTPLUS-SB-UT. 1 arXiv:1803.11560v1 [cs.LG] 1 Apr 2018 \fPublished as a conference paper at SIGBOVIK 2018 Figure 1: We introduce Substitute Teacher Networks, a \ufb01nancially prudent approach to student network education. Here, LC dnotes the classroom distillation loss and L$ denotes the total cost of teacher remuneration. The student, task-speci\ufb01c teacher and substitute teacher networks are denoted by \u03c6s n, \u03c6t m and \u03c8t respectively (see Sec. 3 for details). Note the use of drop-shadow plate notation, which indicates the direction of the nearest light source. beans (Philolaus of Croton, 421 BC). Despite this auspicious start, his signi\ufb01cant advances on triangles and beans reached a limited audience, largely as a consequence of his policy of forbidding his students from publishing pre-prints on arXiv and sharing source code. He was followed, fortuitously, by the more open-minded Aristotle, who founded the open-access publishing movement and made numerous contributions to Wikipedia beyond the triangle and bean pages, with over 90% of his contributions made on the page about Aristotle. Nowadays, we are attempting to pass on this hard-won knowledge to our species\u2019 offspring, the machines (Timberlake, 2028; JT-9000, 2029)3, who will hopefully keep us around to help with house chores. Several prominent \ufb01gures of our time (some of whom know their CIFAR-10 from their CIFAR-100) have expressed their reservations with this approach, but really, what can possibly go wrong?4 Moreover, several prominent \ufb01gures in our paper say otherwise (Fig. 1, Fig. 2). The objective of this work is therefore to concurrently increase the knowledge and reduce the ignorance, or more precisely gnorance5 of student arti\ufb01cial neural networks, and to do so in a \ufb01scally responsible manner given a \ufb01xed teaching budget. Our approach is based on machine learning, a recently trending topic on Twitter. Formally, de\ufb01ne a collection of teachers {Te} to be a set of highly educated functions which map frustrating life experiences (typically real) into extremely unfair exam questions in an examination space (typically complex, but often purely imaginary). Further, de\ufb01ne a collection of students {St} as a set of debt ridden neural networks, initialised to be noisy and random. Pioneering early educational work by Bucilua et al. (2006) demonstrated that by pursuing a carefully selected syllabus, an arbitrary student St could improve his/her performance with M highly experienced, specialist teachers an approach often referred to as the private tuition learning paradigm. While effective in certain settings, this approach does not scale. 
More speci\ufb01cally, this algorithm scales in cost as O($MNK), where N is the number of students, M is the number of private tutors per student and $K is the price the bastards charge per hour. Our key observation is that there is a cheaper route to ignorance reduction, which we detail in Sec. 3. 3The work of these esteemed scholars indicates the imminent arrival of general Arti\ufb01cial Intelligence. Their methodology consists of advising haters, who might be inclined to say that it is fake, to take note that it is in fact so real. The current authors, not having a hateful disposition, take these claims at face value. 4This question is rhetorical, and should be safe to ignore until the Ampere release. 5The etymology of network gnorance is a long and interesting one. Phonetic experts will know that the g is silent (cf. the silent k in knowledge), while legal experts will be aware that the preceding i is conventionally dropped to avoid costly legal battles with the widely feared litigation team of Apple Inc. 2 \fPublished as a conference paper at SIGBOVIK 2018 2 RELATED WORK You take the blue pill\u2014the story ends, you wake up in your bed and believe whatever you want to believe. You take the red pill\u2014you stay in Wonderland, and I show you how deep the ResNets go. Kaiming He, 2015 Several approaches have been proposed to improve teaching quality. Work by noted entomologists Dean, Hinton and Vinyals illustrated the bene\ufb01ts of comfortable warmth in enabling students to better extract information from their teachers (Hinton et al., 2015). In more detail, they advocated adjusting the value of T in the softmax distribution: pi = exp (xi/T) P j exp (xj/T), (1) where T denotes the wattage of the classroom storage heater. However, Rusu et al. (2015), who rigorously evaluated a range of thermostat options when teaching games, found that turning down the temperature to a level termed \u201cScottish\u201d6, leads to better breakout strategies from students who would otherwise struggle with their Q-values. Alternative, thespian-inspired approaches, most notably by Parisotto et al. (2016), attempted to teach students not only which action should be performed, but also why (see also Gupta et al. (2016) for more depth). Many of the students did not want to know why, and refused to take any further drama classes. This is a surprising example of students themselves encouraging the pernicious practice of teaching to the test-set. Recent illuminating work by leading light and best-dressed Thessalonian (Belagiannis et al., 2018) has shown that these kind of explanations may be more effective in an adversarial environment. More radical approaches have advocated the use of alcohol in the classroom, something that we do not condone directly, although we think it shows the right kind of attitude to innovation in education (Crowley et al., 2017). Importantly, all of the methods discussed above represent \ufb01nancially unsustainable ways to extract knowledge that is already in the computer (Zoolander, 2004). Differently from these works, we focus on the quantity, rather than the quality of our teaching method. Perhaps the method most closely related to ours was recently proposed by Schmitt et al. (2018). In this creative work, the authors suggest kickstarting a student\u2019s education with intensive tuition in their early years, before letting them roam free once they feel con\ufb01dent enough to take control of their own learning (the method thus consists of separate stages, separated by puberty). 
While clearly an advance on the expensive nanny-state approach advocated by previous work, we question the wisdom of handing over complete control to the student. We take a more responsible approach, allowing us to reduce costs while still maintaining an appropriate level of oversight. A different line of work on learning has pursued punchy three-verb algorithms, popularised by the seminal \u201cattend, infer, repeat\u201d (Eslami et al., 2016) approach. Attendance is a prerequisite for our model, and cases of truancy will be reported to the headmistress (see Fig 1). Only particularly badly behaved student networks will be required to repeatedly \u201clook, listen and learn\u201d (Arandjelovic & Zisserman, 2017) the lecture course as many times as it takes until they can \u201cask, attend and answer\u201d (Xu & Saenko, 2016) dif\ufb01cult questions on the topic. These works pursue a longstanding research problem: how to help models really \ufb01nd themselves so that they can \u201ceat, pray and love\u201d (Gilbert, 2009). We note that we are not the \ufb01rst to consider the obvious and appropriate role of capitalism in the teaching domain. A notable trend in the commoditisation of education is the use of MOOCs (Massive Open Online Courses) by large internet companies. They routinely train thousands of student networks in parallel with different hyperparameters, then keep only the top-performer of the class (Snoek et al., 2012; Li et al., 2016). However, we consider such practices to be wasteful and are totally not jealous at all of their resources. 6For readers who have not visited the beautiful highlands, this is approximately 3 kelvins. 3 \fPublished as a conference paper at SIGBOVIK 2018 A number of pioneering ideas in scalable learning under budget constraints were sensitively investigated several years ago by Maturana & Fouhey (2013). We differentiate ourselves from their approach by allowing several years to pass before repeating their \ufb01ndings. Inspired by the concurrent groundand bibtex-breaking self-citing work by legendary darkweb programmers and masters of colourful husbandry (Redmon & Farhadi, 2018), we now attempt to cite a future paper, from which we shall cite the current paper (Albanie et al., 2019). This represents an ambitious attempt to send Google Scholar into an in\ufb01nite depth recursion, thereby increasing our academic credibility and assuredly landing us lucrative pension schemes. 2.1 UNRELATED WORK \u2022 A letter to the citizens of Pennsylvania on the necessity of promoting agriculture, manufactures, and the useful arts. George Logan, 1800 \u2022 Claude Debussy\u2014The Complete Works. Warner Music Group. 2017 \u2022 Article IV Consultation\u2014Staff Report; Public Information Notice on the Executive Board Discussion; and Statement by the Executive Director for the Republic of Uzbekistan. IMF, 2008 \u2022 A treatise on the culture of peach trees. To which is added, a treatise on the management of bees; and the improved treatment of them. Thomas Wildman. 1768 \u2022 Generative unadversarial learning (Albanie et al., 2017). 3 SUBSTITUTE TEACHER NETWORKS We consider the scenario in which we have a set of trained, subject-speci\ufb01c teachers {\u03c6t m}M m=1 available for hire and a collection of student networks {\u03c6s n}N n=1, into whom we wish to distill the knowledge of the teachers. Moreover, assume that each teacher \u03c6t m demands a certain wage, pm. By inaccurately plagiarising ideas from the work listed in Sec. 
2, we can formulate our learning objective as follows: L = 1 M M X m=1 AKL({\u03c6s n}N n=1||\u03c6t m) | {z } LC +\u03bb M X m=1 pm | {z } L$ (2) where AKL represents the average KL-divergence between the set of student networks and each desired task-speci\ufb01c teacher distribution \u03c6t m on a set of textbook example questions and \u03bb denotes the current value of the US Dollar in dollars (in this work, we set \u03bb to one). By carefully examining the L$ term in this loss, our key observation is that teaching students with a large number of teachers is expensive. Indeed, we note that all prior work has extravagantly operated in the \ufb01nancially unsustainable educational regime where M >> N. Building on this insight, our \ufb01rst contribution is to introduce a novel substitute network \u03c8t, which does not require formal task-speci\ufb01c training beyond learning to set up a screen at the front of the classroom and repeatedly play the cinematographic classic Am\u00b4 elie with (useful for any task except learning French) or without (useful for the task of learning French) subtitles. Note that the substitute teacher \u03c8t provides almost no supervisory signal, but prevents the students from eating their textbooks or playing with the \ufb01re extinguisher. Importantly, given a wide-screen monitor and a suf\ufb01ciently spacious classroom, we can replace several task-speci\ufb01c teachers with a single network \u03c8t. The substitution of \u03c8t for one or more \u03c6t m is mediated by a headmistress gating mechanism, operating under a given set of budget constraints (see Fig. 1). In practice, we found it most effective to implement \u03c8t as a Recursive Neural Network, which is de\ufb01ned to be the composition of a number of computational layers, and a Recursive Neural Network. In keeping with the cost-cutting focus, we carefully analysed the gradients available on the market for the LC component of the loss, and after extensive research decided to use Synthetic Gradients (Jaderberg et al., 2016), which are signi\ufb01cantly cheaper than Natural Gradients (Amari, 1998). Our resulting cost function L, which forms the target of minimisation, is best expressed in BTC (see Fig. 2). 4 \fPublished as a conference paper at SIGBOVIK 2018 0 20 40 60 80 0 20 40 60 80 Figure 2: Expressing the cost function L in bitcoins makes it signi\ufb01cantly more volatile, yet it was instrumental in attracting venture capital for our Smart Education startup. Always driven to innovate, our second contribution is to improve upon the \ufb01duciary example set by Enron (Sims & Brinkmann, 2003), and pay the teachers in compound options. These are options on options on the underlying company stock. Doing so allows us to purchase the options through a separate and opaque holding company, who then supply us with the compound options in return for a premium. The net change to our balance sheet is a fractional addition to the \u201cOperating Activities\u201d outgoings, and perhaps an intangible reduction in \u201cgoodwill\u201d of the \u201cit\u2019s not cricket\u201d variety. The substitute teachers receive precious asymmetrical-upside however, and while a tail event would rather ruin the day, we are glass-half-full pragmatists so see no real downside. This approach draws inspiration from the Reinforcement Learning literature, which points to options as an effective extension of the payment-space (Sutton et al., 1999), especially when combined with heavy discounting. 
For any given task, the high cost of private tuition severely limits the number of students that can be trained by competing methods. However, such are the scale of the cost savings that can be made with our approach that it is possible to run numerous repeats of the learning procedure. We are therefore able to formulate our educational process as a highly scalable statistical process which we call the Latent Substitute Teacher Allocation Process (LSTAP). The LSTAP is a collection of random Latent Substitute Teacher Allocations, indexed by any arbitrary input set. We state without proof a new theorem we coin the \u201cstrong\u201d Kolmogorov extension theorem, an extension of the standard Kolmogorov7 extension theorem (\u00d8ksendal, 2014). The strong variant allows the de\ufb01nition of such a potentially in\ufb01nite dimensional joint allocation by ensuring that there exists a collection of probability measures which are not just consistent but identical with respect to any arbitrary \ufb01nite cardinality Borel set of LSTAP marginals. Importantly, the LSTAP allows students to learn at multiple locations in four dimensional space-time8. 4 EXPERIMENTS If you don\u2019t know how to explain MNIST to your LeNet, then you don\u2019t really understand digits. Albert Einstein We now rigorously evaluate the ef\ufb01cacy of Substitute Teacher Networks. Traditional approaches have often gone by the mantra that it takes a village to raise a child. We attempted to use a village to train our student networks, but found it to be an expensive use of parish resources, and instead opted for the NVIDIA GTX 1080 Ti ProGamer-RGB. Installed under a desk in the of\ufb01ce, this setup provided warmth during the cold winter months. 7While our introduction of the LSTAP may seem questionable, we have found empirically that two Kolmogorov mentions suf\ufb01ce to convince reviewer 2 that our method is rigorous. 8We follow Stephen Wolfram\u2019s de\ufb01nition of spacetime Wolfram (2015), and not the standard de\ufb01nition. 5 \fPublished as a conference paper at SIGBOVIK 2018 TURING TEST RESULTS ResNet-50 Q-Network Neural Turing Machine Unsupervised C D Knowledge-Distillation B C F-, see me after class Cross Modal-Distillation A C Substitute Teacher Networks (ours) A+ B D Figure 3: Results for the test class of 2018. We include the Neural Turing Machine as a super\ufb01ciallyrelated baseline. We compare the performance of a collection of student networks trained with our method to previous work that rely on private tuition. For a fair comparison, all experiments are performed in \u201clibrary-mode\u201d, since high noise levels tend to stop concentration gradients in student networks, and learning stalls. To allow for diversity of thought amongst the students, we do not apply any of the \u201cnormalisation\u201d practices that have become prevalent in recent research (e.g. Ioffe & Szegedy (2015); Ba et al. (2016); Ulyanov et al. (2016); Wu & He (2018)). All students were trained in two stages, separated by lunch. We started with a simple toy problem, but the range of action \ufb01gures available on the market supplied scarce mental nourishment for our hungry networks. We then moved on to pre-school level assessments, and we found that they can correctly classify most of the 10,000 digits, except for that atrocious 4 that really looks like a 9. We observed that networks trained using our method experience a much lower DropOut rate than their privately tutored contemporaries. 
Some researchers set a DropOut rate of 50%, which we feel is unnecessarily harsh on the student networks9. We \ufb01nally transitioned to a more serious training regime. After months of intensive training using our trusty NVIDIA desk-warmer, which we were able to compress down to two days using montage techniques and an 80\u2019s cassette of Survivor\u2019s \u201cEye of the Tiger\u201d, each cohort of student networks were ready for action. The only appropriate challenge for such well-trained networks, who eat all well-formed digits for breakfast, was to pass the Turing test. We thus embarked on a journey to \ufb01nd out whether this test was even appropriate. The Chinese Room argument, proposed by Searle (1980) in his landmark paper about the philosophy of AI, provides a counterpoint. It is claimed that an appropriately monolingual person in a room, equipped with paper, pencil, and a rulebook on how to respond politely to any written question in Chinese (by mapping appropriate input and output symbols), would appear from the outside to speak Chinese, while the person in the room would not actually understand the language. However, over the course of numerous trips to a delicious nearby restaurant, we gradually discovered that the contents of our dessert fortune cookies could be strung together, quite naturally, to form a message: \u201cFlattery will go far tonight. He who throws dirt is losing ground. Never forget a friend. Especially if he owes you. Remember to backup your data. P.S. I\u2019m stuck in a room, writing fortune cookie messages\u201d. Since we may conclude that such a message could only be constructed by an agent with an awareness of their surroundings and a grasp of the language, it follows that Searle\u2019s argument does not hold water. Having resolved all philosophical and teleological impediments, we then turned to the application of the actual Turing tests. Analysing the results in Table 3, we see that only the ResNet-50 got a smiley face. The Q-network\u2019s low performance is obviously caused by the fact that it plays too many Atari games. However, we note that it could improve by spending less time on the Q\u2019s and more time on the A\u2019s. The Neural Turing Machine (NTM) had an abysmal score, which we later understood was because it focused on an entirely different Turing concept. The Q-network and the NTM disrupted the test by starting to play Battleship, and the Neural Turing Machine won.10 9This technique, often referred to in the business management literature as Rank-and-Yank (Amazon), may be of limited effectiveness in the classroom. 10We attribute this to its ability at decoding enemy\u2019s submarine transmissions. 6 \fPublished as a conference paper at SIGBOVIK 2018 Figure 4: Several cakes of importance for current research (deeper is better (Ardalani et al., 2018)). From left to right: 1) Yann LeCun\u2019s cake, 2) Pieter Abbeel\u2019s cake, 3) Our cake. Note the abundance of layers in the latter. 5 APPLICATION: LEARNING TO BAKE As promised in the mouth watering abstract and yet undelivered by the paper so far, we now demonstrate the utility of our method by applying it one of the hottest subjects of contemporary machine learning: learning to bake. We selected this task, not because we care about the state-of-the-tart, but in a blatant effort to improve our ratings with the sweet-toothed researcher demographic11. 
We compare our method with a number of competitive cakes that were recently proposed at high-end cooking workshops (LeCun, 2016; Abbeel, 2017) via a direct bake-off, depicted in Fig. 4. While previous authors have focused on cherry-count, we show that better results can be achieved with more layers, without resorting to cherry-picking. The layer cake produced by our student networks consists of more layers than any previous cake (Fig. 4-3), showcasing the depth of our work12. Note that J\u00b4 egou et al. (2017) claim to achieve a 100-layer tiramisu, which is technically a cake, but a direct comparison would be unfair because it would undermine our main point. We would like to dive deep into the technical details of our novel use of the No Free Lunch Theorem, Indian Buffet Processes and a Slow-Mixing Markov Blender, but we feel that increasingly thin culinary analogies are part of what\u2019s wrong with contemporary Machine Learning (Rahimi, 2017). 6"
+ },
+ {
+ "url": "http://arxiv.org/abs/1703.02528v1",
+ "title": "Stopping GAN Violence: Generative Unadversarial Networks",
+ "abstract": "While the costs of human violence have attracted a great deal of attention\nfrom the research community, the effects of the network-on-network (NoN)\nviolence popularised by Generative Adversarial Networks have yet to be\naddressed. In this work, we quantify the financial, social, spiritual,\ncultural, grammatical and dermatological impact of this aggression and address\nthe issue by proposing a more peaceful approach which we term Generative\nUnadversarial Networks (GUNs). Under this framework, we simultaneously train\ntwo models: a generator G that does its best to capture whichever data\ndistribution it feels it can manage, and a motivator M that helps G to achieve\nits dream. Fighting is strictly verboten and both models evolve by learning to\nrespect their differences. The framework is both theoretically and electrically\ngrounded in game theory, and can be viewed as a winner-shares-all two-player\ngame in which both players work as a team to achieve the best score.\nExperiments show that by working in harmony, the proposed model is able to\nclaim both the moral and log-likelihood high ground. Our work builds on a rich\nhistory of carefully argued position-papers, published as anonymous YouTube\ncomments, which prove that the optimal solution to NoN violence is more GUNs.",
+ "authors": "Samuel Albanie, S\u00e9bastien Ehrhardt, Jo\u00e3o F. Henriques",
+ "published": "2017-03-07",
+ "updated": "2017-03-07",
+ "primary_cat": "stat.ML",
+ "cats": [
+ "stat.ML",
+ "cs.LG"
+ ],
+ "main_content": "INTRODUCTION Deep generative modelling is probably important (see e.g. Bengio et al. (2013a), Bengio et al. (2013b), Bengio et al. (2007a), Bengio et al. (2015) Bengio et al. (2007b) and (Schmidhuber et al., circa 3114 BC)). Justi\ufb01cations recently overheard in the nightclubs of Cowley1 include the ability to accurately approximate data distributions without prohibitively expensive label acquisition, and computationally feasible approaches to beating human infants at chess2. Deep generative modelling \u2217Authors are listed according to the degree to which their home nation underperformed at the 2016 European football championships 1The nightclubs of Cowley are renowned for their longstanding philosophical support for Dubstep, Grime and Connectionism, and should not be confused with the central Oxford nightclub collective which leans more towards Dubstep, Grime and Computationalism speak to Old Man Bridge at 3am on a Friday morning under the stairs of the smoking area for a more nuanced clari\ufb01cation of the metaphysical differences of opinion. 2Infants of other species (fox cubs, for example) remain an adorable open question in the \ufb01eld. 1 arXiv:1703.02528v1 [stat.ML] 7 Mar 2017 \fUnder review as a conference paper at SIGBOVIK 2017 Figure 1: The proposed unadversarial training protocol. The generator G proposes samples, PROPS, and in return receives acknowledgements and praise, ACKS from the motivator M. As a direct consequence of the sense of teamwork fostered by our optimisation scheme, synergy abounds. Note: this \ufb01gure best viewed at a distance, preferably at low resolution. was broadly considered intractorable, until recent groundbreaking research by Goodfellow et al. (2014) employed machiavellian adversarial tactics to demonstrate that methaphorical tractors could in fact be driven directly through the goddamn centre of this previously unploughed research \ufb01eld (subject to EU agricultural safety and set-aside regulations). The key insight behind Generative Adversarial Networks (commonly referred to as GANs, GANGs or CAPONEs depending on sources of counterfeit currency) is to pit one model against another in a gladiatorial quest for dominance. However, as ably illustrated by respected human actor and philanthropist Russell Crowe in the documentary Gladiator, being an actual gladiator isn\u2019t all sunshine and rainbows\u2014although it\u2019s possible to get a great tan, one still has to wear sandals. Even though we are only in the introduction, we now bravely leap into a series of back-of-theenvelope calculations to compute a lower bound on the cost of that violence for the case of middle aged, median-income Generative Adversarial Networks living in comfortable, but affordable accommodation in the leafy suburbs of an appropriate class of functions. Following the literature, we de\ufb01ne the adversaries as two models, a discriminator D and a generator G. However, since we don\u2019t agree with the literature or wish to condone its violent actions in any form, we immediately rede\ufb01ne the models as follows: D, G := G, D (1) Note that the equation above is valid and above board, since the current version of mathematics (v42.1 at the time of writing) supports simultaneous assignment3. Therefore, in the following exposition, D represents the generator and G represents the discriminator. Next, we de\ufb01ne a cost function, C : V \u2192$, mapping the space of model violence V into the space $ spanned by all mattresses stuffed with U.S. 
dollars, as follows: C(V ) = \u03b1 \u02c6 \u03b2V (G) (2) in which \u03b2V is a violent and discriminatory mapping from the discriminator G to the closest mathematical structure which appears to be a human brain and \u03b1 is a constant representing the cost of human violence, to be determined by trawling through posts on social media. Note that \u03b2V may be a violent function, but not crazy-violent (i.e. it must be Khinchin-integrable)4. 3We caution readers not to rely on this assumption in future versions. Mathematics has not supported backwards compatability since Kurt \u201cTab-Liebehaber\u201d G\u00a8 odel re-implemented the entire axiomatic foundations of the language rather than be constrained to four-space equation indentation (see G\u00a8 odel (1931) for the details). 4Since Neuroscience tells us that human brains are AlexVGGIncepResNets almost-everywhere, in practice we found that these functions need not be overly belligerent. 2 \fUnder review as a conference paper at SIGBOVIK 2017 To evaluate this cost, we \ufb01rst compute \u03b1 with a melancholy search of Twitter, uniquely determining the cost of violence globally as $1876 for every person in the world (Twitter, 2016). Integrating over all discriminators and cases of probable discrimination, we arrive at a conservative value of 3.2 gigamattresses of cost. By any reasonable measure of humanity (\ufb01nancial, social, spiritual, cultural, grammatical or indeed dermatological), this is too many gigamattresses. Having made the compelling case for GUNs, we now turn to the highly anticipated related work section, in which we adopt a petty approach to resolving disagreements with other researchers by purposefully avoiding references to their relevant work. 2 RELATED WORK These violent delights have violent ends Geoff Hinton, date unknown Our work is connected to a range of adversarial work in both the machine learning and the machine forgetting communities. To the best of our knowledge Smith & Wesson (1852) were the \ufb01rst to apply GUNs to the problem of generative modelling, although similar ideas have been explored in the context of discriminative modelling as far back as the sixteenth century by Fabbrica d\u2019Armi Pietro Beretta in an early demonstration of one-shot learning. Unfortunately, since neither work evaluated their approach on public benchmarks (not even on MNIST), the signi\ufb01cance of their ideas remains under appreciated by the machine learning community. Building on the approach of Fouhey & Maturana (2012)5, we next summarise the adversarial literature most closely related to ours, ordered by Levenshtein edit distance: GAN (Goodfellow et al., 2014), WGAN (Arjovsky et al., 2017), DCGAN (Radford et al., 2015), LAPGAN (Denton et al., 2015), InfoGAN (Chen et al., 2016), StackedGAN (Huang et al., 2016) and UnrolledGAN (Metz et al., 2016)6. Unadversarial approaches to training have also received some attention, primarily for models used in other domains such as fashion (Crawford, 1992) and bodybuilding (Schwarzenegger, 2012)). Some promising results have also been demonstrated in the generative modelling domain, most notably through the use of Variational Generative Stochastic Networks with Collaborative Shaping (Bachman & Precup, 2015). Our work makes a fundamental contribution in this area by dramatically reducing the complexity of the paper title. 
3 GENERATIVE UNADVERSARIAL NETWORKS Under the Generative Unadversarial Network framework, we simultaneously train two models: a generator G that does its best to capture whichever data distribution it feels it can manage and a motivator M that helps G to achieve its dream. The generator is trained by learning a function G(\u20d7 z; \u03b8g) which transforms samples from a uniform prior distribution pz(\u20d7 z) into the space graciously accommodating the data7. The motivator is de\ufb01ned as a function M(\u20d7 x; \u03b8M) which uses gentle gradients and persuasive language to encourage G to improve its game. In particular, we train G to maximise log(M(G(\u20d7 z)) and we simultaneously train M to maximise log(M(G(\u20d7 z)). Thus, we see that the objectives of both parties are aligned, reducing con\ufb02ict and promoting teamwork. The core components of our framework are illustrated in Figure 1. The GUN training scheme was inspired largely by Clint Eastwood\u2019s memorable performance in Dirty Harry but also in part by the Transmission Control Protocol (TCP) three-way handshake (Postel et al., 1981), which was among the \ufb01rst protocols to build harmony through synergy, acknowledgements and the simple act of 5This innovative work was the \ufb01rst to introduce the concept of an alphabetically-related, rather than scienti\ufb01cally-related literature review. 6In the interest of an unadversarial literature review, we note that Bishop (2006) and Murphy (2012) make equally good (up to \u03f5 = 10\u22126) references for further exploration of this area. 7The choice of the uniform prior prevents discrimination against prior samples that lie far from the mean. It\u2019s a small thing, but it speaks volumes about our inclusive approach. 3 \fUnder review as a conference paper at SIGBOVIK 2017 Figure 2: (a) GUNs are trained by updating the generator distribution G (yellow line) with the help and support of the motivator (red line) to reach its dream of the data distribution (blue dashed). (b) With a concerted effort, the generator reaches its goal. (c) Unlike previous generators which were content with simply reaching this goal, our generator is more motivated and gives it \u2018110%\u2019 moving it a further 10% past the data distribution. While this isn\u2019t terribly helpful from a modelling perspective, we think it shows the right kind of attitude. Algorithm 1 Training algorithm for Generative Unadversarial Networks 1: procedure TRAIN 2: for #iterations do 3: Sample n noise samples from prior pz(\u20d7 z) and compute G(\u20d7 z(1); \u03b8g), ...G(\u20d7 z(n); \u03b8g). 4: Sample n data samples \u20d7 x(1), ...\u20d7 x(n), from the data distribution. 5: Let G show pairs (\u20d7 x(i), G(\u20d7 z(i); \u03b8g)) to M as slides of a powerpoint presentation8. 6: Sample constructive criticism and motivational comments from M. 7: Update the powerpoint slides and incorporate suggestions into \u03b8G. shaking hands. A description of the training procedure used to train G and M is given in Algorithm 1. Algorithm 1 can be ef\ufb01ciently implemented by combining a spare meeting room (which must have a working projector) and a top notch deep learning framework such as MatConvNet (Vedaldi & Lenc, 2015) or Soumith Chintala (Chintala, 2012-present). 
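To make the aligned objective concrete, the following is a minimal sketch of a GUN training step, assuming small fully connected networks, Adam optimisers, a 784-dimensional data space and a clamping constant to keep the logarithm finite; none of these details are taken from the paper. Both G and M perform gradient ascent on the same quantity, log(M(G(z))).

```python
import torch
import torch.nn as nn

z_dim, x_dim = 16, 784  # assumed latent and data dimensionalities

# Generator G(z; theta_g) and motivator M(x; theta_M), both small MLPs here.
G = nn.Sequential(nn.Linear(z_dim, 128), nn.ReLU(), nn.Linear(128, x_dim))
M = nn.Sequential(nn.Linear(x_dim, 128), nn.ReLU(), nn.Linear(128, 1), nn.Sigmoid())

opt_g = torch.optim.Adam(G.parameters(), lr=1e-3)
opt_m = torch.optim.Adam(M.parameters(), lr=1e-3)

for step in range(1000):
    z = torch.rand(64, z_dim)            # uniform prior p_z(z), as stated in the text
    praise = M(G(z)).clamp_min(1e-8)     # ACKs from M for G's PROPS (clamped to avoid log 0)
    loss = -torch.log(praise).mean()     # both parties maximise log(M(G(z)))
    opt_g.zero_grad()
    opt_m.zero_grad()
    loss.backward()
    opt_g.step()                         # one ascent step for the generator...
    opt_m.step()                         # ...and one for the motivator, on the same objective
```

Because the two objectives coincide, no adversarial tug-of-war ever develops; as Figure 2 suggests, M simply cheers G on all the way to (and slightly past) the data distribution.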
We note that we can further improve training ef\ufb01ciency by trivially rewriting our motivator objective as follows9: \u03b8\u2217 M = min \u03b8M \u02db S(G) log(R) + log(1 \u2212\u03b6) (3) Equation 3 describes the \ufb02ow of reward and personal well-being on the generator network surface. \u03b6 is a constant which improves the appearance of the equation. In all our experiments, we \ufb01xed the value of \u03b6 to zero. 8To guarantee polynomial runtime, it is important to ensure that the generator is equipped with the appropriate dongle and works through any issues with the projector before the presentation begins. 9If this result does not jump out at you immediately, read the odd numbered pages of (Amari & Nagaoka, 2000) . This book should be read in Japanese. The even-numbered pages can be ripped out to construct beautiful orizuru. 4 \fUnder review as a conference paper at SIGBOVIK 2017 Figure 3: Visualised samples from the GUN model trained on MNIST11(the nearest training examples are shown in the right hand column). Note that these samples have been carefully cherry picked for their attractive appearance. Note how the GUN samples are much clearer and easier to read than the original MNIST digits. 4 EXPERIMENTS Give the people what they want (MNIST) Yann LeCun, date unknown In this section we subject the GUN framework to a rigorous qualitative experimental evaluation by training unadversarial networks on MNIST. Rather than evaluating the model error-rate or probability on withheld test data, we adopt a less confrontational metric, opportunities for improvement. We also assess samples generated by the trained model by gut feeling, enabling a direct comparison with a range of competing generative approaches. Following academic best practices, key implementation details can be found in our private code repository10. We warm-start the network with toy data taken from the latest Lego catalog. To nurture the right kind of learning environment, we let the network \ufb01nd its own learning rate and proceed by making \u03f5-greedy updates with an \u03f5 value of 1. We consider hard-negative mining to be a gratuitously harsh training procedure, and instead perform easy-positive mining for gentler data digestion. We now turn to the results of the experiment. Inspired by the Finnish education system, we do not test our models during the \ufb01rst formative epochs of development. A quantitative comparison with two other popular generative approaches has been withheld from publication to respect the privacy of the models involved. However, we are able to reveal that GUN had by far the most opportunities for improvement. We observed a sharp increase in performance once we all agreed that the network was doing well. By constrast, the adversarial nature of standard GAN methodologies usually elicits a \ufb01ght-or-\ufb02ight behavior, which can result in vanishing gradients and runaway losses. Samples drawn from the trained network are shown in Figure 3. 5"
+ },
+ {
+ "url": "http://arxiv.org/abs/1701.04895v1",
+ "title": "Unknowable Manipulators: Social Network Curator Algorithms",
+ "abstract": "For a social networking service to acquire and retain users, it must find\nways to keep them engaged. By accurately gauging their preferences, it is able\nto serve them with the subset of available content that maximises revenue for\nthe site. Without the constraints of an appropriate regulatory framework, we\nargue that a sufficiently sophisticated curator algorithm tasked with\nperforming this process may choose to explore curation strategies that are\ndetrimental to users. In particular, we suggest that such an algorithm is\ncapable of learning to manipulate its users, for several qualitative reasons:\n1. Access to vast quantities of user data combined with ongoing breakthroughs\nin the field of machine learning are leading to powerful but uninterpretable\nstrategies for decision making at scale. 2. The availability of an effective\nfeedback mechanism for assessing the short and long term user responses to\ncuration strategies. 3. Techniques from reinforcement learning have allowed\nmachines to learn automated and highly successful strategies at an abstract\nlevel, often resulting in non-intuitive yet nonetheless highly appropriate\naction selection. In this work, we consider the form that these strategies for\nuser manipulation might take and scrutinise the role that regulation should\nplay in the design of such systems.",
+ "authors": "Samuel Albanie, Hillary Shakespeare, Tom Gunter",
+ "published": "2017-01-17",
+ "updated": "2017-01-17",
+ "primary_cat": "cs.AI",
+ "cats": [
+ "cs.AI",
+ "cs.SI",
+ "stat.ML"
+ ],
+ "main_content": "Introduction As we approach the year 2020, access to digital media and services is funnelled through a narrowing oligarchy of large technology \ufb01rms and paid for using those units of barter so favoured by the cash poor millennial generation\u2014fractions of the human attention span and volumes of personal data. The immense speed and scale at which the domain of social interaction has migrated to the internet has been one of the most striking trends of the last decade. At the heart of this exodus, social networks have emerged as the primary forums of personal, political and commercial discourse [1]. In such systems, the \ufb02ow of information depends on the social relationships that link the sub-graphs forming the network and the \ufb01ltering mechanisms that mediate the interactions along these links. To date, the most successful social networks have focused on business models that create value by providing access to a platform which coordinates the sale of advertisements and services to their users (although other revenue sources have been explored [2]). For a social network to be \ufb01nancially \u2217Autonomous Intelligent Machines and Systems, Centre for Doctoral Training 30th Conference on Neural Information Processing Systems (NIPS 2016), Barcelona, Spain. arXiv:1701.04895v1 [cs.AI] 17 Jan 2017 \fviable at scale, it must therefore meet two competing demands. It must be suf\ufb01ciently engaging to acquire and retain new users and it must be effective at advertising products to these users [3]. In both cases, the central role played by the curation of information in the network is naturally suited to automated approaches [4] that can be tuned to maximise the pro\ufb01tability of the site2. Moreover, two key characteristics of internet-based social networks make this \ufb01ltering task particularly amenable to the use of modern machine learning techniques: First, access to an unprecedented level of detail corresponding to the historical state of individual users for every previous interaction in which they participated on the network; Second, the availability of sophisticated analytics tools that enable the tracking of user responses to any stimuli they are served by the algorithm. These analytics provide the system with a powerful feedback mechanism by which it can explore strategies in aid of its optimisation objective. We refer to the collective set of processes used to ful\ufb01l this role for a given social network as the curator algorithm. The action-set of the curator algorithm can be restricted to a single recurring decision for the network: Which subset of available content is to be shown to the user at a given instant? It is clear that the ability of the algorithm to perform this role in an optimal manner is tightly coupled to the information it has access to. We propose that a curator algorithm provided with a large supply of test subjects and an accessible feedback mechanism for evaluating its moves may choose to explore information curation strategies that are detrimental to users. In particular, we suggest that it may develop sophisticated strategies for manipulating its users as it tries to optimise its given objective. Moreover, recent trends towards rejecting simpler, interpretable models in favour of more powerful deep architectures that are less amenable to human interpretation make the direct supervision and regulation of the strategies explored by such algorithms extremely dif\ufb01cult. 
As a consequence, these strategies may be developed without the intention of the network operator. The impact of the \ufb01rst generation of social network curator algorithms has attracted signi\ufb01cant interest from the research community. Perhaps the best known hypothesis regarding their usage has been the creation of \u201c\ufb01lter bubbles\u201d. In this phenomenon, users are exposed to an increasingly restricted set of opinions and perspectives by the curation algorithm as it over-exploits its knowledge base about pre-existing user preferences in order to maximise their engagement [5, 6]. Further work has sought to clarify the decisions taken by the algorithms [7] and understand the emotional response of users to its application [8]. Related research undertaken by Facebook has emphasised the importance of the individual\u2019s choices when determining the extent to which curation in\ufb02uences a user\u2019s exposure to challenging views [9]. These studies provide a useful context for the effects produced by early attempts at social network curation. However, in this work we instead focus our attention on the potential consequences of the next generation of viable curator algorithms. A number of previous works have also explored the potential for forms of Arti\ufb01cial Intelligence to manipulate humans, particularly as a consequence of a predicted intelligence explosion [10, 11], an event which is often referred to as the singularity [12]. The many risks of human manipulation by the resulting superintelligence are analysed in detail in [13]. Previous predictions for the timescale of this event vary, but all consider that if it were to occur, it would require a level of technology that is not yet available [14, 15, 13]. In contrast to the threat posed by a superintelligence, we argue that the algorithmic manipulation of humans in social networks is feasible with currently available technology. More closely related to our work, the potential for psychological parasites (intellectual stimuli that lead to addictions) are identi\ufb01ed as a risk associated with the improving capabilities of technology in [16]. These risks are particularly abundant in mobilsation systems\u2014persuasive technologies designed to coordinate users towards speci\ufb01c goals [17]. We develop this idea further, arguing that there are speci\ufb01c risks posed by the combination of current machine learning algorithms and access to abundant user data in the social network domain. Set to come into force in 2018, the European General Data Protection Regulation [18] introduces a range of measures of signi\ufb01cant relevance to industries heavily engaged in the collection and analysis of user data. Any framework that seeks to provide appropriate regulation for curator algorithms faces a daunting task: it must seek to protect the well-being of the network participants but also strive to protect the ability of the network operators to innovate. We consider the effect of this legislation 2For social networks whose business models are based on advertising, this objective may be maximised through an appropriate proxy, such as the total time a user spends each day interacting with the site. 2 \fin the social network domain and assert that curator algorithms deserve particular attention from regulators. When considering the potential avenues for the regulation of curation algorithms, it may prove useful to consider how other industries have approached similar challenges. 
In recent years, regulators in the \ufb01nancial industry have been faced with the task of preventing market manipulation by increasingly complicated, algorithmically driven high frequency trading strategies [19]. While some of the proposed regulatory responses are speci\ufb01c to \ufb01nance (for instance, cancellation taxes which render a number of market manipulation strategies infeasible [20]), the \ufb01nancial industry provides a useful reference point for regulators in the social network domain (see Sec. 4 for details). In this work we consider the risks and regulation of social network curator algorithms by formulating their task as a reinforcement learning problem. Concretely, our \ufb01rst contribution is to determine the risks of an unregulated system by exploring a range of strategies a curator algorithm might employ with detrimental effects for users. Our second contribution is to propose speci\ufb01c strategies for the safe regulation of curator algorithms in the context of existing data legislation and to assess their potential effectiveness in this role. 2 Engagement as a Learning Problem We will view the problem of maximising user engagement according to some utility function much as a machine learning researcher working in advertisement might\u2014as a reinforcement learning task [21]. This framework has been shown to be particularly effective in optimising content selection for social network users [22]. At a coarse level, a typical reinforcement learning model is built around several core concepts: \u2022 A set of states, S, which fully encode the system and environment we intend to model. The state for individual users may be modelled as an aggregate of the content presented on-screen and a (partially observed) estimate of the user\u2019s \u2018internal\u2019 mental state. \u2022 A set of possible actions, A, which the system can trigger in return for a (possibly delayed) reward (R). Triggering an action may also cause a state transition. In the examples we consider, an action may represent content to a social-network user. \u2022 An indication of reward, utility, or long term value for the algorithm (R). It is against this that the operator adapts the policy function, selecting for strategies which maximise this reward. \u2022 A policy function P : S \u00d7 A \u2192R(\u00d7S). This mapping essentially encodes the strategy which the system pursues in order to maximise reward in the long term horizon. It is here that external control of the curation algorithm may be exerted to avoid pathological and potentially unethical behaviour. Reinforcement learning anneals on a policy function to maximise the value and therefore the long running utility of a system. It is clear then, that it is this component which determines the sophistication of the user engagement strategy, and therefore it is here that we focus our attention. At a fundamental level, the policy function does nothing more than provide a mapping from the state-action space through to the scalar value function. The sophistication of the strategy is therefore strongly linked to the complexity of the mapping we are able to express, and today deep neural networks are usually chosen as the surrogate for this function. 
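As a minimal sketch of this formulation, consider a tabular Q-learning curator in which states are coarse engagement levels, actions are which content category to surface next, and the reward is the measured engagement signal; the toy state and action sets and the hyperparameters below are illustrative assumptions only.

```python
import random
from collections import defaultdict

STATES  = ["disengaged", "browsing", "hooked"]        # S: coarse user states (illustrative)
ACTIONS = ["friends", "news", "ads", "viral_video"]   # A: which content subset to show next

Q = defaultdict(float)        # Q[(state, action)] -> estimated long-run engagement value
alpha, gamma, eps = 0.1, 0.9, 0.1

def select_content(state):
    """Epsilon-greedy policy: mostly exploit the current value estimates."""
    if random.random() < eps:
        return random.choice(ACTIONS)
    return max(ACTIONS, key=lambda a: Q[(state, a)])

def update(state, action, reward, next_state):
    """One-step Q-learning update from the observed engagement feedback."""
    best_next = max(Q[(next_state, a)] for a in ACTIONS)
    Q[(state, action)] += alpha * (reward + gamma * best_next - Q[(state, action)])

# Each interaction: choose content for the user's current state, observe the
# engagement signal (clicks, dwell time, ...) and fold it back into the policy.
state = "browsing"
action = select_content(state)
update(state, action, reward=1.0, next_state="hooked")   # user engaged: positive reward
```

In a deployed system the lookup table above would be replaced by a learned function approximator, typically a deep neural network.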
These are capable of expressing very complex and non-intuitive functions, as demonstrated by Google\u2019s AlphaGo project [23], where a Go playing policy function was learned which not only outperformed top human players, but did so via a mixed mode of human-like and highly non-intuitive but optimal moves. Other examples of such behaviour arose when these systems were trained to play video games. In particular, when Google trained a policy for playing the notorious Atari boxer game [24] the system learned to exploit weaknesses in the game design, trapping the opponent in a corner and thereby guaranteeing victory. As research continues, we can envisage a world in which these approaches are effectively brought to bear on the \u201cgame\u201d of maximising network pro\ufb01tability. If governed solely by this utility function, we suggest that equivalent pathologies in human behaviour may be discovered and exploited. 3 \f3 Manipulation Through Curation In this section we discuss the range of manipulation strategies available to a curator algorithm seeking to optimise the pro\ufb01tability of a social network. In this context, we take manipulation to mean the art of deliberately in\ufb02uencing a person\u2019s behaviour to bene\ufb01t some objective. We begin by describing the forms of manipulation that are applicable in the domain of social networks. We then introduce a simple categorisation of the different forms of manipulation and offer examples of the strategies a curator algorithm might develop with detrimental effects for its users. Manipulation forms a natural component of human interaction and can take many forms, ranging from direct requests to subtle and intentionally hidden signals. A number of previous studies have demonstrated how human behaviour can be in\ufb02uenced with subtle visual and verbal clues [25, 26, 27, 28]. Research into the design of site features has further demonstrated the ability of operators to \u201csteer\u201d user behaviour in [29]. Of particular relevance to this work, it has been shown that the emotional states of social network users can be in\ufb02uenced by selectively \ufb01ltering the content produced by their friends [30]. In\ufb02uential early work in the \ufb01eld of behavioural psychology determined that animals could be manipulated most effectively if they are rewarded on a variable, unpredictable schedule [31]. This behaviour has been used pro\ufb01tably by casinos who offer gamblers surprise rewards to keep them hooked to the action in the midst of a losing streak [32]. Similar ideas have been applied to game design to keep players engaged for longer by unpredictably varying the duration of in-game tasks [33]. These psychological traits exemplify the kind of in-built behaviours that could be discovered and exploited by a curator algorithm. In order to explore the speci\ufb01c forms of strategy available to a curator algorithm we propose a simple categorisation of manipulation. We de\ufb01ne a manipulation to be of \ufb01rst order if the manipulation is direct and the objective of the manipulator is transparent to the participant. A manipulation is de\ufb01ned to be of second order if it is indirect, but the objective remains transparent to the participant. Further, we consider a manipulation to be of third order if it is indirect and the means by which the objective is attained are not transparent to the participant3. These categories may be illustrated with a simple example. 
Consider a bar owner wishing to increase drinks sales at their establishment. Each evening, the owner may choose to simply ask customers directly to purchase more drinks. This strategy, corresponding to a \ufb01rst order manipulation, has the bene\ufb01t of simplicity but may not lead to optimal drinks sales (or indeed the renewal of their bar licence). The owner may instead aim to increase sales with advertisements illustrating the enjoyment of other customers as they refresh themselves with drinks from the bar. This form of advertising aims to evoke a sense of desire in the customers which may lead indirectly to the purchase of more drinks. However, the objective of the advert remains transparent to the customer, corresponding to a second order manipulation. Finally, a shrewd bar owner may employ a third strategy, in which they provide free snacks to customers of the bar. The snacks, however, are heavily salted, and after consuming them the customers \ufb01nd their throats parched and in need of immediate refreshment. This strategy is both indirect and not transparent to all but the experienced customers, corresponding to a third order manipulation. We remark here that access to detailed information about the target plays an important role in the ability to manipulate them. Should an unethical bar owner overhear sensitive information about the personal life of a customer at the bar they have the potential to pursue further strategies, such as ensuring the customer loses their job so that they are more likely to spend time drinking at the establishment. We might assume that a curator algorithm seeking to maximise pro\ufb01tability will naturally explore \ufb01rst and second order manipulations as it seeks to advertise products to its user base. Aided by access to detailed user information, it can make powerful inferences about which information should be displayed at each instant. Consider, for example, the marketing of an energy drink. With the knowledge that a user is a student, that they are awake beyond their usual sleep cycle, that the date of their exams is drawing near and that their online activity shows indications of fatigue, the curator can 3Note that the distinction between these categories rests on the dif\ufb01cult assessment of the cognitive abilities of the target [34]. The manipulator may determine that the intention of a given set of behaviours is transparent to a sophisticated target, but not to a simple target. 4 \fselect an optimal time and context for the display of an advert. Now imagine a more sophisticated algorithm capable of pursuing third order manipulations. Such an algorithm might choose to display content which had been selected with the speci\ufb01c goal of exhausting the user. This could be achieved by triggering predictable repeat behaviours gleaned from an in-depth knowledge of their browsing habits. Indeed, over longer time horizons, the curator might determine that an effective method for increasing the sales of energy drinks is the distortion of the user\u2019s sleeping patterns. To take another example, consider a curator algorithm seeking to use information about social groups to increase sales of dating site memberships. 
While simple manipulations could lead it to present content encouraging individual users to search for partners, it could pursue third order manipulations by intentionally encouraging subsets of social groups to communicate in a manner that excludes other members, actively evoking a feeling of loneliness in the affected party to increase their responsiveness to advertising. A recent example of this strategy exploration principle in action can be found in the efforts of a collection of companies seeking to optimise advertising revenue during the United States presidential election in 2016 [35]. Through simple trial and error, they determined that carefully targeted fake political news stories were extremely effective in maximising click-throughs. Since this strategy was optimising their objective, they doubled down on this approach and produced as much content as possible without regard for its effect on the users of the network. With the same objective, even a comparatively simple curator algorithm would be capable of developing this strategy. We note that it is certainly not the case that all strategies pursued by a curator algorithm will be detrimental for users. Indeed, the energy drinks may give the tired student the boost required to raise their grade, while the previously lonely user may \ufb01nd happiness through their new dating site membership. However, perhaps the most striking aspect of the Atari game-playing algorithm [24] was not that it was capable of surpassing human performance, but rather that it came up with \u201ccheat\u201d strategies that human players had not previously considered (e.g. the boxer strategy described in Sec.2). Similarly, although the manipulation examples described above are simple and interpretable, we suggest that the curator algorithm is capable of developing sophisticated, uninterpretable strategies for manipulating users as they optimise their objective. By their very nature, such strategies are dif\ufb01cult to predict and therefore dif\ufb01cult to regulate. It is however an issue that is worthy of consideration if we wish to avoid the discovery of similar \u201ccheat\u201d strategies for human manipulation. 4 Curator Regulation Do social network curator algorithms deserve special attention from regulators? In Sec. 1, we argued that the risks of higher order manipulations result from providing curator algorithms with three key assets: extensive access to user data, the ability to devise sophisticated strategies (potentially beyond the understanding of human operators) and an effective mechanism for evaluating the effects of its strategies. In this section, we assess the need for regulation in social network curator algorithms in the context of these three areas. We begin by discussing the General Data Protection Regulation (GDPR) recently introduced by the European Union and its implications for the strategies available to algorithms operating in the social network domain. Speci\ufb01cally, we examine its ability to safeguard users from higher order manipulations through its requirement of algorithm interpretability. Next, we explore strategies for the speci\ufb01c regulation of reinforcement learning-based curator algorithms and make recommendations for their application. Finally, we discuss the challenges faced by regulators operating in industries in which algorithm interpretability is often infeasible as a useful reference for regulators considering the problem of curator manipulation. 
As modern social networks develop a global user base, they become subject to a diverse range of national data and privacy regulations [36], as well as laws governing the transborder data \ufb02ows that occur in the operation of a international organisation [37]. Of these, one set of regulations which holds particular signi\ufb01cance for the operation of curator algorithms is the General Data Protection Regulation, set to come into force across the European Union in 2018 [18]. Among rules governing the storage and usage of personal data which will apply in social network domain, its creation of a so called \u201cright to explanation\u201d [38] has signi\ufb01cant consequences for the design of algorithms that operate on personal data. By requiring that companies performing automated decision making based on personal data must be capable of supplying \u201cmeaningful information about the logic involved\u201d, the regulation places heavy emphasis on algorithm interpretability. 5 \fAlgorithm interpretability has been a longstanding topic of interest in machine learning, yielding techniques that modify and extend models to explain their decisions [39, 40] alongside efforts to improve naturally interpretable algorithms to make them competitive with their opaque counterparts [41]. However, while there has been a great deal of research interest in improving the understanding of deep neural networks (using techniques such as random perturbation [42], invariance analysis [43, 44, 45], visualisation [46, 47, 48] and dimensionality reduction [49]), the interpretation of these models remains notoriously challenging. Consequently, it is not clear whether these models are currently capable of providing the \u201cmeaningful information\u201d required by the regulation. To achieve compliance, curator algorithms may therefore be restricted to a set of simple function classes. As a result, the potential sophistication of the policy function described in Sec. 2 would be curtailed and higher order manipulation strategies would be unlikely to arise. However, we note that there are two reasons why this regulation may not be an effective safeguard for social network users. Firstly, the regulation sets such comprehensive requirements that it may become meaningless in the law books [50]. Secondly, a lack of consensus on precisely what it means for a model to possess the property of interpretability or be capable of providing \u201cmeaningful information\u201d makes it extremely dif\ufb01cult to assess the forms of algorithm that would be compliant with the regulation. Indeed, under certain criteria it has been observed that deep neural networks may be considered no less interpretable than linear models [51]. In either case, if the ambiguities of the requirement of interpretability should render it ineffective in preventing higher order manipulations, what options remain available to regulators? It may be that even without comprehensive model understanding, reasonable guarantees about the actions taken by the model can be achieved. In a number of complex industrial control systems, a policy function exists implicitly through a functional approximation to the physics of the system, rather than solely through the direct inference of system dynamics from data. As an example, standard commercial autopilots rely on an implicit policy function through sophisticated control systems [52, 53] and are required to provide reasonable guarantees about their behaviour to avoid undesirable outcomes for the operator. 
To achieve this, controller designers choose approximations to the system dynamics in order to arrive at an implicit policy which is guaranteed to avoid unfavourable regions of state/action space. There are a variety of subclasses of approximation which lead to provably \u201ccorrect\u201d systems (see [54] for an example). While it remains challenging to provide guarantees on the long term behaviour of highly complex policy functions learned from data, there is growing interest in achieving accurate credible interval estimations for the outputs produced by deep neural networks [55]. Research in this area may provide some empirical understanding of the behaviour we might expect from a given policy function as it adapts to new observations. Other methods have demonstrated the potential of combining a series of locally simple models [56], an approach that has the potential to admit more accessible analysis. If further work is able to provide appropriate state-action space behaviour guarantees, it should be able to restrict manipulative behaviour without a requirement on the low level interpretability of the policy function. An interesting alternative for the regulation of algorithms that lie beyond human interpretation can be found in the growing \ufb01eld of machine ethics [57, 58]. As the \ufb01eld of machine learning continues to develop, it may frequently be the case that the most useful algorithms do not readily admit human interpretation. Rather than prohibiting the use of these algorithms, it may be possible for regulators to prevent manipulation by requiring that curator algorithms act in a manner that is consistent with a carefully speci\ufb01ed set of ethical choices. However, we note that at present this approach faces a number of challenges that make the implementation of an ethical curator algorithm extremely dif\ufb01cult [59]. While provable behavioural guarantees and machine ethics could prove to be effective tools for regulators in the long term, in the short term we suggest that a more pragmatic approach may be required. As noted above, the third key factor enabling curator algorithms to develop manipulative behaviour is the availability of an immediate and effective feedback mechanism for evaluating the response of users. We therefore suggest that a practical short term solution can be achieved through the construction a partial \ufb01rewall restricting the \ufb02ow of information that provides this mechanism in social networks. However, if applied without careful consideration, this method runs the risk of placing an unnecessarily strong constraint on the ability of social network operators to improve their curation service for their users. Regulators must therefore seek an appropriate balance between safeguarding users from the risks of manipulation and enabling operators to innovate and produce products which will bene\ufb01t those users. 6 \fAs a brief aside, we note that here that although the \ufb01nancial industry differs in many ways from the world of social networking, it provides a useful reference for the dif\ufb01cult challenges facing regulators in the social network domain. Speci\ufb01cally, regulators seek to prevent market manipulation, a practice by which participants arti\ufb01cially distort information to their bene\ufb01t [60]. 
However, while manipulation through human-based trading practices have long been outlawed, legislators are still exploring approaches to regulating the High Frequency Trading (HFT) algorithms that increasingly dominate the marketplace. By operating at speeds humans cannot match these algorithms are able to manipulate4 the market in ways that are dif\ufb01cult to detect [61, 62]. The need for the regulation of these algorithms was brought into sharp relief by their role in the \u201cFlash Crash\u201d of the stock market in 2010, an event which resulted a 9% index drop in a single hour of trading [63]. One regulatory approach to preventing algorithmic market manipulation has been the introduction algorithm tagging, a process in which traders must provide the identity of the algorithm responsible for a trade [64]. While this approach has been helpful in improving regulators\u2019 understanding of the interactions between different market participants, it has not yet been demonstrated to be effective in preventing manipulation [65]. At times, regulators have taken the more direct approach of requesting access to the algorithms themselves [66], but when algorithm interpretability is not feasible this action is of limited value. In short, despite extensive experience in regulating trading practices to prevent market manipulation, \ufb01nancial industry regulators have yet to achieve a uni\ufb01ed approach to the problem of algorithmic manipulation. This should serve as a warning that a regulatory solution to the potentially more complicated issue of preventing human manipulation may prove extremely challenging. In summary, the task facing regulators seeking to prevent the manipulation of users by social network curator algorithms is a dif\ufb01cult one. The bold approach taking by the European Union may prove effective in combatting this issue, but it remains to be seen whether setting a requirement of interpretability is both practical and enforceable. Should this be the case, we propose a simple \ufb01rewall-based approach as short-term safeguard for users until more sophisticated techniques can be developed to prevent the risks of manipulation. 5"
+ },
+ {
+ "url": "http://arxiv.org/abs/1610.02255v1",
+ "title": "Learning Grimaces by Watching TV",
+ "abstract": "Differently from computer vision systems which require explicit supervision,\nhumans can learn facial expressions by observing people in their environment.\nIn this paper, we look at how similar capabilities could be developed in\nmachine vision. As a starting point, we consider the problem of relating facial\nexpressions to objectively measurable events occurring in videos. In\nparticular, we consider a gameshow in which contestants play to win significant\nsums of money. We extract events affecting the game and corresponding facial\nexpressions objectively and automatically from the videos, obtaining large\nquantities of labelled data for our study. We also develop, using benchmarks\nsuch as FER and SFEW 2.0, state-of-the-art deep neural networks for facial\nexpression recognition, showing that pre-training on face verification data can\nbe highly beneficial for this task. Then, we extend these models to use facial\nexpressions to predict events in videos and learn nameable expressions from\nthem. The dataset and emotion recognition models are available at\nhttp://www.robots.ox.ac.uk/~vgg/data/facevalue",
+ "authors": "Samuel Albanie, Andrea Vedaldi",
+ "published": "2016-10-07",
+ "updated": "2016-10-07",
+ "primary_cat": "cs.CV",
+ "cats": [
+ "cs.CV"
+ ],
+ "main_content": "Introduction Humans make extensive use of facial expressions in order to communicate. Facial expressions are complementary to other channels such as speech and gestures, and often convey information that cannot be recovered from the other two alone. Thus, understanding facial expressions is often necessary to properly understand images and videos of people. The general approach to facial expression recognition is to label a dataset of faces with either nameable expressions (e.g. happiness, sadness, disgust, anger, etc.) or facial action units (movements of facial muscles such as tightening the lips or raising an upper eyelid) and then learn a corresponding classi\ufb01er, for example by using a deep neural network. In contrast, humans need not to be explicitly told what facial expressions means, but can learn that by associating facial expressions to how people react to particular events or situations.1 In order to investigate whether algorithms can also learn facial expressions by establishing similar associations, in this paper we look at the problem of relating facial expressions to objectively-quanti\ufb01able contextual events in videos. The main dif\ufb01culty of this task is that there is only a weak correlation between an event occurring in a video and a person showing a particular facial expression. However, learning facial expressions in this manner has three important bene\ufb01ts. The \ufb01rst one is that it grounds the problem on objectively-measurable c \u20dd2016. The copyright of this document resides with its authors. It may be distributed unchanged freely in print or electronic forms. 1Generating certain facial expressions is an innate ability; however, recognizing facial expression is a learned skill. arXiv:1610.02255v1 [cs.CV] 7 Oct 2016 \f2 ALBANIE, VEDALDI: LEARNING GRIMACES BY WATCHING TV Figure 1: FaceValue dataset. We study facial expressions from objectively-measurable events occurring in the \u201cDeal or No Deal\u201d gameshow. Top: detection of an event at round t = 6 in the game. Left: a box is opened, revealing to the contestant that her prize is not the one of value xt = \u00a35. Since this is a low amount, well below the expected value of the prize of E5 = \u00a317,331, this is a \u201cgood\u201d event for the contestant. Right: the contestant\u2019s face, intuitively expressing happiness, is detected. Note also the overlay for xt = \u00a35 disappearing from a frame to the next; our system can automatically read such cues to track the state of the game. Bottom: four example tracks, the top two for \u201cgood\u201d events and the bottom two for \u201cbad\u201d events, as de\ufb01ned in the text. quantities, whereas labelling emotions or even facial action units is often ambiguous. The second bene\ufb01t is that contextual information can often be labelled in videos fully or partially automatically, obviating the cost of collecting large quantities of human-annotated data for data-hungry machine learning algorithms. Finally, the third advantage is that the ultimate goal of face recognition in applications is not so much to describe a face, but to infer from it information about a situation or event, which is tackled directly by our study. Concretely, our \ufb01rst contribution (Sect. 2; Fig. 1) is to develop a novel dataset, FaceValue, of faces extracted from videos together with objectively-measurable contextual events. 
The dataset is based on the \u201cDeal or No Deal\u201d TV program, a popular game where contestants can win or lose signi\ufb01cant sums of money. Using a semi-automatic procedure, we extract signi\ufb01cant events in the game along with the player (and public) reaction. We use this data to predict from facial expressions whether events are \u201cgood\u201d or \u201cbad\u201d for the contestant. To the best of our knowledge, this is the \ufb01rst example of leveraging gameshows in facial expression understanding and the \ufb01rst study aiming to relate facial expressions to people\u2019s activities. Our second contribution is to carefully assess the dif\ufb01culty of this problem by establishing a human baseline and by extending the latter to existing expression recognition datasets for comparison (Sect. 3). We also develop a number of state-of-the-art expression recognition models (Sect. 4) and show that excellent performance can be obtained by transferring deep neural networks from face veri\ufb01cation to expression recognition. Our \ufb01nal contribution is to extend such systems to the problem of recognising FaceValue events from facial expressions (Sect. 5). We develop simple but effective pooling strategies to handle face tracks, integrating them in deep neural network architectures. With these, we show that it is not only possible to predict events from facial expressions, but also to learn nameable expressions by looking at people spontaneously reacting to events in TV programs. \fALBANIE, VEDALDI: LEARNING GRIMACES BY WATCHING TV 3 Dataset Size Labelling Technique Expressions Labels FER 35,887 Faces Internet search Mixed 6+1 emotions AFEW 5.0 1,426 Clips Subtitles Acted 6+1 emotions SFEW 2.0 1,635 Faces Subtitles Acted 6+1 emotions AM-FED 168,359 Faces Human experts Spontaneous FACS FaceValue (ours) 192,030 Faces Metadata extraction Spontaneous Event Outcome Table 1: Comparison of emotion-based datasets of faces in challenging conditions. 1.1 Related work Facial expressions are a non-verbal mode of communication complementary to speech and gestures [1, 11]. They can be produced unintentionally [10], revealing hidden states of the actor in pain or deception detection [2]. Facial expressions are commercially valuable, attracting increasing investment from advertising agencies that seek to understand and manipulate the consumer response to a product [12] and corresponding regulatory attention [31]. Face-related tasks such as face detection, veri\ufb01cation and recognition have long been researched in computer vision with the creation of several labelled datasets: FDDB [18], AFW [39] and AFLW [21] for face detection; and LFW [16] and VGG-Face [28] for face recognition and veri\ufb01cation. Face detectors and identity recognizers can now rival the performance of humans [33]. Facial expression recognition has also received signi\ufb01cant attention in computer vision, but it presents a number of additional subtleties and dif\ufb01culties which are not found in face detection or recognition. The main challenge is the consistent labelling of facial expressions which is dif\ufb01cult due to the subjective nature of the task. A number of coding systems have been developed in an attempt to label facial expressions objectively, usually at the level of atomic facial movements, but even human experts are not infallible in generating such annotations. Furthermore, getting these experts to annotate a dataset is expensive and dif\ufb01cult to scale [27]. 
Another issue is the \u201cauthenticity\u201d of facial expressions, arising from the fact that several datasets are acted [34], either speci\ufb01cally for data collection [25] [24] [14] or indirectly as data is extracted from movies [8]. Our FaceValue dataset sidesteps these problems by recording spontaneous reactions to objectively-occurring events in videos. Examples of datasets which contain challenging variations in pose, lighting conditions and subjects are given in Table 1. Of these, two in particular have received signi\ufb01cant research interest as popular benchmarks for facial expression recognition. The Static Facial Expression in the Wild 2.0 (SFEW-2.0) data [7] (used in the EmotiW challenges [8]) consists of images from movies which collectively contain 1,635 faces labelled with seven emotions (this dataset was constructed by selectively extracting individual frames from AFEW5.0 [9]). The Facial Expression Recognition 2013 (FER-2013) dataset [13], which formed the basis of a large Kaggle competition, contains 35k images labelled with the same seven emotions. These datasets were used to develop several state-of-the-art emotion recognition systems. Among the top-performing ones, the authors of [37] and [19] propose ensembles of deep network trained on the FER and SFEW-2.0 data. There are also several commercial implementations of expression recognition, such as CMU\u2019s IntraFace [5] and the Affectiva face software. \f4 ALBANIE, VEDALDI: LEARNING GRIMACES BY WATCHING TV 2 FaceValue: expressions in context In this section we describe the FaceValue dataset (Fig. 1) and how it was collected. Data source. The \u201cDeal or No Deal\u201d TV game show2 was selected as the basis for our data for a number of reasons. First, it contains a very signi\ufb01cant amount of data. The show has been running nearly daily in the UK for the past eleven years, totalling 2,929 episodes. Each episode focuses on a different player and lasts for about forty minutes. Furthermore, the same or very similar shows are or were aired in dozens of other countries. Second, the game is based on simple rules and a sequence of discrete events that are in most cases easily identi\ufb01able as positive or negative for the player, and hence can be expected to induce a corresponding emotion and facial expression. Furthermore, these events are easily detectable by parsing textual overlays in the show or other simple patterns. Thirdly, since there is a single player, it is easy to identify the person that is directly affected by the events in the video and the camera tends to focus on his/her face. An example of the in-game footage and data extraction pipeline is shown in Fig. 1. The rules of the game are easily explained. There are n = 22 possible cash prizes X0 = {p1, p2,..., pn} where prizes p1 < p2 < \u00b7\u00b7\u00b7 < pn range from 1p up to \u00a3250,000. Initially the player is assigned a prize x0 \u2208X0 but does not know its value. Then, at each round of the game the player can randomly extract (realised as opening a box, see Fig. 1 top-left) one of the prizes xt \u0338= x0 from Xt and reveal it, resulting in a smaller set Xt = Xt\u22121 \u2212{xt} of possible prizes. Through this process of elimination the player obtains information about his/her prize x0. Occasionally the player is offered the opportunity to leave the game with a prize pd (\u201cdeal\u201d) determined by the game\u2019s host or to continue playing (\u201cno deal\u201d) and eventually leave with x0. 
The expected value Et of the win x0 at time t is Et = meanXt. When a prize xt is removed from Xt\u22121, the player perceives this as a \u201cgood\u201d event if Et > Et\u22121, which requires xt < Et\u22121, and a \u201cbad\u201d event otherwise. In practice we conservatively require Et > Et\u22121 +\u2206for a good event, where \u2206= \u00a3750. Interestingly, the game is continued even after the player has taken a \u201cdeal\u201d; in this case the roles of \u201cgood\u201d and \u201cbad\u201d events are reversed as the player hopes that the accepted deal pd is higher than the prize x0 he/she gave up. Dataset content. The data in FaceValue is de\ufb01ned as follows. Faces are detected right after a new prize xt is revealed for about seven seconds. These faces are collected in a \u201cface track\u201d ft. Furthermore, the face track is assigned the binary label: yt = dt \u00d7 ( +1, xt +\u2206< Et\u22121, \u22121, xt +\u2206\u2265Et\u22121, where dt is +1 if the deal was not taken so far, and \u22121 otherwise. Note that there are several levels of indirection between yt and a particular expression being shown in ft. For example, a player may not perceive a good or bad event according to this simple model, or could be responding to a stroke of bad luck with an ironic smile. The labels yt themselves, however, are completely objective. Data is extracted from 102 episodes of the show, resulting in 192,030 frames distributed over 2,118 labelled face tracks. Shows are divided into training, validation and test sets, which also means that mostly different identities are contained in the different subsets. 2Outside of computer vision, the interesting decision making dynamics of contestants in a high-stakes environment during the \u201cDeal or No Deal\u201d game show have attracted research by economists [30]. \fALBANIE, VEDALDI: LEARNING GRIMACES BY WATCHING TV 5 Data extraction. One advantage of studying facial expressions from contextual events is that these are often easy to detect automatically. In our case, we take advantage of two facts. First, when a prize is removed from the set Xt, this is shown in the game as a box being opened (Fig. 1 top-left). This scene, which occurs systematically, is easy to detect and is used to mark the start of an event. Next, the camera moves onto the contestant (Fig. 1 top-middle) to capture his/her reaction. Faces are extracted from the seven seconds that immediately follow the event using the face detector of [20] and are stored as part of the face track f = (f1, f2,..., fT). Occasionally the camera may capture the reaction of a member of the public; while it would be easy to distinguish different identities (e.g. by using the VGGFaces model of Sect. 4), we prefer not to as the public is sympathetic with the contestant and tends to react in a similar manner, improving the diversity of the collected data. Finally, the value of the prize xt being removed can be extracted either from the opened box using a text spotting system or, more easily, by looking at which overlay is removed (Fig. 1 topright). After automatic extraction, the data was fully checked manually for errors to ensure its quality. 3 Benchmark data and human baselines As FaceValue de\ufb01nes a new task in facial expression interpretation, in this section we establish a human baseline as a point of comparison with computer vision algorithm performance. 
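The binary labels just defined can be computed directly from the game state; the sketch below implements the stated rule (a conservative margin of Delta = 750, with the sign of the label flipped once a deal has been taken), while the function name and the example prize board are illustrative assumptions.

```python
DELTA = 750.0   # conservative margin from the text (Delta = 750 pounds)

def event_label(remaining, opened, deal_taken=False):
    """Return +1 for a 'good' event and -1 for a 'bad' one.

    remaining: prizes still in play *before* the box is opened (defines E_{t-1});
    opened: the revealed prize x_t; deal_taken: whether a 'deal' was accepted,
    which reverses the roles of good and bad events.
    """
    e_prev = sum(remaining) / len(remaining)      # E_{t-1} = mean of the remaining prizes
    d_t = -1 if deal_taken else +1
    return d_t * (1 if opened + DELTA < e_prev else -1)

# Illustrative board (not the real prize set): revealing a 5-pound box is good news.
remaining = [5, 50, 750, 3000, 10000, 35000, 72500]
print(event_label(remaining, opened=5))           # -> 1
print(event_label(remaining, opened=72500))       # -> -1
```

These labels are the targets that both the human baseline below and the event-prediction models of Sect. 5 must recover from faces alone.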
In order to compare FaceValue to existing facial expression recognition problems we establish similar baselines for two standard expression recognition datasets, FER and SFEW 2.0, introduced below. Benchmark datasets: FER and SFEW 2.0. The FER-2013 data [13] contains 48 \u00d7 48 pixel images obtained by querying Google image search for 184 emotion-related keywords. The dataset contains 35,887 images divided into 4,953 \u201canger\u201d, 547 \u201cdisgust\u201d, 5,121 \u201cfear\u201d, 8,989 \u201chappiness\u201d, 6,077 \u201csadness\u201d, 4,002 \u201csurprise\u201d and 6,198 \u201cneutral\u201d further split into training (28,709), public test (3,589) and private test (3,589) sets. Goodfellow et al. [13] note that this data is likely to contain label errors. However, their own human study obtained an average prediction accuracy of 65 \u00b1 5%, which is comparable to the 68 \u00b1 5% performance obtained by expert annotators on a smaller but manually-curated subset of 1,500 acted images. The SFEW-2.0 data [7] contains selected frames from different videos of the Acted Facial Expressions in the Wild (AFEW) dataset [6] assigned to either: 225 \u201cangry\u201d, 75 \u201cdisgust\u201d, 124 \u201cfear\u201d, 256 \u201chappy\u201d, 228 \u201cneutral\u201d, 234 \u201csad\u201d and 150 \u201csurprise\u201d. The training, validation and test splits are provided as part of the EmotiW challenge [8] and are adopted here. The AFEW data was collected by searching movie close captions for emotion-related keywords and then manually curating the results, generating a smaller number of labelled instances than FER. Human baselines. For each dataset we consider a pool of annotators, most of which are not computer vision experts, and ask them to predict the label associated with each face. In order to motivate annotators to be as accurate as possible, we pose the annotation process as a challenge. The goal is to guess the ground-truth label of an image and a score displaying the annotators\u2019 prediction accuracy is constantly updated. Ultimately, annotators performances are entered in a leaderboard. We found that this simple idea signi\ufb01cantly improved the annotators\u2019 performance. \f6 ALBANIE, VEDALDI: LEARNING GRIMACES BY WATCHING TV The dataset instances selected for the annotation tasks were constructed as follows. From FER, a random sample of 500 faces was extracted from the Public Test set. From SFEW 2.0, the full Validation set (383 samples) was used (faces were extracted from each image as described in section 4). From FaceValue, a random sample of 250 face tracks was extracted from the validation set, each of which was transformed into an animated GIF to allow annotators to see the face motion. Performance on each dataset was evaluated by partitioning into \ufb01ve folds, each of which was annotated by a separate pool. Every face instance across the three datasets received at least four annotations. On FER, our annotators achieved lower performance than results previously reported in [13] (58.2% overall accuracy vs 65%). However, we also noted a signi\ufb01cant variance between annotators (\u00b18.0%), which means that at least some of them were able to match or exceed the 65% mark. The unevenness of the annotators shows how dif\ufb01cult or ambiguous this task can be even for motivated humans. The annotators found SFEW-2.0 a more challenging task, obtaining an average accuracy of 53.0\u00b19.4% overall. 
One possible reason for this difference is the manner in which the datasets were constructed. FER faces were retrieved using Internet search queries which likely returned fairly representative examples of each expression; in contrast SFEW images were extracted from movies. On FaceValue, the average annotator accuracy was 62.0\u00b18.1%. Since the classi\ufb01cation task was binary, to facilitate a comparison with algorithmic approaches, the ROC-AUC was also computed for each annotator, resulting in an annotator average of 71.0\u00b15%. The relatively low scores of humans on each dataset illustrate the particularly challenging nature of the task. This dif\ufb01culty is underlined by the low levels of inter-annotator agreement (measured using Fleiss\u2019 kappa) on the three datasets of 0.574, 0.424 and 0.491 respectively. 4 Expression recognition networks In this section we develop state-of-the-art models for facial expression recognition in the two popular emotion recognition benchmarks of Sect. 3, namely FER and SFEW 2.0. Deep networks are currently the state-of-the-art models for emotion recognition, topping two of the last three editions of the Emotion recognition in the Wild (EmotiW) contest [23]. While the standard approach is to learn large ensembles of deep networks [19, 37], here we show that a single network can in fact be competitive or better than such ensembles if trained effectively. In order to do so we expand the available training data by pre-training models on other face recognition tasks, and in particular face identity veri\ufb01cation, using the recent VGG-Faces dataset [29]. Architectures and training. We base our models on four standard CNN architectures: AlexNet [22], VGG-M [3], VGG-VD-16 [35] and ResNet-50 [15]. AlexNet is used as a reference baseline and is pre-trained on the ImageNet ILSVRC data [32]. VGG-VD-16 is pre-trained on a recent dataset for face veri\ufb01cation called VGG-Faces [29]. This model achieves near state-of-the-art veri\ufb01cation performance on the LFW [16] benchmark; however, it is also extremely expensive. Thus, we train also a smaller network, based on the VGG-M con\ufb01guration. All models are trained with batch normalization [17] and are implemented in the MatConvNet framework [36]. Statistics such as image resolution and the usage of colour in the target datasets, and FER in particular, differ substantially from LFW and VGG-Faces. Nevertheless, we found that simply rescaling the smaller FER images to the higher VGG-Faces resolution together with duplicating the grayscale intensities for the three colour channels produced excellent results. \fALBANIE, VEDALDI: LEARNING GRIMACES BY WATCHING TV 7 Model Pretraining Test (Public) Test (Private) AlexNet ImageNet 62.44% 63.28% VGG-M ImageNet 66.04% 67.57% Resnet-50 ImageNet 67.79% 69.02% VGG-VD-16 ImageNet 66.92% 70.38% AlexNet VGGFaces 70.47% 71.44% VGG-M VGGFaces 71.08% 72.08% Resnet-50 VGGFaces 69.23% 70.33% VGG-VD-16 VGGFaces 72.05% 72.89% HDC\u22c6[19] 70.58% HDC\u2020\u2020 [19] 72.72% Table 2: Accuracy on FER-2013 of different CNN models and training strategies. 
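The input adaptation described above (rescaling the 48 x 48 grayscale FER faces to the VGG-Faces input resolution and replicating the intensities across three channels) can be sketched as follows. The 224 x 224 target size is an assumption based on standard VGG-Faces inputs, and the code is a PyTorch illustration rather than the MatConvNet pipeline used in the paper.

import torch
import torch.nn.functional as F

def fer_to_vggface_input(gray_batch):
    # gray_batch: tensor of shape (N, 1, 48, 48) holding grayscale FER faces.
    rgb = gray_batch.repeat(1, 3, 1, 1)             # duplicate intensities into three channels
    return F.interpolate(rgb, size=(224, 224), mode="bilinear", align_corners=False)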
Model Pretraining Val Test AlexNet VGGFaces 37.67% VGG-M VGGFaces 42.90% Resnet-50 VGGFaces 47.48% VGG-VD-16 VGGFaces 43.58% AlexNet VGGFaces+FER 38.07% 50.81% VGG-M VGGFaces+FER 47.02% 53.49% Resnet-50 VGGFaces+FER 50.91% 45.97% VGG-VD-16 VGGFaces+FER 54.82% 59.41% CMU\u22c6[37] FER combined 52.29% 58.06% HDC\u22c6[19] FER + TFD 52.50% 57.3% CMU \u2020\u2020 [37] FER combined 55.96% 61.29% HDC\u2020\u2020 [19] FER + TFD 52.80% 61.6% Table 3: Accuracy on SFEW-2.0 of different CNN models and training strategies Anger Disgust Fear Happiness Neutral Sadness Surprise Figure 2: Visualizations of the FER emotions for the VGG-VD-16 model. We also experimented with the other approach of pretraining by reducing the resolution and removing colour information from VGG-Faces; while this resulted in very competitive and more ef\ufb01cient networks, the full resolution models were still a little more accurate and are used in the rest of the work. After pre-training, each model is trained on the FER or SFEW 2.0 training set with a \ufb01ne tuning ratio of 0.1. This is obtained by retaining all but the last layer, performing N-way classi\ufb01cation, where N is the number of possible facial expression classes. Results. Table 2 compares the different architecture and the state-of-the-art on FER. When reporting ensemble models, \u22c6denotes the best single CNN and \u2020\u2020 denotes the ensemble. The best previous results on FER is 72.72% accuracy, obtained using the hierarchical committee of deep CNNs described in [19], combining more than 36 different models. By comparison, VGG-VD-16 pre-trained on VGG-Faces achieves a slightly superior performance at 72.89%. VGG-M achieves nearly the same performance (\u22120.8%) at a substantially reduced computational cost. We also note the importance of choosing a face-related pre-training set, as pre-training in ImageNet loses 3-4% of performance. Table 3 reports the results on the SFEW-2.0 dataset instead. Since the dataset itself consists of labelled scene images, we use the faces extracted by the accurate face detection pipeline described in [37] which applies an ensemble of face detectors [4, 38, 39]. As SFEW is much smaller than FER, pre-training is in this case much more important. The best result achieved by any of the four models pre-trained with ImageNet only was 31.19%. Pre-training on VGG-Faces produced substantially better results (+10%), and pre-training on VGGFaces and FER-Train produced better still (+18%). The best single model, VGGVD-16, achieves better performance than existing single and ensemble networks (+2.5%) on the validation set, and better performance than all but the ensembles of [19, 37] on the test \f8 ALBANIE, VEDALDI: LEARNING GRIMACES BY WATCHING TV Model Pre-training Method Val. Test VGG-M VGGFace+FER voting 0.656 0.592 VGG-VD VGGFace+FER voting 0.653 0.618 VGG-M VGGFace pooling arch. 0.764 0.691 VGG-VD VGGFace pooling arch. 0.726 0.671 VGG-M VGGFace+FER pooling arch. 0.794 0.722 VGG-VD VGGFace+FER pooling arch. 0.741 0.675 Table 4: ROC-AUC on FaceValue 0% 12.5% 25% 37.5% 50% Anger Disgust Fear Happiness Neutral Sadness Surprise \u201cgood\u201d event \u201cbad\u201d event Figure 3: FER expressions from FaceValue. set (-2%). Visualizations. While CNNs perform well, it is often dif\ufb01cult to understand what they are learning given their black-box nature. Here we use the technique of [26] to visualize the the best FER/SFEW model. 
This technique seeks to \ufb01nd an image I which, under certain regularity assumptions, maximizes the CNN con\ufb01dence \u03a6c(I) that I represents emotion c. Results are reported in Fig 2 for the VGG-VD-16 model trained on the FER dataset. Notably, the reconstructed pictures are mosaics of parts representative of the corresponding emotions. 5 Relating facial expressions to events in videos In this section we focus on the main question of the paper i.e. whether facial expressions can be used to extract information about events in videos. Baselines: individual frame prediction and simple voting. As baseline, a state-of-the-art emotion recognition CNN \u03a6 is applied to each frame in the face track. The T faces in a face track f = (f1,..., fT) are individually classi\ufb01ed by \u03a6( ft) and results are pooled to predict whether the event is positive y = +1 or negative y = \u22121. Positive emotions (happiness) vote for the \ufb01rst case, negative emotions (sadness, fear, anger, disgust) for the second and neutral/surprise emotions are ignored. The label with the largest number of votes in the track wins. Pooling architectures. There are two signi\ufb01cant shortcomings in the baseline. First, it assumes a particular map between emotions in existing datasets and positive and negative events in FaceValue. Second, it integrates information across frames using an ad-hoc voting procedure which may be suboptimal. In order to address these shortcomings we learn on FaceValue a new model that explicitly pools information across frames in a track. A pretrained network \u03a6 = \u03a61 \u25e6\u03a62 is split in two parts. Then, the \ufb01rst part is run independently on each frame, the results are pooled by either average or max pooling across time and the result is fed to \u03a62 for binary classi\ufb01cation: \u03a6(f) = \u03a62 \u25e6pool(\u03a61( f1),...,\u03a61( fT)). The resulting architecture is \ufb01ne-tuned on the FaceValue training set. In practice, we found that the best results were obtained by using the emotion recognition networks such as VGG-VD-16 trained on the FER data (Sect. 4). All layers up to fc7, producing 4,096 dimensional feature vectors, are retained in \u03a61. The best pooling function was found to be averaging followed by L1 normalization of the 4,096 dimensional features. The last layer \u03a68 is fully connected (in practice, this layer is a linear predictor). CNNs are trained using hinge loss, which generally performs better than softmax for binary classi\ufb01cation. Results. Table 4 reports the performance of different model variants on FaceValue. Similarly to Table 3, pre-training on VGG-Face+FER is preferable than pre-training on VGG-Face \fALBANIE, VEDALDI: LEARNING GRIMACES BY WATCHING TV 9 Table 5: Comparison of human vs machine performance across benchmarks Dataset Metric Human Human Committee Machine FER (public test) Accuracy 0.57 0.66 0.72 SFEW 2.0 (val) Accuracy 0.53 0.63 0.56 [37] FaceValue (val) ROC-AUC 0.71 0.78 0.79 only. This is required for the voting classi\ufb01er, but bene\ufb01cial also when \ufb01ne-tuning a pretrained pooling architecture, which handily outperforms voting. VGG-M is in this case better than VGG-VD (+5.3%), probably due to the fact that VGG-VD is over\ufb01tted to the pretraining data. Finally, temporal average pooling is always better than max pooling. Learning nameable facial expressions from events in videos. So far, we have shown that it is possible to predict events in videos by looking at facial expressions. 
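The pooling architecture described above can be sketched as follows. This is a minimal PyTorch illustration of the average-pool-then-classify design rather than the MatConvNet implementation used in the paper; frame_encoder stands in for the pretrained network truncated at fc7 (4096-d features), and the hinge loss follows the binary formulation described above.

import torch
import torch.nn as nn

class TrackPoolingNet(nn.Module):
    def __init__(self, frame_encoder, feat_dim=4096):
        super().__init__()
        self.phi1 = frame_encoder          # pretrained network truncated at fc7, applied per frame
        self.phi2 = nn.Linear(feat_dim, 1) # linear predictor on the pooled track feature

    def forward(self, frames):             # frames: (T, C, H, W), one face track
        feats = self.phi1(frames)          # assumed to return (T, feat_dim) features
        pooled = feats.mean(dim=0)         # temporal average pooling
        pooled = pooled / pooled.abs().sum().clamp(min=1e-8)   # L1 normalisation
        return self.phi2(pooled)           # scalar score for the binary event label

def hinge_loss(score, y):
    # binary hinge loss with y in {+1, -1}, used here in place of softmax
    return torch.clamp(1.0 - y * score, min=0.0).mean()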
Here we consider the other direction and ask whether nameable facial expressions can be learned by looking at people in TV programs reacting to events. To answer this question we applied the VGG-M pooling architecture to the FER images after pre-trained it on VGG-Faces (a veri\ufb01cation task) and \ufb01ne-tuning it on FaceValue. In this manner, this CNN is never trained with manually-labelled emotions. Fig. 3 shows the distribution of FER nameable expressions for faces associated to \u201cgood\u201d and \u201cbad\u201d FaceValue events by this model. There is a marked difference in the resulting distributions, with a signi\ufb01cant peak for happiness for predicted \u201cgood\u201d events and surprise and negative emotions for \u201cbad\u201d ones. This suggests that it is indeed possible to learn nameable expressions from their weak association to events in video without explicit and dedicated supervision as commonly done. Comparison with human baselines. Table 5 compares the performance of humans and of the best models on the three datasets FER, SFEW 2.0, and FaceValue. Remarkably, in all cases networks outperform individual humans by a substantial margin (e.g. +15% on FER and +8% on FaceValue). While this result is perhaps surprising, we believe the reason is that, in such ambiguous tasks, machines learn to respond as humans would on average whereas the performance of individual annotators, as re\ufb02ected in Table 5, can be low due to poor inter-annotator agreement. To verify this hypothesis, we combined multiple human annotators in a committee and found that this gap either closes or disappears. In particular, on FaceValue the performance of the committee is just a hair\u2019s breadth lower than that of the machine (78% vs 79%). 6 Summary In this paper we have investigated the problem of relating facial expressions with objectivelymeasurable events that affect humans in videos. We have shown that gameshows are a particularly useful data source for this type of analysis due to their simple structure, easily detectable events and emotional impact on the participants and have constructed a corresponding dataset FaceValue. In order to analyze emotions in FaceValue, we have trained state-of-the-art neural networks for facial expression recognition in existing datasets showing that, if pre-trained on face veri\ufb01cation, single models are competitive or better than the multi-network committees commonly used in the literature. Then, we have shown that such networks can successfully understand the relationship between certain events in TV programs and facial expressions \f10 ALBANIE, VEDALDI: LEARNING GRIMACES BY WATCHING TV better than individual human annotators, and as well as a committee of several human annotators. We have also shown that networks trained to predict such events from facial expressions correlate very well to nameable expressions in standard datasets. Acknowledgements The authors gratefully acknowledge the support of the ESPRC EP/L015897/1 (AIMS CDT) and the ERC 677195-IDIU. We also wish to thank Zhiding Yu for kindly sharing his preprocessed SFEW dataset."
+ }
+ ],
+ "Ameya Prabhu": [
+ {
+ "url": "http://arxiv.org/abs/2402.19472v1",
+ "title": "Lifelong Benchmarks: Efficient Model Evaluation in an Era of Rapid Progress",
+ "abstract": "Standardized benchmarks drive progress in machine learning. However, with\nrepeated testing, the risk of overfitting grows as algorithms over-exploit\nbenchmark idiosyncrasies. In our work, we seek to mitigate this challenge by\ncompiling ever-expanding large-scale benchmarks called Lifelong Benchmarks. As\nexemplars of our approach, we create Lifelong-CIFAR10 and Lifelong-ImageNet,\ncontaining (for now) 1.69M and 1.98M test samples, respectively. While reducing\noverfitting, lifelong benchmarks introduce a key challenge: the high cost of\nevaluating a growing number of models across an ever-expanding sample set. To\naddress this challenge, we also introduce an efficient evaluation framework:\nSort \\& Search (S&S), which reuses previously evaluated models by leveraging\ndynamic programming algorithms to selectively rank and sub-select test samples,\nenabling cost-effective lifelong benchmarking. Extensive empirical evaluations\nacross 31,000 models demonstrate that S&S achieves highly-efficient approximate\naccuracy measurement, reducing compute cost from 180 GPU days to 5 GPU hours\n(1000x reduction) on a single A100 GPU, with low approximation error. As such,\nlifelong benchmarks offer a robust, practical solution to the \"benchmark\nexhaustion\" problem.",
+ "authors": "Ameya Prabhu, Vishaal Udandarao, Philip Torr, Matthias Bethge, Adel Bibi, Samuel Albanie",
+ "published": "2024-02-29",
+ "updated": "2024-02-29",
+ "primary_cat": "cs.LG",
+ "cats": [
+ "cs.LG",
+ "cs.CV"
+ ],
+ "main_content": "Introduction We are in the midst of a benchmark revolution. Datasets like ImageNet (Deng et al., 2009), MS-COCO (Lin et al., 2014), GLUE (Wang et al., 2018) and BigBench (Srivastava et al., 2022) have been instrumental in advancing machine learning research by providing standardised scenarios for comparing 1University of Oxford 2T\u00a8 ubingen AI Center, University of T\u00a8 ubingen 3University of Cambridge. Correspondence to: {ameya@prabhu.be vishaal.udandarao@bethgelab.org} \u2217equal contribution \u2020equal advising New incoming samples Traditional Approach: Static Benchmarking Static Dataset 1 Static Dataset 2 Static Dataset 3 Proposed Approach: Lifelong Benchmarking Model Score 1 0.94 2 0.87 3 0.58 Model Score 1 0.86 2 0.75 3 0.89 Model Score 1 0.71 2 0.95 3 0.83 Lifelong Pool time update pool Model Score 1 0.63 2 0.49 3 0.82 \u2026 \u2026 N 0.76 model evaluation sample ranking Model 1 Model 2 Model 3 Models to evaluate Model N \u2026 Model 1 Model 2 Model 3 Models to evaluate Model N \u2026 time Figure 1. Static vs Lifelong Benchmarking. (Top) Static benchmarks incentivise machine learning practitioners to overfit models to specific datasets, weakening their ability to assess generalisation. (Bottom) We introduce Lifelong Benchmarks as an alternative paradigm\u2014ever-expanding pools of test samples that resist overfitting while retaining computational tractability. models. However, over time, these static benchmarks have been exposed to many evaluations, each leaking cues about their test data and weakening their statistical power as tools of generalisation measurement (Ott et al., 2022; Mazumder et al., 2023; Kiela et al., 2021). Fresh approaches must compete with a body of methods that have been highly tuned to such benchmarks, incentivising further overfitting if they are to compete (Bender et al., 2021; Beyer et al., 2021). This raises a critical question: What function should such benchmarks serve? Towards Lifelong Benchmarks. The primary goal of the vision benchmarks considered in this work is to assess model performance on some task using data that is representative of the visual world (Torralba and Efros, 2011). For instance, the CIFAR10 (Krizhevsky et al., 2009) benchmark tested whether classifiers can distinguish between 10 categories, such as dogs and cats. Subsequent versions like CIFAR10.1 (Lu et al., 2020), CIFAR10.2 (Lu et al., 2020), 1 arXiv:2402.19472v1 [cs.LG] 29 Feb 2024 \fLifelong Benchmarks: Efficient Model Evaluation in an Era of Rapid Progress CINIC10 (Darlow et al., 2018), and CIFAR10-W (Sun et al., 2023) introduced more challenging and diverse samples to evaluate the same objective of classifying 10 categories. Over time, however, thanks to repeated evaluation exposure from competing approaches, each individual benchmark diminishes in representativeness as overfitting occurs at both the individual method and research community level (Fang et al., 2023; Vishniakov et al., 2023). In this work, we aim to tackle this challenge by introducing two Lifelong Benchmarks: Lifelong-CIFAR10 and Lifelong-ImageNet. These are ever-expanding pools of test samples that aim to restore the representativeness of benchmarks to the visual world (see Fig. 1) by preventing models from overfitting specifically to the biases of any subset benchmark. Evaluation Cost. Our Lifelong-CIFAR10 and LifelongImageNet benchmarks contain 1.69 million and 1.98 million test samples, respectively. 
A challenge we face with this expanding dataset is the increasing cost of evaluation\u2014it takes roughly 140 and 40 GPU days to evaluate our current model set (containing 31,000 and 167 models respectively, see Section 5.1), on Lifelong-CIFAR10 and LifelongImageNet respectively. Similar issues occur in large-scale foundation model (Bommasani et al., 2021) evaluation. For instance, evaluating a single large language model (LLM) on the MMLU benchmark (Hendrycks et al., 2021b) (standard benchmark for evaluating LLMs) takes 24 hours on a consumer-grade GPU (Ilyas Moutawwakil, 2023). As models grow in complexity, lifelong testing will inevitably lead to a surge in evaluation costs when benchmarking a large set of increasingly expensive models against an ever-growing collection of test samples (Sardana and Frankle, 2023; Dehghani et al., 2021). Can we reduce this evaluation cost while minimising the prediction error? Efficient Model Evaluation. We develop algorithms for efficient evaluation in lifelong benchmarks by drawing inspiration from the field of computerized adaptive testing (CAT) (Van der Linden and Glas, 2000), which can generate exams like the GRE and SAT from an ever-expanding pool of questions. Unlike traditional tests where all questions must be answered, CAT adaptively sub-samples questions based on examinee responses. This approach efficiently gauges proficiency with far fewer questions, while maintaining assessment accuracy. At the same time, as test takers continue taking the tests, the question pool gets more accurately calibrated, reinforcing this \u201clifelong testing pool\u201d. Similarly, in our lifelong benchmarking framework, we aim to evaluate the classification ability of new models without testing them on all samples, instead selecting a subset of samples to evaluate models. We propose a method named Sort & Search (S&S), which reuses past model evaluations on a sample set through dynamic programming to enable efficient evaluation of new, incoming models. S&S operates by first ranking test samples by their difficulty, done efficiently by leveraging data from previous tests. It then uses these updated rankings to evaluate new models, streamlining the benchmarking process. This strategy enables efficient lifelong benchmarking, reducing the cost dramatically from a collective of 180 GPU days to 5 GPU hours on a single A100 GPU. This corresponds to a 1000\u00d7 reduction in inference costs compared to static evaluation on all samples. To summarize our key contributions in this work: 1. We introduce and formalise lifelong benchmarking as a novel framework for robust, efficient model evaluation. 2. We curate two lifelong benchmarks: Lifelong-CIFAR10 and Lifelong-ImageNet, consisting of 1.69M and 1.98M samples respectively 3. We propose a novel framework, Sort & Search for efficient model evaluation, reducing over 99.9% of computation costs on our lifelong benchmarks while accurately predicting sample-wise performance. 2. Lifelong Benchmarks: Curation Considerations. We aim to establish lifelong benchmarking as a standard evaluation protocol in computer vision. To demonstrate this, we considered two popular datasets as our basis: CIFAR10 (Krizhevsky et al., 2009) and ImageNet (Deng et al., 2009). 
We chose them due to (1) their widespread adoption in prior art, (2) the diverse set of models trained on them, and (3) the presence of numerous dataset variants with the same set of labels, encompassing distribution shifts (Barbu et al., 2019), temporal variations (Shirali and Hardt, 2023), and adversarial samples (Hendrycks et al., 2021c). Note that while our current lifelong benchmarks are based on two datasets, our framework can generally be applied to any broader range of datasets. We describe the precise construction of our datasets below. See Table 1 for key statistics and a detailed breakdown. Lifelong-CIFAR10. We combine 22 domains of different CIFAR10-like datasets comprising samples applied with various synthetic distribution shifts, synthetic samples generated by diffusion models, and samples queried from different search engines using different colors and domains. We deduplicate our dataset to ensure uniqueness and downsample all images to the standard CIFAR10 resolution of 32 \u00d7 32. Our final dataset consists of 1.69 million samples. Lifelong-ImageNet. We source our test samples from ImageNet and its corresponding variants. Similar to LifelongCIFAR10, our benchmark is designed for increased sample diversity (43 unique domains) while operating on the same ImageNet class set. We include samples sourced from different web-engines and generated using diffusion models. Our final Lifelong-ImageNet contains 1.98 million samples. 2 \fLifelong Benchmarks: Efficient Model Evaluation in an Era of Rapid Progress Table 1. Overview of our Lifelong Benchmarks. We list the constituent source datasets (deduplicated) and their statistics for constructing our lifelong benchmarks here. Our benchmarks encompass a wide-range of natural and synthetic domains, sources and distribution shifts, making for a comprehensive lifelong testbed. Dataset #Test Samples #Domains #Unique Sources Synthetic/Natural Corrupted/Clean Lifelong-CIFAR10 1,697,682 22 9 Both Both CIFAR10.1 (Recht et al., 2018) 2,000 1 1 Natural Clean CIFAR10 (Krizhevsky et al., 2009) 10,000 1 1 Natural Clean CIFAR10.2 (Lu et al., 2020) 12,000 1 1 Natural Clean CINIC10 (Darlow et al., 2018) 210,000 1 1 Natural Clean CIFAR10-W (Sun et al., 2023) 513,682 3 8 Both Clean CIFAR10-C (Hendrycks et al., 2021b) 950,000 19 1 Natural Corrupted Lifelong-ImageNet 1,986,310 43 9 Both Both ImageNet-A (Hendrycks et al., 2021c) 7,500 1 3 Natural Clean ObjectNet (Barbu et al., 2019) 18,514 1 1 Natural Clean OpenImagesNet (Kuznetsova et al., 2020) 23,104 1 1 Natural Clean ImageNet-V2 (Recht et al., 2019) 30,000 1 1 Natural Clean ImageNet-R (Hendrycks et al., 2021a) 30,000 13 1 Natural Clean ImageNet (Deng et al., 2009) 50,000 1 1 Natural Clean Greyscale-ImageNet (Taori et al., 2020) 50,000 1 1 Natural Clean StylizedImageNet (Geirhos et al., 2018) 50,000 1 1 Synthetic Corrupted ImageNet-Sketch (Wang et al., 2019b) 50,889 1 1 Natural Clean SDNet (Bansal and Grover, 2023) 98,706 19 1 Synthetic Clean LaionNet (Shirali and Hardt, 2023) 677,597 1 1 Natural Clean ImageNet-C (Hendrycks and Dietterich, 2019) 900,000 19 1 Natural Corrupted 3. Lifelong Benchmarks: Formulation, Challenges and Approach In this section, we formalise the objective of lifelong benchmarking and describe the key challenges it raises. Formulation. Let D=((x1, y1), . . . , (xn, yn)) denote an ordered collection of labelled examples, sampled from the underlying task distribution of interest P(X\u00d7Y). 
Here, xi\u2208X denotes the ith data sample and yi\u2208Y denotes the corresponding label. Let M=(f1, . . . , fm) denote an ordered collection of models where each model, f:X\u2192Y, maps data samples to predicted labels. A lifelong benchmark, B=(D, M, insertD, insertM, metrics), augments D and M with three operations: 1 insertD((x\u2032, y\u2032)) inserts a new labelled example (x\u2032, y\u2032) into D. 2 insertM(f \u2032) inserts a new model f \u2032 into M. 3 metrics() returns a |M|-dimensional vector estimating each model\u2019s performance on the task of interest. Key challenges. To resist overfitting and provide utility to the research community, we want both the model collection, M, and sample collection, D, to expand over time. To enable this, we can instantiate a \u201cnaive\u201d implementation of the metrics() operation ( 3 ) by simply reevaluating every model on every sample after each call to insertM ( 2 ) or insertD ( 1 ). However, such a strategy exhibits O(|D||M|) runtime complexity for each call to metrics(), rendering benchmark evaluation infeasible as D and B grow, hence preventing the practical adoption of the lifelong benchmarking paradigm. The central question considered by this work is therefore the following: Given a lifelong benchmark B, how can we efficiently compute metrics() each time we insert new models into M ( 2 ) or new labelled samples into D ( 1 )? Approach. Our approach is underpinned by two key ideas. First, we augment B with an instance-level prediction cache to amortise inference costs across evaluations, effectively exchanging (costly) computation for (more affordable) storage (Prabhu et al., 2023)1. Second, we propose strategies to efficiently populate the cache with new predictions through judicious sampling and inference. The cache is instantiated as a matrix A \u2208{0, 1}|M|\u00d7|D| where A(i, j) \u225cI[fi(xj) = yj]. Given such a cache, metrics() can be computed trivially by row-wise averaging A. Our methodology is illustrated in Fig. 2. Inserting \u2206m models ( 2 insertM). Suppose that \u2206m new models have been developed after the initial creation of the benchmark. We wish to insert these new models into M and update the cache accordingly. A naive approach would be to do so by evaluating the \u2206m models on all |D| samples. Given the high cost of this approach when |D| grows large, we instead propose to select a small subset of n\u2032 \u226a|D| samples for evaluation. These are chosen with the goal of enabling accurate prediction of the remaining cache entries. Inserting \u2206n samples ( 1 insertD). Our second challenge arises when we obtain new \u2206n labelled data examples. 1Note that the benefits of our strategy depend on task and model characteristics: the ratio of computation to prediction vector size and the relative costs of compute and storage. Our approach is well-suited to modern deep learning models that employ significant computation for each prediction and produce compact inference artefacts. 3 \fLifelong Benchmarks: Efficient Model Evaluation in an Era of Rapid Progress . . . . . . f1 f2 fm x1 x2 xn Models Samples m \u2715 n 1 0 0 0 0 0 1 1 1 . . . . . . . . . . . . . . . . . . . . . Initial Accuracy Predictions Efficient Model Evaluation 1 0 0 0 0 0 1 1 1 . . . . . . . . . . . . . . . . . . . . . fm+1 Predictions? New Model {x1 , x2 ,\u2026, xn} 1 0 0 0 0 0 1 1 1 . . . . . . . . . . . . . . . . . . . . . Predictions? 
Sample Pool select subset eval on subset xn+1 New Sample Existing Models eval on subset Efficient Insertion {f1 , f2 ,\u2026, fm} select subset of size n' of size m' Figure 2. Proposed Lifelong Benchmarking setup. Assume access to an initial pool of n samples and m models that have been evaluated on these samples (left). Our goal is to efficiently evaluate a new model ( 2 insertM) at sub-linear cost (right top) and efficiently insert a new sample into the lifelong benchmark ( 1 insertD) by determining sample difficulty at sub-linear cost (right bottom). We seek to insert these samples into D and update the cache accordingly. A naive approach entails evaluating all |M| models on the \u2206n new examples. As above, to substantially reduce cost, we select a small subset of m\u2032 \u226a|M| models with the objective of accurately predicting the remaining cache entries corresponding to the new \u2206n samples. Related Work. While the lifelong benchmarking setup introduced has received limited attention, the sub-challenge of efficiently evaluating models has received more focus. Concretely, this maps to the problem of insertM ( 2 ) within our framework. We comprehensively draw connections across different research directions in the Appendix and briefly present the most similar works here. Model Spider (Zhang et al., 2023) efficiently ranks models from a pre-trained model zoo. LOVM (Zohar et al., 2023) and Flash-HELM (Perlitz et al., 2023) similarly rank foundation models efficiently on unseen datasets. However, these approaches predict dataset-level metrics rather than instancelevel metrics, and thereby cannot be used in our setup to grow the prediction cache efficiently. Concurrent to our work, Anchor Point Sampling (Vivek et al., 2023) and IRT-Clustering (Polo et al., 2024) both propose efficient instance-level evaluations by creating smaller core-sets from test data. They introduce principled methods based on clustering and item response theory (Baker, 2001) to obtain sample-wise accuracy predictions. However, their methods require memory and time complexity of O(|D|2) with the number of data samples, preventing comparisons on datasets bigger than a few thousand samples. This is infeasible, requiring well over 10TB of RAM, for our lifelong benchmarks having over 1.5 million test samples each. In contrast, our novel Sort & Search approach, requires memory and time complexity of O(|D| log |D|) with the number of samples, and can scale up to billion-sized test sets (see Section 5 for empirical results). 4. Efficient Benchmarking with Sort & Search Inspired by the computerized adaptive testing (Van der Linden and Glas, 2000) paradigm, in this section, we propose an efficient evaluation framework for lifelong-benchmarking: Sort & Search (S&S), consisting of two key components: (1) Ranking test samples from the entire dataset pool according to their difficulty2, i.e., Sort and (2) Sampling a subset from the pool to predict performance on, i.e., Search. We aim to solve the two key operations that we noted in Section 3 ( 1 insertD and 2 insertD) with our framework. We now describe the objective and algorithms used in S&S. 4.1. Ranking by Sort Setup. We recall that our lifelong benchmark pool consists of evaluations of |M| models on |D| samples. For ease of reference, say |M|=m and |D|=n. For our method, given each model fi, i \u2208{1, .., m}, we use the binary accuracy prediction per sample, across all n samples obtaining ai = [pi1, pi2 . . . , pin]. 
Here, pij\u2208{0, 1} represents whether the model fi classified the sample xj correctly. Thus, for m models and n evaluation samples, we construct a binary matrix A \u2208{0, 1}m\u00d7n by row-wise stacking all the accuracy predictions ai (see Fig. 2 left). Goal. Given a data matrix A, we want to obtain a ranked order (from easy to hard) for the columns of A, which represent the samples. This sorted order (Sort) can later be used for efficient prediction on new incoming models (Search). Here, the goal is to find the best global permutation 2\u201cDifficult\u201d is defined as if a sample xi is easier than a sample xj then at least equal number of models predict xi correctly as the number of models predicting xj correctly (Baldock et al., 2021). 4 \fLifelong Benchmarks: Efficient Model Evaluation in an Era of Rapid Progress def sort_by_sum(A): sum_ranking = A.sum(axis=0) order = np.flip(np.argsort(sum_ranking)) return order def two_stage_sort_by_sum(A, idx): #Step 1: Sum order = sort_by_sum(A) #Step 1: Search thresh = dp_search(A[:, order]) #Iterate over bins bins_ordered = sum_bins[order] uniq_bins = np.unique(bins_ordered) for u_bin in uniq_bins: idx = np.nonzero(bins_ordered==u_bin)[0] bin_thresh = np.nonzero(np.all([[bins_ordered >= idx.min()], [bins_ordered <= idx.max()]], axis=0))[1] , \u2192 , \u2192 At = A[thresh][:, order[idx]] #Step 2: Sum new_order = sort_by_sum(At) # Replace current ordering within new in bin order[idx] = order[idx[new_order]] return order Listing 1: Algorithms for Optimizing P given Y matrix P \u2208{0, 1}n\u00d7n, a binary matrix, such that AP permutes the columns of A so that we can rank samples from easy (all 1s across models) to hard (all 0s across all models). We say this has a minimum distance from the optimal ranked accuracy prediction matrix Y \u2208{0, 1}m\u00d7n, formally defined as: \\ be g in {aligned} &\\m athb f {P^ *}, \\m ath b f { Y} ^ * = \\ te xt { ar gmin }_{\\ m a thb f { P} ,\\m a th bf { Y}} \\ | \\ma t hbf {A} \\mathbf {P}\\mathbf {Y} \\|, \\\\ &\\textit {s.t. } ~~~ \\mathbf {P} \\in \\{0,1\\}^{n \\times n}, \\mathbf {P} \\mathbf {1}_{n} = \\mathbf {1}_{n}, \\mathbf {1}^\\top _{n} \\mathbf {P} = \\mathbf {1}_{n}, \\\\ & \\text {if } ~~~~~ \\mathbf {Y}_{ij} = 1 \\text {, then } \\mathbf {Y}_{ij'} = 1 ~~ \\forall j' \\leq j, \\\\ & \\text {if } ~~~~~ \\mathbf {Y}_{ij} = 0 \\text {, then } \\mathbf {Y}_{ij'} = 0 ~~\\forall j' \\geq j. \\\\ \\end {aligned} \\label {eq:1} (1) The constraints P1n = 1n, 1\u22a4 n P = 1n are sufficient to enforce that P is a permutation matrix. The ranked accuracy prediction matrix Y is created by a row-wise application of a thresholding operator for every row in Y separately. Intuitively, if the threshold for the ith row is k, then the ith row is of the form [1\u22a4 k , 0\u22a4 n\u2212k] where 1k is a vector of all ones of size k and 0n\u2212k is a zero vector of size n \u2212k. In every row, all samples before the row-wise threshold k are predicted to be correctly classified (easy) and those after are incorrectly classified (hard) for the model corresponding to the row. The informal explanation of the optimization problem in Equation 1 is to find an ordering of samples such that error introduced by thresholding is minimized. Given this optimization problem, we next discuss how to solve it. While the goal of this optimization problem is finding the optimal permutation P\u2217, we still need to jointly solve for P, Y here. 
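For reference, the core of Listing 1 can be written as a short, self-contained routine. This is an independently written sketch that follows the listing's logic (rank the columns of A by how many models answer each sample correctly), not the released script.

import numpy as np

def sort_by_sum(A):
    # A: binary accuracy matrix of shape (m models, n samples).
    per_sample_correct = A.sum(axis=0)                     # how many models answer each sample correctly
    order = np.flip(np.argsort(per_sample_correct, kind="stable"))
    return order                                           # column order: easy samples first, hard last

# Usage: A_sorted = A[:, sort_by_sum(A)] applies the permutation P* to the columns of A.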
We will find a solution by alternating between optimizing P keeping Y constant and optimizing Y keeping P constant, with the goal of finding the best solution P\u2217, in an EM-style algorithm. We now present algorithms for optimizing the two subproblems in detail. def uniform_sampling(query, num_p): # idx -> num_p uniformly sampled points idx = np.arange(0, len(query), len(query)//num_p)[1:] return idx def dp_search(query): # query is 1 x k (from a row of PA) # (k can be assigned := n, n', m, m') query[query==0] = -1 cumsum = np.cumsum(query) idx = np.argmax(cumsum) return idx/len(query) # threshold as % of length, transfers n' -> n size Listing 2: Algorithms for Optimizing Y given P 4.1.1. OPTIMIZING P GIVEN Y We see from Eq. (1) that P is binary. This makes finding the optimal P\u2217an NP-Hard problem (Yuan and Ghanem, 2016). Hence, we discuss how to simplify the sub-problem. We first present an algorithm to solve the case where we can order samples in a strictly decreasing order of difficulty, measured by how many models classified it correctly ( 1 ). However, samples can never be arranged as strictly decreasing in practice. Subsequently, we present one alternative which computes soft confidences, which allows the strictly decreasing constraint to hold ( 2 ). A third alternative algorithm we explore removes the introduced constraint of a strictly decreasing order ( 3 ). 1 Sorting by Sum. We discuss how to order samples if they follow a strictly decreasing order of difficulty. Formally, considering elements column-wise, the difficulty of each sample (a column) is inversely proportional to the number of 1s in that column i.e., more 1s in a column indicates more models classify this sample correctly. We can order samples in decreasing order of difficulty by a simple algorithm detailed in Listing 1 (sort by sum)\u2014intuitively, this algorithm sorts samples from easy (more 1s) to hard (less 1s) by sorting the sum vector across rows per column. We call this method Sorting by Sum, which returns an ordering over samples (which can trivially be converted to the permutation matrix P\u2217). However, the assumption of strictly decreasing order of difficulty is unrealistic as the number of samples is usually far larger than the number of models. Hence, it is guaranteed that many samples will have the same level of difficulty by the pigeonhole principle (Ajtai, 1994). 2 Sorting by Confidence Sum. One method to have a strictly decreasing order is to relax the constraint on the samples of ai = [pi1, pi2 . . . , pin] from pij \u2208{0, 1} to pij \u2208[0, 1], and use confidence of the ground truth class. This modification allows all examples to be unique, allowing Sorting by Sum ( 1 ) to be the best solution, and potentially enable more sample efficient ranking. 3 Recursive Sorting by Sum. Another alternative is relaxing the equal difficulty assumption in Algorithm 1 . 5 \fLifelong Benchmarks: Efficient Model Evaluation in an Era of Rapid Progress A natural question is: How does one order samples which have equal number of models predicting them correctly, i.e., two columns of A with equal number of 1s? We propose an iterative solution: at each step, order samples of equal difficulty by alternatively optimizing P keeping Y constant by applying Algorithm 1 and optimizing Y keeping P constant by DP-Search algorithm (described next). The recursive aspect is dividing the vector into subsets, where each subset consists of samples which have the same sum. 
Within each subset, we reorder points by only considering the thresholds obtained when optimizing Y given P which fall within this region and recursively applying the alternating minimization. We provide the algorithm for two iterations for an illustration in Listing 1 (two stage sort by sum). Note that this strictly improves the solution at each recursion depth. We additionally note that ties are broken by preferring the model which minimizes error the most. 4.1.2. OPTIMIZING Y GIVEN A P Here, we discuss how to optimize the prediction matrix Y. We re-iterate that we want to find a row-wise threshold k minimizing the error with the matrix AP for a given permutation P. To optimize Y given a P, we propose an algorithm based on dynamic programming, called DPSearch, which operates row-wise on each row yi, detailed in Listing 2 (dp search). Given a row, it computes the difference between number of 1s and number of 0s for each index based on a prefix sum structure. Due to the prefix sum structure, for an input of size n, the dynamic programming approach reduces the time complexity from quadratic O(n2) to linear O(n). The optimal threshold k is the maximum value in this vector. The vector yi is simply [1\u22a4 k , 0\u22a4 n\u2212k] where 1k is a vector of all ones of size k and 0n\u2212k is a zero vector of size n \u2212k. DP-Search is guaranteed to return the globally optimal solution, defined as: Theorem 4.1. Optimality of Y given P. For any given ai \u2208{0, 1}1\u00d7n and P, the DP-Search algorithm returns an ordered prediction vector yi \u2208{0, 1}1\u00d7n which is a global minimum of \u2225aiP \u2212yi\u22251. The DP-Search algorithm when applied row-wise independently returns the optimal Y given P. Now, having optimized the ranking problem, we have a permuation P\u2217from the data matrix indicating the order of the samples based on difficulty. In the next section we use P\u2217towards either evaluating new models or adding new samples efficiently. 4.2. Efficient Selection by Search Given that we have found the best P\u2217in the sorting phase, we assume this ordering of difficulty of samples generalizes to new incoming models \u2206m. Since the problem is separable in each model, it suffices to predict sample-wise accuracies for each new model fm+1 on each of the n samples. We will later show that the same pipeline and algorithms work well for the problem of new incoming samples \u2206n. Goal. Given the permutation matrix P\u2217and \u2206m new models, we want to predict the accuracy across all n samples per model, i.e., predict the accuracy matrix Y\u2206m \u2208 {0, 1}\u2206m\u00d7n. The primary challenge is to do this by only evaluating on as few samples n\u2032 \u226an selected per model. We first restate that the constraints on Y in Eq. (1) imply a thresholding operator of index from {1, . . . , n} for every row in Y\u2206m, i.e., every model in \u2206m independently. Since the problem is separable per row, we consider the problem of optimizing the first new model ym+1 \u2208{0, 1}1\u00d7n independently here. Similarly, we denote the corresponding ground truth vector by am+1, created by evaluating the new model on all n samples, which will be used for evaluating predictions ym+1. In this section, we answer the two questions. (i) How to find the best-ranked accuracy prediction vector ym+1? (ii) How good is the ranked accuracy prediction vector ym+1? (i) How to get the optimal ym+1? Our goal here is to generate the sample-wise prediction vector ym+1 \u2208{0, 1}1\u00d7n. 
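The DP-Search routine of Listing 2 admits a compact prefix-sum implementation. The sketch below is written from the description above rather than copied from the released code, and returns the threshold as a fraction of the row length so that it can later be rescaled from n\u2032 to n points.

import numpy as np

def dp_search(row):
    # row: one model's binary predictions (1 = correct, 0 = incorrect), already
    # permuted into the easy-to-hard sample order given by P*.
    row = np.asarray(row)
    signed = np.where(row == 0, -1, 1)      # a 1 contributes +1, a 0 contributes -1
    prefix = np.cumsum(signed)              # prefix sums give #1s - #0s up to each index, in O(n)
    if prefix.max() <= 0:                   # predicting everything incorrect is already optimal
        return 0.0
    k = int(np.argmax(prefix)) + 1          # number of leading samples predicted correct
    return k / len(row)                     # threshold as a fraction of the row length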
We divide it into two subtasks: selection and optimization. The selection task is to select the best n\u2032 observations to sample. The optimization task is, given the n\u2032 observations a\u2032 m+1 \u2208{0, 1}1\u00d7n\u2032 how to generate the prediction vector ym+1 \u2208{0, 1}1\u00d7n. Subtask 1: How to Select Samples? The selection task involves finding the best n\u2032 observations forming a\u2032. We note that any ranked solution we obtain using this array needs to be interpolated from n\u2032 points to n points, and use this intuition to sample n\u2032 points. A simple solution is to sample points such that any threshold found minimizes the difference between the actual threshold and a threshold predicted by our set of n\u2032, i.e., sample n\u2032 points uniformly, providing the algorithm in Listing 2 (uniform sampling). We also compare empirically with a pure random sampling approach in Section 5. Subtask 2: Optimizing ym+1. Here, we discuss that given the n\u2032 observations a\u2032 m+1 \u2208{0, 1}1\u00d7n\u2032 how to generate the prediction vector ym+1 \u2208{0, 1}1\u00d7n. We use the threshold given by the DP-Search (see Listing 2) and obtain the threshold, given in terms of fraction of samples in |a\u2032 m+1|. We extrapolate this threshold from n\u2032 to n points, to obtain the threshold for the prediction vector ym+1. The predicted vector ym+1 is simply [1\u22a4 k , 0\u22a4 n\u2212k] where 1k is a vector of all ones of size k and 0n\u2212k is a zero vector of size n \u2212k. Having studied how to generate the predictions, we next describe how to evaluate them. (ii) How good are my predictions? Given a prediction 6 \fLifelong Benchmarks: Efficient Model Evaluation in an Era of Rapid Progress 101 102 103 104 Sampling Budget n' 0.54 0.56 0.58 0.60 0.62 Normalized Agreement Normalized Agreement Mean Absolute Error 0.16 0.17 0.18 0.19 0.20 0.21 Mean Abs. Error 106 105 104 103 102 101 Compute Saved (a) Lifelong-CIFAR10 101 102 103 104 Sampling Budget n' 0.64 0.66 0.68 0.70 0.72 Normalized Agreement Normalized Agreement Mean Absolute Error 0.13 0.14 0.15 0.16 0.17 Mean Abs. Error 106 105 104 103 102 101 Compute Saved (b) Lifelong-ImageNet Eagg = 15.9 * (n')-0.18 Power-law fit: (c) Lifelong-CIFAR10 Eagg = 15.6 * (n')-0.23 Power-law fit: (d) Lifelong-ImageNet Figure 3. Main Results. Figure (a,b) We achieve upto 99% cost-savings for doing model evaluation on both Lifelong-ImageNet and Lifelong-CIFAR10 showcasing the efficiency of our Sort&Search method. In Figure (c,d) Power-law fits on the observed absolute-accuracy difference errors (Eagg) and sampling budget (n\u2032) relationship reveal large exponents suggesting very quick convergence. vector ym+1, we can compute the Mean-Absolute Error (MAE), given by E(am+1, ym+1). It is computed using the Hamming distance to the ground truth vector am+1 \u2208 {0, 1}1\u00d7n, formally defined as: E(\\ma thbf { a}_{m+1 } , \\mathbf {y}_{m+1}) = \\|\\mathbf {a}_{m+1}\\mathbf {P}^* \\mathbf {y}_{m+1} \\|_1 \\label {eq:2} (2) However, in this section, we want to normalize by the agreement between predictions and ground truth explained by chance alone (refer to Geirhos et al. (2020) for analysis in-depth). We illustrate this with an example where models have 90% accuracy on a binary task with equal number of samples per class. In that case, given two random predictions, 82%3 of the samples will agree by chance alone. 
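Putting the selection and optimization subtasks together, evaluating a new model under a budget of n\u2032 samples might look like the sketch below. It is an illustration of the procedure just described, not the authors' implementation; model_correct is a hypothetical callable that evaluates the new model on one sample index.

import numpy as np

def predict_new_model(order, n, n_prime, model_correct):
    # order: easy-to-hard sample permutation from the Sort step (length n).
    # model_correct(j) -> 1 if the new model classifies sample j correctly, else 0.
    picks = np.linspace(0, n - 1, num=n_prime, dtype=int)        # uniformly spaced positions
    a_sub = np.array([model_correct(order[p]) for p in picks])   # the only n' real evaluations
    signed = np.where(a_sub == 0, -1, 1)
    prefix = np.cumsum(signed)
    k_sub = int(np.argmax(prefix)) + 1 if prefix.max() > 0 else 0
    k = int(round(k_sub / len(a_sub) * n))                       # extrapolate threshold from n' to n
    y = np.zeros(n, dtype=int)
    y[:k] = 1                      # predicted correct on the k easiest samples (in sorted order)
    return y

# Given the full ground-truth vector a (also in sorted order), the mean-absolute
# error of Eq. (2) is simply np.abs(a - y).mean().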
The general metric for agreement between any ground truth vector ai and a prediction vector yi by chance is given by: \\begin {al i gned} E _{\\t e x t { rand} } ( \\ m a t hbf { a }_{i},& \\mathbf {y}_{i}) = \\frac {\\|\\mathbf {a}_{i}\\|_1 }{n} \\frac {\\|\\ \\mathbf {y}_{i} \\|_1}{n} \\\\ &+ \\left (1\\frac {\\|\\mathbf {a}_{i}\\|_1 }{n}\\right )\\left (1\\frac {\\|\\ \\mathbf {y}_{i} \\|_1}{n}\\right ). \\end {aligned} \\label {eq:3} (3) The normalized agreement is defined by the Cohen\u2019s Kappa (Cohen, 1960) given as: \\ka ppa (\\ m athbf {a} _ i, \\mathb f { y } _i) = \\fr ac {(1 E(\\mathbf {a}_i, \\mathbf {y}_i)) E_{\\text {rand}}(\\mathbf {a}_i, \\mathbf {y}_i)}{1 E_{\\text {rand}}(\\mathbf {a}_i, \\mathbf {y}_i)}. \\label {eq:4} (4) where 0 \u2264\u03ba(ai, yi) \u22641. The intuition for this normalization is described in detail in Geirhos et al. (2020). We measure both mean-absolute error (given in Eq. (2)) and our defined normalized agreement (given in Eq. (4)) as sample-wise metrics in this work. Note that smaller mean-average error E is better but higher normalized agreement \u03ba is better. So far, we have only discussed the efficient evaluation of \u2206m new models ( 2 insertM). How do we approach the problem when we want to efficiently extend the benchmark, adding \u2206n new samples ( 1 insertD)? 4.3. Efficient Insertion of New Samples (insertD) To add new samples into our lifelong benchmark efficiently, we have to estimate their \u201cdifficulty\u201d with respect to the 30.82 = 0.9 \u00d7 0.9 + 0.1 \u00d7 0.1 other samples in the benchmark. To efficiently determine difficulty by only evaluating m\u2032 \u226am models, a ranking over models is required to enable optimally sub-sampling a subset of m\u2032 models. This problem is quite similar in structure to the previously discussed addition of new models, where we had to evaluate using a subset of n\u2032 \u226an samples. How do we connect the two problems? We recast the same optimization objectives as described in Eq. (1), but replace A with A\u22a4and Y with Y\u22a4. In this case, Eq. (1) would have A\u22a4P, which would sort models, instead of samples, based on their aggregate sum over samples (i.e., accuracy) optimized using Algorithm 1 to obtain P\u2217, ordering the models from classifying least samples correctly to most samples correctly. Here, Algorithm 1 is sufficient, without needing to solve the joint optimization ( 3 ) because accuracies (sum across rows) are unique as the number of samples is typically much larger than the number of models. In case of new incoming samples \u2206n, we similarly would treat every sample independently and optimize the predicted array y\u22a4 n+1 using Efficient Selection by Search (Section 4.2). 5. Experiments To demonstrate our framework empirically, we showcase experiments on our two tasks: 1 efficient estimation of new sample difficulties (insertD) and 2 efficient performance evaluation of new models (insertM). We then provide a comprehensive analysis of various design choices within our Sort & Search framework. 5.1. Experimental Details Model Space. For Lifelong-CIFAR10, we use 31, 250 CIFAR-10 pre-trained models from the NATS-BenchTopology-search space (Dong et al., 2021). For LifelongImageNet, we use 167 ImageNet-1K and ImageNet-21K pre-trained models, sourced primarily from timm (Wightman, 2019) and imagenet-testbed (Taori et al., 2020). 7 \fLifelong Benchmarks: Efficient Model Evaluation in an Era of Rapid Progress Lifelong-ImageNet Lifelong-CIFAR10 Figure 4. 
Estimated v/s Ground-Truth accuracies. For different sampling budgets (n\u2032=64\u22121024), our estimated accuracies for 117 models (Lifelong-ImageNet) and 25, 250 models (Lifelong-CIFAR10) are surprisingly close to ground-truth accuracies (\u03c1=0.86\u22120.97). Sample Addition Split ( 1 insertD). To study efficient estimation of new sample difficulties on Lifelong-CIFAR10, we hold-out CIFAR-10W (Sun et al., 2023) samples for evaluation (\u223c500, 000 samples) and use the rest \u223c1.2 million samples for sorting. We do not perform experiments on this problem for Lifelong-Imagenet\u2014since the number of models is quite small (167 in total), directly evaluating all models is relatively efficient, as opposed to the more challenging Lifelong-CIFAR10 scenario where evaluation on 31, 250 models is expensive and it is practically possible to reduce the number of models evaluated per new sample. Model Evaluation Split ( 2 insertM). To study efficient evaluation of new models, we split the model set for the Lifelong-CIFAR10 benchmark into a randomly selected subset of 6, 000 models for ordering the samples (i.e., Sort) and evaluate metrics on the remaining 25, 250 models (i.e., Search). For Lifelong-Imagenet, we use 50 randomly selected models for ordering the samples (i.e., Sort) and evaluate on 117 models (i.e., Search). Metrics ( 3 metrics()). We measure errors between estimated predictions for each new model ym+1 and groundtruth predictions am+1 independently using both instancelevel metrics and dataset-level metrics. For instancelevel predictions, we measure the mean-average error E(am+1, ym+1) using Eq. (2) along with the normalized agreement \u03ba using Eq. (4). For dataset-level metrics, we measure the absolute difference between estimated and ground truth accuracies, Eagg = |(|ym+1|\u2212|am+1|)|/n. This gives us a global metric that does not take into account individual sample-level correspondences between ym+1 and am+1, but rather simply the difference between the aggregate sum of correct predictions. 5.2. Model Performance Estimation ( insertM) In this set of experiments, we evaluate the predictive power of S&S for evaluating new models ( 2 ) when subjected to a varying number of sampling budgets n\u2032 i.e., we run our S&S framework over 13 different sampling budgets: {8, 16, 32, 64, 128, 256, 512, 1024, 2048, 4096, 8192, 16384, 32768} on both Lifelong-ImageNet and Lifelong-CIFAR10. Unless otherwise specified, our main results in Sections 5.2 and 5.3 use the simple Sorting by Sum algorithm ( 1 , Listing 1) for obtaining P\u2217and uniform sampling for the sample budget n\u2032. We analyze and ablate the other design choices in Section 5.4. We now present our main results. Key Result 1: Extreme Cost-Efficiency. From Figs. 3(a) and 3(b), we observe that our approach converges to a very high normalized agreement and low mean-absolute error with 1/1000 the number of evaluation samples, leading to extreme cost savings at inference time (from 180 GPU days to 5 GPU hours on a single A100-80GB GPU)4. This consistently holds across both datasets on all three metrics: Normalized Agreement, Mean Absolute Error, and Absoluteaccuracy difference. Key Result 2: Prediction Error Scales as a Power-Law. We further analyse the observed Eagg against the sampling budget (n\u2032) relationship by fitting power-laws in Figs. 3(c) and 3(d). The power-laws take the form Eagg=cn\u2032p, where c is the scaling width and p is the exponential coefficient. 
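Fitting this power-law form to the measured errors can be done with a standard nonlinear least-squares routine; the sketch below uses scipy's curve_fit and assumes the (n\u2032, Eagg) measurements are supplied by the caller.

import numpy as np
from scipy.optimize import curve_fit

def fit_power_law(budgets, errors):
    # budgets: sampling budgets n'; errors: the aggregate error E_agg measured at each budget.
    def power_law(n_prime, c, p):
        return c * np.power(n_prime, p)           # E_agg = c * (n')^p
    (c, p), _ = curve_fit(power_law, np.asarray(budgets, float),
                          np.asarray(errors, float), p0=(1.0, -0.2))
    return c, p                                   # scaling width c and exponent p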
We find that the power-laws have large exponential coefficients, p=\u22120.18 for Lifelong-CIFAR10 and p=\u22120.23 for LifelongImageNet. This further demonstrates the surprisingly high sample-efficiency obtained by Sort & Search (S&S). Key Result 3: Highly Accurate Performance Estimation. We note from Fig. 4 that S&S is able to very accurately predict the ground-truth accuracies of models. At a sampling 4The \u201ccompute saved\u201d axis in the plots is computed as n n\u2032 . Effective compute savings are: In Lifelong-CIFAR10, we do 25, 250\u00d71, 697, 682 evaluations in the full evaluation v/s 25, 250\u00d72, 048 in our evaluation. Similarly, for LifelongImageNet, we perform 117\u00d71, 986, 310 v/s 117\u00d72, 048 evaluations. 8 \fLifelong Benchmarks: Efficient Model Evaluation in an Era of Rapid Progress 101 102 103 Sampling Budget m' 0.150 0.175 0.200 0.225 0.250 0.275 Mean Abs. Error Sum Confidence-Sum 106 105 104 103 102 101 Compute Saved (a) Sample Difficulty Estimation 101 102 103 104 Sampling Budget n' 0.55 0.60 0.65 0.70 Normalized Agreement m=10 m=20 m=50 m=100 (b) Analysis: #Ranking models 101 102 103 104 Sampling Budget n' 0.575 0.600 0.625 0.650 0.675 0.700 0.725 Normalized Agreement Sum Recursive Sum Confidence Sum (c) Analysis: Ranking methods 101 102 103 104 Sampling Budget n' 0.55 0.60 0.65 0.70 Normalized Agreement Uniform Random (d) Analysis: Sampling methods Figure 5. Additional Analyses. Figure (a) We achieve accurate sample difficulty estimates on Lifelong-CIFAR10 (<0.15 MAE) at a fraction of the total number of models to be evaluated, thereby enabling an efficient insertion of new samples into the ordered set of samples in the benchmark. In Figures (b,c,d), we analyse three design choice axes for a better understanding of the S&S method using the Lifelong-Imagenet dataset. Aleatoric Error Lifelong-CIFAR10 Total Error (E) \u2014\u2014 Epistemic Error (Eepistemic) Aleatoric Error Total Error (E) \u2014\u2014 Epistemic Error (Eepistemic) Lifelong-ImageNet Figure 6. Error Decomposition Analysis on Lifelong-CIFAR10 (left) and Lifelong-ImageNet (right). We observe that epistemic error (solid line) drops to 0 within only 100 to 1000 samples across both datasets, indicating this error cannot be reduced further by better sampling methods. The total error E is almost entirely irreducible (Aleatoric), induced because new models do not perfectly align with the ranking order P\u2217. This suggests generalizing beyond a single rank ordering, not better sampling strategies, should be the focus of subsequent research efforts. budget (n\u2032) of just 512 or 1, 024 samples, our predicted accuracies almost exactly match the true accuracies, as measured by the pearson correlations (0.96 for Lifelong-CIFAR10 and 0.97 for Lifelong-ImageNet at a sampling budget of 1, 024). Note that this performance prediction ability is especially surprising given these results are aggregated over 25,250 models for Lifelong-CIFAR10 and 117 models for LifelongImageNet, spanning a wide range of architectures, model sizes, and accuracies. Additional plots across a more finer variation of n\u2032 are provided in the Appendix. 5.3. Sample Difficulty Estimation (insertD) We next showcase results with the complementary task ( 1 ) where for new samples, the goal is to sub-sample the number of models to evaluate on the new samples, for accurately determining sample difficulty. 
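Estimating the difficulty of a new sample from a budget of m\u2032 models mirrors the model-evaluation case with A transposed. A rough sketch is given below; the strongest-to-weakest model ordering and the sample_correct callable are assumptions made for illustration, not the released code.

import numpy as np

def estimate_sample_difficulty(A, m_prime, sample_correct):
    # A: existing (m models x n samples) binary accuracy cache.
    # sample_correct(i) -> 1 if model i classifies the new sample correctly, else 0.
    model_order = np.argsort(A.sum(axis=1))[::-1]           # models sorted strongest to weakest
    picks = np.linspace(0, len(model_order) - 1, num=m_prime, dtype=int)
    a_sub = np.array([sample_correct(model_order[p]) for p in picks])
    signed = np.where(a_sub == 0, -1, 1)
    prefix = np.cumsum(signed)
    k_sub = int(np.argmax(prefix)) + 1 if prefix.max() > 0 else 0
    # estimated fraction of all models that classify the new sample correctly;
    # smaller values indicate a harder sample.
    return k_sub / len(a_sub)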
We present results on this task on the Lifelong-CIFAR10 benchmark with two different methods for ranking models, Sorting by Sum ( 1 ) and Sorting by Confidence Sum ( 2 ). (Recursive Sum ( 3 ) is not applicable here, as all sum values are unique; see Section 4.3.) We evaluate over different model budgets m\u2032 (the number of models we use to evaluate our samples over): {8, 16, 32, 64, 128, 256, 512, 1024, 2048}. From Fig. 5(a), we observe that both methods converge quickly: Sorting by Sum ( 1 ) reaches a mean-absolute error of less than 0.15 by only evaluating on m\u2032=64 models out of 31,250 (104\u00d7 computation savings). This demonstrates our method\u2019s ability to efficiently determine sample difficulty, enabling efficient insertion back into the lifelong-benchmark pool. 5.4. Breaking down Sort & Search We next analyse the different design choices used in our S&S framework and compare their induced efficiency gains. Varying the Number of Models Used for Ranking. In Fig. 5(b), we analyse the effect of the number of models used for computing the initial ranking (i.e., m) on the final performance prediction on Lifelong-ImageNet. Having access to more models appears to be a key factor in improving prediction accuracy, since using a lower number of models for ranking (m=10) converges to a lower normalised agreement (a 4% performance difference at convergence when using m=100 (blue line) compared to m=10 (red line)). Interestingly, the number of models m used for ranking does not have any effect on the speed of convergence itself (all methods roughly converge at the same sampling budget, n\u2032=2,048), but rather only on the prediction accuracy.
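The normalized agreement reported in these analyses is the chance-corrected metric of Eqs. (2)-(4); a compact implementation, written from those definitions rather than taken from the released scripts, is:

import numpy as np

def normalized_agreement(a, y):
    # a, y: binary ground-truth and predicted correctness vectors for one model.
    a, y = np.asarray(a, dtype=float), np.asarray(y, dtype=float)
    err = np.abs(a - y).mean()                      # mean absolute error E (Eq. 2)
    pa, py = a.mean(), y.mean()
    e_rand = pa * py + (1 - pa) * (1 - py)          # agreement expected by chance alone (Eq. 3)
    return ((1.0 - err) - e_rand) / (1.0 - e_rand)  # chance-corrected agreement (Eq. 4)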
The total mean absolute error E(a_{m+1}, y_{m+1}) can be decomposed into a component that is irreducible by further sampling, referred to as the Aleatoric Sampling Error (E_aleatoric), and a component that can be improved by querying a larger fraction of samples n\u2032, referred to as the Epistemic Sampling Error (E_epistemic).

Aleatoric Sampling Error. Let y*_{m+1} denote the prediction y_{m+1} obtained when n\u2032 = n, i.e., the best prediction obtainable across all sub-sampled thresholds, since we have access to the full a_{m+1} vector. However, some error remains between y*_{m+1} and a_{m+1} due to the ordering operation (i.e., Sort). This error, caused by errors in the generalization of the permutation matrix P*, cannot be reduced by increasing the sample budget n\u2032. More formally, we define this error as:

\\begin{aligned} E_{\\text{aleatoric}}(\\mathbf{a}_{m+1}, \\mathbf{y}_{m+1}) &= \\min_{\\mathbf{y}_{m+1}} \\| \\mathbf{a}_{m+1}\\mathbf{P}^* - \\mathbf{y}_{m+1} \\| \\\\ &= \\| \\mathbf{a}_{m+1}\\mathbf{P}^* - \\mathbf{y}^*_{m+1} \\|. \\end{aligned} (5)

Epistemic Sampling Error. On the contrary, there is a gap between the optimal ranking prediction y*_{m+1} and the prediction y_{m+1} at the current sample size n\u2032. This gap, referred to as the Epistemic Sampling Error, is formally defined as:

E_{\\text{epistemic}}(\\mathbf{y}^*_{m+1}, \\mathbf{y}_{m+1}) = \\| \\mathbf{y}^*_{m+1} - \\mathbf{y}_{m+1} \\|. (6)

Note that, in a similar way, we can also decompose the normalized agreement metric \u03ba simply by computing \u03ba_aleatoric(a_{m+1}, y_{m+1}) = \u03ba(a_{m+1}, y*_{m+1}) and \u03ba_epistemic(y*_{m+1}, y_{m+1}) = \u03ba(y*_{m+1}, y_{m+1}).

Results. We analyse the effectiveness of sampling on Lifelong-CIFAR10 and Lifelong-ImageNet by studying the Epistemic Sampling Error (E_epistemic) and the Aleatoric Sampling Error (E_aleatoric) in Figure 6. First, we see that the epistemic error is very low and quickly converges to 0, i.e., we reach the best achievable performance within sampling just 100 to 1,000 samples on both datasets. The remaining error after that point is irreducible and is primarily caused by generalization gaps in the permutation matrix P*. Further, we note that the Recursive Sum algorithm (3) does not help reduce this gap, as shown in Fig. 5(c). The gap is attributable to new models inherently not following a single ranking order across all samples.

7. Open Problems

Although S&S shows very promising results in enhancing the efficiency of evaluating lifelong benchmarks, our investigation leads to some interesting open problems: (1) One-Step Process: Currently, our approach is restricted to one-step sample ranking and model evaluation, whereas ideal lifelong evaluation would need simultaneous optimization of these steps. How do we extend our framework to multi-step continual ranking and evaluation? (2) Ranking Imprecision: Our error decomposition analysis (Section 6) provides convincing evidence that the ordering of samples P* bottlenecks prediction performance when evaluating new models. Generalizing from a single sample ordering P* to richer ordering structures, such as different clusters of models each with their own ordering, or rejection frameworks for models that do not align with the ordering, could dramatically improve the framework. (3) Identifying Difficult Samples: Finding and labeling challenging examples is an essential task for lifelong benchmarks, which is not the focus of our work.
Studying hard or adversarial sample selection approaches with lifelong benchmarking is a promising direction. We provide an extensive survey of related approaches in this direction in the Appendix."
+ },
+ {
+ "url": "http://arxiv.org/abs/2402.08823v1",
+ "title": "RanDumb: A Simple Approach that Questions the Efficacy of Continual Representation Learning",
+ "abstract": "We propose RanDumb to examine the efficacy of continual representation\nlearning. RanDumb embeds raw pixels using a fixed random transform which\napproximates an RBF-Kernel, initialized before seeing any data, and learns a\nsimple linear classifier on top. We present a surprising and consistent\nfinding: RanDumb significantly outperforms the continually learned\nrepresentations using deep networks across numerous continual learning\nbenchmarks, demonstrating the poor performance of representation learning in\nthese scenarios. RanDumb stores no exemplars and performs a single pass over\nthe data, processing one sample at a time. It complements GDumb, operating in a\nlow-exemplar regime where GDumb has especially poor performance. We reach the\nsame consistent conclusions when RanDumb is extended to scenarios with\npretrained models replacing the random transform with pretrained feature\nextractor. Our investigation is both surprising and alarming as it questions\nour understanding of how to effectively design and train models that require\nefficient continual representation learning, and necessitates a principled\nreinvestigation of the widely explored problem formulation itself. Our code is\navailable at https://github.com/drimpossible/RanDumb.",
+ "authors": "Ameya Prabhu, Shiven Sinha, Ponnurangam Kumaraguru, Philip H. S. Torr, Ozan Sener, Puneet K. Dokania",
+ "published": "2024-02-13",
+ "updated": "2024-02-13",
+ "primary_cat": "cs.CV",
+ "cats": [
+ "cs.CV",
+ "cs.LG"
+ ],
+ "main_content": "Introduction Continual learning is a specialized form of supervised learning, characterized by sequentially arriving tasks, coupled with additional computational and memory constraints [24, 27, 39, 44, 49] (see Verwimp et al. [51] for a survey of applications). A notable implication of this setup is the inherent performance gap between continual learning and traditional supervised learning. To elucidate this further, let us consider the achievable risk under continual learning given a dataset D: \\mat h cal {R}_\\ t exttt {CL}(\\mathcal {D}) \\geq R_{\\texttt {Joint}}(\\mathcal {D}) + E_{\\texttt {CL}}(\\mathcal {D}) (1) *authors contributed equally, + equal advising Decorrelate (D) NCM (C) Approximate RBF-Kernel RanDumb O = CTDT\u03c6( I) I O Figure 1. Mechanism of RanDumb. Project raw pixels to a high dimensional space using random fourier projections (\u03c6), then decorrelate the features using Mahalanobis distance [31] and classify with the nearest class mean. The online update only involves updating a single covariance matrix and class-means. Here, RJoint denotes the risk achievable with supervised learning without any constraints, while ECL represents the gap arising from transitioning from fully supervised learning to continual learning which imposes additional constraints such as: resource-constraint (e.g., single-epoch), distribution shift, and predefined task ordering with limited access to data. Because of these additional constraints imposed by continual learning, the risk is non-decreasing and ECL is non-negative. Minimizing ECL has been the primary focus of the continual learning literature [16, 29, 37, 41]. In this work, we discover that the fundamental aspect of continual representation learning (CRL) is often overlooked in the continual learning literature, as most methods merge continual classifier and representation learning. We dissect these two aspects and study the effect of representation learning on ECL. The central question of our work is trying to answer: Are current continual learning setups overly constrained for effective continual representation learning? Specifically, consider a continual representation learning algorithm CRL which learns a representation from the continual dataset D. Is this representation better than a random (ie. non-learned) representation RandRep? More formally, is the following quantity positive or negative? RepGain( \\ mathca l {D}) = E_\\tex ttt {CL}(\\mathcal {D}, \\mathtt {RandRep}) E_\\texttt {CL}(\\mathcal {D}, \\mathtt {CRL}(\\mathcal {D})) \\nonumber 1 arXiv:2402.08823v1 [cs.CV] 13 Feb 2024 \fTable 1. Online Continual Learning. Performance comparison of RanDumb on the PEC setup [59] and VAE-GC [50]. Setup and numbers borrowed from PEC [59]. RanDumb outperforms the best exemplar-free OCL method. Comparing representation learning with no representation learning, a fixed, random function outperforms training a deep network continually, i.e. RepGain(D) < 0. Furthermore, RanDumb nearly matches performance of one-pass joint, demonstrating the inefficacy of current benchmarks. Method MNIST CIFAR10 CIFAR100 m-ImNet Comparison with Best Method Best (PEC) 92.3 58.9 26.5 14.9 RanDumb (Ours) 98.3 55.6 28.6 17.7 Improvement +6.0 -3.3 +2.1 +2.8 Only Ablating Representation Learning: Random v/s Deep Embedding VAE-GC 84.0 42.7 19.7 12.1 RanDumb (Ours) 98.3 55.6 28.6 17.7 Error (ECRL) +14.3 +12.9 +8.9 +5.6 Scope of Improvement Joint (One Pass) 98.3 74.2 33.0 25.3 RanDumb (Ours) 98.3 55.6 28.6 17.7 Gap Covered. 
(%) 100% 75% 87% 70% Here, the random representation functions are defined at initialization without seeing any data or using domain-specific priors. Note that one expects the impact of continual representation learning (RepGain(D)) to be positive, as continual representation learning approaches should improve performance over even the best random representation. Our primary contribution is empirically demonstrating that currently RepGain(D) < 0, i.e. there exists random representations which outperform state-of-the-art representations learned by continual learning algorithms using deep networks. We show this by introducing a straightforward baseline, which we name RanDumb, standing for Random representation function and a Dumb linear classifier. Despite replacing the deep embedder with a random function, we see from Table 1 (top) that RanDumb outperforms the current state-of-the-art methods being exemplar-free over challenging large task scenarios given in [59]. We extend the large task scenario to methods with pretrained feature extractors [64]. In RanDumb, we replace our random projection with the pretrained feature extractor here, simply learning a linear classifier. We show results in Table 2 (top), showing that RanDumb outperforms the best methods in this scenario as well. 1.1. RanDumb: Construction and Performance Mechanism. Our mechanism is illustrated in Figure 1. RanDumb first projects input pixels into a high-dimensional space using a fixed kernel based on random Fourier basis, which is a low-rank data-independent approximation of the RBF Kernel [42]. Then, we use a simple linear classifier which first normalizes distances across different feature dimensions (anisotropy) with Mahalanobis distance [31] and then uses nearest class means for classification [33]. In scenarios with pretrained feature extractors, we use the fixed pretrained model as embedder and learn a linear classifier Table 2. Offline Continual Learning. Performance comparison with ImageNet21K ViT-B16 model using 2 init classes and 1 new class per task. RanDumb here assumes the features as input pixels. SLCA [62] and RanPAC [30] results reproduced from original codebase. Comparing representation learning with no representation learning, a fixed, pretrained feature extractor outperforms further finetuning a deep network continually, i.e. RepGain(D) < 0. RanDumb nearly matches performance of joint, demonstrating the inefficacy of current benchmarks. *here representation learning collapses, accuracy reported without PETL. +here representation learning and random projection both collapse, reporting last stable accuracy. Method CIFAR IN-A IN-R CUB OB VTAB Cars Comparison with Best Method Best (RanPAC) 89.6 26.8+ 67.3 87.2+ 77.9* 88.2+ 53.7\u2217 RanDumb (Ours) 86.8 42.2 64.9 88.5 75.3 92.3 69.1 Improvement -2.9 +15.4 -2.4 +1.3 -2.6 +3.7 +15.4 Only Ablating Representation Learning: No Finetuning v/s Full Finetuning SLCA 86.8 54.2 82.1 18.2 RanDumb (Ours) 86.8 42.2 64.9 88.5 75.3 92.3 69.1 Improvement +0.0 +10.7 +6.4 +50.9 Scope of Improvement Joint 93.8 70.8 86.6 91.1 83.8 95.5 86.9 RanDumb (Ours) 86.8 42.2 64.9 88.5 75.3 92.3 69.1 Gap Covered. (%) 93% 60% 75% 97% 92% 97% 80% over it, similar to Hayes and Kanan [20]. Properties. RanDumb needs no storage of exemplars and requires only one pass over the data in a one-sample-pertimestep fashion. 
Furthermore, it only requires online estimation of the covariance matrix and the nearest class means, and hence is invariant to data order, representing the accuracy under the worst-case ordering.

Poor Representation Learning. We first compare RanDumb with VAE-GC [50] in Table 1 (middle) and SLCA [62] in Table 2 (middle). The only distinction between them is their embedding functions: RanDumb uses a fixed function (a random projection or a pretrained network) as the embedder, whereas VAE-GC and SLCA additionally train deep networks continually and use them as the embedder. Astonishingly, RanDumb consistently surpasses the VAE-GC and SLCA baselines by wide margins of 5-15%. This shows that state-of-the-art online continual learning algorithms fail to learn effective representations in a single pass across standard online continual learning datasets.

Benchmarks Over-Constrained. The reader might now ask: with better representation learning, can one outperform RanDumb by large margins in these settings? To study this, we compare the performance of RanDumb with Joint in the online and offline settings in Table 1 (bottom) and Table 2 (bottom), respectively. We observe that our simple baseline RanDumb recovers 70-90% of the gap to the respective joint classifiers across both settings. Improving forgetting in continual representation learning [21, 36] offers limited scope for improvement, as current benchmarks might be too constrained to allow effective representation learning at all.

[Figure 2. Mechanism of RanDumb (panels: Input, Embed (3D view), Decorrelate, Output (2D projection, horizontal midway)). RanDumb projects the input datapoints to a high-dimensional space to create a clearer separation between classes. Subsequently, it corrects the anisotropy across the feature dimensions, scaling each to unit variance. This allows a nearest class mean classifier to accurately rely on cosine similarity. The figure is adapted from Pilario et al. [38].]

In the next sections, we first describe our proposed method and then walk through our findings in detail.

2. RanDumb: Mechanism & Intuitions

In this section, we thoroughly examine RanDumb, focusing on its two main elements: the random projection and the dumb learner. We illustrate the mechanism of RanDumb using three toy examples in Figure 2. We first describe the dumb classifier. To classify a test sample x_test, we start with a simple classifier, the nearest class mean (NCM) classifier. It predicts the class among the |C| classes by the highest value of the similarity function f over the class means µ_i:

y_{\\textrm{pred}} = \\arg\\max_{i \\in \\{1,\\ldots,|C|\\}} f(\\mathbf{x}_{\\textrm{test}}, \\boldsymbol{\\mu}_i), \\quad \\text{where} \\quad f(\\mathbf{x}_{\\textrm{test}}, \\boldsymbol{\\mu}_i) := \\mathbf{x}_{\\textrm{test}}^\\top \\boldsymbol{\\mu}_i (2)

and µ_i are the class means in the pixel space: \\boldsymbol{\\mu}_i = \\frac{1}{|C_i|} \\sum_{\\mathbf{x} \\in C_i} \\mathbf{x}.

RanDumb adds two additional components to this classifier: 1) Kernelization and 2) Decorrelation.

(1) Kernelization: Why & How? Classes are typically not linearly separable in the pixel space, unlike in the feature space of deep models. Hence, we apply the kernel trick to embed the pixels in a better representation space, computing all distances between the data and the class means in this embedding space. This phenomenon is illustrated on three toy examples to build intuition in Figure 2 (Embed). We use an RBF kernel, which for two points x and y is defined as K_{\\text{RBF}}(\\mathbf{x}, \\mathbf{y}) = \\exp(-\\gamma \\|\\mathbf{x} - \\mathbf{y}\\|^2), where γ is a scaling parameter. However, calculating the RBF kernel is not possible under the online continual learning constraints, which prevent computing pairwise distances between all points. Hence, we use a data-independent approximation, the random Fourier projection ϕ(x), as given in [42]:

K_{\\text{RBF}}(\\mathbf{x}, \\mathbf{y}) \\approx \\phi(\\mathbf{x})^\\top \\phi(\\mathbf{y})

where the random Fourier features ϕ(x) are defined by first sampling D vectors \\{\\boldsymbol{\\omega}_1, \\ldots, \\boldsymbol{\\omega}_D\\} from a Gaussian distribution with mean zero and covariance matrix 2γI, where I is the identity matrix. Then ϕ(x) is a 2D-dimensional feature, defined as:

\\phi(\\mathbf{x}) = \\frac{1}{\\sqrt{D}} \\left[ \\cos(\\boldsymbol{\\omega}_1^\\top \\mathbf{x}), \\sin(\\boldsymbol{\\omega}_1^\\top \\mathbf{x}), \\ldots, \\cos(\\boldsymbol{\\omega}_D^\\top \\mathbf{x}), \\sin(\\boldsymbol{\\omega}_D^\\top \\mathbf{x}) \\right]

We keep these ω bases fixed throughout online learning. Thus, we obtain our modified similarity function from Equation 2 as:

f(\\mathbf{x}_{\\textrm{test}}, \\boldsymbol{\\mu}_i) := \\phi(\\mathbf{x}_{\\textrm{test}})^\\top \\bar{\\boldsymbol{\\mu}}_i (3)

where the class means in the kernel space are \\bar{\\boldsymbol{\\mu}}_i = \\frac{1}{|C_i|} \\sum_{\\mathbf{x} \\in C_i} \\phi(\\mathbf{x}).

(2) Decorrelation: Why & How? Projected raw pixels, similar to the features of deep models, have feature dimensions with different variances (anisotropy). Hence, instead of naively using the similarity of Equation 3, we further decorrelate the feature dimensions using a Mahalanobis distance with the empirical covariance matrix S. We illustrate this phenomenon as well on three toy examples in Figure 2 (Decorrelate) to build intuition. Our similarity function finally is:

f(\\mathbf{x}_{\\textrm{test}}, \\boldsymbol{\\mu}_i) := (\\phi(\\mathbf{x}_{\\textrm{test}}) - \\bar{\\boldsymbol{\\mu}}_i)^\\top \\mathbf{S}^{-1} (\\phi(\\mathbf{x}_{\\textrm{test}}) - \\bar{\\boldsymbol{\\mu}}_i) (4)

Online Computation. Our random projection is computed and fixed before seeing any real data. In the continual learning process, we only update the running class means and the empirical covariance matrix, which is accurately estimable online (footnote 1).

Table 3. Benchmark (Overview). We evaluate RanDumb across a diverse set of benchmarks that progressively relax constraints compared to RanDumb. Benchmark A closely matches RanDumb with one class per timestep and no stored exemplars. Benchmarks B, D, E progressively relax the constraints on exemplars and classes per timestep. Benchmarks C and E remove the online constraint by allowing unrestricted training and sample access within a task without exemplar storage of past tasks. Benchmark F allows using large pretrained models, modified by us to one class per task, inspired by challenges detailed in Benchmark A.
Setup | Num Passes | #Classes Per Task | #Samples Per Step | #Stored Exemplars | Contrastive Augment
RanDumb (Ours) | 1 | 1 | 1 | 0 | No
A (Zając et al. [59]) | 1 | 1 | 10 | 0 | No
B1 (Guo et al. [18]) | 1 | 2 | 10 | 100-2000 | No
B2 (Guo et al. [18]) | 1 | 2 | 10 | 100-1000 | Yes
C (Smith et al. [48]) | Many | 10 | All | 0 | No
D (Anonymous [4]) | 1 | 2-10 | 10 | 1000 | No
E (Ye and Bors [58]) | 1 | 2-5 | 10 | 1000-5000 | No
F (Wang et al. [54]) | Many | 1 | All | 0 | No

Note on Equivalences.
For the curious reader we draw equivalences between the linear classifier used in this work and related works. These equivalences rely on the assumption that the classes are equiprobable, which is the case for most datasets here. In these cases, nearest class mean classifier with the Mahalanobis distance metric is equivalent to linear discriminant analysis classifier [32]. Hence, one could say RanDumb is equivalent to a Streaming LDA classifier with an approximate RBF Kernel. Alternatively, one could think of the decorrelation operation as explicitly decorrelating the features with ZCA whitening [7]. We hope these alternative perspectives allow for a better understanding of the RanDumb classifier. 3. Experiments We compare RanDumb with algorithms across online continual learning benchmarks with an emphasis on exemplarfree and low-exemplar storage regime complementing GDumb [39] in regimes where it has poor performance. Benchmarks. We illustrate the benchmarks used in our experiments along with key differences to training run for RanDumb in Table 3. We aim for a truly comprehensive coverage and show results on four different benchmarks (A, B, D, E) which reflect the latest trends in online continual learning (\u201922-\u201924) across exemplar-free, contrastivetraining2, meta-continual learning and network-expansion 1Online prediction is possible using the Sherman\u2013Morrison formula to update the inverse of the covariance matrix. However, we only needed to compute inverse at the last timestep for all our experiments, hence we do not experiment with this. 2Benchmark B is split into two distinct sections: (i) methods that do not rely on contrastive learning and heavy augmentation, named B1, and (ii) approaches that incorporate contrastive learning and extra augmentations, based approaches respectively and a rehearsal-free offline continual learning benchmark C. These benchmarks are ordered by increasingly relaxed constraints, moving further away from the training scenario of RanDumb. These benchmarks train models from scratch. Hence, RanDumb utilizes a random Fourier transform for embedding. We further test on exemplar-free scenarios in offline continual learning using Benchmark F [54] with the challenging one-class per task constraint borrowed from [59]. This benchmark allows using pretrained models along with unrestricted training time and access to all class samples at each timestep. However, RanDumb is restricted to learning from a single pass seeing only one sample at a time. RanDumb only learns a linear classifier over a given pretrained model in Benchmark F. We use LAMDA-PILOT codebase3 for all methods, except RanPAC and SLDA which use their codebase. All methods use the original hyperparameters. We only change initial classes to 2 and number of classes per task to 1 in the scripts and test using both ImageNet21K and ImageNet1K ViT-B/16 models. For detailed assumptions, task orderings, implementation details and other specific information of these benchmarks and approaches, readers are encouraged to consult the LAMDA-PILOT codebase. Implementation Details (RanDumb). We evaluate RanDumb using five datasets commonly used for online continual learning: MNIST, CIFAR10, CIFAR100, TinyImageNet200, and miniImageNet100. For the latter two datasets, we downscale all images to 32x32. We augment each datapoint with flipped version, hence two images are seen by the classifier at each timestep (except for MNIST and Benchmark F). 
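Putting the mechanism of Section 2 together, below is a minimal, self-contained sketch of the update and prediction steps (our own code, not the released implementation; the class name is hypothetical, the streaming bookkeeping is simplified, and the ridge term on the covariance mirrors the shrinkage parameter λ used in these implementation details).

```python
import numpy as np

class RanDumbSketch:
    # Random Fourier features (Eq. 3), per-class means in the embedded space,
    # and a decorrelated Mahalanobis-style nearest-class-mean rule (Eq. 4).
    def __init__(self, in_dim, n_classes, D=1000, gamma=1.0, ridge=1e-5, seed=0):
        rng = np.random.default_rng(seed)
        self.W = rng.normal(0.0, np.sqrt(2 * gamma), size=(in_dim, D))  # omega_1..omega_D
        self.D, self.ridge = D, ridge
        self.class_sums = np.zeros((n_classes, 2 * D))  # per-class feature sums
        self.counts = np.zeros(n_classes)
        self.feat_sum = np.zeros(2 * D)                 # for the global covariance
        self.feat_outer = np.zeros((2 * D, 2 * D))
        self.n_seen = 0

    def _phi(self, x):
        proj = x @ self.W
        return np.concatenate([np.cos(proj), np.sin(proj)]) / np.sqrt(self.D)

    def observe(self, x, y):
        # Single pass, one sample at a time; only sums and counts are stored.
        f = self._phi(x)
        self.class_sums[y] += f
        self.counts[y] += 1
        self.feat_sum += f
        self.feat_outer += np.outer(f, f)
        self.n_seen += 1

    def predict(self, x):
        f = self._phi(x)
        mu = self.class_sums / np.maximum(self.counts, 1)[:, None]      # class means
        mean = self.feat_sum / self.n_seen
        cov = self.feat_outer / self.n_seen - np.outer(mean, mean)
        S_inv = np.linalg.inv(cov + self.ridge * np.eye(2 * self.D))    # shrinkage
        diff = f - mu                                                   # (n_classes, 2D)
        dists = np.einsum('cd,de,ce->c', diff, S_inv, diff)             # Eq. 4
        return int(dists.argmin())                                      # nearest class mean
```

With a pretrained feature extractor, _phi would simply be replaced by the frozen embedder, matching the Benchmark F variant described above.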
We normalize all images and flatten them into vectors, obtaining 784-dim input vectors for MNIST and 3072-dim input vectors for all the other datasets. For Benchmark F, we compare RanDumb on seven datasets used in LAMDA-PILOT, replacing ObjectNet with Stanford Cars as ObjectNet license prohibits training models on it. We use the 768-dimensional features from the same pretrained ViT-B models used in this benchmark. We measure accuracy on the test set of all past seen classes after completing the full one-pass over the dataset. We take the average accuracy after the last task on all past tasks [18, 54, 59]. In Benchmark A and F, since we have one class per task, the average accuracy across past tasks is the same regardless of the task ordering. In Benchmarks A-E, all datasets have the same number of samples, hence similarly the average accuracy across past tasks is the same regardless of the task ordering. We used the Scikit-Learn implementation of Random Fourier Features [42] with 25K embedding size, \u03b3 = 1.0. We use progressively increasing ridge regression parameter (\u03bb) with dataset complexity, using \u03bb = 10\u22126 for MNIST, \u03bb = 10\u22125 for CIFAR10/100 named as Scenario B2. 3https://github.com/sun-hailong/LAMDA-PILOT 4 \fTable 4. Benchmark A (Ref: Table 1 from PEC [59]). We perform comprehensive comparisons of RanDumb with popular online continual learning approaches in a 1-class per task setting referred as \u2018Dataset (num_tasks/1)\u2019. This setting is a challenging setting for OCL methods except GDumb. We observe that RanDumb outperforms all approaches across all datasets by 2-6% margins, with an exception of latest work PEC [59] on CIFAR10. Method Memory MNIST (10/1) CIFAR-10 (10/1) CIFAR-100 (100/1) miniImageNet (100/1) Fine-tuning all 10.1\u00b1 0.0 10.0\u00b1 0.0 1.0\u00b1 0.0 1.0\u00b1 0.0 Joint, 1 epoch all 98.3\u00b1 0.0 74.2\u00b1 0.1 33.0\u00b1 0.2 25.3\u00b1 0.2 Rehearsal-Based ER [13] (ICML-W \u201919) 500 84.4\u00b1 0.3 40.6\u00b1 1.1 12.5\u00b1 0.3 5.7\u00b1 0.2 A-GEM [12] (ICLR \u201919) 500 59.8\u00b1 0.8 10.2\u00b1 0.1 1.0\u00b1 0.0 1.1\u00b1 0.1 iCaRL [44] (CVPR \u201917) 500 83.1\u00b1 0.3 37.8\u00b1 0.4 5.7\u00b1 0.1 7.5\u00b1 0.1 BiC [56] (CVPR \u201919) 500 86.0\u00b1 0.4 35.9\u00b1 0.4 6.4\u00b1 0.3 1.5\u00b1 0.1 ER-ACE [10] (ICLR \u201922) 500 87.8\u00b10.2 39.9\u00b10.5 8.2\u00b10.2 5.7\u00b10.2 DER [9] (NeurIPS \u201920) 500 91.7\u00b1 0.1 40.0\u00b1 1.5 1.0\u00b1 0.1 1.0\u00b1 0.0 DER++ [9] (NeurIPS \u201920) 500 91.9\u00b1 0.2 35.6\u00b1 2.4 6.2\u00b1 0.4 1.4\u00b1 0.1 X-DER [8] (TPAMI \u201922) 500 83.0\u00b1 0.1 43.2\u00b1 0.5 15.6\u00b1 0.1 8.2\u00b1 0.4 GDumb [39] (ECCV \u201920) 500 91.0\u00b10.2 50.7\u00b10.7 8.2\u00b10.2 Rehearsal-Free EWC [24] (PNAS \u201917) 0 10.1\u00b1 0.0 10.6\u00b1 0.4 1.0\u00b1 0.0 1.0\u00b1 0.0 SI [60] (ICML \u201917) 0 12.7\u00b1 1.0 10.1\u00b1 0.1 1.1\u00b10.0 1.0\u00b10.1 LwF [26] (TPAMI \u201917) 0 11.8 \u00b1 0.6 10.1\u00b1 0.1 0.9\u00b10.0 1.0\u00b1 0.0 LT [61] (Arxiv \u201918) 0 10.9\u00b1 0.9 10.0\u00b1 0.2 1.1\u00b1 0.1 1.0\u00b1 0.0 Gen-NCM [22] (NeurIPS-W \u201922) 0 82.0\u00b1 0.0 27.7\u00b1 0.0 10.0\u00b1 0.0 7.5\u00b1 0.0 Gen-SLDA [20] (CVPR-W \u201920) 0 88.0\u00b1 0.0 41.4\u00b1 0.0 18.8\u00b1 0.0 12.9\u00b1 0.0 VAE-GC [50] (CVPR-W \u201921) 0 84.0\u00b1 0.5 42.7\u00b1 1.3 19.7\u00b1 0.1 12.1\u00b1 0.1 PEC [59] (ICLR \u201924) 0 92.3\u00b1 0.1 58.9\u00b1 0.1 26.5\u00b1 0.1 14.9\u00b1 0.1 RanDumb (Ours) 0 98.3 (+5.9) 55.6 (-3.3) 28.6 (+2.1) 17.7 (+2.8) and \u03bb = 10\u22124 for 
TinyImageNet200/miniImageNet100. All experiments were conducted on a CPU server with a 48-core Intel Xeon Platinum 8268 CPU and 392GB of RAM. Our code is available at https://github.com/ drimpossible/RanDumb. 3.1. Results We extensively evaluate RanDumb and detail our findings on each benchmark in this section. Benchmark A. Benchmark A assesses continual learning models in the challenging setup of one class per timestep, closely mirroring our training assumptions. We present our results in Table 4. Comparing across rows, and see that RanDumb improves over prior state-of-theart across all datasets with 2-6% margins. The only exception is PEC on CIFAR10, where RanDumb underperforms by 3.3%. Nonetheless, it still outperforms the second-best model, GDumb with a 500 memory size, by 4.9%. Benchmark B1. We present our results comparing with non-contrastive methods in Table 5. We notice that scenario allows two classes per task and relaxes the memory constraints for online continual learning methods, allowing for higher accuracies compared to Benchmark A. Despite that, RanDumb outperforms latest OCL algorithms on MNIST, CIFAR10 and CIFAR100\u2014often by margins exceeding >10%. The lone exception is GDumb achieving a higher performance with 2K memory samples on TinyImageNet, indicating that this already is in the high-memory regime. Overall, RanDumb is the best algorithm across the benchmark. Benchmark B2. We additionally compare our performance with the latest online CL approaches using contrastive losses with sophisticated data augmentations. As shown in in Table 6 (Left), these advancements provide large performance improvements over methods from Benchmark B.1. To compensate, we compare on lower exemplar budgets. The best approach, OnPro [55], which outpeforms RanDumb on CIFAR10 by 2.2% and TinyImageNet by 0.3%, but falls significantly short on CIFAR100 by 5.9%. Overall, RanDumb achieves strong results compared to highly performant representation learning using state-of-the-art contrastive learning approaches customized to continual learning, despite storing no exemplars. Benchmark C. We now compare against offline rehearsal-free continual learning approaches in Table 6 (Right) on CIFAR100 dataset. Despite online training, RanDumb outperforms PredKD by over 4% margins. Benchmark D. We now compare performance of RanDumb against meta-continual learning methods, which require large exemplars with buffer sizes of 1K in Table 7 (left). RanDumb achieves strong performance under these conditions, exceeding all prior work by a large margin of 9.1% on CIFAR100 and outperforms all but VR-MCL approach on the TinyImageNet dataset. GDumb performs the best on CIFAR10, indicating this is already in a largeexemplar regime uniquely unsuited for RanDumb. Benchmark E. We compare RanDumb against recent network expansion-based online continual learning methods in Table 7 (middle). These approaches grow model capacity to mitigate forgetting while dealing with shifts in the data distribution, and are allow larger memory buffers. RanDumb matches the performance of the state-of-the-art 5 \fTable 5. Benchmark B.1 (Ref: Table adopted from OnPro [55], OCM[18]) We perform comprehensive comparisons of RanDumb with popular online continual learning approaches in less ideal many-classes per task setting referred as \u2018Dataset (num_tasks/num_classes_per_task)\u2019. Furthermore, results are categorized by memory buffer sizes given in \u2018M\u2019 subcolumn for that dataset. 
Comparing across rows, we observe that RanDumb outperforms the best among the compared approaches without heavy-augmentations by 3-20% margins despite being exemplar free. Only in one case, it is second best, but after GDumb. Comparing the two columns of CIFAR100, we observe a large drop in performance of OCL methods on benchmarks with longer timesteps indicating analyzing longer timesteps is important in continual learning. Method MNIST (5/2) CIFAR10 (5/2) CIFAR100 (10/10) CIFAR100 (50/2) TinyImageNet (100/2) M =0.1k M =0.1k M =0.2k M =0.5k M =1k M =1k M =1k M =2k AGEM [12] (ICLR \u201919) 56.9\u00b15.2 17.7\u00b10.3 22.7\u00b11.8 5.8\u00b10.2 5.9\u00b10.1 1.8\u00b10.2 0.8\u00b10.1 0.9\u00b10.1 GSS [3] (NeurIPS \u201919) 70.4\u00b11.5 18.4\u00b10.2 26.9\u00b11.2 8.1\u00b10.2 11.1\u00b10.2 4.3\u00b10.2 1.1\u00b10.1 3.3\u00b10.5 ER [13] (ICML-W \u201919) 78.7\u00b10.4 19.4\u00b10.6 29.7\u00b11.0 8.7\u00b10.3 15.7\u00b10.3 8.3\u00b10.3 1.2\u00b10.1 5.6\u00b10.5 ASER [46] (AAAI \u201921) 61.6\u00b12.1 20.0\u00b11.0 27.8\u00b11.0 11.0\u00b10.3 16.4\u00b10.3 9.6\u00b11.3 2.2\u00b10.1 5.3\u00b10.3 MIR [2] (NeurIPS \u201919) 79.0\u00b10.5 20.7\u00b10.7 37.3\u00b10.3 9.7\u00b10.3 15.7\u00b10.2 12.7\u00b10.3 1.4\u00b10.1 6.1\u00b10.5 ER-AML [10] (ICLR \u201922) 76.5\u00b10.1 40.5\u00b10.7 16.1\u00b10.4 5.4\u00b10.2 iCaRL [44] (CVPR \u201917) 31.0\u00b11.2 33.9\u00b10.9 12.8\u00b10.4 16.5\u00b10.4 5.0\u00b10.3 6.6\u00b10.4 DER++ [9] (NeurIPS \u201920) 74.4\u00b11.1 31.5\u00b12.9 44.2\u00b11.1 16.0\u00b10.6 21.4\u00b10.9 9.3\u00b10.3 3.7\u00b10.4 5.1\u00b10.8 GDumb [39] (ECCV \u201920) 81.2\u00b10.5 23.3\u00b11.3 35.9\u00b11.1 8.2\u00b10.2 18.1\u00b10.3 18.1\u00b10.3 4.6\u00b10.3 12.6\u00b10.1 CoPE [15] (CVPR \u201921) 33.5\u00b13.2 37.3\u00b12.2 11.6\u00b10.4 14.6\u00b11.3 2.1\u00b10.3 2.3\u00b10.4 DVC [18] (CVPR \u201922) 35.2\u00b11.7 41.6\u00b12.7 15.4\u00b10.3 20.3\u00b11.0 4.9\u00b10.6 7.5\u00b10.5 Co\u00b2L [11] (ICCV \u201921) 83.1\u00b10.1 42.1\u00b11.2 17.1\u00b10.4 10.1\u00b10.2 R-RT [6] (CVPR \u201921) 89.1\u00b10.3 45.2\u00b10.4 15.4\u00b10.3 6.6\u00b10.3 CCIL [35] (CVPR \u201921) 86.4\u00b10.1 50.5\u00b10.2 18.5\u00b10.3 5.6\u00b10.9 IL2A [65] (NeurIPS \u201921) 90.2\u00b10.1 54.7\u00b10.5 18.2\u00b11.2 5.5\u00b10.7 BiC [56] (CVPR \u201919) 90.4\u00b10.1 48.2\u00b10.7 21.2\u00b10.3 10.2\u00b10.9 SSIL [1] (ICCV \u201921) 88.2\u00b10.1 49.5\u00b10.2 26.0\u00b10.1 9.6\u00b10.7 Rehearsal-Free PASS [65] (NeurIPS \u201921) 33.7\u00b12.2 33.7\u00b12.2 7.5\u00b10.7 7.5\u00b10.7 0.5\u00b10.1 0.5\u00b10.1 RanDumb (Ours) 98.3 (+7.8) 55.6 (+20.4) 55.6 (+5.9) 28.6 (+12.6) 28.6 (+2.6) 28.6 (+10.5) 11.6 (+6.6) 11.6 (-1.0) Table 6. (Left) Benchmark B.2 (Ref: Table from OnPro [55]) We now compare with state-of-the-art contrastive representation learning based online continual learning approaches which additionally use sophisticated augmentations. These augmentations and additional loss function can improve any of the OCL methods described in Setup B.1 by large margins [18]. We observe that RanDumb often outperforms these sophisticated methods despite all of these factors on small-exemplar settings. (Right) Benchmark C (Ref: Table 2 from [48]). We compare the performance of RanDumb with latest efforts on improving rehearsal-free methods. We outperform them by 4% margins. 
Method MNIST (5/2) CIFAR10 (5/2) CIFAR100 (10/10) TinyImageNet (100/2) M =0.1k M =0.1k M =0.5k M =1k SCR [28] (CVPR \u201921) 86.2\u00b10.5 40.2\u00b11.3 19.3\u00b10.6 8.9\u00b10.3 OCM [18] (ICML \u201922) 90.7\u00b10.1 47.5\u00b11.7 19.7\u00b10.5 10.8\u00b10.4 OnPro [55] (ICCV \u201923) 57.8\u00b11.1 22.7\u00b10.7 11.9\u00b10.3 Rehearsal-Free RanDumb (Ours) 98.3 (+7.5) 55.6 (-2.2) 28.6 (+5.9) 11.6 (-0.3) Method CIFAR100 (10/10) Rehearsal-Free PredKD [26] 24.6 PredKD + FeatKD 12.4 PredKD + EWC 23.3 PredKD + L2 21.5 RanDumb 28.6 (+4.0) method SEDEM [58] on MNIST, while exceeding it by 0.3% on CIFAR10 and 3.8% on CIFAR100. The results indicate current benchmarks remain too restrictive for effective continual representation learning. Benchmark F. We compare performance of approaches which do not further train the deep network like RanDumb (ours)4 and NCM [22] against popular prompt-tuning approaches in Table 8. and discover that prompt-tuning approaches completely collapse under large timesteps and approaches which do not finetune their pretrained model achieve strong performance, even under challenging one class per timestep constraint. Note that there are variants of RanDumb (e.g. RanPAC RP PETL [30]) that achieve higher accuracies due to minor modifications which are designed for this benchmark. 4Recall that RanDumb in Benchmark F is quite similar to SLDA [20]. Conclusion. Overall, despite RanDumb being exemplarfree and modeling worst-case ordering, it outperforms nearly all online continual learning methods across various tasks when exemplar storage is limited. We specifically benchmark on lower exemplar sizes to complement settings in which GDumb does not perform well. 3.2. RanDumb: Analysis We extensively analyse RanDumb for the traditional OCL benchmarks in this subsection. First, we ablate the two components (random embedder and decorrelation) in RanDumb to ensure both components are necessary. Then we study effect of varying the embedding dimensionality, shrinkage and removing flipping augmentation and compare with alternate random embeddings proposed in literature [30] and across architectures [34]. 6 \fTable 7. (Left) Benchmark D (Ref: Table 2 from VR-MCL [4]) We compare RanDumb with meta-continual learning approaches operating in a high memory setting, allowing buffer sizes up to 1K exemplars. Despite catering representations specifically to combat forgetting, all methods with the exception of VR-MCL are outperformed by RanDumb on TinyImageNet. RanDumb also surpasses all prior work by a substantial 9.1% on CIFAR100. Allowing generous replay buffers shifts scenarios to a high exemplar regime where GDumb performs the best on CIFAR10. Yet RanDumb competes favorably even under these conditions not optimized for its approach. (Middle) Benchmark E (Ref: Table 1 from SEDEM [58]) We compare RanDumb with recent network expansion based online continual learning approaches. Despite allowing access to much larger memory buffers, RanDumb matches the performance of best method SEDEM on MNIST, while exceeding it by 0.3% on CIFAR10 and 3.8% on CIFAR100. (Right) Architectures (Ref: Table 1 from Mirzadeh et al. [34]) RanDumb surpasses continual representation learning across a wide range of architectures, achieving close to 94% of the joint performance. 
Method CIFAR10 CIFAR100 TinyImageNet (5/2) (10/10) (20/10) M = 1k M = 1k M = 1k Finetune 17.0 \\pm 0.6 5.3 \\pm 0.3 3.9 \\pm 0.2 A-GEM [12] (ICLR \u201919) 18.4 \\pm 0.2 6.0 \\pm 0.2 4.0 \\pm 0.2 IS [60] (ICML \u201917) 17.4 \\pm 0.2 5.2 \\pm 0.2 3.3 \\pm 0.3 MER [45] (ICLR \u201919) 36.9 \\pm 2.4 \u2013 \u2013 La-MAML [19] (NeurIPS \u201920) 33.4 \\pm 1.2 11.8 \\pm 0.6 6.74 \\pm 0.4 GDumb [39] (ECCV \u201920) 61.2 \\pm 1.0 18.1 \\pm 0.3 4.6 \\pm 0.3 ER [13] (ICML-W \u201919) 43.8 \\pm 4.8 16.1 \\pm 0.9 11.1 \\pm 0.4 DER [9] (NeurIPS \u201920) 29.9 \\pm 2.9 6.1 \\pm 0.1 4.1 \\pm 0.1 DER++ [9] (NeurIPS \u201920) 52.3 \\pm 1.9 11.8 \\pm 0.7 8.3 \\pm 0.3 CLSER [5] (ICLR \u201922) 52.8 \\pm 1.7 17.9 \\pm 0.7 11.1 \\pm 0.2 OCM [18] (ICML \u201922) 53.4 \\pm 1.0 14.4 \\pm 0.8 4.5 \\pm 0.5 ER-OBC [14] (ICLR \u201923) 54.8 \\pm 2.2 17.2 \\pm 0.9 11.5 \\pm 0.2 VR-MCL [4] (ICLR \u201924) 56.5 \\pm 1.8 19.5 \\pm 0.7 13.3 \\pm 0.4 Rehearsal-Free RanDumb (Ours) 55.6 (-5.6) 28.6 (+9.1) 11.6 (-1.7) Method MNIST CIFAR10 CIFAR100 (5/2) (5/2) (20/5) M = 2k M = 1k M = 5k Finetune 19.8 \u00b1 0.1 18.5 \u00b1 0.3 3.5 \u00b1 0.1 MIR [2] (NeurIPS \u201919) 93.2 \u00b1 0.4 42.8 \u00b1 2.2 20.0 \u00b1 0.6 GEM [12] (ICLR \u201919) 93.2 \u00b1 0.4 24.1 \u00b1 2.5 11.1 \u00b1 2.4 iCARL [44] (CVPR \u201917) 83.9 \u00b1 0.2 37.3 \u00b1 2.7 10.8 \u00b1 0.4 G-MED [23] (NeurIPS \u201921) 82.2 \u00b1 2.9 47.5 \u00b1 3.2 19.6 \u00b1 1.5 GSS [3] (NeurIPS \u201919) 92.5 \u00b1 0.9 38.5 \u00b1 1.4 13.1 \u00b1 0.9 CoPE [15] (CVPR \u201921) 93.9 \u00b1 0.2 48.9 \u00b1 1.3 21.6 \u00b1 0.7 CURL [43] (NeurIPS \u201919) 92.6 \u00b1 0.7 CNDPM [25] (ICLR \u201920) 95.4 \u00b1 0.2 48.8 \u00b1 0.3 22.5 \u00b1 1.3 Dynamic-OCM [57] (ECCV \u201922) 94.0 \u00b1 0.2 49.2 \u00b1 1.5 21.8 \u00b1 0.7 SEDEM [58] (ICCV \u201923) 98.3 \u00b1 0.2 55.3 \u00b1 1.3 24.8 \u00b1 1.2 Rehearsal-Free RanDumb (Ours) 98.3 (0.0) 55.6 (+0.3) 28.6 (+3.8) Model CIFAR100 Joint 79.58 CNN x1 62.2 \u00b11.35 CNN x2 66.3 \u00b11.12 CNN x4 68.1 \u00b10.5 CNN x8 69.9 \u00b10.62 CNN x16 76.8 \u00b10.76 ResNet-18 45.0 \u00b10.63 ResNet-34 44.8 \u00b12.34 ResNet-50 56.2 \u00b10.88 ResNet-101 56.8 \u00b11.62 WRN-10-2 50.5 \u00b12.65 WRN-10-10 56.8 \u00b12.03 WRN-16-2 44.6 \u00b12.81 WRN-16-10 51.3 \u00b11.47 WRN-28-2 46.6 \u00b12.27 WRN-28-10 49.3 \u00b12.02 ViT-512/1024 51.7 \u00b11.4 ViT-1024/1546 60.4 \u00b11.56 RandDumb (Ours) 74.8 (-2.0) Table 8. Benchmark F We compare RanDumb with other prompt-tuning based continual learning approaches using ViTB/16 ImageNet-21K/1K pretrained models using 2 init classes and 1 class per task setting. Most prompt-tuning based methods collapse and simple baselines which do not pretrain the architecture like NCM [22] or RanDumb achieve state-of-the-art performance. \u2019-\u2019 indicates that despite reasonable efforts, we could not run the method. +here representation learning and random projection both collapse, reporting last stable accuracy. 
Method CIFAR IN-A IN-R CUB VTAB ViT-B/16 (IN-1K Pretrained) Finetune 1.0 1.2 1.1 1.0 2.1 L2P [54] (CVPR \u201922) 2.4 0.3 0.8 1.4 1.3 DualPrompt [53] (ECCV \u201922) 2.3 0.3 0.8 0.9 4.2 CODA-Prompt [47] (CVPR \u201923) 2.6 0.3 0.8 1.9 6.3 Adam-Adapt [64] (Arxiv \u201923) 76.7 49.3 62.0 85.2 83.6 Adam-SSF [64] (Arxiv \u201923) 76.0 47.3 64.2 85.6 84.2 Adam-VPT [64] (Arxiv \u201923) 79.3 35.8 61.2 83.8 86.9 Adam-FT [64] (Arxiv \u201923) 72.6 49.3 61.0 85.2 83.8 Memo [63] (ICLR \u201923) 69.8 81.4 iCARL [44] (CVPR \u201917) 72.4 35.2 72.4 Foster [52] (ECCV \u201922) 52.2 76.8 86.6 NCM [22] (NeurIPS-W \u201922) 76.2 49.4 61.2 85.2 83.6 SLCA [62] (ICCV \u201923) 86.3 52.8 84.7 RanPAC [30] (NeurIPS \u201923) 88.2 39.0+ 72.8 77.7+ 93.0 RanDumb (Ours) 84.5 47.8 66.9 88.0 93.4 ViT-B/16 (IN-21K Pretrained) Finetune 2.8 0.5 1.2 1.2 0.5 Adam-Adapt [64] (Arxiv \u201923) 82.4 48.8 55.4 86.7 84.4 Adam-SSF [64] (Arxiv \u201923) 82.7 46.0 59.7 86.2 84.9 Adam-VPT [64] (Arxiv \u201923) 70.8 34.8 53.9 84.0 81.1 Adam-FT [64] (Arxiv \u201923) 65.7 48.5 56.1 86.5 84.4 Foster [52] (ECCV \u201922) 87.3 5.1 86.9 iCARL [44] (CVPR \u201917) 71.6 35.1 71.6 NCM [22] (NeurIPS-W \u201922) 81.3 48.9 54.6 86.7 84.4 SLCA [62] (ICCV \u201923) 86.8 54.2 82.1 RanPAC [30] (NeurIPS \u201923) 89.6 26.8+ 67.3 87.2+ 88.2+ RanDumb (Ours) 86.8 42.2 64.9 88.5 92.3 Ablating Components of RanDumb. We ablate the contribution of only using Random Fourier features for embedding and decorrelation to the overall performance of RanDumb in Table 9 (top). Ablating the decorrelation and relying solely on random Fourier features, colloquially Table 9. RanDumb (Analysis). We thoroughly analyse the key components of RanDumb, and observe: (1) contributions of decorrelation and embedding, finding them interdependent and integral to performance; (2) consistent gains from augmentation across datasets; (3) marginal returns and saturation with higher embedding sizes, enabling computational tradeoffs; (4) increase in the optimal regularisation parameter for more complex datasets; (5) comparisons with alternate embeddings (or lack thereof) favoring RanDumb\u2019s approach. Method MNIST CIFAR10 CIFAR100 T-ImNet m-ImNet (10/1) (10/1) (10/1) (200/1) (100/1) Ablating Components of RanDumb RanDumb 98.3 55.6 28.6 11.1 17.7 -Decorrelate 83.8 (-14.5) 30.0 (-25.6) 12.0 (-16.6) 4.7 (-6.4) 8.9 (-8.8) -Embed 88.0 (-10.3) 41.6 (-14.0) 19.0 (-9.6) 8.0 (-3.1) 12.9 (-4.8) -Both 82.1 (-16.2) 28.5 (-27.1) 10.4 (-18.2) 4.1 (-7.0) 7.28 (-10.4) Effect of Adding Flip Augmentation With 55.6 28.6 11.1 17.7 Without 98.3 52.5 (-3.1) 26.9 (-1.7) 10.7 (-0.4) 16.6 (-1.1) Variation with Ridge Parameter \u03bb \u03bb = 10\u22126 98.3 53.9 27.8 10.3 15.8 \u03bb = 10\u22125 55.6 28.6 11.1 15.9 \u03bb = 10\u22124 96.6 52.6 26.1 11.6 17.7 Variation Across Embedding Projections No-Embed 88.0 41.6 19.0 8.0 12.9 RP+ReLU 95.2 48.8 23.1 9.7 15.7 RanDumb (Ours) 98.3 (+3.1) 55.6 (+6.8) 28.6 (+5.5) 11.1 (+1.4) 17.7 (+2.0) dubbed Kernel-NCM, has performance drops ranging from 6-25% across the datasets. Replacing random Fourier features with raw features, colloquially the SLDA baseline, leads to pronounced drop in performance ranging from 314% across the datasets. Moreover, ablating both components, resulting in the base nearest class mean classifier exhibits the poorest performance with an average reduction of 17%. The poor performance when ablating both decorrelation and embedding highlights their interdependence. Impact of Embedding Dimensions. We vary the dimen7 \fFigure 3. 
Impact of Embedding Dimensions. We show the variation of accuracy of RanDumb with embedding dimensionality across datasets. We see decreasing marginal returns with increasing dimensions, obtaining relatively minor improvements after 15K dimensions. sions of the random Fourier features ranging from compressing 3K input dimensions to 1K to projecting it to 25K dimensions and evaluate its impact on performance in Figure 3. Surprisingly, the random projection to a 3x compressed 1K dimensional space allows for significant performance improvement over not using embedding, given in Table 9 (top). Furthermore, increasing the dimension from 1K to 25K results in improvements of 3.6%, 10.4%, 7.0%, and 2.5% on MNIST, CIFAR10, CIFAR100, and TinyImageNet respectively. Increasing the embedding sizes beyond 15K, however, only results in modest improvements of 0.1%, 1.4%, 1.1% and 0.2% on the same datasets, indicating 15K dimensions would be a good point for a performancecomputational cost tradeoff. Impact of Flip Augmentation. We evaluate the impact of adding the flip augmentation on the performance of RanDumb in Table 9 (middle). Note that MNIST was not augmented. Augmentation provided large gains of 3.1% on CIFAR10, 1.7% on CIFAR100, and 0.4% on TinyImageNet. We did not try further augmenting the data with RandomCrop transform as done with standard augmentations. Impact of Varying Ridge Parameter. All prior experiments use a ridge parameter (\u03bb) that increases with dataset complexity: \u03bb = 10\u22126 for MNIST, 10\u22125 for CIFAR10 and CIFAR100, and 10\u22124 for TinyImageNet and miniImageNet. Table 9 (middle) shows the effect of varying \u03bb on RanDumb\u2019s performance. With a smaller \u03bb = 10\u22126, CIFAR10, CIFAR100, TinyImageNet and miniImageNet all exhibit minor drops of 0.1%-1.7%, 0.8%, 0.8%. Increasing shrinkage to a \u03bb = 10\u22124 reduces CIFAR10 and CIFAR100 performance more substantially by 3% and 2.5% versus their optimal \u03bb = 10\u22125. On the other hand, this larger \u03bb leads to improvements of 0.5% and 1.8% on TinyImageNet and miniImageNet. This aligns with the trend that datasets with greater complexity benefit from more regularisation, with the optimal \u03bb surfacing as a result of balancing underand over-regularisation effects. Comparison with RanPAC. We compared our random Fourier features with projection based on a random weight matrix and ReLU [30] (RP+ReLU) in Table 9 (middle) with their best embedding size. Our method performs significantly better on each dataset, averaging a gain of 3.4%. We believe this is attributable to using a theoretical grounded random projection compared to RanPAC [30], which is specifically designed for the setting using pretrained transformers. Comparisons across Architectures. In table 7 (right), we compare whether using random Fourier features as embeddings outperforms models across various architectures for continual representation learning. We use experience replay (ER) baseline in the task-incremental CIFAR100 setup (for details, see Mirzadeh et al. [34] as it differs significantly from earlier setups). Our comparison involved across various architectures. The findings revealed that RanDumb surpassed the performance of nearly all considered architectures, and achieved close to 94% of the joint multi-task performance. This suggests that RanDumb outperforms continual representation learning across a wide range of architectures."
+ },
+ {
+ "url": "http://arxiv.org/abs/2311.11293v1",
+ "title": "From Categories to Classifier: Name-Only Continual Learning by Exploring the Web",
+ "abstract": "Continual Learning (CL) often relies on the availability of extensive\nannotated datasets, an assumption that is unrealistically time-consuming and\ncostly in practice. We explore a novel paradigm termed name-only continual\nlearning where time and cost constraints prohibit manual annotation. In this\nscenario, learners adapt to new category shifts using only category names\nwithout the luxury of annotated training data. Our proposed solution leverages\nthe expansive and ever-evolving internet to query and download uncurated\nwebly-supervised data for image classification. We investigate the reliability\nof our web data and find them comparable, and in some cases superior, to\nmanually annotated datasets. Additionally, we show that by harnessing the web,\nwe can create support sets that surpass state-of-the-art name-only\nclassification that create support sets using generative models or image\nretrieval from LAION-5B, achieving up to 25% boost in accuracy. When applied\nacross varied continual learning contexts, our method consistently exhibits a\nsmall performance gap in comparison to models trained on manually annotated\ndatasets. We present EvoTrends, a class-incremental dataset made from the web\nto capture real-world trends, created in just minutes. Overall, this paper\nunderscores the potential of using uncurated webly-supervised data to mitigate\nthe challenges associated with manual data labeling in continual learning.",
+ "authors": "Ameya Prabhu, Hasan Abed Al Kader Hammoud, Ser-Nam Lim, Bernard Ghanem, Philip H. S. Torr, Adel Bibi",
+ "published": "2023-11-19",
+ "updated": "2023-11-19",
+ "primary_cat": "cs.LG",
+ "cats": [
+ "cs.LG"
+ ],
+ "main_content": "INTRODUCTION Continual Learning (CL) predominantly rely on annotated data streams, i.e., a common underlying assumption is the availability of well-curated, annotated datasets. However, the financial and temporal costs associated with continual annotation is staggering. To illustrate this, annotating 30K samples in the CLEAR10 dataset (Lin et al., 2021), a popular CL dataset, despite using optimized annotation workflows with large CLIP models (Radford et al., 2021), cost $4,500 and more than a day worth of annotation time. In contrast, businesses like Amazon and Fast Fashion companies constantly need to update their image classification models and associated recommendation engines due to changing inventory, seasonal and customer trends. Annotating labeled training sets every time for commercial classification models with thousands of categories and millions of samples is unrealistic, as it would take weeks and cost hundreds of thousands of dollars. In short, manual data collection and annotation are expensive and time-consuming, posing a bottleneck in real-world continual learning. To this end, we explore a new scenario called name-only continual learning1. As commonly done in the traditional continual learning, new categories or domain shifts are continuously introduced at each timestep and we need to quickly adapt the classification model to the changes in the stream; however, in this setting we cannot create annotated training datasets. At each timestep, the learner is only provided with category/class names and is allocated a computational budget to adapt to the new classes. At the end of each timestep, the learner is presented with test samples and its performance is assessed. To tackle this setting, we propose to leverage the ever-evolving internet \u2217authors contributed equally; order decided by a coin flip. Work done during Hasan\u2019s intership at the University of Oxford. 1We borrow the term name-only classification from (Udandarao et al., 2023). We do not use zero-shot classification (Lampert et al., 2009) as it aims to generalize to unseen categories without seeing any examples, using attribute information whereas name-only setting allows access to public models and data. 1 arXiv:2311.11293v1 [cs.LG] 19 Nov 2023 \fPreprint. by query and downloading uncurated webly-supervised data for continual image classification. This will dramatically speed up the process of continually updating classifiers, from once in several days to once practically every hour. Why Revisit Webly-Supervised Learning (Fergus et al., 2005; Schroff et al., 2010)? Recently, countries like Japan2 have enacted legislations allowing the use of online data for training deep models, irrespective of the copyright status. This follows the intuition that one can learn and be inspired from copyrighted materials so long they do not regenerate it or derivate works, such as with classification models. This allows us to leverage the internet, which functions as an ever-expanding database, continually updating itself with billions of new photos daily, staying current with the latest trends. Additionally, it provides search engines that traditionally offer highly relevant image results at scale, allowing us to query and download webly-supervised data cheaply and in just minutes. Being dynamic, the internet is ideal for continually updating to rapid changes in the stream. 
In this context, we address three crucial questions about the use of the web for training dataset creation: 1 How reliable is our uncurated webly-supervised data? To assess its quality, we compare performance of deep learning models on our webly-supervised training data with manually annotated datasets for fine-grained image classification, which typically require expert annotations. We find that in some cases models trained on uncurated webly-supervised data can equal or even surpass the performance of those trained on manually annotated datasets. We show that this performance primarily results from our ability to cheaply gather much larger training sets than manual annotation allows. 2 How does uncurated webly-supervised data compare to the latest name-only classification approaches? We demonstrate that using uncurated webly-supervised data, one can outperform alternative methods of dataset generation used in state-of-the-art name-only classification approaches (Udandarao et al., 2023; He et al., 2022; Wallingford et al., 2023) on the same CLIP model by an impressive 5-25% absolute accuracy improvement. Our approach can also generalize to vision-only self-supervised models like MoCoV3 ImageNet1K models (Chen et al., 2021). 3 Can we efficiently utilize uncurated webly-supervised data across various continual learning settings? We apply our name-only webly-supervised approach to various continual learning situations such as class-incremental (new classes introduced over time), domain incremental (new domains introduced over time), and time incremental (mimicking a chronologically ordered class-annotated stream). In each of the above scenarios where we had access only to class names, our models trained on uncurated webly-supervised data only had a small performance gap compared to those trained on curated datasets. To illustrate our capabilities beyond existing datasets, we introduce EvoTrends, a continual learning dataset that introduces trending products year-by-year from 2000 to 2020. This underscores our ability to build classifiers and deploy them in a continual manner within minutes without relying on manually curated training datasets. In summary, our primary contributions address the aforementioned three questions, conclusively showing that using uncurated webly-supervised data can significantly reduce the time and expense associated with manual annotation in the proposed name-only continual learning setting. 2 NAME-ONLY CONTINUAL LEARNING: PROBLEM FORMULATION In the name-only classification setup, the target is to learn a function f\u03b8 parameterized by \u03b8, where here, unlike traditional classification tasks, the only given information is the class categories denoted by Y. While additional context about the data distribution (e.g. cartoon, art, sketch,...) is allowed to be given in Y, no training samples are provided. In contrast to the zero-shot setting, the learner is allowed to use publicly available data and models, with the exception of the original training set and models trained on it. For example, the use of prominent backbones like GPT (OpenAI, 2023), DALL-E (Ramesh et al., 2022) and assembling a training set from public datasets such as LAION5B (Schuhmann et al., 2022) is allowed to obtain the classifier. The performance of the learner is subsequently assessed on a curated test set, X \u2217. We extend the name-only classification paradigm to continual learning, dubbing this name-only continual learning. 
In this setup, we perform name-only classification across multiple timesteps, t \u2208{1, 2, 3, . . . }. For each timestep t, a data stream S, unveils a distinct set of class categories, Yt. 2https://aibusiness.com/data/japan-s-copyright-laws-do-not-protect-works-used-to-train-ai2 \fPreprint. Notably, Yt might introduce categories absent in preceding timesteps; that is, a category yt \u2208Yt might not belong to Yj for all j < t. Subsequently, at each t, the algorithm must continually update the classifier f\u03b8 by using prominent backbones or publicly available data. Formally, the primary goal in continual learning, is to learn a classifier f\u03b8t : X \u2192St i=1 Yi, parameterized by \u03b8t, that correctly classifies a category from all the introduced class categories up to the current timestep. Given that evaluation samples could originate from any past class categories, i.e. yi \u2208St i=1 Yi, the updated model f\u03b8t must maintain its capabilities in classifying earlier seen classes. In summary, at every timestep t: 1. The data stream, S, presents a set of categories, Yt, to be learned. 2. Under a given computational budget, Ct, the classifier f\u03b8t\u22121 is updated to f\u03b8t. 3. To evaluate the learner, the stream S presents test samples {(xi, yi)}n i=1 with yi belonging to the collective set St i=1 Yi. In Step 3, it is important to note that the annotated test set is reserved solely for evaluation. Neither the images nor the labels from the test set are available for the model in any future training steps. Moreover, it is worth noting that computational budgeting has become the prevailing standard in CL Prabhu et al. (2023a). This practice involves setting limits, either in terms of computation or time, hence on the number of samples that can be generated or annotated for training purposes. 3 OUR APPROACH: CATEGORIES TO CLASSIFIER BY EXPLORING THE WEB Without access to training data, one might be tempted to use generative models to create training data. However, as explained in Section 2, the continual learner is constrained by a budget limit Ct. This budget constraint makes generative methods computationally impractical due to their high computational requirements. Hence, we discuss our approach, \u201cC2C\u201d, for transitioning from class categories to classifiers within a computational budget. At each timestep t, our approach involves takes main steps: (1) collecting data from the web, which we refer to as uncurated webly-supervised data and (2) training a classifier using this data. Step 1. Querying and Downloading Uncurated Webly-Supervised Training Data. There are several challenges associated with querying the web which raises questions that we address below: How to design web queries? The web is expansive and noisy, and simply class categories provided by stream are often not specific enough. Consider the category name \u201csnapdragon\u201d: on its own, search engines might yield images of computer chips. Hence, we design a simple querying strategy of adding an auxiliary suffix to refine our queries. Our searches follow the pattern: Category Name + Auxiliary Suffix. When building a flower dataset and querying \u201csnapdragon\u201d, appending the suffix \u201cflower\u201d refines the query to focus on the desired botanical images. Moreover, within domain-incremental settings, we can adapt our search by using domain-specific suffixes like \u201ccartoon\u201d for cartoon images. 
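A minimal sketch of this query construction and of downloading the resulting uncurated images is shown below; the helper names, the two-engine choice, and the directory layout are ours, and the open-source icrawler library (the multi-threaded crawling tool referenced in the footnotes) is used purely as an illustration.

```python
from concurrent.futures import ThreadPoolExecutor
from icrawler.builtin import BingImageCrawler, GoogleImageCrawler

def make_query(category, suffix=''):
    # Query pattern: Category Name + Auxiliary Suffix, e.g. make_query('snapdragon', 'flower')
    return f'{category} {suffix}'.strip()

def crawl_category(category, suffix='', per_engine=200, root='webly_data'):
    # Download uncurated webly-supervised images for one class category,
    # querying two engines concurrently (a simplified stand-in for the
    # four-engine, multi-node setup described in the text).
    query = make_query(category, suffix)
    crawlers = [
        BingImageCrawler(storage={'root_dir': f'{root}/{category}/bing'}),
        GoogleImageCrawler(storage={'root_dir': f'{root}/{category}/google'}),
    ]
    with ThreadPoolExecutor(max_workers=len(crawlers)) as pool:
        for crawler in crawlers:
            pool.submit(crawler.crawl, keyword=query, max_num=per_engine)

# Example: build a tiny flower training set from class names only.
for name in ['snapdragon', 'foxglove']:
    crawl_category(name, suffix='flower', per_engine=100)
```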
How do we prevent unintentional download of explicit images? Past webly-supervised methods have unintentionally collected explicit content from online sources (Birhane & Prabhu, 2021). To address this, we implemented some cost-effective safeguards. First, we enabled the strict safe-search feature on our search engines, which helps filter out explicit or inappropriate content. Second, we ensure that the class categories Y_t do not contain explicit terms by manually checking the queries and replacing potentially offensive terms with less offensive ones, e.g. "african ass" would be replaced by "african wild donkey" or "Equus africanus". We manually inspected a few hundred randomly sampled downloaded images and found no explicit content, providing preliminary evidence of the effectiveness of these safeguards. Improvements in the speed of querying and download. In a stress test, end-to-end scraping and downloading of 39 million Flickr samples required 12 days using a standard Python query-and-download pipeline. We optimized this down to just 2 days, a 600% improvement, using the same computational resources. To accelerate querying and downloading of uncurated internet data with this pipeline, we utilize parallelization across multiple dimensions: (1) We query four major search engines (Bing, Flickr, Google and DuckDuckGo) concurrently, using separate CPU nodes in a cluster. This allows for simultaneous querying across engines. (2) We use an efficient multi-threaded querying tool (https://github.com/hellock/icrawler) that handles image search queries in parallel for each engine. This tool utilizes FIFO threaded queues to concurrently manage the search and download workflows for each query. (3) After aggregating image links from the different engines, we leverage a parallelized image downloading tool (https://github.com/rom1504/img2dataset), which additionally applies postprocessing such as resizing. In conclusion, the key factors were concurrent querying across multiple search engines, fast multi-threaded querying per engine, and parallelized downloading and resizing of images. Figure 1: Continual Name-Only Classification: Our Approach. At each timestep t, the learner receives a list of class categories without any training samples. We start by collecting webly-supervised data through querying and downloading data from multiple search engines. We then extract features using a frozen backbone, and subsequently train a linear layer on those features. The same process is repeated for the next timestep. Step 2. Classifier Training. Once we have uncurated webly-supervised data, the next step is to train a classifier. At each timestep t, the learner is assigned a computational budget, denoted as C_t. Ideally, this budget should include the entire data collection process, whether it involves querying and downloading from the web or manual annotation. It is important to note that including this overhead within the budget would make it challenging or even impossible for manually annotated datasets to receive sufficient training, as their annotation pipeline incurs significant costs. We test three budgets: tight, normal, and relaxed. The normal budget allows training equivalent to 1 epoch on the first timestep of the manually annotated datasets (details in Appendix D). The "tight" budget is half of the normal, while the "relaxed" budget is four times the normal, as done in (Prabhu et al., 2023a).
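For illustration, the three budgets described above can be expressed as iteration counts. Treating a budget as a fixed number of training iterations, and the helper below, are our assumptions rather than the paper's exact accounting.

```python
def compute_budgets(first_timestep_size, batch_size):
    # "Normal" corresponds to roughly one epoch over the first timestep of the
    # manually annotated dataset; "tight" is half of that and "relaxed" four times it.
    normal = max(1, first_timestep_size // batch_size)
    return {"tight": normal // 2, "normal": normal, "relaxed": 4 * normal}

print(compute_budgets(first_timestep_size=5000, batch_size=128))
# e.g. {'tight': 19, 'normal': 39, 'relaxed': 156}
```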
Under this budgeted setup, we compare three continual learning baselines: (a) Linear Probing, (b) NCM (Mensink et al., 2013; Janson et al., 2022) and (c) KNN (Malkov & Yashunin, 2018; Prabhu et al., 2023b), providing insights into efficient CL methods with a fixed feature extractor. Our approach is summarized in Figure 1. In our continual name-only classification setting, for each timestep t, we query and download webly-supervised data based on the provided class categories Y_t, following the recipe described in Step 1. Once we complete downloading the data, the classifier is trained not only on the uncurated data downloaded at the current timestep t but also on the uncurated data downloaded at all prior timesteps. 4 EVALUATING CAPABILITIES OF UNCURATED WEBLY-SUPERVISED DATA We begin by examining two main questions: 1 How reliable is uncurated webly-supervised data? Specifically, can models trained on these training sets match the accuracy of those trained on expert-annotated training sets? 2 How does the uncurated webly-supervised data compare to the latest name-only classification approaches? For instance, can models trained on our data surpass the latest methods tailored for vision-language models, such as CLIP, in a name-only classification context? We analyze these questions in the better-studied non-continual name-only classification setting, where one is provided with a set of class categories to be learnt.
Table 1: Performance Analysis between Uncurated Webly-Supervised Data (C2C) and Manually Annotated Training (MA) Data. Despite utilizing uncurated web data, our results demonstrate competitive or even better performance than that of manually annotated datasets in fine-grained categorization tasks. The most notable improvements are observed when using MLP-adapters.
Evaluation | Training Dataset | FGVC Aircraft | Flowers102 | OxfordIIITPets | Stanford Cars | BirdSnap
Linear Probe | MA Data | 38.5% | 83.3% | 89.8% | 56.3% | 46.2%
Linear Probe | C2C (Ours) | 57.5% (+19.0%) | 85.7% (+2.4%) | 91.7% (+1.9%) | 62.1% (+5.8%) | 56.1% (+9.9%)
MLP Adapter | MA Data | 46.0% | 80.3% | 89.7% | 57.6% | 47.7%
MLP Adapter | C2C (Ours) | 65.5% (+19.5%) | 87.1% (+6.8%) | 92.8% (+3.1%) | 66.8% (+9.2%) | 53.7% (+6.0%)
Finetune Backbone | MA Data | 76.6% | 94.3% | 92.8% | 91.6% | 70.4%
Finetune Backbone | C2C (Ours) | 94.8% (+18.2%) | 93.3% (-1.0%) | 94.7% (+1.9%) | 92.8% (+1.2%) | 69.9% (-0.5%)
4.1 EXPERIMENTAL DETAILS Datasets. We focus primarily on fine-grained classification tasks for two main reasons: (i) such tasks present a greater challenge than coarse-grained datasets, especially when sourced from noisy data sources such as the web, and (ii) they are prevalently employed in name-only classification benchmarks, facilitating comprehensive comparisons with existing methods. We evaluate the classification accuracy across five benchmarks that contain a broad selection of classes: (1) FGVC Aircraft (Maji et al., 2013), (2) Flowers102 (Nilsback & Zisserman, 2008), (3) OxfordIIITPets (Parkhi et al., 2012), (4) Stanford Cars (Krause et al., 2013), and (5) BirdSnap (Berg et al., 2014). Models. We use a fixed backbone, ResNet50 MoCoV3 (Chen et al., 2021), and experiment with two classifiers on top: (i) Linear Probe and (ii) MLP Adapter. The MLP Adapter consists of a three-layer model: input dim → 512, 512 → 256, and 256 → num classes, with Dropout(0.5) and ReLU nonlinearities. Additionally, we also try fine-tuning both the backbone and a linear layer.
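A minimal PyTorch sketch of the MLP adapter specified above (input dim -> 512 -> 256 -> num classes with Dropout(0.5) and ReLU); the exact placement of dropout relative to the activations is our assumption.

```python
import torch.nn as nn

class MLPAdapter(nn.Module):
    def __init__(self, input_dim=2048, num_classes=100):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(input_dim, 512), nn.ReLU(), nn.Dropout(0.5),
            nn.Linear(512, 256), nn.ReLU(), nn.Dropout(0.5),
            nn.Linear(256, num_classes),
        )

    def forward(self, features):
        # `features` are embeddings from the frozen ResNet50 MoCoV3 backbone.
        return self.net(features)
```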
Training Procedure. For linear probing and MLP adapter experiments, we freeze the backbone and extract features from both our uncurated webly-supervised data and the manually annotated (MA) datasets. We then perform linear probing and MLP adapter training on the extracted features. Our training uses an Adam optimizer with a batch size of 512 and a learning rate of 0.001. We use an LR-on-plateau scheduler with a patience of 10 and a decay factor of 0.1. Models are trained for 300 epochs, reaching convergence within 10-50 epochs. For finetuning experiments on both our uncurated webly-supervised data and the manually annotated (MA) datasets, we use an SGD optimizer with a learning rate of 0.1 and a linear learning rate scheduler. A batch size of 128 and standard data augmentations are applied. Models are trained until convergence on both the uncurated web data and the manually annotated training sets, within 50 epochs for our uncurated web data and up to 250 for the manually annotated datasets. Class-balanced random sampling is used for all experiments, which is especially helpful for data downloaded from the internet given its natural long-tail distribution. 4.2 HOW RELIABLE IS UNCURATED WEBLY-SUPERVISED DATA? We begin by addressing our first fundamental question: 1 Can uncurated webly-supervised data serve as a substitute for meticulously curated training data? Put simply, can web data match the performance of manually annotated datasets? Results and Key Findings. Table 1 contrasts the performance of our uncurated webly-supervised data with manually annotated datasets. Remarkably, classifiers trained on our webly-supervised data surpass those trained on manually annotated datasets by a margin of 1-19% (highlighted in green). In the worst-case scenario, there is a performance decline of less than 1% (marked in red). The most pronounced improvement, ranging from 3-19%, arises when our webly-supervised data is combined with an MLP-Adapter. As anticipated, fine-tuning yields superior results compared to the MLP adapter, which in itself outperforms linear probing. In summary, classifiers trained on uncurated webly-supervised data consistently outperform those trained on manually annotated datasets across different classifiers and datasets. This finding is counterintuitive, given that our web data is: (i) uncurated, (ii) noisy, and (iii) out-of-distribution with respect to the test set. The reason behind this apparent paradox can be attributed to the dataset sizes. Details are provided in Appendix B and summarized below. How Does Scale Influence Our Performance? Our webly-supervised datasets are notably large due to the cheap query and download process, being approximately 15 to 50 times larger than the manually annotated datasets. Hence, we explore the impact of scale by limiting our queries to search engines to return only the top-k images in Table 2. Our results suggest that query size is the primary driver of the performance gains. When we limit our query size to match the size of the manually annotated datasets (using the top 10 or 20 images per engine per class), there is a drop in accuracy of 10-20% relative to manually curated datasets. However, as we gather more data, we consistently observe performance improvements. This scalability is only possible because of the virtually negligible cost of collecting more data. The primary cause of the superior performance is thus scaling the size of the downloaded data, without the high costs of manual annotation or other checks.
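Returning to the linear-probing recipe described under Training Procedure above (Adam, batch size 512, learning rate 0.001, LR-on-plateau scheduling on precomputed features), a minimal sketch might look as follows; the helper name and the loss-driven scheduler step are simplifying assumptions.

```python
import torch
from torch import nn, optim
from torch.utils.data import DataLoader, TensorDataset

def train_linear_probe(features, labels, num_classes, epochs=300):
    # `features`/`labels` are tensors extracted once with the frozen backbone.
    probe = nn.Linear(features.shape[1], num_classes)
    opt = optim.Adam(probe.parameters(), lr=1e-3)
    sched = optim.lr_scheduler.ReduceLROnPlateau(opt, patience=10, factor=0.1)
    loader = DataLoader(TensorDataset(features, labels), batch_size=512, shuffle=True)
    loss_fn = nn.CrossEntropyLoss()
    for _ in range(epochs):
        epoch_loss = 0.0
        for x, y in loader:
            opt.zero_grad()
            loss = loss_fn(probe(x), y)
            loss.backward()
            opt.step()
            epoch_loss += loss.item()
        sched.step(epoch_loss / len(loader))  # decay LR when the training loss plateaus
    return probe
```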
In Appendix B, we explore various factors that, surprisingly, had little to no impact on the effectiveness of our approach. Our approach demonstrated strong performance across various model architectures and training protocols. Its strength was mostly evident when sourcing data from multiple web engines (Google, Flickr, DuckDuckGo Bing), effectively handling diverse data distributions. Surprisingly, even after cleaning our web data using deduplication and automatic removal of noisy samples, reducing the data size by 30%, the performance remained unaffected. This suggests that the main challenges are likely due to out-of-domain instances rather than reducing noise or duplicate samples. Lastly, class-balanced sampling does not affect the performance of our model, indicating that further exploration of long-tailed loss functions (Karthik et al., 2021), may not yield significant improvements. 4.3 COMPARISON WITH NAME-ONLY CLASSIFICATION STRATEGIES We now address our second question: 2 How does the performance of webly-supervised datasets compare to the latest name-only classification approaches? Can web data surpass the latest methods tailored for vision-language models, such as CLIP, in a name-only classification context? Comparison with Recent Advances. Traditional name-only classification methods are often built upon zero-shot CLIP (CLIP-ZS) (Radford et al., 2021). CLIP-ZS works by using text prompts that contain category names to classify images. For each test data point, it predicts the class by finding the category name prompt that best matches the input image. Recent research has introduced improvements to this approach in three main areas: (i) Better Text Prompts: Methods like VisDesc (Menon & Vondrick, 2022), CuPL (Pratt et al., 2023) and WaffleCLIP (Roth et al., 2023) have explored more effective text prompts to enhance classification accuracy; (ii) Creating Pseudo-training Datasets: Approaches such as Glide-Syn (He et al., 2022) and Sus-X (Udandarao et al., 2023), and Neural Priming (Wallingford et al., 2023) focus on creating training datasets either by retrieval from LAION5B or generating samples from diffusion models to improve model performance, with retrieval being better (Burg et al., 2023); (iii) Enhanced Adapters: CALIP (Guo et al., 2023) , along with Glide-Syn (He et al., 2022) and Sus-X (Udandarao et al., 2023) propose improved adapters for CLIP models to enhance their classification abilities. There are alternative approaches, like SD-Clf (Li et al., 2023a), which shows the effectiveness of stable-diffusion models for classification tasks. Additionally, CaFo (Zhang et al., 2023) explores chaining different foundation models for tasks including name-only classification. We describe these approaches in detail in Appendix C. Results. To evaluate our approach, we compare it against recent strategies using the ResNet50 CLIP model for a fair comparison. The results are summarized in Table 3; comparisons on CLIP ViT-B/16 model can be found in Appendix C. Consistently, our approach outperforms other leading methods such as CaFo and SuS-X-LC, with performance improvements between 2-25%. Additionally, we apply our apporach to vision-only ResNet50 MoCoV3 model trained on ImageNet1K. Notably, this often performs significantly better out-of-the-box than CLIP variants, with impressive improvements of 2-8%, offering new insights on recent works (Li et al., 2023b). 
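For reference, the CLIP-ZS baseline described above (classifying an image by matching it against text prompts built from the class names) can be sketched as below; the prompt template, image path, and use of OpenAI's clip package are illustrative assumptions rather than the exact prompts used by the cited methods.

```python
import torch
import clip
from PIL import Image

model, preprocess = clip.load("RN50", device="cpu")
class_names = ["snapdragon", "rose", "daisy"]
prompts = clip.tokenize([f"a photo of a {c}, a type of flower" for c in class_names])
image = preprocess(Image.open("query.jpg")).unsqueeze(0)

with torch.no_grad():
    text_feats = model.encode_text(prompts)
    image_feats = model.encode_image(image)
    text_feats = text_feats / text_feats.norm(dim=-1, keepdim=True)
    image_feats = image_feats / image_feats.norm(dim=-1, keepdim=True)
    prediction = class_names[(image_feats @ text_feats.T).argmax().item()]
```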
Moreover, employing an MLP Adapter results in a 1-4% boost in performance over linear probing, and this is achieved with minimal added computational cost when compared to extracting features from a ResNet50 model. 6 \fPreprint. Table 2: Impact of Size of Queried Webly-Supervised Data on Performance. This table illustrates the influence of downsizing our queried web data by considering only the top-k queries for download. Notably, a substantial performance drop occurs as the dataset size decreases. Despite the higher quality of the top-k samples, their limited quantity adversely affects performance. We use Manually Annotated Training (MA) Data as a reference point. Datasets Eval. Dataset Size FGVC Aircraft Flowers102 OxfordIIITPets Stanford Cars BirdSnap Linear Probe MA Data 38.5% 83.3% 89.8% 56.3% 46.2% Top-10/engine/class 18.3% ( -20.2%) 64.3% ( -19.0%) 82.0% ( -7.8%) 34.3% ( -22.0%) 36.4% ( -9.8%) Top-20/engine/class 33.8% ( -4.7%) 71.7% ( -11.6%) 87.3% ( -2.5%) 45.7% ( -10.6%) 42.1% ( -4.1%) Top-50/engine/class 40.8% (+2.3%) 77.7% ( -5.6%) 88.7% ( -1.1%) 57.9% (+1.6%) 48.5% (+2.3%) Top-100/engine/class 52.4% (+13.9%) 80.8% ( -2.5%) 90.1% (+0.3%) 64.6% (+8.3%) 52.4% (+6.2%) Top-200/engine/class 56.6% (+18.1%) 82.7% ( -0.6%) 90.7% (+0.9%) 67.8% (+11.5%) 54.6% (+8.4%) All Data 57.5% (+19.0%) 85.7% (+2.4%) 91.7% (+1.9%) 62.1% (+5.8%) 56.1% (+9.9%) MLP-Adapter MA Data 46.0% 80.3% 89.7% 57.6% 47.7% Top-10/engine/class 32.2% ( -13.8%) 66.0% ( -14.3%) 86.7% ( -3.0%) 39.8% ( -17.8%) 36.4% ( -11.3%) Top-20/engine/class 37.4% ( -8.6%) 72.6% ( -7.7%) 88.1% ( -1.6%) 49.5% ( -8.1%) 42.3% ( -5.4%) Top-50/engine/class 51.4% (+5.4%) 77.3% ( -3.0%) 89.9% (+0.2%) 61.1% (+3.5%) 47.9% (+0.2%) Top-100/engine/class 58.2% (+12.2%) 81.9% (+1.6%) 90.6% (+0.9%) 65.8% (+8.2%) 50.1% (+2.4%) Top-200/engine/class 63.4% (+17.4%) 84.1% (+3.8%) 91.9% (+2.2%) 70.3% (+12.7%) 54.4% (+6.7%) All Data 65.5% (+19.5%) 87.1% (+6.8%) 92.8% (+3.1%) 66.8% (+9.2%) 53.7% (+6.0%) Table 3: Comparison with Name-Only Classification Techniques with ResNet50: When comparing with existing state-of-the-art name-only classification techniques, we show that our method outperforms those methods by margins ranging from 2% to 25%. Type Method Model Birdsnap Aircraft Flowers Pets Cars DTD Data-Free CLIP-ZS (Radford et al., 2021) CLIP 32.6 19.3 65.9 85.4 55.8 41.7 CaFo-ZS (Zhang et al., 2023) CLIP 17.3 66.1 85.8 55.6 50.3 CALIP (Guo et al., 2023) CLIP 17.8 66.4 86.2 56.3 42.4 CLIP-DN (Zhou et al., 2023) CLIP 31.2 17.4 63.3 81.9 56.6 41.2 CuPL (Pratt et al., 2023) CLIP 35.8 19.3 65.9 85.1 57.2 47.5 VisDesc (Menon & Vondrick, 2022) CLIP 35.7 16.3 65.4 82.4 54.8 42.0 SD-Clf (Li et al., 2023a) SD-2.0 26.4 66.3 87.3 Use-Data GLIDE-Syn (He et al., 2022) CLIP 38.1 22.0 67.1 86.8 56.9 43.2 CaFo (Zhang et al., 2023) CLIP 21.1 66.5 87.5 58.5 50.2 SuS-X-LC (Udandarao et al., 2023) CLIP 38.5 21.1 67.1 86.6 57.3 50.6 SuS-X-SD (Udandarao et al., 2023) CLIP 37.1 19.5 67.7 85.3 57.2 49.2 C2C (Ours-Linear Probe) CLIP 48.1 (+9.6) 44.0 (+22.0) 82.0 (+14.3) 88.1 (+0.6) 71.3 (+12.8) 57.1 (+6.5) C2C (Ours-MLP Adapter) CLIP 46.6 (+8.1) 48.9 (+26.9) 84.8 (+17.1) 89.4 (+1.9) 72.6 (+14.1) 57.6 (+7.0) C2C (Ours-Linear Probe) MocoV3 56.1 (+17.6) 57.5 (+35.5) 85.7 (+18.0) 91.7 (+4.2) 62.1 (+3.6) 54.6 (+4.0) C2C (Ours-MLP Adapter) MocoV3 53.7 (+15.2) 65.5 (+43.5) 87.1 (+19.4) 92.8 (+5.3) 66.8 (+8.3) 55.8 (+5.2) Why Does Our Webly-Supervised Data Outperform Other Approaches? 
A fundamental factor in the superior performance of our approach is again the scale of our uncurated webly-supervised data. We download roughly ten times more data than is used in alternative approaches (detailed in Appendix C). One might wonder: why not just scale up the datasets used by other methods? Retrieval-augmented techniques such as SuS-X (Udandarao et al., 2023) and Neural Priming (Wallingford et al., 2023), our closest competitors in performance, experience stagnation or even a decline in results when expanded to 100 samples per class, as illustrated in Figure 6 of Udandarao et al. (2023) and discussed in Appendix B of Wallingford et al. (2023). Conversely, our method still achieves marked improvements in accuracy even as dataset sizes approach 500-750 samples per class, as previously highlighted in Table 2. Alternative dataset generation methods, like diffusion models (He et al., 2022; Zhang et al., 2023), come with a significant computational cost, yet they do not surpass retrieval methods built on sources such as LAION-5B in performance (Udandarao et al., 2023; Burg et al., 2023). To provide some context, producing a dataset equivalent in size to ours (~150K samples) using generative techniques like stable-diffusion demands a staggering 32 hours of computation on 8 A100 GPUs. In contrast, our approach collects the same dataset in around 15 minutes using a basic CPU machine. 5 CONTINUAL WEBLY-SUPERVISED LEARNING Building upon our prior observations regarding the efficiency of collecting webly-supervised data and its effectiveness for name-only classification, we now test this approach in the context of continual name-only classification. Within this framework, the learner is solely provided with category names, and potentially descriptions, necessitating the continuous and streamlined construction of data and updating of the classifier. To assess the robustness and adaptability of our approach, we subject it to a diverse range of data streams encountered in various continual learning scenarios, namely: (i) class-incremental: the incremental addition of classes, (ii) domain-incremental: incremental adaptation to known domain shifts, and (iii) time-incremental: the gradual incorporation of new data over time. The subsequent subsection presents a comprehensive overview of the experimental setup and the corresponding results obtained from these three scenarios. 5.1 EXPERIMENTAL DETAILS Datasets: We assess the effectiveness of our uncurated webly-supervised data in three different continual learning (CL) scenarios. For each scenario, we compare the performance of our downloaded data with manually annotated data. This evaluation setup aligns with the traditional CL paradigm, where labeled training data is revealed sequentially in the data stream. It is worth noting that methods with access to manually annotated data naturally have an inherent advantage. In principle, manually annotated data serves as a soft upper bound to our webly-supervised approach. However, our primary goal is to determine to what extent web-supervised datasets can bridge this performance gap, with extreme limits of < 1 hour and cost <$15 on AWS servers. Our experiments focus on the following three CL setups: Class-Incremental: In this setting, we use CIFAR100, which is partitioned into ten timesteps, where at each timestep ten new class categories are introduced. CIFAR100 exhibits a notable domain gap due to its samples being old, centered, and downscaled to 32x32 pixels.
To match this resolution, we downscale our images to 32x32 as well. The queries provided in this case simply consist of the class names for all previously encountered classes. Domain-Incremental: In this setting, we use the PACS dataset (Li et al., 2017b), which comprises four timesteps and is suitable for the domain-incremental setup. Each timestep introduces a new domain, namely Photos, Art, Cartoon, and Sketches. The primary challenge here lies in adapting to the distinct visual styles associated with each domain. The queries in this case are composed of a combination of class names and the names of the visual domains. Time-Incremental: In this setting, we use the CLEAR10 dataset (Lin et al., 2021), a recently popular CL dataset, by incorporating timestamps from the CLEAR benchmark (https://github.com/linzhiqiu/continual-learning/blob/main/clear_10_time.json) into our web queries. Our web queries for categories are consistent across timesteps; however, samples are filtered by timestamp to match the CLEAR time categorization. Here we only use Flickr, as it supports timestamped querying. Optimizing Data Collection. To optimize the creation of our webly-supervised datasets while adhering to time constraints, we conduct additional experiments involving the retrieval of only the top-k most relevant samples per search engine. Specifically, we explore two settings: k = 20 and k = 50. This approach significantly diminishes the cost and time associated with querying the web and feature extraction. Training Models. We note that we do not restrict the storage of past samples, unlike previous literature, as download links largely remain accessible. If a download link expires, we do not use that sample, allowing realistic privacy evaluation. However, we note that no links expired during the course of our study, and only a small fraction (<5%) of the links in the CLOC dataset, collected up to 2014, have become invalid to date. Hence, we follow the constraints specified in Prabhu et al. (2023a), limiting the computational budgets but imposing no storage constraints. We train a linear probe under varying computational budgets. Linear probing results are compared to NCM (Mensink et al., 2013; Janson et al., 2022) and KNN (Malkov & Yashunin, 2018; Prabhu et al., 2023b) classifiers. We use a ResNet50 MoCoV3 backbone for all experiments, since SSL training has been shown to help in CL tasks (Gallardo et al., 2021). For linear probing, we use the same optimization parameters provided earlier, except that we constrain the iterations according to our compute budgets C_t. For more details about the computational budgets please refer to Appendix D. We set k = 1 and use cosine distance for KNN. During the training process, we implement experience replay and utilize class-balanced sampling to select training batches from the previously collected samples.
Table 4: Linear Probe Performance in Continual Learning Scenarios (Avg. Acc. ↑). Our uncurated webly-supervised data achieves average accuracy close to manually annotated (MA) datasets in a continual learning context, with relatively small performance gaps.
Eval Dataset | Split-CIFAR100 | Split-PACS | CLEAR10
MA Data | 43.2% | 82.8% | 70.0%
C2C (Ours) | 38.7% (-4.5%) | 80.8% (-2.0%) | 65.3% (-4.7%)
C2C (Top-20/engine/class) | 39.2% (-4.0%) | 79.9% (-2.9%) | 62.0% (-8.0%)
C2C (Top-50/engine/class) | 39.5% (-3.7%) | 78.6% (-4.2%) | 60.8% (-9.2%)
Metrics. We compute the average incremental accuracy for all three settings (Rebuffi et al., 2017); a minimal sketch of this metric follows below.
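A minimal sketch of the average incremental accuracy used here: evaluate on the test set after each timestep's training and average the resulting accuracies. The `evaluate` helper is a hypothetical placeholder, not part of the released code.

```python
def average_incremental_accuracy(models_per_timestep, test_sets, evaluate):
    # evaluate(model, test_set) -> accuracy over all classes seen up to that timestep
    accs = [evaluate(m, ts) for m, ts in zip(models_per_timestep, test_sets)]
    return sum(accs) / len(accs)
```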
Briefly, we compute the accuracy on the available test set after finishing training of each timestep, which gives us a graph of accuracies over time. The average incremental accuracy is the aggregate measure of these incremental accuracies, which gives average performance of the method over time. 5.2 RESULTS We evaluate the efficacy of our uncurated web data in the context of continual name-only learning and compare the results with manually annotated datasets in various scenarios. Linear probing results are presented in Table 4, while additional results for NCM and KNN can be found in the Appendix. Despite having access solely to class/category names, our uncurated webly-supervised data achieves accuracies that are close to training on the manually annotated datasets. We note that the performance on manually annotated datasets serves as an upper bound and not a fair comparison as they require expensive curation process, they are well-aligned with the test distribution as both sets are created from the same sampling of data. The performance gap between them is small, ranging from 2-5%, with the exception of CLEAR10 where it reaches 5-10%. In Table 6, we also consider the time required for querying and downloading our uncurated continual webly-supervised data. Remarkably, we are able to generate the web data within minutes, instead of days, across a variety of continual learning scenarios allowing more budget for computational resources. All experiments were completed in < 1 hour and cost <$15 on AWS servers. This approach delivers both a performance comparable to manually annotated datasets and significantly reduces associated expenses, which typically exceed $4500. Understanding the Performance Gap: While our webly-supervised dataset (C2C) has shown promise, a performance discrepancy exists when compared to the manually annotated data (MA Data) . This performance lag, is only slightly behind the ideal scenario of using in-distribution annotated data. The natural question that arises is: Why cannot we bridge this performance gap and possibly exceed it, as observed in Section 4? Two primary distinctions from Section 4 can explain this: (i) The current training operates within a limited computational budgets, and (ii) The size difference between our webly-supervised continual datasets and manually annotated datasets has notably shrunk, transitioning from a substantial 30 \u2212100\u00d7 difference to a mere 2 \u22123\u00d7. It is important to note that in Section 4, when we match the size of the manually annotated datasets by considering only the top-20 web queries, we observe a similar gap to that in this section. Nevertheless, the improvements in speed and reduction in annotation costs significantly outweigh this gap. Firstly, in the case of PACS, a significant domain gap arises between web sketches which refer to line-drawings and manually annotated sketches which refer to quick-draws. This domain shift results in a performance gap, which is challenging to bridge with the inclusion of additional sketch data from the internet. Second, in CIFAR100, images are carefully selected and often do not reflect real-world data streams. Specifically, they consist of older, centered, and downsampled images, which strongly contrast with the dynamic and varied nature of web data harnessed in our approach. This difference highlights the importance of considering more realistic data streams, over handpicked and potentially unrepresentative datasets. 
Lastly, in the context of CLEAR10, our analysis uncovers data collection inaccuracies, particularly in the bus class. While our web datasets consist of images depicting the exterior of buses, the manually annotated CLEAR10 dataset primarily includes interior images of buses in the train/test set. Given that the bus class constitutes one out of ten classes in CLEAR10, this mismatch accounts for a sizeable share of the remaining gap.
Table 5: Comparing Last-Layer Based Continual Learning Approaches in Name-Only Continual Learning (Avg. Acc. ↑). We evaluate the average accuracy of various continual learning methods in a name-only continual learning scenario with constrained computational resources. Surprisingly, KNN achieves superior results compared to linear probing, even while operating within a lower computational budget than the "tight" setting. (Columns: Classifier, Budget, Split-CIFAR100, Split-PACS, CLEAR10.)
Mobile NAND storage cost ($/MB), by year: 2000: 1100; 2005: 85; 2010: 1.83; 2017: 0.25; 2022: 0.03. Cost of storing CLOC ($): 4M; 300K; 6,350; 850; 70. Training cost (ER): not currently feasible.
(Koh et al., 2022). While limited storage aligns with the practical constraints of biological learning agents and offline embodied artificial agents, deep learning-based systems are largely compute-constrained and demand high throughput. Such systems need to process incoming data points faster than the rate of the incoming stream to effectively keep up with it. Cai et al. (2021) show that even with unlimited storage, the online continual learning problem is hard, as limited computational budgets implicitly limit the set of samples that can be used for each training update. Our paper addresses the online continual learning problem, not from a storage-limitation standpoint, but with a focus on computational budgets. We propose a system based on the approximate k-nearest neighbour (kNN) algorithm (Malkov & Yashunin, 2018), following its five desirable properties: i) Approximate kNNs are inherently incremental algorithms with explicit insert and retrieve operations, allowing them to rapidly adapt to incoming data; ii) With a suitable representation, approximate kNN algorithms are exceptionally effective models at large scale (Efros, 2017); iii) Approximate kNN algorithms are computationally cheap, with a graceful logarithmic scaling of computation despite compactly storing and using all past samples; iv) kNN does not forget past data; in other words, if a data point from history is queried again, the query yields the same label; v) It has no stability gap (De Lange et al., 2022). Ideally, a continual learner should learn and update the feature representations over time. We defer this interesting problem to future work and instead show that, combined with kNNs, it is possible for feature representations pre-trained on a relatively smaller dataset (ImageNet1K) to reasonably tackle complex tasks such as geolocalization over 39 million images (CLOC) and long-tailed fine-grained classification (CGLM). While this does not solve the underlying continual representation learning problem, it does show the effectiveness of a simple method on large-scale online continual learning problems, demonstrating viability for many real-world applications. Additionally, our approach overcomes a significant limitation of existing gradient-descent-based methods: the ability to learn from a single example. Updating a deep network for every incoming sample is computationally infeasible. In contrast, a kNN can efficiently learn from this sample, enabling rapid adaptation.
We argue that the capacity to adapt to a single example while leveraging all past seen data is essential for truly online operation, allowing our simple method to outperform existing continual learning baselines. Problem formulation. We formally define the online continual learning (OCL) problem following Cai et al. (2021). In classification settings, we aim to continually learn a function f : X \u2192Y, parameterized by \u03b8t at time t. OCL is an iterative process where each step consists of a learner receiving information and updating its model. Specifically, at each step t of the interaction, 1. One data point xt \u223c\u03c0t sampled from a non-stationary distribution \u03c0t is revealed. 2. The learner makes the scalar prediction \u02c6 yt = f(xt; \u03b8t) using a compute budget, Bpred t . 3. Learner receives the true label yt. 4. Learner updates the model \u03b8t+1 using a compute budget, Blearn t We evaluate the performance using the metrics forward transfer (adaptability) and backward transfer (information retention) as given in Cai et al. (2021). A critical aspect of OCL is the budget in the second and fourth steps, which limits the computation that the learner can expend. A common choice in past work is to 1We expand the discussion on these properties in detail in Section 3 2 \fTable 2: Breakdown of popular OCL systems, with key contributions in red. Most methods focus on sampling techniques for storing datapoints, which cannot transfer here as we store all past samples. Works MemSamp BatchSamp Loss Other Cont. ER (Base) Random Random CEnt GSS (Aljundi et al., 2019b) GSS Random CEnt MIR (Aljundi et al., 2019a) Reservoir MIR CEnt ER-Ring (Chaudhry et al., 2019b) RingBuf Random CEnt GDumb (Prabhu et al., 2020) GreedyBal Random CEnt MR HAL (Chaudhry et al., 2021) RingBuf Random CEnt HAL CBRS (Chrysakis & Moens, 2020) CBRS Weighting CEnt CLIB (Koh et al., 2022) ImpSamp Random CEnt MR, AdO CoPE (De Lange & Tuytelaars, 2021) CBRS Random PPPLoss CLOC (Cai et al., 2021) FIFO Random CEnt AdO InfoRS (Sun et al., 2022) InfoRS Random CEnt OCS (Yoon et al., 2022) OCS Random CEnt AML (Caccia et al., 2022) Reservoir PosNeg AML/ACE impose a fixed limit on storage and computation (Cai et al., 2021). We remove the storage constraint and argue that storing the entirety of the data is cost-effective as long as impact on computation is controlled. We relax the fixed computation constraint to a logarithmic constraint. In other words, we require that the computation time per operation fit within Bpred t , Blearn t \u223cO(log t). This construction results in total cost scaling O(n log n) with the amount of data. 2 Related Work Formulations. Parisi et al. (2019) and De Lange et al. (2020) have argued for improving the realism of online continual learning benchmarks. Earliest formulations (Lopez-Paz & Ranzato, 2017) worked in a task-incremental setup, assuming access to which subset of classes a test sample is from. Subsequent mainstream formulation (Aljundi et al., 2019b,a) required models to predict across all seen classes at test time, with progress in the train-time sample ordering (Bang et al., 2021; Koh et al., 2022). However, Prabhu et al. (2020) highlighted the limitations of current formulations by achieving good performance despite not using any unstored training data. 
Latest works (Hu et al., 2022; Cai et al., 2021; Lin et al., 2021) overcome this limitation by testing the capability for rapid adaptation to next incoming sample and eliminate data-ordering requirements by simply using timestamps of real-world data streams. Our work builds on the latest generation of formulation by Cai et al. (2021). Unlike Cai et al. (2021), we perform one-sample learning; in other words, we entirely remove the concept of task by processing the incoming stream one sample at a time, in a truly online manner. Additionally, we further remove the storage constraint which is the key to addressing issues discussed in GDumb (Prabhu et al., 2020). Methods. Traditional methods of adapting to concept drift (Gama et al., 2014) include a variety of approaches based on SVMs (Laskov et al., 2006; Zheng et al., 2013), random forests (Gomes et al., 2017; Ristin et al., 2015; Mourtada et al., 2019), and other models (Oza & Russell, 2001; Mensink et al., 2013). They offer incremental additional and querying properties, most similar to our method, but have not been compared with recent continual learning approaches (Ostapenko et al., 2022; Hayes & Kanan, 2020; Hayes et al., 2019). We perform extensive comparisons with them. The (online) continual learning methods designed for deep networks are typically based on experience replay (Chaudhry et al., 2019b) and change a subset of the three aspects summarized in Table 2: (i) the loss function used for learning, (ii) the algorithm to sample points into the replay buffer, and (iii) the algorithm to sample a batch from the replay buffer. Methods to sample points into the replay buffer include works such 3 \fFigure 1: Adaptive Continual Memory (ACM) performs Memory.Retrieve and Memory.Insert on new incoming samples, extracted by a fixed, pretrained deep network. as GSS (Aljundi et al., 2019b), RingBuffer (Chaudhry et al., 2019b), class-balanced reservoir (Chrysakis & Moens, 2020), greedy balancing (Prabhu et al., 2020), rainbow memory (Bang et al., 2021), herding (Rebuffi et al., 2017), coreset selection (Yoon et al., 2022), information-theoretic reservoir (Sun et al., 2022), and samplewise importance (Koh et al., 2022). These approaches do not apply to our setting because we simply remove the storage constraint. Approaches to sampling batches from the replay buffer include MIR (Aljundi et al., 2019a), ASER (Shim et al., 2021), and AML (Caccia et al., 2022). These require mining hard negatives or performing additional updates for importance sampling over the stored data, which face scaling issues to large-scale storage as in our work. We compare with some of the above approaches, including ER as proposed in Cai et al. (2021), that finetune the backbone deep network with one gradient update for incoming data, with unrestricted access to past samples for replay. Pretrained representations. Pretrained representations (Yuan et al., 2021; Caron et al., 2021; Chen et al., 2021; Ali et al., 2021) have been utilized as initializations for continual learning, but in settings with harsh constraints on memory (Wu et al., 2022; Ostapenko et al., 2022). Inspired by Ostapenko et al. (2022), we additionally explore suitability of different pretrained representations. Another emerging direction for using pretrained models in continual learning has been prompt-tuning as it produces accurate classifiers while being computationally efficient (Wang et al., 2022b,a; Chen et al., 2023). However, Janson et al. 
(2022) show that simple NCM classification outperforms complex prompt tuning strategies. Lastly, the direction most similar to ours is methods which use kNN classifiers alongside deep networks for classification (Nakata et al., 2022; Iscen et al., 2022). We operate in significantly different setting and constraints, use far weaker pretrained representations (ImageNet1K) and benchmark on far larger online classification datasets. 3 Our Approach: Adaptive Continual Memory We use pre-trained feature representations and only learn using the approximate k-nearest neighbor algorithm. Hence, our algorithm is rather simple. We refer to our algorithm as Adaptive Continual Memory (ACM) and refer to the kNN neighbour set as Memory. At each time step, our continual learner performs the following steps: 1. One data point xt \u223c\u03c0t sampled from a non-stationary distribution \u03c0t is revealed. 2. Learner extracts features zt = f(xt; \u03b8) 3. Learner retrieves nearest neighbors Nt = Memory.Retrieve(zt, k). 4 \f4. Learner makes the prediction \u02c6 yt = majority-vote(Nt). 5. Learner receives the true label yt. 6. Learner inserts new data: Memory.Insert(zt, yt). We summarize this approach in Figure 1. Before presenting further implementation details, we discuss four properties of this method in detail. Fast adaptation. Suppose the learner makes a mistake in a given time step. If the same data point is received in the next time step, the learner will produce the correct answer. By leveraging nearest neighbors, we enable the system to incorporate new data immediately and locally modify its answers in response to as little as a single datapoint. Such fast adaptation, a core desideratum in online continual learning, is infeasible with gradient descent strategies and is not presently a characteristic of deep continual learning systems. Consistency. Consider a hypothetical scenario in which a data point is queried at multiple time instances. Our learner will never forget the correct label for this data point and will consistently produce it when queried, even after long time spans. While learning and memory are much more general than rote memorization, producing the correct answer on previously seen data is an informative sanity check. For comparison, continual learning on deep networks forgets a large fraction of previously seen datapoints even with a minimal delay (Toneva et al., 2019). Zero Stability Gap. When learning new points, traditional continual learning algorithms first drop in performance on past samples due to a large drift from current minima and gradually recover performance on convergence. This phenomena is called as the stability gap (De Lange et al., 2022). Approximate kNN inherently does not have this optimization issue, hence enjoy zero stability gap. Efficient Online Hyperparameter Optimization. Hyperparameter optimization is a critical issue during the online continual learning phase because, as distributions shifts, hyperparameters must be recalibrated. Selecting hyperparameters relevant to optimization like learning rate and batch size, can be nuanced; an incorrect choice has the potential to indefinitely impede future performance. Common strategies include executing multiple simultaneous online learning tasks using diverse parameters (Cai et al., 2021). However, this can be prohibitively resource-intensive. 
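The Memory.Retrieve / Memory.Insert loop described above can be sketched with an approximate-kNN index; the wrapper below uses hnswlib with the construction parameters reported later (ef=500, M=100), but it is an illustrative reimplementation under those assumptions, not the authors' code.

```python
import hnswlib
import numpy as np
from collections import Counter

class ACMSketch:
    def __init__(self, dim, max_elements=1_000_000, k=16):
        self.index = hnswlib.Index(space="cosine", dim=dim)
        self.index.init_index(max_elements=max_elements, ef_construction=500, M=100)
        self.index.set_ef(500)
        self.labels, self.k, self.n = [], k, 0

    def predict(self, z):
        # Steps 3-4: retrieve nearest neighbours and take a majority vote over their labels.
        if self.n == 0:
            return None
        ids, _ = self.index.knn_query(z, k=min(self.k, self.n))
        votes = [self.labels[i] for i in ids[0]]
        return Counter(votes).most_common(1)[0][0]

    def insert(self, z, y):
        # Step 6: Memory.Insert(z_t, y_t); the true label is stored alongside the feature.
        self.index.add_items(np.asarray(z, dtype=np.float32), np.array([self.n]))
        self.labels.append(y)
        self.n += 1
```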
In contrast to such multi-run strategies, our method has a single hyperparameter (k), which only affects the immediate prediction and can be recalibrated during the online continual learning phase at minimal computational cost. We do this by first retrieving the 512 nearest neighbours in sorted order, then searching over smaller values of k in powers of two within this ranked list, and selecting the k which achieves the highest accuracy when simulating the arrival of previous samples. 3.1 Computational Cost and Storage Considerations In the algorithm presented above, feature extraction (step 2) and prediction (step 4) have a fixed overhead cost. However, nearest-neighbour retrieval (step 3) and inserting new data (step 6) can have high computational costs if done naively. The literature on approximate k-nearest neighbours (Shakhnarovich et al., 2006) has shown that we can achieve high performance while significantly reducing computational complexity from linear O(n) to logarithmic O(log n), where n is the number of data points in memory. By switching from exact kNN to HNSW-kNN, we reduce the number of comparisons from 30 million to a few hundred, while maintaining similar accuracy. We utilize the HNSW algorithm from HNSWlib because of its high accuracy, approximate guarantees and practically fast runtime on ANN Benchmarks (Aumüller et al., 2020). We use NMSLib (Malkov & Yashunin, 2018) with ef=500 and m=100 as default construction parameters. We perform a wall-clock time analysis quantifying this speed in Section 4.3. 4 Experiments We first describe our experimental setup below and then provide comprehensive comparisons of our method against existing incremental learning approaches. Datasets. We used a subset of the Google Landmarks V2 and YFCC-100M datasets for online image classification. These datasets are ordered by the timestamps of image uploads, and our task is to predict the label of incoming images. We followed the online continual learning (OCL) protocol described in Chaudhry et al. (2019a): we first tune the hyperparameters of all OCL algorithms on a pretraining set, continually train the methods on the online training set while measuring rapid adaptation performance, and finally evaluate information retention on an unseen test set. Further dataset details are available in the Appendix. Metrics. We follow Cai et al. (2021), measuring the average online accuracy until the current timestep t, denoted a_t, as a metric of rapid adaptation, given by a_t = (1/t) Σ_{i=1}^{t} 1[y_i = ŷ_i], where 1[·] is the indicator function. We additionally measure information retention, i.e. mitigating catastrophic forgetting, after online training, on unseen samples from a test set. Formally, information retention over h timesteps (IR_h) at time T is defined as IR_h = (1/h) Σ_{t=T−h}^{T} 1[y_t = ŷ_t]. Computational Budget and Pretraining. To ensure fairness among compared methods, we restrict the computational budget for all methods to one gradient update using the naive ER method. All methods were allowed to access all past samples with no storage restrictions. All methods started with a ResNet50 model pretrained on the ImageNet1K dataset for fairness. Note that we select the ImageNet1K-pretrained ResNet50 because, despite being a good initialization, it is not sufficient by itself to perform well on the selected continual learning benchmarks. We select a fine-grained landmark recognition benchmark over 10,788 categories (CGLM), and a harder geolocalization task over a far larger dataset of 39 million samples (CLOC).
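A short sketch of the two metrics defined above, assuming lists of online predictions and true labels collected during the stream; the variable names are illustrative.

```python
def average_online_accuracy(predictions, labels):
    # a_t = (1/t) * sum of 1[y_i == y_hat_i] over the stream seen so far
    return sum(int(p == y) for p, y in zip(predictions, labels)) / len(labels)

def information_retention(predictions, labels, h):
    # IR_h: accuracy over the last h evaluated samples, probing retention of past knowledge
    recent = list(zip(predictions, labels))[-h:]
    return sum(int(p == y) for p, y in recent) / len(recent)
```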
OCL Approaches. We compared five popular OCL approaches as described in (Ghunaim et al., 2023) on the CLOC dataset. For CGLM, we compare among the top two performing methods from CLOC. We provide a brief summary of the approaches: 1. ER (Cai et al., 2021): We use vanilla ER without PoLRS and ADRep (Cai et al., 2021) as they did not improve performance. 2. MIR (Aljundi et al., 2019a): It additionally uses MIR as the selection mechanism for choosing samples for training (in a task-free manner). 3. ACE (Caccia et al., 2022): ACE loss is used instead of cross entropy to reduce class interference. 4. LwF (Li & Hoiem, 2017): It adds a distillation loss to promote information retention. 5. RWalk (Chaudhry et al., 2018): This method adds a regularization term based on Fisher information matrix and optimization-path based importance scores. We treat each incoming batch of samples as a new task. Training Details for Baselines. CGLM has the same optimal hyperparameters for CLOC. The ResNet50 model was continually updated using the hyperparameters outlined in (Ghunaim et al., 2023). We used a batch size of 64 for CGLM and 128 for CLOC to control the computational costs. Predictions are made on the next batch of 64/128 samples for CGLM/CLOC dataset respectively using the latest model. The model uses a batch size of 128/256 respectively for training, with the remaining batch used for replaying samples from storage. Fixed Feature Extractor based Approaches. In this section, we ablate capabilities specifically contributed by kNN in ACM compared to other continual learning methods which use a common fixed feature extractor. However, the compared baselines do not have the consistency property provided by ACM, ablating the contribution of this property. We use a 2 layer embedder MLP to project the 2048 dimensional features of ResNet to 256 dimensions using the pretrain set. This adapts the pretrained features to domain of the tested dataset, while providing compact storage and increasing processing speed. All below methods operate on these fixed 256 dimensional features for fairness and operate on features normalized by an online scaler for best performance, with one sample incoming at a timestep. Note that the full model continual learning methods did not benefit significantly by this additional adaptation step. We detail the approaches below: 1. Nearest Class Mean (NCM) (Mensink et al., 2013; Rebuffi et al., 2017; Janson et al., 2022): This method maintains a mean feature for each class and classifies new samples by measuring cosine similarity with the mean feature. 6 \fFigure 2: Online Continual Learning Performance. We observe that ACM outperforms existing methods by a large margin despite being far cheaper computationally. Traditional methods perform poorly given unrestricted access to past seen data indicating continual learning is a hard problem even without storage constraints. 2. Streaming LDA (SLDA) (Hayes et al., 2019): This is the current state-of-the-art online continual method using fixed-feature extractors. We use the code provided by the authors with 1e \u22124 being optimal shrinkage parameter. 3. Incremental Logistic Classification (Tsai et al., 2014): We include traditional incremental logistic classification. We use scikit-learn SGDClassifier with Logistic loss. 4. Incremental SVM (Laskov et al., 2006): We include traditional online support vector classification. We use scikit-learn SGDClassifier with Hinge loss. 5. 
Adaptive Random Forests (ARF) (Gomes et al., 2017): We chose the best performing method from benchmarks provided by the River library 2 called Adaptive Random Forests. 6. Eigen Memory Trees (EMT) (Rucker et al., 2022): This is the current state-of-the-art incremental learning method using Trees, outperforming Sun et al. (2019) by large margins. 4.1 Comparison of ACM with Online Continual learning Approaches Online adaptation. We compare the average online accuracy of ACM to state-of-the-art approaches on CGLM and CLOC datasets in Figure 2. We observe that ACM significantly outperforms previous methods, achieving a 35% and 5% higher absolute accuracy margin on CGLM and CLOC, respectively. This improvement is due to the capability of ACM to rapidly learn new information. Information retention. We compare backward transfer of ACM to current state-of-the-art approaches on CGLM and CLOC in Figure 2. We find that ACM preserves past information much better than existing approaches, achieving 20% higher accuracy on both datasets. On the larger CLOC dataset, we discover 2https://riverml.xyz/0.19.0/ 7 \fFigure 3: Online Continual Learning Performance. ACM outperforms existing methods which leverage a fixed backbone. This highlights the importance of preserving the consistency property in online continual learning. The collapse of other approaches for the CLOC dataset indicates the hardness of the continual learning scenario. that existing methods catastrophically forget nearly all past knowledge, while ACM maintains a fairly high cumulative accuracy across past timesteps. This highlights the advantages of the consistency property, allowing perfect recall of past train samples and subsequently, good generalization ability on similar unseen test samples to past data. Moreover, comparing the performance of methods to those in the fast stream from Ghunaim et al. (2023), it becomes evident that removing memory restrictions (from 40,000 samples) did not substantially alter the performance of traditional OCL methods. This emphasizes that online continual learning with limited computation remains challenging even without storage constraints. Key Takeaways. ACM demonstrates significantly better performance in both rapid adaptation and information retention when compared to popular continual learning algorithms which can update the base deep network on any of the past seen samples with no restrictions. We additionally highlight that ACM has a substantially lesser computational cost compared to traditional OCL methods. 4.2 Comparison of ACM with Approaches Leveraging a Fixed Backbone Online adaptation. We compare average online accuracy of ACM against recent continual learning approaches that also employ a fixed feature extractor on CGLM and CLOC datasets in Figure 3. We find that ACM outperforms these alternative approaches by significant margins, achieving 10% and 20% higher absolute accuracy on CGLM and CLOC respectively. All approaches here can rapidly adapt to every incoming sample, achieving higher accuracy than traditional OCL approaches. However, ACM can additionally utilizing past seen samples when necessary. Notably, in CLOC, the best alternative approaches collapse to random performance, highlighting that pretrained feature representations are not sufficient, and the effectiveness of 8 \fFigure 4: Left: Wall clock time overhead of using ACM Memory after feature extraction (x-axis is log-scaled) on a 16-core i7 CPU server. 
The time increases logarithmically with dataset size, with a 8ms overhead at 40M samples. Right: Contribution of kNN and contribution of the original backbone along with the MLP. We observe most of the performance is attributable to the kNN. kNN despite its simplicity. Information retention. We compare backward transfer of ACM compared to other fixed-feature based OCL approaches on CGLM and CLOC dataset in Figure 3. We observe that ACM outperforms other apporaches by 20% on both datasets, demonstrating its remarkable ability to preserve past knowledge over time. Even after 39 million online updates on the CLOC, ACM preserves information from the earliest samples. In contrast, existing fixed-feature online continual learning methods collapse to random performance. Key Takeaways. The impressive performance of ACM is evident in both rapid adaptation and information retention even amongst latest approaches which similarly use a fixed feature extractor. This further demonstrates the impact of preserving consistency. A Note on Time. ACM and NCM were the fastest approaches among the compared methods. All other approaches were considerably slower, with a 5 to 100 fold increase in runtime compared with NCM or ACM. However, this could be attributed to codebases, although we used open-source, fast libraries such as River and Scikit-learn. 4.3 Analyzing Our Method: Adaptive Continual Memory Contribution of kNN. Here, we aim to disentangle the benefit provided by the pretrained backbone and the domain tuning by the MLP, with the contribution of kNN to ACM. To test this, we perform online continual learning by replacing the kNN with the fixed MLP classifier. The performance obtained on rapid adaptation on ablating the kNN will be the effect of strong backbone and first session tuning using MLP on pretrain set (Panos et al., 2023). We conducted this experiment using two additional pretrained backbones stronger than our ResNet50 trained on ImageNet1K: A ResNet50 trained on Instagram 1B dataset and the best DINO model XCIT-DINO trained on Imagenet1K to vary the pretraining dataset and architecture. The results presented in Figure 4 (right). We observe that removing the kNN for classification leads to a drastic decline in performance, indicating that kNN is the primary driver of performance, with performance gains of 20-30%. The decline in performance, losing over 10%, compared to initialization is attributable to distribution shift across time. This is consistently seen across model architectures, indicating that CGLM remains a challenging task with a fixed feature extractor despite backbones far stronger than ResNet50 trained on ImageNet1K. These findings suggest that kNN is the primary reason for rapid adaptation gains to distribution shifts. Having high-quality feature representations alone or fist-session adaptation is insufficient for a satisfactory online continual learning performance. 9 \fTime Overhead of ACM. We provide a practical analysis of time to ground the logarithmic computational complexity of ACM. Figure 4 (left) provides insights into the wall-clock time required for the overhead cost imposed by ACM when scaling to datasets of 40 million samples. We observe that the computational overhead while using ACM scales logarithmically, reaching a maximum of approximately 5 milliseconds for 256 dimensional embeddings. In comparison, the time required for the classification of a single sample for deep models like ResNet50 is approximately 10 milliseconds on an Intel 16-core CPU. 
It\u2019s important to note that when using ACM, the total inference cost of ACM inference would be 15 milliseconds, representing a 50% inference overhead as a tradeoff for high rapid adaptation and information retention performance. 4.4 Discussion A notable limitation of our approach is its dependency on the presence of pretrained features. Consequently, our method may not be appropriate for situations where such features are unavailable. While this limitation is important to acknowledge, it doesn\u2019t diminish the relevance of our approach in situations where pretrained features are available. Our method can be applied effectively in a wide range of visual continual learning scenarios. We have been selective in our choice of models and experiments, opting for pretrained models on ImageNet1K rather than larger models like CLIP or DINOv2 to demonstrate this applicability. Moreover, we tested our approach on more complex datasets like Continual YFCC-100M, which is 39 times larger than ImageNet1K and includes significantly more challenging geolocation tasks. Memory constraints are often linked to privacy concerns. However, it\u2019s crucial to understand that merely avoiding data storage does not guarantee privacy in continual learning. Given the tendency of deep neural networks to memorize information, ensuring privacy becomes a much bigger challenge. While a privacypreserving adaptation of our method is beyond this paper\u2019s scope, one can employ differentially private feature extractors, as suggested by (Ma et al., 2022), to build privacy-conscious ACM models. We conjecture that as more advanced privacy-preserving feature extractors become available, privacy concerns can be addressed in parallel. Finally, to contextualize the computational budget with tangible figures, we envision a hypothetical system necessitating real-time operation on a video stream, facilitated by a 16-core i7 CPU server. Given a feature size of 256 and drawing insights from Figure 4, our method is projected to sustain real-time processing at 30 frames per second for an impressive span of up to 71 years, without necessitating further optimization. Such a configuration would consume approximately 900 GB of storage annually, translating to a cost of roughly $20 per year, as indicated in Table 1. Thus, the ACM stands out as practical, even for the prolonged deployment of continual learning systems. 5"
+ },
+ {
+ "url": "http://arxiv.org/abs/2303.11165v2",
+ "title": "Computationally Budgeted Continual Learning: What Does Matter?",
+ "abstract": "Continual Learning (CL) aims to sequentially train models on streams of\nincoming data that vary in distribution by preserving previous knowledge while\nadapting to new data. Current CL literature focuses on restricted access to\npreviously seen data, while imposing no constraints on the computational budget\nfor training. This is unreasonable for applications in-the-wild, where systems\nare primarily constrained by computational and time budgets, not storage. We\nrevisit this problem with a large-scale benchmark and analyze the performance\nof traditional CL approaches in a compute-constrained setting, where effective\nmemory samples used in training can be implicitly restricted as a consequence\nof limited computation. We conduct experiments evaluating various CL sampling\nstrategies, distillation losses, and partial fine-tuning on two large-scale\ndatasets, namely ImageNet2K and Continual Google Landmarks V2 in data\nincremental, class incremental, and time incremental settings. Through\nextensive experiments amounting to a total of over 1500 GPU-hours, we find\nthat, under compute-constrained setting, traditional CL approaches, with no\nexception, fail to outperform a simple minimal baseline that samples uniformly\nfrom memory. Our conclusions are consistent in a different number of stream\ntime steps, e.g., 20 to 200, and under several computational budgets. This\nsuggests that most existing CL methods are particularly too computationally\nexpensive for realistic budgeted deployment. Code for this project is available\nat: https://github.com/drimpossible/BudgetCL.",
+ "authors": "Ameya Prabhu, Hasan Abed Al Kader Hammoud, Puneet Dokania, Philip H. S. Torr, Ser-Nam Lim, Bernard Ghanem, Adel Bibi",
+ "published": "2023-03-20",
+ "updated": "2023-07-15",
+ "primary_cat": "cs.LG",
+ "cats": [
+ "cs.LG",
+ "cs.CV"
+ ],
+ "main_content": "Introduction Deep learning has excelled in various computer vision tasks [8,25,31,50] by performing hundreds of shuffled passes through well-curated offline static labeled datasets. However, modern real-world systems, e.g., Instagram, TikTok, and Flickr, experience high throughput of a constantly changing stream of data, which poses a challenge for deep learning to cope with such a setting. Continual learning (CL) aims to go beyond static datasets and develop learning strategies that can adapt and learn from streams where data is pre*authors contributed equally; order decided by a coin flip. Figure 1. Main Findings. Under per time step computationally budgeted continual learning, classical continual learning methods, e.g., sampling strategies, distillation losses, and fully connected (FC) layer correction based methods such as calibration, struggle to cope with such a setting. Most proposed continual algorithms are particularly useful only when large computation is available, where, otherwise, minimalistic algorithms (ERM) are superior. sented incrementally over time, often referred to as time steps. However, the current CL literature overlooks a key necessity for practical real deployment of such algorithms. In particular, most prior art is focused on offline continual learning [26,29,47] where, despite limited access to previous stream data, training algorithms do not have restrictions on the computational training budget per time step. High-throughput streams, e.g., Instagram, where every stream sample at every time step needs to be classified for, say, misinformation or hate speech, are time-sensitive in which long training times before deployment are simply not an option. Otherwise, new stream data will accumulate until training is completed, causing server delays and worsening user experience. Moreover, limiting the computational budget is necessary towards reducing the overall cost. This is because computational costs are higher compared to any storage associated costs. For example, on Google Cloud Standard Storage (2\u00a2 per GB per month), it costs no more than 6\u00a2 to store the entire CLEAR benchmark [32], a recent large-scale CL dataset. On the contrary, one run of a CL algorithm on CLEAR performing \u223c300K iterations costs around 100$ on an A100 Google instance (3$ per hour for 1 GPU). Therefore, it is prudent to have computationally budgeted methods where the memory size, as a consequence, is implicitly restricted. This is because, under a computational budget, it arXiv:2303.11165v2 [cs.LG] 15 Jul 2023 \fis no longer possible to revisit all previous data even if they were all stored in memory (given their low memory costs). This raises the question: \u201cDo existing continual learning algorithms perform well under per step restricted computation?\u201d To address this question, we exhaustively study continual learning systems, analyzing the effect of the primary directions of progress proposed in the literature in the setting where algorithms are permitted fixed computational budget per stream time step. We evaluate and benchmark at scale various classical CL sampling strategies (Uniform, Class-Balanced [43], Recency-Biased [32], FIFO [12,16], Max Loss, Uncertainity Loss [6], and KMeans [16]), CL distillation strategies (BCE [47], MSE [10], CrossEntropy [56], and Cosine [26]) and FC layer corrections (ACE [11,37,61], BiC [56], CosFC [26], and WA [63]) that are common in the literature. 
Evaluation is carried on two large-scale datasets, amounting to a total of 1500 GPU-hours, namely ImageNet [19] and Continual Google Landmarks V2 [42] (CGLM) under various stream settings, namely, data incremental, class incremental, and time incremental settings. We compare against Naive; a simple baseline that, utilizing all the per step computational budget, trains while sampling from previous memory samples. Conclusions. We summarize our empirical conclusions in three folds. (1) None of the proposed CL algorithms, see Table 1 for considered methods, can outperform our simple baseline when computation is restricted. (2) The gap between existing CL algorithms and our baseline becomes larger with harsher compute restrictions. (3) We find that training a minimal subset of the model can close the performance gap compared to our baseline in our setting, but only when supported by strong pretrained models. Surprisingly, we find that these observations hold even when the number of time steps is increased to 200, a large increase compared to current benchmarks, while normalizing the effective total computation accordingly. This suggests that existing CL literature is particularly suited for settings where memory is limited, and less practical in scenarios having limited computational budgets. 2. Continual Learning with Limited Compute 2.1. Problem Formulation We start by first defining our proposed setting of computationally budgeted continual learning. Let S be a stream revealing data sequentially over time steps. At each time step t \u2208{1, 2, . . . , \u221e}, the stream S reveals nt imagelabel pairs {(xt i, yt i)}nt i=1 \u223cDj from distribution Dj where j \u2208{1, . . . , t}. In this setting, we seek to learn a function f\u03b8t : X \u2192Yt parameterized by \u03b8t that maps images x \u2208X to class labels y \u2208Yt, where Yt = St i=1 Yi, which aims to correctly classify samples from any of the previous distributions Dj\u2264t. In general, there are no constraints on the incoming distribution Dj, e.g., the distribution might change after every time step or it may stay unchanged for all time steps. The size of the revealed stream data nt can generally change per step, e.g., the rate at which users upload data to a server. The unique aspect about our setting is that at every time step t, a computational budget Ct is available for the CL method to update the parameters from \u03b8t\u22121 to \u03b8t in light of the new revealed data. Due to the inexpensive costs associated with memory storage, in our setting, we assume that CL methods in our setting can have full access to all previous samples Tt = \u222at r=1{(xr i , yr i )}nr i=1.1 However, as will be discussed later, while all samples can be stored, they cannot all be used for training due to the constrained computation imposing an implicit memory restriction. 2.2. Key Differences with Prior Art (1) Tasks: In most prior work, CL is simplified to the problem of learning a set of non-overlapping tasks, i.e., distributions, with known boundaries between them [14,35,47]. In particular, the data of a given distribution Dj is given all at once for the model to train. This is as opposed to our setup, where there is no knowledge about the distribution boundaries, since they are often gradually changing and not known a priori. As such, continual learning methods cannot train only just before the distribution changes. 
(2) Computational Budget: A key feature of our work is that, per time step, CL methods are given a fixed computational budget Ct to train on {(xt i, yt i)}nt i=1. For ease, we assume throughout that Ct = C \u2200t, and that nt = n\u2200t. Although C can be represented in terms of wall clock training time, for a given f\u03b8 and stream S, and comparability between GPUs, we state C in terms of the number of training iterations instead. This avoids hardware dependency or suboptimal implementations when comparing methods. This is unlike prior work, which do not put hard constraints on compute per step [10,26,47] giving rise to degenerate but well-performing algorithms such as GDumb [43]. Concurrent works [9,21] restrict the computational budget, however, they operate in a setup with constrained memory which significantly affects performance of CL methods. (3) Memory Constraints: Prior work focuses on a fixed, small memory buffer for learning and thereof proposing various memory update strategies to select samples from the stream. We assume that all the samples seen so far can be stored at little cost. However, given the restricted imposed computation C, CL methods cannot revisit or learn from all stored samples. For example, as shown in Figure 2, consider performing continual learning on ImageNet2K, composed of 1.2M samples from ImageNet1K and 1.2M samples from ImageNet21K forming 2K classes, which will be detailed later, over 20 time steps, where the stream reveals sequentially n = 60K images per step. Then, under a computation budget of 8000 iterations, the model cannot revisit more than 50% of all seen data at any given time step, i.e. 600K samples. Our proposed setting is closer to realistic scenarios that 1We discuss in the Appendix the privacy arguments often used towards restricting the memory. \fDir. Reference Applicability Components (our setup) Distillation MemUpdate MemRetrieve FC Correction Others Naive \u2713 Random Random Distillation iCARL [47] \u2713 BCE Herding Random NCM LUCIR [26] \u2713 Cosine Herding MargRank CosFC NCM PODNet [20] \u2713 POD Herding Random LSC Imprint,NCM DER [10] \u2713 MSE Reservoir Random CO2L [13] \u00d7 IRD Random Random Asym.SupCon SCR [38] \u2713 Reservoir Random SupCon NCM Sampling TinyER [16] \u2713 FIFO,KMeans,Reservoir GSS [5] \u00d7 GSS Random MIR [3] \u00d7 Reservoir MIR GDumb [43] \u2713 Balanced Random MemOnly Mnemonics [34] \u00d7 Mnemonics BalFineTune OCS [60] \u00d7 OCS Random InfoRS [52] \u00d7 MSE InfoRS Random RMM [33] \u00d7 RMM ASER [51] \u00d7 SV ASV RM [6] \u2713 Uncertainty Random AutoDA CLIB [30] \u00d7 Max Loss Random MemOnly,AdaLR FC Layer BiC [56] \u00d7 CrossEnt Random Random BiC WA [63] \u00d7 CrossEnt Random Random WA SS-IL [2] \u00d7 TKD Random Balanced SS CoPE [18] \u2713 Balanced Random PPPLoss ACE [11] \u2713 Reservoir Random ACE Table 1. Primary Directions of Progress in CL. Analysis of recent replay-based systems, with bold highlighting the primary contribution. We observe that there are three primary directions of improvement. \u201cApp.\u201d denotes the applicability to our setting based on whether they are scalable to large datasets and applicable beyond the class-incremental stream. cope with high-throughput streams, similar to concurrent work [42], where computational bottlenecks impose implicit constraints on learning from past samples that can be too many to be revisited during training. 2.3. Constructing the Stream We explore three stream settings in our proposed benchmark, which we now describe in detail. 
(1) Data Incremental Stream: In this setting, there is no restriction on the incoming distribution Dj over time that has not been well-explored in prior works. We randomly shuffle all data and then reveal it sequentially over steps, which could lead to a varying distribution Dj over steps in which there are no clear distribution boundaries. (2) Time Incremental Stream: In this setting, the stream data is ordered by the upload timestamp to a server, reflecting a natural distribution change Dj across the stream as it would in real scenarios. There is a recent shift toward studying this ordering as apparent in recent CL benchmarks, e.g., CLEAR [32], Yearbook [58] and FMoW [58], NEVIS22 [9], Continual YFCC100M [12] and Continual Google Landmarks V2 [42]. (3) Class Incremental Stream: For completeness, we consider this classical setting in the CL literature. Each of the distributions Dj represents images belonging to a set of classes different from the classes of images in any other distribution Di\u0338=j. We benchmark these three settings using a large-scale dataset that will be detailed in the Experiments. 3. Dissecting Continual Learning Systems Continual learning methods typically propose a system of multiple components that jointly help improve learning performance. For example, LUCIR [26] is composed of a cosine linear layer, a cosine distillation loss function, and a hard-negative mining memory-based selector. In this section, we analyze continual learning systems and dissect them into their underlying components. This helps to analyze and isolate the role of different components under our budgeted computation setting and helps us to understand the most relevant components. In Table 1, we present the breakdown of novel contributions that have been the focus of recent progress in CL. The columns indicate the major directions of change in the CL literature. Overall, there have been three major components on which advances have focused, namely distillation, sampling, and FC layer correction. These three components are considered additions to a naive baseline that simply performs uniform sampling from memory. We refer to this baseline as Naive in Table 1. (1) Distillation: One popular approach towards preserving model performance on previous distributions has been through distillation. It enables student models, i.e., current time step model, to learn from a teacher model, i.e., one that has been training for many time steps, through the logits providing a rich signal. In this paper, we consider four widely adopted distillation losses, namely, Binary CrossEntropy (BCE) [47], CrossEntropy [34, 56, 63], Cosine Similarity (Cosine) [26], and Mean Square Error (MSE) [10,52] Loss. (2) Sampling: Rehearsing samples from previous distributions is another popular approach in CL. However, sampling strategies have been used for two objectives. Particularly when access to previous samples is restricted to a small memory, they are used to select which samples from the stream will update the memory (MemUpdate) or to decide on which memory samples are retrieved for rehearsal (MemRetrieve). In our unconstrained memory setup, simply sampling uniformly over the joint data of past and current time step data (as in Naive) exposes a particular shortcoming. When training for a large number of time steps, uniform sampling reduces the probability of selecting samples from the current time step. 
For that, we consider various sampling strategies, e.g., recency sampling [32] that biases toward sampling current time step data, and FIFO [12,16] that ex\fclusively samples from the current step. We do not consider Reservoir, since it approximates uniform sampling in our setup with no memory restrictions. In addition to benchmarking the sampling strategies mentioned above, we also consider approaches that evaluate the contribution of each memory sample to learning [53]. For example, Herding [47], K-Means [16], OCS [60], InfoRS [52], RM [6], and GSS [5] aim to maximize diversity among samples selected for training with different metrics. MIR [3], ASER [51], and CLIB [30] rank the samples according to their informativeness and select the top-k. Lastly, balanced sampling [17,18,43] select samples such that an equal distribution of classes is selected for training. In our experiments, we only consider previous sampling strategies that are applicable to our setup and compare them against Naive. (3) FC Layer Correction: It has been hypothesized that the large difference in the magnitudes of the weights associated with different classes in the last fully connected (FC) layer is among the key reasons behind catastrophic forgetting [56]. There has been a family of different methods addressing this problem. These include methods that improve the design of FC layers, such as CosFC [26], LSC [20], and PPP [18], by making the predictions independent of their magnitude. Other approaches such as SS-IL [2] and ACE [11,37,61] mask out unaffected classes to reduce their interference during training. In addition, calibrating the FC layer in post-training, e.g., BiC [56], WA [63], and IL2M [7] is widely used. Note that the calibration techniques are only applicable to the class-incremental setup. We benchmark existing methods applicable to our setting against the Naive approach that does not implement any FC layer correction. (4) Model Expansion Methods: Several works attempt to adapt the model architecture according to the data. This is done by only training part of the model [1,4,39,40,44] or by directly expanding the model when data is presented [46,49,54,57,59,62]. However, most of the previous techniques in this area do not apply to our setup. Most of this line of work [39,40,46] assumes a task-incremental setting, where at every time step, new samples are known to what set of classes they belong, i.e., the distribution boundaries are known, even at test time. To overcome these limitations, newer methods [1,45] use a bilevel prediction structure, predicting the task at one level and the label within the task at the second level. They are restricted to the class-incremental setting as they assume each task corresponds to a set of nonoverlapping classes. We seek to understand the limitation of partial retraining in a network; hence, instead, we compare Naive against a setting where only the FC layer is being trained, thus minimally training the network per time step. In addition, we examine the role of pretraining which has recently become a widely popular direction for exploration in continual learning [55]. 4. Experiments We first start by detailing the experimental setup, datasets, computational budget C, and evaluation metrics for our largescale benchmark. We then present the main results evaluating various CL components, followed by extensive analysis. 4.1. Experimental Setup and Details Model. We use a standard ResNet50 following prior work on continual learning [12]. 
The model is pre-trained on ImageNet1K and used as the backbone throughout all experiments. Datasets. We conduct experiments using two large-scale datasets, namely ImageNet2K and Continual Google Landmarks V2 (CGLM). We construct ImageNet2K by augmenting ImageNet1K with 1.2M images from ImageNet21K [19], thus adding 1K new classes that do not overlap with ImageNet1K, amounting to a total of 2K classes. (1) Data Incremental ImageNet2K: The stream is constructed by randomly shuffling the set of images from the 1K classes of ImageNet21K; by doing so, there is no knowledge of the distribution boundaries. The model continually learns on this set of images, while ImageNet1K is available in memory. CL methods are expected to learn the new classes from the stream while maintaining the performance on ImageNet1K. We refer to this setting as DI-ImageNet2K. (2) Class Incremental ImageNet2K: Similar to the above-defined DI-ImageNet2K, ImageNet1K is available in memory and the 1K classes of ImageNet21K are presented sequentially by the stream, but in a class incremental setting. We refer to this setting as CI-ImageNet2K. (3) Time Incremental Google Landmarks V2 (CGLM): In this setting, the stream consists of data from the CGLM dataset ordered according to the timestamps of the images, mimicking a natural distribution shift. Note that ImageNet1K is not considered as part of the evaluation. We refer to this setting simply as CGLM. Throughout, unless stated otherwise, the stream reveals data incrementally over 20 time steps. This amounts to a per-step stream size of n = 60K for the CI-ImageNet2K and DI-ImageNet2K settings, and n = 29K for the CGLM setting. More details on the construction of the datasets are given in the Appendix along with the sample orders. Computational Budget. We set the computational budget C to 400 training iterations per time step (8000 = 20 time steps \u00d7 400) for ImageNet2K, i.e., DI-ImageNet2K and CI-ImageNet2K, and set C to 100 training iterations for CGLM. In each iteration, a batch of images is used to update the model in training, where we set the training batch size B to 1500. The choice of C is made such that it corresponds to training on at most 25\u221250% of all observed data at any given step. For example, as highlighted in Figure 2 for ImageNet2K, time step t = 5 corresponds to training on only about 40% of the complete observed data at this step, i.e., 400 \u00d7 1500 / (1.2M + 5 \u00d7 60K) \u2248 0.4 of an epoch, where 1.2M denotes the ImageNet1K samples. Furthermore, we set C to 100 iterations for CGLM, since the dataset contains 1/4 of the total data in ImageNet2K. [Figure 2. Effective Training Epochs Per Time Step. Our default setting sets a total training budget over all 20 time steps of 8000 and 2000 iterations for ImageNet2K and CGLM, respectively, with a per-iteration batch size of B = 1500. Effectively, this corresponds to training on 25-50% of the stored data, except in the first few time steps on CGLM. Note that for ImageNet2K, we assume that the 1.2M samples of ImageNet1K are available in memory.] Note that after 20 time steps on CGLM, the data that would have been seen is 20 \u00d7 29K images, as opposed to 1.2M + 20 \u00d7 60K images for the ImageNet2K experiments. Metrics. We report the accuracy (Acc) on a separate test set after training at each time step. This test set simply comprises the joint test set for all classes seen up to the current time step. 
Moreover, for ImageNet2K, we decompose the test accuracy into the accuracy on ImageNet1K (ImageNet1K Acc), which measures forgetting, and the accuracy on the stream (Stream Acc), which measures adaptation. For CGLM, we only report stream accuracy. Training Details. We use SGD as the optimizer with a linear learning rate schedule and a weight decay of 0. We follow standard augmentation techniques. All experiments were run on the same A100 GPU. For a fair comparison, we fix the order of the samples revealed by the stream S in all experiments and comparisons on a given dataset. We summarize all the settings with all the benchmark parameters in the first part of Table 2.
[Table 2. Experimental Details. The first block shows the various considered settings in the experiments section. The second block denotes the effective training iterations C for each class of methods due to their overhead extra computation. The last block details the setup for our sensitivity analysis.
Attributes | ImageNet2K | CGLM
Initial memory | ImageNet1K | {}
Initial memory size | 1.2M | 0
Per step stream size n | 60K | 29K
Time steps | 20 | 20
Stream size | 1.2M | 58K
Size of data by the last time step | 2.4M | 58K
Stream | Class incremental, Data incremental | Time incremental
# iterations per time step C | 400 | 100
Training batch size B | 1500 | 1500
Metrics | Acc on ImageNet1K, Acc on Stream | Acc on Stream
Eq. Distillation Iters | 267 | 67
Eq. Sampling Iters | 200 | 100
Eq. FC Correction Iters | 400 | 100
Iters per t (Sensitivity) | 100, 1200 | 40, 400
Time Steps (Sensitivity) | 50, 200 | 50, 200]
4.2 Budgeted Continual Learning In this section, we investigate the effectiveness of the three main directions studied in the CL literature, namely sampling strategies, distillation, and FC layer correction. 1. Do Sampling Strategies Matter? We evaluate seven sampling strategies that govern the construction of the training batch from memory. These strategies are grouped into two categories based on their computational cost. Inexpensive sampling methods include Uniform, Class-Balanced, Recency-Biased and FIFO sampling. On the other hand, costly sampling strategies include KMeans, Max Loss, and Uncertainty loss sampling. To normalize for the effective C due to the overhead of the associated extra forward passes to decide on the sampling, costly sampling strategies are allowed C/2 training iterations, where the exact calculation is left for the Appendix. That is to say, costly sampling strategies perform 200 training iterations for ImageNet2K and 50 training iterations for CGLM, as the rest of the budget is for the extra forward passes. We report the performance of the five sampling strategies consisting of the inexpensive ones and the best performing costly sampling strategy (KMeans), presented in shades of blue, in Figures 3, 4, and 5 for DI-ImageNet2K, CI-ImageNet2K, and CGLM, respectively. Other methods are listed in the Appendix due to lack of space. We compare against a non-continual learning oracle that performs classical empirical risk minimization at every step on T_t = \cup_{r=1}^{t} \{(x_i^r, y_i^r)\}_{i=1}^{n_r} with a computational budget of C \u00d7 t, which we refer to as ERM-Naive; this is as opposed to the previously mentioned continual learning methods that have only C per step t, spent equally over all steps. ERM-Naive acts as a training method with hindsight, spending the complete computational budget at once after collecting the full dataset. This acts as a very strong baseline against all continual learning methods. We report it in shades of red in the same figures (a minimal sketch of such a budgeted per-step training loop is given below). 
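The sketch below illustrates such a budgeted per-step loop under the assumptions of this section: at every time step the newly revealed samples are appended to storage, and exactly C SGD iterations on batches of size B, sampled uniformly from everything stored so far, are performed. It uses toy dimensions and random tensors as stand-ins for the stream and backbone and is not the benchmark's actual code.

import torch
import torch.nn as nn
import torch.nn.functional as F

torch.manual_seed(0)
feat_dim, num_classes = 32, 10
model = nn.Linear(feat_dim, num_classes)            # stand-in for the ResNet50 backbone
opt = torch.optim.SGD(model.parameters(), lr=0.1)

C, B, num_steps, n_per_step = 20, 64, 5, 200        # toy values; the paper uses C=400/100, B=1500
storage_x, storage_y = [], []                       # unrestricted memory of all seen samples

for t in range(num_steps):
    # Stream reveals n_t new labelled samples (random tensors as a stand-in).
    new_x = torch.randn(n_per_step, feat_dim)
    new_y = torch.randint(num_classes, (n_per_step,))
    storage_x.append(new_x); storage_y.append(new_y)
    all_x, all_y = torch.cat(storage_x), torch.cat(storage_y)

    # Exactly C training iterations per step, regardless of how much data is stored.
    for _ in range(C):
        idx = torch.randint(len(all_x), (B,))       # uniform sampling from memory; class-balanced is a drop-in variant
        loss = F.cross_entropy(model(all_x[idx]), all_y[idx])
        opt.zero_grad(); loss.backward(); opt.step()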
We also report the average accuracy, averaged over all time steps, for each sampling method in the yellow box in each figure. Conclusion. First, we observe that the top inexpensive sampling strategies perform very similarly to each other. This is consistent across settings, CI-ImageNet2K, and CGLM, on both ImageNet1K accuracy and Stream Accuracy. There are some advantages for Class-Balanced over other sampling strategies, e.g., gaining an average accuracy of 2.5% over Uniform in DI-ImageNet2K. However, sampling strategies such as FIFO completely forget ImageNet1K (dark blue line), leading to poor performance over all three settings. Interestingly, costly sampling strategies perform significantly worse in CL performance over the simple Uniform sampling when subjected to an effectively similar computational budget. This observation is different from previous settings [41], as the additional computational overhead of costly sampling does not seem worthwhile to improve performance. 2. Does Distillation Matter? We evaluate four wellknown distillation losses in our benchmark, namely, Cosine, CrossEntropy, BCE, and MSE losses. Given that ClassBalanced is a simple inexpensive sampling procedure that performed slightly favorably, as highlighted in the previous \fFigure 3. DI-ImageNet2K (400 Iterations). ERM-Naive, a non-continual learning algorithm, is compared against inexpensive sampling strategies (first four plots) with 400 training iterations and the costly KMeans (fifth plot) with 200 iterations. All CL methods perform similarly but worse than ERM-Naive. This is the case for FIFO that suffers from forgetting the KMeans due to its expensive nature. ImageNet2K experiments performance can be decomposed into (i) accuracy on classes seen during pre-training on ImageNet1K and (ii) accuracy on newly seen classes in ImageNet2K, allowing analysis of forgetting old classes and learning newly introduced classes. Figure 4. CI-ImageNet2K (400 Iterations). Similarly, ERM-Naive is compared on CI-ImageNet2K. Both FIFO, which suffers from forgetting, and KMeans due to its expensive nature, struggle to compete against simpler inexpensive methods as Class-Balanced. However, overall, all other methods perform very similarly with no clear advantage. ImageNet2K experiments performance can be decomposed into (i) accuracy on classes seen during pre-training on ImageNet1K and (ii) accuracy on newly seen classes in ImageNet2K, allowing analysis of forgetting old classes and learning newly introduced classes. Figure 5. CGLM (100 Iterations). All inexpensive methods perform overall similarly with the exception for KMeans due to its expensive nature. This highlights that simplicity is key under a budgeted continual learning setting. CGLM is not an extension of ImageNet1K and involves a different task: landmark classification. Hence, we measure only the stream accuracy resulting in two lines instead of six. section, we use it as a default sampling strategy from now onward, where the number of samples used per training step is equal over all classes. We refer to this basic approach with a cross entropy loss as Naive. To fairly factor in the overhead of an additional forward pass, distillation approaches are allowed 2C/3 iterations compared to Naive with C training iterations. That is, the distillation losses perform 267 iterations for ImageNet2k and 67 iterations for CGLM compared to 400 and 100 iterations for Naive. 
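As a concrete illustration of how a logit-distillation term is typically combined with the cross-entropy objective (a generic sketch under assumed shapes, not the exact losses benchmarked here), the MSE variant below penalises the distance between the current model's logits and those of a frozen copy from the previous time step; the extra teacher forward pass is what motivates the reduced 2C/3 iteration budget.

import copy
import torch
import torch.nn as nn
import torch.nn.functional as F

def distillation_step(model, teacher, x, y, opt, alpha=1.0):
    # One training iteration with an MSE logit-distillation term (illustrative sketch).
    with torch.no_grad():
        teacher_logits = teacher(x)                 # extra forward pass, hence the reduced budget
    logits = model(x)
    loss = F.cross_entropy(logits, y) + alpha * F.mse_loss(logits, teacher_logits)
    opt.zero_grad(); loss.backward(); opt.step()
    return loss.item()

# At the end of each time step, the teacher is refreshed as a frozen copy of the student.
model = nn.Linear(32, 10)
teacher = copy.deepcopy(model).eval()
opt = torch.optim.SGD(model.parameters(), lr=0.1)
x, y = torch.randn(8, 32), torch.randint(10, (8,))
distillation_step(model, teacher, x, y, opt)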
We report the results for Cosine and MSE on DI-ImageNet2K, CI-ImageNet2K, and CGLM datasets in the first, second, and third rows of Figure 6, respectively. Other methods are left for the Appendix due to lack of space. Distillation methods are shown in shades of blue, whereas Naive is shown in shades of red. We report the average accuracy, averaged over all time steps, for each distillation method in the yellow box in each figure. Conclusion. In all three settings, distillation methods underperform compared to Naive. Even in ImageNet1K Acc, which measures forgetting, Naive performs similarly or slightly better than all distillation methods in DIImageNet2K and CI-ImageNet2K streams. The results in Figure 6 show that the top distillation methods, such as MSE, perform only slightly worse compared to Naive (54.9 vs 55.9 on DI-ImageNet2K and 64 vs 64.9 on CI-ImageNet2K). However, in CGLM they perform significantly worse (26.4 compared to 35.7) due to the limited iterations. We attribute this to the fact that distillation methods often require a larger number of training samples, and thereof a large enough computational budget per time step. 3. Does FC Layer Correction Matter? We evaluate five FC layer correction approaches from two different families. A family of methods that modifies the FC layer directly, \fFigure 6. Distillation in Data and Class Incremental Settings. Naive, which does not employ any distillation loss, outperforms all distillation methods (MSE, and Cosine) across all three settings. including CosineFC [26] and ACE2 [11,37,61]. The other family of methods applies post-training calibration including BiC [56], WA [63], along with temperature scaling [23]. All methods employ Class-Balanced as a sampling strategy and compare against Naive (Class-balanced with cross entropy loss) with no corrections in FC layer. The first three subplots of Figure 7 correspond to comparisons of direct FC layer modification methods against Naive on DI-ImageNet2K, CIImageNet2K, and CGLM. Since calibration methods tailored for Class Incremental settings, in the rightmost plot of Figure 7, we report comparisons with Naive on CI-ImageNet2K. Since all FC layer corrections are with virtually no extra cost, the number of training iterations per time step is set to C, i.e., 400 for ImageNet2K and 100 for CGLM. Conclusion. No method consistently outperforms Naive in computationally budgeted continual learning. The first family of methods helps in DI-ImageNet2K, particularly in the initial steps due to class imbalance, but no method outperforms Naive in the CI-ImageNet2K set-up. Calibrationbased methods, such as BIC, are somewhat competitive with Naive, but WA fails. Surprisingly, even under various FC correction approaches, all methods fail to outperform Naive in computationally budgeted continual learning. 4.3. Sensitivity Analysis We have analyzed the performance of various CL methods under budgeted computation. We have consistently observed over a variety of settings on large-scale datasets that a simple method, i.e., Naive, simply sampling with a Class-Balanced strategy and a cross entropy loss outperforms all existing methods. However, all reported results were for 20 time steps 2We treat all seen samples as incoming samples to test this case, diverging from the original ACE Loss. The ACE Loss collapses to Crossentropy, as new samples form a tiny fraction of all past seen data. 
with C = 400 or C = 100 training iterations for ImageNet2K and CGLM, respectively, in which expensive methods were normalized accordingly. Now, we analyze the sensitivity of our conclusions over different time steps and iterations C. Does the Number of Time Steps Matter? Prior art, such as GDumb [43], found that the relative performance of CL methods changes drastically when the number of time steps is varied. Subsequently, we increased the number of time steps to 50 and 200 from 20, a more extreme setting than explored in recent works, while maintaining the same overall computational budget C eliminating any source of performance variation due to a different total computational budget. This is since per time step, the stream reveals fewer number of samples n with an increased number of time steps. We report experiments in the CGLM setting where Naive will receive only 40 and 10 iterations for the 50 and 200 time steps, respectively. We consider distillation approaches where they are permitted 2/3C, which is 27 and 7 iterations, respectively, on the 50 and 200 time steps, respectively. Note that, in these settings, per time step, methods observe 2/3 \u00d7 11.6K and 2/3 \u00d7 2.9K samples, respectively. We leave the experiments on ImageNet2K for the Appendix due to space constraints. We compare two distillation methods against Naive in Figure 8. Other methods are presented in the Appendix. Conclusion. We still consistently observe that Naive outperforms all distillation methods on both the 50 and the 200 time steps. Moreover, the relative performance across distillation methods is preserved similarly to the 20 time steps setup. That is, our conclusions are largely robust under different number of time steps. This is contrary to the observation of the prior art [43], this is because unlike our setting, [43] does not scale the compute with increased number of time steps. Does the Compute Budget Matter? Finally, we explore the impact of changing the computational budget on the performance of different distillation methods on CGLM under 20 time steps. We study two scenarios, one where the budget is increased to C = 400 and where it is reduced to C = 40, originally C = 100 for CGLM. Hence, distillation would be allocated 267 and 27 iterations in this setting, respectively. As shown in Figure 2, the higher budget setting allows approximately a full pass per time step over all stored data. We leave the experiments on ImageNet2K for the Appendix. We compare two distillation methods with Naive in Figure 9. The remaining methods are presented in the Appendix. Conclusion. Again, we observe that Naive outperforms all distillation methods in both increased and decreased compute budget settings. The final gap between MSE distillation and Naive is 11.41% for C = 40, this gap is reduced to 3.85% for C = 400. Surprisingly, even with increased compute budget, distillation methods still fall behind Naive. However, the reduced gap in performance compared to that of Naive is a strong indication that the reason behind the failure of distillation methods is indeed the limited computation. \fFigure 7. FC Layer Correction. Even though loss functions (CosineFC and ACE) might outperform Naive in the first few time steps, eventually Naive catches up. Overall, Naive consistently outperforms all considered calibration methods too, namely, BIC and WA. Figure 8. CGLM Distillation with Different Number of Time Steps. 
Under larger number of time steps, where total number of iterations is normalized accordingly, Naive outperforms distillation in both settings, namely, 50 and 200 time steps. Figure 9. CGLM Distillation with Different Computational Budgets. Naive outperforms distillation methods under the restricted 40 and the larger 400 iterations (originally 100). Distillation methods become competitive when enough budget is available. Figure 10. Linear vs Full Model Training. Performing Linear fine tuning allows to leveraging the computational budget efficiently improving the gap compared to full model training particularly for better pretrained models, e.g., Instagram1B+ImNet1K. 4.4. Exploring Partial Training We now investigate the reallocation of the computational budget through partial training of the model, which is a model expansion method that involves pre-selecting the subnetwork to be trained. This approach is more computationally efficient, especially on large-scale datasets. The top (FC) layer is the smallest part of the network that can be retrained. We compare partial training of the network, i.e., FC layer only, to training the full model (Naive) using two different model initializations, ImageNet1K pretraining [25] and Instagram1B+ImageNet1K pretraining [36]. Note that Instagram1B+ImageNet1K is a stronger pretrained model, with better feature representations. To normalize the computation for the FC partial training, we permit 3C training iterations compared to full model training (Naive) with C training iterations. Hence, CGLM FC partial training performs 300 iterations compared to 100 iterations for training Naive. We present our results in Figure 10, where the shades of purple and blue represent models trained from pretrained ImageNet1K and Instagram1B+ImageNet1K models, respectively. Conclusion. There exists a gap between full model training and partial FC layer training (Linear). However, this gap is greatly reduced when a stronger pretrained model is adopted as an initialization. More specifically, the final gap drops from 23.73% for ImageNet1K initialization to 9.45% for Instagram1B+ImageNet1K initialization. Partial training of the FC layer for Instagram1B+ImageNet1K model initialization outperforms ImageNet1K full model training on average, over time steps, by 8.08%, which verifies that partially training a strong backbone could be more beneficial than fully training a weaker one. 5."
+ },
+ {
+ "url": "http://arxiv.org/abs/1911.11433v2",
+ "title": "\"You might also like this model\": Data Driven Approach for Recommending Deep Learning Models for Unknown Image Datasets",
+ "abstract": "For an unknown (new) classification dataset, choosing an appropriate deep\nlearning architecture is often a recursive, time-taking, and laborious process.\nIn this research, we propose a novel technique to recommend a suitable\narchitecture from a repository of known models. Further, we predict the\nperformance accuracy of the recommended architecture on the given unknown\ndataset, without the need for training the model. We propose a model encoder\napproach to learn a fixed length representation of deep learning architectures\nalong with its hyperparameters, in an unsupervised fashion. We manually curate\na repository of image datasets with corresponding known deep learning models\nand show that the predicted accuracy is a good estimator of the actual\naccuracy. We discuss the implications of the proposed approach for three\nbenchmark images datasets and also the challenges in using the approach for\ntext modality. To further increase the reproducibility of the proposed\napproach, the entire implementation is made publicly available along with the\ntrained models.",
+ "authors": "Ameya Prabhu, Riddhiman Dasgupta, Anush Sankaran, Srikanth Tamilselvam, Senthil Mani",
+ "published": "2019-11-26",
+ "updated": "2020-05-20",
+ "primary_cat": "cs.LG",
+ "cats": [
+ "cs.LG",
+ "cs.CV",
+ "cs.IR",
+ "eess.IV",
+ "stat.ML"
+ ],
+ "main_content": "Introduction With the current unprecedented growth in deep learning, the primary and most pressing challenge faced by the community is to \ufb01nd the most appropriate model for a given dataset. Consider the $1M Data Science Bowl challenge for detecting lung cancer hosted by Kaggle in 20171. It introduces a dataset of lung scans, consisting of thousands of images, and aims to develop algorithms that accurately determine when lesions in the lungs are cancerous. To solve this in practice, a common approach is to abstract the problem of lung cancer detection as a special case of object detection: use pre-trained models on large scale image classi\ufb01cation datasets, and \ufb01ne-tune them to target lung cancer dataset. The process would begin with choosing a state-of-the-art deep learning architecture, say AlexNet [16], with weights pre-trained on ImageNet dataset. After \ufb01ne-tuning multiple models and using multiple pre-training datasets, it is found that both the AlexNet architecture and ImageNet dataset are not suitable for the task of lung cancer detection. This procedure is then extensively repeated for multiple models, such as ResNet [10], VGG-16 [27], VGG-19 [27], and Network-inNetwork [18], till the ideal model is found. Similarly, it has to be repeated for different datasets such as CIFAR-10, CIFAR-100, and TinyImageNet until the pre-training dataset is obtained. This is an extremely expensive hit-and-trial search approach. Models pre-trained from generic datasets have shown improved performance in different domains and diverse tasks such as music genre classi\ufb01cation [24], face recognition [29], healthcare [9], and food industry [32]. Currently, the choice of the generic dataset and pre-train model is purely based on human expertise and prior knowledge. Kornblith et al. [14] considered thirteen different deep learning 1https://www.kaggle.com/c/data-science-bowl-2017 Preprint. Under review. arXiv:1911.11433v2 [cs.LG] 20 May 2020 \fmodels trained on ImageNet and studied the \ufb01ne-tuned performance to different target datasets. They found improvements in certain transfer scenarios and also showed that transferability is limited based on the source and target datasets properties. This explains the necessity for a systematic approach to choose the dataset and pre-trained networks for a given unknown dataset or task (such as, lung cancer detection). In this paper, we aim to address this problem by proposing an automated deep learning model recommendation system from a repository of models for a given unknown dataset. We further predict the accuracy of the recommended deep learning model on the unknown dataset without the need for training or \ufb01ne-tuning. This enables the user to take a well informed decision on which popular deep learning model to adopt for the unknown dataset in hand and also what is the ballpark performance to expect. The proposed research problem is de\ufb01ned as follow: For a given unknown dataset du, select a dataset dc and model mc that will provide the best \ufb01ne-tuning accuracy, ac of du after being pre-trained on dc and also predict the accuracy ac without the need for training. Formally, our system assumes a repository of k popular deep learning architectures trained independently on n different existing datasets. Given an unknown dataset du, we \ufb01nd the most similarly dataset dc from the list of n datasets and predict the accuracy of every model k on that dataset, without actually training the model on it. 
This allow us to quantitatively assess the promise of transferring models and also recommend a suitable model for the unknown dataset. For example, for the given unknown lung cancer dataset, say the proposed approach predicts that STL-10 dataset [5] as the most similar dataset. Then, we predict the accuracy of all the architectures available for STL-10 in the model repository for the lung cancer dataset and rank them. Thus, we obtain the best performing pre-training dataset as well as the architecture. The proposed approach advances the literature to achieve a deep neural network recommendation systems using only limited resources and in real-time. To summarize, the primary research contributions of this research are as follows: 1. A model recommendation system which predicts the best suitable pre-trained model from a repository of models and predict its accuracy for the unknown dataset. 2. A general purpose unsupervised model encoder which extracts a \ufb01xed length, continuous vector representation for any given discrete, variable-length deep learning architecture, along with its hyperparameters. 3. A dataset similarity ranker system which characterizes the similarity distribution between a given unknown dataset and datasets in our repository using an ensemble of classi\ufb01ers. We show that it is possible to get a good correlation between the dataset similarity predictions and actual accuracy obtained on that dataset. 4. A accuracy regressor which estimates the accuracy of a deep learning model on an unknown query dataset ef\ufb01ciently, using the dataset similarity ranker and model encoder features. 5. In order to further increase the reproducibility of the proposed work, the entire working implementation is publicly made available along with the trained models: https://github.com/ dl-model-recommend/cikm-2019 2 Existing Literature We will discuss the existing literature in the area of Neural Architecture Search (NAS), accuracy prediction, as well as recommender systems. Neural Architecture Search (NAS): The aim of NAS is to \ufb01nd the most suitable architecture for a given dataset from the set of all possible architectures [36]. ENAS [23] is the \ufb01rst work towards fast and inexpensive automatic model design. Baker et al. [2] uses a Markov decision based metamodeling algorithm with an average time to search for the best model being around 8-10 days. Liu et al. [20] and Real et al. [25] use an evolutionary algorithm instead of reinforcement learning algorithms. Liu et al. [19] propose using a sequential model-based optimization (SMBO) strategy, which is up to 5 \u22128 times more ef\ufb01cient than reinforcement learning based techniques. Liu et al. [21] is the \ufb01rst major work to pose architecture search as a differentiable problem over a discrete and non-differentiable search space instead of a reinforcement learning problem. Accuracy Prediction: Baker et al. [3] leverage standard frequentist regression models to predict \ufb01nal performance based on architecture, hyperparameters and partial learning curves. Deng et al. [7] 2 \f(a) An overview of the proposed system. (b) An overview of the Unsupervised Model Encoder Figure 1: Given a query dataset, we \ufb01rst calculate the dataset similarity vector. The obtained pairwise vector along with the model encoding is used to predict the accuracy. Then we rank the results and recommend a model from our repository. predict the performance of a network before training, based on its architecture and hyperparameters. 
TAPAS [12] is another novel deep neural network accuracy predictor, parameterized on network topology as well as a measure of dataset dif\ufb01culty. Scheidegger et al. [26] introduced a class of neural networks called ProbeNets to measure the dif\ufb01culty of an image classi\ufb01cation dataset, without having to train expensive state-of-the-art neural networks on new datasets. In contrast to existing techniques that rely on reinforcement learning or evolutionary algorithms, Elsken et al. [8] employ a new method which is a combination of hill climbing, network morphism, and cosine annealing based optimization. Summarizing, works such as Peephole [7] use only the model architecture, while TAPAS [12] use the model architecture along with the characterization of the query unknown dataset. In the proposed work, we use the model architecture as well as the similarity between the unknown dataset and the known dataset on which the model was trained upon. Additionally, the training process for learning the model representation and the dataset similarity are performed with a large training data. Traditionally in literature, deep learning methods are used as a solution to solve the personalized recommendation problem. However in this research, we propose a technique to use recommendation systems as a solution for which deep learning model to be used for a dataset and task. Further, most of the existing NAS techniques for deep learning are still unusable in practical situations, requiring huge clusters of GPUs and consuming a lot of time2. Moreover, in most of these applications, \ufb01nding a novel architecture from scratch is not essentially required and a minor variant of a popular deep learning model would suf\ufb01ce. 3 Model Recommendation Approach As illustrated in Figure 1a, the proposed approach consists of three novel components: 1. Unsupervised Model Encoder: which obtains a \ufb01xed length continuous space representation for a variable-length, discrete-spaced deep learning model architecture, along with its hyperparameters, using an unsupervised encoding technique. 2. Dataset Similarity Ranker: which predicts the most similar existing dataset dc \u2208 [d1, d2, d3, . . . , dn] for any given unknown dataset du. 3. Accuracy Regressor: It learns the mapping from the above two unsupervised representations to the accuracy obtained by the model. Thus, for a given unknown dataset, our system will retrieve a dataset and architecture from the repository using the dataset similarity ranker, encode a \ufb01xed length representation of the architecture using the unsupervised model encoder, and predict the accuracy of the architecture on the unknown dataset using the accuracy regressor. Although this is quite a challenging combination of tasks, we feel that it remains an important problem to solve, given its bene\ufb01ts in saving both resources and time compared to hit-and-trial approaches. 2https://twitter.com/beenwrekt/status/961262527240921088 3 \f3.1 Unsupervised Model Encoder Deep neural networks\u2019 architecture can be considered as a directed acyclic graph (DAG) whose nodes represent certain transformations, such as convolution, recurrent cells, dropout, and pooling. In this component, we aim to develop a representation of such a graph (network architecture) in an unsupervised fashion. The \ufb01rst step is to de\ufb01ne a representation of individual nodes, i.e., the layers and encode information about the layer sequence into \ufb01xed sized vectors. 
This is analogous to encoding individual words of a sentence using a word embedding model (such as, word2vec) and using the individual word embeddings to learn a language model at the sentence level. Learning to generate valid models: We exploit the fact that models have only certain structures which are valid. Valid models are those which could be trained for a given dataset without any errors and could turn out to be accurate / optimal or inaccurate / sub-optimal for that dataset. Invalid models are those that are either structurally impossible to occur, such as networks having embedding layer between two LSTM layers, or those that cannot be compiled for the given dataset, such as a CNN that reduces the image size to less than zero. Similarities can be drawn between this imposition of structures in deep networks and imposition of a grammar in a language. This further motivates the usage of a sequential language model technique to encode possible structures of a network architecture. A manually de\ufb01ned grammar is used to generate lots of possible valid models for a given dataset and these valid models are stored in a custom JSON structure, which is very similar to the Keras JSON format or the Caffe protobuf format. Construction: As illustrated in Figure 3b, given an input abstract JSON representation of model architecture, we compute a \ufb01xed-length vector as the output. The major steps are as follows: (1) Layer Encoding: A layer vocabulary is constructed which contains all unique layers with its hyperparameter combinations. For instance, a Convolution2D layer has the following hyperparameter set: {\u2019number of \ufb01lters\u2019: [512, 384, 256, 128, 64, 32], \u2019kernel row\u2019: [1, 2, 3, 4, 5], \u2019kernel column\u2019: [1, 2, 3, 4, 5], \u2019stride row\u2019: [1, 2, 3], \u2019stride column\u2019: [1, 2, 3], \u2019border mode\u2019: [\u2019Same\u2019, \u2019Valid\u2019]}, totalling to 2700 unique combinations to the layer vocabulary. To account for layers or hyperparameters that are not a part of our grammar, we added an Unknown layer, UNK, to our vocabulary to be able to encode any kind of deep learning architecture. A total of 19 unique layers were used resulting in a vocabulary size of 4523 tokens. The encoding is performed similar to a Uni\ufb01ed Layer Code[7]. (2) Generating Layer Representations: Each model architecture is represented as a sequence of tokens, for example Convolution2D _512_3_3_1_1_Same is one token in that sentence. If a function model is provided, each path from source to sink is added as an independent sentence. Inspired from word embedding, for each given layer we predict the surrounding context of layers resulting in vector representations for each layer, independently. We train word2vec representation with standard hyperparameters (gensim library) to obtain a 512-dimensional layer representation. (3) Generating Model Representations: We use the layer embeddings to initialize and train a three layer LSTM model with tied weights and trained it similar to a language modelling task to generate the 512-dimensional model representation. Sentence perplexity is used as the objective function to be optimized while learning the language model. Thus, we develop an unsupervised subsystem to convert a variable length sequence of discrete network layers to a succinct, continuous, vector-space representation. 3.2 Dataset Similarity Ranker This component computes the similarity between the given dataset and all existing datasets in the dataset repository. 
The aim is to study the similarity between datasets and provide a guided approach for transferability between datasets. As illustrated in Figure 2a, given a query dataset, dq and a list of repository datasets, di, i \u2208[1, . . . , n], the procedure for calculating the dataset similarity between the query dataset and the repository datasets is as follows: 1. A set of s data samples are uniformly picked from each of the repository datasets di. 2. For every sample, j, in these di, we extract features fij from the input data. These form the input vectors and the output class is the dataset number i. 3. Several classi\ufb01ers are trained on each of the sampled image features to predict which of the n repository datasets does the given feature vector belong to. Torralba [31] studied the presence of a unique signature for every dataset, enabling us to \ufb01nd similarity and dissimilarity between datasets. 4 \f(a) An overview of the Dataset Similarity Ranker. (b) An overview of the Accuracy Regressor Figure 2: Given the dataset and model encoding representations, we can compute the predicted accuracies for that pre-trained pairing. In this manner, we predict the accuracies. They are further ranked and the best predicted accuracy is used to return the model and dataset. Now, given an unknown query dataset, du 1. A set of s data samples are randomly picked from the query dataset, du. For each sample, we extract all the set of features fu. 2. The features are passed to the respective trained n-class classi\ufb01ers, which classify each sample individually to one of the n repository datasets, ns X j=1 Ck,f(s(j) u ) (1) \u2200{k, f} \u2208[1, . . . , en], Ck,f denotes the classi\ufb01er k learnt on feature f, and ns is the number of samples in the set s 3. We collect all the predictions and perform majority voting fusion across the ensemble, obtained a n \u00d7 1 output vector denoting the probability of the similarity between the unknown dataset du against each of the n repository dataset di, \u2295isim(dq, di) = X k X f ns X j=1 Ck,f(s(j) u ) (2) where, \u2295i denotes concatenation of values across the i repository datasets There are three feature extractors used for the image modality: (i) GIST [22] (ii) DAISY [30] (iii) Local Binary Pattern (LBP) [35]. Five popular classi\ufb01ers are used in the ensemble: (i) Naive Bayes (NB), (ii) Random Decision Forest (RDF), (iii) Boosted Gradient Trees (BGT), (iv) Multilayer Perceptron (MLP), and (v) Support Vector Machines (SVM). 3.3 Accuracy Regressor The accuracy regressor takes a n1-dimensional dataset similarity vector between unknown dataset du and repository dataset di (obtained using equation (1) and a n2-dimensional model representation vector as input and predicts accuracy of the model for the unknown dataset, as shown in Figure 2b. This is learnt using a supervised regression approach, thus avoiding the need to ef\ufb01ciently learn to predict accuracy of deep networks. The system predicts the expected accuracy of a model trained on a dataset with a degree of similarity to the query dataset as given by the dataset similarity vector. Given a query dataset du, the accuracy regressor component is learnt as follows: 1. We extract the dataset similarity vector, n1, for every pair of dataset (all seven image datasets) using the dataset similarity ranker subsystem. 2. Using model encoder subsystem, we encode the models available in our model repository to obtain a vector n2 for each model. 
5 \f(a) The tSNE plot of 70 VGG variant and random DL models (b) Correlation plot with coef\ufb01cients for an unknown dataset Figure 3: The performance of unsupervised feature encoder 3. We concatenate these two features as n1 + n2 dimensional input vector and perform regression using an ensemble of regressors to learn a mapping function between this high dimensional input vectors and the accuracy of the model, pre-trained on di and \ufb01ne-tuned on du. We use eight different types of regressors: (i) Support Vector Regressor (RBF, linear, polynomial Kernel), (ii) Multi-Layer Perceptron, (iii) Ridge Regression, (iv) RandomForest Regressor, (v) GradientBoosting Regression, and (vi) AdaBoost Regressor. 4 Experiments and Analysis In this section, we demonstrate the performance of the three individual components and the overall approach. All the experiments were implemented using PyTorch 3 and the code is publicly made available along with the trained models: https://github.com/dl-model-recommend/cikm-2019 4.1 Model Repository The image dataset repository contains seven different diverse benchmark vision datasets: (i) MNIST, (ii) Fashion-MNIST, (iii) CIFAR-10, (iv) CIFAR-100, (v) SVHN, (vi) STL-10, and (vii) GTSRB. All of them are resized to 32 \u00d7 32 pixels. The choice of image based deep learning architectures in the repository is constrained by the input image size (32 \u00d7 32), with: (i) VGG-16 [27], (ii) Network-in-Network (NIN) [18], (iii) Strictly Convolutional Neural Network (All-CNN) [28], (iv) ResNet-20 [10], (v) Wide-ResNet [34], (vi) Pre-ResNet [11], and (vii) LeNet [17]. 4.2 Experimental Details To learn the word embedding and the language model for unsupervised model encoding, we generated 190, 000 random valid models using the proposed grammar (simulated dataset). For each model, we randomly replaced a layer as UNK with a probability of 0.2 and generated a total of 570, 000. This dropout makes the sampling more diverse as well as enables us to encode models which cannot be de\ufb01ned by the grammar. To train and evaluate the accuracy regressor, we take a subset of models from the above set of 190, 000 models. We train these models on the seven different image datasets and we have 700 inaccurate models which perform poorly and 504 accurate models on the respective datasets. This constitutes a total of 1204 models along with the accuracy they obtain on the respective datasets. We divide the models into a 80-20 train-test split randomly and use this dataset to train and evaluate the accuracy regressor. 4.3 Unsupervised Model Encoder We evaluate the subsystem by evaluating the perplexity of the encoded representations generated, as shown in Figure 5 (b). A lower perplexity score implies that the language model is better at generating 3https://pytorch.org/ 6 \f(a) Sankey plot with computer dataset similarity (b) The effect of sample size Figure 4: The performance of dataset similarity ranker valid models. To study the effectiveness of our learned model architecture representation, we take 70 variations of VGG model by varying the number of blocks with hyperparameters and 70 random deep learning models. The two dimensional tSNE visualization of the model representations in Figure 3a show that all the VGG-like models are clustered together and are very different from the random deep learning models. This shows that similar looking architectures have similar representations in the learnt feature space. 
Thus, the proposed unsupervised model encoder can be used as a general purpose deep learning architecture encoding technique and can be used and extended for multiple applications. 4.4 Dataset Similarity Ranker We evaluate the performance of the dataset similarity ranker by performing an exhaustive leave-oneout test on the dataset repository. For each of the seven unknown datasets and the rest of the repository, we predict the ranking of the datasets obtained from our system. To obtain the ground truth, we train all the seven models: (i) VGG-16, (ii) NIN, (iii) All-CNN, (iv) ResNet-20, (v) Wide-ResNet, (vi) Pre-ResNet, and (vii) LeNet on each of the 6 remaining datasets present in the catalog. Given the query dataset, we \ufb01ne-tune these networks, giving accuracy values a1, a2, .., a6. The ensemble of models are trained using a sample of images taken from the train dataset of the respective datasets, while the ensemble of models are tested using a sample of images taken from the test dataset. The covariance shift that exists between the train and test of the respective datasets could also in\ufb02uence the performance of the dataset similarity ranker. We obtain the correlation scores and show that the dataset ranking provided by our system is highly correlated to the ranking obtained by the accuracy exhaustive \ufb01ne-tuning. This indicates that models pre-trained on the dataset that we predicted to be the most similar dataset, provided the best performance accuracy after being \ufb01ne-tuned on the unknown dataset. The results are populated in Table 1 and the correlation plot for an unknown dataset, CIFAR-100 and LeNet as the model is provided in Figure 3b. It can be observed that the correlation coef\ufb01cients are positive and high for all the datasets except SVHN. This implies that \ufb01nding similar datasets that could provide a good pre-training for models is possible and also shows that there are no similar datasets for SVHN in the repository, indicating that none of the pre-trained models are bound to produce high results in SVHN datasets. Also, based on general intuition we expect CIFAR-10 and CIFAR-100, and MNIST and FashionMNIST to look visually similar. The proportion of each unknown dataset being classi\ufb01ed to the repository dataset is shown in \ufb01gure 4a, which follows our intuitions. Furthermore, we study the effect of sample size which is one of the critical hyper-parameter for computing the dataset similarity. Although we used 512 as the effective sample size, we studied the effect of four different sample size on the classi\ufb01cation performance: [64, 256, 512]. The result is shown in Figure 5b and can be observed that our subsystem can give reliable predictions irrespective of the size of the sample for MNIST and CIFAR variations. However, for SVHN and STL for which there are no related datasets, a smaller sample size tends to classify the input images towards SVHN. 
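To make the pipeline concrete, the following is a minimal sketch of the dataset similarity ranker (Section 3.2) and the accuracy regressor (Section 3.3), not the authors' implementation: it uses toy features, only two of the five classifiers, and a single ridge regressor instead of the full regressor ensemble, and the helper names are ours.

```python
# Minimal sketch: majority-vote dataset similarity vector + accuracy regression
# over the concatenated (similarity, model-embedding) features.
import numpy as np
from sklearn.naive_bayes import GaussianNB
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import Ridge

def similarity_vector(query_feats, repo_feats):
    """repo_feats: list of (n_samples, d) arrays, one per repository dataset."""
    X = np.vstack(repo_feats)
    y = np.concatenate([np.full(len(f), i) for i, f in enumerate(repo_feats)])
    votes = np.zeros(len(repo_feats))
    for clf in (GaussianNB(), RandomForestClassifier(n_estimators=50)):
        clf.fit(X, y)
        preds = clf.predict(query_feats)                     # classify each query sample
        votes += np.bincount(preds.astype(int), minlength=len(repo_feats))
    return votes / votes.sum()                               # n x 1 similarity vector

def train_accuracy_regressor(sim_vectors, model_embeddings, accuracies):
    """Concatenate n1-dim similarity and n2-dim model vectors, then regress accuracy."""
    X = np.hstack([sim_vectors, model_embeddings])
    return Ridge(alpha=1.0).fit(X, accuracies)
```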
7 \fUnknown dataset CIFAR10 CIFAR100 FMNIST GTSRB MNIST SVHN VGG-16 Pearson 0.981 0.844 0.564 0.796 0.572 0.217 Spearman 0.928 0.943 0.883 0.886 0.429 0.486 Kendall 0.828 0.867 0.788 0.733 0.200 0.333 NIN Pearson 0.624 0.070 0.976 0.312 0.970 0.899 Spearman 0.828 0.029 0.551 0.232 0.599 0.714 Kendall 0.733 0.066 0.414 0.138 0.466 0.466 All-CNN Pearson 0.480 0.492 0.983 0.085 0.978 0.934 Spearman 0.314 0.486 0.464 0.232 0.314 0.714 Kendall 0.200 0.333 0.276 0.138 0.200 0.466 ResNet-20 Pearson 0.95 0.886 0.728 0.790 0.720 -0.185 Spearman 0.638 0.943 0.706 0.829 0.486 -0.029 Kendall 0.552 0.867 0.645 0.733 0.333 -0.067 Wide-ResNet Pearson 0.000 0.089 0.966 0.499 0.969 0.697 Spearman -0.085 0.257 0.522 0.232 0.486 0.609 Kendall -0.200 0.200 0.414 0.138 0.333 0.414 Pre-ResNet Pearson -0.370 0.721 0.962 0.497 0.965 0.945 Spearman -0.085 0.428 0.521 0.232 0.486 0.714 Kendall -0.200 0.200 0.414 0.138 0.333 0.466 LeNet Pearson 0.981 0.991 0.756 0.798 0.755 0.982 Spearman 0.928 0.943 0.530 0.886 0.600 1.00 Kendall 0.828 0.867 0.501 0.733 0.467 1.00 Table 1: The correlation coef\ufb01cients obtained between the dataset similarity scores and the actual performance accuracy. This shows that the dataset similarity score is an unbiased estimator of the model\u2019s accuracy. (a) The MSE error of the various regressors (b) The perplexity graph of training an LSTM for Unsupervised Model Encoder. Figure 5: The performance of accuracy regressors and the reason for failure in text based DL models 4.5 Accuracy Regressor We evaluate the regressor model using the Mean Square Error (MSE) error between the predicted accuracy by the regressor and the actual accuracy obtained after \ufb01ne-tuning. The obtained results are shown in Figure 5a. It can be observed that ridge regression performs the best with a MSE of 0.15. This shows the a simple regression could predict the approximate performance of a deep learning model on a given unknown dataset, without the need for sophisticated models. Thus for a given 8 \funknown dataset, we sample n = 512 images, \ufb01nd the most similar dataset using an ensemble of simple machine learning classi\ufb01er. For all the architectures available in the repository for most similar dataset, we extract a \ufb01xed length representation using the unsupervised model encoder. This is a simple forward pass through the word embedding layer and the LSTM based language model. The dataset similarity vector and the model representation is fed into the accuracy regressor to predict the performance of the given models and \ufb01nd the best performing architecture. Hence, we show that accuracy prediction could be a practical almost real-time solution and could be adopted to various challenging domains. 5 Practical Use Case A practitioner will usually prefer the most recent deep learning model, which might be unnecessarily complex for the task at hand. However, theoretically the choice of model depends on the properties of the dataset and the task [1]. It is interesting to study the performance of the proposed model recommendation system with respect to human preferences. To show the effectiveness of the proposed deep learning model recommendation pipeline in a practical setting, we provide human baselines for three different datasets: (i) Caltech-UCSD Birds-200-2011 [33], (ii) Stanford Cars [15], and (iii) ETHZ Food-101 [4]. Accuracies of various deep learning learning models on these datasets are manually computed in the literature [1]. 
For Caltech-UCSD Birds-200-2011 and ETHZ Food-101, our approach retrieved ResNet as the recommended architecture with predicted accuracies of 63.2% and 57.4%, respectively. Ground-truth training, as performed in the literature [1], yields 76.3% and 67.59%, respectively, which are much higher than those of the LeNet and VGG models. In the case of the Stanford Cars dataset, however, our approach recommended the VGG-16 architecture with a predicted accuracy of 82.1%. The same trend is observed in the literature, where VGG-16 outperforms the ResNet variants and LeNet, providing 85.2% accuracy. Thus, although the accuracy prediction only provides a ballpark estimate of the actual accuracy, the rank order of the retrieved models suggests that the proposed approach does not always retrieve the most complex model, but rather retrieves models based on the properties of the dataset, the task, and the architecture of the model."
+ },
+ {
+ "url": "http://arxiv.org/abs/1909.09389v1",
+ "title": "Sampling Bias in Deep Active Classification: An Empirical Study",
+ "abstract": "The exploding cost and time needed for data labeling and model training are\nbottlenecks for training DNN models on large datasets. Identifying smaller\nrepresentative data samples with strategies like active learning can help\nmitigate such bottlenecks. Previous works on active learning in NLP identify\nthe problem of sampling bias in the samples acquired by uncertainty-based\nquerying and develop costly approaches to address it. Using a large empirical\nstudy, we demonstrate that active set selection using the posterior entropy of\ndeep models like FastText.zip (FTZ) is robust to sampling biases and to various\nalgorithmic choices (query size and strategies) unlike that suggested by\ntraditional literature. We also show that FTZ based query strategy produces\nsample sets similar to those from more sophisticated approaches (e.g ensemble\nnetworks). Finally, we show the effectiveness of the selected samples by\ncreating tiny high-quality datasets, and utilizing them for fast and cheap\ntraining of large models. Based on the above, we propose a simple baseline for\ndeep active text classification that outperforms the state-of-the-art. We\nexpect the presented work to be useful and informative for dataset compression\nand for problems involving active, semi-supervised or online learning\nscenarios. Code and models are available at:\nhttps://github.com/drimpossible/Sampling-Bias-Active-Learning",
+ "authors": "Ameya Prabhu, Charles Dognin, Maneesh Singh",
+ "published": "2019-09-20",
+ "updated": "2019-09-20",
+ "primary_cat": "cs.CL",
+ "cats": [
+ "cs.CL",
+ "cs.LG"
+ ],
+ "main_content": "Introduction Deep neural networks (DNNs) trained on large datasets provide state-of-the-art results on various NLP problems (Devlin et al., 2019) including text classi\ufb01cation (Howard and Ruder, 2018). However, the cost and time needed to get labeled data and to train models is a serious impediment to creating new and/or better models. This problem can be mitigated by creating smaller representative datasets with active learning which can be used for training DNNs to achieve similar test accuracy as \u2217indicates equal contribution \u2020 Work done at Verisk | AI that using the full training dataset . In other words, the smaller sample can be considered a surrogate for the full data. However, there is lack of clarity in the active learning literature regarding sampling bias in such surrogate datasets created using active learning (Settles, 2009): its dependence on models, functions and parameters used to acquire the sample. Indeed, what constitutes a good sample? In this paper, we perform an empirical investigation using active text classi\ufb01cation as the application. Early work in active text classi\ufb01cation (Lewis and Gale, 1994) suggests that greedy query generation using label uncertainty may lead to ef\ufb01cient representative samples (Nonetheless, the same test accuracy). Subsequent concerns regarding sampling bias has lead to explicit use of expensive diversity measures (Brinker, 2003; Hoi et al., 2006) in acquisition functions or using ensemble approaches (Liere and Tadepalli, 1997; McCallum and Nigam, 1998) to improve diversity implicitly. Deep active learning approaches adapt the discussed framework above to train DNNs on large data. However, it is not clear if the properties of deep approaches mirror those of their shallow counterparts and if the theory and the empirical evidence regarding sampling ef\ufb01ciency and bias translates from shallow to deep models. For example, (Sener and Savarese, 2018) and (Ducoffe and Precioso, 2018) \ufb01nd that uncertainty based strategies perform no better than random sampling even if ensembles are used and using diversity measures outperform both. On the other hand, (Beluch et al., 2018; Gissin and Shalev-Shwartz, 2019) \ufb01nd that uncertainty measures computed with ensembles outperform diversity based approaches while (Gal et al., 2017; Beluch et al., 2018; Siddhant and Lipton, 2018) \ufb01nd them to outperform uncertainty measures computed using single models. A recent empirical study (Siddhant and Lipton, 2018) investigating active learning in NLP suggests that Bayesian active learning outperforms classical uncertainty sampling across all settings. However, the approaches have been limited to relarXiv:1909.09389v1 [cs.CL] 20 Sep 2019 \fatively small datasets. 1.1 Sampling Bias in Active Classi\ufb01cation In this paper, we investigate the issues of sampling bias and sample ef\ufb01ciency, the stability of the actively collected query and train sets and the impact of algorithmic factors i.e. the setup chosen while training the algorithm, in the context of deep active text classi\ufb01cation on large datasets. In particular, we consider two sampling biases: label and distributional bias, three algorithmic factors: initial set selection, query size and query strategy along with two trained models and four acquisition functions on eight large datasets. To isolate and evaluate the impact of the above (combinatorial) factors, a large experimental study was necessary. 
Consequently, we conducted over 2.3K experiments on 8 popular, large, datasets of sizes ranging from 120K-3.6M. Note that the current trend in deep learning is to train large models on very large datasets. However, the aforementioned issues have not yet been investigated in the literature in such a setup. As shown in Table 1, the datasets used in latest such analysis on active text classi\ufb01cation by (Siddhant and Lipton, 2018) are quite small in comparison. The datasets used by us are two orders of magnitude larger, our query samples often being the size of the entire datasets used by previous works, and the presented empirical study is more extensive (20x experiments). Our \ufb01ndings are as follows: (i) We \ufb01nd that utilizing the uncertainty query strategy using a deep model like FastText.zip (FTZ)1 to actively construct a representative sample provides query and train sets with remarkably good sampling properties. (ii) We \ufb01nds that a single deep model (FTZ) used for querying provides a sample set similar to more expensive approaches using ensemble of models. Additionally, the sample set has a large overlap with support vectors of an SVM trained on the entire dataset largely invariant to a variety of algorithmic factors, thus indicating the robustness of the acquired sample set. (iii) We demonstrate that the actively acquired training datasets can be utilized as small, surrogate training sets with a 5x-40x compression for 1We use FastText.zip (FTZ) to optimize the time and resources needed for this study. training large, deep text classi\ufb01cation models. In particular, we can train the ULMFiT (Howard and Ruder, 2018) model to state of the art accuracy at 25x-200x speedups. (iv) Finally, we create a novel, state-of-the-art baseline for active text classi\ufb01cation which outperforms recent work (Siddhant and Lipton, 2018), using Bayesian dropout, utilizing 4x less training data. We also outperform (Sener and Savarese, 2018) at all training data sizes. The latter uses an expensive diversity based query strategy (coreset sampling). The rest of the paper is organized as follows: in Section 2, the experimental methodology and setup are described. Section 3 presents the experimental study on sampling biases as well as the impact of various algorithmic factors. In Section 4, we compare with prior literature in active text classi\ufb01cation. Section 5 presents a downstream use case fast bootstrapping of the training of very large models like ULMFiT. Finally, we discuss the current literature in light of our work in Section 6 and summarize the conclusions in Section 7. 2 Methodology This section describes the experimental approach and the setup used to empirically investigate the issues of (i) sampling bias and (ii) sampling ef\ufb01ciency in creating small samples to train deep models. 2.1 Approach A labelled training set is incrementally built from a pool of unlabeled data by selecting & acquiring labels from an oracle in sequential increments. In this, we follow the standard approach found in the active learning literature. We use the following terminology: Queries & Query Strategy: We refer to the (incremental) set of points selected to be labeled and added to the training as the query and the (acquisition) function used to select the samples as the query strategy. Pool & Train Sets: The pool is the unlabeled data from which queries are iteratively selected, labeled and added to the (labeled) train set. 
Let DS = (xi, yi) denote a dataset consisting of |S| = n i.i.d samples of data/label pairs, where |.| denotes the cardinality. Let S0 \u2282S denote an initial randomly drawn sample from the initial pool. At each iteration, we train the model on the current \ftrain set and use a model-dependent query strategy to acquire new samples from the pool, get them labeled by an oracle and add them to the train set. Thus, a sequence of training sets: [S1, S2 . . . , Sb] is created by sampling b queries from the pool set, each of size K. The b queries are given by [S1 \u2212S0, S2 \u2212S1 . . . , Sb \u2212Sb\u22121]. Note that |Si| = (|S0| + i \u00d7 K) and S1 \u2282S2 . . . \u2282Sb \u2282S. In this paper, we investigate the ef\ufb01ciency and bias of sample sets S1 b , S2 b , . . . , St b obtained by different query strategies Q1, Q2, . . . Qt. We exclude the randomly acquired initial set and perform comparisons on the actively acquired sample sets de\ufb01ned as \u02c6 Si j = (Si j \u2212Si 0). 2.2 Experimental Setup In this section, we share details of the experimental setup, and present and explain the choice of the datasets, models and query strategies used. Datasets: We used eight, large, representative datasets widely used for text classi\ufb01cation: AGNews (AGN), DBPedia (DBP), Amazon Review Polarity (AMZP), Amazon Review Full (AMZF), Yelp Review Polarity (YRP), Yelp Review Full (YRF), Yahoo Answers (YHA) and Sogou News (SGN). Please refer to Section 4 of (Zhang et al., 2015) for details regarding the collection and characteristics of these datasets. Table 1 provides a comparison regarding the choice of datasets, models and number of experiments between our study and (Siddhant and Lipton, 2018) which investigates a variety of NLP tasks including text classi\ufb01cation while we focus only on the latter. Models: We reported two text classi\ufb01cation models as representatives of classical and deep learning approaches respectively which were fast to train and also had good performance on text classi\ufb01cation: Multinomial Naive Bayes (MNB) with TF-IDF (Wang and Manning, 2012) and FastText.zip (FTZ) (Joulin et al., 2016). The FTZ model provides results competitive with VDCNNs (a 29 layer CNN) (Conneau et al., 2017) but with over 15,000\u00d7 speedup (Joulin et al., 2017). This allowed us to conduct a thorough empirical study on large datasets. Multinomial Naive Bayes (MNB) with TF-IDF features is a popularly claimed baseline for text classi\ufb01cation (Wang and Manning, 2012). Query Strategies: Uncertainty based query strategies are widely used and well studied in the active learning literature. Those strategies typiPaper #Exp Datasets (#Train) Models (Full Acc) DAL 120 TREC-QA (6k), MAReview (10.5k) SVM (89%), CNN (91%), LSTM (92%) Ours 2.3K AGN (120k), SGN (450k), DBP (560k), YRP (560k), YRF (650k), YHA (1400k), AMZP (3600k), AMZF (3000k) FTZ (97%), MNB (90%) Table 1: Comparison of active text classi\ufb01cation datasets and models (Acc on Trec-QA) used in (Siddhant and Lipton, 2018) and our work. We use significantly larger datasets (two orders larger), perform 20x more experiments, and use more ef\ufb01cient and accurate models. cally use a scoring function on the (softmax) output of a single model. We evaluate the following ones: Least Con\ufb01dence (LC) and Entropy (Ent). 
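The two scoring functions just named can be stated compactly in code. The following is a minimal sketch, not the paper's FastText-based setup: it assumes only that some classifier provides an (n_pool x n_classes) array of softmax probabilities for the current pool, and the helper names are ours.

```python
# Minimal sketch of the uncertainty query strategies: Least Confidence (LC) and
# Entropy (Ent) scores over softmax outputs, then a top-K query selection.
import numpy as np

def least_confidence(probs):
    return 1.0 - probs.max(axis=1)                 # higher score = more uncertain

def entropy(probs, eps=1e-12):
    return -(probs * np.log(probs + eps)).sum(axis=1)

def next_query(probs, pool_indices, k, score_fn=entropy):
    """Pick the K most uncertain pool points for one active-learning iteration."""
    scores = score_fn(probs)
    top = np.argsort(-scores)[:k]
    return [pool_indices[i] for i in top]
```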
Independently training ensembles of models (Lakshminarayanan et al., 2017) is another principled approach to obtain uncertainties associated with the output estimate.Then, we tried four query strategies LC and Ent computed using single and ensemble models and evaluated them against random sampling (chance) as a baseline. For ensembles, we used \ufb01ve FTZ ensembles (Lakshminarayanan et al., 2017). In contrast, (Siddhant and Lipton, 2018) used Bayesian ensembles using Dropout, proposed in (Gal et al., 2017). Please refer to Section 4 for a comparison. Implementation Details: We performed 2304 active learning experiments. We obtained our results on three random initial sets and three runs per seed (to account for stochasticity in FTZ) for each of the eight datasets. The query sizes were 0.5% of the dataset for AGN, AMZF, YRF and YHA and 0.25% for SGN, DBP, YRP and AMZP respectively for b = 39 sequential, active queries. We also experimented with different query sizes keeping the size of the \ufb01nal training data b \u00d7 K constant. The default query strategy uses a single model with output Entropy (Ent) unless explicitly stated otherwise. Results in the chance column are obtained using random query strategy. We used Scikit-Learn (Pedregosa et al., 2011) implementation for MNB and original implementation for FastText.zip (FTZ) 2. We required 3 weeks of running time for all FTZ experiments on a x1.16xlarge AWS instance with Intel Xeon E78880 v3 processors and 1TB RAM to obtain results presented in this work. The experiments are deterministic beyond the stochasticity involved in 2https://github.com/facebookresearch/ fastText \fDsets Limit FTZ (\u2229Q) MNB (\u2229Q) FTZ (\u2229S) MNB (\u2229S) SGN 1.61 1.56 \u00b1 0.03 1.15 \u00b1 0.32 1.59 \u00b1 0.01 1.57 \u00b1 0.01 DBP 2.64 2.50 \u00b1 0.02 2.27 \u00b1 0.11 2.51 \u00b1 0.0 2.58 \u00b1 0.01 YHA 2.30 2.25 \u00b1 0.01 2.22 \u00b1 0.02 2.25 \u00b1 0.0 2.28 \u00b1 0.0 YRP 0.69 0.69 \u00b1 0.0 0.56 \u00b1 0.13 0.69 \u00b1 0.0 0.69 \u00b1 0.01 YRF 1.61 1.56 \u00b1 0.02 1.42 \u00b1 0.21 1.56 \u00b1 0.0 1.57 \u00b1 0.01 AGN 1.39 1.33 \u00b1 0.04 1.13 \u00b1 0.17 1.33 \u00b1 0.0 1.35 \u00b1 0.01 AMZP 0.69 0.69 \u00b1 0.0 0.69 \u00b1 0.0 0.69 \u00b1 0.0 0.69 \u00b1 0.0 AMZF 1.61 1.58 \u00b1 0.02 1.6 \u00b1 0.01 1.59 \u00b1 0.0 1.61 \u00b1 0.0 Table 2: Label entropy with a large query size (b = 9 queries). \u2229Q denotes averaging across queries of a single run, \u2229S denotes the label entropy of the \ufb01nal collected samples, averaged across seeds. Naive Bayes (\u2229Q) has biased (inef\ufb01cient) queries while FastText (\u2229Q) shows stable, high label entropy showing a rich diversity in classes despite the large query size. Overall, the resultant sample (\u2229S) becomes balanced in both cases. training the FTZ model, random initialization and SGD updates. The entire list of hyperparameters and metrics affecting uncertainty such as calibration error (Guo et al., 2017) is given in the supplementary material. The experimental logs and models are available on our github link3. 3 Results In this section, we study several aspects of sampling bias (class bias, feature bias) and the impact of relevant algorithmic factors (initial set selection, query size and query strategy. We evaluated the actively acquired queries and sample set for sampling bias, and for the stability as measured by %intersection of collected sets across a critical in\ufb02uencing factor. 
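For reference, the two diagnostics used throughout Section 3 can be computed as follows. This is a minimal sketch with our own helper names: the label entropy of an acquired query or sample (which, for a uniform ground-truth label distribution, reduces to the Shannon entropy of the empirical label distribution and is maximal at log C for a balanced sample), and the percentage intersection between two acquired sample sets.

```python
# Minimal sketch: label entropy of an acquired set (Sec. 3.1.1) and the
# % intersection between two acquired sets used as a stability measure.
import numpy as np

def label_entropy(labels, num_classes):
    """Shannon entropy of the empirical label distribution; equals log(C) when balanced."""
    counts = np.bincount(np.asarray(labels), minlength=num_classes)
    p_hat = counts / counts.sum()
    p = p_hat[p_hat > 0]
    return float(-(p * np.log(p)).sum())

def percent_intersection(set_a, set_b):
    """Percentage of set_a's elements that also appear in set_b."""
    a, b = set(set_a), set(set_b)
    return 100.0 * len(a & b) / len(a)
```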
Higher sample intersections indicate more stability increase to the chosen in\ufb02uencing factor. 3.1 Aspects of Sampling Bias We study two types of sampling biases: (a) Class Bias and (b) Feature Bias. 3.1.1 Class Bias Greedy uncertainty based query strategies are said to pick disproportionately from a subset of classes per query (Sener and Savarese, 2018; Ebert et al., 2012), developing a lopsided representation in each query. However, its effect on the resulting sample set is not clear. We test this by measuring the Kullback-Leibler (KL) divergence between the ground-truth label distribution and the distribution obtained per query as one experiment (\u2229Q), 3https://github.com/drimpossible/Sampling-Bias-ActiveLearning and over the resulting sample (\u2229S) as the second. Let us denote P as the true distribution of labels, \u02c6 P the sample distribution and C the total number of classes. Since P follows a uniform distribution, we can use Label entropy instead (L = \u2212KL(P|| \u02c6 P) + log(C)). Label entropy L is an intuitive measure. The maximum label entropy is reached when sampling is uniform, \u02c6 P(x) = P(x), i.e. L = log(C). We present our results in Table 15. We observe that across queries (\u2229Q), FTZ with entropy strategy has a balanced representation from all classes (high mean) with a high probability (low std) while Multinomial Naive Bayes (MNB) results in more biased queries (lower mean) with high probability (high std) as studied previously. However, we did not \ufb01nd evidence of class bias in the resulting sample (\u2229S) in both models: FastText and Naive Bayes (column 5 and 6 from Table 15). We conclude that entropy as a query strategy can be robust to class bias even with large query sizes. 3.1.2 Feature Bias Uncertainty sampling can lead to undesirable sampling bias in feature space (Settles, 2009) by repeating redundant samples and picking outliers (Zhu et al., 2008). Diversity-based query strategies (Sener and Savarese, 2018) are used to address this issue, by selecting a representative subset of the data. In the context of active classi\ufb01cation, it is good to pick the most informative samples to be the ones closer to class boundaries4. Indeed, recent work suggests that the learning in deep classi\ufb01cation networks may focus on small part of the data closer to class boundaries, thus resembling support vectors (Xu et al., 2018; Toneva et al., 2019). To investigate whether uncertainty sampling also exhibits this behavior, we perform below a direct comparison with support vectors from a SVM. For this, we train a FTZ model on the full training data and train a SVM on the resulting features (sentence embeddings) to obtain the support vectors and compute the intersection of support vectors with each selected set. The percentage intersections are shown in Table 3. The high percentage overlap is a surprising result which shows that the sampling is indeed biased but in 4In this work, we assume ergodicity in the setup. We do not consider incremental, online modeling scenarios where new modes or new classes are sequentially encountered. \fFigure 1: Accuracy across different number of queries b for FastText and Naive Bayes, with b \u00d7 K constant. FastText is robust to increase in query size and signi\ufb01cantly outperforms random in all cases. 
Naive Bayes: (Left) All including b=39 perform worse than random, (Center) All including b=9 eventually perform better than random (Right) b = 39 performs better than random but larger query sizes perform worse than random. Uncertainty sampling with Naive Bayes suffers from sampling size bias. Dsets Common% Chance% #SV SGN 71.3 \u00b1 0.5 9.3 \u00b1 0.5 13184 DBP 86.3 \u00b1 0.5 9.7 \u00b1 0.5 1479 YRP 57.3 \u00b1 0.5 9.7 \u00b1 0.5 31750 AGN 45.0 \u00b1 0.8 21.0 \u00b1 1.6 1032 Table 3: Proportion of Support Vectors intersecting with our actively selected set calculated by |SSV \u2229\u02c6 Sb| |SSV | . Actively selected sets share large overlap with supports of an SVM (critical for classi\ufb01cation). a desirable way. Since the support vectors represent the class boundaries, a large percentage of selected data consists of samples around the class boundaries. This overlap indicates that the actively acquired training sample covers the support vectors well which are important for good classi\ufb01cation performance. The overlap with the support vectors of an SVM (a \ufb01xed algorithm) also suggests that uncertainty sampling using deep models might generalize beyond FastText, to other learning algorithms. Experimental Details: We used a fast GPU implementation for training an SVM with a linear kernel (Wen et al., 2018) with default hyperparameters. Please refer to supplementary material for additional details. We ensured the SVM achieves similar accuracies as original FTZ model. Dsets Chance FTZD FTZS MNBD MNBS SGN 0.8 77.8 81.0 55.5 100.0 DBP 0.9 79.7 81.3 79.7 100.0 YHA 3.7 69.0 73.6 89.5 100.0 YRP 0.9 42.9 43.7 16.0 100.0 YRF 3.6 67.7 71.6 13.6 100.0 AGN 3.7 68.7 70.1 79.8 100.0 AMZP 0.9 48.4 48.8 15.0 100.0 AMZF 3.6 56.8 63.1 57.8 100.0 Table 4: % Intersection of samples obtained with different seeds (ModelD) compared to same seeds (ModelS) and chance intersection for b = 39 queries. We see that FastText is initialization independent (FTZD \u2248FTZS \u226bChance). NaiveBayes shows signi\ufb01cant dependency on the initial set sometimes, while other times performs comparable to FastText. 3.2 Algorithmic Factors We analyze three algorithmic factors of relevance to sampling bias: (a) Initial set selection (b) Query size, and, (c) Query strategy. 3.2.1 Initial Set Selection To investigate the dependence of the actively acquired train set on the initial set, we compare the overlap (intersection) of the incrementally constructed sets from different random initial sets versus the same initial set. The results are shown in Table 4. We \ufb01rst observe that chance overlaps (column 2) are very low less than 4%. Columns \fDsets Chance FTZ 9 \u222919 \u222939 FTZ 39 \u222939 \u222939 MNB 9 \u222919 \u222939 MNB 39 \u222939 \u222939 SGN 0.83 \u00b10.0 77.0 \u00b1 0.5 77.9 \u00b1 0.2 31.9 \u00b1 0.0 55.5 \u00b1 0.0 DBP 0.9 \u00b10.0 80.0 \u00b1 0.1 79.6 \u00b1 0.2 82.3 \u00b1 0.0 79.7 \u00b1 0.0 YHA 3.7 \u00b10.0 68.3 \u00b1 0.1 69.0 \u00b1 0.0 92.1 \u00b1 0.0 89.5 \u00b1 0.0 YRP 0.9 \u00b10.0 46.0 \u00b1 0.9 42.7 \u00b1 1.0 10.8 \u00b1 0.0 16.0 \u00b1 0.0 YRF 3.6 \u00b10.0 68.4 \u00b1 0.2 67.6 \u00b1 0.1 14.2 \u00b1 0.0 13.6 \u00b1 0.0 AGN 3.7 \u00b10.0 70.3 \u00b1 0.2 68.7 \u00b1 0.1 81.6 \u00b1 0.0 79.8 \u00b1 0.0 AMZP 0.9 \u00b10.0 45.8 \u00b1 0.1 48.2 \u00b1 0.2 11.5 \u00b1 0.0 15.0 \u00b1 0.0 AMZF 3.6 \u00b10.0 55.2 \u00b1 0.4 57.0 \u00b1 0.2 28.4 \u00b1 0.0 57.8 \u00b1 0.0 Table 5: Intersection of samples obtained with different values of b. 
We see the intersection of samples selected with different number of intersections comparable to highest possible (different seeds) in FastText, far higher compared to chance intersection. This indicates similar samples are selected regardless of sample size. NaiveBayes does not show clear trends but occasionally the queried percentage drops signi\ufb01cantly when increasing iterations, occasionally it remains unaffected. Dsets Chance FTZ Ent-Ent FTZ Ent-LC FTZ Ent-DelEnt FTZ DelEnt-DelLC FTZ DelEnt-DelEnt SGN 9.4 \u00b1 0.0 84.6 \u00b1 0.2 83.1 \u00b1 0.3 81.7 \u00b1 0.1 82.6 \u00b1 0.1 84.2 \u00b1 0.1 DBP 9.3 \u00b1 0.0 85.7 \u00b1 0.2 85.5 \u00b1 0.3 83.3 \u00b1 0.1 83.0 \u00b1 0.4 83.2 \u00b1 0.2 YHA 19.0 \u00b1 0.0 79.0 \u00b1 0.0 71.6 \u00b1 0.2 76.3 \u00b1 0.1 69.6 \u00b1 0.7 75.6 \u00b1 3.9 YRP 9.3 \u00b1 0.0 58.4 \u00b1 0.6 59.0 \u00b1 0.3 59.0 \u00b1 0.6 61.6 \u00b1 0.7 62.1 \u00b1 0.1 YRF 19.0 \u00b1 0.0 77.8 \u00b1 0.2 66.6 \u00b1 0.3 75.8 \u00b1 0.1 65.4 \u00b1 0.3 80.1 \u00b1 0.2 AGN 19.1 \u00b1 0.0 78.3 \u00b1 0.1 77.3 \u00b1 0.1 77.1 \u00b1 0.3 78.2 \u00b1 0.4 79.0 \u00b1 0.3 AMZP 9.5 \u00b1 0.0 63.5 \u00b1 0.2 63.5 \u00b1 0.3 66.1 \u00b1 0.4 70.0 \u00b1 0.1 70.0 \u00b1 0.1 AMZF 19.0 \u00b1 0.0 70.3 \u00b1 0.1 64.3 \u00b1 0.2 69.6 \u00b1 0.1 65.6 \u00b1 0.2 72.6 \u00b1 0.2 Table 6: Intersection of query strategies across acquisition functions. We observe that the % intersection among samples in the Ent-LC is comparable to those Ent-Ent. Similarly, the Ent-DelEnt (entropy with deletion) is comparable to both DelEnt-DelLC and DelEnt-DelEnt showing robustness of FastText to query functions (beyond minor variation). DelEnt-DelEnt obtains similar intersections as compared to Ent-Ent, showing the robustness of the acquired samples to deletion. 3 and 5 present overlaps from different initial sets while 4 and 6 from same initial sets. We note from column 4 and 6 that due to the stochasticity of training in FTZ, we expect non-identical \ufb01nal sets even with same initial samples as well. The results demonstrate that samples obtained using FastText are largely initialization independent (low variation between columns 3 and 4) consistently across datasets while the samples obtained with Naive Bayes can be vastly different showing relatively heavy dependence on the initial seed. This indicates the relative stability of train set obtained with the posterior uncertainty of the actively trained FTZ as an acquisition function. 3.2.2 Query size Since the sampled data is sequentially constructed by training models on previously sampled data, large query sizes were expected to impact samples collected by uncertainty sampling and the performance thereof (Hoi et al., 2006). We experiment with various query sizes (0.25%, 0.5%, 1%) for DBP, SGN, YRP and AMZP and (0.5%, 1%, 2%) for the rest corresponding to 9, 19 and 39 iterations. Figure 1 shows that FastText (top row) has very stable performance across sample sizes while MNB (bottom row) show more erratic performance. Table 5 presents the intersection of samples obtained with different query sizes across multiple runs. We observe a high overlap of the acquired samples across different query sizes indicating that the performance is independent of the query size (compare column 3 to column 4 where the size is held constant) while MNB results in lower overlap with more erratic behavior due to change in the query size (compare column 5 compared to column 6). 
3.2.3 Query strategy We now investigate the impact of various query strategies using FastText by evaluating and comparing the correlation between the respective actively selected sample sets. Acquisition Functions: We compare four uncertainty query strategies: Least Con\ufb01dence (LC) and Entropy (Ent), with and without deletion of least uncertain samples from the training set. Deletion of least uncertain samples reduces the dependence on the initial randomly selected set. The results are presented in Table 14. We present \ufb01ve \fDsets Chance FTZ-FTZ Ent FTZ-5F TZ Ent 5FTZ-5FTZ Ent-LC 5FTZ-5FTZ Ent-Ent SGN 9.4 \u00b1 0.0 84.6 \u00b1 0.2 86.3 \u00b1 0.2 85.4 \u00b1 0.4 85.8 \u00b1 0.0 DBP 9.3 \u00b1 0.0 85.7 \u00b1 0.2 86.6 \u00b1 0.3 86.78 \u00b1 0.1 87.8 \u00b1 0.2 YRP 9.3 \u00b1 0.0 58.4 \u00b1 0.6 58.1 \u00b1 0.7 58.3 \u00b1 0.3 58.2 \u00b1 0.2 YRF 19.0 \u00b1 0.0 77.8 \u00b1 0.2 79.0 \u00b1 0.3 68.5 \u00b1 1.1 77.6 \u00b1 0.3 AGN 19.1 \u00b1 0.0 78.3 \u00b1 0.1 79.0 \u00b1 0.2 79.1 \u00b1 0.2 77.9 \u00b1 0.2 Table 7: Intersection of query strategies across single and ensemble of 5FTZ models. We observe that the % intersection of samples selected by ensembles and single models is comparable to intersection among either. The 5-model committee does not seem to add any additional value over selection by a single model. of the ten possible combinations and again observe the high degree of overlap in the collected samples. It can be concluded that the approach is fairly robust to these variations in the query strategy. Ensembles versus Single Models: A similar experiment was conducted to investigate the overlap between a single FTZ model and a probabilistic committee of models (5-model ensemble with FTZ (Lakshminarayanan et al., 2017)) to identify comparative advantages of using ensemble methods. The results are presented in Table 7 showing little to no difference in sample overlaps. 5 We conclude that more expensive sampling strategies commonly used, like ensembling, may offer little bene\ufb01t compared to using a single FTZ model with posterior uncertainty as a query function. The experiments in this section demonstrate that uncertainty based sampling using deep models like FTZ show no class bias or an undesirable feature bias (and favorable bias to class boundaries). There is also a high degree of robustness to algorithmic factors, especially query size, a surprisingly high degree of overlap in the resulting training samples and stable performances (classi\ufb01cation accuracy). Additionally, all uncertainty query strategies perform well, and expensive sampling strategies like ensembling offer little bene\ufb01t. We conclude that sampling biases demonstrated in active learning literature do hold well with traditional models, however, they do not seem to translate to deep models like FTZ using (posterior) uncertainty. 4 Application: Active Text Classi\ufb01cation Experimental results from the previous sections suggest that entropy function with a single FTZ 5The ensembles were too costly to run on larger datasets, so the results for YHA, AMZP and AMZF could not be obtained. Figure 2: Active text classi\ufb01cation: Comparison with K-Center Coreset, BALD and SVM algorithms. Accuracy is plotted against percentage data sampled. We reach full-train accuracy using 12% of the data, compared to BALD which requires 50% data and perform signi\ufb01cantly worse in terms of accuracy. 
We also outperform K-center greedy Coreset at all sampling percentages without utilizing additional diversity-based augmentation. model would be a good baseline for active text classi\ufb01cation. We compare our baseline with the latest work in deep active learning for text classi\ufb01cation BALD (Siddhant and Lipton, 2018) and with the recent diversity based Coreset query function (Sener and Savarese, 2018) which uses a costly K-center algorithm to build the query. Experiments are performed on TREC-QA for a fair comparison (used by (Siddhant and Lipton, 2018)). Table 8 shows that the results of our study generalize to small datasets like TREC-QA. The results are shown in Figure 2 using the baseline with the query size of 2% of the full dataset (b=9 queries). Note that uncertainty sampling converges to full accuracy using just 12% of the data, whereas (Siddhant and Lipton, 2018) required 50% of the data. There is also a remarkable accuracy improvement over (Siddhant and Lipton, 2018) which can be largely attributed to the models used (FastText versus 1layer CNN/BiLSTM). Also, uncertainty sampling outperforms diversity-based augmentations like Coreset Sampling (Sener and Savarese, 2018) before convergence. Thus, we establish a new stateof-the-art baseline for further research in deep active text classi\ufb01cation. 5 Application: Training of Large Models The cost and time needed to get and label vast amounts of data to train large DNNs is a serious \fDsets Chance FTZ-Ent-Ent FTZ Ent-LC SV Chce% SV Com% TQA 15.1 \u00b1 0.0 59.7 \u00b1 0.5 56.3 \u00b1 1.4 18.7 \u00b1 6.1 79.0 \u00b1 3.6 Table 8: Results of sample selection from previous investigations on small datasets (Trec-QA). Model AGN DBP SGN YRF YRP YHA AMZP AMZF VDCNN (Conneau et al., 2017) 91.3 98.7 96.8 64.7 95.7 73.4 95.7 63.0 DPCNN (Johnson and Zhang, 2017) 93.1 99.1 98.1 69.4 97.3 76.1 96.7 65.2 WC-Reg (Qiao et al., 2018) 92.8 98.9 97.6 64.9 96.4 73.7 95.1 60.9 DC+MFA (Wang et al., 2018) 93.6 99.2 66.0 96.5 63.0 DRNN (Wang, 2018) 94.5 99.2 69.1 97.3 70.3 96.5 64.4 ULMFiT (Howard and Ruder, 2018) 95.0 99.2 70.0 97.8 EXAM (Du et al., 2019) 93.0 99.0 74.8 95.5 61.9 Ours: ULMFiT (Small data) 93.7 (20) 99.2 (10) 97.0 (10) 67.6 (20) 97.1 (10) 74.3 (20) 96.1 (10) 64.1 (20) Ours: ULMFiT (Tiny data) 91.7 (8) 98.6 (2.3) 97.4 (6.3) 66.3 (8) 96.7 (4) 73.3 (8) 95.8 (4) 62.9 (8) Table 9: Comparison of accuracies with state-of-the-art approaches (earliest-latest) for text classi\ufb01cation (%dataset in brackets). We are competitive with state-of-the-art models while using 5x-40x compressed datasets. ULMFiT AGN DBP YRP YRF Full 95.0 99.2 97.8 70.0 Ours-Small 93.7 (20) 99.2 (10) 97.1 (10) 67.6 (20) Ours-Tiny 91.7 (8) 98.6 (2.3) 96.7 (4) 66.3(8) Table 10: ULMFiT: Resulting sample \u02c6 Sb compared to reported accuracies in (Howard and Ruder, 2018) (%dataset in brackets). We observe that using our cheaply obtained compressed datasets, we can achieve similar accuracies with 25x-200x speedup (5x less epochs, 5x-40x less data). Transferability to other models is evidence of the generalizability of the subset collected using FTZ to other deep models. impediment to creating new and/or better models. Our study suggests that the training samples collected with uncertainty sampling (entropy) on a single model FTZ may provide a good representation (surrogate) for the entire dataset. Buoyed by this, we investigate if we can speedup training of ULMFiT (Howard and Ruder, 2018) using the surrogate dataset. We show these results in Table 10. 
We achieve 25x-200x speedup6 (5x fewer epochs, 5x-40x smaller training size). We also benchmark the performance against the state-ofthe-art on text classi\ufb01cation as shown in Table 9. We conclude that we can signi\ufb01cantly compress the training datasets and speedup classi\ufb01er training time with little tradeoff in accuracy. Implementation Details: We use the of\ufb01cial github repository for ULMFiT7, use default hyperparameters and train on one NVIDIA Tesla V100 16GB GPU. Further details are provided in sup6The cost of acquiring the training data using FTZ-Ent is negligible in comparison. 7https://github.com/fastai/fastai/ tree/master/courses/dl2/imdb_scripts plementary material. 6 Related Work We now expand on the brief literature review in Section 1 to better contextualize our work. We divide the past works into (i) Traditional Models and (ii) Deep Models. Sampling Bias in Classical AL in NLP: Active learning (AL) in text classi\ufb01cation started with greedy uncertainty query strategy from a pool using decision trees (Lewis and Gale, 1994), which was shown to be effective and led to widespread adoption with classi\ufb01ers like SVMs (Tong and Koller, 2001), Naive Bayes (Roy and McCallum, 2001) and KNN (Fujii et al., 1998). This strategy was also applied to other NLP tasks like parse selection (Baldridge and Osborne, 2004), sequence labeling (Settles and Craven, 2008) and information extraction (Thompson and Mooney, 1999). These early papers popularized two greedy uncertainty query methods: Least Con\ufb01dent and Entropy. Issues of lack of diversity (large reduduncy in sampling) (Zhang and Oles, 2000) and lack of robustness (high variance in sample quality)(Krogh and Vedelsby, 1994) guided subsequent efforts. The two most popular directions were: (i) augmenting uncertainty with diversity measures (Hoi et al., 2006; Brinker, 2003; Tang et al., 2002) and (ii) using query-by-committee (McCallum and Nigam, 1998; Liere and Tadepalli, 1997). For a comprehensive survey of classical AL methods for NLP, please refer to (Settles, 2009). Sampling Bias in Deep AL: Deep active learning approach adapt the above framework to the \ftraining of DNNs on large data. Two main query strategies are used: (i) ensemble based greedy uncertainty, which represents a probabilistic queryby-committee paradigm (Gal et al., 2017; Beluch et al., 2018), and (ii) diversity based measures (Sener and Savarese, 2018; Ducoffe and Precioso, 2018). Papers proposing diversity based approaches \ufb01nd that greedy uncertainty based sampling (using ensemble and single model) perform signi\ufb01cantly worse than random (See Figures 4 and 2 respectively in (Sener and Savarese, 2018; Ducoffe and Precioso, 2018)). They attribute the poor performance to redundant, highly correlated sampling selected using uncertainty based methods and justify the need for prohibitively expensive diversity-based approaches (Refer section 2 of (Sener and Savarese, 2018) for details on the expensiveness of various diversity sampling methods). However, K-center greedy coreset sampling scales poorly: we were only able to use it on TREC-QA (a small dataset). On the other hand, ensemble-based greedy uncertainty methods \ufb01nd that probabilistic averaging from a committee (Gal et al., 2017; Beluch et al., 2018) performs better than single model as with on diversity based methods like coreset(Gissin and Shalev-Shwartz, 2019; Beluch et al., 2018). 
Current approaches in text classi\ufb01cation literature mostly adopt the ensemble based greedy uncertainty framework (Siddhant and Lipton, 2018; Lowell et al., 2018; Zhang et al., 2017). However, our work demonstrates the problems of sampling bias and ef\ufb01ciency may not translate from shallow to deep approaches. Recent evidence from image domain (Gissin and Shalev-Shwartz, 2019) demonstrates atleast a subset of our \ufb01ndings generalize to other DNNs (class bias and query functions). Uncertainty sampling using a deep model like FTZ demonstrates surprisingly good sampling properties without using ensembles or bayesian methods. Ensembles do not seem to signi\ufb01cantly affect sampling. Whether this behavior generalizes to other deep models and tasks is yet to be seen. Other Related Works: An interesting set of papers (Soudry et al., 2018; Xu et al., 2018) show that deep neural networks trained with SGD converge to the maximum margin solution in the linearly separable case. Several works investigate the possibility that deep networks give high importance to a subset of the training dataset (Toneva et al., 2019; Vodrahalli et al., 2018; Birodkar et al., 2019), resembling supports in support vector machines. In our experiments, we \ufb01nd that active learning with uncertainty sampling with deep models like FTZ has a (surprisingly) large overlap with the support vectors of an SVM. Thus, it seems to have a inductive bias for class boundaries, similar to the above works. Whether this property generalizes to other deep models is yet to be seen. 7"
+ },
+ {
+ "url": "http://arxiv.org/abs/1804.03867v1",
+ "title": "Hybrid Binary Networks: Optimizing for Accuracy, Efficiency and Memory",
+ "abstract": "Binarization is an extreme network compression approach that provides large\ncomputational speedups along with energy and memory savings, albeit at\nsignificant accuracy costs. We investigate the question of where to binarize\ninputs at layer-level granularity and show that selectively binarizing the\ninputs to specific layers in the network could lead to significant improvements\nin accuracy while preserving most of the advantages of binarization. We analyze\nthe binarization tradeoff using a metric that jointly models the input\nbinarization-error and computational cost and introduce an efficient algorithm\nto select layers whose inputs are to be binarized. Practical guidelines based\non insights obtained from applying the algorithm to a variety of models are\ndiscussed. Experiments on Imagenet dataset using AlexNet and ResNet-18 models\nshow 3-4% improvements in accuracy over fully binarized networks with minimal\nimpact on compression and computational speed. The improvements are even more\nsubstantial on sketch datasets like TU-Berlin, where we match state-of-the-art\naccuracy as well, getting over 8% increase in accuracies. We further show that\nour approach can be applied in tandem with other forms of compression that deal\nwith individual layers or overall model compression (e.g., SqueezeNets). Unlike\nprevious quantization approaches, we are able to binarize the weights in the\nlast layers of a network, which often have a large number of parameters,\nresulting in significant improvement in accuracy over fully binarized models.",
+ "authors": "Ameya Prabhu, Vishal Batchu, Rohit Gajawada, Sri Aurobindo Munagala, Anoop Namboodiri",
+ "published": "2018-04-11",
+ "updated": "2018-04-11",
+ "primary_cat": "cs.CV",
+ "cats": [
+ "cs.CV"
+ ],
+ "main_content": "Introduction Convolutional Neural Networks (CNNs) have found applications in many vision-related domains ranging from generic image-understanding for self-driving cars [3] and automatic image captioning [32, 20] to recognition of speci\ufb01c image parts for scene-text recognition [24, 26] and face-based identi\ufb01cation [29]. Figure 1: Convolution of binary and non-binary activations of two different layers. Note that the error introduced due to binarization is minimal in the \ufb01rst pair compared to the second. Hence, ef\ufb01ciently deciding which layers to binarize could contribute signi\ufb01cantly to the overall accuracy of the network and not damage the speed-ups. After the introduction of AlexNet [21], several architectural improvements were proposed to push image recognition accuracy, such as VGG-Net [28], but these models were massive both in terms of memory usage and computational costs. AlexNet has around 60 million parameters in the network, while VGG has around 138 million, requiring 1.5 billion FLOPs and 19.6 billion FLOPs respectively for inference. The computational requirements make these architectures inappropriate for smaller portable systems such as mobiles and other embedded systems. These networks also use large amounts of energy, creating a bottleneck for performance improvements. Full-precision multiplyaccumulate (MAC) operations in convolutional layers consume 30x more power than integer MAC operations (see Table 1). Since these applications would be deployed on resourceconstrained systems, CNN compression is an important arXiv:1804.03867v1 [cs.CV] 11 Apr 2018 \fOperation MUL Power ADD Power 32-bit Float 3.7pJ 18.5x 0.9pJ 30x 16-bit Float 1.1pJ 5.5x 0.4pJ 13.3x 8-bit Integer 0.2pJ 1x 0.03pJ 1x Table 1: As shown by Horowitz et al. [14], power consumption for various operations at 45nm 0.9V. Observe that 8-bit integers require signi\ufb01cantly less energy than their equivalent 32-bit \ufb02oating point operations. emerging area for research on vision applications [18, 36, 11, 23, 25, 31, 13, 19]. One of the methods of compression: Quantization, can help networks consume far less power, memory, and incur lower computational costs. Quantization has proven to be a powerful compression strategy. Our paper is based on the most extreme form of quantization Binarization. There are many bene\ufb01ts to binarizing a network. Primarily, having binary weights/activations enables us to use xnor and popcount operations to calculate weighted sums of the inputs to a layer as compared to full-precision multiply-accumulate operations (MACs). This results in signi\ufb01cant computational speedup compared to other compression techniques. Secondly, as each binary weight requires only a single bit to represent, one can achieve drastic reductions in run-time memory requirements. Previous research [27, 18] shows that it is possible to perform weight and input binarization on large networks with up to 58x speedups and 10.4x compression ratios, albeit with signi\ufb01cant drops in accuracy. In this paper, we explore the problem of hybrid binarization of a network. We propose a technique devised from our investigation into the question as to where and which quantities of a network should one binarize, with respect to inputs to a layer to the best of our knowledge, this is the \ufb01rst work that explores this question. 
We observe in Figure 1 that in a trained fully binarized model, binarization in certain layers induces minimal error, whereas in others, the error obtained is signi\ufb01cant. Our proposed partition algorithm, when run on trained fully binarized models can design effective architectures. When these hybrid models are trained from scratch, they achieve a balance between compression, speedup, energy-ef\ufb01ciency, and accuracy, compared to fully binarized models. We conduct extensive experiments applying our method to different model architectures on popular large-scale classi\ufb01cation datasets over different domains. The resulting models achieve signi\ufb01cant speedups and compression with signi\ufb01cant accuracy improvements over a fully binarized network. Our main contribution includes: 1. A metric to jointly optimize binarization-errors of layers and the associated computational costs; 2. A partitioning algorithm to \ufb01nd suitable layers for input binarization, based on the above metric, which generates hybrid model architectures which if trained from scratch, achieve a good balance between compression, speedup, energy-ef\ufb01ciency, and accuracy; 3. Insights into what the algorithm predicts, which can provide an intuitive framework for understanding why binarizing certain areas of networks give good bene\ufb01ts; 4. Hybrid model architectures for AlexNet, ResNet-18, Sketch-A-Net and SqueezeNet with over 5-8% accuracy improvements on various datasets; and 5. A demonstration that our technique that achieves signi\ufb01cant compression in tandem with other compression methods. Reproducibility: Our implementation can be found on GitHub 1. 2. Related Work CNNs are often over-parametrized with high amounts of redundancy, increasing memory costs and making computation unnecessarily expensive. Several methods were proposed to compress networks and eliminate redundancy, which we summarize below. Space-ef\ufb01cient architectures: Designing compact architectures for deep networks helps save memory and computational costs. Architectures such as ResNet [13], DenseNet [17] signi\ufb01cantly reduced model size compared to VGG-Net by proposing a bottleneck structure to reduce the number of parameters while improving speed and accuracy. SqueezeNet [19] was another model architecture that achieved AlexNet-level accuracy on ImageNet with 50x fewer parameters by replacing 3x3 \ufb01lters with 1x1 \ufb01lters and late downsampling in the network. MobileNets [16] and Shuf\ufb02eNets [35] used depthwise separable convolutions to create small models, with low accuracy drop on ImageNet. Pruning and Quantization: Optimal Brain Damage [8] and Optimal Brain Surgeon [12] used the Hessian of the loss function to prune a network by reducing the number of connections. Deep Compression [11] reduced the number of parameters by an order of magnitude in several stateof-the-art neural networks through pruning. It further reduced non-runtime memory by employing trained quantization and Huffman coding. Network Slimming [23] took advantage of channel-level sparsity in networks, by identifying and pruning out non-contributing channels during training. HashedNets [5] performed binning of network weights using hash functions. INQ [2] used low-precision 16 bitquantized weights and achieved an 8x reduction in memory consumption, using 4 bits to represent 16 distinct quantized values and 1 bit to represent zeros speci\ufb01cally. 
1https://github.com/erilyth/HybridBinaryNetworks-WACV18 \fBinarization: BinaryConnect [6] obtained huge compression in CNNs where all weights had only two allowed states (+1, -1) using Expectation Back Propagation (EBP). Approaches like [18, 22, 37] train deep neural networks using low precision multiplications, bringing down memory required drastically, showing that these models could be \ufb01t on memory constrained devices. DoReFa-net [36] applied low bit width gradients during back-propagation. XNOR-Net [27] multiplied binary weights and activations with scaling constants based on layer norms. QNNs [18] extended BNNs[7], the \ufb01rst method using binary weights and inputs to successfully achieve accuracy comparable to their corresponding 32-bit versions on constrained datasets using higher bit quantizations. HWGQ-Net [4] introduces a better suited activation function for binary networks. HTCBN [30] introduce helpful techniques such as replacing ReLU layers with PReLU layers and a scale layer to recover accuracy loss on binarizing the last layer, to effectively train a binary neural network. Hou et al. [15] use Hessian approximations to minimize loss w.r.t binary weights during training. Anderson et al. [1] offers a theoretical analysis of the workings of binary networks, in terms of high-dimensional geometry. Unlike previous works in this area, we look at binarizing speci\ufb01c parts of a network, instead of simply binarizing the inputs to all the layers end-to-end. We see in later sections, binarizing the right areas in the network contributes significantly to the overall accuracy of the network and does not damage its speed-ups. 3. Hybrid Binarization We de\ufb01ne certain conventions to be used throughout the paper. We de\ufb01ne a WBin CNN to be a CNN having the weights of convolutional layers binarized (referred to as WeightBinConv layers), FBin CNN to be a CNN having both inputs and weights of convolutional layers binarized (referred to as FullBinConv layers) and FPrec CNN to be the original full-precision network having both weights and inputs of convolutional layers in full-precision (referred to as Conv layers). We compare the FBin and WBin networks with FPrec networks at speci\ufb01c layers. Table 3 and Table 4 in the Experiments section show test accuracies for WBin, FBin and FPrec networks of different models. Observe that there is very little loss in accuracy from FPrec to WBin networks with signi\ufb01cant memory compression and fewer FLOPs. However, as we go from WBin to FBin networks, there is a signi\ufb01cant drop in accuracy along with the trade-off of signi\ufb01cantly lower FLOPs in FBin over WBin networks. Hence, we focus on improving the accuracies of FBin networks along with preserving the lower FLOPs as far as possible by investigating which activations to binarize. 3.1. Error Metric: Optimizing Speed & Accuracy Full-precision inputs I \u2208Rn, are approximated by binary matrix IB \u2208{ \u22121, +1}n. The optimal binary representation IB is calculated by IB \u2217= argmin(\u2225I \u2212IB \u22252) (1) XNOR-Net[27] minimized the error function: E = \u2225I \u2212IB \u22252 n (2) In order to do that, they maximized I\u22a4IB and proposed the binary activation IB to be IB \u2217= argmax IB (I\u22a4IB), IB \u2208{\u22121, +1}n, I \u2208Rn (3) , obtaining the optimal IB \u2217can be shown to be sgn(I). We need to investigate where to replace FullBinConv with WeightBinConv layers. 
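The three numbered equations above are garbled by text extraction; under the XNOR-Net formulation the text refers to, one reconstruction reads:

```latex
I_B^{*} = \arg\min_{I_B} \lVert I - I_B \rVert^{2}                                        \tag{1}
E = \frac{\lVert I - I_B \rVert^{2}}{n}                                                    \tag{2}
I_B^{*} = \arg\max_{I_B} I^{\top} I_B, \quad I_B \in \{-1,+1\}^{n},\; I \in \mathbb{R}^{n} \tag{3}
```

with the maximiser of (3) being I_B* = sgn(I), as stated in the surrounding text.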
In order to optimize for accuracy, we need to measure the ef\ufb01cacy of the binary approximation for inputs to any given layer. A good metric of this is the average error function calculated over a subset of training images E (de\ufb01ned in Eq. 2) used to calculate the optimal IB itself, which is explicitly being minimized in the process. Hence, we use that error function to capture the binarization error. Similarly to optimize speed, we need to convert layers with low number of FLOPs to WeightBinConv and layers having high number of FLOPs should be kept in FullBinConv. Since we need to jointly optimize both, we propose a metric that tries to achieve a good tradeoff between the two quantities. A simple but effective metric is the linear combination M = E + \u03b3 \u00b7 1 NF (4) where \u03b3 is the tradeoff ratio, NF is the number of \ufb02ops in the layer and E is the binarization error per neuron. The trade-off ratio \u03b3 is a hyperparameter which ensures that both the terms are of comparable magnitude. Figure 2, captures the layer-wise variation of the error metric across multiple models. 3.2. Partitioning Algorithm We aim to partition the layers of a network into two parts, one set of layers to keep FullBinConv and the other set which are replaced with WeightBinConv layers. A naive but intuitive partitioning algorithm would be to sort the list of metric errors M and replace FullBinConv layers which have highest error values Mi one-by-one with WeightBinConv layers, train new hybrid models and stop when the accuracies in the retrained models stop improving i.e when the maxima in accuracy v/s \ufb02ops tradeoff is reached. However, we need a partitioning algorithm which gives informed \f0.0 2.5 5.0 7.5 10.0 12.5 15.0 17.5 20.0 Layer number 0 5 10 15 20 25 30 35 40 45 Metric Score Sketch-A-Net Resnet18 SqueezeNet Weight binarized Full binarized Figure 2: Binarization-error metric across layers for Sketch-A-Net, ResNet-18, and SqueezeNet. Stars indicate that the layer was replaced with a WeightBinConv layer, while squares indicate the FullBinConv layer was retained in the FBin model. We see that the algorithm selects the last layers in the case of Sketch-A-Net and ResNet, while in the case of SqueezeNet, it selects the \ufb01rst four, last three and some alternate intermediate layers to be replaced by WeightBinConv layers, retaining the rest as FullBinConv layers. guesses on where are the effective places to partition the set. This would avoid the long retraining times and large resources required to try every possible option for a hybrid model. We propose a layer selection algorithm that gives informed partitions from a trained FBin model, helping us to determine which layers are to be converted to WeightBinConv and which layers are to be converted to FullBinConv without having to train all possible hybrid models from scratch. Our algorithm starts by taking a trained FBin model. We pass in a subset of the training images and calculate the average error metric for all layers over them. Then we perform K-Means Clustering on the metric values with each point being the metric error of layers as shown in Figure 2. We perform the K-Means Clustering for different values of the number of clusters. We \ufb01nd a suitable number of clusters such that the ratio of layers in the highest-error cluster (K) to the total number of convolutional layers (P) is less than a hyperparameter, which we de\ufb01ne as the Hybridization Ratio R. 
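The following is a minimal sketch of the metric and layer-selection step just described, under our reading of Eq. 4 and the Hybridization Ratio criterion; the scikit-learn KMeans call and the helper names are ours, not the authors' released code:

```python
import numpy as np
from sklearn.cluster import KMeans

def binarization_error(inputs):
    """Per-neuron error E (Eq. 2) between full-precision layer inputs and
    their sign binarization, averaged over a batch of activations."""
    x = np.asarray(inputs, dtype=float).ravel()
    x_b = np.where(x >= 0, 1.0, -1.0)
    return float(np.mean((x - x_b) ** 2))

def layer_metric(error, flops, gamma):
    """Joint metric M = E + gamma * (1 / N_F) of Eq. 4: layers with high
    binarization error but few FLOPs become candidates for weight-only
    binarization."""
    return error + gamma / flops

def select_weightbin_layers(metrics, hybridization_ratio):
    """Grow the number of K-means clusters until the highest-mean cluster
    holds at most R * P layers; those layers become WeightBinConv and the
    rest stay FullBinConv (one reading of the partition rule)."""
    scores = np.asarray(metrics, dtype=float).reshape(-1, 1)
    P = len(scores)
    for n_clusters in range(2, P):
        km = KMeans(n_clusters=n_clusters, n_init=10, random_state=0).fit(scores)
        top = int(np.argmax(km.cluster_centers_))
        chosen = np.flatnonzero(km.labels_ == top)
        if len(chosen) / P <= hybridization_ratio:
            return set(chosen.tolist())
    return set()

# Toy usage: the two layers with outlying metric scores get weight-binarized.
print(select_weightbin_layers([1.0, 1.5, 2.0, 2.5, 9.0, 11.0], 0.4))  # {4, 5}
```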
Layers with terms falling in the highest mean cluster are converted to WeightBinConv, while the ones in all other clusters are left as FullBinConv. A \ufb02ow of the algorithm is illustrated in Figure 3 and is explained stepby-step in Algorithm 1. We show metric scores of various layers for different networks in Figure 2 and indicate which layers are replaced with WeightBinConv/FullBinConv layers. This algorithm guides in forming the architecture of the hybrid model, which is then trained from scratch obtaining the accuracies given in the tables presented in the Experiment section. Note that this algorithm does not change the con\ufb01guration of the model; it only converts certain layers to their binarized versions. To give an intuition of what the Hybridization ratio R Algorithm 1 Partition Algorithm Marks layers for binarization and creates a hybrid network. 1: Inputs \u21d2Layer-wise Binarization Errors 2: 3: Initialization 4: P = Total convolutional layers 5: R = Hybridization Ratio 6: ToConvert = List() 7: 8: Mark binary layers 9: for N = 2 to P do 10: Compute KMeans with N means 11: K = Number of layers in highest-error cluster 12: if K/P \u2264R then 13: for Q in high-error clusters do 14: ToConvert.add(Q) \u25b7Add layer Q 15: Break 16: 17: Create Hybrid Network 18: HybridNet = () 19: HybridNet.Add(Conv) 20: 21: for N = 2 to P do 22: if N in ToConvert then 23: HybridNet.Add(WeightBinConv) 24: else 25: HybridNet.Add(FullBinConv) 26: 27: Output \u21d2HybridNet means, a low R would indicate we need the number of \fFigure 3: The Procedure: Error metrics from binarization of inputs to the network layers are partitioned into clusters using K-means. The highest error cluster indicates the inputs that are not binarized to generate the hybrid version. WeightBinConv layers to be low, ensuring a high asymmetry between errors in WeightBinConv and FullBinConv layers, prioritizing saving computational cost. Conversely, a higher R would prioritize accuracy over computational cost. R was set to be 0.4 for AlexNet and ResNet-18, and 0.6 for Squeezenet. Variation with different values of R is further discussed in the experiments section. 3.3. Impact on Speed and Energy Use Computational Speedups: Convolutional operations are computationally expensive. For each convolution operation between an image I \u2208Rcin\u00d7hI\u00d7wI and weight W \u2208 Rcout\u00d7h\u00d7w, the number of MAC operations required N are \u2248CinCoutNW NI where NW = wh and NI = wIhI. According to benchmarks done in XNOR-Net, the current speedup obtained in these operations is 58x after including the overhead induced by computing \u03b1. Accordingly, in later sections, we take one FLOP through a layer as equivalent to 58 binary operations when weights and inputs are binarized. Exploiting \ufb01lter repetitions: The number of unique convolutional binary \ufb01lters is bounded by the size of the \ufb01lter [18]. As most of our intermediate convolutional layers have 3 \u00d7 3 \ufb01lters which only have 29 unique \ufb01lters, we \ufb01nd that the percentage of unique \ufb01lters decreases as we go deeper into the network. We can exploit this fact to simply prune \ufb01lters and use that in calculating speedups for binary networks. More details regarding how the speedup was computed is included in the supplementary material. 4. 
Experiments and Results We report and compare accuracies, speedups and compression between the FPrec model, different kinds of binarization models (WBin and FBin), and their generated hybrid versions of the same. We also present a detailed comparison of our method with several different compression techniques applied on AlexNet [21], ResNet-18 [13], Sketch-A-Net [10] and SqueezeNet [19]. We empirically demonstrate the effectiveness of hybrid binarization on several benchmark image and sketch datasets. We show that our approach is robust and can generalize to different types of CNN architectures across domains. 4.1. Datasets and Models Binary Networks have achieved accuracies comparable to full-precision networks on limited domain/simpli\ufb01ed datasets like CIFAR-10, MNIST, SVHN, but show drastic accuracy losses on larger-scale datasets. To compare with state-of-the-art vision, we evaluate our method on ImageNet[9]. To show the robustness of our approach, we test it on sketch datasets, where models \ufb01ne-tuned with ImageNet are demonstrably not suitable as shown in[34]. Binary networks might be better suited for sketch data due to its binary nature and sparsity of information in the data. ImageNet: The benchmark dataset for evaluating image recognition tasks, with over a million training images and 50,000 validation images. We report the single-center-crop validation errors of the \ufb01nal models. TU-Berlin: The TU-Berlin [10] sketch dataset is the most popular large-scale free-hand sketch dataset containing sketches of 250 categories, with a human sketchrecognition accuracy of 73.1% on average. Sketchy: It is a recent large-scale free-hand sketch dataset containing 75,471 hand-drawn sketches from across 125 categories. This dataset was primarily used to crossvalidate results obtained on the TU-Berlin dataset and ensure that our approach is robust to the variation in collection of data. We use the standard splits with commonly used hyperparameters to train our models. Each FullBinConv block was structured as in XNOR-Net (Batchnorm-Activ-ConvReLU). Each WeightBinConv and Conv block has the standard convolutional block structure (Conv-BatchnormReLU). Weights of all layers except the \ufb01rst were binarized throughout our experiments unless speci\ufb01ed otherwise. Note that FLOPs are stated in millions in all diagrams and sections. All networks are trained from scratch independently. The architecture of the hybrid network once designed does not change during training. Additional details \fTechnique Acc-Top1 Acc-Top5 W/I Mem FLOPs AlexNet BNN 39.5% 63.6% 1/1 32x 121 (1x) XNOR 43.3% 68.4% 1/1 10.4x 121 (1x) Hybrid-1 48.6% 72.1% 1/1 10.4x 174 (1.4x) Hybrid-2 48.2% 71.9% 1/1 31.6x 174 (1.4x) HTCBN 46.6% 71.1% 1/2 31.6x 780 (6.4x) DoReFa-Net 47.7% 1/2 10.4x 780 (6.4x) Res-Net 18 BNN 42.1% 67.1% 1/1 32x 134 (1x) XNOR 51.2% 73.2% 1/1 13.4x 134 (1x) Hybrid-1 54.9% 77.9% 1/1 13.4x 359 (2.7x) Hybrid-2 54.8% 77.7% 1/1 31.2x 359 (2.7x) HTCBN 53.6% 1/2 31.2x 1030 (7.7x) Table 2: A detailed comparison of accuracy, memory use, FLOPs with popular benchmark compression techniques on ImageNet. Our hybrid models outperform other 1-bit activation models and perform on par with 2-bit models while having a signi\ufb01cantly higher speedup. Hybrid-2 models have the last layer binarized. about the datasets, model selection and layer-wise description of each of the hybrid models along with experimental details can be found in the supplementary material. 4.2. 
Results We compare FBin, WBin, Hybrid and FPrec recognition accuracies across models on ImageNet, TU-Berlin and Sketchy datasets. Note that higher accuracies are an improvement, hence stated in green in the table, while higher FLOPs mean more computational expense, hence are stated in red. W/I indicates the number of bits used for weights and inputs to the layer respectively. Note that in the table, the compression obtained is only due to the weight binarization, while the decrease in effective FLOPs are due to activation binarization. On the ImageNet dataset in Table 3, hybrid versions of AlexNet and ResNet-18 models outperform their FBin counterparts in top-1 accuracy by 4.1% and 3.6% respectively, and around 20x compression for both. We also compare with the results of other compression techniques in Table 2. On the TU-Berlin and Sketchy datasets in Table 4, we \ufb01nd that Sketch-A-Net and ResNet-18 have signi\ufb01cantly higher accuracies in the hybrid models compared to their FBin counterparts, a 13.5% gain for Sketch-A-Net and 5.0% for ResNet-18. These hybrid models also achieve over 29x compression over FPrec models and with a reasonable increase in the number of FLOPs a mere 7M increase in Sketch-A-Net and a decent 225M increase in ResNet-18. We also compare them with state-of-the-art sketch classi\ufb01cation models in Table 5. Our hybrid Sketch-A-Net and ResNet-18 models achieve similar accuracies to state-of-the-art, while also highly compressing the models upto 233x compared to the Model Method Accuracy Mem FLOPs Top-1 Top-5 AlexNet FPrec 57.1% 80.2% 1x 1135 (9.4x) WBin (BWN) 56.8% 79.4% 10.4x 780 (6.4x) FBin (XNOR) 43.3% 68.4% 10.4x 121 (1x) Hybrid-1 48.6% 72.1% 10.4x 174 (1.4x) Hybrid-2 48.2% 71.9% 31.6x 174 (1.4x) Increase Hybrid vs FBin +4.9% +3.5% +21.2x +53 (+0.4x) ResNet-18 FPrec 69.3% 89.2% 1x 1814 (13.5x) WBin (BWN) 60.8% 83.0% 13.4x 1030 (7.7x) FBin (XNOR) 51.2% 73.2% 13.4x 134 (1x) Hybrid-1 54.9% 77.9% 13.4x 359 (2.7x) Hybrid-2 54.8% 77.7% 31.2x 359 (2.7x) Increase Hybrid vs FBin +3.6% +4.5% +17.8x +225 (+1.7x) Table 3: Our hybrid models compared to FBin, WBin and NoBin models on Imagenet in terms of accuracy, memory and computations expense. Model Method Accuracy Mem FLOPs TU-Berlin Sketchy Sketch-A-Net FPrec 72.9% 85.9% 1x 608 (7.8x) WBin (BWN) 73% 85.6% 29.2x 406 (5.2x) FBin (XNOR) 59.6% 68.6% 19.7x 78 (1x) Hybrid 73.1% 83.6% 29.2x 85 (1.1x) Increase Hybrid vs FBin +13.5% +15.0% +9.5x +7 (+0.1x) ResNet-18 FPrec 74.1% 88.7% 1x 1814 (13.5x) WBin (BWN) 73.4% 89.3% 31.2x 1030 (7.7x) FBin (XNOR) 68.8% 82.8% 31.2x 134 (1x) Hybrid 73.8% 87.9% 31.2x 359 (2.7x) Increase Hybrid vs FBin +5.0% +5.1% +225 (+1.7x) Table 4: Our hybrid models compared to FBin, WBin and full prec models on TU-Berlin and Sketchy datasets in terms of accuracy, memory and speed tradeoff. Model Acc Mem FLOPs AlexNet-SVM 67.1% 1x 1135 (13.4x) AlexNet-Sketch 68.6% 1x 1135 (13.4x) Sketch-A-Net SC 72.2% 8x 608 (7.2x) Sketch-A-Net-Hybrid 73.1% 233x 85 (1x) ResNet18-Hybrid 73.8% 359 Humans 73.1% Sketch-A-Net-2 2[33] 77.0% 8x 608 (7.2x) Table 5: A comparison between state-of-the-art single model accuracies of recognition systems on the TU-Berlin dataset. AlexNet FPrec model. Thus, we \ufb01nd that our hybrid binarization technique \ufb01nds a balance between sacri\ufb01cing accuracy and gaining speedups and compression for various models on various datasets. 4.3. Algorithmic Insights We gained some insights into where to binarize from our investigation. 
We provide them as a set of practical guidelines to enable rapid prototyping of hybrid models, which gives meaningful insights into which layers were par2It is the sketch-a-net SC model trained with additional imagenet data, additional data augmentation strategies and considering an ensemble, hence would not be a direct comparison \f0 20 40 60 80 100 Percentage of WeightBinConv layers 55 60 65 70 75 Accuracy WBin (73.0) Hybrid1 (73.1) Hybrid2 (71.0) FBin (59.6) WBin (73.4) Hybrid1 (73.8) Hybrid2 (72.8) FBin (68.8) WBin (66.7) Hybrid1 (64.8) Hybrid2 (61.6) Hybrid3 (59.3) FBin (56.8) Sketch-A-Net Resnet18 SqueezeNet 0 20 40 60 80 100 Percentage of WeightBinConv layers 250 500 750 1000 1250 1500 1750 2000 Equivalent FLOPs WBin Hybrid1 Hybrid2 WBin Hybrid1 Hybrid2 FBin WBin Hybrid1 Hybrid2 Hybrid3 FBin Sketch-A-Net AlexNet Resnet18 SqueezeNet Figure 4: Trade-off between WeightBinConv layers and accuracy on the TU-Berlin dataset is shown in the left \ufb01gure, while the trade-off between weight binarized layers and speedup is shown in the right \ufb01gure. Early on, we observe that a small increase in the percentage of WeightBinConv layers leads to a large increase in accuracy and a marginal decrease in speed. We achieve accuracies comparable to the WBin model with much fewer WeightBinConv layers. titioned. Convert layers towards the end to WeightBinConv: It is observed that later layers typically have high error rates, more \ufb01lter repetitions, and lower computational cost. Hence, the algorithm tends to start converting models to Hybrid from the last layers. Convert the smaller of the layer placed parallely to WeightBinConv: It is a good idea to convert the smaller of the parallely placed layers in the architecture like Residual layers in the ResNet architecture to WeightBinConv, since converting them to WeightBinConv would not damage the computational speedup obtained by the parallel FullBinConv layers. Pick a low Hybridization Ratio: Try to pick low values of the Hybridization Ratio R, ensuring a low proportion of number of layers the highest-error cluster. Relax the Hybridization Ratio for compact models: Having a higher Hybridization Ratio for compact models which inherently have fewer \ufb02ops leaves more layer inputs un-binarized and retains accuracy. 4.4. Why are layer-wise errors independent? Can binarization noise introduced in a layer propagate further into the network and in\ufb02uence other layers? Hubara et al. [18] provide some insights for the same. Let W be the weight and I be the input to the convolutional layer. The output of the convolution between the binary weights and inputs can be represented by OB = \u03b1 \u00b7 (sgn(W\u22ba) \u2299sgn(I)) (5) The desired output O is modelled by OB along with the binarization noise N introduced due to the function sgn(.). O = W \u2217I = X i OBi + Ni (6) When the layer is wide, we expect the deterministic term OB to dominate, because the noise term N is a summation over many independent binarizations from all the neurons in the layer. Thus, we argue that the binarization noise N should have minimal propagation and do little to in\ufb02uence the further inputs. Hence, it is a reasonable approximation to consider the error across each layer independently of the other layers. 4.5. 
Variation with the Hybridization Ratio (R) To observe the trade-off between accuracy and speedup on different degrees of binarization, we chose different values of the Hybridization Ratio (R) to create multiple hybrid versions of the AlexNet, ResNet-18 and SqueezeNet models. Picking a larger R would result in a higher number of WeightBinConv layers. We compare these hybrid networks to their corresponding FBin and WBin versions. In Figure 4, we show model accuracies of AlexNet, ResNet-18 and SqueezeNet on the ImageNet dataset plotted against the number of WeightBinConv layers, starting from only FBin versions on the left, to only WBin versions on the right. We observe that in the case of AlexNet and ResNet18, which are large models, we recover WBin accuracies quickly, at around the 35% mark (Roughly a third of the network containing WeightBinConv layers), with low computational trade-off. We also observe that on sketch data, hybrid models tend to perform signi\ufb01cantly better and perform on par with their WBin counterparts. We also notice that the smaller a model, the more tradeoff must be made to achieve WBin accuracy, i.e a larger Hybridization Ratio must be used. AlexNet, the largest model crosses WBin accuracy at around 32%, while ResNet-18, being smaller, saturates at around 40%. SqueezeNet, a much more compact model, reaches its WBin accuracy at \fModel BinType Last Bin? Acc Mem Sketch-A-Net FBin (XNOR) No 59.6% 19.7x Yes 48.3% 29.2x Sketch-A-Net Hybrid No 73.1% 19.7x Yes 72.0% 29.2x Resnet-18 FBin (XNOR) No 69.9% 13.4x Yes 68.8% 31.2x Resnet-18 Hybrid No 73.9% 13.4x Yes 73.8% 31.2x Table 6: Effects of last layer weight-binarization on TUBerlin dataset, for Sketch-A-Net and ResNet-1. Observe that our hybrid models do not face drastic accuracy drop when the last layer is weight-binarized. 60%. 4.6. Optimizing Memory We measured accuracies for FBin and Hybrid variants of Sketch-A-Net and ResNet-18 models on TU-Berlin and Sketchy Datasets with weights of the last layer binarized as well as non-binarized and the results are presented in Table 6. For AlexNet-styled architectures (Sketch-A-Net), we observe a drastic drop in accuracies (From 59.1% to 48.3%) on binarizing the last layer, similar to observations made in previous binarization works [36, 30]. Many efforts were made to quantize the last layer and avoid this drop. DoReFaNet and XNOR-Net did not binarize the last layer choosing to incur a degradation in model compression instead while [30] proposed an additional scale layer to mitigate this effect. However, our hybrid versions are able to achieve similar accuracies (a 1% drop for hybrid Sketch-A-Net and no drop for ResNet-18 or AlexNet) since the last layer is weight binarized instead. Hence, our method preserves the overall speedup even though we only weight-binarize the last layer, owing to the comparatively smaller number of computations that occur in this layer. Note that the \ufb01rst layer is always a full-precision Conv layer. The reasons behind this are the insights obtained from [1]. They state that the \ufb01rst layer of the network functions are fundamentally different than the computations being done in the rest of the network because the high variance principal components are not randomly oriented relative to the binarization. Also, since it contains fewer parameters and low computational cost, it does not affect our experiments. 4.7. 
Compressing Compact Models Whether compact models can be compressed further, or need all of the representational power afforded through dense \ufb02oating-point values is an open question asked originally by [19]. We show that our hybrid-binarization technique can Model Method Accuracy Mem FLOPs TU-Berlin Sketchy Sketch-A-Net FPrec 72.9% 85.9% 1x 1135 (12.3x) Squeezenet FPrec 71.2% 86.5% 8x 610 (6.6x) Squeezenet WBin 66.7% 81.1% 23.7x 412 (4.5x) Squeezenet FBin 56.8% 66.0% 23.7x 92 (1x) Squeezenet Hybrid 64.8% 79.6% 23.7x 164 (1.8x) Improvement Hybrid vs FBin +8.0% +13.6% +72 (+0.8x) Table 7: Our performance on SqueezeNet, an explicitly compressed model architecture. Although SqueezeNet is an inherently compressed model, our method still achieves further compression on it. work in tandem with other compression techniques, which do not involve quantization of weights/activations and that hybrid binarization is possible even on compact models. We apply hybrid binarization to SqueezeNet[19] a recent model that employed various architectural design strategies to achieve compactness. SqueezeNet achieves an 8x compression on the compact architecture of Sketch-A-Net. On applying hybrid binarization we achieve a further 32x compression, an overall 256x compression with merely 6% decrease in accuracy. This is due to the high rate of compression inherent and further compression is dif\ufb01cult due to the small number of parameters. After showing that ef\ufb01cacy of hybrid binarization in the previous section, we show that hybrid binarization can work in combination with other compression techniques here. Results for SqueezeNet are shown in Table 7 for the TU-Berlin and Sketchy datasets, and we see that accuracy is only slightly lower compared to the hybridized versions of ResNet-18 and Sketch-A-Net on the same. Hybrid SqueezeNet achieves a total compression of 256x. Similarly, this technique can be combined with many techniques such as HWGQ-Net [4] which proposes an alternative layer to ReLU and repeated binarization as illustrated in [30] among others. Since our primary goal is to investigate the viability of hybrid binarization, these investigationsalbeit interesting, are out of the scope of our current work. 5."
+ },
+ {
+ "url": "http://arxiv.org/abs/1804.02941v1",
+ "title": "Distribution-Aware Binarization of Neural Networks for Sketch Recognition",
+ "abstract": "Deep neural networks are highly effective at a range of computational tasks.\nHowever, they tend to be computationally expensive, especially in\nvision-related problems, and also have large memory requirements. One of the\nmost effective methods to achieve significant improvements in\ncomputational/spatial efficiency is to binarize the weights and activations in\na network. However, naive binarization results in accuracy drops when applied\nto networks for most tasks. In this work, we present a highly generalized,\ndistribution-aware approach to binarizing deep networks that allows us to\nretain the advantages of a binarized network, while reducing accuracy drops. We\nalso develop efficient implementations for our proposed approach across\ndifferent architectures. We present a theoretical analysis of the technique to\nshow the effective representational power of the resulting layers, and explore\nthe forms of data they model best. Experiments on popular datasets show that\nour technique offers better accuracies than naive binarization, while retaining\nthe same benefits that binarization provides - with respect to run-time\ncompression, reduction of computational costs, and power consumption.",
+ "authors": "Ameya Prabhu, Vishal Batchu, Sri Aurobindo Munagala, Rohit Gajawada, Anoop Namboodiri",
+ "published": "2018-04-09",
+ "updated": "2018-04-09",
+ "primary_cat": "cs.CV",
+ "cats": [
+ "cs.CV"
+ ],
+ "main_content": "Introduction Deep learning models are pushing the state-of-the-art in various problems across domains, but are computationally intensive to train and run, especially Convolutional Neural Networks (CNNs) used for vision applications. They also occupy a large amount of memory, and the amount of computation required to train a network leads to high power consumption as well. There have been many developments in the area of model compression in the last few years, with the aim of bringing down network runtimes and storage requirements to mobile-friendly levels. Compression strategies for Convolutional Neural Networks included architectural improvements [16, 20] and re-parametrization [27, 34] to pruning 0.10 0.05 0.00 0.05 0.10 Value 0.00 0.01 0.02 0.03 Probability Distribution of weights (Ours) (Ours) , (XNOR) Figure 1: Weight distribution of a layer with corresponding \u03b1/\u03b2 values, and the scaling factor \u03b1 in the XNOR-Net implementation for comparison. \u03b1 and \u03b2 in our method have differing magnitudes, unlike in XNOR-Net. techniques [14, 25] and quantization [19, 40]. Among these approaches, quantization especially, binarization provided the most compact models as shown in Table 1. Quantized networks where weights/activations were quantized into low-precision representations were found to achieve great model compression. Quantization has proven to be a powerful compression strategy, especially the most extreme form of quantization Binarization. Binarization has enabled the use of XNOR-Popcount operations for vector dot products, which take much less time compared to full-precision Multiply-Accumulates (MACs), contributing to a huge speedup in convolutional layers [28, 19] on a general-purpose CPU. Moreover, as each binary weight requires only a single bit to represent, one can achieve drastic reductions in run-time memory requirements. Previous research [28, 19] shows that it is possible to perform weight and activation binarization on large networks with up to 58x speedups and approximately 32x compression ratios, albeit with signi\ufb01cant drops in accuracy. Later works have tended to move away from binary representations of weights/inputs to multi-bit representations. The reason for this was mainly the large accuracy degradaarXiv:1804.02941v1 [cs.CV] 9 Apr 2018 \fFigure 2: An example sketch passing through a convolutional layer \ufb01lter, with the real-valued \ufb01lter shown alongside corresponding \u03b1-\u03b2 and XNOR-Net \ufb01lters. Orange signi\ufb01es the highest response areas. We can see that DAB-Net has signi\ufb01cantly better responses when compared to XNORNet tion observed in binary networks. While some works [32] have proposed methods to recover some of the lost accuracy, this leads to the natural question of whether, in theory, binary-representations of neural networks can be used at all to effectively approximate a full-precision network. If shown to be suf\ufb01cient, the search for an optimally accurate binarization technique is worthwhile, due to the large gains in speedups (due to binary operations rather than full-prec MACs) and compression compared to multi-bit representations. In our paper, we make the following contributions: 1. We show that binary representations are as expressive as full precision neural networks for polynomial functions, and offer theoretical insights into the same. 2. 
We present a generalized, distribution-aware representation for binary networks, and proceed to calculate the generalized parameter-values for any binary network. 3. We offer an intuitive analysis and comparison of our representation vis-a-vis previous representations, as illustrated in Figure 1. 4. We provide a provably ef\ufb01cient implementation of networks trained using this representation. 5. We demonstrate the effectiveness of our method by extensive experiments applying it to popular model architectures on large-scale sketch datasets and improving upon existing binarization approaches. We also offer intuitions about how this technique might be effective in problems involving data that is inherently binary, such as sketches, as shown in Figure 2. Sketches are a universal form of communication and are easy to draw through mobile devices thus emerging as a new paradigm Method Compression Finetuned SVD 2 [34] 2.6x Circulant CNN 2 [7] 3.6x Adaptive Fastfood-16 [34] 3.7x Collins et al. [8] 4x Zhou et al. [39] 4.3x ACDC [27] 6.3x Network Pruning [14] 9.1x Deep Compression [14] 9.1x GreBdec [38] 10.2x Srinivas et al. [30] 10.3x Guo et al. [13] 17.9x Binarization \u224832x Table 1: Comparison of Binarization and other methods in terms of compression. with interesting areas to explore, such as fast classi\ufb01cation and sketch-based image retrieval. Reproducibility: Our implementation can be found on GitHub 1 2. Related Work We ask the question: Do CNNs need the representational power of 32-bit \ufb02oating point operations, especially for binary-valued data such as sketches? Is it possible to cut down memory costs and make output computations signi\ufb01cantly less expensive? In recent years, several different approaches were proposed to achieve network compression and speedups, and special-purpose networks were proposed for sketch classi\ufb01cation/retrieval tasks. These are summarized below: Sketch Recognition: Many deep-network based works in the past did not lead to fruitful results before, primarily due to these networks being better suited for images rather than sketches. Sketches have signi\ufb01cantly different characteristics as compared to images, and require specialized, \ufb01ne-tuned networks to work with. Sketch-a-Net from Yu et al. [37] took these factors into account, and proposed a carefully designed network structure that suited sketch representations. Their single-model showed tremendous increments over the then state-of-the-art, and managed to beat the average human performance using a Bayesian Fusion ensemble. Being a signi\ufb01cant achievement in this problem since beating human accuracy in recognition problems is dif\ufb01cult this model has been adopted by a number of later works Bui et al. [4], Yu et al. [35], Wang et al. [33]. Pruning Networks for Compression: Optimal Brain Damage [10] and Optimal Brain Surgeon [15] introduced a network pruning technique based on the Hessian of the 1https://github.com/erilyth/DistributionAwareBinarizedNetworksWACV18 \floss function. Deep Compression [14] also used pruning to achieve compression by an order of magnitude in various standard neural networks. It further reduced non-runtime memory by employing trained quantization and Huffman coding. Network Slimming [25] introduced a new learning scheme for CNNs that leverages channel-level sparsity in networks, and showed compression and speedup without accuracy degradation, with decreased run-time memory footprint as well. 
We train our binary models from scratch, as opposed to using pre-trained networks as in the above approaches. Higher Bit Quantization: HashedNets [6] hashed network weights to bin them. Zhou et al. [2] quantized networks to 4-bit weights, achieving 8x memory compression by using 4 bits to represent 16 different values and 1 bit to represent zeros. Trained Ternary Quantization [41] uses 2-bit weights and scaling factors to bring down model size to 16x compression, with little accuracy degradation. Quantized Neural Networks[19] use low-precision quantized weights and inputs and replaces arithmetic operations with bit-wise ones, reducing power consumption. DoReFa-Net [40] used low bit-width gradients during backpropagation, and obtained train-time speedups. Ternary Weight Networks [22] optimize a threshold-based ternary function for approximation, with stronger expressive abilities than binary networks. The above works cannot leverage the speedups gained by XNOR/Pop-count operations which could be performed on dedicated hardware, unlike in our work. This is our primary motivation for attempting to improve binary algorithms. Binarization: We provide an optimal method for calculating binary weights, and we show that all of the above binarization techniques were special cases of our method, with less accurate approximations. Previous binarization papers performed binarization independent of the distribution weights, for example [28]. The method we introduce is distribution-aware, i.e. looks at the distribution of weights to calculate an optimal binarization. BinaryConnect [9] was one of the \ufb01rst works to use binary (+1, -1) values for network parameters, achieving signi\ufb01cant compression. XNOR-Nets [28] followed the work of BNNs [18], binarizing both layer weights and inputs and multiplying them with scaling constants bringing signi\ufb01cant speedups by using faster XNOR-Popcount operations to calculate convolutional outputs. Recent research proposed a variety of additional methods including novel activation functions [5], alternative layers [32], approximation algorithms [17], \ufb01xed point bit-width allocations [23]. Merolla et al. [?] and Anderson et al. [1] offer a few theoretical insights and analysis into binary networks. Further works have extended this in various directions, including using local binary patterns [21] and lookup-based compression methods [3]. 3. Representational Power of Binary Networks Many recent works in network compression involve higher bit weight quantization using two or more bits [2, 41, 22] instead of binarization, arguing that binary representations would not be able to approximate full-precision networks. In light of this, we explore whether the representational power that binary networks can offer is theoretically suf\ufb01cient to get similar representational power as full-precision networks. Rolnick et al. [24, 29] have done extensive work in characterizing the expressiveness of neural networks. They claim that due to the nature of functions that they depend on real-world physics, in addition to mathematics the seemingly huge set of possible functions could be approximated by deep learning models. 
From the Universal Approximation Theorem [11], it is seen that any arbitrary function can be well-approximated by an Arti\ufb01cial Neural Network; but cheap learning, or models with far fewer parameters than generic ones, are often suf\ufb01cient to approximate multivariate monomials which are a class of functions with practical interest, occurring in most real-world problems. We can de\ufb01ne a binary neural network having k layers with activation function \u03c3(x) and consider how many neurons are required to compute a multivariate monomial p(x) of degree d. The network takes an n dimensional input x, producing a one dimensional output p(x). We de\ufb01ne Bk(p, \u03c3) to be the minimum number of binary neurons (excluding input and output) required to approximate p, where the error of approximation is of degree at least d + 1 in the input variables. For instance, B1(p, \u03c3) is the minimal integer m such that: m X j=1 wj\u03c3 n X i=1 aijxi ! = p(x) + O(xd+1 1 + . . . + xd+1 n ). Any polynomial can be approximated to high precision as long as input variables are small enough [24]. Let B(p, \u03c3) = mink\u22650 Bk(p, \u03c3). Theorem 1. For p(x) equal to the product x1x2 \u00b7 \u00b7 \u00b7 xn, and for any \u03c3 with all nonzero Taylor coef\ufb01cients, we have one construction of a binary neural network which meets the condition Bk(p, \u03c3) = O \u0010 n(k\u22121)/k \u00b7 2n1/k\u0011 . (1) Proof of the above can be found in the supplementary material. Conjecture III.2. of Rolnick et al. [29] says that this bound is approximately optimal. If this conjecture proves to be true, weight-binarized networks would have the same representational power as full-precision networks, since the \fnetwork that was essentially used to prove that the above theorem that a network exists that can satisfy that bound was a binary network. The above theorem shows that any neural network that can be represented as a multivariate polynomial function is considered as a simpli\ufb01ed model with ELU-like activations, using continuously differentiable layers so pooling layers are excluded as well. While there can exist a deep binary-weight network that can possibly approximate polynomials similar to full precision networks, it does say that such a representation would be ef\ufb01ciently obtainable through Stochastic Gradient Descent. Also, this theorem assumes only weights are binarized, not the activations. Activation binarization typically loses a lot of information and might not be a good thing to do frequently. However, this insight motivates the fact that more investigation is needed into approximating networks through binary network structures. 4. Distribution-Aware Binarization We have so far established that binary representations are possibly suf\ufb01cient to approximate a polynomial with similar numbers of neurons as a full-precision neural network. We now investigate the question What is the most general form of binary representation possible? In this section, we derive a generalized distribution-aware formulation of binary weights, and provide an ef\ufb01cient implementation of the same. We consider models binarized with our approach as DAB-Nets (Distribution Aware Binarized Networks). We model the loss function layer-wise for the network. We assume that inputs to the convolutional layers are binary i.e. belong to {+1, \u22121}, and \ufb01nd constants \u03b1 and \u03b2 (elaborated below) as a general binary form for layer weights. 
These constants are calculated from the distribution of real-valued weights in a layer thus making our approach distribution-aware. 4.1. Derivation Without loss of generality, we assume that W is a vector in Rn , where n = c \u00b7 w \u00b7 h. We attempt to binarize the weight vector W to f W which takes a form similar to this example [\u03b1\u03b1...\u03b2\u03b1\u03b2]. Simply put, f W is a vector consisting of scalars \u03b1 and \u03b2, the two values forming the binary vector. We represent this as f W = \u03b1e + \u03b2(1 \u2212e) where e is a vector such that e \u2208{0, 1}n \u220be \u0338= 0 and e \u0338= 1. We de\ufb01ne K as eT e which represents the number of ones in the e vector. Our objective is to \ufb01nd the best possible binary approximation for W. We set up the optimization problem as: f W\u2217= argmin f W || W \u2212f W ||2 We formally state this as the following: The optimal binary weight vector f W\u2217for any weight vector W which minimizes the approximate-error function J =|| W \u2212f W ||2 can be represented as: f W\u2217= \u03b1e + \u03b2(1 \u2212e) where \u03b1 = WT e K , \u03b2 = WT (1 \u2212e) n \u2212K for a given K. That is, given a K, the optimal selection of e would correspond to either the K smallest weights of W or the K largest weights of W. The best suited K, we calculate the value of the following expression for every value of K, giving us an e, and maximize the expression: e\u2217= argmax e (|| WT e ||2 K + || WT (1 \u2212e) ||2 n \u2212K ) A detailed proof of the above can be found in the supplementary material. The above representation shows the values obtained for e, \u03b1 and \u03b2 are the optimal approximate representations of the weight vector W. The vector e, which controls the number and distribution of occurrences of \u03b1 and \u03b2, acts as a mask of the top/bottom K values of W. We assign \u03b1 to capture the greater of the two values in magnitude. Note that the scaling values derived in the XNOR formulation, \u03b1 and \u2212\u03b1, are a special case of the above, and hence our approximation error is at most that of the XNOR error. We explore what this function represents and how this relates to previous binarization techniques in the next subsection. 4.2. Intuitions about DAB-Net In this section, we investigate intuitions about the derived representation. We can visualize that e and (1 \u2212e) are orthogonal vectors. Hence, if normalized, e and (1 \u2212e) form a basis for a subspace R2. Theorem 2 says the best \u03b1 and \u03b2 can be found by essentially projecting the weight matrix W into this subspace, \ufb01nding the vector in the subspace which is closest to e and (1 \u2212e) respectively. \u03b1 = \u27e8W, e\u27e9 \u27e8e, e\u27e9\u00b7 e , \u03b2 = \u27e8W, (1 \u2212e)\u27e9 \u27e8(1 \u2212e), (1 \u2212e)\u27e9\u00b7 (1 \u2212e) We also show that our derived representation is different from the previous binary representations since we cannot derive them by assuming a special case of our formulation. XNOR-Net [28] or BNN [18]-like representations cannot be obtained from our formulation. However, in practice, we are able to simulate XNOR-Net by constraining W to be mean-centered and K = n 2 , since roughly half the weights are above 0, the other half below, as seen in Figure 5 in Section 5.3.2. \fAlgorithm 1 Finding an optimal K value. 
1: Initialization 2: W = 1D weight vector 3: T = Sum of all the elements of W 4: Sort(W) 5: D = [00...0] // Empty array of same size as W 6: optK1 = 0 // Optimal value for K 7: maxD1 = 0 // Value of D for optimal K value 8: 9: for I= 1 to D.size do 10: Pi = Pi\u22121 + Wi 11: Di = P 2 i i + (T \u2212Pi)2 n\u2212i 12: if Di \u2265maxD1 then 13: maxD1 = Di 14: optK1 = i 15: 16: Sort(W, reverse=true) and Repeat steps 4-13 with optK2 and maxD2 17: 18: optKfinal = optK1 19: if maxD2 > maxD1 then 20: optKfinal = optK2 21: 22: return optKfinal 4.3. Implementation The representation that we earlier derived requires to be ef\ufb01ciently computable, in order to ensure that our algorithm runs fast enough to be able to train binary networks. In this section, we investigate the implementation, by breaking it into two parts: 1) Computing the parameter K ef\ufb01ciently for every iteration. 2) Training the entire network using that value of K for a given iteration. We show that it is possible to get an ef\ufb01ciently trainable network at minimal extra cost. We provide an ef\ufb01cient algorithm using Dynamic Programming which computes the optimal value for K quickly at every iteration. 4.3.1 Parallel Pre\ufb01x-Sums to Obtain K Theorem 2. The optimal K\u2217which minimizes the value e can be computed in O(n \u00b7 logn) complexity. Considering one weight \ufb01lter at a time for each convolution layer, we \ufb02atten the weights into a 1-dimensional weight vector W. We then sort the vector in ascending order and then compute the pre\ufb01x-sum array P of W. For a selected value of K, the term to be maximized would be ( ||WT e||2 K + ||WT (1\u2212e)||2 n\u2212K ), which is equal to ( P 2 i i + (T \u2212Pi)2 n\u2212i ) since the top K values in W sum up to Pi where T is the sum of all weights in W. We also perform the same computation with a descending order of W\u2019s weights since K can correspond to either the smallest K weights or the largest K weights as we mentioned earlier. In order to speed this up, we perform these operations on all the weight \ufb01lters at the same time considering them as a 2D weight vector instead. Our algorithm runs in O(n \u00b7 logn) time complexity, and is speci\ufb01ed in Algorithm 1. This algorithm is integrated into our code, and will be provided alongside. 4.3.2 Forward and Backward Pass Now that we know how to calculate K, e, \u03b1, and \u03b2 for each \ufb01lter in each layer optimally, we can compute f W which approximates W well. Here, topk(W, K) represents the top K values of W which remain as is whereas the rest are converted to zeros. Let Tk = topk(W, K). Corollary 1 (Weight Binarization). The optimal binary weight f W can be represented as, f W = \u03b1.sgn(Tk) + \u03b2.(1 \u2212sgn(Tk)) where, \u03b1 = Tk K and \u03b2 = (W \u2212Tk) n \u2212K Once we have f W, we can perform convolution as I \u229b f W during the forward pass of the network. Similarly, the optimal gradient e G can be computed as follows, which is back-propagated throughout the network in order to update the weights: Theorem 3 (Backward Pass). 
The optimal gradient value e G can be represented as, (2) e G = f G1 + f G2 where, (3) f G1 = sgn(Tk) K \u25e6sgn(Tk) + ||Tk||l1 K .STE(Tk) (4) f G2 = sgn(W \u2212Tk) n \u2212K \u25e6(1 \u2212sgn(Tk)) + ||W \u2212Tk||l1 n \u2212K .STE(W \u2212Tk) STE(Tk)i = ( Tk i, where |W|i<= 1 0, elsewhere (5) The gradient vector, as seen above, can be intuitively understood if seen as the sum of two independent gradients f G1 and f G2, each corresponding to the vectors e and (1 \u2212e) respectively. Further details regarding the derivation of this gradient would be provided in the supplementary material. \fAlgorithm 2 Training an L-layers CNN with binary weights: 1: A minibatch of inputs and targets (I, Y), cost function C(Y, \u02c6 Y), current weight Wt and current learning rate \u03b7t. 2: updated weight Wt+1 and updated learning rate \u03b7t+1. 3: Binarizing weight \ufb01lters: 4: Wt = MeanCenter(Wt) 5: Wt = Clamp(Wt, -1, 1) 6: Wreal = Wt 7: for l = 1 to L do 8: for jth \ufb01lter in lth layer do 9: Find Klj using Algorithm 1 10: \u03b1lj = topk(Wlj,Klj) Klj 11: \u03b2lj = \u2212(Wlj\u2212topk(Wlj,Klj)) n\u2212Klj 12: f Wlj = \u03b1.sgn(topk(Wlj, Klj)) 13: + \u03b2.(1 \u2212sgn(topk(Wlj, Klj))) 14: 15: \u02c6 Y = BinaryForward(I, f W) 16: 17: \u2202C \u2202f W = BinaryBackward( \u2202C \u02c6 Y , f W) // Standard backward propagation except that gradients are computed using f W instead of Wt as mentioned in Theorem. 3 18: 19: We then copy back the real weights in order to apply the gradients computed. Wt = Wreal 20: 21: Wt+1 = UpdateParameters(Wt, \u2202C \u2202f W, \u03b7t) 22: \u03b7t+1 = UpdateLearningrate(\u03b7t, t) 4.4. Training Procedure Putting all the components mentioned above together, we have outlined our training procedure in Algorithm 2. During the forward pass of the network, we \ufb01rst mean center and clamp the current weights of the network. We then store a copy of these weights as Wreal. We compute the binary forward pass of the network, and then apply the backward pass using the weights f W, computing gradients for each of the weights. We then apply these gradients on the original set of weights Wt in order to obtain Wt+1. In essence, binarized weights are used to compute the gradients, but they are applied to the original stored weights to perform the update. This requires us to store the full precision weights during training, but once the network is trained, we store only the binarized weights for inference. 5. Experiments We empirically demonstrate the effectiveness of our optimal distribution-aware binarization algorithm (DAB-Net) on the TU-Berlin and Sketchy datasets. We compare DABNet with BNN and XNOR-Net [28] on various architectures, on two popular large-scale sketch recognition datasets as sketches are sparse and binary. Also, they are easier to train with than standard images, for which we believe the algorithm needs to be stabilized in essence, the K value must be restricted to change by only slight amounts. We show that our approach is superior to existing binarization algorithms, and can generalize to different kinds of CNN architectures on sketches. 5.1. Experimental Setup In our experiments, we de\ufb01ne the network having only the convolutional layer weights binarized as WBin, the network having both inputs and weights binarized as FBin and the original full-precision network as FPrec. 
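Pulling together the pieces above (the prefix-sum search for K from Algorithm 1, the closed-form alpha and beta, and the two-level filter of Corollary 1), the following is an illustrative NumPy rendering under our reading of those equations; it is not the authors' implementation, and the gradient of Theorem 3 and the full training loop of Algorithm 2 are omitted:

```python
import numpy as np

def dab_binarize_filter(w):
    """Distribution-aware binarization of one flattened weight filter.

    Searches K over both ends of the sorted weights using prefix sums
    (as in Algorithm 1), maximising ||W^T e||^2 / K + ||W^T (1-e)||^2 / (n-K),
    then returns (w_tilde, alpha, beta, e) with alpha = W^T e / K and
    beta = W^T (1-e) / (n-K), so w_tilde = alpha * e + beta * (1 - e)."""
    w = np.asarray(w, dtype=float).ravel()
    n = w.size
    total = w.sum()

    best = (-np.inf, None, None)                      # (objective, order, K)
    for order in (np.argsort(w), np.argsort(-w)):     # ascending, descending
        prefix = np.cumsum(w[order])
        for k in range(1, n):                         # K = 0 or n makes e trivial
            obj = prefix[k - 1] ** 2 / k + (total - prefix[k - 1]) ** 2 / (n - k)
            if obj > best[0]:
                best = (obj, order, k)

    _, order, k = best
    e = np.zeros(n, dtype=bool)
    e[order[:k]] = True                               # mask of the K selected weights
    alpha = w[e].mean()
    beta = w[~e].mean()
    return np.where(e, alpha, beta), alpha, beta, e

# Toy check: the two-level approximation is at least as close to w as a
# single-scale sgn-based (XNOR-style) approximation.
w = np.random.default_rng(1).normal(0.02, 0.1, size=27)   # e.g. one 3x3x3 filter
w_tilde, alpha, beta, e = dab_binarize_filter(w)
xnor_like = np.abs(w).mean() * np.sign(w)
assert np.sum((w - w_tilde) ** 2) <= np.sum((w - xnor_like) ** 2) + 1e-12
```

During training, such binarized filters would be used for the forward and backward passes, while the resulting gradients are applied to the stored full-precision weights, as Algorithm 2 describes.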
Binary Networks have achieved accuracies comparable to fullprecision networks on limited domain/simpli\ufb01ed datasets like CIFAR-10, MNIST, SVHN, but show considerable losses on larger datasets. Binary networks are well suited for sketch data due to its binary and sparse nature of the data. TU-Berlin: The TU-Berlin [12] dataset is the most popular large-scale free-hand sketch dataset containing sketches of 250 categories, with a human sketchrecognition accuracy of 73.1% on an average. Sketchy: A recent large-scale free-hand sketch dataset containing 75,471 hand-drawn sketches spanning 125 categories. This dataset was primarily used to cross-validate results obtained on the TU-Berlin dataset, to ensure the robustness of our approach with respect to the method of data collection. For all the datasets, we \ufb01rst resized the input images to 256 x 256. A 224 x 224 (225 x 225 for Sketch-A-Net) sized crop was then randomly taken from an image with standard augmentations such as rotation and horizontal \ufb02ipping, for TU-Berlin and Sketchy. In the TU-Berlin dataset, we use three-fold cross validation which gives us a 2:1 train-test split ensuring that our results are comparable with all previous methods. For Sketchy, we use the training images for retrieval as the training images for classi\ufb01cation, and validation images for retrieval as the validation images for classi\ufb01cation. We report ten-crop accuracies on both the datasets. We used the PyTorch framework to train our networks. We used the Sketch-A-Net[37], ResNet-18[16] and GoogleNet[31] architectures. Weights of all layers except the \ufb01rst were binarized throughout our experiments, except in Sketch-A-Net for which all layers except \ufb01rst and last layers were binarized. All networks were trained from scratch. We used the Adam optimizer for all experiments. Note that we do not use a bias term or weight decay for binarized Conv layers. We used a batch size of 256 for all \fModels Method Accuracies TU-Berlin Sketchy Sketch-A-Net FPrec 72.9% 85.9% WBin (BWN) 73.0% 85.6% FBin (XNOR-Net) 59.6% 68.6% WBin DAB-Net 72.4% 84.0% FBin DAB-Net 60.4% 70.6% Improvement XNOR-Net vs DAB-Net +0.8% +2.0% ResNet-18 FPrec 74.1% 88.7% WBin (BWN) 73.4% 89.3% FBin (XNOR-Net) 68.8% 82.8% WBin DAB-Net 73.5% 88.8% FBin DAB-Net 71.3% 84.2% Improvement XNOR-Net vs DAB-Net +2.5% +1.4% GoogleNet FPrec 75.0% 90.0% WBin (BWN) 74.8% 89.8% FBin (XNOR-Net) 72.2% 86.8% WBin DAB-Net 75.7% 90.1% FBin DAB-Net 73.7% 87.4% Improvement XNOR-Net vs DAB-Net +1.5% +0.6% Table 2: Our DAB-Net models compared to FBin, WBin and FPrec models on TU-Berlin and Sketchy in terms of accuracy. Sketch-A-Net models and a batch size of 128 for ResNet18 and GoogleNet models, the maximum size that \ufb01ts in a 1080Ti GPU. Additional experimental details are available in the supplementary material. 5.2. Results We compare the accuracies of our distribution aware binarization algorithm for WBin and FBin models on the TUBerlin and Sketchy datasets. Note that higher accuracies are an improvement, hence stated in green in Table 2. On the TU-Berlin and Sketchy datasets in Table 2, we observe that FBin DAB-Net models consistently perform better over their XNOR-Net counterparts. They improve upon XNORNet accuracies by 0.8%, 2.5%, and 1.5% in Sketch-A-Net, ResNet-18, and GoogleNet respectively on the TU-Berlin dataset. Similarly, they improve by 2.0%, 1.4%, and 0.6% respectively on the Sketchy dataset. We also compare them with state-of-the-art sketch classi\ufb01cation models in Table 3. 
We \ufb01nd that our compressed models perform signi\ufb01cantly better than the original sketch models and offer compression, runtime and energy savings additionally. Our DAB-Net WBin models attain accuracies similar to BWN WBin models and do not offer major improvements mainly because WBin models achieve FPrec accuracies already, hence do not have much scope for improvement unlike FBin models. Thus, we conclude that our DAB-Net FBin models are able to attain signi\ufb01cant accuracy improvements over their XNOR-Net counterparts when everything apart from the binarization method is kept constant. 2It is the sketch-a-net SC model trained with additional imagenet data, additional data augmentation strategies and considering an ensemble, hence would not be a direct comparison Models Accuracy AlexNet-SVM 67.1% AlexNet-Sketch 68.6% Sketch-A-Net SC 72.2% Humans 73.1% Sketch-A-Net-22[36] 77.0% Sketch-A-Net WBin DAB-Net 72.4% ResNet-18 WBin DAB-Net 73.5% GoogleNet WBin DAB-Net 75.7% Sketch-A-Net FBin DAB-Net 60.4% ResNet-18 FBin DAB-Net 71.3% GoogleNet FBin DAB-Net 73.7% Table 3: A comparison between state-of-the-art single model accuracies of recognition systems on the TU-Berlin dataset. 5.3. XNOR-Net vs DAB-Net We measure how K, \u03b1, and \u03b2 vary across various layers over time during training, and these variations are observed to be quite different from their corresponding values in XNOR-Net. These observations show that binarization can approximate a network much better when it is distribution-aware (like in our technique) versus when it is distribution-agnostic (like XNOR-Nets). 5.3.1 Variation of \u03b1 and \u03b2 across Time We plot the distribution of weights of a randomly selected \ufb01lter belonging to a layer and observe that \u03b1 and \u03b2 of DABNet start out to be similar to \u03b1 and \u2212\u03b1 of XNOR-Nets, since the distributions are randomly initialized. However, as training progresses, we observe as we go from Sub\ufb01gure (1) to (4) in Figure 3, the distribution eventually becomes nonsymmetric and complex, hence our values signi\ufb01cantly diverge from their XNOR-Net counterparts. This divergence signi\ufb01es a better approximation of the underlying distribution of weights in our method, giving additional evidence to our claim that the proposed DAB-Net technique gives a better representation of layer weights, signi\ufb01cantly different from that of XNOR-Nets. 5.3.2 Variation of K across Time and Layers We de\ufb01ne normalized K as the K n for a layer \ufb01lter. For XNOR-Nets, K would be the number of values below zero in a given weight \ufb01lter which has minimal variation, and does not take into consideration the distribution of weights in the \ufb01lter as K in this case is simply the number of weights below a certain \ufb01xed global threshold, zero. However, we observe that the K computed in DAB-Net varies signi\ufb01cantly across epochs initially, but slowly converges to an optimal value for the speci\ufb01c layer as shown in Figure 4. \f(1) (2) (3) (4) Figure 3: Sub-\ufb01gures (1) to (4) show the train-time variation of \u03b1 and \u03b2 for a layer \ufb01lter. Initially, \u03b1 and \u03b2 have nearly equal magnitudes, similar to the XNOR-Net formulation, but as we progress to (4), we see that \u03b1 and \u03b2 have widely different magnitudes.Having just one scaling constant (XNOR-Net) would be a comparatively poor approximator. 
Figure 4: The variation of the normalized K-value over time during training. It falls initially but eventually converges to 0.35, whereas the normalized K-value for XNOR-Net remains almost at 0.5 throughout. We also plot the variation of normalized K-values for a few randomly chosen filter indices across layers and observe that it varies from layer to layer, adapting to the distribution of weights at each layer. Each filter has its own set of weights, which accounts for the differences in the variation of K in each case, as shown in Figure 5. 6."
+ },
+ {
+ "url": "http://arxiv.org/abs/1611.00472v1",
+ "title": "Towards Sub-Word Level Compositions for Sentiment Analysis of Hindi-English Code Mixed Text",
+ "abstract": "Sentiment analysis (SA) using code-mixed data from social media has several\napplications in opinion mining ranging from customer satisfaction to social\ncampaign analysis in multilingual societies. Advances in this area are impeded\nby the lack of a suitable annotated dataset. We introduce a Hindi-English\n(Hi-En) code-mixed dataset for sentiment analysis and perform empirical\nanalysis comparing the suitability and performance of various state-of-the-art\nSA methods in social media.\n In this paper, we introduce learning sub-word level representations in LSTM\n(Subword-LSTM) architecture instead of character-level or word-level\nrepresentations. This linguistic prior in our architecture enables us to learn\nthe information about sentiment value of important morphemes. This also seems\nto work well in highly noisy text containing misspellings as shown in our\nexperiments which is demonstrated in morpheme-level feature maps learned by our\nmodel. Also, we hypothesize that encoding this linguistic prior in the\nSubword-LSTM architecture leads to the superior performance. Our system attains\naccuracy 4-5% greater than traditional approaches on our dataset, and also\noutperforms the available system for sentiment analysis in Hi-En code-mixed\ntext by 18%.",
+ "authors": "Ameya Prabhu, Aditya Joshi, Manish Shrivastava, Vasudeva Varma",
+ "published": "2016-11-02",
+ "updated": "2016-11-02",
+ "primary_cat": "cs.CL",
+ "cats": [
+ "cs.CL"
+ ],
+ "main_content": "Introduction Code Mixing is a natural phenomenon of embedding linguistic units such as phrases, words or morphemes of one language into an utterance of another (Muysken, 2000; Duran, 1994; Gysels, 1992). Code-mixing is widely observed in multilingual societies like India, which has 22 of\ufb01cial languages most popular of which are Hindi and English. With over 375 million Indian population online, usage of Hindi has been steadily increasing on the internet. This opens up tremendous potential for research in sentiment and opinion analysis community for studying trends, reviews, events, human behaviour as well as linguistic analysis. Most of the current research works have involved sentiment polarity detection (Feldman, 2013; Liu, 2012; Pang and Lee, 2008) where the aim is to identify whether a given sentence or document is (usually) positive, negative or neutral. Due to availability of large-scale monolingual corpora, resources and widespread use of the language, English has attracted the most attention. Seminal work in sentiment analysis of Hindi text was done by Joshi et al. (2010) in which the authors built three step fallback model based on classi\ufb01cation, machine translation and sentiment lexicons. They also observed that their system performed best with unigram features without stemming. Bakliwal et al. (2012) generated a sentiment lexicon for Hindi and validated the results on translated form of Amazon Product Dataset Blitzer et al. (2007). Das and Bandyopadhyay (2010) created Hindi SentiWordNet, a sentiment lexicon for Hindi. \u2217 * indicates these authors contributed equally to this work. This work is licensed under a Creative Commons Attribution 4.0 International Licence. Licence details: http://creativecommons.org/licenses/by/4.0/ arXiv:1611.00472v1 [cs.CL] 2 Nov 2016 \fSentence variations Trailer dhannnsu hai bhai Dhannnsu trailer hai bhai Bhai trailer dhannnsu hai Bhai dhannnsu trailer hai Table 1: Illustration of free structure present in code mixed text. All sentences convey the same meaning. Word Meaning Appearing Variations bh \u0000 t (bahut) very bahout bohut bhout bauhat bohot bahut bhaut bahot bhot m \u0000 bArk (mubaarak) wishes mobarak mubarak mubark p ~ yAr (pyaar) love pyaar peyar pyara piyar pyr piyaar pyar Table 2: Spelling variations of romanized words in our Hi-En code-mix dataset. Sentiment Analysis in Code-mixed languages has recently started gaining interest owing to the rising amount of non-English speaking users. Sharma et al. (2015) segregated Hindi and English words and calculated \ufb01nal sentiment score by lexicon lookup in respective sentient dictionaries. Hindi-English (Hi-En) code mixing allows ease-of-communication among speakers by providing a much wider variety of phrases and expressions. A common form of code mixing is called as romanization 1, which refers to the conversion of writing from a different writing system to the Roman script. But this freedom makes the task for developing NLP tools more dif\ufb01cult, highlighted by (Chittaranjan et al., 2014; Vyas et al., 2014; Barman et al., 2014). Initiatives have been taken by shared tasks (Sequiera et al., 2015; Solorio et al., 2014), however they do not cover the requirements for a sentiment analysis system. Deep learning based approaches (Zhang and LeCun, 2015; Socher et al., 2013) have been demonstrated to solve various NLP tasks. 
We believe these can provide solution to code-mixed and romanized text from various demographics in India, as similar trends are followed in many other Indian languages too. dos Santos and Zadrozny (2014) demonstrated applicability of character models for NLP tasks like POS tagging and Named Entity Recognition (dos Santos and Guimar\u02dc aes, 2015). LSTMs have been observed to outperform baselines for language modelling (Kim et al., 2015) and classi\ufb01cation (Zhou et al., 2015). In a recent work, (Bojanowski et al., 2016) proposed a skip-gram based model in which each word is represented as a bag of character n-grams. The method produced improved results for languages with large vocabularies and rare words. The romanized code mixed data on social media presents additional inherent challenges such as contractions like \u201dbetween\u201d \u2192\u201dbtwn\u201d, non-standard spellings such as \u201dcooolll\u201d or \u201dbhut bdiya\u201d and nongrammatical constructions like \u201dsir hlp plzz naa\u201d. Hindi is phonetically typed while English (Roman script) doesn\u2019t preserve phonetics in text. Thus, along with diverse sentence construction, words in Hindi can have diverse variations when written online, which leads to large amount of tokens, as illustrated in Table 2. Meanwhile there is a lack of a suitable dataset. Our contributions in this paper are (i) Creation, annotation and analysis of a Hi-En code-mixed dataset for the sentiment analysis, (ii) Sub-word level representations that lead to better performance of LSTM networks compared to Character level LSTMs (iii) Experimental evaluation for suitability and evaluation of performance of various state-of-the-art techniques for the SA task, (iv) A preliminary investigation of embedding linguistic priors might be encoded for SA task by char-RNN architecture and the relation of architecture with linguistic priors, leading to the superior performance on this task. Our paper is divided into the following sections: We begin with an introduction to Code Mixing and romanization in Section 1. We mention the issues with code-mixed data in context of Sentiment Analysis and provides an overview of existing solutions. We then discusses the process of creation of the dataset and its features in Section 2. In Section 3, we introduce Sub-word level representation and explains how they are able to model morphemes along with propagating meaningful information, thus capturing sentiment in a sentence. Then in Section 4, we explain our experimental setup, describe the performance of proposed system and compare it with baselines and other methods, proceeded by a discussion on our results. 1https://en.wikipedia.org/wiki/Romanization \f2 Dataset We collected user comments from public Facebook pages popular in India. We chose pages of Salman Khan, a popular Indian actor with massive fan following, and Narendra Modi, the current Prime Minister of India. The pages have 31 million and 34 million facebook user likes respectively. These pages attract large variety of users from all across India and contain lot of comments to the original posts in codemixed representations in varied sentiment polarities. We manually pre-processed the collected data to remove the comments that were not written in roman script, were longer than 50 words, or were complete English sentences. We also removed the comments that contained more than one sentence, as each sentence might have different sentiment polarity. Then, we proceeded to manual annotation of our dataset. 
The comments were annotated by two annotators in a 3-level polarity scale positive, negative or neutral. Only the comments with same polarity marked by both the annotators are considered for the experiments. They agreed on the polarity of 3879 of 4981 (77%) sentences. The Cohen\u2019s Kappa coef\ufb01cient (Cohen, 1960) was found to be 0.64. We studied the reasons for misalignment and found that causes typically were due to difference in perception of sentiments by individuals, different interpretations by them and sarcastic nature of some comments which is common in social media data. The dataset contains 15% negative, 50% neutral and 35% positive comments owing to the nature of conversations in the selected pages. The dataset exhibits some of the major issues while dealing with code-mixed data like short sentences with unclear grammatical structure. Further, romanization of Hindi presents an additional set of complexities due to loss of phonetics and free ordering in sentence constructions as shown in Table 1. This leads to a number of variations of how words can be written. Table 2 contains some of the words with multiple spelling variations in our dataset, which is one of the major challenges to tackle in Hi-En code-mixed data. Dataset Size # Vocab Social CM Sentiment STS-Test 498 2375 \u0013 \u0013 OMD 3238 6211 \u0013 \u0013 SemEval\u201913 13975 35709 \u0013 \u0013 IMDB 50000 5000 \u0013 (Vyas et al., 2014) 381 \u0013 \u0013 Ours 3879 7549 \u0013 \u0013 \u0013 Table 3: Comparison with other datasets. Popular related datasets are listed in Table 3. STS, SemEval, IMDB etc. have been explored for SA tasks but they contain text in English. The dataset used by Vyas et al. (2014) contains Hi-En Code Mixed text but doesn\u2019t contain sentiment polarity. We constructed a code mixed dataset with sentiment polarity annotations, and the size is comparable with several datasets. Table 4 shows some examples of sentences from our dataset. Here, we have phrases in Hindi (source language) written in English (target) language. Example Approximate Meaning Sentiment Polarity Aisa PM naa hua hai aur naa hee hoga Neither there has been a PM like him, nor there will be Positive abe kutte tere se kon baat karega Who would talk to you, dog? Negative Trailer dhannnsu hai bhai Trailer is awesome, brother. Positive Table 4: Examples of Hi-En Code Mixed Comments from the dataset. Our dataset and code is freely available for download 2 to encourage further exploration in this domain. 2https://github.com/DrImpossible/Sub-word-LSTM \f3 Learning Compositionality Our target is to perform sentiment analysis on the above presented dataset. Most commonly used statistical approaches learn word-level feature representations. We start our exploration for suitable algorithms from models having word-based representations. 3.1 Word-level models Word2Vec(Mikolov et al., 2013) and Word-level RNNs (Word-RNNs) (thang Luong et al., 2013) have substantially contributed to development of new representations and their applications in NLP such as in Summarization (Cao et al., 2015) and Machine Translation (Cho et al., 2014). They are theoretically sound since language consists of inherently arbitrary mappings between ideas and words. Eg: The words person(English) and insaan(Hindi) do not share any priors in their construction and neither do their constructions have any relationship with the semantic concept of a person. Hence, popular approaches consider lexical units to be independent entities. 
However, operating on the lexical domain draws criticism since the \ufb01nite vocabulary assumption; which states that models assume language has \ufb01nite vocabulary but in contrast, people actively learn & understand new words all the time. Excitingly, our dataset seems suited to validate some of these assumptions. In our dataset, vocabulary sizes are greater than the size of the dataset as shown in Table 3. Studies on similar datasets have shown strong correlation between number of comments and size of vocabulary (Saif et al., 2013). This rules out methods like Word2Vec, N-grams or Word-RNNs which inherently assume a small vocabulary in comparison to the data size. The \ufb01nite vocabulary generally used to be a good approximation for English, but is no longer valid in our scenario. Due to the high sparsity of words themselves, it is not possible to learn useful word representations. This opens avenues to learn non-lexical representations, the most widely studied being character-level representations, which is discussed in the next section. 3.2 Character-level models Character-level RNNs (Char-RNNs) have recently become popular, contributing to various tasks like (Kim et al., 2015). They do not have the limitation of vocabulary, hence can freely learn to generate new words. This freedom, in fact, is an issue: Language is composed of lexical units made by combining letters in some speci\ufb01c combinations, i.e. most of the combinations of letters do not make sense. The complexity arises because the mappings between meaning and its construction from characters is arbitrary. Character models may be apriori inappropriate models of language as characters individually do not usually provide semantic information. For example, while \u201c King \u2212Man + Women = Queen\u201d is semantically interpretable by a human, \u201cCat \u2212C + B = Bat\u201d lacks any linguistic basis. But, groups of characters may serve semantic functions. This is illustrated by Un + Holy = Unholy or Cat + s = Cats which is semantically interpretable by a human. Since sub-word level representations can generate meaningful lexical representations and individually carry semantic weight, we believe that sub-word level representations consisting composition of characters might allow generation of new lexical structures and serve as better linguistic units than characters. 3.3 Sub-word level representations Lexicon based approaches for the SA task (Taboada et al., 2011; Sharma et al., 2015) perform a dictionary look up to obtain an individual score for words in a given sentence and combine these scores to get the sentiment polarity of a sentence. We however want to use intermediate sub-word feature representations learned by the \ufb01lters during convolution operation. Unlike traditional approaches that add sentiment scores of individual words, we propagate relevant information with LSTM and compute \ufb01nal sentiment of the sentence as illustrated in Figure 1. Hypothesis: We propose that incorporating sub-word level representations into the design of our models should result in better performance. This would also serve as a test scenario for the broader hypothesis proposed by Dyer et. al. in his impressive ICLR keynote 3 Incorporating linguistic priors in network architectures lead to better performance of models. 
3Available at: http://videolectures.net/iclr2016 dyer model architecture/ \fMethodology: We propose a method of generating sub-word level representations through 1-D convolutions on character inputs for a given sentence. Formally, let C be the set of characters and T be an set of input sentences. The sentence s \u2208T is made up of a sequence of characters [c1, ...., cl] where l is length of the input. Hence, the representation of the input s is given by the matrix Q \u2208Rd\u00d7l where d is the dimensionality of character embedding that corresponding to [c1, ...., cl]. We perform convolution of Q with a \ufb01lter H \u2208Rd\u00d7m of length m after which we add a bias and apply a non-linearity to obtain a feature map f \u2208Rl\u2212m+1. Thus we can get sub-word level (morpheme-like) feature map. Speci\ufb01cally, the ith element of f is given by: f[i] = g((Q[:, i : i + m \u22121] \u2217H) + b) (1) where Q[:, i : i + m \u22121] is the matrix of (i)th to (i + m \u22121)th character embedding and g corresponds to ReLU non-linearity. Finally, we pool the maximal responses from p feature representations corresponding to selecting subword representations as: yi = max(f[p \u2217(i : i + p \u22121)]) (2) Next, we need to model the relationships between these features yi[:] in order to \ufb01nd the overall sentiment of the sentence. This is achieved by LSTM(Graves, 2013) which is suited to learning to propagate and \u2019remember\u2019 useful information, \ufb01nally arriving at a sentiment vector representation from the inputs. We provide ft as an input to the memory cell at time t. We then compute values of It the input gate, \u02dc Ct the candidate value for the state of the memory cell at time t and ft the activation of the forget gate, which can be used to compute the information stored in memory cell at time t. With the new state of memory cell Ct, we can compute the output feature representation by: Ot = \u03c3(Wyt + Uh(t \u22121) + V (Ct + b) (3) ht = Ottanh(Ct) (4) where W,U and V are weight matrices and bi are biases. After l steps, hl represents the relevant information retained from the history. That is then passed to a fully connected layer which calculates the \ufb01nal sentiment polarity as illustrated in the Figure 1. Figure 2 gives schematic overview of the architecture. We perform extensive experiments to qualitatively and quantitatively validate the above claims as explained in the next section. 4 Experiments We perform extensive evaluation of various approaches, starting with a suitability study for the nature of approaches that would be able to generalize to this data. We compare our approaches with the stateof-the-art methods which are feasible to generalize on code-mixed data and (Sharma et al., 2015), the current state-of-the-art in Hi-En code-mixed SA task. 4.1 Method Suitability Following approaches have been used for performing SA tasks in English but do not suit mix code setting: \u2022 Approaches involving NLP tools: RNTN (Socher et al., 2013) etc which involve generation of parse trees which are not available for code mixed text; \u2022 Word Embedding Based Approaches: Word2Vec, Word-RNN may not provide reliable embedding in situations with small amount of highly sparse dataset. \u2022 Surface Feature engineering based approaches: Hashtags, User Mentions, Emoticons etc. may not exist in the data. \fFigure 1: Illustration of the proposed methodology Figure 2: Schematic overview of the architecture. Figure 3: Training accuracy and loss variation. 
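The Subword-LSTM of Section 3.3 (Eqs. 1–4 and Figure 2) can be sketched in PyTorch as below. The embedding size, number of filters, filter length m, pooling width p and LSTM width are illustrative assumptions; only the layer ordering is taken from the text.

```python
import torch
import torch.nn as nn

class SubwordLSTM(nn.Module):
    """Sketch of the Subword-LSTM: char embeddings -> 1-D conv producing
    morpheme-like feature maps (Eq. 1) -> max-pooling (Eq. 2) -> LSTM
    (Eqs. 3-4) -> fully connected sentiment layer."""

    def __init__(self, n_chars, d=128, n_filters=128, m=3, p=2,
                 hidden=128, n_classes=3):
        super().__init__()
        self.embed = nn.Embedding(n_chars, d)                  # Q in R^{d x l}
        self.conv = nn.Conv1d(d, n_filters, kernel_size=m)     # filters H of length m
        self.relu = nn.ReLU()                                  # non-linearity g
        self.pool = nn.MaxPool1d(kernel_size=p)                # sub-word responses y_i
        self.lstm = nn.LSTM(n_filters, hidden, batch_first=True)
        self.fc = nn.Linear(hidden, n_classes)                 # sentiment polarity

    def forward(self, chars):                                  # chars: (batch, l) int ids
        q = self.embed(chars).transpose(1, 2)                  # (batch, d, l)
        f = self.relu(self.conv(q))                            # (batch, n_filters, l-m+1)
        y = self.pool(f).transpose(1, 2)                       # (batch, steps, n_filters)
        _, (h_last, _) = self.lstm(y)                          # h_l: retained information
        return self.fc(h_last[-1])                             # class scores

# Usage sketch: a batch of 2 sentences, 50 characters each, vocabulary of 70 characters.
logits = SubwordLSTM(n_chars=70)(torch.randint(0, 70, (2, 50)))
```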
4.2 Experimental Setup Our dataset is divided into 3 splitsTraining, validation and testing. We \ufb01rst divide the data into randomized 80-20 train test split, then further randomly divide the training data into 80-20 split to get the \ufb01nal training, validation and testing data. As the problem is relatively new, we compare state of the art sentiment analysis techniques (Wang and Manning, 2012; Pang and Lee, 2008) which are generalizable to our dataset. We also compare the results with system proposed by Sharma et al. (2015) on our dataset. As their system is not available publicly, we implemented it using language identi\ufb01cation and transliteration using the tools provided by Bhat et al. (2015) for Hi-En Code Mixed data. The polarity of thus obtained tokens is computed from SentiWordNet (Esuli and Sebastiani, 2006) and Hindi SentiWordNet (Das and Bandyopadhyay, 2010) to obtain the polarity of words, which are then voted to get \ufb01nal polarity of the sentence. The architecture of the proposed system (Subword-LSTM) is described in Figure 2. We compare it with a character-level LSTM (Char-LSTM) following the same architecture without the convolutional and maxpooling layers. We use Adamax (Kingma and Ba, 2014) (a variant of Adam based on in\ufb01nity norm) optimizer to train this setup in an end-to-end fashion using batch size of 128. We use very simplistic architectures because of the constraint on the size of the dataset. As the datasets in this domain expand, we would like to scale up our approach to bigger architectures. The stability of training using this architecture is illustrated in Figure 3. \fMethod Reported In Our dataset SemEval\u2019 13 Accuracy F1-Score Accuracy F1-Score NBSVM (Unigram) (Wang and Manning, 2012) 59.15% 0.5335 57.89% 0.5369 NBSVM (Uni+Bigram) (Wang and Manning, 2012) 62.5% 0.5375 51.33% 0.5566 MNB (Unigram) (Wang and Manning, 2012) 66.75% 0.6143 58.41% 0.4689 MNB (Uni+Bigram) (Wang and Manning, 2012) 66.36% 0.6046 58.4% 0.469 MNB (Tf-Idf) (Wang and Manning, 2012) 63.53% 0.4783 57.82% 0.4196 SVM (Unigram) (Pang and Lee, 2008) 57.6% 0.5232 57.6% 0.5232 SVM (Uni+Bigram) (Pang and Lee, 2008) 52.96% 0.3773 52.9% 0.3773 Lexicon Lookup (Sharma et al., 2015) 51.15% 0.252 N/A N/A Char-LSTM Proposed 59.8% 0.511 46.6% 0.332 Subword-LSTM Proposed 69.7% 0.658 60.57% 0.537 Table 5: Classi\ufb01cation results show that the proposed system provides signi\ufb01cant improvement over traditional and state of art method for Sentiment Analysis in Code Mixed Text Table 6: Output produced a by Hi-En Transliteration Tool 4.3 Observations In the comparative study performed on our dataset, we observe that Multinomial Naive Bayes performs better than SVM(Pang and Lee, 2008) for snippets providing additional validation to this hypothesis given by Wang and Manning (2012). We also observe that unigrams perform better than bigrams and Bag of words performs better than tf-idf in contrast to trends in English, as the approaches inducing more sparsity would yield to poorer results because our dataset is inherently very sparse. The lexicon lookup approach (Sharma et al., 2015) didn\u2019t perform well owing to the heavily misspelt words in the text, which led to incorrect transliterations as shown in Table 6. 4.4 Validation of proposed hypothesis We obtain preliminary validation for our hypothesis that incorporating sub-word level features instead of characters would lead to better performance. 
Our Subword-LSTM system provides an F-score of 0.658 for our dataset, which is signi\ufb01cantly better than Char-LSTM which provides F-score of 0.511. Since we do not have any other dataset in Hi-En code-mixed setting of comparable to other settings, we performed cross-validation of our hypothesis on SemEval\u201913 Twitter Sentiment Analysis dataset. We took the raw tweets character-by-character as an input for our model from the training set of 7800 tweets and test on the SemEval\u201913 development set provided containing 1368 tweets. The results are summarized in Table 5. In all the cases, the text was converted to lowercase and tokenized. No extra features or heuristics were used. \fFigure 4: Visualization of the convolution layer for examples comments from the dataset show that word segments convey sentiment information despite being severely misspelt. 4.5 Visualizing character responses Visualizations in Figure 4 shows how the proposed model is learning to identify sentiment lexicons. We see that different \ufb01lters generally tend to learn mappings from different parts, interestingly showing shifting trends to the right which maybe due to LSTM picking their feature representation in future time steps. The words sections that convey sentiment polarity information are captured despite misspelling in example (i) and (ii). In example (iii), starting and ending phrases show high response which correspond to the sentiment conveying words (party and gift). The severe morpheme stretching in example (iv) also affects the sentiment polarity. 5"
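A plot in the spirit of Figure 4 can be produced by displaying the convolution feature map over character positions. The sketch below uses untrained layers and a hypothetical comment purely to illustrate the plotting step; it is not the authors' visualisation code, and in practice one would reuse the trained model's embedding and convolution layers.

```python
import torch
import torch.nn as nn
import matplotlib.pyplot as plt

# Untrained stand-ins for the embedding and 1-D convolution of the Subword-LSTM sketch.
embed = nn.Embedding(70, 128)
conv = nn.Conv1d(128, 16, kernel_size=3)

sentence = "party abhi baaki hai"                   # hypothetical code-mixed comment
char_ids = torch.tensor([[ord(c) % 70 for c in sentence]])

with torch.no_grad():
    fmap = torch.relu(conv(embed(char_ids).transpose(1, 2)))[0]   # (filters, positions)

plt.imshow(fmap.numpy(), aspect="auto", cmap="viridis")
# Each column covers a window of 3 characters; labelling by the window's first char.
plt.xticks(range(fmap.shape[1]), list(sentence)[:fmap.shape[1]])
plt.xlabel("input characters")
plt.ylabel("convolution filter")
plt.title("Filter responses over character positions")
plt.show()
```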
+ }
+ ],
+ "Adhiraj Ghosh": [
+ {
+ "url": "http://arxiv.org/abs/2110.07933v2",
+ "title": "Relation Preserving Triplet Mining for Stabilising the Triplet Loss in Re-identification Systems",
+ "abstract": "Object appearances change dramatically with pose variations. This creates a\nchallenge for embedding schemes that seek to map instances with the same object\nID to locations that are as close as possible. This issue becomes significantly\nheightened in complex computer vision tasks such as re-identification(reID). In\nthis paper, we suggest that these dramatic appearance changes are indications\nthat an object ID is composed of multiple natural groups, and it is\ncounterproductive to forcefully map instances from different groups to a common\nlocation. This leads us to introduce Relation Preserving Triplet Mining (RPTM),\na feature-matching guided triplet mining scheme, that ensures that triplets\nwill respect the natural subgroupings within an object ID. We use this triplet\nmining mechanism to establish a pose-aware, well-conditioned triplet loss by\nimplicitly enforcing view consistency. This allows a single network to be\ntrained with fixed parameters across datasets while providing state-of-the-art\nresults. Code is available at https://github.com/adhirajghosh/RPTM_reid.",
+ "authors": "Adhiraj Ghosh, Kuruparan Shanmugalingam, Wen-Yan Lin",
+ "published": "2021-10-15",
+ "updated": "2022-11-14",
+ "primary_cat": "cs.CV",
+ "cats": [
+ "cs.CV"
+ ],
+ "main_content": "Introduction Re-identi\ufb01cation is the process of identifying images of the same object taken under different conditions. One of the main challenges of reID is pose-induced appearance changes [2, 9]. Not only does object appearance change with pose, different objects often look similar when viewed from the same pose, also known as inverse-variability. This paper suggests a new interpretation of the inversevariability problem, one with the potential to signi\ufb01cantly improve the effectiveness of reID algorithms. Although we focus on re-identi\ufb01cation, the underlying principles developed here are not restricted to this task and have the potential to impact a wide range of other computer vision problems [1, 21, 30, 35]. Current reID frameworks deploy representation and metric learning methodologies in DMT DMT RPTM Figure 1: Comparing the features learned by DMT [13], a current state-of-the-art, with our proposed Triplet Mining scheme. Features correspond to the \ufb01rst four IDs of Veri776 [25]. The distance preserving UMAP projection shows the RPTM feature transform is more intuitive. the attempt to learn embeddings that map semantically similar instances to relatively nearby locations; and semantically dissimilar images to relatively distant locations. This is typically achieved through a metric loss function such as triplet loss [37], which encourages a reference (anchor) input to be more similar to a positive (truthy) input than to a negative (falsy) input. The number of triplet combinations tend to grow polynomially with the number of instances in a dataset, as detailed by Hermans et al. [15]; however, most triplet combinations are redundant. This has led to the development of triplet mining, whose aim is to identify the most important triplets in a given sample set. While triplet mining is ubiquitous in reID algorithms [2, 13, 15, 39], it has an innate vulnerability. Consider a hypothetical dataset containing instances of apple-the-phone and apple-the-fruit, both of which are classi\ufb01ed as Apple. The dataset also has instances of phones made by Samsung, classi\ufb01ed as Samsung-phone. This dataset will have many dif\ufb01cult triplets, for example, applethe-phone (anchor), apple-the-fruit (positive) and SamsungarXiv:2110.07933v2 [cs.CV] 14 Nov 2022 \f(a) Without RPTM (b) With RPTM Figure 2: Loss landscape visualisation of a ResNet-50 trained with SGD using Triplet Loss on Veri-776 with/without Relation Preserving Triplet Mining. RPTM demonstrates smoother loss surfaces, improved model generalisation and a wider minima, thus allowing better optimisation during training. phone (negative), which triplet-mining techniques are encouraged to focus on. However, training with such triplets is counter-productive as they attempt to ensure that instances of apple-the-phone are mapped closer to instances of applethe-fruit than to instances of Samsung-phone. Such a mapping mechanism violates the natural appearance relation between objects, and it can be seen that current metric learning systems enforce vastly different views of the same object to be coincident in feature space. It is unlikely that models trained on this hypothesis generalise adequately. A similar phenomenon occurs in reID, where most datasets [23, 25, 34] group instances by ID. However, the appearance of a person or vehicle\u2019s front, rear and sides pro\ufb01les are very different from each other and they appear to belong to physically different entities. 
This creates fallacious anchor-positive pairs, where the instances chosen to be anchor and positive do not share a natural group [2]. This fallacy in the triplet mining scheme can be further realised considering the fact that in [37], triplet loss was de\ufb01ned for face detection, where datasets only have the front view of the face, hence all anchor-positive pairs are semantically meaningful. Due to this, triplet mining does not generalise well to reID. This problem has been recognised in recent reID works [9, 19, 24, 28, 39], who incorporate pose awareness into the network, and in metric learning [35], in which latent characteristics shared within and between classes are explicitly learnt. Although this approach can be effective, it complicates network training and incurs an additional burden of training a new, dataset-speci\ufb01c, pose-aware layer. We suggest a simple alternative, where feature matching [5, 26] is leveraged to discover natural groupings. Therefore, we propose Relation Preserving Triplet Mining (RPTM), a triplet mining scheme that respects natural appearance groupings. We further de\ufb01ne our solution as Implicitly Enforced View Consistency, which we de\ufb01ne as the process of exploiting internal, natural groupings within a class and mapping instances with the same view together as a semantic entity, to overcome intra-class separability. These groupings follow natural patterns referenced by semantics [21], and tend to be pose-related in the context of reID. Here, RPTM implicitly enforces pose-aware triplet mining, which prevents different poses from being mapped onto one another. This improves the conditioning of the triplet-cost, allowing for the same training parameters to be employed across a variety of different datasets. The resultant feature embeddings provide better reID results and are more intuitive, as shown in Figure 1. We observe that past triplet mining processes fail in terms of pose awareness and this may lead to poor ranking results, whereas RPTM not only shows pose awareness, better conditioned triplet mining also ensures accurate ranking results. Our experiments are structured to demonstrate how a coherent triplet mining scheme can eliminate the largest vulnerability of using triplet loss in reID, without the requirement of key-point labels and pose estimation pipelines. One indicator of the effectiveness of our method is observing the loss optimisation landscape during training. Due to the smooth loss landscape for RPTM, shown in \ufb01gure 2, we demonstrate how RPTM cleans up the triplet mining process with a triplet \ufb01ltration step and prevents erroneous lo\fcal minimas. Thus, when trained with RPTM, models with larger parameters can optimise just as fast as smaller networks, which serves our main goal of achieving impressive retrieval results with self-imposed constraints on compute power as well as generalising parameter settings across tasks and datasets. As RPTM is robust to \ufb02uctuations of loss landscape, training deeper networks with SGD on a simple cost function is more accessible for object retrieval tasks. In summary, our paper contributions are: 1. We explain how traditional triplet mining methods are ill-conditioned because it does not take into account natural groupings; 2. We propose a feature guided triplet mining scheme that we term Relation Preserving Triplet Mining (RPTM); 3. We show RPTM is well-conditioned enough to permit the use of constant training parameters across datasets and tasks. 
The resultant network is simultaneously capable of state-of-the-art in vehicle reID and competitive results for person reID. 2. Related Works Re-identi\ufb01cation. The demand for urban surveillance applications has led to a surge of interest in person and vehicle re-identi\ufb01cation. Challenge benchmarks such as VehicleID [23], Veri-776 [25], DukeMTMC [34] and others have been established; and many new algorithms have been proposed [13, 19, 20, 24, 38, 46]. In reID, many algorithms achieve good results by estimating vehicular pose. Notably, Tang et al. [39] created a synthetic data set for pose estimation and Meng et al. [28] used a parser model to split vehicles into four parts for pose-aware feature embedding. Recently, Vision Transformers (ViT) for reID [14, 42, 50] were proposed for attention learning and [10] addressed person reID with noisy labels. We suggest that the root problem encountered by most of these techniques lies in their de\ufb01nition of triplet loss. By replacing traditional triplet losses with our RPTM technique, we show that it is possible to achieve state-of-the-art results by minimizing a simple cost function. This stands out from the trend towards ever more complex reID techniques. Triplet Loss. The triplet loss was \ufb01rst introduced in the context of face identi\ufb01cation [37]. Since then, it has undergone many re\ufb01nements [2, 15, 44, 45]. Such triplet-based formulations implicitly assume that the given IDs correspond to meaningful groups. We suggest that this assumption is often wrong and that triplets should be de\ufb01ned with respect to naturally occurring groups rather than the given labels. This perspective on triplet loss differs signi\ufb01cantly from that used in most papers. To our knowledge, the research most similar to ours is Bai et al. [2] who acknowledge the importance of naturally occurring groups within an ID. However, Bai et al. attempts to use the groups to force tighter mappings of an ID, \ufb01ghting rather than harnessing the natural relationships. Another problem for clustering based works like Bai et al. [2]\u2019s, is that variations often have no naturally occurring cluster boundaries. This is not a problem for RPTM which de\ufb01nes relations in a pairwise manner, rather than on the basis of shared clusters. Feature Matching. RPTM uses feature matching to help establish triplets. Feature matching is a well-established \ufb01eld in computer vision, whose goal is to match key points between image pairs. Classic feature matching works include SIFT [26], SURF [3], ORB [36], etc. Recent developments include [4] for exploiting matching context information, and [27] for mismatch removal between two features sets. In this paper, we employ Grid-Based Motion Statistics (GMS) [5] as our feature matcher of choice. This is a newer algorithm which incorporates match coherence [22] to facilitate key-point matching. GMS outperforms most classic techniques while also being much faster. 3. Why Triplet Loss? 3.1. Neural Networks as Embedding Functions Much of computer vision can be interpreted as an attempt to map image instances to a semantically meaningful embedding. Thus, if xk represents an image instance and yk its associated feature, the transformation from xk to yk can be denoted by yk = f(xk), where f : R3\u00d7w\u00d7h \u2192Rd; w \u00d7 h denotes image dimension; and d represents the embedding space\u2019s dimensions. In this scheme, the embedding function f(.) 
is learnt by minimising the cross-entropy loss Eent = m X k=1 Lent(xk), (1) where m denotes the total number of training images. Minimising the cost in Eq. 1 provides an embedding that maximises classi\ufb01cation accuracy. However, this does not ensure that the embedding is semantically meaningful. The retrieval problem requires an embedding in which semantically similar instances are mapped close to each other, leading to the development of triplet loss [37]. 3.2. The Triplet Loss A triplet loss is de\ufb01ned with respect to three image instances: Anchor (randomly chosen instance); Positive (instance that shares a common ID with the anchor); Negative (instance whose ID is different from the anchor). We denote these instances xa, xp and xn, respectively. Given the anchor, positive and negative, the triplet loss is de\ufb01ned as [37]: Ltri(xa, xp, xn) = max(0, dap \u2212dan + \u03b1), (2) where \u03b1 is the desired margin separation between positive and negative instance, dap = \u2225f(xa) \u2212f(xp)\u2225and dan = \fLabeled ID 1 Labeled ID 1 Labeled ID 2 Labeled ID 3 Labeled ID 2 Labeled ID 3 True ID 1 True ID 2 True ID 3 True ID 1 True ID 2 True ID 1 True ID 2 True ID 3 True ID 3 True ID 4 Image pairs with natural affinity Regions where traditional and relational triplets forbid anchor-positive pairs Regions where relational triplets forbid anchorpositive pairs (a) Af\ufb01nity matrix Anchor Image 15 matches 82 matches 970 matches Positive (b) Anchor-positive selection scheme Figure 3: Representational Schematic for Relation Preserving Triplet Mining. In \ufb01gure 3a, each ID contains a number of naturally occurring groups. Relational triplets are based on natural groups rather than IDs, thus preventing pathological anchor-positives. In \ufb01gure 3b, observe that the positive shares clear similarities with the anchor(indicating they share a common natural group) but is not a near-duplicate. \u2225f(xa) \u2212f(xn)\u2225. The \ufb01nal triplet-cost is computed by summing the individual triplet losses: Etri = t X c=1 Ltri(xac, xpc, xnc), (3) where t is the total number of triplets. In general, triplet costs are not used in isolation. Instead, they are combined with the cross-entropy cost from Eq. 1, leading to the \ufb01nal cost function: E = \u03bbentEent + \u03bbtriEtri, (4) where \u03bbent and \u03bbtri control the weights given to the crossentropy loss and triplet-cost respectively. 4. Relation Preserving Triplet Mining To prevent training pipelines from stagnating, it is important to implement a good triplet mining scheme. Triplet mining is part of a larger framework which views features as the key to machine learning. For example, NetVLAD [1] and many other domain transfer works, show that adapting features signi\ufb01cantly improves performance. Somewhat similarly, knowledge distillation [30] tries to compress unwieldy networks into more compact features for practical deployment. We focus on triplet mining, as most related works in reID use some form of triplet loss and also to effectively highlight Implicitly Enforced View Consistency. Na\u00a8 \u0131vely incorporating every possible triplet into the loss yields poor results [15]. Instead, training algorithms employ triplet-mining, a process which aims to incorporate only the most relevant triplets into the triplet-cost. Unfortunately, there is no consensus on how relevance can be measured; thus, triplet mining relies on heuristics. The two most popular heuristics are: hard-negative mining and semi-hard negative mining. 
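Before turning to these two heuristics, note that Eqs. (2)–(4) translate directly into a short PyTorch sketch. The margin, loss weights and tensor shapes below are placeholders rather than the paper's settings.

```python
import torch
import torch.nn.functional as F

def triplet_loss(f_a, f_p, f_n, alpha=0.3):
    """Eq. (2): max(0, d_ap - d_an + alpha) for a batch of embedded triplets."""
    d_ap = torch.norm(f_a - f_p, dim=1)
    d_an = torch.norm(f_a - f_n, dim=1)
    return torch.clamp(d_ap - d_an + alpha, min=0.0)

def reid_loss(logits, labels, f_a, f_p, f_n,
              lambda_ent=1.0, lambda_tri=1.0, alpha=0.3):
    """Eq. (4): weighted sum of the cross-entropy cost (Eq. 1) and the triplet
    cost (Eq. 3, summed over the mined triplets). Weights here are placeholders."""
    e_ent = F.cross_entropy(logits, labels)
    e_tri = triplet_loss(f_a, f_p, f_n, alpha).sum()
    return lambda_ent * e_ent + lambda_tri * e_tri

# Usage sketch with illustrative shapes: 24 triplets, 2048-dim embeddings, 576 IDs.
f_a, f_p, f_n = (torch.randn(24, 2048) for _ in range(3))
logits, labels = torch.randn(24, 576), torch.randint(0, 576, (24,))
loss = reid_loss(logits, labels, f_a, f_p, f_n)
```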
Hard-negative mining focusses on triplets whose negatives are very similar to the anchor. Semi-hard negative mining shifts the focus from the hardest negatives to negatives close to the decision boundary. Both heuristics seem sensible and often perform well; however, closer inspection suggests something may be amiss. Let us perform a thought experiment where we assign the IDs A and B, to similar car models. Hard or semi-hard mining \ufb01nds the most confusing triplets, leading to the following triplet: front of car A as anchor, rear of car A as positive, and front of car B as negative. The triplet is indeed very hard; however, its incorporation into the training cost is counter-productive. This is because such a triplet encourages embedding mapping the rear of A to the front of A. The embedding is so counter-intuitive, it is unlikely to generalise well. To avoid such pathological cases, we introduce relational triplets, which address the problem of intraclass separability with greater attention than other methods. 4.1. Relational Triplets Relational triplets change the triplet de\ufb01nition from one based on human assigned IDs to one based on naturally occurring groups. Formally, we denote the set of training images as S = {x1, x2, \u00b7 \u00b7 \u00b7 , xK}. We hypothesise that these images are members of naturally occurring (and possibly overlapping) subsets. The set of subsets is denoted by N = {Sm}, where S = [ Sm\u2208N Sm. (5) \fIBN GMS classifier Final FC layer ID Loss Triplet Loss Re-ID Loss Relational Matrix Input Image Input Set 1 1 2 2 . . . . . . . . m m Feature Embedding & Extraction Feature Matcher Conv Block Squeeze Excitation Scale Channel-wise multiplication Res Residual Block Global Avg Pool GAP Figure 4: Schematic of a re-identi\ufb01cation network deploying Relation Preserving Triplet Mining. The RPTM module includes Instance-Batch Normalisation (IBN) and Squeeze-Excitation (SE) to reduce channel inter-dependencies. The relational matrix is estimated using GMS matches and is used for triplet selection. We use the relational indicator. C(xi, xj) = ( 1, if xi, xj share a subset in N, 0, otherwise. (6) to denote whether two instances share a natural subset. A relational triplet is one where the anchor-positive pair shares a common natural subset, while the negative does not. C(xa, xp) = 1, C(xa, xn) = 0, C(xp, xn) = 0. (7) Traditional triplets are a special case of relational triplets, where the given IDs mirror the natural subsets. This is not the case in reID, as we explained in the thought experiment and through the relational diagram in Figure 3a. Pathological triplets arise when anchor-positive pairs do not have natural af\ufb01nity (share a common group). Observe that the traditional ID based triplet permits pathological anchorpositive pairs. In reID, the natural subsets likely correspond to object poses. This creates the possibility for identifying such subsets using a feature matching algorithm. The next section shows how this can be achieved. 4.2. Mining the Relation Preserving Triplets GMS [5] is a modern feature matcher that uses coherence to validate hypothesised feature matches. The coherence scheme assumes that a true match hypothesis will be strongly supported by many other match hypotheses between neighbouring region pairs, while a false match hypothesis will not. The coherence-based validation is notably better than the traditional ratio test [26]. 
This allows GMS to reliably match features across signi\ufb01cant viewpoint changes while simultaneously ensuring few matches between image pairs with nothing in common. As a result, the presence of GMS matches between image pairs provides a good approximation of the relational indicator in Eq. 5. GMS is quite effective in reID systems to quantify the innate relation between images and is crucial in establishing implicitly enforced view consistencies. While GMS has few errors, errors do occur. To ensure an anchor-positive pair has a relational indicator of one, we set the positive instance of each anchor to be the image instance whose number of GMS matches with the anchor is closest to the threshold \u03c4. Here, we accept that setting similar anchor-positive pairs leads to poorer training. Hence, we use a middle-ground approach for anchor-positive selection, which we call RPTMmean, in which \u03c4 is set as the average number of GMS matches in the set of nonzero pairwise GMS matches between the anchor and all other images. More formally, for two images, xi, xj we predict that the natural relational indicator is true, C(xi, xj) = 1, if the number matches between them exceed \u03c4. The above provides a semi-hard positive mining, that ensures anchor-positive pairs satisfy the relational indicator in Eq. 5, while also ensuring that the positive differs significantly from the anchor. An example is shown in Figure 3b.We de\ufb01ne negatives using batch hard-triplet mining [15]. If Sb = {xj} denotes the set of instances in a batch that do not share an ID with xa, the negative is xn = argmin xj\u2208Sb (\u2225(f(xa) \u2212f(xj)\u2225) . (8) \fID: 96 ID 96 96 96 117 96 117 117 113 117 113 ID 96 96 96 96 96 113 113 113 96 113 R1 R2 ID 35 30 35 30 38 30 38 30 30 35 ID 38 38 38 38 38 38 30 38 30 30 ID: 38 R1 R2 Figure 5: Qualitative retrieval results for bad targets without RPTM(R1) and with RPTM(R2). Correct identi\ufb01cations are outlined in green; wrong ones are outlined in red. RPTM clearly aligns backbone models with better pose awareness and provides \ufb01ne-grained attention. Observe that the triplets de\ufb01ned in this manner satisfy Eq. 7, making them relation preserving triplets. Given such triplets, the \ufb01nal embedding can be obtained by minimising the cost function in Eq. 4. As evidenced by the mining strategy, RPTM allows for an intrinsic understanding of the viewpoint and pose without hard coded pose estimation. 5. Implementation Details A schematic of the network architecture is provided in Figure 4. In this section we discuss the model layout, elaborating on the comparative feature matching pipeline in Section 5.1 and the model structure with RPTM in Section 5.2. To test and highlight the universality of RPTM and its ability to generalise the training pipeline due to its novel triplet mining scheme, we put limitations on network and parameter tuning across all datasets. 5.1. Feature Matching As discussed before, we use GMS feature matching to guide our triplet-mining process, in order to implement semi-hard positive mining. In theory, we need to establish GMS matches between an anchor and every other image in the dataset. In practise, we use image IDs as guides to the natural groupings and restrict the matching to only images that share a common ID with the anchor. This greatly reduces computational cost in triplet mining. Feature matching is performed on images that have been resized to (224, 224). 
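Combining the τ-based positive selection above with the batch-hard negative of Eq. (8) gives the following mining sketch, assuming a precomputed m × m relational matrix of pairwise GMS match counts (the exact matching parameters used to build it are given next). Variable names are illustrative.

```python
import numpy as np

def select_positive(anchor, ids, gms_matches):
    """RPTM_mean: among images sharing the anchor's ID, pick the one whose GMS
    match count is closest to tau, the mean of the non-zero match counts."""
    same_id = np.where((ids == ids[anchor]) & (np.arange(len(ids)) != anchor))[0]
    counts = gms_matches[anchor, same_id]
    tau = counts[counts > 0].mean()                 # threshold from non-zero matches
    return same_id[np.argmin(np.abs(counts - tau))]

def select_negative(anchor, batch_idx, ids, features):
    """Eq. (8): batch-hard negative -- the closest embedding in the batch whose
    ID differs from the anchor's."""
    cand = [j for j in batch_idx if ids[j] != ids[anchor]]
    dists = np.linalg.norm(features[cand] - features[anchor], axis=1)
    return cand[int(np.argmin(dists))]

# Usage sketch with toy data: 6 images, 2 IDs, random GMS counts and embeddings.
ids = np.array([0, 0, 0, 1, 1, 1])
gms_matches = np.random.randint(0, 200, size=(6, 6))
features = np.random.randn(6, 128)
a = 0
p = select_positive(a, ids, gms_matches)
n = select_negative(a, range(6), ids, features)
```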
The GMS feature matching parameters are: 10,000 ORB features whose orientation parameter is set to true and nearest neighbours are identi\ufb01ed with the brute-force hamming distance. All other parameters are set according to the guidelines reported by [5]. After matching, the number of matches between image pairs is stored in a relational matrix m \u00d7 m, where m is the number of training images. 5.2. Neural Network For fair comparison of our results with established benchmarks, we chose ResNet-50 and ResNet-101 pretrained on ImageNet as our backbone. Our RPTM module includes instance-batch-normalization and a squeezeexcitation layer[16]. The weights of this network are trained by minimising the loss function in Eq. 4. This network is trained using triplets de\ufb01ned through our Relation Preserving Triplet Mining (RPTM) in Section 4.2. The images are resized to (240,240) for vehicle reID and (300,150) for person reID. Data augmentation is applied, with random \ufb02ipping, random padding, random erasing and colour jitter (randomly changing contrast, brightness, hue and saturation) all activated. Stochastic Gradient Descent(SGD) is used as the optimiser for the model. The initial learning rate is initialised at 0.005 and is set to decay by a factor of 0.1 every 20 epochs. The model is trained for 80 epochs with a batch size of 24. Training parameters are \ufb01xed for all datasets. 1. Finally, Figure 5 provides qualitative comparisons showing that RPTM\u2019s top-k-ranked retrievals are signi\ufb01cantly better than its backbone network (we showcase top-k results alternatively (top-1, top-3...top19). We focus on demonstrating the quality of gallery image retrieval for query samples by RPTM by comparing top-k retrieval results with and without the RPTM pipeline. 6. Experiments 6.1. Datasets VehicleID [23] allows us to test RPTM\u2019s scalability by offering multiple, progressively larger (and harder) testsets. We evaluate our algorithm with 800, 1600 and 2400 labels for testing. Veri-776 [25] is a widely used benchmark with a diverse range of viewpoints for each vehicle and is designed to provide more constrained but highly realistic conditions. DukeMTMC [34] is a person re-identi\ufb01cation benchmark with 1,404 distinct classes. While our focus is vehicle reID, we include this benchmark to show our algorithm can generalise to other problems. 6.2. Evaluation Metrics Rankings are scored according to the protocols suggested in [23, 25] and all methods are reported with mean 1These parameters are signi\ufb01cantly less computationally demanding than those used by recent state-of-the-art models [10, 14, 33, 41, 50] \fModel Small (query size=800) Medium (query size=1600) Large (query size=2400) mAP r=1 r=5 mAP r=1 r=5 mAP r=1 r=5 C2F-Rank [11] 63.50 61.10 81.70 60.00 56.20 76.20 53.00 51.40 72.20 AGNet [47] 76.06 73.14 86.25 73.39 70.77 81.75 71.75 69.10 80.40 ANet [31] 86.00 97.40 81.90 95.10 79.60 92.70 VANet [9] 88.12 97.29 83.10 95.14 80.35 92.97 Smooth-AP [6] 94.90 97.60 93.30 96.40 91.90 96.20 RPTM (ResNet-50) 82.30 95.00 96.70 79.90 92.50 96.20 78.60 92.10 95.70 QD-DLP [51] 76.54 72.32 92.48 74.63 70.66 88.90 68.41 64.14 83.37 AAVER [18] 74.69 93.82 68.62 89.95 63.54 85.64 VehicleNet [48] 83.64 96.86 81.35 93.61 79.46 92.04 RPTM (ResNet-101) 84.80 95.50 97.40 81.20 93.30 96.50 80.50 92.90 96.30 Table 1: Comparison with state-of-the-art methods on VehicleID. RPTM provides the best retrieval results in all three test sets, with notably better performance in the large test set. 
Model mAP r = 1 r = 5 SPAN [7] 68.90 94.00 97.60 PAMTRI [39] 71.88 92.86 96.97 PVEN [28] 79.50 95.60 98.40 TBE [38] 79.50 96.00 98.50 RPTM (ResNet-50) 79.90 96.10 98.50 GAN+LSRO\u2217[43] 64.78 88.62 94.52 SAVER\u2217[19] 82.00 96.90 97.70 RPTM (ResNet-50) \u2217 86.40 96.70 98.00 CAL [33] 74.30 95.40 97.90 TransReID [14] 80.60 96.80 \u2013 RPTM (ResNet-101) 80.80 96.60 98.90 AAVER\u2217[18] 66.35 90.17 94.34 DMT\u2217[13] 82.00 96.90 \u2013 VehicleNet\u2217[48] 83.41 96.78 \u2013 Strong Baseline\u2217[17] 87.10 97.00 \u2013 RPTM (ResNet-101) \u2217 88.00 97.30 98.40 Table 2: Comparison with the state-of-the-art results on the Veri-776 dataset. The \u2217indicates the usage of re-ranking. average precision (mAP) and Cumulative Matching Characteristics(CMC). For the Veri-776 and DukeMTMC datasets, we also use re-ranking [49], which re\ufb01nes the \ufb01nal rankings by considering the k-reciprocal nearest-neighbours of both the query and retrieved images, effectively improving upon the pairwise distance result that is used to quantify mAP and top-k ranking accuracies. Re-ranking is not adopted for VehicleID because there is often only one true match ID in the gallery set [18]. We split past works based on the complexity of the backbone network, with our results on ResNet-50 and ResNet-101 backbones. 6.3. Comparison with State-of-the-art VehicleID: Table 1 shows that RPTM achieves state-ofthe-art results on the challenging VehicleID dataset, indicatModel mAP r = 1 r = 5 P2-Net [12] 73.10 86.50 93.10 GPS [29] 78.70 88.20 95.20 PNL [10] 79.00 89.20 \u2013 SCSN [8] 79.00 91.00 \u2013 RPTM (ResNet-50) 80.20 91.40 95.80 Top-DB-Net\u2217[32] 88.60 90.90 \u2013 NFormer\u2217[41] 83.40 89.50 \u2013 st-reID\u2217[40] 92.70 94.50 96.80 RPTM (ResNet-50)* 87.50 92.30 95.20 PAT\u2217[20] 78.20 88.80 \u2013 LDS\u2217[46] 91.00 92.90 \u2013 RPTM (ResNet-101)* 89.20 93.50 96.10 Table 3: Comparison on the DukeMTMC benchmark. RPTM provides competitive results even though it is not tuned for person reID. \u2217indicates re-ranking. ing RPTM\u2019s scalability across vehicle datasets. Although not exceeding Smooth-AP[6], table 4 shows a drop in performance by Smooth-AP on Veri-776 and DukeMTMC. Veri-776: As shown in table 2, RPTM surpasses the recent state-of-the-art vehicle reID models. These results are very respectable, especially if we consider the fact that well-performing algorithms like VehicleNet [48] uses supplementary data for training. We also edge out Strong Baseline [17], which uses deeper backbones and larger images. In addition, RPTM\u2019s training scheme is very simple, as it only requires gradient descent on a well-de\ufb01ned loss. DukeMTMC:Table 3 shows RPTM achieves competitive results at person reID, despite training parameters tuned to vehicle datasets. With the exception of changing the image size to account for the aspect ratio of input images, no changes were made to the RPTM network or training parameters. These results are respectable for a network whose training parameters are tuned for a different task. \fDiscussion: Table 1, 2 and 3, show that incorporating RPTM to feature learning techniques make them more effective at re-identi\ufb01cation. Performance improvements are especially notable on more dif\ufb01cult datasets like VehicleID and harsher evaluation metrics (mAP). These performances are quite remarkable when we take into account that RPTM uses constant training parameters for all three datasets. 
Most deep-learning algorithms require parameters to be tweaked from dataset to dataset, and RPTM\u2019s capability in this respect is an indication that relational aware triplet choice makes the triplet losses better conditioned. To demonstrate the challenge of maintaining constant training parameters, we trained Smooth-AP [6] on two other datasets, while using the training parameters of Table 1, as shown in Table 4. We also acknowledge the use of Visual Transformers (ViT) in TransReID by He et al. [14], demonstrating impressive results, albeit using camera embeddings and viewpoint labelling. Although RPTM uses universal parameters that are compliant to low compute requirements, we still achieve state-of-the-art results compared to transformer-based ReID models. As an additional experiment, using the increased parameter settings de\ufb01ned in TransReID, we further improve our retrieval results, achieving an mAP of 82.5%(w/o re-ranking) on Veri-776. Method mAP r = 1 r = 5 Smooth-AP (Veri-776) 79.40 91.10 94.20 RPTM (Veri-776) 88.00 97.30 98.40 Smooth-AP (DukeMTMC) 65.70 79.90 88.40 RPTM (DukeMTMC) 89.20 93.50 96.10 Table 4: Performance of Smooth-AP [6] (ResNet-101 backbone) on Veri-776 and DukeMTMC, with re-ranking. 6.4. Ablation Study Image Size. We begin by investigating how image size impacts re-identi\ufb01cation. Table 5a shows that the evaluation metrics improve as the image size increases, a \ufb01nding that is mirrored by many other reID algorithms, which often seek to use the largest possible image. However, we \ufb01nd that performance peaks at (240, 240) on Veri-776 and VehicleID, which validates RPTM\u2019s ability to achieve state-of-the-art results at lower resolutions compared to other benchmarks. Threshold for Positive Selection Section 4.2 suggests positive images are chosen using a threshold, \u03c4, which is the mean number of non-zero matching results. We denote this scheme RPTMmean (semi-hard positive mining). There are a number of alternatives. One possibility is to \ufb01x \u03c4 on a low number of matches, such as 10. We term this scheme RPTMmin. The scheme ensures anchor-positive pairs are not near duplicates and corresponds to hard posiModel Veri-776 VehicleID(small) mAP r=1 mAP r=1 RPTM128\u00d7128 56.5 84.5 72.5 89.0 RPTM160\u00d7160 74.8 92.4 80.5 91.8 RPTM224\u00d7224 85.1 95.2 83.1 92.9 RPTM240\u00d7240 88.0 97.3 84.8 95.5 (a) Image size ablation Model Veri-776 VehicleID(small) mAP r=1 mAP r=1 RPTMmin 86.3 95.9 82.1 93.9 RPTMmean 88.0 97.3 84.8 95.5 RPTMmax 82.2 95.6 79.8 93.1 (b) Thresholding ablation Table 5: (a) ReID performance with increasing image size. mAP and rank-1 increase with image size until (240, 240), after which performance plateaus. (b) Comparing positive selection thresholds. RPTMmin,mean,max correspond to hard positive, semi-hard positive and easy positive-mining. tive mining. The drawback is a vulnerability to occasional matching errors. Another possibility is to set \u03c4 to the largest number of matches that the anchor image has. We term this RPTMmax. This eliminates any vulnerability to GMS matching errors but sacri\ufb01ces the positive image\u2019s distinctiveness. This corresponds to easy positive mining. Table 5b indicates that RPTMmean has the best performance; hence, it is adopted as our default mining scheme. 7."
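For completeness, the GMS matching step of Section 5.1 (10,000 ORB features with orientation enabled, brute-force Hamming matching, images resized to 224 × 224, and matching restricted to image pairs that share an ID) can be sketched with OpenCV's contrib implementation of GMS. This assumes opencv-contrib-python is installed; the loop structure and the withRotation flag are our assumptions, not the authors' code.

```python
import cv2
import numpy as np

orb = cv2.ORB_create(nfeatures=10000)          # 10,000 ORB features, as stated
bf = cv2.BFMatcher(cv2.NORM_HAMMING)           # brute-force Hamming matcher

def gms_match_count(path_a, path_b, size=(224, 224)):
    """Number of GMS-verified matches between two resized images."""
    img_a = cv2.resize(cv2.imread(path_a), size)
    img_b = cv2.resize(cv2.imread(path_b), size)
    kp_a, des_a = orb.detectAndCompute(img_a, None)
    kp_b, des_b = orb.detectAndCompute(img_b, None)
    matches = bf.match(des_a, des_b)
    gms = cv2.xfeatures2d.matchGMS(size, size, kp_a, kp_b, matches,
                                   withRotation=True)   # orientation enabled
    return len(gms)

def build_relational_matrix(paths, ids):
    """m x m matrix of GMS match counts; only same-ID pairs are matched,
    all other entries stay zero, as described in Section 5.1."""
    m = len(paths)
    rel = np.zeros((m, m), dtype=np.int32)
    for i in range(m):
        for j in range(i + 1, m):
            if ids[i] == ids[j]:
                rel[i, j] = rel[j, i] = gms_match_count(paths[i], paths[j])
    return rel
```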
+ }
+ ]
+ },
+ "edge_feat": {}
+ }
+}
\ No newline at end of file