diff --git "a/related_34K/test_related_short_2405.01373v1.json" "b/related_34K/test_related_short_2405.01373v1.json" new file mode 100644--- /dev/null +++ "b/related_34K/test_related_short_2405.01373v1.json" @@ -0,0 +1,1395 @@ +[ + { + "url": "http://arxiv.org/abs/2405.01373v1", + "title": "ATOM: Attention Mixer for Efficient Dataset Distillation", + "abstract": "Recent works in dataset distillation seek to minimize training expenses by\ngenerating a condensed synthetic dataset that encapsulates the information\npresent in a larger real dataset. These approaches ultimately aim to attain\ntest accuracy levels akin to those achieved by models trained on the entirety\nof the original dataset. Previous studies in feature and distribution matching\nhave achieved significant results without incurring the costs of bi-level\noptimization in the distillation process. Despite their convincing efficiency,\nmany of these methods suffer from marginal downstream performance improvements,\nlimited distillation of contextual information, and subpar cross-architecture\ngeneralization. To address these challenges in dataset distillation, we propose\nthe ATtentiOn Mixer (ATOM) module to efficiently distill large datasets using a\nmixture of channel and spatial-wise attention in the feature matching process.\nSpatial-wise attention helps guide the learning process based on consistent\nlocalization of classes in their respective images, allowing for distillation\nfrom a broader receptive field. Meanwhile, channel-wise attention captures the\ncontextual information associated with the class itself, thus making the\nsynthetic image more informative for training. By integrating both types of\nattention, our ATOM module demonstrates superior performance across various\ncomputer vision datasets, including CIFAR10/100 and TinyImagenet. Notably, our\nmethod significantly improves performance in scenarios with a low number of\nimages per class, thereby enhancing its potential. Furthermore, we maintain the\nimprovement in cross-architectures and applications such as neural architecture\nsearch.", + "authors": "Samir Khaki, Ahmad Sajedi, Kai Wang, Lucy Z. Liu, Yuri A. Lawryshyn, Konstantinos N. Plataniotis", + "published": "2024-05-02", + "updated": "2024-05-02", + "primary_cat": "cs.CV", + "cats": [ + "cs.CV" + ], + "label": "Original Paper", + "paper_cat": "Distillation", + "gt": "Coreset Selection. Coreset selection, an early data-centric approach, aimed to efficiently choose a representative subset from a full dataset to enhance downstream training performance and efficiency. Various methods have been proposed in the past, including geometry-based approaches [1, 10, 55, 57, 66], loss-based techniques as mentioned in [46, 59], decision-boundary-focused methods [16, 42], bilevel optimization strategies [32, 33], and gradient-matching algorithms outlined in [31, 43]. Notable among them are Random, which randomly selects samples as the coreset; Herding, which picks samples closest to the cluster center; K-Center, which selects multiple center points to minimize the maximum distance between data points and their nearest center; and Forgetting, which identifies informative training samples based on learning difficulties [4, 6, 55, 59]. While these selection-based methods have shown moderate success in efficient training, they inherently possess limitations in capturing rich information. 
Since each image in the selected subset is treated independently, they lack the rich features that could have been captured if the diversity within classes had been considered. These limitations have motivated the emergence of dataset distillation within the field. Dataset Distillation. Dataset distillation has emerged as a learnable method of synthesizing a smaller, informationrich dataset from a large-scale real dataset. This approach offers a more efficient training paradigm, commonly applied in various downstream applications such as continual learning [9, 20, 51, 70, 76], neural architecture search [27, 58], and federated learning [28, 39, 40, 69]. The seminal work, initially proposed by Wang et al. [63], introduced Figure 2. (a) An overview of the proposed ATOM framework. By mixing attention, ATOM is able to capture both spatial localization and class context. (b) Demonstration of the internal architecture for spatialand channel-wise attention in the ATOM Module. The spatial-wise attention computes attention at specific locales through different filters, resulting in a matrix output, whereas the channel-wise attention calculates attention between each filter, naturally producing a vectorized output. bilevel optimization, comprising an outer loop for learning the pixel-level synthetic dataset and an inner loop for training the matching network. Following this, several studies adopted surrogate objectives to tackle unrolled optimization problems in meta-learning. For example, gradient matching methods [15, 34, 38, 74, 76] learn images by aligning network gradients derived from real and synthetic datasets. Trajectory matching [7, 11, 14, 21] improves performance by minimizing differences in model training trajectories between original and synthetic samples. Meanwhile, feature matching strategies [51, 51, 61, 73, 75, 77] aim to align feature distributions between real and synthetic data within diverse latent spaces. Despite significant advancements in this field, methods still struggle to find a trade-off between the computational costs associated with the distillation pipeline and the model\u2019s performance. A recent work, DataDAM [51], used spatial attention to improve the performance of feature-matching-based methods by selectively matching features based on their spatial attention scores. However, although this method operates without bilevel optimization, it only marginally improves performance on larger test suites. In this study, we delve deeper into the potential of attention-based methods and demonstrate superior performance compared to DataDAM and previous benchmarks across various computer vision datasets. Additionally, we achieve a lower computational cost compared to conventional attention-matching approaches by leveraging information in a channel-wise manner. Attention Mechanism. Attention mechanisms have been widely adopted in deep learning to enhance performance across various tasks [3, 64, 72]. Initially applied in natural language processing [3], it has extended to computer vision, with global attention models [64] improving image classification and convolutional block attention modules [67] enhancing feature map selection. Additionally, attention aids model compression in knowledge distillation [72]. They are lauded for their ability to efficiently incorporate global contextual information into feature representations. When applied to feature maps, attention can take the form of either spatial or channel-based methods. 
Spatial methods focus on identifying the informative regions (\u201dwhere\u201d), while channel-based methods complementarily emphasize the informative features (\u201dwhat\u201d). Both spatial localization and channel information are crucial for identifying class characteristics. Recently, Sajedi et al. proposed DataDAM [51] to concentrate only on spatial attention, capturing class correlations within image localities for efficient training purposes. However, inspired by the inherent obfuscation of the content in the attention maps, we propose an Attention Mixer module that uses a unique combination of spatial and channel-wise attention to capture localization and information content.", + "pre_questions": [], + "main_content": "Introduction Efficient deep learning has surged in recent years due to the increasing computational costs associated with training *Equal contribution Figure 1. The ATOM Framework utilizes inherent information to capture both context and location, resulting in significantly improved performance in dataset distillation. We display the performance of various components within the ATOM framework, showcasing a 5.8% enhancement from the base distribution matching performance on CIFAR10 at IPC50. Complete numerical details can be found in Table 4. and inferencing pipelines [2, 26, 51\u201354, 63, 71, 75, 76]. This growth can be attributed to the escalating complexity of model architectures and the ever-expanding scale of datasets. Despite the increasing computational burden, two distinct approaches have emerged as potential avenues for addressing this issue: the model-centric and data-centric approaches. The model-centric approach is primarily concerned with mitigating computational costs by refining the architecture of deep learning models. Techniques such as pruning, quantization, knowledge distillation, and architectural simplification are key strategies employed within this paradigm [26, 29, 30, 49, 50, 65, 68, 71]. In contrast, the data-centric approach adopts a different perspective, focusing on exploring and leveraging the inherent redundancy within datasets. Rather than modifying model architectures, arXiv:2405.01373v1 [cs.CV] 2 May 2024 this approach seeks to identify or construct a smaller dataset that retains the essential information necessary for maintaining performance levels. Coreset selection was a fairly adopted method for addressing this gap [4, 6, 47, 55, 60]. In particular works such as Herding [66] and K-Center [55] offered a heuristic-based approach to intelligently select an informative subset of data. However, as a heuristic-based method, the downstream performance is limited by the information contained solely in the subset. More recently, shapely data selection [17] found the optimal subset of data by measuring the downstream performance for every subset combination achievable in the dataset. However inefficient this may be, the downstream performance is still limited by the diversity of samples selected. therefore, Dataset Distillation (DD) [63] has emerged as a front-runner wherein a synthetic dataset can be learned. Dataset distillation aims to distill large-scale datasets into a smaller representation, such that downstream models trained on this condensed dataset will retain competitive performance with those trained on the larger original one [7, 63, 76]. 
Recently, many techniques have been introduced to address this challenge, including gradient matching [38, 74, 76], feature/distribution matching [51, 75, 77], and trajectory matching [7, 14, 21]. However, many of these methods suffer from complex and computationally heavy distillation pipelines [7, 21, 76] or inferior performance [51, 75, 76]. A promising approach, DataDAM [51], effectively tackled the computational challenges present in prior distillation techniques by employing untrained neural networks, in contrast to bi-level optimization methods. However, despite its potential, DataDAM faced several significant limitations: (1) it obscured relevant class-contentbased information existing channel-wise in intermediate layers; (2) it only achieved marginal enhancements on previous dataset distillation algorithms; and (3) it exhibited inferior cross-architecture generalization. In this work, we introduce ATtentiOn Mixer, dubbed ATOM as an efficient dataset distillation pipeline that strikes an impressive balance between computational efficiency and superior performance. Drawing upon spatial attention matching techniques from prior studies like DataDAM [51], we expand our receptive field of information in the matching process. Our key contribution lies in mixing spatial information with channel-wise contextual information. Intuitively, different convolutional filters focus on different localizations of the input feature; thus, channel-wise attention aids in the distillation matching process by compressing and aggregating information from multiple regions as evident by the performance improvmenets displayed in Figure 1. ATOM not only combines localization and context, but it also produces distilled images that are more generalizable to various downstream architectures, implying that the distilled features are true representations of the original dataset. Moreover, our approach demonstrates consistent improvements across all settings on a comprehensive distillation test suite. In summary, the key contributions of this study can be outlined as follows: [C1]: We provide further insight into the intricacies of attention matching, ultimately introducing the use of channel-wise attention matching for capturing a higher level of information in the feature-matching process. Our mixing module combines both spatial localization awareness of a particular class, with distinctive contextual information derived channel-wise. [C2]: Empirically we show superior performance against previous dataset distillation methods including feature matching and attention matching works, without bilevel optimization on common computer vision datasets. [C3]: We extend our findings by demonstrating superior performance in cross-architecture and neural architecture search. In particular, we provide a channel-only setting that maintains the majority of the performance while incurring a lower computational cost. Given the larger source dataset T = {(xi, yi)}|T | i=1 containing |T | real image-label pairs, we generate a smaller learnable synthetic dataset S = {(sj, yj)}|S| j=1 with |S| synthetic image and label pairs. Following previous works [7, 51, 61, 74, 76], we use random sampling to initialize our synthetic dataset. For every class k, we obtain a batch of real and synthetic data (BT k and BS k , respectively) and use a neural network \u03d5\u03b8(\u00b7) with randomly initialized weights \u03b8 [22] to extract intermediate and output features. 
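For concreteness, a minimal sketch of this initialization step is given below (hypothetical PyTorch code under the usual N x C x H x W tensor convention, not the authors' released implementation):

import torch

def init_synthetic_set(images, labels, num_classes, ipc):
    # Initialize S by randomly sampling `ipc` real images per class; the pixels
    # become the learnable parameters, while the labels stay fixed during distillation.
    syn_images, syn_labels = [], []
    for k in range(num_classes):
        idx = torch.nonzero(labels == k, as_tuple=True)[0]
        pick = idx[torch.randperm(idx.numel())[:ipc]]
        syn_images.append(images[pick].clone())
        syn_labels.append(torch.full((ipc,), k, dtype=torch.long))
    syn_images = torch.cat(syn_images).requires_grad_(True)
    syn_labels = torch.cat(syn_labels)
    return syn_images, syn_labels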
We illustrate our method in Figure 2 where an L-layer neural network \u03d5\u03b8(\u00b7) is used to extract features from the real and synthetic sets. The collection of feature maps from the real and synthetic sets can be expressed as \u03d5\u03b8(Tk) = [f Tk \u03b8,1, \u00b7 \u00b7 \u00b7 , f Tk \u03b8,L] and \u03d5\u03b8(Sk) = [f Sk \u03b8,1, \u00b7 \u00b7 \u00b7 , f Sk \u03b8,L], respectively. The feature f Tk \u03b8,l comprises a multi-dimensional array within R|BT k |\u00d7Cl\u00d7Wl\u00d7Hl, obtained from the real dataset at the lth layer, where Cl denotes the number of channels and Hl \u00d7 Wl represents the spatial dimensions. Correspondingly, a feature f Sk \u03b8,l is derived for the synthetic dataset. We now introduce the Attention Mixer Module (ATOM) which generates attention maps for the intermediate features derived from both the real and synthetic datasets. Leveraging a feature-based mapping function A(\u00b7), ATOM takes the intermediate feature maps as input and produces a corresponding attention map for each feature. Formally, we express this as: A \u0000\u03d5\u03b8(Tk) \u0001 = [aTk \u03b8,1, \u00b7 \u00b7 \u00b7 , aTk \u03b8,L\u22121] and A(\u03d5\u03b8(Sk)) = [aSk \u03b8,1, \u00b7 \u00b7 \u00b7 , aSk \u03b8,L\u22121] for the real and synthetic sets, respectively. Previous works [51, 72] have shown that spatial attention, which aggregates the absolute values of feature maps across the channel dimension, can emphasize common spatial locations associated with high neuron activation. The implication of this is retaining the most informative regions, thus generating an efficient feature descriptor. In this work, we also consider the effect of channel-wise attention, which emphasizes the most significant information captured by each channel based on the magnitude of its activation. Since different filters explore different regions or locations of the input feature, channel-wise activation yields the best aggregation of the global information. Ultimately, we convert the feature map f Tk \u03b8,l of the lth layer into an attention map aTk \u03b8,l representing spatial or channel-wise attention using the corresponding mapping functions As(\u00b7) or Ac(\u00b7) respectively. Formally, we can denote the spatial and channel-wise attention maps as: As(f Tk \u03b8,l) = Cl X i=1 \f \f(f Tk \u03b8,l)i \f \fps, (1) Ac(f Tk \u03b8,l) = Hl\u2217Wl X i=1 \f \f(f Tk \u03b8,l)\u22c6 i \f \fpc, (2) where, (f Tk \u03b8,l)i = f Tk \u03b8,l(:, i, :, :) is the feature map of channel i from the lth layer, and the power and absolute value operations are applied element-wise; meanwhile, the symbol \u22c6flattens the feature map along the spatial dimension \u0010 (f Tk \u03b8,l)\u2217\u2208R|BT k |\u00d7Cl\u00d7Wl\u2217Hl \u0011 , such that (f Tk \u03b8,l)\u22c6 i = (f Tk \u03b8,l)\u22c6(:, :, i). By leveraging both types of attention, we can better encapsulate the relevant information in the intermediate features, as investigated in Section 4.3. Further, the effect of power parameters for spatial and channel-wise attention, i.e. ps and pc is studied in the Section 4.3. Given our generated spatial and channel attention maps for the intermediate features, we apply standard normalization such that we can formulate a matching loss between the synthetic and real datasets. 
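In code, the two mapping functions and the normalization applied before matching reduce to a few lines; the sketch below is hypothetical PyTorch (feature maps of shape B x C x H x W), with p = 4 as the default reported in the implementation details:

import torch

def spatial_attention(f, p=4):
    # Eq. (1): sum |f|^p over the channel dimension -> one H x W map per image.
    return f.abs().pow(p).sum(dim=1)              # (B, C, H, W) -> (B, H, W)

def channel_attention(f, p=4):
    # Eq. (2): sum |f|^p over the spatial positions -> one scalar per channel.
    return f.abs().pow(p).flatten(2).sum(dim=2)   # (B, C, H, W) -> (B, C)

def l2_normalize(z):
    # Flatten and L2-normalize each attention map before computing the matching loss (cf. Eq. (3)).
    z = z.flatten(1)
    return z / (z.norm(dim=1, keepdim=True) + 1e-8)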
We denote our generalized loss LATOM as: E \u03b8\u223cP\u03b8 \u0014 K X k=1 L\u22121 X l=1 \r \r \rETk h zTk \u03b8,l \u2225zTk \u03b8,l\u22252 i \u2212ESk h zSk \u03b8,l \u2225zSk \u03b8,l\u22252 i\r \r \r 2\u0015 , (3) where, in the case of spatial attention, we denote zTk \u03b8,l = vec(aTk \u03b8,l) \u2208R|BT k |\u00d7(Wl\u00d7Hl) and zSk \u03b8,l = vec(aSk \u03b8,l) \u2208 R|BS k |\u00d7(Wl\u00d7Hl) to represent the vectorized spatial attention map pairs at the lth layer for the real and synthetic datasets, respectively. Meanwhile, for channel-based attention, we have zTk \u03b8,l = vec(aTk \u03b8,l) \u2208R|BT k |\u00d7(Cl) and zSk \u03b8,l = vec(aSk \u03b8,l) \u2208R|BS k |\u00d7(Cl) to represent the flattened channel attention map pairs at the lth layer for the real and synthetic datasets, respectively. The parameter K is the number of categories in a dataset, and P\u03b8 denotes the distribution of network parameters. We estimate the expectation terms in Equation (3) empirically if ground-truth data distributions are not available. Following previous works [51, 61, 73, 75, 77], we leverage the features in the final layer to regularize our matching process. In particular, the features of the penultimate layer represent a high-level abstraction of information from the input images in an embedded representation and can thus be used to inject semantic information in the matching process [19, 48, 51, 75]. Thus, we employ LMMD as described in [51, 75] out-of-the-box. Finally, we learn the synthetic dataset by minimizing the following optimization problem using SGD optimizer: S\u2217= arg min S \u0000LATOM + \u03bbLMMD \u0001 , (4) where \u03bb is the task balance parameter inherited from [51]. In particular, we highlight that LMMD brings semantic information from the final layer, while LATOM mixes the spatial and channel-wise attention information from the intermediate layers. Note that our approach assigns a fixed label to each synthetic sample and keeps it constant during training. A summary of the learning algorithm can be found in Algorithm 1. 4. Experiments 4.1. Experimental Setup Datasets. Our method is evaluated on the CIFAR-10 and CIFAR-100 datasets [35], which maintain a resolution of 32 \u00d7 32, aligning with state-of-the-art benchmarks. Furthermore, we resize the Tiny ImageNet [37] datasets to 64 Algorithm 1 Attention Mixer for Dataset Distillation Input: Real training dataset T = {(xi, yi)}|T | i=1 Required: Initialized synthetic samples for K classes, Deep neural network \u03d5\u03b8 parameterized with \u03b8, Probability distribution over randomly initialized weights P\u03b8, Learning rate \u03b7S, Task balance parameter \u03bb, Number of training iterations I. 1: Initialize synthetic dataset S 2: for i = 1, 2, \u00b7 \u00b7 \u00b7 , I do 3: Sample \u03b8 from P\u03b8 4: Sample mini-batch pairs BT k and BS k from the real and synthetic sets for each class k 5: Compute LATOM and LMMD 6: Calculate L = LATOM + \u03bbLMMD 7: Update the synthetic dataset using S \u2190S\u2212\u03b7S\u2207SL 8: end for Output: Synthetic dataset S = {(si, yi)}|S| i=1 \u00d7 64 for additional experimentation. The supplementary materials provide more detailed dataset information. Network Architectures. We employ a ConvNet architecture [18] for distillation, following prior studies. The default ConvNet comprises three convolutional blocks, each consisting of a 128-kernel 3 \u00d7 3 convolutional layer, instance normalization, ReLU activation, and 3 \u00d7 3 average pooling with a stride of 2. 
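A minimal sketch of one such block and the default three-block backbone for 32 x 32 inputs follows (hypothetical PyTorch; the padding choices and the affine InstanceNorm flag are assumptions, since only the listed components are specified):

import torch.nn as nn

def conv_block(in_ch, out_ch=128):
    # 3x3 convolution with 128 filters, instance normalization, ReLU, 3x3 average pooling (stride 2).
    return nn.Sequential(
        nn.Conv2d(in_ch, out_ch, kernel_size=3, padding=1),
        nn.InstanceNorm2d(out_ch, affine=True),
        nn.ReLU(inplace=True),
        nn.AvgPool2d(kernel_size=3, stride=2, padding=1),
    )

class ConvNet3(nn.Module):
    # Default backbone: three blocks (32 -> 16 -> 8 -> 4 spatially), then a linear classifier.
    def __init__(self, num_classes, in_ch=3):
        super().__init__()
        self.features = nn.Sequential(conv_block(in_ch), conv_block(128), conv_block(128))
        self.classifier = nn.Linear(128 * 4 * 4, num_classes)

    def forward(self, x):
        f = self.features(x)
        return self.classifier(f.flatten(1))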
To accommodate the increased resolutions in Tiny ImageNet, we append a fourth convolutional block. Network parameters are initialized using normal initialization [22] in all experiments. Evaluation Protocol. We evaluate the methods using standard measures from previous studies [51, 61, 74\u201376]. Five sets of synthetic images are generated from a real training dataset with 1, 10, and 50 images per class. Then, 20 neural network models are trained on each synthetic set using an SGD optimizer with a fixed learning rate of 0.01. Each experiment reports the mean and standard deviation values for 100 models to assess the efficacy of distilled datasets. Furthermore, computational costs are assessed by calculating run-time per step over 100 iterations, as well as peak GPU memory usage during 100 iterations of training. Implementation Details. We use the SGD optimizer with a fixed learning rate of 1 to learn synthetic datasets containing 1, 10, and 50 IPCs over 8000 iterations with task balances (\u03bb) set at 0.01. Previous works have shown that ps = 4 is sufficient for spatial attention matching [51]. As such we set our default case as: pc = ps = 4. This is further ablated in Section 4.3. We adopt differentiable augmentation for both training and evaluating the synthetic set, following [51, 76]. For dataset reprocessing, we utilized the Kornia implementation of Zero Component Analysis (ZCA) with default parameters, following previous works [7, 44, 51]. All experiments are performed on a single A100 GPU with 80 GB of memory. Further hyperparameter details can be found in the supplementary materials. Competitive Methods. In this paper, we compare the empirical results of ATOM on three computer vision datasets: CIFAR10/100 and TinyImageNet. We evaluate ATOM against four corset selection approaches and thirteen distillation methods for training set synthesis. The corset selection methods include Random selection [47], Herding [4, 6], K-Center [55], and Forgetting [60]. We also compare our approach with state-of-the-art distillation methods, including Dataset Distillation [63] (DD), Flexible Dataset Distillation [5] (LD), Dataset Condensation [76] (DC), Dataset Condensation with Contrastive (DCC) [38], Dataset Condensation with Differentiable Siamese Augmentation [74] (DSA), Distribution Matching [75] (DM), Deep Generative Priors (GLaD), Aligning Features [61] (CAFE), VIG [41], Kernel Inducing Points [44, 45] (KIP), Matching Training Trajectories [7] (MTT), and Attention Matching [51] (DAM). 4.2. Comparison with State-of-the-art Methods Performance Comparison. In this section, we present a comparative analysis of our method against coreset and dataset distillation approaches. ATOM consistently outperforms these studies, especially at smaller distillation ratios, as shown in Table 1. Since the goal of dataset distillation is to generate a more compact synthetic set, we emphasize our significant performance improvements at low IPCs. We achieve almost 4% improvement over the previous attention matching framework [51], DataDAM when evaluated on CIFAR-100 at IPC1. Notably, our performance on CIFAR100 at IPC50 is 50.2% \u2013 that is nearly 90% of the baseline accuracy at a mere 10% of the original dataset. These examples motivate the development of dataset distillation works as downstream models can achieve relatively competitive performance with their baselines at a fraction of the training costs. 
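Tying the pieces above together, the following condensed sketch mirrors Algorithm 1 with the hyperparameters listed in the implementation details (SGD with learning rate 1 on the synthetic pixels over 8000 iterations); `matching_loss` is a hypothetical helper standing in for LATOM + λLMMD computed from the attention maps and final-layer features of a freshly initialized network:

import torch

def distill(real_batches, syn_images, syn_labels, make_model, num_classes,
            matching_loss, num_iters=8000, lr_syn=1.0):
    # Outer loop of Algorithm 1: sample a fresh, untrained network each iteration and
    # update only the synthetic pixels; real_batches[k] yields mini-batches of class k.
    opt = torch.optim.SGD([syn_images], lr=lr_syn)
    for _ in range(num_iters):
        model = make_model()                         # theta sampled from P_theta, never trained
        loss = 0.0
        for k in range(num_classes):
            x_real = next(real_batches[k])           # real batch for class k
            x_syn = syn_images[syn_labels == k]      # synthetic batch for class k
            loss = loss + matching_loss(model, x_real, x_syn)
        opt.zero_grad()
        loss.backward()
        opt.step()
    return syn_images.detach(), syn_labels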
Our primary objective in this study is to investigate the impact of channel-wise attention within the featurematching process. Compared to prior attention-based and feature-based methodologies, our findings underscore the significance of channel-wise attention and the ATOM module, as validated also in the ablation studies in Section 4.3. Cross-architecture Generalization. In this section, we assess the generalization capacity of our refined dataset by training various unseen deep neural networks on it and then evaluating their performance on downstream classification tasks. Following established benchmarks [51, 61, 75, 76], we examine classic CNN architectures such as AlexNet [36], VGG-11 [56], ResNet-18 [23], and additionally, a standard Vision Transformer (ViT) [13]. Specifically, we utilize synthetic images learned from CIFAR-10 with IPC50 using ConvNet as the reference model and subsequently Dataset CIFAR-10 CIFAR-100 Tiny ImageNet IPC 1 10 50 1 10 50 1 10 50 Ratio % 0.02 0.2 1 0.2 2 10 0.2 2 10 Random 14.4\u00b12.0 26.0\u00b11.2 43.4\u00b11.0 4.2\u00b10.3 14.6\u00b10.5 30.0\u00b10.4 1.4\u00b10.1 5.0\u00b10.2 15.0\u00b10.4 Herding [66] 21.5\u00b11.2 31.6\u00b10.7 40.4\u00b10.6 8.3\u00b10.3 17.3\u00b10.3 33.7\u00b10.5 2.8\u00b10.2 6.3\u00b10.2 16.7\u00b10.3 K-Center [55] 21.5\u00b11.3 14.7\u00b10.9 27.0\u00b11.4 8.4\u00b10.3 17.3\u00b10.3 30.5\u00b10.3 Forgetting [59] 13.5\u00b11.2 23.3\u00b11.0 23.3\u00b11.1 4.5\u00b10.2 15.1\u00b10.3 1.6\u00b10.1 5.1\u00b10.2 15.0\u00b10.3 DD\u2020[63] 36.8\u00b11.2 LD\u2020[5] 25.7\u00b10.7 38.3\u00b10.4 42.5\u00b10.4 11.5\u00b10.4 DC [76] 28.3\u00b10.5 44.9\u00b10.5 53.9\u00b10.5 12.8\u00b10.3 25.2\u00b10.3 30.6\u00b10.6 5.3\u00b10.1 12.9\u00b10.1 12.7\u00b10.4 DCC [38] 32.9\u00b10.8 49.4\u00b10.5 61.6\u00b10.4 13.3\u00b10.3 30.6\u00b10.4 DSA [74] 28.8\u00b10.7 52.1\u00b10.5 60.6\u00b10.5 13.9\u00b10.3 32.3\u00b10.3 42.8\u00b10.4 5.7\u00b10.1 16.3\u00b10.2 15.1\u00b10.2 DM [75] 26.0\u00b10.8 48.9\u00b10.6 63.0\u00b10.4 11.4\u00b10.3 29.7\u00b10.3 43.6\u00b10.4 3.9\u00b10.2 12.9\u00b10.4 25.3\u00b10.2 GLaD [8] 28.0\u00b10.8 46.7\u00b10.5 59.9\u00b10.7 CAFE [61] 30.3\u00b11.1 46.3\u00b10.6 55.5\u00b10.6 12.9\u00b10.3 27.8\u00b10.3 37.9\u00b10.3 CAFE+DSA [61] 31.6\u00b10.8 50.9\u00b10.5 62.3\u00b10.4 14.0\u00b10.3 31.5\u00b10.2 42.9\u00b10.2 VIG [41] 26.5\u00b11.2 54.6\u00b10.1 35.6\u00b10.6 17.8\u00b10.1 29.3\u00b10.1 KIP [44] 29.8\u00b11.0 46.1\u00b10.7 53.2\u00b10.7 12.0\u00b10.2 29.0\u00b10.3 MTT [7] 31.9\u00b11.2 56.4\u00b10.7 65.9\u00b10.6 13.8\u00b10.6 33.1\u00b10.4 42.9\u00b10.3 6.2\u00b10.4 17.3\u00b10.2 26.5\u00b10.3 DAM [51] 32.0\u00b11.2 54.2\u00b10.8 67.0\u00b10.4 14.5\u00b10.5 34.8\u00b10.5 49.4\u00b10.3 8.3\u00b10.4 18.7\u00b10.3 28.7\u00b10.3 ATOM (Ours) 34.8\u00b11.0 57.9\u00b10.7 68.8\u00b10.5 18.1\u00b10.4 35.7\u00b10.4 50.2\u00b10.3 9.1\u00b10.2 19.5\u00b10.4 29.1\u00b10.3 Full Dataset 84.8\u00b10.1 56.2\u00b10.3 37.6\u00b10.4 Table 1. Comparison with previous dataset distillation methods on CIFAR-10, CIFAR-100 and Tiny ImageNet. The works DD\u2020 and LD\u2020 use AlexNet [36] for CIFAR-10 dataset. All other methods use ConvNet for training and evaluation. Bold entries are the best results. train the aforementioned networks on the refined dataset to assess their performance on downstream tasks. The results, as depicted in Table 2, indicate that ATOM demonstrates superior generalization across a spectrum of architectures. 
Notably, it achieves a significant performance boost of over 4% compared to the prior state-of-the-art on ResNet-18 [23]. This implies that the channel-wise attention mechanism effectively identifies features not only relevant to ConvNet but also to a wider range of deep neural networks, thereby enhancing the refined dataset with this discerned information. ConvNet AlexNet VGG-11 ResNet-18 ViT Avg. DC [76] 53.9\u00b10.5 28.8\u00b10.7 38.8\u00b11.1 20.9\u00b11.0 30.1\u00b10.5 34.5\u00b10.8 CAFE [61] 62.3\u00b10.4 43.2\u00b10.4 48.8\u00b10.5 43.3\u00b10.7 22.7\u00b10.7 44.1\u00b10.5 DSA [74] 60.6\u00b10.5 53.7\u00b10.6 51.4\u00b11.0 47.8\u00b10.9 43.3\u00b10.4 51.4\u00b10.7 DM [75] 63.0\u00b10.4 60.1\u00b10.5 57.4\u00b10.8 52.9\u00b10.4 45.2\u00b10.4 55.7\u00b10.5 KIP [44] 56.9\u00b10.4 53.2\u00b11.6 53.2\u00b10.5 47.6\u00b10.8 18.3\u00b10.6 45.8\u00b10.8 MTT [7] 66.2\u00b10.6 43.9\u00b10.9 48.7\u00b11.3 60.0\u00b10.7 47.7\u00b10.6 53.3\u00b10.8 DAM [51] 67.0\u00b10.4 63.9\u00b10.9 64.8\u00b10.5 60.2\u00b10.7 48.2\u00b10.8 60.8\u00b10.7 ATOM (Ours) 68.8\u00b10.4 64.1\u00b10.7 66.4\u00b10.6 64.5\u00b10.6 49.5\u00b10.7 62.7\u00b10.6 Table 2. Cross-architecture testing performance (%) on CIFAR-10 with 50 images per class. The ConvNet architecture is employed for distillation. Bold entries are the best results. Distillation Cost Analysis. In this section, we delve into an examination of the training costs required for the distillation process. Although the main goal of dataset distillation is to reduce training costs across different applications such as neural architecture search and continual learning, the distillation technique itself must be efficient, enabling smooth operation on consumer-grade hardware. Approaches such Method Run Time (Sec.) GPU memory (MB) IPC1 IPC10 IPC50 IPC1 IPC10 IPC50 DC [76] 0.16\u00b10.01 3.31\u00b10.02 15.74\u00b10.10 3515 3621 4527 DSA [74] 0.22\u00b10.02 4.47\u00b10.12 20.13\u00b10.58 3513 3639 4539 DM [75] 0.08\u00b10.02 0.08\u00b10.02 0.08\u00b10.02 3323 3455 3605 MTT [7] 0.36\u00b10.23 0.40\u00b10.20 OOM 2711 8049 OOM DAM [51] 0.09\u00b10.01 0.08\u00b10.01 0.16\u00b10.04 3452 3561 3724 ATOM\u2020 (Ours) 0.08\u00b10.02 0.08\u00b10.02 0.13\u00b10.03 3152 3263 4151 ATOM (Ours) 0.10\u00b10.02 0.10\u00b10.01 0.17\u00b10.02 3601 4314 5134 Table 3. Comparisons of training time and GPU memory usage for prior dataset distillation methods. Run time is averaged per step over 100 iterations, while GPU memory usage is reported as peak memory during the same 100 iterations of training on an A100 GPU for CIFAR-10. Methods that surpass the GPU memory threshold and fail to run are denoted as OOM (out-of-memory). ATOM\u2020 represents our method with on-channel attention, hence offering a better tradeoff in computational complexity. as DC, DSA, and MTT introduce additional computational overhead due to bi-level optimization and training an expert model. In contrast, our method, akin to DM and DAM, capitalizes on randomly initialized networks, obviating the need for training and thereby reducing the computational cost per step involved in the matching stage. As illustrated in Table 3 utilizing solely the channel-based ATOM\u2020 decreases the computational burden of matching compared to the default ATOM configuration. This efficiency is crucial, as channel-wise attention offers a more effective distillation process while maintaining superior performance (refer to Section 4.3). Convergence Speed Analysis. In Figure 3, we plot the Figure 3. 
Test accuracy evolution of synthetic image learning on CIFAR10 with IPC50 for ATOM (ours), DM [75] and DataDAM [51]. downstream testing accuracy evolution for the synthetic images on CIFAR10 IPC50. Comparing with previous methods, DM [75] and DataDAM [51], we can explicitly see an improvement in convergence speed and a significantly higher steady state achieved with the ATOM framework. Our included convergence analysis supports the practicality of our method and the consistency to which we outperform previous baselines. 4.3. Ablation Studies and Analysis Evaluation of loss components in ATOM. In Table 4, we evaluate the effect of different attention-matching mechanisms with respect to pure feature matching in intermediate layers and distribution matching in the final layer (LMMD). The results clearly demonstrate that attentionmatching improves the performance of the distillation process. In particular, the attention-matching process improves feature matching by 8.0%. Further, it seems that channel attention is able to capture the majority of relevant information from the intermediate features, as evidenced by an improvement of over 1.5% from spatial attention matching. Ultimately, this provides an incentive to favor channel attention in the distillation process. LMMD Feature Map Spatial Atn. Channel Atn. Performance (%) \u2713 63.0\u00b10.4 \u2713 60.8\u00b10.6 \u2713 \u2713 67.0\u00b10.7 \u2713 \u2713 68.6\u00b10.3 \u2713 \u2713 \u2713 68.8\u00b10.5 Table 4. Evaluation of loss components and attention components in ATOM using CIFAR-10 with IPC50. Evaluating attention balance in ATOM. In this section, we evaluate the balance between spatial and channelwise attention through the power value p. Referencing Equation (1) and Equation (2), modulating the values of ps and pc ultimately affects the balance of spatial and channelwise attention in LATOM. In Table 5, we examine the impact of different exponentiation powers p in the attentionmatching mechanisms. Specifically, we conduct a gridbased search to investigate how varying the exponentiation of spatial (ps) and channel (pc) attention influences subsequent performance. Our findings reveal that optimal performance (nearly 1% improvement over our default) occurs when the exponentiation for channel attention significantly exceeds that of spatial attention. This suggests that assigning a higher exponential value places greater emphasis on channel-attention matching over spatial-wise matching. This aligns with our observations from the loss component ablation, where channel-wise matching was found to encapsulate the majority of information within the feature map. Consequently, we deduce that prioritizing channelwise matching will enhance downstream performance outcomes. Channel Attention pc Spatial Attention ps 1 2 4 8 1 57.4% 57.5% 57.0% 56.2% 2 58.2% 57.5% 57.2% 56.3% 4 58.4% 58.5% 57.9% 57.6% 8 58.8% 58.7% 58.2% 57.8% Table 5. Evaluation of power values in the spatial and channel attention computations for LATOM using CIFAR-10 with IPC10. Visualization of Synthetic Images. We include samples of our distilled images in Figure 4. The images appear to be interleaved with artifacts that assimilate the background and object information into a mixed collage-like appearance. The synthetic images effectively capture the correlation between background and object elements, suggesting their potential for generalizability across various architectures, as empirically verified in Table 2. Additional visualizations are available in the supplementary material. 
4.4. Applications Neural Architecture Search. In Table Table 6 we leverage our distilled synthetic datasets as proxy sets to accelerate Neural Architecture Search. In line with previous state-of-the-art, [51, 74, 76], we outline our architectural search space, comprising 720 ConvNets on the CIFAR10 dataset. We commence with a foundational ConvNet and devise a consistent grid, varying in depth D \u2208{1, 2, 3, 4}, width W \u2208{32, 64, 128, 256}, activation function A \u2208{Sigmoid, ReLU, LeakyReLU}, normalization technique N \u2208{None, BatchNorm, LayerNorm, InstanceNorm, GroupNorm}, and pooling operation P \u2208{None, MaxPooling, AvgPooling}. Additionally, we benchmark our approach against several state-of-the-art methods, including Random, DSA [76], DM [75], CAFE [61], DAM [51], and Early-Stopping. Our method demonstrates superior performance, accompanied by a heightened Spearman\u2019s correlation (0.75), thereby reinforcing the robustness Figure 4. Sample learned synthetic images for CIFAR-10/100 (32\u00d732 resolution) IPC10 and TinyImageNet (64\u00d764 resolution) IPC 1. of ATOM and its potential in neural architecture search. Random DSA DM CAFE DAM ATOM Early-stopping Full Dataset Performance (%) 88.9 87.2 87.2 83.6 89.0 88.9 88.9 89.2 Correlation 0.70 0.66 0.71 0.59 0.72 0.75 0.69 1.00 Time cost (min) 206.4 206.4 206.6 206.4 206.4 206.4 206.2 5168.9 Storage (imgs) 500 500 500 500 500 500 5 \u00d7 104 5 \u00d7 104 Table 6. Neural architecture search on CIFAR-10 with IPC50. 5. Limitations Many studies in dataset distillation encounter a constraint known as re-distillation costs [24, 25, 62]. This limitation becomes apparent when adjusting the number of images per class (IPC) or the distillation ratios. Like most other distillation methods, our approach requires re-distillation on the updated setting configuration, which limits flexibility regarding configuration changes and storage allocation. Additionally, we observed in Table 2 that dataset distillation methods often struggle with generalizing to transformer architectures. Despite ATOM outperforming other methods, there is still a noticeable performance drop compared to convolutional neural networks. This suggests that the effectiveness of transformers for downstream training might be constrained by the distilled data. 6. Conclusion In this work, we introduced an Attention Mixer (ATOM) for efficient dataset distillation. Previous approaches have struggled with marginal performance gains, obfuscating channel-wise information, and high computational overheads. ATOM addresses these issues by effectively combining information from different attention mechanisms, facilitating a more informative distillation process with untrained neural networks. Our approach utilizes a broader receptive field to capture spatial information while preserving distinct content information at the channel level, thus better aligning synthetic and real datasets. By capturing information across intermediate layers, ATOM facilitates multi-scale distillation. We demonstrated the superior performance of ATOM on standard distillation benchmarks and its favorable performance across multiple architectures. We conducted several ablative studies to justify the design choices behind ATOM. Furthermore, we applied our distilled data to Neural Architecture Search, showing a superior correlation with the real large-scale dataset. In the future, we aim to extend attention mixing to various downstream tasks, including image segmentation and localizations. 
We also hope to address limitations of ATOM, such as re-distillation costs and cross-architecture generalizations on transformers." + }, + { + "url": "http://arxiv.org/abs/2311.01570v1", + "title": "Sequential Subset Matching for Dataset Distillation", + "abstract": "Dataset distillation is a newly emerging task that synthesizes a small-size\ndataset used in training deep neural networks (DNNs) for reducing data storage\nand model training costs. The synthetic datasets are expected to capture the\nessence of the knowledge contained in real-world datasets such that the former\nyields a similar performance as the latter. Recent advancements in distillation\nmethods have produced notable improvements in generating synthetic datasets.\nHowever, current state-of-the-art methods treat the entire synthetic dataset as\na unified entity and optimize each synthetic instance equally. This static\noptimization approach may lead to performance degradation in dataset\ndistillation. Specifically, we argue that static optimization can give rise to\na coupling issue within the synthetic data, particularly when a larger amount\nof synthetic data is being optimized. This coupling issue, in turn, leads to\nthe failure of the distilled dataset to extract the high-level features learned\nby the deep neural network (DNN) in the latter epochs.\n In this study, we propose a new dataset distillation strategy called\nSequential Subset Matching (SeqMatch), which tackles this problem by adaptively\noptimizing the synthetic data to encourage sequential acquisition of knowledge\nduring dataset distillation. Our analysis indicates that SeqMatch effectively\naddresses the coupling issue by sequentially generating the synthetic\ninstances, thereby enhancing its performance significantly. Our proposed\nSeqMatch outperforms state-of-the-art methods in various datasets, including\nSVNH, CIFAR-10, CIFAR-100, and Tiny ImageNet. Our code is available at\nhttps://github.com/shqii1j/seqmatch.", + "authors": "Jiawei Du, Qin Shi, Joey Tianyi Zhou", + "published": "2023-11-02", + "updated": "2023-11-02", + "primary_cat": "cs.CV", + "cats": [ + "cs.CV", + "cs.LG" + ], + "label": "Related Work" + }, + { + "url": "http://arxiv.org/abs/2207.09653v1", + "title": "FedDM: Iterative Distribution Matching for Communication-Efficient Federated Learning", + "abstract": "Federated learning~(FL) has recently attracted increasing attention from\nacademia and industry, with the ultimate goal of achieving collaborative\ntraining under privacy and communication constraints. Existing iterative model\naveraging based FL algorithms require a large number of communication rounds to\nobtain a well-performed model due to extremely unbalanced and non-i.i.d data\npartitioning among different clients. Thus, we propose FedDM to build the\nglobal training objective from multiple local surrogate functions, which\nenables the server to gain a more global view of the loss landscape. In detail,\nwe construct synthetic sets of data on each client to locally match the loss\nlandscape from original data through distribution matching. FedDM reduces\ncommunication rounds and improves model quality by transmitting more\ninformative and smaller synthesized data compared with unwieldy model weights.\nWe conduct extensive experiments on three image classification datasets, and\nresults show that our method can outperform other FL counterparts in terms of\nefficiency and model performance. 
Moreover, we demonstrate that FedDM can be\nadapted to preserve differential privacy with Gaussian mechanism and train a\nbetter model under the same privacy budget.", + "authors": "Yuanhao Xiong, Ruochen Wang, Minhao Cheng, Felix Yu, Cho-Jui Hsieh", + "published": "2022-07-20", + "updated": "2022-07-20", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "cs.CR" + ], + "label": "Related Work" + }, + { + "url": "http://arxiv.org/abs/2211.11004v3", + "title": "Minimizing the Accumulated Trajectory Error to Improve Dataset Distillation", + "abstract": "Model-based deep learning has achieved astounding successes due in part to\nthe availability of large-scale real-world data. However, processing such\nmassive amounts of data comes at a considerable cost in terms of computations,\nstorage, training and the search for good neural architectures. Dataset\ndistillation has thus recently come to the fore. This paradigm involves\ndistilling information from large real-world datasets into tiny and compact\nsynthetic datasets such that processing the latter ideally yields similar\nperformances as the former. State-of-the-art methods primarily rely on learning\nthe synthetic dataset by matching the gradients obtained during training\nbetween the real and synthetic data. However, these gradient-matching methods\nsuffer from the so-called accumulated trajectory error caused by the\ndiscrepancy between the distillation and subsequent evaluation. To mitigate the\nadverse impact of this accumulated trajectory error, we propose a novel\napproach that encourages the optimization algorithm to seek a flat trajectory.\nWe show that the weights trained on synthetic data are robust against the\naccumulated errors perturbations with the regularization towards the flat\ntrajectory. Our method, called Flat Trajectory Distillation (FTD), is shown to\nboost the performance of gradient-matching methods by up to 4.7% on a subset of\nimages of the ImageNet dataset with higher resolution images. We also validate\nthe effectiveness and generalizability of our method with datasets of different\nresolutions and demonstrate its applicability to neural architecture search.\nCode is available at https://github.com/AngusDujw/FTD-distillation.", + "authors": "Jiawei Du, Yidi Jiang, Vincent Y. F. Tan, Joey Tianyi Zhou, Haizhou Li", + "published": "2022-11-20", + "updated": "2023-03-26", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "cs.AI", + "cs.CV" + ], + "label": "Related Work" + }, + { + "url": "http://arxiv.org/abs/2312.01537v1", + "title": "Unlocking the Potential of Federated Learning: The Symphony of Dataset Distillation via Deep Generative Latents", + "abstract": "Data heterogeneity presents significant challenges for federated learning\n(FL). Recently, dataset distillation techniques have been introduced, and\nperformed at the client level, to attempt to mitigate some of these challenges.\nIn this paper, we propose a highly efficient FL dataset distillation framework\non the server side, significantly reducing both the computational and\ncommunication demands on local devices while enhancing the clients' privacy.\nUnlike previous strategies that perform dataset distillation on local devices\nand upload synthetic data to the server, our technique enables the server to\nleverage prior knowledge from pre-trained deep generative models to synthesize\nessential data representations from a heterogeneous model architecture. 
This\nprocess allows local devices to train smaller surrogate models while enabling\nthe training of a larger global model on the server, effectively minimizing\nresource utilization. We substantiate our claim with a theoretical analysis,\ndemonstrating the asymptotic resemblance of the process to the hypothetical\nideal of completely centralized training on a heterogeneous dataset. Empirical\nevidence from our comprehensive experiments indicates our method's superiority,\ndelivering an accuracy enhancement of up to 40% over non-dataset-distillation\ntechniques in highly heterogeneous FL contexts, and surpassing existing\ndataset-distillation methods by 18%. In addition to the high accuracy, our\nframework converges faster than the baselines because rather than the server\ntrains on several sets of heterogeneous data distributions, it trains on a\nmulti-modal distribution. Our code is available at\nhttps://github.com/FedDG23/FedDG-main.git", + "authors": "Yuqi Jia, Saeed Vahidian, Jingwei Sun, Jianyi Zhang, Vyacheslav Kungurtsev, Neil Zhenqiang Gong, Yiran Chen", + "published": "2023-12-03", + "updated": "2023-12-03", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "cs.AI" + ], + "label": "Related Work" + }, + { + "url": "http://arxiv.org/abs/2106.07760v2", + "title": "RETRIEVE: Coreset Selection for Efficient and Robust Semi-Supervised Learning", + "abstract": "Semi-supervised learning (SSL) algorithms have had great success in recent\nyears in limited labeled data regimes. However, the current state-of-the-art\nSSL algorithms are computationally expensive and entail significant compute\ntime and energy requirements. This can prove to be a huge limitation for many\nsmaller companies and academic groups. Our main insight is that training on a\nsubset of unlabeled data instead of entire unlabeled data enables the current\nSSL algorithms to converge faster, significantly reducing computational costs.\nIn this work, we propose RETRIEVE, a coreset selection framework for efficient\nand robust semi-supervised learning. RETRIEVE selects the coreset by solving a\nmixed discrete-continuous bi-level optimization problem such that the selected\ncoreset minimizes the labeled set loss. We use a one-step gradient\napproximation and show that the discrete optimization problem is approximately\nsubmodular, enabling simple greedy algorithms to obtain the coreset. We\nempirically demonstrate on several real-world datasets that existing SSL\nalgorithms like VAT, Mean-Teacher, FixMatch, when used with RETRIEVE, achieve\na) faster training times, b) better performance when unlabeled data consists of\nOut-of-Distribution (OOD) data and imbalance. More specifically, we show that\nwith minimal accuracy degradation, RETRIEVE achieves a speedup of around\n$3\\times$ in the traditional SSL setting and achieves a speedup of $5\\times$\ncompared to state-of-the-art (SOTA) robust SSL algorithms in the case of\nimbalance and OOD data. RETRIEVE is available as a part of the CORDS toolkit:\nhttps://github.com/decile-team/cords.", + "authors": "Krishnateja Killamsetty, Xujiang Zhao, Feng Chen, Rishabh Iyer", + "published": "2021-06-14", + "updated": "2021-10-28", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "cs.AI" + ], + "label": "Related Work" + }, + { + "url": "http://arxiv.org/abs/1811.10959v3", + "title": "Dataset Distillation", + "abstract": "Model distillation aims to distill the knowledge of a complex model into a\nsimpler one. 
In this paper, we consider an alternative formulation called\ndataset distillation: we keep the model fixed and instead attempt to distill\nthe knowledge from a large training dataset into a small one. The idea is to\nsynthesize a small number of data points that do not need to come from the\ncorrect data distribution, but will, when given to the learning algorithm as\ntraining data, approximate the model trained on the original data. For example,\nwe show that it is possible to compress 60,000 MNIST training images into just\n10 synthetic distilled images (one per class) and achieve close to original\nperformance with only a few gradient descent steps, given a fixed network\ninitialization. We evaluate our method in various initialization settings and\nwith different learning objectives. Experiments on multiple datasets show the\nadvantage of our approach compared to alternative methods.", + "authors": "Tongzhou Wang, Jun-Yan Zhu, Antonio Torralba, Alexei A. Efros", + "published": "2018-11-27", + "updated": "2020-02-24", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "stat.ML" + ], + "label": "Related Work" + }, + { + "url": "http://arxiv.org/abs/2310.06982v1", + "title": "Data Distillation Can Be Like Vodka: Distilling More Times For Better Quality", + "abstract": "Dataset distillation aims to minimize the time and memory needed for training\ndeep networks on large datasets, by creating a small set of synthetic images\nthat has a similar generalization performance to that of the full dataset.\nHowever, current dataset distillation techniques fall short, showing a notable\nperformance gap when compared to training on the original data. In this work,\nwe are the first to argue that using just one synthetic subset for distillation\nwill not yield optimal generalization performance. This is because the training\ndynamics of deep networks drastically change during the training. Hence,\nmultiple synthetic subsets are required to capture the training dynamics at\ndifferent phases of training. To address this issue, we propose Progressive\nDataset Distillation (PDD). PDD synthesizes multiple small sets of synthetic\nimages, each conditioned on the previous sets, and trains the model on the\ncumulative union of these subsets without requiring additional training time.\nOur extensive experiments show that PDD can effectively improve the performance\nof existing dataset distillation methods by up to 4.3%. In addition, our method\nfor the first time enable generating considerably larger synthetic datasets.", + "authors": "Xuxi Chen, Yu Yang, Zhangyang Wang, Baharan Mirzasoleiman", + "published": "2023-10-10", + "updated": "2023-10-10", + "primary_cat": "cs.CV", + "cats": [ + "cs.CV", + "cs.LG" + ], + "label": "Related Work" + }, + { + "url": "http://arxiv.org/abs/2307.09742v1", + "title": "Improved Distribution Matching for Dataset Condensation", + "abstract": "Dataset Condensation aims to condense a large dataset into a smaller one\nwhile maintaining its ability to train a well-performing model, thus reducing\nthe storage cost and training effort in deep learning applications. However,\nconventional dataset condensation methods are optimization-oriented and\ncondense the dataset by performing gradient or parameter matching during model\noptimization, which is computationally intensive even on small datasets and\nmodels. In this paper, we propose a novel dataset condensation method based on\ndistribution matching, which is more efficient and promising. 
Specifically, we\nidentify two important shortcomings of naive distribution matching (i.e.,\nimbalanced feature numbers and unvalidated embeddings for distance computation)\nand address them with three novel techniques (i.e., partitioning and expansion\naugmentation, efficient and enriched model sampling, and class-aware\ndistribution regularization). Our simple yet effective method outperforms most\nprevious optimization-oriented methods with much fewer computational resources,\nthereby scaling data condensation to larger datasets and models. Extensive\nexperiments demonstrate the effectiveness of our method. Codes are available at\nhttps://github.com/uitrbn/IDM", + "authors": "Ganlong Zhao, Guanbin Li, Yipeng Qin, Yizhou Yu", + "published": "2023-07-19", + "updated": "2023-07-19", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "cs.CV" + ], + "label": "Related Work" + }, + { + "url": "http://arxiv.org/abs/2209.14851v1", + "title": "Meta Knowledge Condensation for Federated Learning", + "abstract": "Existing federated learning paradigms usually extensively exchange\ndistributed models at a central solver to achieve a more powerful model.\nHowever, this would incur severe communication burden between a server and\nmultiple clients especially when data distributions are heterogeneous. As a\nresult, current federated learning methods often require a large number of\ncommunication rounds in training. Unlike existing paradigms, we introduce an\nalternative perspective to significantly decrease the communication cost in\nfederate learning. In this work, we first introduce a meta knowledge\nrepresentation method that extracts meta knowledge from distributed clients.\nThe extracted meta knowledge encodes essential information that can be used to\nimprove the current model. As the training progresses, the contributions of\ntraining samples to a federated model also vary. Thus, we introduce a dynamic\nweight assignment mechanism that enables samples to contribute adaptively to\nthe current model update. Then, informative meta knowledge from all active\nclients is sent to the server for model update. Training a model on the\ncombined meta knowledge without exposing original data among different clients\ncan significantly mitigate the heterogeneity issues. Moreover, to further\nameliorate data heterogeneity, we also exchange meta knowledge among clients as\nconditional initialization for local meta knowledge extraction. Extensive\nexperiments demonstrate the effectiveness and efficiency of our proposed\nmethod. Remarkably, our method outperforms the state-of-the-art by a large\nmargin (from $74.07\\%$ to $92.95\\%$) on MNIST with a restricted communication\nbudget (i.e. 10 rounds).", + "authors": "Ping Liu, Xin Yu, Joey Tianyi Zhou", + "published": "2022-09-29", + "updated": "2022-09-29", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "cs.CV" + ], + "label": "Related Work" + }, + { + "url": "http://arxiv.org/abs/1708.00489v4", + "title": "Active Learning for Convolutional Neural Networks: A Core-Set Approach", + "abstract": "Convolutional neural networks (CNNs) have been successfully applied to many\nrecognition and learning tasks using a universal recipe; training a deep model\non a very large dataset of supervised examples. However, this approach is\nrather restrictive in practice since collecting a large set of labeled images\nis very expensive. One way to ease this problem is coming up with smart ways\nfor choosing images to be labelled from a very large collection (ie. 
active\nlearning).\n Our empirical study suggests that many of the active learning heuristics in\nthe literature are not effective when applied to CNNs in batch setting.\nInspired by these limitations, we define the problem of active learning as\ncore-set selection, ie. choosing set of points such that a model learned over\nthe selected subset is competitive for the remaining data points. We further\npresent a theoretical result characterizing the performance of any selected\nsubset using the geometry of the datapoints. As an active learning algorithm,\nwe choose the subset which is expected to yield best result according to our\ncharacterization. Our experiments show that the proposed method significantly\noutperforms existing approaches in image classification experiments by a large\nmargin.", + "authors": "Ozan Sener, Silvio Savarese", + "published": "2017-08-01", + "updated": "2018-06-01", + "primary_cat": "stat.ML", + "cats": [ + "stat.ML", + "cs.CV", + "cs.LG" + ], + "label": "Related Work" + }, + { + "url": "http://arxiv.org/abs/2001.05755v1", + "title": "ScaIL: Classifier Weights Scaling for Class Incremental Learning", + "abstract": "Incremental learning is useful if an AI agent needs to integrate data from a\nstream. The problem is non trivial if the agent runs on a limited computational\nbudget and has a bounded memory of past data. In a deep learning approach, the\nconstant computational budget requires the use of a fixed architecture for all\nincremental states. The bounded memory generates data imbalance in favor of new\nclasses and a prediction bias toward them appears. This bias is commonly\ncountered by introducing a data balancing step in addition to the basic network\ntraining. We depart from this approach and propose simple but efficient scaling\nof past class classifier weights to make them more comparable to those of new\nclasses. Scaling exploits incremental state level statistics and is applied to\nthe classifiers learned in the initial state of classes in order to profit from\nall their available data. We also question the utility of the widely used\ndistillation loss component of incremental learning algorithms by comparing it\nto vanilla fine tuning in presence of a bounded memory. Evaluation is done\nagainst competitive baselines using four public datasets. Results show that the\nclassifier weights scaling and the removal of the distillation are both\nbeneficial.", + "authors": "Eden Belouadah, Adrian Popescu", + "published": "2020-01-16", + "updated": "2020-01-16", + "primary_cat": "cs.CV", + "cats": [ + "cs.CV" + ], + "label": "Related Work" + }, + { + "url": "http://arxiv.org/abs/2305.16645v2", + "title": "Summarizing Stream Data for Memory-Constrained Online Continual Learning", + "abstract": "Replay-based methods have proved their effectiveness on online continual\nlearning by rehearsing past samples from an auxiliary memory. With many efforts\nmade on improving training schemes based on the memory, however, the\ninformation carried by each sample in the memory remains under-investigated.\nUnder circumstances with restricted storage space, the informativeness of the\nmemory becomes critical for effective replay. Although some works design\nspecific strategies to select representative samples, by only employing a small\nnumber of original images, the storage space is still not well utilized. To\nthis end, we propose to Summarize the knowledge from the Stream Data (SSD) into\nmore informative samples by distilling the training characteristics of real\nimages. 
Through maintaining the consistency of training gradients and\nrelationship to the past tasks, the summarized samples are more representative\nfor the stream data compared to the original images. Extensive experiments are\nconducted on multiple online continual learning benchmarks to support that the\nproposed SSD method significantly enhances the replay effects. We demonstrate\nthat with limited extra computational overhead, SSD provides more than 3%\naccuracy boost for sequential CIFAR-100 under extremely restricted memory\nbuffer. Code in https://github.com/vimar-gu/SSD.", + "authors": "Jianyang Gu, Kai Wang, Wei Jiang, Yang You", + "published": "2023-05-26", + "updated": "2024-01-09", + "primary_cat": "cs.CV", + "cats": [ + "cs.CV" + ], + "label": "Related Work" + }, + { + "url": "http://arxiv.org/abs/1910.13540v1", + "title": "Small-GAN: Speeding Up GAN Training Using Core-sets", + "abstract": "Recent work by Brock et al. (2018) suggests that Generative Adversarial\nNetworks (GANs) benefit disproportionately from large mini-batch sizes.\nUnfortunately, using large batches is slow and expensive on conventional\nhardware. Thus, it would be nice if we could generate batches that were\neffectively large though actually small. In this work, we propose a method to\ndo this, inspired by the use of Coreset-selection in active learning. When\ntraining a GAN, we draw a large batch of samples from the prior and then\ncompress that batch using Coreset-selection. To create effectively large\nbatches of 'real' images, we create a cached dataset of Inception activations\nof each training image, randomly project them down to a smaller dimension, and\nthen use Coreset-selection on those projected activations at training time. We\nconduct experiments showing that this technique substantially reduces training\ntime and memory usage for modern GAN variants, that it reduces the fraction of\ndropped modes in a synthetic dataset, and that it allows GANs to reach a new\nstate of the art in anomaly detection.", + "authors": "Samarth Sinha, Han Zhang, Anirudh Goyal, Yoshua Bengio, Hugo Larochelle, Augustus Odena", + "published": "2019-10-29", + "updated": "2019-10-29", + "primary_cat": "stat.ML", + "cats": [ + "stat.ML", + "cs.LG" + ], + "label": "Related Work" + }, + { + "url": "http://arxiv.org/abs/2109.03764v1", + "title": "Active Learning by Acquiring Contrastive Examples", + "abstract": "Common acquisition functions for active learning use either uncertainty or\ndiversity sampling, aiming to select difficult and diverse data points from the\npool of unlabeled data, respectively. In this work, leveraging the best of both\nworlds, we propose an acquisition function that opts for selecting\n\\textit{contrastive examples}, i.e. data points that are similar in the model\nfeature space and yet the model outputs maximally different predictive\nlikelihoods. We compare our approach, CAL (Contrastive Active Learning), with a\ndiverse set of acquisition functions in four natural language understanding\ntasks and seven datasets. Our experiments show that CAL performs consistently\nbetter or equal than the best performing baseline across all tasks, on both\nin-domain and out-of-domain data. 
We also conduct an extensive ablation study\nof our method and we further analyze all actively acquired datasets showing\nthat CAL achieves a better trade-off between uncertainty and diversity compared\nto other strategies.", + "authors": "Katerina Margatina, Giorgos Vernikos, Lo\u00efc Barrault, Nikolaos Aletras", + "published": "2021-09-08", + "updated": "2021-09-08", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL", + "cs.AI", + "cs.LG" + ], + "label": "Related Work" + }, + { + "url": "http://arxiv.org/abs/2008.05723v1", + "title": "Contextual Diversity for Active Learning", + "abstract": "Requirement of large annotated datasets restrict the use of deep\nconvolutional neural networks (CNNs) for many practical applications. The\nproblem can be mitigated by using active learning (AL) techniques which, under\na given annotation budget, allow to select a subset of data that yields maximum\naccuracy upon fine tuning. State of the art AL approaches typically rely on\nmeasures of visual diversity or prediction uncertainty, which are unable to\neffectively capture the variations in spatial context. On the other hand,\nmodern CNN architectures make heavy use of spatial context for achieving highly\naccurate predictions. Since the context is difficult to evaluate in the absence\nof ground-truth labels, we introduce the notion of contextual diversity that\ncaptures the confusion associated with spatially co-occurring classes.\nContextual Diversity (CD) hinges on a crucial observation that the probability\nvector predicted by a CNN for a region of interest typically contains\ninformation from a larger receptive field. Exploiting this observation, we use\nthe proposed CD measure within two AL frameworks: (1) a core-set based strategy\nand (2) a reinforcement learning based policy, for active frame selection. Our\nextensive empirical evaluation establish state of the art results for active\nlearning on benchmark datasets of Semantic Segmentation, Object Detection and\nImage Classification. Our ablation studies show clear advantages of using\ncontextual diversity for active learning. The source code and additional\nresults are available at https://github.com/sharat29ag/CDAL.", + "authors": "Sharat Agarwal, Himanshu Arora, Saket Anand, Chetan Arora", + "published": "2020-08-13", + "updated": "2020-08-13", + "primary_cat": "cs.CV", + "cats": [ + "cs.CV" + ], + "label": "Related Work" + }, + { + "url": "http://arxiv.org/abs/2205.14959v2", + "title": "Dataset Condensation via Efficient Synthetic-Data Parameterization", + "abstract": "The great success of machine learning with massive amounts of data comes at a\nprice of huge computation costs and storage for training and tuning. Recent\nstudies on dataset condensation attempt to reduce the dependence on such\nmassive data by synthesizing a compact training dataset. However, the existing\napproaches have fundamental limitations in optimization due to the limited\nrepresentability of synthetic datasets without considering any data regularity\ncharacteristics. To this end, we propose a novel condensation framework that\ngenerates multiple synthetic data with a limited storage budget via efficient\nparameterization considering data regularity. We further analyze the\nshortcomings of the existing gradient matching-based condensation methods and\ndevelop an effective optimization technique for improving the condensation of\ntraining data information. 
We propose a unified algorithm that drastically\nimproves the quality of condensed data against the current state-of-the-art on\nCIFAR-10, ImageNet, and Speech Commands.", + "authors": "Jang-Hyun Kim, Jinuk Kim, Seong Joon Oh, Sangdoo Yun, Hwanjun Song, Joonhyun Jeong, Jung-Woo Ha, Hyun Oh Song", + "published": "2022-05-30", + "updated": "2022-06-02", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG" + ], + "label": "Related Work" + }, + { + "url": "http://arxiv.org/abs/2103.00123v2", + "title": "GRAD-MATCH: Gradient Matching based Data Subset Selection for Efficient Deep Model Training", + "abstract": "The great success of modern machine learning models on large datasets is\ncontingent on extensive computational resources with high financial and\nenvironmental costs. One way to address this is by extracting subsets that\ngeneralize on par with the full data. In this work, we propose a general\nframework, GRAD-MATCH, which finds subsets that closely match the gradient of\nthe training or validation set. We find such subsets effectively using an\northogonal matching pursuit algorithm. We show rigorous theoretical and\nconvergence guarantees of the proposed algorithm and, through our extensive\nexperiments on real-world datasets, show the effectiveness of our proposed\nframework. We show that GRAD-MATCH significantly and consistently outperforms\nseveral recent data-selection algorithms and achieves the best\naccuracy-efficiency trade-off. GRAD-MATCH is available as a part of the CORDS\ntoolkit: \\url{https://github.com/decile-team/cords}.", + "authors": "Krishnateja Killamsetty, Durga Sivasubramanian, Ganesh Ramakrishnan, Abir De, Rishabh Iyer", + "published": "2021-02-27", + "updated": "2021-06-11", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG" + ], + "label": "Related Work" + }, + { + "url": "http://arxiv.org/abs/2312.15927v3", + "title": "M3D: Dataset Condensation by Minimizing Maximum Mean Discrepancy", + "abstract": "Training state-of-the-art (SOTA) deep models often requires extensive data,\nresulting in substantial training and storage costs. To address these\nchallenges, dataset condensation has been developed to learn a small synthetic\nset that preserves essential information from the original large-scale dataset.\nNowadays, optimization-oriented methods have been the primary method in the\nfield of dataset condensation for achieving SOTA results. However, the bi-level\noptimization process hinders the practical application of such methods to\nrealistic and larger datasets. To enhance condensation efficiency, previous\nworks proposed Distribution-Matching (DM) as an alternative, which\nsignificantly reduces the condensation cost. Nonetheless, current DM-based\nmethods still yield less comparable results to SOTA optimization-oriented\nmethods. In this paper, we argue that existing DM-based methods overlook the\nhigher-order alignment of the distributions, which may lead to sub-optimal\nmatching results. Inspired by this, we present a novel DM-based method named\nM3D for dataset condensation by Minimizing the Maximum Mean Discrepancy between\nfeature representations of the synthetic and real images. By embedding their\ndistributions in a reproducing kernel Hilbert space, we align all orders of\nmoments of the distributions of real and synthetic images, resulting in a more\ngeneralized condensed set. 
Notably, our method even surpasses the SOTA\noptimization-oriented method IDC on the high-resolution ImageNet dataset.\nExtensive analysis is conducted to verify the effectiveness of the proposed\nmethod. Source codes are available at https://github.com/Hansong-Zhang/M3D.", + "authors": "Hansong Zhang, Shikun Li, Pengju Wang, Dan Zeng, Shiming Ge", + "published": "2023-12-26", + "updated": "2024-02-25", + "primary_cat": "cs.CV", + "cats": [ + "cs.CV", + "cs.LG" + ], + "label": "Related Work" + }, + { + "url": "http://arxiv.org/abs/2202.02916v3", + "title": "Dataset Condensation with Contrastive Signals", + "abstract": "Recent studies have demonstrated that gradient matching-based dataset\nsynthesis, or dataset condensation (DC), methods can achieve state-of-the-art\nperformance when applied to data-efficient learning tasks. However, in this\nstudy, we prove that the existing DC methods can perform worse than the random\nselection method when task-irrelevant information forms a significant part of\nthe training dataset. We attribute this to the lack of participation of the\ncontrastive signals between the classes resulting from the class-wise gradient\nmatching strategy. To address this problem, we propose Dataset Condensation\nwith Contrastive signals (DCC) by modifying the loss function to enable the DC\nmethods to effectively capture the differences between classes. In addition, we\nanalyze the new loss function in terms of training dynamics by tracking the\nkernel velocity. Furthermore, we introduce a bi-level warm-up strategy to\nstabilize the optimization. Our experimental results indicate that while the\nexisting methods are ineffective for fine-grained image classification tasks,\nthe proposed method can successfully generate informative synthetic datasets\nfor the same tasks. Moreover, we demonstrate that the proposed method\noutperforms the baselines even on benchmark datasets such as SVHN, CIFAR-10,\nand CIFAR-100. Finally, we demonstrate the high applicability of the proposed\nmethod by applying it to continual learning tasks.", + "authors": "Saehyung Lee, Sanghyuk Chun, Sangwon Jung, Sangdoo Yun, Sungroh Yoon", + "published": "2022-02-07", + "updated": "2022-06-16", + "primary_cat": "cs.CV", + "cats": [ + "cs.CV", + "cs.LG" + ], + "label": "Related Work" + }, + { + "url": "http://arxiv.org/abs/2211.10586v4", + "title": "Scaling Up Dataset Distillation to ImageNet-1K with Constant Memory", + "abstract": "Dataset Distillation is a newly emerging area that aims to distill large\ndatasets into much smaller and highly informative synthetic ones to accelerate\ntraining and reduce storage. Among various dataset distillation methods,\ntrajectory-matching-based methods (MTT) have achieved SOTA performance in many\ntasks, e.g., on CIFAR-10/100. However, due to exorbitant memory consumption\nwhen unrolling optimization through SGD steps, MTT fails to scale to\nlarge-scale datasets such as ImageNet-1K. Can we scale this SOTA method to\nImageNet-1K and does its effectiveness on CIFAR transfer to ImageNet-1K? To\nanswer these questions, we first propose a procedure to exactly compute the\nunrolled gradient with constant memory complexity, which allows us to scale MTT\nto ImageNet-1K seamlessly with ~6x reduction in memory footprint. We further\ndiscover that it is challenging for MTT to handle datasets with a large number\nof classes, and propose a novel soft label assignment that drastically improves\nits convergence. 
The resulting algorithm sets new SOTA on ImageNet-1K: we can\nscale up to 50 IPCs (Image Per Class) on ImageNet-1K on a single GPU (all\nprevious methods can only scale to 2 IPCs on ImageNet-1K), leading to the best\naccuracy (only 5.9% accuracy drop against full dataset training) while\nutilizing only 4.2% of the number of data points - an 18.2% absolute gain over\nprior SOTA. Our code is available at https://github.com/justincui03/tesla", + "authors": "Justin Cui, Ruochen Wang, Si Si, Cho-Jui Hsieh", + "published": "2022-11-19", + "updated": "2023-10-31", + "primary_cat": "cs.CV", + "cats": [ + "cs.CV", + "cs.AI" + ], + "label": "Related Work" + }, + { + "url": "http://arxiv.org/abs/1711.07971v3", + "title": "Non-local Neural Networks", + "abstract": "Both convolutional and recurrent operations are building blocks that process\none local neighborhood at a time. In this paper, we present non-local\noperations as a generic family of building blocks for capturing long-range\ndependencies. Inspired by the classical non-local means method in computer\nvision, our non-local operation computes the response at a position as a\nweighted sum of the features at all positions. This building block can be\nplugged into many computer vision architectures. On the task of video\nclassification, even without any bells and whistles, our non-local models can\ncompete or outperform current competition winners on both Kinetics and Charades\ndatasets. In static image recognition, our non-local models improve object\ndetection/segmentation and pose estimation on the COCO suite of tasks. Code is\navailable at https://github.com/facebookresearch/video-nonlocal-net .", + "authors": "Xiaolong Wang, Ross Girshick, Abhinav Gupta, Kaiming He", + "published": "2017-11-21", + "updated": "2018-04-13", + "primary_cat": "cs.CV", + "cats": [ + "cs.CV" + ], + "label": "Related Work" + }, + { + "url": "http://arxiv.org/abs/2110.04181v3", + "title": "Dataset Condensation with Distribution Matching", + "abstract": "Computational cost of training state-of-the-art deep models in many learning\nproblems is rapidly increasing due to more sophisticated models and larger\ndatasets. A recent promising direction for reducing training cost is dataset\ncondensation that aims to replace the original large training set with a\nsignificantly smaller learned synthetic set while preserving the original\ninformation. While training deep models on the small set of condensed images\ncan be extremely fast, their synthesis remains computationally expensive due to\nthe complex bi-level optimization and second-order derivative computation. In\nthis work, we propose a simple yet effective method that synthesizes condensed\nimages by matching feature distributions of the synthetic and original training\nimages in many sampled embedding spaces. Our method significantly reduces the\nsynthesis cost while achieving comparable or better performance. 
Thanks to its\nefficiency, we apply our method to more realistic and larger datasets with\nsophisticated neural architectures and obtain a significant performance boost.\nWe also show promising practical benefits of our method in continual learning\nand neural architecture search.", + "authors": "Bo Zhao, Hakan Bilen", + "published": "2021-10-08", + "updated": "2022-12-22", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "cs.CV" + ], + "label": "Related Work" + }, + { + "url": "http://arxiv.org/abs/2203.01531v2", + "title": "CAFE: Learning to Condense Dataset by Aligning Features", + "abstract": "Dataset condensation aims at reducing the network training effort through\ncondensing a cumbersome training set into a compact synthetic one.\nState-of-the-art approaches largely rely on learning the synthetic data by\nmatching the gradients between the real and synthetic data batches. Despite the\nintuitive motivation and promising results, such gradient-based methods, by\nnature, easily overfit to a biased set of samples that produce dominant\ngradients, and thus lack global supervision of data distribution. In this\npaper, we propose a novel scheme to Condense dataset by Aligning FEatures\n(CAFE), which explicitly attempts to preserve the real-feature distribution as\nwell as the discriminant power of the resulting synthetic set, lending itself\nto strong generalization capability to various architectures. At the heart of\nour approach is an effective strategy to align features from the real and\nsynthetic data across various scales, while accounting for the classification\nof real samples. Our scheme is further backed up by a novel dynamic bi-level\noptimization, which adaptively adjusts parameter updates to prevent\nover-/under-fitting. We validate the proposed CAFE across various datasets, and\ndemonstrate that it generally outperforms the state of the art: on the SVHN\ndataset, for example, the performance gain is up to 11%. Extensive experiments\nand analyses verify the effectiveness and necessity of proposed designs.", + "authors": "Kai Wang, Bo Zhao, Xiangyu Peng, Zheng Zhu, Shuo Yang, Shuo Wang, Guan Huang, Hakan Bilen, Xinchao Wang, Yang You", + "published": "2022-03-03", + "updated": "2022-03-27", + "primary_cat": "cs.CV", + "cats": [ + "cs.CV" + ], + "label": "Related Work" + }, + { + "url": "http://arxiv.org/abs/1606.03476v1", + "title": "Generative Adversarial Imitation Learning", + "abstract": "Consider learning a policy from example expert behavior, without interaction\nwith the expert or access to reinforcement signal. One approach is to recover\nthe expert's cost function with inverse reinforcement learning, then extract a\npolicy from that cost function with reinforcement learning. This approach is\nindirect and can be slow. We propose a new general framework for directly\nextracting a policy from data, as if it were obtained by reinforcement learning\nfollowing inverse reinforcement learning. 
We show that a certain instantiation\nof our framework draws an analogy between imitation learning and generative\nadversarial networks, from which we derive a model-free imitation learning\nalgorithm that obtains significant performance gains over existing model-free\nmethods in imitating complex behaviors in large, high-dimensional environments.", + "authors": "Jonathan Ho, Stefano Ermon", + "published": "2016-06-10", + "updated": "2016-06-10", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "cs.AI" + ], + "label": "Related Work" + }, + { + "url": "http://arxiv.org/abs/2310.00093v2", + "title": "DataDAM: Efficient Dataset Distillation with Attention Matching", + "abstract": "Researchers have long tried to minimize training costs in deep learning while\nmaintaining strong generalization across diverse datasets. Emerging research on\ndataset distillation aims to reduce training costs by creating a small\nsynthetic set that contains the information of a larger real dataset and\nultimately achieves test accuracy equivalent to a model trained on the whole\ndataset. Unfortunately, the synthetic data generated by previous methods are\nnot guaranteed to distribute and discriminate as well as the original training\ndata, and they incur significant computational costs. Despite promising\nresults, there still exists a significant performance gap between models\ntrained on condensed synthetic sets and those trained on the whole dataset. In\nthis paper, we address these challenges using efficient Dataset Distillation\nwith Attention Matching (DataDAM), achieving state-of-the-art performance while\nreducing training costs. Specifically, we learn synthetic images by matching\nthe spatial attention maps of real and synthetic data generated by different\nlayers within a family of randomly initialized neural networks. Our method\noutperforms the prior methods on several datasets, including CIFAR10/100,\nTinyImageNet, ImageNet-1K, and subsets of ImageNet-1K across most of the\nsettings, and achieves improvements of up to 6.5% and 4.1% on CIFAR100 and\nImageNet-1K, respectively. We also show that our high-quality distilled images\nhave practical benefits for downstream applications, such as continual learning\nand neural architecture search.", + "authors": "Ahmad Sajedi, Samir Khaki, Ehsan Amjadian, Lucy Z. Liu, Yuri A. Lawryshyn, Konstantinos N. Plataniotis", + "published": "2023-09-29", + "updated": "2023-10-31", + "primary_cat": "cs.CV", + "cats": [ + "cs.CV", + "cs.LG" + ], + "label": "Related Work" + }, + { + "url": "http://arxiv.org/abs/2006.05929v3", + "title": "Dataset Condensation with Gradient Matching", + "abstract": "As the state-of-the-art machine learning methods in many fields rely on\nlarger datasets, storing datasets and training models on them become\nsignificantly more expensive. This paper proposes a training set synthesis\ntechnique for data-efficient learning, called Dataset Condensation, that learns\nto condense large dataset into a small set of informative synthetic samples for\ntraining deep neural networks from scratch. We formulate this goal as a\ngradient matching problem between the gradients of deep neural network weights\nthat are trained on the original and our synthetic data. We rigorously evaluate\nits performance in several computer vision benchmarks and demonstrate that it\nsignificantly outperforms the state-of-the-art methods. 
Finally we explore the\nuse of our method in continual learning and neural architecture search and\nreport promising gains when limited memory and computations are available.", + "authors": "Bo Zhao, Konda Reddy Mopuri, Hakan Bilen", + "published": "2020-06-10", + "updated": "2021-03-08", + "primary_cat": "cs.CV", + "cats": [ + "cs.CV", + "cs.LG" + ], + "label": "Related Work" + }, + { + "url": "http://arxiv.org/abs/1912.07768v1", + "title": "Generative Teaching Networks: Accelerating Neural Architecture Search by Learning to Generate Synthetic Training Data", + "abstract": "This paper investigates the intriguing question of whether we can create\nlearning algorithms that automatically generate training data, learning\nenvironments, and curricula in order to help AI agents rapidly learn. We show\nthat such algorithms are possible via Generative Teaching Networks (GTNs), a\ngeneral approach that is, in theory, applicable to supervised, unsupervised,\nand reinforcement learning, although our experiments only focus on the\nsupervised case. GTNs are deep neural networks that generate data and/or\ntraining environments that a learner (e.g. a freshly initialized neural\nnetwork) trains on for a few SGD steps before being tested on a target task. We\nthen differentiate through the entire learning process via meta-gradients to\nupdate the GTN parameters to improve performance on the target task. GTNs have\nthe beneficial property that they can theoretically generate any type of data\nor training environment, making their potential impact large. This paper\nintroduces GTNs, discusses their potential, and showcases that they can\nsubstantially accelerate learning. We also demonstrate a practical and exciting\napplication of GTNs: accelerating the evaluation of candidate architectures for\nneural architecture search (NAS), which is rate-limited by such evaluations,\nenabling massive speed-ups in NAS. GTN-NAS improves the NAS state of the art,\nfinding higher performing architectures when controlling for the search\nproposal mechanism. GTN-NAS also is competitive with the overall state of the\nart approaches, which achieve top performance while using orders of magnitude\nless computation than typical NAS methods. Speculating forward, GTNs may\nrepresent a first step toward the ambitious goal of algorithms that generate\ntheir own training data and, in doing so, open a variety of interesting new\nresearch questions and directions.", + "authors": "Felipe Petroski Such, Aditya Rawal, Joel Lehman, Kenneth O. Stanley, Jeff Clune", + "published": "2019-12-17", + "updated": "2019-12-17", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "stat.ML" + ], + "label": "Related Work" + }, + { + "url": "http://arxiv.org/abs/1812.05159v3", + "title": "An Empirical Study of Example Forgetting during Deep Neural Network Learning", + "abstract": "Inspired by the phenomenon of catastrophic forgetting, we investigate the\nlearning dynamics of neural networks as they train on single classification\ntasks. Our goal is to understand whether a related phenomenon occurs when data\ndoes not undergo a clear distributional shift. We define a `forgetting event'\nto have occurred when an individual training example transitions from being\nclassified correctly to incorrectly over the course of learning. 
Across several\nbenchmark data sets, we find that: (i) certain examples are forgotten with high\nfrequency, and some not at all; (ii) a data set's (un)forgettable examples\ngeneralize across neural architectures; and (iii) based on forgetting dynamics,\na significant fraction of examples can be omitted from the training data set\nwhile still maintaining state-of-the-art generalization performance.", + "authors": "Mariya Toneva, Alessandro Sordoni, Remi Tachet des Combes, Adam Trischler, Yoshua Bengio, Geoffrey J. Gordon", + "published": "2018-12-12", + "updated": "2019-11-15", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "stat.ML" + ], + "label": "Related Work" + }, + { + "url": "http://arxiv.org/abs/2102.08259v2", + "title": "Dataset Condensation with Differentiable Siamese Augmentation", + "abstract": "In many machine learning problems, large-scale datasets have become the\nde-facto standard to train state-of-the-art deep networks at the price of heavy\ncomputation load. In this paper, we focus on condensing large training sets\ninto significantly smaller synthetic sets which can be used to train deep\nneural networks from scratch with minimum drop in performance. Inspired from\nthe recent training set synthesis methods, we propose Differentiable Siamese\nAugmentation that enables effective use of data augmentation to synthesize more\ninformative synthetic images and thus achieves better performance when training\nnetworks with augmentations. Experiments on multiple image classification\nbenchmarks demonstrate that the proposed method obtains substantial gains over\nthe state-of-the-art, 7% improvements on CIFAR10 and CIFAR100 datasets. We show\nwith only less than 1% data that our method achieves 99.6%, 94.9%, 88.5%, 71.5%\nrelative performance on MNIST, FashionMNIST, SVHN, CIFAR10 respectively. We\nalso explore the use of our method in continual learning and neural\narchitecture search, and show promising results.", + "authors": "Bo Zhao, Hakan Bilen", + "published": "2021-02-16", + "updated": "2021-06-10", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "cs.CV" + ], + "label": "Related Work" + }, + { + "url": "http://arxiv.org/abs/1612.03928v3", + "title": "Paying More Attention to Attention: Improving the Performance of Convolutional Neural Networks via Attention Transfer", + "abstract": "Attention plays a critical role in human visual experience. Furthermore, it\nhas recently been demonstrated that attention can also play an important role\nin the context of applying artificial neural networks to a variety of tasks\nfrom fields such as computer vision and NLP. In this work we show that, by\nproperly defining attention for convolutional neural networks, we can actually\nuse this type of information in order to significantly improve the performance\nof a student CNN network by forcing it to mimic the attention maps of a\npowerful teacher network. To that end, we propose several novel methods of\ntransferring attention, showing consistent improvement across a variety of\ndatasets and convolutional neural network architectures. 
Code and models for\nour experiments are available at\nhttps://github.com/szagoruyko/attention-transfer", + "authors": "Sergey Zagoruyko, Nikos Komodakis", + "published": "2016-12-12", + "updated": "2017-02-12", + "primary_cat": "cs.CV", + "cats": [ + "cs.CV" + ], + "label": "Related Work" + }, + { + "url": "http://arxiv.org/abs/1802.09841v1", + "title": "Adversarial Active Learning for Deep Networks: a Margin Based Approach", + "abstract": "We propose a new active learning strategy designed for deep neural networks.\nThe goal is to minimize the number of data annotation queried from an oracle\nduring training. Previous active learning strategies scalable for deep networks\nwere mostly based on uncertain sample selection. In this work, we focus on\nexamples lying close to the decision boundary. Based on theoretical works on\nmargin theory for active learning, we know that such examples may help to\nconsiderably decrease the number of annotations. While measuring the exact\ndistance to the decision boundaries is intractable, we propose to rely on\nadversarial examples. We do not consider anymore them as a threat instead we\nexploit the information they provide on the distribution of the input space in\norder to approximate the distance to decision boundaries. We demonstrate\nempirically that adversarial active queries yield faster convergence of CNNs\ntrained on MNIST, the Shoe-Bag and the Quick-Draw datasets.", + "authors": "Melanie Ducoffe, Frederic Precioso", + "published": "2018-02-27", + "updated": "2018-02-27", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "cs.CV", + "stat.ML" + ], + "label": "Related Work" + }, + { + "url": "http://arxiv.org/abs/1807.06521v2", + "title": "CBAM: Convolutional Block Attention Module", + "abstract": "We propose Convolutional Block Attention Module (CBAM), a simple yet\neffective attention module for feed-forward convolutional neural networks.\nGiven an intermediate feature map, our module sequentially infers attention\nmaps along two separate dimensions, channel and spatial, then the attention\nmaps are multiplied to the input feature map for adaptive feature refinement.\nBecause CBAM is a lightweight and general module, it can be integrated into any\nCNN architectures seamlessly with negligible overheads and is end-to-end\ntrainable along with base CNNs. We validate our CBAM through extensive\nexperiments on ImageNet-1K, MS~COCO detection, and VOC~2007 detection datasets.\nOur experiments show consistent improvements in classification and detection\nperformances with various models, demonstrating the wide applicability of CBAM.\nThe code and models will be publicly available.", + "authors": "Sanghyun Woo, Jongchan Park, Joon-Young Lee, In So Kweon", + "published": "2018-07-17", + "updated": "2018-07-18", + "primary_cat": "cs.CV", + "cats": [ + "cs.CV" + ], + "label": "Related Work" + }, + { + "url": "http://arxiv.org/abs/1203.3472v1", + "title": "Super-Samples from Kernel Herding", + "abstract": "We extend the herding algorithm to continuous spaces by using the kernel\ntrick. The resulting \"kernel herding\" algorithm is an infinite memory\ndeterministic process that learns to approximate a PDF with a collection of\nsamples. We show that kernel herding decreases the error of expectations of\nfunctions in the Hilbert space at a rate O(1/T) which is much faster than the\nusual O(1/√T) for iid random samples. 
We illustrate kernel herding by\napproximating Bayesian predictive distributions.", + "authors": "Yutian Chen, Max Welling, Alex Smola", + "published": "2012-03-15", + "updated": "2012-03-15", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "stat.ML" + ], + "label": "Related Work" + }, + { + "url": "http://arxiv.org/abs/2107.07075v2", + "title": "Deep Learning on a Data Diet: Finding Important Examples Early in Training", + "abstract": "Recent success in deep learning has partially been driven by training\nincreasingly overparametrized networks on ever larger datasets. It is therefore\nnatural to ask: how much of the data is superfluous, which examples are\nimportant for generalization, and how do we find them? In this work, we make\nthe striking observation that, in standard vision datasets, simple scores\naveraged over several weight initializations can be used to identify important\nexamples very early in training. We propose two such scores -- the Gradient\nNormed (GraNd) and the Error L2-Norm (EL2N) scores -- and demonstrate their\nefficacy on a range of architectures and datasets by pruning significant\nfractions of training data without sacrificing test accuracy. In fact, using\nEL2N scores calculated a few epochs into training, we can prune half of the\nCIFAR10 training set while slightly improving test accuracy. Furthermore, for a\ngiven dataset, EL2N scores from one architecture or hyperparameter\nconfiguration generalize to other configurations. Compared to recent work that\nprunes data by discarding examples that are rarely forgotten over the course of\ntraining, our scores use only local information early in training. We also use\nour scores to detect noisy examples and study training dynamics through the\nlens of important examples -- we investigate how the data distribution shapes\nthe loss surface and identify subspaces of the model's data representation that\nare relatively stable over training.", + "authors": "Mansheej Paul, Surya Ganguli, Gintare Karolina Dziugaite", + "published": "2021-07-15", + "updated": "2023-03-28", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG" + ], + "label": "Related Work" + }, + { + "url": "http://arxiv.org/abs/2203.11932v1", + "title": "Dataset Distillation by Matching Training Trajectories", + "abstract": "Dataset distillation is the task of synthesizing a small dataset such that a\nmodel trained on the synthetic set will match the test accuracy of the model\ntrained on the full dataset. In this paper, we propose a new formulation that\noptimizes our distilled data to guide networks to a similar state as those\ntrained on real data across many training steps. Given a network, we train it\nfor several iterations on our distilled data and optimize the distilled data\nwith respect to the distance between the synthetically trained parameters and\nthe parameters trained on real data. To efficiently obtain the initial and\ntarget network parameters for large-scale datasets, we pre-compute and store\ntraining trajectories of expert networks trained on the real dataset. Our\nmethod handily outperforms existing methods and also allows us to distill\nhigher-resolution visual data.", + "authors": "George Cazenavette, Tongzhou Wang, Antonio Torralba, Alexei A. 
Efros, Jun-Yan Zhu", + "published": "2022-03-22", + "updated": "2022-03-22", + "primary_cat": "cs.CV", + "cats": [ + "cs.CV", + "cs.AI", + "cs.LG" + ], + "label": "Related Work" + }, + { + "url": "http://arxiv.org/abs/2012.10630v4", + "title": "GLISTER: Generalization based Data Subset Selection for Efficient and Robust Learning", + "abstract": "Large scale machine learning and deep models are extremely data-hungry.\nUnfortunately, obtaining large amounts of labeled data is expensive, and\ntraining state-of-the-art models (with hyperparameter tuning) requires\nsignificant computing resources and time. Secondly, real-world data is noisy\nand imbalanced. As a result, several recent papers try to make the training\nprocess more efficient and robust. However, most existing work either focuses\non robustness or efficiency, but not both. In this work, we introduce Glister,\na GeneraLIzation based data Subset selecTion for Efficient and Robust learning\nframework. We formulate Glister as a mixed discrete-continuous bi-level\noptimization problem to select a subset of the training data, which maximizes\nthe log-likelihood on a held-out validation set. Next, we propose an iterative\nonline algorithm Glister-Online, which performs data selection iteratively\nalong with the parameter updates and can be applied to any loss-based learning\nalgorithm. We then show that for a rich class of loss functions including\ncross-entropy, hinge-loss, squared-loss, and logistic-loss, the inner discrete\ndata selection is an instance of (weakly) submodular optimization, and we\nanalyze conditions for which Glister-Online reduces the validation loss and\nconverges. Finally, we propose Glister-Active, an extension to batch active\nlearning, and we empirically demonstrate the performance of Glister on a wide\nrange of tasks including, (a) data selection to reduce training time, (b)\nrobust learning under label noise and imbalance settings, and (c) batch-active\nlearning with several deep and shallow models. We show that our framework\nimproves upon state of the art both in efficiency and accuracy (in cases (a)\nand (c)) and is more efficient compared to other state-of-the-art robust\nlearning algorithms in case (b).", + "authors": "Krishnateja Killamsetty, Durga Sivasubramanian, Ganesh Ramakrishnan, Rishabh Iyer", + "published": "2020-12-19", + "updated": "2021-06-11", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "cs.AI" + ], + "label": "Related Work" + }, + { + "url": "http://arxiv.org/abs/2310.05773v2", + "title": "Towards Lossless Dataset Distillation via Difficulty-Aligned Trajectory Matching", + "abstract": "The ultimate goal of Dataset Distillation is to synthesize a small synthetic\ndataset such that a model trained on this synthetic set will perform equally\nwell as a model trained on the full, real dataset. Until now, no method of\nDataset Distillation has reached this completely lossless goal, in part due to\nthe fact that previous methods only remain effective when the total number of\nsynthetic samples is extremely small. Since only so much information can be\ncontained in such a small number of samples, it seems that to achieve truly\nlossless dataset distillation, we must develop a distillation method that remains\neffective as the size of the synthetic dataset grows. In this work, we present\nsuch an algorithm and elucidate why existing methods fail to generate larger,\nhigh-quality synthetic sets. 
Current state-of-the-art methods rely on\ntrajectory-matching, or optimizing the synthetic data to induce similar\nlong-term training dynamics as the real data. We empirically find that the\ntraining stage of the trajectories we choose to match (i.e., early or late)\ngreatly affects the effectiveness of the distilled dataset. Specifically, early\ntrajectories (where the teacher network learns easy patterns) work well for a\nlow-cardinality synthetic set since there are fewer examples wherein to\ndistribute the necessary information. Conversely, late trajectories (where the\nteacher network learns hard patterns) provide better signals for larger\nsynthetic sets since there are now enough samples to represent the necessary\ncomplex patterns. Based on our findings, we propose to align the difficulty of\nthe generated patterns with the size of the synthetic dataset. In doing so, we\nsuccessfully scale trajectory matching-based methods to larger synthetic\ndatasets, achieving lossless dataset distillation for the very first time. Code\nand distilled datasets are available at https://gzyaftermath.github.io/DATM.", + "authors": "Ziyao Guo, Kai Wang, George Cazenavette, Hui Li, Kaipeng Zhang, Yang You", + "published": "2023-10-09", + "updated": "2024-03-18", + "primary_cat": "cs.CV", + "cats": [ + "cs.CV" + ], + "label": "Related Work" + }, + { + "url": "http://arxiv.org/abs/1409.0473v7", + "title": "Neural Machine Translation by Jointly Learning to Align and Translate", + "abstract": "Neural machine translation is a recently proposed approach to machine\ntranslation. Unlike the traditional statistical machine translation, the neural\nmachine translation aims at building a single neural network that can be\njointly tuned to maximize the translation performance. The models proposed\nrecently for neural machine translation often belong to a family of\nencoder-decoders and consists of an encoder that encodes a source sentence into\na fixed-length vector from which a decoder generates a translation. In this\npaper, we conjecture that the use of a fixed-length vector is a bottleneck in\nimproving the performance of this basic encoder-decoder architecture, and\npropose to extend this by allowing a model to automatically (soft-)search for\nparts of a source sentence that are relevant to predicting a target word,\nwithout having to form these parts as a hard segment explicitly. With this new\napproach, we achieve a translation performance comparable to the existing\nstate-of-the-art phrase-based system on the task of English-to-French\ntranslation. Furthermore, qualitative analysis reveals that the\n(soft-)alignments found by the model agree well with our intuition.", + "authors": "Dzmitry Bahdanau, Kyunghyun Cho, Yoshua Bengio", + "published": "2014-09-01", + "updated": "2016-05-19", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL", + "cs.LG", + "cs.NE", + "stat.ML" + ], + "label": "Related Work" + }, + { + "url": "http://arxiv.org/abs/2305.12330v1", + "title": "Task-agnostic Distillation of Encoder-Decoder Language Models", + "abstract": "Finetuning pretrained language models (LMs) have enabled appealing\nperformance on a diverse array of tasks. The intriguing task-agnostic property\nhas driven a shifted focus from task-specific to task-agnostic distillation of\nLMs. 
While task-agnostic, compute-efficient, performance-preserved LMs can be\nyielded by task-agnostic distillation, previous studies mainly sit in\ndistillation of either encoder-only LMs (e.g., BERT) or decoder-only ones\n(e.g., GPT) yet largely neglect that distillation of encoder-decoder LMs (e.g.,\nT5) can posit very distinguished behaviors. Frustratingly, we discover that\nexisting task-agnostic distillation methods can fail to handle the distillation\nof encoder-decoder LMs. To the demand, we explore a few paths and uncover a\npath named as MiniEnD that successfully tackles the distillation of\nencoder-decoder LMs in a task-agnostic fashion. We examine MiniEnD on language\nunderstanding and abstractive summarization. The results showcase that MiniEnD\nis generally effective and is competitive compared to other alternatives. We\nfurther scale MiniEnD up to distillation of 3B encoder-decoder language models\nwith interpolated distillation. The results imply the opportunities and\nchallenges in distilling large language models (e.g., LLaMA).", + "authors": "Chen Zhang, Yang Yang, Jingang Wang, Dawei Song", + "published": "2023-05-21", + "updated": "2023-05-21", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL" + ], + "category": "Distillation" + }, + { + "url": "http://arxiv.org/abs/2205.16004v3", + "title": "What Knowledge Gets Distilled in Knowledge Distillation?", + "abstract": "Knowledge distillation aims to transfer useful information from a teacher\nnetwork to a student network, with the primary goal of improving the student's\nperformance for the task at hand. Over the years, there has a been a deluge of\nnovel techniques and use cases of knowledge distillation. Yet, despite the\nvarious improvements, there seems to be a glaring gap in the community's\nfundamental understanding of the process. Specifically, what is the knowledge\nthat gets distilled in knowledge distillation? In other words, in what ways\ndoes the student become similar to the teacher? Does it start to localize\nobjects in the same way? Does it get fooled by the same adversarial samples?\nDoes its data invariance properties become similar? Our work presents a\ncomprehensive study to try to answer these questions. We show that existing\nmethods can indeed indirectly distill these properties beyond improving task\nperformance. We further study why knowledge distillation might work this way,\nand show that our findings have practical implications as well.", + "authors": "Utkarsh Ojha, Yuheng Li, Anirudh Sundara Rajan, Yingyu Liang, Yong Jae Lee", + "published": "2022-05-31", + "updated": "2023-11-06", + "primary_cat": "cs.CV", + "cats": [ + "cs.CV", + "cs.LG" + ], + "category": "Distillation" + }, + { + "url": "http://arxiv.org/abs/2006.08572v3", + "title": "Flexible Dataset Distillation: Learn Labels Instead of Images", + "abstract": "We study the problem of dataset distillation - creating a small set of\nsynthetic examples capable of training a good model. In particular, we study\nthe problem of label distillation - creating synthetic labels for a small set\nof real images, and show it to be more effective than the prior image-based\napproach to dataset distillation. Methodologically, we introduce a more robust\nand flexible meta-learning algorithm for distillation, as well as an effective\nfirst-order strategy based on convex optimization layers. Distilling labels\nwith our new algorithm leads to improved results over prior image-based\ndistillation. 
More importantly, it leads to clear improvements in flexibility\nof the distilled dataset in terms of compatibility with off-the-shelf\noptimizers and diverse neural architectures. Interestingly, label distillation\ncan also be applied across datasets, for example enabling learning Japanese\ncharacter recognition by training only on synthetically labeled English\nletters.", + "authors": "Ondrej Bohdal, Yongxin Yang, Timothy Hospedales", + "published": "2020-06-15", + "updated": "2020-12-12", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "stat.ML" + ], + "category": "Distillation" + }, + { + "url": "http://arxiv.org/abs/2312.00739v1", + "title": "Adversarial Score Distillation: When score distillation meets GAN", + "abstract": "Existing score distillation methods are sensitive to classifier-free guidance\n(CFG) scale: manifested as over-smoothness or instability at small CFG scales,\nwhile over-saturation at large ones. To explain and analyze these issues, we\nrevisit the derivation of Score Distillation Sampling (SDS) and decipher\nexisting score distillation with the Wasserstein Generative Adversarial Network\n(WGAN) paradigm. With the WGAN paradigm, we find that existing score\ndistillation either employs a fixed sub-optimal discriminator or conducts\nincomplete discriminator optimization, resulting in the scale-sensitive issue.\nWe propose the Adversarial Score Distillation (ASD), which maintains an\noptimizable discriminator and updates it using the complete optimization\nobjective. Experiments show that the proposed ASD performs favorably in 2D\ndistillation and text-to-3D tasks against existing methods. Furthermore, to\nexplore the generalization ability of our WGAN paradigm, we extend ASD to the\nimage editing task, which achieves competitive results. The project page and\ncode are at https://github.com/2y7c3/ASD.", + "authors": "Min Wei, Jingkai Zhou, Junyao Sun, Xuesong Zhang", + "published": "2023-12-01", + "updated": "2023-12-01", + "primary_cat": "cs.CV", + "cats": [ + "cs.CV" + ], + "category": "Distillation" + }, + { + "url": "http://arxiv.org/abs/quant-ph/0108029v1", + "title": "Distillability, Bell inequalities and multiparticle bound entanglement", + "abstract": "We study the relation between violation of Bell inequalities and\ndistillability properties of quantum states. Recently, D\\\"ur has shown that\nthere are some multiparticle bound entangled states, non-separable and\nnon-distillable, that violate a Bell inequality. We prove that for all the\nstates violating this inequality there exist at least one splitting of the\nparties into two groups such that some pure-state entanglement can be\ndistilled, obtaining a connection between Bell inequalities and bipartite\ndistillable entanglement.", + "authors": "A. Acin", + "published": "2001-08-07", + "updated": "2001-08-07", + "primary_cat": "quant-ph", + "cats": [ + "quant-ph" + ], + "category": "Distillation" + }, + { + "url": "http://arxiv.org/abs/2403.03846v1", + "title": "On the Effectiveness of Distillation in Mitigating Backdoors in Pre-trained Encoder", + "abstract": "In this paper, we study a defense against poisoned encoders in SSL called\ndistillation, which is a defense used in supervised learning originally.\nDistillation aims to distill knowledge from a given model (a.k.a the teacher\nnet) and transfer it to another (a.k.a the student net). Now, we use it to\ndistill benign knowledge from poisoned pre-trained encoders and transfer it to\na new encoder, resulting in a clean pre-trained encoder. 
In particular, we\nconduct an empirical study on the effectiveness and performance of distillation\nagainst poisoned encoders. Using two state-of-the-art backdoor attacks against\npre-trained image encoders and four commonly used image classification\ndatasets, our experimental results show that distillation can reduce attack\nsuccess rate from 80.87% to 27.51% while suffering a 6.35% loss in accuracy.\nMoreover, we investigate the impact of three core components of distillation on\nperformance: teacher net, student net, and distillation loss. By comparing 4\ndifferent teacher nets, 3 student nets, and 6 distillation losses, we find that\nfine-tuned teacher nets, warm-up-training-based student nets, and\nattention-based distillation loss perform best, respectively.", + "authors": "Tingxu Han, Shenghan Huang, Ziqi Ding, Weisong Sun, Yebo Feng, Chunrong Fang, Jun Li, Hanwei Qian, Cong Wu, Quanjun Zhang, Yang Liu, Zhenyu Chen", + "published": "2024-03-06", + "updated": "2024-03-06", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG" + ], + "category": "Distillation" + }, + { + "url": "http://arxiv.org/abs/quant-ph/0008047v2", + "title": "A semidefinite program for distillable entanglement", + "abstract": "We show that the maximum fidelity obtained by a p.p.t. distillation protocol\nis given by the solution to a certain semidefinite program. This gives a number\nof new lower and upper bounds on p.p.t. distillable entanglement (and thus new\nupper bounds on 2-locally distillable entanglement). In the presence of\nsymmetry, the semidefinite program simplifies considerably, becoming a linear\nprogram in the case of isotropic and Werner states. Using these techniques, we\ndetermine the p.p.t. distillable entanglement of asymmetric Werner states and\n``maximally correlated'' states. We conclude with a discussion of possible\napplications of semidefinite programming to quantum codes and 1-local\ndistillation.", + "authors": "Eric M. Rains", + "published": "2000-08-10", + "updated": "2001-04-12", + "primary_cat": "quant-ph", + "cats": [ + "quant-ph" + ], + "category": "Distillation" + }, + { + "url": "http://arxiv.org/abs/2302.09632v1", + "title": "HomoDistil: Homotopic Task-Agnostic Distillation of Pre-trained Transformers", + "abstract": "Knowledge distillation has been shown to be a powerful model compression\napproach to facilitate the deployment of pre-trained language models in\npractice. This paper focuses on task-agnostic distillation. It produces a\ncompact pre-trained model that can be easily fine-tuned on various tasks with\nsmall computational costs and memory footprints. Despite the practical\nbenefits, task-agnostic distillation is challenging. Since the teacher model\nhas a significantly larger capacity and stronger representation power than the\nstudent model, it is very difficult for the student to produce predictions that\nmatch the teacher's over a massive amount of open-domain training data. Such a\nlarge prediction discrepancy often diminishes the benefits of knowledge\ndistillation. To address this challenge, we propose Homotopic Distillation\n(HomoDistil), a novel task-agnostic distillation approach equipped with\niterative pruning. Specifically, we initialize the student model from the\nteacher model, and iteratively prune the student's neurons until the target\nwidth is reached. Such an approach maintains a small discrepancy between the\nteacher's and student's predictions throughout the distillation process, which\nensures the effectiveness of knowledge transfer. 
Extensive experiments\ndemonstrate that HomoDistil achieves significant improvements on existing\nbaselines.", + "authors": "Chen Liang, Haoming Jiang, Zheng Li, Xianfeng Tang, Bin Yin, Tuo Zhao", + "published": "2023-02-19", + "updated": "2023-02-19", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL", + "cs.LG" + ], + "category": "Distillation" + }, + { + "url": "http://arxiv.org/abs/2206.12370v2", + "title": "Mixed Sample Augmentation for Online Distillation", + "abstract": "Mixed Sample Regularization (MSR), such as MixUp or CutMix, is a powerful\ndata augmentation strategy to generalize convolutional neural networks.\nPrevious empirical analysis has illustrated an orthogonal performance gain\nbetween MSR and conventional offline Knowledge Distillation (KD). To be more\nspecific, student networks can be enhanced with the involvement of MSR in the\ntraining stage of sequential distillation. Yet, the interplay between MSR and\nonline knowledge distillation, where an ensemble of peer students learn\nmutually from each other, remains unexplored. To bridge the gap, we make the\nfirst attempt at incorporating CutMix into online distillation, where we\nempirically observe a significant improvement. Encouraged by this fact, we\npropose an even stronger MSR specifically for online distillation, named as\nCut\\textsuperscript{n}Mix. Furthermore, a novel online distillation framework\nis designed upon Cut\\textsuperscript{n}Mix, to enhance the distillation with\nfeature level mutual learning and a self-ensemble teacher. Comprehensive\nevaluations on CIFAR10 and CIFAR100 with six network architectures show that\nour approach can consistently outperform state-of-the-art distillation methods.", + "authors": "Yiqing Shen, Liwu Xu, Yuzhe Yang, Yaqian Li, Yandong Guo", + "published": "2022-06-24", + "updated": "2023-03-02", + "primary_cat": "cs.CV", + "cats": [ + "cs.CV" + ], + "category": "Distillation" + }, + { + "url": "http://arxiv.org/abs/1905.09747v2", + "title": "Adversarially Robust Distillation", + "abstract": "Knowledge distillation is effective for producing small, high-performance\nneural networks for classification, but these small networks are vulnerable to\nadversarial attacks. This paper studies how adversarial robustness transfers\nfrom teacher to student during knowledge distillation. We find that a large\namount of robustness may be inherited by the student even when distilled on\nonly clean images. Second, we introduce Adversarially Robust Distillation (ARD)\nfor distilling robustness onto student networks. In addition to producing small\nmodels with high test accuracy like conventional distillation, ARD also passes\nthe superior robustness of large networks onto the student. In our experiments,\nwe find that ARD student models decisively outperform adversarially trained\nnetworks of identical architecture in terms of robust accuracy, surpassing\nstate-of-the-art methods on standard robustness benchmarks. 
Finally, we adapt\nrecent fast adversarial training methods to ARD for accelerated robust\ndistillation.", + "authors": "Micah Goldblum, Liam Fowl, Soheil Feizi, Tom Goldstein", + "published": "2019-05-23", + "updated": "2019-12-02", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "cs.CV", + "stat.ML" + ], + "category": "Distillation" + }, + { + "url": "http://arxiv.org/abs/2404.06170v1", + "title": "CLIP-Embed-KD: Computationally Efficient Knowledge Distillation Using Embeddings as Teachers", + "abstract": "Contrastive Language-Image Pre-training (CLIP) has been shown to improve\nzero-shot generalization capabilities of language and vision models. In this\npaper, we extend CLIP for efficient knowledge distillation, by utilizing\nembeddings as teachers. Typical knowledge distillation frameworks require\nrunning forward passes through a teacher model, which is often prohibitive in\nthe case of billion or trillion parameter teachers. In these cases, using only\nthe embeddings of the teacher models to guide the distillation can yield\nsignificant computational savings. Our preliminary findings show that\nCLIP-based knowledge distillation with embeddings can outperform full scale\nknowledge distillation using $9\\times$ less memory and $8\\times$ less training\ntime. Code available at: https://github.com/lnairGT/CLIP-Distillation/", + "authors": "Lakshmi Nair", + "published": "2024-04-09", + "updated": "2024-04-09", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "cs.AI" + ], + "category": "Distillation" + }, + { + "url": "http://arxiv.org/abs/quant-ph/0312123v2", + "title": "Many copies may be required for entanglement distillation", + "abstract": "A mixed quantum state shared between two parties is said to be distillable\nif, by means of a protocol involving only local quantum operations and\nclassical communication, the two parties can transform some number of copies of\nthat state into a single shared pair of qubits having high fidelity with a\nmaximally entangled state. In this paper it is proved that there exist\nstates that are distillable, but for which an arbitrarily large number of\ncopies is required before any distillation procedure can produce a shared pair\nof qubits with even a small amount of entanglement. Specifically, for every\npositive integer n there exists a state that is distillable, but given n or\nfewer copies of that state every distillation procedure outputting a single\nshared pair of qubits will output those qubits in a separable state.\nEssentially all previous examples of states proved to be distillable were such\nthat some distillation procedure could output an entangled pair of qubits given\na single copy of the state in question.", + "authors": "John Watrous", + "published": "2003-12-15", + "updated": "2004-05-31", + "primary_cat": "quant-ph", + "cats": [ + "quant-ph" + ], + "category": "Distillation" + }, + { + "url": "http://arxiv.org/abs/2403.10045v1", + "title": "Towards Adversarially Robust Dataset Distillation by Curvature Regularization", + "abstract": "Dataset distillation (DD) allows datasets to be distilled to fractions of\ntheir original size while preserving the rich distributional information so\nthat models trained on the distilled datasets can achieve a comparable accuracy\nwhile saving significant computational loads. Recent research in this area has\nbeen focusing on improving the accuracy of models trained on distilled\ndatasets. In this paper, we aim to explore a new perspective of DD. 
We study\nhow to embed adversarial robustness in distilled datasets, so that models\ntrained on these datasets maintain the high accuracy and meanwhile acquire\nbetter adversarial robustness. We propose a new method that achieves this goal\nby incorporating curvature regularization into the distillation process with\nmuch less computational overhead than standard adversarial training. Extensive\nempirical experiments suggest that our method not only outperforms standard\nadversarial training on both accuracy and robustness with less computation\noverhead but is also capable of generating robust distilled datasets that can\nwithstand various adversarial attacks.", + "authors": "Eric Xue, Yijiang Li, Haoyang Liu, Yifan Shen, Haohan Wang", + "published": "2024-03-15", + "updated": "2024-03-15", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "cs.CV" + ], + "category": "Distillation" + }, + { + "url": "http://arxiv.org/abs/quant-ph/0607126v3", + "title": "Random bipartite entanglement from W and W-like states", + "abstract": "We describe a protocol for distilling maximally entangled bipartite states\nbetween random pairs of parties from those sharing a tripartite W state, and\nshow that, rather surprisingly, the total distillation rate (the total number\nof EPR pairs distilled per W, irrespective of who shares them) may be done at a\nhigher rate than distillation of bipartite entanglement between specified pairs\nof parties. Specifically, the optimal distillation rate for specified\nentanglement for the W has been previously shown to be the asymptotic\nentanglement of assistance of 0.92 EPR pairs per W, while our protocol can\nasymptotically distill 1 EPR pair per W between random pairs of parties, which\nwe conjecture to be optimal. We thus demonstrate a tradeoff between the overall\nasymptotic rate of EPR distillation and the distribution of final EPR pairs\nbetween parties. We further show that by increasing the number of parties in\nthe protocol that there exist states with fixed lower-bounded distillable\nentanglement for random parties but arbitrarily small distillable entanglement\nfor specified parties.", + "authors": "Ben Fortescue, Hoi-Kwong Lo", + "published": "2006-07-18", + "updated": "2007-02-23", + "primary_cat": "quant-ph", + "cats": [ + "quant-ph" + ], + "category": "Distillation" + }, + { + "url": "http://arxiv.org/abs/2104.02857v2", + "title": "Soft-Label Anonymous Gastric X-ray Image Distillation", + "abstract": "This paper presents a soft-label anonymous gastric X-ray image distillation\nmethod based on a gradient descent approach. The sharing of medical data is\ndemanded to construct high-accuracy computer-aided diagnosis (CAD) systems.\nHowever, the large size of the medical dataset and privacy protection are\nremaining problems in medical data sharing, which hindered the research of CAD\nsystems. The idea of our distillation method is to extract the valid\ninformation of the medical dataset and generate a tiny distilled dataset that\nhas a different data distribution. Different from model distillation, our\nmethod aims to find the optimal distilled images, distilled labels and the\noptimized learning rate. Experimental results show that the proposed method can\nnot only effectively compress the medical dataset but also anonymize medical\nimages to protect the patient's private information. 
The proposed approach can\nimprove the efficiency and security of medical data sharing.", + "authors": "Guang Li, Ren Togo, Takahiro Ogawa, Miki Haseyama", + "published": "2021-04-07", + "updated": "2024-03-21", + "primary_cat": "eess.IV", + "cats": [ + "eess.IV", + "cs.CV" + ], + "category": "Distillation" + }, + { + "url": "http://arxiv.org/abs/2011.06110v1", + "title": "Efficient Knowledge Distillation for RNN-Transducer Models", + "abstract": "Knowledge Distillation is an effective method of transferring knowledge from\na large model to a smaller model. Distillation can be viewed as a type of model\ncompression, and has played an important role for on-device ASR applications.\nIn this paper, we develop a distillation method for RNN-Transducer (RNN-T)\nmodels, a popular end-to-end neural network architecture for streaming speech\nrecognition. Our proposed distillation loss is simple and efficient, and uses\nonly the \"y\" and \"blank\" posterior probabilities from the RNN-T output\nprobability lattice. We study the effectiveness of the proposed approach in\nimproving the accuracy of sparse RNN-T models obtained by gradually pruning a\nlarger uncompressed model, which also serves as the teacher during\ndistillation. With distillation of 60% and 90% sparse multi-domain RNN-T\nmodels, we obtain WER reductions of 4.3% and 12.1% respectively, on a noisy\nFarField eval set. We also present results of experiments on LibriSpeech, where\nthe introduction of the distillation loss yields a 4.8% relative WER reduction\non the test-other dataset for a small Conformer model.", + "authors": "Sankaran Panchapagesan, Daniel S. Park, Chung-Cheng Chiu, Yuan Shangguan, Qiao Liang, Alexander Gruenstein", + "published": "2020-11-11", + "updated": "2020-11-11", + "primary_cat": "eess.AS", + "cats": [ + "eess.AS", + "cs.SD" + ], + "category": "Distillation" + }, + { + "url": "http://arxiv.org/abs/2011.14554v1", + "title": "A Selective Survey on Versatile Knowledge Distillation Paradigm for Neural Network Models", + "abstract": "This paper aims to provide a selective survey about knowledge\ndistillation(KD) framework for researchers and practitioners to take advantage\nof it for developing new optimized models in the deep neural network field. To\nthis end, we give a brief overview of knowledge distillation and some related\nworks including learning using privileged information(LUPI) and generalized\ndistillation(GD). Even though knowledge distillation based on the\nteacher-student architecture was initially devised as a model compression\ntechnique, it has found versatile applications over various frameworks.\n In this paper, we review the characteristics of knowledge distillation from\nthe hypothesis that the three important ingredients of knowledge distillation\nare distilled knowledge and loss,teacher-student paradigm, and the distillation\nprocess. In addition, we survey the versatility of the knowledge distillation\nby studying its direct applications and its usage in combination with other\ndeep learning paradigms. 
Finally we present some future works in knowledge\ndistillation including explainable knowledge distillation where the analytical\nanalysis of the performance gain is studied and the self-supervised learning\nwhich is a hot research topic in deep learning community.", + "authors": "Jeong-Hoe Ku, JiHun Oh, YoungYoon Lee, Gaurav Pooniwala, SangJeong Lee", + "published": "2020-11-30", + "updated": "2020-11-30", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "cs.AI" + ], + "category": "Distillation" + }, + { + "url": "http://arxiv.org/abs/2011.09969v1", + "title": "Neural network algorithm and its application in reactive distillation", + "abstract": "Reactive distillation is a special distillation technology based on the\ncoupling of chemical reaction and distillation. It has the characteristics of\nlow energy consumption and high separation efficiency. However, because the\ncombination of reaction and separation produces highly nonlinear robust\nbehavior, the control and optimization of the reactive distillation process\ncannot use conventional methods, but must rely on neural network algorithms.\nThis paper briefly describes the characteristics and research progress of\nreactive distillation technology and neural network algorithms, and summarizes\nthe application of neural network algorithms in reactive distillation, aiming\nto provide reference for the development and innovation of industry technology.", + "authors": "Huihui Wang, Ruyang Mo", + "published": "2020-11-16", + "updated": "2020-11-16", + "primary_cat": "cs.NE", + "cats": [ + "cs.NE", + "cs.LG", + "I.2.8" + ], + "category": "Distillation" + }, + { + "url": "http://arxiv.org/abs/2303.05015v2", + "title": "Smooth and Stepwise Self-Distillation for Object Detection", + "abstract": "Distilling the structured information captured in feature maps has\ncontributed to improved results for object detection tasks, but requires\ncareful selection of baseline architectures and substantial pre-training.\nSelf-distillation addresses these limitations and has recently achieved\nstate-of-the-art performance for object detection despite making several\nsimplifying architectural assumptions. Building on this work, we propose Smooth\nand Stepwise Self-Distillation (SSSD) for object detection. Our SSSD\narchitecture forms an implicit teacher from object labels and a feature pyramid\nnetwork backbone to distill label-annotated feature maps using Jensen-Shannon\ndistance, which is smoother than distillation losses used in prior work. We\nadditionally add a distillation coefficient that is adaptively configured based\non the learning rate. We extensively benchmark SSSD against a baseline and two\nstate-of-the-art object detector architectures on the COCO dataset by varying\nthe coefficients and backbone and detector networks. 
We demonstrate that SSSD\nachieves higher average precision in most experimental settings, is robust to a\nwide range of coefficients, and benefits from our stepwise distillation\nprocedure.", + "authors": "Jieren Deng, Xin Zhou, Hao Tian, Zhihong Pan, Derek Aguiar", + "published": "2023-03-09", + "updated": "2024-01-19", + "primary_cat": "cs.CV", + "cats": [ + "cs.CV" + ], + "category": "Distillation" + }, + { + "url": "http://arxiv.org/abs/2306.06629v1", + "title": "GKD: A General Knowledge Distillation Framework for Large-scale Pre-trained Language Model", + "abstract": "Currently, the reduction in the parameter scale of large-scale pre-trained\nlanguage models (PLMs) through knowledge distillation has greatly facilitated\ntheir widespread deployment on various devices. However, the deployment of\nknowledge distillation systems faces great challenges in real-world\nindustrial-strength applications, which require the use of complex distillation\nmethods on even larger-scale PLMs (over 10B), limited by memory on GPUs and the\nswitching of methods. To overcome these challenges, we propose GKD, a general\nknowledge distillation framework that supports distillation on larger-scale\nPLMs using various distillation methods. With GKD, developers can build larger\ndistillation models on memory-limited GPUs and easily switch and combine\ndifferent distillation methods within a single framework. Experimental results\nshow that GKD can support the distillation of at least 100B-scale PLMs and 25\nmainstream methods on 8 NVIDIA A100 (40GB) GPUs.", + "authors": "Shicheng Tan, Weng Lam Tam, Yuanchun Wang, Wenwen Gong, Yang Yang, Hongyin Tang, Keqing He, Jiahao Liu, Jingang Wang, Shu Zhao, Peng Zhang, Jie Tang", + "published": "2023-06-11", + "updated": "2023-06-11", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL", + "cs.AI" + ], + "category": "Distillation" + }, + { + "url": "http://arxiv.org/abs/2304.06461v2", + "title": "Multi-Mode Online Knowledge Distillation for Self-Supervised Visual Representation Learning", + "abstract": "Self-supervised learning (SSL) has made remarkable progress in visual\nrepresentation learning. Some studies combine SSL with knowledge distillation\n(SSL-KD) to boost the representation learning performance of small models. In\nthis study, we propose a Multi-mode Online Knowledge Distillation method (MOKD)\nto boost self-supervised visual representation learning. Different from\nexisting SSL-KD methods that transfer knowledge from a static pre-trained\nteacher to a student, in MOKD, two different models learn collaboratively in a\nself-supervised manner. Specifically, MOKD consists of two distillation modes:\nself-distillation and cross-distillation modes. Among them, self-distillation\nperforms self-supervised learning for each model independently, while\ncross-distillation realizes knowledge interaction between different models. In\ncross-distillation, a cross-attention feature search strategy is proposed to\nenhance the semantic feature alignment between different models. As a result,\nthe two models can absorb knowledge from each other to boost their\nrepresentation learning performance. Extensive experimental results on\ndifferent backbones and datasets demonstrate that two heterogeneous models can\nbenefit from MOKD and outperform their independently trained baseline. 
In\naddition, MOKD also outperforms existing SSL-KD methods for both the student\nand teacher models.", + "authors": "Kaiyou Song, Jin Xie, Shan Zhang, Zimeng Luo", + "published": "2023-04-13", + "updated": "2023-06-01", + "primary_cat": "cs.CV", + "cats": [ + "cs.CV" + ], + "category": "Distillation" + }, + { + "url": "http://arxiv.org/abs/2011.02255v2", + "title": "On Self-Distilling Graph Neural Network", + "abstract": "Recently, the teacher-student knowledge distillation framework has\ndemonstrated its potential in training Graph Neural Networks (GNNs). However,\ndue to the difficulty of training over-parameterized GNN models, one may not\neasily obtain a satisfactory teacher model for distillation. Furthermore, the\ninefficient training process of teacher-student knowledge distillation also\nimpedes its applications in GNN models. In this paper, we propose the first\nteacher-free knowledge distillation method for GNNs, termed GNN\nSelf-Distillation (GNN-SD), that serves as a drop-in replacement of the\nstandard training process. The method is built upon the proposed neighborhood\ndiscrepancy rate (NDR), which quantifies the non-smoothness of the embedded\ngraph in an efficient way. Based on this metric, we propose the adaptive\ndiscrepancy retaining (ADR) regularizer to empower the transferability of\nknowledge that maintains high neighborhood discrepancy across GNN layers. We\nalso summarize a generic GNN-SD framework that could be exploited to induce\nother distillation strategies. Experiments further prove the effectiveness and\ngeneralization of our approach, as it brings: 1) state-of-the-art GNN\ndistillation performance with less training cost, 2) consistent and\nconsiderable performance enhancement for various popular backbones.", + "authors": "Yuzhao Chen, Yatao Bian, Xi Xiao, Yu Rong, Tingyang Xu, Junzhou Huang", + "published": "2020-11-04", + "updated": "2021-04-30", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "stat.ML" + ], + "category": "Distillation" + }, + { + "url": "http://arxiv.org/abs/0803.0345v2", + "title": "Secret key distillation from shielded two-qubit states", + "abstract": "The quantum states corresponding to a secret key are characterized using the\nso-called private states, where the key part consisting of a secret key is\nshielded by the additional systems. Based on the construction, it was shown\nthat a secret key can be distilled from bound entangled states. In this work, I\nconsider the shielded two-qubit states in a key-distillation scenario and\nderive the conditions under which a secret key can be distilled using the\nrecurrence protocol or the two-way classical distillation, advantage\ndistillation together with one-way postprocessing. From the security\nconditions, it is shown that a secret key can be distilled from bound entangled\nstates in a much wider range. 
In addition, I consider the case in which\nwhite noise is added to quantum states and show that the classical distillation\nprotocol still works despite a certain amount of noise although the recurrence\nprotocol does not.", + "authors": "Joonwoo Bae", + "published": "2008-03-03", + "updated": "2010-09-22", + "primary_cat": "quant-ph", + "cats": [ + "quant-ph" + ], + "category": "Distillation" + }, + { + "url": "http://arxiv.org/abs/quant-ph/0012022v1", + "title": "Distilling a Greenberger-Horne-Zeilinger State From an Arbitrary Pure State of Three Qubits", + "abstract": "We present a general algorithm to achieve local operators which can produce\nthe GHZ state for an arbitrary given three-qubit state. Thus the distillation\nprocess of the state can be realized optimally. The algorithm is shown to be\nsufficient for the three-qubit state on account of the fact that any state for\nwhich this distillation algorithm is invalid cannot be distilled to the GHZ\nstate by any local actions. Moreover, an analytical result of distillation\noperations is achieved for the general state of three qubits.", + "authors": "Li-Xiang Cen, Shun-Jin Wang", + "published": "2000-12-05", + "updated": "2000-12-05", + "primary_cat": "quant-ph", + "cats": [ + "quant-ph" + ], + "category": "Distillation" + }, + { + "url": "http://arxiv.org/abs/0908.0836v3", + "title": "Bound States for Magic State Distillation in Fault-Tolerant Quantum Computation", + "abstract": "Magic state distillation is an important primitive in fault-tolerant quantum\ncomputation. The magic states are pure non-stabilizer states which can be\ndistilled from certain mixed non-stabilizer states via Clifford group\noperations alone. Because of the Gottesman-Knill theorem, mixtures of Pauli\neigenstates are not expected to be magic state distillable, but it has been an\nopen question whether all mixed states outside this set may be distilled. In\nthis Letter we show that, when resources are finitely limited, non-distillable\nstates exist outside the stabilizer octahedron. In analogy with the bound\nentangled states, which arise in entanglement theory, we call such states bound\nstates for magic state distillation.", + "authors": "Earl T. Campbell, Dan E. Browne", + "published": "2009-08-06", + "updated": "2010-02-01", + "primary_cat": "quant-ph", + "cats": [ + "quant-ph" + ], + "category": "Distillation" + }, + { + "url": "http://arxiv.org/abs/2303.05958v1", + "title": "Robust Knowledge Distillation from RNN-T Models With Noisy Training Labels Using Full-Sum Loss", + "abstract": "This work studies knowledge distillation (KD) and addresses its constraints\nfor recurrent neural network transducer (RNN-T) models. In hard distillation, a\nteacher model transcribes large amounts of unlabelled speech to train a student\nmodel. Soft distillation is another popular KD method that distills the output\nlogits of the teacher model. Due to the nature of RNN-T alignments, applying\nsoft distillation between RNN-T architectures having different posterior\ndistributions is challenging. In addition, bad teachers having high\nword-error-rate (WER) reduce the efficacy of KD. We investigate how to\neffectively distill knowledge from variable quality ASR teachers, which has not\nbeen studied before to the best of our knowledge. We show that a sequence-level\nKD, full-sum distillation, outperforms other distillation methods for RNN-T\nmodels, especially for bad teachers. 
We also propose a variant of full-sum\ndistillation that distills the sequence discriminative knowledge of the teacher\nleading to further improvement in WER. We conduct experiments on public\ndatasets namely SpeechStew and LibriSpeech, and on in-house production data.", + "authors": "Mohammad Zeineldeen, Kartik Audhkhasi, Murali Karthick Baskar, Bhuvana Ramabhadran", + "published": "2023-03-10", + "updated": "2023-03-10", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL", + "cs.SD", + "eess.AS", + "stat.ML" + ], + "category": "Distillation" + }, + { + "url": "http://arxiv.org/abs/quant-ph/0305188v1", + "title": "Dynamics of Distillability", + "abstract": "The time evolution of a maximally entangled bipartite systems is presented in\nthis paper. The distillability criterion is given in terms of Kraus operators.\nUsing the criterion, we discuss the distillability of $2\\times 2$ and $n\\times\nn (n>2)$ systems in their evolution process. There are two distinguished\nprocesses, dissipation and decoherence, which may destroy the distillability.\nWe discuss the effects of those processes on distillability in details.", + "authors": "W. Wu, W. Wang, X. X. Yi", + "published": "2003-05-30", + "updated": "2003-05-30", + "primary_cat": "quant-ph", + "cats": [ + "quant-ph" + ], + "category": "Distillation" + }, + { + "url": "http://arxiv.org/abs/2211.08071v2", + "title": "Knowledge Distillation for Detection Transformer with Consistent Distillation Points Sampling", + "abstract": "DETR is a novel end-to-end transformer architecture object detector, which\nsignificantly outperforms classic detectors when scaling up the model size. In\nthis paper, we focus on the compression of DETR with knowledge distillation.\nWhile knowledge distillation has been well-studied in classic detectors, there\nis a lack of researches on how to make it work effectively on DETR. We first\nprovide experimental and theoretical analysis to point out that the main\nchallenge in DETR distillation is the lack of consistent distillation points.\nDistillation points refer to the corresponding inputs of the predictions for\nstudent to mimic, and reliable distillation requires sufficient distillation\npoints which are consistent between teacher and student. Based on this\nobservation, we propose a general knowledge distillation paradigm for\nDETR(KD-DETR) with consistent distillation points sampling. Specifically, we\ndecouple detection and distillation tasks by introducing a set of specialized\nobject queries to construct distillation points. In this paradigm, we further\npropose a general-to-specific distillation points sampling strategy to explore\nthe extensibility of KD-DETR. Extensive experiments on different DETR\narchitectures with various scales of backbones and transformer layers validate\nthe effectiveness and generalization of KD-DETR. 
KD-DETR boosts the performance\nof DAB-DETR with ResNet-18 and ResNet-50 backbone to 41.4$\%$, 45.7$\%$ mAP,\nrespectively, which are 5.2$\%$, 3.5$\%$ higher than the baseline, and\nResNet-50 even surpasses the teacher model by $2.2\%$.", + "authors": "Yu Wang, Xin Li, Shengzhao Wen, Fukui Yang, Wanping Zhang, Gang Zhang, Haocheng Feng, Junyu Han, Errui Ding", + "published": "2022-11-15", + "updated": "2022-11-16", + "primary_cat": "cs.CV", + "cats": [ + "cs.CV" + ], + "category": "Distillation" + }, + { + "url": "http://arxiv.org/abs/2304.14800v1", + "title": "Multi-to-Single Knowledge Distillation for Point Cloud Semantic Segmentation", + "abstract": "3D point cloud semantic segmentation is one of the fundamental tasks for\nenvironmental understanding. Although significant progress has been made in\nrecent years, the performance of classes with few examples or few points is\nstill far from satisfactory. In this paper, we propose a novel multi-to-single\nknowledge distillation framework for the 3D point cloud semantic segmentation\ntask to boost the performance of those hard classes. Instead of fusing all the\npoints of multi-scans directly, only the instances that belong to the\npreviously defined hard classes are fused. To effectively and sufficiently\ndistill valuable knowledge from multi-scans, we leverage a multilevel\ndistillation framework, i.e., feature representation distillation, logit\ndistillation, and affinity distillation. We further develop a novel\ninstance-aware affinity distillation algorithm for capturing high-level\nstructural knowledge to enhance the distillation efficacy for hard classes.\nFinally, we conduct experiments on the SemanticKITTI dataset, and the results\non both the validation and test sets demonstrate that our method yields\nsubstantial improvements compared with the baseline method. The code is\navailable at \\Url{https://github.com/skyshoumeng/M2SKD}.", + "authors": "Shoumeng Qiu, Feng Jiang, Haiqiang Zhang, Xiangyang Xue, Jian Pu", + "published": "2023-04-28", + "updated": "2023-04-28", + "primary_cat": "cs.CV", + "cats": [ + "cs.CV", + "cs.AI" + ], + "category": "Distillation" + }, + { + "url": "http://arxiv.org/abs/2204.00548v1", + "title": "Unified and Effective Ensemble Knowledge Distillation", + "abstract": "Ensemble knowledge distillation can extract knowledge from multiple teacher\nmodels and encode it into a single student model. Many existing methods learn\nand distill the student model on labeled data only. However, the teacher models\nare usually learned on the same labeled data, and their predictions have high\ncorrelations with groundtruth labels. Thus, they cannot provide sufficient\nknowledge complementary to task labels for student teaching. Distilling on\nunseen unlabeled data has the potential to enhance the knowledge transfer from\nthe teachers to the student. In this paper, we propose a unified and effective\nensemble knowledge distillation method that distills a single student model\nfrom an ensemble of teacher models on both labeled and unlabeled data. Since\ndifferent teachers may have diverse prediction correctness on the same sample,\non labeled data we weight the predictions of different teachers according to\ntheir correctness. In addition, we weight the distillation loss based on the\noverall prediction correctness of the teacher ensemble to distill high-quality\nknowledge. On unlabeled data, there is no groundtruth to evaluate prediction\ncorrectness. 
Fortunately, the disagreement among teachers is an indication of\nsample hardness, and thereby we weight the distillation loss based on teachers'\ndisagreement to emphasize knowledge distillation on important samples.\nExtensive experiments on four datasets show the effectiveness of our proposed\nensemble distillation method.", + "authors": "Chuhan Wu, Fangzhao Wu, Tao Qi, Yongfeng Huang", + "published": "2022-04-01", + "updated": "2022-04-01", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "cs.CL" + ], + "category": "Distillation" + }, + { + "url": "http://arxiv.org/abs/2404.17732v1", + "title": "Generative Dataset Distillation: Balancing Global Structure and Local Details", + "abstract": "In this paper, we propose a new dataset distillation method that considers\nbalancing global structure and local details when distilling the information\nfrom a large dataset into a generative model. Dataset distillation has been\nproposed to reduce the size of the required dataset when training models. The\nconventional dataset distillation methods face the problem of long redeployment\ntime and poor cross-architecture performance. Moreover, previous methods\nfocused too much on the high-level semantic attributes between the synthetic\ndataset and the original dataset while ignoring the local features such as\ntexture and shape. Based on the above understanding, we propose a new method\nfor distilling the original image dataset into a generative model. Our method\ninvolves using a conditional generative adversarial network to generate the\ndistilled dataset. Subsequently, we ensure balancing global structure and local\ndetails in the distillation process, continuously optimizing the generator for\nmore information-dense dataset generation.", + "authors": "Longzhen Li, Guang Li, Ren Togo, Keisuke Maeda, Takahiro Ogawa, Miki Haseyama", + "published": "2024-04-26", + "updated": "2024-04-26", + "primary_cat": "cs.CV", + "cats": [ + "cs.CV", + "cs.AI", + "cs.LG" + ], + "category": "Distillation" + }, + { + "url": "http://arxiv.org/abs/2102.02973v1", + "title": "Show, Attend and Distill:Knowledge Distillation via Attention-based Feature Matching", + "abstract": "Knowledge distillation extracts general knowledge from a pre-trained teacher\nnetwork and provides guidance to a target student network. Most studies\nmanually tie intermediate features of the teacher and student, and transfer\nknowledge through pre-defined links. However, manual selection often constructs\nineffective links that limit the improvement from the distillation. There has\nbeen an attempt to address the problem, but it is still challenging to identify\neffective links under practical scenarios. In this paper, we introduce an\neffective and efficient feature distillation method utilizing all the feature\nlevels of the teacher without manually selecting the links. Specifically, our\nmethod utilizes an attention-based meta-network that learns relative\nsimilarities between features, and applies identified similarities to control\ndistillation intensities of all possible pairs. As a result, our method\ndetermines competent links more efficiently than the previous approach and\nprovides better performance on model compression and transfer learning tasks.\nFurther qualitative analyses and ablative studies describe how our method\ncontributes to better distillation. 
The implementation code is available at\ngithub.com/clovaai/attention-feature-distillation.", + "authors": "Mingi Ji, Byeongho Heo, Sungrae Park", + "published": "2021-02-05", + "updated": "2021-02-05", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG" + ], + "category": "Distillation" + }, + { + "url": "http://arxiv.org/abs/2401.11365v1", + "title": "Confidence Preservation Property in Knowledge Distillation Abstractions", + "abstract": "Social media platforms prevent malicious activities by detecting harmful\ncontent of posts and comments. To that end, they employ large-scale deep neural\nnetwork language models for sentiment analysis and content understanding. Some\nmodels, like BERT, are complex, and have numerous parameters, which makes them\nexpensive to operate and maintain. To overcome these deficiencies, industry\nexperts employ a knowledge distillation compression technique, where a\ndistilled model is trained to reproduce the classification behavior of the\noriginal model. The distillation processes terminates when the distillation\nloss function reaches the stopping criteria. This function is mainly designed\nto ensure that the original and the distilled models exhibit alike\nclassification behaviors. However, besides classification accuracy, there are\nadditional properties of the original model that the distilled model should\npreserve to be considered as an appropriate abstraction. In this work, we\nexplore whether distilled TinyBERT models preserve confidence values of the\noriginal BERT models, and investigate how this confidence preservation property\ncould guide tuning hyperparameters of the distillation process.", + "authors": "Dmitry Vengertsev, Elena Sherman", + "published": "2024-01-21", + "updated": "2024-01-21", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL", + "cs.LG" + ], + "category": "Distillation" + }, + { + "url": "http://arxiv.org/abs/2109.14960v3", + "title": "Prune Your Model Before Distill It", + "abstract": "Knowledge distillation transfers the knowledge from a cumbersome teacher to a\nsmall student. Recent results suggest that the student-friendly teacher is more\nappropriate to distill since it provides more transferable knowledge. In this\nwork, we propose the novel framework, \"prune, then distill,\" that prunes the\nmodel first to make it more transferrable and then distill it to the student.\nWe provide several exploratory examples where the pruned teacher teaches better\nthan the original unpruned networks. We further show theoretically that the\npruned teacher plays the role of regularizer in distillation, which reduces the\ngeneralization error. Based on this result, we propose a novel neural network\ncompression scheme where the student network is formed based on the pruned\nteacher and then apply the \"prune, then distill\" strategy. The code is\navailable at https://github.com/ososos888/prune-then-distill", + "authors": "Jinhyuk Park, Albert No", + "published": "2021-09-30", + "updated": "2022-07-25", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG" + ], + "category": "Distillation" + }, + { + "url": "http://arxiv.org/abs/2305.18381v3", + "title": "Distill Gold from Massive Ores: Efficient Dataset Distillation via Critical Samples Selection", + "abstract": "Data-efficient learning has garnered significant attention, especially given\nthe current trend of large multi-modal models. Recently, dataset distillation\nbecomes an effective approach for data-efficiency; however, the distillation\nprocess itself can still be inefficient. 
In this work, we model the dataset\ndistillation task within the context of information transport. By observing the\nsubstantial data redundancy inherent in the distillation, we argue to put more\nemphasis on the samples' utility for the distillation task. We introduce and\nvalidate a family of data utility estimators and optimal data selection methods\nto exploit the most valuable samples. This strategy significantly reduces the\ntraining costs and extends various existing distillation algorithms to larger\nand more diversified datasets, e.g., in some cases only 0.04% training data is\nsufficient for comparable distillation performance. Our method consistently\nenhances the distillation algorithms, even on much larger-scale and more\nheterogeneous datasets, e.g. ImageNet-1K and Kinetics-400. This paradigm opens\nup new avenues in the dynamics of distillation and paves the way for efficient\ndataset distillation. Our code is available on\nhttps://github.com/silicx/GoldFromOres .", + "authors": "Yue Xu, Yong-Lu Li, Kaitong Cui, Ziyu Wang, Cewu Lu, Yu-Wing Tai, Chi-Keung Tang", + "published": "2023-05-28", + "updated": "2023-11-29", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "cs.AI", + "cs.CV" + ], + "category": "Distillation" + }, + { + "url": "http://arxiv.org/abs/1901.09135v1", + "title": "Progressive Label Distillation: Learning Input-Efficient Deep Neural Networks", + "abstract": "Much of the focus in the area of knowledge distillation has been on\ndistilling knowledge from a larger teacher network to a smaller student\nnetwork. However, there has been little research on how the concept of\ndistillation can be leveraged to distill the knowledge encapsulated in the\ntraining data itself into a reduced form. In this study, we explore the concept\nof progressive label distillation, where we leverage a series of\nteacher-student network pairs to progressively generate distilled training data\nfor learning deep neural networks with greatly reduced input dimensions. To\ninvestigate the efficacy of the proposed progressive label distillation\napproach, we experimented with learning a deep limited vocabulary speech\nrecognition network based on generated 500ms input utterances distilled\nprogressively from 1000ms source training data, and demonstrated a significant\nincrease in test accuracy of almost 78% compared to direct learning.", + "authors": "Zhong Qiu Lin, Alexander Wong", + "published": "2019-01-26", + "updated": "2019-01-26", + "primary_cat": "cs.NE", + "cats": [ + "cs.NE", + "cs.AI" + ], + "category": "Distillation" + }, + { + "url": "http://arxiv.org/abs/2307.08436v1", + "title": "DOT: A Distillation-Oriented Trainer", + "abstract": "Knowledge distillation transfers knowledge from a large model to a small one\nvia task and distillation losses. In this paper, we observe a trade-off between\ntask and distillation losses, i.e., introducing distillation loss limits the\nconvergence of task loss. We believe that the trade-off results from the\ninsufficient optimization of distillation loss. The reason is: The teacher has\na lower task loss than the student, and a lower distillation loss drives the\nstudent more similar to the teacher, then a better-converged task loss could be\nobtained. To break the trade-off, we propose the Distillation-Oriented Trainer\n(DOT). 
DOT separately considers gradients of task and distillation losses, then\napplies a larger momentum to distillation loss to accelerate its optimization.\nWe empirically prove that DOT breaks the trade-off, i.e., both losses are\nsufficiently optimized. Extensive experiments validate the superiority of DOT.\nNotably, DOT achieves a +2.59% accuracy improvement on ImageNet-1k for the\nResNet50-MobileNetV1 pair. Conclusively, DOT greatly benefits the student's\noptimization properties in terms of loss convergence and model generalization.\nCode will be made publicly available.", + "authors": "Borui Zhao, Quan Cui, Renjie Song, Jiajun Liang", + "published": "2023-07-17", + "updated": "2023-07-17", + "primary_cat": "cs.CV", + "cats": [ + "cs.CV" + ], + "category": "Distillation" + }, + { + "url": "http://arxiv.org/abs/1912.12630v1", + "title": "Real-time Policy Distillation in Deep Reinforcement Learning", + "abstract": "Policy distillation in deep reinforcement learning provides an effective way\nto transfer control policies from a larger network to a smaller untrained\nnetwork without a significant degradation in performance. However, policy\ndistillation is underexplored in deep reinforcement learning, and existing\napproaches are computationally inefficient, resulting in a long distillation\ntime. In addition, the effectiveness of the distillation process is still\nlimited to the model capacity. We propose a new distillation mechanism, called\nreal-time policy distillation, in which training the teacher model and\ndistilling the policy to the student model occur simultaneously. Accordingly,\nthe teacher's latest policy is transferred to the student model in real time.\nThis reduces the distillation time to half the original time or even less and\nalso makes it possible for extremely small student models to learn skills at\nthe expert level. We evaluated the proposed algorithm in the Atari 2600 domain.\nThe results show that our approach can achieve full distillation in most games,\neven with compression ratios up to 1.7%.", + "authors": "Yuxiang Sun, Pooyan Fazli", + "published": "2019-12-29", + "updated": "2019-12-29", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "cs.AI", + "stat.ML" + ], + "category": "Distillation" + }, + { + "url": "http://arxiv.org/abs/0908.2142v1", + "title": "Distillation of Bell states in open systems", + "abstract": "In this work we review the entire classification of 2x2 distillable states\nfor protocols with a finite numbers of copies. We show a distillation protocol\nthat allows to distill Bell states with non zero probability at any time for an\ninitial singlet in vacuum. It is shown that the same protocol used in non zero\nthermal baths yields a considerable recovering of entanglement.", + "authors": "E. Isasi, D. Mundarain", + "published": "2009-08-14", + "updated": "2009-08-14", + "primary_cat": "quant-ph", + "cats": [ + "quant-ph" + ], + "category": "Distillation" + }, + { + "url": "http://arxiv.org/abs/2307.12732v1", + "title": "CLIP-KD: An Empirical Study of Distilling CLIP Models", + "abstract": "CLIP has become a promising language-supervised visual pre-training framework\nand achieves excellent performance over a wide range of tasks. This paper aims\nto distill small CLIP models supervised by a large teacher CLIP model. We\npropose several distillation strategies, including relation, feature, gradient\nand contrastive paradigm, to examine the impact on CLIP distillation. We show\nthat the simplest feature mimicry with MSE loss performs best. 
Moreover,\ninteractive contrastive learning and relation-based distillation are also\ncritical in performance improvement. We apply the unified method to distill\nseveral student networks trained on 15 million (image, text) pairs.\nDistillation improves the student CLIP models consistently over zero-shot\nImageNet classification and cross-modal retrieval benchmarks. We hope our\nempirical study will become an important baseline for future CLIP distillation\nresearch. The code is available at \\url{https://github.com/winycg/CLIP-KD}.", + "authors": "Chuanguang Yang, Zhulin An, Libo Huang, Junyu Bi, Xinqiang Yu, Han Yang, Yongjun Xu", + "published": "2023-07-24", + "updated": "2023-07-24", + "primary_cat": "cs.CV", + "cats": [ + "cs.CV" + ], + "category": "Distillation" + }, + { + "url": "http://arxiv.org/abs/2404.04057v1", + "title": "Score identity Distillation: Exponentially Fast Distillation of Pretrained Diffusion Models for One-Step Generation", + "abstract": "We introduce Score identity Distillation (SiD), an innovative data-free\nmethod that distills the generative capabilities of pretrained diffusion models\ninto a single-step generator. SiD not only facilitates an exponentially fast\nreduction in Fr\\'echet inception distance (FID) during distillation but also\napproaches or even exceeds the FID performance of the original teacher\ndiffusion models. By reformulating forward diffusion processes as semi-implicit\ndistributions, we leverage three score-related identities to create an\ninnovative loss mechanism. This mechanism achieves rapid FID reduction by\ntraining the generator using its own synthesized images, eliminating the need\nfor real data or reverse-diffusion-based generation, all accomplished within\nsignificantly shortened generation time. Upon evaluation across four benchmark\ndatasets, the SiD algorithm demonstrates high iteration efficiency during\ndistillation and surpasses competing distillation approaches, whether they are\none-step or few-step, data-free, or dependent on training data, in terms of\ngeneration quality. This achievement not only redefines the benchmarks for\nefficiency and effectiveness in diffusion distillation but also in the broader\nfield of diffusion-based generation. Our PyTorch implementation will be\npublicly accessible on GitHub.", + "authors": "Mingyuan Zhou, Huangjie Zheng, Zhendong Wang, Mingzhang Yin, Hai Huang", + "published": "2024-04-05", + "updated": "2024-04-05", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "cs.AI", + "cs.CV", + "stat.ML" + ], + "category": "Distillation" + }, + { + "url": "http://arxiv.org/abs/quant-ph/9908047v2", + "title": "On bound entanglement assisted distillation", + "abstract": "We investigate asymptotic distillation of entanglement in the presence of an\nunlimited amount of bound entanglement for bi-partite systems. We show that the\ndistillability is still bounded by the relative entropy of entanglement. This\noffers a strong support to the fact that bound entanglement does not improve\ndistillation of entanglement.", + "authors": "V. Vedral", + "published": "1999-08-14", + "updated": "1999-11-17", + "primary_cat": "quant-ph", + "cats": [ + "quant-ph" + ], + "category": "Distillation" + }, + { + "url": "http://arxiv.org/abs/1108.0537v2", + "title": "Isotropic non-locality cannot be distilled", + "abstract": "We investigate non-locality distillation protocols for isotropic\ncorrelations. 
These correlations are the hardest instances with respect to\ndistillability and only partial results are known about their behaviour under\nnon-locality distillation protocols. We completely resolve this issue by\nproving that non-locality distillation is impossible for all non-local\nisotropic correlations.", + "authors": "Dejan D. Dukaric", + "published": "2011-08-02", + "updated": "2011-09-20", + "primary_cat": "quant-ph", + "cats": [ + "quant-ph" + ], + "category": "Distillation" + }, + { + "url": "http://arxiv.org/abs/2404.01392v1", + "title": "No-go theorem for probabilistic one-way secret-key distillation", + "abstract": "The probabilistic one-way distillable secret key is equal to the largest\nexpected rate at which perfect secret key bits can be probabilistically\ndistilled from a bipartite state by means of local operations and one-way\nclassical communication. Here we define the set of super two-extendible states\nand prove that an arbitrary state in this set cannot be used for probabilistic\none-way secret-key distillation. This broad class of states includes both\nerased states and all full-rank states. Comparing the probabilistic one-way\ndistillable secret key with the more commonly studied approximate one-way\ndistillable secret key, our results demonstrate an extreme gap between them for\nmany states of interest, with the approximate one-way distillable secret key\nbeing much larger. Our findings naturally extend to probabilistic one-way\nentanglement distillation, with similar conclusions.", + "authors": "Vishal Singh, Mark M. Wilde", + "published": "2024-04-01", + "updated": "2024-04-01", + "primary_cat": "quant-ph", + "cats": [ + "quant-ph", + "cs.IT", + "math.IT" + ], + "category": "Distillation" + }, + { + "url": "http://arxiv.org/abs/2302.05637v2", + "title": "Dual Relation Knowledge Distillation for Object Detection", + "abstract": "Knowledge distillation is an effective method for model compression. However,\nit is still a challenging topic to apply knowledge distillation to detection\ntasks. There are two key points resulting in poor distillation performance for\ndetection tasks. One is the serious imbalance between foreground and background\nfeatures, another one is that small objects lack enough feature representation.\nTo solve the above issues, we propose a new distillation method named dual\nrelation knowledge distillation (DRKD), including pixel-wise relation\ndistillation and instance-wise relation distillation. The pixel-wise relation\ndistillation embeds pixel-wise features in the graph space and applies graph\nconvolution to capture the global pixel relation. By distilling the global\npixel relation, the student detector can learn the relation between foreground\nand background features, and avoid the difficulty of distilling features\ndirectly for the feature imbalance issue. Besides, we find that instance-wise\nrelation supplements valuable knowledge beyond independent features for small\nobjects. Thus, the instance-wise relation distillation is designed, which\ncalculates the similarity of different instances to obtain a relation matrix.\nMore importantly, a relation filter module is designed to highlight valuable\ninstance relations. 
The proposed dual relation knowledge distillation is\ngeneral and can be easily applied for both one-stage and two-stage detectors.\nOur method achieves state-of-the-art performance, which improves Faster R-CNN\nbased on ResNet50 from 38.4% to 41.6% mAP and improves RetinaNet based on\nResNet50 from 37.4% to 40.3% mAP on COCO 2017.", + "authors": "Zhenliang Ni, Fukui Yang, Shengzhao Wen, Gang Zhang", + "published": "2023-02-11", + "updated": "2023-06-01", + "primary_cat": "cs.CV", + "cats": [ + "cs.CV" + ], + "category": "Distillation" + }, + { + "url": "http://arxiv.org/abs/2108.12905v1", + "title": "Lipschitz Continuity Guided Knowledge Distillation", + "abstract": "Knowledge distillation has become one of the most important model compression\ntechniques by distilling knowledge from larger teacher networks to smaller\nstudent ones. Although great success has been achieved by prior distillation\nmethods via delicately designing various types of knowledge, they overlook the\nfunctional properties of neural networks, which makes the process of applying\nthose techniques to new tasks unreliable and non-trivial. To alleviate such\nproblem, in this paper, we initially leverage Lipschitz continuity to better\nrepresent the functional characteristic of neural networks and guide the\nknowledge distillation process. In particular, we propose a novel Lipschitz\nContinuity Guided Knowledge Distillation framework to faithfully distill\nknowledge by minimizing the distance between two neural networks' Lipschitz\nconstants, which enables teacher networks to better regularize student networks\nand improve the corresponding performance. We derive an explainable\napproximation algorithm with an explicit theoretical derivation to address the\nNP-hard problem of calculating the Lipschitz constant. Experimental results\nhave shown that our method outperforms other benchmarks over several knowledge\ndistillation tasks (e.g., classification, segmentation and object detection) on\nCIFAR-100, ImageNet, and PASCAL VOC datasets.", + "authors": "Yuzhang Shang, Bin Duan, Ziliang Zong, Liqiang Nie, Yan Yan", + "published": "2021-08-29", + "updated": "2021-08-29", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "cs.AI", + "cs.CV" + ], + "category": "Distillation" + }, + { + "url": "http://arxiv.org/abs/2206.08491v1", + "title": "Revisiting Self-Distillation", + "abstract": "Knowledge distillation is the procedure of transferring \"knowledge\" from a\nlarge model (the teacher) to a more compact one (the student), often being used\nin the context of model compression. When both models have the same\narchitecture, this procedure is called self-distillation. Several works have\nanecdotally shown that a self-distilled student can outperform the teacher on\nheld-out data. In this work, we systematically study self-distillation in a\nnumber of settings. We first show that even with a highly accurate teacher,\nself-distillation allows a student to surpass the teacher in all cases.\nSecondly, we revisit existing theoretical explanations of (self) distillation\nand identify contradicting examples, revealing possible drawbacks of these\nexplanations. Finally, we provide an alternative explanation for the dynamics\nof self-distillation through the lens of loss landscape geometry. 
We conduct\nextensive experiments to show that self-distillation leads to flatter minima,\nthereby resulting in better generalization.", + "authors": "Minh Pham, Minsu Cho, Ameya Joshi, Chinmay Hegde", + "published": "2022-06-17", + "updated": "2022-06-17", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG" + ], + "category": "Distillation" + }, + { + "url": "http://arxiv.org/abs/2404.14827v1", + "title": "Sentence-Level or Token-Level? A Comprehensive Study on Knowledge Distillation", + "abstract": "Knowledge distillation, transferring knowledge from a teacher model to a\nstudent model, has emerged as a powerful technique in neural machine\ntranslation for compressing models or simplifying training targets. Knowledge\ndistillation encompasses two primary methods: sentence-level distillation and\ntoken-level distillation. In sentence-level distillation, the student model is\ntrained to align with the output of the teacher model, which can alleviate the\ntraining difficulty and give student model a comprehensive understanding of\nglobal structure. Differently, token-level distillation requires the student\nmodel to learn the output distribution of the teacher model, facilitating a\nmore fine-grained transfer of knowledge. Studies have revealed divergent\nperformances between sentence-level and token-level distillation across\ndifferent scenarios, leading to the confusion on the empirical selection of\nknowledge distillation methods. In this study, we argue that token-level\ndistillation, with its more complex objective (i.e., distribution), is better\nsuited for ``simple'' scenarios, while sentence-level distillation excels in\n``complex'' scenarios. To substantiate our hypothesis, we systematically\nanalyze the performance of distillation methods by varying the model size of\nstudent models, the complexity of text, and the difficulty of decoding\nprocedure. While our experimental results validate our hypothesis, defining the\ncomplexity level of a given scenario remains a challenging task. So we further\nintroduce a novel hybrid method that combines token-level and sentence-level\ndistillation through a gating mechanism, aiming to leverage the advantages of\nboth individual methods. Experiments demonstrate that the hybrid method\nsurpasses the performance of token-level or sentence-level distillation methods\nand the previous works by a margin, demonstrating the effectiveness of the\nproposed hybrid method.", + "authors": "Jingxuan Wei, Linzhuang Sun, Yichong Leng, Xu Tan, Bihui Yu, Ruifeng Guo", + "published": "2024-04-23", + "updated": "2024-04-23", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL" + ], + "category": "Distillation" + }, + { + "url": "http://arxiv.org/abs/2301.01615v2", + "title": "StereoDistill: Pick the Cream from LiDAR for Distilling Stereo-based 3D Object Detection", + "abstract": "In this paper, we propose a cross-modal distillation method named\nStereoDistill to narrow the gap between the stereo and LiDAR-based approaches\nvia distilling the stereo detectors from the superior LiDAR model at the\nresponse level, which is usually overlooked in 3D object detection\ndistillation. The key designs of StereoDistill are: the X-component Guided\nDistillation~(XGD) for regression and the Cross-anchor Logit Distillation~(CLD)\nfor classification. 
In XGD, instead of empirically adopting a threshold to\nselect the high-quality teacher predictions as soft targets, we decompose the\npredicted 3D box into sub-components and retain the corresponding part for\ndistillation if the teacher component pilot is consistent with ground truth to\nlargely boost the number of positive predictions and alleviate the mimicking\ndifficulty of the student model. For CLD, we aggregate the probability\ndistribution of all anchors at the same position to encourage the highest\nprobability anchor rather than individually distill the distribution at the\nanchor level. Finally, our StereoDistill achieves state-of-the-art results for\nstereo-based 3D detection on the KITTI test benchmark and extensive experiments\non KITTI and Argoverse Dataset validate the effectiveness.", + "authors": "Zhe Liu, Xiaoqing Ye, Xiao Tan, Errui Ding, Xiang Bai", + "published": "2023-01-04", + "updated": "2023-01-07", + "primary_cat": "cs.CV", + "cats": [ + "cs.CV" + ], + "category": "Distillation" + }, + { + "url": "http://arxiv.org/abs/1907.09682v2", + "title": "Similarity-Preserving Knowledge Distillation", + "abstract": "Knowledge distillation is a widely applicable technique for training a\nstudent neural network under the guidance of a trained teacher network. For\nexample, in neural network compression, a high-capacity teacher is distilled to\ntrain a compact student; in privileged learning, a teacher trained with\nprivileged data is distilled to train a student without access to that data.\nThe distillation loss determines how a teacher's knowledge is captured and\ntransferred to the student. In this paper, we propose a new form of knowledge\ndistillation loss that is inspired by the observation that semantically similar\ninputs tend to elicit similar activation patterns in a trained network.\nSimilarity-preserving knowledge distillation guides the training of a student\nnetwork such that input pairs that produce similar (dissimilar) activations in\nthe teacher network produce similar (dissimilar) activations in the student\nnetwork. In contrast to previous distillation methods, the student is not\nrequired to mimic the representation space of the teacher, but rather to\npreserve the pairwise similarities in its own representation space. Experiments\non three public datasets demonstrate the potential of our approach.", + "authors": "Frederick Tung, Greg Mori", + "published": "2019-07-23", + "updated": "2019-08-01", + "primary_cat": "cs.CV", + "cats": [ + "cs.CV" + ], + "category": "Distillation" + }, + { + "url": "http://arxiv.org/abs/2004.03097v1", + "title": "Towards Non-task-specific Distillation of BERT via Sentence Representation Approximation", + "abstract": "Recently, BERT has become an essential ingredient of various NLP deep models\ndue to its effectiveness and universal-usability. However, the online\ndeployment of BERT is often blocked by its large-scale parameters and high\ncomputational cost. There are plenty of studies showing that the knowledge\ndistillation is efficient in transferring the knowledge from BERT into the\nmodel with a smaller size of parameters. Nevertheless, current BERT\ndistillation approaches mainly focus on task-specified distillation, such\nmethodologies lead to the loss of the general semantic knowledge of BERT for\nuniversal-usability. In this paper, we propose a sentence representation\napproximating oriented distillation framework that can distill the pre-trained\nBERT into a simple LSTM based model without specifying tasks. 
Consistent with\nBERT, our distilled model is able to perform transfer learning via fine-tuning\nto adapt to any sentence-level downstream task. Besides, our model can further\ncooperate with task-specific distillation procedures. The experimental results\non multiple NLP tasks from the GLUE benchmark show that our approach\noutperforms other task-specific distillation methods or even much larger\nmodels, i.e., ELMO, with efficiency well-improved.", + "authors": "Bowen Wu, Huan Zhang, Mengyuan Li, Zongsheng Wang, Qihang Feng, Junhong Huang, Baoxun Wang", + "published": "2020-04-07", + "updated": "2020-04-07", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL" + ], + "category": "Distillation" + }, + { + "url": "http://arxiv.org/abs/1812.00249v1", + "title": "On Compressing U-net Using Knowledge Distillation", + "abstract": "We study the use of knowledge distillation to compress the U-net\narchitecture. We show that, while standard distillation is not sufficient to\nreliably train a compressed U-net, introducing other regularization methods,\nsuch as batch normalization and class re-weighting, in knowledge distillation\nsignificantly improves the training process. This allows us to compress a U-net\nby over 1000x, i.e., to 0.1% of its original number of parameters, at a\nnegligible decrease in performance.", + "authors": "Karttikeya Mangalam, Mathieu Salzamann", + "published": "2018-12-01", + "updated": "2018-12-01", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "stat.ML" + ], + "category": "Distillation" + }, + { + "url": "http://arxiv.org/abs/1707.02573v1", + "title": "Distilling Entanglement with Noisy Operations", + "abstract": "Entanglement distillation is a fundamental task in quantum information\nprocessing. It not only extracts entanglement out of corrupted systems but also\nleads to protecting systems of interest against intervention with environment.\nIn this work, we consider a realistic scenario of entanglement distillation\nwhere noisy quantum operations are applied. In particular, the two-way\ndistillation protocol that tolerates the highest error rate is considered. We\nshow that among all types of noise there are only four equivalence classes\naccording to the distillability condition. Since the four classes are connected\nby local unitary transformations, our results can be used to improve\nentanglement distillability in practice when entanglement distillation is\nperformed in a realistic setting.", + "authors": "Jinho Chang, Joonwoo Bae, Younghun Kwon", + "published": "2017-07-09", + "updated": "2017-07-09", + "primary_cat": "quant-ph", + "cats": [ + "quant-ph" + ], + "category": "Distillation" + }, + { + "url": "http://arxiv.org/abs/2401.15863v1", + "title": "Importance-Aware Adaptive Dataset Distillation", + "abstract": "Herein, we propose a novel dataset distillation method for constructing small\ninformative datasets that preserve the information of the large original\ndatasets. The development of deep learning models is enabled by the\navailability of large-scale datasets. Despite unprecedented success,\nlarge-scale datasets considerably increase the storage and transmission costs,\nresulting in a cumbersome model training process. Moreover, using raw data for\ntraining raises privacy and copyright concerns. To address these issues, a new\ntask named dataset distillation has been introduced, aiming to synthesize a\ncompact dataset that retains the essential information from the large original\ndataset. 
State-of-the-art (SOTA) dataset distillation methods have been\nproposed by matching gradients or network parameters obtained during training\non real and synthetic datasets. The contribution of different network\nparameters to the distillation process varies, and uniformly treating them\nleads to degraded distillation performance. Based on this observation, we\npropose an importance-aware adaptive dataset distillation (IADD) method that\ncan improve distillation performance by automatically assigning importance\nweights to different network parameters during distillation, thereby\nsynthesizing more robust distilled datasets. IADD demonstrates superior\nperformance over other SOTA dataset distillation methods based on parameter\nmatching on multiple benchmark datasets and outperforms them in terms of\ncross-architecture generalization. In addition, the analysis of self-adaptive\nweights demonstrates the effectiveness of IADD. Furthermore, the effectiveness\nof IADD is validated in a real-world medical application such as COVID-19\ndetection.", + "authors": "Guang Li, Ren Togo, Takahiro Ogawa, Miki Haseyama", + "published": "2024-01-29", + "updated": "2024-01-29", + "primary_cat": "cs.CV", + "cats": [ + "cs.CV", + "cs.AI", + "cs.LG" + ], + "category": "Distillation" + }, + { + "url": "http://arxiv.org/abs/1807.04705v2", + "title": "Non-asymptotic assisted distillation of quantum coherence", + "abstract": "We characterize the operational task of environment-assisted distillation of\nquantum coherence under different sets of free operations when only a finite\nsupply of copies of a given state is available. We first evaluate the one-shot\nassisted distillable coherence exactly, and introduce a semidefinite\nprogramming bound on it in terms of a smooth entropic quantity. We prove the\nbound to be tight for all systems in dimensions 2 and 3, which allows us to\nobtain computable expressions for the one-shot rate of distillation, establish\nan analytical expression for the best achievable fidelity of assisted\ndistillation for any finite number of copies, and fully solve the problem of\nasymptotic zero-error assisted distillation for qubit and qutrit systems. Our\ncharacterization shows that all relevant sets of free operations in the\nresource theory of coherence have exactly the same power in the task of\none-shot assisted coherence distillation, and furthermore resolves a conjecture\nregarding the additivity of coherence of assistance in dimension 3.", + "authors": "Bartosz Regula, Ludovico Lami, Alexander Streltsov", + "published": "2018-07-12", + "updated": "2018-10-16", + "primary_cat": "quant-ph", + "cats": [ + "quant-ph" + ], + "category": "Distillation" + }, + { + "url": "http://arxiv.org/abs/2205.02399v1", + "title": "Spot-adaptive Knowledge Distillation", + "abstract": "Knowledge distillation (KD) has become a well established paradigm for\ncompressing deep neural networks. The typical way of conducting knowledge\ndistillation is to train the student network under the supervision of the\nteacher network to harness the knowledge at one or multiple spots (i.e.,\nlayers) in the teacher network. The distillation spots, once specified, will\nnot change for all the training samples, throughout the whole distillation\nprocess. In this work, we argue that distillation spots should be adaptive to\ntraining samples and distillation epochs. 
We thus propose a new distillation\nstrategy, termed spot-adaptive KD (SAKD), to adaptively determine the\ndistillation spots in the teacher network per sample, at every training\niteration during the whole distillation period. As SAKD actually focuses on\n\"where to distill\" instead of \"what to distill\" that is widely investigated by\nmost existing works, it can be seamlessly integrated into existing distillation\nmethods to further improve their performance. Extensive experiments with 10\nstate-of-the-art distillers are conducted to demonstrate the effectiveness of\nSAKD for improving their distillation performance, under both homogeneous and\nheterogeneous distillation settings. Code is available at\nhttps://github.com/zju-vipa/spot-adaptive-pytorch", + "authors": "Jie Song, Ying Chen, Jingwen Ye, Mingli Song", + "published": "2022-05-05", + "updated": "2022-05-05", + "primary_cat": "cs.CV", + "cats": [ + "cs.CV", + "cs.AI", + "cs.LG" + ], + "category": "Distillation" + }, + { + "url": "http://arxiv.org/abs/2305.05233v1", + "title": "DynamicKD: An Effective Knowledge Distillation via Dynamic Entropy Correction-Based Distillation for Gap Optimizing", + "abstract": "The knowledge distillation uses a high-performance teacher network to guide\nthe student network. However, the performance gap between the teacher and\nstudent networks can affect the student's training. This paper proposes a novel\nknowledge distillation algorithm based on dynamic entropy correction to reduce\nthe gap by adjusting the student instead of the teacher. Firstly, the effect of\nchanging the output entropy (short for output information entropy) in the\nstudent on the distillation loss is analyzed in theory. This paper shows that\ncorrecting the output entropy can reduce the gap. Then, a knowledge\ndistillation algorithm based on dynamic entropy correction is created, which\ncan correct the output entropy in real-time with an entropy controller updated\ndynamically by the distillation loss. The proposed algorithm is validated on\nthe CIFAR100 and ImageNet. The comparison with various state-of-the-art\ndistillation algorithms shows impressive results, especially in the experiment\non the CIFAR100 regarding teacher-student pair resnet32x4-resnet8x4. The\nproposed algorithm raises 2.64 points over the traditional distillation\nalgorithm and 0.87 points over the state-of-the-art algorithm CRD in\nclassification accuracy, demonstrating its effectiveness and efficiency.", + "authors": "Songling Zhu, Ronghua Shang, Bo Yuan, Weitong Zhang, Yangyang Li, Licheng Jiao", + "published": "2023-05-09", + "updated": "2023-05-09", + "primary_cat": "cs.CV", + "cats": [ + "cs.CV" + ], + "category": "Distillation" + }, + { + "url": "http://arxiv.org/abs/2304.05563v2", + "title": "Entanglement distillation in terms of Schmidt rank and matrix rank", + "abstract": "Entanglement distillation is a key task in quantum-information processing. In\nthis paper, we distill non-positive-partial-transpose (NPT) bipartite states of\nsome given Schmidt rank and matrix rank. We show that all bipartite states of\nSchmidt rank two are locally equivalent to classical-classical states, and all\nbipartite states of Schmidt rank three are 1-undistillable. Subsequently, we\nshow that low-rank B-irreducible NPT states are distillable for large-rank\nreduced density operators by proving low-rank B-irreducible NPT state whose\nrange contains a product vector is distillable. 
Eventually, we present an\nequivalent condition to distill $M\\times N$ bipartite states of rank\n$\\max\\{M,N\\}+1$.", + "authors": "Tianyi Ding, Lin Chen", + "published": "2023-04-12", + "updated": "2023-07-06", + "primary_cat": "quant-ph", + "cats": [ + "quant-ph" + ], + "category": "Distillation" + }, + { + "url": "http://arxiv.org/abs/2312.06899v1", + "title": "LoRA-Enhanced Distillation on Guided Diffusion Models", + "abstract": "Diffusion models, such as Stable Diffusion (SD), offer the ability to\ngenerate high-resolution images with diverse features, but they come at a\nsignificant computational and memory cost. In classifier-free guided diffusion\nmodels, prolonged inference times are attributed to the necessity of computing\ntwo separate diffusion models at each denoising step. Recent work has shown\npromise in improving inference time through distillation techniques, teaching\nthe model to perform similar denoising steps with reduced computations.\nHowever, the application of distillation introduces additional memory overhead\nto these already resource-intensive diffusion models, making it less practical.\n To address these challenges, our research explores a novel approach that\ncombines Low-Rank Adaptation (LoRA) with model distillation to efficiently\ncompress diffusion models. This approach not only reduces inference time but\nalso mitigates memory overhead, and notably decreases memory consumption even\nbefore applying distillation. The results are remarkable, featuring a\nsignificant reduction in inference time due to the distillation process and a\nsubstantial 50% reduction in memory consumption. Our examination of the\ngenerated images underscores that the incorporation of LoRA-enhanced\ndistillation maintains image quality and alignment with the provided prompts.\nIn summary, while conventional distillation tends to increase memory\nconsumption, LoRA-enhanced distillation offers optimization without any\ntrade-offs or compromises in quality.", + "authors": "Pareesa Ameneh Golnari", + "published": "2023-12-12", + "updated": "2023-12-12", + "primary_cat": "cs.CV", + "cats": [ + "cs.CV", + "cs.LG" + ], + "category": "Distillation" + }, + { + "url": "http://arxiv.org/abs/2403.09053v1", + "title": "Towards a theory of model distillation", + "abstract": "Distillation is the task of replacing a complicated machine learning model\nwith a simpler model that approximates the original [BCNM06,HVD15]. Despite\nmany practical applications, basic questions about the extent to which models\ncan be distilled, and the runtime and amount of data needed to distill, remain\nlargely open.\n To study these questions, we initiate a general theory of distillation,\ndefining PAC-distillation in an analogous way to PAC-learning [Val84]. 
As\napplications of this theory: (1) we propose new algorithms to extract the\nknowledge stored in the trained weights of neural networks -- we show how to\nefficiently distill neural networks into succinct, explicit decision tree\nrepresentations when possible by using the ``linear representation\nhypothesis''; and (2) we prove that distillation can be much cheaper than\nlearning from scratch, and make progress on characterizing its complexity.", + "authors": "Enric Boix-Adsera", + "published": "2024-03-14", + "updated": "2024-03-14", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "cs.AI", + "cs.NE" + ], + "category": "Distillation" + }, + { + "url": "http://arxiv.org/abs/2302.14643v1", + "title": "Graph-based Knowledge Distillation: A survey and experimental evaluation", + "abstract": "Graphs, such as citation networks, social networks, and transportation\nnetworks, are prevalent in the real world. Graph Neural Networks (GNNs) have\ngained widespread attention for their robust expressiveness and exceptional\nperformance in various graph applications. However, the efficacy of GNNs is\nheavily reliant on sufficient data labels and complex network models, with the\nformer hard to obtain and the latter costly to compute. To address the labeled\ndata scarcity and high complexity of GNNs, Knowledge Distillation (KD) has been\nintroduced to enhance existing GNNs. This technique involves transferring the\nsoft-label supervision of the large teacher model to the small student model\nwhile maintaining prediction performance. This survey offers a comprehensive\noverview of Graph-based Knowledge Distillation methods, systematically\ncategorizing and summarizing them while discussing their limitations and future\ndirections. This paper first introduces the background of graphs and KD. It then\nprovides a comprehensive summary of three types of Graph-based Knowledge\nDistillation methods, namely Graph-based Knowledge Distillation for deep neural\nnetworks (DKD), Graph-based Knowledge Distillation for GNNs (GKD), and\nSelf-Knowledge Distillation based Graph-based Knowledge Distillation (SKD).\nEach type is further divided into knowledge distillation methods based on the\noutput layer, middle layer, and constructed graph. Subsequently, various\nalgorithms' ideas are analyzed and compared, concluding with the advantages and\ndisadvantages of each algorithm supported by experimental results. In addition,\nthe applications of graph-based knowledge distillation in CV, NLP, RS, and\nother fields are listed. Finally, the graph-based knowledge distillation is\nsummarized and prospectively discussed. We have also released related resources\nat https://github.com/liujing1023/Graph-based-Knowledge-Distillation.", + "authors": "Jing Liu, Tongya Zheng, Guanzheng Zhang, Qinfen Hao", + "published": "2023-02-27", + "updated": "2023-02-27", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG" + ], + "category": "Distillation" + }, + { + "url": "http://arxiv.org/abs/quant-ph/0202165v1", + "title": "Distinguishing locally of quantum states and the distillation of entanglement", + "abstract": "This paper tries to probe the relation between distinguishing locally and\nthe distillation of entanglement. The distinguishing information (DI) and the\nmaximal distinguishing information (MDI) of a set of pure states are defined.\nThe interpretation of distillation of entanglement in terms of information is\ngiven. The relation between the maximal distinguishing information and\ndistillable entanglement is obtained.
As an application of this relation, the\ndistillable entanglement of Bell-diagonal states is presented.", + "authors": "ping-xing. chen, Cheng-zu Li", + "published": "2002-02-27", + "updated": "2002-02-27", + "primary_cat": "quant-ph", + "cats": [ + "quant-ph" + ], + "category": "Distillation" + } +] \ No newline at end of file
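A minimal usage sketch (separate from the dataset itself): assuming the JSON array above is saved to a standalone file, the Python snippet below loads it and tallies the records by category. The file name is a placeholder, and the field names (url, title, abstract, authors, published, updated, primary_cat, cats, category) follow the entries listed above; the aggregation shown is illustrative only.

import json
from collections import Counter

# Placeholder path; point this at wherever the JSON array above is stored.
PATH = "related_papers.json"

def load_related_papers(path):
    """Load the list of related-paper records from the JSON file."""
    with open(path, "r", encoding="utf-8") as f:
        return json.load(f)

if __name__ == "__main__":
    papers = load_related_papers(PATH)

    # Each record carries at least: url, title, abstract, authors,
    # published, updated, primary_cat, cats, and category.
    primary_counts = Counter(p.get("primary_cat", "unknown") for p in papers)
    print(f"{len(papers)} records; primary categories: {dict(primary_counts)}")

    # List the titles filed under the "Distillation" category.
    for p in papers:
        if p.get("category") == "Distillation":
            print(f'- {p["title"]} ({p.get("published", "n.d.")})')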