diff --git "a/related_34K/test_related_short_2404.19639v1.json" "b/related_34K/test_related_short_2404.19639v1.json" new file mode 100644--- /dev/null +++ "b/related_34K/test_related_short_2404.19639v1.json" @@ -0,0 +1,1387 @@ +[ + { + "url": "http://arxiv.org/abs/2404.19639v1", + "title": "ESP-Zero: Unsupervised enhancement of zero-shot classification for Extremely Sparse Point cloud", + "abstract": "In recent years, zero-shot learning has attracted the focus of many\nresearchers, due to its flexibility and generality. Many approaches have been\nproposed to achieve the zero-shot classification of the point clouds for 3D\nobject understanding, following the schema of CLIP. However, in the real world,\nthe point clouds could be extremely sparse, dramatically limiting the\neffectiveness of the 3D point cloud encoders, and resulting in the misalignment\nof point cloud features and text embeddings. To the point cloud encoders to fit\nthe extremely sparse point clouds without re-running the pre-training procedure\nwhich could be time-consuming and expensive, in this work, we propose an\nunsupervised model adaptation approach to enhance the point cloud encoder for\nthe extremely sparse point clouds. We propose a novel fused-cross attention\nlayer that expands the pre-trained self-attention layer with additional\nlearnable tokens and attention blocks, which effectively modifies the point\ncloud features while maintaining the alignment between point cloud features and\ntext embeddings. We also propose a complementary learning-based\nself-distillation schema that encourages the modified features to be pulled\napart from the irrelevant text embeddings without overfitting the feature space\nto the observed text embeddings. Extensive experiments demonstrate that the\nproposed approach effectively increases the zero-shot capability on extremely\nsparse point clouds, and overwhelms other state-of-the-art model adaptation\napproaches.", + "authors": "Jiayi Han, Zidi Cao, Weibo Zheng, Xiangguo Zhou, Xiangjian He, Yuanfang Zhang, Daisen Wei", + "published": "2024-04-30", + "updated": "2024-04-30", + "primary_cat": "cs.CV", + "cats": [ + "cs.CV" + ], + "label": "Original Paper", + "paper_cat": "Distillation", + "gt": "2.1 Point cloud processing Since the success of PointNet [9], processing 3D objects in the form of point clouds has become a natural solution. PointNet++ [10] introduces the grouping layers in point cloud processing, which allows the deep models to leverage the neighboring information like convolution networks. Afterward, many researchers propose to utilize well-designed kernels to improve their performances. For example, KP-conv [11] utilizes a spherical neighborhood and a kernel function to determine the weight of each neighboring point during convolution. Edge-conv [12] utilizes the relative position to fetch the integrating weight of neighboring points. [13, 14] leverage local geometry for point cloud classification. [15] proposes multi-scale FPS to fuse point cloud features. [16] transforms points into a Hough space, and utilizes CNN-based networks to encode the points. Augmentations are also beneficial for point cloud processing [17]. In recent years, many approaches have introduced self-attention blocks to point cloud processing. Point Transformer [18] and Point Cloud Transformer [19] introduce the self-attention mechanism to point cloud processing for the first time. They utilize similar structures like PointNet++ and modify the feature aggregation with self-attention layers. 
Meanwhile, YOGO [20] proposes to group and embed the points only once: it first groups and encodes multiple sub-structures of the point cloud into embeddings, and then calculates the cross-attention between the grouped embeddings and all the points. This strategy effectively reduces the cost of SA-based point cloud processors but sacrifices some precision. To decrease the cost of self-attention in point cloud processing, PointFormer [21] proposes to utilize Linformer to replace the standard self-attention mechanism. SD-SA [22] evaluates efficient self-attention mechanisms for the point cloud transformer and proposes to modify self-attention with skeleton decomposition to reduce its computational cost. Inspired by NLP tasks, PointBERT [1] and I2P-MAE [2] introduce BERT-styled pre-training in 3D processing and achieve great success. They mask some tokens of the point cloud and train the model to recover those masked tokens. A fine-tuning procedure then follows to obtain the downstream capabilities. [23] utilizes a novel pipeline that leverages neural rendering and 2D images to align features of point clouds and images for effective model pre-training. 2.2 Zero-shot classification of point clouds Inspired by the success of CLIP [24], many approaches are proposed to classify point clouds in a zero-shot manner. PointCLIP [25] directly renders the point clouds to depth images and adopts a pre-trained CLIP model to classify the rendered images with prompts like “this is the depth map of {category}”. CLIP2Point [26] proposes to render the initial 3D mesh and point cloud into images and depth maps, encode them by pre-trained CLIP models, and fine-tune the model to align their features. [27] aligns the seen semantics with point cloud features, and leverages unlabeled objects to address downstream issues such as domain adaptation. ULIP and ULIP-2 [28, 29] further introduce text embeddings in the training phase. The point cloud features are aligned to both CLIP features and the corresponding text embeddings of the ground-truth labels. Figure 3: The overall framework of the proposed approach. The dense point cloud is down-sampled to a sparse point cloud, grouped by KNN, and encoded to point cloud tokens. The initial model is modified with a trainable FCA block when processing the tokens of the sparse point cloud. We then distill the model with infoNCE loss and CL loss with the assistance of text embeddings. 2.3 Point cloud completion Point completion network (PCN) [30] is one of the most classical point cloud completion approaches, which introduces both coarse and fine-grained supervision during training. Many approaches pay attention to refining the details of the completed point clouds. PMP-Net [4] proposes to iteratively refine the sparse point cloud. After each refinement step, the modified point cloud is further refined, until the maximum number of refinement steps is reached. PMP-Net++ [5] further enhances PMP-Net with the self-attention mechanism. SeedFormer [6] introduces patch seeds as a shape representation that leverages both global information (of the partial point cloud) and local geometry. Similarly, [31] also leverages geometry transformation to recover the missing points. USSPA [32] introduces a shape-preserving autoencoder for point cloud completion without supervision. 
Adapted loss functions are also introduced to enhance point cloud completion [3].", + "pre_questions": [], + "main_content": "Introduction 3D point cloud processing with deep-learning approaches has been deeply explored in recent years, due to its growing applications in VR, AR, autonomous driving, embodied intelligence, and so on. With the improvements in self-attention techniques, the performance of 3D point cloud processing has significantly increased. Despite the great success of 3D point cloud processing, researchers mainly focus on dense point clouds, while the processing of extremely sparse point clouds has not been fully explored, especially in zero-shot settings. As the density of points significantly influences the shape representation of the input object, models trained with dense point clouds may lack generalization ability on extremely sparse point clouds. Figure 1: Zero-shot classification accuracy on the ModelNet-40 dataset. Our approach dramatically increases the zero-shot capability on the extremely sparse point clouds. Inspired by the pre-training schema in NLP tasks, some 3D object processing approaches with a pre-training step implicitly involve training on sparse point clouds. For example, PointBERT [1] groups the input point cloud into clusters via furthest point sampling (FPS), encodes each cluster into an embedding, masks out 25%∼45% of the cluster embeddings, and then recovers those masked embeddings. This training approach effectively results in training on sparse point clouds. Similarly, I2P-MAE [2] also proposes to mask out 80% of points during the pre-training procedure. However, with an initial number of 8096 points, the down-sampled point cloud still has more than 1k points, which is still very dense and may not be applicable in real usage. A potential solution for enhancing sparse point cloud classification is to complete the sparse point clouds, known as point cloud completion. However, researchers of point cloud completion mainly focus on completing sparse point clouds with approximately 2k points to 16k points [3, 4, 5, 6]. Some of the point-completion approaches validated their performance on datasets with sparser samples, for example, the KITTI dataset [7]. However, these methods are only validated on a limited number of categories (usually only “vehicle” is validated). Figure 2: The proposed approach enhances the zero-shot ability on the extremely sparse point clouds. To enhance the zero-shot classification capability on the extremely sparse point clouds, in this work, we propose a novel self-distillation approach that leverages the information of the dense point cloud and the text embeddings of the observed categories. The proposed approach is based on a well-trained point cloud transformer, whose latent space is aligned with the latent space of text embeddings. So that the alignment of the point cloud feature space and the text embedding space can be preserved, we introduce an additive tuning strategy. We freeze the weights of the pre-trained point cloud transformer and attach fused cross-attention (FCA) layers to it for model optimization. Each FCA layer consists of a batch of learnable tokens, a learnable self-attention (SA) block, and a frozen self-attention (SA) block that belongs to the pre-trained network. Following the cross-attention mechanism, the learnable tokens are first fed into the learnable SA block. 
Inspired by VPT [8], the modified tokens are concatenated to the encoded point tokens. The frozen SA block further encodes the merged tokens, and the learnable tokens are discarded in the output. Despite the small number of learnable tokens, FCA could effectively enhance the pre-trained model. We also propose a novel complementary learning-based self-distillation approach to optimize the modified model for both the seen and unseen categories of the training set. Different from normal distillation approaches that utilize the pseudo labels or modified output distributions of the input, we adopt complementary labels to specify the categories to which the input does not belong, and suppress the similarity between the 3D object representations and the text embeddings of the complementary labels. This approach allows the encoded sparse representation to be pulled apart from the unmatched text embeddings rather than fitting the most similar text embedding, which reduces the risk of overfitting the observed text embeddings during training. Our main contributions are listed as follows. 1. To the best of our knowledge, this work proposes to enhance the zero-shot classification of extremely sparse point clouds for the first time. 2. We propose a fused cross-attention layer that introduces the refinement of frozen self-attention blocks, which effectively modifies the encoded representation space of the pre-trained model while maintaining its zero-shot capability. 3. We propose a complementary learning-based self-distillation approach that pulls the sparse point cloud representation away from the unmatched label text embeddings, which decreases the potential overfitting. 3.1 Overall Architecture As depicted in Fig. 3, the overall framework consists of a pre-trained point cloud transformer with grouping and encoding layers, encoding layers of the point embeddings, and a final projecting layer that projects the point cloud representation to the latent space shared with text embeddings. The pre-trained network could be seen as a teacher model. To modify the pre-trained model, we expand the pre-trained encoding layers into an FCA block. A trainable FCA block consists of additional learnable tokens, an SA block, and the corresponding frozen encoding layer. By modifying the trainable parts in the FCA block, the model could be enhanced for extremely sparse point clouds. During training, the dense point cloud is down-sampled to a sparse point cloud. The dense point cloud is directly encoded with the pre-trained model to obtain a standard representation. The model modified by the FCA block encodes the sparse point cloud. The inner product of the text embeddings and the dense representations is utilized to obtain the pseudo supervision for the sparse representation to learn from. During testing, the text embedding with the largest similarity to the sparse representation indicates the final prediction. Note that since the pre-trained weights are fixed and the FCA block could be directly rolled back to the initial encoding layer, catastrophic forgetting is avoided. 3.2 Fused Cross-attention (FCA) The pre-trained point-cloud transformer is well-aligned with the text embeddings, which is crucial to zero-shot classification. To fine-tune the pre-trained model without eliminating its zero-shot capability, we introduce fused cross-attention to each transformer block of the point-cloud transformer. Figure 4: The structure of FCA. 
The learnable tokens are first processed by an SA block, then combined with the encoded point cloud tokens and passed through an encoder block, and only the point cloud tokens are output. For each transformer block, we add $m$ learnable tokens which are randomly initialized. We first forward the learnable tokens through a self-attention block, then fuse them with the encoded point cloud tokens. Specifically, denote the encoded point cloud tokens as $T = \{t_i\}_{i=1}^{n}$ and the learnable tokens as $P = \{p_j\}_{j=1}^{m}$. The learnable tokens are first processed as in Eq. 1:

$$\begin{cases} \hat{P} = \dfrac{P W_q (P W_k)^{T}}{\lambda} P W_v \\ P' = \hat{P} W_P \end{cases} \quad (1)$$

in which SA and FFN represent the self-attention and the feed-forward network, respectively. $P'$ is then fused with $T$ via self-attention as follows:

$$\begin{cases} \hat{T} = \dfrac{[T; P'] W_q ([T; P'] W_k)^{T}}{\lambda} [T; P'] W_v \\ T' = \hat{T} W_T \end{cases} \quad (2)$$

After the fusion procedure, the modified point tokens are fed forward, while the learnable tokens are discarded so that the total number of tokens is consistent. The final output of each FCA layer is $T'_{0:|T|}$. 3.3 Self-distillation with complementary learning To enhance the classification ability of the pre-trained point cloud transformer, we propose a self-distillation schema to optimize the learnable parameters (the learnable tokens, the q, k, v projections, and the FFN) in FCA. The pre-trained point cloud transformer without FCA is utilized to encode the dense point cloud (with 2048 points) to form its standard representation, which is well aligned with the text embedding of the prompt template of the corresponding category. The text embedding could be obtained by the corresponding text encoder, in our case, a text transformer aligned with the pre-trained point cloud transformer. Pseudo label for Self-distillation Denote the standard representation as $R \in \mathbb{R}^{1 \times C}$, and the text embeddings of the prompt templates (as shown in Fig. 3) of the involved categories as $E \in \mathbb{R}^{N \times C}$, in which $N$ and $C$ represent the number of involved categories and the dimensionality of the latent space, respectively. The complementary labels of the input objects could be obtained from the similarity of the standard representation and the text embeddings. We first compute the similarity of the standard representation and the text embeddings:

$$q = \frac{E R^{T}}{\|E\|_2 \|R\|_2}. \quad (3)$$

Figure 5: The difference between pseudo label and complementary label. Via pseudo labeling (a), the sparse representation is encouraged to be aligned with the text embedding of the pseudo label. On the contrary, complementary learning (b) encourages the sparse representation to be pulled apart from the unmatched text embeddings. A direct optimization approach could be utilizing the pseudo labels. For a given input $x$, its pseudo label could be formulated as follows:

$$(\hat{y}|x)_i = \begin{cases} 1, & i = \arg\max_i q_i \\ 0, & \text{otherwise} \end{cases} \quad (4)$$

Then we obtain the sparse representations. In this work, we down-sample the dense point cloud according to the uniform distribution. We then encode the down-sampled point cloud with the modified model (with FCA) to obtain the sparse representations. The cross-entropy loss could be formulated as in Eq. 5:

$$loss_{pl} = -\sum_i (\hat{y}|x)_i \log\big(\mathrm{Softmax}(\hat{q}/\tau)_i\big) \quad (5)$$

By optimizing Eq. 5, the object representation $R|x$ could be modified to match the most probable text embedding. 
Although utilizing pseudo labels could be a direct self-distillation approach, since the matched embedding could be seen as an approximation of the standard representation, this process might result in the model overfitting on the training text embeddings and decrease the zero-shot ability on the unseen categories. Complementary learning for dense-to-sparse self-distillation Different from learning with pseudo labels, which directly minimizes the distance between the sparse representation and the matched text embeddings, complementary learning aims to learn from the labels that an input does not belong to, namely “complementary labels”. Complementary learning allows the model to pull the sparse representation apart from the unmatched text embeddings. ECL [33] shows that, compared with pseudo labels, complementary labels are more accurate and can provide effective information in unsupervised domain adaptation. To enhance the zero-shot ability of the pre-trained model, instead of utilizing the pseudo labels, we adopt a complementary learning-based approach to leverage the information of the labels without overfitting the observed ones. Complementary learning fetches the most “improbable” categories and fine-tunes the model to push the object representations apart from those unmatched text embeddings. In this setting, the representations are not matched to a certain embedding. We select the improbable (or negative) categories based on $q$. The $k$ categories with the smallest similarities to the standard representation are set as negative categories. Denote the inner product of the sparse representations and the text embeddings as $\hat{q}$. The loss function could be formulated as follows:

$$loss_{CL} = -\sum_{i \in C^{-}} \log\big(1 - \mathrm{Softmax}(\hat{q}/\tau)_i\big) \quad (6)$$

in which $C^{-}$ represents the set of negative categories and $\tau$ represents the temperature. Total loss Apart from the complementary loss, we also introduce the infoNCE loss [34] to align the sparse representation to the standard representation, which could be formulated as in Eq. 7:

$$loss_{sd} = \frac{\exp(R_i^{T} R_i^{s}/\tau)}{\sum_{j \neq i} \exp(R_j^{T} R_i^{s}/\tau)}, \quad (7)$$

in which $R_i^{s}$ represents the sparse representation of the $i$-th object. The total loss could be formulated as follows:

$$loss_{total} = \lambda\, loss_{sd} + loss_{CL}, \quad (8)$$

where $\lambda$ is the balance coefficient. 4 Experiments 4.1 Implementation details In this work, we validate the proposed approach on two benchmark datasets: ModelNet40 [35] and PartNet [36]. The ModelNet40 dataset contains 12,311 objects (9,843 for training) from 40 categories. The PartNet dataset contains 26,671 objects from 24 categories. They share nine categories, and the others are unique to each dataset. All models are trained on an Nvidia 4090 GPU within 16 epochs. We adopt a cross-validation schema that trains the model on one dataset and validates it on the other. We show the results for the “Unseen” categories (the unique categories of each dataset). We set the number of learnable tokens to 12, and the coefficient $\lambda$ to 0.2. 4.2 Comparison with SOTA approaches We compare the proposed approach with other zero-shot learners and unsupervised model adaptation approaches. Note that for PL, Tent [37] and USKD-PL [38], we utilize the prediction of 2048 points to obtain the pseudo label or label distribution, for a fair comparison with the proposed approach. 
However, due to the nature of RPL [39], we could not involve the standard representation during adaptation, so only the sparse representation is utilized. In Tab. 1, we demonstrate the model performance on the unseen classes that are unique to each dataset. The results show that our approach dramatically increases the classification accuracy of the unseen categories, by an average of 8.1% on the PartNet dataset and 8.7% on the ModelNet40 dataset, which demonstrates the effectiveness of our method.

Table 1: Zero-shot classification accuracy of Unseen categories on the ModelNet40 and PartNet datasets.

| Dataset | M40→PartNet | | | | PartNet→M40 | | | |
| Num Points | 128 | 64 | 16 | Mean | 128 | 64 | 16 | Mean |
| PointBERT [1] | 45.2 | 17.0 | 0.2 | 20.8 | 23.1 | 12.1 | 5.2 | 13.5 |
| PointMLP [40] | 40.3 | 10.8 | 0.0 | 17.0 | 20.1 | 6.5 | 3.0 | 9.9 |
| ULIP-2 [29] | 33.3 | 17.6 | 7.8 | 19.6 | 31.7 | 15.4 | 4.4 | 17.2 |
| Pseudo Label | 38.9 | 29.5 | 13.8 | 27.4 | 25.8 | 15.7 | 3.6 | 15.0 |
| TENT [37] | 40.1 | 22.4 | 9.8 | 24.1 | 30.6 | 16.9 | 3.4 | 17.0 |
| RPL [39] | 30.6 | 16.8 | 11.1 | 19.5 | 31.0 | 15.6 | 3.1 | 16.6 |
| USKD [38] | 27.2 | 26.1 | 19.7 | 24.3 | 25.5 | 17.5 | 6.4 | 16.5 |
| USKD-PL [38] | 27.2 | 16.4 | 3.3 | 15.6 | 31.8 | 19.2 | 5.7 | 18.9 |
| Ours | 45.8 | 39.9 | 20.8 | 35.5 | 40.4 | 29.9 | 12.4 | 27.6 |

4.3 Point cloud completion may not be sufficient for extremely sparse point cloud zero-shot classification A direct approach to improving the classification performance is point cloud completion. In this work, we validate two popular point cloud completion methods: PCN [30] and SeedFormer [6]. For PCN, due to the flexibility of the PCN architecture, we modify the PCN network by changing the coarse output to 256 points and the fine-grained output to 2048 points, and train the modified PCN in both end-to-end and independent manners. The end-to-end manner means that the modified PCN is trained along with the pre-trained ULIP model. For the independent manner, we first train point completion independently, then merge the modified PCN and the ULIP model. SeedFormer leverages the grouping layers introduced in PointNet++. As there are many fixed parameters that could influence the model's performance, we maintain its initial architecture and utilize its pre-trained model. Since SeedFormer requires 256 input points, we simply repeat the point cloud to meet this requirement. The results are shown in Tab. 2. SeedFormer pre-trained on the PCN [30] dataset lacks generality on the extremely sparse point clouds of the ModelNet40 dataset, thus resulting in a performance decrease. Even if the model is fine-tuned for ModelNet-40, a performance decrease is still observed. Only when the PCN is trained in an end-to-end manner can the performance be increased, which demonstrates that simply applying point cloud completion is not sufficient for extremely sparse point cloud zero-shot classification.

Table 2: Validation of point cloud completion for extremely sparse point clouds (128 points) on the FULL ModelNet40 test set.

| Model | Training strategy | Acc (%) |
| SeedFormer [6] | pre-trained on PCN | 5.0 |
| PCN-modified [30] | two-stage | 20.9 |
| PCN-modified [30] | end-to-end | 37.2 |
| No Completion | / | 35.4 |

4.4 Ablation study Validation of the proposed modules We first validate the effectiveness of the proposed modules on both ModelNet40 and PartNet. The results are shown in Tab. 3. “CL” denotes the complementary learning, and “CA” denotes the cross-attention. Note that the baseline represents the pre-trained ULIP-2 backbone, and the model trained only by InfoNCE needs an additional MLP head following the model. 
The results demonstrate that by introducing the learnable tokens, the model performance is significantly increased. By adopting the cross-attention in FCA, the model is further improved, with an average performance gain of 1.3%. The complementary loss dramatically increases the model's performance, by an average of 5.6%, which demonstrates the effectiveness of the proposed modules.

Table 3: The ablation study of the proposed modules, validated on the PartNet dataset. Note that the baseline (no modules are involved) represents ULIP-2. When only InfoNCE is adopted, a learnable MLP head is appended to the pre-trained model.

| Self-distillation | | FCA | | Accuracy | | |
| InfoNCE | CL | tokens | CA | N=128 | N=64 | N=16 |
| × | × | × | × | 40.6% | 24.8% | 9.0% |
| ✓ | × | × | × | 31.4% | 22.2% | 6.7% |
| ✓ | × | ✓ | × | 44.2% | 39.6% | 25.9% |
| ✓ | × | ✓ | ✓ | 45.4% | 40.5% | 27.5% |
| ✓ | ✓ | ✓ | ✓ | 53.2% | 47.9% | 29.2% |

Further validation of point cloud completion To further validate whether point completion is beneficial to super sparse point clouds, we also validate the combination of PCN + learnable tokens + InfoNCE and show the results in Tab. 4. The PCN results in a substantial performance decrease, especially with a small number of points. This result demonstrates that the PCN is not able to perform robust point completion, and might overfit on the training samples, which leads to the loss of generality in zero-shot classification.

Table 4: Further validation of point cloud completion for zero-shot super sparse point cloud classification on the PartNet dataset.

| | N=128 | N=64 | N=16 |
| w PCN | 49.7% | 37.0% | 15.1% |
| w/o PCN | 53.2% | 47.9% | 30.2% |

Number of learnable tokens We also validate the influence of the number of learnable tokens, without the cross-attention, on the Unseen categories of PartNet. As shown in Tab. 5, increasing the number of learnable tokens benefits the performance, but the improvement becomes marginal as the number gets larger.

Table 5: The influence of the number of learnable tokens. Increasing the number of learnable tokens benefits the model performance, but the improvement becomes marginal as N gets large.

| Num of Tokens | Num of Points | | | Mean |
| | 128 | 64 | 16 | |
| 4 | 32.0 | 21.9 | 7.3 | 20.4 |
| 8 | 36.6 | 31.3 | 18.6 | 28.8 |
| 12 | 43.0 | 35.1 | 20.8 | 33.0 |
| 24 | 41.2 | 36.6 | 25.6 | 34.5 |

Down-sampling strategies In addition to random down-sampling, we also validate the model performance with the KNN down-sampling strategy. This strategy selects a random center point and then samples the K nearest points to the center point. The results are shown in Tab. 6, which demonstrates that the proposed approach can generalize to different down-sampling strategies.

Table 6: The influence of different down-sampling strategies during the test phase on the ModelNet40 dataset. Compared with ULIP-2 and its enhancement via pseudo labels, the proposed approach achieves a significant improvement.

| Model | N=128 | N=64 | N=16 | Mean |
| ULIP-2 | 27.7 | 13.2 | 3.3 | 14.7 |
| PL | 29.9 | 17.7 | 4.0 | 17.2 |
| Ours | 38.0 | 26.5 | 6.6 | 23.7 |

5 Conclusion In this work, we raise the issue of zero-shot super sparse point cloud classification for the first time and propose a simple yet effective unsupervised training schema that effectively enhances the zero-shot classification ability on super sparse point clouds. We propose a learnable FCA to modify the latent space of the point cloud encoder while maintaining its alignment with text embeddings. 
We also propose a complementary learning-based self-distillation approach that leverages the training labels without overfitting the training text embeddings. Explicit experiments demonstrate the effectiveness of the proposed approach, which dramatically increases the zero-shot classification performance on the super sparse point clouds." + }, + { + "url": "http://arxiv.org/abs/2212.05171v4", + "title": "ULIP: Learning a Unified Representation of Language, Images, and Point Clouds for 3D Understanding", + "abstract": "The recognition capabilities of current state-of-the-art 3D models are\nlimited by datasets with a small number of annotated data and a pre-defined set\nof categories. In its 2D counterpart, recent advances have shown that similar\nproblems can be significantly alleviated by employing knowledge from other\nmodalities, such as language. Inspired by this, leveraging multimodal\ninformation for 3D modality could be promising to improve 3D understanding\nunder the restricted data regime, but this line of research is not well\nstudied. Therefore, we introduce ULIP to learn a unified representation of\nimages, texts, and 3D point clouds by pre-training with object triplets from\nthe three modalities. To overcome the shortage of training triplets, ULIP\nleverages a pre-trained vision-language model that has already learned a common\nvisual and textual space by training with massive image-text pairs. Then, ULIP\nlearns a 3D representation space aligned with the common image-text space,\nusing a small number of automatically synthesized triplets. ULIP is agnostic to\n3D backbone networks and can easily be integrated into any 3D architecture.\nExperiments show that ULIP effectively improves the performance of multiple\nrecent 3D backbones by simply pre-training them on ShapeNet55 using our\nframework, achieving state-of-the-art performance in both standard 3D\nclassification and zero-shot 3D classification on ModelNet40 and ScanObjectNN.\nULIP also improves the performance of PointMLP by around 3% in 3D\nclassification on ScanObjectNN, and outperforms PointCLIP by 28.8% on top-1\naccuracy for zero-shot 3D classification on ModelNet40. Our code and\npre-trained models are released at https://github.com/salesforce/ULIP.", + "authors": "Le Xue, Mingfei Gao, Chen Xing, Roberto Mart\u00edn-Mart\u00edn, Jiajun Wu, Caiming Xiong, Ran Xu, Juan Carlos Niebles, Silvio Savarese", + "published": "2022-12-10", + "updated": "2023-06-12", + "primary_cat": "cs.CV", + "cats": [ + "cs.CV" + ], + "label": "Related Work" + }, + { + "url": "http://arxiv.org/abs/2112.02413v1", + "title": "PointCLIP: Point Cloud Understanding by CLIP", + "abstract": "Recently, zero-shot and few-shot learning via Contrastive Vision-Language\nPre-training (CLIP) have shown inspirational performance on 2D visual\nrecognition, which learns to match images with their corresponding texts in\nopen-vocabulary settings. However, it remains under explored that whether CLIP,\npre-trained by large-scale image-text pairs in 2D, can be generalized to 3D\nrecognition. In this paper, we identify such a setting is feasible by proposing\nPointCLIP, which conducts alignment between CLIP-encoded point cloud and 3D\ncategory texts. Specifically, we encode a point cloud by projecting it into\nmulti-view depth maps without rendering, and aggregate the view-wise zero-shot\nprediction to achieve knowledge transfer from 2D to 3D. 
On top of that, we\ndesign an inter-view adapter to better extract the global feature and\nadaptively fuse the few-shot knowledge learned from 3D into CLIP pre-trained in\n2D. By just fine-tuning the lightweight adapter in the few-shot settings, the\nperformance of PointCLIP could be largely improved. In addition, we observe the\ncomplementary property between PointCLIP and classical 3D-supervised networks.\nBy simple ensembling, PointCLIP boosts baseline's performance and even\nsurpasses state-of-the-art models. Therefore, PointCLIP is a promising\nalternative for effective 3D point cloud understanding via CLIP under low\nresource cost and data regime. We conduct thorough experiments on\nwidely-adopted ModelNet10, ModelNet40 and the challenging ScanObjectNN to\ndemonstrate the effectiveness of PointCLIP. The code is released at\nhttps://github.com/ZrrSkywalker/PointCLIP.", + "authors": "Renrui Zhang, Ziyu Guo, Wei Zhang, Kunchang Li, Xupeng Miao, Bin Cui, Yu Qiao, Peng Gao, Hongsheng Li", + "published": "2021-12-04", + "updated": "2021-12-04", + "primary_cat": "cs.CV", + "cats": [ + "cs.CV", + "cs.AI", + "cs.RO" + ], + "label": "Related Work" + }, + { + "url": "http://arxiv.org/abs/1808.00671v3", + "title": "PCN: Point Completion Network", + "abstract": "Shape completion, the problem of estimating the complete geometry of objects\nfrom partial observations, lies at the core of many vision and robotics\napplications. In this work, we propose Point Completion Network (PCN), a novel\nlearning-based approach for shape completion. Unlike existing shape completion\nmethods, PCN directly operates on raw point clouds without any structural\nassumption (e.g. symmetry) or annotation (e.g. semantic class) about the\nunderlying shape. It features a decoder design that enables the generation of\nfine-grained completions while maintaining a small number of parameters. Our\nexperiments show that PCN produces dense, complete point clouds with realistic\nstructures in the missing regions on inputs with various levels of\nincompleteness and noise, including cars from LiDAR scans in the KITTI dataset.", + "authors": "Wentao Yuan, Tejas Khot, David Held, Christoph Mertz, Martial Hebert", + "published": "2018-08-02", + "updated": "2019-09-26", + "primary_cat": "cs.CV", + "cats": [ + "cs.CV", + "cs.RO" + ], + "label": "Related Work" + }, + { + "url": "http://arxiv.org/abs/2202.09507v3", + "title": "PMP-Net++: Point Cloud Completion by Transformer-Enhanced Multi-step Point Moving Paths", + "abstract": "Point cloud completion concerns to predict missing part for incomplete 3D\nshapes. A common strategy is to generate complete shape according to incomplete\ninput. However, unordered nature of point clouds will degrade generation of\nhigh-quality 3D shapes, as detailed topology and structure of unordered points\nare hard to be captured during the generative process using an extracted latent\ncode. We address this problem by formulating completion as point cloud\ndeformation process. Specifically, we design a novel neural network, named\nPMP-Net++, to mimic behavior of an earth mover. It moves each point of\nincomplete input to obtain a complete point cloud, where total distance of\npoint moving paths (PMPs) should be the shortest. Therefore, PMP-Net++ predicts\nunique PMP for each point according to constraint of point moving distances.\nThe network learns a strict and unique correspondence on point-level, and thus\nimproves quality of predicted complete shape. 
Moreover, since moving points\nheavily relies on per-point features learned by network, we further introduce a\ntransformer-enhanced representation learning network, which significantly\nimproves completion performance of PMP-Net++. We conduct comprehensive\nexperiments in shape completion, and further explore application on point cloud\nup-sampling, which demonstrate non-trivial improvement of PMP-Net++ over\nstate-of-the-art point cloud completion/up-sampling methods.", + "authors": "Xin Wen, Peng Xiang, Zhizhong Han, Yan-Pei Cao, Pengfei Wan, Wen Zheng, Yu-Shen Liu", + "published": "2022-02-19", + "updated": "2022-02-28", + "primary_cat": "cs.CV", + "cats": [ + "cs.CV" + ], + "label": "Related Work" + }, + { + "url": "http://arxiv.org/abs/1612.00593v2", + "title": "PointNet: Deep Learning on Point Sets for 3D Classification and Segmentation", + "abstract": "Point cloud is an important type of geometric data structure. Due to its\nirregular format, most researchers transform such data to regular 3D voxel\ngrids or collections of images. This, however, renders data unnecessarily\nvoluminous and causes issues. In this paper, we design a novel type of neural\nnetwork that directly consumes point clouds and well respects the permutation\ninvariance of points in the input. Our network, named PointNet, provides a\nunified architecture for applications ranging from object classification, part\nsegmentation, to scene semantic parsing. Though simple, PointNet is highly\nefficient and effective. Empirically, it shows strong performance on par or\neven better than state of the art. Theoretically, we provide analysis towards\nunderstanding of what the network has learnt and why the network is robust with\nrespect to input perturbation and corruption.", + "authors": "Charles R. Qi, Hao Su, Kaichun Mo, Leonidas J. Guibas", + "published": "2016-12-02", + "updated": "2017-04-10", + "primary_cat": "cs.CV", + "cats": [ + "cs.CV" + ], + "label": "Related Work" + }, + { + "url": "http://arxiv.org/abs/2207.10315v1", + "title": "SeedFormer: Patch Seeds based Point Cloud Completion with Upsample Transformer", + "abstract": "Point cloud completion has become increasingly popular among generation tasks\nof 3D point clouds, as it is a challenging yet indispensable problem to recover\nthe complete shape of a 3D object from its partial observation. In this paper,\nwe propose a novel SeedFormer to improve the ability of detail preservation and\nrecovery in point cloud completion. Unlike previous methods based on a global\nfeature vector, we introduce a new shape representation, namely Patch Seeds,\nwhich not only captures general structures from partial inputs but also\npreserves regional information of local patterns. Then, by integrating seed\nfeatures into the generation process, we can recover faithful details for\ncomplete point clouds in a coarse-to-fine manner. Moreover, we devise an\nUpsample Transformer by extending the transformer structure into basic\noperations of point generators, which effectively incorporates spatial and\nsemantic relationships between neighboring points. Qualitative and quantitative\nevaluations demonstrate that our method outperforms state-of-the-art completion\nnetworks on several benchmark datasets. 
Our code is available at\nhttps://github.com/hrzhou2/seedformer.", + "authors": "Haoran Zhou, Yun Cao, Wenqing Chu, Junwei Zhu, Tong Lu, Ying Tai, Chengjie Wang", + "published": "2022-07-21", + "updated": "2022-07-21", + "primary_cat": "cs.CV", + "cats": [ + "cs.CV" + ], + "label": "Related Work" + }, + { + "url": "http://arxiv.org/abs/2305.08275v4", + "title": "ULIP-2: Towards Scalable Multimodal Pre-training for 3D Understanding", + "abstract": "Recent advancements in multimodal pre-training have shown promising efficacy\nin 3D representation learning by aligning multimodal features across 3D shapes,\ntheir 2D counterparts, and language descriptions. However, the methods used by\nexisting frameworks to curate such multimodal data, in particular language\ndescriptions for 3D shapes, are not scalable, and the collected language\ndescriptions are not diverse. To address this, we introduce ULIP-2, a simple\nyet effective tri-modal pre-training framework that leverages large multimodal\nmodels to automatically generate holistic language descriptions for 3D shapes.\nIt only needs 3D data as input, eliminating the need for any manual 3D\nannotations, and is therefore scalable to large datasets. ULIP-2 is also\nequipped with scaled-up backbones for better multimodal representation\nlearning. We conduct experiments on two large-scale 3D datasets, Objaverse and\nShapeNet, and augment them with tri-modal datasets of 3D point clouds, images,\nand language for training ULIP-2. Experiments show that ULIP-2 demonstrates\nsubstantial benefits in three downstream tasks: zero-shot 3D classification,\nstandard 3D classification with fine-tuning, and 3D captioning (3D-to-language\ngeneration). It achieves a new SOTA of 50.6% (top-1) on Objaverse-LVIS and\n84.7% (top-1) on ModelNet40 in zero-shot classification. In the ScanObjectNN\nbenchmark for standard fine-tuning, ULIP-2 reaches an overall accuracy of 91.5%\nwith a compact model of only 1.4 million parameters. ULIP-2 sheds light on a\nnew paradigm for scalable multimodal 3D representation learning without human\nannotations and shows significant improvements over existing baselines. The\ncode and datasets are released at https://github.com/salesforce/ULIP.", + "authors": "Le Xue, Ning Yu, Shu Zhang, Artemis Panagopoulou, Junnan Li, Roberto Mart\u00edn-Mart\u00edn, Jiajun Wu, Caiming Xiong, Ran Xu, Juan Carlos Niebles, Silvio Savarese", + "published": "2023-05-14", + "updated": "2024-04-26", + "primary_cat": "cs.CV", + "cats": [ + "cs.CV" + ], + "label": "Related Work" + }, + { + "url": "http://arxiv.org/abs/2012.03408v3", + "title": "PMP-Net: Point Cloud Completion by Learning Multi-step Point Moving Paths", + "abstract": "The task of point cloud completion aims to predict the missing part for an\nincomplete 3D shape. A widely used strategy is to generate a complete point\ncloud from the incomplete one. However, the unordered nature of point clouds\nwill degrade the generation of high-quality 3D shapes, as the detailed topology\nand structure of discrete points are hard to be captured by the generative\nprocess only using a latent code. In this paper, we address the above problem\nby reconsidering the completion task from a new perspective, where we formulate\nthe prediction as a point cloud deformation process. 
Specifically, we design a\nnovel neural network, named PMP-Net, to mimic the behavior of an earth mover.\nIt moves each point of the incomplete input to complete the point cloud, where\nthe total distance of point moving paths (PMP) should be shortest. Therefore,\nPMP-Net predicts a unique point moving path for each point according to the\nconstraint of total point moving distances. As a result, the network learns a\nstrict and unique correspondence on point-level, which can capture the detailed\ntopology and structure relationships between the incomplete shape and the\ncomplete target, and thus improves the quality of the predicted complete shape.\nWe conduct comprehensive experiments on Completion3D and PCN datasets, which\ndemonstrate our advantages over the state-of-the-art point cloud completion\nmethods.", + "authors": "Xin Wen, Peng Xiang, Zhizhong Han, Yan-Pei Cao, Pengfei Wan, Wen Zheng, Yu-Shen Liu", + "published": "2020-12-07", + "updated": "2021-06-12", + "primary_cat": "cs.CV", + "cats": [ + "cs.CV" + ], + "label": "Related Work" + }, + { + "url": "http://arxiv.org/abs/2103.00020v1", + "title": "Learning Transferable Visual Models From Natural Language Supervision", + "abstract": "State-of-the-art computer vision systems are trained to predict a fixed set\nof predetermined object categories. This restricted form of supervision limits\ntheir generality and usability since additional labeled data is needed to\nspecify any other visual concept. Learning directly from raw text about images\nis a promising alternative which leverages a much broader source of\nsupervision. We demonstrate that the simple pre-training task of predicting\nwhich caption goes with which image is an efficient and scalable way to learn\nSOTA image representations from scratch on a dataset of 400 million (image,\ntext) pairs collected from the internet. After pre-training, natural language\nis used to reference learned visual concepts (or describe new ones) enabling\nzero-shot transfer of the model to downstream tasks. We study the performance\nof this approach by benchmarking on over 30 different existing computer vision\ndatasets, spanning tasks such as OCR, action recognition in videos,\ngeo-localization, and many types of fine-grained object classification. The\nmodel transfers non-trivially to most tasks and is often competitive with a\nfully supervised baseline without the need for any dataset specific training.\nFor instance, we match the accuracy of the original ResNet-50 on ImageNet\nzero-shot without needing to use any of the 1.28 million training examples it\nwas trained on. We release our code and pre-trained model weights at\nhttps://github.com/OpenAI/CLIP.", + "authors": "Alec Radford, Jong Wook Kim, Chris Hallacy, Aditya Ramesh, Gabriel Goh, Sandhini Agarwal, Girish Sastry, Amanda Askell, Pamela Mishkin, Jack Clark, Gretchen Krueger, Ilya Sutskever", + "published": "2021-02-26", + "updated": "2021-02-26", + "primary_cat": "cs.CV", + "cats": [ + "cs.CV", + "cs.LG" + ], + "label": "Related Work" + }, + { + "url": "http://arxiv.org/abs/2012.09688v4", + "title": "PCT: Point cloud transformer", + "abstract": "The irregular domain and lack of ordering make it challenging to design deep\nneural networks for point cloud processing. This paper presents a novel\nframework named Point Cloud Transformer(PCT) for point cloud learning. PCT is\nbased on Transformer, which achieves huge success in natural language\nprocessing and displays great potential in image processing. 
It is inherently\npermutation invariant for processing a sequence of points, making it\nwell-suited for point cloud learning. To better capture local context within\nthe point cloud, we enhance input embedding with the support of farthest point\nsampling and nearest neighbor search. Extensive experiments demonstrate that\nthe PCT achieves the state-of-the-art performance on shape classification, part\nsegmentation and normal estimation tasks.", + "authors": "Meng-Hao Guo, Jun-Xiong Cai, Zheng-Ning Liu, Tai-Jiang Mu, Ralph R. Martin, Shi-Min Hu", + "published": "2020-12-17", + "updated": "2021-06-07", + "primary_cat": "cs.CV", + "cats": [ + "cs.CV" + ], + "label": "Related Work" + }, + { + "url": "http://arxiv.org/abs/2103.09975v2", + "title": "You Only Group Once: Efficient Point-Cloud Processing with Token Representation and Relation Inference Module", + "abstract": "3D point-cloud-based perception is a challenging but crucial computer vision\ntask. A point-cloud consists of a sparse, unstructured, and unordered set of\npoints. To understand a point-cloud, previous point-based methods, such as\nPointNet++, extract visual features through hierarchically aggregation of local\nfeatures. However, such methods have several critical limitations: 1) Such\nmethods require several sampling and grouping operations, which slow down the\ninference speed. 2) Such methods spend an equal amount of computation on each\npoints in a point-cloud, though many of points are redundant. 3) Such methods\naggregate local features together through downsampling, which leads to\ninformation loss and hurts the perception performance. To overcome these\nchallenges, we propose a novel, simple, and elegant deep learning model called\nYOGO (You Only Group Once). Compared with previous methods, YOGO only needs to\nsample and group a point-cloud once, so it is very efficient. Instead of\noperating on points, YOGO operates on a small number of tokens, each of which\nsummarizes the point features in a sub-region. This allows us to avoid\ncomputing on the redundant points and thus boosts efficiency.Moreover, YOGO\npreserves point-wise features by projecting token features to point features\nalthough the computation is performed on tokens. This avoids information loss\nand can improve point-wise perception performance. We conduct thorough\nexperiments to demonstrate that YOGO achieves at least 3.0x speedup over\npoint-based baselines while delivering competitive classification and\nsegmentation performance on the ModelNet, ShapeNetParts and S3DIS datasets.", + "authors": "Chenfeng Xu, Bohan Zhai, Bichen Wu, Tian Li, Wei Zhan, Peter Vajda, Kurt Keutzer, Masayoshi Tomizuka", + "published": "2021-03-18", + "updated": "2021-03-24", + "primary_cat": "cs.RO", + "cats": [ + "cs.RO" + ], + "label": "Related Work" + }, + { + "url": "http://arxiv.org/abs/2111.14819v2", + "title": "Point-BERT: Pre-training 3D Point Cloud Transformers with Masked Point Modeling", + "abstract": "We present Point-BERT, a new paradigm for learning Transformers to generalize\nthe concept of BERT to 3D point cloud. Inspired by BERT, we devise a Masked\nPoint Modeling (MPM) task to pre-train point cloud Transformers. Specifically,\nwe first divide a point cloud into several local point patches, and a point\ncloud Tokenizer with a discrete Variational AutoEncoder (dVAE) is designed to\ngenerate discrete point tokens containing meaningful local information. 
Then,\nwe randomly mask out some patches of input point clouds and feed them into the\nbackbone Transformers. The pre-training objective is to recover the original\npoint tokens at the masked locations under the supervision of point tokens\nobtained by the Tokenizer. Extensive experiments demonstrate that the proposed\nBERT-style pre-training strategy significantly improves the performance of\nstandard point cloud Transformers. Equipped with our pre-training strategy, we\nshow that a pure Transformer architecture attains 93.8% accuracy on ModelNet40\nand 83.1% accuracy on the hardest setting of ScanObjectNN, surpassing carefully\ndesigned point cloud models with much fewer hand-made designs. We also\ndemonstrate that the representations learned by Point-BERT transfer well to new\ntasks and domains, where our models largely advance the state-of-the-art of\nfew-shot point cloud classification task. The code and pre-trained models are\navailable at https://github.com/lulutang0608/Point-BERT", + "authors": "Xumin Yu, Lulu Tang, Yongming Rao, Tiejun Huang, Jie Zhou, Jiwen Lu", + "published": "2021-11-29", + "updated": "2022-06-06", + "primary_cat": "cs.CV", + "cats": [ + "cs.CV", + "cs.AI", + "cs.LG" + ], + "label": "Related Work" + }, + { + "url": "http://arxiv.org/abs/1904.08889v2", + "title": "KPConv: Flexible and Deformable Convolution for Point Clouds", + "abstract": "We present Kernel Point Convolution (KPConv), a new design of point\nconvolution, i.e. that operates on point clouds without any intermediate\nrepresentation. The convolution weights of KPConv are located in Euclidean\nspace by kernel points, and applied to the input points close to them. Its\ncapacity to use any number of kernel points gives KPConv more flexibility than\nfixed grid convolutions. Furthermore, these locations are continuous in space\nand can be learned by the network. Therefore, KPConv can be extended to\ndeformable convolutions that learn to adapt kernel points to local geometry.\nThanks to a regular subsampling strategy, KPConv is also efficient and robust\nto varying densities. Whether they use deformable KPConv for complex tasks, or\nrigid KPconv for simpler tasks, our networks outperform state-of-the-art\nclassification and segmentation approaches on several datasets. We also offer\nablation studies and visualizations to provide understanding of what has been\nlearned by KPConv and to validate the descriptive power of deformable KPConv.", + "authors": "Hugues Thomas, Charles R. Qi, Jean-Emmanuel Deschaud, Beatriz Marcotegui, Fran\u00e7ois Goulette, Leonidas J. Guibas", + "published": "2019-04-18", + "updated": "2019-08-19", + "primary_cat": "cs.CV", + "cats": [ + "cs.CV" + ], + "label": "Related Work" + }, + { + "url": "http://arxiv.org/abs/2212.06785v1", + "title": "Learning 3D Representations from 2D Pre-trained Models via Image-to-Point Masked Autoencoders", + "abstract": "Pre-training by numerous image data has become de-facto for robust 2D\nrepresentations. In contrast, due to the expensive data acquisition and\nannotation, a paucity of large-scale 3D datasets severely hinders the learning\nfor high-quality 3D features. In this paper, we propose an alternative to\nobtain superior 3D representations from 2D pre-trained models via\nImage-to-Point Masked Autoencoders, named as I2P-MAE. By self-supervised\npre-training, we leverage the well learned 2D knowledge to guide 3D masked\nautoencoding, which reconstructs the masked point tokens with an\nencoder-decoder architecture. 
Specifically, we first utilize off-the-shelf 2D\nmodels to extract the multi-view visual features of the input point cloud, and\nthen conduct two types of image-to-point learning schemes on top. For one, we\nintroduce a 2D-guided masking strategy that maintains semantically important\npoint tokens to be visible for the encoder. Compared to random masking, the\nnetwork can better concentrate on significant 3D structures and recover the\nmasked tokens from key spatial cues. For another, we enforce these visible\ntokens to reconstruct the corresponding multi-view 2D features after the\ndecoder. This enables the network to effectively inherit high-level 2D\nsemantics learned from rich image data for discriminative 3D modeling. Aided by\nour image-to-point pre-training, the frozen I2P-MAE, without any fine-tuning,\nachieves 93.4% accuracy for linear SVM on ModelNet40, competitive to the fully\ntrained results of existing methods. By further fine-tuning on on\nScanObjectNN's hardest split, I2P-MAE attains the state-of-the-art 90.11%\naccuracy, +3.68% to the second-best, demonstrating superior transferable\ncapacity. Code will be available at https://github.com/ZrrSkywalker/I2P-MAE.", + "authors": "Renrui Zhang, Liuhui Wang, Yu Qiao, Peng Gao, Hongsheng Li", + "published": "2022-12-13", + "updated": "2022-12-13", + "primary_cat": "cs.CV", + "cats": [ + "cs.CV", + "cs.AI" + ], + "label": "Related Work" + }, + { + "url": "http://arxiv.org/abs/2210.01055v3", + "title": "CLIP2Point: Transfer CLIP to Point Cloud Classification with Image-Depth Pre-training", + "abstract": "Pre-training across 3D vision and language remains under development because\nof limited training data. Recent works attempt to transfer vision-language\npre-training models to 3D vision. PointCLIP converts point cloud data to\nmulti-view depth maps, adopting CLIP for shape classification. However, its\nperformance is restricted by the domain gap between rendered depth maps and\nimages, as well as the diversity of depth distributions. To address this issue,\nwe propose CLIP2Point, an image-depth pre-training method by contrastive\nlearning to transfer CLIP to the 3D domain, and adapt it to point cloud\nclassification. We introduce a new depth rendering setting that forms a better\nvisual effect, and then render 52,460 pairs of images and depth maps from\nShapeNet for pre-training. The pre-training scheme of CLIP2Point combines\ncross-modality learning to enforce the depth features for capturing expressive\nvisual and textual features and intra-modality learning to enhance the\ninvariance of depth aggregation. Additionally, we propose a novel Dual-Path\nAdapter (DPA) module, i.e., a dual-path structure with simplified adapters for\nfew-shot learning. The dual-path structure allows the joint use of CLIP and\nCLIP2Point, and the simplified adapter can well fit few-shot tasks without\npost-search. Experimental results show that CLIP2Point is effective in\ntransferring CLIP knowledge to 3D vision. Our CLIP2Point outperforms PointCLIP\nand other self-supervised 3D networks, achieving state-of-the-art results on\nzero-shot and few-shot classification.", + "authors": "Tianyu Huang, Bowen Dong, Yunhan Yang, Xiaoshui Huang, Rynson W. H. 
Lau, Wanli Ouyang, Wangmeng Zuo", + "published": "2022-10-03", + "updated": "2023-08-23", + "primary_cat": "cs.CV", + "cats": [ + "cs.CV" + ], + "label": "Related Work" + }, + { + "url": "http://arxiv.org/abs/2104.04980v1", + "title": "Zero-Shot Learning on 3D Point Cloud Objects and Beyond", + "abstract": "Zero-shot learning, the task of learning to recognize new classes not seen\nduring training, has received considerable attention in the case of 2D image\nclassification. However, despite the increasing ubiquity of 3D sensors, the\ncorresponding 3D point cloud classification problem has not been meaningfully\nexplored and introduces new challenges. In this paper, we identify some of the\nchallenges and apply 2D Zero-Shot Learning (ZSL) methods in the 3D domain to\nanalyze the performance of existing models. Then, we propose a novel approach\nto address the issues specific to 3D ZSL. We first present an inductive ZSL\nprocess and then extend it to the transductive ZSL and Generalized ZSL (GZSL)\nsettings for 3D point cloud classification. To this end, a novel loss function\nis developed that simultaneously aligns seen semantics with point cloud\nfeatures and takes advantage of unlabeled test data to address some known\nissues (e.g., the problems of domain adaptation, hubness, and data bias). While\ndesigned for the particularities of 3D point cloud classification, the method\nis shown to also be applicable to the more common use-case of 2D image\nclassification. An extensive set of experiments is carried out, establishing\nstate-of-the-art for ZSL and GZSL on synthetic (ModelNet40, ModelNet10, McGill)\nand real (ScanObjectNN) 3D point cloud datasets.", + "authors": "Ali Cheraghian, Shafinn Rahman, Townim F. Chowdhury, Dylan Campbell, Lars Petersson", + "published": "2021-04-11", + "updated": "2021-04-11", + "primary_cat": "cs.CV", + "cats": [ + "cs.CV" + ], + "label": "Related Work" + }, + { + "url": "http://arxiv.org/abs/1801.07829v2", + "title": "Dynamic Graph CNN for Learning on Point Clouds", + "abstract": "Point clouds provide a flexible geometric representation suitable for\ncountless applications in computer graphics; they also comprise the raw output\nof most 3D data acquisition devices. While hand-designed features on point\nclouds have long been proposed in graphics and vision, however, the recent\noverwhelming success of convolutional neural networks (CNNs) for image analysis\nsuggests the value of adapting insight from CNN to the point cloud world. Point\nclouds inherently lack topological information so designing a model to recover\ntopology can enrich the representation power of point clouds. To this end, we\npropose a new neural network module dubbed EdgeConv suitable for CNN-based\nhigh-level tasks on point clouds including classification and segmentation.\nEdgeConv acts on graphs dynamically computed in each layer of the network. It\nis differentiable and can be plugged into existing architectures. Compared to\nexisting modules operating in extrinsic space or treating each point\nindependently, EdgeConv has several appealing properties: It incorporates local\nneighborhood information; it can be stacked applied to learn global shape\nproperties; and in multi-layer systems affinity in feature space captures\nsemantic characteristics over potentially long distances in the original\nembedding. We show the performance of our model on standard benchmarks\nincluding ModelNet40, ShapeNetPart, and S3DIS.", + "authors": "Yue Wang, Yongbin Sun, Ziwei Liu, Sanjay E. Sarma, Michael M. 
Bronstein, Justin M. Solomon", + "published": "2018-01-24", + "updated": "2019-06-11", + "primary_cat": "cs.CV", + "cats": [ + "cs.CV" + ], + "label": "Related Work" + }, + { + "url": "http://arxiv.org/abs/1706.02413v1", + "title": "PointNet++: Deep Hierarchical Feature Learning on Point Sets in a Metric Space", + "abstract": "Few prior works study deep learning on point sets. PointNet by Qi et al. is a\npioneer in this direction. However, by design PointNet does not capture local\nstructures induced by the metric space points live in, limiting its ability to\nrecognize fine-grained patterns and generalizability to complex scenes. In this\nwork, we introduce a hierarchical neural network that applies PointNet\nrecursively on a nested partitioning of the input point set. By exploiting\nmetric space distances, our network is able to learn local features with\nincreasing contextual scales. With further observation that point sets are\nusually sampled with varying densities, which results in greatly decreased\nperformance for networks trained on uniform densities, we propose novel set\nlearning layers to adaptively combine features from multiple scales.\nExperiments show that our network called PointNet++ is able to learn deep point\nset features efficiently and robustly. In particular, results significantly\nbetter than state-of-the-art have been obtained on challenging benchmarks of 3D\npoint clouds.", + "authors": "Charles R. Qi, Li Yi, Hao Su, Leonidas J. Guibas", + "published": "2017-06-07", + "updated": "2017-06-07", + "primary_cat": "cs.CV", + "cats": [ + "cs.CV" + ], + "label": "Related Work" + }, + { + "url": "http://arxiv.org/abs/2011.00931v2", + "title": "Point Transformer", + "abstract": "In this work, we present Point Transformer, a deep neural network that\noperates directly on unordered and unstructured point sets. We design Point\nTransformer to extract local and global features and relate both\nrepresentations by introducing the local-global attention mechanism, which aims\nto capture spatial point relations and shape information. For that purpose, we\npropose SortNet, as part of the Point Transformer, which induces input\npermutation invariance by selecting points based on a learned score. The output\nof Point Transformer is a sorted and permutation invariant feature list that\ncan directly be incorporated into common computer vision applications. We\nevaluate our approach on standard classification and part segmentation\nbenchmarks to demonstrate competitive results compared to the prior work. Code\nis publicly available at: https://github.com/engelnico/point-transformer", + "authors": "Nico Engel, Vasileios Belagiannis, Klaus Dietmayer", + "published": "2020-11-02", + "updated": "2021-10-14", + "primary_cat": "cs.CV", + "cats": [ + "cs.CV" + ], + "label": "Related Work" + }, + { + "url": "http://arxiv.org/abs/2012.11409v3", + "title": "3D Object Detection with Pointformer", + "abstract": "Feature learning for 3D object detection from point clouds is very\nchallenging due to the irregularity of 3D point cloud data. In this paper, we\npropose Pointformer, a Transformer backbone designed for 3D point clouds to\nlearn features effectively. Specifically, a Local Transformer module is\nemployed to model interactions among points in a local region, which learns\ncontext-dependent region features at an object level. A Global Transformer is\ndesigned to learn context-aware representations at the scene level. 
To further\ncapture the dependencies among multi-scale representations, we propose\nLocal-Global Transformer to integrate local features with global features from\nhigher resolution. In addition, we introduce an efficient coordinate refinement\nmodule to shift down-sampled points closer to object centroids, which improves\nobject proposal generation. We use Pointformer as the backbone for\nstate-of-the-art object detection models and demonstrate significant\nimprovements over original models on both indoor and outdoor datasets.", + "authors": "Xuran Pan, Zhuofan Xia, Shiji Song, Li Erran Li, Gao Huang", + "published": "2020-12-21", + "updated": "2021-06-22", + "primary_cat": "cs.CV", + "cats": [ + "cs.CV" + ], + "label": "Related Work" + }, + { + "url": "http://arxiv.org/abs/2302.05637v2", + "title": "Dual Relation Knowledge Distillation for Object Detection", + "abstract": "Knowledge distillation is an effective method for model compression. However,\nit is still a challenging topic to apply knowledge distillation to detection\ntasks. There are two key points resulting in poor distillation performance for\ndetection tasks. One is the serious imbalance between foreground and background\nfeatures, another one is that small object lacks enough feature representation.\nTo solve the above issues, we propose a new distillation method named dual\nrelation knowledge distillation (DRKD), including pixel-wise relation\ndistillation and instance-wise relation distillation. The pixel-wise relation\ndistillation embeds pixel-wise features in the graph space and applies graph\nconvolution to capture the global pixel relation. By distilling the global\npixel relation, the student detector can learn the relation between foreground\nand background features, and avoid the difficulty of distilling features\ndirectly for the feature imbalance issue. Besides, we find that instance-wise\nrelation supplements valuable knowledge beyond independent features for small\nobjects. Thus, the instance-wise relation distillation is designed, which\ncalculates the similarity of different instances to obtain a relation matrix.\nMore importantly, a relation filter module is designed to highlight valuable\ninstance relations. The proposed dual relation knowledge distillation is\ngeneral and can be easily applied for both one-stage and two-stage detectors.\nOur method achieves state-of-the-art performance, which improves Faster R-CNN\nbased on ResNet50 from 38.4% to 41.6% mAP and improves RetinaNet based on\nResNet50 from 37.4% to 40.3% mAP on COCO 2017.", + "authors": "Zhenliang Ni, Fukui Yang, Shengzhao Wen, Gang Zhang", + "published": "2023-02-11", + "updated": "2023-06-01", + "primary_cat": "cs.CV", + "cats": [ + "cs.CV" + ], + "category": "Distillation" + }, + { + "url": "http://arxiv.org/abs/0708.3699v2", + "title": "Convolutional Entanglement Distillation", + "abstract": "We develop a theory of entanglement distillation that exploits a\nconvolutional coding structure. We provide a method for converting an arbitrary\nclassical binary or quaternary convolutional code into a convolutional\nentanglement distillation protocol. The imported classical convolutional code\ndoes not have to be dual-containing or self-orthogonal. The yield and\nerror-correcting properties of such a protocol depend respectively on the rate\nand error-correcting properties of the imported classical convolutional code. 
A\nconvolutional entanglement distillation protocol has several other benefits.\nTwo parties sharing noisy ebits can distill noiseless ebits ``online'' as they\nacquire more noisy ebits. Distillation yield is high and decoding complexity is\nsimple for a convolutional entanglement distillation protocol. Our theory of\nconvolutional entanglement distillation reduces the problem of finding a good\nconvolutional entanglement distillation protocol to the well-established\nproblem of finding a good classical convolutional code.", + "authors": "Mark M. Wilde, Hari Krovi, Todd A. Brun", + "published": "2007-08-28", + "updated": "2007-09-19", + "primary_cat": "quant-ph", + "cats": [ + "quant-ph", + "cs.IT", + "math.IT" + ], + "category": "Distillation" + }, + { + "url": "http://arxiv.org/abs/quant-ph/0312123v2", + "title": "Many copies may be required for entanglement distillation", + "abstract": "A mixed quantum state shared between two parties is said to be distillable\nif, by means of a protocol involving only local quantum operations and\nclassical communication, the two parties can transform some number of copies of\nthat state into a single shared pair of qubits having high fidelity with a\nmaximally entangled state. In this paper it is proved that there exist\nstates that are distillable, but for which an arbitrarily large number of\ncopies is required before any distillation procedure can produce a shared pair\nof qubits with even a small amount of entanglement. Specifically, for every\npositive integer n there exists a state that is distillable, but given n or\nfewer copies of that state every distillation procedure outputting a single\nshared pair of qubits will output those qubits in a separable state.\nEssentially all previous examples of states proved to be distillable were such\nthat some distillation procedure could output an entangled pair of qubits given\na single copy of the state in question.", + "authors": "John Watrous", + "published": "2003-12-15", + "updated": "2004-05-31", + "primary_cat": "quant-ph", + "cats": [ + "quant-ph" + ], + "category": "Distillation" + }, + { + "url": "http://arxiv.org/abs/2404.17732v1", + "title": "Generative Dataset Distillation: Balancing Global Structure and Local Details", + "abstract": "In this paper, we propose a new dataset distillation method that considers\nbalancing global structure and local details when distilling the information\nfrom a large dataset into a generative model. Dataset distillation has been\nproposed to reduce the size of the required dataset when training models. The\nconventional dataset distillation methods face the problem of long redeployment\ntime and poor cross-architecture performance. Moreover, previous methods\nfocused too much on the high-level semantic attributes between the synthetic\ndataset and the original dataset while ignoring the local features such as\ntexture and shape. Based on the above understanding, we propose a new method\nfor distilling the original image dataset into a generative model. Our method\ninvolves using a conditional generative adversarial network to generate the\ndistilled dataset. 
Subsequently, we ensure balancing global structure and local\ndetails in the distillation process, continuously optimizing the generator for\nmore information-dense dataset generation.", + "authors": "Longzhen Li, Guang Li, Ren Togo, Keisuke Maeda, Takahiro Ogawa, Miki Haseyama", + "published": "2024-04-26", + "updated": "2024-04-26", + "primary_cat": "cs.CV", + "cats": [ + "cs.CV", + "cs.AI", + "cs.LG" + ], + "category": "Distillation" + }, + { + "url": "http://arxiv.org/abs/2204.00548v1", + "title": "Unified and Effective Ensemble Knowledge Distillation", + "abstract": "Ensemble knowledge distillation can extract knowledge from multiple teacher\nmodels and encode it into a single student model. Many existing methods learn\nand distill the student model on labeled data only. However, the teacher models\nare usually learned on the same labeled data, and their predictions have high\ncorrelations with groudtruth labels. Thus, they cannot provide sufficient\nknowledge complementary to task labels for student teaching. Distilling on\nunseen unlabeled data has the potential to enhance the knowledge transfer from\nthe teachers to the student. In this paper, we propose a unified and effective\nensemble knowledge distillation method that distills a single student model\nfrom an ensemble of teacher models on both labeled and unlabeled data. Since\ndifferent teachers may have diverse prediction correctness on the same sample,\non labeled data we weight the predictions of different teachers according to\ntheir correctness. In addition, we weight the distillation loss based on the\noverall prediction correctness of the teacher ensemble to distill high-quality\nknowledge. On unlabeled data, there is no groundtruth to evaluate prediction\ncorrectness. Fortunately, the disagreement among teachers is an indication of\nsample hardness, and thereby we weight the distillation loss based on teachers'\ndisagreement to emphasize knowledge distillation on important samples.\nExtensive experiments on four datasets show the effectiveness of our proposed\nensemble distillation method.", + "authors": "Chuhan Wu, Fangzhao Wu, Tao Qi, Yongfeng Huang", + "published": "2022-04-01", + "updated": "2022-04-01", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "cs.CL" + ], + "category": "Distillation" + }, + { + "url": "http://arxiv.org/abs/0803.0345v2", + "title": "Secret key distillation from shielded two-qubit states", + "abstract": "The quantum states corresponding to a secret key are characterized using the\nso-called private states, where the key part consisting of a secret key is\nshielded by the additional systems. Based on the construction, it was shown\nthat a secret key can be distilled from bound entangled states. In this work, I\nconsider the shielded two-qubit states in a key-distillation scenario and\nderive the conditions under which a secret key can be distilled using the\nrecurrence protocol or the two-way classical distillation, advantage\ndistillation together with one-way postprocessing. From the security\nconditions, it is shown that a secret key can be distilled from bound entangled\nstates in a much wider range. 
In addition, I consider the case in which\nwhite noise is added to quantum states and show that the classical distillation\nprotocol still works despite a certain amount of noise although the recurrence\nprotocol does not.", + "authors": "Joonwoo Bae", + "published": "2008-03-03", + "updated": "2010-09-22", + "primary_cat": "quant-ph", + "cats": [ + "quant-ph" + ], + "category": "Distillation" + }, + { + "url": "http://arxiv.org/abs/2305.05233v1", + "title": "DynamicKD: An Effective Knowledge Distillation via Dynamic Entropy Correction-Based Distillation for Gap Optimizing", + "abstract": "The knowledge distillation uses a high-performance teacher network to guide\nthe student network. However, the performance gap between the teacher and\nstudent networks can affect the student's training. This paper proposes a novel\nknowledge distillation algorithm based on dynamic entropy correction to reduce\nthe gap by adjusting the student instead of the teacher. Firstly, the effect of\nchanging the output entropy (short for output information entropy) in the\nstudent on the distillation loss is analyzed in theory. This paper shows that\ncorrecting the output entropy can reduce the gap. Then, a knowledge\ndistillation algorithm based on dynamic entropy correction is created, which\ncan correct the output entropy in real-time with an entropy controller updated\ndynamically by the distillation loss. The proposed algorithm is validated on\nthe CIFAR100 and ImageNet. The comparison with various state-of-the-art\ndistillation algorithms shows impressive results, especially in the experiment\non the CIFAR100 regarding teacher-student pair resnet32x4-resnet8x4. The\nproposed algorithm raises 2.64 points over the traditional distillation\nalgorithm and 0.87 points over the state-of-the-art algorithm CRD in\nclassification accuracy, demonstrating its effectiveness and efficiency.", + "authors": "Songling Zhu, Ronghua Shang, Bo Yuan, Weitong Zhang, Yangyang Li, Licheng Jiao", + "published": "2023-05-09", + "updated": "2023-05-09", + "primary_cat": "cs.CV", + "cats": [ + "cs.CV" + ], + "category": "Distillation" + }, + { + "url": "http://arxiv.org/abs/2311.13811v2", + "title": "Education distillation:getting student models to learn in shcools", + "abstract": "Knowledge distillation is one of the methods for model compression, and\nexisting knowledge distillation techniques focus on how to improve the\ndistillation algorithm so as to enhance the distillation efficiency. This paper\nintroduces dynamic incremental learning into knowledge distillation and\nproposes a distillation strategy for education distillation. Specifically, it\nis proposed to take fragmented student models divided from the complete student\nmodel as lower-grade models. As the grade level rises, fragmented student\nmodels deepen in conjunction with designed teaching reference layers, while\nlearning and distilling from more teacher models. By moving from lower to\nhigher grades, fragmented student models were gradually integrated into a\ncomplete target student model, and the performance of the student models\ngradually improved from lower to higher grades of the stage. 
Education\ndistillation strategies combined with distillation algorithms outperform the\nresults of single distillation algorithms on the public dataset\nCIFAR100,Caltech256, Food-101 dataset.", + "authors": "Ling Feng, Danyang Li, Tianhao Wu, Xuliang Duan", + "published": "2023-11-23", + "updated": "2023-11-27", + "primary_cat": "cs.AI", + "cats": [ + "cs.AI" + ], + "category": "Distillation" + }, + { + "url": "http://arxiv.org/abs/2307.08436v1", + "title": "DOT: A Distillation-Oriented Trainer", + "abstract": "Knowledge distillation transfers knowledge from a large model to a small one\nvia task and distillation losses. In this paper, we observe a trade-off between\ntask and distillation losses, i.e., introducing distillation loss limits the\nconvergence of task loss. We believe that the trade-off results from the\ninsufficient optimization of distillation loss. The reason is: The teacher has\na lower task loss than the student, and a lower distillation loss drives the\nstudent more similar to the teacher, then a better-converged task loss could be\nobtained. To break the trade-off, we propose the Distillation-Oriented Trainer\n(DOT). DOT separately considers gradients of task and distillation losses, then\napplies a larger momentum to distillation loss to accelerate its optimization.\nWe empirically prove that DOT breaks the trade-off, i.e., both losses are\nsufficiently optimized. Extensive experiments validate the superiority of DOT.\nNotably, DOT achieves a +2.59% accuracy improvement on ImageNet-1k for the\nResNet50-MobileNetV1 pair. Conclusively, DOT greatly benefits the student's\noptimization properties in terms of loss convergence and model generalization.\nCode will be made publicly available.", + "authors": "Borui Zhao, Quan Cui, Renjie Song, Jiajun Liang", + "published": "2023-07-17", + "updated": "2023-07-17", + "primary_cat": "cs.CV", + "cats": [ + "cs.CV" + ], + "category": "Distillation" + }, + { + "url": "http://arxiv.org/abs/2312.06899v1", + "title": "LoRA-Enhanced Distillation on Guided Diffusion Models", + "abstract": "Diffusion models, such as Stable Diffusion (SD), offer the ability to\ngenerate high-resolution images with diverse features, but they come at a\nsignificant computational and memory cost. In classifier-free guided diffusion\nmodels, prolonged inference times are attributed to the necessity of computing\ntwo separate diffusion models at each denoising step. Recent work has shown\npromise in improving inference time through distillation techniques, teaching\nthe model to perform similar denoising steps with reduced computations.\nHowever, the application of distillation introduces additional memory overhead\nto these already resource-intensive diffusion models, making it less practical.\n To address these challenges, our research explores a novel approach that\ncombines Low-Rank Adaptation (LoRA) with model distillation to efficiently\ncompress diffusion models. This approach not only reduces inference time but\nalso mitigates memory overhead, and notably decreases memory consumption even\nbefore applying distillation. The results are remarkable, featuring a\nsignificant reduction in inference time due to the distillation process and a\nsubstantial 50% reduction in memory consumption. 
Our examination of the\ngenerated images underscores that the incorporation of LoRA-enhanced\ndistillation maintains image quality and alignment with the provided prompts.\nIn summary, while conventional distillation tends to increase memory\nconsumption, LoRA-enhanced distillation offers optimization without any\ntrade-offs or compromises in quality.", + "authors": "Pareesa Ameneh Golnari", + "published": "2023-12-12", + "updated": "2023-12-12", + "primary_cat": "cs.CV", + "cats": [ + "cs.CV", + "cs.LG" + ], + "category": "Distillation" + }, + { + "url": "http://arxiv.org/abs/2302.09632v1", + "title": "HomoDistil: Homotopic Task-Agnostic Distillation of Pre-trained Transformers", + "abstract": "Knowledge distillation has been shown to be a powerful model compression\napproach to facilitate the deployment of pre-trained language models in\npractice. This paper focuses on task-agnostic distillation. It produces a\ncompact pre-trained model that can be easily fine-tuned on various tasks with\nsmall computational costs and memory footprints. Despite the practical\nbenefits, task-agnostic distillation is challenging. Since the teacher model\nhas a significantly larger capacity and stronger representation power than the\nstudent model, it is very difficult for the student to produce predictions that\nmatch the teacher's over a massive amount of open-domain training data. Such a\nlarge prediction discrepancy often diminishes the benefits of knowledge\ndistillation. To address this challenge, we propose Homotopic Distillation\n(HomoDistil), a novel task-agnostic distillation approach equipped with\niterative pruning. Specifically, we initialize the student model from the\nteacher model, and iteratively prune the student's neurons until the target\nwidth is reached. Such an approach maintains a small discrepancy between the\nteacher's and student's predictions throughout the distillation process, which\nensures the effectiveness of knowledge transfer. Extensive experiments\ndemonstrate that HomoDistil achieves significant improvements on existing\nbaselines.", + "authors": "Chen Liang, Haoming Jiang, Zheng Li, Xianfeng Tang, Bin Yin, Tuo Zhao", + "published": "2023-02-19", + "updated": "2023-02-19", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL", + "cs.LG" + ], + "category": "Distillation" + }, + { + "url": "http://arxiv.org/abs/2311.09874v1", + "title": "Experimental virtual distillation of entanglement and coherence", + "abstract": "Noise is in general inevitable and detrimental to practical and useful\nquantum communication and computation. Under the resource theory framework,\nresource distillation serves as a generic tool to overcome the effect of noise.\nYet, conventional resource distillation protocols generally require operations\non multi-copies of resource states, and strong limitations exist that restrict\ntheir practical utilities. Recently, by relaxing the setting of resource\ndistillation to only approximating the measurement statistics instead of the\nquantum state, a resource-frugal protocol, virtual resource distillation, is\nproposed, which allows more effective distillation of noisy resources. Here, we\nreport its experimental implementation on a four-qubit photonic quantum system\nfor the distillation of quantum coherence (up to dimension 4) and bipartite\nentanglement. We show the virtual distillation of the maximal superposed state\nof dimension four from the state of dimension two, an impossible task in\nconventional coherence distillation. 
Furthermore, we demonstrate the virtual\ndistillation of entanglement with operations acting only on a single copy of\nthe noisy EPR pair and showcase the quantum teleportation task using the\nvirtually distilled EPR pair with a significantly improved fidelity of the\nteleported state. These results illustrate the feasibility of the virtual\nresource distillation method and pave the way for accurate manipulation of\nquantum resources with noisy quantum hardware.", + "authors": "Ting Zhang, Yukun Zhang, Lu Liu, Xiao-Xu Fang, Qian-Xi Zhang, Xiao Yuan, He Lu", + "published": "2023-11-16", + "updated": "2023-11-16", + "primary_cat": "quant-ph", + "cats": [ + "quant-ph" + ], + "category": "Distillation" + }, + { + "url": "http://arxiv.org/abs/2203.11932v1", + "title": "Dataset Distillation by Matching Training Trajectories", + "abstract": "Dataset distillation is the task of synthesizing a small dataset such that a\nmodel trained on the synthetic set will match the test accuracy of the model\ntrained on the full dataset. In this paper, we propose a new formulation that\noptimizes our distilled data to guide networks to a similar state as those\ntrained on real data across many training steps. Given a network, we train it\nfor several iterations on our distilled data and optimize the distilled data\nwith respect to the distance between the synthetically trained parameters and\nthe parameters trained on real data. To efficiently obtain the initial and\ntarget network parameters for large-scale datasets, we pre-compute and store\ntraining trajectories of expert networks trained on the real dataset. Our\nmethod handily outperforms existing methods and also allows us to distill\nhigher-resolution visual data.", + "authors": "George Cazenavette, Tongzhou Wang, Antonio Torralba, Alexei A. Efros, Jun-Yan Zhu", + "published": "2022-03-22", + "updated": "2022-03-22", + "primary_cat": "cs.CV", + "cats": [ + "cs.CV", + "cs.AI", + "cs.LG" + ], + "category": "Distillation" + }, + { + "url": "http://arxiv.org/abs/2208.08840v1", + "title": "Mind the Gap in Distilling StyleGANs", + "abstract": "StyleGAN family is one of the most popular Generative Adversarial Networks\n(GANs) for unconditional generation. Despite its impressive performance, its\nhigh demand on storage and computation impedes their deployment on\nresource-constrained devices. This paper provides a comprehensive study of\ndistilling from the popular StyleGAN-like architecture. Our key insight is that\nthe main challenge of StyleGAN distillation lies in the output discrepancy\nissue, where the teacher and student model yield different outputs given the\nsame input latent code. Standard knowledge distillation losses typically fail\nunder this heterogeneous distillation scenario. We conduct thorough analysis\nabout the reasons and effects of this discrepancy issue, and identify that the\nmapping network plays a vital role in determining semantic information of\ngenerated images. Based on this finding, we propose a novel initialization\nstrategy for the student model, which can ensure the output consistency to the\nmaximum extent. To further enhance the semantic consistency between the teacher\nand student model, we present a latent-direction-based distillation loss that\npreserves the semantic relations in latent space. 
Extensive experiments\ndemonstrate the effectiveness of our approach in distilling StyleGAN2 and\nStyleGAN3, outperforming existing GAN distillation methods by a large margin.", + "authors": "Guodong Xu, Yuenan Hou, Ziwei Liu, Chen Change Loy", + "published": "2022-08-18", + "updated": "2022-08-18", + "primary_cat": "cs.CV", + "cats": [ + "cs.CV" + ], + "category": "Distillation" + }, + { + "url": "http://arxiv.org/abs/1910.02551v3", + "title": "Soft-Label Dataset Distillation and Text Dataset Distillation", + "abstract": "Dataset distillation is a method for reducing dataset sizes by learning a\nsmall number of synthetic samples containing all the information of a large\ndataset. This has several benefits like speeding up model training, reducing\nenergy consumption, and reducing required storage space. Currently, each\nsynthetic sample is assigned a single `hard' label, and also, dataset\ndistillation can currently only be used with image data.\n We propose to simultaneously distill both images and their labels, thus\nassigning each synthetic sample a `soft' label (a distribution of labels). Our\nalgorithm increases accuracy by 2-4% over the original algorithm for several\nimage classification tasks. Using `soft' labels also enables distilled datasets\nto consist of fewer samples than there are classes as each sample can encode\ninformation for multiple classes. For example, training a LeNet model with 10\ndistilled images (one per class) results in over 96% accuracy on MNIST, and\nalmost 92% accuracy when trained on just 5 distilled images.\n We also extend the dataset distillation algorithm to distill sequential\ndatasets including texts. We demonstrate that text distillation outperforms\nother methods across multiple datasets. For example, models attain almost their\noriginal accuracy on the IMDB sentiment analysis task using just 20 distilled\nsentences.\n Our code can be found at\n$\\href{https://github.com/ilia10000/dataset-distillation}{\\text{https://github.com/ilia10000/dataset-distillation}}$.", + "authors": "Ilia Sucholutsky, Matthias Schonlau", + "published": "2019-10-06", + "updated": "2020-05-05", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "cs.AI", + "stat.ML" + ], + "category": "Distillation" + }, + { + "url": "http://arxiv.org/abs/2306.06629v1", + "title": "GKD: A General Knowledge Distillation Framework for Large-scale Pre-trained Language Model", + "abstract": "Currently, the reduction in the parameter scale of large-scale pre-trained\nlanguage models (PLMs) through knowledge distillation has greatly facilitated\ntheir widespread deployment on various devices. However, the deployment of\nknowledge distillation systems faces great challenges in real-world\nindustrial-strength applications, which require the use of complex distillation\nmethods on even larger-scale PLMs (over 10B), limited by memory on GPUs and the\nswitching of methods. To overcome these challenges, we propose GKD, a general\nknowledge distillation framework that supports distillation on larger-scale\nPLMs using various distillation methods. With GKD, developers can build larger\ndistillation models on memory-limited GPUs and easily switch and combine\ndifferent distillation methods within a single framework. 
Experimental results\nshow that GKD can support the distillation of at least 100B-scale PLMs and 25\nmainstream methods on 8 NVIDIA A100 (40GB) GPUs.", + "authors": "Shicheng Tan, Weng Lam Tam, Yuanchun Wang, Wenwen Gong, Yang Yang, Hongyin Tang, Keqing He, Jiahao Liu, Jingang Wang, Shu Zhao, Peng Zhang, Jie Tang", + "published": "2023-06-11", + "updated": "2023-06-11", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL", + "cs.AI" + ], + "category": "Distillation" + }, + { + "url": "http://arxiv.org/abs/2109.14960v3", + "title": "Prune Your Model Before Distill It", + "abstract": "Knowledge distillation transfers the knowledge from a cumbersome teacher to a\nsmall student. Recent results suggest that the student-friendly teacher is more\nappropriate to distill since it provides more transferable knowledge. In this\nwork, we propose the novel framework, \"prune, then distill,\" that prunes the\nmodel first to make it more transferrable and then distill it to the student.\nWe provide several exploratory examples where the pruned teacher teaches better\nthan the original unpruned networks. We further show theoretically that the\npruned teacher plays the role of regularizer in distillation, which reduces the\ngeneralization error. Based on this result, we propose a novel neural network\ncompression scheme where the student network is formed based on the pruned\nteacher and then apply the \"prune, then distill\" strategy. The code is\navailable at https://github.com/ososos888/prune-then-distill", + "authors": "Jinhyuk Park, Albert No", + "published": "2021-09-30", + "updated": "2022-07-25", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG" + ], + "category": "Distillation" + }, + { + "url": "http://arxiv.org/abs/2307.12732v1", + "title": "CLIP-KD: An Empirical Study of Distilling CLIP Models", + "abstract": "CLIP has become a promising language-supervised visual pre-training framework\nand achieves excellent performance over a wide range of tasks. This paper aims\nto distill small CLIP models supervised by a large teacher CLIP model. We\npropose several distillation strategies, including relation, feature, gradient\nand contrastive paradigm, to examine the impact on CLIP distillation. We show\nthat the simplest feature mimicry with MSE loss performs best. Moreover,\ninteractive contrastive learning and relation-based distillation are also\ncritical in performance improvement. We apply the unified method to distill\nseveral student networks trained on 15 million (image, text) pairs.\nDistillation improves the student CLIP models consistently over zero-shot\nImageNet classification and cross-modal retrieval benchmarks. We hope our\nempirical study will become an important baseline for future CLIP distillation\nresearch. The code is available at \\url{https://github.com/winycg/CLIP-KD}.", + "authors": "Chuanguang Yang, Zhulin An, Libo Huang, Junyu Bi, Xinqiang Yu, Han Yang, Yongjun Xu", + "published": "2023-07-24", + "updated": "2023-07-24", + "primary_cat": "cs.CV", + "cats": [ + "cs.CV" + ], + "category": "Distillation" + }, + { + "url": "http://arxiv.org/abs/2205.02399v1", + "title": "Spot-adaptive Knowledge Distillation", + "abstract": "Knowledge distillation (KD) has become a well established paradigm for\ncompressing deep neural networks. The typical way of conducting knowledge\ndistillation is to train the student network under the supervision of the\nteacher network to harness the knowledge at one or multiple spots (i.e.,\nlayers) in the teacher network. 
The distillation spots, once specified, will\nnot change for all the training samples, throughout the whole distillation\nprocess. In this work, we argue that distillation spots should be adaptive to\ntraining samples and distillation epochs. We thus propose a new distillation\nstrategy, termed spot-adaptive KD (SAKD), to adaptively determine the\ndistillation spots in the teacher network per sample, at every training\niteration during the whole distillation period. As SAKD actually focuses on\n\"where to distill\" instead of \"what to distill\" that is widely investigated by\nmost existing works, it can be seamlessly integrated into existing distillation\nmethods to further improve their performance. Extensive experiments with 10\nstate-of-the-art distillers are conducted to demonstrate the effectiveness of\nSAKD for improving their distillation performance, under both homogeneous and\nheterogeneous distillation settings. Code is available at\nhttps://github.com/zju-vipa/spot-adaptive-pytorch", + "authors": "Jie Song, Ying Chen, Jingwen Ye, Mingli Song", + "published": "2022-05-05", + "updated": "2022-05-05", + "primary_cat": "cs.CV", + "cats": [ + "cs.CV", + "cs.AI", + "cs.LG" + ], + "category": "Distillation" + }, + { + "url": "http://arxiv.org/abs/2310.18628v2", + "title": "Personalised Distillation: Empowering Open-Sourced LLMs with Adaptive Learning for Code Generation", + "abstract": "With the rise of powerful closed-sourced LLMs (ChatGPT, GPT-4), there are\nincreasing interests in distilling the capabilies of close-sourced LLMs to\nsmaller open-sourced LLMs. Previous distillation methods usually prompt ChatGPT\nto generate a set of instructions and answers, for the student model to learn.\nHowever, such standard distillation approach neglects the merits and conditions\nof the student model. Inspired by modern teaching principles, we design a\npersonalised distillation process, in which the student attempts to solve a\ntask first, then the teacher provides an adaptive refinement for the student to\nimprove. Instead of feeding the student with teacher's prior, personalised\ndistillation enables personalised learning for the student model, as it only\nlearns on examples it makes mistakes upon and learns to improve its own\nsolution. On code generation, personalised distillation consistently\noutperforms standard distillation with only one third of the data. With only\n2.5-3K personalised examples that incur a data-collection cost of 4-6$, we\nboost CodeGen-mono-16B by 7% to achieve 36.4% pass@1 and StarCoder by 12.2% to\nachieve 45.8% pass@1 on HumanEval.", + "authors": "Hailin Chen, Amrita Saha, Steven Hoi, Shafiq Joty", + "published": "2023-10-28", + "updated": "2024-01-26", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL", + "cs.LG" + ], + "category": "Distillation" + }, + { + "url": "http://arxiv.org/abs/1901.09135v1", + "title": "Progressive Label Distillation: Learning Input-Efficient Deep Neural Networks", + "abstract": "Much of the focus in the area of knowledge distillation has been on\ndistilling knowledge from a larger teacher network to a smaller student\nnetwork. However, there has been little research on how the concept of\ndistillation can be leveraged to distill the knowledge encapsulated in the\ntraining data itself into a reduced form. 
In this study, we explore the concept\nof progressive label distillation, where we leverage a series of\nteacher-student network pairs to progressively generate distilled training data\nfor learning deep neural networks with greatly reduced input dimensions. To\ninvestigate the efficacy of the proposed progressive label distillation\napproach, we experimented with learning a deep limited vocabulary speech\nrecognition network based on generated 500ms input utterances distilled\nprogressively from 1000ms source training data, and demonstrated a significant\nincrease in test accuracy of almost 78% compared to direct learning.", + "authors": "Zhong Qiu Lin, Alexander Wong", + "published": "2019-01-26", + "updated": "2019-01-26", + "primary_cat": "cs.NE", + "cats": [ + "cs.NE", + "cs.AI" + ], + "category": "Distillation" + }, + { + "url": "http://arxiv.org/abs/quant-ph/0001084v2", + "title": "Distillation of GHZ states by selective information manipulation", + "abstract": "Methods for distilling maximally entangled tripartite (GHZ) states from\narbitrary entangled tripartite pure states are described. These techniques work\nfor virtually any input state. Each technique has two stages which we call\nprimary and secondary distillation. Primary distillation produces a GHZ state\nwith some probability, so that when applied to an ensemble of systems, a\ncertain percentage is discarded. Secondary distillation produces further GHZs\nfrom the discarded systems. These protocols are developed with the help of an\napproach to quantum information theory based on absolutely selective\ninformation, which has other potential applications.", + "authors": "Oliver Cohen, Todd A. Brun", + "published": "2000-01-23", + "updated": "2000-02-02", + "primary_cat": "quant-ph", + "cats": [ + "quant-ph" + ], + "category": "Distillation" + }, + { + "url": "http://arxiv.org/abs/2211.08071v2", + "title": "Knowledge Distillation for Detection Transformer with Consistent Distillation Points Sampling", + "abstract": "DETR is a novel end-to-end transformer architecture object detector, which\nsignificantly outperforms classic detectors when scaling up the model size. In\nthis paper, we focus on the compression of DETR with knowledge distillation.\nWhile knowledge distillation has been well-studied in classic detectors, there\nis a lack of researches on how to make it work effectively on DETR. We first\nprovide experimental and theoretical analysis to point out that the main\nchallenge in DETR distillation is the lack of consistent distillation points.\nDistillation points refer to the corresponding inputs of the predictions for\nstudent to mimic, and reliable distillation requires sufficient distillation\npoints which are consistent between teacher and student. Based on this\nobservation, we propose a general knowledge distillation paradigm for\nDETR(KD-DETR) with consistent distillation points sampling. Specifically, we\ndecouple detection and distillation tasks by introducing a set of specialized\nobject queries to construct distillation points. In this paradigm, we further\npropose a general-to-specific distillation points sampling strategy to explore\nthe extensibility of KD-DETR. Extensive experiments on different DETR\narchitectures with various scales of backbones and transformer layers validate\nthe effectiveness and generalization of KD-DETR. 
KD-DETR boosts the performance\nof DAB-DETR with ResNet-18 and ResNet-50 backbone to 41.4$\\%$, 45.7$\\%$ mAP,\nrespectively, which are 5.2$\\%$, 3.5$\\%$ higher than the baseline, and\nResNet-50 even surpasses the teacher model by $2.2\\%$.", + "authors": "Yu Wang, Xin Li, Shengzhao Wen, Fukui Yang, Wanping Zhang, Gang Zhang, Haocheng Feng, Junyu Han, Errui Ding", + "published": "2022-11-15", + "updated": "2022-11-16", + "primary_cat": "cs.CV", + "cats": [ + "cs.CV" + ], + "category": "Distillation" + }, + { + "url": "http://arxiv.org/abs/2305.12330v1", + "title": "Task-agnostic Distillation of Encoder-Decoder Language Models", + "abstract": "Finetuning pretrained language models (LMs) have enabled appealing\nperformance on a diverse array of tasks. The intriguing task-agnostic property\nhas driven a shifted focus from task-specific to task-agnostic distillation of\nLMs. While task-agnostic, compute-efficient, performance-preserved LMs can be\nyielded by task-agnostic distillation, previous studies mainly sit in\ndistillation of either encoder-only LMs (e.g., BERT) or decoder-only ones\n(e.g., GPT) yet largely neglect that distillation of encoder-decoder LMs (e.g.,\nT5) can posit very distinguished behaviors. Frustratingly, we discover that\nexisting task-agnostic distillation methods can fail to handle the distillation\nof encoder-decoder LMs. To the demand, we explore a few paths and uncover a\npath named as MiniEnD that successfully tackles the distillation of\nencoder-decoder LMs in a task-agnostic fashion. We examine MiniEnD on language\nunderstanding and abstractive summarization. The results showcase that MiniEnD\nis generally effective and is competitive compared to other alternatives. We\nfurther scale MiniEnD up to distillation of 3B encoder-decoder language models\nwith interpolated distillation. The results imply the opportunities and\nchallenges in distilling large language models (e.g., LLaMA).", + "authors": "Chen Zhang, Yang Yang, Jingang Wang, Dawei Song", + "published": "2023-05-21", + "updated": "2023-05-21", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL" + ], + "category": "Distillation" + }, + { + "url": "http://arxiv.org/abs/2401.15863v1", + "title": "Importance-Aware Adaptive Dataset Distillation", + "abstract": "Herein, we propose a novel dataset distillation method for constructing small\ninformative datasets that preserve the information of the large original\ndatasets. The development of deep learning models is enabled by the\navailability of large-scale datasets. Despite unprecedented success,\nlarge-scale datasets considerably increase the storage and transmission costs,\nresulting in a cumbersome model training process. Moreover, using raw data for\ntraining raises privacy and copyright concerns. To address these issues, a new\ntask named dataset distillation has been introduced, aiming to synthesize a\ncompact dataset that retains the essential information from the large original\ndataset. State-of-the-art (SOTA) dataset distillation methods have been\nproposed by matching gradients or network parameters obtained during training\non real and synthetic datasets. The contribution of different network\nparameters to the distillation process varies, and uniformly treating them\nleads to degraded distillation performance. 
Based on this observation, we\npropose an importance-aware adaptive dataset distillation (IADD) method that\ncan improve distillation performance by automatically assigning importance\nweights to different network parameters during distillation, thereby\nsynthesizing more robust distilled datasets. IADD demonstrates superior\nperformance over other SOTA dataset distillation methods based on parameter\nmatching on multiple benchmark datasets and outperforms them in terms of\ncross-architecture generalization. In addition, the analysis of self-adaptive\nweights demonstrates the effectiveness of IADD. Furthermore, the effectiveness\nof IADD is validated in a real-world medical application such as COVID-19\ndetection.", + "authors": "Guang Li, Ren Togo, Takahiro Ogawa, Miki Haseyama", + "published": "2024-01-29", + "updated": "2024-01-29", + "primary_cat": "cs.CV", + "cats": [ + "cs.CV", + "cs.AI", + "cs.LG" + ], + "category": "Distillation" + }, + { + "url": "http://arxiv.org/abs/2304.14800v1", + "title": "Multi-to-Single Knowledge Distillation for Point Cloud Semantic Segmentation", + "abstract": "3D point cloud semantic segmentation is one of the fundamental tasks for\nenvironmental understanding. Although significant progress has been made in\nrecent years, the performance of classes with few examples or few points is\nstill far from satisfactory. In this paper, we propose a novel multi-to-single\nknowledge distillation framework for the 3D point cloud semantic segmentation\ntask to boost the performance of those hard classes. Instead of fusing all the\npoints of multi-scans directly, only the instances that belong to the\npreviously defined hard classes are fused. To effectively and sufficiently\ndistill valuable knowledge from multi-scans, we leverage a multilevel\ndistillation framework, i.e., feature representation distillation, logit\ndistillation, and affinity distillation. We further develop a novel\ninstance-aware affinity distillation algorithm for capturing high-level\nstructural knowledge to enhance the distillation efficacy for hard classes.\nFinally, we conduct experiments on the SemanticKITTI dataset, and the results\non both the validation and test sets demonstrate that our method yields\nsubstantial improvements compared with the baseline method. The code is\navailable at \\Url{https://github.com/skyshoumeng/M2SKD}.", + "authors": "Shoumeng Qiu, Feng Jiang, Haiqiang Zhang, Xiangyang Xue, Jian Pu", + "published": "2023-04-28", + "updated": "2023-04-28", + "primary_cat": "cs.CV", + "cats": [ + "cs.CV", + "cs.AI" + ], + "category": "Distillation" + }, + { + "url": "http://arxiv.org/abs/2010.13002v2", + "title": "Pre-trained Summarization Distillation", + "abstract": "Recent state-of-the-art approaches to summarization utilize large pre-trained\nTransformer models. Distilling these models to smaller student models has\nbecome critically important for practical use; however there are many different\ndistillation methods proposed by the NLP literature. Recent work on distilling\nBERT for classification and regression tasks shows strong performance using\ndirect knowledge distillation. Alternatively, machine translation practitioners\ndistill using pseudo-labeling, where a small model is trained on the\ntranslations of a larger model. A third, simpler approach is to 'shrink and\nfine-tune' (SFT), which avoids any explicit distillation by copying parameters\nto a smaller student model and then fine-tuning. 
We compare these three\napproaches for distillation of Pegasus and BART, the current and former state\nof the art, pre-trained summarization models, and find that SFT outperforms\nknowledge distillation and pseudo-labeling on the CNN/DailyMail dataset, but\nunder-performs pseudo-labeling on the more abstractive XSUM dataset. PyTorch\nCode and checkpoints of different sizes are available through Hugging Face\ntransformers here http://tiny.cc/4iy0tz.", + "authors": "Sam Shleifer, Alexander M. Rush", + "published": "2020-10-24", + "updated": "2020-10-28", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL", + "cs.AI" + ], + "category": "Distillation" + }, + { + "url": "http://arxiv.org/abs/2006.01683v1", + "title": "Channel Distillation: Channel-Wise Attention for Knowledge Distillation", + "abstract": "Knowledge distillation is to transfer the knowledge from the data learned by\nthe teacher network to the student network, so that the student has the\nadvantage of less parameters and less calculations, and the accuracy is close\nto the teacher. In this paper, we propose a new distillation method, which\ncontains two transfer distillation strategies and a loss decay strategy. The\nfirst transfer strategy is based on channel-wise attention, called Channel\nDistillation (CD). CD transfers the channel information from the teacher to the\nstudent. The second is Guided Knowledge Distillation (GKD). Unlike Knowledge\nDistillation (KD), which allows the student to mimic each sample's prediction\ndistribution of the teacher, GKD only enables the student to mimic the correct\noutput of the teacher. The last part is Early Decay Teacher (EDT). During the\ntraining process, we gradually decay the weight of the distillation loss. The\npurpose is to enable the student to gradually control the optimization rather\nthan the teacher. Our proposed method is evaluated on ImageNet and CIFAR100. On\nImageNet, we achieve 27.68% of top-1 error with ResNet18, which outperforms\nstate-of-the-art methods. On CIFAR100, we achieve surprising result that the\nstudent outperforms the teacher. Code is available at\nhttps://github.com/zhouzaida/channel-distillation.", + "authors": "Zaida Zhou, Chaoran Zhuge, Xinwei Guan, Wen Liu", + "published": "2020-06-02", + "updated": "2020-06-02", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "stat.ML" + ], + "category": "Distillation" + }, + { + "url": "http://arxiv.org/abs/2106.12591v1", + "title": "Magic State Distillation from Entangled States", + "abstract": "Magic can be distributed non-locally in many-body entangled states, such as\nthe low energy states of condensed matter systems. Using the Bravyi-Kitaev\nmagic state distillation protocol, we find that non-local magic is distillable\nand can improve the distillation outcome. We analyze a few explicit examples\nand show that spin squeezing can be used to convert non-distillable states into\ndistillable ones.\n Our analysis also suggests that the conventional product input states assumed\nby magic distillation protocols are extremely atypical among general states\nwith distillable magic. 
It further justifies the need for studying a diverse\nrange of entangled inputs that yield magic states with high probability.", + "authors": "Ning Bao, ChunJun Cao, Vincent Paul Su", + "published": "2021-06-23", + "updated": "2021-06-23", + "primary_cat": "quant-ph", + "cats": [ + "quant-ph" + ], + "category": "Distillation" + }, + { + "url": "http://arxiv.org/abs/2305.18381v3", + "title": "Distill Gold from Massive Ores: Efficient Dataset Distillation via Critical Samples Selection", + "abstract": "Data-efficient learning has garnered significant attention, especially given\nthe current trend of large multi-modal models. Recently, dataset distillation\nbecomes an effective approach for data-efficiency; however, the distillation\nprocess itself can still be inefficient. In this work, we model the dataset\ndistillation task within the context of information transport. By observing the\nsubstantial data redundancy inherent in the distillation, we argue to put more\nemphasis on the samples' utility for the distillation task. We introduce and\nvalidate a family of data utility estimators and optimal data selection methods\nto exploit the most valuable samples. This strategy significantly reduces the\ntraining costs and extends various existing distillation algorithms to larger\nand more diversified datasets, e.g., in some cases only 0.04% training data is\nsufficient for comparable distillation performance. Our method consistently\nenhances the distillation algorithms, even on much larger-scale and more\nheterogeneous datasets, e.g. ImageNet-1K and Kinetics-400. This paradigm opens\nup new avenues in the dynamics of distillation and paves the way for efficient\ndataset distillation. Our code is available on\nhttps://github.com/silicx/GoldFromOres .", + "authors": "Yue Xu, Yong-Lu Li, Kaitong Cui, Ziyu Wang, Cewu Lu, Yu-Wing Tai, Chi-Keung Tang", + "published": "2023-05-28", + "updated": "2023-11-29", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "cs.AI", + "cs.CV" + ], + "category": "Distillation" + }, + { + "url": "http://arxiv.org/abs/2404.04057v1", + "title": "Score identity Distillation: Exponentially Fast Distillation of Pretrained Diffusion Models for One-Step Generation", + "abstract": "We introduce Score identity Distillation (SiD), an innovative data-free\nmethod that distills the generative capabilities of pretrained diffusion models\ninto a single-step generator. SiD not only facilitates an exponentially fast\nreduction in Fr\\'echet inception distance (FID) during distillation but also\napproaches or even exceeds the FID performance of the original teacher\ndiffusion models. By reformulating forward diffusion processes as semi-implicit\ndistributions, we leverage three score-related identities to create an\ninnovative loss mechanism. This mechanism achieves rapid FID reduction by\ntraining the generator using its own synthesized images, eliminating the need\nfor real data or reverse-diffusion-based generation, all accomplished within\nsignificantly shortened generation time. Upon evaluation across four benchmark\ndatasets, the SiD algorithm demonstrates high iteration efficiency during\ndistillation and surpasses competing distillation approaches, whether they are\none-step or few-step, data-free, or dependent on training data, in terms of\ngeneration quality. This achievement not only redefines the benchmarks for\nefficiency and effectiveness in diffusion distillation but also in the broader\nfield of diffusion-based generation. 
Our PyTorch implementation will be\npublicly accessible on GitHub.", + "authors": "Mingyuan Zhou, Huangjie Zheng, Zhendong Wang, Mingzhang Yin, Hai Huang", + "published": "2024-04-05", + "updated": "2024-04-05", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "cs.AI", + "cs.CV", + "stat.ML" + ], + "category": "Distillation" + }, + { + "url": "http://arxiv.org/abs/0908.2142v1", + "title": "Distillation of Bell states in open systems", + "abstract": "In this work we review the entire classification of 2x2 distillable states\nfor protocols with a finite numbers of copies. We show a distillation protocol\nthat allows to distill Bell states with non zero probability at any time for an\ninitial singlet in vacuum. It is shown that the same protocol used in non zero\nthermal baths yields a considerable recovering of entanglement.", + "authors": "E. Isasi, D. Mundarain", + "published": "2009-08-14", + "updated": "2009-08-14", + "primary_cat": "quant-ph", + "cats": [ + "quant-ph" + ], + "category": "Distillation" + }, + { + "url": "http://arxiv.org/abs/1607.04311v1", + "title": "Defensive Distillation is Not Robust to Adversarial Examples", + "abstract": "We show that defensive distillation is not secure: it is no more resistant to\ntargeted misclassification attacks than unprotected neural networks.", + "authors": "Nicholas Carlini, David Wagner", + "published": "2016-07-14", + "updated": "2016-07-14", + "primary_cat": "cs.CR", + "cats": [ + "cs.CR", + "cs.CV" + ], + "category": "Distillation" + }, + { + "url": "http://arxiv.org/abs/1903.04197v7", + "title": "Structured Knowledge Distillation for Dense Prediction", + "abstract": "In this work, we consider transferring the structure information from large\nnetworks to compact ones for dense prediction tasks in computer vision.\nPrevious knowledge distillation strategies used for dense prediction tasks\noften directly borrow the distillation scheme for image classification and\nperform knowledge distillation for each pixel separately, leading to\nsub-optimal performance. Here we propose to distill structured knowledge from\nlarge networks to compact networks, taking into account the fact that dense\nprediction is a structured prediction problem. Specifically, we study two\nstructured distillation schemes: i) pair-wise distillation that distills the\npair-wise similarities by building a static graph; and ii) holistic\ndistillation that uses adversarial training to distill holistic knowledge. The\neffectiveness of our knowledge distillation approaches is demonstrated by\nexperiments on three dense prediction tasks: semantic segmentation, depth\nestimation and object detection. Code is available at: https://git.io/StructKD", + "authors": "Yifan Liu, Changyong Shun, Jingdong Wang, Chunhua Shen", + "published": "2019-03-11", + "updated": "2020-06-14", + "primary_cat": "cs.CV", + "cats": [ + "cs.CV" + ], + "category": "Distillation" + }, + { + "url": "http://arxiv.org/abs/2007.09029v1", + "title": "Knowledge Distillation in Deep Learning and its Applications", + "abstract": "Deep learning based models are relatively large, and it is hard to deploy\nsuch models on resource-limited devices such as mobile phones and embedded\ndevices. One possible solution is knowledge distillation whereby a smaller\nmodel (student model) is trained by utilizing the information from a larger\nmodel (teacher model). In this paper, we present a survey of knowledge\ndistillation techniques applied to deep learning models. 
To compare the\nperformances of different techniques, we propose a new metric called\ndistillation metric. Distillation metric compares different knowledge\ndistillation algorithms based on sizes and accuracy scores. Based on the\nsurvey, some interesting conclusions are drawn and presented in this paper.", + "authors": "Abdolmaged Alkhulaifi, Fahad Alsahli, Irfan Ahmad", + "published": "2020-07-17", + "updated": "2020-07-17", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "stat.ML" + ], + "category": "Distillation" + }, + { + "url": "http://arxiv.org/abs/2108.12905v1", + "title": "Lipschitz Continuity Guided Knowledge Distillation", + "abstract": "Knowledge distillation has become one of the most important model compression\ntechniques by distilling knowledge from larger teacher networks to smaller\nstudent ones. Although great success has been achieved by prior distillation\nmethods via delicately designing various types of knowledge, they overlook the\nfunctional properties of neural networks, which makes the process of applying\nthose techniques to new tasks unreliable and non-trivial. To alleviate such\nproblem, in this paper, we initially leverage Lipschitz continuity to better\nrepresent the functional characteristic of neural networks and guide the\nknowledge distillation process. In particular, we propose a novel Lipschitz\nContinuity Guided Knowledge Distillation framework to faithfully distill\nknowledge by minimizing the distance between two neural networks' Lipschitz\nconstants, which enables teacher networks to better regularize student networks\nand improve the corresponding performance. We derive an explainable\napproximation algorithm with an explicit theoretical derivation to address the\nNP-hard problem of calculating the Lipschitz constant. Experimental results\nhave shown that our method outperforms other benchmarks over several knowledge\ndistillation tasks (e.g., classification, segmentation and object detection) on\nCIFAR-100, ImageNet, and PASCAL VOC datasets.", + "authors": "Yuzhang Shang, Bin Duan, Ziliang Zong, Liqiang Nie, Yan Yan", + "published": "2021-08-29", + "updated": "2021-08-29", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "cs.AI", + "cs.CV" + ], + "category": "Distillation" + }, + { + "url": "http://arxiv.org/abs/2403.03846v1", + "title": "On the Effectiveness of Distillation in Mitigating Backdoors in Pre-trained Encoder", + "abstract": "In this paper, we study a defense against poisoned encoders in SSL called\ndistillation, which is a defense used in supervised learning originally.\nDistillation aims to distill knowledge from a given model (a.k.a the teacher\nnet) and transfer it to another (a.k.a the student net). Now, we use it to\ndistill benign knowledge from poisoned pre-trained encoders and transfer it to\na new encoder, resulting in a clean pre-trained encoder. In particular, we\nconduct an empirical study on the effectiveness and performance of distillation\nagainst poisoned encoders. Using two state-of-the-art backdoor attacks against\npre-trained image encoders and four commonly used image classification\ndatasets, our experimental results show that distillation can reduce attack\nsuccess rate from 80.87% to 27.51% while suffering a 6.35% loss in accuracy.\nMoreover, we investigate the impact of three core components of distillation on\nperformance: teacher net, student net, and distillation loss. 
By comparing 4\ndifferent teacher nets, 3 student nets, and 6 distillation losses, we find that\nfine-tuned teacher nets, warm-up-training-based student nets, and\nattention-based distillation loss perform best, respectively.", + "authors": "Tingxu Han, Shenghan Huang, Ziqi Ding, Weisong Sun, Yebo Feng, Chunrong Fang, Jun Li, Hanwei Qian, Cong Wu, Quanjun Zhang, Yang Liu, Zhenyu Chen", + "published": "2024-03-06", + "updated": "2024-03-06", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG" + ], + "category": "Distillation" + }, + { + "url": "http://arxiv.org/abs/0908.0836v3", + "title": "Bound States for Magic State Distillation in Fault-Tolerant Quantum Computation", + "abstract": "Magic state distillation is an important primitive in fault-tolerant quantum\ncomputation. The magic states are pure non-stabilizer states which can be\ndistilled from certain mixed non-stabilizer states via Clifford group\noperations alone. Because of the Gottesman-Knill theorem, mixtures of Pauli\neigenstates are not expected to be magic state distillable, but it has been an\nopen question whether all mixed states outside this set may be distilled. In\nthis Letter we show that, when resources are finitely limited, non-distillable\nstates exist outside the stabilizer octahedron. In analogy with the bound\nentangled states, which arise in entanglement theory, we call such states bound\nstates for magic state distillation.", + "authors": "Earl T. Campbell, Dan E. Browne", + "published": "2009-08-06", + "updated": "2010-02-01", + "primary_cat": "quant-ph", + "cats": [ + "quant-ph" + ], + "category": "Distillation" + }, + { + "url": "http://arxiv.org/abs/1812.00249v1", + "title": "On Compressing U-net Using Knowledge Distillation", + "abstract": "We study the use of knowledge distillation to compress the U-net\narchitecture. We show that, while standard distillation is not sufficient to\nreliably train a compressed U-net, introducing other regularization methods,\nsuch as batch normalization and class re-weighting, in knowledge distillation\nsignificantly improves the training process. This allows us to compress a U-net\nby over 1000x, i.e., to 0.1% of its original number of parameters, at a\nnegligible decrease in performance.", + "authors": "Karttikeya Mangalam, Mathieu Salzamann", + "published": "2018-12-01", + "updated": "2018-12-01", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "stat.ML" + ], + "category": "Distillation" + }, + { + "url": "http://arxiv.org/abs/2301.01615v2", + "title": "StereoDistill: Pick the Cream from LiDAR for Distilling Stereo-based 3D Object Detection", + "abstract": "In this paper, we propose a cross-modal distillation method named\nStereoDistill to narrow the gap between the stereo and LiDAR-based approaches\nvia distilling the stereo detectors from the superior LiDAR model at the\nresponse level, which is usually overlooked in 3D object detection\ndistillation. The key designs of StereoDistill are: the X-component Guided\nDistillation~(XGD) for regression and the Cross-anchor Logit Distillation~(CLD)\nfor classification. In XGD, instead of empirically adopting a threshold to\nselect the high-quality teacher predictions as soft targets, we decompose the\npredicted 3D box into sub-components and retain the corresponding part for\ndistillation if the teacher component pilot is consistent with ground truth to\nlargely boost the number of positive predictions and alleviate the mimicking\ndifficulty of the student model. 
For CLD, we aggregate the probability\ndistribution of all anchors at the same position to encourage the highest\nprobability anchor rather than individually distilling the distribution at the\nanchor level. Finally, our StereoDistill achieves state-of-the-art results for\nstereo-based 3D detection on the KITTI test benchmark and extensive experiments\non KITTI and Argoverse Dataset validate the effectiveness.", + "authors": "Zhe Liu, Xiaoqing Ye, Xiao Tan, Errui Ding, Xiang Bai", + "published": "2023-01-04", + "updated": "2023-01-07", + "primary_cat": "cs.CV", + "cats": [ + "cs.CV" + ], + "category": "Distillation" + }, + { + "url": "http://arxiv.org/abs/quant-ph/9809078v2", + "title": "A rigorous treatment of distillable entanglement", + "abstract": "The notion of distillable entanglement is one of the fundamental concepts of\nquantum information theory. Unfortunately, there is an apparent mismatch\nbetween the intuitive and rigorous definitions of distillable entanglement. To\nbe precise, the existing rigorous definitions impose the constraint that the\ndistillation protocol produce an output of constant dimension. It is therefore\nconceivable that this unnecessary constraint might have led to underestimation\nof the true distillable entanglement. We give a new definition of distillable\nentanglement which removes this constraint, but could conceivably overestimate\nthe true value. Since the definitions turn out to be equivalent, neither\nunderestimation nor overestimation is possible, and both definitions are\narguably correct.", + "authors": "Eric M. Rains", + "published": "1998-09-24", + "updated": "1998-10-12", + "primary_cat": "quant-ph", + "cats": [ + "quant-ph" + ], + "category": "Distillation" + }, + { + "url": "http://arxiv.org/abs/2304.05563v2", + "title": "Entanglement distillation in terms of Schmidt rank and matrix rank", + "abstract": "Entanglement distillation is a key task in quantum-information processing. In\nthis paper, we distill non-positive-partial-transpose (NPT) bipartite states of\nsome given Schmidt rank and matrix rank. We show that all bipartite states of\nSchmidt rank two are locally equivalent to classical-classical states, and all\nbipartite states of Schmidt rank three are 1-undistillable. Subsequently, we\nshow that low-rank B-irreducible NPT states are distillable for large-rank\nreduced density operators by proving that a low-rank B-irreducible NPT state whose\nrange contains a product vector is distillable. Eventually, we present an\nequivalent condition to distill $M\\times N$ bipartite states of rank\n$\\max\\{M,N\\}+1$.", + "authors": "Tianyi Ding, Lin Chen", + "published": "2023-04-12", + "updated": "2023-07-06", + "primary_cat": "quant-ph", + "cats": [ + "quant-ph" + ], + "category": "Distillation" + }, + { + "url": "http://arxiv.org/abs/2404.06170v1", + "title": "CLIP-Embed-KD: Computationally Efficient Knowledge Distillation Using Embeddings as Teachers", + "abstract": "Contrastive Language-Image Pre-training (CLIP) has been shown to improve\nzero-shot generalization capabilities of language and vision models. In this\npaper, we extend CLIP for efficient knowledge distillation, by utilizing\nembeddings as teachers. Typical knowledge distillation frameworks require\nrunning forward passes through a teacher model, which is often prohibitive in\nthe case of billion or trillion parameter teachers. In these cases, using only\nthe embeddings of the teacher models to guide the distillation can yield\nsignificant computational savings. 
Our preliminary findings show that\nCLIP-based knowledge distillation with embeddings can outperform full scale\nknowledge distillation using $9\\times$ less memory and $8\\times$ less training\ntime. Code available at: https://github.com/lnairGT/CLIP-Distillation/", + "authors": "Lakshmi Nair", + "published": "2024-04-09", + "updated": "2024-04-09", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "cs.AI" + ], + "category": "Distillation" + }, + { + "url": "http://arxiv.org/abs/2403.10045v1", + "title": "Towards Adversarially Robust Dataset Distillation by Curvature Regularization", + "abstract": "Dataset distillation (DD) allows datasets to be distilled to fractions of\ntheir original size while preserving the rich distributional information so\nthat models trained on the distilled datasets can achieve a comparable accuracy\nwhile saving significant computational loads. Recent research in this area has\nbeen focusing on improving the accuracy of models trained on distilled\ndatasets. In this paper, we aim to explore a new perspective of DD. We study\nhow to embed adversarial robustness in distilled datasets, so that models\ntrained on these datasets maintain the high accuracy and meanwhile acquire\nbetter adversarial robustness. We propose a new method that achieves this goal\nby incorporating curvature regularization into the distillation process with\nmuch less computational overhead than standard adversarial training. Extensive\nempirical experiments suggest that our method not only outperforms standard\nadversarial training on both accuracy and robustness with less computation\noverhead but is also capable of generating robust distilled datasets that can\nwithstand various adversarial attacks.", + "authors": "Eric Xue, Yijiang Li, Haoyang Liu, Yifan Shen, Haohan Wang", + "published": "2024-03-15", + "updated": "2024-03-15", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "cs.CV" + ], + "category": "Distillation" + }, + { + "url": "http://arxiv.org/abs/quant-ph/0008047v2", + "title": "A semidefinite program for distillable entanglement", + "abstract": "We show that the maximum fidelity obtained by a p.p.t. distillation protocol\nis given by the solution to a certain semidefinite program. This gives a number\nof new lower and upper bounds on p.p.t. distillable entanglement (and thus new\nupper bounds on 2-locally distillable entanglement). In the presence of\nsymmetry, the semidefinite program simplifies considerably, becoming a linear\nprogram in the case of isotropic and Werner states. Using these techniques, we\ndetermine the p.p.t. distillable entanglement of asymmetric Werner states and\n``maximally correlated'' states. We conclude with a discussion of possible\napplications of semidefinite programming to quantum codes and 1-local\ndistillation.", + "authors": "Eric M. Rains", + "published": "2000-08-10", + "updated": "2001-04-12", + "primary_cat": "quant-ph", + "cats": [ + "quant-ph" + ], + "category": "Distillation" + }, + { + "url": "http://arxiv.org/abs/2304.06461v2", + "title": "Multi-Mode Online Knowledge Distillation for Self-Supervised Visual Representation Learning", + "abstract": "Self-supervised learning (SSL) has made remarkable progress in visual\nrepresentation learning. Some studies combine SSL with knowledge distillation\n(SSL-KD) to boost the representation learning performance of small models. In\nthis study, we propose a Multi-mode Online Knowledge Distillation method (MOKD)\nto boost self-supervised visual representation learning. 
Different from\nexisting SSL-KD methods that transfer knowledge from a static pre-trained\nteacher to a student, in MOKD, two different models learn collaboratively in a\nself-supervised manner. Specifically, MOKD consists of two distillation modes:\nself-distillation and cross-distillation modes. Among them, self-distillation\nperforms self-supervised learning for each model independently, while\ncross-distillation realizes knowledge interaction between different models. In\ncross-distillation, a cross-attention feature search strategy is proposed to\nenhance the semantic feature alignment between different models. As a result,\nthe two models can absorb knowledge from each other to boost their\nrepresentation learning performance. Extensive experimental results on\ndifferent backbones and datasets demonstrate that two heterogeneous models can\nbenefit from MOKD and outperform their independently trained baseline. In\naddition, MOKD also outperforms existing SSL-KD methods for both the student\nand teacher models.", + "authors": "Kaiyou Song, Jin Xie, Shan Zhang, Zimeng Luo", + "published": "2023-04-13", + "updated": "2023-06-01", + "primary_cat": "cs.CV", + "cats": [ + "cs.CV" + ], + "category": "Distillation" + }, + { + "url": "http://arxiv.org/abs/2303.05958v1", + "title": "Robust Knowledge Distillation from RNN-T Models With Noisy Training Labels Using Full-Sum Loss", + "abstract": "This work studies knowledge distillation (KD) and addresses its constraints\nfor recurrent neural network transducer (RNN-T) models. In hard distillation, a\nteacher model transcribes large amounts of unlabelled speech to train a student\nmodel. Soft distillation is another popular KD method that distills the output\nlogits of the teacher model. Due to the nature of RNN-T alignments, applying\nsoft distillation between RNN-T architectures having different posterior\ndistributions is challenging. In addition, bad teachers having high\nword-error-rate (WER) reduce the efficacy of KD. We investigate how to\neffectively distill knowledge from variable quality ASR teachers, which has not\nbeen studied before to the best of our knowledge. We show that a sequence-level\nKD, full-sum distillation, outperforms other distillation methods for RNN-T\nmodels, especially for bad teachers. We also propose a variant of full-sum\ndistillation that distills the sequence discriminative knowledge of the teacher\nleading to further improvement in WER. We conduct experiments on public\ndatasets namely SpeechStew and LibriSpeech, and on in-house production data.", + "authors": "Mohammad Zeineldeen, Kartik Audhkhasi, Murali Karthick Baskar, Bhuvana Ramabhadran", + "published": "2023-03-10", + "updated": "2023-03-10", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL", + "cs.SD", + "eess.AS", + "stat.ML" + ], + "category": "Distillation" + }, + { + "url": "http://arxiv.org/abs/2104.02857v2", + "title": "Soft-Label Anonymous Gastric X-ray Image Distillation", + "abstract": "This paper presents a soft-label anonymous gastric X-ray image distillation\nmethod based on a gradient descent approach. The sharing of medical data is\ndemanded to construct high-accuracy computer-aided diagnosis (CAD) systems.\nHowever, the large size of the medical dataset and privacy protection are\nremaining problems in medical data sharing, which hindered the research of CAD\nsystems. The idea of our distillation method is to extract the valid\ninformation of the medical dataset and generate a tiny distilled dataset that\nhas a different data distribution. 
Different from model distillation, our\nmethod aims to find the optimal distilled images, distilled labels and the\noptimized learning rate. Experimental results show that the proposed method can\nnot only effectively compress the medical dataset but also anonymize medical\nimages to protect the patient's private information. The proposed approach can\nimprove the efficiency and security of medical data sharing.", + "authors": "Guang Li, Ren Togo, Takahiro Ogawa, Miki Haseyama", + "published": "2021-04-07", + "updated": "2024-03-21", + "primary_cat": "eess.IV", + "cats": [ + "eess.IV", + "cs.CV" + ], + "category": "Distillation" + }, + { + "url": "http://arxiv.org/abs/1907.09682v2", + "title": "Similarity-Preserving Knowledge Distillation", + "abstract": "Knowledge distillation is a widely applicable technique for training a\nstudent neural network under the guidance of a trained teacher network. For\nexample, in neural network compression, a high-capacity teacher is distilled to\ntrain a compact student; in privileged learning, a teacher trained with\nprivileged data is distilled to train a student without access to that data.\nThe distillation loss determines how a teacher's knowledge is captured and\ntransferred to the student. In this paper, we propose a new form of knowledge\ndistillation loss that is inspired by the observation that semantically similar\ninputs tend to elicit similar activation patterns in a trained network.\nSimilarity-preserving knowledge distillation guides the training of a student\nnetwork such that input pairs that produce similar (dissimilar) activations in\nthe teacher network produce similar (dissimilar) activations in the student\nnetwork. In contrast to previous distillation methods, the student is not\nrequired to mimic the representation space of the teacher, but rather to\npreserve the pairwise similarities in its own representation space. Experiments\non three public datasets demonstrate the potential of our approach.", + "authors": "Frederick Tung, Greg Mori", + "published": "2019-07-23", + "updated": "2019-08-01", + "primary_cat": "cs.CV", + "cats": [ + "cs.CV" + ], + "category": "Distillation" + }, + { + "url": "http://arxiv.org/abs/2303.05015v2", + "title": "Smooth and Stepwise Self-Distillation for Object Detection", + "abstract": "Distilling the structured information captured in feature maps has\ncontributed to improved results for object detection tasks, but requires\ncareful selection of baseline architectures and substantial pre-training.\nSelf-distillation addresses these limitations and has recently achieved\nstate-of-the-art performance for object detection despite making several\nsimplifying architectural assumptions. Building on this work, we propose Smooth\nand Stepwise Self-Distillation (SSSD) for object detection. Our SSSD\narchitecture forms an implicit teacher from object labels and a feature pyramid\nnetwork backbone to distill label-annotated feature maps using Jensen-Shannon\ndistance, which is smoother than distillation losses used in prior work. We\nadditionally add a distillation coefficient that is adaptively configured based\non the learning rate. We extensively benchmark SSSD against a baseline and two\nstate-of-the-art object detector architectures on the COCO dataset by varying\nthe coefficients and backbone and detector networks. 
We demonstrate that SSSD\nachieves higher average precision in most experimental settings, is robust to a\nwide range of coefficients, and benefits from our stepwise distillation\nprocedure.", + "authors": "Jieren Deng, Xin Zhou, Hao Tian, Zhihong Pan, Derek Aguiar", + "published": "2023-03-09", + "updated": "2024-01-19", + "primary_cat": "cs.CV", + "cats": [ + "cs.CV" + ], + "category": "Distillation" + }, + { + "url": "http://arxiv.org/abs/2106.07137v1", + "title": "Why Can You Lay Off Heads? Investigating How BERT Heads Transfer", + "abstract": "The huge size of the widely used BERT family models has led to recent efforts\non model distillation. The main goal of distillation is to create a\ntask-agnostic pre-trained model that can be fine-tuned on downstream tasks\nwithout fine-tuning its full-sized version. Despite the progress of\ndistillation, to what degree and for what reason a task-agnostic model can be\ncreated from distillation has not been well studied. Also, the mechanisms\nbehind transfer learning of those BERT models are not well investigated either.\nTherefore, this work focuses on analyzing the acceptable performance deduction under\ndistillation, to guide the future distillation procedure. Specifically, we\nfirst inspect the prunability of the Transformer heads in RoBERTa and ALBERT\nusing their head importance estimation proposed by Michel et al. (2019), and\nthen check the coherence of the important heads between the pre-trained task\nand downstream tasks. Hence, the acceptable deduction of performance on the\npre-trained task when distilling a model can be derived from the results, and\nwe further compare the behavior of the pruned model before and after\nfine-tuning. Our studies provide guidance for future directions about BERT\nfamily model distillation.", + "authors": "Ting-Rui Chiang, Yun-Nung Chen", + "published": "2021-06-14", + "updated": "2021-06-14", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL" + ], + "category": "Distillation" + }, + { + "url": "http://arxiv.org/abs/2401.11365v1", + "title": "Confidence Preservation Property in Knowledge Distillation Abstractions", + "abstract": "Social media platforms prevent malicious activities by detecting harmful\ncontent of posts and comments. To that end, they employ large-scale deep neural\nnetwork language models for sentiment analysis and content understanding. Some\nmodels, like BERT, are complex and have numerous parameters, which makes them\nexpensive to operate and maintain. To overcome these deficiencies, industry\nexperts employ a knowledge distillation compression technique, where a\ndistilled model is trained to reproduce the classification behavior of the\noriginal model. The distillation process terminates when the distillation\nloss function reaches the stopping criterion. This function is mainly designed\nto ensure that the original and the distilled models exhibit similar\nclassification behaviors. However, besides classification accuracy, there are\nadditional properties of the original model that the distilled model should\npreserve to be considered as an appropriate abstraction. 
In this work, we\nexplore whether distilled TinyBERT models preserve confidence values of the\noriginal BERT models, and investigate how this confidence preservation property\ncould guide tuning hyperparameters of the distillation process.", + "authors": "Dmitry Vengertsev, Elena Sherman", + "published": "2024-01-21", + "updated": "2024-01-21", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL", + "cs.LG" + ], + "category": "Distillation" + }, + { + "url": "http://arxiv.org/abs/quant-ph/0012022v1", + "title": "Distilling a Greenberger-Horne-Zeilinger State From an Arbitrary Pure State of Three Qubits", + "abstract": "We present a general algorithm to achieve local operators which can produce\nthe GHZ state for an arbitrary given three-qubit state. Thus the distillation\nprocess of the state can be realized optimally. The algorithm is shown to be\nsufficient for the three-qubit state on account of the fact that any state for\nwhich this distillation algorithm is invalid cannot be distilled to the GHZ\nstate by any local actions. Moreover, an analytical result of distillation\noperations is achieved for the general state of three qubits.", + "authors": "Li-Xiang Cen, Shun-Jin Wang", + "published": "2000-12-05", + "updated": "2000-12-05", + "primary_cat": "quant-ph", + "cats": [ + "quant-ph" + ], + "category": "Distillation" + }, + { + "url": "http://arxiv.org/abs/2405.00348v1", + "title": "Practical Dataset Distillation Based on Deep Support Vectors", + "abstract": "Conventional dataset distillation requires significant computational\nresources and assumes access to the entire dataset, an assumption impractical\nas it presumes all data resides on a central server. In this paper, we focus on\ndataset distillation in practical scenarios with access to only a fraction of\nthe entire dataset. We introduce a novel distillation method that augments the\nconventional process by incorporating general model knowledge via the addition\nof Deep KKT (DKKT) loss. In practical settings, our approach showed improved\nperformance compared to the baseline distribution matching distillation method\non the CIFAR-10 dataset. Additionally, we present experimental evidence that\nDeep Support Vectors (DSVs) offer unique information to the original\ndistillation, and their integration results in enhanced performance.", + "authors": "Hyunho Lee, Junhoo Lee, Nojun Kwak", + "published": "2024-05-01", + "updated": "2024-05-01", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG" + ], + "category": "Distillation" + }, + { + "url": "http://arxiv.org/abs/2302.11472v1", + "title": "Distilling Calibrated Student from an Uncalibrated Teacher", + "abstract": "Knowledge distillation is a common technique for improving the performance of\na shallow student network by transferring information from a teacher network,\nwhich in general, is comparatively large and deep. These teacher networks are\npre-trained and often uncalibrated, as no calibration technique is applied to\nthe teacher model while training. Calibration of a network measures the\nprobability of correctness for any of its predictions, which is critical in\nhigh-risk domains. In this paper, we study how to obtain a calibrated student\nfrom an uncalibrated teacher. Our approach relies on the fusion of the\ndata-augmentation techniques, including but not limited to cutout, mixup, and\nCutMix, with knowledge distillation. 
We extend our approach beyond traditional\nknowledge distillation and find it suitable for Relational Knowledge\nDistillation and Contrastive Representation Distillation as well. The novelty\nof the work is that it provides a framework to distill a calibrated student\nfrom an uncalibrated teacher model without compromising the accuracy of the\ndistilled student. We perform extensive experiments to validate our approach on\nvarious datasets, including CIFAR-10, CIFAR-100, CINIC-10 and TinyImageNet, and\nobtain calibrated student models. We also observe robust performance of our\napproach while evaluating it on corrupted CIFAR-100C data.", + "authors": "Ishan Mishra, Sethu Vamsi Krishna, Deepak Mishra", + "published": "2023-02-22", + "updated": "2023-02-22", + "primary_cat": "cs.CV", + "cats": [ + "cs.CV", + "cs.AI", + "cs.LG" + ], + "category": "Distillation" + }, + { + "url": "http://arxiv.org/abs/2305.08076v1", + "title": "Improving Defensive Distillation using Teacher Assistant", + "abstract": "Adversarial attacks pose a significant threat to the security and safety of\ndeep neural networks being applied to modern applications. More specifically,\nin computer vision-based tasks, experts can use the knowledge of model\narchitecture to create adversarial samples imperceptible to the human eye.\nThese attacks can lead to security problems in popular applications such as\nself-driving cars, face recognition, etc. Hence, building networks which are\nrobust to such attacks is highly desirable and essential. Among the various\nmethods present in the literature, defensive distillation has shown promise in\nrecent years. Using knowledge distillation, researchers have been able to\ncreate models robust against some of those attacks. However, more attacks have\nbeen developed exposing weaknesses in defensive distillation. In this project, we\nderive inspiration from teacher assistant knowledge distillation and propose\nthat introducing an assistant network can improve the robustness of the\ndistilled model. Through a series of experiments, we evaluate the distilled\nmodels for different distillation temperatures in terms of accuracy,\nsensitivity, and robustness. Our experiments demonstrate that the proposed\nhypothesis can improve robustness in most cases. Additionally, we show that\nmulti-step distillation can further improve robustness with very little impact\non model accuracy.", + "authors": "Maniratnam Mandal, Suna Gao", + "published": "2023-05-14", + "updated": "2023-05-14", + "primary_cat": "cs.CV", + "cats": [ + "cs.CV", + "cs.CR", + "cs.LG" + ], + "category": "Distillation" + }, + { + "url": "http://arxiv.org/abs/quant-ph/0303009v2", + "title": "Security bounds in Quantum Cryptography using d-level systems", + "abstract": "We analyze the security of quantum cryptography schemes for $d$-level systems\nusing 2 or $d+1$ maximally conjugated bases, under individual eavesdropping\nattacks based on cloning machines and measurement after the basis\nreconciliation. We consider classical advantage distillation protocols that\nallow a key to be extracted even in situations where the mutual information between\nthe honest parties is smaller than the eavesdropper's information. 
In this\nscenario, advantage distillation protocols are shown to be as powerful as\nquantum distillation: key distillation is possible using classical techniques\nif and only if the corresponding state in the entanglement based protocol is\ndistillable.", + "authors": "Antonio Acin, Nicolas Gisin, Valerio Scarani", + "published": "2003-03-03", + "updated": "2003-11-03", + "primary_cat": "quant-ph", + "cats": [ + "quant-ph" + ], + "category": "Distillation" + }, + { + "url": "http://arxiv.org/abs/2404.01392v1", + "title": "No-go theorem for probabilistic one-way secret-key distillation", + "abstract": "The probabilistic one-way distillable secret key is equal to the largest\nexpected rate at which perfect secret key bits can be probabilistically\ndistilled from a bipartite state by means of local operations and one-way\nclassical communication. Here we define the set of super two-extendible states\nand prove that an arbitrary state in this set cannot be used for probabilistic\none-way secret-key distillation. This broad class of states includes both\nerased states and all full-rank states. Comparing the probabilistic one-way\ndistillable secret key with the more commonly studied approximate one-way\ndistillable secret key, our results demonstrate an extreme gap between them for\nmany states of interest, with the approximate one-way distillable secret key\nbeing much larger. Our findings naturally extend to probabilistic one-way\nentanglement distillation, with similar conclusions.", + "authors": "Vishal Singh, Mark M. Wilde", + "published": "2024-04-01", + "updated": "2024-04-01", + "primary_cat": "quant-ph", + "cats": [ + "quant-ph", + "cs.IT", + "math.IT" + ], + "category": "Distillation" + }, + { + "url": "http://arxiv.org/abs/2006.08572v3", + "title": "Flexible Dataset Distillation: Learn Labels Instead of Images", + "abstract": "We study the problem of dataset distillation - creating a small set of\nsynthetic examples capable of training a good model. In particular, we study\nthe problem of label distillation - creating synthetic labels for a small set\nof real images, and show it to be more effective than the prior image-based\napproach to dataset distillation. Methodologically, we introduce a more robust\nand flexible meta-learning algorithm for distillation, as well as an effective\nfirst-order strategy based on convex optimization layers. Distilling labels\nwith our new algorithm leads to improved results over prior image-based\ndistillation. More importantly, it leads to clear improvements in flexibility\nof the distilled dataset in terms of compatibility with off-the-shelf\noptimizers and diverse neural architectures. Interestingly, label distillation\ncan also be applied across datasets, for example enabling learning Japanese\ncharacter recognition by training only on synthetically labeled English\nletters.", + "authors": "Ondrej Bohdal, Yongxin Yang, Timothy Hospedales", + "published": "2020-06-15", + "updated": "2020-12-12", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "stat.ML" + ], + "category": "Distillation" + }, + { + "url": "http://arxiv.org/abs/2309.09920v1", + "title": "Distilling HuBERT with LSTMs via Decoupled Knowledge Distillation", + "abstract": "Much research effort is being applied to the task of compressing the\nknowledge of self-supervised models, which are powerful, yet large and memory\nconsuming. 
In this work, we show that the original method of knowledge\ndistillation (and its more recently proposed extension, decoupled knowledge\ndistillation) can be applied to the task of distilling HuBERT. In contrast to\nmethods that focus on distilling internal features, this allows for more\nfreedom in the network architecture of the compressed model. We thus propose to\ndistill HuBERT's Transformer layers into an LSTM-based distilled model that\nreduces the number of parameters even below DistilHuBERT and at the same time\nshows improved performance in automatic speech recognition.", + "authors": "Danilo de Oliveira, Timo Gerkmann", + "published": "2023-09-18", + "updated": "2023-09-18", + "primary_cat": "eess.AS", + "cats": [ + "eess.AS", + "cs.LG", + "cs.SD", + "eess.SP" + ], + "category": "Distillation" + }, + { + "url": "http://arxiv.org/abs/quant-ph/9908047v2", + "title": "On bound entanglement assisted distillation", + "abstract": "We investigate asymptotic distillation of entanglement in the presence of an\nunlimited amount of bound entanglement for bi-partite systems. We show that the\ndistillability is still bounded by the relative entropy of entanglement. This\noffers a strong support to the fact that bound entanglement does not improve\ndistillation of entanglement.", + "authors": "V. Vedral", + "published": "1999-08-14", + "updated": "1999-11-17", + "primary_cat": "quant-ph", + "cats": [ + "quant-ph" + ], + "category": "Distillation" + }, + { + "url": "http://arxiv.org/abs/2402.02781v1", + "title": "Dual Knowledge Distillation for Efficient Sound Event Detection", + "abstract": "Sound event detection (SED) is essential for recognizing specific sounds and\ntheir temporal locations within acoustic signals. This becomes challenging\nparticularly for on-device applications, where computational resources are\nlimited. To address this issue, we introduce a novel framework referred to as\ndual knowledge distillation for developing efficient SED systems in this work.\nOur proposed dual knowledge distillation commences with temporal-averaging\nknowledge distillation (TAKD), utilizing a mean student model derived from the\ntemporal averaging of the student model's parameters. This allows the student\nmodel to indirectly learn from a pre-trained teacher model, ensuring a stable\nknowledge distillation. Subsequently, we introduce embedding-enhanced feature\ndistillation (EEFD), which involves incorporating an embedding distillation\nlayer within the student model to bolster contextual learning. On DCASE 2023\nTask 4A public evaluation dataset, our proposed SED system with dual knowledge\ndistillation having merely one-third of the baseline model's parameters,\ndemonstrates superior performance in terms of PSDS1 and PSDS2. This highlights\nthe importance of proposed dual knowledge distillation for compact SED systems,\nwhich can be ideal for edge devices.", + "authors": "Yang Xiao, Rohan Kumar Das", + "published": "2024-02-05", + "updated": "2024-02-05", + "primary_cat": "cs.SD", + "cats": [ + "cs.SD", + "cs.AI", + "cs.CL", + "cs.LG", + "eess.AS" + ], + "category": "Distillation" + }, + { + "url": "http://arxiv.org/abs/2304.04615v1", + "title": "A Survey on Recent Teacher-student Learning Studies", + "abstract": "Knowledge distillation is a method of transferring the knowledge from a\ncomplex deep neural network (DNN) to a smaller and faster DNN, while preserving\nits accuracy. 
Recent variants of knowledge distillation include teaching\nassistant distillation, curriculum distillation, mask distillation, and\ndecoupling distillation, which aim to improve the performance of knowledge\ndistillation by introducing additional components or by changing the learning\nprocess. Teaching assistant distillation involves an intermediate model called\nthe teaching assistant, while curriculum distillation follows a curriculum\nsimilar to human education. Mask distillation focuses on transferring the\nattention mechanism learned by the teacher, and decoupling distillation\ndecouples the distillation loss from the task loss. Overall, these variants of\nknowledge distillation have shown promising results in improving the\nperformance of knowledge distillation.", + "authors": "Minghong Gao", + "published": "2023-04-10", + "updated": "2023-04-10", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "stat.ML" + ], + "category": "Distillation" + }, + { + "url": "http://arxiv.org/abs/2308.14286v2", + "title": "Bridging Cross-task Protocol Inconsistency for Distillation in Dense Object Detection", + "abstract": "Knowledge distillation (KD) has shown potential for learning compact models\nin dense object detection. However, the commonly used softmax-based\ndistillation ignores the absolute classification scores for individual\ncategories. Thus, the optimum of the distillation loss does not necessarily\nlead to the optimal student classification scores for dense object detectors.\nThis cross-task protocol inconsistency is critical, especially for dense object\ndetectors, since the foreground categories are extremely imbalanced. To address\nthe issue of protocol differences between distillation and classification, we\npropose a novel distillation method with cross-task consistent protocols,\ntailored for dense object detection. For classification distillation, we\naddress the cross-task protocol inconsistency problem by formulating the\nclassification logit maps in both teacher and student models as multiple\nbinary-classification maps and applying a binary-classification distillation\nloss to each map. For localization distillation, we design an IoU-based\nLocalization Distillation Loss that is free from specific network structures\nand can be compared with existing localization distillation losses. Our\nproposed method is simple but effective, and experimental results demonstrate\nits superiority over existing methods. Code is available at\nhttps://github.com/TinyTigerPan/BCKD.", + "authors": "Longrong Yang, Xianpan Zhou, Xuewei Li, Liang Qiao, Zheyang Li, Ziwei Yang, Gaoang Wang, Xi Li", + "published": "2023-08-28", + "updated": "2024-03-12", + "primary_cat": "cs.CV", + "cats": [ + "cs.CV" + ], + "category": "Distillation" + }, + { + "url": "http://arxiv.org/abs/2302.14643v1", + "title": "Graph-based Knowledge Distillation: A survey and experimental evaluation", + "abstract": "Graphs, such as citation networks, social networks, and transportation\nnetworks, are prevalent in the real world. Graph Neural Networks (GNNs) have\ngained widespread attention for their robust expressiveness and exceptional\nperformance in various graph applications. However, the efficacy of GNNs is\nheavily reliant on sufficient data labels and complex network models, with the\nformer hard to obtain and the latter costly to compute. To address the labeled\ndata scarcity and high complexity of GNNs, Knowledge Distillation (KD) has been\nintroduced to enhance existing GNNs. 
This technique involves transferring the\nsoft-label supervision of the large teacher model to the small student model\nwhile maintaining prediction performance. This survey offers a comprehensive\noverview of Graph-based Knowledge Distillation methods, systematically\ncategorizing and summarizing them while discussing their limitations and future\ndirections. This paper first introduces the background of graph and KD. It then\nprovides a comprehensive summary of three types of Graph-based Knowledge\nDistillation methods, namely Graph-based Knowledge Distillation for deep neural\nnetworks (DKD), Graph-based Knowledge Distillation for GNNs (GKD), and\nSelf-Knowledge Distillation based Graph-based Knowledge Distillation (SKD).\nEach type is further divided into knowledge distillation methods based on the\noutput layer, middle layer, and constructed graph. Subsequently, various\nalgorithms' ideas are analyzed and compared, concluding with the advantages and\ndisadvantages of each algorithm supported by experimental results. In addition,\nthe applications of graph-based knowledge distillation in CV, NLP, RS, and\nother fields are listed. Finally, graph-based knowledge distillation is\nsummarized and its prospects are discussed. We have also released related resources\nat https://github.com/liujing1023/Graph-based-Knowledge-Distillation.", + "authors": "Jing Liu, Tongya Zheng, Guanzheng Zhang, Qinfen Hao", + "published": "2023-02-27", + "updated": "2023-02-27", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG" + ], + "category": "Distillation" + }, + { + "url": "http://arxiv.org/abs/quant-ph/0202165v1", + "title": "Distinguishing locally of quantum states and the distillation of entanglement", + "abstract": "This paper tries to probe the relation between local distinguishability and the\ndistillation of entanglement. The distinguishing information (DI) and the\nmaximal distinguishing information (MDI) of a set of pure states are defined.\nThe interpretation of distillation of entanglement in terms of information is\ngiven. The relation between the maximal distinguishing information and\ndistillable entanglement is obtained. As an application of this relation, the\ndistillable entanglement of Bell-diagonal states is presented.", + "authors": "ping-xing. chen, Cheng-zu Li", + "published": "2002-02-27", + "updated": "2002-02-27", + "primary_cat": "quant-ph", + "cats": [ + "quant-ph" + ], + "category": "Distillation" + }, + { + "url": "http://arxiv.org/abs/2404.09740v1", + "title": "Leveraging Zero-Level Distillation to Generate High-Fidelity Magic States", + "abstract": "Magic state distillation plays an important role in universal fault-tolerant\nquantum computing, and its overhead is one of the major obstacles to realizing\nfault-tolerant quantum computers. Hence, many studies have been conducted to\nreduce this overhead. Among these, Litinski has provided a concrete assessment\nof resource-efficient distillation protocol implementations on the rotated\nsurface code. On the other hand, recently, Itogawa et al. have proposed\nzero-level distillation, a distillation protocol offering very small spatial\nand temporal overhead to generate relatively low-fidelity magic states. While\nzero-level distillation offers preferable spatial and temporal overhead, it\ncannot directly generate high-fidelity magic states since it only reduces the\nlogical error rate of the magic state quadratically. 
In this study, we evaluate\nthe spatial and temporal overhead of two-level distillation implementations\ngenerating relatively high-fidelity magic states, including ones incorporating\nzero-level distillation. To this end, we introduce (0+1)-level distillation, a\ntwo-level distillation protocol which combines zero-level distillation and the\n15-to-1 distillation protocol. We refine the second-level 15-to-1\nimplementation in it to capitalize on the small footprint of zero-level\ndistillation. Under conditions of a physical error probability of\n$p_{\\mathrm{phys}} = 10^{-4}$ ($10^{-3}$) and targeting an error rate for the\nmagic state within $[5 \\times 10^{-17}, 10^{-11}]$ ($[5 \\times 10^{-11},\n10^{-8}]$), (0+1)-level distillation reduces the spatiotemporal overhead by\nmore than 63% (61%) compared to the (15-to-1)$\\times$(15-to-1) protocol and\nmore than 43% (44%) compared to the (15-to-1)$\\times$(20-to-4) protocol,\noffering a substantial efficiency gain over the traditional protocols.", + "authors": "Yutaka Hirano, Tomohiro Itogawa, Keisuke Fujii", + "published": "2024-04-15", + "updated": "2024-04-15", + "primary_cat": "quant-ph", + "cats": [ + "quant-ph" + ], + "category": "Distillation" + }, + { + "url": "http://arxiv.org/abs/2112.05638v2", + "title": "DistilCSE: Effective Knowledge Distillation For Contrastive Sentence Embeddings", + "abstract": "Large-scale contrastive learning models can learn very informative sentence\nembeddings, but are hard to serve online due to the huge model size. Therefore,\nthey often play the role of \"teacher\", transferring abilities to small\n\"student\" models through knowledge distillation. However, knowledge\ndistillation inevitably brings some drop in embedding effect. To tackle that,\nwe propose an effective knowledge distillation framework for contrastive\nsentence embeddings, termed DistilCSE. It first applies knowledge distillation\non a large amount of unlabeled data, and then fine-tunes student models through\ncontrastive learning on limited labeled data. To achieve better distillation\nresults, we further propose Contrastive Knowledge Distillation (CKD). CKD uses\nInfoNCE as the loss function in knowledge distillation, enhancing the objective\nconsistency among teacher model training, knowledge distillation, and student\nmodel fine-tuning. Extensive experiments show that student models trained with\nthe proposed DistilCSE and CKD suffer from little or even no performance\ndecrease and consistently outperform the corresponding counterparts of the same\nparameter size. Impressively, our 110M student model outperforms the latest\nstate-of-the-art model, i.e., Sentence-T5 (11B), with only 1% parameters and\n0.25% unlabeled data.", + "authors": "Chaochen Gao, Xing Wu, Peng Wang, Jue Wang, Liangjun Zang, Zhongyuan Wang, Songlin Hu", + "published": "2021-12-10", + "updated": "2023-01-30", + "primary_cat": "cs.AI", + "cats": [ + "cs.AI" + ], + "category": "Distillation" + }, + { + "url": "http://arxiv.org/abs/1504.05965v2", + "title": "Qutrit Magic State Distillation Tight in Some Directions", + "abstract": "Magic state distillation is a crucial component in the leading approaches to\nimplementing universal fault tolerant quantum computation, with existing\nprotocols for both qubit and higher dimensional systems. Early work focused on\ndetermining the region of distillable states for qubit protocols, yet\ncomparatively little is known about which states can be distilled and with what\ndistillable region for d>2. 
Here we focus on d=3 and present new four-qutrit\ndistillation schemes that improve upon the known distillable region, and\nachieve distillation tight to the boundary of undistillable states for some\nclasses of state. As a consequence of recent results, this implies that there\nis a family of quantum states that enable universality if and only if they\nexhibit contextuality with respect to stabilizer measurements. We also identify\na new routine whose fixed point is a magic state with maximal sum-negativity,\ni.e., it is maximally non-stabilizer in a specific sense.", + "authors": "Hillary Dawkins, Mark Howard", + "published": "2015-04-22", + "updated": "2015-09-21", + "primary_cat": "quant-ph", + "cats": [ + "quant-ph" + ], + "category": "Distillation" + }, + { + "url": "http://arxiv.org/abs/1807.04705v2", + "title": "Non-asymptotic assisted distillation of quantum coherence", + "abstract": "We characterize the operational task of environment-assisted distillation of\nquantum coherence under different sets of free operations when only a finite\nsupply of copies of a given state is available. We first evaluate the one-shot\nassisted distillable coherence exactly, and introduce a semidefinite\nprogramming bound on it in terms of a smooth entropic quantity. We prove the\nbound to be tight for all systems in dimensions 2 and 3, which allows us to\nobtain computable expressions for the one-shot rate of distillation, establish\nan analytical expression for the best achievable fidelity of assisted\ndistillation for any finite number of copies, and fully solve the problem of\nasymptotic zero-error assisted distillation for qubit and qutrit systems. Our\ncharacterization shows that all relevant sets of free operations in the\nresource theory of coherence have exactly the same power in the task of\none-shot assisted coherence distillation, and furthermore resolves a conjecture\nregarding the additivity of coherence of assistance in dimension 3.", + "authors": "Bartosz Regula, Ludovico Lami, Alexander Streltsov", + "published": "2018-07-12", + "updated": "2018-10-16", + "primary_cat": "quant-ph", + "cats": [ + "quant-ph" + ], + "category": "Distillation" + }, + { + "url": "http://arxiv.org/abs/quant-ph/0305188v1", + "title": "Dynamics of Distillability", + "abstract": "The time evolution of a maximally entangled bipartite system is presented in\nthis paper. The distillability criterion is given in terms of Kraus operators.\nUsing the criterion, we discuss the distillability of $2\\times 2$ and $n\\times\nn (n>2)$ systems in their evolution process. There are two distinguished\nprocesses, dissipation and decoherence, which may destroy the distillability.\nWe discuss the effects of those processes on distillability in detail.", + "authors": "W. Wu, W. Wang, X. X. Yi", + "published": "2003-05-30", + "updated": "2003-05-30", + "primary_cat": "quant-ph", + "cats": [ + "quant-ph" + ], + "category": "Distillation" + }, + { + "url": "http://arxiv.org/abs/2401.06370v1", + "title": "Graph Relation Distillation for Efficient Biomedical Instance Segmentation", + "abstract": "Instance-aware embeddings predicted by deep neural networks have\nrevolutionized biomedical instance segmentation, but their resource requirements\nare substantial. Knowledge distillation offers a solution by transferring\ndistilled knowledge from heavy teacher networks to lightweight yet\nhigh-performance student networks. 
However, existing knowledge distillation\nmethods struggle to extract knowledge for distinguishing instances and overlook\nglobal relation information. To address these challenges, we propose a graph\nrelation distillation approach for efficient biomedical instance segmentation,\nwhich considers three essential types of knowledge: instance-level features,\ninstance relations, and pixel-level boundaries. We introduce two graph\ndistillation schemes deployed at both the intra-image level and the inter-image\nlevel: instance graph distillation (IGD) and affinity graph distillation (AGD).\nIGD constructs a graph representing instance features and relations,\ntransferring these two types of knowledge by enforcing instance graph\nconsistency. AGD constructs an affinity graph representing pixel relations to\ncapture structured knowledge of instance boundaries, transferring\nboundary-related knowledge by ensuring pixel affinity consistency. Experimental\nresults on a number of biomedical datasets validate the effectiveness of our\napproach, enabling student models with less than $ 1\\%$ parameters and less\nthan $10\\%$ inference time while achieving promising performance compared to\nteacher models.", + "authors": "Xiaoyu Liu, Yueyi Zhang, Zhiwei Xiong, Wei Huang, Bo Hu, Xiaoyan Sun, Feng Wu", + "published": "2024-01-12", + "updated": "2024-01-12", + "primary_cat": "cs.CV", + "cats": [ + "cs.CV", + "cs.AI" + ], + "category": "Distillation" + }, + { + "url": "http://arxiv.org/abs/2112.10047v1", + "title": "Controlling the Quality of Distillation in Response-Based Network Compression", + "abstract": "The performance of a distillation-based compressed network is governed by the\nquality of distillation. The reason for the suboptimal distillation of a large\nnetwork (teacher) to a smaller network (student) is largely attributed to the\ngap in the learning capacities of given teacher-student pair. While it is hard\nto distill all the knowledge of a teacher, the quality of distillation can be\ncontrolled to a large extent to achieve better performance. Our experiments\nshow that the quality of distillation is largely governed by the quality of\nteacher's response, which in turn is heavily affected by the presence of\nsimilarity information in its response. A well-trained large capacity teacher\nloses similarity information between classes in the process of learning\nfine-grained discriminative properties for classification. The absence of\nsimilarity information causes the distillation process to be reduced from one\nexample-many class learning to one example-one class learning, thereby\nthrottling the flow of diverse knowledge from the teacher. With the implicit\nassumption that only the instilled knowledge can be distilled, instead of\nfocusing only on the knowledge distilling process, we scrutinize the knowledge\ninculcation process. We argue that for a given teacher-student pair, the\nquality of distillation can be improved by finding the sweet spot between batch\nsize and number of epochs while training the teacher. We discuss the steps to\nfind this sweet spot for better distillation. We also propose the distillation\nhypothesis to differentiate the behavior of the distillation process between\nknowledge distillation and regularization effect. 
We conduct all our\nexperiments on three different datasets.", + "authors": "Vibhas Vats, David Crandall", + "published": "2021-12-19", + "updated": "2021-12-19", + "primary_cat": "cs.CV", + "cats": [ + "cs.CV", + "cs.AI" + ], + "category": "Distillation" + }, + { + "url": "http://arxiv.org/abs/1905.09747v2", + "title": "Adversarially Robust Distillation", + "abstract": "Knowledge distillation is effective for producing small, high-performance\nneural networks for classification, but these small networks are vulnerable to\nadversarial attacks. This paper studies how adversarial robustness transfers\nfrom teacher to student during knowledge distillation. We find that a large\namount of robustness may be inherited by the student even when distilled on\nonly clean images. Second, we introduce Adversarially Robust Distillation (ARD)\nfor distilling robustness onto student networks. In addition to producing small\nmodels with high test accuracy like conventional distillation, ARD also passes\nthe superior robustness of large networks onto the student. In our experiments,\nwe find that ARD student models decisively outperform adversarially trained\nnetworks of identical architecture in terms of robust accuracy, surpassing\nstate-of-the-art methods on standard robustness benchmarks. Finally, we adapt\nrecent fast adversarial training methods to ARD for accelerated robust\ndistillation.", + "authors": "Micah Goldblum, Liam Fowl, Soheil Feizi, Tom Goldstein", + "published": "2019-05-23", + "updated": "2019-12-02", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "cs.CV", + "stat.ML" + ], + "category": "Distillation" + }, + { + "url": "http://arxiv.org/abs/2011.14554v1", + "title": "A Selective Survey on Versatile Knowledge Distillation Paradigm for Neural Network Models", + "abstract": "This paper aims to provide a selective survey of the knowledge\ndistillation (KD) framework for researchers and practitioners to take advantage\nof it for developing new optimized models in the deep neural network field. To\nthis end, we give a brief overview of knowledge distillation and some related\nworks including learning using privileged information (LUPI) and generalized\ndistillation (GD). Even though knowledge distillation based on the\nteacher-student architecture was initially devised as a model compression\ntechnique, it has found versatile applications over various frameworks.\n In this paper, we review the characteristics of knowledge distillation from\nthe hypothesis that the three important ingredients of knowledge distillation\nare distilled knowledge and loss, teacher-student paradigm, and the distillation\nprocess. In addition, we survey the versatility of knowledge distillation\nby studying its direct applications and its usage in combination with other\ndeep learning paradigms. 
Finally, we present some future directions in knowledge\ndistillation, including explainable knowledge distillation, where the\nperformance gain is studied analytically, and self-supervised learning,\nwhich is a hot research topic in the deep learning community.", + "authors": "Jeong-Hoe Ku, JiHun Oh, YoungYoon Lee, Gaurav Pooniwala, SangJeong Lee", + "published": "2020-11-30", + "updated": "2020-11-30", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "cs.AI" + ], + "category": "Distillation" + }, + { + "url": "http://arxiv.org/abs/quant-ph/0607126v3", + "title": "Random bipartite entanglement from W and W-like states", + "abstract": "We describe a protocol for distilling maximally entangled bipartite states\nbetween random pairs of parties from those sharing a tripartite W state, and\nshow that, rather surprisingly, the total distillation rate (the total number\nof EPR pairs distilled per W, irrespective of who shares them) may be\nhigher than the rate of distillation of bipartite entanglement between specified pairs\nof parties. Specifically, the optimal distillation rate for specified\nentanglement for the W has been previously shown to be the asymptotic\nentanglement of assistance of 0.92 EPR pairs per W, while our protocol can\nasymptotically distill 1 EPR pair per W between random pairs of parties, which\nwe conjecture to be optimal. We thus demonstrate a tradeoff between the overall\nasymptotic rate of EPR distillation and the distribution of final EPR pairs\nbetween parties. We further show that, by increasing the number of parties in\nthe protocol, there exist states with fixed lower-bounded distillable\nentanglement for random parties but arbitrarily small distillable entanglement\nfor specified parties.", + "authors": "Ben Fortescue, Hoi-Kwong Lo", + "published": "2006-07-18", + "updated": "2007-02-23", + "primary_cat": "quant-ph", + "cats": [ + "quant-ph" + ], + "category": "Distillation" + }, + { + "url": "http://arxiv.org/abs/2011.09969v1", + "title": "Neural network algorithm and its application in reactive distillation", + "abstract": "Reactive distillation is a special distillation technology based on the\ncoupling of chemical reaction and distillation. It has the characteristics of\nlow energy consumption and high separation efficiency. However, because the\ncombination of reaction and separation produces highly nonlinear robust\nbehavior, the control and optimization of the reactive distillation process\ncannot use conventional methods, but must rely on neural network algorithms.\nThis paper briefly describes the characteristics and research progress of\nreactive distillation technology and neural network algorithms, and summarizes\nthe application of neural network algorithms in reactive distillation, aiming\nto provide a reference for the development and innovation of industrial technology.", + "authors": "Huihui Wang, Ruyang Mo", + "published": "2020-11-16", + "updated": "2020-11-16", + "primary_cat": "cs.NE", + "cats": [ + "cs.NE", + "cs.LG", + "I.2.8" + ], + "category": "Distillation" + }, + { + "url": "http://arxiv.org/abs/2004.03097v1", + "title": "Towards Non-task-specific Distillation of BERT via Sentence Representation Approximation", + "abstract": "Recently, BERT has become an essential ingredient of various NLP deep models\ndue to its effectiveness and universal usability. However, the online\ndeployment of BERT is often hindered by its large number of parameters and high\ncomputational cost.
There are plenty of studies showing that knowledge\ndistillation is efficient in transferring the knowledge from BERT into a\nmodel with a smaller number of parameters. Nevertheless, current BERT\ndistillation approaches mainly focus on task-specific distillation; such\nmethodologies lead to the loss of the general semantic knowledge of BERT for\nuniversal usability. In this paper, we propose a sentence representation\napproximating oriented distillation framework that can distill the pre-trained\nBERT into a simple LSTM-based model without specifying tasks. Consistent with\nBERT, our distilled model is able to perform transfer learning via fine-tuning\nto adapt to any sentence-level downstream task. Moreover, our model can further\ncooperate with task-specific distillation procedures. The experimental results\non multiple NLP tasks from the GLUE benchmark show that our approach\noutperforms other task-specific distillation methods or even much larger\nmodels, i.e., ELMO, with well-improved efficiency.", + "authors": "Bowen Wu, Huan Zhang, Mengyuan Li, Zongsheng Wang, Qihang Feng, Junhong Huang, Baoxun Wang", + "published": "2020-04-07", + "updated": "2020-04-07", + "primary_cat": "cs.CL", + "cats": [ + "cs.CL" + ], + "category": "Distillation" + }, + { + "url": "http://arxiv.org/abs/2206.08491v1", + "title": "Revisiting Self-Distillation", + "abstract": "Knowledge distillation is the procedure of transferring \"knowledge\" from a\nlarge model (the teacher) to a more compact one (the student), often being used\nin the context of model compression. When both models have the same\narchitecture, this procedure is called self-distillation. Several works have\nanecdotally shown that a self-distilled student can outperform the teacher on\nheld-out data. In this work, we systematically study self-distillation in a\nnumber of settings. We first show that even with a highly accurate teacher,\nself-distillation allows a student to surpass the teacher in all cases.\nSecondly, we revisit existing theoretical explanations of (self) distillation\nand identify contradicting examples, revealing possible drawbacks of these\nexplanations. Finally, we provide an alternative explanation for the dynamics\nof self-distillation through the lens of loss landscape geometry. We conduct\nextensive experiments to show that self-distillation leads to flatter minima,\nthereby resulting in better generalization.", + "authors": "Minh Pham, Minsu Cho, Ameya Joshi, Chinmay Hegde", + "published": "2022-06-17", + "updated": "2022-06-17", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG" + ], + "category": "Distillation" + }, + { + "url": "http://arxiv.org/abs/2011.02255v2", + "title": "On Self-Distilling Graph Neural Network", + "abstract": "Recently, the teacher-student knowledge distillation framework has\ndemonstrated its potential in training Graph Neural Networks (GNNs). However,\ndue to the difficulty of training over-parameterized GNN models, one may not\neasily obtain a satisfactory teacher model for distillation. Furthermore, the\ninefficient training process of teacher-student knowledge distillation also\nimpedes its applications in GNN models. In this paper, we propose the first\nteacher-free knowledge distillation method for GNNs, termed GNN\nSelf-Distillation (GNN-SD), that serves as a drop-in replacement of the\nstandard training process. The method is built upon the proposed neighborhood\ndiscrepancy rate (NDR), which quantifies the non-smoothness of the embedded\ngraph in an efficient way.
Based on this metric, we propose the adaptive\ndiscrepancy retaining (ADR) regularizer to empower the transferability of\nknowledge that maintains high neighborhood discrepancy across GNN layers. We\nalso summarize a generic GNN-SD framework that could be exploited to induce\nother distillation strategies. Experiments further prove the effectiveness and\ngeneralization of our approach, as it brings: 1) state-of-the-art GNN\ndistillation performance at a lower training cost, and 2) consistent and\nconsiderable performance enhancement for various popular backbones.", + "authors": "Yuzhao Chen, Yatao Bian, Xi Xiao, Yu Rong, Tingyang Xu, Junzhou Huang", + "published": "2020-11-04", + "updated": "2021-04-30", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "stat.ML" + ], + "category": "Distillation" + } +] \ No newline at end of file